CN109993424B - Non-interference type load decomposition method based on width learning algorithm - Google Patents

Non-interference type load decomposition method based on width learning algorithm

Info

Publication number: CN109993424B (application CN201910230729.0A; also published as CN109993424A)
Authority: CN (China)
Legal status: Active (the legal status is an assumption and is not a legal conclusion)
Inventors: 杨秦敏, 尤利华, 董延峰, 陈珺, 张硕明
Current and original assignee: Guangdong Aisheng Internet Of Things Technology Co., Ltd.

Classifications

    • G06Q 10/0639: Performance analysis of employees; performance analysis of enterprise or organisation operations
    • G06Q 50/06: Energy or water supply

Landscapes

  • Business, Economics & Management (AREA)
  • Human Resources & Organizations (AREA)
  • Engineering & Computer Science (AREA)
  • Economics (AREA)
  • Strategic Management (AREA)
  • Tourism & Hospitality (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Educational Administration (AREA)
  • Marketing (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Theoretical Computer Science (AREA)
  • Development Economics (AREA)
  • Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Game Theory and Decision Science (AREA)
  • Public Health (AREA)
  • Water Supply & Treatment (AREA)
  • General Health & Medical Sciences (AREA)
  • Primary Health Care (AREA)
  • Supply And Distribution Of Alternating Current (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

According to the non-interference type load decomposition method based on the width learning algorithm, operation data collected before and after appliance state changes serve as input data. Combining the width learning algorithm with data preprocessing, initial training and incremental learning of a load decomposition model, and initial training and incremental learning of a switch state change identification model, the method obtains the power-load operating condition of a multi-appliance scene in a non-interference way while fully considering the multiple factors of the electrical-equipment information.

Description

Non-interference type load decomposition method based on width learning algorithm
Technical Field
The invention relates to the technical field of load decomposition, in particular to a non-interference type load decomposition method based on a width learning algorithm.
Background
Power load decomposition generally requires either interference type or non-interference type equipment. Traditional load decomposition methods need dedicated sensors to monitor and collect the operating state and related power-consumption information of electrical equipment, so the cost is high, large-scale deployment is impractical, and the deployment period is long with slow returns. Meanwhile, a large number of smart-meter monitoring devices have emerged on the market; these can effectively monitor and collect electrical parameters of electrical equipment such as voltage (U), current (I), active power (P), reactive power (Q), power factor (PF), frequency (f) and active energy (kWh). In addition, traditional appliance load decomposition methods generally use power as the only electrical factor, so the input data are one-dimensional and decomposition accuracy is low. In view of these drawbacks, improvements are proposed here.
Disclosure of Invention
The non-interference type load decomposition method based on the width learning algorithm requires no dedicated sensor device; it fully considers the multiple factors of electrical-equipment information and effectively realizes load decomposition of electrical equipment by applying big-data and artificial-intelligence techniques, thereby supporting more accurate electricity-use decisions.
The invention provides a non-interference type load decomposition method based on a width learning algorithm, which monitors and analyzes the power operating state and related power-consumption information of electric appliances to realize appliance load decomposition, and comprises the following steps:
step 1: collecting data, namely selecting a plurality of time periods without change of the switching state of the electric appliance, collecting operation data of the electric appliance at the adoption frequency of K1, wherein the operation data comprises voltage, current, instantaneous active power and instantaneous reactive power, and splicing the operation data to form an input vector
Figure GDA0004158220910000011
And the operation data are spliced to form an input vector after being subjected to Fourier transformation>
Figure GDA0004158220910000012
-adding said input vector->
Figure GDA0004158220910000013
Splicing into input vector->
Figure GDA0004158220910000014
Meanwhile, collecting the switch state data of the electric appliance, wherein the switch state data is recorded as 1 when the switch state is opened, the switch state data is recorded as 0 when the switch state is closed, and the switch state data are spliced to form a label vector +.>
Figure GDA0004158220910000021
Wherein i=1, 2, …, I is denoted as data number, I is the total amount of data; second-class data acquisition, namely acquiring operation data of the electric appliance in T periods after the state of the switch of the electric appliance is changed at the sampling frequency of K2, and splicing the operation data to form an input vector +.>
Figure GDA0004158220910000022
Wherein, note j=1, 2, …, J is the data number, J is the total amount of data, and at the same time, construct the tag vector +.>
Figure GDA0004158220910000023
Collecting operation data of the electric appliance in T periods without change of the switching state of the electric appliance at the sampling frequency of K2, and splicing the operation data to form an input vector +.>
Figure GDA0004158220910000024
At the same time, construct tag vector +.>
Figure GDA0004158220910000025
All 0 vectors for the j dimension;
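The splicing in step 1 can be sketched in code. This is a minimal illustration: the waveform lengths, the stand-in 50 Hz signals, and the helper name `build_input_vector` are assumptions for demonstration, not specified by the patent.

```python
import numpy as np

def build_input_vector(u, i, p, q):
    """Splice time-domain samples of voltage, current, active and reactive
    power into X1, their Fourier magnitudes into X2, and concatenate them
    into the combined input vector X = [X1, X2]."""
    x1 = np.concatenate([u, i, p, q])  # time-domain part X1
    # frequency-domain part X2: magnitude spectrum of each channel
    x2 = np.concatenate([np.abs(np.fft.rfft(s)) for s in (u, i, p, q)])
    return np.concatenate([x1, x2])

# one period of fake 50 Hz waveforms, 20 samples per period (illustrative)
t = np.arange(20) / 1000.0
u = 311 * np.sin(2 * np.pi * 50 * t)
i_ = 5 * np.sin(2 * np.pi * 50 * t - 0.3)
p, q = u * i_, u * np.roll(i_, 5)
x = build_input_vector(u, i_, p, q)
print(x.shape)  # 4*20 time samples + 4*11 rFFT bins = (124,)
```

The same splicing is reused in step 10 to form X_1, X_2 and X_cycle at inference time.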
step 2: data preprocessing, namely preprocessing data of a class, and inputting the vector
Figure GDA0004158220910000026
Figure GDA0004158220910000027
Normalized concatenation is input matrix->
Figure GDA0004158220910000028
Other input vectors->
Figure GDA00041582209100000251
Normalized and spliced into a test input matrix>
Figure GDA0004158220910000029
The tag vector +.>
Figure GDA00041582209100000210
Spliced into a label matrix->
Figure GDA00041582209100000211
Other tag vector +.>
Figure GDA00041582209100000212
Splicing to form a test tag matrix>
Figure GDA00041582209100000213
Second class data preprocessing, namely preprocessing the input vector
Figure GDA00041582209100000214
Normalized concatenation is input matrix->
Figure GDA00041582209100000215
Other input vectors->
Figure GDA00041582209100000216
Normalized splice to test matrix->
Figure GDA00041582209100000217
The tag vector +.>
Figure GDA00041582209100000218
Figure GDA00041582209100000219
Spliced into a label vector
Figure GDA00041582209100000220
Other tag vector +.>
Figure GDA00041582209100000221
Splicing into test tag vector->
Figure GDA00041582209100000222
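The normalization and train/test splicing of step 2 can be sketched as follows. Min-max normalization is one common choice; the patent does not fix the normalization scheme, and all names below are illustrative.

```python
import numpy as np

def normalize_rows(vectors):
    """Min-max normalize each feature across the set of spliced input
    vectors, mapping every column into [0, 1]."""
    v = np.asarray(vectors, dtype=float)
    lo, hi = v.min(axis=0), v.max(axis=0)
    return (v - lo) / np.where(hi > lo, hi - lo, 1.0)  # guard constant columns

rng = np.random.default_rng(0)
vectors = rng.normal(size=(10, 6))  # ten spliced input vectors X_i
norm = normalize_rows(vectors)
I_t = 8                             # train/test split point, I_t < I
X, X_test = norm[:I_t], norm[I_t:]  # input matrix and test input matrix
print(X.shape, X_test.shape)        # (8, 6) (2, 6)
```

The label vectors are split at the same index so rows of X and Y stay aligned.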
Step 3: initial training of a load decomposition model based on the input matrix
Figure GDA00041582209100000223
Utilize first random initialization matrix +.>
Figure GDA00041582209100000224
First activation function->
Figure GDA00041582209100000225
First bias vector->
Figure GDA00041582209100000226
Constructing a mapping characteristic node matrix->
Figure GDA00041582209100000227
Node matrix based on the mapping feature>
Figure GDA00041582209100000228
Initializing matrix with second random>
Figure GDA00041582209100000229
Second activation function->
Figure GDA00041582209100000230
Second bias vector->
Figure GDA00041582209100000231
Constructing an enhanced node matrix->
Figure GDA00041582209100000232
Node matrix +_using the mapping feature>
Figure GDA00041582209100000233
Enhanced node matrix->
Figure GDA00041582209100000234
Construction of the first augmentation matrix->
Figure GDA00041582209100000252
By the first augmentation matrix +.>
Figure GDA00041582209100000235
The tag matrix->
Figure GDA00041582209100000236
Obtaining a first weight matrix->
Figure GDA00041582209100000237
Step 4: using a test input matrix
Figure GDA00041582209100000238
Testing, outputting a load decomposition model if the first training error is met, and entering a step 6; if the training error does not meet the first training error, the step 5 is entered;
step 5: incremental learning of a load decomposition model based on the mapping feature node matrix
Figure GDA00041582209100000239
Initializing matrix with third random->
Figure GDA00041582209100000240
Second activation function->
Figure GDA00041582209100000241
Third bias vector->
Figure GDA00041582209100000242
Constructing an incremental enhanced node matrix
Figure GDA00041582209100000243
Enhancing the node matrix by the increment>
Figure GDA00041582209100000244
First augmentation matrix ++>
Figure GDA00041582209100000245
Constructing a second augmentation matrix
Figure GDA00041582209100000246
By a second augmentation matrix->
Figure GDA00041582209100000247
Label matrix->
Figure GDA00041582209100000248
Obtaining a second weight matrix->
Figure GDA00041582209100000249
Assigning M+1 to M and m+1 to M, and returning to the step 4;
step 6: initial training of a switch state change recognition model based on the input matrix
Figure GDA00041582209100000250
Utilize fourth random initialization matrix +.>
Figure GDA0004158220910000031
Third activation function->
Figure GDA0004158220910000032
Fourth bias vector->
Figure GDA0004158220910000033
Constructing a mapping characteristic node matrix->
Figure GDA0004158220910000034
Node matrix based on the mapping feature>
Figure GDA0004158220910000035
Initializing matrix with fifth random->
Figure GDA0004158220910000036
Fourth activation function->
Figure GDA0004158220910000037
Fifth bias vector->
Figure GDA0004158220910000038
Constructing an enhanced node matrix->
Figure GDA0004158220910000039
Node matrix +_using the mapping feature>
Figure GDA00041582209100000310
Enhanced node matrix->
Figure GDA00041582209100000311
Construction of a third augmentation matrix->
Figure GDA00041582209100000312
By the third moment of augmentation +>
Figure GDA00041582209100000313
Matrix and said tag matrix>
Figure GDA00041582209100000314
Obtaining a third weight matrix->
Figure GDA00041582209100000315
Step 7: using a test input matrix
Figure GDA00041582209100000316
Testing, outputting a switch state change identification model if the second training error is met, and entering a step 9; if the training error does not meet the second training error, the step 8 is entered;
step 8: incremental learning of a switch state change recognition model based on the mapping characteristic node matrix
Figure GDA00041582209100000317
Initializing matrix with sixth random->
Figure GDA00041582209100000318
Fourth activation function->
Figure GDA00041582209100000319
Sixth offset vector->
Figure GDA00041582209100000320
Constructing an incremental enhancement node matrix->
Figure GDA00041582209100000321
Enhancing the node matrix by the increment>
Figure GDA00041582209100000322
Third augmentation matrix->
Figure GDA00041582209100000323
Construction of a fourth augmentation matrix->
Figure GDA00041582209100000324
Through the fourth augmentation matrix +.>
Figure GDA00041582209100000325
Label matrix->
Figure GDA00041582209100000326
Obtaining a fourth weight matrix->
Figure GDA00041582209100000327
Assigning M+1 to M and m+1 to M, and returning to the step 7;
step 9: switch state change identification, continuously collecting electrical appliance operation data of K periods at the sampling frequency of K2, and splicing and normalizing to form an input vector X switch X is taken as switch Inputting the switch state change identification model, identifying whether the switch state of the electrical appliance is changed, if so, delaying for T2 periods, then entering step 10, and if not, executing step 10 at fixed time intervals; and
step 10: electric appliance load decomposition, collecting electric appliance operation information of a single period at a sampling frequency of K1, and splicing and normalizing to form an input vector X 1 And for theThe operation information is subjected to Fourier transformation, spliced and normalized to form an input vector X 2 X is taken as 1 、X 2 Splicing to form input vector X cycle The input vector X cycle Inputting the load decomposition model to obtain an electric appliance load decomposition result Y cycle
Preferably, the mapping feature node matrix Z^n is constructed based on the following formulas: writing W_e = [W_e1, W_e2, …, W_en] and β_e = [β_e1, β_e2, …, β_en], the k-th group of mapping feature nodes is
Z_k = φ(X·W_ek + β_ek), k = 1, 2, …, n,
and the mapping feature node matrix is Z^n = [Z_1, Z_2, …, Z_n].
Preferably, the enhancement node matrix H^m is constructed based on the following formulas: writing W_h = [W_h1, W_h2, …, W_hm] and β_h = [β_h1, β_h2, …, β_hm], the l-th group of enhancement nodes is
H_l = ξ(Z^n·W_hl + β_hl), l = 1, 2, …, m,
and the enhancement node matrix is H^m = [H_1, H_2, …, H_m].
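The two node-construction formulas above can be sketched together. The patent leaves the activation functions generic, so tanh is an assumed choice here, and the group counts and widths are illustrative.

```python
import numpy as np

def map_and_enhance(X, n, m, k_feat=5, k_enh=7, seed=0):
    """Build n groups of mapping feature nodes Z_k = phi(X W_ek + b_ek)
    and m groups of enhancement nodes H_l = xi(Z^n W_hl + b_hl), using
    random weights/biases and tanh for both activations (assumed)."""
    rng = np.random.default_rng(seed)
    Zs = []
    for _ in range(n):
        W, b = rng.normal(size=(X.shape[1], k_feat)), rng.normal(size=k_feat)
        Zs.append(np.tanh(X @ W + b))
    Zn = np.hstack(Zs)                      # Z^n = [Z_1, ..., Z_n]
    Hs = []
    for _ in range(m):
        W, b = rng.normal(size=(Zn.shape[1], k_enh)), rng.normal(size=k_enh)
        Hs.append(np.tanh(Zn @ W + b))
    return Zn, np.hstack(Hs)                # H^m = [H_1, ..., H_m]

X = np.random.default_rng(1).normal(size=(12, 4))
Zn, Hm = map_and_enhance(X, n=3, m=2)
print(Zn.shape, Hm.shape)  # (12, 15) (12, 14)
```

The horizontal concatenation [Zn | Hm] then yields the augmentation matrix used for the pseudo-inverse solution.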
Preferably, step 3 obtains the first weight matrix W^m based on the following formulas: construct the first augmentation matrix A^m = [Z^n | H^m], solve its pseudo-inverse
(A^m)⁺ = lim_{λ→0} (λI + (A^m)ᵀ·A^m)⁻¹·(A^m)ᵀ,
and obtain the first weight matrix W^m = (A^m)⁺·Y.
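In practice the limit in the pseudo-inverse formula above is approximated by fixing a small ridge parameter λ. A sketch, with all matrix sizes illustrative:

```python
import numpy as np

def ridge_pinv(A, lam=1e-8):
    """Approximate A+ = lim_{lam->0} (lam*I + A^T A)^-1 A^T by fixing a
    small ridge parameter lam instead of taking the limit."""
    return np.linalg.solve(lam * np.eye(A.shape[1]) + A.T @ A, A.T)

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 6))   # stand-in first augmentation matrix [Z^n | H^m]
Y = rng.normal(size=(20, 3))   # stand-in label matrix
W = ridge_pinv(A) @ Y          # first weight matrix W^m = A+ . Y
print(np.allclose(W, np.linalg.pinv(A) @ Y, atol=1e-5))  # True
```

A larger λ trades fidelity to the pseudo-inverse for regularization, which can help when A is ill-conditioned.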
Preferably, step 5 constructs the incremental enhancement node matrix based on the following formula:
H_{m+1} = ξ(Z^n·W_{h,m+1} + β_{h,m+1}).
Preferably, step 5 finds the second weight matrix W^{m+1} based on the following formulas: construct the second augmentation matrix A^{m+1} = [A^m | H_{m+1}] and let
D = (A^m)⁺·H_{m+1},  C = H_{m+1} - A^m·D,
Bᵀ = C⁺ if C ≠ 0;  Bᵀ = (I + DᵀD)⁻¹·Dᵀ·(A^m)⁺ if C = 0.
The pseudo-inverse of the second augmentation matrix is then
(A^{m+1})⁺ = [(A^m)⁺ - D·Bᵀ; Bᵀ],
and the second weight matrix is solved as W^{m+1} = [W^m - D·Bᵀ·Y; Bᵀ·Y].
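The incremental update above avoids recomputing the full pseudo-inverse when enhancement nodes are appended. A sketch of the block update, with sizes and data purely illustrative:

```python
import numpy as np

def incremental_update(A, A_pinv, W, H_new, Y):
    """Width-learning incremental step: append enhancement nodes H_new to
    the augmentation matrix A and update its pseudo-inverse and the
    weight matrix without refactoring the enlarged matrix."""
    D = A_pinv @ H_new
    C = H_new - A @ D
    if np.linalg.norm(C) > 1e-10:          # C != 0 branch
        B = np.linalg.pinv(C)
    else:                                  # C == 0 branch
        B = np.linalg.solve(np.eye(D.shape[1]) + D.T @ D, D.T) @ A_pinv
    A_new = np.hstack([A, H_new])
    A_new_pinv = np.vstack([A_pinv - D @ B, B])
    W_new = np.vstack([W - D @ B @ Y, B @ Y])
    return A_new, A_new_pinv, W_new

rng = np.random.default_rng(0)
A = rng.normal(size=(15, 4)); Y = rng.normal(size=(15, 2))
A_pinv = np.linalg.pinv(A); W = A_pinv @ Y
H = rng.normal(size=(15, 3))               # new enhancement node block
A2, A2_pinv, W2 = incremental_update(A, A_pinv, W, H, Y)
print(np.allclose(A2_pinv, np.linalg.pinv(A2), atol=1e-6))  # True
```

The same update serves step 8, with the third augmentation matrix and the label vector Y^s in place of A^m and Y.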
Preferably, step 6 obtains the third weight matrix W_s^m based on the following formulas: construct the third augmentation matrix A_s^m = [Z_s^n | H_s^m], solve its pseudo-inverse
(A_s^m)⁺ = lim_{λ→0} (λI + (A_s^m)ᵀ·A_s^m)⁻¹·(A_s^m)ᵀ,
and obtain the third weight matrix W_s^m = (A_s^m)⁺·Y^s.
Preferably, step 8 constructs the incremental enhancement node matrix based on the following formula:
H_{s,m+1} = ξ^s(Z_s^n·W_{h,m+1}^s + β_{h,m+1}^s).
Preferably, step 8 obtains the fourth weight matrix W_s^{m+1} based on the following formulas: construct the fourth augmentation matrix A_s^{m+1} = [A_s^m | H_{s,m+1}] and let
D = (A_s^m)⁺·H_{s,m+1},  C = H_{s,m+1} - A_s^m·D,
Bᵀ = C⁺ if C ≠ 0;  Bᵀ = (I + DᵀD)⁻¹·Dᵀ·(A_s^m)⁺ if C = 0.
Solve the pseudo-inverse of the fourth augmentation matrix as
(A_s^{m+1})⁺ = [(A_s^m)⁺ - D·Bᵀ; Bᵀ],
and then solve the fourth weight matrix W_s^{m+1} = [W_s^m - D·Bᵀ·Y^s; Bᵀ·Y^s].
Preferably, the sampling frequency K1 may be selected in the range of 1 kHz to 10 kHz, and the sampling frequency K2 may likewise be selected in the range of 1 kHz to 10 kHz.
According to the non-interference type load decomposition method based on the width learning algorithm, no dedicated sensor device is needed to realize load decomposition: most power-information acquisition devices already on the market (such as smart electricity meters) can be reused. Based on the width learning algorithm, the method fully considers the multiple factors of electrical-equipment information and effectively realizes load decomposition of electrical equipment with big-data and artificial-intelligence techniques, which greatly reduces application cost, identifies the power-load operating condition of multi-appliance scenes more accurately, supports more accurate electricity-use decisions, and effectively improves economic benefit.
Drawings
FIG. 1 is a flow chart of a non-interference type load decomposition method based on a width learning algorithm according to a first embodiment of the present invention;
FIG. 2 is a schematic diagram of a width learning load decomposition model of the present invention;
fig. 3 is a voltage and current diagram before and after the state change of the electric heater switch according to the first embodiment of the present invention;
fig. 4 is a graph showing the result of the load decomposition of the electric appliance according to the first embodiment of the present invention.
Detailed Description
The non-interference type load decomposition method based on the width learning algorithm provided by the invention is further described below with reference to the accompanying drawings; it should be noted that only one preferred technical scheme is used to describe the technical scheme and design principle of the invention in detail.
The first embodiment of the invention provides a non-interference type load decomposition method based on a width learning algorithm, which is used for monitoring and analyzing the power running state and related power consumption information of an electric appliance so as to realize the load decomposition of the electric appliance.
Referring to fig. 1 and 2, a non-interference type load decomposition method based on a width learning algorithm according to a first embodiment of the present invention includes the following steps:
step 1: collecting data, namely selecting a plurality of time periods without change of the switching state of the electric appliance, collecting operation data of the electric appliance at the adoption frequency of 10kHz, wherein the operation data comprises voltage, current, instantaneous active power and instantaneous reactive power, and splicing the operation data to form an input vector
Figure GDA0004158220910000061
And the operation data are spliced to form an input vector after being subjected to Fourier transformation>
Figure GDA0004158220910000062
-adding said input vector->
Figure GDA0004158220910000063
Splicing into input vector->
Figure GDA0004158220910000064
Meanwhile, collecting the switch state data of the electric appliance, wherein the switch state data is recorded as 1 when the switch state is opened, the switch state data is recorded as 0 when the switch state is closed, and the switch state data are spliced to form a label vector +.>
Figure GDA0004158220910000065
Wherein i=1, 2, …, INumbering data, I is the total amount of data; second-class data acquisition, namely acquiring operation data of the electric appliance in T periods after the state of the switch of the electric appliance is changed at a sampling frequency of 1kHz, and splicing the operation data to form an input vector +.>
Figure GDA0004158220910000066
Wherein, note j=1, 2, …, J is the data number, J is the total amount of data, and at the same time, construct the tag vector +.>
Figure GDA0004158220910000067
Collecting operation data of the electric appliance in T periods without change of the switching state of the electric appliance at the sampling frequency of K2, and splicing the operation data to form an input vector +.>
Figure GDA0004158220910000068
At the same time, construct tag vector +.>
Figure GDA0004158220910000069
All 0 vectors for the j dimension;
step 2: data preprocessing, namely preprocessing data of a class, and inputting the vector
Figure GDA00041582209100000610
Figure GDA00041582209100000611
Normalized concatenation is input matrix->
Figure GDA00041582209100000612
Other input vectors->
Figure GDA00041582209100000639
Normalized and spliced into a test input matrix>
Figure GDA00041582209100000613
The tag vector +.>
Figure GDA00041582209100000614
Spliced into a label matrix->
Figure GDA00041582209100000615
Other tag vector +.>
Figure GDA00041582209100000616
Splicing to form a test tag matrix>
Figure GDA00041582209100000617
Second class data preprocessing, namely preprocessing the input vector
Figure GDA00041582209100000618
Normalized concatenation is input matrix->
Figure GDA00041582209100000619
Other input vectors->
Figure GDA00041582209100000620
Normalized splice to test matrix->
Figure GDA00041582209100000621
The tag vector +.>
Figure GDA00041582209100000622
(j=1,2,…,J t ,J t <J) Spliced into a label vector
Figure GDA00041582209100000623
Other tag vector +.>
Figure GDA00041582209100000624
Splicing into test tag vector->
Figure GDA00041582209100000625
Step 3: initial training of a load decomposition model based on the input matrix
Figure GDA00041582209100000626
Utilize first random initialization matrix +.>
Figure GDA00041582209100000627
First activation function->
Figure GDA00041582209100000628
First bias vector->
Figure GDA00041582209100000629
Constructing a mapping characteristic node matrix->
Figure GDA00041582209100000630
Node matrix based on the mapping feature>
Figure GDA00041582209100000631
Initializing matrix with second random>
Figure GDA00041582209100000632
Second activation function->
Figure GDA00041582209100000633
Second bias vector->
Figure GDA00041582209100000634
Constructing an enhanced node matrix->
Figure GDA00041582209100000635
Node matrix +_using the mapping feature>
Figure GDA00041582209100000636
Enhanced node matrix->
Figure GDA00041582209100000637
Construction of the first augmentation matrix->
Figure GDA00041582209100000638
By the first incrementBroad matrix->
Figure GDA0004158220910000071
The tag matrix->
Figure GDA0004158220910000072
Obtaining a first weight matrix->
Figure GDA0004158220910000073
Specifically, the first weight matrix is obtained in the step 3 based on the following formula
Figure GDA0004158220910000074
Firstly, the mapping node matrix is built>
Figure GDA0004158220910000075
Record->
Figure GDA0004158220910000076
Then->
Figure GDA0004158220910000077
Figure GDA0004158220910000078
Secondly, constructing the enhanced node matrix +.>
Figure GDA0004158220910000079
Record->
Figure GDA00041582209100000710
Then
Figure GDA00041582209100000711
Then, a first augmentation matrix is constructed
Figure GDA00041582209100000712
Solving for the first augmentation matrix>
Figure GDA00041582209100000713
Pseudo-inverse of->
Figure GDA00041582209100000714
Figure GDA00041582209100000715
Finally, a first weight matrix is obtained>
Figure GDA00041582209100000716
Figure GDA00041582209100000717
Step 4: using a test input matrix
Figure GDA00041582209100000718
Testing, outputting a load decomposition model if the first training error is met, and entering a step 6; if the training error does not meet the first training error, the step 5 is entered;
step 5: incremental learning of a load decomposition model based on the mapping feature node matrix
Figure GDA00041582209100000719
Initializing matrix with third random->
Figure GDA00041582209100000720
Second activation function->
Figure GDA00041582209100000721
Third bias vector->
Figure GDA00041582209100000722
Constructing an incremental enhanced node matrix
Figure GDA00041582209100000723
Enhancing the node matrix by the increment>
Figure GDA00041582209100000724
First of allAn augmentation matrix->
Figure GDA00041582209100000725
Constructing a second augmentation matrix
Figure GDA00041582209100000726
By a second augmentation matrix->
Figure GDA00041582209100000727
Label matrix->
Figure GDA00041582209100000728
Obtaining a second weight matrix->
Figure GDA00041582209100000729
Assigning M+1 to M and m+1 to M, and returning to the step 4;
specifically, step 5 obtains the second weight matrix based on the following formula
Figure GDA00041582209100000730
First, construct incremental enhancement node matrix ++>
Figure GDA00041582209100000731
Then, a second augmentation matrix is constructed>
Figure GDA00041582209100000732
Order the
Figure GDA00041582209100000733
Figure GDA00041582209100000734
Figure GDA00041582209100000735
Solving for the second augmentationMatrix array
Figure GDA00041582209100000736
Pseudo-inverse of->
Figure GDA00041582209100000737
Finally, a second weight matrix is determined>
Figure GDA00041582209100000738
Step 6: initial training of a switch state change recognition model based on the input matrix
Figure GDA00041582209100000739
Utilize fourth random initialization matrix +.>
Figure GDA00041582209100000740
Third activation function->
Figure GDA00041582209100000741
Fourth bias vector->
Figure GDA00041582209100000742
Constructing a mapping feature node matrix
Figure GDA0004158220910000081
Node matrix based on the mapping feature>
Figure GDA0004158220910000082
Initializing matrix with fifth random->
Figure GDA0004158220910000083
Fourth activation function->
Figure GDA0004158220910000084
Fifth bias vector->
Figure GDA0004158220910000085
Constructing an enhanced node matrix->
Figure GDA0004158220910000086
Utilizing the mapping feature node matrix
Figure GDA0004158220910000087
Enhanced node matrix->
Figure GDA0004158220910000088
Construction of a third augmentation matrix->
Figure GDA0004158220910000089
By the third moment of augmentation +>
Figure GDA00041582209100000810
Matrix and said tag matrix>
Figure GDA00041582209100000811
Obtaining a third weight matrix->
Figure GDA00041582209100000812
Specifically, step 6 obtains the third weight matrix based on the following formula
Figure GDA00041582209100000813
First, construct the mapping node matrix +.>
Figure GDA00041582209100000814
Recording device
Figure GDA00041582209100000815
Then->
Figure GDA00041582209100000816
Figure GDA00041582209100000817
Secondly, constructing the enhanced node matrix +.>
Figure GDA00041582209100000818
Recording device
Figure GDA00041582209100000819
Then->
Figure GDA00041582209100000820
Figure GDA00041582209100000821
Then, a third augmentation matrix is constructed>
Figure GDA00041582209100000822
Solving for a third augmentation matrix>
Figure GDA00041582209100000823
Pseudo-inverse of->
Figure GDA00041582209100000824
Finally, a third weight matrix is determined>
Figure GDA00041582209100000825
Step 7: using a test input matrix
Figure GDA00041582209100000826
Testing, outputting a switch state change identification model if the second training error is met, and entering a step 9; if the training error does not meet the second training error, the step 8 is entered;
step 8: incremental learning of a switch state change recognition model based on the mapping characteristic node matrix
Figure GDA00041582209100000827
Initializing matrix with sixth random->
Figure GDA00041582209100000828
Fourth activation function->
Figure GDA00041582209100000829
Sixth offset vector->
Figure GDA00041582209100000830
Constructing an incremental enhancement node matrix->
Figure GDA00041582209100000831
Enhancing the node matrix by the increment>
Figure GDA00041582209100000832
Third augmentation matrix->
Figure GDA00041582209100000833
Construction of a fourth augmentation matrix->
Figure GDA00041582209100000834
Through the fourth augmentation matrix +.>
Figure GDA00041582209100000835
Label matrix->
Figure GDA00041582209100000836
Obtaining a fourth weight matrix->
Figure GDA00041582209100000837
Assigning M+1 to M and m+1 to M, and returning to the step 7;
specifically, step 8 obtains the fourth weight matrix based on the following formula
Figure GDA00041582209100000838
First, construct incremental enhancement node matrix ++>
Figure GDA00041582209100000839
Then, a fourth augmentation matrix is constructed
Figure GDA00041582209100000840
Order the
Figure GDA00041582209100000841
Figure GDA00041582209100000842
Figure GDA00041582209100000843
Solving for a fourth augmentation matrix
Figure GDA0004158220910000091
Pseudo-inverse of->
Figure GDA0004158220910000092
Finally, a fourth weight matrix is determined>
Figure GDA0004158220910000093
Step 9: switch state change recognition, as shown in fig. 3, continuously collecting electrical appliance operation data of K periods at a sampling frequency of 1kHz, splicing and normalizing to form an input vector X switch X is taken as switch Inputting the switch state change identification model, identifying whether the switch state of the electrical appliance is changed, if so, delaying for 25 periods, then entering the step 10, and if not, executing the step 10 at fixed time intervals;
step 10: the electric appliance load is decomposed, as shown in figure 4, the electric appliance operation information of a single period is collected at the sampling frequency of 10kHz, and the electric appliance operation information is spliced and normalized to form an input vector X 1 And fourier transforming the operation information, splicing and normalizing to form an input vector X 2 X is taken as 1 、X 2 Splicing to form input vector X cycle The input vector X cycle Inputting the load decomposition model to obtain an electric appliance load decomposition result Y cycle
Specifically, referring to fig. 3 and fig. 4, in the present invention the electrical appliance whose switch state change is identified is an electric heater, and the appliances whose loads are decomposed are an electric heater, a television set and a notebook computer.
According to the non-interference type load decomposition method based on the width learning algorithm, operation data collected before and after the electrical equipment runs are used as input data. Combined with the width learning algorithm, and through data preprocessing, initial training of the load decomposition model, incremental learning of the load decomposition model, initial training of the switch state change identification model and incremental learning of the switch state change identification model, the method fully considers the multiple factors contained in the electrical equipment information and obtains the power load operation condition of a multi-appliance scene in a non-interference manner.
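The initial training stage described above can be summarized in a minimal width-learning (broad learning) sketch, assuming tanh activations and a ridge-regularized pseudo-inverse; the node counts, seed and names are illustrative, not from the patent:

```python
import numpy as np

def bls_train(X, Y, n_map=20, n_enh=40, lam=1e-3, seed=0):
    """Initial broad-learning training: random mapped-feature nodes,
    random enhancement nodes, then a ridge pseudo-inverse readout."""
    rng = np.random.default_rng(seed)
    We = rng.standard_normal((X.shape[1], n_map))   # random init for mapped features
    be = rng.standard_normal(n_map)                 # bias vector for mapped features
    Wh = rng.standard_normal((n_map, n_enh))        # random init for enhancement nodes
    bh = rng.standard_normal(n_enh)                 # bias vector for enhancement nodes
    Z = np.tanh(X @ We + be)                        # mapped-feature node matrix
    H = np.tanh(Z @ Wh + bh)                        # enhancement node matrix
    A = np.hstack([Z, H])                           # augmentation matrix A1 = [Z | H]
    # W1 = (lam*I + A^T A)^-1 A^T Y : ridge-regularized pseudo-inverse solution
    W = np.linalg.solve(lam * np.eye(A.shape[1]) + A.T @ A, A.T @ Y)
    return We, be, Wh, bh, W

def bls_predict(params, X):
    We, be, Wh, bh, W = params
    Z = np.tanh(X @ We + be)
    H = np.tanh(Z @ Wh + bh)
    return np.hstack([Z, H]) @ W
```

Because only the readout W is fit, training reduces to one linear solve, which is what makes the incremental-learning steps of the method cheap.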
The foregoing is merely a preferred embodiment of the present invention. It should be noted that this preferred embodiment should not be construed as limiting the invention, whose scope is defined by the appended claims. It will be apparent to those skilled in the art that various modifications and adaptations can be made without departing from the spirit and scope of the invention, and such modifications and adaptations are intended to fall within the scope of the invention.

Claims (2)

1. A non-interference type load decomposition method based on a width learning algorithm, which is used for monitoring and analyzing the power running state and related power consumption information of an electric appliance to realize the load decomposition of the electric appliance, and is characterized by comprising the following steps:
step 1: data collection; first-class data collection: selecting a plurality of time periods in which the switch state of the electrical appliance does not change, collecting operation data of the electrical appliance, comprising voltage, current, instantaneous active power and instantaneous reactive power, at a sampling frequency K1, splicing the operation data to form an input vector X1_i, and splicing the Fourier-transformed operation data to form an input vector X2_i; splicing the input vectors X1_i and X2_i into an input vector X_i; meanwhile, collecting switch state data of the electrical appliance, a switch state being recorded as 1 when on and as 0 when off, and splicing the switch state data to form a label vector Y_i, wherein i = 1, 2, …, I is the data number and I is the total amount of data; second-class data collection: collecting operation data of the electrical appliance over T periods after the switch state of the electrical appliance changes at a sampling frequency K2, and splicing the operation data to form an input vector X_j, wherein j = 1, 2, …, J is the data number and J is the total amount of data, and at the same time constructing a label vector Y_j; collecting operation data of the electrical appliance over T periods without change of the switch state at the sampling frequency K2, and splicing the operation data to form an input vector X'_j, at the same time constructing a label vector Y'_j as an all-zero vector;
step 2: data preprocessing; first-class data preprocessing: normalizing and splicing the input vectors X_i (i = 1, 2, …, I_t, I_t < I) into an input matrix X_load, and normalizing and splicing the remaining input vectors into a test input matrix X_load_test; splicing the label vectors Y_i (i = 1, 2, …, I_t, I_t < I) into a label matrix Y_load, and splicing the remaining label vectors into a test label matrix Y_load_test; second-class data preprocessing: normalizing and splicing the input vectors X_j into an input matrix X_sw, and normalizing and splicing the remaining input vectors into a test matrix X_sw_test; splicing the label vectors Y_j into a label vector Y_sw, and splicing the remaining label vectors into a test label vector Y_sw_test;
Step 3: initial training of a load decomposition model based on the input matrix
Figure QLYQS_30
Initializing a matrix with a first random
Figure QLYQS_34
First activation function->
Figure QLYQS_49
First bias vector->
Figure QLYQS_28
Constructing a mapping characteristic node matrix->
Figure QLYQS_38
Node matrix based on the mapping feature>
Figure QLYQS_33
Initializing matrix with second random>
Figure QLYQS_35
Second activation function->
Figure QLYQS_45
Second bias vector->
Figure QLYQS_55
Constructing an enhanced node matrix->
Figure QLYQS_29
Node matrix +_using the mapping feature>
Figure QLYQS_37
Enhanced node matrix->
Figure QLYQS_31
Construction of the first augmentation matrix->
Figure QLYQS_36
By the first augmentation matrix +.>
Figure QLYQS_32
The tag matrix->
Figure QLYQS_40
Obtaining a first weight matrix->
Figure QLYQS_43
Wherein the mapping characteristic node matrix is constructed based on the following formula>
Figure QLYQS_50
Is->
Figure QLYQS_58
Recording device
Figure QLYQS_62
Then->
Figure QLYQS_26
Figure QLYQS_41
Record->
Figure QLYQS_47
Then->
Figure QLYQS_56
Figure QLYQS_59
Wherein the enhanced node matrix is constructed based on the following formula>
Figure QLYQS_63
Is->
Figure QLYQS_42
Record->
Figure QLYQS_52
Then
Figure QLYQS_44
Figure QLYQS_51
Record->
Figure QLYQS_46
Then
Figure QLYQS_53
Wherein the first weight matrix is determined based on the following formula>
Figure QLYQS_48
The first augmentation matrix->
Figure QLYQS_54
Figure QLYQS_27
Solving a first augmentation matrix
Figure QLYQS_39
Pseudo-inverse of->
Figure QLYQS_57
Figure QLYQS_61
Obtaining a first weight matrix
Figure QLYQS_60
Step 4: using a test input matrix
Figure QLYQS_64
Testing, outputting a load decomposition model if the first training error is met, and entering a step 6; if the training error does not meet the first training error, the step 5 is entered;
step 5: incremental learning of the load decomposition model; based on the mapped feature node matrix Z, constructing an incremental enhancement node matrix H_{m+1} using a third random initialization matrix W_{h,m+1}, the second activation function ξ and a third bias vector β_{m+1}; using the incremental enhancement node matrix H_{m+1} and the first augmentation matrix A_1 to construct a second augmentation matrix A_2; obtaining a second weight matrix W_2 from the second augmentation matrix A_2 and the label matrix Y_load; assigning M+1 to M and m+1 to m, and returning to step 4; wherein the incremental enhancement node matrix is constructed as H_{m+1} = ξ(Z·W_{h,m+1} + β_{m+1}) and the second augmentation matrix as A_2 = [A_1 | H_{m+1}]; the second weight matrix is determined as follows: let D = A_1^+ · H_{m+1}, C = H_{m+1} − A_1·D, and B^T = C^+ when C ≠ 0, or B^T = (I + D^T·D)^(-1) · D^T · A_1^+ when C = 0; the pseudo-inverse of the second augmentation matrix is solved as A_2^+ = [A_1^+ − D·B^T; B^T], and the second weight matrix W_2 = A_2^+ · Y_load = [W_1 − D·B^T·Y_load; B^T·Y_load] is then solved;
Step 6: initial training of a switch state change recognition model based on the input matrix
Figure QLYQS_104
Utilize fourth random initialization matrix +.>
Figure QLYQS_91
Third activation function->
Figure QLYQS_100
Fourth bias vector->
Figure QLYQS_92
Constructing a mapping feature node matrix
Figure QLYQS_97
Node matrix based on the mapping feature>
Figure QLYQS_106
Initializing matrix with fifth random->
Figure QLYQS_108
Fourth activation function->
Figure QLYQS_93
Fifth bias vector->
Figure QLYQS_96
Constructing an enhanced node matrix->
Figure QLYQS_87
Utilizing the mapping feature node matrix
Figure QLYQS_102
Enhanced node matrix->
Figure QLYQS_94
Construction of a third augmentation matrix->
Figure QLYQS_98
Through the third augmentation matrix->
Figure QLYQS_105
The tag matrix->
Figure QLYQS_107
Obtaining a third weight matrix->
Figure QLYQS_88
The third weight matrix is obtained based on the following formula
Figure QLYQS_101
The third augmentation matrix->
Figure QLYQS_90
Solving for a third augmentation matrix>
Figure QLYQS_95
Pseudo-inverse of (2)
Figure QLYQS_89
Figure QLYQS_99
Obtaining a third weight matrix->
Figure QLYQS_103
Figure QLYQS_109
Step 7: using a test input matrix
Figure QLYQS_110
Testing, outputting a switch state change identification model if the second training error is met, and entering a step 9; if the training error does not meet the second training error, the step 8 is entered;
step 8: incremental learning of the switch state change recognition model; based on the mapped feature node matrix Z', constructing an incremental enhancement node matrix H'_{m+1} using a sixth random initialization matrix W'_{h,m+1}, the fourth activation function ξ' and a sixth offset vector β'_{m+1}; using the incremental enhancement node matrix H'_{m+1} and the third augmentation matrix A_3 to construct a fourth augmentation matrix A_4; obtaining a fourth weight matrix W_4 from the fourth augmentation matrix A_4 and the label vector Y_sw; assigning M+1 to M and m+1 to m, and returning to step 7; wherein the incremental enhancement node matrix is constructed as H'_{m+1} = ξ'(Z'·W'_{h,m+1} + β'_{m+1}) and the fourth augmentation matrix as A_4 = [A_3 | H'_{m+1}]; the fourth weight matrix is determined as follows: let D = A_3^+ · H'_{m+1}, C = H'_{m+1} − A_3·D, and B^T = C^+ when C ≠ 0, or B^T = (I + D^T·D)^(-1) · D^T · A_3^+ when C = 0; the pseudo-inverse of the fourth augmentation matrix is solved as A_4^+ = [A_3^+ − D·B^T; B^T], and the fourth weight matrix W_4 = A_4^+ · Y_sw = [W_3 − D·B^T·Y_sw; B^T·Y_sw] is then solved;
Step 9: switch state change identification, continuously collecting electrical appliance operation data of K periods at the sampling frequency of K2, and splicing and normalizing to form an input vector X switch X is taken as switch Inputting the switch state change identification model, identifying whether the switch state of the electrical appliance is changed, if so, delaying for T2 periods, then entering step 10, and if not, executing step 10 at fixed time intervals; and
step 10: electric appliance load decomposition, collecting electric appliance operation information of a single period at a sampling frequency of K1, and splicing and normalizing to form an input vector X 1 And fourier transforming the operation information, splicing and normalizing to form an input vector X 2 X is taken as 1 、X 2 Splicing to form input vector X cycle The input vector X cycle Inputting the load decomposition model to obtain an electric appliance load decomposition result Y cycle
2. The non-interference type load decomposition method based on a width learning algorithm according to claim 1, characterized in that the frequency K1 is selectable in the range of 1 kHz to 10 kHz and the frequency K2 is selectable in the range of 1 kHz to 10 kHz.
CN201910230729.0A 2019-03-26 2019-03-26 Non-interference type load decomposition method based on width learning algorithm Active CN109993424B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910230729.0A CN109993424B (en) 2019-03-26 2019-03-26 Non-interference type load decomposition method based on width learning algorithm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910230729.0A CN109993424B (en) 2019-03-26 2019-03-26 Non-interference type load decomposition method based on width learning algorithm

Publications (2)

Publication Number Publication Date
CN109993424A CN109993424A (en) 2019-07-09
CN109993424B true CN109993424B (en) 2023-06-23

Family

ID=67131501

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910230729.0A Active CN109993424B (en) 2019-03-26 2019-03-26 Non-interference type load decomposition method based on width learning algorithm

Country Status (1)

Country Link
CN (1) CN109993424B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112256123B (en) * 2020-09-25 2022-08-23 北京师范大学 Brain load-based control work efficiency analysis method, equipment and system
CN116304762A (en) * 2023-05-17 2023-06-23 杭州致成电子科技有限公司 Method and device for decomposing load
CN116610922A (en) * 2023-07-13 2023-08-18 浙江大学滨江研究院 Non-invasive load identification method and system based on multi-strategy learning

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105260803A (en) * 2015-11-06 2016-01-20 国家电网公司 Power consumption prediction method for system
CN107330517A (en) * 2017-06-14 2017-11-07 华北电力大学 One kind is based on S_Kohonen non-intrusion type resident load recognition methods
CN108960339A (en) * 2018-07-20 2018-12-07 吉林大学珠海学院 A kind of electric car induction conductivity method for diagnosing faults based on width study
CN109444757A (en) * 2018-10-09 2019-03-08 杭州中恒云能源互联网技术有限公司 A kind of residual capacity of power battery of electric automobile evaluation method
CN109508908A (en) * 2018-12-25 2019-03-22 深圳市城市公共安全技术研究院有限公司 Non-invasive load recognition model training method, load monitoring method and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9020874B2 (en) * 2011-10-31 2015-04-28 Siemens Aktiengesellschaft Short-term load forecast using support vector regression and feature learning

Also Published As

Publication number Publication date
CN109993424A (en) 2019-07-09

Similar Documents

Publication Publication Date Title
CN109993424B (en) Non-interference type load decomposition method based on width learning algorithm
Kaselimi et al. Multi-channel recurrent convolutional neural networks for energy disaggregation
Dash et al. Electric energy disaggregation via non-intrusive load monitoring: A state-of-the-art systematic review
Zufferey et al. Machine learning approaches for electric appliance classification
Bu et al. WECC composite load model parameter identification using evolutionary deep reinforcement learning
Liu et al. Admittance-based load signature construction for non-intrusive appliance load monitoring
CN103020459B (en) A kind of cognitive method of various dimensions electricity consumption behavior and system
Andrean et al. A hybrid method of cascade-filtering and committee decision mechanism for non-intrusive load monitoring
CN109633301B (en) Non-invasive electrical appliance load identification method based on quantum genetic optimization
CN111639586B (en) Non-invasive load identification model construction method, load identification method and system
Basu et al. Load identification from power recordings at meter panel in residential households
Jiang et al. Literature review of power disaggregation
Wu et al. A load identification algorithm of frequency domain filtering under current underdetermined separation
Han et al. Non-intrusive load monitoring based on semi-supervised smooth teacher graph learning with voltage–current trajectory
Saha et al. Comprehensive NILM framework: Device type classification and device activity status monitoring using capsule network
Monteiro et al. Non-intrusive load monitoring using artificial intelligence classifiers: Performance analysis of machine learning techniques
Chen et al. Real‐time recognition of power quality disturbance‐based deep belief network using embedded parallel computing platform
Yoon et al. Deep learning-based method for the robust and efficient fault diagnosis in the electric power system
Schirmer et al. Double Fourier integral analysis based convolutional neural network regression for high-frequency energy disaggregation
Sima et al. Diagnosis of small-sample measured electromagnetic transients in power system using DRN-LSTM and data augmentation
Rodríguez Fernández et al. Online identification of appliances from power consumption data collected by smart meters
Li et al. Stacking ensemble learning-based load identification considering feature fusion by cyber-physical approach
Singh et al. Outlier detection and clustering of household’s electrical load profiles
Kim et al. Time-Frequency Domain Deep Convolutional Neural Network for Li-Ion Battery SoC Estimation
Basu Classifcation techniques for non-intrusive load monitoring and prediction of residential loads

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant