CN109993424B - Non-interference type load decomposition method based on width learning algorithm - Google Patents
Non-interference type load decomposition method based on width learning algorithm
- Publication number: CN109993424B (application number CN201910230729.0A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/06—Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
- G06Q10/063—Operations research, analysis or management
- G06Q10/0639—Performance analysis of employees; Performance analysis of enterprise or organisation operations
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/06—Energy or water supply
Abstract
According to the non-interference type load decomposition method based on the width learning algorithm, operation data collected before and after the electrical equipment runs are used as input data and are combined with the width learning algorithm. Through data preprocessing, initial training of a load decomposition model, incremental learning of the load decomposition model, initial training of a switch state change identification model and incremental learning of the switch state change identification model, and by fully considering the multiple factors of the electrical equipment information, the power load operation condition of a multi-appliance scene is obtained in a non-interference manner.
Description
Technical Field
The invention relates to the technical field of load decomposition, in particular to a non-interference type load decomposition method based on a width learning algorithm.
Background
Power load decomposition generally needs to be matched with an interference type or non-interference type device. The traditional load decomposition method needs a special sensor to monitor and collect the running state of the electrical equipment and the related power consumption information, so the cost is high, large-scale popularization is difficult, the deployment period is long, and results are slow to appear. A large number of smart meter monitoring devices have emerged on the current market; these devices can effectively monitor and collect electrical parameters of the electrical equipment such as voltage (U), current (I), active power (P), reactive power (Q), power factor (PF), frequency (f) and active energy (kWh). In addition, the traditional electrical equipment load decomposition method generally uses power as the only electrical factor, so the data are single and the decomposition accuracy is low. In view of the above drawbacks, improvements are proposed here.
Disclosure of Invention
According to the non-interference type load decomposition method based on the width learning algorithm, a special sensor device is not needed, multiple factors of electrical equipment information are fully considered, and the load decomposition of the electrical equipment is effectively realized by adopting a correlation technology of big data and artificial intelligence, so that a more accurate electricity utilization decision is provided.
The invention provides a non-interference type load decomposition method based on a width learning algorithm, which is used for monitoring and analyzing the power running state and related power consumption information of an electric appliance to realize the load decomposition of the electric appliance, and comprises the following steps:
Step 1: data acquisition. First-class data acquisition: a plurality of time periods in which the switch state of the electric appliance does not change are selected, and operation data of the electric appliance, comprising voltage, current, instantaneous active power and instantaneous reactive power, are collected at the sampling frequency K1; the operation data are spliced to form an input vector X_i^(1), and the operation data after Fourier transformation are spliced to form an input vector X_i^(2); the input vectors X_i^(1) and X_i^(2) are spliced into an input vector X_i. Meanwhile, switch state data of the electric appliance are collected, the switch state data being recorded as 1 when the switch is on and as 0 when the switch is off, and the switch state data are spliced to form a label vector Y_i, where i = 1, 2, …, I is the data number and I is the total amount of data. Second-class data acquisition: operation data of the electric appliance in the T periods after the switch state of the electric appliance changes are collected at the sampling frequency K2 and spliced to form an input vector X_j^c, where j = 1, 2, …, J is the data number and J is the total amount of data; at the same time the label vector Y_j^c is constructed. Operation data of the electric appliance in T periods in which the switch state does not change are likewise collected at the sampling frequency K2 and spliced to form an input vector X_j^u; at the same time the label vector Y_j^u is constructed as an all-zero vector;
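As an illustrative sketch only (not part of the claimed method), the splicing in step 1 of one cycle of voltage, current and power samples together with their Fourier-transformed counterpart can be written as follows; the samples-per-cycle count, the toy waveforms and the use of rFFT magnitudes as the frequency-domain features are all assumptions:

```python
import numpy as np

def build_input_vector(u, i_, p, q):
    """Splice one cycle of V/I/P/Q samples into X1, append spectral features as X2."""
    x1 = np.concatenate([u, i_, p, q])   # time-domain input vector X1
    x2 = np.abs(np.fft.rfft(x1))         # frequency-domain input vector X2 (assumed: rFFT magnitude)
    return np.concatenate([x1, x2])      # spliced input vector X

n = 200                                  # assumed samples per mains cycle
t = 2 * np.pi * np.arange(n) / n
u = 311 * np.sin(t)                      # toy voltage waveform
i_ = 5 * np.sin(t - 0.3)                 # toy current waveform
p, q = u * i_, np.zeros(n)               # toy instantaneous active/reactive power
x = build_input_vector(u, i_, p, q)
print(x.shape)                           # 4n time samples + (4n/2 + 1) spectral bins
```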
Step 2: data preprocessing. First-class data preprocessing: the input vectors X_i (i = 1, 2, …, I_t, I_t < I) are normalized and spliced into the input matrix X; the other input vectors are normalized and spliced into the test input matrix X_t; the label vectors Y_i (i = 1, 2, …, I_t) are spliced into the label matrix Y, and the other label vectors are spliced into the test label matrix Y_t. Second-class data preprocessing: the input vectors X_j^c and X_j^u (j = 1, 2, …, J_t, J_t < J) are normalized and spliced into the input matrix X_s; the other input vectors are normalized and spliced into the test input matrix X_s,t; the corresponding label vectors are spliced into the label vector Y_s, and the other label vectors are spliced into the test label vector Y_s,t;
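A minimal sketch of the normalize-and-split preprocessing of step 2, assuming per-vector min-max normalization (the patent text does not specify which normalization is used):

```python
import numpy as np

def preprocess(vectors, labels, n_train):
    """Normalize each input vector to [0, 1], stack into matrices,
    and split into training and test portions (n_train < total)."""
    X = np.stack([(v - v.min()) / (v.max() - v.min() + 1e-12) for v in vectors])
    Y = np.stack(labels)
    return X[:n_train], X[n_train:], Y[:n_train], Y[n_train:]

rng = np.random.default_rng(1)
vecs = [rng.normal(size=16) for _ in range(10)]
labs = [rng.integers(0, 2, size=3) for _ in range(10)]
X, X_t, Y, Y_t = preprocess(vecs, labs, n_train=7)
print(X.shape, X_t.shape, Y.shape, Y_t.shape)  # (7, 16) (3, 16) (7, 3) (3, 3)
```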
Step 3: initial training of the load decomposition model. Based on the input matrix X, the mapping feature node matrix Z^N is constructed using the first random initialization matrix W_e, the first activation function φ and the first bias vector β_e; based on the mapping feature node matrix Z^N, the enhancement node matrix H^M is constructed using the second random initialization matrix W_h, the second activation function ξ and the second bias vector β_h; the first augmentation matrix A1 = [Z^N | H^M] is constructed from the mapping feature node matrix Z^N and the enhancement node matrix H^M, and the first weight matrix W1 is obtained from the first augmentation matrix A1 and the label matrix Y;
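Step 3 follows the standard initial-training construction of broad ("width") learning. The sketch below is a hedged illustration: the node counts, the tanh activations and the ridge constant λ are assumed values, not taken from the patent:

```python
import numpy as np

def bls_train(X, Y, n_map=20, n_enh=30, lam=1e-8, seed=2):
    """Initial training: mapping nodes Z, enhancement nodes H,
    augmentation matrix A = [Z | H], weights W = A^+ Y via ridge pseudo-inverse."""
    rng = np.random.default_rng(seed)
    We, be = rng.normal(size=(X.shape[1], n_map)), rng.normal(size=n_map)
    Z = np.tanh(X @ We + be)             # mapping feature node matrix Z^N
    Wh, bh = rng.normal(size=(n_map, n_enh)), rng.normal(size=n_enh)
    H = np.tanh(Z @ Wh + bh)             # enhancement node matrix H^M
    A = np.hstack([Z, H])                # first augmentation matrix A1
    # ridge pseudo-inverse (lambda*I + A^T A)^-1 A^T, lambda small
    A_pinv = np.linalg.solve(lam * np.eye(A.shape[1]) + A.T @ A, A.T)
    return A_pinv @ Y                    # first weight matrix W1

rng = np.random.default_rng(3)
X, Y = rng.normal(size=(40, 8)), rng.normal(size=(40, 2))
W1 = bls_train(X, Y)
print(W1.shape)   # one row per mapping/enhancement node, one column per output
```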
Step 4: using a test input matrixTesting, outputting a load decomposition model if the first training error is met, and entering a step 6; if the training error does not meet the first training error, the step 5 is entered;
Step 5: incremental learning of the load decomposition model. Based on the mapping feature node matrix Z^N, the incremental enhancement node matrix H_{M+1} is constructed using the third random initialization matrix W_h,M+1, the second activation function ξ and the third bias vector β_h,M+1; the second augmentation matrix A2 = [A1 | H_{M+1}] is constructed from the incremental enhancement node matrix H_{M+1} and the first augmentation matrix A1; the second weight matrix W2 is obtained from the second augmentation matrix A2 and the label matrix Y; M+1 is assigned to M, and the method returns to step 4;
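Step 5 can be sketched with the standard broad-learning block pseudo-inverse update for newly added enhancement nodes; since the patent's own formula images are not legible in this text, the exact correspondence to the patented formulas is an assumption:

```python
import numpy as np

def bls_increment(A1, A1_pinv, W1, Y, H_new):
    """Append enhancement nodes H_new without full retraining:
    A2 = [A1 | H_new]; update pseudo-inverse and weights blockwise."""
    D = A1_pinv @ H_new
    C = H_new - A1 @ D                    # component of H_new outside col(A1)
    if np.linalg.norm(C) > 1e-10:
        Bt = np.linalg.pinv(C)
    else:
        Bt = np.linalg.solve(np.eye(D.shape[1]) + D.T @ D, D.T @ A1_pinv)
    A2_pinv = np.vstack([A1_pinv - D @ Bt, Bt])
    W2 = np.vstack([W1 - D @ (Bt @ Y), Bt @ Y])
    return A2_pinv, W2

rng = np.random.default_rng(4)
A1, Y = rng.normal(size=(40, 30)), rng.normal(size=(40, 2))
A1_pinv = np.linalg.pinv(A1)
W1 = A1_pinv @ Y
H_new = rng.normal(size=(40, 5))
A2_pinv, W2 = bls_increment(A1, A1_pinv, W1, Y, H_new)
# the block update should agree with retraining on [A1 | H_new] directly
print(np.allclose(W2, np.linalg.pinv(np.hstack([A1, H_new])) @ Y, atol=1e-6))
```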
Step 6: initial training of the switch state change identification model. Based on the input matrix X_s, the mapping feature node matrix Z_s^N is constructed using the fourth random initialization matrix W_f, the third activation function φ_s and the fourth bias vector β_f; based on the mapping feature node matrix Z_s^N, the enhancement node matrix H_s^M is constructed using the fifth random initialization matrix W_g, the fourth activation function ξ_s and the fifth bias vector β_g; the third augmentation matrix A3 = [Z_s^N | H_s^M] is constructed from the mapping feature node matrix Z_s^N and the enhancement node matrix H_s^M, and the third weight matrix W3 is obtained from the third augmentation matrix A3 and the label vector Y_s;
Step 7: using a test input matrixTesting, outputting a switch state change identification model if the second training error is met, and entering a step 9; if the training error does not meet the second training error, the step 8 is entered;
Step 8: incremental learning of the switch state change identification model. Based on the mapping feature node matrix Z_s^N, the incremental enhancement node matrix H_s,M+1 is constructed using the sixth random initialization matrix W_g,M+1, the fourth activation function ξ_s and the sixth bias vector β_g,M+1; the fourth augmentation matrix A4 = [A3 | H_s,M+1] is constructed from the incremental enhancement node matrix H_s,M+1 and the third augmentation matrix A3; the fourth weight matrix W4 is obtained from the fourth augmentation matrix A4 and the label vector Y_s; M+1 is assigned to M, and the method returns to step 7;
Step 9: switch state change identification. Electrical appliance operation data of K periods are continuously collected at the sampling frequency K2, and are spliced and normalized to form an input vector X_switch; X_switch is input into the switch state change identification model to identify whether the switch state of the electrical appliance has changed; if it has changed, step 10 is entered after a delay of T2 periods, and if it has not changed, step 10 is executed at fixed time intervals; and
Step 10: electric appliance load decomposition. Electric appliance operation information of a single period is collected at the sampling frequency K1, and is spliced and normalized to form an input vector X_1; the operation information is Fourier transformed, spliced and normalized to form an input vector X_2; X_1 and X_2 are spliced to form the input vector X_cycle, and the input vector X_cycle is input into the load decomposition model to obtain the electric appliance load decomposition result Y_cycle.
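The two-model inference flow of steps 9 and 10 can be sketched as follows; the StubModel class and all shapes are hypothetical stand-ins for the trained width-learning models, not the patent's implementation:

```python
import numpy as np

class StubModel:
    """Hypothetical stand-in for a trained width-learning model: y = x @ W,
    thresholded at 0.5; real models would use the trained weight matrices."""
    def __init__(self, W):
        self.W = W
    def predict(self, x):
        return (x @ self.W > 0.5).astype(int)

def monitor_step(x_switch, x_cycle, switch_model, decomp_model):
    """Steps 9-10: decomposition runs either after a detected switch change
    (the T2-period settling delay is elided here) or on the fixed schedule."""
    changed = bool(switch_model.predict(x_switch).any())
    y_cycle = decomp_model.predict(x_cycle)   # per-appliance on/off result
    return changed, y_cycle

rng = np.random.default_rng(5)
switch_model = StubModel(rng.normal(size=(20, 1)))
decomp_model = StubModel(rng.normal(size=(40, 3)))
changed, y_cycle = monitor_step(rng.normal(size=20), rng.normal(size=40),
                                switch_model, decomp_model)
print(y_cycle.shape)   # one indicator per monitored appliance
```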
Preferably, the enhancement node matrix H^M is constructed based on the following formula: recording H_l = ξ(Z^N·W_h,l + β_h,l), then H^M ≡ [H_1, H_2, …, H_M], where l = 1, 2, …, M.
Preferably, step 3 obtains the first weight matrix W1 based on the following formula: the first augmentation matrix is A1 = [Z^N | H^M]; the pseudo-inverse of the first augmentation matrix A1 is solved as A1^+ = lim_{λ→0} (λI + A1^T·A1)^(-1)·A1^T, and the first weight matrix W1 = A1^+·Y is obtained.
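The limit expression for the pseudo-inverse above can be checked numerically against NumPy's Moore-Penrose implementation; the matrix size and the small λ below are arbitrary test values:

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.normal(size=(12, 5))       # tall random matrix, full column rank
lam = 1e-10
# (lambda*I + A^T A)^-1 A^T approaches the Moore-Penrose pseudo-inverse as lambda -> 0
A_ridge = np.linalg.solve(lam * np.eye(5) + A.T @ A, A.T)
print(np.allclose(A_ridge, np.linalg.pinv(A), atol=1e-6))
```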
Preferably, step 5 obtains the second weight matrix W2 based on the following formula: the second augmentation matrix A2 = [A1 | H_{M+1}] is constructed; letting D = A1^+·H_{M+1} and C = H_{M+1} − A1·D, with B^T = C^+ when C ≠ 0 and B^T = (1 + D^T·D)^(-1)·D^T·A1^+ when C = 0, the pseudo-inverse A2^+ = [A1^+ − D·B^T ; B^T] is solved, and the second weight matrix W2 = [W1 − D·B^T·Y ; B^T·Y] is obtained.
Preferably, step 6 obtains the third weight matrix W3 based on the following formula: the third augmentation matrix is A3 = [Z_s^N | H_s^M]; the pseudo-inverse of the third augmentation matrix A3 is solved as A3^+ = lim_{λ→0} (λI + A3^T·A3)^(-1)·A3^T, and the third weight matrix W3 = A3^+·Y_s is obtained.
Preferably, step 8 obtains the fourth weight matrix W4 based on the following formula: the fourth augmentation matrix A4 = [A3 | H_s,M+1] is constructed; letting D_s = A3^+·H_s,M+1 and C_s = H_s,M+1 − A3·D_s, with B_s^T = C_s^+ when C_s ≠ 0 and B_s^T = (1 + D_s^T·D_s)^(-1)·D_s^T·A3^+ when C_s = 0, the pseudo-inverse A4^+ = [A3^+ − D_s·B_s^T ; B_s^T] is solved, and the fourth weight matrix W4 = [W3 − D_s·B_s^T·Y_s ; B_s^T·Y_s] is obtained.
Preferably, the frequency K1 may be selected in the range of 1kHz-10kHz, and the frequency K2 may be selected in the range of 1kHz-10kHz.
According to the non-interference type load decomposition method based on the width learning algorithm, no special sensor device is needed to realize load decomposition, and most power information acquisition devices already on the market (such as smart meters) can be reused. Based on the width learning algorithm, multiple factors of the electrical equipment information are fully considered, and load decomposition of the electrical equipment is effectively realized by adopting related technologies of big data and artificial intelligence. The application cost is thus greatly reduced, the power load operation condition of a multi-appliance scene is identified more accurately, a more accurate electricity utilization decision is provided, and the economic benefit is effectively improved.
Drawings
FIG. 1 is a flow chart of a non-interferometric load decomposition method based on a width learning algorithm according to a first embodiment of the present invention;
FIG. 2 is a schematic diagram of a width learning load decomposition model of the present invention;
fig. 3 is a voltage and current diagram before and after the state change of the electric heater switch according to the first embodiment of the present invention;
fig. 4 is a graph showing the result of the load decomposition of the electric appliance according to the first embodiment of the present invention.
Detailed Description
The non-interference type load decomposition method based on the width learning algorithm provided by the invention is further described below with reference to the accompanying drawings. It should be pointed out that only one preferred technical solution is used to describe the technical solution and the design principle of the invention in detail.
The first embodiment of the invention provides a non-interference type load decomposition method based on a width learning algorithm, which is used for monitoring and analyzing the power running state and related power consumption information of an electric appliance so as to realize the load decomposition of the electric appliance.
Referring to fig. 1 and 2, a non-interference type load decomposition method based on a width learning algorithm according to a first embodiment of the present invention includes the following steps:
Step 1: data acquisition. First-class data acquisition: a plurality of time periods in which the switch state of the electric appliance does not change are selected, and operation data of the electric appliance, comprising voltage, current, instantaneous active power and instantaneous reactive power, are collected at a sampling frequency of 10 kHz; the operation data are spliced to form an input vector X_i^(1), and the operation data after Fourier transformation are spliced to form an input vector X_i^(2); the input vectors X_i^(1) and X_i^(2) are spliced into an input vector X_i. Meanwhile, switch state data of the electric appliance are collected, the switch state data being recorded as 1 when the switch is on and as 0 when the switch is off, and the switch state data are spliced to form a label vector Y_i, where i = 1, 2, …, I is the data number and I is the total amount of data. Second-class data acquisition: operation data of the electric appliance in the T periods after the switch state of the electric appliance changes are collected at a sampling frequency of 1 kHz and spliced to form an input vector X_j^c, where j = 1, 2, …, J is the data number and J is the total amount of data; at the same time the label vector Y_j^c is constructed. Operation data of the electric appliance in T periods in which the switch state does not change are likewise collected at the sampling frequency of 1 kHz and spliced to form an input vector X_j^u; at the same time the label vector Y_j^u is constructed as an all-zero vector;
Step 2: data preprocessing. First-class data preprocessing: the input vectors X_i (i = 1, 2, …, I_t, I_t < I) are normalized and spliced into the input matrix X; the other input vectors are normalized and spliced into the test input matrix X_t; the label vectors Y_i (i = 1, 2, …, I_t) are spliced into the label matrix Y, and the other label vectors are spliced into the test label matrix Y_t. Second-class data preprocessing: the input vectors X_j^c and X_j^u (j = 1, 2, …, J_t, J_t < J) are normalized and spliced into the input matrix X_s; the other input vectors are normalized and spliced into the test input matrix X_s,t; the corresponding label vectors are spliced into the label vector Y_s, and the other label vectors are spliced into the test label vector Y_s,t;
Step 3: initial training of the load decomposition model. Based on the input matrix X, the mapping feature node matrix Z^N is constructed using the first random initialization matrix W_e, the first activation function φ and the first bias vector β_e; based on the mapping feature node matrix Z^N, the enhancement node matrix H^M is constructed using the second random initialization matrix W_h, the second activation function ξ and the second bias vector β_h; the first augmentation matrix A1 = [Z^N | H^M] is constructed from the mapping feature node matrix Z^N and the enhancement node matrix H^M, and the first weight matrix W1 is obtained from the first augmentation matrix A1 and the label matrix Y;
Specifically, the first weight matrix W1 is obtained in step 3 based on the following formulas. Firstly, the mapping feature node matrix Z^N is constructed: recording Z_n = φ(X·W_e,n + β_e,n), then Z^N ≡ [Z_1, Z_2, …, Z_N], where n = 1, 2, …, N. Secondly, the enhancement node matrix H^M is constructed: recording H_l = ξ(Z^N·W_h,l + β_h,l), then H^M ≡ [H_1, H_2, …, H_M], where l = 1, 2, …, M. Then, the first augmentation matrix A1 = [Z^N | H^M] is constructed, and its pseudo-inverse A1^+ = lim_{λ→0} (λI + A1^T·A1)^(-1)·A1^T is solved. Finally, the first weight matrix W1 = A1^+·Y is obtained.
Step 4: using a test input matrixTesting, outputting a load decomposition model if the first training error is met, and entering a step 6; if the training error does not meet the first training error, the step 5 is entered;
Step 5: incremental learning of the load decomposition model. Based on the mapping feature node matrix Z^N, the incremental enhancement node matrix H_{M+1} is constructed using the third random initialization matrix W_h,M+1, the second activation function ξ and the third bias vector β_h,M+1; the second augmentation matrix A2 = [A1 | H_{M+1}] is constructed from the incremental enhancement node matrix H_{M+1} and the first augmentation matrix A1; the second weight matrix W2 is obtained from the second augmentation matrix A2 and the label matrix Y; M+1 is assigned to M, and the method returns to step 4;
specifically, step 5 obtains the second weight matrix based on the following formulaFirst, construct incremental enhancement node matrix ++>Then, a second augmentation matrix is constructed>Order the
Solving for the second augmentationMatrix arrayPseudo-inverse of->Finally, a second weight matrix is determined>
Step 6: initial training of the switch state change identification model. Based on the input matrix X_s, the mapping feature node matrix Z_s^N is constructed using the fourth random initialization matrix W_f, the third activation function φ_s and the fourth bias vector β_f; based on the mapping feature node matrix Z_s^N, the enhancement node matrix H_s^M is constructed using the fifth random initialization matrix W_g, the fourth activation function ξ_s and the fifth bias vector β_g; the third augmentation matrix A3 = [Z_s^N | H_s^M] is constructed from the mapping feature node matrix Z_s^N and the enhancement node matrix H_s^M, and the third weight matrix W3 is obtained from the third augmentation matrix A3 and the label vector Y_s;
Specifically, step 6 obtains the third weight matrix W3 based on the following formulas. Firstly, the mapping feature node matrix Z_s^N is constructed: recording Z_s,n = φ_s(X_s·W_f,n + β_f,n), then Z_s^N ≡ [Z_s,1, Z_s,2, …, Z_s,N]. Secondly, the enhancement node matrix H_s^M is constructed: recording H_s,l = ξ_s(Z_s^N·W_g,l + β_g,l), then H_s^M ≡ [H_s,1, H_s,2, …, H_s,M]. Then, the third augmentation matrix A3 = [Z_s^N | H_s^M] is constructed, and its pseudo-inverse A3^+ = lim_{λ→0} (λI + A3^T·A3)^(-1)·A3^T is solved. Finally, the third weight matrix W3 = A3^+·Y_s is determined.
Step 7: the switch state change identification model is tested using the test input matrix X_s,t; if the second training error requirement is met, the switch state change identification model is output and step 9 is entered; if the second training error requirement is not met, step 8 is entered;
Step 8: incremental learning of the switch state change identification model. Based on the mapping feature node matrix Z_s^N, the incremental enhancement node matrix H_s,M+1 is constructed using the sixth random initialization matrix W_g,M+1, the fourth activation function ξ_s and the sixth bias vector β_g,M+1; the fourth augmentation matrix A4 = [A3 | H_s,M+1] is constructed from the incremental enhancement node matrix H_s,M+1 and the third augmentation matrix A3; the fourth weight matrix W4 is obtained from the fourth augmentation matrix A4 and the label vector Y_s; M+1 is assigned to M, and the method returns to step 7;
specifically, step 8 obtains the fourth weight matrix based on the following formulaFirst, construct incremental enhancement node matrix ++>Then, a fourth augmentation matrix is constructedOrder the
Solving for a fourth augmentation matrixPseudo-inverse of->Finally, a fourth weight matrix is determined>
Step 9: switch state change identification. As shown in fig. 3, electrical appliance operation data of K periods are continuously collected at a sampling frequency of 1 kHz, and are spliced and normalized to form an input vector X_switch; X_switch is input into the switch state change identification model to identify whether the switch state of the electrical appliance has changed; if it has changed, step 10 is entered after a delay of 25 periods, and if it has not changed, step 10 is executed at fixed time intervals;
Step 10: electric appliance load decomposition. As shown in fig. 4, electric appliance operation information of a single period is collected at a sampling frequency of 10 kHz, and is spliced and normalized to form an input vector X_1; the operation information is Fourier transformed, spliced and normalized to form an input vector X_2; X_1 and X_2 are spliced to form the input vector X_cycle, and the input vector X_cycle is input into the load decomposition model to obtain the electric appliance load decomposition result Y_cycle.
Specifically, referring to fig. 3 and 4, in this embodiment the electrical appliance whose switch state change is identified is an electric heater, and the electrical appliances whose loads are decomposed are an electric heater, a television set and a notebook computer.
According to the non-interference type load decomposition method based on the width learning algorithm, operation data collected before and after the electrical equipment runs are used as input data and are combined with the width learning algorithm. Through data preprocessing, initial training and incremental learning of the load decomposition model, and initial training and incremental learning of the switch state change identification model, and by fully considering the multiple factors of the electrical equipment information, the power load operation condition of a multi-appliance scene is obtained in a non-interference manner.
The foregoing is merely a preferred embodiment of the present invention, and it should be noted that the above-mentioned preferred embodiment should not be construed as limiting the invention, and the scope of the invention should be defined by the appended claims. It will be apparent to those skilled in the art that various modifications and adaptations can be made without departing from the spirit and scope of the invention, and such modifications and adaptations are intended to be comprehended within the scope of the invention.
Claims (2)
1. A non-interference type load decomposition method based on a width learning algorithm, which is used for monitoring and analyzing the power running state and related power consumption information of an electric appliance to realize the load decomposition of the electric appliance, and is characterized by comprising the following steps:
step 1: data acquisition. First-class data acquisition: a plurality of time periods in which the switch state of the electric appliance does not change are selected, and operation data of the electric appliance, comprising voltage, current, instantaneous active power and instantaneous reactive power, are collected at the sampling frequency K1; the operation data are spliced to form an input vector X_i^(1), and the operation data after Fourier transformation are spliced to form an input vector X_i^(2); the input vectors X_i^(1) and X_i^(2) are spliced into an input vector X_i. Meanwhile, switch state data of the electric appliance are collected, the switch state data being recorded as 1 when the switch is on and as 0 when the switch is off, and the switch state data are spliced to form a label vector Y_i, where i = 1, 2, …, I is the data number and I is the total amount of data. Second-class data acquisition: operation data of the electric appliance in the T periods after the switch state of the electric appliance changes are collected at the sampling frequency K2 and spliced to form an input vector X_j^c, where j = 1, 2, …, J is the data number and J is the total amount of data; at the same time the label vector Y_j^c is constructed. Operation data of the electric appliance in T periods in which the switch state does not change are likewise collected at the sampling frequency K2 and spliced to form an input vector X_j^u; at the same time the label vector Y_j^u is constructed as an all-zero vector;
step 2: data preprocessing. First-class data preprocessing: the input vectors X_i (i = 1, 2, …, I_t, I_t < I) are normalized and spliced into the input matrix X; the other input vectors are normalized and spliced into the test input matrix X_t; the label vectors Y_i (i = 1, 2, …, I_t, I_t < I) are spliced into the label matrix Y, and the other label vectors are spliced into the test label matrix Y_t. Second-class data preprocessing: the input vectors X_j^c and X_j^u (j = 1, 2, …, J_t, J_t < J) are normalized and spliced into the input matrix X_s; the other input vectors are normalized and spliced into the test input matrix X_s,t; the corresponding label vectors are spliced into the label vector Y_s, and the other label vectors are spliced into the test label vector Y_s,t;
step 3: initial training of the load decomposition model: based on the input matrix X, the mapping feature node matrix Z^N is constructed using the first random initialization matrix W_e, the first activation function φ and the first bias vector β_e; based on the mapping feature node matrix Z^N, the enhancement node matrix H^M is constructed using the second random initialization matrix W_h, the second activation function ξ and the second bias vector β_h; the first augmentation matrix A1 = [Z^N | H^M] is constructed from the mapping feature node matrix Z^N and the enhancement node matrix H^M, and the first weight matrix W1 is obtained from the first augmentation matrix A1 and the label matrix Y; wherein the mapping feature node matrix Z^N is constructed based on the following formula: recording Z_n = φ(X·W_e,n + β_e,n), then Z^N ≡ [Z_1, Z_2, …, Z_N], where n = 1, 2, …, N; wherein the enhancement node matrix H^M is constructed based on the following formula: recording H_l = ξ(Z^N·W_h,l + β_h,l), then H^M ≡ [H_1, H_2, …, H_M], where l = 1, 2, …, M; wherein the first weight matrix W1 is obtained based on the following formula: the pseudo-inverse of the first augmentation matrix A1 is solved as A1^+ = lim_{λ→0} (λI + A1^T·A1)^(-1)·A1^T, and the first weight matrix W1 = A1^+·Y is obtained;
Step 4: using a test input matrixTesting, outputting a load decomposition model if the first training error is met, and entering a step 6; if the training error does not meet the first training error, the step 5 is entered;
step 5: incremental learning of the load decomposition model: based on the mapping feature node matrix Z^N, the incremental enhancement node matrix H_{M+1} is constructed using the third random initialization matrix W_h,M+1, the second activation function ξ and the third bias vector β_h,M+1; the second augmentation matrix A2 = [A1 | H_{M+1}] is constructed from the incremental enhancement node matrix H_{M+1} and the first augmentation matrix A1; the second weight matrix W2 is obtained from the second augmentation matrix A2 and the label matrix Y; M+1 is assigned to M, and the method returns to step 4; wherein the incremental enhancement node matrix is constructed based on the formula H_{M+1} = ξ(Z^N·W_h,M+1 + β_h,M+1); wherein the second weight matrix W2 is obtained based on the following formula: letting D = A1^+·H_{M+1} and C = H_{M+1} − A1·D, with B^T = C^+ when C ≠ 0 and B^T = (1 + D^T·D)^(-1)·D^T·A1^+ when C = 0, the pseudo-inverse A2^+ = [A1^+ − D·B^T ; B^T] is solved and the second weight matrix W2 = [W1 − D·B^T·Y ; B^T·Y] is obtained;
Step 6: initial training of a switch state change recognition model based on the input matrixUtilize fourth random initialization matrix +.>Third activation function->Fourth bias vector->Constructing a mapping feature node matrixNode matrix based on the mapping feature>Initializing matrix with fifth random->Fourth activation function->Fifth bias vector->Constructing an enhanced node matrix->Utilizing the mapping feature node matrixEnhanced node matrix->Construction of a third augmentation matrix->Through the third augmentation matrix->The tag matrix->Obtaining a third weight matrix->The third weight matrix is obtained based on the following formulaThe third augmentation matrix->Solving for a third augmentation matrix>Pseudo-inverse of (2) Obtaining a third weight matrix->
Step 7: using a test input matrixTesting, outputting a switch state change identification model if the second training error is met, and entering a step 9; if the training error does not meet the second training error, the step 8 is entered;
step 8: incremental learning of the switch state change identification model: based on the mapping feature node matrix Z_s^N, the incremental enhancement node matrix H_s,M+1 is constructed using the sixth random initialization matrix W_g,M+1, the fourth activation function ξ_s and the sixth bias vector β_g,M+1; the fourth augmentation matrix A4 = [A3 | H_s,M+1] is constructed from the incremental enhancement node matrix H_s,M+1 and the third augmentation matrix A3; the fourth weight matrix W4 is obtained from the fourth augmentation matrix A4 and the label vector Y_s; M+1 is assigned to M, and the method returns to step 7; wherein the incremental enhancement node matrix is constructed based on the formula H_s,M+1 = ξ_s(Z_s^N·W_g,M+1 + β_g,M+1); wherein the fourth weight matrix W4 is obtained based on the following formula: letting D_s = A3^+·H_s,M+1 and C_s = H_s,M+1 − A3·D_s, with B_s^T = C_s^+ when C_s ≠ 0 and B_s^T = (1 + D_s^T·D_s)^(-1)·D_s^T·A3^+ when C_s = 0, the pseudo-inverse A4^+ = [A3^+ − D_s·B_s^T ; B_s^T] is solved and the fourth weight matrix W4 = [W3 − D_s·B_s^T·Y_s ; B_s^T·Y_s] is obtained;
Step 9: switch state change identification, continuously collecting electrical appliance operation data of K periods at the sampling frequency of K2, and splicing and normalizing to form an input vector X switch X is taken as switch Inputting the switch state change identification model, identifying whether the switch state of the electrical appliance is changed, if so, delaying for T2 periods, then entering step 10, and if not, executing step 10 at fixed time intervals; and
Step 10: appliance load decomposition. Single-period appliance operation data are collected at the sampling frequency K1, then spliced and normalized to form an input vector X1; the same operation data are Fourier-transformed, then spliced and normalized to form an input vector X2. X1 and X2 are spliced to form the input vector X_cycle, which is input into the load decomposition model to obtain the appliance load decomposition result Y_cycle.
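Step 10 fuses a time-domain view (X1) with a frequency-domain view (X2) of the same cycle. A sketch of that feature construction, using FFT magnitudes and the same assumed min-max normalization:

```python
import numpy as np

def build_cycle_input(cycle):
    """Form the load-decomposition input: normalized time-domain samples X1
    spliced with normalized FFT magnitudes X2 (illustrative preprocessing)."""
    def minmax(v):
        lo, hi = v.min(), v.max()
        return (v - lo) / (hi - lo) if hi > lo else np.zeros_like(v)
    X1 = minmax(cycle)                        # time-domain view
    X2 = minmax(np.abs(np.fft.rfft(cycle)))   # frequency-domain view
    return np.concatenate([X1, X2])           # X_cycle = [X1 | X2]
```

For a real-valued cycle of n samples, `np.fft.rfft` yields n//2 + 1 frequency bins, so the spliced vector has n + n//2 + 1 entries.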
2. The non-interference type load decomposition method based on a width learning algorithm according to claim 1, characterized in that the frequency K1 is selectable in the range of 1 kHz to 10 kHz, and the frequency K2 is selectable in the range of 1 kHz to 10 kHz.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910230729.0A CN109993424B (en) | 2019-03-26 | 2019-03-26 | Non-interference type load decomposition method based on width learning algorithm |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109993424A CN109993424A (en) | 2019-07-09 |
CN109993424B true CN109993424B (en) | 2023-06-23 |
Family
ID=67131501
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910230729.0A Active CN109993424B (en) | 2019-03-26 | 2019-03-26 | Non-interference type load decomposition method based on width learning algorithm |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109993424B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112256123B (en) * | 2020-09-25 | 2022-08-23 | 北京师范大学 | Brain load-based control work efficiency analysis method, equipment and system |
CN116304762A (en) * | 2023-05-17 | 2023-06-23 | 杭州致成电子科技有限公司 | Method and device for decomposing load |
CN116610922A (en) * | 2023-07-13 | 2023-08-18 | 浙江大学滨江研究院 | Non-invasive load identification method and system based on multi-strategy learning |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105260803A (en) * | 2015-11-06 | 2016-01-20 | 国家电网公司 | Power consumption prediction method for system |
CN107330517A (en) * | 2017-06-14 | 2017-11-07 | 华北电力大学 | One kind is based on S_Kohonen non-intrusion type resident load recognition methods |
CN108960339A (en) * | 2018-07-20 | 2018-12-07 | 吉林大学珠海学院 | A kind of electric car induction conductivity method for diagnosing faults based on width study |
CN109444757A (en) * | 2018-10-09 | 2019-03-08 | 杭州中恒云能源互联网技术有限公司 | A kind of residual capacity of power battery of electric automobile evaluation method |
CN109508908A (en) * | 2018-12-25 | 2019-03-22 | 深圳市城市公共安全技术研究院有限公司 | Non-invasive load recognition model training method, load monitoring method and device |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9020874B2 (en) * | 2011-10-31 | 2015-04-28 | Siemens Aktiengesellschaft | Short-term load forecast using support vector regression and feature learning |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109993424B (en) | Non-interference type load decomposition method based on width learning algorithm | |
Kaselimi et al. | Multi-channel recurrent convolutional neural networks for energy disaggregation | |
Dash et al. | Electric energy disaggregation via non-intrusive load monitoring: A state-of-the-art systematic review | |
Zufferey et al. | Machine learning approaches for electric appliance classification | |
Bu et al. | WECC composite load model parameter identification using evolutionary deep reinforcement learning | |
Liu et al. | Admittance-based load signature construction for non-intrusive appliance load monitoring | |
CN103020459B (en) | A kind of cognitive method of various dimensions electricity consumption behavior and system | |
Andrean et al. | A hybrid method of cascade-filtering and committee decision mechanism for non-intrusive load monitoring | |
CN109633301B (en) | Non-invasive electrical appliance load identification method based on quantum genetic optimization | |
CN111639586B (en) | Non-invasive load identification model construction method, load identification method and system | |
Basu et al. | Load identification from power recordings at meter panel in residential households | |
Jiang et al. | Literature review of power disaggregation | |
Wu et al. | A load identification algorithm of frequency domain filtering under current underdetermined separation | |
Han et al. | Non-intrusive load monitoring based on semi-supervised smooth teacher graph learning with voltage–current trajectory | |
Saha et al. | Comprehensive NILM framework: Device type classification and device activity status monitoring using capsule network | |
Monteiro et al. | Non-intrusive load monitoring using artificial intelligence classifiers: Performance analysis of machine learning techniques | |
Chen et al. | Real‐time recognition of power quality disturbance‐based deep belief network using embedded parallel computing platform | |
Yoon et al. | Deep learning-based method for the robust and efficient fault diagnosis in the electric power system | |
Schirmer et al. | Double Fourier integral analysis based convolutional neural network regression for high-frequency energy disaggregation | |
Sima et al. | Diagnosis of small-sample measured electromagnetic transients in power system using DRN-LSTM and data augmentation | |
Rodríguez Fernández et al. | Online identification of appliances from power consumption data collected by smart meters | |
Li et al. | Stacking ensemble learning-based load identification considering feature fusion by cyber-physical approach | |
Singh et al. | Outlier detection and clustering of household’s electrical load profiles | |
Kim et al. | Time-Frequency Domain Deep Convolutional Neural Network for Li-Ion Battery SoC Estimation | |
Basu | Classification techniques for non-intrusive load monitoring and prediction of residential loads |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||