CN113780410A - Method and device for detecting step mutation points of power time sequence data - Google Patents

Method and device for detecting step mutation points of power time sequence data

Info

Publication number
CN113780410A
CN113780410A
Authority
CN
China
Prior art keywords
data
time sequence
power
output
hidden layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111059994.0A
Other languages
Chinese (zh)
Inventor
陈聪
罗海军
匡启帆
官新锋
程晓军
郑超
林涛
余随
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianmu Data Fujian Technology Co ltd
Original Assignee
Tianmu Data Fujian Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianmu Data Fujian Technology Co ltd filed Critical Tianmu Data Fujian Technology Co ltd
Priority to CN202111059994.0A priority Critical patent/CN113780410A/en
Publication of CN113780410A publication Critical patent/CN113780410A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/08 - Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a method and a device for detecting step mutation points of power time sequence data. The method comprises: slicing preprocessed power time sequence data to obtain time sequence slice data; inputting the time sequence slice data into a pre-trained deep learning discrimination model and outputting a mutation tag sequence; and judging mutation information of the power time sequence data at each time sequence point. The mutation tag sequence satisfies output = f3(layer2*W3 + b3), where output is the output mutation tag sequence, W3 is the output layer weight matrix, b3 is the output layer weight bias, f3 is the output layer activation function, and layer2 is the second hidden layer output. By virtue of the strong data fitting and characterization capability of deep learning, the invention achieves effective detection of step mutations in power time sequence data and markedly improves the recall rate and precision of step mutation point identification.

Description

Method and device for detecting step mutation points of power time sequence data
Technical Field
The invention relates to the technical field of electric power, in particular to a method and a device for detecting a step-jump point of electric power time sequence data.
Background
In the electricity consumption acquisition system of the national power grid, power indexes such as the electric quantity, phase current and phase voltage of a user's gateway electric meter and the line loss rate of a transformer area are collected at regular intervals, and the data of these indexes, arranged along the time dimension, form a time series of power index data (power time sequence data for short). Detecting step mutation points in power time sequence data is an important basis for assessing electric meter faults or electricity stealing. For example, a sudden step drop in phase voltage time sequence data may indicate that the voltage terminals of that phase have oxidized or loosened; a sudden step increase in the line loss time sequence data of a transformer area may indicate recent electricity stealing in that area.
For time sequence step mutation detection, the prior art includes the following:
1) Patent CN201510121511.3, a power system fault data matching method based on a mutation point detection combination algorithm, which combines a Bayesian classification model with artificial experience rules to detect mutation points in power system data; 2) Patent CN201810011792.0, a fire online early warning and rapid analysis method based on mutation point detection, which combines wavelet transformation with a binary tree algorithm to detect mutations in the time series data of fire early warning system sensors; 3) Patent CN201510796774.4, a method, device and computing device for locating spur anomaly points, which detects spur anomaly points in monitoring alarm time sequence data based on a sliding rule filtering algorithm.
However, all of these existing time series step mutation point identification methods rely on screening and filtering rules formulated manually from observation of a limited number of samples (for example, patent CN201510121511.3). The time series data in practical applications is very complicated and contains a large amount of noise, so the recall rate and precision of step mutation point identification based on manually specified screening rules are low.
Disclosure of Invention
In view of the above technical problems, an object of the present invention is to provide a method and an apparatus for detecting step mutation points in power time sequence data, which solve the problem of low recall rate and precision caused by identifying step mutation points with manually specified screening rules in existing identification methods.
The invention adopts the following technical scheme:
a method for detecting a step mutation point of power time sequence data comprises the following steps:
acquiring power time sequence data and preprocessing the power time sequence data;
slicing the preprocessed power time sequence data to obtain time sequence slice data;
inputting the time sequence slice data into a pre-trained deep learning discrimination model, and outputting a mutation label sequence; the deep learning discrimination model comprises an input layer, a first hidden layer, a second hidden layer and an output layer; the mutant tag sequence satisfies the following formula:
output=f3(layer2*W3+b3);
wherein output is an output mutation tag sequence, namely a score vector output by the deep learning discriminant model, W3 is an output layer weight matrix, b3 is output layer weight bias, f3 is an output layer activation function, and layer2 is a second hidden layer output;
and judging mutation information of the power time sequence data at each time sequence point according to the mutation tag sequence.
Optionally, the second hidden layer output satisfies the following formula:
layer2=f2(layer1*W2+b2)
wherein layer1 is the first hidden layer output, W2 is the weight matrix of the second hidden layer, b2 is the second hidden layer weight bias, and f2 is the activation function of the second hidden layer.
Optionally, the deep learning discrimination model divides the time sequence slice data into first slice data and second slice data, inputs the first slice data into the left hidden layer as left slice data, flips the second slice data and inputs it into the right hidden layer as right slice data, and splices the left hidden layer and the right hidden layer by rows to obtain the first hidden layer;
wherein the left hidden layer node output is left_layer1 = f1(left_s*W1 + b1), and the right hidden layer node output is right_layer1 = f1(right_s*W1 + b1); wherein W1 is the weight matrix of the first hidden layer, b1 is the first hidden layer weight bias, f1 is the activation function of the first hidden layer, left_s is the left slice data, and right_s is the right slice data.
Optionally, the slicing operation is performed on the preprocessed power time series data to obtain time series slice data, and the slicing operation includes:
performing sliding slicing on the preprocessed power time sequence data by using a sliding window with the length of 2w +1 to obtain N-2w time sequence slices with the length of 2w + 1;
wherein the preprocessed power time sequence data is S = [x0, x1, ..., xN-1], x0 represents the power data collected in the first unit time, xN-1 represents the power data collected in the Nth unit time, N is a natural number, and w is a natural number; the N-2w time sequence slices are s1, s2, ..., sN-2w,
and satisfy s1 = [x0, x1, ..., x2w], s2 = [x1, x2, ..., x2w+1], ..., sN-2w = [xN-2w-1, xN-2w, ..., xN-1], with N-2w > 0.
Optionally, preprocessing the power time sequence data includes:
performing missing value filling on the power time sequence data, and/or performing normalization on the power time sequence data.
Optionally, the determining, according to the mutation tag sequence, mutation information of the power timing data at each timing point includes:
the mutation tag sequence is [Lw, Lw+1, ..., La, ..., LN-w-1], wherein Lw, Lw+1, ..., La, ..., LN-w-1 are the mutation tags corresponding to the power data xw, xw+1, ..., xa, ..., xN-w-1 respectively; N-w-1 > w, La ∈ {-1, 0, 1}, w ≤ a ≤ N-w-1, a is a natural number, and xa is the power data collected in the a-th unit time;
when La is -1, it is judged that a sudden increase mutation occurs at the time sequence point of the collected power data xa; when La is 0, it is judged that no mutation occurs at the time sequence point of xa; when La is 1, it is judged that a sudden drop mutation occurs at the time sequence point of xa.
Optionally, the deep learning discriminant model is obtained by training through the following method:
acquiring power abnormality work order data; manually marking step mutation points on the power time sequence data to be modeled in the power abnormality work order data to obtain training sample data, and training the deep learning discriminant model with the training sample data;
or, obtaining computer-simulated sample time sequence data, setting labeled step mutation points on the sample time sequence data to obtain training sample data, and training the deep learning discriminant model with the training sample data.
A power time sequence data step mutation point detection device comprises:
the processing unit is used for acquiring power time sequence data and preprocessing the power time sequence data;
the slicing unit is used for carrying out slicing operation on the preprocessed power time sequence data to obtain time sequence slice data;
the output unit is used for inputting the time sequence slice data into a pre-trained deep learning discrimination model and outputting a mutation label sequence;
the judging unit is used for judging mutation information of the power time sequence data at each time sequence point according to the mutation tag sequence;
the deep learning discriminant model comprises an input layer, a first hidden layer, a second hidden layer and an output layer; the mutant tag sequence satisfies the following formula:
output=f3(layer2*W3+b3);
in the formula, output is an output mutation tag sequence, namely a score vector output by the deep learning discriminant model, W3 is an output layer weight matrix, b3 is an output layer weight bias, f3 is an output layer activation function, and layer2 is a second hidden layer output.
An electronic device, comprising: at least one processor, and a memory communicatively coupled to the at least one processor, wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the power time sequence data step mutation point detection method.
A computer storage medium having stored thereon a computer program which, when executed by a processor, implements the power time sequence data step mutation point detection method.
Compared with the prior art, the invention has the beneficial effects that:
the method comprises the steps of slicing preprocessed power time sequence data to obtain time sequence slice data; inputting the time sequence slice data into a pre-trained deep learning discrimination model, and outputting a mutation label sequence; according to the mutation tag sequence, mutation information of the power time sequence data at each time sequence point is judged, and automatic identification of the time sequence step mutation points is achieved through a deep learning discrimination model based on deep learning; the time sequence step mutation distinguishing model constructed based on the deep learning framework has stronger data fitting capability and characterization capability compared with a rule model due to the deep learning, for example, the time sequence step mutation distinguishing model constructed based on the deep learning framework can cover the situation that a large number of rules cannot cover by means of a large number of marked training samples, and the recall rate and the precision of step mutation point identification can be remarkably improved.
Drawings
Fig. 1 is a schematic flow chart illustrating a method for detecting a step jump point of power timing data according to an embodiment of the present invention;
FIG. 2 is a block diagram of a deep learning discriminant model according to an embodiment of the present invention;
fig. 3 is a schematic diagram of a power timing data step-jump point detection apparatus according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The present invention will be further described with reference to the accompanying drawings and specific embodiments. It should be noted that, provided there is no conflict, the embodiments or technical features described below may be combined arbitrarily to form new embodiments.
the first embodiment is as follows:
referring to fig. 1-4, fig. 1 shows a method for detecting a step jump point of power timing data according to the present invention, which includes the following steps:
step S1, acquiring power time sequence data and preprocessing the power time sequence data;
in this embodiment, the power time series data may be, but is not limited to, daily power time series data collected by day, or phase voltage or phase current time series data collected by hour, or the like.
As an example, the power time sequence data is daily electric quantity time sequence data with a duration of N days, i.e. the power time sequence data S = [x0, x1, ..., xN-1] (1);
wherein x0 represents the electric quantity of the first day, x1 represents the electric quantity of the second day, and so on, and xN-1 represents the electric quantity of the Nth day.
Optionally, preprocessing the power time sequence data includes:
performing missing value filling on the power time sequence data, and/or performing normalization on the power time sequence data.
In a specific implementation, the raw power time sequence data is first preprocessed; the preprocessing may be, but is not limited to, normalization, time sequence missing value filling, and the like.
For example, normalization for time series may be done as follows:
Xk' = Xk / max(X0, X1, ..., XN-1) (2);
wherein Xk is the power data collected in the k-th unit time, k is a natural number, and Xk' is the normalized power data.
It should be understood that the above pre-processing manner is only an example, not a limitation.
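For illustration only, a minimal sketch of such preprocessing, assuming Python with numpy (the helper name preprocess and the sample values are hypothetical), could combine forward-fill of missing values with the max-normalization of formula (2):

    import numpy as np

    def preprocess(series):
        """Fill missing values (NaN) by carrying the previous value forward,
        then normalize by the series maximum as in formula (2)."""
        s = np.asarray(series, dtype=float)
        for i in range(len(s)):
            if np.isnan(s[i]):
                s[i] = s[i - 1] if i > 0 else 0.0  # earlier entries are already filled
        # Xk' = Xk / max(X0, X1, ..., XN-1)
        m = np.max(s)
        return s / m if m != 0 else s

    daily_energy = [12.1, 11.8, np.nan, 12.4, 30.2, 29.8]  # illustrative values only
    print(preprocess(daily_energy))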
Step S2, slicing the preprocessed power time sequence data to obtain time sequence slice data;
specifically, the step S2 includes:
performing sliding slicing on the preprocessed power time sequence data by using a sliding window with the length of 2w +1 to obtain N-2w time sequence slices with the length of 2w + 1;
wherein the preprocessed power time sequence data is S = [x0, x1, ..., xN-1], x0 represents the power data collected in the first unit time, xN-1 represents the power data collected in the Nth unit time, N is a natural number, and w is a natural number; the N-2w time sequence slices are s1, s2, ..., sN-2w,
and satisfy s1 = [x0, x1, ..., x2w], s2 = [x1, x2, ..., x2w+1], ..., sN-2w = [xN-2w-1, xN-2w, ..., xN-1], with N-2w > 0.
The following is illustrated by way of example:
If the unit time is a day and the collected power data is electric quantity data, the power time sequence data is daily electric quantity time sequence data with a duration of N days, i.e. S = [x0, x1, ..., xN-1] (1); wherein x0 represents the electric quantity of the first day, x1 represents the electric quantity of the second day, and so on, and xN-1 represents the electric quantity of the Nth day.
Sliding slicing is performed on S from left to right with a sliding window of length 2w+1 to obtain N-2w time sequence slices of length 2w+1, namely s1 = [x0, x1, ..., x2w], s2 = [x1, x2, ..., x2w+1], ..., sN-2w = [xN-2w-1, xN-2w, ..., xN-1];
It can be understood that s1 is the time sequence context of electric quantity point xw, s2 is the time sequence context of electric quantity point xw+1, and so on, and sN-2w is the time sequence context of electric quantity point xN-w-1. Passing s1, s2, ..., sN-2w in turn through the trained deep learning discrimination model yields the mutation tags [Lw, Lw+1, ..., LN-w-1] corresponding to time sequence points w, w+1, ..., N-w-1.
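A minimal sketch of this sliding-window slicing, again assuming Python with numpy (the helper name slice_series is hypothetical), is given below:

    import numpy as np

    def slice_series(s, w):
        """Slide a window of length 2w+1 over s and return the N-2w slices;
        slice i is the time sequence context of point x_{i+w}."""
        s = np.asarray(s, dtype=float)
        n = len(s)
        assert n - 2 * w > 0, "the series must be longer than the window"
        return np.stack([s[i:i + 2 * w + 1] for i in range(n - 2 * w)])

    slices = slice_series(np.arange(10.0), w=2)  # N = 10 points, window length 5
    print(slices.shape)                          # (6, 5): slices s1 ... s6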
Step S3, inputting the time sequence slice data into a pre-trained deep learning discrimination model and outputting a mutation label sequence;
In a specific implementation, the training data of the deep learning discrimination model can be based on power abnormality work order data from a real environment; for example, the power time sequence data to be modeled is collated and its step mutation points are manually annotated to obtain training data. Alternatively, time sequence data containing step mutation points can be simulated by computer, with the corresponding labels generated automatically, to obtain training data.
In a specific implementation, the optimization algorithm for the deep learning discriminant model parameters can adopt, but is not limited to, a gradient descent method.
The deep learning discriminant model comprises an input layer, a first hidden layer, a second hidden layer and an output layer; the mutant tag sequence satisfies the following formula:
output=f3(layer2*W3+b3);
in the formula, output is an output mutation tag sequence, namely a score vector output by the deep learning discriminant model, W3 is an output layer weight matrix, b3 is an output layer weight bias, f3 is an output layer activation function, and layer2 is a second hidden layer output.
Optionally, the second hidden layer output satisfies the following formula:
layer2=f2(layer1*W2+b2)
wherein layer1 is the first hidden layer output, W2 is the weight matrix of the second hidden layer, b2 is the second hidden layer weight bias, and f2 is the activation function of the second hidden layer.
Optionally, the deep learning discrimination model divides the time sequence slice data into first slice data and second slice data, inputs the first slice data into the left hidden layer as left slice data, flips the second slice data and inputs it into the right hidden layer as right slice data, and splices the left hidden layer and the right hidden layer by rows to obtain the first hidden layer;
wherein the left hidden layer node output is left_layer1 = f1(left_s*W1 + b1), and the right hidden layer node output is right_layer1 = f1(right_s*W1 + b1); wherein W1 is the weight matrix of the first hidden layer, b1 is the first hidden layer weight bias, f1 is the activation function of the first hidden layer, left_s is the left slice data, and right_s is the right slice data.
Specifically, a framework schematic diagram of the deep learning discriminant model is shown in Fig. 2;
First, time sequence slice data is obtained, for example s1 = [x0, x1, ..., x2w]; the deep learning discrimination model divides the time sequence slice data into first slice data and second slice data;
for example, s1 is divided into first slice data and second slice data with the time sequence point w as the dividing point, and the second slice data is flipped to obtain the right slice data, i.e. right_s = [x2w, x2w-1, ..., xw+1];
The left slice data and the right slice data respectively satisfy the following formulas:
left_s=[x0,x1,...,xw-1];
right_s=[x2w,x2w-1,...,xw+1];
then, the first hidden layer performs the following weighted transformation:
left_layer1 = f1(left_s*W1 + b1) (3)
right_layer1 = f1(right_s*W1 + b1) (4)
wherein left_layer1 is the left hidden layer node output, right_layer1 is the right hidden layer node output, W1 is the first hidden layer weight matrix, b1 is the first hidden layer weight bias, and f1 is the first hidden layer activation function, which may be, but is not limited to, relu, tanh, sigmoid, and the like.
Then, left _ layer1 and right _ layer1 are spliced according to rows to obtain a hidden layer node layer 1; and then the second hidden layer carries out the following weighted transformation:
layer2 = f2(layer1*W2 + b2) (5)
wherein layer2 is the second hidden layer node output, W2 is the second hidden layer weight matrix, b2 is the second hidden layer weight bias, and f2 is the second hidden layer activation function, which may be, but is not limited to, relu, tanh, sigmoid, and the like.
Finally, the following weighted transformation is performed by the output layer:
output = f3(layer2*W3 + b3) (6)
wherein output is the output score vector, W3 is the output layer weight matrix, b3 is the output layer weight bias, and f3 is the output layer activation function, here set to the softmax function.
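Purely as an illustration, the forward pass of formulas (3) to (6) could be sketched in Python with numpy as follows; the hidden sizes and random weights are placeholders, and relu is just one of the activation choices named above, so this is a sketch under assumptions rather than the invention's exact implementation:

    import numpy as np

    def relu(x):
        return np.maximum(x, 0.0)

    def softmax(x):
        e = np.exp(x - np.max(x))
        return e / np.sum(e)

    rng = np.random.default_rng(0)
    w, h1, h2 = 3, 8, 8                                    # half-window and hidden sizes (illustrative)
    W1, b1 = rng.normal(size=(w, h1)), np.zeros(h1)        # first hidden layer, shared by left and right
    W2, b2 = rng.normal(size=(2 * h1, h2)), np.zeros(h2)   # second hidden layer
    W3, b3 = rng.normal(size=(h2, 3)), np.zeros(3)         # output layer, 3 classes for {-1, 0, 1}

    def forward(slice_2w1):
        x = np.asarray(slice_2w1, dtype=float)   # one slice of length 2w+1
        left_s = x[:w]                           # [x0, ..., x_{w-1}]
        right_s = x[:w:-1]                       # [x_{2w}, ..., x_{w+1}], i.e. the flipped right half
        left_layer1 = relu(left_s @ W1 + b1)     # formula (3)
        right_layer1 = relu(right_s @ W1 + b1)   # formula (4)
        layer1 = np.concatenate([left_layer1, right_layer1])
        layer2 = relu(layer1 @ W2 + b2)          # formula (5)
        return softmax(layer2 @ W3 + b3)         # formula (6): score vector over the 3 tags

    print(forward(np.arange(2 * w + 1.0)))       # scores for one illustrative slice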
Step S4, judging mutation information of the power time sequence data at each time sequence point according to the mutation label sequence;
optionally, the determining, according to the mutation tag sequence, mutation information of the power timing data at each timing point includes:
the mutation tag sequence is [Lw, Lw+1, ..., La, ..., LN-w-1], wherein Lw, Lw+1, ..., La, ..., LN-w-1 are the mutation tags corresponding to the power data xw, xw+1, ..., xa, ..., xN-w-1 respectively; N is a natural number, w is a natural number, N-w-1 > w, La ∈ {-1, 0, 1}, w ≤ a ≤ N-w-1, a is a natural number, and xa is the power data collected in the a-th unit time;
when La is -1, it is judged that a sudden increase mutation occurs at time sequence point a of the collected power data xa; when La is 0, it is judged that no mutation occurs at time sequence point a; when La is 1, it is judged that a sudden drop mutation occurs at time sequence point a.
In this embodiment, when the power time sequence data is daily electric quantity data collected by day, the power data collected on the a-th day is xa and the corresponding time sequence point is the a-th day; when the power time sequence data is phase voltage or phase current data collected by hour, the power data collected in the a-th hour is xa and the corresponding time sequence point is the a-th hour.
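As a simple illustration, assuming Python with numpy and assuming (only for this sketch) that output classes 0, 1, 2 of the score vector correspond to the tags -1, 0, 1, the score vectors of slices s1, s2, ..., sN-2w could be decoded into per-point mutation information as follows:

    import numpy as np

    TAGS = (-1, 0, 1)                            # assumed class order of the score vector
    MEANING = {-1: "sudden increase", 0: "no mutation", 1: "sudden drop"}

    def decode(score_vectors, w):
        """score_vectors: array of shape (N-2w, 3); entry i corresponds to time point a = w + i."""
        labels = [TAGS[int(np.argmax(v))] for v in np.asarray(score_vectors)]
        return {w + i: (lab, MEANING[lab]) for i, lab in enumerate(labels)}

    scores = np.array([[0.1, 0.8, 0.1], [0.7, 0.2, 0.1]])  # illustrative model outputs
    print(decode(scores, w=3))  # {3: (0, 'no mutation'), 4: (-1, 'sudden increase')}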
As a specific application, 3000 time sequences of length 30 to 300, each containing several step mutation points, were randomly generated by computer simulation as training data. In addition, 500 time sequences of length 30 to 300 containing several step mutation points were randomly generated as test data.
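The exact simulation procedure is not specified here; purely as an assumed illustration, labeled sequences of this kind could be generated along the following lines (Python with numpy; the helper name make_series, the step size, and the noise level are hypothetical):

    import numpy as np

    def make_series(rng, min_len=30, max_len=300, n_jumps=3, noise=0.02):
        """Return a noisy series of random length with a few random step jumps,
        together with its tag sequence (-1 sudden increase, 1 sudden drop, 0 otherwise)."""
        n = int(rng.integers(min_len, max_len + 1))
        x = 1.0 + rng.normal(0.0, noise, n)              # baseline around 1.0 with small noise
        labels = np.zeros(n, dtype=int)
        for _ in range(n_jumps):
            pos = int(rng.integers(1, n - 1))
            step = rng.choice([-0.5, 0.5])               # downward or upward step
            x[pos:] += step
            labels[pos] = -1 if step > 0 else 1
        return x, labels

    rng = np.random.default_rng(0)
    train = [make_series(rng) for _ in range(3000)]      # 3000 training sequences
    test = [make_series(rng) for _ in range(500)]        # 500 test sequences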
The deep learning discrimination model trained on this training data achieves a recall rate of 92.7% and a precision of 94.4% on the test data, whereas the recall rate and precision of a rule-based step mutation point algorithm on the same test data do not exceed 30%.
In the implementation process, the preprocessed power time sequence data is sliced to obtain time sequence slice data; the time sequence slice data is input into a pre-trained deep learning discrimination model to output a mutation tag sequence; and mutation information of the power time sequence data at each time sequence point is judged according to the mutation tag sequence, realizing automatic identification of time sequence step mutation points with a deep-learning-based discrimination model. Because deep learning has stronger data fitting and characterization capability than rule models, the time sequence step mutation discrimination model built on a deep learning framework and trained with a large number of labeled samples can cover many situations that manually specified rules cannot, and markedly improves the recall rate and precision of step mutation point identification.
Embodiment two:
referring to fig. 3, fig. 3 shows a power timing data step-jump point detection apparatus according to the present invention, including:
the processing unit 10 is configured to acquire power timing sequence data and preprocess the power timing sequence data;
the slicing unit 20 is used for carrying out slicing operation on the preprocessed power time sequence data to obtain time sequence slice data;
the output unit 30 is used for inputting the time series slice data into a pre-trained deep learning discrimination model and outputting a mutation label sequence;
a judging unit 40, configured to judge mutation information of the power time series data at each time series point according to the mutation tag sequence;
the deep learning discriminant model comprises an input layer, a first hidden layer, a second hidden layer and an output layer; the mutant tag sequence satisfies the following formula:
output=f3(layer2*W3+b3);
in the formula, output is an output mutation tag sequence, namely a score vector output by the deep learning discriminant model, W3 is an output layer weight matrix, b3 is an output layer weight bias, f3 is an output layer activation function, and layer2 is a second hidden layer output.
In the implementation process, the invention provides a step mutation detection device based on a deep learning framework to achieve effective detection of step mutations in power time sequence data. By virtue of the strong data fitting and characterization capability of deep learning, for example through training with a large number of labeled time sequence samples, the recall rate and precision of step mutation point identification can be markedly improved.
Embodiment three:
Fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application. An electronic device 100 for implementing the power time sequence data step mutation point detection method according to an embodiment of the present invention is described with reference to the schematic diagram shown in Fig. 4.
As shown in fig. 4, an electronic device 100 includes one or more processors 102, one or more memory devices 104, and the like, which are interconnected via a bus system and/or other type of connection mechanism (not shown). It should be noted that the components and structure of the electronic device 100 shown in fig. 4 are only exemplary and not limiting, and the electronic device may have some of the components shown in fig. 4 and may also have other components and structures not shown in fig. 4, as needed.
The processor 102 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 100 to perform desired functions.
The storage device 104 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory. The non-volatile memory may include, for example, read-only memory (ROM), hard disks, flash memory, and the like. One or more computer program instructions may be stored on the computer-readable storage medium and executed by the processor 102 to implement the functions of the embodiments of the application described herein and/or other desired functions. Various applications and data, such as data used and/or generated by the applications, may also be stored in the computer-readable storage medium.
The invention also provides a computer storage medium on which a computer program is stored. If the method of the invention is implemented in the form of software functional units and sold or used as a stand-alone product, it can be stored in such a computer storage medium. Based on this understanding, all or part of the flow of the method of the embodiments of the present invention may be implemented by a computer program, which is stored in a computer storage medium and, when executed by a processor, implements the steps of the method embodiments. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer storage medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer storage medium may be increased or decreased as appropriate according to the requirements of legislation and patent practice in a jurisdiction; for example, in some jurisdictions, the computer storage medium does not include electrical carrier signals and telecommunications signals.
Various other modifications and changes may be made by those skilled in the art based on the above-described technical solutions and concepts, and all such modifications and changes should fall within the scope of the claims of the present invention.

Claims (10)

1. A method for detecting step mutation points of power time sequence data, characterized by comprising the following steps:
acquiring power time sequence data and preprocessing the power time sequence data;
slicing the preprocessed power time sequence data to obtain time sequence slice data;
inputting the time sequence slice data into a pre-trained deep learning discrimination model, and outputting a mutation label sequence; the deep learning discrimination model comprises an input layer, a first hidden layer, a second hidden layer and an output layer; the mutant tag sequence satisfies the following formula:
output=f3(layer2*W3+b3);
wherein output is an output mutation tag sequence, namely a score vector output by the deep learning discriminant model, W3 is an output layer weight matrix, b3 is output layer weight bias, f3 is an output layer activation function, and layer2 is a second hidden layer output;
and judging mutation information of the power time sequence data at each time sequence point according to the mutation tag sequence.
2. The method of claim 1, wherein the second hidden layer output satisfies the following equation:
layer2=f2(layer1*W2+b2)
wherein layer1 is the first hidden layer output, W2 is the weight matrix of the second hidden layer, b2 is the second hidden layer weight bias, and f2 is the activation function of the second hidden layer.
3. The method according to claim 1, wherein the deep learning discriminant model divides the time series slice data into a first slice data and a second slice data, the first slice data is used as a left slice data for inputting into a left hidden layer, and the second slice data is inverted and then is used as a right slice data for inputting into a right hidden layer; splicing the left hidden layer and the right hidden layer according to rows to obtain a first hidden layer;
wherein the left hidden layer node output is left_layer1 = f1(left_s*W1 + b1), and the right hidden layer node output is right_layer1 = f1(right_s*W1 + b1); wherein W1 is the weight matrix of the first hidden layer, b1 is the first hidden layer weight bias, f1 is the activation function of the first hidden layer, left_s is the left slice data, and right_s is the right slice data.
4. The method according to claim 1, wherein the slicing operation is performed on the preprocessed power time series data to obtain time series slice data, and the method comprises:
performing sliding slicing on the preprocessed power time sequence data by using a sliding window with the length of 2w+1 to obtain N-2w time sequence slices with the length of 2w+1; wherein the preprocessed power time sequence data is S = [x0, x1, ..., xN-1], x0 represents the power data collected in the first unit time, xN-1 represents the power data collected in the Nth unit time, N is a natural number, and w is a natural number; the N-2w time sequence slices are s1, s2, ..., sN-2w,
and satisfy s1 = [x0, x1, ..., x2w], s2 = [x1, x2, ..., x2w+1], ..., sN-2w = [xN-2w-1, xN-2w, ..., xN-1], with N-2w > 0.
5. The method of claim 1, wherein the preprocessing the power time sequence data comprises:
performing missing value filling on the power time sequence data, and/or performing normalization on the power time sequence data.
6. The method according to claim 4, wherein the determining the mutation information of the power time series data at each time series point according to the mutation tag sequence comprises:
the mutation tag sequence is [Lw, Lw+1, ..., La, ..., LN-w-1], wherein Lw, Lw+1, ..., La, ..., LN-w-1 are the mutation tags corresponding to the power data xw, xw+1, ..., xa, ..., xN-w-1 respectively; N-w-1 > w, La ∈ {-1, 0, 1}, w ≤ a ≤ N-w-1, a is a natural number, and xa is the power data collected in the a-th unit time;
when La is -1, it is judged that a sudden increase mutation occurs at the time sequence point of the collected power data xa; when La is 0, it is judged that no mutation occurs at the time sequence point of xa; when La is 1, it is judged that a sudden drop mutation occurs at the time sequence point of xa.
7. The method according to claim 1, wherein the deep learning discriminant model is trained by:
acquiring abnormal work order data of the electric power; manually marking a step mutation point on power time sequence data needing modeling in the power abnormal work order data to obtain training sample data, and training a deep learning discriminant model through the training sample data;
or, obtaining sample time sequence data simulated by a computer, setting a step mutation point containing a label on the sample time sequence data to obtain training sample data, and training the deep learning discriminant model through the training sample data.
8. A power time sequence data step mutation point detection device, characterized by comprising:
the processing unit is used for acquiring power time sequence data and preprocessing the power time sequence data;
the slicing unit is used for carrying out slicing operation on the preprocessed power time sequence data to obtain time sequence slice data;
the output unit is used for inputting the time sequence slice data into a pre-trained deep learning discrimination model and outputting a mutation label sequence;
the judging unit is used for judging mutation information of the power time sequence data at each time sequence point according to the mutation tag sequence;
the deep learning discriminant model comprises an input layer, a first hidden layer, a second hidden layer and an output layer; the mutant tag sequence satisfies the following formula:
output=f3(layer2*W3+b3);
in the formula, output is an output mutation tag sequence, namely a score vector output by the deep learning discriminant model, W3 is an output layer weight matrix, b3 is an output layer weight bias, f3 is an output layer activation function, and layer2 is a second hidden layer output.
9. An electronic device, comprising: at least one processor, and a memory communicatively coupled to the at least one processor, wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the power time sequence data step mutation point detection method of any one of claims 1-7.
10. A computer storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the power time sequence data step mutation point detection method according to any one of claims 1-7.
CN202111059994.0A 2021-09-10 2021-09-10 Method and device for detecting step mutation points of power time sequence data Pending CN113780410A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111059994.0A CN113780410A (en) 2021-09-10 2021-09-10 Method and device for detecting step mutation points of power time sequence data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111059994.0A CN113780410A (en) 2021-09-10 2021-09-10 Method and device for detecting step mutation points of power time sequence data

Publications (1)

Publication Number Publication Date
CN113780410A true CN113780410A (en) 2021-12-10

Family

ID=78842317

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111059994.0A Pending CN113780410A (en) 2021-09-10 2021-09-10 Method and device for detecting step mutation points of power time sequence data

Country Status (1)

Country Link
CN (1) CN113780410A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination