CN111027058A - Method for detecting data attack in power system, computer equipment and storage medium - Google Patents
Method for detecting data attack in power system, computer equipment and storage medium
- Publication number: CN111027058A
- Application number: CN201911097514.2A
- Authority: CN (China)
- Legal status: Granted (the legal status is an assumption made by Google Patents and is not a legal conclusion; no legal analysis has been performed)
Classifications
- G06F21/554 — Detecting local intrusion or implementing counter-measures involving event detection and direct action
- G06F18/213 — Feature extraction, e.g. by transforming the feature space; summarisation; mappings, e.g. subspace methods
- G06F18/2411 — Classification techniques based on the proximity to a decision surface, e.g. support vector machines
- G06N3/044 — Recurrent networks, e.g. Hopfield networks
- G06N3/045 — Combinations of networks
- G06N3/08 — Learning methods
Abstract
The invention discloses a method for detecting data attacks in a power system, computer equipment and a storage medium, wherein the method comprises the following steps: acquiring historical measurement data of the data-driven power system and performing normalization preprocessing on it; feeding the time-series data into a convolutional neural network in equal batches by time instant/time period to capture spatial features; passing the output of the fully connected layer (FC) of the convolutional neural network to a long short-term memory (LSTM) neural network to capture temporal features; arranging a Dropout layer and a batch normalization layer in both the convolutional and LSTM neural networks, and an attention mechanism on the output layer of the LSTM neural network; and placing a support vector machine classifier on the output layer of the LSTM neural network to output the attack-detection decision. With the invention, the attack-detection accuracy is higher, the detection time interval is reasonable, the detection generalizes better, and the detector can identify false data, so that effective and timely countermeasures can be taken.
Description
Technical Field
The invention belongs to the field of power-system monitoring, and relates to a method for detecting data attacks in a power system, computer equipment and a storage medium.
Background
At present, with the third industrial revolution largely complete and the energy industry's Internet of Things 3.0 advancing step by step, the energy internet formed by the ubiquitous electric-power Internet of Things and the strong smart grid is being studied and applied ever more deeply in the power-system field. The composition and operating mechanism of the cyber-physical power system, an important component of the energy internet, are growing more complex, and the new generation of energy power systems is characterized by the tight coupling and efficient interaction of the energy flow of primary equipment with the information flow of secondary equipment. As a result, the measurement information of the power system is easily attacked and tampered with, increasing its information-security risk. Studying the characteristics and consequences of information attacks that may occur in the power system, and building a preliminary risk-perception mechanism to detect anomalies in the measured data, can reduce the risk and losses of such attacks and provide references for the planning and design, safety and stability, dispatch and distribution, and emergency control of next-generation energy internet development.
Power utilities at home and abroad mostly use EMS/SCADA systems, based on state estimation and bad data detection (BDD), to detect, identify and eliminate bad data uploaded to the regulation center by measurement units such as phasor measurement units (PMUs) and remote terminal units (RTUs). In practice, the bad-data detection methods of power systems based on state estimation are mostly the least-squares method and recursive state estimation based on the Kalman filter, and they do not consider the characteristics of external network attacks such as tampering. Meanwhile, an attacker who knows the topology data of the actual power system can construct and inject false data (a false data injection attack, FDIA) that successfully bypasses BDD; the false data cause the power system to make wrong decisions and gradually destabilize, finally leading to a blackout. In short, bad-data detection relying only on traditional state estimation cannot fully guarantee network security in the new-generation ubiquitous electric-power Internet-of-Things information system.
In the prior art, researchers have proposed new methods for detecting abnormal data based on neural networks, fuzzy theory and cluster analysis, intermittent statistics, support vector machines, Bayesian networks, and the like. These methods can identify bad data with some success, but most consider only the spatial correlation or only the temporal correlation of the measurement data; the complex spatio-temporal characteristics of power-system data are not fully exploited, and the monitoring level still needs improvement. Meanwhile, to improve detection performance, the deep-learning architectures used make most of these methods consume large amounts of computing resources during training and lengthen convergence time, so the real-time performance and practicality of monitoring cannot be fully guaranteed. Data uploaded to the control center by the measurement units generally exhibit spatio-temporal correlation: temporal correlation refers to how the same measurement value changes over time, and spatial correlation refers to the correlation of multiple measurement values across the whole network, or within a certain area, during the same time period.
Disclosure of Invention
The technical problem to be solved by the embodiments of the present invention is to provide a method, computer equipment and a storage medium for detecting data attacks in a power system, so as to address the problems in the prior art that no detector architecture exploits spatio-temporal correlation, detection accuracy is low, the success rate of identifying false data is poor, and the information security of the power system cannot be effectively protected.
In one aspect of the present invention, a method for detecting data attacks in a power system is provided, which includes the following steps:
step S1, acquiring historical measurement data of the data-driven power system and performing normalization preprocessing on it;
step S2, feeding the time-series data into the convolutional neural network in equal batches by time instant/time period and capturing spatial features;
step S3, passing the output of the fully connected layer (FC) of the convolutional neural network to the long short-term memory (LSTM) neural network and capturing temporal features;
step S4, arranging a Dropout layer and a batch normalization layer in the convolutional neural network and the LSTM neural network, and arranging an attention mechanism on the output layer of the LSTM neural network;
and step S5, arranging a support vector machine classifier on the output layer of the LSTM neural network and outputting the attack-detection decision.
Further, in step S1, the specific process of the preprocessing and normalization operations includes:
step S11, reading the locally stored historical raw PMU/RTU measurement data into the memory of the detector, and preprocessing it by area, measurement unit and scheduling unit;
step S12, classifying the PMU data into voltage magnitude, voltage phase, current magnitude and current phase; the detector trains and learns on m measurement units of a certain area of the power system simultaneously, the sampling time-series length is T, the number of measured parameters per measurement unit is n, and the data dimensionality is d = m × n; the data set is expressed by the following formula:

D = {x1, x2, …, xT}

where D denotes the measurement data set, and xt, the value measured by the measurement units at time t, is real-valued with dimension m × n, i.e. xt ∈ R^(m×n).
In data mining, the detector reshapes the time-series data into time fragments, and the resulting data are expressed as:

Dmap = {x1,map, x2,map, …, x(T/T'),map}

where Dmap denotes the processed measurement data set; xvm denotes voltage-magnitude data and xca current-phase data within each fragment; T is the time-series length; T' is the time-fragment length; each processed real-valued sample xt,map has dimension T' × d.
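The fragment reshaping described above can be sketched in a few lines. This is an illustrative NumPy example with made-up sizes (T = 12, m = 3, n = 2, T' = 4), not code from the patent:

```python
import numpy as np

# Hypothetical illustration of the time-fragment reshaping: T sampling
# instants of d = m * n measurements are cut into fragments of length
# T_prime, giving T // T_prime two-dimensional samples for the CNN.
T, m, n = 12, 3, 2          # 12 time steps, 3 measurement units, 2 parameters each
T_prime = 4                 # time-fragment length T'
d = m * n

rng = np.random.default_rng(0)
D = rng.random((T, d))      # flattened measurement data set, one row per instant

D_map = D.reshape(T // T_prime, T_prime, d)   # fragments of shape (T', d)
print(D_map.shape)          # (3, 4, 6)
```

Each fragment is then a small two-dimensional "image" of measurements that the convolutional layers of step S2 can scan.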
Further, in step S11, the preprocessing performs normalization according to the following formula:

yscaler = ymin + (ymax − ymin)(x − xmin)/(xmax − xmin)

where yscaler is the normalized measurement value, distributed in [ymin, ymax]; ymin and ymax are the minimum and maximum of the normalized range; xmin and xmax are the minimum and maximum of the actual measurement values; and x denotes the actual measurement value being normalized.
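As a minimal sketch, the min-max normalization formula above can be implemented as follows (the function name and example values are illustrative, not from the patent):

```python
import numpy as np

def min_max_scale(x, y_min=0.0, y_max=1.0):
    # y_scaler = y_min + (y_max - y_min) * (x - x_min) / (x_max - x_min)
    x = np.asarray(x, dtype=float)
    x_min, x_max = x.min(), x.max()
    return y_min + (y_max - y_min) * (x - x_min) / (x_max - x_min)

x = np.array([220.0, 230.0, 240.0, 250.0])   # e.g. raw voltage magnitudes
print(min_max_scale(x))                      # maps 220..250 onto [0, 1]
```

The same call with `y_min=-1, y_max=1` would map the measurements into [−1, 1], which some activation functions prefer.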
Further, in step S2, the convolutional neural network captures the spatial features of the data according to the following formulas:

xjl = ReLU( Σi∈M xil−1 ∗ kijl + bjl )        (convolution)
xjl+1 = βj down(xjl, Q2) + bjl+1             (pooling)
FC = ReLU( w(n) Flatten(x(n)) + b(n) )       (fully connected layer)

where xl−1 is the input measurement data (or feature map); kijl is the convolution kernel of the j-th feature map of the l-th convolutional layer; M is the matrix block selected at each slide of the convolution; b denotes a bias parameter; down(·) is the pooling function; βj is a trainable scalar; Q2 is the size of the pooling block; Flatten reshapes its input into one dimension; ReLU is the common rectified linear activation function; and w(n) are the weights continually adjusted during backpropagation of the network error.
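A naive, self-contained sketch of the convolution → pooling → flatten pipeline above (single channel, illustrative sizes; a real detector would use a deep-learning framework rather than hand-rolled loops):

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def conv2d_valid(x, k, b):
    # naive "valid" sliding-window cross-correlation of one feature map
    H, W = x.shape
    kh, kw = k.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i+kh, j:j+kw] * k) + b
    return relu(out)

def max_pool(x, q=2):
    # down(., Q^2): summarize each q x q pooling block by its maximum
    H, W = x.shape
    return x[:H - H % q, :W - W % q].reshape(H // q, q, W // q, q).max(axis=(1, 3))

rng = np.random.default_rng(1)
x = rng.random((6, 6))                 # one time-fragment "image" of measurements
feat = max_pool(conv2d_valid(x, rng.random((3, 3)), 0.1), q=2)
fc = feat.flatten()                    # one-dimensional feature vector for the LSTM
print(fc.shape)                        # (4,)
```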
Further, in step S3, the long short-term memory (LSTM) neural network captures the temporal features of the data according to the following formulas:
ft=σ(Wxfxt+Whfht-1+bf)
it=σ(Wxixt+Whiht-1+bi)
c̃t=tanh(Wxcxt+Whcht-1+bc)
ct=ft⊙ct-1+it⊙c̃t
ot=σ(Wxoxt+Whoht-1+bo)
ht=ot⊙tanh(ct)

where ft is the forgetting coefficient of the forget gate; σ is the sigmoid activation function; W are weight matrices; xt is the input value at the current time; ht-1 is the hidden-layer output of the cell at the previous time; b are bias vectors; it is the weight coefficient of the input gate; c̃t is the newly input candidate cell state; tanh is the activation function; ct is the updated current cell state; ct-1 is the cell state at the previous time; ot is the weight coefficient of the output gate; and ht is the hidden-layer output.
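A single LSTM cell step, following standard gate equations of the kind listed above, can be sketched as follows (the concatenated-weight layout and all sizes are illustrative assumptions, not the patent's parameters):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    # W maps the concatenated [x_t, h_prev] to the four gate pre-activations
    z = W @ np.concatenate([x_t, h_prev]) + b
    H = h_prev.size
    f_t = sigmoid(z[0:H])               # forget gate f_t
    i_t = sigmoid(z[H:2*H])             # input gate i_t
    c_tilde = np.tanh(z[2*H:3*H])       # candidate cell state
    o_t = sigmoid(z[3*H:4*H])           # output gate o_t
    c_t = f_t * c_prev + i_t * c_tilde  # cell-state update
    h_t = o_t * np.tanh(c_t)            # hidden-state output
    return h_t, c_t

rng = np.random.default_rng(2)
D_in, H = 3, 4                          # made-up input and hidden sizes
W = rng.standard_normal((4 * H, D_in + H)) * 0.1
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
h, c = lstm_step(rng.random(D_in), h, c, W, b)
print(h.shape, c.shape)                 # (4,) (4,)
```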
Further, in step S4, the specific process of arranging the Dropout layer and the batch normalization layer is as follows: while the detector trains and fits the data, connections between neurons are randomly dropped with a given probability so that the detector does not over-learn local features peculiar to the training set;
the distribution of the input value of each neuron is forcedly processed into standard normal distribution with the mean value of 0 and the variance of 1 by a regularization normalization method, and the input and output feedback of each neuron is adjusted according to the following formula:
y(k)=γ(k)x(k)+β(k)
wherein ,for the k-th normalized neural network input data value, x(k)Is the original input data value, E.]To average its input data, Var [.]To find the variance value of its input data value, y(k)The output value of the neural network corresponding to the input data is gamma, the weight parameter during the neural network training is gamma, and β is the weight bias during the neural network training.
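The batch-normalization formulas above amount to the following sketch (the default gamma, beta and the small eps stabilizer are illustrative choices, not from the patent):

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    # normalize each feature over the batch to zero mean / unit variance,
    # then rescale with the trainable gamma and shift with beta
    x_hat = (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + eps)
    return gamma * x_hat + beta

rng = np.random.default_rng(3)
x = rng.random((32, 5)) * 100.0          # a batch of 32 samples, 5 features
y = batch_norm(x)
print(np.allclose(y.mean(axis=0), 0.0, atol=1e-6))   # True
```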
Further, in step S4, the attention mechanism on the output layer of the LSTM neural network is set specifically according to the following formulas:

hi = oi⊙tanh(ci) = f1(xi, hi-1)tanh(f2(xi, hi-1)) = f(xi, hi-1)
eti = vTtanh(Whhi + Wsst-1 + b)
αti = exp(eti) / Σk exp(etk)
st = Σi αti hi

where hi is the hidden-layer output of the LSTM neural network at time i; oi is the weight coefficient of the LSTM output gate; ci is the current cell state of the LSTM network; αti is the weight coefficient that the current output st assigns to each data value; and st is the output vector value based on the attention mechanism.
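The attention weighting above — score each hidden state, softmax the scores, take the weighted sum — can be sketched as follows (all matrices here are random placeholders standing in for trained parameters):

```python
import numpy as np

def attention_pool(H_states, s_prev, v, W_h, W_s, b):
    # e_ti = v^T tanh(W_h h_i + W_s s_{t-1} + b), one score per hidden state
    e = np.array([v @ np.tanh(W_h @ h_i + W_s @ s_prev + b) for h_i in H_states])
    alpha = np.exp(e - e.max())          # softmax (shifted for stability)
    alpha /= alpha.sum()                 # attention weights sum to 1
    s_t = (alpha[:, None] * H_states).sum(axis=0)   # weighted-sum output
    return alpha, s_t

rng = np.random.default_rng(4)
T, H = 5, 4                              # made-up sequence length and hidden size
states = rng.random((T, H))              # stand-in LSTM hidden states h_1..h_T
alpha, s_t = attention_pool(states, np.zeros(H), rng.random(H),
                            rng.random((H, H)), rng.random((H, H)), np.zeros(H))
print(round(alpha.sum(), 6), s_t.shape)  # 1.0 (4,)
```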
Further, in step S5, the specific process of arranging the support vector machine classifier on the output layer of the LSTM neural network is that the output layer uses the support vector machine classifier to decide whether a network attack on the measurement data has occurred, specifically according to the following formulas:

f(x) = ωTx + b
γ = yi(ωTxi + b)/||ω||

where D is the two-class data set to be classified; x and y are a data point and its label; ωT is the normal vector; b is the displacement term; the function f(x) defines the dividing hyperplane; γ is the geometric margin (interval function); and yi is the label of data point xi.
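The hyperplane decision above reduces to the sign of f(x). A minimal sketch with made-up weights follows (the ±1 labels for normal/attack are illustrative conventions, not from the patent):

```python
import numpy as np

def svm_decision(w, b, X):
    # sign of the hyperplane function f(x) = w^T x + b:
    # +1 -> normal measurement, -1 -> suspected attack (illustrative labels)
    return np.where(X @ w + b >= 0, 1, -1)

w = np.array([1.0, -1.0])    # made-up trained normal vector omega
b = 0.0                      # made-up displacement term
X = np.array([[2.0, 1.0],    # lies on the +1 side of the hyperplane
              [0.5, 3.0]])   # lies on the -1 side
print(svm_decision(w, b, X))   # [ 1 -1]
```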
Accordingly, the present invention provides computer equipment comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the following steps when executing the computer program:
acquiring historical measurement data of the data-driven power system and performing normalization preprocessing on it;
feeding the time-series data into the convolutional neural network in equal batches by time instant/time period and capturing spatial features;
passing the output of the fully connected layer (FC) of the convolutional neural network to the long short-term memory (LSTM) neural network and capturing temporal features;
arranging a Dropout layer and a batch normalization layer in the convolutional neural network and the LSTM neural network, and an attention mechanism on the output layer of the LSTM neural network;
and arranging a support vector machine classifier on the output layer of the LSTM neural network and outputting the attack-detection decision.
Accordingly, a further aspect of the present invention also provides a computer-readable storage medium having a computer program stored thereon which, when executed by a processor, performs the following steps:
acquiring historical measurement data of the data-driven power system and performing normalization preprocessing on it;
feeding the time-series data into the convolutional neural network in equal batches by time instant/time period and capturing spatial features;
passing the output of the fully connected layer (FC) of the convolutional neural network to the long short-term memory (LSTM) neural network and capturing temporal features;
arranging a Dropout layer and a batch normalization layer in the convolutional neural network and the LSTM neural network, and an attention mechanism on the output layer of the LSTM neural network;
and arranging a support vector machine classifier on the output layer of the LSTM neural network and outputting the attack-detection decision.
The embodiments of the invention have the following beneficial effects:
the embodiments provide a method for detecting data attacks on a power system, computer equipment and a storage medium that combine the convolutional neural network's effective extraction of spatial features from the measurement data with the LSTM neural network's effective learning of long time-series data features, and use the classification performance of a support vector machine classifier to improve the generalization and robustness of the detector;
compared with other learners, the proposed detector performs better in offline learning: its attack-detection accuracy is higher, its detection time interval is reasonable, and it generalizes better, so it can be applied to attack detection on power-system measurement information with good application prospects;
training the attack detector on the complex spatio-temporal features of the measurement data effectively safeguards the information security of the power system; the detector can identify false data, so effective and timely countermeasures can be taken.
Drawings
To illustrate the embodiments of the present invention or the technical solutions of the prior art more clearly, the drawings used in their description are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without inventive effort, all within the scope of the present invention.
Fig. 1 is a main flow diagram of a method for detecting a data attack in an electrical power system according to the present invention.
Fig. 2 is a schematic diagram of the process by which the convolutional neural network extracts spatial features of the data in an embodiment of the present invention.
Fig. 3 is a structural diagram of a single neuron of the long short-term memory (LSTM) neural network according to an embodiment of the present invention.
Fig. 4 is a schematic diagram of the attention-based long short-term memory neural network model according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail with reference to the accompanying drawings.
As shown in fig. 1, a main flow diagram of an embodiment of a method for detecting a data attack in a power system provided by the present invention is shown, and in this embodiment, the method includes the following steps:
step S1, acquiring historical measurement data of the data-driven power system and carrying out normalization preprocessing on the historical measurement data;
in a specific embodiment, the specific process of performing the preprocessing and the normalization operation includes:
step S11, reading the locally stored historical raw PMU/RTU measurement data into the memory of the detector and preprocessing it by area, measurement unit and scheduling unit, the preprocessing including conversion across different base powers and voltage levels, unification of data formats, and the like;
specifically, the preprocessing is normalized according to the following formula:
wherein ,yscalerDistribution in y for the measured values after normalizationminTo ymaxY ofminIs the minimum value of the normalized value, ymaxIs the normalized maximum value, xminIs the minimum value of the actual measured value, xmaxIs the maximum value of the actual measurement values, and x represents the actual measurement value of the normalization process.
Step S12, arranging the measurement data according to fixed rules in the preprocessing, and classifying the PMU data into voltage magnitude, voltage phase, current magnitude and current phase; the detector trains and learns on m measurement units of a certain area of the power system simultaneously, the sampling time-series length is T, the number of measured parameters per measurement unit is n, and the data dimensionality is d = m × n; the data set is expressed by the following formula:

D = {x1, x2, …, xT}

where D denotes the measurement data set, and xt, the value measured by the measurement units at time t, is real-valued with dimension m × n, i.e. xt ∈ R^(m×n).
In data mining, the detector reshapes the time-series data into time fragments, and the resulting data are expressed as:

Dmap = {x1,map, x2,map, …, x(T/T'),map}

where Dmap denotes the processed measurement data set; xvm denotes voltage-magnitude data and xca current-phase data within each fragment; T is the time-series length; T' is the time-fragment length; each processed real-valued sample xt,map has dimension T' × d.
Step S2: the time-series data are fed into the convolutional neural network in equal batches by time instant/time period, and the network captures their spatial features; training effectively extracts the spatial feature values of the convolutional neural network, which facilitates learning and predicting the time series in the subsequent steps;
in a specific embodiment, the convolutional neural network captures data space features according to the following formula:
wherein ,in order to input the measured data, the measuring device,the convolution kernel of the jth feature map of the first layer convolution layer, M is the selected matrix block value at each sliding in the convolution process, b represents the bias matrix parameter, down (. degree.) is the pooling function, βjRepresenting trainable scalars, Q2Showing the size of the pooling block in the pooling process, Flatten being a one-dimensional function, ReLU being a common modified linear activation function, w(n)The model is a parameter which needs to be adjusted continuously when the error of the neural network is reversely propagated;
the problem of the neural network of the multilayer perceptron is well solved after the convolutional neural network appears, the complexity of the network is effectively reduced by changing the full connection of the neurons of the multilayer perceptron into the local connection, the adjustment links of parameters in the learning process are reduced, and the problem of overfitting is solved; the network model of the convolutional neural network can be gradually adjusted according to actual training data, and most of the network model is composed of an input layer, an output layer, a plurality of convolutional layers, a pooling layer and a full connecting layer, as shown in fig. 2, after the input layer extracts corresponding data, data features of the convolutional layers are captured for multiple times and a plurality of feature maps are reserved, in order to reduce data dimension, the pooling layer performs pooling summary on the convolutional layer feature maps, after the data is repeatedly performed for multiple times, corresponding results are output to be one-dimensional feature vectors through the full connecting layer, and the output layer generally adopts a softmax function for multi-classification.
Step S3, outputting the full connection layer FC of the convolutional neural network by the long and short term memory neural network and capturing time characteristics;
in a specific embodiment, as shown in fig. 3, the time characteristic of capturing data by the neural network is memorized according to the following formula:
ft=σ(Wxfxt+Whfht-1+bf)
it=σ(Wxixt+Whiht-1+bi)
c̃t=tanh(Wxcxt+Whcht-1+bc)
ct=ft⊙ct-1+it⊙c̃t
ot=σ(Wxoxt+Whoht-1+bo)
ht=ot⊙tanh(ct)

Here the input gate comprises the newly input candidate cell state c̃t and the input-gate weight it, both computed from the current input value xt, the previous cell hidden-layer output ht-1, a weight matrix W and a bias b. c̃t determines the value of the input data at time t and generally uses tanh as its activation function; it, the input-gate weight coefficient, decides whether the data flow into the cell and uses a sigmoid function.
The forgetting coefficient ft of the forget gate has a similar composition. As an important part of the LSTM neural network, the forget gate controls the weight with which the previous cell state ct-1 flows in, adjusting the error gradient so as to avoid gradient explosion and vanishing gradients. The new cell state ct combines the previous cell value and the newly input cell value through these weights: when the input-gate weight is 0, no data can enter the cell, and when the forget-gate value is 0, the cell discards the historical sequence information.
The output gate consists of the output-gate weight ot and the hidden-layer output ht; likewise, when the output-gate weight is 0, no data can be output. When both the input gate and the output gate are closed, the data value is locked in the cell, neither increasing nor decreasing, and does not affect the current output.
Specifically, the long short-term memory neural network can effectively capture the temporal features of a large number of offline time-series measurement values and does not forget the trend of historical data as new data arrive. The network predicts the data value of the next time from an input historical time sequence, whose length is called the lookback. Therefore, after a suitable lookback value is set and the network is effectively trained, the long short-term memory neural network compares its predicted value with the real value, so that abnormal data in the time dimension can be identified in time and an early warning raised quickly.
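A single step of the gate equations above can be sketched in pure Python; scalar weights stand in for the patent's weight matrices W, and all numeric values are illustrative.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One step of the long short-term memory cell.
    W holds the (scalar, for illustration) weights of each gate:
    'xf'/'hf' feed the forget gate, 'xi'/'hi' the input gate,
    'xc'/'hc' the candidate state, 'xo'/'ho' the output gate."""
    f_t = sigmoid(W['xf'] * x_t + W['hf'] * h_prev + b['f'])   # forget gate
    i_t = sigmoid(W['xi'] * x_t + W['hi'] * h_prev + b['i'])   # input gate
    c_tilde = math.tanh(W['xc'] * x_t + W['hc'] * h_prev + b['c'])
    c_t = f_t * c_prev + i_t * c_tilde        # new cell state
    o_t = sigmoid(W['xo'] * x_t + W['ho'] * h_prev + b['o'])   # output gate
    h_t = o_t * math.tanh(c_t)                # hidden-layer output
    return h_t, c_t

W = {k: 0.5 for k in ('xf', 'hf', 'xi', 'hi', 'xc', 'hc', 'xo', 'ho')}
b = {k: 0.0 for k in ('f', 'i', 'c', 'o')}
h, c = 0.0, 0.0
for x in [0.2, 0.4, 0.1]:          # a short measurement time series
    h, c = lstm_step(x, h, c, W, b)
```

Closing the forget gate (ft near 0) makes the cell discard its historical state, matching the behavior described above.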
Step S4, a Dropout layer and a batch standardization layer are arranged in the convolutional neural network and the long-time and short-time memory neural network, and meanwhile, an attention mechanism is arranged on an output layer of the long-time and short-time memory neural network;
in a specific embodiment, the specific process of setting the Dropout layer and the batch normalization layer is as follows: when the detector trains on and fits the data, connections between neurons are randomly broken with a given probability, so that the detector does not over-learn local features specific to the training set;
a regularization normalization method forces the distribution of each neuron's input values into a standard normal distribution with mean 0 and variance 1. This effectively prevents the activation input values from drifting toward the saturation region of the nonlinear (activation) function as the network deepens or as the distribution gradually shifts during training; once neuron inputs enter the saturation region, model convergence becomes very difficult. Batch standardization therefore effectively addresses the difficulties of parameter tuning, training and convergence caused by increasing network depth, and the input-output relation of each neuron is adjusted according to the following formulas:
x̂(k)=(x(k)-E[x(k)])/√Var[x(k)]
y(k)=γ(k)x̂(k)+β(k)
wherein x̂(k) is the k-th normalized neural network input data value, x(k) is the original input data value, E[·] denotes the mean of the input data, Var[·] denotes the variance of the input data values, y(k) is the neural network output value corresponding to the input data, γ is a weight parameter learned during neural network training, and β is a weight bias learned during neural network training;
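The batch standardization formulas can be checked with a short pure-Python sketch; γ = 1 and β = 0 reproduce the plain mean-0/variance-1 standardization, and the eps term is the usual numerical-stability guard (an implementation detail, not part of the patent's formulas).

```python
import math

def batch_norm(xs, gamma=1.0, beta=0.0, eps=1e-8):
    """Normalize a batch to mean 0 / variance 1, then rescale by gamma, beta."""
    mean = sum(xs) / len(xs)                            # E[x]
    var = sum((x - mean) ** 2 for x in xs) / len(xs)    # Var[x]
    x_hat = [(x - mean) / math.sqrt(var + eps) for x in xs]
    return [gamma * x + beta for x in x_hat]

batch = [10.0, 12.0, 14.0, 16.0]   # raw neuron input values for one batch
y = batch_norm(batch)              # standardized values, mean ~0, variance ~1
```

In a trained network γ and β are learned per neuron, so the layer can undo the standardization where that helps the model.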
specifically, as shown in fig. 4, in order to reduce computer resource consumption during the learning of the detector and improve the detection accuracy of the detector, the attention mechanism is set in the output layer of the long and short term memory neural network according to the following formula:
hi=oitanh(ci)=f1(xi,hi-1)tanh(f2(xi,hi-1))=f(xi,hi-1)
eti=vTtanh(Whhi+Wsst-1+b)
αti=exp(eti)/Σkexp(etk)
st=Σiαtihi
wherein hi is the hidden-layer output value of the long short-term memory neural network at time i; oi is the weight coefficient value of the output gate of the long short-term memory neural network; ci is the current state of the long short-term memory neural network cells, determined by the current input data xi and the hidden-layer output hi-1 of the previous time; αti is the weight coefficient, i.e. the attention weight, that the current output st assigns to a data value, and the larger αti is, the greater the weight of the corresponding input data in the output; eti is the alignment score used for learning, determined by the transpose vT of a learnable parameter matrix, a coefficient matrix W and a bias matrix b, all of which converge during the learning process; finally, the output value st, the attention-based output vector, is obtained by assigning different weights to the cell outputs of the long short-term memory network at different times and summing them.
Specifically, the long short-term memory neural network needs four linear MLP layers for each sub-network unit when training on data, which consumes a large amount of bandwidth and computer resources; when learning large-scale power-system historical measurement data, the explosive growth of data dimension and sequence length makes model training difficult. A model based on the attention mechanism can focus the network's current output on the important hidden-layer outputs ht, which effectively reduces the hardware requirements during training; fig. 4 shows a long short-term memory neural network based on the attention mechanism.
The input time sequence of the long short-term memory neural network is expressed as D=[x1,x2,...,xT], where D represents the time-series data set and xi, a real-valued vector of dimension m, represents the sequence value at time i; the hidden-layer output of the memory neural network at time i is therefore expressed as:
hi=oitanh(ci)=f1(xi,hi-1)tanh(f2(xi,hi-1))=f(xi,hi-1)
therefore, over time T, the hidden-layer output of the long short-term memory neural network is H=[h1,h2,...,ht,...,hT]. The attention mechanism can then simply be represented by a vector α, and the state vector after attention focusing by s, which gives the following formulas:
α=softmax(wTtanh(H))
s=HαT
wherein wT is a learnable parameter. To show the attention-transfer process of the attention mechanism more clearly, the above can also be written as:
eti=vTtanh(Whhi+Wsst-1+b)
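The α = softmax(wT tanh(H)), s = HαT focusing step can be sketched with scalar hidden outputs standing in for the patent's vectors; the hidden values and the parameter w below are illustrative.

```python
import math

def softmax(zs):
    m = max(zs)                       # subtract max for numerical stability
    e = [math.exp(z - m) for z in zs]
    total = sum(e)
    return [v / total for v in e]

def attention(hidden_states, w):
    """Score each hidden output as e_i = w * tanh(h_i), softmax the scores
    into attention weights alpha, and return the focused state s."""
    scores = [w * math.tanh(h) for h in hidden_states]
    alpha = softmax(scores)
    s = sum(a * h for a, h in zip(alpha, hidden_states))  # weighted sum
    return alpha, s

H = [0.1, 0.9, 0.2, 0.8]              # hidden outputs h_1 .. h_T
alpha, s = attention(H, w=2.0)
# larger hidden outputs receive larger attention weights alpha_i,
# so s leans toward the important hidden-layer outputs
```

With vector-valued hidden states the scalar products become matrix products, but the weighting-and-summing structure is the same.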
Step S5, setting a support vector machine classifier at the output layer of the long short-term memory neural network and outputting the judgment result of attack detection, further improving the detection performance of the detector;
in a specific embodiment, a support vector machine classifier is set at the output layer of the long short-term memory neural network to decide whether a network attack on the measured data has occurred; the specific process is as follows:
The support vector machine is a powerful supervised learning model for sequence-data classification and regression analysis. Its basic principle is to find the optimal hyperplane, the one that maximally tolerates local disturbance noise and the limitations of the data set; this hyperplane makes the classification result most robust and generalizes best to the test set. For a linearly separable training set D in the two-dimensional plane, the classification function can be expressed as:
D={(x(1),y(1)),(x(2),y(2)),...,(x(m),y(m))}
f(x)=ωTx+b
where D is the data set to be classified, x(i) is a two-dimensional data value and y(i) its label, ωT is the normal vector, b is the displacement term, and the function f(x) defines the dividing hyperplane, i.e. a straight line drawn for classification in the two-dimensional case;
the distance of any data point in data D to the hyperplane f(x) is represented by the following formula:
γ=|ωTx+b|/||ω||
Suppose that yi(ωTxi+b)≥1. When one wants to find the maximum margin so as to minimize the effect of noise and local variations on classification, i.e. to maximize γ while continuously adjusting the parameters ω and b, the formula can be written as:
max(ω,b) 2/||ω||  s.t. yi(ωTxi+b)≥1, i=1,2,...,m
wherein yi is the label of the data; for a binary classification problem yi can be set to 0 and 1, or to -1 and 1, and the above formula takes yi as 1 or -1 and finds the maximum margin subject to the classification constraint.
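The maximum-margin objective can be approximated with a small hinge-loss sub-gradient trainer for the linear decision function f(x) = ωTx + b, using labels yi in {-1, +1}; the toy data, learning rate and regularization weight below are illustrative, not from the patent.

```python
def train_linear_svm(data, labels, lam=0.01, lr=0.1, epochs=200):
    """Minimize lam*||w||^2/2 + hinge loss by sub-gradient descent (2-D inputs)."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(data, labels):
            margin = y * (w[0] * x[0] + w[1] * x[1] + b)
            if margin < 1:                      # inside the margin: push out
                w = [wi + lr * (y * xi - lam * wi) for wi, xi in zip(w, x)]
                b += lr * y
            else:                               # outside: only shrink w
                w = [wi - lr * lam * wi for wi in w]
    return w, b

def predict(w, b, x):
    """Sign of the decision function f(x) = w.x + b."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else -1

# Linearly separable toy set: "normal" vs "attacked" measurement features.
X = [(2.0, 2.0), (3.0, 3.0), (-2.0, -2.0), (-3.0, -1.0)]
Y = [1, 1, -1, -1]
w, b = train_linear_svm(X, Y)
```

The regularization term lam*||w||^2 plays the role of the structural-risk minimization the text attributes to the SVM; production systems would use a tuned library implementation rather than this sketch.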
Accordingly, another aspect of the present invention also provides a computer device including a processor, a memory, and a network interface connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection.
It will be appreciated by those skilled in the art that the computer apparatus described above is merely the part of the structure relevant to the present application and does not limit the computer apparatus to which the present application is applied; a particular computer apparatus may comprise more or fewer components than those described above, combine some components, or arrange the components differently.
In one embodiment, such a computer device is provided, comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the following steps when executing the computer program:
acquiring historical measurement data of a data-driven power system and carrying out normalization pretreatment on the historical measurement data;
inputting the time sequence data into the convolutional neural network in equal batch according to time/time period and capturing spatial features;
feeding the output of the fully connected layer FC of the convolutional neural network into a long-time memory neural network and capturing temporal features;
a Dropout layer and a batch standardization layer are arranged in the convolutional neural network and the long-time and short-time memory neural network, and an attention mechanism is arranged on an output layer of the long-time and short-time memory neural network;
and an output layer of the long-time memory neural network is provided with a support vector machine classifier and outputs a judgment result of the attack detection.
Accordingly, a further aspect of the present invention also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of:
acquiring historical measurement data of a data-driven power system and carrying out normalization pretreatment on the historical measurement data;
inputting the time sequence data into the convolutional neural network in equal batch according to time/time period and capturing spatial features;
feeding the output of the fully connected layer FC of the convolutional neural network into a long-time memory neural network and capturing temporal features;
a Dropout layer and a batch standardization layer are arranged in the convolutional neural network and the long-time and short-time memory neural network, and an attention mechanism is arranged on an output layer of the long-time and short-time memory neural network;
and an output layer of the long-time memory neural network is provided with a support vector machine classifier and outputs a judgment result of the attack detection.
It is understood that more details of the steps involved in the computer device and the computer-readable storage medium can be found in the limitations of the foregoing methods and will not be described herein.
Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory, among others. Non-volatile memory can include read-only memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDRSDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), Rambus Direct RAM (RDRAM), direct bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM).
For more details, reference may be made to and combined with the preceding description of fig. 1-4, which will not be described in detail here.
The embodiment of the invention has the following beneficial effects:
the embodiment of the invention provides a method for detecting data attacks on a power system, computer equipment and a storage medium, in which anomalies in the measured data of a data-driven power system are identified using a spatio-temporal-correlation detector framework, so that faults in the information system are prevented in time from spreading to the energy system, thereby preserving the safety and stability of the power system to the maximum extent;
the convolutional neural network and the long short-term memory neural network are fused to perform data mining on the spatial and temporal correlations of the measured data, and experiments prove that the detection accuracy of the framework considering spatio-temporal correlation is higher than that of methods considering spatial or temporal correlation alone; meanwhile, an attention mechanism is added to the output layer of the long short-term memory neural network, effectively reducing the convergence time and computer resources needed during training; to further improve the detector's performance, a support vector machine classifier, which minimizes structural risk, is combined with the output layer of the long short-term memory neural network, further improving the detection accuracy of the framework; appropriate Dropout and batch standardization are added to the framework, effectively preventing the learner from falling into a local optimum and from overfitting during training;
the framework fuses the convolutional neural network, which effectively extracts the spatial features of the measured data, with the long short-term memory neural network, which is suited to feature learning on long sequence data, and uses the classification performance of the support vector machine classifier to improve the generalization and robustness of the detector;
compared with other learners, the detector provided by the invention has better learning performance in offline learning, namely higher attack detection accuracy, a reasonable detection time interval and better detection generalization; it can be applied to attack detection on power system measurement information and has good application prospects;
the attack detector is trained by utilizing the complex time-space characteristics of the measured data, so that the information safety of the power system can be effectively guaranteed, the detector can identify the false data, and effective and timely measures can be taken.
While the invention has been described in connection with what is presently considered to be the most practical and preferred embodiment, it is to be understood that the invention is not to be limited to the disclosed embodiment, but on the contrary, is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.
Claims (10)
1. A method for detecting data attack in a power system is characterized by comprising the following steps:
step S1, acquiring historical measurement data of the data-driven power system and carrying out normalization preprocessing on the historical measurement data;
step S2, inputting the time sequence data into the convolutional neural network in equal batch according to time/time period and capturing spatial features;
step S3, feeding the output of the fully connected layer FC of the convolutional neural network into the long and short term memory neural network and capturing temporal features;
step S4, a Dropout layer and a batch standardization layer are arranged in the convolutional neural network and the long-time and short-time memory neural network, and meanwhile, an attention mechanism is arranged on an output layer of the long-time and short-time memory neural network;
and step S5, setting a support vector machine classifier and outputting a judgment result of the attack detection by an output layer of the long-time memory neural network.
2. The method according to claim 1, wherein in step S1, the specific process of performing the preprocessing and normalization operation includes:
step S11, reading the historical original PMU/RTU measurement data stored in the local into the memory of the detector, and preprocessing according to different areas, different measurement units and different scheduling units;
step S12, classifying the PMU data into voltage amplitude, voltage phase, current amplitude and current phase; the detector trains and learns on m measurement units in a certain area of the power system at the same time, where the length of the sampling time sequence is T, the number of measurement parameters of one measurement unit is n, and the dimensionality of the data is d = m × n; the data set is expressed by the following formula:
D=[x1,x2,...,xt,...,xT]
wherein D represents the metrology data set, and xt is the value measured by the measurement units at time t, which is real-valued with data dimension m × n;
for data mining, the detector reshapes the time-segment data into time-segment values, and the resulting data is represented as:
3. The method of claim 2, wherein in step S11, the preprocessing is normalized according to the following formula:
yscaler=ymin+(x-xmin)(ymax-ymin)/(xmax-xmin)
wherein yscaler is the measured value after normalization, distributed in [ymin, ymax]; ymin is the minimum of the normalized range, ymax is the maximum of the normalized range, xmin is the minimum of the actual measured values, xmax is the maximum of the actual measured values, and x represents the actual measured value being normalized.
4. The method of claim 3, wherein in step S2, the convolutional neural network is performed to grab the data space features according to the following formula:
wherein x is the input measured data, k is the convolution kernel of the jth feature map of the lth convolutional layer, M is the matrix block selected at each slide during the convolution process, b represents a bias matrix parameter, down(·) is the pooling function, βj represents a trainable scalar, Q2 denotes the size of the pooling block in the pooling process, Flatten is the one-dimensionalization function, ReLU is the common rectified linear activation function, and w(n) is a parameter that is continuously adjusted during back-propagation of the neural network error.
5. The method of claim 4, wherein in step S3, the long short-term memory neural network captures the temporal features of the data according to the following formulas:
ft=σ(Wxfxt+Whfht-1+bf)
it=σ(Wxixt+Whiht-1+bi)
c̃t=tanh(Wxcxt+Whcht-1+bc)
ct=ft⊙ct-1+it⊙c̃t
ot=σ(Wxoxt+Whoht-1+bo)
ht=ot⊙tanh(ct)
wherein ft is the forgetting coefficient of the forget gate, σ is the sigmoid activation function, W is a weight matrix, xt is the input value at the current time, ht-1 is the hidden-layer output value of the cell at the previous time, b is a bias matrix, it is the weight coefficient of the input gate, c̃t is the state value of the newly input cell, tanh is the activation function, ct is the updated current cell state, ct-1 is the cell state at the previous time, ot is the weight coefficient value of the output gate, and ht is the output value of the hidden layer.
6. The method as claimed in claim 5, wherein in step S4, the specific procedure of setting Dropout layer, batch normalization layer is to randomly break the connection between neurons according to probability when the detector trains and fits the data so that the detector does not over-learn the local features specific to the training set;
the distribution of the input value of each neuron is forcedly processed into standard normal distribution with the mean value of 0 and the variance of 1 by a regularization normalization method, and the input and output feedback of each neuron is adjusted according to the following formula:
x̂(k)=(x(k)-E[x(k)])/√Var[x(k)]
y(k)=γ(k)x̂(k)+β(k)
wherein x̂(k) is the k-th normalized neural network input data value, x(k) is the original input data value, E[·] denotes the mean of the input data, Var[·] denotes the variance of the input data values, y(k) is the neural network output value corresponding to the input data, γ is a weight parameter learned during neural network training, and β is a weight bias learned during neural network training.
7. The method of claim 6, wherein in step S4, the output layer of the long short-term memory neural network sets the attention mechanism according to the following formulas:
hi=oitanh(ci)=f1(xi,hi-1)tanh(f2(xi,hi-1))=f(xi,hi-1)
eti=vTtanh(Whhi+Wsst-1+b)
αti=exp(eti)/Σkexp(etk)
st=Σiαtihi
wherein hi is the hidden-layer output value of the long short-term memory neural network at time i, oi is the weight coefficient value of the output gate of the long short-term memory neural network, ci is the current state of the long short-term memory neural network cells, αti is the weight coefficient assigned by the current output st to each data value, and st is the attention-based output vector value.
8. The method as claimed in claim 7, wherein in step S5, the specific process of setting the support vector machine classifier at the output layer of the long short-term memory neural network is that the output layer sets a support vector machine classifier to decide whether a network attack on the measured data has occurred, as shown in the following formulas:
D={(x(1),y(1)),(x(2),y(2)),...,(x(m),y(m))}
f(x)=ωTx+b
9. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the steps of the method of any of claims 1 to 8 are implemented when the computer program is executed by the processor.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911097514.2A CN111027058B (en) | 2019-11-12 | 2019-11-12 | Method for detecting data attack of power system, computer equipment and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911097514.2A CN111027058B (en) | 2019-11-12 | 2019-11-12 | Method for detecting data attack of power system, computer equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111027058A true CN111027058A (en) | 2020-04-17 |
CN111027058B CN111027058B (en) | 2023-10-27 |
Family
ID=70201368
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911097514.2A Active CN111027058B (en) | 2019-11-12 | 2019-11-12 | Method for detecting data attack of power system, computer equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111027058B (en) |
Cited By (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111600393A (en) * | 2020-06-18 | 2020-08-28 | 国网四川省电力公司电力科学研究院 | Method for reducing voltage measurement data of transformer substation in different voltage classes and identification method |
CN111651337A (en) * | 2020-05-07 | 2020-09-11 | 哈尔滨工业大学 | SRAM memory space service fault classification failure detection method |
CN111669384A (en) * | 2020-05-29 | 2020-09-15 | 重庆理工大学 | Malicious flow detection method integrating deep neural network and hierarchical attention mechanism |
CN111669385A (en) * | 2020-05-29 | 2020-09-15 | 重庆理工大学 | Malicious traffic monitoring system fusing deep neural network and hierarchical attention mechanism |
CN111880998A (en) * | 2020-07-30 | 2020-11-03 | 平安科技(深圳)有限公司 | Service system anomaly detection method and device, computer equipment and storage medium |
CN112072708A (en) * | 2020-07-27 | 2020-12-11 | 中国电力科学研究院有限公司 | Method for improving wind power consumption level of electric power system |
CN112115184A (en) * | 2020-09-18 | 2020-12-22 | 平安科技(深圳)有限公司 | Time series data detection method and device, computer equipment and storage medium |
CN112115999A (en) * | 2020-09-15 | 2020-12-22 | 燕山大学 | Wind turbine generator fault diagnosis method of space-time multi-scale neural network |
CN112383518A (en) * | 2020-10-30 | 2021-02-19 | 广东工业大学 | Botnet detection method and device |
CN112464848A (en) * | 2020-12-07 | 2021-03-09 | 国网四川省电力公司电力科学研究院 | Information flow abnormal data monitoring method and device based on density space clustering |
CN112906673A (en) * | 2021-04-09 | 2021-06-04 | 河北工业大学 | Lower limb movement intention prediction method based on attention mechanism |
CN112926646A (en) * | 2021-02-22 | 2021-06-08 | 上海壁仞智能科技有限公司 | Data batch standardization method, computing equipment and computer readable storage medium |
CN112929381A (en) * | 2021-02-26 | 2021-06-08 | 南方电网科学研究院有限责任公司 | Detection method, device and storage medium for false injection data |
CN113177366A (en) * | 2021-05-28 | 2021-07-27 | 华北电力大学 | Comprehensive energy system planning method and device and terminal equipment |
WO2021212377A1 (en) * | 2020-04-22 | 2021-10-28 | 深圳市欢太数字科技有限公司 | Method and apparatus for determining risky attribute of user data, and electronic device |
CN113709089A (en) * | 2020-09-03 | 2021-11-26 | 南宁玄鸟网络科技有限公司 | System and method for filtering illegal data through Internet of things |
CN113765880A (en) * | 2021-07-01 | 2021-12-07 | 电子科技大学 | Power system network attack detection method based on space-time correlation |
CN115051834A (en) * | 2022-05-11 | 2022-09-13 | 华北电力大学 | Novel power system APT attack detection method based on STSA-transformer algorithm |
CN115086029A (en) * | 2022-06-15 | 2022-09-20 | 河海大学 | Network intrusion detection method based on two-channel space-time feature fusion |
CN116915513A (en) * | 2023-09-14 | 2023-10-20 | 国网江苏省电力有限公司常州供电分公司 | False data injection attack detection method and device |
CN117527450A (en) * | 2024-01-05 | 2024-02-06 | 南京邮电大学 | Method, system and storage medium for detecting false data injection attack of power distribution network |
CN118694613A (en) * | 2024-08-26 | 2024-09-24 | 中孚安全技术有限公司 | Multi-stage attack detection method, system and medium based on association engine detection |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107961007A (en) * | 2018-01-05 | 2018-04-27 | 重庆邮电大学 | A kind of electroencephalogramrecognition recognition method of combination convolutional neural networks and long memory network in short-term |
US20180288086A1 (en) * | 2017-04-03 | 2018-10-04 | Royal Bank Of Canada | Systems and methods for cyberbot network detection |
CN109784480A (en) * | 2019-01-17 | 2019-05-21 | 武汉大学 | A kind of power system state estimation method based on convolutional neural networks |
CN110213244A (en) * | 2019-05-15 | 2019-09-06 | 杭州电子科技大学 | A kind of network inbreak detection method based on space-time characteristic fusion |
- 2019-11-12: application CN201911097514.2A filed in China (CN); patent granted as CN111027058B (Active)
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180288086A1 (en) * | 2017-04-03 | 2018-10-04 | Royal Bank Of Canada | Systems and methods for cyberbot network detection |
CN107961007A (en) * | 2018-01-05 | 2018-04-27 | 重庆邮电大学 | A kind of electroencephalogramrecognition recognition method of combination convolutional neural networks and long memory network in short-term |
CN109784480A (en) * | 2019-01-17 | 2019-05-21 | 武汉大学 | A kind of power system state estimation method based on convolutional neural networks |
CN110213244A (en) * | 2019-05-15 | 2019-09-06 | 杭州电子科技大学 | A kind of network inbreak detection method based on space-time characteristic fusion |
Non-Patent Citations (1)
Title |
---|
YANG Yongjiao et al.: "A traffic anomaly detection algorithm for smart grid data servers based on a deep Encoder-Decoder neural network" * |
Cited By (31)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021212377A1 (en) * | 2020-04-22 | 2021-10-28 | 深圳市欢太数字科技有限公司 | Method and apparatus for determining risky attribute of user data, and electronic device |
CN111651337A (en) * | 2020-05-07 | 2020-09-11 | 哈尔滨工业大学 | SRAM memory space service fault classification failure detection method |
CN111651337B (en) * | 2020-05-07 | 2022-07-12 | 哈尔滨工业大学 | SRAM memory space service fault classification failure detection method |
CN111669384A (en) * | 2020-05-29 | 2020-09-15 | 重庆理工大学 | Malicious flow detection method integrating deep neural network and hierarchical attention mechanism |
CN111669385A (en) * | 2020-05-29 | 2020-09-15 | 重庆理工大学 | Malicious traffic monitoring system fusing deep neural network and hierarchical attention mechanism |
CN111669384B (en) * | 2020-05-29 | 2021-11-23 | 重庆理工大学 | Malicious flow detection method integrating deep neural network and hierarchical attention mechanism |
CN111600393A (en) * | 2020-06-18 | 2020-08-28 | 国网四川省电力公司电力科学研究院 | Method for reducing voltage measurement data of transformer substation in different voltage classes and identification method |
CN112072708A (en) * | 2020-07-27 | 2020-12-11 | 中国电力科学研究院有限公司 | Method for improving wind power consumption level of electric power system |
WO2021139251A1 (en) * | 2020-07-30 | 2021-07-15 | 平安科技(深圳)有限公司 | Server system anomaly detection method and apparatus, computer device, and storage medium |
CN111880998A (en) * | 2020-07-30 | 2020-11-03 | 平安科技(深圳)有限公司 | Service system anomaly detection method and device, computer equipment and storage medium |
CN113709089A (en) * | 2020-09-03 | 2021-11-26 | 南宁玄鸟网络科技有限公司 | System and method for filtering illegal data through Internet of things |
CN112115999A (en) * | 2020-09-15 | 2020-12-22 | 燕山大学 | Wind turbine generator fault diagnosis method of space-time multi-scale neural network |
WO2021169361A1 (en) * | 2020-09-18 | 2021-09-02 | 平安科技(深圳)有限公司 | Method and apparatus for detecting time series data, and computer device and storage medium |
CN112115184A (en) * | 2020-09-18 | 2020-12-22 | 平安科技(深圳)有限公司 | Time series data detection method and device, computer equipment and storage medium |
CN112383518A (en) * | 2020-10-30 | 2021-02-19 | 广东工业大学 | Botnet detection method and device |
CN112464848B (en) * | 2020-12-07 | 2023-04-07 | 国网四川省电力公司电力科学研究院 | Information flow abnormal data monitoring method and device based on density space clustering |
CN112464848A (en) * | 2020-12-07 | 2021-03-09 | 国网四川省电力公司电力科学研究院 | Information flow abnormal data monitoring method and device based on density space clustering |
CN112926646A (en) * | 2021-02-22 | 2021-06-08 | 上海壁仞智能科技有限公司 | Data batch standardization method, computing equipment and computer readable storage medium |
CN112926646B (en) * | 2021-02-22 | 2023-07-04 | 上海壁仞智能科技有限公司 | Data batch normalization method, computing device, and computer-readable storage medium |
CN112929381A (en) * | 2021-02-26 | 2021-06-08 | 南方电网科学研究院有限责任公司 | Detection method, device and storage medium for false injection data |
CN112906673A (en) * | 2021-04-09 | 2021-06-04 | 河北工业大学 | Lower limb movement intention prediction method based on attention mechanism |
CN113177366A (en) * | 2021-05-28 | 2021-07-27 | 华北电力大学 | Comprehensive energy system planning method and device and terminal equipment |
CN113177366B (en) * | 2021-05-28 | 2024-02-02 | 华北电力大学 | Comprehensive energy system planning method and device and terminal equipment |
CN113765880A (en) * | 2021-07-01 | 2021-12-07 | 电子科技大学 | Power system network attack detection method based on space-time correlation |
CN115051834A (en) * | 2022-05-11 | 2022-09-13 | 华北电力大学 | Novel power system APT attack detection method based on STSA-transformer algorithm |
CN115051834B (en) * | 2022-05-11 | 2023-05-16 | 华北电力大学 | Novel power system APT attack detection method based on STSA-transformer algorithm |
CN115086029A (en) * | 2022-06-15 | 2022-09-20 | 河海大学 | Network intrusion detection method based on two-channel space-time feature fusion |
CN116915513A (en) * | 2023-09-14 | 2023-10-20 | 国网江苏省电力有限公司常州供电分公司 | False data injection attack detection method and device |
CN116915513B (en) * | 2023-09-14 | 2023-12-01 | 国网江苏省电力有限公司常州供电分公司 | False data injection attack detection method and device |
CN117527450A (en) * | 2024-01-05 | 2024-02-06 | 南京邮电大学 | Method, system and storage medium for detecting false data injection attack of power distribution network |
CN118694613A (en) * | 2024-08-26 | 2024-09-24 | 中孚安全技术有限公司 | Multi-stage attack detection method, system and medium based on association engine detection |
Also Published As
Publication number | Publication date |
---|---|
CN111027058B (en) | 2023-10-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111027058B (en) | Method for detecting data attack of power system, computer equipment and storage medium | |
Shi et al. | Artificial intelligence techniques for stability analysis and control in smart grids: Methodologies, applications, challenges and future directions | |
Zhang et al. | Review on deep learning applications in frequency analysis and control of modern power system | |
Shen et al. | Wind speed prediction of unmanned sailboat based on CNN and LSTM hybrid neural network | |
Li et al. | An intelligent transient stability assessment framework with continual learning ability | |
CN116245033B (en) | Artificial intelligent driven power system analysis method and intelligent software platform | |
Zhu et al. | Data/model jointly driven high-quality case generation for power system dynamic stability assessment | |
CN115221233A (en) | Transformer substation multi-class live detection data anomaly detection method based on deep learning | |
CN115964503B (en) | Safety risk prediction method and system based on community equipment facilities | |
Wu et al. | Identification and correction of abnormal measurement data in power system based on graph convolutional network and gated recurrent unit | |
Wang et al. | Stealthy attack detection method based on Multi-feature long short-term memory prediction model | |
Wang et al. | A locational false data injection attack detection method in smart grid based on adversarial variational autoencoders | |
Hu et al. | Training A Dynamic Neural Network to Detect False Data Injection Attacks Under Multiple Unforeseen Operating Conditions | |
Adhikari et al. | Real-Time Short-Term Voltage Stability Assessment using Temporal Convolutional Neural Network | |
Bento et al. | An Overview Of The Latest Machine Learning Trends In Short-Term Load Forecasting | |
Yuanyuan et al. | Artificial intelligence and learning techniques in intelligent fault diagnosis | |
Massaoudi et al. | FLACON: A Deep Federated Transfer Learning-Enabled Transient Stability Assessment During Symmetrical and Asymmetrical Grid Faults | |
Pei et al. | Data-driven measurement tampering detection considering spatial-temporal correlations | |
Wang et al. | A svm transformer fault diagnosis method based on improved bp neural network and multi-parameter optimization | |
Li et al. | Towards Practical Physics-Informed ML Design and Evaluation for Power Grid | |
Li et al. | Research on Load State Identification Method Based on CNN-Transformer | |
Yang et al. | Deep learning-based hybrid detection model for false data injection attacks in smart grid | |
Cao et al. | CLAD: A Deep Learning Framework for Continually Learning in Anomaly Detection | |
Jiang et al. | Photovoltaic Hot Spots Detection Based on Kernel Entropy Component Analysis and Information Gain | |
Douglas | Machine Learning Strategies for Power Systems |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||