CN110365647B - False data injection attack detection method based on PCA and BP neural network - Google Patents

False data injection attack detection method based on PCA and BP neural network

Info

Publication number
CN110365647B
CN110365647B (application CN201910512476.6A)
Authority
CN
China
Prior art keywords
data
neural network
label
training
output
Prior art date
Legal status
Active
Application number
CN201910512476.6A
Other languages
Chinese (zh)
Other versions
CN110365647A (en)
Inventor
刘俊辉
刘义
杨超
谢胜利
Current Assignee
Guangdong University of Technology
Original Assignee
Guangdong University of Technology
Priority date
Filing date
Publication date
Application filed by Guangdong University of Technology filed Critical Guangdong University of Technology
Priority to CN201910512476.6A priority Critical patent/CN110365647B/en
Publication of CN110365647A publication Critical patent/CN110365647A/en
Application granted granted Critical
Publication of CN110365647B publication Critical patent/CN110365647B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00 Network architectures or network communication protocols for network security
    • H04L63/14 Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
    • H04L63/1408 Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic by monitoring network traffic
    • H04L63/1416 Event detection, e.g. attack signature detection
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00 Network architectures or network communication protocols for network security
    • H04L63/14 Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
    • H04L63/1441 Countermeasures against malicious traffic
    • H04L63/1466 Active attacks involving interception, injection, modification, spoofing of data unit addresses, e.g. hijacking, packet injection or TCP sequence number attacks
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00 Reducing energy consumption in communication networks
    • Y02D30/50 Reducing energy consumption in communication networks in wire-line communication networks, e.g. low power modes or reduced link rate

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Hardware Design (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention discloses a false data injection attack detection method based on PCA and a BP neural network. Principal Component Analysis (PCA) is first used to reduce the dimensionality of the measurement data. The reduced-dimension data are then used as training samples for the BP neural network: false data are added to a portion of the samples, which are marked as attack samples, and the network is trained. The trained model can effectively detect whether a false data injection attack exists. By using PCA to reduce the dimensionality of the extracted data, the invention improves detection accuracy and shortens training time, and the BP neural network effectively detects the attack values of false data injection attacks.

Description

False data injection attack detection method based on PCA and BP neural network
Technical Field
The invention relates to the technical field of power system safety, in particular to a false data injection attack detection method based on PCA and BP neural networks.
Background
The aging of the power industry, coupled with increased demand from industrial and residential customers, is a major motivation for policy makers to draw up roadmaps for the next generation of power systems, known as smart grids. In a smart grid the overall monitoring cost decreases, but at the same time the risk of cyber attacks may increase. Recently a new type of attack, called False Data Injection (FDI), has been introduced; it cannot be detected by traditional Bad Data Detection (BDD) based on state estimation and hides any deviation it causes in the estimated state values, which seriously damages the normal operation of the grid. For example: in 2010, the first cyber attack in the true sense occurred when the Stuxnet virus invaded an Iranian nuclear facility, paralyzing equipment so that it could not operate normally; in 2015, power systems in parts of Ukraine suffered a cyber attack that caused large-area blackouts and greatly affected people's lives. Today's network security therefore affects not only the information security of individuals but also the security of public infrastructure and even national security. Network security has accordingly received great attention from researchers, and detection of and defense against FDI attacks are necessary.
Existing power system state estimation is generally based on a DC state estimation model. Specifically, for a power system with m measurements and n+1 nodes, the DC state estimation model is expressed as:
z=Hx+e (1)
where z = (z1, z2, ..., zm)^T is the vector of sensor measurements; x = (x1, x2, ..., xn)^T is the state vector; H is the m × n Jacobian matrix; and e = (e1, e2, ..., em)^T is the random measurement error, assumed to be zero-mean with covariance matrix Σe. The state estimate can then be obtained from equation (1) by Weighted Least Squares (WLS):
x̂ = (H^T·Σe^(-1)·H)^(-1)·H^T·Σe^(-1)·z (2)
To ensure the accuracy of the state estimation result, bad data that appear randomly in the measurements must be detected during state estimation, and the largest normalized residual (LNR) test is the classic method for bad data detection. Suppose a hacker injects fraudulent data into the measurements by constructing an attack vector a, which causes an error vector c in the estimated state. The residual can then be expressed by equation (3):
ra = ||za − H·x̂a|| = ||z − H·x̂ + (a − H·c)|| ≤ r + τa (3)
where ra and r denote the residual values with and without fraudulent data, respectively, and τa = ||a − H·c|| denotes the residual increment caused by the fraudulent data. Clearly, when a = Hc, formula (3) satisfies
τa = ||a − H·c|| = 0,
i.e. τa equals 0, so the residual value used by LNR detection is unaffected by the fraudulent data, and traditional bad data detection and identification are effectively bypassed. Thus, once a hacker knows the network parameters and topology of the power system and can manipulate specific measurement values, an attack vector satisfying τa = 0 can be constructed, and the power system state estimation result can be manipulated at will.
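To make the argument concrete, the following NumPy sketch (not part of the patent; the small random Jacobian, the noise level, and the identity error covariance are assumptions chosen only for illustration) shows that an attack of the form a = Hc leaves the WLS residual essentially unchanged:

```python
# Minimal sketch: an attack a = Hc does not change the WLS residual.
# The Jacobian H, noise level and Sigma_e = I below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
m, n = 8, 4                        # m measurements, n state variables
H = rng.normal(size=(m, n))        # hypothetical DC-model Jacobian
x_true = rng.normal(size=n)
e = 0.01 * rng.normal(size=m)      # zero-mean measurement noise
z = H @ x_true + e                 # equation (1): z = Hx + e

def wls_estimate(z, H):
    # WLS with identity error covariance: x_hat = (H^T H)^-1 H^T z
    return np.linalg.solve(H.T @ H, H.T @ z)

def residual(z, H):
    return np.linalg.norm(z - H @ wls_estimate(z, H))

c = rng.normal(size=n)             # arbitrary non-zero state error vector
a = H @ c                          # attack vector satisfying a = Hc
print("residual without attack:", residual(z, H))
print("residual with a = Hc   :", residual(z + a, H))   # numerically identical
```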
Principal Component Analysis (PCA) is a well-known method that, mathematically, reduces n-dimensional data to r-dimensional data (r ≤ n). The data mapped from the n-dimensional space down to the r-dimensional space have two important properties: 1) the different dimensions of the data are uncorrelated; 2) the dimensions are ordered by the importance of the information they carry. In the present invention, PCA is used to reduce the n-dimensional measurement data z = (z1, z2, ..., zn) to the r-dimensional data z' = (z'1, z'2, ..., z'r).
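As a brief, non-authoritative illustration of those two properties (the random data below are an assumption; the patent does not prescribe a particular implementation), PCA via the covariance eigendecomposition yields uncorrelated components ordered by explained variance:

```python
# PCA sketch: project n-dimensional data onto r principal components.
import numpy as np

def pca_reduce(z, r):
    """z: (samples, n) data matrix; returns (samples, r) projection."""
    z_centered = z - z.mean(axis=0)
    cov = np.cov(z_centered, rowvar=False)          # n x n covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)          # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1]               # sort by importance (variance)
    components = eigvecs[:, order[:r]]              # top-r principal directions
    return z_centered @ components

z = np.random.default_rng(0).normal(size=(1000, 10))   # illustrative data
z_reduced = pca_reduce(z, r=2)
# the reduced dimensions are (numerically) uncorrelated:
print(np.round(np.corrcoef(z_reduced, rowvar=False), 6))
```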
Disclosure of Invention
The invention provides a false data injection attack detection method based on PCA and a BP neural network, which can effectively detect whether a false data injection attack exists in a smart grid.
In order to solve the technical problems, the technical scheme of the invention is as follows:
a false data injection attack detection method based on PCA and BP neural network comprises the following steps:
S1: obtain the measurement values z = (z1, z2, ..., zm)^T of all sensors in the smart grid through the control center of the smart grid, where z is an m × n matrix and z1, z2, ..., zm are each 1 × n matrices;
S2: to avoid the overfitting and data-sparsity problems caused by an excessively high dimension, apply PCA dimensionality reduction to the sensor measurements z = (z1, z2, ..., zm)^T to obtain the reduced-dimension feature data set z' = (z'1, z'2, ..., z'm)^T, where z' is an m × r matrix, r ≤ n, and z'1, z'2, ..., z'm are each 1 × r matrices;
S3: inject false data into a portion of the data in the reduced-dimension feature data set, attach the label false_label = -1 to each group of data into which false data have been injected, and attach the label label = 1 to each of the remaining groups of data in the reduced-dimension feature data set;
S4: define the combination of each group of data in the reduced-dimension feature data set and its label as a sample, shuffle all samples, randomly select a portion of the samples as the test set, and use the remaining samples as the training set;
S5: train a BP neural network with the training set;
S6: use the trained BP neural network to predict labels for the test set, compare the predicted labels with the test-set labels, count the number of correctly classified labels, and divide it by the total number of labels to obtain the classification accuracy;
S7: detect false data with the trained BP neural network.
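Steps S1-S7 can be outlined end to end with the following sketch (a non-authoritative illustration: the function and parameter names, the use of scikit-learn, and the way the attack vector is applied to the reduced features are assumptions, and the MLPClassifier merely stands in for the hand-derived BP network described below):

```python
# Sketch of the S1-S7 detection pipeline (illustrative, not the patent's code).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

def detect_fdi(z, attack, n_attacked, r=2, seed=0):
    """z: m x n measurement matrix (S1); attack: length-r false-data vector added
    to the reduced features of the first n_attacked samples (S3)."""
    # S2: PCA dimensionality reduction to r features
    z_reduced = PCA(n_components=r).fit_transform(z)

    # S3: inject false data and label attacked samples -1, normal samples +1
    labels = np.ones(len(z_reduced))
    z_reduced[:n_attacked] += attack
    labels[:n_attacked] = -1

    # S4: shuffle and split into training and test sets
    X_train, X_test, y_train, y_test = train_test_split(
        z_reduced, labels, test_size=0.2, shuffle=True, random_state=seed)

    # S5: train a BP (back-propagation) neural network classifier
    clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=10000, random_state=seed)
    clf.fit(X_train, y_train)

    # S6: classification accuracy on the held-out test set
    accuracy = clf.score(X_test, y_test)
    # S7: clf can now be applied to new samples to detect false data
    return clf, accuracy
```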
Preferably, the false data in step S3 are injected as follows:
za = z' + a
a = Hc
where za is the feature data set after false data injection, a is an n × 1 attack vector, c is an arbitrary non-zero constant vector, and H is an m × n Jacobian matrix.
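The construction of such an attack vector can be sketched as follows (illustrative only: the random H and c are assumptions, and the attack is shown added to a measurement vector so that the dimensions line up):

```python
# Sketch of building and injecting an attack vector a = Hc.
import numpy as np

rng = np.random.default_rng(1)
m, n = 304, 118                 # illustrative measurement / state dimensions
H = rng.normal(size=(m, n))     # assumed Jacobian of the DC model
c = rng.normal(size=n)          # arbitrary non-zero vector
a = H @ c                       # attack vector a = Hc

z = rng.normal(size=m)          # one measurement sample (placeholder values)
z_a = z + a                     # injected sample, to be labeled false_label = -1
```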
Preferably, training the BP neural network with the training set in step S5 specifically includes:
the data set z' is an r × m matrix; the i-th group of feature vectors in the data set z' is denoted xi, and the label corresponding to the i-th group of feature vectors is denoted yi, with yi ∈ {-1, 1}; the pairs (xi, yi) are taken as the training sample set, and the error between the output of the BP neural network and the expected output is propagated back to the weights W and thresholds θ of each neuron;
for the sample pairs (X, Y), X = [x1, x2, ..., xm]^T and Y = [y1, y2, ..., ym]^T, and the hidden layer neurons of the BP neural network are O = [o1, o2, ..., ol]; the network weights w1 between the input layer and hidden layer neurons and the network weights w2 between the hidden layer and output layer neurons are, respectively:
w1 = [w1_ij] (the weight from input node i to hidden neuron j),   w2 = [w2_jk] (the weight from hidden neuron j to output neuron k);
the thresholds θ1 of the hidden layer neurons and the thresholds θ2 of the output layer neurons are, respectively:
θ1 = [θ1_1, θ1_2, ..., θ1_l],   θ2 = [θ2_k] (one threshold per output neuron);
the output of the hidden layer neurons is then:
oj = f(net1_j),   net1_j = Σi w1_ij·xi − θ1_j,   j = 1, 2, ..., l,
where f(·) is the transfer function of the hidden layer;
the output of the output layer neurons is:
ŷk = g(net2_k),   net2_k = Σj w2_jk·oj − θ2_k,
where g(·) is the transfer function of the output layer;
the error between the output of the BP neural network and the expected output is:
E = (1/2)·Σk (yk − ŷk)²;
the partial derivative of the error E with respect to the weight w2_jk between the hidden layer and output layer neurons is:
∂E/∂w2_jk = −(yk − ŷk)·g'(net2_k)·oj,
where net2_k = Σj w2_jk·oj − θ2_k; the resulting weight adjustment formulas are:
Δw1_ij = −η1·∂E/∂w1_ij,   Δw2_jk = −η2·∂E/∂w2_jk,
where η1 and η2 are the learning steps of the hidden layer and the output layer, respectively;
the partial derivative of the error E with respect to the threshold θ2_k of the output layer neurons is:
∂E/∂θ2_k = (yk − ŷk)·g'(net2_k);
the partial derivative of the error E with respect to the threshold θ1_j of the hidden layer neurons is:
∂E/∂θ1_j = Σk (yk − ŷk)·g'(net2_k)·w2_jk·f'(net1_j);
the threshold adjustment formulas are therefore:
Δθ1_j = −η1·∂E/∂θ1_j,   Δθ2_k = −η2·∂E/∂θ2_k.
The weights W and thresholds θ are obtained by iterating the above update formulas.
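For concreteness, the training procedure above can be sketched in plain NumPy (an illustration, not the patent's code: the sigmoid/tanh transfer functions, batch updates, and random initialization scale are assumptions, since the patent does not fix these choices):

```python
# NumPy sketch of the back-propagation training described above.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_bp(X, y, n_hidden=20, eta1=0.01, eta2=0.01,
             target_error=1e-3, max_epochs=10000, seed=0):
    """X: (samples, r) reduced features; y: (samples,) labels in {-1, +1}."""
    rng = np.random.default_rng(seed)
    r = X.shape[1]
    w1 = rng.normal(scale=0.1, size=(r, n_hidden))   # input -> hidden weights
    th1 = np.zeros(n_hidden)                         # hidden thresholds theta1
    w2 = rng.normal(scale=0.1, size=n_hidden)        # hidden -> output weights
    th2 = 0.0                                        # output threshold theta2

    for _ in range(max_epochs):
        # forward pass: o = f(X w1 - theta1), y_hat = g(o w2 - theta2)
        o = sigmoid(X @ w1 - th1)
        y_hat = np.tanh(o @ w2 - th2)
        err = y - y_hat
        E = 0.5 * np.sum(err ** 2)                   # squared-error loss
        if E < target_error:
            break

        # backward pass: delta2 = -(dE/dnet2), delta1 = -(dE/dnet1)
        delta2 = err * (1.0 - y_hat ** 2)
        delta1 = np.outer(delta2, w2) * o * (1.0 - o)

        # gradient-descent updates with per-layer learning steps eta1, eta2
        w2 += eta2 * (o.T @ delta2)           # w2 <- w2 - eta2 * dE/dw2
        th2 -= eta2 * np.sum(delta2)          # theta2 <- theta2 - eta2 * dE/dtheta2
        w1 += eta1 * (X.T @ delta1)           # w1 <- w1 - eta1 * dE/dw1
        th1 -= eta1 * np.sum(delta1, axis=0)  # theta1 <- theta1 - eta1 * dE/dtheta1

    return w1, th1, w2, th2

def predict_bp(X, w1, th1, w2, th2):
    """Return predicted labels in {-1, +1}."""
    o = sigmoid(X @ w1 - th1)
    return np.where(np.tanh(o @ w2 - th2) >= 0.0, 1, -1)
```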
Preferably, the BP neural network has 20 hidden layer neurons, the learning step is set to 0.01, the training target error is 1e-3, and the number of training rounds is 10000.
Preferably, the classification accuracy is obtained in step S5 according to the following formula:
Accuracy = Right_Predict / testdata_num
where Accuracy is the classification accuracy, Right_Predict is the number of correctly classified labels, and testdata_num is the total number of labels.
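In code this is simply (a trivial sketch; the variable names mirror the formula):

```python
import numpy as np

def classification_accuracy(predicted_labels, true_labels):
    right_predict = int(np.sum(np.asarray(predicted_labels) == np.asarray(true_labels)))
    testdata_num = len(true_labels)
    return right_predict / testdata_num   # Accuracy = Right_Predict / testdata_num
```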
Compared with the prior art, the technical solution of the invention has the following beneficial effects:
the invention uses principal component analysis to reduce the dimensionality of the measurement data, then uses the reduced-dimension data as training samples for the BP neural network, adds false data to a portion of the samples, marks them as attack samples, and trains the network; the trained model can effectively detect whether a false data injection attack exists. By using PCA to reduce the dimensionality of the extracted data, the invention improves accuracy and reduces training time, and the BP neural network effectively detects the attack values of false data injection attacks.
Drawings
Fig. 1 is a flow chart of a false data injection attack detection method based on PCA and BP neural networks.
Detailed Description
The drawings are for illustrative purposes only and are not to be construed as limiting the patent;
for the purpose of better illustrating the embodiments, certain features of the drawings may be omitted, enlarged or reduced, and do not represent the size of an actual product;
it will be understood by those skilled in the art that certain well-known structures in the drawings and descriptions thereof may be omitted.
The technical solution of the present invention is further described below with reference to the accompanying drawings and examples.
Example 1
The embodiment provides a false data injection attack detection method based on PCA and BP neural network, as shown in fig. 1, including the following steps:
S1: obtain the measurement values z = (z1, z2, ..., zm)^T of all sensors in the smart grid through the control center of the smart grid, where z is an m × n matrix and z1, z2, ..., zm are each 1 × n matrices;
S2: to avoid the overfitting and data-sparsity problems caused by an excessively high dimension, apply PCA dimensionality reduction to the sensor measurements z = (z1, z2, ..., zm)^T to obtain the reduced-dimension feature data set z' = (z'1, z'2, ..., z'm)^T, where z' is an m × r matrix, r ≤ n, and z'1, z'2, ..., z'm are each 1 × r matrices;
S3: inject false data into a portion of the data in the reduced-dimension feature data set, attach the label false_label = -1 to each group of data into which false data have been injected, and attach the label label = 1 to each of the remaining groups of data in the reduced-dimension feature data set;
S4: define the combination of each group of data in the reduced-dimension feature data set and its label as a sample, shuffle all samples, randomly select a portion of the samples as the test set, and use the remaining samples as the training set;
S5: train a BP neural network with the training set;
S6: use the trained BP neural network to predict labels for the test set, compare the predicted labels with the test-set labels, count the number of correctly classified labels, and divide it by the total number of labels to obtain the classification accuracy;
S7: detect false data with the trained BP neural network.
In step S3, the false data are injected as follows:
za = z' + a
a = Hc
where za is the feature data set after false data injection, a is an n × 1 attack vector, c is an arbitrary non-zero constant vector, and H is an m × n Jacobian matrix.
In step S5, training the BP neural network with the training set specifically includes:
the data set z' is an r × m matrix; the i-th group of feature vectors in the data set z' is denoted xi, and the label corresponding to the i-th group of feature vectors is denoted yi, with yi ∈ {-1, 1}; the pairs (xi, yi) are taken as the training sample set, and the error between the output of the BP neural network and the expected output is propagated back to the weights W and thresholds θ of each neuron;
for the sample pairs (X, Y), X = [x1, x2, ..., xm]^T and Y = [y1, y2, ..., ym]^T, and the hidden layer neurons of the BP neural network are O = [o1, o2, ..., ol]; the network weights w1 between the input layer and hidden layer neurons and the network weights w2 between the hidden layer and output layer neurons are, respectively:
w1 = [w1_ij] (the weight from input node i to hidden neuron j),   w2 = [w2_jk] (the weight from hidden neuron j to output neuron k);
the thresholds θ1 of the hidden layer neurons and the thresholds θ2 of the output layer neurons are, respectively:
θ1 = [θ1_1, θ1_2, ..., θ1_l],   θ2 = [θ2_k] (one threshold per output neuron);
the output of the hidden layer neurons is then:
oj = f(net1_j),   net1_j = Σi w1_ij·xi − θ1_j,   j = 1, 2, ..., l,
where f(·) is the transfer function of the hidden layer;
the output of the output layer neurons is:
ŷk = g(net2_k),   net2_k = Σj w2_jk·oj − θ2_k,
where g(·) is the transfer function of the output layer;
the error between the output of the BP neural network and the expected output is:
E = (1/2)·Σk (yk − ŷk)²;
the partial derivative of the error E with respect to the weight w2_jk between the hidden layer and output layer neurons is:
∂E/∂w2_jk = −(yk − ŷk)·g'(net2_k)·oj,
where net2_k = Σj w2_jk·oj − θ2_k; the resulting weight adjustment formulas are:
Δw1_ij = −η1·∂E/∂w1_ij,   Δw2_jk = −η2·∂E/∂w2_jk,
where η1 and η2 are the learning steps of the hidden layer and the output layer, respectively;
the partial derivative of the error E with respect to the threshold θ2_k of the output layer neurons is:
∂E/∂θ2_k = (yk − ŷk)·g'(net2_k);
the partial derivative of the error E with respect to the threshold θ1_j of the hidden layer neurons is:
∂E/∂θ1_j = Σk (yk − ŷk)·g'(net2_k)·w2_jk·f'(net1_j);
the threshold adjustment formulas are therefore:
Δθ1_j = −η1·∂E/∂θ1_j,   Δθ2_k = −η2·∂E/∂θ2_k.
The weights W and thresholds θ are obtained by iterating the above update formulas.
The BP neural network has 20 hidden layer neurons, the learning step is set to 0.01, the training target error is 1e-3, and the number of training rounds is 10000.
In step S5, the classification accuracy is obtained according to the following formula:
Accuracy = Right_Predict / testdata_num
where Accuracy is the classification accuracy, Right_Predict is the number of correctly classified labels, and testdata_num is the total number of labels.
In a specific implementation, step 1, feature data extraction: the operation of the power grid is simulated with Matpower, a toolbox in MATLAB, and the measurements z = (z1, z2, ..., zm) are collected from each transmission line. Thus, in the case118 bus study there are 304 measured features (one for each transmission line). The measurement vector varies over time because of the randomness of the load; using Monte Carlo simulation, 1000 different instances of the measurement vector were recorded.
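A rough, self-contained stand-in for this data-generation step is sketched below (the patent uses Matpower's case118 in MATLAB; the synthetic DC model, the random Jacobian, and the load-variation level here are assumptions made only so the sketch runs on its own):

```python
# Sketch of step 1: 1000 Monte Carlo measurement samples from a synthetic DC model.
import numpy as np

rng = np.random.default_rng(0)
n_features, n_states, n_samples = 304, 118, 1000   # per the embodiment / case118
H = rng.normal(size=(n_features, n_states))        # assumed Jacobian (not case118's)
x_base = rng.normal(size=n_states)                 # nominal operating state

samples = np.empty((n_samples, n_features))
for k in range(n_samples):
    x = x_base * (1.0 + 0.05 * rng.normal(size=n_states))  # random load variation
    e = 0.01 * rng.normal(size=n_features)                  # measurement noise
    samples[k] = H @ x + e                                   # recorded measurement vector
```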
Step 2, PCA data preprocessing: PCA dimensionality reduction is applied to the obtained measurement data, reducing them to 2 dimensions and yielding the feature data set z' = (z'1, z'2, ..., z'm).
Step 3, injecting false data: false data are injected into 300 of the reduced-dimension samples, and the labels of these false-data samples are marked as -1. The remaining 700 samples are normal data, whose labels are 1.
Step 4, splitting the data into training and test sets: all 1000 samples are randomly shuffled, 800 samples are taken as the training set, and the remaining 200 samples are taken as the test set.
Step 5, training the model: the 800 training samples (x, y) are used for training, with the measurement data as input x and the labels as output y. The weights and thresholds are first initialized randomly; the number of hidden layer neurons is set to 20, the learning step to 0.01, the training target error to 1e-3, and the number of training rounds to 10000. The remaining 200 test samples are then used for testing.
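As a stand-in for this training configuration (an assumption: scikit-learn's MLPClassifier replaces the patent's hand-derived BP network, and X_train, y_train, X_test are assumed to be the arrays produced by the split in step 4):

```python
# Hypothetical stand-in for step 5 using scikit-learn's MLP classifier.
from sklearn.neural_network import MLPClassifier

clf = MLPClassifier(hidden_layer_sizes=(20,),   # 20 hidden layer neurons
                    solver="sgd",               # gradient descent, closest to plain BP
                    learning_rate_init=0.01,    # learning step 0.01
                    tol=1e-3,                   # stopping tolerance (stands in for the target error)
                    max_iter=10000,             # up to 10000 training rounds
                    random_state=0)
clf.fit(X_train, y_train)                       # X_train, y_train from step 4 (assumed)
y_pred = clf.predict(X_test)                    # predicted labels for the 200 test samples
```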
Step 6, calculating the detection accuracy: the result of the final experiment is shown in Table 1; the accuracy reaches 96.6%, showing good classification capability. For comparison, the results of BP-neural-network-based false data injection attack detection using the raw measurements directly (without PCA) are shown in Table 2.
TABLE 1
(table reproduced only as an image in the original document)
TABLE 2
(table reproduced only as an image in the original document)
The same or similar reference numerals correspond to the same or similar parts;
the terms describing positional relationships in the drawings are for illustrative purposes only and are not to be construed as limiting the patent;
it should be understood that the above-described embodiments of the present invention are merely examples for clearly illustrating the present invention, and are not intended to limit the embodiments of the present invention. Other variations and modifications will be apparent to persons skilled in the art in light of the above description. And are neither required nor exhaustive of all embodiments. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the claims of the present invention.

Claims (3)

1. A false data injection attack detection method based on PCA and BP neural network is characterized by comprising the following steps:
s1: control through smart gridMethod for acquiring measurement values of sensors in smart grid by center
Figure FDA0003201824370000011
z is an m x n matrix and,
Figure FDA0003201824370000012
are all 1 Xn matrixes;
S2: apply PCA dimensionality reduction to the measurement values z = (z1, z2, ..., zm)^T of the sensors to obtain the reduced-dimension feature data set z' = (z'1, z'2, ..., z'm)^T, where z' is an m × r matrix, r ≤ n, and z'1, z'2, ..., z'm are each 1 × r matrices;
S3: inject false data into a portion of the data in the reduced-dimension feature data set, attach the label false_label = -1 to each group of data into which false data have been injected, and attach the label label = 1 to each of the remaining groups of data in the reduced-dimension feature data set;
S4: define the combination of each group of data in the reduced-dimension feature data set and its label as a sample, shuffle all samples, randomly select a portion of the samples as the test set, and use the remaining samples as the training set;
S5: train a BP neural network with the training set;
S6: use the trained BP neural network to predict labels for the test set, compare the predicted labels with the test-set labels, count the number of correctly classified labels, and divide it by the total number of labels to obtain the classification accuracy;
S7: detect false data with the trained BP neural network;
in step S3, the false data are injected as follows:
za = z' + a
a = Hc
where za is the feature data set after false data injection, a is an n × 1 attack vector, c is an arbitrary non-zero constant vector, and H is an m × n Jacobian matrix;
in step S5, training the BP neural network with the training set specifically includes:
the data set z' is an r × m matrix; the i-th group of feature vectors in the data set z' is denoted xi, and the label corresponding to the i-th group of feature vectors is denoted yi, with yi ∈ {-1, 1}; the pairs (xi, yi) are taken as the training sample set, and the error between the output of the BP neural network and the expected output is propagated back to the weights W and thresholds θ of each neuron;
for the sample pairs (X, Y), X = [x1, x2, ..., xm]^T and Y = [y1, y2, ..., ym]^T, and the hidden layer neurons of the BP neural network are O = [o1, o2, ..., ol]; the network weights w1 between the input layer and hidden layer neurons and the network weights w2 between the hidden layer and output layer neurons are, respectively:
w1 = [w1_ij] (the weight from input node i to hidden neuron j),   w2 = [w2_jk] (the weight from hidden neuron j to output neuron k);
the thresholds θ1 of the hidden layer neurons and the thresholds θ2 of the output layer neurons are, respectively:
θ1 = [θ1_1, θ1_2, ..., θ1_l],   θ2 = [θ2_k] (one threshold per output neuron);
the output of the hidden layer neurons is then:
oj = f(net1_j),   net1_j = Σi w1_ij·xi − θ1_j,   j = 1, 2, ..., l,
where f(·) is the transfer function of the hidden layer;
the output of the output layer neurons is:
ŷk = g(net2_k),   net2_k = Σj w2_jk·oj − θ2_k,
where g(·) is the transfer function of the output layer;
the error between the output of the BP neural network and the expected output is:
E = (1/2)·Σk (yk − ŷk)²;
the partial derivative of the error E with respect to the weight w2_jk between the hidden layer and output layer neurons is:
∂E/∂w2_jk = −(yk − ŷk)·g'(net2_k)·oj,
where net2_k = Σj w2_jk·oj − θ2_k; the resulting weight adjustment formulas are:
Δw1_ij = −η1·∂E/∂w1_ij,   Δw2_jk = −η2·∂E/∂w2_jk,
where η1 and η2 are the learning steps of the hidden layer and the output layer, respectively;
the partial derivative of the error E with respect to the threshold θ2_k of the output layer neurons is:
∂E/∂θ2_k = (yk − ŷk)·g'(net2_k);
the partial derivative of the error E with respect to the threshold θ1_j of the hidden layer neurons is:
∂E/∂θ1_j = Σk (yk − ŷk)·g'(net2_k)·w2_jk·f'(net1_j);
the threshold adjustment formulas are therefore:
Δθ1_j = −η1·∂E/∂θ1_j,   Δθ2_k = −η2·∂E/∂θ2_k.
The weights W and thresholds θ are obtained from the above update formulas.
2. The method of claim 1, wherein the number of hidden layer neurons of the BP neural network is 20, the learning step size is set to 0.01, the target error of training is 1e-3, and the number of training rounds is 10000.
3. The method for detecting false data injection attacks based on PCA and a BP neural network as claimed in claim 1, wherein the classification accuracy is obtained in step S5 according to the following formula:
Accuracy = Right_Predict / testdata_num
where Accuracy is the classification accuracy, Right_Predict is the number of correctly classified labels, and testdata_num is the total number of labels.
CN201910512476.6A 2019-06-13 2019-06-13 False data injection attack detection method based on PCA and BP neural network Active CN110365647B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910512476.6A CN110365647B (en) 2019-06-13 2019-06-13 False data injection attack detection method based on PCA and BP neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910512476.6A CN110365647B (en) 2019-06-13 2019-06-13 False data injection attack detection method based on PCA and BP neural network

Publications (2)

Publication Number Publication Date
CN110365647A CN110365647A (en) 2019-10-22
CN110365647B true CN110365647B (en) 2021-09-14

Family

ID=68217369

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910512476.6A Active CN110365647B (en) 2019-06-13 2019-06-13 False data injection attack detection method based on PCA and BP neural network

Country Status (1)

Country Link
CN (1) CN110365647B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110889111A (en) * 2019-10-23 2020-03-17 广东工业大学 Power grid virtual data injection attack detection method based on deep belief network
CN110826888B (en) * 2019-10-29 2022-06-07 西安交通大学 Data integrity attack detection method in power system dynamic state estimation
CN110995761B (en) * 2019-12-19 2021-07-13 长沙理工大学 Method and device for detecting false data injection attack and readable storage medium
CN111031064A (en) * 2019-12-25 2020-04-17 国网浙江省电力有限公司杭州供电公司 Method for detecting power grid false data injection attack
US11394742B2 (en) 2020-08-17 2022-07-19 International Business Machines Corporation Detecting trojan neural networks
CN113410839B (en) * 2021-06-24 2022-07-12 燕山大学 Detection method and system for false data injection of power grid
CN113516180B (en) * 2021-06-25 2022-07-12 重庆邮电大学 Method for identifying Z-Wave intelligent equipment
CN114036506B (en) * 2021-11-05 2024-07-12 东南大学 Method for detecting and defending false data injection attack based on LM-BP neural network
CN115293244B (en) * 2022-07-15 2023-08-15 北京航空航天大学 Smart grid false data injection attack detection method based on signal processing and data reduction
CN118509260A (en) * 2024-07-18 2024-08-16 卓望数码技术(深圳)有限公司 Network attack analysis method, device, equipment and storage medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106600059A (en) * 2016-12-13 2017-04-26 北京邮电大学 Intelligent power grid short-term load predication method based on improved RBF neural network
WO2019004350A1 (en) * 2017-06-29 2019-01-03 株式会社 Preferred Networks Data discriminator training method, data discriminator training device, program and training method
CN108989330A (en) * 2018-08-08 2018-12-11 广东工业大学 The double-deck defence method of false data injection attacks in a kind of electric system
CN109165504A (en) * 2018-08-27 2019-01-08 广西大学 A kind of electric system false data attack recognition method generating network based on confrontation
CN109377218A (en) * 2018-09-20 2019-02-22 北京邮电大学 A kind of method, server and the mobile terminal of the false perception attack of containment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Power Grid Operation Security Assessment under the Background of Energy Interconnection; Chen Guanlin; China Master's Theses Full-text Database (Electronic Journal); 2018-12-31; pp. 15-52 *

Also Published As

Publication number Publication date
CN110365647A (en) 2019-10-22

Similar Documents

Publication Publication Date Title
CN110365647B (en) False data injection attack detection method based on PCA and BP neural network
Cao et al. A novel false data injection attack detection model of the cyber-physical power system
WO2019174142A1 (en) Multi-mode degradation process modelling and remaining service life prediction method
CN109308522B (en) GIS fault prediction method based on recurrent neural network
CN108881250B (en) Power communication network security situation prediction method, device, equipment and storage medium
CN104125112B (en) Physical-information fuzzy inference based smart power grid attack detection method
CN107682317B (en) method for establishing data detection model, data detection method and equipment
Hutton et al. Real-time burst detection in water distribution systems using a Bayesian demand forecasting methodology
CN112019529B (en) New forms of energy electric power network intrusion detection system
CN109767351A (en) A kind of security postures cognitive method of power information system daily record data
CN113780443A (en) Network security situation assessment method oriented to threat detection
CN108931700A (en) A kind of power grid security Warning System based on WSNs
CN112710914B (en) Intelligent substation fault diagnosis method considering control center fault information tampering
CN118316744B (en) Monitoring method, device, equipment and storage medium for power distribution network
CN114679310A (en) Network information security detection method
CN110826888B (en) Data integrity attack detection method in power system dynamic state estimation
He et al. Detection of false data injection attacks leading to line congestions using Neural networks
CN115378699A (en) Power grid state topology collaborative false data attack defense method
CN114818817A (en) Weak fault recognition system and method for capacitive voltage transformer
CN114189047A (en) False data detection and correction method for active power distribution network state estimation
CN113094702B (en) False data injection attack detection method and device based on LSTM network
CN110161387A (en) A kind of power equipment partial discharge amount prediction technique based on improvement gradient boosted tree
CN117768235A (en) Real-time flow monitoring alarm system based on Internet of things
CN115865458B (en) Network attack behavior detection method, system and terminal based on LSTM and GAT algorithm
CN111885084A (en) Intrusion detection method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant