CN110648055A - Electric power accident event and cause relation construction method based on convolutional neural network


Info

Publication number
CN110648055A
CN110648055A (application CN201910832757.XA)
Authority
CN
China
Prior art keywords: layer, accident event, output, neural network, incentive
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910832757.XA
Other languages
Chinese (zh)
Inventor
黄晓晴
黄勇
刘辉
褚健
邓高峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanning Power Supply Bureau of Guangxi Power Grid Co Ltd
Original Assignee
Nanning Power Supply Bureau of Guangxi Power Grid Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanning Power Supply Bureau of Guangxi Power Grid Co Ltd filed Critical Nanning Power Supply Bureau of Guangxi Power Grid Co Ltd
Priority to CN201910832757.XA priority Critical patent/CN110648055A/en
Publication of CN110648055A publication Critical patent/CN110648055A/en
Pending legal-status Critical Current


Classifications

    • G06Q 10/0639: Performance analysis of employees; performance analysis of enterprise or organisation operations (G Physics; G06 Computing; G06Q ICT specially adapted for administrative, commercial, financial, managerial or supervisory purposes; G06Q 10/06 Resources, workflows, human or project management)
    • G06N 3/045: Combinations of networks (G Physics; G06 Computing; G06N Computing arrangements based on specific computational models; G06N 3/02 Neural networks; G06N 3/04 Architecture, e.g. interconnection topology)
    • G06Q 50/06: Energy or water supply (G Physics; G06 Computing; G06Q ICT specially adapted for specific business sectors)


Abstract

The invention discloses a method for constructing the relationship between electric power accident events and their causes based on a convolutional neural network. A convolutional neural network model is built from electric power accident event data and used to construct the relationship between accident events and causes. Unlike approaches that rely on manually extracted features, accident information is input directly to obtain the relationship feature result, which avoids the complex feature extraction and data reconstruction that keep the accuracy and reliability of traditional methods from meeting requirements.

Description

Electric power accident event and cause relation construction method based on convolutional neural network
Technical Field
The invention relates to the technical field of power engineering, in particular to a method for constructing the relationship between power accident events and their causes based on a convolutional neural network.
Background
At present, with social development, power demand and power grid scale are growing in both size and complexity. Affected by various uncertain factors, the operating conditions of the power system change in increasingly complicated ways and power accidents occur more frequently. To prevent power accidents, and to effectively control the scale of an accident's impact once it has occurred, a power enterprise must be able to identify the type and cause of a power accident promptly and accurately.
The existing approach is generally to convene an accident analysis meeting after an accident and manually analyse the type and cause of the event, but this process is time-consuming. To improve analysis efficiency, a shallow-structure feature extraction method has been used to model the relationship between safety accidents and their causes. Targeting the characteristics of current power production and the specific situation of a power enterprise, and based on the large amount of historical information in a grid company's recent compilations of typical accidents, the relationship between safety accidents and their causes is constructed through big-data mining. However, the shallow-structure feature extraction method is highly data-dependent, and when the power accident event information is complex and the inference rules are unclear its identification accuracy is low, so it has not been widely popularised and applied.
Therefore, how to provide an accurate and reliable method for constructing the relationship between the power accident event and the cause is a problem that needs to be solved urgently by those skilled in the art.
Disclosure of Invention
In view of the above, the invention provides a method for constructing the relationship between power accident events and their causes based on a convolutional neural network, which builds a convolutional neural network model from power accident event data, constructs the relationship between accident events and causes, and avoids the problem that the complicated feature extraction and data reconstruction of traditional methods impair accuracy.
In order to achieve the purpose, the invention adopts the following technical scheme:
a method for constructing a power accident event and incentive relation based on a convolutional neural network comprises the following steps:
type extraction and coding: extracting the accident event type from the title of the accident event report and encoding it;
cause extraction and coding: extracting the accident event causes from the cause analysis section of the accident event report and encoding them;
sample data combination and labeling: combining the encoded accident event type data and the encoded accident event incentive data according to an encoding sequence to obtain sample data, and adding a label to the sample data;
constructing and training a model: constructing a CNN model by using the sample data, performing parameter training on the constructed CNN model, and outputting the trained CNN model;
testing and determining the model: and testing the trained CNN model, evaluating the classification accuracy of the model, and determining a final power accident event and incentive relation model according to the comparison result of the test error and a preset error threshold.
Further, the accident event type is encoded with a 1 × 4 vector t:

t = [t1, t2, t3, t4]

where [t1, t2] represents the accident consequence, with value range [0,1]-[9,9], 99 values in total, and [t3, t4] represents the accident grade, taking the values [0,1], [1,0] and [1,1], three grades in total.
Further, the accident event causes are encoded. The causes in an accident event report can be simplified and mapped onto the levels of the grid power production safety accident cause library. Since that library divides causes into four levels, four 1 × 2 vectors are used to encode the accident event causes, one per level:

c_q = [c_q1, c_q2], q = 1, ..., 4

where c_q is the cause code of level q, with value range [0,1]-[9,9], 99 values in total.
Further, the process of sample data combination and labelling specifically includes:

combining the code sequences obtained from the accident event type encoding and the accident event cause encoding into a 1 × 12 vector I = [t, c], which serves as one sample and is given a label;

and taking 80% of the sample data as the training set and 20% as the test set.
Further, the labels of the sample data are direct cause, indirect cause, management cause and no association.
Further, the process of constructing the model specifically includes the following steps:
constructing a CNN model comprising 2 convolutional layers, 2 pooling layers and 1 fully connected layer, in which convolutional and pooling layers alternate and the output of each layer is used as the input of the next;
setting a convolution kernel of the CNN model as a one-dimensional convolution kernel to adapt to one-dimensional sample data, determining the number of the convolution kernels through a trial and error method, and extracting different characteristics of the sample data by using different convolution kernels;
performing dimensionality reduction operation on the characteristics of each sample data obtained by convolution through a pooling layer;
and connecting a fully connected layer after the pooling layer and fully connecting it with the Softmax output layer, whose neurons output the probability value of each sample label.
Further, the parameter training of the constructed CNN model specifically comprises the following steps:
randomly initializing the weight and the offset value of each layer of the network in the CNN model;
inputting the training sample data L = {(I_i, O_i)}, i = 1, ..., n, into the CNN model and propagating the sample features forward through each layer to calculate the output values, where L is the training set, I_i and O_i are the feature data and label of sample i, and n is the number of training samples;
performing backward propagation on the calculated output value by adopting a gradient descent method, reversely and sequentially calculating error items of each layer, and updating the weight and the bias layer by layer;
and repeating the forward propagation process and the backward propagation process, performing iterative calculation until all the variation values of W and b are smaller than the iteration stop threshold value, and outputting the CNN model.
Further, the process of calculating the output value by the forward propagation of the sample characteristics through each layer specifically comprises the following steps:
performing a convolution operation on the input features of the previous layer in the convolutional layer, with a kernel window stride of 1; based on the one-dimensional accident event feature data, the output of the convolutional layer is calculated as:

a_l,j = f( Σ_{m=1}^{M} w_l,m · x_l,j+m-1 + b ), j = 1, ..., J

where a_l,j is the jth element of the 1 × J output vector of the convolutional layer (layer l of the network); w_l,m is the mth element of the 1 × M convolution kernel, i.e. the shared weight; x_l,j+m-1 is the (j+m-1)th element of the layer-l input vector x; b is the bias; and f is the activation function, taken as ReLU or leaky ReLU;
average pooling is adopted in the pooling layer to reduce the output dimension: the input is divided into several non-overlapping regions and the mean of the values in each region is taken:

a_l,j = (1/D) Σ_{d=1}^{D} a_{l-1},(j-1)D+d, j = 1, ..., J/D

where D is the width of each pooling region, a_l,j is the jth element of the 1 × (J/D) output vector of the pooling layer (layer l of the network), and a_{l-1},(j-1)D+d is the corresponding element of the output vector of the layer before pooling (layer l-1);
and finally, performing output calculation through the full connection layer and the Softmax layer.
Further, the error terms of each layer are calculated in reverse order and the weights and biases are updated layer by layer, specifically comprising the following steps:
establishing the loss function of the CNN model:

C = (1/2n) Σ_i || O_i - a_i^L ||²

where the superscript L denotes the output layer and a_i^L is the actual output of the output layer for sample i;
defining the residual of layer l of the network as the partial derivative of the loss function C with respect to the input of that layer's activation function, the residual of the output layer is calculated from the loss function of the output layer:

δ^L = (a^L - O) ⊙ f′(z^L)

where δ^L is the output-layer residual, z^L the input of the output-layer activation function, a^L the output-layer output, ⊙ the element-wise (Hadamard) product, and f′(·) the derivative of the activation function;
based on the calculated known residual errors, recursion is carried out on the residual errors of all layers layer by layer according to a reduction method;
and adjusting the weight and the bias layer by layer based on each layer of residual error obtained by calculation.
Compared with the prior art, the invention discloses a method for constructing the relationship between power accident events and their causes based on a convolutional neural network. The method builds a convolutional neural network model from power accident event data and uses it to construct the relationship between accident events and causes. Unlike approaches based on manually extracted features, accident information is input directly to obtain the relationship feature result, which avoids the complex feature extraction and data reconstruction that keep the accuracy and reliability of traditional methods from meeting requirements. When the information in a power accident is complex and the inference rules are unclear, the cause and responsibility of a specific accident event can be analysed more intuitively and flexibly and the weak links of the system identified, providing reliable data support for the management measures and technical precautions later taken to prevent similar accidents.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is obvious that the drawings in the following description are only embodiments of the present invention, and other drawings can be obtained from them by those skilled in the art without creative effort.
FIG. 1 is a schematic overall flow chart of a method for constructing a relationship between an electric power accident event and a cause based on a convolutional neural network according to the present invention;
FIG. 2 is a schematic diagram illustrating a detailed flow of a method for constructing a relationship between an electric power accident event and a cause based on a convolutional neural network according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a typical CNN (convolutional neural network) model in an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1 and fig. 2, an embodiment of the present invention discloses a method for constructing a relationship between an electric power accident event and a cause based on a convolutional neural network, which includes the following steps:
S1, type extraction and coding: extracting the accident event type from the title of the accident event report and encoding it;

S2, cause extraction and coding: extracting the accident event causes from the cause analysis section of the accident event report and encoding them;

S3, sample data combination and labelling: combining the encoded accident event type data and the encoded accident event cause data into a code sequence to obtain the sample data, and adding a label to each sample;

S4, model construction and training: constructing a CNN model for the sample data, training the parameters of the constructed CNN model, and outputting the trained CNN model;

S5, model testing and determination: testing the trained CNN model to evaluate its classification accuracy, and determining the final power accident event and cause relationship model by comparing the test error with a preset error threshold.
In a specific embodiment, invalid information in the accident event report title, such as the company name, the accident date and the substation/line name, is ignored and the accident event type is extracted from the title. For example, from "xx power supply bureau 'xx.xx' substation voltage loss second-class event" and "xx power supply bureau 'xx.xx' xx line unplanned outage third-class event", the extracted accident event types are "substation voltage loss, second class" and "line unplanned outage, third class" respectively. The accident event type is then encoded with a 1 × 4 vector t:

t = [t1, t2, t3, t4]

where [t1, t2] represents the accident consequence, with value range [0,1]-[9,9], 99 values in total, and [t3, t4] represents the accident grade, taking the values [0,1], [1,0] and [1,1], three grades in total.
In a specific embodiment, the causes in the accident event report can be simplified and mapped onto the causes in the grid power production safety accident cause library. Since that library divides causes into four levels, the accident event causes are encoded with four 1 × 2 vectors c, one per level:

c_q = [c_q1, c_q2], q = 1, ..., 4

where c_q is the cause code of level q, with value range [0,1]-[9,9], 99 values in total.
In a specific embodiment, the process of sample data combination and tagging specifically includes:
combining the code sequences obtained by encoding the accident event type and the accident event causes into a 1 × 12 vector I = [t, c], which serves as one sample and is given a label;
and taking 80% of sample data as a training set and taking 20% of sample data as a test set.
It should be noted that several causes are usually described in the cause analysis of one accident event, i.e. several samples and labels can be extracted from a single accident event report.
In a particular embodiment, the labels of the sample data are direct cause, indirect cause, management cause and no association.
In the accident event reports, the relationship between the accident event consequence (i.e. type) and the accident event cause is expressed as a direct cause, an indirect cause or a management cause; in addition, the invention adds a "no association" label, giving four relationship features, so four neurons are arranged in the output layer to correspond to them.
An output-layer neuron activation value of 1 indicates that the input data corresponds to that type-cause relationship, and 0 indicates that it does not, so the type-cause relationships can be encoded as shown in Table 1 below:

TABLE 1 Type-cause relationship coding

direct cause: 1 0 0 0
indirect cause: 0 1 0 0
management cause: 0 0 1 0
no association: 0 0 0 1
Referring to fig. 3, in one embodiment, the process of constructing the model specifically includes the following steps:
step one: constructing a CNN model comprising 2 convolutional layers, 2 pooling layers and 1 fully connected layer, in which convolutional and pooling layers alternate and the output of each layer is used as the input of the next;
step two: setting convolution kernels of the CNN model as one-dimensional convolution kernels, determining the number of the convolution kernels through a trial and error method, and extracting different characteristics of sample data by using different convolution kernels;
step three: performing dimensionality reduction operation on the characteristics of each sample data obtained by convolution through a pooling layer;
step four: connecting a fully connected layer after the pooling layer and fully connecting it with the Softmax output layer, whose neurons output the probability value of each sample label.
Specifically, the input feature data I = [t, c] is a one-dimensional array. In the convolutional layer, the convolution kernels are set as one-dimensional kernels to suit the one-dimensional input data, and different kernels extract different features of the input. For a single input sample I, containing 12 elements [t, c] in total, with N convolution kernels each of size 1 × M, the feature output by each convolution has size (12 - M + 1) × 1, the number of connections between the convolutional layer and the input layer is (12 - M + 1) × (M + 1) × N, and the number of trainable parameters is (M + 1) × N.
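The size bookkeeping in this paragraph can be checked directly; M and N are left open in the text, so the values below (M = 3, N = 10) are illustrative:

```python
def conv_layer_sizes(input_len=12, M=3, N=10):
    """Output length, connection count and trainable-parameter count of
    the one-dimensional convolutional layer described above."""
    out_len = input_len - M + 1            # (12 - M + 1) x 1 per kernel
    connections = out_len * (M + 1) * N    # M weights + 1 bias per output, N kernels
    trainable = (M + 1) * N                # shared weights + one bias per kernel
    return out_len, connections, trainable

print(conv_layer_sizes())  # (10, 400, 40)
```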
In actual operation, the number of convolution kernels is determined by trial and error; 10 different kernels are used for feature extraction to describe the data features of power accident events. Note that increasing the number of kernels extracts more features and improves classification ability, but beyond a certain number the gain in classification accuracy is small.
The features obtained by convolution are passed through a pooling layer for dimension reduction, which greatly reduces the pooling layer's output nodes and the network's computation. A fully connected layer is then attached after the pooling layer and fully connected to a Softmax layer; the Softmax output layer is set to 4 neurons corresponding to the 4 defined relationship features, each output neuron gives a probability value, and the output values of the 4 neurons sum to 1.
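The Softmax output layer described above normalises the four neuron activations into probabilities summing to 1; a minimal sketch with illustrative activations:

```python
import numpy as np

def softmax(z):
    """Four output activations -> four relationship probabilities."""
    z = z - np.max(z)          # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

p = softmax(np.array([2.0, 1.0, 0.5, -1.0]))
print(p.argmax(), round(p.sum(), 6))  # 0 1.0
```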
In a specific embodiment, the parameter training is performed on the constructed CNN model, and the training process adopts a BP algorithm, and includes two stages of forward propagation and backward propagation, which specifically includes the following steps:
step one: randomly initializing the weights and bias values of each layer of the network in the CNN model;
step two: inputting the training sample data L = {(I_i, O_i)}, i = 1, ..., n, into the CNN model and propagating the sample features forward through each layer to calculate the output values, where L is the training set, I_i and O_i are the feature data and label of sample i, and n is the number of training samples;
step three: performing backward propagation on the calculated output value by adopting a gradient descent method, reversely and sequentially calculating error items of each layer, and updating the weight and the bias layer by layer;
step four: and repeating the forward propagation and backward propagation calculation for iteration until all the variation values of W and b are smaller than the iteration stop threshold value, and outputting the network model.
In a specific embodiment, the process of calculating the output value by propagating the sample features forward through each layer specifically includes the following steps:
(1) performing a convolution operation on the input features of the previous layer in the convolutional layer, with a kernel window stride of 1; based on the one-dimensional accident event feature data, the output of the convolutional layer is calculated as:

a_l,j = f( Σ_{m=1}^{M} w_l,m · x_l,j+m-1 + b ), j = 1, ..., J

where a_l,j is the jth element of the 1 × J output vector of the convolutional layer (layer l of the network); w_l,m is the mth element of the 1 × M convolution kernel, i.e. the shared weight; x_l,j+m-1 is the (j+m-1)th element of the layer-l input vector x; b is the bias; and f is the activation function, taken as ReLU or leaky ReLU;
(2) average pooling is adopted in the pooling layer to reduce the output dimension: the input is divided into several non-overlapping regions and the mean of the values in each region is taken:

a_l,j = (1/D) Σ_{d=1}^{D} a_{l-1},(j-1)D+d, j = 1, ..., J/D

where D is the width of each pooling region, a_l,j is the jth element of the 1 × (J/D) output vector of the pooling layer (layer l of the network), and a_{l-1},(j-1)D+d is the corresponding element of the output vector of the layer before pooling (layer l-1);
(3) and finally, performing output calculation through the full connection layer and the Softmax layer.
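A minimal numpy sketch of the forward computations in steps (1) and (2), with an illustrative 1 × 3 kernel (the patent leaves M and the trained weights unspecified):

```python
import numpy as np

relu = lambda z: np.maximum(0.0, z)

def conv1d(x, w, b, f=relu):
    """a_{l,j} = f(sum_m w_m * x_{j+m-1} + b), kernel window stride 1."""
    M = len(w)
    J = len(x) - M + 1
    return np.array([f(np.dot(w, x[j:j + M]) + b) for j in range(J)])

def avg_pool(a, D):
    """Non-overlapping average pooling over regions of width D."""
    J = len(a) - len(a) % D                # drop any ragged tail
    return a[:J].reshape(-1, D).mean(axis=1)

x = np.arange(12, dtype=float)                      # one 1x12 input sample I = [t, c]
a = conv1d(x, np.array([0.5, -0.5, 1.0]), b=0.1)    # illustrative kernel and bias
p = avg_pool(a, 2)
print(a.shape, p.shape)  # (10,) (5,)
```

The pooled features would then feed the fully connected and Softmax layers for the final output calculation.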
In a specific embodiment, the error terms of each layer are calculated in reverse order, and the weight and the bias are updated layer by layer, which specifically includes the following steps:
(1) establishing the loss function of the CNN model:

C = (1/2n) Σ_i || O_i - a_i^L ||²

where the superscript L denotes the output layer and a_i^L is the actual output of the output layer for sample i;
(2) defining the residual of layer l of the network as the partial derivative of the loss function C with respect to the input of that layer's activation function, the residual of the output layer is calculated from the loss function of the output layer:

δ^L = (a^L - O) ⊙ f′(z^L)

where δ^L is the output-layer residual, z^L the input of the output-layer activation function, a^L the output-layer output, ⊙ the element-wise (Hadamard) product, and f′(·) the derivative of the activation function;
(3) and based on the calculated known residual errors, recursion is carried out on the residual errors of all layers layer by layer according to a reduction method.
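The output-layer residual in step (2) can be sketched directly; the activations, label and pre-activation values below are illustrative:

```python
import numpy as np

def output_residual(aL, O, zL, f_prime):
    """delta^L = (a^L - O) ⊙ f'(z^L) for the squared-error loss."""
    return (aL - O) * f_prime(zL)

relu_prime = lambda z: (z > 0).astype(float)
aL = np.array([0.7, 0.1, 0.1, 0.1])   # network output for one sample
O  = np.array([1.0, 0.0, 0.0, 0.0])   # one-hot "direct cause" label
zL = np.array([0.9, -0.2, 0.3, 0.4])  # pre-activation input of the output layer
delta_L = output_residual(aL, O, zL, relu_prime)
print(delta_L)  # [-0.3  0.   0.1  0.1]
```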
Specifically, based on the calculated known residual errors, the residual errors of each layer can be recurred layer by layer according to a reduction method, and the calculation is performed in the following 3 cases:
1) if the current l layer is a full connection layer, the residual error of the last l-1 layer is:
δ^{l-1} = (W^l)^T δ^l ⊙ f′(z^{l-1})
2) if the current layer l is a convolutional layer, the residual of the previous layer l-1 is calculated as:

δ^{l-1} = δ^l ∗ rot180(W^l) ⊙ f′(z^{l-1})

where ∗ denotes convolution and rot180(·) denotes flipping the convolution kernel by 180 degrees.
3) If the current layer l is a pooling layer, the residual error of the previous layer l-1 is:
δ^{l-1} = upsample(δ^l) ⊙ f′(z^{l-1})
where upsample(·) denotes the upsampling operation, whose goal is to restore the matrix to its pre-pooling size. For average pooling, the upsampling first restores the matrix size by padding with 0, and each element value within a partitioned region is then set to the quotient of the element value before restoration and the number of elements in the region.
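For average pooling, the upsampling described above amounts to spreading each pooled residual evenly over its D-wide region; a minimal sketch with D = 2:

```python
import numpy as np

def upsample_avg(delta, D):
    """Restore a pooled residual to its pre-pooling size: each element is
    the pooled value divided by the number of elements in its region."""
    return np.repeat(delta / D, D)

up = upsample_avg(np.array([2.0, 4.0]), 2)
print(up)  # [1. 1. 2. 2.]
```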
Based on the residual δ^l obtained above, the weights and biases are adjusted layer by layer in the following 2 cases:
1) if the current l-th layer is a fully connected layer:
W^l = W^l − α[δ^l (a^{l−1})^T]

b^l = b^l − αδ^l

wherein α is the learning rate, taking a value in [0, 1]; δ^l is the residual, W is the weight, and b is the bias.
2) If it is currently a convolutional layer:
W^l = W^l − α(a^{l−1} ∗ δ^l)

b^l = b^l − αδ^l

in the formula, α is the learning rate, taking a value in [0, 1]; ∗ represents convolution; W is the weight and b is the bias.
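The two update rules can be sketched as follows. A minimal illustration under assumed shapes: the kernel gradient is computed as a valid cross-correlation of the layer input with the residual, and the shared scalar bias is updated with the summed residual (a common convention, not stated in the text):

```python
import numpy as np

def update_fc(W, b, delta, a_prev, alpha=0.1):
    # Fully connected layer: W^l <- W^l - alpha * delta^l (a^{l-1})^T,
    #                        b^l <- b^l - alpha * delta^l.
    return W - alpha * np.outer(delta, a_prev), b - alpha * delta

def update_conv(W, b, delta, a_prev, alpha=0.1):
    # Convolutional layer: the kernel gradient a^{l-1} * delta^l is the
    # valid cross-correlation of the layer input with the residual.
    grad_W = np.correlate(a_prev, delta, mode="valid")
    return W - alpha * grad_W, b - alpha * delta.sum()
```

With input a^{l−1} = [1, 2, 3], residual δ^l = [1, 1] and α = 0.1, the kernel gradient is [3, 5], so a zero-initialised two-element kernel moves to [−0.3, −0.5].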
Specifically, the process of testing and determining the model is as follows: the trained CNN power accident event and incentive relation construction model is tested with the test data set and the classification accuracy is calculated. If the test error falls within the allowable range, the model is taken as the power accident event and incentive relation construction model; otherwise, the number of convolution kernels is adjusted or training samples are added, and the CNN model is retrained.
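The accept-or-retrain decision amounts to computing classification accuracy on the held-out set and comparing the test error against a tolerance. A minimal sketch; the 10% error tolerance is an assumed placeholder, not a value from the text:

```python
import numpy as np

def classification_accuracy(pred_labels, true_labels):
    # Fraction of test samples whose predicted relation label
    # (direct / indirect / management / none) matches the ground truth.
    return float((np.asarray(pred_labels) == np.asarray(true_labels)).mean())

def model_accepted(accuracy, max_error=0.10):
    # Accept when the test error (1 - accuracy) falls within the allowed
    # range; otherwise adjust the kernel count or add samples and retrain.
    return (1.0 - accuracy) <= max_error
```

For instance, 3 correct predictions out of 4 gives accuracy 0.75, which a 10% error tolerance would reject, triggering retraining.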
The flow of the method proposed in the above embodiment is briefly described below with reference to fig. 2:
firstly, collecting historical power production safety accident event analysis reports;
extracting the accident event types from the accident event report titles and encoding them;

then extracting the accident event causes from the cause-analysis section of the report and encoding them;

after encoding, the type code and the cause code are concatenated in order into one-dimensional input feature data, and one of four labels is attached: direct cause / indirect cause / management cause / no association;

80% of the encoded samples are taken as the training set and the rest as the test set, the training set serving as the input and output for supervised learning.
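The encoding, labelling and 80/20 split above can be sketched as follows. The digit values and label names are illustrative placeholders; only the 1×4 type code, the four 1×2 cause codes, the 1×12 concatenation and the split ratio come from the text:

```python
import numpy as np

LABELS = ["direct", "indirect", "management", "none"]  # the four relation labels

def build_sample(type_code, cause_codes, label):
    # type_code: the 1x4 event-type vector t; cause_codes: four 1x2 cause
    # vectors c_q -> concatenated into the 1x12 input feature I = [t, c].
    feat = np.concatenate([np.asarray(type_code, float),
                           np.asarray(cause_codes, float).ravel()])
    assert feat.shape == (12,)
    return feat, LABELS.index(label)

def split_dataset(samples, train_frac=0.8, seed=0):
    # Shuffle, then keep 80% of the encoded samples for training
    # and the remaining 20% for testing.
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(samples))
    cut = int(train_frac * len(samples))
    return [samples[i] for i in idx[:cut]], [samples[i] for i in idx[cut:]]
```

A 10-sample corpus thus yields an 8-sample training set and a 2-sample test set.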
After data processing is complete, a CNN model is built according to the characteristics of the data structure, comprising 2 convolutional layers, 2 pooling layers and 1 fully connected layer. Matching the one-dimensional input data structure, the convolution kernels are set as one-dimensional kernels, and a suitable number of kernels is selected to extract the input data features;

after the fully connected layer, a Softmax layer with 4 neurons, corresponding to the 4 relation categories, is set as the output;

finally, the sample data are input into the CNN model, which is trained with the BP algorithm until the stopping threshold is met; the trained CNN model is then tested with the test data set and the classification accuracy is calculated.
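Under the stated structure (2 convolutional layers, 2 pooling layers, 1 fully connected layer, 4-neuron Softmax output), the forward pass can be sketched with a single one-dimensional kernel per convolutional layer. The kernel widths, random weights and pooling width are illustrative assumptions (the patent selects the kernel count by trial and error), and BP training is omitted:

```python
import numpy as np

def conv1d(x, W, b):
    # Valid 1-D convolution, stride 1, followed by ReLU.
    out = np.array([np.dot(W, x[j:j + len(W)]) + b
                    for j in range(len(x) - len(W) + 1)])
    return np.maximum(out, 0.0)

def avg_pool(x, D):
    # Non-overlapping average pooling of width D.
    return x.reshape(-1, D).mean(axis=1)

def softmax(z):
    # Numerically stable softmax over the 4 relation categories.
    e = np.exp(z - z.max())
    return e / e.sum()

def forward(x, W1, b1, W2, b2, Wf, bf):
    # 1x12 input -> conv1 -> pool -> conv2 -> pool -> FC -> softmax-4.
    h = avg_pool(conv1d(x, W1, b1), 2)   # 12 -> 10 -> 5
    h = avg_pool(conv1d(h, W2, b2), 2)   # 5 -> 4 -> 2
    return softmax(Wf @ h + bf)          # 2 -> 4 class probabilities

rng = np.random.default_rng(0)
probs = forward(rng.normal(size=12),
                rng.normal(size=3), 0.0,   # conv1: width-3 kernel
                rng.normal(size=2), 0.0,   # conv2: width-2 kernel
                rng.normal(size=(4, 2)), np.zeros(4))
```

The output is a probability vector over direct / indirect / management / no-association, whose argmax gives the predicted relation.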
In summary, compared with the prior art, the method for constructing the electric power accident event and cause relationship based on the convolutional neural network provided by the embodiment of the invention has the following advantages:
the method is different from a mode of manually extracting features and directly inputting accident information to obtain a relational feature result, solves the problem that the accuracy and reliability of the traditional method cannot meet the requirements due to complex feature extraction and data reconstruction process, and more intuitively and flexibly analyzes the cause and responsibility of a specific accident event under the conditions of complex information and unclear inference rule in the power accident so as to find out the weak link of the system, thereby providing reliable data support for implementation of management measures and technical precautionary measures for preventing similar accident events in the later period.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A method for constructing a power accident event and incentive relation based on a convolutional neural network is characterized by comprising the following steps:
type extraction and coding: extracting accident event types from the accident event report titles, and coding the accident event types;
incentive extraction and coding: extracting accident event causes from the reason analysis of the accident event report, and coding the accident event causes;
sample data combination and labeling: combining the encoded accident event type data and the encoded accident event incentive data according to an encoding sequence to obtain sample data, and adding a label to the sample data;
constructing and training a model: constructing a CNN model by using the sample data, performing parameter training on the constructed CNN model, and outputting the trained CNN model;
testing and determining the model: and testing the trained CNN model, evaluating the classification accuracy of the model, and determining a final power accident event and incentive relation model according to the comparison result of the test error and a preset error threshold.
2. The method for constructing the relationship between the electric power accident event and the incentive based on the convolutional neural network as claimed in claim 1, wherein the accident event type is encoded by using a 1 × 4 vector t, and the specific formula is as follows:

t = [t_r, t_g]

in the formula, t_r (1 × 2) represents the result of the accident event, with a value range from [0,1] to [9,9], a total of 99 values; t_g (1 × 2) represents the accident grade, taking the values [0,1], [1,0] and [1,1], for a total of three grades.
3. The method for constructing the relationship between the electric power accident event and the incentive based on the convolutional neural network as claimed in claim 2, wherein the incentive of the accident event is encoded by using 4 1 × 2 vectors c, and the specific formula is as follows:

c = [c_1, c_2, c_3, c_4]

in the formula, c_q (1 × 2) is the incentive code of grade q, with a value range from [0,1] to [9,9], a total of 99 values.
4. The method for constructing the power accident event and incentive relation based on the convolutional neural network as claimed in claim 3, wherein the process of sample data combination and labeling specifically comprises:
coding the accident event type and the accident event incentive, and concatenating the resulting codes in coding order into a 1 × 12 vector I = [t, c]; taking I as sample data and adding a label to it;
and taking 80% of sample data as a training set and taking 20% of sample data as a test set.
5. The method according to claim 4, wherein the label of the sample data comprises direct reason, indirect reason, management reason and no association.
6. The method for building the relationship between the electric power accident event and the cause based on the convolutional neural network as claimed in claim 4, wherein the process of building the model specifically comprises the following steps:
constructing a CNN model comprising 2 convolutional layers, 2 pooling layers and 1 full-connection layer, wherein the structure is that the convolutional layers and the pooling layers are alternately repeated, and the output of the layer is used as the input of the next layer;
setting convolution kernels of the CNN model as one-dimensional convolution kernels, determining the number of the convolution kernels through a trial and error method, and extracting different characteristics of sample data by using different convolution kernels;
performing dimensionality reduction operation on the characteristics of each sample data obtained by convolution through a pooling layer;
and connecting a fully connected layer behind the pooling layer, which is fully connected with the Softmax output layer, the neurons of the Softmax output layer outputting the probability of each sample data label.
7. The method for building the relationship between the electric power accident event and the incentive based on the convolutional neural network as claimed in claim 6, wherein the parameter training of the built CNN model specifically comprises the following steps:
initializing weights and bias values of each layer of a network in a CNN model;
inputting the training sample data L = {(I_i, O_i), i = 1, …, n} into the CNN model, and propagating the sample features forward through each layer to calculate the output value; wherein L is the training set, I_i and O_i respectively represent the feature data and the label of sample i, and n is the number of training samples;
performing backward propagation on the calculated output value by adopting a gradient descent method, reversely and sequentially calculating error items of each layer, and updating the weight and the bias layer by layer;
and repeating the forward propagation process and the backward propagation process, performing iterative calculation until all the weight values and the bias change values are smaller than the iteration stop threshold value, and outputting the CNN model.
8. The method for constructing the relationship between the power accident event and the incentive based on the convolutional neural network as claimed in claim 7, wherein the process of calculating the output value by the forward propagation of the sample characteristics through each layer comprises the following steps:
performing a convolution operation on the input features of the previous layer in the convolutional layer, with a kernel window sliding step of 1; based on the one-dimensional accident event feature data, the output of the convolutional layer is calculated as:

a_{l,j} = f( Σ_{m=1}^{M} W_{l,m} x_{l,j+m−1} + b )

in the formula: a_{l,j} is the j-th element of the 1 × J output vector of layer l of the neural network, j = 1, …, J; W_{l,m} is the m-th element of the 1 × M convolution kernel, i.e. the shared weights; x_{l,j+m−1} is the (j+m−1)-th element of the layer-l input vector x; b is the bias variable; f is the activation function, taken as the ReLU / Leaky ReLU function;
average pooling is adopted in the pooling layer to reduce the output dimension: the input is divided into a number of non-overlapping regions and the average of all values in each region is taken, with the formula:

a_{l,j} = (1/D) Σ_{d=1}^{D} a_{l−1,(j−1)D+d}

in the formula: D is the number of columns of each divided pooling region; a_{l,j} is the j-th element of the 1 × (J/D) output vector of layer l of the neural network; a_{l−1,(j−1)D+d} is the ((j−1)D+d)-th element of the output vector of layer l−1 of the neural network;
and performing output calculation through the full connection layer and the Softmax layer.
9. The method for constructing the relationship between the power accident and the incentive based on the convolutional neural network as claimed in claim 7, wherein error terms of each layer are calculated in reverse order, and the weight and the bias are updated layer by layer, specifically comprising the following steps:
establishing a loss function of the CNN model, wherein the formula is as follows:

C = (1/2) ‖y − a^L‖₂²

in the formula, the superscript L represents the output layer, a^L is the actual output of the output layer, and y is the expected output;
defining the residual of the l-th layer of the network as the partial derivative of the loss function C with respect to the input of the activation function at that layer, and calculating the output-layer residual from the output-layer loss function, as follows:

δ^L = ∂C/∂z^L = (a^L − y) ⊙ f′(z^L)

in the formula, δ^L represents the output-layer residual; z^L represents the input of the output-layer activation function; a^L represents the output-layer output; ⊙ represents the element-wise (Hadamard) product; f′(·) denotes the derivative of the activation function;
based on the residuals already calculated, recursing the residuals of each layer backwards, layer by layer;
and adjusting the weight and the bias layer by layer based on each layer of residual error obtained by calculation.
10. The method for constructing the electric power accident event and incentive relation based on the convolutional neural network as claimed in claim 7, wherein the final electric power accident event and incentive relation model is determined according to the comparison of the test error with the preset error threshold, the specific process being as follows:

if the test error falls within the allowable range of the preset error threshold, the currently trained CNN model is taken as the power accident event and incentive relation construction model; otherwise, the number of convolution kernels is adjusted or training samples are added, and the CNN model is retrained.
CN201910832757.XA 2019-09-04 2019-09-04 Electric power accident event and cause relation construction method based on convolutional neural network Pending CN110648055A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910832757.XA CN110648055A (en) 2019-09-04 2019-09-04 Electric power accident event and cause relation construction method based on convolutional neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910832757.XA CN110648055A (en) 2019-09-04 2019-09-04 Electric power accident event and cause relation construction method based on convolutional neural network

Publications (1)

Publication Number Publication Date
CN110648055A true CN110648055A (en) 2020-01-03

Family

ID=68991543

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910832757.XA Pending CN110648055A (en) 2019-09-04 2019-09-04 Electric power accident event and cause relation construction method based on convolutional neural network

Country Status (1)

Country Link
CN (1) CN110648055A (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111190943A (en) * 2020-01-08 2020-05-22 中国石油集团安全环保技术研究院有限公司 Intelligent analysis method for accident cause
CN111222791A (en) * 2020-01-08 2020-06-02 中国石油集团安全环保技术研究院有限公司 Intelligent analysis method for accident event multidimensional service
CN111222791B (en) * 2020-01-08 2024-02-02 中国石油天然气集团有限公司 Intelligent analysis method for accident event multidimensional service
CN111190943B (en) * 2020-01-08 2024-04-30 中国石油天然气集团有限公司 Intelligent analysis method for accident event cause
CN112115124A (en) * 2020-09-25 2020-12-22 平安国际智慧城市科技股份有限公司 Data influence degree analysis method and device, electronic equipment and storage medium
CN117688485A (en) * 2024-02-02 2024-03-12 北京中卓时代消防装备科技有限公司 Fire disaster cause analysis method and system based on deep learning
CN117688485B (en) * 2024-02-02 2024-04-30 北京中卓时代消防装备科技有限公司 Fire disaster cause analysis method and system based on deep learning

Similar Documents

Publication Publication Date Title
CN110648055A (en) Electric power accident event and cause relation construction method based on convolutional neural network
CN112131673B (en) Engine surge fault prediction system and method based on fusion neural network model
CN113496262B (en) Data-driven active power distribution network abnormal state sensing method and system
CN114091615B (en) Electric energy metering data complement method and system based on generation countermeasure network
CN111931989A (en) Power system short-term load prediction method based on deep learning neural network
CN113743016A (en) Turbofan engine residual service life prediction method based on improved stacked sparse self-encoder and attention echo state network
CN112508286A (en) Short-term load prediction method based on Kmeans-BilSTM-DMD model
CN113988449A (en) Wind power prediction method based on Transformer model
CN113360848A (en) Time sequence data prediction method and device
CN116402352A (en) Enterprise risk prediction method and device, electronic equipment and medium
CN113779882A (en) Method, device, equipment and storage medium for predicting residual service life of equipment
CN114707712A (en) Method for predicting requirement of generator set spare parts
CN111985719A (en) Power load prediction method based on improved long-term and short-term memory network
CN115456306A (en) Bus load prediction method, system, equipment and storage medium
CN112560997B (en) Fault identification model training method, fault identification method and related device
CN117075582A (en) Industrial process generalized zero sample fault diagnosis method based on DSECMR-VAE
CN114519471A (en) Electric load prediction method based on time sequence data periodicity
CN118013408A (en) Load prediction method considering multi-scale time sequence information
CN112990584A (en) Automatic production decision system and method based on deep reinforcement learning
CN112184317A (en) Waste mobile phone pricing method based on value-preserving rate and discrete neural network
CN114781577A (en) Buck circuit fault diagnosis method based on VMD-DCNN-SVM
CN114266201A (en) Self-attention elevator trapping prediction method based on deep learning
Tran et al. Effects of Data Standardization on Hyperparameter Optimization with the Grid Search Algorithm Based on Deep Learning: A Case Study of Electric Load Forecasting
CN116776209A (en) Method, system, equipment and medium for identifying operation state of gateway metering device
CN116739130A (en) Multi-time scale load prediction method of TCN-BiLSTM network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200103
