CN116245222A - Power grid icing prediction method, device, equipment and storage medium - Google Patents

Power grid icing prediction method, device, equipment and storage medium

Info

Publication number
CN116245222A
CN116245222A (application CN202310031328.9A)
Authority
CN
China
Prior art keywords
prediction
prediction model
model
prediction result
icing
Prior art date
Legal status
Pending
Application number
CN202310031328.9A
Other languages
Chinese (zh)
Inventor
潘浩
周仿荣
耿浩
曹俊
马仪
钱国超
于辉
马宏明
马御棠
文刚
张辉
徐真
高振宇
马显龙
彭兆裕
许保瑜
卜威
曹家军
Current Assignee
Electric Power Research Institute of Yunnan Power Grid Co Ltd
Original Assignee
Electric Power Research Institute of Yunnan Power Grid Co Ltd
Priority date
Filing date
Publication date
Application filed by Electric Power Research Institute of Yunnan Power Grid Co Ltd filed Critical Electric Power Research Institute of Yunnan Power Grid Co Ltd
Priority to CN202310031328.9A priority Critical patent/CN116245222A/en
Publication of CN116245222A publication Critical patent/CN116245222A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/04Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/088Non-supervised learning, e.g. competitive learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063Operations research, analysis or management
    • G06Q10/0635Risk analysis of enterprise or organisation activities
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/06Energy or water supply
    • HELECTRICITY
    • H02GENERATION; CONVERSION OR DISTRIBUTION OF ELECTRIC POWER
    • H02GINSTALLATION OF ELECTRIC CABLES OR LINES, OR OF COMBINED OPTICAL AND ELECTRIC CABLES OR LINES
    • H02G1/00Methods or apparatus specially adapted for installing, maintaining, repairing or dismantling electric cables or lines
    • H02G1/02Methods or apparatus specially adapted for installing, maintaining, repairing or dismantling electric cables or lines for overhead lines or cables
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y04INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
    • Y04SSYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
    • Y04S10/00Systems supporting electrical power generation, transmission or distribution
    • Y04S10/50Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications


Abstract

An embodiment of the application discloses a power grid icing prediction method, a power grid icing prediction device, computer equipment, and a computer-readable storage medium. The power grid icing prediction method comprises: generating a training sample set from historical data and training an initial model with it to obtain a first prediction model; predicting with the first prediction model to obtain a first prediction result, and judging whether the first prediction model and the first prediction result meet a precision condition; if either the first prediction result or the first prediction model does not meet the precision condition, performing iterative parameter adjustment on the first prediction model according to an optimization algorithm to obtain a second prediction model, predicting with the second prediction model to obtain a second prediction result, and outputting the second prediction result as the prediction result; and if both the first prediction result and the first prediction model meet the precision condition, outputting the first prediction result as the prediction result. The method and device can thereby predict the risk of power grid icing disasters with high calculation accuracy.

Description

Power grid icing prediction method, device, equipment and storage medium
Technical Field
The application belongs to the technical field of meteorological disaster prediction, and particularly relates to a power grid icing prediction method, a power grid icing prediction device, computer equipment and a computer readable storage medium.
Background
With the global increase in disastrous weather such as low temperatures, rain, snow and freezing, power grid disaster events caused by severe weather continue to rise. Damage caused by icing of power lines is especially serious: in light cases it causes flashover trips, and in severe cases it causes accidents such as damage to transmission line fittings, broken conductors and collapsed towers, seriously threatening the safe operation of the power grid and causing great economic loss and social impact. The icing disaster problem of transmission lines has become one of the greatest current threats to line safety. Power grid icing disasters are mainly caused by glaze formed by freezing rain, accretion formed by wet snow, rime formed by heavy fog, and the like, and are most strongly influenced by conditions such as micro-topography and micro-climate. At present, related research on prediction and early warning of transmission line icing disasters already exists in the prior art, including risk assessment methods based on event evolution dynamics, disaster risk assessment methods based on scenario analysis, disaster risk assessment methods based on historical disaster data, and disaster risk assessment methods based on big-data intelligent algorithms.
However, the existing methods have shortcomings. The risk assessment method based on event evolution dynamics and the disaster risk assessment method based on scenario analysis focus on physical-process analysis of event evolution, but icing disasters show historical regularity and uncertainty in their influencing factors, and these methods cannot fully exploit the value of historical data. Assessment based on historical disaster data has generally gone through three stages: the extreme-value assessment method, the probability assessment method and the fuzzy assessment method. The extreme-value assessment method shows obvious deviation in risk assessment; the probability assessment method shows large deviation when samples are few and their probability distribution cannot be obtained accurately; and the results of the fuzzy assessment method are mostly relations or fuzzy sets that cannot be compared directly. Risk assessment based on scenario analysis mostly simulates disaster risk scenarios, but concrete and effective execution of disaster risk assessment has not yet been addressed, so certain limitations exist. The disaster risk assessment method based on big-data intelligent algorithms mainly relies on the accumulation of historical data and experience, performing icing disaster risk assessment through preliminary machine learning combined with related models, but general intelligent algorithms remain at a one-sided machine learning stage and cannot deeply learn the characteristics of icing disaster risk, so prediction and early-warning accuracy is low. Therefore, a transmission line icing disaster risk prediction and early-warning method with stronger initiative, a higher degree of intelligence and a wider application range is urgently needed.
The foregoing description is provided for general background information and does not necessarily constitute prior art.
Disclosure of Invention
In view of the foregoing, it is necessary to provide a grid icing prediction method, a grid icing prediction apparatus, a computer device, and a computer-readable storage medium, which can effectively predict a grid icing disaster risk.
The technical problems addressed by this application are solved by adopting the following technical solutions:
the application provides a power grid icing prediction method, which comprises the following steps: acquiring historical icing disaster data and line icing physical data to generate a training sample set; establishing an initialization model, training the initialization model according to a training sample set to obtain first parameters, and setting the initialization model to obtain a first prediction model; predicting according to the first prediction model to obtain a first prediction result, and judging whether the first prediction model and the first prediction result meet the precision condition; if the first prediction result or the first prediction model does not meet the precision condition, performing iterative parameter adjustment on the first prediction model according to an optimization algorithm to obtain a second prediction model; predicting according to the second prediction model to obtain a second prediction result, and taking the second prediction result as a prediction result and outputting the prediction result; and if the first prediction result and the first prediction model both meet the precision condition, taking the first prediction result as a prediction result and outputting the prediction result.
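The train / check / optimize / re-check flow described above can be sketched as a small control loop. The function arguments below are illustrative stand-ins, not the patent's actual models:

```python
def predict_grid_icing(initial_model, predict, meets_precision, optimize, max_iters=100):
    """Control-flow sketch of the method: train, check precision, optionally
    iterate parameter adjustment, then output the accepted prediction result."""
    model = initial_model                  # first prediction model
    result = predict(model)                # first prediction result
    if meets_precision(model, result):     # precision condition already met
        return result                      # output the first prediction result
    for _ in range(max_iters):             # iterative parameter adjustment
        model = optimize(model)            # second prediction model
        result = predict(model)            # second prediction result
        if meets_precision(model, result):
            break
    return result                          # output the second prediction result

# Toy demonstration: the "model" is a scalar ice-thickness estimate (mm) that
# the optimizer nudges toward an observed thickness of 12 mm.
observed = 12.0
result = predict_grid_icing(
    initial_model=5.0,
    predict=lambda m: m,
    meets_precision=lambda m, r: abs(r - observed) <= 0.5,
    optimize=lambda m: m + 0.5 * (observed - m),
)
```

In the method itself, each stand-in corresponds to the model training, precision check, and optimization steps detailed in the embodiments below.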
In an alternative embodiment of the present application, the line icing physical data includes meteorological data, geographic data, and line data; the initialization model is a restricted Boltzmann machine model comprising a display layer and a hidden layer; the display layer is composed of meteorological data and geographic data and comprises M neurons; the hidden layer is composed of line data and comprises N neurons; M and N are integers greater than zero. Establishing the initialization model includes: setting initial parameters of the initialization model according to the training sample set, wherein the initial parameters include the initial weights between the display layer and the hidden layer, the initial display-layer bias of each neuron in the display layer, and the initial hidden-layer bias of each neuron in the hidden layer.
In an alternative embodiment of the present application, the first parameter includes a first display-layer bias, a first hidden-layer bias, and a first inter-layer weight. Training the initialization model according to the training sample set to obtain the first parameter and setting it into the initialization model to obtain the first prediction model includes: setting an excitation function; when the state of the display layer is known from the training sample set, calculating the activation probability of each neuron in the hidden layer according to the excitation function, so as to update the initial hidden-layer bias; when the state of the hidden layer is known from the training sample set, calculating the activation probability of each neuron in the display layer according to the excitation function, so as to update the initial display-layer bias; updating the initial weights according to the updated initial display-layer bias and the updated initial hidden-layer bias; repeatedly and iteratively updating the initial parameters until they meet the iteration condition; obtaining the initial parameters of the last iteration round; maximizing and solving the log-likelihood function of the unsupervised restricted Boltzmann machine according to these initial parameters; and setting the solving result into the initialization model as the first parameter to obtain the first prediction model.
In an optional embodiment of the present application, judging, from a first prediction result obtained by prediction with the first prediction model, whether the first prediction result and the first prediction model meet the precision condition includes: generating a first input data set according to the training sample set and injecting it into the first prediction model to obtain the first prediction result, wherein the first input data set comprises a disaster feature vector generated from the historical icing disaster data and line icing physical data, a historical disaster occurrence identifier, and a disaster duration determined from the historical icing disaster data, and the first prediction result comprises a first predicted disaster occurrence identifier, a first predicted disaster duration and a first predicted disaster grade; obtaining the predicted ice coating thickness in the first predicted disaster grade, and calculating an average precision from the predicted ice coating thickness and the actual ice coating thickness in the historical icing disaster data; judging whether the average precision is smaller than or equal to a precision threshold; and judging whether the first prediction model meets the iteration condition. If the average precision is larger than the precision threshold, or the first prediction model does not meet the iteration condition, this is the case in which the first prediction result or the first prediction model does not meet the precision condition; if the average precision is smaller than or equal to the precision threshold and the first prediction model meets the iteration condition, this is the case in which both the first prediction result and the first prediction model meet the precision condition.
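The precision condition can be illustrated as follows. The patent does not spell out the exact precision formula, so the mean absolute deviation between predicted and actual ice coating thickness is assumed here, and all names and values are illustrative:

```python
def average_precision(predicted_mm, actual_mm):
    # Mean absolute deviation between predicted and actual icing thickness (mm);
    # this exact formula is an assumption, not given in the text.
    assert len(predicted_mm) == len(actual_mm) and predicted_mm
    return sum(abs(p - a) for p, a in zip(predicted_mm, actual_mm)) / len(predicted_mm)

def meets_precision_condition(predicted_mm, actual_mm, threshold_mm,
                              iterations, max_iterations):
    # Both checks from the text: average precision <= threshold AND the model
    # still satisfies the iteration condition.
    return (average_precision(predicted_mm, actual_mm) <= threshold_mm
            and iterations <= max_iterations)

predicted = [10.2, 8.9, 15.1, 12.0]   # predicted ice coating thickness (mm)
actual    = [10.0, 9.5, 14.0, 12.4]   # actual thickness from historical data
ok = meets_precision_condition(predicted, actual, threshold_mm=1.0,
                               iterations=3, max_iterations=50)
```

If `ok` is false, the method proceeds to the iterative parameter adjustment of the first prediction model.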
In an alternative embodiment of the present application, performing iterative parameter adjustment on the first prediction model according to an optimization algorithm to obtain a second prediction model, including: acquiring a plurality of disaster feature vectors generated according to historical icing disaster data and line icing physical data to generate an input parameter set; establishing a third prediction model based on the input parameter set and the first prediction model, and judging whether the third prediction result and the third prediction model meet the precision condition according to a third prediction result obtained by predicting the third prediction model, wherein the third prediction model is established according to an optimization algorithm; when the third prediction model does not meet the precision condition, updating the first parameter in the third prediction model into the second parameter according to an optimization algorithm, and acquiring the characteristic elements again for iteration; and when the third prediction model meets the precision condition, setting the third prediction model according to the second parameter of the last iteration round to obtain a second prediction model.
In an optional embodiment of the present application, a third prediction model is established based on the input parameter set and the first prediction model, and a third prediction result obtained by predicting according to the third prediction model includes: setting weight connection of display layer neurons in a first prediction model according to an input parameter set, and setting transfer functions of hidden layer neurons in the first prediction model to obtain a third prediction model; and injecting the input parameter set into a third prediction model, calculating the input parameter set by using the connection of the transfer function and the weight, and outputting the calculated result when the calculated result meets the neuron threshold value, thereby obtaining a third prediction result.
In an alternative embodiment of the present application, the second parameter includes a second display layer bias, a second hidden layer bias; updating the first parameter in the third prediction model to the second parameter according to the optimization algorithm, including: updating the first parameter in the third prediction model into the second parameter according to the back propagation algorithm, and adjusting the connection weight of the third prediction model, wherein the connection weight comprises the hidden layer neuron input weight and the hidden layer neuron output weight.
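A minimal sketch of such a back-propagation weight update, using a toy network with a single hidden neuron; the actual model has many neurons per layer, and the learning rate, network size and training target here are illustrative assumptions:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def backprop_step(x, target, w_in, w_out, lr=0.5):
    """One gradient-descent update of the hidden-neuron input weight (w_in) and
    output weight (w_out) for a 1-input, 1-hidden, 1-output toy network."""
    h = sigmoid(w_in * x)                 # hidden-layer activation
    y = sigmoid(w_out * h)                # network output
    delta_out = (y - target) * y * (1 - y)        # output-layer error signal
    delta_hid = delta_out * w_out * h * (1 - h)   # backpropagated to hidden layer
    return w_in - lr * delta_hid * x, w_out - lr * delta_out * h

w_in, w_out = 0.5, -0.3
for _ in range(5000):                     # fit a single (input, target) pair
    w_in, w_out = backprop_step(x=1.0, target=0.8, w_in=w_in, w_out=w_out)
output = sigmoid(w_out * sigmoid(w_in * 1.0))   # approaches the target of 0.8
```

The same chain-rule logic extends to adjusting the hidden-layer neuron input weights and output weights of the third prediction model.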
The application also provides a power grid icing prediction device, which comprises: the sample set generation module is used for acquiring historical icing disaster data and line icing physical data to generate a training sample set; the initial model building module is used for building an initial model, training the initial model according to a training sample set to obtain first parameters and setting the initial model to obtain a first prediction model; the precision detection module is used for carrying out prediction according to the first prediction model to obtain a first prediction result, and judging whether the first prediction model and the first prediction result meet precision conditions or not; the first prediction module is used for carrying out iterative parameter adjustment on the first prediction model according to an optimization algorithm to obtain a second prediction model if any one of the first prediction result or the first prediction model does not meet the precision condition; predicting according to the second prediction model to obtain a second prediction result, and taking the second prediction result as a prediction result and outputting the prediction result; and the second prediction module is used for taking the first prediction result as the prediction result and outputting the prediction result if the first prediction result and the first prediction model both meet the precision condition.
The application also provides a computer device comprising a processor and a memory: the processor is configured to execute the computer program stored in the memory to implement the method as described above.
The present application also provides a computer readable storage medium storing a computer program which when executed by a processor implements a method as described above.
By adopting the embodiment of the application, the method has the following beneficial effects:
The method and device not only consider the internal and external factors affecting transmission line icing disasters, but also make full use of the power grid's historical icing disaster information, and therefore accord better with the actual situation of transmission line icing disaster risk. Meanwhile, by combining unsupervised training of a deep neural network formed from restricted Boltzmann machines with supervised training using a back-propagation neural network, the model can learn the structural information of the data, which helps improve the accuracy of power grid icing disaster risk prediction and early warning.
The foregoing description is only an overview of the technical solutions of the present application, and may be implemented according to the content of the specification, so that the foregoing and other objects, features and advantages of the present application can be more clearly understood, and the following detailed description of the preferred embodiments is given with reference to the accompanying drawings. It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Wherein:
fig. 1 is a flow chart of a method for predicting ice coating on a power grid according to an embodiment;
fig. 2 is a schematic block diagram of a power grid icing prediction apparatus according to an embodiment;
fig. 3 is a schematic block diagram of a computer device according to an embodiment.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
Aiming at the defects and the shortcomings in the prior art, the application provides a power grid icing prediction method which comprises steps S110-S150. In order to clearly describe the method for predicting ice coating on the power grid provided in this embodiment, please refer to fig. 1.
Step S110: historical icing disaster data and line icing physical data are acquired to generate a training sample set.
Step S120: and establishing an initialization model, training the initialization model according to a training sample set to obtain first parameters, and setting the initialization model to obtain a first prediction model.
In one embodiment, the line icing physical data includes meteorological data, geographic data, and line data; the initialization model is a restricted Boltzmann machine model comprising a display layer and a hidden layer; the display layer is composed of meteorological data and geographic data and comprises M neurons; the hidden layer is composed of line data and comprises N neurons; M and N are integers greater than zero. Step S120, establishing the initialization model, includes: setting initial parameters of the initialization model according to the training sample set, wherein the initial parameters include the initial weights between the display layer and the hidden layer, the initial display-layer bias of each neuron in the display layer, and the initial hidden-layer bias of each neuron in the hidden layer.
In one embodiment, a model for prediction, i.e., a first prediction model, needs to be first built, and the first prediction model needs to be built by training an initial model, so that training by using a training sample is also needed. The training sample set is mainly generated by historical icing disaster data and line icing physical data, and the characteristic factors of icing disaster prediction are influenced together. Specifically, the historical icing disaster data may include data of whether the line suffers from icing disasters, the number of days, severity, thickness of icing, and the like; the line icing physical data can comprise meteorological data, geographic data and line data, wherein the meteorological data, the geographic data and the line data are used as external factors of icing disasters, and the line icing physical data are used as internal factors of the icing disasters. Still further, the meteorological data may include, but is not limited to, spatial temperature, wind speed and direction, liquid water content in the air, diameter of supercooled water droplets in the air or cloud, etc.; the geographic data may include, but is not limited to, mountain strike, mountain watershed, tuyere region, altitude, etc.; the line data may include, but is not limited to, characteristics of the transmission line itself, such as wire orientation, wire suspension height, wire diameter, load current, etc. 
The training sample set is generated from the above data jointly. The generated samples may be split into a training data set for training and a verification data set for verification, with the training data set larger than the verification data set; in a preferred embodiment the training data set may be roughly an order of magnitude larger, for example out of 36 generated sample groups, 32 groups may serve as the training data set and the last 4 groups as the verification data set. For the training data set, a concrete input-data pattern may be A = {(X_1, y_1, z_1), (X_2, y_2, z_2), ..., (X_36, y_36, z_36)}, where X_i is the characteristic-factor vector of the i-th transmission-line icing disaster sample; y_i ∈ {-1, 1} is the icing disaster flag of the i-th sample, -1 indicating that the transmission line had no icing accident and 1 indicating that it had an icing accident; and z_i ∈ {0, 1, 2, 3, ...} is the duration (in days) of the icing disaster accident on the i-th transmission line.
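The sample format and split can be sketched as follows; the feature values are synthetic placeholders (a real X_i would hold the meteorological, geographic and line features listed earlier):

```python
import random

random.seed(0)

def make_sample():
    """One training tuple (X_i, y_i, z_i); values are synthetic placeholders."""
    X = [random.uniform(-5, 5),     # spatial temperature (deg C)
         random.uniform(0, 15),     # wind speed (m/s)
         random.uniform(0, 3),      # liquid water content in air (g/m^3)
         random.uniform(0, 3000)]   # altitude (m)
    y = random.choice([-1, 1])                  # icing accident flag
    z = random.randint(1, 7) if y == 1 else 0   # disaster duration (days)
    return (X, y, z)

A = [make_sample() for _ in range(36)]        # 36 sample groups
train_set, validation_set = A[:32], A[32:]    # 32 training, 4 verification
```
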
In an embodiment, the model needs to be built. It is worth noting that the first prediction model, the second prediction model, and so on described later are all built on the basis of the initialization model. In a preferred embodiment, the initialization model may be a restricted Boltzmann machine model comprising a display layer and a hidden layer. Specifically, the display layer is v = {v_i | i = 1, ..., M}, the hidden layer is h = {h_j | j = 1, ..., N}, and the inter-layer connection weights are ω = {ω_ij | i = 1, ..., M; j = 1, ..., N}, where M and N are the numbers of neurons in the display layer and the hidden layer and may specifically take the values M = 20 and N = 20. The display layer is composed of meteorological data and geographic data, and the hidden layer is composed of line data; there are no connections within a layer, and the two layers are symmetrically connected without self-feedback, i.e., the model is an unsupervised generative model. For a given state (v, h), its energy function E(v, h) can be defined, and the joint probability distribution of the display layer and the hidden layer is denoted P(v, h), where the energy function is defined as:

E(v, h) = -Σ_{i=1}^{M} a_i v_i - Σ_{j=1}^{N} b_j h_j - Σ_{i=1}^{M} Σ_{j=1}^{N} v_i ω_ij h_j    (1)

Here, v_i denotes the i-th neuron of the display layer, h_j denotes the j-th neuron of the hidden layer, ω_ij is the connection weight between the i-th display-layer neuron and the j-th hidden-layer neuron, a_i is the bias of the i-th display-layer neuron, b_j is the bias of the j-th hidden-layer neuron, and M and N are the numbers of nodes in the display layer and the hidden layer, respectively.
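The energy function (1) and the joint distribution P(v, h) can be computed directly. The sketch below uses a tiny 2-neuron display layer and 2-neuron hidden layer instead of the M = N = 20 configuration in the text, with arbitrary illustrative parameter values:

```python
import itertools
import math

def rbm_energy(v, h, a, b, w):
    """E(v,h) = -sum_i a_i*v_i - sum_j b_j*h_j - sum_ij v_i*w_ij*h_j."""
    M, N = len(v), len(h)
    return (-sum(a[i] * v[i] for i in range(M))
            - sum(b[j] * h[j] for j in range(N))
            - sum(v[i] * w[i][j] * h[j] for i in range(M) for j in range(N)))

a, b = [0.1, -0.2], [0.3, 0.0]          # display-layer and hidden-layer biases
w = [[0.5, -0.1], [0.2, 0.4]]           # inter-layer connection weights
E = rbm_energy([1, 0], [0, 1], a, b, w)

# Joint distribution P(v,h) = exp(-E(v,h)) / Z, with Z summing over all
# 2^M * 2^N binary states of the two layers.
Z = sum(math.exp(-rbm_energy(list(v), list(h), a, b, w))
        for v in itertools.product([0, 1], repeat=2)
        for h in itertools.product([0, 1], repeat=2))
P = math.exp(-E) / Z
```

Exhaustively enumerating states for Z is only feasible at toy scale; at M = N = 20 the partition function is intractable, which is why iterative approximate training is used instead.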
In an embodiment, the first parameter includes a first display-layer bias, a first hidden-layer bias, and a first inter-layer weight. Step S120, training the initialization model according to the training sample set to obtain the first parameter and setting it into the initialization model to obtain the first prediction model, includes: setting an excitation function; when the state of the display layer is known from the training sample set, calculating the activation probability of each neuron in the hidden layer according to the excitation function, so as to update the initial hidden-layer bias; when the state of the hidden layer is known from the training sample set, calculating the activation probability of each neuron in the display layer according to the excitation function, so as to update the initial display-layer bias; updating the initial weights according to the updated initial display-layer bias and the updated initial hidden-layer bias; repeatedly and iteratively updating the initial parameters until they meet the iteration condition; obtaining the initial parameters of the last iteration round; maximizing and solving the log-likelihood function of the unsupervised restricted Boltzmann machine according to these initial parameters; and setting the solving result into the initialization model as the first parameter to obtain the first prediction model.
In one embodiment, the training process may first set the excitation function:

f(x) = 1/(1 + e^(−x))    (2)
The conditional probabilities of the display layer and the hidden layer are then calculated respectively. That is, when the state of the display layer is known, the activation probability of the j-th hidden layer neuron may be calculated as:
P(h_j = 1 | v) = f(b_j + Σ_{i=1}^{M} ω_ij v_i)
when the hidden layer is known, the activation probability of the i-th display layer neuron may be calculated as:
P(v_i = 1 | h) = f(a_i + Σ_{j=1}^{N} ω_ij h_j)
wherein a_i is the bias of the i-th display layer neuron and b_j is the bias of the j-th hidden layer neuron. For the iteration condition, convergence of a loss value computed on the training sample set (either mean square error or cross entropy can be used) is taken as the criterion. If the condition is not satisfied, a further round of iterative parameter adjustment is performed; if the condition is satisfied, the initial parameters of the last iteration round are obtained, and the log-likelihood function of the unsupervised restricted Boltzmann machine is maximized according to these parameters. Specifically, denoting the parameters θ = {a_i, b_j, ω_ij}, i.e., the biases and the connection weights, the solution is:
θ* = arg max_θ L(θ)
wherein T in the formula is the number of known training set samples, v^(t) is the known t-th input sample, and L(θ) is the log-likelihood function on the training set, whose specific calculation formula is as follows:
L(θ) = Σ_{t=1}^{T} log P(v^(t))
Setting the solving result as the first parameter of the initialization model yields the first prediction model, where the first parameter consists of the display layer biases a_i, the hidden layer biases b_j, and the inter-layer weights ω_ij.
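The training loop described above (computing activation probabilities with the excitation function, then updating biases and weights until the loss converges) can be sketched with a contrastive-divergence style update; CD-1 is an assumption here, since the patent does not name its exact gradient rule:

```python
import numpy as np

def sigmoid(x):
    # Excitation function f(x) = 1 / (1 + e^(-x))
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, a, b, w, lr=0.1, rng=np.random.default_rng(0)):
    """One contrastive-divergence (CD-1) update of the RBM parameters.
    The patent only states that biases and weights are updated iteratively;
    CD-1 is an assumed, commonly used realisation of that update."""
    ph0 = sigmoid(b + v0 @ w)            # P(h_j = 1 | v): hidden activation probabilities
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    pv1 = sigmoid(a + w @ h0)            # P(v_i = 1 | h): display layer reconstruction
    ph1 = sigmoid(b + pv1 @ w)
    w += lr * (np.outer(v0, ph0) - np.outer(pv1, ph1))  # inter-layer weight update
    a += lr * (v0 - pv1)                 # display layer bias update
    b += lr * (ph0 - ph1)                # hidden layer bias update
    return a, b, w
```

In practice such a step would be repeated over the training sample set until the chosen loss (mean square error or cross entropy) converges.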
Step S130: predicting according to the first prediction model to obtain a first prediction result, and judging whether the first prediction model and the first prediction result meet the precision condition.
If the first prediction result or the first prediction model does not meet the accuracy condition, step S140 is executed: performing iterative parameter adjustment on the first prediction model according to an optimization algorithm to obtain a second prediction model; predicting according to the second prediction model to obtain a second prediction result, and taking the second prediction result as a prediction result and outputting the prediction result;
if the first prediction result and the first prediction model both meet the accuracy condition, step S150 is executed: and taking the first prediction result as a prediction result and outputting the prediction result.
In one embodiment, step S130, predicting according to the first prediction model to obtain a first prediction result and judging whether the first prediction model and the first prediction result meet the precision condition, includes: generating a first input data set according to the training sample set and injecting it into the first prediction model to obtain the first prediction result, wherein the first input data set comprises disaster feature vectors generated according to the historical icing disaster data and the line icing physical data, together with a historical disaster occurrence identifier and a disaster duration determined according to the historical icing disaster data, and the first prediction result comprises a first predicted disaster occurrence identifier, a first predicted disaster duration and a first predicted disaster grade; obtaining the predicted icing thickness in the first predicted disaster grade, and calculating the average precision from the predicted icing thickness and the actual icing thickness in the historical icing disaster data; judging whether the average precision is smaller than or equal to a precision threshold, and judging whether the first prediction model meets the iteration condition; if the average precision is larger than the precision threshold, or the first prediction model does not meet the iteration condition, this is the case where either the first prediction result or the first prediction model does not meet the precision condition; if the average precision is smaller than or equal to the precision threshold and the first prediction model meets the iteration condition, this is the case where both the first prediction result and the first prediction model meet the precision condition.
In one embodiment, the first prediction result is obtained by a process similar to the iterative process described above. First, a first input data set is generated according to the training sample set, wherein the first input data set comprises disaster feature vectors generated according to the historical icing disaster data and the line icing physical data, together with a historical disaster occurrence identifier and a disaster duration determined according to the historical icing disaster data. The first prediction model then outputs a first prediction result, which comprises a first predicted disaster occurrence identifier, a first predicted disaster duration and a first predicted disaster grade. The predicted icing thickness in the first predicted disaster grade is obtained, and the average precision is calculated from the predicted icing thickness and the actual icing thickness in the historical icing disaster data, for example by the formula:
δ = (1/n) Σ_{i=1}^{n} |X_i − Y_i| / X_i
wherein X_i is the actual icing thickness of the transmission line in the i-th sample, Y_i is the predicted icing thickness for the i-th sample, and n is the number of samples. The obtained average precision is used to judge whether the first prediction model and the first prediction result meet the precision condition.
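A minimal sketch of this average-precision check; the threshold value 0.10 and the thickness values are illustrative assumptions:

```python
def average_precision(actual, predicted):
    """Mean relative error between actual icing thickness X_i and
    predicted icing thickness Y_i; smaller means more accurate."""
    assert len(actual) == len(predicted) and actual, "need matching, non-empty samples"
    return sum(abs(x - y) / x for x, y in zip(actual, predicted)) / len(actual)

# Illustrative values: actual vs predicted transmission-line icing thickness (mm)
X = [12.0, 10.0, 8.0]
Y = [11.4, 10.5, 8.4]
delta = average_precision(X, Y)     # mean relative error, here about 0.05
PRECISION_THRESHOLD = 0.10          # hypothetical precision threshold
meets_precision = delta <= PRECISION_THRESHOLD
```

A model passing this check still needs to satisfy the separate iteration condition before its result is output directly.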
In one embodiment, step S140: performing iterative parameter adjustment on the first prediction model according to an optimization algorithm to obtain a second prediction model, wherein the iterative parameter adjustment comprises the following steps: acquiring a plurality of disaster feature vectors generated according to historical icing disaster data and line icing physical data to generate an input parameter set; establishing a third prediction model based on the input parameter set and the first prediction model, and judging whether the third prediction result and the third prediction model meet the precision condition according to a third prediction result obtained by predicting the third prediction model, wherein the third prediction model is established according to an optimization algorithm; when the third prediction model does not meet the precision condition, updating the first parameter in the third prediction model into the second parameter according to an optimization algorithm, and acquiring the characteristic elements again for iteration; and when the third prediction model meets the precision condition, setting the third prediction model according to the second parameter of the last iteration round to obtain a second prediction model.
In one embodiment, a third prediction model is established based on the input parameter set and the first prediction model, and a third prediction result obtained by predicting according to the third prediction model includes: setting weight connection of display layer neurons in a first prediction model according to an input parameter set, and setting transfer functions of hidden layer neurons in the first prediction model to obtain a third prediction model; and injecting the input parameter set into a third prediction model, calculating the input parameter set by using the connection of the transfer function and the weight, and outputting the calculated result when the calculated result meets the neuron threshold value, thereby obtaining a third prediction result.
In an embodiment, the second parameter includes a second display layer bias, a second hidden layer bias; updating the first parameter in the third prediction model to the second parameter according to the optimization algorithm, including: updating the first parameter in the third prediction model into the second parameter according to the back propagation algorithm, and adjusting the connection weight of the third prediction model, wherein the connection weight comprises the hidden layer neuron input weight and the hidden layer neuron output weight.
In an embodiment, if the first prediction model and the first prediction result do not meet the preset condition, iterative parameter adjustment needs to be performed again on the first prediction model so that the corresponding condition is met. Specifically, K disaster feature vectors in step S120 may first be obtained to generate an input parameter set {x_1, x_2, …, x_K}, where each disaster feature vector is composed of the key icing influence factors from the historical icing disaster data and the line icing physical data, and the value of K does not exceed the number of neurons, i.e., K is smaller than M and N. Further, a third prediction model is established according to the input parameter set and the first prediction model, wherein the third prediction model is a power grid icing disaster risk prediction model based on a back propagation algorithm. The third prediction model is specifically described as follows: {x_1, x_2, …, x_K} are the inputs to the neurons in the model, λ_i is the threshold of the i-th neuron, ω_1i, ω_2i, …, ω_Ki are the connection weights of the i-th neuron for x_1, x_2, …, x_K respectively, y_i is the output of the i-th neuron, and f is the transfer function that determines the output of the i-th neuron given the inputs x_1, x_2, …, x_K and the threshold, i.e.:
y_i = f(Σ_{k=1}^{K} ω_ki x_k − λ_i)
as described above, the third prediction model and the first prediction model have similar features, so the calculation process of the third prediction model to obtain the third prediction result may refer to the calculation process of the first prediction model, which is not described herein. The third prediction model and the third prediction result are the same as the first prediction model and the first prediction result, and whether the precision condition is satisfied is determined. If not, further iteration needs to be executed, namely, the iteration process can be to finely adjust and update the bias and the connection weights of the parameters in the power grid icing disaster risk prediction model according to a back propagation algorithm, wherein the connection weights comprise weights of input and hidden neurons and output. And if the precision condition is met, setting the third prediction model according to the second parameter of the last iteration round to obtain a second prediction model. And carrying out the same prediction process as the first prediction model according to the second prediction model to obtain a second prediction result, and taking the second prediction result as a prediction result and outputting the second prediction result.
In an embodiment, if the first prediction model and the first prediction result meet the preset conditions, it is indicated that both the model and the result meet the corresponding conditions, and the first prediction result may be directly output as the prediction result, which may include, but is not limited to, whether a disaster occurs, duration, icing thickness, disaster level, and the like.
Wherein, the prediction result of the entire back propagation neural network is defined as follows:

Y = f(Σ_{i=1}^{N} ω_i y_i − λ)

where ω_i is the output connection weight of the i-th neuron and λ is the output threshold.
Therefore, the method and the device not only consider the internal and external factors affecting icing disasters on power transmission lines, but also make full use of the historical icing disaster information of the power grid, which better matches the actual situation of transmission line icing disaster risk and is more practical. Meanwhile, by combining unsupervised training of a deep neural network formed by restricted Boltzmann machines with supervised training of a back propagation neural network, the model can learn the structural information of the data, which helps achieve high-precision prediction and early warning of power grid icing disaster risk.
FIG. 2 illustrates an internal block diagram of a power grid icing prediction apparatus in one embodiment. As shown in FIG. 2, the power grid icing prediction apparatus 20 includes: the sample set generating module 21, configured to acquire historical icing disaster data and line icing physical data to generate a training sample set; the initial model building module 22, configured to establish an initialization model, train it according to the training sample set to obtain a first parameter, and set the initialization model to obtain a first prediction model; the precision detection module 23, configured to predict according to the first prediction model to obtain a first prediction result, and judge whether the first prediction model and the first prediction result meet the precision condition; the first prediction module 24, configured to, if either the first prediction result or the first prediction model does not meet the precision condition, iteratively tune the first prediction model according to an optimization algorithm to obtain a second prediction model, predict according to the second prediction model to obtain a second prediction result, and take the second prediction result as the prediction result and output it; and the second prediction module 25, configured to, if the first prediction result and the first prediction model both meet the precision condition, take the first prediction result as the prediction result and output it. By means of the power grid icing prediction apparatus 20 provided in this embodiment, the steps of the power grid icing prediction method described above can be implemented. The specific implementation process and achievable technical effects of each step have been described in detail above and are not repeated here.
In one embodiment, a computer device is provided comprising a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to perform the steps of: step S110: acquiring historical icing disaster data and line icing physical data to generate a training sample set; step S120: establishing an initialization model, training the initialization model according to a training sample set to obtain first parameters, and setting the initialization model to obtain a first prediction model; step S130: predicting according to the first prediction model to obtain a first prediction result, and judging whether the first prediction model and the first prediction result meet the precision condition; if the first prediction result or the first prediction model does not meet the accuracy condition, step S140 is executed: performing iterative parameter adjustment on the first prediction model according to an optimization algorithm to obtain a second prediction model; predicting according to the second prediction model to obtain a second prediction result, and taking the second prediction result as a prediction result and outputting the prediction result; if the first prediction result and the first prediction model both meet the accuracy condition, step S150 is executed: and taking the first prediction result as a prediction result and outputting the prediction result.
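The overall flow of steps S110 to S150 can be sketched as follows; the `_Model` class and its returned dictionary are placeholder assumptions standing in for the restricted Boltzmann machine and back propagation models described above:

```python
class _Model:
    """Placeholder for the patent's prediction models (assumption: the real
    models are the RBM / back propagation networks described above)."""
    def __init__(self, meets_precision):
        self.meets_precision = meets_precision

    def predict(self, samples):
        # Illustrative prediction result: occurrence flag, duration, thickness
        return {"disaster_occurs": True, "duration_h": 6, "icing_thickness_mm": 9.5}

def predict_grid_icing(train_samples, first_model_ok):
    model = _Model(first_model_ok)          # S110-S120: build the first prediction model
    result = model.predict(train_samples)   # S130: first prediction result
    if model.meets_precision:               # S150: model and result meet the precision condition
        return result
    tuned = _Model(True)                    # S140: iterative tuning yields the second model
    return tuned.predict(train_samples)     # the second prediction result is output
```

The branch structure mirrors the claim language: only when both the model and its result pass the precision condition is the first result output directly.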
FIG. 3 illustrates an internal block diagram of a computer device in one embodiment. The computer device may specifically be a terminal or a server. As shown in FIG. 3, the computer device includes a processor, a memory, and a network interface connected by a system bus. The memory includes a nonvolatile storage medium and an internal memory. The nonvolatile storage medium of the computer device stores an operating system, and may also store a computer program that, when executed by the processor, causes the processor to implement the power grid icing prediction method. The internal memory may also store a computer program that, when executed by the processor, causes the processor to perform the power grid icing prediction method. It will be appreciated by those skilled in the art that the structure shown in FIG. 3 is merely a block diagram of some of the structures associated with the present application and does not limit the computer device to which the present application may be applied; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, the present application also proposes a computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the steps of the method described above.
those skilled in the art will appreciate that all or part of the processes in the methods of the above embodiments may be implemented by a computer program for instructing relevant hardware, where the program may be stored in a non-volatile computer readable storage medium, and where the program, when executed, may include processes in the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the various embodiments provided herein may include non-volatile and/or volatile memory. The nonvolatile memory can include Read Only Memory (ROM), programmable ROM (PROM), electrically Programmable ROM (EPROM), electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double Data Rate SDRAM (DDRSDRAM), enhanced SDRAM (ESDRAM), synchronous Link DRAM (SLDRAM), memory bus direct RAM (RDRAM), direct memory bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM), among others.
The technical features of the above embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The above examples only represent a few embodiments of the present application, which are described in more detail and are not to be construed as limiting the scope of the present application. It should be noted that it would be apparent to those skilled in the art that various modifications and improvements could be made without departing from the spirit of the present application, which would be within the scope of the present application. Accordingly, the scope of protection of the present application is to be determined by the claims appended hereto.

Claims (10)

1. The power grid icing prediction method is characterized by comprising the following steps of:
acquiring historical icing disaster data and line icing physical data to generate a training sample set;
establishing an initialization model, training the initialization model according to the training sample set to obtain a first parameter, and setting the initialization model to obtain a first prediction model;
predicting according to the first prediction model to obtain a first prediction result, and judging whether the first prediction model and the first prediction result meet a precision condition or not;
if the first prediction result or the first prediction model does not meet the precision condition, performing iterative parameter adjustment on the first prediction model according to an optimization algorithm to obtain a second prediction model; predicting according to the second prediction model to obtain a second prediction result, and taking the second prediction result as a prediction result and outputting the prediction result;
and if the first prediction result and the first prediction model both meet the precision condition, taking the first prediction result as the prediction result and outputting the prediction result.
2. The grid icing prediction method of claim 1, wherein the line icing physical data comprises meteorological data, geographic data, and line data; the initialization model is a restricted Boltzmann machine model and comprises a display layer and a hidden layer; the display layer is composed of the meteorological data and the geographic data and comprises M neurons; the hidden layer is composed of the line data and comprises N neurons; and M and N are integers greater than zero.
The establishing an initialization model comprises the following steps:
and setting initial parameters of the initialization model according to the training sample set, wherein the initial parameters comprise initial weights between the display layer and the hidden layer, and initial display layer bias of each neuron in the display layer and initial hidden layer bias of each neuron in the hidden layer.
3. The grid icing prediction method of claim 2, wherein the first parameter comprises a first display layer bias, a first hidden layer bias, and a first inter-layer weight;
training the initialization model according to the training sample set to obtain a first parameter and setting the initialization model to obtain a first prediction model, including:
setting an excitation function;
calculating the activation probability of each neuron in the hidden layer according to the excitation function when the display layer state is known from the training sample set, so as to update the initial hidden layer bias; calculating the activation probability of each neuron in the display layer according to the excitation function when the hidden layer is known from the training sample set, so as to update the initial display layer bias; and updating the initial weights according to the updated initial display layer bias and the updated initial hidden layer bias;
repeatedly iterating and updating the initial parameters until the initial parameters meet the iteration condition, obtaining the initial parameters of the last iteration round, maximizing the log-likelihood function of the unsupervised restricted Boltzmann machine according to the initial parameters, and setting the initialization model with the solving result as the first parameter to obtain the first prediction model.
4. The method for predicting ice coating on a power grid according to claim 1, wherein the determining whether the first prediction result and the first prediction model satisfy the accuracy condition according to the first prediction result obtained by predicting the first prediction model comprises:
generating a first input data set according to the training sample set and injecting the first prediction result obtained by the first prediction model, wherein the first input data set comprises a disaster feature vector generated according to the historical icing disaster data and the line icing physical data, a historical disaster occurrence identifier and a disaster duration time which are determined according to the historical icing disaster data, and the first prediction result comprises a first prediction disaster occurrence identifier, a first prediction disaster duration time and a first prediction disaster grade;
acquiring a predicted ice coating thickness in the first predicted disaster level, and calculating the predicted ice coating thickness and the actual ice coating thickness of the historical ice coating disaster data to obtain average precision;
judging whether the average precision is smaller than or equal to a precision threshold value; judging whether the first prediction model meets iteration conditions or not;
if the average precision is greater than the precision threshold, or the first prediction model does not meet the iteration condition, the method belongs to the situation that any one of the first prediction result or the first prediction model does not meet the precision condition;
if the average precision is smaller than or equal to the precision threshold and the first prediction model meets the iteration condition, the method belongs to the condition that both the first prediction result and the first prediction model meet the precision condition.
5. The method of claim 2, wherein iteratively tuning the first prediction model according to an optimization algorithm to obtain a second prediction model comprises:
acquiring a plurality of disaster feature vectors generated according to the historical icing disaster data and the line icing physical data to generate an input parameter set;
establishing a third prediction model based on the input parameter set and the first prediction model, and judging whether the third prediction result and the third prediction model meet the precision condition according to a third prediction result obtained by predicting the third prediction model, wherein the third prediction model is a prediction model established according to the optimization algorithm;
when the third prediction model does not meet the precision condition, updating the first parameter in the third prediction model to a second parameter according to the optimization algorithm, and acquiring the characteristic elements again for iteration;
and when the third prediction model meets the precision condition, setting the third prediction model according to the second parameter of the last iteration round so as to obtain the second prediction model.
6. The method of claim 5, wherein the establishing a third prediction model based on the input parameter set and the first prediction model, and predicting according to the third prediction model to obtain a third prediction result, includes:
setting the weight connection of the display layer neurons in the first prediction model according to the input parameter set, and setting the transfer function of the hidden layer neurons in the first prediction model to obtain the third prediction model;
and injecting the input parameter set into the third prediction model, calculating the input parameter set by using the transfer function and the weight connection, and outputting when the calculated result meets a neuron threshold value, so as to obtain the third prediction result.
7. The grid icing prediction method of claim 6, wherein the second parameter comprises a second display layer bias, a second hidden layer bias;
the updating the first parameter in the third prediction model to a second parameter according to the optimization algorithm includes:
updating the first parameter in the third prediction model to the second parameter according to a back propagation algorithm, and adjusting the connection weight of the third prediction model, wherein the connection weight comprises a hidden layer neuron input weight and a hidden layer neuron output weight.
8. A power grid icing prediction device, characterized by comprising:
the sample set generation module is used for acquiring historical icing disaster data and line icing physical data to generate a training sample set;
the initial model building module is used for building an initial model, training the initial model according to the training sample set to obtain a first parameter, and setting the initial model to obtain a first prediction model;
the precision detection module is used for predicting according to the first prediction model to obtain a first prediction result, and judging whether the first prediction model and the first prediction result meet a precision condition or not;
the first prediction module is used for carrying out iterative parameter adjustment on the first prediction model according to an optimization algorithm to obtain a second prediction model if any one of the first prediction result or the first prediction model does not meet the precision condition; predicting according to the second prediction model to obtain a second prediction result, and taking the second prediction result as a prediction result and outputting the prediction result;
and the second prediction module is used for taking the first prediction result as the prediction result and outputting the prediction result if the first prediction result and the first prediction model both meet the precision condition.
9. A computer device comprising a processor and a memory;
the processor is configured to execute a computer program stored in the memory to implement the method of any one of claims 1 to 7.
10. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program which, when executed by a processor, implements the method according to any of claims 1 to 7.
CN202310031328.9A 2023-01-10 2023-01-10 Power grid icing prediction method, device, equipment and storage medium Pending CN116245222A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310031328.9A CN116245222A (en) 2023-01-10 2023-01-10 Power grid icing prediction method, device, equipment and storage medium


Publications (1)

Publication Number Publication Date
CN116245222A true CN116245222A (en) 2023-06-09

Family

ID=86628925

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310031328.9A Pending CN116245222A (en) 2023-01-10 2023-01-10 Power grid icing prediction method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116245222A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117172554A (en) * 2023-10-31 2023-12-05 中国铁建电气化局集团有限公司 Icing disaster risk prediction method, device, equipment and storage medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination