CN114647979A - Transformer hot spot temperature prediction method based on kernel principal component analysis and long short-term memory network - Google Patents
Transformer hot spot temperature prediction method based on kernel principal component analysis and long short-term memory network
- Publication number
- CN114647979A (application number CN202210279113.4A)
- Authority
- CN
- China
- Prior art keywords
- data
- long short-term memory network
- characteristic
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F30/00—Computer-aided design [CAD]
- G06F30/20—Design optimisation, verification or simulation
- G06F30/27—Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/10—Complex mathematical operations
- G06F17/16—Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/044—Recurrent networks, e.g. Hopfield networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2119/00—Details relating to the type or aim of the analysis or the optimisation
- G06F2119/08—Thermal analysis or thermal optimisation
Abstract
The invention discloses a transformer hot spot temperature prediction method based on kernel principal component analysis (KPCA) and a long short-term memory (LSTM) network, comprising the following steps. Step 1: acquire parameters related to the transformer hot spot temperature with sensors and construct a feature set. Step 2: preprocess the feature data, screen the preprocessed data with kernel principal component analysis, calculate the contribution rate and cumulative contribution rate of each eigenvalue, retain the eigenvalues whose cumulative principal component contribution rate is greater than or equal to 90%, and construct a new feature set. Step 3: establish an initial long short-term memory network model, input the screened feature data into the network model for training, and build a transformer hot spot temperature prediction model based on the long short-term memory network. Step 4: input the valid test data processed by kernel principal component analysis into the constructed model, and the output layer outputs the predicted hot spot temperature. The method improves data accuracy and calculation speed, allows transformer temperature faults to be found in time, and helps ensure the stable and reliable operation of the power system.
Description
Technical Field
The invention belongs to the technical field of transformers and discloses a transformer hot spot temperature prediction method based on kernel principal component analysis and a long short-term memory network.
Background
At present, temperature monitoring is an important means of assessing the real-time operating condition of a transformer and ensuring its safe operation, and it is also one of the important prerequisites for unattended operation of a fully automated substation, so its importance is self-evident.
The fault state of a transformer can be judged from its temperature. Existing temperature measurement methods take two basic forms. The first is contact measurement, based on the principle that objects in the same equilibrium state have the same temperature: during measurement the instrument and the measured object reach thermal equilibrium, so the displayed result is accurate. In a contact temperature sensor, the temperature-measuring element is in direct contact with the measured object; after sufficient heat exchange the two reach thermal equilibrium, and the value of a physical parameter of the sensing element represents the temperature of the measured object. Its adaptability is weak and its measurement accuracy is limited. The second is non-contact measurement, which mainly uses optical radiation to measure the temperature of an object: any heated object converts part of its thermal energy into radiant energy, and the higher the temperature, the more energy it emits into the surrounding space. At present most domestic transformers use the oil surface temperature (top-layer oil temperature) as the switching signal for protecting safe operation, and the loss of radiant energy makes the prediction accuracy low. The invention therefore provides an indirect transformer hot spot temperature monitoring method: parameters related to the transformer hot spot temperature are collected with sensors, the monitoring data are processed by kernel principal component analysis to screen the effective information and eliminate interference and redundant information, and a transformer hot spot temperature prediction model is constructed with a long short-term memory network to calculate the transformer hot spot temperature.
Disclosure of Invention
In order to solve the problem that the transformer hot spot temperature is difficult to monitor directly, the invention provides a transformer hot spot temperature prediction method based on kernel principal component analysis and a long short-term memory network. The method integrates the data through kernel principal component analysis, reduces the dimensionality of the data in a high-dimensional space, and screens out the effective information in the data; a long short-term memory network is then used to construct a transformer hot spot temperature prediction model, and the Adam algorithm is adopted to optimize the model parameters, further improving the prediction accuracy. The technical scheme adopted by the invention to solve this technical problem is as follows:
A transformer hot spot temperature prediction method based on kernel principal component analysis and a long short-term memory network comprises the following steps:
Step 1, acquiring parameters related to the transformer hot spot temperature with sensors, the parameters comprising ambient temperature, ambient humidity, wind speed, transformer load current, active loss, reactive loss, top-layer oil temperature, bottom-layer oil temperature and surface temperature, and constructing a feature set from them;
Step 2, preprocessing the feature data, screening the preprocessed data with kernel principal component analysis, calculating the contribution rate and cumulative contribution rate of each eigenvalue, retaining the eigenvalues whose cumulative principal component contribution rate is greater than or equal to 90%, and constructing a new feature set;
Step 3, establishing an initial long short-term memory network model, inputting the screened feature data into the network model for training, and establishing a transformer hot spot temperature prediction model based on the long short-term memory network;
Step 4, inputting the valid test data processed by kernel principal component analysis into the constructed model, the output layer of which outputs the predicted hot spot temperature.
Further, the preprocessing of the data in step 2 specifically comprises the following steps:
1) performing integration and cleaning on the feature data collected by the sensors, namely screening out obviously abnormal data and replacing them with the mean value;
2) centering the cleaned data:
$x_{ij} \leftarrow x_{ij} - \mu_j$
where $x_{ij}$ is the $i$-th value of the $j$-th feature in the feature set and $\mu_j$ is the mean value of the $j$-th feature data in the feature set.
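For illustration only, the following is a minimal Python sketch of this preprocessing step, assuming the raw readings sit in a NumPy array with one row per sample and one column per feature; the three-standard-deviation rule used to flag "obviously abnormal data" is an assumption, since the text does not specify a criterion.

```python
import numpy as np

def preprocess(raw: np.ndarray) -> np.ndarray:
    """Integrate/clean and then center sensor feature data.

    raw: array of shape (n_samples, n_features).
    Assumption: 'obviously abnormal data' are taken to be missing values or
    values more than three standard deviations from the column mean; the
    patent does not specify the exact rule.
    """
    data = raw.astype(float).copy()
    col_mean = np.nanmean(data, axis=0)
    col_std = np.nanstd(data, axis=0)

    # 1) Cleaning: replace abnormal entries with the corresponding column mean.
    abnormal = np.isnan(data) | (np.abs(data - col_mean) > 3 * col_std)
    data = np.where(abnormal, col_mean, data)

    # 2) Centering: x_ij <- x_ij - mu_j for every feature column j.
    return data - data.mean(axis=0)
```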
Still further, the screening of the preprocessed data by kernel principal component analysis in step 2 specifically comprises the following steps:
1) input the data matrix $S = \{x_1, x_2, \ldots, x_l\}$ of $l$ rows and $m$ columns, where $x_i$ denotes the $i$-th sample and $m$ is the feature dimension; the data $S$ are mapped into a high-dimensional feature space through the nonlinear mapping $\phi: x \rightarrow \phi(x)$, and the projected data dimension is $k$;
2) calculate the kernel matrix $K = (k_{ij})_{l \times l}$, $k_{ij} = K(x_i, x_j)$, $i, j = 1, 2, \ldots, l$, where $K(x, z)$ is the kernel function, here chosen as the Gaussian kernel $K(x, z) = \exp\!\left(-\frac{\lVert x - z\rVert^2}{2\sigma^2}\right)$;
6) determine the retained principal components according to the criterion that the cumulative contribution rate of the eigenvalues is greater than or equal to 90%, and select the eigenvectors corresponding to the largest $n$ eigenvalues. The cumulative contribution rate is defined as $\eta_n = \sum_{i=1}^{n} \lambda_i \big/ \sum_{i=1}^{l} \lambda_i$, where $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_l$ are the eigenvalues of the kernel matrix.
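The screening just described can be sketched with scikit-learn's KernelPCA as shown below. This is an illustrative sketch, not the patented implementation: the kernel width `gamma` and the helper name `kpca_screen` are invented for the example, while the Gaussian kernel and the 90% cumulative-contribution threshold follow the text. scikit-learn ≥ 1.0 exposes the kernel-matrix eigenvalues as `eigenvalues_`.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

def kpca_screen(features: np.ndarray, threshold: float = 0.90, gamma: float = 0.1):
    """Project centered features with a Gaussian-kernel KPCA and keep the
    leading components whose cumulative eigenvalue contribution >= threshold."""
    kpca = KernelPCA(kernel="rbf", gamma=gamma)    # gamma is an assumed value
    projected = kpca.fit_transform(features)

    eigvals = kpca.eigenvalues_                    # lambda_i, in decreasing order
    contribution = eigvals / eigvals.sum()         # per-component contribution rate
    cumulative = np.cumsum(contribution)           # cumulative contribution rate

    n_keep = int(np.searchsorted(cumulative, threshold) + 1)
    return projected[:, :n_keep], cumulative[:n_keep]
```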
Furthermore, in step 3 a transformer hot spot temperature prediction model based on a long short-term memory network is constructed. The long short-term memory network comprises an input layer, an output layer and a hidden layer; the hidden layer consists of memory cells of the long short-term memory network, and each memory cell contains an input gate $i_t$, a forget gate $f_t$ and an output gate $o_t$, the three gates controlling the flow of information between the cell and the network. The concrete process of constructing the transformer hot spot temperature prediction model based on the long short-term memory network is as follows:
1) The sigmoid layer of the forget gate determines which information of the old cell state $C_{t-1}$ is forgotten; its inputs are the input $x_t$ of the current layer and the output $h_{t-1}$ of the previous layer, and the gate output is:
$f_t = \sigma(W_1^f x_t + W_2^f h_{t-1} + b_f)$
2) The input gate generates the information that needs to be updated and stores it in the cell state:
$i_t = \sigma(W_1^i x_t + W_2^i h_{t-1} + b_i)$, $\tilde{C}_t = \tanh(W_1^C x_t + W_2^C h_{t-1} + b_C)$, $C_t = f_t \times C_{t-1} + i_t \times \tilde{C}_t$
3) The output gate determines the output information:
$o_t = \sigma(W_1^o x_t + W_2^o h_{t-1} + b_o)$
$h_t = o_t \times \tanh(C_t)$
In 1)-3), $W_1^i$, $W_1^f$, $W_1^o$, $W_1^C$ are the weight matrices connecting $x_t$ to the input gate, the forget gate, the output gate and the cell input, respectively; $W_2^i$, $W_2^f$, $W_2^o$, $W_2^C$ are the weight matrices connecting $h_{t-1}$ to the input gate, the forget gate, the output gate and the cell input, respectively; $b_i$, $b_f$, $b_o$, $b_C$ are the bias vectors of the input gate, the forget gate, the output gate and the cell input, respectively; $\sigma$ denotes the sigmoid activation function.
4) The mean square error is used as the loss function of the prediction model; its mathematical expression is $L = \frac{1}{n}\sum_{i=1}^{n}\left(h_i - h_i'\right)^2$,
where $n$ is the number of training samples, $h_i$ is the predicted value of the $i$-th sample and $h_i'$ is the actual output value of the $i$-th sample.
5) Initialize the parameters of the long short-term memory network prediction model, continuously optimize and adjust the prediction model parameters with the Adam algorithm, and, taking the model loss function as the objective function, select the model parameters with the minimum error, completing the training of the transformer hot spot temperature prediction model based on the long short-term memory network.
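One possible realization of steps 1)-5) is sketched below: a single LSTM hidden layer feeding a linear output layer, trained with the Adam optimizer on the mean-square-error loss. The patent does not name an implementation framework; PyTorch is used here only for illustration, and the hidden size, learning rate and epoch count are assumed hyperparameters.

```python
import torch
import torch.nn as nn

class HotSpotLSTM(nn.Module):
    """LSTM regressor: input layer -> LSTM hidden layer -> linear output layer."""
    def __init__(self, n_features: int, hidden_size: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.out = nn.Linear(hidden_size, 1)

    def forward(self, x):                          # x: (batch, seq_len, n_features)
        h, _ = self.lstm(x)
        return self.out(h[:, -1, :]).squeeze(-1)   # predict from the last time step

def train(model, x_train, y_train, epochs: int = 200, lr: float = 1e-3):
    """Minimize the MSE loss with Adam (hyperparameters are assumed values)."""
    loss_fn = nn.MSELoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = loss_fn(model(x_train), y_train)
        loss.backward()
        optimizer.step()
    return model
```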
Compared with the prior art, the transformer hot spot temperature prediction method based on kernel principal component analysis and a long short-term memory network monitors the transformer hot spot temperature indirectly. Kernel principal component analysis removes interference and redundant information and processes the nonlinear feature data in depth, improving data accuracy and calculation speed; the transformer hot spot temperature prediction model built with the long short-term memory network then predicts the transformer hot spot temperature, so that transformer temperature faults can be found and handled in time, guaranteeing the stable and reliable operation of the power system.
Drawings
Fig. 1 is a flowchart of the transformer hot spot temperature monitoring method based on kernel principal component analysis and a long short-term memory network.
Detailed Description
The method of the present invention is further described in detail below with reference to the drawings and examples.
Taking a certain substation as an example, data of its 750 MVA/500 kV main transformer were collected over 8 days, from 25 September 2019 to 3 October 2019, including the winding hot spot temperature, top-layer oil temperature, bottom-layer oil temperature, surface temperature, load current, active loss, reactive loss, ambient temperature, ambient humidity and wind speed, and used as the input and output data set of the transformer hot spot temperature prediction model based on the long short-term memory network.
Referring to Fig. 1, the transformer hot spot temperature prediction method based on kernel principal component analysis and a long short-term memory network comprises the following steps:
step 1: data are acquired through the sensors, the data acquisition interval is 0.5, and the data are acquired for 8 days, so that the data are 384 samples in total, each sample comprises 9 characteristic quantities such as top layer oil temperature, bottom layer oil temperature, surface temperature, load current, active loss, reactive loss, environment temperature, environment humidity, wind speed and the like, and winding hot spot temperature data corresponding to each sample are recorded and used as output quantities of the model and are respectively used for training and verifying the model.
Step 2: preprocessing data, screening the preprocessed data by using kernel principal component analysis, calculating the contribution rate and the accumulated contribution rate of each characteristic value, screening the characteristic value of which the accumulated contribution rate of the principal component is more than or equal to 90 percent, and the process is as follows:
1) characteristic data collected for a sensorPerforming integrated cleaning, namely screening out obvious abnormal data and replacing with a mean value;
2) and (3) centralizing the data after the integrated cleaning:
$x_{ij} \leftarrow x_{ij} - \mu_j$
where $x_{ij}$ is the $i$-th value of the $j$-th feature in the feature set and $\mu_j$ is the mean value of the $j$-th feature data in the feature set.
3) The specific steps of kernel principal component analysis on the centered data are as follows:
Input the data matrix $S = \{x_1, x_2, \ldots, x_l\}$ of $l$ rows and $m$ columns, where $x_i$ denotes the $i$-th sample and $m$ is the feature dimension; the data $S$ are mapped into a high-dimensional feature space through the nonlinear mapping $\phi: x \rightarrow \phi(x)$, and the projected data dimension is $k$.
Calculate the kernel matrix $K = (k_{ij})_{l \times l}$, $k_{ij} = K(x_i, x_j)$, $i, j = 1, 2, \ldots, l$, where $K(x, z)$ is the kernel function, here the Gaussian kernel.
Determine the retained principal components according to the criterion that the cumulative contribution rate of the eigenvalues is greater than or equal to 90%, and select the eigenvectors corresponding to the largest $n$ eigenvalues, the cumulative contribution rate being defined as $\eta_n = \sum_{i=1}^{n} \lambda_i \big/ \sum_{i=1}^{l} \lambda_i$.
According to the experimental data, the cumulative contribution rate of the first 3 eigenvalues exceeds 90%, so the first 3 eigenvalues and their corresponding eigenvectors are selected for data reconstruction.
Step 3: Establish an initial long short-term memory network model, input the screened data into the network model for training, and establish a transformer hot spot temperature prediction model based on the long short-term memory network. The long short-term memory network comprises an input layer, an output layer and a hidden layer; the hidden layer consists of memory cells of the long short-term memory network, and each memory cell contains an input gate $i_t$, a forget gate $f_t$ and an output gate $o_t$, the three gates controlling the flow of information between the cell and the network. The specific process of constructing the transformer hot spot temperature prediction model based on the long short-term memory network is as follows:
1) The sigmoid layer of the forget gate determines which information of the old cell state $C_{t-1}$ is forgotten; its inputs are the input $x_t$ of the current layer and the output $h_{t-1}$ of the previous layer, and the gate output is:
$f_t = \sigma(W_1^f x_t + W_2^f h_{t-1} + b_f)$
2) The input gate generates the information that needs to be updated and stores it in the cell state:
$i_t = \sigma(W_1^i x_t + W_2^i h_{t-1} + b_i)$, $\tilde{C}_t = \tanh(W_1^C x_t + W_2^C h_{t-1} + b_C)$, $C_t = f_t \times C_{t-1} + i_t \times \tilde{C}_t$
3) The output gate determines the output information:
$o_t = \sigma(W_1^o x_t + W_2^o h_{t-1} + b_o)$
$h_t = o_t \times \tanh(C_t)$
In 1)-3), $W_1^i$, $W_1^f$, $W_1^o$, $W_1^C$ are the weight matrices connecting $x_t$ to the input gate, the forget gate, the output gate and the cell input, respectively; $W_2^i$, $W_2^f$, $W_2^o$, $W_2^C$ are the weight matrices connecting $h_{t-1}$ to the input gate, the forget gate, the output gate and the cell input, respectively; $b_i$, $b_f$, $b_o$, $b_C$ are the bias vectors of the input gate, the forget gate, the output gate and the cell input, respectively; $\sigma$ denotes the sigmoid activation function.
4) The mean square error is used as the loss function of the prediction model; its mathematical expression is $L = \frac{1}{n}\sum_{i=1}^{n}\left(h_i - h_i'\right)^2$,
where $n$ is the number of training samples, $h_i$ is the predicted value of the $i$-th sample and $h_i'$ is the actual output value of the $i$-th sample.
5) Initialize the parameters of the long short-term memory network prediction model, continuously optimize and adjust the prediction model parameters with the Adam algorithm, and, taking the model loss function as the objective function, select the model parameters with the minimum error, completing the training of the transformer hot spot temperature prediction model based on the long short-term memory network.
Step 4: Input the valid test data processed by kernel principal component analysis into the constructed model; the output layer outputs the predicted hot spot temperature.
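As a usage illustration only, the helpers sketched earlier (`preprocess`, `kpca_screen`, `make_sequences`, `HotSpotLSTM`, `train` — all invented names, not from the patent) could be chained into an end-to-end run of steps 1-4 roughly as follows; the file names and the 80/20 train/test split are likewise assumptions.

```python
import numpy as np
import torch

# Placeholder inputs: raw sensor features (384, 9) and winding hot spot
# temperatures (384,). The file names are hypothetical.
raw = np.load("transformer_features.npy")
hotspot = np.load("winding_hotspot.npy")

screened, _ = kpca_screen(preprocess(raw))      # step 2: clean, center, KPCA screening
x, y = make_sequences(screened, hotspot)        # arrange into LSTM input sequences

split = int(0.8 * len(x))                       # assumed 80/20 train/test split
to_t = lambda a: torch.tensor(a, dtype=torch.float32)
x_tr, y_tr = to_t(x[:split]), to_t(y[:split])
x_te, y_te = to_t(x[split:]), to_t(y[split:])

model = train(HotSpotLSTM(n_features=x.shape[-1]), x_tr, y_tr)   # step 3: Adam + MSE
with torch.no_grad():
    predicted_hotspot = model(x_te)             # step 4: predicted hot spot temperature
```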
The present invention is not limited to the above embodiment; according to the disclosure of the invention, those skilled in the art can implement it in various other embodiments, and changes or modifications made to its design and concept that do not depart from its scope fall within the protection scope of the present invention.
Claims (4)
1. A transformer hot spot temperature prediction method based on kernel principal component analysis and a long short-term memory network, characterized by comprising the following steps:
step 1, acquiring parameters related to the transformer hot spot temperature with sensors, the parameters comprising ambient temperature, ambient humidity, wind speed, transformer load current, active loss, reactive loss, top-layer oil temperature, bottom-layer oil temperature and surface temperature, and constructing a feature set from them;
step 2, preprocessing the feature data, screening the preprocessed data with kernel principal component analysis, calculating the contribution rate and cumulative contribution rate of each eigenvalue, retaining the eigenvalues whose cumulative principal component contribution rate is greater than or equal to 90%, and constructing a new feature set;
step 3, establishing an initial long short-term memory network model, inputting the screened feature data into the network model for training, and establishing a transformer hot spot temperature prediction model based on the long short-term memory network;
step 4, inputting the valid test data processed by kernel principal component analysis into the constructed model, the output layer of which outputs the predicted hot spot temperature.
2. The transformer hot spot temperature prediction method based on kernel principal component analysis and a long short-term memory network according to claim 1, characterized in that the specific steps of preprocessing the feature data in step 2 are as follows:
1) performing integration and cleaning on the feature data collected by the sensors, namely screening out obviously abnormal data and replacing them with the mean value;
2) centering the cleaned data:
$x_{ij} \leftarrow x_{ij} - \mu_j$
where $x_{ij}$ is the $i$-th value of the $j$-th feature in the feature set and $\mu_j$ is the mean value of the $j$-th feature data in the feature set.
3. The transformer hot spot temperature prediction method based on kernel principal component analysis and a long short-term memory network according to claim 1, characterized in that in step 2 the preprocessed data are screened by kernel principal component analysis, with the following specific steps:
1) inputting the data matrix $S = \{x_1, x_2, \ldots, x_l\}$ of $l$ rows and $m$ columns, where $x_i$ denotes the $i$-th sample and $m$ is the feature dimension; the data $S$ are mapped into a high-dimensional feature space through the nonlinear mapping $\phi: x \rightarrow \phi(x)$, and the projected data dimension is $k$;
2) calculating the kernel matrix $K = (k_{ij})_{l \times l}$, $k_{ij} = K(x_i, x_j)$, $i, j = 1, 2, \ldots, l$, where $K(x, z)$ is the kernel function, here the Gaussian kernel;
6) determining the retained principal components according to the criterion that the cumulative contribution rate of the eigenvalues is greater than or equal to 90%, and selecting the eigenvectors corresponding to the largest $n$ eigenvalues, the cumulative contribution rate being defined as $\eta_n = \sum_{i=1}^{n} \lambda_i \big/ \sum_{i=1}^{l} \lambda_i$.
4. The method according to claim 1, characterized in that the long short-term memory network comprises an input layer, an output layer and a hidden layer; the hidden layer consists of memory cells of the long short-term memory network, each memory cell containing an input gate $i_t$, a forget gate $f_t$ and an output gate $o_t$, the three gates controlling the flow of information between the cell and the network; the specific process of constructing the transformer hot spot temperature prediction model based on the long short-term memory network comprises the following steps:
1) the sigmoid layer of the forget gate determines which information of the old cell state $C_{t-1}$ is forgotten; its inputs are the input $x_t$ of the current layer and the output $h_{t-1}$ of the previous layer, and the gate output is $f_t = \sigma(W_1^f x_t + W_2^f h_{t-1} + b_f)$;
2) the input gate generates the information that needs to be updated and stores it in the cell state: $i_t = \sigma(W_1^i x_t + W_2^i h_{t-1} + b_i)$, $\tilde{C}_t = \tanh(W_1^C x_t + W_2^C h_{t-1} + b_C)$, $C_t = f_t \times C_{t-1} + i_t \times \tilde{C}_t$;
3) the output gate determines the output information: $o_t = \sigma(W_1^o x_t + W_2^o h_{t-1} + b_o)$,
$h_t = o_t \times \tanh(C_t)$;
in steps 1) to 3), $W_1^i$, $W_1^f$, $W_1^o$, $W_1^C$ are the weight matrices connecting $x_t$ to the input gate, the forget gate, the output gate and the cell input, respectively; $W_2^i$, $W_2^f$, $W_2^o$, $W_2^C$ are the weight matrices connecting $h_{t-1}$ to the input gate, the forget gate, the output gate and the cell input, respectively; $b_i$, $b_f$, $b_o$, $b_C$ are the bias vectors of the input gate, the forget gate, the output gate and the cell input, respectively; $\sigma$ denotes the sigmoid activation function;
4) the mean square error is used as the loss function of the prediction model, with the mathematical expression $L = \frac{1}{n}\sum_{i=1}^{n}\left(h_i - h_i'\right)^2$,
where $n$ is the number of training samples, $h_i$ is the predicted value of the $i$-th sample and $h_i'$ is the actual output value of the $i$-th sample;
5) initializing the parameters of the long short-term memory network prediction model, continuously optimizing and adjusting the prediction model parameters with the Adam algorithm, and, taking the model loss function as the objective function, selecting the model parameters with the minimum error to complete the training of the transformer hot spot temperature prediction model based on the long short-term memory network.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210279113.4A CN114647979A (en) | 2022-03-21 | 2022-03-21 | Transformer hot spot temperature prediction method based on kernel principal component analysis and long-time and short-time memory network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210279113.4A CN114647979A (en) | 2022-03-21 | 2022-03-21 | Transformer hot spot temperature prediction method based on kernel principal component analysis and long-time and short-time memory network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114647979A true CN114647979A (en) | 2022-06-21 |
Family
ID=81994642
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210279113.4A Pending CN114647979A (en) | 2022-03-21 | 2022-03-21 | Transformer hot spot temperature prediction method based on kernel principal component analysis and long-time and short-time memory network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114647979A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117250449A (en) * | 2023-09-20 | 2023-12-19 | 湖南五凌电力科技有限公司 | Transformer insulation state evaluation method and device, electronic equipment and storage medium |
- 2022-03-21: CN application CN202210279113.4A filed; published as CN114647979A; status: Pending
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |