CN107977748B - Multivariable distorted time sequence prediction method - Google Patents
- Publication number
- CN107977748B CN107977748B CN201711267794.8A CN201711267794A CN107977748B CN 107977748 B CN107977748 B CN 107977748B CN 201711267794 A CN201711267794 A CN 201711267794A CN 107977748 B CN107977748 B CN 107977748B
- Authority
- CN
- China
- Prior art keywords
- time series
- multivariate
- layer
- neural network
- time sequence
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/04—Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
- G06Q50/06—Electricity, gas or water supply
Abstract
The invention discloses a multivariate distorted time series prediction method comprising the following steps: (1) establish a training sample set; (2) construct a multivariate time series convolutional neural network model comprising at least an input layer, a feature extraction layer, a convolutional layer module, fully connected layers for the variables, and an output layer, connected in sequence; (3) train the multivariate time series convolutional neural network model; (4) assemble the trained model to obtain a multivariate time series prediction system; and (5) use the prediction system to predict the multivariate time series of power consumption. Applied to predicting the power consumption of a power grid system, the method overcomes the shortcomings of traditional methods, which neither fully exploit the abstract features of the sequences nor withstand data distortion; it reduces the influence of distorted data on prediction accuracy and offers strong reliability.
Description
Technical Field
The invention relates to the technical field of multivariate time series prediction, and in particular to a multivariate distorted time series prediction method.
Background
With industrial development and urban construction, the power supply capacity of urban power grids has become closely tied to residents' quality of life, and time series analysis is one of the effective tools for planning grid power supply. A time series is a sequence of data point values arranged in chronological order; time series analysis studies the statistical regularities of such ordered observations using mathematical statistics and related methods, so as to predict future data and solve practical problems.
Existing time series prediction methods do not fully exploit all the abstract features available in the sequences, and univariate prediction is often adopted, i.e., only the abstract features of the target time series are used. However, phenomena are interrelated: time series of different variables in the same application domain exhibit relevance and co-occurrence, so relying on single-variable features yields incomplete and less accurate information, and hence less accurate predictions. A further drawback of existing methods is that they cannot overcome time series data distortion: the shifting or reordering of time slices along the time axis remains a challenge for time series analysis and prediction.
Disclosure of Invention
The present invention aims to solve at least to some extent one of the problems of the prior art described above.
To this end, one embodiment of the present invention provides a multivariate warped time series prediction method based on a convolutional neural network, which includes the following steps:
s101, establishing a training sample set;
the training sample set is a multivariate time sequence of a power grid system and at least comprises a target time sequence and a condition time sequence;
s102, constructing a multivariate time series convolution neural network model;
the multivariate convolutional neural network model at least comprises an input layer, a feature extraction layer, a convolutional layer module, all variable full-connection layers and an output layer which are sequentially connected;
s103, training a multivariate time sequence convolution neural network model;
after initialization, the multivariate time series convolutional neural network model constructed in step S102 is iterated using stochastic gradient descent; the gradient is checked at each iteration to seek the optimal solution for the weights and biases of each network layer, and after multiple iterations the optimal multivariate time series convolutional neural network model is obtained;
s104, assembling a multivariate time series convolution neural network model;
sequentially connecting the network layers of the optimal multivariate time sequence convolutional neural network model obtained in the step S103 to obtain a multivariate time sequence prediction system;
s105 predicts the multivariate time series of power consumption by using the multivariate time series prediction system obtained in step S104.
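At a high level, steps S101 to S105 can be sketched as a minimal pipeline in which a trivial linear model stands in for the convolutional network; all function names and the synthetic data below are illustrative assumptions, not part of the patent.

```python
import numpy as np

def build_training_set(n=100, d=3, seed=0):
    """S101 stand-in: synthetic multivariate series; column 0 is the target."""
    rng = np.random.default_rng(seed)
    return rng.normal(size=(n, d))

def normalize(data):
    """Standardize each variable: (x - mean) / std."""
    return (data - data.mean(axis=0)) / data.std(axis=0)

def train(data, eta=0.01, epochs=50):
    """S103 stand-in: fit target[t+1] from all variables at t via SGD."""
    X, y = data[:-1], data[1:, 0]
    w = np.zeros(data.shape[1])
    for _ in range(epochs):
        for xt, yt in zip(X, y):
            w -= eta * 2 * (xt @ w - yt) * xt   # one SGD step per sample
    return w

data = normalize(build_training_set())
w = train(data)
pred = data[-1] @ w    # S105 stand-in: one-step-ahead prediction
print(np.isfinite(pred))    # True
```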
Further, the input layer provides an input interface for each variable time sequence.
Furthermore, the feature extraction layer provides an embedded warped-data processing mechanism that fuses several adjacent time segments of the warped time series into an abstract feature vector;
the output of the feature extraction layer is defined as follows:
a_i^{(j+1)} = g^{(j)}( (W_i^{(j)} ⊙ M_i^{(j)}) a_i^{(j)} + b^{(j)} )
where j indexes the network layer of the model, X ∈ R^{d×l} is the input multivariate conditional time series, x_i ∈ R^l is the i-th row of X and represents the time series of the i-th variable, y is the prediction target time series, a_i^{(j)} is the input of the i-th dimension of the j-th layer, W_i^{(j)} is the weight matrix of the i-th variable in the j-th layer, the constant sparse matrix M_i^{(j)} indicates whether a relevant weight exists, a weight value in W_i^{(j)} expresses how each element is combined with its neighbors, b^{(j)} is the bias of the feature extraction layer, and g^{(j)}(·) is the activation function of layer j.
Furthermore, the convolutional layer module comprises a plurality of convolutional layers connected in parallel, the time sequence of each variable corresponds to one convolutional layer, and each convolutional layer is used for performing convolutional operation on the time sequence of the variable corresponding to the convolutional layer;
for the i-th variable, the k-th element of the output of the f-th filter is defined by the convolution operation:
c_{i,k}^{(f)} = g( Σ_{m=1}^{cs} w_m^{(f)} a_{i,k+m−1} + b^{(f)} )
where cs denotes the size of a single convolution kernel.
Furthermore, the per-variable fully connected layers fuse the abstract features of each variable's time series to obtain high-level abstract features.
Further, the all-variable fully-connected layer fuses high-level abstract features of all variables to obtain final fusion features of all variable time sequences.
Further, to represent the nonlinear relationship among the variable features, the sigmoid function S(x) is used as the activation function, defined as follows:
S(x) = 1 / (1 + e^(−x))
where x is the weighted combination, in the all-variable fully connected layer, of the high-level abstract features of each variable's time series, e is the natural constant, and S(x) maps the weighted result into the interval (0, 1).
Compared with the prior art, the prediction method has the following advantages:
1) it uses a neural network for feature learning, simulating how the human brain learns to distinguish objects, and is more accurate and effective than the traditional autoregressive moving-average approach;
2) it makes full use of all the data features of the grid system, including both the target time series and the conditional time series, providing more information for accurate prediction of the target;
3) by adding a feature extraction layer on top of the convolutional neural network, it addresses the data distortion present in time series data and improves the reliability of the prediction results.
Drawings
FIG. 1 is a flow chart of steps of a prediction method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a multivariate time series convolutional neural network model framework according to an embodiment of the present invention;
fig. 3 is a schematic diagram of the connection of the time series of the single variable in the prediction method at the feature extraction layer according to the embodiment of the invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
In order to explain the technical solution of the present invention, the following description is made with reference to the accompanying drawings by way of specific embodiments.
Fig. 1 is a schematic flow chart of the convolutional-neural-network-based multivariate distorted time series prediction method of this embodiment. The method may be applied to predictive analysis of an electric power system, providing data support for grid power supply planning; it should be noted that the method is also applicable to predictive analysis of other operational systems.
The multivariate warped time series prediction method based on the convolutional neural network of the present embodiment includes the following steps S101 to S105:
step S101, establishing a training sample set, and preprocessing time sequence data in the training sample set.
The multivariate time series in the sample set is preferably collected from the UCI Machine Learning Repository (http://archive.ics.uci.edu/ml/datasets.html). The sample set comprises 10000 records; each record has 7 attribute variable values, the target attribute being global active power and the remainder being conditional attributes, with a time granularity of one minute. The first 90% of the data records are used as the training sample set, and all data in the training sample set are standardized according to x' = (x − μ) / σ, which reduces the range of sample values and accelerates model learning, where x is the sample value of a data point of a variable's time series, μ is the mean of the sample values of all data points of that time series, and σ is the standard deviation of the sample values of all data points of that time series.
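The standardization step described above can be sketched as follows; the sample values here are made-up for illustration, not drawn from the UCI set.

```python
import numpy as np

def zscore_normalize(series):
    """Standardize one variable's time series: x' = (x - mu) / sigma."""
    mu = series.mean()
    sigma = series.std()
    return (series - mu) / sigma, mu, sigma

# Toy example: one variable's raw readings (illustrative values).
series = np.array([3.0, 5.0, 4.0, 6.0, 2.0])
normalized, mu, sigma = zscore_normalize(series)
print(mu)    # 4.0, the mean of the raw series
```

After standardization the series has zero mean and unit standard deviation, which is what shrinks the value range and speeds up training.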
S102, constructing a multivariate time series convolution neural network model;
a framework of the multivariate time series convolutional neural network model is shown in fig. 2, and specifically, the multivariate time series convolutional neural network is sequentially connected with an input layer, a feature extraction layer, a convolutional layer module, all variable full-connection layers and an output layer; the various layers of the multivariate time series convolutional neural network model are described in detail below:
1) The input layer provides an input interface for each variable's time series. In this embodiment of the invention, the number of input interfaces is preferably 7, corresponding respectively to the time series of the 7 attribute variables of the power grid system.
2) The feature extraction layer provides the multivariate time series convolutional neural network model with an embedded distorted-data processing mechanism and extracts features from each variable's time series separately. The feature extraction layer has two hyper-parameters, the time series length l and the window size ws, which respectively control the length of the input time series and the number of consecutive elements combined during feature extraction.
As a preferred embodiment, the length l of a single variable's time series is 7 and the window size ws is 2; that is, each input time series contains 7 data point values, and every 2 adjacent data point values are fused into one feature value during feature extraction. Fig. 3 illustrates how a single variable's time series is connected at the feature extraction layer in this preferred embodiment.
Let X ∈ R^{6×7} denote the input conditional time series, where x_i ∈ R^7 is the i-th conditional time series and y represents the target time series. W represents the connection weight matrix: W_i^{(1)} is the weight matrix of the i-th variable's time series in the 1st hidden layer, and the constant sparse matrix M_i^{(1)} of the same size indicates whether a relevant weight exists, so that W_i^{(1)} and M_i^{(1)} together represent all the weight values of the feature extraction layer.
In this embodiment, with ws = 2, the constant sparse matrix is the band matrix whose entries M_i^{(1)}[k, k] = M_i^{(1)}[k, k+1] = 1, with all other entries 0.
A weight value in W_i^{(1)} therefore expresses how each element is combined with its neighbor; specifically, the output of the feature extraction layer is defined as follows:
a_i^{(2)} = g^{(1)}( (W_i^{(1)} ⊙ M_i^{(1)}) x_i + b^{(1)} )
where x_i is the input of the feature extraction layer (also called hidden layer 1), W_i^{(1)} is its weight matrix, b^{(1)} is its bias, and g^{(1)} is its activation function. In the feature extraction layer each neuron has its own weights, so for each time segment the neurons can obtain globally optimal information.
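Under the stated hyper-parameters (l = 7, ws = 2), the feature extraction layer can be sketched as a dense layer whose weights are masked by a band matrix, so each output fuses exactly two adjacent data points. The band-mask form and the tanh activation are illustrative assumptions; the patent's actual activation is not recoverable from the extracted text.

```python
import numpy as np

l, ws = 7, 2                 # time-series length and window size
out_len = l - ws + 1         # 6 fused features per variable

# Band mask M: row k allows weights only at positions k and k+1.
M = np.zeros((out_len, l))
for k in range(out_len):
    M[k, k:k + ws] = 1.0

rng = np.random.default_rng(0)
W = rng.normal(size=(out_len, l))   # free weights, learned during training
b = np.zeros(out_len)

def feature_extract(x, activation=np.tanh):
    """a = g((W * M) @ x + b): each output combines ws adjacent points."""
    return activation((W * M) @ x + b)

x = np.arange(7, dtype=float)       # one variable's normalized series
a = feature_extract(x)
print(a.shape)    # (6,)
```

Masking (rather than slicing) keeps every neuron's weights independent, matching the statement that each neuron in this layer has its own weights.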
3) The convolutional layer module further extracts features from the temporary features output by the feature extraction layer through convolution filters, performing a convolution operation on the feature-extraction output of each variable's time series. The purpose of convolution is to obtain abstract features through different convolution filters; because the process shares weights, computational complexity is reduced.
For the time series of the i-th variable, the k-th element of the output of the f-th filter may be defined as:
c_{i,k}^{(f)} = g( Σ_{m=1}^{cs} w_m^{(f)} a_{i,k+m−1} + b^{(f)} )
where cs denotes the convolution kernel size; in this embodiment of the invention, cs is preferably 2.
The convolutional layer differs from the feature extraction layer in that it has a shared convolutional kernel, the purpose of which is to make the results conform to a uniform pattern as much as possible.
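The per-variable convolution with kernel size cs = 2 can be sketched as a valid 1-D convolution; the filter weights below are arbitrary illustrative values, and the activation is omitted (identity) for clarity.

```python
import numpy as np

def conv1d(a, w, b=0.0):
    """Valid 1-D convolution: k-th output = sum_m w[m] * a[k+m] + b."""
    cs = len(w)
    return np.array([np.dot(w, a[k:k + cs]) + b
                     for k in range(len(a) - cs + 1)])

a = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])   # feature-extraction output
w = np.array([0.5, 0.5])                        # one shared filter, cs = 2
c = conv1d(a, w)
print(c)    # moving average of adjacent pairs: [1.5 2.5 3.5 4.5 5.5]
```

Because the same w slides over every position, the filter's weights are shared across the sequence, unlike the feature extraction layer where each output position has its own weights.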
4) The per-variable fully connected layers combine the abstract features output for each variable's time series by the convolutional layer into a highly abstract representation; the weights of this transformation are obtained through training.
5) The all-variable fully connected layer further fuses the high-level abstract features of each variable's time series output by the per-variable fully connected layers to obtain the final result. In the preceding layers each variable is processed separately; in the all-variable fully connected layer the high-level abstract features of all variables are considered jointly, and each variable's features carry their own weights, reflecting the complex nonlinear relationships among the variables. Through this fusion, the relevance and co-occurrence among the different variables' time series are fully exploited.
To better fit the nonlinear relationships in the data, the sigmoid function is used as the activation function, defined as:
S(x) = 1 / (1 + e^(−x))
where x is the weighted combination, in the all-variable fully connected layer, of the high-level abstract features of each variable's time series, e is the natural constant, and S(x) maps the weighted result into the interval (0, 1).
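The all-variable fusion followed by the sigmoid activation can be sketched as a single weighted combination of the concatenated per-variable features; the feature values and weights below are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    """S(x) = 1 / (1 + e^{-x}), mapping any real x into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

# High-level abstract features from, say, 3 variables (illustrative values).
features = [np.array([0.2, -0.1]), np.array([0.4, 0.3]), np.array([-0.2, 0.1])]
fused_in = np.concatenate(features)     # shape (6,)

rng = np.random.default_rng(1)
w = rng.normal(size=fused_in.shape)     # per-feature weights (learned)
b = 0.0
y = sigmoid(w @ fused_in + b)           # fused output, always in (0, 1)
print(0.0 < y < 1.0)    # True
```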
6) And the output layer outputs a final prediction result for the model.
Step S103, training a multivariate time series convolution neural network model;
after initializing parameters in the multivariate time sequence convolutional neural network model, iterating the multivariate time sequence convolutional neural network model constructed in the step S102 by adopting a random gradient descent method, checking the gradient once every iteration to find the optimal solution of the network layer weight and the bias, and when the loss function is minimum, obtaining the optimal multivariate time sequence convolutional neural network model trained at this time, wherein the definition of the loss function is as follows:
where N is the length of the time series, i.e., the number of data points, λ is a scaling parameter, and the remaining symbol definitions are the same as those described above.
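Assuming the loss is mean-squared error plus an L2 penalty scaled by λ (the exact regularizer is rendered as an image in the original, so this form is an assumption consistent with the where-clause), a sketch:

```python
import numpy as np

def loss(y_pred, y_true, weights, lam=0.01):
    """MSE over the N data points plus lambda * ||W||^2 (assumed L2 penalty)."""
    n = len(y_true)
    mse = np.sum((y_pred - y_true) ** 2) / n
    l2 = lam * sum(np.sum(w ** 2) for w in weights)
    return mse + l2

y_true = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.1, 1.9, 3.2])
W = [np.array([[0.5, -0.5]])]           # one weight matrix, for illustration
print(loss(y_pred, y_true, W))          # 0.02 (MSE) + 0.005 (L2) = 0.025
```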
The specific steps of training the multivariate time series convolution neural network model are as follows:
step S1031, writing the multivariate time sequence data in the training sample data set into a data file, wherein the data format of the data file conforms to the read-in data interface of the multivariate time sequence convolutional neural network model;
step S1032, setting training parameters: reading in a file path, iteration times, a learning rate and the like, and setting the size, initial training weight and training bias of each network layer;
step S1033, loading a training file: loading training data consisting of a multivariate time sequence convolutional neural network definition file, a network layer parameter definition file and a training data set;
Step S1034, iterate the multivariate time series convolutional neural network model constructed in step S102 using stochastic gradient descent, checking the gradient at each iteration to seek the optimal solution for the network layer weights and biases; after multiple iterations the optimal model of this training run is obtained. The weights are updated according to:
W ← W − η ∂L(W, b; X, y)/∂W,  b ← b − η ∂L(W, b; X, y)/∂b
where W is the weight matrix of the network connections, b is the network bias, X is the multivariate conditional time series input to the network, y is the prediction target time series input to the network, and η denotes the learning rate.
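The update rule can be sketched for a toy linear predictor under squared error; the gradient expressions are the standard ones for this toy model, not taken from the patent.

```python
import numpy as np

def sgd_step(W, b, x, y, eta=0.05):
    """One SGD step for y_hat = W @ x + b under squared error."""
    y_hat = W @ x + b
    grad = 2.0 * (y_hat - y)       # d(squared error)/d(y_hat)
    W = W - eta * grad * x         # W <- W - eta * dL/dW
    b = b - eta * grad             # b <- b - eta * dL/db
    return W, b

W, b = np.zeros(2), 0.0
x, y = np.array([1.0, 2.0]), 3.0
for _ in range(200):
    W, b = sgd_step(W, b, x, y)
print(W @ x + b)    # converges toward the target 3.0
```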
Step S104, assemble the multivariate time series convolutional neural network model: connect the network layers of the optimal model from step S103 in sequence to obtain the multivariate time series prediction system. Because the model is trained by stochastic gradient descent under different initialization conditions, the prediction results differ from run to run; the outputs of the models trained under the different initial conditions can therefore be statistically averaged and taken as the output of the whole system, finally yielding the power consumption prediction system.
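The statistical averaging of models trained from different initializations can be sketched as follows; the stand-in "models" are trivial lambdas used only to demonstrate the ensemble mechanism.

```python
import numpy as np

def ensemble_predict(models, x):
    """Average the predictions of models trained under different initializations."""
    return np.mean([m(x) for m in models], axis=0)

# Stand-ins for trained predictors with slightly different behavior.
models = [lambda x: x + 0.1, lambda x: x - 0.1, lambda x: x]
x = np.array([1.0, 2.0])
print(ensemble_predict(models, x))   # element-wise mean: [1. 2.]
```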
step S105, taking the data of the last 10% of the sample set as a test sample set, and standardizing the data in the test sample set, wherein the calculation method is as follows:
and predicting the data in the test sample set by using the power consumption prediction system obtained in the step S104 to obtain a prediction result.
The parts of the prediction method in the embodiment of the present invention that are not expanded can refer to the corresponding parts of the prediction method in the above embodiment, and are not expanded in detail here.
In the description herein, references to the description of the terms "one embodiment," "some embodiments," "an illustrative embodiment," "an example," "a specific example" or "some examples" or the like mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and alternate implementations are included within the scope of the preferred embodiment of the present invention in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of implementing the present invention.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when executed, the program includes one or a combination of the steps of the method embodiments.
In addition, each functional unit in the embodiments of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc.
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and not to be construed as limiting the present invention, and those skilled in the art can make changes, modifications, substitutions and alterations to the above embodiments within the scope of the present invention.
Claims (6)
1. A multivariate warped time series prediction method based on a convolutional neural network is characterized by comprising the following steps:
s101, establishing a training sample set;
the training sample set is a multivariate time sequence of a power grid system and at least comprises a target time sequence and a condition time sequence;
s102, constructing a multivariate time series convolution neural network model;
the multivariate time series convolution neural network model at least comprises an input layer, a feature extraction layer, a convolution layer module, all variable full-connection layers and an output layer which are sequentially connected;
s103, training a multivariate time sequence convolution neural network model;
after initialization, the multivariate time series convolutional neural network model constructed in step S102 is iterated using stochastic gradient descent; the gradient is checked at each iteration to seek the optimal solution for the weights and biases of each network layer, and after multiple iterations the optimal multivariate time series convolutional neural network model is obtained;
s104, assembling a multivariate time series convolution neural network model;
sequentially connecting the network layers of the optimal multivariate time sequence convolutional neural network model obtained in the step S103 to obtain a multivariate time sequence prediction system;
s105, predicting the multivariable time series of the power consumption by using the multivariable time series prediction system obtained in the step S104;
the feature extraction layer provides an embedded distorted data processing mechanism for fusing several adjacent time segments in a distorted time sequence to obtain an abstract feature vector;
the output of the feature extraction layer is defined as follows:
a_i^{(j+1)} = g^{(j)}( (W_i^{(j)} ⊙ M_i^{(j)}) a_i^{(j)} + b^{(j)} )
where j indexes the network layer of the model, x_i ∈ R^l is the i-th row of X ∈ R^{d×l} and represents the time series of the i-th variable, y is the prediction target time series, a_i^{(j)} is the input of the i-th dimension of the j-th layer, W_i^{(j)} is the weight matrix of the i-th variable in the j-th layer, the constant sparse matrix M_i^{(j)} indicates whether a relevant weight exists, a weight value in W_i^{(j)} expresses how each element is combined with its neighbors, b^{(j)} is the bias of the feature extraction layer, and g^{(j)}(·) is the activation function of layer j.
2. The method of claim 1, wherein the input layer provides an input interface for each time sequence of variables.
3. The convolutional neural network-based multivariate warped time series prediction method as claimed in claim 1, wherein the convolutional layer module comprises a plurality of convolutional layers connected in parallel, one convolutional layer corresponding to each time series of variables, and each convolutional layer is used for performing convolutional operation on the time series of variables corresponding to the convolutional layer;
for the i-th variable, the k-th element of the output of the f-th filter is defined by the convolution operation:
c_{i,k}^{(f)} = g( Σ_{m=1}^{cs} w_m^{(f)} a_{i,k+m−1} + b^{(f)} )
where cs denotes the size of a single convolution kernel.
4. The multivariate warped time series prediction method based on the convolutional neural network as claimed in claim 3, wherein the fully connected layers of the variables fuse the abstract features of the time series of the variables to obtain high-level abstract features.
5. The method as claimed in claim 4, wherein the fully-connected layers of all variables fuse the high-level abstract features of all variables to obtain the final fused features of all variable time series.
6. The method of claim 5, wherein, to represent the nonlinear relationship among the variable features, the sigmoid function is used as the activation function, defined as follows:
S(x) = 1 / (1 + e^(−x))
where x is the weighted combination, in the all-variable fully connected layer, of the high-level abstract features of each variable's time series, e is the natural constant, and S(x) maps the weighted result into the interval (0, 1).
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711267794.8A CN107977748B (en) | 2017-12-05 | 2017-12-05 | Multivariable distorted time sequence prediction method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711267794.8A CN107977748B (en) | 2017-12-05 | 2017-12-05 | Multivariable distorted time sequence prediction method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107977748A CN107977748A (en) | 2018-05-01 |
CN107977748B true CN107977748B (en) | 2022-03-11 |
Family
ID=62009354
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711267794.8A Active CN107977748B (en) | 2017-12-05 | 2017-12-05 | Multivariable distorted time sequence prediction method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107977748B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109147878B (en) * | 2018-10-08 | 2021-10-15 | 燕山大学 | Soft measurement method for free calcium of cement clinker |
US11410077B2 (en) | 2019-02-05 | 2022-08-09 | International Business Machines Corporation | Implementing a computer system task involving nonstationary streaming time-series data by removing biased gradients from memory |
CN110838075A (en) * | 2019-05-20 | 2020-02-25 | Global Energy Interconnection Research Institute Co., Ltd. | Training and predicting method and device for prediction model of transient stability of power grid system |
CN111651935B (en) * | 2020-05-25 | 2023-04-18 | Chengdu Qianjia Technology Co., Ltd. | Multi-dimensional expansion prediction method and device for non-stationary time series data |
CN112700051A (en) * | 2021-01-04 | 2021-04-23 | Tianjin University of Science and Technology | Res-TCN neural network-based intelligent prediction method for oil well liquid production associated gas |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106502799A (en) * | 2016-12-30 | 2017-03-15 | Nanjing University | Host load prediction method based on long short-term memory network |
CN106934497A (en) * | 2017-03-08 | 2017-07-07 | Qingdao Zhuoxun Electronic Technology Co., Ltd. | Real-time power consumption prediction method and device for smart communities based on deep learning |
CN107146015A (en) * | 2017-05-02 | 2017-09-08 | Lenovo (Beijing) Co., Ltd. | Multivariate time series prediction method and system |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160143594A1 (en) * | 2013-06-20 | 2016-05-26 | University Of Virginia Patent Foundation | Multidimensional time series entrainment system, method and computer readable medium |
- 2017-12-05: CN application CN201711267794.8A, patent CN107977748B, status Active
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106502799A (en) * | 2016-12-30 | 2017-03-15 | Nanjing University | Host load prediction method based on long short-term memory network |
CN106934497A (en) * | 2017-03-08 | 2017-07-07 | Qingdao Zhuoxun Electronic Technology Co., Ltd. | Real-time power consumption prediction method and device for smart communities based on deep learning |
CN107146015A (en) * | 2017-05-02 | 2017-09-08 | Lenovo (Beijing) Co., Ltd. | Multivariate time series prediction method and system |
Non-Patent Citations (2)
Title |
---|
Research on a short-term load forecasting method for multivariate time series; Lei Shaolan et al.; Transactions of China Electrotechnical Society; 2005-04-30; Vol. 20, No. 4; pp. 62-67 * |
Production-process time series prediction method using convolutional neural networks with time-delay analysis of correlated variables; Zhang Hao et al.; CIESC Journal; 2017-06-28; Vol. 68, No. 9; pp. 3501-3510 * |
Also Published As
Publication number | Publication date |
---|---|
CN107977748A (en) | 2018-05-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107977748B (en) | Multivariable distorted time sequence prediction method | |
Wang et al. | Deep separable convolutional network for remaining useful life prediction of machinery | |
US9785886B1 (en) | Cooperative execution of a genetic algorithm with an efficient training algorithm for data-driven model creation | |
US10635978B2 (en) | Ensembling of neural network models | |
Behera et al. | Multiscale deep bidirectional gated recurrent neural networks based prognostic method for complex non-linear degradation systems | |
CN115270956B (en) | Continuous learning-based cross-equipment incremental bearing fault diagnosis method | |
EP3792840A1 (en) | Neural network method and apparatus | |
KR20220117336A (en) | Method and apparatus, device and readable storage medium for estimating power consumption | |
Zajkowski | The method of solution of equations with coefficients that contain measurement errors, using artificial neural network | |
CN111985155A (en) | Circuit health state prediction method and system based on integrated deep neural network | |
Wang et al. | Spatio-temporal graph convolutional neural network for remaining useful life estimation of aircraft engines | |
CN114357594A (en) | Bridge abnormity monitoring method, system, equipment and storage medium based on SCA-GRU | |
CN110264270A (en) | A kind of behavior prediction method, apparatus, equipment and storage medium | |
CN116089870A (en) | Industrial equipment fault prediction method and device based on meta-learning under small sample condition | |
WO2022009010A1 (en) | Model fidelity monitoring and regeneration for manufacturing process decision support | |
CN115269247A (en) | Flash memory bad block prediction method, system, medium and device based on deep forest | |
EP4009239A1 (en) | Method and apparatus with neural architecture search based on hardware performance | |
CN112733724B (en) | Relativity relationship verification method and device based on discrimination sample meta-digger | |
US20220027739A1 (en) | Search space exploration for deep learning | |
CN113792768A (en) | Hypergraph neural network classification method and device | |
CN112580807A (en) | Neural network improvement demand automatic generation method and device based on efficiency evaluation | |
CN116502672A (en) | Neural network quantitative deployment method, system, equipment and medium | |
CN106156845A (en) | Method and apparatus for constructing a neural network | |
JP6871352B1 (en) | Learning equipment, learning methods and learning programs | |
Mo et al. | Evolutionary Optimization of Convolutional Extreme Learning Machine for Remaining Useful Life Prediction |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||