CN109802862B - Combined network flow prediction method based on ensemble empirical mode decomposition - Google Patents

Combined network flow prediction method based on ensemble empirical mode decomposition

Info

Publication number
CN109802862B
Authority
CN
China
Prior art keywords
imf
network
value
signal
mode decomposition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910230095.9A
Other languages
Chinese (zh)
Other versions
CN109802862A (en
Inventor
唐宏
姚立霜
刘丹
王云峰
裴作飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University of Post and Telecommunications
Original Assignee
Chongqing University of Post and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University of Post and Telecommunications filed Critical Chongqing University of Post and Telecommunications
Priority to CN201910230095.9A priority Critical patent/CN109802862B/en
Publication of CN109802862A publication Critical patent/CN109802862A/en
Application granted granted Critical
Publication of CN109802862B publication Critical patent/CN109802862B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention belongs to the technical field of network traffic prediction, and particularly relates to a combined network traffic prediction method based on ensemble empirical mode decomposition, which comprises the following steps: acquiring original traffic data and preprocessing it; decomposing the network traffic by ensemble empirical mode decomposition into IMF components with a single frequency on different time scales; determining the stationarity of each IMF component through autocorrelation and partial autocorrelation analysis; predicting the stationary IMF components with a linear ARMA model; predicting the non-stationary IMF components with a nonlinear Elman neural network; and summing the predicted values of the IMF components to obtain the predicted value of the network traffic. The invention describes and predicts actual network traffic more accurately and comprehensively, thereby improving prediction precision and reliability.

Description

Combined network flow prediction method based on ensemble empirical mode decomposition
Technical Field
The invention belongs to the technical field of network traffic prediction, and particularly relates to a combined network traffic prediction method based on Ensemble Empirical Mode Decomposition (EEMD).
Background
In recent years, with the rapid development of the internet industry, the network scale has grown increasingly large, the network structure has become increasingly complex, and network management faces great challenges. Improving network operation efficiency through an efficient network management mechanism is an important issue for network managers.
Facing increasingly complex network interconnection environments and ever-growing network traffic, researchers need more resources and time to monitor and analyze the traffic in order to handle sudden network congestion and maintain good network quality. Traditional network management is reactive: a problem is resolved only after an alarm has occurred, by which time the network service has already been affected, and there is often no time to take corrective action when the alarm is received. Network traffic prediction builds a prediction model from the collected sequence of actual network traffic observations, predicts future traffic data, and judges whether and when a threshold will be exceeded in the future. The administrator can then pay special attention to the network during the critical time period and take precautionary measures before the network is overloaded, which effectively guarantees the stability of network performance and the goal of continuously serving network users.
Aiming at the short-range dependence of traditional network traffic, several linear prediction models have been proposed, such as the autoregressive model (AR), the moving average model (MA) and the autoregressive moving average model (ARMA); like the earlier Poisson and Markov models, they can only predict a stationary process. With the development of the internet, network traffic increasingly exhibits nonlinear and non-stationary characteristics, so linear prediction models have many limitations; a number of nonlinear prediction models have therefore been proposed, but models such as neural networks and support vector machines have relatively high model and computational complexity.
In-depth research on network traffic characteristics shows that actual network traffic exhibits obvious nonlinearity, self-similarity, long-range dependence, multifractality, burstiness and other characteristics over long time scales. A single prediction model cannot take all of these characteristics into account and therefore cannot accurately and comprehensively depict the real characteristics of network traffic, which inevitably produces large errors when the model predicts the traffic. At present, most research on combined prediction models is based on wavelet decomposition, with different prediction models then used to predict the decomposed sub-sequences. However, wavelet transform suffers from the difficulty of selecting the number of decomposition levels and the wavelet basis, which depends on the specific signal characteristics and application field and is not adaptive.
Disclosure of Invention
In order to describe and predict actual network traffic more accurately and comprehensively, the invention provides a combined network traffic prediction method based on ensemble empirical mode decomposition, which comprises the following steps:
s1: acquiring original flow data and preprocessing the original flow data;
s2: decomposing network traffic into finite Intrinsic Mode Function (IMF) components with single frequency on different time scales by ensemble empirical mode decomposition;
s3: performing autocorrelation and partial autocorrelation analysis on the IMF component to determine the stationarity of the IMF component;
s4: predicting the stable IMF component by using a linear ARMA model;
s5: predicting non-stationary IMF components by using a non-linear Elman neural network;
s6: and summing the predicted values of the IMF components to obtain the predicted value of the network flow.
Further, the decomposing of the network traffic by ensemble empirical mode decomposition into a finite number of intrinsic mode function (IMF) components with a single frequency on different time scales comprises:
s21: let i = 1, and generate N white noise signals;
s22: adding the i-th white noise signal to the original signal to form a signal-noise mixture;
s23: carrying out empirical mode decomposition on the signal-noise mixture to decompose it into a combination of IMFs;
s24: judging whether i is larger than N; if so, averaging all the obtained IMFs; otherwise, setting i = i + 1 and returning to step S22.
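As an illustration of steps S21-S24, the following is a minimal sketch of the ensemble loop. The function name eemd, the noise amplitude, the trial count, and the assumption that every trial yields the same number of IMFs are illustrative choices rather than part of the method as claimed; the emd argument can be any sifting routine of the kind described below, or a library implementation.

```python
import numpy as np

def eemd(signal, emd, num_trials=100, noise_std=0.2):
    """Ensemble EMD sketch: decompose several noise-perturbed copies of the
    signal and average the resulting IMFs (steps S21-S24).

    `emd` is any callable that returns the IMFs of a 1-D array as an
    (n_imfs, len(signal)) array; this sketch assumes every trial yields
    the same number of IMFs."""
    imf_sum = None
    for _ in range(num_trials):                                    # i = 1, ..., N
        noisy = signal + noise_std * np.random.randn(len(signal))  # add white noise
        imfs = np.asarray(emd(noisy))                               # EMD of the mixture
        imf_sum = imfs if imf_sum is None else imf_sum + imfs       # accumulate IMFs
    return imf_sum / num_trials                                     # ensemble average
```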
Further, the empirical mode decomposition of the signal-noise mixture comprises:
s221: finding all local maxima and local minima of the signal x (t);
s222: obtaining the upper envelope emax(t) and the lower envelope emin(t) of the signal x(t) by fitting the extreme points;
s223: calculating the local mean m(t), expressed as: m(t) = (emin(t) + emax(t))/2;
s224: subtracting the local mean from the original input signal to obtain the oscillation signal h(t), expressed as: h(t) = x(t) - m(t);
s225: when h(t) satisfies the IMF conditions, let c1 = h(t); then c1 is the first IMF, and the corresponding residue is r1 = x(t) - c1; otherwise, replace x(t) with h(t) and go to step S221;
s226: when r1 still contains frequency information of the original data, replace x(t) with r1 and go to step S221 to obtain the second IMF component, and so on to obtain r1 - c2 = r2, ..., r(n-1) - cn = rn; the sifting process stops when cn or rn is less than a set value, or when rn becomes a monotonic function.
Further, the establishing process of the ARMA model comprises the following steps:
s41: determining an autoregressive order p and a moving average order q of the ARMA model by utilizing the trailing properties of the autocorrelation function and the partial autocorrelation function;
s42: estimating unknown parameters of the ARMA model by using a least square estimation method, wherein the unknown parameters comprise an autoregressive coefficient, a moving average coefficient and a white noise variance;
s43: performing model inspection on different p and q parameter combinations by using an Akaike Information Criterion (AIC) to obtain an optimal p and q parameter combination;
s44: and establishing an ARMA model according to the autoregressive coefficient, the moving average coefficient and the white noise variance.
Further, the training process of the Elman neural network model comprises the following steps:
s51: selecting an appropriate number of neurons for each layer, initializing the parameters of the network structure, the connection weights, the error threshold ε and the maximum number of learning iterations D, and letting d = 1;
s52: calculating the output of each neuron of the hidden layer, the context layer and the output layer;
s53: correcting the connection weights between layers according to the error between the predicted value and the true value of the component sequence;
s54: calculating the sum-of-squared-errors function E and judging whether E is less than ε; if so, outputting and storing the connection weights between layers, otherwise performing S55;
s55: judging whether d is larger than D; if so, outputting and storing the connection weights between layers, otherwise letting d = d + 1 and returning to step S52.
Aiming at the difficulty of selecting the number of decomposition levels and the wavelet basis in wavelet transform, the invention introduces ensemble empirical mode decomposition to adaptively decompose network traffic into sequences with a single frequency, which also alleviates the mode-mixing problem that may occur in plain empirical mode decomposition; secondly, the different characteristics of each decomposed IMF component are analyzed in depth and the stationarity of each IMF component is judged; then the stationary IMF components are predicted by the linear ARMA model and the non-stationary IMF components by the nonlinear Elman neural network; finally, the predicted values of all component sequences are added to obtain the final predicted value. In summary, the invention fully exploits the advantages of two different models, the ARMA model and the Elman neural network, and describes and predicts actual network traffic more accurately and comprehensively, thereby improving prediction precision and reliability.
Drawings
FIG. 1 is a general flow diagram of the present invention;
FIG. 2 is a flow chart of the ARMA model of the present invention;
FIG. 3 is a diagram of an Elman neural network model of the present invention;
FIG. 4 is a flow chart of the training process of the Elman neural network model of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention provides a combined network flow prediction method based on ensemble empirical mode decomposition, as shown in fig. 1, comprising the following steps:
s1: acquiring original flow data and preprocessing the original flow data;
s2: decomposing network flow into IMF components with single frequency on different time scales by ensemble empirical mode decomposition;
s3: determining the stationarity of the IMF component through autocorrelation and partial autocorrelation analysis;
s4: predicting the stable IMF component by using a linear ARMA model;
s5: predicting non-stationary IMF components by using a non-linear Elman neural network;
s6: and summing the predicted values of the IMF components to obtain the predicted value of the network flow.
In this embodiment, the preprocessing includes normalizing the traffic data time series x(t) so that the data fall in the range 0 to 1; the normalization is specifically:
x' = (x - xmin)/(xmax - xmin);
where x' is the normalized network traffic value, x is the real network traffic value, xmax represents the maximum value of the network traffic, and xmin represents the minimum value of the network traffic.
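For illustration, a short sketch of this preprocessing step and of the inverse normalization applied later to the summed predictions is given below; the function names and the use of NumPy are assumptions of the sketch.

```python
import numpy as np

def normalize(x):
    """Min-max normalization of the traffic series into the range [0, 1]."""
    x = np.asarray(x, dtype=float)
    x_min, x_max = x.min(), x.max()
    return (x - x_min) / (x_max - x_min), x_min, x_max

def denormalize(x_norm, x_min, x_max):
    """Inverse normalization, applied to the summed component predictions."""
    return np.asarray(x_norm) * (x_max - x_min) + x_min
```

The ensemble empirical mode decomposition of step S2 then proceeds as follows: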
S21: adding a white noise signal into the original signal to form a signal-noise mixture;
s22: carrying out empirical mode decomposition on the signal-noise mixture to decompose it into a combination of IMFs;
s23: repeating step S21 and step S22, adding a different white noise signal each time and decomposing into IMFs;
s24: repeating N times and averaging each IMF.
The empirical mode decomposition of the signal-noise mixture, which ultimately decomposes the network traffic into IMF components with a single frequency on different time scales, specifically comprises the following steps:
s221: finding all local maxima and local minima of the signal x (t);
s222: obtaining the upper envelope emax(t) and the lower envelope emin(t) of the signal by fitting the extreme points;
s223: calculating the local mean m(t), expressed as: m(t) = (emin(t) + emax(t))/2;
s224: subtracting the local mean from the original input signal to obtain the oscillation signal h(t), expressed as: h(t) = x(t) - m(t);
s225: when h(t) satisfies the IMF conditions, let c1 = h(t); then c1 is the first IMF, and the corresponding residue is r1 = x(t) - c1; otherwise, replace x(t) with h(t) and go to step S221;
s226: when r1 still contains frequency information of the original data, replace x(t) with r1 and go to step S221 to obtain the second IMF component, and so on to obtain r1 - c2 = r2, ..., r(n-1) - cn = rn; the sifting process stops when cn or rn is less than a set value, or when rn becomes a monotonic function; the set value ranges from 0.2 to 0.3.
The conditions of IMF include:
1) over the entire data set, the number of extreme points and the number of zero crossings must be equal or differ by at most one;
2) at any point in time, the mean of the envelope defined by the local maxima and the local minima is zero.
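To make steps S221-S224 concrete, the sketch below performs one sifting iteration with cubic-spline envelopes; it omits the IMF test and the stopping criterion (the set value of 0.2-0.3), and its handling of the series endpoints is a simplification, so it is an illustration rather than a full EMD implementation.

```python
import numpy as np
from scipy.signal import argrelextrema
from scipy.interpolate import CubicSpline

def sift_once(x):
    """One sifting iteration: fit upper/lower envelopes through the local
    extrema and subtract their mean from the signal (steps S221-S224)."""
    t = np.arange(len(x))
    maxima = argrelextrema(x, np.greater)[0]        # indices of local maxima
    minima = argrelextrema(x, np.less)[0]           # indices of local minima
    if len(maxima) < 2 or len(minima) < 2:
        return None                                 # residue is (nearly) monotonic
    e_max = CubicSpline(maxima, x[maxima])(t)       # upper envelope emax(t)
    e_min = CubicSpline(minima, x[minima])(t)       # lower envelope emin(t)
    m = (e_max + e_min) / 2.0                       # local mean m(t)
    return x - m                                    # oscillation signal h(t)
```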
The autocorrelation and partial autocorrelation analysis includes:
When analyzing the autocorrelation and partial autocorrelation of the IMF components, the autocorrelation function (ACF) and the partial autocorrelation function (PACF) of each component sequence are calculated as follows:
ρk = γk/γ0;
α1,1 = ρ1;  αk,k = (ρk - Σj=1..k-1 αk-1,j·ρk-j) / (1 - Σj=1..k-1 αk-1,j·ρj);  αk,j = αk-1,j - αk,k·αk-1,k-j;
where the autocovariance at lag k is:
γk = (1/N)·Σt=1..N-k (yt - ȳ)(yt+k - ȳ);
ρk is the autocorrelation function at lag k; αk,j is the partial autocorrelation function at lag k; γk is the autocovariance at lag k; yt represents the network traffic at time t, yt+k represents the network traffic at time t + k, ȳ is the sample mean of the sequence, and N is the length of the sequence.
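In practice these quantities need not be coded by hand; the sketch below, which assumes the statsmodels package, computes the sample ACF and PACF of one IMF component so that its stationarity can be judged from how quickly the correlations decay.

```python
from statsmodels.tsa.stattools import acf, pacf

def correlation_analysis(imf, nlags=20):
    """Sample ACF (rho_k) and PACF of one IMF component; a slowly decaying
    ACF suggests a non-stationary component for the Elman model, while
    rapid decay suggests a stationary component for the ARMA model."""
    rho = acf(imf, nlags=nlags)
    alpha = pacf(imf, nlags=nlags)
    return rho, alpha
```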
The stationary IMF components are predicted by a linear ARMA model; the ARMA model is established as shown in FIG. 2 and comprises the following steps:
s41: preliminarily determining an autoregressive order p and a moving average order q of the ARMA model by utilizing the trailing properties of the autocorrelation function and the partial autocorrelation function;
s42: estimating unknown parameters of the ARMA model by using a least square estimation method, wherein the unknown parameters comprise an autoregressive coefficient, a moving average coefficient and a white noise variance;
s43: performing model checking on different combinations of the p and q parameters by using the Akaike Information Criterion (AIC), whose criterion function is expressed as: AIC = -2·ln(L) + 2g; the combination of p and q at which this function takes its minimum value is the optimal one;
where ln is the natural logarithm, L is the maximized likelihood of the model, g is the number of independent parameters of the model, and AIC is the criterion function value.
S44: and establishing the ARMA model according to the obtained parameters, the mathematical model of ARMA(p, q) being expressed as:
xt = φ1·xt-1 + φ2·xt-2 + ... + φp·xt-p + εt + θ1·εt-1 + θ2·εt-2 + ... + θq·εt-q;
where φ1, φ2, ..., φp are the autoregressive coefficients, θ1, θ2, ..., θq are the moving average coefficients, xt-p represents the value of the time series X at time t-p, and εt is a sequence of independent and identically distributed random variables (white noise).
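A sketch of steps S41-S44 using the statsmodels package is given below; the grid bounds and the function name are illustrative, and note that statsmodels estimates the coefficients by maximum likelihood rather than the least-squares estimation named in step S42, so it only approximates the procedure described here.

```python
import itertools
import warnings
from statsmodels.tsa.arima.model import ARIMA

def fit_arma_by_aic(series, max_p=5, max_q=5):
    """Fit ARMA(p, q) models (ARIMA with d = 0) over a small grid of orders
    and keep the combination with the minimum AIC."""
    best_aic, best_model = float("inf"), None
    for p, q in itertools.product(range(max_p + 1), range(max_q + 1)):
        if p == 0 and q == 0:
            continue
        try:
            with warnings.catch_warnings():
                warnings.simplefilter("ignore")
                fitted = ARIMA(series, order=(p, 0, q)).fit()
        except Exception:
            continue                      # skip order combinations that fail to fit
        if fitted.aic < best_aic:
            best_aic, best_model = fitted.aic, fitted
    return best_model                     # use best_model.forecast(steps) to predict
```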
The non-stationary IMF components are predicted by a nonlinear Elman neural network. As shown in FIG. 3, the Elman neural network model comprises an input layer, a hidden layer, a context layer and an output layer; the output of the input-layer neurons at time k is denoted u(k), the output of the context-layer neurons at time k is denoted xc(k), the output of the hidden-layer neurons at time k is denoted x(k), and the output of the output-layer neuron at time k is denoted y(k); w1 denotes the connection weights between the input layer and the hidden layer, w2 denotes the connection weights between the context layer and the hidden layer, and w3 denotes the connection weights between the hidden layer and the output layer. The number of output-layer nodes in this embodiment is 1. The training process of the Elman neural network prediction model, as shown in FIG. 4, includes:
s51: selecting an appropriate number of neurons for each layer, initializing the parameters of the network structure, the connection weights, the error threshold ε and the maximum number of learning iterations D, and letting d = 1;
s52: calculating the output of each neuron of the hidden layer, the context layer and the output layer;
s53: correcting the connection weights between layers according to the error between the predicted value and the true value of the component sequence;
s54: calculating the sum-of-squared-errors function E and judging whether E is less than ε; if so, outputting and storing the connection weights between layers, otherwise performing S55;
s55: judging whether d is larger than D; if so, outputting and storing the connection weights between layers, otherwise letting d = d + 1 and returning to step S52.
Selecting the appropriate number of neurons for each layer comprises: the number of input-layer nodes equals the number of non-stationary components, the number of output-layer nodes is 1, and the number of hidden-layer nodes l is determined by the empirical formula
l = √(n + m) + δ
where n is the number of input nodes, m is the number of output nodes, and δ is a constant between 1 and 10.
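For illustration, the empirical rule above can be evaluated as follows; the default value of δ is an arbitrary choice within the stated range of 1 to 10.

```python
import math

def hidden_node_count(n_inputs, n_outputs=1, delta=4):
    """Empirical rule for the number of hidden-layer nodes: sqrt(n + m) + delta."""
    return int(round(math.sqrt(n_inputs + n_outputs))) + delta
```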
Preferably, calculating the output of each neuron of the hidden layer, the context layer and the output layer comprises:
output of each neuron of the hidden layer:
x(k) = f(w1·u(k-1) + w2·xc(k));
output of each neuron of the context layer:
xc(k) = x(k-1);
output of each neuron of the output layer:
y(k) = g(w3·x(k));
g(x) = x;
where f(x) is the transfer function of the hidden layer, taken as the Sigmoid function, and g(x) is the linear transfer function of the output layer; the Sigmoid function is:
f(x) = 1/(1 + e^(-x)).
preferably, the modifying the connection weight between the layers includes:
correcting the connection weight from the input layer to the hidden layer:
Figure GDA0003325099350000082
Figure GDA0003325099350000083
Figure GDA0003325099350000084
correcting the connection weight from the bearing layer to the hidden layer:
Figure GDA0003325099350000085
Figure GDA0003325099350000086
correcting the connection weight from the hidden layer to the output layer:
Figure GDA0003325099350000087
Figure GDA0003325099350000088
wherein,
Figure GDA0003325099350000091
for the updated connection weights of the input layer to the hidden layer,
Figure GDA0003325099350000092
for the weights before updating, λ represents the learning rate of the network, E is the sum of the squares of the errors, djkExpressed as expected values, y, of the nodes of the output layerikExpressing the predicted values of nodes of an output layer, wherein k is 1,2, …, p and p expresses the length of a training sample;
Figure GDA0003325099350000093
to take care of the updated connection weights of the layer to the hidden layer,
Figure GDA0003325099350000094
the weight value before updating;
Figure GDA0003325099350000095
the updated connection weights for the hidden layer to the output layer,
Figure GDA0003325099350000096
to update the previous weight, symbol
Figure GDA0003325099350000097
Indicating partial derivative and the sign delta indicating delta.
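The sketch below ties the forward equations and the weight corrections together in a minimal Elman-style network; the class name, the weight initialization, the learning-rate default and the simplification of treating the context state as a constant during each update are assumptions of this sketch, and the stopping tests on ε and D from steps S54-S55 are left to the caller.

```python
import numpy as np

class ElmanSketch:
    """Minimal Elman-style network: input -> hidden (with context feedback)
    -> single linear output, trained by gradient descent on squared error."""

    def __init__(self, n_in, n_hidden, lr=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(0.0, 0.1, (n_hidden, n_in))      # input  -> hidden
        self.w2 = rng.normal(0.0, 0.1, (n_hidden, n_hidden))  # context -> hidden
        self.w3 = rng.normal(0.0, 0.1, (1, n_hidden))         # hidden -> output
        self.lr = lr

    @staticmethod
    def _sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def forward(self, u, xc):
        x = self._sigmoid(self.w1 @ u + self.w2 @ xc)  # hidden output x(k)
        y = float(self.w3 @ x)                          # linear output y(k)
        return x, y

    def train_epoch(self, inputs, targets):
        """One pass over the training samples; returns the sum of squared errors E."""
        xc = np.zeros(self.w2.shape[0])                 # context state xc(k)
        sse = 0.0
        for u, d in zip(inputs, targets):
            u = np.asarray(u, dtype=float)
            x, y = self.forward(u, xc)
            e = d - y
            sse += e ** 2
            # gradient-descent corrections of w3, w1 and w2 (context held fixed)
            delta_hidden = (self.w3.ravel() * e) * x * (1.0 - x)
            self.w3 += self.lr * e * x[np.newaxis, :]
            self.w1 += self.lr * np.outer(delta_hidden, u)
            self.w2 += self.lr * np.outer(delta_hidden, xc)
            xc = x                                      # context copies hidden output
        return sse
```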
Preferably, before the predicted values of the IMF components are added to obtain the real predicted value, inverse normalization is performed on the predicted values of the IMF components, with the formula:
x = x'(xmax - xmin) + xmin;
where x' is the normalized network traffic value and x is the real network traffic predicted value; xmax and xmin represent the maximum and minimum values of the network traffic, respectively.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable storage medium, and the storage medium may include: ROM, RAM, magnetic or optical disks, and the like.
The above-mentioned embodiments, which further illustrate the objects, technical solutions and advantages of the present invention, should be understood that the above-mentioned embodiments are only preferred embodiments of the present invention, and should not be construed as limiting the present invention, and any modifications, equivalents, improvements, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (8)

1. A combined network traffic prediction method based on Ensemble Empirical Mode Decomposition (EEMD), comprising:
s1: acquiring original flow data and preprocessing the original flow data;
s2: decomposing network flow into finite Intrinsic Mode Functions (IMF) components with single frequency on different time scales by ensemble empirical mode decomposition;
s3: performing autocorrelation and partial autocorrelation analysis on the IMF component to determine the stationarity of the IMF component;
s4: predicting the stable IMF component by using a linear ARMA model;
s5: predicting non-stationary IMF components by using a non-linear Elman neural network;
s6: and summing the predicted values of the IMF components to obtain the predicted value of the network flow.
2. The method of claim 1, wherein the preprocessing in step S1 comprises: normalizing the traffic data time series so that the data fall in the range 0 to 1, the normalization specifically being:
x' = (x - xmin)/(xmax - xmin);
where x' is the normalized network traffic value; x is the real network traffic value; xmax represents the maximum value of the network traffic; xmin represents the minimum value of the network traffic.
3. The method of claim 1, wherein said decomposing network traffic by ensemble empirical mode decomposition into a finite number of intrinsic mode function (IMF) components with a single frequency on different time scales comprises:
s21: let i = 1, and generate N white noise signals;
s22: adding the i-th white noise signal to the original signal to form a signal-noise mixture;
s23: carrying out empirical mode decomposition on the signal-noise mixture to decompose it into a combination of IMF components;
s24: judging whether i is greater than N; if so, averaging all the obtained IMF components; otherwise, setting i = i + 1 and returning to step S22.
4. The method of claim 3, wherein the empirical mode decomposition of the signal-noise mixture comprises:
s221: finding all local maxima and local minima of the signal x (t);
s222: obtaining the upper envelope emax(t) and the lower envelope emin(t) of the signal x(t) by fitting the extreme points;
s223: calculating the local mean m(t), expressed as: m(t) = (emin(t) + emax(t))/2;
s224: subtracting the local mean from the original input signal to obtain the oscillation signal h(t), expressed as: h(t) = x(t) - m(t);
s225: when h(t) satisfies the conditions of an IMF component, let c1 = h(t); then c1 is the first IMF component, and the corresponding residue is r1 = x(t) - c1; otherwise, replace x(t) with h(t) and go to step S221;
s226: when r1 still contains frequency information of the original data, replace x(t) with r1 and go to step S221 to obtain the second IMF component, and so on to obtain r1 - c2 = r2, ..., r(n-1) - cn = rn; the sifting process stops when cn or rn is less than a set value, or when rn becomes a monotonic function.
5. The method of claim 1, wherein the autocorrelation and partial autocorrelation analysis of the IMF components comprises:
the autocorrelation function of the IMF component is expressed as:
ρk = γk/γ0;
the partial autocorrelation function of the IMF component is expressed as:
α1,1 = ρ1;  αk,k = (ρk - Σj=1..k-1 αk-1,j·ρk-j) / (1 - Σj=1..k-1 αk-1,j·ρj);  αk,j = αk-1,j - αk,k·αk-1,k-j;
where γk is the autocovariance at lag k, expressed as
γk = (1/N)·Σt=1..N-k (yt - ȳ)(yt+k - ȳ);
yt represents the network traffic at time t, yt+k represents the network traffic at time t + k, ȳ is the sample mean, and N is the length of the sequence; ρk is the autocorrelation function at lag k; αk,j is the partial autocorrelation function at lag k; γ0 is the autocovariance at lag 0.
6. The method of claim 1, wherein the ARMA model is built by:
s41: determining an autoregressive order p and a moving average order q of the ARMA model by utilizing the trailing properties of the autocorrelation function and the partial autocorrelation function;
s42: estimating unknown parameters of the ARMA model by using a least square estimation method, wherein the unknown parameters comprise an autoregressive coefficient, a moving average coefficient and a white noise variance;
s43: performing model checking on different combinations of the p and q parameters by using the Akaike Information Criterion (AIC) to obtain the optimal combination of p and q;
s44: and establishing an ARMA model according to the autoregressive coefficient, the moving average coefficient and the white noise variance.
7. The method of claim 1, wherein the training process of the Elman neural network comprises:
s51: selecting an appropriate number of neurons for each layer, initializing the parameters of the network structure, the connection weights, the error threshold ε and the maximum number of learning iterations D, and letting d = 1;
s52: calculating the output of each neuron of the hidden layer, the context layer and the output layer;
s53: correcting the connection weights between layers according to the error between the predicted value and the true value of the IMF component;
s54: calculating the sum-of-squared-errors function E and judging whether E is less than ε; if so, outputting and storing the connection weights between layers, otherwise performing S55;
s55: judging whether d is larger than D; if so, outputting and storing the connection weights between layers, otherwise letting d = d + 1 and returning to step S52.
8. The method for combined network traffic prediction based on ensemble empirical mode decomposition according to claim 1, wherein, before the predicted values of the IMF components are added to obtain the real predicted value, inverse normalization is performed on the predicted values of the IMF components, expressed as:
x = x'(xmax - xmin) + xmin;
where x' is the normalized network traffic value and x is the real network traffic predicted value; xmax represents the maximum value of the network traffic; xmin represents the minimum value of the network traffic.
CN201910230095.9A 2019-03-26 2019-03-26 Combined network flow prediction method based on ensemble empirical mode decomposition Active CN109802862B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910230095.9A CN109802862B (en) 2019-03-26 2019-03-26 Combined network flow prediction method based on ensemble empirical mode decomposition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910230095.9A CN109802862B (en) 2019-03-26 2019-03-26 Combined network flow prediction method based on ensemble empirical mode decomposition

Publications (2)

Publication Number Publication Date
CN109802862A CN109802862A (en) 2019-05-24
CN109802862B true CN109802862B (en) 2022-02-22

Family

ID=66563955

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910230095.9A Active CN109802862B (en) 2019-03-26 2019-03-26 Combined network flow prediction method based on ensemble empirical mode decomposition

Country Status (1)

Country Link
CN (1) CN109802862B (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110941929A (en) * 2019-12-06 2020-03-31 长沙理工大学 Battery health state assessment method based on ARMA and Elman neural network combined modeling
CN111064617B (en) * 2019-12-16 2022-07-22 重庆邮电大学 Network flow prediction method and device based on empirical mode decomposition clustering
CN111046323A (en) * 2019-12-24 2020-04-21 国网河北省电力有限公司信息通信分公司 Network traffic data preprocessing method based on EMD
CN113037531A (en) * 2019-12-25 2021-06-25 中兴通讯股份有限公司 Flow prediction method, device and storage medium
CN111241466B (en) * 2020-01-15 2023-10-03 上海海事大学 Ship flow prediction method based on deep learning
CN111464354B (en) * 2020-03-31 2023-02-28 全球能源互联网研究院有限公司 Fine-grained network flow calculation method and device and storage medium
CN112019245B (en) * 2020-08-26 2021-11-16 上海科技大学 Method, apparatus, device and medium for predicting and measuring channel information in real time
CN114449549B (en) * 2020-11-05 2024-08-27 中国移动通信集团广西有限公司 Cell dormancy control method and electronic equipment
CN112469053A (en) * 2020-11-16 2021-03-09 山东师范大学 TD-LTE wireless network data flow prediction method and system
CN113157663B (en) * 2021-03-16 2023-07-11 西安电子科技大学 Network flow prediction method and device based on data reconstruction and hybrid prediction
CN115655887B (en) * 2022-11-01 2023-04-21 广东建设职业技术学院 Concrete strength prediction method
CN116182949B (en) * 2023-02-23 2024-03-19 中国人民解放军91977部队 Marine environment water quality monitoring system and method
CN117060984B (en) * 2023-10-08 2024-01-09 中国人民解放军战略支援部队航天工程大学 Satellite network flow prediction method based on empirical mode decomposition and BP neural network
CN117744893B (en) * 2024-02-19 2024-05-17 西安热工研究院有限公司 Wind speed prediction method and system for energy storage auxiliary black start
CN118134699B (en) * 2024-05-10 2024-08-02 江苏超越新能源科技集团股份有限公司 Distributed photovoltaic power station cluster full period management system based on intelligent contract

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105376097A (en) * 2015-11-30 2016-03-02 沈阳工业大学 Hybrid prediction method for network traffic
CN107426026A (en) * 2017-07-31 2017-12-01 山东省计算中心(国家超级计算济南中心) A kind of cloud computing server load short term prediction method based on EEMD ARIMA

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160258991A1 (en) * 2015-03-02 2016-09-08 Hangzhou Shekedi Biotech Co., Ltd Method and System of Signal Processing for Phase-Amplitude Coupling and Amplitude-Amplitude coupling
CN104899656A (en) * 2015-06-05 2015-09-09 三峡大学 Wind power combined predication method based on ensemble average empirical mode decomposition and improved Elman neural network

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105376097A (en) * 2015-11-30 2016-03-02 沈阳工业大学 Hybrid prediction method for network traffic
CN107426026A (en) * 2017-07-31 2017-12-01 山东省计算中心(国家超级计算济南中心) A kind of cloud computing server load short term prediction method based on EEMD ARIMA

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Time Delay Prediction Method Based on EMD and Elman Neural Network;Fan Yu等;《2014 Sixth International Conference on Intelligent Human-Machine Systems and Cybernetics》;20141009;全文 *
Research on Network Traffic Prediction Models Based on Modal Decomposition; 姚立霜; China Master's Theses Full-text Database (Electronic Journal); 2021-02-15; full text *
Research on Electricity Price Forecasting Methods Based on Artificial Intelligence in Deregulated Markets; 张洋; China Doctoral Dissertations Full-text Database (Electronic Journal); 2017-12-15; full text *

Also Published As

Publication number Publication date
CN109802862A (en) 2019-05-24

Similar Documents

Publication Publication Date Title
CN109802862B (en) Combined network flow prediction method based on ensemble empirical mode decomposition
CN114297036B (en) Data processing method, device, electronic equipment and readable storage medium
CN111277434A (en) Network flow multi-step prediction method based on VMD and LSTM
CN112215422A (en) Long-time memory network water quality dynamic early warning method based on seasonal decomposition
CN113111572B (en) Method and system for predicting residual life of aircraft engine
CN113411216B (en) Network flow prediction method based on discrete wavelet transform and FA-ELM
Mokarram et al. Net-load forecasting of renewable energy systems using multi-input LSTM fuzzy and discrete wavelet transform
CN114912077B (en) Sea wave forecasting method integrating random search and mixed decomposition error correction
CN115587666A (en) Load prediction method and system based on seasonal trend decomposition and hybrid neural network
CN114266416A (en) Photovoltaic power generation power short-term prediction method and device based on similar days and storage medium
CN117592593A (en) Short-term power load prediction method based on improved quadratic modal decomposition and WOA optimization BILSTM-intent
CN116796639A (en) Short-term power load prediction method, device and equipment
CN110909453A (en) EEMD-based power transmission line icing grade prediction method
Xiao et al. Predict stock prices with ARIMA and LSTM
CN117578394A (en) Ultra-short-term wind power prediction method and system
CN117354172A (en) Network traffic prediction method and system
CN111461416B (en) Wind speed prediction method, system, electronic equipment and storage medium
CN113821401A (en) WT-GA-GRU model-based cloud server fault diagnosis method
CN114372618A (en) Student score prediction method and system, computer equipment and storage medium
CN113297540A (en) APP resource demand prediction method, device and system under edge Internet of things agent service
CN112183814A (en) Short-term wind speed prediction method
CN113051809A (en) Virtual health factor construction method based on improved restricted Boltzmann machine
CN114064203B (en) Cloud virtual machine load prediction method based on multi-scale analysis and depth network model
Li et al. Forecasting of provincial tourist population based on grey neural network
CN113878613B (en) Industrial robot harmonic reducer early fault detection method based on WLCTD and OMA-VMD

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant