CN110400010A - Prediction method, apparatus, electronic device and computer-readable storage medium - Google Patents

Prediction method, apparatus, electronic device and computer-readable storage medium

Info

Publication number
CN110400010A
Authority
CN
China
Prior art keywords
sequence
value
obtains
model
residual
Prior art date
Legal status
Pending
Application number
CN201910627110.3A
Other languages
Chinese (zh)
Inventor
李军政
Current Assignee
New H3C Big Data Technologies Co Ltd
Original Assignee
New H3C Big Data Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by New H3C Big Data Technologies Co Ltd filed Critical New H3C Big Data Technologies Co Ltd
Priority to CN201910627110.3A
Publication of CN110400010A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/049 Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 10/00 Administration; Management
    • G06Q 10/04 Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q 50/40 Business processes related to the transportation industry

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Economics (AREA)
  • Human Resources & Organizations (AREA)
  • Strategic Management (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Business, Economics & Management (AREA)
  • General Health & Medical Sciences (AREA)
  • Tourism & Hospitality (AREA)
  • Marketing (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Data Mining & Analysis (AREA)
  • Game Theory and Decision Science (AREA)
  • Development Economics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Operations Research (AREA)
  • Evolutionary Computation (AREA)
  • Quality & Reliability (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Primary Health Care (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

Embodiments of the present application provide a prediction method, an apparatus, an electronic device and a computer-readable storage medium. The method comprises: processing a stationarized historical time series with a time-series prediction model to obtain a fitted time series and a primary predicted value; calculating the difference between the historical time series and the fitted time series to obtain a first residual sequence; processing the first residual sequence with a memory network model to obtain a secondary predicted value; and determining a final predicted value from the primary predicted value and the secondary predicted value. The embodiments first use the time-series prediction model to obtain the primary predicted value and the residual sequence, then use the memory network model to predict on the residual sequence to obtain the secondary predicted value, and finally determine the final predicted value from the primary and secondary predicted values. This process improves the prediction accuracy for time series such as network bandwidth utilization compared with the prior art.

Description

Prediction method, apparatus, electronic device and computer-readable storage medium
Technical field
The present application relates to the field of prediction and optimization technologies, and in particular to a prediction method, an apparatus, an electronic device and a computer-readable storage medium.
Background art
In recent years, with the development of science and technology, people have come to pursue a more intelligent and convenient life. More and more services and applications are being moved onto the network, so the traffic carried by network links keeps increasing.
When the traffic carried by a network link exceeds or reaches the bandwidth set by the user, network congestion or packet loss easily occurs, which affects the user's working efficiency and experience. It is therefore necessary to predict the user's network bandwidth utilization. When predicting a user's network bandwidth utilization, the user's bandwidth utilization over the past year or longer is usually analyzed to predict the bandwidth utilization for the next quarter or half year; if the predicted bandwidth utilization exceeds or reaches a threshold, the terminal device can prompt the user to expand capacity in time.
In the prior art, a user's network bandwidth utilization is usually predicted by regression analysis, and the accuracy of such prediction is relatively low.
Summary of the invention
In view of this, embodiments of the present application provide a prediction method, an apparatus, an electronic device and a computer-readable storage medium, so as to address the relatively low accuracy of network bandwidth utilization prediction in the prior art.
In a first aspect, an embodiment of the present application provides a prediction method. The method comprises: processing a stationarized historical time series with a time-series prediction model to obtain a fitted time series and a primary predicted value, wherein the fitted time series is a time series composed of fitted values at the moments of at least part of the values of the historical time series; calculating the difference between the historical time series and the fitted time series to obtain a first residual sequence; processing the first residual sequence with a memory network model to obtain a secondary predicted value; and determining a final predicted value from the primary predicted value and the secondary predicted value.
In this embodiment the time-series prediction model and the memory network model are used cooperatively: the time-series prediction model first yields the primary predicted value and the residual sequence, the memory network model then predicts on the residual sequence to obtain the secondary predicted value, and the final predicted value is determined from the primary and secondary predicted values. This process improves the prediction accuracy for time series such as network bandwidth utilization compared with the prior art.
In a possible design, processing the stationarized historical time series with the time-series prediction model to obtain the fitted time series and the primary predicted value comprises: processing the stationarized historical time series with an autoregressive integrated moving average (ARIMA) model to obtain the fitted time series corresponding to the historical time series and the primary predicted value, wherein the ARIMA model is trained on a training time series. Processing the first residual sequence with the memory network model to obtain the secondary predicted value comprises: processing the first residual sequence with a gated recurrent unit (GRU) model to obtain the secondary predicted value, wherein the GRU model is trained on a preset residual sequence, and the preset residual sequence corresponds to the fitted sequence obtained after the ARIMA model processes the training time series.
In this embodiment the trained ARIMA model and the trained GRU model are used cooperatively: the ARIMA model first yields the primary predicted value and the residual sequence, and the GRU model then predicts on the residual sequence to obtain the secondary predicted value. This improves the prediction accuracy for time series such as network bandwidth utilization compared with the prior art.
In a possible design, before processing the stationarized historical time series with the ARIMA model to obtain the fitted time series corresponding to the historical time series and the primary predicted value, the method further comprises: performing d differencing operations on the original time series to obtain the stationarized historical time series, where d is a parameter of the ARIMA model whose value is the number of differencing operations needed to make the training time series stationary, and d is a positive integer.
In the above implementation, the purpose of differencing the original time series is to make it stationary, that is, to obtain the stationarized historical time series. A stationary time series is more predictable, so predicting on it can further improve the prediction accuracy of network bandwidth utilization.
In a possible design, determining the final predicted value from the primary predicted value and the secondary predicted value comprises: summing the primary predicted value and the secondary predicted value to obtain a summed predicted value; and performing an inverse differencing operation on the summed predicted value to obtain the final predicted value.
The sum of the primary predicted value and the secondary predicted value is obtained first, and the inverse differencing operation is then applied to the summed predicted value to obtain the final predicted value.
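As an illustration of this design, the sketch below (an assumption for illustration, not part of the patent text) sums the two predictions and undoes d rounds of differencing by cumulative summation anchored at the last observed values of the original series; the function and variable names are hypothetical.

```python
import numpy as np

def combine_and_invert(primary_pred, secondary_pred, original_series, d=1):
    """Sum the primary and secondary predictions and invert d differencing operations (sketch)."""
    summed = np.asarray(primary_pred, dtype=float) + np.asarray(secondary_pred, dtype=float)
    # Rebuild each differencing level of the original series so the inversion
    # can be anchored at the last value of every level.
    levels = [np.asarray(original_series, dtype=float)]
    for _ in range(d):
        levels.append(np.diff(levels[-1]))
    restored = summed
    for k in range(d, 0, -1):
        restored = levels[k - 1][-1] + np.cumsum(restored)
    return restored
```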
In a possible design, the training process of the ARIMA model comprises: determining the parameters d, p and q of the ARIMA model from the training time series used as training samples, where p is the autoregressive order and q is the moving-average order; performing d differencing operations on the training time series to obtain a training stationary sequence; substituting the training stationary sequence into the ARIMA model to obtain the expression of a first predicted time series; removing the interference terms from the expression of the first predicted time series to obtain a function composed of the terms carrying the parameters to be estimated; determining the expression of the difference between the training stationary sequence and the function; and determining the solution of the parameters to be estimated in the function when the expression of the difference satisfies a preset first constraint condition, the ARIMA model being obtained from that solution.
In the above implementation, the parameters d, p and q of the ARIMA model are determined first; the training time series is differenced d times according to d to obtain the training stationary sequence; the expression of the first predicted time series is then obtained from p and q. That expression contains a constant term and terms carrying the parameters to be estimated; all terms carrying parameters to be estimated are combined into a function, the expression of the difference between the training stationary sequence and the function is computed, the expression of the difference is required to satisfy the first constraint condition, the parameters to be estimated in the function are solved, and the complete expression of the ARIMA model for the first predicted time series is obtained.
In a possible design, the training process of the GRU model comprises: calculating the difference between the training stationary sequence obtained by differencing the training time series d times and the predicted time series obtained by processing the training time series with the ARIMA model, to obtain a second residual sequence; processing the second residual sequence with an initial GRU model to obtain a second prediction sequence; calculating the loss value between the second prediction sequence and the second residual sequence; and, if the loss value does not satisfy a preset second constraint condition, adjusting the weight parameters and bias parameters of the initial GRU model until the loss value of the adjusted GRU model satisfies the second constraint condition, thereby obtaining the GRU model.
In the above implementation, the difference between the training stationary sequence and the predicted time series output by the ARIMA model is computed first to obtain the second residual sequence; the second residual sequence is fed into the GRU model to obtain the second prediction sequence; the loss value between the second prediction sequence and the second residual sequence is computed; if the loss value does not satisfy the second constraint condition, the weight and bias parameters of the GRU model are adjusted until the loss value of the adjusted GRU model satisfies the second constraint condition. Training the GRU model on the output of the trained ARIMA model allows the trained GRU model to cooperate better with the ARIMA model and thus yield a more accurate predicted value.
In a possible design, calculating the loss value between the second prediction sequence and the second residual sequence comprises: calculating a mean squared error from the multiple training predicted values in the second prediction sequence and the multiple residual values in the second residual sequence, the mean squared error being the loss value.
In the above implementation, the loss value may specifically be the mean squared error. The loss value may also be a value obtained in another way; the specific way of computing the loss value should not be construed as limiting the present application.
In a possible design, if the loss value does not satisfy the preset second constraint condition, adjusting the weight parameters and bias parameters of the initial GRU model until the loss value of the adjusted GRU model satisfies the second constraint condition to obtain the GRU model comprises: if the mean squared error is outside a preset numerical range, using gradient descent to adjust the values of the x component W_xz of the update-gate weight, the h component W_hz of the update-gate weight, the x component W_xr of the reset-gate weight, the h component W_hr of the reset-gate weight, the r component W_rh of the candidate-state weight and the x component W_xh of the candidate-state weight in the GRU model, and re-executing the step of processing the second residual sequence with the initial GRU model to obtain the second prediction sequence; and, when the mean squared error falls within the preset numerical range, determining the values of W_xz, W_hz, W_xr, W_hr, W_rh and W_xh in the GRU model as the final parameters of the GRU model.
In the above implementation, if the mean squared error is outside the preset numerical range, the values of W_xz, W_hz, W_xr, W_hr, W_rh and W_xh in the GRU model are adjusted first, and the process returns, with the adjusted GRU model, to the step of inputting the second residual sequence and obtaining the second prediction sequence, until the mean squared error falls within the preset numerical range. At that point the values of W_xz, W_hz, W_xr, W_hr, W_rh and W_xh are taken as the final parameter values of the GRU model, and the training of the GRU model is complete.
In a possible design, calculating the mean squared error from the multiple training predicted values in the second prediction sequence and the multiple residual values in the second residual sequence comprises: for each of the N training predicted values in the second prediction sequence, calculating the square of the difference between the training predicted value and the residual value at the same moment in the second residual sequence; and calculating the average of the N squared differences to obtain the mean squared error, where N is the sample size of the second prediction sequence.
Specifically, the mean squared error E is calculated with the formula E = (1/N) Σ_{j=1}^{N} (pre_j - x_j)^2, where N is the sample size of the second prediction sequence, pre_j is the training predicted value at moment j, and x_j is the residual value at moment j in the second residual sequence.
In a possible design, processing the second residual sequence with the initial GRU model to obtain the second prediction sequence comprises: computing the update-gate output z_{t_n} at moment t_n from the residual value x_{t_n} at moment t_n, the hidden state h_{t_{n-1}} at moment t_{n-1}, the update-gate weight parameters and the update-gate bias b_f; computing the reset-gate output r_{t_n} at moment t_n from the residual value x_{t_n}, the hidden state h_{t_{n-1}}, the reset-gate weight parameters and the reset-gate bias b_r; computing the candidate hidden state h̃_{t_n} at moment t_n from the reset-gate output r_{t_n}, the hidden state h_{t_{n-1}}, the residual value x_{t_n}, the candidate-state weight parameters and the candidate-state bias b_h; computing the hidden state h_{t_n} at moment t_n from the hidden state h_{t_{n-1}}, the candidate hidden state h̃_{t_n} and the update-gate output z_{t_n}; and computing the training predicted value pre_{t_{n+1}} for moment t_{n+1} from the hidden state h_{t_n}, the weight W_pre of the fully connected neural network layer and the bias b_pre of the fully connected neural network layer.
Specifically, for each residual value x_{t_n}, the training predicted value pre_{t_{n+1}} at the next moment is calculated according to: z_{t_n} = σ(W_xz·x_{t_n} + W_hz·h_{t_{n-1}} + b_f); r_{t_n} = σ(W_xr·x_{t_n} + W_hr·h_{t_{n-1}} + b_r); h̃_{t_n} = tanh(W_rh·(r_{t_n} * h_{t_{n-1}}) + W_xh·x_{t_n} + b_h); h_{t_n} = (1 - z_{t_n}) * h_{t_{n-1}} + z_{t_n} * h̃_{t_n}; pre_{t_{n+1}} = W_pre·h_{t_n} + b_pre. Here z_{t_n} is the update-gate output at moment t_n, W_xz is the x component of the update-gate weight, x_{t_n} is the residual value at moment t_n, W_hz is the h component of the update-gate weight, h_{t_{n-1}} is the hidden state at moment t_{n-1}, and b_f is the update-gate bias; r_{t_n} is the reset-gate output at moment t_n, W_xr is the x component of the reset-gate weight, W_hr is the h component of the reset-gate weight, and b_r is the reset-gate bias; h̃_{t_n} is the candidate hidden state at moment t_n, W_rh is the r component of the candidate-state weight, W_xh is the x component of the candidate-state weight, and b_h is the candidate-state bias; h_{t_n} is the hidden state at moment t_n, pre_{t_{n+1}} is the training predicted value at moment t_{n+1}, W_pre is the weight of the fully connected neural network layer, b_pre is the bias of the fully connected neural network layer, σ is the sigmoid function, and * denotes element-wise multiplication.
In the above implementation, the training predicted values can be obtained with the above set of formulas, and the training predicted values are then arranged in chronological order to obtain the second prediction sequence.
In a second aspect, an embodiment of the present application provides a prediction apparatus. The apparatus comprises: a primary predicted value obtaining module, configured to process a stationarized historical time series with a time-series prediction model to obtain a fitted time series and a primary predicted value, wherein the fitted time series is a time series composed of fitted values at the moments of at least part of the values of the historical time series; a first residual sequence module, configured to calculate the difference between the historical time series and the fitted time series to obtain a first residual sequence; a secondary predicted value obtaining module, configured to process the first residual sequence with a memory network model to obtain a secondary predicted value; and a final predicted value calculation module, configured to determine a final predicted value from the primary predicted value and the secondary predicted value.
In a possible design, the primary predicted value obtaining module is specifically configured to process the stationarized historical time series with an autoregressive integrated moving average (ARIMA) model to obtain the fitted time series corresponding to the historical time series and the primary predicted value, wherein the ARIMA model is trained on a training time series; and the secondary predicted value obtaining module is specifically configured to process the first residual sequence with a gated recurrent unit (GRU) model to obtain the secondary predicted value, wherein the GRU model is trained on a preset residual sequence, and the preset residual sequence corresponds to the fitted sequence obtained after the ARIMA model processes the training time series.
In a possible design, the apparatus further comprises a differencing module, configured to perform d differencing operations on the original time series to obtain the stationarized historical time series, where d is a parameter of the ARIMA model whose value is the number of differencing operations needed to make the training time series stationary, and d is a positive integer.
In a possible design, the final predicted value calculation module further comprises: a summing submodule, configured to sum the primary predicted value and the secondary predicted value to obtain a summed predicted value; and an inverse differencing submodule, configured to perform an inverse differencing operation on the summed predicted value to obtain the final predicted value.
In a possible design, the apparatus further comprises: a parameter determination module, configured to determine the parameters d, p and q of the ARIMA model from the training time series used as training samples, where p is the autoregressive order and q is the moving-average order; a training stationary sequence module, configured to perform d differencing operations on the training time series to obtain a training stationary sequence; a prediction expression obtaining module, configured to substitute the training stationary sequence into the ARIMA model to obtain the expression of a first predicted time series; a function composing module, configured to remove the interference terms from the expression of the first predicted time series to obtain a function composed of the terms carrying the parameters to be estimated; a difference expression determination module, configured to determine the expression of the difference between the training stationary sequence and the function; and an ARIMA model obtaining module, configured to determine the solution of the parameters to be estimated in the function when the expression of the difference satisfies a preset first constraint condition, and to obtain the ARIMA model from that solution.
In a possible design, the apparatus further comprises: a second residual sequence module, configured to calculate the difference between the training stationary sequence obtained by differencing the training time series d times and the predicted time series obtained by processing the training time series with the ARIMA model, to obtain a second residual sequence; a second prediction sequence module, configured to process the second residual sequence with an initial GRU model to obtain a second prediction sequence; a loss value calculation module, configured to calculate the loss value between the second prediction sequence and the second residual sequence; and a GRU model obtaining module, configured to, if the loss value does not satisfy a preset second constraint condition, adjust the weight parameters and bias parameters of the initial GRU model until the loss value of the adjusted GRU model satisfies the second constraint condition, thereby obtaining the GRU model.
In a possible design, the loss value calculation module is specifically configured to calculate a mean squared error from the multiple training predicted values in the second prediction sequence and the multiple residual values in the second residual sequence, the mean squared error being the loss value.
In a possible design, the GRU model obtaining module is specifically configured to: if the mean squared error is outside a preset numerical range, use gradient descent to adjust the values of the x component W_xz of the update-gate weight, the h component W_hz of the update-gate weight, the x component W_xr of the reset-gate weight, the h component W_hr of the reset-gate weight, the r component W_rh of the candidate-state weight and the x component W_xh of the candidate-state weight in the GRU model, and re-execute the step of processing the second residual sequence with the initial GRU model to obtain the second prediction sequence; and, when the mean squared error falls within the preset numerical range, determine the values of W_xz, W_hz, W_xr, W_hr, W_rh and W_xh in the GRU model as the final parameters of the GRU model.
In a possible design, the loss value calculation module is specifically configured to: for each of the N training predicted values in the second prediction sequence, calculate the square of the difference between the training predicted value and the residual value at the same moment in the second residual sequence; and calculate the average of the N squared differences to obtain the mean squared error, where N is the sample size of the second prediction sequence.
In a possible design, the second prediction sequence module is configured to: compute the update-gate output z_{t_n} at moment t_n from the residual value x_{t_n} at moment t_n, the hidden state h_{t_{n-1}} at moment t_{n-1}, the update-gate weight parameters and the update-gate bias b_f; compute the reset-gate output r_{t_n} at moment t_n from the residual value x_{t_n}, the hidden state h_{t_{n-1}}, the reset-gate weight parameters and the reset-gate bias b_r; compute the candidate hidden state h̃_{t_n} at moment t_n from the reset-gate output r_{t_n}, the hidden state h_{t_{n-1}}, the residual value x_{t_n}, the candidate-state weight parameters and the candidate-state bias b_h; compute the hidden state h_{t_n} at moment t_n from the hidden state h_{t_{n-1}}, the candidate hidden state h̃_{t_n} and the update-gate output z_{t_n}; and compute the training predicted value pre_{t_{n+1}} for moment t_{n+1} from the hidden state h_{t_n}, the weight W_pre of the fully connected neural network layer and the bias b_pre of the fully connected neural network layer.
In a third aspect, the present application provides an electronic device comprising a processor, a memory and a bus, wherein the memory stores machine-readable instructions executable by the processor; when the electronic device runs, the processor and the memory communicate via the bus, and when the machine-readable instructions are executed by the processor, the method described in the first aspect or any optional implementation of the first aspect is performed.
In a fourth aspect, the present application provides a computer-readable storage medium on which a computer program is stored; when the computer program is run by a processor, the method described in the first aspect or any optional implementation of the first aspect is performed.
In a fifth aspect, the present application provides a computer program product which, when run on a computer, causes the computer to perform the method in the first aspect or any possible implementation of the first aspect.
To make the above objects, features and advantages of the embodiments of the present application clearer and easier to understand, preferred embodiments are described in detail below with reference to the accompanying drawings.
Brief description of the drawings
To more clearly illustrate the technical solutions in the embodiments of the present application or in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; for a person of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 shows the overall system flow chart of the prediction;
Fig. 2 shows a flow diagram of the prediction method provided by the embodiments of the present application;
Fig. 3 shows a flow diagram of a specific implementation of the prediction method provided by the embodiments of the present application;
Fig. 4 shows a flow diagram of the training process of the ARIMA model;
Fig. 5 shows a flow diagram of the training process of the GRU model;
Fig. 6 shows a schematic block diagram of the prediction apparatus provided by the present application;
Fig. 7 is a structural schematic diagram of an electronic device provided by the embodiments of the present application;
Fig. 8 is a structural schematic diagram of the GRU model;
Fig. 9 shows a comparison between predicting a time series with the prediction method provided by the embodiments of the present application and predicting the time series with the ARIMA model alone.
Detailed description of the embodiments
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. The components of the embodiments of the present application generally described and illustrated in the drawings herein can be arranged and designed in a variety of different configurations. Therefore, the following detailed description of the embodiments of the present application provided in the drawings is not intended to limit the scope of the claimed application, but merely represents selected embodiments of the application. Based on the embodiments of the present application, all other embodiments obtained by a person skilled in the art without creative effort shall fall within the protection scope of the present application.
It should also be noted that similar reference signs and letters denote similar items in the following drawings; therefore, once an item is defined in one drawing, it does not need to be further defined and explained in subsequent drawings. Meanwhile, in the description of the present application, the terms "first", "second" and the like are only used to distinguish the description and are not to be understood as indicating or implying relative importance.
Before introducing specific embodiments of the present application, the application scenario of the present application is briefly introduced.
In the prior art, a user's network bandwidth utilization needs to be predicted. The usual practice is to analyze and process the user's network bandwidth utilization over the past year or longer, and then predict the user's bandwidth utilization for the next quarter or half year.
After the user's network bandwidth utilization over the past year or longer is obtained, the prior art often processes it only by regression analysis, for example predicting the network bandwidth utilization only with an autoregressive integrated moving average (ARIMA) model, which is a time-series prediction model; the prediction accuracy is relatively low.
To address the above problem, embodiments of the present application provide a method that combines a time-series prediction model with a memory network model to predict unknown values in a time series. It should be understood that the method is not limited to predicting network bandwidth utilization; it can also be applied to other predictions, for example predicting city bus passenger flow or website visits. For ease of description, in the embodiments of the present application the time-series prediction model is exemplified by the ARIMA model, the memory network model is exemplified by the gated recurrent unit (GRU) model, and the prediction of network bandwidth utilization is taken as an example.
Referring to Fig. 1, Fig. 1 shows the overall system flow chart of time-series prediction in the embodiments of the present application, which can be divided into data collection, data preprocessing, model training, predicting the time series with the trained models, judging whether the models meet the required standard, model optimization, result feedback and model application. If a model meets the standard, it can be put into application directly; if a model does not meet the standard, model optimization and result feedback can be carried out.
The data collection process may be as follows: a network traffic collector collects the data traffic of every switch among the multiple switches on multiple links through the Simple Network Management Protocol (SNMP), for example once per minute.
After data collection, data preprocessing can be carried out. Data preprocessing includes the following steps:
Statistical analysis of each of the multiple links shows that the utilization of some links is low, so the links can be screened: the links are sorted in descending order of link utilization, and the links whose utilization ranks in the top 70% can be taken. It should be understood that the top 70% is only an example; it could also be another value, such as the top 80% or the top 60%.
White-noise testing can be performed on each of the screened links, and white-noise sequences are rejected. A white-noise sequence is a zero-mean random sequence with constant variance; it follows no discernible pattern and is of little value for model training.
After the white-noise sequences are rejected, the traffic data on a certain link may contain outliers whose values are obviously higher than the values of the previous or the following moment. The outliers in the traffic data can be smoothed, for example by replacing the outlier with the mean of the values of the previous moment and the following moment.
If the traffic data has missing values, for example several consecutive days of data or data at intervals are lost, the data can be completed by methods such as the mean method, the median method or time-series prediction. For example, for the data of a 90-day quarter in which 10 days of data are lost (the 10 missing days may be consecutive or scattered, with 10 missing days in total), the median method can be used for completion: the median of the 80 days of available data is computed and used as the value of the 10 missing days. After the daily data traffic within the preset period is obtained, the data traffic can be normalized to obtain the bandwidth utilization; optionally, the bandwidth utilization can be calculated by dividing the value of the data traffic by the total network bandwidth.
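A minimal sketch of this preprocessing step is given below. It assumes the daily traffic is held in a pandas Series indexed by date; the function and variable names are illustrative, not taken from the patent.

```python
import pandas as pd

def preprocess_daily_traffic(traffic: pd.Series, total_bandwidth: float) -> pd.Series:
    """Fill missing days with the median of the observed days, then normalize
    traffic to bandwidth utilization (sketch)."""
    observed = traffic.dropna()
    filled = traffic.fillna(observed.median())   # median-method completion
    utilization = filled / total_bandwidth        # utilization = traffic / total bandwidth
    return utilization
```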
Since what needs to be obtained is the maximum value of the network bandwidth utilization for every day within the preset period, the maximum of the half-hour average values of the network bandwidth utilization within a day can be taken as the maximum network bandwidth utilization for that day. The obtained daily maximum network bandwidth utilization can be stored in a database, and the data format in the database can be as shown in the following table:
After data preprocessing, the preprocessed data can be divided into two parts, one part as the training samples of the models and the other part as the test samples of the models. For example, 80% of the preprocessed data can be used as training samples and 20% as test samples. The 80%/20% split is only an example; other values are possible, such as 85% for training and 15% for testing, or 75% for training and 25% for testing.
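The chronological split described above could look like the following sketch (the ratio and names are illustrative assumptions):

```python
def split_train_test(values, train_ratio=0.8):
    """Chronological split: the first part trains the models, the rest tests them."""
    cut = int(len(values) * train_ratio)
    return values[:cut], values[cut:]
```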
The training processes of the ARIMA model and the GRU model are described below. After training is completed, the trained ARIMA model and GRU model can be used for prediction, and the test samples can then be used to judge whether the ARIMA model and the GRU model meet the required standard. If they do, the models can be put into application; if they do not, the models can be optimized, for example retrained with more training samples, and the optimized models are tested again with the test samples until the models meet the standard.
Referring to Fig. 4, Fig. 4 shows the training process of the ARIMA model in the present application. The process can be executed by a computing device, that is, a device with data processing capability such as a server or a terminal device (for example a smart mobile device or a computer). The training process of the ARIMA model includes the following steps:
Step S210: determining the parameters d, p and q of the ARIMA model from the training time series used as training samples.
A time series is a sequence formed by arranging the values of the same statistical indicator in the chronological order in which they occur. In the embodiments of the present application, the values of the same statistical indicator can be the user's maximum daily network bandwidth utilization within a preset period. The preset period can be a relatively long period, such as a quarter or half a year. The training time series can be the sequence formed by arranging, in chronological order, the daily maximum values of the network bandwidth usage within a certain preset period.
The ARIMA model has three parameters d, p and q. The value of d is the number of differencing operations needed to make the time series stationary, that is, the time series becomes stationary after d differencing operations. The value of d is 0 or a positive integer; if d is 0, the time series is stationary without any differencing operation. p is the autoregressive order of the ARIMA model, and q is the moving-average order.
The calculation of the parameter d is illustrated below:
Suppose the training time series is a1, a2, a3, a4, a5, a6, a7. When calculating the parameter d of the ARIMA model, a stationarity test can first be performed on the training time series (a1, a2, a3, a4, a5, a6, a7) to judge whether the training time series is stationary.
If the training time series is stationary, it is stationary without differencing, that is, the number of differencing operations needed to make the training time series stationary is 0, and the value of d is 0.
If the training time series is not stationary, a first differencing operation is performed on it, giving (a2-a1), (a3-a2), (a4-a3), (a5-a4), (a6-a5), (a7-a6). Let b1=(a2-a1), b2=(a3-a2), b3=(a4-a3), b4=(a5-a4), b5=(a6-a5), b6=(a7-a6); then after the first differencing operation the training time series becomes b1, b2, b3, b4, b5, b6. A stationarity test is performed on the once-differenced training time series (b1, b2, b3, b4, b5, b6) to judge whether it is stationary.
If (b1, b2, b3, b4, b5, b6) is stationary, the training time series becomes stationary after one differencing operation, that is, the number of differencing operations needed to make the training time series stationary is 1, and the value of d is 1.
If (b1, b2, b3, b4, b5, b6) is not stationary, a second differencing operation is performed on (b1, b2, b3, b4, b5, b6), giving (b2-b1), (b3-b2), (b4-b3), (b5-b4), (b6-b5). Let c_t1=(b2-b1), c_t2=(b3-b2), c_t3=(b4-b3), c_t4=(b5-b4), c_t5=(b6-b5); then after two differencing operations the training time series becomes c_t1, c_t2, c_t3, c_t4, c_t5. A stationarity test is performed on the twice-differenced training time series (c_t1, c_t2, c_t3, c_t4, c_t5) to judge whether it is stationary.
If (c_t1, c_t2, c_t3, c_t4, c_t5) is stationary, the training time series becomes stationary after two differencing operations, that is, the number of differencing operations needed is 2, and the value of d is 2.
……
The stationarity test can be performed by means of a unit root test. A unit root test checks whether a unit root exists in the time series under test: if a unit root exists, the series is non-stationary; if no unit root exists, the series is stationary. It can be understood that, in addition to the unit root test, the stationarity test can also be performed by inspecting the time-series plot; for example, the plot of the time series is drawn, and if it shows an obvious trend or periodicity, the time series is non-stationary.
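A minimal sketch of this step, assuming the Augmented Dickey-Fuller test from statsmodels is used as the unit root test (the patent does not name a specific test implementation):

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

def find_d(series, max_d=3, alpha=0.05):
    """Difference the series until the ADF unit root test rejects non-stationarity;
    return the number of differencing operations d and the differenced series."""
    values = np.asarray(series, dtype=float)
    for d in range(max_d + 1):
        if adfuller(values)[1] < alpha:   # small p-value -> no unit root -> stationary
            return d, values
        if d < max_d:
            values = np.diff(values)      # one more differencing operation
    return max_d, values
```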
The calculation of the parameters p and q is illustrated below:
p can be determined through the partial autocorrelation function (PACF), and q can be determined through the autocorrelation function (ACF): p is the lag at which the partial autocorrelation coefficients of the training time series cut off, and q is the lag at which the autocorrelation coefficients of the training time series cut off. It can be understood that p and q can also be determined in other ways, for example through the Akaike information criterion (AIC) or the Bayesian information criterion (BIC). The specific way of determining p and q should not be construed as limiting the present application.
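As an illustration (an assumption, not mandated by the patent), the cut-off orders could be read from ACF/PACF values computed with statsmodels; in practice the plots are usually inspected by eye:

```python
from statsmodels.tsa.stattools import acf, pacf

def estimate_p_q(stationary_series, max_lag=10, threshold=0.2):
    """Pick p as the last PACF lag above the threshold and q as the last ACF lag above it."""
    pacf_vals = pacf(stationary_series, nlags=max_lag)
    acf_vals = acf(stationary_series, nlags=max_lag)
    p = max((lag for lag in range(1, max_lag + 1) if abs(pacf_vals[lag]) > threshold), default=0)
    q = max((lag for lag in range(1, max_lag + 1) if abs(acf_vals[lag]) > threshold), default=0)
    return p, q
```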
Step S220: performing d differencing operations on the training time series to obtain a training stationary sequence.
Suppose d = 2 and continue the example above: c_t1, c_t2, c_t3, c_t4, c_t5 is the training stationary sequence obtained after the training time series a1, a2, a3, a4, a5, a6, a7 has been differenced twice.
Step S230: substituting the training stationary sequence into the ARIMA model to obtain the expression of the first predicted time series.
According to p and q, the expression of the first predicted time series of the training stationary sequence has the form x_t' = φ_0 + φ_1 x_{t-1} + φ_2 x_{t-2} + ... + φ_p x_{t-p} + ε_t - θ_1 ε_{t-1} - θ_2 ε_{t-2} - ... - θ_q ε_{t-q}, where x_t' is the predicted value at moment t in the training stationary sequence, φ_0 is a constant term, φ_i (i = 1, 2, ..., p) is the weight of the actual value x_{t-i}, x_{t-i} (i = 1, 2, ..., p) is the i-th value before the actual value x_t at moment t in the training stationary sequence, that is, the actual value at moment (t-i) in the training stationary sequence, θ_i (i = 1, 2, ..., q) is the weight of the random disturbance ε_{t-i}, and ε_{t-i} (i = 1, 2, ..., q) is the random disturbance at moment (t-i). The random disturbance is calculated as follows: the mean of all values in the training stationary sequence is computed first, and the value of the training stationary sequence at a given moment minus that mean is the random disturbance at that moment.
For ease of description, the training process of the ARIMA model is illustrated with q = 2 and p = 2; in this case the expression of the first predicted time series is specifically:
x_t' = φ_0 + φ_1 x_{t-1} + φ_2 x_{t-2} + ε_t - θ_1 ε_{t-1} - θ_2 ε_{t-2}
Step S240: removing the interference terms from the expression of the first predicted time series to obtain a function composed of the terms carrying the parameters to be estimated.
The above interference terms are the constant term φ_0 and ε_t in the expression x_t' = φ_0 + φ_1 x_{t-1} + φ_2 x_{t-2} + ε_t - θ_1 ε_{t-1} - θ_2 ε_{t-2}. After removing φ_0 and ε_t, the terms carrying the parameters to be estimated form the function F_t(β) = φ_1 x_{t-1} + φ_2 x_{t-2} - θ_1 ε_{t-1} - θ_2 ε_{t-2}, where β denotes all of the parameters to be estimated φ_i and θ_i.
Step S250: determining the expression of the difference between the training stationary sequence and the function.
Subtracting the above function from the true value of the training stationary sequence at moment t gives the expression of the residual term of the training stationary sequence: ξ_t = x_t - F_t(β). This expression is the expression of the above difference, where x_t is the true value of the training stationary sequence at moment t; for example, if t = t1 then x_t = c_t1.
Step S260: determining the solution of the parameters to be estimated in the function when the expression of the difference satisfies the preset first constraint condition, and obtaining the ARIMA model from that solution.
After the expression of the residual term of the training stationary sequence is obtained, the residual sum of squares of the residual terms is obtained: Q(β) = Σ_t ξ_t² = Σ_t (x_t - F_t(β))². The first constraint condition can be that the above residual sum of squares is obtained first and its minimum is then computed with an iterative algorithm.
Then, at the minimum of the residual sum of squares, the parameters to be estimated φ_1, φ_2, θ_1, θ_2 in the expression x_t' = φ_0 + φ_1 x_{t-1} + φ_2 x_{t-2} + ε_t - θ_1 ε_{t-1} - θ_2 ε_{t-2} of the first predicted time series are obtained, so that the determined expression of the first predicted time series of the ARIMA model is obtained.
Optionally, gradient descent can be used to solve for the φ_i and θ_i that minimize Q(β): for i = 1, the parameters to be estimated φ_1 and θ_1 are obtained; for i = 2, the parameters to be estimated φ_2 and θ_2 are obtained.
In summary, the parameters d, p and q of the ARIMA model are determined first; the training time series is differenced d times according to d to obtain the training stationary sequence; the expression of the first predicted time series is then obtained from p and q. The expression contains a constant term and terms carrying the parameters to be estimated; all terms carrying parameters to be estimated are combined into a function, the expression of the difference between the training stationary sequence and the function is computed, the expression of the difference is required to satisfy the first constraint condition, the parameters to be estimated in the function are solved, and the complete expression of the ARIMA model for the first predicted time series is obtained.
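In practice this estimation is usually delegated to a library. The sketch below is an assumption rather than the patent's own procedure: it fits an ARIMA(p, d, q) model with statsmodels and returns the in-sample fitted series together with a multi-step forecast (the primary predicted values).

```python
from statsmodels.tsa.arima.model import ARIMA

def fit_arima(train_series, p, d, q, horizon=90):
    """Fit ARIMA(p, d, q) on the training time series; return the fitted model,
    its in-sample fitted values and the primary predicted values."""
    model = ARIMA(train_series, order=(p, d, q)).fit()
    fitted_values = model.fittedvalues                 # fitted time series
    primary_forecast = model.forecast(steps=horizon)   # primary predicted values
    return model, fitted_values, primary_forecast
```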
Referring to Fig. 5, Fig. 5 shows the training process of the GRU model in the present application; the GRU model is a memory network model. It can be understood that this process can also be executed by a computing device. The training process of the GRU model includes the following steps:
Step S310: calculating the difference between the training stationary sequence obtained by differencing the training time series d times and the predicted time series obtained by processing the training time series with the ARIMA model, to obtain the second residual sequence.
After the expression x_t' = φ_0 + φ_1 x_{t-1} + φ_2 x_{t-2} + ε_t - θ_1 ε_{t-1} - θ_2 ε_{t-2} of the ARIMA model is determined, the training stationary sequence c_t1, c_t2, c_t3, c_t4, c_t5 is substituted into the expression, giving the predicted time series c_t3', c_t4', c_t5', where:
c_t3' = φ_0 + φ_1 c_t2 + φ_2 c_t1 + ε_t3 - θ_1 ε_t2 - θ_2 ε_t1
c_t4' = φ_0 + φ_1 c_t3 + φ_2 c_t2 + ε_t4 - θ_1 ε_t3 - θ_2 ε_t2
c_t5' = φ_0 + φ_1 c_t4 + φ_2 c_t3 + ε_t5 - θ_1 ε_t4 - θ_2 ε_t3
The differences between the training stationary sequence c_t3, c_t4, c_t5 and the predicted time series c_t3', c_t4', c_t5' are calculated, giving:
e_t0 = c_t3 - c_t3'; e_t1 = c_t4 - c_t4'; e_t2 = c_t5 - c_t5'. The second residual sequence is e_t0, e_t1, e_t2.
Step S320: processing the second residual sequence with the initial GRU model to obtain the second prediction sequence.
For the residual value at each moment in the second residual sequence:
The update-gate output z_{t_n} at moment t_n can be computed from the residual value x_{t_n} at moment t_n, the hidden state h_{t_{n-1}} at moment t_{n-1}, the update-gate weight parameters and the update-gate bias b_f.
The reset-gate output r_{t_n} at moment t_n can be computed from the residual value x_{t_n} at moment t_n, the hidden state h_{t_{n-1}} at moment t_{n-1}, the reset-gate weight parameters and the reset-gate bias b_r.
The candidate hidden state h̃_{t_n} at moment t_n can be computed from the reset-gate output r_{t_n}, the hidden state h_{t_{n-1}} at moment t_{n-1}, the residual value x_{t_n} at moment t_n, the candidate-state weight parameters and the candidate-state bias b_h.
The hidden state h_{t_n} at moment t_n can be computed from the hidden state h_{t_{n-1}} at moment t_{n-1}, the candidate hidden state h̃_{t_n} at moment t_n and the update-gate output z_{t_n} at moment t_n.
The training predicted value pre_{t_{n+1}} for moment t_{n+1} can be computed from the hidden state h_{t_n} at moment t_n, the weight W_pre of the fully connected neural network layer and the bias b_pre of the fully connected neural network layer.
Specifically, each residual value in the second residual sequence is substituted into the following formulas of the GRU model to calculate, for each residual value x_{t_n}, the training predicted value pre_{t_{n+1}} at the next moment:
z_{t_n} = σ(W_xz·x_{t_n} + W_hz·h_{t_{n-1}} + b_f); r_{t_n} = σ(W_xr·x_{t_n} + W_hr·h_{t_{n-1}} + b_r); h̃_{t_n} = tanh(W_rh·(r_{t_n} * h_{t_{n-1}}) + W_xh·x_{t_n} + b_h); h_{t_n} = (1 - z_{t_n}) * h_{t_{n-1}} + z_{t_n} * h̃_{t_n}; pre_{t_{n+1}} = W_pre·h_{t_n} + b_pre. The structural schematic diagram of the GRU model is shown in Fig. 8; the GRU model includes a reset gate and an update gate, which control the memory state of the GRU model.
Here z_{t_n} is the update-gate output at moment t_n; σ is the sigmoid function, σ(x) = 1/(1 + e^{-x}); W_xz is the x component of the update-gate weight; x_{t_n} is the residual value at moment t_n; W_hz is the h component of the update-gate weight; h_{t_{n-1}} is the hidden state at moment t_{n-1}; b_f is the update-gate bias. r_{t_n} is the reset-gate output at moment t_n; W_xr is the x component of the reset-gate weight; W_hr is the h component of the reset-gate weight; b_r is the reset-gate bias. tanh(x) is the hyperbolic tangent function, tanh(x) = (e^x - e^{-x})/(e^x + e^{-x}); h̃_{t_n} is the candidate hidden state at moment t_n; W_rh is the r component of the candidate-state weight; W_xh is the x component of the candidate-state weight; b_h is the candidate-state bias. h_{t_n} is the hidden state at moment t_n; pre_{t_{n+1}} is the training predicted value at moment t_{n+1}; W_pre is the weight of the fully connected neural network layer; b_pre is the bias of the fully connected neural network layer; * denotes element-wise multiplication.
For e_t0, substituting x_{t_n} = e_t0 into the above formulas gives the training predicted value pre_t1.
For e_t1, substituting x_{t_n} = e_t1 into the above formulas gives the training predicted value pre_t2.
The training predicted values are arranged in chronological order, and the second prediction sequence is pre_t1, pre_t2.
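A minimal NumPy sketch of this forward pass, mirroring the formulas above (the weight shapes, hidden size and parameter container are assumptions for illustration):

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_step(x_t, h_prev, params):
    """One GRU step on a residual value x_t; returns the new hidden state and
    the training predicted value for the next moment."""
    z = sigmoid(params["Wxz"] @ x_t + params["Whz"] @ h_prev + params["bf"])   # update gate
    r = sigmoid(params["Wxr"] @ x_t + params["Whr"] @ h_prev + params["br"])   # reset gate
    h_cand = np.tanh(params["Wrh"] @ (r * h_prev) + params["Wxh"] @ x_t + params["bh"])
    h_t = (1.0 - z) * h_prev + z * h_cand                                       # new hidden state
    pre_next = params["Wpre"] @ h_t + params["bpre"]                            # fully connected layer
    return h_t, pre_next

def gru_forward(residuals, params, hidden_size=8):
    """Run the GRU over the second residual sequence to build the second prediction sequence."""
    h = np.zeros(hidden_size)
    predictions = []
    for e in residuals[:-1]:                     # the last residual has no following target
        h, pre = gru_step(np.atleast_1d(e), h, params)
        predictions.append(pre)
    return np.array(predictions)
```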
Step S330: calculating the loss value between the second prediction sequence and the second residual sequence.
A mean squared error can be calculated from the multiple training predicted values in the second prediction sequence and the multiple residual values in the second residual sequence; this mean squared error is the above loss value. For each of the N training predicted values in the second prediction sequence, the square of the difference between the training predicted value and the residual value at the same moment in the second residual sequence is calculated; the average of the N squared differences is then calculated to obtain the mean squared error, where N is the sample size of the second prediction sequence.
Specifically, the mean squared error E is calculated with the formula E = (1/N) Σ_{j=1}^{N} (pre_j - x_j)^2, where N is the sample size of the second prediction sequence (2 in the example above), pre_j is the training predicted value at moment j, and x_j is the residual value at moment j in the second residual sequence.
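The loss computation in the formula above amounts to the following short sketch (variable names are illustrative):

```python
import numpy as np

def mse_loss(predictions, residuals):
    """Mean squared error between the second prediction sequence and the second residual sequence."""
    predictions = np.asarray(predictions, dtype=float)
    residuals = np.asarray(residuals, dtype=float)
    return np.mean((predictions - residuals) ** 2)
```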
Step S340: if the loss value does not satisfy the preset second constraint condition, adjusting the weight parameters and bias parameters of the initial GRU model until the loss value of the adjusted GRU model satisfies the second constraint condition, thereby obtaining the GRU model.
The second constraint condition is whether the mean squared error is within a preset numerical range. If the mean squared error is outside the preset numerical range, gradient descent is used to adjust the values of W_xz, W_hz, W_xr, W_hr, W_rh and W_xh in the GRU model, and the process returns to step S320; when the mean squared error falls within the preset numerical range, the values of W_xz, W_hz, W_xr, W_hr, W_rh and W_xh in the GRU model at that point are determined as the final parameters of the GRU model.
In summary, the difference between the training stationary sequence and the predicted time series output by the ARIMA model is calculated first to obtain the second residual sequence; the second residual sequence is fed into the GRU model to obtain the second prediction sequence; the loss value between the second prediction sequence and the second residual sequence is calculated; if the loss value does not satisfy the second constraint condition, the weight parameters and bias parameters of the GRU model are adjusted until the loss value of the adjusted GRU model satisfies the second constraint condition. Using the output of the trained ARIMA model as the input of the GRU model for training allows the trained GRU model to cooperate better with the ARIMA model, so that a more accurate predicted value is obtained.
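A compact sketch of this training loop, assuming PyTorch is used for the gradient-descent updates (the patent only requires gradient descent on the listed weights; the layer sizes, learning rate and stopping threshold here are illustrative):

```python
import torch
import torch.nn as nn

class ResidualGRU(nn.Module):
    """GRU over the residual sequence followed by a fully connected prediction layer."""
    def __init__(self, hidden_size=8):
        super().__init__()
        self.gru = nn.GRU(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.fc = nn.Linear(hidden_size, 1)

    def forward(self, x):                      # x: (batch, seq_len, 1)
        out, _ = self.gru(x)
        return self.fc(out)                    # one-step-ahead prediction per position

def train_on_residuals(residuals, threshold=1e-3, max_epochs=500, lr=0.01):
    """Train until the MSE falls within the preset range (second constraint condition)."""
    x = torch.tensor(residuals[:-1], dtype=torch.float32).view(1, -1, 1)
    y = torch.tensor(residuals[1:], dtype=torch.float32).view(1, -1, 1)
    model, loss_fn = ResidualGRU(), nn.MSELoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)   # plain gradient descent
    for _ in range(max_epochs):
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        if loss.item() < threshold:
            break
        loss.backward()
        optimizer.step()
    return model
```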
Referring to Fig. 2, Fig. 2 shows the prediction method provided by the embodiments of the present application, which includes the following steps:
Step S10: processing the stationarized historical time series with the time-series prediction model to obtain the fitted time series and the primary predicted value.
Step S20: calculating the difference between the historical time series and the fitted time series to obtain the first residual sequence.
Step S30: processing the first residual sequence with the memory network model to obtain the secondary predicted value.
Step S40: determining the final predicted value from the primary predicted value and the secondary predicted value.
The time-series prediction model is any model capable of predicting a time series; for example, it may be an ARIMA model. The stationarized historical time series means that the historical time series is stationary: if the original time series is not stationary, stationarization processing is applied to the original time series so that it becomes stationary, yielding the stationarized historical time series. A specific way to stationarize the original time series is to apply d rounds of differencing to it.
The original time series may be, for example, the daily maxima of network bandwidth utilization over a certain period, arranged in chronological order. For example, given the network bandwidth utilization of a user over five consecutive days, the goal is to predict the utilization for days 6 to 8; the five consecutive days of utilization form the original time series. The historical time series is the stationary time series obtained after applying d rounds of differencing to the original time series; if the original time series is already stationary, d is 0, i.e. no differencing is needed. The fitted time series is the time series formed by the fitted values at the moments where at least part of the values of the historical time series are located. The memory network model may be a GRU model.
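For illustration, the d rounds of differencing used for stationarization can be sketched as below; the bandwidth-utilization values are hypothetical:

    import numpy as np

    def difference(series, d):
        # apply d rounds of first-order differencing to the original time series
        out = np.asarray(series, dtype=float)
        for _ in range(d):
            out = out[1:] - out[:-1]
        return out

    original = [0.62, 0.58, 0.71, 0.66, 0.73]   # daily maxima of bandwidth utilization (hypothetical)
    historical = difference(original, d=1)      # with d = 0 the original series is used as-is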
In the prediction method provided by the embodiments of the present application, the time-series prediction model and the memory network model are used in cooperation: the time-series prediction model first yields primary predicted values and a residual sequence, the memory network model then predicts on the residual sequence to obtain secondary predicted values, and final predicted values are determined from the primary and secondary predicted values. This combination optimizes the prediction accuracy for time series, so that, compared with the prior art, the prediction accuracy for time series such as network bandwidth utilization is improved.
Next, referring to Fig. 3, the prediction method provided by the embodiments of the present application is illustrated by taking the ARIMA model as the time-series prediction model and the GRU model as the memory network model. The method includes the following steps, which may be executed by a computing device:
Step S110: process the stationarized historical time series with the ARIMA model to obtain the fitted time series and primary predicted values corresponding to the historical time series.
The stationarized historical time series may be obtained from the original time series after d rounds of differencing.
The expression of the trained ARIMA model is:
xt' = φ0 + φ1·xt-1 + φ2·xt-2 + ... + φp·xt-p + εt - θ1·εt-1 - θ2·εt-2 - ... - θq·εt-q
From the above expression, the predicted value xt' at moment t is obtained from the values xt-p to xt-1 of the historical time series at moments (t-p) to (t-1) and from the random disturbances εt-q to εt-1 of the historical time series at moments (t-q) to (t-1).
Optionally, the random disturbance is calculated as follows: first compute the mean of all values in the historical time series; then subtract this mean from the value of the historical time series at a given moment; the result is the random disturbance at that moment.
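A minimal sketch of this optional random-disturbance calculation:

    import numpy as np

    def random_disturbances(historical):
        # epsilon at each moment = value at that moment minus the mean of all values
        x = np.asarray(historical, dtype=float)
        return x - x.mean()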
If the historical time series has a known value at moment t, the predicted value xt' is a member of the fitted time series; if the historical time series has no known value at moment t, the predicted value xt' is a member of the primary predicted values.
For ease of description, the prediction method provided by the embodiments of the present application is further illustrated below with d = 0, p = 2 and q = 2:
Since d = 0, the original time series is already stationary after 0 rounds of differencing; the historical time series is therefore identical to the original time series. The original time series (i.e. the historical time series) zt1, zt2, zt3, zt4, zt5 is substituted into the following formula:
xt' = φ0 + φ1·xt-1 + φ2·xt-2 + εt - θ1·εt-1 - θ2·εt-2
It can be obtained that:
zt3' = φ0 + φ1·zt2 + φ2·zt1 + εt3 - θ1·εt2 - θ2·εt1
zt4' = φ0 + φ1·zt3 + φ2·zt2 + εt4 - θ1·εt3 - θ2·εt2
……
zt6' = φ0 + φ1·zt5 + φ2·zt4 + εt6 - θ1·εt5 - θ2·εt4
zt7' = φ0 + φ1·zt6' + φ2·zt5 + εt7 - θ1·εt6 - θ2·εt5
zt8' = φ0 + φ1·zt7' + φ2·zt6' + εt8 - θ1·εt7 - θ2·εt6
Since the historical time series has known values zt3, zt4 and zt5 at moments t3, t4 and t5, the zt3', zt4' and zt5' obtained by the above formulas belong to the fitted time series; since the historical time series has no known values at moments t6, t7 and t8, the zt6', zt7' and zt8' obtained by the above formulas belong to the primary predicted values.
When predicting the primary predicted value for a specific moment, if the historical time series contains a true value corresponding to that moment, the true value is used for prediction; if it does not, the previously obtained primary predicted value is used instead. For example, predicting zt6' requires zt4 and zt5, both of which exist in the historical time series and can therefore be used directly. Predicting zt7' requires zt5 and zt6'; only zt5 exists in the historical time series, so the previously predicted zt6' is used together with the existing zt5 to predict zt7'.
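The worked example with p = 2, q = 2 and d = 0 can be sketched as a short recursion. The coefficients phi0, phi1, phi2, theta1 and theta2 stand for the fitted ARIMA parameters; disturbances at moments without known values are taken as 0 in this sketch, which is an assumption not spelled out in the text above:

    def arima_forecast(history, eps, phi0, phi1, phi2, theta1, theta2, steps):
        # history: known values zt1..zt5; eps: their random disturbances; steps: number of future moments
        values = list(history)                  # grows as predictions are fed back in
        eps = list(eps) + [0.0] * steps         # unknown future disturbances taken as 0
        fitted_and_forecast = []
        for t in range(2, len(history) + steps):
            z_hat = (phi0
                     + phi1 * values[t - 1] + phi2 * values[t - 2]
                     + eps[t] - theta1 * eps[t - 1] - theta2 * eps[t - 2])
            fitted_and_forecast.append(z_hat)   # zt3'..zt5' are fitted values, zt6'..zt8' are primary predictions
            if t >= len(history):               # no true value at this moment: feed the prediction back
                values.append(z_hat)
        return fitted_and_forecast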
In a specific embodiment, if d is not 0, for example d = 1, a first-order differencing is applied to the original time series zt1, zt2, zt3, zt4, zt5 to obtain the historical time series (zt2-zt1), (zt3-zt2), (zt4-zt3), (zt5-zt4), which is then substituted into the expression of the ARIMA model; since the calculation after substitution corresponds to the above, it is not repeated here.
Step S120: calculate the difference between the historical time series and the fitted time series to obtain the first residual sequence.
The sequence obtained after d rounds of differencing is the historical time series. For a given moment, the value in the historical time series and the value in the fitted time series at the same moment are taken and subtracted; arranging the resulting differences in chronological order yields the first residual sequence.
Continuing the example above with d = 0, p = 2 and q = 2: zt3, zt4 and zt5 in the historical time series zt1, zt2, zt3, zt4, zt5 are subtracted by zt3', zt4' and zt5' in the fitted time series respectively, giving the first residual sequence zt3'', zt4'', zt5'', where zt3'' = zt3 - zt3', zt4'' = zt4 - zt4', zt5'' = zt5 - zt5'.
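A short sketch of this subtraction, with offset marking the index of the first moment that has a fitted value (t3 in the example):

    def first_residuals(historical, fitted, offset):
        # residual at a moment = historical value minus fitted value at the same moment
        return [historical[offset + i] - fitted[i] for i in range(len(fitted))]

    # e.g. zt3'' = zt3 - zt3', zt4'' = zt4 - zt4', zt5'' = zt5 - zt5'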
Step S130: process the first residual sequence with the GRU model to obtain secondary predicted values.
For ease of description, the first residual sequence is re-labelled: zt3'', zt4'', zt5'' become vt3, vt4, vt5. The residual values of the first residual sequence are then substituted into the GRU formulas above, and for each residual value the secondary predicted value for the following moment is calculated.
For vt3, substitution into the above formulas yields the hidden state at moment t3 and the secondary predicted value for moment t4; here the hidden state of the preceding moment is a random value or 0. For vt4, substitution likewise yields the hidden state at moment t4 and the secondary predicted value for moment t5. For vt5, substitution yields the hidden state at moment t5 and the secondary predicted value for moment t6. When predicting the value for moment t7, the secondary predicted value for moment t6 and the hidden state obtained from the above formulas are used; when predicting the value for moment t8, the results for moment t7 are used in the same way, and the process is not repeated here.
During the calculation of the secondary predicted values, when predicting the secondary predicted value for a specific moment, if the first residual sequence contains a true value corresponding to that moment, the true value is used for prediction; if it does not, the previously predicted secondary predicted value is used instead. For example, predicting the value for moment t6 requires vt5, which exists in the first residual sequence and can be used directly; predicting the value for moment t7 requires vt6, which does not exist, so the previously predicted value for moment t6 is used for the prediction.
Through the above iterative computation, the secondary predicted values for moments t6, t7 and t8 are obtained.
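The iterative computation of the secondary predicted values can be sketched as below; cell and head stand for the GRU step and the fully connected layer from the earlier sketch, and the initial hidden state of zero (or a random value) follows the description above:

    import numpy as np

    def secondary_predictions(residuals, n_future, cell, head, hidden_size=8):
        # residuals: known first-residual values (e.g. vt3, vt4, vt5)
        # cell(x, h) -> next hidden state; head(h) -> predicted value for the following moment
        h = np.zeros(hidden_size)               # initial hidden state: 0 (or a random value)
        preds = []
        inputs = list(residuals)
        for i in range(len(residuals) + n_future - 1):
            h = cell(inputs[i], h)
            y = head(h)                         # prediction for moment i+1
            preds.append(y)
            if i + 1 >= len(residuals):         # no true residual available: feed the prediction back
                inputs.append(y)
        return preds[-n_future:]                # secondary predicted values for the future moments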
Step S140: determine the final predicted values from the primary predicted values and the secondary predicted values.
A summation operation is performed on the primary predicted values and the secondary predicted values to obtain the final predicted values.
The summation operation may simply add each primary predicted value to the secondary predicted value at the same moment; in a specific embodiment, the primary predicted value may instead be multiplied by its corresponding weight to obtain one product, the secondary predicted value multiplied by its corresponding weight to obtain another product, and the two products then added.
In the embodiments of the present application, since d is 0, the sum of the primary predicted value and the secondary predicted value at the same moment can be taken directly as the final predicted value. For example, the final predicted values of the original time series zt1, zt2, zt3, zt4, zt5 are zt6''', zt7''' and zt8''', each being the sum of the primary predicted value and the secondary predicted value at the corresponding moment.
In a specific embodiment, if d is not 0, after the summed predicted values of the historical time series are calculated, an inverse differencing operation is further applied to the summed predicted values to obtain the final predicted values.
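A compact sketch of this combination step; the weights default to 1 (direct addition) and the inverse differencing shown is the d = 1 case, both illustrative assumptions:

    def final_predictions(primary, secondary, w1=1.0, w2=1.0):
        # (optionally weighted) sum of primary and secondary predicted values at the same moments
        return [w1 * p + w2 * s for p, s in zip(primary, secondary)]

    def inverse_difference(summed_preds, last_original):
        # undo one round of first-order differencing by cumulative summation,
        # starting from the last known value of the original time series
        out, prev = [], last_original
        for dp in summed_preds:
            prev = prev + dp
            out.append(prev)
        return out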
In the above implementation, the fitted time series and primary predicted values corresponding to the original time series are first obtained from the trained ARIMA model. The difference between the historical time series, obtained from the original time series after d rounds of differencing, and the fitted time series is calculated, giving the first residual sequence in which the differences are arranged chronologically. The first residual sequence is fed into the GRU model to obtain secondary predicted values, and the final predicted values of the original time series are then determined from the primary and secondary predicted values. In the embodiments of the present application, the trained ARIMA model and the trained GRU model are used in cooperation: the ARIMA model first yields primary predicted values and a residual sequence, and the GRU model then predicts on the residual sequence to obtain secondary predicted values. This cooperation optimizes the prediction accuracy for time series, so that, compared with the prior art, the prediction accuracy for time series such as network bandwidth utilization is improved.
Referring to Fig. 6, Fig. 6 shows a schematic block diagram of the prediction apparatus provided by the present application. It should be understood that the apparatus 400 corresponds to the method embodiments of Fig. 3 to Fig. 5 above and can perform the steps involved in those method embodiments; for the specific functions of the apparatus 400, reference may be made to the description above, and a detailed description is omitted here to avoid repetition. The apparatus 400 includes at least one software function module that can be stored in a memory in the form of software or firmware or solidified in the operating system (OS) of the apparatus 400. Specifically, the apparatus includes:
a primary predicted value obtaining module 410, configured to process the stationarized historical time series with a time-series prediction model to obtain a fitted time series and primary predicted values, wherein the fitted time series is the time series formed by the fitted values at the moments where at least part of the values of the historical time series are located;
a first residual sequence module 420, configured to calculate the difference between the historical time series and the fitted time series to obtain a first residual sequence;
a secondary predicted value obtaining module 430, configured to process the first residual sequence with a memory network model to obtain secondary predicted values;
a final predicted value calculation module 440, configured to determine final predicted values from the primary predicted values and the secondary predicted values.
The primary predicted value obtaining module 410 is specifically configured to process the stationarized historical time series with an autoregressive integrated moving average (ARIMA) model to obtain the fitted time series and primary predicted values corresponding to the historical time series, wherein the ARIMA model is trained from a training time series.
The secondary predicted value obtaining module 430 is specifically configured to process the first residual sequence with a gated recurrent unit (GRU) model to obtain the secondary predicted values, wherein the GRU model is trained from a preset residual sequence, and the preset residual sequence corresponds to the fitted sequence obtained after the ARIMA model processes the training time series.
The final predicted value calculation module 440 includes a summation submodule, configured to perform a summation operation on the primary predicted values and the secondary predicted values to obtain summed predicted values, and an inverse differencing submodule, configured to apply inverse differencing to the summed predicted values to obtain the final predicted values.
The apparatus further includes:
a differencing module, configured to apply d rounds of differencing to the original time series to obtain the stationarized historical time series, where d is a parameter of the ARIMA model whose value is the number of differencing operations performed to make the training time series stationary, d taking a positive integer;
a parameter determination module, configured to determine the parameters d, p, q of the ARIMA model from the training time series serving as the training sample, where p is the autoregressive order and q is the moving-average order;
a training stationary sequence module, configured to apply d rounds of differencing to the training time series to obtain a training stationary sequence;
a prediction expression obtaining module, configured to substitute the training stationary sequence into the ARIMA model to obtain the expression of a first predicted time sequence;
a function composition module, configured to remove the disturbance terms from the expression of the first predicted time sequence to obtain a function composed of data quantities with parameters to be estimated;
a difference expression determination module, configured to determine the expression of the difference between the training stationary sequence and the function;
an ARIMA model obtaining module, configured to determine the solution of the parameters to be estimated in the function when the expression of the difference meets a preset first constraint condition, and to obtain the ARIMA model from the solution.
The apparatus further includes a second residual sequence module, configured to calculate the difference between the training stationary sequence obtained from the training time series after d rounds of differencing and the predicted time sequence obtained from the training time series after processing by the ARIMA model, to obtain a second residual sequence;
a second forecasting sequence module, configured to process the second residual sequence with an initial GRU model to obtain a second forecasting sequence;
a loss value computing module, configured to calculate the loss value between the second forecasting sequence and the second residual sequence.
The loss value computing module is specifically configured to calculate a mean squared error from the plurality of training predicted values in the second forecasting sequence and the plurality of residual values in the second residual sequence, the mean squared error being the loss value.
The loss value computing module is specifically configured to calculate, for each of the N training predicted values in the second forecasting sequence, the square of the difference between the training predicted value and the residual value at the same moment in the second residual sequence, and to calculate the average of the N squared differences to obtain the mean squared error, where N is the sample size of the second forecasting sequence.
The apparatus further includes a GRU model obtaining module, configured to, if the loss value does not meet the preset second constraint condition, adjust the weight parameters and bias parameters of the initial GRU model until the loss value of the adjusted GRU model meets the second constraint condition, to obtain the GRU model.
The GRU model obtaining module is specifically configured to, if the mean squared error exceeds the preset range, adjust with gradient descent the numerical values of the x-component Wxz and h-component Whz of the update gate weight, the x-component Wxr and h-component Whr of the reset gate weight, and the r-component Wrh and x-component Wxh of the candidate hidden state weight in the GRU model, and to re-execute the processing of the second residual sequence with the initial GRU model to obtain the second forecasting sequence; once the mean squared error is within the preset range, the numerical values of Wxz, Whz, Wxr, Whr, Wrh and Wxh in the GRU model are determined as the final parameters of the GRU model. The second forecasting sequence module is configured to: calculate the update gate output at moment tn based on the residual value at moment tn, the hidden state at moment tn-1, the update gate weight parameters and the update gate bias bf; calculate the reset gate output at moment tn based on the residual value at moment tn, the hidden state at moment tn-1, the reset gate weight parameters and the reset gate bias br; calculate the candidate hidden state at moment tn based on the reset gate output, the hidden state at moment tn-1, the residual value at moment tn, the candidate hidden state weight parameters and the candidate hidden state bias bh; calculate the hidden state at moment tn based on the hidden state at moment tn-1, the candidate hidden state at moment tn and the update gate output at moment tn; and calculate the training predicted value for moment tn+1 based on the hidden state at moment tn, the weight Wpre of the fully connected neural network layer and the bias bpre of the fully connected neural network layer.
It is apparent to those skilled in the art that, for convenience and brevity of description, the specific working process of the apparatus described above may refer to the corresponding process in the foregoing method and is not repeated here.
With the prediction method, apparatus, electronic device and computer-readable storage medium provided by the embodiments of the present application, the bandwidth utilization of a user in the next quarter or the next year can be predicted from the bandwidth utilization of previous years, so that the network bandwidth can be planned correctly to meet the user's business demand for the next quarter or year; bandwidth bottlenecks and wasted bandwidth can also be identified and adjusted in time, which improves response speed and reduces operating costs.
Since what the present application performs is prediction on time series, and bandwidth utilization is one kind of time series, the prediction method provided by the embodiments of the present application can also be used to predict other time series besides network bandwidth utilization. For example, for urban traffic flow, the training sample is changed to the urban traffic flow of a certain preset period to train the ARIMA model and the GRU model, and the prediction method provided by the embodiments of the present application can then be used to predict urban traffic flow.
The present application also provides a device. Fig. 7 is a structural block diagram of the device 500 in the embodiments of the present application; as shown in Fig. 7, the device 500 may include a processor 510, a communication interface 520, a memory 530 and at least one communication bus 540. The communication bus 540 is used to realize direct connection communication among these components. The communication interface 520 of the device in the embodiments of the present application is used to communicate signaling or data with other node devices. The processor 510 may be an integrated circuit chip with signal processing capability. The processor 510 may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP) and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic device, or discrete hardware component, and may implement or execute the methods, steps and logic diagrams disclosed in the embodiments of the present application. The general-purpose processor may be a microprocessor, or the processor 510 may be any conventional processor.
The memory 530 may be, but is not limited to, a random access memory (RAM), a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), and the like. Computer-readable instructions are stored in the memory 530; when the computer-readable instructions are executed by the processor 510, the device 500 can perform the steps involved in the method embodiments of Fig. 3 to Fig. 5 above.
The device 500 may also include a memory controller, an input-output unit, an audio unit and a display unit.
The memory 530, the processor 510, the peripheral interface, the input-output unit, the audio unit and the display unit are electrically connected to one another, directly or indirectly, to realize data transmission or interaction; for example, these elements may be electrically connected to one another through one or more communication buses 540. The processor 510 is configured to execute the executable modules stored in the memory 530, such as the software function modules or computer programs included in the device 500.
The input-output unit is used to provide the user with input data and to realize interaction between the user and the server (or local terminal); it may be, but is not limited to, a mouse, a keyboard and the like.
The audio unit provides an audio interface to the user and may include one or more microphones, one or more loudspeakers and an audio circuit.
The display unit provides an interactive interface (for example a user interface) between the electronic device and the user, or is used to display image data to the user. In this embodiment, the display unit may be a liquid crystal display or a touch display. If it is a touch display, it may be a capacitive or resistive touch screen supporting single-point and multi-point touch operations; supporting single-point and multi-point touch operations means that the touch display can sense touch operations generated simultaneously at one or more positions on the display and hand the sensed touch operations to the processor for calculation and processing. The display unit may also display the composite image obtained by the processor 510 performing the steps of Fig. 3 to Fig. 5, and may also display the judgment result of whether a route in a region to be checked has hidden dangers.
It can be understood that the structure shown in Fig. 7 is only illustrative; the device 500 may include more or fewer components than shown in Fig. 7, or have a configuration different from that shown in Fig. 7. The components shown in Fig. 7 may be implemented in hardware, software or a combination thereof.
The present application also provides a computer-readable storage medium on which a computer program is stored; when the computer program is run by a processor, the method described in the method embodiments is performed.
The present application also provides a computer program product; when the computer program product runs on a computer, the computer is caused to perform the method described in the method embodiments.
Referring to Fig. 9, Fig. 9 shows a comparison between the prediction method provided by the embodiments of the present application and prediction of the time series using only the ARIMA model. In the figure, the solid line is the true time series of the bandwidth utilization of an enterprise over one month, the line with diamond-shaped points is the fitted sequence obtained with the ARIMA model alone, and the line with square points is the fitted sequence obtained with the prediction method provided by the embodiments of the present application. As shown in Fig. 9, the average relative error of the fitted sequence obtained with the prediction method provided by the embodiments of the present application is 0.21089, while that of the fitted sequence obtained with the ARIMA model alone is 0.27608. Therefore, planning bandwidth utilization based on the prediction method provided by the embodiments of the present application can predict the enterprise's bandwidth utilization in the next stage more accurately.
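For reference, the average relative error quoted here is commonly computed as below; the exact definition used for Fig. 9 is not stated, so this is an assumption:

    import numpy as np

    def average_relative_error(fitted, true_values):
        # mean of |fitted - true| / |true| over all moments in the comparison period
        fitted = np.asarray(fitted, dtype=float)
        true_values = np.asarray(true_values, dtype=float)
        return float(np.mean(np.abs(fitted - true_values) / np.abs(true_values)))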
It is apparent to those skilled in the art that, for convenience and brevity of description, the specific working process of the system described above may refer to the corresponding process in the foregoing method and is not repeated here.
It should be noted that the embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and the same or similar parts of the embodiments may be referred to each other. Since the apparatus embodiments are substantially similar to the method embodiments, their description is relatively simple, and for relevant details reference may be made to the description of the method embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may also be implemented in other ways. The apparatus embodiments described above are merely illustrative. For example, the flowcharts and block diagrams in the drawings show the possible architectures, functions and operations of the apparatus, method and computer program product according to multiple embodiments of the present application. Each block in a flowchart or block diagram may represent a module, a program segment or a part of code, which contains one or more executable instructions for implementing the specified logical function. It should also be noted that, in some alternative implementations, the functions marked in the blocks may occur in an order different from that marked in the drawings; for example, two consecutive blocks may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks therein, may be implemented by a dedicated hardware-based system that performs the specified functions or actions, or by a combination of dedicated hardware and computer instructions.
In addition, the functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
If the functions are implemented in the form of software function modules and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device or the like) to perform all or some of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc. It should be noted that, in this document, relational terms such as first and second are used only to distinguish one entity or operation from another and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise" or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the presence of other identical elements in the process, method, article or device that includes that element.
The foregoing is merely a preferred embodiment of the present application and is not intended to limit the present application; for those skilled in the art, various modifications and changes may be made to the present application. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present application shall be included within the protection scope of the present application. It should also be noted that similar reference numerals and letters denote similar items in the following drawings; therefore, once an item is defined in one drawing, it does not need to be further defined and explained in subsequent drawings.
The above is merely a specific implementation of the present application, but the protection scope of the present application is not limited thereto; any person skilled in the art can readily think of changes or substitutions within the technical scope disclosed by the present application, and all of them shall be covered within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (13)

1. A prediction method, characterized in that it is applied to a computing device, and the method comprises:
processing a stationarized historical time series with a time-series prediction model to obtain a fitted time series and primary predicted values, wherein the fitted time series is the time series formed by the fitted values at the moments where at least part of the values of the historical time series are located;
calculating the difference between the historical time series and the fitted time series to obtain a first residual sequence;
processing the first residual sequence with a memory network model to obtain secondary predicted values;
determining final predicted values from the primary predicted values and the secondary predicted values.
2. The method according to claim 1, characterized in that:
the processing of the stationarized historical time series with a time-series prediction model to obtain a fitted time series and primary predicted values comprises:
processing the stationarized historical time series with an autoregressive integrated moving average (ARIMA) model to obtain the fitted time series and primary predicted values corresponding to the historical time series, wherein the ARIMA model is trained from a training time series;
the processing of the first residual sequence with a memory network model to obtain secondary predicted values comprises:
processing the first residual sequence with a gated recurrent unit (GRU) model to obtain the secondary predicted values, wherein the GRU model is trained from a preset residual sequence, and the preset residual sequence corresponds to the fitted sequence obtained after the ARIMA model processes the training time series.
3. The method according to claim 2, characterized in that, before processing the stationarized historical time series with the autoregressive integrated moving average (ARIMA) model to obtain the fitted time series and primary predicted values corresponding to the historical time series, the method further comprises:
applying d rounds of differencing to an original time series to obtain the stationarized historical time series, where d is a parameter of the ARIMA model whose value is the number of differencing operations performed to make the training time series stationary, d taking a positive integer.
4. The method according to claim 3, characterized in that determining the final predicted values from the primary predicted values and the secondary predicted values comprises:
performing a summation operation on the primary predicted values and the secondary predicted values to obtain summed predicted values;
applying inverse differencing to the summed predicted values to obtain the final predicted values.
5. The method according to claim 2, characterized in that the training process of the ARIMA model comprises:
determining the parameters d, p, q of the ARIMA model from the training time series serving as a training sample, where p is the autoregressive order and q is the moving-average order;
applying d rounds of differencing to the training time series to obtain a training stationary sequence;
substituting the training stationary sequence into the ARIMA model to obtain an expression of a first predicted time sequence;
removing the disturbance terms from the expression of the first predicted time sequence to obtain a function composed of data quantities with parameters to be estimated;
determining an expression of the difference between the training stationary sequence and the function;
determining, when the expression of the difference meets a preset first constraint condition, the solution of the parameters to be estimated in the function, and obtaining the ARIMA model from the solution.
6. The method according to any one of claims 2 to 5, characterized in that the training process of the GRU model comprises:
calculating the difference between the training stationary sequence obtained from the training time series after d rounds of differencing and the predicted time sequence obtained from the training time series after processing by the ARIMA model, to obtain a second residual sequence, where d is a parameter of the ARIMA model whose value is the number of differencing operations performed to make the training time series stationary, d taking a positive integer;
processing the second residual sequence with an initial GRU model to obtain a second forecasting sequence;
calculating a loss value between the second forecasting sequence and the second residual sequence;
if the loss value does not meet a preset second constraint condition, adjusting the weight parameters and bias parameters of the initial GRU model until the loss value of the adjusted GRU model meets the second constraint condition, to obtain the GRU model.
7. The method according to claim 6, characterized in that calculating the loss value between the second forecasting sequence and the second residual sequence comprises:
calculating a mean squared error from the plurality of training predicted values in the second forecasting sequence and the plurality of residual values in the second residual sequence, the mean squared error being the loss value.
8. The method according to claim 7, characterized in that, if the loss value does not meet the preset second constraint condition, adjusting the weight parameters and bias parameters of the initial GRU model until the loss value of the adjusted GRU model meets the second constraint condition, to obtain the GRU model, comprises:
if the mean squared error exceeds a preset range, adjusting with gradient descent the numerical values of the x-component Wxz and h-component Whz of the update gate weight, the x-component Wxr and h-component Whr of the reset gate weight, and the r-component Wrh and x-component Wxh of the candidate hidden state weight in the GRU model, and re-executing the processing of the second residual sequence with the initial GRU model to obtain the second forecasting sequence; and once the mean squared error is within the preset range, determining the numerical values of Wxz, Whz, Wxr, Whr, Wrh and Wxh in the GRU model as the final parameters of the GRU model.
9. The method according to claim 7, characterized in that calculating the mean squared error from the plurality of training predicted values in the second forecasting sequence and the plurality of residual values in the second residual sequence comprises:
for each of the N training predicted values in the second forecasting sequence, calculating the square of the difference between the training predicted value and the residual value at the same moment in the second residual sequence;
calculating the average of the N squared differences to obtain the mean squared error, where N is the sample size of the second forecasting sequence.
10. The method according to claim 6, characterized in that processing the second residual sequence with the initial GRU model to obtain the second forecasting sequence comprises:
calculating the update gate output at moment tn based on the residual value at moment tn, the hidden state at moment tn-1, the update gate weight parameters and the update gate bias bf;
calculating the reset gate output rtn at moment tn based on the residual value at moment tn, the hidden state at moment tn-1, the reset gate weight parameters and the reset gate bias br;
calculating the candidate hidden state at moment tn based on the reset gate output, the hidden state at moment tn-1, the residual value at moment tn, the candidate hidden state weight parameters and the candidate hidden state bias bh;
calculating the hidden state at moment tn based on the hidden state at moment tn-1, the candidate hidden state at moment tn and the update gate output at moment tn;
calculating the training predicted value for moment tn+1 based on the hidden state at moment tn, the weight Wpre of the fully connected neural network layer and the bias bpre of the fully connected neural network layer.
11. A prediction apparatus, characterized in that the apparatus comprises:
a primary predicted value obtaining module, configured to process a stationarized historical time series with a time-series prediction model to obtain a fitted time series and primary predicted values, wherein the fitted time series is the time series formed by the fitted values at the moments where at least part of the values of the historical time series are located;
a first residual sequence module, configured to calculate the difference between the historical time series and the fitted time series to obtain a first residual sequence;
a secondary predicted value obtaining module, configured to process the first residual sequence with a memory network model to obtain secondary predicted values;
a final predicted value calculation module, configured to determine final predicted values from the primary predicted values and the secondary predicted values.
12. An electronic device, characterized by comprising a processor, a storage medium and a bus, wherein the storage medium stores machine-readable instructions executable by the processor; when the electronic device runs, the processor and the storage medium communicate via the bus, and the processor executes the machine-readable instructions to perform the steps of the method according to any one of claims 1 to 10.
13. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, and when the computer program is run by a processor, the steps of the method according to any one of claims 1 to 10 are performed.