CN114793197A - Network resource configuration method, device, equipment and storage medium based on NFV - Google Patents

Network resource configuration method, device, equipment and storage medium based on NFV Download PDF

Info

Publication number
CN114793197A
CN114793197A (application CN202210319949.2A; granted as CN114793197B)
Authority
CN
China
Prior art keywords
traffic
time series
NFV
data
Prior art date
Legal status (assumed, not a legal conclusion): Granted
Application number
CN202210319949.2A
Other languages
Chinese (zh)
Other versions
CN114793197B (en)
Inventor
杜翠凤
房小兆
韩娜
胡妍
Current Assignee
Guangdong University of Technology
GCI Science and Technology Co Ltd
Guangdong Polytechnic Normal University
Original Assignee
Guangdong University of Technology
GCI Science and Technology Co Ltd
Guangdong Polytechnic Normal University
Priority date
Filing date
Publication date
Application filed by Guangdong University of Technology, GCI Science and Technology Co Ltd, and Guangdong Polytechnic Normal University
Priority: CN202210319949.2A
Publication of CN114793197A
Application granted; publication of CN114793197B
Legal status: Active

Classifications

    • H04L 41/00: Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/0823: Configuration setting characterised by the purposes of a change of settings, e.g. optimising configuration for enhancing reliability
    • H04L 41/145: Network analysis or design involving simulating, designing, planning or modelling of a network
    • H04L 41/147: Network analysis or design for predicting network behaviour
    • H04L 43/0876: Network utilisation, e.g. volume of load or congestion level
    • G06N 3/044: Recurrent networks, e.g. Hopfield networks (computing arrangements based on biological models)
    • G06N 3/08: Learning methods (neural networks)

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Environmental & Geological Engineering (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention discloses an NFV-based network resource configuration method, apparatus, device, and storage medium. The method comprises: collecting first traffic time-series data of an NFV network element on the current server; preprocessing the first traffic time-series data with a variational autoencoder to obtain second traffic time-series data; obtaining third traffic time-series data from an autoregressive model, a multi-hypothesis prediction residual reconstruction algorithm, and the second traffic time-series data; training a pre-built long short-term memory (LSTM) neural network on the third traffic time-series data, and using the trained network to predict the network element's traffic over a preset future period to obtain a traffic forecast; and configuring the NFV network element of the current server according to the traffic forecast and a preset traffic safety value. The invention can effectively reduce the server's traffic load and the migration frequency of VNF network elements, thereby reducing the computing and storage resources they occupy.

Description

Network resource configuration method, device, equipment and storage medium based on NFV
Technical Field
The present invention relates to the field of wireless mobile communication technologies, and in particular to an NFV-based network resource configuration method and apparatus, a terminal device, and a computer-readable storage medium.
Background
With the rapid development of communication technology, traditional network functions such as firewalls, load balancers, and routers are packaged as Virtual Network Functions (VNFs) under the concept of Network Function Virtualization (NFV) and run in virtual machines (VMs) on commodity hardware. VNFs can be installed on standard x86 servers and are monitored and controlled by a hypervisor (also known as a virtual machine monitor). NFV accelerates service deployment, allows network services to be managed flexibly, and simplifies and speeds up the addition of new network functions or services.
However, most current research allocates network resources through service-driven dynamic engineering: resources are allocated based only on a future network load predicted by a simple method. Under this scheme, once the system's network traffic overloads, the system initiates data redirection for rapid data migration in order to preserve service reliability, which causes frequent migration of VNFs and occupies large amounts of computing and storage resources.
Disclosure of Invention
Embodiments of the present invention provide an NFV-based network resource configuration method and apparatus, a terminal device, and a computer-readable storage medium, which can effectively reduce server traffic load and the migration frequency of VNF network elements, thereby reducing the computing and storage resources they occupy.
An embodiment of the invention provides an NFV-based network resource configuration method comprising the following steps:
collecting first traffic time-series data of an NFV network element on the current server, the first traffic time-series data comprising the network element's traffic values at different times;
preprocessing the first traffic time-series data with a variational autoencoder to obtain second traffic time-series data;
obtaining third traffic time-series data from an autoregressive model, a multi-hypothesis prediction residual reconstruction algorithm, and the second traffic time-series data;
training a pre-built long short-term memory (LSTM) neural network on the third traffic time-series data to obtain a trained LSTM network;
predicting, with the trained LSTM network, the NFV network element's traffic over a preset future period to obtain its traffic forecast;
and configuring the NFV network element of the current server according to the traffic forecast and a preset traffic safety value.
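The steps above can be sketched as a single pipeline. This is a hedged illustration: `run_pipeline` and its injected stage callables are hypothetical names, not part of the patent; each stage would be backed by the VAE, AR/residual, and LSTM components described later in the specification.

```python
def run_pipeline(raw_series, preprocess, reconstruct, train, predict, safety_value):
    """Chain the claimed steps; each stage is an injected callable (hypothetical API)."""
    second = preprocess(raw_series)    # step 2: VAE denoising -> second series
    third = reconstruct(second)        # step 3: AR model + residual reconstruction
    model = train(third)               # step 4: train the LSTM on the third series
    forecast = predict(model, third)   # step 5: traffic forecast for a future period
    # step 6: configure - migrate the element when the forecast exceeds the safety value
    return "migrate" if forecast - safety_value > 0 else "keep"
```

With identity stages and a max-based predictor, a forecast above the safety value yields "migrate", otherwise "keep".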
As an improvement of the above scheme, preprocessing the first traffic time-series data with the variational autoencoder to obtain the second traffic time-series data comprises:
extracting features from the first traffic time-series data with the variational autoencoder's encoder to obtain a latent-variable data set;
and decoding the latent-variable data set with the variational autoencoder's decoder to obtain the second traffic time-series data.
As an improvement of the above scheme, obtaining the third traffic time-series data from the autoregressive model, the multi-hypothesis prediction residual reconstruction algorithm, and the second traffic time-series data comprises:
predicting traffic values at different times from the autoregressive model and the second traffic time-series data to obtain predicted traffic time-series data;
obtaining traffic-residual time-series data from the second traffic time-series data and the predicted traffic time-series data;
reconstructing the traffic residuals with the multi-hypothesis prediction residual reconstruction algorithm to obtain reconstructed traffic-residual time-series data;
and obtaining the third traffic time-series data from the reconstructed traffic-residual time-series data and the predicted traffic time-series data.
As an improvement of the above scheme, predicting the traffic values at different times from the autoregressive model and the second traffic time-series data comprises predicting the traffic values according to the following formulas:

\hat{x}_t = \beta_1 \hat{x}^{AR}_{t-1} + \beta_2 \hat{x}^{AR}_{t+1}

\hat{x}^{AR}_{\tau} = \sum_{i=1}^{p} \alpha_i x_{\tau-i} + w + \varepsilon_{\tau}

where X = \{x_1, x_2, \ldots\} is the second traffic time-series data, x_t is the traffic value at time t, p is the order of the autoregressive model, w is the server's traffic-usage feature, \alpha_i are the coefficient terms, \varepsilon_{\tau} is white noise, \beta_1 is the first weight, \beta_2 is the second weight, \hat{x}_t is the predicted traffic value at time t, and \hat{X} = \{\hat{x}_1, \hat{x}_2, \ldots\} is the predicted traffic time-series data.
As an improvement of the above scheme, configuring the NFV network element of the current server according to the traffic forecast and the preset traffic safety value is specifically:
calculating the difference between the traffic forecast and the preset traffic safety value and, when the difference is greater than 0, migrating the NFV network element from the current server to a server that can satisfy the network element's traffic demand.
Accordingly, another embodiment of the present invention provides an NFV-based network resource configuration apparatus comprising:
a data acquisition module for collecting first traffic time-series data of an NFV network element on the current server, the first traffic time-series data comprising the network element's traffic values at different times;
a data preprocessing module for preprocessing the first traffic time-series data with a variational autoencoder to obtain second traffic time-series data;
a data reconstruction module for obtaining third traffic time-series data from an autoregressive model, a multi-hypothesis prediction residual reconstruction algorithm, and the second traffic time-series data;
a model training module for training a pre-built long short-term memory (LSTM) neural network on the third traffic time-series data to obtain a trained LSTM network;
a traffic prediction module for predicting, with the trained LSTM network, the NFV network element's traffic over a preset future period to obtain its traffic forecast;
and a resource configuration module for configuring the NFV network element of the current server according to the traffic forecast and a preset traffic safety value.
As an improvement of the above scheme, the data preprocessing module comprises:
an encoding unit for extracting features from the first traffic time-series data with the variational autoencoder's encoder to obtain a latent-variable data set;
and a decoding unit for decoding the latent-variable data set with the variational autoencoder's decoder to obtain the second traffic time-series data.
As an improvement of the above scheme, the data reconstruction module comprises:
a traffic prediction unit for predicting traffic values at different times from the autoregressive model and the second traffic time-series data to obtain predicted traffic time-series data;
a residual calculation unit for obtaining traffic-residual time-series data from the second traffic time-series data and the predicted traffic time-series data;
a residual reconstruction unit for reconstructing the traffic residuals with the multi-hypothesis prediction residual reconstruction algorithm to obtain reconstructed traffic-residual time-series data;
and a traffic reconstruction unit for obtaining the third traffic time-series data from the reconstructed traffic-residual time-series data and the predicted traffic time-series data.
Another embodiment of the present invention provides a terminal device comprising a processor, a memory, and a computer program stored in the memory and configured to be executed by the processor; when executing the computer program, the processor implements any of the NFV-based network resource configuration methods described above.
Another embodiment of the present invention provides a computer-readable storage medium comprising a stored computer program; when the computer program runs, the device on which the storage medium resides is controlled to execute any of the NFV-based network resource configuration methods described above.
Compared with the prior art, the invention has the following beneficial effects. First, the collected first traffic time-series data of the current server's NFV network element is preprocessed by a variational autoencoder, which smooths fuzzy, inconsistent, and noisy traffic data and improves data quality. Second, the second traffic time-series data output by the variational autoencoder is corrected and iterated several times with an autoregressive model and a multi-hypothesis prediction residual reconstruction algorithm to form reasonable, effective third traffic time-series data. Third, a pre-built long short-term memory (LSTM) neural network is trained on the third traffic time-series data, and the trained network automatically predicts the NFV network element's traffic over a preset future period to obtain its traffic forecast. Finally, the accuracy and reasonableness of the current traffic distribution is judged from the traffic forecast and a preset traffic safety value, and the current server's NFV network elements are automatically reconfigured: elements carrying heavy traffic loads are migrated off the current server. This effectively reduces the server's traffic load and the VNF migration frequency caused by dynamically changing service requests, thereby reducing the computing and storage resources occupied, improving system performance, and lowering users' service latency.
Drawings
Fig. 1 is a schematic flowchart of a method for configuring network resources based on NFV according to an embodiment of the present invention;
fig. 2 is a block diagram of a configuration apparatus for network resources based on NFV according to an embodiment of the present invention;
fig. 3 is a block diagram of a terminal device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without inventive step based on the embodiments of the present invention, are within the scope of protection of the present invention.
Referring to fig. 1, fig. 1 is a schematic flowchart of a network resource configuration method based on NFV according to an embodiment of the present invention.
The NFV-based network resource configuration method provided by this embodiment comprises the following steps:
S11: collect first traffic time-series data of the NFV network element on the current server; the first traffic time-series data comprises the network element's traffic values at different times.
S12: preprocess the first traffic time-series data with a variational autoencoder to obtain second traffic time-series data.
S13: obtain third traffic time-series data from an autoregressive model, a multi-hypothesis prediction residual reconstruction algorithm, and the second traffic time-series data.
S14: train a pre-built long short-term memory (LSTM) neural network on the third traffic time-series data to obtain a trained LSTM network.
S15: predict, with the trained LSTM network, the NFV network element's traffic over a preset future period to obtain its traffic forecast.
S16: configure the NFV network element of the current server according to the traffic forecast and a preset traffic safety value.
Preferably, in step S11, the first traffic time-series data of the current server's NFV network element is collected over X consecutive samplings.
Specifically, in step S12, preprocessing the first traffic time-series data with the variational autoencoder to obtain the second traffic time-series data comprises:
extracting features from the first traffic time-series data with the variational autoencoder's encoder to obtain a latent-variable data set;
and decoding the latent-variable data set with the variational autoencoder's decoder to obtain the second traffic time-series data.
It should be noted that the collected traffic data is assumed to be randomly generated and to involve an unobserved, continuous, random latent variable z. A traffic sample is generated in two stages: on one hand, a continuous random latent variable z^{(i)} is sampled from the prior distribution p_θ(z); on the other hand, a traffic sample x^{(i)} is generated by sampling from the conditional distribution p_θ(x|z). Because this generation process is hidden and hard to recover, a posterior distribution q_φ(z|x) is constructed to approximate the true posterior p_θ(z|x), with z ~ q(z|x) = N(μ, σ²), z = μ + σε, ε ~ N(0, 1), where N(μ, σ²) is the normal distribution with mean μ and variance σ².
It can be understood that the posterior q_φ(z|x) corresponds to the encoder network of the variational autoencoder and is a multivariate Gaussian. A first neural network f_1 and a second neural network f_2 estimate, respectively, the mean μ and variance σ² of the latent distribution for the collected first traffic time-series data: μ^{(i)} = f_1(x^{(i)}), log σ^{2(i)} = f_2(x^{(i)}). The latent variable is then sampled from this mean and variance with the reparameterization trick: z^{(i,l)} ~ q_φ(z|x^{(i)}), z^{(i,l)} = μ^{(i)} + σ^{(i)} ∘ ε^{(l)}, where ε^{(l)} is the Gaussian noise of the l-th sample. The decoder network realizes the conditional distribution p_θ(x|z), reconstructing the latent variable into the original data to obtain the reconstructed data, i.e., the second traffic time-series data. The prior p_θ(z) is a multivariate standard Gaussian, p_θ(z) ~ N(z; 0, I), so p_θ(x^{(i)}|z) ~ N(f(z^{(i)}), cI), where f is a third neural network, c is a constant greater than zero, and I is the covariance of the Gaussian.
Specifically, the posterior q_φ(z|x) satisfies:

log q_φ(z|x^{(i)}) = log N(z; μ^{(i)}, σ^{2(i)} I)

where x^{(i)} is the i-th traffic sample in the first traffic time-series data, z is the latent variable, μ^{(i)} and σ^{2(i)} are the mean and variance of the latent distribution for the i-th sample, I is the covariance of the Gaussian, and i is the sample index.
Specifically, the variational autoencoder's loss function L(θ, φ; x^{(i)}) is:

L(θ, φ; x^{(i)}) = -D_KL(q_φ(z|x^{(i)}) ‖ p_θ(z)) + E_{q_φ(z|x^{(i)})}[log p_θ(x^{(i)}|z)]

where q_φ(z|x^{(i)}) is the posterior distribution, p_θ(z) is the prior distribution, p_θ(x|z) is the conditional distribution, φ parameterizes the posterior, θ parameterizes the prior and likelihood, D_KL(q_φ(z|x^{(i)}) ‖ p_θ(z)) is the K-L divergence, and E_{q_φ(z|x^{(i)})}[log p_θ(x^{(i)}|z)] is the reconstruction term. The K-L divergence measures how closely the posterior q_φ(z|x) matches the prior p_θ(z); the reconstruction term drives the generated data to be as close as possible to the original input.
As one optional embodiment, step S13 comprises:
predicting traffic values at different times from the autoregressive model and the second traffic time-series data to obtain predicted traffic time-series data;
obtaining traffic-residual time-series data from the second traffic time-series data and the predicted traffic time-series data;
reconstructing the traffic residuals with the multi-hypothesis prediction residual reconstruction algorithm to obtain reconstructed traffic-residual time-series data;
and obtaining the third traffic time-series data from the reconstructed traffic-residual time-series data and the predicted traffic time-series data.
In some preferred embodiments, predicting the traffic values at different times from the autoregressive model and the second traffic time-series data comprises predicting the traffic values according to the following formulas:

\hat{x}_t = \beta_1 \hat{x}^{AR}_{t-1} + \beta_2 \hat{x}^{AR}_{t+1}

\hat{x}^{AR}_{\tau} = \sum_{i=1}^{p} \alpha_i x_{\tau-i} + w + \varepsilon_{\tau}

where X = \{x_1, x_2, \ldots\} is the second traffic time-series data, x_t is the traffic value at time t, p is the order of the autoregressive model, w is the server's traffic-usage feature, \alpha_i are the coefficient terms, \varepsilon_{\tau} is white noise, \beta_1 is the first weight, \beta_2 is the second weight, \hat{x}_t is the predicted traffic value at time t, and \hat{X} = \{\hat{x}_1, \hat{x}_2, \ldots\} is the predicted traffic time-series data.
It can be understood that predicting traffic values at different times from the autoregressive model and the second traffic time-series data essentially means predicting the traffic at times t-1 and t+1 by autoregression and then combining those two predictions to predict the traffic value at time t. Autoregression is a statistical method for processing time series in which a variable's future trend is predicted from its relationship to its own earlier values: the model regresses on the traffic values at the p preceding times, where p is the order of the autoregressive model, written AR(p).
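A minimal AR(p) fit via least squares can illustrate the idea. This is a hedged sketch: it regresses each value on its p predecessors plus an intercept (standing in for the usage feature w), and the β-weighted combination mirrors the weighted formula above; all function names are illustrative, not the patent's implementation.

```python
import numpy as np

def fit_ar(series, p):
    """Least-squares AR(p): x_t ~= a_1 x_{t-1} + ... + a_p x_{t-p} + c."""
    x = np.asarray(series, dtype=float)
    lags = np.array([x[t - p:t][::-1] for t in range(p, len(x))])  # x_{t-1}..x_{t-p}
    X = np.column_stack([lags, np.ones(len(lags))])                # intercept column
    coef, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
    return coef                                                    # (a_1..a_p, c)

def predict_next(series, coef):
    """One-step-ahead prediction from the fitted coefficients."""
    p = len(coef) - 1
    recent = np.asarray(series, dtype=float)[-p:][::-1]
    return float(recent @ coef[:-1] + coef[-1])

def combine(pred_prev, pred_next, beta1=0.5, beta2=0.5):
    """Weighted combination of the t-1 and t+1 predictions, per the formula above."""
    return beta1 * pred_prev + beta2 * pred_next
```

On the linear series 1..6 with p = 1 the fit recovers x_t = x_{t-1} + 1 and predicts 7 for the next value.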
Specifically, obtaining the traffic-residual time-series data from the second traffic time-series data and the predicted traffic time-series data is specifically:
calculating, at each time, the difference between the second traffic time-series data and the predicted traffic time-series data to obtain the traffic-residual time-series data.
In mathematical statistics, a residual is the difference between an actually observed value and an estimated (fitted) value; here, a traffic residual is the difference between an observed traffic value in the second traffic time-series data and the corresponding predicted value in the predicted traffic time-series data.
In a specific embodiment, obtaining the third traffic time-series data from the reconstructed traffic-residual time-series data and the predicted traffic time-series data is specifically:
calculating, at each time, the sum of the reconstructed traffic-residual time-series data and the predicted traffic time-series data to obtain the third traffic time-series data.
It can be understood that the sum of the reconstructed traffic residual at time t and the predicted traffic value at time t is taken as the traffic value at time t, yielding the third traffic time-series data.
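The residual step can be illustrated as follows. Note that the neighbour-averaging in `reconstruct_residuals` is only a hedged stand-in for the patent's multi-hypothesis prediction residual reconstruction algorithm, which the text does not fully specify; the element-wise sum in `third_series` follows the description directly.

```python
import numpy as np

def residuals(observed, predicted):
    """Traffic residual at each time: observed (second series) minus predicted."""
    return np.asarray(observed, dtype=float) - np.asarray(predicted, dtype=float)

def reconstruct_residuals(r, window=1):
    """Stand-in smoothing: average each residual with its neighbours."""
    r = np.asarray(r, dtype=float)
    out = np.empty_like(r)
    for t in range(len(r)):
        lo, hi = max(0, t - window), min(len(r), t + window + 1)
        out[t] = r[lo:hi].mean()
    return out

def third_series(predicted, reconstructed):
    """Third traffic series: predicted value plus reconstructed residual, per time step."""
    return np.asarray(predicted, dtype=float) + np.asarray(reconstructed, dtype=float)
```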
It is worth noting that, compared with other neural network algorithms, the long short-term memory (LSTM) neural network suits time-order-sensitive problems and tasks: it has a memory effect, can learn and store information over long horizons, and avoids the vanishing-gradient problem caused by gradients shrinking during backpropagation. The invention uses a stateful LSTM network to store the server's load at previous times and to generate future predictions from it. First, the pre-built LSTM is trained: its forget, input, and output gates are trained on historical data, the cell-state weight matrices and bias terms are computed, and backpropagated errors steadily reduce the prediction error, so that the LSTM can predict the server load at a given time relatively accurately.
In step S14, when training the pre-built LSTM on the third traffic time-series data, the data must be split into a training set and a test set at a preset ratio; preferably, 80% of the third traffic time-series data is used for training and the remaining 20% for testing.
Specifically, the long short-term memory neural network comprises a forget gate, an input gate, and an output gate:

forget gate: f_t = σ(W_f · [H_{t-1}, X_t] + b_f)
input gate: I_t = σ(W_I · [H_{t-1}, X_t] + b_I)
output gate: O = σ(W_o · [H_{t-1}, X_t] + b_O)
input node: G_t = tanh(W_G · [H_{t-1}, X_t] + b_G)
cell state: C_t = f_t ∘ C_{t-1} + I_t ∘ G_t

where tanh is the hyperbolic-tangent activation, σ is the sigmoid activation, H_{t-1} is the server load at the previous time, X_t is the current server behaviour, [H_{t-1}, X_t] denotes concatenating the two vectors into one longer vector, W_f and b_f are the forget gate's weight and offset, W_I and b_I are the input gate's sigmoid-layer weight and offset, W_G and b_G are the input gate's tanh-layer weight and offset, W_o and b_o are the output gate's sigmoid-layer weight and offset, and C_{t-1} is the cell state at time t-1.
Preferably, the NFV network element's traffic forecast H_t is: H_t = O ∘ tanh(C_t).
It should be noted that the forget gate determines how much of the previous cell state is kept at the current time, while the input gate determines how much of the current network input enters the cell state. The LSTM determines the current reconstructed-data input through two activation layers: a sigmoid (input gate) layer and a tanh (hyperbolic-tangent) layer. The input gate layer outputs I_t with values near 1 or 0, indicating which components to update, and the tanh layer outputs the candidate cell state G_t. During training, the current input state and the previous cell state are combined into a new cell state: thanks to the forget gate's control, information from long ago can be retained, and thanks to the input gate's control, currently irrelevant content is kept out of memory. Finally, the output gate predicts the server's load from the current cell state, with the sigmoid and tanh layers determining which parts of the cell state are emitted, yielding the server's traffic prediction for a future period.
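One update of these gate equations can be written directly in NumPy. This is an illustrative single-cell step; the weight layout and dict keys are hedged authoring choices, not the patent's code, and G_t is the candidate (input-node) state.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def lstm_step(H_prev, X_t, C_prev, W, b):
    """One LSTM cell update following the forget/input/output gate equations."""
    z = np.concatenate([H_prev, X_t])      # [H_{t-1}, X_t]
    f_t = sigmoid(W["f"] @ z + b["f"])     # forget gate
    I_t = sigmoid(W["I"] @ z + b["I"])     # input gate
    G_t = np.tanh(W["G"] @ z + b["G"])     # input node (candidate state)
    O_t = sigmoid(W["o"] @ z + b["o"])     # output gate
    C_t = f_t * C_prev + I_t * G_t         # new cell state
    H_t = O_t * np.tanh(C_t)               # new hidden state (load prediction)
    return H_t, C_t
```

With all-zero weights each gate outputs 0.5 and the candidate is 0, so the cell state halves at every step, a quick way to check the wiring.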
Further, in step S16, the configuring, according to the predicted traffic value and a preset traffic safety value, the NFV network element of the server at present is specifically:
calculating a difference between the predicted flow value and the preset flow safety value, and, when the difference is greater than 0, migrating the NFV network element from the current server to a server that meets the traffic demand of the network element.
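The difference-and-migrate rule above can be sketched as follows (the server names, the headroom representation, and the first-fit choice of target are assumptions; a real deployment would query the NFV orchestrator for candidate capacity):

```python
def plan_migration(predicted, safety, candidates):
    """If the predicted traffic exceeds the safety value, pick a target server
    whose headroom covers the network element's predicted demand."""
    if predicted - safety <= 0:
        return None                        # current server can keep the element
    for name, headroom in candidates:      # candidates: (server name, spare capacity)
        if headroom >= predicted:
            return name                    # first server meeting the demand
    return None                            # no suitable target found

# an overloaded element moves to s2; a safe one stays put
print(plan_migration(120.0, 100.0, [("s1", 90.0), ("s2", 150.0)]))  # s2
print(plan_migration(80.0, 100.0, [("s1", 90.0)]))                  # None
```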
It is worth noting that, based on the historical access traffic of each server, the invention can automatically and intelligently predict and dynamically allocate server traffic load, ensuring reasonable traffic allocation, reducing operation and maintenance difficulty, and avoiding resource waste. In terms of algorithm implementation, to address characteristics of server access traffic such as self-similarity, diversity, and continuity, a variational autoencoder is first designed to preprocess the data, and the traffic residual is corrected and iterated multiple times using an autoregressive model and a multi-hypothesis prediction residual reconstruction algorithm to form a historical traffic data set for each server. Then, a long short-term memory (LSTM) neural network takes the server's historical traffic usage as input and automatically predicts the server's future load. Meanwhile, to improve the accuracy and reasonableness of traffic allocation, the predicted traffic value is automatically compared against the preset traffic safety value, so that NFV network elements with larger traffic loads are migrated. This effectively reduces the server's traffic load, reduces the migration frequency of VNF network elements caused by dynamic changes in service requests, lowers the occupation of computing and storage resources, improves system performance, and reduces the service delay experienced by users.
Fig. 2 is a block diagram of a network resource configuration device based on NFV according to an embodiment of the present invention.
The network resource configuration device based on NFV provided by the embodiment of the invention comprises:
the data acquisition module 21 is configured to acquire first traffic timing data of the NFV network element of the current server; wherein the first traffic timing data includes traffic values of the NFV network element at different times;
the data preprocessing module 22 is configured to preprocess the first flow time sequence data through a variational self-encoder to obtain second flow time sequence data;
the data reconstruction module 23 is configured to obtain third flow time series data based on an autoregressive model, a multi-hypothesis prediction residual reconstruction algorithm, and the second flow time series data;
the model training module 24 is configured to train a pre-constructed long-short term memory neural network through the third flow time series data to obtain a trained long-short term memory neural network;
a traffic prediction module 25, configured to predict, through the trained long-and-short-term memory neural network, a traffic value of the NFV network element in a preset time duration in the future, so as to obtain a traffic prediction value of the NFV network element;
and a resource configuration module 26, configured to configure the NFV network element of the server currently according to the traffic predicted value and a preset traffic safety value.
As an improvement of the above solution, the data preprocessing module 22 includes:
the encoding unit is used for performing feature extraction on the first flow time sequence data through an encoder of the variational self-encoder to obtain a hidden variable data set;
and the decoding unit is used for decoding the hidden variable data set through a decoder of the variational self-encoder to obtain second flow time sequence data.
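A minimal sketch of the encode/decode path of the two units above may look as follows (plain linear maps stand in for the encoder and decoder networks; the dimensions and random weights are illustrative, and no training loop or KL term is shown):

```python
import numpy as np

rng = np.random.default_rng(1)

class TinyVAE:
    """Sketch of a variational autoencoder: encoder -> (mu, logvar),
    reparameterize to sample a latent variable, decoder -> smoothed series."""
    def __init__(self, d_in, d_latent):
        self.W_mu  = rng.standard_normal((d_latent, d_in)) * 0.1
        self.W_lv  = rng.standard_normal((d_latent, d_in)) * 0.1
        self.W_dec = rng.standard_normal((d_in, d_latent)) * 0.1

    def encode(self, x):                   # feature extraction -> hidden variables
        return self.W_mu @ x, self.W_lv @ x

    def reparameterize(self, mu, logvar):  # sample z = mu + sigma * eps
        eps = rng.standard_normal(mu.shape)
        return mu + np.exp(0.5 * logvar) * eps

    def decode(self, z):                   # hidden variable set -> second series
        return self.W_dec @ z

vae = TinyVAE(d_in=8, d_latent=3)
x = rng.standard_normal(8)                 # one window of first flow time series
mu, lv = vae.encode(x)
x_hat = vae.decode(vae.reparameterize(mu, lv))
print(x_hat.shape)                         # same length as the input window
```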
As one optional implementation, the data reconstructing module 23 includes:
the flow prediction unit is used for predicting flow values at different moments based on an autoregressive model and the second flow time sequence data to obtain predicted flow time sequence data;
the residual error calculation unit is used for obtaining flow residual error time sequence data according to the second flow time sequence data and the predicted flow time sequence data;
the residual error reconstruction unit is used for carrying out residual error reconstruction on the flow residual error time sequence data based on a multi-hypothesis prediction residual error reconstruction algorithm to obtain reconstructed flow residual error time sequence data;
and the flow reconstruction unit is used for obtaining third flow time sequence data according to the reconstructed flow residual error time sequence data and the predicted flow time sequence data.
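The four units above can be sketched end to end as follows (a moving-average smoother stands in for the multi-hypothesis prediction residual reconstruction algorithm, whose internals the text does not fully specify; the data values are illustrative):

```python
import numpy as np

def reconstruct_third_series(second, ar_pred, smooth_window=3):
    """Residual pipeline: residual = observed - predicted, reconstruct the
    residual (here via moving average as a stand-in), then recombine."""
    residual = second - ar_pred                        # residual calculation unit
    kernel = np.ones(smooth_window) / smooth_window
    recon_residual = np.convolve(residual, kernel, mode="same")  # reconstruction
    return ar_pred + recon_residual                    # flow reconstruction unit

second  = np.array([10.0, 12.0, 11.0, 13.0, 12.0])    # second flow time series
ar_pred = np.array([ 9.5, 11.8, 11.2, 12.6, 12.1])    # AR-predicted series
third = reconstruct_third_series(second, ar_pred)
print(third.shape)                                    # third flow time series
```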
Preferably, the flow prediction unit is specifically configured to:
predicting the flow values at different moments according to the following formula to obtain predicted flow time sequence data:
x_t^w = α_0 + Σ_{i=1}^{p} α_i · x_{t-i}^w + δ_t^w

x̂_t^w = β_1 · x_t^w + β_2 · (α_0 + Σ_{i=1}^{p} α_i · x_{t-i}^w)

where {x_1^w, …, x_t^w} is the second flow time sequence data, x_t^w is the flow value at time t, p is the order of the autoregressive model, w is the flow usage characteristic of the server, α is the coefficient term, δ_t^w is white noise, β_1 is the first weight, β_2 is the second weight, x̂_t^w is the predicted flow value at time t, and {x̂_1^w, …, x̂_t^w} is the predicted flow time sequence data.
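Under this autoregressive formulation, a one-step prediction might be computed as follows (the coefficient values, weights, and the blend of the latest observation with the AR estimate are illustrative assumptions, not fitted parameters):

```python
def ar_predict(series, coeffs, alpha0):
    """AR(p) one-step estimate: alpha0 + sum_i alpha_i * x_{t-i}."""
    p = len(coeffs)
    return alpha0 + sum(coeffs[i] * series[-(i + 1)] for i in range(p))

def blended_prediction(series, coeffs, alpha0, beta1, beta2):
    """Weighted blend of the latest observation and the AR estimate."""
    return beta1 * series[-1] + beta2 * ar_predict(series, coeffs, alpha0)

series = [10.0, 12.0, 11.0, 13.0]          # second (denoised) flow time series
coeffs = [0.6, 0.3]                        # AR(2) coefficients, illustrative
pred = blended_prediction(series, coeffs, alpha0=0.5, beta1=0.4, beta2=0.6)
print(round(pred, 3))                      # → 12.16
```

Here the AR estimate is 0.5 + 0.6·13 + 0.3·11 = 11.6, and the blend 0.4·13 + 0.6·11.6 gives 12.16.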
As one preferred embodiment, the resource configuration module 26 is specifically configured to:
calculate a difference between the predicted flow value and the preset flow safety value, and, when the difference is greater than 0, migrate the NFV network element from the current server to a server that meets the traffic demand of the network element.
It should be noted that, for the specific description and the beneficial effects related to each embodiment of the NFV-based network resource configuration apparatus in this embodiment, reference may be made to the specific description and the beneficial effects related to each embodiment of the NFV-based network resource configuration method described above, and details are not described here again.
Fig. 3 is a block diagram of a terminal device according to an embodiment of the present invention.
A terminal device provided in an embodiment of the present invention includes a processor 10, a memory 20, and a computer program stored in the memory 20 and configured to be executed by the processor 10, where when the processor 10 executes the computer program, the NFV-based network resource configuration method according to any one of the above embodiments is implemented.
The processor 10 implements the steps in the above embodiment of the NFV-based network resource allocation method when executing the computer program, for example, all the steps of the NFV-based network resource allocation method shown in fig. 1. Alternatively, the processor 10, when executing the computer program, implements functions of each module/unit in the above-mentioned NFV-based network resource configuration apparatus embodiment, for example, functions of each module of the NFV-based network resource configuration apparatus shown in fig. 2.
Illustratively, the computer program may be partitioned into one or more modules that are stored in the memory 20 and executed by the processor 10 to implement the present invention. The one or more modules may be a series of computer program instruction segments capable of performing specific functions, which are used for describing the execution process of the computer program in the terminal device.
The terminal device may be a desktop computer, a notebook, a palmtop computer, a cloud server, or another computing device. The terminal device may include, but is not limited to, the processor 10 and the memory 20. Those skilled in the art will appreciate that the schematic diagram is merely an example of a terminal device and does not constitute a limitation; a terminal device may include more or fewer components than those shown, combine certain components, or use different components; for example, it may also include input/output devices, network access devices, buses, and the like.
The Processor 10 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic, discrete hardware components, etc. The general-purpose processor may be a microprocessor or the processor may be any conventional processor or the like, and the processor 10 is the control center of the terminal device and connects the various parts of the whole terminal device by various interfaces and lines.
The memory 20 can be used to store the computer programs and/or modules, and the processor 10 implements various functions of the terminal device by running or executing the computer programs and/or modules stored in the memory 20 and calling data stored in the memory 20. The memory 20 may mainly include a program storage area and a data storage area: the program storage area may store an operating system, an application program required for at least one function, and the like; the data storage area may store data created according to the use of the terminal device, and the like. In addition, the memory may include high-speed random access memory, and may also include non-volatile memory, such as a hard disk, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a Flash Card, at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage device.
Wherein, if the integrated module/unit of the terminal device is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium. Based on such understanding, all or part of the flow of the method according to the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium; when the computer program is executed by a processor, the steps of the method embodiments described above may be implemented. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic diskette, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like.
It should be noted that the above-described device embodiments are merely illustrative, where the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. In addition, in the drawings of the embodiment of the apparatus provided by the present invention, the connection relationship between the modules indicates that there is a communication connection between them, and may be specifically implemented as one or more communication buses or signal lines. One of ordinary skill in the art can understand and implement it without inventive effort.
Accordingly, an embodiment of the present invention further provides a computer-readable storage medium, where the computer-readable storage medium includes a stored computer program; when running, the computer program controls a device where the computer readable storage medium is located to execute the NFV-based network resource configuration method according to any of the above embodiments.
To sum up, according to the network resource configuration method, apparatus, terminal device, and computer-readable storage medium provided in the embodiments of the present invention: first, the variational autoencoder preprocesses the acquired first traffic time series data of the NFV network element of the current server, smoothing fuzzy, inconsistent, and noisy traffic data and improving data quality; second, the second traffic time series data output by the variational autoencoder is corrected and iterated multiple times based on an autoregressive model and a multi-hypothesis prediction residual reconstruction algorithm, forming reasonable and effective third traffic time series data; then, a pre-constructed long short-term memory neural network is trained on the third traffic time series data, and the trained network automatically predicts the traffic value of the NFV network element over a preset future duration to obtain its traffic prediction value; finally, the accuracy and reasonableness of the current traffic allocation is judged from the traffic prediction value and a preset traffic safety value, and the NFV network element of the current server is automatically optimized, so that network elements with larger traffic loads are migrated off the current server. This effectively reduces the server's traffic load and the migration frequency of VNF network elements caused by dynamic changes in service requests, thereby lowering the occupation of computing and storage resources, improving system performance, and reducing the service delay experienced by users.
While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention.

Claims (10)

1. A network resource configuration method based on NFV is characterized by comprising the following steps:
collecting first flow time sequence data of a NFV network element of a current server; wherein the first traffic timing data includes traffic values of the NFV network element at different times;
preprocessing the first flow time sequence data through a variational self-encoder to obtain second flow time sequence data;
obtaining third flow time sequence data based on an autoregressive model, a multi-hypothesis prediction residual error reconstruction algorithm and the second flow time sequence data;
training a pre-constructed long-short term memory neural network through the third flow time sequence data to obtain a trained long-short term memory neural network;
predicting the flow value of the NFV network element within a preset time in the future through the trained long-term and short-term memory neural network to obtain a flow predicted value of the NFV network element;
and configuring the NFV network element of the current server according to the flow predicted value and a preset flow safety value.
2. The NFV-based network resource configuration method of claim 1, wherein the preprocessing the first traffic timing data by a variational self-encoder to obtain second traffic timing data comprises:
performing feature extraction on the first flow time sequence data through an encoder of the variational self-encoder to obtain a hidden variable data set;
and decoding the implicit variable data set through a decoder of the variational self-encoder to obtain second flow time series data.
3. The NFV-based network resource configuration method of claim 1, wherein the obtaining a third flow time series data based on the autoregressive model, the multi-hypothesis prediction residual reconstruction algorithm, and the second flow time series data comprises:
predicting flow values at different moments based on an autoregressive model and the second flow time sequence data to obtain predicted flow time sequence data;
obtaining flow residual error time sequence data according to the second flow time sequence data and the predicted flow time sequence data;
performing residual error reconstruction on the flow residual error time sequence data based on a multi-hypothesis prediction residual error reconstruction algorithm to obtain reconstructed flow residual error time sequence data;
and obtaining third flow time sequence data according to the reconstructed flow residual error time sequence data and the predicted flow time sequence data.
4. The NFV-based network resource configuration method of claim 3, wherein the predicting the flow values at different time based on the autoregressive model and the second flow time series data to obtain predicted flow time series data comprises:
predicting the flow values at different moments according to the following formula to obtain predicted flow time sequence data:
x_t^w = α_0 + Σ_{i=1}^{p} α_i · x_{t-i}^w + δ_t^w

x̂_t^w = β_1 · x_t^w + β_2 · (α_0 + Σ_{i=1}^{p} α_i · x_{t-i}^w)

wherein {x_1^w, …, x_t^w} is the second flow time sequence data, x_t^w is the flow value at time t, p is the order of the autoregressive model, w is the flow usage characteristic of the server, α is the coefficient term, δ_t^w is white noise, β_1 is the first weight, β_2 is the second weight, x̂_t^w is the predicted flow value at time t, and {x̂_1^w, …, x̂_t^w} is the predicted flow time sequence data.
5. The method for configuring network resources based on NFV according to claim 1, wherein the configuring, according to the predicted traffic value and a preset traffic safety value, the NFV network element of the current server is specifically:
and calculating a difference value between the predicted flow value and a preset flow safety value, and when the difference value is greater than 0, migrating the NFV network element from the current server to a server meeting the flow requirement of the NFV network element.
6. An NFV-based network resource configuration apparatus, comprising:
the data acquisition module is used for acquiring first flow time sequence data of the NFV network element of the current server; the first flow time sequence data comprises flow values of the NFV network element at different moments;
the data preprocessing module is used for preprocessing the first flow time sequence data through a variational self-encoder to obtain second flow time sequence data;
the data reconstruction module is used for obtaining third flow time sequence data based on an autoregressive model, a multi-hypothesis prediction residual error reconstruction algorithm and the second flow time sequence data;
the model training module is used for training a pre-constructed long-short term memory neural network through the third flow time sequence data to obtain a trained long-short term memory neural network;
the traffic prediction module is used for predicting the traffic value of the NFV network element within a preset future time length through the trained long-term and short-term memory neural network to obtain a traffic prediction value of the NFV network element;
and the resource configuration module is used for configuring the NFV network element of the current server according to the flow predicted value and a preset flow safety value.
7. The NFV-based network resource configuration device of claim 6, wherein the data preprocessing module comprises:
the coding unit is used for performing feature extraction on the first flow time sequence data through a coder of the variational self-coder to obtain a hidden variable data set;
and the decoding unit is used for decoding the implicit variable data set through a decoder of the variational self-encoder to obtain second flow time series data.
8. The NFV-based network resource configuration device of claim 6, wherein the data reconstruction module comprises:
the flow prediction unit is used for predicting flow values at different moments based on an autoregressive model and the second flow time sequence data to obtain predicted flow time sequence data;
the residual error calculation unit is used for obtaining flow residual error time sequence data according to the second flow time sequence data and the predicted flow time sequence data;
the residual error reconstruction unit is used for performing residual error reconstruction on the flow residual error time sequence data based on a multi-hypothesis prediction residual error reconstruction algorithm to obtain reconstructed flow residual error time sequence data;
and the flow reconstruction unit is used for obtaining third flow time sequence data according to the reconstructed flow residual error time sequence data and the predicted flow time sequence data.
9. A terminal device comprising a processor, a memory, and a computer program stored in the memory and configured to be executed by the processor, the processor when executing the computer program implementing the NFV-based network resource configuration method according to any one of claims 1 to 5.
10. A computer-readable storage medium, comprising a stored computer program, wherein the computer program, when running, controls an apparatus in the computer-readable storage medium to perform the NFV-based network resource configuration method according to any one of claims 1 to 5.
CN202210319949.2A 2022-03-29 2022-03-29 Network resource allocation method, device, equipment and storage medium based on NFV Active CN114793197B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210319949.2A CN114793197B (en) 2022-03-29 2022-03-29 Network resource allocation method, device, equipment and storage medium based on NFV

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210319949.2A CN114793197B (en) 2022-03-29 2022-03-29 Network resource allocation method, device, equipment and storage medium based on NFV

Publications (2)

Publication Number Publication Date
CN114793197A true CN114793197A (en) 2022-07-26
CN114793197B CN114793197B (en) 2023-09-19

Family

ID=82462424

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210319949.2A Active CN114793197B (en) 2022-03-29 2022-03-29 Network resource allocation method, device, equipment and storage medium based on NFV

Country Status (1)

Country Link
CN (1) CN114793197B (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103747477A (en) * 2014-01-15 2014-04-23 广州杰赛科技股份有限公司 Network flow analysis and prediction method and device
US20180137412A1 (en) * 2016-11-16 2018-05-17 Cisco Technology, Inc. Network traffic prediction using long short term memory neural networks
CN109379240A (en) * 2018-12-25 2019-02-22 湖北亿咖通科技有限公司 Car networking flux prediction model construction method, device and electronic equipment
WO2021103135A1 (en) * 2019-11-25 2021-06-03 中国科学院深圳先进技术研究院 Deep neural network-based traffic classification method and system, and electronic device
CN114006826A (en) * 2022-01-04 2022-02-01 南京信息工程大学 Network traffic prediction method fusing traffic characteristics
CN114077492A (en) * 2020-08-18 2022-02-22 中国电信股份有限公司 Prediction model training and prediction method and system for cloud computing infrastructure resources


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115514646A (en) * 2022-10-10 2022-12-23 广东工业大学 Network slice configuration method based on flow analysis and related device
CN115514646B (en) * 2022-10-10 2024-09-24 广东工业大学 Network slice configuration method and related device based on flow analysis

Also Published As

Publication number Publication date
CN114793197B (en) 2023-09-19

Similar Documents

Publication Publication Date Title
CN112000459B (en) Method for expanding and shrinking capacity of service and related equipment
Zhu et al. A novel approach to workload prediction using attention-based LSTM encoder-decoder network in cloud environment
US11562223B2 (en) Deep reinforcement learning for workflow optimization
Minarolli et al. Tackling uncertainty in long-term predictions for host overload and underload detection in cloud computing
US20170168992A9 (en) Techniques to provide significance for statistical tests
CN106933649B (en) Virtual machine load prediction method and system based on moving average and neural network
US20190324822A1 (en) Deep Reinforcement Learning for Workflow Optimization Using Provenance-Based Simulation
US10853718B2 (en) Predicting time-to-finish of a workflow using deep neural network with biangular activation functions
Dogani et al. Multivariate workload and resource prediction in cloud computing using CNN and GRU by attention mechanism
KR20190124846A (en) The design of GRU-based cell structure robust to missing value and noise of time-series data in recurrent neural network
US20210342691A1 (en) System and method for neural time series preprocessing
CN113485833B (en) Resource prediction method and device
CN109542585B (en) Virtual machine workload prediction method supporting irregular time intervals
CN114793197B (en) Network resource allocation method, device, equipment and storage medium based on NFV
US20210103795A1 (en) Intelligent selection of time series models
CN115913967A (en) Micro-service elastic scaling method based on resource demand prediction in cloud environment
Nhu et al. Dynamic network slice scaling assisted by attention-based prediction in 5g core network
US20210042661A1 (en) Workload modeling for cloud systems
Balis et al. Execution management and efficient resource provisioning for flood decision support
Kidane et al. When and how to retrain machine learning-based cloud management systems
Bensalem et al. Benchmarking various ML solutions in complex intent-based network management systems
US20210357781A1 (en) Efficient techniques for determining the best data imputation algorithms
CN114237962A (en) Alarm root cause judgment method, model training method, device, equipment and medium
Hirayama et al. Sparse regression model-based relearning architecture for shortening learning time in traffic prediction
US20210357794A1 (en) Determining the best data imputation algorithms

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant