CN114070708B - Virtual network function resource consumption prediction method based on flow characteristic extraction - Google Patents

Virtual network function resource consumption prediction method based on flow characteristic extraction

Info

Publication number
CN114070708B
CN114070708B CN202111371502.1A CN202111371502A
Authority
CN
China
Prior art keywords
node
model
feature
cpu
representing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111371502.1A
Other languages
Chinese (zh)
Other versions
CN114070708A (en)
Inventor
苏畅
谭娅
谢显中
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University of Post and Telecommunications
Original Assignee
Chongqing University of Post and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University of Post and Telecommunications filed Critical Chongqing University of Post and Telecommunications
Priority to CN202111371502.1A priority Critical patent/CN114070708B/en
Publication of CN114070708A publication Critical patent/CN114070708A/en
Application granted granted Critical
Publication of CN114070708B publication Critical patent/CN114070708B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/14 Network analysis or design
    • H04L41/145 Network analysis or design involving simulating, designing, planning or modelling of a network
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G06N20/10 Machine learning using kernel methods, e.g. support vector machines [SVM]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/047 Probabilistic or stochastic networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N7/00 Computing arrangements based on specific mathematical models
    • G06N7/01 Probabilistic graphical models, e.g. probabilistic networks
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/04 Network management architectures or arrangements
    • H04L41/044 Network management architectures or arrangements comprising hierarchical management structures
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/14 Network analysis or design
    • H04L41/147 Network analysis or design for predicting network behaviour
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Algebra (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention relates to a virtual network function resource consumption prediction method based on traffic feature extraction, belonging to the field of mobile communication. The method comprises the following steps. S1: using the correlations between traffic features, different meta-paths are established between the traffic features and the CPU of the VNF, thereby constructing an HIN. S2: the feature representation of each traffic feature is obtained with the HIN2Vec model. S3: the importance of each feature is measured with an attention mechanism, a different weight is assigned to each feature, and the weighted features are input into an MLP model to predict the resource consumption of the VNF. The invention performs well and outperforms traditional machine learning models and common deep learning models in prediction accuracy.

Description

Virtual network function resource consumption prediction method based on flow characteristic extraction
Technical Field
The invention belongs to the field of mobile communication, and relates to a virtual network function resource consumption prediction method based on flow characteristic extraction.
Background
The trend in networking is to use artificial intelligence techniques to control and operate networks, and software-defined networking and network function virtualization play a very important role in this process. In future networks, edge computing, cloud computing and the network itself will need to cooperate more closely so that resource utilization can be optimized. In 5G communication networks, network services are closer to user demand, so personalized services can be provided to users. Representative service capabilities of 5G networks include network slicing and edge computing. Network slicing is a key application of Network Function Virtualization (NFV) in the 5G phase and can flexibly provide one or more network services according to the needs of the slice requester. In 5G networks, dedicated telecommunication devices can be implemented in software through NFV technology, with the various network element functions realized as VNFs deployed on general-purpose commercial servers.
Resource management in a network function virtualization scenario is a complex problem: the resource requirements of each VNF change with dynamic traffic, so each individual VNF must be allocated the corresponding resources. Because resources are limited, it is particularly important to find an effective resource management method. Moreover, in an NFV environment the CPU requirement of each VNF varies with dynamic traffic, and under-allocating or over-allocating resources to a single VNF may degrade the performance of the whole service chain. Furthermore, VNFs exhibit very complex, non-linear behaviour in the network, which is difficult to model with standard tools. Finding an effective resource prediction method therefore becomes key to improving the service performance of the whole network. Based on the predicted future resource demand, virtual machines can be proactively and strategically reserved, VNFs deployed, or service function chains constructed, which is of great significance in NFV environments and 5G network slices. In addition, resource hotspots in the network can be discovered in advance so that offloading or migration can be performed.
In research on the resource consumption of a single VNF, researchers have explored the feasibility of applying different models and machine learning techniques to modelling complex network elements such as virtual network functions, and have successfully applied them to VNF resource management. However, most previous work focused only on the influence of a single feature on VNF resource consumption and did not consider the relationships between features or the joint influence of multiple features on resource consumption. The resource allocation of a single VNF can affect the performance of a service function chain in the network, so studying a single VNF is very necessary. In previous work, we described this problem as a regression problem. In contrast, the literature "Zaman, Zakia, Sabidur Rahman, and Mahmuda Naznin, 'Novel Approaches for VNF Requirement Prediction Using DNN and LSTM,' 2019 IEEE Global Communications Conference (GLOBECOM), IEEE, 2019" describes the resource requirements of VNFs as a classification problem and predicts the number of VNFs. On the basis of exploring the traffic features, it may be more efficient to predict the performance range of a single VNF, as this helps to locate the overall range of VNF resource consumption under different traffic fluctuations.
In addition, many NFV scenarios contain multiple types of objects and different types of features. Compared with the widely used homogeneous information networks, heterogeneous information networks can model the real world more comprehensively and are less prone to information loss.
The literature "Dr is an advanced, sevil, and Holger karl," SPRING: scaling, displacement, and routing of heterogeneous services with flexible architecture, "2019IEEE Conference on Network Softwarization (NetSoft)," IEEE,2019, "uses solutions in the areas of network software and virtualized resource management and coordination, improving interoperability of heterogeneous, multi-domain, multi-technology infrastructures. Heterogeneous Information Networks (HIN) are capable of modeling objects with rich information through an explicit network structure, and HIN has received a great deal of attention in recent years by virtue of this strong capability.
Real network data is becoming more and more complex, and we are facing more powerful and flexible heterogeneous networks. In this regard, meta-paths can effectively capture the subtle semantics between objects, so many researchers employ meta-path-based mining tasks. The literature "Yin, Ying, et al., 'Heterogeneous Network Representation Learning Method Based on Meta-path,' 2019 IEEE 4th International Conference on Cloud Computing and Big Data Analysis (ICCCBDA), IEEE, 2019" proposes a new heterogeneous network learning method that obtains weights between nodes of the same type in a heterogeneous information network based on an attention mechanism. In order to capture the rich semantics hidden in an HIN, the literature "Fu, Tao-yang, Wang-Chien Lee, and Zhen Lei, 'HIN2Vec: Explore meta-paths in heterogeneous information networks for representation learning,' Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, 2017" uses the different types of relationships between nodes to capture the rich information in the HIN and proposes a new framework for heterogeneous information network representation learning, which works well in multi-label node classification and link prediction. While this model can capture rich relational semantics and details of the network structure, it still falls short in capturing the importance of each node and meta-path.
In heterogeneous information networks, relatedness measures between different types of objects also become very important, because the similarity between different types of objects is not only meaningful but also useful in certain scenarios. Therefore, on the basis of a heterogeneous information network, studying the interrelations between features and considering the influence of the hidden relationships between features on a single VNF can improve the accuracy with which a model predicts VNF resource consumption.
Disclosure of Invention
In view of the above, the present invention aims to provide a virtual network function resource consumption prediction method based on traffic feature extraction which, on the basis of a heterogeneous information network, considers not only the influence of the hidden relations between features on a single VNF but also the interrelations between the features, and which improves the accuracy of model prediction of VNF resource consumption by extracting traffic features.
In order to achieve the above purpose, the present invention provides the following technical solutions:
a virtual network function resource consumption prediction method based on flow characteristic extraction models VNF resource consumption by using a machine learning method, and learns an accurate prediction model directly from actual VNF performance data through a hidden relation among flow characteristics, so that resources required by each VNF for processing specific flow load can be accurately predicted. The method specifically comprises the following steps:
s1: establishing different meta paths between the traffic characteristics and the CPU of the VNF by correlation between the traffic characteristics, thereby constructing a heterogeneous information network (Heterogeneous information network, HIN);
s2: acquiring feature representation of each flow feature by using an HIN2Vec model;
s3: the importance of each feature is measured by using an attention mechanism, different weights are allocated to each feature, and the weights are input into a multi-layer perceptron (Multilayer perceptron, MLP) model to predict the resource consumption of the VNF.
Further, in step S1, constructing a heterogeneous information network specifically includes: and constructing information about nodes and edges in the graph for each relation by utilizing the correlation among the flow characteristics, and constructing a heterogeneous information network graph by traversing the relation in turn.
Further, in step S2, each node in the constructed heterogeneous information network graph represents one of the features, and the vector representation of each node is finally learned with the HIN2Vec model. The HIN2Vec model is used to extract node features from the graph; during path generation, only nodes of the same type as the first node are retained and appended to the path. Subject to the given path length, a path is generated for each node in the node set, and the feature representation of the corresponding node is then obtained by a stochastic gradient descent algorithm.
Further, in step S2, the vector representation of each node, i.e. the feature representation of each node, is obtained with the HIN2Vec model, specifically as follows: for each node x in the node set X of the heterogeneous information network graph, a k-dimensional feature vector W_x describing that node is randomly generated, and from each path d ∈ D the quadruple <x1, x2, r, Is(x1, x2, r)> is extracted as training data, where x1 and x2 are any two adjacent nodes on path d, r (r ∈ R) denotes the relation between the two nodes, and Is(x1, x2, r) indicates whether x1 and x2 have the relation r, taking the value 1 if the relation exists and 0 otherwise, as shown in formula (1):
Is(x1, x2, r) = 1 if x1 and x2 have relation r, and 0 otherwise   (1)
After the quadruples have been extracted, the feature vector of each node is updated. The specific method is as follows:
Establish the target loss function:
g(x1, x2, r) = Is(x1, x2, r) · log p(r | x1, x2)   (2)
t(x1, x2, r) = (1 − Is(x1, x2, r)) · log[1 − p(r | x1, x2)]   (3)
F(x1, x2, r) = g(x1, x2, r) + t(x1, x2, r)   (4)
where g(x1, x2, r) is the positive-sample term, p(r | x1, x2) is the probability that x1 and x2 have the relation r, t(x1, x2, r) is the negative-sample term, and F(x1, x2, r) is the target loss function;
the objective function F is maximized with the stochastic gradient descent (SGD) algorithm, and the node features W_x ∈ W are updated iteratively with the training data according to equations (5)-(7), where W'_{x1} denotes the weight matrix of node x1, W_{x1} denotes the representation of node x1, α, β and ω denote the step sizes in the gradient descent algorithm, and W'_r denotes the weight matrix of the target relation r. After the feature representations of the node set have been obtained, all nodes representing the traffic features U and the CPUs I are taken out, and the latent relation feature x_{ui} between a traffic feature and the CPU is computed as shown in formula (8), in which the operator denotes matrix multiplication, β_u denotes the weights of the different meta-paths of the traffic feature, β_i denotes the weights of the different meta-paths of the CPU, T denotes the transpose, x_i^m denotes the feature value of the m-th meta-path of the i-th CPU, and M denotes the set of all meta-paths.
Further, in step S3, the MLP model comprises an input layer, a hidden layer and an output layer, specifically:
First layer (input layer):
h_j = Σ_i w_ij · x_i   (9)
where h_j is the weighted sum of all inputs of the current node, w_ij is the weight, x_i is the input value, and N is the number of samples;
Second layer (hidden layer):
F_i = f(h_i)   (10)
where F_i is the output value of a hidden-layer neuron and f(·) is the activation function; the activation function used is the linear rectification function (Rectified Linear Unit, ReLU). The advantage of using ReLU is that it reduces the interdependence between parameters and thus alleviates over-fitting.
Third layer (output layer): the output is produced by softmax, i.e. f_output = softmax(F);
Finally, the loss function Loss of the MLP prediction model is shown in equation (11):
Loss = −(1/N) · Σ_i y_i · log(f_output_i)   (11)
where N is the number of samples, y_i denotes the true label corresponding to the i-th category, and f_output is the predicted result.
The invention has the beneficial effects that:
1) By using the HIN to mine the hidden relations among the traffic features, the prediction method of the invention further studies the influence of the traffic features on the CPU performance of the VNF.
2) For VNF resource consumption prediction, the invention partitions the CPU values into intervals, converting the task into a classification problem, and predicts with the MLP model. In addition, the invention considers the relation between the features and the CPU and adopts an attention mechanism in the model to measure the importance of each feature. Compared with traditional machine learning models and deep learning models, the method achieves better results.
3) The invention simulates the traffic load fluctuation of the VNF through traffic playback, mainly to analyse the nonlinear relation between the traffic features and VNF resource consumption, and pre-processes the traffic features using the correlations between features before feature extraction, thereby improving the prediction accuracy of the model.
4) The invention fully considers the relation between the traffic features and the CPU, and experiments combining different meta-paths and different data sets show that the method is more flexible and effective in the selection and use of traffic features.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention. The objects and other advantages of the invention may be realized and obtained by means of the instrumentalities and combinations particularly pointed out in the specification.
Drawings
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in the following preferred detail with reference to the accompanying drawings, in which:
fig. 1 is a schematic diagram of resource consumption of a VNF;
FIG. 2 is a flow chart of a method for predicting virtual network function resource consumption based on flow feature extraction according to the present invention;
FIG. 3 is a schematic illustration of an experimental device deployment;
FIG. 4 is a graph of correlation effects between features in an embodiment;
FIG. 5 is a graph of correlation effects between selected features and a CPU in an embodiment;
FIG. 6 is a graph of the effect of correlation between other selected features and the CPU in an embodiment;
FIG. 7 is a diagram showing the result of different meta-paths in an embodiment;
FIG. 8 is a diagram of the result of combining different meta-paths with the same numbering in an embodiment;
FIG. 9 is a graph showing the result of the presence or absence of the attention mechanism in the example.
Detailed Description
Other advantages and effects of the present invention will become apparent to those skilled in the art from the following disclosure, which describes the embodiments of the present invention with reference to specific examples. The invention may also be practised or applied through other, different embodiments, and the details of this description may be modified or varied in various ways without departing from the spirit and scope of the present invention. It should be noted that the illustrations provided in the following embodiments merely illustrate the basic idea of the invention by way of example, and the following embodiments and the features in the embodiments may be combined with each other as long as they do not conflict.
Referring to fig. 1 to 9, the present invention provides a VNF resource consumption prediction method (VNF-RPHIN) based on flow feature extraction, which uses a Heterogeneous Information Network (HIN) and a multi-layer perceptron (MLP), and specifically includes: firstly, constructing a heterogeneous information network through correlation among traffic characteristics; secondly, acquiring feature representation of each flow feature by using an HIN2Vec model; finally, the importance of each feature is measured by using an attention mechanism, different weights are distributed to the features, and the features are input into an MLP model; and predicting the resource consumption of the VNF by mining the hidden relation among the flow characteristics.
Wherein, the prediction model based on HIN is:
meta-path is a sequence that contains relationships that are defined between different nodes. In the present invention, we apply the idea of such meta-paths to the performance consumption prediction of VNFs. As shown in fig. 1, the present invention establishes different meta paths between traffic characteristics and the CPU of the VNF according to the similarity between the characteristics, thereby constructing a heterogeneous information network.
Meta-path is mainly directed to similarity searching in heterogeneous information networks, which can represent semantic relationships between different features, which plays an important role in heterogeneous information networks. The invention mainly uses this idea to mine the hidden relations between the flow characteristics. The present invention thus defines the meta-paths as shown in table 1 based on the correlation between the different flow characteristics.
TABLE 1 Meta Path
The flow of the invention is shown in FIG. 2. In the feature extraction module, a heterogeneous information network graph is constructed from the relations between the features. Specifically, using the correlations between traffic features, a piece of node-and-edge information is constructed in the graph for each relation, and the heterogeneous information network graph is built by traversing the relations in turn. Each node in the constructed graph represents one of the features, and the vector representation of each node is finally learned with the HIN2Vec model. HIN2Vec is used to extract node features from the graph; during path generation, only nodes of the same type as the first node are retained and appended to the path. Subject to the given path length, a path is generated for each node in the node set, and the feature representation of the corresponding node is then obtained by a stochastic gradient descent algorithm.
The method comprises the following specific steps:
(1) constructing a heterogeneous information network (graph) through the relation (correlation) among the features;
(2) traversing the whole graph by random walk according to the meta-paths defined above to obtain the node paths of the graph;
(3) and obtaining the characteristic representation of each node by using the HIN2Vec model.
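The following is a minimal Python sketch of steps (1) and (2): it builds a graph whose nodes are the traffic features plus the CPU, adds an edge for every sufficiently correlated pair, and samples a walk that follows a given meta-path such as P1 = "AT-CPU-AT". The correlation values, the reuse of the 0.89 threshold from the feature-selection step, and the choice of networkx are assumptions for illustration, not the patented implementation.

```python
import random
import networkx as nx

def build_hin(correlations, threshold=0.89):
    """correlations: dict mapping (feature_a, feature_b) -> correlation coefficient."""
    g = nx.Graph()
    for (a, b), rho in correlations.items():
        if abs(rho) >= threshold:              # keep only strongly related pairs
            g.add_edge(a, b, weight=abs(rho))
    return g

def walk_meta_path(g, meta_path):
    """Sample one node path whose sequence of node types follows the meta-path."""
    starts = [n for n in g.nodes if n == meta_path[0]]
    if not starts:
        return []
    path = [random.choice(starts)]
    for wanted in meta_path[1:]:
        candidates = [n for n in g.neighbors(path[-1]) if n == wanted]
        if not candidates:
            return []
        path.append(random.choice(candidates))
    return path

# Hypothetical correlations; in the experiments they come from formula (12).
corr = {("AT", "CPU"): 0.93, ("TB", "CPU"): 0.91, ("AT", "SA"): 0.90, ("SA", "CPU"): 0.89}
g = build_hin(corr)
print(walk_meta_path(g, ["AT", "CPU", "AT"]))   # e.g. ['AT', 'CPU', 'AT']
```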
In the prediction module, according to the multi-path feature representations of the traffic features, an attention mechanism assigns different weights to the traffic features, thereby measuring the importance of each feature. The weighted features are finally fed into an MLP model for prediction. The MLP model constructed by the invention mainly comprises an input layer, a hidden layer and an output layer. The feature representation of each node is the input to the MLP model, and the model's result is ultimately obtained through the softmax layer.
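A minimal sketch of the attention step just described, assuming the per-feature importance score is a dot product between a feature's representation and a learned context vector (the text does not fix this scoring form): a softmax turns the scores into weights, which scale the feature representations before they are concatenated as MLP input.

```python
import numpy as np

def attention_weights(feature_vectors, context):
    """feature_vectors: (n_features, k) array of embeddings; context: (k,) learned vector."""
    scores = feature_vectors @ context          # importance score per feature (assumed form)
    scores = scores - scores.max()              # numerical stability
    exp = np.exp(scores)
    return exp / exp.sum()                      # softmax over the features

def weighted_mlp_input(feature_vectors, context):
    w = attention_weights(feature_vectors, context)
    return (w[:, None] * feature_vectors).reshape(-1)   # flatten weighted features for the MLP
```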
For the VNF resource prediction problem, the main target of the present invention is the CPU. It is sufficient to predict the resource consumption to within a certain range, which is also more efficient when deploying VNFs. Therefore, the present invention partitions the CPU consumption of the VNF into intervals, converting the task into a classification problem. Traffic features are extracted with the HIN, the hidden relations among the traffic features are further mined, and the CPU resources of the VNF are predicted with the MLP model. Specifically:
1) HIN-based feature extraction
In order to derive the relationship between traffic and CPU consumption, the extraction of traffic features is a critical step. HIN, a natural and generic representation of real-world data, is used in many tasks to model complex, heterogeneous data by mining the hidden relationships between objects. The invention therefore adopts the HIN2Vec method for feature extraction; the core of HIN2Vec is a neural network model, through which the vector representation of each node, i.e. the feature representation of each node, can be obtained.
For each node x in the node set X of the heterogeneous information network graph, a k-dimensional feature vector W_x describing that node is randomly generated, and from each path d ∈ D the quadruple <x1, x2, r, Is(x1, x2, r)> is extracted as training data, where x1 and x2 are any two adjacent nodes on path d, r (r ∈ R) denotes the relation between the two nodes, and Is(x1, x2, r) indicates whether x1 and x2 have the relation r, taking the value 1 if the relation exists and 0 otherwise, as shown in formula (1):
Is(x1, x2, r) = 1 if x1 and x2 have relation r, and 0 otherwise   (1)
After the quadruples have been extracted, the feature vector of each node is updated. The specific method is as follows:
Establish the target loss function:
g(x1, x2, r) = Is(x1, x2, r) · log p(r | x1, x2)   (2)
t(x1, x2, r) = (1 − Is(x1, x2, r)) · log[1 − p(r | x1, x2)]   (3)
F(x1, x2, r) = g(x1, x2, r) + t(x1, x2, r)   (4)
where g(x1, x2, r) is the positive-sample term, p(r | x1, x2) is the probability that x1 and x2 have the relation r, t(x1, x2, r) is the negative-sample term, and F(x1, x2, r) is the target loss function;
the objective function F is maximized with the stochastic gradient descent (SGD) algorithm, and the node features W_x ∈ W are updated iteratively with the training data according to equations (5)-(7), where W'_{x1} denotes the weight matrix of node x1, W_{x1} denotes the representation of node x1, α, β and ω denote the step sizes in the gradient descent algorithm, and W'_r denotes the weight matrix of the target relation r. After the feature representations of the node set have been obtained, all nodes representing the traffic features U and the CPUs I are taken out, and the latent relation feature x_{ui} between a traffic feature and the CPU is computed as shown in formula (8), in which the operator denotes matrix multiplication, β_u denotes the weights of the different meta-paths of the traffic feature, β_i denotes the weights of the different meta-paths of the CPU, T denotes the transpose, x_i^m denotes the feature value of the m-th meta-path of the i-th CPU, and M denotes the set of all meta-paths.
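Below is a minimal sketch of the quadruple-based training idea of formulas (1)-(7), not the exact patented update rules: for an adjacent pair (x1, x2) on a sampled path, the label Is(x1, x2, r) says whether relation r holds, and the node and relation vectors are nudged by one stochastic gradient step on the logistic objective F. The embedding size, learning rate and the sigmoid scoring form are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
k = 16                                                        # embedding dimension (assumed)
nodes = ["NP", "AT", "NT", "TB", "SA", "NI4", "CPU"]
relations = ["feature-CPU", "feature-feature"]

W = {v: rng.normal(scale=0.1, size=k) for v in nodes}         # node vectors W_x
W_r = {r: rng.normal(scale=0.1, size=k) for r in relations}   # relation vectors W'_r

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sgd_step(x1, x2, r, label, lr=0.05):
    """One SGD step on F(x1, x2, r) for the quadruple <x1, x2, r, label>."""
    p = sigmoid(np.dot(W[x1] * W[x2], W_r[r]))   # p(r | x1, x2), assumed scoring form
    grad = label - p                              # derivative of the log-likelihood wrt the score
    g1, g2, gr = grad * W[x2] * W_r[r], grad * W[x1] * W_r[r], grad * W[x1] * W[x2]
    W[x1] += lr * g1
    W[x2] += lr * g2
    W_r[r] += lr * gr

sgd_step("AT", "CPU", "feature-CPU", label=1)     # positive quadruple observed on a path
sgd_step("NP", "SA", "feature-feature", label=0)  # negative (sampled) quadruple
```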
2) MLP prediction model
The MLP is also called an Artificial Neural Network (ANN), and may include a plurality of hidden layers in addition to an input layer and an output layer. MLP has a high degree of parallel processing power and nonlinear global effects, which are quite compatible with resource prediction of VNF. From another point of view, the MLP not only has good fault tolerance and associative memory functions, but also has very strong self-adaptation and self-learning functions. For a large amount of VNF performance data, it is advantageous to solve the classification problem by means of a neural network algorithm.
The invention takes the MLP as the VNF resource prediction model: the feature representations of the traffic features are extracted through HIN2Vec and used as the input of the MLP, and an attention mechanism assigns a different weight to each feature before it is fed into the MLP model. The invention mainly uses the Sequential model and its methods in the Keras framework to predict the resource consumption of a single VNF. The Keras framework provides many kinds of layers, including core layers, convolution layers and pooling layers, covering a rich variety of network structures. In general, the invention creates a Sequential model by passing a list of layers to the Sequential constructor. Furthermore, the invention builds a fully connected layer with 64 hidden neurons, and the final output has 7 neurons. The MLP model used in the invention comprises an input layer, a hidden layer and an output layer. The specific steps for constructing the MLP are as follows:
First layer (input layer):
h_j = Σ_i w_ij · x_i   (9)
where h_j is the weighted sum of all inputs of the current node, w_ij is the weight, x_i is the input value, and N is the number of samples;
Second layer (hidden layer):
F_i = f(h_i)   (10)
where F_i is the output value of a hidden-layer neuron and f(·) is the activation function; the activation function used is the linear rectification function (Rectified Linear Unit, ReLU). The advantage of using ReLU is that it reduces the interdependence between parameters and thus alleviates over-fitting.
Third layer (output layer): the output is produced by softmax, i.e. f_output = softmax(F);
Finally, the loss function Loss of the MLP prediction model is shown in equation (11):
Loss = −(1/N) · Σ_i y_i · log(f_output_i)   (11)
where N is the number of samples, y_i denotes the true label corresponding to the i-th category, and f_output is the predicted result.
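A sketch of the MLP just described, using the Keras Sequential API mentioned above: one fully connected hidden layer with 64 ReLU neurons and a 7-way softmax output corresponding to the CPU classes, trained with the cross-entropy loss of equation (11). The input dimension and the Adam optimizer are assumptions not fixed by the text.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_mlp(input_dim):
    model = keras.Sequential([
        layers.Input(shape=(input_dim,)),
        layers.Dense(64, activation="relu"),     # fully connected hidden layer, 64 neurons
        layers.Dense(7, activation="softmax"),   # one output neuron per CPU class (0-6)
    ])
    model.compile(optimizer="adam",                    # optimizer choice is an assumption
                  loss="categorical_crossentropy",     # the loss of equation (11)
                  metrics=["accuracy"])
    return model

# model = build_mlp(input_dim=64)                         # hypothetical input size
# model.fit(X_train, y_train_onehot, epochs=50, batch_size=32)
```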
Verification experiment:
1) Experimental setup
In the experiment deployment module, the flow load fluctuation in the VNF is simulated by using a flow playback mode, and the performance data of the VNF is monitored through the Hypervisor, so that the condition of resource consumption in the flow playback process is obtained.
The experiment models the VNF mainly with real traffic, and the resource consumption of the VNF is estimated by taking the traffic features of the packets in the network as the input of the MLP. The experiment reproduces the processing flow of the VNF in a controlled environment: traffic is processed continuously in the VNF while the average CPU consumption is measured at 20 s intervals, and the traffic features are computed over the same 20 s batches. As shown in FIG. 3, the experiment was implemented in two different real VMs managed by the same ESXi 5.5 host (hypervisor), operated through the VMware vSphere Client, from which the resource consumption data of the VNF (a Router in the present invention) were acquired.
In this embodiment, the experiments are mainly performed with different VMs under a single hypervisor, and the CPU consumption of the whole VNF can be measured through the hypervisor. Traffic-replay methods require that traffic from the real network be captured and recorded in advance and then played back with a dedicated replay tool. For the experimental deployment, the specific steps are as follows:
(1) Data preparation adopts a capture-then-replay strategy: the traffic packets are captured online and then downloaded to an offline environment so that they can be replayed offline with tools such as Tcpreplay or Tcpcopy.
(2) Traffic playback: log in to the ESXi host with the VMware vSphere Client, create two virtual machines on the ESXi host, install a Router in one of them, and install Tcpreplay in the other for traffic playback.
(3) The CPU consumption of the VNF during traffic playback can be observed through the VMware vSphere Client.
Specifically, this embodiment tested the Router VNF with real traffic traces; the total data set collected contains approximately 130,000 data points. The traffic consists of packets from the WIDE backbone [30]. To better verify the model, this example used three different data sets for the experiments, as shown in Table 2, with sizes of 0.12 MB, 0.74 MB and 3 MB and batch counts of 1000, 5831 and 36769, respectively. Data sets of different sizes not only better exhibit the complexity of VNF performance data but also better reflect its distribution. Two 5-minute traffic traces are used, and the traffic is generated in another Ubuntu virtual machine. The traffic is copied in the test environment and replayed through Tcpreplay from the Ubuntu virtual machine to the virtual machine where the Router VNF is deployed.
Table 2 three different data sets
Data sets Dataset size Number of batches
MAWI-0.12MB 0.12MB 1000
MAWI-0.74MB 0.74MB 5831
MAWI-3MB 3MB 36769
2) Flow characteristics
The extraction of flow characteristics is a key problem of the invention. In the field of network research, traffic collection tools and different feature extraction modes are provided for different specific problems. Researchers can use online data sets and data analysis tools to obtain their desired results.
Traffic is represented by a set of features that characterize small time batches from the transport layer to the application layer. In the network traffic model, the traffic features are a set of counters describing the sequence properties of packets, and they fall into two categories: features that change in real time with network behaviour, and features that can be extracted directly from the traffic. For example, some information can be obtained directly from the captured traffic, such as the length of the packets, the number of bytes transmitted, the protocol of the packets, the source and destination ports, and the source and destination addresses. Analysing these basic features gives a more detailed picture of the operational state of the network traffic. The experiment analyses CPU performance, so the choice of traffic features is critical: the experiment needs a set of traffic features that are simple and computationally inexpensive while still contributing to the prediction of CPU consumption.
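A minimal sketch, with an assumed packet schema, of how such counters can be computed: packets are grouped into the same 20 s batches used for the CPU measurements, and simple per-batch features such as packet count, transmitted bytes and TCP packet count are accumulated.

```python
from collections import defaultdict

BATCH_SECONDS = 20

def batch_features(packets):
    """packets: iterable of dicts with 'ts', 'length', 'proto' keys (assumed schema)."""
    batches = defaultdict(list)
    for pkt in packets:
        batches[int(pkt["ts"] // BATCH_SECONDS)].append(pkt)
    rows = []
    for batch_id in sorted(batches):
        pkts = batches[batch_id]
        rows.append({
            "Batch_num": batch_id,
            "Number_of_packets": len(pkts),
            "Transmitted_bytes": sum(p["length"] for p in pkts),
            "Num_tcp": sum(1 for p in pkts if p["proto"] == "TCP"),
        })
    return rows
```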
The correlation is computed with the Pearson coefficient shown in formula (12):
ρ = Σ_i (x_i − x̄)(y_i − ȳ) / [ √(Σ_i (x_i − x̄)²) · √(Σ_i (y_i − ȳ)²) ]   (12)
where ρ is the correlation coefficient, which measures the degree of linear correlation between the variables, and x̄ and ȳ are the means of x and y, respectively.
According to formula (12), the 6 features with the largest influence on the CPU are selected from the features listed in Table 3; the basis for the selection is the correlation between them (as shown in FIG. 4). The experiment also visualizes the finally selected traffic features, as shown in FIG. 5. In particular, the experiment removes redundancy between features based on the correlations between them. The correlation between each selected feature and the CPU is at least 89%, i.e. the traffic features with the greater influence are retained. Table 3 shows all traffic features, and the 6 traffic features finally selected from them were used in the experiments. The finally selected traffic features and their distribution ranges are shown in Table 4.
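A minimal sketch of this correlation-based selection, assuming the per-batch features and the CPU column are held in a pandas DataFrame; the 0.89 threshold matches the text above, while the file and column names are hypothetical.

```python
import pandas as pd

def select_features(df, target="CPU", threshold=0.89):
    """Keep features whose absolute Pearson correlation with the CPU reaches the threshold."""
    corr = df.corr(method="pearson")[target].drop(target)
    return corr[corr.abs() >= threshold].index.tolist()

# df = pd.read_csv("vnf_batches.csv")     # hypothetical per-batch feature file
# selected = select_features(df)          # e.g. ['NP', 'AT', 'NT', 'TB', 'SA', 'NI4']
```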
Table 3 All traffic features
Characteristic Paraphrase
Batch_num Num. of batches
Number_of_packets Num. of all packets
Ave_time Average inter-arrival time
Dif_src_add Num. of different IP source addresses
Dif_pair_add Num. of different pairs of IP source addresses
Num_ipv6 Num. of IPv6 packets
Num_tcp Num. of TCP packets
Dif_tcp_dport Num. of different destination ports in TCP packets
Dif_udp_sport Num. of different source ports in UDP packets
Dif_sip-sport Num. of different pairs ipSrc-PortSrc
Num_syn-tcp Num. of SYN-TCP packets
Num_res-tcp Num. of RES-TCP packets
Ave_len Average length of the packets
Transmitted_bytes Transmitted bytes
Std_Ave_time Std of the inter-arrival time
Dif_dst_add Num. of different IP destination addresses
Num_ipv4 Num. of IPv4 packets
Dif_tcp_sport Num. of different source ports in TCP packets
Dif_dip-dport Num. of different pairs ipDst-PortDst
Num_icmp_ipv4 Num. of the ICMP packets over IPv4
Num_UDP Num. of UDP packets
Dif_udp_dport Num. of different destination ports in UDP packets
Num_fin-tcp Num. of FIN-TCP packets
Table 4 selected flow characteristics and CPU distribution Range
Abbreviation Characteristics Distribution
NP Number_of_packets 3348~26045
AT Ave_time 0.0007633~0.004856492
NT Num_tcp 3142~24573
TB Transmitted_bytes 224154~1885418
SA Std_Ave_time 0.001048531~0.068421383
NI4 Num_ipv4 2996~24912
CPU the CPU of the VNF 12.8~79.4
The performance data of the VNF is generally numerical, but in practice it is difficult to map traffic fluctuations to exact CPU performance values, and such point values are of limited significance for measuring VNF performance. Therefore, the experiment partitions the VNF performance data into ranges; the CPU classes are shown in Table 5. Taking the 3 MB data set as an example, the CPU values are mapped to the intervals 0-6, and the problem is finally described as a multi-class prediction problem.
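A minimal sketch of this range partition: raw CPU readings (12.8-79.4 in Table 4) are mapped to the class labels 0-6. The concrete bin boundaries below are assumptions, since Table 5 is not reproduced here.

```python
import numpy as np

CPU_BINS = [20, 30, 40, 50, 60, 70]      # assumed boundaries giving 7 classes (0-6)

def cpu_to_class(cpu_values):
    """Map raw CPU readings to interval labels 0..6."""
    return np.digitize(cpu_values, CPU_BINS)

# cpu_to_class([15.0, 45.2, 78.9]) -> array([0, 3, 6])
```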
TABLE 5 kinds of CPU division
3) Evaluation index and comparison method
The evaluation indexes of this experiment are mainly two: accuracy and the Area Under the Curve (AUC). Accuracy is the percentage of correct classifications, i.e. the proportion of samples that the model judges correctly among all samples. Accuracy is calculated as shown in equation (13); it reflects how accurately the algorithm classifies the target classes, as the share of correct predictions in the total number of input samples. AUC is the area under the ROC curve, used as an evaluation index for classification models; it represents the probability that a classifier ranks a randomly chosen positive example higher than a randomly chosen negative example. Equation (14) gives a formula for the AUC of a classifier; for the multi-class problem, the probability that a test sample belongs to each class can be obtained and the AUC computed per class.
Accuracy = (TP + TM) / (P + M)   (13)
where TP is the number of correctly predicted positive samples, TM is the number of correctly predicted negative samples, P is the number of all positive samples, and M is the number of all negative samples.
AUC = ( Σ_{i∈positive} rank_i − n0(n0+1)/2 ) / (n0 · n1)   (14)
where n0 and n1 are the numbers of positive and negative samples, respectively, and rank_i is the rank of the i-th positive sample in the list ordered by predicted score.
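A minimal sketch of the two metrics: overall accuracy, and the rank-based AUC of equation (14) computed for one class in a one-vs-rest manner (binary labels against that class's predicted probabilities). Ties in the scores are ignored for simplicity.

```python
import numpy as np

def accuracy(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float((y_true == y_pred).mean())

def binary_auc(labels, scores):
    """labels: 1 for the positive class, 0 otherwise; scores: predicted probabilities."""
    labels, scores = np.asarray(labels), np.asarray(scores)
    order = np.argsort(scores)
    ranks = np.empty_like(order, dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)        # rank 1 = lowest score
    n_pos, n_neg = labels.sum(), (1 - labels).sum()
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```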
To verify the validity of the models herein, we selected some models for comparison:
NB: the Naive Bayes model is a probabilistic classifier based on bayesian theorem and has strong independence assumptions about the features in the model. In the training phase of the classifier, the conditional probabilities of all partitions can be computed for each positive attribute.
LR: logistic Regression is also a common classification method that can be used to address multiple classification issues. The method is simple and efficient, and has strong interpretability. The validity of the model herein is further illustrated by the experiment performed using Logistic Regression.
And (3) SVM: support Vector Machine has certain advantages in high dimensional data sets and the training effect is still considerable in the case of a large number of samples. Like MLP, SVM is a traditional machine learning method to solve the problem of multivariate classification herein.
Precision Tree: decision trees are a supervised learning approach to predict the value of a target variable from data features by creating a model. The method has good performance in numerical data and classification data, and the experiment uses a decision tree to realize multi-classification on a data set.
Random Forest: random forests are a classifier that can train and predict samples through multiple trees and have the characteristic of high flexibility. The random forest is formed by a plurality of decision trees, and when the classification task is carried out, each tree in the random forest is respectively judged and classified, and finally a classification result is obtained.
Factorization Machine (FM): FM is a matrix decomposition-based algorithm that has great advantages in terms of data sparsity because it can have good learning ability for sparse data, which also makes FM suitable for data sets with large data dimensions. In this context, FM is taken as another algorithm that compares in terms of data characteristics.
MLP: as a deep learning method, the MLP can fully approximate to a complex nonlinear relation, and has better generalization capability and fault tolerance. In this experiment, MLP was the benchmark method for constructing classifiers based on flow characteristics. The present experiment inputs the features to be learned by the MLP to the final classifier without using the features extracted by the HIN.
4) Experimental results
(1) Model comparison (Model Comparison): The method predicts the CPU consumption of a single VNF from the traffic features. Each data set is split into two sets: the training set contains 90% of the samples and the test set the remaining 10%. The final results are shown in Tables 6 and 7; the model used herein has higher accuracy than the other, conventional models. The method of the invention, which uses meta-paths, performs better than the other methods because the meta-paths capture potential relationships between features. The experimental results show that, with HIN-based feature extraction, the model used by the method outperforms the other models in predicting VNF resource consumption. On the third data set, in terms of accuracy, the experimental results of the invention are better than those of the other models. Compared with the traditional models, the accuracy of the model is improved by 23.55% and the AUC by 35.06%, as shown in Table 6. Compared with the other deep learning models, the accuracy is improved by 17.32% and the AUC by 26.86%, as shown in Table 7. This also demonstrates the advantage of the HIN for feature extraction, which is a great help in improving the accuracy of the model.
Table 6 accuracy of different algorithms and datasets
Table 7 AUC for different algorithms and data sets
Aiming at the VNF resource prediction in the invention, the model has better effect than the traditional model and the depth model from the overall effect of the model:
(1) Compared with the traditional classification models: the prediction accuracy of the traditional models is better than that of the plain MLP, while the accuracy of the model used by the method is better still, reaching 98.96%. As can be seen from Table 6, RF performs best among the traditional machine learning models, because RF reduces over-fitting and does not change drastically with the size of the training data.
(2) Against the deep learning models, the model of the invention also performs better: the accuracy is improved by 17.32% compared with the MLP and by 17.69% compared with FM. On the one hand, although the MLP avoids the problems of neural network structure selection and local minima, it is extremely sensitive to the data. On the other hand, although the MLP can learn the nonlinear relationship between the traffic features and VNF resource consumption, it not only requires training a large number of parameters (such as the network structure and the initial values of the weights and thresholds), but its learning time is also relatively long. For a deep learning model such as the MLP, the effect is limited when the relationships between features and the mutual influence between variables are not considered. Therefore, when predicting VNF resource requirements from traffic features, none of these models reaches the prediction accuracy of the model herein.
In addition to comparing the models themselves, the experiment was performed with data sets of different sizes, about 0.12 MB, 0.74 MB and 3 MB. The results are shown in Tables 6 and 7; it can be seen from Table 6 that the method of the present invention generalizes better. On the one hand, the accuracy of the other methods is good on the small data set but not on the slightly more complex ones. This may be because those models are prone to over-fitting on a large data set, so their accuracy drops; moreover, as the data set grows, the models may become more sensitive to the introduced noise, so the classification accuracy degrades instead. On the other hand, as can be seen from Table 7, the AUC of the other methods is better on the 0.74 MB data set, but overall the method herein performs better.
In summary, the method of the present invention performs well on the MAWI-3MB data set in terms of both accuracy and AUC, which also indicates that the method is better suited to more complex data. The other methods do not perform well on larger data sets; it can also be seen that, when the amount of data is large, some models may not sufficiently learn the information it provides, resulting in poor accuracy or AUC. Although a larger data set may introduce some irrelevant noise, the method of the invention learns the correlations between the features well and further extracts the hidden relationships between them, thereby improving the accuracy of model prediction.
(2) Results of different meta-path combinations (The Results of Different Combinations of Meta-Paths): For the model of the invention, different meta-paths and different combinations of them (shown in Table 1) are designed based on the similarity between features, which helps the model mine the relations among the traffic features; the influence of the relations between features on the final result is then studied. The experiment was performed with different meta-paths, the specific combinations being shown in Table 8, where each meta-path corresponds to one in Table 1. As can be seen from FIG. 7, the model achieves a good prediction effect regardless of which meta-paths are used or how many are combined, which also shows the strong generalization ability of the model. The result is best when the number of meta-paths is 3; the meta-path combination in this case is P1, P2, P3 of Table 1, i.e. "AT-CPU-AT", "AT-SA-CPU-SA-AT" and "TB-CPU-TB".
TABLE 8 different meta-paths
the Number of Paths Multiple Meta-Paths
1 P1
2 P1,P2
3 P1,P2,P3
4 P1,P2,P3,P4
5 P1,P2,P3,P4,P5
6 P1,P2,P3,P4,P5,P6
7 P1,P2,P3,P4,P5,P6,P7
In addition to studying the effect of different numbers of meta-path combinations on the CPU prediction, this experiment also explores the effect of different meta-paths at the same number. As the previous section showed that the result is best when the number of meta-paths is 3, this embodiment designs several combinations of three meta-paths for experiments and discusses the influence of different features on the CPU. The combinations designed for the experiment are shown in Table 9, and the corresponding results are shown in FIG. 8. It can be seen from FIG. 8 that, with the number of meta-paths fixed at 3, different feature combinations give different prediction results for the CPU. When extracting features through a combination of multiple meta-paths, the hidden relationship between the features and the CPU should therefore also be considered.
TABLE 9 combination of different meta-paths
Paths Multiple Meta-Paths
Original Meta-Paths AT-CPU-AT,AT-SA-CPU-SA-AT,TB-CPU-TB
The First Case NP-CPU-NP,NI4-CPU-NI4,NT-CPU-NT
The Second Case SA-CPU-SA,AT-CPU-AT,NP-CPU-NP
The Third Case AT-CPU-AT,NP-CPU-NP,TB-CPU-TB
The Fourth Case NT-CPU-NT,TB-CPU-TB,SA-CPU-SA
(3) Results with or without the attention mechanism (With Or Without Attention Mechanism): The method of the invention introduces an attention mechanism at the model level, and it can be seen from FIG. 9 that different features affect the result to different degrees. The attention mechanism fully considers the relation between each feature and the CPU and can assign different weights to the features to measure their importance, so the model can be trained better.
In general, when designing the meta-paths, the method retains the features with higher similarity to the CPU and reduces the influence of features with lower similarity, so the model of the invention is more flexible and effective in feature utilization. Moreover, the method fully considers the relation between the features and the CPU, expresses each relation as a vector, and uses an attention mechanism to measure the importance of each feature. At the model level, the feature selection and utilization of the method better match the data distribution, so the proposed method achieves better performance.
From the experimental results, if the hidden relation between the flow characteristics is mined by using the HIN, the accuracy of the model can be greatly improved, which proves that the HIN has great advantages in the aspect of extracting the characteristics. On the basis of heterogeneous information networks, the overall range of resource consumption of VNFs can be predicted by considering the impact of hidden relations between features on a single VNF. The use of HIN not only allows for the study of hidden relationships between features, but also improves the learning ability of the model.
Finally, it is noted that the above embodiments are only for illustrating the technical solution of the present invention and not for limiting the same, and although the present invention has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications and equivalents may be made thereto without departing from the spirit and scope of the present invention, which is intended to be covered by the claims of the present invention.

Claims (3)

1. The virtual network function resource consumption prediction method based on flow characteristic extraction is characterized by comprising the following steps:
s1: establishing different meta paths between the traffic characteristics and the CPU of the VNF by correlation between the traffic characteristics, thereby constructing a heterogeneous information network (Heterogeneous information network, HIN); wherein the VNF represents a virtual network function;
s2: acquiring feature representation of each flow feature by using an HIN2Vec model;
the vector representation of each node, i.e. the feature representation of each node, is obtained through the HIN2Vec model, specifically comprising: for each node x in the node set X of the heterogeneous information network graph, randomly generating a k-dimensional feature vector W_x describing that node, and extracting from each path d ∈ D the quadruple <x1, x2, r, Is(x1, x2, r)> as training data, where x1 and x2 are any two adjacent nodes on path d, r denotes the relation between the two nodes, and Is(x1, x2, r) indicates whether x1 and x2 have the relation r, taking the value 1 if the relation exists and 0 otherwise, as shown in formula (1):
Is(x1, x2, r) = 1 if x1 and x2 have relation r, and 0 otherwise   (1)
after the four-element relation is extracted, updating the characteristics of each node; the specific method comprises the following steps:
establishing a target loss function:
g(x 1 ,x 2 ,r)=Is(x 1 ,x 2 ,r)logp(r|x 1 ,x 2 ) (2)
t(x 1 ,x 2 ,r)=(1-Is(x 1 ,x 2 ,r))log[1-p(r|x 1 ,x 2 )] (3)
F(x 1 ,x 2 ,r)=g(x 1 ,x 2 ,r)+t(x 1 ,x 2 ,r) (4)
wherein g (x 1 ,x 2 R) represents the probability of a positive sample, p (r|x) 1 ,x 2 ) Represents x 1 And x 2 Probability of having relation r, t (x 1 ,x 2 R) represents the probability of a negative sample, F (x) 1 ,x 2 R) represents a target loss function;
maximizing the objective function F using a random gradient descent algorithm, using the data according to equations (5) - (7)Iteratively updating the node characteristic W epsilon W;
wherein, representative node x 1 Weight matrix of>Representative node x 1 Is represented by a, β, ω representing the step size, W, in the gradient descent algorithm r ' represents a weight matrix with a target relationship r; after the feature representation of the node set is obtained, all the node sets representing the flow feature U and the CPU I are taken out, and the potential relation feature of the flow feature and the CPU is +.>The expression is as follows:
wherein matrix multiplication combines the two weight vectors, of which one represents the weights of the different meta-paths of the traffic features and the other represents the weights of the different meta-paths of the CPU, T represents the transpose, the corresponding term represents the feature value of the m-th meta-path of the i-th CPU, and M represents the set of all meta-paths;
S3: measuring the importance of each feature by using an attention mechanism, assigning different weights to the features, and inputting the weighted features into a multilayer perceptron (Multilayer Perceptron, MLP) model to predict the resource consumption of the VNF (an illustrative sketch of this step is given after the claims);
the MLP model comprises an input layer, a hidden layer and an output layer, and specifically comprises the following steps:
the first layer is the input layer: h_j = Σ_i w_ij · x_i
wherein h_j represents the weighted sum of all inputs of the current node, w_ij represents the weights, x represents the input values, and N represents the number of samples;
the second layer is the hidden layer: F_i = f(h_j)
wherein F_i is the output value of the hidden-layer neurons and f(·) represents the activation function; the activation function used is the rectified linear unit (Rectified Linear Unit, ReLU);
the third layer is the output layer, which produces its output through softmax, i.e., f_output = softmax(F);
finally, the Loss function Loss of the MLP prediction model is shown in equation (11):
wherein N is the number of samples, y_i represents the real label corresponding to the i-th category, and f_output is the predicted result.
2. The method for predicting virtual network function resource consumption according to claim 1, wherein in step S1, the heterogeneous information network is constructed by: using the correlation among the traffic features to build the node and edge information in the graph for each relation, and constructing the heterogeneous information network graph by traversing the relations in turn (an illustrative sketch of this construction is given after the claims).
3. The method for predicting virtual network function resource consumption according to claim 2, wherein in step S2, each node in the constructed heterogeneous information network graph represents a feature, and the vector representation of each node is finally learned by using the HIN2Vec model; the HIN2Vec model extracts node features in the graph, retains nodes of the same type as the first node, and adds them to a path; for each node in the node set, a path is generated subject to the given path length, and the feature representation of the corresponding node is then obtained through a stochastic gradient descent algorithm.
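The following sketch (Python) illustrates one possible way to carry out the graph construction of step S1 and claim 2: a node is created for each traffic feature and for the CPU metric, and a typed edge is added whenever the pairwise correlation exceeds a threshold. The use of networkx and pandas, the correlation threshold, and all identifiers are illustrative assumptions and are not prescribed by the patent.

import networkx as nx
import pandas as pd

def build_hin(samples: pd.DataFrame, cpu_col: str = "cpu", threshold: float = 0.5):
    # One node per column; the CPU column gets its own node type so that
    # feature-feature and feature-CPU edges form different meta-paths.
    g = nx.Graph()
    for col in samples.columns:
        node_type = "cpu" if col == cpu_col else "traffic_feature"
        g.add_node(col, node_type=node_type)
    # Connect columns whose absolute correlation exceeds the (assumed) threshold.
    corr = samples.corr()
    for u in samples.columns:
        for v in samples.columns:
            if u < v and abs(corr.loc[u, v]) >= threshold:
                relation = f"{g.nodes[u]['node_type']}-{g.nodes[v]['node_type']}"
                g.add_edge(u, v, relation=relation)
    return g

A path set D for HIN2Vec can then be generated by walks over this graph that respect the given path length, as described in claim 3.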
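The next sketch illustrates, under stated assumptions, the training data and objective of step S2: quaternary relation groups <x_1, x_2, r, Is> are extracted from adjacent nodes on each path, and the objective of formulas (2)-(4) is evaluated. Modelling p(r | x_1, x_2) with a sigmoid over element-wise products of the embeddings is an assumed parameterisation in the spirit of HIN2Vec; all names are illustrative, and the gradient updates of equations (5)-(7) are not reproduced here.

import numpy as np

def extract_quadruples(paths):
    # Each path d is a list of (node, relation_to_next_node) hops; every pair of
    # adjacent nodes yields one observed tuple with Is = 1 (formula (1)).
    quadruples = []
    for d in paths:
        for (x1, r), (x2, _) in zip(d[:-1], d[1:]):
            quadruples.append((x1, x2, r, 1))
    return quadruples

def objective(w_x1, w_x2, w_r, is_rel):
    # Formulas (2)-(4): F = Is*log p + (1 - Is)*log(1 - p). Since Is is 0 or 1,
    # only one term is non-zero, so the two branches below cover both cases.
    p = 1.0 / (1.0 + np.exp(-np.sum(w_x1 * w_x2 * w_r)))  # assumed form of p(r|x1,x2)
    return np.log(p) if is_rel else np.log(1.0 - p)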
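Finally, a hedged sketch of step S3: the fused feature vector is re-weighted by an attention layer and passed through an MLP with one ReLU hidden layer and a softmax output trained with cross-entropy, mirroring the input, hidden and output layers and the loss of formula (11). PyTorch, the layer sizes, and the specific attention form are assumptions made only for illustration.

import torch
import torch.nn as nn

class AttentionMLP(nn.Module):
    def __init__(self, feat_dim: int, hidden_dim: int, num_classes: int):
        super().__init__()
        # Attention scores assign a different weight to each input feature.
        self.attention = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.Softmax(dim=-1))
        self.hidden = nn.Linear(feat_dim, hidden_dim)  # weighted sum h_j of the inputs
        self.relu = nn.ReLU()                          # hidden-layer activation f(.)
        self.out = nn.Linear(hidden_dim, num_classes)  # logits for the softmax output

    def forward(self, x):
        alpha = self.attention(x)      # importance weight of each feature
        h = self.hidden(alpha * x)     # input layer: weighted sum of the re-weighted inputs
        f = self.relu(h)               # hidden layer: F_i = f(h_j)
        return self.out(f)             # softmax is applied inside the loss below

# Cross-entropy between softmax(f_output) and the true labels y_i, as in formula (11):
# criterion = torch.nn.CrossEntropyLoss()
# loss = criterion(model(features), labels)

Treating the prediction as classification over discretised consumption ranges is an assumption consistent with the softmax output and cross-entropy loss described in the claim.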
CN202111371502.1A 2021-11-18 2021-11-18 Virtual network function resource consumption prediction method based on flow characteristic extraction Active CN114070708B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111371502.1A CN114070708B (en) 2021-11-18 2021-11-18 Virtual network function resource consumption prediction method based on flow characteristic extraction

Publications (2)

Publication Number Publication Date
CN114070708A CN114070708A (en) 2022-02-18
CN114070708B true CN114070708B (en) 2023-08-29

Family

ID=80278167

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111371502.1A Active CN114070708B (en) 2021-11-18 2021-11-18 Virtual network function resource consumption prediction method based on flow characteristic extraction

Country Status (1)

Country Link
CN (1) CN114070708B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115022195B (en) * 2022-05-26 2023-10-10 电子科技大学 Flow dynamic measurement method for IPv6 network

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018013023A1 (en) * 2016-07-15 2018-01-18 Telefonaktiebolaget Lm Ericsson (Publ) A server and method performed thereby for determining a frequency and voltage of one or more processors of the server
CN108965024A (en) * 2018-08-01 2018-12-07 重庆邮电大学 A kind of virtual network function dispatching method of the 5G network slice based on prediction
CN113535399A (en) * 2021-07-15 2021-10-22 电子科技大学 NFV resource scheduling method, device and system
CN113656797A (en) * 2021-10-19 2021-11-16 航天宏康智能科技(北京)有限公司 Behavior feature extraction method and behavior feature extraction device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
NFV memory resource management method for 5G communication networks; 苏畅 (Su Chang); Computer Science (《计算机科学》); full text *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant