CN109995583B - Delay-guaranteed NFV cloud platform dynamic capacity expansion and contraction method and system - Google Patents


Info

Publication number
CN109995583B
Authority
CN
China
Prior art keywords
cloud platform
nfv cloud
tenant
delay
virtual network
Prior art date
Legal status
Active
Application number
CN201910199568.3A
Other languages
Chinese (zh)
Other versions
CN109995583A (en)
Inventor
江勇
艾硕
李清
Current Assignee
Shenzhen Graduate School Tsinghua University
Original Assignee
Shenzhen Graduate School Tsinghua University
Priority date
Filing date
Publication date
Application filed by Shenzhen Graduate School Tsinghua University
Priority to CN201910199568.3A
Publication of CN109995583A
Application granted
Publication of CN109995583B

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/08 Configuration management of networks or network elements
    • H04L 41/0896 Bandwidth or capacity management, i.e. automatically increasing or decreasing capacities
    • H04L 41/14 Network analysis or design
    • H04L 41/145 Network analysis or design involving simulating, designing, planning or modelling of a network
    • H04L 41/147 Network analysis or design for predicting network behaviour

Abstract

The invention discloses a delay-guaranteed dynamic capacity expansion and contraction method and system for an NFV cloud platform. The method comprises the following steps: collecting the network configuration information, tenant configuration information and running logs of the NFV cloud platform; according to the collected information, predicting the average packet arrival rate of each tenant in the next time period with a log-linear Poisson autoregressive model, and, based on those predicted rates, analyzing the average packet processing delay of each service chain with a classed Jackson queueing network model; making a dynamic capacity expansion and contraction decision for the NFV cloud platform according to the traffic prediction result and the average per-chain packet processing delay, where the decision information comprises the number and placement of each type of virtual network function instance and the traffic forwarding rules; and translating the decision information into instructions sent respectively to the SDN controller and the controller of the NFV cloud platform to execute the scaling operation.

Description

Delay-guaranteed NFV cloud platform dynamic capacity expansion and contraction method and system
Technical Field
The invention relates to a delay-guaranteed NFV cloud platform dynamic capacity expansion and contraction method and system, and belongs to the field of computer networks.
Background
Computer networks play an important role in human production and life, and are important infrastructure supporting economic development and technological innovation in modern society. Emerging network technologies such as the mobile internet, cloud computing and the Internet of Things place new requirements and challenges on the scalability, security and availability of computer networks, making research on future networks urgent.
Network Function Virtualization (NFV) is an important technology for realizing network virtualization, and one of the mainstream research directions for future networks. "Network function" is a generic term for middleboxes deployed in a network, such as firewalls, intrusion detection systems (IDS), and load balancers. To increase the security and availability of a network, a certain number of network function devices are typically deployed in it. In a conventional network, however, network functions are generally deployed as hardware, which has high maintenance cost, lacks flexibility, and cannot meet the requirements of virtual networks. NFV realizes a more flexible and controllable network function deployment mode by running virtual network function (VNF) software on commodity servers. This new deployment model has the following advantages:
1) reduced user cost: by renting virtual network function services, users can reduce both the purchase cost (CAPEX) and the maintenance cost (OPEX) of hardware network functions, and pay for the service according to their actual usage;
2) convenient and timely upgrades: because virtual network functions are deployed on general-purpose servers, upgrading them is more convenient than upgrading traditional hardware network functions, so security vulnerabilities in devices such as firewalls and intrusion detection systems can be patched more promptly;
3) on-demand capacity expansion/reduction: computer network traffic often has high peaks and low valleys; in a traditional network, hardware network functions are generally provisioned for the traffic peak, which wastes capacity during troughs, whereas virtual network functions are expanded or contracted on demand, meeting the processing requirement while avoiding resource waste.
Although network function virtualization has many advantages, many problems remain on the way to large-scale application. From the perspective of an NFV cloud service operator, how to maximize resource utilization and reduce operating cost while guaranteeing quality of service (QoS) is a key problem in commercializing NFV cloud services. The bursty nature of network traffic is a major challenge for NFV dynamic scaling: both over-provisioning and under-provisioning of cloud resources prevent NFV cloud service operators from maximizing their benefit. On the other hand, network latency is an important criterion for measuring NFV cloud service quality. Because virtual network functions are deployed between communicating nodes, the packet processing delay they introduce can degrade users' end-to-end communication quality. For reliable transport protocols (such as TCP), added delay may trigger congestion control mechanisms, which suppress the sender's rate and reduce throughput. In summary, dynamic expansion/contraction of NFV cloud services mainly faces the following three challenges:
1) Demand estimation. First, the bursty nature of network traffic makes cloud resource demand hard to estimate; second, virtual network functions often process packets in the form of a service function chain, and dynamic routing rules in the SDN network can defeat demand prediction algorithms that consider only a single node;
2) Delay guarantee. The processing delay of a packet depends mainly on the processing capacity of the set of functionally distinct virtual network function instances on its service chain. The placement of these instances in the data center also affects the transmission delay of packets along the chain;
3) Statistical multiplexing. An NFV cloud service operator often serves multiple tenants, and the same network function may appear in the service chains subscribed by several tenants. A reasonable statistical multiplexing strategy can further improve cloud resource utilization, but it makes the dynamic expansion/contraction problem more complex.
Since elastic scaling is one of the main characteristics of cloud computing, there is much related work on dynamic scaling in cloud computing environments. However, most of it focuses on scaling Web applications deployed on the cloud; such schemes are not suitable for the NFV cloud service scenario, in which multiple types of virtual network function instances cooperate in the form of service chains. Recent solutions for dynamic scaling of NFV cloud services are summarized as follows:
the Ghaznavi et al provides a VNF instance placement algorithm for a horizontal capacity expansion scene of the same type of virtual network function instances to minimize the overhead caused by instance migration. Eramo et al propose a detailed capacity expansion strategy for a longitudinal capacity expansion scenario of an NFV cloud platform, including updating service link routing, VNF instance placement and migration, and the like. Wang et al propose an algorithm for solving the optimal number of VNF instances during the service chain expansion phase. Zhang et al propose a strategy of selecting an optimal VM instance ordering period by predicting a traffic rate for an NFV broker service model, to reduce the cost of renting cloud resources. Fei et al propose an NFV cloud platform active capacity expansion strategy to minimize the overhead caused by prediction errors and instance deployment.
However, these solutions consider the NFV service chain scaling problem only from the perspective of throughput and neglect packet processing delay; guaranteeing throughput alone cannot ensure good quality of service. Moreover, they consider only the single service chain case and do not study the multi-tenant statistically multiplexed service chain scenario. Regarding packet processing delay, NFV-RT provides a delay guarantee for NFV service chains by means of the real-time scheduling mechanisms of the host and virtual machine operating systems, but it assumes that the whole service chain resides in the same physical server, a deployment that clearly hinders dynamic scaling. NFVnice studies the performance bottlenecks that may arise on multiple intersecting NFV service chains and proposes a backpressure mechanism to ensure performance isolation, but under high load it only drops packets at upstream VNF instances and cannot guarantee service quality.
The above background disclosure is only for the purpose of assisting understanding of the inventive concept and technical solutions of the present invention, and does not necessarily belong to the prior art of the present patent application, and should not be used for evaluating the novelty and inventive step of the present application in the case that there is no clear evidence that the above content is disclosed before the filing date of the present patent application.
Disclosure of Invention
The invention mainly aims to provide a delay-guaranteed NFV cloud platform dynamic capacity expansion and reduction method and system.
The technical scheme provided by the invention for achieving the purpose is as follows:
a delay-guaranteed NFV cloud platform dynamic capacity expansion and contraction method comprises the following steps: collecting network configuration information, tenant configuration information and running logs of the NFV cloud platform; according to the collected network configuration information, tenant configuration information and operation logs of the NFV cloud platform, predicting the average arrival rate of data packets of each tenant in the next time period by using a logarithmic linear Poisson autoregressive model to perform flow prediction, and analyzing the average processing delay of the data packets of each service chain by using a classified Jackson queuing network model based on the average arrival rate of the data packets of each tenant in the next time period; according to the flow prediction result and the average processing delay of the data packet of each service chain, carrying out dynamic expansion and contraction capacity decision of the NFV cloud platform, wherein the decision information comprises the deployment number and the deployment position of various virtual network function instances and a flow forwarding rule; and translating the decision information into instructions and respectively sending the instructions to an SDN controller and a controller of the NFV cloud platform to execute the capacity expansion and contraction operation.
With the above technical scheme, the average network traffic rate of each tenant is predicted, a queueing network model is established for the service chains to analyze processing delay, and the NFV cloud platform is dynamically scaled on the premise that each tenant's processing delay meets its Service Level Agreement (SLA), so that system overhead is minimized and resource utilization is maximized.
The invention further provides a delay-guaranteed NFV cloud platform dynamic capacity expansion and contraction system, which includes: a monitoring program for collecting the network configuration information, tenant configuration information and running logs of the NFV cloud platform; an analysis program for predicting, from the collected information, the average packet arrival rate of each tenant in the next time period with a log-linear Poisson autoregressive model, and for analyzing the average packet processing delay of each service chain with a classed Jackson queueing network model based on those predicted rates; a planning program for making the dynamic scaling decision of the NFV cloud platform according to the traffic prediction result, with the average per-chain packet processing delay as a constraint, the decision information comprising the number and placement of each type of virtual network function instance and the traffic forwarding rules; and an execution program for translating the decision information into instructions sent respectively to the SDN controller and the controller of the NFV cloud platform to execute the scaling operation.
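As a hedged illustration of how the four programs could cooperate, the Python sketch below wires a monitor, analyzer, planner and executor into a single scaling period. All class and function names, thresholds and telemetry values are hypothetical stand-ins, not taken from the patent's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class PlatformState:
    configs: dict = field(default_factory=dict)   # network + tenant configuration
    logs: list = field(default_factory=list)      # per-period packet counts, etc.

def monitor(state: PlatformState) -> PlatformState:
    # Collect network configuration, tenant configuration and running logs.
    state.logs.append({"packets": 1200})          # stand-in for real telemetry
    return state

def analyze(state: PlatformState) -> dict:
    # Predict next-period arrival rate and estimate per-chain delay.
    predicted_rate = state.logs[-1]["packets"]    # trivial placeholder predictor
    return {"rate": predicted_rate, "delay": 0.004}

def plan(analysis: dict, sla_delay: float = 0.005) -> dict:
    # Decide instance counts so the predicted delay meets the tenant SLA.
    scale_out = analysis["delay"] > sla_delay
    return {"scale_out": scale_out, "instances": 2 if scale_out else 1}

def execute(decision: dict) -> str:
    # Translate the decision into SDN-controller / hypervisor instructions.
    return f"deploy {decision['instances']} instance(s)"

state = PlatformState()
decision = plan(analyze(monitor(state)))
print(execute(decision))   # one scaling period of the control loop
```

In the real system the planner's output would also carry placement and forwarding rules, and the executor would talk to the SDN controller and hypervisor rather than returning a string.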
Drawings
Fig. 1 is a flowchart of a delay-guaranteed NFV cloud platform dynamic scaling method according to an embodiment of the present invention;
Fig. 2 is a system architecture and schematic block diagram of a delay-guaranteed NFV cloud platform dynamic scaling system according to an embodiment of the present invention;
Figs. 3-1 and 4-1 are schematic diagrams of two typical VNF instance operation modes;
Figs. 3-2 and 4-2 are the classed Jackson queueing network models of the VNF operation modes shown in Figs. 3-1 and 4-1, respectively.
Detailed Description
The invention is further described with reference to the following figures and detailed description of embodiments.
The specific embodiment of the invention provides a delay-guaranteed NFV cloud platform dynamic capacity expansion and contraction method: under the system architecture of the NFV cloud platform, multiple intersecting virtual network function service chains are dynamically expanded/contracted by predicting each tenant's network traffic, so that, on the premise that traffic processing delay meets each tenant's delay requirement, resource utilization is maximized and processing cost is minimized.
Referring to Fig. 1, the delay-guaranteed NFV cloud platform dynamic scaling method provided by the invention includes: collecting the network configuration information, tenant configuration information and running logs of the NFV cloud platform; predicting, from the collected information, the average packet arrival rate of each tenant in the next time period with a log-linear Poisson autoregressive model, and analyzing the average packet processing delay of each service chain with a classed Jackson queueing network model based on those predicted rates; making the dynamic scaling decision of the NFV cloud platform according to the traffic prediction result and the per-chain delay, the decision information comprising the number and placement of each type of virtual network function instance and the traffic forwarding rules; and translating the decision information into instructions sent respectively to the SDN controller and the controller of the NFV cloud platform to execute the scaling operation. The scheme is based mainly on Software Defined Networking (SDN) and virtualization technologies: the SDN controller running in the NFV cloud platform is mainly used for orchestrating the virtual network function (VNF) service chains, and the hypervisor is mainly used for resource management in the virtual environment, including creation, migration and recovery of VNF instances.
The network configuration information to be collected mainly includes the traffic forwarding paths; the tenant configuration information includes the service chain types and the packet processing delay requirements; the running logs include the number of packets per tenant per unit time at the NFV cloud platform gateway, and the type, number, placement and running time of the virtual network function instances deployed in the NFV cloud platform. Preferably, several collection modes and frequencies can be adopted, including reading the switches' routing tables, monitoring the traffic arriving at the gateway switch, and compiling the tenant configuration information.
After the required data is collected, the average arrival rate of the data packets of the next time period of each tenant and the average processing delay of the data packets of each service chain are analyzed according to the data.
Predicting the average packet arrival rate of each tenant in the next time period with the log-linear Poisson autoregressive model specifically comprises the following steps:
firstly, reference historical data are selected from the collected running logs (the per-unit-time packet counts at the NFV cloud platform gateway), taking the periodic pattern of traffic into account (for example, tenant traffic is heavier by day and lighter at midnight). For example, with the scaling period set to 1 hour and the current period denoted T, to predict the average packet arrival rate of period T+1 one may select the packet counts of periods T-1, T-2, T-3, and so on as the historical data required for prediction; if the periodic pattern is considered, the packet counts of periods T-24, T-48, and so on, together with the average packet arrival rates of the same historical periods, can also be selected;
then, the selected historical data are substituted into the initial log-linear Poisson autoregressive model, and the parameters corresponding to the reference historical data are estimated by the maximum likelihood method, thereby establishing the log-linear Poisson autoregressive model used to predict each tenant's average packet arrival rate in the next time period. The log-linear Poisson autoregressive model is:
$$g(\lambda_t) = \alpha_0 + \sum_{i=1}^{p} \alpha_i\, g(A_{t-i} + 1) + \sum_{j=1}^{q} \beta_j\, g(\lambda_{t-j}) \qquad (1)$$
wherein $\lambda_t$ denotes the average packet arrival rate in period t, which depends on the packet counts $A_{t-i}$ of past periods (e.g., the count $A_{t-1}$ of the previous period, the count $A_{t-2}$ of the period before that, ...) and on its own historical values $\lambda_{t-j}$ (e.g., $\lambda_{t-1}$, $\lambda_{t-2}$); $g(\cdot)$ is a logarithmic link function; i indexes the selected historical packet-count records and j indexes the selected historical average-arrival-rate records. The parameter vector to be estimated for the prediction model is $\theta = (\alpha_0, \alpha_1, \ldots, \alpha_p, \beta_1, \ldots, \beta_q)$, and to guarantee the stationarity and ergodicity of the packet arrival process the parameters must satisfy the constraint
$$\sum_{i=1}^{p} |\alpha_i| + \sum_{j=1}^{q} |\beta_j| < 1 \qquad (2)$$
The parameter $\theta$ is estimated by the maximum likelihood method; the likelihood function is
$$L(\theta) = \prod_{t=1}^{T} P\!\left(A_t \mid \lambda_t(\theta)\right) \qquad (3)$$
Substituting the probability mass function of the Poisson distribution, $P(A_t = a) = e^{-\lambda_t}\lambda_t^{a}/a!$, the log-likelihood is obtained:
$$\ell(\theta) = \sum_{t=1}^{T} \left( A_t \log \lambda_t(\theta) - \lambda_t(\theta) - \log A_t! \right) \qquad (4)$$
By maximizing the likelihood function, the parameter $\theta$ is estimated; with the unknown parameters estimated, the log-linear Poisson autoregressive model for predicting each tenant's average packet arrival rate in the next time period is established. Each tenant's average packet arrival rate for the next period can then be predicted with the established model.
Because the invention uses a time-series analysis method (formula (1) is a time-series model) to regress a mean value, it performs probability-distribution prediction, which differs from common point-prediction methods, and the prediction result can therefore be used in the subsequent delay analysis.
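A minimal sketch of this prediction step, with model orders p = q = 1, synthetic per-period packet counts, and numpy/scipy maximum-likelihood fitting standing in for whatever the patent's implementation uses:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def nu_series(params, counts):
    # nu_t = log(lambda_t) recursion of the log-linear model (p = q = 1)
    a0, a1, b1 = params
    nu = np.zeros(len(counts))
    nu[0] = np.log(counts[0] + 1.0)
    for t in range(1, len(counts)):
        nu[t] = a0 + a1 * np.log(counts[t - 1] + 1.0) + b1 * nu[t - 1]
    return nu

def neg_log_lik(params, counts):
    # negative Poisson log-likelihood, dropping the constant log(A_t!) term
    nu = nu_series(params, counts)
    return -(counts * nu - np.exp(nu)).sum()

# Synthetic per-period packet counts around a slowly varying true rate.
true_rate = 50 + 10 * np.sin(np.arange(200) / 10.0)
counts = rng.poisson(true_rate)

# Bounds keep a1 + b1 < 1, in the spirit of the stationarity constraint.
res = minimize(neg_log_lik, x0=[0.5, 0.4, 0.4], args=(counts,),
               bounds=[(-2.0, 2.0), (0.0, 0.49), (0.0, 0.49)])
a0, a1, b1 = res.x
nu = nu_series(res.x, counts)
lam_next = np.exp(a0 + a1 * np.log(counts[-1] + 1.0) + b1 * nu[-1])
print(round(float(lam_next), 1))   # predicted mean arrival rate for period T+1
```

Higher orders p, q (including the daily lag T-24 mentioned above) extend the recursion with more terms but leave the fitting procedure unchanged.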
Next, how to analyze the average per-chain packet processing delay with the classed Jackson queueing network model is described. The multiple intersecting service chains in the NFV cloud platform are modeled with a classed Jackson queueing network, from which the average packet processing delay of each service chain is analyzed. In the NFV cloud platform there are mainly two typical VNF instance operation modes: multiple network flows multiplexing one VNF instance, as shown in Fig. 3-1, and multiple VNF instances of the same type jointly processing one network flow, as shown in Fig. 4-1. These two operation modes need to be modeled separately with the classed Jackson queueing network model.
An example of multiple network flows multiplexing one VNF instance is shown in Fig. 3-1: packets of network flows A and B are both processed by VNF instance NF1; after NF1, flow A is processed by NF3 and flow B by NF2. In this way the processing power of NF1 can be fully utilized while service quality is guaranteed. Since the processing power of different kinds of VNF instances differs, it is feasible for a single VNF instance to handle multiple network flows. Fig. 3-2 shows the queueing network model and associated parameters for the operation mode of Fig. 3-1: $\lambda_a$ and $\lambda_b$ denote the packet arrival rates of network flows A and B, and $\mu_1$, $\mu_2$, $\mu_3$ denote the packet processing rates of virtual network function instances NF1, NF2, NF3, respectively. The packet arrival processes of flows A and B are independent Poisson processes, so the packet arrival process flowing into VNF instance NF1 is still a Poisson process, with average arrival rate $\lambda_a + \lambda_b$.
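Under the Poisson assumptions, the merged input to NF1 and its queueing delay can be checked numerically. The rates below are illustrative, not taken from the patent, and the sojourn-time formula is the standard M/M/1 result:

```python
import numpy as np

rng = np.random.default_rng(1)
lam_a, lam_b, mu1 = 300.0, 200.0, 800.0   # packets/s; mu1 is NF1's service rate

# Merged arrivals into NF1 form a Poisson process of rate lam_a + lam_b;
# sample per-second counts of each flow and add them.
merged = rng.poisson(lam_a, 10000) + rng.poisson(lam_b, 10000)
print(abs(merged.mean() - (lam_a + lam_b)) < 5)   # empirical rate ~ 500/s

# M/M/1 utilization and mean sojourn time at NF1 (stable since load < mu1):
rho = (lam_a + lam_b) / mu1
wait = 1.0 / (mu1 - (lam_a + lam_b))
print(round(rho, 3), round(wait * 1000, 2), "ms")
```

The same calculation shows why multiplexing pays off: one NF1 serving both flows at utilization 0.625 still keeps the mean per-packet sojourn near 3.33 ms, instead of two underused instances.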
An example of multiple VNF instances of the same type jointly processing one network flow is given in Fig. 4-1: network flow A is processed jointly by two VNF instances of the same type, NF3 and NF4. When network traffic peaks, the processing capacity of a single VNF instance may not guarantee service quality, and several network function instances of the same type must cooperate. Fig. 4-2 shows the queueing network model and associated parameters for the operation mode of Fig. 4-1, where $\lambda_a$ and $\lambda_b$ denote the packet arrival rates of network flows A and B, and $\mu_1$, $\mu_3$, $\mu_4$ denote the packet processing rates of virtual network function instances NF1, NF3, NF4, respectively. NF3 and NF4 are two instances of the same kind of VNF that process packets of network flow A, and whether traffic leaving NF1 continues to NF3 depends on the traffic type: a packet belonging to network flow A is forwarded to NF3 with probability p and to NF4 with probability q, where p + q = 1 (these p and q are unrelated to the orders p and q in formula (2)). For this type of service chain, classical Jackson queueing network modeling is not feasible, because there the forwarding probability depends only on the source and destination nodes and not on the traffic type. The invention therefore models the service chain with a classed Jackson queueing network (Classed Jackson Network), which solves this modeling problem.
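The class-dependent split of flow A after NF1 is probabilistic thinning of a Poisson stream, which preserves the Poisson property at NF3 and NF4. A small numerical check with illustrative rates (p, q and the arrival rate are not from the patent):

```python
import numpy as np

rng = np.random.default_rng(2)
p, q = 0.6, 0.4          # routing probabilities for flow A, p + q = 1
lam_a = 500.0            # flow A arrival rate (packets per period)

# Thinning a Poisson stream with an independent coin keeps it Poisson:
arrivals = rng.poisson(lam_a, 5000)       # per-period counts of flow A
to_nf3 = rng.binomial(arrivals, p)        # packets routed to NF3
to_nf4 = arrivals - to_nf3                # the rest go to NF4

print(round(to_nf3.mean()), round(to_nf4.mean()))   # near lam_a*p and lam_a*q
```

The effective input rates $\lambda_a p$ and $\lambda_a q$ are exactly what the classed Jackson model feeds into the NF3 and NF4 queues; a classical Jackson model has no way to express that flow B's packets never take these branches.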
When modeling with the classed Jackson queueing network, the packet arrival processes of the tenants are assumed to be mutually independent Poisson processes, and the packet processing time of a VNF instance is assumed to follow a negative exponential distribution. Although real network traffic and service times do not necessarily follow these statistical distributions strictly, the assumptions make the Markov property applicable and thereby make packet processing delay analysis tractable. Packets in a VNF instance are processed first come, first served (FCFS), and each VNF instance has a fixed-size buffer that stores packets waiting to be processed. Although a fixed-size buffer means the queue length is finite, the scheme scales out before packet loss occurs, so the queue can still be regarded as having infinite size. In addition, a VNF instance only records its processing operations on a packet in the service chain, to facilitate processing by subsequent VNF instances (for example, OpenNetVM uses an mbuf structure to store the packet and record the processing operations). Therefore, the influence of packet loss on the packet departure rate need not be a concern.
The step of analyzing the average processing delay of the data packets of each service chain by using the classified Jackson queuing network model comprises the following steps:
1) a logic matrix F represents the virtual network function instance types on each tenant's service chain in the NFV cloud platform; the logic matrix F has m rows and n columns, where m is the number of tenants and n is the number of virtual network function instance types, and if the j-th type of virtual network function instance is used on tenant i's service chain, the corresponding element F[i][j] of the logic matrix is 1, otherwise it is 0;
2) the average packet processing time of each type of virtual network function instance in the NFV cloud platform is calculated from the tenant traffic and the usage of the virtual network function instances, and recorded in a vector d;
3) the matrix product F·d is computed; the resulting vector contains the average packet processing delay of the service chains of the m tenants. In tandem queues, the nature of the departure process after a packet has been processed is crucial. By Burke's theorem, when an arrival process follows a Poisson process of rate λ, the departure process follows a Poisson process of the same rate λ. However, if two arrival processes are not mutually independent Poisson processes, their merged arrival process is not a Poisson process. For example, if NF3 and NF4 in Fig. 4-1 are IDSs that send suspicious packets to another, more powerful IDS, then the packet arrival process at that more powerful IDS is not a Poisson process. However, as in classical Jackson networks, the product form still holds in the classed Jackson queueing network, so the invention can still treat the processing queue of each VNF instance as an M/M/1 queue independent of the others. Further, by Little's law, the average packet processing delay of each VNF instance can be obtained.
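Steps 1)-3) can be sketched with a toy two-tenant, three-VNF-type example. The matrix F, service rates and loads below are illustrative, and each VNF type's mean per-packet time is taken as the M/M/1 sojourn time 1/(mu - lam):

```python
import numpy as np

# Step 1: logic matrix F (m = 2 tenants, n = 3 VNF types).
F = np.array([[1, 0, 1],     # tenant 0's chain uses VNF types 0 and 2
              [1, 1, 0]])    # tenant 1's chain uses VNF types 0 and 1

# Step 2: mean packet processing time per VNF type, modeled as independent
# M/M/1 queues: sojourn time W = 1 / (mu - lam) for each type.
mu  = np.array([800.0, 600.0, 700.0])   # service rates (packets/s)
lam = np.array([500.0, 200.0, 350.0])   # offered load per type (packets/s)
d = 1.0 / (mu - lam)                    # vector d of mean sojourn times (s)

# Step 3: per-tenant average service-chain delay as the product F.d.
delay = F @ d
print(np.round(delay * 1000, 2), "ms")  # one delay value per tenant
```

A scaling decision then compares each tenant's entry of F·d against that tenant's SLA delay bound; adding instances of a type raises the effective mu and shrinks the corresponding entries of d.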
Based on the foregoing traffic prediction and delay analysis results, the method adjusts the number of each type of VNF instance deployed in the NFV cloud platform to adapt to changing demand while minimizing system overhead. This can be formulated as a nonlinear integer programming problem whose decision variables are the numbers of VNF instances of each type. In practical deployment, however, it cannot be treated as a pure optimization problem. First, deploying a VNF instance requires completing a series of tasks such as instance initialization, state migration, and service chain orchestration; second, each period's capacity adjustment builds on the previous period's deployment, which means a reasonable adjustment scheme should minimize not only the resource overhead but also the deployment overhead, avoiding the jitter caused by frequently adding and deleting instances. Therefore, to ensure stable service quality, the invention proposes a heuristic algorithm, EIMA (evolution automation algorithm), to solve the capacity adjustment problem.
The step of solving, with the EIMA algorithm, the number of virtual network function instances of each type to be deployed in the next time period comprises:
firstly, initializing an original solution space with the EIMA algorithm, denoted R(0);
secondly, according to the deployment x_t of the current time period of the NFV cloud platform and the non-multiplexed deployment plan x̃_{t+1} of the next time period, the EIMA algorithm limits the solution space to a certain range, which accelerates convergence and reduces the difference between the final optimal solution and the current deployment; wherein the VNF instance deployment x_t of the current time period can be obtained from data such as the operation logs collected at the beginning, and the non-multiplexed deployment plan x̃_{t+1} of the next time period is obtained by calculating, independently for each tenant, the number of VNF instances of each type required on that tenant's subscribed service chain, and then summing these per-chain counts into the number of VNF instances of each type required in the cloud platform;
and then performing the iterations of the algorithm, wherein the original solution space R(0) is updated multiple times during the iterative process until convergence, yielding the optimal solution, namely the number of virtual network function instances of each type to be deployed in the next time period. During the iterations, each solution in the solution space is evaluated: the resource overhead of each solution satisfying the constraints is computed to measure its quality, solutions violating the constraints are assigned an infinite value, and the next generation of the solution space is derived from the high-quality solutions of the current generation that meet the preset resource overhead condition, so that the optimal solution satisfying the constraints is screened out generation by generation. The algorithm uses the current deployment of the NFV cloud platform and the non-multiplexed deployment to define an upper bound on the solution space and shrink it, so it converges faster than a generic evolutionary algorithm.
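The bounded-search iteration described above can be sketched roughly as follows. The population size, mutation scheme, and the `feasible`/`cost` callbacks are illustrative assumptions, since the patent does not spell out the EIMA operators.

```python
import random

def eima(x_t, x_nomux, feasible, cost, pop=30, gens=50, seed=0):
    """Heuristic sketch of the EIMA capacity-adjustment search.
    x_t: current per-type instance counts; x_nomux: next-period counts
    computed without multiplexing. Solutions are drawn between element-wise
    bounds derived from both, infeasible solutions get infinite cost, and
    the better half of each generation seeds the next one."""
    rng = random.Random(seed)
    lo = [min(a, b) for a, b in zip(x_t, x_nomux)]
    hi = [max(a, b) for a, b in zip(x_t, x_nomux)]

    def rand_sol():
        return [rng.randint(l, h) for l, h in zip(lo, hi)]

    def score(x):
        # constraint violation -> infinite value, as in the description
        return cost(x) if feasible(x) else float("inf")

    space = [rand_sol() for _ in range(pop)]
    for _ in range(gens):
        space.sort(key=score)
        elite = space[: pop // 2]
        # next generation: keep elites, mutate random elites within the bounds
        space = elite + [
            [max(l, min(h, g + rng.choice((-1, 0, 1))))
             for g, l, h in zip(rng.choice(elite), lo, hi)]
            for _ in range(pop - len(elite))
        ]
    return min(space, key=score)
```

In practice `feasible` would check the predicted per-chain delays against tenant delay requirements, and `cost` would measure resource overhead.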
After the number of VNF instances required for the next time period has been calculated, it is known whether capacity expansion or capacity reduction is needed. In a capacity reduction scenario, it must be decided which VNF instances should be deleted; in a capacity expansion scenario, it must be planned which traffic should be forwarded to the newly added virtual network function instances for processing. Since VNF instance deployment is closely related to routing rules, the number of decision variables in the routing-cost minimization problem is affected by the VNF instance scheduling algorithm. That is, for a particular service chain, using more routing paths means more routing rules need to be updated and more overhead is incurred for VNF instance state synchronization. Therefore, the present invention proposes a comprehensive algorithm, DCRO (Digital and Queue Routing Optimization), combining instance scheduling and a traffic forwarding policy to solve the deployment problem.
The step of making the dynamic capacity expansion and reduction decision with the DCRO algorithm includes:
the DCRO algorithm divides the virtual network function instances of each type deployed in the NFV cloud platform into a permanent type and a temporary type according to their lifetime; in a capacity reduction scenario, a corresponding number of temporary virtual network function instances are preferentially deleted, and the DCRO algorithm solves the objective function to obtain the traffic forwarding rules that minimize the sum of the traffic forwarding path delays of all tenants; in a capacity expansion scenario, the DCRO algorithm solves the objective function to obtain the traffic forwarding rules and deployment positions that minimize the sum of the traffic forwarding path delays of all tenants. This scheme optimizes the deployment strategy of VNF instances: by classifying deployed VNF instances as permanent or temporary, the NFV cloud platform maintains a relatively stable state and does not oscillate across multiple capacity expansion/reduction operations.
For each VNF instance type, the DCRO algorithm maintains a logical stack to record the deployment information of each VNF instance in the cloud platform. When a new VNF instance is initialized, its instance ID is pushed onto the corresponding logical stack; when an instance ID is removed from the stack, its corresponding VNF instance is recycled by the system. In addition, the DCRO algorithm uses a dictionary (Map) data structure to store the relation between VNF instances and tenants, so that when a stack changes, the affected tenant traffic can be identified, the routing updated, and each tenant's traffic kept flowing. For a capacity expansion, the types and numbers of VNF instances to be added are converted into instructions and sent to the virtual machine hypervisor of the NFV cloud platform, which then starts and initializes the new VNF instances. After path optimization, the system sends the new routing rules to the SDN controller, which updates the routing tables of the SDN switches, thereby realizing dynamic scaling under delay guarantees.
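A minimal sketch of the stack-plus-dictionary bookkeeping described above; the class and method names are invented for illustration, not taken from the patent.

```python
from collections import defaultdict

class InstanceRegistry:
    """Sketch of DCRO's bookkeeping: one logical stack per VNF type records
    deployed instance IDs (the most recently added, i.e. temporary, instances
    are popped first on capacity reduction), and a dictionary maps each
    instance to the tenants whose traffic it carries, so that routing
    updates can be scoped to the affected tenants."""

    def __init__(self):
        self.stacks = defaultdict(list)     # vnf_type -> [instance_id, ...]
        self.tenants_of = defaultdict(set)  # instance_id -> {tenant, ...}

    def scale_out(self, vnf_type, instance_id, tenants):
        """Record a newly initialized instance and the tenants it serves."""
        self.stacks[vnf_type].append(instance_id)
        self.tenants_of[instance_id] |= set(tenants)

    def scale_in(self, vnf_type):
        """Remove the most recently added instance of a type; returns
        (instance_id, affected_tenants) so routing can be updated."""
        instance_id = self.stacks[vnf_type].pop()
        return instance_id, self.tenants_of.pop(instance_id)
```

Popping from the top of the stack naturally reclaims the temporary instances first, which matches the permanent/temporary classification above.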
Corresponding to the foregoing method, the embodiment of the present invention provides a delay guaranteed NFV cloud platform dynamic scaling system, and with reference to fig. 2, the delay guaranteed NFV cloud platform dynamic scaling system includes:
the monitoring program is used for collecting network configuration information, tenant configuration information and operation logs of the NFV cloud platform; the monitoring program adopts a multi-mode, multi-frequency data collection method, including reading the routing tables of switches, monitoring the traffic arriving at the gateway switch, and organizing tenant configuration information; these data are stored in a database through the unified data storage interface provided by the invention, and different data are updated at different frequencies to ensure their timeliness;
the analysis program is used for predicting the average arrival rate of the data packet of each tenant in the next time period by using a logarithmic linear Poisson autoregressive model according to the network configuration information, the tenant configuration information and the running log of the NFV cloud platform collected by the monitoring program so as to predict the flow, and analyzing the average processing delay of the data packet of each service chain by using a classified Jackson queuing network model based on the average arrival rate of the data packet of each tenant in the next time period;
the planning program is used for carrying out dynamic capacity expansion and reduction decision-making of the NFV cloud platform by taking the average processing delay of the data packet of each service chain as a constraint condition according to the flow prediction result obtained by the analysis program, wherein the decision-making information comprises the deployment number and the deployment position of various virtual network function instances and a flow forwarding rule;
and the execution program is used for translating the decision information obtained by the planning program into instructions and sending them respectively to an SDN controller and a controller of the NFV cloud platform to execute the capacity expansion and reduction operations. Since the computation result obtained by the planning program cannot be used directly for actual deployment, the execution program is required to translate the deployment scheme into machine instructions and interact with the SDN controller and the virtual machine manager, so as to actually implement the scaling deployment scheme.
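Tying the four components together, one control epoch of the monitor/analyze/plan/execute pipeline might look like the following sketch, where all four component interfaces are assumed rather than specified by the patent.

```python
def scaling_epoch(monitor, analyzer, planner, executor):
    """One control epoch of the monitor -> analyze -> plan -> execute
    pipeline; each argument is an object exposing the assumed interface."""
    snapshot = monitor.collect()                      # configs + operation logs
    rates = analyzer.predict_arrival_rates(snapshot)  # log-linear Poisson AR
    delays = analyzer.chain_delays(snapshot, rates)   # classified Jackson model
    plan = planner.decide(snapshot, rates, delays)    # counts, placement, routes
    executor.apply(plan)                              # SDN controller + hypervisor
```

Running this function once per time period mirrors the per-period capacity adjustment described in the method.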
The foregoing is a more detailed description of the invention in connection with specific preferred embodiments, and the invention is not to be considered limited to these specific details. For those skilled in the art to which the invention pertains, several equivalent substitutions or obvious modifications can be made without departing from the spirit of the invention, and all such substitutions and modifications shall be considered to fall within the scope of the invention.

Claims (10)

1. A delay-guaranteed NFV cloud platform dynamic capacity expansion and reduction method is characterized by comprising the following steps:
collecting network configuration information, tenant configuration information and running logs of the NFV cloud platform;
according to the collected network configuration information, tenant configuration information and operation logs of the NFV cloud platform, predicting the average arrival rate of data packets of each tenant in the next time period by using a logarithmic linear Poisson autoregressive model to perform flow prediction, and analyzing the average processing delay of the data packets of each service chain by using a classified Jackson queuing network model based on the average arrival rate of the data packets of each tenant in the next time period;
according to the flow prediction result and the average processing delay of the data packet of each service chain, carrying out dynamic expansion and contraction capacity decision of the NFV cloud platform, wherein the decision information comprises the deployment number and the deployment position of various virtual network function instances and a flow forwarding rule;
and translating the decision information into instructions and respectively sending the instructions to an SDN controller and a controller of the NFV cloud platform to execute the capacity expansion and contraction operation.
2. The delay-guaranteed NFV cloud platform dynamic capacity expansion and reduction method of claim 1, wherein the network configuration information includes traffic forwarding paths; the tenant configuration information includes service chain types and data packet processing delay requirements; and the operation log includes the number of tenant data packets per unit time on the NFV cloud platform gateway, and the deployment types, numbers, deployment positions and running times of the virtual network function instances in the NFV cloud platform.
3. The delay-guaranteed NFV cloud platform dynamic scaling method of claim 1, wherein the step of predicting the average arrival rate of packets for each tenant at a next time period using a log-linear Poisson autoregressive model comprises:
1) selecting reference historical data according to historical data of the number of tenant data packets in unit time on the NFV cloud platform gateway in the operation log and a periodic rule of flow, wherein the reference historical data comprises the number of data packets in a selected historical time period;
2) substituting the reference historical data into an initial log-linear Poisson autoregressive model, and estimating parameters corresponding to each reference historical data by using a maximum likelihood method, thereby establishing a log-linear Poisson autoregressive model for predicting the average arrival rate of data packets of each tenant in the next time period;
3) and predicting the average arrival rate of the data packets of each tenant in the next period by using the model established in the step 2).
4. The NFV cloud platform dynamic scaling method for delay guarantees of claim 3, wherein the estimating the parameter corresponding to each of the reference historical data by using a maximum likelihood method comprises:
first, a log-linear Poisson autoregressive model is used
g(λ_t) = α_0 + Σ_{i=1}^{p} α_i · g(A_{t-i} + 1) + Σ_{j=1}^{q} β_j · g(λ_{t-j})
wherein λ_t denotes the average packet arrival rate in time period t, which depends on the packet counts A_{t-i} of past periods and on its own historical values λ_{t-j}; g(·) is a logarithmic link function; i indexes the selected historical packet-count records and j indexes the selected historical average-arrival-rate records; the parameter to be estimated is θ = (α_0, α_1, …, α_p, β_1, …, β_q), subject to the following constraint:
Σ_{i=1}^{p} |α_i| + Σ_{j=1}^{q} |β_j| < 1
then, the parameter θ′ is estimated by the maximum likelihood method, with likelihood function

L(θ) = ∏_{t=1}^{T} p_t(A_t; θ)
wherein θ′ is the transpose of θ; according to the probability mass function of the Poisson distribution, it can be obtained that:
L(θ) = ∏_{t=1}^{T} e^{−λ_t(θ)} · λ_t(θ)^{A_t} / A_t!
by maximizing the likelihood function, the parameter θ' can be estimated.
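An illustrative sketch of fitting such a log-linear Poisson autoregression by maximum likelihood: it uses log(A+1) as the link of the observation term and a generic numerical optimizer. The warm-up handling, optimizer choice, and function names are assumptions for illustration, not the patent's procedure.

```python
import numpy as np
from scipy.optimize import minimize

def fit_loglinear_poisson_ar(A, p=1, q=1):
    """Maximum-likelihood sketch of a log-linear Poisson autoregression:
    log(lam_t) = a0 + sum_i a_i*log(1 + A_{t-i}) + sum_j b_j*log(lam_{t-j})."""
    A = np.asarray(A, dtype=float)

    def lam_series(theta):
        a0, a, b = theta[0], theta[1:1 + p], theta[1 + p:]
        log_lam = np.zeros(len(A))
        log_lam[:max(p, q)] = np.log(A[:max(p, q)] + 1)  # warm-up (assumed)
        for t in range(max(p, q), len(A)):
            log_lam[t] = (a0
                          + a @ np.log(A[t - p:t][::-1] + 1)
                          + b @ log_lam[t - q:t][::-1])
        return np.exp(log_lam)

    def neg_loglik(theta):
        lam = lam_series(theta)
        # Poisson log-likelihood up to the log(A_t!) term, constant in theta
        return -(A * np.log(lam) - lam).sum()

    theta0 = np.zeros(1 + p + q)
    res = minimize(neg_loglik, theta0, method="Nelder-Mead")
    return res.x, lam_series(res.x)
```

The fitted λ-series for the last period can then be rolled one step forward to predict the next period's average arrival rate.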
5. The delay-guaranteed NFV cloud platform dynamic capacity expansion and reduction method of claim 1, wherein the step of analyzing the average packet processing delay of each service chain using a classified Jackson queuing network model, based on the average packet arrival rate of each tenant in the next time period, comprises:
1) using a logic matrix F to represent the virtual network function instance types on the service chain of each tenant in the NFV cloud platform; the logic matrix F has m rows and n columns, where m is the number of tenants and n is the number of virtual network function instance types; if the j-th type of virtual network function instance is used on the service chain of tenant i, the corresponding element F[i][j] of the logic matrix F is 1, and otherwise F[i][j] is 0;
2) respectively calculating the average processing time length of the data packet of each type of virtual network function instance in the NFV cloud platform according to the tenant flow and the use condition of the virtual network function instance, and recording the average processing time length in a vector d;
3) and calculating the matrix product F.d, wherein the obtained vector is the average processing delay of the data packet of the service chain of the m tenants.
6. The NFV cloud platform dynamic capacity expansion and reduction method for delay guarantee according to claim 1, wherein an EIMA algorithm is adopted to solve the number of various types of virtual network function instances to be deployed in the next time period according to a traffic prediction result and using an average processing delay of a data packet of each service chain as a constraint condition.
7. The NFV cloud platform dynamic scaling method for delay guarantees according to claim 6, wherein the step of solving the number of various types of virtual network function instances to be deployed in the next time period using the EIMA algorithm includes:
firstly, initializing an original solution space with the EIMA algorithm, denoted R(0);
secondly, according to the deployment x_t of the current time period of the NFV cloud platform and the non-multiplexed deployment plan x̃_{t+1} of the next time period, limiting the solution space to a certain range with the EIMA algorithm, so as to accelerate convergence while reducing the difference between the final optimal solution and the current deployment;
then, performing the iterations of the algorithm, wherein the original solution space R(0) is updated multiple times during the iterative process until convergence, yielding the optimal solution, namely the number of virtual network function instances of each type to be deployed in the next time period;
during the iterations, each solution in the solution space is evaluated: the resource overhead of each solution satisfying the constraints is computed to measure its quality, solutions violating the constraints are assigned an infinite value, and the next generation of the solution space is derived from the high-quality solutions of the current generation that meet the preset resource overhead condition, so that the optimal solution satisfying the constraints is screened out generation by generation.
8. The delay-guaranteed NFV cloud platform dynamic capacity expansion and reduction method of claim 7, wherein after the number of virtual network function instances of each type to be deployed in the next time period has been calculated, a DCRO algorithm is adopted to make the dynamic capacity expansion and reduction decision according to the traffic prediction result and the average packet processing delay of each service chain, namely deciding which virtual network function instances need to be deleted in a capacity reduction scenario, and deciding which traffic is forwarded to the newly added virtual network function instances for processing in a capacity expansion scenario.
9. The NFV cloud platform dynamic scaling method for delay guarantee of claim 8, wherein the step of using DCRO algorithm to make dynamic scaling decision comprises:
the DCRO algorithm divides various virtual network function instances deployed in the NFV cloud platform into a permanent type and a temporary type according to the existence duration;
under a capacity reduction scene, preferentially deleting a corresponding number of temporary virtual network function instances, and solving an objective function by using a DCRO algorithm to obtain a traffic forwarding rule which enables the sum of traffic forwarding path delays of all tenants to be the lowest;
under the capacity expansion scene, the DCRO algorithm is utilized to solve the objective function, and the traffic forwarding rule and the deployment position which enable the sum of the traffic forwarding path delays of all tenants to be the lowest are obtained.
10. A delay-guaranteed NFV cloud platform dynamic scaling system, comprising:
the monitoring program is used for collecting network configuration information, tenant configuration information and operation logs of the NFV cloud platform;
the analysis program is used for predicting the average arrival rate of the data packet of each tenant in the next time period by using a logarithmic linear Poisson autoregressive model according to the network configuration information, the tenant configuration information and the running log of the NFV cloud platform collected by the monitoring program so as to predict the flow, and analyzing the average processing delay of the data packet of each service chain by using a classified Jackson queuing network model based on the average arrival rate of the data packet of each tenant in the next time period;
the planning program is used for carrying out dynamic capacity expansion and reduction decision-making of the NFV cloud platform by taking the average processing delay of the data packet of each service chain as a constraint condition according to the flow prediction result obtained by the analysis program, wherein the decision-making information comprises the deployment number and the deployment position of various virtual network function instances and a flow forwarding rule;
and the execution program is used for translating the decision information obtained by the planning program into instructions and respectively sending the instructions to an SDN controller and a controller of the NFV cloud platform to execute the capacity expansion and contraction operation.
CN201910199568.3A 2019-03-15 2019-03-15 Delay-guaranteed NFV cloud platform dynamic capacity expansion and contraction method and system Active CN109995583B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910199568.3A CN109995583B (en) 2019-03-15 2019-03-15 Delay-guaranteed NFV cloud platform dynamic capacity expansion and contraction method and system


Publications (2)

Publication Number Publication Date
CN109995583A CN109995583A (en) 2019-07-09
CN109995583B (en) 2021-08-06

Family

ID=67129348

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910199568.3A Active CN109995583B (en) 2019-03-15 2019-03-15 Delay-guaranteed NFV cloud platform dynamic capacity expansion and contraction method and system

Country Status (1)

Country Link
CN (1) CN109995583B (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105471611A (en) * 2014-09-05 2016-04-06 中兴通讯股份有限公司 Processing method, device and system for providing user service
CN106559207A (en) * 2015-09-25 2017-04-05 英特尔Ip公司 The method of mobile terminal device, mobile processing circuit and process signal
CN107124303A (en) * 2017-04-19 2017-09-01 电子科技大学 The service chaining optimization method of low transmission time delay

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10944664B2 (en) * 2016-09-01 2021-03-09 Nokia Of America Corporation Estimating bandwidth in a heterogeneous wireless communication system


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Optimal Decision Making for Big Data Processing at Edge-Cloud Environment: An SDN Perspective; Gagangeet Singh Aujla, Neeraj Kumar, Albert Y. Zomaya et al.; IEEE Transactions on Industrial Informatics; 20170811; full text *
Analysis of Big Data Applications in NFV Scaling Scenarios of the Mobile Core Network; Lin Qingyang, Long Biao; Guangdong Communication Technology; 20151215; full text *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant