CN110971451B - NFV resource allocation method - Google Patents


Info

Publication number: CN110971451B (grant of application CN201911108149.0A)
Authority: CN (China)
Prior art keywords: average, data packet, physical, rate, delay
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN110971451A (application publication)
Inventors: 张正文, 钟成, 郭少勇, 贺文晨, 马涛, 马慧卓, 胡杏
Assignees: Xiongan New Area Power Supply Company State Grid Hebei Electric Power Co; Beijing University of Posts and Telecommunications
Application filed by Xiongan New Area Power Supply Company State Grid Hebei Electric Power Co and Beijing University of Posts and Telecommunications
Priority to CN201911108149.0A

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/145: Network analysis or design involving simulating, designing, planning or modelling of a network
    • H04L 12/4641: Virtual LANs, VLANs, e.g. virtual private networks [VPN]
    • H04L 41/0893: Assignment of logical groups to network elements
    • H04L 41/0896: Bandwidth or capacity management, i.e. automatically increasing or decreasing capacities

Abstract

The invention provides an NFV resource allocation method, which comprises the following steps: modeling an NFV network, constructing a network model, modeling traffic in the NFV network, and constructing a traffic model; modeling the processing process and the transmission process of the data packet in the flow according to the network model and the flow model, and modeling the delay of the data packet in the processing process and the transmission process according to the modeling results of the processing process and the transmission process; and solving the modeling result of the delay according to a preset constraint condition and a utility function, and obtaining a resource allocation result of the flow so as to minimize the delay. The invention models the time delay in the processing process and the transmission process based on the queuing theory to evaluate the end-to-end delay when the data packet passes through the embedded NFV chain, and defines the utility function related to the delay as the environment feedback of the allocation strategy, thereby obtaining the resource allocation result with the minimum time delay.

Description

NFV resource allocation method
Technical Field
The invention belongs to the technical field of network function virtualization, and particularly relates to an NFV resource allocation method.
Background
With the development of resource virtualization, Network Functions Virtualization (NFV) plays a central role in the evolution of network infrastructure. NFV decouples Network Functions (NFs) from proprietary hardware devices and allows Virtual Network Functions (VNFs) to be flexibly configured on top of a shared physical infrastructure. A set of VNFs linked together through virtual links forms a Service Function Chain (SFC). By scaling the VNFs in an SFC, NFV supports more flexible resource allocation among services.
Due to the rapid growth of delay-sensitive services such as audio, video and gaming, networks need to guarantee good delay performance in order to provide reliable services. The resource allocation policy directly affects network delay. For an embedded VNF chain, the way CPU (Central Processing Unit) and bandwidth resources are shared among multiple flows determines the processing and transmission rates, and thus affects the average E2E (End-to-End) packet delay. Minimum Delay (MD) is therefore an important problem in NFV resource allocation. However, due to the complexity of modern networks and the random arrival of service requests, the network state and traffic are difficult to model accurately, and resource allocation in NFV networks remains a challenge. A dynamic, adaptive allocation method is therefore needed to handle changes in the network and in service requests.
The Deep Reinforcement Learning (DRL) framework has proven highly effective for such dynamic problems, since it can determine the best strategy from experience gained by interacting with the environment. Under a DRL-based resource allocation framework, an end-to-end delay function can be used as environmental feedback to obtain a resource allocation strategy with optimal delay performance. However, since the resource allocation space is continuous, basic DRL techniques do not apply directly. Although an advanced method, Asynchronous Advantage Actor-Critic (A3C), has been proposed for continuous control, it still has drawbacks in learning speed and robustness.
Disclosure of Invention
In order to overcome, or at least partially solve, the above problems, namely that the network state and traffic are difficult to model when minimum delay is considered in NFV resource allocation, and that the dynamic allocation problem suffers from slow learning speed and poor robustness, embodiments of the present invention provide an NFV resource allocation method.
The embodiment of the invention provides an NFV resource allocation method, which comprises the following steps:
modeling an NFV network, constructing a network model, modeling traffic in the NFV network, and constructing a traffic model;
modeling the processing process and the transmission process of the data packet in the flow according to the network model and the flow model, and modeling the delay of the data packet in the processing process and the transmission process according to the modeling results of the processing process and the transmission process;
and solving the modeling result of the delay according to a preset constraint condition and a utility function, and acquiring a resource allocation result of the flow so as to minimize the delay.
Preferably, the NFV network is modeled, and the step of constructing the network model includes:
representing the NFV network by an undirected graph;
wherein, the physical nodes in the NFV network are taken as the vertexes of the undirected graph;
taking the total amount of resources used by each physical node for processing the data packet as data corresponding to the vertex;
connections between two physical nodes that the traffic passes through in succession are used as edges of the undirected graph;
and taking the total amount of resources for transmitting flow between the two physical nodes connected by the connecting line as data corresponding to the edge of the undirected graph.
Preferably, traffic in the NFV network is modeled, and the step of constructing the traffic model includes:
and modeling a process that the data packet in the flow reaches a physical node and a physical link in the NFV network as a time-varying Poisson process.
Preferably, the step of modeling the processing procedure and the transmission procedure of the data packet in the traffic according to the network model and the traffic model comprises:
acquiring the average processing rate of a physical node where each network function is located in target deployment to the data packet and the average arrival rate of the data packet to the physical node;
and acquiring the average transmission rate of the physical link between the two adjacent physical nodes where the network functions are located to the data packet and the average arrival rate of the data packet to the physical link.
Preferably, the step of obtaining an average processing rate of the physical node to which each network function in the target deployment is located to the data packet and an average arrival rate of the data packet to the physical node includes:
acquiring computing resources distributed to the flow by the physical node at any moment according to the CPU resource rate distributed to the flow by the physical node at any moment and available resources on the physical node;
acquiring the processing rate of the physical node to the data packet at any moment according to the computing resource distributed to the flow by the physical node at any moment and the processing rate of the computing resource in unit time;
taking the average value of the processing rates of the physical nodes to the data packets in a preset time period as the average processing rate;
and acquiring the average rate at which the data packet reaches the physical node at any moment, and taking the average value of all the average rates in the preset time period as the average arrival rate.
Preferably, the step of obtaining an average transmission rate of the data packet by a physical link between two physical nodes where the network function is located and an average arrival rate of the data packet to the physical link includes:
acquiring transmission resources allocated to the traffic by the physical link at any moment according to the link bandwidth ratio allocated to the traffic by the physical link at any moment and the available resources of the physical link at any moment;
acquiring the transmission rate of the physical link to the data packet at any moment according to the transmission resource allocated to the flow at any moment by the physical link and the transmission rate of the unit transmission resource in unit time;
taking the average value of the transmission rate of the physical link to the data packet in a preset time period as the average transmission rate;
and acquiring the average rate at which the data packet reaches the physical link at any moment, and taking the average value of all the average rates in the preset time period as the average arrival rate.
Preferably, the step of modeling the delay of the data packet in the processing procedure and the transmission procedure according to the modeling result of the processing procedure and the transmission procedure comprises:
judging whether queuing delay exists on the physical node or not according to the average processing rate of the physical node and the average transmission rate of the physical link immediately before the physical node;
if the queuing delay exists, acquiring the average waiting time of the data packet before the data packet is processed according to the average processing rate and the average arrival rate of the physical node;
acquiring the average delay of the data packet on the physical node according to the average waiting time of the data packet before processing the data packet and the average processing rate;
judging whether queuing delay exists on the physical link according to the average transmission rate of the physical link and the average processing rate of the physical node immediately before the physical link;
if yes, obtaining the average waiting time of the data packet before transmitting the data packet according to the average transmission rate and the average arrival rate of the physical link;
acquiring the average delay of the data packet on the physical link according to the average waiting time of the data packet before the data packet is transmitted and the average transmission rate;
and acquiring the final average delay of the data packet according to the average delay of the data packet on the physical node and the physical link.
Preferably, the average waiting time of the data packet before the data packet is processed is obtained according to the average processing rate and the average arrival rate of the physical node by the following formula:

W_{Q(f,k)} = \frac{\bar{\lambda}_{Q(f,k)}(t)}{2\bar{\mu}_{Q(f,k)}(t)\left(\bar{\mu}_{Q(f,k)}(t)-\bar{\lambda}_{Q(f,k)}(t)\right)}

wherein W_{Q(f,k)} is the average waiting time of the data packet before the physical node Q(f,k), on which the kth network function in the target deployment f is located, processes the data packet, \bar{\lambda}_{Q(f,k)}(t) represents the average arrival rate of the physical node, \bar{\mu}_{Q(f,k)}(t) represents the average processing rate of the physical node, t represents the arbitrary time, and \bar{\mu}_{P(f,k-1)}(t) represents the average transmission rate of the physical link between the physical nodes where the (k-1)th network function and the kth network function in the target deployment f are located;

obtaining the average delay of the data packet on the physical node according to the average waiting time of the data packet before the data packet is processed and the average processing rate by the following formula:

d_{Q(f,k)}(t) = \frac{1}{\bar{\mu}_{Q(f,k)}(t)} + W_{Q(f,k)}

wherein d_{Q(f,k)}(t) represents the average delay of a data packet on the physical node;

obtaining the average waiting time of the data packet before the data packet is transmitted according to the average transmission rate and the average arrival rate of the physical link by the following formula:

W_{P(f,k)} = \frac{\bar{\lambda}_{P(f,k)}(t)}{2\bar{\mu}_{P(f,k)}(t)\left(\bar{\mu}_{P(f,k)}(t)-\bar{\lambda}_{P(f,k)}(t)\right)}

wherein W_{P(f,k)} represents the average waiting time of the data packet before the physical link P(f,k) between the physical nodes where the kth network function and the (k+1)th network function in the target deployment f are located transmits the data packet, \bar{\lambda}_{P(f,k)}(t) represents the average arrival rate of the physical link, and \bar{\mu}_{P(f,k)}(t) represents the average transmission rate of the physical link;

obtaining the average delay of the data packet on the physical link according to the average waiting time of the data packet before the data packet is transmitted and the average transmission rate by the following formula:

d_{P(f,k)}(t) = \frac{|P(f,k)|+1}{\bar{\mu}_{P(f,k)}(t)} + W_{P(f,k)}

wherein d_{P(f,k)}(t) is the average delay of a data packet on the physical link;

and obtaining the final average delay of the data packet according to the average delays of the data packet on the physical node and the physical link by the following formula:

d_f(t) = \sum_{k} \left( d_{Q(f,k)}(t) + d_{P(f,k)}(t) \right)

wherein d_f(t) is the final average delay of the data packet.
Preferably, the utility function is:
U_{\alpha_f}\left(d_f(t)\right) = \frac{d_f(t)^{1-\alpha_f}}{1-\alpha_f}

wherein \alpha_f is the adjustment parameter of the traffic f.
Preferably, the preset constraint condition includes:
0 \le h_{Q(f,k)}(t) \le 1;
0 \le h_{P(f,k)}(t) \le 1;
\sum_{f \in F(n_z)} h_{Q(f,k)}(t) \le 1;
\sum_{f \in F(e_{ij})} h_{P(f,k)}(t) \le 1;
B_{Q(f,k)}(0) = 0;
B_{P(f,k)}(0) = 0;
\lim_{T \to \infty} E\left[B_{Q(f,k)}(T)\right]/T = 0;
\lim_{T \to \infty} E\left[B_{P(f,k)}(T)\right]/T = 0;
wherein h_{Q(f,k)}(t) is the CPU resource rate allocated to the traffic by the physical node Q(f,k) at time t, h_{P(f,k)}(t) is the link bandwidth ratio allocated to the traffic by the physical link P(f,k) at time t, F(n_z) represents the set of all traffic passing through the z-th physical node n_z in the NFV network, F(e_{ij}) represents the set of all traffic passing through the physical link e_{ij}, B_{Q(f,k)}(0) represents the number of data packets waiting to be processed on the physical node Q(f,k) at time 0, B_{P(f,k)}(0) represents the number of data packets waiting to be transmitted on the physical link P(f,k) at time 0, T is the end time of the preset time period, B_{Q(f,k)}(t) represents the number of data packets waiting to be processed on the physical node Q(f,k) at time t, and E represents the averaging function.
The embodiment of the invention provides an NFV resource allocation method, which models an NFV network and flow, models a processing process and a transmission process of a data packet in the flow, models time delay in the processing process and the transmission process based on a queuing theory to evaluate end-to-end delay of the data packet when the data packet passes through an embedded NFV chain, and defines a utility function related to the delay as environmental feedback of an allocation strategy, thereby obtaining a resource allocation result with minimum time delay according to a constraint condition and the utility function.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present invention, and other drawings can be obtained from these drawings by those skilled in the art without creative effort.
Fig. 1 is a schematic overall flow chart of an NFV resource allocation method according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of solving a problem of minimum delay in the NFV resource allocation method according to the embodiment of the present invention;
fig. 3 is a schematic view of an overall structure of an electronic device according to an embodiment of the present invention.
Detailed Description
In an embodiment of the present invention, an NFV resource allocation method is provided, and fig. 1 is a schematic overall flow chart of the NFV resource allocation method provided in the embodiment of the present invention, where the method includes: S101, modeling an NFV network, constructing a network model, modeling traffic in the NFV network, and constructing a traffic model;
The traffic is processed and transmitted in the form of data packets; the NFV network includes a plurality of physical nodes, and one or more VNF instances may be deployed on each physical node. In the present application, the network model is constructed according to the data and the organization form of the physical nodes in the NFV network, and the traffic model is constructed according to the attributes of the data packets in the traffic. The present embodiment does not limit the way in which the network model and the traffic model are constructed.
S102, modeling the processing process and the transmission process of the data packet in the flow according to the network model and the flow model, and modeling the delay of the data packet in the processing process and the transmission process according to the modeling results of the processing process and the transmission process;
The data packets are transmitted in sequence to the physical nodes on which the VNFs are deployed, according to the processing order of the VNFs in the NFV network. After a data packet arrives at a physical node, the VNF on that physical node processes the data packet, and the data packet is then transmitted along the path between this physical node and the physical node where the next VNF is deployed.
The embodiment models the processing process and the transmission process of the data packet, so as to obtain the processing condition and the transmission condition of the data packet. According to the processing condition and the transmission condition, the time delay of the data packet in the processing and transmission processes can be obtained, and the time delay of the data packet in the two processes is modeled.
S103, solving the modeling result of the delay according to a preset constraint condition and a utility function, and obtaining a resource allocation result of the flow so as to minimize the delay.
The goal of resource allocation in this embodiment is to minimize the delay of the entire network while ensuring fairness between flows, thereby constructing an optimization problem model of resource allocation. In solving the problem of minimizing delay, some constraints also need to be considered.
When performing resource allocation, this embodiment adopts an A3C-based resource allocation framework, which can dynamically sense and model the network and the services. It contains two main components: the observed environment and the agent. The agent interacts with the environment to obtain samples of the environment state, which are used to update the global network parameters. These parameters in turn affect the exploration strategy of the agent. Through this iterative learning process, the A3C-based algorithm realizes the optimal resource allocation strategy.
The environment of the NFV resource allocation problem includes three parts, namely the service provider, the NFV network, and the clients. The NFV network provides the network infrastructure and supports a virtualized controller that places VNFs on it and adjusts the resources allocated to them. The service provider provides network services to the clients, who accept the services and feed back quality-of-service information such as network latency. The agent learns the best policy by interacting with the environment. In the context of the NFV resource allocation problem, the virtualized controller can be regarded as the agent, responsible for collecting network information, making decisions, and taking actions.
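As an illustration of this agent-environment loop, the sketch below shows how the interaction could be organized in Python; the class and method names (NFVEnvironment, VirtualizedController, step) are hypothetical and are not taken from the patent.

```python
# Minimal sketch of the agent-environment interaction described above; all names and
# numeric placeholders are illustrative assumptions, not the patent's implementation.
import random

class NFVEnvironment:
    """Stands in for the service provider, the NFV network and the clients."""
    def collect_network_information(self):
        return {"queues": [], "arrival_rates": []}           # placeholder state

    def step(self, allocation):
        delay = random.uniform(1.0, 10.0)                    # placeholder delay feedback
        reward = -delay                                      # lower delay -> higher reward
        return self.collect_network_information(), reward

class VirtualizedController:
    """The agent: collects network information, makes decisions, takes actions."""
    def decide(self, state):
        return {"h_Q": 0.5, "h_P": 0.5}                      # a real agent queries its policy network

env, agent = NFVEnvironment(), VirtualizedController()
state = env.collect_network_information()
for _ in range(3):
    state, reward = env.step(agent.decide(state))
```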
In the embodiment, the NFV network and the traffic are modeled, the processing process and the transmission process of the data packet in the traffic are modeled, then the delay in the processing process and the transmission process is modeled based on the queuing theory, so as to evaluate the end-to-end delay when the data packet passes through the embedded NFV chain, and the utility function related to the delay is defined as the environmental feedback of the allocation strategy, so that the resource allocation result with the minimum delay is obtained according to the constraint condition and the utility function.
On the basis of the above embodiment, in this embodiment, the NFV network is modeled and the network model is constructed by: representing the NFV network by an undirected graph, wherein the physical nodes in the NFV network are taken as the vertices of the undirected graph; the total amount of resources each physical node uses for processing data packets is taken as the data corresponding to that vertex; the connections between two physical nodes that the traffic passes through in succession are taken as the edges of the undirected graph; and the total amount of resources for transmitting traffic between the two physical nodes connected by an edge is taken as the data corresponding to that edge.
Here, in the present embodiment an undirected graph G = (N, E) is used to represent the NFV network, where N represents the set of physical nodes in the NFV network. n_z denotes the z-th physical node in the NFV network, and C_{n_z} denotes the total amount of resources this node uses to process the data packets in the traffic. E denotes the set of edges in the NFV network, with e_{ij} ∈ E. e_{ij} represents the connection between physical node n_i and physical node n_j, and C_{e_{ij}} represents the total amount of resources that edge e_{ij} uses to transmit traffic.
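For illustration only, a minimal Python sketch of this undirected-graph model follows; the dictionary layout and the names node_capacity and edge_capacity are assumptions rather than the patent's data structures.

```python
# Sketch of the network model G = (N, E): per-node processing capacity C_n and
# per-edge transmission capacity C_e. Values are arbitrary illustrations.
N = ["n1", "n2", "n3"]                                   # physical nodes
node_capacity = {"n1": 100, "n2": 80, "n3": 60}          # CPU resources per node

E = {frozenset(("n1", "n2")): 300,                       # C_{e_12}
     frozenset(("n2", "n3")): 200}                       # C_{e_23}

def edge_capacity(u, v):
    """Capacity lookup is symmetric because the graph is undirected."""
    return E[frozenset((u, v))]

print(edge_capacity("n2", "n1"))                         # 300
```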
On the basis of the foregoing embodiment, in this embodiment, traffic in the NFV network is modeled, and the step of constructing a traffic model includes: and modeling a process that the data packet in the flow reaches a physical node and a physical link in the NFV network as a time-varying Poisson process.
Specifically, the present embodiment uses F to represent the set of user traffic in the NFV network, and f ∈ F represents a set of data packets that belong to the same service and have the same source and destination. Each f has its own delay requirement, and its normalized maximum tolerable delay is denoted by α_f, whose value lies between 0 and 1. The more sensitive f is to delay, the smaller the value of α_f. F(n_z) represents the set of traffic passing through the physical node n_z, and V(f,k) represents the kth network function type that f passes through.
Q(f,k) represents the physical node on which the kth network function of f is deployed. In particular, Q(f,0) ∈ S and Q(f,|f|) ∈ D, where S and D denote the sets of source and destination nodes, respectively. P(f,k) represents the physical path between the physical nodes where the kth and (k+1)th VNFs are located. The |P(f,k)|+1 transmission links contained in this path can be expressed as P(f,k)_1, P(f,k)_2, ..., P(f,k)_{|P(f,k)|+1}.
The embodiment models the process by which the data packets of flow f arrive at the physical nodes and physical links as a time-varying Poisson process. Consider a discrete set of time slots {0, 1, 2, ..., T-1}. At time t, the average rate at which packets of traffic f arrive at the kth network function, which is deployed on Q(f,k), is λ_{Q(f,k)}(t). A_{Q(f,k)}(t) represents the number of packets arriving at Q(f,k) at time t. According to the properties of the Poisson process, A_{Q(f,k)}(t) satisfies:

P\left\{A_{Q(f,k)}(t) = n\right\} = \frac{\left[\lambda_{Q(f,k)}(t)\right]^n}{n!} e^{-\lambda_{Q(f,k)}(t)}, \quad n = 0, 1, 2, \ldots

B_{Q(f,k)}(t) represents the number of packets of flow f queued at Q(f,k). Similarly, the average arrival rate of flow f on P(f,k) is λ_{P(f,k)}(t), and the number of packets of the traffic arriving at the link at time t and the number queued there at that time are denoted A_{P(f,k)}(t) and B_{P(f,k)}(t), respectively.
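The short sketch below illustrates how such time-varying Poisson arrivals can be sampled per slot; the rate profile and the use of NumPy are illustrative assumptions.

```python
# Sketch: sample A(t), the number of packets arriving in slot t, from a Poisson
# distribution whose rate lambda(t) changes from slot to slot.
import numpy as np

T = 10
rng = np.random.default_rng(0)
lam = 5.0 + 2.0 * np.sin(2 * np.pi * np.arange(T) / T)   # time-varying rate lambda(t)
A = rng.poisson(lam)                                     # one arrival count per slot

B = np.zeros(T + 1)                                      # queued packets before any service
for t in range(T):
    B[t + 1] = B[t] + A[t]
print(A, int(B[-1]))
```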
On the basis of the foregoing embodiment, in this embodiment, the step of modeling the processing procedure and the transmission procedure of the data packet in the traffic according to the network model and the traffic model includes: acquiring the average processing rate of a physical node where each network function is located in target deployment to the data packet and the average arrival rate of the data packet to the physical node; and acquiring the average transmission rate of the physical link between the two adjacent physical nodes where the network functions are located to the data packet and the average arrival rate of the data packet to the physical link.
On the basis of the foregoing embodiment, the step of acquiring the average processing rate of the physical node to which each network function in the target deployment is located to the data packet and the average arrival rate of the data packet to the physical node in this embodiment includes: acquiring computing resources distributed to the flow by the physical node at any moment according to the CPU resource rate distributed to the flow by the physical node at any moment and available resources on the physical node;
specifically, when the data of the user arrives at the physical node, the physical node provides the CPU resource to process the data packet, and after the processing is finished, the physical link allocates the bandwidth resource to transmit the data packet. The allocation of resources will affect the processing and transmission rate of the data packets. Since different data packets have different packet structures and sizes and different network function types have different resource requirements.
In order to better describe the resource sharing policy between different flows, the present embodiment defines the resource allocation indices h_{Q(f,k)}(t) and h_{P(f,k)}(t). h_{Q(f,k)}(t) represents the CPU resource rate allocated by physical node Q(f,k) to flow f at time t, and h_{P(f,k)}(t) represents the ratio of link bandwidth that the first link P(f,k)_1 of the physical path P(f,k) allocates to flow f at time t. C_{Q(f,k)} represents all available resources on physical node Q(f,k). According to the resource allocation ratio, the computing resource allocated to flow f by physical node Q(f,k) is c_{Q(f,k)}(t) = C_{Q(f,k)} h_{Q(f,k)}(t).
Acquiring the processing rate of the physical node to the data packet at any moment according to the computing resources allocated to the flow by the physical node at that moment and the processing rate of a unit computing resource in unit time;
This embodiment models the processing of data packets as a deterministic service with fixed, regular intervals. R_{Q(f,k)} is defined as the number of data packets of the kth network function that one unit of resource can process per unit time. The processing rate at physical node Q(f,k) is therefore μ_{Q(f,k)}(t) = R_{Q(f,k)} c_{Q(f,k)}(t).
Taking the average value of the processing rate of the physical node to the data packet in a preset time period as the average processing rate;
wherein the preset time period is from time 0 to time T, and the average processing rate of flow f on physical node Q(f,k) within the preset time period is \bar{\mu}_{Q(f,k)}(t) = E\left[\mu_{Q(f,k)}(t)\right], where E is the averaging function.
And acquiring the average rate at which the data packets reach the physical node at any moment, and taking the average value of all these rates in the preset time period as the average arrival rate;
wherein the average arrival rate of the data packets of flow f at physical node Q(f,k) within the preset time period is \bar{\lambda}_{Q(f,k)}(t) = E\left[\lambda_{Q(f,k)}(t)\right].
On the basis of the foregoing embodiment, in this embodiment, the step of obtaining the average transmission rate of the data packet on the physical link between the two physical nodes where the network functions are located and the average arrival rate of the data packet at the physical link includes: acquiring the transmission resources allocated to the traffic by the physical link at any moment according to the link bandwidth ratio allocated to the traffic by the physical link at that moment and the available resources of the physical link at that moment;
In particular, all available resources on the physical link P(f,k)_1 are denoted C_{P(f,k)_1}. According to the resource allocation ratio, the transmission resource that the physical link allocates to flow f is c_{P(f,k)}(t) = C_{P(f,k)_1} h_{P(f,k)}(t).
Acquiring the transmission rate of the physical link to the data packet at any moment according to the transmission resources allocated to the flow by the physical link at that moment and the transmission rate of a unit transmission resource in unit time;
The present embodiment models the transmission of data packets as a deterministic service with fixed, regular intervals. Bandwidth utilization can only be maximized when all switches and links on the physical path P(f,k) transmit at the same rate. Therefore, only the resource allocation of the first link on P(f,k) needs to be considered when performing resource allocation; the transmission rates on the remaining switches and links are kept consistent with that of P(f,k)_1. R_{P(f,k)} is the link transmission rate, indicating the number of data packets that one unit of resource on P(f,k)_1 can transmit per unit time. The transmission rates on the |P(f,k)|+1 links and the |P(f,k)| switches along the path are therefore identical and can all be calculated as μ_{P(f,k)}(t) = R_{P(f,k)} c_{P(f,k)}(t).
Taking the average value of the transmission rate of the physical link to the data packet in a preset time period as the average transmission rate;
wherein the average transmission rate of flow f on the physical path P(f,k) within the preset time period is \bar{\mu}_{P(f,k)}(t) = E\left[\mu_{P(f,k)}(t)\right].
And acquiring the average rate at which the data packets reach the physical link at any moment, and taking the average value of all these rates in the preset time period as the average arrival rate;
wherein the average arrival rate of the data packets of flow f at the physical link P(f,k) within the preset time period is \bar{\lambda}_{P(f,k)}(t) = E\left[\lambda_{P(f,k)}(t)\right].
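The node-side and link-side quantities defined above can be computed as in the following sketch; the numeric values and the uniform time-averaging are illustrative assumptions.

```python
# Sketch: node side c_Q = C_Q*h_Q, mu_Q = R_Q*c_Q; link side c_P = C_P1*h_P,
# mu_P = R_P*c_P; then time-averages over the preset period [0, T).
import numpy as np

T = 8
lam = np.array([4, 5, 6, 5, 4, 6, 5, 4], dtype=float)   # per-slot packet arrival rates

# Node Q(f,k): CPU capacity, allocation ratio and per-unit processing rate.
C_Q, R_Q = 100.0, 0.2
h_Q = np.full(T, 0.3)
mu_Q = R_Q * (C_Q * h_Q)                 # processing rate in each slot

# First link P(f,k)_1: bandwidth capacity, allocation ratio and per-unit rate.
C_P1, R_P = 300.0, 0.05
h_P = np.full(T, 0.4)
mu_P = R_P * (C_P1 * h_P)                # transmission rate, identical on every hop

# Average rates used by the delay model.
mu_Q_bar, mu_P_bar, lam_bar = mu_Q.mean(), mu_P.mean(), lam.mean()
print(mu_Q_bar, mu_P_bar, lam_bar)       # 6.0 6.0 4.875
```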
On the basis of the foregoing embodiment, in this embodiment, the step of modeling the delay of the data packet in the processing process and the transmission process according to the modeling results of the processing process and the transmission process includes: judging whether queuing delay exists on the physical node or not according to the average processing rate of the physical node and the average transmission rate of the physical link immediately before the physical node; if so, acquiring the average waiting time of the data packet before the data packet is processed according to the average processing rate and the average arrival rate of the physical node; and acquiring the average delay of the data packet on the physical node according to this average waiting time and the average processing rate;
Specifically, based on the parameters of the above processing and transmission processes, the average delays of the data packets on the physical nodes and physical links from time 0 to time T, d_{Q(f,k)}(t) and d_{P(f,k)}(t), are calculated using an M/D/1 model. According to queuing theory, the average delay of a packet on physical node Q(f,k) is the sum of the processing time and the queue waiting time W_{Q(f,k)}, which can be expressed as:

d_{Q(f,k)}(t) = \frac{1}{\bar{\mu}_{Q(f,k)}(t)} + W_{Q(f,k)}

W_{Q(f,k)} is related to the transmission process on the link P(f,k-1) preceding the physical node Q(f,k). If \bar{\mu}_{P(f,k-1)}(t) \le \bar{\mu}_{Q(f,k)}(t), then W_{Q(f,k)} = 0, which indicates that there is no queuing delay on physical node Q(f,k). If \bar{\mu}_{P(f,k-1)}(t) > \bar{\mu}_{Q(f,k)}(t), there is a queuing delay.
If there is a queuing delay, the arrival of packets at physical node Q(f,k) is modeled as a Poisson process with rate parameter \bar{\lambda}_{Q(f,k)}(t), and

W_{Q(f,k)} = \frac{\bar{\lambda}_{Q(f,k)}(t)}{2\bar{\mu}_{Q(f,k)}(t)\left(\bar{\mu}_{Q(f,k)}(t)-\bar{\lambda}_{Q(f,k)}(t)\right)}

Judging whether queuing delay exists on the physical link according to the average transmission rate of the physical link and the average processing rate of the physical node immediately before the physical link; if so, acquiring the average waiting time of the data packet before the data packet is transmitted according to the average transmission rate and the average arrival rate of the physical link; and acquiring the average delay of the data packet on the physical link according to this average waiting time and the average transmission rate;
In particular, the average transmission time on the physical path P(f,k) is (|P(f,k)|+1)/\bar{\mu}_{P(f,k)}(t). Except on the first link P(f,k)_1, there is no waiting time on the remaining switches and links. If there is a queuing delay, the arrival of packets at the physical link P(f,k)_1 is modeled as a Poisson process with rate parameter \bar{\lambda}_{P(f,k)}(t), and

W_{P(f,k)} = \frac{\bar{\lambda}_{P(f,k)}(t)}{2\bar{\mu}_{P(f,k)}(t)\left(\bar{\mu}_{P(f,k)}(t)-\bar{\lambda}_{P(f,k)}(t)\right)}

And acquiring the final average delay of the data packet according to the average delays of the data packet on the physical node and the physical link.
In particular, for the packets in flow f, the average delay between arriving at physical node n_z and arriving at the next physical node is d_{Q(f,k)}(t) + d_{P(f,k)}(t), where k satisfies Q(f,k) = n_z. The average packet delay of the entire flow from time 0 to time T is the sum of the average delays over all the physical nodes and paths it traverses from the source node to the destination node in the NFV network, i.e., the average delay of the entire flow is

d_f(t) = \sum_{k} \left( d_{Q(f,k)}(t) + d_{P(f,k)}(t) \right)

and the average delay over the whole preset time period is \bar{d}_f = \frac{1}{T}\sum_{t=0}^{T-1} d_f(t).
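The delay evaluation reconstructed above can be sketched as follows; the M/D/1 waiting-time formula and the queuing-condition checks mirror the description, while the chain parameters are invented for illustration.

```python
# Sketch of the M/D/1-based end-to-end delay evaluation. W = lam / (2*mu*(mu - lam))
# is the standard M/D/1 waiting time and requires lam < mu for a stable queue.
def md1_wait(lam, mu):
    return lam / (2.0 * mu * (mu - lam)) if lam < mu else float("inf")

def node_delay(lam_q, mu_q, mu_prev_link):
    # Queuing only if the upstream link delivers packets faster than the node serves them.
    wait = md1_wait(lam_q, mu_q) if mu_prev_link > mu_q else 0.0
    return 1.0 / mu_q + wait

def link_delay(lam_p, mu_p, mu_prev_node, hops):
    # Queuing only on the first link; 'hops' = |P(f,k)| + 1 transmission links.
    wait = md1_wait(lam_p, mu_p) if mu_prev_node > mu_p else 0.0
    return hops / mu_p + wait

# End-to-end delay of one flow: sum of node and path delays along its embedded chain.
chain = [  # (lam_Q, mu_Q, mu_prev_link, lam_P, mu_P, mu_prev_node, hops) per VNF hop
    (4.0, 6.0, 7.0, 4.0, 5.0, 6.0, 3),
    (4.0, 5.5, 5.0, 4.0, 6.5, 5.5, 2),
]
d_f = sum(node_delay(lq, mq, mpl) + link_delay(lp, mp, mpn, h)
          for lq, mq, mpl, lp, mp, mpn, h in chain)
print(round(d_f, 3))
```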
On the basis of the foregoing embodiment, in this embodiment, based on an M/D/1 queuing model, the average waiting time of the data packet before processing the data packet is obtained according to the average processing rate and the average arrival rate of the physical node by the following formula:
W_{Q(f,k)} = \frac{\bar{\lambda}_{Q(f,k)}(t)}{2\bar{\mu}_{Q(f,k)}(t)\left(\bar{\mu}_{Q(f,k)}(t)-\bar{\lambda}_{Q(f,k)}(t)\right)}

wherein W_{Q(f,k)} is the average waiting time of the data packet before the physical node Q(f,k), on which the kth network function in the target deployment f is located, processes the data packet, \bar{\lambda}_{Q(f,k)}(t) represents the average arrival rate of the physical node, \bar{\mu}_{Q(f,k)}(t) represents the average processing rate of the physical node, t represents the arbitrary time, and \bar{\mu}_{P(f,k-1)}(t) represents the average transmission rate of the physical link between the physical nodes where the (k-1)th network function and the kth network function in the target deployment f are located;

obtaining the average delay of the data packet on the physical node according to the average waiting time of the data packet before the data packet is processed and the average processing rate by the following formula:

d_{Q(f,k)}(t) = \frac{1}{\bar{\mu}_{Q(f,k)}(t)} + W_{Q(f,k)}

wherein d_{Q(f,k)}(t) represents the average delay of a data packet on the physical node;

obtaining the average waiting time of the data packet before the data packet is transmitted according to the average transmission rate and the average arrival rate of the physical link by the following formula:

W_{P(f,k)} = \frac{\bar{\lambda}_{P(f,k)}(t)}{2\bar{\mu}_{P(f,k)}(t)\left(\bar{\mu}_{P(f,k)}(t)-\bar{\lambda}_{P(f,k)}(t)\right)}

wherein W_{P(f,k)} represents the average waiting time of the data packet before the physical link P(f,k) between the physical nodes where the kth network function and the (k+1)th network function in the target deployment f are located transmits the data packet, \bar{\lambda}_{P(f,k)}(t) represents the average arrival rate of the physical link, and \bar{\mu}_{P(f,k)}(t) represents the average transmission rate of the physical link;

obtaining the average delay of the data packet on the physical link according to the average waiting time of the data packet before the data packet is transmitted and the average transmission rate by the following formula:

d_{P(f,k)}(t) = \frac{|P(f,k)|+1}{\bar{\mu}_{P(f,k)}(t)} + W_{P(f,k)}

wherein d_{P(f,k)}(t) is the average delay of a data packet transmitted from physical node Q(f,k) to physical node Q(f,k+1) on the physical link;

and obtaining the final average delay of the data packet according to the average delays of the data packet on the physical node and the physical link by the following formula:

d_f(t) = \sum_{k} \left( d_{Q(f,k)}(t) + d_{P(f,k)}(t) \right)

wherein d_f(t) is the final average delay of the data packet.
On the basis of the foregoing embodiment, the utility function in this embodiment is:
U_{\alpha_f}\left(d_f(t)\right) = \frac{d_f(t)^{1-\alpha_f}}{1-\alpha_f}

wherein \alpha_f is the adjustment parameter of the traffic f.
Specifically, the present embodiment proposes an optimization model of the resource allocation problem. The goal is to minimize the overall network delay while ensuring fairness between flows. To balance the delay between flows, the present embodiment introduces the utility function U_{\alpha}(x) = x^{1-\alpha}/(1-\alpha), where \alpha is an adjustment parameter that can be used to trade off fairness. Based on this function, the present embodiment defines the \alpha-fair utility function given above, which is related to the tolerable delay of traffic f and determines the proportion with which user traffic with different delay requirements enters the objective.
On the basis of the foregoing embodiment, the preset constraint condition in this embodiment includes:
(1) Load constraint: since the total resources allocated to all traffic on a physical node or physical link cannot exceed the available resources on that node or link, the sum of the resource allocation ratios on a node or link cannot exceed 1, which is formulated as:
C_1: 0 \le h_{Q(f,k)}(t) \le 1;
C_2: 0 \le h_{P(f,k)}(t) \le 1;
C_3: \sum_{f \in F(n_z)} h_{Q(f,k)}(t) \le 1;
C_4: \sum_{f \in F(e_{ij})} h_{P(f,k)}(t) \le 1;
(2) Queuing constraint: without loss of generality, this embodiment assumes that no data stream arrives before slot 0 and that all queue buffers are infinite. However, in order to ensure the stability of the system, the present embodiment limits the number of packets in the queued state:
C_5: B_{Q(f,k)}(0) = 0;
C_6: B_{P(f,k)}(0) = 0;
C_7: \lim_{T \to \infty} E\left[B_{Q(f,k)}(T)\right]/T = 0;
C_8: \lim_{T \to \infty} E\left[B_{P(f,k)}(T)\right]/T = 0;
wherein h_{Q(f,k)}(t) is the CPU resource rate allocated to the traffic by physical node Q(f,k) at time t, h_{P(f,k)}(t) is the link bandwidth ratio allocated to the traffic by the physical link P(f,k) at time t, F(n_z) represents the set of all traffic passing through the z-th physical node n_z in the NFV network, F(e_{ij}) represents the set of all traffic passing through the physical link e_{ij}, B_{Q(f,k)}(0) represents the number of data packets waiting to be processed on physical node Q(f,k) at time 0, B_{P(f,k)}(0) represents the number of data packets waiting to be transmitted on the physical link P(f,k) at time 0, T is the end time of the preset time period, B_{Q(f,k)}(t) represents the number of data packets waiting to be processed on physical node Q(f,k) at time t, and E represents the averaging function.
In summary, the delay minimization problem proposed by this embodiment can be expressed as:

\min \sum_{f \in F} \sum_{t=0}^{T-1} U_{\alpha_f}\left(d_f(t)\right)

s.t.: C_1, C_2, C_3, C_4, C_5, C_6, C_7, C_8.
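A small feasibility-check sketch for the load constraints C1-C4 follows; the dictionary layout and tolerance are illustrative assumptions.

```python
# Sketch: verify that every allocation ratio lies in [0, 1] and that the ratios sharing
# one physical node or link sum to at most 1 (constraints C1-C4).
def load_constraints_ok(allocations_per_resource, eps=1e-9):
    """allocations_per_resource: {resource_id: [h_f for every flow f using that resource]}"""
    for ratios in allocations_per_resource.values():
        if any(h < -eps or h > 1.0 + eps for h in ratios):
            return False                      # violates C1 / C2
        if sum(ratios) > 1.0 + eps:
            return False                      # violates C3 / C4
    return True

print(load_constraints_ok({"n1": [0.3, 0.5], "e12": [0.4, 0.4, 0.1]}))  # True
print(load_constraints_ok({"n1": [0.7, 0.6]}))                          # False
```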
in order to improve the end-to-end delay satisfaction of the service and ensure the fairness between flows, the embodiment proposes a queuing model to extract the influence of VNF scaling on the end-to-end delay, evaluate the end-to-end delay encountered when a packet traverses an embedded NFV chain, and define a utility function related to the delay as the environment feedback of the allocation policy.
As shown in fig. 2, the steps for solving the delay minimization problem are as follows:
Step 1: reset the gradient update quantities of the current Actor and Critic: dθ^π ← 0, dθ^V ← 0;
Step 2: update the local neural network parameters from the global neural network: θ^{π'} = θ^π, θ^{V'} = θ^V;
Step 3: set t = t_start and obtain the initial state s_t from the current network;
Step 4: based on the policy π(s_t; θ^π), select a resource allocation action a_t, and calculate the delay and throughput after the allocation is executed, thereby obtaining the immediate reward and the new network state;
Step 5: if s_t is a terminal state, or the maximum length of a single iteration has been reached, go to Step 6; otherwise return to Step 4;
Step 6: calculate the expected cumulative reward of all tasks;
Step 7: update the expected cumulative reward at each moment, and update the local gradients of the Actor and the Critic;
Step 8: update the global neural network parameters θ^π and θ^V with the accumulated local gradients;
Step 9: if the maximum number of iterations has been reached, the algorithm terminates and the resource allocation policy π is obtained; otherwise, go to Step 2.
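For orientation only, the sketch below maps Steps 1-9 onto a simplified single-worker loop in Python; the gradient formulas, learning rate, discount factor and the DummyEnv class are simplified assumptions, not the patent's implementation.

```python
# High-level sketch of one A3C worker following Steps 1-9, with plain-Python stand-ins
# for the actor (theta_pi) and critic (theta_v) networks.
import random

def select_action(state, params):            # stand-in for sampling from pi(s_t; theta_pi)
    return random.random()

def policy_gradient(s, a, R, params):        # stand-in for the actor gradient
    return R - params["theta_v"]

def value_gradient(s, R, params):            # stand-in for the critic gradient
    return R - params["theta_v"]

def run_worker(env, global_params, t_max=5, max_iters=50):
    for _ in range(max_iters):
        d_theta_pi, d_theta_v = 0.0, 0.0      # Step 1: reset local gradient accumulators
        local_params = dict(global_params)    # Step 2: sync from the global network
        s, trajectory = env.reset(), []       # Step 3: initial state s_t
        for _ in range(t_max):                # Steps 4-5: roll out one segment
            a = select_action(s, local_params)
            s_next, r, done = env.step(a)
            trajectory.append((s, a, r))
            s = s_next
            if done:
                break
        R = 0.0                               # Step 6: cumulative reward expectation
        for s_i, a_i, r_i in reversed(trajectory):   # Step 7: accumulate local gradients
            R = r_i + 0.99 * R
            d_theta_pi += policy_gradient(s_i, a_i, R, local_params)
            d_theta_v += value_gradient(s_i, R, local_params)
        global_params["theta_pi"] += 1e-3 * d_theta_pi   # Step 8: update global network
        global_params["theta_v"] += 1e-3 * d_theta_v
    return global_params                      # Step 9: final resource allocation policy

class DummyEnv:
    """Stand-in environment; a real one would be the NFV network with its queues."""
    def __init__(self): self.t = 0
    def reset(self): self.t = 0; return 0.0
    def step(self, action):
        self.t += 1
        return float(self.t), -abs(action - 0.5), self.t >= 5   # state, reward, done

print(run_worker(DummyEnv(), {"theta_pi": 0.0, "theta_v": 0.0}))
```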
The embodiment improves A3C into unsupervised reinforcement learning and auxiliary learning for resource allocation, realizes the optimal resource allocation strategy with minimum delay, achieves the optimal VNF resource allocation strategy by optimizing the accumulated reward, reduces end-to-end delay compared with other methods, and improves network throughput and practicability.
To evaluate the performance of this embodiment, experiments were conducted in Python on a conventional desktop computer with an Intel Core 2.6 GHz CPU and 16 GB of memory. The CNN in this architecture is implemented with TensorFlow.
In the experiments, a random network topology with 20 nodes and 80 links was established. Each node has between 50 and 100 units of CPU resources, depending on its type, and the link bandwidth is between 100 and 300. In line with practical deployments, several commonly deployed VNF types, such as firewalls, NATs, IDSs, load balancers, WAN optimizers and traffic monitors, are simulated and hosted on the NFV nodes; these VNFs can be composed into SFCs. Each SFC contains 1 to 5 VNFs, and different VNF chains may share the same NFV nodes and links. This example generates 30 flows with different SFC requirements. Packets arriving at the source node of each flow follow a Poisson process with average arrival values distributed over a window of size 10.
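The sketch below generates a comparable random test topology; the use of networkx and the gnm_random_graph generator is an assumption, since the patent does not state how the topology was drawn.

```python
# Sketch of a random topology like the one described: 20 nodes, 80 links,
# 50-100 CPU units per node, link bandwidth between 100 and 300.
import random
import networkx as nx

random.seed(42)
G = nx.gnm_random_graph(n=20, m=80, seed=42)
for n in G.nodes:
    G.nodes[n]["cpu"] = random.randint(50, 100)
for u, v in G.edges:
    G.edges[u, v]["bandwidth"] = random.randint(100, 300)

print(G.number_of_nodes(), G.number_of_edges())   # 20 80
```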
The following two baseline algorithms are compared with the improved A3C of this embodiment. The random allocation algorithm randomly allocates a proportion of resources to each VNF on each node. Dominant Resource Generalized Processor Sharing (DR-GPS) determines the dominant resource required by each flow according to the structure of its data packets and allocates the proportion of the dominant resource according to the priority of the service. All settings, such as the state, action, reward and CNN parameters, are kept consistent with the improved A3C.
The present embodiment uses average end-to-end packet delay, end-to-end throughput, and the sum of delay and throughput utilities as indicators to measure the performance of each algorithm at different traffic arrival rates and run times. In addition, the present embodiment compares the performance of two DRL algorithms, A3C and improved A3C, in terms of online learning efficiency and jackpot.
The present embodiment varies the center value of the traffic arrival rate window and runs the simulation for 20,000 slots. Compared with the other two baseline algorithms, the improved A3C clearly reduces the network delay while maintaining good performance in terms of network throughput and utility, because throughput is taken into account during training. Meanwhile, although the performance of A3C is similar to that of the improved A3C in some cases, the overall trend of the improved A3C is more stable.
When comparing the performance of the different algorithms over time, the arrival rate window is used to generate the corresponding traffic demands. During the run, the improved A3C always achieves lower latency and higher throughput than all the other methods. The accumulation of data packets causes the average delay to increase over time. The improved A3C adaptively adjusts the allocation strategy according to the environmental conditions, so the network delay tends to stabilize. In contrast, under the other algorithms, network performance fluctuates more and the upward trend of the delay is steeper.
The improved A3C in this example also performs better than A3C in online learning. The loss function of the improved A3C converges faster, and the optimal solution quickly obtains a high reward, whereas A3C needs more time and only reaches a lower cumulative reward, which corresponds to a locally optimal solution.
Fig. 3 illustrates a physical structure diagram of an electronic device, which may include, as shown in fig. 3: a processor (processor)301, a communication Interface (communication Interface)302, a memory (memory)303 and a communication bus 304, wherein the processor 301, the communication Interface 302 and the memory 303 complete communication with each other through the communication bus 304. Processor 301 may call logic instructions in memory 303 to perform the following method: modeling an NFV network, constructing a network model, modeling traffic in the NFV network, and constructing a traffic model; modeling the processing process and the transmission process of the data packet in the flow according to the network model and the flow model, and modeling the delay of the data packet in the processing process and the transmission process according to the modeling results of the processing process and the transmission process; and solving the modeling result of the delay according to a preset constraint condition and a utility function, and obtaining a resource allocation result of the flow so as to minimize the delay.
Furthermore, the logic instructions in the memory 303 may be implemented in the form of software functional units and stored in a computer readable storage medium when sold or used as a stand-alone product. Based on such understanding, the technical solution of the present invention or a part thereof which substantially contributes to the prior art may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk, and various media capable of storing program codes.
The present embodiments provide a non-transitory computer readable storage medium storing computer instructions, the computer instructions causing a computer to perform the methods provided by the above method embodiments, for example, including: modeling an NFV network, constructing a network model, modeling traffic in the NFV network, and constructing a traffic model; modeling the processing process and the transmission process of the data packet in the flow according to the network model and the flow model, and modeling the delay of the data packet in the processing process and the transmission process according to the modeling results of the processing process and the transmission process; and solving the modeling result of the delay according to a preset constraint condition and a utility function, and acquiring a resource allocation result of the flow so as to minimize the delay.
Those of ordinary skill in the art will understand that: all or part of the steps of implementing the method embodiments may be implemented by hardware related to program instructions, and the program may be stored in a computer-readable storage medium, and when executed, executes the steps including the method embodiments; and the aforementioned storage medium includes: various media that can store program codes, such as ROM, RAM, magnetic or optical disks.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate components may or may not be physically separate, and components displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the solution of the embodiment. One of ordinary skill in the art can understand and implement the present invention without any inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment may be implemented by software plus a necessary general hardware platform, and may also be implemented by hardware. With this understanding in mind, the above technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions.

Claims (7)

1. An NFV resource allocation method, comprising:
modeling an NFV network, constructing a network model, modeling traffic in the NFV network, and constructing a traffic model;
modeling the processing process and the transmission process of the data packet in the flow according to the network model and the flow model, and modeling the delay of the data packet in the processing process and the transmission process according to the modeling results of the processing process and the transmission process;
solving the modeling result of the delay according to a preset constraint condition and a utility function, and acquiring a resource allocation result of the flow so as to minimize the delay; wherein the preset constraint condition includes a load constraint on a physical node or a physical link in the NFV network and a queuing constraint of a data flow, and the utility function is used for balancing a delay between flows in the NFV network;
modeling the NFV network, wherein the step of constructing the network model comprises the following steps:
representing the NFV network by an undirected graph;
wherein, the physical nodes in the NFV network are used as the vertexes of the undirected graph;
taking the total amount of resources used by each physical node for processing the data packet as data corresponding to the vertex;
connecting lines between two physical nodes through which the flow passes before and after are used as edges of the undirected graph;
taking the total amount of resources of transmission flow between two physical nodes connected by a connecting line as data corresponding to the edge of the undirected graph;
modeling the traffic in the NFV network, wherein the step of constructing a traffic model comprises:
modeling a process that a data packet in the flow reaches a physical node and a physical link in the NFV network as a time-varying Poisson process;
the step of modeling the processing process and the transmission process of the data packet in the flow according to the network model and the flow model comprises the following steps:
acquiring the average processing rate of a physical node where each network function is located in target deployment to the data packet and the average arrival rate of the data packet to the physical node;
and acquiring the average transmission rate of the physical link between the two adjacent physical nodes where the network functions are located to the data packet and the average arrival rate of the data packet to the physical link.
2. The NFV resource allocation method according to claim 1, wherein the step of obtaining an average processing rate of the physical node to the data packet and an average arrival rate of the data packet to the physical node, where each network function is located in a target deployment, includes:
acquiring computing resources distributed to the flow by the physical node at any moment according to the CPU resource rate distributed to the flow by the physical node at any moment and available resources on the physical node;
acquiring the processing rate of the physical node to the data packet at any moment according to the computing resource distributed to the flow by the physical node at any moment and the processing rate of the computing resource in unit time;
taking the average value of the processing rates of the physical nodes to the data packets in a preset time period as the average processing rate;
and acquiring the average rate at which the data packet reaches the physical node at any moment, and taking the average value of all the average rates in the preset time period as the average arrival rate.
3. The NFV resource allocation method according to claim 2, wherein the step of acquiring the average transmission rate of the data packets over the physical link between the physical nodes where two adjacent network functions are located and the average arrival rate of the data packets at that link comprises:
acquiring the transmission resources allocated to the traffic by the physical link at any time according to the link bandwidth ratio allocated to the traffic by the physical link at that time and the resources available on the physical link at that time;
acquiring the transmission rate of the data packets over the physical link at any time according to the transmission resources allocated to the traffic by the physical link at that time and the transmission rate of a unit of transmission resource per unit time;
taking the mean of the transmission rates of the data packets over the physical link during the preset time period as the average transmission rate;
and acquiring the average rate at which the data packets arrive at the physical link at any time, and taking the mean of these rates over the preset time period as the average arrival rate.
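Claims 2 and 3 compute the node and link service rates the same way: scale the instantaneous resource share allocated to the traffic by the resources available at that instant, convert resources to a packet rate via a per-unit rate, and average over the preset time period. A minimal sketch under those assumptions follows; the per-unit rates and sample values are illustrative choices, not numbers from the patent.

import numpy as np

def avg_service_rate(h, available, per_unit_rate):
    """h[i]: resource ratio allocated to the flow at sample i (between 0 and 1),
    available[i]: resources available at sample i,
    per_unit_rate: packets processed/transmitted per resource unit per unit time.
    Returns the time-averaged service rate over the preset period."""
    h = np.asarray(h, dtype=float)
    available = np.asarray(available, dtype=float)
    allocated = h * available             # resources given to this flow at each sample
    rate_t = allocated * per_unit_rate    # instantaneous packet rate at each sample
    return float(rate_t.mean())           # average over the preset time period

def avg_arrival_rate(arrival_rate_samples):
    """Mean of the per-sample packet arrival rates over the preset period."""
    return float(np.mean(arrival_rate_samples))

# Node (claim 2): CPU ratio h_Q, available CPU, packets per CPU unit per unit time.
mu_node = avg_service_rate(h=[0.3, 0.4, 0.35], available=[16, 16, 12], per_unit_rate=2.0)
# Link (claim 3): bandwidth ratio h_P, available bandwidth, packets per bandwidth unit.
mu_link = avg_service_rate(h=[0.5, 0.5, 0.6], available=[10, 8, 10], per_unit_rate=1.5)
lam_node = avg_arrival_rate([6.0, 7.5, 7.0])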
4. The NFV resource allocation method according to claim 3, wherein the step of modeling the delay of the data packets during processing and transmission according to the results of the processing and transmission modeling comprises:
judging whether a queuing delay exists at the physical node according to the average processing rate of the physical node and the average transmission rate of the physical link immediately preceding the physical node;
if so, acquiring the average waiting time of a data packet before it is processed according to the average processing rate and the average arrival rate of the physical node;
acquiring the average delay of a data packet at the physical node according to the average waiting time before processing and the average processing rate;
judging whether a queuing delay exists on the physical link according to the average transmission rate of the physical link and the average processing rate of the physical node immediately preceding the physical link;
if so, acquiring the average waiting time of a data packet before it is transmitted according to the average transmission rate and the average arrival rate of the physical link;
acquiring the average delay of a data packet on the physical link according to the average waiting time before transmission and the average transmission rate;
and acquiring the final average delay of a data packet according to its average delays at the physical nodes and on the physical links.
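Claim 4 fixes the order of the delay computation but leaves the actual waiting-time expressions to claim 5. The Python skeleton below mirrors that order; the queuing test assumes that packets back up at a stage when the stage immediately before it can feed packets at least as fast as the stage can serve them, which is one natural reading of the rate comparison in the claim and is stated here only as an assumption. The waiting-time function is passed in so that the concrete expressions (see the sketch after claim 5) stay separate.

def stage_delay(avg_rate, avg_arrival, prev_stage_rate, waiting_time_fn):
    """Average delay contributed by one stage (a node processing packets or a
    link transmitting them), following the order of steps in claim 4.
    waiting_time_fn(avg_arrival, avg_rate) supplies the claim-5 waiting time."""
    # Assumed reading of the claim-4 test: packets queue at this stage when the
    # stage immediately before it feeds packets at least as fast as this stage
    # can serve them.
    queuing = prev_stage_rate >= avg_rate
    wait = waiting_time_fn(avg_arrival, avg_rate) if queuing else 0.0
    # Assumption: stage delay = waiting time + average service time (1 / rate).
    return wait + 1.0 / avg_rate

def flow_delay(node_stats, link_stats, waiting_time_fn):
    """node_stats and link_stats: lists of (avg_rate, avg_arrival, prev_stage_rate)
    tuples along the target deployment f, ordered by network function index k.
    Returns the final average delay of a data packet of the flow."""
    total = 0.0
    for avg_rate, avg_arrival, prev_rate in node_stats + link_stats:
        total += stage_delay(avg_rate, avg_arrival, prev_rate, waiting_time_fn)
    return total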
5. The NFV resource allocation method according to claim 4, wherein the average waiting time of a data packet before it is processed is acquired from the average processing rate and the average arrival rate of the physical node by the following formula:
[formula image FDA0003646394030000031, not reproduced]
wherein W_Q(f,k) denotes the average waiting time of a data packet before the physical node Q(f,k), at which the k-th network function of the target deployment f is located, processes the packet, and the remaining symbols (rendered as images FDA0003646394030000032 and FDA0003646394030000041 to FDA0003646394030000043 in the original) denote the average arrival rate of the data packets at the physical node, the average processing rate of the physical node (t denoting the arbitrary time), and the average transmission rate of the physical link between the physical nodes at which the (k-1)-th and the k-th network functions of the target deployment f are located;
the average delay of a data packet at the physical node is acquired from the average waiting time before processing and the average processing rate by the following formula:
[formula image FDA0003646394030000044, not reproduced]
wherein d_Q(f,k)(t) denotes the average delay of a data packet at the physical node;
the average waiting time of a data packet before it is transmitted is acquired from the average transmission rate and the average arrival rate of the physical link by the following formula:
[formula image FDA0003646394030000045, not reproduced]
wherein W_P(f,k) denotes the average waiting time of a data packet before it is transmitted over the physical link P(f,k) between the physical nodes at which the k-th and the (k+1)-th network functions of the target deployment f are located, and the remaining symbols (images FDA0003646394030000046 and FDA0003646394030000047) denote the average arrival rate of the data packets at the physical link and the average transmission rate of the physical link;
the average delay of a data packet on the physical link is acquired from the average waiting time before transmission and the average transmission rate by the following formula:
[formula image FDA0003646394030000048, not reproduced]
wherein d_P(f,k)(t) denotes the average delay of a data packet on the physical link;
and the final average delay of a data packet is acquired from its average delays at the physical nodes and on the physical links by the following formula:
[formula image FDA0003646394030000049, not reproduced]
wherein d_f(t) denotes the final average delay of the data packet.
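The formula images in claim 5 are not reproduced in this text, so their exact expressions cannot be recovered here. Because claim 1 models packet arrivals as a (time-varying) Poisson process and claim 5 combines an average waiting time with an average service rate, a standard M/M/1 queueing approximation is a natural stand-in: waiting time W = lambda / (mu * (mu - lambda)) and per-stage delay d = W + 1/mu. The sketch below uses exactly that approximation and should be read as an assumption that merely plugs into the claim-4 skeleton above, not as the patent's own formulas. Either function can be passed as the waiting_time_fn argument of that skeleton.

def mm1_waiting_time(avg_arrival, avg_service):
    """Assumed M/M/1 mean waiting time in queue: lambda / (mu * (mu - lambda)).
    Only meaningful when the stage is stable (avg_service > avg_arrival)."""
    if avg_service <= avg_arrival:
        raise ValueError("unstable stage: service rate must exceed arrival rate")
    return avg_arrival / (avg_service * (avg_service - avg_arrival))

def mm1_stage_delay(avg_arrival, avg_service):
    """Assumed per-stage average delay: waiting time plus mean service time."""
    return mm1_waiting_time(avg_arrival, avg_service) + 1.0 / avg_service

# Example: a node serving 10 packets per unit time with 7 arriving per unit time.
w = mm1_waiting_time(7.0, 10.0)   # about 0.233 time units spent waiting
d = mm1_stage_delay(7.0, 10.0)    # about 0.333 time units in total at this stage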
6. The NFV resource allocation method according to claim 5, wherein the utility function is given by the following formula:
[formula image FDA0003646394030000051, not reproduced]
wherein α_f is an adjustment parameter.
7. The NFV resource allocation method according to claim 5, wherein the preset constraint condition comprises:
0 ≤ h_Q(f,k)(t) ≤ 1;
0 ≤ h_P(f,k)(t) ≤ 1;
[formula image FDA0003646394030000052, not reproduced]
[formula image FDA0003646394030000053, not reproduced]
B_Q(f,k)(0) = 0;
B_P(f,k)(0) = 0;
[formula image FDA0003646394030000054, not reproduced]
[formula image FDA0003646394030000055, not reproduced]
wherein h_Q(f,k)(t) is the CPU resource ratio allocated to the traffic by the physical node Q(f,k) at time t, h_P(f,k)(t) is the link bandwidth ratio allocated to the traffic by the physical link P(f,k) at time t, F(n_z) denotes the set of all traffic passing through the z-th physical node n_z in the NFV network, B_Q(f,k)(0) denotes the number of data packets waiting to be processed at the physical node Q(f,k) at time 0, B_P(f,k)(0) denotes the number of data packets waiting to be transmitted on the physical link P(f,k) at time 0, T is the end time of the preset time period, B_Q(f,k)(t) denotes the number of data packets waiting to be processed at the physical node Q(f,k) at time t, and E denotes the averaging (expectation) operator.
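The constraint images in claim 7 and the utility image in claim 6 are not reproduced, so the sketch below fills them in with commonly used stand-ins, clearly marked as assumptions: the load constraints are taken to require that the resource shares handed out on each node or link sum to at most one across the flows using it (the set F(n_z)), the queuing constraints are taken to require that the time-averaged backlogs B remain bounded (mean-rate stability over the period ending at T), and the utility is taken to be a negative weighted sum of per-flow delays with the adjustment parameters α_f balancing flows. None of these forms is confirmed by the text; the code only shows where the bounds 0 ≤ h ≤ 1, the initial conditions B(0) = 0, and the symbols F(n_z), T, and E would plug into a resource-allocation check.

import numpy as np

def allocation_feasible(h_by_flow, eps=1e-9):
    """h_by_flow: {flow_id: share allocated on one node or link at time t}.
    Assumed load constraint: every share lies in [0, 1] and the shares of all
    flows sharing the node or link (the set F(n_z)) sum to at most 1."""
    shares = np.array(list(h_by_flow.values()), dtype=float)
    return bool(np.all(shares >= -eps) and np.all(shares <= 1 + eps)
                and shares.sum() <= 1 + eps)

def mean_rate_stable(final_backlogs, horizon_T, tol=1e-2):
    """final_backlogs: B(T) observed at the end of the preset period over several
    runs, with B(0) = 0 as in claim 7. Assumed queuing constraint: the averaged
    backlog E[B(T)] grows sublinearly in T, checked here as E[B(T)] / T < tol."""
    return float(np.mean(final_backlogs)) / horizon_T < tol

def utility(delays, alpha):
    """Assumed delay-balancing utility: negative weighted sum of the per-flow
    delays d_f(t), with adjustment parameters alpha_f as in claim 6."""
    return -sum(alpha[f] * d for f, d in delays.items())

# Example with hypothetical numbers: two flows sharing one node at time t.
ok = allocation_feasible({"f1": 0.4, "f2": 0.5})
stable = mean_rate_stable(final_backlogs=[3.0, 5.0, 2.0], horizon_T=100.0)
u = utility({"f1": 0.33, "f2": 0.41}, alpha={"f1": 1.0, "f2": 2.0})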
CN201911108149.0A 2019-11-13 2019-11-13 NFV resource allocation method Active CN110971451B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911108149.0A CN110971451B (en) 2019-11-13 2019-11-13 NFV resource allocation method

Publications (2)

Publication Number Publication Date
CN110971451A CN110971451A (en) 2020-04-07
CN110971451B true CN110971451B (en) 2022-07-26

Family

ID=70030707

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911108149.0A Active CN110971451B (en) 2019-11-13 2019-11-13 NFV resource allocation method

Country Status (1)

Country Link
CN (1) CN110971451B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220303191A1 (en) * 2021-03-18 2022-09-22 Nokia Solutions And Networks Oy Network management
CN115622889B (en) * 2022-12-19 2023-05-09 湖北省楚天云有限公司 Containerized network architecture and network function deployment method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10491688B2 (en) * 2016-04-29 2019-11-26 Hewlett Packard Enterprise Development Lp Virtualized network function placements

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107395506A (en) * 2017-09-07 2017-11-24 University of Electronic Science and Technology of China Service function chain deployment method with propagation delay optimization
CN108600101A (en) * 2018-03-21 2018-09-28 Beijing Jiaotong University Cross-domain orchestration method for network services oriented to end-to-end delay performance optimization
CN108900358A (en) * 2018-08-01 2018-11-27 Chongqing University of Posts and Telecommunications Virtual network function dynamic migration method based on deep belief network resource requirement prediction
CN108965024A (en) * 2018-08-01 2018-12-07 Chongqing University of Posts and Telecommunications Prediction-based virtual network function scheduling method for 5G network slices

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Research on Radio Resource Allocation for D2D Communication; Feng Daquan; China Master's Theses Full-text Database; 2015-04-07; full text *
Deploying elastic routing capability in an SDN/NFV-enabled environment; Steven Van Rossem, Wouter Tavernier; 2015 IEEE Conference on Network Function Virtualization and Software Defined Network; 2016-01-21; full text *
Piecing together the NFV provisioning puzzle: Efficient placement and chaining of virtual network functions; M. C. Luizelli; 2015 IFIP/IEEE International Symposium on Integrated Network Management; 2015-12-31; full text *
New Computing Model for the Era of the Internet of Everything; Shi Weisong, Sun Hui; Journal of Computer Research and Development; 2017-12-31; vol. 54, no. 5; 907-924 *

Also Published As

Publication number Publication date
CN110971451A (en) 2020-04-07

Similar Documents

Publication Publication Date Title
CN109768940B (en) Flow distribution method and device for multi-service SDN
CN112486690B (en) Edge computing resource allocation method suitable for industrial Internet of things
CN110505099B (en) Service function chain deployment method based on migration A-C learning
US9386086B2 (en) Dynamic scaling for multi-tiered distributed systems using payoff optimization of application classes
WO2018095300A1 (en) Network control method, apparatus and system, storage medium
US20190020555A1 (en) System and method for applying machine learning algorithms to compute health scores for workload scheduling
CN111416774B (en) Network congestion control method and device, computer equipment and storage medium
CN109905329B (en) Task type aware flow queue self-adaptive management method in virtualization environment
CN112600759B (en) Multipath traffic scheduling method and system based on deep reinforcement learning under Overlay network
Kim et al. Multi-agent reinforcement learning-based resource management for end-to-end network slicing
CN113708972A (en) Service function chain deployment method and device, electronic equipment and storage medium
Rezazadeh et al. Continuous multi-objective zero-touch network slicing via twin delayed DDPG and OpenAI gym
CN110971451B (en) NFV resource allocation method
CN110247795B (en) Intent-based cloud network resource service chain arranging method and system
EP4024212B1 (en) Method for scheduling inference workloads on edge network resources
Dalgkitsis et al. Dynamic resource aware VNF placement with deep reinforcement learning for 5G networks
CN114866494B (en) Reinforced learning intelligent agent training method, modal bandwidth resource scheduling method and device
Tam et al. Multi-Agent Deep Q-Networks for Efficient Edge Federated Learning Communications in Software-Defined IoT.
Yuan et al. Delay-aware NFV resource allocation with deep reinforcement learning
Li et al. Profit maximization for service placement and request assignment in edge computing via deep reinforcement learning
Wang et al. Towards adaptive packet scheduler with deep-q reinforcement learning
CN111340192A (en) Network path allocation model training method, path allocation method and device
Xia et al. Learn to optimize: Adaptive VNF provisioning in mobile edge clouds
CN114584494A (en) Method for measuring actual available bandwidth in edge cloud network
Bensalem et al. Towards optimal serverless function scaling in edge computing network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant