WO2020154439A1 - Dynamic inter-cloud placement of virtual network functions for a slice - Google Patents
- Publication number
- WO2020154439A1 (PCT/US2020/014661)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- slice
- cloud
- candidate
- paths
- sla
- Prior art date
Classifications
- G—PHYSICS; G06—COMPUTING, CALCULATING OR COUNTING; G06F—ELECTRIC DIGITAL DATA PROCESSING
  - G06F9/45558—Hypervisor-specific management and integration aspects
  - G06F2009/4557—Distribution of virtual machine instances; Migration and load balancing
  - G06F2009/45595—Network integration; Enabling network access in virtual machine instances
- H—ELECTRICITY; H04—ELECTRIC COMMUNICATION TECHNIQUE; H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
  - H04L41/0806—Configuration setting for initial configuration or provisioning, e.g. plug-and-play
  - H04L41/0826—Configuration setting for reduction of network costs
  - H04L41/0893—Assignment of logical groups to network elements
  - H04L41/0895—Configuration of virtualised networks or elements, e.g. virtualised network function or OpenFlow elements
  - H04L41/122—Discovery or management of virtualised network topologies, e.g. software-defined networks [SDN] or network function virtualisation [NFV]
  - H04L41/22—Network management comprising specially adapted graphical user interfaces [GUI]
  - H04L41/40—Network management using virtualisation of network functions or resources, e.g. SDN or NFV entities
  - H04L41/5009—Determining service level performance parameters or violations of service level contracts, e.g. violations of agreed response time or mean time between failures [MTBF]
  - H04L41/5025—Ensuring fulfilment of SLA by proactively reacting to service quality change, e.g. by reconfiguration after service quality degradation or upgrade
  - H04L41/5054—Automatic deployment of services triggered by the service manager, e.g. service implementation by automatic configuration of network components
  - H04L41/5096—Network service management wherein the managed service relates to distributed or central networked applications
  - H04L43/0817—Monitoring or testing based on specific metrics by checking availability by checking functioning
  - H04L43/0852—Delays
  - H04L43/0864—Round trip delays
  - H04L43/0882—Utilisation of link capacity
  - H04L45/124—Shortest path evaluation using a combination of metrics
  - H04L47/2425—Traffic characterised by specific attributes, e.g. priority or QoS, for supporting services specification, e.g. SLA
  - H04L67/1008—Server selection for load balancing based on parameters of servers, e.g. available memory or workload
  - H04L67/101—Server selection for load balancing based on network conditions
  - H04L67/1097—Protocols for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
Definitions
- Network slicing is a form of virtualization that allows multiple logical networks to run on top of a shared physical network infrastructure.
- A distributed cloud network can share network resources with various slices to allow different users, called tenants, to multiplex over a single physical infrastructure.
- Internet-of-Things (“IoT”) devices, mobile broadband devices, and low-latency vehicular devices will all need to share the 5G network.
- These different applications will have different transmission characteristics and requirements.
- The IoT, for example, will typically have a large number of devices but very low throughput.
- Mobile broadband will be the opposite, with each device transmitting and receiving high bandwidth content.
- Network slicing can allow the physical network to be partitioned at an end-to-end level to group traffic, isolate tenant traffic, and configure network resources at a macro level.
- A slice can include a service chain of virtual network functions (“VNFs”). However, existing technologies simply determine the shortest or fastest path between VNFs. Not only will this create bottlenecks, but it also falls short of determining the best arrangement for the particular slice, leaving a slice’s particular performance-metric needs unaddressed.
- Statically placing these VNFs in slices on the physical network can again inefficiently reserve physical and virtual resources that are not needed or that change over a time period. For example, a sporting event could cause existing slice performance to suffer and fall below service level agreement (“SLA”) requirements.
- Examples described herein include systems and methods for dynamic inter-cloud VNF placement in a slice path over a distributed cloud network.
- The slices can span a multi-cloud topology.
- An optimizer can determine a slice path that will satisfy an SLA while also considering cloud load.
- The optimizer can be part of an orchestrator framework for managing a virtual layer of a distributed cloud network.
- The optimizer can identify an optimal slice path that meets the SLA (or violates the SLA to the lowest extent) while balancing network resources. This means the optimizer will not necessarily choose the shortest or fastest possible path between VNFs. This can provide technical advantages over algorithms such as Dijkstra’s algorithm, which is single-dimensional and would find only the shortest path. By considering multiple dimensions together, the optimizer can balance SLA compliance and network performance. This approach can also allow the optimizer to flexibly incorporate additional SLA attributes and new weights, and to react to changing network conditions.
- The optimizer can start determining a slice path from an edge cloud. This can include determining a neighborhood of available clouds based on the number of VNFs in the slice and a maximum number of intercloud links. Limiting intercloud links can keep the pool of candidate slice paths more manageable from a processing standpoint.
- Each candidate slice path can include VNFs placed at a different combination of clouds. The number of different clouds in a candidate slice path can be less than or equal to both (1) the total number of available clouds and (2) the maximum number of intercloud links plus one. Within those boundaries, permutations of VNF-to-cloud assignments can be considered as candidate slice paths.
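A minimal sketch of this enumeration (in Python; the cloud labels and function names are illustrative assumptions, not part of the disclosure) could look like:

```python
from itertools import product

def candidate_slice_paths(clouds, num_vnfs, max_intercloud_links):
    """Yield VNF-to-cloud assignments as candidate slice paths.

    An assignment is kept only if the number of distinct clouds it uses
    stays within both bounds described above: the total number of
    available clouds and the maximum number of intercloud links plus one.
    """
    max_distinct = min(len(clouds), max_intercloud_links + 1)
    for assignment in product(clouds, repeat=num_vnfs):
        if len(set(assignment)) <= max_distinct:
            yield assignment  # assignment[i] is the cloud hosting VNF i

# Example: three VNFs over clouds A, B, C with at most one intercloud link,
# so each candidate path may use at most two distinct clouds.
paths = list(candidate_slice_paths(["A", "B", "C"], 3, 1))
```

In a real deployment the pool would be narrowed further by the SLA and VNF-requirement filters described later.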
- The optimizer can determine a performance metric for the candidate slice paths corresponding to an SLA attribute of the slice.
- The optimizer can also determine loads for the candidate slice paths based on load values of the corresponding clouds.
- The optimizer can then identify the slice path with the best composite performance.
- The best composite performance can account for both the weighted load and performance metrics and, in one example, can be based on the lowest overall composite score. The system can then instantiate the VNFs at the corresponding clouds specified by the slice path with the best composite score.
- The slice path with the best composite score can be referred to as “the best composite slice path.”
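As an illustrative sketch (the path names, metric values, and weights below are hypothetical, not the patent’s implementation), the composite-score selection could be written as:

```python
def composite_score(metrics, weights):
    """Weighted sum of normalized dimensional costs; lower is better."""
    return sum(weights[name] * value for name, value in metrics.items())

def best_composite_slice_path(candidates, weights):
    """Pick the candidate slice path with the lowest composite score."""
    return min(candidates, key=lambda path: composite_score(candidates[path], weights))

# Hypothetical normalized metrics: rtt below 1.0 meets the SLA,
# load is relative cloud utilization along the path.
candidates = {
    ("edge", "core1"): {"rtt": 0.9, "load": 0.8},
    ("edge", "edge"):  {"rtt": 0.5, "load": 1.4},  # fastest path, but congested edge
}
weights = {"rtt": 0.5, "load": 0.5}
best = best_composite_slice_path(candidates, weights)
```

Note that the fastest path loses here: its congestion outweighs its lower RTT, matching the point above that the optimizer does not simply pick the shortest or fastest path.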
- A graphical user interface (“GUI”) can allow a user to adjust weights for SLA attributes. For example, a tenant can make a GUI selection that weights one SLA attribute relative to a second SLA attribute. The optimizer can use these weights to choose optimal VNF placement, while still balancing against the network load of the slice path.
- The optimizer can determine candidate slice paths relative to an edge cloud. Each candidate slice path considers a unique permutation of VNF-to-cloud assignments for a given service function chain.
- The optimizer can rank the candidate slice paths based on the relative weightings of the performance metrics corresponding to the SLA attributes and a load for the candidate slice path. This can allow the optimizer to balance network requirements while still determining VNF locations based on SLA compliance, rather than simply picking the shortest or fastest path.
- The optimizer (which can be considered part of an orchestrator) can then provision the VNFs at the clouds specified by the top-ranked slice path.
- These stages can be performed by a system in some examples.
- A non-transitory, computer-readable medium including instructions can cause a processor to perform the stages when the processor executes the instructions.
- FIG. 1A is a flowchart of an example method for dynamic inter-cloud VNF placement in a slice path.
- FIG. 1B is a flowchart of an example method for dynamic inter-cloud VNF placement in a slice path based on service-level agreement attribute weighting.
- FIG. 2 is an example sequence diagram of example stages for dynamic inter-cloud placement of VNFs in a slice path.
- FIG. 3 is an example system diagram illustrating performance cost and load calculations for multiple clouds.
- FIG. 4A is an example table of candidate slice paths used for determining optimal VNF placements.
- FIG. 4B is an example table of loads for clouds.
- FIG. 4C is an example matrix of round-trip times between clouds.
- FIG. 4D is an example table of candidate slice paths with composite scores.
- FIG. 5 is an example illustration of a graphical user interface (“GUI”) screen.
- FIG. 6 is an example system diagram of a topology for dynamic inter-cloud VNF placement in a slice path.
- A system dynamically chooses optimal VNF locations for slices in a distributed multi-cloud environment, such as a Telco cloud environment.
- A Telco provider can have numerous data center environments, each of which can be a cloud. Each cloud can be one of several nodes located at various geographic locations in the distributed Telco network.
- Edge clouds can be those closest to user devices, such as cell phones, tablets, computers, IoT devices, and other processor-enabled devices that can connect to a mobile network. Edge clouds can act as ingress points for devices utilizing the Telco network.
- Core clouds can be at least one link removed from the user devices and can include core data centers in an example.
- Core clouds can act as egress points to the internet if, for example, a VNF located at the core cloud is responsible for connecting to the internet.
- Edge clouds can also act as egress points in an example, but as will be discussed, the provider can avoid this configuration for congestion reasons in some instances.
- A provider of the Telco network can lease portions of the network to tenants. These portions can be leased for specific purposes or services, such as particular applications, IoT devices, or customers.
- To do so, the provider can create and manage one or more network slices, referred to as “slices” for convenience. Each slice can be a virtual network that runs on top of a shared physical network infrastructure distributed across the Telco clouds. In effect, slicing can allow the provider to reserve some portion of the distributed network for each tenant.
- Network slices can be assigned to different tenants, and in some examples a single tenant can have multiple slices for different purposes.
- An SLA can define which performance metrics are required for the slice. Required performance metrics can vary between slices, depending on the intended use of a given slice.
- A slice can include a service chain of VNFs for performing certain network tasks.
- The required combination of VNFs can differ based on the intended use of the slice, such as video streaming or IoT device management.
- The SLA or a separate slice record can specify which VNFs make up the service chain.
- The VNFs can be deployed across a slice path.
- The slice path can represent a subset of the provider’s distributed network and can span one or more clouds.
- The slice path can include virtual and physical elements (such as compute, network, and storage elements) that provide functionality to the network slice.
- The virtual elements can include the VNFs required for the particular slice. These can operate in virtual machines (“VMs”) and utilize virtual computer processing units (“vCPUs”).
- The slice path can begin at an edge cloud that provides an access point to user devices, but VNFs in the service chain can be placed elsewhere on other clouds.
- The slice path can be along a selected permutation of VNF-to-cloud assignments.
- The optimizer (or another part of an orchestrator) can evaluate a new slice path based on current conditions and SLA requirements. Placement of VNFs can be optimized based on various dimensional costs, such as performance metrics in the SLA, compute costs, and network utilization.
- The optimal slice path can represent a tradeoff between satisfying SLA performance metrics and orchestrating resources in a multi-cloud environment.
- The service provider can instantiate VNFs at the cloud locations specified in the optimal slice path.
- The optimizer can continue to monitor metrics and cloud loads and redistribute the VNFs along a new optimal slice path once metrics or loads fall outside of SLA and cloud-load thresholds.
- FIG. 1A is an example flow chart for dynamic inter-cloud VNF placement for a slice.
- An optimizer can be a process running in a core cloud of the provider. The optimizer can run on a server as part of a suite of data center management tools, in an example. The optimizer can select an optimal slice path for a slice, placing VNFs at clouds in a manner that balances required SLA metrics with the impact on network resources.
- To determine an optimal slice path, the optimizer can consider performance requirements of the SLA, cloud resource utilization based on load distribution, and performance-metric prioritization for slices. In general, the optimizer can use the cloud loads to choose a slice path that balances network congestion against SLA requirements. Often, edge clouds will have the best performance metrics but the worst cloud load.
- The optimizer can balance the needs of the particular slice with the overall network utilization, in an example.
- The optimizer can attempt to distribute VNFs in a manner that satisfies the SLA while preserving resources at the various clouds, including the edge cloud.
- The optimizer can receive an SLA attribute required by a slice.
- An SLA attribute can be any required performance metric of the slice.
- Example SLA attributes include maximum latency or round-trip time, minimum bandwidth, and maximum jitter.
- SLA attributes can be different and be prioritized differently between slices, largely depending on the services provided by the slice. For example, high bandwidth may be most important for video streaming, whereas low latency may be most important for automated driving.
- A tenant can specify which SLA attributes apply to a slice, in an example.
- In one example, the SLA attribute is received from a GUI, such as an operator console.
- An operator can manually enter SLA attributes that apply to a slice.
- Alternatively, the SLA attribute can be received from a stored slice record.
- Slice records can be defined for a tenant to programmatically define various slice requirements. For example, a slice record can not only define which SLA attributes apply to the slice, but it can also specify which VNFs are required, particular geographies needed, and monetary spends permitted for the slice. For example, if a service is being offered in San Francisco, a slice record can ensure that particular VNFs are placed near this location.
- One or more required edge clouds can be specified for access to the slice by user devices.
- The optimizer can also receive a maximum number of intercloud links.
- The maximum number of intercloud links can be configured automatically or manually based on slice path performance and to limit the degree of slice-path drift across the clouds. This number can define how many connections between different clouds are permitted for the slice path. Because VNFs can be distributed on a slice path that spans multiple cloud locations, a limitation on the number of links between these clouds can help the optimizer define a universe of candidate slice paths. Additionally, slice performance can generally suffer if too many intercloud links are introduced.
- In one example, the maximum number of intercloud links is between five and ten. The number of permissible intercloud links can be entered into a GUI by an operator, in one example. Different maximum intercloud-link numbers can apply to different slices and tenants.
- The optimizer can determine candidate slice paths relative to an edge cloud.
- In one example, the edge cloud is specified in a slice record associated with the slice or with the tenant to whom the slice is leased or assigned.
- The edge cloud can alternately be selected based on a geographic attribute in the SLA or other information provided by the provider or tenant.
- The optimizer can determine a neighborhood of other available clouds. Combinations of these available clouds can then make up the candidate slice paths.
- The pool of candidate slice paths can be limited based on (1) the number of VNFs in the service chain of the slice and (2) the maximum number of intercloud links. For example, if a slice includes four VNFs, each candidate slice path can include four or fewer clouds. The number and types of VNFs for any particular slice can vary based on the intended use of the slice. A slice record can define a series of VNFs for whatever use the tenant or provider has for that slice. These VNFs can be placed on various clouds, starting with the edge cloud that acts as an access point (for example, for video streaming requests or automobile communications).
- The VNFs can connect with each other over intercloud links when they are located at different clouds, forming the slice path.
- One or more of the VNFs can provide connection, for example, to a particular data center or the internet. Clouds with these VNFs can be considered egress clouds.
- The maximum number of intercloud links can further reduce the pool of candidate slice paths, ensuring that the optimization can be performed in a computationally efficient manner.
- If the maximum number of intercloud links is three, for example, the candidate slice paths can be limited to four or fewer different clouds in any one slice path (since there are three links between four clouds). If there are more than three VNFs, this can mean the candidate slice paths will include at least one cloud with multiple VNFs.
- The number of intercloud links can also be used to eliminate clouds from the neighborhood of potential clouds. For example, clouds that require too many network hops relative to the edge cloud can be left out of the candidate slice paths.
- In one example, cloud 6 can be removed from the neighborhood of available clouds on this basis.
- The maximum number of intercloud links can thus be configured to manage the pool size for candidate slice paths and to limit the degree of slice-path drift across the clouds.
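One way to compute such a neighborhood is a breadth-first search from the edge cloud that drops clouds beyond the link budget. The sketch below is a hypothetical Python illustration; the topology (in which cloud 6 sits three hops from the edge cloud) is assumed, not taken from the patent’s figures.

```python
from collections import deque

def neighborhood(adjacency, edge_cloud, max_intercloud_links):
    """Return clouds reachable from the edge cloud within the link budget."""
    hops = {edge_cloud: 0}
    queue = deque([edge_cloud])
    while queue:
        cloud = queue.popleft()
        if hops[cloud] == max_intercloud_links:
            continue  # do not expand past the link budget
        for neighbor in adjacency.get(cloud, ()):
            if neighbor not in hops:
                hops[neighbor] = hops[cloud] + 1
                queue.append(neighbor)
    return set(hops)

# Hypothetical topology: cloud 6 is three hops from edge cloud 1.
adjacency = {1: [2, 3], 2: [1, 4], 3: [1, 5], 4: [2, 6], 5: [3], 6: [4]}
available = neighborhood(adjacency, 1, 2)  # cloud 6 is excluded
```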
- The optimizer can further limit the candidate slice paths based on performance metrics. For example, performance metrics of the candidate slice paths can be measured to determine whether a candidate slice path complies with SLA requirements. In one example, a prioritized SLA attribute can be used to eliminate candidate slice paths that do not meet the requirements of that SLA attribute.
- As an example, a first slice can prioritize latency over bandwidth, meaning only slice paths that meet the latency SLA requirements will be candidates. A second slice can prioritize bandwidth over latency, causing the optimizer to focus on bandwidth performance. In another example, a slice record can indicate that round-trip time (“RTT”) is prioritized. In response, the optimizer can compute RTT metrics for the candidate slice paths and eliminate candidate slice paths with RTT above the SLA requirement.
- The optimizer can create an intercloud matrix that includes an RTT between each pair of candidate clouds. Using these RTT values, the optimizer can derive a total RTT for each candidate slice path. The derivation can be a sum or other function. This total RTT value can be stored as a dimensional performance cost for each candidate path. Other dimensional performance costs can be determined for each candidate slice path using a similar methodology.
- In one example, the dimensional performance costs are normalized.
- The normalization can be proportional to the SLA attribute. For example, if the SLA attribute specifies a maximum RTT of 50 milliseconds, normalization can include dividing the total RTT value (the dimensional performance cost) by 50 milliseconds. A result greater than 1 indicates the SLA attribute is not met, whereas a result less than 1 indicates it is met.
- Different linear or non-linear functions can be used to normalize values for an SLA attribute.
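Using the 50-millisecond example above, the path RTT and its proportional normalization can be sketched as follows (the cloud names and RTT values in the matrix are hypothetical):

```python
def total_rtt(path, rtt_matrix):
    """Sum the pairwise RTTs along consecutive clouds in the slice path."""
    return sum(rtt_matrix[a][b] for a, b in zip(path, path[1:]))

def normalize(cost, sla_limit):
    """Proportional normalization: a result above 1.0 violates the SLA attribute."""
    return cost / sla_limit

# Hypothetical intercloud RTT matrix, in milliseconds.
rtt_matrix = {
    "edge":  {"edge": 0, "core1": 20, "core2": 35},
    "core1": {"edge": 20, "core1": 0, "core2": 15},
    "core2": {"edge": 35, "core1": 15, "core2": 0},
}
path = ("edge", "core1", "core2")
score = normalize(total_rtt(path, rtt_matrix), sla_limit=50)  # (20 + 15) / 50 = 0.7
```

A score of 0.7 means this candidate meets the 50-millisecond SLA attribute with headroom; the same normalization could be applied to other dimensional costs.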
- The candidate slice paths can be ranked according to the normalized performance cost, in an example. Candidates that do not comply with the SLA attribute can be omitted.
- The optimizer can instead retain some number of candidate slice paths organized by how close they come to meeting the SLA requirement. This can help ensure that the optimizer chooses a slice path that is close to satisfying the SLA requirement. For example, a minimum of ten candidate slice paths can be retained in one example, even if not all of the candidate slice paths meet the SLA requirement. In that case, the non-compliant candidate slice paths can be ranked according to how close they come to meeting the SLA requirement, and those falling below the threshold number of candidate slice paths can be omitted.
- The optimizer can also narrow the pool of candidate slice paths based on specific VNF requirements, in an example.
- For example, a slice record can specify that a particular VNF in the function chain is required to be within a certain distance of a geographic location or to have direct connectivity to a particular egress point.
- A VNF requirement can also specify a particular cloud to use with a VNF. These sorts of VNF requirements can be useful, for example, when the slice must connect to a geographically specific data center or have a specific egress point.
- The optimizer can use the VNF requirements to limit the pool of candidate slice paths accordingly. For example, if a slice record specifies a geographic requirement for VNF3, the optimizer can limit candidate slice paths to those where VNF3 is on a cloud meeting the geographic requirement.
- the candidate slice paths are determined by an intersection of two sets of slice paths.
- the first set can include every slice path in the neighborhood of an edge cloud that uses no more clouds than the number of VNFs and no more than the maximum number of intercloud links.
- the second set can include every slice path that satisfies the SLA for a performance metric.
- the second set can include slice paths closest to satisfying the SLA when no candidate slice paths satisfy it. Then the optimizer can take the intersection of the first and second sets. The remaining slice paths can be the candidate slice paths.
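The two-set intersection above can be sketched as plain set arithmetic. The path tuples and set contents below are hypothetical placeholders, chosen only to illustrate the operation.

```python
# First set: slice paths in the edge cloud's neighborhood that use no
# more clouds than there are VNFs and stay within the link budget.
neighborhood_paths = {("c0", "c1", "c3"), ("c0", "c0", "c2"),
                      ("c0", "c2", "c2"), ("c0", "c4", "c5")}

# Second set: slice paths whose measured metrics satisfy the SLA (or the
# closest paths, when none satisfy it).
sla_compliant_paths = {("c0", "c1", "c3"), ("c0", "c2", "c2"),
                       ("c1", "c2", "c3")}

# The candidate slice paths are the intersection of the two sets.
candidate_paths = neighborhood_paths & sla_compliant_paths
```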
- the optimizer can determine loads for the candidate slice paths. For example, the optimizer can determine load values for each cloud in the candidate slice paths, then add those up. As mentioned previously, the optimizer can use the cloud loads to balance network congestion against SLA requirements.
- One goal of orchestrating a Telco network is to avoid overburdening a cloud and allow for greater network scalability. The optimizer therefore can use load values to select an optimal slice path that uses clouds that may be underutilized compared to clouds in other candidate slice paths. The optimizer can attempt to distribute VNFs in this manner while still satisfying SLA requirements.
- Cloud loads can be calculated proportionally to the demands of a slice, in one example. For example, if an edge cloud has 100 vCPUs and ten slices, the edge node can be considered 90% utilized from the perspective of any slice utilizing 9 vCPUs. However, different examples can calculate cloud loads differently. Load can be based on, for example, compute load (i.e., the percentage of vCPUs utilized), storage, network capacity, bandwidth, or other metrics that can be assigned a cost.
- the load values can also be normalized, in an example.
- the function for normalizing the load value can depend on the manner in which the load is calculated.
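The proportional load calculation from the example above (100 vCPUs, ten slices, a slice using 9 vCPUs) can be sketched in a few lines. The function name and signature are illustrative.

```python
def proportional_load(slice_vcpus, num_slices, total_vcpus):
    """Fraction of the cloud consumed if every slice on it demanded as
    much compute as this slice does."""
    return (slice_vcpus * num_slices) / total_vcpus

# The example above: an edge cloud with 100 vCPUs and ten slices looks
# 90% utilized from the perspective of a slice using 9 vCPUs.
load = proportional_load(slice_vcpus=9, num_slices=10, total_vcpus=100)
```

As the text notes, other examples can instead base load on storage, network capacity, bandwidth, or absolute compute usage.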
- the optimizer can also weight the candidate paths based on the load values. In general, the optimizer can weight candidates negatively for high loads and positively for low loads. This can cause a candidate slice path with lower network utilization to be ranked ahead of one with high utilization if performance metrics are otherwise equal.
- the optimizer can use the loads and performance metrics to determine a slice path with the best composite score.
- this can include normalizing and weighting the dimensional performance costs (performance metrics) and the loads. Then those values can be combined together to arrive at a composite score that, to some degree, represents both.
- the optimizer can separately weight normalized load costs and normalized performance costs to create the weighted candidate slice paths, in an example.
- the relative balance of these different weights can be based on selections from an operator or values from an orchestration process, in an example.
- a higher relative weight for loads can indicate an emphasis on balancing the network versus providing peak slice performance.
- a load weight can be twice as much as a performance weight.
- the weights can be multiplied against the normalized values.
- the resulting weighted costs for performance metrics and loads can be summed or otherwise used to determine a composite score from which the optimal slice path is selected.
- the optimizer can identify a slice path with the best composite score based on the weighted candidate slice paths.
- creating a composite slice path can include calculating a composite value based on the load and performance values.
- the weighted load and performance costs are added together to yield a composite score.
- the candidate slice paths can be ranked based on the lowest composite score, and the top-ranked result can be identified as the slice path with the best composite score. This can be the optimal slice path.
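The composite scoring above can be sketched as a weighted sum over normalized costs, with the lowest score winning. The weights and the per-path values are illustrative (the 2:1 load-to-performance ratio echoes the example weighting mentioned earlier, not a required configuration).

```python
LOAD_WEIGHT = 2.0   # e.g., load weighted twice as heavily as performance
PERF_WEIGHT = 1.0

def composite_score(norm_load, norm_perf):
    """Weighted sum of normalized load and performance costs."""
    return LOAD_WEIGHT * norm_load + PERF_WEIGHT * norm_perf

# (normalized load, normalized performance cost) per candidate path
candidates = {
    "path-53":  (0.35, 0.496),
    "path-168": (0.95, 0.040),   # best RTT, but heavily loaded clouds
}
best = min(candidates, key=lambda p: composite_score(*candidates[p]))
```

In this toy pool, "path-53" wins even though "path-168" has the better performance cost, because its clouds are far less loaded.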
- Using the composite score can allow the optimizer to identify an optimal slice path that meets the SLA (or violates the SLA to the lowest extent) while balancing network resources. This means the optimizer will not necessarily choose the shortest or fastest possible path between VNFs. This can provide technical advantages over algorithms such as Dijkstra's algorithm, which is single-dimensional and would be used only to find the shortest path. By considering multiple dimensions together, the optimizer can balance SLA compliance and network performance. This approach can also allow the optimizer to flexibly incorporate additional SLA attributes and new weights, and to react to changing network conditions.
- the orchestrator or optimizer can provision the VNFs in the manner specified by the slice path with the best composite score.
- This best composite slice path includes associations between each VNF and a respective cloud.
- the orchestrator or optimizer can provision the VNFs at those clouds. For example, if the slice path with the best composite score indicates VNF1 at cloud-0, VNF2 at cloud-5, and VNF3 at cloud-6, the optimizer can send a message to an orchestrator process identifying these VNFs and clouds.
- the orchestrator can then instantiate VNF1 at the cloud location associated with cloud-0, VNF2 at a second cloud location associated with cloud-5, and VNF3 at a third cloud location associated with cloud-6.
- the orchestrator can also provide information to each VNF so that they can communicate with one another as intended for the service function chain of the slice.
- FIG. 1B is an example flow chart of stages for optimizing VNF placement in a slice path.
- the optimizer receives a GUI selection that weights a first SLA attribute relative to a second SLA attribute.
- the GUI can be part of an orchestrator console, in an example. Alternatively, it can be part of a tenant-facing portal that allows a tenant to control which SLA attributes should be prioritized.
- the GUI includes a slider for moving a weight between two SLA attributes. This can allow one of the SLA attributes to be the priority or both SLA attributes to be equal (and therefore both priority).
- the GUI can allow selection of multiple SLA attributes and the user can set weights for each one. The SLA attributes can be weighted relative to one another based on the weights set for each one.
- the optimizer can determine candidate slice paths relative to an edge cloud. This stage can occur as part of initially provisioning a slice. Additionally, the optimizer can dynamically perform stage 170 based on an orchestrator or the optimizer determining that a new slice path is needed, in an example. For example, an orchestrator can detect a high load at a particular cloud that is utilized by the current slice path. Alternatively, the orchestrator can detect that performance metrics for a slice no longer meet an SLA requirement. This can cause the optimizer to determine a new slice path to bring the slice back into SLA compliance or alleviate network load.
- the optimizer can determine a neighborhood of clouds based on the maximum number of intercloud links.
- the optimizer can create a pool of candidate slice paths that includes every combination of the neighborhood of clouds, in an example, limited to the number of VNFs per candidate slice.
- Each candidate can have a unique VNF- to-cloud assignment combination, also referred to as a unique permutation. For example, a first candidate can assign VNF1 to cloud-0 and VNF2 to cloud-1, whereas a second candidate assigns VNF1 to cloud-0 and VNF2 to cloud-2.
- the permutations can also take into account the order of VNFs, since the service function chain can require traversing VNFs in order.
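The order-sensitive permutation step above can be sketched with `itertools.product`. The cloud names and VNF count are illustrative.

```python
from itertools import product

def candidate_slice_paths(neighborhood, num_vnfs):
    """Every ordered VNF-to-cloud assignment over the neighborhood."""
    return list(product(neighborhood, repeat=num_vnfs))

paths = candidate_slice_paths(["cloud-0", "cloud-1", "cloud-2"], num_vnfs=2)
# 3 clouds ** 2 VNFs = 9 ordered assignments; ("cloud-0", "cloud-1") and
# ("cloud-1", "cloud-0") are distinct candidates because order matters.
```

In practice the pool would then be pruned by the maximum-intercloud-link limit and by SLA compliance, as described in the surrounding stages.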
- the optimizer omits candidate slice paths that do not satisfy the prioritized SLA attribute. For example, if the first SLA attribute is weighted more highly than the second SLA attribute, the optimizer can create a pool of candidate slice paths that satisfy the first SLA attribute. If both the first and second SLA attributes are prioritized, then the optimizer can determine a pool of candidate slice paths that have performance metrics satisfying both SLA attributes. However, if no candidates satisfy the SLA, then those with the closest performance can be kept as candidate slice paths.
- the optimizer can rank the candidate slice paths based on the relative weightings of the first and second SLA attributes. In one example, this can include normalizing each of the corresponding SLA metrics of the candidate slice paths, then multiplying by the respective weight values. For example, if the first SLA metric is RTT and the second SLA metric is slice throughput, the first weight can be applied to the normalized RTT value and the second weight can be applied to the normalized throughput value. As explained for stages 130 and 140, normalizing can include applying a function to the metric values. The function applied can vary for different SLA attributes. For example, normalizing RTT can be done linearly by dividing by the SLA attribute value. Slice throughput, on the other hand, can have a non-linear function that favors high throughput by returning a much lower number once a throughput threshold is achieved.
- the optimizer can use a lookup table to map performance metrics to normalized costs.
- the lookup table can define a transform function for each type of performance metric.
- the functions can be linear or non-linear and can be based on the SLA attributes for the slice.
- a normalized value of 0 to 1 indicates SLA compliance.
- a metric-to-cost table can map RTT values into float64 values that are normalized such that values between 0 and 1 satisfy the SLA for RTT, whereas any value above 1 does not satisfy the SLA.
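A metric-to-cost table like the one described can be sketched as a mapping from metric type to transform function. The SLA figures and the particular non-linear throughput transform below are assumptions for illustration, not the patent's functions.

```python
SLA_MAX_RTT_MS = 50.0
SLA_MIN_THROUGHPUT_MBPS = 100.0

metric_to_cost = {
    # RTT normalizes linearly against the SLA maximum.
    "rtt": lambda ms: ms / SLA_MAX_RTT_MS,
    # Throughput uses a non-linear transform that returns a much lower
    # cost once the SLA throughput threshold is comfortably exceeded.
    "throughput": lambda mbps: (SLA_MIN_THROUGHPUT_MBPS / mbps) ** 2,
}

rtt_cost = metric_to_cost["rtt"](24.8)           # < 1, SLA satisfied
tput_cost = metric_to_cost["throughput"](200.0)  # < 1, SLA satisfied
bad_cost = metric_to_cost["rtt"](60.0)           # > 1, SLA violated
```

Either way, the convention is the same: normalized values in [0, 1] indicate compliance, and values above 1 indicate (and rank the degree of) violation.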
- the weights are applied to the normalized metric values for the candidate slice paths.
- Load values for the clouds can be normalized and weighted, as described for stage 130. This can allow the optimizer to then determine a top-ranked slice path based on a composite score that factors in both the weighted performance metrics and the weighted loads. The provider can shift the prioritization of load distribution versus performance as needed by dynamically adjusting the weight applied to the load versus performance metrics.
- the top-ranked slice path can be provisioned.
- an orchestrator process instantiates VNFs at the corresponding clouds of the top-ranked slice path. This can include setting the VNFs to communicate with one another across the distributed cloud network.
- the optimizer can continue monitoring the slice and occasionally re-perform stages 170, 180, and 190 if a performance metric becomes non-compliant with the SLA or the slice load exceeds a threshold.
- the optimizer can determine a new top-ranked slice path and an orchestration process can provision at least one of the VNFs at a new cloud.
- the orchestrator can also configure the other VNFs to communicate with the newly instantiated VNF. VNF instances that are no longer part of the function chain can be terminated.
- FIG. 2 is an example sequence diagram for dynamically provisioning VNFs in a slice.
- the optimizer can read a slice record to retrieve one or more SLA attributes and information about the slice.
- the slice record can also identify which VNFs are needed in the slice path, specific VNF requirements, and monetary constraints.
- the optimizer can retrieve weight values to apply to one or more performance metrics or loads.
- the weights can be input into a GUI in one example, such as by an administrator associated with the provider.
- the GUI allows tenant access for setting weights for SLA attributes, including setting which SLA attribute is a primary attribute for the slice.
- the weights can also be received as functions from an orchestrator as part of an optimization request, in an example.
- stage 210 can be performed later as part of stage 220, to determine a lowest score for composite slice paths.
- the optimizer can determine candidate slice paths. This can be based on the SLA attributes received from the slice record at stage 205, in an example. In one example, the optimizer creates candidate slice paths that each have the same or fewer clouds as the number of VNFs or the maximum number of intercloud links. The optimizer can create a first set on that basis. The optimizer can then eliminate candidate slice paths that do not comply with one or more of the SLA attributes. The remaining candidate slice paths can make up the pool from which an optimal slice path is chosen. For example, this can include choosing the candidate slice path with the lowest composite score.
- the optimizer can determine which of the candidate slice paths has the lowest composite score. This can include applying the weights received at stage 210.
- the performance metrics can be normalized and weighted.
- load values can be normalized and weighted. Then these weighted values can be added together to result in a composite score.
- the top-ranked candidate slice path can be the one with the lowest composite score.
- in another example, the various functions and weights can instead be fashioned to produce a high score for the top-ranked slice path.
- the optimizer can then cause the VNFs to be provisioned at the specific clouds included in the top-ranked candidate slice path.
- the optimizer sends a request to an orchestrator process to perform the provisioning.
- the optimizer can be part of the orchestrator.
- the top-ranked candidate slice path can specify that VNF1 is on cloud 0, which can be an edge cloud. It can also specify that VNF2 and VNF3 are both on cloud 2.
- Cloud 0 can be an index that the optimizer uses to look up provisioning information for a host server, cluster, or cloud location.
- Cloud 1, cloud 2, and cloud 3 can be other indices used for this purpose.
- the orchestrator can provision VNF1 at cloud 0.
- the orchestrator can provision VNF2 at cloud 2, and at stage 226 VNF3 can be provisioned at cloud 2. These VNFs can be configured by the orchestrator to talk to one another.
- Provisioning can include instantiating one or more VMs at each cloud location, including one or more vCPUs for executing the functionality of the respective VNF.
- the optimizer (or orchestrator) can detect that a cloud is overloaded. If the slice is using the cloud, the optimizer can determine a new top-ranked slice path. For example, if cloud 2 has a load that exceeds a threshold, the new top-ranked slice path can be calculated at stage 235. This can include determining which slice path has the new lowest composite score. In this example, the new top-ranked slice path can place VNF1 at cloud 1, VNF2 at cloud 2, and VNF3 at cloud 3. VNF3 can, for example, be vCPU intensive, such that moving it to cloud 3 helps balance network load.
- the optimizer (or orchestrator) can determine a new cloud path based on a performance metric no longer meeting an SLA attribute.
- the orchestrator can periodically check performance of the slice, in an example. If performance falls below the SLA requirements, a new slice path with the lowest composite score can be calculated at stage 235.
- the orchestrator can provision the VNFs at their new locations at stages 242, 244, and 246.
- VNF2 is not re-provisioned, but instead is simply reset to talk to VNF1 and VNF3 at their new locations.
- all three VNFs are re-instantiated at the respective cloud locations when the new slice path is created.
- FIG. 3 is an example system diagram for purposes of explaining how an optimizer determines candidate cloud paths and selects one based on a composite value.
- the composite value can represent multiple dimensions of performance metrics and loads, allowing for the optimizer to determine VNF placement based on both SLA requirements and cloud resource allocation.
- FIGs. 4A-D include example tables of values to explain various stages of the example optimizer operation.
- an optimizer can determine cloud placement for three VNFs 351, 352, 353 in a service chain for a slice.
- VNFs also shown as VI, V2, and V3, can be provisioned on various clouds 310, 320, 330, 340 in the Telco network.
- the illustrated slice path includes VI at edge cloud 310 (Cloud-0), V2 at a first core cloud 320 (Cloud-1), and V3 at a third core cloud 340 (Cloud-3). Access to the slice can occur from a cell tower 305, which sends data to Cloud-0.
- Each cloud 310, 320, 330, 340 can communicate with the others using intercloud links with performance metric costs 314, 316, 323, 324, 334.
- the costs are represented by Cp, where p designates the slice path.
- C0,3 indicates a performance metric cost between Cloud-0 and Cloud-3.
- each cloud 310, 320, 330, 340 can be assigned a load value 312, 322, 332, 342 based on load functions utilized by the optimizer. This can be a compute load based on total vCPU usage at the cloud, in an example.
- the optimizer can attempt to determine a new slice path that satisfies SLA requirements and balances the orchestration of resources in the multi-cloud environment of FIG. 3. This can be different than merely implementing a shortest path algorithm, such as Dijkstra, because multiple graphs can be considered across several domains, such as RTT, bandwidth, and cloud load. Each can contribute to a composite score and selection of an optimal slice path.
- the slice can be defined as [VI, V2, V3] with the SLA specifying a maximum of 50 millisecond RTT on the slice.
- the optimizer can determine a neighborhood of available clouds relative to the edge cloud 310 (Cloud-0). This can include limiting the available clouds based on the number of VNFs and the maximum number of intercloud links. In this example, seven other neighboring clouds can be available for VNF placement. Each of these clouds can be given an index.
- This neighborhood can be used to determine candidate slice paths.
- a few such candidate slice paths are shown in FIG. 4A.
- This table uses the slice’s VNFs at column indices 410 and each row 405 represents a potential candidate slice path.
- candidate slice path 412 can map VI to Cloud-0, V2 to Cloud- 1, and V3 to Cloud-3. This corresponds to the slice path shown in FIG. 3.
- FIG. 4 A illustrates just four such candidate slice paths, but many more can be determined.
- the optimizer creates a first set of slice paths that includes every unique combination of VNFs to the neighborhood clouds, relative to the edge cloud (Cloud-0).
- the optimizer can measure a load value for each available cloud.
- FIG. 4B presents load measurements for the neighborhood of available clouds.
- each cloud has an index 415 and a load value 420.
- Cloud-0 is an edge cloud.
- Cloud-2, on the other hand, is only 17.8% utilized.
- load can represent the fraction of proportionally allocated compute resources currently required for this slice at the respective cloud.
- load can represent the absolute value of computer resources at the cloud.
- the optimizer can measure the performance metric for RTT for each candidate cloud path. To do this, the optimizer can create a matrix with RTT values between each pair of clouds.
- FIG. 4C illustrates the RTT matrix.
- the row index 425 can correspond to source clouds and the column index 430 can correspond to destination clouds.
- the value at each location in the table can represent RTT in milliseconds between the source and destination clouds. In this example, RTT for the same cloud as source and destination is estimated to be 1 millisecond.
- the other intercloud links have RTT values between 10 and 50 milliseconds.
- the optimizer can create similar matrices for other performance metrics, in an example.
- the optimizer can determine dimensional costs (also called values), C p , of the performance metrics and loads for each candidate slice path.
- FIG. 4D illustrates example dimensional costs 445 for cloud load 446 and RTT 447. These values can be determined for candidate slice paths 425 having different cloud placements 440 for the three VNFs.
- candidate slice path 465, PATH 53, includes a cloud load of 1.031 and an RTT of 24.8.
- a different method of deriving the dimensional costs is possible.
- the optimizer can determine the other dimensional costs 445 for load 446 and RTT 447 following this methodology to solve for Cp. For each possible candidate slice path, the optimizer can sum the corresponding RTT values and load values.
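The dimensional-cost derivation above (sum the intercloud RTTs along the chain, including the 1 millisecond same-cloud hop, and sum the per-cloud loads) can be sketched as follows. The matrix and load values are illustrative placeholders, not the figures from FIG. 4C or 4D.

```python
rtt_matrix = {   # rtt_matrix[src][dst], in milliseconds
    "c0": {"c0": 1, "c1": 12, "c2": 20},
    "c1": {"c0": 12, "c1": 1, "c2": 10},
    "c2": {"c0": 20, "c1": 10, "c2": 1},
}
cloud_load = {"c0": 0.5, "c1": 0.25, "c2": 0.125}

def dimensional_costs(path):
    """Total RTT across consecutive hops plus total load for the path."""
    total_rtt = sum(rtt_matrix[a][b] for a, b in zip(path, path[1:]))
    total_load = sum(cloud_load[c] for c in path)
    return total_rtt, total_load

# VNF1 on c0, VNF2 and VNF3 both on c1: hops c0->c1 (12 ms) and the
# 1 ms same-cloud hop c1->c1.
rtt, load = dimensional_costs(("c0", "c1", "c1"))
# rtt == 13, load == 1.0
```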
- the optimizer can determine normalized costs 450 by applying normalization functions to the dimensional costs 445.
- the normalization function, fk, can vary for different dimensional costs, but can result in normalized costs 450.
- Example methods for solving for gk can include any of Equations 1-3 below.
- the normalization function can be relative to the SLA attribute corresponding to the dimensional cost (e.g., performance metric value) being normalized. In one example, if the normalized value is between 0 and 1, the SLA is satisfied. Otherwise, the SLA is violated.
- the candidate slice paths 425 all have normalized RTT values between 0 and 1.
- the optimizer has removed other slice paths that do not comply with the SLA for RTT.
- PATH 168 best satisfies the SLA, with a normalized RTT value 476 of .040.
- PATH 168 also includes a relatively poor load value 475 of 11.913 because in that slice path all of the VNFs are assigned to the edge cloud (Cloud-0).
- the optimizer can apply weights, wk, to the normalized costs to create weighted costs 455.
- the weights can be adjusted or specified by the orchestrator or an administrator.
- the top-ranked candidate slice path 465 is PATH 53, with the lowest composite value 470.
- the RTT for PATH 53 is not the lowest available RTT, which belongs to PATH 168. But PATH 53 provides a better balance of network load while still maintaining SLA compliance. Therefore, it has the best composite score and is selected as the optimal slice path, Popt.
- the optimizer can determine composite weighted cost 455 using Equation 4, below.
- Popt can be selected based on the lowest value for Gp, in an example.
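As a hedged sketch only (the exact form of Equation 4 is not reproduced here), a composite score consistent with the surrounding description sums each normalized dimensional cost gk for path p, multiplied by its weight wk, and picks the path minimizing the result:

```latex
G_p = \sum_{k} w_k \, g_k^{p},
\qquad
P_{\mathrm{opt}} = \arg\min_{p} G_p
```

Here each g_k^p is the normalized cost of dimension k (for example, RTT or cloud load) for candidate slice path p, and w_k is the weight applied to that dimension.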
- FIG. 5 is an example illustration of a GUI screen 510 for adjusting optimizer performance in selecting VNF placement in a slice.
- the GUI can allow an administrator to configure various aspects of the optimizer functionality.
- an administrator can select a slice record 515.
- the slice record can be associated with a tenant, in an example.
- the slice record can provide a definition of a service function chain, including which VNFs are part of the slice and what compute resources each need.
- the slice record can also indicate interservice link needs for the VNFs.
- the slice record includes SLA requirements. Additional VNF attributes can also be included in the slice record. For example, a VNF preference for a core or edge node can be included. Another example VNF preference can include geographic criteria. For example, a service running only in San Francisco can include a VNF preference for that area.
- the slice record can also specify a particular edge node, in one example.
- the GUI can include a field for configuring the degree to which the optimizer determines candidate cloud paths, in one example.
- the GUI can contain a field 540 for entering the maximum number of intercloud links.
- the administrator or an orchestration process can increase or decrease this number to change the edge radius—that is, the number of permitted links to other clouds. This can control the pool size of the candidate slice paths in an example. When the number is lowered, this can increase computing efficiency for the dynamic inter-cloud placement of VNFs.
- the GUI can also include fields 520, 530 for selecting SLA attributes.
- these fields 520, 530 can be dropdown selectors that include all of the SLA attributes from the slice record.
- SLA attributes related to RTT and jitter have been selected.
- the GUI also can contain a control 525 for weighting the SLA attributes.
- the control 525 is a slider that simultaneously increases the weight of one attribute while decreasing the weight of another.
- the GUI can include individual weight controls for adjusting weights relative to multiple SLA attributes.
- the weights can be used by the optimizer for determining the top-ranked slice path.
- the highest weighted SLA attribute can be treated as the primary SLA attribute and used to determine which slice paths are candidate slice paths.
- Another field 535 can be used to apply particular cloud definitions to the optimizer.
- this can include selecting a file that defines attributes for one or more clouds.
- a provider may need to designate geographical characteristics of a cloud (for example, located in Kansas or California). If a tenant wants a geographically specific use, such as smart cars in California, the specification for clouds in California can be used by the optimizer to limit the potential candidate slice paths.
- the optimizer can consider all attributes ascribed to a VNF against various attributes ascribed to clouds.
- a cloud map can be presented on the GUI, allowing the provider to lasso some clouds and define attributes or apply settings.
- the administrator can also select load functions using a GUI element 545 in one example.
- the selected load function or functions can determine how cloud load is defined and calculated. In this example, the selection bases load on vCPU usage.
- the load function can be flexible and based on other attributes.
- the load can be an absolute value of compute resources at a cloud or it can be the fraction of proportionally allocated compute resources being used by the slice at the respective cloud.
- the GUI can also provide a selection 555 of normalization functions.
- a script, file, or table can define which functions are applied to which SLA attributes or loads.
- the functions can be linear or non-linear.
- the goal of the normalization can be to normalize performance metrics relative to each other and to the load metrics. This can allow the weights to more accurately influence the importance of each in determining optimal VNF placement in the slice.
- normalization functions are provided as a metric-to-cost transform table.
- the table can map particular performance metrics to normalized metric values that are based on SLA satisfaction. For example, functions for different metric types can map the metrics of each cloud to normalized numbers between 0 and 1 when the SLA is satisfied, and numbers greater than 1 when it is not. Lower numbers can indicate a higher degree of satisfaction. Therefore, a number slightly greater than 1 can indicate the SLA is nearly satisfied. In extreme cases where network load results in no candidate slice paths that satisfy the SLA, candidate slice paths can be ranked based on how close they are to 1.
- the GUI can also include one or more fields 550 for displaying or defining monetary cost maximums.
- Monetary costs can vary for each cloud, depending on the cloud’s current load and the amount of load required for a particular VNF. In one example, cloud paths are negatively weighted when the total cost for VNF placement exceeds cost maximums. Monetary costs can be normalized similarly to performance metrics or loads.
- the normalization functions of selection 555 can include functions for normalizing slice path costs, in an example. This can allow costs to be weighted and included in the composite scoring.
- FIG. 6 is an example system diagram including components for dynamic inter-cloud VNF placement in slices.
- a distributed Telco cloud network 600 can include edge clouds 620 and core clouds 640. Slices 672, 678, 682 can be distributed across these clouds 620, 640.
- Each cloud 620, 640 can have physical and virtual infrastructure for network function virtualization (“NFV”) 642.
- physical servers 644, routers, and switches can run VMs 646 that provide VNF functionality.
- a slice can include a first VNF that executes on an edge cloud 620.
- the VNF can utilize one or more vCPUs, which can be one or more VMs 624 in an example.
- the edge cloud 620 can execute numerous VNFs, often for multiple tenants where the VNFs are part of various slices.
- the slices can be kept separate from a functional perspective, with VNFs from different slices not aware of the existence of each other even when they rely on VMs 624 operating on shared physical hardware 622.
- a first VNF in the slice path can communicate with a second VNF, which can be located in a different cloud 640.
- the second VNF can include one or more VMs 646 operating on physical hardware 644 in a core cloud 640.
- the second VNF can communicate with yet another VNF in the slice path.
- One or more of these VNFs can act as an egress to the internet 660, in an example.
- One or more user devices 602 can connect to a slice in the Telco network 600 using, for example, a 3G, 4G, LTE, or 5G data connection.
- the user devices 602 can be any physical processor-enabled device capable of connecting to a Telco network. Examples include cars, phones, laptops, tablets, IoT devices, virtual reality devices, and others.
- Cell towers 605 or other transceivers can send and receive transmissions with these user devices 602.
- slice selectors 608 can receive data sent from the user devices 602 and determine which slice applies.
- the slice selectors 608 can operate as VMs 624 in the edge cloud or can run on different hardware connected to the edge cloud 620.
- a provider can run a topology 665 of management processes, including an orchestrator 668.
- the orchestrator 668 can include the optimizer process.
- the optimizer can be part of the topology 665 that works with the orchestrator 668.
- the orchestrator can be responsible for managing slices and VNFs, in an example. This can include provisioning new slices or re-provisioning existing slices based on performance metrics and network load.
- the orchestrator can run on one or more physical servers located in one or more core clouds 640 or separate from the clouds.
- the orchestrator 668 can provide tools for keeping track of which clouds and VNFs are included in each slice.
- the orchestrator can further track slice performance for individual tenants 670, 680, and provide a management console such as shown in FIG. 5.
- the orchestrator 668 can also receive performance metrics and load information and determine when the optimizer should find a new slice path.
- a first tenant 670 has multiple slices 672, 678.
- Each slice 672, 678 can be defined by a slice record that indicates VNF requirements for that slice.
- VNFs can each provide different functionality in the service chain.
- VNF attributes can be used to favor certain clouds over others.
- a first slice can have a first VNF 674 that must be on an edge cloud 620 at a particular location.
- the first slice can also have a second VNF 676 that acts as an egress point, and therefore is best placed in a core cloud 640.
- the orchestrator 668 can rely on the optimizer to dynamically determine VNF placement for a slice path. Then, the orchestrator 668 can provision VNFs based on the determinations made by the optimizer. This can include instantiating new VMs at the clouds 620, 640 identified by the optimizer.
- the orchestrator 668 can also change settings in the slice selectors 608 to ensure traffic reaches the correct slice 670.
Abstract
Examples can include an optimizer that dynamically determines where to place virtual network functions for a slice in a distributed Telco cloud network. The optimizer can determine a slice path that complies with a service level agreement and balances network load. The virtual network functions of the slice can be provisioned at clouds identified by the optimal slice path. In one example, performance metrics are normalized, and tenant-selected weights can be applied. This can allow the optimizer to prioritize particular SLA attributes in choosing an optimal slice path.
Description
DYNAMIC INTER-CLOUD PLACEMENT OF VIRTUAL NETWORK FUNCTIONS
FOR A SLICE
Jeremy Tidemann, Constantine Polychronopoulos, Marc Andre Bordeleau, Edward Choh,
Ojas Gupta, Robert Kidd, Raja Kommula, Georgios Oikonomou
CROSS-REFERENCE TO RELATED APPLICATIONS
[001] This application claims priority to, and the benefit of, pending U.S. Non-Provisional Patent Application No. 16/256,659, filed on January 24, 2019, and entitled “DYNAMIC INTER-CLOUD PLACEMENT OF VIRTUAL NETWORK FUNCTIONS FOR A SLICE”, which is incorporated herein by reference in its entirety.
BACKGROUND
[002] Today’s 3G, 4G, and LTE networks operate using multiple data centers (“DCs”) that can be distributed across clouds. These networks are centrally managed by only a few operating support systems (“OSSs”) and network operations centers (“NOCs”). 5G technology will dramatically increase network connectivity for all sorts of devices that will need to connect to the Telco network and share the physical network resources. Current network architectures cannot scale to meet these demands.
[003] Network slicing is a form of virtualization that allows multiple logical networks to run on top of a shared physical network infrastructure. A distributed cloud network can share network resources with various slices to allow different users, called tenants, to multiplex over a single physical infrastructure. For example, Internet of Things (“IoT”) devices, mobile broadband devices, and low-latency vehicular devices will all need to share the 5G network. These different applications will have different transmission characteristics and requirements. For example, the IoT will typically have a large number of devices but very low throughput. Mobile broadband will be the opposite, with each device transmitting and receiving high bandwidth content. Network slicing can allow the physical
network to be partitioned at an end-to-end level to group traffic, isolate tenant traffic, and configure network resources at a macro level.
[004] However, current slicing technology is confined to a single datacenter.
Applying this technology across multiple clouds to accommodate slices on the physical network is insufficient and introduces several problems. Because demands fluctuate at different times and locations, a particular geographic location may not have enough compute resources or bandwidth to simply reserve multiple slice paths in a static fashion. Doing so can create bottlenecks and other inefficiencies that limit the gains otherwise promised by 5G technology. As an example, one company may want a long-term lease of a first network slice for connectivity of various sensors for IoT tasks. Meanwhile, a sporting event can require a short-term lease of a network slice for mobile broadband access for thousands of attendees. With a static approach, it may be impossible to satisfy both requirements of the physical network.
[005] Current methods for determining placement of virtual network functions (“VNFs”) in a slice do not take multi-cloud network demands into account. Instead, they consider a single data center at a time when determining how to scale out while meeting network demands. From a multiple-cloud standpoint, existing technologies simply determine the shortest or fastest path between VNFs. Not only will this create bottlenecks, but it also falls short of determining the best arrangement for the particular slice, leaving a slice’s particular performance metric needs unaddressed. In addition, statically placing these VNFs in slices on the physical network can again inefficiently reserve physical and virtual resources that are not needed or that change over a time period. For example, a sporting event could cause existing slice performance to suffer and fall below service level agreement (“SLA”) requirements.
[006] As a result, a need exists for dynamic cross-cloud placement of VNFs within network slices.
SUMMARY
[007] Examples described herein include systems and methods for dynamic inter-cloud VNF placement in a slice path over a distributed cloud network. The slices can span a multi-cloud topology. An optimizer can determine a slice path that will satisfy an SLA while also considering cloud load. The optimizer can be part of an orchestrator framework for managing a virtual layer of a distributed cloud network.
[008] In one example, the optimizer can identify an optimal slice path that meets the SLA (or violates the SLA to the lowest extent), while balancing network resources. This means the optimizer will not necessarily choose the shortest or fastest possible path between VNFs. This can provide technical advantages over algorithms such as a Dijkstra algorithm that is single dimensional and would be used to find the shortest path. By considering multiple dimensions all together, the optimizer can balance SLA compliance and network performance. This approach can also allow the optimizer to flexibly incorporate additional SLA attributes, new weights, and react to changing network conditions.
[009] In one example, the optimizer can start determining a slice path from an edge cloud. This can include determining a neighborhood of available clouds based on the number of VNFs in the slice and a maximum number of intercloud links. Limiting intercloud links can keep the pool of candidate slice paths more manageable from a processing standpoint. Each candidate slice path can include VNFs placed at a different combination of clouds. The number of different clouds in a candidate slice path can be less than or equal to both (1) the total number of available clouds or (2) the maximum number of intercloud links plus one. Within those boundaries, permutations of VNF-to-cloud assignments can be considered as candidate slice paths.
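The bound on the candidate pool described above can be sketched in Python. This is an illustrative reconstruction, not the patent's implementation; the function name and parameters are assumptions.

```python
from itertools import product

def candidate_slice_paths(available_clouds, num_vnfs, max_intercloud_links):
    """Enumerate VNF-to-cloud assignments whose count of distinct clouds
    stays within min(total available clouds, max intercloud links + 1)."""
    max_distinct = min(len(available_clouds), max_intercloud_links + 1)
    return [assignment
            for assignment in product(available_clouds, repeat=num_vnfs)
            if len(set(assignment)) <= max_distinct]

# Three VNFs over clouds {0, 1, 2} with at most one intercloud link:
# only assignments touching at most two distinct clouds survive.
paths = candidate_slice_paths([0, 1, 2], num_vnfs=3, max_intercloud_links=1)
```

Each tuple maps the ordered VNFs of the service chain to clouds; an assignment like `(0, 0, 1)` places the first two VNFs at the edge cloud and the third at a neighboring cloud.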
[010] The optimizer can determine a performance metric for the candidate slice paths corresponding to an SLA attribute of the slice. The optimizer can also determine loads for the candidate slice paths based on load values of the corresponding clouds. In one example, the optimizer can identify a slice path with the best composite performance. The best composite performance can include both the weighted load and performance metrics and can be based on the lowest overall composite score in one example. Then the system can instantiate the VNFs at corresponding clouds specified by the slice path with the best composite score. The slice path with the best composite score can be referred to as “the best composite slice path.”
[011] In one example, a graphical user interface (“GUI”) can allow a user to adjust weights for SLA attributes. For example, a tenant can make a GUI selection that weights an SLA attribute relative to a second SLA attribute. The optimizer can use these weights to choose optimal VNF placement, while still balancing against network load of the slice path.
[012] In one example, the optimizer can determine candidate slice paths relative to an edge cloud. Each candidate slice path considers a unique permutation of VNF-to-cloud assignments for a given service function chain. The optimizer can rank the candidate slice paths based on the relative weightings of performance metrics corresponding to the SLA attributes and a load for the candidate slice path. This can allow the optimizer to balance network requirements while still determining VNF locations based on SLA compliance, rather than simply picking the shortest or fastest path. The optimizer (which can be considered part of an orchestrator) can then provision the VNFs at the clouds specified by a top ranked slice path.
[013] These stages can be performed by a system in some examples. Alternatively, a non-transitory, computer-readable medium including instructions can cause a processor to perform the stages when the processor executes the instructions.
[014] Both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the examples, as claimed.
BRIEF DESCRIPTION OF THE DRAWINGS
[015] FIG. 1 A is a flowchart of an example method for dynamic inter-cloud VNF placement in a slice path.
[016] FIG. IB is a flowchart of an example method for dynamic inter-cloud VNF placement in a slice path based on service-level agreement attribute weighting.
[017] FIG. 2 is an example sequence diagram of example stages for dynamic inter-cloud placement of VNFs in a slice path.
[018] FIG. 3 is an example system diagram illustrating performance cost and load calculations for multiple clouds.
[019] FIG. 4A is an example table of candidate slice paths used for determining optimal VNF placements.
[020] FIG. 4B is an example table of loads for clouds.
[021] FIG. 4C is an example matrix of cloud round-trip times between clouds.
[022] FIG. 4D is an example table of candidate slice paths with composite scores.
[023] FIG. 5 is an example illustration of a graphical user interface (“GUI”) screen.
[024] FIG. 6 is an example system diagram of a topology for dynamic inter-cloud VNF placement in a slice path.
DESCRIPTION OF THE EXAMPLES
[025] Reference will now be made in detail to the present examples, including examples illustrated in the accompanying drawings. Wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts.
[026] In one example, a system dynamically chooses optimal VNF locations for slices in a distributed multi-cloud environment, such as a Telco cloud environment. A Telco
provider can have numerous data center environments, each of which can be a cloud. Each cloud can be one of several nodes located at various geographic locations in the distributed Telco network. Edge clouds can be those closest to user devices, such as cell phones, tablets, computers, IoT devices, and other processor-enabled devices that can connect to a mobile network. Edge clouds can act as ingress points for devices utilizing the Telco network. Core clouds can be at least one link removed from the user devices and can include core data centers in an example. Core clouds can act as egress points to the internet if, for example, a VNF located at the core cloud is responsible for connecting to the internet. Edge clouds can also act as egress points in an example, but as will be discussed, the provider can avoid this configuration for congestion reasons in some instances.
[027] A provider of the Telco network can lease portions of the network to tenants. These portions can be leased to tenants for specific purposes or services. For example, the lease can be used for particular applications, IoT devices, or customers. To allow multiple tenants to use portions of the Telco network, the provider can create and manage one or more network slices, referred to as “slices” for convenience. Each slice can be a virtual network that runs on top of a shared physical network infrastructure distributed across the Telco clouds. In effect, slicing can allow the provider to reserve some portion of the distributed network for each tenant. Network slices can be assigned to different tenants, and in some examples a single tenant can have multiple slices for different purposes. An SLA can define which performance metrics are required for the slice. Required performance metrics can vary between slices, depending on the intended use of a given slice.
[028] A slice can include a service chain of VNFs for performing certain network tasks. The required combination of VNFs can differ based on the intended use of the slice, such as video streaming or IoT device management. The SLA or a separate slice record can specify which VNFs make up the service chain.
[029] To instantiate the slice, the VNFs can be deployed across a slice path. The slice path can represent a subset of the provider’s distributed network and can span one or more clouds. The slice path can include virtual and physical elements (such as compute, network, and storage elements) that provide functionality to the network slice. The virtual elements can include the VNFs required for the particular slice. These can operate in virtual machines (“VMs”) and utilize virtual computer processing units (“vCPUs”). The slice path can begin at an edge cloud that provides an access point to user devices, but VNFs in the service chain can be placed elsewhere on other clouds. In a multi-cloud setting, the slice path can be along a selected permutation of VNF-to-cloud assignments.
[030] Because the physical infrastructure of a Telco network is both shared and limited, multiple slices can compete for utilization of that infrastructure. As operating conditions change, the optimizer (or other part of an orchestrator) can evaluate a new slice path based on current conditions and SLA requirements. Placement of VNFs can be optimized based on various dimensional costs such as performance metrics in the SLA, compute costs, and network utilization. The optimal slice path can represent a tradeoff between satisfying SLA performance metrics and orchestrating resources in a multi-cloud environment. In one example, the service provider can instantiate VNFs at the cloud locations specified in the optimal slice path. The optimizer can continue to monitor metrics and cloud loads and redistribute the VNFs along a new optimal slice path once metrics or loads fall outside of SLA and cloud load thresholds.
[031] FIG. 1A is an example flow chart for dynamic inter-cloud VNF placement for a slice. An optimizer can be a process running in a core cloud of the provider. The optimizer can run on a server as part of a suite of data center management tools, in an example. The optimizer can select an optimal slice path for a slice, placing VNFs at clouds in a manner that balances required SLA metrics with impact on network resources.
[032] To determine an optimal slice path, the optimizer can consider performance requirements of the SLA, cloud resource utilization based on load distribution, and performance metric prioritization for slices. In general, the optimizer can use the cloud loads to choose a slice path that balances network congestion against SLA requirements. Often, edge clouds will have the best performance metrics but the worst cloud load. Because edge clouds are ingress points, various performance metrics such as round-trip time (“RTT”) will be lowest when all of the VNFs are at the edge cloud. However, only so many VMs can run on any one edge cloud. Therefore, to allow for greater network scalability, the optimizer can balance the needs of the particular slice with the overall network utilization, in an example. The optimizer can attempt to distribute VNFs in a manner that satisfies the SLA while preserving resources at the various clouds, including the edge cloud.
[033] At stage 110, the optimizer can receive an SLA attribute required by a slice. SLA attributes can be any required performance metric of the slice. Example SLA attributes include maximum latency or round-trip time, minimum bandwidth, and maximum jitter.
SLA attributes can be different and be prioritized differently between slices, largely depending on the services provided by the slice. For example, high bandwidth may be most important for video streaming, whereas low latency may be most important for automated driving. A tenant can specify which SLA attributes apply to a slice, in an example.
[034] In one example, the SLA attribute is received from a GUI, such as an operator console. An operator can manually enter SLA attributes that apply to a slice. In another example, the SLA attribute can be received from a stored slice record. Slice records can be defined for a tenant to programmatically define various slice requirements. For example, a slice record can not only define which SLA attributes apply to the slice, but it can also specify which VNFs are required, particular geographies needed, and monetary spends permitted for the slice. For example, if a service is being offered in San Francisco, a slice
record can ensure that particular VNFs are placed near this location. One or more required edge clouds can be specified for access to the slice by user devices.
[035] The optimizer can also receive a maximum number of intercloud links. In one example, the maximum number of intercloud links can be configured automatically or manually based on slice path performance and to limit the degree of slice path drift across the clouds. This number can define how many connections between different clouds are permitted for the slice path. Because VNFs can be distributed on a slice path that spans multiple cloud locations, a limitation on the number of links between these clouds can help the optimizer define a universe of candidate slice paths. Additionally, slice performance can generally suffer if too many intercloud links are introduced. In one example, the maximum number of intercloud links is between five and ten. The number of permissible intercloud links can be entered into a GUI by an operator, in one example. Different maximum intercloud link numbers can apply to different slices and tenants.
[036] Upon being notified to determine VNF placements for a slice, at stage 120, the optimizer can determine candidate slice paths relative to an edge cloud. In some examples, the edge cloud is specified in a slice record associated with the slice or tenant to whom the slice is leased or assigned. The edge cloud can alternately be selected based on a geographic attribute in the SLA or other information provided by the provider or tenant. Starting from the edge cloud, the optimizer can determine a neighborhood of other available clouds. Then combinations of these available clouds can make up the candidate slice paths.
[037] The pool of candidate slice paths can be limited based on (1) the number of VNFs in the service chain of the slice, and (2) the maximum number of intercloud links. For example, if a slice includes four VNFs, each candidate slice path can include four or fewer clouds. The number and types of VNFs for any particular slice can vary, based on the intended use of the slice. A slice record can define a series of VNFs for whatever use the
tenant or provider has for that slice. These VNFs can be placed on various clouds, starting with the edge cloud that acts as an access point (for example, for video streaming requests or automobile communications).
[038] The VNFs can connect with each other over intercloud links when they are located at different clouds, forming the slice path. One or more of the VNFs can provide connection, for example, to a particular data center or the internet. Clouds with these VNFs can be considered egress clouds.
[039] The maximum number of intercloud links can further reduce the pool of candidate slice paths, ensuring that the optimization can be performed in a computationally efficient manner. As an example, if the maximum number is three, then the candidate slice paths can be limited to four or fewer different clouds in any one slice path (since there are three links between four clouds). If there are more than three VNFs, this can mean the candidate slice paths will include at least one cloud with multiple VNFs. In another example, the number of intercloud links can be used to eliminate clouds from the neighborhood of potential clouds. For example, clouds that require too many network hops relative to the edge cloud can be left out of the candidate slice paths. As one example, if cloud 6 is five cloud hops away from the edge cloud and the maximum for intercloud links is three, cloud 6 can be removed from the neighborhood of available clouds. In this way, the maximum number of intercloud links can be configured to manage the pool size for candidate slice paths and to limit the degree of slice path drift across the clouds.
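One way to drop clouds that sit too many hops from the edge cloud, as in the cloud 6 example above, is a bounded breadth-first search. This is a hedged sketch: the adjacency-list representation and function name are assumptions, since the text does not specify a traversal method.

```python
from collections import deque

def neighborhood(adjacency, edge_cloud, max_intercloud_links):
    """Breadth-first search from the edge cloud; clouds more than
    max_intercloud_links hops away are excluded from the neighborhood."""
    hops = {edge_cloud: 0}
    queue = deque([edge_cloud])
    while queue:
        cloud = queue.popleft()
        if hops[cloud] == max_intercloud_links:
            continue  # do not expand beyond the hop limit
        for neighbor in adjacency.get(cloud, []):
            if neighbor not in hops:
                hops[neighbor] = hops[cloud] + 1
                queue.append(neighbor)
    return set(hops)

# Linear chain cloud0 - cloud1 - ... - cloud5: with a limit of three
# links, clouds 4 and 5 fall outside the neighborhood.
adjacency = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
hood = neighborhood(adjacency, edge_cloud=0, max_intercloud_links=3)
```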
[040] The optimizer can further limit the candidate slice paths based on performance metrics. For example, performance metrics of the candidate slice paths can be measured to determine if a candidate slice path complies with SLA requirements. In one example, a prioritized SLA attribute can be used to eliminate candidate slice paths that do not meet the requirements of the SLA attribute.
[041] As an example, a first slice can prioritize latency over bandwidth, meaning only slice paths that meet the latency SLA requirements will be candidates. A second slice can prioritize bandwidth over latency, causing the optimizer to focus on bandwidth performance. In another example, a slice record can indicate that round-trip time (“RTT”) is prioritized. In response, the optimizer can include RTT metrics for the candidate slice paths and eliminate candidate slice paths with RTT above the SLA requirement. To do this, the optimizer can create an intercloud matrix that includes an RTT between each candidate cloud. Using these RTT values, the optimizer can derive a total RTT for each candidate slice path. The derivation can be a sum or other function. This total RTT value can be stored as a dimensional performance cost for each candidate path. Other dimensional performance costs can be determined for each candidate slice path using a similar methodology.
[042] In one example, the dimensional performance costs are normalized. The normalization can be proportional to the SLA attribute. For example, if the SLA attribute specifies a maximum RTT of 50 milliseconds, normalization can include dividing the total RTT value (dimensional performance cost) by 50 milliseconds. A result greater than 1 can indicate the SLA attribute is not met, whereas a result of less than 1 indicates it is met.
Alternatively, different linear or non-linear functions can be used to normalize values for an SLA attribute. The candidate slice paths can be ranked according to the normalized performance cost, in an example. Candidates that do not comply with the SLA attribute can be omitted.
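The total-RTT derivation and proportional normalization described in the two paragraphs above might look like the following sketch. The matrix layout and the 50-millisecond figure mirror the example in the text; the function name and RTT values are invented for illustration.

```python
def normalized_rtt(path, rtt_matrix, sla_max_rtt_ms):
    """Sum the RTTs of the path's intercloud hops and divide by the SLA
    maximum; a result above 1.0 means the SLA attribute is not met."""
    total = sum(rtt_matrix[a][b] for a, b in zip(path, path[1:]) if a != b)
    return total / sla_max_rtt_ms

# Symmetric intercloud RTT matrix in milliseconds (values invented).
rtt = {
    0: {0: 0, 1: 20, 2: 35},
    1: {0: 20, 1: 0, 2: 15},
    2: {0: 35, 1: 15, 2: 0},
}
# VNFs placed on clouds 0 -> 1 -> 2 against a 50 ms SLA maximum:
cost = normalized_rtt([0, 1, 2], rtt, sla_max_rtt_ms=50)  # (20 + 15) / 50 = 0.7
```

Collapsing VNFs onto a single cloud removes all intercloud hops, so `normalized_rtt([0, 0, 0], rtt, 50)` evaluates to zero.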
[043] In another example, candidate slice paths that do not meet the SLA
requirement can remain in the pool of candidate slice paths if no other candidates satisfy the SLA requirement. In that example, the optimizer can retain some number of candidate slice paths organized by how close they come to meeting the SLA requirement. This can help ensure that the optimizer chooses a slice path that is close to satisfying the SLA requirement.
For example, a minimum of ten candidate slice paths can be retained in one example, even if not all of the candidate slice paths meet the SLA requirement. However, the non-compliant candidate slice paths can be ranked according to how close they come to meeting the SLA requirement, and those falling below the threshold number of candidate slice paths can be omitted.
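The retention rule above — keep SLA-compliant candidates, but fall back to the closest non-compliant ones so a minimum pool survives — could be sketched as follows. The function name, threshold convention (normalized cost at or below 1.0 means compliant), and dictionary input are illustrative assumptions.

```python
def retain_candidates(costs_by_path, threshold=1.0, min_keep=10):
    """Keep all SLA-compliant candidates (normalized cost <= threshold);
    if fewer than min_keep comply, fill with the closest non-compliant
    paths so at least min_keep candidates survive."""
    ranked = sorted(costs_by_path.items(), key=lambda kv: kv[1])
    kept = [path for path, cost in ranked if cost <= threshold]
    for path, cost in ranked:
        if len(kept) >= min_keep:
            break
        if path not in kept:
            kept.append(path)
    return kept

# No path meets the SLA (all normalized costs exceed 1.0): the two
# candidates closest to compliance are retained anyway.
kept = retain_candidates({"a": 1.1, "b": 1.4, "c": 2.0}, min_keep=2)
```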
[044] The optimizer can also narrow the pool of candidate slice paths based on specific VNF requirements, in an example. For example, a slice record can specify that a particular VNF in the function chain is required to be within a certain distance of a geographic location or have direct connectivity to a particular egress point. Alternatively, a VNF requirement can specify a particular cloud to use with a VNF. These sorts of VNF requirements can be useful, for example, when the slice must connect to a geographically specific data center or have a specific egress point. The optimizer can use the VNF requirements to limit the pool of candidate slice paths accordingly. For example, if a slice record specifies a geographic requirement for VNF3, the optimizer can limit candidate slice paths to those where VNF3 is on a cloud meeting the geographic requirement.
[045] In one example, the candidate slice paths are determined by an intersection of two sets of slice paths. The first set can include every slice path in the neighborhood of an edge cloud whose cloud count stays within the bounds set by the number of VNFs and the maximum intercloud links. The second set can include every slice path that satisfies the SLA for a performance metric. Alternatively, the second set can include slice paths closest to satisfying the SLA when no candidate slice paths satisfy it. Then the optimizer can take the intersection of the first and second sets. The remaining slice paths can be the candidate slice paths.
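As a rough illustration of the set-intersection step, assuming a small invented neighborhood and a stand-in SLA predicate (here, simply whether the path includes the edge cloud — the real check would use measured performance metrics):

```python
from itertools import product

clouds = [0, 1, 2]  # neighborhood of the edge cloud (cloud 0), invented
num_vnfs = 3
max_intercloud_links = 1

all_paths = set(product(clouds, repeat=num_vnfs))

# First set: paths within the VNF-count and intercloud-link bounds.
feasible = {p for p in all_paths
            if len(set(p)) <= max_intercloud_links + 1}

# Second set: paths that satisfy the SLA. A stand-in predicate is used
# here in place of real performance measurements.
compliant = {p for p in all_paths if 0 in p}

# The remaining candidate slice paths are the intersection of the sets.
candidates = feasible & compliant
```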
[046] At stage 130, the optimizer can determine loads for the candidate slice paths. For example, the optimizer can determine load values for each cloud in the candidate slice paths, then add those up. As mentioned previously, the optimizer can use the cloud loads to
balance network congestion against SLA requirements. One goal of orchestrating a Telco network is to avoid overburdening a cloud and allow for greater network scalability. The optimizer therefore can use load values to select an optimal slice path that uses clouds that may be underutilized compared to clouds in other candidate slice paths. The optimizer can attempt to distribute VNFs in this manner while still satisfying SLA requirements.
[047] Cloud loads can be calculated proportionally to the demands of a slice, in one example. For example, if an edge cloud has 100 vCPUs and ten slices, the edge node can be considered 90% utilized from the perspective of any slice utilizing 9 vCPUs. However, different examples can calculate cloud loads differently. Load can be based on, for example, compute load (i.e., the percentage of vCPUs utilized), storage, network capacity, bandwidth, or other metrics that can be assigned a cost.
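The proportional load example above (100 vCPUs, ten slices using 9 vCPUs each, yielding 90% utilization) reduces to simple arithmetic; the function name and signature in this sketch are assumptions.

```python
def slice_load(total_vcpus, vcpus_per_slice, num_slices):
    """Fraction of a cloud's vCPU capacity consumed by its slices."""
    return (vcpus_per_slice * num_slices) / total_vcpus

# The text's example: 100 vCPUs, ten slices of 9 vCPUs each -> 0.9 load.
load = slice_load(total_vcpus=100, vcpus_per_slice=9, num_slices=10)
```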
[048] The load values can also be normalized, in an example. The function for normalizing the load value can depend on the manner in which the load is calculated. The optimizer can also weight the candidate paths based on the load values. In general, the optimizer can weight candidates negatively for high loads and positively for low loads. This can cause a candidate slice path with lower network utilization to be ranked ahead of one with high utilization if performance metrics are otherwise equal.
[049] At stage 140, the optimizer can use the loads and performance metrics to determine a slice path with the best composite score. In one example, this can include normalizing and weighting the dimensional performance costs (performance metrics) and the loads. Then those values can be combined together to arrive at a composite score that, to some degree, represents both.
[050] The optimizer can separately weight normalized load costs and normalized performance costs to create the weighted candidate slice paths, in an example. The relative balance of these different weights can be based on selections from an operator or values from
an orchestration process, in an example. A higher relative weight for loads can indicate an emphasis on balancing the network versus providing peak slice performance. For example, a load weight can be twice as much as a performance weight.
[051] The weights can be multiplied against the normalized values. The resulting weighted costs for performance metrics and loads can be summed or otherwise used to determine a composite score from which the optimal slice path is selected.
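A hedged sketch of the weighted composite score described above, using the example ratio from the text of a load weight twice the performance weight; the path names and cost values are invented for illustration.

```python
def composite_score(norm_perf, norm_load, perf_weight=1.0, load_weight=2.0):
    """Weighted sum of normalized performance cost and normalized load;
    the candidate with the lowest score is selected."""
    return perf_weight * norm_perf + load_weight * norm_load

# (normalized performance cost, normalized load) per candidate path.
candidates = {
    "path_a": (0.7, 0.9),  # better RTT, but on heavily loaded clouds
    "path_b": (0.9, 0.4),  # closer to the SLA limit, lightly loaded
}
scores = {name: composite_score(perf, load)
          for name, (perf, load) in candidates.items()}
best = min(scores, key=scores.get)
```

With the load weight doubled, the lightly loaded `path_b` (score 1.7) beats the faster but congested `path_a` (score 2.5), which is the tradeoff the text describes.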
[052] By collapsing multiple metrics into a composite score, slice paths with multiple dimensional attributes can be evaluated based on a single composite dimension.
This can greatly reduce computational complexity, making dynamic inter-cloud VNF allocation possible where it might not otherwise be. Furthermore, the relative priority of various metrics such as slice RTT and monetary cost can be configurable by the tenant. Reducing these to a consistent composite dimension can allow the optimizer to perform efficiently even when the numbers and weights of metrics are changed.
[053] At stage 140, the optimizer can identify a slice path with the best composite score based on the weighted candidate slice paths. In one example, creating a composite slice path can include calculating a composite value based on the load and performance values. In one example, the weighted load and performance costs are added together to yield a composite score. The candidate slice paths can be ranked based on the lowest composite score, and the top-ranked result can be identified as the slice path with the best composite score. This can be the optimal slice path.
[054] Using the composite score can allow the optimizer to identify an optimal slice path that meets the SLA (or violates the SLA to the lowest extent), while balancing network resources. This means the optimizer will not necessarily choose the shortest or fastest possible path between VNFs. This can provide technical advantages over algorithms such as a Dijkstra algorithm that is single dimensional and would be used to find the shortest path.
By considering multiple dimensions all together, the optimizer can balance SLA compliance and network performance. This approach can also allow the optimizer to flexibly incorporate additional SLA attributes, new weights, and react to changing network conditions.
[055] At stage 150, the orchestrator or optimizer can provision the VNFs in the manner specified by the slice path with the best composite score. This best composite slice path includes associations between each VNF and a respective cloud. The orchestrator or optimizer can provision the VNFs at those clouds. For example, if the slice path with the best composite score indicates VNF1 at cloud-0, VNF2 at cloud-5, and VNF3 at cloud-6, the optimizer can send a message to an orchestrator process identifying these VNFs and clouds. The orchestrator can then instantiate VNF1 at the cloud location associated with cloud-0, VNF2 at a second cloud location associated with cloud-5, and VNF3 at a third cloud location associated with cloud-6. The orchestrator can also provide information to each VNF so that they can communicate with one another as intended for the service function chain of the slice.
[056] FIG. IB is an example flow chart of stages for optimizing VNF placement in a slice path. At stage 160, the optimizer receives a GUI selection that weights a first SLA attribute relative to a second SLA attribute. The GUI can be part of an orchestrator console, in an example. Alternatively, it can be part of a tenant-facing portal that allows a tenant to control which SLA attributes should be prioritized.
[057] In one example, the GUI includes a slider for moving a weight between two SLA attributes. This can allow one of the SLA attributes to be the priority or both SLA attributes to be equal (and therefore both prioritized). Alternatively, the GUI can allow selection of multiple SLA attributes and the user can set weights for each one. The SLA attributes can be weighted relative to one another based on the weights set for each one.
[058] At stage 170, the optimizer can determine candidate slice paths relative to an edge cloud. This stage can occur as part of initially provisioning a slice. Additionally, the
optimizer can dynamically perform stage 170 based on an orchestrator or the optimizer determining that a new slice path is needed, in an example. For example, an orchestrator can detect a high load at a particular cloud that is utilized by the current slice path. Alternatively, the orchestrator can detect that performance metrics for a slice no longer meet an SLA requirement. This can cause the optimizer to determine a new slice path to bring the slice back into SLA compliance or alleviate network load.
[059] In one example, the optimizer can determine a neighborhood of clouds based on the maximum number of intercloud links. The optimizer can create a pool of candidate slice paths that includes every combination of the neighborhood of clouds, in an example, limited to the number of VNFs per candidate slice. Each candidate can have a unique VNF-to-cloud assignment combination, also referred to as a unique permutation. For example, a first candidate can assign VNF1 to cloud-0 and VNF2 to cloud-1, whereas a second candidate assigns VNF1 to cloud-0 and VNF2 to cloud-2. The permutations can also take into account the order of VNFs, since the service function chain can require traversing VNFs in order.
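The permutation pool described above can be sketched as an ordered Cartesian product; `vnfs` and `neighborhood` are illustrative names, and clouds are assumed to be integer indices as in the cloud-0/cloud-1 example.

```python
from itertools import product

def candidate_slice_paths(vnfs, neighborhood):
    """Enumerate every ordered VNF-to-cloud assignment: one cloud
    index per VNF, in service-chain order, so (0, 1) and (1, 0)
    are distinct candidates."""
    return list(product(neighborhood, repeat=len(vnfs)))

# Two VNFs over a three-cloud neighborhood yields 9 unique permutations;
# (0, 1) places VNF1 at cloud-0 and VNF2 at cloud-1, as in the example.
paths = candidate_slice_paths(["VNF1", "VNF2"], [0, 1, 2])
```

Because the product is ordered, the sketch preserves the service function chain's traversal order as required above.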
[060] In another example, the optimizer omits candidate slice paths that do not satisfy the prioritized SLA attribute. For example, if the first SLA attribute is weighted more highly than the second SLA attribute, the optimizer can create a pool of candidate slice paths that satisfy the first SLA attribute. If both the first and second SLA attributes are prioritized, then the optimizer can determine a pool of candidate slice paths that have performance metrics satisfying both SLA attributes. However, if no candidates satisfy the SLA, then those with the closest performance can be kept as candidate slice paths.
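A minimal sketch of this filtering rule, including the fallback to the closest-performing candidates when none satisfy the SLA (the function and variable names are assumptions):

```python
def filter_by_sla(candidates, metric_of, sla_max):
    """Keep candidate slice paths whose metric satisfies the SLA
    maximum; if none comply, keep those closest to compliance."""
    compliant = [p for p in candidates if metric_of(p) <= sla_max]
    if compliant:
        return compliant
    # No candidate satisfies the SLA: keep the best performers anyway.
    best = min(metric_of(p) for p in candidates)
    return [p for p in candidates if metric_of(p) == best]

# With a 50 ms RTT ceiling, only the 40 ms candidate survives.
rtts = {"path-a": 40.0, "path-b": 60.0}
kept = filter_by_sla(rtts, rtts.get, 50.0)
```

Lowering the ceiling to 30 ms leaves no compliant candidate, so the fallback keeps "path-a" as the closest to compliance.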
[061] At stage 180, the optimizer can rank the candidate slice paths based on the relative weightings of the first and second SLA attributes. In one example, this can include normalizing each of the corresponding SLA metrics of the candidate slice paths, then multiplying by the respective weight values. For example, if the first SLA metric is RTT and
the second SLA metric is slice throughput, the first weight can be applied to the normalized RTT value and the second weight can be applied to the normalized throughput value. As explained for stages 130 and 140, normalizing can include applying a function to the metric values. The function applied can vary for different SLA attributes. For example, normalizing RTT can be done linearly by dividing by the SLA attribute value. Slice throughput, on the other hand, can have a non-linear function that favors high throughput by returning a much lower number once a throughput threshold is achieved.
[062] In one example, the optimizer can use a lookup table to map performance metrics to normalized costs. Each dimension (e.g., each type of performance metric) can have a different normalization factor. The lookup table can define a transform function for each type of performance metric. The functions can be linear or non-linear and can be based on the SLA attributes for the slice. In one example, a normalized value of 0 to 1 indicates SLA compliance. As an example, a metric-to-cost table can map RTT values into float64 values that are normalized such that values between 0 and 1 satisfy the SLA for RTT, whereas any value above 1 does not satisfy the SLA.
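One possible shape for such a metric-to-cost table, assuming the linear RTT transform described earlier and an assumed non-linear throughput transform of the kind described; the exact functions are not specified by the example.

```python
def rtt_cost(rtt_ms, sla_rtt_ms):
    # Linear transform: results in (0, 1] satisfy the SLA,
    # values above 1 do not.
    return rtt_ms / sla_rtt_ms

def throughput_cost(mbps, sla_mbps):
    # Non-linear transform (assumed shape): once the SLA threshold
    # is achieved the cost drops sharply, favoring high throughput.
    if mbps >= sla_mbps:
        return 0.1 * sla_mbps / mbps
    return sla_mbps / mbps  # above 1, i.e. non-compliant

# The lookup table maps each metric type to its transform function.
metric_to_cost = {"rtt": rtt_cost, "throughput": throughput_cost}
```

Both transforms keep the same compliance convention, so weighted comparisons across dimensions remain meaningful.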
[063] In one example, the weights are applied to the normalized metric values for the candidate slice paths. By using consistent normalization, such as indicating SLA compliance when values are less than 1, weights can more accurately prioritize certain metrics over others for optimization purposes. Load values for the clouds can be normalized and weighted, as described for stage 130. This can allow the optimizer to then determine a top-ranked slice path based on a composite score that factors in both the weighted performance metrics and the weighted loads. The provider can shift the prioritization of load distribution versus performance as needed by dynamically adjusting the weight applied to the load versus performance metrics.
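The composite ranking above can be sketched as a weighted sum over normalized dimensions. The candidate metric values below are hypothetical, while the weights mirror the wRTT = 5 and wload = 0.5 values used later in the detailed example.

```python
def composite_score(norm_costs, weights):
    """Weighted sum over normalized dimensions (e.g. RTT, load)."""
    return sum(weights[k] * v for k, v in norm_costs.items())

def top_ranked(candidates, weights):
    # candidates maps each slice path to its normalized costs
    # per dimension; the lowest composite score wins.
    return min(candidates, key=lambda p: composite_score(candidates[p], weights))

candidates = {
    "path-a": {"rtt": 0.5, "load": 1.0},    # balanced placement
    "path-b": {"rtt": 0.04, "load": 11.9},  # fast but overloads one cloud
}
best = top_ranked(candidates, {"rtt": 5.0, "load": 0.5})
```

Even though "path-b" has the better RTT, its load cost dominates, so the balanced "path-a" earns the lower composite score and the top rank.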
[064] At stage 190, the top-ranked slice path can be provisioned. In one example, an orchestrator process instantiates VNFs at the corresponding clouds of the top-ranked slice path. This can include setting the VNFs to communicate with one another across the distributed cloud network.
[065] The optimizer can continue monitoring the slice and occasionally re-perform stages 170, 180, and 190 if a performance metric becomes non-compliant with the SLA or the slice load exceeds a threshold. The optimizer can determine a new top-ranked slice path and an orchestration process can provision at least one of the VNFs at a new cloud. The orchestrator can also configure the other VNFs to communicate with the newly instantiated VNF. VNF instances that are no longer part of the function chain can be terminated.
[066] FIG. 2 is an example sequence diagram for dynamically provisioning VNFs in a slice. At stage 205, the optimizer can read a slice record to retrieve one or more SLA attributes and information about the slice. The slice record can also identify which VNFs are needed in the slice path, specific VNF requirements, and monetary constraints.
[067] At stage 210, the optimizer can retrieve weight values to apply to one or more performance metrics or loads. The weights can be input into a GUI in one example, such as by an administrator associated with the provider. In one example, the GUI allows tenant access for setting weights for SLA attributes, including setting which SLA attribute is a primary attribute for the slice. The weights can also be received as functions from an orchestrator as part of an optimization request, in an example. In one example, stage 210 can be performed later as part of stage 220, to determine a lowest score for composite slice paths.
[068] At stage 215, the optimizer can determine candidate slice paths. This can be based on the SLA attributes received from the slice record at stage 205, in an example. In one example, the optimizer creates candidate slice paths that each have the same or fewer clouds as the number of VNFs or the maximum number of intercloud links. The optimizer
can create a first set on that basis. The optimizer can then eliminate candidate slice paths that do not comply with one or more of the SLA attributes. The remaining candidate slice paths can make up the pool from which an optimal slice path is chosen. For example, this can include choosing the candidate slice path with the lowest composite score.
[069] At stage 220, the optimizer can determine which of the candidate slice paths has the lowest composite score. This can include applying the weights received at stage 210. The performance metrics can be normalized and weighted. Similarly, load values can be normalized and weighted. Then these weighted values can be added together to result in a composite score. The top-ranked candidate slice path can be the one with the lowest composite score. In another example, the various functions and weights are fashioned to produce a high score for the top-ranked slice path.
[070] The optimizer can then cause the VNFs to be provisioned at the specific clouds included in the top-ranked candidate slice path. In one example, the optimizer sends a request to an orchestrator process to perform the provisioning. Alternatively, the optimizer can be part of the orchestrator.
[071] In this example, the top-ranked candidate slice path can specify that VNF1 is on cloud 0, which can be an edge cloud. It can also specify that VNF2 and VNF3 are both on cloud 2. Cloud 0 can be an index that the optimizer uses to look up provisioning information for a host server, cluster, or cloud location. Cloud 1, cloud 2, and cloud 3 can be other indices used for this purpose. At stage 222, the orchestrator can provision VNF1 at cloud 0. At stage 224, the orchestrator can provision VNF2 at cloud 2, and at stage 226 VNF3 can be provisioned at cloud 2. These VNFs can be configured by the orchestrator to talk to one another. Provisioning can include instantiating one or more VMs at each cloud location, including one or more vCPUs for executing the functionality of the respective VNF.
[072] At stage 230, the optimizer (or orchestrator) can detect that a cloud is overloaded. If the slice is using the cloud, the optimizer can determine a new top-ranked slice path. For example, if cloud 2 has a load that exceeds a threshold, the new top-ranked slice path can be calculated at stage 235. This can include determining which slice path has the new lowest composite score. In this example, the new top-ranked slice path can place VNF1 at cloud 1, VNF 2 at cloud 2, and VNF3 at cloud 3. VNF3 can, for example, be vCPU intensive, such that moving it to cloud 3 helps balance network load.
[073] Similarly, the optimizer (or orchestrator) can determine a new cloud path based on a performance metric no longer meeting an SLA attribute. The orchestrator can periodically check performance of the slice, in an example. If performance falls below the SLA requirements, a new slice path with the lowest composite score can be calculated at stage 235.
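The two re-optimization triggers just described can be sketched as a single check; the 0.95 load threshold and the data shapes are assumptions for illustration.

```python
LOAD_THRESHOLD = 0.95  # assumed overload trigger level

def needs_reoptimization(slice_metrics, sla, cloud_loads, slice_clouds):
    """True when the slice violates an SLA attribute or uses an
    overloaded cloud, prompting a new top-ranked slice path."""
    # Trigger 1: a performance metric no longer meets its SLA attribute.
    if any(slice_metrics[k] > sla[k] for k in sla):
        return True
    # Trigger 2: a cloud used by the current slice path is overloaded.
    return any(cloud_loads[c] > LOAD_THRESHOLD for c in slice_clouds)
```

When the check returns True, the optimizer would recompute composite scores and the orchestrator would re-provision VNFs as at stage 235.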
[074] When a new slice path is determined, the orchestrator can provision the VNFs at their new locations at stages 242, 244, and 246. In one example, VNF2 is not re-provisioned, but instead is simply reset to talk to VNF1 and VNF3 at their new locations. In another example, all three VNFs are re-instantiated at the respective cloud locations when the new slice path is created.
[075] A detailed example of optimizer operation will now be discussed with reference to FIGs. 3 and 4A-D. FIG. 3 is an example system diagram for purposes of explaining how an optimizer determines candidate cloud paths and selects one based on a composite value. The composite value can represent multiple dimensions of performance metrics and loads, allowing for the optimizer to determine VNF placement based on both SLA requirements and cloud resource allocation. FIGs. 4A-D include example tables of values to explain various stages of the example optimizer operation.
[076] In this example, an optimizer can determine cloud placement for three VNFs 351, 352, 353 in a service chain for a slice. These VNFs, also shown as V1, V2, and V3, can be provisioned on various clouds 310, 320, 330, 340 in the Telco network. The illustrated slice path includes V1 at edge cloud 310 (Cloud-0), V2 at a first core cloud 320 (Cloud-1), and V3 at a third core cloud 340 (Cloud-3). Access to the slice can occur from a cell tower 305, which sends data to Cloud-0.
[077] Each cloud 310, 320, 330, 340 can communicate with the others using intercloud links with performance metric costs 314, 316, 323, 324, 334. The costs are represented by Cp, where p designates the slice path. For example, C0,3 indicates a performance metric cost between Cloud-0 and Cloud-3. Additionally, each cloud 310, 320, 330, 340 can be assigned a load value 312, 322, 332, 342 based on load functions utilized by the optimizer. This can be a compute load based on total vCPU usage at the cloud, in an example.
[078] To better explain some algorithmic stages performed by the optimizer in an example, the terminology of Table 1, below, can be used.
—Table 1—
[079] In one example, the optimizer can attempt to determine a new slice path that satisfies SLA requirements and balances the orchestration of resources in the multi-cloud environment of FIG. 3. This can be different than merely implementing a shortest path algorithm, such as Dijkstra, because multiple graphs can be considered across several
domains, such as RTT, bandwidth, and cloud load. Each can contribute to a composite score and selection of an optimal slice path.
[080] In this example, the slice can be defined as [V1, V2, V3] with the SLA specifying a maximum of 50 millisecond RTT on the slice. The weights, wk, in this example can be wRTT = 5 for RTT and wload = 0.5 for cloud load.
[081] The optimizer can determine a neighborhood of available clouds relative to the edge cloud 310 (Cloud-0). This can include limiting the available clouds based on the number of VNFs and the maximum number of intercloud links. In this example, seven other neighboring clouds can be available for VNF placement. Each of these clouds can be given an index.
[082] This neighborhood can be used to determine candidate slice paths. A few such candidate slice paths are shown in FIG. 4A. This table uses the slice's VNFs at column indices 410 and each row 405 represents a potential candidate slice path. For example, candidate slice path 412 can map V1 to Cloud-0, V2 to Cloud-1, and V3 to Cloud-3. This corresponds to the slice path shown in FIG. 3. FIG. 4A illustrates just four such candidate slice paths, but many more can be determined. In one example, the optimizer creates a first set of slice paths that includes every unique combination of VNFs to the neighborhood clouds, relative to the edge cloud (Cloud-0).
[083] The optimizer can measure a load value for each available cloud. FIG. 4B presents load measurements for the neighborhood of available clouds. In this table, each cloud has an index 415 and a load value 420. For example, Cloud-0, an edge cloud, can be 99.3% utilized. Cloud-2, on the other hand, is only 17.8% utilized. In this example, load can represent the fraction of proportionally allocated compute resources currently required for this slice at the respective cloud. In an alternate example, load can represent the absolute value of compute resources at the cloud.
[084] Next, the optimizer can measure the performance metric for RTT for each candidate cloud path. To do this, the optimizer can create a matrix with RTT values between each pair of clouds. FIG. 4C illustrates the RTT matrix. The row index 425 can correspond to source clouds and the column index 430 can correspond to destination clouds. The value at each location in the table can represent RTT in milliseconds between the source and destination clouds. In this example, RTT for the same cloud as source and destination is estimated to be 1 millisecond. The other intercloud links have RTT values between 10 and 50 milliseconds. The optimizer can create similar matrices for other performance metrics, in an example.
[085] Next, the optimizer can determine dimensional costs (also called values), Cp, of the performance metrics and loads for each candidate slice path. FIG. 4D illustrates example dimensional costs 445 for cloud load 446 and RTT 447. These values can be determined for candidate slice paths 425 having different cloud placements 440 for the three VNFs. As one example, candidate slice path 465, PATH 53, includes a cloud load of 1.031 and an RTT of 24.8. CpRTT can be determined by the optimizer by adding C0-3 and C3-3, representing RTT across the clouds 472 of PATH 53. Using the matrix of FIG. 4C to retrieve C0-3 and C3-3, this equates to 23.8 + 1.0 = 24.8, which is shown in FIG. 4D for the dimensional cost 445 of RTT 447 of PATH 53. In other examples, a different method of deriving the dimensional costs is possible.
[086] The optimizer can determine the other dimensional costs 445 for load 446 and RTT 447 following this methodology to solve for Cp . For each possible candidate slice path, the optimizer can sum the corresponding RTT values and load values.
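The hop-by-hop summation just described can be sketched as follows. The sparse-dictionary matrix is an illustrative representation of the FIG. 4C RTT matrix, and the 23.8 and 1.0 millisecond entries are the PATH 53 values from the example.

```python
def path_rtt(path, rtt_matrix):
    """Sum RTT over consecutive cloud-to-cloud hops of a slice path."""
    return sum(rtt_matrix[(a, b)] for a, b in zip(path, path[1:]))

# PATH 53 places the three VNFs at clouds (0, 3, 3), so its
# dimensional RTT is C0-3 + C3-3 = 23.8 + 1.0 = 24.8 ms.
rtt_matrix = {(0, 3): 23.8, (3, 3): 1.0}
total = path_rtt((0, 3, 3), rtt_matrix)
```

The same summation applies per dimension: load values for the clouds in a path can be totaled the same way to obtain the load cost column of FIG. 4D.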
[087] Next, the optimizer can determine normalized costs 450 by applying normalization functions to the dimensional costs 445. The normalization function, fk, can vary for different dimensional costs, but can result in normalized costs 450. Example methods for solving for the normalized cost gpk can include any of Equations 1-3 below.

gpk = fk(Cpk)

— Equation 1 —

gpk = Cpk / SLAk

— Equation 2 —

gpk = SLAk / Cpk

— Equation 3 —
[088] As shown above in Equations 2 and 3, the normalization function can be relative to the SLA attribute corresponding to the dimensional cost (e.g., performance metric value) being normalized. In one example, if the normalized value is between 0 and 1, the SLA is satisfied. Otherwise, the SLA is violated.
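Applying the linear RTT normalization to the example's 50 ms SLA reproduces the figures discussed next: PATH 53's 24.8 ms dimensional RTT normalizes to 0.496 (compliant), and PATH 168's 2.0 ms (two intra-cloud hops of 1 ms each, all VNFs on Cloud-0) normalizes to 0.040.

```python
SLA_RTT_MS = 50.0  # maximum RTT from the example SLA

def normalize_rtt(rtt_ms):
    # Values between 0 and 1 satisfy the SLA; values above 1 violate it.
    return rtt_ms / SLA_RTT_MS

path_53 = normalize_rtt(24.8)   # 0.496, SLA satisfied
path_168 = normalize_rtt(2.0)   # 0.040, SLA satisfied by a wide margin
```

Both results fall between 0 and 1, which is why both paths survive the SLA filter even though their load profiles differ sharply.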
[089] Turning to the example normalized costs 450 in FIG. 4D, the candidate slice paths 425 all have normalized RTT values between 0 and 1. In this example, the optimizer has removed other slice paths that do not comply with the SLA for RTT. Of note, PATH 168 satisfies the SLA the most with a normalized RTT value 476 of 0.040. However, PATH 168 also includes a relatively poor load value 475 of 11.913 because in that slice path all of the VNFs are assigned to the edge cloud (Cloud-0).
[090] Next, the optimizer can apply weights, wk, to the normalized costs to create weighted costs 455. The weights can be adjusted or specified by the orchestrator or an administrator. Performance metric weights, such as for RTT, can be modified by the tenant in an example. Applying the weights of this example, wRTT = 5 for RTT and wload = 0.5 for cloud load, yields the composite values 460 by which the optimizer can rank the candidate slice paths 425.
[091] In this example, the top-ranked candidate slice path 465 is PATH 53, with the lowest composite value 470. Of note, the RTT for PATH 53 is not the lowest available RTT, which belongs to PATH 168. But PATH 53 provides a better balance of network load while still maintaining SLA compliance. Therefore, it has the best composite score and is selected as the optimal slice path, PoPt.
[092] In one example, the optimizer can determine the composite weighted cost 455 using Equation 4, below.

Gp = Σk wk · gpk

— Equation 4 —

[093] Popt can be selected based on the lowest value for Gp, in an example.
[094] FIG. 5 is an example illustration of a GUI screen 510 for adjusting optimizer performance in selecting VNF placement in a slice. The GUI can allow an administrator to configure various aspects of the optimizer functionality.
[095] In one example, an administrator can select a slice record 515. The slice record can be associated with a tenant, in an example. The slice record can provide a definition of a service function chain, including which VNFs are part of the slice and what compute resources each need. The slice record can also indicate interservice link needs for the VNFs. In one example, the slice record includes SLA requirements. Additional VNF attributes can also be included in the slice record. For example, a VNF preference for a core or edge node can be included. Another example VNF preference can include geographic criteria. For example, a service running only in San Francisco can include a VNF preference for that area. The slice record can also specify a particular edge node, in one example.
[096] The GUI can include a field for configuring the degree to which the optimizer determines candidate cloud paths, in one example. For example, the GUI can contain a field 540 for entering the maximum number of intercloud links. The administrator or an
orchestration process can increase or decrease this number to change the edge radius— that is, the number of permitted links to other clouds. This can control the pool size of the candidate slice paths in an example. When the number is lowered, this can increase computing efficiency for the dynamic inter-cloud placement of VNFs.
[097] The GUI can also include fields 520, 530 for selecting SLA attributes. In one example, these fields 520, 530 can be dropdown selectors that include all of the SLA attributes from the slice record. In this example, SLA attributes related to RTT and jitter have been selected. The GUI also can contain a control 525 for weighting the SLA attributes. In this example, the control 525 is a slider that simultaneously increases the weight of one attribute while decreasing the weight of another. However, in another example, the GUI can include individual weight controls for adjusting weights relative to multiple SLA attributes.
[098] The weights can be used by the optimizer for determining the top-ranked slice path. In one example, the highest weighted SLA attribute can be treated as the primary SLA attribute and used to determine which slice paths are candidate slice paths.
[099] Another field 535 can be used to apply particular cloud definitions to the optimizer. In one example, this can include selecting a file that defines attributes for one or more clouds. For example, a provider may need to designate geographical characteristics of a cloud (for example, located in Kansas or California). If a tenant wants a geographically specific use, such as smart cars in California, the specification for clouds in California can be used by the optimizer to limit the potential candidate slice paths. The optimizer can consider all attributes ascribed to a VNF against various attributes ascribed to clouds. In one example, a cloud map can be presented on the GUI, allowing the provider to lasso some clouds and define attributes or apply settings.
[0100] The administrator can also select load functions using a GUI element 545 in one example. The selected load function or functions can determine how cloud load is
defined and calculated. In this example, the selection bases load on vCPU usage. However, the load function can be flexible and based on other attributes. For example, the load can be an absolute value of compute resources at a cloud or it can be the fraction of proportionally allocated compute resources being used by the slice at the respective cloud.
[0101] The GUI can also provide a selection 555 of normalization functions. For example, a script, file, or table can define which functions are applied to which SLA attributes or loads. The functions can be linear or non-linear. The goal of the normalization can be to normalize performance metrics relative to each other and to the load metrics. This can allow the weights to more accurately influence the importance of each in determining optimal VNF placement in the slice.
[0102] In one example, normalization functions are provided as a metric-to-cost transform table. The table can map particular performance metrics to normalized metric values that are based on SLA satisfaction. For example, functions for different metric types can map the metrics of each cloud to normalized numbers between 0 and 1 when the SLA is satisfied, and numbers greater than 1 when it is not. Lower numbers can indicate a higher degree of satisfaction. Therefore, a number slightly greater than 1 can indicate the SLA is nearly satisfied. In extreme cases where network load results in no candidate slice paths that satisfy the SLA, candidate slice paths can be ranked based on how close they are to 1.
Although the number 1 is used as an example normalized SLA threshold, other numbers can be used to the same effect in different examples.
[0103] The GUI can also include one or more fields 550 for displaying or defining monetary cost maximums. Monetary costs can vary for each cloud, depending on the cloud’s current load and the amount of load required for a particular VNF. In one example, cloud paths are negatively weighted when the total cost for VNF placement exceeds cost maximums. Monetary costs can be normalized similarly to performance metrics or loads.
The normalization functions of selection 555 can include functions for normalizing slice path costs, in an example. This can allow costs to be weighted and included in the composite scoring.
[0104] FIG. 6 is an example system diagram including components for dynamic inter-cloud VNF placement in slices. As illustrated, a distributed Telco cloud network 600 can include edge clouds 620 and core clouds 640. Slices 672, 678, 682 can be distributed across these clouds 620, 640.
[0105] Each cloud 620, 640 can have physical and virtual infrastructure for network function virtualization (“NFV”) 642. For example, physical servers 644, routers, and switches can run VMs 646 that provide VNF functionality. A slice can include a first VNF that executes on an edge cloud 620. The VNF can utilize one or more vCPUs, which can be one or more VMs 624 in an example. However, the edge cloud 620 can execute numerous VNFs, often for multiple tenants where the VNFs are part of various slices. The slices can be kept separate from a functional perspective, with VNFs from different slices not aware of the existence of each other even when they rely on VMs 624 operating on shared physical hardware 622.
[0106] A first VNF in the slice path can communicate with a second VNF, which can be located in a different cloud 640. For example, the second VNF can include one or more VMs 646 operating on physical hardware 644 in a core cloud 640. The second VNF can communicate with yet another VNF in the slice path. One or more of these VNFs can act as an egress to the internet 660, in an example.
[0107] One or more user devices 602 can connect to a slice in the Telco network 600 using, for example, a 3G, 4G, LTE, or 5G data connection. The user devices 602 can be any physical processor-enabled device capable of connecting to a Telco network. Examples include cars, phones, laptops, tablets, IoT devices, virtual reality devices, and others. Cell
towers 605 or other transceivers can send and receive transmissions with these user devices
602. At the ingress point to edge clouds 620, slice selectors 608 can receive data sent from the user devices 602 and determine which slice applies. The slice selectors 608 can operate as VMs 624 in the edge cloud or can run on different hardware connected to the edge cloud 620.
[0108] To manage the distributed virtual infrastructure, a provider can run a topology 665 of management processes, including an orchestrator 668. The orchestrator 668 can include the optimizer process. Alternatively, the optimizer can be part of the topology 665 that works with the orchestrator 668.
[0109] The orchestrator can be responsible for managing slices and VNFs, in an example. This can include provisioning new slices or re-provisioning existing slices based on performance metrics and network load. The orchestrator can run on one or more physical servers located in one or more core clouds 640 or separate from the clouds. The orchestrator 668 can provide tools for keeping track of which clouds and VNFs are included in each slice. The orchestrator can further track slice performance for individual tenants 670, 680, and provide a management console such as shown in FIG. 5. The orchestrator 668 can also receive performance metrics and load information and determine when the optimizer should find a new slice path.
[0110] In this example, a first tenant 670 has multiple slices 672, 678. Each slice 672, 678 can be defined by a slice record that indicates VNF requirements for that slice.
VNFs can each provide different functionality in the service chain. In addition, VNF attributes can be used to favor certain clouds over others. For example, a first slice can have a first VNF 674 that must be placed at an edge cloud 620 at a particular location. The first slice can also have a second VNF 676 that acts as an egress point, and therefore is best placed in a core cloud 640.
[0111] The orchestrator 668 can rely on the optimizer to dynamically determine VNF placement for a slice path. Then, the orchestrator 668 can provision VNFs based on the determinations made by the optimizer. This can include instantiating new VMs at the clouds 620, 640 identified by the optimizer. The orchestrator 668 can also change settings in the slice selectors 608 to ensure traffic reaches the correct slice 670.
[0112] Although the orchestrator, virtual management topology, and optimizer are referred to separately, these processes can all operate together. The examples are not meant to limit which process performs which step. Instead, the optimizer can be considered any portion of the virtual management topology that performs the described stages.
[0113] Other examples of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the examples disclosed herein.
Though some of the described methods have been presented as a series of steps, it should be appreciated that one or more steps can occur simultaneously, in an overlapping fashion, or in a different order. The order of steps presented are only illustrative of the possibilities and those steps can be executed or performed in any suitable fashion. Moreover, the various features of the examples described here are not mutually exclusive. Rather any feature of any example described here can be incorporated into any other suitable example. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
Claims
1. A method for dynamic inter-cloud placement of virtual network functions (“VNFs”) in a slice path, comprising:
determining candidate slice paths relative to an edge cloud, wherein the candidate slice paths are different permutations of VNF-to-cloud assignments, the permutations being limited by the number of VNFs or a maximum number of intercloud links;
for the candidate slice paths, determining a load based on each cloud in the candidate slice path;
identifying a best composite slice path based on a composite score that includes a performance metric and the load, wherein the performance metric corresponds to a service level agreement (“SLA”) attribute; and
provisioning the VNFs at corresponding clouds specified by the best composite slice path.
2. The method of claim 1, further comprising:
creating a matrix that maps the performance metric to each different cloud in the candidate slice paths, wherein the performance metric of a candidate slice path is derived from values in the matrix; and
when at least one candidate slice path complies with the SLA attribute, eliminating candidate slice paths that are non-compliant with the SLA requirement.
3. The method of claim 2, wherein the SLA attribute is a maximum round-trip time.
4. The method of claim 1, further comprising:
detecting network congestion or a performance metric below an SLA requirement; determining new candidate slice paths; and
provisioning at least one of the VNFs at a new cloud specified by a new best composite slice path, the new best composite slice path having a new best composite score based on a combination of a weighted performance metric and weighted load for the new candidate slice paths.
5. The method of claim 1, further comprising:
for each candidate slice path, calculating a normalized cost of the performance metric; and
when no candidate slice path includes a normalized cost that meets an SLA threshold, ranking the candidate slice paths based on how close the normalized cost is to the SLA threshold.
6. The method of claim 1, wherein determining the candidate cloud paths includes: determining a first set of slice paths that includes all permutations of VNF-to-cloud assignments that are within the number of VNFs and maximum number of intercloud links;
determining a second set of slice paths that complies with the SLA attribute; and taking an intersection of the first and second sets.
7. The method of claim 1, wherein the best composite slice path has the lowest
composite score of multiple composite slice paths, wherein the composite score includes adding a weighted load to a weighted performance metric.
8. A non-transitory, computer-readable medium comprising instructions that, when
executed by a processor, perform stages for dynamic inter-cloud virtual network function (“VNF”) placement in a slice, the stages comprising:
determining candidate slice paths relative to an edge cloud, wherein the candidate slice paths are different permutations of VNF-to-cloud assignments, the
permutations being limited by the number of VNFs or a maximum number of intercloud links;
for the candidate slice paths, determining a load based on each cloud in the candidate slice path;
identifying a best composite slice path having a best composite score based on a performance metric and the load, wherein the performance metric corresponds to a service level agreement (“SLA”) attribute; and
provisioning the VNFs at corresponding clouds specified by the best composite slice path.
9. The non-transitory, computer-readable medium of claim 8, the stages further comprising:
creating a matrix that maps the performance metric to each different cloud in the candidate slice paths, wherein the performance metric of a candidate slice path is derived from values in the matrix; and
when at least one candidate slice path complies with the SLA attribute, eliminating candidate slice paths that are non-compliant with the SLA attribute.
10. The non-transitory, computer-readable medium of claim 9, wherein the SLA attribute is a maximum round-trip time.
11. The non-transitory, computer-readable medium of claim 8, the stages further comprising:
detecting network congestion or a performance metric below an SLA requirement;
determining new candidate slice paths; and
provisioning at least one of the VNFs at a new cloud specified by a new best composite slice path, the new best composite slice path having a new best composite score based on a combination of a weighted performance metric and weighted load for the new candidate slice paths.
12. The non-transitory, computer-readable medium of claim 8, the stages further comprising:
for each candidate slice path, calculating a normalized cost of the performance metric; and
when no candidate slice path includes a normalized cost that meets an SLA threshold, ranking the candidate slice paths based on how close the normalized cost is to the SLA threshold.
13. The non-transitory, computer-readable medium of claim 8, wherein determining the candidate slice paths includes:
determining a first set of slice paths that includes all permutations of VNF-to-cloud assignments that are within the number of VNFs and maximum number of inter-cloud links;
determining a second set of slice paths that complies with the SLA attribute; and
taking an intersection of the first and second sets.
14. The non-transitory, computer-readable medium of claim 8, wherein the best composite slice path has the lowest composite score of multiple composite slice paths, wherein the composite score includes adding a weighted load to a weighted performance metric.
15. A system for dynamic inter-cloud virtual network function (“VNF”) placement in a slice, comprising:
a non-transitory, computer-readable medium containing instructions; and
a processor that executes the instructions to perform stages comprising:
determining candidate slice paths relative to an edge cloud, wherein the candidate slice paths are different permutations of VNF-to-cloud assignments, the permutations being limited by the number of VNFs or a maximum number of inter-cloud links;
for the candidate slice paths, determining a load based on each cloud in the candidate slice path;
identifying a best composite slice path having a best composite score based on a performance metric and the load, wherein the performance metric corresponds to a service level agreement (“SLA”) attribute; and
provisioning the VNFs at corresponding clouds specified by the best composite slice path.
16. The system of claim 15, the stages further comprising:
creating a matrix that maps the performance metric to each different cloud in the candidate slice paths, wherein the performance metric of a candidate slice path is derived from values in the matrix; and
when at least one candidate slice path complies with the SLA attribute, eliminating candidate slice paths that are non-compliant with the SLA attribute.
17. The system of claim 16, wherein the SLA attribute is a maximum round-trip time.
18. The system of claim 15, the stages further comprising:
detecting network congestion or a performance metric below an SLA requirement;
determining new candidate slice paths; and
provisioning at least one of the VNFs at a new cloud specified by a new best composite slice path, the new best composite slice path having a new best composite score based on a combination of a weighted performance metric and weighted load for the new candidate slice paths.
19. The system of claim 15, the stages further comprising:
for each candidate slice path, calculating a normalized cost of the performance metric; and
when no candidate slice path includes a normalized cost that meets an SLA threshold, ranking the candidate slice paths based on how close the normalized cost is to the SLA threshold.
20. The system of claim 15, wherein determining the candidate slice paths includes:
determining a first set of slice paths that includes all permutations of VNF-to-cloud assignments that are within the number of VNFs and maximum number of inter-cloud links;
determining a second set of slice paths that complies with the SLA attribute; and
taking an intersection of the first and second sets.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP20745963.7A EP3903455A4 (en) | 2019-01-24 | 2020-01-22 | Dynamic inter-cloud placement of virtual network functions for a slice |
CA3118160A CA3118160A1 (en) | 2019-01-24 | 2020-01-22 | Dynamic inter-cloud placement of virtual network functions for a slice |
CN202080010206.6A CN113348651B (en) | 2019-01-24 | 2020-01-22 | Dynamic inter-cloud placement of sliced virtual network functions |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/256,659 US10944647B2 (en) | 2019-01-24 | 2019-01-24 | Dynamic inter-cloud placement of virtual network functions for a slice |
US16/256,659 | 2019-01-24 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020154439A1 true WO2020154439A1 (en) | 2020-07-30 |
Family
ID=71731732
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2020/014661 WO2020154439A1 (en) | 2019-01-24 | 2020-01-22 | Dynamic inter-cloud placement of virtual network functions for a slice |
Country Status (5)
Country | Link |
---|---|
US (2) | US10944647B2 (en) |
EP (1) | EP3903455A4 (en) |
CN (1) | CN113348651B (en) |
CA (1) | CA3118160A1 (en) |
WO (1) | WO2020154439A1 (en) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10855757B2 (en) * | 2018-12-19 | 2020-12-01 | At&T Intellectual Property I, L.P. | High availability and high utilization cloud data center architecture for supporting telecommunications services |
WO2021094812A1 (en) * | 2019-11-12 | 2021-05-20 | Telefonaktiebolaget Lm Ericsson (Publ) | Joint consideration of service function placement and definition for deployment of a virtualized service |
US11012872B1 (en) * | 2020-03-19 | 2021-05-18 | Verizon Patent And Licensing Inc. | Method and system for polymorphic algorithm-based network slice orchestration |
US11528642B2 (en) * | 2020-12-21 | 2022-12-13 | Verizon Patent And Licensing Inc. | Method and system for SLA-based network slice control service |
CN113453285B (en) * | 2021-06-23 | 2023-02-24 | 中国联合网络通信集团有限公司 | Resource adjusting method, device and storage medium |
US11652710B1 (en) | 2021-12-14 | 2023-05-16 | International Business Machines Corporation | Service level agreement aware resource access latency minimization |
US11829234B2 (en) | 2022-01-19 | 2023-11-28 | Dell Products L.P. | Automatically classifying cloud infrastructure components for prioritized multi-tenant cloud environment resolution using artificial intelligence techniques |
WO2023200881A1 (en) * | 2022-04-14 | 2023-10-19 | Dish Wireless L.L.C. | Network provisioning to multiple cores |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2016049332A1 (en) * | 2014-09-25 | 2016-03-31 | Microsoft Technology Licensing, Llc | Media session between network endpoints |
KR20180009046A (en) * | 2015-06-16 | 2018-01-25 | 삼성전자주식회사 | Method and apparatus for multipath media delivery |
US20180295180A1 (en) * | 2016-06-28 | 2018-10-11 | At&T Intellectual Property I, L.P. | Service Orchestration to Support a Cloud-Based, Multi-Party Video Conferencing Service in a Virtual Overlay Network Environment |
US20180376338A1 (en) * | 2016-08-05 | 2018-12-27 | Nxgen Partners Ip, Llc | Sdr-based massive mimo with v-ran cloud architecture and sdn-based network slicing |
Family Cites Families (57)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7391786B1 (en) | 2002-11-27 | 2008-06-24 | Cisco Technology, Inc. | Centralized memory based packet switching system and method |
US7719982B2 (en) | 2005-08-31 | 2010-05-18 | Intel Corporation | Switching device utilizing flow-control management |
US7986700B2 (en) | 2006-09-25 | 2011-07-26 | Futurewei Technologies, Inc. | Multiplexed data stream circuit architecture |
WO2008041434A1 (en) | 2006-10-02 | 2008-04-10 | Panasonic Corporation | Flow control method, transmitting terminal device used in same, receiving terminal device and packet transfer system |
KR101089832B1 (en) | 2010-01-25 | 2011-12-05 | 포항공과대학교 산학협력단 | Network Management System |
US9173156B2 (en) | 2011-08-05 | 2015-10-27 | GM Global Technology Operations LLC | Method and system for transferring information in vehicular wireless networks |
CN103930882B (en) | 2011-11-15 | 2017-10-03 | Nicira股份有限公司 | The network architecture with middleboxes |
US9450882B2 (en) | 2012-04-23 | 2016-09-20 | Cisco Technology, Inc. | Method and apparatus for supporting call admission control using graph assembly and fate-share identifiers |
US20160132798A1 (en) | 2013-07-26 | 2016-05-12 | Hewlett-Packard Development, L.P. | Service-level agreement analysis |
US9497125B2 (en) | 2013-07-28 | 2016-11-15 | Mellanox Technologies Ltd. | Congestion control enforcement in a virtualized environment |
EP2849064B1 (en) * | 2013-09-13 | 2016-12-14 | NTT DOCOMO, Inc. | Method and apparatus for network virtualization |
US9870580B2 (en) | 2014-05-07 | 2018-01-16 | Verizon Patent And Licensing Inc. | Network-as-a-service architecture |
US9672502B2 (en) | 2014-05-07 | 2017-06-06 | Verizon Patent And Licensing Inc. | Network-as-a-service product director |
US10182129B1 (en) | 2014-06-19 | 2019-01-15 | Amazon Technologies, Inc. | Global optimization of a service-oriented system |
US9875126B2 (en) | 2014-08-18 | 2018-01-23 | Red Hat Israel, Ltd. | Hash-based load balancing for bonded network interfaces |
US9722935B2 (en) | 2014-10-16 | 2017-08-01 | Huawei Technologies Canada Co., Ltd. | System and method for transmission management in software defined networks |
US9886296B2 (en) | 2014-12-01 | 2018-02-06 | International Business Machines Corporation | Managing hypervisor weights in a virtual environment |
US9628380B2 (en) * | 2015-03-06 | 2017-04-18 | Telefonaktiebolaget L M Ericsson (Publ) | Method and system for routing a network function chain |
US9762402B2 (en) * | 2015-05-20 | 2017-09-12 | Cisco Technology, Inc. | System and method to facilitate the assignment of service functions for service chains in a network environment |
US10142353B2 (en) | 2015-06-05 | 2018-11-27 | Cisco Technology, Inc. | System for monitoring and managing datacenters |
US20180242161A1 (en) | 2015-08-05 | 2018-08-23 | Telefonaktiebolaget Lm Ericsson (Publ) | Distributed management of network slices using a gossip protocol |
US10374956B1 (en) | 2015-09-25 | 2019-08-06 | Amazon Technologies, Inc. | Managing a hierarchical network |
US9866485B2 (en) | 2016-01-21 | 2018-01-09 | Cox Communication, Inc. | Rerouting network traffic flows based on selection criteria |
WO2017131259A1 (en) * | 2016-01-29 | 2017-08-03 | 엘지전자 주식회사 | Method by which network nodes calculate optimum path for virtualized service functions |
US10142427B2 (en) | 2016-03-31 | 2018-11-27 | Huawei Technologies Co., Ltd. | Systems and methods for service and session continuity in software defined topology management |
CN110401972B (en) | 2016-04-08 | 2022-04-22 | 大唐移动通信设备有限公司 | Method, apparatus and system for routing messages in a multi-network sliced network |
US9986025B2 (en) | 2016-05-24 | 2018-05-29 | Nicira, Inc. | Load balancing for a team of network interface controllers |
FR3052324A1 (en) | 2016-06-07 | 2017-12-08 | Orange | METHOD FOR CONNECTING A TERMINAL TO A NETWORK TRENCH |
WO2018000240A1 (en) * | 2016-06-29 | 2018-01-04 | Orange | Method and system for the optimisation of deployment of virtual network functions in a communications network that uses software defined networking |
CN107659419B (en) | 2016-07-25 | 2021-01-01 | 华为技术有限公司 | Network slicing method and system |
KR102576869B1 (en) * | 2016-10-10 | 2023-09-11 | 한국전자통신연구원 | Apparatus and Method for Setting Service Function Path of Service Function Chain based on Software Defined Network |
US10212088B2 (en) | 2016-11-07 | 2019-02-19 | Cisco Technology, Inc. | Tactical traffic engineering based on segment routing policies |
US10505870B2 (en) * | 2016-11-07 | 2019-12-10 | At&T Intellectual Property I, L.P. | Method and apparatus for a responsive software defined network |
US10469376B2 (en) * | 2016-11-15 | 2019-11-05 | At&T Intellectual Property I, L.P. | Method and apparatus for dynamic network routing in a software defined network |
CN108092791B (en) | 2016-11-23 | 2020-06-16 | 华为技术有限公司 | Network control method, device and system |
EP3327990B1 (en) | 2016-11-28 | 2019-08-14 | Deutsche Telekom AG | Radio communication network with multi threshold based sla monitoring for radio resource management |
US9961624B1 (en) | 2017-02-09 | 2018-05-01 | T-Mobile Usa, Inc. | Network slice selection in wireless telecommunication networks |
WO2018176385A1 (en) * | 2017-03-31 | 2018-10-04 | Huawei Technologies Co., Ltd. | System and method for network slicing for service-oriented networks |
US10608895B2 (en) | 2017-03-31 | 2020-03-31 | At&T Intellectual Property I, L.P. | Quality of service management for dynamic instantiation of network slices and/or applications |
WO2018197924A1 (en) * | 2017-04-24 | 2018-11-01 | Telefonaktiebolaget Lm Ericsson (Publ) | Method and system to detect virtual network function (vnf) congestion |
US10749796B2 (en) | 2017-04-27 | 2020-08-18 | At&T Intellectual Property I, L.P. | Method and apparatus for selecting processing paths in a software defined network |
US10819606B2 (en) * | 2017-04-27 | 2020-10-27 | At&T Intellectual Property I, L.P. | Method and apparatus for selecting processing paths in a converged network |
CN108965132B (en) | 2017-05-22 | 2021-06-22 | 华为技术有限公司 | Method and device for selecting path |
WO2018224151A1 (en) | 2017-06-08 | 2018-12-13 | Huawei Technologies Co., Ltd. | Device and method for providing a network slice |
CN113364687A (en) | 2017-06-30 | 2021-09-07 | 华为技术有限公司 | Method for generating forwarding table item, controller and network equipment |
CN107332913B (en) * | 2017-07-04 | 2020-03-27 | 电子科技大学 | Optimized deployment method of service function chain in 5G mobile network |
US10530678B2 (en) | 2017-07-20 | 2020-01-07 | Vmware, Inc | Methods and apparatus to optimize packet flow among virtualized servers |
US10764789B2 (en) | 2017-08-11 | 2020-09-01 | Comcast Cable Communications, Llc | Application-initiated network slices in a wireless network |
CN107995045B (en) * | 2017-12-19 | 2020-10-13 | 上海海事大学 | Adaptive service function chain path selection method and system for network function virtualization |
US11172400B2 (en) | 2018-03-06 | 2021-11-09 | Verizon Patent And Licensing Inc. | Method and system for end-to-end admission and congestion control based on network slicing |
US11329874B2 (en) | 2018-04-12 | 2022-05-10 | Qualcomm Incorporated | Vehicle to everything (V2X) centralized predictive quality of service (QoS) |
CN108540384B (en) | 2018-04-13 | 2020-07-28 | 西安交通大学 | Intelligent rerouting method and device based on congestion awareness in software defined network |
US10638356B2 (en) | 2018-07-23 | 2020-04-28 | Nokia Technologies Oy | Transmission of network slicing constraints in 5G wireless networks |
US10834004B2 (en) * | 2018-09-24 | 2020-11-10 | Netsia, Inc. | Path determination method and system for delay-optimized service function chaining |
US10601724B1 (en) | 2018-11-01 | 2020-03-24 | Cisco Technology, Inc. | Scalable network slice based queuing using segment routing flexible algorithm |
US11424977B2 (en) | 2018-12-10 | 2022-08-23 | Wipro Limited | Method and system for performing effective orchestration of cognitive functions in distributed heterogeneous communication network |
KR102641254B1 (en) | 2019-01-08 | 2024-02-29 | 삼성전자 주식회사 | A method and management device for controlling an end-to-end network in a wireless communication system |
2019
- 2019-01-24 US US16/256,659 patent/US10944647B2/en active Active

2020
- 2020-01-22 EP EP20745963.7A patent/EP3903455A4/en active Pending
- 2020-01-22 CA CA3118160A patent/CA3118160A1/en active Pending
- 2020-01-22 WO PCT/US2020/014661 patent/WO2020154439A1/en unknown
- 2020-01-22 CN CN202080010206.6A patent/CN113348651B/en active Active

2021
- 2021-03-08 US US17/195,058 patent/US11329901B2/en active Active
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2016049332A1 (en) * | 2014-09-25 | 2016-03-31 | Microsoft Technology Licensing, Llc | Media session between network endpoints |
KR20180009046A (en) * | 2015-06-16 | 2018-01-25 | 삼성전자주식회사 | Method and apparatus for multipath media delivery |
US20180295180A1 (en) * | 2016-06-28 | 2018-10-11 | At&T Intellectual Property I, L.P. | Service Orchestration to Support a Cloud-Based, Multi-Party Video Conferencing Service in a Virtual Overlay Network Environment |
US20180376338A1 (en) * | 2016-08-05 | 2018-12-27 | Nxgen Partners Ip, Llc | Sdr-based massive mimo with v-ran cloud architecture and sdn-based network slicing |
Non-Patent Citations (2)
Title |
---|
GOUAREB RACHA; FRIDERIKOS VASILIS; AGHVAMI ABDOL-HAMID: "Virtual Network Functions Routing and Placement for Edge Cloud Latency Minimization", IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS., IEEE SERVICE CENTER, PISCATAWAY., US, vol. 36, no. 10, 1 October 2018 (2018-10-01), US, pages 2346 - 2357, XP011696719, ISSN: 0733-8716, DOI: 10.1109/JSAC.2018.2869955 * |
See also references of EP3903455A4 * |
Also Published As
Publication number | Publication date |
---|---|
CN113348651A (en) | 2021-09-03 |
CA3118160A1 (en) | 2020-07-30 |
EP3903455A4 (en) | 2022-09-21 |
US20210194778A1 (en) | 2021-06-24 |
US11329901B2 (en) | 2022-05-10 |
EP3903455A1 (en) | 2021-11-03 |
CN113348651B (en) | 2023-06-09 |
US10944647B2 (en) | 2021-03-09 |
US20200244551A1 (en) | 2020-07-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11356338B2 (en) | Dynamic inter-cloud placement of virtual network functions for a slice | |
US11329901B2 (en) | Dynamic inter-cloud placement of virtual network functions for a slice | |
US10203993B2 (en) | Method and system for continuous optimization of data centers by combining server and storage virtualization | |
US9442763B2 (en) | Resource allocation method and resource management platform | |
US9740534B2 (en) | System for controlling resources, control pattern generation apparatus, control apparatus, method for controlling resources and program | |
US10009284B2 (en) | Policy-based session establishment and transfer in a virtualized/cloud environment | |
JP2019533913A (en) | Load balancing optimization method and apparatus based on cloud monitoring | |
US10660069B2 (en) | Resource allocation device and resource allocation method | |
JP6664812B2 (en) | Automatic virtual resource selection system and method | |
US20140089510A1 (en) | Joint allocation of cloud and network resources in a distributed cloud system | |
WO2012173642A1 (en) | Decentralized management of virtualized hosts | |
US10356185B2 (en) | Optimal dynamic cloud network control | |
US20150128138A1 (en) | Decentralized management of virtualized hosts | |
US20210314418A1 (en) | Machine learning method for adaptive virtual network functions placement and readjustment | |
Kim et al. | An energy-aware service function chaining and reconfiguration algorithm in NFV | |
US10983828B2 (en) | Method, apparatus and computer program product for scheduling dedicated processing resources | |
Abreu et al. | A rank scheduling mechanism for fog environments | |
Ziafat et al. | A hierarchical structure for optimal resource allocation in geographically distributed clouds | |
CN109815204A (en) | A kind of metadata request distribution method and equipment based on congestion aware | |
Taka et al. | Service placement and user assignment in multi-access edge computing with base-station failure | |
Liu et al. | Correlation-based virtual machine migration in dynamic cloud environments | |
CN110430236B (en) | Method for deploying service and scheduling device | |
Zhou et al. | Balancing load: An adaptive traffic management scheme for microservices | |
CN112860384B (en) | Multi-dimensional resource load balancing-oriented VNF multiplexing and migration method | |
Beshley et al. | Traffic engineering and QoS/QoE supporting techniques for emerging service-oriented software-defined network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 20745963; Country of ref document: EP; Kind code of ref document: A1 |
| ENP | Entry into the national phase | Ref document number: 3118160; Country of ref document: CA |
| NENP | Non-entry into the national phase | Ref country code: DE |
| ENP | Entry into the national phase | Ref document number: 2020745963; Country of ref document: EP; Effective date: 20210728 |