CN109074280B - Network function virtualization - Google Patents


Info

Publication number
CN109074280B
CN109074280B (application CN201680084433.7A)
Authority
CN
China
Prior art keywords
value
parameter
time parameter
vdu
network
Prior art date
Legal status
Active
Application number
CN201680084433.7A
Other languages
Chinese (zh)
Other versions
CN109074280A (en)
Inventor
梅格哈什里·达特特里·科达拉古德
穆泰俄赫·温卡塔查拉姆
武-池·冯
Current Assignee
Apple Inc
Intel Corp
Original Assignee
Apple Inc
Priority date
Filing date
Publication date
Application filed by Apple Inc filed Critical Apple Inc
Publication of CN109074280A
Application granted
Publication of CN109074280B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5083 Techniques for rebalancing the load in a distributed system
    • G06F 9/5061 Partitioning or combining of resources
    • G06F 9/5077 Logical partitioning of resources; Management or configuration of virtualized resources

Abstract

Embodiments of the present disclosure describe methods and apparatus for network function virtualization architecture and operation.

Description

Network function virtualization
Technical Field
Embodiments of the present disclosure relate generally to the field of networks and, more particularly, relate to an apparatus and method for network function virtualization in a cellular network.
Background
Network orchestration is the management of physical and virtual devices to meet the deployment and operational requirements of a network. European Telecommunications Standards Institute (ETSI) Network Function Virtualization (NFV) management and orchestration (MANO) describes a framework for providing virtualized network functions and related operations, e.g., configuring virtualized network functions and corresponding infrastructure.
Drawings
Embodiments will be readily understood by the following detailed description in conjunction with the accompanying drawings. For convenience of description, like reference numerals denote like structural elements. Embodiments are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings.
Fig. 1 illustrates an NFV architecture and reference points according to some embodiments.
Fig. 2 illustrates portions of the NFV architecture of fig. 1, according to some embodiments.
FIG. 3 illustrates a deployment template descriptor data model of a network service descriptor in accordance with some embodiments.
Fig. 4 illustrates an architecture of a host device according to some embodiments.
Fig. 5 illustrates a process flow for reactive monitoring in the NFV management plane, in accordance with some embodiments.
FIG. 6 illustrates a first level of a performance monitoring and management operational flow/algorithm structure, according to some embodiments.
FIG. 7 illustrates an overuse process of a performance monitoring and management operational flow/algorithm structure in accordance with some embodiments.
FIG. 8 illustrates an underuse process of a performance monitoring and management operational flow/algorithm structure according to some embodiments.
FIG. 9 illustrates a sub-process of the underuse process of FIG. 8, in accordance with some embodiments.
Fig. 10 illustrates a network using a coexistence deployment model according to some embodiments.
FIG. 11 illustrates a computer system that may be used to implement various embodiments.
Fig. 12 illustrates an ethernet controller according to some embodiments.
FIG. 13 illustrates an example operational flow/algorithm structure of a host device according to some embodiments.
Fig. 14 illustrates an example operational flow/algorithm structure of a host device according to some embodiments.
Fig. 15 illustrates an example operational flow/algorithm structure of a virtual network function manager in accordance with some embodiments.
FIG. 16 illustrates an example operational flow/algorithm structure of an element manager or network manager in accordance with some embodiments.
FIG. 17 illustrates an example operational flow/algorithm structure of a network function virtualization orchestrator according to some embodiments.
FIG. 18 illustrates an example operational flow/algorithm structure of an operational support system or business support system according to some embodiments.
FIG. 19 illustrates an example computer-readable medium according to some embodiments.
Detailed Description
In the following detailed description, reference is made to the accompanying drawings which form a part hereof wherein like numerals designate like parts throughout, and in which is shown by way of illustration embodiments that may be practiced. It is to be understood that other embodiments may be utilized and structural or logical changes may be made without departing from the scope of the present disclosure.
Various operations may be described as multiple discrete acts or operations in turn, in a manner that is most helpful in understanding the claimed subject matter. However, the order of description should not be construed as to imply that these operations are necessarily order dependent. In particular, these operations may be performed out of the order of presentation. The described operations may be performed in a different order than the described embodiments. Various additional operations may be performed in additional embodiments or the described operations may be omitted.
For the purposes of this disclosure, the phrases "a or B", "a and/or B" and "a/B" denote (a), (B) or (a and B).
The description may use the phrases "in one embodiment" or "in an embodiment," which may each refer to one or more of the same or different embodiments. Furthermore, the terms "comprising," "including," "having," and the like, as used with respect to embodiments of the present disclosure, are synonymous.
Fig. 1 illustrates an NFV architecture 100 and a reference point according to some embodiments. NFV architecture 100 may be used in networks operating in compliance with third generation partnership project 3GPP specifications.
The NFV architecture 100 may include an NFV-MANO system 104 coupled to a core network CN services system 108, as shown. Each module shown in NFV architecture 100 may represent a module designed to provide discrete operations for facilitating the provision of network services by CN services system 108, including, for example, management, orchestration, and communication operations. Network services may be implemented by any combination of virtual network functions, VNFs, and physical network functions PNFs, which may be linked together.
The network service may be any type of service provided by a core network element of the cellular network, such as, but not limited to: a mobility management entity MME, a packet data network gateway PDN-GW, a serving gateway S-GW, a policy charging and rules function PCRF, a home location register HLR, a visitor location register VLR, a home subscriber server HSS, a serving general packet radio service support node SGSN, a gateway general packet radio service support node GGSN, etc.
The modules of NFV architecture 100 will be briefly described. However, unless otherwise described, the operation of the modules of NFV architecture 100 may be consistent with the description in European Telecommunications Standards Institute (ETSI) Group Specification (GS) NFV-MAN 001 V1.1.1 (2014-12), Network Functions Virtualisation (NFV); Management and Orchestration.
In general, the various computer systems may be adapted to provide the operations described with respect to the modules of architecture 100. Some particularly adapted computer systems are described herein with respect to modules that implement the operations of the various embodiments. However, the operations described with respect to other modules may be performed by similar computer systems adapted based on the goals and implementation details associated with a particular module.
The modules of NFV architecture 100 are shown coupled to each other through various reference points. In some embodiments, a particular implementation of NFV architecture 100 may combine some modules with other modules. In this case, the reference points between the combined modules may be internalized.
In general, NFV-MANO system 104 may provide management and orchestration operations to facilitate the provision of virtualized network functions by CN services system 108. NFV-MANO system 104 may include a network function virtualization orchestrator NFVO 112 coupled to a virtual network function manager VNFM 116. NFVO 112 may also be coupled to a plurality of data stores, such as, but not limited to: network services NS catalog 122, virtual network function VNF catalog 124, network function virtualization NFV instances repository 128, and NFV infrastructure NFVI resources repository 132.
NFVO 112 may provide network service orchestration by coordinating the life cycles of the VNFs that collectively implement a network service. This may include managing the associations between different VNFs, the topology of the network service NS, and the VNF forwarding graphs VNFFGs associated with the network service. NFVO 112 may need to know all resources available for reservation and allocation at the NFVI for an NS instance.
NFVO 112 may be coupled to VNFM 116 through an Or-Vnfm reference point. VNFM 116 may be responsible for managing the life cycle of VNF instances. In various embodiments, VNFM 116 may provide conventional management operations such as, but not limited to: fault management, configuration management, billing management, performance management, and security management. VNFM 116 may also provide scaling operations to change the configuration of virtualized resources. The scaling operations may include, but are not limited to: scaling up (e.g., adding central processing units CPUs), scaling down (e.g., removing CPUs or releasing some virtualized resources), scaling out (e.g., adding new virtual machines VMs), and scaling in (e.g., shutting down and deleting VM instances).
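The four scaling operations above can be sketched as simple state changes on a VNF's list of VMs. This is a hedged illustration only; the names (`Scaling`, `VnfInstance`, `apply_scaling`) are invented for the sketch and are not part of any ETSI or 3GPP API.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Scaling(Enum):
    SCALE_UP = auto()    # add vCPUs to an existing VM
    SCALE_DOWN = auto()  # remove vCPUs / release resources
    SCALE_OUT = auto()   # instantiate a new VM
    SCALE_IN = auto()    # shut down and delete a VM instance

@dataclass
class VnfInstance:
    vms: list  # vCPU count of each VM backing this VNF

def apply_scaling(vnf: VnfInstance, op: Scaling, vcpus: int = 1) -> None:
    """Apply one VNFM scaling operation to a VNF instance."""
    if op is Scaling.SCALE_UP:
        vnf.vms[0] += vcpus                   # grow an existing VM
    elif op is Scaling.SCALE_DOWN:
        vnf.vms[0] = max(1, vnf.vms[0] - vcpus)
    elif op is Scaling.SCALE_OUT:
        vnf.vms.append(vcpus)                 # add a new VM
    elif op is Scaling.SCALE_IN:
        vnf.vms.pop()                         # remove a VM instance
```

In a real deployment these state changes would be requested of the VIM over the Or-Vi or Vi-Vnfm reference points rather than applied to a local list.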
In some embodiments, VNFM 116 may include global monitor 118. Global monitor 118 may be a background process that gathers measurements related to performance metrics of the VMs on which VNFs (e.g., VNF 144) operate.
NS catalog 122 may represent a repository of all on-boarded network services, supporting the creation and management of NS deployment templates. NS deployment templates may include, but are not limited to: network service descriptor NSD, virtual link descriptor VLD, VNF descriptor VNFD, and VNF forwarding graph descriptor VNFFGD.
VNF catalog 124 may represent a repository of all on-boarded VNF packages. As used herein, a VNF package may include, for example, a VNFD, software images, manifest files, and the like. The information in VNF catalog 124 may support the creation and management of VNF packages through interface operations exposed by NFVO 112. VNF catalog 124 may be coupled with NFVO 112 and VNFM 116 via respective reference points. NFVO 112 or VNFM 116 may query VNF catalog 124 to find and retrieve a VNFD to support operations such as, but not limited to, validation and checking instantiation feasibility.
NFV instance repository 128 may hold information for all VNFs and NS instances. Each VNF/NS instance may be represented by a VNF/NS record that is updated during the life cycle of the respective instance to reflect changes due to performing VNF/NS life cycle management operations.
NFVI resource repository 132 may hold information about available, reserved, and allocated NFVI resources as abstracted by virtualized infrastructure manager VIM 120 coupled to VNFM 116.
VIM 120 may control and manage NFVI resources, e.g., computing, storage, and network resources for NFV. In some embodiments, VIM 120 may manage only a subset of one or more types of NFVI resources (e.g., compute only, store only, or network only). In other embodiments, VIM 120 may manage multiple types of NFVI resources.
In addition to being coupled to VNFM 116, VIM 120 may also be coupled to NFVO 112 via an Or-Vi reference point.
The CN services system 108 may include an operations support system/business support system (OSS/BSS) 136, which may be comprised of one or more devices to manage and orchestrate legacy systems by providing functionality such as, but not limited to, network inventory, service provisioning, network configuration, and fault management. OSS/BSS 136 may have full end-to-end visibility of services provided by legacy network systems.
OSS/BSS 136 may be coupled to NFVO 112 via an Os-Ma-NFVO reference point.
OSS/BSS 136 may also be coupled to element manager EM 140, which may be responsible for the fault, configuration, accounting, performance, and security FCAPS management functions for VNFs (e.g., VNF 144). In particular, EM 140 may provide a number of management operations for the network functions provided by VNF 144. These management operations may include, but are not limited to: configuration, fault management, billing, collection of performance measurements, and security management. In some embodiments, EM 140 may be coupled with VNFM 116 through a Ve-Vnfm-EM reference point to cooperate with VNFM 116 in performing functions that rely on exchanging information about the NFVI resources associated with VNF 144.
VNF 144 may be a software implementation of network functions capable of running on NFVI 148. The deployment and operational behavior of VNF 144 may be described in a corresponding VNFD that may be stored in VNF catalog 124.
The VNF 144 may be coupled to the VNFM 116 via a Ve-Vnfm-VNF reference point. The Ve-Vnfm-VNF reference point may support the exchange of messages that provide VNF instantiation, querying, updating, scaling, verification, configuration, etc.
NFVI 148 may represent hardware (e.g., computing, storage, and network circuitry) and software (e.g., hypervisor) components that collectively provide the infrastructure resources to deploy VNF 144. In some embodiments, NFVI 148 may also include a partially virtualized NF, for example, with some of its functions virtualized and other parts embodied in a Physical Network Function (PNF) (e.g., built-in silicon) due to physical constraints or vendor design choices.
NFVI 148 may be coupled to VIM 120 by Nf-Vi reference points. The Nf-Vi reference point may support exchange of VM management messages to provide/update VM resource allocation, migrate/terminate VMs, manage connections between VMs, and the like.
Fig. 2 illustrates selected modules of NFV-MANO system 104 in more detail, according to some embodiments. Specifically, NFVO 112 is shown with functional units including NS record NSR manager 204, prediction engine 208, policy catalog 212, and monitor module 216. The functional units of NFVO 112 may allow metrics to be collected as part of resource and service orchestration to achieve near-real-time event triggering in decisions. This may involve orchestration at multiple levels. An abstraction may be created at each management layer, and a high-level, coarse view of system resources and service states may be propagated back to NFVO 112.
Monitor module 216 may collect measurements and performance parameters for each VNF from VNFM 116 and VIM 120. In the case of a mobile core network element, the measurements and performance parameters may include, for example, request arrival rate, average response time, calls per second, etc. The collected measurements may be a combination of both system metrics and application-specific metrics for each VNF/virtual link. In some embodiments, monitor module 216 may receive the measurements and performance parameters from policy catalog 212. The measurements and performance parameters may be referred to as "monitoring parameters." The monitoring parameters may be a combination of system metrics at the VIM level and application metrics at the VNFM level (referred to as counters in the 3GPP network function management specifications).
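The two-source collection described above can be sketched as a merge of per-VNF metric records: system metrics arriving from the VIM and application metrics (e.g., calls per second) arriving from the VNFM. This is a minimal illustrative sketch; the function name and record layout are assumptions, not from the disclosure or any ETSI interface.

```python
def merge_monitoring(vim_metrics: dict, vnfm_metrics: dict) -> dict:
    """Combine per-VNF system metrics (VIM level) and application
    metrics (VNFM level) into a single record per VNF, as a monitor
    module might before evaluating monitoring parameters."""
    merged = {}
    for vnf_id in set(vim_metrics) | set(vnfm_metrics):
        merged[vnf_id] = {**vim_metrics.get(vnf_id, {}),
                          **vnfm_metrics.get(vnf_id, {})}
    return merged
```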
As will be described, the monitoring may be a multi-level operation with operations performed at NFVO 112, VNFM 116, and VIM 120. Monitor module 216 may maintain an overview of the performance of the network service level because decisions for resource allocation may be routed through NFVO 112.
In some embodiments, monitor module 216 may be coupled with VNFM 116 to instantiate a VNF and verify resource availability when additional resources for the instantiated VNF are requested.
Policy catalog 212 may be a repository that stores information about the NSD, VNFD, VLD, VNFFG, NFVI resource catalogs, and NFVI instances when network services are on-boarded. This information may be referred to as on-boarding descriptors and, in some embodiments, may be loaded into policy catalog 212 from the appropriate distributed repository (e.g., NS catalog 122) when a network service is on-boarded. On-boarding an NSD may be accomplished by OSS/BSS 136 as part of NS instantiation. In some embodiments, the on-boarding descriptors may be provided to policy catalog 212 by a profiling engine 220 that interfaces with OSS/BSS 136.
Policy catalog 212 may provide on-board descriptors to VNFM 116 to facilitate VNF instantiation and lifecycle management.
Initial deployment resource requirements for a given NS may be obtained by analyzing various deployment test scenarios on the test network, but there may be alternative ways of obtaining this data. The monitoring parameters, scaling policies, and NS deployment features may be incorporated as information elements into the NSD.
Fig. 3 illustrates a deployment template descriptor data model 300 of an NSD 304, according to some embodiments. The NSD 304 may be a descriptor file that describes the NS to be deployed. The NS may be orchestrated by NFVO 112 and may be composed of one or more VNFs, PNFs, and VNFFGs. The NSD 304 may include (or include references to) the VNFDs, VLDs, VNFFGDs, and monitoring parameters that support the service level agreement SLA of the NS.
The descriptor file may be stored in a repository accessed by the modules of NFV-MANO 104, depending on the state of the deployment. In some embodiments, NSD, VNFFGD, VLD and its lifecycle management may be handled by NFVO 112, while VNFD and its lifecycle management may be managed by VNFM 116.
As shown in fig. 3, NSD 304 may include (or include references to) VNFD 308, VNFD 312, VNFD 316, VLD 320, VLD 324, VNFFGD 328, and VNFFGD 332.
VNFD may describe VNF in terms of deployment and operational behavior requirements. The VNFD may include or otherwise reference initiation and termination scripts and internal and external connectivity. In some embodiments, the VNFD may also contain connectivity, interface, and key performance indicator KPI parameters that may be used by modules of the NFV-MANO system 104 to establish an appropriate virtual link VL within the NFVI between VNFC instances, or between VNF instances and endpoint interfaces to other network functions.
VNFFGD may describe the topology of some or all of the NS by referring to VNF and PNF and VL connecting them.
The VLD may describe an associated VL. The VLD may provide the resource requirements that may be needed for VL between endpoints of VNF, PNF and NS, which may be met by various link options available in NFVI. In some embodiments, the VLD may describe a basic topology of connectivity between one or more VNFs coupled with the VL, as well as other desired parameters (e.g., bandwidth and quality of service (QoS) class).
Each VNFD may be associated with one or more virtualized deployment units (VDUs), each VDU corresponding to a subset of the VNFD. A VDU may be a construct in an information model that supports the description of the deployment and operational behavior of a virtual network function component (VNFC). A VNFC may be a module mapped to a single VM and designed to perform a discrete sub-function of the VNF. A VNFC may be an internal component that provides a subset of the VNF's functionality, as defined by the VNF provider. Because a VDU is the information-model representation of a VNFC, the terms VDU and VNFC may be used interchangeably.
As shown, the VNFD 308 may be associated with VDU 336, VDU 340, and VDU 344; the VNFD 312 may be associated with a VDU 348 and a VDU 352; and VNFD 316 may be associated with VDU 356 and VDU 360.
Each VDU may be associated with various VM resources for supporting the functionality of the respective VNFC, and possibly also with VM metrics to be monitored. For example, VDU 336 may be associated with: computing resources 364, which may be represented in terms of virtual central processing units (vcpus); network resources 368, which may be represented in terms of virtual network bandwidth vNBW; and a storage/memory resource 372, which may be represented in terms of virtual memory vMEM. Similarly, the VDU 352 can be associated with computing resources 376, network resources 380, and storage/memory resources 384; and VDU 360 may be associated with computing resources 388, network resources 392, and storage/memory resources 396.
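The Fig. 3 hierarchy above (an NSD referencing VNFDs, each VNFD grouping VDUs, each VDU carrying vCPU/vNBW/vMEM requirements) can be sketched as nested dictionaries. The dict layout and the numeric resource values are illustrative assumptions, not values from the disclosure.

```python
# Illustrative nesting of the deployment template data model:
# NSD -> VNFDs -> VDUs -> per-VM resource requirements.
nsd_304 = {
    "vnfds": {
        "vnfd_308": {
            "vdu_336": {"vCPU": 2, "vNBW_mbps": 100, "vMEM_mb": 4096},
            "vdu_340": {"vCPU": 1, "vNBW_mbps": 50,  "vMEM_mb": 2048},
            "vdu_344": {"vCPU": 1, "vNBW_mbps": 50,  "vMEM_mb": 2048},
        },
    },
    "vlds": ["vld_320", "vld_324"],
    "vnffgds": ["vnffgd_328", "vnffgd_332"],
}

def total_vcpus(nsd: dict) -> int:
    """Sum the vCPU requirements across every VDU of every VNFD,
    e.g., to size the NFVI resources an NS instance would reserve."""
    return sum(vdu["vCPU"]
               for vnfd in nsd["vnfds"].values()
               for vdu in vnfd.values())
```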
Referring again to FIG. 2, prediction engine 208 may receive a log of records from monitor module 216. The log may include sample values of parameters monitored by NFVO 112. Prediction engine 208 may use these values, as well as other historical data, to implement proactive decisions. Prediction engine 208 may provide NSR manager 204 with proactive actions that NSR manager 204 will implement.
NSR manager 204 may create, update, delete NS records based on NS descriptor requirements, and may respond to queries from monitor module 216 regarding resources reserved in NFVI for a given NS instance.
Fig. 4 illustrates an architecture of a host device 400 according to some embodiments. Host device 400 may include platform hardware 404, which may generally correspond to NFVI resources. The platform hardware may include, but is not limited to, computing circuitry 408, storage/memory circuitry 412, and network circuitry 416.
As used herein, the term "circuitry" may refer to, be part of, or include any combination of the following: integrated circuits (e.g., a field programmable gate array FPGA, an application specific integrated circuit ASIC, etc.), discrete circuits, combinational logic circuits, a system on chip SoC, or a system in package SiP providing digital, analog, mixed-signal, or radio frequency functionality.
The computing circuitry 408 may include various processing units (shared, dedicated, or group). In some embodiments, the computing circuitry 408 may include, for example, one or more single-core or multi-core central processing units CPUs. The computing circuitry 408 may include various other processing units such as, but not limited to, digital signal processors, peripheral interfaces, accelerators, memory interfaces, controllers, and the like.
Storage/memory circuitry 412 may include any type of volatile or nonvolatile storage for storing information (e.g., data, computer-readable instructions arranged as executable code, etc.). In some embodiments, the computing circuitry 408 may execute computer readable instructions stored in the storage/memory circuitry 412 to implement the modules of the host device 400 and perform various operations associated with the respective modules. In some embodiments, the storage/memory circuit 412 may include flash memory, dynamic random access memory DRAM, static random access memory SRAM, or the like.
The network circuitry 416 may include circuitry to connect the host device with one or more other devices over a wired or wireless network. The network circuitry 416 may include appropriate computing circuitry and storage/memory circuitry to provide the required network connectivity. In various embodiments, the network circuitry 416 may provide one or more interfaces to interface with a network, such as ethernet, evolved universal terrestrial radio access network EUTRAN, and the like.
It will be appreciated that the circuitry of the platform hardware 404 may be arranged in any of a number of architectures, many of which may provide various combinations of components of the corresponding circuitry. Portions of the computing circuitry 408, storage/memory circuitry 412, and network circuitry 416 may be integrated with each other and/or distributed in various platforms, modules, chipsets, devices, servers, etc.
Platform hardware 404 may implement a hypervisor 420 (which may also be referred to as a virtual machine manager VMM) to create and run various VMs on host device 400.
The hypervisor 420 may be a hosted hypervisor that runs on the operating system OS 424, like other computer programs of the host device 400. In some cases, this may be referred to as a type 2 hypervisor. In other embodiments, the hypervisor 420 may be a native, bare-metal, or type 1 hypervisor that runs directly on the platform hardware 404.
The host device 400 may include a plurality of logical domains, and each domain operates independently of the other. For example, an operating system running inside a logical domain may start, stop, and restart independently of operating systems in other logical domains.
Each logical domain may be designed for a particular role. For example, a logical domain may be a control domain, a service domain, an input/output (I/O) domain, a root domain, or a guest domain.
The control domain may control a logical domain environment and may be used to configure machine resources and guest domains and provide services to be used in domain operations.
As shown in fig. 4, host device 400 may include controller domain 0 428. Controller domain 0 428 may include an Open vSwitch OVS 432 and a layer 2 L2 agent 436. OVS 432 may be an implementation of a distributed virtual multi-layer switch used to provide a switching stack for a hardware virtualization environment. OVS 432 may support multiple management interfaces and protocols and may support transparent distribution across multiple physical servers.
Although OVS 432 is shown as one example of a virtual multi-layer switch, other embodiments may include other virtual multi-layer switches.
The L2 agent 436 may create L2 connectivity for resource nodes (e.g., compute nodes, storage/memory nodes, and network nodes). The resource node may be a virtualized resource provided by circuitry on host device 400 and/or other devices. For example, a compute node may include a portion of compute circuitry 408, as well as a portion of compute circuitry allocated to another device used by one or more VMs.
L2 agent 436 may create L2 connectivity by configuring a local virtual switch or bridge. In some embodiments, L2 agent 436 may configure two software bridges: an integration bridge br-int and a tunnel bridge br-tun. The integration bridge may be used to tag and un-tag traffic to or from the VMs. Traffic may be tagged with a local virtual local area network VLAN identifier ID to assign the traffic to a VLAN. The tunnel bridge may be used to translate local VLAN IDs into segmentation IDs used by encapsulating tunnels, such as generic routing encapsulation GRE tunnels.
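The tag translation performed between the two bridges can be sketched as a table lookup: a frame leaving a VM carries a local VLAN ID, and the tunnel bridge swaps it for a tunnel segmentation ID on egress. The mapping table, frame layout, and function name below are assumptions for illustration, not the OVS or L2-agent API.

```python
# Hypothetical local-VLAN-ID -> tunnel-segmentation-ID table that an
# L2 agent might program into the tunnel bridge (br-tun).
local_vlan_to_segmentation = {101: 5001, 102: 5002}

def to_tunnel(frame: dict) -> dict:
    """Swap the local VLAN tag for the tunnel segmentation ID on egress,
    e.g., before GRE encapsulation."""
    seg = local_vlan_to_segmentation[frame["vlan"]]
    return {"payload": frame["payload"], "segmentation_id": seg}
```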
Controller domain 0 428 may be coupled with a plurality of VNF applications that are to provide respective VNFs. As shown, host device 400 may include VNF application 440 and VNF application 444.
Each VNF application may include one or more VNFCs to provide discrete sub-functions associated with the VNF. In particular, VNF applications 440 may include VNFC 448, VNFC 452, and VNFC 456. The VNF applications 444 may include VNFC 460, VNFC 464, and VNFC 468.
Each VNFC may be associated with a respective VM. As shown, VNFC 448 may be associated with VM 472, VNFC 452 may be associated with VM 474, VNFC 456 may be associated with VM 476, VNFC 460 may be associated with VM 478, VNFC 464 may be associated with VM 480, and VNFC 468 may be associated with VM 482.
Although fig. 4 shows each VNFC being associated with a respective VM, in some embodiments, each VNFC may additionally/alternatively be associated with a virtual container.
Each VNF application may also run a local monitor thereon. As shown, the VNF application 440 may include a local monitor 484 and the VNF application 444 may include a local monitor 486. The local monitor 484 may be associated with VM 488 and the local monitor 486 may be associated with VM 490.
The local monitors may measure the performance of the VMs that are interconnected to implement the functionality of a VNF. Each local monitor may measure metrics related to various system parameters and report these metrics back, in a measurement report, to a global monitor (e.g., global monitor 118 running at VNFM 116) or monitor module (e.g., monitor module 216 of NFVO 112).
In some embodiments, the local monitor may be responsible for monitoring the values of various system parameters of the VM over a sampling interval S. These system parameters (which may also be referred to as performance metrics) may include, but are not limited to, computational utilization, memory utilization, network bandwidth, queue time, service time, and response time. In various embodiments, the local monitor may generate a measurement report after one or more sampling intervals that is to be sent to the global monitor.
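The sampling-and-reporting behavior above can be sketched as averaging per-VM samples collected over a sampling interval S into one measurement report. The metric names and report format are illustrative assumptions; real reports would flow over the Ve-Vnfm reference points rather than be returned from a function.

```python
import statistics

# Example system parameters a local monitor might sample per VM.
METRICS = ("cpu_util", "mem_util", "net_bw",
           "queue_time", "service_time", "response_time")

def measurement_report(samples: list) -> dict:
    """Aggregate the samples gathered over one or more sampling
    intervals into a measurement report (here: per-metric mean)."""
    return {m: statistics.mean(s[m] for s in samples) for m in METRICS}
```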
In some embodiments, the local monitor may additionally/alternatively be responsible for running decision algorithms for lifecycle management of the individual VMs supporting a given VNF. The decision algorithms may facilitate deciding whether to instantiate a new VM (e.g., scale out), shut down a VM (e.g., scale in), or scale a given VM up/down in terms of vCPUs, vMemory, vLinks, etc.
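A minimal sketch of such a decision algorithm is a threshold comparison on a monitored parameter: above an overuse threshold, scale out; below an underuse threshold, scale in. The thresholds, parameter choice, and return labels are invented for illustration and are not the patent's specific algorithm.

```python
def lifecycle_decision(cpu_util: float,
                       over: float = 0.8,
                       under: float = 0.2) -> str:
    """Map one sampled CPU-utilization value to a lifecycle action.
    'over'/'under' stand in for thresholds (initial values) that would
    come from profiling, as described later in the text."""
    if cpu_util > over:
        return "scale-out"   # instantiate a new VM
    if cpu_util < under:
        return "scale-in"    # shut down a VM
    return "no-op"
```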
In some embodiments, the local monitor may verify resource availability with NFVO 112, directly or through VNFM 116, whenever new resources need to be allocated to the VNF.
Various embodiments describe measuring the workload characteristics and system parameters (e.g., CPU cycles, network bandwidth, page swap rate, etc.) of each VDU for a given deployment characteristic of a VNF, and triggering an alarm or action based on the obtained measurements. In some embodiments, profiling may be used to provide an initial configuration map of the object under study. The system parameters may be profiled by sampling their state at timed intervals. These samples may serve as the basis for establishing thresholds for the system parameters. The thresholds may also be referred to as "initial values."
In some embodiments, the thresholds of the system parameters expected to support a particular deployment feature of the VNF/VNFC may be obtained by sandboxed analysis (sandbox profiling). Sandboxed analysis may capture system parameter measurements of the VNF/VNFC when operating as an independent entity. For example, consider the network attachment sub-function of the MME performed by VNFC 448. The computation, network, and storage/memory resources required for a single user equipment (UE) connection may be determined by sandboxed analysis as {C1, N1, M1}, respectively. The total system resources needed by VNFC 448 for "X" UE connections may then be given by X · {C1, N1, M1}.
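The scaling step above can be sketched as follows. This is an illustrative Python sketch, not from the source: the function and resource names are hypothetical, and it assumes (as the text implies) that resource needs scale linearly with the number of UE connections.

```python
# Hypothetical sketch: scale sandbox-profiled per-UE-connection resource
# needs {C1, N1, M1} to the total needed for X UE connections.

def total_resources(per_ue, x):
    """Multiply each profiled per-UE resource figure by the connection count."""
    return {name: value * x for name, value in per_ue.items()}

# Per-UE figures as might be obtained from sandbox profiling (illustrative).
per_ue = {"compute": 2.0, "network": 0.5, "memory": 1.5}  # C1, N1, M1
print(total_resources(per_ue, 1000))
```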
In a similar manner, VNFC system parameters of all VNFCs performing the sub-functions of the VNF may be obtained and included in the computing resources, network resources, and storage/memory resource information elements of the respective NSD. For example, referring again to fig. 3, if VNFD 308 is used to describe a VNF provided by VNF application 440, VDU 336 may be used to describe the deployment and operational behavior requirements of VNFC 448. Accordingly, the network attachment sub-functions performed by VNFC 448 may have thresholds for system parameters defined in computing resources 364, network resources 368, and storage/memory resources 372.
The threshold value determined from the sandboxed analysis of the VNF may be included in the corresponding VNF descriptor. Once the corresponding VNF is instantiated by the VNFM 116, the VNFM 116 may provide the threshold to the local monitor.
In some embodiments, for each VNFC, thresholds for queue time, service time, and response time may be determined in addition to the thresholds for the {C, N, M} system parameters. Thus, in some embodiments, sandboxed analysis may provide five system parameters to monitor for each VNFC. In some embodiments, the monitoring of these system parameters may be used as a set of inputs to a performance decision algorithm.
The first set of five system parameters obtained by sandboxed analysis during deployment of the VNFC may be represented by ini_qt (initial value of queue time), ini_st (initial value of service time), ini_cc (initial value of calculation period), ini_nb (initial value of network bandwidth), and ini_vu (initial value of virtual memory). These initial values represent the thresholds of the parameters for a given deployment trait. For example, in one embodiment, the MME VNF may have a deployment trait for the number of supported UE connections set to 1000 calls per second.
In some embodiments, analysis engine 220 may receive the threshold values of the system parameters when a VM to be monitored is instantiated as part of a network service.
In some embodiments, the local monitor may calculate real-time values for queue time and service time as part of its monitoring responsibility. The calculation of queue time and service time parameters by the local monitor can be described as follows.
Let "S" denote the time interval at which the monitored parameters are sampled. For example, the incoming requests to a VDU may be sampled every second (or every minute, every 10 minutes, every 10 hours, etc.). The average queue length Q_L of the VDU for a given interval may be given by:

Q_L = [0, (number of messages to be processed / average size of each message)].

The corresponding average queue time Q_T may be given by:

Q_T = Q_L / t,

where t is the inter-arrival time of messages during the sampling interval.
The average total response time R may be given by:

R = Σ_{Req_t ∈ S} (Resp_t − Req_t) / (number of requests in S),

where Req_t is the arrival time of a request and Resp_t is the transmission time of the corresponding response.
S_T may represent the average service time, i.e., the time it takes the VDU to service a request. The service time accounts for the processing time of the request at the VM, excluding network latency. It may be measured from the arrival of the last fragment of the request to the departure of the first fragment of the response, and may be given by:

S_T = R − Q_T.
the average queue time, average total response time and average service time may be calculated by the local monitor for each time interval S to determine a new value.
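The per-interval calculations above can be sketched in Python as follows. The function names and sample timestamps are illustrative assumptions, not from the source; the formulas follow Q_T = Q_L / t, R = Σ(Resp_t − Req_t) / (number of requests), and S_T = R − Q_T as given.

```python
# Hypothetical sketch of the local monitor's per-interval metric
# calculations, using (request arrival, response transmission) timestamp
# pairs collected over one sampling interval S.

def average_queue_time(queue_length, inter_arrival_time):
    """Q_T = Q_L / t, with t the message inter-arrival time in the interval."""
    return queue_length / inter_arrival_time

def average_response_time(requests):
    """R = sum(Resp_t - Req_t) / number of requests in the interval."""
    return sum(resp - req for req, resp in requests) / len(requests)

def average_service_time(response_time, queue_time):
    """S_T = R - Q_T: processing time excluding queueing (and network latency)."""
    return response_time - queue_time

# Example: three (request arrival, response transmission) timestamp pairs.
samples = [(0.0, 0.30), (1.0, 1.25), (2.0, 2.35)]
r = average_response_time(samples)    # ~0.30
q_t = average_queue_time(2.0, 20.0)   # ~0.10
s_t = average_service_time(r, q_t)    # ~0.20
```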
The parameters used in the performance decision algorithm may be described as follows.
When the VNFC is started, the system parameters may be measured by the local monitor over a given sampling interval S to determine real-time values, or "new values," of the system parameters. The new values may be represented by new_qt (new value of queue time), new_st (new value of service time), new_cc (new value of calculation period), new_nb (new value of network bandwidth), and new_vu (new value of virtual memory).
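As a hedged illustration, the profiled thresholds ("initial values") and the measured "new values" might each be grouped into a single record like the following; the class and field names are hypothetical, not from the source.

```python
# Hypothetical sketch: one record type for the five monitored parameters,
# instantiated once for the ini_* thresholds and once per sample for new_*.
from dataclasses import dataclass

@dataclass
class SystemParams:
    qt: float  # queue time
    st: float  # service time
    cc: float  # calculation period (compute cycles)
    nb: float  # network bandwidth
    vu: float  # virtual memory utilization

ini = SystemParams(qt=0.10, st=0.20, cc=2.0e9, nb=100.0, vu=0.6)  # thresholds
new = SystemParams(qt=0.12, st=0.25, cc=2.4e9, nb=90.0, vu=0.7)   # measured
```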
FIG. 5 illustrates a process flow 500 for reactive monitoring in an NFV management plane, according to some embodiments. At 504, process flow 500 may include monitor module 216 of NFVO 112 sending a request to global monitor 118 of VNFM 116 to instantiate a VNF to provide the functionality of a network service instance. Although 504 shows monitor module 216 sending the request to global monitor 118, in other embodiments, other portions of NFVO 112 may send requests to other portions of VNFM 116.
The VNF may be instantiated by the VNFM 116, which provides instructions to one or more host devices (e.g., host device 400) to execute an associated VNF application, such as VNF application 440.
Global monitor 118 may verify that all VNFs to be instantiated are on-boarded in VNF catalog 124. If this is verified, the global monitor 118 may cause the VNF to be instantiated at 508. In some embodiments, global monitor 118 may cause the VNF to be instantiated by signaling through the Ve-Vnfm-VNF reference point. If global monitor 118 cannot verify that all VNFs to be instantiated are on-boarded in VNF catalog 124, it may send an error report to monitor module 216.
At 512, process flow 500 may also include global monitor 118 transmitting the threshold value of the system parameter to be monitored to a local monitor of the VNF application. For example, global monitor 118 may send the threshold to local monitor 484 of VNF application 440.
At 516, process flow 500 may also include the local monitor obtaining new values of system parameters from the VM of the VNFC that is executing the given VNF application. For example, local monitor 484 may obtain new values of system parameters from VM 472, VM 474, and VM 476. The local monitor 484 may obtain these new values in real time while the VNFC is executing the corresponding sub-functions of the VNF. In some embodiments, the local monitor 484 may obtain a new value every time interval.
At 520, the process flow 500 may also include the local monitor 484 participating in performance monitoring and management. In some embodiments, performance monitoring and management may be provided by implementing performance monitoring and management operational flow/algorithm structures based on the threshold values received from global monitor 118 at 512 and the new values obtained from VMs 472, 474, and 476 at 516. The performance monitoring and management operational flow/algorithm structures, described in further detail in connection with figs. 6-9, may identify an overuse or underuse of resources associated with the VNFC, and may also determine VM management actions that will at least partially address the identified overuse or underuse. In various embodiments, VM management actions may include scaling a VM up/down, instantiating a new VM, shutting down an executing VM, and so on.
While embodiments describe a local monitor (e.g., local monitor 484) that performs performance monitoring and management at 520, other embodiments may include other components that perform some or all of the operations associated with performance monitoring and management. For example, in some embodiments, the local monitor 484 may perform monitoring and report the monitored values to the VNFM 116. The VNFM 116 may then participate in the performance monitoring and management operational flow/algorithm structures of figs. 6-9 to determine whether VM management actions should be taken.
At 524, process flow 500 may also include local monitor 484 sending a request for VM management actions to be taken. In some embodiments, the request may be sent to the global monitor 118 in the VNFM 116.
The global monitor 118 may check the availability of resources for VM management actions to be taken on the VNF by sending a resource availability request to the monitor module 216 of the NFVO 112 at 528.
At 532, process flow 500 may also include monitor module 216 checking the NS record to ensure that the requested resources are available for reservation by the NS instance. An NS record may be created when an NS is instantiated according to the NSD. The NS record may include an indication of the maximum resources that may be allocated to a particular instance of the NS. At 532, monitor module 216 may confirm that the currently allocated resources plus the requested resources remain at or below the indicated maximum.
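The availability check at 532 can be sketched as follows; this is a minimal illustration with hypothetical names, assuming resources are tracked as simple per-type quantities against the maximum recorded in the NS record.

```python
# Hypothetical sketch of the check at 532: currently allocated resources
# plus requested resources must stay at or below the NS record's maximum.

def resources_available(allocated, requested, ns_record_max):
    return all(allocated[k] + requested[k] <= ns_record_max[k]
               for k in ns_record_max)

print(resources_available({"vcpu": 8}, {"vcpu": 2}, {"vcpu": 16}))  # True
```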
If monitor module 216 determines that the requested resource is available, process flow 500 may include monitor module 216 sending a notification of the resource availability to global monitor 118 at 536.
At 540, process flow 500 may also include global monitor 118 sending a request to VIM 120 to allocate resources based on the requested action. VIM 120 may allocate infrastructure resources for completing the requested action. At 544, VIM 120 may send a notification to global monitor 118 that the requested resource has been allocated.
At 548, global monitor 118 may forward a notification to local monitor 484 that the requested action is complete.
In some embodiments, the local monitor 484 may repeatedly obtain new values at each time interval S and run the performance algorithm continuously using the threshold or updated version of the threshold provided at 512.
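The repeating measure-decide-act cycle described above might be sketched as follows; the function names and callback structure are illustrative assumptions, not from the source.

```python
# Hypothetical sketch of the local monitor's loop: at each sampling
# interval S, obtain new values, run the decision algorithm against the
# thresholds, and request a VM management action when one is indicated.
import time

def monitor_loop(sample, decide, request_action, interval_s=1.0, iterations=3):
    """sample() -> new parameter values; decide(values) -> action or None."""
    for _ in range(iterations):
        values = sample()
        action = decide(values)
        if action is not None:
            request_action(action)  # e.g., forwarded to the global monitor
        time.sleep(interval_s)
```

A caller would supply `decide` as the performance decision algorithm and `request_action` as the hand-off toward the global monitor.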
Fig. 6 illustrates a first level 600 of a performance monitoring and management operational flow/algorithm structure that may be provided by the host device 400, in accordance with some embodiments. At 604, the first tier 600 may include instantiating a VM 472 to run a VNFC 448 and a VM 488 to run a local monitor 484. VMs 472 and 488 may be instantiated by execution of program code associated with VNF application 440 by computing circuitry 408 of platform hardware 404. Execution of the program code of the VNF application 440 may be initiated based on instructions from the VNFM 116. VMs 474 and 476 may also be instantiated and monitored in a similar manner to VM 472. However, for simplicity, the description focuses on the monitoring and management of VM 472/VNFC 448.
At 608, the first hierarchy 600 may include obtaining initial and new values of system parameters. Initial values for the system parameters may be obtained from the global monitor 118, such as described above with respect to 512 of fig. 5. The new values of the system parameters may be obtained from the VM 472 in real-time as the VM 472 runs the VNFC 448. In some embodiments, a new value may be obtained from VM 472 at each sampling interval.
At 612, the first hierarchy 600 may include determining whether all new system parameters are equal to the initial system parameters. If so, local monitor 484 may determine that an appropriate amount of resources is dedicated to VM 472 to allow VNFC 448 to efficiently perform its associated sub-function.

If the local monitor 484 determines at 612 that not all new system parameters are equal to the initial system parameters, the first tier 600 may include the local monitor 484 determining, at 616, whether any new system parameter is greater than its corresponding initial system parameter.

If it is determined at 616 that a new system parameter is greater than its initial value, at 620, the local monitor 484 may identify an overuse of resources associated with the VNFC 448.

If it is determined at 616 that no new system parameter is greater than its initial value, at 624, the local monitor 484 may identify an underuse of resources associated with the VNFC 448.
Fig. 7 illustrates an overuse process 700 of performance monitoring and management operational flow/algorithm structures that may be provided by a host device 400, in accordance with some embodiments. The overuse process 700 may begin at 620 with the local monitor 484 identifying overuse of resources associated with the VNFC 448, as described with respect to fig. 6.
After identifying the overuse, at 704, the overuse process 700 may include the local monitor 484 detecting whether the initial value of the queue time parameter (init_qt) is greater than the new value of the queue time parameter (new_qt) and whether the initial value of the service time parameter (init_st) is less than the new value of the service time parameter (new_st).
Upon detecting that the condition at 704 is met, the local monitor may determine, at 708, whether the initial value of the calculation cycle parameter (ini_cc) is less than the new value of the calculation cycle parameter (new_cc). If it is determined at 708 that the initial value of the calculation cycle parameter is less than the new value, the local monitor 484 may perform an overuse management action at 712 to add a virtual central processing unit (vCPU) to the VDU. Performing the overuse management action may include, for example, the local monitor 484 sending a request to the VNFM 116 to add the vCPU.
Thus, upon detecting a first set of conditions including meeting the conditions at both 704 and 708, the local monitor 484 can perform an overuse management action to add a vCPU to the VDU.
If the local monitor 484 determines that the condition at 704 is not met, e.g., because the new value of the queue time is greater than the initial value of the queue time or the initial value of the service time is greater than the new value of the service time, the overuse process 700 may proceed to 716.
At 716, the local monitor 484 may determine whether the initial value of the queue time is less than the new value of the queue time and whether the initial value of the service time is greater than the new value of the service time. If the condition at 716 is met, the local monitor 484 may perform an overuse management action to instantiate a new VDU at 720. For example, in this embodiment, the local monitor 484 may determine that the first VDU (corresponding to VNFC 448 and VM 472) is over-utilized and thus may instantiate a new VDU, e.g., VNFC 452 and VM 474. The VNFC 452 may perform the same sub-functions as the VNFC 448. Incoming tasks associated with subfunctions may be distributed between VNFC 452 and VNFC 448. In various embodiments, the newly instantiated VDU may be in another VNF application, on another host platform, etc.
If it is determined at 708 that the initial value of the calculation period is not less than the new value of the calculation period, the local monitor 484 may determine at 724 whether the initial value of the network bandwidth is less than the new value of the network bandwidth. The overuse process 700 may also make 724 a determination after the actions performed with respect to block 712.
If the local monitor 484 determines at 724 that the initial value of the network bandwidth is less than the new value of the network bandwidth, the local monitor 484 may perform an overuse management action to increase the throughput of the virtual link by a particular value x at 728. In some embodiments, the throughput of the virtual link may be increased by adding an additional virtual network interface controller (vNIC) to the VM on which the VNF is instantiated.
After 728, or if it is determined that the initial value of the network bandwidth is not less than the new value of the network bandwidth, the overuse process may proceed to 732, where the local monitor 484 may determine whether the initial value of the virtual memory is less than the new value of the virtual memory. If so, the local monitor 484 may perform an overuse management action to add a certain number x of memory blocks to the VDU at 736. In some embodiments, the x memory blocks (e.g., random access memory (RAM)) may be hot-added to VM 472 by hypervisor 420/OS 424 without having to reboot host device 400.
If it is determined at 732 that the initial value of the virtual memory is not less than the new value of the virtual memory, the overuse process may proceed to the determination of block 716.
If it is determined at block 716 that the initial value of the queue time is not less than the new value of the queue time or the initial value of the service time is not greater than the new value of the service time, the overuse process 700 may end at 740.
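A simplified, hedged rendering of the overuse branch follows. It linearizes the branch ordering of fig. 7 (checks 704/708/724/732, then 716) into one pass; the function name, dict keys, and action strings are hypothetical, not from the source.

```python
# Hypothetical sketch of the overuse process (Fig. 7). ini/new hold the
# ini_*/new_* parameter values described in the text.

def overuse_actions(ini, new):
    actions = []
    if ini["qt"] > new["qt"] and ini["st"] < new["st"]:          # 704
        if ini["cc"] < new["cc"]:                                # 708
            actions.append("add vCPU to VDU")                    # 712
        if ini["nb"] < new["nb"]:                                # 724
            actions.append("increase virtual-link throughput")   # 728
        if ini["vu"] < new["vu"]:                                # 732
            actions.append("hot-add memory blocks to VDU")       # 736
            return actions
    if ini["qt"] < new["qt"] and ini["st"] > new["st"]:          # 716
        actions.append("instantiate new VDU")                    # 720
    return actions                                               # 740: end
```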
FIG. 8 illustrates an underutilized process 800 of performance monitoring and management operational flow/algorithm structures that may be provided by a host device 400, according to some embodiments. The underuse process 800 may begin at 624 with the local monitor 484 identifying an underuse of resources associated with the VNFC 448, as described with respect to fig. 6.
After identifying the underuse at 624, the underuse process 800 may include, at 804, the local monitor 484 determining whether an initial value of the queue time is greater than a new value of the queue time and whether an initial value of the service time is greater than a new value of the service time.
If at least one condition at 804 is not met, underuse process 800 may proceed to 824.
If both conditions at 804 are met, the local monitor 484 may determine whether the new value of the queue time is less than a predetermined minimum threshold value of the queue time (Thresh (Min) _qt) and whether the new value of the service time is less than a predetermined minimum threshold value of the service time (Thresh (Min) _st) at 808.
If both conditions at 808 are met, the local monitor 484 may perform an underuse management action to turn off the VDU at 812. The VDU may be turned off by shutting down VM 472 and freeing any resources allocated to it.
If at least one condition at 808 is not met, underuse process 800 may proceed to sub-process D at 900. According to some embodiments, a sub-process D is shown in fig. 9.
At 904, the local monitor 484 can determine whether another VDU hosts the same sub-function as the target VDU. If not, at 916, sub-process D may return to the calling process, e.g., underuse process 800.
If it is determined at 904 that another VDU hosts the same sub-function as the target VDU, the local monitor 484 may obtain the monitoring parameters of the other VDU (which may be referred to as the selected VDU (VDU_N)) at 908 and then return to the calling process at 912. In some embodiments, the local monitor 484 may obtain the monitoring parameters of the selected VDU from the global monitor 118 or directly from a local monitor of another VNF (on the host device or on another host device).
Referring again to fig. 8, after sub-process D at 900, the local monitor 484 may determine at 816 whether the initial value of the queue time is greater than the new value of the queue time plus the new value of the queue time of the selected VDU (New_QT_N), if available, and whether the initial value of the service time is greater than the new value of the service time plus the new value of the service time of the selected VDU (New_ST_N), if available.
If both conditions are met at 816, the local monitor can perform an underuse action to migrate the sub-functionality provided by the VDU to the selected VDU and turn off the VDU at 820. In some embodiments, the sub-functionality provided by the VDU may be migrated to the selected VDU by updating a routing table or other calling routine associated with the VNF.
If at least one condition is not met at 816, the local monitor 484 may determine whether the new value of virtual memory is less than a predetermined minimum threshold of virtual memory at 824. If so, at 828, the local monitor 484 may perform an underuse action to remove a certain number "x" of memory blocks from the VDU. After 828, the underuse process may proceed to 832.
If it is determined at 824 that the new value of virtual memory is not less than the predetermined minimum threshold of virtual memory, underuse process 800 may proceed to 832.
At 832, the local monitor 484 may determine whether the new value of the calculation period is less than a predetermined minimum threshold of the calculation period. If so, at 836, the local monitor may perform an underuse action to remove a certain number "x" of virtual CPUs from the VDU. After 836, the underuse process may proceed to 840.
If the condition at 832 is not satisfied, the underuse process may proceed to 840.
At 840, the local monitor 484 may determine whether the new value of the network bandwidth is less than a predetermined minimum threshold of the network bandwidth. If so, at 844, the local monitor 484 may perform an underuse action to reduce the throughput of the virtual link associated with the VDU by a particular value "x". After 844, the underuse process may proceed to 848.
If the condition at 840 is not met, the underuse process may end at 848.
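A simplified, hedged rendering of the underuse branch (figs. 8-9) follows. Names, dict keys, and action strings are hypothetical; `peer` stands for the selected VDU's parameters from sub-process D (or `None` if no peer hosts the same sub-function), and treating 836 as removing vCPUs is an interpretation based on the computation check at 832.

```python
# Hypothetical sketch of the underuse process (Figs. 8-9). mins holds the
# predetermined minimum thresholds; peer holds the selected VDU's new
# values (qt, st), if another VDU hosts the same sub-function.

def underuse_actions(ini, new, mins, peer=None):
    if ini["qt"] > new["qt"] and ini["st"] > new["st"]:           # 804
        if new["qt"] < mins["qt"] and new["st"] < mins["st"]:     # 808
            return ["shut down VDU"]                              # 812
        if peer is not None and ini["qt"] > new["qt"] + peer["qt"] \
                and ini["st"] > new["st"] + peer["st"]:           # 816
            return ["migrate sub-function to selected VDU",
                    "shut down VDU"]                              # 820
    actions = []
    if new["vu"] < mins["vu"]:                                    # 824
        actions.append("remove memory blocks from VDU")           # 828
    if new["cc"] < mins["cc"]:                                    # 832
        actions.append("remove vCPUs from VDU")                   # 836
    if new["nb"] < mins["nb"]:                                    # 840
        actions.append("reduce virtual-link throughput")          # 844
    return actions                                                # 848: end
```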
In some embodiments, some networks may rely on physical network infrastructure alone, NFV infrastructure alone, or a combination of physical network infrastructure and NFV infrastructure.
In a non-coexisting deployment model, deployment of network services may be based on the metropolitan area to be serviced and the capacity requirements of the metropolitan area. Based on the capacity requirements and traffic changes of an area, the infrastructure required to support the area can be determined. For example, in some cases, a fully vertically integrated model (similar to the models that currently exist in telecommunications networks) may be determined as the appropriate network model. This may not have an impact on the 3GPP management architecture defined for the telecommunications network.
Some embodiments may include coexistence deployment models in which network service orchestration is based on full end-to-end infrastructure information, e.g., vertically integrated PNFs and virtualized infrastructure-based VNFs. In these embodiments, the orchestrator may be responsible for deployment and lifecycle management of network services across both physical and virtual network functions.
Fig. 10 illustrates a network 1000 using a coexistence deployment model according to some embodiments. The network 1000 may include an OSS/BSS 1004 for providing advanced operations and business service management similar to that described above with respect to the OSS/BSS 136.
On the physical deployment side, network 1000 may include various layers that provide FCAPS operation. In some embodiments, the physical deployment side may include a network manager 1008 coupled to one or more Element Managers (EM), such as EM 1012 and EM 1016. The EM 1012 and 1016 may in turn be coupled with a physical network infrastructure 1020. In particular, EM 1012 may be coupled with network elements NE 1024 and 1028, and EM 1016 may be coupled with NE 1032. NEs 1024, 1028, and 1032 may each be discrete telecommunications entities that may be managed through a particular interface (e.g., a radio network controller RNC interface). NEs 1024, 1028, and 1032 may each provide a corresponding PNF.
In some embodiments, network Manager (NM) 1008 may handle network configuration (e.g., configuring network routing tables), testing, and traffic analysis primarily. NM 1008 may provide an end user package responsible for managing networks supported, for example, by EM 1012 and 1016.
The EM 1012 and 1016 may be responsible for recording, backing up, and maintaining the hardware and software of the physical network infrastructure 1020. In some cases, the EM 1012 and 1016 may also be responsible for fault handling. The EM 1012 and 1016 may provide an end user package for managing a set of closely related types of network elements. These functions may include element management functions for managing individual network elements, as well as sub-network management functions related to a network model for a set of network elements constituting a sub-network.
NEs 1024, 1028, and 1032 may be physical network infrastructure 1020 embodied in devices specifically structured to perform functions associated with a particular network service.
In some embodiments, EM 1012 and 1016 may monitor the performance of the PNFs provided by NEs 1024, 1028, and 1032 by tracking counters of performance measurements associated with the PNFs. Performance measurements may relate to functions provided by an associated PNF and may include, but are not limited to: device measurements, mobility management (MM) measurements, general packet radio service tunneling protocol (GTP) measurements, internet protocol (IP) management measurements, inter-radio access technology handover (IRATH) measurements, quality of service measurements, security measurements, session management (SM) measurements, subscriber management measurements, etc.
MM measurements may include measurements related to MME procedures such as, but not limited to: evolved packet system (EPS) attach procedures (e.g., attempted, successful, and failed attach procedures), UE-initiated EPS detach procedures (e.g., attempted and successful), MME-initiated EPS detach procedures (e.g., attempted and successful), HSS-initiated EPS detach procedures (e.g., attempted and successful), tracking area update procedures with or without serving gateway change (e.g., attempted, successful, and failed), EPS paging procedures (e.g., attempted, successful, and failed), measurements related to MME control of overload in the EPC (e.g., attempted overload start/stop procedures), EPS mobility management (EMM)-REGISTERED subscribers (e.g., average/maximum number of subscribers), handovers (e.g., attempted and successful incoming/outgoing inter-radio access technology (RAT) handovers), routing area updates involving MME interaction with or without S-GW change (attempted and successful), and combined tracking/location area update procedures (e.g., attempted, successful, and failed).
MM measurements may also include measurements related to the PDN-GW for the GTP-based S5/S8 interface, such as, but not limited to: PDN-GW-initiated dedicated bearer creation (e.g., attempted, successful, and failed); PDN-GW-initiated dedicated bearer deletion (e.g., attempted, successful, and failed); PDN-GW-initiated dedicated bearer modification with or without QoS update procedures (e.g., attempted, successful, and failed); active EPS bearer related measurements of the EPC (e.g., average/maximum number of active EPS bearers); UE-requested bearer resource modification related measurements of the EPC (e.g., attempted, successful, and failed); PDN connectivity related measurements of the EPC (e.g., average/maximum number of PDN connections per access point name (APN)); and EPS bearer counts (e.g., average/maximum number of EPS bearers).
SM measurements may include measurements related to MME procedures such as, but not limited to: average/maximum number of dedicated EPS bearers in active mode; dedicated bearer setup time; MME-initiated dedicated bearer activation (e.g., attempted, successful, and failed); MME-initiated dedicated bearer deactivation (e.g., attempted, successful, and failed); MME-initiated EPS bearer modification (e.g., attempted, successful, and failed); and total EPS service requests (e.g., attempted, successful, and failed). SM measurements may also include measurements related to S-GW procedures, for example: S4/S11 interface measurements, including EPS default/dedicated bearer creation related measurements (e.g., attempted and successful); S5/S8 interface measurements, including, for example, EPS default/dedicated bearer creation (e.g., attempted and successful) and EPS default/dedicated bearer modification (e.g., attempted and successful); EPS bearer deletion related measurements (e.g., attempted, successful, and failed); and bearer resource usage related measurements (e.g., maximum/average number of active EPS bearers). SM measurements may also include measurements related to multimedia broadcast/multicast service (MBMS) GW procedures, e.g., MBMS session creation related measurements (e.g., attempted, successful, and failed). SM measurements may also include measurements related to PCRF procedures such as, but not limited to, gateway control session establishment related measurements (e.g., attempted, successful, and failed gateway control session establishments).
Subscriber management measurements may include measurements related to other MME procedures, such as, but not limited to: an attempted insert subscriber data request received from the HSS; an attempted delete subscriber data request received from the HSS; number of subscribers in ECM-IDLE state; and the number of subscribers in the ECM-CONNECTED state.
IP management measurements may include measurements related to other MME procedures, such as, but not limited to, S1-MME data volume related measurements, including, for example: the number of incoming IP data packets on the S1-MME interface from eNB to MME, the number of outgoing IP data packets on the S1-MME interface from MME to eNB, the number of octets of incoming IP data packets on the S1-MME interface from eNB to MME, and the number of octets of outgoing IP data packets on the S1-MME interface from MME to eNB.
IP management measurements may also include measurements related to the PDN GW for GTP-based S5/S8 interface procedures, such as, but not limited to, SGi-related measurements (e.g., SGi incoming/outgoing link usage). The IP management measurements may also include measurements related to PCRF procedures, such as IP connectivity access network (IP-CAN) session establishment/modification related measurements (e.g., attempted, successful, and failed) and IP-CAN session termination related measurements (e.g., attempted and successful IP-CAN session terminations).
The device measurements may include, for example, MME processor usage (e.g., average/peak processor usage).
IRATH measurements may include, for example, S6a related measurements such as, but not limited to: location related measurements (e.g., attempted, successful, and failed) and authentication related measurements (e.g., attempted, successful, and failed) are updated.
GTP measurements may include measurements related to S-GW procedures such as, but not limited to, GTP S5/S8, S4, S12, and S1-U interface measurements. GTP S5/S8 interface measurements may include, but are not limited to: the number of outgoing/incoming GTP data packets on the S5/S8 interface, the number of octets of outgoing/incoming GTP data packets on the S5/S8 interface, the number of outgoing/incoming GTP signaling packets on the S5/S8 interface, and the number of octets of outgoing/incoming GTP signaling packets on the S5/S8 interface. GTP S4 interface measurements may include data volume related measurements such as, but not limited to, the number of octets of outgoing/incoming GTP packets on the S4 interface. GTP S12 interface measurements may include data volume related measurements such as, but not limited to, the number of octets of outgoing/incoming GTP data packets on the S12 interface. The S1-U interface measurements may include data volume related measurements such as, but not limited to: the number of outgoing/incoming GTP data packets on the S1-U interface and the number of octets of outgoing/incoming GTP data packets on the S1-U interface. GTP measurements may also include measurements related to MBMS GW procedures, e.g., M1 data volume related measurements (e.g., number of octets of outgoing/incoming GTP data packets on the M1 interface).
QoS measurements may include measurements related to PCRF procedures such as, but not limited to, measurements related to authorization of QoS resources (e.g., attempted/successful resource authorization procedures at session establishment/modification and failed resource authorization procedures).
Subscriber management measurements may include measurements related to PCRF procedures, such as, but not limited to, trust re-authorization procedure related measurements (attempted, successful, and failed).
EMs 1012 and 1016 may report the values of the performance measurements to NM 1008. The NM 1008 may in turn provide reports including the performance measurements to the OSS/BSS 1004. In a non-coexistence model, the values of the performance measurements would not be monitored in real time; instead, they would be recorded in reports that can be evaluated later by the network operator.
On the virtualized deployment side, network 1000 may include NFVO 1036 coupled to VNFM 1040 by an Or-Vnfm reference point, and also coupled to VIM 1044 by an Or-Vi reference point. The VNFM 1040 may be coupled to the VIM 1044 by a Vi-Vnfm reference point. Both VNFM 1040 and VIM 1044 may be coupled with NFVI 1048, which NFVI 1048 includes platform hardware 1052 (e.g., computing, storage/memory, and network resources) to provide virtualization layer 1056 to implement VNFs, e.g., VNF 1060 and VNF 1064. VNFM 1040 is shown coupled to VNF 1064 by a Ve-Vnfm reference point, and VIM 1044 is shown coupled to platform hardware 1052 by an Nf-Vi reference point. Unless otherwise noted, components on the virtualized deployment side may be similar to components of the same name in figs. 1, 2, and 4, and may operate in a similar manner.
In the coexistence deployment model as shown in fig. 10, network service orchestration may span the complete end-to-end physical and virtualized deployment sides. NFVO 1036 may be responsible for deployment and lifecycle management of network services across both physical and virtual network functions. Thus, with the coexistence deployment model, monitoring of the values of system parameters associated with the PNFs of NEs 1024, 1028, and 1032 may be done in real time or near real time to allow the virtualized deployment side to scale resources in a manner that provides the required and efficient delivery of network services throughout the system. This may be accomplished by providing an interface, configured to convey PNF-related information for NEs 1024, 1028, and 1032, between EMs 1012 and 1016 and VNFM 1040; between NM 1008 and NFVO 1036; or between OSS/BSS 1004 and NFVO 1036. The details of the operation of these embodiments will be described in more detail with reference to figs. 13-15.
Embodiments of the present disclosure may be implemented in a system using any suitable hardware and/or software configured as desired. FIG. 11 schematically illustrates an example computer system 1100 that can be used to implement the various embodiments described herein. For one embodiment, computer system 1100 includes processor circuitry 1104, system memory 1108, non-volatile memory (NVM)/storage 1112, and communication circuitry 1116. As shown, system 1100 may also include interface circuitry 1120 coupled to processor circuitry 1104, system memory 1108, NVM/storage 1112, and communication circuitry 1116.
In some embodiments, system 1100 can be used as NFVO, VNFM, VIM, host device, OSS/BSS, or other device implementing the embodiments described herein. System 1100 may be implemented as a server or device that will operate in conjunction with a server in a 3GPP network. For example, in some embodiments, system 1100 may be a server within a core network (or evolved packet core EPC in system architecture evolution SAE) of a 3GPP network.
Interface circuitry 1120 for one embodiment may include any suitable interface controllers and connectors for interconnecting one or more of processor circuitry 1104, system memory 1108, NVM/storage 1112, and communications circuitry 1116. The interface controller may include, but is not limited to: memory controllers, storage device controllers, such as Redundant Array of Independent Disks (RAID) controllers, baseboard Management Controllers (BMCs), input/output controllers, and the like. Connectors may include, for example, buses (e.g., peripheral component interconnect express (PCIe) buses), ports, slots, jumpers, interconnect modules, and the like.
The processor circuit 1104 may include any type or combination of configurable or non-configurable circuitry designed to perform basic arithmetic, logic, control, or input/output operations specified by instructions of a computer program. The processor circuit 1104 may include one or more single-core or multi-core processors acting as clock-driven, register-based programmable electronic devices that receive digital data as input and process it according to instructions stored in the system memory 1108 and/or the NVM/storage 1112 to provide output to implement the operations described in various portions of this specification. Processor circuit 1104 may be coupled with system memory 1108 and/or NVM/storage 1112 and configured to execute instructions stored therein to implement various applications (e.g., VNF applications 440 and 444), an operating system (e.g., OS 424), a hypervisor (e.g., hypervisor 420) running on system 1100.
The processor circuit 1104 may include any combination of general-purpose and special-purpose processors. In some embodiments, the processor circuit 1104 may include a central processing unit, an application processor, a communication processor, a microprocessor, an ASIC, a Reduced Instruction Set Computer (RISC), a digital signal processor DSP, a coprocessor, a combinational logic circuit, a controller (e.g., memory, bridging, buses, etc.), or the like.
For one embodiment, at least some of the processor circuitry 1104 may be packaged together with logic for one or more controllers of the interface circuitry 1120. For one embodiment, at least one processor of the processor circuitry 1104 may be packaged together with logic for one or more controllers of the interface circuitry 1120 to form a system in package (SiP). For one embodiment, at least one processor of the processor circuitry 1104 may be integrated on the same die with logic for one or more controllers of the interface circuitry 1120 to form a system on a chip (SoC).
The system memory 1108 may be used to load and store data and/or instructions, for example, for the system 1100. The system memory 1108 for one embodiment may include any suitable volatile memory, such as, for example, suitable DRAM or SRAM. In some embodiments, the system memory 1108 may include double data rate type four synchronous dynamic random access memory (DDR 4 SDRAM).
For example, NVM/storage 1112 may be used to store data and/or instructions. NVM/storage 1112 may include any suitable nonvolatile memory (e.g., flash memory) and/or may include any suitable nonvolatile storage device(s), such as one or more hard disk drives HDD, one or more compact disk CD drives, RAID, and/or one or more digital versatile disk DVD drives.
NVM/storage 1112 may include storage/memory resources that are physically part of the device on which system 1100 is installed, or that are accessible by, but not necessarily part of, the device. For example, NVM/storage 1112 may be accessed over a network via communications circuitry 1116.
The communication circuit 1116 may provide an interface for the system 1100 to communicate over one or more networks and/or with any other suitable device. The system 1100 may communicate with one or more components of a network in accordance with any one of one or more network standards and/or protocols. In some embodiments, the communication circuit 1116 may provide signal processing in accordance with an appropriate communication network protocol. For example, the communication circuit 1116 may include an Ethernet controller implementing an Ethernet protocol such as the 10 gigabit Ethernet, 1000BASE-T, 100BASE-TX or 10BASE-T standard. This embodiment is described in more detail with respect to fig. 12.
In embodiments in which the computer system 1100 is used as a host device (e.g., host device 400), the processor circuit 1104 may correspond to the computing circuit 408, the system memory 1108 and the NVM/storage 1112 may correspond to the storage/memory circuit 412, and the communication circuit 1116 may correspond to the network circuit 416.
Fig. 12 illustrates an ethernet controller 1200 according to some embodiments. Ethernet controller 1200 may be implemented in system 1100 to provide a wired connection. For example, ethernet controller 1200 may be implemented within communication circuit 1116. In some embodiments, ethernet controller 1200 may be particularly suited for embodiments that utilize virtualized resources, such as host device 400.
The ethernet controller 1200 may include a host interface 1212 to couple the ethernet controller 1200 to a host platform through, for example, interface circuitry 1120. In some embodiments, host interface 1212 may be a bus interface to couple with a serial expansion bus, such as a PCIe bus. In some embodiments, the host interface 1212 may be a PCIe endpoint with single-root input/output virtualization (SR-IOV) to allow isolation of PCIe resources for manageability and performance reasons. This may allow different virtual machines in a virtual environment to share a single PCIe hardware interface. In other embodiments, the host interface 1212 may be a PCIe endpoint with multiple-root input/output virtualization, which allows the PCIe bus to share resources between different virtual machines on different physical machines.
The ethernet controller 1200 may include a queue management and scheduling (QMS) circuit 1216. QMS circuit 1216 (which may also be referred to as a network or packet scheduler) may employ queuing/scheduling algorithms to control the transmission and reception of packets by ethernet controller 1200. QMS circuit 1216 may manage the sequence of network packets in the transmit and receive queues of ethernet controller 1200. In some embodiments, QMS circuit 1216 may comprise a plurality of different queues, each queue holding the packets of one flow according to configured packet classification rules. For example, packets may be divided into flows by their source and destination IP addresses, quality-of-service requirements, etc.
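The flow-to-queue assignment described above can be sketched as follows; the 5-tuple flow key, the CRC-based hash, and the queue count are illustrative assumptions, not details of any particular controller:

```python
import zlib
from collections import defaultdict

NUM_QUEUES = 8  # assumed queue count; real controllers expose this via configuration

def flow_key(src_ip, dst_ip, proto, src_port, dst_port):
    """Identify a flow by its 5-tuple, per the configured classification rule."""
    return (src_ip, dst_ip, proto, src_port, dst_port)

def queue_for(packet):
    # Hash the flow key so that all packets of one flow land in the same queue.
    key = repr(flow_key(*packet)).encode()
    return zlib.crc32(key) % NUM_QUEUES

queues = defaultdict(list)
pkt_a = ("10.0.0.1", "10.0.0.2", "tcp", 40000, 80)
pkt_b = ("10.0.0.3", "10.0.0.2", "udp", 50000, 53)
queues[queue_for(pkt_a)].append(pkt_a)
queues[queue_for(pkt_a)].append(pkt_a)   # same flow -> same queue
queues[queue_for(pkt_b)].append(pkt_b)
```

The same flow-hashing idea underlies receive-side scaling: the hash can select a processing core rather than (or in addition to) a hardware queue.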
In some embodiments, QMS circuit 1216 may be used by ethernet controller 1200 to perform receive-side scaling to spread incoming packets across the available processing cores of the host platform. QMS circuit 1216 may also provide flow steering functionality, which includes intelligent offloading to place incoming packets directly on the correct core, avoiding the case in which packets are steered to a merely available processing core while another core is running the application that is the target of the packets, e.g., a VNF application.
The ethernet controller 1200 may also include a protocol acceleration/offload (A/O) circuit 1220. Protocol A/O circuit 1220 may offload the processing of a particular protocol, or particular functionality of a protocol, from the host processor. For example, in some embodiments, protocol A/O circuit 1220 may include a transmission control protocol (TCP) offload engine to offload processing of the TCP/IP stack from the host platform to ethernet controller 1200. This may be particularly useful on high-speed network interfaces such as gigabit Ethernet and 10 gigabit Ethernet. The offloaded processing may include actions associated with the connection-oriented nature of TCP such as, but not limited to, transport-layer connection establishment, acknowledgement of received packets, checksum and sequence number calculation, sliding window calculation, and transport-layer connection termination.
Ethernet controller 1200 may also include a traffic classifier 1224. Traffic classifier 1224 may implement a process of classifying traffic into a plurality of traffic categories according to various parameters (e.g., port numbers, protocols, etc.). Each traffic class may be treated differently to distinguish between services provided by ethernet controller 1200.
The ethernet controller 1200 may also include a media access controller 1228 to perform MAC layer operations for the ethernet controller 1200 using, for example, a carrier sense multiple access with collision detection (CSMA/CD) protocol. The media access controller 1228 may include a plurality of full duplex Ethernet MAC ports, which may be configured to operate at different speeds, e.g., 40Gb/s, 10Gb/s, 1Gb/s.
The ethernet controller 1200 may also include a PHY 1232 to perform ethernet PHY layer operations. In some embodiments, PHY 1232 may include an interface that connects directly with a communication medium (e.g., a backplane or a direct-attach twinaxial copper cable assembly) or through an ethernet interface, which may be considered an external PHY in some cases. PHY 1232 may interface between the analog domain of line modulation of the ethernet network and the digital domain of link-layer packet signaling performed by media access controller 1228. In some embodiments, the PHY circuitry may include a multi-rate media attachment unit interface (MAUI) that may be configured for operation at a plurality of different link speeds, e.g., 40 Gb/s, 10 Gb/s, 1 Gb/s, or 100 Mb/s.
The ethernet controller 1200 may also include in-band management circuitry 1236 having a controller or processor to perform various on-chip management functions. In-band management circuitry 1236 may interface with an off-chip management controller using a connection such as the system management bus (SMBus); the network controller sideband interface (NC-SI); or the management component transport protocol (MCTP) communicating over PCIe via, for example, host interface 1212.
In-band management circuitry 1236 may include a baseboard management controller or an embedded management processor unit that handles management tasks to be performed by the ethernet controller but not by other circuitry (e.g., the ethernet controller's device driver). In some embodiments, these tasks may include power-up sequencing, processing admin queue (AQ) commands, initializing ports, participating in various fabric configuration protocols (e.g., data center bridging capability exchange (DCBX) and other link layer discovery protocol (LLDP) exchanges), and processing configuration requests received over the management interface.
Fig. 13 illustrates an example operational flow/algorithm structure of a host device 400 according to some embodiments. In particular, the operational flow/algorithm structure 1300 may be used by a local monitor, such as the local monitor 484 of the host device 400.
At 1304, the operational flow/algorithm structure 1300 may include instantiating the first VM to run the VNFC to perform a sub-function of the network function. For example, hypervisor 420 of host device 400 may instantiate VM 472 to run VNFC 448 as described above.
At 1308, the operational flow/algorithm structure 1300 may further include instantiating the second VM to run a local monitor to monitor performance of the first VM (e.g., VM 472) and determine VM management actions based on the monitored performance. For example, the hypervisor 420 of the host device 400 may instantiate the VM 488 to run the local monitor 484. The local monitor 484 may use the performance monitoring and management operational flow/algorithm structures as described herein to provide monitoring and determination of block 1308.
In some embodiments, monitoring of the performance of the VM may be done over a predetermined sampling interval, and an over-use or under-use of resources associated with VNFC 448 may be identified. In these embodiments, the VM management action may at least partially address the over-use or under-use of resources, such as by instantiating a new VM, shutting down VM 472, or scaling VM 472 up or down.
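The sampling-interval check described above can be sketched as follows; the watermark values and the averaging policy are illustrative assumptions rather than values drawn from any descriptor:

```python
HIGH_WATERMARK = 0.85   # assumed over-use threshold
LOW_WATERMARK = 0.20    # assumed under-use threshold

def classify_usage(samples):
    """Average utilization over one sampling interval -> management decision."""
    avg = sum(samples) / len(samples)
    if avg > HIGH_WATERMARK:
        return "overuse"    # e.g., instantiate a new VM or scale VM 472 up
    if avg < LOW_WATERMARK:
        return "underuse"   # e.g., shut down VM 472 or scale it down
    return "ok"             # no management action needed

decision = classify_usage([0.9, 0.95, 0.88])  # samples from one interval
```

A local monitor along these lines would run inside its own VM (e.g., VM 488) and map the "overuse"/"underuse" decisions onto the VM management actions discussed at block 1308.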
Fig. 14 illustrates an example operational flow/algorithm structure 1400 of a host device 400 in accordance with some embodiments. In particular, the operational flow/algorithm structure 1400 may be used by a local monitor, such as the local monitor 484 of the host device 400.
At 1404, the operational flow/algorithm structure 1400 can include determining a first value of a system parameter corresponding to operation of a VDU providing virtualized network functionality. For purposes of this description, a VDU may correspond to VM 472/VNFC 448. The first value (which may also be referred to as an initial value or threshold value) may be included in a network service descriptor (NSD) and may be obtained, at least initially, from sandbox analysis.
At 1408, the operational flow/algorithm structure 1400 may include receiving a second value of a system parameter. The second value (which may also be referred to as a new value) may represent real-time or near real-time operational statistics of the VDU.
At 1412, the operational flow/algorithm structure 1400 may include comparing the second value to the first value. According to some embodiments, the first value/second value comparison may be similar to the process described above with respect to fig. 6-8.
At 1416, the operational flow/algorithm structure 1400 may include performing underuse or overuse management actions based on the comparison performed at 1412. In some embodiments, the management action may include sending a request to a global monitor (e.g., global monitor 118 of VNFM 116). The request may be a request to perform an overuse management action to add a vCPU to the VDU, increase the throughput of the virtual link, add one or more memory blocks to the VDU, or instantiate another VDU to offload at least some operations of the VDU.
Fig. 15 illustrates an example operational flow/algorithm structure 1500 of a VNFM according to some embodiments. Specifically, for example, operational flow/algorithm structure 1500 may be used by a global monitor of VNFM 1040.
At 1504, the operational flow/algorithm structure 1500 may include receiving a report including performance measurements. The performance measurements may be related to the operations provided by the PNF in the coexistence network model, as shown in fig. 10. The global monitor of VNFM 1040 may receive reports from element managers 1012 and 1016 via the VeEn-VNFM reference point.
The report may include one or more values of the NS identifier, PNF identifier, and performance measure. The signal structure corresponding to the report may include an NS identifier field, a PNF identifier field, and one or more measurement fields for one or more values of the performance measurement. The signal structure may conform to the protocol of the VeEn-Vnfm reference point.
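A minimal sketch of the report structure just described, using illustrative field names rather than the actual VeEn-Vnfm wire format:

```python
from dataclasses import dataclass
from typing import Dict

@dataclass
class PerformanceReport:
    """Illustrative report layout: an NS identifier field, a PNF identifier
    field, and one or more measurement fields (all names are assumptions)."""
    ns_id: str
    pnf_id: str
    measurements: Dict[str, float]

report = PerformanceReport(
    ns_id="ns-001",
    pnf_id="pnf-mme-1",
    measurements={"mme.processor_usage.mean": 0.72},
)
```

The global monitor would deserialize such a structure from each incoming report before applying its lifecycle-management decision logic.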
The global monitor may make decisions based on the performance measurements for lifecycle management of the network functions of the system. Then, at 1508, the operational flow/algorithm structure 1500 may include the global monitor sending one or more requests for management actions. For example, the global monitor may determine whether system resources are over- or under-used based on the value of a performance measurement, and may determine whether to perform a corresponding over-use or under-use management action. Similar to the description above with respect to figs. 6-9, the over-use/under-use management actions may include instantiating a new VNF, removing an existing VNF, or scaling an existing VNF up/down (by increasing/decreasing the resources allocated to the VMs of the VNF). In some embodiments, the transmitted request may include a request from VNFM 1040 to NFVO 1036 to verify the availability of resources to instantiate a new VNF or scale an existing VNF while continuing to support the physical deployment side of network 1000.
Fig. 16 illustrates an example operational flow/algorithm structure 1600 of an EM or NM according to some embodiments. In particular, for example, the operational flow/algorithm structure 1600 may be used by the EM 1016 or NM 1008.
At 1604, the operational flow/algorithm structure 1600 may include receiving and processing an indicator of a performance measurement. For example, the performance measurement may relate to services provided by the PNF of NE 1032. The indicator may be received by NM 1008 in a report from EM 1016 via an Os-Nfvo reference point. Alternatively, the indicator may be received by EM 1016 from the PNF of NE 1032.
At 1608, the operational flow/algorithm structure 1600 may also include updating one or more counters. The counter may be logic or circuitry on NM 1008 or EM 1016 configured to temporarily store information related to performance measurements.
At 1612, the operational flow/algorithm structure 1600 may also include detecting an occurrence of a predetermined reporting event. In some embodiments, NM 1008 or EM 1016 may monitor the counter to determine if a predetermined reporting event has occurred. In various embodiments, the predetermined reporting event may be that at least one counter has a value greater than a threshold reporting value. In other embodiments, the report may be a periodic report and the predetermined reporting event is expiration of a timer. In still other embodiments, the predetermined reporting event may be a request received from another entity (e.g., from VNFM 1040 or NFVO 1036).
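The three kinds of predetermined reporting event described above can be sketched as a single predicate; the threshold and period values are illustrative assumptions:

```python
THRESHOLD = 100     # assumed threshold reporting value for any counter
PERIOD_S = 900.0    # assumed periodic-report interval, in seconds

def reporting_event(counters, last_report_time, now, requested=False):
    """Return True when any predetermined reporting event has occurred:
    an explicit request, a counter above threshold, or timer expiry."""
    if requested:                                    # request from VNFM/NFVO
        return True
    if any(v > THRESHOLD for v in counters.values()):
        return True
    return (now - last_report_time) >= PERIOD_S      # periodic timer expired

fire = reporting_event({"attach.failed": 150}, last_report_time=0.0, now=10.0)
```

On a True result, the EM or NM would assemble and send the report described at 1616 and reset the relevant counters and timer.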
At 1616, the operational flow/algorithm structure 1600 may also include sending a report. In some embodiments, the report may be sent from EM 1016 to VNFM 1040 via the VeEn-Vnfm reference point, or from NM 1008 to NFVO 1036 via the Os-Nfvo reference point.
Fig. 17 illustrates an example operational flow/algorithm structure 1700 of NFVO according to some embodiments. In particular, for example, the operational flow/algorithm structure 1700 may be used by NFVO 1036.
At 1704, the operational flow/algorithm structure 1700 may include receiving a report including performance measurements. For example, performance measurements may relate to services provided by PNF 1032. In some embodiments, the report may be received by NFVO 1036 from NM 1008 through an Os-NFVO reference point. Alternatively, the report may be received by NFVO 1036 from OSS/BSS 1004.
At 1708, the operational flow/algorithm structure 1700 may also include performing management actions with respect to the VDU. The management actions may include instructing the VNFM 1040 to initiate an over-use or under-use VM management action on the virtual deployment side to at least partially address the over-use or under-use of system resources on the physical deployment side. In some embodiments, the management actions performed by NFVO 1036 may include determining availability of resources and sending a request to a global monitor (e.g., global monitor 118 of VNFM 116). The request may be a request to perform an overuse management action to add a vCPU to the VDU, increase the throughput of the virtual link, add one or more memory blocks to the VDU, or instantiate another VDU to offload at least some operations of the VDU.
Fig. 18 illustrates an example operational flow/algorithm structure 1800 of an OSS/BSS in accordance with some embodiments. In particular, for example, the operational flow/algorithm structure 1800 may be used by the OSS/BSS 1004.
At 1804, the operational flow/algorithm structure 1800 may include receiving a report including performance measurements. For example, performance measurements may relate to services provided by the PNF of NE 1032. In some embodiments, the report may be received by OSS/BSS 1004 from NM 1008.
At 1808, the operational flow/algorithm structure 1800 may also include sending a report to, for example, NFVO 1036. In some embodiments, the sending of the report may be based on the detection of the occurrence of a predetermined reporting event. In some embodiments, OSS/BSS 1004 may monitor the counter to determine if a predetermined reporting event has occurred. In various embodiments, the predetermined reporting event may be that at least one counter has a value greater than a threshold reporting value. In other embodiments, the report may be a periodic report and the predetermined reporting event is expiration of a timer. In still other embodiments, the predetermined reporting event may be a request received from another entity (e.g., from NFVO 1036). In some embodiments, once OSS/BSS 1004 receives the report from NM 1008, OSS/BSS 1004 may forward the report to NFVO 1036.
Fig. 19 illustrates an example computer-readable medium 1904 that may be suitable for storing instructions that, in response to execution of the instructions by an apparatus, cause the apparatus to implement selected aspects of the present disclosure. In some embodiments, the computer-readable medium 1904 may be non-transitory. As shown, the computer-readable storage medium 1904 may include programming instructions 1908. The programming instructions 1908 may be configured to enable a device (e.g., NFVO, VNFM, global monitor, host device, OSS/BSS, NM, EM, or similar computing device), in response to execution of the programming instructions 1908, to implement any of the methods or elements (or aspects thereof) described herein with respect to VM monitoring and management. In some embodiments, programming instructions 1908 may be configured to enable a device, in response to execution of the programming instructions 1908, to implement any of the methods or elements (or aspects thereof) described by the present disclosure with respect to lifecycle management operations performed by PNFs, VNFs, VNFCs, and VMs, and to perform actions to instantiate a new VM/VNFC (VDU), shut down an existing VM/VNFC (VDU), or scale a VM/VNFC (VDU) up or down. In some embodiments, programming instructions 1908 may be disposed on a computer-readable medium 1904 that is transient in nature (e.g., a signal).
Any combination of one or more computer-usable or computer-readable media may be utilized. The computer-usable or computer-readable medium may be, for example but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or propagation medium. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a RAM, a ROM, an erasable programmable read-only memory (e.g., EPROM, EEPROM, or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a transmission media such as those supporting the Internet or an intranet, or a magnetic storage device. Note that the computer-usable or computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via, for instance, optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner, if necessary, and then stored in a computer memory. In the context of this document, a computer-usable or computer-readable medium may be any medium that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. The computer-usable medium may include a propagated data signal with the computer-usable program code embodied therewith, either in baseband or as part of a carrier wave. The computer usable program code may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, radio frequency, etc.
Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, smalltalk, C ++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The present disclosure is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
These computer readable program instructions may also be stored in a computer readable medium that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instruction means which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart or block diagram block or blocks.
Some non-limiting examples are provided below.
Example 1 includes an apparatus comprising: means for instantiating a first virtual machine, VM, to run a virtual network function component, VNFC, for executing a sub-function of a network function provided by a virtual network function, VNF, application and instantiating a second VM to run a local monitor for: monitoring performance of the first VM over a predetermined sampling interval to identify over-use or under-use of resources associated with the VNFC; and determining a VM management action based on the monitored performance to at least partially address over-use or under-use of resources associated with the VNFC; and means for communicating with one or more components of the network to perform VM management actions.
Example 2 includes the apparatus of example 1, wherein the VM management action instantiates a third VM to assist the first VM to at least partially address the over-use of the resource, or shuts down the first VM to at least partially address the under-use of the resource.
Example 3 includes the apparatus of any of examples 1-2, wherein the VM management action scales an operating resource provided to the first VM.
Example 4 includes the apparatus of example 3, wherein scaling the operation resource comprises changing a number of virtual central processing units vCPU, virtual memory, or virtual links available to the first VM.
Example 5 includes the apparatus of any of examples 1-4, wherein the local monitor sends a request to the global monitor in the virtual network function manager VNFM via the means for communicating to verify resource availability for the VM management action.
Example 6 includes the apparatus of any of examples 1-5, wherein the local monitor is to: generate a measurement report based on the monitored performance; and send the measurement report to the network function virtualization orchestrator NFVO after the predetermined sampling interval.
Example 7 includes the apparatus of any one of examples 1-6, wherein the local monitor is to: determining a first value of a system parameter; determining a second value of the system parameter based on the monitored performance; and determining a VM management action based on a comparison of the second value to the first value.
Example 8 includes the apparatus of example 7, wherein the system parameter is queue time, service time, number of computing cycles, network bandwidth, or virtual memory.
Example 9 includes the apparatus of example 7 or 8, wherein the first value is determined by sampling a state of the system parameter at a timing interval.
Example 10 includes the apparatus of any of examples 7-9, wherein the first value is determined by sandboxed analysis.
Example 11 includes the apparatus of example 7 or 8, wherein the local monitor is to determine the first value based on a message received from a global monitor of the virtual network function manager VNFM.
Example 12 includes an apparatus comprising: means for providing a local monitor that: determining one or more first values of one or more system parameters corresponding to the operation of a virtual data unit VDU providing virtualized network functionality; receiving one or more second values of one or more system parameters from the VDU; comparing a second value of the one or more second values with a first value of the one or more first values, the first and second values corresponding to a first system parameter of the one or more system parameters; and performing an underuse or overuse management action based on a comparison of the second value with the first value; and means for communicating with one or more components of the network to perform underuse or overuse management actions.
Example 13 includes the apparatus of example 12, wherein to perform the underuse or overuse management action, the local monitor is to: send a request, via the means for communicating, to the global monitor of the virtual network function manager VNFM.
Example 14 includes the apparatus of examples 12 or 13, wherein the local monitor performs an overuse management action to add a virtual central processing unit to the VDU, to increase throughput of the virtual link, to add one or more memory blocks to the VDU, or to instantiate another VDU.
Example 15 includes the apparatus of any of examples 12-14, wherein the one or more system parameters include a queue time parameter, a service time parameter, and a calculation cycle parameter, and the local monitor is further to: detecting a first set of conditions, the first set of conditions comprising: the first value of the queue time parameter is greater than the second value of the queue time parameter and the first value of the service time parameter is less than the second value of the service time parameter; and the first value of the calculation cycle parameter is less than the second value of the calculation cycle parameter; and based on the detection of the first set of conditions, performing an overuse management action to add the virtual central processing unit to the VDU.
Example 16 includes the apparatus of any of examples 12-14, wherein the one or more system parameters include a queue time parameter, a service time parameter, a calculation cycle parameter, and a network bandwidth parameter, and the local monitor is further to: detecting a first set of conditions, the first set of conditions comprising: the first value of the queue time parameter is greater than the second value of the queue time parameter and the first value of the service time parameter is less than the second value of the service time parameter; the first value of the calculation cycle parameter is not less than the second value of the calculation cycle parameter; and the first value of the network bandwidth parameter is less than the second value of the network bandwidth parameter; and based on the detection of the first set of conditions, performing an overuse management action to increase the throughput of the virtual link.
Example 17 includes the apparatus of any of examples 12-14, wherein the one or more system parameters include a queue time parameter, a service time parameter, a calculation cycle parameter, a network bandwidth parameter, and a virtual memory parameter, and the local monitor is further to: detecting a first set of conditions, the first set of conditions comprising: the first value of the queue time parameter is greater than the second value of the queue time parameter and the first value of the service time parameter is less than the second value of the service time parameter; the first value of the calculation cycle parameter is not less than the second value of the calculation cycle parameter; the first value of the network bandwidth parameter is not less than the second value of the network bandwidth parameter; and the first value of the virtual memory parameter is less than the second value of the virtual memory parameter; and based on the detection of the first set of conditions, performing an overuse management action to add one or more memory blocks to the VDU.
Example 18 includes the apparatus of any of examples 12-14, wherein the VDU is a first VDU and the one or more system parameters include a queue time parameter, a service time parameter, a calculation cycle parameter, a network bandwidth parameter, and a virtual memory parameter, and the local monitor is further to: detecting a first set of conditions, the first set of conditions comprising: the first value of the queue time parameter is greater than the second value of the queue time parameter and the first value of the service time parameter is less than the second value of the service time parameter; the first value of the calculation cycle parameter is not less than the second value of the calculation cycle parameter; the first value of the network bandwidth parameter is not less than the second value of the network bandwidth parameter; the first value of the virtual memory parameter is not less than the second value of the virtual memory parameter; and the first value of the queue time parameter is less than the second value of the queue time parameter and the first value of the service time parameter is greater than the second value of the service time parameter; and based on the detection of the first set of conditions, performing an overuse management action to instantiate the second VDU.
Example 19 includes the apparatus of any of examples 12-14, wherein the one or more system parameters include a queue time parameter and a service time parameter, and the local monitor is further to: detecting a first set of conditions, the first set of conditions comprising: the first value of the queue time parameter is not greater than the second value of the queue time parameter or the first value of the service time parameter is not less than the second value of the service time parameter; and the first value of the queue time parameter is less than the second value of the queue time parameter and the first value of the service time parameter is greater than the second value of the service time parameter; and based on the detection of the first set of conditions, performing an overuse management action to instantiate an additional virtual machine for the VDU.
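Examples 15-18 together describe an over-use decision cascade over the monitored parameters. The following Python sketch is one illustrative reading of that cascade; the function name, dictionary keys, and branch ordering are assumptions for clarity, with `first` holding the reference values and `second` the values measured by the local monitor:

```python
def overuse_action(first: dict, second: dict):
    """Pick an over-use management action by comparing first (reference)
    and second (measured) values of the system parameters.

    Keys: 'queue', 'service', 'cycles', 'bandwidth', 'memory'.
    The branch order mirrors examples 15-18: the first parameter whose
    measured value exceeds its reference value determines the action.
    """
    # Base over-use condition shared by examples 15-18.
    if not (first["queue"] > second["queue"] and first["service"] < second["service"]):
        return None
    if first["cycles"] < second["cycles"]:
        return "add_vcpu"                  # example 15: compute-bound
    if first["bandwidth"] < second["bandwidth"]:
        return "increase_link_throughput"  # example 16: network-bound
    if first["memory"] < second["memory"]:
        return "add_memory_blocks"         # example 17: memory-bound
    return "instantiate_second_vdu"        # example 18: scale out instead
```

Example 19 recites a separate condition set for instantiating an additional virtual machine for the same VDU; it is omitted from this sketch for brevity.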
Example 20 includes the apparatus of examples 12 or 13, wherein the local monitor performs an underuse management action to shut down the VDU, migrate operation of the VDU to another VDU and shut down the VDU, remove one or more memory blocks from the VDU, remove one or more virtual central processing units vCPU from the VDU, or reduce throughput of the virtual link.
Example 21 includes the apparatus of example 20, wherein the one or more system parameters include a queue time parameter and a service time parameter, and the local monitor is further to: detecting a first set of conditions, the first set of conditions comprising: the first value of the queue time parameter is greater than the second value of the queue time parameter and the first value of the service time parameter is greater than the second value of the service time parameter; and the second value of the queue time parameter is less than the predetermined minimum threshold value of the queue time parameter and the second value of the service time parameter is less than the predetermined minimum threshold value of the service time parameter; and based on the detection of the first set of conditions, performing an underuse management action to shut down the VDU.
Example 22 includes the apparatus of example 20, wherein the VDU is a first VDU, the one or more system parameters are first system parameters including a first queue time parameter and a first service time parameter, and the local monitor is further to: detecting a first set of conditions, the first set of conditions comprising: the first value of the first queue time parameter is greater than the second value of the first queue time parameter and the first value of the first service time parameter is greater than the second value of the first service time parameter; and the second value of the first queue time parameter is not less than the predetermined minimum threshold value of the queue time parameter or the second value of the first service time parameter is not less than the predetermined minimum threshold value of the service time parameter; and determining, based on the detection of the first set of conditions, whether the second VDU is performing the same sub-function as the first VDU; and if it is determined that the second VDU is performing the same sub-function as the first VDU, obtaining one or more first values of one or more second system parameters of the second VDU, the one or more second system parameters including a second queue time parameter and a second service time parameter.
Example 23 includes the apparatus of example 22, wherein the local monitor is further to: detecting a second set of conditions, the second set of conditions including a first value of the first queue time parameter being greater than a second value of the first queue time parameter plus a second value of the second queue time parameter, and the first value of the first service time parameter being greater than the second value of the first service time parameter plus the second value of the second service time parameter; and based on the detection of the second set of conditions, performing an underuse management action to migrate operation of the first VDU to the second VDU and to shut down the first VDU.
Example 24 includes the apparatus of example 22, wherein the one or more first system parameters comprise a first virtual memory parameter, and the local monitor is further to: detecting a second set of conditions, the second set of conditions comprising: the first value of the first queue time parameter is no greater than the second value of the first queue time parameter plus the second value of the second queue time parameter, or the first value of the first service time parameter is no greater than the second value of the first service time parameter plus the second value of the second service time parameter; and the second value of the first virtual memory parameter is less than a predetermined minimum threshold value of the first virtual memory parameter; and based on the detection of the second set of conditions, performing an underuse management action to remove one or more memory blocks from the first VDU.
Example 25 includes the apparatus of example 20, wherein the one or more system parameters include a queue time parameter, a service time parameter, and a virtual memory parameter, and the local monitor is further to: detecting a first set of conditions, the first set of conditions comprising a first value of a queue time parameter not greater than a second value of a queue time parameter or a first value of a service time parameter not greater than a second value of a service time parameter, and a second value of a virtual memory parameter less than a predetermined minimum threshold value of a virtual memory parameter; and based on the detection of the first set of conditions, performing an underuse management action to remove one or more memory blocks from the VDU.
Example 26 includes the apparatus of example 20, wherein the one or more system parameters include a queue time parameter, a service time parameter, a virtual memory parameter, and a calculation cycle parameter, and the local monitor is further to: detecting a first set of conditions, the first set of conditions comprising: the first value of the queue time parameter is not greater than the second value of the queue time parameter or the first value of the service time parameter is not greater than the second value of the service time parameter; the second value of the virtual memory parameter is not less than a predetermined minimum threshold value of the virtual memory parameter; and the second value of the calculation cycle parameter is less than a predetermined minimum threshold value of the calculation cycle parameter; and based on the detection of the first set of conditions, performing an underuse management action to remove the one or more virtual central processing units from the VDU.
Example 27 includes the apparatus of example 20, wherein the one or more system parameters include a queue time parameter, a service time parameter, a virtual memory parameter, a calculation cycle parameter, and a network bandwidth parameter, and the local monitor is further to: detecting a first set of conditions, the first set of conditions comprising: the first value of the queue time parameter is not greater than the second value of the queue time parameter or the first value of the service time parameter is not greater than the second value of the service time parameter; the second value of the virtual memory parameter is not less than a predetermined minimum threshold value of the virtual memory parameter; the second value of the calculation cycle parameter is not less than a predetermined minimum threshold value of the calculation cycle parameter; and the second value of the network bandwidth parameter is less than a predetermined minimum threshold of network bandwidth; and based on the detection of the first set of conditions, performing an underuse management action to reduce the throughput of the virtual link.
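The under-use cascade of examples 21 and 25-27 can be sketched the same way. This is an illustrative interpretation only; `minimum` stands for the predetermined minimum threshold values named in the claims, and all identifiers are assumptions:

```python
def underuse_action(first: dict, second: dict, minimum: dict):
    """Pick an under-use management action from reference values (`first`),
    measured values (`second`), and per-parameter minimum thresholds."""
    # Example 21: both queue and service time measure below their reference
    # values AND below their minimum thresholds -> the VDU can be retired.
    below_reference = (first["queue"] > second["queue"]
                       and first["service"] > second["service"])
    if (below_reference and second["queue"] < minimum["queue"]
            and second["service"] < minimum["service"]):
        return "shut_down_vdu"
    if second["memory"] < minimum["memory"]:
        return "remove_memory_blocks"      # example 25
    if second["cycles"] < minimum["cycles"]:
        return "remove_vcpus"              # example 26
    if second["bandwidth"] < minimum["bandwidth"]:
        return "reduce_link_throughput"    # example 27
    return None
```

Examples 22-24 (migrating the workload to a second VDU that performs the same sub-function before shutting the first one down) add a second condition set that this sketch does not model.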
Example 28 includes an apparatus for implementing a virtual network function manager VNFM, the apparatus comprising: means for receiving a report from the element manager, the report comprising performance measurements of the physical network function PNF belonging to the network service; and means for sending a request to the network function virtualization orchestrator NFVO for management actions on the virtual data unit VDU providing virtualized network functions for the network service.
Example 29 includes the apparatus of example 28, wherein the management action is an underuse or overuse management action.
Example 30 includes the apparatus of example 28 or 29, wherein the management action is an overuse management action to add a virtual central processing unit to the VDU, increase throughput of the virtual link, add one or more memory blocks to the VDU, or instantiate an additional virtual machine for the VDU.
Example 31 includes the apparatus of examples 28 or 29, wherein the management action is an underuse management action to shut down the VDU, migrate operation of the VDU to another VDU and shut down the VDU, remove one or more memory blocks from the VDU, remove one or more virtual central processing units vCPU from the VDU, or reduce throughput of the virtual link.
Example 32 includes the apparatus of any of examples 28-31, wherein the means for receiving is to receive the report over a Ve-Vnfm interface.
Example 33 includes the apparatus of any of examples 28-32, wherein the performance measurements include mobility management measurements, general packet radio service tunneling protocol, GTP, measurements, internet protocol, IP, measurements, inter-radio access technology handover measurements, quality of service measurements, security measurements, session management measurements, or subscriber management measurements.
Example 34 includes the apparatus of any of examples 28-33, wherein the report includes a network service identifier corresponding to the network service, a PNF identifier corresponding to the PNF, and one or more values corresponding to the performance measure.
Example 35 includes an element manager comprising: means for communicating with a network element of a physical network function PNF that is to perform a network service, and with a virtual network function manager VNFM; means for updating a counter based on an indicator of a PNF-related performance measure, the indicator received from the network element; and means for detecting an occurrence of a predetermined reporting event, wherein the means for communicating sends an indicator-based report to the virtual network function manager upon detecting the occurrence.
Example 36 includes the element manager of example 35, wherein the means for detecting the occurrence is further to: determine a first value of a system parameter based on the indicator; compare the first value of the system parameter to a predetermined threshold; and detect the occurrence of the predetermined reporting event based on the comparison of the first value of the system parameter to the predetermined threshold.
Example 37 includes the element manager of example 35 or 36, wherein the predetermined reporting event is a periodic reporting event.
Example 38 includes the element manager of any of examples 35-37, wherein the means for communicating receives the indicator from a network element.
Example 39 includes the element manager of any of examples 35-38, wherein the report includes a network service identifier corresponding to the network service, a PNF identifier corresponding to the PNF, and a value of an indicator corresponding to the performance measure.
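Examples 35-39 describe an element manager that accumulates PNF performance indicators into counters and reports to the VNFM when a reporting event occurs. The sketch below is a minimal illustration under assumed names, using a simple threshold crossing as the reporting event (example 36); a periodic event (example 37) would instead trigger on a timer:

```python
class ElementManager:
    """Accumulates PNF performance indicators and emits a report when the
    accumulated value crosses a predetermined threshold."""

    def __init__(self, threshold: int):
        self.threshold = threshold
        self.counters = {}       # performance measure -> accumulated value
        self.sent_reports = []   # stand-in for sending to the VNFM

    def on_indicator(self, ns_id: str, pnf_id: str, measure: str, value: int):
        # Update the counter for this performance measure (example 35).
        self.counters[measure] = self.counters.get(measure, 0) + value
        # Reporting event detected: threshold crossed (example 36).
        if self.counters[measure] >= self.threshold:
            self.sent_reports.append({
                "network_service": ns_id,  # network service identifier (example 39)
                "pnf": pnf_id,             # PNF identifier
                "measure": measure,
                "value": self.counters[measure],
            })
            self.counters[measure] = 0  # restart accumulation after reporting
```

The report contents mirror example 39: a network service identifier, a PNF identifier, and the value of the indicator for the performance measure.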
Example 40 includes a network function virtualization orchestrator NFVO, comprising: means for receiving a report from a network manager, the report comprising performance measurements of a physical network function, PNF, of a network service; and means for performing management actions with respect to the virtual data unit VDU providing virtualized network functions for the network service.
Example 41 includes the NFVO of example 40, wherein the management action is an underused or overused management action.
Example 42 includes the NFVO of example 40 or 41, wherein the management action is an overuse management action to add a virtual central processing unit to the VDU, increase a throughput of the virtual link, add one or more memory blocks to the VDU, or instantiate an additional virtual machine for the VDU.
Example 43 includes the NFVO of example 40 or 41, wherein the management action is an underuse management action to shut down the VDU, migrate operation of the VDU to another VDU and shut down the VDU, remove one or more memory blocks from the VDU, remove one or more virtual central processing units vCPU from the VDU, or reduce throughput of the virtual link.
Example 44 includes the NFVO of any one of examples 40-43, wherein the means for receiving the report receives the report through an Os-NFVO interface.
Example 45 includes the NFVO of any one of examples 40-44, wherein the report includes a network service identifier corresponding to the network service, a PNF identifier corresponding to the PNF, and a value of an indicator corresponding to the performance measure.
Example 46 includes a network manager comprising: means for processing an indicator of a performance measure associated with a physical network function PNF of the network service, the indicator being received from the element manager; means for updating the counter based on the indicator; means for detecting the occurrence of a predetermined reporting event; and means for sending an indicator based report to the network function virtualization orchestrator NFVO based on the detected occurrence.
Example 47 includes the network manager of example 46, wherein the indicator is received via an Itf-N interface between the network manager and the element manager.
Example 48 includes the network manager of example 46 or 47, wherein the network manager sends the report via an Os-NFVO interface between the network manager and the NFVO.
Example 49 includes the network manager of any of examples 46-48, wherein the means for detecting the occurrence is to: determine a first value of a system parameter based on the indicator; compare the first value of the system parameter to a predetermined threshold; and detect the occurrence of the predetermined reporting event based on the comparison of the first value of the system parameter to the predetermined threshold.
Example 50 includes the network manager of any of examples 46-48, wherein the predetermined reporting event is a periodic reporting event.
Example 51 includes a network function virtualization orchestrator NFVO, comprising: means for receiving a report from a device of an operations support system or a business support system OSS/BSS, the report comprising performance measurements of a physical network function PNF belonging to a network service; and means for performing management actions with respect to the virtual data unit VDU providing virtualized network functions for the network service.
Example 52 includes the NFVO of example 51, wherein the management action is an underused or overused management action.
Example 53 includes the NFVO of example 51 or 52, wherein the management action is an overuse management action to add a virtual central processing unit to the VDU, increase a throughput of the virtual link, add one or more memory blocks to the VDU, or instantiate an additional virtual machine for the VDU.
Example 54 includes the NFVO of example 51 or 52, wherein the management action is an underuse management action to shut down the VDU, migrate operation of the VDU to another VDU and shut down the VDU, remove one or more memory blocks from the VDU, remove one or more virtual central processing units vCPU from the VDU, or reduce throughput of the virtual link.
Example 55 includes the NFVO of any one of examples 51-54, wherein the means for performing the management action is to: send an action request to the virtual network function manager VNFM.
Example 56 includes an apparatus of an operations support system OSS or a business support system BSS, the apparatus comprising: means for receiving a message from the network manager NM, the message comprising an indicator of a performance measure of a physical network function PNF belonging to the network service; and means for sending an indicator-based report to the network function virtualization orchestrator NFVO based on the detected occurrence.
Example 57 includes the apparatus of example 56, further comprising: means for determining a first value of a system parameter based on the indicator; means for comparing the first value of the system parameter with a predetermined threshold; and means for identifying the detected occurrence based on a comparison of the first value of the system parameter to a predetermined threshold.
Example 58 includes the apparatus of example 56 or 57, wherein the detected occurrence is a periodic reporting event.
Example 59 includes the apparatus of any one of examples 56-58, wherein the report includes a network service identifier corresponding to the network service, a PNF identifier corresponding to the PNF, and a value of an indicator corresponding to the performance measure.
Example 60 includes one or more computer-readable media having instructions that, when executed by one or more processors, cause an apparatus to: instantiating a first virtual machine, VM, to run a virtual network function component, VNFC, for executing a sub-function of a network function provided by a virtual network function, VNF, application; instantiating the second VM to run a local monitor that: monitoring performance of the first VM over a predetermined sampling interval to identify over-use or under-use of resources associated with the VNFC; and determining a VM management action based on the monitored performance to at least partially address over-use or under-use of resources associated with the VNFC.
Example 61 includes the one or more computer-readable media of example 60, wherein the VM management action instantiates the third VM to assist the first VM in at least partially resolving the over-use of the resource or shuts down the first VM to at least partially resolve the under-use of the resource.
Example 62 includes the one or more computer-readable media of any of examples 60-61, wherein the VM management action scales an operating resource provided to the first VM.
Example 63 includes the one or more computer-readable media of example 62, wherein scaling the operation resources includes changing a number of virtual central processing units vCPU, virtual memory, or virtual links available to the first VM.
Example 64 includes the one or more computer-readable media of any of examples 60-63, wherein the local monitor sends a request to a global monitor in the virtual network function manager VNFM to verify resource availability for the VM management action.
Example 65 includes the one or more computer-readable media of any of examples 60-64, wherein the local monitor is to: generate a measurement report based on the monitored performance; and send the measurement report to the network function virtualization orchestrator NFVO after the predetermined sampling interval.
Example 66 includes the one or more computer-readable media of any of examples 60-65, wherein the local monitor is to: determining a first value of a system parameter; determining a second value of the system parameter based on the monitored performance; and determining a VM management action based on a comparison of the second value to the first value.
Example 67 includes the one or more computer-readable media of example 66, wherein the system parameter is queue time, service time, number of computing cycles, network bandwidth, or virtual memory.
Example 68 includes the one or more computer-readable media of examples 66 or 67, wherein the first value is determined by sampling a state of the system parameter at a timing interval.
Example 69 includes the one or more computer-readable media of any of examples 66-68, wherein the first value is determined by sandboxed analysis.
Example 70 includes the one or more computer-readable media of examples 66 or 67, wherein the local monitor is to determine the first value based on a message received from a global monitor of the virtual network function manager VNFM.
Example 71 includes a method, comprising: instantiating a first virtual machine, VM, to run a virtual network function component, VNFC, for executing a sub-function of a network function provided by a virtual network function, VNF, application; and instantiating the second VM to run a local monitor that: monitoring performance of the first VM over a predetermined sampling interval to identify over-use or under-use of resources associated with the VNFC; and determining a VM management action based on the monitored performance to at least partially address over-use or under-use of resources associated with the VNFC.
Example 72 includes the method of example 71, wherein the VM management action instantiates the third VM to assist the first VM in at least partially resolving the over-use of the resource, or shuts down the first VM to at least partially resolve the under-use of the resource.
Example 73 includes the method of example 71 or 72, wherein the local monitor sends a request to a global monitor in the virtual network function manager VNFM to verify resource availability for the VM management action.
Example 74 includes the method of any of examples 71-73, wherein the local monitor is to: generate a measurement report based on the monitored performance; and send the measurement report to the network function virtualization orchestrator NFVO after the predetermined sampling interval.
Example 75 includes the method of any of examples 71-74, wherein the local monitor is to: determine a first value of a system parameter; determine a second value of the system parameter based on the monitored performance; and determine a VM management action based on a comparison of the second value to the first value.
Example 76 includes the method of example 75, further comprising sampling a state of the system parameter at a timing interval to determine the first value.
Example 77 includes the method of example 75 or 76, further comprising determining the first value by sandboxed analysis.
Example 78 includes the method of example 76 or 77, wherein the local monitor determines the first value based on a message received from a global monitor of the virtual network function manager VNFM.
Example 79 includes one or more computer-readable media having instructions that, when executed by one or more processors, cause a local monitor to: determining one or more first values of one or more system parameters corresponding to the operation of a virtual data unit VDU providing virtualized network functionality; receiving one or more second values of one or more system parameters from the VDU; comparing a second value of the one or more second values with a first value of the one or more first values, the first and second values corresponding to a first system parameter of the one or more system parameters; and performing an underuse or overuse management action based on a comparison of the second value with the first value.
Example 80 includes the one or more computer-readable media of example 79, wherein, to perform the underuse or overuse management action, the local monitor is to: send a request to a global monitor of the virtual network function manager VNFM.
Example 81 includes the one or more computer-readable media of examples 79 or 80, wherein the local monitor performs an overuse management action to add a virtual central processing unit to the VDU, to increase throughput of the virtual link, to add one or more memory blocks to the VDU, or to instantiate another VDU.
Example 82 includes the one or more computer-readable media of any of examples 79-81, wherein the one or more system parameters include a queue time parameter, a service time parameter, and a calculation cycle parameter, and the instructions, when executed, further cause the local monitor to: detecting a first set of conditions, the first set of conditions comprising: the first value of the queue time parameter is greater than the second value of the queue time parameter and the first value of the service time parameter is less than the second value of the service time parameter; and the first value of the calculation cycle parameter is less than the second value of the calculation cycle parameter; and based on the detection of the first set of conditions, performing an overuse management action to add the virtual central processing unit to the VDU.
Example 83 includes the one or more computer-readable media of any of examples 79-81, wherein the one or more system parameters include a queue time parameter, a service time parameter, a calculation cycle parameter, and a network bandwidth parameter, and the instructions when executed further cause the local monitor to: detecting a first set of conditions, the first set of conditions comprising: the first value of the queue time parameter is greater than the second value of the queue time parameter and the first value of the service time parameter is less than the second value of the service time parameter; the first value of the calculation cycle parameter is not less than the second value of the calculation cycle parameter; and the first value of the network bandwidth parameter is less than the second value of the network bandwidth parameter; and based on the detection of the first set of conditions, performing an overuse management action to increase the throughput of the virtual link.
Example 84 includes the one or more computer-readable media of any of examples 79-81, wherein the one or more system parameters include a queue time parameter, a service time parameter, a calculation cycle parameter, a network bandwidth parameter, and a virtual memory parameter, and the instructions when executed further cause the local monitor to: detecting a first set of conditions, the first set of conditions comprising: the first value of the queue time parameter is greater than the second value of the queue time parameter and the first value of the service time parameter is less than the second value of the service time parameter; the first value of the calculation cycle parameter is not less than the second value of the calculation cycle parameter; the first value of the network bandwidth parameter is not less than the second value of the network bandwidth parameter; and the first value of the virtual memory parameter is less than the second value of the virtual memory parameter; and based on the detection of the first set of conditions, performing an overuse management action to add one or more memory blocks to the VDU.
Example 85 includes the one or more computer-readable media of any of examples 79-81, wherein the VDU is a first VDU and the one or more system parameters include a queue time parameter, a service time parameter, a calculation cycle parameter, a network bandwidth parameter, and a virtual memory parameter, and the instructions when executed further cause the local monitor to: detecting a first set of conditions, the first set of conditions comprising: the first value of the queue time parameter is greater than the second value of the queue time parameter and the first value of the service time parameter is less than the second value of the service time parameter; the first value of the calculation cycle parameter is not less than the second value of the calculation cycle parameter; the first value of the network bandwidth parameter is not less than the second value of the network bandwidth parameter; the first value of the virtual memory parameter is not less than the second value of the virtual memory parameter; and the first value of the queue time parameter is less than the second value of the queue time parameter and the first value of the service time parameter is greater than the second value of the service time parameter; and based on the detection of the first set of conditions, performing an overuse management action to instantiate a second VDU.
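Read together, examples 82-85 describe a cascade of overuse checks: a shared queue-time/service-time precondition followed by per-resource comparisons that select the scaling action. The following is an illustrative, non-limiting sketch of that cascade, assuming the first values are the local monitor's reference values and the second values are those reported by the VDU; the `Sample` class, field names, and action strings are assumptions for illustration, not claim language (example 85's additional branch conditions are simplified into the final fall-through).

```python
from dataclasses import dataclass

@dataclass
class Sample:
    """First (reference) or second (reported) values of the system parameters."""
    queue_time: float
    service_time: float
    cpu_cycles: float     # calculation cycle parameter
    bandwidth: float      # network bandwidth parameter
    memory: float         # virtual memory parameter

def overuse_action(first, second):
    # Shared precondition of examples 82-85: reference queue time exceeds the
    # reported queue time while reference service time is below the reported one.
    if not (first.queue_time > second.queue_time
            and first.service_time < second.service_time):
        return None
    if first.cpu_cycles < second.cpu_cycles:
        return "add vCPU to VDU"                    # example 82
    if first.bandwidth < second.bandwidth:
        return "increase virtual link throughput"   # example 83
    if first.memory < second.memory:
        return "add memory blocks to VDU"           # example 84
    return "instantiate a second VDU"               # example 85 (simplified)
```

Each check only fires once the preceding resource comparison fails, which mirrors the "not less than" conditions carried forward from example to example.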
Example 86 includes the one or more computer-readable media of any of examples 79-81, wherein the one or more system parameters include a queue time parameter and a service time parameter, and the instructions, when executed, further cause the local monitor to: detecting a first set of conditions, the first set of conditions comprising: the first value of the queue time parameter is not greater than the second value of the queue time parameter or the first value of the service time parameter is not less than the second value of the service time parameter; and the first value of the queue time parameter is less than the second value of the queue time parameter and the first value of the service time parameter is greater than the second value of the service time parameter; and based on the detection of the first set of conditions, performing an overuse management operation to instantiate an additional virtual machine for the VDU.
Example 87 includes the one or more computer-readable media of examples 79 or 80, wherein the local monitor performs an underuse management action to shut down the VDU, migrate operation of the VDU to another VDU and shut down the VDU, remove one or more memory blocks from the VDU, remove one or more virtual central processing units vCPU from the VDU, or reduce throughput of the virtual link.
Example 88 includes the one or more computer-readable media of example 87, wherein the one or more system parameters include a queue time parameter and a service time parameter, and the instructions, when executed, further cause the local monitor to: detecting a first set of conditions, the first set of conditions comprising: the first value of the queue time parameter is greater than the second value of the queue time parameter and the first value of the service time parameter is greater than the second value of the service time parameter; and the second value of the queue time parameter is less than the predetermined minimum threshold value of the queue time parameter and the second value of the service time parameter is less than the predetermined minimum threshold value of the service time parameter; and based on the detection of the first set of conditions, performing an underuse management action to shut down the VDU.
Example 89 includes the one or more computer-readable media of example 87, wherein the VDU is a first VDU, the one or more system parameters are first system parameters including a first queue time parameter and a first service time parameter, and the instructions, when executed, further cause the local monitor to: detecting a first set of conditions, the first set of conditions comprising: the first value of the first queue time parameter is greater than the second value of the first queue time parameter and the first value of the first service time parameter is greater than the second value of the first service time parameter; and the second value of the first queue time parameter is not less than the predetermined minimum threshold value of the queue time parameter or the second value of the first service time parameter is not less than the predetermined minimum threshold value of the service time parameter; and determining, based on the detection of the first set of conditions, whether the second VDU is performing the same sub-function as the first VDU; and if it is determined that the second VDU is performing the same sub-function as the first VDU, obtaining one or more first values of one or more second system parameters of the second VDU, the one or more second system parameters including a second queue time parameter and a second service time parameter.
Example 90 includes the one or more computer-readable media of example 89, wherein the instructions, when executed, further cause the local monitor to: detecting a second set of conditions, the second set of conditions including a first value of the first queue time parameter being greater than a second value of the first queue time parameter plus a second value of the second queue time parameter, and the first value of the first service time parameter being greater than the second value of the first service time parameter plus the second value of the second service time parameter; and based on the detection of the second set of conditions, performing an underuse management action to migrate operation of the first VDU to the second VDU and to shut down the first VDU.
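The migration test of example 90 compares the first VDU's reference load against the combined reported load of both VDUs. A minimal sketch, with hypothetical argument names (the claims do not prescribe a function signature):

```python
def should_migrate(first_q1, first_s1, second_q1, second_s1, second_q2, second_s2):
    # Example 90's second set of conditions: the first VDU's reference queue
    # time and service time each exceed the sum of the reported values of the
    # first and second VDUs, so the first VDU's work can be absorbed by the
    # second VDU and the first VDU shut down.
    return (first_q1 > second_q1 + second_q2
            and first_s1 > second_s1 + second_s2)
```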
Example 91 includes the one or more computer-readable media of example 89, wherein the one or more first system parameters include a first virtual memory parameter, and the instructions, when executed, further cause the local monitor to: detecting a second set of conditions, the second set of conditions comprising: the first value of the first queue time parameter is no greater than the second value of the first queue time parameter plus the second value of the second queue time parameter, or the first value of the first service time parameter is no greater than the second value of the first service time parameter plus the second value of the second service time parameter; and the second value of the first virtual memory parameter is less than a predetermined minimum threshold value of the first virtual memory parameter; and based on the detection of the second set of conditions, performing an underuse management action to remove one or more memory blocks from the first VDU.
Example 92 includes the one or more computer-readable media of example 87, wherein the one or more system parameters include a queue time parameter, a service time parameter, and a virtual memory parameter, and the instructions when executed further cause the local monitor to: detecting a first set of conditions, the first set of conditions comprising the first value of the queue time parameter not greater than the second value of the queue time parameter or the first value of the service time parameter not greater than the second value of the service time parameter, and the second value of the virtual memory parameter less than a predetermined minimum threshold value of the virtual memory parameter; and based on the detection of the first set of conditions, performing an underuse management action to remove one or more memory blocks from the VDU.
Example 93 includes the one or more computer-readable media of example 87, wherein the one or more system parameters include a queue time parameter, a service time parameter, a virtual memory parameter, and a calculation cycle parameter, and the instructions when executed further cause the local monitor to: detecting a first set of conditions, the first set of conditions comprising: the first value of the queue time parameter is not greater than the second value of the queue time parameter or the first value of the service time parameter is not greater than the second value of the service time parameter; the second value of the virtual memory parameter is not less than a predetermined minimum threshold value of the virtual memory parameter; and the second value of the calculation cycle parameter is less than a predetermined minimum threshold value of the calculation cycle parameter; and based on the detection of the first set of conditions, performing an underuse management action to remove one or more virtual central processing units from the VDU.
Example 94 includes the one or more computer-readable media of example 87, wherein the one or more system parameters include a queue time parameter, a service time parameter, a virtual memory parameter, a calculation cycle parameter, and a network bandwidth parameter, and the instructions when executed further cause the local monitor to: detecting a first set of conditions, the first set of conditions comprising: the first value of the queue time parameter is not greater than the second value of the queue time parameter or the first value of the service time parameter is not greater than the second value of the service time parameter; the second value of the virtual memory parameter is not less than a predetermined minimum threshold value of the virtual memory parameter; the second value of the calculation cycle parameter is not less than a predetermined minimum threshold value of the calculation cycle parameter; and the second value of the network bandwidth parameter is less than a predetermined minimum threshold value of the network bandwidth parameter; and based on the detection of the first set of conditions, performing an underuse management action to reduce the throughput of the virtual link.
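Examples 88 and 92-94 likewise form an underuse cascade: a shut-down test on both time parameters, then per-resource minimum-threshold checks that release resources one class at a time. An illustrative sketch, using dictionaries of parameter values as an assumed representation (key names and action strings are illustrative, not claim language):

```python
def underuse_action(first, second, minimum):
    # first/second: reference and reported parameter values; minimum: the
    # predetermined minimum threshold values named in the examples.
    if first["queue"] > second["queue"] and first["service"] > second["service"]:
        # Example 88: both reported values fall below their minimum
        # thresholds, so the VDU can be shut down entirely.
        if second["queue"] < minimum["queue"] and second["service"] < minimum["service"]:
            return "shut down VDU"
    if first["queue"] <= second["queue"] or first["service"] <= second["service"]:
        if second["memory"] < minimum["memory"]:
            return "remove memory blocks from VDU"      # example 92
        if second["cpu"] < minimum["cpu"]:
            return "remove vCPU from VDU"               # example 93
        if second["bandwidth"] < minimum["bandwidth"]:
            return "reduce virtual link throughput"     # example 94
    return None
```

The migration path of examples 89-91 (consulting a second VDU running the same sub-function) is omitted here for brevity.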
Example 95 includes a method of operating a local monitor, comprising: determining one or more first values of one or more system parameters corresponding to the operation of a virtual data unit VDU providing virtualized network functionality; receiving one or more second values of one or more system parameters from the VDU; comparing a second value of the one or more second values with a first value of the one or more first values, the first and second values corresponding to a first system parameter of the one or more system parameters; and performing an underuse or overuse management action based on a comparison of the second value with the first value.
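The four steps of example 95 (determine first values, receive second values from the VDU, compare, act) can be sketched as a small monitor loop; the class and method names below are illustrative assumptions, and the per-parameter condition sets of the surrounding examples are abstracted into a single callback.

```python
class LocalMonitor:
    def __init__(self, act):
        self.first_values = {}   # reference (first) values per system parameter
        self.act = act           # management-action callback

    def determine(self, params):
        # Step 1: determine one or more first values of the system parameters.
        self.first_values.update(params)

    def on_report(self, second_values):
        # Steps 2-4: receive second values from the VDU, compare each against
        # the stored first value, and trigger an underuse or overuse
        # management action on a mismatch.
        for name, second in second_values.items():
            first = self.first_values.get(name)
            if first is not None and second != first:
                self.act(name, first, second)
```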
Example 96 includes the method of example 95, wherein performing the underuse or overuse management action includes: a request is sent to a global monitor of the virtual network function manager VNFM.
Example 97 includes the method of example 95 or 96, further comprising performing an overuse management action to add a virtual central processing unit to the VDU, to increase throughput of the virtual link, to add one or more memory blocks to the VDU, or to instantiate another VDU.
Example 98 includes the method of any of examples 95-97, wherein the one or more system parameters include a queue time parameter, a service time parameter, and a calculation cycle parameter, and the method further comprises: detecting a first set of conditions, the first set of conditions comprising: the first value of the queue time parameter is greater than the second value of the queue time parameter and the first value of the service time parameter is less than the second value of the service time parameter; and the first value of the calculation cycle parameter is less than the second value of the calculation cycle parameter; and based on the detection of the first set of conditions, performing an overuse management action to add the virtual central processing unit to the VDU.
Example 99 includes the method of any of examples 95-97, wherein the one or more system parameters include a queue time parameter, a service time parameter, a calculation cycle parameter, and a network bandwidth parameter, and the method further comprises: detecting a first set of conditions, the first set of conditions comprising: the first value of the queue time parameter is greater than the second value of the queue time parameter and the first value of the service time parameter is less than the second value of the service time parameter; the first value of the calculation cycle parameter is not less than the second value of the calculation cycle parameter; and the first value of the network bandwidth parameter is less than the second value of the network bandwidth parameter; and based on the detection of the first set of conditions, performing an overuse management action to increase the throughput of the virtual link.
Example 100 includes the method of any of examples 95-97, wherein the one or more system parameters include a queue time parameter, a service time parameter, a calculation cycle parameter, a network bandwidth parameter, and a virtual memory parameter, and the method further comprises: detecting a first set of conditions, the first set of conditions comprising: the first value of the queue time parameter is greater than the second value of the queue time parameter and the first value of the service time parameter is less than the second value of the service time parameter; the first value of the calculation cycle parameter is not less than the second value of the calculation cycle parameter; the first value of the network bandwidth parameter is not less than the second value of the network bandwidth parameter; and the first value of the virtual memory parameter is less than the second value of the virtual memory parameter; and based on the detection of the first set of conditions, performing an overuse management action to add one or more memory blocks to the VDU.
Example 101 includes the method of any of examples 95-97, wherein the VDU is a first VDU and the one or more system parameters include a queue time parameter, a service time parameter, a calculation cycle parameter, a network bandwidth parameter, and a virtual memory parameter, and the method further comprises: detecting a first set of conditions, the first set of conditions comprising: the first value of the queue time parameter is greater than the second value of the queue time parameter and the first value of the service time parameter is less than the second value of the service time parameter; the first value of the calculation cycle parameter is not less than the second value of the calculation cycle parameter; the first value of the network bandwidth parameter is not less than the second value of the network bandwidth parameter; the first value of the virtual memory parameter is not less than the second value of the virtual memory parameter; and the first value of the queue time parameter is less than the second value of the queue time parameter and the first value of the service time parameter is greater than the second value of the service time parameter; and based on the detection of the first set of conditions, performing an overuse management action to instantiate a second VDU.
Example 102 includes the method of any of examples 95-97, wherein the one or more system parameters include a queue time parameter and a service time parameter, and the method further comprises: detecting a first set of conditions, the first set of conditions comprising: the first value of the queue time parameter is not greater than the second value of the queue time parameter or the first value of the service time parameter is not less than the second value of the service time parameter; and the first value of the queue time parameter is less than the second value of the queue time parameter and the first value of the service time parameter is greater than the second value of the service time parameter; and based on the detection of the first set of conditions, performing an overuse management operation to instantiate an additional virtual machine for the VDU.
Example 103 includes the method of example 95 or 96, wherein the method comprises performing an underuse management action to shut down the VDU, migrate operation of the VDU to another VDU and shut down the VDU, remove one or more memory blocks from the VDU, remove one or more virtual central processing units vCPU from the VDU, or reduce throughput of the virtual link.
Example 104 includes the method of example 95, wherein the one or more system parameters include a queue time parameter and a service time parameter, and the method further comprises: detecting a first set of conditions, the first set of conditions comprising: the first value of the queue time parameter is greater than the second value of the queue time parameter and the first value of the service time parameter is greater than the second value of the service time parameter; and the second value of the queue time parameter is less than the predetermined minimum threshold value of the queue time parameter and the second value of the service time parameter is less than the predetermined minimum threshold value of the service time parameter; and based on the detection of the first set of conditions, performing an underuse management action to shut down the VDU.
Example 105 includes the method of example 95, wherein the VDU is a first VDU, the one or more system parameters are first system parameters including a first queue time parameter and a first service time parameter, and the method further comprises: detecting a first set of conditions, the first set of conditions comprising: the first value of the first queue time parameter is greater than the second value of the first queue time parameter and the first value of the first service time parameter is greater than the second value of the first service time parameter; and the second value of the first queue time parameter is not less than the predetermined minimum threshold value of the queue time parameter or the second value of the first service time parameter is not less than the predetermined minimum threshold value of the service time parameter; and determining, based on the detection of the first set of conditions, whether the second VDU is performing the same sub-function as the first VDU; and if it is determined that the second VDU is performing the same sub-function as the first VDU, obtaining one or more first values of one or more second system parameters of the second VDU, the one or more second system parameters including a second queue time parameter and a second service time parameter.
Example 106 includes the method of example 105, wherein the method further comprises: detecting a second set of conditions, the second set of conditions including a first value of the first queue time parameter being greater than a second value of the first queue time parameter plus a second value of the second queue time parameter, and the first value of the first service time parameter being greater than the second value of the first service time parameter plus the second value of the second service time parameter; and based on the detection of the second set of conditions, performing an underuse management action to migrate operation of the first VDU to the second VDU and to shut down the first VDU.
Example 107 includes the method of example 105, wherein the one or more first system parameters include a first virtual memory parameter, and the method further comprises: detecting a second set of conditions, the second set of conditions comprising: the first value of the first queue time parameter is no greater than the second value of the first queue time parameter plus the second value of the second queue time parameter, or the first value of the first service time parameter is no greater than the second value of the first service time parameter plus the second value of the second service time parameter; and the second value of the first virtual memory parameter is less than a predetermined minimum threshold value of the first virtual memory parameter; and based on the detection of the second set of conditions, performing an underuse management action to remove one or more memory blocks from the first VDU.
Example 108 includes the method of example 95, wherein the one or more system parameters include a queue time parameter, a service time parameter, and a virtual memory parameter, and the method further comprises: detecting a first set of conditions, the first set of conditions comprising the first value of the queue time parameter not greater than the second value of the queue time parameter or the first value of the service time parameter not greater than the second value of the service time parameter, and the second value of the virtual memory parameter less than a predetermined minimum threshold value of the virtual memory parameter; and based on the detection of the first set of conditions, performing an underuse management action to remove one or more memory blocks from the VDU.
Example 109 includes the method of example 95, wherein the one or more system parameters include a queue time parameter, a service time parameter, a virtual memory parameter, and a calculation cycle parameter, and the method further comprises: detecting a first set of conditions, the first set of conditions comprising: the first value of the queue time parameter is not greater than the second value of the queue time parameter or the first value of the service time parameter is not greater than the second value of the service time parameter; the second value of the virtual memory parameter is not less than a predetermined minimum threshold value of the virtual memory parameter; and the second value of the calculation cycle parameter is less than a predetermined minimum threshold value of the calculation cycle parameter; and based on the detection of the first set of conditions, performing an underuse management action to remove one or more virtual central processing units from the VDU.
Example 110 includes the method of example 95, wherein the one or more system parameters include a queue time parameter, a service time parameter, a virtual memory parameter, a calculation cycle parameter, and a network bandwidth parameter, and the method further comprises: detecting a first set of conditions, the first set of conditions comprising: the first value of the queue time parameter is not greater than the second value of the queue time parameter or the first value of the service time parameter is not greater than the second value of the service time parameter; the second value of the virtual memory parameter is not less than a predetermined minimum threshold value of the virtual memory parameter; the second value of the calculation cycle parameter is not less than a predetermined minimum threshold value of the calculation cycle parameter; and the second value of the network bandwidth parameter is less than a predetermined minimum threshold value of the network bandwidth parameter; and based on the detection of the first set of conditions, performing an underuse management action to reduce the throughput of the virtual link.
Example 111 includes one or more computer-readable media having instructions that, when executed by one or more processors, cause a virtual network function manager VNFM to: receiving a report from the element manager, the report comprising performance measurements of the physical network function PNF belonging to the network service; and sending a request to the network function virtualization orchestrator NFVO for management actions on the virtual data unit VDU that provides the virtualized network function for the network service.
Example 112 includes the one or more computer-readable media of example 111, wherein the management action is an underuse or overuse management action.
Example 113 includes the one or more computer-readable media of examples 111 or 112, wherein the management action is an overuse management action to add a virtual central processing unit to the VDU, increase throughput of the virtual link, add one or more memory blocks to the VDU, or instantiate an additional virtual machine for the VDU.
Example 114 includes the one or more computer-readable media of examples 111 or 112, wherein the management action is an underuse management action to shut down the VDU, migrate operation of the VDU to another VDU and shut down the VDU, remove one or more memory blocks from the VDU, remove one or more virtual central processing units vCPU from the VDU, or reduce throughput of the virtual link.
Example 115 includes the one or more computer-readable media of any of examples 111-114, wherein the VNFM is to receive the report via a Ve-Vnfm interface.
Example 116 includes the one or more computer-readable media of any of examples 111-115, wherein the performance measurements include mobility management measurements, general packet radio service tunneling protocol GTP measurements, internet protocol IP management measurements, inter-radio access technology handover measurements, quality of service measurements, security measurements, session management measurements, or subscriber management measurements.
Example 117 includes the one or more computer-readable media of any of examples 111-116, wherein the report includes a network service identifier corresponding to the network service, a PNF identifier corresponding to the PNF, and one or more values corresponding to the performance measure.
Example 118 includes a signal structure comprising: an NS field corresponding to an NS identifier of the network service NS; a PNF field corresponding to a PNF identifier of the physical network function PNF; and one or more measurement fields for one or more values of a performance measurement associated with the PNF.
Example 119 includes the signal structure of example 118, wherein the signal structure conforms to a protocol of a Ve-Vnfm or Os-NFVO reference point.
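The signal structure of example 118 maps naturally onto a simple record with an NS field, a PNF field, and one or more measurement fields. The Python representation, field names, and types below are assumptions for illustration, not the claimed wire format:

```python
from dataclasses import dataclass, field

@dataclass
class PnfReport:
    ns_id: str    # NS field: identifier of the network service
    pnf_id: str   # PNF field: identifier of the physical network function
    # Measurement fields: one or more values of performance measurements
    # associated with the PNF, keyed by measurement name.
    measurements: dict = field(default_factory=dict)
```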
Example 120 includes one or more computer-readable media having instructions that, when executed, cause an element manager to: receiving an indicator of a performance measure associated with a physical network function PNF; updating a counter based on the indicator; detecting the occurrence of a predetermined reporting event; and based on the detected occurrence, sending an indicator-based report to the virtual network function manager.
Example 121 includes the one or more computer-readable media of example 120, wherein the instructions, when executed, further cause the element manager to: determining a first value of a system parameter based on the indicator; comparing the first value of the system parameter to a predetermined threshold; and detecting the occurrence of a predetermined reporting event based on a comparison of the first value of the system parameter to a predetermined threshold.
Example 122 includes the one or more computer-readable media of examples 120 or 121, wherein the predetermined reporting event is a periodic reporting event.
Example 123 includes the one or more computer-readable media of any of examples 120-122, wherein the element manager is to receive the indicator from the network element.
Example 124 includes the one or more computer-readable media of any of examples 120-123, wherein the report includes a network service identifier corresponding to the network service, a PNF identifier corresponding to the PNF, and a value of an indicator corresponding to the performance measure.
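The element-manager flow of examples 120-124 (receive an indicator, update a counter, detect a predetermined reporting event, report to the VNFM) can be sketched as follows. The threshold trigger stands in for the "predetermined reporting event" (which example 122 notes may instead be periodic), and the class, callback, and report field names are assumptions; the report fields track example 124's network service identifier, PNF identifier, and indicator value.

```python
class ElementManagerSketch:
    def __init__(self, send_to_vnfm, threshold):
        self.counter = 0
        self.send_to_vnfm = send_to_vnfm   # callback toward the VNFM
        self.threshold = threshold         # predetermined threshold

    def on_indicator(self, ns_id, pnf_id, value):
        # Update the counter based on the received indicator.
        self.counter += value
        # Detect the predetermined reporting event and, on its occurrence,
        # send an indicator-based report to the VNFM.
        if self.counter >= self.threshold:
            self.send_to_vnfm({"ns": ns_id, "pnf": pnf_id,
                               "value": self.counter})
            self.counter = 0
```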
Example 125 includes an element manager comprising: interface circuitry for communicatively coupling the element manager to a network element that performs a physical network function PNF of a network service, and to a virtual network function manager VNFM; and control circuitry coupled to the interface circuitry, the control circuitry to: receiving, via the interface circuit, an indicator of a PNF-related performance measurement; updating a counter based on the indicator; detecting the occurrence of a predetermined reporting event; and based on the detected occurrence, sending an indicator-based report to the virtual network function manager via the interface circuit.
Example 126 includes the element manager of example 125, wherein the control circuitry is further to: determining a first value of a system parameter based on the indicator; comparing the first value of the system parameter to a predetermined threshold; and detecting the occurrence of a predetermined reporting event based on a comparison of the first value of the system parameter to a predetermined threshold.
Example 127 includes the element manager of example 125 or 126, wherein the predetermined reporting event is a periodic reporting event.
Example 128 includes the element manager of any of examples 125-127, wherein the control circuitry is to receive the indicator from a network element.
Example 129 includes the element manager of any of examples 125-128, wherein the report includes a network service identifier corresponding to the network service, a PNF identifier corresponding to the PNF, and a value of an indicator corresponding to the performance measure.
Example 130 includes one or more computer-readable media having instructions that, when executed by one or more processors, cause a network function virtualization orchestrator NFVO to: receiving a report from a network manager, the report comprising performance measurements of a physical network function PNF of the network service; and performing management actions with respect to the virtual data unit VDU that provides virtualized network functions for the network service.
Example 131 includes the one or more computer-readable media of example 130, wherein the management action is an underuse or overuse management action.
Example 132 includes the one or more computer-readable media of example 130 or 131, wherein the management action is an overuse management action to add a virtual central processing unit to the VDU, increase throughput of the virtual link, add one or more memory blocks to the VDU, or instantiate an additional virtual machine for the VDU.
Example 133 includes the one or more computer-readable media of example 130 or 131, wherein the management action is an underuse management action to shut down the VDU, migrate operation of the VDU to another VDU and shut down the VDU, remove one or more memory blocks from the VDU, remove one or more virtual central processing units vCPU from the VDU, or reduce throughput of the virtual link.
Example 134 includes the one or more computer-readable media of any of examples 130-133, wherein the NFVO receives the report through an Os-NFVO interface.
Example 135 includes one or more computer-readable media having instructions that, when executed by one or more processors, cause a network manager to: processing an indicator of a performance measurement related to a physical network function PNF of the network service, the indicator being received from the element manager; updating a counter based on the indicator; detecting the occurrence of a predetermined reporting event; and based on the detected occurrence, sending an indicator-based report to the network function virtualization orchestrator NFVO.
Example 136 includes the one or more computer-readable media of example 135, wherein the indicator is received via an Itf-N interface between the network manager and the element manager.
Example 137 includes the one or more computer-readable media of example 135 or 136, wherein the network manager sends the report via an Os-NFVO interface between the network manager and the NFVO.
Example 138 includes the one or more computer-readable media of any of examples 135-137, wherein the instructions, when executed, further cause the network manager to: determining a first value of a system parameter based on the indicator; comparing the first value of the system parameter to a predetermined threshold; and detecting the occurrence of a predetermined reporting event based on a comparison of the first value of the system parameter to a predetermined threshold.
Example 139 includes the one or more computer-readable media of any of examples 135-137, wherein the predetermined reporting event is a periodic reporting event.
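The network-manager flow of examples 135-139 (indicator in over Itf-N, counter update, reporting-event detection, report out over Os-NFVO) can be sketched as follows. The class shape, threshold semantics, and report format are assumptions for illustration:

```python
class NmPnfReporter:
    """Accumulates PNF performance indicators and emits a report to the
    NFVO when a counter threshold is crossed (cf. example 138) or when a
    periodic reporting event fires (cf. example 139)."""

    def __init__(self, threshold, period):
        self.threshold = threshold   # threshold on the accumulated counter
        self.period = period         # report every `period` indicators
        self.counter = 0
        self.seen = 0
        self.sent = []               # stands in for the Os-NFVO interface

    def on_indicator(self, value):
        self.counter += value        # update a counter based on the indicator
        self.seen += 1
        threshold_event = self.counter >= self.threshold
        periodic_event = self.seen % self.period == 0
        if threshold_event or periodic_event:
            self.sent.append({"counter": self.counter})  # report to the NFVO
            self.counter = 0
```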
Example 140 includes one or more computer-readable media having instructions that, when executed, cause a network function virtualization orchestrator NFVO to: receiving a report from a device of an operations support system or business support system OSS/BSS, the report comprising performance measurements of a physical network function PNF belonging to a network service; and performing management actions with respect to the virtual data unit VDU that provides virtualized network functions for the network service.
Example 141 includes the one or more computer-readable media of example 140, wherein the management action is an underuse or overuse management action.
Example 142 includes the one or more computer-readable media of example 140 or 141, wherein the management action is an overuse management action to add a virtual central processing unit to the VDU, increase throughput of a virtual link, add one or more memory blocks to the VDU, or instantiate an additional virtual machine for the VDU.
Example 143 includes the one or more computer-readable media of example 140 or 141, wherein the management action is an underuse management action to shut down the VDU, migrate operation of the VDU to another VDU and shut down the VDU, remove one or more memory blocks from the VDU, remove one or more virtual central processing units vCPU from the VDU, or reduce throughput of a virtual link.
Example 144 includes the one or more computer-readable media of any of examples 140-143, wherein to perform the management action, the NFVO sends an action request to the virtual network function manager VNFM.
Example 145 includes one or more computer-readable media having instructions that, when executed by one or more processors, cause an apparatus of an operations support system OSS or a business support system BSS to: receiving a message from a network manager NM, the message comprising an indicator of a performance measure of a physical network function PNF belonging to a network service; detecting the occurrence of a predetermined reporting event; and based on the detected occurrence, sending a report based on the indicator to the network function virtualization orchestrator NFVO.
Example 146 includes the one or more computer-readable media of example 145, wherein the instructions, when executed, further cause the OSS/BSS to: determining a first value of a system parameter based on the indicator; comparing the first value of the system parameter to a predetermined threshold; and identifying the detected occurrence based on a comparison of the first value of the system parameter to a predetermined threshold.
Example 147 includes the one or more computer-readable media of examples 145 or 146, wherein the detected occurrence is a periodic reporting event.
Example 148 includes the one or more computer-readable media of any of examples 145-147, wherein the report includes a network service identifier corresponding to the network service, a PNF identifier corresponding to the PNF, and a value of the indicator corresponding to the performance measure.
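The report contents listed in example 148 (network service identifier, PNF identifier, indicator value) map naturally onto a small record type. The class and field names below are illustrative assumptions; the patent does not prescribe an encoding:

```python
from dataclasses import dataclass

@dataclass
class PnfReport:
    ns_id: str        # network service identifier
    pnf_id: str       # PNF identifier
    indicator: str    # name of the performance measure
    value: float      # value of the indicator for that measure

# Example instance (all values hypothetical):
r = PnfReport(ns_id="ns-001", pnf_id="pnf-07",
              indicator="packet_loss_ratio", value=0.002)
```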
The description of illustrated implementations herein, including what is described in the abstract, is not intended to be exhaustive or to limit the disclosure to the precise forms disclosed. Although specific implementations and examples are described herein for illustrative purposes, those skilled in the relevant art will recognize that various alternative or equivalent embodiments or implementations can be derived from the foregoing detailed description to achieve the same purposes without departing from the scope of the present disclosure.

Claims (25)

1. A method, comprising:
instantiating a first virtual machine, VM, to run a virtual network function component, VNFC, for executing a sub-function of a network function provided by a virtual network function, VNF, application; and
instantiating a second VM to run a local monitor, the local monitor to: monitor performance of the first VM over a predetermined sampling interval to identify over-use or under-use of resources associated with the VNFC; and determine, based on the monitored performance, a VM management action to at least partially address the over-use or under-use of resources associated with the VNFC, wherein the local monitor sends a request to a global monitor in a virtual network function manager, VNFM, to verify resource availability for the VM management action.
2. The method of claim 1, wherein the VM management action is to instantiate a third VM to assist the first VM to at least partially address the over-use of the resources, or to shut down the first VM to at least partially address the under-use of the resources.
3. The method of claim 1, wherein the local monitor is to:
generate a measurement report based on the monitored performance; and
send the measurement report to a network function virtualization orchestrator after the predetermined sampling interval.
4. The method of claim 1, wherein the local monitor is to:
determine a first value of a system parameter;
determine a second value of the system parameter based on the monitored performance; and
determine the VM management action based on a comparison of the second value and the first value.
5. The method of claim 4, further comprising sampling a state of the system parameter at timed intervals to determine the first value.
6. The method of claim 4, further comprising determining the first value by sandboxed analysis.
7. The method of claim 4, wherein the local monitor determines the first value based on a message received from a global monitor of a virtual network function manager VNFM.
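Claims 4-7 allow three ways to obtain the baseline ("first") value of a system parameter: sampling at timed intervals (claim 5), sandboxed analysis (claim 6), or a message from the VNFM's global monitor (claim 7). A sketch of the timed-sampling variant and the claim-4 comparison follows; the use of a median and the 10% relative tolerance are illustrative assumptions:

```python
import statistics

def baseline_from_samples(samples):
    """Claim 5 variant: sample the state of the system parameter at timed
    intervals and reduce the samples to a single baseline (median here)."""
    return statistics.median(samples)

def management_action_needed(first_value, second_value, tolerance=0.1):
    """Claim 4: decide whether a VM management action is warranted by
    comparing the monitored 'second' value against the baseline 'first'
    value. The relative tolerance is an assumed policy knob."""
    return abs(second_value - first_value) > tolerance * first_value
```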
8. A method of operating a local monitor, comprising:
determining one or more first values of one or more system parameters corresponding to the operation of a virtual data unit VDU providing virtualized network functionality;
receiving one or more second values of the one or more system parameters from the VDU;
comparing a second value of the one or more second values with a first value of the one or more first values, the first value and the second value corresponding to a first system parameter of the one or more system parameters; and
performing an underuse or overuse management action based on a comparison of the second value and the first value, wherein the local monitor sends a request to a global monitor in a virtual network function manager, VNFM, to verify resource availability for a VM management action.
9. The method of claim 8, wherein performing the underuse or overuse management action comprises:
sending a request to a global monitor of the virtual network function manager VNFM.
10. The method of claim 8, further comprising performing an overuse management action to add a virtual central processing unit to the VDU, to increase throughput of a virtual link, to add one or more memory blocks to the VDU, or to instantiate another VDU.
11. The method of claim 8, wherein the one or more system parameters include a queue time parameter, a service time parameter, and a computation cycle parameter, and the method further comprises:
detecting a first set of conditions, the first set of conditions comprising: the first value of the queue time parameter is greater than the second value of the queue time parameter and the first value of the service time parameter is less than the second value of the service time parameter; and the first value of the computation cycle parameter is less than the second value of the computation cycle parameter; and
performing, based on the detection of the first set of conditions, the overuse management action to add a virtual central processing unit to the VDU.
12. The method of claim 8, wherein the one or more system parameters include a queue time parameter, a service time parameter, a computation cycle parameter, and a network bandwidth parameter, and the method further comprises:
detecting a first set of conditions, the first set of conditions comprising: the first value of the queue time parameter is greater than the second value of the queue time parameter and the first value of the service time parameter is less than the second value of the service time parameter; the first value of the computation cycle parameter is not less than the second value of the computation cycle parameter; and the first value of the network bandwidth parameter is less than the second value of the network bandwidth parameter; and
performing, based on the detection of the first set of conditions, the overuse management action to increase throughput of the virtual link.
13. The method of claim 8, wherein the one or more system parameters include a queue time parameter, a service time parameter, a computation cycle parameter, a network bandwidth parameter, and a virtual memory parameter, and the method further comprises:
detecting a first set of conditions, the first set of conditions comprising: the first value of the queue time parameter is greater than the second value of the queue time parameter and the first value of the service time parameter is less than the second value of the service time parameter; the first value of the computation cycle parameter is not less than the second value of the computation cycle parameter; the first value of the network bandwidth parameter is not less than the second value of the network bandwidth parameter; and the first value of the virtual memory parameter is less than the second value of the virtual memory parameter; and
performing, based on the detection of the first set of conditions, the overuse management action to add one or more memory blocks to the VDU.
14. The method of claim 8, wherein the VDU is a first VDU and the one or more system parameters include a queue time parameter, a service time parameter, a computation cycle parameter, a network bandwidth parameter, and a virtual memory parameter, and the method further comprises:
detecting a first set of conditions, the first set of conditions comprising: the first value of the queue time parameter is greater than the second value of the queue time parameter and the first value of the service time parameter is less than the second value of the service time parameter; the first value of the computation cycle parameter is not less than the second value of the computation cycle parameter; the first value of the network bandwidth parameter is not less than the second value of the network bandwidth parameter; the first value of the virtual memory parameter is not less than the second value of the virtual memory parameter; and the first value of the queue time parameter is less than the second value of the queue time parameter and the first value of the service time parameter is greater than the second value of the service time parameter; and
performing, based on the detection of the first set of conditions, the overuse management action to instantiate a second VDU.
15. The method of claim 8, wherein the one or more system parameters include a queue time parameter and a service time parameter, and the method further comprises:
detecting a first set of conditions, the first set of conditions comprising: the first value of the queue time parameter is not greater than the second value of the queue time parameter or the first value of the service time parameter is not less than the second value of the service time parameter; and the first value of the queue time parameter is less than the second value of the queue time parameter and the first value of the service time parameter is greater than the second value of the service time parameter; and
performing, based on the detection of the first set of conditions, the overuse management action to instantiate an additional virtual machine for the VDU.
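Claims 11-14 read as a decision cascade over baseline ("first") versus monitored ("second") parameter values: once a queue/service-time imbalance signals congestion, the first exhausted resource selects the overuse action. A paraphrased sketch follows; the dict shape and key names are assumptions, and the claims define the exact condition sets:

```python
def overuse_action(first, second):
    """Cascade paraphrased from claims 11-14. `first` holds baseline values
    of the system parameters, `second` the values just monitored. Assumed
    keys: queue_time, service_time, compute_cycles, bandwidth, memory."""
    congested = (first["queue_time"] > second["queue_time"]
                 and first["service_time"] < second["service_time"])
    if not congested:
        return None
    if first["compute_cycles"] < second["compute_cycles"]:
        return "add_vcpu"                # claim 11: CPU is the bottleneck
    if first["bandwidth"] < second["bandwidth"]:
        return "increase_vl_throughput"  # claim 12: virtual link is the bottleneck
    if first["memory"] < second["memory"]:
        return "add_memory_blocks"       # claim 13: memory is the bottleneck
    return "instantiate_vdu"             # claim 14: scale out to a second VDU
```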
16. The method of claim 8, wherein the method comprises performing an underuse management action to shut down the VDU, migrate operation of the VDU to another VDU and shut down the VDU, remove one or more memory blocks from the VDU, remove one or more virtual central processing units vCPU from the VDU, or reduce throughput of a virtual link.
17. The method of claim 8, wherein the one or more system parameters include a queue time parameter and a service time parameter, and the method further comprises:
detecting a first set of conditions, the first set of conditions comprising: the first value of the queue time parameter is greater than the second value of the queue time parameter and the first value of the service time parameter is greater than the second value of the service time parameter; and the second value of the queue time parameter is less than a predetermined minimum threshold value of the queue time parameter and the second value of the service time parameter is less than a predetermined minimum threshold value of the service time parameter; and
performing, based on the detection of the first set of conditions, the underuse management action to shut down the VDU.
18. The method of claim 8, wherein the VDU is a first VDU, the one or more system parameters are first system parameters including a first queue time parameter and a first service time parameter, and the method further comprises:
detecting a first set of conditions, the first set of conditions comprising: the first value of the first queue time parameter is greater than the second value of the first queue time parameter and the first value of the first service time parameter is greater than the second value of the first service time parameter; and the second value of the first queue time parameter is not less than a predetermined minimum threshold value of the queue time parameter or the second value of the first service time parameter is not less than a predetermined minimum threshold value of the service time parameter;
determining, based on the detection of the first set of conditions, whether a second VDU is performing the same sub-function as the first VDU; and
if it is determined that a second VDU is performing the same sub-function as the first VDU, obtaining one or more first values of one or more second system parameters of the second VDU, the one or more second system parameters comprising a second queue time parameter and a second service time parameter.
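The underuse side (claims 17-18) can be sketched the same way: if both monitored times have dropped below their baselines and below configured minima, the VDU is idle enough to shut down; otherwise a peer VDU running the same sub-function is examined as a migrate-and-shutdown candidate. Function, key, and action names are illustrative:

```python
def underuse_action(first, second, min_queue, min_service, peer_exists=False):
    """Claims 17-18 sketch: `first`/`second` hold baseline/monitored values
    of the queue time and service time parameters for one VDU."""
    idle = (first["queue_time"] > second["queue_time"]
            and first["service_time"] > second["service_time"])
    if not idle:
        return None
    if second["queue_time"] < min_queue and second["service_time"] < min_service:
        return "shutdown_vdu"        # claim 17: fully below the minima
    if peer_exists:
        return "inspect_peer_vdu"    # claim 18: obtain the peer's parameters
    return None
```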
19. An apparatus having means for performing any of the methods of claims 1-18.
20. An element manager, comprising:
interface circuitry for communicatively coupling the element manager to a network element of a physical network function PNF performing a network service, and to a virtual network function manager VNFM; and
control circuitry coupled with the interface circuitry, the control circuitry to:
receive, via the interface circuitry, an indicator of a performance measure associated with the PNF;
update a counter based on the indicator;
detect the occurrence of a predetermined reporting event by monitoring the counter; and
send, based on the detected occurrence of the predetermined reporting event, a report based on the indicator to the virtual network function manager via the interface circuitry.
21. The element manager of claim 20, wherein the control circuitry is further to:
determine a first value of a system parameter based on the indicator;
compare the first value of the system parameter to a predetermined threshold; and
detect the occurrence of the predetermined reporting event based on the comparison of the first value of the system parameter to the predetermined threshold.
22. The element manager of claim 20 or 21, wherein the predetermined reporting event is a periodic reporting event.
23. The element manager of claim 20 or 21, wherein the control circuitry receives the indicator from the network element.
24. The element manager of claim 20 or 21, wherein the report comprises a network service identifier corresponding to the network service, a PNF identifier corresponding to the PNF, and a value of an indicator corresponding to the performance measure.
25. At least one machine readable medium comprising instructions that, in response to being executed on a computing device, cause the computing device to carry out the method according to any one of claims 1-18.
CN201680084433.7A 2016-04-29 2016-04-29 Network function virtualization Active CN109074280B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/US2016/030268 WO2017189015A1 (en) 2016-04-29 2016-04-29 Network function virtualization

Publications (2)

Publication Number Publication Date
CN109074280A CN109074280A (en) 2018-12-21
CN109074280B true CN109074280B (en) 2023-11-17

Family

ID=56027190

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201680084433.7A Active CN109074280B (en) 2016-04-29 2016-04-29 Network function virtualization

Country Status (3)

Country Link
CN (1) CN109074280B (en)
TW (1) TWI722145B (en)
WO (1) WO2017189015A1 (en)


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1949726A (en) * 2005-10-11 2007-04-18 国际商业机器公司 Method and system for providing performance counters for virtualized network interfaces of communications networks
CN102156665A (en) * 2011-04-13 2011-08-17 杭州电子科技大学 Differential serving method for virtual system competition resources
CN103368768A (en) * 2012-04-04 2013-10-23 思科技术公司 Automatically scaled network overlay with heuristic monitoring in hybrid cloud environment
CN104169881A (en) * 2012-01-18 2014-11-26 Dh2I公司 Systems and methods for server cluster application virtualization
WO2015126430A1 (en) * 2014-02-24 2015-08-27 Hewlett-Packard Development Company, L.P. Virtual network function management with deactivated virtual machines
CN105262664A (en) * 2014-06-09 2016-01-20 诺基亚通信公司 Controlling of virtualized network functions for usage in communication network
WO2016029974A1 (en) * 2014-08-29 2016-03-03 Nec Europe Ltd. Method for operating a virtual network infrastructure

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8566447B2 (en) * 2006-04-10 2013-10-22 Bank Of America Corporation Virtual service switch
JP5428075B2 (en) * 2009-04-17 2014-02-26 株式会社日立製作所 Performance monitoring system, bottleneck determination method and management computer
FR2948247B1 (en) * 2009-07-16 2011-12-09 Univ Paris Curie METHOD AND SYSTEM FOR HIGH PERFORMANCE AND AUTOMATED MANAGEMENT OF VIRTUAL NETWORKS.
US9063769B2 (en) * 2010-02-04 2015-06-23 Telefonaktiebolaget Lm Ericsson (Publ) Network performance monitor for virtual machines
WO2015099035A1 (en) * 2013-12-27 2015-07-02 株式会社Nttドコモ Management system, virtual communication-function management node, and management method
US9806975B2 (en) * 2014-06-12 2017-10-31 Futurewei Technologies, Inc. Methods and systems for managing capacity in a virtualized network
US9742690B2 (en) * 2014-08-20 2017-08-22 At&T Intellectual Property I, L.P. Load adaptation architecture framework for orchestrating and managing services in a cloud computing system


Also Published As

Publication number Publication date
WO2017189015A1 (en) 2017-11-02
CN109074280A (en) 2018-12-21
TW201810991A (en) 2018-03-16
TWI722145B (en) 2021-03-21


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20200327

Address after: California, USA

Applicant after: Apple Inc.

Address before: California, USA

Applicant before: INTEL Corp.

Effective date of registration: 20200327

Address after: California, USA

Applicant after: INTEL Corp.

Address before: California, USA

Applicant before: INTEL IP Corp.

GR01 Patent grant