WO2016164736A1 - Network service infrastructure management system and method of operation - Google Patents

Network service infrastructure management system and method of operation

Info

Publication number
WO2016164736A1
Authority
WO
WIPO (PCT)
Prior art keywords
workload
operating system
executed
network node
task
Application number
PCT/US2016/026660
Other languages
French (fr)
Inventor
Claudia M. COMBELLAS
Dana JOHNSTON
Original Assignee
Level 3 Communications, Llc
Application filed by Level 3 Communications, Llc filed Critical Level 3 Communications, Llc
Priority to CA2982132A priority Critical patent/CA2982132A1/en
Priority to EP16777364.7A priority patent/EP3281112A4/en
Publication of WO2016164736A1 publication Critical patent/WO2016164736A1/en
Priority to HK18108919.1A priority patent/HK1249601A1/en

Classifications

    • G06F 9/505: Allocation of resources, e.g. of the central processing unit [CPU], to service a request, the resource being a machine (e.g. CPUs, servers, terminals), considering the load
    • G06F 9/45558: Hypervisor-specific management and integration aspects
    • H04L 67/1008: Server selection for load balancing based on parameters of servers, e.g. available memory or workload
    • G06F 2009/4557: Distribution of virtual machine instances; Migration and load balancing
    • G06F 2009/45595: Network integration; Enabling network access in virtual machine instances

Definitions

  • the memory section 408 may be volatile media, nonvolatile media, removable media, non-removable media, and/or other hardware media or hardware mediums that can be accessed by a general purpose or special purpose computing device.
  • the memory section 408 may include non-transitory computer storage media and communication media.
  • Non-transitory computer storage media further may include volatile, nonvolatile, removable, and/or non-removable media implemented in a method or technology for the storage (and retrieval) of information, such as computer/machine-readable/executable instructions, data and data structures, engines, program modules, and/or other data.
  • Communication media may, for example, embody computer/machine-readable/executable instructions, data structures, program modules, algorithms, and/or other data.
  • the communication media may also include a non-transitory information delivery technology.
  • the communication media may include wired and/or wireless connections and technologies and be used to transmit and/or receive wired and/or wireless communications.
  • the I/O section 404 is connected to one or more optional user-interface devices (e.g., a user interface such as a keyboard 416 or the user interface 512), an optional disc storage unit 412, an optional display 418, and an optional disc drive unit 420.
  • the disc drive unit 420 is a DVD/CD-ROM drive unit capable of reading the DVD/CD-ROM medium 410, which typically contains programs and data 422.
  • Computer program products containing mechanisms to effectuate the systems and methods in accordance with the presently described technology may reside in the memory section 408, on a disc storage unit 412, on the DVD/CD-ROM medium 410 of the computer system 400, or on external storage devices made available via a cloud computing architecture with such computer program products, including one or more database management products, web server products, application server products, and/or other additional software components.
  • a disc drive unit 420 may be replaced or supplemented by a floppy drive unit, a tape drive unit, or other storage medium drive unit.
  • An optional network adapter 424 is capable of connecting the computer system 400 to a network via the network link 414, through which the computer system can receive instructions and data.
  • examples of computing systems include personal computers, Intel or PowerPC-based computing systems, AMD-based computing systems, ARM-based computing systems, and other systems running a Windows-based, a UNIX-based, a mobile operating system, or other operating system. It should be understood that computing systems may also embody devices such as Personal Digital Assistants (PDAs), mobile phones, tablets or slates, multimedia consoles, gaming consoles, set top boxes, etc.
  • the computer system 400 When used in a LAN-networking environment, the computer system 400 is connected (by wired connection and/or wirelessly) to a local network through the network interface or adapter 424, which is one type of communications device.
  • the computer system 400 When used in a WAN-networking environment, the computer system 400 typically includes a modem, a network adapter, or any other type of communications device for establishing communications over the wide area network.
  • program modules depicted relative to the computer system 400 or portions thereof may be stored in a remote memory storage device. It is appreciated that the network connections shown are examples of communications devices for establishing a communications link between the computers; other means of establishing such a link may be used.
  • source code executed by the control circuit 118, a plurality of internal and external databases optionally are stored in memory of the control circuit 118 or other storage systems, such as the disc storage unit 412 or the DVD/CD-ROM medium 410, and/or other external storage devices made available and accessible via a network architecture.
  • the source code executed by the control circuit 118 may be embodied by instructions stored on such storage systems and executed by the processing system 402.
  • local computing systems, remote data sources and/or services, and other associated logic represent firmware, hardware, and/or software configured to control operations of the system 100 and/or other
  • FIG. 4 is but one possible example of a computer system that may employ or be configured in accordance with aspects of the present disclosure.
  • the methods disclosed may be implemented as sets of instructions or software readable by a device. Further, it is understood that the specific order or hierarchy of steps in the methods disclosed are instances of example approaches. Based upon design preferences, it is understood that the specific order or hierarchy of steps in the method can be rearranged while remaining within the disclosed subject matter.
  • the accompanying method claims present elements of the various steps in a sample order, and are not necessarily meant to be limited to the specific order or hierarchy presented.
  • the described disclosure may be provided as a computer program product, or software, that may include a non-transitory machine-readable medium having stored thereon executable instructions, which may be used to program a computer system (or other electronic devices) to perform a process according to the present disclosure.
  • a non-transitory machine-readable medium includes any mechanism for storing information in a form readable by a machine (e.g., a computer).
  • the non-transitory machine-readable medium may include, but is not limited to, magnetic storage medium (e.g., floppy diskette); optical storage medium (e.g., CD-ROM); magneto-optical storage medium; read only memory (ROM); random access memory (RAM); erasable programmable memory (e.g., EPROM and EEPROM); flash memory; or other types of medium suitable for storing electronic executable instructions.

Abstract

A network service infrastructure management system includes a computing system that communicates with a network service computing device to receive a request to generate a network service comprising one or more network node functions. Based on the request the computing system communicates with an operating system of the network service computing device to instantiate one or more tasks corresponding to the one or more network node functions in which each task is instantiated at a level of workload capability specified for its respective network node function. Once instantiated, the computing system may then launch each network node function on its respective task.

Description

NETWORK SERVICE INFRASTRUCTURE MANAGEMENT SYSTEM AND METHOD OF OPERATION
RELATED APPLICATIONS
[0001] This Patent Cooperation Treaty (PCT) patent application claims priority to U.S. Patent Application Serial No. 62/145,110, filed April 9, 2015, and entitled "Network Service Orchestration System." The contents of 62/145,110 are incorporated herein by reference in their entirety.
TECHNICAL FIELD
[0002] Aspects of the present disclosure generally relate to communication networks, and more particularly, to a network service infrastructure management system and method of operation.
BACKGROUND
[0003] Network Functions Virtualization (NFV) refers to a technology in which virtualization is used to design a network structure with industry standard servers, switches, and storage that are provided as devices at a user end. That is, the NFV technology implements network functions as software that can be run on existing industry standard servers and hardware. NFV technology may also be supported by a cloud computing technology and, in some cases, may utilize various industry-standard high volume server technologies.
[0004] Using NFV, networks may be implemented that scale easily due to the extensibility provided by virtualization. Nevertheless, conventional NFV architectures may not scale down easily for relatively small, lightweight services that are numerous, but do not require large amounts of workload capability from the resources they are executed on. It is with these observations in mind, among others, that various aspects of the present disclosure were conceived and developed.
SUMMARY
[0005] A network service infrastructure management system includes a computing system that communicates with a network service computing device to receive a request to generate a network service comprising one or more network node functions. Based on the request the computing system communicates with an operating system of the network service computing device to instantiate one or more tasks corresponding to the one or more network node functions in which each task is instantiated at a level of workload capability specified for its respective network node function. Once instantiated, the computing system may then launch each network node function on its respective task.
BRIEF DESCRIPTION OF THE DRAWINGS
[0006] The foregoing and other objects, features, and advantages of the present disclosure set forth herein should be apparent from the following description of particular embodiments of those inventive concepts, as illustrated in the accompanying drawings. Also, in the drawings, like reference characters refer to the same parts throughout the different views. The drawings depict only typical embodiments of the present disclosure and, therefore, are not to be considered limiting in scope.
[0007] FIG. 1 illustrates an example network service infrastructure management system according to one embodiment of the present disclosure.
[0008] FIG. 2 illustrates another example network service infrastructure management system that may be used to manage the operation of network node functions (NNFs) in a virtualized computing environment according to the teachings of the present disclosure.
[0009] FIG. 3 illustrates an example process that may be performed by the network service management application according to one embodiment of the present disclosure.
[0010] FIG. 4 illustrates an example of a computing system that may implement various systems and methods discussed herein.
DETAILED DESCRIPTION
[0011] Aspects of the present disclosure involve systems and methods for implementing an infrastructure for a network function virtualization (NFV) environment in which individual network node functions (NNFs) of the NFV may be instantiated as separate tasks in a computing environment. Whereas conventional NNFs have typically been implemented as independently operating virtual machines that required their own distinct operating environment, this type of structure has not been readily conducive to large quantities of lightweight (e.g., reduced throughput) network services that each typically uses a small fraction of the virtual machine's capabilities, thus wasting a computing system's resources on what could otherwise be provided for other network services on the computing system. Embodiments of the present disclosure provide a solution to this problem by instantiating NNFs on separate tasks of a computing environment in a manner that provides control over a workload capability for each task so that relatively large quantities of NNFs may be simultaneously provided on the computing system in an efficient, organized manner.
[0012] FIG. 1 illustrates an example network service infrastructure management system 100 according to the teachings of the present disclosure. The network service infrastructure management system 100 includes a network service infrastructure management application 102 that is executed on a network service infrastructure management computing device 104 to control a network service computing device 106 to execute each of one or more network node functions (NNFs) 108 of a network service 110 on a task 112 in which each task 112 is separately controllable by an operating system 114 of the network service computing device 106. Although only one network service computing device 106 is shown and described herein, it should be understood that the network service infrastructure management system 100 may control multiple network service computing devices 106 to provide an infrastructure for multiple network services 110 on multiple computing devices 106.

[0013] The network service 110 generally refers to one or more applications (e.g., NNFs) running at a network application layer that collectively provide communication services for a user. As shown, the network service 110 provides network connectivity of a customer premises equipment (CPE) 116, such as a private branch exchange (PBX), to a communication network 118, such as the Internet. Nevertheless, other embodiments contemplate that the network service 110 may include any type of communication service provided by network node functions configured according to a network functions virtualization (NFV) network architecture.
[0014] The network service 110 includes one or more network node functions (NNFs) 108. The NNFs 108 as shown include a session border controller (SBC) 108a, a firewall 108b, and a switch (e.g., router) 108c. Examples of other NNFs that may be provided by the network service infrastructure management system 100 include load balancers, intrusion detection devices, and wide area network (WAN) accelerators to name a few.
[0015] In general, the application 102 independently controls the workload capability of each task according to a specified level. That is, the application 102 may instantiate a first NNF 108 on a first task 112 with a first workload capacity level, and instantiate a second NNF 108 on a second task 112 with a second workload capacity level that is different from the workload capacity level of the first task 112. The workload capability of each task 112 generally refers to a level of workload that may be performed by that task 112. Examples of workload capabilities of each task 112 that may be managed by the network service infrastructure management application 102 include a processing capability (e.g., the number of processors, a processing speed of each processor, etc.) of the task, a throughput capability (e.g., the rate at which data may be conveyed through the task), and/or a memory capacity level (e.g., the amount of memory delegated to the task 112).
[0016] The network service infrastructure management application 102 communicates with the operating system 114 to instantiate tasks 112, launch an NNF 108 on each task 112, and manage or otherwise modify a level of workload capability of each task 112 according to the needs of each NNF 108. For example, the network service infrastructure management application 102 may determine that the firewall 108b needs to have 5.0 percent of the available processing capability of the computing device 106, and have a throughput of approximately 500 Kilobits-per-second. In such a case, the network service infrastructure management application 102 communicates with the operating system 114 to instantiate a task 112 having sufficient resources to meet this requested workload capability.
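As a rough illustration of the kind of per-task workload specification the application 102 might hand to the operating system 114, the following sketch models the capability dimensions named above in Python; the WorkloadSpec class, its field names, and the memory figure are assumptions for the sketch, not part of the disclosure.

```python
from dataclasses import dataclass

@dataclass
class WorkloadSpec:
    """Per-NNF workload capability specification (illustrative fields only)."""
    cpu_share_pct: float      # share of the host's available processing capability
    throughput_kbps: float    # data rate the task should be able to sustain
    memory_mb: int            # memory delegated to the task

# The firewall example from the text: 5.0 percent of the available processing
# capability and roughly 500 kilobits per second of throughput; the memory
# figure is an assumed placeholder.
firewall_spec = WorkloadSpec(cpu_share_pct=5.0, throughput_kbps=500.0, memory_mb=128)
```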
[0017] In one embodiment, a scheduler agent 116 may be provided that is executed on the operating system 114 to communicate between the operating system 114 and the application 102 for controlling the operation of the tasks 112. For example, the scheduler agent 116 may communicate with a monitoring program running on the operating system, such as a 'task manager' program, to obtain measured values of workload capability for each task 112 as well as overall used workload capability for the operating system, and transmit this information to the application 102. As another example, the scheduler agent 116 may translate instructions received from the application 102 into a form suitable for communication with the operating system 114, thus enabling the application 102 to control the operation of tasks 112 on differing operating systems. The scheduler agent 116 may share some, none, or all processing responsibilities with the application 102 for providing the features of the present disclosure described herein. Additionally, the scheduler agent 116 may be omitted if not needed or desired.
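A minimal sketch of the measurement side of such a scheduler agent, assuming the Python psutil package stands in for the 'task manager'-style monitoring program; the function name and the reporting format are illustrative.

```python
import psutil  # assumed monitoring library standing in for a 'task manager' program

def measure_workload(task_pids):
    """Collect per-task CPU usage and overall host usage, roughly what a
    scheduler agent might report back to the management application."""
    per_task = {}
    for pid in task_pids:
        try:
            per_task[pid] = psutil.Process(pid).cpu_percent(interval=0.5)
        except psutil.NoSuchProcess:
            per_task[pid] = None  # the task has exited since it was recorded
    overall = psutil.cpu_percent(interval=0.5)
    return per_task, overall
```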
[0018] The tasks 112 may be embodied in any suitable form. In one particular embodiment, the tasks 112 comprise threads such as those provided by an operating system (e.g., a UNIX operating system, a Linux operating system, etc.). In such a case, the workload capability of the tasks may be modified using a 'nice', 'renice', and/or 'ionice' executable program issued to each task 112. The 'nice' program may be issued to newly instantiated NNFs 108, while the 'renice' program may be issued to currently running programs to control a level of priority for each task 112. The 'ionice' program may be issued to adjust a throughput capacity of each task 112. The 'nice'/'renice' and 'ionice' programs are used to invoke a utility or shell script with a particular priority, thus giving the resulting task 112 more or less processing time and throughput capacity, respectively. In most operating systems that support the 'nice' program, the 'nice' program may be invoked with an integer argument that ranges from '-20' (e.g., the highest priority) to '19' (e.g., the lowest priority). Thus, when an NNF 108 is instantiated within a new task 112, the application 102 may issue the 'nice' program with an integer argument specifying a workload capability to be associated with that NNF 108.
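On a UNIX-like system, the priority handling described above could look roughly like the following sketch; psutil is assumed for the 'renice' and 'ionice' equivalents rather than the patent's exact mechanism, and the helper names are illustrative.

```python
import subprocess
import psutil  # assumed helper for renice/ionice-style adjustments

def launch_nnf(command, niceness=0):
    """Start an NNF under a chosen 'nice' value (-20 highest priority,
    19 lowest), mirroring the invocation described above."""
    return subprocess.Popen(["nice", "-n", str(niceness)] + list(command))

def retune_task(pid, niceness=None, io_class=None):
    """Adjust a running task, roughly what 'renice' and 'ionice' provide.
    Raising priority (more negative values) normally requires privileges,
    and the I/O scheduling classes are Linux-specific."""
    proc = psutil.Process(pid)
    if niceness is not None:
        proc.nice(niceness)      # equivalent of 'renice'
    if io_class is not None:
        proc.ionice(io_class)    # e.g. psutil.IOPRIO_CLASS_BE or IOPRIO_CLASS_IDLE
```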
[0019] In one embodiment, the application 102 may calculate the integer argument according to a combined workload capability of other tasks 112 currently running on the operating system 114 and the workload capability of the operating system 114 itself. The workload capability of other tasks 112 may include specified levels of workload capability for each existing task 112 stored in one or more task workload inventory records 124 and measured values of workload capability used by each task 112. The used workload capability may be obtained in any suitable manner. In one embodiment, the application 102 may communicate with a monitoring program running on the operating system, such as a 'task manager' program, to obtain measured values of workload capability for each task 112 as well as overall used workload capability for the operating system.
[0020] For example, when a new NNF 108 is to be instantiated with a workload capacity of 0.5 million instructions per second (MIPS), the application 102 may obtain the total available workload capability of the operating system 114 from the computing system inventory records 128, obtain the specified workload capacities of the existing NNFs 108 from the task workload inventory records 124, obtain the total amount of workload capability being used by the currently running NNFs 108, and calculate an integer argument based upon the obtained values. In one case, if the combined workload capability of the existing tasks 112 consumes approximately 50 percent of overall workload capability usage, the application 102 may instantiate the new task 112 with an integer argument of '0' to ensure that the new task 112 can function at 0.5 MIPS. However, if the combined workload capability of the existing tasks 112 consumes approximately 80 percent of overall workload capability usage, the application 102 may instantiate the new task 112 with a lower integer argument (e.g., '-1' to '-19') to ensure that the new task 112 can still function at 0.5 MIPS. Additionally, the application 102 may adjust the integer arguments of the other existing tasks 112 to ensure their specified workload capability usage is properly met.
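One way the integer argument could be derived from the figures in the example above; the total capacity number and the exact mapping from utilisation to niceness are assumptions made for this sketch, not a claimed algorithm.

```python
def nice_for_new_task(total_mips, used_pct, required_mips):
    """Choose a 'nice' argument for a new task from overall utilisation,
    following the thresholds in the example above (policy is illustrative)."""
    headroom_mips = total_mips * (1.0 - used_pct / 100.0)
    if headroom_mips < required_mips:
        return None                      # the host cannot satisfy the request
    if used_pct <= 50.0:
        return 0                         # ample headroom: default priority
    # Busier host: raise priority in proportion to how much of the remaining
    # headroom the new task would consume, clamped to the '-1'..'-19' range.
    pressure = required_mips / headroom_mips
    return max(-19, min(-1, -round(19 * pressure)))

# The 0.5 MIPS example; the total capacity and utilisation figures are assumed.
print(nice_for_new_task(total_mips=10.0, used_pct=50.0, required_mips=0.5))  # 0
print(nice_for_new_task(total_mips=10.0, used_pct=80.0, required_mips=0.5))  # -5
```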
[0021] The network service management system 100 also includes a data source 122 that stores task workload inventory records 124, operating system translation records 126, and computing system inventory records 128. The task workload inventory records 124 store information about existing tasks 112 being executed on the network service computing device 106. For example, when a new task 112 is instantiated, its specified workload capability may be stored in the task workload inventory records 124 so that when an ensuing task 112 is instantiated, the application 102 may access this information to ensure that the specified workload capability for all tasks 112 executed on the operating system 114 is maintained. The operating system translation records 126 store information about instructions or other forms of communication that may be particular to each type of operating system 114. The computing system inventory records 128 store information about the operating system 114 in use, such as its rated performance characteristics (e.g., quantity and speed of processors, amount of memory, I/O throughput level, etc.) that may be used by the application 102 to ensure that the specified workload capability may be attained for newly instantiated tasks 112.
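The three record types held in the data source 122 might be modelled along the lines below; the field names are assumptions chosen to match the rated characteristics mentioned above, not a schema from the disclosure.

```python
from dataclasses import dataclass

@dataclass
class TaskWorkloadInventoryRecord:       # task workload inventory records 124
    task_id: int
    nnf_type: str
    specified_cpu_pct: float
    specified_throughput_kbps: float

@dataclass
class OperatingSystemTranslationRecord:  # operating system translation records 126
    os_type: str                         # e.g. 'linux', 'windows'
    set_priority_instruction: str        # OS-specific form of a priority change

@dataclass
class ComputingSystemInventoryRecord:    # computing system inventory records 128
    host_id: str
    processor_count: int
    processor_speed_mips: float
    memory_mb: int
    io_throughput_mbps: float
```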
[0022] The network service management computing device 104 includes at least one processor 132 to execute the network service management application 102. The processor 132 includes one or more processors or other processing devices. A processor is hardware. Examples of such a computing device may include one or more servers, personal computers, mobile computers and/or other mobile devices, and other computing devices. The computing device 104 may communicate with the network service computing device 106 in any suitable manner, such as via wireless, wired, and/or optical communications.
[0023] The network service infrastructure management computing device 104 also includes a memory (e.g., computer readable media) 130 on which the application 102 and data source 122 are stored. The computer readable media 130 may include volatile media, nonvolatile media, removable media, non-removable media, and/or another available media that can be accessed by the computing device 104. By way of example and not limitation, computer readable media 130 comprises computer storage media and communication media. Computer storage media includes non-transient storage memory/media, volatile media, nonvolatile media, removable media, and/or non-removable media implemented in a method or technology for storage of information, such as computer/machine readable/executable instructions, data structures, program modules, and/or other data. Communication media may embody computer readable instructions, data structures, program modules, or other data and include an information delivery media or system.
[0024] According to one aspect, the computing device 104 may also include a user interface (UI) 134 that may be displayed on a display, such as a computer monitor, for displaying data. Entry of user information may be provided by an input device, such as a keyboard or a pointing device (e.g., a mouse, trackball, pen, or touch screen) to enter data into or interact with the user interface 134.
[0025] FIG. 2 illustrates another example network service infrastructure management system 200 that may be used to manage the operation of NNFs 208 in a virtualized computing environment according to the teachings of the present disclosure. The network service infrastructure management system 200 includes a network service infrastructure management application 202, a scheduler agent 216, and a network service infrastructure management computing device 204 that are similar in design and construction to the network service infrastructure management application 102, scheduler agent 116, and network service infrastructure management computing device 104 of FIG. 1. The network service infrastructure management system 200 is different, however, in that a network service 210 is provided by a network service computing device 206 that functions in a virtualized computing environment.
[0026] The network service computing device 206 includes a host operating system 208 that executes a hypervisor 220 that manages one or more virtual machines (VMs) 214. Each VM 214 includes a guest operating system 218 that independently manages multiple scheduled tasks 212 using a scheduler agent 216. According to embodiments of the present disclosure, each network node function (NNF) 208 of the network service 210 is executed by a task 212 such that multiple NNFs 208 may be executed on one or a few VMs 214 in which a workload capability of each task 212 may be independently managed by the scheduler agent 216.
[0027] When the network service 210 is used in a virtualized computing environment, the application 102 may manage the instantiation of new VMs 214 and/or deletion of existing VMs 214 to ensure that the specified workload capabilities of the NNFs 108 are maintained. For example, the application 102 may instantiate a new VM 214' and instantiate a new task 112' on the new VM 214' when the first VM 214 cannot meet the specified workload capability of the new NNF 108. Furthering the example from above, if the combined workload capability of the existing tasks 112 consumes approximately 90 percent of overall workload capability usage, the application 102 recognizes that the existing VM 214 cannot provide the specified workload capability for the new task 112, and thus may instantiate a new VM 214' so that the specified workload capability of the new task may be maintained at its specified level.
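A sketch of that placement decision, assuming a simple utilisation threshold; the data structure is illustrative and the 90 percent cut-off is taken from the example above rather than from any claimed algorithm.

```python
def place_task(vms, required_cpu_pct, saturation_pct=90.0):
    """Return (vm_name, needs_new_vm): reuse an existing VM with headroom,
    or signal that a new VM should be instantiated (the 90 percent case)."""
    for vm in vms:
        if vm["used_cpu_pct"] + required_cpu_pct <= saturation_pct:
            return vm["name"], False
    return None, True  # caller should ask the hypervisor for a new VM

# Example mirroring the text: the only existing VM is already ~90% utilised.
existing = [{"name": "vm-214", "used_cpu_pct": 90.0}]
print(place_task(existing, required_cpu_pct=5.0))  # (None, True)
```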
[0028] The application 202 may communicate with the hypervisor 220 to adjust an amount of workload capability provided to the VM 214 such that the workload capabilities provided to all tasks 212 executed on the VM 214 are maintained. For example, when a new task 112 is to be launched on the VM 214 that already has 20 tasks 112 currently operating at approximately 5 percent processing capability, the network service infrastructure management application 102 may communicate with the hypervisor 220 to increase a quantity of processors delegated to the VM 214 such that the addition of the new task 112 does not cause the existing tasks 112 to fall below their specified workload capability.
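The vCPU adjustment could be sketched as follows under the assumption of a hypothetical hypervisor client exposing get_vcpus/set_vcpus calls; the disclosure does not name a specific hypervisor API, so every call on that object is an assumption.

```python
import math

def ensure_vcpu_headroom(hypervisor, vm_name, task_cpu_pcts, new_task_cpu_pct):
    """Grow the VM's vCPU allocation before adding a task would push existing
    tasks below their specified workload capability. 'hypervisor' is a
    hypothetical client object; substitute the management API actually used."""
    demand_pct = sum(task_cpu_pcts) + new_task_cpu_pct
    needed_vcpus = math.ceil(demand_pct / 100.0)     # one vCPU per 100% of demand
    if needed_vcpus > hypervisor.get_vcpus(vm_name):
        hypervisor.set_vcpus(vm_name, needed_vcpus)

# The example above: 20 tasks at ~5% each plus one more 5% task needs 2 vCPUs.
# ensure_vcpu_headroom(hypervisor, "vm-214", [5.0] * 20, 5.0)
```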
[0029] In one embodiment, an NNF 108 may include executable code that is able to manipulate or change the workload capability of its associated task 112. For example, the firewall NNF 108b, which may not need a large amount of workload capability during off-peak hours, may execute code to communicate with the guest operating system 114 to reduce its allocated processing capability by issuing a 'nice' command with a high value (e.g., 20) upon the task 112, and when a barrage of incoming packets occurs during a peak usage time of day, issue another 'nice' command with a lower value (e.g., -20) to obtain greater processing capability.

[0030] Although FIGS. 1 and 2 illustrate example network service infrastructure management applications 102/202 that may be used to orchestrate NNFs 108/208 that function in an NFV architecture, other example network service infrastructure management applications 102/202 may include additional features, fewer features, or different features than what is described herein above. For example, the network service infrastructure management application 102/202 may control the hypervisor 220 to instantiate additional VMs 214 for executing more NNFs 208 using more tasks 112/212, and/or remove existing VMs 214 when they are not needed. Additionally, the network service infrastructure management application 102/202 may specify a certain level of workload capability when the NNFs 208 are launched and may modify the level of workload capability of the NNFs 208 as they are executed. As yet another example, the network service infrastructure management application 102 may control an operating system that is not a VM (e.g., the host operating system 114) to instantiate tasks 112/212 for executing the NNFs 208 of the network service 110/210.
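The self-tuning firewall behaviour described in paragraph [0029] above might look roughly like this sketch; psutil and the peak-hour window are assumptions, and the off-peak value is capped at 19 because that is the lowest priority the standard nice range supports.

```python
import datetime
import psutil  # assumed; os.nice() could also be used for relative changes

PEAK_HOURS = range(8, 18)  # assumed definition of the peak usage window

def retune_own_priority():
    """Let an NNF adjust its own task priority, as in the firewall example
    above: yield capability off-peak, reclaim it under peak traffic.
    Raising priority typically requires CAP_SYS_NICE or root."""
    me = psutil.Process()
    if datetime.datetime.now().hour in PEAK_HOURS:
        me.nice(-20)   # highest priority during peak usage
    else:
        me.nice(19)    # lowest priority the nice range allows off-peak
```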
[0031] FIG. 3 illustrates an example process 300 that may be performed by the network service management application according to one embodiment of the present disclosure.
[0032] In step 302, the application launches a scheduler agent on the target network service computing device. In one embodiment, the application may identify a type of the operating system and launch one of multiple available scheduler agents based upon the operating system's type. Because differing types of operating systems (e.g., UNIX, Linux, Windows, OS/2, Mac OS, RISC OS, etc.) may have unique characteristics and modes of communication, the application may select a scheduler agent that is suitable for communicating with the identified type of operating system. Thereafter, in step 304, the application stores information (e.g., operating system type, performance characteristics, etc.) as one or more computing system inventory records 128 in the data source 122.
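The OS-type driven selection in step 302 might be sketched as below; the agent names, the lookup table, and the use of Python's platform module are assumptions, and in practice the type would be identified on the target network service computing device rather than locally.

```python
import platform

# Hypothetical mapping from operating system type to a packaged scheduler agent.
SCHEDULER_AGENTS = {
    "Linux":   "scheduler_agent_posix",
    "Darwin":  "scheduler_agent_posix",
    "Windows": "scheduler_agent_windows",
}

def select_scheduler_agent(os_type=None):
    """Step 302: pick the scheduler agent suited to the identified OS type."""
    os_type = os_type or platform.system()
    try:
        return os_type, SCHEDULER_AGENTS[os_type]
    except KeyError:
        raise RuntimeError(f"no scheduler agent packaged for {os_type!r}")
```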
[0033] In step 306, the application receives a request to generate a network service having one or more NNFs. In one embodiment, the request may include a specified workload capability to be assigned to each NNF. In other embodiments, the specified workload capability may be received in other forms, such as via manual entry by a user or as a preset value established before the request is received. In some cases, the application may determine the specified workload capability according to the type of NNF being generated. For example, one preset workload capability value may be stored for firewall NNFs, while another preset workload capability value may be stored for switch NNFs. Thus, when a new NNF is to be generated, the application 102 may identify the type of the NNF and select a preset workload capability value based upon that type. The preset workload capability values may be stored as task workload inventory records 124 in the data source.
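The selection of a specified workload capability in step 306 might be sketched as follows; the preset values and the dictionary standing in for the task workload inventory records 124 are illustrative assumptions only.

```python
# Sketch: resolve the specified workload capability for a new NNF, preferring a value
# carried in the request and falling back to a preset keyed by NNF type.

PRESET_WORKLOAD_BY_NNF_TYPE = {
    "firewall": {"cpu_share": 0.20, "memory_mb": 2048},
    "switch": {"cpu_share": 0.10, "memory_mb": 1024},
}


def specified_workload(request: dict) -> dict:
    if "workload_capability" in request:      # capability included in the request
        return request["workload_capability"]
    nnf_type = request["nnf_type"]
    return PRESET_WORKLOAD_BY_NNF_TYPE.get(
        nnf_type, {"cpu_share": 0.05, "memory_mb": 512}
    )


if __name__ == "__main__":
    print(specified_workload({"nnf_type": "firewall"}))
    print(specified_workload({"nnf_type": "switch",
                              "workload_capability": {"cpu_share": 0.25,
                                                      "memory_mb": 4096}}))
```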
[0034] In step 308, the application 102 determines the current processing
capabilities of the network service computing system. For example, the application may communicate with a monitoring application (e.g., a task manager) executed on the network service computing system to identify its current usage level and/or access the computing device inventory records to identify its rated performance level. Thereafter, in step 310, the application determines whether the NNF can be generated on the network service computing device at its specified workload capacity. If so, processing continues at step 312; otherwise, processing continues at step 314.
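Steps 308 and 310 could be approximated as shown below, assuming the third-party psutil package is available as a stand-in for the monitoring application and that a rated-capacity figure comes from the inventory records; the thresholds are illustrative.

```python
# Sketch: estimate current headroom from a monitoring source and decide whether the
# NNF fits at its specified workload capacity.

import psutil


def current_headroom(rated_cpu_share: float = 1.0) -> float:
    """Fraction of the device's rated CPU capacity that is currently free."""
    used = psutil.cpu_percent(interval=0.5) / 100.0
    return max(0.0, rated_cpu_share - used)


def can_generate_nnf(required_cpu_share: float, required_memory_mb: int) -> bool:
    enough_cpu = current_headroom() >= required_cpu_share
    enough_memory = (psutil.virtual_memory().available
                     >= required_memory_mb * 1024 * 1024)
    return enough_cpu and enough_memory


if __name__ == "__main__":
    print(can_generate_nnf(required_cpu_share=0.05, required_memory_mb=512))
```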
[0035] At step 314, the application adjusts the network service computing device to increase its workload capability level so that the new NNF may be generated. For example, if the network service computing device is implemented in a virtualized computing environment, the application may, via communication with its hypervisor, add one or more processors to, and/or allocate additional memory to, the VM that is to execute the NNF. As another example, the application 102 may instantiate another VM within the virtualized computing environment that may be used to execute the new NNF. Once the network service computing device has been adjusted, processing continues at step 308 to determine whether the adjusted network service computing device has sufficient workload capability to execute the new NNF at its specified workload capacity.
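The adjust-and-recheck loop between steps 314 and 308 might be sketched as follows. The VirtualizedHost object and its add_vcpu() method are hypothetical placeholders for hypervisor calls, and the retry cap is an added safety measure not described in the disclosure.

```python
# Sketch: grow the VM and re-check until the new NNF fits at its specified capacity.

class VirtualizedHost:
    def __init__(self, vcpus: int = 2, free_share: float = 0.05):
        self.vcpus = vcpus
        self.free_share = free_share

    def fits(self, required_share: float) -> bool:
        return self.free_share >= required_share

    def add_vcpu(self) -> None:
        # Placeholder for "add one or more processors ... to the VM".
        self.vcpus += 1
        self.free_share += 1.0 / self.vcpus


def make_room(host: VirtualizedHost, required_share: float,
              max_attempts: int = 4) -> bool:
    for _ in range(max_attempts):
        if host.fits(required_share):   # step 310: sufficient workload capability
            return True
        host.add_vcpu()                 # step 314: adjust the computing device
    return host.fits(required_share)


if __name__ == "__main__":
    print(make_room(VirtualizedHost(), required_share=0.30))
```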
[0036] At step 316, the application instantiates the new task at the specified workload capacity. The application then launches the new NNF on the task to commence its operation at step 318. At step 320, the application determines whether any additional NNFs are to be generated on the network service computing system. If so, processing continues at step 308 to determine whether the network service computing device has sufficient workload capacity; otherwise, processing continues at step 322, at which the process ends.
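Putting steps 308 through 322 together, an end-to-end flow for a batch of requested NNFs could resemble the sketch below; every helper passed in is a hypothetical stand-in for operations the application would perform through the scheduler agent and hypervisor.

```python
# Sketch of the overall loop: check capacity, adjust if needed, instantiate the task
# at its specified capacity, launch the NNF, and repeat for any additional NNFs.

def generate_network_service(nnf_requests, has_capacity, adjust_device,
                             instantiate_task, launch_nnf):
    launched = []
    for request in nnf_requests:                    # step 320: any additional NNFs?
        while not has_capacity(request):            # steps 308/310
            adjust_device(request)                  # step 314
        task = instantiate_task(request)            # step 316
        launched.append(launch_nnf(request, task))  # step 318
    return launched


if __name__ == "__main__":
    capacity = {"free": 0.12}
    requests = [{"nnf_type": "firewall", "share": 0.10},
                {"nnf_type": "switch", "share": 0.05}]

    def has_capacity(req):
        return capacity["free"] >= req["share"]

    def adjust_device(req):
        capacity["free"] += 0.10                    # e.g., hot-add a vCPU

    def instantiate_task(req):
        capacity["free"] -= req["share"]
        return f"task-{req['nnf_type']}"

    def launch_nnf(req, task):
        return f"{req['nnf_type']} on {task}"

    print(generate_network_service(requests, has_capacity, adjust_device,
                                   instantiate_task, launch_nnf))
```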
[0037] The process described above may be embodied in other specific forms without departing from the spirit or scope of the present disclosure. For example, the process may include fewer, different, or additional steps than what is described herein. Additionally, the steps may be conducted in a differing order than that described herein.
[0038] FIG. 4 illustrates an example computing system 400 that may implement various systems, such as the application 102, and methods discussed herein, such as process 300. A general purpose computer system 400 is capable of executing a computer program product to execute a computer process. Data and program files may be input to the computer system 400, which reads the files and executes the programs therein, such as the application 102. Some of the elements of a general purpose computer system 400 are shown in FIG. 4, wherein a processing system 402 is shown having an input/output (I/O) section 404, a hardware central processing unit (CPU) 406, and a memory section 408. The processing system 402 of the computer system 400 may have a single hardware central processing unit 406 or a plurality of hardware processing units. The computer system 400 may be a conventional computer, a server, a distributed computer, or any other type of computing device, such as one or more external computers made available via a cloud computing architecture. The presently described technology is optionally implemented in software devices loaded in memory 408, stored on a configured DVD/CD-ROM 410 or storage unit 412, and/or
communicated via a wired or wireless network link 414, thereby transforming the computer system 400 in FIG. 4 to a special purpose machine for implementing the described operations.
[0039] The memory section 408 may be volatile media, nonvolatile media, removable media, non-removable media, and/or other hardware media that can be accessed by a general purpose or special purpose computing device. For example, the memory section 408 may include non-transitory computer storage media and communication media. Non-transitory computer storage media may further include volatile, nonvolatile, removable, and/or non-removable media implemented in a method or technology for the storage (and retrieval) of information, such as computer/machine-readable/executable instructions, data and data structures, engines, program modules, and/or other data. Communication media may, for example, embody computer/machine-readable/executable instructions, data structures, program modules, algorithms, and/or other data. The communication media may also include a non-transitory information delivery technology. The communication media may include wired and/or wireless connections and technologies and be used to transmit and/or receive wired and/or wireless communications.
[0040] The I/O section 404 is connected to one or more optional user-interface devices (e.g., a user interface such as a keyboard 416 or the user interface 512), an optional disc storage unit 412, an optional display 418, and an optional disc drive unit 420. Generally, the disc drive unit 420 is a DVD/CD-ROM drive unit capable of reading the DVD/CD-ROM medium 410, which typically contains programs and data 422.
Computer program products containing mechanisms to effectuate the systems and methods in accordance with the presently described technology may reside in the memory section 408, on a disc storage unit 412, on the DVD/CD-ROM medium 410 of the computer system 400, or on external storage devices made available via a cloud computing architecture with such computer program products, including one or more database management products, web server products, application server products, and/or other additional software components. Alternatively, a disc drive unit 420 may be replaced or supplemented by a floppy drive unit, a tape drive unit, or other storage medium drive unit. An optional network adapter 424 is capable of connecting the computer system 400 to a network via the network link 414, through which the computer system can receive instructions and data. Examples of such systems include personal computers, Intel or PowerPC-based computing systems, AMD-based computing systems, ARM-based computing systems, and other systems running a Windows-based, a UNIX-based, a mobile operating system, or other operating system. It should be understood that computing systems may also embody devices such as Personal Digital Assistants (PDAs), mobile phones, tablets or slates, multimedia consoles, gaming consoles, set top boxes, etc.
[0041] When used in a LAN-networking environment, the computer system 400 is connected (by wired connection and/or wirelessly) to a local network through the network interface or adapter 424, which is one type of communications device. When used in a WAN-networking environment, the computer system 400 typically includes a modem, a network adapter, or any other type of communications device for establishing communications over the wide area network. In a networked environment, program modules depicted relative to the computer system 400 or portions thereof, may be stored in a remote memory storage device. It is appreciated that the network
connections shown are examples of communications devices, and other means of establishing a communications link between the computers may be used.
[0042] In an example implementation, source code executed by the control circuit 118 and a plurality of internal and external databases are optionally stored in memory of the control circuit 118 or in other storage systems, such as the disc storage unit 412 or the DVD/CD-ROM medium 410, and/or other external storage devices made available and accessible via a network architecture. The source code executed by the control circuit 118 may be embodied by instructions stored on such storage systems and executed by the processing system 402.
[0043] Some or all of the operations described herein may be performed by the processing system 402, which is hardware. Further, local computing systems, remote data sources and/or services, and other associated logic represent firmware, hardware, and/or software configured to control operations of the system 100 and/or other
components. The system set forth in FIG. 4 is but one possible example of a computer system that may employ or be configured in accordance with aspects of the present disclosure.
[0044] In the present disclosure, the methods disclosed may be implemented as sets of instructions or software readable by a device. Further, it is understood that the specific order or hierarchy of steps in the methods disclosed is an instance of example approaches. Based upon design preferences, it is understood that the specific order or hierarchy of steps in the methods can be rearranged while remaining within the disclosed subject matter. The accompanying method claims present elements of the various steps in a sample order, and are not necessarily meant to be limited to the specific order or hierarchy presented.
[0045] The described disclosure may be provided as a computer program product, or software, that may include a non-transitory machine-readable medium having stored thereon executable instructions, which may be used to program a computer system (or other electronic devices) to perform a process according to the present disclosure. A non-transitory machine-readable medium includes any mechanism for storing
information in a form (e.g., software, processing application) readable by a machine (e.g., a computer). The non-transitory machine-readable medium may include, but is not limited to, magnetic storage medium (e.g., floppy diskette); optical storage medium (e.g., CD-ROM); magneto-optical storage medium; read only memory (ROM); random access memory (RAM); erasable programmable memory (e.g., EPROM and EEPROM); flash memory; or other types of medium suitable for storing electronic executable instructions.
[0046] The description above includes example systems, methods, techniques, instruction sequences, and/or computer program products that embody techniques of the present disclosure. However, it is understood that the described disclosure may be practiced without these specific details.
[0047] It is believed that the present disclosure and many of its attendant
advantages will be understood by the foregoing description, and it will be apparent that various changes may be made in the form, construction, and arrangement of the components without departing from the disclosed subject matter or without sacrificing all of its material advantages. The form described is merely explanatory, and it is the intention of the following claims to encompass and include such changes.
[0048] While the present disclosure has been described with reference to various embodiments, it should be understood that these embodiments are illustrative and that the scope of the disclosure is not limited to them. Many variations, modifications, additions, and improvements are possible. More generally, embodiments in accordance with the present disclosure have been described in the context of particular
implementations. Functionality may be separated or combined in blocks differently in various embodiments of the disclosure or described with different terminology. These and other variations, modifications, additions, and improvements may fall within the scope of the disclosure as defined in the claims that follow.

Claims

WHAT IS CLAIMED IS:
1. A network service infrastructure management system comprising:
a computing system in communication with a network service computing device and comprising at least one memory for storing instructions that are executed by at least one processor to:
receive a request to generate a network service comprising one or more network node functions;
communicate with an operating system of the network service computing device to instantiate one or more tasks corresponding to the one or more network node functions, each task being instantiated at a level of workload capability specified for its respective network node function; and
launch each network node function on its respective task.
2. The system of Claim 1, wherein the operating system comprises a virtual machine (VM), the instructions further executed to communicate with a hypervisor that manages the VM to adjust the workload capabilities of the VM so that the specified level of workload capability of each task is maintained.
3. The system of Claim 1, wherein the instructions are executed to perform at least one of instantiating a new VM or deleting an existing VM to maintain the workload capacity at the specified level.
4. The system of Claim 1, wherein the instructions are executed to:
receive a level of processing capability to be provided for at least one of the network node functions; and
adjust the workload capability of the task on which the one network node function is executed.
5. The system of Claim 1, wherein the level of processing capability comprises at least one of a processing capability, a throughput capability, and a memory capacity level.
6. The system of Claim 1, wherein the tasks comprise threads of the operating system.
7. The system of Claim 1, wherein the instructions are executed to calculate the workload capacity according to the workload capacity of the operating system and the combined workload capacities of other tasks executed on the operating system.
8. The system of Claim 1, wherein the instructions are executed to communicate with the operating system using a scheduler agent executed on the operating system, the scheduler agent translating instructions from the instructions to a format suitable for use by the operating system.
9. A network service infrastructure management method comprising:
receiving, using instructions stored on at least one computer-readable medium and executed by at least one processor, a request to generate a network service comprising one or more network node functions;
communicating, using the instructions, with an operating system of the network service computing device to instantiate one or more tasks corresponding to the one or more network node functions, each task being instantiated at a level of workload capability specified for its respective network node function; and
launching, using the instructions, each network node function on its respective task.
10. The method of Claim 9, further comprising communicating with a hypervisor that manages a virtual machine (VM) to adjust the workload capabilities of the VM so that the specified level of workload capability of each task is maintained.
11. The method of Claim 10, further comprising performing at least one of instantiating a new VM or deleting an existing VM to maintain the workload capacity at the specified level.
12. The method of Claim 9, further comprising:
receiving a level of processing capability to be provided for at least one of the network node functions; and
adjusting the workload capability of the task on which the one network node function is executed.
13. The method of Claim 9, further comprising calculating the workload capacity according to the workload capacity of the operating system and the combined workload capacities of other tasks executed on the operating system.
14. The method of Claim 9, further comprising communicating with the operating system using a scheduler agent executed on the operating system, the scheduler agent translating instructions from the instructions to a format suitable for use by the operating system.
15. A non-transitory computer-readable medium encoded with a route monitoring service comprising instructions executable by a processor to: receive a request to generate a network service comprising one or more network node functions;
communicate with an operating system of the network service computing device to instantiate one or more tasks corresponding to the one or more network node functions, each task being instantiated at a level of workload capability specified for its respective network node function; and
launch each network node function on its respective task.
16. The non-transitory computer-readable medium of Claim 15, wherein the operating system comprises a virtual machine (VM), the instructions further executed to communicate with a hypervisor that manages the VM to adjust the workload capabilities of the VM so that the specified level of workload capability of each task is maintained.
17. The non-transitory computer-readable medium of Claim 15, further executed to perform at least one of instantiating a new VM or deleting an existing VM to maintain the workload capacity at the specified level.
18. The non-transitory computer-readable medium of Claim 15, further executed to:
receive a level of processing capability to be provided for at least one of the network node functions; and
adjust the workload capability of the task on which the one network node function is executed.
19. The non-transitory computer-readable medium of Claim 15, further executed to calculate the workload capacity according to the workload capacity of the operating system and the combined workload capacities of other tasks executed on the operating system.
20. The non-transitory computer-readable medium of Claim 15, further executed to communicate with the operating system using a scheduler agent executed on the operating system, the scheduler agent translating instructions from the instructions to a format suitable for use by the operating system.
PCT/US2016/026660 2015-04-09 2016-04-08 Network service infrastructure management system and method of operation WO2016164736A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CA2982132A CA2982132A1 (en) 2015-04-09 2016-04-08 Network service infrastructure management system and method of operation
EP16777364.7A EP3281112A4 (en) 2015-04-09 2016-04-08 Network service infrastructure management system and method of operation
HK18108919.1A HK1249601A1 (en) 2015-04-09 2018-07-10 Network service infrastructure management system and method of operation

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201562145110P 2015-04-09 2015-04-09
US62/145,110 2015-04-09

Publications (1)

Publication Number Publication Date
WO2016164736A1 true WO2016164736A1 (en) 2016-10-13

Family

ID=57072167

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2016/026660 WO2016164736A1 (en) 2015-04-09 2016-04-08 Network service infrastructure management system and method of operation

Country Status (5)

Country Link
US (2) US10078535B2 (en)
EP (1) EP3281112A4 (en)
CA (1) CA2982132A1 (en)
HK (1) HK1249601A1 (en)
WO (1) WO2016164736A1 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2982132A1 (en) 2015-04-09 2016-10-13 Level 3 Communications, Llc Network service infrastructure management system and method of operation
CN106161399B (en) * 2015-04-21 2019-06-07 新华三技术有限公司 A kind of security service delivery method and system
WO2017051630A1 (en) * 2015-09-25 2017-03-30 ソニー株式会社 Information processing device, service processing device, information processing method, program, and information processing system
US11467882B2 (en) * 2018-12-21 2022-10-11 Target Brands, Inc. Methods and systems for rapid deployment of configurable computing resources
EP3959675A1 (en) * 2019-04-25 2022-03-02 Liveperson, Inc. Smart capacity for workload routing
US11012365B2 (en) * 2019-09-27 2021-05-18 Intel Corporation Changing a time sensitive networking schedule implemented by a softswitch

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7275249B1 (en) * 2002-07-30 2007-09-25 Unisys Corporation Dynamically generating masks for thread scheduling in a multiprocessor system
US20120216194A1 (en) * 2011-01-14 2012-08-23 International Business Machines Corporation Hypervisor application of service tags in a virtual networking environment
US20140201374A1 (en) * 2013-01-11 2014-07-17 Futurewei Technologies, Inc. Network Function Virtualization for a Network Device
US20140376555A1 (en) * 2013-06-24 2014-12-25 Electronics And Telecommunications Research Institute Network function virtualization method and apparatus using the same

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9973375B2 (en) * 2013-04-22 2018-05-15 Cisco Technology, Inc. App store portal providing point-and-click deployment of third-party virtualized network functions
WO2014208661A1 (en) * 2013-06-27 2014-12-31 日本電気株式会社 Device, method, system, and program for designing placement of virtual machine
RU2643451C2 (en) * 2013-08-27 2018-02-01 Хуавей Текнолоджиз Ко., Лтд. System and method for virtualisation of mobile network function
US20150082378A1 (en) * 2013-09-18 2015-03-19 Apcera, Inc. System and method for enabling scalable isolation contexts in a platform
KR101595854B1 (en) * 2013-12-24 2016-02-19 주식회사 케이티 Method and Apparatus for placing a virtual machine in cloud system
US9413634B2 (en) * 2014-01-10 2016-08-09 Juniper Networks, Inc. Dynamic end-to-end network path setup across multiple network layers with network service chaining
EP3116177B1 (en) * 2014-03-24 2020-02-26 Huawei Technologies Co. Ltd. Service implementation method for nfv system, and communications unit
US10348825B2 (en) * 2014-05-07 2019-07-09 Verizon Patent And Licensing Inc. Network platform-as-a-service for creating and inserting virtual network functions into a service provider network
US9887959B2 (en) * 2014-08-19 2018-02-06 Futurewei Technologies, Inc. Methods and system for allocating an IP address for an instance in a network function virtualization (NFV) system
US9288148B1 (en) * 2014-10-30 2016-03-15 International Business Machines Corporation Hierarchical network, service and application function virtual machine partitioning across differentially sensitive data centers
CA2982132A1 (en) 2015-04-09 2016-10-13 Level 3 Communications, Llc Network service infrastructure management system and method of operation

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7275249B1 (en) * 2002-07-30 2007-09-25 Unisys Corporation Dynamically generating masks for thread scheduling in a multiprocessor system
US20120216194A1 (en) * 2011-01-14 2012-08-23 International Business Machines Corporation Hypervisor application of service tags in a virtual networking environment
US20140201374A1 (en) * 2013-01-11 2014-07-17 Futurewei Technologies, Inc. Network Function Virtualization for a Network Device
US20140376555A1 (en) * 2013-06-24 2014-12-25 Electronics And Telecommunications Research Institute Network function virtualization method and apparatus using the same

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3281112A4 *

Also Published As

Publication number Publication date
US20160299783A1 (en) 2016-10-13
US10514957B2 (en) 2019-12-24
EP3281112A1 (en) 2018-02-14
US20190018706A1 (en) 2019-01-17
HK1249601A1 (en) 2018-11-02
US10078535B2 (en) 2018-09-18
CA2982132A1 (en) 2016-10-13
EP3281112A4 (en) 2018-11-14

Similar Documents

Publication Publication Date Title
US10514957B2 (en) Network service infrastructure management system and method of operation
US11106456B2 (en) Live updates for virtual machine monitor
US10193963B2 (en) Container virtual machines for hadoop
US9594590B2 (en) Application migration with dynamic operating system containers
US8694638B2 (en) Selecting a host from a host cluster to run a virtual machine
CN107231815B (en) System and method for graphics rendering
AU2013318249B2 (en) Automated profiling of resource usage
US8489744B2 (en) Selecting a host from a host cluster for live migration of a virtual machine
US11113782B2 (en) Dynamic kernel slicing for VGPU sharing in serverless computing systems
US10754677B2 (en) Providing a layered image using a hierarchical tree
US9104456B2 (en) Zone management of compute-centric object stores
US11169840B2 (en) High availability for virtual network functions
US20140082614A1 (en) Automated profiling of resource usage
US10430249B2 (en) Supporting quality-of-service for virtual machines based on operational events
US20190018670A1 (en) Method to deploy new version of executable in node based environments
CN110166507B (en) Multi-resource scheduling method and device
CN110221920B (en) Deployment method, device, storage medium and system
US10728169B1 (en) Instance upgrade migration
Fan et al. Agent-based service migration framework in hybrid cloud
CN109960579B (en) Method and device for adjusting service container
US9535803B2 (en) Managing network failure using back-up networks
Singh et al. Survey on various load balancing techniques in cloud computing
Yang et al. Kubehice: Performance-aware container orchestration on heterogeneous-isa architectures in cloud-edge platforms
CN105653347B (en) A kind of server, method for managing resource and virtual machine manager
CN109800084A (en) Discharge the method and terminal device of resources of virtual machine

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16777364

Country of ref document: EP

Kind code of ref document: A1

REEP Request for entry into the european phase

Ref document number: 2016777364

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 2982132

Country of ref document: CA

NENP Non-entry into the national phase

Ref country code: DE