WO2018203780A1 - Manager node and method performed therein for handling one or more network functions in a communication network - Google Patents

Manager node and method performed therein for handling one or more network functions in a communication network

Info

Publication number
WO2018203780A1
Authority
WO
WIPO (PCT)
Prior art keywords
network
neural network
manager node
function
network function
Prior art date
Application number
PCT/SE2017/050443
Other languages
French (fr)
Inventor
Christian Olrog
Original Assignee
Telefonaktiebolaget Lm Ericsson (Publ)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Telefonaktiebolaget Lm Ericsson (Publ) filed Critical Telefonaktiebolaget Lm Ericsson (Publ)
Priority to PCT/SE2017/050443 priority Critical patent/WO2018203780A1/en
Publication of WO2018203780A1 publication Critical patent/WO2018203780A1/en


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533 Hypervisors; Virtual machine monitors
    • G06F9/45558 Hypervisor-specific management and integration aspects
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/16 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using machine learning or artificial intelligence
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/40 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using virtualisation of network functions or resources, e.g. SDN or NFV entities
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533 Hypervisors; Virtual machine monitors
    • G06F9/45558 Hypervisor-specific management and integration aspects
    • G06F2009/45595 Network integration; Enabling network access in virtual machine instances

Abstract

Embodiments herein relate to a method performed by a manager node (1000,1100,10,11,12,18,20) for handling one or more network functions in a communication network (1). The manager node reads a descriptor including a representation of a neural network, for setting up a deployment of a network function and an instantiation of the neural network, which neural network is pre-trained at least partly for the network function, to help detect signs of situations in the deployment of the network function. The manager node further collects event data from the network function; and runs the event data through the neural network to detect the signs of situations.

Description

MANAGER NODE AND METHOD PERFORMED THEREIN FOR HANDLING ONE OR MORE NETWORK FUNCTIONS IN A COMMUNICATION NETWORK
TECHNICAL FIELD
Embodiments herein relate to a manager node and a method performed therein for handling communications. Furthermore, a computer program and a computer readable storage medium are also provided herein. In particular, embodiments herein relate to handling one or more network functions in a communication network.
BACKGROUND
Network Operators' networks are populated with a large and increasing variety of proprietary hardware appliances. Launching a new network service often requires yet another variety, and finding the space and power to accommodate these boxes is becoming increasingly difficult; this is compounded by the increasing costs of energy, capital investment challenges and the rarity of skills necessary to design, integrate and operate increasingly complex hardware-based appliances.
Moreover, hardware-based appliances rapidly reach end of life, requiring much of the procure-design-integrate-deploy cycle to be repeated with little or no revenue benefit. Worse, hardware lifecycles are becoming shorter as technology and services innovation accelerates, inhibiting the roll out of new revenue earning network services and constraining innovation in an increasingly network-centric connected world.
Network Functions Virtualization (NFV) aims to address these problems by leveraging standard IT virtualization technology to consolidate many network equipment types onto industry standard high volume servers, switches and storage, which could be located in Data centers, Network Nodes and in the end user premises. Network Functions Virtualization is applicable to any data plane packet processing and control plane function in fixed and mobile network infrastructures.
Virtualizing Network Functions (VNF) could potentially offer many benefits including, but not limited to:
• Reduced equipment costs and reduced power consumption through consolidating equipment and exploiting the economies of scale of the IT industry.
• Increased speed of Time to Market by minimizing the typical network operator cycle of innovation. Economies of scale required to cover investments in hardware-based functionalities are no longer applicable for software-based development, making feasible other modes of feature evolution. Network Functions Virtualization should enable network operators to significantly reduce the maturation cycle.
• Availability of network appliance multi-version and multi-tenancy, which allows use of a single platform for different applications, users and tenants. This allows network operators to share resources across services and across different customer bases.
• Targeted service introduction based on geography or customer sets is possible. Services can be rapidly scaled up/down as required.
• Enables a wide variety of eco-systems and encourages openness. It opens the virtual appliance market to pure software entrants, small players and academia, encouraging more innovation to bring new services and new revenue streams quickly at much lower risk.
Network Functions Virtualization aims to transform the way that network operators architect networks by evolving standard IT virtualization technology to consolidate many network equipment types onto industry standard high volume servers, switches and storage, which could be located in Datacenters, Network Nodes and in the end user premises, as illustrated in Fig. 1. It involves the implementation of network functions in software that can run on a range of industry standard server hardware, and that can be moved to, or instantiated in, various locations in the network as required, without the need for installation of new equipment.
Currently, the Internet networking approach is IP host based and allows establishment of End to End (E2E) session links and communication. The NFV Architectural Framework defines a Network Service as the subset of the end to end service formed by Virtualized Network Functions and associated Virtual Links instantiated on the NFVI, as shown in Fig. 1. NFV adds new capabilities to communication networks and requires a new set of management and orchestration functions to be added to the current model of operations, administration, maintenance and provisioning. In legacy networks, Network Function (NF) implementations are often tightly coupled with the infrastructure they run on. NFV decouples software implementations of Network Functions from the computation, storage, and networking resources they use. The virtualization insulates the Network Functions from those resources through a virtualization layer.
The decoupling exposes a new set of entities, the Virtualized Network Functions (VNFs), and a new set of relationships between them and the NFV Infrastructure (NFVI). VNFs can be chained with other VNFs and/or Physical Network Functions (PNFs) to realize a Network Service (NS). Since Network Services (including the associated VNF Forwarding Graphs (VNFFGs), Virtual Links (VLs), Physical Network Functions (PNFs)), VNFs, NFVI and the relationships between them did not exist before the emergence of NFV, their handling requires a new and different set of management and orchestration functions denoted Network Functions Virtualization Management and Orchestration (NFV-MANO). The NFV-MANO architectural framework has the role to manage the NFVI and orchestrate the allocation of resources needed by the NSs and VNFs. Such coordination is necessary now because of the decoupling of the Network Functions software from the NFVI.
The virtualization principle stimulates a multi-vendor ecosystem where the different components of NFVI, VNF software, and NFV-MANO architectural framework entities are likely to follow different lifecycles (e.g. on procurement, upgrading, etc.). This requires interoperable standardized interfaces and proper resource abstraction among them. The NFV-MANO architectural framework identifies the following functional blocks that share reference points with NFV-MANO: Element Management (EM); Virtualized Network Function (VNF); Operation System Support (OSS) and Business System Support functions (BSS); and NFV Infrastructure (NFVI).
To get full benefit from simplified deployment, it is not enough to deploy an isolated VNF/EM Function (EMF) pair; rather, an entire service based on many connected VNFs needs to be deployed. The EMF may also be connected to the overarching cross-domain OSS/BSS. There is a desire to simplify operations and deployment in a complex environment consisting of one or more VNFs.
SUMMARY
An object of embodiments herein is to provide a mechanism for deploying one or more network functions in an efficient manner.
According to an aspect, the object is achieved by providing a method performed by a manager node, such as a VNF manager, an element manager (EM) or an OSS/BSS node, for handling one or more network functions in a communication network. The manager node reads a descriptor including a representation of a neural network, for setting up a deployment of a network function and an instantiation of the neural network, which neural network is pre-trained at least partly for the network function, to help detect signs of situations in the deployment of the network function. The manager node collects event data from the network function; and runs the event data through the neural network to detect the signs of situations.
According to an aspect, the object is achieved by providing a manager node for handling one or more network functions in a communication network. The manager node is configured to read a descriptor including a representation of a neural network, for setting up a deployment of a network function and an instantiation of the neural network, which neural network is pre-trained, at least partly, for the network function, to help detect signs of situations in the deployment of the network function. The manager node is configured to collect event data from the network function; and to run the event data through the neural network to detect the signs of situations.
It is furthermore provided herein a computer program comprising instructions, which, when executed on at least one processor, cause the at least one processor to carry out any of the methods above, as performed by the manager node. It is additionally provided herein a computer-readable storage medium, having stored thereon a computer program comprising instructions which, when executed on at least one processor, cause the at least one processor to carry out the method according to any of the methods above, as performed by the manager node.
Embodiments herein allow the descriptor to be extended to also include, or be associated with, e.g. weights and topology from a pre-trained neural network used to classify events and describe related actions, so that these weights and actions can be read into e.g. a generic manager node. The embodiments herein thus enable a deployment, including continued operation, of one or more NFs in an efficient manner.
BRIEF DESCRIPTION OF THE DRAWINGS
Embodiments will now be described in more detail in relation to the enclosed drawings, in which:
Fig. 1 is a block diagram depicting a concept of NFV;
Fig. 2 is a schematic overview depicting an architecture of NFV according to some embodiments herein;
Fig. 3 is a flowchart depicting a method performed by a manager node according to embodiments herein;
Fig. 4 is a combined flowchart and signalling scheme according to some embodiments herein;
Fig. 5 is a flowchart depicting a method performed by a vendor and an operator according to embodiments herein;
Fig. 6 is a block diagram illustrating the import of a descriptor according to embodiments herein;
Fig. 7 is a schematic view illustrating the neural network according to embodiments herein; and
Fig. 8 is a block diagram depicting a manager node according to embodiments herein.
DETAILED DESCRIPTION
Embodiments herein relate to communication networks in general. Fig. 2 is a schematic overview depicting an architecture of an NFV system according to embodiments herein. The NFV system comprises a plurality of elements.
An Element Management (EM) 10 or Element Management System (EMS) 10 is responsible for Fault, Configuration, Accounting, Performance, and Security (FCAPS) management functionality for a Virtualized Network Function (VNF) 11. This includes:
• Configuration for the network functions provided by the VNF.
• Fault management for the network functions provided by the VNF.
• Accounting for the usage of VNFs.
• Collecting performance measurement results for the functions provided by the VNF.
• Security management for the VNFs.
The EM may be aware of virtualization and collaborate with the VNF Manager to perform those functions that require exchanges of information regarding the NFVI Resources associated with the VNF.
The NFV further comprises an Operations Support System/Business Support System (OSS/BSS) 12. The OSS/BSS are the combination of the operator's other operations and business support functions that are not otherwise explicitly captured in the present architectural framework, but are expected to have information exchanges with functional blocks in the NFV-MANO architectural framework. OSS/BSS functions may provide management and orchestration of legacy systems and may have full end to end visibility of services provided by legacy network functions in an operator's network.
Within the scope of the present document, Network Functions Virtualization Infrastructure (NFVI) 14 encompasses all the hardware (e.g. compute, storage, and networking) and software (e.g. hypervisors) components that together provide the infrastructure resources where VNFs are deployed. The NFVI may also include partially virtualized NFs. Examples of such partially virtualized network functions are related to "white box" switches, hardware load balancers, DSL Access Multiplexers (DSLAMs), Broadband Remote Access Server (BRAS), Wi-Fi access points, CPEs, etc., for which a certain part of the functionality is virtualized and is in scope of NFV-MANO while other parts are built in silicon (PNF) either due to physical constraints (e.g. digital interfaces to analogue physical channels) or vendor design choices. The present document does not cover the management of PNFs and it is assumed here that it is being taken care of by some other entity, for example the OSS/BSS or a Network Controller.
The NFV system further comprises a NFV MANO 16 that is broken up into three functional blocks:
NFV Orchestrator 18 (NFVO): Responsible for on-boarding of new network services (NS) and virtual network function (VNF) packages; NS lifecycle management; global resource management; validation and authorization of network functions virtualization infrastructure (NFVI) resource requests
VNF Manager (VNFM) 20: Oversees lifecycle management of VNF instances; coordination and adaptation role for configuration and event reporting between NFVI and E/NMS
Virtualized Infrastructure Manager (VIM) 22: Controls and manages the NFVI compute, storage, and network resources
The NFV-MANO architectural framework identifies the following functional blocks that share reference points with NFV-MANO:
• Element Management (EM).
• Virtualized Network Function (VNF).
• Operation System Support (OSS) and Business System Support functions (BSS).
• NFV Infrastructure (NFVI).
The NFV-MANO architectural framework identifies the following main reference points:
• Os-Ma-nfvo, a reference point between OSS/BSS and NFVO.
• Ve-Vnfm-em, a reference point between EM and VNFM.
• Ve-Vnfm-vnf, a reference point between VNF and VNFM.
• Nf-Vi, a reference point between NFVI and VIM.
• Or-Vnfm, a reference point between NFVO and VNFM.
• Or-Vi, a reference point between NFVO and VIM.
• Vi-Vnfm, a reference point between VIM and VNFM.
A manager node, e.g. a network node, a server or similar, is herein provided to perform the methods mentioned herein. The manager node may be exemplified as implemented in the OSS/BSS 12, the EM 10, the VNF 11, the NFVO 18 or the VNFM 20, for handling one or more network functions in the communication network 1. The manager node reads a descriptor including a representation of a neural network, for setting up a deployment of a network function and an instantiation of the neural network. The neural network is pre-trained, at least partly, for the network function, to help detect signs of situations in the deployment of the network function. The manager node collects event data from the network function; and runs the event data through the neural network to detect the signs of situations. For example, certain event sequences may correspond to a shortage of compute resources, which is an example of a sign, and the neural network may have been trained to detect these and initiate a scale-up action adding more compute resources. Another example of a sign may be the loss of one or more of the resources constituting the network function, at which time the neural network may be pre-trained to detect the combination of loss of events from some resources and a sequence of events from the remaining resources, and to initiate a network function restart. The neural network may have been trained in a lab environment where manual or simulated error conditions may have been generated in order to create event sequences that symbolize non-optimal conditions. The error conditions may include any fault source, e.g. a faulty software state, a faulty configuration, a (virtual) hardware fault, a network fault, an overload fault, or resource starvation in a shared environment. The pre-training of the neural network may take any form and can include e.g. supervised learning or self-learning based on a reward/cost function. The neural network may take event history into account, e.g. depending on network topology and training. The neural network may also predict events based on event history and take actions. The actions may include a step for operator approval before being implemented. The network function may e.g. be a physical network function (PNF), a virtual PNF or a VNF. According to embodiments herein, the network function is illustrated and exemplified herein as a virtual network function in a Network Functions Virtualization network.
According to some embodiments herein, it is provided a manner to augment the NFV MANO 16 so that the NFV MANO 16 supports a new interface that receives all relevant event data from one or more VNFs, and to extend the descriptor of the network function, such as a VNF Descriptor, so that it is also associated with e.g. weights and topology from the pre-trained neural network. The neural network may be used to classify events and to describe related actions so that these weights and actions can be read into the generic manager node, e.g. a generic EM. Potentially, the weights of the pre-trained neural network may be split into a set of standardized pre-trained weights along with a set of specific weights added to a last neural network layer of the neural network. The neural network may be a Deep Neural Network (DNN) that can be used to efficiently learn, detect and classify large amounts of information. A trained or at least partly pre-trained DNN can easily be exported and imported. It is possible to extend an existing DNN to refine the classifications by adding a new topology with weights on top. The neural network may be trained once and later "executed" (inferred), or the neural network may continuously update its weights (learn) based on some feedback.
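As a minimal, purely illustrative sketch of this split, the following Python/PyTorch snippet shows a standardized pre-trained base with a small VNF-specific head added as the last layer; the class names, dimensions and the four example classes are assumptions and not part of the embodiments described herein.

```python
# Illustrative sketch (assumptions only): a standardized pre-trained base network
# shared across network functions, extended with a small VNF-specific last layer.
import torch
import torch.nn as nn

class StandardizedEventBase(nn.Module):
    """Generic pre-trained part, shared across many network functions."""
    def __init__(self, event_dim: int = 128, hidden_dim: int = 256):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(event_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.layers(x)

class VnfSpecificHead(nn.Module):
    """Vendor-supplied last layer mapping base features to VNF-specific classes."""
    def __init__(self, hidden_dim: int = 256, num_classes: int = 4):
        super().__init__()
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return self.classifier(features)

base = StandardizedEventBase()          # weights could ship as a standardized artefact
head = VnfSpecificHead(num_classes=4)   # e.g. normal / scale-up / scale-down / restart

events = torch.randn(1, 128)            # a pre-processed event feature vector
classification = head(base(events))
print(classification.softmax(dim=-1))
```

In such a split, only the small head would need to accompany the descriptor, while the standardized base could be reused for many different network functions.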
Thus, the manager node may be capable of reading the network function descriptor including the related neural network representation and storing them, while also storing a relation so that, if the network function is later selected for deployment, the related neural network can also be instantiated. Embodiments herein cover not only an actual deployment phase, when the network function is configured and set up into its operational state, but also the network function in its deployment as a continued operation.
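A small sketch of this onboarding idea follows; the catalogue class and the two instantiate helpers are hypothetical placeholders introduced only for illustration, not an existing NFV-MANO interface.

```python
# Hypothetical sketch: an onboarding catalogue that stores the relation between a
# network function descriptor and its neural network representation, and
# instantiates both when the network function is later selected for deployment.
class Catalogue:
    def __init__(self) -> None:
        self._entries: dict = {}

    def onboard(self, nf_id: str, descriptor: dict, nn_representation: dict) -> None:
        # Store descriptor and neural network representation, plus their relation.
        self._entries[nf_id] = {"descriptor": descriptor, "nn": nn_representation}

    def deploy(self, nf_id: str):
        entry = self._entries[nf_id]
        nf_instance = instantiate_network_function(entry["descriptor"])
        nn_instance = instantiate_neural_network(entry["nn"])
        return nf_instance, nn_instance

def instantiate_network_function(descriptor: dict) -> str:
    return f"instance-of-{descriptor['name']}"           # placeholder deployment

def instantiate_neural_network(nn_representation: dict) -> str:
    return f"dnn-for-{nn_representation['base_model']}"  # placeholder instantiation

catalogue = Catalogue()
catalogue.onboard("my-vnf",
                  {"name": "my-vnf", "vdus": ["vdu-1"]},
                  {"base_model": "standardized-event-dnn-v1", "extension": "head.pt"})
print(catalogue.deploy("my-vnf"))
```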
The method actions performed by the manager node, e.g. the EM 10 or the OSS/BSS node 12, for handling one or more network functions in the communication network 1 according to some embodiments will now be described with reference to a flowchart depicted in Fig. 3. The actions do not have to be taken in the order stated below, but may be taken in any suitable order. Actions performed in some embodiments are marked with dashed boxes.
Action 301. The manager node reads the descriptor including the representation of the neural network, for setting up the deployment of the network function and the instantiation of the neural network. The neural network is pre-trained at least partly for the network function, to help detect signs of situations in the deployment of the network function. The network function may be a physical network function (PNF) or a virtual network function (VNF). The neural network may be a Deep Neural Network (DNN). The instantiation of the neural network is for multiple network functions. The representation of the neural network may be an index, a reference, the actual neural network or similar.
For example, an operator receives and reads the descriptor describing a VNF (VNFD), including e.g. the topology of the VNF, resources and functions of the VNF such as Links, Virtual Deployment Unit (VDU) number and their internal relation and relation to connection points, constraints on resources to use, etc. The descriptor further includes the representation of e.g. the DNN, indicating DNN topology, DNN pre-trained weights and indicated actions. The representation of the neural network may describe weights and topology for the neural network. The weights may be split into a set of standardized pre-trained weights along with a set of specific weights added to a last neural network layer. Similarly, an NFV Network Service (NS) descriptor containing VNF descriptors is extended to also support embedding of the pre-trained neural network representation.
Action 302. The manager node collects event data from the network function. The network function may be a virtual network function in a Network Functions Virtualization network.
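Purely as an illustration of the extended descriptor read in Action 301, a hypothetical VNFD could carry the neural network representation alongside the usual topology information. The field names in the sketch below are assumptions made for this example and are not taken from any NFV specification.

```python
# Illustrative only: a hypothetical extended VNF descriptor carrying a neural
# network representation next to the usual topology information.
extended_vnfd = {
    "vnf_descriptor": {
        "vdus": ["vdu-1", "vdu-2"],
        "virtual_links": ["vl-internal", "vl-external"],
        "connection_points": ["cp-mgmt", "cp-data"],
    },
    "neural_network": {
        "representation": "reference",                # or "embedded"
        "base_model": "standardized-event-dnn-v1",    # shared pre-trained part
        "extension_weights": "vnf_specific_head.pt",  # vendor-specific last layer(s)
        "actions": {                                  # classification -> suggested action
            0: "no_action",
            1: "scale_up",
            2: "scale_down",
            3: "restart",
        },
    },
}
```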
Action 303. The manager node further runs the event data through the neural network to detect the signs of situations. The neural network may classify events into certain actions to trigger one or more actions in a managing domain and/or an application domain, e.g. scale up or down the resources, or trigger a reset or re-boot. The neural network may classify events and describe related actions.
Embodiments herein enable that e.g. a VNF can be tested for normal and extreme behavior in a lab, and the learnings can be applied in a generic fashion for different VNFs to help detect signs of pending anomalous situations in real deployment.
Fig. 4 is a schematic combined flow chart and signalling scheme according to embodiments herein for handling one or more network functions in a communication network.
Action 401. The manager node reads the descriptor including the representation of the neural network, for setting up the deployment of the network function and the instantiation of the neural network. The neural network is pre-trained, at least partly, for the network function to help detect signs of situations in the deployment of the network function.
Action 402. A number of events are created in the network function, such as the VNF 11. One option is to augment the VNFM 20 with new functionality to classify events, but another entity could alternatively be defined to classify the events as well.
Action 403. The manager node collects event data from the network function. After import, the VNF Catalogue and NS Catalogue should contain the metadata about the NS and VNF, including the new entire DNN or a reference to which base DNNs should be used and the extension DNNs for each base DNN. For every NS or VNF instance that is instantiated, a DNN is instantiated based on the description in the catalogue, and all event streams, e.g. Simple Network Management Protocol (SNMP) data, multiple log sources and performance data, from each VNFC in the VNF are sent to a relevant or corresponding DNN. Having a set of pre-trained DNNs allows for a potentially very efficient implementation where e.g. hardware is used for the generic segmentation/pre-classification of events and multiple extended classification steps could be run afterwards (e.g. having the vendor-supplied extension along with an implementer's self-learning classifier running on the same pre-classification). If a Long Short-Term Memory (LSTM) stateful type topology is restricted to the extension part of the neural network, the same pre-classifier could be used for several instances, saving on memory loads into e.g. a graphics processing unit (GPU) hardware acceleration unit.
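A minimal PyTorch sketch of this idea is given below, assuming a stateless shared pre-classifier and a small stateful LSTM extension per VNF instance; all dimensions, names and the number of classes are illustrative assumptions rather than part of the embodiments.

```python
# Sketch (assumptions only): a shared, stateless pre-classifier reused for all
# VNF instances, with a small stateful LSTM extension kept per instance.
import torch
import torch.nn as nn

shared_base = nn.Sequential(          # one copy, e.g. kept resident on a GPU
    nn.Linear(128, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
)

class InstanceExtension(nn.Module):
    """Per-VNF-instance stateful part; only this carries LSTM state."""
    def __init__(self, feature_dim: int = 256, num_classes: int = 4):
        super().__init__()
        self.lstm = nn.LSTM(feature_dim, 64, batch_first=True)
        self.out = nn.Linear(64, num_classes)
        self.state = None                      # (h, c) carried between event batches

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        seq, self.state = self.lstm(features.unsqueeze(0), self.state)
        return self.out(seq[:, -1, :])

extensions = {"vnf-instance-1": InstanceExtension(),
              "vnf-instance-2": InstanceExtension()}

events = torch.randn(10, 128)                 # 10 pre-processed events from instance 1
with torch.no_grad():
    features = shared_base(events)            # pre-classification shared across instances
    scores = extensions["vnf-instance-1"](features)
print(scores.softmax(dim=-1))
```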
Action 404. The manager node runs the event data through the neural network to detect the signs of situations.
Action 405. The manager node may trigger actions, e.g. sending a trigger command, at e.g. the VNFM 20 based on outcome from the neural network.
Action 406. The VNFM 20 may then perform the action such as scale up, scale down resources for the VNF or may perform a restart of the VNF.
Embodiments implemented in e.g. NFV disclose a manager node that is able to read and understand neural network information as part of a VNF description (in the same file or imported separately) when importing/onboarding a VNF. The manager node collects event data from the VNF and sends it to the imported DNN, and it may configure the read DNN and use it for classification of VNF status. Optionally, the manager node may use a set of common standardized pre-trained DNNs, e.g. for log text, IP packet data, CPU performance data, SNMP alarm data, or application domain data, which can be extended with a pre-trained extension DNN that is part of the VNF description. The manager node may then use the resulting classification to trigger predefined or custom orchestration in the VNFM domain (scale, set/clear alarm level, restart, ...) or the application domain (set queue sizes, timeouts etc.). The manager node may optionally support fine-grained classification of scaling needs: scale up/down the number of nodes, RAM, CPU, Storage Input/Output Operations Per Second (IOPS), Storage space, or Network IOPS.
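To make this last step concrete, the following hedged sketch maps a classification label to a predefined trigger. The action names, the trigger_vnfm() helper and the operator-approval stub are hypothetical and do not correspond to an existing VNFM interface.

```python
# Hypothetical sketch of mapping a neural network classification to a
# predefined orchestration trigger in the VNFM or application domain.
ACTIONS = {
    "scale_up": {"domain": "vnfm", "command": "scale", "direction": "out"},
    "scale_down": {"domain": "vnfm", "command": "scale", "direction": "in"},
    "restart": {"domain": "vnfm", "command": "heal"},
    "raise_alarm": {"domain": "vnfm", "command": "set_alarm_level"},
    "tune_queues": {"domain": "application", "command": "set_queue_sizes"},
}

def trigger_vnfm(vnf_instance_id: str, command: dict) -> None:
    # Placeholder for the (assumed) interface towards the VNFM or application domain.
    print(f"trigger {command} for {vnf_instance_id}")

def operator_approves(vnf_instance_id: str, label: str) -> bool:
    return True                                  # stub; a real flow would ask the operator

def handle_classification(vnf_instance_id: str, label: str,
                          require_approval: bool = True) -> None:
    action = ACTIONS.get(label)
    if action is None:
        return                                   # e.g. "normal" -> no action
    if require_approval and not operator_approves(vnf_instance_id, label):
        return                                   # optional operator-approval step
    trigger_vnfm(vnf_instance_id, action)

handle_classification("vnf-instance-1", "scale_up")
```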
In the illustrated example the manager node is exemplified as the EM 10 but may be implemented in e.g. the VNFM 20.
The manager node may be capable of identifying a descriptor which, instead of representing a neural network by topology and weights, represents the neural network by a predefined identifier. The network description may then not be fully embedded directly in the descriptor but may instead reference something standardized, e.g. a standardized topology, bringing its own weights. Thus, embodiments herein enable a standardized network pre-trained on a generic event stream. This can run on dedicated hardware shared among all network functions. Then a smaller specific neural network can be added onto this as the last layer(s).
Embodiments herein disclose a manager node capable of reading a descriptor containing only the virtual network function, where the representation of the neural network is read separately. The manager node may then store a relation between the descriptor of the network function and the neural network so that, if the network function is later selected for deployment, the related neural network can be instantiated. The manager node supports a single neural network instance for multiple network functions. The network function may be a generic software function, the descriptor may be a software package, and the manager node is able to read the software package and relate it to the pre-trained neural network. As an example, referring to Fig. 2, the NFVO 18 may parse the VNFD including the representation of the DNN. The DNN can be represented in text form as a number of layers in the DNN. Each layer has parameters for e.g. which layers are connected to which, the function in each layer and parameters for said function. The representation can also include patterns, which allows for a condensed description of networks with hundreds of layers partially built out of repetitive constructs. The output layer may include a standardized or vendor-specific classification. Each classification in the output layer may also include an executable script, e.g. written in the Python programming language.
Fig. 5 is a flowchart illustrating an operator importing a descriptor according to embodiments herein.
Action 501. A vendor trains the neural network, such as the DNN, for the network function.
Action 502. The vendor packages the neural network into the descriptor.
Action 503. An operator or a VNF operator imports the descriptor of e.g. one or more VNFs and the DNN.
Action 504. The operator instantiates the VNF.
Action 505. The operator instantiates the DNN and connects the VNF event stream, i.e. collects event data.
Action 506. The operator then runs the event data through the DNN, which begins to classify the network function status, e.g. the VNF status, based on the collected events or event data.
As stated above, the manager node reads the descriptor including the representation of the neural network, for setting up the deployment of the network function and the instantiation of the neural network. The neural network is pre-trained at least partly for the network function, to help detect signs of situations in the deployment of the network function. For example, as indicated in Fig. 6, an operator imports into the NFVO 18 and reads the descriptor describing the topology of an NFV, describing resources and functions of the NFV such as Links, VNF Catalogues (VNFC) listing VNFs, VNF instances indicating resources to use, etc. The descriptor further includes the representation of e.g. the DNN, indicating DNN topology, DNN pre-trained weights and indicated actions. The neural network classifies events into certain actions to trigger one or more actions in a managing domain and/or an application domain.
Fig. 7 is an example of using a DNN, indicating DNN topology, DNN pre-trained weights and indicated actions. The representation of the neural network may describe weights and topology for the neural network. The weights may be split into a set of standardized pre-trained weights along with a set of specific weights added to a last neural network layer. The events are input into the pre-trained DNN and all the different options result in suggested actions as an outcome, such as scaling up or scaling down resources for the VNF or restarting the VNF.
Fig. 8 is a block diagram depicting the manager node, exemplified herein by two embodiments as a manager node 1000 and a manager node 1100, for handling one or more network functions, such as VNFs, in the communication network according to embodiments herein.
The manager node may comprise processing circuitry 1101, e.g. one or more processors, configured to perform the methods herein.
The manager node may comprise a reading module 1102. The manager node 10, the processing circuitry 1101 and/or the reading module 1102 is configured to read the descriptor including the representation of the neural network, for setting up the deployment of the network function and the instantiation of the neural network. The neural network is pre-trained, at least partly, for the network function, to help detect signs of situations in the deployment of the network function. The representation of the neural network may describe weights and topology for the neural network. The network function may be one or more virtual network functions in the NFV network. The neural network may be a DNN.
The manager node may comprise a collecting module 1103. The manager node 10, the processing circuitry 1101 and/or the collecting module 1103 is configured to collect event data from the network function.
The manager node may comprise an executing module 1104. The manager node 10, the processing circuitry 1101 and/or the executing module 1104 is configured to run the event data through the neural network to detect the signs of situations. The neural network may be configured to classify events into certain actions to trigger one or more actions in a managing domain and/or an application domain. Thus, the neural network may be configured to classify events and to describe related actions. The weights may be split into a set of standardized pre-trained weights along with a set of specific weights added to the last neural network layer.
The manager node further comprises a memory 1105. The memory comprises one or more units to be used to store data on, such as event data, actions, neural network information, network function information, relations, applications to perform the methods disclosed herein when being executed, and similar.
The methods according to the embodiments described herein for the manager node are respectively implemented by means of e.g. a computer program 1106 or a computer program product, comprising instructions, i.e., software code portions, which, when executed on at least one processor, cause the at least one processor to carry out the actions described herein, as performed by the manager node. The computer program 1106 may be stored on a computer-readable storage medium 1107, e.g. a USB, a memory, a disc or similar. The computer-readable storage medium 1107, having stored thereon the computer program, may comprise the instructions which, when executed on at least one processor, cause the at least one processor to carry out the actions described herein, as performed by the manager node. In some embodiments, the computer-readable storage medium may be a non-transitory computer-readable storage medium. Thus, the manager node may comprise a processor and the memory, said memory comprising instructions executable by said processor whereby said manager node is operative to perform the methods herein. Thus, the manager node is operative to read the descriptor including the representation of the neural network, for setting up the deployment of the network function and the instantiation of the neural network. The neural network is pre-trained, at least partly, for the network function, to help detect signs of situations in the deployment of the network function. The manager node is further operative to collect event data from the network function; and to run the event data through the neural network to detect the signs of situations.
A manager node may be capable of reading the network function descriptor including the related neural network representation and storing them, while also storing a relation so that if the network function is later selected for deployment the related neural network can also be instantiated. The manager node may be capable of identifying a descriptor which, instead of representing the neural network by topology and weights, can also represent a neural network by a predefined identifier. The manager node may be capable of reading a descriptor containing only the virtual network function and reading a separate representation of the neural network, and storing their relation so that if the network function is later selected for deployment the related neural network can be instantiated. The manager node may be able to support a single neural network instance for multiple network functions. The network function may be a generic software function, the descriptor may be a software package, and the manager node may read the software package and relate the software package to the pre-trained neural network.
As will be readily understood by those familiar with communications design, means or modules may be implemented using digital logic and/or one or more microcontrollers, microprocessors, or other digital hardware. In some embodiments, several or all of the various functions may be implemented together, such as in a single application-specific integrated circuit (ASIC), or in two or more separate devices with appropriate hardware and/or software interfaces between them. Several of the functions may be implemented on a processor shared with other functional
components of a manager node, for example.
Alternatively, several of the functional elements of the processing means discussed may be provided through the use of dedicated hardware, while others are provided with hardware for executing software, in association with the appropriate software or firmware. Thus, the term "processor" or "controller" as used herein does not exclusively refer to hardware capable of executing software and may implicitly include, without limitation, digital signal processor (DSP) hardware, read-only memory (ROM) for storing software, random-access memory for storing software and/or program or application data, and non-volatile memory. Other hardware, conventional and/or custom, may also be included. Designers of manager nodes will appreciate the cost, performance, and maintenance trade-offs inherent in these design choices.
It will be appreciated that the foregoing description and the accompanying drawings represent non-limiting examples of the methods and apparatus taught herein. As such, the apparatus and techniques taught herein are not limited by the foregoing description and accompanying drawings. Instead, the embodiments herein are limited only by the following claims and their legal equivalents.

Claims

1. A method performed by a manager node (1000, 1100, 10, 11, 12, 18, 20) for handling one or more network functions in a communication network (1), the method comprising:
- reading (301) a descriptor including a representation of a neural network, for setting up a deployment of a network function and an instantiation of the neural network, which neural network is pre-trained at least partly for the network function, to help detect signs of situations in the deployment of the network function;
- collecting (302) event data from the network function; and
- running (303) the event data through the neural network to detect the signs of situations.
2. The method according to claim 1, wherein the neural network classifies events into certain actions to trigger one or more actions in a managing domain and/or an application domain.
3. The method according to any of the claims 1-2, wherein the representation of the neural network describes weights and topology for the neural network.
4. The method according to claim 3, wherein the neural network classifies events and describes related actions.
5. The method according to any of the claims 3-4, wherein the weights are split into a set of standardized pre-trained weights along with a set of specific weights added to one or more last neural network layers.
6. The method according to any of the claims 1-5, wherein the network function is a virtual network function in a Network Functions Virtualization network.
7. The method according to any of the claims 1-6, wherein the neural network is a Deep Neural Network.
8. The method according to any of the claims 1-7, wherein the instantiation of the neural network is for multiple network functions.
9. A computer program comprising instructions, which, when executed on at least one processor, cause the at least one processor to carry out any of the methods according to any of the claims 1-8, as performed by the manager node (1000, 1100, 10, 11, 12, 18, 20).
10. A computer-readable storage medium, having stored thereon a computer program comprising instructions which, when executed on at least one processor, cause the at least one processor to carry out the method according to any of the claims 1-8, as performed by the manager node (1000, 1100, 10, 11, 12, 18, 20).
11. A manager node (1000, 1100, 10, 11, 12, 18, 20) for handling one or more network functions in a communication network (1), the manager node (1000, 1100, 10, 11, 12, 18, 20) being configured to:
read a descriptor including a representation of a neural network, for setting up a deployment of a network function and an instantiation of the neural network, which neural network is pre-trained, at least partly, for the network function, to help detect signs of situations in the deployment of the network function;
collect event data from the network function; and to
run the event data through the neural network to detect the signs of situations.
12. The manager node (1000, 1100, 10, 11, 12, 18, 20) according to claim 11, wherein the neural network is configured to classify events into certain actions to trigger one or more actions in a managing domain and/or an application domain.
13. The manager node (1000, 1100, 10, 11, 12, 18, 20) according to any of the claims 11-12, wherein the representation of the neural network describes weights and topology for the neural network.
14. The manager node (1000, 1100, 10, 11, 12, 18, 20) according to claim 13, wherein the neural network is configured to classify events and to describe related actions.
15. The manager node (1000, 1100, 10, 11, 12, 18, 20) according to any of the claims 13-14, wherein the weights are split into a set of standardized pre-trained weights along with a set of specific weights added to one or more last neural network layers.
16. The manager node (1000, 1100, 10, 11, 12, 18, 20) according to any of the claims 11-15, wherein the network function is one or more virtual network functions in a Network Functions Virtualization network.
17. The manager node (1000, 1100, 10, 11, 12, 18, 20) according to any of the claims 11-16, wherein the neural network is a Deep Neural Network.
18. The manager node (1000, 1100, 10, 11, 12, 18, 20) according to any of the claims 11-17, wherein the instantiation of the neural network is for multiple network functions.
19. A manager node for handling one or more network functions in a communication network, which manager node comprises a processor and a memory, said memory containing instructions executable by said processor whereby said manager node is operative to:
read a descriptor including a representation of a neural network, for setting up a deployment of a network function and an instantiation of the neural network, which neural network is pre-trained, at least partly, for the network function, to help detect signs of situations in the deployment of the network function;
collect event data from the network function; and to
run the event data through the neural network to detect the signs of situations.
20. A manager node (1000, 1100, 10, 11, 12, 18, 20) for handling one or more network functions in a communication network (1), the manager node comprising:
a reading module configured to read a descriptor including a representation of a neural network, for setting up a deployment of a network function and an instantiation of the neural network, which neural network is pre-trained, at least partly, for the network function, to help detect signs of situations in the deployment of the network function;
a collecting module configured to collect event data from the network function; and
an executing module configured to run the event data through the neural network to detect the signs of situations.
PCT/SE2017/050443 2017-05-05 2017-05-05 Manager node and method performed therein for handling one or more network functions in a communication network WO2018203780A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/SE2017/050443 WO2018203780A1 (en) 2017-05-05 2017-05-05 Manager node and method performed therein for handling one or more network functions in a communication network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/SE2017/050443 WO2018203780A1 (en) 2017-05-05 2017-05-05 Manager node and method performed therein for handling one or more network functions in a communication network

Publications (1)

Publication Number Publication Date
WO2018203780A1 true WO2018203780A1 (en) 2018-11-08

Family

ID=58745326

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/SE2017/050443 WO2018203780A1 (en) 2017-05-05 2017-05-05 Manager node and method performed therein for handling one or more network functions in a communication network

Country Status (1)

Country Link
WO (1) WO2018203780A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111082960A (en) * 2019-04-15 2020-04-28 中兴通讯股份有限公司 Data processing method and device
IT201900014241A1 (en) * 2019-08-07 2021-02-07 Vodafone Italia S P A Method for identifying and classifying the behavioral methods of a plurality of data relating to a telephone infrastructure for network function virtualization
CN112631717A (en) * 2020-12-21 2021-04-09 重庆大学 Network service function chain dynamic deployment system and method based on asynchronous reinforcement learning
US20210288886A1 (en) * 2018-07-17 2021-09-16 Telefonaktiebolaget Lm Ericsson (Publ) Open network automation platform (onap) - fifth generation core (5gc) interaction for analytics

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150381423A1 (en) * 2014-06-26 2015-12-31 Futurewei Technologies, Inc. System and Method for Virtual Network Function Policy Management
US20170126792A1 (en) * 2015-11-02 2017-05-04 Telefonaktiebolaget L M Ericsson (Publ) System and methods for intelligent service function placement and autoscale based on machine learning

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150381423A1 (en) * 2014-06-26 2015-12-31 Futurewei Technologies, Inc. System and Method for Virtual Network Function Policy Management
US20170126792A1 (en) * 2015-11-02 2017-05-04 Telefonaktiebolaget L M Ericsson (Publ) System and methods for intelligent service function placement and autoscale based on machine learning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ROMEU, PABLO ET AL.: "Time-Series Forecasting of Indoor Temperature Using Pre-trained Deep Neural Networks", 10 September 2013, Network and Parallel Computing [Lecture Notes in Computer Science], Springer International Publishing, Cham, ISBN: 978-3-540-28012-5, ISSN: 0302-9743, pages 451-458, XP047040810 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210288886A1 (en) * 2018-07-17 2021-09-16 Telefonaktiebolaget Lm Ericsson (Publ) Open network automation platform (onap) - fifth generation core (5gc) interaction for analytics
US11516090B2 (en) * 2018-07-17 2022-11-29 Telefonaktiebolaget Lm Ericsson (Publ) Open network automation platform (ONAP)—fifth generation core (5GC) interaction for analytics
CN111082960A (en) * 2019-04-15 2020-04-28 中兴通讯股份有限公司 Data processing method and device
WO2020211561A1 (en) * 2019-04-15 2020-10-22 中兴通讯股份有限公司 Data processing method and device, storage medium and electronic device
IT201900014241A1 (en) * 2019-08-07 2021-02-07 Vodafone Italia S P A Method for identifying and classifying the behavioral methods of a plurality of data relating to a telephone infrastructure for network function virtualization
EP3772833A1 (en) 2019-08-07 2021-02-10 Vodafone Italia S.p.A. A method of identifying and classifying the behavior modes of a plurality of data relative to a telephony infrastructure for network function virtualization
CN112631717A (en) * 2020-12-21 2021-04-09 重庆大学 Network service function chain dynamic deployment system and method based on asynchronous reinforcement learning
CN112631717B (en) * 2020-12-21 2023-09-05 重庆大学 Asynchronous reinforcement learning-based network service function chain dynamic deployment system and method

Similar Documents

Publication Publication Date Title
US11593252B2 (en) Agentless distributed monitoring of microservices through a virtual switch
US11640465B2 (en) Methods and systems for troubleshooting applications using streaming anomaly detection
US10756976B2 (en) Data network and execution environment replication for network automation and network applications
US10305747B2 (en) Container-based multi-tenant computing infrastructure
CN107689882A (en) The method and apparatus of service deployment in a kind of virtualization network
US11172022B2 (en) Migrating cloud resources
WO2018203780A1 (en) Manager node and method performed therein for handling one or more network functions in a communication network
Chayapathi et al. Network functions virtualization (NFV) with a touch of SDN
Gabbrielli et al. Self-reconfiguring microservices
WO2017113201A1 (en) Network service lifecycle management method and device
CN104580519A (en) Method for rapid deployment of openstack cloud computing platform
WO2016159949A1 (en) Application analyzer for cloud computing
Petroulakis et al. Semiotics architectural framework: End-to-end security, connectivity and interoperability for industrial iot
US11456942B2 (en) Systems and methods for providing traffic generation on network devices
WO2021127640A1 (en) Modeling cloud inefficiencies using domain-specific templates
KR20180058458A (en) Virtualized network function management method and virtualized network function manager using TOSCA based information model, and network function virtualization system using the same
Villota et al. On the feasibility of using hierarchical task networks and network functions virtualization for managing software-defined networks
Hewage et al. An agile farm management information system framework for precision agriculture
Grozev et al. Dynamic selection of virtual machines for application servers in cloud environments
US20200097883A1 (en) Dynamically evolving textual taxonomies
Garcia-Carmona et al. A multi-level monitoring approach for the dynamic management of private iaas platforms
KR102543689B1 (en) Hybrid cloud management system and control method thereof, node deployment apparatus included in the hybrid cloud management system and control method thereof
WO2023092579A1 (en) Method and apparatus for simulating deployment for ai model, storage medium, and electronic device
Mamushiane Towards the development of an optimal SDN controller placement framework to expedite SDN deployment in emerging markets
Hauser Resource profiling for large-scale data centres

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17724949

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17724949

Country of ref document: EP

Kind code of ref document: A1