WO2016188562A1 - Network virtualization - Google Patents

Network virtualization

Info

Publication number
WO2016188562A1
Authority
WO
WIPO (PCT)
Prior art keywords
notification
virtual network
message
notification message
network node
Prior art date
Application number
PCT/EP2015/061561
Other languages
French (fr)
Inventor
Cesar Augusto ZEVALLOS
Jani Olavi SODERLUND
Original Assignee
Nokia Solutions And Networks Oy
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nokia Solutions And Networks Oy
Priority to PCT/EP2015/061561
Publication of WO2016188562A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00: Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/06: Management of faults, events, alarms or notifications
    • H04L41/0631: Management of faults, events, alarms or notifications using root cause analysis; using analysis of correlation between notifications, alarms or events based on decision criteria, e.g. hierarchy, tree or time analysis
    • H04L41/065: Management of faults, events, alarms or notifications using root cause analysis; using analysis of correlation between notifications, alarms or events based on decision criteria, e.g. hierarchy, tree or time analysis involving logical or physical relationship, e.g. grouping and hierarchies
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00: Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/40: Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using virtualisation of network functions or resources, e.g. SDN or NFV entities
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00: Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/50: Network service management, e.g. ensuring proper service fulfilment according to agreements
    • H04L41/5041: Network service management, e.g. ensuring proper service fulfilment according to agreements characterised by the time relationship between creation and deployment of a service
    • H04L41/5051: Service on demand, e.g. definition and deployment of services in real time

Definitions

  • the invention relates to communications.
  • Network function virtualization allows virtualizing network node functions into building blocks that may be connected to each other in order to create services for an end-user.
  • Network resources may be grouped into virtual network function (VNF) instances. Because service usage in the network is not static, the resources allocated to implement such services need to be scaled on demand.
  • ETSI has created a management and orchestration (MANO) group within NFV for controlling the automation and orchestration of network node functions.
  • MANO management and orchestration
  • Figure 1 illustrates a wireless communication system to which embodiments of the invention may be applied
  • Figure 2 illustrates a core network system to which embodiments of the invention may be applied
  • Figure 3 illustrates a signalling diagram of a procedure for network virtualization according to an embodiment of the invention
  • FIGS 4, 5, 6 and 7 illustrate processes for network virtualization according to some embodiments of the invention
  • Figures 8, 9 and 10 illustrate a notification procedure according to some embodiments of the invention
  • Figures 11 and 12 illustrate block diagrams of apparatuses according to some embodiments of the invention.
  • a cellular communication system may comprise a radio access network comprising base stations disposed to provide radio coverage in a determined geographical area.
  • the base stations may comprise macro cell base stations 102 arranged to provide terminal devices 106 with the radio coverage over a relatively large area spanning even over several square miles, for example.
  • small area cell base stations 100 may be deployed to provide terminal devices 104 with high data rate services.
  • Such small area cell base stations may be called micro cell base stations, pico cell base stations, or femto cell base stations.
  • the small area cell base stations typically have significantly smaller coverage area than the macro base stations 102.
  • the cellular communication system may operate according to specifications of the 3rd generation partnership project (3GPP) long-term evolution (LTE) advanced or its evolution versions.
  • 3GPP 3rd generation partnership project
  • LTE long-term evolution
  • a core network system may comprise an evolved packet core (EPC) comprising a mobility management entity (MME).
  • MME provides the control plane function for mobility between LTE and 2G/3G access networks.
  • EPC further comprises a serving gateway (SGW) that routes and forwards user data packets.
  • SGW serving gateway
  • PGW PDN gateway
  • UE user equipment
  • HSS home subscriber server
  • PCRF policy and charging rules function
  • radio network system and/or the core network system may also comprise other functions and structures such as an access network discovery and selection function (ANDSF) and/or an evolved packet data gateway (ePDG).
  • ANDSF access network discovery and selection function
  • ePDG evolved packet data gateway
  • the virtual network function does not always provide a complete service by itself. Instead, complex services may require the chaining of several VNFs.
  • the chaining of several VNFs may be referred to as service chaining.
  • VNF may need to be aware of network decisions made by other VNFs that belong to the same service chain, and react to those.
  • Existing network function virtualisation (NFV) technology does not allow such communication.
  • a network orchestrator that is responsible for allocating resources and/or for instantiating, monitoring or terminating VNF instances, is not aware if a certain VNF requires information from a different VNF belonging to the same service chain. This is because both the information elements that need to be shared, and also the triggers for sharing such information, are dependent on the nature of the service being implemented (the different services need to share different information when different events happen to different network elements).
  • NE physical network elements
  • Each physical network element is built from both software and hardware. Hardware is proprietary and it is designed with the right mix of resources needed by the software it is designed for.
  • bottlenecks in the service may appear at one or more NEs.
  • physical NEs may need to be added to the network. Adding a network element requires adding physical hardware to the network, which requires manually configuring the network to start using the added NE. Based on the manual configuration NEs are aware of the network decisions made on other NEs.
  • the network orchestrator allows the automation of deploying virtualized NEs (VNFs); however, the network orchestrator does not allow notifications from one VNF to another. Then the only option is to manually configure VNFs regarding relevant events in other VNFs that belong to the same service chain. This limits the automation capabilities for highly complex services where such information sharing is needed.
  • VNFs virtualized NEs
  • MANO does not yet include a notification service; instead, MANO has a simple approach which does not scale when the services are too complex.
  • MANO makes each VNF visible from other VNFs as one single entity, regardless of how many instances of that VNF are deployed. Then scaling in/out is transparent to other VNFs.
  • implementing such a solution is not feasible in complex VNFs.
  • Providing one single IP address for a VNF (regardless of how many virtual instances are running) incurs a heavy capacity impact for some NEs, while for other NEs the complexity makes it simply unfeasible. Therefore, the current MANO architecture (VNFs seen as single entities regardless of the number of instances) does not apply to every implementation.
  • Figure 3 shows a signalling diagram illustrating a method for signalling network function virtualization parameters between network nodes of a communication system, e.g. a peer VNF, VNF manager, NFV orchestrator and one or more destination VNFs.
  • the network node may be a server computer, host computer, terminal device, base station, access node or any other network element (such as LTE MME, PGW, billing system, eNB, PCRF, etc).
  • the server computer or the host computer may generate a virtual network through which the host computer communicates with the terminal device.
  • virtual networking may involve a process of combining hardware and software network resources and network functionality into a single, software-based administrative entity, a virtual network.
  • Network virtualization may involve platform virtualization, often combined with resource virtualization.
  • Network virtualization may be categorized as external virtual networking which combines many networks, or parts of networks, into the server computer or the host computer. External network virtualization is targeted to optimized network sharing. Another category is internal virtual networking which provides network-like functionality to the software containers on a single system. Virtual networking may also be used for testing the terminal device.
  • a network node (e.g. a peer VNF) is configured to define 201 a set of predetermined notification events related to the peer VNF.
  • In response to recognising a predetermined notification event occurring in the peer VNF, the network node causes 202 transmission of a notification message towards a further network node (e.g. a VNF manager).
  • the notification message includes at least one service information element to be shared by destination VNFs and an identification of said destination VNFs.
  • the notification message may be transmitted 202 by using representational state transfer REST, and the notification message may comprise a PUT message that includes a uniform resource locator (URL) indicating where to find information on the notification event.
  • the notification message is transmitted 202 by using a Diameter protocol.
  • the notification message may be received 203 and forwarded 204 by the further network node (such as the VNF manager) to a yet further network node (such as an NFV orchestrator).
  • the yet further network node is configured to receive 205 the notification message, and based on the receiving, create 205 and transmit 206 further notification messages to the destination VNFs, the further notification messages including said at least one service information element.
  • the further notification message may be transmitted by using representational state transfer REST.
  • the further notification message may comprise a PUT message, wherein the PUT message includes a uniform resource locator (URL) indicating where to find information on the notification event.
  • the further notification message is received 207 and acknowledged 208 by the destination VNF(s).
  • the NFV orchestrator is able to verify that the further notification message 206 reaches its destination, as a respective acknowledgement message 208 is transmitted and received from the corresponding destination VNF.
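As a sketch only, the notification message of steps 202 and 206 could be serialized as a JSON document carried in the body of a REST PUT request. The field names below are invented for illustration; the text does not define a concrete payload schema.

```python
import json

def build_notification(event, info_elements, destination_vnfs, info_url=None):
    """Build the body of a notification PUT message (steps 202/206).

    Field names are illustrative only; the document fixes no schema.
    """
    message = {
        # the predetermined notification event recognised in the peer VNF
        "event": event,
        # at least one service information element to be shared
        "information_elements": info_elements,
        # identification of the destination VNFs
        "destinations": destination_vnfs,
    }
    if info_url is not None:
        # Alternatively the PUT message may carry only a URL indicating
        # where the information on the notification event can be found.
        message["info_url"] = info_url
    return json.dumps(message)

# A peer VNF notifying that a new instance has scaled out:
body = build_notification(
    event="scale_out",
    info_elements={"gz_ip": "192.0.2.17"},
    destination_vnfs=["CG"],
)
```

In a deployment, this body would be PUT to the VNF manager's notification endpoint; here it only demonstrates which elements the message carries.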
  • An embodiment provides a notification mechanism that enables VNFs to pass information to other interested VNFs within the same service chain in a virtualized network environment.
  • the notification mechanism involves notification triggers, the interested VNFs, information elements, and a notification message.
  • the notification triggers may be particular to a service that is being implemented. Different services may have different triggers. For example, scaling a new instance of VNF may be defined as a configuration trigger.
  • the interested VNFs may be identified by a VNF type. Each notification trigger may define which VNFs need to be informed on the occurrence of the trigger.
  • the information elements include information that is passed to the interested VNF.
  • the notification message acts as an interface that allows the information to be shared between different VNFs.
  • the notification message may be directly communicated from a first VNF to a second VNF (see Figure 8).
  • VNF orchestrator VNF manager
  • the notification message may be communicated from a first VNF manager to a second VNF manager (see Figure 9).
  • each VNF manager is not necessarily compatible with this.
  • the NFV orchestrator acts as a central network element when implementing the notification procedure (see Figure 10). This enables better scalability and usability.
  • a peer VNF is configured to define an internal set of notification events. When an event occurs, a notification trigger is due. The notification trigger generates a notification message. The notification message includes the information that needs to be shared and an identification of VNFs that are interested in such information. The notification message is transmitted from the peer VNF to the VNF manager. The VNF manager is configured to forward the notification message to the NFV orchestrator. The NFV orchestrator is configured to build and transmit notification messages to each instance of destination VNFs, and verify that these notification messages reach their destination (each transmission protocol may have its own way to acknowledge a successfully received message).
  • the notification messages may be implemented by using representational state transfer (REST).
  • REST representational state transfer
  • Diameter protocol may be used (information elements delivered as attribute-value pairs (AVP)).
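For the Diameter alternative, the shared information elements would travel as attribute-value pairs. The encoder below follows the standard Diameter AVP header layout from RFC 6733 (4-octet code, 1-octet flags, 3-octet length covering the header plus unpadded data, value padded to a 4-octet boundary); the AVP code used here is made up purely for illustration.

```python
import struct

def encode_avp(code: int, data: bytes, flags: int = 0x40) -> bytes:
    """Encode one Diameter AVP without a Vendor-ID field (RFC 6733).

    flags=0x40 sets the M ('mandatory') bit. The 3-octet Length field
    counts the 8-octet header plus the unpadded data; the value is then
    padded with zero octets to a 4-octet boundary.
    """
    length = 8 + len(data)
    header = struct.pack("!IB", code, flags) + length.to_bytes(3, "big")
    padding = b"\x00" * (-len(data) % 4)
    return header + data + padding

# A hypothetical AVP carrying a service information element
# (an IP address as text); AVP code 9001 is invented for this sketch.
avp = encode_avp(code=9001, data=b"192.0.2.17")
```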
  • HTTP messages by using REST may be implemented as follows.
  • a PUT message may be used to inform the destination VNFs that a notification event has occurred in the peer VNF.
  • the PUT message may include the relevant information regarding the notification event, or it may include a uniform resource locator (URL) indicating where to find the information on the notification event.
  • URL uniform resource locator
  • the destination VNF may (optionally) trigger the transmission of a GET message to the peer VNF to retrieve the information related to the peer VNF. This allows synchronizing the information among VNFs.
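The URL-indirection variant (a PUT announcing the event, a GET retrieving its details) can be mimicked without a real HTTP stack. The dictionary below stands in for the REST resources a peer VNF would actually expose over HTTP, and the resource path is invented for illustration.

```python
# In-memory stand-in for the peer VNF's REST resources (no real HTTP).
resources = {}

def peer_publish(url, event_info):
    """Peer VNF makes the notification-event details available at a URL."""
    resources[url] = event_info

def notification_put_body(url):
    """PUT body that carries only the URL, not the information itself."""
    return {"event": "scale_out", "info_url": url}

def destination_get(put_body):
    """Destination VNF optionally follows the URL to synchronize."""
    return resources[put_body["info_url"]]

peer_publish("/vnf/pgw-1/events/scale-out", {"gz_ip": "192.0.2.17"})
info = destination_get(notification_put_body("/vnf/pgw-1/events/scale-out"))
```

The design point is that the PUT stays small while the destination decides whether it needs the full details, which is why the GET is optional.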
  • An embodiment involves transferring of charging data records (CDR) between a packet data network gateway (PGW) and a charging gateway (CG).
  • CDR charging data records
  • PGW packet data network gateway
  • CG charging gateway
  • EPC PGW
  • CDRs are generated by PGW and sent to the charging gateway which is a part of the operator's billing domain.
  • PGW and CG need to know each other's IP address in order to enable the transfer of CDRs.
  • PGW and CG addresses may be manually configured in each network element at a deployment phase.
  • PGW VMs and CG VMs may scale in or scale out according to demand. If a new PGW VM scales out, each CG in the network needs to know the IP address of the new PGW (in order to keep a connection for transferring CDRs).
  • the cloud deployment automates the transfer of information (e.g. the IP address of the newly created PGW VM) to interested parties.
  • the notification procedure may involve the scaling in/out of PGW VM as the notification trigger, each CG instance as the interested VNF, and the IP address (that has been created or released) as the information elements.
  • the IP address that has been created or released may be the address of a Gz interface (interface between the PGW and the CG).
  • the REST message sent from the PGW instance to its VNF manager towards the network orchestrator may act as the notification message.
  • the network orchestrator creates the REST messages towards each interested VNF.
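The CDR scenario can be tied together in a small simulation: a PGW scale-out acts as the notification trigger, each CG instance is an interested VNF, and the new Gz address is the information element the orchestrator fans out. All class and field names are invented for this sketch.

```python
class Orchestrator:
    """Minimal stand-in for the NFV orchestrator's fan-out (steps 205-208)."""

    def __init__(self):
        self.vnf_instances = {}  # VNF type -> list of registered instances

    def register(self, vnf_type, instance):
        self.vnf_instances.setdefault(vnf_type, []).append(instance)

    def notify(self, message):
        # Build one further notification per instance of each destination
        # VNF type and collect the acknowledgements.
        acks = []
        for vnf_type in message["destinations"]:
            for instance in self.vnf_instances.get(vnf_type, []):
                acks.append(instance.receive(message["information_elements"]))
        return acks

class ChargingGateway:
    """Interested VNF: keeps a connection open to every known PGW."""

    def __init__(self, name):
        self.name = name
        self.known_pgw_addresses = []

    def receive(self, info):
        self.known_pgw_addresses.append(info["gz_ip"])
        return (self.name, "ack")

orch = Orchestrator()
for name in ("cg-1", "cg-2"):
    orch.register("CG", ChargingGateway(name))

# A PGW VM scales out; its VNF manager forwards this notification upward.
acks = orch.notify({
    "event": "scale_out",
    "information_elements": {"gz_ip": "192.0.2.17"},
    "destinations": ["CG"],
})
```

After the fan-out, every CG instance knows the new PGW address without any manual configuration, which is the automation the text describes.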
  • the information that is to be shared and the triggers for sharing the information may vary greatly from service to service.
  • the notification procedure allows the sharing of any information at any trigger.
  • the notification procedure is also applicable to a case where some of the blocks of the service chain are legacy network elements (legacy NE).
  • the legacy NE needs to implement a REST interface.
  • the legacy NE may be a legacy charging gateway (non-virtualized) which is not part of the cloud deployment but which may be reached from the cloud.
  • the network node (such as a peer VNF) is configured to define 401 a set of predetermined notification events related to the peer VNF.
  • the network node causes 403 transmission of a notification message towards a further network node (e.g. a VNF manager).
  • the notification message includes at least one service information element to be shared by destination VNFs and an identification of said destination VNFs.
  • the notification message may be transmitted 403 by using representational state transfer REST, and the notification message may comprise a PUT message that includes a uniform resource locator (URL) indicating where to find information on the notification event.
  • the notification message is transmitted 403 by using a Diameter protocol.
  • the network node may receive 404 a GET message from a destination virtual network function via the further network node.
  • the GET message includes a uniform resource locator (URL) indicating where to find information on the notification event.
  • the network node retrieves 405 the information on the notification event, and transmits 405 towards the destination VNF (via the VNF manager/NFV orchestrator) service information related to the peer VNF.
  • the network node (such as a VNF manager) is configured to receive 501 a notification message from another network node (e.g. a peer VNF).
  • the notification message includes at least one service information element to be shared by destination VNFs and an identification of said destination VNFs.
  • the notification message may be received 501 by using representational state transfer REST, and the notification message may comprise a PUT message that includes a uniform resource locator (URL) indicating where to find information on the notification event.
  • the notification message is received 501 by using a Diameter protocol.
  • the notification message may be forwarded 502 by the VNF manager to a further network node (such as an NFV orchestrator).
  • the network node may receive 503 a GET message from a destination virtual network function via the NFV orchestrator.
  • the GET message includes a uniform resource locator (URL) indicating where to find information on the notification event.
  • the network node forwards 504 the GET message to the peer VNF. After that, the network node receives 505, from the peer VNF, service information related to the peer VNF, and forwards 506 the service information towards the destination VNF (via the NFV orchestrator).
  • the network node (such as an NFV orchestrator) is configured to receive 601 the notification message, and based on the receiving, create 602 and transmit 603 further notification messages to the destination VNFs.
  • the further notification messages include said at least one service information element.
  • the further notification message may be transmitted by using representational state transfer REST.
  • the further notification message may comprise a PUT message, wherein the PUT message includes a uniform resource locator (URL) indicating where to find information on the notification event.
  • the NFV orchestrator may verify 604 that the further notification message reaches its destination, when a respective acknowledgement message is received 604 from the corresponding destination VNF.
  • the network node may receive 605 a GET message from a destination virtual network function.
  • the GET message includes a uniform resource locator (URL) indicating where to find information on the notification event.
  • the network node forwards 606 the GET message to the peer VNF (via the VNF manager). After that, the network node receives 607, from the peer VNF via the VNF manager, service information related to the peer VNF, and forwards 608 the service information towards the destination VNF.
  • the network node (such as a destination VNF) is configured to receive 701 the further notification message from the NFV orchestrator, and acknowledge 702 the receipt of the further notification message by transmitting a corresponding acknowledgement message to the NFV orchestrator.
  • the network node may transmit 703 a GET message to the peer VNF (via the VNF manager/NFV orchestrator).
  • the GET message includes a uniform resource locator (URL) indicating where to find information on the notification event.
  • the network node receives 704 service information related to the peer VNF (via the VNF manager/NFV orchestrator).
  • a network node defines 201 a set of predetermined notification events related to a peer VNF.
  • a notification message is transmitted 202, 204 towards another network node, the message including at least one service information element to be shared by destination VNFs and an identification of said destination VNFs.
  • Said another network node receives 203, 205 the notification message, and creates 205 and transmits 206 further notification messages to said destination VNFs, the messages including said at least one service information element.
  • Yet another network node receives 207 the further notification message indicating that a predetermined notification event has occurred in a peer VNF, the message indicating where to find information on the notification event, and retrieves the information on the notification event from the peer VNF.
  • An embodiment provides an apparatus comprising at least one processor and at least one memory including a computer program code, wherein the at least one memory and the computer program code are configured, with the at least one processor, to cause the apparatus to carry out the procedures of the above-described network element or network node.
  • the at least one processor, the at least one memory, and the computer program code may thus be considered as an embodiment of means for executing the above-described procedures of the network element or the network node.
  • Figure 11 illustrates a block diagram of a structure of such an apparatus.
  • the apparatus may be comprised in the network element or in the network node, e.g. the apparatus may form a chipset or a circuitry in the network element or in the network node.
  • the apparatus is the network element or the network node.
  • the apparatus comprises a processing circuitry 10 comprising the at least one processor.
  • the processing circuitry 10 may comprise a communication interface 12 configured to receive a notification message indicating that a predetermined notification event has occurred in a peer virtual network function, the notification message including at least one service information element to be shared by destination virtual network functions and an identification of said destination virtual network functions.
  • the communication interface 12 may be configured to receive the notification message, as described above, and output information on the received notification message to a further notification message generator 16 configured to create and transmit further notification messages to said destination virtual network functions, the further notification messages including said at least one service information element.
  • the processing circuitry 10 may comprise the circuitries 12 and 16 as sub-circuitries, or they may be considered as computer program modules executed by the same physical processing circuitry.
  • the memory 20 may store one or more computer program products 24 comprising program instructions that specify the operation of the circuitries 12 and 16.
  • the memory 20 may further store a database 26 comprising definitions for traffic flow monitoring, for example.
  • the apparatus may further comprise a radio interface (not shown in Figure 11) providing the apparatus with radio communication capability with the terminal devices.
  • the radio interface may comprise a radio communication circuitry enabling wireless communications and comprise a radio frequency signal processing circuitry and a baseband signal processing circuitry.
  • the baseband signal processing circuitry may be configured to carry out the functions of a transmitter and/or a receiver.
  • the radio interface may be connected to a remote radio head comprising at least an antenna and, in some embodiments, radio frequency signal processing in a remote location with respect to the base station. In such embodiments, the radio interface may carry out only some of radio frequency signal processing or no radio frequency signal processing at all.
  • the connection between the radio interface and the remote radio head may be an analogue connection or a digital connection.
  • the radio interface may comprise a fixed communication circuitry enabling wired communications.
  • An embodiment provides an apparatus comprising at least one processor and at least one memory including a computer program code, wherein the at least one memory and the computer program code are configured, with the at least one processor, to cause the apparatus to carry out the procedures of the above-described network element or network node.
  • the at least one processor, the at least one memory, and the computer program code may thus be considered as an embodiment of means for executing the above-described procedures of the network element or the network node.
  • Figure 12 illustrates a block diagram of a structure of such an apparatus.
  • the apparatus may be comprised in the network element or in the network node, e.g. the apparatus may form a chipset or a circuitry in the network element or in the network node.
  • the apparatus is the network element or the network node.
  • the apparatus comprises a processing circuitry 50 comprising the at least one processor.
  • the processing circuitry 50 may comprise an event set manager 54 configured to define a set of predetermined notification events related to a peer virtual network function.
  • the processing circuitry 50 may further comprise a notification event detector 52 configured to recognise a predetermined notification event occurring in the peer virtual network function.
  • the notification event detector 52 may be configured to recognise the predetermined notification event, as described above, and output information on the predetermined notification event to a notification message generator 56 configured to cause transmission of a notification message towards a further network node, the notification message including at least one service information element to be shared by destination virtual network functions and an identification of said destination virtual network functions.
  • the processing circuitry 50 may comprise the circuitries 52 to 56 as sub-circuitries, or they may be considered as computer program modules executed by the same physical processing circuitry.
  • the memory 60 may store one or more computer program products 64 comprising program instructions that specify the operation of the circuitries 52 to 56.
  • the memory 60 may further store a database 66 comprising definitions for traffic flow monitoring, for example.
  • the apparatus may further comprise a radio interface (not shown in Figure 12) providing the apparatus with radio communication capability with the terminal devices.
  • the radio interface may comprise a radio communication circuitry enabling wireless communications and comprise a radio frequency signal processing circuitry and a baseband signal processing circuitry.
  • the baseband signal processing circuitry may be configured to carry out the functions of a transmitter and/or a receiver.
  • the radio interface may be connected to a remote radio head comprising at least an antenna and, in some embodiments, radio frequency signal processing in a remote location with respect to the base station. In such embodiments, the radio interface may carry out only some of radio frequency signal processing or no radio frequency signal processing at all.
  • the connection between the radio interface and the remote radio head may be an analogue connection or a digital connection.
  • the radio interface may comprise a fixed communication circuitry enabling wired communications.
  • circuitry refers to all of the following: (a) hardware-only circuit implementations such as implementations in only analog and/or digital circuitry; (b) combinations of circuits and software and/or firmware, such as (as applicable): (i) a combination of processor(s) or processor cores; or (ii) portions of processor(s)/software including digital signal processor(s), software, and at least one memory that work together to cause an apparatus to perform specific functions; and (c) circuits, such as a microprocessor(s) or a portion of a microprocessor(s), that require software or firmware for operation, even if the software or firmware is not physically present.
  • circuitry would also cover an implementation of merely a processor (or multiple processors) or portion of a processor, e.g. one core of a multi-core processor, and its (or their) accompanying software and/or firmware.
  • circuitry would also cover, for example and if applicable to the particular element, a baseband integrated circuit, an application-specific integrated circuit (ASIC), and/or a field-programmable gate array (FPGA) circuit for the apparatus according to an embodiment of the invention.
  • ASIC application-specific integrated circuit
  • FPGA field-programmable gate array
  • the processes or methods described above in connection with Figures 1 to 12 may also be carried out in the form of one or more computer processes defined by one or more computer programs.
  • the computer program shall be considered to encompass also a module of a computer program, e.g. the above-described processes may be carried out as a program module of a larger algorithm or a computer process.
  • the computer program(s) may be in source code form, object code form, or in some intermediate form, and it may be stored in a carrier, which may be any entity or device capable of carrying the program.
  • Such carriers include transitory and/or non-transitory computer media, e.g. a record medium, computer memory, read-only memory, electrical carrier signal, telecommunications signal, and software distribution package.
  • the computer program may be executed in a single electronic digital processing unit or it may be distributed amongst a number of processing units.
  • the present invention is applicable not only to the cellular or mobile communication systems defined above but also to other suitable communication systems.
  • the protocols used, the specifications of cellular communication systems, their network elements, and terminal devices develop rapidly. Such development may require extra changes to the described embodiments. Therefore, all words and expressions should be interpreted broadly and they are intended to illustrate, not to restrict, the embodiment.

Abstract

A network node defines (201) a set of predetermined notification events related to a peer virtual network function. In response to recognising a predetermined notification event occurring in the peer VNF, a notification message is transmitted (202, 204) towards another network node, the message including at least one service information element to be shared by destination VNFs and an identification of said destination VNFs. Said another network node receives (203, 205) the notification message, and creates (205) and transmits (206) further notification messages to said destination VNFs, the messages including said at least one service information element. Yet another network node receives (207) the further notification message indicating that a predetermined notification event has occurred in a peer VNF, the message indicating where to find information on the notification event, and retrieves the information on the notification event from the peer VNF.

Description

NETWORK VIRTUALIZATION
TECHNICAL FIELD
The invention relates to communications.
BACKGROUND
Network function virtualization (NFV) allows virtualizing network node functions into building blocks that may be connected to each other in order to create services for an end-user. Network resources may be grouped into virtual network function (VNF) instances. Because service usage in the network is not static, the resources allocated to implement such services need to be scaled on demand. ETSI has created a management and orchestration (MANO) group within NFV for controlling the automation and orchestration of network node functions.
BRIEF DESCRIPTION
According to an aspect, there is provided the subject matter of the independent claims. Embodiments are defined in the dependent claims.
One or more examples of implementations are set forth in more detail in the accompanying drawings and the description below. Other features will be apparent from the description and drawings, and from the claims.
BRIEF DESCRIPTION OF DRAWINGS
In the following, the invention will be described in greater detail by means of preferred embodiments with reference to the accompanying drawings, in which
Figure 1 illustrates a wireless communication system to which embodiments of the invention may be applied;
Figure 2 illustrates a core network system to which embodiments of the invention may be applied;
Figure 3 illustrates a signalling diagram of a procedure for network virtualization according to an embodiment of the invention;
Figures 4, 5, 6 and 7 illustrate processes for network virtualization according to some embodiments of the invention;
Figures 8, 9 and 10 illustrate a notification procedure according to some embodiments of the invention; Figures 11 and 12 illustrate block diagrams of apparatuses according to some embodiments of the invention.
DETAILED DESCRIPTION OF SOME EMBODIMENTS
The following embodiments are exemplary. Although the specification may refer to "an", "one", or "some" embodiment(s) in several locations, this does not necessarily mean that each such reference is to the same embodiment(s), or that the feature only applies to a single embodiment. Single features of different embodiments may also be combined to provide other embodiments. Furthermore, the words "comprising" and "including" should be understood as not limiting the described embodiments to consist of only those features that have been mentioned; such embodiments may also contain features/structures that have not been specifically mentioned.
Figure 1 illustrates a wireless communication scenario to which embodiments of the invention may be applied. Referring to Figure 1, a cellular communication system may comprise a radio access network comprising base stations disposed to provide radio coverage in a determined geographical area. The base stations may comprise macro cell base stations 102 arranged to provide terminal devices 106 with the radio coverage over a relatively large area spanning even over several square miles, for example. In densely populated hotspots where improved capacity is required, small area cell base stations 100 may be deployed to provide terminal devices 104 with high data rate services. Such small area cell base stations may be called micro cell base stations, pico cell base stations, or femto cell base stations. The small area cell base stations typically have significantly smaller coverage area than the macro base stations 102. The cellular communication system may operate according to specifications of the 3rd generation partnership project (3GPP) long-term evolution (LTE) advanced or its evolution versions.
Figure 2 illustrates a core network scenario to which embodiments of the invention may be applied. Referring to Figure 2, a core network system may comprise an evolved packet core (EPC) comprising a mobility management entity (MME). MME provides the control plane function for mobility between LTE and 2G/3G access networks. EPC further comprises a serving gateway (SGW) that routes and forwards user data packets. A PDN gateway (PGW) provides connectivity from a user equipment (UE) to external packet data networks (such as the internet). A home subscriber server (HSS) is a database that stores user-related and subscription-related information. A policy and charging rules function (PCRF) provides subscriber databases and other specialized functions, such as a charging system, in a centralized manner.
It is apparent to a person skilled in the art that the radio network system and/or the core network system may also comprise other functions and structures, such as an access network discovery and selection function (ANDSF) and/or an evolved packet data gateway (ePDG). It should be appreciated that the functions, structures, elements and protocols used in or for network virtualization are irrelevant to an embodiment. Therefore, they need not be discussed in more detail here.
The virtual network function does not always provide a complete service by itself. Instead, complex services may require the chaining of several VNFs. The chaining of several VNFs may be referred to as service chaining. In such cases, depending on the nature of the service being implemented, a VNF may need to be aware of network decisions made by other VNFs that belong to the same service chain, and react to those. Existing network function virtualisation (NFV) technology does not allow such communication. A network orchestrator that is responsible for allocating resources and/or for instantiating, monitoring or terminating VNF instances is not aware if a certain VNF requires information from a different VNF belonging to the same service chain. This is because both the information elements that need to be shared, and also the triggers for sharing such information, are dependent on the nature of the service being implemented (different services need to share different information when different events happen to different network elements).
If network virtualization is not used, existing complex services may be based on chaining different physical network elements (NE). Each physical network element is built from both software and hardware. Hardware is proprietary and it is designed with the right mix of resources needed by the software it is designed for. When service usage increases, bottlenecks in the service may appear at one or more NEs. To allow higher service capacity, physical NEs may need to be added to the network. Adding a network element requires adding physical hardware to the network, which requires manually configuring the network to start using the added NE. Based on the manual configuration, NEs are aware of the network decisions made on other NEs. When virtualizing those same services into cloud deployments, the network orchestrator allows the automation of deploying virtualized NEs (VNFs); however, the network orchestrator does not allow notifications from one VNF to another. Then the only option is to manually configure VNFs regarding relevant events in other VNFs that belong to the same service chain. This limits the automation capabilities for highly complex services where such information sharing is needed.
MANO does not yet include a notification service; instead, MANO has a simple approach which does not scale when the services are too complex. MANO makes each VNF visible from other VNFs as one single entity, regardless of how many instances of that VNF are deployed. Then scaling in/out is transparent to other VNFs. However, implementing such a solution is not feasible in complex VNFs. Providing one single IP address for a VNF (regardless of how many virtual instances are running) incurs a heavy capacity impact for some NEs, while for other NEs the complexity makes it simply unfeasible. Therefore, the current MANO architecture (VNFs seen as single entities regardless of the number of instances) does not apply to every implementation.
Let us now describe an embodiment of the invention for network function virtualization with reference to Figure 3. Figure 3 illustrates a signalling diagram illustrating a method for signalling network function virtualization parameters between network nodes of a communication system, e.g. a peer VNF, VNF manager, NFV orchestrator and one or more destination VNFs. The network node may be a server computer, host computer, terminal device, base station, access node or any other network element (such as an LTE MME, PGW, billing system, eNB, PCRF, etc.). For example, the server computer or the host computer may generate a virtual network through which the host computer communicates with the terminal device. In general, virtual networking may involve a process of combining hardware and software network resources and network functionality into a single, software-based administrative entity, a virtual network. Network virtualization may involve platform virtualization, often combined with resource virtualization. Network virtualization may be categorized as external virtual networking, which combines many networks, or parts of networks, into the server computer or the host computer. External network virtualization is targeted at optimized network sharing. Another category is internal virtual networking, which provides network-like functionality to the software containers on a single system. Virtual networking may also be used for testing the terminal device. Referring to Figure 3, a network node (e.g. a peer VNF) is configured to define 201 a set of predetermined notification events related to the peer VNF. In response to recognising a predetermined notification event occurring in the peer VNF, the network node causes 202 transmission of a notification message towards a further network node (e.g. a VNF manager). The notification message includes at least one service information element to be shared by destination VNFs and an identification of said destination VNFs.
The notification message may be transmitted 202 by using representational state transfer (REST), and the notification message may comprise a PUT message that includes a uniform resource locator (URL) indicating where to find information on the notification event. Alternatively, the notification message is transmitted 202 by using the Diameter protocol.
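As a sketch of what such a REST-based notification message might carry, the payload below bundles the service information elements, the identification of the destination VNFs, and the URL pointing at further event information. The field names and identifier values are illustrative assumptions; the description does not fix a message schema.

```python
import json

def build_notification_message(event, destination_vnfs, service_info, info_url):
    """Sketch of the notification message described above.

    The message carries the service information elements to be shared,
    an identification of the destination VNFs, and a URL indicating
    where to find further information on the notification event.
    All field names are illustrative, not a defined schema.
    """
    return json.dumps({
        "event": event,                        # which predetermined notification event occurred
        "destination_vnfs": destination_vnfs,  # identification of the interested destination VNFs
        "service_info": service_info,          # service information element(s) to be shared
        "info_url": info_url,                  # where a later GET can retrieve the details
    })

# Example: a peer VNF reports a scale-out event to a hypothetical VNF type "VNF-B".
msg = build_notification_message(
    event="scale_out",
    destination_vnfs=["VNF-B"],
    service_info={"new_ip": "192.0.2.10"},
    info_url="/notifications/42",
)
```

The same fields could equally be delivered as Diameter attribute-value pairs; only the encoding changes.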
The notification message may be received 203 and forwarded 204 by the further network node (such as the VNF manager) to a yet further network node (such as an NFV orchestrator). The yet further network node is configured to receive 205 the notification message and, based on the receiving, create 205 and transmit 206 further notification messages to the destination VNFs, the further notification messages including said at least one service information element. The further notification message may be transmitted by using representational state transfer (REST). The further notification message may comprise a PUT message, wherein the PUT message includes a uniform resource locator (URL) indicating where to find information on the notification event.
The further notification message is received 207 and acknowledged 208 by the destination VNF(s). Thus the NFV orchestrator is able to verify that the further notification message 206 reaches its destination, as a respective acknowledgement message 208 is transmitted and received from the corresponding destination VNF.
An embodiment provides a notification mechanism that enables VNFs to pass information to other interested VNFs within the same service chain in a virtualized network environment. The notification mechanism involves notification triggers, the interested VNFs, information elements, and a notification message. The notification triggers may be particular to a service that is being implemented. Different services may have different triggers. For example, scaling a new instance of VNF may be defined as a configuration trigger. The interested VNFs may be identified by a VNF type. Each notification trigger may define which VNFs need to be informed on the occurrence of the trigger. The information elements include information that is passed to the interested VNF. The notification message acts as an interface that allows the information to be shared between different VNFs.
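The four components named above (notification triggers, interested VNFs, information elements, notification message) can be sketched as a simple mapping from trigger to interested VNF types and information elements. The trigger names and VNF types below are illustrative assumptions, not values defined by the description.

```python
# Each notification trigger defines which VNF types must be informed on the
# occurrence of the trigger, and which information elements are passed to
# them. Trigger names and VNF types here are illustrative assumptions.
NOTIFICATION_TRIGGERS = {
    "scale_out": {"interested_vnf_types": ["CG"], "information_elements": ["gz_ip"]},
    "scale_in": {"interested_vnf_types": ["CG"], "information_elements": ["gz_ip"]},
}

def interested_vnfs_for(trigger):
    """Return the VNF types that need to be informed when a trigger occurs."""
    return NOTIFICATION_TRIGGERS[trigger]["interested_vnf_types"]

def information_elements_for(trigger):
    """Return the information element names shared when a trigger occurs."""
    return NOTIFICATION_TRIGGERS[trigger]["information_elements"]
```

Because different services have different triggers, such a table would be particular to the service being implemented.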
In a first embodiment, the notification message may be directly communicated from a first VNF to a second VNF (see Figure 8). However, trusting the lowest layer to handle its own notifications without the involvement of the cloud infrastructure (NFV orchestrator, VNF manager) may involve security risks.
In a second embodiment, the notification message may be communicated from a first VNF manager to a second VNF manager (see Figure 9). However, not every VNF manager is necessarily compatible with this.
In a third embodiment, the NFV orchestrator acts as a central network element when implementing the notification procedure (see Figure 10). This enables better scalability and usability.
In the third embodiment, a peer VNF is configured to define an internal set of notification events. When an event occurs, a notification trigger is due. The notification trigger generates a notification message. The notification message includes the information that needs to be shared and an identification of VNFs that are interested in such information. The notification message is transmitted from the peer VNF to the VNF manager. The VNF manager is configured to forward the notification message to the NFV orchestrator. The NFV orchestrator is configured to build and transmit notification messages to each instance of destination VNFs, and verify that these notification messages reach their destination (each transmission protocol may have its own way to acknowledge a successfully received message).
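The fan-out step of the third embodiment, where the NFV orchestrator builds a notification message for each instance of the destination VNFs and verifies delivery, can be sketched as follows. The instance registry and the boolean acknowledgment returned by `send` are assumptions for illustration; each transmission protocol has its own way to acknowledge a successfully received message.

```python
def fan_out(notification, instance_registry, send):
    """Sketch of the NFV orchestrator fan-out described above.

    For each destination VNF type named in the notification, a further
    notification message carrying the shared service information is sent
    to every deployed instance of that type. `send` is assumed to return
    True when the destination instance acknowledges the message.
    Returns the instances whose acknowledgement was not received.
    """
    undelivered = []
    for vnf_type in notification["destination_vnfs"]:
        for instance in instance_registry.get(vnf_type, []):
            message = {"to": instance, "service_info": notification["service_info"]}
            if not send(message):  # verify the message reached its destination
                undelivered.append(instance)
    return undelivered

# Example: two CG instances are deployed and both acknowledge the message.
registry = {"CG": ["cg-1", "cg-2"]}
delivered = []
notification = {"destination_vnfs": ["CG"], "service_info": {"gz_ip": "192.0.2.10"}}
failed = fan_out(notification, registry, lambda m: delivered.append(m) or True)
```

In a real deployment `send` would be an HTTP PUT (or Diameter request) rather than an in-memory call.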
The notification messages may be accomplished by using representational state transfer (REST). Alternatively, the Diameter protocol may be used (information elements delivered as attribute-value pairs (AVP)). HTTP messages using REST may be implemented as follows. A PUT message may be used to inform the destination VNFs that a notification event has occurred in the peer VNF. The PUT message may include the relevant information regarding the notification event, or it may include a uniform resource locator (URL) indicating where to find the information on the notification event. At any point of time (e.g. immediately after receiving the PUT message, after a configurable time period, or when the information is needed), the destination VNF may (optionally) trigger the transmission of a GET message to the peer VNF to retrieve the information related to the peer VNF. This allows synchronizing the information among VNFs.
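The PUT/GET exchange just described can be sketched with an in-memory stand-in for the peer VNF's notification resource; the resource path and stored fields are illustrative assumptions.

```python
# In-memory stand-in for the peer VNF's notification resources, keyed by
# the URL that the PUT message carries. Paths and fields are illustrative.
peer_resources = {"/notifications/42": {"gz_ip": "192.0.2.10"}}

def handle_put(destination_state, payload):
    """Destination VNF receives a PUT naming where to find the information."""
    destination_state["pending_url"] = payload["info_url"]

def handle_get(url):
    """Peer VNF answers a later GET by returning the stored information."""
    return peer_resources[url]

# The destination may retrieve the information at any point after the PUT
# (immediately, after a configurable period, or when the information is needed).
state = {}
handle_put(state, {"info_url": "/notifications/42"})
info = handle_get(state["pending_url"])  # synchronizes the information among VNFs
```

The point of the two-step exchange is that the PUT stays small while the destination controls when the (possibly larger) information is fetched.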
An embodiment involves transferring of charging data records (CDR) between a packet data network gateway (PGW) and a charging gateway (CG). In LTE, a subscriber's traffic is recorded in PGW (EPC), and operators charge their subscribers according to consumed traffic. CDRs are generated by PGW and sent to the charging gateway which is a part of the operator's billing domain. Both PGW and CG need to know each other's IP address in order to enable the transfer of CDRs.
In a static network deployment, PGW and CG addresses may be manually configured in each network element at a deployment phase. In a cloud deployment, PGW VMs and CG VMs may scale in or scale out according to demand. If a new PGW VM scales out, each CG in the network needs to know the IP address of the new PGW (in order to keep a connection for transferring CDRs). The cloud deployment automates the transfer of information (e.g. the IP address of the newly created PGW VM) to interested parties. For example, herein the notification procedure may involve the scaling in/out of a PGW VM as the notification trigger, each CG instance as the interested VNF, and the IP address (that has been created or released) as the information elements. The IP address that has been created or released may be the address of a Gz interface (the interface between the PGW and the CG). The REST message sent from the PGW instance to its VNF manager towards the network orchestrator may act as the notification message. The network orchestrator creates the REST messages towards each interested VNF.
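Applied to the PGW/CG case, the orchestrator would produce one message per CG instance carrying the newly created Gz address. The addresses, instance names, and field names below are illustrative assumptions.

```python
def pgw_scale_out_notification(new_gz_ip, cg_instances):
    """Sketch of the PGW scale-out case described above.

    When a new PGW VM scales out, each CG instance must learn the IP
    address of the new PGW's Gz interface in order to keep a connection
    for transferring CDRs. Returns one message per CG instance; the
    field names are illustrative.
    """
    return [
        {"to": cg, "event": "pgw_scale_out", "gz_ip": new_gz_ip}
        for cg in cg_instances
    ]

# Example: three CG instances are informed of the new PGW's Gz address.
messages = pgw_scale_out_notification("203.0.113.7", ["cg-1", "cg-2", "cg-3"])
```

A scale-in event would be handled symmetrically, carrying the released address so each CG can drop the connection.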
The information that is to be shared and the triggers for sharing the information may vary greatly from service to service. Thus the notification procedure allows the sharing of any information at any trigger. The notification procedure is also applicable to a case where some of the blocks of the service chain are legacy network elements (legacy NE). The legacy NE needs to implement a REST interface. For example, the legacy NE may be a legacy charging gateway (non-virtualized) which is not part of the cloud deployment but which may be reached from the cloud.
An embodiment enables a new instance of PGW to be automatically used by the operator's billing system. Let us now describe embodiments for network virtualization with reference to Figure 4, 5, 6 and 7.
Referring to Figure 4, the network node (such as a peer VNF) is configured to define 401 a set of predetermined notification events related to the peer VNF. In response to recognising 402 a predetermined notification event occurring in the peer VNF, the network node causes 403 transmission of a notification message towards a further network node (e.g. a VNF manager). The notification message includes at least one service information element to be shared by destination VNFs and an identification of said destination VNFs. The notification message may be transmitted 403 by using representational state transfer (REST), and the notification message may comprise a PUT message that includes a uniform resource locator (URL) indicating where to find information on the notification event. Alternatively, the notification message is transmitted 403 by using the Diameter protocol. The network node may receive 404 a GET message from a destination virtual network function via the further network node. The GET message includes a uniform resource locator (URL) indicating where to find information on the notification event. Based on the GET message, the network node retrieves 405 the information on the notification event, and transmits 405 towards the destination VNF (via the VNF manager/NFV orchestrator) service information related to the peer VNF.
Referring to Figure 5, the network node (such as a VNF manager) is configured to receive 501 a notification message from another network node (e.g. a peer VNF). The notification message includes at least one service information element to be shared by destination VNFs and an identification of said destination VNFs. The notification message may be received 501 by using representational state transfer (REST), and the notification message may comprise a PUT message that includes a uniform resource locator (URL) indicating where to find information on the notification event. Alternatively, the notification message is received 501 by using the Diameter protocol. The notification message may be forwarded 502 by the VNF manager to a further network node (such as an NFV orchestrator). The network node may receive 503 a GET message from a destination virtual network function via the NFV orchestrator. The GET message includes a uniform resource locator (URL) indicating where to find information on the notification event. The network node forwards 504 the GET message to the peer VNF. After that, the network node receives 505, from the peer VNF, service information related to the peer VNF, and forwards 506 the service information towards the destination VNF (via the NFV orchestrator).
Referring to Figure 6, the network node (such as an NFV orchestrator) is configured to receive 601 the notification message and, based on the receiving, create 602 and transmit 603 further notification messages to the destination VNFs. The further notification messages include said at least one service information element. The further notification message may be transmitted by using representational state transfer (REST). The further notification message may comprise a PUT message, wherein the PUT message includes a uniform resource locator (URL) indicating where to find information on the notification event. The NFV orchestrator may verify 604 that the further notification message reaches its destination, when a respective acknowledgement message is received 604 from the corresponding destination VNF. The network node may receive 605 a GET message from a destination virtual network function. The GET message includes a uniform resource locator (URL) indicating where to find information on the notification event. The network node forwards 606 the GET message to the peer VNF (via the VNF manager). After that, the network node receives 607, from the peer VNF via the VNF manager, service information related to the peer VNF, and forwards 608 the service information towards the destination VNF.
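The relay of the GET message from the destination VNF through the orchestrator and the VNF manager to the peer VNF, with the service information travelling back along the same chain, can be sketched as three forwarding hops. The hop functions and the resource path are illustrative assumptions.

```python
def peer_vnf(get_request):
    """Peer VNF retrieves the requested information on the notification event."""
    resources = {"/notifications/42": {"service_info": {"gz_ip": "192.0.2.10"}}}
    return resources.get(get_request["url"])

def vnf_manager(get_request):
    """VNF manager forwards the GET to the peer VNF and relays the answer back."""
    return peer_vnf(get_request)

def nfv_orchestrator(get_request):
    """NFV orchestrator forwards the GET via the VNF manager and relays the answer."""
    return vnf_manager(get_request)

# The destination VNF triggers the GET; the service information related to
# the peer VNF is returned through the same chain of network nodes.
reply = nfv_orchestrator({"url": "/notifications/42"})
```

Each hop here is a plain function call standing in for the REST (or Diameter) leg between the corresponding network nodes.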
Referring to Figure 7, the network node (such as a destination VNF) is configured to receive 701 the further notification message from the NFV orchestrator, and acknowledge 702 the receipt of the further notification message by transmitting a corresponding acknowledgement message to the NFV orchestrator. The network node may transmit 703 a GET message to the peer VNF (via the VNF manager/NFV orchestrator). The GET message includes a uniform resource locator (URL) indicating where to find information on the notification event. After that, the network node receives 704 service information related to the peer VNF (via the VNF manager/NFV orchestrator).
Thus in an embodiment, a network node defines 201 a set of predetermined notification events related to a peer VNF. In response to recognising a predetermined notification event occurring in the peer VNF, a notification message is transmitted 202, 204 towards another network node, the message including at least one service information element to be shared by destination VNFs and an identification of said destination VNFs. Said another network node receives 203, 205 the notification message, and creates 205 and transmits 206 further notification messages to said destination VNFs, the messages including said at least one service information element. Yet another network node receives 207 the further notification message indicating that a predetermined notification event has occurred in a peer VNF, the message indicating where to find information on the notification event, and retrieves the information on the notification event from the peer VNF.
An embodiment provides an apparatus comprising at least one processor and at least one memory including a computer program code, wherein the at least one memory and the computer program code are configured, with the at least one processor, to cause the apparatus to carry out the procedures of the above-described network element or network node. The at least one processor, the at least one memory, and the computer program code may thus be considered as an embodiment of means for executing the above-described procedures of the network element or the network node. Figure 11 illustrates a block diagram of a structure of such an apparatus. The apparatus may be comprised in the network element or in the network node, e.g. the apparatus may form a chipset or a circuitry in the network element or in the network node. In some embodiments, the apparatus is the network element or the network node. The apparatus comprises a processing circuitry 10 comprising the at least one processor. The processing circuitry 10 may comprise a communication interface 12 configured to receive a notification message indicating that a predetermined notification event has occurred in a peer virtual network function, the notification message including at least one service information element to be shared by destination virtual network functions and an identification of said destination virtual network functions. The communication interface 12 may be configured to receive the notification message, as described above, and output information on the received notification message to a further notification message generator 16 configured to create and transmit further notification messages to said destination virtual network functions, the further notification messages including said at least one service information element.
The processing circuitry 10 may comprise the circuitries 12 and 16 as sub-circuitries, or they may be considered as computer program modules executed by the same physical processing circuitry. The memory 20 may store one or more computer program products 24 comprising program instructions that specify the operation of the circuitries 12 and 16. The memory 20 may further store a database 26 comprising definitions for traffic flow monitoring, for example. The apparatus may further comprise a radio interface (not shown in Figure 11) providing the apparatus with radio communication capability with the terminal devices. The radio interface may comprise a radio communication circuitry enabling wireless communications and comprise a radio frequency signal processing circuitry and a baseband signal processing circuitry. The baseband signal processing circuitry may be configured to carry out the functions of a transmitter and/or a receiver. In some embodiments, the radio interface may be connected to a remote radio head comprising at least an antenna and, in some embodiments, radio frequency signal processing in a remote location with respect to the base station. In such embodiments, the radio interface may carry out only some of radio frequency signal processing or no radio frequency signal processing at all. The connection between the radio interface and the remote radio head may be an analogue connection or a digital connection. In some embodiments, the radio interface may comprise a fixed communication circuitry enabling wired communications.
An embodiment provides an apparatus comprising at least one processor and at least one memory including a computer program code, wherein the at least one memory and the computer program code are configured, with the at least one processor, to cause the apparatus to carry out the procedures of the above-described network element or network node. The at least one processor, the at least one memory, and the computer program code may thus be considered as an embodiment of means for executing the above-described procedures of the network element or the network node. Figure 12 illustrates a block diagram of a structure of such an apparatus. The apparatus may be comprised in the network element or in the network node, e.g. the apparatus may form a chipset or a circuitry in the network element or in the network node. In some embodiments, the apparatus is the network element or the network node. The apparatus comprises a processing circuitry 50 comprising the at least one processor. The processing circuitry 50 may comprise an event set manager 54 configured to define a set of predetermined notification events related to a peer virtual network function. The processing circuitry 50 may further comprise a notification event detector 52 configured to recognise a predetermined notification event occurring in the peer virtual network function. The notification event detector 52 may be configured to recognise the predetermined notification event, as described above, and output information on the predetermined notification event to a notification message generator 56 configured to cause transmission of a notification message towards a further network node, the notification message including at least one service information element to be shared by destination virtual network functions and an identification of said destination virtual network functions.
The processing circuitry 50 may comprise the circuitries 52 to 56 as sub-circuitries, or they may be considered as computer program modules executed by the same physical processing circuitry. The memory 60 may store one or more computer program products 64 comprising program instructions that specify the operation of the circuitries 52 to 56. The memory 60 may further store a database 66 comprising definitions for traffic flow monitoring, for example. The apparatus may further comprise a radio interface (not shown in Figure 12) providing the apparatus with radio communication capability with the terminal devices. The radio interface may comprise a radio communication circuitry enabling wireless communications and comprise a radio frequency signal processing circuitry and a baseband signal processing circuitry. The baseband signal processing circuitry may be configured to carry out the functions of a transmitter and/or a receiver. In some embodiments, the radio interface may be connected to a remote radio head comprising at least an antenna and, in some embodiments, radio frequency signal processing in a remote location with respect to the base station. In such embodiments, the radio interface may carry out only some of radio frequency signal processing or no radio frequency signal processing at all. The connection between the radio interface and the remote radio head may be an analogue connection or a digital connection. In some embodiments, the radio interface may comprise a fixed communication circuitry enabling wired communications.
As used in this application, the term 'circuitry' refers to all of the following: (a) hardware-only circuit implementations such as implementations in only analog and/or digital circuitry; (b) combinations of circuits and software and/or firmware, such as (as applicable): (i) a combination of processor(s) or processor cores; or (ii) portions of processor(s)/software including digital signal processor(s), software, and at least one memory that work together to cause an apparatus to perform specific functions; and (c) circuits, such as a microprocessor(s) or a portion of a microprocessor(s), that require software or firmware for operation, even if the software or firmware is not physically present.
This definition of 'circuitry' applies to all uses of this term in this application. As a further example, as used in this application, the term "circuitry" would also cover an implementation of merely a processor (or multiple processors) or portion of a processor, e.g. one core of a multi-core processor, and its (or their) accompanying software and/or firmware. The term "circuitry" would also cover, for example and if applicable to the particular element, a baseband integrated circuit, an application-specific integrated circuit (ASIC), and/or a field-programmable gate array (FPGA) circuit for the apparatus according to an embodiment of the invention.
The processes or methods described above in connection with Figures 1 to 12 may also be carried out in the form of one or more computer processes defined by one or more computer programs. The computer program shall be considered to encompass also a module of a computer program, e.g. the above-described processes may be carried out as a program module of a larger algorithm or a computer process. The computer program(s) may be in source code form, object code form, or in some intermediate form, and may be stored in a carrier, which may be any entity or device capable of carrying the program. Such carriers include transitory and/or non-transitory computer media, e.g. a record medium, computer memory, read-only memory, electrical carrier signal, telecommunications signal, and software distribution package. Depending on the processing power needed, the computer program may be executed in a single electronic digital processing unit or it may be distributed amongst a number of processing units.
The present invention is applicable to cellular or mobile communication systems defined above, but also to other suitable communication systems. The protocols used, the specifications of cellular communication systems, their network elements, and terminal devices develop rapidly. Such development may require changes to the described embodiments. Therefore, all words and expressions should be interpreted broadly; they are intended to illustrate, not to restrict, the embodiments.
It will be obvious to a person skilled in the art that, as the technology advances, the inventive concept can be implemented in various ways. The invention and its embodiments are not limited to the examples described above but may vary within the scope of the claims.

Claims

1. A method comprising
receiving, in a first network node, a notification message indicating that a predetermined notification event has occurred in a peer virtual network function, the notification message including at least one service information element to be shared by destination virtual network functions and an identification of said destination virtual network functions;
based on the receiving, creating and transmitting further notification messages from the first network node to said destination virtual network functions, the further notification messages including said at least one service information element.
2. A method according to claim 1, the method comprising verifying, in the first network node, that the further notification message reaches its destination, if a respective acknowledgement message is received from the corresponding destination virtual network function.
3. A method according to claim 1 or 2, the method comprising transmitting the notification message and/or the further notification message by using representational state transfer (REST).
4. A method according to claim 1, 2 or 3, wherein
the notification message and/or the further notification message comprise a PUT message, wherein the PUT message includes a uniform resource locator (URL) indicating where to find information on the notification event.
5. A method according to claim 4, the method comprising receiving, in the first network node, a GET message from a destination virtual network function; and
based on the GET message, transmitting, from the first network node to the peer virtual network function, a request message to retrieve the information on the notification event.
6. A method according to claim 1 or 2, the method comprising transmitting the notification message and/or the further notification message by using a Diameter protocol.
7. A method comprising
defining, in a second network node, a set of predetermined notification events related to a peer virtual network function;
in response to recognising a predetermined notification event occurring in the peer virtual network function, causing, in the second network node, transmission of a notification message towards a first network node, the notification message including at least one service information element to be shared by destination virtual network functions and an identification of said destination virtual network functions.
8. A method according to claim 7, the method comprising transmitting the notification message by using representational state transfer (REST).
9. A method according to claim 7 or 8, wherein
the notification message comprises a PUT message, wherein the PUT message includes a uniform resource locator (URL) indicating where to find information on the notification event.
10. A method according to claim 9, the method comprising receiving, in the second network node, a GET message from a destination virtual network function via the first network node; and
based on the GET message, retrieving the information on the notification event.
11. A method according to claim 7, the method comprising transmitting the notification message by using a Diameter protocol.
12. A method comprising
receiving, in a third network node, a notification message indicating that a predetermined notification event has occurred in a peer virtual network function, the notification message including at least one service information element indicating where to find information on the notification event;
based on the receiving, retrieving the information on the notification event from the peer virtual network function.
13. An apparatus comprising
at least one processor; and
at least one memory including a computer program code, wherein the at least one memory and the computer program code are configured, with the at least one processor, to cause the apparatus to
receive a notification message indicating that a predetermined notification event has occurred in a peer virtual network function, the notification message including at least one service information element to be shared by destination virtual network functions and an identification of said destination virtual network functions;
based on the receiving, create and transmit further notification messages from the first network node to said destination virtual network functions, the further notification messages including said at least one service information element.
14. An apparatus according to claim 13, wherein the at least one memory and the computer program code are configured, with the at least one processor, to cause the apparatus to
perform any of the method steps of claims 2 to 6.
15. An apparatus comprising
at least one processor; and
at least one memory including a computer program code, wherein the at least one memory and the computer program code are configured, with the at least one processor, to cause the apparatus to
define a set of predetermined notification events related to a peer virtual network function;
in response to recognising a predetermined notification event occurring in the peer virtual network function, cause transmission of a notification message towards a first network node, the notification message including at least one service information element to be shared by destination virtual network functions and an identification of said destination virtual network functions.
16. An apparatus according to claim 15, wherein the at least one memory and the computer program code are configured, with the at least one processor, to cause the apparatus to
perform any of the method steps of claims 8 to 11.
17. An apparatus comprising
at least one processor; and
at least one memory including a computer program code, wherein the at least one memory and the computer program code are configured, with the at least one processor, to cause the apparatus to
receive a notification message indicating that a predetermined notification event has occurred in a peer virtual network function, the notification message including at least one service information element indicating where to find information on the notification event;
based on the receiving, retrieve the information on the notification event from the peer virtual network function.
18. A computer program product embodied on a distribution medium readable by a computer and comprising program instructions which, when loaded into an apparatus, execute the method according to any of preceding claims 1 to 12.
19. A computer program product embodied on a non-transitory distribution medium readable by a computer and comprising program instructions which, when loaded into the computer, execute a computer process comprising causing a network node to perform any of the method steps of claims 1 to 12.
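For illustration only, the notification relay recited in claims 1 to 12 can be sketched as the following minimal simulation. All class names, method names, and the event URL are assumptions introduced for this sketch and are not part of the claims: a "first network node" receives a PUT-style notification from a peer virtual network function, fans out further notifications to the identified destination virtual network functions, verifies delivery via acknowledgements (claim 2), and relays a destination's GET for the full event details back to the peer (claim 5).

```python
# Hypothetical sketch of the claimed notification relay; names are
# illustrative assumptions, not part of the claims or any real API.

class PeerVNF:
    """Peer virtual network function where a notification event occurs."""

    def __init__(self):
        self.events = {}  # event URL -> event details

    def raise_event(self, event_url, details):
        # Record the event; the URL is what the PUT notification carries.
        self.events[event_url] = details

    def retrieve(self, event_url):
        # Answer a relayed request for the information on the event.
        return self.events.get(event_url)


class DestinationVNF:
    """Destination virtual network function receiving further notifications."""

    def __init__(self):
        self.received = []

    def receive_notification(self, service_info, event_url):
        self.received.append((service_info, event_url))
        return True  # acknowledge receipt


class NotificationRelay:
    """Plays the role of the 'first network node' of claims 1 to 5."""

    def __init__(self, peer_vnf):
        self.peer_vnf = peer_vnf
        self.acknowledged = set()  # destinations that confirmed receipt

    def put_notification(self, service_info, event_url, destinations, vnfs):
        # Handle the incoming PUT: create and forward further notifications
        # carrying the shared service information element(s).
        for dest_id in destinations:
            ack = vnfs[dest_id].receive_notification(service_info, event_url)
            if ack:  # verify delivery via the acknowledgement (claim 2)
                self.acknowledged.add(dest_id)

    def get_event_details(self, event_url):
        # Relay a destination's GET to the peer VNF (claim 5).
        return self.peer_vnf.retrieve(event_url)


# Example flow: one event, two destination VNFs.
peer = PeerVNF()
peer.raise_event("/events/42", {"cause": "scale-out"})
relay = NotificationRelay(peer)
vnfs = {"vnf-a": DestinationVNF(), "vnf-b": DestinationVNF()}
relay.put_notification({"tenant": "t1"}, "/events/42", ["vnf-a", "vnf-b"], vnfs)
```

In an actual REST deployment the in-process method calls above would be HTTP PUT and GET requests, and the Diameter variant of claims 6 and 11 would replace them with Diameter messages; this sketch only shows the fan-out and relay logic.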
PCT/EP2015/061561 2015-05-26 2015-05-26 Network virtualization WO2016188562A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/EP2015/061561 WO2016188562A1 (en) 2015-05-26 2015-05-26 Network virtualization


Publications (1)

Publication Number Publication Date
WO2016188562A1 true WO2016188562A1 (en) 2016-12-01

Family

ID=53276854

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2015/061561 WO2016188562A1 (en) 2015-05-26 2015-05-26 Network virtualization

Country Status (1)

Country Link
WO (1) WO2016188562A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10574740B2 (en) * 2015-08-06 2020-02-25 Telefonaktiebolaget Lm Ericsson (Publ) Method and apparatus for scaling in a virtualized network

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140201374A1 (en) * 2013-01-11 2014-07-17 Futurewei Technologies, Inc. Network Function Virtualization for a Network Device
US20140317261A1 (en) * 2013-04-22 2014-10-23 Cisco Technology, Inc. Defining interdependent virtualized network functions for service level orchestration


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"Network Functions Virtualisation (NFV); Management and Orchestration; Or-Vi reference point - Interface and Information Model Specification;GS NFV-IFA005", ETSI DRAFT; GS NFV-IFA005, EUROPEAN TELECOMMUNICATIONS STANDARDS INSTITUTE (ETSI), 650, ROUTE DES LUCIOLES ; F-06921 SOPHIA-ANTIPOLIS ; FRANCE, vol. ISG - NFV, no. V0.5.2, 12 May 2015 (2015-05-12), pages 1 - 125, XP014241465 *


Similar Documents

Publication Publication Date Title
US11677715B2 (en) Methods of and systems of service capabilities exposure function (SCEF) based internet-of-things (IOT) communications
JP6511535B2 (en) Network and application management using service layer capabilities
JP6535064B2 (en) Relay device billing
US11706321B2 (en) System and method for using T8 API to deliver data over an IP path
CN105580327A (en) Method for delivering notification messages in m2m system and devices for same
EP4072071A1 (en) Slice control method and apparatus
WO2013004486A1 (en) A node and method for communications handling
EP3520467A1 (en) System and method to facilitate group reporting of user equipment congestion information in a network environment
US20190028249A1 (en) Hierarchical arrangement and multiplexing of mobile network resource slices for logical networks
US11696167B2 (en) Systems and methods to automate slice admission control
Baba et al. Lightweight virtualized evolved packet core architecture for future mobile communication
JP7192140B2 (en) Policy management method and device
US20210219162A1 (en) Method and apparatus to support performance data streaming end-to-end (e2e) procedures
EP3011448A1 (en) Selection of virtual machines or virtualized network entities
WO2019015755A1 (en) Methods and nodes for providing or selecting a user traffic node
US11855856B1 (en) Manager for edge application server discovery function
US20220394595A1 (en) Communication method, apparatus, and system
WO2016188562A1 (en) Network virtualization
US20230345264A1 (en) Systems and methods for time-sensitive networking analytics
KR102244539B1 (en) Method and apparatus for acquiring location information of user equipment based on radio unit
WO2018065632A1 (en) Policy coordinator function for distributed policy control
EP4024913A1 (en) Network slice charging method and device
US20230148200A1 (en) Apparatus, methods, and computer programs
US20230379222A1 (en) Method to update 5g vn group topology update to af for efficient network management
EP4160992A1 (en) Apparatus, methods, and computer programs

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15726569

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15726569

Country of ref document: EP

Kind code of ref document: A1