WO2023046291A1 - Handling of monitoring messages - Google Patents

Handling of monitoring messages

Info

Publication number
WO2023046291A1
Authority
WO
WIPO (PCT)
Prior art keywords
address
network node
endpoint
endpoints
response message
Prior art date
Application number
PCT/EP2021/076344
Other languages
French (fr)
Inventor
Linus GILLANDER
Trevor NEISH
Original Assignee
Telefonaktiebolaget Lm Ericsson (Publ)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Telefonaktiebolaget Lm Ericsson (Publ) filed Critical Telefonaktiebolaget Lm Ericsson (Publ)
Priority to PCT/EP2021/076344 priority Critical patent/WO2023046291A1/en
Publication of WO2023046291A1 publication Critical patent/WO2023046291A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L43/00 Arrangements for monitoring or testing data switching networks
    • H04L43/02 Capturing of monitoring data
    • H04L43/026 Capturing of monitoring data using flow identification
    • H04L43/10 Active monitoring, e.g. heartbeat, ping or trace-route
    • H04L43/08 Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
    • H04L43/0823 Errors, e.g. transmission errors
    • H04L43/0829 Packet loss
    • H04L43/0852 Delays
    • H04L43/087 Jitter
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/40 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using virtualisation of network functions or resources, e.g. SDN or NFV entities
    • H04L61/00 Network arrangements, protocols or services for addressing or naming
    • H04L61/50 Address allocation
    • H04L61/5007 Internet protocol [IP] addresses

Definitions

  • This disclosure relates to techniques for sending or receiving monitoring messages.
  • IP internet protocol
  • vRAN virtualised radio access network
  • UPF user plane functions
  • SGW serving gateways
  • PGW packet data network gateways
  • each NF has used a single IP address per interface, e.g. per General Packet Radio Service Tunnelling Protocol (GTP) based interface.
  • GTP-based interfaces include 4G S1-U, 4G Sx, 4G S5-U, 4G S8-U, 5G N3, 5G N4 and 5G N9.
  • resources e.g. central processing units (CPU), physical servers, virtual machines, or containers
  • an internal load balancer has historically been used to allow the single, shared IP address to be used by all resources.
  • a load balancer has a visible cost in terms of CPU and input/output (I/O) resources.
  • 3GPP 3rd Generation Partnership Project
  • Fig. 1 shows four radio access network (RAN) nodes (eNodeB or gNodeB) 101, 102, 103 and 104, and two NFs (PGW, SGW or UPF) 105 and 106.
  • the RAN nodes are connected to the NFs via a 4G S1-U interface or 5G N3 interface (S1 U/N3) 108, and the NF is connected to the public IP network via a LTE SGi or 5G N6 interface (SGi/N6) 107.
  • the NF 105 maintains multiple resources, also referred to herein as peers, and each resource has a corresponding IP address, also referred to herein as endpoint IP addresses.
  • Each IP address is represented by a circle in Fig. 1.
  • an IP path is defined by two IP addresses, one at either end of the path.
  • an IP path is defined by an IP address at a RAN node and an IP address at a NF corresponding to a NF resource.
  • Each NF resource with its own IP address is referred to herein as an IP endpoint.
  • monitoring messages can be used for monitoring whether IP paths and/or IP endpoints are still active, and collecting performance-related statistics for the IP paths and/or IP endpoints, e.g. latency, jitter (i.e. variation in latency), packet drop, re-ordering statistics, and successfully transferred packets and volumes.
  • Such path messages are referred to herein as monitoring messages, and in particular monitoring request messages and monitoring response messages.
  • Monitoring messages can also include ping messages, echo messages or path management messages.
  • Examples of monitoring messages are GTP echo request packets and GTP echo response packets. Details of the echo response can be found in 3GPP technical specification 29.281 version 17.1.0, section 7.2.2.
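To make the message format concrete, the GTP-U echo request/response exchange can be sketched as follows. This is a minimal illustration of the wire format per TS 29.281 (8-octet header with the sequence-number flag set, TEID 0 for echo messages, and the mandatory Recovery IE in the response), not a complete GTP-U implementation:

```python
import struct

# GTP-U message types (3GPP TS 29.281, section 7.1)
ECHO_REQUEST = 0x01
ECHO_RESPONSE = 0x02
RECOVERY_IE = 14  # Recovery information element type

# Flags 0x32: version = 1, protocol type = GTP, sequence-number flag set.
FLAGS = 0x32

def build_echo_request(seq: int) -> bytes:
    # The length field counts the octets after the first 8 header octets:
    # sequence number (2) + N-PDU number (1) + next extension header type (1).
    tail = struct.pack("!HBB", seq, 0, 0)
    return struct.pack("!BBHI", FLAGS, ECHO_REQUEST, len(tail), 0) + tail

def build_echo_response(seq: int, restart_counter: int) -> bytes:
    # The response echoes the request's sequence number and carries the
    # mandatory Recovery IE (type octet followed by the restart counter).
    tail = struct.pack("!HBB", seq, 0, 0) + struct.pack("!BB", RECOVERY_IE, restart_counter)
    return struct.pack("!BBHI", FLAGS, ECHO_RESPONSE, len(tail), 0) + tail

def parse_echo(message: bytes) -> dict:
    # Decode the fixed header and the sequence number that follows it.
    flags, msg_type, length, teid = struct.unpack("!BBHI", message[:8])
    (seq,) = struct.unpack("!H", message[8:10])
    return {"type": msg_type, "length": length, "teid": teid, "seq": seq}
```

For echo messages the TEID is always 0; a request and its response are matched on the sequence number, which is how a node attributes a measured round-trip time to a specific probe.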
  • a further trend is to optimise communication networks by having fewer NF instances along the data path between a user equipment (UE) and the data network identified by a data network name (DNN), e.g. the internet.
  • Many customers are using a single UPF and/or combined SGW+PGW deployments. This creates a full mesh on the S1 U/N3 interface between the RAN and the packet core, which further increases the number of GTP paths seen on S1 U/N3.
  • IP paths that connect the same RAN node and the same NF via the same interface can be considered as equivalent for the purposes of evaluating performance metrics, i.e. there is no difference in the routed path. In other words, they will exhibit similar performance metrics (e.g. latency, jitter (i.e. variation in latency), packet drop, re-ordering statistics, and successfully transferred packets and volumes), and therefore only one of these equivalent IP paths needs to be monitored at any given time.
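By way of illustration (the field names and data layout here are hypothetical, not taken from the specification), grouping IP paths by the (RAN node, NF, interface) triple and probing one representative per group could look like:

```python
from collections import defaultdict

def select_monitored_paths(ip_paths):
    """Keep one representative per set of equivalent IP paths.

    Paths that connect the same RAN node and the same NF over the same
    interface are treated as equivalent (same routed path, similar
    latency/jitter/loss), so only one of them needs to be actively
    monitored at any given time.
    """
    groups = defaultdict(list)
    for path in ip_paths:
        key = (path["ran_node"], path["nf"], path["interface"])
        groups[key].append(path)
    # Probe only the first path of each equivalence group.
    return [equivalent[0] for equivalent in groups.values()]
```

For example, two RAN nodes each seeing three endpoint IP addresses of one UPF on N3 yield six IP paths but only two actively monitored paths, one per RAN node.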
  • the remote endpoint IP address is the only available identifier for the resources/peers maintained by a network function, and 3GPP technical specification 29.281 version 17.1.0 does not allow for endpoint IP address "equivalency".
  • IP address equivalency information is included in monitoring response messages sent by NFs that use multiple IP addresses.
  • a method in a first network node wherein the first network node maintains a plurality of internet protocol, IP, endpoints each with a corresponding IP address.
  • the method comprises sending, to a second network node, a monitoring response message relating to a first IP address of one of the IP endpoints.
  • the monitoring response message comprises IP address equivalency information that indicates that the plurality of IP endpoints are associated with each other.
  • a method in a second network node wherein a first network node maintains a plurality of internet protocol, IP, endpoints each with a corresponding IP address.
  • the method comprises receiving, from the first network node, a monitoring response message relating to a first IP address of one of the IP endpoints.
  • the monitoring response message comprises IP address equivalency information that indicates that the plurality of IP endpoints are associated with each other.
  • a first network node wherein the first network node maintains a plurality of internet protocol, IP, endpoints each with a corresponding IP address.
  • the first network node is configured to send, to a second network node, a monitoring response message relating to a first IP address of one of the IP endpoints.
  • the monitoring response message comprises IP address equivalency information that indicates that the plurality of IP endpoints are associated with each other.
  • a second network node wherein a first network node maintains a plurality of internet protocol, IP, endpoints each with a corresponding IP address.
  • the second network node is configured to receive, from the first network node, a monitoring response message relating to a first IP address of one of the IP endpoints.
  • the monitoring response message comprises IP address equivalency information that indicates that the plurality of IP endpoints are associated with each other.
  • a first network node wherein the first network node maintains a plurality of internet protocol, IP, endpoints each with a corresponding IP address.
  • the first network node comprises a processor and a memory, the memory containing instructions executable by the processor whereby the first network node is operative to send, to a second network node, a monitoring response message relating to a first IP address of one of the IP endpoints.
  • the monitoring response message comprises IP address equivalency information that indicates that the plurality of IP endpoints are associated with each other.
  • a second network node wherein a first network node maintains a plurality of internet protocol, IP, endpoints each with a corresponding IP address.
  • the second network node comprises a processor and a memory, the memory containing instructions executable by the processor whereby the second network node is operative to receive, from the first network node, a monitoring response message relating to a first IP address of one of the IP endpoints.
  • the monitoring response message comprises IP address equivalency information that indicates that the plurality of IP endpoints are associated with each other.
  • a computer program product comprising a computer readable medium having computer readable code embodied therein, the computer readable code being configured such that, on execution by a suitable computer or processor, the computer or processor is caused to perform the method according to the first aspect, the second aspect, or any embodiment thereof.
  • the eNB/gNB can optimise local path management by aggregating the statistics, reducing the number of path objects, and reducing the number of monitoring messages (e.g. GTP echo messages) transmitted and received.
  • when the NF performs horizontal scaling and starts using additional IP addresses, it does not automatically cause an increase in path-related load for the eNB/gNB.
  • the techniques described herein provide network nodes with enough information to optimise functionality relating to IP paths (e.g. GTP tunnels) to enable an extended life of existing hardware platforms in the face of cloud/5G Core (5GC) developments.
  • the techniques thereby result in fewer path objects for network management systems to manage, and less path-related signalling such as GTP echo messages, path statistics and events.
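As a sketch of how a receiving node might act on such a response (the interface below is purely illustrative; TS 29.281 defines no equivalency information element, which is precisely the gap these techniques address):

```python
class PathMonitor:
    """Tracks which remote endpoint IP addresses still need active probing.

    When a monitoring response advertises a set of equivalent endpoints,
    the monitor collapses them into one equivalence group and keeps a
    single active probe (and a single path object) for the whole group.
    """

    def __init__(self):
        self.active_probes = {}  # remote IP -> True if this IP is probed
        self.equivalence = {}    # remote IP -> frozenset of equivalent IPs

    def handle_response(self, source_ip, equivalent_ips):
        # Merge the responding endpoint with the advertised equivalents.
        group = frozenset(equivalent_ips) | {source_ip}
        representative = min(group)  # any deterministic choice works
        for ip in group:
            self.equivalence[ip] = group
            self.active_probes[ip] = (ip == representative)

    def probed_endpoints(self):
        return sorted(ip for ip, active in self.active_probes.items() if active)
```

Statistics collected on the representative path can then be attributed to every endpoint in the group, so horizontal scaling at the NF adds IP addresses without adding probes or path objects at the eNB/gNB.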
  • Fig. 1 illustrates part of a communication network using a multiple IP address approach.
  • Fig. 2 is an example of a communication system in accordance with some embodiments.
  • Fig. 3 is a core network node in accordance with some embodiments.
  • Fig. 4 is a radio access network node in accordance with some embodiments.
  • Fig. 5 is a block diagram illustrating a virtualization environment in which functions implemented by some embodiments may be virtualized.
  • Fig. 6 is a signalling diagram illustrating echo message handling.
  • Fig. 7 is a flow chart illustrating embodiments of the techniques described herein.
  • Fig. 8 is a flow chart illustrating embodiments of the techniques described herein.
  • Fig. 9 is a signalling diagram illustrating embodiments of the techniques described herein.
  • Fig. 10 is a flow chart illustrating embodiments of the techniques described herein.
  • Fig. 11 is a flow chart illustrating embodiments of the techniques described herein.
  • Fig. 12 is a signalling diagram illustrating embodiments of the techniques described herein.
  • Fig. 13 is a flow chart illustrating a method in a first network node in accordance with some embodiments.
  • Fig. 14 is a flow chart illustrating a method in a second network node in accordance with some embodiments.
  • Fig. 2 shows an example of a communication system 200 in accordance with some embodiments.
  • the communication system 200 includes a telecommunication network 202 that includes an access network 204, such as a radio access network (RAN), and a core network 206, which includes one or more core network nodes 208.
  • the access network 204 includes one or more radio access network nodes, such as radio access network nodes 210a and 210b (one or more of which may be generally referred to as access network nodes 210), or any other similar 3rd Generation Partnership Project (3GPP) access node or non-3GPP access point.
  • the access network nodes 210 facilitate direct or indirect connection of wireless devices (also referred to interchangeably herein as user equipment (UE)), such as by connecting UEs 212a, 212b (one or more of which may be generally referred to as UEs 212) to the core network 206 over one or more wireless connections.
  • the access network nodes 210 may be, for example, access points (APs) (e.g. radio access points), base stations (BSs) (e.g. radio base stations, Node Bs, evolved Node Bs (eNBs) and NR NodeBs (gNBs)).
  • Example wireless communications over a wireless connection include transmitting and/or receiving wireless signals using electromagnetic waves, radio waves, infrared waves, and/or other types of signals suitable for conveying information without the use of wires, cables, or other material conductors.
  • the communication system 200 may include any number of wired or wireless networks, network nodes, UEs, and/or any other components or systems that may facilitate or participate in the communication of data and/or signals whether via wired or wireless connections.
  • the communication system 200 may include and/or interface with any type of communication, telecommunication, data, cellular, radio network, and/or other similar type of system.
  • the wireless devices/UEs 212 may be any of a wide variety of communication devices, including wireless devices arranged, configured, and/or operable to communicate wirelessly with the network nodes 210 and other communication devices.
  • the access network nodes 210 are arranged, capable, configured, and/or operable to communicate directly or indirectly with the UEs 212 and/or with other network nodes or equipment in the telecommunication network 202 to enable and/or provide network access, such as wireless network access, and/or to perform other functions, such as administration in the telecommunication network 202.
  • the core network 206 includes one or more core network nodes (e.g. core network node 208) that are structured with hardware and software components. Features of these components may be substantially similar to those described with respect to the wireless devices/UEs and access network nodes, such that the descriptions thereof are generally applicable to the corresponding components of the core network node 208.
  • Example core network nodes include functions or network functions (NFs) of one or more of a Mobile Switching Center (MSC), Mobility Management Entity (MME), Home Subscriber Server (HSS), Access and Mobility Management Function (AMF), Session Management Function (SMF), Authentication Server Function (AUSF), Subscription Identifier De-concealing function (SIDF), Unified Data Management (UDM), Security Edge Protection Proxy (SEPP), Network Exposure Function (NEF), Serving Gateway (SGW), Packet Data Network Gateways (PGW), and/or a User Plane Function (UPF).
  • the communication system 200 of Fig. 2 enables connectivity between the wireless devices/UEs and network nodes.
  • the communication system may be configured to operate according to predefined rules or procedures, such as specific standards that include, but are not limited to: Global System for Mobile Communications (GSM); Universal Mobile Telecommunications System (UMTS); Long Term Evolution (LTE), and/or other suitable 2G, 3G, 4G, 5G standards, or any applicable future generation standard; wireless local area network (WLAN) standards, such as the Institute of Electrical and Electronics Engineers (IEEE) 802.11 standards (WiFi); and/or other suitable wireless communication standards, such as Worldwide Interoperability for Microwave Access (WiMax), Near Field Communication (NFC), Light Fidelity (LiFi), and/or low-power wide-area network (LPWAN) standards.
  • the telecommunication network 202 is a cellular network that implements 3GPP standardized features. Accordingly, the telecommunication network 202 may support network slicing to provide different logical networks to different devices that are connected to the telecommunication network 202. For example, the telecommunication network 202 may provide Ultra Reliable Low Latency Communication (URLLC) services to some UEs, while providing Enhanced Mobile Broadband (eMBB) services to other UEs, and/or Massive Machine Type Communication (mMTC)/Massive IoT services to yet further UEs.
  • the UEs 212 are configured to transmit and/or receive information without direct human interaction.
  • a UE may be designed to transmit information to the access network 204 on a predetermined schedule, when triggered by an internal or external event, or in response to requests from the access network 204.
  • a UE may be configured for operating in single- or multi-RAT or multi-standard mode.
  • a UE may operate with any one or combination of Wi-Fi, NR (New Radio) and LTE, i.e. being configured for multi-radio dual connectivity (MR-DC), such as E-UTRAN (Evolved-UMTS Terrestrial Radio Access Network) New Radio - Dual Connectivity (EN-DC).
  • Fig. 3 shows a core network node 300 in accordance with some embodiments.
  • core network node refers to equipment capable, configured, arranged and/or operable to communicate directly or indirectly with a UE and/or with other network nodes or equipment, in a telecommunication network.
  • core network nodes include, but are not limited to, nodes that include functions of one or more of a Mobile Switching Center (MSC), Mobility Management Entity (MME), Home Subscriber Server (HSS), Access and Mobility Management Function (AMF), Session Management Function (SMF), Authentication Server Function (AUSF), Subscription Identifier De-concealing function (SIDF), Unified Data Management (UDM), Security Edge Protection Proxy (SEPP), Network Exposure Function (NEF), Serving Gateway (SGW), Packet Data Network Gateways (PGW), and/or a User Plane Function (UPF).
  • the core network node 300 includes processing circuitry 302, a memory 304, a communication interface 306, and a power source 308, and/or any other component, or any combination thereof.
  • the core network node 300 may be composed of multiple physically separate components, which may each have their own respective components.
  • the processing circuitry 302 may comprise a combination of one or more of a microprocessor, controller, microcontroller, central processing unit, digital signal processor, application-specific integrated circuit, field programmable gate array, or any other suitable computing device, resource, or combination of hardware, software and/or encoded logic operable, either alone or in conjunction with other core network node 300 components, such as the memory 304, to provide core network node 300 functionality.
  • the processing circuitry 302 may be configured to cause the core network node to perform the methods as described with reference to Figs. 7, 8, 10, 11, 13 and/or 14.
  • the processing circuitry 302 includes a system on a chip (SOC).
  • the memory 304 may comprise any form of volatile or non-volatile computer-readable memory including, without limitation, persistent storage, solid-state memory, remotely mounted memory, magnetic media, optical media, random access memory (RAM), read-only memory (ROM), mass storage media (for example, a hard disk), removable storage media (for example, a flash drive, a Compact Disk (CD) or a Digital Video Disk (DVD)), and/or any other volatile or non-volatile, non-transitory device-readable and/or computer-executable memory devices that store information, data, and/or instructions that may be used by the processing circuitry 302.
  • the memory 304 may store any suitable instructions, data, or information, including a computer program, software, an application including one or more of logic, rules, code, tables, and/or other instructions capable of being executed by the processing circuitry 302 and utilized by the core network node 300.
  • the memory 304 may be used to store any calculations made by the processing circuitry 302 and/or any data received via the communication interface 306.
  • the processing circuitry 302 and memory 304 are integrated.
  • the communication interface 306 is used in wired or wireless communication of signalling and/or data between network nodes, the access network, the core network, and/or a UE. As illustrated, the communication interface 306 comprises port(s)/terminal(s) 316 to send and receive data, for example to and from a network over a wired connection.
  • the communication interface 306, and/or the processing circuitry 302 may be configured to perform any receiving operations and/or certain obtaining operations described herein as being performed by the core network node. Any information, data and/or signals may be received from an access network node (e.g. eNB or gNB), another core network node and/or any other network node or network equipment. Similarly, the communication interface 306, and/or the processing circuitry 302 may be configured to perform any transmitting operations described herein as being performed by the core network node. Any information, data and/or signals may be transmitted to an access network node, another core network node and/or any other network node or network equipment.
  • the power source 308 provides power to the various components of core network node 300 in a form suitable for the respective components (e.g. at a voltage and current level needed for each respective component).
  • the power source 308 may further comprise, or be coupled to, power management circuitry to supply the components of the core network node 300 with power for performing the functionality described herein.
  • the core network node 300 may be connectable to an external power source (e.g. the power grid, an electricity outlet) via an input circuitry or interface such as an electrical cable, whereby the external power source supplies power to power circuitry of the power source 308.
  • the power source 308 may comprise a source of power in the form of a battery or battery pack which is connected to, or integrated in, power circuitry. The battery may provide backup power should the external power source fail.
  • Embodiments of the core network node 300 may include additional components beyond those shown in Fig. 3 for providing certain aspects of the core network node's functionality, including any of the functionality described herein and/or any functionality necessary to support the subject matter described herein.
  • the core network node 300 may include user interface equipment to allow input of information into the core network node 300 and to allow output of information from the core network node 300. This may allow a user to perform diagnostic, maintenance, repair, and other administrative functions for the core network node 300.
  • Fig. 4 shows a radio access network node 400 in accordance with some embodiments.
  • radio access network node refers to equipment capable, configured, arranged and/or operable to communicate directly or indirectly with a UE and/or with other network nodes or equipment, in a telecommunication network.
  • network nodes include, but are not limited to, access network nodes such as access points (APs) (e.g. radio access points), base stations (BSs) (e.g. radio base stations, Node Bs, evolved Node Bs (eNBs) and NR NodeBs (gNBs)).
  • Base stations may be categorised based on the amount of coverage they provide (or, stated differently, their transmit power level) and so, depending on the provided amount of coverage, may be referred to as femto base stations, pico base stations, micro base stations, or macro base stations.
  • a base station may be a relay node or a relay donor node controlling a relay.
  • a network node may also include one or more (or all) parts of a distributed radio base station such as centralized digital units and/or remote radio units (RRUs), sometimes referred to as Remote Radio Heads (RRHs). Such remote radio units may or may not be integrated with an antenna as an antenna integrated radio.
  • Parts of a distributed radio base station may also be referred to as nodes in a distributed antenna system (DAS).
  • radio access network nodes include multiple transmission point (multi-TRP) 5G access nodes, multi-standard radio (MSR) equipment such as MSR BSs, network controllers such as radio network controllers (RNCs) or base station controllers (BSCs), base transceiver stations (BTSs), transmission points, transmission nodes, multi-cell/multicast coordination entities (MCEs), Operation and Maintenance (O&M) nodes, Operations Support System (OSS) nodes, Self-Organizing Network (SON) nodes, positioning nodes (e.g. Evolved Serving Mobile Location Centers (E-SMLCs)), and/or Minimization of Drive Tests (MDTs).
  • the radio access network node 400 includes processing circuitry 402, a memory 404, a communication interface 406, and a power source 408, and/or any other component, or any combination thereof.
  • the radio access network node 400 may be composed of multiple physically separate components (e.g. a NodeB component and a RNC component, or a BTS component and a BSC component, etc.), which may each have their own respective components.
  • where the radio access network node 400 comprises multiple separate components (e.g. BTS and BSC components), one or more of the separate components may be shared among several network nodes.
  • a single RNC may control multiple NodeBs.
  • each unique NodeB and RNC pair may in some instances be considered a single separate network node.
  • the radio access network node 400 may be configured to support multiple radio access technologies (RATs). In such embodiments, some components may be duplicated (e.g. separate memory 404 for different RATs) and some components may be reused (e.g. a same antenna 410 may be shared by different RATs).
  • the radio access network node 400 may also include multiple sets of the various illustrated components for different wireless technologies integrated into radio access network node 400, for example GSM, WCDMA, LTE, NR, WiFi, Zigbee, Z-wave, LoRaWAN, Radio Frequency Identification (RFID) or Bluetooth wireless technologies. These wireless technologies may be integrated into the same or different chip or set of chips and other components within radio access network node 400.
  • the processing circuitry 402 may comprise a combination of one or more of a microprocessor, controller, microcontroller, central processing unit, digital signal processor, application-specific integrated circuit, field programmable gate array, or any other suitable computing device, resource, or combination of hardware, software and/or encoded logic operable, either alone or in conjunction with other radio access network node 400 components, such as the memory 404, to provide radio access network node 400 functionality.
  • the processing circuitry 402 may be configured to cause the network node to perform the methods as described with reference to Figs. 8, 11 or 14.
  • the processing circuitry 402 includes a system on a chip (SOC). In some embodiments, the processing circuitry 402 includes one or more of radio frequency (RF) transceiver circuitry 412 and baseband processing circuitry 414. In some embodiments, the radio frequency (RF) transceiver circuitry 412 and the baseband processing circuitry 414 may be on separate chips (or sets of chips), boards, or units, such as radio units and digital units. In alternative embodiments, part or all of RF transceiver circuitry 412 and baseband processing circuitry 414 may be on the same chip or set of chips, boards, or units.
  • the memory 404 may comprise any form of volatile or non-volatile computer-readable memory including, without limitation, persistent storage, solid-state memory, remotely mounted memory, magnetic media, optical media, random access memory (RAM), read-only memory (ROM), mass storage media (for example, a hard disk), removable storage media (for example, a flash drive, a Compact Disk (CD) or a Digital Video Disk (DVD)), and/or any other volatile or non-volatile, non-transitory device-readable and/or computer-executable memory devices that store information, data, and/or instructions that may be used by the processing circuitry 402.
  • the memory 404 may store any suitable instructions, data, or information, including a computer program, software, an application including one or more of logic, rules, code, tables, and/or other instructions capable of being executed by the processing circuitry 402 and utilized by the radio access network node 400.
  • the memory 404 may be used to store any calculations made by the processing circuitry 402 and/or any data received via the communication interface 406.
  • the processing circuitry 402 and the memory 404 are integrated.
  • the communication interface 406 is used in wired or wireless communication of signalling and/or data between network nodes, the access network, the core network, and/or a UE. As illustrated, the communication interface 406 comprises port(s)/terminal(s) 416 to send and receive data, for example to and from a network over a wired connection.
  • the communication interface 406 also includes radio front-end circuitry 418 that may be coupled to, or in certain embodiments a part of, the antenna 410.
  • Radio front-end circuitry 418 comprises filters 420 and amplifiers 422.
  • the radio front-end circuitry 418 may be connected to an antenna 410 and processing circuitry 402.
  • the radio front-end circuitry may be configured to condition signals communicated between antenna 410 and processing circuitry 402.
  • the radio front-end circuitry 418 may receive digital data that is to be sent out to other network nodes or UEs via a wireless connection.
  • the radio front-end circuitry 418 may convert the digital data into a radio signal having the appropriate channel and bandwidth parameters using a combination of filters 420 and/or amplifiers 422.
  • the radio signal may then be transmitted via the antenna 410.
  • the antenna 410 may collect radio signals which are then converted into digital data by the radio front-end circuitry 418.
  • the digital data may be passed to the processing circuitry 402.
  • the communication interface may comprise different components and/or different combinations of components.
  • Fig. 5 is a block diagram illustrating a virtualization environment 500 in which functions implemented by some embodiments may be virtualized.
  • virtualizing means creating virtual versions of apparatuses or devices which may include virtualizing hardware platforms, storage devices and networking resources.
  • virtualization can be applied to any device described herein, or components thereof, and relates to an implementation in which at least a portion of the functionality is implemented as one or more virtual components.
  • Some or all of the functions described herein may be implemented as virtual components executed by one or more virtual machines (VMs) or containers (e.g. docker containers, lxc) implemented in one or more virtual environments 500.
  • the node may be entirely virtualized.
  • Applications 502 (which may alternatively be called software instances, virtual appliances, network functions, virtual nodes, virtual network functions, etc.) are run in the virtualization environment 500 to implement some of the features, functions, and/or benefits of some of the embodiments disclosed herein.
  • Hardware 504 includes processing circuitry, memory that stores software and/or instructions executable by hardware processing circuitry, and/or other hardware devices as described herein, such as a network interface, input/output interface, and so forth.
  • Software may be executed by the processing circuitry to instantiate one or more virtualization layers 506 (also referred to as hypervisors or virtual machine monitors (VMMs)), provide VMs 508a and 508b (one or more of which may be generally referred to as VMs 508), and/or perform any of the functions, features and/or benefits described in relation with some embodiments described herein.
  • the virtualization layer 506 may present a virtual operating platform that appears like networking hardware to the VMs 508.
  • the VMs 508 comprise virtual processing, virtual memory, virtual networking or interface and virtual storage, and may be run by a corresponding virtualization layer 506.
  • Different embodiments of the instance of a virtual appliance 502 may be implemented on one or more of the VMs 508, and the implementations may be made in different ways.
  • Virtualization of the hardware is in some contexts referred to as network function virtualization (NFV). NFV may be used to consolidate many network equipment types onto industry standard high volume server hardware, physical switches, and physical storage, which can be located in data centers, and customer premise equipment.
  • a VM 508 may be a software implementation of a physical machine that runs programs as if they were executing on a physical, non-virtualized machine.
  • Each of the VMs 508, and that part of the hardware 504 that executes that VM, be it hardware dedicated to that VM and/or hardware shared by that VM with others of the VMs, forms a separate virtual network element.
  • a virtual network function is responsible for handling specific network functions that run in one or more VMs 508 on top of the hardware 504 and corresponds to the application 502.
  • Hardware 504 may be implemented in a standalone network node with generic or specific components. Hardware 504 may implement some functions via virtualization. Alternatively, hardware 504 may be part of a larger cluster of hardware (e.g. such as in a data center or CPE) where many hardware nodes work together and are managed via management and orchestration 510, which, among others, oversees lifecycle management of applications 502.
  • hardware 504 is coupled to one or more radio units that each include one or more transmitters and one or more receivers that may be coupled to one or more antennas. Radio units may communicate directly with other hardware nodes via one or more appropriate network interfaces and may be used in combination with the virtual components to provide a virtual node with radio capabilities, such as a radio access node or a base station.
  • some signalling can be provided with the use of a control system 512 which may alternatively be used for communication between hardware nodes and radio units.
  • While computing devices described herein may include the illustrated combination of hardware components, other embodiments may comprise computing devices with different combinations of components. It is to be understood that these computing devices may comprise any suitable combination of hardware and/or software needed to perform the tasks, features, functions and methods disclosed herein. Determining, calculating, obtaining or similar operations described herein may be performed by processing circuitry, which may process information by, for example, converting the obtained information into other information, comparing the obtained information or converted information to information stored in the network node, and/or performing one or more operations based on the obtained information or converted information, and making a determination as a result of said processing.
  • computing devices may comprise multiple different physical components that make up a single illustrated component, and functionality may be partitioned between separate components.
  • a communication interface may be configured to include any of the components described herein, and/or the functionality of the components may be partitioned between the processing circuitry and the communication interface.
  • non-computationally intensive functions of any of such components may be implemented in software or firmware and computationally intensive functions may be implemented in hardware.
  • processing circuitry executing instructions stored in memory, which in certain embodiments may be a computer program product in the form of a non-transitory computer-readable storage medium.
  • some or all of the functionality may be provided by the processing circuitry without executing instructions stored on a separate or discrete device-readable storage medium, such as in a hard-wired manner.
  • the processing circuitry can be configured to perform the described functionality. The benefits provided by such functionality are not limited to the processing circuitry alone or to other components of the computing device, but are enjoyed by the computing device as a whole, and/or by end users and a wireless network generally.
  • Fig. 6 is a signalling diagram demonstrating the use of monitoring messages in existing network implementations.
  • An eNB/gNB 601 is shown communicating with a UPF 602.
  • the UPF maintains three resources labelled as responder X.1, responder X.2 and responder X.3. Each of these resources has an IP endpoint address, and therefore there is an IP path between the eNB/gNB 601 and each of responder X.1, responder X.2 and responder X.3.
  • the UPF also comprises a controller.
  • the monitoring messages are GTP echo request and response messages.
  • the eNB/gNB 601 sends periodic GTP echo request messages to responder X.1 (signals 603 and 605), and responder X.1 responds to these with GTP echo response messages (signal 604 in response to the request message in signal 603, and signal 606 in response to the request message in signal 605).
  • the eNB/gNB 601 also sends periodic GTP echo request messages to responder X.2 (signals 607 and 609), and responder X.2 responds to these with GTP echo response messages (signal 608 in response to the request message in signal 607, and signal 610 in response to the request message in signal 609).
  • since responder X.2 is maintained by the same UPF 602 as responder X.1, the IP path between the eNB/gNB 601 and responder X.2 should be equivalent to the IP path between the eNB/gNB 601 and responder X.1 in terms of performance metrics such as latency. As such, the results of monitoring these two IP paths are expected to be the same, and it is not necessary for the eNB/gNB to monitor both IP paths separately. The same is true for responder X.3 (signals 611, 612, 613 and 614). These additional, unnecessary monitoring messages represent an inefficient use of signalling resources and place an unnecessary burden on the eNB/gNB 601.
  • a NF that uses multiple IP addresses for its own resources can indicate to a RAN node or other NF/network node that these IP addresses are associated with each other.
  • This additional information - referred to as IP address equivalency information - can be included in a monitoring response message, e.g. a GTP echo response message.
  • the RAN node or other NF/network node can use this information to determine which IP paths are equivalent to each other and therefore do not need to be separately monitored.
  • the RAN node or other NF/network node can reduce the number of monitoring messages that it sends to, and receives from, the NF, and can aggregate performance-related statistics for the different IP endpoints maintained by the NF.
  • the IP address equivalency information identifies an IP subnet that a particular IP endpoint relates to or is part of.
  • the IP addresses used by a single NF are allocated to the same IP subnet and this IP subnet can be indicated to the RAN node or other NF/network node in a monitoring response message. All IP addresses in the same subnet belong to the same IP routing domain. Any IP addresses within a particular IP subnet can then be treated by the receiver of the monitoring response message as a single path object.
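The subnet-based equivalency check described above can be sketched in a few lines of Python using the standard-library `ipaddress` module. This is an illustrative sketch only; the function name and the example addresses are assumptions, not part of the specification:

```python
import ipaddress

def same_path_object(endpoint_a: str, endpoint_b: str, subnet: str) -> bool:
    """Treat two endpoint IP addresses as a single path object when both
    fall within the advertised equivalent IP subnet (same routing domain)."""
    net = ipaddress.ip_network(subnet)
    return (ipaddress.ip_address(endpoint_a) in net
            and ipaddress.ip_address(endpoint_b) in net)
```

With an advertised subnet of 10.0.8.0/24, endpoints 10.0.8.1 and 10.0.8.2 would be treated as one path object, while an address in 10.0.9.0/24 would not.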
  • Embodiments using IP subnets to convey IP address equivalency information are illustrated by Figs. 7, 8 and 9.
  • the IP address equivalency information is a unique identifier for the NF that maintains the IP endpoints.
  • a unique identifier for the NF is included in the monitoring response message for a particular IP endpoint maintained by the NF.
  • a unique NF identifier is any NF identifier that, for practical purposes, enables the NF to be uniquely identified in the communication network, or in any communication network, i.e. the probability that the identifier is not unique is small/sufficiently close to zero for practical purposes.
  • the unique identifier could be, for example, a 32 or 128 bit value that can be read as an IPv4 address or IPv6 address.
  • Examples of NF identifiers include an IPv6 global scope address, a zero-padded IPv4 address and a UUID (universally unique identifier) according to "A Universally Unique IDentifier (UUID) URN Namespace", RFC 4122, July 2005.
  • a comparable example is the 'router ID' in the Open Shortest Path First (OSPF) routing protocol (RFC 2328). All IP endpoint addresses having the same NF identifier in the monitoring response message (e.g. GTP echo response packet) can be treated by the receiver of the monitoring response message as the same path object.
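The identifier forms mentioned above (a zero-padded IPv4 address or a UUID, each readable as a 128-bit value) could be produced as follows. This is a hedged sketch; the function names are invented for illustration:

```python
import ipaddress
import uuid

def nf_id_from_ipv4(addr: str) -> int:
    """Zero-pad a 32-bit IPv4 address into a 128-bit NF identifier
    (the upper 96 bits are zero, so it reads like an IPv6-sized value)."""
    return int(ipaddress.IPv4Address(addr))

def nf_id_from_uuid() -> int:
    """An RFC 4122 random UUID also gives a 128-bit value that is,
    for practical purposes, unique across the network."""
    return uuid.uuid4().int
```

Any two echo responses carrying the same 128-bit value would then mark their endpoints as belonging to the same path object.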
  • OSPF Open Shortest Path First
  • IP subnets or NF identifiers are compatible with the current methods of identifying an IP path. Rather than the receiver of a monitoring response message having to identify each IP path using the remote IP address (as in current implementations), a subnet address or an NF identifier that is common to multiple IP paths can be used instead with little or no changes to current management systems such as alarms and counters.
  • Embodiments that use IP subnet addresses as the IP address equivalency information have an additional advantage over embodiments that rely on NF identifiers as the equivalency information. Namely, without having to send a monitoring request message to the NF, a RAN node or other network node can determine whether a newly established IP endpoint is equivalent to any other existing IP endpoints by comparing the IP subnet of the new IP endpoint with the IP subnet belonging to known existing IP endpoints.
  • a RAN node or other network node cannot determine whether a newly established IP endpoint is equivalent to any other existing IP endpoints until it receives the NF identifier for that newly established IP endpoint, e.g. in a monitoring response message.
  • Fig. 7 is a flow chart illustrating a method in a responder of an NF in accordance with embodiments of the techniques described herein.
  • the flow chart begins with a responder in the NF receiving a GTP echo request packet addressed to a tunnel endpoint IP address that is maintained by the responder.
  • the responder in the NF may be, for example, Responder X.1 in the UPF 602 shown in Fig. 6.
  • the responder determines whether this tunnel endpoint IP address is within an equivalent IP subnetwork. That is, the responder determines whether this tunnel endpoint IP address is in the same IP subnet as other tunnel endpoint IP addresses maintained by responders in the NF. In some embodiments, the NF does this by checking a list of equivalent IP subnets and comparing the endpoint IP address with this list. A list of equivalent IP subnets may be maintained by the NF. For example, step 704 shows a responder controller (i.e. a controller for the responders in the NF) updating the responder with the list of IP subnets. The responder controller may be comprised in the NF. In other embodiments, a controller that is external to the NF may maintain a list of equivalent IP subnets and send it to the NF.
  • the responder assembles a GTP echo response packet including the endpoint IP address equivalency information, i.e. the IP subnet corresponding to the endpoint IP address to which the request packet was sent.
  • this response packet is sent by the responder in response to the received GTP echo request packet.
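The responder behaviour of Fig. 7 (receive the echo request, look up the equivalent subnet, assemble the response) might be sketched as below. The message layout and field names are assumptions for illustration; a real GTP echo response would carry this information in a protocol-defined field or extension:

```python
import ipaddress

# List of equivalent IP subnets provisioned by the responder controller
# (step 704); a single illustrative subnet here.
EQUIVALENT_SUBNETS = [ipaddress.ip_network("10.0.8.0/24")]

def build_echo_response(endpoint_ip: str) -> dict:
    """On receiving a GTP echo request addressed to endpoint_ip, include
    the matching equivalent subnet (if any) in the echo response."""
    addr = ipaddress.ip_address(endpoint_ip)
    response = {"type": "echo_response", "endpoint": endpoint_ip}
    for net in EQUIVALENT_SUBNETS:
        if addr in net:
            # IP address equivalency information for the receiver
            response["equivalent_subnet"] = str(net)
            break
    return response
```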
  • Fig. 8 is a flow chart illustrating a method in a RAN node (e.g. an eNB or gNB) in accordance with embodiments of the techniques described herein.
  • the flow chart begins with the RAN node receiving a GTP echo response packet from a responder in a NF.
  • this GTP echo response packet could correspond to the GTP echo response packet sent by the responder in step 703 of the flow chart shown in Fig. 7.
  • the RAN node matches the GTP echo response packet to an outstanding GTP echo request (i.e. the RAN node determines which earlier echo request this echo response packet relates to), and at step 802, the RAN node updates the status of the GTP path via which the GTP echo response packet was sent (e.g. by storing updated performance metrics, path availability, and/or peer availability).
  • the RAN node checks the response packet for IP address equivalency information for the IP endpoint address to which the echo request was sent.
  • the RAN node checks for an IP subnet for the IP address. If an IP subnet is indicated, the RAN node uses this information to optimise local path management (step 804). That is, the RAN node can identify whether it is monitoring other IP endpoint addresses in the same IP subnet (i.e. other IP endpoint addresses at the same NF), and determine that only one of those IP endpoint addresses needs to be monitored, enabling the statistics for that IP endpoint address to be used for the other IP addresses in the same subnet. For example, the statistics for equivalent IP paths can be aggregated, thereby reducing the number of path objects monitored by the RAN node. The RAN node will subsequently send periodic GTP echo request messages for only one of the equivalent IP paths.
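On the RAN-node side, this optimisation amounts to keeping one monitored path object per equivalent subnet rather than one per remote IP address. A minimal sketch, with the class and field names invented for illustration:

```python
class PathMonitor:
    """RAN-node bookkeeping: endpoints whose echo responses advertise
    the same equivalent subnet collapse into one monitored path object."""

    def __init__(self):
        # key (subnet string, or the address itself when no subnet is
        # advertised) -> the endpoint that keeps receiving echo requests
        self.path_objects = {}

    def on_echo_response(self, endpoint_ip, equivalent_subnet=None):
        """Return True if this endpoint should keep its own periodic echo
        requests, False if an equivalent path is already monitored and
        statistics can be aggregated onto it instead."""
        key = equivalent_subnet or endpoint_ip
        if key in self.path_objects:
            return False
        self.path_objects[key] = endpoint_ip
        return True
```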
  • Fig. 9 is a signalling diagram showing the use of monitoring messages in accordance with the embodiments described above with reference to Figs. 7 and 8.
  • the setup is similar to that described with reference to Fig. 6.
  • An eNB/gNB 901 is shown communicating with a UPF 902.
  • the UPF maintains three resources labelled as responder X.1, responder X.2 and responder X.3. Each of these resources has an IP endpoint address, and therefore there are three IP paths between the eNB/gNB 901 and the three NF resources, responder X.1, responder X.2 and responder X.3.
  • the UPF 902 also comprises a controller.
  • the controller maintains a list of IP subnets, which includes an IP subnet common to the IP endpoints corresponding to responders X.1 , X.2 and X.3.
  • the controller updates the responders with the details of the IP subnet, referred to as subnet X.
  • the controller can be external to the UPF.
  • the eNB/gNB 901 sends a GTP echo request message to responder X.1 (signal 903).
  • Responder X.1 responds with a GTP echo response message (signal 904) and includes in the message IP address equivalency information in the form of an identifier for the equivalent subnet X.
  • the eNB/gNB continues to send the periodic GTP echo messages to responder X.1 (signal 905).
  • the eNB/gNB 901 determines that it does not need to monitor responder X.2 because it can determine from the endpoint IP address for responder X.2 that it belongs to the same IP subnet as responder X.1, and responder X.1 is already being monitored. Therefore, for monitoring responder X.2, the eNB/gNB 901 sends periodic GTP echo request messages to responder X.1 only (signals 907 and 909), and does not send additional request messages to responder X.2.
  • the eNB/gNB 901 determines that it does not need to monitor responder X.3 because it can determine from the endpoint IP address for responder X.3 that it belongs to the same IP subnet as responder X.1, and responder X.1 is already being monitored. Therefore, for monitoring responder X.2 and responder X.3, the eNB/gNB 901 sends periodic GTP echo request messages to responder X.1 only, and does not send additional request messages to responder X.2 or X.3.
  • Figs. 10, 11 and 12 illustrate embodiments in which a NF identifier is used to convey the IP address equivalency information (and thus IP path equivalency) instead of IP subnets (as used in the embodiments illustrated by Figs. 7, 8 and 9).
  • Fig. 10 is a flow chart for a method in a responder of an NF in accordance with embodiments of the techniques described herein. The flow chart begins with the responder in the NF receiving a GTP echo request packet addressed to a tunnel endpoint IP address that is maintained by the responder.
  • the responder in the NF may be, for example, Responder X.1 in the UPF 602 shown in Fig. 6.
  • the responder (also referred to as the GTP echo responder) checks if the endpoint IP address in the GTP echo request packet has equivalent endpoint IP addresses at the NF. That is, the responder determines whether this tunnel endpoint IP address is associated with the same NF identifier as other tunnel endpoint IP addresses.
  • a list of equivalent endpoint IP addresses may be maintained by the NF.
  • step 1004 shows a responder controller (i.e. a controller for the responders in the NF) updating the responder with the list of equivalent endpoint IP addresses corresponding to the NF identifier.
  • the responder controller may be comprised in the NF. In other embodiments, a controller that is external to the NF may maintain the list and send it to the NF.
  • the NF assembles a GTP echo response packet including the unique NF identifier for the NF that maintains the endpoint IP address to which the request packet was sent.
  • this response packet is sent by the responder in response to the received GTP echo request packet.
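The Fig. 10 responder differs from the Fig. 7 responder only in what it puts in the response: the NF's unique identifier instead of a subnet. A sketch under the same caveats (the identifier value, endpoint set and message fields are invented for illustration):

```python
# Provisioned by the responder controller (step 1004): the NF's unique
# identifier and the set of equivalent endpoint addresses it maintains.
NF_IDENTIFIER = "upf-1202"  # illustrative; could be a 128-bit value
EQUIVALENT_ENDPOINTS = {"10.0.8.1", "10.0.8.2", "10.0.8.3"}

def build_echo_response_with_nf_id(endpoint_ip: str) -> dict:
    """Include the unique NF identifier in the GTP echo response when
    the addressed endpoint has equivalent peers at this NF."""
    response = {"type": "echo_response", "endpoint": endpoint_ip}
    if endpoint_ip in EQUIVALENT_ENDPOINTS:
        response["nf_id"] = NF_IDENTIFIER
    return response
```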
  • Fig. 11 is a flow chart for a method in a RAN node (e.g. an eNB or gNB) in accordance with embodiments of the techniques described herein.
  • the flow chart begins with the RAN node receiving a GTP echo response packet from a responder in an NF.
  • this GTP echo response packet could correspond to the GTP echo response packet sent by the responder in step 1003 of the flow chart shown in Fig. 10.
  • the RAN node matches the GTP echo response packet to an outstanding GTP echo request (i.e. the RAN node determines which earlier echo request this echo response packet relates to), and at step 1102, the RAN node updates the status of the GTP path via which the GTP echo response packet was sent (e.g. by storing updated performance metrics, path availability, and/or peer availability).
  • the RAN node checks the response packet for IP address equivalency information for the IP endpoint address to which the echo request was sent.
  • the RAN node checks for a NF identifier for the NF that maintains the responder having the IP endpoint. If an NF identifier is indicated, this information can be stored by the RAN node.
  • the NF identifier is used to determine which of the endpoint IP addresses that the RAN node is monitoring are equivalent to each other, and thus to optimise local path management. That is, the RAN node can identify whether it is monitoring other IP endpoint addresses at the same NF (i.e. other IP endpoint addresses having the same NF identifier), and determine that only one of those IP endpoint addresses needs to be monitored, enabling the statistics for that IP endpoint address to be used for the other IP addresses having the same NF identifier. For example, the statistics for equivalent IP paths can be aggregated, thereby reducing the number of path objects monitored by the RAN node. Fig. 11 shows that this step is performed by a sender controller.
  • the controller can be comprised in the RAN node. In alternative embodiments, the controller can be external to the RAN node.
  • the RAN node will subsequently send periodic GTP echo request messages to only one of the equivalent IP endpoints.
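The sender-controller logic of Fig. 11 can be sketched like the subnet variant, but keyed on the NF identifier, and with the limitation noted earlier: equivalence is only learned once the first echo response carrying the identifier arrives. All names here are illustrative:

```python
class NfIdPathMonitor:
    """Endpoints sharing an NF identifier collapse into one monitored
    path object; membership is learned response by response."""

    def __init__(self):
        self.monitored = {}  # nf_id -> endpoint currently being echoed
        self.members = {}    # nf_id -> all endpoints known to share it

    def on_echo_response(self, endpoint_ip, nf_id):
        """Return True if this endpoint should keep receiving periodic
        GTP echo requests, False if an equivalent endpoint is already
        being monitored on its behalf."""
        self.members.setdefault(nf_id, set()).add(endpoint_ip)
        if nf_id in self.monitored:
            return self.monitored[nf_id] == endpoint_ip
        self.monitored[nf_id] = endpoint_ip
        return True
```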
  • Fig. 12 is a signalling diagram showing the use of monitoring messages in accordance with the embodiments described above with reference to Figs. 10 and 11.
  • the setup is similar to the setups described with reference to Figs. 6 and 9.
  • An eNB/gNB 1201 is shown communicating with a UPF 1202.
  • the UPF maintains three resources labelled as responder X.1, responder X.2 and responder X.3.
  • the UPF 1202 also comprises a controller.
  • the controller maintains a list of the equivalent endpoint IP addresses maintained by the UPF and an identifier for the UPF.
  • the controller updates the responders with the unique UPF identifier, which in this embodiment is denoted '7'.
  • When a new session is established for monitoring the GTP endpoint IP address for responder X.1, the eNB/gNB 1201 sends a GTP echo request message to responder X.1 (signal 1203). Responder X.1 responds with a GTP echo response message (signal 1204) and includes in the message IP address equivalency information in the form of the UPF identifier for the UPF 1202. The eNB/gNB continues to send periodic GTP echo messages to responder X.1 (signal 1205).
  • the eNB/gNB sends a GTP echo request message to responder X.2 (signal 1207).
  • Responder X.2 responds with a GTP echo response message (signal 1208) that includes the UPF identifier for UPF 1202.
  • the eNB/gNB determines from this response message that it need not monitor the endpoint IP address for responder X.2 because responder X.2 uses an IP path between the eNB/gNB 1201 and the UPF 1202 that can be considered equivalent to the IP path used by responder X.1, and responder X.1 is already being monitored. Therefore, the eNB/gNB 1201 continues to send periodic GTP echo request messages to responder X.1 only (signal 1209), and does not send additional request messages to responder X.2.
  • the eNB/gNB sends a GTP echo request message to responder X.3 (signal 1215) and receives a GTP echo response message comprising a NF identifier for UPF 1202 (signal 1216).
  • the eNB/gNB determines that it does not need to monitor the endpoint IP address for responder X.3 because it uses an equivalent IP path to responder X.1, and responder X.1 is already being monitored. Therefore, the eNB/gNB 1201 continues to send periodic GTP echo request messages to responder X.1 only (signal 1217), and does not start sending additional request messages to responder X.3.
  • the eNB/gNB 1201 can select another equivalent IP endpoint with the same NF identifier, e.g. responder X.2, and send periodic GTP echo request messages to responder X.2 instead of responder X.1. This is shown by signal 1223 of Fig. 12.
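The failover behaviour shown by signal 1223 (switching from responder X.1 to responder X.2 when X.1 stops answering) reduces to picking another member of the same equivalence set. A hypothetical helper, with invented names:

```python
def select_replacement(failed_endpoint, equivalent_endpoints):
    """When the monitored endpoint stops answering echo requests, pick
    another endpoint with the same NF identifier to monitor instead;
    return None if no alternative exists."""
    candidates = sorted(set(equivalent_endpoints) - {failed_endpoint})
    return candidates[0] if candidates else None
```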
  • Fig. 13 is a flow chart illustrating a method in a first network node in accordance with the techniques described herein.
  • the first network node maintains a plurality of IP endpoints, each with a corresponding IP address (also referred to herein as an endpoint IP address).
  • the first network node can be a core network node as described with reference to Fig. 3, e.g. a UPF, SGW, PGW or a combined SGW+PGW deployment.
  • the IP endpoints can be resources or peers that are maintained by the first network node.
  • the method of Fig. 13 comprises a step 1300 of sending, to a second network node, a monitoring response message relating to a first IP address for one of the IP endpoints.
  • the second network node can be a radio access network node as described with reference to Fig. 4, or a fixed access network node, e.g. an access gateway function (AGF).
  • the second network node can be a core network function as described with reference to Fig. 3.
  • the monitoring response message can be a path management message relating to the status and/or performance of an IP path between the first network node and the second network node.
  • the monitoring response message can also relate to the status and/or performance of the first network node.
  • the monitoring response message is for indicating that an IP endpoint maintained by the first network node is still active and/or that there is connectivity between the IP endpoint and the second network node (such messages are sometimes referred to as ping messages).
  • the monitoring response message can comprise one or more of: a status of an IP path between the first network node and the second network node, an availability indication of an IP path between the first network node and the second network node, a performance metric of an IP path between the first network node and the second network node, a status of the IP endpoint having the first IP address, an availability indication of the IP endpoint having the first IP address, a performance metric of the IP endpoint having the first IP address, a status of an or any IP endpoint having the same IP address equivalency information as the IP endpoint having the first IP address, an availability indication of an or any IP endpoint having the same IP address equivalency information as the IP endpoint having the first IP address, and a performance metric of an or any IP endpoint having the same IP address equivalency information as the IP endpoint having the first IP address.
  • the monitoring response message can be a GTP echo response message.
  • the monitoring response message comprises IP address equivalency information that indicates that the plurality of IP endpoints are associated with each other.
  • the IP address equivalency information indicates that the first IP address belongs to an IP subnet.
  • each of the IP addresses of the plurality of IP endpoints maintained by the first network node belong to the (same) IP subnet.
  • these embodiments can further comprise a step of determining whether a received monitoring request message is for an IP endpoint having an IP address belonging to an IP subnet.
  • the IP address equivalency information is a unique identifier for the first network node.
  • the unique identifier can be any of: an IPv6 global scope address, a zero-padded IPv4 address, or a universally unique identifier, UUID.
  • the IP address equivalency information indicates that the plurality of IP endpoints are maintained by the same network node.
  • IP address equivalency information for any of the plurality of IP endpoints that are maintained by the first network node may indicate that the IP endpoint is maintained by the first network node.
  • the IP address equivalency information may indicate that the plurality of IP endpoints use equivalent IP paths between the first network node and the second network node.
  • Equivalent IP paths can mean that the IP paths exhibit the same performance-related metrics. For example, IP path equivalency can mean that the latency of one of the IP paths is likely to be representative of the latency of the other equivalent IP paths. Similarly, if there is an active connection on one of the IP paths, there will likely also be an active connection on the other equivalent IP paths. If a plurality of IP endpoints are known to use equivalent IP paths, this can indicate that only one of the plurality of IP endpoints needs to be monitored and/or that only one of the plurality of IP paths needs to be monitored.
  • the equivalent IP paths are via a common (i.e. the same) GTP-based interface for conveying data between the second network node and the first network node.
  • the equivalent IP paths can be via one of: a 4G S1-U interface, a 4G Sx interface, a 4G S5-U interface, a 4G S8-U interface, a 5G N3 interface, a 5G N4 interface or a 5G N9 interface.
  • the method of Fig. 13 can further comprise a step of receiving, from the second network node, a monitoring request message for an IP endpoint having the first IP address.
  • the monitoring request message is addressed to the first IP address.
  • the monitoring response message that is sent in step 1300 of Fig. 13 is sent in response to the received monitoring request message.
  • the monitoring request message can be a path management message for monitoring the status and/or performance of the IP path.
  • the monitoring request message can also be a message for monitoring the status and/or performance of the first network node.
  • the monitoring request message may be for determining whether an IP endpoint having the first IP address is still active, and/or whether there is connectivity between the IP endpoint and the second network node (such messages are sometimes referred to as ping messages).
  • the monitoring request message can comprise a request for one or more of: a status of an IP path between the first network node and the second network node, an availability indication of an IP path between the first network node and the second network node, a performance metric of an IP path between the first network node and the second network node, a status of the IP endpoint having the first IP address, an availability indication of the IP endpoint having the first IP address, a performance metric of the IP endpoint having the first IP address, a status of an or any IP endpoint having the same IP address equivalency information as the IP endpoint having the first IP address, an availability indication of an or any IP endpoint having the same IP address equivalency information as the IP endpoint having the first IP address and a performance metric of an or any IP endpoint having the same IP address equivalency information as the IP endpoint having the first IP address.
  • the monitoring request message can be a GTP echo request message.
  • the method of Fig. 13 can further comprise a step of receiving, from the second network node, periodic monitoring request messages for only one of the plurality of IP endpoints.
  • response messages are correspondingly only sent to the second network node for the one IP endpoint for which request messages are received. This constitutes a reduction in path messages compared to existing implementations.
  • the method of any of Figs. 7, 10 and 13 can be an optional implementation for a NF.
  • only network nodes, e.g. NFs, that are 'resilient' may implement the method.
  • the word 'resilient' here means that all of the IP endpoints that are maintained by the NF are available whenever the NF itself is available.
  • the inclusion of the IP address equivalency information in monitoring response messages could be implemented via private extension to products, or in the future as a standardised attribute or information element. There need not be any requirement for a node to support or include the additional IP address equivalency information.
  • the implementation of the techniques described herein for the first network node are therefore fully backwards compatible.
  • Fig. 14 is a flow chart illustrating a method in a second network node in accordance with the techniques described herein.
  • a first network node maintains a plurality of IP endpoints each with a corresponding IP address.
  • the second network node may be a radio access network node as described with reference to Fig. 4, a fixed access network node (e.g. AGF), or a core network function as described with reference to Fig. 3.
  • the first network node may be a core network function as described with reference to Fig. 3, e.g. a UPF, SGW, PGW or a combined SGW+PGW deployment.
  • the first network node may correspond to the first network node that performs the method described with reference to Fig. 13.
  • the method of Fig. 14 comprises a step 1400 of receiving, from the first network node, a monitoring response message relating to a first IP address of one of the IP endpoints.
  • the monitoring response message can be a path management message relating to the status and/or performance of an IP path between the first and second network nodes.
  • the monitoring response message can also relate to the status and/or performance of the first network node.
  • the monitoring response message is for indicating that an IP endpoint at the first network node is still active and/or that there is connectivity between the IP endpoint and the second network node (such messages are sometimes referred to as ping messages).
  • the monitoring response message can comprise one or more of: a status of an IP path between the first network node and the second network node, an availability indication of an IP path between the first network node and the second network node, a performance metric of an IP path between the first network node and the second network node, a status of the IP endpoint having the first IP address, an availability indication of the IP endpoint having the first IP address, a performance metric of the IP endpoint having the first IP address, a status of an or any IP endpoint having the same IP address equivalency information as the IP endpoint having the first IP address, an availability indication of an or any IP endpoint having the same IP address equivalency information as the IP endpoint having the first IP address and a performance metric of an or any IP endpoint having the same IP address equivalency information as the IP endpoint having the first IP address.
  • the monitoring response message can be a GTP echo response message.
  • the monitoring response message comprises IP address equivalency information that indicates that the plurality of IP endpoints are associated with each other.
  • the IP address equivalency information indicates that the first IP address belongs to an IP subnet, and each of the IP addresses of the plurality of IP endpoints maintained by the first network node belong to the (same) IP subnet.
  • the method of Fig. 14 can further comprise a step of determining whether a second IP address belongs to a same IP subnet as the first IP address.
  • the second IP address is one that is not already being monitored by the second network node. In other words, the second network node has not yet received any monitoring response messages relating to the second IP address.
  • Whether the second IP address belongs to the same IP subnet as the first IP address can be determined by comparing the second IP address with the known IP subnet for the first IP address. Indeed, the second IP address can be compared to all known IP subnets that are indicative of equivalent IP addresses.
  • If the second network node determines that the second IP address belongs to the same IP subnet as the first IP address, it determines not to send a monitoring request message relating to the second IP address. Otherwise, the second network node sends a monitoring request message relating to the second IP address.
  • the IP address equivalency information is a unique identifier for the first network node.
  • the unique identifier can be one of: an IPv6 global scope address, a zero-padded IPv4 address, or a universally unique identifier, e.g. UUID according to RFC 4122.
  • the IP address equivalency information indicates that the plurality of IP endpoints are maintained by the same network node.
  • IP address equivalency information for any of the plurality of IP endpoints that are maintained by the first network node may indicate that the IP endpoint is maintained by the first network node.
  • the IP address equivalency information may indicate that the plurality of IP endpoints use equivalent IP paths between the first network node and the second network node.
  • Equivalent IP paths can mean that the IP paths exhibit the same performance-related metrics. For example, the latency of one of the IP paths may be representative of the latency of the other equivalent IP paths. Similarly, if one of the IP paths is active, the other equivalent IP paths will likely also be active. As such, an indication that a plurality of IP endpoints use equivalent IP paths can indicate that only one of the plurality of IP endpoints needs to be monitored and/or only one of the plurality of IP paths needs to be monitored.
  • the equivalent IP paths are via a common (i.e. the same) GTP-based interface for conveying data between the second network node and the first network node.
  • the equivalent IP paths can be via any one of: a 4G S1-U interface, a 4G Sx interface, a 4G S5-U interface, a 4G S8-U interface, a 5G N3 interface, a 5G N4 interface or a 5G N9 interface.
  • the method of Fig. 14 can further comprise, prior to receiving the monitoring response message, sending, to the first network node, a monitoring request message for an IP endpoint having the first IP address.
  • the monitoring response message received in step 1400 of Fig. 14 is received in response to the monitoring request message sent to the first network node.
  • the monitoring request message can be a path management message for monitoring the status and/or performance of the IP path.
  • the monitoring request message can also be a message for monitoring the status and/or performance of the first network node.
  • the monitoring request message may be for determining whether an IP endpoint having the first IP address is still active and/or whether there is connectivity between the IP endpoint and the second network node (such messages are sometimes referred to as ping messages).
  • the monitoring request message can comprise a request for one or more of: a status of an IP path between the first network node and the second network node, an availability indication of an IP path between the first network node and the second network node, a performance metric of an IP path between the first network node and the second network node, a status of the IP endpoint having the first IP address, an availability indication of the IP endpoint having the first IP address, a performance metric of the IP endpoint having the first IP address, a status of an or any IP endpoint having the same IP address equivalency information as the IP endpoint having the first IP address, an availability indication of an or any IP endpoint having the same IP address equivalency information as the IP endpoint having the first IP address and a performance metric of an or any IP endpoint having the same IP address equivalency information as the IP endpoint having the first IP address.
  • the monitoring request message can be a GTP echo request message.
  • the method of Fig. 14 can further comprise the second network node using statistics and/or performance information relating to the IP endpoint with the first IP address as representative of statistics and/or performance information for other IP endpoints having the same IP address equivalency information.
  • Statistics and/or performance metrics can include latency, jitter, packet drops, and successfully transferred packets and volume.
  • the method can further comprise sending subsequent periodic monitoring request messages to only one of the plurality of IP endpoints that have the same IP address equivalency information.
  • the second network node can store the received IP address equivalency information. This information can be stored locally on the second network node by a controller. Alternatively the information can be stored externally and a controller can update the second network node when the information is required.
  • the method can further comprise a step of determining, based on stored IP address equivalency information, whether a third IP address has the same IP address equivalency information as any IP address already being monitored by the second network node.
  • the third IP address may or may not be an IP address to which the second network node has previously sent monitoring request message(s).
  • the second network node can determine not to send a monitoring request message to the third IP address. Otherwise, the method comprises sending a monitoring request message to the third IP address.
  • the method of any of Figs. 8, 11 and 14 can be an optional implementation for a network node. There need not be any requirement for a node to understand or use received IP address equivalency information. The implementation of the techniques described herein for the second network node are therefore fully backwards compatible.
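The subnet- and identifier-based decision logic described in the embodiments above can be sketched as follows. This is an illustrative sketch only: the class and method names are hypothetical and not prescribed by the described techniques, and Python's standard `ipaddress` module stands in for whatever address handling a real second network node (e.g. an eNB/gNB) would use.

```python
import ipaddress

class PathMonitor:
    """Illustrative second-network-node logic for deciding whether a new
    remote IP address needs its own monitoring request messages."""

    def __init__(self):
        self.monitored = set()      # IP addresses already being probed
        self.known_subnets = []     # subnets advertised as equivalent
        self.known_node_ids = {}    # monitored IP address -> unique node id

    def learn(self, ip, subnet=None, node_id=None):
        """Record IP address equivalency information carried in a
        monitoring response message from a first network node."""
        self.monitored.add(ip)
        if subnet is not None:
            self.known_subnets.append(ipaddress.ip_network(subnet))
        if node_id is not None:
            self.known_node_ids[ip] = node_id

    def should_probe(self, candidate_ip, node_id=None):
        """Return True if a monitoring request message should be sent
        to candidate_ip; False if an equivalent endpoint is monitored."""
        addr = ipaddress.ip_address(candidate_ip)
        # Subnet-based equivalency: any address in a known subnet is
        # already represented by a monitored endpoint.
        if any(addr in net for net in self.known_subnets):
            return False
        # Identifier-based equivalency: same node id as a monitored endpoint.
        if node_id is not None and node_id in self.known_node_ids.values():
            return False
        return True
```

Usage follows the second-aspect method: after a monitoring response for 10.0.1.1 advertising the subnet 10.0.1.0/24, `should_probe("10.0.1.7")` returns False, so no monitoring request message is sent for the second IP address.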


Abstract

A method in a first network node, wherein the first network node maintains a plurality of internet protocol, IP, endpoints each with a corresponding IP address, the method comprising: sending (1300), to a second network node, a monitoring response message relating to a first IP address of one of the IP endpoints, the monitoring response message comprising IP address equivalency information that indicates that the plurality of IP endpoints are associated with each other.

Description

Technical Field
This disclosure relates to techniques for sending or receiving monitoring messages.
With the current trend towards the virtualisation of network functions (NF) in communication networks, there is a related change in how internet protocol (IP) addresses are used by NFs. This is especially true for data plane NFs such as virtualised radio access network (vRAN) nodes, user plane functions (UPF), serving gateways (SGW) and packet data network gateways (PGW).
Historically, each NF has used a single IP address per interface, e.g. per General Packet Radio Service Tunnelling Protocol (GTP) based interface. Examples of GTP-based interfaces include 4G S1-U, 4G Sx, 4G S5-U, 4G S8-U, 5G N3, 5G N4 and 5G N9. If multiple resources (e.g. central processing units (CPU), physical servers, virtual machines, or containers) are used to process the interface, an internal load balancer has historically been used to divide the single, shared IP address by all resources.
However, in a cloud environment, a load balancer has a visible cost in terms of CPU and input/output (I/O) resources. For data plane intensive NFs, I/O is usually a bottleneck, which makes this additional cost of a load balancer very high. It is therefore preferable to avoid having a load balancer.
This can be achieved by assigning a different IP address to each resource within a single NF, such that multiple IP addresses are used for a single interface. This is fully supported by the 3rd Generation Partnership Project (3GPP) and has already been implemented in many products on the market.
This multiple IP address approach is depicted in Fig. 1, which shows four radio access network (RAN) nodes (eNodeB or gNodeB) 101, 102, 103 and 104, and two NFs (PGW, SGW or UPF) 105 and 106. In this example, the RAN nodes are connected to the NFs via a 4G S1-U interface or 5G N3 interface (S1U/N3) 108, and the NFs are connected to the public IP network via an LTE SGi or 5G N6 interface (SGi/N6) 107. Any of these nodes could be virtual nodes and, though not shown in Fig. 1, any of them could be distributed in a data centre.
The NF 105 maintains multiple resources, also referred to herein as peers, and each resource has a corresponding IP address, also referred to herein as endpoint IP addresses. Each IP address is represented by a circle in Fig. 1. For each of these IP addresses, there is a corresponding IP path between a RAN node and the NF 105, depicted in Fig. 1 as a double-headed arrow. As used herein, an IP path is defined by two IP addresses, one at either end of the path. Thus, in the example shown in Fig. 1, an IP path is defined by an IP address at a RAN node and an IP address at a NF corresponding to a NF resource. Each NF resource with its own IP address is referred to herein as an IP endpoint.
A direct consequence of using multiple IP addresses for a single interface is an increase in the number of IP paths. In turn, an increase in the number of IP paths causes an increase in monitoring messages that are transmitted via IP paths for the purpose of monitoring IP paths and/or IP endpoints. For example, monitoring messages can be used for monitoring whether IP paths and/or IP endpoints are still active, and collecting performance-related statistics for the IP paths and/or IP endpoints, e.g. latency, jitter (i.e. variation in latency), packet drop, re-ordering statistics, and successfully transferred packets and volumes. Such path messages are referred to herein as monitoring messages, and in particular monitoring request messages and monitoring response messages. Monitoring messages can also include ping messages, echo messages or path management messages. An example of monitoring messages are GTP echo request packets and GTP echo response packets. Details of the echo response can be found in 3GPP technical specification number 29.281 version 17.1.0, section 7.2.2.
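To make the monitoring messages discussed above concrete, the following sketch serialises a GTP-U echo request or response along the lines of 3GPP TS 29.281. The field layout (flags 0x32 with the sequence-number bit set, TEID 0 for path management, Recovery IE type 14 in a response) follows that specification, but the helper name and exact coding style are illustrative assumptions, not a normative implementation.

```python
import struct

GTPU_ECHO_REQUEST = 1   # message types per 3GPP TS 29.281
GTPU_ECHO_RESPONSE = 2
RECOVERY_IE = 14        # Recovery IE is mandatory in an echo response

def build_gtpu_echo(msg_type, seq, restart_counter=None):
    # Optional header part: sequence number (2 bytes), N-PDU number and
    # next-extension-header type (both 0 for echo messages).
    payload = struct.pack("!HBB", seq, 0, 0)
    if msg_type == GTPU_ECHO_RESPONSE:
        payload += struct.pack("!BB", RECOVERY_IE, restart_counter or 0)
    # Flags 0x32 = GTP version 1, protocol type GTP, S flag set. The TEID
    # is always 0 for path management messages; the length field counts
    # everything after the mandatory 8-byte header.
    return struct.pack("!BBHI", 0x32, msg_type, len(payload), 0) + payload
```

An echo request built this way is 12 bytes; a response carrying a restart counter is 14 bytes. The techniques described herein contemplate carrying the additional IP address equivalency information in such a response, e.g. via a private extension, as discussed later.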
A further trend is to optimise communication networks by having fewer NF instances along the data path between a user equipment, UE, and data network name, DNN, e.g. the internet. Many customers are using a single UPF and/or combined SGW+PGW deployments. This creates a full mesh on the S1 U/N3 interface between the RAN and the packet core, which increases even further the number of GTP paths seen on S1 U/N3.
In addition, there are new concepts and possibilities such as network slicing, edge UPFs, multi-access edge computing, MEC, and others that are soon to be implemented by customers and will further increase the number of paths to be handled by each RAN node, e.g. eNodeBs (eNB) and gNodeBs (gNB).
Existing RAN node implementations are designed for significantly fewer NF endpoint IP addresses per GTP interface than will be realised in the near future. It is anticipated that the existing implementations will not scale to the needed levels for the required statistics and path supervision (e.g. the higher number of GTP echo request and response messages) for all remote IP addresses. This is particularly the case for existing RAN node implementations that are limited by the resources of older generation base station hardware. For example, "PNF" (physical network function) base stations generally have a longer hardware life-cycle than current cloud-based infrastructure owing to the scale and geographic diversity of installations. Such hardware is less likely to be able to handle the anticipated higher numbers of GTP echo request and response messages. Solutions are therefore required to address the scaling issues related to the high number of peer GTP IP addresses.
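The scale of the problem can be illustrated with some back-of-the-envelope arithmetic. The figures below are purely illustrative and not taken from any particular deployment:

```python
# Each RAN node sees one GTP path per remote NF endpoint IP address.
ran_nodes = 1000          # eNBs/gNBs attached to the packet core (illustrative)
endpoints_per_nf = 32     # one IP address per UPF resource (no load balancer)
nfs = 4                   # UPFs in a full mesh with the RAN (illustrative)

paths_per_ran_node = nfs * endpoints_per_nf       # 128 paths to supervise
total_paths = ran_nodes * paths_per_ran_node      # 128000 paths network-wide

# With IP address equivalency information, each RAN node only needs to
# supervise one representative path per remote NF:
aggregated_paths_per_ran_node = nfs               # 4
```

Since periodic echo signalling and per-path statistics scale with the number of supervised paths, collapsing 128 path objects per RAN node into 4 directly reduces the path-related load that older hardware must carry.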
Summary
For RAN node purposes, such as tracking and monitoring NF resources, it has been recognised that there may not be a need to track all IP paths corresponding to different IP endpoints within a single NF. IP paths that connect the same RAN node and the same NF via the same interface can be considered as equivalent for the purposes of evaluating performance metrics, i.e. there is no difference in the routed path. In other words, they will exhibit similar performance metrics (e.g. latency, jitter (i.e. variation in latency), packet drop, re-ordering statistics, and successfully transferred packets and volumes), and therefore only one of these equivalent IP paths needs to be monitored at any given time. However, in existing implementations, the remote endpoint IP address is the only available identifier for the resources/peers maintained by a network function, and 3GPP technical specification 29.281 version 17.1.0 does not allow for endpoint IP address "equivalency”.
The techniques proposed herein address these and other challenges. It is proposed to aggregate multiple peers that belong to the same NF so that it can be treated as one peer for the purposes of collecting performance- related statistics and remote NF supervision (e.g. monitoring whether a NF is active). To achieve this, IP address equivalency information is included in monitoring response messages sent by NFs that use multiple IP addresses.
According to a first aspect, there is provided a method in a first network node, wherein the first network node maintains a plurality of internet protocol, IP, endpoints each with a corresponding IP address. The method comprises sending, to a second network node, a monitoring response message relating to a first IP address of one of the IP endpoints. The monitoring response message comprises IP address equivalency information that indicates that the plurality of IP endpoints are associated with each other.
According to a second aspect, there is provided a method in a second network node, wherein a first network node maintains a plurality of internet protocol, IP, endpoints each with a corresponding IP address. The method comprises receiving, from the first network node, a monitoring response message relating to a first IP address of one of the IP endpoints. The monitoring response message comprises IP address equivalency information that indicates that the plurality of IP endpoints are associated with each other.
According to a third aspect, there is provided a first network node, wherein the first network node maintains a plurality of internet protocol, IP, endpoints each with a corresponding IP address. The first network node is configured to send, to a second network node, a monitoring response message relating to a first IP address of one of the IP endpoints. The monitoring response message comprises IP address equivalency information that indicates that the plurality of IP endpoints are associated with each other.
According to a fourth aspect, there is provided a second network node, wherein a first network node maintains a plurality of internet protocol, IP, endpoints each with a corresponding IP address. The second network node is configured to receive, from the first network node, a monitoring response message relating to a first IP address of one of the IP endpoints. The monitoring response message comprises IP address equivalency information that indicates that the plurality of IP endpoints are associated with each other.
According to a fifth aspect, there is provided a first network node, wherein the first network node maintains a plurality of internet protocol, IP, endpoints each with a corresponding IP address. The first network node comprises a processor and a memory, the memory containing instructions executable by the processor whereby the first network node is operative to send, to a second network node, a monitoring response message relating to a first IP address of one of the IP endpoints. The monitoring response message comprises IP address equivalency information that indicates that the plurality of IP endpoints are associated with each other.
According to a sixth aspect, there is provided a second network node, wherein a first network node maintains a plurality of internet protocol, IP, endpoints each with a corresponding IP address. The second network node comprises a processor and a memory, the memory containing instructions executable by the processor whereby the second network node is operative to receive, from the first network node, a monitoring response message relating to a first IP address of one of the IP endpoints. The monitoring response message comprises IP address equivalency information that indicates that the plurality of IP endpoints are associated with each other.
According to a seventh aspect, there is provided a computer program product comprising a computer readable medium having computer readable code embodied therein, the computer readable code being configured such that, on execution by a suitable computer or processor, the computer or processor is caused to perform the method according to the first aspect, the second aspect, or any embodiment thereof.
By having a mechanism to indicate to RAN nodes (eNB/gNBs), as well as other network nodes, that multiple remote IP addresses are used by the same NF (e.g. SGW, PGW or UPF), the eNB/gNB can optimise local path management by aggregating the statistics, reducing the number of path objects, and reducing the number of monitoring messages (e.g. GTP echo messages) transmitted and received. Hence, even if the NF performs horizontal scaling and starts using additional IP addresses, it does not automatically cause an increase in path-related load for the eNB/gNB.
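The statistics aggregation described above can be sketched as follows. All names here are illustrative: the key is whatever IP address equivalency information the NF advertised (an IP subnet, a unique identifier, etc.), and a real eNB/gNB would of course maintain richer per-path state.

```python
from collections import defaultdict

# Aggregate path statistics under one key per remote NF rather than one
# key per remote IP address.
path_stats = defaultdict(lambda: {"probes": 0, "rtt_ms": []})

def record_echo_result(equivalency_key, rtt_ms):
    """Fold an echo round-trip measurement into the aggregated path object."""
    entry = path_stats[equivalency_key]
    entry["probes"] += 1
    entry["rtt_ms"].append(rtt_ms)

# Echo results for three endpoints of the same UPF collapse into a single
# path object, because all three advertised the same equivalency key.
for remote_ip, rtt in [("10.0.1.1", 4.2), ("10.0.1.2", 4.6), ("10.0.1.3", 4.0)]:
    record_echo_result("upf-1", rtt)
```

Because the three remote IP addresses map to one path object, horizontal scaling of the NF (adding further IP addresses with the same equivalency information) does not add path objects or path-related signalling at the RAN node.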
The techniques described herein provide network nodes with enough information to optimise functionality relating to IP paths (e.g. GTP tunnels) to enable an extended life of existing hardware platforms in the face of cloud/5G Core (5GC) developments. The techniques thereby result in fewer path objects for network management systems to manage, and less path-related signalling such as GTP echo messages, path statistics and events.
Brief Description of the Drawings
Some of the embodiments contemplated herein will now be described more fully with reference to the accompanying drawings, in which:
Fig. 1 illustrates part of a communication network using a multiple IP address approach;
Fig. 2 is an example of a communication system in accordance with some embodiments;
Fig. 3 is a core network node in accordance with some embodiments;
Fig. 4 is a radio access network node in accordance with some embodiments;
Fig. 5 is a block diagram illustrating a virtualization environment in which functions implemented by some embodiments may be virtualized;
Fig. 6 is a signalling diagram illustrating echo message handling;
Fig. 7 is a flow chart illustrating embodiments of the techniques described herein;
Fig. 8 is a flow chart illustrating embodiments of the techniques described herein;
Fig. 9 is a signalling diagram illustrating embodiments of the techniques described herein;
Fig. 10 is a flow chart illustrating embodiments of the techniques described herein;
Fig. 11 is a flow chart illustrating embodiments of the techniques described herein;
Fig. 12 is a signalling diagram illustrating embodiments of the techniques described herein;
Fig. 13 is a flow chart illustrating a method in a first network node in accordance with some embodiments; and
Fig. 14 is a flow chart illustrating a method in a second network node in accordance with some embodiments.
Detailed Description
Some of the embodiments contemplated herein will now be described more fully with reference to the accompanying drawings. Embodiments are provided by way of example to convey the scope of the subject matter to those skilled in the art.
Fig. 2 shows an example of a communication system 200 in accordance with some embodiments. In the example, the communication system 200 includes a telecommunication network 202 that includes an access network 204, such as a radio access network (RAN), and a core network 206, which includes one or more core network nodes 208. The access network 204 includes one or more radio access network nodes, such as radio access network nodes 210a and 210b (one or more of which may be generally referred to as access network nodes 210), or any other similar 3rd Generation Partnership Project (3GPP) access node or non-3GPP access point. The access network nodes 210 facilitate direct or indirect connection of wireless devices (also referred to interchangeably herein as user equipment (UE)), such as by connecting UEs 212a, 212b (one or more of which may be generally referred to as UEs 212) to the core network 206 over one or more wireless connections. The access network nodes 210 may be, for example, access points (APs) (e.g. radio access points), base stations (BSs) (e.g. radio base stations, Node Bs, evolved Node Bs (eNBs) and NR NodeBs (gNBs)).
Example wireless communications over a wireless connection include transmitting and/or receiving wireless signals using electromagnetic waves, radio waves, infrared waves, and/or other types of signals suitable for conveying information without the use of wires, cables, or other material conductors. Moreover, in different embodiments, the communication system 200 may include any number of wired or wireless networks, network nodes, UEs, and/or any other components or systems that may facilitate or participate in the communication of data and/or signals whether via wired or wireless connections. The communication system 200 may include and/or interface with any type of communication, telecommunication, data, cellular, radio network, and/or other similar type of system.
The wireless devices/UEs 212 may be any of a wide variety of communication devices, including wireless devices arranged, configured, and/or operable to communicate wirelessly with the network nodes 210 and other communication devices. Similarly, the access network nodes 210 are arranged, capable, configured, and/or operable to communicate directly or indirectly with the UEs 212 and/or with other network nodes or equipment in the telecommunication network 202 to enable and/or provide network access, such as wireless network access, and/or to perform other functions, such as administration in the telecommunication network 202.
The core network 206 includes one or more core network nodes (e.g. core network node 208) that are structured with hardware and software components. Features of these components may be substantially similar to those described with respect to the wireless devices/UEs and access network nodes, such that the descriptions thereof are generally applicable to the corresponding components of the core network node 208. Example core network nodes include functions or network functions (NFs) of one or more of a Mobile Switching Center (MSC), Mobility Management Entity (MME), Home Subscriber Server (HSS), Access and Mobility Management Function (AMF), Session Management Function (SMF), Authentication Server Function (AUSF), Subscription Identifier De-concealing function (SIDF), Unified Data Management (UDM), Security Edge Protection Proxy (SEPP), Network Exposure Function (NEF), Serving Gateway (SGW), Packet Data Network Gateways (PGW), and/or a User Plane Function (UPF).
As a whole, the communication system 200 of Fig. 2 enables connectivity between the wireless devices/UEs and network nodes. In that sense, the communication system may be configured to operate according to predefined rules or procedures, such as specific standards that include, but are not limited to: Global System for Mobile Communications (GSM); Universal Mobile Telecommunications System (UMTS); Long Term Evolution (LTE), and/or other suitable 2G, 3G, 4G, 5G standards, or any applicable future generation standard (e.g. 6G); wireless local area network (WLAN) standards, such as the Institute of Electrical and Electronics Engineers (IEEE) 802.11 standards (WiFi); and/or any other appropriate wireless communication standard, such as the Worldwide Interoperability for Microwave Access (WiMAX), Bluetooth, Z-Wave, Near Field Communication (NFC), ZigBee, LiFi, and/or any low-power wide-area network (LPWAN) standards such as LoRa and Sigfox.
In some examples, the telecommunication network 202 is a cellular network that implements 3GPP standardized features. Accordingly, the telecommunications network 202 may support network slicing to provide different logical networks to different devices that are connected to the telecommunication network 202. For example, the telecommunications network 202 may provide Ultra Reliable Low Latency Communication (URLLC) services to some UEs, while providing Enhanced Mobile Broadband (eMBB) services to other UEs, and/or Massive Machine Type Communication (mMTC)/Massive IoT services to yet further UEs.
In some examples, the UEs 212 are configured to transmit and/or receive information without direct human interaction. For instance, a UE may be designed to transmit information to the access network 204 on a predetermined schedule, when triggered by an internal or external event, or in response to requests from the access network 204. Additionally, a UE may be configured for operating in single- or multi-RAT or multi-standard mode. For example, a UE may operate with any one or combination of Wi-Fi, NR (New Radio) and LTE, i.e. being configured for multi-radio dual connectivity (MR-DC), such as E-UTRAN (Evolved-UMTS Terrestrial Radio Access Network) New Radio - Dual Connectivity (EN-DC).
Fig. 3 shows a core network node 300 in accordance with some embodiments. As used herein, core network node refers to equipment capable, configured, arranged and/or operable to communicate directly or indirectly with a UE and/or with other network nodes or equipment, in a telecommunication network. Examples of core network nodes include, but are not limited to, nodes that include functions of one or more of a Mobile Switching Center (MSC), Mobility Management Entity (MME), Home Subscriber Server (HSS), Access and Mobility Management Function (AMF), Session Management Function (SMF), Authentication Server Function (AUSF), Subscription Identifier De-concealing function (SIDF), Unified Data Management (UDM), Security Edge Protection Proxy (SEPP), Network Exposure Function (NEF), Serving Gateway (SGW), Packet Data Network Gateways (PGW), and/or a User Plane Function (UPF).
The core network node 300 includes processing circuitry 302, a memory 304, a communication interface 306, and a power source 308, and/or any other component, or any combination thereof. The core network node 300 may be composed of multiple physically separate components, which may each have their own respective components.
The processing circuitry 302 may comprise a combination of one or more of a microprocessor, controller, microcontroller, central processing unit, digital signal processor, application-specific integrated circuit, field programmable gate array, or any other suitable computing device, resource, or combination of hardware, software and/or encoded logic operable, either alone or in conjunction with other core network node 300 components, such as the memory 304, to provide core network node 300 functionality. For example, the processing circuitry 302 may be configured to cause the core network node to perform the methods as described with reference to Figs. 7, 8, 10, 11, 13 and/or 14. In some embodiments, the processing circuitry 302 includes a system on a chip (SOC). The memory 304 may comprise any form of volatile or non-volatile computer-readable memory including, without limitation, persistent storage, solid-state memory, remotely mounted memory, magnetic media, optical media, random access memory (RAM), read-only memory (ROM), mass storage media (for example, a hard disk), removable storage media (for example, a flash drive, a Compact Disk (CD) or a Digital Video Disk (DVD)), and/or any other volatile or non-volatile, non-transitory device-readable and/or computer-executable memory devices that store information, data, and/or instructions that may be used by the processing circuitry 302. The memory 304 may store any suitable instructions, data, or information, including a computer program, software, an application including one or more of logic, rules, code, tables, and/or other instructions capable of being executed by the processing circuitry 302 and utilized by the core network node 300. The memory 304 may be used to store any calculations made by the processing circuitry 302 and/or any data received via the communication interface 306. In some embodiments, the processing circuitry 302 and memory 304 are integrated.
The communication interface 306 is used in wired or wireless communication of signalling and/or data between network nodes, the access network, the core network, and/or a UE. As illustrated, the communication interface 306 comprises port(s)/terminal(s) 316 to send and receive data, for example to and from a network over a wired connection.
The communication interface 306, and/or the processing circuitry 302 may be configured to perform any receiving operations and/or certain obtaining operations described herein as being performed by the core network node. Any information, data and/or signals may be received from an access network node (e.g. eNB or gNB), another core network node and/or any other network node or network equipment. Similarly, the communication interface 306, and/or the processing circuitry 302 may be configured to perform any transmitting operations described herein as being performed by the core network node. Any information, data and/or signals may be transmitted to an access network node, another core network node and/or any other network node or network equipment.
The power source 308 provides power to the various components of core network node 300 in a form suitable for the respective components (e.g. at a voltage and current level needed for each respective component). The power source 308 may further comprise, or be coupled to, power management circuitry to supply the components of the core network node 300 with power for performing the functionality described herein. For example, the core network node 300 may be connectable to an external power source (e.g. the power grid, an electricity outlet) via an input circuitry or interface such as an electrical cable, whereby the external power source supplies power to power circuitry of the power source 308. As a further example, the power source 308 may comprise a source of power in the form of a battery or battery pack which is connected to, or integrated in, power circuitry. The battery may provide backup power should the external power source fail.
Embodiments of the core network node 300 may include additional components beyond those shown in Fig. 3 for providing certain aspects of the core network node's functionality, including any of the functionality described herein and/or any functionality necessary to support the subject matter described herein. For example, the core network node 300 may include user interface equipment to allow input of information into the core network node 300 and to allow output of information from the core network node 300. This may allow a user to perform diagnostic, maintenance, repair, and other administrative functions for the core network node 300.

Fig. 4 shows a radio access network node 400 in accordance with some embodiments. As used herein, radio access network node refers to equipment capable, configured, arranged and/or operable to communicate directly or indirectly with a UE and/or with other network nodes or equipment, in a telecommunication network. Examples of network nodes include, but are not limited to, access network nodes such as access points (APs) (e.g. radio access points), base stations (BSs) (e.g. radio base stations, Node Bs, evolved Node Bs (eNBs) and NR NodeBs (gNBs)).
Base stations may be categorised based on the amount of coverage they provide (or, stated differently, their transmit power level) and so, depending on the provided amount of coverage, may be referred to as femto base stations, pico base stations, micro base stations, or macro base stations. A base station may be a relay node or a relay donor node controlling a relay. A network node may also include one or more (or all) parts of a distributed radio base station such as centralized digital units and/or remote radio units (RRUs), sometimes referred to as Remote Radio Heads (RRHs). Such remote radio units may or may not be integrated with an antenna as an antenna integrated radio. Parts of a distributed radio base station may also be referred to as nodes in a distributed antenna system (DAS).
Other examples of radio access network nodes include multiple transmission point (multi-TRP) 5G access nodes, multi-standard radio (MSR) equipment such as MSR BSs, network controllers such as radio network controllers (RNCs) or base station controllers (BSCs), base transceiver stations (BTSs), transmission points, transmission nodes, multi-cell/multicast coordination entities (MCEs), Operation and Maintenance (O&M) nodes, Operations Support System (OSS) nodes, Self-Organizing Network (SON) nodes, positioning nodes (e.g. Evolved Serving Mobile Location Centers (E-SMLCs)), and/or Minimization of Drive Tests (MDTs).
The radio access network node 400 includes processing circuitry 402, a memory 404, a communication interface 406, and a power source 408, and/or any other component, or any combination thereof. The radio access network node 400 may be composed of multiple physically separate components (e.g. a NodeB component and a RNC component, or a BTS component and a BSC component, etc.), which may each have their own respective components. In certain scenarios in which the radio access network node 400 comprises multiple separate components (e.g. BTS and BSC components), one or more of the separate components may be shared among several network nodes. For example, a single RNC may control multiple NodeBs. In such a scenario, each unique NodeB and RNC pair may in some instances be considered a single separate network node. In some embodiments, the radio access network node 400 may be configured to support multiple radio access technologies (RATs). In such embodiments, some components may be duplicated (e.g. separate memory 404 for different RATs) and some components may be reused (e.g. a same antenna 410 may be shared by different RATs). The radio access network node 400 may also include multiple sets of the various illustrated components for different wireless technologies integrated into radio access network node 400, for example GSM, WCDMA, LTE, NR, WiFi, Zigbee, Z-wave, LoRaWAN, Radio Frequency Identification (RFID) or Bluetooth wireless technologies. These wireless technologies may be integrated into the same or different chip or set of chips and other components within radio access network node 400.
The processing circuitry 402 may comprise a combination of one or more of a microprocessor, controller, microcontroller, central processing unit, digital signal processor, application-specific integrated circuit, field programmable gate array, or any other suitable computing device, resource, or combination of hardware, software and/or encoded logic operable, either alone or in conjunction with other radio access network node 400 components, such as the memory 404, to provide radio access network node 400 functionality. For example, the processing circuitry 402 may be configured to cause the network node to perform the methods as described with reference to Figs. 8, 11 or 14.
In some embodiments, the processing circuitry 402 includes a system on a chip (SOC). In some embodiments, the processing circuitry 402 includes one or more of radio frequency (RF) transceiver circuitry 412 and baseband processing circuitry 414. In some embodiments, the radio frequency (RF) transceiver circuitry 412 and the baseband processing circuitry 414 may be on separate chips (or sets of chips), boards, or units, such as radio units and digital units. In alternative embodiments, part or all of RF transceiver circuitry 412 and baseband processing circuitry 414 may be on the same chip or set of chips, boards, or units.
The memory 404 may comprise any form of volatile or non-volatile computer-readable memory including, without limitation, persistent storage, solid-state memory, remotely mounted memory, magnetic media, optical media, random access memory (RAM), read-only memory (ROM), mass storage media (for example, a hard disk), removable storage media (for example, a flash drive, a Compact Disk (CD) or a Digital Video Disk (DVD)), and/or any other volatile or non-volatile, non-transitory device-readable and/or computer-executable memory devices that store information, data, and/or instructions that may be used by the processing circuitry 402. The memory 404 may store any suitable instructions, data, or information, including a computer program, software, an application including one or more of logic, rules, code, tables, and/or other instructions capable of being executed by the processing circuitry 402 and utilized by the radio access network node 400. The memory 404 may be used to store any calculations made by the processing circuitry 402 and/or any data received via the communication interface 406. In some embodiments, the processing circuitry 402 and memory 404 are integrated.
The communication interface 406 is used in wired or wireless communication of signalling and/or data between network nodes, the access network, the core network, and/or a UE. As illustrated, the communication interface 406 comprises port(s)/terminal(s) 416 to send and receive data, for example to and from a network over a wired connection.
The communication interface 406 also includes radio front-end circuitry 418 that may be coupled to, or in certain embodiments a part of, the antenna 410. Radio front-end circuitry 418 comprises filters 420 and amplifiers 422. The radio front-end circuitry 418 may be connected to an antenna 410 and processing circuitry 402. The radio front-end circuitry may be configured to condition signals communicated between antenna 410 and processing circuitry 402. The radio front-end circuitry 418 may receive digital data that is to be sent out to other network nodes or UEs via a wireless connection. The radio front-end circuitry 418 may convert the digital data into a radio signal having the appropriate channel and bandwidth parameters using a combination of filters 420 and/or amplifiers 422. The radio signal may then be transmitted via the antenna 410. Similarly, when receiving data, the antenna 410 may collect radio signals which are then converted into digital data by the radio front-end circuitry 418. The digital data may be passed to the processing circuitry 402. In other embodiments, the communication interface may comprise different components and/or different combinations of components.
Fig. 5 is a block diagram illustrating a virtualization environment 500 in which functions implemented by some embodiments may be virtualized. In the present context, virtualizing means creating virtual versions of apparatuses or devices which may include virtualizing hardware platforms, storage devices and networking resources. As used herein, virtualization can be applied to any device described herein, or components thereof, and relates to an implementation in which at least a portion of the functionality is implemented as one or more virtual components. Some or all of the functions described herein may be implemented as virtual components executed by one or more virtual machines (VMs) or containers (e.g. Docker containers, LXC) implemented in one or more virtual environments 500 (e.g. Kubernetes (k8s) or OpenStack) hosted by one or more hardware nodes, such as a hardware computing device that operates as an access network node, or a core network node. Further, in embodiments in which the virtual node does not require radio connectivity (e.g. a core network node), the node may be entirely virtualized.
Applications 502 (which may alternatively be called software instances, virtual appliances, network functions, virtual nodes, virtual network functions, etc.) are run in the virtualization environment 500 to implement some of the features, functions, and/or benefits of some of the embodiments disclosed herein.
Hardware 504 includes processing circuitry, memory that stores software and/or instructions executable by hardware processing circuitry, and/or other hardware devices as described herein, such as a network interface, input/output interface, and so forth. Software may be executed by the processing circuitry to instantiate one or more virtualization layers 506 (also referred to as hypervisors or virtual machine monitors (VMMs)), provide VMs 508a and 508b (one or more of which may be generally referred to as VMs 508), and/or perform any of the functions, features and/or benefits described in relation with some embodiments described herein. The virtualization layer 506 may present a virtual operating platform that appears like networking hardware to the VMs 508.
The VMs 508 comprise virtual processing, virtual memory, virtual networking or interface and virtual storage, and may be run by a corresponding virtualization layer 506. Different embodiments of the instance of a virtual appliance 502 may be implemented on one or more of VMs 508, and the implementations may be made in different ways. Virtualization of the hardware is in some contexts referred to as network function virtualization (NFV). NFV may be used to consolidate many network equipment types onto industry standard high volume server hardware, physical switches, and physical storage, which can be located in data centers, and customer premise equipment.
In the context of NFV, a VM 508 may be a software implementation of a physical machine that runs programs as if they were executing on a physical, non-virtualized machine. Each of the VMs 508, and that part of hardware 504 that executes that VM, be it hardware dedicated to that VM and/or hardware shared by that VM with others of the VMs, forms a separate virtual network element. Still in the context of NFV, a virtual network function is responsible for handling specific network functions that run in one or more VMs 508 on top of the hardware 504 and corresponds to the application 502.
Hardware 504 may be implemented in a standalone network node with generic or specific components. Hardware 504 may implement some functions via virtualization. Alternatively, hardware 504 may be part of a larger cluster of hardware (e.g. such as in a data center or CPE) where many hardware nodes work together and are managed via management and orchestration 510, which, among others, oversees lifecycle management of applications 502. In some embodiments, hardware 504 is coupled to one or more radio units that each include one or more transmitters and one or more receivers that may be coupled to one or more antennas. Radio units may communicate directly with other hardware nodes via one or more appropriate network interfaces and may be used in combination with the virtual components to provide a virtual node with radio capabilities, such as a radio access node or a base station. In some embodiments, some signalling can be provided with the use of a control system 512 which may alternatively be used for communication between hardware nodes and radio units.
Although the computing devices described herein (e.g. radio access network nodes, core network nodes or network functions) may include the illustrated combination of hardware components, other embodiments may comprise computing devices with different combinations of components. It is to be understood that these computing devices may comprise any suitable combination of hardware and/or software needed to perform the tasks, features, functions and methods disclosed herein. Determining, calculating, obtaining or similar operations described herein may be performed by processing circuitry, which may process information by, for example, converting the obtained information into other information, comparing the obtained information or converted information to information stored in the network node, and/or performing one or more operations based on the obtained information or converted information, and as a result of said processing making a determination. Moreover, while components are depicted as single boxes located within a larger box, or nested within multiple boxes, in practice, computing devices may comprise multiple different physical components that make up a single illustrated component, and functionality may be partitioned between separate components. For example, a communication interface may be configured to include any of the components described herein, and/or the functionality of the components may be partitioned between the processing circuitry and the communication interface. In another example, non-computationally intensive functions of any of such components may be implemented in software or firmware and computationally intensive functions may be implemented in hardware.
In certain embodiments, some or all of the functionality described herein may be provided by processing circuitry executing instructions stored in memory, which in certain embodiments may be a computer program product in the form of a non-transitory computer-readable storage medium. In alternative embodiments, some or all of the functionality may be provided by the processing circuitry without executing instructions stored on a separate or discrete device-readable storage medium, such as in a hard-wired manner. In any of those particular embodiments, whether executing instructions stored on a non-transitory computer-readable storage medium or not, the processing circuitry can be configured to perform the described functionality. The benefits provided by such functionality are not limited to the processing circuitry alone or to other components of the computing device, but are enjoyed by the computing device as a whole, and/or by end users and a wireless network generally.
Fig. 6 is a signalling diagram demonstrating the use of monitoring messages in existing network implementations. An eNB/gNB 601 is shown communicating with a UPF 602. The UPF maintains three resources labelled as responder X.1, responder X.2 and responder X.3. Each of these resources has an IP endpoint address, and therefore there is an IP path between the eNB/gNB 601 and each of responder X.1, responder X.2 and responder X.3. The UPF also comprises a controller.
In this example, the monitoring messages are GTP echo request and response messages. The eNB/gNB 601 sends periodic GTP echo request messages to responder X.1 (signals 603 and 605), and responder X.1 responds to these with GTP echo response messages (signal 604 in response to the request message in signal 603, and signal 606 in response to the request message in signal 605). The eNB/gNB 601 also sends periodic GTP echo request messages to responder X.2 (signals 607 and 609), and responder X.2 responds to these with GTP echo response messages (signal 608 in response to the request message in signal 607, and signal 610 in response to the request message in signal 609).
The techniques described herein recognise that since responder X.2 is maintained by the same UPF 602 as responder X.1, the IP path between eNB/gNB 601 and responder X.2 should be equivalent to the IP path between eNB/gNB 601 and responder X.1 in terms of performance metrics such as latency, etc. As such, the results of monitoring these two IP paths are expected to be the same, and it is not necessary for the eNB/gNB to separately monitor both IP paths. The same is true for responder X.3 (signals 611, 612, 613 and 614). These additional, unnecessary monitoring messages represent an inefficient use of signalling resources, and place an unnecessary burden on the eNB/gNB 601.
Under the disclosed mechanism, a NF that uses multiple IP addresses for its own resources can indicate to a RAN node or other NF/network node that these IP addresses are associated with each other. This additional information - referred to as IP address equivalency information - can be included in a monitoring response message, e.g. a GTP echo response message. The RAN node or other NF/network node can use this information to determine which IP paths are equivalent to each other and therefore do not need to be separately monitored. Thus, the RAN node or other NF/network node can reduce the number of monitoring messages that it sends to, and receives from, the NF, and can aggregate performance-related statistics for the different IP endpoints maintained by the NF.
In some embodiments, the IP address equivalency information identifies an IP subnet that a particular IP endpoint relates to or is part of. The IP addresses used by a single NF are allocated to the same IP subnet and this IP subnet can be indicated to the RAN node or other NF/network node in a monitoring response message. All IP addresses in the same subnet belong to the same IP routing domain. Any IP addresses within a particular IP subnet can then be treated by the receiver of the monitoring response message as a single path object. Embodiments using IP subnets to convey IP address equivalency information are illustrated by Figs. 7, 8 and 9.
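The collapsing of equivalent endpoints onto a single path object can be sketched in a few lines. The following is an illustrative Python sketch only (the document contains no code, and the helper name `path_key` is an invented example, not part of any standard): an endpoint address falling inside an advertised equivalent subnet is tracked under the subnet, while an endpoint with no equivalency information keeps its own path object, as in current implementations.

```python
import ipaddress

def path_key(endpoint_ip, equivalency_subnet=None):
    """Return the key under which a monitored IP path is tracked.

    If the peer advertised an equivalent IP subnet in its monitoring
    response message, all endpoints inside that subnet collapse onto one
    path object; otherwise the endpoint address itself is the key.
    """
    if equivalency_subnet is not None:
        net = ipaddress.ip_network(equivalency_subnet, strict=False)
        if ipaddress.ip_address(endpoint_ip) in net:
            return str(net)
    return endpoint_ip

# Two endpoints in the same advertised subnet map to the same path object:
assert path_key("192.0.2.1", "192.0.2.0/28") == "192.0.2.0/28"
assert path_key("192.0.2.2", "192.0.2.0/28") == "192.0.2.0/28"
# An endpoint with no equivalency information keeps its own path object:
assert path_key("198.51.100.7", None) == "198.51.100.7"
```

Because the subnet string is itself a valid path identifier, existing per-path management objects (alarms, counters) can be keyed on it with little change.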
In other embodiments, the IP address equivalency information is a unique identifier for the NF that maintains the IP endpoints. A unique identifier for the NF is included in the monitoring response message for a particular IP endpoint maintained by the NF. A unique NF identifier is any NF identifier that, for practical purposes, enables the NF to be uniquely identified in the communication network, or in any communication network, i.e. the probability that the identifier is not unique is small/sufficiently close to zero for practical purposes. The unique identifier could be, for example, a 32- or 128-bit value that can be read as an IPv4 address or IPv6 address. Examples of possible NF identifiers include an IPv6 global scope address, a zero-padded IPv4 address and a UUID (universally unique identifier) according to "A Universally Unique IDentifier (UUID) URN Namespace", RFC 4122, July 2005. A comparable example is the 'router ID' in the Open Shortest Path First (OSPF) routing protocol (RFC 2328). All IP endpoint addresses having the same NF identifier in the monitoring response message (e.g. GTP echo response packet) can be treated by the receiver of the monitoring response message as the same path object. Embodiments using NF identifiers to convey IP address equivalency information are illustrated by Figs. 10, 11 and 12.
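The NF-identifier variant amounts to grouping monitored endpoints by the identifier carried in their monitoring response messages. The following Python sketch is illustrative only (the function name and data shapes are invented for this example): endpoints reporting the same NF identifier, here a UUID per RFC 4122, become one path object, while endpoints that reported no identifier each remain their own path object.

```python
import uuid
from collections import defaultdict

def group_paths_by_nf_id(responses):
    """Group monitored endpoints that reported the same NF identifier.

    `responses` maps an endpoint IP address to the unique NF identifier
    carried in that endpoint's monitoring response message (e.g. a UUID,
    or any practically unique 32- or 128-bit value), or None if the peer
    did not report one.
    """
    groups = defaultdict(list)
    for endpoint_ip, nf_id in responses.items():
        # Without an NF identifier, the endpoint remains its own path object.
        key = str(nf_id) if nf_id is not None else endpoint_ip
        groups[key].append(endpoint_ip)
    return dict(groups)

upf_id = uuid.UUID("12345678-1234-5678-1234-567812345678")
paths = group_paths_by_nf_id({
    "192.0.2.1": upf_id,    # responder X.1
    "192.0.2.2": upf_id,    # responder X.2
    "203.0.113.9": None,    # a peer that reported no identifier
})
# The two UPF endpoints collapse into a single path object:
assert sorted(paths[str(upf_id)]) == ["192.0.2.1", "192.0.2.2"]
assert paths["203.0.113.9"] == ["203.0.113.9"]
```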
Both of these formats for the IP address equivalency information (IP subnets or NF identifiers) are compatible with the current methods of identifying an IP path. Rather than the receiver of a monitoring response message having to identify each IP path using the remote IP address (as in current implementations), a subnet address or an NF identifier that is common to multiple IP paths can be used instead with little or no changes to current management systems such as alarms and counters.
Embodiments that use IP subnet addresses as the IP address equivalency information have an additional advantage over embodiments that rely on NF identifiers as the equivalency information. Namely, without having to send a monitoring request message to the NF, a RAN node or other network node can determine whether a newly established IP endpoint is equivalent to any other existing IP endpoints by comparing the IP subnet of the new IP endpoint with the IP subnet belonging to known existing IP endpoints. In contrast, for embodiments that use a NF identifier as the equivalency information, a RAN node or other network node cannot determine whether a newly established IP endpoint is equivalent to any other existing IP endpoints until it receives the NF identifier for that newly established IP endpoint, e.g. in a monitoring response message.
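This additional advantage of the subnet-based scheme can be illustrated with a short, hypothetical Python check (the function name is invented for this sketch): given the set of equivalent subnets already learned from earlier monitoring response messages, a node can classify a newly established endpoint without sending any monitoring request for it.

```python
import ipaddress

def already_monitored(new_endpoint_ip, known_subnets):
    """Check whether a newly established endpoint falls inside a subnet
    that is already being monitored, before sending any echo request.

    `known_subnets` is the set of equivalent IP subnets previously learned
    from monitoring response messages. Only the subnet-based scheme allows
    this check; with NF identifiers the node must first receive a response
    for the new endpoint.
    """
    addr = ipaddress.ip_address(new_endpoint_ip)
    return any(addr in ipaddress.ip_network(s) for s in known_subnets)

known = {"192.0.2.0/28"}
assert already_monitored("192.0.2.3", known)        # no new echo requests needed
assert not already_monitored("203.0.113.9", known)  # must be probed separately
```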
Fig. 7 is a flow chart illustrating a method in a responder of an NF in accordance with embodiments of the techniques described herein. The flow chart begins with a responder in the NF receiving a GTP echo request packet addressed to a tunnel endpoint IP address that is maintained by the responder. The responder in the NF may be, for example, Responder X.1 in the UPF 602 shown in Fig. 6.
At step 701, the responder determines whether this tunnel endpoint IP address is within an equivalent IP subnetwork. That is, the responder determines whether this tunnel endpoint IP address is in the same IP subnet as other tunnel endpoint IP addresses maintained by responders in the NF. In some embodiments, the NF does this by checking a list of equivalent IP subnets and comparing the endpoint IP address with this list. A list of equivalent IP subnets may be maintained by the NF. For example, step 704 shows a responder controller (i.e. a controller for the responders in the NF) updating the responder with the list of IP subnets. The responder controller may be comprised in the NF. In other embodiments, a controller that is external to the NF may maintain a list of equivalent IP subnets and send it to the NF.
At step 702, the responder assembles a GTP echo response packet including the endpoint IP address equivalency information, i.e. the IP subnet corresponding to the endpoint IP address to which the request packet was sent.
At step 703, this response packet is sent by the responder in response to the received GTP echo request packet.
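The responder-side steps of Fig. 7 can be sketched as follows. This is an illustrative Python sketch only; the class and message field names (`EchoResponder`, `equivalent_subnet`, etc.) are invented, and the real GTP echo response would of course carry the equivalency information in a protocol information element rather than a dictionary key.

```python
import ipaddress

class EchoResponder:
    """Sketch of the responder logic of Fig. 7.

    The responder controller updates the subnet list (step 704); on an echo
    request the responder looks up the subnet containing its own endpoint
    address (step 701), assembles a response carrying that subnet as the IP
    address equivalency information (step 702), and returns it (step 703).
    """

    def __init__(self, endpoint_ip):
        self.endpoint_ip = ipaddress.ip_address(endpoint_ip)
        self.equivalent_subnets = []

    def update_subnet_list(self, subnets):                 # step 704
        self.equivalent_subnets = [ipaddress.ip_network(s) for s in subnets]

    def handle_echo_request(self, sequence_number):
        response = {"type": "echo_response", "seq": sequence_number}
        for net in self.equivalent_subnets:                # step 701
            if self.endpoint_ip in net:
                response["equivalent_subnet"] = str(net)   # step 702
                break
        return response                                    # step 703

responder = EchoResponder("192.0.2.1")       # e.g. responder X.1
responder.update_subnet_list(["192.0.2.0/28"])
resp = responder.handle_echo_request(7)
assert resp["equivalent_subnet"] == "192.0.2.0/28"
assert resp["seq"] == 7
```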
Fig. 8 is a flow chart illustrating a method in a RAN node (e.g. an eNB or gNB) in accordance with embodiments of the techniques described herein. The flow chart begins with the RAN node receiving a GTP echo response packet from a responder in a NF. For example, this GTP echo response packet could correspond to the GTP echo response packet sent by the responder in step 703 of the flow chart shown in Fig. 7.
At step 801, the RAN node matches the GTP echo response packet to an outstanding GTP echo request (i.e. the RAN node determines which earlier echo request this echo response packet relates to), and at step 802, the RAN node updates the status of the GTP path via which the GTP echo response packet was sent (e.g. by storing updated performance metrics, path availability, and/or peer availability).
At step 803, the RAN node checks the response packet for IP address equivalency information for the IP endpoint address to which the echo request was sent. In this embodiment, the RAN node checks for an IP subnet for the IP address. If an IP subnet is indicated, the RAN node uses this information to optimise the path local management (step 804). That is, the RAN node can identify whether the RAN node is monitoring other IP endpoint addresses in the same IP subnet (i.e. other IP endpoint addresses at the same NF), and determine that only one of those IP endpoint addresses needs to be monitored, enabling the statistics for that IP endpoint address to be used for the other IP addresses in the same subnet. For example, the statistics for equivalent IP paths can be aggregated, thereby reducing the number of path objects monitored by the RAN node. The RAN node will subsequently send periodic GTP echo request messages for only one of the equivalent IP paths.
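The RAN-node-side steps of Fig. 8 can likewise be sketched. This Python sketch is illustrative only (`PathMonitor` and its fields are invented names): responses are matched to outstanding requests (step 801), path status is updated (step 802), and any subnet in the response is used as the path key so that endpoints in the same subnet aggregate onto one path object (steps 803 and 804).

```python
class PathMonitor:
    """Sketch of the RAN node logic of Fig. 8."""

    def __init__(self):
        self.outstanding = {}   # seq -> endpoint IP the request was sent to
        self.paths = {}         # path key -> status record

    def note_request(self, seq, endpoint_ip):
        self.outstanding[seq] = endpoint_ip

    def handle_echo_response(self, response):
        endpoint_ip = self.outstanding.pop(response["seq"])   # step 801
        # Step 803: prefer the advertised subnet as the path key, falling
        # back to the endpoint address when no equivalency info is present.
        key = response.get("equivalent_subnet", endpoint_ip)
        status = self.paths.setdefault(key, {"endpoints": set(), "alive": False})
        status["endpoints"].add(endpoint_ip)                  # step 804
        status["alive"] = True                                # step 802
        return key

monitor = PathMonitor()
monitor.note_request(1, "192.0.2.1")
key = monitor.handle_echo_response({"seq": 1, "equivalent_subnet": "192.0.2.0/28"})
# A later endpoint in the same subnet aggregates onto the same path object,
# so only one of the two endpoints needs further periodic echo requests:
monitor.note_request(2, "192.0.2.2")
assert monitor.handle_echo_response(
    {"seq": 2, "equivalent_subnet": "192.0.2.0/28"}) == key
assert monitor.paths[key]["endpoints"] == {"192.0.2.1", "192.0.2.2"}
```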
Fig. 9 is a signalling diagram showing the use of monitoring messages in accordance with the embodiments described above with reference to Figs. 7 and 8. The setup is similar to that described with reference to Fig. 6. An eNB/gNB 901 is shown communicating with a UPF 902. The UPF maintains three resources labelled as responder X.1, responder X.2 and responder X.3. Each of these resources has an IP endpoint address, and therefore there are three IP paths between the eNB/gNB 901 and the three NF resources, responder X.1, responder X.2 and responder X.3. The UPF 902 also comprises a controller. The controller maintains a list of IP subnets, which includes an IP subnet common to the IP endpoints corresponding to responders X.1, X.2 and X.3. At step 900, the controller updates the responders with the details of the IP subnet, referred to as subnet X. In some embodiments the controller can be external to the UPF.
When a new session is established and the relevant endpoint IP address is to be monitored, the eNB/gNB 901 sends a GTP echo request message to responder X.1 (signal 903). Responder X.1 responds with a GTP echo response message (signal 904) and includes in the message IP address equivalency information in the form of an identifier for the equivalent subnet X. The eNB/gNB continues to send the periodic GTP echo messages to responder X.1 (signal 905).
Subsequently, when a new session is established for monitoring the endpoint IP address for responder X.2, the eNB/gNB 901 determines that it does not need to monitor responder X.2 because it can determine from the endpoint IP address for responder X.2 that it belongs to the same IP subnet as responder X.1, and responder X.1 is already being monitored. Therefore, for monitoring responder X.2, the eNB/gNB 901 sends periodic GTP echo request messages to responder X.1 only (signals 907 and 909), and does not send additional request messages to responder X.2.
Similarly, when a new session is established for monitoring the endpoint IP address for responder X.3, the eNB/gNB 901 determines that it does not need to monitor responder X.3 because it can determine from the endpoint IP address for responder X.3 that it belongs to the same IP subnet as responder X.1, and responder X.1 is already being monitored. Therefore, for monitoring responder X.2 and responder X.3, the eNB/gNB 901 sends periodic GTP echo request messages to responder X.1 only, and does not send additional request messages to responder X.2 or X.3.
As noted above, Figs. 10, 11 and 12 illustrate embodiments in which an NF identifier is used to convey the IP address equivalency information (and thus IP path equivalency) instead of IP subnets (as used in the embodiments illustrated by Figs. 7, 8 and 9). Fig. 10 is a flow chart for a method in a responder of an NF in accordance with embodiments of the techniques described herein. The flow chart begins with the responder in the NF receiving a GTP echo request packet addressed to a tunnel endpoint IP address that is maintained by the responder. The responder in the NF may be, for example, Responder X.1 in the UPF 602 shown in Fig. 6.
At step 1001, the responder (also referred to as the GTP echo responder) checks if the endpoint IP address in the GTP echo request packet has equivalent endpoint IP addresses at the NF. That is, the responder determines whether this tunnel endpoint IP address is associated with the same NF identifier as other tunnel endpoint IP addresses. A list of equivalent endpoint IP addresses may be maintained by the NF. For example, step 1004 shows a responder controller (i.e. a controller for the responders in the NF) updating the responder with the list of equivalent endpoint IP addresses corresponding to the NF identifier. The responder controller may be comprised in the NF. In other embodiments, a controller that is external to the NF may maintain the list and send it to the NF.
At step 1002, the NF assembles a GTP echo response packet including the unique NF identifier for the NF that maintains the endpoint IP address to which the request packet was sent.
At step 1003, this response packet is sent by the responder in response to the received GTP echo request packet.
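Steps 1001 to 1004 can be sketched as follows. This is an illustrative sketch only; the class, the dictionary-based message representation and the field names (e.g. "nf_id") are assumptions and do not correspond to actual GTP information elements:

```python
class GtpEchoResponder:
    """Illustrative responder sketch: echo responses carry the NF identifier
    as IP address equivalency information."""

    def __init__(self, nf_id):
        self.nf_id = nf_id
        self.equivalent_endpoints = set()

    def update_equivalents(self, endpoint_ips):
        # Step 1004: the responder controller pushes the list of equivalent
        # endpoint IP addresses corresponding to the NF identifier.
        self.equivalent_endpoints = set(endpoint_ips)

    def on_echo_request(self, dest_ip, seq):
        # Step 1001: check whether the addressed endpoint has equivalents at
        # this NF. Steps 1002-1003: assemble and return the echo response,
        # including the NF identifier when equivalents exist.
        response = {"type": "echo_response", "seq": seq}
        if dest_ip in self.equivalent_endpoints:
            response["nf_id"] = self.nf_id
        return response
```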
Fig. 11 is a flow chart for a method in a RAN node (e.g. an eNB or gNB) in accordance with embodiments of the techniques described herein. The flow chart begins with the RAN node receiving a GTP echo response packet from a responder in an NF. For example, this GTP echo response packet could correspond to the GTP echo response packet sent by the responder in step 1003 of the flow chart shown in Fig. 10.
At step 1101, the RAN node matches the GTP echo response packet to an outstanding GTP echo request (i.e. the RAN node determines which earlier echo request this echo response packet relates to), and at step 1102, the RAN node updates the status of the GTP path via which the GTP echo response packet was sent (e.g. by storing updated performance metrics, path availability, and/or peer availability).
At step 1103, the RAN node checks the response packet for IP address equivalency information for the IP endpoint address to which the echo request was sent. In this embodiment, the RAN node checks for an NF identifier for the NF that maintains the responder having the IP endpoint. If an NF identifier is indicated, this information can be stored by the RAN node.
At step 1104, the NF identifier is used to determine which of the endpoint IP addresses the RAN node is monitoring are equivalent to each other, and thus to optimise the path local management. That is, the RAN node can identify whether the RAN node is monitoring other IP endpoint addresses at the same NF (i.e. other IP endpoint addresses having the same NF identifier), and determine that only one of those IP endpoint addresses needs to be monitored, enabling the statistics for that IP endpoint address to be used for the other IP addresses having the same NF identifier. For example, the statistics for equivalent IP paths can be aggregated, thereby reducing the number of path objects monitored by the RAN node. Fig. 11 shows that this step is performed by a sender controller. The controller can be comprised in the RAN node. In alternative embodiments, the controller can be external to the RAN node. The RAN node will subsequently send periodic GTP echo request messages to only one of the equivalent IP endpoints.

Fig. 12 is a signalling diagram showing the use of monitoring messages in accordance with the embodiments described above with reference to Figs. 10 and 11. The setup is similar to the setups described with reference to Figs. 6 and 9. An eNB/gNB 1201 is shown communicating with a UPF 1202. The UPF maintains three resources labelled as responder X.1, responder X.2 and responder X.3. Each of these resources has an IP endpoint address, and therefore there are three IP paths between the eNB/gNB 1201 and the three UPF resources, responder X.1, responder X.2 and responder X.3. The UPF 1202 also comprises a controller. The controller maintains a list of the equivalent endpoint IP addresses maintained by the UPF and an identifier for the UPF. At step 1200, the controller updates the responders with the unique UPF identifier, which in this embodiment is denoted '7'.
When a new session is established for monitoring the GTP endpoint IP address for responder X.1 , the eNB/gNB 1201 sends a GTP echo request message to responder X.1 (signal 1203). Responder X.1 responds with a GTP echo response message (signal 1204) and includes in the message IP address equivalency information in the form of the UPF identifier for the UPF 1202. The eNB/gNB continues to send periodic GTP echo messages to responder X.1 (signal 1205).
Subsequently, when a new session is established (step 1206) and the relevant GTP endpoint IP address is to be monitored, the eNB/gNB sends a GTP echo request message to responder X.2 (signal 1207). Responder X.2 responds with a GTP echo response message (signal 1208) that includes the UPF identifier for UPF 1202. The eNB/gNB determines from this response message that it need not monitor the endpoint IP address for responder X.2 because responder X.2 uses an IP path between the eNB/gNB 1201 and the UPF 1202 that can be considered equivalent to the IP path used by responder X.1, and responder X.1 is already being monitored. Therefore, the eNB/gNB 1201 continues to send periodic GTP echo request messages to responder X.1 only (signal 1209), and does not send additional request messages to responder X.2.
Similarly, when a new session is established (step 1210) and the relevant GTP endpoint IP address is to be monitored, the eNB/gNB sends a GTP echo request message to responder X.3 (signal 1215) and receives a GTP echo response message comprising an NF identifier for UPF 1202 (signal 1216). Thus, the eNB/gNB determines that it does not need to monitor the endpoint IP address for responder X.3 because it uses an equivalent IP path to responder X.1, and responder X.1 is already being monitored. Therefore, the eNB/gNB 1201 continues to send periodic GTP echo request messages to responder X.1 only (signal 1217), and does not start sending additional request messages to responder X.3.
If the session associated with the responder X.1 is subsequently removed (step 1218), and the endpoint IP address for responder X.1 can no longer be monitored, the eNB/gNB 1201 can select another equivalent IP endpoint with the same NF identifier, e.g. responder X.2, and send periodic GTP echo request messages to responder X.2 instead of responder X.1. This is shown by signal 1223 of Fig. 12.
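The failover behaviour of steps 1218 to 1223 can be sketched as a simple selection over the stored equivalency information. This is an illustrative sketch under assumed local data structures (the `nf_ids` mapping is hypothetical):

```python
def select_replacement(removed_ip, active_endpoints, nf_ids):
    """Pick another active endpoint with the same NF identifier to continue
    monitoring when the currently monitored endpoint's session is removed.

    removed_ip       -- endpoint IP address whose session was removed
    active_endpoints -- endpoint IP addresses with active sessions
    nf_ids           -- assumed mapping: endpoint IP -> stored NF identifier
    """
    target_nf = nf_ids.get(removed_ip)
    for ip in active_endpoints:
        if ip != removed_ip and nf_ids.get(ip) == target_nf:
            return ip  # e.g. responder X.2 takes over from responder X.1
    return None  # no equivalent endpoint remains for this NF
```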
Fig. 13 is a flow chart illustrating a method in a first network node in accordance with the techniques described herein. The first network node maintains a plurality of IP endpoints, each with a corresponding IP address (also referred to herein as an endpoint IP address). The first network node can be a core network node as described with reference to Fig. 3, e.g. a UPF, SGW, PGW or a combined SGW+PGW deployment. The IP endpoints can be resources or peers that are maintained by the first network node. The method of Fig. 13 comprises a step 1300 of sending, to a second network node, a monitoring response message relating to a first IP address for one of the IP endpoints. The second network node can be a radio access network node as described with reference to Fig. 4, or a fixed access network node, e.g. an access gateway function (AGF). In alternative embodiments, the second network node can be a core network function as described with reference to Fig. 3.
The monitoring response message can be a path management message relating to the status and/or performance of an IP path between the first network node and the second network node. The monitoring response message can also relate to the status and/or performance of the first network node. In some embodiments, the monitoring response message is for indicating that an IP endpoint maintained by the first network node is still active and/or that there is connectivity between the IP endpoint and the second network node (such messages are sometimes referred to as ping messages).
In some embodiments, the monitoring response message can comprise one or more of: a status of an IP path between the first network node and the second network node, an availability indication of an IP path between the first network node and the second network node, a performance metric of an IP path between the first network node and the second network node, a status of the IP endpoint having the first IP address, an availability indication of the IP endpoint having the first IP address, a performance metric of the IP endpoint having the first IP address, a status of an or any IP endpoint having the same IP address equivalency information as the IP endpoint having the first IP address, an availability indication of an or any IP endpoint having the same IP address equivalency information as the IP endpoint having the first IP address and a performance metric of an or any IP endpoint having the same IP address equivalency information as the IP endpoint having the first IP address. The monitoring response message can be a GTP echo response message.
In the method of Fig. 13, the monitoring response message comprises IP address equivalency information that indicates that the plurality of IP endpoints are associated with each other.
In some embodiments, the IP address equivalency information indicates that the first IP address belongs to an IP subnet. In these embodiments, each of the IP addresses of the plurality of IP endpoints maintained by the first network node belong to the (same) IP subnet. Before sending the monitoring response message, these embodiments can further comprise a step of determining whether a received monitoring request message is for an IP endpoint having an IP address belonging to an IP subnet.
In other embodiments, the IP address equivalency information is a unique identifier for the first network node. The unique identifier can be any of: an IPv6 global scope address, a zero-padded IPv4 address, or a universally unique identifier, UUID.
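The three identifier options can each be represented as a 128-bit value, as sketched below. This is an illustrative sketch only; in particular, the exact layout of a "zero-padded IPv4 address" is an assumption (the 32-bit address placed in the low-order bits of a 128-bit field), as the document does not fix the encoding:

```python
import ipaddress
import uuid

def identifier_bytes(kind, value=None):
    """Return a 128-bit unique identifier for a network node.

    kind -- "ipv6" (global scope address), "ipv4" (zero-padded), or "uuid"
    """
    if kind == "ipv6":
        return int(ipaddress.IPv6Address(value)).to_bytes(16, "big")
    if kind == "ipv4":
        # Assumed layout: 32-bit IPv4 address zero-padded to 128 bits.
        return int(ipaddress.IPv4Address(value)).to_bytes(16, "big")
    if kind == "uuid":
        return (uuid.UUID(value) if value else uuid.uuid4()).bytes
    raise ValueError(kind)
```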
In some embodiments, the IP address equivalency information indicates that the plurality of IP endpoints are maintained by the same network node. For example, IP address equivalency information for any of the plurality of IP endpoints that are maintained by the first network node may indicate that the IP endpoint is maintained by the first network node.
The IP address equivalency information may indicate that the plurality of IP endpoints use equivalent IP paths between the first network node and the second network node. Equivalent IP paths can mean that the IP paths exhibit the same performance-related metrics. For example, IP path equivalency can mean that the latency of one of the IP paths is likely to be representative of the latency of the other equivalent IP paths. Similarly, if there is an active connection on one of the IP paths, there will likely also be an active connection on the other equivalent IP paths. If a plurality of IP endpoints are known to use equivalent IP paths, this can indicate that only one of the plurality of IP endpoints needs to be monitored and/or that only one of the plurality of IP paths needs to be monitored.
In some embodiments, the equivalent IP paths are via a common (i.e. the same) GTP-based interface for conveying data between the second network node and the first network node. The equivalent IP paths can be via one of: a 4G S1-U interface, a 4G Sx interface, a 4G S5-U interface, a 4G S8-U interface, a 5G N3 interface, a 5G N4 interface or a 5G N9 interface.
Prior to sending the monitoring response message, the method of Fig. 13 can further comprise a step of receiving, from the second network node, a monitoring request message for an IP endpoint having the first IP address. In other words, the monitoring request message is addressed to the first IP address. In these embodiments, the monitoring response message that is sent in step 1300 of Fig. 13 is sent in response to the received monitoring request message.
The monitoring request message can be a path management message for monitoring the status and/or performance of the IP path. The monitoring request message can also be a message for monitoring the status and/or performance of the first network node. For example, the monitoring request message may be for determining whether an IP endpoint having the first IP address is still active, and/or whether there is connectivity between the IP endpoint and the second network node (such messages are sometimes referred to as ping messages).
In some embodiments, the monitoring request message can comprise a request for one or more of: a status of an IP path between the first network node and the second network node, an availability indication of an IP path between the first network node and the second network node, a performance metric of an IP path between the first network node and the second network node, a status of the IP endpoint having the first IP address, an availability indication of the IP endpoint having the first IP address, a performance metric of the IP endpoint having the first IP address, a status of an or any IP endpoint having the same IP address equivalency information as the IP endpoint having the first IP address, an availability indication of an or any IP endpoint having the same IP address equivalency information as the IP endpoint having the first IP address and a performance metric of an or any IP endpoint having the same IP address equivalency information as the IP endpoint having the first IP address. The monitoring request message can be a GTP echo request message.
After sending the monitoring response message, the method of Fig. 13 can further comprise a step of receiving, from the second network node, periodic monitoring request messages for only one of the plurality of IP endpoints. In these embodiments, response messages are correspondingly only sent to the second network node for the one IP endpoint for which request messages are received. This constitutes a reduction in path messages compared to existing implementations.
The method of any of Figs. 7, 10 and 13 can be an optional implementation for a NF. In some embodiments, only network nodes (e.g. NFs) that are resilient would make use of the mechanism for sharing IP address equivalency information. The word 'resilient' here means that all of the IP endpoints that are maintained by the NF are available whenever the NF itself is available. The inclusion of the IP address equivalency information in monitoring response messages (e.g. in a GTP echo response packet) could be implemented via a private extension to products, or in the future as a standardised attribute or information element. There need not be any requirement for a node to support or include the additional IP address equivalency information. The implementation of the techniques described herein for the first network node is therefore fully backwards compatible.
Fig. 14 is a flow chart illustrating a method in a second network node in accordance with the techniques described herein. In this method, a first network node maintains a plurality of IP endpoints each with a corresponding IP address. The second network node may be a radio access network node as described with reference to Fig. 4, a fixed access network node (e.g. AGF), or a core network function as described with reference to Fig. 3. The first network node may be a core network function as described with reference to Fig. 3, e.g. a UPF, SGW, PGW or a combined SGW+PGW deployment. The first network node may correspond to the first network node that performs the method described with reference to Fig. 13.
The method of Fig. 14 comprises a step 1400 of receiving, from the first network node, a monitoring response message relating to a first IP address of one of the IP endpoints. The monitoring response message can be a path management message relating to the status and/or performance of an IP path between the first and second network nodes. The monitoring response message can also relate to the status and/or performance of the first network node. In some embodiments, the monitoring response message is for indicating that an IP endpoint at the first network node is still active and/or that there is connectivity between the IP endpoint and the second network node (such messages are sometimes referred to as ping messages).
In some embodiments, the monitoring response message can comprise one or more of: a status of an IP path between the first network node and the second network node, an availability indication of an IP path between the first network node and the second network node, a performance metric of an IP path between the first network node and the second network node, a status of the IP endpoint having the first IP address, an availability indication of the IP endpoint having the first IP address, a performance metric of the IP endpoint having the first IP address, a status of an or any IP endpoint having the same IP address equivalency information as the IP endpoint having the first IP address, an availability indication of an or any IP endpoint having the same IP address equivalency information as the IP endpoint having the first IP address and a performance metric of an or any IP endpoint having the same IP address equivalency information as the IP endpoint having the first IP address. The monitoring response message can be a GTP echo response message.
In the method of Fig. 14, the monitoring response message comprises IP address equivalency information that indicates that the plurality of IP endpoints are associated with each other.
In some embodiments, the IP address equivalency information indicates that the first IP address belongs to an IP subnet, and each of the IP addresses of the plurality of IP endpoints maintained by the first network node belong to the (same) IP subnet.
The method of Fig. 14 can further comprise a step of determining whether a second IP address belongs to a same IP subnet as the first IP address. Here, the second IP address is one that is not already being monitored by the second network node. In other words, the second network node has not yet received any monitoring response messages relating to the second IP address. Whether the second IP address belongs to a same IP subnet as the first IP address can be determined by comparing the second IP address with the known IP subnet for the first IP address. Indeed, the second IP address can be compared to all known IP subnets that indicate equivalent IP addresses.
If the second network node determines that the second IP address belongs to the same IP subnet as the first IP address, it determines not to send a monitoring request message relating to the second IP address. Otherwise, the second network node sends a monitoring request message relating to the second IP address.
In alternative embodiments, the IP address equivalency information is a unique identifier for the first network node. The unique identifier can be one of: an IPv6 global scope address, a zero-padded IPv4 address, or a universally unique identifier, e.g. UUID according to RFC 4122.
In some embodiments, the IP address equivalency information indicates that the plurality of IP endpoints are maintained by the same network node. Thus, IP address equivalency information for any of the plurality of IP endpoints that are maintained by the first network node may indicate that the IP endpoint is maintained by the first network node.
The IP address equivalency information may indicate that the plurality of IP endpoints use equivalent IP paths between the first network node and the second network node. Equivalent IP paths can mean that the IP paths exhibit the same performance-related metrics. For example, the latency of one of the IP paths may be representative of the latency of the other equivalent IP paths. Similarly, if one of the IP paths is active, the other equivalent IP paths will likely also be active. As such, an indication that a plurality of IP endpoints use equivalent IP paths can indicate that only one of the plurality of IP endpoints needs to be monitored and/or only one of the plurality of IP paths needs to be monitored.
In some embodiments, the equivalent IP paths are via a common (i.e. the same) GTP-based interface for conveying data between the second network node and the first network node. The equivalent IP paths can be via any one of: a 4G S1-U interface, a 4G Sx interface, a 4G S5-U interface, a 4G S8-U interface, a 5G N3 interface, a 5G N4 interface or a 5G N9 interface.
The method of Fig. 14 can further comprise, prior to receiving the monitoring response message, sending, to the first network node, a monitoring request message for an IP endpoint having the first IP address. The monitoring response message received in step 1400 of Fig. 14 is received in response to the monitoring request message sent to the first network node.
The monitoring request message can be a path management message for monitoring the status and/or performance of the IP path. The monitoring request message can also be a message for monitoring the status and/or performance of the first network node. For example, the monitoring request message may be for determining whether an IP endpoint having the first IP address is still active and/or whether there is connectivity between the IP endpoint and the second network node (such messages are sometimes referred to as ping messages).
In some embodiments, the monitoring request message can comprise a request for one or more of: a status of an IP path between the first network node and the second network node, an availability indication of an IP path between the first network node and the second network node, a performance metric of an IP path between the first network node and the second network node, a status of the IP endpoint having the first IP address, an availability indication of the IP endpoint having the first IP address, a performance metric of the IP endpoint having the first IP address, a status of an or any IP endpoint having the same IP address equivalency information as the IP endpoint having the first IP address, an availability indication of an or any IP endpoint having the same IP address equivalency information as the IP endpoint having the first IP address and a performance metric of an or any IP endpoint having the same IP address equivalency information as the IP endpoint having the first IP address. The monitoring request message can be a GTP echo request message.
The method of Fig. 14 can further comprise the second network node using statistics and/or performance information relating to the IP endpoint with the first IP address as representative of statistics and/or performance information for other IP endpoints having the same IP address equivalency information. Statistics and/or performance metrics can include latency, jitter, packet drops, and successfully transferred packets and volume.
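The reuse of one path's statistics for all equivalent endpoints can be sketched as follows. This is an illustrative sketch under assumed local data structures; the class and field names are not from the document:

```python
class StatsView:
    """Illustrative sketch: metrics measured on one monitored path stand in
    for all endpoints sharing the same IP address equivalency information."""

    def __init__(self):
        self.stats = {}   # equivalency info (subnet or NF id) -> metrics dict
        self.equiv = {}   # endpoint IP address -> equivalency info

    def record(self, endpoint_ip, metrics):
        # Store metrics (e.g. latency, jitter, packet drops) under the
        # equivalency information of the monitored endpoint.
        self.stats[self.equiv[endpoint_ip]] = metrics

    def lookup(self, endpoint_ip):
        # Metrics measured on the representative path apply to any endpoint
        # with the same equivalency information.
        return self.stats.get(self.equiv.get(endpoint_ip))
```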
After receiving the monitoring response message in step 1400, the method can further comprise sending subsequent periodic monitoring request messages to only one of the plurality of IP endpoints that have the same IP address equivalency information.
The second network node can store the received IP address equivalency information. This information can be stored locally on the second network node by a controller. Alternatively the information can be stored externally and a controller can update the second network node when the information is required.
The method can further comprise a step of determining, based on stored IP address equivalency information, whether a third IP address has the same IP address equivalency information as any IP address already being monitored by the second network node. The third IP address may or may not be an IP address to which the second network node has previously sent monitoring request message(s).
If the third IP address has the same IP address equivalency information as any IP address already being monitored by the second network node, the second network node can determine not to send a monitoring request message to the third IP address. Otherwise, the method comprises sending a monitoring request message to the third IP address.
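The decision in the two preceding paragraphs works for either form of equivalency information (IP subnet or unique node identifier) and can be sketched generically. The `equivalency_of` mapping is an assumed local store of received equivalency information:

```python
def should_send_request(third_ip, equivalency_of, monitored_ips):
    """Return True if a monitoring request should be sent to third_ip.

    The request is suppressed when any already-monitored address shares
    the third address's stored IP address equivalency information.
    """
    info = equivalency_of.get(third_ip)
    if info is not None and any(equivalency_of.get(ip) == info for ip in monitored_ips):
        return False  # an equivalent endpoint is already being monitored
    return True
```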
The method of any of Figs. 8, 11 and 14 can be an optional implementation for a network node. There need not be any requirement for a node to understand or use received IP address equivalency information. The implementation of the techniques described herein for the second network node is therefore fully backwards compatible.
The foregoing merely illustrates the principles of the disclosure. Various modifications and alterations to the described embodiments will be apparent to those skilled in the art in view of the teachings herein. It will thus be appreciated that those skilled in the art will be able to devise numerous systems, arrangements, and procedures that, although not explicitly shown or described herein, embody the principles of the disclosure and can be thus within the scope of the disclosure. Various exemplary embodiments can be used together with one another, as well as interchangeably therewith, as should be understood by those having ordinary skill in the art.

Claims

1. A method in a first network node, wherein the first network node maintains a plurality of internet protocol, IP, endpoints each with a corresponding IP address, the method comprising: sending (1300), to a second network node, a monitoring response message relating to a first IP address of one of the IP endpoints, the monitoring response message comprising IP address equivalency information that indicates that the plurality of IP endpoints are associated with each other.
2. A method as claimed in claim 1, wherein the IP address equivalency information indicates that the first IP address belongs to an IP subnet, wherein each of the IP addresses of the plurality of IP endpoints maintained by the first network node belong to the IP subnet.
3. A method as claimed in any of claims 1-2, the method further comprising: before sending the monitoring response message, determining whether a received monitoring request message is for an IP endpoint having an IP address belonging to an IP subnet.
4. A method as claimed in claim 1, wherein the IP address equivalency information is a unique identifier for the first network node.
5. A method as claimed in claim 4, wherein the unique identifier is one of: an IP address, an IPv6 global scope address, a zero-padded IPv4 address, or a universally unique identifier, UUID.
6. A method as claimed in any of claims 1-5, wherein the IP address equivalency information indicates that the plurality of IP endpoints are maintained by the same network node.
7. A method as claimed in any of claims 1-6, wherein the IP address equivalency information indicates that the plurality of IP endpoints use equivalent IP paths between the first network node and the second network node.
8. A method as claimed in claim 7, wherein the equivalent IP paths are via a common General Packet Radio Service Tunnelling Protocol, GTP, based interface for conveying data between the second network node and the first network node.
9. A method as claimed in any of claims 7-8, wherein the equivalent IP paths are via one of: a 4G S1-U interface, a 4G Sx interface, a 4G S5-U interface, a 4G S8-U interface, a 5G N3 interface, a 5G N4 interface or a 5G N9 interface.
10. A method as claimed in any of claims 1-9, wherein the monitoring response message further comprises one or more of: a status of an IP path between the first network node and the second network node, an availability indication of an IP path between the first network node and the second network node, a performance metric of an IP path between the first network node and the second network node, a status of the IP endpoint having the first IP address, an availability indication of the IP endpoint having the first IP address, a performance metric of the IP endpoint having the first IP address, a status of an or any IP endpoint having the same IP address equivalency information as the IP endpoint having the first IP address, an availability indication of an or any IP endpoint having the same IP address equivalency information as the IP endpoint having the first IP address and a performance metric of an or any IP endpoint having the same IP address equivalency information as the IP endpoint having the first IP address.
11. A method as claimed in any of claims 1-10, wherein the monitoring response message is a General Packet Radio Service Tunnelling Protocol, GTP, echo response message.
12. A method as claimed in any of claims 1-11, the method further comprising: prior to sending the monitoring response message, receiving, from the second network node, a monitoring request message for an IP endpoint having the first IP address; and wherein the monitoring response message is sent in response to the received monitoring request message.
13. A method as claimed in claim 12, wherein the monitoring request message comprises a request for one or more of: a status of an IP path between the first network node and the second network node, an availability indication of an IP path between the first network node and the second network node, a performance metric of an IP path between the first network node and the second network node, a status of the IP endpoint having the first IP address, an availability indication of the IP endpoint having the first IP address, a performance metric of the IP endpoint having the first IP address, a status of an or any IP endpoint having the same IP address equivalency information as the IP endpoint having the first IP address, an availability indication of an or any IP endpoint having the same IP address equivalency information as the IP endpoint having the first IP address and a performance metric of an or any IP endpoint having the same IP address equivalency information as the IP endpoint having the first IP address.
14. A method as claimed in any of claims 1-13, the method further comprising: after sending the monitoring response message, receiving, from the second network node, periodic monitoring request messages for only one of the plurality of IP endpoints.
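Claims 11-14 refer to GTP echo request/response exchanges as the monitoring mechanism. As a non-normative illustration only, a minimal GTPv1-U Echo Request following the usual 3GPP TS 29.281 header layout might be assembled as below; field choices here are a sketch, not a conformant implementation, and carry no claim language.

```python
import struct

GTPU_PORT = 2152          # registered GTP-U UDP port
MSG_ECHO_REQUEST = 0x01   # GTPv1-U Echo Request message type
MSG_ECHO_RESPONSE = 0x02  # GTPv1-U Echo Response message type

def build_echo_request(seq: int) -> bytes:
    """Sketch of a GTPv1-U Echo Request with the sequence-number flag set."""
    flags = 0x32  # version=1 (bits 7-5), protocol type=GTP (bit 4), S flag (bit 1)
    # Length counts everything after the mandatory 8-byte header:
    # sequence number (2) + N-PDU number (1) + next extension header type (1).
    length = 4
    teid = 0      # path-management (echo) messages use TEID 0
    return struct.pack("!BBHIHBB", flags, MSG_ECHO_REQUEST, length, teid, seq, 0, 0)

pkt = build_echo_request(seq=1)
```

A responding node would answer with message type 2 (Echo Response), which under the claims above could additionally carry the IP address equivalency information.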
15. A method as claimed in any of claims 1-14, wherein the first network node is a core network function.
16. A method as claimed in any of claims 1-15, wherein the second network node is a radio access network node, a fixed access network node, or a core network function.
17. A method in a second network node, wherein a first network node maintains a plurality of internet protocol, IP, endpoints each with a corresponding IP address, the method comprising: receiving (1400), from the first network node, a monitoring response message relating to a first IP address of one of the IP endpoints, the monitoring response message comprising IP address equivalency information that indicates that the plurality of IP endpoints are associated with each other.
18. A method as claimed in claim 17, wherein the IP address equivalency information indicates that the first IP address belongs to an IP subnet, wherein each of the IP addresses of the plurality of IP endpoints maintained by the first network node belong to the IP subnet.
19. A method as claimed in any of claims 17-18, the method further comprising: for a second IP address that is not already being monitored by the second network node, determining whether the second IP address belongs to a same IP subnet as the first IP address; if the second IP address belongs to the same IP subnet as the first IP address, determining not to send a monitoring request message relating to the second IP address; and otherwise, sending a monitoring request message relating to the second IP address.
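The decision recited in claim 19 can be sketched in a few lines using Python's `ipaddress` module, with the subnet taken from the equivalency information of an already-monitored first IP address. The function and variable names below are illustrative, not drawn from the claims.

```python
import ipaddress

def should_probe(second_ip: str, monitored_subnets) -> bool:
    """Return True only if no already-monitored endpoint shares a subnet
    with second_ip, i.e. a new monitoring request is actually needed."""
    addr = ipaddress.ip_address(second_ip)
    return not any(addr in net for net in monitored_subnets)

# Equivalency info received in the monitoring response: the first IP's subnet.
monitored = [ipaddress.ip_network("192.0.2.0/24")]

should_probe("192.0.2.77", monitored)    # same subnet -> no new request needed
should_probe("198.51.100.5", monitored)  # different subnet -> send a request
```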
20. A method as claimed in claim 17, wherein the IP address equivalency information is a unique identifier for the first network node.
21. A method as claimed in claim 20, wherein the unique identifier is one of: an IPv6 global scope address, a zero-padded IPv4 address, or a universally unique identifier, UUID.
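Claim 21 lists three interchangeable identifier formats. A hypothetical helper that normalises each form to a comparable 16-byte value, reading "zero-padded IPv4 address" as an IPv4 address padded with zeros to 128 bits, might look as follows; this is one plausible interpretation, not claim language.

```python
import ipaddress
import uuid

def canonical_node_id(value: str) -> bytes:
    """Normalise the identifier forms of claim 21 to 16 bytes (illustrative)."""
    try:
        addr = ipaddress.ip_address(value)
    except ValueError:
        return uuid.UUID(value).bytes          # universally unique identifier
    if addr.version == 4:
        return b"\x00" * 12 + addr.packed      # zero-padded IPv4 address
    return addr.packed                         # IPv6 global scope address
```

Because all three forms collapse to the same width, the second network node could compare them directly when grouping endpoints by equivalency information.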
22. A method as claimed in any of claims 17-21, wherein the IP address equivalency information indicates that the plurality of IP endpoints are maintained by the same network node.
23. A method as claimed in any of claims 17-22, wherein the IP address equivalency information indicates that the plurality of IP endpoints use equivalent IP paths between the first network node and the second network node.
24. A method as claimed in claim 23, wherein the equivalent IP paths are via a common General Packet Radio Service Tunnelling Protocol, GTP, based interface for conveying data between the second network node and the first network node.
25. A method as claimed in any of claims 23-24, wherein the equivalent IP paths are via one of: a 4G S1-U interface, a 4G Sx interface, a 4G S5-U interface, a 4G S8-U interface, a 5G N3 interface, a 5G N4 interface, or a 5G N9 interface.
26. A method as claimed in any of claims 17-25, wherein the monitoring response message further comprises one or more of: a status of an IP path between the first network node and the second network node, an availability indication of an IP path between the first network node and the second network node, a performance metric of an IP path between the first network node and the second network node, a status of the IP endpoint having the first IP address, an availability indication of the IP endpoint having the first IP address, a performance metric of the IP endpoint having the first IP address, a status of an or any IP endpoint having the same IP address equivalency information as the IP endpoint having the first IP address, an availability indication of an or any IP endpoint having the same IP address equivalency information as the IP endpoint having the first IP address and a performance metric of an or any IP endpoint having the same IP address equivalency information as the IP endpoint having the first IP address.
27. A method as claimed in any of claims 17-26, wherein the monitoring response message is a General Packet Radio Service Tunnelling Protocol, GTP, echo response message.
28. A method as claimed in any of claims 17-27, the method further comprising: prior to receiving the monitoring response message, sending, to the first network node, a monitoring request message for an IP endpoint having the first IP address.
29. A method as claimed in claim 28, wherein the monitoring request message comprises a request for one or more of: a status of an IP path between the first network node and the second network node, an availability indication of an IP path between the first network node and the second network node, a performance metric of an IP path between the first network node and the second network node, a status of the IP endpoint having the first IP address, an availability indication of the IP endpoint having the first IP address, a performance metric of the IP endpoint having the first IP address, a status of an or any IP endpoint having the same IP address equivalency information as the IP endpoint having the first IP address, an availability indication of an or any IP endpoint having the same IP address equivalency information as the IP endpoint having the first IP address and a performance metric of an or any IP endpoint having the same IP address equivalency information as the IP endpoint having the first IP address.
30. A method as claimed in any of claims 17-29, the method further comprising: using statistics and/or performance information relating to the IP endpoint with the first IP address as representative of statistics and/or performance information for other IP endpoints having the same IP address equivalency information.
31. A method as claimed in any of claims 17-30, the method further comprising: after receiving the monitoring response message, sending subsequent periodic monitoring request messages to only one of the plurality of IP endpoints that have the same IP address equivalency information.

32. A method as claimed in any of claims 17-31, the method further comprising: storing the received IP address equivalency information.

33. A method as claimed in any of claims 17-32, the method further comprising: determining, based on stored IP address equivalency information, whether a third IP address has the same IP address equivalency information as any IP address already being monitored by the second network node; if the third IP address has the same IP address equivalency information as any IP address already being monitored by the second network node, determining not to send a monitoring request message to the third IP address; and otherwise, sending a monitoring request message to the third IP address.

34. A method as claimed in any of claims 17-33, wherein the first network node is a core network function.

35. A method as claimed in any of claims 17-34, wherein the second network node is a radio access network node, a fixed access network node, or a core network function.

36. A first network node (300, 902, 1202), wherein the first network node (300, 902, 1202) maintains a plurality of internet protocol, IP, endpoints each with a corresponding IP address, the first network node (300, 902, 1202) configured to: send, to a second network node (300, 400, 901, 1201), a monitoring response message relating to a first IP address of one of the IP endpoints, the monitoring response message comprising IP address equivalency information that indicates that the plurality of IP endpoints are associated with each other.

37. A first network node (300, 902, 1202) as claimed in claim 36, wherein the first network node (300, 902, 1202) is further configured to perform the method of any of claims 2-16.

38. A second network node (300, 400, 901, 1201), wherein a first network node (300, 902, 1202) maintains a plurality of internet protocol, IP, endpoints each with a corresponding IP address, the second network node (300, 400, 901, 1201) configured to: receive, from the first network node (300, 902, 1202), a monitoring response message relating to a first IP address of one of the IP endpoints, the monitoring response message comprising IP address equivalency information that indicates that the plurality of IP endpoints are associated with each other.

39. A second network node (300, 400, 901, 1201) as claimed in claim 38, wherein the second network node (300, 400, 901, 1201) is further configured to perform the method of any of claims 18-35.

40. A first network node, wherein the first network node maintains a plurality of internet protocol, IP, endpoints each with a corresponding IP address, the first network node comprising a processor and a memory, the memory containing instructions executable by the processor whereby the first network node is operative to: send, to a second network node, a monitoring response message relating to a first IP address of one of the IP endpoints, the monitoring response message comprising IP address equivalency information that indicates that the plurality of IP endpoints are associated with each other.

41. A first network node as claimed in claim 40, wherein the first network node is further operative to perform the method of any of claims 2-16.

42. A second network node, wherein a first network node maintains a plurality of internet protocol, IP, endpoints each with a corresponding IP address, the second network node comprising a processor and a memory, the memory containing instructions executable by the processor whereby the second network node is operative to: receive, from the first network node, a monitoring response message relating to a first IP address of one of the IP endpoints, the monitoring response message comprising IP address equivalency information that indicates that the plurality of IP endpoints are associated with each other.

43. A second network node as claimed in claim 42, wherein the second network node is further operative to perform the method of any of claims 18-35.
44. A computer program product comprising a computer readable medium having computer readable code embodied therein, the computer readable code being configured such that, on execution by a suitable computer or processor, the computer or processor is caused to perform the method of any of claims 1-16 or
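Claims 31-33 above amount to keying the second node's monitoring state on the stored equivalency information rather than on individual IP addresses. A minimal sketch of that bookkeeping follows; the data structure and all names are assumptions for illustration, not claim language.

```python
class MonitoringTable:
    """Tracks which equivalency groups already have an active probe."""

    def __init__(self):
        self._probed_groups = {}  # equivalency info -> representative IP

    def should_send_request(self, ip: str, equivalency_info: str) -> bool:
        """Per claim 33: probe only if no monitored IP shares the equivalency info."""
        if equivalency_info in self._probed_groups:
            return False  # an equivalent endpoint is already being monitored
        self._probed_groups[equivalency_info] = ip  # per claim 32: store the info
        return True

table = MonitoringTable()
table.should_send_request("192.0.2.1", "node-A")     # first of its group
table.should_send_request("192.0.2.2", "node-A")     # same group, skip probing
table.should_send_request("198.51.100.9", "node-B")  # new group, probe it
```

Statistics gathered for the representative IP can then stand in for the whole group, as claim 30 describes.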
PCT/EP2021/076344 2021-09-24 2021-09-24 Handling of monitoring messages WO2023046291A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/EP2021/076344 WO2023046291A1 (en) 2021-09-24 2021-09-24 Handling of monitoring messages


Publications (1)

Publication Number Publication Date
WO2023046291A1 true WO2023046291A1 (en) 2023-03-30

Family

ID=78032425


Country Status (1)

Country Link
WO (1) WO2023046291A1 (en)



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21783465

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE