WO2021147014A1 - Method and apparatus for path status detection


Info

Publication number
WO2021147014A1
Authority
WO
WIPO (PCT)
Prior art keywords
cloud network
entity
gateway
network
routing function
Application number
PCT/CN2020/073864
Other languages
French (fr)
Inventor
Wei Sun
Mingliang HUANG
Hua LV
Original Assignee
Telefonaktiebolaget Lm Ericsson (Publ)
Application filed by Telefonaktiebolaget Lm Ericsson (Publ) filed Critical Telefonaktiebolaget Lm Ericsson (Publ)
Priority to PCT/CN2020/073864
Publication of WO2021147014A1

Classifications

    • H04L 41/0654: Arrangements for maintenance, administration or management of data switching networks; management of faults, events, alarms or notifications using network fault recovery
    • H04L 43/0811: Arrangements for monitoring or testing data switching networks; monitoring or testing based on specific metrics (e.g. QoS, energy consumption or environmental parameters) by checking availability, by checking connectivity
    • H04L 43/0847: Arrangements for monitoring or testing data switching networks; monitoring based on specific metrics; errors, e.g. transmission errors
    • H04L 43/10: Arrangements for monitoring or testing data switching networks; active monitoring, e.g. heartbeat, ping or trace-route

Definitions

  • the non-limiting and exemplary embodiments of the present disclosure generally relate to the technical field of communications, and specifically to methods and apparatuses for path status detection.
  • Path status detection technology can be used to detect faults between two communication devices connected by a link. Some path status detection technologies can provide rapid detection of faults to enable efficient network path convergence. For example, Bi-directional Forwarding Detection (BFD) may be configured on each end of the link that is to be monitored. BFD can detect unidirectional link failures on that link, and notify the link peer of the failure. This allows both ends of the link to discontinue using the link even though the fault can only be detected by one of the peers. For example, a device A may be able to receive packets from a device B over a fiber link. But the device B cannot receive packets from the device A due to a fault on that strand of the fiber. BFD on the device B can rapidly detect this error and signal to the device A that there is a fault on the link.
  • Path status detection technology can be used in various networks.
  • IP Internet protocol
  • VMs virtual machines
  • vEPC virtual Evolved Packet Core
  • DC-GW data center gateway
  • BGPaaS BGP (Border Gateway Protocol) as a service
  • VRF virtual routing and forwarding
  • FIG. 1a schematically shows a topology for BGPaaS interworking with vEPC.
  • there are at least two DC-GW routers providing redundant next hops for vEPC VMs/containers through VRRP (virtual router redundancy protocol) or multiple gateways.
  • BGPaaS sets up BGP neighbors with each DC-GW router, and advertises IP route reachability/withdrawal to the DC-GW routers for vEPC VMs/containers.
  • BFD for BGP between BGPaaS and DC-GW routers is enabled in order to detect path failure between BGPaaS and DC-GW routers immediately.
  • fast route/service convergence may be mandatory for the BGPaaS interworking with vEPC.
  • both static IP routes with BFD and an IGP routing protocol with a small interval are limited to small-size deployments due to the number of BFD sessions and IGP sessions supported on DC-GW routers.
  • Static IP routes with BFD require manual configuration on DC-GW routers, and are therefore not suitable for automated orchestration.
  • an IGP routing protocol with a small interval introduces many flooded packets, and is therefore not suitable for most cloud networks.
  • APNs Access Point Names
  • a deployment with more than 500 APNs may require the DC-GW routers to support at least 500 x (the number of VMs/containers) BFD sessions or IGP sessions, and the existing solutions may not be applied in this scenario.
  • FIG. 1b shows some problems of the existing technologies according to an embodiment of the disclosure. As shown in FIG. 1b, once the path between a DC-GW and a VM/Container fails, BGPaaS is not aware of the path failure so it will not withdraw the routes; the traffic will not be delivered to the other paths or the other VMs/containers and will be lost.
  • FIG. 1c shows some problems of the existing technologies according to another embodiment of the disclosure.
  • the issues of BFD for static routes may include: it requires provisioning BFD and static routes on the DC-GW routers for each VM/container; auto scaling (in and out) of VMs/containers cannot be supported because of this; the DC-GW routers do not support so many BFD sessions (especially in case of more than 500 APNs deployed), which exceeds the DC-GW hardware capacity; and it cannot be applied in most cloud networks.
  • FIG. 1d shows some problems of the existing technologies according to another embodiment of the disclosure.
  • the issues of IGP routing such as OSPF may include: OSPF will introduce too many multicast packets in the cloud network since OSPF Hello messages are multicast traffic; the DC-GW routers do not support so many OSPF sessions/neighbors (especially in case of more than 500 APNs deployed); OSPF Hello messages consume much CPU (Central Processing Unit) on the DC-GW routers; and it cannot be applied in most cloud networks.
  • embodiments of the present disclosure propose an improved path status detection solution.
  • the routing function entity such as BGPaaS sends probe messages to the entities (such as vEPC VMs/containers) of the cloud network and these messages pass through different gateways (such as DC-GW routers) of the cloud network.
  • their respective gateway could be identified by a respective IP address assigned on each gateway (such as a DC-GW router) of the cloud network, or by a single virtual IP address negotiated by VRRP between the gateways of the cloud network.
  • the routing function entity such as BGPaaS can identify the path failure between the gateway (such as a DC-GW router) of the cloud network and the entity (such as a vEPC VM/container) of the cloud network, and can accordingly withdraw the IP routes to the entity of the cloud network which has a path failure towards the specific gateway of the cloud network.
  • the route/service fast convergence can be achieved through BFD for BGP, which is a typical configuration for BGP.
  • a method performed by a routing function entity comprises sending a probe message to an entity of a cloud network, wherein a routing path of the probe message passes through a first gateway of the cloud network.
  • the method further comprises determining a path status between the first gateway of the cloud network and the entity of the cloud network based on whether a corresponding probe response message is received or not.
  • the determining step may comprise when the corresponding probe response message is not received by the routing function entity within the predefined time period and there is no path failure between the routing function entity and the first gateway of the cloud network, determining that a path failure has happened between the first gateway of the cloud network and the entity of the cloud network; and when the corresponding probe response message is received by the routing function entity within the predefined time period, determining that the path failure has not happened between the first gateway of the cloud network and the entity of the cloud network.
  • a routing path of the corresponding probe response message may pass through the first gateway of the cloud network.
  • a routing path of the corresponding probe response message passes through a second gateway of the cloud network rather than the first gateway of the cloud network
  • the determining step may comprise when the corresponding probe response message is not received by the routing function entity within the predefined time period and there is no path failure between the routing function entity and the first gateway of the cloud network and between the routing function entity and the second gateway of the cloud network, determining that the path failure has happened between the first gateway of the cloud network and the entity of the cloud network.
  • the method may further comprise sending a route withdrawal message to the first gateway of the cloud network to withdraw at least one route to the entity of the cloud network.
  • the probe message may comprise an Internet control message protocol, ICMP, message and a bidirectional forwarding detection, BFD, message.
  • the probe message may be sent in a time interval.
  • the routing function entity may be a border gateway protocol, BGP, as a service, BGPaaS, entity.
  • the entity of the cloud network may be a virtual network function entity of a core network of a wireless network.
  • the entity of the cloud network is able to place at least one route in the routing function entity by using border gateway protocol, BGP.
  • the at least one policy-based route may comprise routes regarding different source Internet protocol addresses on the routing function entity passing through different gateways of the cloud network.
  • different source Internet protocol addresses may be used in probe messages passing through different gateways of the cloud network.
  • the method may further comprise when the corresponding probe response message is received by the routing function entity again after the determination of the path failure, determining the failed path is recovered; and sending a route reachability message to the first gateway of the cloud network to advertise at least one route to the entity of the cloud network.
  • bidirectional forwarding detection, BFD for border gateway protocol, BGP, between the routing function entity and a gateway of the cloud network may be enabled to detect the path failure between the routing function entity and the gateway of the cloud network.
  • the cloud network may have two or more gateways providing redundant next hops for the entity of the cloud network through virtual router redundancy protocol, VRRP.
  • the sending step may comprise sending two or more probe messages to the entity of the cloud network, wherein the routing paths of the two or more probe messages respectively pass through different gateways of the cloud network.
  • a method performed by an entity of a cloud network.
  • the method comprises receiving a probe message from a routing function entity, wherein a routing path of the probe message passes through a first gateway of the cloud network.
  • the method further comprises determining a path status between the first gateway of the cloud network and the entity of the cloud network based on whether the probe message is received or not.
  • the determining step may comprise when the probe message from a first Internet protocol address on the routing function entity is not received by the entity of the cloud network within a predefined time period, determining that a path failure has happened either between the first gateway of the cloud network and the entity of the cloud network or between the first gateway of the cloud network and the routing function entity of the cloud network; and when the probe message from the first Internet protocol address on the routing function entity is received by the entity of the cloud network within the predefined time period, determining that the path failure has not happened between the first gateway of the cloud network and the entity of the cloud network and between the first gateway of the cloud network and the routing function entity of the cloud network.
  • the method may further comprise in response to a reception of the probe message, sending a corresponding probe response message to the routing function entity.
  • a routing path of the corresponding probe response message passes through the first gateway of the cloud network.
  • a routing path of the corresponding probe response message passes through the second gateway of the cloud network rather than the first gateway of the cloud network.
  • the method may further comprise performing a first action based on the path failure.
  • the first action may comprise at least one of avoiding sending traffic to the path that passes through the first gateway of the cloud network, moving the service of the entity of the cloud network to another entity of the cloud network, and rebooting the entity of the cloud network.
  • the method may further comprise when the probe message is received by the entity of the cloud network within a predefined time period again after the determination of the path failure, determining the failed path is recovered; and performing a second action based on the recovered path.
  • the second action may comprise canceling the first action.
  • the probe message may be received in a time interval.
  • a routing function entity comprises a processor; and a memory coupled to the processor, said memory containing instructions executable by said processor, whereby said routing function entity is operative to send a probe message to an entity of a cloud network, wherein a routing path of the probe message passes through a first gateway of the cloud network. Said routing function entity is further operative to determine a path status between the first gateway of the cloud network and the entity of the cloud network based on whether a corresponding probe response message is received or not.
  • an entity of a cloud network comprises a processor; and a memory coupled to the processor, said memory containing instructions executable by said processor, whereby said entity of the cloud network is operative to receive a probe message from a routing function entity, wherein a routing path of the probe message passes through a first gateway of the cloud network. Said entity of the cloud network is further operative to determine a path status between the first gateway of the cloud network and the entity of the cloud network based on whether the probe message is received or not.
  • a routing function entity comprises a sending module and a determining module.
  • the sending module may be configured to send a probe message to an entity of a cloud network, wherein a routing path of the probe message passes through a first gateway of the cloud network.
  • the determining module may be configured to determine a path status between the first gateway of the cloud network and the entity of the cloud network based on whether a corresponding probe response message is received or not.
  • an entity of a cloud network comprises a receiving module and a determining module.
  • the receiving module may be configured to receive a probe message from a routing function entity, wherein a routing path of the probe message passes through a first gateway of the cloud network.
  • the determining module may be configured to determine a path status between the first gateway of the cloud network and the entity of the cloud network based on whether the probe message is received or not.
  • a computer-readable storage medium storing instructions which when executed by at least one processor, cause the at least one processor to perform any step of the method according to the first aspect of the present disclosure.
  • a computer-readable storage medium storing instructions which when executed by at least one processor, cause the at least one processor to perform any step of the method according to the second aspect of the present disclosure.
  • a computer program product comprising instructions which when executed by at least one processor, cause the at least one processor to perform any step of the method according to the first aspect of the present disclosure.
  • a computer program product comprising instructions which when executed by at least one processor, cause the at least one processor to perform any step of the method according to the second aspect of the present disclosure.
  • a route/service fast convergence method proposed for the cloud network (such as vEPC) does not need to depend on any additional feature on the gateway (such as DC-GW router) of the cloud network and is fully compatible with the gateway (such as DC-GW router) of the cloud network.
  • the proposed method can be applied with no additional function required to DC-GW routers. It does not need any configuration/function used for path failure detection on DC-GW routers.
  • the proposed method supports incremental deployment and there is no impact on the deployed cloud infrastructure.
  • the advantages of the proposed method comprise low cost, being an add-on feature, and requiring no extra capability on any hardware devices.
  • the proposed method can be applied on massive and scaling deployment scenarios.
  • the proposed method can detect the path failure through messages internal to the entity (such as a VNF (virtual network function)) of the cloud network, and trigger routing protocol convergence as soon as possible.
  • the proposed method can be applied in large scale cloud network and does not consume any additional resource (such as computing, storage and hardware resource) on the gateway (such as DC-GW router) of the cloud network.
  • the proposed method can be used in any size of the cloud network.
  • FIG. 1a schematically shows a topology for BGPaaS interworking with vEPC
  • FIG. 1b shows some problems of the existing technologies according to an embodiment of the disclosure
  • FIG. 1c shows some problems of the existing technologies according to another embodiment of the disclosure.
  • FIG. 1d shows some problems of the existing technologies according to another embodiment of the disclosure
  • FIG. 2 illustrates one implementation example for particular embodiments of the present disclosure
  • FIG. 3 illustrates two specific examples of how a network device may be implemented in certain embodiments of the present disclosure
  • FIG. 4 shows a flowchart of a method according to an embodiment of the present disclosure
  • FIG. 5 shows a flowchart of a method according to another embodiment of the present disclosure
  • FIG. 6a shows an example of a failure detecting solution applied in multiple gateways case according to an embodiment of the present disclosure
  • FIG. 6b shows an example of a failure recovery solution applied in multiple gateways case according to an embodiment of the present disclosure
  • FIG. 7 shows an example of the proposed solution applied in VRRP case according to another embodiment of the present disclosure
  • FIG. 8 is a block diagram showing an apparatus suitable for practicing some embodiments of the disclosure.
  • FIG. 9 is a block diagram showing a routing function entity according to an embodiment of the disclosure.
  • FIG. 10 is a block diagram showing an entity of a cloud network according to an embodiment of the disclosure.
  • the term “network” or “communication network” refers to a network following any suitable (wireless or wired) communication standards.
  • the wireless communication standards may comprise new radio (NR) , long term evolution (LTE) , LTE-Advanced, wideband code division multiple access (WCDMA) , high-speed packet access (HSPA) , Code Division Multiple Access (CDMA) , Time Division Multiple Address (TDMA) , Frequency Division Multiple Access (FDMA) , Orthogonal Frequency-Division Multiple Access (OFDMA) , Single carrier frequency division multiple access (SC-FDMA) and other wireless networks.
  • a CDMA network may implement a radio technology such as Universal Terrestrial Radio Access (UTRA).
  • UTRA includes WCDMA and other variants of CDMA.
  • a TDMA network may implement a radio technology such as Global System for Mobile Communications (GSM) .
  • An OFDMA network may implement a radio technology such as Evolved UTRA (E-UTRA) , Ultra Mobile Broadband (UMB) , IEEE 802.11 (Wi-Fi) , IEEE 802.16 (WiMAX) , IEEE 802.20, Flash-OFDMA, Ad-hoc network, wireless sensor network, etc.
  • the communications between two devices in the network may be performed according to any suitable communication protocols, including, but not limited to, the wireless communication protocols as defined by a standard organization such as 3rd generation partnership project (3GPP) or the wired communication protocols.
  • the wireless communication protocols may comprise the first generation (1G) , 2G, 3G, 4G, 4.5G, 5G communication protocols, and/or any other protocols either currently known or to be developed in the future.
  • references in the specification to “one embodiment, ” “an embodiment, ” “an example embodiment, ” and the like indicate that the embodiment described may include a particular feature, structure, or characteristic, but it is not necessary that every embodiment includes the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to affect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
  • first and second etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element could be termed a second element, and similarly, a second element could be termed a first element, without departing from the scope of example embodiments.
  • the term “and/or” includes any and all combinations of one or more of the associated listed terms.
  • the phrase “at least one of A and B” should be understood to mean “only A, only B, or both A and B. ”
  • the phrase “A and/or B” should be understood to mean “only A, only B, or both A and B. ”
  • a communication system may further include any additional elements suitable to support communication between terminal devices or between a wireless device and another communication device, such as a landline telephone, a service provider, or any other network node or terminal device.
  • the communication system may provide communication and various types of services to one or more terminal devices to facilitate the terminal devices’ access to and/or use of the services provided by, or via, the communication system.
  • FIG. 2 illustrates one implementation example for particular embodiments of the solution described herein.
  • a network device (ND) 200 may, in some embodiments, be an electronic device that can be communicatively connected to other electronic devices on the network (e.g., other network devices, user equipment devices (UEs) , radio base stations, etc. ) .
  • the network device 200 may include radio access features that provide wireless radio network access to other electronic devices (for example a “radio access network device” may refer to such a network device) such as user equipment devices (UEs) .
  • the network device 200 may be a base station, such as eNodeB in Long Term Evolution (LTE) , NodeB in Wideband Code Division Multiple Access (WCDMA) or other types of base stations, as well as a Radio Network Controller (RNC) , a Base Station Controller (BSC) , or other types of control nodes.
  • the network device 200 may include routing function feature that allows a guest virtual machine (VM) or a container based application to place routes in its own virtual routing and forwarding (VRF) instance using BGP.
  • the network device 200 may be BGPaaS.
  • the example network device 200 comprises processor 201, memory 202, interface 203, and antenna 204.
  • Processor 201 may be a microprocessor, controller, microcontroller, central processing unit, digital signal processor, application specific integrated circuit, field programmable gate array, any other type of electronic circuitry, or any combination of one or more of the preceding.
  • the processor 201 may comprise one or more processor cores.
  • some or all of the functionality described herein as being provided by the network device 200 may be implemented by processor 201 executing software instructions, either alone or in conjunction with other components, such as memory 202.
  • Memory 202 may store code (which is composed of software instructions and which is sometimes referred to as computer program code or a computer program) and/or data using non-transitory machine-readable (e.g., computer-readable) media, such as machine-readable storage media (e.g., magnetic disks, optical disks, solid state drives, read only memory (ROM) , flash memory devices, phase change memory) and machine-readable transmission media (e.g., electrical, optical, radio, acoustical or other form of propagated signals –such as carrier waves, infrared signals) .
  • memory 202 may comprise non-volatile memory containing code to be executed by processor 201.
  • since memory 202 is non-volatile, the code and/or data stored therein can persist even when the network device is turned off (when power is removed) .
  • while the network device 200 is turned on, that part of the code that is to be executed by the processor (s) 201 may be copied from non-volatile memory into volatile memory (e.g., dynamic random access memory (DRAM) , static random access memory (SRAM) ) of the network device 200.
  • Interface 203 may be used in the wired and/or wireless communication of signaling and/or data to or from the network device 200.
  • interface 203 may perform any formatting, coding, or translating to allow the network device 200 to send and receive data whether over a wired and/or a wireless connection.
  • interface 203 may comprise radio circuitry capable of receiving data from other devices in the network over a wireless connection and/or sending data out to other devices via a wireless connection.
  • This radio circuitry may include transmitter (s) , receiver (s) , and/or transceiver (s) suitable for radiofrequency communication.
  • the radio circuitry may convert digital data into a radio signal having the appropriate parameters (e.g., frequency, timing, channel, bandwidth, etc. ) .
  • interface 203 may comprise network interface controller (s) (NICs) , also known as a network interface card, network adapter, local area network (LAN) adapter or physical network interface.
  • the NIC (s) may facilitate connecting the network device 200 to other devices, allowing them to communicate via wire through plugging a cable into a physical port connected to a NIC.
  • processor 201 may represent part of interface 203, and some or all of the functionality described as being provided by interface 203 may be provided more specifically by processor 201.
  • a network device (ND) 210 may, in some embodiments, be an electronic device that can be communicatively connected to other electronic devices on the network (e.g., other network devices, etc. ) .
  • the network device 210 may include core network features that provide various core network functions (for example a “core network device” may refer to such a network device) .
  • the network device 210 may be a core network device in EPC or vEPC, a network function in 5GC (5G core network) , etc.
  • the network device 210 may include routing function feature that allows a guest virtual machine (VM) or a container based application to place routes in its own virtual routing and forwarding (VRF) instance using BGP.
  • the network device 210 may be BGPaaS.
  • the example network device 210 comprises processor 211, memory 212 and interface 213. These components may work together to provide various network device functionality as disclosed herein.
  • Processor 211 may be a microprocessor, controller, microcontroller, central processing unit, digital signal processor, application specific integrated circuit, field programmable gate array, any other type of electronic circuitry, or any combination of one or more of the preceding.
  • the processor 211 may comprise one or more processor cores.
  • some or all of the functionality described herein as being provided by the network device 210 may be implemented by processor 211 executing software instructions, either alone or in conjunction with other components, such as memory 212.
  • Memory 212 may store code (which is composed of software instructions and which is sometimes referred to as computer program code or a computer program) and/or data using non-transitory machine-readable (e.g., computer-readable) media, such as machine-readable storage media (e.g., magnetic disks, optical disks, solid state drives, read only memory (ROM) , flash memory devices, phase change memory) and machine-readable transmission media (e.g., electrical, optical, radio, acoustical or other form of propagated signals –such as carrier waves, infrared signals) .
  • memory 212 may comprise non-volatile memory containing code to be executed by processor 211.
  • since memory 212 is non-volatile, the code and/or data stored therein can persist even when the network device is turned off (when power is removed) .
  • while the network device 210 is turned on, that part of the code that is to be executed by the processor (s) 211 may be copied from non-volatile memory into volatile memory (e.g., dynamic random access memory (DRAM) , static random access memory (SRAM) ) of the network device 210.
  • Interface 213 may be used in the wired and/or wireless communication of signaling and/or data to or from the network device 210.
  • interface 213 may perform any formatting, coding, or translating to allow the network device 210 to send and receive data whether over a wired and/or a wireless connection.
  • interface 213 may comprise radio circuitry capable of receiving data from other devices in the network over a wireless connection and/or sending data out to other devices via a wireless connection.
  • This radio circuitry may include transmitter (s) , receiver (s) , and/or transceiver (s) suitable for radiofrequency communication.
  • the radio circuitry may convert digital data into a radio signal having the appropriate parameters (e.g., frequency, timing, channel, bandwidth, etc. ) .
  • interface 213 may comprise network interface controller (s) (NICs) , also known as a network interface card, network adapter, local area network (LAN) adapter or physical network interface.
  • the NIC (s) may facilitate connecting the network device 210 to other devices, allowing them to communicate via wire through plugging a cable into a physical port connected to a NIC.
  • processor 211 may represent part of interface 213, and some or all of the functionality described as being provided by interface 213 may be provided more specifically by processor 211.
  • the components of the network device 210 are each depicted as separate boxes located within a single larger box for reasons of simplicity in describing certain aspects and features of the network device 210 disclosed herein. In practice however, one or more of the components illustrated in the example device 210 may comprise multiple different physical elements (e.g., interface 213 may comprise terminals for coupling wires for a wired connection and a radio transceiver for a wireless connection) .
  • while modules are illustrated as being implemented in software stored in memory 212, other embodiments may implement part or all of each of these modules in hardware.
  • the solution described herein may be implemented in the network devices 200 and 210 by means of a computer program comprising instructions which, when executed on at least one processor, cause the at least one processor to carry out the actions described herein, where appropriate.
  • FIG. 2 depicts a single network device 200 and a single network device 210, there may be multiple network devices 200 and network devices 210 in other embodiments.
  • network device 200 or network devices 210 may be made up of two or more physically or logically separate components that, taken as a whole, perform the relevant functions or features of the respective device.
  • the network device 200 may comprise a base station component deployed at one location and control node component deployed at a second location. The two components together may comprise a single network device 200 for the purpose of this signaling diagram.
  • the network device 210 may comprise a control plane function component deployed at one location and a user plane function component deployed at a second location.
  • the two components together may comprise a single network device 210 for the purpose of this signaling diagram.
  • the network device 210 may comprise a prober component deployed at one location, which may initiate the probe messages to each terminator, and a routing function component deployed at a second location.
  • the prober component may be implemented in BGPaaS.
  • when the network device 210 is a core network device (such as a vEPC VM/container) of a wireless network, it may comprise a terminator component deployed at one location, which may reply to the probe messages from the prober once it receives the probe messages, and a core network function component deployed at a second location.
  • the terminator component may be implemented in each core network device such as vEPC VM/container.
  • FIG. 3 illustrates two specific examples of how ND 300 may be implemented in certain embodiments of the described solution including: 1) a special-purpose network device 302 that uses custom processing circuits such as application–specific integrated–circuits (ASICs) and a proprietary operating system (OS) ; and 2) a general purpose network device 304 that uses common off-the-shelf (COTS) processors and a standard OS which has been configured to provide one or more of the features or functions disclosed herein.
  • Special-purpose network device 302 includes hardware 310 comprising processor (s) 312, and interface 316, as well as memory 318 having stored therein software 320.
  • the software 320 implements the modules described with regard to the previous figures.
  • the software 320 may be executed by the hardware 310 to instantiate a set of one or more software instance (s) 322.
  • Each of the software instance (s) 322, and that part of the hardware 310 that executes that software instance (be it hardware dedicated to that software instance, hardware in which a portion of available physical resources (e.g., a processor core) is used, and/or time slices of hardware temporally shared by that software instance with others of the software instance (s) 322) , form a separate virtual network element 330A-R.
  • the example general purpose network device 304 includes hardware 340 comprising a set of one or more processor (s) 342 (which are often COTS processors) and interface 346 , as well as memory 348 having stored therein software 350.
  • the processor (s) 342 execute the software 350 to instantiate one or more sets of one or more applications 364A-R.
  • virtualization layer 354 represents the kernel of an operating system (or a shim executing on a base operating system) that allows for the creation of multiple instances 362A-R called software containers that may each be used to execute one (or more) of the sets of applications 364A-R.
  • software containers 362A-R are user spaces (typically a virtual memory space) that may be separate from each other and separate from the kernel space in which the operating system is run.
  • the set of applications running in a given user space may be prevented from accessing the memory of the other processes.
  • virtualization layer 354 may represent a hypervisor (sometimes referred to as a virtual machine monitor (VMM) ) or a hypervisor executing on top of a host operating system; and each of the sets of applications 364A-R may run on top of a guest operating system within an instance 362A-R called a virtual machine (which in some cases may be considered a tightly isolated form of software container that is run by the hypervisor) .
  • one, some or all of the applications are implemented as unikernel (s) , which can be generated by compiling directly with an application only a limited set of libraries (e.g., from a library operating system (LibOS) including drivers/libraries of OS services) that provide the particular OS services needed by the application.
  • while a unikernel can be implemented to run directly on hardware 340, directly on a hypervisor (in which case the unikernel is sometimes described as running within a LibOS virtual machine) , or in a software container, embodiments can be implemented fully with unikernels running directly on a hypervisor represented by virtualization layer 354, unikernels running within software containers represented by instances 362A-R, or as a combination of unikernels and the above-described techniques (e.g., unikernels and virtual machines both run directly on a hypervisor, unikernels and sets of applications that are run in different software containers) .
  • the instantiation of the one or more sets of one or more applications 364A-R, as well as virtualization if implemented are collectively referred to as software instance (s) 352.
  • the virtual network element (s) 360A-R perform similar functionality to the virtual network element (s) 330A-R.
  • This virtualization of the hardware 340 is sometimes referred to as network function virtualization (NFV) .
  • NFV may be used to consolidate many network equipment types onto industry standard high volume server hardware, physical switches, and physical storage, which could be located in for example data centers and customer premise equipment (CPE) .
  • different embodiments of the invention may implement one or more of the software container (s) 362A-R differently.
  • while embodiments are illustrated with each instance 362A-R corresponding to one VNE 360A-R, alternative embodiments may implement this correspondence at a finer level of granularity; it should be understood that the techniques described herein with reference to a correspondence of instances 362A-R to VNEs also apply to embodiments where such a finer level of granularity and/or unikernels are used.
  • the third exemplary ND implementation in FIG. 3 is a hybrid network device 306, which includes both custom ASICs/proprietary OS and COTS processors/standard OS in a single ND or a single card within an ND.
  • a platform virtual machine, such as a VM that implements the functionality of the special-purpose network device 302, could provide for para-virtualization to the hardware present in the hybrid network device 306.
  • FIG. 4 shows a flowchart of a method 400 according to an embodiment of the present disclosure, which may be performed by an apparatus implemented in or as or communicatively coupled to a routing function entity or any other entity having similar functionality.
  • the routing function entity may provide means or modules for accomplishing various parts of the method 400 as well as means or modules for accomplishing other processes in conjunction with other components.
  • the routing function entity may be any suitable entity which can allow another entity (such as a guest VM or container based application) to place routes in the routing function entity (for example, in its own VRF instance) for example by using a BGP message.
  • the routing function entity may be the network device 200 or 210 of FIG. 2 or the BGPaaS.
  • the routing function entity may send a probe message to an entity of a cloud network.
  • the routing path of the probe message passes through a first gateway of the cloud network.
  • the cloud network may be a computer network that exists within or is a part of a cloud computing infrastructure.
  • the cloud network may be vEPC, cloud RAN, cloud data center network, cloud service platform, cloud based network function of 5GC, etc.
  • the entity of the cloud network may be a node of the cloud network which can provide at least one service.
  • the entity of the cloud network may be a VM or container.
  • the entity of the cloud network may be virtual network function entity of a core network (such as EPC or 5GC) of a wireless network (such as EPS (evolved packet system) or 5GS (5G system) ) .
  • the entity of the cloud network is able to place at least one route in the routing function entity.
  • the entity of the cloud network is able to place at least one policy-based route in the routing function entity by using BGP.
  • the at least one policy-based route may comprise routes regarding different source Internet protocol (IP) addresses on the routing function entity passing through different gateways of the cloud network.
  • the probe message may be any suitable message which can implement probe function.
  • the probe message may comprise, but not limited to, an Internet control message protocol (ICMP) message and a bidirectional forwarding detection (BFD) message.
  • a BFD message may be used in case there is a firewall blocking ICMP messages.
  • the probe message may be sent in various ways such as periodically, in response to a path status detection request, according to a configuration, in response to an addition of a new entity of the cloud network, etc.
  • the probe message may be sent in a time interval.
  • the time interval may be predefined or configured by the operator.
  • the routing function entity may send two or more probe messages to the entity of the cloud network.
  • the routing paths of the two or more probe messages respectively pass through different gateways of the cloud network.
  • the routing function entity may use different source IP addresses in probe messages passing through different gateways of the cloud network.
  • the cloud network may have two or more gateways providing redundant next hops for the entity of the cloud network through VRRP.
  • different source IP addresses may be used in probe messages passing through different gateways of the cloud network.
  • a source IP address IP_A of the routing function entity may be used in a probe message passing through the first gateway of the cloud network
  • a source IP address IP_B of the routing function entity may be used in a probe message passing through a second gateway of the cloud network, and so on.
  • the routing function entity may determine a path status between the first gateway of the cloud network and the entity of the cloud network based on whether a corresponding probe response message is received or not.
  • the routing function entity may determine that a path failure has happened between the first gateway of the cloud network and the entity of the cloud network.
  • the predefined time period may be any suitable time period such as 3 x the above described time interval.
  • the routing function entity may determine that the path failure has not happened between the first gateway of the cloud network and the entity of the cloud network.
  • the path failure between the routing function entity and the first gateway of the cloud network may be detected in various ways.
  • BFD for BGP between the routing function entity and a gateway (such as the first gateway) of the cloud network may be enabled to detect the path failure between the routing function entity and the gateway of the cloud network.
  • the routing path of the corresponding probe response message may be any suitable route path between the routing function entity and the entity of the cloud network. In an embodiment, the routing path of the corresponding probe response message passes through the first gateway of the cloud network.
  • the routing path of the corresponding probe response message passes through a second gateway of the cloud network rather than the first gateway of the cloud network.
  • the probe messages may pass through different gateways of the cloud network. Since there may be only one default gateway in case of VRRP, the response messages may always pass through the default gateway whose role is VRRP master (e.g., the second gateway of the cloud network in this embodiment) .
  • the routing function entity may determine that the path failure has happened between the first gateway of the cloud network and the entity of the cloud network.
  • the routing function entity may send a route withdrawal message to the first gateway of the cloud network to withdraw at least one route to the entity of the cloud network.
  • the route withdrawal message may be a BGP message.
  • the routing function entity may determine the failed path is recovered and send a route reachability message to the first gateway of the cloud network to advertise at least one route to the entity of the cloud network.
  • the route reachability message may be a BGP message.
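  • As one possible illustration of these BGP messages, the sketch below assumes the routing function entity drives a BGP speaker through ExaBGP's text API, writing announce/withdraw commands on the standard output of a helper process that ExaBGP runs; this tooling choice is an assumption for illustration only and is not specified by the disclosure.

    # Minimal sketch (assumption: ExaBGP runs this script as an API process and
    # reads announce/withdraw commands from its standard output; the disclosure
    # only states that withdrawal/reachability are BGP messages).
    import sys

    def advertise_route(prefix, next_hop):
        # Route reachability: advertise the route to the entity of the cloud network.
        sys.stdout.write(f"announce route {prefix} next-hop {next_hop}\n")
        sys.stdout.flush()

    def withdraw_route(prefix, next_hop):
        # Route withdrawal: withdraw the route after a path failure is detected.
        # A real deployment would target the specific BGP neighbor (the first gateway).
        sys.stdout.write(f"withdraw route {prefix} next-hop {next_hop}\n")
        sys.stdout.flush()

    # Example (illustrative addresses only):
    # withdraw_route("192.0.2.10/32", "198.51.100.1")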
  • the routing function entity may continue to send the probe message to the entity of the cloud network, wherein the routing path of the probe message passes through the first gateway of the cloud network.
  • the probe message may be received by the entity of the cloud network and the corresponding probe response message may be sent to the routing function entity. After reception of the corresponding probe response message, the routing function entity may determine the failed path is recovered.
  • the routing function entity may get a list of next hop IP addresses for IP routes placed in the routing function entity by entities of the cloud network. These IP addresses may be logical IP addresses on the entities of the cloud network but not the IP addresses connected to the gateway of the cloud network.
  • the routing function entity may get a list of neighbors (for example, gateways of the cloud network) of the routing function entity and their next hop IP addresses. The relationship between the gateways of the cloud network and the interface IP addresses of the gateways of the cloud network can be obtained.
  • the routing function entity may send a probe message to each entity of the cloud network passing through each gateway of the cloud network in a time interval.
  • the destination IP address in the probe messages may be the logical IP address on each entity of the cloud network.
  • the routing function entity may use different source IP addresses in probe messages passing through different gateways of the cloud network. For example, source IP address IP_A is used in probe messages passing through the gateway A of the cloud network, source IP address IP_B may be used in probe messages passing through the gateway B of the cloud network, and so on.
  • the routing function entity may use a MAC (Media Access Control) address of the interface connected to the routing function entity on each gateway of the cloud network as the destination MAC address. This MAC address may be obtained by ARP (address resolution protocol) . Therefore the probe messages can pass through different gateways of the cloud network.
  • the probe messages passing through the gateway A of the cloud network may use the MAC address MAC_A as the destination MAC address, wherein MAC_A is the MAC address of the interface connected to the routing function entity on the gateway A of the cloud network.
  • the probe messages passing through the gateway B of the cloud network use the MAC address MAC_B as the destination MAC address, wherein MAC_B is the MAC address of the interface connected to the routing function entity on the gateway B of the cloud network.
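  • As an illustration of the probe steering described above, the sketch below crafts ICMP probe messages with the Scapy library, using a different source IP address per gateway and the MAC address of each gateway interface (resolved via ARP) as the destination MAC address. This is a minimal sketch and not part of the disclosure: the gateway addresses, source addresses and interface name are illustrative assumptions, and collecting the probe responses is omitted.

    # Minimal sketch (not from the disclosure): per-gateway ICMP probes with Scapy.
    # Gateway interface IPs, per-gateway source IPs and the interface name are
    # illustrative placeholders; response collection (e.g. via sniffing) is omitted.
    from scapy.all import Ether, IP, ICMP, getmacbyip, sendp

    GATEWAYS = {
        # gateway name: (gateway interface IP, probe source IP used via that gateway)
        "gw_a": ("10.0.0.1", "10.0.0.101"),   # e.g. DC-GW router A, source IP_A
        "gw_b": ("10.0.0.2", "10.0.0.102"),   # e.g. DC-GW router B, source IP_B
    }

    def send_probes(entity_ip, iface="eth0"):
        """Send one ICMP probe to the entity's logical IP via every gateway."""
        for name, (gw_ip, src_ip) in GATEWAYS.items():
            gw_mac = getmacbyip(gw_ip)            # destination MAC learned via ARP
            if gw_mac is None:
                continue                          # gateway not reachable at layer 2
            probe = Ether(dst=gw_mac) / IP(src=src_ip, dst=entity_ip) / ICMP()
            # The explicit destination MAC forces this probe through one specific
            # gateway, independently of the local routing table.
            sendp(probe, iface=iface, verbose=False)

  • With the policy-based routes described earlier, the source address chosen here also determines which gateway the corresponding probe response traverses (or, with VRRP, the response always returns via the current master).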
  • if no corresponding probe response message is received by the routing function entity within the predefined time period, the timer may time out and notify the routing function entity to send an IP route withdrawal message to the gateway of the cloud network to withdraw the IP route to the entity of the cloud network. If the path recovers and the probe response message is received by the routing function entity again, the timer may restart and notify the routing function entity to send an IP route reachability message to the gateway of the cloud network to advertise the IP route to the entity of the cloud network.
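  • A minimal sketch of this timer bookkeeping is given below, assuming a fixed probe interval and a timeout of 3 x that interval; the withdraw and advertise callbacks stand in for the BGP route withdrawal and reachability messages and are hypothetical placeholders rather than an API defined by the disclosure. The separate check that there is no path failure between the routing function entity and the gateway (e.g. via BFD for BGP) is omitted.

    # Minimal sketch (assumed names): one timer per (gateway, entity) path.
    # record_response() is fed from received probe responses; check_timeouts()
    # runs once per probe interval.
    import time

    PROBE_INTERVAL = 1.0                 # seconds between probes (assumed value)
    TIMEOUT = 3 * PROBE_INTERVAL         # "3 x the time interval"

    class PathMonitor:
        def __init__(self, withdraw, advertise):
            self._withdraw = withdraw    # placeholder: send BGP route withdrawal
            self._advertise = advertise  # placeholder: send BGP route reachability
            self._last_seen = {}         # (gateway, entity_ip) -> last response time
            self._failed = set()         # paths currently considered failed

        def record_response(self, gateway, entity_ip):
            key = (gateway, entity_ip)
            self._last_seen[key] = time.monotonic()
            if key in self._failed:      # failed path recovered: re-advertise route
                self._failed.discard(key)
                self._advertise(gateway, entity_ip)

        def check_timeouts(self):
            now = time.monotonic()
            for key, last in self._last_seen.items():
                if key not in self._failed and now - last > TIMEOUT:
                    self._failed.add(key)
                    self._withdraw(*key)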
  • FIG. 5 shows a flowchart of a method 500 according to another embodiment of the present disclosure, which may be performed by an apparatus implemented in or as or communicatively coupled to an entity of a cloud network or any other entity having similar functionality.
  • the entity of the cloud network may provide means or modules for accomplishing various parts of the method 500 as well as means or modules for accomplishing other processes in conjunction with other components.
  • the entity of the cloud network may be any suitable entity which is able to place at least one route in the routing function entity.
  • the entity of the cloud network is able to place at least one route in the routing function entity by using BGP.
  • the at least one route may comprise routes regarding different service IP addresses on the entity of the cloud network and UE IP pools served by the entity of the cloud network.
  • the entity of the cloud network may be the network device 200 or 210 of FIG. 2. For some parts which have been described in the above embodiments, the description thereof is omitted here for brevity.
  • the entity of the cloud network may receive a probe message from a routing function entity.
  • the routing path of the probe message passes through a first gateway of the cloud network.
  • the routing function entity may send the probe message to the entity of the cloud network at block 402 of FIG. 4, and then the entity of the cloud network may receive the probe message from the routing function entity for example when there is no path failure of the routing path of the probe message.
  • the probe message may comprise an ICMP message and a BFD message.
  • the probe message may be received in a time interval.
  • the routing function entity may be a BGPaaS entity.
  • the entity of the cloud network may be a virtual network function entity of a core network of a wireless network.
  • the entity of the cloud network may determine a path status between the first gateway of the cloud network and the entity of the cloud network based on whether the probe message is received or not.
  • a path failure is determined to have happened either between the first gateway of the cloud network and the entity of the cloud network or between the first gateway of the cloud network and the routing function entity of the cloud network.
  • the predefined time period may be any suitable time period such as 3 x the above described time interval.
  • when the probe message from the first Internet protocol address on the routing function entity is received by the entity of the cloud network within the predefined time period, it is determined that the path failure has not happened between the first gateway of the cloud network and the entity of the cloud network and between the first gateway of the cloud network and the routing function entity of the cloud network.
  • the entity of the cloud network may send a corresponding probe response message to the routing function entity.
  • a routing path of the corresponding probe response message passes through the first gateway of the cloud network.
  • a routing path of the corresponding probe response message passes through the second gateway of the cloud network rather than the first gateway of the cloud network.
  • the probe messages may pass through different gateways of the cloud network. Since there may be only one default gateway in case of VRRP, the response messages may always pass through the default gateway whose role is VRRP master (for example, the second gateway of the cloud network in this embodiment) .
  • the entity of the cloud network may perform a first action based on the path failure.
  • the first action comprises at least one of avoiding sending traffic to the path that passes through the first gateway of the cloud network, moving the service of the entity of the cloud network to another entity of the cloud network, and rebooting the entity of the cloud network.
  • the entity of the cloud network may determine the failed path is recovered.
  • the routing function entity may continue to send the probe message to the entity of the cloud network in a time interval, wherein the routing path of the probe message passes through the first gateway of the cloud network.
  • the probe message may be received by the entity of the cloud network and the entity of the cloud network may determine the failed path is recovered.
  • the entity of the cloud network may perform a second action based on the recovered path.
  • the second action may comprise canceling the first action.
  • different source Internet protocol addresses are used in probe messages passing through different gateways of the cloud network.
  • bidirectional forwarding detection, BFD for border gateway protocol, BGP, between the routing function entity and a gateway of the cloud network is enabled to detect the path failure between the routing function entity and the gateway of the cloud network.
  • the cloud network has two or more gateways providing redundant next hops for the entity of the cloud network through virtual router redundancy protocol, VRRP.
  • FIGs. 6a, 6b and 7 show examples of the proposed solution according to some embodiments of the present disclosure.
  • the proposed solution may include two components:
  • Prober, which may initiate the probe messages to each terminator.
  • Prober may be implemented in BGPaaS.
  • Terminator, which may reply to the prober once it receives the probe messages. Terminator may be implemented in each vEPC VM/container.
  • the probe message may be an ICMP message or a BFD message.
  • the probe message may be a BFD message in case there is a firewall blocking ICMP packets.
  • the procedure in the prober side may be as below (a non-limiting code sketch follows this list):
  • the prober obtains the list of next hop IP addresses for IP routes placed in BGPaaS by vEPC. These IP addresses may be logical IP addresses on VMs/containers rather than the IP addresses connected to DC-GW routers.
  • the prober obtains the list of BGP neighbors and their next hop IP addresses.
  • the relationship between BGP neighbors and the interface IP addresses of the DC-GW routers can be obtained.
  • the prober sends a probe message to each VM/Container passing through each DC-GW router in a time interval.
  • the destination IP address in probe messages can be the IP address on the vNIC (virtual network interface card) of the VM/Container and the logical IP address in the VM/Container.
  • the prober may use different source IP addresses in probe messages passing through different DC-GW routers.
  • source IP address IP_A may be used in the probe messages passing through DC-GW router A
  • source IP address IP_B may be used in the probe messages passing through DC-GW router B.
  • the prober may use MAC address of the interface connected to BGPaaS on each DC-GW router as the destination MAC address. This MAC address may be obtained by ARP. Therefore the probe messages can pass through different DC-GW routers. For example, the probe messages passing through DC-GW router A may use the MAC address MAC_A as the destination MAC address, wherein MAC_A is the MAC address of the interface connected to BGPaaS on DC-GW router A. The probe messages passing through DC-GW router B may use the MAC address MAC_B as the destination MAC address, wherein MAC_B is the MAC address of the interface connected to BGPaaS on DC-GW router B.
  • a timer with a predefined time period such as 3 x the time interval may be used to measure the status of the path between a DC-GW router and a VM/container. If a probe response message is received by the prober within the predefined time period, the timer may refresh. If no probe response message is received by the prober within the predefined time period, the timer may timeout and notify the BGPaaS to send an IP route withdrawal message to the DC-GW router to withdraw at least one IP route to the VM/Container. If the failed path recovers and the probe response message is received by the prober again, the timer may restart and notify BGPaaS to send an IP route reachability message to the DC-GW router to advertise the IP route to the VM/container.
  • the probe messages may pass through the same path as above.
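  • for illustration purposes only, the following non-limiting sketch (in Python) outlines the prober-side procedure above. The packet input/output and BGPaaS calls (send_probe, poll_replies, bgpaas_withdraw, bgpaas_advertise), as well as the concrete interval values, are hypothetical placeholders rather than part of any disclosed interface; the sketch only illustrates the per-path timer with a predefined time period of 3 x the probe interval and the resulting route withdrawal/re-advertisement.

```python
# Non-limiting sketch of the prober-side timer logic (Python standard library only).
# send_probe, poll_replies, bgpaas_withdraw and bgpaas_advertise are injected callables
# and, like the field values, are hypothetical placeholders for this illustration.
import time
from dataclasses import dataclass, field

PROBE_INTERVAL = 1.0           # the "time interval" between probe messages (assumed value)
TIMEOUT = 3 * PROBE_INTERVAL   # the predefined time period, 3 x the time interval

@dataclass
class Path:
    vm_ip: str      # logical IP on the vEPC VM/container (next hop of routes placed in BGPaaS)
    gw_name: str    # DC-GW router identity, e.g. "DC-GW-A"
    src_ip: str     # per-gateway source IP, e.g. IP_A for probes via DC-GW router A
    dst_mac: str    # MAC of the DC-GW interface facing BGPaaS, learned via ARP, e.g. MAC_A
    last_reply: float = field(default_factory=time.monotonic)
    withdrawn: bool = False

def probe_loop(paths, send_probe, poll_replies, bgpaas_withdraw, bgpaas_advertise):
    while True:
        for p in paths:
            # Steer the probe through a specific DC-GW router via source IP + destination MAC.
            send_probe(src_ip=p.src_ip, dst_ip=p.vm_ip, dst_mac=p.dst_mac)
        time.sleep(PROBE_INTERVAL)
        for src_ip, vm_ip in poll_replies():     # (source IP, VM IP) of each response received
            for p in paths:
                if p.src_ip == src_ip and p.vm_ip == vm_ip:
                    p.last_reply = time.monotonic()
        now = time.monotonic()
        for p in paths:
            expired = (now - p.last_reply) > TIMEOUT
            if expired and not p.withdrawn:
                # No response within 3 x the interval: withdraw the IP routes to this
                # VM/container from this DC-GW router only.
                bgpaas_withdraw(gateway=p.gw_name, next_hop=p.vm_ip)
                p.withdrawn = True
            elif not expired and p.withdrawn:
                # Path recovered: advertise the IP routes to this VM/container again.
                bgpaas_advertise(gateway=p.gw_name, next_hop=p.vm_ip)
                p.withdrawn = False
```

  • in this sketch, one Path object corresponds to one (VM/container, DC-GW router) pair, so that a failure on one path only withdraws routes on the corresponding DC-GW router while the other DC-GW router keeps forwarding traffic.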
  • the procedure in the terminator side may be as below (a non-limiting code sketch follows this list):
  • the terminator may provision different static routes to different IP addresses on BGPaaS passing through different DC-GW routers in case multiple gateways are used. For example, the route to IP_A is provisioned with IP_GW_A as next hop and the route to IP_B is provisioned with IP_GW_B as next hop.
  • the terminator may send response messages to BGPaaS.
  • the response messages with destination IP address of IP_A may pass through DC-GW router A and the response messages with destination IP address of IP_B may pass through DC-GW router B.
  • After the terminator receives the first probe message from BGPaaS, it may start a timer with the same predefined time period for each source IP address of the probe messages. If a probe message is received by the terminator within the predefined time period, the timer may refresh. If no probe message is received by the terminator within the predefined time period, the timer may timeout and notify the vEPC VM/container to take actions. The actions may comprise at least one of avoiding sending traffic to the path that passes through the DC-GW router, rebooting the VM/Container, moving the service to another vEPC VM/container and so on. If the failed path recovers and the probe message is received by the terminator again, the timer may restart and notify the vEPC VM/container to cancel the actions which have been taken.
  • the response messages may always pass through the DC-GW router whose role is VRRP master (e.g., the default gateway IP_GW_VIP) .
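  • for illustration purposes only, the following non-limiting sketch (in Python) outlines the terminator-side procedure above. The static routes are provisioned with the standard Linux ip route command; the example IP addresses and the callables (wait_for_probe, reply_to, take_action, cancel_action) are hypothetical placeholders assumed for the sketch.

```python
# Non-limiting sketch of the terminator-side logic (Python standard library only).
# The callables wait_for_probe, reply_to, take_action and cancel_action, and the example
# addresses, are hypothetical placeholders for this illustration.
import subprocess
import time

PROBE_INTERVAL = 1.0
TIMEOUT = 3 * PROBE_INTERVAL   # same predefined time period as on the prober side

def provision_static_routes(routes):
    # routes maps a prober source IP to the DC-GW next hop that should carry the reply,
    # e.g. {"198.51.100.1": "192.0.2.1",   # IP_A reached via IP_GW_A
    #       "198.51.100.2": "192.0.2.2"}   # IP_B reached via IP_GW_B
    # (documentation addresses only). "ip route replace" is the standard iproute2 command.
    for prober_ip, gw_ip in routes.items():
        subprocess.run(["ip", "route", "replace", prober_ip, "via", gw_ip], check=True)

def terminator_loop(wait_for_probe, reply_to, take_action, cancel_action):
    last_probe = {}     # last time a probe was seen, per prober source IP (per DC-GW path)
    degraded = set()    # paths for which the first action has been taken
    while True:
        src_ip = wait_for_probe(timeout=PROBE_INTERVAL)  # returns prober source IP, or None
        now = time.monotonic()
        if src_ip is not None:
            reply_to(src_ip)            # the response follows the provisioned static route
            last_probe[src_ip] = now
        for ip, seen in last_probe.items():
            if now - seen > TIMEOUT and ip not in degraded:
                take_action(ip)         # e.g. stop sending traffic via that DC-GW router
                degraded.add(ip)
            elif now - seen <= TIMEOUT and ip in degraded:
                cancel_action(ip)       # the failed path has recovered: cancel the action
                degraded.discard(ip)
```

  • because the responses follow the provisioned static routes, a response to IP_A leaves via IP_GW_A and a response to IP_B leaves via IP_GW_B, which keeps each reply on the same DC-GW router path as the probe it answers (or on the VRRP master in the single default gateway case).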
  • a new mechanism is proposed to detect a path status between the gateway (such as DC-GW router) of the cloud network and the entity (such as VM/container) of the cloud network, and this new mechanism does not need any additional feature on the gateway (such as DC-GW router) of the cloud network.
  • a new mechanism is proposed to track the availability of advertised routes on each BGP neighbor for the routing function entity (such as BGPaaS) in the cloud network. If an advertised route is no longer valid on a neighbor of the routing function entity, the routing function entity (such as BGPaaS) can withdraw the route from the neighbor.
  • the probe messages and response messages may pass through the gateway (such as DC-GW router) of the cloud network, therefore it can track the availability of advertised routes on the gateway (such as DC-GW router) of the cloud network.
  • a new mechanism is proposed to trigger the routing function entity (such as BGPaaS) to send a network reachability or withdrawal message to a specific gateway (such as DC-GW router) of the cloud network, therefore route/service fast convergence can be implemented.
  • a route/service fast convergence method proposed for the cloud network (such as vEPC) does not need to depend on any additional feature on the gateway (such as DC-GW router) of the cloud network and is fully compatible with the gateway (such as DC-GW router) of the cloud network.
  • the proposed method can be applied with no additional function required on DC-GW routers. It does not need any configuration/function used for path failure detection on DC-GW routers.
  • the proposed method supports incremental deployment and there is no impact on the deployed cloud infrastructure.
  • the advantages of the proposed method comprise low cost, being an add-on feature, and no extra capability required on any hardware devices.
  • the proposed method can be applied on massive and scaling deployment scenarios.
  • the proposed method can detect the path failure through messages internal to the entity (such as a VNF (virtual network function) ) of the cloud network, and trigger routing protocol convergence as soon as possible.
  • the proposed method can be applied in a large-scale cloud network and does not consume any additional resource (such as computing, storage and hardware resource) on the gateway (such as DC-GW router) of the cloud network.
  • the proposed method can be used in any size of the cloud network.
  • FIG. 8 is a block diagram showing an apparatus suitable for practicing some embodiments of the disclosure.
  • any one of the routing function entity and the entity of the cloud network described above may be implemented as or through the apparatus 820.
  • the apparatus 820 comprises at least one processor 821, such as a DP, and at least one MEM 822 coupled to the processor 821.
  • the apparatus 820 may further comprise a transmitter TX and receiver RX 823 coupled to the processor 821.
  • the MEM 822 stores a PROG 824.
  • the PROG 824 may include instructions that, when executed on the associated processor 821, enable the apparatus 820 to operate in accordance with the embodiments of the present disclosure.
  • a combination of the at least one processor 821 and the at least one MEM 822 may form processing means 825 adapted to implement various embodiments of the present disclosure.
  • Various embodiments of the present disclosure may be implemented by computer program executable by one or more of the processor 821, software, firmware, hardware or in a combination thereof.
  • the MEM 822 may be of any type suitable to the local technical environment and may be implemented using any suitable data storage technology, such as semiconductor based memory devices, magnetic memory devices and systems, optical memory devices and systems, fixed memories and removable memories, as non-limiting examples.
  • the processor 821 may be of any type suitable to the local technical environment, and may include one or more of general purpose computers, special purpose computers, microprocessors, digital signal processors DSPs and processors based on multicore processor architecture, as non-limiting examples.
  • the memory 822 contains instructions executable by the processor 821, whereby the routing function entity operates according to the method as described in reference to FIG. 4.
  • the memory 822 contains instructions executable by the processor 821, whereby the entity of the cloud network operates according to the method as described in reference to FIG. 5.
  • FIG. 9 is a block diagram showing a routing function entity according to an embodiment of the disclosure.
  • the routing function entity 900 comprises a sending module 902 and a determining module 904.
  • the sending module 902 may be configured to send a probe message to an entity of a cloud network, wherein a routing path of the probe message passes through a first gateway of the cloud network.
  • the determining module 904 may be configured to determine a path status between the first gateway of the cloud network and the entity of the cloud network based on whether a corresponding probe response message is received or not.
  • FIG. 10 is a block diagram showing an entity of a cloud network according to an embodiment of the disclosure.
  • the entity 1000 of the cloud network comprises a receiving module 1002 and a determining module 1004.
  • the receiving module 1002 may be configured to receive a probe message from a routing function entity, wherein a routing path of the probe message passes through a first gateway of the cloud network.
  • the determining module 1004 may be configured to determine a path status between the first gateway of the cloud network and the entity of the cloud network based on whether the probe message is received or not.
  • a computer program product being tangibly stored on a computer readable storage medium and including instructions which, when executed on at least one processor, cause the at least one processor to carry out any of the methods as described above.
  • a computer-readable storage medium storing instructions which when executed by at least one processor, cause the at least one processor to carry out any of the methods as described above.
  • the present disclosure may also provide a carrier containing the computer program as mentioned above, wherein the carrier is one of an electronic signal, optical signal, radio signal, or computer readable storage medium.
  • the computer readable storage medium can be, for example, an optical compact disk or an electronic memory device like a RAM (random access memory) , a ROM (read only memory) , Flash memory, magnetic tape, CD-ROM, DVD, Blu-ray disc and the like.
  • an apparatus implementing one or more functions of a corresponding apparatus described with an embodiment comprises not only prior art means, but also means for implementing the one or more functions of the corresponding apparatus described with the embodiment and it may comprise separate means for each separate function or means that may be configured to perform two or more functions.
  • these techniques may be implemented in hardware (one or more apparatuses) , firmware (one or more apparatuses) , software (one or more modules) , or combinations thereof.
  • firmware or software implementation may be made through modules (e.g., procedures, functions, and so on) that perform the functions described herein.

Abstract

A method for path status detection performed by a routing function entity (900) comprises sending a probe message to an entity (1000) of a cloud network, wherein a routing path of the probe message passes through a first gateway of the cloud network (402). The method further comprises determining a path status between the first gateway of the cloud network and the entity (1000) of the cloud network based on whether a corresponding probe response message is received or not (404). The method may further comprise sending messages for advertising or withdrawing network reachability to the first gateway according to path failure detection or recovery.

Description

METHOD AND APPARATUS FOR PATH STATUS DETECTION
TECHNICAL FIELD
The non-limiting and exemplary embodiments of the present disclosure generally relate to the technical field of communications, and specifically to methods and apparatuses for path status detection.
BACKGROUND
This section introduces aspects that may facilitate a better understanding of the disclosure. Accordingly, the statements of this section are to be read in this light and are not to be understood as admissions about what is in the prior art or what is not in the prior art.
Path status detection technology can be used to detect faults between two communication devices connected by a link. Some path status detection technologies can provide rapid detection of faults to enable efficient network path convergence. For example, Bi-directional Forwarding Detection (BFD) may be configured on each end of the link that is to be monitored. BFD can detect unidirectional link failures on that link, and notify the link peer of the failure. This allows both ends of the link to discontinue using the link even though the fault can only be detected by one of the peers. For example, a device A may be able to receive packets from a device B over a fiber link. But the device B cannot receive packets from the device A due to a fault on that strand of the fiber. BFD on the device B can rapidly detect this error and signal to the device A that there is a fault on the link.
SUMMARY
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Path status detection technology can be used in various networks. For example, in a cloud network, Internet protocol (IP) routes reachability/withdrawal for virtual machines (VMs) (such as virtual Evolved Packet Core (vEPC) products) are commonly advertised to a gateway (such as data center gateway (DC-GW) router) of the cloud network by BGPaaS (BGP (Border Gateway Protocol) as a service) through BGP. The BGPaaS allows a guest virtual machine or container based application to place routes in its own virtual routing and forwarding (VRF) instance using BGP. Both BGPaaS and virtual machine (such as vEPC VM/container) can be directly connected to the gateway of the cloud network in IP layer.
FIG. 1a schematically shows a topology for BGPaaS interworking with vEPC. As shown in FIG. 1a, there are at least two DC-GW routers providing redundant next hops for vEPC VMs/containers through VRRP (virtual router redundancy protocol) or multiple gateways. BGPaaS sets up BGP neighbors with each DC-GW router, and advertises IP routes reachability/withdrawal to DC-GW routers for vEPC VMs/containers. BFD for BGP between BGPaaS and DC-GW routers is enabled in order to detect path failure between BGPaaS and DC-GW routers immediately. The BGPaaS interworking with vEPC with route/service fast convergence may be mandatory. Once a failure occurs on either a VM/Container or the link between a VM/Container and the DC-GW routers, it needs to trigger the DC-GW routers to converge the routes and forward the packets to the other VMs/containers. This is a typical scenario in vEPC networks. In order to guarantee route/service fast convergence, some existing technologies may be used in a small size deployment. For example, static IP routes with BFD may be used between vEPC and DC-GW routers. An IGP (Interior Gateway Protocol) routing protocol (such as OSPF (Open Shortest Path First) ) with a small interval may be used between vEPC and DC-GW routers.
However, the existing technologies may have some problems. For example, both static IP routes with BFD and an IGP routing protocol with a small interval are limited to small size deployments due to the numbers of BFD sessions and IGP sessions supported on DC-GW routers. Static IP routes with BFD require manual configuration on DC-GW routers, therefore they are not suitable for automated orchestration. An IGP routing protocol with a small interval introduces a lot of flooding packets, therefore it is not suitable for most cloud networks. For example, in a scenario with above 500 APNs (Access Point Names) in the vEPC network, it requires DC-GW routers to support at least 500 x (the number of VMs/containers) BFD sessions or IGP sessions, and the existing solutions may not be applied in this scenario.
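As a purely illustrative calculation of this scaling limit (the VM/container count below is an assumed figure, not taken from the disclosure), the session count grows multiplicatively:

```python
# Illustrative only: per-router session count implied by the per-route BFD/IGP approaches.
apns = 500              # APNs in the vEPC network, as in the scenario above
vms_or_containers = 10  # assumed number of vEPC VMs/containers (hypothetical figure)
sessions_per_dcgw = apns * vms_or_containers
print(sessions_per_dcgw)  # 5000 BFD or IGP sessions would be needed on each DC-GW router
```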
FIG. 1b shows some problems of the existing technologies according to an embodiment of the disclosure. As shown in FIG. 1b, once the path between a DC-GW and a VM/Container fails, BGPaaS is not aware of the path failure, so it will not withdraw the routes; the traffic will not be delivered to the other paths or the other VMs/containers and will be lost.
FIG. 1c shows some problems of the existing technologies according to another embodiment of the disclosure. As shown in FIG. 1c, the issues for BFD for static routes may include: it requires provisioning BFD and static routes on DC-GW routers for each VM/container; VMs/containers auto scaling (in and out) cannot be supported due to the above; DC-GW routers do not support so many BFD sessions (especially in case of more than 500 APNs deployed) , which exceeds DC-GW hardware capacity; and it cannot be applied in most cloud networks.
FIG. 1d shows some problems of the existing technologies according to another embodiment of the disclosure. As shown in FIG. 1d, the issues for IGP routing such as OSPF may include: OSPF will introduce too many multicast packets in the cloud network since OSPF hello messages are multicast traffic; DC-GW routers do not support so many OSPF sessions/neighbors (especially in case of more than 500 APNs deployed) ; OSPF Hello messages consume much CPU (Central Processing Unit) on DC-GW routers; and it cannot be applied in most cloud networks.
To overcome or mitigate at least one of the above mentioned problems or other problems, embodiments of the present disclosure propose an improved path status detection solution.
In an embodiment, it proposes an active service/path failure detection method initiated by a routing function entity such as BGPaaS, so that route/service fast convergence can be achieved through service/path failure monitoring of the entities of the cloud network (such as vEPC VMs/containers) by the routing function entity such as BGPaaS.
In an embodiment, the routing function entity such as BGPaaS sends probe messages to the entities (such as vEPC VMs/containers) of the cloud network and these messages pass through different gateways (such as DC-GW routers) of the cloud network. Once the entities (such as vEPC VMs/containers) of the cloud network receive the probe messages, they send response messages to the routing function entity such as BGPaaS through their respective gateway. The respective gateway could be identified by a respective IP address assigned on each gateway (such as DC-GW router) of the cloud network, or by a single virtual IP address negotiated by VRRP between the gateways of the cloud network. So in case no path failure happens between BGPaaS and the gateway of the cloud network, the routing function entity such as BGPaaS can identify the path failure between the gateway (such as DC-GW router) of the cloud network and the entity (such as vEPC VM/container) of the cloud network, and can accordingly withdraw IP routes to the entity of the cloud network which has a path failure towards the specific gateway of the cloud network. Thus, route/service fast convergence can be achieved.
In an embodiment, in case a path failure happens between the routing function entity such as BGPaaS and the gateway of the cloud network, the route/service fast convergence can be achieved through BFD for BGP, which is a typical configuration for BGP.
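By way of a non-limiting illustration, the decision logic of the two preceding paragraphs can be summarized in the following Python sketch; the function name, arguments and return strings are illustrative only and do not correspond to any disclosed interface.

```python
# Sketch of the failure localization described above (illustrative names only).
def locate_failure(response_within_timeout: bool, bfd_to_gateway_up: bool) -> str:
    """response_within_timeout: probe response received within the predefined time period.
    bfd_to_gateway_up: state of BFD for BGP between BGPaaS and the DC-GW router."""
    if response_within_timeout:
        return "no failure detected on the gateway-to-entity path"
    if bfd_to_gateway_up:
        # The BGPaaS-to-gateway leg is healthy, so the missing response points at the
        # gateway-to-VM/container leg: withdraw the IP routes to that entity on that gateway.
        return "failure between the gateway and the entity: withdraw routes"
    # A failure on the BGPaaS-to-gateway leg is already handled by BFD for BGP.
    return "failure between the routing function entity and the gateway: handled by BFD for BGP"
```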
In a first aspect of the disclosure, there is provided a method performed by a routing function entity. The method comprises sending a probe message to an entity of a cloud network, wherein a routing path of the probe message passes through a first gateway of the cloud network. The method further comprises determining a path status between the first gateway of the cloud  network and the entity of the cloud network based on whether a corresponding probe response message is received or not.
In an embodiment, the determining step may comprise when the corresponding probe response message is not received by the routing function entity within the predefined time period and there is no path failure between the routing function entity and the first gateway of the cloud network, determining that a path failure has happened between the first gateway of the cloud network and the entity of the cloud network; and when the corresponding probe response message is received by the routing function entity within the predefined time period, determining that the path failure has not happened between the first gateway of the cloud network and the entity of the cloud network.
In an embodiment, a routing path of the corresponding probe response message may pass through the first gateway of the cloud network.
In an embodiment, a routing path of the corresponding probe response message passes through a second gateway of the cloud network rather than the first gateway of the cloud network, the determining step may comprise when the corresponding probe response message is not received by the routing function entity within the predefined time period and there is no path failure between the routing function entity and the first gateway of the cloud network and between the routing function entity and the second gateway of the cloud network, determining that the path failure has happened between the first gateway of the cloud network and the entity of the cloud network.
In an embodiment, when the path failure is determined to have happened between the first gateway of the cloud network and the entity of the cloud network, the method may further comprise sending a route withdrawal message to the first gateway of the cloud network to withdraw at least one route to the entity of the cloud network.
In an embodiment, the probe message may comprise an Internet control message protocol, ICMP, message and a bidirectional forwarding detection, BFD, message.
In an embodiment, the probe message may be sent in a time interval.
In an embodiment, the routing function entity may be a border gateway protocol, BGP, as a service, BGPaaS, entity.
In an embodiment, the entity of the cloud network may be a virtual network function entity of a core network of a wireless network.
In an embodiment, the entity of the cloud network is able to place at least one route in the routing function entity by using border gateway protocol, BGP.
In an embodiment, the at least one policy-based route may comprise routes regarding different source Internet protocol addresses on the routing function entity passing through different gateways of the cloud network.
In an embodiment, different source Internet protocol addresses may be used in probe messages passing through different gateways of the cloud network.
In an embodiment, the method may further comprise when the corresponding probe response message is received by the routing function entity again after the determination of the path failure, determining the failed path is recovered; and sending a route reachability message to the first gateway of the cloud network to advertise at least one route to the entity of the cloud network.
In an embodiment, bidirectional forwarding detection, BFD for border gateway protocol, BGP, between the routing function entity and a gateway of the cloud network may be enabled to detect the path failure between the routing function entity and the gateway of the cloud network.
In an embodiment, the cloud network may have two or more gateways providing redundant next hops for the entity of the cloud network through virtual router redundancy protocol, VRRP.
In an embodiment, when there are two or more gateways in the cloud network, the sending step may comprise sending two or more probe messages to the entity of the cloud network, wherein the routing paths of the two or more probe messages respectively pass through different gateways of the cloud network.
In a second aspect of the disclosure, there is provided a method performed by an entity of a cloud network. The method comprises receiving a probe message from a routing function entity, wherein a routing path of the probe message passes through a first gateway of the cloud network. The method further comprises determining a path status between the first gateway of the cloud network and the entity of the cloud network based on whether the probe message is received or not.
In an embodiment of the second aspect of the disclosure, the determining step may comprise when the probe message from a first Internet protocol address on the routing function entity is not received by the entity of the cloud network within a predefined time period, determining that a path failure has happened either between the first gateway of the cloud network and the entity of the cloud network or between the first gateway of the cloud network and the routing function entity of the cloud network; and when the probe message from the first Internet protocol address on the routing function entity is received by the entity of the cloud network within the predefined time period, determining that the path failure has not happened between the first gateway of the cloud network and the entity of the cloud network and between the first gateway of the cloud network and the routing function entity of the cloud network.
In an embodiment of the second aspect of the disclosure, the method may further comprise in response to a reception of the probe message, sending a corresponding probe response message to the routing function entity.
In an embodiment of the second aspect of the disclosure, a routing path of the corresponding probe response message passes through the first gateway of the cloud network.
In an embodiment of the second aspect of the disclosure, a routing path of the corresponding probe response message passes through the second gateway of the cloud network rather than the first gateway of the cloud network.
In an embodiment of the second aspect of the disclosure, when the path failure is determined to have happened between the first gateway of the cloud network and the entity of the cloud network, the method may further comprise performing a first action based on the path failure.
In an embodiment of the second aspect of the disclosure, the first action may comprise at least one of avoiding sending traffic to the path that passes through the first gateway of the cloud network, moving the service of the entity of the cloud network to another entity of the cloud network and rebooting the entity of the cloud network.
In an embodiment of the second aspect of the disclosure, the method may further comprise when the probe message is received by the entity of the cloud network within a predefined time period again after the determination of the path failure, determining the failed path is recovered; and performing a second action based on the recovered path.
In an embodiment of the second aspect of the disclosure, the second action may comprise canceling the first action.
In an embodiment of the second aspect of the disclosure, the probe message may be received in a time interval.
In a third aspect of the disclosure, there is provided a routing function entity. The routing function entity comprises a processor; and a memory coupled to the processor, said memory containing instructions executable by said processor, whereby said routing function entity is operative to send a probe message to an entity of a cloud network, wherein a routing path of the probe message passes through a first gateway of the cloud network. Said routing function entity is further operative to determine a path status between the first gateway of the cloud network and the entity of the cloud network based on whether a corresponding probe response message is received or not.
In a fourth aspect of the disclosure, there is provided an entity of a cloud network. The entity of the cloud network comprises a processor; and a memory coupled to the processor, said memory containing instructions executable by said processor, whereby said entity of the cloud network is operative to receive a probe message from a routing function entity, wherein a routing path of the probe message passes through a first gateway of the cloud network. Said entity of the cloud network is further operative to determine a path status between the first gateway of the cloud network and the entity of the cloud network based on whether the probe message is received or not.
In a fifth aspect of the disclosure, there is provided a routing function entity. The routing function entity comprises a sending module and a determining module. The sending module may be configured to send a probe message to an entity of a cloud network, wherein a routing path of the probe message passes through a first gateway of the cloud network. The determining module may be configured to determine a path status between the first gateway of the cloud network and the entity of the cloud network based on whether a corresponding probe response message is received or not.
In a sixth aspect of the disclosure, there is provided an entity of a cloud network. The entity of the cloud network comprises a receiving module and a determining module. The receiving module may be configured to receive a probe message from a routing function entity, wherein a routing path of the probe message passes through a first gateway of the cloud network. The determining module may be configured to determine a path status between the first gateway of the cloud network and the entity of the cloud network based on whether the probe message is received or not.
In a seventh aspect of the disclosure, there is provided a computer-readable storage medium storing instructions which when executed by at least one processor, cause the at least one processor to perform any step of the method according to the first aspect of the present disclosure.
In an eighth aspect of the disclosure, there is provided a computer-readable storage medium storing instructions which when executed by at least one processor, cause the at least one processor to perform any step of the method according to the second aspect of the present disclosure.
In a ninth aspect of the disclosure, there is provided a computer program product comprising instructions which when executed by at least one processor, cause the at least one processor to perform any step of the method according to the first aspect of the present disclosure.
In a tenth aspect of the disclosure, there is provided a computer program product comprising instructions which when executed by at least one processor, cause the at least one processor to perform any step of the method according to the second aspect of the present disclosure.
Embodiments herein afford many advantages, of which a non-exhaustive list of examples follows. In some embodiments herein, a route/service fast convergence method proposed for the cloud network (such as vEPC) does not need to depend on any additional feature on the gateway (such as DC-GW router) of the cloud network and is fully compatible with the gateway (such as DC-GW router) of the cloud network. For example, the proposed method can be applied with no additional function required on DC-GW routers. It does not need any configuration/function used for path failure detection on DC-GW routers. In some embodiments herein, the proposed method supports incremental deployment and there is no impact on the deployed cloud infrastructure. In some embodiments herein, the advantages of the proposed method comprise low cost, being an add-on feature, and no extra capability required on any hardware devices. In some embodiments herein, the proposed method can be applied on massive and scaling deployment scenarios. In some embodiments herein, the proposed method can detect the path failure through messages internal to the entity (such as a VNF (virtual network function) ) of the cloud network, and trigger routing protocol convergence as soon as possible. In some embodiments herein, the proposed method can be applied in a large-scale cloud network and does not consume any additional resource (such as computing, storage and hardware resource) on the gateway (such as DC-GW router) of the cloud network. In some embodiments herein, the proposed method can be used in any size of the cloud network. The embodiments herein are not limited to the features and advantages mentioned above. A person skilled in the art will recognize additional features and advantages upon reading the following detailed description.
BRIEF DESCRIPTION OF THE DRAWINGS
The above and other aspects, features, and benefits of various embodiments of the present disclosure will become more fully apparent, by way of example, from the following detailed description with reference to the accompanying drawings, in which like reference numerals or letters are used to designate like or equivalent elements. The drawings are illustrated for facilitating better understanding of the embodiments of the disclosure and not necessarily drawn to scale, in which:
FIG. 1a schematically shows a topology for BGPaaS interworking with vEPC;
FIG. 1b shows some problems of the existing technologies according to an embodiment of the disclosure;
FIG. 1c shows some problems of the existing technologies according to another embodiment of the disclosure;
FIG. 1d shows some problems of the existing technologies according to another embodiment of the disclosure;
FIG. 2 illustrates one implementation example for particular embodiments of the present disclosure;
FIG. 3 illustrates two specific examples of how a network device may be implemented in certain embodiments of the present disclosure;
FIG. 4 shows a flowchart of a method according to an embodiment of the present disclosure;
FIG. 5 shows a flowchart of a method according to another embodiment of the present disclosure;
FIG. 6a shows an example of a failure detecting solution applied in multiple gateways case according to an embodiment of the present disclosure;
FIG. 6b shows an example of a failure recovery solution applied in multiple gateways case according to an embodiment of the present disclosure;
FIG. 7 shows an example of the proposed solution applied in VRRP case according to another embodiment of the present disclosure;
FIG. 8 is a block diagram showing an apparatus suitable for practicing some embodiments of the disclosure;
FIG. 9 is a block diagram showing a routing function entity according to an embodiment of the disclosure; and
FIG. 10 is a block diagram showing an entity of a cloud network according to an embodiment of the disclosure.
DETAILED DESCRIPTION
The embodiments of the present disclosure are described in detail with reference to the accompanying drawings. It should be understood that these embodiments are discussed only for the purpose of enabling those skilled persons in the art to better understand and thus implement the present disclosure, rather than suggesting any limitations on the scope of the present disclosure. Reference throughout this specification to features, advantages, or similar language does not imply that all of the features and advantages that may be realized with the present disclosure should be or are in any single embodiment of the disclosure. Rather, language referring to the features and advantages is understood to mean that a specific feature, advantage, or characteristic described in connection with an embodiment is included in at least one  embodiment of the present disclosure. Furthermore, the described features, advantages, and characteristics of the disclosure may be combined in any suitable manner in one or more embodiments. One skilled in the relevant art will recognize that the disclosure may be practiced without one or more of the specific features or advantages of a particular embodiment. In other instances, additional features and advantages may be recognized in certain embodiments that may not be present in all embodiments of the disclosure.
As used herein, the term “network” or “communication network” refers to a network following any suitable (wireless or wired) communication standards. For example, the wireless communication standards may comprise new radio (NR) , long term evolution (LTE) , LTE-Advanced, wideband code division multiple access (WCDMA) , high-speed packet access (HSPA) , Code Division Multiple Access (CDMA) , Time Division Multiple Address (TDMA) , Frequency Division Multiple Access (FDMA) , Orthogonal Frequency-Division Multiple Access (OFDMA) , Single carrier frequency division multiple access (SC-FDMA) and other wireless networks. A CDMA network may implement a radio technology such as Universal Terrestrial Radio Access (UTRA) , etc. UTRA includes WCDMA and other variants of CDMA. A TDMA network may implement a radio technology such as Global System for Mobile Communications (GSM) . An OFDMA network may implement a radio technology such as Evolved UTRA (E-UTRA) , Ultra Mobile Broadband (UMB) , IEEE 802.11 (Wi-Fi) , IEEE 802.16 (WiMAX) , IEEE 802.20, Flash-OFDMA, Ad-hoc network, wireless sensor network, etc. In the following description, the terms “network” and “system” can be used interchangeably. Furthermore, the communications between two devices in the network may be performed according to any suitable communication protocols, including, but not limited to, the wireless communication protocols as defined by a standard organization such as 3rd generation partnership project (3GPP) or the wired communication protocols. For example, the wireless communication protocols may comprise the first generation (1G) , 2G, 3G, 4G, 4.5G, 5G communication protocols, and/or any other protocols either currently known or to be developed in the future.
References in the specification to “one embodiment, ” “an embodiment, ” “an example embodiment, ” and the like indicate that the embodiment described may include a particular feature, structure, or characteristic, but it is not necessary that every embodiment includes the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to affect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
It shall be understood that although the terms “first” and “second” etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element could be termed a second element, and similarly, a second element could be termed a first element, without departing from the scope of example embodiments. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed terms.
As used herein, the phrase “at least one of A and B” should be understood to mean “only A, only B, or both A and B. ” The phrase “A and/or B” should be understood to mean “only A, only B, or both A and B. ”
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments. As used herein, the singular forms “a” , “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” , “comprising” , “has” , “having” , “includes” and/or “including” , when used herein, specify the presence of stated features, elements, and/or components, etc., but do not preclude the presence or addition of one or more other features, elements, components and/or combinations thereof.
It is noted that these terms as used in this document are used only for ease of description and differentiation among nodes, devices or networks etc. With the development of the technology, other terms with the similar/same meanings may also be used.
In the following description and claims, unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skills in the art to which this disclosure belongs.
Although the subject matter described herein may be implemented in any appropriate type of system using any suitable components, the embodiments disclosed herein are described in relation to a communication system complied with the exemplary system architecture illustrated in FIGs. 1a, 1b, 1c, 1d and 2. For example, the subject matter described herein may be implemented in a cloud network, a data center network, a cloud RAN (radio access network) , etc. For simplicity, the system architectures of FIGs. 1a, 1b, 1c, 1d and 2 only depict some exemplary elements. In practice, a communication system may further include any additional elements suitable to support communication between terminal devices or between a wireless device and another communication device, such as a landline telephone, a service provider, or any other network node or terminal device. The communication system may provide communication and various types of services to one or more terminal devices to facilitate the terminal devices’ access to and/or use of the services provided by, or via, the communication system.
FIG. 2 illustrates one implementation example for particular embodiments of the solution described herein. As shown in FIG. 2, a network device (ND) 200 may, in some embodiments, be an electronic device that can be communicatively connected to other electronic devices on the network (e.g., other network devices, user equipment devices (UEs) , radio base stations, etc. ) . In certain embodiments, the network device 200 may include radio access features that provide wireless radio network access to other electronic devices (for example a “radio access network device” may refer to such a network device) such as user equipment devices (UEs) . For example, the network device 200 may be a base station, such as eNodeB in Long Term Evolution (LTE) , NodeB in Wideband Code Division Multiple Access (WCDMA) or other types of base stations, as well as a Radio Network Controller (RNC) , a Base Station Controller (BSC) , or other types of control nodes. In certain embodiments, the network device 200 may include routing function feature that allows a guest virtual machine (VM) or a container based application to place routes in its own virtual routing and forwarding (VRF) instance using BGP. For example, the network device 200 may be BGPaaS. As depicted in FIG. 2, the example network device 200 comprises processor 201, memory 202, interface 203, and antenna 204.
Processor 201 may be a microprocessor, controller, microcontroller, central processing unit, digital signal processor, application specific integrated circuit, field programmable gate array, any other type of electronic circuitry, or any combination of one or more of the preceding. The processor 201 may comprise one or more processor cores. In particular embodiments, some or all of the functionality described herein as being provided by the network device 200 may be implemented by processor 201 executing software instructions, either alone or in conjunction with other components, such as memory 202.
Memory 202 may store code (which is composed of software instructions and which is sometimes referred to as computer program code or a computer program) and/or data using non-transitory machine-readable (e.g., computer-readable) media, such as machine-readable storage media (e.g., magnetic disks, optical disks, solid state drives, read only memory (ROM) , flash memory devices, phase change memory) and machine-readable transmission media (e.g., electrical, optical, radio, acoustical or other form of propagated signals –such as carrier waves, infrared signals) . For instance, memory 202 may comprise non-volatile memory containing code to be executed by processor 201. Where memory 202 is non-volatile, the code and/or data stored therein can persist even when the network device is turned off (when power is removed) . In some instances, while the network device 200 is turned on that part of the code that is to be executed by the processor (s) 201 may be copied from non-volatile memory into volatile memory (e.g., dynamic random access memory (DRAM) , static random access memory (SRAM) ) of the network device 200.
Interface 203 may be used in the wired and/or wireless communication of signaling and/or data to or from the network device 200. For example, interface 203 may perform any formatting, coding, or translating to allow the network device 200 to send and receive data whether over a wired and/or a wireless connection. In some embodiments, interface 203 may comprise radio circuitry capable of receiving data from other devices in the network over a wireless connection and/or sending data out to other devices via a wireless connection. This radio circuitry may include transmitter (s) , receiver (s) , and/or transceiver (s) suitable for radiofrequency communication. The radio circuitry may convert digital data into a radio signal having the appropriate parameters (e.g., frequency, timing, channel, bandwidth, etc. ) . The radio signal may then be transmitted via antennas 204 to the appropriate recipient (s) . In some embodiments, interface 203 may comprise network interface controller (s) (NICs) , also known as a network interface card, network adapter, local area network (LAN) adapter or physical network interface. The NIC (s) may facilitate in connecting the network device 200 to other devices allowing them to communicate via wire through plugging in a cable to a physical port connected to a NIC. As explained above, in particular embodiments, processor 201 may represent part of interface 203, and some or all of the functionality described as being provided by interface 203 may be provided more specifically by processor 201.
With reference to FIG. 2, a network device (ND) 210 may, in some embodiments, be an electronic device that can be communicatively connected to other electronic devices on the network (e.g., other network devices, etc. ) . In certain embodiments, the network device 210 may include core network features that provide various core network functions (for example a “core network device"may refer to such a network device) . For example, the network device 210 may be a core network device in EPC or vEPC, a network function in 5GC (5G core network) , etc. In certain embodiments, the network device 210 may include routing function feature that allows a guest virtual machine (VM) or a container based application to place routes in its own virtual routing and forwarding (VRF) instance using BGP. For example, the network device 210 may be BGPaaS. As depicted in FIG. 2, the example network device 210 comprises processor 211, memory 212 and interface 213. These components may work together to provide various network device functionality as disclosed herein.
Processor 211 may be a microprocessor, controller, microcontroller, central processing unit, digital signal processor, application specific integrated circuit, field programmable gate array, any other type of electronic circuitry, or any combination of one or more of the preceding. The processor 211 may comprise one or more processor cores. In  particular embodiments, some or all of the functionality described herein as being provided by the network device 210 may be implemented by processor 211 executing software instructions, either alone or in conjunction with other components, such as memory 212.
Memory 212 may store code (which is composed of software instructions and which is sometimes referred to as computer program code or a computer program) and/or data using non-transitory machine-readable (e.g., computer-readable) media, such as machine-readable storage media (e.g., magnetic disks, optical disks, solid state drives, read only memory (ROM) , flash memory devices, phase change memory) and machine-readable transmission media (e.g., electrical, optical, radio, acoustical or other form of propagated signals –such as carrier waves, infrared signals) . For instance, memory 212 may comprise non-volatile memory containing code to be executed by processor 211. Where memory 212 is non-volatile, the code and/or data stored therein can persist even when the network device is turned off (when power is removed) . In some instances, while the network device 210 is turned on that part of the code that is to be executed by the processor (s) 211 may be copied from non-volatile memory into volatile memory (e.g., dynamic random access memory (DRAM) , static random access memory (SRAM) ) of the network device 210.
Interface 213 may be used in the wired and/or wireless communication of signaling and/or data to or from the network device 210. For example, interface 213 may perform any formatting, coding, or translating to allow the network device 210 to send and receive data whether over a wired and/or a wireless connection. In some embodiments, interface 213 may comprise radio circuitry capable of receiving data from other devices in the network over a wireless connection and/or sending data out to other devices via a wireless connection. This radio circuitry may include transmitter (s) , receiver (s) , and/or transceiver (s) suitable for radiofrequency communication. The radio circuitry may convert digital data into a radio signal having the appropriate parameters (e.g., frequency, timing, channel, bandwidth, etc. ) . The radio signal may then be transmitted via antennas to the appropriate recipient (s) . In some embodiments, interface 213 may comprise network interface controller (s) (NICs) , also known as a network interface card, network adapter, local area network (LAN) adapter or physical network interface. The NIC (s) may facilitate in connecting the network device 210 to other devices allowing them to communicate via wire through plugging in a cable to a physical port connected to a NIC. As explained above, in particular embodiments, processor 211 may represent part of interface 213, and some or all of the functionality described as being provided by interface 213 may be provided more specifically by processor 211.
The components of the network device 210 are each depicted as separate boxes located within a single larger box for reasons of simplicity in describing certain aspects and  features of the network device 210 disclosed herein. In practice however, one or more of the components illustrated in the example device 210 may comprise multiple different physical elements (e.g., interface 213 may comprise terminals for coupling wires for a wired connection and a radio transceiver for a wireless connection) .
While the modules are illustrated as being implemented in software stored in memory 212, other embodiments implement part or all of each of these modules in hardware.
The solution described herein may be implemented in the network devices 200 and 210 by means of a computer program comprising instructions which, when executed on at least one processor, cause the at least one processor to carry out the actions described herein, where appropriate.
Although FIG. 2 depicts a single network device 200 and a single network device 210, there may be multiple network devices 200 and network devices 210 in other embodiments. In addition, it may be the case that either network device 200 or network device 210 may be made up of two or more physically or logically separate components that, taken as a whole, perform the relevant functions or features of the respective device. For example, the network device 200 may comprise a base station component deployed at one location and a control node component deployed at a second location. The two components together may comprise a single network device 200 for the purpose of this signaling diagram. For example, the network device 210 may comprise a control plane function component deployed at one location and a user plane function component deployed at a second location. The two components together may comprise a single network device 210 for the purpose of this signaling diagram. When the network device 210 is a BGPaaS, it may comprise a prober component deployed at one location which may initiate the probe messages to each terminator and a routing function component deployed at a second location. In addition, the prober component may be implemented in BGPaaS. When the network device 210 is a core network device (such as a vEPC VM/container) of a wireless network, it may comprise a terminator component deployed at one location which may reply to the prober once it receives the probe messages and a core network function component deployed at a second location. In addition, the terminator component may be implemented in each core network device such as vEPC VM/container.
For a more thorough description of the example embodiment of the network devices 200 and 210 described in FIG. 2, turn to FIG. 3 below.
FIG. 3 illustrates two specific examples of how ND 300 may be implemented in certain embodiments of the described solution including: 1) a special-purpose network device 302 that uses custom processing circuits such as application-specific integrated circuits (ASICs) and a proprietary operating system (OS) ; and 2) a general purpose network device 304 that uses common off-the-shelf (COTS) processors and a standard OS which has been configured to provide one or more of the features or functions disclosed herein.
Special-purpose network device 302 includes hardware 310 comprising processor (s) 312, and interface 316, as well as memory 318 having stored therein software 320. In one embodiment, the software 320 implements the modules described with regard to the previous figures. During operation, the software 320 may be executed by the hardware 310 to instantiate a set of one or more software instance (s) 322. Each of the software instance (s) 322, and that part of the hardware 310 that executes that software instance (be it hardware dedicated to that software instance, hardware in which a portion of available physical resources (e.g., a processor core) is used, and/or time slices of hardware temporally shared by that software instance with others of the software instance (s) 322) , form a separate virtual network element 330A-R. Thus, in the case where there are multiple virtual network elements 330A-R, each operates as one of the network devices from the preceding figures.
Returning to FIG. 3, the example general purpose network device 304 includes hardware 340 comprising a set of one or more processor (s) 342 (which are often COTS processors) and interface 346 , as well as memory 348 having stored therein software 350. During operation, the processor (s) 342 execute the software 350 to instantiate one or more sets of one or more applications 364A-R. While certain embodiments do not implement virtualization, alternative embodiments may use different forms of virtualization. For example, in certain alternative embodiments virtualization layer 354 represents the kernel of an operating system (or a shim executing on a base operating system) that allows for the creation of multiple instances 362A-R called software containers that may each be used to execute one (or more) of the sets of applications 364A-R. In this embodiment, software containers 362A-R (also called virtualization engines, virtual private servers, or jails) are user spaces (typically a virtual memory space) that may be separate from each other and separate from the kernel space in which the operating system is run. In certain embodiments, the set of applications running in a given user space, unless explicitly allowed, may be prevented from accessing the memory of the other processes. In other such alternative embodiments virtualization layer 354 may represent a hypervisor (sometimes referred to as a virtual machine monitor (VMM) ) or a hypervisor executing on top of a host operating system; and each of the sets of applications 364A-R may run on top of a guest operating system within an instance 362A-R called a virtual machine (which in some cases may be considered a tightly isolated form of software container that is run by the hypervisor) . In certain embodiments, one, some or all of the applications are implemented as unikernel (s) , which can be generated by compiling directly with an application only a limited set of libraries (e.g., from a library operating system (LibOS) including drivers/libraries of OS  services) that provide the particular OS services needed by the application. As a unikernel can be implemented to run directly on hardware 340, directly on a hypervisor (in which case the unikernel is sometimes described as running within a LibOS virtual machine) , or in a software container, embodiments can be implemented fully with unikernels running directly on a hypervisor represented by virtualization layer 354, unikernels running within software containers represented by instances 362A-R, or as a combination of unikernels and the above-described techniques (e.g., unikernels and virtual machines both run directly on a hypervisor, unikernels and sets of applications that are run in different software containers) .
The instantiation of the one or more sets of one or more applications 364A-R, as well as virtualization if implemented are collectively referred to as software instance (s) 352. Each set of applications 364A-R, corresponding virtualization construct (e.g., instance 362A-R) if implemented, and that part of the hardware 340 that executes them (be it hardware dedicated to that execution and/or time slices of hardware temporally shared by software containers 362A-R) , forms a separate virtual network element (s) 360A-R.
The virtual network element(s) 360A-R perform similar functionality to the virtual network element(s) 330A-R. This virtualization of the hardware 340 is sometimes referred to as network function virtualization (NFV). Thus, NFV may be used to consolidate many network equipment types onto industry standard high volume server hardware, physical switches, and physical storage, which could be located in, for example, data centers and customer premise equipment (CPE). However, different embodiments of the invention may implement one or more of the software container(s) 362A-R differently. While embodiments of the invention are illustrated with each instance 362A-R corresponding to one VNE 360A-R, alternative embodiments may implement this correspondence at a finer level of granularity; it should be understood that the techniques described herein with reference to a correspondence of instances 362A-R to VNEs also apply to embodiments where such a finer level of granularity and/or unikernels are used.
The third exemplary ND implementation in FIG. 3 is a hybrid network device 306, which includes both custom ASICs/proprietary OS and COTS processors/standard OS in a single ND or a single card within an ND. In certain embodiments of such a hybrid network device, a platform virtual machine (VM), such as a VM that implements the functionality of the special-purpose network device 302, could provide for para-virtualization to the hardware present in the hybrid network device 306.
FIG. 4 shows a flowchart of a method 400 according to an embodiment of the present disclosure, which may be performed by an apparatus implemented in or as, or communicatively coupled to, a routing function entity or any other entity having similar functionality. As such, the routing function entity may provide means or modules for accomplishing various parts of the method 400 as well as means or modules for accomplishing other processes in conjunction with other components. The routing function entity may be any suitable entity which can allow another entity (such as a guest VM or container based application) to place routes in the routing function entity (for example, in its own VRF instance), for example by using a BGP message. In an embodiment, the routing function entity may be the network device 200 or 210 of FIG. 2 or the BGPaaS.
At block 402, the routing function entity may send a probe message to an entity of a cloud network. The routing path of the probe message passes through a first gateway of the cloud network. The cloud network may be a computer network that exists within or is a part of a cloud computing infrastructure. For example, the cloud network may be a vEPC, a cloud RAN, a cloud data center network, a cloud service platform, a cloud-based network function of 5GC, etc.
The entity of the cloud network may be a node of the cloud network which can provide at least one service. For example, the entity of the cloud network may be a VM or a container. In an embodiment, the entity of the cloud network may be a virtual network function entity of a core network (such as EPC or 5GC) of a wireless network (such as an EPS (evolved packet system) or a 5GS (5G system)). The entity of the cloud network is able to place at least one route in the routing function entity. In an embodiment, the entity of the cloud network is able to place at least one policy-based route in the routing function entity by using BGP. In an embodiment, the at least one policy-based route may comprise routes regarding different source Internet protocol (IP) addresses on the routing function entity passing through different gateways of the cloud network.
The probe message may be any suitable message which can implement a probe function. In an embodiment, the probe message may comprise, but is not limited to, an Internet control message protocol (ICMP) message and a bidirectional forwarding detection (BFD) message. For example, a BFD message may be used in case there is a firewall blocking ICMP messages.
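As a non-limiting illustration, such a probe might be constructed along the lines of the following Python sketch using the Scapy packet library; the addresses, interface name and gateway MAC are hypothetical placeholders, and a BFD-based probe would be built analogously where a firewall blocks ICMP.

```python
# Illustrative sketch only (hypothetical addresses and interface); assumes the
# Scapy packet library. A BFD-based probe would be built analogously.
from scapy.all import Ether, IP, ICMP, srp1

PROBE_SRC_IP = "10.0.0.1"          # source IP address on the routing function entity
ENTITY_LOGICAL_IP = "172.16.0.5"   # logical IP address on the entity of the cloud network
GATEWAY_MAC = "aa:bb:cc:dd:ee:01"  # MAC of the first gateway's interface towards the prober

def send_icmp_probe(iface="eth0", timeout=1.0):
    """Send one ICMP echo probe steered through the first gateway and wait for a reply."""
    probe = (Ether(dst=GATEWAY_MAC) /
             IP(src=PROBE_SRC_IP, dst=ENTITY_LOGICAL_IP) /
             ICMP())
    reply = srp1(probe, iface=iface, timeout=timeout, verbose=False)
    return reply is not None       # True if a corresponding probe response was received
```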
The probe message may be sent in various ways such as periodically, in response to a path status detection request, according to a configuration, in response to an addition of a new entity of the cloud network, etc. In an embodiment, the probe message may be sent in a time interval. The time interval may be predefined or configured by the operator.
In an embodiment, when there are two or more gateways in the cloud network, the routing function entity may send two or more probe messages to the entity of the cloud network. The routing paths of the two or more probe messages respectively pass through different  gateways of the cloud network. For example, the routing function entity may use different source IP addresses in probe messages passing through different gateways of the cloud network.
In an embodiment, the cloud network may have two or more gateways providing redundant next hops for the entity of the cloud network through VRRP.
In an embodiment, different source IP addresses may be used in probe messages passing through different gateways of the cloud network. For example, a source IP address IP_A of the routing function entity may be used in a probe message passing through the first gateway of the cloud network, a source IP address IP_B of the routing function entity may be used in a probe message passing through a second gateway of the cloud network, and so on.
At block 404, the routing function entity may determine a path status between the first gateway of the cloud network and the entity of the cloud network based on whether a corresponding probe response message is received or not.
In an embodiment, when the corresponding probe response message is not received by the routing function entity within a predefined time period and there is no path failure between the routing function entity and the first gateway of the cloud network, the routing function entity may determine that a path failure has happened between the first gateway of the cloud network and the entity of the cloud network. The predefined time period may be any suitable time period such as 3 x the above-described time interval.
In an embodiment, when the corresponding probe response message is received by the routing function entity within the predefined time period, the routing function entity may determine that the path failure has not happened between the first gateway of the cloud network and the entity of the cloud network.
The path failure between the routing function entity and the first gateway of the cloud network may be detected in various ways. In an embodiment, BFD for BGP between the routing function entity and a gateway (such as the first gateway) of the cloud network may be enabled to detect the path failure between the routing function entity and the gateway of the cloud network.
The routing path of the corresponding probe response message may be any suitable routing path between the routing function entity and the entity of the cloud network. In an embodiment, the routing path of the corresponding probe response message passes through the first gateway of the cloud network.
In an embodiment, the routing path of the corresponding probe response message passes through a second gateway of the cloud network rather than the first gateway of the cloud network. For example, in case VRRP is used between the gateways of the cloud network, the probe messages may pass through different gateways of the cloud network. Since there may be only one default gateway in the case of VRRP, the response messages may always pass through the default gateway whose role is VRRP master (e.g., the second gateway of the cloud network in this embodiment). In this case, when the corresponding probe response message is not received by the routing function entity within the predefined time period and there is no path failure between the routing function entity and the first gateway of the cloud network and between the routing function entity and the second gateway of the cloud network, the routing function entity may determine that the path failure has happened between the first gateway of the cloud network and the entity of the cloud network.
At block 406 (optionally) , when the path failure is determined to have happened between the first gateway of the cloud network and the entity of the cloud network, the routing function entity may send a route withdrawal message to the first gateway of the cloud network to withdraw at least one route to the entity of the cloud network. For example, the route withdrawal message may be a BGP message.
At blocks 408 and 410 (optionally), when the corresponding probe response message is received by the routing function entity again after the determination of the path failure, the routing function entity may determine that the failed path is recovered and send a route reachability message to the first gateway of the cloud network to advertise at least one route to the entity of the cloud network. The route reachability message may be a BGP message. For example, after the determination of the path failure, the routing function entity may continue to send the probe message to the entity of the cloud network, wherein the routing path of the probe message passes through the first gateway of the cloud network. At a later time, when the failed path is recovered, the probe message may be received by the entity of the cloud network and the corresponding probe response message may be sent to the routing function entity. After reception of the corresponding probe response message, the routing function entity may determine that the failed path is recovered.
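Purely for illustration, the route withdrawal of block 406 and the route re-advertisement of blocks 408 and 410 might be driven as in the following Python sketch; the BgpSession class is a hypothetical stand-in for whatever BGP implementation the routing function entity (e.g., a BGPaaS) uses, and is not a real library API.

```python
# Illustrative sketch only; BgpSession is a hypothetical wrapper around the
# routing function entity's BGP implementation, not a real library API.
class BgpSession:
    def __init__(self, gateway_ip):
        self.gateway_ip = gateway_ip   # BGP neighbor, i.e. a gateway of the cloud network

    def withdraw(self, prefix):
        # Would emit a BGP UPDATE carrying the prefix in its withdrawn-routes field.
        print(f"UPDATE to {self.gateway_ip}: withdraw {prefix}")

    def advertise(self, prefix, next_hop):
        # Would emit a BGP UPDATE re-announcing the prefix (route reachability message).
        print(f"UPDATE to {self.gateway_ip}: announce {prefix} next-hop {next_hop}")

# Block 406: withdraw the route to the entity when the path failure is detected.
def on_path_failure(session, entity_prefix):
    session.withdraw(entity_prefix)

# Blocks 408/410: re-advertise the route once the failed path is recovered.
def on_path_recovery(session, entity_prefix, entity_ip):
    session.advertise(entity_prefix, entity_ip)
```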
In an embodiment, the routing function entity may get a list of next hop IP addresses for the IP routes placed in the routing function entity by entities of the cloud network. These IP addresses may be logical IP addresses on the entities of the cloud network rather than the IP addresses connected to the gateways of the cloud network. The routing function entity may get a list of neighbors (for example, gateways of the cloud network) of the routing function entity and their next hop IP addresses. The relationship between the gateways of the cloud network and the interface IP addresses of the gateways of the cloud network can thereby be obtained. The routing function entity may send a probe message to each entity of the cloud network passing through each gateway of the cloud network in a time interval. The destination IP address in the probe messages may be the logical IP address on each entity of the cloud network. The routing function entity may use different source IP addresses in probe messages passing through different gateways of the cloud network. For example, the source IP address IP_A may be used in probe messages passing through the gateway A of the cloud network, the source IP address IP_B may be used in probe messages passing through the gateway B of the cloud network, and so on. The routing function entity may use a MAC (Media Access Control) address of the interface connected to the routing function entity on each gateway of the cloud network as the destination MAC address. This MAC address may be obtained by ARP (address resolution protocol). Therefore the probe messages can pass through different gateways of the cloud network. For example, the probe messages passing through the gateway A of the cloud network may use the MAC address MAC_A as the destination MAC address, wherein MAC_A is the MAC address of the interface connected to the routing function entity on the gateway A of the cloud network. The probe messages passing through the gateway B of the cloud network may use the MAC address MAC_B as the destination MAC address, wherein MAC_B is the MAC address of the interface connected to the routing function entity on the gateway B of the cloud network. There may be a timer with an age of 3 x the time interval used to measure the status of the path between a gateway of the cloud network and the entity of the cloud network. If a probe response message is received by the routing function entity within 3 x the time interval, the timer may refresh. If no probe response message is received by the routing function entity within 3 x the above-described time interval, the timer may time out and notify the routing function entity to send an IP route withdrawal message to the gateway of the cloud network to withdraw the IP route to the entity of the cloud network. If the path recovers and the probe response message is received by the routing function entity again, the timer may restart and notify the routing function entity to send an IP route reachability message to the gateway of the cloud network to advertise the IP route to the entity of the cloud network.
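For illustration purposes only, the per-gateway probe construction described above might be sketched as follows in Python using the Scapy packet library; all addresses and data structures are hypothetical placeholders. Each gateway is assigned its own source IP address, and the destination MAC address is the gateway interface MAC resolved by ARP, so that otherwise identical probes are steered through different gateways.

```python
# Illustrative sketch only (hypothetical addresses and structures); assumes Scapy.
from scapy.all import Ether, IP, ICMP, getmacbyip, sendp

# One dedicated source IP per gateway, e.g. IP_A for gateway A and IP_B for gateway B.
SOURCE_IP_BY_GATEWAY = {
    "10.0.0.254": "10.0.0.1",   # gateway A interface IP -> source IP_A
    "10.0.1.254": "10.0.1.1",   # gateway B interface IP -> source IP_B
}

# Logical IP addresses taken from the next hops of routes placed by the entities.
ENTITY_LOGICAL_IPS = ["172.16.0.5", "172.16.0.6"]

def send_probes(iface="eth0"):
    for gw_ip, src_ip in SOURCE_IP_BY_GATEWAY.items():
        gw_mac = getmacbyip(gw_ip)                 # MAC_A / MAC_B, resolved by ARP
        for entity_ip in ENTITY_LOGICAL_IPS:
            probe = (Ether(dst=gw_mac) /           # steers the probe through this gateway
                     IP(src=src_ip, dst=entity_ip) /
                     ICMP())
            sendp(probe, iface=iface, verbose=False)
```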
FIG. 5 shows a flowchart of a method 500 according to another embodiment of the present disclosure, which may be performed by an apparatus implemented in or as, or communicatively coupled to, an entity of a cloud network or any other entity having similar functionality. As such, the entity of the cloud network may provide means or modules for accomplishing various parts of the method 500 as well as means or modules for accomplishing other processes in conjunction with other components. The entity of the cloud network may be any suitable entity which is able to place at least one route in the routing function entity. In an embodiment, the entity of the cloud network is able to place the at least one route in the routing function entity by using BGP. The at least one route may comprise routes regarding different service IP addresses on the entity of the cloud network and UE IP pools served by the entity of the cloud network. In an embodiment, the entity of the cloud network may be the network device 200 or 210 of FIG. 2. For some parts which have been described in the above embodiments, the description thereof is omitted here for brevity.
At block 502, the entity of the cloud network may receive a probe message from a routing function entity. The routing path of the probe message passes through a first gateway of the cloud network. For example, the routing function entity may send the probe message to the entity of the cloud network at block 402 of FIG. 4, and then the entity of the cloud network may receive the probe message from the routing function entity for example when there is no path failure of the routing path of the probe message. The probe message may comprise an ICMP message and a BFD message. The probe message may be received in a time interval. The routing function entity may be a BGPaaS entity. The entity of the cloud network may be a virtual network function entity of a core network of a wireless network.
At block 504, the entity of the cloud network may determine a path status between the first gateway of the cloud network and the entity of the cloud network based on whether the probe message is received or not.
In an embodiment, when the probe message from a first Internet protocol address on the routing function entity is not received by the entity of the cloud network within a predefined time period, a path failure is determined to have happened either between the first gateway of the cloud network and the entity of the cloud network or between the first gateway of the cloud network and the routing function entity of the cloud network. The predefined time period may be any suitable time period such as 3 x the above-described time interval.
In an embodiment, when the probe message from the first Internet protocol address on the routing function entity is received by the entity of the cloud network within the predefined time period, the entity of the cloud network may determine that the path failure has not happened between the first gateway of the cloud network and the entity of the cloud network and between the first gateway of the cloud network and the routing function entity of the cloud network.
At block 506 (optionally) , in response to a reception of the probe message, the entity of the cloud network may send a corresponding probe response message to the routing function entity.
In an embodiment, a routing path of the corresponding probe response message passes through the first gateway of the cloud network.
In an embodiment, a routing path of the corresponding probe response message passes through the second gateway of the cloud network rather than the first gateway of the cloud network. For example, in case VRRP is used between the gateways of the cloud network, the probe messages may pass through different gateways of the cloud network. Since there may be only one default gateway in the case of VRRP, the response messages may always pass through the default gateway whose role is VRRP master (for example, the second gateway of the cloud network in this embodiment).
At block 508 (optionally), when the path failure is determined to have happened between the first gateway of the cloud network and the entity of the cloud network, the entity of the cloud network may perform a first action based on the path failure. In an embodiment, the first action comprises at least one of avoiding sending traffic to the path passing through the first gateway of the cloud network, moving the service of the entity of the cloud network to another entity of the cloud network, and rebooting the entity of the cloud network.
At block 510 (optionally), when the probe message is received by the entity of the cloud network within a predefined time period again after the determination of the path failure, the entity of the cloud network may determine that the failed path is recovered. For example, the routing function entity may continue to send the probe message to the entity of the cloud network in a time interval, wherein the routing path of the probe message passes through the first gateway of the cloud network. At a later time, when the failed path is recovered, the probe message may be received by the entity of the cloud network and the entity of the cloud network may determine that the failed path is recovered.
At block 512 (optionally) , the entity of the cloud network may perform a second action based on the recovered path. In an embodiment, the second action may comprise canceling the first action.
In an embodiment, different source Internet protocol addresses are used in probe messages passing through different gateways of the cloud network.
In an embodiment, bidirectional forwarding detection, BFD, for border gateway protocol, BGP, between the routing function entity and a gateway of the cloud network is enabled to detect the path failure between the routing function entity and the gateway of the cloud network.
In an embodiment, the cloud network has two or more gateways providing redundant next hops for the entity of the cloud network through virtual router redundancy protocol, VRRP.
FIGs. 6a, 6b and 7 show examples of the proposed solution according to some embodiments of the present disclosure.
The proposed solution may include two components:
· Prober, which may initiate the probe messages to each terminator. The prober may be implemented in BGPaaS.
· Terminator, which may reply to the prober once it receives the probe messages. The terminator may be implemented in each vEPC VM/container.
The probe message may be an ICMP message or a BFD message. For example, a BFD message may be used in case there is a firewall blocking ICMP packets.
The procedure on the prober side may be as below:
The prober obtains the list of next hop IP addresses for IP routes placed in BGPaaS by vEPC. These IP addresses may be logical IP addresses on VMs/containers rather than the IP addresses connected to DC-GW routers.
The prober obtains the list of BGP neighbors and their next hop IP addresses. The relationship between BGP neighbors and the interface IP addresses of the DC-GW routers can be obtained.
The prober sends a probe message to each VM/Container passing through each DC-GW router in a time interval.
The destination IP address in the probe messages can be the IP address on the vNIC (virtual network interface card) of the VM/Container or the logical IP address in the VM/Container.
The prober may use different source IP addresses in probe messages passing through different DC-GW routers. For example, source IP address IP_A may be used in the probe messages passing through DC-GW router A, and source IP address IP_B may be used in the probe messages passing through DC-GW router B.
The prober may use the MAC address of the interface connected to BGPaaS on each DC-GW router as the destination MAC address. This MAC address may be obtained by ARP. Therefore the probe messages can pass through different DC-GW routers. For example, the probe messages passing through DC-GW router A may use the MAC address MAC_A as the destination MAC address, wherein MAC_A is the MAC address of the interface connected to BGPaaS on DC-GW router A. The probe messages passing through DC-GW router B may use the MAC address MAC_B as the destination MAC address, wherein MAC_B is the MAC address of the interface connected to BGPaaS on DC-GW router B.
There may be a timer with a predefined time period, such as 3 x the time interval, used to measure the status of the path between a DC-GW router and a VM/container. If a probe response message is received by the prober within the predefined time period, such as 3 x the time interval, the timer may refresh. If no probe response message is received by the prober within 3 x the time interval, the timer may time out and notify the BGPaaS to send an IP route withdrawal message to the DC-GW router to withdraw at least one IP route to the VM/Container. If the failed path recovers and the probe response message is received by the prober again, the timer may restart and notify the BGPaaS to send an IP route reachability message to the DC-GW router to advertise the IP route to the VM/container.
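A minimal sketch of this timer behaviour, assuming a probing interval of one second and a bgp_session object exposing withdraw/advertise helpers like the hypothetical BgpSession sketched earlier, could look as follows; the names and values are placeholders, not part of the described solution.

```python
# Illustrative sketch only; bgp_session is assumed to expose withdraw/advertise
# helpers such as the hypothetical BgpSession sketched earlier.
import time

PROBE_INTERVAL = 1.0               # seconds; the probing time interval
TIMEOUT = 3 * PROBE_INTERVAL       # the "3 x the time interval" aging period

class PathMonitor:
    """Tracks one (DC-GW router, VM/container) path on the prober side."""

    def __init__(self, bgp_session, prefix, entity_ip):
        self.bgp = bgp_session         # BGP session towards the DC-GW router
        self.prefix = prefix           # IP route to the VM/container
        self.entity_ip = entity_ip     # next hop advertised for that route
        self.deadline = time.monotonic() + TIMEOUT
        self.failed = False

    def on_probe_response(self):
        self.deadline = time.monotonic() + TIMEOUT   # refresh/restart the timer
        if self.failed:
            self.failed = False
            self.bgp.advertise(self.prefix, self.entity_ip)  # route reachability message

    def poll(self):
        if not self.failed and time.monotonic() > self.deadline:
            self.failed = True                               # timer timeout
            self.bgp.withdraw(self.prefix)                   # route withdrawal message
```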
In case VRRP is used between DC-GW routers, the probe messages may pass through the same path as above.
The procedure on the terminator side may be as below:
The terminator may provision different static routes to different IP addresses on BGPaaS passing through different DC-GW routers in case multiple gateways are used. For example, the route to IP_A is provisioned with IP_GW_A as the next hop and the route to IP_B is provisioned with IP_GW_B as the next hop.
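On a Linux-based VM/container, such static routes might be provisioned along the lines of the following sketch; the addresses are hypothetical placeholders and the sketch assumes the iproute2 tooling commonly available in such guests.

```python
# Illustrative sketch only (hypothetical addresses); assumes a Linux guest with iproute2.
import subprocess

STATIC_ROUTES = [
    ("10.0.0.1/32", "192.168.100.254"),  # route to IP_A via IP_GW_A (DC-GW router A)
    ("10.0.1.1/32", "192.168.101.254"),  # route to IP_B via IP_GW_B (DC-GW router B)
]

def provision_terminator_routes():
    for prefix, next_hop in STATIC_ROUTES:
        subprocess.run(["ip", "route", "replace", prefix, "via", next_hop], check=True)
```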
Once the terminator receives probe messages from BGPaaS, it may send response messages to BGPaaS. The response messages with destination IP address of IP_A may pass through DC-GW router A and the response messages with destination IP address of IP_B may pass through DC-GW router B.
After the terminator receives the first probe message from BGPaaS, it may start a timer, with the same predefined time period, for each source IP address of the probe messages. If a probe message is received by the terminator within the predefined time period, the timer may refresh. If no probe message is received by the terminator within the predefined time period, the timer may time out and notify the vEPC VM/container to take actions. The actions may comprise at least one of avoiding sending traffic to the path passing through the DC-GW router, rebooting the VM/Container, moving the service to another vEPC VM/container, and so on. If the failed path recovers and the probe message is received by the terminator again, the timer may restart and notify the vEPC VM/container to cancel the actions which have been taken.
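The terminator-side timer and its actions might be sketched as follows in Python; the action callbacks are hypothetical placeholders for the vEPC-specific behaviour (avoiding the path, moving the service, rebooting), and the interval is an assumed value matching the prober.

```python
# Illustrative sketch only; the action callbacks are hypothetical placeholders
# for vEPC-specific behaviour (stop using the path, move the service, reboot).
import time

PROBE_INTERVAL = 1.0                 # seconds, assumed to match the prober
TIMEOUT = 3 * PROBE_INTERVAL         # same predefined time period as on the prober side

class TerminatorMonitor:
    """One monitor per probe source IP address, i.e. per DC-GW router path."""

    def __init__(self, on_failure, on_recovery):
        self.on_failure = on_failure     # e.g. stop sending traffic via this DC-GW router
        self.on_recovery = on_recovery   # e.g. cancel the first action
        self.deadline = time.monotonic() + TIMEOUT
        self.failed = False

    def on_probe_received(self):
        # Called when a probe message arrives; the probe response itself is sent
        # back to the prober by a separate handler (or by the OS for ICMP echo).
        self.deadline = time.monotonic() + TIMEOUT   # refresh the timer
        if self.failed:
            self.failed = False
            self.on_recovery()                       # failed path has recovered

    def poll(self):
        if not self.failed and time.monotonic() > self.deadline:
            self.failed = True
            self.on_failure()                        # timer timeout: take the first action
```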
In case VRRP is used between DC-GW routers, there may be only one default gateway, IP_GW_VIP. As shown in FIG. 7, the response messages may always pass through the DC-GW router whose role is VRRP master (e.g., the default gateway IP_GW_VIP).
According to various embodiments, a new mechanism is proposed to detect a path status between the gateway (such as DC-GW router) of the cloud network and the entity (such as VM/container) of the cloud network, and this new mechanism does not need any additional feature on the gateway (such as DC-GW router) of the cloud network.
According to various embodiments, a new mechanism is proposed to track the availability of advertised routes on each BGP neighbor for the routing function entity (such as BGPaaS) in the cloud network. If an advertised route is no longer valid on a neighbor of the routing function entity, the routing function entity (such as BGPaaS) can withdraw the route from the neighbor.
According to various embodiments, the probe messages and response messages may pass through the gateway (such as DC-GW router) of the cloud network, therefore it can  track the availability of advertised routes on the gateway (such as DC-GW router) of the cloud network.
According to various embodiments, a new mechanism is proposed to trigger the routing function entity (such as BGPaaS) to send a network reachability or withdrawal message to specific gateway (such as DC-GW router) of the cloud network, therefore route/service fast convergence can be implemented.
Embodiments herein afford many advantages, of which a non-exhaustive list of examples follows. In some embodiments herein, a route/service fast convergence method proposed for the cloud network (such as vEPC) does not need to depend on any additional feature on the gateway (such as DC-GW router) of the cloud network and is fully compatible with the gateway (such as DC-GW router) of the cloud network. For example, the proposed method can be applied with no additional function required on DC-GW routers. It does not need any configuration/function used for path failure detection on DC-GW routers. In some embodiments herein, the proposed method supports incremental deployment and there is no impact on the deployed cloud infrastructure. In some embodiments herein, the advantages of the proposed method comprise low cost, being an add-on feature, and requiring no extra capability of any hardware devices. In some embodiments herein, the proposed method can be applied in massive and scaling deployment scenarios. In some embodiments herein, the proposed method can detect the path failure through messages internal to the entity (such as a VNF (virtual network function)) of the cloud network, and trigger routing protocol convergence as soon as possible. In some embodiments herein, the proposed method can be applied in a large scale cloud network and does not consume any additional resource (such as computing, storage and hardware resources) on the gateway (such as DC-GW router) of the cloud network. In some embodiments herein, the proposed method can be used in any size of cloud network. The embodiments herein are not limited to the features and advantages mentioned above. A person skilled in the art will recognize additional features and advantages upon reading the detailed description.
FIG. 8 is a block diagram showing an apparatus suitable for practicing some embodiments of the disclosure. For example, any one of the routing function entity and the entity of the cloud network described above may be implemented as or through the apparatus 800.
The apparatus 800 comprises at least one processor 821, such as a DP, and at least one MEM 822 coupled to the processor 821. The apparatus 800 may further comprise a transmitter TX and receiver RX 823 coupled to the processor 821. The MEM 822 stores a PROG 824. The PROG 824 may include instructions that, when executed on the associated processor 821, enable the apparatus 800 to operate in accordance with the embodiments of the present disclosure. A combination of the at least one processor 821 and the at least one MEM 822 may form processing means 825 adapted to implement various embodiments of the present disclosure.
Various embodiments of the present disclosure may be implemented by a computer program executable by one or more processors such as the processor 821, by software, firmware, hardware, or by a combination thereof.
The MEM 822 may be of any type suitable to the local technical environment and may be implemented using any suitable data storage technology, such as semiconductor based memory devices, magnetic memory devices and systems, optical memory devices and systems, fixed memories and removable memories, as non-limiting examples.
The processor 821 may be of any type suitable to the local technical environment, and may include one or more of general purpose computers, special purpose computers, microprocessors, digital signal processors (DSPs) and processors based on multicore processor architecture, as non-limiting examples.
In an embodiment where the apparatus is implemented as or at the routing function entity, the memory 822 contains instructions executable by the processor 821, whereby the routing function entity operates according to the method as described in reference to FIG. 4.
In an embodiment where the apparatus is implemented as or at the entity of the cloud network, the memory 822 contains instructions executable by the processor 821, whereby the entity of the cloud network operates according to the method as described in reference to FIG. 5.
FIG. 9 is a block diagram showing a routing function entity according to an embodiment of the disclosure. As shown, the routing function entity 900 comprises a sending module 902 and a determining module 904. The sending module 902 may be configured to send a probe message to an entity of a cloud network, wherein a routing path of the probe message passes through a first gateway of the cloud network. The determining module 904 may be configured to determine a path status between the first gateway of the cloud network and the entity of the cloud network based on whether a corresponding probe response message is received or not.
FIG. 10 is a block diagram showing an entity of a cloud network according to an embodiment of the disclosure. As shown, the entity 1000 of the cloud network comprises a receiving module 1002 and a determining module 1004. The receiving module 1002 may be configured to receive a probe message from a routing function entity, wherein a routing path of the probe message passes through a first gateway of the cloud network. The determining module 1004 may be configured to determine a path status between the first gateway of the cloud network and the entity of the cloud network based on whether the probe message is received or not.
According to an aspect of the disclosure it is provided a computer program product being tangibly stored on a computer readable storage medium and including instructions which, when executed on at least one processor, cause the at least one processor to carry out any of the methods as described above.
According to an aspect of the disclosure it is provided a computer-readable storage medium storing instructions which when executed by at least one processor, cause the at least one processor to carry out any of the methods as described above.
In addition, the present disclosure may also provide a carrier containing the computer program as mentioned above, wherein the carrier is one of an electronic signal, optical signal, radio signal, or computer readable storage medium. The computer readable storage medium can be, for example, an optical compact disk or an electronic memory device like a RAM (random access memory), a ROM (read only memory), Flash memory, magnetic tape, CD-ROM, DVD, Blu-ray disc and the like.
The techniques described herein may be implemented by various means so that an apparatus implementing one or more functions of a corresponding apparatus described with an embodiment comprises not only prior art means, but also means for implementing the one or more functions of the corresponding apparatus described with the embodiment and it may comprise separate means for each separate function or means that may be configured to perform two or more functions. For example, these techniques may be implemented in hardware (one or more apparatuses) , firmware (one or more apparatuses) , software (one or more modules) , or combinations thereof. For a firmware or software, implementation may be made through modules (e.g., procedures, functions, and so on) that perform the functions described herein.
Exemplary embodiments herein have been described above with reference to block diagrams and flowchart illustrations of methods and apparatuses. It will be understood that each block of the block diagrams and flowchart illustrations, and combinations of blocks in the block diagrams and flowchart illustrations, respectively, can be implemented by various means including computer program instructions. These computer program instructions may be loaded onto a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions which execute on the computer or other programmable data processing apparatus create means for implementing the functions specified in the flowchart block or blocks.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in  sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are contained in the above discussions, these should not be construed as limitations on the scope of the subject matter described herein, but rather as descriptions of features that may be specific to particular embodiments. Certain features that are described in the context of separate embodiments may also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment may also be implemented in multiple embodiments separately or in any suitable sub-combination.
While this specification contains many specific implementation details, these should not be construed as limitations on the scope of any implementation or of what may be claimed, but rather as descriptions of features that may be specific to particular embodiments of particular implementations. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable sub-combination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a sub-combination or variation of a sub-combination.
It will be obvious to a person skilled in the art that, as the technology advances, the inventive concept can be implemented in various ways. The above described embodiments are given for describing rather than limiting the disclosure, and it is to be understood that modifications and variations may be resorted to without departing from the spirit and scope of the disclosure as those skilled in the art readily understand. Such modifications and variations are considered to be within the scope of the disclosure and the appended claims. The protection scope of the disclosure is defined by the accompanying claims.

Claims (40)

  1. A method (400) performed by a routing function entity, comprising:
    sending (402) a probe message to an entity of a cloud network, wherein a routing path of the probe message passes through a first gateway of the cloud network; and
    determining (404) a path status between the first gateway of the cloud network and the entity of the cloud network based on whether a corresponding probe response message is received or not.
  2. The method according to claim 1, wherein the determining step comprises:
    when the corresponding probe response message is not received by the routing function entity within a predefined time period and there is no path failure between the routing function entity and the first gateway of the cloud network, determining that a path failure has happened between the first gateway of the cloud network and the entity of the cloud network; and
    when the corresponding probe response message is received by the routing function entity within the predefined time period, determining that the path failure has not happened between the first gateway of the cloud network and the entity of the cloud network.
  3. The method according to claim 1 or 2, wherein a routing path of the corresponding probe response message passes through the first gateway of the cloud network.
  4. The method according to claim 1 or 2, wherein a routing path of the corresponding probe response message passes through a second gateway of the cloud network rather than the first gateway of the cloud network, wherein the determining step comprises:
    when the corresponding probe response message is not received by the routing function entity within the predefined time period and there is no path failure between the routing function entity and the first gateway of the cloud network and between the routing function entity and the second gateway of the cloud network, determining that the path failure has happened between the first gateway of the cloud network and the entity of the cloud network.
  5. The method according to any of claims 1-4, wherein when the path failure is determined to have happened between the first gateway of the cloud network and the entity of the cloud network, the method further comprises:
    sending (406) a route withdrawal message to the first gateway of the cloud network to withdraw at least one route to the entity of the cloud network.
  6. The method according to any of claims 1-5, wherein the probe message comprises an Internet control message protocol, ICMP, message and a bidirectional forwarding detection, BFD, message.
  7. The method according to any of claims 1-6, wherein the probe message is sent in a time interval.
  8. The method according to any of claims 1-7, wherein the routing function entity is a border gateway protocol, BGP, as a service, BGPaaS, entity.
  9. The method according to any of claims 1-8, wherein the entity of the cloud network is a virtual network function entity of a core network of a wireless network.
  10. The method according to any of claims 1-9, wherein the entity of the cloud network is able to place at least one route in the routing function entity by using border gateway protocol, BGP.
  11. The method according to claim 10, wherein the at least one policy-based route comprises routes regarding different source Internet protocol addresses on the routing function entity passing through different gateways of the cloud network.
  12. The method according to any of claims 1-11, wherein different source Internet protocol addresses are used in probe messages passing through different gateways of the cloud network.
  13. The method according to any of claims 1-12, further comprising:
    when the corresponding probe response message is received by the routing function entity again after the determination of the path failure, determining (408) the failed path is recovered; and
    sending (410) a route reachability message to the first gateway of the cloud network to advertise at least one route to the entity of the cloud network.
  14. The method according to any of claims 1-13, wherein bidirectional forwarding detection, BFD for border gateway protocol, BGP, between the routing function entity and a gateway of the cloud network is enabled to detect the path failure between the routing function entity and the gateway of the cloud network.
  15. The method according to any of claims 1-14, wherein the cloud network has two or more gateways providing redundant next hops for the entity of the cloud network through virtual router redundancy protocol, VRRP.
  16. The method according to any of claims 1-15, wherein when there are two or more gateways in the cloud network, the sending step comprises sending two or more probe messages to the entity of the cloud network, wherein the routing paths of the two or more probe messages respectively pass through different gateways of the cloud network.
  17. A method (500) performed by an entity of a cloud network, comprising:
    receiving (502) a probe message from a routing function entity, wherein a routing path of the probe message passes through a first gateway of the cloud network; and
    determining (504) a path status between the first gateway of the cloud network and the entity of the cloud network based on whether the probe message is received or not.
  18. The method according to claim 17, wherein the determining step comprises:
    when the probe message from a first Internet protocol address on the routing function entity is not received by the entity of the cloud network within a predefined time period, determining that a path failure has happened either between the first gateway of the cloud network and the entity of the cloud network or between the first gateway of the cloud network and the routing function entity of the cloud network; and
    when the probe message from the first Internet protocol address on the routing function entity is received by the entity of the cloud network within the predefined time period, determining that the path failure has not happened between the first gateway of the cloud network and the entity of the cloud network and between the first gateway of the cloud network and the routing function entity of the cloud network.
  19. The method according to claim 17 or 18, further comprising:
    in response to a reception of the probe message, sending (506) a corresponding probe response message to the routing function entity.
  20. The method according to claim 19, wherein a routing path of the corresponding probe response message passes through the first gateway of the cloud network.
  21. The method according to claim 19, wherein a routing path of the corresponding probe response message passes through the second gateway of the cloud network rather than the first gateway of the cloud network.
  22. The method according to any of claims 17-21, wherein when the path failure is determined to have happened between the first gateway of the cloud network and the entity of the cloud network, the method further comprises:
    performing (508) a first action based on the path failure.
  23. The method according to claim 22, wherein the first action comprises at least one of avoiding sending traffic to the path passing through the first gateway of the cloud network, moving the service of the entity of the cloud network to another entity of the cloud network and rebooting the entity of the cloud network.
  24. The method according to claim 22 or 23, further comprising:
    when the probe message is received by the entity of the cloud network within a predefined time period again after the determination of the path failure, determining (510) the failed path is recovered; and
    performing (512) a second action based on the recovered path.
  25. The method according to claim 24, wherein the second action comprises canceling the first action.
  26. The method according to any of claims 17-25 wherein the probe message comprises an Internet control message protocol, ICMP, message and a bidirectional forwarding detection, BFD, message.
  27. The method according to any of claims 17-26, wherein the probe message is received in a time interval.
  28. The method according to any of claims 17-27, wherein the routing function entity is a border gateway protocol, BGP, as a service, BGPaaS, entity.
  29. The method according to any of claims 17-28, wherein the entity of the cloud network is a virtual network function entity of a core network of a wireless network.
  30. The method according to any of claims 17-29, wherein the entity of the cloud network is able to place at least one route in the routing function entity by using border gateway protocol, BGP.
  31. The method according to claim 30, wherein the at least one policy-based route comprises routes regarding different source Internet protocol addresses on the routing function entity passing through different gateways of the cloud network.
  32. The method according to any of claims 17-31, wherein different source Internet protocol addresses are used in probe messages passing through different gateways of the cloud network.
  33. The method according to any of claims 17-32, wherein bidirectional forwarding detection, BFD for border gateway protocol, BGP, between the routing function entity and a gateway of the cloud network is enabled to detect the path failure between the routing function entity and the gateway of the cloud network.
  34. The method according to any of claims 17-33, wherein the cloud network has two or more gateways providing redundant next hops for the entity of the cloud network through virtual router redundancy protocol, VRRP.
  35. A routing function entity (800) , comprising:
    a processor (821) ; and
    a memory (822) coupled to the processor (821) , said memory (822) containing instructions executable by said processor (821) , whereby said routing function entity (800) is operative to:
    send a probe message to an entity of a cloud network, wherein a routing path of the probe message passes through a first gateway of the cloud network; and
    determine a path status between the first gateway of the cloud network and the entity of the cloud network based on whether a corresponding probe response message is received or not.
  36. The routing function entity according to claim 35, wherein the routing function entity is further operative to perform the method of any one of claims 2 to 16.
  37. An entity (800) of a cloud network, comprising:
    a processor (821) ; and
    a memory (822) coupled to the processor (821) , said memory (822) containing instructions executable by said processor (821) , whereby said entity (800) of the cloud network is operative to:
    receive a probe message from a routing function entity, wherein a routing path of the probe message passes through a first gateway of the cloud network; and
    determine a path status between the first gateway of the cloud network and the entity of the cloud network based on whether the probe message is received or not.
  38. The entity of the cloud network according to claim 37, wherein the entity of the cloud network is further operative to perform the method of any one of claims 18 to 34.
  39. A computer-readable storage medium storing instructions which when executed by at least one processor, cause the at least one processor to perform the method according to any one of claims 1 to 34.
  40. A computer program product comprising instructions which when executed by at least one processor, cause the at least one processor to perform the method according to any of claims 1 to 34.
PCT/CN2020/073864 2020-01-22 2020-01-22 Method and apparatus for path status detection WO2021147014A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/073864 WO2021147014A1 (en) 2020-01-22 2020-01-22 Method and apparatus for path status detection

Publications (1)

Publication Number Publication Date
WO2021147014A1 true WO2021147014A1 (en) 2021-07-29

Family

ID=76991961

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/073864 WO2021147014A1 (en) 2020-01-22 2020-01-22 Method and apparatus for path status detection

Country Status (1)

Country Link
WO (1) WO2021147014A1 (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130107729A1 (en) * 2011-11-02 2013-05-02 Telcordia Technologies, Inc. Method, system, network nodes, routers and program for bandwidth estimation in multi-hop networks
US9106353B2 (en) * 2011-12-13 2015-08-11 Jds Uniphase Corporation Time synchronization for network testing equipment
US9882666B2 (en) * 2011-12-13 2018-01-30 Viavi Solutions Inc. Time synchronization for network testing equipment
US20130170813A1 (en) * 2011-12-30 2013-07-04 United Video Properties, Inc. Methods and systems for providing relevant supplemental content to a user device
WO2016028228A1 (en) * 2014-08-21 2016-02-25 Avennetz Technologies Pte Ltd System, method and apparatus for determining driving risk
WO2017081518A1 (en) * 2015-11-12 2017-05-18 Telefonaktiebolaget Lm Ericsson (Publ) Method and system for general packet radio service tunneling protocol (gtp) probing

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113949649A (en) * 2021-10-14 2022-01-18 迈普通信技术股份有限公司 Fault detection protocol deployment method and device, electronic equipment and storage medium
CN113949649B (en) * 2021-10-14 2023-05-23 迈普通信技术股份有限公司 Fault detection protocol deployment method and device, electronic equipment and storage medium
CN114629816A (en) * 2022-03-14 2022-06-14 京东科技信息技术有限公司 Method and system for detecting public network IP network state
CN114629816B (en) * 2022-03-14 2023-11-03 京东科技信息技术有限公司 Public network IP network state detection method and system

Similar Documents

Publication Publication Date Title
US10938627B2 (en) Packet processing method, device, and network system
EP3591912B1 (en) Evpn packet processing method, device and system
US11621926B2 (en) Network device and method for sending BGP information
US11310846B2 (en) Local identifier locator network protocol (ILNP) breakout
US11129061B1 (en) Local identifier locator network protocol (ILNP) breakout
US11362925B2 (en) Optimizing service node monitoring in SDN
CN110971516B (en) Method and device for processing routing information
US11711243B2 (en) Packet processing method and gateway device
US11711281B2 (en) Methods and network devices for detecting and resolving abnormal routes
CN113630312B (en) Path detection method, path detection device, network equipment and computer readable storage medium
WO2021147014A1 (en) Method and apparatus for path status detection
US20230300048A1 (en) Tunnel bfd session establishment method and device
US11456943B2 (en) Packet transmission method and apparatus
CN109379760B (en) MEC bypass system and method
US11431623B2 (en) Method for configuring private line service, device, and storage medium
US11570073B1 (en) Service status notification
WO2022246693A1 (en) Method and apparatus for path switchover management
US20230336467A1 (en) Standby access gateway function signaling for a dynamic host configuration protocol
WO2023216836A1 (en) Mesh networking uplink control method and system, and device and readable storage medium
JP2024503289A (en) METHODS AND APPARATUS FOR DETECTING BGP SESSION STATE AND NETWORK DEVICE

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20915869

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20915869

Country of ref document: EP

Kind code of ref document: A1