WO2022084385A1 - Notification of packet data network gateway (pgw) ip address change - Google Patents

Notification of packet data network gateway (PGW) IP address change

Info

Publication number
WO2022084385A1
Authority
WO
WIPO (PCT)
Prior art keywords
moved
identifying
function
connection
network
Application number
PCT/EP2021/079081
Other languages
French (fr)
Inventor
Yong Yang
Anders Henriksson
Original Assignee
Telefonaktiebolaget Lm Ericsson (Publ)
Application filed by Telefonaktiebolaget Lm Ericsson (Publ) filed Critical Telefonaktiebolaget Lm Ericsson (Publ)
Publication of WO2022084385A1 publication Critical patent/WO2022084385A1/en

Classifications

    • H04W 76/00 Connection management
    • H04W 76/10 Connection setup
    • H04W 76/19 Connection re-establishment
    • H04L 45/00 Routing or path finding of packets in data switching networks
    • H04L 45/02 Topology update or discovery
    • H04L 45/026 Details of "hello" or keep-alive messages
    • H04L 45/12 Shortest path evaluation
    • H04L 45/125 Shortest path evaluation based on throughput or bandwidth
    • H04L 45/74 Address processing for routing
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/70 Admission control; Resource allocation
    • H04L 47/74 Admission control; Resource allocation measures in reaction to resource unavailability
    • H04L 47/746 Reaction triggered by a failure
    • H04W 24/00 Supervisory, monitoring or testing arrangements
    • H04W 24/02 Arrangements for optimising operational condition
    • H04W 24/04 Arrangements for maintaining operational condition
    • H04W 36/00 Hand-off or reselection arrangements
    • H04W 36/12 Reselecting a serving backbone network switching or routing node
    • H04W 76/20 Manipulation of established connections
    • H04W 76/22 Manipulation of transport tunnels
    • H04W 88/00 Devices specially adapted for wireless communication networks, e.g. terminals, base stations or access point devices
    • H04W 88/14 Backbone network devices
    • H04W 92/00 Interfaces specially adapted for wireless communication networks
    • H04W 92/16 Interfaces between hierarchically similar devices
    • H04W 92/24 Interfaces between hierarchically similar devices between backbone network devices

Definitions

  • the present disclosure relates generally to network notification procedures, and more particularly to methods and devices for notifying peer network nodes in a core network that a group of one or more connection contexts served by a first network function (NF) are to be moved to a second NF.
  • NF network function
  • NF Network Function
  • CN 5G Core Network
  • 3GPP TS 23.501 v16.6.0 (2020-09), which is incorporated herein by reference in its entirety, provides the following list of definitions related to NF services, NF service sets, NFs, and NF sets.
  • NF instance an NF instance is defined as an identifiable instance of the NF.
  • NF service an NF service is defined as functionality exposed by a NF through a service based interface and consumed by other authorized NFs.
  • NF service instance an NF service instance is defined as an identifiable instance of the NF service.
  • NF service operation an NF service operation is defined as being an elementary unit of which a given NF service is composed.
  • an NF Service Set is defined as a group of interchangeable NF service instances of the same service type within an NF instance.
  • the NF service instances in the same NF Service Set have access to the same context data.
  • NF Set an NF Set is defined as a group of interchangeable NF instances of the same type that support the same services and the same Network Slice(s).
  • the NF instances in the same NF Set may be geographically distributed but have access to the same context data.
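For illustration only, the following Python sketch models the grouping relationships described in the 3GPP TS 23.501 definitions above (interchangeable instances of the same type with access to the same context data). All class and field names are assumptions made for this sketch and are not part of the disclosure or the specification.

```python
# Illustrative sketch of the NF Set / NF Service Set groupings described above.
# Class and field names are assumptions for illustration only.
from dataclasses import dataclass, field


@dataclass
class NFServiceInstance:
    service_instance_id: str
    service_type: str                 # e.g. "nsmf-pdusession"


@dataclass
class NFServiceSet:
    """Interchangeable NF service instances of the same service type within one NF instance."""
    service_set_id: str
    service_type: str
    shared_context: dict = field(default_factory=dict)
    instances: list[NFServiceInstance] = field(default_factory=list)


@dataclass
class NFInstance:
    nf_instance_id: str
    nf_type: str                      # e.g. "SMF", "PGW-C"
    supported_slices: frozenset[str] = frozenset()


@dataclass
class NFSet:
    """Interchangeable NF instances of the same type, supporting the same services
    and network slice(s), possibly geographically distributed, with shared context data."""
    nf_set_id: str
    nf_type: str
    shared_context: dict = field(default_factory=dict)
    members: list[NFInstance] = field(default_factory=list)

    def add(self, nf: NFInstance) -> None:
        # All members of an NF Set are of the same NF type.
        assert nf.nf_type == self.nf_type
        self.members.append(nf)
```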
  • Embodiments of the present disclosure provide a technique for notifying peer network nodes in a core network that a group of one or more connection contexts served by a first network function (NF) are to be moved to a second NF.
  • NF network function
  • a method is implemented by one of the first and second NFs and comprises determining one or more connection contexts to be moved from the first NF to the second NF, sending a request message to at least one peer network node in the core network indicating that the one or more connection contexts are to be moved from the first NF to the second NF.
  • the first NF and the second NF are in a same NF set, and the request message comprises an information element (IE) identifying the one or more connection contexts to be moved from the first NF to the second NF.
  • IE information element
  • the present disclosure provides a network node for notifying peer network nodes in a core network that a group of one or more connection contexts served by a first network function (NF) are to be moved to a second NF.
  • the network node comprises communications interface circuitry configured to communicate data packets with peer network nodes in a core network, and processing circuitry configured to determine one or more connection contexts to be moved from the first NF to the second NF, and send a request message to one or more peer network nodes in the core network indicating that the one or more connection contexts are to be moved from the first NF to the second NF.
  • the first NF and the second NF are in a same NF set, and the request message comprises an information element (IE) identifying the one or more connection contexts to be moved from the first NF to the second NF.
  • IE information element
  • the present disclosure provides a non-transitory computer readable medium having computer instructions stored thereon that, when executed by a processing circuit of a network node configured to notify peer network nodes in a core network that a group of one or more connection contexts served by a first network function (NF) are to be moved to a second NF, causes the network node to determine one or more connection contexts to be moved from the first NF to the second NF, and send a request message to one or more peer network nodes in the core network indicating that the one or more connection contexts are to be moved from the first NF to the second NF.
  • the first NF and the second NF are in a same NF set
  • the request message comprises an information element (IE) identifying the one or more connection contexts to be moved from the first NF to the second NF.
  • IE information element
  • the present disclosure provides a method for notifying peer network nodes in a core network that a group of one or more connection contexts served by a first network function (NF) are to be moved to a second NF.
  • the method is implemented by a peer network node in the core network and comprises receiving a request message from one of the first NF and the second NF indicating that the group of one or more connection contexts are to be moved from the first NF to the second NF, and sending, to the second NF, information for subsequent connection context signaling for the group of one or more connection contexts that were moved from the first NF to the second NF.
  • the first NF and the second NF are in the same NF set, and the request message comprises an information element (IE) identifying the group of one or more connection contexts to be moved from the first NF to the second NF.
  • IE information element
  • the present disclosure provides a peer network node in a core network that is notified that a group of one or more connection contexts served by a first network function (NF) are to be moved to a second NF.
  • the peer network node comprises communications interface circuitry configured to communicate data packets with a plurality of Network Functions (NFs) in a same NF set, and processing circuitry configured to receive a request message from one of a first NF and a second NF indicating that the one or more connection contexts are to be moved from the first NF to the second NF, and send, to the second NF, information for subsequent connection context signaling for the group of one or more connection contexts that were moved from the first NF to the second NF.
  • NFs Network Functions
  • the first NF and the second NF are in a same NF set, and the request message comprises an information element (IE) identifying the one or more connection contexts to be moved from the first NF to the second NF.
  • the present disclosure provides non-transitory computer readable medium having computer instructions stored thereon that, when executed by a processing circuit of a peer network node in a core network, causes the peer network node to receive a request message from one of a first NF and a second NF indicating that the one or more connection contexts are to be moved from the first NF to the second NF, and send, to the second NF, information for subsequent connection context signaling for the group of one or more connection contexts that were moved from the first NF to the second NF.
  • the first NF and the second NF are in a same NF set, and the request message comprises an information element (IE) identifying the one or more connection contexts to be moved from the first NF to the second NF.
  • Figure 1 illustrates a non-roaming architecture, including a SMF+PGW-C, for interworking between 5GS and EPC/E-UTRAN systems.
  • Figure 2 illustrates a non-roaming architecture for 3GPP accesses.
  • Figure 3 illustrates an architecture in which the user plane (UP) and the control plane (CP) are separate.
  • Figure 4 illustrates an architecture reference model showing the separation of the user plane (UP) and the control plane (CP) for non-roaming and roaming scenarios.
  • FIG. 5 is a functional block diagram illustrating some of the components of an EPS network configured according to the present embodiments.
  • Figure 6 is a signaling diagram illustrating a technique by which a NF notifies a peer network node in a core network to change the IP Address for a large number of connections matching certain criteria, according to one embodiment.
  • Figure 7 illustrates a method for notifying peer network nodes in a core network that a group of one or more connection contexts served by a first NF are to be moved to a second NF, according to one embodiment.
  • Figure 8 illustrates a method for notifying peer network nodes in a core network that a group of one or more connection contexts served by a first NF are to be moved to a second NF, according to one embodiment.
  • Figure 9 is a block diagram illustrating some components of a network node configured according to one embodiment.
  • Figure 10 is a functional block diagram illustrating some functions of a computer program executed by processing circuitry of a network node configured according to one embodiment of the present disclosure.
  • Figure 11 is a block diagram illustrating some components of a peer network node in a core network configured according to one embodiment of the present disclosure.
  • Figure 12 is a functional block diagram illustrating some functions of a computer program executed by processing circuitry of a peer network node according to one embodiment of the present disclosure.
  • Embodiments of the present disclosure provide techniques for notifying peer network nodes in a core network (CN) that a group of one or more connection contexts served by a first network function (NF) in a communications network are to be moved to a second NF in the communications network, in which the first NF and the second NF are in a same NF set.
  • CN core network
  • NF network function
  • an NF instance can be deployed such that several NF instances are present within an NF Set to provide distribution, redundancy, and scalability together as a Set of NF instances. Therefore, in situations associated with failures, load balancing, and load re-balancing, for example, it is possible to replace a given NF with an alternative NF within the same NF Set. For example, one SMF in a given SMF Set can take over a PDU Session which was handled by another SMF in the same SMF set.
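As a minimal sketch of the replacement behaviour described above (an NF within an NF Set taking over sessions from another member after a failure or for load re-balancing), the following Python fragment reassigns sessions from one member of a set to an alternative member. The in-memory data model and function names are assumptions for illustration, not the mechanism defined by the disclosure.

```python
# Minimal sketch: replacing a failed (or drained) NF with an alternative NF
# from the same NF Set. Names and the in-memory data model are illustrative.
from typing import Dict, List


def pick_alternative(nf_set_members: List[str], excluded: str) -> str:
    """Return any member of the set other than the excluded (e.g. failed) one."""
    candidates = [m for m in nf_set_members if m != excluded]
    if not candidates:
        raise RuntimeError("no alternative NF available in the set")
    return candidates[0]


def take_over_sessions(sessions: Dict[str, str], failed_nf: str, nf_set_members: List[str]) -> None:
    """Re-anchor every session served by failed_nf on an alternative NF in the same set."""
    replacement = pick_alternative(nf_set_members, excluded=failed_nf)
    for session_id, serving_nf in sessions.items():
        if serving_nf == failed_nf:
            sessions[session_id] = replacement


# Example: SMF-1 fails; its PDU Sessions are taken over by another SMF in the same SMF set.
smf_set = ["SMF-1", "SMF-2", "SMF-3"]
pdu_sessions = {"session-a": "SMF-1", "session-b": "SMF-2"}
take_over_sessions(pdu_sessions, failed_nf="SMF-1", nf_set_members=smf_set)
print(pdu_sessions)   # {'session-a': 'SMF-2', 'session-b': 'SMF-2'}
```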
  • FIG. 1 illustrates a non-roaming architecture 10, including a SMF+PGW-C, for such interworking between 5GS and EPC/E-UTRAN systems.
  • PGW Packet Data Network Gateway
  • Figures 2 and 3 illustrate corresponding non-roaming architectures 20, 30 depicting such a situation.
  • the architecture 20 of Figure 2 is more particularly a non-roaming architecture for 3GPP accesses, and includes an SMF+PGW-C.
  • Figure 4 illustrates an architecture reference model 40 showing the separation of the user plane (UP) and the control plane (CP) for non-roaming and roaming scenarios.
  • MME Mobility Management Entity
  • the MME maintains mobility management (MM) context and EPS bearer context information for User Equipment (UEs) in the ECM-IDLE, ECM-CONNECTED and EMM-DEREGISTERED states.
  • MM mobility management
  • UEs User Equipment
  • Table 1 shows the context fields for one UE.
  • a NF e.g., the first or second NF
  • a NF can identify the PDN connections that may be affected by the movement of the connection contexts.
  • One embodiment as seen in Table 1, for example, identifies such PDN connections using the current Access Point Name (APN) in the "APN in Use" field, along with the PDN Type field.
  • APN Access Point Name
  • IP Addresses involved in the replacement process are maintained in the "PDN GW Address in Use" and "PDN GW for S5/S8" fields of Table 1. More particularly, the "PDN GW Address in Use" is the address that will be replaced and is carried in the Fully Qualified Tunnel Endpoint Identifier (F-TEID).
  • F-TEID Fully Qualified Tunnel Endpoint Identifier
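The following Python sketch illustrates how a peer node's per-connection context might carry the "APN in Use", PDN Type, and "PDN GW Address in Use" values used above to identify affected PDN connections. The field and function names are illustrative only; they follow the table fields quoted in the text rather than any actual implementation.

```python
# Illustrative per-PDN-connection context held at a peer node (e.g. an MME or SGW).
# Only the fields referenced above are shown; names are assumptions.
from dataclasses import dataclass


@dataclass
class PdnConnectionContext:
    imsi: str
    apn_in_use: str              # "APN in Use" field
    pdn_type: str                # e.g. "IPv4", "IPv6", "IPv4v6"
    pgw_cp_address_in_use: str   # "PDN GW Address in Use" (control plane), carried in the F-TEID
    pgw_cp_teid: int             # TEID part of the control-plane F-TEID


def is_affected(ctx: PdnConnectionContext, old_pgw_ip: str, apn: str, pdn_type: str) -> bool:
    """A connection is affected if it is anchored at the old PGW IP and matches the criteria."""
    return (ctx.pgw_cp_address_in_use == old_pgw_ip
            and ctx.apn_in_use == apn
            and ctx.pdn_type == pdn_type)
```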
  • the MME Emergency Configuration Data is used instead of UE subscription data received from the HSS.
  • the MME Emergency Configuration Data is shown in Table 2 below.
  • Table 2 shows the MME Emergency Configuration Data. For all Restricted Local Operator Service (RLOS) PDN connections that are established by an MME on UE request, the MME RLOS Configuration Data is used instead of the UE subscription data received from the HSS.
  • Table 3 identifies the MME RLOS Configuration Data.
  • the Serving Gateway maintains the following EPS bearer context information for UEs.
  • SGW Serving Gateway
  • Table 4 illustrates the context fields for one UE. However, for emergency attached or RLOS attached UEs, which are not authenticated, the International Mobile Equipment Identity (IMEI) is stored in the context.
  • IMEI International Mobile Equipment Identity
  • the PDN GW maintains the following EPS bearer context information for UEs.
  • Table 5 shows the context fields for one UE. Additionally, for emergency attached or RLOS attached UEs which are not authenticated, the IMEI is stored in the context.
  • the PFCP Session Establishment Request is sent over the Sxa, Sxb, Sxc and N4 interfaces by the CP function to establish a new PFCP session context in the UP function.
  • Table 6 illustrates the Information Elements of the PFCP Session Establishment Request.
  • the PFCP Session Modification Request is used over the Sxa, Sxb, Sxc and N4 interfaces by the CP function to request the UP function to modify the PFCP session. Table 7 illustrates this.
  • partial failure handling is an optional feature implemented by the MME, SGW, ePDG, TWAN and PGW, SGW-C, PGW-C, SGW-U, and PGW-U for split SGW and PGW.
  • SGW Serving Gateway
  • ePDG Evolved Packet Data Gateway
  • TWAN Trusted WLAN Access Network
  • PGW Packet Data Network Gateway
  • a partial failure handling feature may be used when a hardware or software failure affects a significant number of PDN connections, even though a significant number of PDN connections may be unaffected. This feature may also be invoked in cases of a total failure of a remote node (e.g., MME or PGW) to cleanup hanging PDN connections associated with the failed node.
  • node refers to an entity that functions as an MME, PGW, ePDG, TWAN, or SGW as defined in an SAE network.
  • a PDN Connection Set Identifier (CSID) identifies a set of PDN connections within a node that may belong to an arbitrary number of UEs.
  • a CSID is an opaque parameter local to a node. Each node that supports the feature maintains a local mapping of CSID to its internal resources. When one or more of those resources fail, the corresponding one or more fully qualified CSIDs are signaled to the peer nodes.
  • the fully qualified CSID (FQ-CSID) is the combination of the node identity and the CSID assigned by the node which together globally identifies a set of PDN connections.
  • the node identifier in the FQ-CSID is required since two different nodes may use the same CSID value. Thus, a partial fault in one node should not cause unrelated PDN connections to be removed accidentally.
  • the node identifier is globally unique across all 3GPP EPS networks, and is formatted as specified in 3GPP TS 29.274 v16.5.0 (2020-09), which is incorporated herein by reference in its entirety.
  • peer For the purposes of partial fault handling the term "peer” is used herein as follows: For a particular PDN connection, two nodes are “peers” if both nodes are used for that PDN connection. For a PDN Connection Set, the nodes are peers if they have at least one PDN connection in the PDN Connection Set where both nodes are used for that PDN connection. In particular, a PGW and an MME are “peers” for the purposes of partial fault handling.
  • An FQ-CSID is established in a node and stored in peer nodes in the PDN connection at the time of PDN connection establishment, or during a node relocation, and used later during partial failure handling in messages defined in 3GPP TS 29.274 and 3GPP TS 29.275 V16.0.0 (2020-07).
  • Each node that supports the feature including the MME, SGW, ePDG, TWAN and the PGW, maintains the FQ-CSID provided by every other peer node for a PDN connection.
  • the FQ-CSIDs stored by PDN connection are later used to find the matching PDN connections when a FQ-CSID is received from a node reporting a partial fault for that FQ-CSID.
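The FQ-CSID bookkeeping described above can be pictured with the small Python sketch below. The tuple representation of an FQ-CSID and the dictionary layout are assumptions for illustration; the actual FQ-CSID format is defined in 3GPP TS 29.274.

```python
# Illustrative sketch of FQ-CSID bookkeeping for partial failure handling.
from collections import defaultdict
from typing import Dict, Set, Tuple

# An FQ-CSID combines a globally unique node identity with a node-local CSID.
FqCsid = Tuple[str, int]          # (node_identity, csid)

# Each PDN connection stores the FQ-CSIDs provided by its peer nodes.
pdn_to_fqcsids: Dict[str, Set[FqCsid]] = {
    "pdn-1": {("pgw-1.example", 7), ("mme-1.example", 3)},
    "pdn-2": {("pgw-1.example", 8), ("mme-1.example", 3)},
}

# Reverse index: FQ-CSID -> PDN connections, consulted when a partial fault is reported.
fqcsid_to_pdns: Dict[FqCsid, Set[str]] = defaultdict(set)
for pdn, fqcsids in pdn_to_fqcsids.items():
    for fqcsid in fqcsids:
        fqcsid_to_pdns[fqcsid].add(pdn)


def connections_for_partial_fault(reported: FqCsid) -> Set[str]:
    """Return the PDN connections that match the FQ-CSID signalled by a peer."""
    return fqcsid_to_pdns.get(reported, set())


print(connections_for_partial_fault(("pgw-1.example", 7)))   # {'pdn-1'}
```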
  • embodiments of the present disclosure propose that an alternative PGW in the same PGW/SMF set send Update Bearer Request messages with a new PGW F-TEID, so that the PDN connection can then be anchored in the alternative PGW.
  • Embodiments of the present disclosure therefore rely on the MME to pick up an alternative PGW in the set. Doing so means that the present embodiments rely on the MME to perform load sharing among all the PGWs in the set.
  • FIG. 5 is a functional block diagram illustrating some of the components of an EPS network 50 configured according to the present embodiments.
  • network 50 comprises a plurality of PGW functions 52a, 52b, 52c, and 52d (collectively, PGW functions 52), a plurality of SGW functions 54a, 54b, 54c, and 54d (collectively, SGW functions 54), and a plurality of MME functions 56a, 56b, 56c, and 56d (collectively, MME functions 56).
  • the PGW functions 52 are further comprised in a same PGW/SMF set 58, with a set ID of 100, which is managed by a centralized logic function, such as Orchestration Function (OF) 60.
  • OF Orchestration Function
  • a PGW set implementation such as PGW set 58 of Figure 5, may be configured to distribute affected PDN connections to different PGWs 52 in the PGW set 58.
  • These PDN connections may be evenly distributed among the other PGWs (i.e., each of PGW 52b, 52c, and 52d would receive the same number of PDN connections from the failed PGW 52a), or they may be distributed unevenly. For example, upon failure of a first PGW 52a (e.g., PGW 1), 60% of the affected PDN connections could be moved to a second PGW 52b (e.g., PGW 2), 20% of the affected PDN connections could be moved to a third PGW 52c (e.g., PGW 3), and the remaining 20% of the connections could be moved to a fourth PGW 52d (e.g., PGW 4), as illustrated in the sketch below.
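The 60/20/20 example above can be expressed as a simple weighted split of the affected connections. The following Python sketch is illustrative only and does not prescribe any particular distribution policy; the PGW names and the rounding behaviour are assumptions.

```python
# Illustrative weighted distribution of the PDN connections affected by the
# failure of PGW 52a across the remaining PGWs of the set (60% / 20% / 20%).
from typing import Dict, List


def distribute(connections: List[str], shares: Dict[str, float]) -> Dict[str, List[str]]:
    """Split the affected connections among target PGWs according to fractional shares."""
    assert abs(sum(shares.values()) - 1.0) < 1e-9, "shares must sum to 1"
    result: Dict[str, List[str]] = {pgw: [] for pgw in shares}
    targets = list(shares)
    start = 0
    for i, pgw in enumerate(targets):
        # Give the last target whatever remains, to avoid rounding gaps.
        end = len(connections) if i == len(targets) - 1 else start + round(shares[pgw] * len(connections))
        result[pgw] = connections[start:end]
        start = end
    return result


affected = [f"pdn-{n}" for n in range(10)]                 # connections anchored at the failed PGW
plan = distribute(affected, {"PGW-52b": 0.6, "PGW-52c": 0.2, "PGW-52d": 0.2})
print({pgw: len(conns) for pgw, conns in plan.items()})    # {'PGW-52b': 6, 'PGW-52c': 2, 'PGW-52d': 2}
```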
  • PGWs 52b, 52c, and 52d will signal the SGWs 54 and MME 56a to inform them of the movement.
  • the PGWs 52b, 52c, and 52d will send a message to SGWs 54 and MMEs 56 informing them that the IP address of PGW 52a in the F-TEID of the PDN connections will be changed, respectively, to the IP address(es) of PGWs 52b, 52c, and 52d.
  • the PDN connections being moved can match a certain criteria (e.g., PDN connections served by a given Access Point Name (APN) and are of a particular PDN Type). Further, embodiments of the present disclosure allow for the criteria for each PDN connection being moved to be different. This allows the present disclosure to distribute some or all of the PDN connections served by a failed PGW 52a to one or more other PGWs 52b, 52c, 52d.
  • APN Access Point Name
  • the present embodiments provide a more efficient signaling mechanism than is conventionally used. That is, the signaling of the present embodiments provides notifications about an address change (e.g., a PGW IP Address change of PGW 52a), thereby enabling the movement or "reassignment" of a possibly very large plurality of PDN connections (i.e., those sharing the same PGW IP address of PGW 52a but with different TEIDs thereby differentiating between the PDN connections) from one PGW identified by the IP Address of PGW 52a to another PGW in the same PGW set 58 (e.g., those identified by the PGW IP Address(es) of PGW 52b and/or PGW 52c and/or PGW 52d).
  • Because the PGW(s) to which the affected session(s) is/are being moved know a priori which sessions they will take, that PGW can, responsive to failover, pre-fetch the information associated with those known sessions currently anchored in the failed PGW (e.g., PGW 52a) from a shared distributed database. This helps to ensure that the PGW(s) taking over the affected sessions will be able to access the sessions locally, and as such, reduces or eliminates any future latency that may occur when the PGW(s) taking over the affected sessions subsequently attempt to access the sessions after they were moved.
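The pre-fetching described above is sketched below in Python. The SessionStore interface is a placeholder invented for this sketch; the disclosure only assumes the existence of a shared distributed database holding the session contexts.

```python
# Illustrative sketch of pre-fetching, at failover, the session data for the
# connections a take-over PGW already knows it will receive.
from typing import Dict, Iterable


class SessionStore:
    """Placeholder for the shared distributed database holding session contexts."""

    def __init__(self, records: Dict[str, dict]):
        self._records = records

    def fetch(self, session_id: str) -> dict:
        return self._records[session_id]


def prefetch_sessions(store: SessionStore, planned_session_ids: Iterable[str]) -> Dict[str, dict]:
    """Copy into a local cache the sessions this PGW is planned to take over."""
    local_cache: Dict[str, dict] = {}
    for session_id in planned_session_ids:
        local_cache[session_id] = store.fetch(session_id)
    return local_cache


shared_db = SessionStore({"pdn-1": {"apn": "internet", "pdn_type": "IPv4"}})
cache = prefetch_sessions(shared_db, planned_session_ids=["pdn-1"])
print(cache["pdn-1"]["apn"])   # internet
```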
  • Figure 6 is a signaling diagram 70 illustrating the mechanism by which the PGWs 52 notify the SGWs 54 and MMEs 56 to change the PGW IP Address (in the F-TEID, which is used on control plane for a PDN connection) for a large number of PDN connections matching certain criteria.
  • the embodiment of Figure 6 is implemented in EPS network 50 over a GTPv2 interface. It should be noted that Figure 6 illustrates only some of the PGW, SGW, and MME components seen in Figure 5; however, those of ordinary skill in the art will readily appreciate that this is for ease of discussion only.
  • MME 56a established a PDN connection for APN1 (not shown) with SGW 54a and PGW 52a (box 72).
  • the IP address of PGW 52a is, for example, IP-1.
  • the PDN connection is then created and associated with MME 56a, with SGW 54a and PGW 52a sharing the same IP address IP-1 (box 74).
  • MME 56a establishes a PDN connection for APN2 (not shown) with SGW 54b and PGW 52b (box 76). This PDN connection is then created and associated with MME 56a, with SGW 54b and PGW 52a sharing the same IP address IP-1 (box 78).
  • PGW 52a determines the need for the address change and notifies the peer network node of the IP address change by sending an UPDATE Bearer Request message, or a new address change message, to SGW 54a (line 80).
  • the criterion is used by the peer network node to further identify which PDN connections are affected by replacing the IP Address of PGW 52a.
  • PGW 52c also determines the need for an address change and sends a notification of the IP address change to peer network node SGW 54a (box 82).
  • PGW 52c may send an UPDATE Bearer Request message, or a new address change message.
  • the APN associated with a given PGW is not the only criterion usable by the peer network nodes. Rather, such criteria may include any PDN connection data stored in the SGWs 54 and/or the MMEs 56 that is usable to identify a PDN connection.
  • the illustrated embodiment uses an APN currently in use by the PGW.
  • other connection data such as the data described in sections 5.7.2 and 5.7.3 of 3GPP TS 23.401 V16.8.0 (2020-09), which is incorporated herein by reference in its entirety, is also usable for this purpose.
  • the criteria may include, in at least one embodiment, the FQ-CSIDs which were allocated by the failed PGW 52a, to identify a subset of PDN connections which were served by the failed PGW 52a.
  • SGW 54a updates all PDN connections with the PGW IP address of IP-1 and APN1/APN2, to use the IP addresses of PGWs 52b and 52c, respectively (box 84), and sends a notification of the IP address change to MME 56a (box 86).
  • MME 56a then updates all PDN connections with the PGW IP address of IP-1 and APN1/APN2, to use the IP addresses of PGWs 52b and 52c, respectively (box 88).
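A possible peer-node reaction to this notification is sketched below in Python: the SGW or MME matches its stored PDN connections against the old PGW IP address and the supplied criteria, then replaces the control-plane PGW address with the new one. The AddressChange structure is an illustrative stand-in for the IE carried in the GTPv2 message; it is not the encoding defined in 3GPP TS 29.274.

```python
# Illustrative handling, at an SGW or MME, of a PGW IP address change notification.
from dataclasses import dataclass
from typing import Dict, Optional


@dataclass
class AddressChange:
    old_pgw_ip: str            # IP address of the PGW being replaced (e.g. PGW 52a)
    new_pgw_ip: str            # IP address of the replacement PGW in the same set
    apn: Optional[str] = None  # optional criteria narrowing the affected connections
    pdn_type: Optional[str] = None


def apply_address_change(pdn_connections: Dict[str, dict], change: AddressChange) -> int:
    """Replace the control-plane PGW address of every matching PDN connection."""
    updated = 0
    for conn in pdn_connections.values():
        if conn["pgw_cp_ip"] != change.old_pgw_ip:
            continue
        if change.apn is not None and conn["apn_in_use"] != change.apn:
            continue
        if change.pdn_type is not None and conn["pdn_type"] != change.pdn_type:
            continue
        conn["pgw_cp_ip"] = change.new_pgw_ip
        updated += 1
    return updated


connections = {
    "pdn-1": {"pgw_cp_ip": "IP-1", "apn_in_use": "APN1", "pdn_type": "IPv4"},
    "pdn-2": {"pgw_cp_ip": "IP-1", "apn_in_use": "APN2", "pdn_type": "IPv4"},
}
n = apply_address_change(connections, AddressChange("IP-1", "IP-2", apn="APN1"))
print(n, connections["pdn-1"]["pgw_cp_ip"])   # 1 IP-2
```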
  • the present embodiments are not limited to the use of a PGW as the NF. Rather, the mechanism provided by the present disclosure may also be applied over Sx and N4 interfaces to enable a Control Plane (CP) function to notify a User Plane (UP) function to change the IP address (i.e., the address stored in the CP F-SEID) for a very large number of PFCP sessions.
  • CP Control Plane
  • UP User Plane
  • the NFs may also be multiple CP functions in a same CP function set.
  • a CP function sends a new message (e.g., a Notify CP Address Change Request), or sends an existing message (e.g. PFCP Association Update Request message or Heartbeat request message) to a peer network node.
  • the message includes one or more new CP function IP Address Change IE(s) that include, inter alia, the old CP function IP Address to be replaced and a new CP function IP Address to replace the old CP function IP Address.
  • the CP function IP Address Change IE carries the criteria to further identify the applicable PDN connections.
  • the criteria may include, but is not limited to, any PFCP session data stored in UPF that can be used to identify a PFCP session.
  • the criteria may identify a PGW-C, an FQ-CSID, an S-NSSAI, and an APN/DNN.
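The PFCP-side variant described above is sketched below in Python: the CP function IP Address Change information carries the old and new CP addresses plus criteria, and the UP function updates the stored CP F-SEID address of matching PFCP sessions. The structures and field names are assumptions for illustration; the real IEs are PFCP-encoded.

```python
# Illustrative sketch of the CP function IP Address Change information and of a
# UP-function-side update of stored CP F-SEID addresses for matching PFCP sessions.
from dataclasses import dataclass
from typing import Dict, Optional


@dataclass
class CpAddressChange:
    old_cp_ip: str                    # CP function IP address to be replaced
    new_cp_ip: str                    # replacement CP function IP address
    fq_csid: Optional[str] = None     # optional criteria identifying applicable sessions
    s_nssai: Optional[str] = None
    apn_dnn: Optional[str] = None


def update_cp_fseids(pfcp_sessions: Dict[str, dict], change: CpAddressChange) -> int:
    """At the UP function, point matching PFCP sessions at the new CP function address."""
    updated = 0
    for session in pfcp_sessions.values():
        if session["cp_fseid_ip"] != change.old_cp_ip:
            continue
        if change.apn_dnn is not None and session.get("apn_dnn") != change.apn_dnn:
            continue
        if change.fq_csid is not None and session.get("fq_csid") != change.fq_csid:
            continue
        session["cp_fseid_ip"] = change.new_cp_ip
        updated += 1
    return updated
```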
  • the techniques described herein provide advantages and benefits that conventional solutions cannot or do not provide.
  • the present embodiments configure the PGWs in the same PGW set to redistribute very large numbers of PDN connections, which may be affected by a failure of a PGW, to available PGWs in the same set.
  • the present embodiments configure the PGW taking over the PDN connections to pre-fetch the session information associated with the PDN connections presently anchored by a PGW from a shared database.
  • Figure 7 illustrates a method 90 for notifying peer network nodes in a core network that a group of one or more connection contexts served by a first NF are to be moved to a second NF.
  • the method 90 is implemented by one of the first and second NFs.
  • method 90 calls for the first or second NF determining one or more connection contexts to be moved from the first NF to the second NF (box 94), and then sending a request message to at least one peer network node in the core network indicating that the one or more connection contexts are to be moved from the first NF to the second NF.
  • the first NF and the second NF are in a same NF set, and the request message comprises identifying information identifying the one or more connection contexts to be moved from the first NF to the second NF.
  • the first NF and the second NF are interchangeable NF instances of the same service type supporting the same service and the same one or more network slices.
  • the first NF and the second NF are respective Packet Data Network Gateways (PGWs) belonging to the same NF set.
  • PGWs Packet Data Network Gateways
  • the first NF and the second NF are respective combined PGW and Session Management Functions (SMFs) belonging to the same NF set.
  • SMFs Session Management Functions
  • the PGWs serve as Control Plane (CP) functions.
  • the request message comprises one of a message requesting an address change, an Update Bearer Request message, and an Echo Request message.
  • the identifying information comprises a PGW IP Address Change Information Element (IE).
  • IE PGW IP Address Change Information Element
  • the PGW IP Address Change IE comprises an IP address identifying the first NF, an IP address identifying the second NF that is to replace the first NF, and one or more criterion for identifying the one or more Packet Data Network (PDN) connections that are to be moved from the first NF to the second NF.
  • PDN Packet Data Network
  • the IP addresses identifying the first and second NFs are comprised in respective Fully Qualified Tunnel Endpoint Identifiers (F-TEID).
  • F-TEID Fully Qualified Tunnel Endpoint Identifier
  • the IP address identifying the first NF is used for control plane signaling for a connection context.
  • the one or more criterion comprises information stored in the peer network nodes identifying at least one PDN connection.
  • the one or more criterion comprises information identifying the at least one PDN connection by an Access Point Name (APN) and a PDN Type.
  • APN Access Point Name
  • the APN is an APN currently in use.
  • the one or more criterion comprises a Fully Qualified Connection Set Identifier (FQ-CSID) allocated by the first NF.
  • FQ-CSID Fully Qualified Connection Set Identifier
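On the sending side, the embodiment above describes a request message (e.g., an address change message, an Update Bearer Request, or an Echo Request) carrying a PGW IP Address Change IE with the old address, the new address, and the matching criteria. The Python sketch below illustrates this construction; the dictionary representation and function names are assumptions, since the real message is GTPv2-encoded.

```python
# Illustrative construction of a request message carrying a PGW IP Address Change IE
# with the old PGW address, the replacement PGW address, and matching criteria
# (APN and PDN Type, and/or an FQ-CSID allocated by the first NF).
from typing import List, Optional


def build_pgw_ip_address_change_ie(old_pgw_ip: str,
                                   new_pgw_ip: str,
                                   apn: Optional[str] = None,
                                   pdn_type: Optional[str] = None,
                                   fq_csid: Optional[str] = None) -> dict:
    criteria = {}
    if apn is not None:
        criteria["apn_in_use"] = apn
    if pdn_type is not None:
        criteria["pdn_type"] = pdn_type
    if fq_csid is not None:
        criteria["fq_csid"] = fq_csid
    return {"old_pgw_ip": old_pgw_ip, "new_pgw_ip": new_pgw_ip, "criteria": criteria}


def build_request(message_type: str, ies: List[dict]) -> dict:
    # message_type is one of the options listed above, e.g. "UpdateBearerRequest".
    return {"message_type": message_type, "pgw_ip_address_change": ies}


request = build_request("UpdateBearerRequest",
                        [build_pgw_ip_address_change_ie("IP-1", "IP-2", apn="APN1", pdn_type="IPv4")])
print(request["pgw_ip_address_change"][0]["criteria"])   # {'apn_in_use': 'APN1', 'pdn_type': 'IPv4'}
```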
  • the first and second NFs are CP functions, and wherein the peer network node in the core network is a User Plane (UP) NF.
  • UP User Plane
  • the IP Addresses identifying the first and second CP functions comprise respective CP Fully Qualified Session Endpoint Identifiers (F-SEID).
  • the request message comprises one of a message requesting an address change, a Packet Forwarding Control Protocol (PFCP) Association Update Request message, and a Heartbeat Request message.
  • PFCP Packet Forwarding Control Protocol
  • the identifying information comprises a CP function IP Address Change Information Element (IE).
  • IE IP Address Change Information Element
  • the CP function IP Address Change IE comprises an IP address identifying the first CP function, an IP address identifying the second CP function that is to replace the first CP function, and one or more criterion for identifying one or more PFCP sessions that are to be moved from the first CP function to the second CP function.
  • the one or more criterion comprises PFCP session data stored at a User Plane (UP) function, and wherein the PFCP session data identifies at least one PFCP session.
  • UP User Plane
  • the one or more connection contexts to be moved from the first NF to the second NF are associated with respective PDN connections, or with respective PFCP sessions.
  • the at least one peer network node in the core network is a Mobility Management Entity (MME).
  • MME Mobility Management Entity
  • the at least one peer network node in the core network is a Serving Gateway (SGW).
  • SGW Serving Gateway
  • the at least one peer network node in the core network is a Packet Data Network Gateway-User Plane function (PGW-U).
  • PGW-U Packet Data Network Gateway-User Plane function
  • the at least one peer network node in the core network is a User Plane Function (UPF).
  • UPF User Plane Function
  • the request message is sent responsive to determining that an event associated with the first NF has occurred.
  • Figure 8 illustrates a method 100 for notifying peer network nodes in a core network that a group of one or more connection contexts served by a first NF are to be moved to a second NF.
  • the method 100 is implemented by a peer network node in the core network.
  • method 100 calls for the peer network node receiving a request message from one of the first NF and the second NF indicating that the group of one or more connection contexts are to be moved from the first NF to the second NF.
  • the first NF and the second NF are in a same NF set, and the request message comprises identifying information identifying the group of one or more connection contexts to be moved from the first NF to the second NF.
  • method 100 calls for the peer network node to send, to the second NF, information for subsequent connection context signaling for the group of one or more connection contexts that were moved from the first NF to the second NF.
  • the first NF and the second NF are respective Packet Data Network Gateways (PGWs) belonging to the same NF set.
  • PGWs Packet Data Network Gateways
  • the identifying information is an address change Information Element (IE) comprising an IP address identifying the first NF, an IP address identifying the second NF that is to replace the first NF, and one or more criterion for identifying one or more Packet Data Network (PDN) connections between the first NF and other NFs in the NF set that are affected by the second NF replacing the first NF.
  • IE address change Information Element
  • the method further calls for the peer network node updating a PDN connection with information indicating that the first NF is being replaced by the second NF.
  • the first and second NFs are CP functions
  • the peer network node in the core network is a User Plane (UP) NF.
  • UP User Plane
  • the address change IE comprises a CP function IP Address Change Information Element (IE) comprising an IP address identifying the first NF, an IP address identifying the second NF that is to replace the first NF, and one or more criterion for identifying one or more Packet Data Network (PDN) connections between the first NF and other NFs in the NF set that are affected by the IP address change.
  • IE IP Address Change Information Element
  • the identifying information identifying the one or more connection contexts to be moved from the first NF to the second NF is an Information Element (IE).
  • IE Information Element
  • an apparatus can perform any of the methods herein described by implementing any functional means, modules, units, or circuitry.
  • the apparatuses comprise respective circuits or circuitry configured to perform the steps shown in the method figures.
  • the circuits or circuitry in this regard may comprise circuits dedicated to performing certain functional processing and/or one or more microprocessors in conjunction with memory.
  • the circuitry may include one or more microprocessor or microcontrollers, as well as other digital hardware, which may include Digital Signal Processors (DSPs), special-purpose digital logic, and the like.
  • DSPs Digital Signal Processors
  • the processing circuitry may be configured to execute program code stored in memory, which may include one or several types of memory such as read-only memory (ROM), random-access memory, cache memory, flash memory devices, optical storage devices, etc.
  • Program code stored in memory may include program instructions for executing one or more telecommunications and/or data communications protocols as well as instructions for carrying out one or more of the techniques described herein, in several embodiments.
  • the memory stores program code that, when executed by the one or more processors, carries out the techniques described herein.
  • any appropriate steps, methods, features, functions, or benefits disclosed herein may be performed through one or more functional units or modules of one or more virtual apparatuses. Each virtual apparatus may comprise a number of these functional units.
  • These functional units may be implemented via processing circuitry, which may include one or more microprocessor or microcontrollers, as well as other digital hardware, which may include digital signal processors (DSPs), special-purpose digital logic, and the like.
  • DSPs digital signal processors
  • the processing circuitry may be configured to execute program code stored in memory, which may include one or several types of memory such as read-only memory (ROM), random-access memory (RAM), cache memory, flash memory devices, optical storage devices, etc.
  • Program code stored in memory includes program instructions for executing one or more telecommunications and/or data communications protocols as well as instructions for carrying out one or more of the techniques described herein.
  • the processing circuitry may be used to cause the respective functional unit to perform corresponding functions according to one or more embodiments of the present disclosure.
  • FIG. 9 is a block diagram illustrating some components of a network node 110 configured according to one embodiment of the present disclosure.
  • network node 110 is a NF configured as a PGW, such as PGW 52a, or as a device (e.g., a computer) executing a CP function.
  • network node 110 comprises processing circuitry 112, memory circuitry 114, and communications circuitry 116.
  • memory circuitry 114 stores a computer program 118 that, when executed by processing circuitry 112, configures network node 110 to implement the methods herein described.
  • the processing circuitry 112 controls the overall operation of network node 110 and processes the data and information it sends and receives to/from other nodes. Such processing includes, but is not limited to determining one or more connection contexts to be moved from a first NF to a second NF, and sending a request message to at least one peer network node in the core network indicating that the one or more connection contexts are to be moved from the first NF to the second NF.
  • the first NF and the second NF are in a same NF set.
  • the request message comprises an information element (IE) identifying the one or more connection contexts to be moved from the first NF to the second NF.
  • the processing circuitry 112 may comprise one or more microprocessors, hardware, firmware, or a combination thereof.
  • the memory circuitry 114 comprises both volatile and non-volatile memory for storing computer program code and data needed by the processing circuitry 112 for operation.
  • Memory circuitry 114 may comprise any tangible, non-transitory computer-readable storage medium for storing data including electronic, magnetic, optical, electromagnetic, or semiconductor data storage.
  • memory circuitry 114 stores a computer program 118 comprising executable instructions that configure the processing circuitry 112 to implement the methods herein described.
  • a computer program 118 in this regard may comprise one or more code modules corresponding to the means or units described above.
  • computer program instructions and configuration information are stored in a non-volatile memory, such as a ROM, erasable programmable read only memory (EPROM) or flash memory.
  • Temporary data generated during operation may be stored in a volatile memory, such as a random access memory (RAM).
  • computer program 118 for configuring the processing circuitry 112 as herein described may be stored in a removable memory, such as a portable compact disc, portable digital video disc, or other removable media.
  • the computer program 118 may also be embodied in a carrier such as an electronic signal, optical signal, radio signal, or computer readable storage medium.
  • the communication circuitry 116 communicatively connects network node 110 to one or more other nodes via a communications network, as is known in the art.
  • communication circuitry 116 communicatively connects network node 110 to one or more peer network nodes, such as an SGW 54 and/or an MME 56.
  • communications circuitry may comprise, for example, an ETHERNET card or other circuitry configured to communicate wirelessly with the peer network nodes.
  • Figure 10 is a functional block diagram illustrating some functions of computer program 118 executed by processing circuitry 112 of a network node 110 according to one embodiment of the present disclosure.
  • computer program 118 comprises a determining module/unit 120, a send module/unit 122, and a receive module/unit 124.
  • the determining module/unit 120 configures network node 110 to determine one or more connection contexts to be moved from the first NF to the second NF, as previously described.
  • the send module/unit 122 configures network node 110 to send a request message to at least one peer network node in the core network indicating that the one or more connection contexts are to be moved from the first NF to the second NF, as previously described.
  • the first NF and the second NF are in a same NF set, and the request message comprises an information element (IE) identifying the one or more connection contexts to be moved from the first NF to the second NF, as previously described.
  • the receive module/unit 124 configures network node 110 to receive information from the peer network node, such as notification of the address changes associated with other NFs in the same NF set, as previously described.
  • Embodiments further include a carrier containing such a computer program 118.
  • This carrier may comprise one of an electronic signal, optical signal, radio signal, or computer readable storage medium.
  • embodiments herein also include a computer program product stored on a non-transitory computer readable (storage or recording) medium and comprising instructions that, when executed by a processor of an apparatus, cause the apparatus to perform as described above.
  • Embodiments further include a computer program product comprising program code portions for performing the steps of any of the embodiments herein when the computer program product is executed by a computing device.
  • This computer program product may be stored on a computer readable recording medium.
  • Figure 11 is a block diagram illustrating some components of a peer network node 130 in a core network configured according to one embodiment of the present disclosure.
  • peer network node 130 is configured as a SGW, such as SGW 54a, an MME, such as MME 56a, a Packet Data Network Gateway-User Plane function (PGW-U), or a User Plane Function (UPF).
  • MME Mobility Management Entity
  • PGW-U Packet Data Network Gateway-User Plane function
  • UPF User Plane Function
  • peer network node 130 comprises processing circuitry 132, memory circuitry 134, and communications circuitry 136.
  • memory circuitry 134 stores a computer program 138 that, when executed by processing circuitry 132, configures peer network node 130 to implement the methods herein described.
  • the processing circuitry 132 controls the overall operation of peer network node 130 and processes the data and information it sends and receives to/from other nodes. Such processing includes, but is not limited to, receiving a request message from one of the first NF and the second NF indicating that the group of one or more connection contexts are to be moved from the first NF to the second NF, and sending, to the second NF, information for subsequent connection context signaling for the group of one or more connection contexts that were moved from the first NF to the second NF.
  • the first NF and the second NF are in a same NF set.
  • the request message comprises an information element (IE) identifying the one or more connection contexts to be moved from the first NF to the second NF.
  • the processing circuitry 132 may comprise one or more microprocessors, hardware, firmware, or a combination thereof.
  • the memory circuitry 134 comprises both volatile and non-volatile memory for storing computer program code and data needed by the processing circuitry 132 for operation.
  • Memory circuitry 134 may comprise any tangible, non-transitory computer-readable storage medium for storing data including electronic, magnetic, optical, electromagnetic, or semiconductor data storage.
  • memory circuitry 134 stores a computer program 138 comprising executable instructions that configure the processing circuitry 132 to implement the methods herein described.
  • a computer program 138 in this regard may comprise one or more code modules corresponding to the means or units described above.
  • computer program instructions and configuration information are stored in a non-volatile memory, such as a ROM, erasable programmable read only memory (EPROM) or flash memory.
  • Temporary data generated during operation may be stored in a volatile memory, such as a random access memory (RAM).
  • computer program 138 for configuring the processing circuitry 132 as herein described may be stored in a removable memory, such as a portable compact disc, portable digital video disc, or other removable media.
  • the computer program 138 may also be embodied in a carrier such as an electronic signal, optical signal, radio signal, or computer readable storage medium.
  • the communication circuitry 136 communicatively connects peer network node 130 to one or more other nodes via a communications network, as is known in the art.
  • communication circuitry 136 communicatively connects peer network node 130 to one or more network nodes 110, such as a PGW 52, for example.
  • communications circuitry may comprise, for example, an ETHERNET card or other circuitry configured to communicate wirelessly with the network nodes.
  • Figure 12 is a functional block diagram illustrating some functions of computer program 138 executed by processing circuitry 132 of a peer network node 130 according to one embodiment of the present disclosure.
  • computer program 138 comprises a receive module/unit 140, a send module/unit 142, and an update unit/module 144.
  • the receive module/unit 140 configures peer network node 130 to receive a request message from one of a first NF and a second NF indicating that a group of one or more connection contexts are to be moved from the first NF to the second NF, as previously described.
  • the send module/unit 142 configures peer network node 130 to send messages to one or more NFs in the same NF set (e.g., PGW 52), as well as an MME 56, indicating the address change, as previously described.
  • the update module/unit 144 configures peer network node 130 to update its information to reflect the address change, as previously described.
  • Embodiments further include a carrier containing such a computer program 138.
  • This carrier may comprise one of an electronic signal, optical signal, radio signal, or computer readable storage medium.
  • embodiments herein also include a computer program product stored on a non-transitory computer readable (storage or recording) medium and comprising instructions that, when executed by a processor of an apparatus, cause the apparatus to perform as described above.
  • Embodiments further include a computer program product comprising program code portions for performing the steps of any of the embodiments herein when the computer program product is executed by a computing device.
  • This computer program product may be stored on a computer readable recording medium.
  • the term unit may have conventional meaning in the field of electronics, electrical devices and/or electronic devices and may include, for example, electrical and/or electronic circuitry, devices, modules, processors, memories, logic, solid state and/or discrete devices, computer programs or instructions for carrying out respective tasks, procedures, computations, outputs, and/or displaying functions, and so on, such as those that are described herein.
  • a method for notifying peer network nodes in a core network that a group of one or more connection contexts served by a first network function (NF) are to be moved to a second NF the method implemented by one of the first and second NFs and comprising: determining one or more connection contexts to be moved from the first NF to the second NF; sending a request message to at least one peer network node in the core network indicating that the one or more connection contexts are to be moved from the first NF to the second NF, wherein the first NF and the second NF are in a same NF set; and wherein the request message comprises identifying information identifying the one or more connection contexts to be moved from the first NF to the second NF.
  • the request message comprises one of: a message requesting an address change; an Update Bearer Request message; and an Echo Request message.
  • identifying information comprises a PGW IP Address Change Information Element (IE).
  • IE PGW IP Address Change Information Element
  • the PGW IP Address Change IE comprises: an IP address identifying the first NF; an IP address identifying the second NF that is to replace the first NF; and one or more criterion for identifying the one or more Packet Data Network (PDN) connections that are to be moved from the first NF to the second NF.
  • PDN Packet Data Network
  • the one or more criterion comprises information identifying the at least one PDN connection by an Access Point Name (APN) and a PDN Type.
  • APN Access Point Name
  • IP Addresses identifying the first and second CP functions comprise respective CP Fully Qualified Session Endpoint Identifiers (F-SEID).
  • the request message comprises one of: a message requesting an address change; a Packet Forwarding Control Protocol (PFCP) Association Update Request message; and a Heartbeat Request message.
  • PFCP Packet Forwarding Control Protocol
  • identifying information comprises a CP function IP Address Change Information Element (IE).
  • IE IP Address Change Information Element
  • the CP function IP Address Change IE comprises: an IP address identifying the first CP function; an IP address identifying the second CP function that is to replace the first CP function; and one or more criterion for identifying one or more PFCP sessions that are to be moved from the first CP function to the second CP function.
  • the one or more criterion comprises a subset of PFCP session data stored at a User Plane (UP) function, which can be used to identify at least one PFCP session.
  • UP User Plane
  • connection contexts to be moved from the first NF to the second NF are: associated with respective PDN connections; or associated with respective PFCP sessions.
  • SGW Serving Gateway
  • PGW-U Packet Data Network Gateway-User Plane function
  • a network node for notifying peer network nodes in a core network that a group of one or more connection contexts served by a first network function (NF) are to be moved to a second NF comprising: communications interface circuitry configured to communicate data packets with peer network nodes in a core network; and processing circuitry configured to: determine one or more connection contexts to be moved from the first NF to the second NF; send a request message to one or more peer network nodes in the core network indicating that the one or more connection contexts are to be moved from the first NF to the second NF, wherein the first NF and the second NF are in a same NF set; and wherein the request message comprises identifying information identifying the one or more connection contexts to be moved from the first NF to the second NF.
  • a non-transitory computer readable medium having computer instructions stored thereon that, when executed by a processing circuit of a network node configured to notify peer network nodes in a core network that a group of one or more connection contexts served by a first network function (NF) are to be moved to a second NF, causes the network node to: determine one or more connection contexts to be moved from the first NF to the second NF; send a request message to one or more peer network nodes in the core network indicating that the one or more connection contexts are to be moved from the first NF to the second NF, wherein the first NF and the second NF are in a same NF set; and wherein the request message comprises identifying information identifying the one or more connection contexts to be moved from the first NF to the second NF.
  • NF network function
  • a computer program product for notifying peer network nodes in a core network that a group of one or more connection contexts served by a first network function (NF) are to be moved to a second NF the computer program product comprising software instructions which, when run on at least one processing circuit in a network node, causes the network node to execute the method according to any one of embodiments 1-26.
  • a method for notifying peer network nodes in a core network that a group of one or more connection contexts served by a first network function (NF) are to be moved to a second NF the method implemented by a peer network node in the core network and comprising: receiving a request message from one of the first NF and the second NF indicating that the group of one or more connection contexts are to be moved from the first NF to the second NF, wherein the first NF and the second NF are in a same NF set, and wherein the request message comprises identifying information identifying the group of one or more connection contexts to be moved from the first NF to the second NF; and sending, to the second NF, information for subsequent connection context signaling for the group of one or more connection contexts that were moved from the first NF to the second NF.
  • NF network function
  • identifying information is an address change Information Element (IE) comprising: an IP address identifying the first NF; an IP address identifying the second NF that is to replace the first NF; and one or more criterion for identifying the one or more Packet Data Network (PDN) connections that are to be moved from the first NF to the second NF.
  • IE address change Information Element
  • PDN Packet Data Network
  • the address change IE comprises a CP function IP Address Change Information Element (IE) comprising: an IP address identifying the first CP function; an IP address identifying the second CP function that is to replace the first CP function; and one or more criterion for identifying one or more PFCP sessions that are to be moved from the first CP function to the second CP function.
  • IE IP Address Change Information Element
  • a peer network node in a core network for notifying peer network nodes in the core network that a group of one or more connection contexts served by a first network function (NF) are to be moved to a second NF, the peer network node comprising: communications interface circuitry configured to communicate data packets with a plurality of Network Functions (NFs) in a same NF set; and processing circuitry configured to: receive a request message from one of a first NF and a second NF indicating that the one or more connection contexts are to be moved from the first NF to the second NF, wherein the first NF and the second NF are in a same NF set, and wherein the request message comprises identifying information identifying the one or more connection contexts to be moved from the first NF to the second NF; and send, to the second NF, information for subsequent connection context signaling for the group of one or more connection contexts that were moved from the first NF to the second NF.
  • NFs Network Functions
  • a non-transitory computer readable medium having computer instructions stored thereon that, when executed by a processing circuit of a peer network node in a core network, causes the peer network node to: receive a request message from one of a first NF and a second NF indicating that the one or more connection contexts are to be moved from the first NF to the second NF, wherein the first NF and the second NF are in a same NF set, and wherein the request message comprises identifying information identifying the one or more connection contexts to be moved from the first NF to the second NF; and send, to the second NF, information for subsequent connection context signaling for the group of one or more connection contexts that were moved from the first NF to the second NF.
  • a computer program product for notifying peer network nodes in the core network that a group of one or more connection contexts served by a first network function (NF) are to be moved to a second NF, the computer program product comprising software instructions which, when run on at least one processing circuit of a peer network node, causes the peer network node to execute the method according to any one of embodiments 32-37.
  • IE Information Element

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

The present disclosure provides a technique for notifying peer network nodes in a core network (CN) that a group of one or more connection contexts served by a first network function (NF) in a communications network are to be moved to a second NF in the communications network, in which the first NF and the second NF are in a same NF set.

Description

NOTIFICATION OF PACKET DATA NETWORK GATEWAY (PGW) IP ADDRESS CHANGE
TECHNICAL FIELD
The present disclosure relates generally to network notification procedures, and more particularly to methods and devices for notifying peer network nodes in a core network that a group of one or more connection contexts served by a first network function (NF) are to be moved to a second NF.
BACKGROUND
The Third Generation Partnership Project (3GPP) has issued Release-16 (Rel-16) of the 5G standards. In Rel-16, 3GPP has broadened the use of the Network Function (NF) Set concept to apply to all types of NFs in the 5G Core Network (CN). By way of example only, multiple Network Functions (NFs) can form an NF Set. Similarly, multiple Session Management Functions (SMFs) can form an SMF Set.
In defining the standards, 3GPP TS 23.501 v16.6.0 (2020-09), which is incorporated herein by reference in its entirety, provides the following list of definitions related to NF services, NF service sets, NFs, and NF sets.
• NF instance: an NF instance is defined as an identifiable instance of the NF.
• NF service: an NF service is defined as functionality exposed by a NF through a service based interface and consumed by other authorized NFs.
• NF service instance: an NF service instance is defined as an identifiable instance of the NF service.
• NF service operation: an NF service operation is defined as being an elementary unit of which a given NF service is composed.
  • NF Service Set: an NF Service Set is defined as a group of interchangeable NF service instances of the same service type within an NF instance. The NF service instances in the same NF Service Set have access to the same context data.
• NF Set: an NF Set is defined as a group of interchangeable NF instances of the same type that support the same services and the same Network Slice(s). The NF instances in the same NF Set may be geographically distributed but have access to the same context data.
SUMMARY
Embodiments of the present disclosure provide a technique for notifying peer network nodes in a core network that a group of one or more connection contexts served by a first network function (NF) are to be moved to a second NF.
In a first aspect, a method is implemented by one of the first and second NFs and comprises determining one or more connection contexts to be moved from the first NF to the second NF, and sending a request message to at least one peer network node in the core network indicating that the one or more connection contexts are to be moved from the first NF to the second NF. The first NF and the second NF are in a same NF set, and the request message comprises an information element (IE) identifying the one or more connection contexts to be moved from the first NF to the second NF. In a second aspect, the present disclosure provides a network node for notifying peer network nodes in a core network that a group of one or more connection contexts served by a first network function (NF) are to be moved to a second NF. In this embodiment, the network node comprises communications interface circuitry configured to communicate data packets with peer network nodes in a core network, and processing circuitry configured to determine one or more connection contexts to be moved from the first NF to the second NF, and send a request message to one or more peer network nodes in the core network indicating that the one or more connection contexts are to be moved from the first NF to the second NF. The first NF and the second NF are in a same NF set, and the request message comprises an information element (IE) identifying the one or more connection contexts to be moved from the first NF to the second NF.
In a third aspect, the present disclosure provides a non-transitory computer readable medium having computer instructions stored thereon that, when executed by a processing circuit of a network node configured to notify peer network nodes in a core network that a group of one or more connection contexts served by a first network function (NF) are to be moved to a second NF, causes the network node to determine one or more connection contexts to be moved from the first NF to the second NF, and send a request message to one or more peer network nodes in the core network indicating that the one or more connection contexts are to be moved from the first NF to the second NF. In this aspect, the first NF and the second NF are in a same NF set, and the request message comprises an information element (IE) identifying the one or more connection contexts to be moved from the first NF to the second NF.
In a fourth aspect, the present disclosure provides a method for notifying peer network nodes in a core network that a group of one or more connection contexts served by a first network function (NF) are to be moved to a second NF. In this embodiment, the method is implemented by a peer network node in the core network and comprises receiving a request message from one of the first NF and the second NF indicating that the group of one or more connection contexts are to be moved from the first NF to the second NF, and sending, to the second NF, information for subsequent connection context signaling for the group of one or more connection contexts that were moved from the first NF to the second NF. The first NF and the second NF are in the same NF set, and the request message comprises an information element (IE) identifying the group of one or more connection contexts to be moved from the first NF to the second NF.
In a fifth aspect, the present disclosure provides a peer network node in a core network that is notified that a group of one or more connection contexts served by a first network function (NF) are to be moved to a second NF. In this embodiment, the peer network node comprises communications interface circuitry configured to communicate data packets with a plurality of Network Functions (NFs) in a same NF set, and processing circuitry configured to receive a request message from one of a first NF and a second NF indicating that the one or more connection contexts are to be moved from the first NF to the second NF, and send, to the second NF, information for subsequent connection context signaling for the group of one or more connection contexts that were moved from the first NF to the second NF. The first NF and the second NF are in a same NF set, and the request message comprises an information element (IE) identifying the one or more connection contexts to be moved from the first NF to the second NF. In a sixth aspect, the present disclosure provides a non-transitory computer readable medium having computer instructions stored thereon that, when executed by a processing circuit of a peer network node in a core network, causes the peer network node to receive a request message from one of a first NF and a second NF indicating that the one or more connection contexts are to be moved from the first NF to the second NF, and send, to the second NF, information for subsequent connection context signaling for the group of one or more connection contexts that were moved from the first NF to the second NF. In this aspect, the first NF and the second NF are in a same NF set, and the request message comprises an information element (IE) identifying the one or more connection contexts to be moved from the first NF to the second NF.
BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1 illustrates a non-roaming architecture, including a SMF+PGW-C, for interworking between 5GS and EPC/E-UTRAN systems.
Figure 2 illustrates a non-roaming architecture for 3GPP accesses.
Figure 3 illustrates an architecture in which the user plane (UP) and the control plane (CP) are separate.
Figure 4 illustrates an architecture reference model showing the separation of the user plane (UP) and the control plane (CP) for non-roaming and roaming scenarios.
Figure 5 is a functional block diagram illustrating some of the components of an EPS network configured according to the present embodiments.
Figure 6 is a signaling diagram illustrating a technique by which a NF notifies a peer network node in a core network to change the IP Address for a large number of connections matching certain criteria, according to one embodiment.
Figure 7 illustrates a method for notifying peer network nodes in a core network that a group of one or more connection contexts served by a first NF are to be moved to a second NF, according to one embodiment.
Figure 8 illustrates a method for notifying peer network nodes in a core network that a group of one or more connection contexts served by a first NF are to be moved to a second NF, according to one embodiment.
Figure 9 is a block diagram illustrating some components of a network node configured according to one embodiment.
Figure 10 is a functional block diagram illustrating some functions of a computer program executed by processing circuitry of a network node configured according to one embodiment of the present disclosure.
Figure 11 is a block diagram illustrating some components of a peer network node in a core network configured according to one embodiment of the present disclosure.
Figure 12 is a functional block diagram illustrating some functions of a computer program executed by processing circuitry of a peer network node according to one embodiment of the present disclosure.
DETAILED DESCRIPTION
Embodiments of the present disclosure provide techniques for notifying peer network nodes in a core network (CN) that a group of one or more connection contexts served by a first network function (NF) in a communications network are to be moved to a second NF in the communications network, in which the first NF and the second NF are in a same NF set.
As specified in section 5.21.3.1 of TS 23.501, an NF instance can be deployed such that several NF instances are present within an NF Set to provide distribution, redundancy, and scalability together as a Set of NF instances. Therefore, in situations associated with failures, load balancing, and load re-balancing, for example, it is possible to replace a given NF with an alternative NF within the same NF Set. For example, one SMF in a given SMF Set can take over a PDU Session which was handled by another SMF in the same SMF set.
Figure 1
To support mobility between 4G systems and 5G systems, a combined PGW and SMF (i.e., SMF+PGW-C) is required. Figure 1 illustrates a non-roaming architecture 10, including a SMF+PGW-C, for such interworking between 5GS and EPC/E-UTRAN systems.
Without the "NF Set" concept (e.g., SMF Sets for the Evolved Packet System (EPS)), PDN connections which were handled by a first Packet Data Network Gateway (PGW) cannot be retained when the first PGW experiences a failure, for example, without the first PGW being or having been restarted. In such cases, the MME has to request that the UEs re-establish the affected PDN connections.
Figures 2-3
Figures 2 and 3 illustrate corresponding non-roaming architectures 20, 30 depicting such a situation. The architecture 20 of Figure 2 is more particularly a non-roaming architecture for 3GPP accesses, and includes an SMF+PGW-C.
Figure 4
Figure 4 illustrates an architecture reference model 40 showing the separation of the user plane (UP) and the control plane (CP) for non-roaming and roaming scenarios. According to the present embodiments, failures at the PGW-C are addressed. This is because the UP path (i.e., the path from the Operator's IP services 42 over the SGi interface, the PGW-U, the SGW-U, and finally to an eNB via S1-U) may still be fine.
Mobility Management Entity (MME)
The MME maintains mobility management (MM) context and EPS bearer context information for User Equipment (UEs) in the ECM-IDLE, ECM-CONNECTED and EMM-DEREGISTERED states. The following table (i.e., Table 1) shows the context fields for one UE.
Table 1: MME MM and EPS bearer Contexts
According to the present disclosure, there are a number of ways in which a NF (e.g., the first or second NF) can identify the PDN connections that may be affected by the movement of the connection contexts. One embodiment, as seen in Table 1, for example, identifies such PDN connections using the current Access Point Name (APN) in the "APN in Use" field, along with the PDN Type field. Additionally, the IP addresses involved in the replacement process are maintained in the "PDN GW Address in Use" and "PDN GW for S5/S8" fields of Table 1. More particularly, the "PDN GW Address in Use" is the address that will be replaced and is carried in the Fully Qualified Tunnel Endpoint Identifier (F-TEID).
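By way of a simplified, non-normative illustration only, the selection of affected PDN connections from such stored context data might look like the following sketch (the field names, which loosely mirror the "APN in Use", "PDN Type" and "PDN GW Address in Use" fields of Table 1, and the helper function itself are hypothetical and do not reproduce any standardized data structure):
    # Hypothetical sketch: selecting the PDN connections affected by a PGW IP
    # address change, using context fields analogous to "APN in Use",
    # "PDN Type" and "PDN GW Address in Use".  Illustrative only.
    from dataclasses import dataclass

    @dataclass
    class PdnConnectionContext:
        apn_in_use: str          # APN currently in use for the PDN connection
        pdn_type: str            # e.g., "IPv4", "IPv6" or "IPv4v6"
        pgw_address_in_use: str  # control-plane PGW IP address (carried in the F-TEID)
        teid: int                # tunnel endpoint identifier of the connection

    def affected_connections(contexts, old_pgw_ip, apn=None, pdn_type=None):
        """Return the contexts anchored at old_pgw_ip that also match the
        optional APN / PDN Type criteria."""
        matches = []
        for ctx in contexts:
            if ctx.pgw_address_in_use != old_pgw_ip:
                continue
            if apn is not None and ctx.apn_in_use != apn:
                continue
            if pdn_type is not None and ctx.pdn_type != pdn_type:
                continue
            matches.append(ctx)
        return matches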
For all emergency bearer services that are established by an MME on UE request, the MME Emergency Configuration Data is used instead of UE subscription data received from the HSS. The MME Emergency Configuration Data is shown in Table 2 below.
Table 2: MME Emergency Configuration Data
For all Restricted Local Operator Service (RLOS) PDN connections that are established by an MME on UE request, the MME RLOS Configuration Data is used instead of the UE subscription data received from HSS. Table 3 identifies the MME RLOS Configuration Data.
Table 3: MME RLOS Configuration Data
The Serving Gateway (SGW) maintains the following EPS bearer context information for UEs. In particular, Table 4 illustrates the context fields for one UE. However, for emergency attached or RLOS attached UEs, which are not authenticated, the International Mobile Equipment Identity (IMEI) is stored in context.
Table 4: S-GW EPS bearer context
The PDN GW maintains the following EPS bearer context information for UEs. Table 5 shows the context fields for one UE. Additionally, for emergency attached or RLOS attached UEs which are not authenticated, the IMEI is stored in context.
Table 5: P-GW context
The PFCP Session Establishment Request is sent over the Sxa, Sxb, Sxc and N4 interfaces by the CP function to establish a new PFCP session context in the UP function. Table 6 illustrates the Information Elements (IEs) in a PFCP Session Establishment Request.
Table 6: Information Elements in a PFCP Session Establishment Request
The PFCP Session Modification Request is used over the Sxa, Sxb, Sxc and N4 interfaces by the CP function to request the UP function to modify the PFCP session. Table 7 illustrates the IEs in a PFCP Session Modification Request.
Table 7: Information Elements in a PFCP Session Modification Request
GENERAL PARTIAL FAILURE HANDLING PROCEDURES
As defined in 3GPP TS 23.214 v16.2.0 (2020-09), which is incorporated herein by reference in its entirety, partial failure handling is an optional feature implemented by the MME, SGW, ePDG, TWAN and PGW, and by the SGW-C, PGW-C, SGW-U, and PGW-U for split SGW and PGW. For a split SGW and PGW, the description in TS 23.214 that relates to the SGW also applies to the SGW-C. Similarly, the description in TS 23.214 that relates to the PGW also applies to the PGW-C.
A partial failure handling feature may be used when a hardware or software failure affects a significant number of PDN connections, even though a significant number of PDN connections may be unaffected. This feature may also be invoked in cases of a total failure of a remote node (e.g., MME or PGW) to clean up hanging PDN connections associated with the failed node. When it is impossible to recover the affected PDN connections (for example, using implementation-specific session redundancy procedures), it is useful to inform the peer nodes about the affected PDN connections for recovery on the peer nodes. Notably, such a notification could be performed using an identifier that represents a large set of PDN connections rather than on an individual PDN connection basis. If a hardware or software failure happens to impact a small or insignificant number of PDN connections, the node experiencing the fault need not treat the failure as a partial fault. Instead, the node may tear down the affected connections one by one.
For the purposes of partial fault handling the term "node" refers to an entity that functions as an MME, PGW, ePDG, TWAN, or SGW as defined in an SAE network.
A PDN Connection Set Identifier (CSID) identifies a set of PDN connections within a node that may belong to an arbitrary number of UEs. A CSID is an opaque parameter local to a node. Each node that supports the feature maintains a local mapping of CSID to its internal resources. When one or more of those resources fail, the corresponding one or more fully qualified CSIDs are signaled to the peer nodes. The fully qualified CSID (FQ-CSID) is the combination of the node identity and the CSID assigned by the node which together globally identifies a set of PDN connections.
The node identifier in the FQ-CSID is required since two different nodes may use the same CSID value. Thus, a partial fault in one node should not cause unrelated PDN connections to be removed accidentally. The node identifier is globally unique across all 3GPP EPS networks, and is formatted as specified in 3GPP TS 29.274 v16.5.0 (2020-09), which is incorporated herein by reference in its entirety.
For the purposes of partial fault handling the term "peer” is used herein as follows: For a particular PDN connection, two nodes are "peers” if both nodes are used for that PDN connection. For a PDN Connection Set, the nodes are peers if they have at least one PDN connection in the PDN Connection Set where both nodes are used for that PDN connection. In particular, a PGW and an MME are "peers” for the purposes of partial fault handling.
An FQ-CSID is established in a node and stored in peer nodes in the PDN connection at the time of PDN connection establishment, or during a node relocation, and used later during partial failure handling in messages defined in 3GPP TS 29.274 and 3GPP TS 29.275 V16.0.0 (2020-07). Each node that supports the feature, including the MME, SGW, ePDG, TWAN and the PGW, maintains the FQ-CSID provided by every other peer node for a PDN connection. The FQ-CSIDs stored by PDN connection are later used to find the matching PDN connections when a FQ-CSID is received from a node reporting a partial fault for that FQ-CSID.
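As a rough, non-normative illustration of how a peer node might keep the per-connection FQ-CSID mapping and later match a reported FQ-CSID against its stored PDN connections (the class and attribute names below are assumptions, not taken from 3GPP TS 29.274):
    # Hypothetical sketch of FQ-CSID handling: the CSID is opaque and local to
    # a node, and the FQ-CSID (node identity + CSID) globally identifies a set
    # of PDN connections.  Illustrative only.
    from collections import defaultdict
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class FqCsid:
        node_id: str  # globally unique node identity
        csid: int     # value that is only meaningful within that node

    class PeerNodeCsidStore:
        def __init__(self):
            # FQ-CSIDs are stored per PDN connection at establishment time.
            self.connections_by_fq_csid = defaultdict(list)

        def store(self, fq_csid, pdn_connection):
            self.connections_by_fq_csid[fq_csid].append(pdn_connection)

        def on_partial_failure(self, reported_fq_csid):
            # Return the PDN connections matching the FQ-CSID signaled by the
            # node reporting the partial fault.
            return list(self.connections_by_fq_csid.get(reported_fq_csid, []))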
With the above in mind, embodiments of the present disclosure propose that an alternative PGW in the same PGW/SMF set send Update Bearer Request messages with a new PGW F-TEID, so that the PDN connection can then be anchored in the alternative PGW.
Although such signaling is "per-PDN connection," it is generated when there is upstream signaling, such as when the PCF or UPF triggers such Update Bearer Request signaling, for example. However, this may still lead to massive signaling in the network, since a failure in one PGW can affect a very large number (e.g., millions) of PDN connections. Further, if the connections are not proactively anchored in the alternative PGW, this can lead to non-deterministic latency when the MME performs signaling, and when the alternative PGW needs to fetch sessions from the shared session database for each session that is now anchored in the alternative PGW.
Embodiments of the present disclosure therefore rely on the MME to pick an alternative PGW in the set. Doing so means that the present embodiments rely on the MME to perform load sharing among all the PGWs in the set.
Figure 5
For example, Figure 5 is a functional block diagram illustrating some of the components of an EPS network 50 configured according to the present embodiments. As seen in Figure 5, network 50 comprises a plurality of PGW functions 52a, 52b, 52c, and 52d (collectively, PGW functions 52), a plurality of SGW functions 54a, 54b, 54c, and 54d (collectively, SGW functions 54), and a plurality of MME functions 56a, 56b, 56c, and 56d (collectively, MME functions 56). The PGW functions 52 are further comprised in a same PGW/SMF set 58, with a set ID of 100, which is managed by a centralized logic function, such as Orchestration Function (OF) 60.
In some embodiments, a PGW set implementation, such as PGW set 58 of Figure 5, may be configured to distribute affected PDN connections to different PGWs 52 in the PGW set 58. These PDN connections may be evenly distributed among the other PGWs (i.e., each of PGWs 52b, 52c, and 52d would receive the same number of PDN connections from the failed PGW 52a), or they may be distributed unevenly. By way of example only, if a first PGW 52a (e.g., PGW 1) has failed, 60% of the affected PDN connections could be moved to a second PGW 52b (e.g., PGW 2), 20% of the affected PDN connections could be moved to a third PGW 52c (e.g., PGW 3), and the remaining 20% of the connections could be moved to a fourth PGW 52d (e.g., PGW 4). Once the PDN connections have been moved, PGWs 52b, 52c, and 52d will signal the SGWs 54 and MME 56a to inform them of the movement. Particularly, in one embodiment, the PGWs 52b, 52c, and 52d will send a message to the SGWs 54 and MMEs 56 informing them that the IP address of PGW 52a in the F-TEID of the PDN connections will be changed, respectively, to the IP address(es) of PGWs 52b, 52c, and 52d.
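Purely as an illustrative sketch (the split ratios, the helper name, and the idea of computing the split centrally are assumptions, not requirements of the disclosure), such a redistribution could be expressed as:
    # Hypothetical sketch: distributing the PDN connections of a failed PGW
    # among the remaining PGWs of the same set, evenly or by weight.
    def redistribute(affected_connections, surviving_pgws, weights=None):
        """Assign each affected connection to one surviving PGW.

        weights, if given, maps a PGW identifier to its share (e.g., 0.6/0.2/0.2);
        otherwise the connections are split evenly."""
        if weights is None:
            weights = {pgw: 1.0 / len(surviving_pgws) for pgw in surviving_pgws}
        assignment = {}
        total = len(affected_connections)
        start = 0
        for i, pgw in enumerate(surviving_pgws):
            if i == len(surviving_pgws) - 1:
                count = total - start  # last PGW takes the remainder
            else:
                count = round(weights[pgw] * total)
            assignment[pgw] = affected_connections[start:start + count]
            start += count
        return assignment

    # Example: 60% of the connections to PGW 2, 20% to PGW 3, 20% to PGW 4.
    split = redistribute(list(range(100)), ["PGW2", "PGW3", "PGW4"],
                         {"PGW2": 0.6, "PGW3": 0.2, "PGW4": 0.2})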
According to the present embodiments, the PDN connections being moved can match certain criteria (e.g., PDN connections that are served by a given Access Point Name (APN) and are of a particular PDN Type). Further, embodiments of the present disclosure allow the criteria for each PDN connection being moved to be different. This allows the present disclosure to distribute some or all of the PDN connections served by a failed PGW 52a to one or more of the other PGWs 52b, 52c, 52d.
Thus, the present embodiments provide a more efficient signaling mechanism than is conventionally used. That is, the signaling of the present embodiments provides notifications about an address change (e.g., a PGW IP address change of PGW 52a), thereby enabling the movement or "reassignment" of a possibly very large plurality of PDN connections (i.e., those sharing the same PGW IP address of PGW 52a but with different TEIDs that differentiate between the PDN connections) from one PGW identified by the IP address of PGW 52a to another PGW in the same PGW set 58 (e.g., those identified by the PGW IP address(es) of PGW 52b and/or PGW 52c and/or PGW 52d).
Additionally, according to the present embodiments, if a PGW to which the affected session(s) is/are being moved knows a priori which sessions it will take, that PGW can, responsive to the failover, pre-fetch the information associated with those known sessions, currently anchored in the failed PGW (e.g., PGW 52a), from a shared distributed database. This helps to ensure that the PGW(s) taking over the affected sessions will be able to access the sessions locally and, as such, reduces or eliminates any future latency that may occur when the PGW(s) taking over the affected sessions subsequently attempt to access the sessions after they have been moved.
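A minimal sketch of this pre-fetching behaviour, assuming a shared session database object that exposes a query interface keyed on the anchor address (both the database API and the field names are hypothetical):
    # Hypothetical sketch: on failover, the PGW taking over pre-fetches from a
    # shared distributed database the session records of the failed PGW that it
    # has agreed to take, so that later access to those sessions is local.
    class TakeoverPgw:
        def __init__(self, shared_db):
            self.shared_db = shared_db   # assumed shared session database client
            self.local_sessions = {}     # sessions cached locally after failover

        def on_failover(self, failed_pgw_ip, criteria):
            # Fetch only the sessions matching the agreed criteria (e.g., a
            # given APN) that are currently anchored at the failed PGW.
            for session in self.shared_db.query(anchor_ip=failed_pgw_ip, **criteria):
                self.local_sessions[session["teid"]] = session
            return len(self.local_sessions)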
Figure 6
Figure 6 is a signaling diagram 70 illustrating the mechanism by which the PGWs 52 notify the SGWs 54 and MMEs 56 to change the PGW IP address (in the F-TEID, which is used on the control plane for a PDN connection) for a large number of PDN connections matching certain criteria. As in Figure 5, the embodiment of Figure 6 is implemented in EPS network 50 over a GTPv2 interface. It should be noted that Figure 6 illustrates only some of the PGW, SGW, and MME components seen in Figure 5; however, those of ordinary skill in the art will readily appreciate that this is for ease of discussion only.
As seen in Figure 6, MME 56a established a PDN connection for APN1 (not shown) with SGW 54a and PGW 52a (box 72). The IP address of PGW 52a is, for example, IP-1. The PDN connection is then created and associated with MME 56a and SGW 54a, with the connection using the IP address IP-1 of PGW 52a (box 74).
Next, MME 56a establishes a PDN connection for APN2 (not shown) with SGW 54b and PGW 52a (box 76). This PDN connection is then created and associated with MME 56a and SGW 54b, with the connection again using the same PGW IP address IP-1 (box 78).
At some point, a failure occurs with PGW 52a. Alternatively, however, there may be a scheduled or unscheduled O&M procedure that forces the movement of the PDN connections from PGW 52a. In either case, one of the PGWs will notify the MME and the SGWs of the address change. In this embodiment, PGW 52b determines the need for the address change and notifies the peer network node of the IP address change by sending an Update Bearer Request message, or a new address change message, to SGW 54a (line 80). According to the present embodiments, the message includes one or more address change IEs (e.g., a PGW IP Address Change IE) that comprise the "old" IP address (i.e., IP-1) of PGW 52a, a "new" IP address (e.g., IP-2) of PGW 52b that will replace the old IP address, and one or more criterion (e.g., APN = APN1). The criterion is used by the peer network node to further identify which PDN connections are affected by replacing the IP address of PGW 52a.
Similarly, PGW 52c also determines the need for an address change and sends a notification of the IP address change to peer network node SGW 54a (box 82). As above, PGW 52c may send an Update Bearer Request message, or a new address change message. The message includes one or more address change IEs that comprise the "old" IP address (i.e., IP-1) of PGW 52a, a "new" IP address (e.g., IP-3) of PGW 52c, and one or more criterion (e.g., APN = APN2). The criterion is used to further identify which PDN connections are affected by replacing the IP address of PGW 52a.
Those of ordinary skill in the art should appreciate that the APN associated with a given PGW is not the only criterion usable by the peer network nodes. Rather, such criteria may include any PDN connection data stored in the SGWs 54 and/or the MMEs 56 that is usable to identify a PDN connection. The illustrated embodiment uses an APN currently in use by the PGW. However, other connection data, such as the data described in sections 5.7.2 and 5.7.3 of 3GPP TS 23.401 V16.8.0 (2020-09), which is incorporated herein by reference in its entirety, is also usable for this purpose. For example, the criteria may include, in at least one embodiment, the FQ-CSIDs which were allocated by the failed PGW 52a, to identify a subset of PDN connections which were served by the failed PGW 52a.
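To make the shape of such a notification concrete, the sketch below models an address change IE as a simple record attached to an Update Bearer Request; the field layout is an assumption chosen for readability and does not reproduce the GTPv2-C encoding of 3GPP TS 29.274:
    # Hypothetical sketch of the address change notification sent from the
    # replacement PGW toward the SGW/MME.  The IE layout is illustrative only.
    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class PgwIpAddressChangeIE:
        old_pgw_ip: str                 # IP address of the PGW being replaced (e.g., IP-1)
        new_pgw_ip: str                 # IP address of the replacement PGW (e.g., IP-2)
        apn: Optional[str] = None       # criterion: APN of the affected connections
        pdn_type: Optional[str] = None  # criterion: PDN Type of the affected connections
        fq_csids: List[str] = field(default_factory=list)  # criterion: FQ-CSIDs of the failed PGW

    @dataclass
    class UpdateBearerRequest:
        address_change_ies: List[PgwIpAddressChangeIE]

    # PGW 52b notifying that the APN1 connections move from IP-1 to IP-2:
    request = UpdateBearerRequest(
        address_change_ies=[PgwIpAddressChangeIE("IP-1", "IP-2", apn="APN1")])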
Next, SGW 54a updates all PDN connections with the PGW IP address of IP-1 and APN1/APN2, to use the IP addresses of PGWs 52b and 52c, respectively (box 84), and sends a notification of the IP address change to MME 56a (box 86). MME 56a then updates all PDN connections with the PGW IP address of IP-1 and APN1/APN2, to use the IP addresses of PGWs 52b and 52c, respectively (box 88).
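A corresponding sketch of how a peer node could apply such a notification to its stored contexts (this reuses the hypothetical PdnConnectionContext and PgwIpAddressChangeIE records sketched above and is not a normative procedure):
    # Hypothetical sketch: an SGW or MME updating every stored PDN connection
    # that matches the old PGW IP address and the criteria carried in the IE.
    def apply_address_change(contexts, change_ie):
        updated = []
        for ctx in contexts:
            if ctx.pgw_address_in_use != change_ie.old_pgw_ip:
                continue
            if change_ie.apn is not None and ctx.apn_in_use != change_ie.apn:
                continue
            if change_ie.pdn_type is not None and ctx.pdn_type != change_ie.pdn_type:
                continue
            # Replace the control-plane PGW address; the TEID is kept, which is
            # what continues to differentiate the individual PDN connections.
            ctx.pgw_address_in_use = change_ie.new_pgw_ip
            updated.append(ctx)
        return updated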
It should be noted here that the present embodiments are not limited to the use of a PGW as the NF. Rather, the mechanism provided by the present disclosure may also be applied over Sx and N4 interfaces to enable a Control Plane (CP) function to notify a User Plane (UP) function to change the IP address (i.e., the address stored in the CP F-SEID) for a very large number of PFCP sessions. Thus, the NFs may also be multiple CP functions in a same CP function set.
In this aspect, a CP function sends a new message (e.g., a Notify CP Address Change Request), or sends an existing message (e.g., a PFCP Association Update Request message or a Heartbeat Request message), to a peer network node. The message includes one or more new CP function IP Address Change IE(s) that include, inter alia, the old CP function IP address to be replaced and a new CP function IP address to replace the old CP function IP address. Additionally, the CP function IP Address Change IE carries the criteria to further identify the applicable PFCP sessions. In this aspect, the criteria may include, but is not limited to, any PFCP session data stored in the UPF that can be used to identify a PFCP session. For example, the criteria may identify a PGW-C, an FQ-CSID, an S-NSSAI, and an APN/DNN.
The techniques described herein provide advantages and benefits that conventional solutions cannot or do not provide. For example, the present embodiments configure the PGWs in the same PGW set to redistribute very large numbers of PDN connections, which may be affected by a failure of a PGW, to available PGWs in the same set. Additionally, the present embodiments configure the PGW taking over the PDN connections to pre-fetch, from a shared database, the session information associated with the PDN connections presently anchored by the failed PGW. This way, if the PDN connections anchored at that PGW are redistributed according to the present embodiments, the PGW receiving those PDN connections is assured to have local access to those connections, which will greatly reduce or eliminate latency if and when those sessions are later accessed.
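Returning to the Sx/N4 aspect described above, a non-normative sketch of a CP function IP address change applied to PFCP sessions at a UP function (the session representation and criteria keys are illustrative assumptions and do not follow the PFCP message encoding):
    # Hypothetical sketch: a UP function replacing the CP F-SEID IP address for
    # all PFCP sessions that match the criteria carried in a CP function IP
    # Address Change IE (e.g., sessions of a given APN/DNN or FQ-CSID).
    def apply_cp_address_change(pfcp_sessions, old_cp_ip, new_cp_ip, criteria):
        moved = []
        for session in pfcp_sessions:
            if session["cp_fseid_ip"] != old_cp_ip:
                continue
            if any(session.get(key) != value for key, value in criteria.items()):
                continue
            session["cp_fseid_ip"] = new_cp_ip
            moved.append(session["seid"])
        return moved

    # Example: move the PFCP sessions of DNN "internet" from CP-IP-1 to CP-IP-2.
    sessions = [{"seid": 1, "cp_fseid_ip": "CP-IP-1", "dnn": "internet"},
                {"seid": 2, "cp_fseid_ip": "CP-IP-1", "dnn": "ims"}]
    apply_cp_address_change(sessions, "CP-IP-1", "CP-IP-2", {"dnn": "internet"})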
Figure 7
Figure 7 illustrates a method 90 for notifying peer network nodes in a core network that a group of one or more connection contexts served by a first NF are to be moved to a second NF. The method 90 is implemented by one of the first and second NFs.
As seen in Figure 7, a failure occurs at the first NF, for example (box 92). Alternatively, however, box 92 may represent the occurrence of a scheduled or unscheduled O&M function. Regardless, method 90 calls for the first or second NF determining one or more connection contexts to be moved from the first NF to the second NF (box 94), and then sending a request message to at least one peer network node in the core network indicating that the one or more connection contexts are to be moved from the first NF to the second NF. In this embodiment, the first NF and the second NF are in a same NF set, and the request message comprises identifying information identifying the one or more connection contexts to be moved from the first NF to the second NF.
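A compact, non-normative sketch of this NF-side method (the context representation, criteria keys, and send callback are assumptions used only for illustration):
    # Hypothetical sketch of the Figure 7 method: determine the connection
    # contexts to be moved (box 94) and send a request message carrying the
    # identifying information to the peer network nodes.
    def notify_context_move(own_contexts, old_nf_ip, new_nf_ip, criteria, peers, send_fn):
        to_move = [ctx for ctx in own_contexts
                   if ctx["nf_address"] == old_nf_ip
                   and all(ctx.get(key) == value for key, value in criteria.items())]
        message = {"old_nf_ip": old_nf_ip,
                   "new_nf_ip": new_nf_ip,
                   "criteria": criteria}
        for peer in peers:
            send_fn(peer, message)   # request message to each peer network node
        return to_move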
In one embodiment, the first NF and the second NF are interchangeable NF instances of the same service type supporting the same service and the same one or more network slices.
In one embodiment, the first NF and the second NF are respective Packet Data Network Gateways (PGWs) belonging to the same NF set.
In one embodiment, the first NF and the second NF are respective combined PGW and Session Management Functions (SMFs) belonging to the same NF set.
In one embodiment, the PGWs serve as Control Plane (CP) functions.
In one embodiment, the request message comprises one of a message requesting an address change, an Update Bearer Request message, and an Echo Request message.
In one embodiment, the identifying information comprises a PGW IP Address Change Information Element (IE).
In one embodiment, the PGW IP Address Change IE comprises an IP address identifying the first NF, an IP address identifying the second NF that is to replace the first NF, and one or more criterion for identifying the one or more Packet Data Network (PDN) connections that are to be moved from the first NF to the second NF.
In one embodiment, the IP addresses identifying the first and second NFs are comprised in respective Fully Qualified Tunnel Endpoint Identifiers (F-TEID).
In one embodiment, the IP address identifying the first NF is used for control plane signaling for a connection context.
In one embodiment, the one or more criterion comprises information stored in the peer network nodes identifying at least one PDN connection.
In one embodiment, the one or more criterion comprises information identifying the at least one PDN connection by an Access Point Name (APN) and a PDN Type.
In one embodiment, the APN is an APN currently in use.
In one embodiment, the one or more criterion comprises a Fully Qualified Connection Set Identifier (FQ-CSID) allocated by the first NF.
In one embodiment, the first and second NFs are CP functions, and wherein the peer network node in the core network is a User Plane (UP) NF.
In one embodiment, the IP Addresses identifying the first and second CP functions comprise respective CP Fully Qualified Session Endpoint Identifiers (F-SEID).
In one embodiment, the request message comprises one of a message requesting an access change, a Packet Forwarding Control Protocol (PFCP) Association Update Request message, and a Heartbeat Request message.
In one embodiment, the identifying information comprises a CP function IP Address Change Information Element (IE).
In one embodiment, the CP function IP Address Change IE comprises an IP address identifying the first CP function, an IP address identifying the second CP function that is to replace the first CP function, and one or more criterion for identifying one or more PFCP sessions that are to be moved from the first CP function to the second CP function.
In one embodiment, the one or more criterion comprises PFCP session data stored at a User Plane (UP) function, wherein the PFCP session data identifies at least one PFCP session.
In one embodiment, the one or more connection contexts to be moved from the first NF to the second NF are associated with respective PDN connections, or with respective PFCP sessions.
In one embodiment, the at least one peer network node in the core network is a Mobility Management Entity (MME).
In one embodiment, the at least one peer network node in the core network is a Serving Gateway (SGW).
In one embodiment, the at least one peer network node in the core network is a Packet Data Network Gateway-User Plane function (PGW-U).
In one embodiment, the at least one peer network node in the core network is a User Plane Function (UPF).
In one embodiment, the request message is sent responsive to determining that an event associated with the first NF has occurred.
Figure 8
Figure 8 illustrates a method 100 for notifying peer network nodes in a core network that a group of one or more connection contexts served by a first NF are to be moved to a second NF. The method 100 is implemented by a peer network node in the core network. As seen in Figure 8, method 100 calls for the peer network node receiving a request message from one of the first NF and the second NF indicating that the group of one or more connection contexts are to be moved from the first NF to the second NF. The first NF and the second NF are in a same NF set, and the request message comprises identifying information identifying the group of one or more connection contexts to be moved from the first NF to the second NF. Once received, method 100 calls for the peer network node to send, to the second NF, information for subsequent connection context signaling for the group of one or more connection contexts that were moved from the first NF to the second NF.
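As a non-normative sketch of this peer-network-node behaviour (the context representation and the send callback are assumptions):
    # Hypothetical sketch of the Figure 8 method: receive the request message,
    # re-anchor the matching connection contexts, then send information toward
    # the second NF for subsequent connection context signaling.
    class PeerNetworkNode:
        def __init__(self, contexts, send_fn):
            self.contexts = contexts   # locally stored connection contexts (dicts)
            self.send = send_fn        # transport toward other nodes (assumed)

        def on_request_message(self, old_nf_ip, new_nf_ip, criteria):
            moved = []
            for ctx in self.contexts:
                if ctx["nf_address"] != old_nf_ip:
                    continue
                if any(ctx.get(key) != value for key, value in criteria.items()):
                    continue
                ctx["nf_address"] = new_nf_ip   # context is now served by the second NF
                moved.append(ctx["context_id"])
            # Inform the second NF which connection contexts it now serves.
            self.send(new_nf_ip, {"moved_contexts": moved, "old_nf_ip": old_nf_ip})
            return moved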
In one embodiment, the first NF and the second NF are respective Packet Data Network Gateways (PGWs) belonging to the same NF set.
In one embodiment, the identifying information is an address change Information Element (IE) comprising an IP address identifying the first NF, an IP address identifying the second NF that is to replace the first NF, and one or more criterion for identifying one or more Packet Data Network (PDN) connections between the first NF and other NFs in the NF set that are affected by the second NF replacing the first NF.
In one embodiment, the method further calls for the peer network node updating a PDN connection with information indicating that the first NF is being replaced by the second NF.
In one embodiment, the first and second NFs are CP functions, and the peer network node in the core network is a User Plane (UP) NF.
In one embodiment, the address change IE comprises a CP function IP Address Change Information Element (IE) comprising an IP address identifying the first NF, an IP address identifying the second NF that is to replace the first NF, and one or more criterion for identifying one or more Packet Data Network (PDN) connections between the first NF and other NFs in the NF set that are affected by the IP address change.
Regardless of whether the embodiments are implemented at a network node (e.g., Figure 7) or at a peer network node in a core network (e.g., Figure 8), the identifying information identifying the one or more connection contexts to be moved from the first NF to the second NF is an Information Element (IE).
An apparatus can perform any of the methods herein described by implementing any functional means, modules, units, or circuitry. In one embodiment, for example, the apparatuses comprise respective circuits or circuitry configured to perform the steps shown in the method figures. The circuits or circuitry in this regard may comprise circuits dedicated to performing certain functional processing and/or one or more microprocessors in conjunction with memory. For instance, the circuitry may include one or more microprocessors or microcontrollers, as well as other digital hardware, which may include Digital Signal Processors (DSPs), special-purpose digital logic, and the like. The processing circuitry may be configured to execute program code stored in memory, which may include one or several types of memory such as read-only memory (ROM), random-access memory, cache memory, flash memory devices, optical storage devices, etc. Program code stored in memory may include program instructions for executing one or more telecommunications and/or data communications protocols as well as instructions for carrying out one or more of the techniques described herein, in several embodiments. In embodiments that employ memory, the memory stores program code that, when executed by the one or more processors, carries out the techniques described herein.
Further, any appropriate steps, methods, features, functions, or benefits disclosed herein may be performed through one or more functional units or modules of one or more virtual apparatuses. Each virtual apparatus may comprise a number of these functional units. These functional units may be implemented via processing circuitry, which may include one or more microprocessors or microcontrollers, as well as other digital hardware, which may include digital signal processors (DSPs), special-purpose digital logic, and the like. The processing circuitry may be configured to execute program code stored in memory, which may include one or several types of memory such as read-only memory (ROM), random-access memory (RAM), cache memory, flash memory devices, optical storage devices, etc. Program code stored in memory includes program instructions for executing one or more telecommunications and/or data communications protocols as well as instructions for carrying out one or more of the techniques described herein. In some implementations, the processing circuitry may be used to cause the respective functional unit to perform corresponding functions according to one or more embodiments of the present disclosure.
Figure 9
With the above in mind, Figure 9 is a block diagram illustrating some components of a network node 110 configured according to one embodiment of the present disclosure. In this embodiment, network node 110 is a NF configured as a PGW, such as PGW 52a, or as a device (e.g., a computer) executing a CP function.
As seen in Figure 9, network node 110 comprises processing circuitry 112, memory circuitry 114, and communications circuitry 116. In addition, memory circuitry 114 stores a computer program 118 that, when executed by processing circuitry 112, configures network node 110 to implement the methods herein described.
In more detail, the processing circuitry 112 controls the overall operation of network node 110 and processes the data and information it sends and receives to/from other nodes. Such processing includes, but is not limited to determining one or more connection contexts to be moved from a first NF to a second NF, and sending a request message to at least one peer network node in the core network indicating that the one or more connection contexts are to be moved from the first NF to the second NF. The first NF and the second NF are in a same NF set. Further, the request message comprises an information element (IE) identifying the one or more connection contexts to be moved from the first NF to the second NF. In this regard, the processing circuitry 112 may comprise one or more microprocessors, hardware, firmware, or a combination thereof.
The memory circuitry 114 comprises both volatile and non-volatile memory for storing computer program code and data needed by the processing circuitry 112 for operation. Memory circuitry 114 may comprise any tangible, non-transitory computer-readable storage medium for storing data including electronic, magnetic, optical, electromagnetic, or semiconductor data storage. As stated above, memory circuitry 114 stores a computer program 118 comprising executable instructions that configure the processing circuitry 112 to implement the methods herein described. A computer program 118 in this regard may comprise one or more code modules corresponding to the means or units described above. In general, computer program instructions and configuration information are stored in a non-volatile memory, such as a ROM, erasable programmable read only memory (EPROM) or flash memory. Temporary data generated during operation may be stored in a volatile memory, such as a random access memory (RAM). In some embodiments, computer program 118 for configuring the processing circuitry 112 as herein described may be stored in a removable memory, such as a portable compact disc, portable digital video disc, or other removable media. The computer program 118 may also be embodied in a carrier such as an electronic signal, optical signal, radio signal, or computer readable storage medium.
The communication circuitry 116 communicatively connects network node 110 to one or more other nodes via a communications network, as is known in the art. In some embodiments, for example, communication circuitry 116 communicatively connects network node 110 to one or more peer network nodes, such as an SGW 54 and/or an MME 56. As such, communications circuitry may comprise, for example, an ETHERNET card or other circuitry configured to communicate with the peer network nodes.
Figure 10
Figure 10 is a functional block diagram illustrating some functions of computer program 118 executed by processing circuitry 112 of a network node 110 according to one embodiment of the present disclosure. As seen in Figure 10, computer program 118 comprises a determining module/unit 120, a send module/unit 122, and a receive module/unit 124.
When computer program 118 is executed by processing circuitry 112, the determining module/unit 120 configures network node 110 to determine one or more connection contexts to be moved from the first NF to the second NF, as previously described. The send module/unit 122 configures network node 110 to send a request message to at least one peer network node in the core network indicating that the one or more connection contexts are to be moved from the first NF to the second NF, as previously described. Additionally, the first NF and the second NF are in a same NF set, and the request message comprises an information element (IE) identifying the one or more connection contexts to be moved from the first NF to the second NF, as previously described. The receive module/unit 124 configures network node 110 to receive information from the peer network node, such as notification of the address changes associated with other NFs in the same NF set, as previously described.
Embodiments further include a carrier containing such a computer program 118. This carrier may comprise one of an electronic signal, optical signal, radio signal, or computer readable storage medium.
In this regard, embodiments herein also include a computer program product stored on a non-transitory computer readable (storage or recording) medium and comprising instructions that, when executed by a processor of an apparatus, cause the apparatus to perform as described above.
Embodiments further include a computer program product comprising program code portions for performing the steps of any of the embodiments herein when the computer program product is executed by a computing device. This computer program product may be stored on a computer readable recording medium.
Figure 11
Figure 11 is a block diagram illustrating some components of a peer network node 130 in a core network configured according to one embodiment of the present disclosure. In this embodiment, peer network node 130 is configured as a SGW, such as SGW 54a, an MME, such as MME 56a, a Packet Data Network Gateway-User Plane function (PGW-U), or a User Plane Function (UPF). As seen in Figure 11, peer network node 130 comprises processing circuitry 132, memory circuitry 134, and communications circuitry 136. In addition, memory circuitry 134 stores a computer program 138 that, when executed by processing circuitry 132, configures peer network node 130 to implement the methods herein described.
In more detail, the processing circuitry 132 controls the overall operation of peer network node 130 and processes the data and information it sends and receives to/from other nodes. Such processing includes, but is not limited to receiving a request message from one of the first NF and the second NF indicating that the group of one or more connection contexts are to be moved from the first NF to the second NF, and sending, to the second NF, information for subsequent connection context signaling for the group of one or more connection contexts that were moved from the first NF to the second NF. The first NF and the second NF are in a same NF set. Further, the request message comprises an information element (IE) identifying the one or more connection contexts to be moved from the first NF to the second NF. In this regard, the processing circuitry 132 may comprise one or more microprocessors, hardware, firmware, or a combination thereof.
The memory circuitry 134 comprises both volatile and non-volatile memory for storing computer program code and data needed by the processing circuitry 132 for operation. Memory circuitry 134 may comprise any tangible, non-transitory computer-readable storage medium for storing data including electronic, magnetic, optical, electromagnetic, or semiconductor data storage. As stated above, memory circuitry 134 stores a computer program 138 comprising executable instructions that configure the processing circuitry 132 to implement the methods herein described. A computer program 138 in this regard may comprise one or more code modules corresponding to the means or units described above. In general, computer program instructions and configuration information are stored in a non-volatile memory, such as a ROM, erasable programmable read only memory (EPROM) or flash memory. Temporary data generated during operation may be stored in a volatile memory, such as a random access memory (RAM). In some embodiments, computer program 138 for configuring the processing circuitry 132 as herein described may be stored in a removable memory, such as a portable compact disc, portable digital video disc, or other removable media. The computer program 138 may also be embodied in a carrier such as an electronic signal, optical signal, radio signal, or computer readable storage medium.
The communication circuitry 136 communicatively connects peer network node 130 to one or more other nodes via a communications network, as is known in the art. In some embodiments, for example, communication circuitry 136 communicatively connects peer network node 130 to one or more network nodes 110, such as a PGW 52. As such, communications circuitry may comprise, for example, an ETHERNET card or other circuitry configured to communicate with the network nodes.
Figure 12
Figure 12 is a functional block diagram illustrating some functions of computer program 138 executed by processing circuitry 132 of a peer network node 130 according to one embodiment of the present disclosure. As seen in Figure 12, computer program 138 comprises a receive module/unit 140, a send module/unit 142, and an update module/unit 144.
When computer program 138 is executed by processing circuitry 132, the receive module/unit 140 configures peer network node 130 to receive a request message from one of a first NF and a second NF indicating that a group of one or more connection contexts are to be moved from the first NF to the second NF, as previously described. The send module/unit 142 configures peer network node 130 to send messages to one or more NFs in the same NF set (e.g., PGW 52), as well as an MME 56, indicating the address change, as previously described. The update module/unit 144 configures peer network node 130 to update its information to reflect the address change, as previously described.
Embodiments further include a carrier containing such a computer program 138. This carrier may comprise one of an electronic signal, optical signal, radio signal, or computer readable storage medium.
In this regard, embodiments herein also include a computer program product stored on a non-transitory computer readable (storage or recording) medium and comprising instructions that, when executed by a processor of an apparatus, cause the apparatus to perform as described above.
Embodiments further include a computer program product comprising program code portions for performing the steps of any of the embodiments herein when the computer program product is executed by a computing device. This computer program product may be stored on a computer readable recording medium.
Generally, all terms used herein are to be interpreted according to their ordinary meaning in the relevant technical field, unless a different meaning is clearly given and/or is implied from the context in which it is used. All references to a/an/the element, apparatus, component, means, step, etc. are to be interpreted openly as referring to at least one instance of the element, apparatus, component, means, step, etc., unless explicitly stated otherwise. The steps of any methods disclosed herein do not have to be performed in the exact order disclosed, unless a step is explicitly described as following or preceding another step and/or where it is implicit that a step must follow or precede another step. Any feature of any of the embodiments disclosed herein may be applied to any other embodiment, wherever appropriate. Likewise, any advantage of any of the embodiments may apply to any other embodiments, and vice versa. Other objectives, features and advantages of the enclosed embodiments will be apparent from the description.
The term unit may have conventional meaning in the field of electronics, electrical devices and/or electronic devices and may include, for example, electrical and/or electronic circuitry, devices, modules, processors, memories, logic solid state and/or discrete devices, computer programs or instructions for carrying out respective tasks, procedures, computations, outputs, and/or displaying functions, and so on, as such as those that are described herein.
Some of the embodiments contemplated herein are described more fully with reference to the accompanying drawings. Other embodiments, however, are contained within the scope of the subject matter disclosed herein. The disclosed subject matter should not be construed as limited to only the embodiments set forth herein; rather, these embodiments are provided by way of example to convey the scope of the subject matter to those skilled in the art.
Further aspects of the present disclosure are provided in APPENDIX A, which is attached hereto.
EMBODIMENTS
Some of the embodiments that have been described above can be summarized in the following manner:
1. A method for notifying peer network nodes in a core network that a group of one or more connection contexts served by a first network function (NF) are to be moved to a second NF, the method implemented by one of the first and second NFs and comprising: determining one or more connection contexts to be moved from the first NF to the second NF; sending a request message to at least one peer network node in the core network indicating that the one or more connection contexts are to be moved from the first NF to the second NF, wherein the first NF and the second NF are in a same NF set; and wherein the request message comprises identifying information identifying the one or more connection contexts to be moved from the first NF to the second NF.
2. The method of embodiment 1 wherein the first NF and the second NF are interchangeable NF instances of the same service type supporting the same service and the same one or more network slices.
3. The method of any of embodiments 1-2 wherein the first NF and the second NF are respective Packet Data Network Gateways (PGWs) belonging to the same NF set.
4. The method of any of embodiments 1-2 wherein the first NF and the second NF are respective combined PGW and Session Management Functions (SMFs) belonging to the same NF set.
5. The method of any of embodiments 1-4 wherein the PGWs serve as Control Plane (CP) functions.
6. The method of any of embodiments 1-5 wherein the request message comprises one of: a message requesting an address change; an Update Bearer Request message; and an Echo Request message.
7. The method of any of embodiments 1-6 wherein the identifying information comprises a PGW IP Address Change Information Element (IE).
8. The method of embodiment 7 wherein the PGW IP Address Change IE comprises: an IP address identifying the first NF; an IP address identifying the second NF that is to replace the first NF; and one or more criteria for identifying the one or more Packet Data Network (PDN) connections that are to be moved from the first NF to the second NF.
9. The method of any of embodiments 7-8 wherein the IP addresses identifying the first and second NFs are comprised in respective Fully Qualified Tunnel Endpoint Identifiers (F-TEID).
10. The method of embodiment 9 wherein the IP address identifying the first NF is used for control plane signaling for a connection context.
11. The method of any of embodiments 8-10 wherein the one or more criteria comprise information stored in the peer network nodes identifying at least one PDN connection.
12. The method of embodiment 11 wherein the one or more criteria comprise information identifying the at least one PDN connection by an Access Point Name (APN) and a PDN Type.
13. The method of embodiment 12 wherein the APN is an APN currently in use.
14. The method of embodiment 12 wherein the one or more criteria comprise a Fully Qualified Connection Set Identifier (FQ-CSID) allocated by the first NF.
15. The method of any of embodiments 1-2 wherein the first and second NFs are CP functions, and wherein the peer network node in the core network is a User Plane (UP) NF.
16. The method of any of embodiments 1-2 and 15 wherein the IP addresses identifying the first and second CP functions comprise respective CP Fully Qualified Session Endpoint Identifiers (F-SEID).
17. The method of any of embodiments 1-2 and 14-16 wherein the request message comprises one of: a message requesting an address change; a Packet Forwarding Control Protocol (PFCP) Association Update Request message; and a Heartbeat Request message.
18. The method of any of embodiments 1-2 and 14-17 wherein the identifying information comprises a CP function IP Address Change Information Element (IE).
19. The method of embodiment 18 wherein the CP function IP Address Change IE comprises: an IP address identifying the first CP function; an IP address identifying the second CP function that is to replace the first CP function; and one or more criteria for identifying one or more PFCP sessions that are to be moved from the first CP function to the second CP function.
20. The method of embodiment 19 wherein the one or more criteria comprise a subset of PFCP session data stored at a User Plane (UP) function, which can be used to identify at least one PFCP session.
21. The method of any of the preceding embodiments wherein the one or more connection contexts to be moved from the first NF to the second NF are: associated with respective PDN connections; or associated with respective PFCP sessions.
22. The method of embodiment 1 wherein the at least one peer network node in the core network is a Mobility Management Entity (MME).
23. The method of embodiment 1 wherein the at least one peer network node in the core network is a Serving Gateway (SGW).
24. The method of embodiment 1 wherein the at least one peer network node in the core network is a Packet Data Network Gateway-User Plane function (PGW-U).
25. The method of embodiment 1 wherein the at least one peer network node in the core network is a User Plane Function (UPF).
26. The method of any of the preceding embodiments wherein the request message is sent responsive to determining that an event associated with the first NF has occurred.
27. A network node for notifying peer network nodes in a core network that a group of one or more connection contexts served by a first network function (NF) are to be moved to a second NF, the network node comprising: communications interface circuitry configured to communicate data packets with peer network nodes in a core network; and processing circuitry configured to: determine one or more connection contexts to be moved from the first NF to the second NF; send a request message to one or more peer network nodes in the core network indicating that the one or more connection contexts are to be moved from the first NF to the second NF, wherein the first NF and the second NF are in a same NF set; and wherein the request message comprises identifying information identifying the one or more connection contexts to be moved from the first NF to the second NF.
28. The network node of embodiment 27 wherein the processing circuitry is further configured to perform the method of any of embodiments 2-26.
29. A non-transitory computer readable medium having computer instructions stored thereon that, when executed by a processing circuit of a network node configured to notify peer network nodes in a core network that a group of one or more connection contexts served by a first network function (NF) are to be moved to a second NF, cause the network node to: determine one or more connection contexts to be moved from the first NF to the second NF; send a request message to one or more peer network nodes in the core network indicating that the one or more connection contexts are to be moved from the first NF to the second NF, wherein the first NF and the second NF are in a same NF set; and wherein the request message comprises identifying information identifying the one or more connection contexts to be moved from the first NF to the second NF.
30. A computer program product for notifying peer network nodes in a core network that a group of one or more connection contexts served by a first network function (NF) are to be moved to a second NF, the computer program product comprising software instructions which, when run on at least one processing circuit in a network node, cause the network node to execute the method according to any one of embodiments 1-26.
31. A method for notifying peer network nodes in a core network that a group of one or more connection contexts served by a first network function (NF) are to be moved to a second NF, the method implemented by a peer network node in the core network and comprising: receiving a request message from one of the first NF and the second NF indicating that the group of one or more connection contexts are to be moved from the first NF to the second NF, wherein the first NF and the second NF are in a same NF set, and wherein the request message comprises identifying information identifying the group of one or more connection contexts to be moved from the first NF to the second NF; and sending, to the second NF, information for subsequent connection context signaling for the group of one or more connection contexts that were moved from the first NF to the second NF.
32. The method of embodiment 31 wherein the first NF and the second NF are respective Packet Data Network Gateways (PGWs) belonging to the same NF set.
33. The method of any of embodiments 31 and 32 wherein the identifying information is an address change Information Element (IE) comprising: an IP address identifying the first NF; an IP address identifying the second NF that is to replace the first NF; and one or more criteria for identifying the one or more Packet Data Network (PDN) connections that are to be moved from the first NF to the second NF.
34. The method of any of embodiments 31-33 further comprising updating at least one or more connection contexts with information indicating that the first NF is being replaced by the second NF.
35. The method of embodiment 31 wherein the first and second NFs are CP functions, and wherein the peer network node in the core network is a User Plane (UP) NF.
36. The method of any of embodiments 31 and 35 wherein the address change IE comprises a CP function IP Address Change Information Element (IE) comprising: an IP address identifying the first CP function; an IP address identifying the second CP function that is to replace the first CP function; and one or more criteria for identifying one or more PFCP sessions that are to be moved from the first CP function to the second CP function.
37. A peer network node in a core network for notifying peer network nodes in the core network that a group of one or more connection contexts served by a first network function (NF) are to be moved to a second NF, the peer network node comprising: communications interface circuitry configured to communicate data packets with a plurality of Network Functions (NFs) in a same NF set; and processing circuitry configured to: receive a request message from one of a first NF and a second NF indicating that the one or more connection contexts are to be moved from the first NF to the second NF, wherein the first NF and the second NF are in a same NF set, and wherein the request message comprises identifying information identifying the one or more connection contexts to be moved from the first NF to the second NF; and send, to the second NF, information for subsequent connection context signaling for the group of one or more connection contexts that were moved from the first NF to the second NF.
38. The peer network node of embodiment 37 wherein the processing circuitry is further configured to perform the method of any of embodiments 33-36.
39. A non-transitory computer readable medium having computer instructions stored thereon that, when executed by a processing circuit of a peer network node in a core network, cause the peer network node to: receive a request message from one of a first NF and a second NF indicating that the one or more connection contexts are to be moved from the first NF to the second NF, wherein the first NF and the second NF are in a same NF set, and wherein the request message comprises identifying information identifying the one or more connection contexts to be moved from the first NF to the second NF; and send, to the second NF, information for subsequent connection context signaling for the group of one or more connection contexts that were moved from the first NF to the second NF.
40. A computer program product for notifying peer network nodes in the core network that a group of one or more connection contexts served by a first network function (NF) are to be moved to a second NF, the computer program product comprising software instructions which, when run on at least one processing circuit of a peer network node, cause the peer network node to execute the method according to any one of embodiments 32-37.
41. The method of any of the preceding embodiments wherein the identifying information identifying the one or more connection contexts to be moved from the first NF to the second NF is an Information Element (IE).
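For orientation only, the following is a minimal illustrative sketch, in Python, of how a control-plane NF might assemble the identifying information summarized in embodiments 6-14 above (an address identifying the first NF, an address identifying the replacing second NF, and criteria such as an APN, a PDN type or an FQ-CSID) before notifying a peer node. It is not part of the claimed subject matter and does not reproduce the GTPv2-C encoding of 3GPP TS 29.274; the class names, field layout and the build_notification helper are assumptions made purely for illustration.

    from dataclasses import dataclass, field
    from ipaddress import ip_address
    from typing import List, Optional


    @dataclass
    class PdnConnectionCriteria:
        # Criteria identifying which PDN connections are to be moved (illustrative only).
        apn: Optional[str] = None        # e.g. the APN currently in use
        pdn_type: Optional[str] = None   # e.g. "IPv4", "IPv6", "IPv4v6"
        fq_csid: Optional[str] = None    # FQ-CSID allocated by the first NF


    @dataclass
    class PgwIpAddressChangeIE:
        # Illustrative container for the identifying information of embodiments 7-14.
        old_nf_ip: str                   # IP address identifying the first NF
        new_nf_ip: str                   # IP address identifying the second NF (same NF set)
        criteria: List[PdnConnectionCriteria] = field(default_factory=list)

        def validate(self) -> None:
            # Both addresses must parse and differ, otherwise there is nothing to move.
            if ip_address(self.old_nf_ip) == ip_address(self.new_nf_ip):
                raise ValueError("old and new NF addresses must differ")


    def build_notification(ie: PgwIpAddressChangeIE) -> dict:
        # Build a toy request payload carrying the IE; it stands in for, e.g.,
        # an Update Bearer Request or an Echo Request per embodiment 6.
        ie.validate()
        return {
            "message": "UPDATE_BEARER_REQUEST",
            "pgw_ip_address_change": {
                "old_ip": ie.old_nf_ip,
                "new_ip": ie.new_nf_ip,
                "criteria": [vars(c) for c in ie.criteria],
            },
        }


    if __name__ == "__main__":
        ie = PgwIpAddressChangeIE(
            old_nf_ip="198.51.100.10",
            new_nf_ip="198.51.100.11",
            criteria=[PdnConnectionCriteria(apn="internet", pdn_type="IPv4")],
        )
        print(build_notification(ie))

In this sketch the peer receiving such a payload would match its stored PDN connections against the criteria and redirect subsequent control-plane signaling to new_ip, which is the behavior summarized for the peer node in embodiments 31-36.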


CLAIMS
What is claimed is:
1. A method for notifying peer network nodes in a core network that a group of one or more connection contexts served by a first network function (NF) are to be moved to a second NF, the method implemented by one of the first and second NFs and comprising: determining one or more connection contexts to be moved from the first NF to the second NF; sending a request message to at least one peer network node in the core network indicating that the one or more connection contexts are to be moved from the first NF to the second NF, wherein the first NF and the second NF are in a same NF set; and wherein the request message comprises identifying information identifying the one or more connection contexts to be moved from the first NF to the second NF.
2. The method of claim 1 wherein the first NF and the second NF are interchangeable NF instances of the same service type supporting the same service and the same one or more network slices.
3. The method of any one of claims 1-2 wherein the first NF and the second NF are respective Packet Data Network Gateways (PGWs) belonging to the same NF set.
4. The method of any one of claims 1-2 wherein the first NF and the second NF are respective combined PGW and Session Management Functions (SMFs) belonging to the same NF set.
5. The method of any one of claims 1-4 wherein the PGWs serve as Control Plane (CP) functions.
6. The method of any one of claims 1-5 wherein the request message comprises one of: a message requesting an address change; an Update Bearer Request message; and an Echo Request message.
7. The method of any one of claims 1-6 wherein the identifying information comprises a PGW IP Address Change Information Element (IE).
8. The method of claim 7 wherein the PGW IP Address Change IE comprises: an IP address identifying the first NF; an IP address identifying the second NF that is to replace the first NF; and one or more criteria for identifying the one or more Packet Data Network (PDN) connections that are to be moved from the first NF to the second NF.
9. The method of any one of claims 7-8 wherein the IP addresses identifying the first and second NFs are comprised in respective Fully Qualified Tunnel Endpoint Identifiers (F-TEID).
10. The method of claim 9 wherein the IP address identifying the first NF is used for control plane signaling for a connection context.
11. The method of any one of claims 8-10 wherein the one or more criteria comprise information stored in the peer network nodes identifying at least one PDN connection.
12. The method of claim 11 wherein the one or more criteria comprise information identifying the at least one PDN connection by an Access Point Name (APN) and a PDN Type.
13. The method of claim 12 wherein the APN is an APN currently in use.
14. The method of claim 12 wherein the one or more criteria comprise a Fully Qualified Connection Set Identifier (FQ-CSID) allocated by the first NF.
15. The method of any one of claims 1-2 wherein the first and second NFs are CP functions, and wherein the peer network node in the core network is a User Plane (UP) NF.
16. The method of any one of claims 1-2 and 15 wherein the IP addresses identifying the first and second CP functions comprise respective CP Fully Qualified Session Endpoint Identifiers (F-SEID).
17. The method of any one of claims 1-2 and 14-16 wherein the request message comprises one of: a message requesting an address change; a Packet Forwarding Control Protocol (PFCP) Association Update Request message; and a Heartbeat Request message.
18. The method of any one of claims 1-2 and 14-17 wherein the identifying information comprises a CP function IP Address Change Information Element (IE).
19. The method of claim 18 wherein the CP function IP Address Change IE comprises: an IP address identifying the first CP function; an IP address identifying the second CP function that is to replace the first CP function; and one or more criteria for identifying one or more PFCP sessions that are to be moved from the first CP function to the second CP function.
20. The method of claim 19 wherein the one or more criteria comprise a subset of PFCP session data stored at a User Plane (UP) function, which can be used to identify at least one PFCP session.
21. The method of any one of the preceding claims wherein the one or more connection contexts to be moved from the first NF to the second NF are: associated with respective PDN connections; or associated with respective PFCP sessions.
22. The method of claim 1 wherein the at least one peer network node in the core network is a Mobility Management Entity (MME).
23. The method of claim 1 wherein the at least one peer network node in the core network is a Serving Gateway (SGW).
24. The method of claim 1 wherein the at least one peer network node in the core network is a Packet Data Network Gateway-User Plane function (PGW-U).
25. The method of claim 1 wherein the at least one peer network node in the core network is a User Plane Function (UPF).
26. The method of any one of the preceding claims wherein the request message is sent responsive to determining that an event associated with the first NF has occurred.
27. A network node for notifying peer network nodes in a core network that a group of one or more connection contexts served by a first network function (NF) are to be moved to a second NF, the network node comprising: communications interface circuitry configured to communicate data packets with peer network nodes in a core network; and processing circuitry configured to: determine one or more connection contexts to be moved from the first NF to the second NF; send a request message to one or more peer network nodes in the core network indicating that the one or more connection contexts are to be moved from the first NF to the second NF, wherein the first NF and the second NF are in a same NF set; and wherein the request message comprises identifying information identifying the one or more connection contexts to be moved from the first NF to the second NF.
28. The network node of claim 27 wherein the processing circuitry is further configured to perform the method of any one of claims 2-26.
29. A non-transitory computer readable medium having computer instructions stored thereon that, when executed by a processing circuit of a network node configured to notify peer network nodes in a core network that a group of one or more connection contexts served by a first network function (NF) are to be moved to a second NF, cause the network node to: determine one or more connection contexts to be moved from the first NF to the second NF; send a request message to one or more peer network nodes in the core network indicating that the one or more connection contexts are to be moved from the first NF to the second NF, wherein the first NF and the second NF are in a same NF set; and wherein the request message comprises identifying information identifying the one or more connection contexts to be moved from the first NF to the second NF.
30. A computer program product for notifying peer network nodes in a core network that a group of one or more connection contexts served by a first network function (NF) are to be moved to a second NF, the computer program product comprising software instructions which, when run on at least one processing circuit in a network node, cause the network node to execute the method according to any one of claims 1-26.
31. A method for notifying peer network nodes in a core network that a group of one or more connection contexts served by a first network function (NF) are to be moved to a second NF, the method implemented by a peer network node in the core network and comprising: receiving a request message from one of the first NF and the second NF indicating that the group of one or more connection contexts are to be moved from the first NF to the second NF, wherein the first NF and the second NF are in a same NF set, and wherein the request message comprises identifying information identifying the group of one or more connection contexts to be moved from the first NF to the second NF; and sending, to the second NF, information for subsequent connection context signaling for the group of one or more connection contexts that were moved from the first NF to the second NF.
32. The method of claim 31 wherein the first NF and the second NF are respective Packet Data Network Gateways (PGWs) belonging to the same NF set.
33. The method of any one of claims 31 and 32 wherein the identifying information is an address change Information Element (IE) comprising: an IP address identifying the first NF; an IP address identifying the second NF that is to replace the first NF; and one or more criteria for identifying the one or more Packet Data Network (PDN) connections that are to be moved from the first NF to the second NF.
34. The method of any one of claims 31-33 further comprising updating at least one or more connection contexts with information indicating that the first NF is being replaced by the second NF.
35. The method of claim 31 wherein the first and second NFs are CP functions, and wherein the peer network node in the core network is a User Plane (UP) NF.
36. The method of any one of claims 31 and 35 wherein the address change IE comprises a CP function IP Address Change Information Element (IE) comprising: an IP address identifying the first CP function; an IP address identifying the second CP function that is to replace the first CP function; and one or more criteria for identifying one or more PFCP sessions that are to be moved from the first CP function to the second CP function.
37. A peer network node in a core network for notifying peer network nodes in the core network that a group of one or more connection contexts served by a first network function (NF) are to be moved to a second NF, the peer network node comprising: communications interface circuitry configured to communicate data packets with a plurality of Network Functions (NFs) in a same NF set; and processing circuitry configured to: receive a request message from one of a first NF and a second NF indicating that the one or more connection contexts are to be moved from the first NF to the second NF, wherein the first NF and the second NF are in a same NF set, and wherein the request message comprises identifying information identifying the one or more connection contexts to be moved from the first NF to the second NF; and send, to the second NF, information for subsequent connection context signaling for the group of one or more connection contexts that were moved from the first NF to the second NF.
38. The peer network node of claim 37 wherein the processing circuitry is further configured to perform the method of any one of claims 32-36.
39. A non-transitory computer readable medium having computer instructions stored thereon that, when executed by a processing circuit of a peer network node in a core network, cause the peer network node to: receive a request message from one of a first NF and a second NF indicating that the one or more connection contexts are to be moved from the first NF to the second NF, wherein the first NF and the second NF are in a same NF set, and wherein the request message comprises identifying information identifying the one or more connection contexts to be moved from the first NF to the second NF; and send, to the second NF, information for subsequent connection context signaling for the group of one or more connection contexts that were moved from the first NF to the second NF.
40. A computer program product for notifying peer network nodes in the core network that a group of one or more connection contexts served by a first network function (NF) are to be moved to a second NF, the computer program product comprising software instructions which, when run on at least one processing circuit of a peer network node, causes the peer network node to execute the method according to any one of claims 32-37.
41. The method of any of the preceding claims wherein the identifying information identifying the one or more connection contexts to be moved from the first NF to the second NF is an Information Element (IE).
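As a counterpart to the sketch following the embodiments above, the snippet below illustrates, again purely as an assumption-laden example and not as the claimed implementation, how a peer node (for instance an SGW or a UP function, per claims 22-25 and 31-36) might apply the received identifying information: it selects the connection contexts anchored on the first NF that match the criteria and repoints them to the second NF's address for subsequent signaling. The dictionary keys and helper names are invented for this illustration only.

    from typing import Dict, List


    def matches(context: Dict, criteria: List[Dict]) -> bool:
        # True if a stored connection context satisfies any of the received criteria;
        # an empty criteria list means every context on the old NF is to be moved.
        if not criteria:
            return True
        for c in criteria:
            if all(context.get(k) == v for k, v in c.items() if v is not None):
                return True
        return False


    def apply_address_change(contexts: List[Dict], old_ip: str, new_ip: str,
                             criteria: List[Dict]) -> int:
        # Repoint matching contexts from old_ip to new_ip; returns how many were moved.
        moved = 0
        for ctx in contexts:
            if ctx.get("pgw_c_ip") == old_ip and matches(ctx, criteria):
                ctx["pgw_c_ip"] = new_ip  # subsequent signaling for this context goes to the second NF
                moved += 1
        return moved


    if __name__ == "__main__":
        contexts = [
            {"pgw_c_ip": "198.51.100.10", "apn": "internet", "pdn_type": "IPv4"},
            {"pgw_c_ip": "198.51.100.10", "apn": "ims", "pdn_type": "IPv4v6"},
        ]
        n = apply_address_change(contexts, "198.51.100.10", "198.51.100.11",
                                 [{"apn": "internet", "pdn_type": "IPv4"}])
        print(n, contexts)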
PCT/EP2021/079081, filed 2021-10-20, priority date 2020-10-20, Notification of packet data network gateway (PGW) IP address change, WO2022084385A1 (en)

Applications Claiming Priority (2)
US202063093957P, priority date 2020-10-20, filing date 2020-10-20
US 63/093,957, priority date 2020-10-20

Publications (1)
WO2022084385A1

Family ID
78302787

Family Applications (1)
PCT/EP2021/079081 (WO2022084385A1), priority date 2020-10-20, filing date 2021-10-20, Notification of packet data network gateway (PGW) IP address change

Country Status (1)
WO (1) WO2022084385A1 (en)



Legal Events
121: EP: the EPO has been informed by WIPO that EP was designated in this application (ref document number 21749763, country of ref document EP, kind code A1)
NENP: non-entry into the national phase (ref country code DE)
122: EP: PCT application non-entry in European phase (ref document number 21749763, country of ref document EP, kind code A1)