WO2019149574A1 - Enabling resiliency capability information exchange - Google Patents

Enabling resiliency capability information exchange

Info

Publication number
WO2019149574A1
Authority
WO
WIPO (PCT)
Prior art keywords
resiliency
capability information
interface
memory
level
Prior art date
Application number
PCT/EP2019/051487
Other languages
French (fr)
Inventor
Joseph Thaliath
Parijat BHATTACHARJEE
Shivanand KADADI
Prasanna KM
Gayatri VE
Original Assignee
Nokia Technologies Oy
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nokia Technologies Oy filed Critical Nokia Technologies Oy
Publication of WO2019149574A1 publication Critical patent/WO2019149574A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/08 Configuration management of networks or network elements
    • H04L41/085 Retrieval of network configuration; Tracking network configuration history
    • H04L41/0853 Retrieval of network configuration; Tracking network configuration history by actively collecting configuration information or by backing up configuration information
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L69/00 Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L69/40 Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass for recovering from a failure of a protocol instance or entity, e.g. service redundancy protocols, protocol state redundancy or protocol service redirection

Definitions

  • Various example embodiments relate to communications.
  • Communication systems are large systems and comprise a multitude of different entities and apparatuses. Operators of communication systems may purchase the entities and apparatuses from one or more vendors. If the entities and apparatuses are all from the same vendor, the apparatuses most likely operate well with each other. If the system comprises apparatuses from multiple vendors, the interfaces between apparatuses must be carefully defined so that apparatuses of different vendors are able to communicate with each other fluently.
  • an apparatus in a radio access network comprising: at least one processor and at least one memory including a computer program code, wherein the at least one memory and the computer program code are configured, with the at least one processor, to cause the apparatus to perform operations comprising: maintain resiliency capability information related to the apparatus; control communication with a second apparatus of the radio access network over an interface and control transmission and reception of resiliency capability information with the second apparatus.
  • a method in an apparatus in a radio access network comprising: maintaining resiliency capability information related to the apparatus; communicating with a second apparatus of the radio access network over an interface; and transmitting and receiving resiliency capability information with the second apparatus.
  • Figure 1 illustrates a general architecture of an exemplary system
  • Figures 2A and 2B illustrate some examples of realizations of possible radio access networks of a communication network;
  • Figure 3 is a flowchart illustrating an example of an embodiment of the subject matter described herein;
  • Figure 4 is a signalling chart illustrating an example signalling in connection with resiliency capability and information exchange.
  • Figure 5 illustrates a simplified example of an apparatus in which some embodiments may be applied.
  • UMTS universal mobile telecommunications system
  • UTRAN universal terrestrial radio access network
  • LTE long term evolution
  • WLAN wireless local area network
  • WiMAX worldwide interoperability for microwave access
  • PCS personal communications services
  • WCDMA wideband code division multiple access
  • UWB ultra-wideband
  • MANET mobile ad-hoc network
  • IMS Internet Protocol multimedia subsystems
  • Figure 1 depicts examples of simplified system architectures only showing some elements and functional entities, all being logical units, whose implementation may differ from what is shown.
  • the connections shown in Fig. 1 are logical connections; the actual physical connections may be different. It is apparent to a person skilled in the art that the system typically comprises also other functions and structures than those shown in Fig. 1.
  • the embodiments are not, however, restricted to the system given as an example but a person skilled in the art may apply the solution to other communication systems provided with necessary properties.
  • Figure 1 shows a part of an exemplifying radio access network.
  • Fig. 1 shows user devices 100 and 102 configured to be in a wireless connection on one or more communication channels in a cell with an access node (such as (e/g)NodeB) 104 providing the cell.
  • the physical link from a user device to a (e/g)NodeB is called uplink or reverse link and the physical link from the (e/g)NodeB to the user device is called downlink or forward link.
  • (e/g)NodeBs or their functionalities may be implemented by using any node, host, server or access point etc. entity suitable for such a usage.
  • a communications system typically comprises more than one (e/g)NodeB in which case the (e/g)NodeBs may also be configured to communicate with one another over links, wired or wireless, designed for the purpose. These links may be used for data and signalling purposes.
  • the (e/g)NodeB is a computing device configured to control the radio resources of the communication system it is coupled to.
  • the NodeB may also be referred to as a base station, an access point or any other type of interfacing device including a relay station capable of operating in a wireless environment.
  • the (e/g)NodeB includes or is coupled to transceivers. From the transceivers of the (e/g)NodeB, a connection is provided to an antenna unit that establishes bi-directional radio links to user devices.
  • the antenna unit may comprise a plurality of antennas or antenna elements.
  • the (e/g)NodeB is further connected to core network 106 (CN or next generation core NGC).
  • core network 106 CN or next generation core NGC.
  • the counterpart on the CN side can be a serving gateway (S-GW, routing and forwarding user data packets), a packet data network gateway (P-GW) for providing connectivity of user devices (UEs) to external packet data networks, or a mobility management entity (MME), etc.
  • S-GW serving gateway
  • P-GW packet data network gateway
  • MME mobility management entity
  • the user device also called UE, user equipment, user terminal, terminal device, etc.
  • UE user equipment
  • user terminal terminal device
  • any feature described herein with a user device may be implemented with a corresponding apparatus, such as a relay node.
  • a relay node is a layer 3 relay (self-backhauling relay) towards the base station.
  • the user device typically refers to a portable computing device that includes wireless mobile communication devices operating with or without a subscriber identification module (SIM), including, but not limited to, the following types of devices: a mobile station (mobile phone), smartphone, personal digital assistant (PDA), handset, device using a wireless modem (alarm or measurement device, etc.), laptop and/or touch screen computer, tablet, game console, notebook, and multimedia device.
  • SIM subscriber identification module
  • a user device may also be a nearly exclusive uplink only device, of which an example is a camera or video camera loading images or video clips to a network.
  • a user device may also be a device having capability to operate in an Internet of Things (IoT) network, which is a scenario in which objects are provided with the ability to transfer data over a network without requiring human-to-human or human-to-computer interaction.
  • the user device may also utilise the cloud.
  • a user device may comprise a small portable device with radio parts (such as a watch, earphones or eyeglasses) and the computation is carried out in the cloud.
  • the user device (or in some embodiments a layer 3 relay node) is configured to perform one or more of user equipment functionalities.
  • the user device may also be called a subscriber unit, mobile station, remote terminal, access terminal, user terminal or user equipment (UE) just to mention but a few names or apparatuses.
  • CPS cyber-physical system
  • ICT devices: sensors, actuators, processors, microcontrollers, etc.
  • Mobile cyber-physical systems, in which the physical system in question has inherent mobility, are a subcategory of cyber-physical systems. Examples of mobile physical systems include mobile robotics and electronics transported by humans or animals.
  • apparatuses have been depicted as single entities, different units, processors and/or memory units (not all shown in Fig. 1) may be implemented.
  • 5G enables using multiple input - multiple output (MIMO) antennas, many more base stations or nodes than the LTE (a so-called small cell concept), including macro sites operating in co-operation with smaller stations and employing a variety of radio technologies depending on service needs, use cases and/or spectrum available.
  • 5G mobile communications supports a wide range of use cases and related applications including video streaming, augmented reality, different ways of data sharing and various forms of machine-type applications (such as (massive) machine-type communications (mMTC)), including vehicular safety, different sensors and real-time control.
  • 5G is expected to have multiple radio interfaces, namely below 6GHz, cmWave and mmWave, and also to be integrable with existing legacy radio access technologies, such as the LTE.
  • Integration with the LTE may be implemented, at least in the early phase, as a system, where macro coverage is provided by the LTE and 5G radio interface access comes from small cells by aggregation to the LTE.
  • 5G is planned to support both inter-RAT operability (such as LTE-5G) and inter-RI operability (inter-radio interface operability, such as below 6GHz - cmWave, below 6GHz - cmWave - mmWave).
  • inter-RAT operability such as LTE-5G
  • inter-RI operability inter-radio interface operability, such as below 6GHz - cmWave, below 6GHz - cmWave - mmWave.
  • One of the concepts considered to be used in 5G networks is network slicing in which multiple independent and dedicated virtual sub-networks (network instances) may be created within the same infrastructure to run services that have different requirements on latency, reliability, throughput and mobility.
  • the current architecture in LTE networks is fully distributed in the radio and fully centralized in the core network.
  • the low latency applications and services in 5G require bringing the content close to the radio, which leads to local break-out and multi-access edge computing (MEC).
  • MEC multi-access edge computing
  • 5G enables analytics and knowledge generation to occur at the source of the data. This approach requires leveraging resources that may not be continuously connected to a network such as laptops, smartphones, tablets and sensors.
  • MEC provides a distributed computing environment for application and service hosting. It also has the ability to store and process content in close proximity to cellular subscribers for faster response time.
  • Edge computing covers a wide range of technologies such as wireless sensor networks, mobile data acquisition, mobile signature analysis, cooperative distributed peer-to-peer ad hoc networking and processing also classifiable as local cloud/fog computing and grid/mesh computing, dew computing, mobile edge computing, cloudlet, distributed data storage and retrieval, autonomic self-healing networks, remote cloud services, augmented and virtual reality, data caching, Internet of Things (massive connectivity and/or latency critical), critical communications (autonomous vehicles, traffic safety, real-time analytics, time-critical control, healthcare applications).
  • the communication system is also able to communicate with other networks, such as a public switched telephone network or the Internet 112, or utilise services provided by them.
  • the communication network may also be able to support the usage of cloud services, for example at least part of core network operations may be carried out as a cloud service (this is depicted in Fig. 1 by "cloud" 114).
  • the communication system may also comprise a central control entity, or a like, providing facilities for networks of different operators to cooperate for example in spectrum sharing.
  • Edge cloud may be brought into the radio access network (RAN) by utilizing network function virtualization (NFV) and software defined networking (SDN).
  • RAN radio access network
  • NFV network function virtualization
  • SDN software defined networking
  • Using edge cloud may mean that access node operations are carried out, at least partly, in a server, host or node operationally coupled to a remote radio head or base station comprising radio parts. It is also possible that node operations will be distributed among a plurality of servers, nodes or hosts.
  • Application of cloudRAN architecture enables RAN real time functions being carried out at the RAN side (in a distributed unit, DU 104) and non-real time functions being carried out in a centralized manner (in a centralized unit, CU 108).
  • 5G may also utilize satellite communication to enhance or complement the coverage of 5G service, for example by providing backhauling.
  • Possible use cases are providing service continuity for machine-to-machine (M2M) or Internet of Things (IoT) devices or for passengers on board vehicles, or ensuring service availability for critical communications and future railway/maritime/aeronautical communications.
  • Satellite communication may utilize geostationary earth orbit (GEO) satellite systems, but also low earth orbit (LEO) satellite systems, in particular mega-constellations (systems in which hundreds of (nano)satellites are deployed).
  • GEO geostationary earth orbit
  • LEO low earth orbit
  • Each satellite 110 in the mega-constellation may cover several satellite-enabled network entities that create on-ground cells.
  • the on-ground cells may be created through an on-ground relay node 104 or by a gNB located on-ground or in a satellite.
  • the depicted system is only an example of a part of a radio access system and in practice, the system may comprise a plurality of (e/g)NodeBs, the user device may have an access to a plurality of radio cells and the system may also comprise other apparatuses, such as physical layer relay nodes or other network elements, etc. At least one of the (e/g)NodeBs may be a Home(e/g)NodeB. Additionally, in a geographical area of a radio communication system a plurality of different kinds of radio cells as well as a plurality of radio cells may be provided.
  • Radio cells may be macro cells (or umbrella cells) which are large cells, usually having a diameter of up to tens of kilometers, or smaller cells such as micro-, femto- or picocells.
  • the (e/g)NodeBs of Figure 1 may provide any kind of these cells.
  • a cellular radio system may be implemented as a multilayer network including several kinds of cells. Typically, in multilayer networks, one access node provides one kind of a cell or cells, and thus a plurality of (e/g)NodeBs are required to provide such a network structure.
  • a network which is able to use “plug-and-play” (e/g)NodeBs includes, in addition to Home (e/g)NodeBs (H(e/g)NodeBs), a home node B gateway, or HNB-GW (not shown in Figure 1).
  • HNB-GW HNB Gateway
  • a HNB Gateway (HNB-GW), which is typically installed within an operator’s network, may aggregate traffic from a large number of HNBs back to a core network.
  • radio access network may be split into two logical entities called Central Unit (CU) and Distributed Unit (DU).
  • CU Central Unit
  • DU Distributed Unit
  • both CU and DU are supplied by the same vendor. Thus they are designed together and interworking between the units is easy.
  • the interface between CU and DU, denoted the F1 interface, is currently being standardized by 3GPP. Network operators may therefore in the future have the flexibility to choose different vendors for CU and DU. Different vendors can provide different failure and recovery characteristics for the units. If the failure and recovery scenarios of the units are not handled in a coordinated manner, this will result in inconsistent states in the CU and DU (which may lead to subsequent call failures, for example).
  • Figures 2A and 2B illustrate some examples of realizations of possible radio access networks of a communication network where embodiments of the invention may be applied.
  • the radio access network is divided into a Central Unit (CU) 200 and Distributed Unit (DU) 202.
  • User plane network functions may be grouped into different clusters which are completely isolated with respect to the network connectivity between DU and CU.
  • the CU may comprise control plane 204 and one or more clusters of user planes 206, 208.
  • the DU 202 is not a cloud based realization.
  • the DU 202 is a cloud based realization as an edge cloud and it may comprise control plane 210 and one or more clusters of user planes 212, 214.
  • the interface F1 between CU 200 and DU 202 may be divided into more than one layer, subinterface or part.
  • the control plane 204 of CU may be connected to DU 202 or to the control plane 210 of DU via the F1AP interface.
  • control plane message exchange goes via F1AP interface.
  • the user planes of CU may be connected to DU 202 or to the user planes 212, 214 of DU via the F1-U interface, which provides the data plane path.
  • FIG. 3 illustrates an example of an embodiment.
  • the high availability or resiliency capabilities of CU and DU are exchanged with each other over the F1 interface.
  • the high availability or resiliency capability of any network unit of a radio access system can be exchanged with another network unit of the radio access system.
  • CU and DU are used as examples of network units of a radio access system which exchange the information.
  • a first apparatus of a radio access system is configured to maintain resiliency capability information related to the apparatus.
  • the apparatus may be CU or DU, for example.
  • the apparatus may be realised as a cloud-based realization.
  • the first apparatus of a radio access system is configured to communicate with a second apparatus of the radio access network over an interface.
  • the second apparatus may be CU if the first apparatus is DU, and vice versa.
  • the interface may be the F1 interface or some other interface between the apparatuses.
  • the first apparatus of a radio access system is configured to transmit and receive resiliency capability information with the second apparatus.
  • the resiliency capability information is exchanged when the interface between the apparatuses is set up or whenever the resiliency capability information related to the apparatus changes. For example, if there is a dynamic change in the capability information of an apparatus triggered by internal or external events, the resiliency capability information may be exchanged. This dynamic change of resiliency capability may help the system adapt to varying system capability and operator needs.
  • An example of an internal event which may cause a capability change is a link connectivity failure towards the database which stores information required for resiliency.
  • An example of an external event which may cause a capability change is when an operator modifies the capability information based on a subscription change.
  • apparatuses may exchange resiliency capability information periodically.
  • an apparatus may receive a query regarding the resiliency capability information of the apparatus and the apparatus may transmit the resiliency capability information as a response to the query.
  • the resiliency capability information comprises a resiliency or high availability capable flag.
  • the flag may indicate the resiliency properties of the apparatus.
  • the communicating apparatuses, such as CU and DU, may exchange more resiliency-related information, as it is then known that the other communicating apparatus supports resiliency.
  • the apparatuses may exchange the estimated recovery time required by the apparatus to recover after failure detection.
  • the non-failed apparatus may be configured to wait for the indicated recovery time after failure detection of the other apparatus. After the estimated recovery time has elapsed, an appropriate recovery action may be taken so that normal operation may be resumed. Based on the estimated recovery time, resource wastage may be reduced, as reconnection to a failed apparatus is not performed until it is assumed to be recovered.
  • the apparatuses such as CU and DU can also exchange the level of resiliency with each other.
  • level 1 may indicate that the apparatus is capable of providing resiliency with no loss of data or context.
  • Level 2 may indicate that the apparatus is capable of providing resiliency with loss of data or context.
  • CU or DU
  • DU or CU
  • Regarding the level 2 resiliency level: if DU transmitted to CU that its resiliency capability is "can recover from failure within the estimated recovery time, but with only static cell information and all UE context will be lost", then when CU detects a DU failure, CU needs to reset the UE contexts when the DU recovers from the failure.
  • CU/DU can enforce a similar level of resiliency as the other entity. For example, CU may not be storing the buffer in the user plane. In this case, when a CU failure is detected, DU also need not buffer the control messages towards the same user plane in the CU during the recovery period. This may optimize resource utilization.
  • DU may perform a recovery action taking this information into account.
  • when DU detects a failure of CU, it can perform a recovery action immediately by bringing down the corresponding cells towards the user terminal(s), so the user terminal(s) can connect back to another cell or DU immediately after failure detection.
  • CU may perform a recovery action taking this information into account. For example, upon detecting any failure of DU, CU will perform a reset for all cells of a DU. Alternatively, upon detecting any failure of DU, CU may initiate user terminal paging towards another DU to trigger user terminal reconnection without releasing the user terminal context in CU or in the core network. This procedure may be useful if the user terminal does not reconnect to another cell or DU (due to ongoing downlink TCP traffic, for example). The paging may be based on previous user terminal measurement reports.
  • a recovery action may be, for example, to buffer data until the expiry of the estimated recovery time.
  • Another possible action is to connect to a redundant endpoint connection provided by CU/DU after the estimated recovery time of the other entity.
  • a recovery action may also simply be to reconnect after the estimated recovery time of the failed entity.
  • the resiliency capability information may comprise resiliency-related information required to perform recovery action which can be exchanged due to different event triggers (e.g. interface F1 setup, periodically, or after a failure).
  • CU or DU can provide the operational status of itself and its dependent entities periodically or as a response to a query from the other apparatus.
  • the CU user plane status may be provided to DU as a response to a query or as a broadcast by the CU control plane.
  • the resiliency information granularity may vary.
  • the resiliency information may be exchanged at per-interface (or subinterface) level (such as F1AP, F1-U), per-cell level, per-user-terminal level, per-bearer level or per-slice level.
  • F1AP and F1-U interfaces may be configured to have different recovery times.
  • different cells, user terminal, bearers and/or slices may have different resiliency configurations.
  • the CU or DU can also perform different recovery actions based on the resiliency capability, resiliency information and failure type (such as F1AP interface failure or F1-U interface failure).
  • Having resiliency enabled in an apparatus can involve extra overheads, such as storing context in persistent storage, extra messaging, extra timers and extra monitoring. These overheads can be reduced by enabling resiliency only in part or only as needed. For example, resiliency can be disabled for low priority user terminals. In case of failure detection, reset or release of contexts can be triggered only for the low priority user terminals for which the resiliency feature is disabled. Thus resiliency can be enabled, disabled or partially enabled based on capability negotiation between apparatuses.
  • resiliency can be disabled. Further, if the estimated recovery time is acceptable for some user terminals or bearers, then resiliency can selectively be enabled for these user terminals or bearers and otherwise disabled.
  • the resiliency capability can be exchanged during cell setup, user terminal setup or bearer setup. This enables differentiated resiliency behavior at per cell or per UE or per bearer level even though a failure can happen at network function level (such as user plane network function). Slicing is one example where the resiliency capability may be exchanged in the above scenarios.
  • the second apparatus may take a default recovery action if the first apparatus has indicated that resiliency is enabled and has transmitted an estimated recovery time to the second apparatus, but the first apparatus does not recover within the indicated estimated recovery time.
  • the recovered apparatus may inform the second apparatus about the failure and recovery so that necessary clean up or initialization actions may be taken.
  • the non-failed apparatus may provide the failed apparatus context information so that the failed apparatus may more easily continue normal operation.
  • when there has been a failure in CU, DU can provide sequence number (SN) information to the Packet Data Convergence Protocol (PDCP) in CU. CU can then utilize this information in recovery. Thus, there is no need for the PDCP in CU to store this information in persistent storage. Storing dynamic information per packet (such as the SN) would cause high overhead, especially in case of high throughput, and this may now be avoided when DU transmits the information.
  • when there has been a failure in DU, CU can provide cell, user terminal and bearer information (which are generally exchanged during the respective setups) after DU recovery so that DU can resume operation and continue sessions.
  • apparatuses may exchange operational status information regarding themselves and/or other dependent entities. Based on this information, apparatuses such as DU (or CU) can differentiate whether the connectivity has failed to CU (or DU) itself or to an entity dependent on CU (or DU).
  • when DU detects that the user plane in CU is down, it may query the CU control plane to confirm the status of the user plane. If only the link between DU and the CU user plane is down, CU can migrate the context to a user plane in another cluster and instruct DU to connect to it.
  • Figure 4 is a signalling chart illustrating an example signalling in connection with resiliency capability and information exchange between DU 202 and CU 200.
  • Initial resiliency capability exchange and negotiation 400 comprises in this example three messages (a sketch of this exchange is given after this list).
  • First DU 202 transmits an initial message 402, which comprises resiliency or high availability capable flag and optionally estimated recovery time and other recovery related information.
  • CU 200 responds with an initial reply message 404, which comprises resiliency or high availability capable flag and optionally estimated recovery time and other recovery related information. It also acknowledges the resiliency message received from the DU.
  • DU transmits an initial reply message 406 which acknowledges the resiliency message received from the CU.
  • CU transmits a message 410 which comprises information on cell setup, user terminal setup and bearer setup.
  • the message may comprise a resiliency flag.
  • DU responds with an acknowledgement message 412, which may optionally comprise a resiliency flag.
  • CU may transmit resiliency capability modification message 418 to DU.
  • the message comprises a resiliency flag.
  • an example of an internal event may be a link connectivity failure towards the database which stores information required for resiliency, and an example of an external event may be when an operator modifies the capability information based on a subscription change.
  • DU detects 420 the failure. If resiliency is enabled, DU is configured to wait for the estimated recovery time transmitted by the CU before performing any recovery actions.
  • Figure 5 illustrates an embodiment.
  • the figure illustrates a simplified example of an apparatus 500 of a radio access network in which embodiments of the invention may be applied.
  • the apparatus may be a Central Unit (CU) 200 or a Distributed Unit (DU) 202.
  • CU Central Unit
  • DU Distributed Unit
  • the apparatus is depicted herein as an example illustrating some embodiments. It is apparent to a person skilled in the art that the apparatus may also comprise other functions and/or structures and not all described functions and structures are required. Although the apparatus has been depicted as one entity, different modules and memory may be implemented in one or more physical or logical entities. For example, the apparatus may be realized using cloud computing or distributed computing with several physical entities located in different places but connected with each other.
  • the apparatus of the example includes a control circuitry 502 configured to control at least part of the operation of the apparatus.
  • the apparatus may comprise a memory 504 for storing data. Furthermore the memory may store software or applications 506 executable by the control circuitry 502. The memory may be integrated in the control circuitry.
  • the control circuitry 502 is configured to execute one or more applications.
  • the applications may be stored in the memory 504.
  • the apparatus may further comprise one or more interfaces 508, 510 operationally connected to the control circuitry 502.
  • the interface may connect 512, 514 the apparatus to other apparatuses of the radio access system.
  • the interface may connect DU to CU and vice versa, so that DU and CU may communicate with each other.
  • the interface may be the F1 interface and comprise the F1AP and F1-U interfaces.
  • the applications 506 stored in the memory 504 executable by the control circuitry 502 may cause the apparatus to maintain resiliency capability information related to the apparatus, communicate with a second apparatus of the radio access network over an interface, and transmit and receive resiliency capability information with the second apparatus.
  • the apparatuses or controllers able to perform the above-described steps may be implemented as an electronic digital computer, or a circuitry which may comprise a working memory (RAM), a central processing unit (CPU), and a system clock.
  • the CPU may comprise a set of registers, an arithmetic logic unit, and a controller.
  • the controller or the circuitry is controlled by a sequence of program instructions transferred to the CPU from the RAM.
  • the controller may contain a number of microinstructions for basic operations. The implementation of microinstructions may vary depending on the CPU design.
  • the program instructions may be coded by a programming language, which may be a high-level programming language, such as C, Java, etc., or a low-level programming language, such as a machine language, or an assembler.
  • the electronic digital computer may also have an operating system, which may provide system services to a computer program written with the program instructions.
  • circuitry refers to all of the following: (a) hardware-only circuit implementations, such as implementations in only analog and/or digital circuitry, and (b) combinations of circuits and software (and/or firmware), such as (as applicable): (i) a combination of processor(s) or (ii) portions of processor(s)/software including digital signal processor(s), software, and memory(ies) that work together to cause an apparatus to perform various functions, and (c) circuits, such as a microprocessor(s) or a portion of a microprocessor(s), that require software or firmware for operation, even if the software or firmware is not physically present.
  • circuitry applies to all uses of this term in this application.
  • circuitry would also cover an implementation of merely a processor (or multiple processors) or a portion of a processor and its (or their) accompanying software and/or firmware.
  • circuitry would also cover, for example and if applicable to the particular element, a baseband integrated circuit or applications processor integrated circuit for a mobile phone or a similar integrated circuit in a server, a cellular network device, or another network device.
  • An embodiment provides a computer program embodied on a distribution medium, comprising program instructions which, when loaded into an electronic apparatus, are configured to control the apparatus to execute the embodiments described above.
  • the computer program may be in source code form, object code form, or in some intermediate form, and it may be stored in some sort of carrier, which may be any entity or device capable of carrying the program.
  • carrier include a record medium, computer memory, read-only memory, and a software distribution package, for example.
  • the computer program may be executed in a single electronic digital computer or it may be distributed amongst a number of computers.
  • the apparatus may also be implemented as one or more integrated circuits, such as application-specific integrated circuits (ASIC).
  • Other hardware embodiments are also feasible, such as a circuit built of separate logic components.
  • a hybrid of these different implementations is also feasible.
  • the apparatus comprises means for maintaining resiliency capability information related to the apparatus, means for communicating with a second apparatus of the radio access network over an interface and means for transmitting and receiving resiliency capability information with the second apparatus.
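
By way of a non-limiting illustration, the following Python sketch replays the Figure 4 exchange described above: the three-message initial negotiation 400 (messages 402, 404, 406) and a later capability modification (message 418). The message structure, field names and values are hypothetical stand-ins; the actual F1 encoding is defined by 3GPP, not by this sketch.

```python
from typing import Dict, List

def initial_exchange(du_capability: Dict, cu_capability: Dict) -> List[Dict]:
    """Replay messages 402, 404 and 406 of Figure 4 (illustrative only)."""
    msg_402 = {"type": "INITIAL", "from": "DU", "capability": du_capability}
    msg_404 = {"type": "INITIAL_REPLY", "from": "CU",
               "capability": cu_capability, "ack": True}   # acknowledges 402
    msg_406 = {"type": "INITIAL_REPLY", "from": "DU", "ack": True}  # acknowledges 404
    return [msg_402, msg_404, msg_406]

def capability_modification(new_capability: Dict) -> Dict:
    """Message 418: a later update triggered by an internal or external event."""
    return {"type": "CAPABILITY_MODIFICATION", "from": "CU",
            "capability": new_capability}

flow = initial_exchange(
    {"resiliency_capable": True, "estimated_recovery_time_s": 2.0},
    {"resiliency_capable": True, "estimated_recovery_time_s": 5.0},
)
flow.append(capability_modification({"resiliency_capable": False}))
for message in flow:
    print(message)
```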

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

An apparatus in a radio access network and a method in an apparatus are disclosed. The method comprises maintaining (300) resiliency capability information related to the apparatus; communicating (302) with a second apparatus of the radio access network over an interface; and exchanging (304) resiliency capability information with the second apparatus.

Description

ENABLING RESILIENCY CAPABILITY INFORMATION EXCHANGE
Technical Field
Various example embodiments relate to communications.
Background
In communication systems, reliability and fast recovery from possible failures with minimal damage is essential. Communication systems are large systems and comprise a multitude of different entities and apparatuses. Operators of communication systems may purchase the entities and apparatuses from one or more vendors. If the entities and apparatuses are all from the same vendor, the apparatuses most likely operate well with each other. If the system comprises apparatuses from multiple vendors, the interfaces between apparatuses must be carefully defined so that apparatuses of different vendors are able to communicate with each other fluently.
Regardless of the vendors, the aim is to design communication systems in such a manner that the systems are as resilient as possible to failures. Failures are bound to happen every now and then, as every technical system encounters them at some point, but the recovery time may be kept as short as possible, for example by careful design of the interfaces between the apparatuses.
Brief description
The following presents a simplified summary of the invention in order to provide a basic understanding of some aspects of the invention. This summary is not an extensive overview of the invention. It is not intended to identify key/critical elements of the invention or to delineate the scope of the invention. Its sole purpose is to present some concepts of the invention in a simplified form as a prelude to a more detailed description that is presented later.
According to an aspect of the present invention, there is provided an apparatus in a radio access network, comprising: at least one processor and at least one memory including a computer program code, wherein the at least one memory and the computer program code are configured, with the at least one processor, to cause the apparatus to perform operations comprising: maintain resiliency capability information related to the apparatus; control communication with a second apparatus of the radio access network over an interface and control transmission and reception of resiliency capability information with the second apparatus.
According to an aspect of the present invention, there is provided a method in an apparatus in a radio access network, comprising: maintaining resiliency capability information related to the apparatus; communicating with a second apparatus of the radio access network over an interface; and transmitting and receiving resiliency capability information with the second apparatus.
Brief description of drawings
One or more examples of implementations are set forth in more detail in the accompanying drawings and the description below. Other features will be apparent from the description and drawings, and from the claims.
Figure 1 illustrates a general architecture of an exemplary system;
Figures 2A and 2B illustrate some examples of realizations of possible radio access networks of a communication network;
Figure 3 is a flowchart illustrating an example of an embodiment of the subject matter described herein;
Figure 4 is a signalling chart illustrating an example signalling in connection with resiliency capability and information exchange; and
Figure 5 illustrates a simplified example of an apparatus in which some embodiments may be applied.
Detailed description of some embodiments
In the following, different exemplifying embodiments will be described using, as an example of an access architecture to which the embodiments may be applied, a radio access architecture based on long term evolution advanced (LTE Advanced, LTE-A) or new radio (NR, 5G), without restricting the embodiments to such an architecture, however. It is obvious for a person skilled in the art that the embodiments may also be applied to other kinds of communications networks having suitable means by adjusting parameters and procedures appropriately. Some examples of other options for suitable systems are the universal mobile telecommunications system (UMTS) radio access network (UTRAN or E-UTRAN), long term evolution (LTE, the same as E-UTRA), wireless local area network (WLAN or WiFi), worldwide interoperability for microwave access (WiMAX), Bluetooth®, personal communications services (PCS), ZigBee®, wideband code division multiple access (WCDMA), systems using ultra-wideband (UWB) technology, sensor networks, mobile ad-hoc networks (MANETs) and Internet Protocol multimedia subsystems (IMS) or any combination thereof.
Figure 1 depicts examples of simplified system architectures only showing some elements and functional entities, all being logical units, whose implementation may differ from what is shown. The connections shown in Fig. 1 are logical connections; the actual physical connections may be different. It is apparent to a person skilled in the art that the system typically comprises also other functions and structures than those shown in Fig. 1. The embodiments are not, however, restricted to the system given as an example but a person skilled in the art may apply the solution to other communication systems provided with necessary properties.
The example of Figure 1 shows a part of an exemplifying radio access network.
Fig. 1 shows user devices 100 and 102 configured to be in a wireless connection on one or more communication channels in a cell with an access node (such as (e/g)NodeB) 104 providing the cell. The physical link from a user device to a (e/g)NodeB is called uplink or reverse link and the physical link from the (e/g)NodeB to the user device is called downlink or forward link. It should be appreciated that (e/g)NodeBs or their functionalities may be implemented by using any node, host, server or access point etc. entity suitable for such a usage.
A communications system typically comprises more than one (e/g)NodeB, in which case the (e/g)NodeBs may also be configured to communicate with one another over links, wired or wireless, designed for the purpose. These links may be used for data and signalling purposes. The (e/g)NodeB is a computing device configured to control the radio resources of the communication system it is coupled to. The NodeB may also be referred to as a base station, an access point or any other type of interfacing device including a relay station capable of operating in a wireless environment. The (e/g)NodeB includes or is coupled to transceivers. From the transceivers of the (e/g)NodeB, a connection is provided to an antenna unit that establishes bi-directional radio links to user devices. The antenna unit may comprise a plurality of antennas or antenna elements. The (e/g)NodeB is further connected to core network 106 (CN or next generation core NGC). Depending on the system, the counterpart on the CN side can be a serving gateway (S-GW, routing and forwarding user data packets), a packet data network gateway (P-GW) for providing connectivity of user devices (UEs) to external packet data networks, or a mobility management entity (MME), etc.
The user device (also called UE, user equipment, user terminal, terminal device, etc.) illustrates one type of an apparatus to which resources on the air interface are allocated and assigned, and thus any feature described herein with a user device may be implemented with a corresponding apparatus, such as a relay node. An example of such a relay node is a layer 3 relay (self-backhauling relay) towards the base station.
The user device typically refers to a portable computing device that includes wireless mobile communication devices operating with or without a subscriber identification module (SIM), including, but not limited to, the following types of devices: a mobile station (mobile phone), smartphone, personal digital assistant (PDA), handset, device using a wireless modem (alarm or measurement device, etc.), laptop and/or touch screen computer, tablet, game console, notebook, and multimedia device. It should be appreciated that a user device may also be a nearly exclusive uplink only device, of which an example is a camera or video camera loading images or video clips to a network. A user device may also be a device having capability to operate in an Internet of Things (IoT) network, which is a scenario in which objects are provided with the ability to transfer data over a network without requiring human-to-human or human-to-computer interaction. The user device may also utilise the cloud. In some applications, a user device may comprise a small portable device with radio parts (such as a watch, earphones or eyeglasses) and the computation is carried out in the cloud. The user device (or in some embodiments a layer 3 relay node) is configured to perform one or more of user equipment functionalities. The user device may also be called a subscriber unit, mobile station, remote terminal, access terminal, user terminal or user equipment (UE), just to mention but a few names or apparatuses.
Various techniques described herein may also be applied to a cyber-physical system (CPS) (a system of collaborating computational elements controlling physical entities). CPS may enable the implementation and exploitation of massive amounts of interconnected ICT devices (sensors, actuators, processors, microcontrollers, etc.) embedded in physical objects at different locations. Mobile cyber-physical systems, in which the physical system in question has inherent mobility, are a subcategory of cyber-physical systems. Examples of mobile physical systems include mobile robotics and electronics transported by humans or animals.
Additionally, although the apparatuses have been depicted as single entities, different units, processors and/or memory units (not all shown in Fig. 1) may be implemented.
5G enables using multiple input - multiple output (MIMO) antennas, many more base stations or nodes than the LTE (a so-called small cell concept), including macro sites operating in co-operation with smaller stations and employing a variety of radio technologies depending on service needs, use cases and/or spectrum available. 5G mobile communications supports a wide range of use cases and related applications including video streaming, augmented reality, different ways of data sharing and various forms of machine-type applications (such as (massive) machine-type communications (mMTC)), including vehicular safety, different sensors and real-time control. 5G is expected to have multiple radio interfaces, namely below 6GHz, cmWave and mmWave, and also to be integrable with existing legacy radio access technologies, such as the LTE. Integration with the LTE may be implemented, at least in the early phase, as a system, where macro coverage is provided by the LTE and 5G radio interface access comes from small cells by aggregation to the LTE. In other words, 5G is planned to support both inter-RAT operability (such as LTE-5G) and inter-RI operability (inter-radio interface operability, such as below 6GHz - cmWave, below 6GHz - cmWave - mmWave). One of the concepts considered to be used in 5G networks is network slicing, in which multiple independent and dedicated virtual sub-networks (network instances) may be created within the same infrastructure to run services that have different requirements on latency, reliability, throughput and mobility.
The current architecture in LTE networks is fully distributed in the radio and fully centralized in the core network. The low latency applications and services in 5G require bringing the content close to the radio, which leads to local break-out and multi-access edge computing (MEC). 5G enables analytics and knowledge generation to occur at the source of the data. This approach requires leveraging resources that may not be continuously connected to a network, such as laptops, smartphones, tablets and sensors. MEC provides a distributed computing environment for application and service hosting. It also has the ability to store and process content in close proximity to cellular subscribers for faster response time. Edge computing covers a wide range of technologies such as wireless sensor networks, mobile data acquisition, mobile signature analysis, cooperative distributed peer-to-peer ad hoc networking and processing also classifiable as local cloud/fog computing and grid/mesh computing, dew computing, mobile edge computing, cloudlet, distributed data storage and retrieval, autonomic self-healing networks, remote cloud services, augmented and virtual reality, data caching, Internet of Things (massive connectivity and/or latency critical), and critical communications (autonomous vehicles, traffic safety, real-time analytics, time-critical control, healthcare applications).
The communication system is also able to communicate with other networks, such as a public switched telephone network or the Internet 112, or utilise services provided by them. The communication network may also be able to support the usage of cloud services, for example at least part of core network operations may be carried out as a cloud service (this is depicted in Fig. 1 by "cloud" 114). The communication system may also comprise a central control entity, or the like, providing facilities for networks of different operators to cooperate for example in spectrum sharing.
Edge cloud may be brought into the radio access network (RAN) by utilizing network function virtualization (NFV) and software defined networking (SDN). Using edge cloud may mean that access node operations are carried out, at least partly, in a server, host or node operationally coupled to a remote radio head or base station comprising radio parts. It is also possible that node operations will be distributed among a plurality of servers, nodes or hosts. Application of cloudRAN architecture enables RAN real time functions being carried out at the RAN side (in a distributed unit, DU 104) and non-real time functions being carried out in a centralized manner (in a centralized unit, CU 108).
It should also be understood that the distribution of labour between core network operations and base station operations may differ from that of the LTE or even be non-existent. Some other technology advancements that will probably be used are Big Data and all-IP, which may change the way networks are being constructed and managed. 5G (or new radio, NR) networks are being designed to support multiple hierarchies, where MEC servers can be placed between the core and the base station or node B (gNB). It should be appreciated that MEC can be applied in 4G networks as well.
In an embodiment, 5G may also utilize satellite communication to enhance or complement the coverage of 5G service, for example by providing backhauling. Possible use cases are providing service continuity for machine-to-machine (M2M) or Internet of Things (IoT) devices or for passengers on board vehicles, or ensuring service availability for critical communications and future railway/maritime/aeronautical communications. Satellite communication may utilize geostationary earth orbit (GEO) satellite systems, but also low earth orbit (LEO) satellite systems, in particular mega-constellations (systems in which hundreds of (nano)satellites are deployed). Each satellite 110 in the mega-constellation may cover several satellite-enabled network entities that create on-ground cells. The on-ground cells may be created through an on-ground relay node 104 or by a gNB located on-ground or in a satellite.
It is obvious for a person skilled in the art that the depicted system is only an example of a part of a radio access system and in practice, the system may comprise a plurality of (e/g)NodeBs, the user device may have an access to a plurality of radio cells and the system may also comprise other apparatuses, such as physical layer relay nodes or other network elements, etc. At least one of the (e/g)NodeBs may be a Home(e/g)NodeB. Additionally, in a geographical area of a radio communication system a plurality of different kinds of radio cells as well as a plurality of radio cells may be provided. Radio cells may be macro cells (or umbrella cells) which are large cells, usually having a diameter of up to tens of kilometers, or smaller cells such as micro-, femto- or picocells. The (e/g)NodeBs of Figure 1 may provide any kind of these cells. A cellular radio system may be implemented as a multilayer network including several kinds of cells. Typically, in multilayer networks, one access node provides one kind of a cell or cells, and thus a plurality of (e/g)NodeBs are required to provide such a network structure.
For fulfilling the need for improving the deployment and performance of communication systems, the concept of “plug-and-play” (e/g)NodeBs has been introduced. Typically, a network which is able to use “plug-and-play” (e/g)NodeBs includes, in addition to Home (e/g)NodeBs (H(e/g)NodeBs), a home node B gateway, or HNB-GW (not shown in Figure 1). A HNB Gateway (HNB-GW), which is typically installed within an operator’s network, may aggregate traffic from a large number of HNBs back to a core network.
As mentioned, the radio access network may be split into two logical entities called Central Unit (CU) and Distributed Unit (DU). In the prior art, both CU and DU are supplied by the same vendor. Thus they are designed together and interworking between the units is easy. The interface between CU and DU, denoted the F1 interface, is currently being standardized by 3GPP. Network operators may therefore in the future have the flexibility to choose different vendors for CU and DU. Different vendors can provide different failure and recovery characteristics for the units. If the failure and recovery scenarios of the units are not handled in a coordinated manner, this will result in inconsistent states in the CU and DU (which may lead to subsequent call failures, for example). Thus there is a need to enable the CU and DU from different vendors to coordinate operation to handle failure conditions and recovery, taking into account the potential differences in resiliency capabilities between the CU and DU.
Figures 2A and 2B illustrate some examples of realizations of possible radio access networks of a communication network where embodiments of the invention may be applied. In these examples the radio access network is divided into a Central Unit (CU) 200 and a Distributed Unit (DU) 202. User plane network functions may be grouped into different clusters which are completely isolated with respect to the network connectivity between DU and CU. Thus, the CU may comprise a control plane 204 and one or more clusters of user planes 206, 208. In the example of Fig. 2A, the DU 202 is not a cloud based realization. In the example of Fig. 2B, the DU 202 is a cloud based realization as an edge cloud and it may comprise a control plane 210 and one or more clusters of user planes 212, 214.
The interface F1 between CU 200 and DU 202 may be divided into more than one layer, subinterface or part. The control plane 204 of the CU may be connected to DU 202 or to the control plane 210 of the DU via the F1AP interface. In an embodiment, control plane message exchange goes via the F1AP interface. The user planes of the CU may be connected to DU 202 or to the user planes 212, 214 of the DU via the F1-U interface, which provides the data plane path.
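
As a minimal illustration of this split, the following Python sketch models F1AP and F1-U as separate logical links of the same F1 interface; the class names and endpoint identifiers are invented for illustration and do not originate from the 3GPP F1 specification.

```python
from dataclasses import dataclass
from enum import Enum

class F1Subinterface(Enum):
    F1AP = "control plane"  # control plane message exchange
    F1_U = "user plane"     # data plane path

@dataclass
class F1Link:
    subinterface: F1Subinterface
    remote: str  # hypothetical identifier of the peer endpoint

# The CU control plane talks to the DU over F1AP, while each CU user
# plane cluster has its own F1-U connection towards the DU.
links = [
    F1Link(F1Subinterface.F1AP, "du-control-plane"),
    F1Link(F1Subinterface.F1_U, "du-user-plane-cluster-1"),
    F1Link(F1Subinterface.F1_U, "du-user-plane-cluster-2"),
]
for link in links:
    print(link.subinterface.name, "->", link.remote)
```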
The flowchart of Figure 3 illustrates an example of an embodiment. In an embodiment, the high availability or resiliency capabilities of CU and DU are exchanged with each other over the F1 interface. Basically, the high availability or resiliency capability of any network unit of a radio access system can be exchanged with another network unit of the radio access system. Below, CU and DU are used as examples of network units of a radio access system which exchange the information.
In step 300, a first apparatus of a radio access system is configured to maintain resiliency capability information related to the apparatus. The apparatus may be CU or DU, for example. The apparatus may be realised as a cloud-based realization.
In step 302, the first apparatus of the radio access system is configured to communicate with a second apparatus of the radio access network over an interface. Here the second apparatus may be the CU if the first apparatus is the DU, and vice versa. The interface may be the F1 interface or some other interface between the apparatuses.
In step 304, the first apparatus of the radio access system is configured to transmit and receive resiliency capability information with the second apparatus.
In an embodiment, the resiliency capability information is exchanged when the interface between the apparatuses is set up or whenever the resiliency capability information related to the apparatus changes. For example, if there is a dynamic change in the capability information of an apparatus triggered by internal or external events, the resiliency capability information may be exchanged. This dynamic change of resiliency capability may help the system to adapt to varying system capability and operator needs.
An example of an internal event which may cause a capability change is a link connectivity failure towards the database which stores information required for resiliency.
An example of an external event which may cause a capability change is when the operator modifies the capability information based on a subscription change.
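As an illustration of these triggers, a minimal Python sketch follows. The class name, the send callback and the event hook are assumptions made for the sketch and are not part of any standardized F1 procedure: capability information is transmitted once at interface setup and re-transmitted whenever an internal or external event changes it.

class CapabilityNotifier:
    # Re-send capability information at interface setup and on any change.
    def __init__(self, send, capability):
        self.send = send            # placeholder for the F1 transmit primitive
        self.capability = capability
        self.send(capability)       # exchange at interface setup

    def on_event(self, new_capability):
        # An internal event (e.g. lost link to the resiliency database) or an
        # external event (e.g. operator subscription change) updated the info.
        if new_capability != self.capability:
            self.capability = new_capability
            self.send(new_capability)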
In an embodiment, apparatuses may exchange resiliency capability information periodically.
In an embodiment, an apparatus may receive a query regarding the resiliency capability information of the apparatus and the apparatus may transmit the resiliency capability information as a response to the query.
In an embodiment, the resiliency capability information comprises a resiliency or high availability capable flag. The flag may indicate the resiliency properties of the apparatus. Based on the enabled state of the resiliency capable flag, the communicating apparatuses, such as CU and DU, may exchange further resiliency related information, as it is then known that the other communicating apparatus supports resiliency.
For example, if the resiliency capable flag is enabled, the apparatuses may exchange the estimated recovery time required by the apparatus to recover after failure detection. The non-failed apparatus may be configured to wait for the indicated recovery time after failure detection of the other apparatus. After the estimated recovery time has elapsed, an appropriate recovery action may be taken so that normal operation may be resumed. Based on the estimated recovery time, resource wastage may be reduced, as reconnection to a failed apparatus is not attempted until it is assumed to have recovered.
If the resiliency capable flag is enabled, the apparatuses such as CU and DU can also exchange the level of resiliency with each other. For example, level 1 may indicate that the apparatus is capable of providing resiliency with no loss of data or context. Level 2 may indicate that the apparatus is capable of providing resiliency with loss of data or context.
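A minimal Python sketch of the exchanged information discussed above follows; the field names, the time unit and the two-level encoding are illustrative assumptions, as the embodiments do not prescribe a concrete wire format.

from dataclasses import dataclass
from enum import Enum
from typing import Optional

class ResiliencyLevel(Enum):
    LEVEL_1 = 1   # recovery with no loss of data or context
    LEVEL_2 = 2   # recovery with loss of data or context

@dataclass
class ResiliencyCapability:
    resiliency_capable: bool                            # the capable flag itself
    estimated_recovery_time_s: Optional[float] = None   # meaningful only when capable
    level: Optional[ResiliencyLevel] = None             # meaningful only when capable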
For example, if the CU (or DU) has indicated in the initial resiliency capability exchange that it is capable of recovering with full context (level 1) after an indicated recovery time, but the other apparatus, such as the DU (or CU), detects that it has failed to recover after the estimated time, then the necessary steps may be taken to clean up the corresponding context.
As another example, regarding level 2 resiliency, if the DU has indicated to the CU that its resiliency capability is "can recover from failure within the estimated recovery time, but with only static cell information, and all UE context will be lost", then when the CU detects a DU failure, the CU needs to reset the UE contexts when the DU recovers from the failure. Based on the level of resiliency (recovery with or without loss of context/data) exchanged between CU and DU, the CU/DU can enforce a similar level of resiliency as the other entity. For example, the CU may not be storing the buffer in the user plane. In this case, when a CU failure is detected, the DU also need not buffer the control messages towards the same user plane in the CU during the recovery period. This may optimize resource utilization.
There are various actions that can be taken by CU or DU based on the exchanged state of the resiliency capability flag.
For example, if the CU has indicated that it has no resiliency capability, the DU may perform a recovery action taking this information into account. When the DU detects a failure of the CU, it can perform the recovery action immediately by bringing down the corresponding cells towards the user terminal(s), so that the user terminal(s) can connect to another cell or DU immediately after failure detection.
If, on the other hand, the DU has indicated that it has no resiliency capability, the CU may perform a recovery action taking this information into account. For example, upon detecting any failure of the DU, the CU will perform a reset for all cells of the DU. Alternatively, upon detecting any failure of the DU, the CU may initiate user terminal paging towards another DU to trigger user terminal reconnection without releasing the user terminal context in the CU or in the core network. This procedure may be useful if the user terminal does not reconnect to another cell or DU (due to ongoing downlink TCP traffic, for example). The paging may be based on previous user terminal measurement reports.
If both CU and DU have indicated that they are resiliency capable, different recovery actions can be taken by the CU/DU based on the resiliency capability, resiliency information and failure type exchanged. A recovery action may be, for example, to buffer data until the expiry of the estimated recovery time. Another possible action is to connect to a redundant endpoint connection provided by the CU/DU after the estimated recovery time of the other entity has elapsed. A recovery action may also be simply to reconnect after the estimated recovery time of the failed entity.
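Continuing the ResiliencyCapability sketch above, one possible way of selecting among these recovery actions is sketched below; the action names and the blocking wait are assumptions made purely for illustration.

import time

def select_recovery_action(peer: ResiliencyCapability) -> str:
    # Pick an action after detecting a failure of the peer apparatus.
    if not peer.resiliency_capable:
        # Peer cannot recover by itself: act immediately, e.g. bring down
        # cells (DU side) or reset all cells of the failed DU (CU side).
        return "immediate_reset"
    # Peer is resiliency capable: wait out its estimated recovery time,
    # optionally buffering data in the meantime.
    time.sleep(peer.estimated_recovery_time_s or 0.0)
    if peer.level is ResiliencyLevel.LEVEL_1:
        return "reconnect_keeping_context"     # peer recovers with full context
    return "reconnect_and_reset_ue_contexts"   # level 2: peer lost UE context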
In an embodiment, the resiliency capability information may comprise resiliency-related information required to perform a recovery action, which can be exchanged upon different event triggers (e.g. at F1 interface setup, periodically, or after a failure).
For example, the CU or DU can provide the operational status of itself and of its dependent entities periodically or as a response to a query from another apparatus. The CU user plane status may be provided to the DU as a response to a query or as a broadcast by the CU control plane.
In an embodiment, the resiliency information granularity may vary. For example, the resiliency information may be exchanged at per interface (or subinterface) level (such as F1AP, F1-U), per cell level, per user terminal level, per bearer level or per slice level. For example, the F1AP and F1-U interfaces may be configured to have different recovery times. Further, different cells, user terminals, bearers and/or slices may have different resiliency configurations. The CU or DU can also perform different recovery actions based on the resiliency capability, resiliency information and failure type (such as an F1AP interface failure or an F1-U interface failure).
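Continuing the same sketch, per-granularity resiliency information could, for example, be kept in a mapping keyed by granularity and identifier; all identifiers and recovery times below are invented for illustration only.

resiliency_config = {
    ("interface", "F1AP"): ResiliencyCapability(True, 2.0, ResiliencyLevel.LEVEL_1),
    ("interface", "F1-U"): ResiliencyCapability(True, 5.0, ResiliencyLevel.LEVEL_2),
    ("ue", "ue-0042"):     ResiliencyCapability(False),  # e.g. a low priority UE
    ("slice", "embb-1"):   ResiliencyCapability(True, 3.0, ResiliencyLevel.LEVEL_2),
}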
Having resiliency enabled in an apparatus can involve extra overheads, such as storing context in persistent storage, extra messaging, extra timers and extra monitoring. These overheads can be reduced by enabling resiliency only in part or only as needed. For example, resiliency can be disabled for low priority user terminals. In case of failure detection, reset or release of contexts can then be triggered only for the low priority user terminals for which the resiliency feature is disabled. Thus resiliency can be enabled, disabled or partially enabled based on capability negotiation between the apparatuses.
For example, if the estimated recovery time of an apparatus is not acceptable to the other apparatus, then resiliency can be disabled. Further, if the estimated recovery time is acceptable for some user terminals or bearers, then resiliency can selectively be enabled for these user terminals or bearers and otherwise disabled.
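A possible negotiation rule following this paragraph is sketched below, again reusing the ResiliencyCapability sketch; the per-bearer downtime budgets and bearer identifiers are assumed values.

def negotiate_enable(peer: ResiliencyCapability, max_acceptable_s: float) -> bool:
    # Enable resiliency only if the peer's estimated recovery time is acceptable.
    return (peer.resiliency_capable
            and peer.estimated_recovery_time_s is not None
            and peer.estimated_recovery_time_s <= max_acceptable_s)

peer = ResiliencyCapability(True, 4.0, ResiliencyLevel.LEVEL_1)   # example peer
bearer_budget_s = {"voice-drb": 1.0, "besteffort-drb": 10.0}      # assumed budgets
enabled = {b: negotiate_enable(peer, t) for b, t in bearer_budget_s.items()}
# -> {"voice-drb": False, "besteffort-drb": True}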
In an embodiment, the resiliency capability can be exchanged during cell setup, user terminal setup or bearer setup. This enables differentiated resiliency behavior at per cell, per UE or per bearer level, even though a failure can happen at network function level (such as a user plane network function). Slicing is one example where the resiliency capability may be exchanged in the above scenarios.
In an embodiment, if a first apparatus has indicated that resiliency is enabled and has transmitted an indicated estimated recovery time to a second apparatus, but the first apparatus does not recover within the indicated estimated recovery time, then the second apparatus may take a default recovery action.
In case a first apparatus has a failure but recovers before the second apparatus detects the failure, the recovered apparatus may inform the second apparatus about the failure and recovery so that the necessary clean-up or initialization actions may be taken.
In an embodiment, after a failure has occurred and been detected by a non-failed apparatus, and the informed estimated recovery time has elapsed, the non-failed apparatus may provide the failed apparatus with context information so that the failed apparatus may more easily resume normal operation.
For example, when there has been a failure in the CU, the DU can provide sequence number, SN, information to the Packet Data Convergence Protocol, PDCP, in the CU. The CU can then utilize this information in recovery. Thus, there is no need for the PDCP in the CU to store this information in persistent storage. Storing dynamic information per packet (such as the SN) would cause high overhead, especially in case of high throughput, and this may now be avoided when the DU transmits the information. In a similar manner, when there has been a failure in the DU, the CU can provide cell, user terminal and bearer information (which are generally exchanged during the respective setups) after DU recovery so that the DU can resume operation and continue sessions.
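A minimal sketch of such a recovery-assist message follows; the message layout, bearer identifiers and sequence numbers are assumptions for illustration, not a defined F1AP message.

def build_recovery_assist(pdcp_sn_per_bearer):
    # Message carrying the DU's view of per-bearer PDCP sequence numbers,
    # sent to the CU after the CU has recovered.
    return {"msg_type": "recovery_assist", "pdcp_sn": dict(pdcp_sn_per_bearer)}

msg = build_recovery_assist({"drb-1": 41237, "drb-2": 980})  # assumed ids and SNs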
In an embodiment, apparatuses may exchange operational status information regarding themselves and/or other dependent entities. Based on this information, an apparatus such as the DU (or CU) can differentiate whether the connectivity has failed towards the CU (or DU) itself or towards an entity dependent on the CU (or DU).
For example, when the DU detects that the user plane in the CU is down, it may query the CU control plane to confirm the status of the user plane. If only the link between the DU and the CU user plane is down, the CU can migrate the context to a user plane in another cluster and instruct the DU to connect to it.
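This disambiguation step may be sketched as follows; query_control_plane and reconnect are assumed placeholders for the corresponding F1 primitives and are not defined by the embodiments.

def handle_user_plane_link_loss(query_control_plane, reconnect):
    # Disambiguate a CU user plane failure from a mere link failure.
    status = query_control_plane("cu_up_status")
    if status == "up":
        # Only the DU-to-CU-UP link failed; the CU migrates the context to a
        # user plane in another cluster and points the DU at the new endpoint.
        reconnect(query_control_plane("migrated_endpoint"))
    # Otherwise: genuine user plane failure; fall back to the recovery
    # behaviour negotiated for the F1-U interface.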
Figure 4 is a signalling chart illustrating an example signalling in connection with resiliency capability and information exchange between DU 202 and CU 200.
The initial resiliency capability exchange and negotiation 400 comprises, in this example, three messages. First, the DU 202 transmits an initial message 402, which comprises the resiliency or high availability capable flag and, optionally, the estimated recovery time and other recovery related information. The CU 200 responds with an initial reply message 404, which comprises the resiliency or high availability capable flag and, optionally, the estimated recovery time and other recovery related information. It also acknowledges the resiliency message received from the DU. As a reply, the DU transmits an initial reply message 406 which acknowledges the resiliency message received from the CU.
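Schematically, the three messages could carry content along the following lines, using plain Python dictionaries as stand-ins for the F1AP messages; all field names are assumptions made for the sketch.

initial_message = {                 # 402: DU -> CU
    "resiliency_capable": True,
    "estimated_recovery_time_s": 3.0,   # optional field
}
initial_reply = {                   # 404: CU -> DU, also acknowledges 402
    "resiliency_capable": True,
    "estimated_recovery_time_s": 2.0,   # optional field
    "ack": True,
}
initial_reply_ack = {"ack": True}   # 406: DU -> CU, acknowledges 404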
Then, in this example, two asynchronous setup messages 408 follow. The CU transmits a message 410 which comprises information on cell setup, user terminal setup and bearer setup. Optionally, the message may comprise a resiliency flag. The DU responds with an acknowledgement message 412, which may optionally comprise a resiliency flag.
Triggered by internal or external events 416, the CU may transmit a resiliency capability modification message 418 to the DU. The message comprises a resiliency flag. As mentioned, an example of an internal event may be a link connectivity failure towards the database which stores information required for resiliency, and an example of an external event may be when the operator modifies the capability information based on a subscription change.
Then, in this example, a failure 418 occurs at the CU 200. The DU detects 420 the failure. If resiliency is enabled, the DU is configured to wait for the estimated recovery time transmitted by the CU before performing any recovery actions.
When the estimated recovery time has passed, the DU performs a recovery action 422.
Figure 5 illustrates an embodiment. The figure illustrates a simplified example of an apparatus 500 of a radio access network in which embodiments of the invention may be applied. In some embodiments, the apparatus may be a Central Unit (CU) 200 or a Distributed Unit (DU) 202.
It should be understood that the apparatus is depicted herein as an example illustrating some embodiments. It is apparent to a person skilled in the art that the apparatus may also comprise other functions and/or structures and not all described functions and structures are required. Although the apparatus has been depicted as one entity, different modules and memory may be implemented in one or more physical or logical entities. For example, the apparatus may be realized using cloud computing or distributed computing with several physical entities located in different places but connected with each other.
The apparatus of the example includes a control circuitry 502 configured to control at least part of the operation of the apparatus.
The apparatus may comprise a memory 504 for storing data. Furthermore, the memory may store software or applications 506 executable by the control circuitry 502. The memory may be integrated in the control circuitry.
The control circuitry 502 is configured to execute one or more applications. The applications may be stored in the memory 504.
The apparatus may further comprise one or more interfaces 508, 510 operationally connected to the control circuitry 502. The interfaces may connect 512, 514 the apparatus to other apparatuses of the radio access system. For example, an interface may connect the DU to the CU and vice versa, so that DU and CU may communicate with each other. The interface may be the F1 interface and comprise the F1AP and F1-U interfaces.
In an embodiment, the applications 506 stored in the memory 504 executable by the control circuitry 502 may cause the apparatus to maintain resiliency capability information related to the apparatus, communicate with a second apparatus of the radio access network over an interface, and transmit and receive resiliency capability information with the second apparatus.
The steps and related functions described in the above and attached figures are in no absolute chronological order, and some of the steps may be performed simultaneously or in an order differing from the given one. Other functions can also be executed between the steps or within the steps. Some of the steps can also be left out or replaced with a corresponding step.
The apparatuses or controllers able to perform the above-described steps may be implemented as an electronic digital computer, or a circuitry which may comprise a working memory (RAM), a central processing unit (CPU), and a system clock. The CPU may comprise a set of registers, an arithmetic logic unit, and a controller. The controller or the circuitry is controlled by a sequence of program instructions transferred to the CPU from the RAM. The controller may contain a number of microinstructions for basic operations. The implementation of microinstructions may vary depending on the CPU design. The program instructions may be coded by a programming language, which may be a high-level programming language, such as C, Java, etc., or a low-level programming language, such as a machine language, or an assembler. The electronic digital computer may also have an operating system, which may provide system services to a computer program written with the program instructions.
As used in this application, the term ‘circuitry’ refers to all of the following: (a) hardware-only circuit implementations, such as implementations in only analog and/or digital circuitry, and (b) combinations of circuits and software (and/or firmware), such as (as applicable): (i) a combination of processor(s) or (ii) portions of processor(s)/software including digital signal processor(s), software, and memory(ies) that work together to cause an apparatus to perform various functions, and (c) circuits, such as a microprocessor(s) or a portion of a microprocessor(s), that require software or firmware for operation, even if the software or firmware is not physically present.
This definition of ‘circuitry’ applies to all uses of this term in this application. As a further example, as used in this application, the term ‘circuitry’ would also cover an implementation of merely a processor (or multiple processors) or a portion of a processor and its (or their) accompanying software and/or firmware. The term ‘circuitry’ would also cover, for example and if applicable to the particular element, a baseband integrated circuit or applications processor integrated circuit for a mobile phone or a similar integrated circuit in a server, a cellular network device, or another network device.
An embodiment provides a computer program embodied on a distribution medium, comprising program instructions which, when loaded into an electronic apparatus, are configured to control the apparatus to execute the embodiments described above.
The computer program may be in source code form, object code form, or in some intermediate form, and it may be stored in some sort of carrier, which may be any entity or device capable of carrying the program. Such carriers include a record medium, computer memory, read-only memory, and a software distribution package, for example. Depending on the processing power needed, the computer program may be executed in a single electronic digital computer or it may be distributed amongst a number of computers.
The apparatus may also be implemented as one or more integrated circuits, such as application-specific integrated circuits ASIC. Other hardware embodiments are also feasible, such as a circuit built of separate logic components. A hybrid of these different implementations is also feasible. When selecting the method of implementation, a person skilled in the art will consider the requirements set for the size and power consumption of the apparatus, the necessary processing capacity, production costs, and production volumes, for example.
In an embodiment, the apparatus comprises means for maintaining resiliency capability information related to the apparatus, means for communicating with a second apparatus of the radio access network over an interface and means for transmitting and receiving resiliency capability information with the second apparatus.
It will be obvious to a person skilled in the art that, as the technology advances, the inventive concept can be implemented in various ways. The invention and its embodiments are not limited to the examples described above but may vary within the scope of the claims.

Claims
1. An apparatus in a radio access network, comprising:
at least one processor and at least one memory including a computer program code, wherein the at least one memory and the computer program code are configured, with the at least one processor, to cause the apparatus to perform operations comprising:
maintain resiliency capability information related to the apparatus; control communication with a second apparatus of the radio access network over an interface;
control transmission and reception of resiliency capability information with the second apparatus.
2. The apparatus of claim 1, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus further to:
transmit and receive resiliency capability information when the interface is set up or whenever the resiliency capability information related to the apparatus changes.
3. The apparatus of claim 1 , the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus further to:
transmit resiliency capability information periodically.
4. The apparatus of any preceding claim, wherein the resiliency capability information comprises a flag indicating whether the apparatus supports resiliency or not.
5. The apparatus of any preceding claim, wherein the resiliency capability information comprises estimated recovery time of the apparatus after failure detection.
6. The apparatus of any preceding claim, wherein the resiliency capability information comprises the capability of the apparatus to provide resiliency with or without loss of data.
7. The apparatus of any preceding claim, wherein the resiliency capability information comprises operational status of the apparatus or of entities dependent on the apparatus.
8. The apparatus of any preceding claim, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus further to
receive a query;
transmit resiliency capability information as a response to the query.
9. The apparatus of any preceding claim, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus further to
communicate with the second apparatus utilising an interface comprising more than one subinterface, wherein the resiliency related communication granularity is at per subinterface level.
10. The apparatus of any preceding claim, wherein the communication with the second apparatus is related to cells, user terminals, bearers or slices allocated to user terminals, wherein the resiliency related communication granularity is per cell level, per user terminal level, per bearer level or per slice level.
11. The apparatus of claim 5, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus further to
detect that an error has occurred in the second apparatus;
control transmission of context information to the second apparatus after the estimated recovery time of the second apparatus has elapsed.
12. The apparatus of any preceding claim, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus further to
detect that an error has occurred in the second apparatus;
perform a recovery action regarding the second apparatus based on the resiliency capability information exchanged with the second apparatus.
13. A method in an apparatus in a radio access network, comprising: maintaining resiliency capability information related to the apparatus; communicating with a second apparatus of the radio access network over an interface; and
transmitting and receiving resiliency capability information with the second apparatus.
14. The method of claim 13, further comprising:
transmitting and receiving resiliency capability information when the interface is set up or whenever the resiliency capability information related to the apparatus changes.
15. The method of claim 13, further comprising:
transmitting resiliency capability information periodically.
16. The method of any preceding claim 13 to 15, wherein the resiliency capability information comprises a flag indicating whether the apparatus supports resiliency or not.
17. The method of any preceding claim 13 to 16, wherein the resiliency capability information comprises estimated recovery time of the apparatus after failure detection.
18. The method of any preceding claim 13 to 17, wherein the resiliency capability information comprises the capability of the apparatus to provide resiliency with or without loss of data.
19. The method of any preceding claim 13 to 18, wherein the resiliency capability information comprises operational status of the apparatus or of entities dependent on the apparatus.
20. The method of any preceding claim 13 to 19, further comprising: receiving a query;
transmitting resiliency capability information as a response to the query.
21. The method of any preceding claim 13 to 20, further comprising: communicating with the second apparatus utilising an interface comprising more than one subinterface, wherein the resiliency related communication granularity is at per subinterface level.
22. The method of any preceding claim 13 to 21, wherein the communication with the second apparatus is related to cells, user terminals, bearers or slices allocated to user terminals, wherein the resiliency related communication granularity is per cell level, per user terminal level, per bearer level or per slice level.
23. The method of claim 17, further comprising:
detecting that an error has occurred in the second apparatus;
transmitting context information to the second apparatus after the estimated recovery time of the second apparatus has elapsed.
24. The method of any preceding claim 13 to 23, further comprising: detecting that an error has occurred in the second apparatus;
performing a recovery action regarding the second apparatus based on the resiliency capability information exchanged with the second apparatus.
PCT/EP2019/051487 2018-01-31 2019-01-22 Enabling resiliency capability information exchange WO2019149574A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IN201841003679 2018-01-31
IN201841003679 2018-01-31

Publications (1)

Publication Number Publication Date
WO2019149574A1 true WO2019149574A1 (en) 2019-08-08

Family

ID=65237014

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2019/051487 WO2019149574A1 (en) 2018-01-31 2019-01-22 Enabling resiliency capability information exchange

Country Status (1)

Country Link
WO (1) WO2019149574A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021034906A1 (en) * 2019-08-19 2021-02-25 Q Networks, Llc Methods, systems, kits and apparatuses for providing end-to-end, secured and dedicated fifth generation telecommunication

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
QUALCOMM INCORPORATED: "Support for multiple SCTP associations in F1", vol. RAN WG3, no. Prague, Czech Republic; 20171009 - 20171013, 9 October 2017 (2017-10-09), XP051344060, Retrieved from the Internet <URL:http://www.3gpp.org/ftp/Meetings_3GPP_SYNC/RAN3/Docs/> [retrieved on 20171009] *

Similar Documents

Publication Publication Date Title
EP3766274A1 (en) Determination for conditional handover failure
EP3874856B1 (en) Apparatus and method for utilising uplink resources
US11109284B2 (en) Controlling handover based on network slices supported by base station
EP3981177A1 (en) Providing information
US20220286405A1 (en) Method for controlling communication availability in a cyber-physical system
US20230254684A1 (en) Communication of user terminal having multiple subscription identities
EP3787352B1 (en) Method for user equipment&#39;s registration update
US20220217620A1 (en) Controlling network access
WO2020056594A1 (en) Apparatus and method for data transmission
WO2019149574A1 (en) Enabling resiliency capability information exchange
US20220330263A1 (en) Computing device comprising a pool of terminal devices and a controller
US11870585B1 (en) Adapting hybrid automatic repeat requests
US20240089721A1 (en) Explicit notifications
US20240031415A1 (en) Locating recipient
US20240187914A1 (en) Methods and apparatuses for controlling small data transmission on uplink
US20230036207A1 (en) Method and apparatus for system providing multicast services
US20220159758A1 (en) Apparatuses and Methods for Data Duplication
WO2022238035A1 (en) Inter-ue coordination in groupcast transmissions
WO2022195161A1 (en) Configurations for gaps in device having multiple user subscription identities
WO2022228780A1 (en) Configuring routing in networks
EP4128864A1 (en) Adapting operation of an apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19702039

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19702039

Country of ref document: EP

Kind code of ref document: A1