WO2019030710A1 - Quality of service (qos) management in a distributed radio access network (ran) architecture - Google Patents


Info

Publication number
WO2019030710A1
Authority
WO
WIPO (PCT)
Prior art keywords
qos
flow
drb
information
mapping
Prior art date
Application number
PCT/IB2018/056020
Other languages
French (fr)
Inventor
Angelo Centonza
Elena MYHRE
Alexander Vesely
Matteo FIORANI
Martin Israelsson
Original Assignee
Telefonaktiebolaget Lm Ericsson (Publ)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Telefonaktiebolaget Lm Ericsson (Publ) filed Critical Telefonaktiebolaget Lm Ericsson (Publ)
Publication of WO2019030710A1 publication Critical patent/WO2019030710A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/08 Configuration management of networks or network elements
    • H04L41/0894 Policy-based network configuration management
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/10 Flow control; Congestion control
    • H04L47/24 Traffic characterised by specific attributes, e.g. priority or QoS
    • H04L47/2491 Mapping quality of service [QoS] requirements between different networks
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/70 Admission control; Resource allocation
    • H04L47/80 Actions related to the user profile or the type of traffic
    • H04L47/805 QoS or priority aware

Definitions

  • the present disclosure relates to the standardization work on 5th Generation (5G) Radio Access Network (RAN) architecture.
  • 5G 5th Generation
  • RAN Radio Access Network
  • the CU 10 and DU 12 described above and illustrated in Figure 1 are two new logical nodes: one is a Central Unit (CU) 10 hosting higher-layer protocols, the other is a Distributed Unit (DU) 12 hosting lower-layer protocols.
  • CU Central Unit
  • DU Distributed Unit
  • the CU 10 may itself be split into two entities, one terminating the user plane (UP) interfaces and the other terminating the control plane (CP) interfaces towards the DU 12. Further it was agreed that the CU may host protocols such as Radio Resource Control (RRC) and Packet Data Convergence Protocol (PDCP), while the DU may host protocols such as Radio Link Control (RLC), Medium Access Control (MAC) and Physical Layer Protocol (PHY).
  • RRC Radio Resource Control
  • PDCP Packet Data Convergence Protocol
  • RLC Radio Link Control
  • MAC Medium Access Control
  • PHY Physical Layer Protocol
  • QoS Quality of Service
  • DRB Data Radio Bearer
  • CU Central Unit
  • DU Distributed Unit
  • the method comprises receiving, at the CU, a flow from a User Plane Function (UPF).
  • UPF User Plane Function
  • the method comprises performing, at the CU, a mapping between the flow and the DRB and determining the QoS information associated with the mapping, the QoS information comprising at least one of: QoS information for the DRB and QoS information for the flow.
  • the method comprises providing the QoS information to the DU.
  • the QoS information for the DRB may be a DRB QoS profile and the QoS information for the flow may be a QoS flow profile.
  • the QoS Information may be an aggregate of the DRB QoS profile and the QoS flow profile.
  • the flow may be a QoS flow.
  • the flow may be marked with a QoS flow Identifier (ID) and the mapping may be based at least in part on the QoS flow ID.
  • the CU may receive a plurality of flows from the UPF and performing the mapping may comprise performing a mapping of each of the plurality of flows to the DRB.
  • the CU may receive a plurality of flows from the UPF and performing the mapping may comprise performing a mapping of each of the plurality of flows to a different one of a plurality of DRBs.
  • Specific QoS information may be provided to the DU for each one of the plurality of DRBs.
  • the QoS information may comprise a list of QoS parameters derived from the QoS flow ID and describing a QoS policy to be applied to the DRB, the parameters comprising at least one of: an average throughput, a maximum throughput, a guaranteed throughput, a maximum delay, a maximum delay jitter, a traffic priority level, and a robustness.
  • the CU may be configured with a QoS policy associated to each QoS flow ID.
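The CU-side method summarized above can be sketched as follows. This is an illustrative sketch only, not part of the application: all class and field names (`CentralUnit`, `QosProfile`, `max_throughput_mbps`, etc.) are hypothetical, and the aggregation rules (sum throughputs, take the tightest delay and strictest priority) are one plausible reading of "QoS information comprising at least one of: QoS information for the DRB and QoS information for the flow".

```python
from dataclasses import dataclass

@dataclass
class QosProfile:
    # Parameters of the kinds listed in the text; names are illustrative.
    max_throughput_mbps: float
    max_delay_ms: float
    priority: int

@dataclass
class QosInformation:
    drb_profile: QosProfile   # QoS information for the DRB
    flow_profiles: dict       # QoS information per flow, keyed by QoS Flow ID

class CentralUnit:
    def __init__(self, policy_by_qfi):
        # Per-QoS-Flow-ID policies the CU is configured with (e.g. via OAM).
        self.policy_by_qfi = policy_by_qfi
        self.qfi_to_drb = {}

    def map_flow(self, qfi, drb_id):
        # Mapping between the flow and the DRB, based on the QoS Flow ID.
        self.qfi_to_drb[qfi] = drb_id

    def qos_information_for(self, drb_id):
        # Determine the QoS information associated with the mapping: the
        # per-flow profiles plus a DRB profile aggregated from them.
        flows = {q: p for q, p in self.policy_by_qfi.items()
                 if self.qfi_to_drb.get(q) == drb_id}
        drb_profile = QosProfile(
            max_throughput_mbps=sum(p.max_throughput_mbps for p in flows.values()),
            max_delay_ms=min(p.max_delay_ms for p in flows.values()),
            priority=max(p.priority for p in flows.values()),
        )
        return QosInformation(drb_profile=drb_profile, flow_profiles=flows)
```

The returned `QosInformation` corresponds to the "aggregate of the DRB QoS profile and the QoS flow profile" mentioned above: it carries both the per-flow profiles and the combined DRB-level profile the DU would enforce.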
  • a network node comprising a Central Unit (CU), for providing Quality of Service (QoS) information for a Data Radio Bearer (DRB) to a Distributed Unit (DU).
  • the network node comprises processing circuits and a memory, the memory containing instructions executable by the processing circuits whereby the CU is operative to: receive a flow from a User Plane Function (UPF), perform a mapping between the flow and the DRB and determine the QoS information associated with the mapping, the QoS information comprising at least one of: QoS information for the DRB and QoS information for the flow, and provide the QoS information to the DU.
  • UPF User Plane Function
  • the QoS information for the DRB may be a DRB QoS profile and the QoS information for the flow may be a QoS flow profile.
  • the QoS Information may be an aggregate of the DRB QoS profile and the QoS flow profile.
  • the flow may be a QoS flow.
  • the flow may be marked with a QoS flow Identifier (ID) and the mapping may be based at least in part on the QoS flow ID.
  • the CU may receive a plurality of flows from the UPF and the CU may further be operative to perform a mapping of each of the plurality of flows to the DRB.
  • the CU may receive a plurality of flows from the UPF and the CU may further be operative to perform a mapping of each of the plurality of flows to a different one of a plurality of DRBs.
  • Specific QoS information may be provided to the DU for each one of the plurality of DRBs.
  • the QoS information may comprise a list of QoS parameters derived from the QoS flow ID and describing a QoS policy to be applied to the DRB, the parameters comprising at least one of: an average throughput, a maximum throughput, a guaranteed throughput, a maximum delay, a maximum delay jitter, a traffic priority level, and a robustness.
  • the CU may be configured with a QoS policy associated to each QoS flow ID.
  • Figure 1 is a schematic representation of the gNB architecture with a CU and DUs.
  • Figure 2 is a schematic representation of an architecture according to 5G.
  • Figure 3 is a schematic representation of how QoS Flows are mapped to Access Network (AN) Resources according to some embodiments.
  • Figure 4 is a schematic representation of 5G control node (5GC) multi-Radio Access Technology (multi- RAT) dual connectivity (MR-DC) principles according to some embodiments.
  • 5GC 5G control node
  • multi-RAT multi-Radio Access Technology
  • MR-DC multi-RAT dual connectivity
  • FIG. 5 is a schematic representation of a Radio Protocol Architecture for Master Cell Group (MCG), MCG split bearers, Secondary Cell Group (SCG) and SCG split bearers in 5GC MR-DC according to some embodiments.
  • MCG Master Cell Group
  • SCG Secondary Cell Group
  • Figure 6 is a flowchart of a method in accordance with some embodiments.
  • Figures 7a and 7b are flowcharts of methods in accordance with some embodiments.
  • Figure 8 is a schematic representation of a wireless network in accordance with some embodiments.
  • Figure 9 is a schematic representation of a virtualization environment in accordance with some embodiments.
  • the CU 10 is connected to the 5G Control Node (CN) (5GC) 20 via the NG interface.
  • the NG interface is made of a UP part (NG-U) and a CP part (NG-C). This is shown in Figure 2.
  • the Access Network 30 corresponds to the RAN described previously.
  • the User Plane Function (UPF) 40 is part of the 5GC 20 and is in charge of delivery of UP traffic to the RAN 30.
  • packets sent to the RAN 30 are marked with a QoS Flow ID. This parameter is included in the header of the General Packet Radio Service Tunneling Protocol User (GTP-U) packet including the UP payload and it is assumed to provide the RAN 30 with information about the QoS to be assigned to the service traffic.
  • GTP-U General Packet Radio Service Tunneling Protocol User
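As an illustration of the marking described above, the RAN's use of the QoS Flow ID can be sketched as below. This is a deliberately simplified stand-in (the real GTP-U header encoding is defined by 3GPP and is not reproduced here); `GtpUPacket` and `select_drb` are hypothetical names.

```python
class GtpUPacket:
    # Simplified stand-in for a GTP-U packet as received by the RAN:
    # the QoS Flow ID marking travels alongside the UP payload.
    def __init__(self, qos_flow_id, payload):
        self.qos_flow_id = qos_flow_id
        self.payload = payload

def select_drb(packet, qfi_to_drb, default_drb=None):
    # The RAN reads the QoS Flow ID marking and uses it to decide which
    # DRB (and hence which QoS treatment) the payload receives.
    return qfi_to_drb.get(packet.qos_flow_id, default_drb)
```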
  • MR-DC multi-RAT dual connectivity
  • MN master node
  • SN secondary node
  • UE User Equipment
  • the 5G QoS Indicator (5QI) is used in 3GPP to identify a specific QoS forwarding behaviour for a 5G QoS Flow (similar to the QCI value used for LTE). As such, 5QI defines packet loss rate, packet delay budget, etc.
  • GTP-U is terminated in the CU. Therefore, the QoS Flow Identifier (ID) is received by the CU.
  • the CU, based on such information, will perform a mapping of the received traffic flow packets to a Dedicated Radio Bearer or Data Radio Bearer (DRB).
  • DRB Data Radio Bearer
  • the DU is the node in which most of radio resource management and the whole scheduling function reside. Therefore, a problem is how the DU receives information about the QoS with which the traffic forwarded by the CU has to be handled. Namely, if the CU does not send specific QoS information to the DU, it is not possible for the DU to derive QoS policies to apply to the traffic flows forwarded by the CU.
  • each DRB is associated with a traffic queue. Management of such queues is under DU control. Each queue might need specific scheduling priority and delay budget requirements. However, unless the DU receives information about how each queue should be managed, such QoS policies cannot be applied.
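The per-DRB queue management described above can be sketched as follows. This is an illustrative sketch with hypothetical names (`DistributedUnit`, `configure_drb`), using a strict-priority discipline as one example of a QoS policy the DU could apply once it has the necessary information from the CU.

```python
class DistributedUnit:
    def __init__(self):
        self.queues = {}    # drb_id -> FIFO of packets (one queue per DRB)
        self.priority = {}  # drb_id -> scheduling priority (higher served first)

    def configure_drb(self, drb_id, priority):
        # This is the QoS information the DU needs from the CU; without it
        # the DU cannot order its per-DRB queues meaningfully.
        self.queues.setdefault(drb_id, [])
        self.priority[drb_id] = priority

    def enqueue(self, drb_id, packet):
        self.queues[drb_id].append(packet)

    def schedule_next(self):
        # Strict-priority scheduling: serve the highest-priority
        # non-empty queue first.
        ready = [d for d, q in self.queues.items() if q]
        if not ready:
            return None
        best = max(ready, key=lambda d: self.priority[d])
        return self.queues[best].pop(0)
```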
  • Some embodiments may provide transfer of QoS information between nodes in a distributed RAN architecture, where possibly only some nodes are directly receiving QoS information from the CN (MN, CU), but where other nodes are essential in the enforcement of such QoS (SN, DU).
  • Some mechanisms to communicate to the DU information useful for QoS management of UP traffic are disclosed.
  • Mechanisms for coordination between SN and MN, necessary so that the CU in the SN receives enough information to communicate to its respective DUs, are also disclosed.
  • Certain embodiments described herein may advantageously enable flexible management of QoS in networks where a split architecture and/or MR dual connectivity is deployed.
  • First embodiment Forwarding of QoS parameters from CU to DU.
  • the method comprises receiving, step 601, at the CU, a flow from a User Plane Function (UPF).
  • the method comprises performing, step 602, at the CU, a mapping between the flow and the DRB and determining the QoS information associated with the mapping, the QoS information comprising at least one of: QoS information for the DRB and QoS information for the flow.
  • the method comprises providing, step 603, the QoS information to the DU.
  • the QoS information for the DRB may be a DRB QoS profile and the QoS information for the flow may be a QoS flow profile.
  • the QoS Information may be an aggregate of the DRB QoS profile and the QoS flow profile.
  • the flow may be a QoS flow.
  • the flow may be marked with a QoS flow Identifier (ID) and the mapping may be based at least in part on the QoS flow ID.
  • the CU may receive a plurality of flows from the UPF and performing the mapping may comprise performing a mapping of each of the plurality of flows to the DRB.
  • the CU may receive a plurality of flows from the UPF and performing the mapping may comprise performing a mapping of each of the plurality of flows to a different one of a plurality of DRBs.
  • Specific QoS information may be provided to the DU for each one of the plurality of DRBs.
  • the QoS information may comprise a list of QoS parameters derived from the QoS flow ID and describing a QoS policy to be applied to the DRB, the parameters comprising at least one of: an average throughput, a maximum throughput, a guaranteed throughput, a maximum delay, a maximum delay jitter, a traffic priority level, and a robustness.
  • the CU may be configured with a QoS policy associated to each QoS flow ID.
  • a method is described by which the CU forwards to the DU new QoS information derived from at least the QoS Flow ID and aimed at describing QoS policies the DU should apply to packets within the traffic flow associated to a specific DRB.
  • the CU receives from the User Plane Function, UPF, packets containing a QoS Flow ID.
  • UPF User Plane Function
  • the CU is separated into a User Plane (UP) part and a Control Plane (CP) part.
  • the separation between UP and CP may be absent.
  • the UP part of the CU (CU-UP) determines, based on information including the QoS Flow ID, to which DRB each packet received from the UPF shall be mapped.
  • the CU then sends to the DU a list of QoS parameters describing the QoS policy the DU needs to apply to each DRB.
  • Such list may include parameters specifying: average and maximum throughput; guaranteed throughput; maximum delay; maximum delay jitter; traffic priority level; and robustness, i.e. a measure of how reliable traffic transmission needs to be.
  • the CU can derive such list based on the QoS Flow IDs that have been mapped to the DRB in question.
  • the CU is configured with QoS policies associated to each QoS Flow ID. Such configuration may occur via the Operation and Maintenance (OAM) system.
  • OAM Operation and Maintenance
  • the DU in this case does not have visibility on the original QoS Flow IDs associated to the traffic mapped to a certain DRB, but instead is given a partial or full description of the QoS parameters to apply to traffic mapped to a given DRB.
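The per-DRB parameter list sent from CU to DU in this first embodiment can be sketched as a simple message builder. The function and field names here are hypothetical; only the parameter categories (average/maximum/guaranteed throughput, maximum delay, maximum delay jitter, traffic priority level, robustness) come from the text, and a partial description is permitted as stated above.

```python
# Parameter categories named in the text (identifiers are illustrative).
DRB_QOS_PARAMETERS = {
    "average_throughput", "maximum_throughput", "guaranteed_throughput",
    "maximum_delay", "maximum_delay_jitter", "traffic_priority_level",
    "robustness",
}

def build_drb_qos_message(drb_id, params):
    unknown = set(params) - DRB_QOS_PARAMETERS
    if unknown:
        raise ValueError(f"unexpected QoS parameters: {sorted(unknown)}")
    # A partial description is allowed: the CU may give the DU only
    # some of the parameters for a given DRB.
    return {"drb_id": drb_id, "qos_parameters": dict(params)}
```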
  • Second embodiment Forwarding Quality of Service, QoS, Flow Identifier, ID, from Central Unit, CU, to Distributed Unit, DU.
  • a method executed in a Central Unit (CU), for sending a Quality of Service (QoS) flow Identifier (ID) to Dedicated Radio Bearer (DRB) mapping to a Distributed Unit (DU) in a 5th Generation (5G) Radio Access Network (RAN), comprising receiving a packet containing the QoS Flow Identifier (ID) from a User Plane Function (UPF); determining, e.g., by a User Plane (UP) portion of the CU (UP-CU) or by a Control Plane (CP) portion of the CU (CP-CU), a mapping between the DRB and the QoS flow ID contained in the packet; and sending the QoS flow ID-DRB mapping and the packet to a Distributed Unit (DU).
  • UP User Plane
  • CP Control Plane
  • the QoS flow ID may be associated with the DRB through the QoS flow ID-DRB mapping.
  • the QoS flow ID-DRB mapping may be sent to the DU.
  • a plurality of QoS flow IDs may be associated with at least one DRB.
  • each QoS flow ID may be associated with at most one DRB.
  • a QoS policy may be determined to satisfy QoS requirements of the plurality of QoS flow IDs.
  • the QoS policy for a DRB may be configured in the DU by an Operation and Maintenance configuration.
  • the QoS policy for a QoS flow ID may be signaled from the CU to the DU over a Fl interface.
  • the QoS policy may include QoS parameters such as required throughput, maximum delay, maximum jitter and traffic priority.
  • the CU may prepare a list of QoS flow ID-DRB mappings comprising, for each mapping, a list of QoS parameters describing the QoS policy to be applied by the DU to a corresponding DRB.
  • the list of QoS parameters may comprise any one of: average and maximum throughput, guaranteed throughput, maximum delay, maximum delay jitter, traffic priority level and robustness, which is a measure of how reliable traffic transmission needs to be.
  • the packet transmitted to the DU may contain a modified QoS Flow ID with less information, an empty QoS Flow ID or no QoS Flow ID.
  • a Master Node (MN) may determine the QoS flow ID-DRB mapping and may send the QoS flow ID-DRB mapping to the Secondary Node (SN) over an Xn interface as QoS information for use for the establishment of bearers.
  • the configuration at DU of QoS policies per QoS Flow ID may be achieved by signaling from the CU to the DU over the Fl interface.
  • the CU may signal a mapping of QoS Flow ID to QoS parameters, such as required throughput, maximum delay, maximum jitter, traffic priority.
  • a method is described by which the Central Unit (CU) forwards to the distributed unit (DU) a mapping between traffic flow and Dedicated Radio Bearer (DRB) and/or the Quality of Service (QoS) Flow Identifier (ID) received for packets of the traffic flow by the 5th Generation (5G) Control Node (CN) (5GC).
  • This method allows the DU to derive QoS policies from the QoS Flow ID received from the CU.
  • the CU receives from the User Plane Function (UPF) packets containing a QoS Flow ID.
  • the UP part of the CU, named CU-UP for convenience, determines, based on information including the QoS Flow ID, to which DRB each packet received from the UPF shall be mapped.
  • the CU then sends to the DU a mapping of QoS Flow ID to DRB information.
  • This information describes which QoS Flow IDs have been associated with a specific DRB. Based on this information, all packets with a certain QoS flow ID are delivered via a specific DRB.
  • the DU is able to understand the QoS characteristics a specific DRB should be subject to.
  • the CU could send to the DU the following information: mapping between QoS Flow ID 1 and DRB 1 ; and mapping between QoS Flow ID 2 and DRB 1.
  • Under the assumption that the DU has been configured with QoS policies per QoS Flow ID, the DU is able to derive a QoS policy per DRB which fulfills all QoS Flow IDs mapped to the DRB.
  • the configuration at DU of QoS policies per QoS Flow ID can be achieved by Operation and Maintenance (OAM) configuration.
  • OAM Operation and Maintenance
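The DU-side derivation in this second embodiment can be sketched as below, mirroring the example in the text where QoS Flow IDs 1 and 2 both map to DRB 1. The function name and the combination rules (tightest delay, strictest priority, summed throughput) are illustrative assumptions about how a policy "fulfilling all QoS Flow IDs mapped to the DRB" could be computed; the per-QFI policies are taken as given by OAM configuration.

```python
def derive_drb_policy(qfi_to_drb, policy_by_qfi, drb_id):
    # Collect the QoS Flow IDs the CU has signalled as mapped to this DRB.
    qfis = [q for q, d in qfi_to_drb.items() if d == drb_id]
    # Look up the OAM-configured policy for each of those QoS Flow IDs.
    policies = [policy_by_qfi[q] for q in qfis]
    return {
        # The tightest delay bound and the strictest priority must hold
        # so that every mapped QoS Flow ID is fulfilled.
        "max_delay_ms": min(p["max_delay_ms"] for p in policies),
        "priority": max(p["priority"] for p in policies),
        # Throughput needs add up across flows sharing the bearer.
        "required_throughput_mbps": sum(p["required_throughput_mbps"]
                                        for p in policies),
    }
```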
  • Third, fourth and fifth embodiments described below concern signalling of QoS configuration over Xn, in case of MR-DC.
  • the third embodiment applies in case of split architecture with MR-DC, and concerns a method where the MN decides the overall mapping of QoS flows to DRBs and communicates the results to the SN (CU component) via the Xn interface as QoS information as part of the establishment of bearers (split bearers or SCG bearers) in the SN.
  • this can be useful if the SN operates at a high frequency and the MN operates at a low frequency and there is a desire to map the Uplink (UL) onto one node/Radio Access Technology (RAT) and the Downlink (DL) onto a different node/RAT by means of dual connectivity where the MN retains overall configuration control.
  • This method enables the first embodiment above.
  • the fourth embodiment applies in case of split architecture with MR-DC, and concerns a method where the MN lets the SN handle QoS independently for bearers terminated in the SN by forwarding QoS information and delegating the mapping to bearers to the SN. This method enables the first and second embodiments above.
  • the fifth embodiment concerns a combination of the methods listed previously that can be introduced in Xn and/or Fl signalling, whereby the MN and SN, CU and DU respectively can negotiate which level of independence the SN/DU are respectively configured with.
  • the initiating node can provide both a suggested mapping of QoS configuration to AN resources and the information provided by the CN, and the receiving node can select an option (based on configuration or capability).
  • the MN sends via Xn the necessary QoS information, which can similarly to the above be: the results of QoS Flow to DRB mapping decided by the MN and sent as part of the SN addition and configuration of bearers in the SN; or QoS information whereby the mapping to DRB is left up to the SN to decide; or a combination of the two above, whereby there is a negotiation between MN and SN on which node will map the NG QoS information to AN resources and at which level of detail.
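The three Xn alternatives listed above, and the negotiated fallback of the fifth embodiment, can be sketched as follows. This is an illustrative sketch only: the option names and the fallback rule (defaulting to the MN-decided mapping when the proposal is unsupported) are hypothetical, not signalling defined by the application.

```python
from enum import Enum

class XnQosOption(Enum):
    MN_DECIDES_MAPPING = 1  # MN maps QoS flows to DRBs; SN applies the result
    SN_DECIDES_MAPPING = 2  # MN forwards QoS information; SN maps to bearers
    COMBINED = 3            # MN proposes a mapping alongside the CN-provided info

def select_option(proposed, sn_supported):
    # The receiving node selects an option based on configuration or
    # capability; here we fall back to MN-decided mapping if unsupported.
    return proposed if proposed in sn_supported else XnQosOption.MN_DECIDES_MAPPING
```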
  • any appropriate steps, methods, features, functions, or benefits disclosed herein may be performed through one or more functional units or modules of one or more virtual apparatuses.
  • Each virtual apparatus may comprise a number of these functional units.
  • These functional units may be implemented via processing circuitry, which may include one or more microprocessors or microcontrollers, as well as other digital hardware, which may include digital signal processors (DSPs), special-purpose digital logic, and the like.
  • the processing circuitry may be configured to execute program code stored in memory, which may include one or several types of memory such as read-only memory (ROM), random-access memory (RAM), cache memory, flash memory devices, optical storage devices, etc.
  • Program code stored in memory includes program instructions for executing one or more telecommunications and/or data communications protocols as well as instructions for carrying out one or more of the techniques described herein.
  • the processing circuitry may be used to cause the respective functional unit to perform corresponding functions according to one or more embodiments of the present disclosure.
  • some embodiments may be implemented in a wireless network such as the example wireless network illustrated in Figure 8.
  • the wireless network of Figure 8 only depicts network 806, network nodes 860 and 860b, and WDs 810, 810b, and 810c.
  • a wireless network may further include any additional elements suitable to support communication between wireless devices or between a wireless device and another communication device, such as a landline telephone, a service provider, or any other network node or end device.
  • network node 860 and wireless device (WD) 810 are depicted with additional detail.
  • the wireless network may provide communication and other types of services to one or more wireless devices to facilitate the wireless devices' access to and/or use of the services provided by, or via, the wireless network.
  • the wireless network may comprise and/or interface with any type of communication, telecommunication, data, cellular, and/or radio network or other similar type of system.
  • the wireless network may be configured to operate according to specific standards or other types of predefined rules or procedures.
  • particular embodiments of the wireless network may implement communication standards, such as Global System for Mobile Communications (GSM), Universal Mobile Telecommunications System (UMTS), Long Term Evolution (LTE), and/or other suitable 2G, 3G, 4G, or 5G standards; wireless local area network (WLAN) standards, such as the IEEE 802.11 standards; and/or any other appropriate wireless communication standard, such as the Worldwide Interoperability for Microwave Access (WiMax), Bluetooth, Z-Wave and/or ZigBee standards.
  • GSM Global System for Mobile Communications
  • UMTS Universal Mobile Telecommunications System
  • LTE Long Term Evolution
  • WLAN wireless local area network
  • WiMax Worldwide Interoperability for Microwave Access
  • Network 806 may comprise one or more backhaul networks, core networks, IP networks, public switched telephone networks (PSTNs), packet data networks, optical networks, wide-area networks (WANs), local area networks (LANs), wireless local area networks (WLANs), wired networks, wireless networks, metropolitan area networks, and other networks to enable communication between devices.
  • PSTNs public switched telephone networks
  • WANs wide-area networks
  • LANs local area networks
  • WLANs wireless local area networks
  • Network node 860 and WD 810 comprise various components described in more detail below. These components work together in order to provide network node and/or wireless device functionality, such as providing wireless connections in a wireless network.
  • the wireless network may comprise any number of wired or wireless networks, network nodes, base stations, controllers, wireless devices, relay stations, and/or any other components or systems that may facilitate or participate in the communication of data and/or signals whether via wired or wireless connections.
  • network node refers to equipment capable, configured, arranged and/or operable to communicate directly or indirectly with a wireless device and/or with other network nodes or equipment in the wireless network to enable and/or provide wireless access to the wireless device and/or to perform other functions (e.g., administration) in the wireless network.
  • network nodes include, but are not limited to, access points (APs) (e.g., radio access points), base stations (BSs) (e.g., radio base stations, Node Bs, and evolved Node Bs (eNBs)).
  • APs access points
  • BSs base stations
  • eNBs evolved Node Bs
  • Base stations may be categorized based on the amount of coverage they provide (or, stated differently, their transmit power level) and may then also be referred to as femto base stations, pico base stations, micro base stations, or macro base stations.
  • a base station may be a relay node or a relay donor node controlling a relay.
  • a network node may also include one or more (or all) parts of a distributed radio base station such as centralized digital units and/or remote radio units (RRUs), sometimes referred to as Remote Radio Heads (RRHs). Such remote radio units may or may not be integrated with an antenna as an antenna integrated radio.
  • RRUs remote radio units
  • RRHs Remote Radio Heads
  • Parts of a distributed radio base station may also be referred to as nodes in a distributed antenna system (DAS).
  • DAS distributed antenna system
  • network nodes include multi-standard radio (MSR) equipment such as MSR BSs, network controllers such as radio network controllers (RNCs) or base station controllers (BSCs), base transceiver stations (BTSs), transmission points, transmission nodes, multi-cell/multicast coordination entities (MCEs), core network nodes (e.g., MSCs, MMEs), O&M nodes, OSS nodes, SON nodes, positioning nodes (e.g., E-SMLCs), and/or MDTs.
  • MSR multi-standard radio
  • RNCs radio network controllers
  • BSCs base station controllers
  • BTSs base transceiver stations
  • MCEs multi-cell/multicast coordination entities
  • network nodes may represent any suitable device (or group of devices) capable, configured, arranged, and/or operable to enable and/or provide a wireless device with access to the wireless network or to provide some service to a wireless device that has accessed the wireless network.
  • network node 860 includes processing circuitry 870, device readable medium 880, interface 890, auxiliary equipment 884, power source 886, power circuitry 887, and antenna 862.
  • While network node 860 illustrated in the example wireless network of Figure 8 may represent a device that includes the illustrated combination of hardware components, other embodiments may comprise network nodes with different combinations of components. It is to be understood that a network node comprises any suitable combination of hardware and/or software needed to perform the tasks, features, functions and methods disclosed herein.
  • network node 860 may comprise multiple different physical components that make up a single illustrated component (e.g., device readable medium 880 may comprise multiple separate hard drives as well as multiple RAM modules).
  • network node 860 may be composed of multiple physically separate components (e.g., a NodeB component and a RNC component, or a BTS component and a BSC component, etc.), which may each have their own respective components.
  • network node 860 comprises multiple separate components (e.g., BTS and BSC components)
  • one or more of the separate components may be shared among several network nodes.
  • a single RNC may control multiple NodeBs.
  • each unique NodeB and RNC pair may in some instances be considered a single separate network node.
  • network node 860 may be configured to support multiple radio access technologies (RATs).
  • RATs radio access technologies
  • Network node 860 may also include multiple sets of the various illustrated components for different wireless technologies integrated into network node 860, such as, for example, GSM, WCDMA, LTE, NR, WiFi, or Bluetooth wireless technologies. These wireless technologies may be integrated into the same or different chip or set of chips and other components within network node 860.
  • Processing circuitry 870 is configured to perform any determining, calculating, or similar operations (e.g., certain obtaining operations) described herein as being provided by a network node. These operations performed by processing circuitry 870 may include processing information obtained by processing circuitry 870 by, for example, converting the obtained information into other information, comparing the obtained information or converted information to information stored in the network node, and/or performing one or more operations based on the obtained information or converted information, and as a result of the processing making a determination.
  • Processing circuitry 870 may comprise a combination of one or more of a microprocessor, controller, microcontroller, central processing unit, digital signal processor, application-specific integrated circuit, field programmable gate array, or any other suitable computing device, resource, or combination of hardware, software and/or encoded logic operable to provide, either alone or in conjunction with other network node 860 components, such as device readable medium 880, network node 860 functionality.
  • processing circuitry 870 may execute instructions stored in device readable medium 880 or in memory within processing circuitry 870. Such functionality may include providing any of the various wireless features, functions, or benefits discussed herein.
  • processing circuitry 870 may include a system on a chip (SOC).
  • SOC system on a chip
  • processing circuitry 870 may include one or more of radio frequency (RF) transceiver circuitry 872 and baseband processing circuitry 874.
  • radio frequency (RF) transceiver circuitry 872 and baseband processing circuitry 874 may be on separate chips (or sets of chips), boards, or units, such as radio units and digital units.
  • part or all of RF transceiver circuitry 872 and baseband processing circuitry 874 may be on the same chip or set of chips, boards, or units
  • processing circuitry 870 executing instructions stored on device readable medium 880 or memory within processing circuitry 870.
  • some or all of the functionality may be provided by processing circuitry 870 without executing instructions stored on a separate or discrete device readable medium, such as in a hard-wired manner.
  • processing circuitry 870 can be configured to perform the described functionality. The benefits provided by such functionality are not limited to processing circuitry 870 alone or to other components of network node 860, but are enjoyed by network node 860 as a whole, and/or by end users and the wireless network generally.
  • Device readable medium 880 may comprise any form of volatile or non-volatile computer readable memory including, without limitation, persistent storage, solid-state memory, remotely mounted memory, magnetic media, optical media, random access memory (RAM), read-only memory (ROM), mass storage media (for example, a hard disk), removable storage media (for example, a flash drive, a Compact Disk (CD) or a Digital Video Disk (DVD)), and/or any other volatile or non-volatile, non-transitory device readable and/or computer-executable memory devices that store information, data, and/or instructions that may be used by processing circuitry 870.
  • Device readable medium 880 may store any suitable instructions, data or information, including a computer program, software, an application including one or more of logic, rules, code, tables, etc. and/or other instructions capable of being executed by processing circuitry 870 and, utilized by network node 860.
  • Device readable medium 880 may be used to store any calculations made by processing circuitry 870 and/or any data received via interface 890.
  • processing circuitry 870 and device readable medium 880 may be considered to be integrated.
  • Interface 890 is used in the wired or wireless communication of signalling and/or data between network node 860, network 806, and/or WDs 810. As illustrated, interface 890 comprises port(s)/terminal(s) 894 to send and receive data, for example to and from network 806 over a wired connection. Interface 890 also includes radio front end circuitry 892 that may be coupled to, or in certain embodiments a part of, antenna 862. Radio front end circuitry 892 comprises filters 898 and amplifiers 896. Radio front end circuitry 892 may be connected to antenna 862 and processing circuitry 870. Radio front end circuitry may be configured to condition signals communicated between antenna 862 and processing circuitry 870.
  • Radio front end circuitry 892 may receive digital data that is to be sent out to other network nodes or WDs via a wireless connection. Radio front end circuitry 892 may convert the digital data into a radio signal having the appropriate channel and bandwidth parameters using a combination of filters 898 and/or amplifiers 896. The radio signal may then be transmitted via antenna 862. Similarly, when receiving data, antenna 862 may collect radio signals which are then converted into digital data by radio front end circuitry 892. The digital data may be passed to processing circuitry 870. In other embodiments, the interface may comprise different components and/or different combinations of components.
  • network node 860 may not include separate radio front end circuitry 892, instead, processing circuitry 870 may comprise radio front end circuitry and may be connected to antenna 862 without separate radio front end circuitry 892.
  • all or some of RF transceiver circuitry 872 may be considered a part of interface 890.
  • interface 890 may include one or more ports or terminals 894, radio front end circuitry 892, and RF transceiver circuitry 872, as part of a radio unit (not shown), and interface 890 may communicate with baseband processing circuitry 874, which is part of a digital unit (not shown).
  • Antenna 862 may include one or more antennas, or antenna arrays, configured to send and/or receive wireless signals. Antenna 862 may be coupled to radio front end circuitry 892 and may be any type of antenna capable of transmitting and receiving data and/or signals wirelessly. In some embodiments, antenna 862 may comprise one or more omni-directional, sector or panel antennas operable to transmit/receive radio signals between, for example, 2 GHz and 66 GHz. An omni-directional antenna may be used to transmit/receive radio signals in any direction, a sector antenna may be used to transmit/receive radio signals from devices within a particular area, and a panel antenna may be a line of sight antenna used to transmit/receive radio signals in a relatively straight line. In some instances, the use of more than one antenna may be referred to as MIMO. In certain embodiments, antenna 862 may be separate from network node 860 and may be connectable to network node 860 through an interface or port.
  • Antenna 862, interface 890, and/or processing circuitry 870 may be configured to perform any receiving operations and/or certain obtaining operations described herein as being performed by a network node. Any information, data and/or signals may be received from a wireless device, another network node and/or any other network equipment. Similarly, antenna 862, interface 890, and/or processing circuitry 870 may be configured to perform any transmitting operations described herein as being performed by a network node. Any information, data and/or signals may be transmitted to a wireless device, another network node and/or any other network equipment.
  • Power circuitry 887 may comprise, or be coupled to, power management circuitry and is configured to supply the components of network node 860 with power for performing the functionality described herein. Power circuitry 887 may receive power from power source 886. Power source 886 and/or power circuitry 887 may be configured to provide power to the various components of network node 860 in a form suitable for the respective components (e.g., at a voltage and current level needed for each respective component). Power source 886 may either be included in, or external to, power circuitry 887 and/or network node 860.
  • network node 860 may be connectable to an external power source (e.g., an electricity outlet) via an input circuitry or interface such as an electrical cable, whereby the external power source supplies power to power circuitry 887.
  • power source 886 may comprise a source of power in the form of a battery or battery pack which is connected to, or integrated in, power circuitry 887. The battery may provide backup power should the external power source fail.
  • Other types of power sources such as photovoltaic devices, may also be used.
  • network node 860 may include additional components beyond those shown in Figure 8 that may be responsible for providing certain aspects of the network node's functionality, including any of the functionality described herein and/or any functionality necessary to support the subject matter described herein.
  • network node 860 may include user interface equipment to allow input of information into network node 860 and to allow output of information from network node 860. This may allow a user to perform diagnostic, maintenance, repair, and other administrative functions for network node 860.
  • a network node 860 comprising Central Unit (CU) for providing Quality of Service (QoS) information for a Data Radio Bearer (DRB) to a Distributed Unit (DU).
  • the network node 860 comprising the CU comprises processing circuits and a memory, the memory containing instructions executable by the processing circuits whereby the CU is operative to: receive a flow from a User Plane Function (UPF), perform a mapping between the flow and the DRB and determine the QoS information associated with the mapping, the QoS information comprising at least one of: QoS information for the DRB and QoS information for the flow, and provide the QoS information to the DU.
  • UPF User Plane Function
  • the QoS information for the DRB may be a DRB QoS profile and the QoS information for the flow may be a QoS flow profile.
  • the QoS Information may be an aggregate of the DRB QoS profile and the QoS flow profile.
  • the flow may be a QoS flow.
  • the flow may be marked with a QoS flow Identifier (ID) and the mapping may be based at least in part on the QoS flow ID.
  • the CU may receive a plurality of flows from the UPF and the CU may further be operative to perform a mapping of each of the plurality of flows to the DRB.
  • the CU may receive a plurality of flows from the UPF and the CU may further be operative to perform a mapping of each of the plurality of flows to a different one of a plurality of DRBs.
  • Specific QoS information may be provided to the DU for each one of the plurality of DRBs.
  • the QoS information may comprise a list of QoS parameters derived from the QoS flow ID and describing a QoS policy to be applied to the DRB, the parameters comprising at least one of: an average throughput, a maximum throughput, a guaranteed throughput, a maximum delay, a maximum delay jitter, a traffic priority level, and a robustness.
  • the CU may be configured with a QoS policy associated to each QoS flow ID.
  • wireless device refers to a device capable, configured, arranged and/or operable to communicate wirelessly with network nodes and/or other wireless devices. Unless otherwise noted, the term WD may be used interchangeably herein with user equipment (UE).
  • UE user equipment
  • wireless device 810 includes antenna 811, interface 814, processing circuitry 820, device readable medium 830, user interface equipment 832, auxiliary equipment 834, power source 836 and power circuitry 837.
  • Radio front end circuitry 812 may be coupled to or a part of antenna 811. In some embodiments, some or all of RF transceiver circuitry 822 may be considered a part of interface 814. Radio front end circuitry 812 may receive digital data that is to be sent out to other network nodes or WDs via a wireless connection. Radio front end circuitry 812 may convert the digital data into a radio signal having the appropriate channel and bandwidth parameters using a combination of filters 818 and/or amplifiers 816.
  • Processing circuitry 820 may provide, either alone or in conjunction with other WD 810 components, such as device readable medium 830, WD 810 functionality.
  • processing circuitry 820 includes one or more of RF transceiver circuitry 822, baseband processing circuitry 824, and application processing circuitry 826.
  • User interface equipment 832 may provide components that allow for a human user to interact with WD 810.
  • Auxiliary equipment 834 is operable to provide more specific functionality which may not be generally performed by WDs.
  • Power source 836 may, in some embodiments, be in the form of a battery or battery pack.
  • Power circuitry 837 may additionally or alternatively be operable to receive power from an external power source; in which case WD 810 may be connectable to the external power source (such as an electricity outlet) via input circuitry or an interface such as an electrical power cable.
  • FIG. 9 is a schematic block diagram illustrating a virtualization environment 900 in which functions implemented by some embodiments may be virtualized.
  • virtualizing means creating virtual versions of apparatuses or devices which may include virtualizing hardware platforms, storage devices and networking resources.
  • virtualization can be applied to a node (e.g., a virtualized base station or a virtualized radio access node) or to a device (e.g., a UE, a wireless device or any other type of communication device) or components thereof and relates to an implementation in which at least a portion of the functionality is implemented as one or more virtual components (e.g., via one or more applications, components, functions, virtual machines or containers executing on one or more physical processing nodes in one or more networks).
  • some or all of the functions described herein may be implemented as virtual components executed by one or more virtual machines implemented in one or more virtual environments 900 hosted by one or more of hardware nodes 930. Further, in embodiments in which the virtual node is not a radio access node or does not require radio connectivity (e.g., a core network node), then the network node may be entirely virtualized.
  • the functions may be implemented by one or more applications 920 (which may alternatively be called software instances, virtual appliances, network functions, virtual nodes, virtual network functions, etc.) operative to implement some of the features, functions, and/or benefits of some of the embodiments disclosed herein.
  • Applications 920 are run in virtualization environment 900 which provides hardware 930 comprising processing circuitry 960 and memory 990.
  • Memory 990 contains instructions 995 executable by processing circuitry 960 whereby application 920 is operative to provide one or more of the features, benefits, and/or functions disclosed herein.
  • Virtualization environment 900 comprises general-purpose or special-purpose network hardware devices 930 comprising a set of one or more processors or processing circuitry 960, which may be commercial off-the-shelf (COTS) processors, dedicated Application Specific Integrated Circuits (ASICs), or any other type of processing circuitry including digital or analog hardware components or special purpose processors.
  • Each hardware device may comprise memory 990-1 which may be non-persistent memory for temporarily storing instructions 995 or software executed by processing circuitry 960.
  • Each hardware device may comprise one or more network interface controllers (NICs) 970, also known as network interface cards, which include physical network interface 980.
  • NICs network interface controllers
  • Each hardware device may also include non-transitory, persistent, machine-readable storage media 990-2 having stored therein software 995 and/or instructions executable by processing circuitry 960.
  • Software 995 may include any type of software including software for instantiating one or more virtualization layers 950 (also referred to as hypervisors), software to execute virtual machines 940 as well as software allowing it to execute functions, features and/or benefits described in relation with some embodiments described herein.
  • Virtual machines 940 comprise virtual processing, virtual memory, virtual networking or interface and virtual storage, and may be run by a corresponding virtualization layer 950 or hypervisor. Different embodiments of the instance of virtual appliance 920 may be implemented on one or more of virtual machines 940, and the implementations may be made in different ways.
  • processing circuitry 960 executes software 995 to instantiate the hypervisor or virtualization layer 950, which may sometimes be referred to as a virtual machine monitor (VMM).
  • Virtualization layer 950 may present a virtual operating platform that appears like networking hardware to virtual machine 940.
  • hardware 930 may be a standalone network node with generic or specific components. Hardware 930 may comprise antenna 9225 and may implement some functions via virtualization. Alternatively, hardware 930 may be part of a larger cluster of hardware (e.g. such as in a data center or customer premise equipment (CPE)) where many hardware nodes work together and are managed via management and orchestration (MANO) 9100, which, among others, oversees lifecycle management of applications 920.
  • CPE customer premise equipment
  • MANO management and orchestration
  • NFV network function virtualization
  • NFV may be used to consolidate many network equipment types onto industry standard high volume server hardware, physical switches, and physical storage, which can be located in data centers, and customer premise equipment.
  • virtual machine 940 may be a software implementation of a physical machine that runs programs as if they were executing on a physical, non-virtualized machine.
  • Each of virtual machines 940, and that part of hardware 930 that executes that virtual machine, be it hardware dedicated to that virtual machine and/or hardware shared by that virtual machine with others of the virtual machines 940, forms a separate virtual network element (VNE).
  • VNE virtual network elements
  • VNF Virtual Network Function
  • one or more radio units 9200 that each include one or more transmitters 9220 and one or more receivers 9210 may be coupled to one or more antennas 9225.
  • Radio units 9200 may communicate directly with hardware nodes 930 via one or more appropriate network interfaces and may be used in combination with the virtual components to provide a virtual node with radio capabilities, such as a radio access node or a base station.
  • a control system 9230 may alternatively be used for communication between the hardware nodes 930 and radio units 9200.


Abstract

There is provided a method and Network node comprising a Central Unit (CU) for providing Quality of Service (QoS) information for a Data Radio Bearer (DRB) to a Distributed Unit (DU). The method comprises receiving, at the CU, a flow from a User Plane Function (UPF); performing, at the CU, a mapping between the flow and the DRB and determining the QoS information associated with the mapping, the QoS information comprising at least one of: QoS information for the DRB and QoS information for the flow; and providing the QoS information to the DU.

Description

QUALITY OF SERVICE (QOS) MANAGEMENT IN A DISTRIBUTED RADIO ACCESS NETWORK (RAN) ARCHITECTURE
PRIORITY STATEMENT UNDER 35 U.S.C. § 119(e) & 37 C.F.R. § 1.78
[0001] This non-provisional patent application claims priority based upon the prior U.S. provisional patent application entitled "QoS management in a distributed RAN architecture", application number 62/544,423, filed August 11, 2017, in the names of MYHRE et al.
TECHNICAL FIELD
[0002] The present disclosure relates to the standardization work on 5th Generation (5G) Radio Access Network (RAN) architecture.
BACKGROUND
[0003] As part of the work ongoing in 3rd Generation Partnership Project (3GPP) on 5G RAN architecture, a new set of RAN logical nodes is under discussion. In 3GPP TR 38.801 V1.2.0 (2017-02), section 11.1.3.8, (included herein by reference), the following is captured to describe at least part of such new RAN architecture: "Central Unit (CU): a logical node that includes the gNB functions as listed in section 6.2 excepting those functions allocated exclusively to the DU. CU controls the operation of DUs. Distributed Unit (DU): a logical node that includes, depending on the functional split option, a subset of the gNB functions (gNB stands for a 5G/NR RAN base station). The operation of DU is controlled by the CU."
[0004] The CU 10 and DU 12 described above and illustrated in figure 1, are two new logical nodes, one is a Central Unit (CU) 10, hosting high layer protocols, the other is a Distributed Unit (DU) 12 hosting low layer protocols.
[0005] It was agreed in 3GPP that the CU 10 may be itself split into two entities, one terminating the user plane (UP) interfaces and the other terminating the control plane (CP) interfaces towards the DU 12. Further it was agreed that the CU may host protocols such as Radio Resource Control (RRC) and Packet Data Convergence Protocol (PDCP), while the DU may host protocols such as Radio Link Control (RLC), Medium Access Control (MAC) and Physical Layer Protocol (PHY).
SUMMARY
[0006] There are, proposed herein, various embodiments which address one or more of the issues disclosed herein.
[0007] There is provided a method for providing Quality of Service (QoS) information for a Data Radio Bearer (DRB) from a Central Unit (CU) to a Distributed Unit (DU). The method comprises receiving, at the CU, a flow from a User Plane Function (UPF). The method comprises performing, at the CU, a mapping between the flow and the DRB and determining the QoS information associated with the mapping, the QoS information comprising at least one of: QoS information for the DRB and QoS information for the flow. The method comprises providing the QoS information to the DU.
[0008] The QoS information for the DRB may be a DRB QoS profile and the QoS information for the flow may be a QoS flow profile. The QoS Information may be an aggregate of the DRB QoS profile and the QoS flow profile. The flow may be a QoS flow. The flow may be marked with a QoS flow Identifier (ID) and the mapping may be based at least in part on the QoS flow ID. The CU may receive a plurality of flows from the UPF and performing the mapping may comprise performing a mapping of each of the plurality of flows to the DRB. The CU may receive a plurality of flows from the UPF and performing the mapping may comprise performing a mapping of each of the plurality of flows to a different one of a plurality of DRBs. Specific QoS information may be provided to the DU for each one of the plurality of DRBs. The QoS information may comprise a list of QoS parameters derived from the QoS flow ID and describing a QoS policy to be applied to the DRB, the parameters comprising at least one of: an average throughput, a maximum throughput, a guaranteed throughput, a maximum delay, a maximum delay jitter, a traffic priority level, and a robustness. The CU may be configured with a QoS policy associated to each QoS flow ID.
[0009] There is provided a network node comprising a Central Unit (CU), for providing Quality of Service (QoS) information for a Data Radio Bearer (DRB) to a Distributed Unit (DU). The network node comprises processing circuits and a memory, the memory containing instructions executable by the processing circuits whereby the CU is operative to: receive a flow from a User Plane Function (UPF), perform a mapping between the flow and the DRB and determine the QoS information associated with the mapping, the QoS information comprising at least one of: QoS information for the DRB and QoS information for the flow, and provide the QoS information to the DU.
[0010] The QoS information for the DRB may be a DRB QoS profile and the QoS information for the flow may be a QoS flow profile. The QoS Information may be an aggregate of the DRB QoS profile and the QoS flow profile. The flow may be a QoS flow. The flow may be marked with a QoS flow Identifier (ID) and the mapping may be based at least in part on the QoS flow ID. The CU may receive a plurality of flows from the UPF and the CU may further be operative to perform a mapping of each of the plurality of flows to the DRB. The CU may receive a plurality of flows from the UPF and the CU may further be operative to perform a mapping of each of the plurality of flows to a different one of a plurality of DRBs. Specific QoS information may be provided to the DU for each one of the plurality of DRBs. The QoS information may comprise a list of QoS parameters derived from the QoS flow ID and describing a QoS policy to be applied to the DRB, the parameters comprising at least one of: an average throughput, a maximum throughput, a guaranteed throughput, a maximum delay, a maximum delay jitter, a traffic priority level, and a robustness. The CU may be configured with a QoS policy associated to each QoS flow ID.
BRIEF DESCRIPTION OF THE DRAWINGS
[0011] Figure 1 is a schematic representation of the gNB architecture with a CU and DUs.
[0012] Figure 2 is a schematic representation of an architecture according to 5G.
[0013] Figure 3 is a schematic representation of how QoS Flows are mapped to Access Network (AN) Resources according to some embodiments.
[0014] Figure 4 is a schematic representation of 5G control node (5GC) multi-Radio Access Technology (multi-RAT) dual connectivity (MR-DC) principles according to some embodiments.
[0015] Figure 5 is a schematic representation of a Radio Protocol Architecture for Master Cell Group (MCG), MCG split bearers, Secondary Cell Group (SCG) and SCG split bearers in 5GC MR-DC according to some embodiments.
[0016] Figure 6 is a flowchart of a method in accordance with some embodiments.
[0017] Figures 7a and 7b are flowcharts of methods in accordance with some embodiments.
[0018] Figure 8 is a schematic representation of a wireless network in accordance with some embodiments.
[0019] Figure 9 is a schematic representation of a virtualization environment in accordance with some embodiments.
DETAILED DESCRIPTION
[0020] Generally, all terms used herein are to be interpreted according to their ordinary meaning in the relevant technical field, unless a different meaning is clearly given and/or is implied from the context in which it is used. All references to a/an/the element, apparatus, component, means, step, etc. are to be interpreted openly as referring to at least one instance of the element, apparatus, component, means, step, etc., unless explicitly stated otherwise. The steps of any methods disclosed herein do not have to be performed in the exact order disclosed, unless a step is explicitly described as following or preceding another step and/or where it is implicit that a step must follow or precede another step. Any feature of any of the embodiments disclosed herein may be applied to any other embodiment, wherever appropriate. Likewise, any advantage of any of the embodiments may apply to any other embodiments, and vice versa. Other objectives, features and advantages of the enclosed embodiments will be apparent from the following description.
[0021] Some of the embodiments contemplated herein will now be described more fully with reference to the drawings. Other embodiments, however, are contained within the scope of the subject matter disclosed herein; the disclosed subject matter should not be construed as limited to only the embodiments set forth herein; rather, these embodiments are provided by way of example to convey the scope of the subject matter to those skilled in the art.
[0022] While the methods proposed herein are based on the split RAN architecture agreed in 3GPP and outlined above, they can be applied to other split RAN architectures too. In particular, they can be applied to all architectures in which radio resource management resides in part or in full in the DU.
[0023] In the architecture defined in 3GPP, the CU 10 is connected to the 5G Core Network (CN), also denoted 5GC 20, via the NG interface. The NG interface is made of a UP part (NG-U) and a CP part (NG-C). This is shown in Figure 2.
[0024] Over the NG-U interface the 5GC 20 sends information to the RAN 30 about Quality of Service (QoS) to be assigned to UP traffic. Figure 3, extracted from 3GPP TS 23.501 V1.2.0 (2017-07) section 5.7.1 (included herein by reference), illustrates this mechanism.
[0025] In Figure 3 the Access Network 30 corresponds to the RAN described previously. The User Plane Function (UPF) 40 is part of the 5GC 20 and is in charge of delivery of UP traffic to the RAN 30. As it can be seen in Figure 3, packets sent to the RAN 30 are marked with a QoS Flow ID. This parameter is included in the header of the General Packet Radio Service Tunneling Protocol User (GTP-U) packet including the UP payload and it is assumed to provide the RAN 30 with information about the QoS to be assigned to the service traffic.
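As an illustration of the marking described above, the sketch below extracts a QoS Flow ID (QFI) from the two leading octets of a downlink PDU Session Information frame carried in the GTP-U extension header. The byte layout used here (PDU Type in the high nibble of the first octet, QFI in the low six bits of the second octet) is a simplified illustrative assumption, not a complete GTP-U/NG-U parser.

```python
def extract_qfi(pdu_session_container: bytes) -> int:
    """Extract the QoS Flow ID (QFI) from a simplified DL PDU Session
    Information frame carried in a GTP-U extension header.

    Assumed layout (illustrative):
      Octet 0: PDU Type (bits 7-4), spare (bits 3-0)
      Octet 1: PPP (bit 7), RQI (bit 6), QFI (bits 5-0)
    """
    if len(pdu_session_container) < 2:
        raise ValueError("container too short to carry a QFI")
    pdu_type = pdu_session_container[0] >> 4
    if pdu_type != 0:  # 0 = DL PDU SESSION INFORMATION (assumption)
        raise ValueError(f"unexpected PDU type {pdu_type}")
    return pdu_session_container[1] & 0x3F  # QFI occupies the low 6 bits

# Example: PDU type 0, QFI 9 (with PPP/RQI bits clear)
qfi = extract_qfi(bytes([0x00, 0x09]))
```

The CU would apply such an extraction to each GTP-U packet received from the UPF 40 before performing the flow-to-DRB mapping described below.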
[0026] Furthermore, in the context of 5G standardization multi-Radio Access Technology (multi-RAT) dual connectivity (MR-DC) is being specified. When MR-DC is applied, a RAN node (the master node (MN) 50) anchors the control plane towards the CN, while another RAN node (the secondary node (SN) 60) provides control and user plane resources to the User Equipment (UE) via coordination with the MN 50. This is illustrated in figure 4, extracted from 3GPP TS 37.340 V0.2.0 (2017-06) section 4.2.1 (included herein by reference).
[0027] Within the scope of MR-DC, various user plane bearer type solutions are possible, as seen in figure 5, extracted from TS 37.340 V0.2.0 (2017-06) section 4.2.2 (included herein by reference).
[0028] It should be noted that either the MN 50, the SN 60, or both may be deployed according to the CU-DU split architecture. Further, it should also be noted that the 5G QoS Indicator (5QI) is used in 3GPP to identify a specific QoS forwarding behaviour for a 5G QoS Flow (similar to the QCI value used for LTE). As such, 5QI defines packet loss rate, packet delay budget, etc. For further information concerning figures 1 to 5, the reader is referred to the previously provided 3GPP TS and TR references.
[0029] There currently exist certain challenge(s) for QoS Management in split RAN architectures.
[0030] In the CU-DU split architecture described previously, GTP-U is terminated in the CU. Therefore, the QoS Flow Identifier (ID) is received by the CU. Based on such information, the CU will perform a mapping of the received traffic flow packets to a Dedicated Radio Bearer or Data Radio Bearer (DRB). However, the DU is the node in which most of the radio resource management and the whole scheduling function reside. Therefore, a problem is how the DU receives information about the QoS with which the traffic forwarded by the CU has to be handled. Namely, if the CU does not send specific QoS information to the DU, it is not possible for the DU to derive QoS policies to apply to the traffic flows forwarded by the CU.
[0031] As an example, one could think of a case where each DRB is associated with a traffic queue. Management of such queues is under DU control. Each queue might need specific scheduling priority and delay budget requirements. However, unless the DU receives information about how each queue should be managed, such QoS policies cannot be applied.
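The queue-per-DRB example above can be sketched as follows. The class names and the strict-priority service rule are illustrative assumptions rather than a prescribed DU implementation; the point is that the DU can only enforce per-queue priorities and delay budgets after the CU has provided the corresponding QoS information.

```python
from dataclasses import dataclass, field

@dataclass(order=True)
class DrbQueue:
    priority: int                    # lower value = served first (assumption)
    drb_id: int = field(compare=False)
    delay_budget_ms: int = field(compare=False)
    packets: list = field(compare=False, default_factory=list)

class DuScheduler:
    """Toy DU scheduler: one traffic queue per DRB, served in strict
    priority order. Real DU scheduling is far richer; this only shows
    why per-DRB QoS information from the CU is a prerequisite."""

    def __init__(self):
        self._queues = {}

    def configure_drb(self, drb_id, priority, delay_budget_ms):
        # QoS policy for the queue, only known once the CU has signalled
        # the per-DRB QoS information (priority, delay budget).
        self._queues[drb_id] = DrbQueue(priority, drb_id, delay_budget_ms)

    def enqueue(self, drb_id, packet):
        self._queues[drb_id].packets.append(packet)

    def schedule_next(self):
        # Serve the highest-priority non-empty queue, if any.
        backlog = [q for q in self._queues.values() if q.packets]
        if not backlog:
            return None
        return min(backlog).packets.pop(0)
```

For instance, a DRB configured with priority 1 (e.g. voice) is always served before a DRB with priority 5 (e.g. video), regardless of arrival order.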
[0032] In case MR-DC is deployed, there is the additional problem that the CU may be part of the secondary node, SN, and hence may not be the node anchoring the control plane towards the CN. Coordination is therefore also needed between MN and SN, given that, in 5G, the mapping of QoS flows to AN resources is done by the RAN.
[0033] Certain aspects of the present disclosure and their embodiments may provide solutions to these or other challenges. Some embodiments may provide transfer of QoS information between nodes in a distributed RAN architecture, where possibly only some nodes are directly receiving QoS information from the CN (MN, CU), but where other nodes are essential in the enforcement of such QoS (SN, DU).
[0034] Some mechanisms to communicate information to the DU, which are useful for QoS management of UP traffic, are disclosed. In case of MR-DC, mechanisms useful for coordination between SN and MN, which are necessary so that the CU in the SN is receiving enough information to communicate to its respective DUs, are also disclosed.
[0035] Certain embodiments described herein may advantageously enable flexible management of QoS in networks where a split architecture and/or MR dual connectivity is deployed.
[0036] First embodiment: Forwarding of QoS parameters from CU to DU.
[0037] Referring to figure 6, there is provided a method 600 for providing Quality of Service (QoS) information for a Data Radio Bearer (DRB) from a Central Unit (CU) to a Distributed Unit (DU). The method comprises receiving, step 601, at the CU, a flow from a User Plane Function (UPF). The method comprises performing, step 602, at the CU, a mapping between the flow and the DRB and determining the QoS information associated with the mapping, the QoS information comprising at least one of: QoS information for the DRB and QoS information for the flow. The method comprises providing, step 603, the QoS information to the DU.
[0038] The QoS information for the DRB may be a DRB QoS profile and the QoS information for the flow may be a QoS flow profile. The QoS information may be an aggregate of the DRB QoS profile and the QoS flow profile. The flow may be a QoS flow. The flow may be marked with a QoS flow Identifier (ID) and the mapping may be based at least in part on the QoS flow ID. The CU may receive a plurality of flows from the UPF and performing the mapping may comprise performing a mapping of each of the plurality of flows to the DRB. The CU may receive a plurality of flows from the UPF and performing the mapping may comprise performing a mapping of each of the plurality of flows to a different one of a plurality of DRBs. Specific QoS information may be provided to the DU for each one of the plurality of DRBs. The QoS information may comprise a list of QoS parameters derived from the QoS flow ID and describing a QoS policy to be applied to the DRB, the parameters comprising at least one of: an average throughput, a maximum throughput, a guaranteed throughput, a maximum delay, a maximum delay jitter, a traffic priority level, and a robustness. The CU may be configured with a QoS policy associated to each QoS flow ID.
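The steps 601 to 603 of method 600 can be sketched as follows in Python. The flow-to-DRB mapping table and the per-flow QoS profiles are hypothetical illustrative data, assumed to be configured at the CU (e.g., via OAM); the function names are not drawn from any specification.

```python
# Hypothetical CU-side sketch of method 600: map an incoming QoS flow
# to a DRB and derive the QoS information to provide to the DU.
FLOW_TO_DRB = {1: 10, 2: 10, 3: 11}          # QoS flow ID -> DRB ID
FLOW_QOS_PROFILES = {                         # assumed OAM configuration
    1: {"max_delay_ms": 50,  "priority": 2},
    2: {"max_delay_ms": 100, "priority": 4},
    3: {"max_delay_ms": 300, "priority": 6},
}

def handle_flow(qos_flow_id):
    """Steps 601-603: receive a flow, perform the mapping, build QoS info."""
    drb_id = FLOW_TO_DRB[qos_flow_id]                  # step 602: mapping
    qos_info = {
        "drb_id": drb_id,
        "flow_profile": FLOW_QOS_PROFILES[qos_flow_id],
    }
    return qos_info                                    # step 603: to the DU
```

Note that flows 1 and 2 map to the same DRB while flow 3 maps to a different one, mirroring the one-to-many and one-to-one mapping options described above.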
[0039] In an example related to this embodiment, a method is described by which the CU forwards to the DU new QoS information derived from at least the QoS Flow ID and aimed at describing QoS policies the DU should apply to packets within the traffic flow associated to a specific DRB.
[0040] In this embodiment the CU receives from the User Plane Function, UPF, packets containing a QoS Flow ID. In this example, the CU is separated into a User Plane (UP) part and a Control Plane (CP) part. The separation between UP and CP may be absent. The UP part of the CU, CU-UP, determines, based on several pieces of information including the QoS Flow ID, to which DRB each packet received from the UPF shall be mapped. The CU then sends to the DU a list of QoS parameters describing the QoS policy the DU needs to apply to each DRB. Such list may include parameters specifying: average and maximum throughput; guaranteed throughput; maximum delay; maximum delay jitter; traffic priority level; and robustness, i.e. a measure of how reliable traffic transmission needs to be.
[0041] The CU can derive such list based on the QoS Flow IDs that have been mapped to the DRB in question.
[0042] It is assumed that the CU is configured with QoS policies associated to each QoS Flow ID. Such configuration may occur via the Operation and Maintenance (OAM) system.
[0043] The DU in this case does not have visibility on the original QoS Flow IDs associated to the traffic mapped to a certain DRB, but instead is given a partial or full description of the QoS parameters to apply to traffic mapped to a given DRB.
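The derivation described in paragraphs [0041] to [0043] can be sketched as follows: the CU aggregates the configured per-flow policies of all QoS Flow IDs mapped to a DRB into a single per-DRB parameter list. The aggregation rule shown here (throughput requirements add up; delay and priority take the strictest value) is one plausible choice, not mandated by the text; the data values are illustrative.

```python
# Hypothetical CU-side aggregation (first embodiment): build the per-DRB
# QoS parameter list from the QoS Flow IDs mapped to that DRB.
FLOW_POLICIES = {  # assumed OAM configuration, per QoS Flow ID
    1: {"guaranteed_kbps": 500,  "max_delay_ms": 50,  "priority": 2},
    2: {"guaranteed_kbps": 1000, "max_delay_ms": 150, "priority": 5},
}

def derive_drb_policy(flow_ids):
    """Aggregate the policies of all flows mapped to one DRB."""
    policies = [FLOW_POLICIES[f] for f in flow_ids]
    return {
        # Guaranteed throughput must cover all mapped flows combined.
        "guaranteed_kbps": sum(p["guaranteed_kbps"] for p in policies),
        # Delay and priority must satisfy the most demanding flow.
        "max_delay_ms": min(p["max_delay_ms"] for p in policies),
        "priority": min(p["priority"] for p in policies),
    }
```

The DU then receives only this aggregated description, consistent with [0043]: it never sees the original QoS Flow IDs.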
[0044] Second embodiment: Forwarding Quality of Service, QoS, Flow Identifier, ID, from Central Unit, CU, to Distributed Unit, DU.
[0045] Referring to figure 7a, there is provided a method, executed in a Central Unit (CU), for sending a Quality of Service (QoS) flow Identifier (ID)-Dedicated Radio Bearer (DRB) mapping to a Distributed Unit (DU) in a 5th Generation (5G) Radio Access Network (RAN), comprising: receiving a packet containing the QoS Flow Identifier (ID) from a User Plane Function (UPF); determining, e.g., by a User Plane (UP) portion of the CU (UP-CU) or by a Control Plane (CP) portion of the CU (CP-CU), a mapping between the DRB and the QoS flow ID contained in the packet; and sending the QoS flow ID-DRB mapping and the packet to the Distributed Unit (DU). The QoS flow ID may be associated with the DRB through the QoS flow ID-DRB mapping. When the CU receives a new packet with the QoS flow ID associated with the DRB, the QoS flow ID-DRB mapping may be sent to the DU. A plurality of QoS flow IDs may be associated with at least one DRB. Each QoS flow ID may be associated with at most one DRB.
[0046] Referring to figure 7b, there is provided a method, executed in a Distributed Unit (DU), for transmitting a packet according to Quality of Service (QoS) requirements, in a 5th Generation (5G) Radio Access Network (RAN), comprising: receiving a packet and a Quality of Service (QoS) flow Identifier (ID)-Dedicated Radio Bearer (DRB) mapping; determining QoS requirements that the DRB should fulfill, based on the QoS flow ID-DRB mapping received; and transmitting the packet according to the QoS requirements. When a plurality of QoS flow IDs are mapped to a single DRB, a QoS policy may be determined to satisfy the QoS requirements of the plurality of QoS flow IDs. The QoS policy for a DRB may be configured in the DU by an Operation and Maintenance configuration. The QoS policy for a QoS flow ID may be signaled from the CU to the DU over a F1 interface. The QoS policy may include QoS parameters such as required throughput, maximum delay, maximum jitter and traffic priority. The CU may prepare a list of QoS flow ID-DRB mappings comprising, for each mapping, a list of QoS parameters describing the QoS policy to be applied by the DU to a corresponding DRB. The list of QoS parameters may comprise any one of: average and maximum throughput, guaranteed throughput, maximum delay, maximum delay jitter, traffic priority level and robustness, which is a measure of how reliable traffic transmission needs to be. The packet transmitted to the DU may contain a modified QoS Flow ID with less information, an empty QoS Flow ID or no QoS Flow ID. When the method is executed in a split architecture with multi-Radio Access Technology (multi-RAT) dual connectivity MR-DC, a Master Node (MN) may determine the QoS flow ID-DRB mapping and may send the QoS flow ID-DRB mapping to the Secondary Node (SN) over an Xn interface as QoS information for use for the establishment of bearers.
In an alternative embodiment, the configuration at the DU of QoS policies per QoS Flow ID may be achieved by signaling from the CU to the DU over the F1 interface. For example, the CU may signal a mapping of QoS Flow ID to QoS parameters, such as required throughput, maximum delay, maximum jitter and traffic priority.
[0047] In an example related to this embodiment, a method is described by which the Central Unit (CU) forwards to the Distributed Unit (DU) a mapping between traffic flow and Dedicated Radio Bearer (DRB) and/or the Quality of Service (QoS) Flow Identifier (ID) received for packets of the traffic flow from the 5th Generation Core network (5GC). This method allows the DU to derive QoS policies from the QoS Flow ID received from the CU.
[0048] The CU receives from the User Plane Function (UPF) packets containing a QoS Flow ID. The UP part of the CU, which is named CU-UP for convenience, determines, based on several pieces of information including the QoS Flow ID, to which DRB each packet received from the UPF shall be mapped. The CU then sends to the DU a mapping of QoS Flow ID to DRB information.
[0049] This information describes which QoS Flow IDs have been associated with a specific DRB. Based on this information, all packets with a certain QoS flow ID are delivered via a specific DRB.
[0050] With this information the DU is able to understand the QoS characteristics a specific DRB should be subject to. For example, the CU could send to the DU the following information: mapping between QoS Flow ID 1 and DRB 1; and mapping between QoS Flow ID 2 and DRB 1.
[0051] The above indicates to the DU that DRB 1 needs to fulfil QoS requirements corresponding to QoS Flow ID 1 and 2.
[0052] Under the assumption that the DU has been configured with QoS policies per QoS Flow ID, the DU is able to derive a QoS policy per DRB, which fulfills all QoS Flow IDs mapped to the DRB.
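The DU-side derivation of paragraphs [0050] to [0052] can be sketched as follows, using the example above in which QoS Flow IDs 1 and 2 are both mapped to DRB 1. The function and the policy values are hypothetical; the aggregation rule (take the strictest requirement among the mapped flows) is one plausible realization of "fulfills all QoS Flow IDs mapped to the DRB".

```python
# Hypothetical DU-side sketch (second embodiment): given the QoS Flow
# ID-to-DRB mapping received from the CU, and per-flow QoS policies known
# from OAM or F1 configuration, derive one policy per DRB that satisfies
# every flow mapped to it.
def drb_policies(flow_to_drb, flow_policies):
    per_drb = {}
    for flow_id, drb_id in flow_to_drb.items():
        per_drb.setdefault(drb_id, []).append(flow_policies[flow_id])
    return {
        drb: {"max_delay_ms": min(p["max_delay_ms"] for p in ps),
              "priority":     min(p["priority"]     for p in ps)}
        for drb, ps in per_drb.items()
    }

# The example of [0050]: QoS Flow IDs 1 and 2 both mapped to DRB 1.
mapping = {1: 1, 2: 1}
policies = {1: {"max_delay_ms": 50,  "priority": 2},
            2: {"max_delay_ms": 150, "priority": 5}}
```

Here `drb_policies(mapping, policies)` yields a single policy for DRB 1 that fulfils both flows, i.e. the tighter delay budget and the higher priority of the two.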
[0053] Alternatively, the configuration at DU of QoS policies per QoS Flow ID can be achieved by Operation and Maintenance (OAM) configuration.
[0054] Third, fourth and fifth embodiments described below concern signalling of QoS configuration over Xn, in case of MR-DC.
[0055] The third embodiment applies in case of split architecture with MR-DC, and concerns a method where the MN decides the overall mapping of QoS flows to DRBs and communicates the results to the SN (CU component) via the Xn interface as QoS information, as part of the establishment of bearers (split bearers or SCG bearers) in the SN. For example, this can be useful if the SN operates at a high frequency and the MN operates at a low frequency and there is a desire to map the Uplink (UL) onto one node/Radio Access Technology (RAT) and the Downlink (DL) onto a different node/RAT by means of dual connectivity, where the MN retains overall configuration control. This method enables the first embodiment above.
[0056] The fourth embodiment applies in case of split architecture with MR-DC, and concerns a method where the MN lets the SN handle QoS independently for bearers terminated in the SN by forwarding QoS information and delegating the mapping to bearers to the SN. This method enables the first and second embodiments above.
[0057] The fifth embodiment concerns a combination of the methods listed previously that can be introduced in Xn and/or Fl signalling, whereby the MN and SN, CU and DU respectively can negotiate which level of independence the SN/DU are respectively configured with. For example, the initiating node can provide both a suggested mapping of QoS configuration to AN resources and the information provided by the CN and the receiving node can select an option (based on configuration or capability).
[0058] In the case of the third, fourth and fifth embodiments, the MN sends via Xn the necessary QoS information, which can similarly to the above be: the results of the QoS Flow to DRB mapping decided by the MN, sent as part of the SN addition and configuration of bearers in the SN; or QoS information whereby the mapping to DRBs is left up to the SN to decide; or a combination of the two above, whereby there is a negotiation between the MN and the SN on which node will map the NG QoS information to AN resources and at which level of detail.
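The negotiation of the fifth embodiment can be sketched as follows: the initiating node (MN) offers both a ready-made QoS Flow-to-DRB mapping and the raw CN-provided QoS information, and the receiving node (SN) selects one of the two options based on its configuration or capability. Function names and the capability flag are hypothetical illustrations, not signalling messages defined in any specification.

```python
# Hypothetical sketch of the fifth embodiment's MN/SN negotiation.
def build_offer(suggested_mapping, cn_qos_info):
    """MN side: offer both options over Xn."""
    return {"suggested_mapping": suggested_mapping,
            "cn_qos_info": cn_qos_info}

def sn_select(offer, can_map_independently):
    """SN side: pick an option based on configuration or capability."""
    if can_map_independently:
        # Fourth-embodiment style: the SN maps flows to bearers itself.
        return ("own_mapping", offer["cn_qos_info"])
    # Third-embodiment style: the SN accepts the MN-decided mapping.
    return ("mn_mapping", offer["suggested_mapping"])
```

The same pattern could, in principle, be applied between CU and DU over F1, as the text suggests.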
[0059] Any appropriate steps, methods, features, functions, or benefits disclosed herein may be performed through one or more functional units or modules of one or more virtual apparatuses. Each virtual apparatus may comprise a number of these functional units. These functional units may be implemented via processing circuitry, which may include one or more microprocessors or microcontrollers, as well as other digital hardware, which may include digital signal processors (DSPs), special-purpose digital logic, and the like. The processing circuitry may be configured to execute program code stored in memory, which may include one or several types of memory such as read-only memory (ROM), random-access memory (RAM), cache memory, flash memory devices, optical storage devices, etc. Program code stored in memory includes program instructions for executing one or more telecommunications and/or data communications protocols as well as instructions for carrying out one or more of the techniques described herein. In some implementations, the processing circuitry may be used to cause the respective functional unit to perform corresponding functions according to one or more embodiments of the present disclosure.
[0060] There is further provided a system, a CU, a DU, a MN and a SN operative to execute at least some of the methods described herein. The description below describes how such network nodes may operate in a communications network.
[0061] Although the subject matter described herein may be implemented in any appropriate type of system using any suitable components, the embodiments disclosed herein are described in relation to a wireless network, such as the example wireless network illustrated in Figure 8. For simplicity, the wireless network of Figure 8 only depicts network 806, network nodes 860 and 860b, and WDs 810, 810b, and 810c. In practice, a wireless network may further include any additional elements suitable to support communication between wireless devices or between a wireless device and another communication device, such as a landline telephone, a service provider, or any other network node or end device. Of the illustrated components, network node 860 and wireless device (WD) 810 are depicted with additional detail. The wireless network may provide communication and other types of services to one or more wireless devices to facilitate the wireless devices' access to and/or use of the services provided by, or via, the wireless network.
[0062] The wireless network may comprise and/or interface with any type of communication, telecommunication, data, cellular, and/or radio network or other similar type of system. In some embodiments, the wireless network may be configured to operate according to specific standards or other types of predefined rules or procedures. Thus, particular embodiments of the wireless network may implement communication standards, such as Global System for Mobile Communications (GSM), Universal Mobile Telecommunications System (UMTS), Long Term Evolution (LTE), and/or other suitable 2G, 3G, 4G, or 5G standards; wireless local area network (WLAN) standards, such as the IEEE 802.11 standards; and/or any other appropriate wireless communication standard, such as the Worldwide Interoperability for Microwave Access (WiMax), Bluetooth, Z-Wave and/or ZigBee standards.
[0063] Network 806 may comprise one or more backhaul networks, core networks, IP networks, public switched telephone networks (PSTNs), packet data networks, optical networks, wide-area networks (WANs), local area networks (LANs), wireless local area networks (WLANs), wired networks, wireless networks, metropolitan area networks, and other networks to enable communication between devices.
[0064] Network node 860 and WD 810 comprise various components described in more detail below. These components work together in order to provide network node and/or wireless device functionality, such as providing wireless connections in a wireless network. In different embodiments, the wireless network may comprise any number of wired or wireless networks, network nodes, base stations, controllers, wireless devices, relay stations, and/or any other components or systems that may facilitate or participate in the communication of data and/or signals whether via wired or wireless connections.
[0065] As used herein, network node refers to equipment capable, configured, arranged and/or operable to communicate directly or indirectly with a wireless device and/or with other network nodes or equipment in the wireless network to enable and/or provide wireless access to the wireless device and/or to perform other functions (e.g., administration) in the wireless network. Examples of network nodes include, but are not limited to, access points (APs) (e.g., radio access points), base stations (BSs) (e.g., radio base stations, Node Bs, and evolved Node Bs (eNBs)). Base stations may be categorized based on the amount of coverage they provide (or, stated differently, their transmit power level) and may then also be referred to as femto base stations, pico base stations, micro base stations, or macro base stations. A base station may be a relay node or a relay donor node controlling a relay. A network node may also include one or more (or all) parts of a distributed radio base station such as centralized digital units and/or remote radio units (RRUs), sometimes referred to as Remote Radio Heads (RRHs). Such remote radio units may or may not be integrated with an antenna as an antenna integrated radio. Parts of a distributed radio base station may also be referred to as nodes in a distributed antenna system (DAS). Yet further examples of network nodes include multi-standard radio (MSR) equipment such as MSR BSs, network controllers such as radio network controllers (RNCs) or base station controllers (BSCs), base transceiver stations (BTSs), transmission points, transmission nodes, multi-cell/multicast coordination entities (MCEs), core network nodes (e.g., MSCs, MMEs), O&M nodes, OSS nodes, SON nodes, positioning nodes (e.g., E-SMLCs), and/or MDTs. As another example, a network node may be a virtual network node as described in more detail below. 
More generally, however, network nodes may represent any suitable device (or group of devices) capable, configured, arranged, and/or operable to enable and/or provide a wireless device with access to the wireless network or to provide some service to a wireless device that has accessed the wireless network.
[0066] In Figure 8, network node 860 includes processing circuitry 870, device readable medium 880, interface 890, auxiliary equipment 884, power source 886, power circuitry 887, and antenna 862. Although network node 860 illustrated in the example wireless network of Figure 8 may represent a device that includes the illustrated combination of hardware components, other embodiments may comprise network nodes with different combinations of components. It is to be understood that a network node comprises any suitable combination of hardware and/or software needed to perform the tasks, features, functions and methods disclosed herein. Moreover, while the components of network node 860 are depicted as single boxes located within a larger box, or nested within multiple boxes, in practice, a network node may comprise multiple different physical components that make up a single illustrated component (e.g., device readable medium 880 may comprise multiple separate hard drives as well as multiple RAM modules).
[0067] Similarly, network node 860 may be composed of multiple physically separate components (e.g., a NodeB component and a RNC component, or a BTS component and a BSC component, etc.), which may each have their own respective components. In certain scenarios in which network node 860 comprises multiple separate components (e.g., BTS and BSC components), one or more of the separate components may be shared among several network nodes. For example, a single RNC may control multiple NodeBs. In such a scenario, each unique NodeB and RNC pair may in some instances be considered a single separate network node. In some embodiments, network node 860 may be configured to support multiple radio access technologies (RATs). In such embodiments, some components may be duplicated (e.g., separate device readable medium 880 for the different RATs) and some components may be reused (e.g., the same antenna 862 may be shared by the RATs). Network node 860 may also include multiple sets of the various illustrated components for different wireless technologies integrated into network node 860, such as, for example, GSM, WCDMA, LTE, NR, WiFi, or Bluetooth wireless technologies. These wireless technologies may be integrated into the same or different chip or set of chips and other components within network node 860.
[0068] Processing circuitry 870 is configured to perform any determining, calculating, or similar operations (e.g., certain obtaining operations) described herein as being provided by a network node. These operations performed by processing circuitry 870 may include processing information obtained by processing circuitry 870 by, for example, converting the obtained information into other information, comparing the obtained information or converted information to information stored in the network node, and/or performing one or more operations based on the obtained information or converted information, and as a result of the processing making a determination.
[0069] Processing circuitry 870 may comprise a combination of one or more of a microprocessor, controller, microcontroller, central processing unit, digital signal processor, application-specific integrated circuit, field programmable gate array, or any other suitable computing device, resource, or combination of hardware, software and/or encoded logic operable to provide, either alone or in conjunction with other network node 860 components, such as device readable medium 880, network node 860 functionality. For example, processing circuitry 870 may execute instructions stored in device readable medium 880 or in memory within processing circuitry 870. Such functionality may include providing any of the various wireless features, functions, or benefits discussed herein. In some embodiments, processing circuitry 870 may include a system on a chip (SOC).
[0070] In some embodiments, processing circuitry 870 may include one or more of radio frequency (RF) transceiver circuitry 872 and baseband processing circuitry 874. In some embodiments, radio frequency (RF) transceiver circuitry 872 and baseband processing circuitry 874 may be on separate chips (or sets of chips), boards, or units, such as radio units and digital units. In alternative embodiments, part or all of RF transceiver circuitry 872 and baseband processing circuitry 874 may be on the same chip or set of chips, boards, or units.
[0071] In certain embodiments, some or all of the functionality described herein as being provided by a network node, base station, eNB or other such network device may be performed by processing circuitry 870 executing instructions stored on device readable medium 880 or memory within processing circuitry 870. In alternative embodiments, some or all of the functionality may be provided by processing circuitry 870 without executing instructions stored on a separate or discrete device readable medium, such as in a hard-wired manner. In any of those embodiments, whether executing instructions stored on a device readable storage medium or not, processing circuitry 870 can be configured to perform the described functionality. The benefits provided by such functionality are not limited to processing circuitry 870 alone or to other components of network node 860, but are enjoyed by network node 860 as a whole, and/or by end users and the wireless network generally.
[0072] Device readable medium 880 may comprise any form of volatile or non-volatile computer readable memory including, without limitation, persistent storage, solid-state memory, remotely mounted memory, magnetic media, optical media, random access memory (RAM), read-only memory (ROM), mass storage media (for example, a hard disk), removable storage media (for example, a flash drive, a Compact Disk (CD) or a Digital Video Disk (DVD)), and/or any other volatile or non-volatile, non-transitory device readable and/or computer-executable memory devices that store information, data, and/or instructions that may be used by processing circuitry 870. Device readable medium 880 may store any suitable instructions, data or information, including a computer program, software, an application including one or more of logic, rules, code, tables, etc. and/or other instructions capable of being executed by processing circuitry 870 and utilized by network node 860. Device readable medium 880 may be used to store any calculations made by processing circuitry 870 and/or any data received via interface 890. In some embodiments, processing circuitry 870 and device readable medium 880 may be considered to be integrated.
[0073] Interface 890 is used in the wired or wireless communication of signalling and/or data between network node 860, network 806, and/or WDs 810. As illustrated, interface 890 comprises port(s)/terminal(s) 894 to send and receive data, for example to and from network 806 over a wired connection. Interface 890 also includes radio front end circuitry 892 that may be coupled to, or in certain embodiments a part of, antenna 862. Radio front end circuitry 892 comprises filters 898 and amplifiers 896. Radio front end circuitry 892 may be connected to antenna 862 and processing circuitry 870. Radio front end circuitry may be configured to condition signals communicated between antenna 862 and processing circuitry 870. Radio front end circuitry 892 may receive digital data that is to be sent out to other network nodes or WDs via a wireless connection. Radio front end circuitry 892 may convert the digital data into a radio signal having the appropriate channel and bandwidth parameters using a combination of filters 898 and/or amplifiers 896. The radio signal may then be transmitted via antenna 862. Similarly, when receiving data, antenna 862 may collect radio signals which are then converted into digital data by radio front end circuitry 892. The digital data may be passed to processing circuitry 870. In other embodiments, the interface may comprise different components and/or different combinations of components.
[0074] In certain alternative embodiments, network node 860 may not include separate radio front end circuitry 892, instead, processing circuitry 870 may comprise radio front end circuitry and may be connected to antenna 862 without separate radio front end circuitry 892. Similarly, in some embodiments, all or some of RF transceiver circuitry 872 may be considered a part of interface 890. In still other embodiments, interface 890 may include one or more ports or terminals 894, radio front end circuitry 892, and RF transceiver circuitry 872, as part of a radio unit (not shown), and interface 890 may communicate with baseband processing circuitry 874, which is part of a digital unit (not shown).
[0075] Antenna 862 may include one or more antennas, or antenna arrays, configured to send and/or receive wireless signals. Antenna 862 may be coupled to radio front end circuitry 892 and may be any type of antenna capable of transmitting and receiving data and/or signals wirelessly. In some embodiments, antenna 862 may comprise one or more omni-directional, sector or panel antennas operable to transmit/receive radio signals between, for example, 2 GHz and 66 GHz. An omni-directional antenna may be used to transmit/receive radio signals in any direction, a sector antenna may be used to transmit/receive radio signals from devices within a particular area, and a panel antenna may be a line of sight antenna used to transmit/receive radio signals in a relatively straight line. In some instances, the use of more than one antenna may be referred to as MIMO. In certain embodiments, antenna 862 may be separate from network node 860 and may be connectable to network node 860 through an interface or port.
[0076] Antenna 862, interface 890, and/or processing circuitry 870 may be configured to perform any receiving operations and/or certain obtaining operations described herein as being performed by a network node. Any information, data and/or signals may be received from a wireless device, another network node and/or any other network equipment. Similarly, antenna 862, interface 890, and/or processing circuitry 870 may be configured to perform any transmitting operations described herein as being performed by a network node. Any information, data and/or signals may be transmitted to a wireless device, another network node and/or any other network equipment.
[0077] Power circuitry 887 may comprise, or be coupled to, power management circuitry and is configured to supply the components of network node 860 with power for performing the functionality described herein. Power circuitry 887 may receive power from power source 886. Power source 886 and/or power circuitry 887 may be configured to provide power to the various components of network node 860 in a form suitable for the respective components (e.g., at a voltage and current level needed for each respective component). Power source 886 may either be included in, or external to, power circuitry 887 and/or network node 860. For example, network node 860 may be connectable to an external power source (e.g., an electricity outlet) via an input circuitry or interface such as an electrical cable, whereby the external power source supplies power to power circuitry 887. As a further example, power source 886 may comprise a source of power in the form of a battery or battery pack which is connected to, or integrated in, power circuitry 887. The battery may provide backup power should the external power source fail. Other types of power sources, such as photovoltaic devices, may also be used.
[0078] Alternative embodiments of network node 860 may include additional components beyond those shown in Figure 8 that may be responsible for providing certain aspects of the network node's functionality, including any of the functionality described herein and/or any functionality necessary to support the subject matter described herein. For example, network node 860 may include user interface equipment to allow input of information into network node 860 and to allow output of information from network node 860. This may allow a user to perform diagnostic, maintenance, repair, and other administrative functions for network node 860.
[0079] There is provided a network node 860 comprising a Central Unit (CU) for providing Quality of Service (QoS) information for a Data Radio Bearer (DRB) to a Distributed Unit (DU). The network node 860 comprising the CU comprises processing circuits and a memory, the memory containing instructions executable by the processing circuits whereby the CU is operative to: receive a flow from a User Plane Function (UPF); perform a mapping between the flow and the DRB and determine the QoS information associated with the mapping, the QoS information comprising at least one of: QoS information for the DRB and QoS information for the flow; and provide the QoS information to the DU.
[0080] The QoS information for the DRB may be a DRB QoS profile and the QoS information for the flow may be a QoS flow profile. The QoS information may be an aggregate of the DRB QoS profile and the QoS flow profile. The flow may be a QoS flow. The flow may be marked with a QoS flow Identifier (ID) and the mapping may be based at least in part on the QoS flow ID. The CU may receive a plurality of flows from the UPF and the CU may further be operative to perform a mapping of each of the plurality of flows to the DRB. The CU may receive a plurality of flows from the UPF and the CU may further be operative to perform a mapping of each of the plurality of flows to a different one of a plurality of DRBs. Specific QoS information may be provided to the DU for each one of the plurality of DRBs. The QoS information may comprise a list of QoS parameters derived from the QoS flow ID and describing a QoS policy to be applied to the DRB, the parameters comprising at least one of: an average throughput, a maximum throughput, a guaranteed throughput, a maximum delay, a maximum delay jitter, a traffic priority level, and a robustness. The CU may be configured with a QoS policy associated to each QoS flow ID.
[0081] As used herein, wireless device (WD) refers to a device capable, configured, arranged and/or operable to communicate wirelessly with network nodes and/or other wireless devices. Unless otherwise noted, the term WD may be used interchangeably herein with user equipment (UE).
[0082] As illustrated, wireless device 810 includes antenna 811, interface 814, processing circuitry 820, device readable medium 830, user interface equipment 832, auxiliary equipment 834, power source 836 and power circuitry 837.
[0083] Radio front end circuitry 812 may be coupled to or a part of antenna 811. In some embodiments, some or all of RF transceiver circuitry 822 may be considered a part of interface 814. Radio front end circuitry 812 may receive digital data that is to be sent out to other network nodes or WDs via a wireless connection. Radio front end circuitry 812 may convert the digital data into a radio signal having the appropriate channel and bandwidth parameters using a combination of filters 818 and/or amplifiers 816.
[0084] Processing circuitry 820 may provide, either alone or in conjunction with other WD 810 components, such as device readable medium 830, WD 810 functionality.
[0085] As illustrated, processing circuitry 820 includes one or more of RF transceiver circuitry 822, baseband processing circuitry 824, and application processing circuitry 826.
[0086] User interface equipment 832 may provide components that allow for a human user to interact with WD 810.
[0087] Auxiliary equipment 834 is operable to provide more specific functionality which may not be generally performed by WDs.
[0088] Power source 836 may, in some embodiments, be in the form of a battery or battery pack. Power circuitry 837 may additionally or alternatively be operable to receive power from an external power source, in which case WD 810 may be connectable to the external power source (such as an electricity outlet) via input circuitry or an interface such as an electrical power cable.
[0089] Figure 9 is a schematic block diagram illustrating a virtualization environment 900 in which functions implemented by some embodiments may be virtualized. In the present context, virtualizing means creating virtual versions of apparatuses or devices which may include virtualizing hardware platforms, storage devices and networking resources. As used herein, virtualization can be applied to a node (e.g., a virtualized base station or a virtualized radio access node) or to a device (e.g., a UE, a wireless device or any other type of communication device) or components thereof and relates to an implementation in which at least a portion of the functionality is implemented as one or more virtual components (e.g., via one or more applications, components, functions, virtual machines or containers executing on one or more physical processing nodes in one or more networks).
[0090] In some embodiments, some or all of the functions described herein may be implemented as virtual components executed by one or more virtual machines implemented in one or more virtual environments 900 hosted by one or more of hardware nodes 930. Further, in embodiments in which the virtual node is not a radio access node or does not require radio connectivity (e.g., a core network node), then the network node may be entirely virtualized.

[0091] The functions may be implemented by one or more applications 920 (which may alternatively be called software instances, virtual appliances, network functions, virtual nodes, virtual network functions, etc.) operative to implement some of the features, functions, and/or benefits of some of the embodiments disclosed herein. Applications 920 are run in virtualization environment 900 which provides hardware 930 comprising processing circuitry 960 and memory 990. Memory 990 contains instructions 995 executable by processing circuitry 960 whereby application 920 is operative to provide one or more of the features, benefits, and/or functions disclosed herein.
[0092] Virtualization environment 900 comprises general-purpose or special-purpose network hardware devices 930 comprising a set of one or more processors or processing circuitry 960, which may be commercial off-the-shelf (COTS) processors, dedicated Application Specific Integrated Circuits (ASICs), or any other type of processing circuitry including digital or analog hardware components or special purpose processors. Each hardware device may comprise memory 990-1 which may be non-persistent memory for temporarily storing instructions 995 or software executed by processing circuitry 960. Each hardware device may comprise one or more network interface controllers (NICs) 970, also known as network interface cards, which include physical network interface 980. Each hardware device may also include non-transitory, persistent, machine-readable storage media 990-2 having stored therein software 995 and/or instructions executable by processing circuitry 960. Software 995 may include any type of software including software for instantiating one or more virtualization layers 950 (also referred to as hypervisors), software to execute virtual machines 940 as well as software allowing it to execute functions, features and/or benefits described in relation with some embodiments described herein.
[0093] Virtual machines 940 comprise virtual processing, virtual memory, virtual networking or interface and virtual storage, and may be run by a corresponding virtualization layer 950 or hypervisor. Different embodiments of the instance of virtual appliance 920 may be implemented on one or more of virtual machines 940, and the implementations may be made in different ways.
[0094] During operation, processing circuitry 960 executes software 995 to instantiate the hypervisor or virtualization layer 950, which may sometimes be referred to as a virtual machine monitor (VMM). Virtualization layer 950 may present a virtual operating platform that appears like networking hardware to virtual machine 940.
[0095] As shown in Figure 9, hardware 930 may be a standalone network node with generic or specific components. Hardware 930 may comprise antenna 9225 and may implement some functions via virtualization. Alternatively, hardware 930 may be part of a larger cluster of hardware (e.g. such as in a data center or customer premise equipment (CPE)) where many hardware nodes work together and are managed via management and orchestration (MANO) 9100, which, among others, oversees lifecycle management of applications 920.
[0096] Virtualization of the hardware is in some contexts referred to as network function virtualization (NFV). NFV may be used to consolidate many network equipment types onto industry standard high volume server hardware, physical switches, and physical storage, which can be located in data centers, and customer premise equipment.
[0097] In the context of NFV, virtual machine 940 may be a software implementation of a physical machine that runs programs as if they were executing on a physical, non-virtualized machine. Each of virtual machines 940, and that part of hardware 930 that executes that virtual machine, be it hardware dedicated to that virtual machine and/or hardware shared by that virtual machine with others of the virtual machines 940, forms a separate virtual network element (VNE).
[0098] Still in the context of NFV, a Virtual Network Function (VNF) is responsible for handling specific network functions that run in one or more virtual machines 940 on top of hardware networking infrastructure 930 and corresponds to application 920 in Figure 9.
[0099] In some embodiments, one or more radio units 9200 that each include one or more transmitters 9220 and one or more receivers 9210 may be coupled to one or more antennas 9225. Radio units 9200 may communicate directly with hardware nodes 930 via one or more appropriate network interfaces and may be used in combination with the virtual components to provide a virtual node with radio capabilities, such as a radio access node or a base station.
[0100] In some embodiments, some signalling can be effected with the use of control system 9230 which may alternatively be used for communication between the hardware nodes 930 and radio units 9200.

Claims

1. A method for providing Quality of Service (QoS) information for a Data Radio Bearer (DRB) from a Central Unit (CU) to a Distributed Unit (DU), comprising:
receiving, at the CU, a flow from a User Plane Function (UPF);
performing, at the CU, a mapping between the flow and the DRB and determining the QoS information associated with the mapping, the QoS information comprising at least one of: QoS information for the DRB and QoS information for the flow; and
providing the QoS information to the DU.
2. The method of claim 1, wherein the QoS information for the DRB is a DRB QoS profile and the QoS information for the flow is a QoS flow profile.
3. The method of claim 2, wherein the QoS information is an aggregate of the DRB QoS profile and the QoS flow profile.
4. The method of any one of claims 1 to 3, wherein the flow is a QoS flow.
5. The method of any one of claims 1 to 4, wherein the flow is marked with a QoS flow Identifier (ID) and wherein the mapping is based at least in part on the QoS flow ID.
6. The method of any one of claims 1 to 5, wherein the CU receives a plurality of flows from the UPF and wherein performing the mapping comprises performing a mapping of each of the plurality of flows to the DRB.
7. The method of any one of claims 1 to 5, wherein the CU receives a plurality of flows from the UPF and wherein performing the mapping comprises performing a mapping of each of the plurality of flows to a different one of a plurality of DRBs.
8. The method of claim 7, wherein specific QoS information is provided to the DU for each one of the plurality of DRBs.
9. The method of claim 5, or of any one of claims 6 to 8 when dependent on claim 5, wherein the QoS information comprises a list of QoS parameters derived from the QoS flow ID and describing a QoS policy to be applied to the DRB, the parameters comprising at least one of: an average throughput, a maximum throughput, a guaranteed throughput, a maximum delay, a maximum delay jitter, a traffic priority level, and a robustness.
10. The method of any one of claims 5 to 9, wherein the CU is configured with a QoS policy associated to each QoS flow ID.
11. A network node comprising a Central Unit (CU) for providing Quality of Service (QoS) information for a Data Radio Bearer (DRB) to a Distributed Unit (DU), the network node comprising processing circuits and a memory, the memory containing instructions executable by the processing circuits whereby the CU is operative to:
receive a flow from a User Plane Function (UPF);
perform a mapping between the flow and the DRB and determine the QoS information associated with the mapping, the QoS information comprising at least one of: QoS information for the DRB and QoS information for the flow; and
provide the QoS information to the DU.
12. The network node of claim 11, wherein the QoS information for the DRB is a DRB QoS profile and the QoS information for the flow is a QoS flow profile.
13. The network node of claim 12, wherein the QoS Information is an aggregate of the DRB QoS profile and the QoS flow profile.
14. The network node of any one of claims 11 to 13 wherein the flow is a QoS flow.
15. The network node of any one of claims 11 to 14, wherein the flow is marked with a QoS flow Identifier (ID) and wherein the mapping is based at least in part on the QoS flow ID.
16. The network node of any one of claims 11 to 15, wherein the CU receives a plurality of flows from the UPF and wherein the CU is further operative to perform a mapping of each of the plurality of flows to the DRB.
17. The network node of any one of claims 11 to 15, wherein the CU receives a plurality of flows from the UPF and wherein the CU is further operative to perform a mapping of each of the plurality of flows to a different one of a plurality of DRBs.
18. The network node of claim 17, wherein specific QoS information is provided to the DU for each one of the plurality of DRBs.
19. The network node of claim 15, or of any one of claims 16 to 18 when dependent on claim 15, wherein the QoS information comprises a list of QoS parameters derived from the QoS flow ID and describing a QoS policy to be applied to the DRB, the parameters comprising at least one of: an average throughput, a maximum throughput, a guaranteed throughput, a maximum delay, a maximum delay jitter, a traffic priority level, and a robustness.
20. The network node of any one of claims 15 to 19, wherein the CU is configured with a QoS policy associated to each QoS flow ID.
21. The network node of any one of claims 11 to 20, wherein the network node is a gNB.
PCT/IB2018/056020 2017-08-11 2018-08-09 Quality of service (qos) management in a distributed radio access network (ran) architecture WO2019030710A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201762544423P 2017-08-11 2017-08-11
US62/544,423 2017-08-11

Publications (1)

Publication Number Publication Date
WO2019030710A1 (en) 2019-02-14

Family

ID=63524337

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2018/056020 WO2019030710A1 (en) 2017-08-11 2018-08-09 Quality of service (qos) management in a distributed radio access network (ran) architecture

Country Status (1)

Country Link
WO (1) WO2019030710A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112469085A (en) * 2020-11-18 2021-03-09 杭州红岭通信息科技有限公司 Control method for downstream flow of F1-U interface of 5G base station
WO2021087813A1 (en) * 2019-11-06 2021-05-14 华为技术有限公司 Session establishment method, data transmission method and related apparatuses

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
"3rd Generation Partnership Project; Technical Specification Group Radio Access Network; NG-RAN; Architecture description (Release 15)", 26 June 2017 (2017-06-26), XP051301973, Retrieved from the Internet <URL:http://www.3gpp.org/ftp/Meetings_3GPP_SYNC/RAN3/Docs/> [retrieved on 20170626] *
"3rd Generation Partnership Project; Technical Specification Group Radio Access Network; Study on new radio access technology: Radio access architecture and interfaces (Release 14)", 3GPP STANDARD ; TECHNICAL REPORT ; 3GPP TR 38.801, 3RD GENERATION PARTNERSHIP PROJECT (3GPP), MOBILE COMPETENCE CENTRE ; 650, ROUTE DES LUCIOLES ; F-06921 SOPHIA-ANTIPOLIS CEDEX ; FRANCE, vol. RAN WG3, no. V14.0.0, 3 April 2017 (2017-04-03), pages 1 - 91, XP051298041 *
"3rd Generation Partnership Project; Technical Specification Group Services and System Aspects; System Architecture for the 5G System; Stage 2 (Release 15)", 3GPP STANDARD ; TECHNICAL SPECIFICATION ; 3GPP TS 23.501, 3RD GENERATION PARTNERSHIP PROJECT (3GPP), MOBILE COMPETENCE CENTRE ; 650, ROUTE DES LUCIOLES ; F-06921 SOPHIA-ANTIPOLIS CEDEX ; FRANCE, vol. SA WG2, no. V1.2.0, 26 July 2017 (2017-07-26), pages 1 - 166, XP051336684 *
3GPP TS, June 2017 (2017-06-01)
3GPP TS 23.501, July 2017 (2017-07-01)
HUAWEI: "Bearer Management over F1", vol. RAN WG3, no. Qingdao, China; 20170627 - 20170629, 26 June 2017 (2017-06-26), XP051302135, Retrieved from the Internet <URL:http://www.3gpp.org/ftp/Meetings_3GPP_SYNC/RAN3/Docs/> [retrieved on 20170626] *



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18766042

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18766042

Country of ref document: EP

Kind code of ref document: A1