CN117981287A - Methods, apparatus, and computer readable media related to low latency services in a wireless network - Google Patents


Info

Publication number
CN117981287A
Authority
CN
China
Prior art keywords
network node
packets
uplink
user plane
indication
Prior art date
Legal status
Pending
Application number
CN202280064248.7A
Other languages
Chinese (zh)
Inventor
C·奥斯特伯格
P·施利瓦-伯特林
I·约翰松
A·琴通扎
H·容凯宁
E·维滕马克
P·威拉斯
M·斯卡夫
Current Assignee
Telefonaktiebolaget LM Ericsson AB
Original Assignee
Telefonaktiebolaget LM Ericsson AB
Priority date
Filing date
Publication date
Application filed by Telefonaktiebolaget LM Ericsson AB
Publication of CN117981287A


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/10 Flow control; Congestion control
    • H04L47/31 Flow control; Congestion control by tagging of packets, e.g. using discard eligibility [DE] bits
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/10 Flow control; Congestion control
    • H04L47/12 Avoiding congestion; Recovering from congestion
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/10 Flow control; Congestion control
    • H04L47/33 Flow control; Congestion control using forward notification
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W28/00 Network traffic management; Network resource management
    • H04W28/02 Traffic management, e.g. flow control or congestion control
    • H04W28/0289 Congestion control
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W28/00 Network traffic management; Network resource management
    • H04W28/02 Traffic management, e.g. flow control or congestion control
    • H04W28/08 Load balancing or load distribution
    • H04W28/0858 Load balancing or load distribution among entities in the uplink
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W28/00 Network traffic management; Network resource management
    • H04W28/02 Traffic management, e.g. flow control or congestion control
    • H04W28/08 Load balancing or load distribution
    • H04W28/086 Load balancing or load distribution among access entities
    • H04W28/0861 Load balancing or load distribution among access entities between base stations
    • H04W28/0862 Load balancing or load distribution among access entities between base stations of same hierarchy level
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W28/00 Network traffic management; Network resource management
    • H04W28/02 Traffic management, e.g. flow control or congestion control
    • H04W28/08 Load balancing or load distribution
    • H04W28/084 Load balancing or load distribution among network function virtualisation [NFV] entities; among edge computing entities, e.g. multi-access edge computing

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

A method performed by a first network node for uplink congestion control in a radio network is provided. The first network node is operable to process one or more first layers of a protocol stack for an uplink connection between a wireless device and the radio network, and is communicatively coupled to a second network node, which processes one or more second, lower layers of the protocol stack. The method comprises: obtaining an indication of a proportion of packets within an uplink user plane flow to be marked with a congestion indicator, wherein the proportion is based on a delay experienced by packets of the uplink user plane flow transmitted by the wireless device to the second network node. The method further comprises: marking the proportion of packets with the congestion indicator; and transmitting the packets of the uplink user plane flow towards a core network of the radio network.

Description

Methods, apparatus, and computer readable media related to low latency services in a wireless network
Technical Field
Embodiments of the present disclosure relate to wireless networks and, in particular, to methods, apparatuses, and computer readable media related to congestion in wireless networks.
Background
Today, typical wireless networks supporting 4G and earlier versions are optimized mainly for mobile broadband (MBB) and voice services. MBB traffic may be very throughput demanding but generally insensitive to delay. For example, non-real-time streaming services handle long delays by using large buffers, which effectively conceal delay jitter in the network, thereby still yielding a good end-user experience. In later versions of 4G, but especially in 5G, other types of services have become the focus. Examples of these new services are ultra-reliable low latency communication (URLLC) services (typically for industrial applications) and gaming services. Within 3GPP standardization, features are being developed to support these new URLLC services and use cases.
Remote control driving is one delay-sensitive use case, but gaming may be a more common example of such applications, including multi-user gaming, Augmented Reality (AR), Virtual Reality (VR), gaming with and without rendering, and so forth. In order to meet the end-user quality of experience of these applications, the end-to-end (E2E) delay must be considered; that is, in addition to providing low delay through the Radio Access Network (RAN), the delay through the Core Network (CN) and up to the application server and/or client also needs to be considered. With edge cloud deployment of applications, the delay contributions from the CN and between the network and the applications can be reduced.
Another quality of service (QoS) aspect to consider for delay-sensitive services is reliability, one measure of which is the probability of delivering traffic within its delay requirement over a specified duration. Reliability is closely related to the delay requirement because, without a delay requirement, traffic could always be delivered by using a sufficient number of retransmissions. Thus, reliability is a very important criterion when adapting the network to delay-sensitive traffic.
When a particular QoS level is to be ensured for a given service, another parameter to consider is the availability of resources. Ensuring that resources are available when services need them ensures timely data exchange and reduces the number of failures of communication procedures over a given bearer.
E2E congestion control and active queue management
E2E congestion control allows nodes along the traffic path to signal congestion to the source. The signaling may be explicit or implicit, such as by dropping packets. The source detects the congestion signaling and then adapts its rate to the weakest link. Active Queue Management (AQM) is typically used in combination with E2E rate adaptation to reduce the delay jitter of long-lived transmissions caused by bursty sources. One example of E2E congestion control and AQM is Low Latency, Low Loss, Scalable throughput (L4S), described in the next section. L4S uses explicit congestion notification signaling together with an active queue management algorithm, and is used within this disclosure to exemplify this type of solution. Those skilled in the art will appreciate that the concepts described herein are equally applicable to other congestion notification mechanisms.
Low delay, low loss, scalable throughput (L4S)
One way to manage latency, especially queue latency, in an E2E data stream is to use L4S (see the Internet draft "Low Latency, Low Loss, Scalable Throughput (L4S) Internet Service: Architecture", https://tools.). Any node serving an L4S-capable data flow may set an Explicit Congestion Notification (ECN) bit in the IP header of the flow if congestion is experienced. The receiving client gathers congestion/ECN statistics and feeds them back to the corresponding server. Based on the reported congestion information, the server application adapts its data rate to keep the queue delay low and the E2E delay short. Thus, congestion indications are set in the forward direction, collected by the client, and sent to the server in a feedback protocol.
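As a hedged illustration of the ECN marking just described (a sketch under our own naming; neither this disclosure nor the L4S draft prescribes this code), the ECN field occupies the two least significant bits of the IPv4 TOS/Traffic Class byte, and a congested node rewrites it to "Congestion Experienced" (CE):

```python
# Illustrative sketch only: ECN codepoints per RFC 3168 / L4S.
# 00 = Not-ECT, 10 = ECT(0), 01 = ECT(1, used by L4S flows), 11 = CE.
NOT_ECT, ECT0, ECT1, CE = 0b00, 0b10, 0b01, 0b11

def ecn_field(tos_byte: int) -> int:
    """Extract the two ECN bits from an IPv4 TOS/Traffic Class byte."""
    return tos_byte & 0b11

def mark_ce(tos_byte: int) -> int:
    """Set the ECN field to CE (congestion experienced), keeping DSCP bits."""
    return (tos_byte & 0b11111100) | CE
```

In this sketch, a client would count how many received packets satisfy `ecn_field(...) == CE` and feed that fraction back to the server.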
Packets are marked as "congested" already when the queue delay is still very low, which reacts quickly to very small signs of congestion and allows the end hosts to implement scalable congestion control, where the change in transmission rate (or congestion window) is proportional to the fraction of packets marked as congested. The overall principle is shown in Fig. 1.
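The proportional reaction can be sketched as a DCTCP-style toy model (our own simplification, not the algorithm of this disclosure; the function name and the additive-increase step are assumptions):

```python
# Toy model of scalable congestion control (illustrative assumption):
# the window is reduced in proportion to the CE-marked fraction seen in
# the last round trip, instead of being halved on any single mark.

def update_cwnd(cwnd: float, marked: int, acked: int) -> float:
    """Return the new congestion window after one round trip."""
    if acked == 0:
        return cwnd                      # no feedback, keep the window
    frac = marked / acked                # fraction of packets marked CE
    if frac > 0:
        return cwnd * (1 - frac / 2)     # proportional decrease
    return cwnd + 1.0                    # additive increase otherwise
```

Because the decrease is proportional to the marked fraction, a lightly congested link causes only a small rate reduction, keeping the queue short without large rate oscillations.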
L4S enables real-time-critical data applications to adapt their rate to the weakest link, minimizing the delay impact of queue build-up. L4S marking is typically triggered by a threshold in the input queue of the transmitting node and can be used to signal a congestion situation. For most transmission nodes this provides good results, in view of their fairly constant or slowly varying output rates. In radio networks, however, the output rate over the wireless link may change more frequently than in conventional wired solutions, which may lead to sudden delay peaks even when L4S is used.
High-level partitioning in RAN
In the 5G standard, the gNB may be divided into a Central Unit (CU) and one or more Distributed Units (DUs) that are communicatively coupled to each other through an F1 interface, as shown in fig. 2 (taken from 3GPP TS 38.401v16.6.0).
The F1 interface split means that PDCP functions are located in the CU, while RLC and lower-layer functions (e.g., the MAC and PHY layers) are located in the DU.
Fig. 3 shows the user plane protocol stack between the RAN and the UE. In the gNB CU-DU split architecture, responsibility for the RAN-UE protocol stack is divided between the CU and the DU, with the higher layers (SDAP and PDCP) terminating in the CU and the remaining lower layers (RLC, MAC, PHY) terminating in the DU. This is shown in Fig. 4. The UE is unaware of the internal gNB CU-DU split, which means that the RAN-UE procedures are the same regardless of the gNB internal architecture.
The distributed termination of the RAN-UE protocol layers is enabled by the F1 interface, which provides a means for the user plane to transfer NR PDCP PDUs between the CU and the DU. Depending on the distance between the CU and DU placements, a transmission delay may be added to the gNB processing delay.
User plane protocol used on F1-U, Xn-U, X2-U
In 3GPP TS 38.425 v16.3.0, the UP protocol for the F1-U, Xn-U and X2-U interfaces is described. In this specification, the following two types of PDUs are defined: downlink data transfer status (DDDS) (PDU type 1) and assistance information (PDU type 2).
PDU type 1 has been defined to enable a node hosting a lower layer (e.g., RLC) to communicate information about DL traffic flows to a node hosting PDCP. Furthermore, the PDU type may be used to signal radio link interruption or radio link restoration of the relevant data radio bearer to the node hosting PDCP.
PDU type 2 has been introduced to allow nodes hosting the lower layers (e.g., RLC) to communicate information that can help the node hosting PDCP better manage the configuration of radio bearers. As an example, the assistance information may be of different types, as described below by means of the value range of the Assistance Information Type field. As can be seen, the assistance information provides information about the radio channel used for the DRB.
The value range of the Assistance Information Type field is as follows: {0 = unknown, 1 = average CQI, 2 = average HARQ failure, 3 = average HARQ retransmission, 4 = DL radio quality index, 5 = UL radio quality index, 6 = power headroom report, 7-228 = reserved for future value extensions, 229-255 = reserved for test purposes}.
The remainder of this section is copied from 3GPP TS 38.425 v16.3.0.
---- Start of text from 3GPP TS 38.425 v16.3.0 ----
5.5.2.2 DL data transfer status (PDU type 1)
This frame format is defined to transfer feedback allowing the receiving node (i.e., the node hosting the NR PDCP entity) to control the downlink user data flow via the sending node (i.e., the corresponding node).
The corresponding DL data transfer status frame is shown below. Table 1 shows an example of how a frame is constructed when all optional IEs (i.e., the IEs whose presence is indicated by the associated flags) are present.
The absence of such IEs would change the position of all subsequent IEs on the octet level.
Table 1: DL data transfer status (PDU type 1) format
5.5.2.3 Side information data (PDU type 2)
The frame format is defined to allow a node hosting the NR PDCP entity to receive the assistance information.
The following table shows the corresponding auxiliary information data frames.
---- End of text from 3GPP TS 38.425 v16.3.0 ----
There are currently certain challenges. When used in a radio access network such as NR, efficient rate adaptation of high-rate, time-critical services is required, which is crucial for obtaining good quality of experience. Existing and earlier RAN technologies do not include this possibility.
The major part of the gNB processing delay comes from the radio interface and the related scheduling. In a gNB split architecture, however, the functions and responsibilities must be distributed among the nodes, and the necessary interface modifications must be defined. Furthermore, depending on the distance between the CU and DU placements, an additional CU-DU transmission delay may have to be considered. Owing to the possibility of sharing data between the congestion detection and congestion marking functions, an integrated gNB L4S solution (with co-located CU-DU functions, or a co-located node hosting PDCP and corresponding node) enables a low-complexity design. The problem is that there is currently no defined solution/design for uplink L4S using a high-level split architecture.
L4S is an addition of information at the IP level and has proven to be an effective method of providing network-supported rate adaptation (see the Ericsson white paper "Enabling time-critical applications over 5G with rate adaptation", May 2021). In the UL direction, IP packets are generated at the UE, and in theory the UE could be the node responsible for setting the L4S information in IP packets transmitted in the UL. However, such an approach would affect the usability of the L4S solution, e.g., depending on the UE's ability to support L4S. A better and more operator-controllable approach is to enable the L4S information to be set by the network, so that L4S support is independent of UE type and capability.
Disclosure of Invention
Thus, one problem addressed by embodiments of the present disclosure is how to support L4S for UL traffic in a network. This problem is even more pronounced in a split RAN architecture, where the gNB-DU can see the resource situation (and thus congestion) over the radio interface, while UL IP packets are visible only at the gNB-CU-UP.
Certain aspects of the present disclosure and embodiments thereof can provide solutions to these and other challenges. One object of the embodiments disclosed herein is to enable support for rate-adaptive applications in a gNB (CU/DU) split architecture. Several methods are presented for allocating the functions required for uplink congestion control in a gNB (CU/DU) split architecture, together with the necessary interface updates. The congestion detection algorithm and the marking probability function may be deployed together in the DU or in the CU. There is also the option of deploying congestion detection in the DU and the marking probability function in the CU. Note that embodiments of the present disclosure also apply to scenarios other than split base station architectures, such as dual-connectivity or multi-connectivity configurations (e.g., where bearers are split between a primary node and a secondary node).
Embodiments of the present disclosure enable UEs using a particular service and/or a particular subscription to act on an indication from the RAN so as to limit the impact of delay due to queue growth. A particular service may be characterized by: the need for low latency, and the ability to perform service rate adaptation based on notifications from the RAN.
A first aspect of the present disclosure provides a method performed by a first network node for uplink congestion control in a radio network. The first network node processes one or more first layers of a protocol stack for an uplink connection between a wireless device and the radio network and is communicatively coupled to a second network node that processes one or more second layers of the protocol stack for the uplink connection. The one or more second layers are lower than the one or more first layers. The method comprises the following steps: obtaining an indication of a proportion of packets within an uplink user plane flow on the uplink connection to be marked with a congestion indicator, wherein the proportion is based on a delay experienced by packets of the uplink user plane flow sent by the wireless device to the second network node; marking the proportion of packets with the congestion indicator; and transmitting packets of the uplink user plane flow towards a core network of the radio network.
An apparatus for performing the method set forth above is also provided. For example, another aspect provides a first network node for uplink congestion control in a radio network. The first network node processes one or more first layers of a protocol stack for an uplink connection between a wireless device and the radio network and is communicatively coupled to a second network node that processes one or more second layers of the protocol stack for the uplink connection. The one or more second layers are lower than the one or more first layers. The first network node comprises processing circuitry configured to cause the first network node to: obtaining an indication of a proportion of packets within an uplink user plane flow on the uplink connection to be marked with a congestion indicator, wherein the proportion is based on a delay experienced by packets of the uplink user plane flow sent by the wireless device to the second network node; marking the proportion of packets with the congestion indicator; and transmitting packets of the uplink user plane flow towards a core network of the radio network.
In a second aspect, the present disclosure provides a method performed by a second network node for uplink congestion control in a radio network. The second network node processes one or more second layers of a protocol stack for an uplink connection between a wireless device and the radio network and is communicatively coupled to a first network node that processes one or more first layers of the protocol stack for the uplink connection. The one or more second layers are lower than the one or more first layers. The method comprises the following steps: transmitting packets for uplink user plane flows on the uplink connection to the first network node for continued transmission towards a core network node of the radio network; and transmitting to the first network node one or more of: an indication of a proportion of packets within the uplink user plane flow on the uplink connection to be marked with a congestion indicator, wherein the proportion is based on a delay experienced by packets of the uplink user plane flow sent by the wireless device to the second network node; and an indication of a delay experienced by packets of the uplink user plane flow sent by the wireless device to the second network node.
An apparatus for performing the method set forth above is also provided. For example, another aspect provides a second network node for uplink congestion control in a radio network. The second network node processes one or more second layers of a protocol stack for an uplink connection between a wireless device and the radio network and is communicatively coupled to a first network node that processes one or more first layers of the protocol stack for the uplink connection. The one or more second layers are lower than the one or more first layers. The second network node comprises processing circuitry configured to cause the second network node to: transmitting packets for uplink user plane flows on the uplink connection to the first network node for continued transmission towards a core network node of the radio network; and transmitting to the first network node one or more of: an indication of a proportion of packets within the uplink user plane flow on the uplink connection to be marked with a congestion indicator, wherein the proportion is based on a delay experienced by packets of the uplink user plane flow sent by the wireless device to the second network node; and an indication of a delay experienced by packets of the uplink user plane flow sent by the wireless device to the second network node.
Particular embodiments can provide one or more of the following technical advantages. One advantage of the embodiments disclosed herein is that they provide an efficient way of deploying uplink congestion detection for network-supported rate adaptation (e.g., L4S) in the high-level split architecture defined in NR. This achieves good QoE for high-rate, rate-adaptive services requiring short delays.
Drawings
For a better understanding of embodiments of the present disclosure, and to show how the same may be carried into effect, reference will now be made, by way of example only, to the accompanying drawings in which:
FIG. 1 shows an overview of the functionality of low-latency, low-loss, scalable throughput (L4S);
fig. 2 shows a high level division in a next generation radio access network (NG-RAN);
fig. 3 shows user plane protocol stacks for a User Equipment (UE) and a gNodeB (gNB);
FIG. 4 shows a distribution of protocol layers for a gNB;
fig. 5 shows an implementation of the L4S function within a network node or base station;
FIG. 6 illustrates characteristics of a marker probability function pMark according to some embodiments;
Fig. 7 illustrates an implementation of L4S within a Radio Access Network (RAN) in accordance with some embodiments;
Figs. 8-10 illustrate gNBs according to some embodiments;
FIG. 11 is a timeline illustrating the principles of delay measurement for a Distributed Unit (DU) according to some embodiments;
FIG. 12 illustrates a method in accordance with certain embodiments;
FIG. 13 illustrates a method in accordance with certain embodiments;
Fig. 14 illustrates an example of a communication system in accordance with some embodiments;
fig. 15 illustrates a UE in accordance with some embodiments;
Fig. 16 illustrates a network node according to some embodiments;
FIG. 17 is a block diagram of a host according to some embodiments;
FIG. 18 is a block diagram illustrating a virtualized environment in accordance with some embodiments; and
Fig. 19 illustrates a communication diagram of a host in accordance with some embodiments.
Detailed Description
Some embodiments contemplated herein will now be described more fully with reference to the accompanying drawings. The embodiments are provided as examples to convey the scope of the subject matter to those skilled in the art. Although described in the context of a distributed base station, for example, where the base station is divided into a Centralized Unit (CU) (which itself may be divided into a Control Plane (CP) unit and a User Plane (UP) unit) and one or more Distributed Units (DUs), the present disclosure provides methods and apparatus that may be applied to any use case in which the user plane protocol described in TS 38.425 v16.3.0 is used. That is, the covered use cases are those in which the node hosting the lower layers and the node hosting the PDCP protocol communicate with each other by means of the Xn-U, X2-U and F1-U interfaces, or via any other interface following TS 38.425 v16.3.0.
This may occur, for example, when the wireless device is configured with dual or multiple connectivity. In these cases, the wireless device is configured with connections to multiple base stations (e.g., a primary node (e.g., MeNB, MgNB, etc.) and one or more secondary nodes (e.g., SeNB, SgNB, etc.)). Radio bearers may be split between the primary node and the secondary node such that the lower layers (e.g., RLC, MAC, and/or PHY) of the protocol stack for the connection/bearer are hosted at the secondary node, while the higher layers (e.g., PDCP, IP, etc.) are hosted at the primary node. Thus, embodiments of the present disclosure are equally applicable to these scenarios. In this disclosure, unless otherwise specified, the term "first network node" refers to a network node or base station that hosts the upper layers of a protocol stack for a connection between a wireless device (e.g., UE) and a radio network. Examples of the first network node include a centralized unit (e.g., CU-UP) of a distributed base station, and a primary node for wireless devices configured with dual or multiple connectivity. The term "second network node" refers to a network node or base station that hosts the lower layers of a protocol stack for a connection between a wireless device (e.g., UE) and a radio network. Examples of the second network node include a Distributed Unit (DU) of a distributed base station, and a secondary node for a wireless device configured with dual or multiple connectivity.
Fig. 5 illustrates an implementation of the L4S functionality within a network node or base station (e.g., a gNB or eNB). The core L4S functions are listed below:
    • Packet marking: IP packets are marked as having experienced congestion (CE), as specified by the L4S Internet draft (described above). This function uses the output from PMark to identify the proportion of packets to be marked. It has a deployment constraint, namely that it must be located where the IP packet headers of the application data flow can be accessed, which implies allocation at the PDCP entity.
    • Congestion Detection Algorithm (CDA): detects congestion, and the congestion level, in the data flow. It estimates whether the (queue) delay objective can be met and, if there is a deviation, how large it is.
    • PMark: based on information from the CDA, calculates the fraction of packets to be marked as CE, expressed as a marking probability. For example, the PMark function may have the characteristics shown in Fig. 6, where the marking probability increases linearly with the (queue) delay.
FIG. 6 illustrates characteristics of the pMark function according to one embodiment of the present disclosure. Those skilled in the art will appreciate that Fig. 6 illustrates only one possible implementation of the pMark function, and that other pMark functions are possible within the scope of the appended claims and/or embodiments. For example, Fig. 6 shows the pMark function varying linearly between a low and a high value (e.g., 0 and 1, respectively) for delay times between a low and a high threshold (Th_low and Th_high, respectively). At delay times below the low threshold, the pMark function may take the low value; at delay times above the high threshold, the pMark function may take the high value. Alternatively, the pMark function may vary non-linearly; for example, it may vary in quantization steps as the delay varies, or as some other function of the delay experienced by the connection's data packets. Other examples will naturally occur to those skilled in the art, and embodiments of the disclosure are not limited in this respect.
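A minimal sketch of the linear characteristic just described, assuming illustrative thresholds (the names `th_low`/`th_high` and their default values are ours, not taken from the disclosure):

```python
# Sketch of one possible pMark characteristic (cf. Fig. 6): the marking
# probability ramps linearly from 0 to 1 as the queue delay moves
# between a low and a high threshold. Threshold values are assumptions.

def p_mark(delay_ms: float, th_low: float = 5.0, th_high: float = 20.0) -> float:
    """Return the probability that a packet should be CE-marked."""
    if delay_ms <= th_low:
        return 0.0                       # below Th_low: never mark
    if delay_ms >= th_high:
        return 1.0                       # above Th_high: always mark
    return (delay_ms - th_low) / (th_high - th_low)
```

A quantized or non-linear variant would simply replace the final ramp expression.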
Fig. 7 illustrates an implementation of L4S within a RAN according to an embodiment of the present disclosure. The gNB (which is responsible for scheduling the UE) may estimate the UE output queue based on Buffer Status Reports (BSRs) sent by the UE. The UE output queue and other metrics (e.g., channel quality, cell load, etc.) may be used to calculate the level of IP packet congestion marking to be injected into the data flow towards the application client.
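The estimate described above can be sketched as a simple drain-time calculation (our own simplification with assumed names; the actual gNB estimator is not specified here): the buffered bytes reported in the BSR, divided by the currently scheduled uplink rate, give an approximate queue delay.

```python
# Illustrative sketch (assumed names): approximate UL queue delay from
# the UE Buffer Status Report and the scheduled uplink rate.

def estimate_queue_delay_ms(bsr_bytes: int, sched_rate_bps: float) -> float:
    """Queue delay ~ buffered bits / drain rate, in milliseconds."""
    if sched_rate_bps <= 0:
        return float("inf")              # nothing scheduled: unbounded delay
    return bsr_bytes * 8 / sched_rate_bps * 1000.0
```

Such an estimate would then be one input, alongside channel quality and cell load, to the congestion-marking level.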
In a split gNB architecture (or for split bearers in dual or multiple connectivity), the core functions in Fig. 5 are distributed among the DUs and/or CUs (secondary network nodes and/or primary network nodes). The packet marking function may be located in the CU along with the PDCP function, but there are different options for how to allocate the CDA and PMark functions.
Fig. 8 shows an embodiment according to the present disclosure, where the CDA and pMark functions are located in the DU and the packet marking function is located in the CU. Thus, the DU sends an indication of the proportion of packets to be marked with a congestion indicator to the CU over the F1 interface.
Fig. 9 shows an embodiment according to the present disclosure, where the CDA is located in the DU and the pMark and packet marking functions are located in the CU. Thus, the DU transmits information about the delay experienced by packets of the uplink user plane flow to the CU over the F1 interface. The PMark function uses this information to calculate the proportion of packets to be marked with the congestion indicator, and that proportion of packets is marked by the packet marking function in the PDCP entity.
Fig. 10 illustrates an embodiment according to the present disclosure, wherein the CDA, pMark, and packet marking functions are all located in the CU. Thus, the DU sends information about the data flow and/or the performance metrics monitored by the DU to the CU over the F1 interface. The CDA uses this information to calculate or estimate, for example, the delay experienced by packets of the uplink user plane flow. The PMark function uses the delay to calculate the proportion of packets to be marked with the congestion indicator, and that proportion of packets is marked by the packet marking function in the PDCP entity.
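Wherever the marking proportion is computed (in the DU in case A, in the CU in cases B and C), applying the PMark output in the PDCP entity could look as follows (a probabilistic sketch with assumed names; the disclosure does not prescribe an implementation):

```python
# Sketch (assumed names): CE-mark roughly a fraction p of the uplink
# IP packets, here represented by their TOS/Traffic Class bytes.
import random

def mark_stream(tos_bytes, p, rng):
    """Return a copy of tos_bytes with ~p of packets CE-marked (ECN = 0b11)."""
    out = []
    for tos in tos_bytes:
        if rng.random() < p:             # mark with probability p
            tos = (tos & 0b11111100) | 0b11
        out.append(tos)
    return out
```

Drawing an independent random decision per packet spreads the CE marks evenly over the flow, which is what a fraction-based sender reaction expects.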
The embodiments described above with respect to fig. 8, 9 and 10 are labeled as cases A, B and C, respectively. Further details regarding these embodiments are set forth below.
Cases A and B: CDA in the DU
In both cases A and B, the gNB-DU may host a function (e.g., the CDA) that determines whether UL resources are congested. This is possible because the gNB-DU receives Buffer Status Reports (BSRs) and also collects UL measurements revealing the radio quality of the UL channel. The gNB-DU also knows the rate at which RLC traffic from the UE is acknowledged, and can thus infer whether congestion is due to a poor radio link or to a lack of radio resources (or both). Furthermore, the gNB-DU knows the resource utilization, e.g., information signaled to the gNB-CU over the F1-C interface via the Resource Status Update procedure. This information provides insight into whether potential congestion is occurring in the UL and can be used by the gNB-DU to provide instructions to the gNB-CU-UP as to how to perform ECN marking.
Case A: CDA and PMark in DU, marking probability is sent over F1
As described above, in case A, the CDA and PMark functions are hosted in the DU (or secondary network node). In one embodiment of the present disclosure, information providing the marking probability may be added to the 3GPP TS 38.425 assistance information PDU as follows (the additions to the relevant standard are described below).
Assistance information data (PDU Type 2), extracted from 3GPP TS 38.425 v16.3.0, section 5.5.2.3
The frame format is defined to allow a node hosting the NR PDCP entity to receive the assistance information.
Table 2 shows the corresponding Assistance Information Data frame.
Table 2: Assistance information data (PDU Type 2) format
L4S marking probability indication
Description: this field indicates whether the L4S marking probability field is present.
Value range: {0 = L4S marking probability not present, 1 = L4S marking probability present}.
Field length: 1 bit.
L4S marking probability
Description: this field indicates the probability with which UL IP packets should be marked with the L4S mark (i.e., ECN marking). For example, if the L4S marking probability is set to 50, the node hosting PDCP should interpret this information as a suggestion that 50% of the UL IP packets in the egress be marked with the L4S mark.
Value range: {0 … }.
Field length: 1 octet.
The number n of octets used for the L4S marking probability may reflect the desired marking probability resolution. In the above example, 1 octet is used to represent the L4S marking probability, but more octets can be allocated if higher precision is desired. Based on the L4S marking probability received in the assistance information, the CU-UP determines when to start including the ECN mark in the IP header, according to the received L4S marking probability value. Upon receiving assistance information with an L4S marking probability value different from the previously received one, the CU-UP may change the ECN marking accordingly. The absence of the L4S marking probability in subsequent assistance information may be interpreted by the CU-UP as an indication that the L4S marking probability is no longer applicable, and that the ECN mark should therefore no longer be included in the IP header.
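The handling described above can be sketched as follows. This is a minimal Python illustration, not the exact TS 38.425 bit layout (in the real frame the 1-bit presence indication shares an octet with other flags), and the function names are invented for the example:

```python
import struct

def encode_l4s_assistance(probability_pct=None):
    """Illustrative encoding: one octet whose least-significant bit is the
    presence indication, followed by one octet for the marking probability
    (as a percentage) when present."""
    if probability_pct is None:
        return struct.pack("!B", 0)  # indication = 0: no probability field
    if not 0 <= probability_pct <= 100:
        raise ValueError("a 1-octet percentage must be 0..100")
    return struct.pack("!BB", 1, probability_pct)

def decode_l4s_assistance(frame):
    """Return the marking probability in percent, or None when absent, in
    which case the CU-UP would stop including ECN marks in egress IP headers."""
    indication = frame[0] & 0x01
    return frame[1] if indication else None
```

With a wider field (more octets), the same scheme would give finer probability resolution, as noted above.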
In another embodiment of the present disclosure, the information added to PDU Type 2 described above (i.e., the L4S marking probability indication and the L4S marking probability) may instead be added to the PDU Type 1 PDU (i.e., the DL Data Delivery Status, DDDS). One advantage of this approach may be that PDU Type 1 is typically received more frequently by the node hosting PDCP than PDU Type 2. Thus, by adding the L4S assistance information to PDU Type 1, the gNB-CU-UP can receive guidance on how to set the L4S marking more frequently.
In another embodiment of the present disclosure, the L4S congestion indication may be included as a new event in the cause value IE included in PDU Type 1 defined in TS 38.425. An example of how this new value may be included is reported below:
Cause value, extracted from 3GPP TS 38.425 v16.3.0, section 5.5.3.23
Description of: the parameter indicates a particular event reported by the corresponding node.
Value range: {0=unknown, 1=radio link outage, 2=radio link restoration, 3=UL radio link outage, 4=DL radio link outage, 5=UL radio link restoration, 6=DL radio link restoration, 7=L4S congestion indication, 8-228=reserved for future value extension, 229-255=reserved for test purposes}.
As an alternative solution, a congestion flag (referred to herein as the L4S congestion indication) may be added to PDUs signaled from the gNB-DU to the gNB-CU, or more generally to PDUs signaled from a node hosting the lower layers to a node hosting PDCP (e.g., in the GTP-U extension header of UL GTP-U PDUs). The L4S congestion indication may be provided with each uplink PDU containing an IP packet determined, by an implementation-specific congestion control algorithm, to carry the congestion indication (e.g., in the ECN field of the IP packet). If an uplink PDU contains more than one IP packet, or a fragment thereof, and if the L4S congestion indication is added to the uplink PDU (e.g., in the GTP-U extension header), the node hosting PDCP that receives the UL PDU will mark all corresponding IP packets with the field (e.g., the ECN field) indicating congestion.
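The rule above, where one congestion flag on the uplink PDU causes all contained IP packets to be marked, can be sketched as follows. This is an illustrative Python sketch; production code would also respect the ECT codepoints (CE should only be set on ECN-capable packets) and recompute the IPv4 header checksum, both omitted here:

```python
def set_ecn_ce(ip_packet):
    """Set the ECN field, the two least-significant bits of the IPv4 TOS
    octet (byte 1 of the header), to CE = 0b11. Checksum update omitted."""
    pkt = bytearray(ip_packet)
    pkt[1] |= 0b11
    return pkt

def handle_ul_pdu(ip_packets, l4s_congestion_indication):
    """If the (hypothetical) GTP-U extension header carried the congestion
    flag, mark every IP packet contained in the uplink PDU."""
    if l4s_congestion_indication:
        return [set_ecn_ce(p) for p in ip_packets]
    return [bytearray(p) for p in ip_packets]
```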
Case B: CDA in DU and PMark in CU, DU delay information is sent over F1
In this embodiment of the disclosure, the gNB-DU provides DU delay information to the gNB-CU-UP, e.g., an indication of the delay experienced by the packets of the uplink user plane flow. In one embodiment, the DU delay information reflects the time to "empty" the UE outgoing buffer and send the data to the gNB-CU-UP, i.e., the time from BSR reception until the data is sent on F1, as measured in the DU.
The principle of the DU delay measurement is illustrated using the timelines in fig. 11. The left side (example (a)) can be described as:
1. When the UE has data to transmit, the UE sends a BSR (including the data size).
2. The gNB-DU responds with a grant indicating the data size and the resources to be used in the upcoming transmission.
3. The UE transmits uplink data using the granted resources.
4. The gNB-DU processes the reception and the data is sent to the gNB-CU over F1.
The DU delay corresponds to the time from (1) to (4). As shown in fig. 11, the UE data message may also include an updated BSR. In example (a), all the data is granted at once and no new data arrives in the UE outgoing buffer. In example (b), multiple BSRs and UE data transmissions are required to empty the UE buffer, which means a longer DU delay.
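The measurement from (1) to (4), including the multi-BSR case of example (b), can be sketched as follows. This is an illustrative Python sketch; the event names and the byte accounting are assumptions made for the example:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DuDelayMeter:
    """Illustrative DU delay measurement: the clock starts when the first BSR
    of a buffer-emptying episode is received and stops once all reported data
    has been forwarded on F1 (examples (a) and (b) of fig. 11)."""
    start_ts: Optional[float] = None
    outstanding_bytes: int = 0

    def on_bsr(self, ts, reported_bytes):
        # start the clock on the first BSR of an episode; later BSRs refresh
        # the amount of data still waiting in the UE buffer
        if self.start_ts is None:
            self.start_ts = ts
        self.outstanding_bytes = reported_bytes

    def on_f1_send(self, ts, sent_bytes):
        # returns the DU delay once all reported data has been sent on F1
        self.outstanding_bytes = max(0, self.outstanding_bytes - sent_bytes)
        if self.outstanding_bytes == 0 and self.start_ts is not None:
            delay = ts - self.start_ts
            self.start_ts = None  # episode finished
            return delay
        return None
```

In example (a) a single BSR/send cycle yields the delay; in example (b) the episode spans several cycles and the measured delay is correspondingly longer.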
The DU delay measurements may be included with (piggybacked on) the F1 data. Alternatively, they may be sent in a separate message, e.g. as a new information element in the assistance information (i.e., PDU Type 2). The DU delay may also be averaged and transmitted less frequently than the F1 data transmissions. Yet another option is to use an existing information element in an existing message but with a new interpretation; for example, the UL Delay DU Result in the assistance information (i.e., PDU Type 2) could be redefined for this purpose.
As described above, the gNB-DU knows the cell/traffic load, the resource situation and the radio channel quality, and based on this information, the Congestion Detection Algorithm (CDA) can adjust the DU delay according to the current and predicted situation and then send it to the gNB-CU.
The PMark function in the gNB-CU uses the received DU delay information as an input to the L4S marking probability calculation, e.g., as described above with respect to fig. 6.
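As an illustration of how the DU delay could drive the marking probability, the sketch below uses a simple linear ramp between two assumed delay thresholds; the actual PMark algorithm of fig. 6 is not reproduced here, and the threshold values are invented for the example:

```python
def marking_probability(delay_ms, low_ms=5.0, high_ms=20.0):
    """Map a reported DU delay to an L4S marking probability: 0.0 at or below
    low_ms, 1.0 at or above high_ms, linear in between. The thresholds are
    illustrative assumptions, not values from the disclosure."""
    if delay_ms <= low_ms:
        return 0.0
    if delay_ms >= high_ms:
        return 1.0
    return (delay_ms - low_ms) / (high_ms - low_ms)
```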
Case C: CDA and PMark in CU, monitoring/analysis of the data flow in the CU and/or performance metrics monitored by the DU and sent over F1
As described above, in case C, the CDA and PMark functions are hosted in the CU (or master network node). For example, the DU signals assistance information to the gNB-CU that instructs the CU-UP how to set the L4S information in UL traffic, depending on the channel conditions monitored in the UL.
In this embodiment, the node hosting PDCP (e.g., the gNB-CU-UP) may rely on the information contained in PDU Type 1 and/or PDU Type 2 to infer whether the ECN flag should be set in UL IP packets in the egress. The information that the node hosting PDCP may use for this purpose may be one or more of the following:
The actual UL data flow received in the gNB-CU via the F1-U interface: this may be monitored and used to estimate congestion-related delays that occur when the number of bits to be transmitted is limited by the capacity of the air/radio interface. (The reasons for the limitation may be shadow fading, interference, scheduling of other users, or an application that temporarily transmits large amounts of data.)
The UL Delay DU Result contained in PDU Type 2: this information may indicate to the node hosting PDCP that the delay on the Uu interface in the UL is too large, which may indicate that congestion is occurring. Note that the gNB-CU-UP may also utilize the UL D1 Result received from the gNB-CU-CP over the E1 interface, which represents the UL delay experienced by the UE for traffic relative to a particular DRB. Such a delay measurement includes the time from when a PDU enters the PDCP buffer until the PDU leaves over the air/radio interface. The gNB-CU-UP receives the UL D1 Result IE in the E1AP information structure shown in Appendix 1.
The assistance information type and radio quality assistance information contained in PDU Type 2: this information may provide the gNB-CU-UP with information about the UL channel conditions. An example of such information is the UL radio quality index.
The cause value contained in PDU Type 1: this IE may include events that assist the gNB-CU-UP in determining the UL channel status, such as a UL radio link outage indication, which indicates that the radio link is not available for transmission in the UL and may indicate the presence of UL congestion.
Furthermore, the gNB-CU-UP can use measurements of the F1-U round trip time (RTT) to infer whether congestion is due to F1-U resource limitations. Such RTT measurements may be obtained in different ways, for example by using the GTP-U Echo function, where an Echo Response is generated when an Echo Request is received, or by using the Report Polling flag of the assistance information, which may be included in (DL) PDU Type 0 and triggers immediate reporting of PDU Type 1 and PDU Type 2 packets from the gNB-DU. Thus, the gNB-CU-UP can calculate the RTT between sending the PDU including the polling flag and receiving the associated report, and infer the F1-U delay from the RTT.
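The polling-based RTT measurement can be sketched as follows. This is an illustrative Python sketch; correlating each report with its polled PDU via a sequence number is an assumption made for the example:

```python
class F1RttProbe:
    """Sketch of F1-U RTT estimation via the Report Polling flag: the CU-UP
    records when it sent a DL PDU with the polling flag set and computes the
    RTT when the associated PDU Type 1/2 report arrives."""

    def __init__(self):
        self._pending = {}  # sequence number -> send timestamp

    def on_poll_sent(self, sn, ts):
        # a DL PDU carrying the polling flag was sent at time ts
        self._pending[sn] = ts

    def on_report(self, sn, ts):
        # returns the RTT for the matching polled PDU, or None if unmatched
        sent = self._pending.pop(sn, None)
        return None if sent is None else ts - sent
```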
For the use case of a split RAN architecture where the node hosting the lower layers is a gNB-DU and the node hosting PDCP is a gNB-CU, i.e. where the gNB-CU-UP and gNB-CU-CP are not split, another piece of information that the gNB-CU can use to infer whether there is congestion on the UL channel is the information received by means of the Resource Status Update message over the F1-C interface. The message contains per-cell resource information, relating to e.g. the utilization of PRBs, the availability of resources in a cell, the number of active UEs in a cell, the number of RRC connections in a cell, transport-level traffic load indications, etc. The Resource Status Update message is shown in Appendix 2.
By receiving one or more of the pieces of information listed above, the node hosting PDCP can infer the presence of a congestion condition on the UL communication channel for a particular DRB. Accordingly, the node hosting PDCP may decide to apply ECN marking to some or all UL IP packets in the egress of the corresponding DRB traffic.
In one embodiment of the present disclosure, a node hosting the lower layers (e.g., a gNB-DU) may host functionality intended to influence how the node hosting PDCP (e.g., a gNB-CU) applies ECN marking. In such cases, the node hosting the lower layers may set, adapt or configure some of the parameters listed above in order to produce a specific ECN marking at the node hosting PDCP. For example, some parameters may be set to values that will trigger ECN marking at the node hosting PDCP. Some parameters that the node hosting the lower layers may set are:
cause value in PDU type 1: in this case, an event such as UL radio link interruption may be declared for the purpose of triggering ECN marking.
The Resource Status Update information: in this case, information such as the Composite Available Capacity and the Radio Resource Status may be set to values that allow the node hosting PDCP to determine that congestion has occurred and that a specific ECN marking policy therefore needs to be applied.
Appendix 1: E1AP information structure
GNB-CU-CP Measurement Results Information, extracted from 3GPP TS 38.463 v16.6.0 (E1 application protocol), section 9.2.2.19
The message is sent to the gNB-CU-UP to provide the measurement results received by the gNB-CU-CP.
The direction is: gNB-CU-CP → gNB-CU-UP
Appendix 2: Resource Status Update message
Resource Status Update, extracted from 3GPP TS 38.473 v16.6.0 (F1 application protocol), section 9.2.1.23
The message is sent by the gNB-DU to the gNB-CU to report the results of the requested measurements.
The direction is: gNB-DU→gNB-CU
Fig. 12 illustrates a method in accordance with certain embodiments. The method of fig. 12 may be performed by a first network node within a radio network (e.g., a centralized unit (CU, CU-UP, etc.) within a distributed base station, a master node, MeNB, MgNB, etc., as illustrated by network node 1460 or 1600 described later with reference to fig. 14 and 16, respectively). The method should be understood in the context of fig. 5 to 11 above. In particular, the method described with respect to fig. 12 may correspond to the actions of the CU or the master network node described above with respect to those figures.
The first network node processes one or more first layers of a protocol stack for an uplink connection between the wireless device and the radio network. For example, the first network node may host one or more of a PDCP layer and an IP layer of the protocol stack. The first network node is communicatively coupled to a second network node (e.g., a DU, a secondary node, a SeNB, a SgNB, etc.) that processes one or more second layers of a protocol stack for an uplink connection. The one or more second layers are lower than the one or more first layers. For example, the second network node may host one or more of the following: RLC, MAC and PHY layers of the protocol stack.
The method starts in step 1202, where a first network node receives packets for an uplink user plane flow on an uplink connection between a wireless device and a radio network from a second network node. Data Radio Bearers (DRBs) may be used to control and/or organize UL user plane flows.
In step 1204, the first network node obtains an indication of the proportion of packets within the uplink user plane flow to be marked with a congestion indicator. Note that the packets to be marked with the congestion indicator may be at a different layer of the protocol stack than the packets received from the second network node. For example, the packets received from the second network node may be RLC PDUs, while the packets to be marked with the congestion indicator may be PDCP PDUs or IP packets. The proportion is based on a delay experienced by packets of the uplink user plane flow sent (e.g., by the wireless device) to the second network node.
In step 1206, the first network node marks the proportion of packets with a congestion indicator (e.g., an L4S indicator, such as an ECN field). This step may be performed in the PDCP layer in the first network node.
The packets may be marked using probabilistic techniques. For example, the indication of the proportion of packets to be marked with the congestion indicator may comprise an indication of a probability according to which the first network node marks the packets.
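The probabilistic marking of steps 1204-1206 can be sketched as an independent Bernoulli trial per egress packet. This is an illustrative Python sketch; the function name and the packet representation are assumptions for the example:

```python
import random

def mark_packets(packets, probability, rng=None):
    """Mark each egress packet with the congestion indicator independently,
    with the indicated probability. Returns (packet, marked) pairs; a real
    implementation would instead set e.g. the ECN field of the IP header."""
    rng = rng or random.Random()
    return [(pkt, rng.random() < probability) for pkt in packets]
```

Passing a seeded `random.Random` makes the behaviour reproducible for testing; over many packets the marked fraction approaches the indicated proportion.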
In step 1208, the first network node sends packets of the uplink user plane flow towards the core network of the radio network, including packets of the proportion marked with a congestion indicator. For example, the packet may be sent to the core network over the backhaul network.
Those skilled in the art will appreciate that step 1204 may vary depending on, for example, the different cases A, B and C described above.
In case A, for example, where the CDA and PMark functions are hosted in the second network node, step 1204 may include receiving an indication (e.g., in the form of a probability) of the proportion of packets to be marked with a congestion indicator from the second network node, e.g., over an F1, X2, or Xn interface. For example, as described above, the indication may be included in an assistance information (Type 2) PDU or a downlink data delivery status (Type 1) PDU.
In case B, for example, where the CDA function is hosted in the second network node and the PMark function is hosted in the first network node, step 1204 may include calculating a proportion of packets based on a delay experienced by packets of the uplink user plane flow sent to the second network node. In this example, the first network node may receive from the second network node an indication of the delay that these packets experienced in their transmission from the wireless device (e.g., UE) to the first network node via the second network node. For example, the delay may include a delay between sending, by the wireless device, a buffer status report to the second network node and sending, by the second network node, data corresponding to the buffer status report to the first network node. Additionally or alternatively, the delay may include a delay between the sending of the buffer status report by the wireless device to the second network node and the sending of all data indicated in the buffer status report by the second network node to the first network node. Further details can be found in the description above with respect to fig. 11.
The indication of delay may be piggybacked within the data of the uplink user plane flow (e.g., within the packet received in step 1202) or received in a PDU (e.g., an assistance information PDU) from the second network node. In either case, the delay value reported to the first network node may be averaged by the second network node over multiple measurement instances of the delay.
In case C, where the CDA and PMark functions are hosted in the first network node, step 1204 may further include calculating a proportion of packets based on the delay experienced by the packets of the uplink user plane flow sent to the second network node. In this embodiment, rather than receiving the delay directly from the first network node, the first network node may receive information from the second network node or a third network node (e.g., a CU-CP node), thereby enabling the first network node to calculate, estimate, or infer the delay experienced by the packets of the uplink user plane flow. The information received from the second network node may include one or more of: an uplink data stream; uplink delay on the radio interface; one or more radio quality metrics for a connection between the wireless device and the second network node; an indication of a status of one or more uplink channels between the wireless device and the second network node; an indication of a round trip time of a transmission between the first network node and the second network node. The information received from the third network node (which may relate to resources utilized in the cell served by the second network node) may comprise one or more of: an indication of utilization of the physical resource blocks; an indication of availability of resources in the cell; the number of active wireless devices in a cell; the number of RRC connections in the cell; and one or more transmission level traffic load indications.
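A CDA hosted in the first network node might combine such inputs as in the following sketch; the thresholds, the radio quality scale and the decision rule are all assumptions made for illustration, not part of the disclosure:

```python
def infer_ul_congestion(ul_delay_du_ms, radio_quality_index, prb_utilization_pct,
                        delay_threshold_ms=20.0, prb_threshold_pct=85.0,
                        quality_floor=3):
    """Illustrative heuristic: declare UL congestion when the reported Uu
    delay is high and the cell is resource-limited rather than only
    radio-limited (cf. the bad-radio vs. lack-of-resources distinction
    drawn for the gNB-DU above). All thresholds are assumptions."""
    radio_limited = radio_quality_index < quality_floor       # poor UL radio
    resource_limited = prb_utilization_pct >= prb_threshold_pct
    delayed = ul_delay_du_ms >= delay_threshold_ms
    return delayed and (resource_limited or not radio_limited)
```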
In other embodiments, a first congestion indicator may be added to a PDU signaled from the second network node to the first network node. The first congestion indicator (which may be provided, for example, in a GTP-U extension header of a UL GTP-U PDU) may be provided with each uplink PDU containing an IP packet determined, by a congestion control algorithm in the second network node, to carry a second congestion indicator (for example, in the ECN field of the IP packet). If an uplink PDU contains more than one IP packet, or a fragment thereof, and if the first congestion indicator is added to the uplink PDU (e.g., in the GTP-U extension header), the node hosting PDCP that receives the UL PDU will mark all corresponding IP packets within the uplink PDU with the second congestion indicator (e.g., in the ECN field).
Fig. 13 illustrates a method in accordance with certain embodiments. The method of fig. 13 may be performed by a second network node (e.g., a distributed unit (DU, etc.) within a distributed base station, a secondary node, SeNB, SgNB, etc., as illustrated by network node 1460 or 1600 described later with reference to fig. 14 and 16, respectively). The method should be understood in the context of fig. 5 to 11 above. In particular, the method described with respect to fig. 13 may correspond to the actions of the DU or the secondary network node described above with respect to these figures.
The second network node processes one or more second layers of a protocol stack for an uplink connection between the wireless device and the radio network. For example, the second network node may host one or more of the following: RLC, MAC and PHY layers of the protocol stack. The second network node is communicatively coupled to a first network node (e.g., CU, master node, meNB, mgNB, etc.), which processes one or more first layers of a protocol stack for uplink connections. The one or more second layers are lower than the one or more first layers. For example, the first network node may host one or more of a PDCP layer and an IP layer of the protocol stack.
The method starts in step 1302, where a second network node sends packets for an uplink user plane flow on an uplink connection between a wireless device and a radio network to a first network node.
In step 1304, the second network node sends one or more of the following to the first network node: an indication of a proportion of packets within an uplink user plane flow on the uplink connection to be marked with a congestion indicator, wherein the proportion is based on a delay experienced by packets of the uplink user plane flow transmitted by the wireless device to the second network node; an indication of a delay experienced by packets of an uplink user plane flow transmitted by the wireless device to the second network node; and information enabling the first network node to calculate or estimate a delay experienced by packets of the uplink user plane flow transmitted by the wireless device to the second network node.
Those skilled in the art will appreciate that step 1304 may vary depending on, for example, the different cases A, B and C described above.
In case A, for example, where the CDA and PMark functions are hosted in the second network node, step 1304 may include sending an indication (e.g., in the form of a probability) of the proportion of packets to be marked with a congestion indicator to the first network node, e.g., over an F1, X2, or Xn interface. For example, as described above, the indication may be included in an assistance information (Type 2) PDU or a downlink data delivery status (Type 1) PDU.
In case B, for example, where the CDA function is hosted in the second network node and the PMark function is hosted in the first network node, step 1304 may include sending to the first network node an indication of the delay that these packets experience in their transmission from the wireless device (e.g., UE) to the first network node via the second network node. For example, the delay may include a delay between sending, by the wireless device, a buffer status report to the second network node and sending, by the second network node, data corresponding to the buffer status report to the first network node. Additionally or alternatively, the delay may include a delay between sending, by the wireless device, the buffer status report to the second network node and sending, by the second network node, all data indicated in the buffer status report to the first network node. Further details can be found in the description above with respect to fig. 11.
The indication of delay may be piggybacked within the data of the uplink user plane flow (e.g., within the packets sent in step 1302) or sent in a PDU (e.g., an assistance information PDU) from the second network node. In either case, the delay value reported to the first network node may be averaged over multiple measurement instances of the delay.
In case C, where the CDA and PMark functions are hosted in the first network node, step 1304 may include sending information to the first network node, thereby enabling the first network node to calculate, estimate, or infer the delay experienced by the packets of the uplink user plane flow. The information sent to the first network node may include one or more of the following: an uplink data stream; uplink delay on the radio interface; one or more radio quality metrics for a connection between the wireless device and the second network node; an indication of a status of one or more uplink channels between the wireless device and the second network node; an indication of a round trip time of a transmission between the first network node and the second network node.
In other embodiments, a first congestion indicator may be added to a PDU signaled from the second network node to the first network node. The first congestion indicator (which may be provided, for example, in a GTP-U extension header of a UL GTP-U PDU) may be provided with each uplink PDU containing an IP packet determined, by a congestion control algorithm in the second network node, to carry a second congestion indicator (for example, in the ECN field of the IP packet). If an uplink PDU contains more than one IP packet, or a fragment thereof, and if the first congestion indicator is added to the uplink PDU (e.g., in the GTP-U extension header), the node hosting PDCP that receives the UL PDU will mark all corresponding IP packets within the uplink PDU with the second congestion indicator (e.g., in the ECN field).
Fig. 14 illustrates an example of a communication system 1400 in accordance with some embodiments.
In this example, the communication system 1400 includes a telecommunications network 1402 that includes an access network 1404 (e.g., a Radio Access Network (RAN)) and a core network 1406 (which includes one or more core network nodes 1408). Access network 1404 includes one or more access network nodes, such as network nodes 1410a and 1410b (one or more of which may be collectively referred to as network nodes 1410), or any other similar third generation partnership project (3GPP) access nodes or non-3GPP access points. The network node 1410 facilitates direct or indirect connection of User Equipment (UE), such as connecting UEs 1412a, 1412b, 1412c, and 1412d (one or more of which may be collectively referred to as UE 1412) to the core network 1406 via one or more wireless connections.
Example wireless communications through wireless connections include transmitting and/or receiving wireless signals using electromagnetic waves, radio waves, infrared waves, and/or other types of signals suitable for conveying information without the use of wires, cables, or other material conductors. Further, in various embodiments, communication system 1400 may include any number of wired or wireless networks, network nodes, UEs, and/or any other components or systems that may facilitate or participate in the communication of data and/or signals (whether via wired or wireless connections). Communication system 1400 may include and/or interface with any type of communication, telecommunications, data, cellular, radio network, and/or other similar type of system.
UE 1412 may be any of a variety of communication devices including a wireless device that is arranged, configured, and/or operable to wirelessly communicate with network node 1410 and other communication devices. Similarly, network node 1410 is arranged, capable, configured and/or operable to communicate directly or indirectly with UE 1412 and/or other network nodes or devices in telecommunications network 1402 to enable and/or provide network access (e.g., wireless network access) and/or to perform other functions (e.g., management) in telecommunications network 1402.
In the depicted example, core network 1406 connects network node 1410 to one or more hosts, such as host 1416. These connections may be direct connections or indirect connections via one or more intermediary networks or devices. In other examples, the network node may be directly coupled to the host. The core network 1406 includes one or more core network nodes (e.g., core network node 1408) constructed with hardware and software components. The features of these components may be substantially similar to those described for the UE, network node, and/or host, such that their description generally applies to the corresponding components of the core network node 1408. An example core network node includes functionality of one or more of: a Mobile Switching Center (MSC), a Mobility Management Entity (MME), a Home Subscriber Server (HSS), an Access and Mobility Management Function (AMF), a Session Management Function (SMF), an Authentication Server Function (AUSF), a Subscription Identifier De-concealing Function (SIDF), Unified Data Management (UDM), a Security Edge Protection Proxy (SEPP), a Network Exposure Function (NEF), and/or a User Plane Function (UPF).
Host 1416 may be under ownership or control of a service provider other than the operator or provider of access network 1404 and/or telecommunications network 1402, and may be operated by or on behalf of the service provider. Host 1416 may host various applications to provide one or more services. Examples of such applications include providing real-time and/or pre-recorded audio/video content, data collection services (e.g., retrieving and compiling data regarding various environmental conditions detected by multiple UEs), analytics functionality, social media, functionality for controlling or otherwise interacting with remote devices, functionality for alert and monitoring centers, or any other such functionality performed by a server.
Overall, the communication system 1400 of fig. 14 enables connections between UEs, network nodes, and hosts. In this sense, the communication system may be configured to operate in accordance with predefined rules or procedures, such as specific standards, including but not limited to: Global System for Mobile Communications (GSM); Universal Mobile Telecommunications System (UMTS); Long Term Evolution (LTE), and/or other suitable 2G, 3G, 4G, 5G standards, or any suitable future generation standard (e.g., 6G); Wireless Local Area Network (WLAN) standards, such as the Institute of Electrical and Electronics Engineers (IEEE) 802.11 standard (WiFi); and/or any other suitable wireless communication standard, such as Worldwide Interoperability for Microwave Access (WiMax), Bluetooth, Z-Wave, Near Field Communication (NFC), ZigBee, LiFi, and/or any Low Power Wide Area Network (LPWAN) standard, such as LoRa and Sigfox.
In some examples, the telecommunications network 1402 is a cellular network implementing 3GPP standardization features. Thus, the telecommunications network 1402 can support network slicing to provide different logical networks to different devices connected to the telecommunications network 1402. For example, the telecommunications network 1402 may provide ultra-reliable low-latency communication (URLLC) services to some UEs while providing enhanced mobile broadband (eMBB) services to other UEs, and/or provide large-scale machine-type communication (mMTC)/large-scale IoT services to other UEs.
In some examples, UE 1412 is configured to send and/or receive information without direct human interaction. For example, the UE may be designed to send information to the access network 1404 on a predetermined schedule when triggered by an internal or external event or in response to a request from the access network 1404. Additionally, the UE may be configured to operate in a single RAT or multi-standard mode. For example, the UE may operate using any one or a combination of Wi-Fi, NR (new radio) and LTE, i.e. configured for multi-radio dual connectivity (MR-DC), such as E-UTRAN (evolved UMTS terrestrial radio access network) new radio-dual connectivity (EN-DC).
In the example shown in fig. 14, hub 1414 communicates with access network 1404 to facilitate indirect communication between one or more UEs (e.g., UEs 1412c and/or 1412d) and a network node (e.g., network node 1410b). In some examples, hub 1414 may be a controller, router, content source and analytics device, or any other communication device described herein with respect to a UE. For example, hub 1414 may be a broadband router that enables a UE to access core network 1406. As another example, the hub 1414 may be a controller that sends commands or instructions to one or more actuators in the UE. The commands or instructions may be received from the UE, the network node 1410, or through executable code, scripts, procedures, or other instructions in the hub 1414. As another example, hub 1414 may be a data collector that serves as temporary storage for UE data, and in some embodiments, may perform analysis or other processing of the data. As another example, hub 1414 may be a content source. For example, for a UE that is a VR headset, display, speaker, or other media delivery device, hub 1414 may retrieve VR assets, video, audio, or other media or data related to the sensed information via a network node, and then provide it to the UE directly, after performing local processing, and/or after adding additional local content. In yet another example, hub 1414 acts as a proxy server or orchestrator for the UEs, particularly if one or more of the UEs are low-energy IoT devices.
The hub 1414 may have a constant/persistent or intermittent connection to the network node 1410b. The hub 1414 may also allow for different communication schemes and/or schedules between the hub 1414 and UEs (e.g., UEs 1412c and/or 1412d) and between the hub 1414 and the core network 1406. In other examples, hub 1414 is connected to core network 1406 and/or one or more UEs via a wired connection. Further, the hub 1414 may be configured to connect to an M2M service provider through the access network 1404 and/or to connect to another UE through a direct connection. In some scenarios, a UE may establish a wireless connection with the network node 1410 while still being connected via the hub 1414 through a wired or wireless connection. In some embodiments, hub 1414 may be a dedicated hub, that is, a hub whose primary function is to route communications from network node 1410b to the UEs and from the UEs to network node 1410b. In other embodiments, the hub 1414 may be a non-dedicated hub, that is, a device operable to route communications between the UEs and network node 1410b, but which is also operable as a communication start point and/or end point for certain data channels.
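Purely as an illustration (not part of the disclosed embodiments), the dedicated/non-dedicated hub behavior described above can be sketched in a few lines of Python; the `Hub` class and all identifier names below are hypothetical:

```python
# Hypothetical sketch: a hub routing traffic between indirectly connected
# UEs and a network node. A dedicated hub only forwards; a non-dedicated
# hub may also terminate traffic addressed to the hub itself.

class Hub:
    def __init__(self, network_node, dedicated: bool = True):
        self.network_node = network_node
        self.dedicated = dedicated
        self.ues = {}           # ue_id -> UE-facing send callable
        self.local_log = []     # traffic a non-dedicated hub consumes itself

    def attach(self, ue_id, send_to_ue):
        """Register a UE reachable only through this hub."""
        self.ues[ue_id] = send_to_ue

    def uplink(self, ue_id, packet):
        """Route a packet from a UE toward the network node."""
        self.network_node.receive(ue_id, packet)

    def downlink(self, ue_id, packet):
        """Route a packet from the network node toward a UE; a non-dedicated
        hub may instead act as the endpoint for hub-addressed packets."""
        if ue_id == "hub" and not self.dedicated:
            self.local_log.append(packet)
        else:
            self.ues[ue_id](packet)
```

The sketch only illustrates the forwarding roles discussed in the text; real hubs would of course add addressing, scheduling, and transport details.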
Fig. 15 illustrates a UE 1500 in accordance with some embodiments. As used herein, a UE refers to a device capable of, configured, arranged, and/or operable to communicate wirelessly with a network node and/or other UEs. Examples of UEs include, but are not limited to, smart phones, mobile phones, cellular phones, Voice over IP (VoIP) phones, wireless local loop phones, desktop computers, Personal Digital Assistants (PDAs), wireless cameras, gaming consoles or devices, music storage devices, playback devices, wearable terminal devices, wireless endpoints, mobile stations, tablet computers, laptop-embedded equipment (LEE), laptop-mounted equipment (LME), smart devices, wireless customer premises equipment (CPE), vehicle-mounted or vehicle-embedded/integrated wireless devices, and the like. Other examples include any UE identified by the Third Generation Partnership Project (3GPP), including narrowband internet of things (NB-IoT) UEs, Machine Type Communication (MTC) UEs, and/or enhanced MTC (eMTC) UEs.
The UE may support device-to-device (D2D) communication, for example, by implementing the 3GPP standards for sidelink communication, Dedicated Short-Range Communication (DSRC), vehicle-to-vehicle (V2V), vehicle-to-infrastructure (V2I), or vehicle-to-everything (V2X). In other examples, a UE may not necessarily have a user in the sense of a human user who owns and/or operates the relevant device. Rather, the UE may represent a device (e.g., a smart sprinkler controller) intended for sale to, or operation by, a human user but which may not initially be associated with a specific human user. Alternatively, the UE may represent a device (e.g., a smart power meter) that is not intended for sale to, or operation by, an end user but which may be associated with or operated for the benefit of a user.
The UE 1500 includes processing circuitry 1502 that is operatively coupled to an input/output interface 1506, a power source 1508, a memory 1510, a communication interface 1512, and/or any other component, or any combination thereof, via a bus 1504. Some UEs may utilize all or a subset of the components shown in fig. 15. The level of integration between the components may vary from one UE to another. Further, some UEs may contain multiple instances of a component, such as multiple processors, memories, transceivers, transmitters, receivers, etc.
The processing circuit 1502 is configured to process instructions and data and may be configured to implement any sequential state machine operable to execute instructions stored as machine-readable computer programs in the memory 1510. The processing circuit 1502 may be implemented as one or more hardware-implemented state machines (e.g., employing discrete logic, Field-Programmable Gate Arrays (FPGAs), Application-Specific Integrated Circuits (ASICs), etc.); programmable logic together with appropriate firmware; one or more stored computer programs and a general-purpose processor (e.g., a microprocessor or Digital Signal Processor (DSP)) together with suitable software; or any combination of the above. For example, the processing circuit 1502 may include multiple Central Processing Units (CPUs). The processing circuitry 1502 may be operable to provide UE 1500 functionality, either alone or in combination with other UE 1500 components (e.g., memory 1510).
In this example, the input/output interface 1506 may be configured to provide an interface or interfaces to an input device, an output device, or one or more input and/or output devices. Examples of output devices include a speaker, a sound card, a video card, a display, a monitor, a printer, an actuator, a transmitter, a smart card, another output device, or any combination thereof. An input device may allow a user to enter information into the UE 1500. Examples of input devices include a touch-sensitive or presence-sensitive display, a camera (e.g., a digital camera, a digital video camera, a webcam, etc.), a microphone, a sensor, a mouse, a trackball, a trackpad, a scroll wheel, a smart card, and the like. The presence-sensitive display may include a capacitive or resistive touch sensor to sense input from a user. A sensor may be, for instance, an accelerometer, a gyroscope, a tilt sensor, a force sensor, a magnetometer, an optical sensor, a proximity sensor, a biometric sensor, etc., or any combination thereof. An output device may use the same type of interface port as an input device. For example, a Universal Serial Bus (USB) port may be used to provide both an input device and an output device.
In some embodiments, the power source 1508 is configured as a battery or battery pack. Other types of power sources may be used, such as external power sources (e.g., power outlets), photovoltaic devices, or batteries. The power supply 1508 may also include power circuitry for delivering power from the power supply 1508 itself and/or an external power supply to various portions of the UE 1500 via an input circuit or interface (e.g., a power cord). The transmitted power may be used, for example, to charge the power source 1508. The power circuitry may perform any formatting, conversion, or other modification of the power from the power source 1508 to adapt the power to the respective components of the UE 1500 to which the power is provided.
The memory 1510 may be or be configured to include memory such as Random Access Memory (RAM), read Only Memory (ROM), programmable Read Only Memory (PROM), erasable Programmable Read Only Memory (EPROM), electrically Erasable Programmable Read Only Memory (EEPROM), magnetic disk, optical disk, hard disk, removable cartridge, flash drive, and the like. In one example, memory 1510 includes one or more application programs 1514 (e.g., an operating system, web browser application, widget, gadget engine, or other application) and corresponding data 1516. Memory 1510 may store any one or a combination of various operating systems for use by UE 1500.
The memory 1510 may be configured to include a number of physical drive units, such as a Redundant Array of Independent Disks (RAID), flash memory, a USB flash drive, an external hard disk drive, a thumb drive, a pen drive, a key drive, a High-Density Digital Versatile Disc (HD-DVD) optical drive, an internal hard disk drive, a Blu-ray disc drive, a Holographic Digital Data Storage (HDDS) optical drive, an external mini Dual In-line Memory Module (DIMM), Synchronous Dynamic Random Access Memory (SDRAM), an external micro-DIMM SDRAM, smart card memory in the form of a tamper-resistant module (e.g., a Universal Integrated Circuit Card (UICC) including one or more Subscriber Identity Modules (SIMs), such as a USIM and/or an ISIM), other memory, or any combination thereof. The UICC may, for example, be an embedded UICC (eUICC), an integrated UICC (iUICC), or a removable UICC, commonly known as a "SIM card". The memory 1510 may allow the UE 1500 to access instructions, application programs, and the like stored on transitory or non-transitory memory media, to offload data or to upload data. An article of manufacture, such as one utilizing a communication system, may be tangibly embodied in the memory 1510, which may be or comprise a device-readable storage medium.
The processing circuit 1502 may be configured to communicate with an access network or other network using the communication interface 1512. The communication interface 1512 may include one or more communication subsystems and may include an antenna 1522 or be communicatively coupled to the antenna 1522. The communication interface 1512 may include one or more transceivers for communicating (e.g., by communicating with one or more remote transceivers of another device capable of wireless communication (e.g., another UE or a network node in an access network)). Each transceiver can include a transmitter 1518 and/or a receiver 1520 that can be adapted to provide network communication (e.g., optical, electrical, frequency allocation, etc.). Further, the transmitter 1518 and the receiver 1520 may be coupled to one or more antennas (e.g., antenna 1522), and may share circuit components, software, or firmware, or alternatively be implemented separately.
In some embodiments, the communication functions of the communication interface 1512 may include cellular communication, wi-Fi communication, LPWAN communication, data communication, voice communication, multimedia communication, short-range communication such as bluetooth, near-field communication, location-based communication such as using a Global Positioning System (GPS) to determine location, another similar communication function, or any combination thereof. Communication may be implemented in accordance with one or more communication protocols and/or standards, such as IEEE 802.11, code Division Multiple Access (CDMA), wideband Code Division Multiple Access (WCDMA), GSM, LTE, new Radio (NR), UMTS, wiMax, ethernet, transmission control protocol/Internet protocol (TCP/IP), synchronous Optical Network (SONET), asynchronous Transfer Mode (ATM), QUIC, hypertext transfer protocol (HTTP), etc.
Regardless of the type of sensor, a UE may provide an output of data captured by its sensors, through its communication interface 1512, over a wireless connection with a network node. Data captured by the sensors of a UE may also be transmitted through another UE over a wireless connection with a network node. The output may be periodic (e.g., once every 15 minutes if it reports a sensed temperature), random (e.g., to balance the reporting load across multiple sensors), in response to a triggering event (e.g., sending an alarm when wetness is detected), in response to a request (e.g., a user-initiated request), or a continuous stream (e.g., a real-time video feed of a patient).
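The reporting modes enumerated above can be sketched, purely for illustration, as a small Python policy function; the names `next_report` and `SensorReport`, and the probability threshold, are assumptions and not part of the disclosure:

```python
from dataclasses import dataclass
import random

# Hypothetical sketch of the periodic / random / event-triggered reporting
# modes described above; all names and thresholds are illustrative.

@dataclass
class SensorReport:
    kind: str
    value: float

def next_report(mode: str, temp_c: float, wet: bool, rng=random.random):
    """Return a report (or None) according to the configured reporting mode."""
    if mode == "periodic":
        # e.g., report the sensed temperature on a fixed schedule
        return SensorReport("temperature", temp_c)
    if mode == "random":
        # report with 10% probability to spread load across many sensors
        return SensorReport("temperature", temp_c) if rng() < 0.1 else None
    if mode == "event":
        # send an alarm only when wetness is detected
        return SensorReport("wetness-alarm", 1.0) if wet else None
    raise ValueError(f"unknown reporting mode: {mode}")
```

A request-driven or streaming mode would simply call such a function on demand or in a loop, respectively.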
As another example, the UE includes an actuator, motor, or switch associated with a communication interface configured to receive wireless input from a network node via a wireless connection. In response to the received wireless input, the state of the actuator, motor, or switch may change. For example, the UE may include motors that adjust a control surface or rotor of the in-flight drone according to the received inputs, or control a robotic arm that performs a medical procedure according to the received inputs.
When in the form of an Internet of Things (IoT) device, the UE may be a device for one or more application domains including, but not limited to, urban wearable technology, extended industrial applications, and healthcare. Non-limiting examples of such IoT devices are devices that are, or are embedded in: connected refrigerators or freezers, televisions, connected lighting devices, electricity meters, robotic vacuum cleaners, voice-controlled smart speakers, home security cameras, motion detectors, thermostats, smoke detectors, door/window sensors, flood/humidity sensors, electric door locks, connected doorbells, air conditioning systems such as heat pumps, autonomous vehicles, monitoring systems, weather monitoring devices, vehicle parking monitoring devices, electric vehicle charging stations, smart watches, fitness trackers, head-mounted displays for Augmented Reality (AR) or Virtual Reality (VR), wearable devices for tactile augmentation or sensory augmentation, water sprinklers, animal- or item-tracking devices, sensors for monitoring plants or animals, industrial robots, Unmanned Aerial Vehicles (UAVs), and any kind of medical device, such as a heart rate monitor or a teleoperated robot. A UE in the form of an IoT device comprises circuitry and/or software related to the intended application of the IoT device, in addition to the other components described for the UE 1500 shown in fig. 15.
As yet another particular example, in an IoT scenario, a UE may represent a machine or other device that performs monitoring and/or measurements and transmits the results of such monitoring and/or measurements to another UE and/or a network node. The UE may in this case be an M2M device, which may, in a 3GPP context, be referred to as an MTC device. As one particular example, the UE may implement the 3GPP NB-IoT standard. In other scenarios, a UE may represent a vehicle (e.g., a car, a bus, or a truck), a ship, an airplane, or another device capable of monitoring and/or reporting its operational status or other functions associated with its operation.
In practice, any number of UEs may be used together for a single use case. For example, a first UE may be, or be integrated in, a drone and provide the drone's speed information (obtained through a speed sensor) to a second UE that is a remote controller operating the drone. When the user makes changes via the remote controller, the first UE may adjust the throttle on the drone (e.g., by controlling an actuator) to increase or decrease the drone's speed. The first UE and/or the second UE may also include more than one of the functionalities described above. For example, a UE may comprise both a sensor and an actuator and handle the communication of data for both the speed sensor and the actuator.
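The drone example above can be illustrated with a minimal throttle-adjustment sketch; the proportional-control scheme, the gain value, and the function name are hypothetical assumptions made only for illustration:

```python
# Hypothetical sketch of the two-UE drone example: the remote-control UE
# supplies a target speed, and the drone UE nudges its throttle actuator
# toward it based on the reading from its speed sensor.

def adjust_throttle(current_speed: float, target_speed: float,
                    throttle: float, gain: float = 0.05) -> float:
    """Proportionally increase or decrease the throttle, clamped to [0, 1]."""
    throttle += gain * (target_speed - current_speed)
    return max(0.0, min(1.0, throttle))
```

In this sketch, repeatedly applying `adjust_throttle` with fresh sensor readings plays the role of the actuator-control loop described in the text.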
Fig. 16 illustrates a network node 1600 in accordance with some embodiments. As used herein, a network node refers to a device capable of, configured, arranged, and/or operable to communicate directly or indirectly with a UE and/or with other network nodes or equipment in a telecommunications network. Examples of network nodes include, but are not limited to, Access Points (APs) (e.g., radio access points) and Base Stations (BSs) (e.g., radio base stations, Node Bs, evolved Node Bs (eNBs), and NR Node Bs (gNBs)).
Base stations may be classified based on the amount of coverage they provide (or, in other words, their transmit power levels) and may therefore be referred to as femto base stations, pico base stations, micro base stations, or macro base stations, depending on the coverage provided. A base station may be a relay node or a relay donor node controlling a relay. The network node may also include one or more (or all) parts of a distributed radio base station, such as a centralized digital unit and/or a Remote Radio Unit (RRU), sometimes referred to as a Remote Radio Head (RRH). Such a remote radio unit may or may not be integrated with an antenna as an antenna-integrated radio. Parts of a distributed radio base station may also be referred to as nodes in a Distributed Antenna System (DAS).
Other examples of network nodes include a multi-transmission point (multi-TRP) 5G access node, a multi-standard radio (MSR) device (such as an MSR BS), a network controller such as a Radio Network Controller (RNC) or a Base Station Controller (BSC), a Base Transceiver Station (BTS), a transmission point, a transmission node, a Multi-cell/Multicast Coordination Entity (MCE), an Operation and Maintenance (O&M) node, an Operations Support System (OSS) node, a Self-Organizing Network (SON) node, a positioning node (e.g., an Evolved Serving Mobile Location Center (E-SMLC)), and/or a Minimization of Drive Tests (MDT) node.
Network node 1600 includes processing circuit 1602, memory 1604, communication interface 1606, and power supply 1608, and/or any other component, or any combination thereof. Network node 1600 may be composed of multiple physically separate components (e.g., a Node B component and an RNC component, or a BTS component and a BSC component, etc.), which may each have their own respective components. In certain scenarios in which network node 1600 comprises multiple separate components (e.g., BTS and BSC components), one or more of the separate components may be shared among several network nodes. For example, a single RNC may control multiple Node Bs. In such a scenario, each unique Node B and RNC pair may, in some instances, be considered a single separate network node. In some embodiments, network node 1600 may be configured to support multiple Radio Access Technologies (RATs). In such embodiments, some components may be duplicated (e.g., separate memory 1604 for different RATs), while some components may be reused (e.g., the same antenna 1610 may be shared by different RATs). Network node 1600 may also include multiple sets of the various illustrated components for different wireless technologies (e.g., GSM, WCDMA, LTE, NR, Wi-Fi, Zigbee, Z-Wave, LoRaWAN, Radio Frequency Identification (RFID), or Bluetooth wireless technologies) integrated into network node 1600. These wireless technologies may be integrated into the same or different chips or chipsets and other components within network node 1600.
Processing circuitry 1602 may include a combination of one or more of the following operable to provide network node 1600 functionality, either alone or in combination with other network node 1600 components (e.g., memory 1604): a microprocessor, a controller, a microcontroller, a central processing unit, a digital signal processor, an application specific integrated circuit, a field programmable gate array, or any other suitable computing device, resource, or combination of hardware, software, and/or encoded logic. For example, the processing circuit 1602 may be configured to cause a network node to perform a method as described with reference to fig. 12 and/or 13. That is, the processing circuit 1602 may be configured to act as the first network node described above and perform one or more actions described above with respect to fig. 12. Additionally or alternatively, the processing circuit 1602 may be configured to act as the second network node described above and perform one or more actions described above with respect to fig. 13.
In some embodiments, processing circuitry 1602 includes a System On Chip (SOC). In some embodiments, the processing circuitry 1602 includes one or more of Radio Frequency (RF) transceiver circuitry 1612 and baseband processing circuitry 1614. In some embodiments, the Radio Frequency (RF) transceiver circuitry 1612 and baseband processing circuitry 1614 may be on separate chips (or chipsets), boards, or units (e.g., radio units and digital units). In alternative embodiments, some or all of the RF transceiver circuitry 1612 and baseband processing circuitry 1614 may be on the same chip or chipset, board, or unit.
The memory 1604 may comprise any form of volatile or non-volatile computer-readable memory including, but not limited to, permanent storage, solid state memory, remote mounted memory, magnetic media, optical media, random Access Memory (RAM), read Only Memory (ROM), mass storage media (e.g., a hard disk), removable storage media (e.g., a flash memory drive, compact Disk (CD), or Digital Video Disk (DVD)), and/or any other volatile or non-volatile, non-transitory device-readable and/or computer-executable storage device that stores information, data, and/or instructions that may be used by the processing circuit 1602. The memory 1604 may store any suitable instructions, data, or information, including computer programs, software, applications (including one or more of logic, rules, code, tables, etc.), and/or other instructions capable of being executed by the processing circuit 1602 and utilized by the network node 1600. The memory 1604 may be used to store any computations performed by the processing circuit 1602 and/or any data received via the communication interface 1606. In some embodiments, the processing circuitry 1602 and the memory 1604 are integrated.
The communication interface 1606 is used in wired or wireless communication of signaling and/or data between network nodes, access networks, and/or UEs. As shown, the communication interface 1606 includes a port/terminal 1616 to send and receive data to and from the network, such as through a wired connection. The communication interface 1606 also includes radio front-end circuitry 1618 that may be coupled to the antenna 1610 or, in some embodiments, as part of the antenna 1610. The radio front-end circuitry 1618 includes a filter 1620 and an amplifier 1622. Radio front-end circuitry 1618 may be connected to antenna 1610 and processing circuitry 1602. The radio front-end circuitry may be configured to condition signals communicated between the antenna 1610 and the processing circuitry 1602. The radio front-end circuitry 1618 may receive digital data to be sent out to other network nodes or UEs via wireless connections. The radio front-end circuitry 1618 may use a combination of filters 1620 and/or amplifiers 1622 to convert the digital data to a radio signal having the appropriate channel and bandwidth parameters. The radio signal may then be transmitted via the antenna 1610. Similarly, upon receiving data, the antenna 1610 may collect radio signals, which are then converted to digital data by the radio front-end circuitry 1618. The digital data may be passed to processing circuitry 1602. In other embodiments, the communication interface may include different components and/or different combinations of components.
In certain alternative embodiments, network node 1600 does not include separate radio front-end circuitry 1618, but rather, processing circuitry 1602 includes radio front-end circuitry and is connected to antenna 1610. Similarly, in some embodiments, all or a portion of RF transceiver circuitry 1612 is part of communication interface 1606. In other embodiments, the communication interface 1606 includes one or more ports or terminals 1616, radio front-end circuitry 1618, and RF transceiver circuitry 1612 as part of a radio unit (not shown), and the communication interface 1606 communicates with baseband processing circuitry 1614, which baseband processing circuitry 1614 is part of a digital unit (not shown).
The antenna 1610 may include one or more antennas or antenna arrays configured to transmit and/or receive wireless signals. The antenna 1610 may be coupled to the radio front-end circuitry 1618 and may be any type of antenna capable of wirelessly transmitting and receiving data and/or signals. In some embodiments, antenna 1610 is separate from network node 1600 and may be connected to network node 1600 through an interface or port.
The antenna 1610, the communication interface 1606, and/or the processing circuitry 1602 may be configured to perform any receiving operations and/or certain obtaining operations described herein as being performed by a network node. Any information, data and/or signals may be received from the UE, another network node and/or any other network device. Similarly, the antenna 1610, the communication interface 1606, and/or the processing circuitry 1602 may be configured to perform any transmit operations described herein as being performed by a network node. Any information, data and/or signals may be transmitted to the UE, another network node and/or any other network device.
The power supply 1608 provides power to the various components of the network node 1600 in a form suitable for the respective components (e.g., at the voltage and current levels required by each respective component). The power supply 1608 may also include or be coupled to power management circuitry to provide power to components of the network node 1600 for performing the functions described herein. For example, network node 1600 may be connected to an external power source (e.g., power grid, power outlet) via an input circuit or interface (e.g., cable), whereby the external power source provides power to the power circuit of power supply 1608. As yet another example, the power supply 1608 may include a power supply in the form of a battery or battery pack connected to or integrated within a power circuit. The battery may provide backup power if the external power source fails.
Embodiments of network node 1600 may include additional components beyond those shown in fig. 16 for providing certain aspects of the functionality of the network node, including any functionality described herein and/or any functionality necessary to support the subject matter described herein. For example, network node 1600 may include user interface devices to allow information to be entered into network node 1600 and to allow information to be output from network node 1600. This may allow a user to perform diagnostic, maintenance, repair, and other management functions for network node 1600.
Fig. 17 is a block diagram of a host 1700, which may be an embodiment of the host 1416 of fig. 14, in accordance with various aspects described herein. As used herein, the host 1700 may be or comprise various combinations of hardware and/or software, including a standalone server, a blade server, a cloud-implemented server, a distributed server, a virtual machine, a container, or a processing resource in a server farm. The host 1700 may provide one or more services to one or more UEs.
The host 1700 includes processing circuitry 1702, the processing circuitry 1702 being operatively coupled to an input/output interface 1706, a network interface 1708, a power supply 1710, and a memory 1712 via a bus 1704. Other components may be included in other embodiments. The features of these components may be substantially similar to those described for the devices of the previous figures (e.g., fig. 15 and 16) such that their description generally applies to the corresponding components of host 1700.
The memory 1712 may include one or more computer programs, including one or more host applications 1714, and data 1716, which may include user data, e.g., data generated by a UE for the host 1700 or data generated by the host 1700 for a UE. Embodiments of the host 1700 may utilize only a subset or all of the components shown. The host applications 1714 may be implemented in a container-based architecture and may provide support for video codecs (e.g., Versatile Video Coding (VVC), High Efficiency Video Coding (HEVC), Advanced Video Coding (AVC), MPEG, VP9) and audio codecs (e.g., FLAC, Advanced Audio Coding (AAC), MPEG, G.711), including transcoding for a number of different classes, types, or implementations of UEs (e.g., handsets, desktop computers, wearable display systems, heads-up display systems). The host applications 1714 may also provide user authentication and authorization checks and may periodically report health, routing, and content availability to a central node (e.g., a device in the core network or at the network edge). Accordingly, the host 1700 may select and/or indicate a different host for the UE's over-the-top (OTT) services. The host applications 1714 may support various protocols, such as the HTTP Live Streaming (HLS) protocol, Real-Time Messaging Protocol (RTMP), Real-Time Streaming Protocol (RTSP), Dynamic Adaptive Streaming over HTTP (MPEG-DASH), etc.
FIG. 18 is a block diagram illustrating a virtualization environment 1800 in which functionality implemented by some embodiments can be virtualized. In the present context, virtualization means creating a virtual version of an apparatus or device, which may include virtualized hardware platforms, storage devices, and networking resources. As used herein, virtualization may be applied to any device or component thereof described herein, and relates to an implementation in which at least a portion of the functionality is implemented as one or more virtual components. Some or all of the functionality described herein may be implemented as virtual components executed by one or more Virtual Machines (VMs) implemented in one or more virtual environments 1800 hosted by one or more hardware nodes (e.g., hardware computing devices operating as network nodes, UEs, core network nodes, or hosts). Furthermore, in embodiments in which the virtual node does not require a radio connection (e.g., a core network node or host), the node may be fully virtualized.
An application 1802 (which may alternatively be referred to as a software instance, virtual device, network function, virtual node, virtual network function, etc.) runs in the virtualized environment 1800 to implement certain features, functions, and/or benefits of some embodiments disclosed herein.
The hardware 1804 includes processing circuitry, memory storing software and/or instructions executable by the hardware processing circuitry, and/or other hardware devices as described herein, such as network interfaces, input/output interfaces, etc. The software may be executed by the processing circuitry to instantiate one or more virtualization layers 1806 (also referred to as a hypervisor or Virtual Machine Monitor (VMM)), provide virtual machines 1808a and 1808b (one or more of which may be generally referred to as virtual machine 1808), and/or perform any of the functions, features, and/or benefits described with respect to some embodiments described herein. The virtualization layer 1806 may present a virtual operating platform to the virtual machine 1808 that appears to be networking hardware.
The virtual machine 1808 includes virtual processing, virtual memory, virtual networks or interfaces, and virtual storage, and may be executed by the corresponding virtualization layer 1806. Different embodiments of instances of virtual device 1802 may be implemented on one or more virtual machines 1808 and may be implemented in different ways. In some contexts, virtualization of hardware is referred to as Network Function Virtualization (NFV). NFV can be used to integrate many network device types onto industry standard mass server hardware, physical switches, and physical storage that can be located in data centers and customer premises equipment.
In the context of NFV, virtual machine 1808 may be a software implementation of a physical machine that runs a program as if the program were executing on a physical non-virtualized machine. Each virtual machine 1808, as well as the portion of hardware 1804 executing the virtual machine (hardware dedicated to the virtual machine and/or hardware shared by the virtual machine with other virtual machines), forms a separate virtual network element. Still in the context of NFV, virtual network functions are responsible for handling specific network functions running in one or more virtual machines 1808 on top of hardware 1804 and correspond to applications 1802.
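Purely as a conceptual sketch of the NFV idea described above, each virtual network function can be modeled as a callable through which traffic is chained, with each callable standing in for a function that would run in its own virtual machine 1808; the firewall/NAT examples and all names are hypothetical:

```python
# Hypothetical sketch: virtual network functions (VNFs) as callables
# chained over packets, analogous to VNFs running in VMs on shared hardware.

def make_firewall(blocked):
    """Return a VNF that drops packets from blocked source addresses."""
    def firewall(packets):
        return [p for p in packets if p["src"] not in blocked]
    return firewall

def make_nat(public_ip):
    """Return a VNF that rewrites the source address to a public IP."""
    def nat(packets):
        return [{**p, "src": public_ip} for p in packets]
    return nat

def run_chain(vnfs, packets):
    """Pass traffic through each VNF in order (a service function chain)."""
    for vnf in vnfs:
        packets = vnf(packets)
    return packets
```

In an actual NFV deployment, each stage would be a separately instantiated network element managed by the orchestration layer rather than an in-process function.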
The hardware 1804 may be implemented in a standalone network node with general-purpose or specific components. The hardware 1804 may implement some functions via virtualization. Alternatively, the hardware 1804 may be part of a larger hardware cluster (e.g., in a data center or in customer premises equipment (CPE)) where many hardware nodes work together and are managed via management and orchestration 1810, which, among other things, oversees lifecycle management of the applications 1802. In some embodiments, the hardware 1804 is coupled to one or more radio units, each of which includes one or more transmitters and one or more receivers, which may be coupled to one or more antennas. The radio units may communicate directly with other hardware nodes via one or more appropriate network interfaces and may be used in combination with the virtual components to provide a virtual node, such as a radio access node or a base station, with radio capabilities. In some embodiments, some signaling may be provided using a control system 1812, which may alternatively be used for communication between the hardware nodes and the radio units.
Fig. 19 illustrates a communication diagram of a host 1902 communicating via a network node 1904 with a UE 1906 over a partially wireless connection, in accordance with some embodiments. Example implementations, in accordance with various embodiments, of the UE (such as UE 1412a of fig. 14 and/or UE 1500 of fig. 15), the network node (such as network node 1410a of fig. 14 and/or network node 1600 of fig. 16), and the host (such as host 1416 of fig. 14 and/or host 1700 of fig. 17) discussed in the preceding paragraphs will now be described with reference to fig. 19.
Similar to host 1700, embodiments of host 1902 include hardware, such as communication interfaces, processing circuitry, and memory. Host 1902 also includes software that is stored in host 1902 or is accessible to host 1902 and executable by processing circuitry. The software includes a host application operable to provide services to remote users, such as UEs 1906 connected via an over-the-top (OTT) connection 1950 extending between the UEs 1906 and the host 1902. In providing services to remote users, the host application may provide user data sent using OTT connection 1950.
The network node 1904 includes hardware enabling it to communicate with the host 1902, via a connection 1960, and with the UE 1906. The connection 1960 may be direct or pass through a core network (such as core network 1406 of fig. 14) and/or through one or more other intermediate networks, such as one or more public, private, or hosted networks. For example, an intermediate network may be a backbone network or the internet.
The UE 1906 includes hardware and software, which is stored in or accessible by the UE 1906 and executable by the UE's processing circuitry. The software includes a client application, such as a web browser or operator-specific "app", operable to provide a service to a human or non-human user via the UE 1906 with the support of the host 1902. In the host 1902, an executing host application may communicate with the executing client application via the OTT connection 1950 terminating at the UE 1906 and the host 1902. In providing the service to the user, the UE's client application may receive request data from the host's host application and provide user data in response to the request data. The OTT connection 1950 may transfer both the request data and the user data. The UE's client application may interact with the user to generate the user data that it provides to the host application through the OTT connection 1950.
OTT connection 1950 may extend via a connection 1960 between host 1902 and network node 1904 and via a wireless connection 1970 between network node 1904 and UE 1906 to provide a connection between host 1902 and UE 1906. The connection 1960 through which the OTT connection 1950 may be provided and the wireless connection 1970 have been abstractly drawn to illustrate communication between the host 1902 and the UE 1906 via the network node 1904 without explicit reference to any intermediate devices and precise routing of messages via these devices.
As an example of transmitting data via the OTT connection 1950, in step 1908, the host 1902 provides user data, which may be performed by executing a host application. In some embodiments, the user data is associated with a particular human user interacting with the UE 1906. In other embodiments, the user data is associated with a UE 1906 that shares data with the host 1902 without explicit human interaction. In step 1910, the host 1902 initiates a transmission carrying the user data towards the UE 1906. The host 1902 may initiate the transmission responsive to a request transmitted by the UE 1906. The request may be caused by human interaction with the UE 1906 or by operation of the client application executing on the UE 1906. The transmission may pass via the network node 1904, in accordance with the teachings of the embodiments described throughout this disclosure. Accordingly, in step 1912, the network node 1904 transmits to the UE 1906 the user data that was carried in the transmission that the host 1902 initiated, in accordance with the teachings of the embodiments described throughout this disclosure. In step 1914, the UE 1906 receives the user data carried in the transmission, which may be performed by a client application executed on the UE 1906 associated with the host application executed by the host 1902.
In some examples, the UE 1906 executes a client application that provides user data to the host 1902. User data may be provided in response to data received from host 1902. Thus, in step 1916, the UE 1906 may provide user data, which may be performed by executing a client application. In providing user data, the client application may further consider user input received from a user via the input/output interface of the UE 1906. Regardless of the particular manner in which the user data is provided, the UE 1906 initiates transmission of the user data via the network node 1904 towards the host 1902 in step 1918. In step 1920, the network node 1904 receives user data from the UE 1906 and initiates transmission of the received user data towards the host 1902, in accordance with the teachings of the embodiments described throughout the present disclosure. In step 1922, host 1902 receives user data carried in a transmission initiated by UE 1906.
One or more of the various embodiments improve the performance of OTT services provided to the UE 1906 using the OTT connection 1950, in which the wireless connection 1970 forms the last segment. More specifically, the teachings of these embodiments may improve the latency and reliability of uplink transmissions and thereby provide benefits such as better responsiveness.
In an example scenario, the host 1902 may collect and analyze plant status information. As another example, the host 1902 may process audio and video data that may have been retrieved from a UE for use in creating a map. As another example, the host 1902 may collect and analyze real-time data to assist in controlling vehicle congestion (e.g., controlling traffic lights). As another example, the host 1902 may store surveillance videos uploaded by UEs. As another example, the host 1902 may store or control access to media content, such as video, audio, VR, or AR, which the host 1902 may broadcast, multicast, or unicast to UEs. As other examples, host 1902 may be used for remote control of energy pricing, non-time critical electrical loads to balance power generation requirements, location services, presentation services (e.g., compiling graphs from data collected from remote devices, etc.), or any other function that collects, retrieves, stores, analyzes, and/or transmits data.
In some examples, a measurement procedure may be provided for the purpose of monitoring data rate, latency, and other factors on which the one or more embodiments improve. There may further be optional network functionality for reconfiguring the OTT connection 1950 between the host 1902 and the UE 1906 in response to variations in the measurement results. The measurement procedure and/or the network functionality for reconfiguring the OTT connection may be implemented in software and hardware of the host 1902 and/or the UE 1906. In some embodiments, sensors (not shown) may be deployed in or in association with other devices through which the OTT connection 1950 passes; the sensors may participate in the measurement procedure by supplying values of the monitored quantities exemplified above, or by supplying values of other physical quantities from which the software may compute or estimate the monitored quantities. The reconfiguring of the OTT connection 1950 may include changes to message format, retransmission settings, preferred routing, etc.; the reconfiguring need not directly alter the operation of the network node 1904. Such procedures and functionalities may be known and practiced in the art. In certain embodiments, measurements may involve proprietary UE signaling that facilitates measurements of throughput, propagation times, latency, and the like by the host 1902. The measurements may be implemented in that the software causes messages to be transmitted, in particular empty or "dummy" messages, using the OTT connection 1950 while monitoring propagation times, errors, etc.
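The monitoring loop just described, in which periodic delay measurements may trigger reconfiguration of the OTT connection, can be pictured with a short sketch. Everything below is an illustrative assumption rather than part of the disclosure: the class and parameter names are invented, and an exponentially weighted moving average is merely one plausible way to smooth the measurements.

```python
from typing import Optional

class DelayMonitor:
    """Illustrative sketch: smooth periodic delay measurements on an OTT
    connection and flag when reconfiguration may be warranted.

    The class name, the threshold, and the EWMA smoothing are assumptions
    made for illustration; they are not defined by the disclosure.
    """

    def __init__(self, threshold_ms: float, alpha: float = 0.125):
        self.threshold_ms = threshold_ms
        self.alpha = alpha                       # EWMA smoothing factor
        self.smoothed_ms: Optional[float] = None

    def observe(self, delay_ms: float) -> bool:
        """Feed one measurement (e.g., from a "dummy" message round trip);
        return True once the smoothed delay exceeds the threshold, i.e.,
        when reconfiguration of the OTT connection might be triggered."""
        if self.smoothed_ms is None:
            self.smoothed_ms = delay_ms
        else:
            self.smoothed_ms += self.alpha * (delay_ms - self.smoothed_ms)
        return self.smoothed_ms > self.threshold_ms
```

Feeding a low measurement followed by a much higher one lets the smoothed value cross the threshold, at which point a reconfiguration (new routing, retransmission settings, etc.) could be requested.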
Although the computing devices described herein (e.g., UE, network node, host) may include a combination of the hardware components shown, other embodiments may include computing devices having different combinations of components. It will be appreciated that these computing devices may include any suitable combination of hardware and/or software necessary to perform the tasks, features, functions, and methods disclosed herein. The determining, calculating, obtaining, or the like described herein may be performed by processing circuitry that may process information by: converting the obtained information into other information, comparing the obtained information or the converted information with information stored in the network node, and/or performing one or more operations based on the obtained information or the converted information, and making a determination as a result of the processing. Furthermore, while components are depicted as a single block within a larger block or nested within multiple blocks, in practice a computing device may comprise multiple different physical components that make up a single illustrated component, and the functionality may be divided among the individual components. For example, the communication interface may be configured to include any of the components described herein, and/or the functionality of the components may be divided between the processing circuitry and the communication interface. In another example, non-computationally intensive functions of any such component may be implemented in software or firmware, while computationally intensive functions may be implemented in hardware.
In some embodiments, some or all of the functionality described herein may be provided by processing circuitry executing instructions stored on a memory, which in some embodiments may be a computer program product in the form of a non-transitory computer-readable storage medium. In alternative embodiments, some or all of the functionality may be provided by processing circuitry without the need to execute instructions stored on separate or discrete device-readable storage media, such as in a hardwired manner. In any of these particular embodiments, the processing circuitry, whether executing instructions stored on a non-transitory computer-readable storage medium or not, may be configured to perform the described functions. The benefits provided by such functionality are not limited to processing circuitry or other components of a computing device, but are enjoyed in their entirety by the computing device and/or generally by the end user and the wireless network.
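The uplink congestion-control mechanism set out in the numbered statements below (obtaining a marking proportion from a measured delay, then marking that proportion of packets with a congestion indicator) can be sketched in a few lines. The function names, the delay thresholds, and the linear ramp here are illustrative assumptions only; the disclosure does not prescribe any particular mapping from delay to proportion.

```python
import random
from typing import Callable, List

def marking_probability(measured_delay_ms: float,
                        delay_target_ms: float = 5.0,
                        delay_ceiling_ms: float = 20.0) -> float:
    """Map a measured uplink delay to a marking probability in [0, 1].

    Below the target no packets are marked; at or above the ceiling all
    packets are marked; in between the probability ramps linearly.  The
    thresholds and the linear ramp are illustrative assumptions.
    """
    if measured_delay_ms <= delay_target_ms:
        return 0.0
    if measured_delay_ms >= delay_ceiling_ms:
        return 1.0
    return (measured_delay_ms - delay_target_ms) / (delay_ceiling_ms - delay_target_ms)

def packets_to_mark(packets: List[str], probability: float,
                    rng: Callable[[], float] = random.random) -> List[str]:
    """Select the subset of packets to mark with the congestion indicator,
    marking each packet independently with the given probability."""
    return [p for p in packets if rng() < probability]
```

In this sketch the first node would set, for example, the ECN/L4S bits of the selected packets before forwarding the flow towards the core network; the actual bit manipulation is omitted.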
The following numbered statements set forth embodiments of the present disclosure:
Group B examples
1. A method performed by a first network node for uplink congestion control in a radio network, the first network node handling one or more first layers of a protocol stack for an uplink connection between a wireless device and the radio network, the first network node being communicatively coupled to a second network node handling one or more second layers of the protocol stack for the uplink connection, wherein the one or more second layers are lower than the one or more first layers, the method comprising:
Obtaining an indication of a proportion of packets within an uplink user plane flow on the uplink connection to be marked with a congestion indicator, wherein the proportion is based on a delay experienced by packets of the uplink user plane flow sent to the second network node;
marking the proportion of packets with a congestion indicator; and
sending packets of the uplink user plane flow towards a core network of the radio network.
2. The method of embodiment 1, wherein the proportion of packets is marked in a packet data convergence protocol, PDCP, entity of the first network node.
3. The method of embodiment 1 or 2, wherein the indication of the proportion of packets to be marked comprises an indication of a probability, and wherein marking the proportion of packets with a congestion indicator comprises: marking packets of the uplink user plane flow with the congestion indicator according to the probability.
4. The method according to any of the preceding embodiments, wherein obtaining an indication of the proportion of packets to be marked comprises: receiving the indication of the proportion of packets to be marked from the second network node.
5. The method of embodiment 4, wherein the indication of the proportion of packets to be marked is received in an assistance information protocol data unit, PDU, sent by the second network node.
6. The method of embodiment 4 wherein the indication of the proportion of packets to be marked is received in a downlink data transfer status PDU sent by the second network node.
7. The method of any of embodiments 1-3, wherein obtaining an indication of a proportion of packets to be marked comprises: calculating the proportion of packets based on the delay experienced by packets of the uplink user plane flow sent to the second network node.
8. The method of embodiment 7, further comprising: receiving, from the second network node, an indication of the delay experienced by packets of the uplink user plane flow sent to the second network node.
9. The method of embodiment 8 wherein the delay experienced by the packets of the uplink user plane flow comprises a delay between the sending of the buffer status report by the wireless device to the second network node and the sending of the data corresponding to the buffer status report by the second network node to the first network node.
10. The method of embodiment 8 or 9, wherein the delay experienced by the packets of the uplink user plane flow comprises a delay between the sending of the buffer status report by the wireless device to the second network node and the sending of all data indicated in the buffer status report by the second network node to the first network node.
11. The method according to any of embodiments 8-10, wherein the indication of the delay experienced by the packets of the uplink user plane flow sent to the second network node is piggybacked within the data for the uplink user plane flow.
12. The method according to any of embodiments 8 to 10, wherein the indication of the delay experienced by the packets of the uplink user plane flow sent to the second network node is received in an assistance information PDU from the second network node.
13. The method according to any of embodiments 8-12, wherein the indication of the delay experienced by the packets of the uplink user plane flow sent to the second network node comprises an indication of the average delay experienced by the packets of the uplink user plane flow sent to the second network node.
14. The method of embodiment 7, further comprising: calculating or estimating the delay experienced by packets of the uplink user plane flow sent to the second network node.
15. The method of embodiment 14 wherein the delay experienced by the packets of the uplink user plane flow sent to the second network node is estimated or calculated based on information received from the second network node.
16. The method of embodiment 15, wherein the information received from the second network node includes one or more of: an uplink data stream; uplink delay on the radio interface; one or more radio quality metrics for a connection between the wireless device and the second network node; an indication of a status of one or more uplink channels between the wireless device and the second network node; an indication of a round trip time of a transmission between the first network node and the second network node.
17. The method according to any of embodiments 14-16, wherein the delay experienced by packets of the uplink user plane flow sent to the second network node is estimated or calculated based on information received from a third network node (e.g. a control plane node, such as a control plane centralized unit) about resources utilized in a cell served by the second network node.
18. The method of embodiment 17, wherein the information received from the third network node includes one or more of: an indication of utilization of the physical resource blocks; an indication of availability of resources in the cell; the number of active wireless devices in a cell; the number of RRC connections in the cell; and one or more transmission level traffic load indications.
19. The method of embodiment 1 or 2, further comprising: receiving data packets for the uplink user plane flow from the second network node, wherein obtaining the indication of the proportion of packets to be marked with a congestion indicator comprises: receiving, from the second network node, a subset of the data packets marked with a first congestion indicator, and wherein marking the proportion of packets for the uplink user plane flow with the congestion indicator comprises: marking internet protocol packets within the subset of data packets received from the second network node with a second congestion indicator.
20. A method performed by a second network node for uplink congestion control in a radio network, the second network node handling one or more second layers of a protocol stack for an uplink connection between a wireless device and the radio network, the second network node being communicatively coupled to a first network node, the first network node handling one or more first layers of the protocol stack for the uplink connection, wherein the one or more second layers are lower than the one or more first layers, the method comprising:
transmitting packets for uplink user plane flows on an uplink connection to a first network node for continued transmission towards a core network node of the radio network; and
Transmitting to the first network node one or more of:
an indication of a proportion of packets within an uplink user plane flow on the uplink connection to be marked with a congestion indicator, wherein the proportion is based on a delay experienced by packets of the uplink user plane flow transmitted by the wireless device to the second network node;
an indication of a delay experienced by packets of an uplink user plane flow transmitted by the wireless device to the second network node; and
Information enabling the first network node to calculate or estimate a delay experienced by packets of an uplink user plane flow transmitted by the wireless device to the second network node.
21. The method of embodiment 20, wherein the indication of the proportion of packets to be marked with a congestion indicator comprises a subset of the data packets, marked with a first congestion indicator, sent to the first network node, thereby enabling the first network node to mark internet protocol packets within the subset of data packets with a second congestion indicator.
22. The method of embodiment 20, wherein the method comprises: transmitting, to the first network node, information that enables the first network node to calculate or estimate a delay experienced by packets of an uplink user plane flow transmitted by the wireless device to the second network node, and wherein the information includes one or more of: an uplink data stream; uplink delay on the radio interface; one or more radio quality metrics for a connection between the wireless device and the second network node; an indication of a status of one or more uplink channels between the wireless device and the second network node; an indication of a round trip time of a transmission between the first network node and the second network node.
23. The method of embodiment 20, wherein the method comprises: an indication of a proportion of packets within an uplink user plane flow over the uplink connection to be marked with a congestion indicator is sent to the first network node, and wherein the indication of the proportion of packets to be marked comprises an indication of a probability that the first network node is to mark packets of the uplink user plane flow with the congestion indicator.
24. The method of embodiment 20, wherein the method comprises: transmitting, to the first network node, an indication of the delay experienced by packets of the uplink user plane flow transmitted by the wireless device to the second network node, and wherein the delay experienced by the packets of the uplink user plane flow comprises a delay between transmission of a buffer status report by the wireless device to the second network node and transmission, by the second network node to the first network node, of data corresponding to the buffer status report.
25. The method of embodiment 24 wherein the delay experienced by the packets of the uplink user plane flow comprises a delay between the sending of the buffer status report by the wireless device to the second network node and the sending of all data indicated in the buffer status report by the second network node to the first network node.
26. The method of embodiment 24 or 25, wherein the indication of the delay experienced by the packets of the uplink user plane flow sent to the second network node is piggybacked within the data of the uplink user plane flow.
27. The method according to any of embodiments 24 to 26, wherein the indication of the delay experienced by packets of the uplink user plane flow sent to the second network node is sent to the first network node in an assistance information PDU.
28. The method according to any of embodiments 24-27, wherein the indication of delay experienced by packets of the uplink user plane flow sent to the second network node comprises an indication of average delay experienced by packets of the uplink user plane flow sent to the second network node.
29. A method as in any preceding embodiment, wherein one or more first layers comprise a packet data convergence protocol, PDCP, layer.
30. The method of any preceding embodiment, wherein the one or more second layers comprise one or more of: a radio link control, RLC, layer; a medium access control, MAC, layer; and a physical, PHY, layer.
31. The method of any of the preceding embodiments, wherein the first network node comprises a first unit of a base station, e.g. a centralized unit, and the second network node comprises a second unit of the base station, e.g. a distributed unit.
32. The method of any preceding embodiment, wherein the congestion indicator comprises a low-latency, low-loss, scalable throughput, L4S, congestion indicator.
33. The method of any of the preceding embodiments, further comprising:
Obtaining user data; and
The user data is forwarded to the host.
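Embodiments 9, 10, 24, and 25 above measure the delay between the wireless device's buffer status report (BSR) and the second network node's forwarding of the corresponding data. That measurement can be sketched as follows; the class and method names, the use of seconds as time unit, and the local-timestamp bookkeeping are illustrative assumptions, not part of the disclosure.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class BsrDelayTracker:
    """Illustrative sketch of the delay measurement in embodiments 9/10
    and 24/25: the delay between reception of a buffer status report
    (BSR) from the wireless device and forwarding of the corresponding
    data to the first network node.
    """
    bsr_time: float = 0.0      # when the BSR was received from the device
    bsr_bytes: int = 0         # amount of data announced in the BSR
    forwarded: int = 0         # bytes forwarded so far for this BSR

    def on_bsr(self, t: float, announced_bytes: int) -> None:
        """Record reception of a BSR announcing `announced_bytes` of data."""
        self.bsr_time = t
        self.bsr_bytes = announced_bytes
        self.forwarded = 0

    def on_forwarded(self, t: float, nbytes: int) -> Optional[float]:
        """Record data forwarded to the first network node; once all the
        announced data has been forwarded, return the BSR-to-completion
        delay (cf. embodiment 10), otherwise None."""
        self.forwarded += nbytes
        if self.forwarded >= self.bsr_bytes:
            return t - self.bsr_time
        return None
```

The returned delay (or an average over several BSRs, as in embodiments 13 and 28) could then be reported to the first network node, e.g. piggybacked with the uplink data or carried in an assistance information PDU.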
Group C examples
34. A network node for uplink congestion control in a radio network, the network node comprising:
processing circuitry configured to cause a network node to perform any of the steps of any of the B-group embodiments;
A power circuit configured to supply power to the processing circuit.
35. A first network node for uplink congestion control in a radio network, the first network node comprising:
Processing circuitry configured to cause the first network node to perform the method of any of embodiments 1 to 19 and 29 to 33 (when dependent on embodiments 1 to 19);
A power circuit configured to supply power to the processing circuit.
36. A second network node for uplink congestion control in a radio network, the second network node comprising:
Processing circuitry configured to cause the second network node to perform the method of any of embodiments 20 to 28 and 29 to 33 (when dependent on embodiments 20 to 28);
A power circuit configured to supply power to the processing circuit.
37. A host configured to operate in a communication system to provide over-the-top (OTT) services, the host comprising:
Processing circuitry configured to provide user data; and
A network interface configured to initiate transmission of user data to a network node in a cellular network for transmission to a User Equipment (UE), the network node having a communication interface and processing circuitry, the processing circuitry of the network node being configured to perform any of the operations of any of the group B embodiments to transmit user data from a host to the UE.
38. The host according to the previous embodiment, wherein:
the processing circuitry of the host is configured to execute a host application providing user data; and
The UE includes processing circuitry configured to execute a client application associated with a host application to receive a transmission of user data from the host.
39. A method implemented in a host configured to operate in a communication system further comprising a network node and a User Equipment (UE), the method comprising:
Providing user data for the UE; and
A transmission carrying user data to the UE via a cellular network comprising a network node is initiated, wherein the network node performs any operation of any of the B-group embodiments to send user data from the host to the UE.
40. The method according to the previous embodiment, further comprising: transmitting, at the network node, the user data provided by the host for the UE.
41. The method of any one of the two previous embodiments, wherein the user data is provided at the host by executing a host application, the host application interacting with a client application executing on the UE, the client application being associated with the host application.
42. A communication system configured to provide over-the-top (OTT) services, the communication system comprising:
a host, comprising:
Processing circuitry configured to provide user data for a User Equipment (UE), the user data associated with an over-the-top service; and
A network interface configured to initiate transmission of user data to a cellular network node for transmission to a UE, the network node having a communication interface and processing circuitry, the processing circuitry of the network node being configured to perform any of the operations of any of the group B embodiments to transmit user data from a host to the UE.
43. The communication system according to the previous embodiment, further comprising:
A network node; and/or
A user equipment.
44. A host configured to operate in a communication system to provide over-the-top (OTT) services, the host comprising:
Processing circuitry configured to initiate reception of user data; and
A network interface configured to receive user data from a network node in a cellular network, the network node having a communication interface and processing circuitry, the processing circuitry of the network node being configured to perform any of the operations of any of the B-group embodiments to receive the user data from a User Equipment (UE) for the host.
45. The host according to the previous embodiment, wherein:
The processing circuitry of the host is configured to execute the host application, thereby providing user data; and
The host application is configured to interact with a client application executing on the UE, the client application being associated with the host application.
46. The host of any one of the two previous embodiments, wherein initiating receipt of user data comprises: requesting the user data.
47. A method implemented by a host configured to operate in a communication system further comprising a network node and a User Equipment (UE), the method comprising:
At the host, initiating reception of user data from the UE, the user data originating from a transmission that the network node has received from the UE, wherein the network node performs any of the steps of any of the B-group embodiments to receive the user data from the UE for the host.
48. The method according to the previous embodiment, further comprising: sending, at the network node, the received user data to the host.
Abbreviations
At least some of the following abbreviations may be used in this disclosure. If there is an inconsistency between abbreviations, preference should be given to how the abbreviation is used above. If an abbreviation is listed multiple times below, the first listing should be preferred over any subsequent listing(s).
5GC 5G core network
AQM active queue management
AR augmented reality
BSR buffer status reporting
CDA congestion detection algorithm
CE congestion experienced
CN core network
CQI channel quality indicator
CU central unit
DDDS downlink data transfer status
DL downlink
DRB data radio bearer
DU distributed unit
E1AP E1 application protocol
E2E end-to-end
ECN explicit congestion notification
GTP GPRS tunneling protocol
HARQ hybrid automatic repeat request
L4S low delay, low loss, scalable throughput
MAC medium access control
MBB mobile broadband
NG next generation
PDCP packet data convergence protocol
PDU protocol data unit
PHY physical layer
QoE quality of experience
QoS quality of service
RAN radio access network
RLC radio link control
RTT round trip time
SDAP service data adaptation protocol
UE user equipment
UL uplink
UP user plane
URLLC ultra-reliable low-delay communications
VR virtual reality
1xRTT CDMA2000 1x radio transmission technology
3GPP third generation partnership project
5G fifth generation
6G sixth generation
ABS almost blank subframe
ARQ automatic repeat request
AWGN additive white Gaussian noise
BCCH broadcast control channel
BCH broadcast channel
CA carrier aggregation
CC carrier component
CCCH SDU common control channel SDU
CDMA code division multiple access
CGI cell global identifier
CIR channel impulse response
CP cyclic prefix
CPICH common pilot channel
CPICH Ec/No CPICH received energy per chip divided by power density in band
CQI channel quality information
C-RNTI cell RNTI
CSI channel state information
DCCH dedicated control channel
DL downlink
DM demodulation
DMRS demodulation reference signal
DRX discontinuous reception
DTX discontinuous transmission
DTCH dedicated traffic channel
DUT device under test
E-CID enhanced cell ID (positioning method)
eMBMS evolved multimedia broadcast multicast service
E-SMLC evolved serving mobile location center
ECGI evolved CGI
eNB E-UTRAN Node B
EPDCCH enhanced physical downlink control channel
E-SMLC evolved serving mobile location center
E-UTRA evolved UTRA
E-UTRAN evolved UTRAN
FDD frequency division duplexing
FFS for further study
gNB base station in NR
GNSS global navigation satellite system
HARQ hybrid automatic repeat request
HO handover
HSPA high speed packet access
HRPD high rate packet data
LOS line of sight
LPP LTE positioning protocol
LTE long term evolution
MAC medium access control
MAC message authentication code
MBSFN multimedia broadcast multicast service single frequency network
MBSFN ABS MBSFN almost blank subframe
MDT minimization of drive test
MIB master information block
MME mobility management entity
MSC mobile switching center
NPDCCH narrowband physical downlink control channel
NR new radio
OCNG OFDMA channel noise generator
OFDM orthogonal frequency division multiplexing
OFDMA orthogonal frequency division multiple access
OSS operation support system
OTDOA observed time difference of arrival
O & M operation and maintenance
PBCH physical broadcast channel
P-CCPCH primary common control physical channel
PCell primary cell
PCFICH physical control format indicator channel
PDCCH physical downlink control channel
PDCP packet data convergence protocol
PDP power delay profile
PDSCH physical downlink shared channel
PGW packet gateway
PHICH physical hybrid ARQ indicator channel
PLMN public land mobile network
PMI precoder matrix indicator
PRACH physical random access channel
PRS positioning reference signal
PSS primary synchronization signal
PUCCH physical uplink control channel
PUSCH physical uplink shared channel
RACH random access channel
QAM quadrature amplitude modulation
RAN radio access network
RAT radio access technology
RLC radio link control
RLM radio link monitoring
RNC radio network controller
RNTI radio network temporary identifier
RRC radio resource control
RRM radio resource management
RS reference signal
RSCP received signal code power
RSRP reference symbol received power or reference signal received power
RSRQ reference signal received quality or reference symbol received quality
RSSI received signal strength indicator
RSTD reference signal time difference
SCH synchronization channel
SCell secondary cell
SDAP service data adaptation protocol
SDU service data unit
SFN system frame number
SGW serving gateway
SI system information
SIB system information block
SNR signal to noise ratio
SON self-optimizing network
SS synchronization signal
SSS secondary synchronization signal
TDD time division duplexing
TDOA time difference of arrival
TOA time of arrival
TSS tertiary synchronization signal
TTI transmission time interval
UE user equipment
UL uplink
USIM universal subscriber identity module
UTDOA uplink time difference of arrival
WCDMA wideband CDMA
WLAN wireless local area network

Claims (29)

1. A method performed by a first network node for uplink congestion control in a radio network, the first network node handling one or more first layers of a protocol stack for an uplink connection between a wireless device and the radio network, the first network node being communicatively coupled to a second network node handling one or more second layers of the protocol stack for the uplink connection, wherein the one or more second layers are lower than the one or more first layers, the method comprising:
Obtaining (1204) an indication of a proportion of packets within an uplink user plane flow over the uplink connection to be marked with a congestion indicator, wherein the proportion is based on a delay experienced by packets of the uplink user plane flow sent by the wireless device to the second network node;
marking (1206) the proportion of packets with the congestion indicator; and
sending (1208) packets of the uplink user plane flow towards a core network of the radio network.
2. The method of claim 1, wherein the indication of the proportion of packets to be marked comprises an indication of a probability, and wherein marking the proportion of packets with the congestion indicator comprises marking packets of the uplink user plane flow with the congestion indicator according to the probability.
3. The method according to any of the preceding claims, wherein obtaining (1204) the indication of the proportion of packets to be marked comprises receiving the indication of the proportion of packets to be marked from the second network node.
4. The method according to claim 3, wherein the indication of the proportion of packets to be marked is received in an assistance information protocol data unit, PDU, sent by the second network node or in a downlink data delivery status PDU sent by the second network node.
5. The method of any of claims 1-2, wherein obtaining (1204) the indication of the proportion of packets to be marked comprises calculating the proportion of packets based on the delay experienced by packets of the uplink user plane flow sent to the second network node.
6. The method of claim 5, further comprising receiving, from the second network node, an indication of the delay experienced by packets of the uplink user plane flow sent to the second network node.
7. The method of claim 6, wherein the delay experienced by the packets of the uplink user plane flow comprises a delay between sending, by the wireless device, a buffer status report to the second network node and sending, by the second network node, data corresponding to the buffer status report to the first network node.
8. The method of claim 5, further comprising calculating or estimating the delay experienced by packets of the uplink user plane flow sent to the second network node.
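Claims 1 to 8 describe marking a delay-dependent proportion of uplink packets with a congestion indicator. The following is a minimal illustrative sketch of such probabilistic marking, not an implementation of the claimed method: the linear ramp between the two delay thresholds, the threshold values themselves, and the packet representation are all assumptions introduced here for illustration.

```python
import random

def marking_probability(delay_ms: float,
                        target_ms: float = 5.0,
                        full_marking_ms: float = 20.0) -> float:
    """Map a measured uplink delay to a marking probability.

    Below the target delay nothing is marked; between the target and the
    full-marking threshold the probability ramps linearly up to 1.0.
    Both thresholds are illustrative values, not taken from the claims.
    """
    if delay_ms <= target_ms:
        return 0.0
    if delay_ms >= full_marking_ms:
        return 1.0
    return (delay_ms - target_ms) / (full_marking_ms - target_ms)

def mark_packets(packets, probability, rng=random.random):
    """Set a congestion-experienced flag on each packet with the
    given probability; returns the number of packets marked."""
    marked = 0
    for pkt in packets:
        if rng() < probability:
            pkt["ce"] = True  # hypothetical congestion-experienced flag
            marked += 1
    return marked
```

In this sketch the node handling the upper protocol layers would call `marking_probability` with the delay (or receive the probability from the lower-layer node, as in claims 2 and 3) and then apply `mark_packets` to the uplink user plane flow before forwarding it towards the core network.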
9. A method performed by a second network node for uplink congestion control in a radio network, the second network node handling one or more second layers of a protocol stack for an uplink connection between a wireless device and the radio network, the second network node being communicatively coupled to a first network node handling one or more first layers of the protocol stack for the uplink connection, wherein the one or more second layers are lower than the one or more first layers, the method comprising:
sending (1302) packets for uplink user plane flows on the uplink connection to the first network node for continued transmission towards a core network node of the radio network; and
transmitting (1304) to the first network node one or more of the following:
an indication of a proportion of packets within the uplink user plane flow on the uplink connection to be marked with a congestion indicator, wherein the proportion is based on a delay experienced by packets of the uplink user plane flow sent by the wireless device to the second network node; and
an indication of a delay experienced by packets of the uplink user plane flow sent by the wireless device to the second network node.
10. The method of claim 9, wherein the indication of the proportion of packets to be marked with a congestion indicator comprises a subset of the data packets marked with a first congestion indicator sent to the first network node, thereby enabling the first network node to mark internet protocol packets within the subset of data packets with a second congestion indicator.
11. The method according to claim 9, wherein the method comprises sending (1304), to the first network node, an indication of a proportion of packets within the uplink user plane flow on the uplink connection to be marked with a congestion indicator, and wherein the indication of the proportion of packets to be marked comprises an indication of a probability that the first network node is to mark packets of the uplink user plane flow with the congestion indicator.
12. The method according to claim 9, wherein the method comprises transmitting (1304), to the first network node, an indication of a delay experienced by packets of the uplink user plane flow transmitted by the wireless device to the second network node, and wherein the delay experienced by packets of the uplink user plane flow comprises a delay between transmitting, by the wireless device, a buffer status report to the second network node and transmitting, by the second network node, data corresponding to the buffer status report to the first network node.
13. The method of claim 12, wherein the delay experienced by the packets of the uplink user plane flow comprises a delay between sending, by the wireless device, a buffer status report to the second network node and sending, by the second network node, all data indicated in the buffer status report to the first network node.
14. The method according to any of the preceding claims, wherein the first network node comprises a centralized unit of base stations and the second network node comprises a distributed unit of the base stations.
15. A network node (1600) for uplink congestion control in a radio network, the network node being configured to perform the method according to any of claims 1 to 8.
16. A network node (1600) for uplink congestion control in a radio network, the network node being configured to perform the method according to any of claims 9 to 14.
17. A first network node for uplink congestion control in a radio network, the first network node handling one or more first layers of a protocol stack for an uplink connection between a wireless device and the radio network, the first network node being communicatively coupled to a second network node handling one or more second layers of the protocol stack for the uplink connection, wherein the one or more second layers are lower than the one or more first layers, the first network node comprising:
processing circuitry (1602) configured to cause the first network node to:
obtain an indication of a proportion of packets within an uplink user plane flow on the uplink connection to be marked with a congestion indicator, wherein the proportion is based on a delay experienced by packets of the uplink user plane flow sent by the wireless device to the second network node;
mark the proportion of packets with the congestion indicator; and
send packets of the uplink user plane flow towards a core network of the radio network.
18. The first network node of claim 17, wherein the indication of the proportion of packets to be marked comprises an indication of a probability, and wherein the processing circuitry is configured to cause the first network node to mark the proportion of packets with the congestion indicator by marking packets of the uplink user plane flow with the congestion indicator according to the probability.
19. The first network node of any of claims 17 to 18, wherein the processing circuitry is configured to cause the first network node to obtain the indication of the proportion of packets to be marked by receiving the indication of the proportion of packets to be marked from the second network node.
20. The first network node of claim 19, wherein the indication of the proportion of packets to be marked is received in an assistance information protocol data unit, PDU, sent by the second network node or in a downlink data delivery status PDU sent by the second network node.
21. The first network node of any of claims 17 to 18, wherein the processing circuitry is configured to cause the first network node to obtain the indication of the proportion of packets to be marked by calculating the proportion of packets based on the delay experienced by packets of the uplink user plane flow sent to the second network node.
22. The first network node of claim 21, wherein the processing circuitry is configured to cause the first network node to receive, from the second network node, an indication of the delay experienced by packets of the uplink user plane flow sent to the second network node.
23. The first network node of claim 22, wherein the delay experienced by the packets of the uplink user plane flow comprises a delay between sending, by the wireless device, a buffer status report to the second network node and sending, by the second network node, data corresponding to the buffer status report to the first network node.
24. The first network node of claim 21, wherein the processing circuitry is configured to cause the first network node to calculate or estimate the delay experienced by packets of the uplink user plane flow sent to the second network node.
25. A second network node (1600) for uplink congestion control in a radio network, the second network node handling one or more second layers of a protocol stack for an uplink connection between a wireless device and the radio network, the second network node being communicatively coupled to a first network node handling one or more first layers of the protocol stack for the uplink connection, wherein the one or more second layers are lower than the one or more first layers, the second network node comprising:
processing circuitry (1602) configured to cause the second network node to:
transmit packets for uplink user plane flows on the uplink connection to the first network node for continued transmission towards a core network node of the radio network; and
transmit to the first network node one or more of:
an indication of a proportion of packets within the uplink user plane flow on the uplink connection to be marked with a congestion indicator, wherein the proportion is based on a delay experienced by packets of the uplink user plane flow sent by the wireless device to the second network node; and
an indication of a delay experienced by packets of the uplink user plane flow sent by the wireless device to the second network node.
26. The second network node of claim 25, wherein the indication of the proportion of packets to be marked with a congestion indicator comprises a subset of the data packets marked with a first congestion indicator sent to the first network node, thereby enabling the first network node to mark internet protocol packets within the subset of data packets with a second congestion indicator.
27. The second network node of claim 25, wherein the processing circuitry is configured to cause the second network node to transmit, to the first network node, an indication of a proportion of packets within the uplink user plane flow on the uplink connection to be marked with a congestion indicator, and wherein the indication of the proportion of packets to be marked comprises an indication of a probability that the first network node is to mark packets of the uplink user plane flow with the congestion indicator.
28. The second network node of claim 25, wherein the processing circuitry is configured to cause the second network node to transmit, to the first network node, an indication of a delay experienced by packets of the uplink user plane flow transmitted by the wireless device to the second network node, and wherein the delay experienced by packets of the uplink user plane flow comprises a delay between transmitting, by the wireless device, a buffer status report to the second network node and transmitting, by the second network node, data corresponding to the buffer status report to the first network node.
29. The second network node of claim 28, wherein the delay experienced by packets of the uplink user plane flow comprises a delay between sending, by the wireless device, a buffer status report to the second network node and sending, by the second network node, all data indicated in the buffer status report to the first network node.
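Claims 12-13 and 28-29 define the measured delay as the interval between the wireless device sending a buffer status report (BSR) and the second network node forwarding the corresponding data, optionally all data indicated in the report, to the first network node. The bookkeeping this implies can be sketched as follows; the tracker class, its method names, and the in-order draining of reports are assumptions for illustration, not part of the claims or of any 3GPP specification.

```python
class BsrDelayTracker:
    """Illustrative tracker for the delay between receiving a buffer
    status report and forwarding all data indicated in that report."""

    def __init__(self):
        self._pending = []   # list of [bsr_time, bytes_remaining]
        self.delays = []     # completed per-report delays

    def on_bsr(self, now: float, reported_bytes: int) -> None:
        """Record a received buffer status report."""
        self._pending.append([now, reported_bytes])

    def on_data_forwarded(self, now: float, nbytes: int) -> None:
        """Credit forwarded bytes against pending reports, oldest first.

        A report is complete once all bytes it indicated have been
        forwarded; its delay is then the elapsed time since the report.
        """
        while nbytes > 0 and self._pending:
            bsr_time, remaining = self._pending[0]
            used = min(nbytes, remaining)
            remaining -= used
            nbytes -= used
            if remaining == 0:
                self.delays.append(now - bsr_time)
                self._pending.pop(0)
            else:
                self._pending[0][1] = remaining
```

In the architecture of claims 14 and following, such a measurement would live in the distributed unit, which can then report either the delay itself or a marking proportion derived from it to the centralized unit.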
CN202280064248.7A 2021-09-24 2022-09-23 Methods, apparatus, and computer readable media related to low latency services in a wireless network Pending CN117981287A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
EP21382862 2021-09-24
EP21382862.7 2021-09-24
PCT/SE2022/050843 WO2023048627A1 (en) 2021-09-24 2022-09-23 Methods, apparatus and computer-readable media relating to low-latency services in wireless networks

Publications (1)

Publication Number Publication Date
CN117981287A true CN117981287A (en) 2024-05-03

Family

ID=83995835

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202280064248.7A Pending CN117981287A (en) 2021-09-24 2022-09-23 Methods, apparatus, and computer readable media related to low latency services in a wireless network

Country Status (2)

Country Link
CN (1) CN117981287A (en)
WO (1) WO2023048627A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024093346A1 (en) * 2023-07-07 2024-05-10 Lenovo (Beijing) Limited Explicit congestion notification marking

Also Published As

Publication number Publication date
WO2023048627A1 (en) 2023-03-30


Legal Events

Date Code Title Description
PB01 Publication