WO2019207403A1 - Core-stateless ECN for L4S


Info

Publication number
WO2019207403A1
WO2019207403A1 (PCT/IB2019/053048)
Authority
WO
WIPO (PCT)
Prior art keywords
packet
node
queue
ctv
value
Prior art date
Application number
PCT/IB2019/053048
Other languages
French (fr)
Inventor
Szilveszter NÁDAS
Gergo GOMBOS
Sandor LAKI
Original Assignee
Telefonaktiebolaget Lm Ericsson (Publ)
Priority date
Filing date
Publication date
Application filed by Telefonaktiebolaget LM Ericsson (Publ)
Publication of WO2019207403A1


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/10 Flow control; Congestion control
    • H04L 47/12 Avoiding congestion; Recovering from congestion
    • H04L 47/30 Flow control; Congestion control in combination with information about buffer occupancy at either end or at transit nodes
    • H04L 47/31 Flow control; Congestion control by tagging of packets, e.g. using discard eligibility [DE] bits
    • H04L 47/32 Flow control; Congestion control by discarding or delaying data units, e.g. packets or frames
    • H04L 47/33 Flow control; Congestion control using forward notification

Definitions

  • wireless terminals also known as mobile stations and/or user equipments (UEs) communicate via a Radio Access Network (RAN) to one or more core networks.
  • the RAN covers a geographical area which is divided into cell areas, with each cell area being served by a base station such as, for example, a radio base station (RBS), which in some networks may also be called, for example, a “NodeB” or “eNodeB.”
  • RBS radio base station
  • a cell is a geographical area where radio coverage is provided by the radio base station at a base station site or an antenna site in case the antenna and the radio base station are not collocated.
  • Each cell is identified by an identity within the local radio area, which is broadcast in the cell. Another identity identifying the cell uniquely in the whole mobile network is also broadcasted in the cell.
  • the base stations communicate over the air interface operating on radio frequencies with the user equipments (UEs) within range of the base stations.
  • RNC radio network controller
  • BSC base station controller
  • a Universal Mobile Telecommunications System is a third-generation mobile communication system, which evolved from the second generation (2G) Global System for Mobile Communications (GSM).
  • the UMTS Terrestrial Radio Access Network (UTRAN) is essentially a RAN using Wideband Code Division Multiple Access (WCDMA) and/or High-Speed Packet Access (HSPA) for UEs.
  • WCDMA Wideband Code Division Multiple Access
  • HSPA High-Speed Packet Access
  • 3GPP Third Generation Partnership Project
  • within the Third Generation Partnership Project (3GPP), telecommunications suppliers propose and agree upon standards for, e.g., third generation networks and further generations, and investigate enhanced data rates and radio capacity.
  • the Evolved Packet System comprises the Evolved Universal Terrestrial Radio Access Network (E-UTRAN), also known as the Long Term Evolution (LTE) radio access, and the Evolved Packet Core (EPC), also known as System Architecture Evolution (SAE) core network.
  • E-UTRAN/LTE is a variant of a 3GPP radio access technology wherein the radio base stations are directly connected to the EPC core network rather than to RNCs.
  • the functions of a RNC are distributed between the radio base stations, e.g., eNodeBs in LTE, and the core network.
  • the RAN of an EPS has an essentially “flat” architecture comprising radio base stations without reporting to RNCs.
  • Packets are transported in a Core Network along paths in a transport network being parts of the communications network.
  • IP Internet Protocol
  • MPLS IP/Multiprotocol label switching
  • SPB Ethernet Shortest Path Bridging
  • GMPLS Generalized MPLS
  • MPLS-TP MPLS-Transport Profile
  • PBB-TP Provider Backbone Bridges Transport Profile
  • RSVP Resource Reservation Protocol
  • Automatic solutions have a low management burden, and since they rely on shortest paths most of the time, they also achieve some form of low-delay efficiency. They can be expanded with automatic fast reroute and Equal-Cost Multipath (ECMP) to increase reliability and network utilization. The automatic solutions, however, fail to offer adequate Quality of Service (QoS) measures. Usually, simple packet priorities or DiffServ handling do not provide any bandwidth guarantees.
  • QoS Quality of Service
  • Transport equipment such as communication nodes
  • Transport equipment usually has very limited packet processing capabilities, typically limited to a few priority or weighted fair queues and a few levels of drop precedence, since the emphasis is on high throughput and a low price per bit.
  • when packet arrivals exceed the packet processing capability, a node becomes a bottleneck, and some packets are dropped due to congestion at the node. To get around the congestion, a few solutions have been proposed.
  • a bottleneck is a location in the communications network where a single component or a limited number of components or resources affects the capacity or performance of the communications network.
  • a first common way is to pre-signal/pre-configure the desired resource sharing rules for a given traffic aggregate, such as a flow or a bearer, to a bottleneck node prior to the arrival of the actual traffic.
  • the bottleneck node then implements the handling of the traffic aggregates based on these sharing rules. For example, the bottleneck node may use scheduling to realize the desired resource sharing.
  • Examples of this pre-signaling/pre-configuration method are the bearer concept as discussed in 3GPP TS 23.401 v12.4.0, SIRIG as discussed in 3GPP TS 23.060 section 5.3.5.3, v12.4.0, or the Resource Reservation Protocol (RSVP) as discussed in RFC2205.
  • RSVP Resource Reservation Protocol
  • An example scheduling algorithm for this method, implementing the 3GPP bearer concept at an LTE eNB, may be found in Wang Min, Jonas Pettersson, Ylva Timner, Stefan Wanstedt and Magnus Hurd, Efficient QoS over LTE - a Scheduler Centric Approach. Personal Indoor and Mobile Radio Communications (PIMRC), 2012 IEEE 23rd International Symposium. Another example of this is to base the resource sharing on Service Value as described in Service Value Oriented Radio Resource Allocation, WO2013085437.
  • a second common way is to mark packets with priority - this would give more resources to higher priority flows, or with drop precedence, which marks the relative importance of the packets compared to each other. Packets of higher drop precedence are to be dropped before packets of lower drop precedence.
  • An example of such a method is DiffServ Assured Forwarding (AF) within a given class [RFC2597]. Such a method with several drop precedence levels is also defined in Per-Bearer Multi-Level Profiling, EP2663037.
  • a communication node associates packets with a value related to resource sharing.
  • the communication node marks the packet with a value related to resource sharing in the physical network, wherein the value, also referred to herein as packet value, indicates a level of importance of the packet relative to the importance of other packets along a, for example, linear, scale in the physical network.
  • the communication node further transmits, over the physical network, the marked packet towards a destination node.
  • each packet gets a label that expresses its importance.
  • these labels or packet values are used in a bandwidth sharing decision. Packets of a flow can have different importance values to first drop packets with lowest importance, for example, in case of congestion.
  • U.S. Patent Publication No. 2016/0105369A1 published on April 14, 2016, which is incorporated by reference in its entirety, relates to the Per Packet Value concept also referred to as Per Packet Operator Value (PPOV) concept.
  • PPOV Per Packet Operator Value
  • L4S Low Latency, Low Loss, Scalable Throughput
  • the L4S problem is introduced in “Low Latency, Low Loss, Scalable Throughput (L4S) Internet Service: Architecture” by B. Briscoe, K. De Schepper, and M. Bagnulo Braun, dated March 22, 2018, which may be found at https://tools.ietf.org/html/draft-ietf-tsvwg-l4s-arch-02 (last visited April 8, 2019).
  • the existing solutions of the L4S problem build on predefined congestion control behavior and cannot handle misbehaving and/or unresponsive flows.
  • a method by a node for handling packets includes maintaining a data structure tracking packet value distribution of packets in a queue and determining a congestion threshold value (CTV).
  • CTV congestion threshold value
  • the node determines that the first packet’s corresponding per packet value (PPV) is not marked to be dropped in the data structure.
  • the node determines whether the PPV for the first packet is less than the CTV. If the PPV for the first packet is not less than the CTV, the first packet is served. If the PPV for the first packet is less than the CTV and the first packet is marked for ECN-Capable Transport, ECT, the first packet is served. If the PPV for the first packet is less than the CTV and the first packet is not marked for ECT, the first packet is dropped.
  • PPV per packet value
  • a node includes memory operable to store instructions and processing circuitry operable to execute the instructions to cause the node to maintain a data structure tracking packet value distribution of packets in a queue and determine a CTV.
  • the processing circuitry is operable to execute the instructions to cause the node to determine that the first packet’s corresponding PPV is not marked to be dropped in the data structure.
  • the node determines whether the PPV for the first packet is less than the CTV. If the PPV for the first packet is not less than the CTV, the first packet is served. If the PPV for the first packet is less than the CTV and the first packet is marked for ECT, the first packet is served. If the PPV for the first packet is less than the CTV and the first packet is not marked for ECT, the first packet is dropped, as sketched below.
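
The three-way outcome just summarized can be written as a short sketch. This is an illustrative reading only, assuming a pkt object exposing its PPV and ECT flag and hypothetical serve, drop, and mark_ce hooks; it is not the patent's reference implementation.

    def on_dequeue(pkt, ctv, serve, drop, mark_ce):
        """Serve, ECN-mark, or drop a dequeued packet (illustrative)."""
        if pkt.ppv >= ctv:
            serve(pkt)        # at or above the threshold: no congestion signal
        elif pkt.ect:         # ECN-Capable Transport packet
            mark_ce(pkt)      # signal congestion via ECN marking
            serve(pkt)        # marked packets are still served
        else:
            drop(pkt)         # non-ECT packet below the CTV is dropped
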
  • Certain embodiments may provide one or more of the following technical advantage(s). For example, a technical advantage may be that certain embodiments may keep the high speed, simple implementation of previously introduced PPV concepts. As another example, a technical advantage may be that certain embodiments have no need for parameter tuning and verification. As yet another example, a technical advantage may be that certain embodiments may use ECN marking together with PPV. Another technical advantage may be that certain embodiments solve the L4S problem without explicit assumption on congestion control. Still another technical advantage may be that certain embodiments introduce the option of weighted sharing to the L4S domain.
  • FIGURE 1 illustrates a radio communications network, according to certain embodiments
  • FIGURES 2A-2B illustrate entities for the operations of packet value-based packet processing, according to certain embodiments
  • FIGURE 3 illustrates the packet enqueue process into a queue, as may be performed by a node, according to certain embodiments
  • FIGURE 4 illustrates the packet dequeue process from a queue, according to certain embodiments
  • FIGURE 5 illustrates entities for the operations of packet value and CTV-based packet processing, according to certain embodiments
  • FIGURE 6 illustrates a packet dequeue process, according to certain embodiments
  • FIGURE 7 illustrates an example process for updating the CTV when needed, according to certain embodiments
  • FIGURE 8 illustrates a graph depicting the throughput of four flows, according to certain embodiments.
  • FIGURE 9 illustrates a graph depicting the per flow ECN marking rates, according to certain embodiments.
  • FIGURE 10 illustrates a method by a node for handling packets in a communication network, according to certain embodiments
  • FIGURE 11 illustrates a receiving node, exemplified as a radio base station, according to certain embodiments
  • FIGURE 12 illustrates a transmitting node, exemplified as a gateway, according to certain embodiments
  • FIGURE 13 illustrates a telecommunication network connected via an intermediate network to a host computer, according to certain embodiments
  • FIGURE 14 illustrates a generalized block diagram of a host computer communicating via a base station with a user equipment over a partially wireless connection, according to certain embodiments
  • FIGURE 15 illustrates a method implemented in a communication system, according to one embodiment
  • FIGURE 16 illustrates another method implemented in a communication system, according to one embodiment
  • FIGURE 17 illustrates another method implemented in a communication system, according to one embodiment.
  • FIGURE 18 illustrates another method implemented in a communication system, according to one embodiment.
  • CTV Congestion Threshold Value
  • HIN maintained histogram of packets
  • references in the specification to “one embodiment,” “an embodiment,” “an example embodiment,” etc. indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
  • Bracketed text and blocks with dashed borders may be used herein to illustrate optional operations that add additional features to embodiments of the invention. However, such notation should not be taken to mean that these are the only options or optional operations, and/or that blocks with solid borders are not optional in certain embodiments of the invention.
  • Coupled is used to indicate that two or more elements, which may or may not be in direct physical or electrical contact with each other, co-operate or interact with each other.
  • Connected is used to indicate the establishment of communication between two or more elements that are coupled with each other.
  • An electronic device stores and transmits (internally and/or with other electronic devices over a network) code (which is composed of software instructions and which is sometimes referred to as computer program code or a computer program) and/or data using machine-readable media (also called computer-readable media), such as machine-readable storage media (e.g., magnetic disks, optical disks, solid state drives, read only memory (ROM), flash memory devices, phase change memory) and machine-readable transmission media (also called a carrier) (e.g., electrical, optical, radio, acoustical or other form of propagated signals - such as carrier waves, infrared signals).
  • machine-readable media also called computer-readable media
  • machine-readable storage media e.g., magnetic disks, optical disks, solid state drives, read only memory (ROM), flash memory devices, phase change memory
  • machine-readable transmission media also called a carrier
  • carrier e.g., electrical, optical, radio, acoustical or other form of propagated signals - such as carrier waves, infrared signals
  • an electronic device e.g., a computer
  • hardware and software such as a set of one or more processors (e.g., where a processor is a microprocessor, controller, microcontroller, central processing unit, digital signal processor, application specific integrated circuit, field programmable gate array, other electronic circuitry, a combination of one or more of the preceding) coupled to one or more machine-readable storage media to store code for execution on the set of processors and/or to store data.
  • processors e.g., where a processor is a microprocessor, controller, microcontroller, central processing unit, digital signal processor, application specific integrated circuit, field programmable gate array, other electronic circuitry, a combination of one or more of the preceding
  • an electronic device may include non-volatile memory containing the code since the non-volatile memory can persist code/data even when the electronic device is turned off (when power is removed), and while the electronic device is turned on that part of the code that is to be executed by the processor(s) of that electronic device is typically copied from the slower non-volatile memory into volatile memory (e.g., dynamic random access memory (DRAM), static random access memory (SRAM)) of that electronic device.
  • Typical electronic devices also include a set of one or more physical network interface(s) (NI(s)) to establish network connections (to transmit and/or receive code and/or data using propagating signals) with other electronic devices.
  • NI(s) physical network interface
  • a physical NI may comprise radio circuitry capable of receiving data from other electronic devices over a wireless connection and/or sending data out to other devices via a wireless connection.
  • This radio circuitry may include transmitter(s), receiver(s), and/or transceiver(s) suitable for radiofrequency communication.
  • the radio circuitry may convert digital data into a radio signal having the appropriate parameters (e.g., frequency, timing, channel, bandwidth, etc.). The radio signal may then be transmitted via antennas to the appropriate recipient(s).
  • the set of physical NI(s) may comprise network interface controller(s) (NICs), also known as a network interface card, network adapter, or local area network (LAN) adapter.
  • NICs network interface controller
  • the NIC(s) may facilitate in connecting the electronic device to other electronic devices allowing them to communicate via wire through plugging in a cable to a physical port connected to a NIC.
  • One or more parts of an embodiment of the invention may be implemented using different combinations of software, firmware, and/or hardware.
  • a network device is an electronic device that communicatively interconnects other electronic devices on the network (e.g., other network devices, end-user devices).
  • Some network devices are“multiple services network devices” that provide support for multiple networking functions (e.g., routing, bridging, switching, Layer 2 aggregation, session border control, Quality of Service, and/or subscriber management), and/or provide support for multiple application services (e.g., data, voice, and video).
  • a network device may communicate with the other electronic devices through radio and/or landline communications networks.
  • a traffic flow is traffic of packets identified by a set of header information and port information including, but not limited to: IP header, Layer 2 (L2) header, virtual and/or physical interface port, and/or agent circuit ID information for a remote port in an access network.
  • a data flow may be identified by a set of attributes embedded to one or more packets of the flow.
  • An exemplary set of attributes includes a 5-tuple (source and destination IP addresses, a protocol type, source and destination TCP/UDP ports).
  • a packet value is a value assigned to each packet and is also referred to as a per packet value (PPV).
  • a packet value is a scalar value and enables nodes to perform computations on these values, such as summing up the values for a total value or dividing the values to reflect higher cost of transmission.
  • the packet values not only express which packet is more important, but also by how much. This is in contrast to existing drop precedence markings, which simply define categories of drop levels, where the individual drop level categories are merely ordered, but further relation among them is not expressed.
  • a whole packet has the same value, but the representation of the value may be a value of a single bit of that packet.
  • the packet value indicates a drop precedence of a packet.
  • the value marked for the packet may be the value of the single bit times bit length.
  • the coding of the value may be linear, logarithmic or based on a mapping table.
  • the packet value for a packet may be embedded within the packet, but it also may be indicated outside of the packet (e.g., a mapping table within a network device to map packets of one traffic flow to one packet value). While packets of the same traffic flow may share the same packet value in one embodiment, they may have different packet values in an alternative embodiment (e.g., packets of a video traffic flow may assign different packet values to different layers of the video traffic flow).
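
As one concrete possibility for the coding options mentioned above (linear, logarithmic, or mapping table), the following sketch shows a hypothetical logarithmic coding of a scalar packet value into a 16-bit field. The field width, value range, and function names are illustrative assumptions, not values taken from the patent.

    import math

    PV_BITS = 16               # assumed width of the packet value field
    PV_MAX_EXP = 32.0          # assumed value range: 1 .. 2**32

    def encode_pv(value: float) -> int:
        """Map a positive scalar packet value to a 16-bit code (log scale)."""
        exp = max(0.0, min(math.log2(value), PV_MAX_EXP))
        return round(exp / PV_MAX_EXP * (2 ** PV_BITS - 1))

    def decode_pv(code: int) -> float:
        """Recover an approximate packet value from its 16-bit code."""
        return 2.0 ** (code / (2 ** PV_BITS - 1) * PV_MAX_EXP)

A logarithmic scale preserves the ratio semantics of packet values (a value 10 times as large stays roughly 10 times as large after a round trip) over a wide range at a fixed field width.
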
  • the techniques described herein may implement ECN marking by maintaining a Packet Value histogram (HIN).
  • HIN Packet Value histogram
  • a byte limit on the queue length (limit_ecn) which may also be referred to as a byte limit threshold, may be defined.
  • a Congestion Threshold Value (CTV) may be determined.
  • Outgoing packets with Per Packet Value (PPV) smaller than the CTV are ECN marked, if ECN capable. Otherwise, such outgoing packets are dropped.
  • the CTV may be updated periodically such as, for example, every ms.
  • the maximum queue length, which is different from the byte limit threshold, may be simultaneously enforced by the HDROP histogram. When all flows are responsive, this limit is not needed. In case of one or more unresponsive flows, this limit will drop packets from the unresponsive flow only.
  • either of the two target queue lengths may be calculated as a target delay value multiplied by the bottleneck capacity.
  • Non-ECN capable packets may be dropped instead of ECN marked when PPV is below CTV.
  • a node and a method by the node may be provided for handling packets in a communication network.
  • the node may maintain a data structure tracking packet value distribution of packets in a queue. When dequeuing a first packet from the queue, the node may determine that the first packet's corresponding PPV is not marked to be dropped in the data structure and determine whether the PPV for the first packet is less than a CTV. The node may then perform one of: serving the first packet if the PPV is not less than the CTV; serving the first packet, with ECN marking, if the PPV is less than the CTV and the first packet is marked for ECN-Capable Transport (ECT); or dropping the first packet if the PPV is less than the CTV and the first packet is not marked for ECT.
  • FIGURE 1 is a schematic overview depicting a radio communications network 100.
  • the radio communications network 100 comprises one or more RANs and one or more CNs.
  • the radio communications network 100 may use a number of different technologies, such as LTE, LTE-Advanced, WCDMA, Global System for Mobile communications/Enhanced Data rate for GSM Evolution (GSM/EDGE), Worldwide Interoperability for Microwave Access (WiMax), WiFi, Code Division Multiple Access (CDMA) 2000 or Ultra Mobile Broadband (UMB), just to mention a few possible implementations.
  • LTE Long Term Evolution
  • GSM/EDGE Global System for Mobile communications/Enhanced Data rate for GSM Evolution
  • WiMax Worldwide Interoperability for Microwave Access
  • CDMA Code Division Multiple Access
  • UMB Ultra Mobile Broadband
  • a user equipment 110 also known as a mobile station and/or a wireless terminal, communicates via a Radio Access Network (RAN) to one or more core networks (CN).
  • RAN Radio Access Network
  • CN core networks
  • “user equipment” is a non-limiting term which means any wireless terminal, MTC device or node such as, for example, a Personal Digital Assistant (PDA), laptop, mobile, sensor, relay, mobile tablet, or even a small base station communicating within a respective cell.
  • PDA Personal Digital Assistant
  • the radio communications network 100 covers a geographical area which is divided into cell areas.
  • cell 105 is served by a radio base station 120.
  • the radio base station 120 may also be referred to as a first radio base station.
  • the radio base station 120 may be referred to as e.g. a NodeB, an evolved Node B (eNB, eNode B), a base transceiver station, Access Point Base Station, base station router, or any other network unit capable of communicating with a user equipment within the cell served by the radio base station depending e.g. on the radio access technology and terminology used.
  • the radio base station 120 may serve one or more cells, such as the cell 105.
  • the user equipment 110 is served by the radio base station 120.
  • a cell is a geographical area where radio coverage is provided by the radio base station equipment at a base station site.
  • the cell definition may also incorporate frequency bands and radio access technology used for transmissions, which means that two different cells may cover the same geographical area but using different frequency bands.
  • Each cell is identified by an identity within the local radio area, which is broadcast in the cell. Another identity identifying the cell 105 uniquely in the whole radio communications network 100 is also broadcasted in the cell 105.
  • the radio base station 120 communicates over the air or radio interface operating on radio frequencies with the user equipment 110 within range of the radio base station 120.
  • the user equipment 110 transmits data over the radio interface to the radio base station 120 in Uplink (UL) transmissions and the radio base station 120 transmits data over an air or radio interface to the user equipment 110 in Downlink (DL) transmissions.
  • UL Uplink
  • DL Downlink
  • the radio communications network 100 comprises a gateway node (GW) 130 for connecting to the Core Network (CN).
  • GW gateway node
  • RNC Radio Network Controller
  • BSC Base Station Controller
  • a transmitting node such as the GW 130 or similar, assigns and marks a value, also denoted as per-packet value, on each packet.
  • the value reflects the importance of the packet for the operator in a linear scale wherein the value corresponds to a level of importance along the linear scale.
  • the value indicates a level of importance of the packet relative to the importance of another packet. For example, the value ‘1000’ indicates that the packet is 100 times as important as a packet with the value ‘10’ and 10 times as important as a packet with the value ‘100’. The importance may be determined based on, e.g., the actual contents of the packet payload, a specific type of packet flow, or throughput commitments.
  • the value assigned to each packet is a scalar value and enables nodes to perform computations on these values, such as sum the values for a total value or divide the values to reflect higher cost of transmission. Also, such values not only express which packet is more important, but also by how much. This is in contrast to existing drop precedence markings, which simply define categories of drop levels, where the individual drop level categories are merely ordered, but further relation among them is not expressed.
  • the whole packet has the same value, but the representation of the value may be a value of a single bit of that packet.
  • the value marked for the packet may e.g. be the value of the single bit times bit length.
  • the coding of the value may be linear, logarithmic or based on, for example, a pre-communicated table.
  • the packet is transmitted to a receiving node, such as the radio base station 120 or similar, which reads the value of the packet and handles the packet based on the value and expected resources needed to serve the packet.
  • the receiving node may comprise a resource allocation scheme.
  • the resource allocation scheme that works on these marked values aims to maximize a realized value, which may also be referred to as a realized operator value, of served packets over a bottleneck. For this, it takes into account both the per-packet value and the expected amount of resources to serve the given packet. Therefore, packets with, e.g., different amounts of expected radio channel overhead may also be compared, and precedence among them may be established based on the realized value per radio resource unit.
  • embodiments herein use a kind of marking for drop precedence, such that the marking not only defines the relative drop precedence of packets compared to each other in case of equal resource demand, but it also gives a value by which the resources needed, such as, for example, different radio channel overheads, can be taken into account and a precedence can be established between packets with different radio channels.
  • As an alternative to taking into account the amount of radio resources, different resources such as, for example, packet overhead at lower layers or processing cost can be taken into account similarly.
  • embodiments herein take the importance as well as the expected resources needed to serve the packet into account when handling the packet.
  • the radio channel overhead also known as user specific radio channel quality
  • the radio channel overhead may be taken into account.
  • a large number of drop precedence levels are suitable to describe a large variety of target resource sharing policies.
  • the interpretation of drop precedence limits the richness of the resource sharing policies.
  • embodiments enable determining the resource sharing between a packet with a higher drop precedence but requiring a small amount of radio resources to serve and a packet with a lower drop precedence but taking much more radio resource to serve. This is a flexible way of handling packets in an efficient manner, as illustrated by the sketch below.
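
A minimal sketch of the precedence rule just described, assuming the realized (effective) value of a packet is its packet value divided by the expected resources needed to serve it; the function names and tuple layout are illustrative assumptions, not the patent's scheduler.

    def effective_value(ppv: float, expected_resource_units: float) -> float:
        """Realized value per resource unit (e.g., per radio resource block)."""
        return ppv / expected_resource_units

    def pick_next(candidates):
        """Pick the packet maximizing realized value per resource unit.

        candidates: iterable of (ppv, expected_resource_units, packet) tuples.
        """
        return max(candidates, key=lambda c: effective_value(c[0], c[1]))[2]

Under such a rule, a packet with a lower value but a cheap radio channel can take precedence over a higher-value packet that would consume far more radio resources.
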
  • FIGURES 2A-2B illustrate entities for the operations of packet value-based packet processing, according to certain embodiments.
  • the node 200 may be implemented in a network device, such as GW 130 or radio base station 120, discussed above with regard to FIGURE 1.
  • all the entities may be within the node 200 of a communications network, which may be a radio or landline communications network.
  • some entities, e.g., the data structure tracking packet value distribution 250, are outside of, but accessible by, the node 200.
  • Packets arrive at the packet arrival block 202.
  • the packet arrival block 202 may be a part of a transceiver interface of the node 200 in one embodiment.
  • the arrived packets are assumed to be already marked with packet values.
  • the marking of a packet with a packet value may be performed at an edge communication node, where the packet value indicates the relative importance of the packet.
  • the marking of the packet with the packet value may also be performed by the node 200, before arriving at the packet arrival block 202.
  • the arrived packets are then either put in a packet processing queue 220 or dropped at 210.
  • the decision of whether to enqueue the packet at one end of the queue or to drop it depends on factors such as the packet length and/or packet value, as discussed in more detail below.
  • the packet processing queue 220 is a first-in and first-out (FIFO) queue (sometimes referred to as first-come and first-served, FCFS) in one embodiment.
  • FIFO first-in and first-out
  • FCFS first-come and first-served
  • the queue 220 is shared among the packets of different traffic flows. Thus, packets within the queue 220 may have different packet values.
  • the node 200 may have multiple packet processing queues. For example, each of the multiple queues may be dedicated to certain types of traffic flows.
  • Packet processing queue 220 is just an example of a packet processing queue that contains packets with different packet values.
  • the packet processing queue 220 may be implemented by a buffer, cache, and/or one or more other machine-readable memory storage media discussed above relating to the definition of the electronic device.
  • the packet processing queue 220 has a maximum queue length, and it does not hold a packet or packets that have a total length longer than the maximum queue length.
  • the maximum queue length is the amount of data volume the queue can output in a given period of time in one embodiment. In other words, it is the throughput of the bottleneck scaled for a chosen timescale.
  • a queue length or a packet length is often measured in bits, bytes/octets (8 bits), halfwords (16 bits), words (32 bits), or doublewords (64 bits).
  • the embodiments of the invention are agnostic to the measurement of the queue (queue length), and for simplicity of explanation, the following discussion uses bytes for the queue length.
  • the node 200 determines whether to accept an arrived packet to the queue, and the determination may be based on the data structure tracking packet value distribution 250.
  • the node updates the data structure 250 to reflect the determination (the illustrated updating 252 of the data structure).
  • the packet serving block 204 serves the packets in the queue.
  • the serving block 204 may be a part of a processor (e.g., an execution unit of a central processor unit (CPU), a graphics processing unit (GPU), a digital signal processor (DSP), or microcontroller).
  • the packet serving block 204 may either serve the packets at the other end of the queue 220 or drop the packets based on the information from the data structure 250 (the illustrated reading 254 of the data structure).
  • the packet serving block 204 then updates the data structure 250 (the illustrated updating 254 of the data structure).
  • serving may also be referred to as processing or executing
  • serving a packet may include a variety of packet processing operations, such as one or more of forwarding the packet, revising the packet’s header, encrypting the packet, compressing the packet, and extracting/revising the payload.
  • the data structure tracking packet value distribution 250 includes histograms.
  • a histogram tracks the distribution of numerical data. While a histogram may be presented as a graphic for human consumption, it may also be represented merely as a data distribution for operations within a node. The histograms in this Specification track distributions of packet values, presented graphically or otherwise.
  • FIGURE 2B illustrates a data structure tracking packet value distribution.
  • the data structure tracking packet value distribution 250 includes two histograms in one embodiment.
  • the first histogram tracks packet value distribution of packets in the queue of a node.
  • the histogram is for packets that are currently in the queue, and we refer to it as the packet-in-queue histogram (HIN) 262.
  • HIN packet-in-queue histogram
  • the packet-in-queue histogram 262 illustrates the cumulative size of packets (e.g., in bytes) that is to be served.
  • the minimum packet value (PVmin) of packet values for the packets currently in the queue is tracked so that the node may determine which packet is to be enqueued. Since the node may keep the packet-in-queue histogram sorted by packet value, PVmin may be obtained with insignificant processing resources.
  • the second histogram tracks the packet value distribution of packets that are to be dropped when the packets reach the packet serving block 204, and we refer to it as the packet-drop histogram (HDROP) 264.
  • the HDROP 264 is updated when a packet value is determined to correspond to packets that are to be dropped, as discussed in more detail below.
  • the histograms are updated when the packet status changes (e.g., a packet is dropped or served), and the node 200 maintains the histograms to be up to date.
  • the node 200 may process packets efficiently without dropping a packet in the middle of a queue or performing complex pre-tuning of control loops. The histograms may be represented as sketched below.
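
A possible in-memory representation of the two histograms of FIGURE 2B, assuming each histogram maps a packet value to the cumulative bytes at that value; the Python names and example numbers are illustrative, not taken from the patent.

    from collections import Counter

    # HIN: bytes of enqueued packets still to be served, keyed by packet value.
    hin = Counter({100: 3000, 40: 1500, 10: 500})
    # HDROP: bytes already marked to be dropped at dequeue time.
    hdrop = Counter()

    # PVmin is the smallest packet value that still has bytes in HIN.
    pv_min = min(pv for pv, nbytes in hin.items() if nbytes > 0)   # -> 10
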
  • node 200 may be understood with reference to the flow diagrams illustrated in FIGURES 3 and 4. However, it should be understood that the operations of the flow diagrams can be performed by embodiments of the invention other than those discussed with reference to the other figures, and the embodiments of the invention discussed with reference to these other figures can perform operations different than those discussed with reference to the flow diagrams.
  • FIGURE 3 is a flow diagram illustrating the packet enqueue process into a queue, as may be performed by node 200, in accordance with certain embodiments.
  • the queue for the packets to enter is the queue 220 in one embodiment.
  • the operations determine whether to enqueue an incoming packet or drop the packet without entering it to the queue.
  • a packet arrives at the queue.
  • the packet has a certain packet value (PV), which may also be referred to herein as a per packet value (PPV), and a certain packet length (P_len).
  • PV per packet value
  • P_len packet length
  • the packet value may be marked within the electronic device itself, or it may be marked by another electronic device.
  • the node determines whether the sum of packet lengths of the existing packets in the queue and the packet length of the newly arrived packet would reach the maximum queue length (QMAX). If the sum is less than the maximum queue length, the flow goes to reference 306, and the packet is enqueued in the queue.
  • a packet-in-queue histogram (e.g., the HIN 262) is updated to reflect the newly added packet in the queue. If the per packet value (PPV) is not in the packet-in-queue histogram already, a new entry for the packet value is added to the histogram, and the packet length (P_len) is indicated to be the total byte length for the packet value.
  • a data structure such as the packet-in-queue histogram may include all valid packet values, and the packet length is initialized to be zero and then accumulated as packets arrive.
  • PVmin minimum PV
  • the flow goes to reference 308.
  • the PV of the newly arrived packet is compared to PVmin. If the PV is not larger than PVmin, the flow goes to reference 310, and the newly arrived packet is denied from entering the queue thus dropped. Once the packet is dropped, the flow goes back to reference 302, waiting for another packet to arrive.
  • the electronic device determines again whether the packet value of the newly arrived packet is larger than PVmin (since PVmin may be updated at reference 316), and if PV is larger than PVmin, the flow goes back to reference 304, otherwise the flow goes to reference 310, and the packet is denied from entering the queue. Once the packet is dropped at reference 310, the flow goes back to reference 302, waiting for another packet to arrive.
  • the electronic device may determine whether to enqueue the newly arrived packet, as sketched below. These operations in method 300 do not require any pre-tuning of the control loops and can be performed efficiently as the packet arrives.
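
A hedged sketch of the enqueue decision of FIGURE 3, using the histogram representation shown earlier: if the new packet would overflow the queue, bytes of the minimum packet value are pushed out of HIN into HDROP, and the packet is denied when its value does not exceed PVmin. QMAX and all names are illustrative assumptions.

    from collections import Counter

    QMAX = 62_500             # example maximum queue length in bytes
    hin, hdrop = Counter(), Counter()
    hin_bytes = 0             # total bytes currently tracked in HIN

    def enqueue(pv: int, p_len: int) -> bool:
        """Return True if the arriving packet is admitted to the queue."""
        global hin_bytes
        while hin_bytes + p_len > QMAX:
            in_queue = [v for v, b in hin.items() if b > 0]
            if not in_queue or pv <= min(in_queue):
                return False                      # deny the newcomer (ref. 310)
            pv_min = min(in_queue)
            moved = min(hin[pv_min], hin_bytes + p_len - QMAX)
            hin[pv_min] -= moved                  # push lowest-value bytes out
            hdrop[pv_min] += moved                # ... to be dropped at dequeue
            hin_bytes -= moved
        hin[pv] += p_len                          # admit and account the packet
        hin_bytes += p_len
        return True
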
  • FIGURE 4 is a flow diagram illustrating the packet dequeue process from a queue, according to certain embodiments.
  • Method 400 may be performed within an electronic device such as the node 200.
  • the queue for the packets to exit is the queue 220 in one embodiment.
  • the operations determine whether to serve a packet exiting the queue or drop the packet instead.
  • a packet to be served is exiting from a queue.
  • the packet has a certain packet value (PV) and a certain packet length (P_len).
  • the electronic device determines whether the packet value of the packet has any accumulated byte length in a packet-drop histogram such as the HDROP 264. As discussed at reference 312, bytes with earlier minimum packet value (PVmin) may be moved to the packet-drop histogram. The electronic device determines whether any byte associated with the packet value is in the packet-drop histogram.
  • the flow goes to reference 406.
  • the bytes of the packet are removed from a packet-in-queue histogram (e.g., the HIN 262) that tracks packet distribution of the enqueued packets.
  • the electronic device serves the packet.
  • the minimum packet value is updated when the packet value of the exiting packet is the minimum packet value and no other bytes remain for the minimum packet value. Otherwise, the minimum packet value stays the same. The flow then goes back to reference 402, and the electronic device determines whether to serve or drop the next exiting packet from the queue.
  • the electronic device removes bytes corresponding to the packet from the packet-drop histogram at reference 410.
  • a packet may have a portion of bytes that are tracked in the packet-drop histogram, while the remaining portion of the packet is tracked in the packet-in-queue histogram.
  • the electronic device removes the remaining portion of the packet from the packet-in-queue histogram as well, since it is unlikely that the remaining portion of the packet will be served properly.
  • the packet-in-queue histogram needs to be updated once the remaining portion of the packet is removed (e.g. reducing the byte length for the corresponding packet value by the number of bytes removed).
  • the packet is dropped without being served.
  • the minimum packet value is updated when the packet value of the exiting packet is the minimum packet value and no other bytes remain for the minimum packet value. Otherwise, the minimum packet value stays the same. Then the flow goes back to reference 402, and the electronic device determines whether to serve or drop the next exiting packet from the queue.
  • the electronic device may determine whether to serve an exiting packet from the queue, as sketched below. These operations in method 400 may be performed quickly using value comparison and data structure reading/updating.
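
A corresponding sketch of the dequeue decision of FIGURE 4 under the same assumed histogram representation: a packet whose value has bytes accumulated in HDROP is dropped (and both histograms are cleaned up), otherwise it is served.

    from collections import Counter

    hin, hdrop = Counter(), Counter()
    hin_bytes = 0

    def dequeue(pv: int, p_len: int) -> bool:
        """Return True if the exiting packet is served, False if dropped."""
        global hin_bytes
        if hdrop[pv] > 0:                         # bytes were pushed out earlier
            dropped = min(hdrop[pv], p_len)
            hdrop[pv] -= dropped                  # remove from HDROP (ref. 410)
            remainder = p_len - dropped
            if remainder > 0:                     # the rest of the packet is
                hin[pv] -= remainder              # removed from HIN too, since
                hin_bytes -= remainder            # it cannot be served properly
            return False                          # drop without serving
        hin[pv] -= p_len                          # normal case: serve the packet
        hin_bytes -= p_len
        return True
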
  • FIGURE 5 illustrates entities for the operations of packet value and CTV-based packet processing, according to certain embodiments.
  • Certain features of node 500 may be similar to those of node 200 described above with respect to FIGURES 2A- 2B.
  • packets arrive at the packet arrival block 502 and are processed using an enqueueing process similar to that described above with respect to packet arrival block 202.
  • Node 500 is only responsible for keeping the maximum queue length and updating HIN 552 and HDROP 554.
  • data structure tracking packet value distribution 550 includes a HIN 552 and a HDROP 554. Additionally, however, data structure tracking packet value distribution 550 includes a Congestion Threshold Value (CTV) block 556 for calculating the CTV for ECN marking.
  • CTV Congestion Threshold Value
  • Dequeue block 560 enables Dequeue ECN marking based on the calculated CTV and the per packet value (PPV) of the packet.
  • FIGURE 6 is a flow diagram illustrating a packet dequeue process 600, according to certain embodiments.
  • Method 600 begins when a determination is made at step 602 that the queue is not empty.
  • the size and PPV of the packet to be dequeued are determined.
  • a packet to be served is exiting from queue.
  • the packet has a size and a PPV.
  • the node determines whether the PPV of the packet has any accumulated byte length in a packet-drop histogram such as the HDROP 554.
  • HIN 552 and HDROP 554 may be updated.
  • node 500 may remove bytes corresponding to the packet from the HDROP histogram 554.
  • a packet may have a portion of bytes that are tracked in the HDROP histogram 554, while the remaining portion of the packet is tracked in the HIN histogram 552.
  • node 500 may also remove the remaining portion of the packet in the HIN 552 histogram, since it is unlikely the remaining portion of the packet will be served properly.
  • the HIN histogram 552 needs to be updated once the remaining portion of the packet is removed (e.g. reducing the byte length for the corresponding packet value by the number of bytes removed). The flow goes back to reference 602, and node 500 determines whether another packet is exiting from the queue 520.
  • the CTV is updated, if needed.
  • the updating of the CTV is described in more detail below.
  • a determination is made at reference 614 as to whether the PPV of the packet is less than the CTV.
  • IP ECT field
  • Dequeue block 560 maintains HIN 552 and HDROP 554 similarly to as described above.
  • for responsive flows, the HDROP[ppv] is never above 0, because the ECN marking threshold (limit_ECN), which may also be referred to as a byte limit threshold, may be significantly smaller than the maximum queue length limit.
  • limit_ECN ECN marking threshold
  • in the case of an unresponsive flow, the packets of that flow will mostly fill the buffer region between limit_ECN and QMAX. Note that this region is also used by bursts of responsive flows.
  • node 500 is capable of maintaining a consistent maximum queue length (QMAX) while not dropping any packet of responsive ECN-capable flows, when both responsive and unresponsive flows are present in a queue, as sketched below.
  • QMAX maximum queue length
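
Putting the pieces together, a hedged sketch of the FIGURE 6 dequeue path: packets already marked in HDROP are dropped; otherwise packets below the CTV are ECN-marked if ECT-capable and dropped if not. The pkt attributes and the serve/drop/mark hooks are illustrative assumptions, and the CTV is assumed to be refreshed periodically as in FIGURE 7.

    from collections import Counter

    hin, hdrop = Counter(), Counter()

    def on_dequeue(pkt, ctv, serve, drop, mark_ce):
        """FIGURE 6 sketch: drop pushed-out packets, ECN-mark below the CTV."""
        if hdrop[pkt.ppv] > 0:                 # pushed out at enqueue time
            hdrop[pkt.ppv] -= min(hdrop[pkt.ppv], pkt.size)
            drop(pkt)
            return
        hin[pkt.ppv] -= pkt.size               # packet leaves the HIN histogram
        if pkt.ppv >= ctv:
            serve(pkt)                         # no congestion signal needed
        elif pkt.ect:                          # ECN-Capable Transport (IP ECT field)
            mark_ce(pkt)                       # mark Congestion Experienced
            serve(pkt)
        else:
            drop(pkt)                          # non-ECT packet below the CTV
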
  • FIGURE 7 is a flow diagram illustrating an example process 700 for updating the CTV when needed, according to certain embodiments.
  • process 700 illustrates step 612 of FIGURE 6 in more detail.
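
The patent describes summing bytes from the highest PPV downward until a limit is reached (see the description of FIGURE 10 below). Under that reading, a sketch of the periodic CTV update could look as follows, with limit_ecn as the byte limit threshold; this is an interpretation, not the exact process 700.

    def update_ctv(hin: dict, limit_ecn: int) -> int:
        """Recompute the CTV from the HIN histogram (illustrative reading)."""
        total = 0
        for pv in sorted(hin, reverse=True):   # walk from the highest PPV down
            total += hin[pv]
            if total > limit_ecn:
                return pv    # higher-value backlog alone already fills the limit
        return 0             # whole backlog fits under limit_ecn: mark nothing

Packets with PPV below the returned threshold are then ECN-marked (or dropped if not ECT) at dequeue, so marking pressure automatically lands on the least important traffic first.
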
  • the Core-stateless ECN scheduler may be implemented in the NS3 simulator.
  • the Linux kernel 4.1 implementation of Data Center TCP and Cubic TCP congestion control algorithms may be used.
  • FIGURE 8 illustrates a graph 800 depicting the throughput of four flows.
  • there are a Gold DCTCP, a Gold Cubic TCP, a Silver DCTCP, and a Silver Cubic TCP flow in the system.
  • Limit_ECN is set to 5 ms (times the system capacity). The system capacity is 100 Mbps, so limit_ECN corresponds to 5 ms × 100 Mbps = 500 kbit, or 62,500 bytes.
  • the dashed lines indicate the ideal throughput of the Gold and Silver sources. While Gold DCTCP flows get a slightly larger share, the system approximates the desired sharing well, without any assumptions on the used congestion control.
  • FIGURE 9 illustrates a graph 900 depicting the per-flow ECN marking rates, which is the bitrate of ECN-marked packets. It can be seen that for DCTCP the marking rate has to be much higher than for Cubic to reach similar bitrates, due to the different congestion control algorithms.
  • FIGURE 10 is a method 1000 by a node for handling packets in a communication network, according to certain embodiments.
  • the method begins at step 1002 with the node maintaining a data structure tracking packet value distribution of packets in a queue.
  • the queue may be a first-in and first- out queue for the packets.
  • the data structure may include a first histogram and a second histogram.
  • the first histogram may include first bytes of packets to be processed, and the first bytes of packets are distributed per PPV.
  • the second histogram may include second bytes of packets to be dropped. The second bytes of packets may also be distributed per packet values.
  • the node determines a CTV.
  • the CTV is determined based on the data structure and a byte limit on the queue length, which may be referred to as a byte limit threshold.
  • the node determines that the first packet’s corresponding PPV is not marked to be dropped in the data structure and determines whether the PPV for the first packet is less than the CTV.
  • the PPV may indicate a drop precedence of the first packet.
  • the PPV may be embedded within the first packet or mapped to the first packet. In a particular embodiment, the first packet may be marked to be dropped when the PPV in the HDROP is greater than zero. In a particular embodiment, the PPV in the HDROP may be greater than zero when one or more packets in the queue are not responsive.
  • the node performs one of: serving the first packet if the PPV is not less than the CTV; serving the first packet, with ECN marking, if the PPV is less than the CTV and the first packet is marked for ECT; or dropping the first packet if the PPV is less than the CTV and the first packet is not marked for ECT.
  • the first packet may be removed from the queue upon the first packet being dropped or served. Additionally, the data structure may be updated based on whether the first packet is dropped or served.
  • the node may periodically update the CTV. For example, the node may determine that an update timer has expired since a previous update of the CTV and sum a number of bytes from at least one highest PPV until either all bytes are counted or a limit is reached. As another example, the node may determine that an update timer has expired since a previous update of the CTV and update the CTV based on whether the number of bytes is greater than a threshold, which may include the byte limit threshold.
  • the method performed by the node may further include determining admission of a second packet to the queue based on a length of the second packet. Additionally, when the admission of the second packet would cause the queue to become full, the admission may be further based on the per packet value of the second packet and the data structure tracking packet value distribution of packets in the queue. The data structure may be updated based on determining the admission of the second packet.
  • determining the admission may include, when the per packet value is higher than a minimum packet value in the first histogram, moving as many bytes with the minimum packet value as needed from the first histogram to the second histogram to accommodate the first packet.
  • the queue may be determined to be full when a sum of packet lengths corresponding to packet values in the first histogram is over a threshold.
  • determining the admission may include, when the packet value is equal to or lower than a minimum packet value in the first histogram, denying the second packet from admitting to the queue.
  • FIGURE 11 is a block diagram depicting a receiving node, exemplified as the radio base station 120 in the figures above, according to embodiments herein.
  • the receiving node for handling packets for the wireless device 110 in the radio communications network 100 comprises a receiving circuit 1401 configured to receive the packet from the transmitting node.
  • the packet is marked with a value, wherein the value corresponds to a level of importance of the packet along a linear scale.
  • the receiving node further comprises a handling circuit 1402 configured to handle the packet based on the value and an expected amount of resources needed to serve the packet.
  • the handling circuit 1402 may be configured to handle the packet based on a realized value, wherein the realized value is the marked value divided by the expected amount of resources.
  • the realized value may also be referred to as an effective packet value.
  • the handling circuit 1402 may further be configured to maximize the realized value of served packets over a bottleneck.
  • the handling circuit 1402 may be configured to share resources when packet scheduling, packet dropping or a combination of packet scheduling and packet dropping over a bottleneck resource which aims to maximize the realized value.
  • the handling circuit 1402 may be configured to share resources when packet scheduling, packet dropping or a combination of packet scheduling and packet dropping, based on the value of the received packet.
  • the resources may comprise radio resources, packet overhead at lower layers, or processing cost.
  • a bit of the packet is marked with the value and the handling circuit 1402 is configured to take an amount of bits of the packet into account in conjunction with the value.
  • the handling circuit 1402 may be configured to take a total value into account in a resource allocation, wherein the total value of the packet is a sum of the values for each bit served.
  • the handling circuit 1402 may comprise a resource sharing scheme, which applies a smaller drop precedence for packets with higher effective packet value; or gives precedence to packets with high effective packet value to be served; or uses a combination of the above, possibly in combination with other dropping and scheduling conditions.
  • the receiving node may further comprise a determining circuit 1403 configured to determine the expected amount of resource required to send a single bit on a radio bearer.
  • the determining circuit 1403 may comprise a resource estimator for the expected resource usage per transmitted bit.
  • the receiving node may further comprise a transmitting circuit 1404 configured to transmit the packets according to a scheduling, resource allocation, control signaling back to the transmitting node, or similar.
  • the embodiments herein for handling packets may be implemented through one or more processors, such as a processing circuit 1405 in the receiving node depicted in FIGURE 11, together with computer program code for performing the functions and/or method steps of the embodiments herein.
  • the program code mentioned above may also be provided as a computer program product, for instance in the form of a data carrier carrying computer program code for performing embodiments herein when being loaded into the receiving node.
  • a data carrier may be in the form of a CD ROM disc. It is however feasible with other data carriers such as a memory stick.
  • the computer program code may furthermore be provided as pure program code on a server and downloaded to the receiving node.
  • circuits may refer to a combination of analogue and digital circuits, and/or one or more processors configured with software and/or firmware (e.g., stored in memory) that, when executed by the one or more processors, perform as described above.
  • processors as well as the other digital hardware, may be included in a single application-specific integrated circuit (ASIC), or several processors and various digital hardware may be distributed among several separate components, whether individually packaged or assembled into a system-on-a-chip (SoC).
  • ASIC application-specific integrated circuit
  • SoC system-on-a-chip
  • the receiving node further comprises a memory 1406 that may comprise one or more memory units and may be used to store for example data such as values, expected amount of resources needed to serve a packet, channel/bearer information, applications to perform the methods herein when being executed on the receiving node or similar.
  • a memory 1406 may comprise one or more memory units and may be used to store for example data such as values, expected amount of resources needed to serve a packet, channel/bearer information, applications to perform the methods herein when being executed on the receiving node or similar.
  • FIGURE 12 is a block diagram depicting a transmitting node, exemplified above as the GW 130, according to embodiments herein.
  • the transmitting node for handling packets for the wireless device 110 in the radio communications network 100 comprises a marking circuit 1501 configured to mark a packet with a value, which value corresponds to a level of importance of the packet along a linear scale.
  • the marking circuit 1501 may be configured to also take into account an expected amount of resources needed to serve the packet when marking the packet.
  • the transmitting node may comprise the marking circuit 1501 comprising a marker entity that marks the packets depending on an expected value of the bits of the packets.
  • the transmitting node comprises a transmitting circuit 1502 configured to transmit the packet to a receiving node 12.
  • the level of importance may be based on one or more of: contents of the packet payload, a specific type of packet flow, and throughput commitments.
  • the transmitting node may further comprise a receiving circuit 1503 configured to receive the packet from another node, and/or to receive packets from the receiving node.
  • the embodiments herein for transmitting packets may be implemented through one or more processors, such as a processing circuit 1504 in the transmitting node depicted in FIGURE 12, together with computer program code for performing the functions and/or method steps of the embodiments herein.
  • the program code mentioned above may also be provided as a computer program product, for instance in the form of a data carrier carrying computer program code for performing embodiments herein when being loaded into the transmitting node.
  • a data carrier may be in the form of a CD-ROM disc; it is, however, feasible to use other data carriers, such as a memory stick.
  • the computer program code may furthermore be provided as pure program code on a server and downloaded to the transmitting node.
  • circuits may refer to a combination of analog and digital circuits, and/or one or more processors configured with software and/or firmware (e.g., stored in memory) that, when executed by the one or more processors, perform as described above.
  • processors, as well as other digital hardware, may be included in a single application-specific integrated circuit (ASIC), or several processors and various digital hardware may be distributed among several separate components, whether individually packaged or assembled into a system-on-a-chip (SoC).
  • the transmitting node further comprises a memory 1505 that may comprise one or more memory units and may be used to store, for example, data such as values, the expected amount of resources needed to serve a packet, channel/bearer information, and applications that perform the methods herein when executed on the transmitting node, or similar.
  • FIGURE 13 schematically illustrates a telecommunication network connected via an intermediate network to a host computer, according to certain embodiments.
  • a communication system includes a telecommunication network 1310, such as a 3GPP-type cellular network, which comprises an access network 1311, such as a radio access network, and a core network 1314.
  • the access network 1311 comprises a plurality of base stations 1312a, 1312b, 1312c, such as NBs, eNBs, gNBs or other types of wireless access points, each defining a corresponding coverage area 1313a, 1313b, 1313c.
  • Each base station 1312a, 1312b, 1312c is connectable to the core network 1314 over a wired or wireless connection 1315.
  • a first user equipment (UE) 1391 located in coverage area 1313c is configured to wirelessly connect to, or be paged by, the corresponding base station 1312c.
  • a second UE 1392 in coverage area 1313a is wirelessly connectable to the corresponding base station 1312a. While a plurality of UEs 1391, 1392 are illustrated in this example, the disclosed embodiments are equally applicable to a situation where a sole UE is in the coverage area or where a sole UE is connecting to the corresponding base station 1312.
  • the telecommunication network 1310 is itself connected to a host computer 1330, which may be embodied in the hardware and/or software of a standalone server, a cloud-implemented server, a distributed server or as processing resources in a server farm.
  • the host computer 1330 may be under the ownership or control of a service provider or may be operated by the service provider or on behalf of the service provider.
  • the connections 1321, 1322 between the telecommunication network 1310 and the host computer 1330 may extend directly from the core network 1314 to the host computer 1330 or may go via an optional intermediate network 1320.
  • the intermediate network 1320 may be one of, or a combination of more than one of, a public, private or hosted network; the intermediate network 1320, if any, may be a backbone network or the Internet; in particular, the intermediate network 1320 may comprise two or more sub-networks (not shown).
  • the communication system of FIGURE 13 as a whole enables connectivity between one of the connected UEs 1391, 1392 and the host computer 1330.
  • the connectivity may be described as an over-the-top (OTT) connection 1350.
  • the host computer 1330 and the connected UEs 1391, 1392 are configured to communicate data and/or signaling via the OTT connection 1350, using the access network 1311, the core network 1314, any intermediate network 1320 and possible further infrastructure (not shown) as intermediaries.
  • the OTT connection 1350 may be transparent in the sense that the participating communication devices through which the OTT connection 1350 passes are unaware of routing of uplink and downlink communications.
  • a base station 1312 may not or need not be informed about the past routing of an incoming downlink communication with data originating from a host computer 1330 to be forwarded (e.g., handed over) to a connected UE 1391. Similarly, the base station 1312 need not be aware of the future routing of an outgoing uplink communication originating from the UE 1391 towards the host computer 1330.
  • FIGURE 14 is a generalized block diagram of a host computer communicating via a base station with a user equipment over a partially wireless connection, according to certain embodiments.
  • Example implementations, in accordance with an embodiment, of the UE, base station and host computer discussed in the preceding paragraphs will now be described with reference to FIGURE 14.
  • a host computer 1410 comprises hardware 1415 including a communication interface 1416 configured to set up and maintain a wired or wireless connection with an interface of a different communication device of the communication system 1400.
  • the host computer 1410 further comprises processing circuitry 1418, which may have storage and/or processing capabilities.
  • the processing circuitry 1418 may comprise one or more programmable processors, application-specific integrated circuits, field programmable gate arrays or combinations of these (not shown) adapted to execute instructions.
  • the host computer 1410 further comprises software 1411, which is stored in or accessible by the host computer 1410 and executable by the processing circuitry 1418.
  • the software 1411 includes a host application 1412.
  • the host application 1412 may be operable to provide a service to a remote user, such as a UE 1430 connecting via an OTT connection 1450 terminating at the UE 1430 and the host computer 1410. In providing the service to the remote user, the host application 1412 may provide user data which is transmitted using the OTT connection 1450.
  • the communication system 1400 further includes a base station 1420 provided in a telecommunication system and comprising hardware 1425 enabling it to communicate with the host computer 1410 and with the UE 1430.
  • the hardware 1425 may include a communication interface 1426 for setting up and maintaining a wired or wireless connection with an interface of a different communication device of the communication system 1400, as well as a radio interface 1427 for setting up and maintaining at least a wireless connection 1470 with a UE 1430 located in a coverage area (not shown in FIGURE 14) served by the base station 1420.
  • the communication interface 1426 may be configured to facilitate a connection 1460 to the host computer 1410.
  • connection 1460 may be direct or it may pass through a core network (not shown in FIGURE 14) of the telecommunication system and/or through one or more intermediate networks outside the telecommunication system.
  • the hardware 1425 of the base station 1420 further includes processing circuitry 1428, which may comprise one or more programmable processors, application-specific integrated circuits, field programmable gate arrays or combinations of these (not shown) adapted to execute instructions.
  • the base station 1420 further has software 1421 stored internally or accessible via an external connection.
  • the communication system 1400 further includes the UE 1430 already referred to.
  • Its hardware 1435 may include a radio interface 1437 configured to set up and maintain a wireless connection 1470 with a base station serving a coverage area in which the UE 1430 is currently located.
  • the hardware 1435 of the UE 1430 further includes processing circuitry 1436, which may comprise one or more programmable processors, application-specific integrated circuits, field programmable gate arrays or combinations of these (not shown) adapted to execute instructions.
  • the UE 1430 further comprises software 1431, which is stored in or accessible by the UE 1430 and executable by the processing circuitry 1436.
  • the software 1431 includes a client application 1432.
  • the client application 1432 may be operable to provide a service to a human or non-human user via the UE 1430, with the support of the host computer 1410.
  • an executing host application 1412 may communicate with the executing client application 1432 via the OTT connection 1450 terminating at the UE 1430 and the host computer 1410.
  • the client application 1432 may receive request data from the host application 1412 and provide user data in response to the request data.
  • the OTT connection 1450 may transfer both the request data and the user data.
  • the client application 1432 may interact with the user to generate the user data that it provides.
  • the host computer 1410, base station 1420 and UE 1430 illustrated in FIGURE 14 may be identical to the host computer 1330, one of the base stations 1312a, 1312b, 1312c and one of the UEs 1391, 1392 of FIGURE 13, respectively.
  • the inner workings of these entities may be as shown in FIGURE 14 and, independently, the surrounding network topology may be that of FIGURE 13.
  • the OTT connection 1450 has been drawn abstractly to illustrate the communication between the host computer 1410 and the user equipment 1430 via the base station 1420, without explicit reference to any intermediary devices and the precise routing of messages via these devices.
  • Network infrastructure may determine the routing, which it may be configured to hide from the UE 1430 or from the service provider operating the host computer 1410, or both. While the OTT connection 1450 is active, the network infrastructure may further take decisions by which it dynamically changes the routing (e.g., on the basis of load balancing considerations or reconfiguration of the network).
  • the wireless connection 1470 between the UE 1430 and the base station 1420 is in accordance with the teachings of the embodiments described throughout this disclosure.
  • One or more of the various embodiments improve the performance of OTT services provided to the UE 1430 using the OTT connection 1450, in which the wireless connection 1470 forms the last segment. More precisely, the teachings of these embodiments may improve the data rate, latency, and/or power consumption and thereby provide benefits such as reduced user waiting time, relaxed restriction on file size, better responsiveness, and extended battery lifetime.
  • a measurement procedure may be provided for the purpose of monitoring data rate, latency and other factors on which the one or more embodiments improve.
  • the measurement procedure and/or the network functionality for reconfiguring the OTT connection 1450 may be implemented in the software 1411 of the host computer 1410 or in the software 1431 of the UE 1430, or both.
  • sensors (not shown) may be deployed in or in association with communication devices through which the OTT connection 1450 passes; the sensors may participate in the measurement procedure by supplying values of the monitored quantities exemplified above or supplying values of other physical quantities from which software 1411, 1431 may compute or estimate the monitored quantities.
  • the reconfiguring of the OTT connection 1450 may include changes to message format, retransmission settings, preferred routing, etc.; the reconfiguring need not affect the base station 1420, and it may be unknown or imperceptible to the base station 1420. Such procedures and functionalities may be known and practiced in the art.
  • measurements may involve proprietary UE signaling facilitating measurements by the host computer 1410 of throughput, propagation times, latency and the like.
  • the measurements may be implemented in that the software 1411, 1431 causes messages to be transmitted, in particular empty or ‘dummy’ messages, using the OTT connection 1450 while it monitors propagation times, errors etc.
  • FIGURE 15 is a flowchart illustrating a method implemented in a communication system, in accordance with one embodiment.
  • the communication system includes a host computer, a base station and a UE which may be those described with reference to FIGURES 13 and 14. For simplicity of the present disclosure, only drawing references to FIGURE 15 will be included in this section.
  • the host computer provides user data.
  • the host computer provides the user data by executing a host application.
  • the host computer initiates a transmission carrying the user data to the UE.
  • the base station transmits to the UE the user data which was carried in the transmission that the host computer initiated, in accordance with the teachings of the embodiments described throughout this disclosure.
  • the UE executes a client application associated with the host application executed by the host computer.
  • FIGURE 16 is a flowchart illustrating a method implemented in a communication system, in accordance with one embodiment.
  • the communication system includes a host computer, a base station and a UE which may be those described with reference to FIGURES 13 and 14. For simplicity of the present disclosure, only drawing references to FIGURE 16 will be included in this section.
  • the host computer provides user data.
  • the host computer provides the user data by executing a host application.
  • the host computer initiates a transmission carrying the user data to the UE. The transmission may pass via the base station, in accordance with the teachings of the embodiments described throughout this disclosure.
  • the UE receives the user data carried in the transmission.
  • FIGURE 17 is a flowchart illustrating a method implemented in a communication system, in accordance with one embodiment.
  • the communication system includes a host computer, a base station and a UE which may be those described with reference to FIGURES 13 and 14. For simplicity of the present disclosure, only drawing references to FIGURE 17 will be included in this section.
  • the UE receives input data provided by the host computer.
  • the UE provides user data.
  • the UE provides the user data by executing a client application.
  • the UE executes a client application which provides the user data in reaction to the received input data provided by the host computer.
  • the executed client application may further consider user input received from the user.
  • the UE initiates, in an optional third substep 1730, transmission of the user data to the host computer.
  • the host computer receives the user data transmitted from the UE, in accordance with the teachings of the embodiments described throughout this disclosure.
  • FIGURE 18 is a flowchart illustrating a method implemented in a communication system, in accordance with one embodiment.
  • the communication system includes a host computer, a base station and a UE which may be those described with reference to FIGURES 13 and 14. For simplicity of the present disclosure, only drawing references to FIGURE 18 will be included in this section.
  • the base station receives user data from the UE.
  • the base station initiates transmission of the received user data to the host computer.
  • the host computer receives the user data carried in the transmission initiated by the base station.


Abstract

A method by a node for handling packets includes maintaining a data structure tracking packet value distribution of packets in a queue and determining a congestion threshold value (CTV). When dequeuing a first packet from the queue, the node determines that the first packet's corresponding per packet value (PPV) is not marked to be dropped in the data structure and then determines whether the PPV for the first packet is less than the CTV. If the PPV is not less than the CTV, the first packet is served. If the PPV is less than the CTV and the first packet is marked for ECN-Capable Transport, ECT, the first packet is served. If the PPV is less than the CTV and the first packet is not marked for ECT, the first packet is dropped.

Description

CORE-STATELESS ECN FOR L4S
BACKGROUND
In a typical communications network such as a radio communications network, wireless terminals, also known as mobile stations and/or user equipments (UEs), communicate via a Radio Access Network (RAN) to one or more core networks. The RAN covers a geographical area which is divided into cell areas, with each cell area being served by a base station such as, for example, a radio base station (RBS), which in some networks may also be called, for example, a“NodeB” or“eNodeB.” A cell is a geographical area where radio coverage is provided by the radio base station at a base station site or an antenna site in case the antenna and the radio base station are not collocated. Each cell is identified by an identity within the local radio area, which is broadcast in the cell. Another identity identifying the cell uniquely in the whole mobile network is also broadcasted in the cell. The base stations communicate over the air interface operating on radio frequencies with the user equipments (UEs) within range of the base stations.
In some typical versions of the RAN, several base stations may be connected such as, for example, by landlines or microwave, to a controller node, such as a radio network controller (RNC) or a base station controller (BSC), which supervises and coordinates various activities of the plural base stations connected thereto. The RNCs are typically connected to one or more core networks.
A Universal Mobile Telecommunications System (UMTS) is a third-generation mobile communication system, which evolved from the second generation (2G) Global System for Mobile Communications (GSM). The UMTS Terrestrial Radio Access Network (UTRAN) is essentially a RAN using Wideband Code Division Multiple Access (WCDMA) and/or High-Speed Packet Access (HSPA) for UEs. In a forum known as the Third Generation Partnership Project (3GPP), telecommunications suppliers propose and agree upon standards for e.g. third generation networks and further generations and investigate enhanced data rate and radio capacity.
Specifications for the Evolved Packet System (EPS) have been completed within the 3GPP and this work continues in the coming 3GPP releases. The EPS comprises the Evolved Universal Terrestrial Radio Access Network (E-UTRAN), also known as the Long Term Evolution (LTE) radio access, and the Evolved Packet Core (EPC), also known as the System Architecture Evolution (SAE) core network. E-UTRAN/LTE is a variant of a 3GPP radio access technology wherein the radio base stations are directly connected to the EPC core network rather than to RNCs. In general, in E-UTRAN/LTE the functions of an RNC are distributed between the radio base stations, e.g., eNodeBs in LTE, and the core network. As such, the RAN of an EPS has an essentially “flat” architecture comprising radio base stations without reporting to RNCs.
Packets are transported in a Core Network along paths in a transport network, which is part of the communications network. Today, two broad categories of transport networking are available. In one, called automatic, path selection is automatic, usually distributed, and usually follows the shortest-path paradigm. Internet Protocol (IP) routing, IP/Multiprotocol Label Switching (MPLS), and Ethernet Shortest Path Bridging (SPB) clearly fall into this category. The other approach, called traffic engineering, is more circuit-aware and relies on setting up explicit paths across the network, usually with resource reservation included. Generalized MPLS (GMPLS), MPLS Transport Profile (MPLS-TP), Provider Backbone Bridges Transport Profile (PBB-TP) and Resource Reservation Protocol (RSVP) all fall into this category.
Automatic solutions have a low management burden and, because they mostly rely on shortest paths, they also achieve some form of low-delay efficiency. They can be expanded with automatic fast reroute and Equal-Cost Multipath (ECMP) to increase reliability and network utilization. The automatic solutions, however, fail to offer adequate Quality of Service (QoS) measures. Usually, simple packet priorities or DiffServ handling does not provide any bandwidth guarantees.
Transport equipment, such as communication nodes, usually has very limited packet processing capabilities, typically limited to a few priority or weighted fair queues and a few levels of drop precedence, since the emphasis is on high throughput and a low price per bit. When packet arrivals exceed the packet processing capability, a node becomes a bottleneck, and some packets are dropped due to congestion at the node. To get around the congestion, a few solutions have been proposed.
There are two common ways of defining and signaling desired resource demands to a bottleneck in a communications network, for example, in a core network such as a packet network. A bottleneck is a location in the communications network where a single component or a limited number of components or resources affects the capacity or performance of the communications network.
A first common way is to pre-signal/pre-configure the desired resource sharing rules for a given traffic aggregate, such as a flow or a bearer, to a bottleneck node prior to the arrival of the actual traffic. The bottleneck node then implements the handling of the traffic aggregates based on these sharing rules. For example, the bottleneck node may use scheduling to realize the desired resource sharing. Examples of this pre-signaling/pre-configuration method are, for example, the bearer concept as discussed in 3GPP TS 23.401 v12.4.0, SIRIG as discussed in 3GPP TS 23.060 section 5.3.5.3, v12.4.0, or the Resource Reservation Protocol (RSVP) as discussed in RFC2205. An example scheduling algorithm for this method, implementing the 3GPP bearer concept at an LTE eNB, may be found in Wang Min, Jonas Pettersson, Ylva Timner, Stefan Wanstedt and Magnus Hurd, Efficient QoS over LTE - a Scheduler Centric Approach, Personal Indoor and Mobile Radio Communications (PIMRC), 2012 IEEE 23rd International Symposium. Another example of this is to base the resource sharing on Service Value as described in Service Value Oriented Radio Resource Allocation, WO2013085437.
A second common way is to mark packets with priority, which gives more resources to higher priority flows, or with drop precedence, which marks the relative importance of the packets compared to each other. Packets of higher drop precedence are to be dropped before packets of lower drop precedence. An example of such a method is DiffServ Assured Forwarding (AF) within a given class [RFC2597]. Such a method with several drop precedence levels is also defined in Per-Bearer Multi-Level Profiling, EP2663037.
According to a Per Packet Value (PPV) concept, a communication node associates packets with a value related to resource sharing. Thus, the communication node marks the packet with a value related to resource sharing in the physical network, wherein the value, also referred to herein as packet value, indicates a level of importance of the packet relative to the importance of other packets along a, for example, linear scale in the physical network. The communication node further transmits, over the physical network, the marked packet towards a destination node.
The basic concept of the per-packet marking-based bandwidth sharing control method is the following. At an edge communication node, each packet gets a label that expresses its importance. In a bottleneck node, these labels, or packet values, are used in a bandwidth sharing decision. Packets of a flow can have different importance values so that, for example, in case of congestion, the packets with the lowest importance are dropped first.
U.S. Patent Publication No. 2016/0105369A1, published on April 14, 2016, which is incorporated by reference in its entirety, relates to the Per Packet Value concept also referred to as Per Packet Operator Value (PPOV) concept. In U.S. Patent Publication No. 2016/0105369A1 and “Per packet value: A practical concept for network resource sharing,” which was published in IEEE Globecom 2016 by S. Nadas, Z. R. Turanyi, and S. Racz, methods are proposed to control bandwidth sharing among flows even when per flow queuing is not possible. Both concepts are based on per packet marking based bandwidth sharing control. Algorithms are defined for a single buffer, which results in a shared delay among these packet flows.
In “Take your own share of the PIE,” which was published in Proceedings of ACM Applied Networking Research Workshop, Prague, Czech Republic, July 2017 (ANRW'17) by S. Nadas, Z. R. Turanyi, S. Laki, G. Gombos, and U.S. Provisional Application 62/484146, a combination of the PPV resource sharing framework with the PIE queue management algorithm was described to implement statistical packet dropping in the resource nodes. U.S. Provisional Application 62/484146, which was filed on December 8, 2017, has now been incorporated into PCT Application No. PCT/SE2017/082044, filed on December 8, 2017. The contents of U.S. Provisional Application 62/484146 and PCT/SE2017/082044 are incorporated by reference in their entireties.
In PCT Application No. PCT/SE2017/051199, filed on November 30, 2017, an efficient implementation of the framework is proposed. However, PPV schedulers typically require dropping from the middle of the queue (drop smallest PPV first). This might be too processing/memory intensive for some practical communication nodes such as routers. Alternatively, their chipset may only support drops on packet arrival and not later. This results in a non-flexible solution limiting or reducing performance of the communications network.
Additionally, PPV schedulers cannot solve the Low Latency, Low Loss, Scalable Throughput (L4S) problem when serving flows with different congestion control over the same bottleneck. The L4S problem is introduced in “Low Latency, Low Loss, Scalable Throughput (L4S) Internet Service: Architecture” by B. Briscoe, K. De Schepper, and M. Bagnulo Braun, dated March 22, 2018, which may be found at https://tools.ietf.org/html/draft-ietf-tsvwg-l4s-arch-02 (last visited April 8, 2019). The existing solutions of the L4S problem build on predefined congestion control behavior and cannot handle misbehaving and/or unresponsive flows.
SUMMARY
To address the foregoing problems with existing solutions, disclosed are systems and methods for handling packets in a communication network.
According to certain embodiments, a method by a node for handling packets includes maintaining a data structure tracking packet value distribution of packets in a queue and determining a congestion threshold value (CTV). When dequeuing a first packet from the queue, the node determines that the first packet's corresponding per packet value (PPV) is not marked to be dropped in the data structure. The node then determines whether the PPV for the first packet is less than the CTV. If the PPV for the first packet is not less than the CTV, the first packet is served. If the PPV for the first packet is less than the CTV and the first packet is marked for ECN-Capable Transport, ECT, the first packet is served. If the PPV for the first packet is less than the CTV and the first packet is not marked for ECT, the first packet is dropped.
According to certain embodiments, a node includes memory operable to store instructions and processing circuitry operable to execute the instructions to cause the node to maintain a data structure tracking packet value distribution of packets in a queue and to determine a CTV. When dequeuing a first packet from the queue, the processing circuitry is operable to execute the instructions to cause the node to determine that the first packet's corresponding PPV is not marked to be dropped in the data structure. The node then determines whether the PPV for the first packet is less than the CTV. If the PPV for the first packet is not less than the CTV, the first packet is served. If the PPV for the first packet is less than the CTV and the first packet is marked for ECN-Capable Transport, ECT, the first packet is served. If the PPV for the first packet is less than the CTV and the first packet is not marked for ECT, the first packet is dropped.
Certain embodiments may provide one or more of the following technical advantage(s). For example, a technical advantage may be that certain embodiments may keep the high speed, simple implementation of previously introduced PPV concepts. As another example, a technical advantage may be that certain embodiments have no need for parameter tuning and verification. As yet another example, a technical advantage may be that certain embodiments may use ECN marking together with PPV. Another technical advantage may be that certain embodiments solve the L4S problem without an explicit assumption on congestion control. Still another technical advantage may be that certain embodiments introduce the option of weighted sharing to the L4S domain.
Other advantages may be readily apparent to one having skill in the art. Certain embodiments may have none, some, or all of the recited advantages.
BRIEF DESCRIPTION OF THE DRAWINGS
For a more complete understanding of the disclosed embodiments and their features and advantages, reference is now made to the following description, taken in conjunction with the accompanying drawings, in which:
FIGURE 1 illustrates a radio communications network, according to certain embodiments;
FIGURES 2A-2B illustrate entities for the operations of packet value-based packet processing, according to certain embodiments;
FIGURE 3 illustrates the packet enqueue process for one packet as may be performed by a node, according to certain embodiments;
FIGURE 4 illustrates the packet dequeue process from a queue, according to certain embodiments;
FIGURE 5 illustrates entities for the operations of packet value and CTV-based packet processing, according to certain embodiments;
FIGURE 6 illustrates a packet dequeue process, according to certain embodiments;
FIGURE 7 illustrates an example process for updating the CTV when needed, according to certain embodiments;
FIGURE 8 illustrates a graph depicting the throughput of four flows, according to certain embodiments;
FIGURE 9 illustrates a graph depicting the per flow ECN marking rates, according to certain embodiments;
FIGURE 10 illustrates a method by a node for handling packets in a communication network, according to certain embodiments;
FIGURE 11 illustrates a receiving node, exemplified as a radio base station, according to certain embodiments;
FIGURE 12 illustrates a transmitting node, exemplified as a gateway, according to certain embodiments;
FIGURE 13 illustrates a telecommunication network connected via an intermediate network to a host computer, according to certain embodiments;
FIGURE 14 illustrates a generalized block diagram of a host computer communicating via a base station with a user equipment over a partially wireless connection, according to certain embodiments;
FIGURE 15 illustrates a method implemented in a communication system, according to one embodiment;
FIGURE 16 illustrates another method implemented in a communication system, according to one embodiment;
FIGURE 17 illustrates another method implemented in a communication system, according to one embodiment; and
FIGURE 18 illustrates another method implemented in a communication system, according to one embodiment.
DETAILED DESCRIPTION
Generally, all terms used herein are to be interpreted according to their ordinary meaning in the relevant technical field, unless a different meaning is clearly given and/or is implied from the context in which it is used. All references to a/an/the element, apparatus, component, means, step, etc. are to be interpreted openly as referring to at least one instance of the element, apparatus, component, means, step, etc., unless explicitly stated otherwise. The steps of any methods disclosed herein do not have to be performed in the exact order disclosed, unless a step is explicitly described as following or preceding another step and/or where it is implicit that a step must follow or precede another step. Any feature of any of the embodiments disclosed herein may be applied to any other embodiment, wherever appropriate. Likewise, any advantage of any of the embodiments may apply to any other embodiments, and vice versa. Other objectives, features and advantages of the enclosed embodiments will be apparent from the following description.
The following description describes methods and apparatus for processing packets based on a Congestion Threshold Value (CTV) which is determined based on a maintained histogram of packets (HIN) and a byte limit on the queue length. In the following description, numerous specific details such as logic implementations, opcodes, means to specify operands, resource partitioning/sharing/duplication implementations, types and interrelationships of system components, and logic partitioning/integration choices are set forth in order to provide a more thorough understanding of the present invention. It will be appreciated, however, by one skilled in the art that the invention may be practiced without such specific details. In other instances, control structures, gate level circuits and full software instruction sequences have not been shown in detail in order to not obscure the invention. Those of ordinary skill in the art, with the included descriptions, will be able to implement appropriate functionality without undue experimentation.
References in the specification to “one embodiment,” “an embodiment,” “an example embodiment,” etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to affect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
Bracketed text and blocks with dashed borders (e.g., large dashes, small dashes, dot-dash, and dots) may be used herein to illustrate optional operations that add additional features to embodiments of the invention. However, such notation should not be taken to mean that these are the only options or optional operations, and/or that blocks with solid borders are not optional in certain embodiments of the invention.
In the following description and claims, the terms “coupled” and “connected,” along with their derivatives, may be used. It should be understood that these terms are not intended as synonyms for each other. “Coupled” is used to indicate that two or more elements, which may or may not be in direct physical or electrical contact with each other, co-operate or interact with each other. “Connected” is used to indicate the establishment of communication between two or more elements that are coupled with each other.
An electronic device stores and transmits (internally and/or with other electronic devices over a network) code (which is composed of software instructions and which is sometimes referred to as computer program code or a computer program) and/or data using machine-readable media (also called computer-readable media), such as machine-readable storage media (e.g., magnetic disks, optical disks, solid state drives, read only memory (ROM), flash memory devices, phase change memory) and machine-readable transmission media (also called a carrier) (e.g., electrical, optical, radio, acoustical or other form of propagated signals - such as carrier waves, infrared signals). Thus, an electronic device (e.g., a computer) includes hardware and software, such as a set of one or more processors (e.g., where a processor is a microprocessor, controller, microcontroller, central processing unit, digital signal processor, application specific integrated circuit, field programmable gate array, other electronic circuitry, a combination of one or more of the preceding) coupled to one or more machine-readable storage media to store code for execution on the set of processors and/or to store data. For instance, an electronic device may include non-volatile memory containing the code since the non-volatile memory can persist code/data even when the electronic device is turned off (when power is removed), and while the electronic device is turned on that part of the code that is to be executed by the processor(s) of that electronic device is typically copied from the slower non-volatile memory into volatile memory (e.g., dynamic random access memory (DRAM), static random access memory (SRAM)) of that electronic device. Typical electronic devices also include a set of one or more physical network interface(s) (NI(s)) to establish network connections (to transmit and/or receive code and/or data using propagating signals) with other electronic devices. For example, the set of physical NIs (or the set of physical NI(s) in combination with the set of processors executing code) may perform any formatting, coding, or translating to allow the electronic device to send and receive data whether over a wired and/or a wireless connection. In some embodiments, a physical NI may comprise radio circuitry capable of receiving data from other electronic devices over a wireless connection and/or sending data out to other devices via a wireless connection. This radio circuitry may include transmitter(s), receiver(s), and/or transceiver(s) suitable for radiofrequency communication. The radio circuitry may convert digital data into a radio signal having the appropriate parameters (e.g., frequency, timing, channel, bandwidth, etc.). The radio signal may then be transmitted via antennas to the appropriate recipient(s). In some embodiments, the set of physical NI(s) may comprise network interface controller(s) (NICs), also known as a network interface card, network adapter, or local area network (LAN) adapter. The NIC(s) may facilitate in connecting the electronic device to other electronic devices allowing them to communicate via wire through plugging in a cable to a physical port connected to a NIC. One or more parts of an embodiment of the invention may be implemented using different combinations of software, firmware, and/or hardware.
Figure imgf000014_0001
A network device (ND) is an electronic device that communicatively interconnects other electronic devices on the network (e.g., other network devices, end-user devices). Some network devices are “multiple services network devices” that provide support for multiple networking functions (e.g., routing, bridging, switching, Layer 2 aggregation, session border control, Quality of Service, and/or subscriber management), and/or provide support for multiple application services (e.g., data, voice, and video). A network device may communicate with the other electronic devices through radio and/or landline communications networks.
A traffic flow (or data flow, flow) is traffic of packets identified by a set of header information and port information including, but not limited to: IP header, Layer 2 (L2) header, virtual and/or physical interface port, and/or agent circuit ID information for a remote port in an access network. A data flow may be identified by a set of attributes embedded to one or more packets of the flow. An exemplary set of attributes includes a 5-tuple (source and destination IP addresses, a protocol type, source and destination TCP/UDP ports).
A packet value (PV) is a value assigned to each packet and is also referred to as a per packet value (PPV). A packet value is a scalar value and enables nodes to perform computations on these values, such as summing up the values for a total value or dividing the values to reflect higher cost of transmission. The packet values not only express which packet is more important, but also by how much. This is in contrast to existing drop precedence markings, which simply define categories of drop levels, where the individual drop level categories are merely ordered, but further relation among them is not expressed. In embodiments of the invention, a whole packet has the same value, but the representation of the value may be a value of a single bit of that packet. In one embodiment, the packet value indicates a drop precedence of a packet. The value marked for the packet may be the value of the single bit times bit length. The coding of the value may be linear, logarithmic or based on a mapping table. In other words, the packet value for a packet may be embedded within the packet, but it also may be indicated outside of the packet (e.g., a mapping table within a network device to map packets of one traffic flow to one packet value). While packets of the same traffic flow may share the same packet value in one embodiment, they may have different packet values in an alternative embodiment (e.g., packets of a video traffic flow may assign different packet values to different layers of the video traffic flow).
Certain embodiments described herein may overcome the drawbacks of the previous approaches by utilizing a data structure that tracks packet value distribution of packets in a queue of a node. According to certain embodiments, the techniques described herein may implement ECN marking by maintaining a Packet Value histogram (HIN). A byte limit on the queue length (limit_ecn), which may also be referred to as a byte limit threshold, may be defined. Based on the maintained histogram of packets (HIN) and the byte limit in queue (limit_ecn), a Congestion Threshold Value (CTV) may be determined. Outgoing packets with a Per Packet Value (PPV) smaller than the CTV are ECN marked, if ECN capable. Otherwise, such outgoing packets are dropped. In particular embodiments, the CTV may be updated periodically, such as, for example, every millisecond.
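As a non-limiting sketch of one plausible realization of the above (the disclosure does not pin down the exact rule), the CTV may be derived by walking the packet value histogram from the highest packet value downward and accumulating queued bytes until the byte limit is exceeded; packet values below the returned threshold are then subject to ECN marking or dropping. The helper name below is illustrative.

    def compute_ctv(hin: dict[int, int], limit_ecn: int) -> int:
        """Illustrative CTV computation: hin maps a packet value to the number
        of bytes currently queued with that value. Walk from the highest value
        down; the CTV is the value at which the accumulated bytes first exceed
        the byte limit. In particular embodiments this would be re-run
        periodically, e.g. every millisecond."""
        accumulated = 0
        for pv in sorted(hin, reverse=True):
            accumulated += hin[pv]
            if accumulated > limit_ecn:
                return pv
        return 0  # queue fits within limit_ecn: no packet falls below the CTV

    # Invented example: 3000 B at PV 100, 4000 B at PV 50, 2000 B at PV 10,
    # with a 6000 B limit. Accumulation passes 6000 B at PV 50, so CTV = 50
    # and the PV-10 packets are ECN marked (if ECT) or dropped on dequeue.
    assert compute_ctv({100: 3000, 50: 4000, 10: 2000}, 6000) == 50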
According to particular embodiments, the maximum queue length, which is different from the byte limit threshold, may be simultaneously enforced by an HDROP histogram. When all flows are responsive, this limit is not needed. In the case of one or more unresponsive flows, this limit will drop packets from the unresponsive flows only.
In particular embodiments, either or both of the target queue lengths may be calculated using target delay values multiplied by the bottleneck capacity. Non-ECN-capable packets may be dropped instead of ECN marked when the PPV is below the CTV.
More specifically, according to certain embodiments, a node and a method by the node may be provided for handling packets in a communication network. According to certain embodiments, the node may maintain a data structure tracking packet value distribution of packets in a queue. When dequeuing a first packet from the queue, the node may determine that the first packet's corresponding PPV is not marked to be dropped in the data structure and determine whether the PPV for the first packet is less than a CTV. The node may then perform one of the following (a non-limiting sketch of this decision is given after the list):
  • if the PPV for the first packet is not less than the CTV, serving the first packet;
  • if the PPV for the first packet is less than the CTV and the first packet is marked for ECN-Capable Transport (ECT), serving the first packet; and
  • if the PPV for the first packet is less than the CTV and the first packet is not marked for ECT, dropping the first packet.
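A non-limiting Python sketch of this dequeue-side decision follows; the flag names (ect_capable, marked_in_hdrop) are illustrative and not taken from this disclosure.

    from enum import Enum

    class Action(Enum):
        SERVE = "serve"
        SERVE_WITH_ECN_MARK = "serve with ECN congestion mark"
        DROP = "drop"

    def dequeue_decision(ppv: int, ctv: int, ect_capable: bool,
                         marked_in_hdrop: bool) -> Action:
        """Illustrative head-of-queue decision for one dequeued packet."""
        if marked_in_hdrop:
            # The packet's value was already marked to be dropped in the
            # data structure (e.g., the HDROP histogram): drop it here.
            return Action.DROP
        if ppv >= ctv:
            return Action.SERVE                # PPV not less than the CTV
        if ect_capable:
            return Action.SERVE_WITH_ECN_MARK  # below the CTV but ECT: serve marked
        return Action.DROP                     # below the CTV and not ECT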
FIGURE 1 is a schematic overview depicting a radio communications network 100. The radio communications network 100 comprises one or more RANs and one or more CNs. The radio communications network 100 may use a number of different technologies, such as LTE, LTE-Advanced, WCDMA, Global System for Mobile communications/Enhanced Data rate for GSM Evolution (GSM/EDGE), Worldwide Interoperability for Microwave Access (WiMax), WiFi, Code Division Multiple Access (CDMA) 2000 or Ultra Mobile Broadband (UMB), just to mention a few possible implementations.
In the radio communications network 100, a user equipment 110, also known as a mobile station and/or a wireless terminal, communicates via a Radio Access Network (RAN) to one or more core networks (CN). It should be understood by those skilled in the art that “user equipment” is a non-limiting term which means any wireless terminal, MTC device or node such as, for example, a Personal Digital Assistant (PDA), laptop, mobile, sensor, relay, mobile tablet, or even a small base station communicating within a respective cell.
The radio communications network 100 covers a geographical area which is divided into cell areas. For example, cell 105 is served by a radio base station 120. The radio base station 120 may also be referred to as a first radio base station. The radio base station 120 may be referred to as e.g. a NodeB, an evolved Node B (eNB, eNode B), a base transceiver station, Access Point Base Station, base station router, or any other network unit capable of communicating with a user equipment within the cell served by the radio base station depending e.g. on the radio access technology and terminology used. The radio base station 120 may serve one or more cells, such as the cell 105. The user equipment 110 is served by the radio base station 120.
A cell is a geographical area where radio coverage is provided by the radio base station equipment at a base station site. The cell definition may also incorporate frequency bands and radio access technology used for transmissions, which means that two different cells may cover the same geographical area but using different frequency bands. Each cell is identified by an identity within the local radio area, which is broadcast in the cell. Another identity identifying the cell 105 uniquely in the whole radio communications network 100 is also broadcasted in the cell 105. The radio base station 120 communicates over the air or radio interface operating on radio frequencies with the user equipment 110 within range of the radio base station 120. The user equipment 110 transmits data over the radio interface to the radio base station 120 in Uplink (UL) transmissions and the radio base station 120 transmits data over an air or radio interface to the user equipment 110 in Downlink (DL) transmissions.
Furthermore, the radio communications network 100 comprises a Gateway node (GW) 130 for connecting to the Core Network (CN).
In some versions of the radio communications network 100, several base stations are typically connected, e.g. by landlines or microwave, to a controller node (not shown), such as a Radio Network Controller (RNC) or a Base Station Controller (BSC), which supervises and coordinates various activities of the plural base stations connected thereto. The RNCs are typically connected to one or more core networks.
Certain embodiments herein fall into the drop precedence marking category of solutions for defining and signaling desired resource demands to a bottleneck in the radio communications network 100. A transmitting node, such as the GW 130 or similar, assigns and marks a value, also denoted as per-packet value, on each packet. The value reflects the importance of the packet for the operator in a linear scale, wherein the value corresponds to a level of importance along the linear scale. The value indicates a level of importance of the packet relative to the importance of another packet. For example, the value ‘1000’ indicates that the packet is 100 times as important as a packet with the value ‘10’ and 10 times as important as a packet with the value ‘100’. The importance may be determined based on the actual contents of the packet payload, e.g. an important video frame, or based on the specific packet flow, e.g. premium traffic of a gold subscriber, or based on throughput commitments, e.g. whether or not this packet is needed to reach a certain level of throughput, or a combination of these, or any other, criteria.
The value assigned to each packet, also referred to as a per packet value, is a scalar value and enables nodes to perform computations on these values, such as summing the values for a total value or dividing the values to reflect a higher cost of transmission. Also, such values not only express which packet is more important, but also by how much. This is in contrast to existing drop precedence markings, which simply define categories of drop levels, where the individual drop level categories are merely ordered, but further relation among them is not expressed. The whole packet has the same value, but the representation of the value may be a value of a single bit of that packet. The value marked for the packet may e.g. be the value of the single bit times bit length. The coding of the value may be linear, logarithmic or based on, for example, a pre-communicated table.
The packet is transmitted to a receiving node, such as the radio base station 120 or similar, which reads the value of the packet and handles the packet based on the value and the expected resources needed to serve the packet. For example, the receiving node may comprise a resource allocation scheme. The resource allocation scheme that works on these marked values aims to maximize a realized value, which may also be referred to as a realized operator value, of served packets over a bottleneck. For this, it takes into account both the per-packet value and the expected amount of resources to serve the given packet. Therefore, packets with e.g. different amounts of expected radio channel overhead may also be compared, and precedence among them may be established based on the realized value per radio resource unit.
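For illustration only, the following sketch orders packets by realized value per radio resource unit; the cost units and numbers are invented for the example and do not come from this disclosure.

    def value_per_resource(packet_value: float, expected_resource_cost: float) -> float:
        """Illustrative precedence metric: realized value per resource unit."""
        return packet_value / expected_resource_cost

    # Invented example: (packet value, expected radio resource units).
    packets = [(100, 1.0), (300, 10.0), (50, 0.25)]
    # Highest value-per-resource first: (50, 0.25) at 200/unit precedes
    # (100, 1.0) at 100/unit, which precedes (300, 10.0) at 30/unit, even
    # though 300 is the largest raw packet value.
    serving_order = sorted(packets, key=lambda p: value_per_resource(*p), reverse=True)
    assert serving_order == [(50, 0.25), (100, 1.0), (300, 10.0)]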
In this way, embodiments herein use a kind of marking for drop precedence, such that the marking not only defines the relative drop precedence of packets compared to each other in case of equal resource demand, but it also gives a value by which the resources needed, such as, for example, different radio channel overheads, can be taken into account and a precedence can be established between packets with different radio channels.
As an alternative to taking into account the amount of radio resources, different resources such as, for example, packet overhead at lower layers or processing cost can be taken into account similarly.
Thus, embodiments herein take the importance as well as the expected resources needed to serve the packet into account when handling the packet. For example, the radio channel overhead, also known as user specific radio channel quality, may be taken into account. As long as the radio channel overhead is the same for all packets, a large number of drop precedence levels are suitable to describe a large variety of target resource sharing policies. However, as soon as the radio channel overhead is different for the packets, the interpretation of drop precedence is limiting the richness of the resource sharing policies. Herein, embodiments enable the determining of the resource sharing between a packet with a higher drop precedence but requiring a small amount of radio resources to serve and a packet with lower drop precedence but taking much more radio resource to serve. This is a flexible way of handling packets in an efficient manner.
FIGURES 2A-2B illustrate entities for the operations of packet value-based packet processing, according to certain embodiments. The node 200, along with other nodes discussed in this specification, may be implemented in a network device, such as the GW 130 or the radio base station 120, discussed above with regard to FIGURE 1. In one embodiment, all the entities may be within the node 200 of a communications network, which may be a radio or landline communications network. In an alternative embodiment, some entities (e.g., the data structure tracking packet value distribution 250) are outside of, but accessible by, the node 200.
Packets arrive at the packet arrival block 202. The packet arrival block 202 may be a part of a transceiver interface of the node 200 in one embodiment. The arrived packets are assumed to be already marked with packet values. The marking of a packet with a packet value may be performed at an edge communication node, where the packet value indicates the relative importance of the packet. The marking of the packet with the packet value may also be performed by the node 200, before arriving at the packet arrival block 202.
The arrived packets are then either put in a packet processing queue 220 or dropped at 210. The decision of whether to enqueue the packet at one end of the queue or to drop it depends on factors such as the packet length and/or packet value, as discussed in more detail below. The packet processing queue 220 is a first-in, first-out (FIFO) queue (sometimes referred to as first-come, first-served, FCFS) in one embodiment. Note that the queue 220 is shared among the packets of different traffic flows. Thus, packets within the queue 220 may have different packet values. Also note that the node 200 may have multiple packet processing queues. For example, each of the multiple queues may be dedicated to certain types of traffic flows. Packet processing queue 220 is just an example of a packet processing queue that contains packets with different packet values. The packet processing queue 220 may be implemented by a buffer, cache, and/or one or more other machine-readable memory storage media discussed above relating to the definition of the electronic device.
The packet processing queue 220 has a maximum queue length, and it does not hold a packet or packets that have a total length longer than the maximum queue length. The maximum queue length is the amount of data volume the queue can output in a given period of time, in one embodiment. In other words, it is the throughput of the bottleneck scaled for a chosen timescale.
Once the queue reaches its maximum queue length and a new packet arrives, either the new packet is dropped without entering the queue or the node makes room in the queue for the new packet.
Note that a queue length or a packet length is often measured in bits, bytes/octets (8 bits), halfwords (16 bits), words (32 bits), or doublewords (64 bits). The embodiments of the invention are agnostic to the measurement of the queue (queue length), and for simplicity of explanation, the following discussion uses bytes for the queue length.
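As a worked, non-limiting illustration of the bottleneck-throughput-times-timescale relation above (the rate and timescale are invented for the example):

    BOTTLENECK_RATE_BPS = 100_000_000   # assumed bottleneck throughput: 100 Mbit/s
    TIMESCALE_S = 0.010                 # assumed chosen timescale: 10 ms

    # Maximum queue length in bytes = bottleneck throughput scaled by the timescale.
    QMAX_BYTES = int(BOTTLENECK_RATE_BPS * TIMESCALE_S / 8)
    assert QMAX_BYTES == 125_000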
The node 200 determines whether to accept an arrived packet to the queue, and the determination may be based on the data structure tracking packet value distribution 250 (the illustrated reading 252 the data structure). Once the node makes the determination, it updates the data structure 250 reflecting the determination (the illustrated updating 252 the data structure).
At the other end of the queue 220, the packet serving block 204 serves the packets in the queue. The serving block 204 may be a part of a processor (e.g., an execution unit of a central processing unit (CPU), a graphics processing unit (GPU), a digital signal processor (DSP), or microcontroller). The packet serving block 204 may either serve the packets at the other end of the queue 220 or drop the packets based on the information from the data structure 250 (the illustrated reading 254 the data structure). The packet serving block 204 then updates the data structure 250 (the illustrated updating 254 the data structure). Note that serving (may also be referred to as processing or executing) a packet may include a variety of packet processing operations such as one or more of forwarding the packet, revising the packet's header, encrypting the packet, compressing the packet, and extracting/revising the payload.
In one embodiment, the data structure tracking packet value distribution 250 includes histograms. A histogram tracks the distribution of numerical data. While a histogram may be presented as a graphic for human consumption, it may also exist merely as a data distribution used for operations within a node. The histograms in this Specification track distributions of packet values, presented graphically or otherwise.
FIGURE 2B illustrates a data structure tracking packet value distribution. The data structure tracking packet value distribution 250 includes two histograms in one embodiment.
The first histogram tracks the packet value distribution of packets in the queue of a node. The histogram is for packets that are currently in the queue, and we refer to it as the packet-in-queue histogram (HIN) 262. For each packet value that one or more in-queue packets are marked with, the number of bytes within the one or more packets is tracked for that packet value in one embodiment. Thus, the packet-in-queue histogram 262 illustrates the cumulative size of packets (e.g., in bytes) that are to be served. For the HIN 262, the minimum packet value (PVmin) of the packet values for the packets currently in the queue is tracked so that the node may determine which packet is to be enqueued. Since the node may sort the packet-in-queue histogram by packet value, PVmin may be obtained with insignificant processing resources.
The second histogram tracks the packet value distribution of packets that are to be dropped when the packets reach the packet serving block 204, and we refer to it as the packet-drop histogram (HDROP) 264. The HDROP 264 is updated when a packet value is determined to correspond to packets that are to be dropped, as discussed in more detail below.
The histograms are updated when the packet status changes (e.g., a packet is dropped or served), and the node 200 maintains the histograms to be up to date. Using the histograms (or other data structures that track packet value distribution), the node 200 may process packets efficiently without dropping a packet in the middle of a queue or performing complex pre-tuning of control loops.
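The histogram bookkeeping described above can be expressed compactly in code. The following is a minimal sketch under stated assumptions: the class name `PVHistograms`, the use of integer packet values, and the recomputation of PVmin on demand are all illustrative choices, not the specification's implementation.

```python
from collections import defaultdict

class PVHistograms:
    """Sketch of the data structure 250: a packet-in-queue histogram (HIN),
    a packet-drop histogram (HDROP), and the minimum packet value (PVmin)
    of the packets currently in the queue."""

    def __init__(self):
        self.hin = defaultdict(int)    # packet value -> bytes currently enqueued
        self.hdrop = defaultdict(int)  # packet value -> bytes marked to be dropped

    def pv_min(self):
        """Smallest packet value that still has bytes in HIN (None if empty).
        The specification tracks PVmin incrementally; recomputing it here
        keeps the sketch short."""
        live = [pv for pv, nbytes in self.hin.items() if nbytes > 0]
        return min(live) if live else None
```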
The operations of node 200 may be understood with reference to the flow diagrams illustrated in FIGURES 3 and 4. However, it should be understood that the operations of the flow diagrams can be performed by embodiments of the invention other than those discussed with reference to the other figures, and the embodiments of the invention discussed with reference to these other figures can perform operations different than those discussed with reference to the flow diagrams.
FIGURE 3 is a flow diagram illustrating a packet enqueue process 300 as may be performed by node 200, in accordance with certain embodiments. The queue for the packets to enter is the queue 220 in one embodiment. The operations determine whether to enqueue an incoming packet or drop the packet without entering it into the queue.
At reference 302, a packet arrives at the queue. The packet has a certain packet value (PV), which may also be referred to herein as a per packet value (PPV), and a certain packet length (P_len). As discussed herein above, the packet value may be marked within the electronic device itself, or it may be marked by another electronic device.
At reference 304, the node determines whether the sum of packet lengths of the existing packets in the queue and the packet length of the newly arrived packet would reach the maximum queue length (QMAX). If the sum is less than the maximum queue length, the flow goes to reference 306, and the packet is enqueued in the queue. In one embodiment, a packet-in-queue histogram (e.g., the HIN 262) is updated to reflect the newly added packet in the queue. If the per packet value (PPV) is not in the packet-in-queue histogram already, a new entry for the packet value is added in the histogram, and the packet length (P_len) is indicated to be the total byte length for the packet value. Note that a data structure such as the packet-in-queue histogram may include all valid packet values, and the packet length is initialized to be zero and then accumulated as packets arrive.
For example, an entry in the histogram may be represented by something like (PV = 100, Byte length = 40) for a packet with PV = 100 and P_len = 40 bytes. If the packet value 100 is already in the packet-in-queue histogram, the byte length for the packet value is increased by P_len. For example, if PV = 100 is in the histogram and the byte length is 250 for PV = 100, after the new packet is enqueued, the entry in the histogram may be represented by something like (PV = 100, Byte length = 290). That is, the new byte length at PV = 100 includes the byte length of the newly admitted packet. Additionally, the node checks the minimum PV (PVmin) of the existing packets within the queue. If the newly arrived packet is the first packet of the queue, its PV = 100 may be set to be PVmin. Otherwise, PV is compared with the current PVmin. If PV = 100 is smaller than the existing PVmin, PVmin is updated to be PV; otherwise, PVmin is intact. Once the packet is enqueued, the flow goes back to reference 302, waiting for another packet to arrive.
If at reference 304, the node determines that the sum is larger than the maximum queue length, the flow goes to reference 308. At reference 308, the PV of the newly arrived packet is compared to PVmin. If the PV is not larger than PVmin, the flow goes to reference 310, and the newly arrived packet is denied from entering the queue and is thus dropped. Once the packet is dropped, the flow goes back to reference 302, waiting for another packet to arrive.
If at reference 308, the node determines that the PV is larger than PVmin, then at reference 312, the electronic device moves as many bytes having the packet value PVmin to a packet-drop histogram such as the HDROP 264 as needed to accommodate the newly arrived packet. For example, assume that PVmin is 90 and its corresponding byte length is 30 in the packet-in-queue histogram. When the packet (PV = 100, P_len = 40) arrives, at reference 312, all the 30 bytes corresponding to PVmin = 90 are moved to the packet-drop histogram. When the bytes are moved to the packet-drop histogram, these bytes are indicated to be dropped.
At reference 316, the node updates PVmin if necessary. For example, in the immediately preceding paragraph, the update of PVmin is necessary because all the 30 bytes corresponding to PVmin are moved to the packet-drop histogram, so that the second lowest packet value becomes the PVmin at reference 316. Yet if PVmin is 90 and its corresponding byte length is 60 in the packet-in-queue histogram, since the new packet has (PV = 100, P_len = 40), only 40 out of the 60 bytes need to be moved to the packet-drop histogram, and PVmin remains at 90.
At reference 318, the electronic device determines again whether the packet value of the newly arrived packet is larger than PVmin (since PVmin may be updated at reference 316), and if PV is larger than PVmin, the flow goes back to reference 304, otherwise the flow goes to reference 310, and the packet is denied from entering the queue. Once the packet is dropped at reference 310, the flow goes back to reference 302, waiting for another packet to arrive.
Through simply tracking the packet value and packet length of the newly arrived packet, and in consideration of the minimum packet value and the packet lengths of the existing packets, the electronic device may determine whether to enqueue the newly arrived packet. These operations in method 300 do not require any pre-tuning of the control loops and can be performed efficiently as the packet arrives.
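As a concrete rendering of this enqueue logic, the following sketch reuses the illustrative `PVHistograms` class above. It is an assumption-laden sketch, not the claimed implementation; in particular, the re-check order after eviction follows the reading that a packet is admitted as soon as it fits.

```python
def enqueue(hist, queue, pkt_pv, pkt_len, qmax):
    """Sketch of the FIGURE 3 enqueue decision: admit the packet if it fits;
    otherwise evict lowest-value bytes (moving them from HIN to HDROP) as
    long as the new packet outranks PVmin."""
    while True:
        queued_bytes = sum(hist.hin.values())
        if queued_bytes + pkt_len <= qmax:       # references 304/306: it fits
            hist.hin[pkt_pv] += pkt_len
            queue.append((pkt_pv, pkt_len))      # physically enqueue the packet
            return True
        pv_min = hist.pv_min()
        if pv_min is None or pkt_pv <= pv_min:   # references 308/310: deny entry
            return False
        # reference 312: move as many lowest-value bytes as needed to HDROP;
        # the corresponding packets stay in the FIFO and are dropped on dequeue
        need = queued_bytes + pkt_len - qmax
        moved = min(need, hist.hin[pv_min])
        hist.hin[pv_min] -= moved
        hist.hdrop[pv_min] += moved
        # references 316/318: PVmin is re-derived on the next pv_min() call
```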
FIGURE 4 is a flow diagram illustrating the packet dequeue process from a queue, according to certain embodiments. Method 400 may be performed within an electronic device such as the node 200. The queue for the packets to exit is the queue 220 in one embodiment. The operations determine whether to serve a packet exiting the queue or drop the packet instead.
At reference 402, a packet to be served is exiting from a queue. The packet has a certain packet value (PV) and a certain packet length (P_len). At reference 404, the electronic device determines whether the packet value of the packet has any accumulated byte length in a packet-drop histogram such as the HDROP 264. As discussed at reference 312, bytes with an earlier minimum packet value (PVmin) may be moved to the packet-drop histogram. The electronic device determines whether any byte associated with the packet value is in the packet-drop histogram.
If there is no byte associated with the packet value in the packet-drop histogram as determined at reference 404, the flow goes to reference 406. The bytes of the packet are removed from a packet-in-queue histogram (e.g., the HIN 262) that tracks packet distribution of the enqueued packets. At reference 420, the electronic device serves the packet. Optionally, at reference 422, the minimum packet value is updated when the packet value of the exiting packet has the minimum packet value and no other bytes remain for the minimum packet value. Otherwise, the minimum packet value stays the same. The flow then goes back to reference 402, and the electronic device determines whether to serve or drop the next exiting packet from the queue.
If there is any byte associated with the packet value in the packet-drop histogram as determined at reference 404, the flow goes to reference 408. The electronic device removes bytes corresponding to the packet from the packet-drop histogram at reference 408. In some cases, a packet has a portion of bytes that are tracked in the packet-drop histogram, and the remaining portion of the packet is tracked in the packet-in-queue histogram. In one embodiment, the electronic device also removes the remaining portion of the packet from the packet-in-queue histogram, since it is unlikely that the remaining portion of the packet will be served properly. In this embodiment, the packet-in-queue histogram needs to be updated once the remaining portion of the packet is removed (e.g., reducing the byte length for the corresponding packet value by the number of bytes removed).
At reference 410, the packet is dropped without being served. Then optionally, at reference 412, the minimum packet value is updated when the packet value of the exiting packet has the minimum packet value and no other bytes remain for the minimum packet value. Otherwise, the minimum packet value stays the same. Then the flow goes back to reference 402, and the electronic device determines whether to serve or drop the next exiting packet from the queue.
Through simply tracking the packet values of packets exiting the queue, and in consideration of the packet values that are marked as to be dropped through a data structure such as the packet-drop histogram, the electronic device may determine whether to serve an exiting packet from the queue. These operations in method 400 may be performed quickly using value comparison and data structure reading/updating.
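The dequeue-side decision can be sketched in the same illustrative style (a hypothetical helper built on `PVHistograms`; the specification's incremental PVmin maintenance is again elided):

```python
def dequeue(hist, queue):
    """Sketch of the FIGURE 4 dequeue decision: a head-of-line packet whose
    value has bytes pending in HDROP is dropped; otherwise it is served and
    its bytes are removed from HIN."""
    if not queue:
        return None
    pv, plen = queue.pop(0)                # head-of-line packet (reference 402)
    if hist.hdrop[pv] > 0:                 # references 404/408
        dropped = min(plen, hist.hdrop[pv])
        hist.hdrop[pv] -= dropped          # bytes marked to drop are consumed
        hist.hin[pv] -= (plen - dropped)   # any remaining portion also leaves HIN
        return ("dropped", pv, plen)       # reference 410
    hist.hin[pv] -= plen                   # reference 406
    return ("served", pv, plen)            # reference 420
```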
FIGURE 5 illustrates entities for the operations of packet value and CTV-based packet processing, according to certain embodiments. Certain features of node 500 may be similar to those of node 200 described above with respect to FIGURES 2A-2B. Thus, packets arrive at the packet arrival block 502 and are processed using an enqueueing process similar to that described above with respect to packet arrival block 202. Node 500 is responsible only for keeping the maximum queue length and updating HIN 552 and HDROP 554.
Similar to the data structure tracking packet value distribution 250 described above, data structure tracking packet value distribution 550 includes a HIN 552 and an HDROP 554. Additionally, however, data structure tracking packet value distribution 550 includes a Congestion Threshold Value (CTV) block 556 for calculating the CTV for ECN marking.
As a further addition, Dequeue block 560 enables Dequeue ECN marking based on the calculated CTV and the per packet value (PPV) of the packet.
FIGURE 6 is a flow diagram illustrating a packet dequeue process 600, according to certain embodiments. Method 600 begins when a determination is made at reference 602 that the queue is not empty. At reference 604, a packet to be served is exiting from the queue; the packet has a size and a PPV. At reference 606, the node determines whether the PPV of the packet has any accumulated byte length in a packet-drop histogram such as the HDROP 554.
If there is any byte associated with the packet value in the packet-drop histogram as determined at reference 606, the flow goes to reference 608 and the packet is dropped without being served.
At reference 610, HIN 552 and HDROP 554 may be updated. For example, node 500 may remove bytes corresponding to the packet from the HDROP histogram 554. In some cases, a packet has a portion of bytes that are tracked in the HDROP histogram 554, and the remaining portion of the packet is tracked in the HIN histogram 552. Thus, in a particular embodiment, node 500 may also remove the remaining portion of the packet from the HIN histogram 552, since it is unlikely that the remaining portion of the packet will be served properly. In this embodiment, the HIN histogram 552 needs to be updated once the remaining portion of the packet is removed (e.g., reducing the byte length for the corresponding packet value by the number of bytes removed). The flow goes back to reference 602, and node 500 determines whether another packet is exiting from the queue 520.
Returning to reference 606, if there is no byte associated with the PPV in the HDROP 554, the packet is not dropped and the flow goes to reference 612.
At reference 612, the CTV is updated, if needed. The updating of the CTV is described in more detail below. Thereafter, a determination is made at reference 614 as to whether the PPV of the packet is less than the CTV.
If it is determined at reference 614 that the PPV of the packet is smaller than the CTV, then a determination is made at reference 616 as to whether the packet is ECT, i.e., ECN capable (marked in the incoming IP ECT field). If it is determined that the packet is ECT, then the packet is ECN marked at reference 618 by setting ECN Congestion Experienced on ECT. The packet is then sent at reference 620 and the HIN 552 is updated.
Conversely, if it is determined at reference 616 that the packet is not ECT, then the method returns to reference 608 and the packet is dropped.
In this manner, Dequeue block 560 maintains HIN 552 and HDROP 554 in a manner similar to that described above.
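Combining the drop check with the CTV comparison, the FIGURE 6 dequeue logic may be sketched as follows. This is an illustrative sketch in the same hypothetical style; each queued packet is assumed to carry a `(ppv, length, is_ect)` tuple, and the CTV update of reference 612 is sketched separately after FIGURE 7 below.

```python
def dequeue_ecn(hist, queue, ctv):
    """Sketch of the FIGURE 6 dequeue with CTV-based ECN marking."""
    if not queue:                            # reference 602: queue empty
        return None
    ppv, plen, is_ect = queue.pop(0)         # reference 604
    if hist.hdrop[ppv] > 0:                  # reference 606
        dropped = min(plen, hist.hdrop[ppv])
        hist.hdrop[ppv] -= dropped           # reference 610: update HDROP
        hist.hin[ppv] -= (plen - dropped)    # ...and HIN for any remainder
        return ("dropped", ppv)              # reference 608
    # reference 612 (timer-driven CTV update) would be invoked here
    hist.hin[ppv] -= plen
    if ppv < ctv:                            # reference 614
        if not is_ect:                       # reference 616: not ECN capable
            return ("dropped", ppv)          # reference 608
        return ("sent_ce_marked", ppv)       # references 618/620: set CE, send
    return ("sent", ppv)                     # reference 620
```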
In a particular embodiment, which may be the typical case when all flows are responsive, HDROP[PPV] is never above 0, because the ECN marking threshold (limit_ECN), which may also be referred to as a byte limit threshold, may be significantly smaller than the maximum queue length limit. However, when a flow is unresponsive, the packets of that flow will mostly fill the buffer region between limit_ECN and QMAX. Note that this region is also used by bursts of responsive flows.
Unlike any prior system, node 500 is capable of maintaining a consistent maximum queue length (QMAX) while, at the same time, not dropping any packet of responsive ECN-capable flows, even when both responsive and unresponsive flows are present in a queue.
FIGURE 7 is a flow diagram illustrating an example process 700 for updating the CTV when needed, according to certain embodiments. Thus, process 700 illustrates step 612 of FIGURE 6 in more detail.
At reference 702, process 700 begins with a determination as to whether T_update has elapsed since the last update. If T_update has not elapsed since the last update, the method ends. Conversely, if T_update has elapsed since the last update, the number of bytes is summed from the highest PPVs until either all bytes are counted (iPPV = 0) or limit_ECN is reached, at references 704-710. The CTV is determined based on this at reference 712 or 714.
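A sketch of this update, again in the illustrative style used above. The exact boundary handling at references 712/714 — which PPV becomes the CTV when limit_ECN is crossed, and the value used when all bytes fit — is an assumption.

```python
def update_ctv(hist, now, state, t_update, limit_ecn):
    """Sketch of FIGURE 7: once per T_update, walk HIN from the highest PPV
    downward, accumulating bytes; set the CTV where the accumulated bytes
    first exceed limit_ECN. `state` holds the last update time and the CTV."""
    if now - state["last_update"] < t_update:   # reference 702: too early
        return state["ctv"]
    state["last_update"] = now
    acc = 0
    for ppv in sorted(hist.hin, reverse=True):  # references 704-710
        acc += hist.hin[ppv]
        if acc > limit_ecn:                     # reference 712: threshold crossed
            state["ctv"] = ppv
            return state["ctv"]
    state["ctv"] = 0                            # reference 714: all bytes fit, so
    return state["ctv"]                         # no PPV falls below the CTV
```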
In a particular embodiment, the Core-stateless ECN scheduler may be implemented in the NS3 simulator. The Linux kernel 4.1 implementation of Data Center TCP and Cubic TCP congestion control algorithms may be used.
FIGURE 8 illustrates a graph 800 depicting the throughput of four flows. There is a Gold DCTCP, a Gold Cubic TCP, a Silver DCTCP, and a Silver Cubic TCP flow in the system. Limit_ECN is set to 5 ms (times the system capacity). The system capacity is 100 Mbps. The dashed lines indicate the ideal throughput of the Gold and Silver sources. While Gold DCTCP flows get a slightly larger share, the system approximates the desired sharing well, without any assumptions on the used congestion control.
FIGURE 9 illustrates a graph 900 depicting the per-flow ECN marking rates, i.e., the bitrate of ECN-marked packets. It can be seen that for DCTCP the marking rate has to be much higher than for Cubic to reach similar bitrates, due to the different congestion control algorithm.
FIGURE 10 illustrates a method 1000 performed by a node for handling packets in a communication network, according to certain embodiments. The method begins at step 1002 with the node maintaining a data structure tracking packet value distribution of packets in a queue. In a particular embodiment, the queue may be a first-in and first-out queue for the packets.
In a particular embodiment, the data structure may include a first histogram and a second histogram. The first histogram may include first bytes of packets to be processed, and the first bytes of packets are distributed per packet value (PPV). The second histogram may include second bytes of packets to be dropped. The second bytes of packets may also be distributed per packet value.
At step 1004, the node determines a CTV. In a particular embodiment, the CTV is determined based on the data structure and a byte limit on the queue length, which may be referred to as a byte limit threshold.
At step 1006, when dequeuing a first packet from the queue, the node determines that the first packet’s corresponding PPV is not marked to be dropped in the data structure and determines whether the PPV for the first packet is less than the CTV. In a particular embodiment, the PPV may indicate a drop precedence of the first packet. According to various embodiments, the PPV may be embedded within the first packet or mapped to the first packet.
In a particular embodiment, the first packet may be marked to be dropped when the byte length for the PPV in the HDROP is greater than zero. In a particular embodiment, the byte length may be greater than zero when one or more packets in the queue are not responsive.
At step 1008, the node performs one of:
• if the PPV for the first packet is not less than the CTV, serving the first packet;
• if the PPV for the first packet is less than the CTV and the first packet is marked for ECN-Capable Transport (ECT), serving the first packet; and
• if the PPV for the first packet is less than the CTV and the first packet is not marked for ECT, dropping the first packet.
The first packet may be removed from the queue upon the first packet being dropped or served. Additionally, the data structure may be updated based on whether the first packet is dropped or served.
According to certain embodiments, the node may periodically update the CTV. For example, the node may determine that an update timer has expired since a previous update of the CTV and sum a number of bytes from at least one highest PPV until either all bytes are counted or a limit is reached. As another example, the node may determine that an update timer has expired since a previous update of the CTV and update the CTV based on whether a number of bytes is greater than a threshold, which may include the byte limit threshold.
In a particular embodiment, the method performed by the node may further include determining admission of a second packet to the queue based on a length of the second packet. Additionally, when the admission of the second packet would cause the queue to become full, the admission may be further based on the per packet value of the second packet and the data structure tracking packet value distribution of packets in the queue. The data structure may be updated based on determining the admission of the second packet.
In a particular embodiment, determining the admission may include, when the per packet value is higher than a minimum packet value in the first histogram, moving as many bytes with the minimum packet value as needed from the first histogram to the second histogram to accommodate the second packet. In another embodiment, the queue may be determined to be full when a sum of packet lengths corresponding to packet values in the first histogram is over a threshold.
In still another embodiment, determining the admission may include, when the packet value is equal to or lower than a minimum packet value in the first histogram, denying the second packet from admitting to the queue.
FIGURE 11 is a block diagram depicting a receiving node, exemplified as the radio base station 120 in the figures above, according to embodiments herein. The receiving node for handling packets for the wireless device 110 in the radio communications network 100 comprises a receiving circuit 1401 configured to receive the packet from the transmitting node. As mentioned above, the packet is marked with a value, wherein the value corresponds to a level of importance of the packet along a linear scale.
The receiving node further comprises a handling circuit 1402 configured to handle the packet based on the value and an expected amount of resources needed to serve the packet. The handling circuit 1402 may be configured to handle the packet based on a realized value, wherein the realized value is the marked value divided by the expected amount of resources. The realized value may also be referred to as an effective packet value. The handling circuit 1402 may further be configured to maximize the realized value of served packets over a bottleneck. In some embodiments, the handling circuit 1402 may be configured to share resources through packet scheduling, packet dropping, or a combination of packet scheduling and packet dropping over a bottleneck resource, in a way that aims to maximize the realized value. The handling circuit 1402 may be configured to share resources through packet scheduling, packet dropping, or a combination of packet scheduling and packet dropping, based on the value of the received packet. The resources may comprise radio resources, packet overhead at lower layers, or processing cost.
In some embodiments a bit of the packet is marked with the value and the handling circuit 1402 is configured to take an amount of bits of the packet into account
in conjunction with the value. The handling circuit 1402 may be configured to take a total value into account in a resource allocation, wherein the total value of the packet is a sum of the values for each bit served. The handling circuit 1402 may comprise a resource sharing scheme, which applies a smaller drop precedence for packets with higher effective packet value; or gives precedence to packets with high effective packet value to be served; or uses a combination of the above, possibly in combination with other dropping and scheduling conditions.
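The realized-value computation lends itself to a short sketch (illustrative only; the per-bit resource estimate would come from an estimator such as the determining circuit 1403 described below, and the greedy selection shown is just one way to approximate the maximization):

```python
def realized_value(marked_value, expected_resources):
    """Effective packet value: the marked value divided by the expected
    amount of resources needed to serve the packet."""
    return marked_value / expected_resources

def pick_next(candidates):
    """Greedy scheduling sketch: among candidate packets, serve the one
    with the highest realized value. Each candidate is a tuple of
    (packet, marked_value, expected_resources)."""
    return max(candidates, key=lambda c: realized_value(c[1], c[2]))
```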
The receiving node may further comprise a determining circuit 1403 configured to determine the expected amount of resources required to send a single bit on a radio bearer. The determining circuit 1403 may comprise a resource estimator for the expected resource usage per transmitted bit.
The receiving node may further comprise a transmitting circuit 1404 configured to transmit the packets according to a scheduling, resource allocation, control signaling back to the transmitting node, or similar.
The embodiments herein for handling packets may be implemented through one or more processors, such as a processing circuit 1405 in the receiving node depicted in FIGURE 11, together with computer program code for performing the functions and/or method steps of the embodiments herein. The program code mentioned above may also be provided as a computer program product, for instance in the form of a data carrier carrying computer program code for performing embodiments herein when being loaded into the receiving node. One such carrier may be in the form of a CD ROM disc. It is however feasible with other data carriers such as a memory stick. The computer program code may furthermore be provided as pure program code on a server and downloaded to the receiving node.
Those skilled in the art will also appreciate that the various “circuits” described may refer to a combination of analogue and digital circuits, and/or one or more processors configured with software and/or firmware (e.g., stored in memory) that, when executed by the one or more processors, perform as described above. One or more of these processors, as well as the other digital hardware, may be included in a single application-specific integrated circuit (ASIC), or several processors and various
digital hardware may be distributed among several separate components, whether individually packaged or assembled into a system-on-a-chip (SoC).
The receiving node further comprises a memory 1406 that may comprise one or more memory units and may be used to store for example data such as values, expected amount of resources needed to serve a packet, channel/bearer information, applications to perform the methods herein when being executed on the receiving node or similar.
FIGURE 12 is a block diagram depicting a transmitting node, exemplified above as the GW 130, according to embodiments herein. The transmitting node for handling packets for the wireless device 110 in the radio communications network 100 comprises a marking circuit 1501 configured to mark a packet with a value, which value corresponds to a level of importance of the packet along a linear scale. The marking circuit 1501 may be configured to also take into account an expected amount of resources needed to serve the packet when marking the packet. The transmitting node may comprise the marking circuit 1501 comprising a marker entity that marks the packets depending on an expected value of the bits of the packets.
The transmitting node comprises a transmitting circuit 1502 configured to transmit the packet to a receiving node. The level of importance may be based on one or more of: contents of the packet payload, a specific type of packet flow, and throughput commitments.
The transmitting node may further comprise a receiving circuit 1503 configured to receive the packet from another node, and/or to receive packets from the receiving node.
The embodiments herein for transmitting packets may be implemented through one or more processors, such as a processing circuit 1504 in the transmitting node depicted in FIGURE 12, together with computer program code for performing the functions and/or method steps of the embodiments herein. The program code mentioned above may also be provided as a computer program product, for instance in the form of a data carrier carrying computer program code for performing embodiments herein when being loaded into the transmitting node. One such carrier
may be in the form of a CD ROM disc. It is however feasible with other data carriers such as a memory stick. The computer program code may furthermore be provided as pure program code on a server and downloaded to the transmitting node.
Those skilled in the art will also appreciate that the various “circuits” described may refer to a combination of analog and digital circuits, and/or one or more processors configured with software and/or firmware (e.g., stored in memory) that, when executed by the one or more processors, perform as described above. One or more of these processors, as well as the other digital hardware, may be included in a single application-specific integrated circuit (ASIC), or several processors and various digital hardware may be distributed among several separate components, whether individually packaged or assembled into a system-on-a-chip (SoC).
The transmitting node further comprises a memory 1505 that may comprise one or more memory units and may be used to store for example data such as values, expected amount of resources needed to serve a packet, channel/bearer information, applications to perform the methods herein when being executed on the transmitting node or similar.
FIGURE 13 schematically illustrates a telecommunication network connected via an intermediate network to a host computer, according to certain embodiments. In accordance with an embodiment, a communication system includes a telecommunication network 1310, such as a 3GPP-type cellular network, which comprises an access network 1311, such as a radio access network, and a core network 1314. The access network 1311 comprises a plurality of base stations 1312a, 1312b, 1312c, such as NBs, eNBs, gNBs or other types of wireless access points, each defining a corresponding coverage area 1313a, 1313b, 1313c. Each base station 1312a, 1312b, 1312c is connectable to the core network 1314 over a wired or wireless connection 1315. A first user equipment (UE) 1391 located in coverage area 1313c is configured to wirelessly connect to, or be paged by, the corresponding base station 1312c. A second UE 1392 in coverage area 1313a is wirelessly connectable to the corresponding base station 1312a. While a plurality of UEs 1391, 1392 are illustrated in this example, the disclosed embodiments are equally applicable to a situation where
a sole UE is in the coverage area or where a sole UE is connecting to the corresponding base station 1312.
The telecommunication network 1310 is itself connected to a host computer 1330, which may be embodied in the hardware and/or software of a standalone server, a cloud-implemented server, a distributed server or as processing resources in a server farm. The host computer 1330 may be under the ownership or control of a service provider or may be operated by the service provider or on behalf of the service provider. The connections 1321, 1322 between the telecommunication network 1310 and the host computer 1330 may extend directly from the core network 1314 to the host computer 1330 or may go via an optional intermediate network 1320. The intermediate network 1320 may be one of, or a combination of more than one of, a public, private or hosted network; the intermediate network 1320, if any, may be a backbone network or the Internet; in particular, the intermediate network 1320 may comprise two or more sub-networks (not shown).
The communication system of FIGURE 13 as a whole enables connectivity between one of the connected UEs 1391, 1392 and the host computer 1330. The connectivity may be described as an over-the-top (OTT) connection 1350. The host computer 1330 and the connected UEs 1391, 1392 are configured to communicate data and/or signaling via the OTT connection 1350, using the access network 1311, the core network 1314, any intermediate network 1320 and possible further infrastructure (not shown) as intermediaries. The OTT connection 1350 may be transparent in the sense that the participating communication devices through which the OTT connection 1350 passes are unaware of routing of uplink and downlink communications. For example, a base station 1312 may not or need not be informed about the past routing of an incoming downlink communication with data originating from a host computer 1330 to be forwarded (e.g., handed over) to a connected UE 1391. Similarly, the base station 1312 need not be aware of the future routing of an outgoing uplink communication originating from the UE 1391 towards the host computer 1330.
FIGURE 14 is a generalized block diagram of a host computer communicating via a base station with a user equipment over a partially wireless connection, according
to certain embodiments. Example implementations, in accordance with an embodiment, of the UE, base station and host computer discussed in the preceding paragraphs will now be described with reference to FIGURE 14. In a communication system 1400, a host computer 1410 comprises hardware 1415 including a communication interface 1416 configured to set up and maintain a wired or wireless connection with an interface of a different communication device of the communication system 1400. The host computer 1410 further comprises processing circuitry 1418, which may have storage and/or processing capabilities. In particular, the processing circuitry 1418 may comprise one or more programmable processors, application-specific integrated circuits, field programmable gate arrays or combinations of these (not shown) adapted to execute instructions. The host computer 1410 further comprises software 1411, which is stored in or accessible by the host computer 1410 and executable by the processing circuitry 1418. The software 1411 includes a host application 1412. The host application 1412 may be operable to provide a service to a remote user, such as a UE 1430 connecting via an OTT connection 1450 terminating at the UE 1430 and the host computer 1410. In providing the service to the remote user, the host application 1412 may provide user data which is transmitted using the OTT connection 1450.
The communication system 1400 further includes a base station 1420 provided in a telecommunication system and comprising hardware 1425 enabling it to communicate with the host computer 1410 and with the UE 1430. The hardware 1425 may include a communication interface 1426 for setting up and maintaining a wired or wireless connection with an interface of a different communication device of the communication system 1400, as well as a radio interface 1427 for setting up and maintaining at least a wireless connection 1470 with a UE 1430 located in a coverage area (not shown in FIGURE 14) served by the base station 1420. The communication interface 1426 may be configured to facilitate a connection 1460 to the host computer 1410. The connection 1460 may be direct or it may pass through a core network (not shown in FIGURE 14) of the telecommunication system and/or through one or more intermediate networks outside the telecommunication system. In the embodiment
shown, the hardware 1425 of the base station 1420 further includes processing circuitry 1428, which may comprise one or more programmable processors, application-specific integrated circuits, field programmable gate arrays or combinations of these (not shown) adapted to execute instructions. The base station 1420 further has software 1421 stored internally or accessible via an external connection.
The communication system 1400 further includes the UE 1430 already referred to. Its hardware 1435 may include a radio interface 1437 configured to set up and maintain a wireless connection 1470 with a base station serving a coverage area in which the UE 1430 is currently located. The hardware 1435 of the UE 1430 further includes processing circuitry 1436, which may comprise one or more programmable processors, application-specific integrated circuits, field programmable gate arrays or combinations of these (not shown) adapted to execute instructions. The UE 1430 further comprises software 1431, which is stored in or accessible by the UE 1430 and executable by the processing circuitry 1436. The software 1431 includes a client application 1432. The client application 1432 may be operable to provide a service to a human or non-human user via the UE 1430, with the support of the host computer 1410. In the host computer 1410, an executing host application 1412 may communicate with the executing client application 1432 via the OTT connection 1450 terminating at the UE 1430 and the host computer 1410. In providing the service to the user, the client application 1432 may receive request data from the host application 1412 and provide user data in response to the request data. The OTT connection 1450 may transfer both the request data and the user data. The client application 1432 may interact with the user to generate the user data that it provides.
It is noted that the host computer 1410, base station 1420 and UE 1430 illustrated in FIGURE 14 may be identical to the host computer 1330, one of the base stations 1312a, 1312b, 1312c and one of the UEs 1391, 1392 of FIGURE 13, respectively. This is to say, the inner workings of these entities may be as shown in FIGURE 14 and, independently, the surrounding network topology may be that of FIGURE 13.
In FIGURE 14, the OTT connection 1450 has been drawn abstractly to illustrate the communication between the host computer 1410 and the user equipment 1430 via the base station 1420, without explicit reference to any intermediary devices and the precise routing of messages via these devices. Network infrastructure may determine the routing, which it may be configured to hide from the UE 1430 or from the service provider operating the host computer 1410, or both. While the OTT connection 1450 is active, the network infrastructure may further take decisions by which it dynamically changes the routing (e.g., on the basis of load balancing considerations or reconfiguration of the network).
The wireless connection 1470 between the UE 1430 and the base station 1420 is in accordance with the teachings of the embodiments described throughout this disclosure. One or more of the various embodiments improve the performance of OTT services provided to the UE 1430 using the OTT connection 1450, in which the wireless connection 1470 forms the last segment. More precisely, the teachings of these embodiments may improve the data rate, latency, and/or power consumption and thereby provide benefits such as reduced user waiting time, relaxed restriction on file size, better responsiveness, and extended battery lifetime.
A measurement procedure may be provided for the purpose of monitoring data rate, latency and other factors on which the one or more embodiments improve. There may further be an optional network functionality for reconfiguring the OTT connection 1450 between the host computer 1410 and UE 1430, in response to variations in the measurement results. The measurement procedure and/or the network functionality for reconfiguring the OTT connection 1450 may be implemented in the software 1411 of the host computer 1410 or in the software 1431 of the UE 1430, or both. In embodiments, sensors (not shown) may be deployed in or in association with communication devices through which the OTT connection 1450 passes; the sensors may participate in the measurement procedure by supplying values of the monitored quantities exemplified above or supplying values of other physical quantities from which software 1411, 1431 may compute or estimate the monitored quantities. The reconfiguring of the OTT connection 1450 may include message format,
retransmission settings, preferred routing, etc.; the reconfiguring need not affect the base station 1420, and it may be unknown or imperceptible to the base station 1420. Such procedures and functionalities may be known and practiced in the art. In certain embodiments, measurements may involve proprietary UE signaling facilitating the host computer’s 1410 measurements of throughput, propagation times, latency and the like. The measurements may be implemented in that the software 1411, 1431 causes messages to be transmitted, in particular empty or ‘dummy’ messages, using the OTT connection 1450 while it monitors propagation times, errors etc.
FIGURE 15 is a flowchart illustrating a method implemented in a communication system, in accordance with one embodiment. The communication system includes a host computer, a base station and a UE which may be those described with reference to FIGURES 13 and 14. For simplicity of the present disclosure, only drawing references to FIGURE 15 will be included in this section. In a first step 1510 of the method, the host computer provides user data. In an optional substep 1511 of the first step 1510, the host computer provides the user data by executing a host application. In a second step 1520, the host computer initiates a transmission carrying the user data to the UE. In an optional third step 1530, the base station transmits to the UE the user data which was carried in the transmission that the host computer initiated, in accordance with the teachings of the embodiments described throughout this disclosure. In an optional fourth step 1540, the UE executes a client application associated with the host application executed by the host computer.
FIGURE 16 is a flowchart illustrating a method implemented in a communication system, in accordance with one embodiment. The communication system includes a host computer, a base station and a UE which may be those described with reference to FIGURES 13 and 14. For simplicity of the present disclosure, only drawing references to FIGURE 16 will be included in this section. In a first step 1610 of the method, the host computer provides user data. In an optional substep (not shown) the host computer provides the user data by executing a host application. In a second step 1620, the host computer initiates a transmission carrying the user data to the UE. The transmission may pass via the base station, in accordance with the teachings of
the embodiments described throughout this disclosure. In an optional third step 1630, the UE receives the user data carried in the transmission.
FIGURE 17 is a flowchart illustrating a method implemented in a communication system, in accordance with one embodiment. The communication system includes a host computer, a base station and a UE which may be those described with reference to FIGURES 13 and 14. For simplicity of the present disclosure, only drawing references to FIGURE 17 will be included in this section. In an optional first step 1710 of the method, the UE receives input data provided by the host computer. Additionally, or alternatively, in an optional second step 1720, the UE provides user data. In an optional substep 1725 of the second step 1720, the UE provides the user data by executing a client application. In a further optional substep 1715 of the first step 1710, the UE executes a client application which provides the user data in reaction to the received input data provided by the host computer. In providing the user data, the executed client application may further consider user input received from the user. Regardless of the specific manner in which the user data was provided, the UE initiates, in an optional third substep 1730, transmission of the user data to the host computer. In a fourth step 1740 of the method, the host computer receives the user data transmitted from the UE, in accordance with the teachings of the embodiments described throughout this disclosure.
FIGURE 18 is a flowchart illustrating a method implemented in a communication system, in accordance with one embodiment. The communication system includes a host computer, a base station and a UE which may be those described with reference to FIGURES 13 and 14. For simplicity of the present disclosure, only drawing references to FIGURE 18 will be included in this section. In an optional first step 1810 of the method, in accordance with the teachings of the embodiments described throughout this disclosure, the base station receives user data from the UE. In an optional second step 1820, the base station initiates transmission of the received user data to the host computer. In a third step 1830, the host computer receives the user data carried in the transmission initiated by the base station.
ABBREVIATIONS
At least some of the following abbreviations may be used in this disclosure. If there is an inconsistency between abbreviations, preference should be given to how it is used above. If listed multiple times below, the first listing should be preferred over any subsequent listing(s).
AQM Active Queue Management
PPOV Per Packet Operator Value (a synonym for PPV)
PPV Per Packet Value (a synonym for PPOV)
PV Packet Value
HIN Input histogram
HDROP Drop histogram
L4S Low Latency, Low Loss, Scalable Throughput.
1xRTT CDMA2000 1x Radio Transmission Technology
3GPP 3rd Generation Partnership Project
5G 5th Generation
ABS Almost Blank Subframe
ARQ Automatic Repeat Request
AWGN Additive White Gaussian Noise
BCCH Broadcast Control Channel
BCH Broadcast Channel
CA Carrier Aggregation
CC Carrier Component
CCCH SDU Common Control Channel SDU
CDMA Code Division Multiplexing Access
CGI Cell Global Identifier
CIR Channel Impulse Response
CP Cyclic Prefix
CPICH Common Pilot Channel
CPICH Ec/No CPICH Received energy per chip divided by the power density in the band
CQI Channel Quality information
C-RNTI Cell RNTI
CSI Channel State Information
DCCH Dedicated Control Channel
DL Downlink
DM Demodulation
DMRS Demodulation Reference Signal
DRX Discontinuous Reception
DTX Discontinuous Transmission
DTCH Dedicated Traffic Channel
DUT Device Under Test
E-CID Enhanced Cell-ID (positioning method)
E-SMLC Evolved-Serving Mobile Location Centre
ECGI Evolved CGI
eNB E-UTRAN NodeB
ePDCCH enhanced Physical Downlink Control Channel
E-SMLC evolved Serving Mobile Location Center
E-UTRA Evolved UTRA
E-UTRAN Evolved UTRAN
FDD Frequency Division Duplex
FFS For Further Study
GERAN GSM EDGE Radio Access Network
gNB Base station in NR
GNSS Global Navigation Satellite System
GSM Global System for Mobile communication
HARQ Hybrid Automatic Repeat Request
HO Handover
HSPA High Speed Packet Access
HRPD High Rate Packet Data
LOS Line of Sight
LPP LTE Positioning Protocol
LTE Long-Term Evolution
MAC Medium Access Control
MBMS Multimedia Broadcast Multicast Services
MBSFN Multimedia Broadcast multicast service Single Frequency Network
MBSFN ABS MBSFN Almost Blank Subframe
MDT Minimization of Drive Tests
MIB Master Information Block
MME Mobility Management Entity
MSC Mobile Switching Center
NPDCCH Narrowband Physical Downlink Control Channel
NR New Radio
OCNG OFDMA Channel Noise Generator
OFDM Orthogonal Frequency Division Multiplexing
OFDMA Orthogonal Frequency Division Multiple Access
OSS Operations Support System
OTDOA Observed Time Difference of Arrival
O&M Operation and Maintenance
PBCH Physical Broadcast Channel
P-CCPCH Primary Common Control Physical Channel
PCell Primary Cell
PCFICH Physical Control Format Indicator Channel
PDCCH Physical Downlink Control Channel
PDP Power Delay Profile
PDSCH Physical Downlink Shared Channel
PGW Packet Gateway
PHICH Physical Hybrid-ARQ Indicator Channel
PLMN Public Land Mobile Network
PMI Precoder Matrix Indicator
PRACH Physical Random Access Channel
PRS Positioning Reference Signal
PSS Primary Synchronization Signal
PUCCH Physical Uplink Control Channel
PUSCH Physical Uplink Shared Channel
RACH Random Access Channel
QAM Quadrature Amplitude Modulation
RAN Radio Access Network
RAT Radio Access Technology
RLM Radio Link Management
RNC Radio Network Controller
RNTI Radio Network Temporary Identifier
RRC Radio Resource Control
RRM Radio Resource Management
RS Reference Signal
RSCP Received Signal Code Power
RSRP Reference Symbol Received Power OR
Reference Signal Received Power
RSRQ Reference Signal Received Quality OR
Reference Symbol Received Quality
RSSI Received Signal Strength Indicator
RSTD Reference Signal Time Difference
SCH Synchronization Channel
SCell Secondary Cell
SDU Service Data Unit
SFN System Frame Number
SGW Serving Gateway
SI System Information
SIB System Information Block
SNR Signal to Noise Ratio
SON Self Optimized Network
SS Synchronization Signal
SSS Secondary Synchronization Signal
TDD Time Division Duplex
TDOA Time Difference of Arrival
TOA Time of Arrival
TSS Tertiary Synchronization Signal
TTI Transmission Time Interval
UE User Equipment
UL Uplink
UMTS Universal Mobile Telecommunication System
USIM Universal Subscriber Identity Module
UTDOA Uplink Time Difference of Arrival
UTRA Universal Terrestrial Radio Access
UTRAN Universal Terrestrial Radio Access Network
WCDMA Wideband CDMA
WLAN Wireless Local Area Network

Claims

1. A method by a node for handling packets in a communication network, the method comprising:
maintaining a data structure tracking packet value distribution of packets in a queue;
determining a congestion threshold value, CTV;
when dequeuing a first packet from the queue, determining that the first packet’s corresponding per packet value, PPV, is not marked to be dropped in the data structure and determining whether the PPV for the first packet is less than the CTV; and
performing one of:
if the PPV for the first packet is not less than the CTV, serving the first packet;
if the PPV for the first packet is less than the CTV and the first packet is marked for ECN-Capable Transport, ECT, serving the first packet; and
if the PPV for the first packet is less than the CTV and the first packet is not marked for ECT, dropping the first packet.
2. The method of Claim 1, wherein the CTV is determined based on the data structure and a byte limit threshold on the queue length.
3. The method of any one of Claims 1 to 2, further comprising periodically updating the CTV.
4. The method of Claim 3, wherein updating the CTV comprises:
determining an update timer has expired since a previous update of the CTV; and
summing a number of bytes from at least one highest PPV until either all bytes are counted or a limit is reached.
5. The method of Claim 3, wherein updating the CTV comprises:
determining an update timer has expired since a previous update of the CTV; and
updating the CTV based on whether a number of bytes is greater than a byte limit threshold.
6. The method of any one of Claims 1 to 5, further comprising determining that one or more packets in the queue are not responsive.
7. The method of any one of Claims 1 to 6, wherein the CTV is determined based on the data structure and a byte limit threshold.
8. The method of any one of Claims 1 to 7, wherein the data structure comprises a first histogram of first bytes of packets to be processed, and wherein the first bytes of packets are distributed per packet values.
9. The method of any of Claims 1 to 8, wherein the data structure further comprises a second histogram of second bytes of packets to be dropped, and wherein the second bytes of packets are distributed per packet values.
10. The method of any one of Claims 1 to 9, further comprising:
determining admission of a second packet to the queue based on a length of the second packet, wherein when the admission of the second packet would cause the queue to become full, the admission is further based on the per packet value of the second packet and the data structure tracking packet value distribution of packets in the queue.
11. The method of Claim 10, wherein determining the admission comprises:
when the per packet value is higher than a minimum packet value in the first histogram,
moving as many bytes with the minimum packet value as needed from the first histogram to the second histogram to accommodate the second packet.
12. The method of any one of Claims 10 to 11, wherein the queue is determined to be full when a sum of packet lengths corresponding to packet values in the first histogram is over a threshold.
13. The method of any one of Claims 10 to 11, wherein determining the admission comprises:
when the packet value is equal to or lower than a minimum packet value in the first histogram, denying the second packet from admitting to the queue.
14. The method of any one of Claims 10 to 11, further comprising updating the data structure based on determining the admission of the second packet.
15. The method of any one of Claims 1 to 14, further comprising updating the data structure based on dropping or serving the first packet.
16. The method of any one of Claims 1 to 15, further comprising:
removing the first packet from the queue upon the first packet being served.
17. The method of any one of Claims 1 to 16, wherein the PPV indicates drop precedence of the first packet.
18. The method of any one of Claims 1 to 17, wherein the PPV is embedded within the first packet.
19. The method of any one of Claims 1 to 17, wherein the PPV is mapped to the first packet in the node.
20. The method of any one of Claims 1 to 19, wherein the queue is a first-in and first-out queue for the packets.
21. A computer program comprising instructions which when executed on a computer perform any of the methods of Claims 1 to 20.
22. A computer program product comprising a computer program, the computer program comprising instructions which when executed on a computer perform any of the methods of Claims 1 to 20.
23. A non-transitory computer readable medium storing instructions which when executed by a computer perform any of the methods of Claims 1 to 20.
24. A node comprising:
memory operable to store instructions; and
processing circuitry operable to execute the instructions to cause the node to: maintain a data structure tracking packet value distribution of packets in a queue;
determine a congestion threshold value (CTV);
when dequeuing a first packet from the queue, determine that the first packet’s corresponding per packet value (PPV) is not marked to be dropped in the data structure and determine whether the PPV for the first packet is less than the CTV; and perform one of:
if the PPV for the first packet is not less than the CTV, serve the first packet;
if the PPV for the first packet is less than the CTV and the first packet is marked for ECN-Capable Transport (ECT), serve the first packet; and
if the PPV for the first packet is less than the CTV and the first packet is not marked for ECT, drop the first packet.
25. The node of Claim 24, wherein the CTV is determined based on the data structure and a byte limit threshold on the queue length.
26. The node of any one of Claims 24 to 25, wherein the processing circuitry is further operable to execute the instructions to cause the node to periodically update the CTV.
27. The node of Claim 26, wherein, when updating the CTV, the processing circuitry is operable to execute the instructions to cause the node to:
determine an update timer has expired since a previous update of the CTV; and sum a number of bytes from at least one highest PPV until either all bytes are counted or a limit is reached.
28. The node of Claim 26, wherein, when updating the CTV, the processing circuitry is operable to execute the instructions to cause the node to:
determine an update timer has expired since a previous update of the CTV; and update the CTV based on whether a number of bytes is greater than a byte limit threshold.
29. The node of any one of Claims 24 to 28, wherein the processing circuitry is further operable to execute the instructions to cause the node to determine that one or more packets in the queue are not responsive.
30. The node of any one of Claims 24 to 29, wherein the CTV is determined based on the data structure and a byte limit on the queue length.
31. The node of any one of Claims 24 to 30, wherein the data structure comprises a first histogram of first bytes of packets to be processed, and wherein the first bytes of packets are distributed per packet values.
32. The node of any of Claims 24 to 31, wherein the data structure further comprises a second histogram of second bytes of packets to be dropped, and wherein the second bytes of packets are distributed per packet values.
33. The node of any one of Claims 24 to 32, wherein the processing circuitry is further operable to execute the instructions to cause the node to:
determine admission of a second packet to the queue based on a length of the second packet, wherein when the admission of the second packet would cause the queue to become full, the admission is further based on the per packet value of the second packet and the data structure tracking packet value distribution of packets in the queue.
34. The node of Claim 33, wherein, when determining the admission, the processing circuitry is further operable to execute the instructions to cause the node to: when the per packet value is higher than a minimum packet value in the first histogram,
move as many bytes with the minimum packet value as needed from the first histogram to the second histogram to accommodate the first packet.
35. The node of any one of Claims 33 to 34, wherein the queue is determined to be full when a sum of packet lengths corresponding to packet values in the first histogram is over a threshold.
36. The node of any one of Claims 33 to 34, wherein determining the admission comprises:
when the per packet value is equal to or lower than a minimum packet value in the first histogram, denying the second packet admission to the queue.
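Building on the hypothetical PvHistograms sketch above, the admission behaviour of Claims 33 to 36 could then look as follows; the displacement loop is one possible reading of Claim 34:

    def try_enqueue(h: PvHistograms, ppv: int, length: int, byte_limit: int) -> bool:
        """Admit or deny an arriving packet (sketch of Claims 33 to 36)."""
        if h.queued_bytes() + length <= byte_limit:
            h.serve[ppv] += length      # queue not full: admit unconditionally
            return True
        if ppv <= h.min_ppv():
            return False                # Claim 36: equal or lower PPV is denied
        # Claim 34: push lowest-value bytes to the drop histogram to make room.
        need = h.queued_bytes() + length - byte_limit
        while need > 0 and h.queued_bytes() > 0:
            v = h.min_ppv()
            moved = min(h.serve[v], need)
            h.serve[v] -= moved
            h.drop[v] += moved          # dropped later, at dequeue time
            need -= moved
        h.serve[ppv] += length
        return True

The increments to h.serve and h.drop also keep the tracked distribution current on every admission decision, in the manner of Claim 37.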
37. The node of any one of Claims 33 to 34, wherein the processing circuitry is further operable to execute the instructions to cause the node to update the data structure based on determining the admission of the second packet.
38. The node of any one of Claims 24 to 37, wherein the processing circuitry is further operable to execute the instructions to cause the node to update the data structure based on dropping or serving the first packet.
39. The node of any one of Claims 24 to 38, wherein the processing circuitry is further operable to execute the instructions to cause the node to:
remove the first packet from the queue upon the first packet being served.
40. The node of any one of Claims 24 to 39, wherein the PPV indicates drop precedence of the first packet.
41. The node of any one of Claims 24 to 39, wherein the PPV is embedded within the first packet.
42. The node of any one of Claims 24 to 41, wherein the PPV is mapped to the first packet in the node.
43. The node of any one of Claims 24 to 42, wherein the queue is a first-in and first-out queue for the packets.
PCT/IB2019/053048 2018-04-23 2019-04-12 Core-stateless ecn for l4s WO2019207403A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201862661177P 2018-04-23 2018-04-23
US62/661,177 2018-04-23

Publications (1)

Publication Number Publication Date
WO2019207403A1 true WO2019207403A1 (en) 2019-10-31

Family

ID=66647434

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2019/053048 WO2019207403A1 (en) 2018-04-23 2019-04-12 Core-stateless ecn for l4s

Country Status (1)

Country Link
WO (1) WO2019207403A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112887218A (en) * 2020-12-22 2021-06-01 New H3C Technologies Co., Ltd. Message forwarding method and device
WO2021213711A1 (en) * 2020-04-24 2021-10-28 Telefonaktiebolaget Lm Ericsson (Publ) Virtual dual queue core stateless active queue management (aqm) for communication networks

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9112786B2 (en) * 2002-01-17 2015-08-18 Juniper Networks, Inc. Systems and methods for selectively performing explicit congestion notification
WO2013085437A1 (en) 2011-12-05 2013-06-13 Telefonaktiebolaget L M Ericsson (Publ) A method and arrangements for scheduling wireless resources in a wireless network
EP2663037A1 (en) 2012-05-08 2013-11-13 Telefonaktiebolaget L M Ericsson (PUBL) Multi-level Bearer Profiling in Transport Networks
US20160105369A1 (en) 2013-05-23 2016-04-14 Telefonaktiebolaget L M Ericsson Transmitting node, receiving node and methods therein

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
B. BRISCOE, K. DE SCHEPPER, M. BAGNULO BRAUN: "Low Latency, Low Loss, Scalable Throughput (L4S) Internet Service: Architecture", 22 March 2018 (2018-03-22), Retrieved from the Internet <URL:https://tools.ietf.org/html/draft-ietf-tsvwg-l4s-arch-02>
NADAS, GOMBOS, HUDOBA AND LAKI: "Towards a Congestion Control-Independent Core-Stateless AQM", PROCEEDINGS OF THE APPLIED NETWORKING RESEARCH WORKSHOP, ANRW '18, 16 July 2018 (2018-07-16), New York, New York, USA, pages 84-90, XP055596902, ISBN: 978-1-4503-5585-8, DOI: 10.1145/3232755.3232777 *
S. NADAS, Z. R. TURANYI, S. LAKI, G. GOMBOS: "Take your own share of the PIE", PROCEEDINGS OF ACM APPLIED NETWORKING RESEARCH WORKSHOP, July 2017 (2017-07-01)
S. NADAS, Z. R. TURANYI, S. RACZ: "Per packet value: A practical concept for network resource sharing", IEEE GLOBECOM, 2016
WANG MIN, JONAS PETTERSSON, YLVA TIMNER, STEFAN WANSTEDT, MAGNUS HURD: "Efficient QoS over LTE - a Scheduler Centric Approach", IEEE 23RD INTERNATIONAL SYMPOSIUM ON PERSONAL INDOOR AND MOBILE RADIO COMMUNICATIONS (PIMRC), 2012

Similar Documents

Publication Publication Date Title
KR102056196B1 (en) Method for performing a re-establishment of a pdcp entity associated with um rlc entity in wireless communication system and a device therefor
US10462700B2 (en) Method for performing reflective quality of service (QOS) in wireless communication system and a device therefor
US20190044880A1 (en) Method for handling state variables of a pdcp entity in wireless communication system and a device therefor
EP3603168B1 (en) Method for transmitting lossless data packet based on quality of service (qos) framework in wireless communication system and a device therefor
EP3586489B1 (en) Methods and network elements for multi-connectivity control
US10470097B2 (en) Method for performing a handover procedure in a communication system and device therefor
JP2017507619A (en) Techniques for dynamically partitioning bearers among various radio access technologies (RATs)
EP3610590A1 (en) Distributed scheduling algorithm for cpri over ethernet
López-Pérez et al. Long term evolution-wireless local area network aggregation flow control
KR20150123926A (en) Method and system for parallelizing packet processing in wireless communication
WO2018000220A1 (en) Data transmission method, apparatus and system
US11096167B2 (en) Method for transmitting a MAC CE in different TTI durations in wireless communication system and a device therefor
AU2020255012B2 (en) Communication method and communications apparatus
TW201924272A (en) URLLC transmissions with polar codes
CN115668850A (en) System and method for determining TCI status for multiple transmission opportunities
WO2019207403A1 (en) Core-stateless ecn for l4s
WO2018167254A1 (en) Unique qos marking ranges for smfs in a 5g communications network
WO2018031081A1 (en) Systems and methods for packet data convergence protocol sequencing
WO2016141213A1 (en) Opportunistic access of millimeterwave radio access technology based on edge cloud mobile proxy
US20180014313A1 (en) Method for transmitting data in a communication system and device therefor
KR20120012865A (en) Method and apparatus for allocating resource of base station in mobile communication system
CN116458201A (en) inter-gNB carrier aggregation based on congestion control
US20240113975A1 (en) Managing a Delay of Network Segments in an End-To-End Communication Path
RU2801116C2 (en) Communication device and communication method
ES2951320T3 (en) System and method for multipath transmission

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19726485

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19726485

Country of ref document: EP

Kind code of ref document: A1