CN118199826A - System and method for managing Transmission Control Protocol (TCP) acknowledgements - Google Patents


Publication number: CN118199826A
Authority: CN (China)
Legal status: Pending (assumed; not a legal conclusion)
Application number: CN202311716505.3A
Other languages: Chinese (zh)
Inventors: M. Kugler, H. J. Stephan, M. A. Scully, V. Venkataraman
Current Assignee: Apple Inc
Original Assignee: Apple Inc
Priority claimed from US 18/080,182 (external-priority patent US 11882051 B2)
Application filed by Apple Inc
Publication of CN118199826A

Classifications

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The present disclosure relates to systems and methods for managing Transmission Control Protocol (TCP) acknowledgements. A client device in a wireless network accesses a queue that includes transmission control protocol acknowledgement (TCP ACK) packets. At least some of the packets include a packet descriptor having a flow identifier and a TCP ACK generation count, the flow identifier indicating a corresponding TCP flow. The device examines the packet descriptor of the first TCP ACK packet and identifies a first flow identifier and a first TCP ACK generation count. The device accesses entries in a data structure, each entry including a first field and a second field storing a flow identifier and a TCP ACK generation count, respectively. The device determines that a condition is satisfied, the condition comprising the data structure including an entry whose flow identifier and TCP ACK generation count match the first flow identifier and the first TCP ACK generation count, respectively. In response to the determination, the device marks the first TCP ACK packet as to be discarded.

Description

System and method for managing Transmission Control Protocol (TCP) acknowledgements
Technical Field
The following disclosure relates generally to communication technology, and in particular to systems, methods, and apparatus for Transmission Control Protocol (TCP) Acknowledgement (ACK) transmission in a communication network.
Background
TCP is a communication protocol that facilitates the exchange of messages between computing devices on a network, such as the exchange of application data between a client device and a server device connected by one or more network connections. TCP is designed to ensure reliable, ordered, and error-checked delivery of data sent in TCP packets. TCP uses Acknowledgement (ACK) packets for reliable transmission.
Disclosure of Invention
The present disclosure describes systems, devices, and methods directed to managing transmission of TCP acknowledgement packets (referred to as TCP ACK packets). In some implementations, the disclosed systems, devices, and methods manage TCP ACK packets sent from a client device to another network device, such as an Application Server (AS), in response to receiving TCP data and control packets at the client device. In some implementations, baseband (BB) circuitry in the client device examines TCP ACK packets queued for transmission to the AS and discards duplicate TCP ACK packets (e.g., TCP ACK packets corresponding to the same flow with the same sequence number) that the BB circuitry determines to be redundant. In some implementations, the BB circuitry determines that a TCP ACK packet is redundant when the packet is a duplicate of one or more other TCP ACK packets and is stamped with the same counter value (generated by a TCP application at the client device) as those other TCP ACK packets. In some implementations, the client device is an electronic device in a wireless communication network that connects to the AS through one or more network connections. For example, in some implementations, the client device is a User Equipment (UE) in a 3rd Generation Partnership Project (3GPP) mobile wireless communication network that connects to an AS using a 3GPP network. In such cases, the UE manages transmission of TCP ACK packets in the uplink direction.
TCP provides a reliable stream delivery service in which the receiver of a TCP packet responds to the sender with a TCP ACK message when the packet is received. For reliable transmission, TCP uses sequence numbers to identify each data byte. The sequence numbers identify the order in which bytes are sent from each computer so that the data can be reconstructed in order, regardless of any packet reordering or packet loss that may occur during transmission. The receiver sends a TCP ACK along with a sequence number to inform the sender of the bytes of data received. The sequence number associated with the TCP ACK is cumulative, acknowledging all data bytes received prior to that sequence number. In some cases, a TCP packet may be lost during transmission, and the receiver may receive a TCP packet whose sequence number is discontinuous with the sequence number chain. In such a case, upon detecting the interruption of the sequence number chain, the receiver sends a duplicate ACK carrying the latest sequence number before the interruption. The duplicate ACK serves as a signal of packet loss, triggering the sender to retransmit the last unacknowledged packet. Upon receiving multiple duplicate TCP ACK packets, the sender retransmits the missing data packet.
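The cumulative-ACK behavior above can be illustrated with a toy receiver. This is a hedged sketch, not the patented mechanism: the class and method names are invented, and unlike a real TCP stack this model does not buffer out-of-order segments.

```python
class DuplicateAckReceiver:
    """Toy TCP receiver illustrating cumulative and duplicate ACKs.

    Simplification: out-of-order segments are not buffered, unlike a
    real TCP stack, so only the cumulative ACK point is tracked.
    """

    def __init__(self, initial_seq):
        self.next_expected = initial_seq  # next in-order byte expected
        self.acks_sent = []               # cumulative ACK numbers emitted

    def on_segment(self, seq, length):
        if seq == self.next_expected:
            # In-order segment: advance the cumulative ACK point.
            self.next_expected += length
        # For an out-of-order segment, next_expected is unchanged, so
        # the emitted ACK repeats the previous one -- a duplicate ACK
        # signaling the gap to the sender.
        self.acks_sent.append(self.next_expected)
        return self.next_expected

rx = DuplicateAckReceiver(initial_seq=1000)
rx.on_segment(1000, 100)  # packet #1 in order    -> ACK 1100
rx.on_segment(1200, 100)  # packet #3 arrives early -> duplicate ACK 1100
rx.on_segment(1100, 100)  # packet #2 fills the gap -> ACK 1200
```

The second call repeats ACK 1100 because the cumulative ACK point cannot advance past the gap left by the missing packet #2.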
In some implementations, duplicate TCP ACKs are part of a failure recovery mechanism that ensures TCP protocol reliability. A duplicate acknowledgement is sent when the client device notices a gap in a series of packets or receives an out-of-order data packet. For example, if the client device receives the sequence data packet #1, data packet #3, data packet #2 instead of data packet #1, data packet #2, data packet #3, the client device starts sending duplicate TCP ACKs upon receiving packet #3 so that the server can start the fast retransmission process.
In some implementations, the TCP protocol uses duplicate ACKs and timer expiration to retransmit lost data packets. Duplicate ACKs are used as part of fast retransmission and data packet recovery, notifying the server of a loss before a retransmission timeout occurs. Since the server does not know whether duplicate TCP ACKs were received due to a lost data packet or simply due to reordering of data packets, the server waits for a small number of duplicate TCP ACKs to arrive. If multiple duplicate TCP ACKs are received consecutively, this strongly indicates that a data packet has been lost.
In some implementations, a server that sends a series of data packets to a client device is allowed to send up to a predetermined number of unacknowledged data packets before receiving a TCP ACK packet acknowledging successful receipt of those packets. Once the server has sent the predetermined number of unacknowledged data packets, it must stop sending further sequential packets in the series and/or retransmit one or more previously sent data packets until it receives a TCP ACK from the client device. Thus, if the client device is delayed in sending TCP ACK packets acknowledging receipt of data packets sent by the server, the throughput of the communication session decreases because the server stops sending data packets.
In some cases, a higher-latency connection between the receiver and the sender can produce a large number of duplicate TCP ACK packets when data packets are lost. For example, for a few lost data packets, a high-latency connection may observe tens or hundreds of duplicate TCP ACK packets, which can increase congestion and round-trip time, thereby reducing the data throughput of the communication session.
Various implementations disclosed herein optimize a TCP connection (also referred to interchangeably as a TCP stream or TCP flow) by discarding duplicate or redundant TCP ACK packets at the receiver (e.g., the client device) before the TCP ACK packets are sent to the sender (e.g., the AS). In some implementations, within a TCP uplink flow (e.g., from the client device to the AS), the client device is configured to discard one or more older TCP ACK packets awaiting processing in an output queue at the client device in response to detecting a newer duplicate TCP ACK packet (e.g., a TCP ACK packet with the same sequence number at the head of the output queue).
In some implementations, the client device queues TCP Uplink (UL) packets, including TCP ACK packets, in a memory coupled to the client device until the packets can be transmitted to the AS via the network. The TCP layer within the 3GPP protocol stack of the client device checks the queue, e.g., during UL delay as the queue grows, to decide whether some TCP ACK packets are redundant (e.g., have the same sequence number as one or more other TCP ACK packets) and may be discarded before transmission. Discarding redundant TCP ACK packets is also referred to herein as traffic reduction.
In a general aspect, a client device in a wireless network manages transmission control protocol acknowledgement (TCP ACK) packet transmissions by, in response to receiving a TCP packet from another device in the wireless network, accessing a queue in memory that includes TCP ACK packets to be transmitted to the other device. At least a subset of the TCP ACK packets in the queue include respective packet descriptors, each having a flow identifier indicating a TCP flow associated with the packet and a TCP ACK generation count. The client device examines a packet descriptor of a first TCP ACK packet of the TCP ACK packets in the queue and identifies a first flow identifier and a first TCP ACK generation count corresponding to the first TCP ACK packet. Upon determining that the first flow identifier and the first TCP ACK generation count are valid, the client device accesses entries in a data structure in memory, where each entry includes a first field storing the flow identifier and a second field storing the corresponding TCP ACK generation count. The client device determines that a condition is satisfied, wherein the condition includes the data structure including a first entry having a flow identifier and a TCP ACK generation count that match the first flow identifier and the first TCP ACK generation count, respectively. In response to determining that the condition is met, the client device marks the first TCP ACK packet as to be discarded.
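The descriptor check described in this aspect can be sketched as follows. This is an illustrative assumption, not the patented baseband implementation: the dictionary-based table, the field names `flow_id` and `gen_count`, and the newest-first walk through the queue are all invented for the example.

```python
def filter_tcp_acks(queue):
    """Mark older duplicate TCP ACKs for discard.

    queue: list of packet descriptors (dicts with optional 'flow_id'
    and 'gen_count' keys), ordered oldest first.
    Returns the set of queue indices marked as to-be-discarded.
    """
    table = {}         # flow identifier -> last seen generation count
    to_discard = set()
    # Walk from the newest packet toward the oldest so that, for a given
    # flow and generation count, the newest ACK survives and the older
    # duplicates are the ones marked for discard.
    for idx in range(len(queue) - 1, -1, -1):
        desc = queue[idx]
        flow_id = desc.get("flow_id")
        gen = desc.get("gen_count")
        if flow_id is None or gen is None:
            continue  # invalid descriptor: abort further processing
        if table.get(flow_id) == gen:
            # Matching entry exists: duplicate within one generation.
            to_discard.add(idx)
        else:
            # New flow or new generation: create/update the entry.
            table[flow_id] = gen
    return to_discard

queue = [
    {"flow_id": 7, "gen_count": 3},  # older duplicate -> discarded
    {"flow_id": 7, "gen_count": 3},  # newest ACK for flow 7 -> kept
    {"flow_id": 9, "gen_count": 1},  # only ACK for flow 9 -> kept
]
marked = filter_tcp_acks(queue)  # -> {0}
```

Only the oldest of the two flow-7 ACKs is marked, matching the rule that a packet is discarded when an existing entry already holds its flow identifier and generation count.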
Particular implementations include one or more of the following features.
In some implementations, the condition further includes a churn rate of the lower layer queue being below a threshold.
In some implementations, the TCP flow includes uplink data in a plurality of queues. The condition further includes that the first TCP ACK packet corresponds to uplink data in a particular queue of the plurality of queues. The particular queue may have a high priority or a low priority.
In some implementations, the TCP flow includes uplink data transmitted in a plurality of Data Radio Bearers (DRBs). The condition further includes that the first TCP ACK packet corresponds to uplink data transmitted in one or more DRBs. The one or more DRBs may be default DRBs on an internet Packet Data Network (PDN). At least one DRB of the one or more DRBs may have a throughput within a given range. The one or more DRBs may also be bi-directional DRBs.
In some implementations, the TCP flow includes uplink data corresponding to a plurality of Packet Data Networks (PDNs). The condition further includes that the first TCP ACK packet corresponds to uplink data transmitted in one or more given PDNs.
In some implementations, the condition further includes the client device operating in a known power mode.
In some implementations, the TCP flow includes a plurality of Internet Protocol (IP) packet flows. The condition further includes that the first TCP ACK packet corresponds to uplink data transmitted in one or more given IP packet streams.
In some implementations, the condition further includes the client device detecting downlink TCP data at baseband.
In some implementations, the first entry in the data structure corresponds to a second TCP ACK packet in the queue, wherein the first TCP ACK packet is generated before the second TCP ACK packet, and wherein a position of the first TCP ACK packet in the queue precedes a position of the second TCP ACK packet in the queue. In some implementations, the queue is checked starting from the latest packet.
In some implementations, the packet descriptors are assigned by a TCP application processor included in the client device and the packet descriptors are checked by baseband processor circuitry included in the client device. In some implementations, the application processor assigns a first ACK generation count value corresponding to a first flow identifier to the plurality of TCP ACK packets generated in a first time interval and assigns a second ACK generation count value corresponding to a second flow identifier to the plurality of TCP ACK packets generated in a second time interval different from the first time interval, the first ACK generation count value being different from the second ACK generation count value. In some implementations, the application processor controls the rate at which TCP ACK packets are discarded from the queue by controlling the duration of at least one of the first time interval or the second time interval.
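The interval-based stamping described above can be sketched as a small helper on the application-processor side. The class name, the tunable `interval` parameter, and the millisecond values are illustrative assumptions; lengthening the interval makes more ACKs share a generation count and thus become discard candidates at the baseband.

```python
class AckStamper:
    """Toy generation-count stamper: the count advances once per
    time interval, so all ACKs generated within one interval share
    the same generation count (interval length is the knob that
    controls how aggressively duplicates may later be discarded)."""

    def __init__(self, interval):
        self.interval = interval       # seconds per generation
        self.gen_count = 0
        self.interval_start = 0.0

    def stamp(self, now):
        # Advance the generation count when the interval elapses.
        if now - self.interval_start >= self.interval:
            self.gen_count += 1
            self.interval_start = now
        return self.gen_count

stamper = AckStamper(interval=0.010)  # 10 ms generations (assumed value)
counts = [stamper.stamp(t) for t in (0.000, 0.004, 0.008, 0.012, 0.025)]
# ACKs at 0, 4, and 8 ms share count 0; the later ACKs get counts 1 and 2.
```

With a longer interval, the first four ACKs would all share one count, increasing the pool of discardable duplicates, which mirrors how the application processor controls the discard rate via the interval duration.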
In some implementations, the client device examines the packet descriptor of a third TCP ACK packet in the queue and identifies a third TCP ACK generation count and a third flow identifier corresponding to the third TCP ACK packet included in the packet descriptor of the third TCP ACK packet. The client device determines that the third TCP ACK generation count is set to an invalid value. In response to the determination, the client device aborts further processing of the third TCP ACK packet. In some implementations, determining that the TCP ACK generation count of the third TCP ACK packet is set to an invalid value includes determining one of: the TCP ACK generation count of the third TCP ACK packet is null; or the TCP ACK generation count of the third TCP ACK packet is set to a predetermined invalid value.
In some implementations, the client device examines the packet descriptor of a third TCP ACK packet in the queue and identifies a third TCP ACK generation count and a third flow identifier corresponding to the third TCP ACK packet included in the packet descriptor of the third TCP ACK packet. The client device determines that the third flow identifier is set to an invalid value. In response to the determination, the client device aborts further processing of the third TCP ACK packet. In some implementations, determining that the third flow identifier of the third TCP ACK packet is set to an invalid value includes determining one of: no flow identifier is assigned to the third TCP ACK packet; or the flow identifier of the third TCP ACK packet is set to a predetermined invalid value.
In some implementations, the client device examines the packet descriptor of a fourth TCP ACK packet of the TCP ACK packets in the queue and identifies a fourth TCP ACK generation count and a fourth flow identifier corresponding to the fourth TCP ACK packet included in the packet descriptor of the fourth TCP ACK packet. The client device determines that the fourth TCP ACK generation count and the fourth flow identifier are valid. The client device determines (i) that the fourth flow identifier is the same as the first flow identifier in the first entry in the data structure, and (ii) that the fourth TCP ACK generation count is different from the first TCP ACK generation count in the first entry in the data structure. In response to the determination, the client device updates the first entry by replacing the first TCP ACK generation count stored in the second field of the first entry with the fourth TCP ACK generation count.
In some implementations, the client device examines the packet descriptor of a fifth TCP ACK packet of the TCP ACK packets in the queue and identifies a fifth TCP ACK generation count and a fifth flow identifier corresponding to the fifth TCP ACK packet included in the packet descriptor of the fifth TCP ACK packet. The client device determines that the fifth TCP ACK generation count and the fifth flow identifier are valid and that the data structure does not include an entry corresponding to the fifth flow identifier. In response to the determination, the client device creates a third entry in the data structure and stores the fifth flow identifier and the fifth TCP ACK generation count in the third entry.
In some implementations, the received TCP packet includes one or more of application control information and application data.
In some implementations, one or more additional TCP ACK packets are included in the queue, the one or more additional TCP ACK packets having a packet descriptor without a TCP ACK generation count field.
In some implementations, one or more entries in the data structure include a hash value representing a flow identifier, and determining that the data structure includes a first entry storing the first flow identifier and the first TCP ACK generation count includes: performing a hash function on the first flow identifier to obtain a first hash value, the first hash value being represented by fewer bits than are used to represent the first flow identifier; comparing the first hash value with the hash values included in the one or more entries in the data structure to determine if there is a match; and in response to the comparison, determining that the hash value included in the first entry matches the first hash value.
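The hash-based lookup above can be sketched as follows. The choice of CRC32 truncated to 16 bits, the byte-string flow identifier, and all names are assumptions for illustration; the patent does not specify a particular hash function or width.

```python
import zlib

def short_hash(flow_id: bytes) -> int:
    # Map a (longer) flow identifier to a 16-bit value so the table
    # entry stores fewer bits than the full identifier.
    return zlib.crc32(flow_id) & 0xFFFF

# Hypothetical table entry keyed by the short hash of a flow identifier.
entries = [
    {"flow_hash": short_hash(b"10.0.0.2:443->10.0.0.9:51004"),
     "gen_count": 5},
]

def find_entry(flow_id: bytes):
    h = short_hash(flow_id)
    # Compare short hashes instead of full identifiers. Two distinct
    # flows could collide on the same 16-bit value; tolerating that is
    # the trade-off for the smaller entry size.
    return next((e for e in entries if e["flow_hash"] == h), None)

match = find_entry(b"10.0.0.2:443->10.0.0.9:51004")
```

A lookup for the same flow identifier always reproduces the same hash value, so `match` is the stored entry and its generation count can then be compared against the packet's.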
In another general aspect, a method performed by a client device in a wireless network for transmission of a TCP ACK packet includes: in response to receiving a TCP packet from another network device in the wireless network, accessing a queue in a memory coupled to the client device that includes TCP ACK packets to be transmitted to the other network device, wherein at least a subset of the TCP ACK packets include respective packet descriptors, each packet descriptor including (i) a flow identifier indicating a TCP flow associated with the packet and (ii) a TCP ACK generation count; checking a packet descriptor of a first TCP ACK packet among the TCP ACK packets in the queue; identifying a first flow identifier and a first TCP ACK generation count corresponding to the first TCP ACK packet included in the packet descriptor of the first TCP ACK packet; determining that the first flow identifier and the first TCP ACK generation count are valid; accessing a data structure having one or more entries in the memory coupled to the client device, each entry including a flow identifier and a corresponding TCP ACK generation count; determining that a condition is satisfied, wherein the condition includes the data structure including a first entry, the first entry including (i) a flow identifier that matches the first flow identifier and (ii) a TCP ACK generation count that matches a first TCP ACK generation count, the first entry further storing a second location field corresponding to a location of a second TCP ACK packet in the queue; responsive to the determination, storing a location of the first TCP ACK packet in a first location field in the first entry; exchanging the positions of the first and second TCP ACK packets in the queue based at least on the first and second position fields such that the first TCP ACK packet moves to a position in the queue previously occupied by the second TCP ACK packet and the second TCP ACK packet moves to a position in the queue 
previously occupied by the first TCP ACK packet; and discarding the first TCP ACK packet from the queue.
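The exchange-then-discard step of this aspect can be sketched with plain list indices standing in for the stored location fields. This is an illustrative assumption about the queue representation, not the patented implementation.

```python
def swap_and_discard(queue, first_pos, second_pos):
    """Swap the older duplicate (at first_pos) with the newer ACK of the
    same flow/generation (at second_pos), then discard the duplicate.

    After the swap, the surviving newer ACK occupies the earlier queue
    position, so it is transmitted sooner; the duplicate, now at
    second_pos, is removed from the queue.
    """
    queue[first_pos], queue[second_pos] = queue[second_pos], queue[first_pos]
    del queue[second_pos]  # the duplicate now sits here; drop it
    return queue

queue = ["ack_old_dup", "ack_other", "ack_new"]
result = swap_and_discard(queue, first_pos=0, second_pos=2)
# -> ["ack_new", "ack_other"]: the newer ACK inherits the earlier slot.
```

The net effect matches the claim: the first (older, duplicate) TCP ACK packet is discarded, while the second (newer) packet moves up to the position the duplicate previously occupied.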
Particular implementations include one or more of the following features. In some implementations, the first entry in the data structure corresponds to a second TCP ACK packet in the queue, and wherein the first TCP ACK packet is generated before the second TCP ACK packet, and wherein a position of the first TCP ACK packet in the queue precedes a position of the second TCP ACK packet in the queue.
In some implementations, the queue is checked starting from the latest packet.
In some implementations, the packet descriptors are assigned by a TCP application processor included in the client device and the packet descriptors are checked by baseband processor circuitry included in the client device. In some implementations, the application processor assigns a first ACK generation count value corresponding to a first flow identifier to the plurality of TCP ACK packets generated in a first time interval and assigns a second ACK generation count value corresponding to a second flow identifier to the plurality of TCP ACK packets generated in a second time interval different from the first time interval, the first ACK generation count value being different from the second ACK generation count value. In some implementations, the application processor controls the rate at which TCP ACK packets are discarded from the queue by controlling the duration of at least one of the first time interval or the second time interval.
In some implementations, the method further includes: checking a packet descriptor of a third TCP ACK packet of the TCP ACK packets in the queue; identifying a third TCP ACK generation count and a third flow identifier corresponding to the third TCP ACK packet included in the packet descriptor of the third TCP ACK packet; determining that the third TCP ACK generation count is set to an invalid value; and in response to the determination, aborting further processing of the third TCP ACK packet. In some implementations, determining that the TCP ACK generation count of the third TCP ACK packet is set to an invalid value includes determining one of: the TCP ACK generation count of the third TCP ACK packet is null; or the TCP ACK generation count of the third TCP ACK packet is set to a predetermined invalid value.
In some implementations, the method further includes: checking a packet descriptor of a third TCP ACK packet of the TCP ACK packets in the queue; identifying a third TCP ACK generation count and a third flow identifier corresponding to the third TCP ACK packet included in the packet descriptor of the third TCP ACK packet; determining that the third flow identifier is set to an invalid value; and in response to the determination, aborting further processing of the third TCP ACK packet. In some implementations, determining that the third flow identifier of the third TCP ACK packet is set to an invalid value includes determining one of: no flow identifier is assigned to the third TCP ACK packet; or the flow identifier of the third TCP ACK packet is set to a predetermined invalid value.
In some implementations, the method further includes: checking a packet descriptor of a fourth TCP ACK packet among the TCP ACK packets in the queue; identifying a fourth TCP ACK generation count and a fourth flow identifier corresponding to the fourth TCP ACK packet included in the packet descriptor of the fourth TCP ACK packet; determining that the fourth TCP ACK generation count and the fourth flow identifier are valid; determining that (i) the fourth flow identifier is the same as the first flow identifier in the first entry in the data structure, and (ii) the fourth TCP ACK generation count is different from the first TCP ACK generation count in the first entry in the data structure; and in response to the determination, updating the first entry by replacing the first TCP ACK generation count stored in the first entry with the fourth TCP ACK generation count.
In some implementations, the method further includes: checking a packet descriptor of a fifth TCP ACK packet of the TCP ACK packets in the queue; identifying a fifth TCP ACK generation count and a fifth flow identifier corresponding to the fifth TCP ACK packet included in the packet descriptor of the fifth TCP ACK packet; determining that the fifth TCP ACK generation count and the fifth flow identifier are valid; determining that the data structure does not include an entry corresponding to the fifth flow identifier; and in response to the determination, creating a third entry in the data structure and storing the fifth flow identifier and the fifth TCP ACK generation count in the third entry.
In some implementations, the received TCP packet includes one or more of application control information and application data.
In some implementations, one or more additional TCP ACK packets are included in the queue, the one or more additional TCP ACK packets having a packet descriptor without a TCP ACK generation count field.
In some implementations, one or more entries in the data structure include a hash value representing a flow identifier, and determining that the data structure includes a first entry storing the first flow identifier and the first TCP ACK generation count includes: performing a hash function on the first flow identifier to obtain a first hash value, the first hash value being represented by fewer bits than are used to represent the first flow identifier; comparing the first hash value with the hash values included in the one or more entries in the data structure to determine if there is a match; and in response to the comparison, determining that the hash value included in the first entry matches the first hash value.
In another general aspect, a method performed by a TCP application processor in a client device in a wireless network for TCP ACK packet transmission includes: generating TCP ACK packets for transmission to a remote device in the wireless network, each TCP ACK packet including a packet descriptor; storing, in respective packet descriptors of at least a subset of the TCP ACK packets, (i) a flow identifier indicating a TCP flow associated with the packet and (ii) a TCP ACK generation count; and forwarding the TCP ACK packets, including the subset of TCP ACK packets, to a baseband processor included in the client device.
Particular implementations include one or more of the following features. In some implementations, the application processor assigns a first ACK generation count value to the plurality of TCP ACK packets generated in a first time interval and assigns a second ACK generation count value to the plurality of TCP ACK packets generated in a second time interval different from the first time interval, the first ACK generation count value being different from the second ACK generation count value. In some implementations, the application processor controls the number of TCP ACK packets to which the first ACK generation count value or the second ACK generation count value is allocated by controlling a duration of at least one of the first time interval or the second time interval.
In some implementations, the method further includes: determining, by the application processor, that the first TCP ACK packet includes at least one of additional header information or TCP payload information; and in response to the determination, setting an ACK generation count value in a packet descriptor of the first TCP ACK packet to a predetermined invalid value.
In some implementations, the method further includes: determining, by the application processor, that the first TCP ACK packet includes at least one of additional header information or TCP payload information; and in response to the determination, forwarding the first TCP ACK packet to the baseband processor without the TCP ACK generation count value.
In some implementations, the method further includes: determining, by the application processor, that the first TCP ACK packet includes at least one of additional header information or TCP payload information; and in response to the determination, setting a flow identifier in a packet descriptor of the first TCP ACK packet to a predetermined invalid value.
In some implementations, the method further includes: determining, by the application processor, that the first TCP ACK packet includes at least one of additional header information or TCP payload information; and in response to the determination, forwarding the first TCP ACK packet to the baseband processor without the flow identifier.
Eliminating redundant TCP ACK packets in the UL as disclosed in these implementations results in: less packet processing and savings in power consumption at subsequent processing entities; a reduction in the amount of data to be transmitted, or reuse of the saved bandwidth by other packet data services; reduced delay for all subsequent UL packets (after the discarded packets); and reduced TCP RTT (round-trip time), which ultimately yields a faster ramp-up of TCP throughput and thus higher end-to-end throughput. This can result in faster and more efficient packet data transmission compared to conventional TCP methods.
Implementations of the above techniques include methods, apparatus, and computer program products. Such a computer program product is suitably embodied in one or more non-transitory machine-readable media that store instructions that, when executed by one or more processors, are configured to cause the one or more processors to perform the acts described above. One such apparatus includes processing circuitry for executing instructions to perform the actions described above. The instructions may be stored in a memory coupled to the device. In some implementations, the apparatus is a baseband processor for a client device (e.g., UE) in a wireless network.
Drawings
Fig. 1 illustrates a block diagram of a communication system in accordance with some disclosed implementations.
Fig. 2 illustrates optimization of TCP ACK packets in accordance with some disclosed implementations.
Fig. 3 illustrates a flow chart of an exemplary process for managing TCP ACK packets in accordance with some disclosed implementations.
Fig. 4 illustrates optimization of TCP ACK packet flows in accordance with some disclosed implementations.
Fig. 5 illustrates a flow chart of a second exemplary process for managing TCP ACK packets in accordance with some disclosed implementations.
Fig. 6 illustrates a data structure for managing TCP ACK packets in accordance with some disclosed implementations.
Fig. 7 illustrates a second data structure for managing TCP ACK packets by reordering in accordance with some disclosed implementations.
Figs. 8A-8G each illustrate exemplary conditions for turning TCP ACK optimization on or off in accordance with some disclosed implementations.
Fig. 9 illustrates a block diagram of a communication device in accordance with some disclosed implementations.
Fig. 10 illustrates a block diagram of a communication device in accordance with some disclosed implementations.
Fig. 11 illustrates a 3GPP protocol stack in accordance with some disclosed implementations.
Fig. 12 illustrates a block diagram of a communication system in accordance with some disclosed implementations.
Detailed Description
TCP is a network communication protocol that enables reliable data exchange between two host devices (e.g., a client device and a server such as an AS) over a communication network (e.g., a 3GPP wireless communication network). TCP is a connection-oriented protocol; the hardware and software operations performed by a host device implementing the TCP protocol (commonly referred to as TCP in this disclosure) in the protocol stack are responsible for establishing a connection with another host device and maintaining the connection for data transmission. In the following description, without loss of generality, the disclosed TCP improvements are described with respect to a client device (e.g., UE) and a server (e.g., AS) communicating over a 3GPP wireless network. It should be appreciated that the disclosed techniques are equally applicable to TCP connections between other types of host devices, in other types of networks, or both.
TCP uses a mechanism called the three-way handshake to establish a connection between a server and a client device. The mechanism is a three-step process that requires both the client device and the server to exchange synchronization and acknowledgement packets before the actual data packets can be exchanged. In a three-way handshake, a synchronization (SYN) message is used to initiate and establish a connection. The SYN message also helps synchronize sequence numbers between the devices. As an illustrative example, a client device requests a connection by sending a SYN message to a server. The server acknowledges by sending a SYN-ACK (synchronization-acknowledgement) message back to the client. The client device responds with an ACK message, and the connection is established.
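The three-step exchange described above can be sketched as follows; the function name and the initial sequence numbers (ISNs) are illustrative assumptions, not part of the disclosure.

```python
# Minimal sketch of the TCP three-way handshake described above.
# Each exchanged message is represented as a (message, seq, ack) triple.

def three_way_handshake(client_isn, server_isn):
    """Return the messages exchanged while establishing a TCP connection."""
    return [
        # Step 1: the client requests a connection with a SYN carrying its ISN.
        ("SYN", client_isn, None),
        # Step 2: the server answers with SYN-ACK: its own ISN, ack = client ISN + 1.
        ("SYN-ACK", server_isn, client_isn + 1),
        # Step 3: the client responds with ACK, and the connection is established.
        ("ACK", client_isn + 1, server_isn + 1),
    ]
```

Each side acknowledges the other's ISN by replying with that ISN incremented by one, which is how the sequence numbers become synchronized.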
When the server sends a TCP data packet (also simply referred to as a data packet) to the client device, the client device sends a TCP ACK packet indicating receipt of the TCP data packet. The client device selects an initial sequence number, set in the first SYN packet. The server also selects its own initial sequence number, set in the SYN-ACK packet. Each side acknowledges the other's sequence number by incrementing it; this incremented value is the acknowledgement number. The use of sequence numbers and acknowledgement numbers allows both sides to detect missing, lost, or out-of-order data packets. For example, when the server sends a TCP data packet to the client device, the client device acknowledges the TCP data packet by responding with a TCP ACK packet.
In some implementations, the client device generates a TCP ACK in response to receiving the TCP data packet and temporarily stores the TCP ACK in an output queue maintained by the client device until the TCP ACK is transmittable to the server. One or more additional TCP ACK packets may also be waiting to be sent to the server. Sometimes duplicate TCP ACKs are added to the output queue.
In some implementations, the available Downlink (DL) bandwidth is greater than the available Uplink (UL) bandwidth. In some cases, the constraint on available UL bandwidth prevents the client device from transmitting TCP ACK packets at the same rate at which TCP data packets are received from the server. During a period of UL delay, a large number of TCP ACKs may accumulate in the output queue. The server may therefore stall its transmission of data packets because TCP ACK packets are delayed, resulting in degraded throughput.
Implementations disclosed herein provide for optimization of uplink TCP flows by discarding duplicate TCP ACK packets in the output queue of a client device. As described in detail in the following sections, in some implementations, BB circuitry (also referred to as a baseband unit, BBU) in the client device is configured to discard one or more of the older TCP ACK packets pending in the output queue in response to detecting a newer duplicate TCP ACK packet. In some other implementations, the BB circuitry is configured to discard one or more of the newer TCP ACK packets pending in the output queue in response to detecting an older duplicate TCP ACK packet. In the following sections, these techniques are described with respect to implementations in which one or more older duplicate TCP ACK packets are discarded. However, it should be appreciated that these techniques are equally applicable to implementations in which one or more newer duplicate TCP ACK packets are discarded.
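As a rough illustration of the first variant (discarding older duplicates in favor of a newer one), the sketch below prunes an output queue; representing each queued packet as a (flow, ACK number) tuple is an assumption for illustration only, not the disclosed packet format.

```python
def drop_older_duplicates(queue):
    """Prune an output queue (ordered oldest-first) of TCP ACK packets,
    keeping only the newest packet for each (flow, ACK number) pair and
    discarding the older duplicates, as in the first variant above."""
    newest = {}
    for index, packet in enumerate(queue):
        newest[packet] = index  # later occurrences overwrite earlier ones
    return [pkt for index, pkt in enumerate(queue) if newest[pkt] == index]
```

Non-duplicate packets keep their relative order, so the surviving ACKs are still transmitted in sequence.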
The client device uses a counter, called the TCP ACK generation count, to keep track of TCP ACK packets that are potential duplicates. In some implementations, a TCP application executing in the client device (on what is also referred to as the application processor, AP) generates a TCP ACK generation count value (also referred to as an ACK Gen Count) and stamps TCP ACK packets with the counter value before the packets are sent to the BBU circuitry for uplink transmission. In some implementations, the AP increments the counter every n milliseconds (where n is a positive integer). The AP stamps the TCP ACK packets within a given n-millisecond interval with the same counter value generated for that time interval. In some implementations, n is set to a predetermined value, e.g., as a design parameter. In other implementations, the TCP AP 105 is configured to dynamically modify the value of n at runtime, for example, according to the current throughput.
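The interval-based stamping can be expressed compactly; the function name and the use of a millisecond timestamp are illustrative assumptions.

```python
def ack_gen_count(timestamp_ms, n_ms):
    """TCP ACK generation count for an ACK produced at `timestamp_ms`.

    The AP increments the counter every n milliseconds, so every TCP ACK
    generated within the same n-ms interval is stamped with the same value.
    """
    return timestamp_ms // n_ms
```

For example, with n = 4 ms, ACKs generated at t = 0, 1, and 3 ms all carry count 0, while an ACK generated at t = 4 ms carries count 1.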
Upon receiving a TCP ACK packet from the AP, the BBU temporarily stores the packet in the output queue until UL transmission. In some implementations, the BBU uses the counter value to decide whether some TCP ACK packets in the output queue are redundant and can be discarded. This may occur, for example, when the queue length grows large during a UL transmission delay. By using the counter value, the BBU ensures that at least one TCP ACK packet is sent to the server in the UL direction over the 3GPP network within a given time period (e.g., n milliseconds), so that the TCP connection remains stable even when some TCP ACK packets are discarded. The server starts retransmitting data packets after receiving the first duplicate TCP ACK packet.
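The per-flow bookkeeping described above (and in the data structure of the abstract, with one entry per flow holding a flow identifier and a generation count) can be sketched as follows; the class name and dictionary representation are assumptions, not the disclosed implementation.

```python
class AckFilter:
    """Sketch of the BBU-side redundancy check: a TCP ACK is treated as
    redundant if an ACK with the same flow identifier and the same TCP ACK
    generation count has already been seen, so at least one ACK per flow
    survives each n-millisecond interval."""

    def __init__(self):
        self.seen = {}  # flow identifier -> last TCP ACK generation count

    def should_discard(self, flow_id, gen_count):
        if self.seen.get(flow_id) == gen_count:
            return True   # duplicate within the same interval: mark to be discarded
        self.seen[flow_id] = gen_count
        return False      # first ACK of this interval for the flow: keep it
```

Because the count advances every n milliseconds, the first ACK carrying a new count always passes the filter, keeping the TCP connection alive.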
Fig. 1 illustrates an exemplary wireless communication system 100 implementing the disclosed TCP ACK management techniques. The system 100 includes a client device 102, an Access Point (AP) 104, a Radio Access Network (RAN) 112, a Core Network (CN) 108, and an Application Server (AS) 110.
For convenience, but not limitation, the system 100 is described in the context of Long Term Evolution (LTE) and fifth generation (5G) New Radio (NR) communication standards as defined by the 3GPP technical specifications. More specifically, the wireless communication system 100 is described in the context of Non-Standalone (NSA) networks that combine both LTE and NR, such as E-UTRA (Evolved Universal Terrestrial Radio Access)-NR Dual Connectivity (EN-DC) networks and NE-DC networks. However, the system 100 may also be a Standalone (SA) network that incorporates only NR. The system 100 may also implement other types of communication standards, including future 3GPP systems (e.g., sixth generation (6G) systems), IEEE 802.16 protocols (e.g., WMAN, WiMAX, etc.), and so forth.
In some implementations, the client device 102 is a UE (and may alternatively be referred to herein as UE 102). Although a single client device 102 is shown, it should be understood that the system 100 may include multiple client devices, and that the disclosed techniques are equally applicable to these client devices. The client device or UE 102 may be any suitable type of mobile or non-mobile computing device, such as a consumer electronics device, cellular telephone, smart phone, feature phone, tablet computer, wearable computer device, Personal Digital Assistant (PDA), pager, wireless handheld device, desktop computer, laptop computer, in-vehicle infotainment (IVI) device, in-car entertainment (ICE) device, instrument cluster (IC), heads-up display (HUD) device, on-board diagnostic (OBD) device, dashtop mobile equipment (DME), Mobile Data Terminal (MDT), Electronic Engine Management System (EEMS), electronic/Engine Control Unit (ECU), electronic/Engine Control Module (ECM), embedded system, microcontroller, control module, Engine Management System (EMS), networked or "smart" appliance, Machine Type Communication (MTC) device, machine-to-machine (M2M) device, Internet of Things (IoT) device, or a combination thereof, or the like.
In some implementations, the client device 102 is an internet of things (IoT) client device that may include a network access layer designed for low-power IoT applications that utilize short-term client device connections. IoT client devices may utilize technologies such as machine-to-machine (M2M) communication or Machine Type Communication (MTC) to exchange data with MTC servers or devices using, for example, public Land Mobile Networks (PLMNs), proximity services (proses), device-to-device (D2D) communication, sensor networks, ioT networks, or combinations thereof, or the like. The M2M or MTC data exchange may be a machine-initiated data exchange. IoT networks describe interconnected IoT client devices that may include uniquely identifiable embedded computing devices (within the internet infrastructure) with short-term connections. The IoT client device may execute a background application (e.g., keep-alive message, status update, etc.) to facilitate connection of the IoT network.
As shown, client device 102 includes baseband (BB) circuitry 103 (also referred to as BBU 103), a TCP Application Processor (AP) 105, a database 106, and an output queue 107. In some implementations, the client device 102 includes one or more processors configured to execute instructions stored in a memory (e.g., storage memory) coupled to the client device to perform various functions, such as the programs, methods, functions discussed herein. These functions include operations performed by the baseband circuit 103 and the TCP AP 105, which are described below.
The network 108 may be embodied as any network that supports communication between two networked devices, such as between the client device 102 and the application server 110. The network 108 may be embodied, for example, as a wired network (e.g., an Ethernet network, a wired local area network, a fiber optic network, a wired network maintained by a telephone/wired service provider, some combination thereof, etc.), a wireless network (e.g., a cellular network, a wireless local area network, a wireless wide area network, some combination thereof, etc.), or a combination thereof, and may include the Internet in some example implementations.
In some implementations, the client device 102 is connected to an Access Network (AN) or a Radio Access Network (RAN) 112. In some examples, RAN 112 may be a Next Generation RAN (NG RAN), an Evolved UMTS Terrestrial Radio Access Network (E-UTRAN), or a legacy RAN, such as a UMTS Terrestrial Radio Access Network (UTRAN) or a GSM EDGE Radio Access Network (GERAN). As used herein, the term "NG RAN" or the like may refer to a RAN operating in the 5G NR system 100, while the term "E-UTRAN" or the like may refer to a RAN operating in the LTE or 4G system 100. Client device 102 utilizes connection (or channel) 109, which comprises a physical communication interface or layer.
In this example, connection 109 is shown as an air interface implementing a communicative coupling, and may be consistent with cellular communication protocols, such as GSM protocols, CDMA network protocols, PTT protocols, POC protocols, UMTS protocols, 3GPP LTE protocols, long term evolution-advanced (LTE-a) protocols, LTE-based unlicensed spectrum access (LTE-U), 5G protocols, NR-based unlicensed spectrum access (NR-U) protocols, and/or any other suitable wireless communication protocols.
As shown, client device 102 is connected to an Access Point (AP) 104 (also referred to as a "WLAN node," "WLAN terminal," "WT," etc.) via a connection 107. Connection 107 may comprise a local wireless connection, such as a connection consistent with any IEEE 802.11 protocol, where the AP would comprise a wireless fidelity (Wi-Fi) router. In various implementations, client device 102, RAN 112, and AP 104 may be configured to operate with LWA and/or LWIP. LWA operations may involve configuring, by RAN nodes 112a-b, client device 102 in an RRC_CONNECTED state to utilize radio resources of LTE and WLAN. LWIP operations may involve client device 102 using WLAN radio resources (e.g., connection 107) to authenticate and encrypt packets (e.g., IP packets) sent over connection 107 via IPsec protocol tunneling. IPsec tunneling may involve encapsulating the entire original IP packet and adding a new packet header, thereby protecting the original header of the IP packet.
RAN 112 may include one or more AN nodes or RAN nodes 112a and 112b (collectively, "RAN nodes 112a-b") that enable connection 109. As used herein, the terms "access node," "access point," and the like may describe equipment that provides radio baseband functionality for data and/or voice connections between a network and one or more users. These access nodes may be referred to as BS, gNB, RAN node, eNB, NodeB, RSU, TRxP, TRP, or the like, and may include ground stations (e.g., terrestrial access points) or satellite stations that provide coverage within a geographic area (e.g., cell). As used herein, the term "NG RAN node" or the like may refer to a RAN node 112 (e.g., a gNB) operating in an NR or 5G system 100, while the term "E-UTRAN node" or the like may refer to a RAN node 112 (e.g., an eNB) operating in an LTE or 4G system. According to various implementations, RAN nodes 112a-b may be implemented as one or more of dedicated physical devices such as macrocell base stations and/or Low Power (LP) base stations for providing femtocells, picocells, or other similar cells with smaller coverage areas, smaller user capacities, or higher bandwidths than macrocells.
In some implementations, all or a portion of RAN nodes 112a-b may be implemented as one or more software entities running on a server computer as part of a virtual network that may be referred to as a CRAN and/or virtual baseband unit pool (vBBUP). In these implementations, CRAN or vBBUP may implement RAN functionality partitioning, such as PDCP partitioning, where RRC and PDCP layers are operated by CRAN/vBBUP, while other L2 protocol entities are operated by respective RAN nodes 112 a-b; MAC/PHY partitioning, wherein RRC, PDCP, RLC and MAC layers are operated by CRAN/vBBUP, and PHY layers are operated by respective RAN nodes 112 a-b; or "lower PHY" split, where RRC, PDCP, RLC, MAC layers and upper portions of the PHY layers are operated by CRAN/vBBUP and lower portions of the PHY layers are operated by the respective RAN nodes 112 a-b. The virtualization framework allows idle processor cores of RAN nodes 112a-b to execute other virtualized applications. In some implementations, each RAN node 112a-b may represent a respective gNB-DU connected to the gNB-CU via a respective F1 interface (not shown). In these implementations, the gNB-DU may include one or more remote radio heads or RFEMs, and the gNB-CU may be operated by a server (not shown) located in RAN 112 or by a server pool in a similar manner as CRANs/vBBUP. Additionally or alternatively, one or more of the RAN nodes 112a-b may be a next generation eNB (NG-eNB), which is a RAN node providing E-UTRA user plane and control plane protocol terminals to the client device 102 and connected to the 5GC via an NG interface (discussed below).
In a vehicle-to-everything (V2X) scenario, one or more of the RAN nodes 112a-b may be or act as a Road Side Unit (RSU). The term "road side unit" or "RSU" may refer to any traffic infrastructure entity for V2X communication. The RSU may be implemented in or by a suitable RAN node or stationary (or relatively stationary) UE, wherein the RSU implemented in or by the UE may be referred to as a "UE-type RSU", the RSU implemented in or by the eNB may be referred to as an "eNB-type RSU", the RSU implemented in or by the gNB may be referred to as a "gNB-type RSU", etc. In one example, the RSU is a computing device coupled with radio frequency circuitry located on the road side that provides connectivity support to passing vehicle client devices. The RSU may also include internal data storage circuitry for storing intersection map geometry, traffic statistics, media, and applications/software for sensing and controlling ongoing vehicle and pedestrian traffic. The RSU may operate over the 5.9GHz Direct Short Range Communication (DSRC) band to provide very low latency communications required for high speed events, such as crashes, traffic warnings, and the like. Additionally or alternatively, the RSU may operate on the cellular V2X frequency band to provide the aforementioned low-delay communications, as well as other cellular communication services. Additionally or alternatively, the RSU may operate as a Wi-Fi hotspot (2.4 GHz band) and/or provide connectivity to one or more cellular networks to provide uplink and downlink communications. Some or all of the radio frequency circuitry of the computing device and RSU may be enclosed in a weather resistant enclosure suitable for outdoor installation, and may include a network interface controller to provide wired connections (e.g., ethernet) with traffic signal controllers and/or backhaul networks.
Any of the RAN nodes 112a-b may be the end point of the air interface protocol and may be the first point of contact for the client device 102. In some implementations, any of RAN nodes 112a-b may perform various logical functions of RAN 112a-b, including but not limited to Radio Network Controller (RNC) functions such as radio bearer management, uplink and downlink dynamic radio resource management and data packet scheduling, and mobility management.
In implementations, client devices 102 may be configured to communicate with each other, or with any of RAN nodes 112a-b, over a multicarrier communication channel using OFDM communication signals in accordance with various communication techniques such as, but not limited to, OFDMA communication techniques (e.g., for downlink communications) or SC-FDMA communication techniques (e.g., for uplink and ProSe or sidelink communications), although the scope of the implementations is not limited in this respect. The OFDM signals may comprise a plurality of orthogonal subcarriers.
In some implementations, the downlink resource grid may be used for downlink transmissions from any of the RAN nodes 112a-b to the client device 102, while the uplink transmissions may utilize similar techniques. The grid may be a time-frequency grid, referred to as a resource grid or time-frequency resource grid, which is a physical resource in the downlink in each time slot. For OFDM systems, such time-frequency plane representation is common practice, which makes radio resource allocation intuitive. Each column and each row of the resource grid corresponds to one OFDM symbol and one OFDM subcarrier, respectively. The duration of the resource grid in the time domain corresponds to one slot in the radio frame. The smallest time-frequency unit in the resource grid is denoted as a resource element. Each resource grid includes a plurality of resource blocks that describe the mapping of certain physical channels to resource elements. Each resource block includes a set of resource elements; in the frequency domain, this may represent the minimum amount of resources that can be currently allocated. Several different physical downlink channels are transmitted using such resource blocks.
According to various implementations, client device 102 and RAN nodes 112a-b communicate data (e.g., transmit and receive data) over a licensed medium (also referred to as "licensed spectrum" and/or "licensed band") and an unlicensed shared medium (also referred to as "unlicensed spectrum" and/or "unlicensed band"). The licensed spectrum may include channels operating in a frequency range of about 400 MHz to about 3.8 GHz, while the unlicensed spectrum may include the 5 GHz band. NR in the unlicensed spectrum may be referred to as NR-U, and LTE in the unlicensed spectrum may be referred to as LTE-U, Licensed Assisted Access (LAA), or MulteFire.
To operate in the unlicensed spectrum, client device 102 and RAN nodes 112a-b may operate using LAA, eLAA, and/or feLAA mechanisms. In these implementations, client device 102 and RAN nodes 112a-b may perform one or more known media sensing operations and/or carrier sensing operations to determine whether one or more channels in the unlicensed spectrum are unavailable or otherwise occupied before transmission in the unlicensed spectrum. The medium/carrier sensing operation may be performed according to a Listen Before Talk (LBT) protocol.
LBT is a mechanism by which equipment (e.g., client device 102, RAN nodes 112a-b, etc.) senses a medium (e.g., a channel or carrier frequency) and transmits when the medium is sensed to be idle (or when a particular channel in the medium is sensed to be unoccupied). The medium sensing operation may include a CCA that utilizes at least the ED to determine whether other signals are present on the channel in order to determine whether the channel is occupied or idle. The LBT mechanism allows the cellular/LAA network to coexist with existing systems in the unlicensed spectrum and with other LAA networks. The ED may include sensing RF energy over an expected transmission band for a period of time, and comparing the sensed RF energy to a predefined or configured threshold.
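The energy detection (ED) step of CCA described above reduces to a threshold comparison; the threshold value and the sample representation below are illustrative assumptions, not values from the disclosure or any regulation.

```python
def channel_idle(sensed_energy_dbm, ed_threshold_dbm=-72.0):
    """CCA by energy detection (ED), per the text: the channel is treated as
    idle only if the RF energy sensed over the expected transmission band
    stays below a predefined or configured threshold (default illustrative).
    """
    return all(sample < ed_threshold_dbm for sample in sensed_energy_dbm)
```

Under LBT, the equipment transmits only when this check reports the medium as idle; otherwise it defers.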
In general, existing systems in the 5 GHz band are WLANs based on IEEE 802.11 technology. WLANs employ a contention-based channel access mechanism called CSMA/CA. Here, when a WLAN node (e.g., a Mobile Station (MS) such as client device 102, AP 104, etc.) intends to transmit, the WLAN node may first perform CCA prior to transmitting. In addition, in the case where more than one WLAN node senses the channel as idle and transmits simultaneously, a backoff mechanism is used to avoid collisions. The backoff mechanism may be a counter drawn randomly within the CWS, which increases exponentially when a collision occurs and resets to a minimum value when the transmission is successful. The LBT mechanism designed for LAA is somewhat similar to CSMA/CA for WLAN. In some implementations, the LBT procedure for DL or UL transmission bursts (including PDSCH or PUSCH transmissions, respectively) may have an LAA contention window of variable length between X and Y ECCA slots, where X and Y are the minimum and maximum values of the CWS for LAA. In one example, the minimum CWS for an LAA transmission may be 9 microseconds (μs); however, the size of the CWS and the MCOT (e.g., transmission burst) may be based on government regulatory requirements.
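The backoff behavior described above (exponential growth of the contention window on collision, reset on success, counter drawn randomly within the window) can be sketched as follows; the cw_min/cw_max defaults are illustrative 802.11-style values, not taken from the disclosure.

```python
import random

def next_cws(cws, collided, cw_min=15, cw_max=1023):
    """Update the contention window size after a transmission attempt."""
    if collided:
        return min(2 * cws + 1, cw_max)  # grow exponentially on collision
    return cw_min                        # reset to the minimum on success

def draw_backoff(cws):
    """Backoff counter drawn randomly within the current contention window."""
    return random.randint(0, cws)
```

A node counts its backoff down only while the channel is sensed idle, which is what de-synchronizes contending transmitters.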
The LAA mechanism is built on the CA technology of the LTE-Advanced system. In CA, each aggregated carrier is referred to as a CC. One CC may have a bandwidth of 1.4MHz, 3MHz, 5MHz, 10MHz, 15MHz, or 20MHz, and at most five CCs may be aggregated, so that the maximum aggregate bandwidth is 100MHz. In an FDD system, the number of aggregated carriers may be different for DL and UL, where the number of UL CCs is equal to or lower than the number of DL component carriers. In some cases, each CC may have a different bandwidth than other CCs. In a TDD system, the number of CCs and the bandwidth of each CC are typically the same for DL and UL.
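The CA arithmetic above (at most five CCs, each of at most 20 MHz, hence a 100 MHz maximum aggregate) can be checked with a small helper; the function itself is illustrative, not part of the disclosure.

```python
def aggregate_bw_mhz(cc_bandwidths):
    """Total aggregated bandwidth for a set of component carriers, enforcing
    the LTE-Advanced CA limits stated above: at most five CCs, each with one
    of the allowed bandwidths (in MHz)."""
    allowed = {1.4, 3, 5, 10, 15, 20}
    if len(cc_bandwidths) > 5 or any(b not in allowed for b in cc_bandwidths):
        raise ValueError("invalid CA configuration")
    return sum(cc_bandwidths)
```

Aggregating five 20 MHz CCs yields the 100 MHz maximum stated in the text; an FDD UL may use fewer CCs than the DL, so the two directions are checked independently.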
In CA, individual serving cells provide the individual CCs. The coverage of the serving cells may differ, for example, because CCs on different frequency bands experience different path loss. The primary serving cell, or PCell, provides the PCC for both UL and DL and may handle RRC and NAS related activities. The other serving cells are referred to as SCells, and each SCell may provide a separate SCC for both UL and DL. SCCs may be added and removed as needed, while changing the PCC may require client device 102 to undergo a handover. In LAA, eLAA, and feLAA, some or all of the SCells may operate in unlicensed spectrum (referred to as "LAA SCells"), and the LAA SCells are assisted by a PCell operating in licensed spectrum. When the UE is configured with more than one LAA SCell, the UE may receive UL grants on the configured LAA SCells indicating different PUSCH starting positions within the same subframe.
The PDSCH carries user data and higher-layer signaling to the client device 102. The PDCCH carries, among other information, information about the transport format and resource allocations related to the PDSCH channel. The PDCCH may also inform the client device 102 of the transport format, resource allocation, and HARQ information related to the uplink shared channel. In general, downlink scheduling (assigning control and shared channel resource blocks to client devices 102 within a cell) may be performed at any of the RAN nodes 112a-b based on channel quality information fed back from any of the client devices 102. The downlink resource allocation information may be sent on the PDCCH used for (e.g., assigned to) each of the client devices 102.
The PDCCH transmits control information using CCEs. The PDCCH complex-valued symbols may first be organized into quadruplets before being mapped to resource elements, and then may be permuted using a sub-block interleaver for rate matching. Each PDCCH may be transmitted using one or more of these CCEs, where each CCE may correspond to nine sets of four physical resource elements, respectively, referred to as REGs. Four Quadrature Phase Shift Keying (QPSK) symbols may be mapped to each REG. Depending on the size of the DCI and the channel conditions, the PDCCH may be transmitted using one or more CCEs. There may be four or more different PDCCH formats defined in LTE with different numbers of CCEs (e.g., aggregation level, L=1, 2, 4, or 8).
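The CCE/REG sizing stated above implies a simple resource count per aggregation level; the helper below is illustrative.

```python
def pdcch_resource_elements(aggregation_level):
    """Resource elements occupied by one PDCCH: each CCE corresponds to nine
    REGs of four resource elements each (36 REs per CCE), and a PDCCH at
    aggregation level L in {1, 2, 4, 8} uses L CCEs."""
    if aggregation_level not in (1, 2, 4, 8):
        raise ValueError("unsupported aggregation level")
    regs_per_cce, res_per_reg = 9, 4
    return aggregation_level * regs_per_cce * res_per_reg
```

Higher aggregation levels trade capacity for robustness: a larger DCI payload or worse channel conditions call for more CCEs per PDCCH.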
Some implementations may use concepts for resource allocation for control channel information, the concepts of resource allocation being extensions of the concepts described above. For example, some implementations may utilize EPDCCH using PDSCH resources for control information transmission. The EPDCCH may be transmitted using one or more ECCEs. Similar to the above, each ECCE may correspond to nine sets of four physical resource elements, referred to as EREGs. In some cases, ECCEs may have other amounts of EREGs.
RAN nodes 112a-b may be configured to communicate with each other via interface 116. In implementations where the system 100 is an LTE system, the interface 116 may be an X2 interface 116. The X2 interface may be defined between two or more RAN nodes 112a-b (e.g., two or more enbs, etc.) connected to EPC 108 and/or between two enbs connected to EPC 108. In some implementations, the X2 interface may include an X2 user plane interface (X2-U) and an X2 control plane interface (X2-C). The X2-U provides a flow control mechanism for user packets transmitted over the X2 interface and may be used to communicate information regarding the delivery of user data between enbs. For example, X2-U may provide specific sequence number information about user data transmitted from the MeNB to the SeNB; information regarding successful in-sequence delivery of PDCP PDUs from the SeNB to the client device 102 for user data; information of PDCP PDUs not delivered to the client device 102; information about a current minimum expected buffer size at the SeNB for transmitting user data to the UE; etc. The X2-C may provide LTE access mobility functions including context transfer from source eNB to target eNB, user plane transfer control, etc.; a load management function; inter-cell interference coordination function.
In implementations where system 100 is a 5G or NR system, interface 116 may be an Xn interface 116. An Xn interface is defined between two or more RAN nodes 112a-b (e.g., two or more gNBs, etc.) connected to the 5GC 108, between a RAN node 112a-b (e.g., a gNB) connected to the 5GC 108 and an eNB, and/or between two eNBs connected to the 5GC 108. In some implementations, the Xn interface can include an Xn user plane (Xn-U) interface and an Xn control plane (Xn-C) interface. Xn-U may provide non-guaranteed delivery of user plane PDUs and support/provide data forwarding and flow control functions. Xn-C may provide management and error handling functions for managing the Xn-C interface, and mobility support for client devices 102 in CONNECTED mode (e.g., CM-CONNECTED), including functions for managing UE mobility in CONNECTED mode between one or more RAN nodes 112a-b. Mobility support may include context transfer from an old (source) serving RAN node 112a-b to a new (target) serving RAN node 112a-b, and control of user plane tunnels between the old (source) serving RAN node 112a-b and the new (target) serving RAN node 112a-b. The protocol stack of Xn-U may include a transport network layer built on an Internet Protocol (IP) transport layer, and a GTP-U layer on top of the UDP and/or IP layer(s) for carrying user plane PDUs. The Xn-C protocol stack may include an application layer signaling protocol, referred to as the Xn Application Protocol (Xn-AP), and a transport network layer built on SCTP. SCTP may be on top of the IP layer and may provide guaranteed delivery of application layer messages. In the transport IP layer, signaling PDUs are delivered using point-to-point transport. In other implementations, the Xn-U protocol stack and/or the Xn-C protocol stack may be the same as or similar to the user plane and/or control plane protocol stacks shown and described herein.
RAN 112 is shown as being communicatively coupled to a core network, in this implementation, Core Network (CN) 108. CN 108 may include a plurality of network elements 122 configured to provide various data and telecommunications services to customers/subscribers (e.g., users of client devices 102) connected to CN 108 via RAN 112. The components of CN 108 may be implemented in one physical node or in separate physical nodes, including components for reading and executing instructions from a machine-readable or computer-readable medium (e.g., a non-transitory machine-readable storage medium). In some implementations, NFV may be used to virtualize any or all of the above-described network node functions via executable instructions stored in one or more computer-readable storage media (described in further detail below). A logical instantiation of CN 108 may be referred to as a network slice, and a logical instantiation of a portion of CN 108 may be referred to as a network sub-slice. NFV architectures and infrastructures may be used to virtualize one or more network functions onto physical resources comprising a combination of industry-standard server hardware, storage hardware, or switches (alternatively performed by proprietary hardware). In other words, NFV systems may be used to perform virtual or reconfigurable implementations of one or more EPC components/functions.
In some implementations, the Application Server (AS) 110 is a network server that uses IP bearer resources with the core network (e.g., UMTS PS domain, LTE PS data services, etc.). The AS 110 may also be configured to support one or more communication services (e.g., VoIP sessions, PTT sessions, group communication sessions, social networking services, etc.) for the client device 102 via the EPC 108. As described in the following sections, in some implementations, client device 102 establishes a TCP session with AS 110 and uses the optimized TCP protocol disclosed in this specification. In such implementations, TCP AP 105 in client device 102 provides a TCP ACK generation count to enable BB circuitry 103 to track TCP ACK packets that are duplicates. The TCP AP 105 generates a TCP ACK generation count value (ack_gen_count) for a TCP ACK packet and supplies the counter value to the BB circuitry 103. TCP AP 105 also increments the counter every n milliseconds.
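The per-interval counter described above can be sketched as follows. This is a hypothetical illustration only: the patent does not fix the interval length n or the counter width, so the 50 ms default and the 16-bit wraparound are assumptions.

```python
class AckGenCounter:
    """Hypothetical sketch of the TCP AP-side counter: one ACK Gen Count
    value is stamped onto every TCP ACK packet generated in the current
    interval, and the value advances every n milliseconds."""

    def __init__(self, interval_ms=50):  # n = 50 ms is an assumed example value
        self.interval_ms = interval_ms
        self.value = 0

    def tick(self):
        # Invoked by a periodic n-millisecond timer maintained by the TCP AP.
        self.value = (self.value + 1) & 0xFFFF  # assumed 16-bit wraparound

    def stamp(self):
        # Counter value supplied to the BB circuitry with each TCP ACK packet.
        return self.value
```

All TCP ACK packets generated between two ticks therefore carry the same stamp, which is what later lets the BB circuitry treat same-stamp ACKs of a flow as duplicates.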
In implementations, CN 108 may be a 5GC (referred to as "5GC 108" or the like), and RAN 112 may be connected with CN 108 via NG interface 113. In implementations, NG interface 113 may be split into two parts: an NG user plane (NG-U) interface 114, which carries traffic data between RAN nodes 112a-b and the UPF; and an NG control plane (NG-C) interface 115, which is a signaling interface between RAN nodes 112a-b and the AMF.
In implementations, the CN 108 may be a 5G CN (referred to as "5GC 108" or the like), while in other implementations, the CN 108 may be an EPC. Where CN 108 is an EPC (referred to as "EPC 108" or the like), RAN 112 may be connected with CN 108 via S1 interface 113. In implementations, the S1 interface 113 may be split into two parts: an S1 user plane (S1-U) interface 114, which carries traffic data between RAN nodes 112a-b and the S-GW; and an S1-MME interface 115, which is a signaling interface between RAN nodes 112a-b and the MME.
In some implementations, the client device 102 receives data for a communication session from the AS 110 via the CN 108 and the AN 112 on a downlink channel. AS 110 sends the data using TCP, as TCP data packets. Client device 102 acknowledges successful receipt of a TCP data packet by sending an acknowledgement (e.g., a TCP ACK packet) to AS 110. TCP data packets may be identified using sequence numbers. A TCP ACK sent by client device 102 indicates one or more successfully received data packets by including the sequence number of the most recent TCP data packet received in a chain of consecutive sequence numbers, i.e., with no sequence numbers missing due to a corresponding TCP data packet not being received.
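The cumulative acknowledgement rule described above can be illustrated with a small sketch, simplified to whole-packet sequence numbers rather than TCP's byte-level numbering:

```python
def cumulative_ack(received_seqs, start=1):
    """Return the highest sequence number in the unbroken chain beginning
    at `start`; this is the value a cumulative TCP ACK would report.
    Simplified illustration: sequence numbers count whole packets."""
    received = set(received_seqs)
    ack = start - 1
    while ack + 1 in received:
        ack += 1
    return ack
```

For example, if packets 1, 2, 3, and 5 arrive, the ACK reports 3: packet 4 is missing, so packet 5 is not yet covered by the cumulative acknowledgement.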
As noted previously, in some implementations, the client device 102 includes BB circuitry 103. The BB circuitry 103 may be embodied as a hardware circuit, or as a computer program product having computer-readable program instructions stored on a computer-readable medium (e.g., a storage memory) and executed by a processing device (e.g., the baseband circuitry 1010 described with respect to fig. 10), or some combination thereof. As previously described, client device 102 is configured to generate TCP ACK packets and add the TCP ACK packets to an output queue maintained by client device 102 for pending TCP packets awaiting uplink transmission (e.g., awaiting uplink transmission to AN 112, or awaiting transmission to AS 110).
In some implementations, BB circuitry 103 tracks one or more TCP connections or TCP packet flows and maintains detailed information for each tracked TCP connection or flow. BB circuitry 103 keeps track of the most recent TCP ACK information and the timestamp at which the TCP ACK was received, the sequence numbers of the most recent data packets, the timestamps at which the data packets were received, and the number of unacknowledged data packets. In some implementations, BB circuitry 103 stores this information in a memory coupled to client device 102, e.g., in database 106.
TCP packets sent by AS 110 are associated with one or more different communication sessions. In some implementations, the TCP AP 105 uses different TCP flow identifiers (referred to as flow IDs) to identify different communication sessions. In some implementations, the flow ID corresponds to a 5-tuple (source IP address, source TCP/UDP port, destination IP address, destination TCP/UDP port, and IP protocol) or a 3-tuple (source IP address, destination IP address, IP protocol). The flow ID uniquely identifies the flow associated with a TCP data packet and the corresponding TCP ACK packet. TCP AP 105 determines a corresponding flow ID for each TCP data packet received at client device 102. In some implementations, the TCP AP 105 provides different TCP ACK generation count values to associate with the TCP ACK packets of different flows. In other implementations, the TCP AP 105 provides the same TCP ACK generation count value to associate with TCP ACK packets of different flows. In the packet descriptor prepared for a generated TCP ACK packet, the TCP AP 105 includes the flow ID determined for the corresponding TCP data packet and the TCP ACK generation count of the specific flow in the current time interval. The packet descriptors are stored in database 106. Information including, but not limited to, the total packet count, total byte count, and the last time a packet was seen is maintained for each of the flows.
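One way to derive a compact flow ID from the 5-tuple or 3-tuple is sketched below. The hashing scheme and the 16-bit output range are assumptions made for illustration, not the patent's mandated encoding:

```python
import hashlib

def flow_id(src_ip, dst_ip, proto, src_port=None, dst_port=None):
    """Hypothetical flow-ID derivation: hash the 5-tuple, or the 3-tuple
    when ports are unavailable, into an assumed 16-bit range 0..0xFFFF."""
    if src_port is not None and dst_port is not None:
        key = f"{src_ip}|{src_port}|{dst_ip}|{dst_port}|{proto}"  # 5-tuple
    else:
        key = f"{src_ip}|{dst_ip}|{proto}"                        # 3-tuple
    return int.from_bytes(hashlib.sha256(key.encode()).digest()[:2], "big")
```

A real implementation would also need to handle hash collisions or use an allocation table; the sketch only shows that both tuple forms map into a single identifier space.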
In some implementations, the TCP AP 105 does not provide flow IDs for certain TCP packets. This may be, for example, due to external traffic via a tethered connection, or because a packet does not contain all of the fields required to generate a flow ID. In such cases, BB circuitry 103 treats these TCP ACK packets as invalid for traffic reduction and ignores these packets when inspecting the output queue 107 for redundant packets.
In some implementations, BB circuitry 103 tracks the period of time that a TCP ACK packet waits in the output queue before UL transmission, for example, using a timer. If the value of the timer exceeds a predetermined value, indicating that the period of time for which TCP ACK packets are queued is longer than a specified threshold (e.g., due to congestion in the uplink channel), BB circuitry 103 examines the TCP ACK packets in output queue 107 to determine if some TCP ACK packets can be discarded. The threshold may be set to a predetermined period of time. In some implementations, the threshold is set to n milliseconds, e.g., in synchronization with the interval in which the TCP AP 105 updates the ACK Gen Count value. In such implementations, using the ACK Gen Count value ensures that at least one TCP ACK packet for a flow within a particular time period (e.g., n milliseconds (ms)) is transmitted to the network 108 in the UL direction, while the remaining TCP ACK packets for the flow within the particular time period are discarded. In this way, BB circuitry 103 reduces network congestion by removing redundant TCP ACK packets while keeping the TCP connection stable.
Additionally or alternatively, in some implementations, BB circuitry 103 uses, for example, a counter to track the number of TCP ACK packets waiting in the output queue prior to UL transmission. If the value of the counter exceeds a predetermined value, indicating that the number of queued TCP ACK packets is greater than a specified number (e.g., due to congestion in the uplink channel), BB circuitry 103 examines the TCP ACK packets in the queue to determine if some TCP ACK packets can be discarded. The specified number may be equal to the predetermined value, and may be set to any suitable value depending on the implementation. For example, the specified number may be set to 3, 12, 27, or any other suitable number. Thus, BB circuitry 103 may be configured to monitor the number of TCP ACK packets pending in output queue 107 and detect the occurrence of one or more redundant TCP ACK packets of a flow if the number of pending TCP ACK packets reaches a predetermined threshold limit.
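The two triggers above (queue-length and queue-delay) might be combined as in the following sketch. The threshold values and the (enqueue_time, descriptor) queue-entry shape are illustrative assumptions:

```python
import time

def should_optimize(queue, max_queued=12, max_wait_s=0.05, now=None):
    """Sketch of the trigger check: optimize the output queue when it holds
    more than `max_queued` TCP ACK packets, or when the oldest packet has
    waited longer than `max_wait_s`. Each queue entry is assumed to be an
    (enqueue_time, packet_descriptor) pair; thresholds are illustrative."""
    now = time.monotonic() if now is None else now
    if len(queue) > max_queued:
        return True                      # counter threshold exceeded
    if queue and now - queue[0][0] > max_wait_s:
        return True                      # oldest packet has queued too long
    return False
```

In practice `max_wait_s` would be aligned with the n-millisecond ACK Gen Count interval, as described above, so that at least one ACK per flow and interval is still transmitted.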
In some implementations, as described in more detail in the sections below, BB circuitry 103 discards one or more older duplicate TCP ACK packets with a given flow ID and ACK Gen Count when there are newer TCP ACK packets to be processed in output queue 107. In some implementations, after one or more redundant TCP ACK packets have been discarded, at least one most recent TCP ACK packet having the same flow ID and ACK Gen Count remains pending in output queue 107. In some implementations, a TCP ACK packet is identified using a timestamp that indicates the time at which the TCP ACK packet was generated or added to the output queue 107. In some cases, when a TCP ACK packet with the same flow ID and ACK Gen Count but with a more recent timestamp value is also in the output queue, one or more TCP ACK packets with an earlier timestamp are discarded. In some other cases, when a TCP ACK packet with the same flow ID and ACK Gen Count but with an earlier timestamp value is also in the output queue, one or more TCP ACK packets with a more recent timestamp are discarded.
Fig. 2 illustrates optimization of TCP ACK packet flows in accordance with some implementations of the disclosure. FIG. 2 shows a configuration 202 of an output queue prior to optimization; configurations 204 and 206 of output queues during optimization to identify and remove redundant TCP ACK packets; and configuration 208 of the output queue after optimization is complete. In some implementations, the operations described with respect to fig. 2 are performed by BB circuitry 103, and configurations 202-208 correspond to output queue 107.
Each of configurations 202-208 shows an arrangement of TCP ACK packets in an output queue, where each packet is identified by a packet number (e.g., "packet #1"), a flow ID (e.g., "flow a"), and an ACK Gen Count value ("ackgen") associated with the flow (e.g., "ackgen2"). As shown, the queue includes a plurality of TCP ACK packets with different flow IDs and corresponding ACK Gen Count values assigned by TCP AP 105. In some implementations, the TCP ACK packets are arranged in order from oldest to newest packet, with packet #1 being the oldest and packet #13 the newest. The oldest TCP ACK packet (packet #1) in the TCP ACK queue (202) has a flow ID of "flow a" and an ACK Gen Count of "ackgen2", and the newest TCP ACK packet (packet #13) has a flow ID of "flow a" and an ACK Gen Count of "ackgen3".
Configuration 202 shows that, before BB circuitry 103 optimizes the queue, the output queue includes multiple TCP ACK packets with the same flow ID and the same ACK Gen Count. For example, packet #1, packet #3, and packet #9 have a flow ID of "flow a" and an ACK Gen Count of "ackgen2". In some implementations, packet #1, packet #3, and packet #9 are processed by TCP AP 105 within the same time period, such that TCP AP 105 stamps each of these packets with the same ACK Gen Count value. When optimizing output queue 107, BB circuitry 103 identifies one or more of these TCP ACK packets having the same flow ID and ACK Gen Count values as redundant duplicate ACK packets, which are then removed from the queue.
Configuration 202 shows that the output queue also includes multiple TCP ACK packets with the same flow ID but with different ACK Gen Count values, as well as TCP ACK packets with different flow IDs. For example, packets #1 and #10 have the same flow ID, "flow a", but have different ACK Gen Count values, "ackgen2" and "ackgen3", respectively. In some implementations, packet #1 and packet #10 corresponding to "flow a" are processed by TCP AP 105 in different time periods, such that TCP AP 105 stamps these packets with different ACK Gen Count values. As another example, packet #1 and packet #2 have different flow IDs, "flow a" and "flow b", respectively. TCP ACK packets having the same flow ID but different ACK Gen Count values, or having different flow IDs, are not identified as redundant with respect to each other.
In some implementations, the packet descriptors (e.g., flow ID and ACK Gen Count) of the TCP ACK packets in the queue are stored in database 106, e.g., in a data structure such as a table as described with respect to fig. 6. In some implementations, BB circuitry 103 manages the table. Upon inspecting the packet and obtaining the associated flow ID and ACK Gen Count, if no flow ID exists in database 106, BB circuitry 103 records this information in an entry in the database 106 data structure. As described in more detail in the following sections, BB circuitry 103 manages TCP ACK packets in output queue 107 by examining corresponding packet descriptor entries in database 106 to identify redundant TCP ACK packets.
To examine a TCP ACK packet in the output queue, BB circuitry 103 accesses the entry in the database corresponding to the packet and compares the relevant fields of the packet descriptor from the packet with the values stored in entries in database 106 corresponding to other TCP ACK packets in output queue 107. As previously described, in some implementations, when checking TCP ACK packets stored in the output queue for redundancy, BB circuitry 103 first checks whether the ACK Gen Count or flow ID, or both, of the packet are valid. If BB circuitry 103 determines that the ACK Gen Count of the packet currently under inspection is invalid, the BB circuitry marks the database entry corresponding to the TCP ACK packet as invalid for reduction (e.g., not a candidate for removal as a redundant TCP ACK packet) and moves to the next entry in the database, e.g., corresponding to the next packet in the output queue. In some implementations, the TCP AP 105 sets the ACK Gen Count of a packet to invalid when it determines that the TCP ACK packet contains important information. This may occur, for example, for a TCP ACK with additional header information, or for TCP data packets. In some implementations, a TCP ACK packet with a null ACK Gen Count field or a predetermined invalid ACK Gen Count value (e.g., 0x10000) also indicates that the ACK Gen Count is invalid for reduction, and BB circuitry 103 does not consider the associated TCP ACK as a candidate for redundancy and leaves the packet in the output queue. In some implementations, the ACK generation count is valid if its value is within a valid range (e.g., 0…0xFFFF), and invalid otherwise.
In some implementations, the packet descriptor of the TCP ACK packet includes a separate AckGenCount_valid field that indicates whether the ACK Gen Count value of the packet is available. In some implementations, BB circuitry 103 determines the validity of the ACK Gen Count value by checking the AckGenCount_valid field.
Additionally or alternatively, in some implementations, BB circuitry 103 checks the flow ID associated with the inspected packet. If the flow ID indicates an invalid value, BB circuitry 103 marks the database entry corresponding to the TCP ACK packet as invalid for reduction and moves on to the next entry in the database, corresponding to the next packet in the output queue. Once a TCP ACK packet is determined to be invalid due to an invalid ACK Gen Count or an invalid flow ID, or both, BB circuitry 103 ignores the packet for reduction, leaving it in the output queue. In some implementations, the TCP flow ID is valid if its value is within a valid range (e.g., 0…0xFFFF), while the value 0x10000 indicates that the flow ID is invalid.
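The validity rules above can be condensed into a single eligibility test. The dict-based descriptor layout below is a hypothetical stand-in for the packet descriptor fields:

```python
INVALID_SENTINEL = 0x10000  # out-of-range marker; valid values occupy 0..0xFFFF

def eligible_for_reduction(descriptor):
    """Sketch: a TCP ACK packet is a reduction candidate only if both its
    flow ID and ACK Gen Count are present and within the valid 16-bit
    range. `descriptor` is an assumed dict with 'flow_id' and
    'ack_gen_count' keys; None models a null field."""
    for field in ("flow_id", "ack_gen_count"):
        value = descriptor.get(field)
        if value is None or not (0 <= value <= 0xFFFF):
            return False   # invalid: leave the packet in the output queue
    return True
```

Packets failing this test (TCP data packets, ACKs carrying extra header information, tethered traffic without a flow ID) simply remain in the queue untouched.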
In some implementations, BB circuitry 103 determines that two or more TCP ACK packets in the queue have the same flow ID and ACK Gen Count, indicating that one or more of these packets are redundant duplicate ACK packets. For example, configuration 204 shows BB circuitry 103 examining packets in the output queue in order from the newest packet to the oldest packet. For the newest packet (e.g., packet #13), BB circuitry 103 searches the entries in database 106 to determine whether the entry corresponding to any other TCP ACK packet in the queue has the same ACK Gen Count (e.g., "ackgen3") and flow ID (e.g., "flow a"). If BB circuitry 103 determines that there is no matching entry in database 106, the BB circuitry stores the flow ID and ACK Gen Count of the currently examined packet as a new entry in database 106. BB circuitry 103 then continues by examining the next older packet (e.g., packet #12) in the queue. If the BB circuitry 103 determines that there is an older TCP ACK packet (e.g., packet #10) in the queue with the same flow ID, the BB circuitry 103 checks whether the ACK Gen Count values of the two packets (e.g., packet #13 and packet #10) are the same.
In some cases, BB circuitry 103 determines that another packet with the same flow ID also has the same ACK Gen Count value. For example, as shown by association 204a, packet #13 and packet #10 both have a flow ID of "flow a" and an ACK Gen Count of "ackgen3". Upon determining that the older packet and the newer packet have the same flow ID and ACK Gen Count value, BB circuitry 103 determines that the older packet (e.g., packet #10) is a redundant duplicate TCP ACK packet. The BB circuitry 103 leaves the newest packet (e.g., packet #13) in the output queue while discarding the older packet (e.g., packet #10) from the queue as a redundant duplicate TCP ACK packet. In some cases, as shown in configuration 206, BB circuitry 103 marks the redundant packet as a packet to be discarded from the queue at a later time, e.g., during a queue cleanup procedure. For example, upon determining that packet #10 is a redundant duplicate TCP ACK packet relative to packet #13, BB circuitry 103 marks packet #10 (e.g., the corresponding database entry) as a discarded packet, as shown at 206a.
In some implementations, BB circuitry 103 determines that the flow IDs of two TCP ACK packets are the same but that the ACK Gen Count values of the two packets are different, with the older packet's ACK Gen Count stored in the database as the most recent ACK Gen Count associated with the flow ID. In such a case, BB circuitry 103 replaces the ACK Gen Count associated with the flow ID in the database with the ACK Gen Count of the newer TCP ACK packet.
After completing the inspection of a TCP ACK packet, BB circuitry 103 moves on to inspect the next older TCP ACK packet in the output queue, and compares that packet's flow ID and ACK Gen Count with those of the other remaining packets in the queue in a manner similar to that described above. For example, after processing the newest TCP ACK packet, i.e., packet #13, the BB circuitry 103 selects the next TCP ACK packet in the queue, e.g., TCP ACK packet #12. BB circuitry 103 obtains the packet descriptor of TCP ACK packet #12 from database 106. BB circuitry 103 determines that the flow ID of TCP ACK packet #12 matches the flow ID of TCP ACK packet #11, but that the ACK Gen Count values of these two TCP ACK packets are different. This may occur, for example, when the two TCP ACK packets are generated by TCP AP 105 in two different intervals. In this case, the packets are not considered redundant with respect to each other, and neither packet is discarded or removed from the queue, as shown in configuration 206. However, the corresponding database entry for the flow (e.g., "flow c") is updated with the most recent ACK Gen Count value (e.g., "ackgen88") corresponding to the most recent TCP ACK packet for the flow.
BB circuitry 103 continues to examine output queue 107, moving back from newer TCP ACK packets to older TCP ACK packets in the queue in a manner similar to that described above. For example, as shown with respect to configuration 204, upon examining packet #11, BB circuitry 103 determines that two other TCP ACK packets, packet #6 and packet #5, have the same flow ID ("flow c") and the same ACK Gen Count ("ackgen87"), as shown by associations 204b and 204c. In this case, BB circuitry 103 determines that the two older packets (e.g., packet #6 and packet #5) are redundant and marks these packets for discarding, as shown in configuration 206.
Similarly, considering the next older TCP ACK packet in the output queue that has not yet been checked (e.g., packet #9), BB circuitry 103 determines that two older packets in the output queue (e.g., packet #3 and packet #1) are redundant duplicate TCP ACK packets, because they have the same flow ID ("flow a") and the same ACK Gen Count value ("ackgen2"), as shown by associations 204d and 204e. As another example, BB circuitry 103 determines that packet #4 is a redundant duplicate of packet #7, as shown by association 204f. In these cases, BB circuitry 103 discards the redundant duplicate packets, as shown in configuration 206.
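The newest-to-oldest scan walked through above can be sketched as follows. The per-flow dictionary plays the role of the database 106 entries; packets are modeled as hypothetical dicts, and packets with a None field are treated as invalid and skipped:

```python
def mark_redundant(queue):
    """Scan the output queue from newest to oldest, keeping the first
    (i.e., newest) packet seen for each (flow ID, ACK Gen Count) pair and
    marking older packets carrying the same pair as redundant. On an ACK
    Gen Count mismatch the flow's entry is updated with the inspected
    packet's count, mirroring step (318) of process 300. Returns the
    indices of packets to discard."""
    latest = {}        # flow_id -> ACK Gen Count of the last inspected packet
    to_discard = []
    for i in range(len(queue) - 1, -1, -1):       # queue[-1] is the newest
        pkt = queue[i]
        fid, gen = pkt.get("flow_id"), pkt.get("ack_gen_count")
        if fid is None or gen is None:
            continue                               # invalid: not a candidate
        if fid not in latest:
            latest[fid] = gen                      # first packet of this flow
        elif latest[fid] == gen:
            to_discard.append(i)                   # redundant duplicate
        else:
            latest[fid] = gen                      # different interval: update
    return sorted(to_discard)
```

Run against a queue shaped like Fig. 2 (packets #1-#13), the sketch marks packets #1, #3, #4, #5, #6, and #10 for discard, keeping the newest packet of each duplicated (flow, count) pair.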
After discarding all redundant packets, the result of the output queue optimization/reduction is shown by configuration 208. As shown, the number of TCP ACK packets in the output queue after optimization (208) is less than the number of TCP ACK packets in the output queue before the optimization process is performed (202). Eliminating redundant TCP ACK packets in this way results in a lower number of packets being sent on the UL, which increases processing efficiency and reduces latency. This lowers the TCP RTT, resulting in an increase in TCP throughput and thus higher end-to-end throughput.
In some implementations, the TCP AP 105 sets the ACK Gen Count of a particular TCP ACK packet to invalid, e.g., by leaving the ACK Gen Count field in the packet descriptor null, or by assigning a predetermined invalid value. For example, the ACK Gen Count field in the packet descriptor may be a 16-bit field; TCP AP 105 may set the 16-bit field to the hexadecimal value FFFF to indicate that the ACK Gen Count is invalid. In some implementations, a similar approach is used for the flow ID, as previously described. By setting the ACK Gen Count or flow ID, or both, to invalid for a TCP ACK packet, application processor 105 can prevent the corresponding TCP ACK packet from being classified as redundant and therefore discarded. This may occur, for example, for a TCP ACK packet that is determined to include important information (e.g., a TCP ACK with additional header information). In some implementations, the output queue 107 includes additional packets, such as TCP data packets. In such implementations, the TCP AP 105 ensures that a data packet is not checked for reduction by setting the ACK Gen Count or the flow ID, or both, of the data packet to an invalid value.
Fig. 3 illustrates an exemplary process 300 for managing TCP ACK packets in an output queue according to some implementations. In some implementations, the process 300 is performed by the client device 102 (e.g., by the BB circuitry 103 of the client device 102) to manage TCP ACK packets buffered in the output queue 107 for uplink transmission, by checking the TCP ACK packets for discardable redundant packets. Accordingly, the process 300 is described in the following sections with respect to the client device 102 and the system 100. However, in other implementations, the process 300 may be performed by other devices as well.
Process 300 begins when a client device accesses an output queue that includes a plurality of TCP ACK packets (302). For example, BB circuitry 103 accesses output queue 107 to determine whether there are discardable redundant packets. In some implementations, BB circuitry 103 accesses output queue 107 to optimize the queue when the number of TCP ACK packets in the queue exceeds a predetermined threshold number. In some implementations, BB circuitry 103 accesses output queue 107 to optimize the queue when the duration for which TCP ACK packets have been queued in output queue 107 exceeds a predetermined time threshold. In some implementations, BB circuitry 103 accesses output queue 107 at predetermined periodic intervals to optimize the queue. In some implementations, the predetermined time interval corresponds to the time period used by the TCP AP 105 to stamp packets with the same ACK Gen Count value, as previously described. In some implementations, accessing output queue 107 includes accessing a record of the TCP ACK packets in database 106. Such a record is shown with respect to fig. 6. In some implementations, the BB circuitry 103 resets the database 106 when the output queue is accessed. For example, when the output queue 107 is accessed, the BB circuitry 103 clears the entries in the database 106.
Upon accessing the output queue, the client device examines the TCP ACK packet in the queue (304). For example, in some implementations, BB circuitry 103 examines the packet starting from the latest packet in the queue (e.g., packet #13 as shown by configuration 204). In other implementations, BB circuitry 103 examines the packet starting from the oldest packet in the queue (e.g., packet #1 as shown by configuration 204).
The client device identifies the value of the ACK Gen Count and flow ID corresponding to the currently accessed packet (306). For example, when checking a TCP ACK packet (such as packet # 13) in the output queue 107, the BB circuit 103 accesses the flow ID and ACK Gen Count value included in the packet.
Upon identifying the ACK Gen Count value of the examined TCP ACK packet, the client device verifies the validity of the packet's ACK Gen Count (308). For example, BB circuitry 103 checks whether the ACK Gen Count value of the packet is set to a valid value or to an invalid value (such as hexadecimal FFFF, or some other suitable predetermined value indicating invalidity). In some implementations, a null ACK Gen Count field indicates that the ACK Gen Count is invalid.
If the client device determines that the ACK Gen Count value of the examined TCP ACK packet is invalid (308-no), the client device marks the TCP ACK packet as ineligible for reduction and moves on to examine the next packet in the output queue, if available. For example, as previously described, in some cases, such as when a packet contains important information (such as a TCP ACK with additional header information, or a TCP data packet), TCP AP 105 sets the ACK Gen Count of the packet to invalid to indicate that the packet should not be a candidate for removal from the output queue. When BB circuitry 103 determines that the ACK Gen Count of the currently inspected packet is invalid, the BB circuitry leaves the packet in output queue 107 without further processing and moves on to inspect the next packet in the queue (if there are other uninspected packets).
On the other hand, if the client device determines that the checked TCP ACK packet has a valid ACK Gen Count, the client device checks whether the flow ID of the TCP ACK packet is valid (310). For example, BB circuitry 103 checks whether a flow ID value is assigned to the TCP ACK packet and, if so, checks whether the flow ID indicates a valid value or an invalid value (e.g., hexadecimal FFFF, or some other suitable predetermined value indicating invalidity). In some implementations, the absence of any value in the flow ID field indicates that the flow ID is invalid.
If the client device determines that the flow ID of the inspected packet indicates an invalid value (310-no), the client device marks the TCP ACK packet as ineligible for reduction and moves on to inspect the next packet in the output queue, if available. For example, as previously described, in some cases, such as when the client device determines that a TCP ACK packet contains important information (such as a TCP ACK with additional header information, or a TCP data packet), TCP AP 105 sets the flow ID of the particular TCP ACK packet to invalid to indicate that the packet should not be a candidate for removal from the output queue. When BB circuitry 103 determines that the flow ID of the currently inspected packet is invalid, the BB circuitry leaves the packet in output queue 107 without further processing and moves on to inspect the next packet in the queue (if there are other uninspected packets).
On the other hand, if the client device determines that the flow ID of the checked packet is valid (310—yes), the client device checks if the flow ID of the packet is stored in an entry in the database (312). For example, BB circuitry 103 checks the entry in database 106 to determine if the entry includes the flow ID of the currently checked packet, indicating that another TCP ACK packet corresponding to the same flow is present in the output queue (and has been previously checked).
If the client device determines that the flow ID is not stored in the database (312-no), the client device stores the flow ID and ACK Gen Count of the packet in the database (314). For example, if BB circuitry 103 determines that database 106 does not include an entry with the flow ID of the currently examined TCP ACK packet, then BB circuitry 103 creates a new entry in database 106, e.g., as shown with respect to fig. 6. The BB circuitry 103 stores the flow ID and ACK Gen Count value of the packet in the newly created entry.
On the other hand, if the client device determines that the flow ID of the inspected packet is present in an entry in the database (312—yes), the client device checks whether the ACK Gen Count of the inspected packet matches the ACK Gen Count value stored in the database entry (316). For example, as described above with respect to fig. 2, packet #10 in the queue (204) has a flow ID of "flow a" and an ACK Gen Count of "ackgen3". If packet #10 is the currently checked packet, then when searching the entries in database 106, BB circuitry 103 determines that there is at least one entry in the database with the same flow ID, corresponding to another TCP ACK packet with the same flow ID in the queue. BB circuitry 103 then determines whether the ACK Gen Count of the entry is the same as the ACK Gen Count of packet #10.
If the client device determines that the ACK Gen Count value of the examined TCP ACK packet is different from the ACK Gen Count value stored in the database entry (316—no), the client device updates the ACK Gen Count value in the database by replacing the existing value with the ACK Gen Count value of the examined TCP ACK packet (318). For example, if the BB circuitry 103 determines that an entry in the database 106 having the same flow ID as that of the inspected packet has a different ACK Gen Count value, the BB circuitry 103 updates the ACK Gen Count field in the entry with the ACK Gen Count value of the inspected TCP ACK packet.
On the other hand, if the client device determines that the ACK Gen Count of the inspected packet matches the ACK Gen Count value stored in the database entry (316—yes), the client device discards the TCP ACK packet (320). For example, when checking the entries in database 106 for packet #10 (shown in fig. 2), BB circuitry 103 determines that there is an entry with the same flow ID ("flow a") and the same ACK Gen Count value ("ackgen3") as packet #10. As previously described, this entry was created when packet #13 was checked. Therefore, the BB circuitry 103 determines that packet #10 is a redundant duplicate TCP ACK packet compared to packet #13. BB circuitry 103 then drops TCP ACK packet #10 from output queue 107.
In implementations in which BB circuitry 103 examines the output queue starting with the oldest packet, when a redundant TCP ACK packet (such as packet #10) is identified, BB circuitry 103 discards the newer packet (e.g., packet #13) and retains the older packet (e.g., packet #10) in the output queue. In such implementations, the entries of older packets remain in the database, while the entries of newer packets are deleted. It should be noted that, in such a case, when a packet is checked, the BB circuitry 103 does not immediately know whether the packet is to be discarded or retained in the output queue. This determination is made when the next packet with the same flow ID is detected. Thus, compared to first checking the packets in the output queue starting with the newest packet, the decision whether to leave a packet in the output queue or discard it is delayed during the iteration of checking the packets.
The client device then checks whether additional packets are present in the output queue (322). If there are one or more additional packets in the output queue (322—yes), the client device accesses the next TCP ACK packet in the output queue (324) and begins checking entries in the database for matches corresponding to the flow ID and ACK Gen Count of the newly accessed TCP ACK packet, in the manner described in the previous sections with respect to (304)-(320). On the other hand, if there are no additional packets in the output queue (322—no), process 300 ends, with the output queue having been compressed for the current iteration.
In some cases of the above-described process 300, when an older TCP ACK packet that is a redundant copy is removed from the output queue while the newer TCP ACK packet remains in the queue, sending the TCP acknowledgement to the remote server is delayed, because the newer TCP ACK packet is located later in the output queue. As described in the following sections, in some implementations, the position in the output queue of the (discarded) duplicate older TCP ACK packet is given to the newer TCP ACK packet, such that the TCP acknowledgement is sent to the remote server earlier (e.g., at the time the older TCP ACK packet would have been sent based on its position in the output queue), which helps reduce latency. In such implementations, BB circuitry 103 tracks the positions of the TCP ACK packets in the output queue. When an older redundant TCP ACK packet is identified, BB circuitry 103 discards the older packet and moves the newer packet to the older packet's position in the queue. To achieve this reordering of the output queue, BB circuitry 103 maintains additional fields in the entries in database 106. The additional fields include a replacement candidate index and a winner packet index (i.e., the index of the retained newer packet). The replacement candidate index indicates the queue position of an older packet of the flow that has the same ACK Gen Count value as the newer TCP ACK packet; this field is updated whenever an older packet with the same ACK Gen Count value is found in the output queue. The winner packet index indicates the queue position of the latest packet of the flow with a particular ACK Gen Count value; this field is set when a packet with a new ACK Gen Count value is identified.
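The database record described above, one per flow ID and ACK Gen Count combination with its two position fields, can be sketched as a simple data structure. This is an illustrative model only; the patent does not prescribe field types or names:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DatabaseEntry:
    """One record per <flow ID, ACK Gen Count> combination in the
    tracking database (cf. fig. 7 of the disclosure)."""
    flow_id: str
    ack_gen_count: str
    replacement_candidate_index: Optional[int] = None  # queue slot of the oldest duplicate
    winner_index: Optional[int] = None                 # queue slot of the packet to keep


# A newly created entry records only the winner's queue position; the
# replacement candidate field is filled in when an older duplicate is found.
entry = DatabaseEntry("flow a", "ackgen 3", winner_index=12)
```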
Fig. 4 illustrates optimization of a TCP ACK packet stream in accordance with some implementations of the disclosure in which packets are reordered in an output queue. The figure shows the configuration 402 of the output queue prior to optimization; configurations 404, 406, and 408 of output queues during optimization to identify and remove redundant TCP ACK packets; and configuration 410 of the output queue after optimization is complete. In some implementations, the operations described with respect to fig. 4 are performed by BB circuitry 103, and configurations 402-410 correspond to output queue 107.
Each of the configurations 402-410 includes TCP ACK packets identified by a packet number (e.g., "packet #1"), a flow ID (e.g., "flow a"), and an ACK Gen Count value (e.g., "ackgen 2"). As shown, the queue includes a plurality of TCP ACK packets with different flow IDs and corresponding ACK Gen Count values assigned by TCP AP 105. In some implementations, the TCP ACK packets are arranged in order from oldest to newest packet, with packet #1 being the oldest and packet #13 the newest. The oldest TCP ACK packet in the queue (packet #1) has flow ID "flow a" and ACK Gen Count "ackgen 2", and the newest TCP ACK packet (packet #13) has flow ID "flow a" and ACK Gen Count "ackgen 3".
Configuration 402 shows that, before BB circuitry 103 optimizes the queue, the output queue includes multiple TCP ACK packets with the same flow ID and the same ACK Gen Count. For example, packets #1, #3, and #9 have flow ID "flow a" and ACK Gen Count "ackgen 2". When optimizing output queue 107, BB circuitry 103 identifies one or more of these TCP ACK packets having the same flow ID and ACK Gen Count values as redundant duplicate ACK packets, which are then removed from the queue.
Configuration 402 also shows that the output queue includes multiple TCP ACK packets with the same flow ID but different ACK Gen Count values, as well as TCP ACK packets with different flow IDs. For example, packets #1 and #10 have the same flow ID "flow a" but different ACK Gen Count values, "ackgen 2" and "ackgen 3", respectively; while packets #1 and #2 have different flow IDs, "flow a" and "flow b", respectively. TCP ACK packets with the same flow ID but different ACK Gen Count values, or with different flow IDs, are not identified as redundant with respect to each other.
In some implementations, the packet descriptors (e.g., flow ID and ACK Gen Count values) of the TCP ACK packets in the queue are stored in database 106, for example, in a data structure such as the table described with respect to fig. 7. In some implementations, BB circuitry 103 manages the table. When checking a packet, if the associated flow ID and ACK Gen Count value are not present in database 106, BB circuitry 103 records the packet descriptor information as an entry in the database, and also records the index, or position, of the packet in the output queue. As described in more detail in the following sections, BB circuitry 103 manages TCP ACK packets in output queue 107 by examining entries in database 106 to identify redundant TCP ACK packets.
In some implementations, BB circuitry 103 examines output queue 107 starting with the latest packet (e.g., packet #13 as shown by configuration 402). To examine the TCP ACK packet in the output queue, BB circuit 103 accesses the packet descriptor of the packet and compares the information in the packet descriptor with entries stored in database 106.
As previously described, in some implementations, when checking TCP ACK packets stored in the output queue for redundancy, BB circuitry 103 first checks whether the corresponding packet descriptor of the packet has a valid ACK Gen Count or a valid flow ID, or both. If BB circuitry 103 determines that the ACK Gen Count of the packet currently under inspection is invalid, the BB circuitry marks the TCP ACK packet as ineligible for reduction (e.g., not a candidate for removal as a redundant TCP ACK packet) and moves to the next packet in the output queue. Additionally or alternatively, in some implementations, BB circuitry 103 checks the flow ID associated with the inspected packet. If the flow ID is invalid, BB circuitry 103 marks the TCP ACK packet as ineligible for reduction and moves to the next packet in the output queue. Once a TCP ACK packet is determined to be ineligible due to an invalid ACK Gen Count or an invalid flow ID, or both, BB circuitry 103 ignores the packet for reduction, leaving it in the output queue.
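The eligibility gate described above can be sketched as follows. The hexadecimal FFFF sentinel comes from the disclosure; treating a missing value as invalid is likewise described, but the function and constant names are hypothetical:

```python
INVALID = 0xFFFF  # sentinel value indicating an invalid descriptor field

def eligible_for_reduction(flow_id, ack_gen_count):
    """A packet is a candidate for removal as a redundant duplicate only
    if both descriptor fields carry valid values; otherwise it is left
    in the output queue untouched."""
    if flow_id is None or flow_id == INVALID:
        return False
    if ack_gen_count is None or ack_gen_count == INVALID:
        return False
    return True
```

A packet carrying important information (e.g., a TCP ACK with additional header data) would have one of these fields set invalid by TCP AP 105 and so be skipped by this check.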
In some implementations, BB circuitry 103 determines that the flow ID and ACK Gen Count of a TCP ACK packet in the queue match an entry in database 106, indicating that the packet is a redundant duplicate ACK packet. For example, considering configuration 404, BB circuitry 103 examines packets in the output queue in order from the latest packet to the oldest packet. For the latest packet, i.e., packet #13, BB circuitry 103 determines that there is no entry in database 106 with the same flow ID ("flow a") and the same ACK Gen Count ("ackgen 3"). Thus, BB circuitry 103 creates a new entry in database 106, storing flow ID "flow a", ACK Gen Count "ackgen 3", and the location index of packet #13. Subsequently, when checking the older TCP ACK packet #10, BB circuitry 103 determines that there is an entry in database 106 having the same flow ID "flow a" and ACK Gen Count value "ackgen 3" as packet #10. As shown by association 404a in configuration 404, packet #13 and packet #10 both have flow ID "flow a" and ACK Gen Count "ackgen 3". BB circuitry 103 determines that packet #10 has the same flow ID and ACK Gen Count value as the newer packet (e.g., packet #13) and is a discardable redundant duplicate TCP ACK packet. BB circuitry 103 records the location index of replacement candidate packet #10 (the TCP ACK packet to be discarded) in the database entry corresponding to the particular flow ID ("flow a") and ACK Gen Count ("ackgen 3"), as shown with respect to fig. 7.
After completing the check of a TCP ACK packet, BB circuitry 103 moves on to check the next older TCP ACK packet in the output queue and compares that packet's flow ID and ACK Gen Count with entries in the database in a manner similar to that described above. For example, after processing TCP ACK packet #12, BB circuitry 103 selects the next TCP ACK packet in the queue, that is, TCP ACK packet #11. BB circuitry 103 determines that database 106 includes an entry with the flow ID ("flow c") of TCP ACK packet #11, but that the ACK Gen Count value ("ackgen 87") of packet #11 is different from the ACK Gen Count value in the database entry (e.g., "ackgen 88", corresponding to packet #12 with the same flow ID). In this case, packet #11 is not a redundant copy of packet #12 and is not discarded or removed from the queue, as shown in configuration 404. BB circuitry 103 updates the corresponding database entry (e.g., the entry corresponding to "flow c") such that the ACK Gen Count field stores the ACK Gen Count value of packet #11 and the winner index field stores the location index of packet #11.
BB circuitry 103 continues to examine output queue 107, moving back from newer TCP ACK packets to older TCP ACK packets in the queue in a manner similar to that described above. For example, as shown with respect to configuration 404, upon examining packet #6 and then packet #5, BB circuitry 103 determines that these packets have the same flow ID ("flow c") and the same ACK Gen Count ("ackgen 87") as the entry in the database corresponding to packet #11, as described above. In this case, BB circuitry 103 determines that the two older packets (packet #6 and packet #5) are redundant copies with respect to packet #11 (shown by associations 404b and 404c), and marks these packets for discarding.
In some implementations, when there are a plurality of redundant duplicate TCP ACK packets, as in the foregoing example, BB circuitry 103 selects the position of the oldest of the duplicate TCP ACK packets as the replacement position for the latest packet to be retained, storing the position of the oldest copy in the replacement candidate index field and the position of the winner packet in the winner index field of the database entry corresponding to the flow ID and ACK Gen Count value. In some implementations, the replacement candidate field in the database entry is updated each time an additional duplicate TCP ACK packet is identified. In such implementations, the final value of the replacement candidate field corresponds to the position index of the oldest duplicate TCP ACK packet identified. Considering the above example again, BB circuitry 103 first identifies packet #6 as having the same flow ID and the same ACK Gen Count value, as shown by association 404b. Since packet #6 is older than packet #11, BB circuitry 103 determines that packet #6 can be discarded and that the position occupied by packet #6 in the queue can be given to packet #11. Following this identification, BB circuitry 103 stores the location of packet #6 in the replacement candidate index field and the location of packet #11 in the winner index field of the database entry corresponding to the flow ID ("flow c") and ACK Gen Count value ("ackgen 87"). Subsequently, BB circuitry 103 identifies packet #5 (which is older in the output queue than packet #6) as another copy corresponding to flow ID "flow c" and ACK Gen Count value "ackgen 87", as indicated by association 404c. BB circuitry 103 then updates the replacement candidate index field in the corresponding database entry for flow ID ("flow c") and ACK Gen Count value ("ackgen 87") to store the location of packet #5, replacing the location of packet #6 previously stored in this field.
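The newest-first scan described above, in which the first packet seen for a combination becomes the winner and each older duplicate overwrites the replacement candidate, can be sketched as follows (an illustrative model only; names and representation are hypothetical):

```python
def scan_newest_first(queue):
    """queue is ordered oldest -> newest; each element is a
    (flow_id, ack_gen_count) tuple. Walking newest-first, the first
    packet seen for a key becomes the winner; every older duplicate
    overwrites the replacement candidate, so that field ends up holding
    the oldest duplicate's queue position."""
    entries = {}                           # key -> [winner_idx, candidate_idx]
    for idx in range(len(queue) - 1, -1, -1):
        key = queue[idx]
        if key not in entries:
            entries[key] = [idx, None]     # newest packet for this key wins
        else:
            entries[key][1] = idx          # older duplicate: update candidate
    return entries
```

Because older duplicates keep overwriting the candidate field, its final value is the position of the oldest duplicate, exactly the slot the winner will later be moved into.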
Similarly, considering flow ID "flow a" and ACK Gen Count value "ackgen 2" corresponding to packet #9, BB circuitry 103 identifies two older packets in the output queue as redundant duplicate TCP ACK packets (i.e., packet #3 and packet #1), because the two older packets have the same flow ID and ACK Gen Count values, as shown by associations 404d and 404e. Of these two redundant duplicate TCP ACK packets, packet #1 is older than packet #3. Therefore, BB circuitry 103 stores the position of packet #1 as the final value in the replacement candidate index field of the corresponding database entry for flow ID "flow a" and ACK Gen Count value "ackgen 2". As another example, BB circuitry 103 determines that packet #4 is a single redundant duplicate of packet #7, as shown by association 404f. In this case, BB circuitry 103 stores the position of packet #4 as the final value in the replacement candidate index field, and the position of packet #7 in the winner index field, of the database entry corresponding to the flow ID ("flow b") and ACK Gen Count value ("ackgen 41").
After checking the output queue to identify redundant TCP ACK packets and recording the positions of the winner packets and replacement candidates, BB circuitry 103 uses the recorded position index information to rearrange the packets in the queue, so that the position of the winner packet corresponding to a particular flow ID and ACK Gen Count value is swapped with the position of the oldest packet corresponding to that flow ID and ACK Gen Count value. For example, as discussed above with respect to association 404a, for flow ID "flow a" and ACK Gen Count "ackgen 3", packet #10 is a redundant copy of packet #13, and their respective queue positions are stored as the replacement candidate index and the winner index, respectively, in the corresponding database entry. When the packets in the output queue are rearranged after the redundant copies are identified, BB circuitry 103 looks up the positions of packet #13 and packet #10 in the winner index and replacement candidate index fields of the database entry and swaps their positions in the output queue, as shown in configuration 406. As shown at 406a, the retained packet #13 moves up the queue to the position previously occupied by packet #10. This reordering ensures that the earliest available queue position of a TCP ACK packet corresponding to flow ID "flow a" and ACK Gen Count value "ackgen 3" (e.g., the original position of packet #10) is maintained while the redundant duplicate packet is removed.
In a similar manner, when redundant duplicate TCP ACK packets are removed, TCP ACK packets corresponding to other <flow ID, ACK Gen Count value> tuples are reordered. For example, as shown by association 406b, the positions of packet #11 and packet #5, which are the latest and the oldest packets corresponding to flow ID "flow c" and ACK Gen Count value "ackgen 87", respectively, are exchanged; as shown by association 406c, the positions of packet #9 and packet #1, which are the latest and the oldest packets corresponding to flow ID "flow a" and ACK Gen Count value "ackgen 2", respectively, are exchanged; and as shown by association 406d, the positions of packet #7 and packet #4, which are the latest and the oldest packets corresponding to flow ID "flow b" and ACK Gen Count value "ackgen 41", respectively, are exchanged. For each of these reorderings, BB circuitry 103 looks up the positions to be swapped from the replacement candidate index field and the winner index field in the corresponding database entry.
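Each such exchange is a single position swap driven by the two index fields of a database entry. A minimal sketch, with hypothetical names and the entry represented as a [winner_index, replacement_candidate_index] pair:

```python
def swap_winner(queue, entry):
    """entry is a [winner_index, replacement_candidate_index] pair; move
    the retained (winner) packet into the oldest duplicate's slot so the
    acknowledgement inherits the earliest available transmit position.
    Entries with no duplicate (candidate is None) are left untouched."""
    winner_idx, cand_idx = entry
    if cand_idx is not None:
        queue[winner_idx], queue[cand_idx] = queue[cand_idx], queue[winner_idx]


q = ["pkt10", "pkt11", "pkt12", "pkt13"]
swap_winner(q, [3, 0])   # winner pkt13 takes pkt10's slot at the queue head
```

After the swap, the duplicate packet sits in the winner's old (later) slot, from which it is subsequently discarded.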
After BB circuitry 103 has exchanged the positions of the packets and reordered the queue as described above, BB circuitry 103 discards the TCP ACK packets identified as redundant duplicates. For example, as shown in configuration 408, packet #10, which is a redundant duplicate packet corresponding to flow ID "flow a" and ACK Gen Count value "ackgen 3", is discarded. Similarly, packet #5 and packet #6, which are redundant duplicate packets corresponding to flow ID "flow c" and ACK Gen Count value "ackgen 87", are discarded; packet #1 and packet #3, which are redundant duplicate packets corresponding to flow ID "flow a" and ACK Gen Count value "ackgen 2", are discarded; and packet #4, which is a redundant duplicate packet corresponding to flow ID "flow b" and ACK Gen Count value "ackgen 41", is discarded. After the redundant packets are discarded, configuration 410 of the output queue shows that the number of packets in the compressed queue is less than the number of packets in the original queue, shown in configuration 402. In this way, redundant TCP ACK packets are eliminated from the output queue to improve processing efficiency. In doing so, the remaining packets are reordered such that the order in which acknowledgements are sent is unchanged, which reduces latency for all subsequent UL TCP ACK packets (those after the discarded packets) and reduces the TCP round-trip time (RTT), ultimately increasing TCP throughput and thus end-to-end throughput.
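The net effect of the scan, swap, and discard steps is that each <flow ID, ACK Gen Count> combination survives as a single packet (the newest one) occupying its oldest duplicate's queue slot. As an end-to-end sketch over a small hypothetical queue (not the exact packets of fig. 4):

```python
def compress(queue):
    """queue: list of (flow_id, ack_gen_count) tuples, oldest -> newest,
    all assumed valid. For each combination, keep only the newest packet
    and place it at the position of the oldest duplicate, so the relative
    order in which acknowledgements are sent is unchanged."""
    winner, oldest = {}, {}
    for idx, key in enumerate(queue):
        winner[key] = idx              # last occurrence: packet to keep
        oldest.setdefault(key, idx)    # first occurrence: slot to occupy
    kept = sorted(oldest, key=oldest.get)
    return [queue[winner[k]] for k in kept]


before = [("flow a", "ackgen 2"), ("flow b", "ackgen 41"),
          ("flow a", "ackgen 2"), ("flow c", "ackgen 87"),
          ("flow c", "ackgen 87"), ("flow a", "ackgen 3"),
          ("flow c", "ackgen 87"), ("flow a", "ackgen 3")]
after = compress(before)   # one packet per combination, order preserved
```

The compressed queue is shorter while the first-transmission position of every distinct acknowledgement is retained, matching the latency argument above.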
Fig. 5 illustrates an exemplary process 500 for managing and reordering TCP ACK packets in an output queue, in accordance with certain disclosed implementations. In some implementations, the process 500 is performed by the client device 102, for example, by the BB circuitry 103 of the client device 102. Accordingly, the process 500 is described in the following sections with respect to the client device 102 and the system 100. However, process 500 may also be performed by other devices.
Process 500 begins when a client device accesses an output queue that includes a plurality of TCP ACK packets (502). For example, BB circuitry 103 accesses output queue 107 to determine whether there are redundant packets that can be discarded. In some implementations, BB circuitry 103 accesses output queue 107 to optimize the queue when the number of TCP ACK packets in the queue exceeds a predetermined threshold number. In some implementations, BB circuitry 103 accesses output queue 107 to optimize the queue when the duration for which TCP ACK packets have been queued in output queue 107 exceeds a predetermined time threshold. In some implementations, BB circuitry 103 accesses output queue 107 at predetermined periodic intervals to optimize the queue. In some implementations, BB circuitry 103 resets database 106 when the output queue is accessed. For example, in such implementations, BB circuitry 103 clears the entries in database 106 when output queue 107 is accessed.
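The size-based and age-based triggers described above can be sketched as a simple predicate. The threshold values here are illustrative placeholders, not values from the disclosure:

```python
def should_compress(queue_len, head_wait_ms, max_len=64, max_wait_ms=5):
    """Hypothetical trigger check for starting queue optimization:
    compress when the queue has grown past a size threshold, or when its
    oldest packet has been queued longer than a time threshold."""
    return queue_len > max_len or head_wait_ms > max_wait_ms
```

A periodic-interval trigger, also described above, would simply invoke the optimization on a timer regardless of this predicate.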
Upon accessing the output queue, the client device examines the TCP ACK packets in the queue (504). For example, in some implementations, BB circuitry 103 examines the packet starting from the latest packet in the queue (e.g., packet #13 as shown by configuration 402). In some implementations, BB circuitry 103 examines the packet starting from the oldest packet in the queue (e.g., packet #1 as shown by configuration 402). In some implementations, when a packet in the output queue 107 is checked, the BB circuit 103 reads a packet descriptor corresponding to the packet.
The client device identifies the value of the ACK Gen Count and flow ID corresponding to the currently accessed TCP ACK packet (506). For example, when checking a TCP ACK packet (such as packet # 13) in the output queue 107, the BB circuit 103 accesses the flow ID and ACK Gen Count value included in the packet descriptor of the packet.
Upon identifying the ACK Gen Count value of the examined TCP ACK packet, the client device checks whether the ACK Gen Count value of the packet is valid (508). For example, BB circuitry 103 checks whether the ACK Gen Count value of the packet is set to a valid value, or an invalid value (e.g., hexadecimal FFFF or some other suitable predetermined value indicating invalid). In some implementations, the absence of any value in the ACK Gen Count field indicates that the ACK Gen Count is invalid.
If the client device determines that the ACK Gen Count value of the examined TCP ACK packet is invalid (508—no), the client device marks the TCP ACK packet as ineligible for reduction and moves on to examine the next packet in the output queue, if available (522). For example, as previously described, in some cases, such as when a packet contains important information (such as a TCP ACK with additional header information, or a TCP data packet), TCP AP 105 sets the ACK Gen Count of the packet to invalid to indicate that the packet should not be a candidate for removal from the output queue. When BB circuitry 103 determines that the ACK Gen Count of the currently inspected packet is invalid, the BB circuitry leaves the packet in output queue 107 without further processing and moves on to inspect the next packet in the queue (if there are other uninspected packets).
On the other hand, if the client device determines that the checked TCP ACK packet has a valid ACK Gen Count (508—yes), the client device checks whether the flow ID of the TCP ACK packet is valid (510). For example, BB circuitry 103 checks whether a flow ID value is assigned to the TCP ACK packet and, if so, checks whether the flow ID indicates a valid value or an invalid value (e.g., hexadecimal FFFF or some other suitable predetermined value indicating invalid). In some implementations, the absence of any value in the flow ID field indicates that the flow ID is invalid.
If the client device determines that the flow ID of the inspected packet is invalid (510—no), the client device marks the TCP ACK packet as ineligible for reduction and moves on to inspect the next packet in the output queue, if available (522). For example, as previously described, in some cases, such as when the client device determines that a TCP ACK packet contains important information (such as a TCP ACK with additional header information, or a TCP data packet), TCP AP 105 sets the flow ID of the particular TCP ACK packet to invalid to indicate that the packet should not be a candidate for removal from the output queue. When BB circuitry 103 determines that the flow ID of the currently inspected packet is invalid, the BB circuitry leaves the packet in output queue 107 without further processing and moves on to inspect the next packet in the queue (if there are other uninspected packets).
On the other hand, if the client device determines that the flow ID of the checked packet is valid (510—yes), the client device checks whether the flow ID of the packet is stored in an entry in the database (512). For example, BB circuitry 103 checks the entry in database 106 to determine if the entry includes a flow ID that matches the flow ID of the currently examined packet, indicating that another TCP ACK packet corresponding to the same flow is present in the output queue (and has been previously examined).
If the client device determines that the flow ID is not stored in the database (512—no), the client device stores the flow ID and ACK Gen Count of the packet in the database (514). For example, if BB circuitry 103 determines that database 106 does not include an entry with the flow ID of the TCP ACK packet currently being checked, then BB circuitry 103 creates a new entry in database 106, e.g., as shown with respect to fig. 7. BB circuitry 103 stores the flow ID and ACK Gen Count value of the packet in the newly created entry. The client device also stores the location of the packet in the database (515). For example, in addition to storing the flow ID and ACK Gen Count value of the TCP ACK packet in the newly created entry, BB circuitry 103 also stores the packet's position in the output queue in the winner index field of the database entry, as previously described with respect to fig. 4.
On the other hand, if the client device determines that the flow ID of the inspected packet is present in an entry in the database (512—yes), the client device checks whether the ACK Gen Count of the inspected packet matches the ACK Gen Count value in the entry (516). For example, as described above with respect to fig. 4, packet #10 in queue 404 has flow ID "flow a" and ACK Gen Count "ackgen 3". When packet #10 is checked, BB circuitry 103 determines, upon retrieving the entries in database 106, that there is an entry with the same flow ID ("flow a"), previously created when packet #13 was checked. BB circuitry 103 then checks whether the ACK Gen Count value in the entry is identical to the ACK Gen Count of packet #10.
If the client device determines that the ACK Gen Count of the TCP ACK packet does not match the ACK Gen Count of the database entry (516—no), the client device updates the ACK Gen Count value of the entry in the database (518). If BB circuitry 103 determines that the existing entry in database 106 having the same flow ID as the inspected packet has an ACK Gen Count value different from that of the packet, BB circuitry 103 updates the database entry, replacing the value in the entry's ACK Gen Count field with the inspected packet's ACK Gen Count value. For example, as previously described, when examining TCP ACK packet #11, BB circuitry 103 determines that database 106 includes an entry with the flow ID ("flow c") of TCP ACK packet #11, but that the ACK Gen Count value ("ackgen 87") of packet #11 is different from the ACK Gen Count value in the database entry ("ackgen 88", corresponding to packet #12 with the same flow ID). In this case, BB circuitry 103 updates the corresponding database entry for "flow c" by updating the ACK Gen Count field to store the ACK Gen Count value of packet #11 and the winner index field to store the position index of packet #11.
The client device also stores the location of the packet as the winner packet in the database entry (519). For example, in addition to updating the ACK Gen Count field in the database entry as described above, BB circuitry 103 stores the output queue location of the packet in the entry's winner index field, as previously described with respect to fig. 4. Considering the above example of packet #11, BB circuitry 103 updates the winner index field of the database entry (which previously stored the queue location of packet #12) to store the queue location index of packet #11.
On the other hand, if the client device determines that the ACK Gen Count of the inspected packet matches the ACK Gen Count value stored in the database entry (516—yes), the client device marks the TCP ACK packet as redundant and stores the location of the TCP ACK packet in the database entry as the replacement candidate index (520). For example, when packet #10 is checked (as shown in fig. 4), BB circuitry 103 determines that there is an entry (created when packet #13 was checked) having the same flow ID ("flow a") and the same ACK Gen Count value ("ackgen 3") as packet #10. Therefore, BB circuitry 103 determines that packet #10 is a redundant duplicate TCP ACK packet with respect to packet #13. BB circuitry 103 marks TCP ACK packet #10 as to be discarded from output queue 107 and stores the position of packet #10 in the replacement candidate index field of the corresponding database entry for flow ID "flow a", with the queue position of packet #13 stored in the winner index field.
The client device then checks whether additional packets are present in the output queue (522). If there are one or more additional packets in the output queue (522—yes), the client device accesses the next TCP ACK packet in the output queue (524) and begins checking entries in the database for matches corresponding to the flow ID and ACK Gen Count of the newly accessed TCP ACK packet, in the manner described in the previous sections with respect to (504)-(520).
On the other hand, if no additional packets are present in the output queue (522—no), the client device continues to reorder the TCP ACK packets in the output queue and discard redundant TCP ACK packets (526). For example, BB circuitry 103 swaps the positions of packet #13 and packet #10 as described with respect to 406a in fig. 4, and discards redundant duplicate TCP ACK packet #10 as shown with respect to configuration 408. Process 500 then ends, with the output queue having been compressed and reordered for the current iteration. For example, in some implementations, configuration 410 in fig. 4 shows the output queue at the end of process 500.
Fig. 6 illustrates entries in database 106 for managing TCP ACK packets, in accordance with some implementations of the disclosure. As described with respect to fig. 1, database 106 is included in client device 102.
As shown in fig. 6, database 106 includes one or more entries, such as 602, 604, and 606, represented as rows and also referred to as data records. Each entry includes two fields, namely a flow ID field 620 and an ACK Gen Count field 622, which store the flow ID and ACK Gen Count values, respectively, determined by BB circuitry 103 when inspecting TCP ACK packets in output queue 107. Each entry in database 106 includes a unique combination of a flow ID and an ACK Gen Count value. For example, entry 602 has a value of "flow a" in its flow ID field 620 and a value of "ackgen 3" in its ACK Gen Count value field 622. When BB circuit 103 begins checking output queue 107 in each iteration, it resets database 106 by clearing all entries.
When checking the TCP ACK packets in the output queue, if BB circuitry 103 does not find an entry in database 106 that matches the flow ID of the packet, BB circuitry 103 creates a new entry in database 106 and enters the flow ID and ACK Gen Count values in the corresponding fields of the entry, as described with respect to output queue configurations 202 through 204 and process 300. This occurs, for example, when the latest packet corresponding to a specific flow ID is checked. For example, as described with respect to figs. 2 and 3, when packet #13 in output queue 107 is checked, BB circuitry 103 determines that the flow ID ("flow a") of packet #13 is not present in database 106. Thus, BB circuitry 103 creates a new entry, e.g., entry 602, in database 106 and enters the flow ID and ACK Gen Count value of packet #13 in the corresponding fields of the newly created entry. Similarly, BB circuitry 103 creates entries 604 and 606 when packet #12 and packet #7, respectively, are checked.
Fig. 7 illustrates entries in database 106 for managing TCP ACK packets, in accordance with some implementations of the disclosure. As described with respect to fig. 1, database 106 is included in client device 102. As shown in fig. 7, database 106 includes one or more entries, such as 702, 704, and 706, represented as rows and also referred to as data records. Each entry includes four fields: a flow ID field 720, an ACK Gen Count field 722, a replacement candidate index field 724, and a winner index field 726.
The flow ID field 720 and the ACK Gen Count field 722 store a flow ID and an ACK Gen Count value, respectively, determined by BB circuitry 103 when checking the TCP ACK packets in output queue 107. Each entry in database 106 includes a unique combination of a flow ID and an ACK Gen Count value. For example, entry 702 has the value "flow a" in its flow ID field 720 and the value "ackgen 3" in its ACK Gen Count value field 722. When BB circuitry 103 identifies a redundant duplicate TCP ACK packet, the replacement candidate index field 724 and the winner index field 726 of the entry store the position of the redundant duplicate packet to be discarded and the position of the TCP ACK packet to be retained, respectively. For example, entry 702 has the queue location of packet #10 in its replacement candidate index field 724 (referred to as the packet #10 index) and the queue location of packet #13 in its winner index field 726 (referred to as the packet #13 index). Therefore, when output queue 107 is compressed, as described with respect to figs. 4 and 5, BB circuitry 103 swaps the positions of packet #10 and packet #13 and discards packet #10.
When BB circuit 103 begins checking output queue 107 in each iteration, it resets database 106 by clearing all entries. As previously described, when checking TCP ACK packets in the output queue, if BB circuitry 103 does not find an entry in database 106 that matches the flow ID of the packet, BB circuitry 103 creates a new entry in database 106 and enters the flow ID, ACK Gen Count value, and queue position of the packet in the corresponding fields of the entry, as described with respect to output queue configurations 402-408 and process 500. This occurs, for example, when the latest packet corresponding to a specific flow ID and ACK Gen Count interval is checked. For example, as described with respect to figs. 4 and 5, when packet #13 in output queue 107 is checked, BB circuit 103 determines that the combination of the flow ID ("flow a") and the ACK Gen Count value ("ackgen 3") of packet #13 is not present in database 106. Thus, BB circuitry 103 creates a new entry, e.g., entry 702, in database 106, and enters the flow ID, the ACK Gen Count value, and the queue position of packet #13 (the packet #13 index) in the corresponding fields 720, 722, and 726 of the newly created entry. BB circuit 103 creates entry 704 when packet #12 is checked, and updates the entry when packet #11, which has the same flow ID but a different ACK Gen Count value than packet #12, is checked. Similarly, BB circuit 103 creates entry 706 when packet #8 is checked, and updates the entry when packet #7 is checked.
On the other hand, when checking a TCP ACK packet in the output queue, if BB circuit 103 finds an entry in database 106 matching the combination of the flow ID and ACK Gen Count value of the packet, BB circuit 103 determines that the currently checked packet is a redundant duplicate. BB circuitry 103 marks the packet as to be discarded and records the queue position of the packet in the replacement candidate index field 724 of the corresponding entry, as described with respect to output queue configurations 402-408 and process 500. This occurs, for example, when a more recent packet corresponding to a particular flow ID and ACK Gen Count interval has already been detected. For example, as described with respect to figs. 4 and 5, when packet #10 in output queue 107 is checked, BB circuit 103 determines that the combination of the flow ID ("flow a") and the ACK Gen Count value ("ackgen 3") of packet #10 is already present in entry 702 in database 106 (created when packet #13 was checked). Thus, BB circuit 103 marks packet #10 as to be discarded and records the queue position of packet #10 ("packet #10 index") in the replacement candidate index field 724 of entry 702. Similarly, when packet #6 in output queue 107 is checked, BB circuit 103 determines that the combination of the flow ID ("flow c") and the ACK Gen Count value ("ackgen 87") of the packet is already present in entry 706 in database 106 (updated when packet #7 was checked). Thus, BB circuit 103 marks packet #6 as to be discarded and records the queue position of packet #6 in the replacement candidate index field 724 of entry 706. When packet #5 in output queue 107 is checked next (the queue is checked starting from the latest packet, from right to left, so that packet #6 is newer in the output queue than packet #5), BB circuit 103 determines that the combination of the flow ID ("flow c") and the ACK Gen Count value ("ackgen 87") of the packet is already present in entry 706 in database 106.
Thus, BB circuit 103 marks packet #5 as to be discarded and records the queue position of packet #5 in the replacement candidate index field 724 of entry 706, thereby replacing the previous value recorded in field 724, i.e., the queue position of packet #6.
Once BB circuit 103 has completed checking output queue 107 in the current iteration, the entries in database 106 are as shown in fig. 7. BB circuit 103 then exchanges the positions of each winner packet and its redundant duplicate packet, as described with respect to configuration 406 in fig. 4 and process 500. For example, for flow ID "flow a" and ACK Gen Count value "ackgen 3" (entry 702), BB circuitry 103 moves winner packet #13 to the position of redundant packet #10, which is earlier in output queue 107 and is obtained from the replacement candidate index field 724 of entry 702.
In some implementations, BB circuitry 103 also moves redundant packet #10 to the position of winner packet #13, obtained from the winner index field 726 of entry 702. Similar exchanges are performed for entries 704 and 706. After the exchanges are completed, BB circuitry 103 discards the redundant duplicate packets, as described with respect to configuration 408 in fig. 4 and process 500. For example, for flow ID "flow a" and ACK Gen Count value "ackgen 3" (entry 702), BB circuitry 103 discards packet #10, which now occupies the position indicated by winner index field 726.
Thus, in implementing process 500, BB circuitry 103 utilizes database 106 to optimize output queue 107 to remove redundant duplicate packets.
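The compaction pass described above can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions, not the patented implementation: packets are plain dicts, the field names `flow_id` and `ack_gen` stand in for the packet descriptor's flow ID and TCP ACK generation count, and the queue is a list with the newest packet last.

```python
def compact_tcp_acks(queue):
    """One compaction pass over an output queue (oldest packet first).

    Scan from newest to oldest; the newest TCP ACK per
    (flow_id, ack_gen) combination is the "winner", and every older
    one is a redundant duplicate. The winner is then swapped into the
    slot of the oldest duplicate (the replacement candidate), and the
    duplicates are dropped.
    """
    db = {}          # (flow_id, ack_gen) -> {'winner': idx, 'candidate': idx}
    discard = set()  # queue indices marked "to be discarded"

    for i in range(len(queue) - 1, -1, -1):      # newest -> oldest
        pkt = queue[i]
        if 'ack_gen' not in pkt:                 # not a TCP ACK: keep as-is
            continue
        key = (pkt['flow_id'], pkt['ack_gen'])
        if key not in db:
            db[key] = {'winner': i, 'candidate': None}
        else:
            discard.add(i)                       # redundant duplicate
            db[key]['candidate'] = i             # oldest duplicate seen so far

    # Swap each winner into its replacement candidate's (earlier) slot,
    # then drop everything still marked for discard.
    for entry in db.values():
        c, w = entry['candidate'], entry['winner']
        if c is not None:
            queue[c], queue[w] = queue[w], queue[c]
            discard.discard(c)                   # the winner now lives here
            discard.add(w)                       # a duplicate now lives here

    return [p for i, p in enumerate(queue) if i not in discard]
```

With this sketch, a queue holding several ACKs for the same flow ID and ACK Gen Count keeps only the newest ACK, placed at the oldest duplicate's position, which mirrors the handling of entries 702 through 706 above.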
In some implementations, the flow ID may be very large (e.g., 116 bits or 128 bits). Storing flow IDs of such size would require a large amount of memory. Due to the large size of the flow ID, the time taken to parse the flow ID and find a match in database 106 may also be long, resulting in additional delay overhead when compressing the output queue. In some implementations, memory storage requirements, the speed of parsing the database, or both are improved by compacting the flow ID: an initial search in database 106 is performed using the last X bits of the flow ID as a hash value. In such implementations, X is a predetermined positive integer that is less than the full bit length of the flow ID. For example, X may be 8, 16, or 32 bits, while the full bit length of the flow ID may be 116 or 128 bits. In such implementations, BB circuitry 103 uses the hash value as an index into a hash table that stores entries with <flow ID, ACK Gen Count> pairs having the same hash value. The table is searched to identify an entry having the same hash value as that of the flow ID of the currently processed TCP ACK packet. Since the hash value has fewer bits than the original flow ID, using the hash value reduces the time required to search database 106.
In some implementations, when using the hash value in the manner described above, the full flow ID is also stored in the hash table. This is useful, for example, to resolve flow ID collisions (e.g., two or more different flow IDs mapping to the same hash value). In such cases, a search is first performed using the hash value to determine whether an entry is present in the database, followed by a second search using the full flow ID to locate the entry. When a hash match and a hash table entry are found, all full flow IDs mapped to the hash value in that entry are checked until a flow ID is found that matches the flow ID of the packet under inspection. If no match is found, the flow ID of the packet is stored in the same hash table entry. Thus, a hash table entry includes a linear list of flow IDs that hash to the value corresponding to the entry. Using hashing in this manner involves a trade-off between memory consumption and speed: a large hash table has fewer collisions, while a small hash table requires greater search effort due to flow ID collisions. The size of the hash value is chosen according to this trade-off.
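The two-stage lookup can be sketched as follows. The class name `FlowTable` and the parameter `x` are invented for this example; the point is that the last X bits of the flow ID select a bucket, and each bucket keeps a linear list of full <flow ID, ACK Gen Count> pairs so collisions are resolved by full flow ID comparison.

```python
class FlowTable:
    """Hash table indexed by the last `x` bits of a (large) flow ID.

    Each bucket holds a linear list of (full_flow_id, ack_gen) pairs,
    so two different flow IDs that share the same low bits (a hash
    collision) are told apart by comparing the full flow IDs.
    """
    def __init__(self, x=16):
        self.mask = (1 << x) - 1     # keep only the last x bits
        self.buckets = {}

    def _hash(self, flow_id):
        return flow_id & self.mask

    def contains(self, flow_id, ack_gen):
        # First search: cheap lookup on the truncated hash value.
        bucket = self.buckets.get(self._hash(flow_id), [])
        # Second search: full flow ID comparison within the bucket.
        return (flow_id, ack_gen) in bucket

    def insert(self, flow_id, ack_gen):
        self.buckets.setdefault(self._hash(flow_id), []).append(
            (flow_id, ack_gen))
```

A larger `x` means more buckets and fewer collisions at the cost of memory; a smaller `x` saves memory but lengthens the linear search, which is the trade-off noted above.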
In some implementations, the complexity of TCP ACK optimization is O(n), where n is the number of packets in the output queue. That is, the complexity of TCP ACK optimization increases linearly with n. To reduce computing resource consumption and save power, it is desirable to have the flexibility to turn TCP ACK optimization on or off. Thus, a client device (e.g., client device 102) may be configured with features that make TCP ACK optimization subject to various conditions. Implementing these features may be useful in devices or applications such as modems operating in low power modes, or consumer electronics devices (e.g., low-end smartwatches and mobile phones) with modest hardware configurations. Exemplary TCP ACK optimization conditions are described below with reference to the examples shown in figs. 8A through 8G. The examples shown in figs. 8A-8G may be implemented on client device 102 and/or with one or more features of TCP ACK management described with reference to figs. 2-7.
The examples of figs. 8A-8G focus on the operation between an application processor (AP) 810 and baseband (BB) circuitry 820. In some implementations, AP 810 is similar to AP 105 and BB circuitry 820 is similar to BB circuitry 103. As shown, AP 810 includes an application block 811 and an IP stack block 812, and BB circuitry 820 includes a TCP ACK optimization block 821 and a layer 2 (L2) block 822. The application block 811, the IP stack block 812, the TCP ACK optimization block 821, and the L2 block 822 may be functional blocks implemented by software code and/or hardware circuitry. AP 810 outputs an uplink packet stream 830 to BB circuitry 820. After processing uplink packet stream 830 with the L2 block 822, BB circuitry 820 outputs it to, for example, a physical (PHY) layer for wireless transmission.
Fig. 8A illustrates an exemplary condition 800A for turning TCP ACK optimization on or off, in accordance with some implementations of the disclosure. Condition 800A is based on determining whether a churn rate 824 is below a threshold. The churn rate 824 may be a parameter describing how fast packets stored in the L2 queue 823 are processed (e.g., dequeued and transferred to the PHY). To evaluate condition 800A, the TCP ACK optimization block 821 receives the churn rate 824 from the L2 block 822 and determines whether to turn on TCP ACK optimization to reduce backlog. If the churn rate 824 exceeds the threshold, the TCP ACK optimization block 821 may infer that the backlog in the L2 queue 823 is low. If the churn rate 824 is below the threshold, the TCP ACK optimization block 821 may infer that the backlog in the L2 queue 823 is high and optimization is required.
Fig. 8B illustrates an exemplary condition 800B for turning TCP ACK optimization on or off, in accordance with some implementations of the disclosure. Condition 800B is applicable when the uplink packets from AP 810 to BB circuitry 820 are divided into multiple queues, including a high priority queue 825 for high priority data and a best effort queue 826 for best effort data without stringent priority requirements (e.g., TCP data packets in a File Transfer Protocol (FTP) upload). Condition 800B is based on determining whether an uplink packet to be transmitted belongs to a particular queue of the multiple queues. In an example, AP 810 is configured to transmit all TCP ACKs in high priority queue 825. In this case, since high priority queue 825 is the only queue with TCP ACKs, TCP ACK optimization is turned on only for high priority queue 825 and turned off for the other queues. In another example, AP 810 is also configured to transmit TCP ACKs in best effort queue 826, but at a lower speed than the data in high priority queue 825. In this case, TCP ACK optimization is turned on only for best effort queue 826 to improve processing efficiency, and turned off for the other queues.
Fig. 8C illustrates an exemplary condition 800C for turning TCP ACK optimization on or off, in accordance with some implementations of the disclosure. Condition 800C is applicable when multiple DRBs (data radio bearers for transmitting data) are available to the L2 block for processing uplink packets. In this case, condition 800C may designate certain DRBs for TCP ACK optimization. In an example, TCP ACK optimization is turned on only for the default DRB handling the Internet PDN. In an example, TCP ACK optimization is turned on only for DRBs whose throughput is within a given range (e.g., the DRB with the highest throughput, the DRB with the lowest throughput, DRBs whose throughput is below a threshold, or DRBs whose throughput is above a threshold). In an example, TCP ACK optimization is turned on only for bi-directional DRBs (e.g., DRBs with both uplink data and incoming downlink data).
Fig. 8D illustrates an exemplary condition 800D for turning TCP ACK optimization on or off, in accordance with some implementations of the disclosure. Condition 800D applies when multiple packet data networks (PDNs) are available from AP 810 to BB circuitry 820. Since not all PDNs have high throughput, condition 800D may select one or more PDNs for TCP ACK optimization based on, for example, the throughput needs of each PDN.
Fig. 8E illustrates an exemplary condition 800E for turning TCP ACK optimization on or off, in accordance with some implementations of the disclosure. Condition 800E is based on the power mode of the client device. When the client device is operating in a low power mode controlled by power control block 827, TCP ACK optimization may be turned off to save power. Otherwise, TCP ACK optimization may be turned on to improve data transmission efficiency.
Fig. 8F illustrates an exemplary condition 800F for turning TCP ACK optimization on or off, in accordance with some implementations of the disclosure. Condition 800F applies when the IP stack 812 divides the uplink data into multiple IP packet flows. In this case, condition 800F may select certain IP packet flows for TCP ACK optimization. Exemplary selection criteria include historical information for an IP packet flow and TCP receive window adaptation information. For example, when the UE recognizes that there were a large number of TCP ACKs in previous packets of a particular flow, the UE may turn on TCP ACK optimization for all subsequent packets of the same flow. Conversely, when the UE recognizes that there were no TCP ACKs in previous packets of a particular flow, the UE may turn off TCP ACK optimization for that flow. In addition, the UE may be configured to turn on TCP ACK optimization when the TCP sliding window is small (e.g., less than a given threshold), in order to reduce the number of packets sent. These criteria may be based on flow information 828 provided by, for example, the IP stack 812.
Fig. 8G illustrates an exemplary condition 800G for turning TCP ACK optimization on or off, in accordance with some implementations of the disclosure. Condition 800G is based on determining whether the BB circuitry 820 detects a downlink TCP data packet 829. For example, TCP ACK optimization may be turned on only when BB circuitry 820 detects downlink TCP data packet 829. This is because when there is no downlink TCP data, no TCP ACK will be sent in the uplink direction, and thus TCP ACK optimization is not required.
The conditions described above with respect to the examples of fig. 8A-8G, along with other possible conditions for turning TCP ACK optimization on and off, provide the client device with great flexibility in balancing performance metrics such as power consumption, computing resources, hardware complexity, and transmission delay. The client device may implement one or more of these conditions depending on its particular performance needs.
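As a sketch of how such gating might be combined on a device, the function below checks a few of the example conditions in order. The field names and thresholds are invented for illustration, and a real modem would apply whichever subset of the conditions of figs. 8A-8G fits its performance needs.

```python
def tcp_ack_optimization_enabled(ctx):
    """Return True if TCP ACK optimization should run for this queue.

    `ctx` is a dict of hypothetical status fields gathered from the
    power controller (fig. 8E), the L2 block (fig. 8A), the downlink
    path (fig. 8G), and the queue configuration (figs. 8B/8C).
    """
    if ctx['low_power_mode']:                  # fig. 8E: save power first
        return False
    if not ctx['downlink_tcp_seen']:           # fig. 8G: no downlink TCP data
        return False                           # means no uplink ACKs to prune
    if ctx['churn_rate'] >= ctx['churn_threshold']:
        return False                           # fig. 8A: backlog is low
    return ctx['queue_carries_tcp_acks']       # figs. 8B/8C: only optimize
                                               # queues or DRBs carrying ACKs
```

Evaluating the cheap on/off checks before the per-packet work keeps the O(n) compaction pass from running at all when it cannot help.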
Fig. 9 illustrates an example of infrastructure equipment 900 according to various implementations. Infrastructure equipment 900 (or "system 900") may be implemented as a base station, a radio head, a RAN node (such as RAN nodes 112a and 112b and/or AP 104 shown and described previously), an application server 110, and/or any other element/device discussed herein. In other examples, system 900 may be implemented in or by client device 102. System 900 includes application circuitry 905, baseband circuitry 910, one or more radio front end modules (RFEMs) 915, memory circuitry 920, programs 922 stored in memory circuitry 920, a power management integrated circuit (PMIC) 925, power tee circuitry 930, network controller circuitry 935, a network interface connector 940, satellite positioning circuitry 945, and a user interface 950.
In some implementations, the apparatus 900 may include additional elements such as, for example, memory/storage, a display, a camera, a sensor, or an input/output (I/O) interface. In other implementations, these components may be included in more than one device. For example, the circuitry may be separately included in more than one device for CRAN, vBBU, or other similar implementations.
The application circuitry 905 may include circuitry such as, but not limited to, one or more processors (or processor cores), cache memory, and one or more of the following: low dropout regulators (LDOs), interrupt controllers, serial interfaces such as SPI, I2C, or universal programmable serial interface modules, real-time clocks (RTCs), timer-counters (including interval timers and watchdog timers), universal input/output (I/O or IO), memory card controllers such as Secure Digital (SD) MultiMedia Card (MMC) or similar, Universal Serial Bus (USB) interfaces, Mobile Industry Processor Interface (MIPI) interfaces, and Joint Test Access Group (JTAG) test access ports. The processor (or core) of the application circuit 905 may be coupled with or may include memory/storage elements and may be configured to execute instructions stored in the memory/storage device to enable various applications or operating systems to run on the system 900. In some implementations, the memory/storage elements may be on-chip memory circuitry that may include any suitable volatile and/or non-volatile memory, such as DRAM, SRAM, EPROM, EEPROM, flash memory, solid-state memory, and/or any other type of memory device technology, such as those discussed herein.
The processors of application circuitry 905 may include, for example, one or more processor cores (CPUs), one or more application processors, one or more Graphics Processing Units (GPUs), one or more Reduced Instruction Set Computing (RISC) processors, one or more Acorn RISC Machine (ARM) processors, one or more Complex Instruction Set Computing (CISC) processors, one or more Digital Signal Processors (DSPs), one or more FPGAs, one or more PLDs, one or more ASICs, one or more microprocessors or controllers, or any suitable combination thereof.
In some implementations, the application circuitry 905 may include or be a dedicated processor/controller for operating in accordance with various implementations herein. By way of example, the processor of the application circuit 905 may include one or more Apple A series processors; Intel processors; Advanced Micro Devices (AMD) processors or Accelerated Processing Units (APUs); ARM-based processors licensed from ARM Holdings, Ltd., such as ARM Cortex-A series processors provided by Cavium(TM), Inc.; MIPS-based designs from MIPS Technologies, Inc., such as MIPS Warrior P-class processors; and the like. In some implementations, the system 900 may not utilize the application circuit 905, and may instead include a dedicated processor or controller to process IP data received from, for example, the EPC or 5GC.
In some implementations, the application circuitry 905 may include one or more hardware accelerators, which may be microprocessors, programmable processing devices, or the like. The one or more hardware accelerators may include, for example, computer Vision (CV) and/or Deep Learning (DL) accelerators. For example, the programmable processing device may be one or more Field Programmable Devices (FPDs), such as a Field Programmable Gate Array (FPGA), or the like; programmable Logic Devices (PLDs), such as Complex PLDs (CPLDs), high-capacity PLDs (HCPLDs), and the like; an ASIC, such as a structured ASIC; a programmable SoC (PSoC); etc. In such implementations, the circuitry of application circuit 905 may include logic blocks or logic frameworks, as well as other interconnect resources that may be programmed to perform various functions, such as programs, methods, functions, and the like.
In such implementations, the circuitry of application circuitry 905 may include memory cells (e.g., erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), flash memory, static memory (e.g., static random access memory (SRAM), anti-fuses, etc.)) for storing logic blocks, logic fabrics, data, and so forth.
The baseband circuitry 910 may be implemented, for example, as a solder-in substrate comprising one or more integrated circuits, a single packaged integrated circuit soldered to a host circuit board, or a multi-chip module containing two or more integrated circuits.
The user interface circuitry 950 may include one or more user interfaces designed to enable a user to interact with the system 900, or a peripheral component interface designed to enable a peripheral component to interact with the system 900. The user interface may include, but is not limited to, one or more physical or virtual buttons (e.g., a reset button), one or more indicators (e.g., light Emitting Diodes (LEDs)), a physical keyboard or keypad, a mouse, a touch pad, a touch screen, a speaker or other audio emitting device, a microphone, a printer, a scanner, a headset, a display screen or display device, and the like. Peripheral component interfaces may include, but are not limited to, non-volatile memory ports, universal Serial Bus (USB) ports, audio jacks, power interfaces, and the like.
Radio Front End Module (RFEM) 915 may include a millimeter wave (mmWave) RFEM and one or more sub-millimeter wave radio frequency integrated circuits (RFICs). In some implementations, the one or more sub-millimeter wave RFICs may be physically separate from the millimeter wave RFEM. The RFICs may include connections to one or more antennas or antenna arrays, and the RFEM may be connected to multiple antennas. In alternative implementations, the radio functions of both millimeter wave and sub-millimeter wave may be implemented in the same physical RFEM 915, which incorporates both millimeter wave antennas and sub-millimeter wave circuitry.
Memory circuit 920 may include one or more of the following: volatile memory, including dynamic random access memory (DRAM) and/or synchronous dynamic random access memory (SDRAM); and nonvolatile memory (NVM), including high-speed electrically erasable memory (commonly referred to as flash memory), phase-change random access memory (PRAM), magnetoresistive random access memory (MRAM), etc., and may incorporate three-dimensional (3D) cross-point (XPoint) memory. Memory circuit 920 may be implemented as one or more of the following: solder-down packaged integrated circuits, socketed memory modules, and plug-in memory cards.
The PMIC 925 may include a voltage regulator, a surge protector, a power alert detection circuit, and one or more backup power sources, such as a battery or a capacitor. The power alert detection circuit may detect one or more of a power down (under voltage) and surge (over voltage) condition.
The power tee circuit 930 may provide power extracted from the network cable to use a single cable to provide both power and data connections for the infrastructure equipment 900.
Network controller circuit 935 may provide connectivity to the network using standard network interface protocols, such as Ethernet, GRE tunnel-based Ethernet, multiprotocol label switching (MPLS) based Ethernet, or some other suitable protocol. The network connection may be provided to/from the infrastructure equipment 900 via the network interface connector 940 using a physical connection, which may be an electrical connection (commonly referred to as a "copper interconnect"), an optical connection, or a wireless connection. The network controller circuit 935 may include one or more dedicated processors and/or FPGAs for communicating using one or more of the aforementioned protocols. In some implementations, the network controller circuit 935 may include multiple controllers for providing connections to other networks using the same or different protocols.
The positioning circuitry 945 includes circuitry for receiving and decoding signals transmitted/broadcast by a positioning network of a global navigation satellite system (GNSS). Examples of navigation satellite constellations (or GNSS) include the United States' Global Positioning System (GPS), Russia's Global Navigation System (GLONASS), the European Union's Galileo system, China's BeiDou Navigation Satellite System, and regional navigation systems or GNSS augmentation systems (e.g., Navigation with Indian Constellation (NAVIC), Japan's Quasi-Zenith Satellite System (QZSS), France's Doppler Orbitography and Radio-positioning Integrated by Satellite (DORIS), etc.), and so on.
The positioning circuitry 945 may include various hardware elements (e.g., including hardware devices such as switches, filters, amplifiers, antenna elements, etc. for facilitating OTA communications) to communicate with components of a positioning network such as navigation satellite constellation nodes. In some implementations, the positioning circuitry 945 may include a micro-technology (micro PNT) IC for positioning, navigation, and timing that performs position tracking/estimation using a master timing clock without GNSS assistance. Positioning circuitry 945 may also be part of or interact with baseband circuitry 910 and/or RFEM 915 to communicate with nodes and components of a positioning network. Positioning circuitry 945 may also provide location data and/or time data to application circuitry 905, which may use the data to synchronize operations with various infrastructure (e.g., RAN nodes 112a, 112b, etc.), and so on.
The components shown in fig. 9 may communicate with each other using interface circuitry that may include any number of bus and/or interconnect (IX) technologies, such as Industry Standard Architecture (ISA), Enhanced ISA (EISA), Peripheral Component Interconnect (PCI), Peripheral Component Interconnect extended (PCIx), PCI express (PCIe), or any number of other technologies. The bus/IX may be a proprietary bus, for example, as used in SoC-based systems. Other bus/IX systems may be included, such as I2C interfaces, SPI interfaces, point-to-point interfaces, and power buses, among others.
FIG. 10 illustrates an example of a computer platform 1000 (or "device 1000") according to various implementations. In some implementations, computer platform 1000 may be adapted to function as client device 102, application server 110, and/or any other element/device discussed herein. Platform 1000 may include any combination of the components shown in the example. The components of platform 1000 may be implemented as integrated circuits (ICs), portions thereof, discrete electronic devices, or other modules, logic, hardware, software, firmware, or combinations thereof adapted in computer platform 1000, or as components otherwise incorporated within the chassis of a larger system. The block diagram of FIG. 10 is intended to illustrate a high-level view of the components of computer platform 1000. However, some of the illustrated components may be omitted, additional components may be present, and different arrangements of the illustrated components may occur in other implementations.
The application circuitry 1005 includes circuitry such as, but not limited to, one or more processors (or processor cores), cache memory, and one or more of the following: LDOs, interrupt controllers, serial interfaces (such as SPI, I2C, or universal programmable serial interface modules), RTCs, timers (including interval timers and watchdog timers), universal I/Os, memory card controllers (such as SD MMC or similar controllers), USB interfaces, MIPI interfaces, and JTAG test access ports. The processor (or core) of the application circuit 1005 may be coupled with or may include memory/storage elements and may be configured to execute instructions stored in the memory/storage device to enable various applications or operating systems to run on the system 1000. In some implementations, the memory/storage elements may be on-chip memory circuitry that may include any suitable volatile and/or non-volatile memory, such as DRAM, SRAM, EPROM, EEPROM, flash memory, solid-state memory, and/or any other type of memory device technology, such as those discussed herein.
The processor of application circuit 1005 may include, for example, one or more processor cores, one or more application processors, one or more GPUs, one or more RISC processors, one or more ARM processors, one or more CISC processors, one or more DSPs, one or more FPGAs, one or more PLDs, one or more ASICs, one or more microprocessors or controllers, a multi-threaded processor, an ultra-low voltage processor, an embedded processor, some other known processing elements, or any suitable combination thereof.
In some implementations, the application circuitry 1005 may include or be a dedicated processor/controller for operating in accordance with various implementations herein. By way of example, the processor of the application circuit 1005 may comprise an Apple A series processor. The processor of the application circuit 1005 may also be one or more of the following: an Intel Architecture Core(TM)-based processor, such as a Quark(TM), Atom(TM), i3, i5, i7, or MCU-class processor, or another such processor commercially available from Intel Corporation of Santa Clara, CA; an Advanced Micro Devices (AMD) processor or Accelerated Processing Unit (APU); a Snapdragon(TM) processor from Qualcomm Technologies, Inc.; an Open Multimedia Applications Platform (OMAP)(TM) processor from Texas Instruments, Inc.; a MIPS-based design from MIPS Technologies, Inc., such as a MIPS Warrior M-class, Warrior I-class, or Warrior P-class processor; an ARM-based design licensed from ARM Holdings, Ltd., such as an ARM Cortex-A, Cortex-R, or Cortex-M family processor; etc.
In some implementations, the application circuit 1005 may be part of a system on a chip (SoC) in which the application circuit 1005 and other components are formed as a single integrated circuit. Additionally or alternatively, the application circuitry 1005 may include circuitry such as, but not limited to, one or more Field Programmable Devices (FPDs) such as FPGAs, or the like; programmable Logic Devices (PLDs), such as Complex PLDs (CPLDs), high-capacity PLDs (HCPLDs), and the like; an ASIC, such as a structured ASIC; a programmable SoC (PSoC); etc. In such implementations, the circuitry of application circuit 1005 may include logic blocks or logic frameworks, as well as other interconnect resources that may be programmed to perform various functions, such as programs, methods, functions, and the like.
In some implementations, the circuitry of application circuitry 1005 may include memory cells (e.g., erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), flash memory, static memory (e.g., static random access memory (SRAM), anti-fuses, etc.)) for storing logic blocks, logic fabrics, data, etc. In some implementations, the client device 102 may include one or more processors configured to execute software instructions stored in the application circuitry 1005. The application circuit 1005 may include an output queue optimizer 1048.
The baseband circuitry 1010 may be implemented, for example, as a solder-in substrate comprising one or more integrated circuits, a single packaged integrated circuit soldered to a main circuit board, or a multi-chip module containing two or more integrated circuits. In some implementations, baseband circuitry 1010 is similar to baseband circuitry 103. In some implementations, the operations performed by the interaction between output queue optimizer 1048 and baseband circuitry 1010 are similar to the operations performed by BB circuitry 103 to manage output queue 107. This may occur, for example, when one or more processors associated with BB circuitry 103 execute instructions to perform operations similar to those performed by output queue optimizer 1048 and baseband circuitry 1010.
RFEM 1015 may include millimeter wave (mmWave) RFEM and one or more sub-millimeter wave Radio Frequency Integrated Circuits (RFICs). In some implementations, the one or more sub-millimeter wave RFICs may be physically separate from the millimeter wave RFEM. The RFICs may include connections to one or more antennas or antenna arrays, and the RFEM may be connected to multiple antennas. In alternative implementations, the radio functions of both millimeter wave and sub-millimeter wave may be implemented in the same physical RFEM 1015, which incorporates both millimeter wave antennas and sub-millimeter wave antennas.
Memory circuitry 1020 may include any number and type of memory devices for providing a given amount of system memory. For example, the memory circuit 1020 may include one or more of the following: volatile memory including Random Access Memory (RAM), dynamic RAM (DRAM), and/or Synchronous Dynamic RAM (SDRAM), non-volatile memory (NVM) including high speed electrically erasable memory (commonly referred to as flash memory), phase change random access memory (PRAM), magnetoresistive Random Access Memory (MRAM), and the like.
The memory circuit 1020 may be developed in accordance with a Low Power Double Data Rate (LPDDR)-based design of the Joint Electron Device Engineering Council (JEDEC), such as LPDDR2, LPDDR3, LPDDR4, etc. The memory circuit 1020 may be implemented as one or more of the following: solder-in package integrated circuits, Single Die Packages (SDPs), Dual Die Packages (DDPs) or quad die packages (Q17P), socketed memory modules, dual in-line memory modules (DIMMs) including micro DIMMs or mini DIMMs, and/or soldered to a motherboard via a Ball Grid Array (BGA). In a low power implementation, the memory circuit 1020 may be an on-chip memory or register associated with the application circuit 1005. To provide persistent storage of information, such as data, applications, operating systems, etc., the memory circuit 1020 may include one or more mass storage devices, which may include, among other things, a Solid State Disk Drive (SSDD), a Hard Disk Drive (HDD), a micro HDD, a resistance change memory, a phase change memory, a holographic memory, or a chemical memory, etc. For example, computer platform 1000 may incorporate three-dimensional (3D) cross-point (XPOINT) memory from Intel® and Micron®.
Removable memory circuit 1023 may include devices, circuits, housings/shells, ports or receptacles, etc. for coupling portable data storage devices to platform 1000. These portable data storage devices may be used for mass storage and may include, for example, flash memory cards (e.g., secure Digital (SD) cards, micro SD cards, xD picture cards, etc.), as well as USB flash drives, optical disks, external HDDs, etc.
Platform 1000 may also include interface circuitry (not shown) for connecting external devices to platform 1000. External devices connected to platform 1000 via the interface circuitry include sensor circuitry 1021 and electro-mechanical components (EMC) 1022, as well as removable memory devices coupled to removable memory circuitry 1023.
The sensor circuit 1021 includes devices, modules, or subsystems whose goal is to detect events or changes in their environment and send information (sensor data) about the detected events to some other device, module, subsystem, etc. Examples of such sensors include, inter alia: an Inertial Measurement Unit (IMU) comprising an accelerometer, gyroscope, and/or magnetometer; Microelectromechanical Systems (MEMS) or Nanoelectromechanical Systems (NEMS) including triaxial accelerometers, triaxial gyroscopes, and/or magnetometers; a liquid level sensor; a flow sensor; a temperature sensor (e.g., a thermistor); a pressure sensor; an air pressure sensor; a gravimeter; an altimeter; an image capturing device (e.g., a camera or a lens-free aperture); a light detection and ranging (LiDAR) sensor; proximity sensors (e.g., an infrared radiation detector, etc.); depth sensors; ambient light sensors; ultrasonic transceivers; a microphone or other similar audio capturing device; etc.
EMC 1022 includes devices, modules, or subsystems that are intended to enable platform 1000 to change its state, position, and/or orientation, or to move or control a mechanism or system (or subsystem). Additionally, EMC 1022 may be configured to generate and send messages/signaling to other components of platform 1000 to indicate the current state of EMC 1022. EMC 1022 includes one or more power switches, relays (including electromechanical relays (EMRs) and/or Solid State Relays (SSRs)), actuators (e.g., valve actuators, etc.), audible sound generators, visual warning devices, motors (e.g., DC motors, stepper motors, etc.), wheels, propellers, claws, clamps, hooks, and/or other similar electromechanical components. In particular implementations, platform 1000 is configured to operate one or more EMCs 1022 based on one or more captured events and/or instructions or control signals received from a service provider and/or various clients.
In some implementations, interface circuitry may connect platform 1000 with positioning circuitry 1045. The positioning circuitry 1045 includes circuitry for receiving and decoding signals transmitted/broadcast by a positioning network of a GNSS. Examples of navigation satellite constellations (or GNSS) include GPS in the United States, GLONASS in Russia, the Galileo system in the European Union, the BeiDou Navigation Satellite System in China, regional navigation systems or GNSS augmentation systems (e.g., NAVIC, QZSS in Japan, DORIS in France, etc.), and so forth. The positioning circuitry 1045 may comprise various hardware elements (e.g., including hardware devices such as switches, filters, amplifiers, antenna elements, etc. for facilitating OTA communications) to communicate with components of the positioning network, such as navigation satellite constellation nodes. In some implementations, the positioning circuitry 1045 may include a miniature PNT IC that performs position tracking/estimation using a master timing clock without GNSS assistance. The positioning circuitry 1045 may also be part of, or interact with, the baseband circuitry 1010 and/or the RFEM 1015 to communicate with nodes and components of a positioning network. The positioning circuit 1045 may also provide location data and/or time data to the application circuit 1005, which may use the data to synchronize operation with various infrastructure (e.g., radio base stations) for turn-by-turn navigation applications, etc.
In some implementations, interface circuitry may connect platform 1000 with Near Field Communication (NFC) circuitry 1040. NFC circuit 1040 is configured to provide contactless proximity communications based on Radio Frequency Identification (RFID) standards, wherein magnetic field induction is used to enable communications between NFC circuit 1040 and NFC-enabled devices external to platform 1000 (e.g., an "NFC contact point"). NFC circuit 1040 includes an NFC controller coupled with an antenna element and a processor coupled with the NFC controller. The NFC controller may be a chip/IC that provides NFC functionality to NFC circuit 1040 by executing NFC controller firmware and an NFC stack. The NFC stack may be executable by the processor to control the NFC controller, and the NFC controller firmware may be executable by the NFC controller to control the antenna element to transmit the short range RF signal. The RF signal may power a passive NFC tag (e.g., a microchip embedded in a sticker or wristband) to transfer stored data to NFC circuit 1040 or initiate a data transfer between NFC circuit 1040 and another active NFC device (e.g., a smart phone or NFC-enabled POS terminal) near platform 1000.
Drive circuitry 1046 may include software elements and hardware elements for controlling particular devices embedded in platform 1000, attached to platform 1000, or otherwise communicatively coupled with platform 1000. Drive circuitry 1046 may include various drives to allow other components of platform 1000 to interact with or control various input/output (I/O) devices that may be present within or connected to platform 1000. For example, the driving circuit 1046 may include: a display driver for controlling and allowing access to the display device, a touch screen driver for controlling and allowing access to a touch screen interface of platform 1000, a sensor driver for obtaining sensor readings of sensor circuit 1021 and controlling and allowing access to sensor circuit 1021, an EMC driver for obtaining the actuator position of EMC 1022 and/or controlling and allowing access to EMC 1022, a camera driver for controlling and allowing access to the embedded image capturing device, an audio driver for controlling and allowing access to one or more audio devices.
A Power Management Integrated Circuit (PMIC) 1025 (also referred to as a "power management circuit 1025") may manage the power provided to the various components of platform 1000. In particular, with respect to baseband circuitry 1010, PMIC 1025 may control power supply selection, voltage regulation, battery charging, or DC-DC conversion. PMIC 1025 may generally be included when platform 1000 is capable of being powered by battery 1030, for example, when the device is included in client device 102.
In some implementations, PMIC 1025 may control, or otherwise be part of, various power saving mechanisms of platform 1000. For example, if platform 1000 is in an RRC_Connected state, in which it is still connected to a RAN node because it expects to receive traffic shortly, then after a period of inactivity the platform may enter a state known as discontinuous reception mode (DRX). During this state, platform 1000 may power down for brief intervals of time, thereby conserving power. If there is no data traffic activity for an extended period of time, platform 1000 may transition to an RRC_Idle state, in which the device disconnects from the network and does not perform operations such as channel quality feedback, handover, etc. Platform 1000 enters a very low power state and performs paging, in which the device periodically wakes up to listen to the network and then powers down again. Platform 1000 cannot receive data in this state; in order to receive data, it must transition back to the RRC_Connected state. An additional power saving mode may allow the device to be unavailable to the network for periods longer than a paging interval (ranging from seconds to hours). During this time, the device is entirely unreachable by the network and may power down completely. Any data sent during this period incurs a significant delay, and the delay is assumed to be acceptable.
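The power-saving state transitions described above can be sketched as a simple state machine. The following is an illustrative sketch only: the state names follow the RRC states named in the text, but the function names and inactivity thresholds are hypothetical and not part of the disclosed platform (real thresholds come from network configuration).

```python
from enum import Enum

class RrcState(Enum):
    CONNECTED = "RRC_Connected"   # actively connected to a RAN node
    CONNECTED_DRX = "DRX"         # discontinuous reception within RRC_Connected
    IDLE = "RRC_Idle"             # disconnected; periodic paging wake-ups only

def next_state(state: RrcState, inactivity_s: float,
               drx_threshold_s: float = 0.1,
               idle_threshold_s: float = 10.0) -> RrcState:
    """Advance the power-saving state machine based on observed inactivity.

    Short inactivity moves the device into DRX; extended inactivity drops
    it to RRC_Idle. Threshold values here are purely illustrative.
    """
    if inactivity_s >= idle_threshold_s:
        return RrcState.IDLE
    if inactivity_s >= drx_threshold_s:
        return RrcState.CONNECTED_DRX
    return state

def can_receive_data(state: RrcState) -> bool:
    # In RRC_Idle the device cannot receive data; it must first
    # transition back to RRC_Connected.
    return state is not RrcState.IDLE
```

For example, a platform that has seen a minute of inactivity would be driven to `RrcState.IDLE` and would have to reconnect before any downlink data could be delivered.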
Battery 1030 may power platform 1000, but in some examples, platform 1000 may be installed and deployed in a fixed location and may have a power source coupled to a power grid. Battery 1030 may be a lithium ion battery, a metal-air battery such as a zinc-air battery, an aluminum-air battery, a lithium-air battery, or the like. In some implementations, such as in a V2X application, battery 1030 may be a typical lead-acid automotive battery.
In some implementations, battery 1030 may be a "smart battery" that includes or is coupled to a Battery Management System (BMS) or battery monitoring integrated circuit. A BMS may be included in platform 1000 to track the state of charge of battery 1030 (SoCh). The BMS may be used to monitor other parameters of battery 1030, such as the state of health (SoH) and the state of function (SoF) of battery 1030, to provide a fault prediction. The BMS may communicate information of battery 1030 to application circuit 1005 or other components of platform 1000. The BMS may also include an analog-to-digital (ADC) converter that allows the application circuit 1005 to directly monitor the voltage of the battery 1030 or the current from the battery 1030. Battery parameters may be used to determine actions that platform 1000 may perform, such as transmission frequency, network operation, sensing frequency, and the like.
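As noted above, battery parameters reported by the BMS (such as state of charge) may be used to determine actions that platform 1000 performs, such as transmission frequency. A minimal sketch of such a policy follows; the data structure, thresholds, and function names are hypothetical illustrations, not part of the disclosed system.

```python
from dataclasses import dataclass

@dataclass
class BatteryStatus:
    state_of_charge: float   # SoCh, in the range 0.0..1.0
    state_of_health: float   # SoH, in the range 0.0..1.0
    voltage_v: float         # e.g., sampled via the BMS ADC

def choose_tx_interval_s(status: BatteryStatus,
                         base_interval_s: float = 1.0) -> float:
    """Lengthen the transmission interval as the battery depletes.

    Illustrative policy: transmit 4x less often below 20% charge and
    2x less often below 50% charge.
    """
    if status.state_of_charge < 0.2:
        return base_interval_s * 4
    if status.state_of_charge < 0.5:
        return base_interval_s * 2
    return base_interval_s
```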
A power block or other power source coupled to the power grid may be coupled with the BMS to charge battery 1030. In some examples, the power block may be replaced with a wireless power receiver to draw power wirelessly, for example, through a loop antenna in computer platform 1000. In these examples, a wireless battery charging circuit may be included in the BMS. The particular charging circuit selected may depend on the size of battery 1030, and thus on the current required. Charging may be performed using the AirFuel standard promulgated by the AirFuel Alliance, the Qi wireless charging standard promulgated by the Wireless Power Consortium, or the Rezence charging standard promulgated by the Alliance for Wireless Power, among others.
User interface circuitry 1050 includes various input/output (I/O) devices that reside within or are connected to platform 1000 and include one or more user interfaces designed to enable user interaction with platform 1000 and/or peripheral component interfaces designed to enable interaction with peripheral components of platform 1000.
The user interface circuit 1050 includes input device circuitry and output device circuitry. The input device circuitry includes any physical or virtual means for accepting input, including, inter alia, one or more physical or virtual buttons (e.g., a reset button), a physical keyboard, a keypad, a mouse, a touch pad, a touch screen, a microphone, a scanner, a headset, and the like. The output device circuitry includes any physical or virtual means for displaying or otherwise conveying information, such as sensor readings, actuator positions, or other similar information. The output device circuitry may include any number and/or combination of audio or visual displays, including, inter alia, one or more simple visual outputs/indicators (e.g., binary status indicators such as Light Emitting Diodes (LEDs), and multi-character visual outputs) or more complex outputs such as display devices or touch screens (e.g., Liquid Crystal Displays (LCDs), LED displays, quantum dot displays, projectors, etc.), where the output of characters, graphics, multimedia objects, etc. is generated or produced by operation of platform 1000.
In some implementations, the sensor circuit 1021 may be used as an input device circuit (e.g., image capture device, motion capture device, etc.) and one or more EMCs may be used as an output device circuit (e.g., actuator for providing haptic feedback, etc.). In another example, an NFC circuit may be included to read an electronic tag and/or connect with another NFC enabled device, the NFC circuit including an NFC controller and a processing device coupled with an antenna element. Peripheral component interfaces may include, but are not limited to, non-volatile memory ports, USB ports, audio jacks, power interfaces, and the like.
Although not shown, the components of platform 1000 may communicate with each other using a suitable bus or Interconnect (IX) technology, which may include any number of technologies including ISA, EISA, PCI, PCIx, PCIe, a Time-Triggered Protocol (TTP) system, a FlexRay system, or any number of other technologies. The bus/IX may be a proprietary bus/IX, for example, a proprietary bus used in SoC-based systems. Other bus/IX systems may be included, such as an I2C interface, an SPI interface, point-to-point interfaces, and a power bus, among others.
Fig. 11 illustrates various protocol functions that may be implemented in a wireless communication device in accordance with various implementations. In particular, fig. 11 includes an arrangement 1100 illustrating interconnections between various protocol layers/entities. The following description of fig. 11 is provided for various protocol layers/entities operating in conjunction with the 5G/NR system standard and the LTE system standard, but some or all aspects of fig. 11 may also be applicable to other wireless communication network systems.
The protocol layers of arrangement 1100 may include one or more of PHY 1110, MAC 1120, RLC 1130, PDCP 1140, SDAP 1147, RRC 1155, and NAS layer 1157, among other higher layer functions not shown. These protocol layers may include one or more service access points (e.g., items 1159, 1156, 1150, 1149, 1145, 1135, 1125, and 1115 in fig. 11) capable of providing communication between two or more protocol layers.
PHY 1110 may transmit and receive physical layer signals 1105, which may be received from or transmitted to one or more other communication devices. Physical layer signals 1105 may include one or more physical channels, such as those discussed herein. PHY 1110 may also perform link adaptation or Adaptive Modulation and Coding (AMC), power control, cell search (e.g., for initial synchronization and handover purposes), and other measurements used by higher layers, such as RRC 1155. PHY 1110 may further perform error detection on the transport channels, forward Error Correction (FEC) encoding/decoding of the transport channels, modulation/demodulation of the physical channels, interleaving, rate matching, mapping to the physical channels, and MIMO antenna processing. In implementations, an instance of PHY 1110 may process requests from an instance of MAC 1120 via one or more PHY-SAPs 1115 and provide an indication thereto. According to some implementations, the request and indication transmitted via PHY-SAP 1115 may include one or more transport channels.
An instance of MAC 1120 may process and provide an indication to a request from an instance of RLC 1130 via one or more MAC-SAPs 1125. These requests and indications transmitted via MAC-SAP 1125 may include one or more logical channels. MAC 1120 may perform mapping between logical channels and transport channels, multiplexing MAC SDUs from one or more logical channels onto TBs to be delivered to PHY 1110 via transport channels, demultiplexing MAC SDUs from TBs delivered from PHY 1110 via transport channels onto one or more logical channels, multiplexing MAC SDUs onto TBs, scheduling information reporting, error correction by HARQ, and logical channel prioritization.
An instance of RLC 1130 can process and provide an indication to a request from an instance of PDCP 1140 via one or more radio link control service access points (RLC-SAPs) 1135. These requests and indications transmitted via RLC-SAP 1135 may include one or more RLC channels. RLC 1130 may operate in a variety of modes of operation including: transparent Mode (TM), unacknowledged Mode (UM), and Acknowledged Mode (AM). RLC 1130 may perform transmission of upper layer Protocol Data Units (PDUs), error correction by automatic repeat request (ARQ) for AM data transmission, and concatenation, segmentation and reassembly of RLC SDUs for UM and AM data transmission. RLC 1130 may also perform re-segmentation of RLC data PDUs for AM data transmissions, re-ordering RLC data PDUs for UM and AM data transmissions, detecting duplicate data for UM and AM data transmissions, discarding RLC SDUs for UM and AM data transmissions, detecting protocol errors for AM data transmissions, and performing RLC re-establishment.
An instance of PDCP 1140 may process and provide an indication to a request from an instance of RRC 1155 and/or an instance of SDAP 1147 via one or more packet data convergence protocol service points (PDCP-SAPs) 1145. These requests and indications communicated via PDCP-SAP 1145 may include one or more radio bearers. PDCP 1140 may perform header compression and decompression of IP data, maintain PDCP Sequence Numbers (SNs), perform sequence delivery of upper layer PDUs upon lower layer re-establishment, eliminate duplication of lower layer SDUs upon re-establishment of the lower layer for radio bearers mapped on RLC AM, encrypt and decrypt control plane data, perform integrity protection and integrity verification on control plane data, control timer-based data discard, and perform security operations (e.g., ciphering, deciphering, integrity protection, integrity verification, etc.).
An instance of the SDAP 1147 can process requests from, and provide indications to, one or more higher layer protocol entities via one or more SDAP-SAPs 1149. These requests and indications communicated via SDAP-SAP 1149 may include one or more QoS flows. The SDAP 1147 can map QoS flows to DRBs and vice versa, and can also mark QoS flow IDs (QFIs) in DL packets and UL packets. A single SDAP entity 1147 may be configured for each individual PDU session. In the UL direction, NG-RAN 112 may control the mapping of QoS flows to DRBs in two different ways: reflective mapping or explicit mapping. For reflective mapping, the SDAP 1147 of client device 102 may monitor the QFIs of DL packets for each DRB and may apply the same mapping for packets flowing in the UL direction. For a DRB, the SDAP 1147 of client device 102 may map UL packets belonging to the QoS flow corresponding to the QoS flow ID and PDU session observed in the DL packets of that DRB. To enable reflective mapping, the NG-RAN may mark DL packets with QoS flow IDs over the Uu interface. Explicit mapping may involve RRC 1155 configuring SDAP 1147 with explicit QoS-flow-to-DRB mapping rules, which SDAP 1147 may store and follow. In implementations, the SDAP 1147 may be used only in NR implementations and may not be used in LTE implementations.
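The reflective mapping behavior described above (learn the QFI-to-DRB association from downlink packets, then apply it to uplink packets of the same QoS flow) can be sketched as follows. This is an illustrative model only; the class and method names are hypothetical and the default-DRB fallback is an assumption, not part of the disclosure.

```python
class ReflectiveSdap:
    """Sketch of SDAP reflective QoS-flow-to-DRB mapping.

    The NG-RAN marks DL packets with a QoS Flow ID (QFI) over the Uu
    interface; the device records which DRB each QFI arrived on, then
    sends UL packets of the same QoS flow on that DRB.
    """

    def __init__(self, default_drb: int = 1):
        self.qfi_to_drb: dict[int, int] = {}
        self.default_drb = default_drb  # assumed fallback for unseen flows

    def observe_downlink(self, qfi: int, drb_id: int) -> None:
        # Learn (or refresh) the QFI -> DRB association from a DL packet.
        self.qfi_to_drb[qfi] = drb_id

    def map_uplink(self, qfi: int) -> int:
        # Mirror the DL mapping for UL packets of the same QoS flow;
        # fall back to a default DRB until a mapping has been observed.
        return self.qfi_to_drb.get(qfi, self.default_drb)
```

Explicit mapping would instead populate `qfi_to_drb` directly from RRC-configured rules rather than from observed downlink traffic.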
The RRC 1155 may configure aspects of one or more protocol layers, which may include one or more instances of PHY 1110, MAC 1120, RLC 1130, PDCP 1140, and SDAP 1147, via one or more management service access points (M-SAPs). In implementations, an instance of the RRC 1155 can process requests from one or more NAS entities 1157 and provide an indication to the one or more NAS entities via one or more RRC-SAPs 1156. The primary services and functions of RRC 1155 may include broadcasting of system information (e.g., included in a MIB or SIB related to NAS), broadcasting of system information related to Access Stratum (AS), paging, establishment, maintenance, and release of RRC connections between client device 102 and RAN (e.g., RRC connection paging, RRC connection establishment, RRC connection modification, and RRC connection release), establishment, configuration, maintenance, and release of point-to-point radio bearers, security functions including key management, inter-RAT mobility, and measurement configuration for UE measurement reporting. These MIB and SIBs may include one or more IEs, each of which may include separate data fields or data structures.
NAS 1157 may form the highest level of the control plane between client device 102 and the AMF. NAS 1157 may support mobility and session management procedures for client device 102 to establish and maintain an IP connection between client device 102 and a P-GW in an LTE system.
According to various implementations, one or more protocol entities of arrangement 1100 may be implemented in client device 102, RAN node 112a, an AMF in an NR implementation or an MME in an LTE implementation, a UPF in an NR implementation or S-GW and P-GW in an LTE implementation, etc., for a control plane or user plane communication protocol stack between the aforementioned devices. In such implementations, one or more protocol entities that may be implemented in one or more of the client devices 102, the gNB 112a, the AMF, etc., may communicate with (perform such communications using services of) respective peer protocol entities that may be implemented in or on another device. In some implementations, the gNB-CU of gNB 112A may host RRC 1155, SDAP 1147, and PDCP 1140 of the gNB that control one or more gNB-DU operations, and the gNB-DU of gNB 112A may host RLC 1130, MAC 1120, and PHY 1110 of gNB 112A, respectively.
In a first example, the control plane protocol stack may include, in order from the highest layer to the lowest layer, NAS 1157, RRC 1155, PDCP 1140, RLC 1130, MAC 1120, and PHY 1110. In this example, upper layers 1160 may be built on top of NAS 1157 and include an IP layer 1161, SCTP 1162, and an application-layer signaling protocol (AP) 1163.
In an NR implementation, AP 1163 may be an NG application protocol layer (NGAP or NG-AP) 1163 for NG interface 113 defined between NG-RAN node 112A and an AMF, or AP 1163 may be an Xn application protocol layer (XnAP or Xn-AP) 1163 for an Xn interface 112B defined between two or more RAN nodes 112A.
NG-AP 1163 may support the functionality of NG interface 113 and may include Elementary Procedures (EPs). An NG-AP EP may be a unit of interaction between NG-RAN node 112A and the AMF. The NG-AP 1163 services may include two groups: UE-associated services (e.g., services related to client device 102) and non-UE-associated services (e.g., services related to the entire NG interface instance between NG-RAN node 112a and the AMF). These services may include functionality including, but not limited to: a paging function for sending paging requests to NG-RAN nodes 112A involved in a particular paging area; a UE context management function for allowing the AMF to establish, modify, and/or release UE contexts in the AMF and NG-RAN node 112a; a mobility function for client device 102 in ECM-CONNECTED mode, with intra-system HOs to support mobility within the NG-RAN and inter-system HOs to support mobility from/to the EPS system; a NAS signaling transport function for transporting or rerouting NAS messages between client device 102 and the AMF; a NAS node selection function for determining an association between the AMF and client device 102; an NG interface management function for setting up the NG interface and monitoring errors over the NG interface; a warning message transmission function for providing a means to transmit warning messages via the NG interface or to cancel an ongoing broadcast of warning messages; a configuration transfer function for requesting and transferring RAN configuration information (e.g., SON information, Performance Measurement (PM) data, etc.) between two RAN nodes 112A via CN 108; and/or other similar functions.
XnAP 1163 may support the functionality of the Xn interface 112B and may include XnAP basic mobility procedures and XnAP global procedures. The XnAP basic mobility procedures may include procedures for handling UE mobility within NG RAN 112A (or E-UTRAN), such as handover preparation and cancellation procedures, SN status transfer procedures, UE context retrieval and UE context release procedures, RAN paging procedures, dual connectivity related procedures, and so on. XnAP global procedures may include procedures unrelated to the particular client device 102, such as Xn interface set-up and reset procedures, NG-RAN update procedures, cell activation procedures, and the like.
In an LTE implementation, the AP 1163 may be an S1 application protocol layer (S1-AP) 1163 for an S1 interface 113 defined between the E-UTRAN node 112A and the MME, or the AP 1163 may be an X2 application protocol layer (X2 AP or X2-AP) 1163 for an X2 interface 112B defined between two or more E-UTRAN nodes 112A.
The S1 application protocol layer (S1-AP) 1163 may support the functionality of the S1 interface and, similar to the NG-AP previously discussed, the S1-AP may include an S1-AP EP. The S1-AP EP may be an interworking unit between the E-UTRAN node 112A and the MME within the LTE core network. S1-AP 1163 services may include two groups: UE-associated services and non-UE-associated services. The functions performed by these services include, but are not limited to: E-UTRAN radio access bearer (E-RAB) management, UE capability indication, mobility, NAS signaling transport, RAN Information Management (RIM), and configuration transport.
The X2AP 1163 may support the functionality of the X2 interface 112B and may include X2AP basic mobility procedures and X2AP global procedures. The X2AP basic mobility procedures may include procedures for handling UE mobility within the E-UTRAN 108, such as handover preparation and cancellation procedures, SN status transfer procedures, UE context retrieval and UE context release procedures, RAN paging procedures, procedures related to dual connectivity, and so on. The X2AP global procedures may include procedures unrelated to a particular client device 102, such as X2 interface setup and reset procedures, load indication procedures, error indication procedures, cell activation procedures, and the like.
The SCTP layer (alternatively referred to as SCTP/IP layer) 1162 may provide guaranteed delivery of application layer messages (e.g., NGAP or XnAP messages in NR implementations, or S1-AP or X2AP messages in LTE implementations). SCTP 1162 may ensure reliable delivery of signaling messages between RAN node 112a or 112b and the AMF/MME based in part on the IP protocol supported by IP 1161. An internet protocol layer (IP) 1161 may be used to perform packet addressing and routing functions. In some implementations, the IP layer 1161 may use point-to-point transmission to deliver and transport PDUs. In this regard, RAN node 112a or 112b may include L2 and L1 layer communication links (e.g., wired or wireless) with the MME/AMF to exchange information.
In a second example, the user plane protocol stack may include, in order from the highest layer to the lowest layer, SDAP 1147, PDCP 1140, RLC 1130, MAC 1120, and PHY 1110. The user plane protocol stack may be used for communication between client device 102, RAN nodes 112a, 112b, and a serving or packet gateway in LTE implementations. In this example, upper layers 1151 may be built on top of the SDAP 1147 and may include a User Datagram Protocol (UDP) and IP layer (UDP/IP) 1152, a General Packet Radio Service (GPRS) Tunneling Protocol for the user plane layer (GTP-U) 1153, and a user plane PDU layer (UP PDU) 1163.
Transport network layer 1154 (also referred to as a "transport layer") may be built on IP transport, and GTP-U 1153 may be used on top of the UDP/IP layer 1152 (comprising both a UDP layer and an IP layer) to carry user plane PDUs (UP-PDUs). The IP layer (also referred to as the "internet layer") may be used to perform packet addressing and routing functions. The IP layer may assign IP addresses to user data packets in, for example, any of the IPv4, IPv6, or PPP formats.
GTP-U 1153 may be used to carry user data within the GPRS core network and between the radio access network and the core network. The user data transported may be packets in, for example, any of the IPv4, IPv6, or PPP formats. UDP/IP 1152 may provide checksums for data integrity, port numbers for addressing different functions at the source and destination, and encryption and authentication of selected data flows. RAN nodes 112a and 112b and the serving and packet gateways (not shown) may utilize the S1-U interface to exchange user plane data via a protocol stack comprising an L1 layer (e.g., PHY 1110), an L2 layer (e.g., MAC 1120, RLC 1130, PDCP 1140, and/or SDAP 1147), the UDP/IP layer 1152, and GTP-U 1153. The serving and packet gateways may utilize the S5/S8a interface to exchange user plane data via a protocol stack comprising an L1 layer, an L2 layer, the UDP/IP layer 1152, and GTP-U 1153. As previously discussed, NAS protocols may support mobility and session management procedures for client device 102 to establish and maintain IP connectivity for client device 102.
Further, although not shown in fig. 11, an application layer may exist above the AP 1163 and/or transport network layer 1154. The application layer may be a layer in which a user of client device 102, RAN node 112a, 112b, or other network element interacts with a software application executed by, for example, application circuitry 905 or application circuitry 1005, respectively. The application layer may also provide one or more interfaces for software applications to interact with the client device 102, the communication system of the RAN nodes 112a, 112b, such as baseband circuitry 910 or 1010. In some implementations, the IP layer and/or the application layer may provide the same or similar functionality as layers 5 through 7 of the Open Systems Interconnection (OSI) model or portions thereof (e.g., OSI layer 7-application layer, OSI layer 6-presentation layer, and OSI layer 5-session layer).
Fig. 12 is a block diagram illustrating components capable of reading instructions from a machine-readable medium or computer-readable medium (e.g., a non-transitory machine-readable storage medium) and performing any one or more of the methods discussed herein, according to some example implementations. In particular, fig. 12 shows a diagrammatic representation of a hardware resource 1200 that includes one or more processors (or processor cores) 1210, one or more memory/storage devices 1220, and one or more communication resources 1230, each of which may be communicatively coupled via a bus 1240. For implementations in which node virtualization (e.g., NFV) is utilized, the hypervisor 1202 can be executed to provide an execution environment for one or more network slices/sub-slices to utilize hardware resources 1200.
Processor 1210 may include, for example, a processor 1212 and a processor 1214. Processor 1210 may be, for example, a Central Processing Unit (CPU), a Reduced Instruction Set Computing (RISC) processor, a Complex Instruction Set Computing (CISC) processor, a Graphics Processing Unit (GPU), a DSP such as a baseband processor, an ASIC, an FPGA, a Radio Frequency Integrated Circuit (RFIC), another processor (including those discussed herein), or any suitable combination thereof.
Memory/storage 1220 may include main memory, disk storage, or any suitable combination thereof. Memory/storage 1220 may include, but is not limited to, any type of volatile or non-volatile memory, such as dynamic random access memory (DRAM), static random access memory (SRAM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), flash memory, solid-state storage, and the like.
The communication resources 1230 may include interconnection or network interface components or other suitable devices to communicate with one or more peripheral devices 1204 or one or more databases 1206 via the network 1208. For example, the communication resources 1230 may include wired communication components (e.g., for coupling via USB), cellular communication components, NFC components, Bluetooth® (or Bluetooth® Low Energy) components, Wi-Fi® components, and other communication components.
The instructions 1250 may include software, programs, applications, applets, apps, or other executable code for causing at least any of the processors 1210 to perform any one or more of the methods discussed herein. The instructions 1250 may reside, completely or partially, within at least one of the processors 1210 (e.g., within a processor's cache memory), the memory/storage devices 1220, or any suitable combination thereof. Furthermore, any portion of the instructions 1250 may be transferred to the hardware resources 1200 from any combination of the peripheral devices 1204 or the databases 1206. Accordingly, the memory of the processors 1210, the memory/storage devices 1220, the peripheral devices 1204, and the databases 1206 are examples of computer-readable and machine-readable media. In some implementations, the hardware resources 1200 may be included in the client device 102. The client device 102 may include one or more processors similar to the processor 1210, configured to execute software instructions that, when executed, perform various functions such as the programs, methods, and functions discussed herein.
It is well known that the use of personally identifiable information should follow privacy policies and practices that are recognized as meeting or exceeding industry or government requirements for maintaining user privacy. In particular, personally identifiable information data should be managed and processed to minimize the risk of inadvertent or unauthorized access or use, and the nature of authorized use should be specified to the user.
Implementations of the subject matter and the functional operations described in this specification can be implemented in digital electronic circuitry, in tangibly embodied computer software or firmware, in computer hardware (including the structures disclosed in this specification and their structural equivalents), or in combinations of one or more of them. Software implementations of the subject matter may be implemented as one or more computer programs. Each computer program may include one or more modules of computer program instructions encoded on a tangible, non-transitory computer-readable storage medium for execution by, or to control the operation of, a data processing apparatus. Alternatively or additionally, the program instructions may be encoded in/on an artificially generated propagated signal. In one example, the signal may be a machine-generated electrical, optical, or electromagnetic signal generated to encode information for transmission to a suitable receiver apparatus for execution by a data processing apparatus. The computer storage medium may be a machine-readable storage device, a machine-readable storage substrate, a random or serial access memory device, or a combination of computer storage media.
The terms "data processing apparatus", "computer" and "computing device" (or equivalent forms as understood by those of ordinary skill in the art) refer to data processing hardware. For example, a data processing apparatus may encompass a variety of apparatuses, devices, and machines for processing data, including, for example, a programmable processor, a computer, or multiple processors or computers. The apparatus may also include special purpose logic circuitry including, for example, a Central Processing Unit (CPU), a Field Programmable Gate Array (FPGA), or an Application Specific Integrated Circuit (ASIC). In some implementations, the data processing apparatus or the dedicated logic circuit (or a combination of the data processing apparatus or the dedicated logic circuit) may be based on hardware or software (or a combination of hardware and software). The apparatus may optionally include code that creates an execution environment for the computer program, such as code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of execution environments. The present disclosure contemplates the use of data processing apparatus with or without a conventional operating system (e.g., LINUX, UNIX, WINDOWS, MAC OS, ANDROID, or IOS).
A computer program, which may also be referred to or described as a program, software application, module, software module, script, or code, can be written in any form of programming language. The programming language may include, for example, compiled, interpreted, declarative, or procedural languages. The program(s) may be deployed in any form, including as a stand-alone program, module, component, subroutine, or unit for use in a computing environment. A computer program may, but need not, correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files that store one or more modules, sub-programs, or portions of code. A computer program can be deployed to be executed on one computer or on multiple computers at one site or distributed across multiple sites that are interconnected by a communication network. While portions of the programs shown in the various figures may be illustrated as separate modules that implement the various features and functions through various objects, methods, or processes, the programs may instead include multiple sub-modules, third-party services, components, and libraries. Conversely, the features and functions of various components may be combined into single components, as appropriate. Thresholds used to make computational determinations may be set statically, dynamically, or both.
The methods, processes, or logic flows described herein can be performed by one or more programmable computers executing one or more computer programs to perform functions by operating on input data and generating output. The methods, processes, and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry (e.g., a CPU, FPGA, or ASIC).
A computer suitable for executing a computer program may be based on one or more of a general purpose microprocessor and a special purpose microprocessor, as well as other types of CPUs. Elements of a computer are a CPU for executing instructions and one or more memory devices for storing instructions and data. Generally, a CPU may receive instructions and data from a memory (and write the data to the memory). The computer may also include, or be operatively coupled to, one or more mass storage devices for storing data. In some implementations, a computer may receive data from and transmit data to mass storage devices including, for example, magnetic disks, magneto-optical disks, or optical disks. In addition, the computer may be embedded in another device, such as a mobile phone, a Personal Digital Assistant (PDA), a mobile audio or video player, a gaming machine, a Global Positioning System (GPS) receiver, or a portable storage device such as a Universal Serial Bus (USB) flash drive.
Computer-readable media (transitory or non-transitory, as appropriate) suitable for storing computer program instructions and data may include all forms of persistent/non-persistent and volatile/non-volatile memory, media, and memory devices. Computer-readable media may include, for example, semiconductor memory devices such as random access memory (RAM), read-only memory (ROM), phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), and flash memory devices. Computer-readable media may also include, for example, magnetic devices such as magnetic tape, magnetic cassettes, cartridges, and built-in/removable disks. Computer-readable media may further include magneto-optical disks and optical memory devices and technologies including, for example, digital video discs (DVD), CD-ROM, DVD+/-R, DVD-RAM, DVD-ROM, HD-DVD, and BLU-RAY. The memory may store various objects or data, including caches, classes, frameworks, applications, modules, backup data, jobs, web pages, web page templates, data structures, database tables, repositories, and dynamic information. The types of objects and data stored in memory may include parameters, variables, algorithms, instructions, rules, constraints, and references. In addition, the memory may include logs, policies, security or access data, and report files. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
While this specification contains many specific implementation details, these should not be construed as limitations on the scope of what may be claimed, but rather as descriptions of features that may be specific to particular implementations. Certain features that are described in this specification in the context of separate implementations can also be implemented in combination in a single implementation. Conversely, various features that are described in the context of a single implementation can also be implemented in multiple implementations separately or in any suitable subcombination. Furthermore, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
Specific implementations of the subject matter have been described. Other implementations, modifications, and arrangements of the implementations are within the scope of the following claims, as will be apparent to those skilled in the art. Although operations are shown in the drawings or claims in a particular order, this should not be construed as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed (some operations may be considered optional) to achieve desirable results. In some cases, a multitasking process or a parallel process (or a combination of multitasking and parallel processes) may be advantageous and performed as appropriate.
Moreover, the division or integration of various system modules and components in the implementations previously described should not be construed as requiring such division or integration in all implementations, and it should be understood that the program components and systems may be generally integrated together in a single software product or packaged into multiple software products.
Accordingly, the previously described exemplary implementations do not define or constrain this disclosure. Other changes, substitutions, and alterations are also possible without departing from the spirit and scope of this disclosure.

Claims (20)

1. A method performed by a client device in a wireless network for Transmission Control Protocol (TCP) acknowledgement (TCP ACK) packet transmission, the method comprising:
in response to receiving a TCP packet from another device in the wireless network, accessing a queue in a memory coupled to the client device that includes TCP ACK packets to be transmitted to the other device, wherein at least a subset of the TCP ACK packets include respective packet descriptors, each packet descriptor including (i) a flow identifier indicating a TCP flow associated with the packet and (ii) a TCP ACK generation count;
checking a packet descriptor of a first TCP ACK packet among the TCP ACK packets in the queue;
identifying, in the packet descriptor of the first TCP ACK packet, a first flow identifier and a first TCP ACK generation count corresponding to the first TCP ACK packet;
determining that the first flow identifier and the first TCP ACK generation count are valid;
accessing, in the memory coupled to the client device, a data structure comprising entries, each entry having at least a first field and a second field, the first field and the second field storing a flow identifier and a corresponding TCP ACK generation count, respectively;
determining that a condition is satisfied, wherein the condition comprises the data structure comprising a first entry having (i) a flow identifier that matches the first flow identifier and (ii) a TCP ACK generation count that matches the first TCP ACK generation count; and
in response to determining that the condition is met, marking the first TCP ACK packet to be discarded.
2. The method of claim 1, wherein the condition further comprises:
a churn rate of lower-layer queues is below a threshold.
3. The method of claim 1, further comprising determining that the TCP flow includes uplink data in a plurality of queues, wherein the condition further comprises:
the first TCP ACK packet corresponds to uplink data in a particular queue of the plurality of queues.
4. The method of claim 3, wherein the particular queue of the plurality of queues has a high priority.
5. The method of claim 3, wherein the particular queue of the plurality of queues has a low priority.
6. The method of claim 1, further comprising determining that the TCP flow includes uplink data transmitted in a plurality of Data Radio Bearers (DRBs), wherein the condition further comprises:
the first TCP ACK packet corresponds to uplink data transmitted in one or more DRBs.
7. The method of claim 6, wherein the one or more DRBs are default DRBs on an Internet Packet Data Network (PDN).
8. The method of claim 6, wherein at least one DRB of the one or more DRBs has a throughput within a given range.
9. The method of claim 6, wherein the one or more DRBs are bi-directional DRBs.
10. The method of claim 1, further comprising determining that the TCP flow includes uplink data corresponding to a plurality of Packet Data Networks (PDNs), wherein the condition further comprises:
the first TCP ACK packet corresponds to uplink data transmitted in one or more given PDNs.
11. The method of claim 1, wherein the condition further comprises:
the client device operates in a given power mode.
12. The method of claim 1, further comprising determining that the TCP flow comprises a plurality of Internet Protocol (IP) packet flows, wherein the condition further comprises:
the first TCP ACK packet corresponds to uplink data transmitted in one or more given IP packet flows.
13. The method of claim 1, wherein the condition further comprises:
the client device detects downlink TCP data at baseband.
14. A processor comprising circuitry to execute instructions that cause a UE to perform operations comprising:
in response to receiving a Transmission Control Protocol (TCP) packet from another UE in a wireless network, accessing a queue in a memory coupled to the processor that includes TCP acknowledgement (TCP ACK) packets to be transmitted to the other UE, wherein at least a subset of the TCP ACK packets include respective packet descriptors, each packet descriptor including (i) a flow identifier indicating a TCP flow associated with the packet and (ii) a TCP ACK generation count;
checking a packet descriptor of a first TCP ACK packet among the TCP ACK packets in the queue;
identifying, in the packet descriptor of the first TCP ACK packet, a first flow identifier and a first TCP ACK generation count corresponding to the first TCP ACK packet;
determining that the first flow identifier and the first TCP ACK generation count are valid;
accessing, in the memory, a data structure comprising entries, each entry having at least a first field and a second field, the first field and the second field storing a flow identifier and a corresponding TCP ACK generation count, respectively;
determining that a condition is satisfied, wherein the condition comprises the data structure comprising a first entry having (i) a flow identifier that matches the first flow identifier and (ii) a TCP ACK generation count that matches the first TCP ACK generation count; and
in response to determining that the condition is met, marking the first TCP ACK packet to be discarded.
15. The processor of claim 14, wherein the condition further comprises at least one of:
a churn rate of lower-layer queues is below a threshold,
the UE operates in a given power mode, or
the UE detects downlink TCP data at baseband.
16. The processor of claim 14, the operations further comprising determining that the TCP flow includes uplink data in a plurality of queues, wherein the condition further comprises:
the first TCP ACK packet corresponds to uplink data in one of the plurality of queues.
17. The processor of claim 14, the operations further comprising determining that the TCP flow includes uplink data transmitted in a plurality of Data Radio Bearers (DRBs), wherein the condition further comprises:
the first TCP ACK packet corresponds to uplink data transmitted in one or more given DRBs.
18. The processor of claim 14, the operations further comprising determining that the TCP flow includes uplink data corresponding to a plurality of Packet Data Networks (PDNs), wherein the condition further comprises:
the first TCP ACK packet corresponds to uplink data transmitted in one or more given PDNs.
19. The processor of claim 14, the operations further comprising determining that the TCP flow comprises a plurality of Internet Protocol (IP) packet flows, wherein the condition further comprises:
the first TCP ACK packet corresponds to uplink data transmitted in one or more given IP packet flows.
20. A User Equipment (UE), comprising:
processing circuitry configured to execute instructions that cause the UE to perform operations comprising:
in response to receiving a Transmission Control Protocol (TCP) packet from another UE in a wireless network, accessing a queue in memory that includes TCP acknowledgement (TCP ACK) packets to be transmitted to the other UE, wherein at least a subset of the TCP ACK packets include respective packet descriptors, each packet descriptor including (i) a flow identifier indicating a TCP flow associated with the packet and (ii) a TCP ACK generation count;
checking a packet descriptor of a first TCP ACK packet among the TCP ACK packets in the queue;
identifying, in the packet descriptor of the first TCP ACK packet, a first flow identifier and a first TCP ACK generation count corresponding to the first TCP ACK packet;
determining that the first flow identifier and the first TCP ACK generation count are valid;
accessing, in the memory, a data structure comprising entries, each entry having at least a first field and a second field, the first field and the second field storing a flow identifier and a corresponding TCP ACK generation count, respectively;
determining that a condition is satisfied, wherein the condition comprises the data structure comprising a first entry having (i) a flow identifier that matches the first flow identifier and (ii) a TCP ACK generation count that matches the first TCP ACK generation count; and
in response to determining that the condition is met, marking the first TCP ACK packet to be discarded.
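The matching logic recited in the claims can be sketched in a few lines of code. The sketch below is an illustrative reading of claim 1 only, assuming a simple in-memory representation: `AckDescriptor` stands in for the packet descriptor (flow identifier plus TCP ACK generation count), and a dictionary keyed by flow identifier stands in for the two-field data structure. All names are hypothetical, and the validity check and the additional conditions of the dependent claims are omitted.

```python
from dataclasses import dataclass

@dataclass
class AckDescriptor:
    """Hypothetical stand-in for a queued TCP ACK packet's descriptor."""
    flow_id: int         # flow identifier: which TCP flow the ACK belongs to
    generation: int      # TCP ACK generation count stamped into the descriptor
    drop: bool = False   # set when the ACK is marked to be discarded

def mark_stale_acks(queue: list, table: dict) -> int:
    """Mark queued TCP ACKs whose (flow identifier, generation count) pair
    matches an entry in the table; a match means a newer ACK for the same
    flow supersedes them. Returns how many descriptors were marked."""
    marked = 0
    for desc in queue:
        # dict key models the first field (flow identifier);
        # dict value models the second field (TCP ACK generation count)
        if table.get(desc.flow_id) == desc.generation:
            desc.drop = True   # marked to be discarded; removal happens later
            marked += 1
    return marked
```

Marking rather than immediately freeing mirrors the claim language ("marking the first TCP ACK packet to be discarded"), leaving the actual dequeue and drop to a later stage of the transmit path.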
CN202311716505.3A 2022-12-13 2023-12-13 System and method for managing Transmission Control Protocol (TCP) acknowledgements Pending CN118199826A (en)

Applications Claiming Priority (2)

Application Number | Priority Date | Filing Date | Title
US 18/080,182 | 2022-12-13 | |
US 18/080,182 (US11882051B2) | 2021-07-26 | 2022-12-13 | Systems and methods for managing transmission control protocol (TCP) acknowledgements

Publications (1)

Publication Number | Publication Date
CN118199826A | 2024-06-14

Family

Family ID: 91405110

Family Applications (1)

Application Number | Title | Filing Date | Status
CN202311716505.3A | System and method for managing Transmission Control Protocol (TCP) acknowledgements | 2023-12-13 | Pending

Country Status (1)

Country | Publication
CN | CN118199826A (en)


Legal Events

Code | Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination