WO2021152369A1 - Dynamic uplink end-to-end data transfer scheme with optimized memory path - Google Patents


Info

Publication number
WO2021152369A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
packet
memory
window
internal memory
Prior art date
Application number
PCT/IB2020/059912
Other languages
French (fr)
Inventor
Su-Lin Low
Hong Kui Yang
Tianan Tim Ma
Hausting Hong
Original Assignee
Zeku Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zeku Inc. filed Critical Zeku Inc.
Priority to CN202080094295.7A priority Critical patent/CN115066844A/en
Publication of WO2021152369A1 publication Critical patent/WO2021152369A1/en


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 69/00 Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L 69/22 Parsing or analysis of headers

Definitions

  • Embodiments of the present disclosure relate to apparatuses and methods for memory handling, which may be applicable to communication systems, such as wireless communication systems.
  • Communication systems, such as wireless communication systems, are widely deployed to provide various telecommunication services such as telephony, video, data, messaging, and broadcasts.
  • a modem having a protocol stack embodied in hardware and software may pass the packets down the protocol stack with a physical layer, including a radio frequency (RF) module, ultimately converting the bits of the packet into radio waves.
  • RF radio frequency
  • an apparatus for memory handling can include an external memory configured to store layer three (L3) data.
  • the apparatus can also include an internal memory configured to store layer two (L2) data.
  • the apparatus can further include circuitry configured to process a header of a packet and move the header from the external memory to the internal memory, process a remainder of the packet upon determination that at least two predetermined conditions are met, and pass the remainder of the packet from the external memory to the internal memory.
  • the at least two predetermined conditions can include that space in the internal memory is available and that a medium access control (MAC) layer is ready to prepare data for a next window of transmission.
  • MAC medium access control
  • an apparatus for memory handling can include an external memory configured to store L3 data and an internal memory configured to store L2 data.
  • the apparatus can further include circuitry configured to maintain L3 data according to at least one first window and maintain L2 data according to at least one second window shorter than the first window.
  • a method for memory handling can include processing, by circuitry, a header of a packet, and moving the header from an external memory configured to store L3 data to an internal memory configured to store L2 data.
  • the method can also include processing, by the circuitry, a remainder of the packet upon determination that at least two predetermined conditions are met.
  • the method can further include passing, by the circuitry, the remainder of the packet from the external memory to the internal memory.
  • the at least two predetermined conditions can include that space in the internal memory is available and that a MAC layer is ready to prepare data for a next window of transmission.
  • a method for memory handling can include maintaining, by circuitry, L3 data according to at least one first window, wherein the L3 data is stored in external memory. The method may also include maintaining, by the circuitry, L2 data according to at least one second window shorter than the first window, wherein the L2 data is stored in internal memory.
  • a non-transitory computer-readable medium can encode instructions that, when executed by a microcontroller of a node, may perform a process for memory handling. The process can include any of the above-described methods.
  • FIG. 1 illustrates data processing in a protocol stack, according to some embodiments of the present disclosure.
  • FIG. 2 illustrates a data flow diagram illustrating some embodiments of the present disclosure.
  • FIGs. 3A and 3B illustrate an internal memory corresponding to the data flow diagram of FIG. 2, in some embodiments of the present disclosure.
  • FIG. 4A illustrates a method according to some embodiments of the present disclosure.
  • FIG. 4B illustrates a further method according to some embodiments of the present disclosure.
  • FIG. 5 illustrates a detailed block diagram of a baseband system on chip (SoC) implementing Layer 2 packet processing using Layer 2 circuits and a microcontroller (MCU) according to some embodiments of the present disclosure.
  • SoC system on chip
  • MCU microcontroller
  • FIG. 6 illustrates an exemplary wireless network that may incorporate memory handling, in which some aspects of the present disclosure may be implemented, according to some embodiments of the present disclosure.
  • FIG. 7 illustrates a node that may be used for memory handling, according to some embodiments of the present disclosure.
  • references in the specification to “one embodiment,” “an embodiment,” “an example embodiment,” “some embodiments,” etc. indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases do not necessarily refer to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it would be within the knowledge of a person skilled in the pertinent art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described. [0021] In general, terminology may be understood at least in part from usage in context.
  • the term “one or more” as used herein, depending at least in part upon context, may be used to describe any feature, structure, or characteristic in a singular sense or may be used to describe combinations of features, structures or characteristics in a plural sense.
  • terms, such as “a,” “an,” or “the,” again, may be understood to convey a singular usage or to convey a plural usage, depending at least in part upon context.
  • the term “based on” may be understood as not necessarily intended to convey an exclusive set of factors and may, instead, allow for existence of additional factors not necessarily expressly described, again, depending at least in part on context.
  • CDMA code division multiple access
  • TDMA time division multiple access
  • FDMA frequency division multiple access
  • OFDMA orthogonal frequency division multiple access
  • SC- FDMA single-carrier frequency division multiple access
  • a CDMA network may implement a radio access technology (RAT) such as Universal Terrestrial Radio Access (UTRA), evolved UTRA (E-UTRA), CDMA 2000, etc.
  • RAT radio access technology
  • UTRA Universal Terrestrial Radio Access
  • E-UTRA evolved UTRA
  • A TDMA network may implement a RAT such as GSM.
  • An OFDMA network may implement a RAT, such as long term evolution (LTE) or new radio (NR).
  • LTE long term evolution
  • NR new radio
  • the techniques and system described herein may be used for the wireless networks and RATs mentioned above, as well as other wireless networks and RATs. Likewise, the techniques and systems described herein may also be applied to wired networks, such as networks based on optical fibers, coaxial cables, or twisted-pairs, or to satellite networks.
  • Some embodiments of the present disclosure relate to a mechanism to manage memory and processing as a packet traverses down through protocol layers. Some embodiments also relate to a minimum internal memory for transmission and retransmission purposes for such a packet. Furthermore, some embodiments relate to effective management of retransmission data storage.
  • L3 packet data to be transmitted from the device is stored in external memory.
  • the external memory may be shared by multiple components within the modem or with other components of the UE outside the modem.
  • the L3 packet data may be moved into an internal memory, which may also be referred to as a local memory.
  • the internal memory may be local to a given system-on-chip, as distinct from external memory, which may be on another chip of the same device.
  • the L3 packet data is stored back in external memory again.
  • a trigger is then sent to the PDCP layer to process the L3 packets one function at a time.
  • the functions can include robust header compression (ROHC), integrity checking, and ciphering.
  • ROHC robust header compression
  • PDCP L2 packets are then queued into logical channel queues waiting to be processed further.
  • the RLC layer sorts the data into various RLC queues in the LCs.
  • the MAC layer retrieves the L2 data from the LC queues and moves them to an internal memory for transfer to the PHY layer.
  • the above-described approaches to handling packet data may result in inefficient data movements of a packet from L3 to multiple PDCP layer functions, and then to RLC and to MAC layers.
  • the above-described approaches rely on multiple external memory accesses, both for reading and writing. Additionally, a large external memory and large internal memory are required. In view of the large amount of memory, and the large amount of accesses to the memory, a relatively large amount of power may be used.
  • Some embodiments may have various benefits and/or advantages as to various technical aspects. For example, some embodiments of the present disclosure provide a way to reduce a data transfer path through the memories in the UL ETE data path. Some embodiments still ensure that the packets traverse all the multiple data plane layers needed to process the incoming L3 packets. Furthermore, some embodiments minimize data access to external memory, thereby saving power. In addition, some embodiments minimize the amount of internal memory space, even though internal memory may provide fast performance at a higher cost of power and area.
  • Some embodiments of the present disclosure relate to an efficient memory path method for the dynamic transfer of 5G Uplink (UL) packets for data transmission, which allows minimal data movements, optimized external memory access, and a small internal memory for high-throughput, low-latency packets.
  • UL Uplink
  • a challenge in the UL ETE data path is finding the minimum data transfer path through the memories necessary to traverse all the multiple data plane layers that process the incoming L3 packets, while minimizing data access to external memory to save power.
  • Internal memory space may provide fast performance but at a higher cost of power and area.
  • Internal memory 514 in FIG. 5 is an example of internal memory, as distinct from external memory 506 in FIG. 5.
  • the external memory 506 may be shared by multiple components of the system, including those not shown in FIG. 5.
  • the internal memory 514 in FIG. 5 may be configured exclusively for use by a baseband chip of a modem of a user equipment implementing the system shown in FIG. 5.
  • the baseband chip may include an RF component, or an RF chip may be provided as a physically separate element.
  • Some embodiments relate to an efficient memory path method for the dynamic transfer of fifth-generation (5G) uplink (UL) packets for data transmission.
  • 5G fifth-generation
  • UL uplink
  • Some embodiments may allow minimal data movements, may have optimized external memory access, and may rely on a small internal memory for high-throughput and low-latency packets.
  • the hardware aspects can refer to aspects that are performed by specialized hardware, such as a hardware-based protocol stack implementation.
  • FIG. 5 discussed below, provides a specific example of a hardware-based protocol stack implementation with multiple dedicated integrated circuits, such as application- specific integrated circuits (ASICs), handling different layers of the protocol stack.
  • the software aspects can refer to aspects that may be performed by a general-purpose processor or by a layer-independent specialized modem processor.
  • FIG. 5 illustrates a specific example in which the software aspects may be implemented on a microcontroller.
  • Some embodiments may rely on three different and potentially independent principles that can be used together in one aspect of some embodiments. According to a first principle, some embodiments move data from layer three (L3) external memory to layer two (L2) internal memory only near the transmission time frame.
  • L3 layer three
  • L2 layer two
  • some embodiments perform packet data convergence protocol (PDCP) processing concurrent with data movement from L3 external memory to L2 internal memory.
  • PDCP packet data convergence protocol
  • some embodiments prepare expected medium access control (MAC) protocol data unit (PDU) packets in L2 internal memory directly in place. The preparation may involve prioritizing and concatenating the L2 packet data as it moves from L3 external memory to L2 internal memory.
  • a reduced transmission window (TXWIN) buffer can be used for prioritized L2 MAC data storage in a minimal internal memory.
  • the reduced TXWTN buffer may be used for fast transmission near the transmission timeframe.
  • a reduced retransmission window (RETXWIN) buffer can be used for L2 MAC data storage in a minimal internal memory.
  • the reduced RETXWIN buffer may be used for fast hybrid automatic repeat request (HARQ) retransmission close to the transmission timeframe.
  • HARQ hybrid automatic repeat request
  • the first and second principles can be implemented together to, for example, help further reduce local data storage needs.
  • This second aspect can, therefore, be considered as a minimum internal memory for fast UL transmissions and retransmissions.
  • a third aspect of some embodiments may involve the effective management of retransmission data storage. This third aspect may involve three principles, which may be used independently or together.
  • HARQ retransmission data can be retrieved from a small, fast, internal memory, if available.
  • One detail here may be the length of time that HARQ retransmission data is retained in the small, fast internal memory. This length of time may be set in advance by configuration or may be dynamically changed over time based on HARQ usage by the device in practice. For example, a device in a relatively noisy or otherwise interfered scenario may need to use HARQ more often than in a relatively clear scenario.
  • Otherwise, the retransmission data may be retrieved from external memory. The retention time in internal memory may be long enough to handle the vast majority of HARQ retransmissions; nevertheless, occasionally a request for retransmission may arrive outside the retention time.
  • the retention time for internal memory can be configured to capture some predicted percentage of the retransmission requests, such as 97% of the retransmission requests, 99% of the retransmission requests, or 99.9% of the retransmission requests. Other percentages can also be targeted: the preceding are just examples.
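  • The percentage-targeted retention described above can be sketched as follows. This is a minimal illustrative model, not the patent's implementation: the function name and the rule of picking the smallest delay that covers the target fraction of an observed sample are assumptions.

```python
import math

def retention_time_ms(observed_delays_ms, target_fraction=0.99):
    """Pick an internal-memory retention time covering `target_fraction`
    of observed HARQ retransmission delays; later requests would fall
    back to the external L3 memory path."""
    ordered = sorted(observed_delays_ms)
    # smallest delay value that covers at least the target fraction of samples
    idx = max(0, math.ceil(target_fraction * len(ordered)) - 1)
    return ordered[idx]
```

With delays of 1 ms through 100 ms, a 97% target yields a 97 ms retention time, so roughly 3% of retransmission requests would go to external memory.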
  • all L3 data packets may be stored in the external memory until a predetermined time expires.
  • the predetermined time may be an L2 discard window or a PDCP discard window. If there are multiple discard windows applicable, the external memory may wait until the last discard window expires.
  • a window may be based on the need to perform link recovery. Thus, the discard window may expire when the RLC layer or the PDCP layer has completed link recovery.
  • FIG. 1 illustrates data processing in a protocol stack, according to some embodiments.
  • the protocol stack may be implemented in a modem or similar device.
  • the packet data protocol stack consists of the Modem Layer 3 IP layer, the PDCP (Packet Data Convergence Protocol) layer, the RLC (Radio Link Control) layer, and the MAC (Medium Access Control) layer.
  • Each layer is responsible for processing the user plane packet data in the form of IP data or raw user data and ensuring that data transmission is secure, on-time, and error-free.
  • the L3 data is processed through multiple layers before the final transfer to the MAC layer and to the PHY layer.
  • the packet may pass through L3 layer internet protocol (IP) header and quality of service (QOS) flow processing and can be queued in L3 buffers.
  • IP internet protocol
  • QOS quality of service
  • the packet may pass through PDCP processing, which can include ROHC compression, integrity checking, and ciphering.
  • the PDCP packet data can be queued in L2 buffers sorted in Logical channels (LCs).
  • LCs Logical channels
  • RLC queues can be sorted in priority bins according to the type of data (retransmission, new data, status, segments).
  • the data packets from different LCs can be gathered according to priority per the Logical Channel Prioritization (LCP) procedures as specified in the 3GPP standard.
  • LCP Logical Channel Prioritization
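  • The priority-ordered gathering can be sketched as below. This is a simplified model: token-bucket refinements of the 3GPP LCP procedure (prioritized bit rate, bucket size duration) are deliberately omitted, and the tuple layout is a hypothetical choice for illustration.

```python
def gather_by_priority(logical_channels, grant_bytes):
    """Allocate a transmission grant across logical channels strictly by
    priority (lower value = higher priority), skipping token-bucket
    (PBR/BSD) details of the full 3GPP LCP procedure.

    logical_channels: iterable of (lc_id, priority, queued_bytes) tuples.
    Returns {lc_id: bytes_allocated} for channels that received data."""
    allocation = {}
    remaining = grant_bytes
    for lc_id, _prio, queued in sorted(logical_channels, key=lambda lc: lc[1]):
        take = min(queued, remaining)
        if take:
            allocation[lc_id] = take
            remaining -= take
    return allocation
```

For example, with a 600-byte grant, a priority-1 channel holding 300 bytes is served fully before a priority-2 channel, and a priority-3 channel may get nothing.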
  • FIG. 2 illustrates a data flow diagram illustrating some embodiments of the present disclosure.
  • FIGs. 3A and 3B illustrate an internal memory corresponding to the data flow diagram of FIG. 2, in some embodiments.
  • an application processor (AP) or host can send L3 TCP/IP packets to the modem data stack of a system 200.
  • Data buffers are allocated from external memory and stored with incoming IP packets. These may broadly be part of the L3 data window.
  • IP headers can be processed and moved to L2 internal memory. Since the IP headers may need to be processed efficiently for QoS flow identification and sorting/filtering, they can be placed in fast internal memory first, namely before the remainder of the packets. Although not particularly shown in FIG. 3A, these may be part of the packets, such as current transmission packet 310 or any of the other packets in TXWIN 320, with the remainder of the packets joining them after 230C.
  • an external memory such as L3 Buffer (Ext) 202
  • DP digital processing
  • L2+HARQ buffer local/internal
  • PHY physical layer
  • DP software 212 may run on a microcontroller (for example, MCU 510 in FIG. 5) or another computing device.
  • MAC can trigger the allocation of L2 data buffers from the small internal memory and can extract data from L3 external memory.
  • This data taken from the L3 external memory can pass through PDCP processing, which can include ROHC, integrity checking, and ciphering, as well as the addition of RLC and MAC headers at the same time.
  • the data prepared in L2 internal memory can be placed in contiguous memory for fast streaming to the PHY layer at the transmission timeline.
  • PDCP, MAC PDU preparation, and prioritized placement into contiguous memory may all be done when moving data from L3 external to L2 internal memory. By doing this movement only once, data movements may be optimized or otherwise efficiently or beneficially arranged for the next window of transmission. As shown in FIG. 3A, at T0, this movement into internal memory can occur to fill out the packets in TXWIN 320, including current transmission packet 310. Thus, a current transmission packet 310 can be loaded into the transmission window (TXWIN) 320 in L2 internal memory (which can be referred to as L2Localmem). Meanwhile, the L3 data window 330, also referred to as L3 data buffer 330, can encompass the same packets and more.
  • the L3 data window 330 may be maintained in external memory in an L3 buffer (for example, in L3 Buffer (Ext) in FIG. 2 or external memory 506 in FIG. 5).
  • the L3 data buffer 330 may include all the same packets of TXWIN 320, RETXWIN 340, and more.
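  • The single-pass move into contiguous internal memory can be sketched as below. This is an illustrative model only: `rohc_compress` and `protect` are stand-in placeholders (real ROHC, integrity protection, and ciphering are far more involved), and the two-byte header constants are hypothetical.

```python
RLC_HDR = b"RL"  # hypothetical 2-byte RLC header
MAC_HDR = b"MA"  # hypothetical 2-byte MAC header

def rohc_compress(payload: bytes) -> bytes:
    # placeholder: real ROHC compresses IP/UDP/RTP headers
    return payload

def protect(payload: bytes) -> bytes:
    # placeholder: real PDCP applies integrity protection and ciphering
    return payload

def stage_packet(l3_payload: bytes, l2_buf: bytearray, offset: int) -> int:
    """One pass from external L3 memory into contiguous internal L2 memory:
    PDCP-style processing plus RLC and MAC headers applied during the copy.
    Returns the next free offset, keeping the staged PDUs contiguous."""
    pdu = MAC_HDR + RLC_HDR + protect(rohc_compress(l3_payload))
    l2_buf[offset:offset + len(pdu)] = pdu
    return offset + len(pdu)
```

Because each call returns the next free offset, successive packets land back-to-back, matching the contiguous placement used for fast streaming to the PHY layer.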
  • an RRC signaling message or an RLC command message may arrive.
  • RRC signaling messages or RLC command messages may arrive at the same time at the L2 transmission queues. These messages may be directly allocated into L2 data buffers. Although not explicitly shown in FIG. 3A, these can be included in the TXWIN 320.
  • MAC PDU transmission and/or retransmission can occur.
  • the MAC may get an indication and grant from the base station (BS) to transmit packets. This grant is shown, by way of example, as NW UL grant in FIG. 1.
  • the packets may be retrieved quickly from the TXWIN 320 buffer L2 internal memory prepared with MAC data.
  • the RETXWIN 340 buffer may first be scanned to retrieve the hybrid automatic repeat request (HARQ) data, such as an unacknowledged packet 350. If the data is outside the RETXWIN 340 window, and/or is already overwritten/deleted (for example, due to the limited size of RETXWIN 340), then the L3 data may be accessed again from external memory. In this case, the retrieved data may traverse the L3 to L2 processing data path, where new L2 local buffers may be allocated for these packets. For example, as shown at 360 at T1 in FIG. 3B, packets previously sent and found only in the L3 data window at T0 (shown at 375) may be added back into RETXWIN 340 at 370.
  • Previously sent packets still within the RETXWIN 340 may be aggregated by moving, for example, to the left as shown at 365.
  • old data may be overwritten or otherwise deleted, making space for incoming data.
  • Overwriting or deletion in TXWIN and RETXWIN can also include simply dereferencing the bits, without any requirement to zero the bits or otherwise alter them.
  • Additional L3 data may be drawn into the L2 internal memory, as described above, after PDCP processing, header additions, and prioritized MAC PDU creation.
  • This is illustrated in FIG. 3B at T1, where the transmission window and retransmission window have moved forward to the right one packet as illustrated by the arrow for windows movement direction.
  • This one packet adjustment is just for illustration. If multiple packets are sent at the same time, the adjustment could be multiple packets at the same time.
  • Although the directional arrow points to the right, this is simply to illustrate memories in which contiguous blocks of memory are arranged in a left-to-right order. Other arrangements of memory are also permitted, with the arrangement shown provided simply for purposes of illustration and example.
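  • The TXWIN/RETXWIN behavior described above, where transmitted packets slide into a small bounded retransmission window whose oldest entries are overwritten, can be modeled as below. This is a toy sketch with assumed names; the real buffers hold prepared MAC PDUs, not Python objects.

```python
from collections import deque

class L2Windows:
    """Toy model of the TXWIN and RETXWIN buffers in L2 internal memory."""

    def __init__(self, tx_slots, retx_slots):
        self.txwin = deque(maxlen=tx_slots)
        self.retxwin = deque(maxlen=retx_slots)

    def load(self, pkt):
        """Stage a prepared MAC PDU into the transmission window."""
        self.txwin.append(pkt)

    def transmit(self):
        """Send the oldest staged packet; keep a copy for fast HARQ
        retransmission. The oldest RETXWIN entry is dropped if full."""
        pkt = self.txwin.popleft()
        self.retxwin.append(pkt)
        return pkt

    def harq_fetch(self, pkt):
        """Fast path: packet still in internal RETXWIN. A miss (None)
        means the caller refetches from the external L3 buffer and
        reprocesses through the L3-to-L2 data path."""
        return pkt if pkt in self.retxwin else None
```

With a two-slot RETXWIN, transmitting a third packet silently evicts the first, after which a HARQ request for it must go back to external memory, as described above.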
  • FIG. 4A illustrates a method according to some embodiments. As shown in FIG. 4A, a method 400 for memory handling can include, at 410, maintaining, by circuitry, layer three (L3) data according to at least one first window.
  • the L3 data can be stored in external memory.
  • the method 400 may also include, at 420, maintaining, by the circuitry, layer two (L2) data according to at least one second window shorter than the first window.
  • the L2 data can be stored in internal memory. An illustration of this approach can be seen in FIGs. 3A and 3B, in which the L3 data window is much larger than the windows TXWIN and RETXWIN for L2 data.
  • the at least one second window can include a transmission window and a retransmission window, such as TXWIN 320 and RETXWIN 340 in FIGs. 3A and 3B. As shown by way of example in FIGs. 3A and 3B, the transmission window combined with the retransmission window may still be less than the at least one first window, such as the L3 data window.
  • the method 400 may further include, at 430, dimensioning the internal memory for multiple medium access control instances. This dimensioning may occur in combination with the previously described maintaining steps as illustrated, or may be implemented separately from such steps. The dimensioning may take into account a plurality of parameters.
  • the parameters can include a number of logical channels, data rate, priority of logical channel, maximum bucket size of the logical channel, and layer three buffer size of the logical channel.
  • the method 400 may further include, at 440, scaling each medium access control instance size based on a ratio of a maximum internal memory size and the total size of all medium access control instances. This is explained above in further detail. For example, based on an initial calculation of the needs of each MAC instance, it may occur that the total need of the instances exceeds a maximum available amount of internal memory. Accordingly, using a weighted fairness approach, each of the MAC instances may be allocated according to their need scaled by a ratio between the total needs and the maximum available internal memory. Other ways of handling limited internal memory are permitted.
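  • The weighted-fairness scaling at 440 can be sketched as follows. The function name and integer truncation are illustrative assumptions; the patent only specifies scaling each instance by the ratio between the memory maximum and the total demand.

```python
def scale_mac_instances(needed_bytes, max_internal_bytes):
    """Weighted-fairness split of internal memory across MAC instances:
    if total demand exceeds the internal memory maximum, shrink every
    instance's allocation by the same ratio; otherwise grant demand as-is."""
    total = sum(needed_bytes)
    if total <= max_internal_bytes:
        return list(needed_bytes)
    ratio = max_internal_bytes / total
    return [int(n * ratio) for n in needed_bytes]
```

For instance, demands of 300 and 100 bytes against a 200-byte maximum scale to 150 and 50, preserving each instance's relative share of the limited internal memory.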
  • FIG. 4A may be performed with the architecture shown in FIG. 2 and the specific hardware illustrated in FIG. 5 and discussed in more detail below.
  • a microcontroller and/or application-specific integrated circuits (ASICs) may be responsible for maintaining, dimensioning, and scaling, as described above.
  • FIG. 4B illustrates a further method according to some embodiments. As with FIG. 4A, the method of FIG. 4B can be implemented in circuitry, such as the hardware and associated software illustrated in FIGs. 2 and 5.
  • the method of FIG. 4B is usable with the method of FIG. 4A, such that both methods may be simultaneously and harmoniously implemented in the same modem of the same user equipment.
  • Other implementations are possible, such as the methods being practiced separately from one another.
  • a method 405 for memory handling can include, at 415, processing, by circuitry, a header of a packet, and moving the header from an external memory configured to store layer three (L3) data to an internal memory configured to store layer two (L2) data. This is similarly illustrated at 220B, as explained above.
  • the method 405 can also include, at 425, processing, by the circuitry, a remainder of the packet upon the determination that at least two predetermined conditions are met. This is illustrated at 230B and 240D in FIG. 2, as discussed above.
  • the remainder of the packet can be everything aside from the packet header that was separately processed at 220B and 415.
  • the determination that the predetermined conditions are met, at 427, may be variously implemented.
  • the at least two predetermined conditions can include space in the internal memory being available and medium access control being ready to prepare data for the next window of transmission.
  • This may be thought of as a just-in-time preparation technique, with the remainder of the packets being provided to the L2 memory only just-in-time for transmission, thereby minimizing the time that they are present in L2, and consequently also minimizing size requirements for the L2 memory.
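  • The just-in-time gate on the two predetermined conditions can be stated compactly as below. The function and parameter names are hypothetical; only the two conditions themselves come from the description above.

```python
def can_stage_remainder(free_internal_bytes: int,
                        remainder_bytes: int,
                        mac_ready_for_next_window: bool) -> bool:
    """Both predetermined conditions must hold before the packet remainder
    leaves external L3 memory: (1) enough space is available in internal
    memory, and (2) the MAC layer is ready to prepare data for the next
    window of transmission."""
    return free_internal_bytes >= remainder_bytes and mac_ready_for_next_window
```

Until both conditions hold, the remainder stays in external memory, which is what keeps the internal L2 memory small.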
  • the processing of the remainder of the packet can include packet data convergence protocol processing that includes robust header compression, integrity checking, and ciphering, as illustrated in FIG. 2 and discussed above.
  • the remainder of the packet may be further processed by the addition of radio link control and medium access control headers.
  • the remainder of the packet may be placed in contiguous memory in the internal memory, as illustrated in FIGs. 3A and 3B.
  • Contiguous memory can refer to the physical or logical arrangement of the bits in memory. For example, the logical arrangement may be the physical address or the order in which bits are accessed by a controller of the memory. When contiguous memory is used, the system may be able to extract a range of bits, rather than having to receive numerous bit addresses or ranges of bits scattered throughout the memory.
  • the method 405 can further include, at 432, passing, by the circuitry, the remainder of the packet from the external memory to the internal memory. This is also illustrated at 230C in FIG. 2, as discussed above.
  • the method 405 can also include, at 402, receiving the packet and storing the packet in the external memory prior to processing the header. This is further illustrated at 210A in FIG. 2.
  • the method 405 can further include passing the packet to a physical layer of the implementing device for transmission. This is also illustrated at 250E in FIG. 2, as discussed above.
  • the internal memory used in method 405 can include a transmission window buffer and a retransmission window buffer, as illustrated in FIGs. 3A and 3B as TXWIN 320 and RETXWIN 340.
  • the method 405 may further include, at 437, moving the packet to the retransmission window buffer. This move is also illustrated in FIGs. 3A and 3B in the change to the scope of the windows between T0 and T1.
  • the method 405 may further include, at 404, bringing additional layer three data from the external memory into the internal memory. The method 405 may then proceed as described above from 415 onward.
  • FIG. 5 illustrates a detailed block diagram of a baseband SoC 502 implementing Layer 2 packet processing using Layer 2 circuits 508 and MCU 510.
  • FIG. 5 may be viewed as a specific implementation and example of the architecture illustrated in FIG. 2, although other implementations including those that are more or less reliant on hardware are also permitted.
  • baseband SoC 502 may be one example of a software and hardware interworking system in which the software functions are implemented by MCU 510, and the hardware functions are implemented by Layer 2 circuits 508.
  • MCU 510 may be one example of a microcontroller
  • Layer 2 circuits 508 may be one example of integrated circuits, although other microcontroller and integrated circuits are also permitted.
  • Layer 2 circuits 508 include an SDAP circuit 520, a PDCP circuit 522, an RLC circuit 524, and a MAC circuit 526.
  • the dedicated integrated circuits (ICs) (for example, SDAP circuit 520, PDCP circuit 522, RLC circuit 524, and MAC circuit 526) controlled by MCU 510 can be used to conduct Layer 2 packet processing.
  • each of SDAP, PDCP, RLC, and MAC circuits 520, 522, 524, or 526 is an IC dedicated to performing the functions of the respective layer in the Layer 2 user plane and/or control plane.
  • each of SDAP, PDCP, RLC, and MAC circuits 520, 522, 524, or 526 may be an ASIC, which may be customized for a particular use, rather than being intended for general-purpose use.
  • Some ASICs may have high speed, small die size, and low power consumption compared with a generic processor.
  • baseband SoC 502 may be operatively coupled to a host processor 504 and an external memory 506 through a main bus 538.
  • host processor 504 such as an application processor (AP)
  • AP application processor
  • host processor 504 may generate raw data that has not been coded and modulated yet by the PHY layer of baseband SoC 502.
  • host processor 504 may receive data after it is initially decoded and demodulated by the PHY layer and subsequently processed by Layer 2 circuits 508.
  • the raw data is formatted into data packets, according to any suitable protocols, for example, Internet Protocol (IP) data packets.
  • External memory 506 may be shared by host processor 504 and baseband SoC 502, or any other suitable components.
  • external memory 506 stores the raw data (e.g., IP data packets) to be processed by Layer 2 circuits 508 of baseband SoC 502 and stores the data processed by Layer 2 circuits 508 (e.g., MAC PDUs) to be accessed by Layer 1 (e.g., the PHY layer).
  • External memory 506 may, or optionally may not, store any intermediate data of Layer 2 circuits 508, for example, PDCP PDUs/RLC SDUs or RLC PDUs/MAC SDUs.
  • Layer 2 circuits 508 may modify the data stored in external memory 506.
  • baseband SoC 502 may also include direct memory access (DMA) 516 that can allow some Layer 2 circuits 508 to access external memory 506 directly, independent of host processor 504.
  • DMA 516 may include a DMA controller and any other suitable input/output (I/O) circuits.
  • baseband SoC 502 may further include an internal memory 514, such as an on-chip memory on baseband SoC 502, which is distinguished from external memory 506 that is an off-chip memory not on baseband SoC 502.
  • internal memory 514 includes one or more L1, L2, L3, or L4 caches.
  • Layer 2 circuits 508 may access internal memory 514 through main bus 538 as well.
  • the internal memory 514 may, thus, be particular to the baseband SoC 502, as distinct from other sub-components or components of an implementing system.
  • baseband SoC 502 may further include a memory 512 that can be shared by (e.g., both accessed by) Layer 2 circuits 508 and MCU 510. It is understood that although memory 512 is shown as an individual memory separate from internal memory 514, in some examples, memory 512 and internal memory 514 may be local partitions of the same physical memory structure, for example, a static random-access memory (SRAM). In one example, a logical partition in internal memory 514 may be dedicated to or dynamically allocated to Layer 2 circuits 508 and MCU 510 for exchanging commands and responses.
  • memory 512 includes a plurality of command queues 534 for storing a plurality of sets of commands, respectively, and a plurality of response queues 536 for storing a plurality of sets of responses, respectively.
  • Each pair of corresponding command queue 534 and response queue 536 may be dedicated to one of Layer 2 circuits 508.
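The dedicated command/response queue pairs can be sketched as follows. This is a minimal illustration; the class and method names and the dictionary-based commands are assumptions for illustration, not part of the disclosure.

```python
from collections import deque

class QueuePair:
    """One command queue and one response queue dedicated to a single
    Layer 2 circuit (e.g., SDAP, PDCP, RLC, or MAC)."""
    def __init__(self):
        self.commands = deque()   # written by the MCU, read by the circuit
        self.responses = deque()  # written by the circuit, read by the MCU

class SharedMemory:
    """Sketch of memory 512: one command/response queue pair per circuit."""
    def __init__(self, circuit_names):
        self.pairs = {name: QueuePair() for name in circuit_names}

    def post_command(self, circuit, command):
        self.pairs[circuit].commands.append(command)

    def post_response(self, circuit, response):
        self.pairs[circuit].responses.append(response)

    def read_responses(self, circuit):
        # The MCU drains the response queue for one circuit at a time.
        pair = self.pairs[circuit]
        out = []
        while pair.responses:
            out.append(pair.responses.popleft())
        return out

mem = SharedMemory(["SDAP", "PDCP", "RLC", "MAC"])
mem.post_command("PDCP", {"op": "cipher", "sdu_id": 7})
mem.post_response("PDCP", {"sdu_id": 7, "status": "ok"})
```

In the disclosed design the MCU would additionally use interrupts over the local bus to signal queue activity; that signaling is omitted here.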
  • baseband SoC 502 may further include a local bus 540.
  • MCU 510 may be operatively coupled to memory 512 and main bus 538 through local bus 540.
  • MCU 510 may be configured to generate a plurality of sets of control commands and write each set of the commands into a respective command queue 534 in memory 512 through local bus 540 and interrupts.
  • MCU 510 may also read a plurality of sets of responses (e.g., processing result statuses) from response queues 536 in memory 512, respectively, through local bus 540 and interrupts.
  • MCU 510 generates a set of commands based on a set of responses from a higher layer in the Layer 2 protocol stack (e.g., the previous stage in Layer 2 uplink data processing) or a lower layer in the Layer 2 protocol stack (e.g., the previous stage in Layer 2 downlink data processing).
  • MCU 510 can be operatively coupled to Layer 2 circuits 508 and control the operations of Layer 2 circuits 508 to process the Layer 2 data. It is understood that although one MCU 510 is shown in FIG. 5, the number of MCUs is scalable, such that multiple MCUs may be used in some examples.
  • memory 512 may be part of MCU 510, e.g., a cache integrated with MCU 510. It is further understood that regardless of the naming, any suitable processing units that can generate control commands to control the operations of Layer 2 circuits 508 and check the responses of Layer 2 circuits 508 may be considered as MCU 510 disclosed herein.
  • FIG. 6 illustrates an exemplary wireless network 600, in which some aspects of the present disclosure may be implemented, according to some embodiments of the present disclosure.
  • wireless network 600 may include a network of nodes, such as a user equipment (UE) 602, an access node 604, and a core network element 606.
  • User equipment 602 may be any terminal device, such as a mobile phone, a desktop computer, a laptop computer, a tablet, a vehicle computer, a gaming console, a printer, a positioning device, a wearable electronic device, a smart sensor, or any other device capable of receiving, processing, and transmitting information, such as any member of a vehicle to everything (V2X) network, a cluster network, a smart grid node, or an Internet-of-Things (IoT) node.
  • Access node 604 may be a device that communicates with user equipment 602, such as a wireless access point, a base station (BS), a Node B, an enhanced Node B (eNodeB or eNB), a next-generation NodeB (gNodeB or gNB), a cluster master node, or the like.
  • Access node 604 may have a wired connection to user equipment 602, a wireless connection to user equipment 602, or any combination thereof.
  • Access node 604 may be connected to user equipment 602 by multiple connections, and user equipment 602 may be connected to other access nodes in addition to access node 604. Access node 604 may also be connected to other UEs.
  • Core network element 606 may serve access node 604 and user equipment 602 to provide core network services.
  • core network element 606 may include a home subscriber server (HSS), a mobility management entity (MME), a serving gateway (SGW), or a packet data network gateway (PGW).
  • In an LTE system, for example, core network element 606 may be part of an evolved packet core (EPC). Other core network elements may be used in LTE and in other communication systems.
  • core network element 606 includes an access and mobility management function (AMF) device, a session management function (SMF) device, or a user plane function (UPF) device, of a core network for the NR system. It is understood that core network element 606 is shown as a set of rack-mounted servers by way of illustration and not by way of limitation.
  • Core network element 606 may connect with a large network, such as the Internet.
  • data from user equipment 602 may be communicated to other UEs connected to other access points, including, for example, a computer 610 connected to Internet 608, for example, using a wired connection or a wireless connection, or to a tablet 612 wirelessly connected to Internet 608 via a router 614.
  • computer 610 and tablet 612 provide additional examples of possible UEs
  • router 614 provides an example of another possible access node.
  • a generic example of a rack-mounted server is provided as an illustration of core network element 606.
  • other servers may also be present, such as database servers (e.g., a database 616) and security and authentication servers (e.g., an authentication server 618).
  • Database 616 may, for example, manage data related to user subscription to network services.
  • a home location register (HLR) is an example of a standardized database of subscriber information for a cellular network.
  • authentication server 618 may handle authentication of users, sessions, and so on.
  • an authentication server function (AUSF) device may be the specific entity to perform user equipment authentication.
  • a single server rack may handle multiple such functions, such that the connections between core network element 606, authentication server 618, and database 616, may be local connections within a single rack.
  • Each of the elements of FIG. 6 may be considered a node of wireless network 600.
  • Node 700 may be configured as user equipment 602, access node 604, or core network element 606 in FIG. 6. Similarly, node 700 may also be configured as computer 610, router 614, tablet 612, database 616, or authentication server 618 in FIG. 6.
  • node 700 may include a processor 702, a memory 704, and a transceiver 706. These components are shown as connected to one another by bus 708, but other connection types are also permitted. When node 700 is user equipment 602, additional components may also be included, such as a user interface (UI), sensors, and the like. Similarly, node 700 may be implemented as a blade in a server system when node 700 is configured as core network element 606. Other implementations are also possible.
  • Transceiver 706 may include any suitable device for sending and/or receiving data.
  • Node 700 may include one or more transceivers, although only one transceiver 706 is shown for simplicity of illustration.
  • An antenna 710 is shown as a possible communication mechanism for node 700. Multiple antennas and/or arrays of antennas may be utilized. Additionally, examples of node 700 may communicate using wired techniques rather than (or in addition to) wireless techniques.
  • access node 604 may communicate wirelessly to user equipment 602 and may communicate by a wired connection (for example, by optical or coaxial cable) to core network element 606.
  • Other communication hardware, such as a network interface card (NIC), may be included as well.
  • node 700 may include processor 702. Although only one processor is shown, it is understood that multiple processors can be included.
  • Processor 702 may include microprocessors, microcontrollers, DSPs, ASICs, field-programmable gate arrays (FPGAs), programmable logic devices (PLDs), state machines, gated logic, discrete hardware circuits, and other suitable hardware configured to perform the various functions described throughout the present disclosure.
  • Processor 702 may be a hardware device having one or many processing cores.
  • Processor 702 may execute software.
  • Software shall be construed broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software modules, applications, software applications, software packages, routines, subroutines, objects, executables, threads of execution, procedures, functions, etc., whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise.
  • Software can include computer instructions written in an interpreted language, a compiled language, or machine code. Other techniques for instructing hardware are also permitted under the broad category of software.
  • Processor 702 may be a baseband chip, such as DP hardware 204 in FIG. 2 or SoC 502 in FIG. 5.
  • the node 700 may also include other processors, not shown, such as a central processing unit of the device, a graphics processor, or the like.
  • the processor 702 may include internal memory (not shown in FIG. 7) that may serve as memory for L2 data, such as L2+HARQ buffer (local / internal) 206 in FIG. 2 or internal memory 514 in FIG. 5.
  • Processor 702 may include an RF chip, for example integrated into a baseband chip, or an RF chip may be provided separately.
  • Processor 702 may be configured to operate as a modem of node 700, or may be one element or component of a modem. Other arrangements and configurations are also permitted.
  • node 700 may also include memory 704. Although only one memory is shown, it is understood that multiple memories can be included. Memory 704 can broadly include both memory and storage.
  • memory 704 may include random-access memory (RAM), read-only memory (ROM), SRAM, dynamic RAM (DRAM), ferro-electric RAM (FRAM), electrically erasable programmable ROM (EEPROM), CD-ROM or other optical disk storage, hard disk drive (HDD), such as magnetic disk storage or other magnetic storage devices, Flash drive, solid-state drive (SSD), or any other medium that can be used to carry or store desired program code in the form of instructions that can be accessed and executed by processor 702.
  • memory 704 may be embodied by any computer-readable medium, such as a non- transitory computer-readable medium.
  • the memory 704 can be the external memory 506 in FIG. 5 or the L3 Buffer (Ext) 202 in FIG. 2.
  • the memory 704 may be shared by processor 702 and other components of node 700, such as the unillustrated graphics processor or central processing unit.
  • the functions described herein may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or encoded as instructions or code on a non-transitory computer-readable medium.
  • Computer-readable media includes computer storage media.
  • Storage media may be any available media that can be accessed by a computing device, such as node 700 in FIG. 7.
  • computer-readable media can include RAM, ROM, EEPROM, CD-ROM or other optical disk storage, HDD, such as magnetic disk storage or other magnetic storage devices, Flash drive, SSD, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a processing system, such as a mobile device or a computer.
  • Disk and disc, as used herein, include CD, laser disc, optical disc, DVD, and floppy disk, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
  • an apparatus for memory handling can include an external memory configured to store layer three (L3) data and an internal memory configured to store layer two (L2) data.
  • the apparatus can also include circuitry configured to process a header of a packet and move the header from the external memory to the internal memory, process a remainder of the packet upon determination that at least two predetermined conditions are met, and pass the remainder of the packet from the external memory to the internal memory.
  • the circuitry may further be configured to receive the packet and store the packet in the external memory prior to processing the header.
  • the circuitry may further be configured to pass the packet to a physical layer of the apparatus for transmission.
  • the internal memory may include a transmission window buffer and a retransmission window buffer.
  • the circuitry may be configured also to move the packet to the retransmission window buffer.
  • the circuitry may be configured to bring additional L3 data from the external memory into the internal memory.
  • the remainder of the packet may be processed by packet data convergence protocol processing that includes robust header compression, integrity checking, and ciphering.
  • the remainder of the packet may further be processed by the addition of radio link control and medium access control headers.
  • the remainder of the packet may be placed in contiguous memory in the internal memory.
  • the at least two predetermined conditions may include space in the internal memory being available and medium access control being ready to prepare data for a next window of transmission.
  • an apparatus for memory handling can include an external memory configured to store layer three (L3) data and an internal memory configured to store layer two (L2) data.
  • the apparatus can further include circuitry configured to maintain L3 data according to at least one first window and maintain L2 data according to at least one second window shorter than the first window.
  • the at least one second window can include a transmission window and a retransmission window.
  • the transmission window combined with the retransmission window may be less than the at least one first window.
  • the circuitry may further be configured to dimension the internal memory for multiple medium access control instances.
  • the circuitry may be configured to take into account a plurality of parameters when dimensioning the internal memory.
  • the parameters can include a number of logical channels, data rate, priority of logical channel, maximum bucket size of logical channel, and layer three buffer size of logical channel.
  • the circuitry may be configured to scale each medium access control instance size based on a ratio of a maximum internal memory size and total size of all medium access control instances.
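The scaling rule above can be sketched as follows, assuming byte-denominated sizes and simple proportional truncation; the function name and example values are illustrative assumptions.

```python
def dimension_mac_instances(requested_sizes, max_internal_size):
    """Scale each MAC instance's buffer by the ratio of the maximum
    internal memory size to the total size of all instances, so that
    the instances together fit in internal memory."""
    total = sum(requested_sizes.values())
    if total <= max_internal_size:
        return dict(requested_sizes)  # everything already fits as requested
    ratio = max_internal_size / total
    # Truncate so the scaled sizes never exceed the available memory.
    return {name: int(size * ratio) for name, size in requested_sizes.items()}

# Two MAC instances request 96 KB in total, but only 64 KB is available:
scaled = dimension_mac_instances({"mac0": 65536, "mac1": 32768}, 65536)
```

A fuller implementation would also weigh the other parameters the disclosure lists (number of logical channels, data rate, channel priority, maximum bucket size, and L3 buffer size) before scaling.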
  • a method for memory handling can include processing, by circuitry, a header of a packet, and moving the header from an external memory configured to store layer three (L3) data to an internal memory configured to store layer two (L2) data.
  • the method can also include processing, by the circuitry, a remainder of the packet upon determination that at least two predetermined conditions are met.
  • the method can further include passing, by the circuitry, the remainder of the packet from the external memory to the internal memory.
  • the method can also include receiving the packet and storing the packet in the external memory prior to processing the header.
  • the method can further include passing the packet to a physical layer of a device for transmission.
  • the internal memory can include a transmission window buffer and a retransmission window buffer.
  • the method may further include also moving the packet to the retransmission window buffer.
  • the method may further include bringing additional layer three data from the external memory into the internal memory.
  • the processing of the remainder of the packet can include packet data convergence protocol processing that includes robust header compression, integrity checking, and ciphering.
  • the remainder of the packet may be further processed by the addition of radio link control and medium access control headers.
  • the remainder of the packet may be placed in contiguous memory in the internal memory.
  • the at least two predetermined conditions can include space in the internal memory being available and medium access control being ready to prepare data for a next window of transmission.
  • a method for memory handling can include maintaining, by circuitry, layer three (L3) data according to at least one first window, wherein the L3 data is stored in external memory.
  • the method may also include maintaining, by the circuitry, layer two (L2) data according to at least one second window shorter than the first window, wherein the L2 data is stored in internal memory.
  • the at least one second window can include a transmission window and a retransmission window.
  • the transmission window combined with the retransmission window may be less than the at least one first window.
  • the method may further include dimensioning the internal memory for multiple medium access control instances.
  • the dimensioning may take into account a plurality of parameters.
  • the parameters can include a number of logical channels, data rate, priority of logical channel, maximum bucket size of logical channel, and layer three buffer size of logical channel.
  • the method may further include scaling each medium access control instance size based on a ratio of a maximum internal memory size and total size of all medium access control instances.
  • a non-transitory computer-readable medium can encode instructions that, when executed by a microcontroller of a node, may perform a process for memory handling.
  • the process can include any of the above-described methods.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

Embodiments of apparatuses and methods for memory handling may be applicable to communication systems, such as wireless communication systems. In an example, an apparatus for memory handling may include an external memory configured to store layer three (L3) data and an internal memory configured to store layer two (L2) data. The apparatus may further include circuitry configured to process a header of a packet and move the header from the external memory to the internal memory, process a remainder of the packet upon determination that at least two predetermined conditions are met, and pass the remainder of the packet from the external memory to the internal memory.

Description

DYNAMIC UPLINK END-TO-END DATA TRANSFER SCHEME WITH
OPTIMIZED MEMORY PATH
CROSS-REFERENCE TO RELATED APPLICATION
[0001] This application is related to and claims the priority of US Provisional Patent
Application No. 62/966,686, filed January 28, 2020, the entirety of which is hereby incorporated herein by reference.
BACKGROUND
[0002] Embodiments of the present disclosure relate to apparatuses and methods for memory handling, which may be applicable to communication systems, such as wireless communication systems.
[0003] Communication systems, such as wireless communication systems, are widely deployed to provide various telecommunication services such as telephony, video, data, messaging, and broadcasts. When packets are to be sent through a medium, for example, over the air in the case of wireless communication, a modem having a protocol stack embodied in hardware and software may pass the packets down the protocol stack with a physical layer, including a radio frequency (RF) module, ultimately converting the bits of the packet into radio waves.
SUMMARY
[0004] Embodiments of apparatuses and methods for memory handling are disclosed herein.
[0005] In one example, an apparatus for memory handling can include an external memory configured to store layer three (L3) data. The apparatus can also include an internal memory configured to store layer two (L2) data. The apparatus can further include circuitry configured to process a header of a packet and move the header from the external memory to the internal memory, process a remainder of the packet upon determination that at least two predetermined conditions are met, and pass the remainder of the packet from the external memory to the internal memory. The at least two predetermined conditions can include that space in the internal memory is available and that a medium access control (MAC) layer is ready to prepare data for a next window of transmission.
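The two-condition gate in this example can be sketched as follows. This is a minimal illustration; the function and parameter names are assumptions, not part of the disclosure.

```python
def ready_to_process_remainder(internal_free_bytes, remainder_bytes, mac_ready):
    """Both conditions must hold before the remainder of the packet is
    pulled from external memory into internal memory: (1) space in the
    internal memory is available, and (2) the MAC layer is ready to
    prepare data for the next window of transmission."""
    space_available = internal_free_bytes >= remainder_bytes
    return space_available and mac_ready

assert not ready_to_process_remainder(512, 1500, mac_ready=True)    # no space
assert not ready_to_process_remainder(4096, 1500, mac_ready=False)  # MAC not ready
assert ready_to_process_remainder(4096, 1500, mac_ready=True)
```

Until the gate opens, only the packet header has been moved into internal memory; the bulk of the packet stays in external memory, avoiding needless copies.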
[0006] In another example, an apparatus for memory handling can include an external memory configured to store L3 data and an internal memory configured to store L2 data. The apparatus can further include circuitry configured to maintain L3 data according to at least one first window and maintain L2 data according to at least one second window shorter than the first window.
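The two-level window scheme can be sketched as follows; the window sizes and function name are illustrative assumptions.

```python
L3_WINDOW = 64    # packets tracked in external memory (illustrative size)
TX_WINDOW = 8     # packets staged in internal memory for transmission
RETX_WINDOW = 8   # packets kept in internal memory for retransmission

# The two L2 windows combined are smaller than the single L3 window, so
# internal memory only ever holds a small sliding subset of the L3 data.
assert TX_WINDOW + RETX_WINDOW < L3_WINDOW

def l2_resident_packets(next_to_send):
    """Indices of packets currently resident in internal memory: the
    retransmission window behind next_to_send plus the transmission
    window from next_to_send onward."""
    start = max(0, next_to_send - RETX_WINDOW)
    return list(range(start, next_to_send + TX_WINDOW))
```

As transmission advances, the internal windows slide forward over the L3 data, and additional L3 data is brought in from external memory to refill the transmission window.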
[0007] In a further example, a method for memory handling can include processing, by circuitry, a header of a packet, and moving the header from an external memory configured to store L3 data to an internal memory configured to store L2 data. The method can also include processing, by the circuitry, a remainder of the packet upon determination that at least two predetermined conditions are met. The method can further include passing, by the circuitry, the remainder of the packet from the external memory to the internal memory. The at least two predetermined conditions can include that space in the internal memory is available and that a MAC layer is ready to prepare data for a next window of transmission.
[0008] In yet another example, a method for memory handling can include maintaining, by circuitry, L3 data according to at least one first window, wherein the L3 data is stored in external memory. The method may also include maintaining, by the circuitry, L2 data according to at least one second window shorter than the first window, wherein the L2 data is stored in internal memory.
[0009] In a still further example, a non-transitory computer-readable medium can encode instructions that, when executed by a microcontroller of a node, may perform a process for memory handling. The process can include any of the above-described methods.
BRIEF DESCRIPTION OF THE DRAWINGS
[0010] The accompanying drawings, which are incorporated herein and form a part of the specification, illustrate embodiments of the present disclosure and, together with the description, further serve to explain the principles of the present disclosure and to enable a person skilled in the pertinent art to make and use the present disclosure.
[0011] FIG. 1 illustrates data processing in a protocol stack, according to some embodiments of the present disclosure.
[0012] FIG. 2 illustrates a data flow diagram illustrating some embodiments of the present disclosure.
[0013] FIGs. 3A and 3B illustrate an internal memory corresponding to the data flow diagram of FIG. 2, in some embodiments of the present disclosure.
[0014] FIG. 4A illustrates a method according to some embodiments of the present disclosure.
[0015] FIG. 4B illustrates a further method according to some embodiments of the present disclosure.
[0016] FIG. 5 illustrates a detailed block diagram of a baseband system on chip (SoC) implementing Layer 2 packet processing using Layer 2 circuits and a microcontroller (MCU) according to some embodiments of the present disclosure.
[0017] FIG. 6 illustrates an exemplary wireless network that may incorporate memory handling, in which some aspects of the present disclosure may be implemented, according to some embodiments of the present disclosure.
[0018] FIG. 7 illustrates a node that may be used for memory handling, according to some embodiments of the present disclosure.
DETAILED DESCRIPTION
[0019] Although specific configurations and arrangements are discussed, it should be understood that this is done for illustrative purposes only. A person skilled in the pertinent art will recognize that other configurations and arrangements can be used without departing from the spirit and scope of the present disclosure. It will be apparent to a person skilled in the pertinent art that the present disclosure can also be employed in a variety of other applications.
[0020] It is noted that references in the specification to “one embodiment,” “an embodiment,” “an example embodiment,” “some embodiments,” etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases do not necessarily refer to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it would be within the knowledge of a person skilled in the pertinent art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
[0021] In general, terminology may be understood at least in part from usage in context.
For example, the term “one or more” as used herein, depending at least in part upon context, may be used to describe any feature, structure, or characteristic in a singular sense or may be used to describe combinations of features, structures or characteristics in a plural sense. Similarly, terms, such as “a,” “an,” or “the,” again, may be understood to convey a singular usage or to convey a plural usage, depending at least in part upon context. In addition, the term “based on” may be understood as not necessarily intended to convey an exclusive set of factors and may, instead, allow for existence of additional factors not necessarily expressly described, again, depending at least in part on context.
[0022] Various aspects of wireless communication systems will now be described with reference to various apparatus and methods. These apparatus and methods will be described in the following detailed description and illustrated in the accompanying drawings by various blocks, modules, units, components, circuits, steps, operations, processes, algorithms, etc. (collectively referred to as “elements”). These elements may be implemented using electronic hardware, firmware, computer software, or any combination thereof. Whether such elements are implemented as hardware, firmware, or software depends upon the particular application and design constraints imposed on the overall system.
[0023] The techniques described herein may be used for various wireless communication networks, such as code division multiple access (CDMA) system, time division multiple access (TDMA) system, frequency division multiple access (FDMA) system, orthogonal frequency division multiple access (OFDMA) system, single-carrier frequency division multiple access (SC-FDMA) system, and other networks. The terms “network” and “system” are often used interchangeably. A CDMA network may implement a radio access technology (RAT) such as Universal Terrestrial Radio Access (UTRA), evolved UTRA (E-UTRA), CDMA 2000, etc. A TDMA network may implement a RAT such as GSM. An OFDMA network may implement a RAT, such as long term evolution (LTE) or new radio (NR). The techniques and systems described herein may be used for the wireless networks and RATs mentioned above, as well as other wireless networks and RATs. Likewise, the techniques and systems described herein may also be applied to wired networks, such as networks based on optical fibers, coaxial cables, or twisted-pairs, or to satellite networks.
[0024] Some embodiments of the present disclosure relate to a mechanism to manage memory and processing as a packet traverses down through protocol layers. Some embodiments also relate to a minimum internal memory for transmission and retransmission purposes for such a packet. Furthermore, some embodiments relate to effective management of retransmissions data storage.
[0025] In communication devices, such as wireless modems for use in the user equipment
(UE) or other terminal devices of Fifth Generation (5G) communication systems, L3 packet data to be transmitted from the device is stored in external memory. The external memory may be shared by multiple components within the modem or with other components of the UE outside the modem. During L3 IP header processing, the L3 packet data may be moved into an internal memory, which may also be referred to as a local memory. For example, the internal memory may be local to a given system-on-chip, as distinct from external memory, which may be on another chip of the same device. After L3 IP header processing, the L3 packet data is stored back in external memory again.
[0026] A trigger is then sent to the PDCP layer to process the L3 packets one function at a time. The functions can include robust header compression (ROHC), integrity checking, and ciphering. Upon processing, the L3 packets may be saved to external memory, or an internal memory, for the next steps in the processing chain.
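The one-function-at-a-time PDCP chain can be sketched as follows. The function bodies here are placeholders standing in for real ROHC, integrity protection, and ciphering; only the pipeline structure reflects the text above.

```python
def rohc_compress(packet):
    # Placeholder: real ROHC compresses IP/UDP/RTP headers.
    return {**packet, "header": "compressed"}

def add_integrity(packet):
    # Placeholder: real PDCP computes a MAC-I over the PDU.
    return {**packet, "mac_i": hash(packet["payload"]) & 0xFFFFFFFF}

def cipher(packet):
    # Placeholder: real PDCP ciphers the PDU with the configured algorithm.
    return {**packet, "ciphered": True}

PDCP_PIPELINE = [rohc_compress, add_integrity, cipher]

def pdcp_process(packet):
    """Apply each PDCP function in turn, one function at a time."""
    for fn in PDCP_PIPELINE:
        packet = fn(packet)
    return packet

pdu = pdcp_process({"header": "ip", "payload": "data"})
```

Between pipeline stages, the intermediate packet may be held in external memory or in an internal memory, as described above.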
[0027] PDCP L2 packets are then queued into logical channel queues waiting to be processed further. The RLC layer then sorts the data into various RLC queues in the LCs.
[0028] Finally, the MAC layer retrieves the L2 data from the LC queues and moves it to an internal memory for transfer to the PHY layer.
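The logical-channel queue flow just described, with RLC sorting data into per-LC queues and MAC draining them for the PHY layer, can be sketched as follows; the priority scheme and budget parameter are simplifying assumptions for illustration.

```python
from collections import deque

class LogicalChannel:
    def __init__(self, priority):
        self.priority = priority   # lower value = higher priority (assumed)
        self.queue = deque()       # RLC queue of PDUs awaiting scheduling

def rlc_enqueue(channels, lc_id, pdu):
    """RLC sorts processed PDCP PDUs into per-logical-channel queues."""
    channels[lc_id].queue.append(pdu)

def mac_retrieve(channels, budget):
    """MAC drains the LC queues in priority order, up to a simple
    per-grant budget, producing the data handed to the PHY layer."""
    tb = []
    for lc_id in sorted(channels, key=lambda i: channels[i].priority):
        q = channels[lc_id].queue
        while q and len(tb) < budget:
            tb.append(q.popleft())
    return tb

chans = {1: LogicalChannel(priority=1), 2: LogicalChannel(priority=2)}
rlc_enqueue(chans, 2, "low-prio-pdu")
rlc_enqueue(chans, 1, "high-prio-pdu")
tb = mac_retrieve(chans, budget=2)   # higher-priority PDU is taken first
```

Real MAC scheduling (e.g., logical channel prioritization with bucket sizes) is considerably more involved; this sketch shows only the queue hand-off.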
[0029] The above-described approaches to handling packet data may result in inefficient data movements of a packet from L3 to multiple PDCP layer functions, and then to RLC and to MAC layers. The above-described approaches rely on multiple external memory accesses, both for reading and writing. Additionally, a large external memory and large internal memory are required. In view of the large amount of memory, and the large amount of accesses to the memory, a relatively large amount of power may be used.
[0030] Some embodiments may have various benefits and/or advantages as to various technical aspects. For example, some embodiments of the present disclosure provide a way to reduce a data transfer path through the memories in the UL ETE data path. Some embodiments still ensure that the packets traverse all the multiple data plane layers needed to process the incoming L3 packets. Furthermore, some embodiments minimize data access to external memory, thereby saving power. In addition, some embodiments minimize the amount of internal memory space, even though internal memory may provide fast performance at a higher cost of power and area.
[0031] Some embodiments of the present disclosure relate to an efficient memory path method for the dynamic transfer of 5G uplink (UL) packets for data transmission, which allows minimal data movements, optimized external memory access, and a small internal memory for high-throughput, low-latency packets.
[0032] A challenge in the UL ETE data path is finding the minimum data transfer path through the memories necessary to traverse all the multiple data plane layers to process the incoming L3 packets, while minimizing data access to external memory to save power.
[0033] In addition, it may be beneficial to minimize the amount of internal memory space.
Internal memory space may provide fast performance but at a higher cost of power and area. Internal memory 514 in FIG. 5 is an example of internal memory, as distinct from external memory 506 in FIG. 5. The external memory 506 may be shared by multiple components of the system, including those not shown in FIG. 5. By contrast, the internal memory 514 in FIG. 5 may be configured exclusively for use by a baseband chip of a modem of a user equipment implementing the system shown in FIG. 5. The baseband chip may include an RF component, or an RF chip may be provided as a physically separate element.
[0034] Some embodiments relate to an efficient memory path method for the dynamic transfer of fifth-generation (5G) uplink (UL) packets for data transmission. Some embodiments may allow minimal data movements, may have optimized external memory access and may rely on a small internal memory for high throughput and low latency packets.
[0035] Some aspects of the explanation of some embodiments of the present disclosure discuss hardware aspects and software aspects. In some cases, the hardware aspects can refer to aspects that are performed by specialized hardware, such as a hardware-based protocol stack implementation. FIG. 5, discussed below, provides a specific example of a hardware-based protocol stack implementation with multiple dedicated integrated circuits, such as application-specific integrated circuits (ASICs), handling different layers of the protocol stack. On the other hand, the software aspects can refer to aspects that may be performed by a general-purpose processor or by a layer-independent specialized modem processor. FIG. 5 illustrates a specific example in which the software aspects may be implemented on a microcontroller.
[0036] Some embodiments may rely on three different and potentially independent principles that can be used together in one aspect of some embodiments. According to a first principle, some embodiments move data from layer three (L3) external memory to layer two (L2) internal memory only near the transmission time frame.
[0037] According to a second principle, some embodiments perform packet data convergence protocol (PDCP) processing concurrent with data movement from L3 external memory to L2 internal memory.

[0038] According to a third principle, some embodiments prepare expected medium access control (MAC) protocol data unit (PDU) packets in L2 internal memory directly in place. The preparation may involve prioritizing and concatenating the L2 packet data as it moves from L3 external memory to L2 internal memory.
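These principles can be sketched together in simplified form. The following Python sketch is purely illustrative: the header bytes, the XOR "cipher," and the buffer layout are hypothetical stand-ins, not the disclosed hardware design. It shows a MAC PDU being assembled in place during a single pass over the L3 data, so the packet is moved from external to internal memory only once:

```python
# Illustrative sketch: build the MAC PDU in place while moving the L3
# packet, applying PDCP-style ciphering during the single copy.

def build_mac_pdu_in_place(l3_packet: bytes, key: int) -> bytearray:
    pdu = bytearray()
    pdu += b"\x2a"          # placeholder MAC subheader byte (hypothetical)
    pdu += b"\x01"          # placeholder RLC header byte (hypothetical)
    # PDCP-style ciphering applied while copying, so the data moves once
    pdu += bytes(b ^ key for b in l3_packet)
    return pdu

l2_internal = bytearray()   # stands in for the L2 internal memory region
for pkt in (b"hello", b"world"):
    l2_internal += build_mac_pdu_in_place(pkt, key=0x5A)
```

Because the headers are prepended and the ciphered payload concatenated as part of the same pass, the prepared PDUs land contiguously in the internal buffer, ready for streaming to the PHY.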
[0039] Each of these identified principles can be used together or differently. These three principles can be viewed as principles of a first aspect of some embodiments, as mentioned above. This aspect may be referred to as optimized data movements from external memory to internal memory.
[0040] Some embodiments may rely on two different and potentially independent principles that can be used together in a second aspect of some embodiments. According to a first principle, a reduced transmission window (TXWIN) buffer can be used for prioritized L2 MAC data storage in a minimal internal memory. The reduced TXWIN buffer may be used for fast transmission near the transmission timeframe.
[0041] According to a second principle, a reduced retransmission window (RETXWIN) buffer can be used for L2 MAC data storage in a minimal internal memory. The reduced RETXWIN buffer may be used for fast hybrid automatic repeat request (HARQ) retransmission close to the transmitted timeframe.
[0042] The first and second principles can be implemented together to, for example, help further reduce local data storage needs. This second aspect can, therefore, be considered as a minimum internal memory for fast UL transmissions and retransmissions.
[0043] A third aspect of some embodiments may involve the effective management of retransmission data storage. This third aspect may involve three principles, which may be used independently or together.
[0044] According to a first principle, HARQ retransmission data can be retrieved from a small, fast, internal memory, if available. One detail here may be the length of time that HARQ retransmission data is retained in the small, fast, internal memory. This length of time may be set in advance by a configuration or may be dynamically changed over time based on HARQ usage by the device in practice. For example, a device in a relatively noisy or otherwise interference-prone scenario may need to use HARQ more often than one in a relatively clear scenario.
[0045] According to a second principle, if retransmission data is requested or otherwise required, but is not currently available in internal memory, then the retransmission data may be retrieved from external memory. The retention time in internal memory may be long enough to handle the vast majority of HARQ retransmissions; nevertheless, occasionally a request for retransmission may arrive outside the retention time. As mentioned above, the retention time for internal memory can be configured to capture some predicted percentage of the retransmission requests, such as 97%, 99%, or 99.9% of the retransmission requests. Other percentages can also be targeted: the preceding are just examples.
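The first and second principles of this aspect can be illustrated with a hypothetical lookup routine. All names, the dictionary-based buffers, and the reprocessing step are illustrative assumptions, not the disclosed implementation:

```python
# Illustrative sketch of the retransmission lookup order: the small
# internal RETXWIN is tried first; external memory is the fallback.

def fetch_for_retransmission(seq, retx_win, external_l3, reprocess):
    if seq in retx_win:                  # fast path: still held internally
        return retx_win[seq]
    # slow path: re-fetch the L3 data and run it through the L3-to-L2
    # processing path again (PDCP, RLC, MAC) before retransmission
    return reprocess(external_l3[seq])

external_l3 = {1: b"a", 2: b"b", 3: b"c"}    # long-lived L3 copies
retx_win = {3: b"C"}                         # only the newest PDU retained
reprocess = lambda raw: raw.upper()          # stands in for the L2 redo
```

For sequence 3 the internal copy is returned directly; for an aged-out sequence such as 1, the raw L3 data is reprocessed, mirroring the slow path described above.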
[0046] According to a third principle, all L3 data packets may be stored in the external memory until a predetermined time expires. The predetermined time may be an L2 discard window or a PDCP discard window. If multiple discard windows are applicable, the data may remain in external memory until the last discard window expires. A window may be based on the need to perform link recovery. Thus, the discard window may expire when the RLC layer or the PDCP layer has completed link recovery.
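This discard rule can be sketched as follows; the function name and the scalar time representation are hypothetical:

```python
# Illustrative sketch: an L3 packet is released from external memory only
# after the longest applicable discard window (e.g., a PDCP discard timer
# or an RLC link-recovery window) has elapsed.

def may_discard(now, stored_at, discard_windows):
    # keep until the *last* (longest) applicable window expires
    return now - stored_at >= max(discard_windows)
```

With windows of 50 and 80 time units, a packet stored at time 0 may be discarded at time 100 but not at time 70, since the 80-unit window has not yet expired.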
[0047] FIG. 1 illustrates data processing in a protocol stack, according to some embodiments. For example, the protocol stack may be implemented in a modem or similar device. As shown in FIG. 1, in a 5G cellular wireless modem, the packet data protocol stack consists of the Modem Layer 3 IP layer, the PDCP (Packet Data Convergence Protocol) layer, the RLC (Radio Link Control) layer, and the MAC (Media Access Control) layer. Each layer is responsible for processing the user plane packet data in the form of IP data or raw user data and ensuring that data transmission is secure, on-time, and error-free.

[0048] In the UL end-to-end (ETE) data path shown in FIG. 1, the L3 data is processed through multiple layers before the final transfer to the MAC layer and to the PHY layer.
[0049] First, for example, the packet may pass through L3 layer internet protocol (IP) header and quality of service (QoS) flow processing and can be queued in L3 buffers. Then the packet may pass through PDCP processing, which can include ROHC compression, integrity checking, and ciphering. The PDCP packet data can be queued in L2 buffers sorted in logical channels (LCs). Then, at the RLC layer, RLC queues can be sorted in priority bins according to the type of data (retransmission, new data, status, segments). Finally, at the MAC layer, the data packets from different LCs can be gathered according to priority per the Logical Channel Prioritization (LCP) procedures as specified in the 3GPP standard. A similar approach can be used for other communication standards.
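The LCP-style gathering at the MAC layer can be sketched as below. This is a simplification of the 3GPP LCP procedure: token buckets (prioritized bit rates) and SDU segmentation are omitted, and the channel names and data are illustrative:

```python
# Illustrative sketch: fill one uplink grant by taking SDUs from logical
# channels in priority order (lower number = higher priority).

def gather_by_priority(lcs, grant_bytes):
    pdu = []
    for _prio, name, queue in sorted(lcs, key=lambda lc: lc[0]):
        while queue and len(queue[0]) <= grant_bytes:
            sdu = queue.pop(0)
            grant_bytes -= len(sdu)
            pdu.append((name, sdu))
    return pdu

# two hypothetical logical channels with queued SDUs
lcs = [(2, "LC2", [b"xxxx"]), (1, "LC1", [b"yy", b"zz"])]
```

With a 6-byte grant, both LC1 SDUs are taken first; the lower-priority LC2 SDU no longer fits in the remaining grant and is left queued.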
[0050] Some embodiments of the present disclosure provide a way to reduce a data transfer path through the memories in the UL ETE data path. Some embodiments still ensure that the packets traverse all the multiple data plane layers needed to process the incoming L3 packets. Furthermore, some embodiments minimize data access to external memory, thereby saving power. In addition, some embodiments minimize the amount of internal memory space, even though internal memory may provide fast performance at a higher cost of power and area.
[0051] FIG. 2 is a data flow diagram illustrating some embodiments of the present disclosure. FIGs. 3A and 3B illustrate an internal memory corresponding to the data flow diagram of FIG. 2, in some embodiments. As shown in FIG. 2, at 210A, an application (AP) or host can send L3 TCP/IP packets to the modem data stack of a system 200. Data buffers are allocated from external memory and stored with incoming IP packets. These may broadly be part of the L3 data window.
[0052] At 220B, IP headers can be processed and moved to L2 internal memory. Since the IP headers may need to be processed efficiently for QoS flow identification and sorting/filtering, they can be placed in fast internal memory first, namely before the remainder of the packets. Although not particularly shown in FIG. 3A, these may be part of the packets, such as current transmission packet 310 or any of the other packets in TXWIN 320, with the remainder of the packets joining them after 230C.
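The header-first handling at 220B can be sketched as follows. The DSCP-based classification and the two-flow mapping are hypothetical examples, not the disclosed QoS flow processing; the point is that only the small header needs to sit in fast internal memory for the classification step:

```python
# Illustrative sketch: only the IP header is pulled into fast internal
# memory for QoS flow identification; the payload stays in external
# memory until needed. Field offsets follow the IPv4 header layout.

def classify_flow(ip_header: bytes) -> int:
    dscp = ip_header[1] >> 2        # DSCP field from the IPv4 TOS byte
    return 1 if dscp >= 32 else 0   # hypothetical two-flow mapping

hdr_hi = bytes([0x45, 0b10100000])  # DSCP 40: high-priority flow
hdr_lo = bytes([0x45, 0b00000000])  # DSCP 0: default flow
```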
[0053] In FIG. 2, an external memory, such as L3 Buffer (Ext) 202, is shown operatively coupled to digital processing (DP) hardware 204, which in turn is shown operatively coupled to an internal memory, such as L2+HARQ buffer (local/internal) 206. The L2+HARQ buffer (local/internal) 206 is shown operatively connected to physical layer (PHY) 208, which may be considered as external to the DP hardware 204, but part of the overall system 200. DP software 212 may run on a microcontroller (for example, MCU 510 in FIG. 5) or another computing device.

[0054] As shown in FIG. 2, at 230C, when L2 internal memory is available, and MAC is ready to prepare data for the next window of transmissions, MAC can trigger the allocation of L2 data buffers from the small internal memory and can extract data from L3 external memory.

[0055] This data taken from the L3 external memory can pass through PDCP processing, which can include ROHC, integrity checking, and ciphering, as well as the addition of RLC and MAC headers at the same time.
[0056] The data prepared in L2 internal memory can be placed in contiguous memory for fast streaming to the PHY layer at the transmission timeline.
[0057] PDCP, MAC PDU preparation, and prioritized placement into contiguous memory may all be done when moving data from L3 external to L2 internal memory. By doing this movement only once, data movements may be optimized or otherwise efficiently or beneficially arranged for the next window of transmission. As shown in FIG. 3A, at T0, this movement into internal memory can occur to fill out the packets in TXWIN 320, including current transmission packet 310. Thus, a current transmission packet 310 can be loaded into the transmission window (TXWIN) 320 in L2 internal memory (which can be referred to as L2Localmem). Meanwhile, the L3 data window 330, also referred to as L3 data buffer 330, can encompass the same packets and more. The L3 data window 330 may be maintained in external memory in an L3 buffer (for example, in L3 Buffer (Ext) in FIG. 2 or external memory 506 in FIG. 5). Thus, the L3 data buffer 330 may include all the same packets of TXWIN 320, RETXWIN 340, and more.
[0058] As shown in FIG. 2, at 240D, an RRC signaling message or an RLC command message may arrive. RRC signaling messages or RLC command messages may arrive at the L2 transmission queues at the same time. These messages may be directly allocated into L2 data buffers. Although not explicitly shown in FIG. 3A, these can be included in the TXWIN 320.
[0059] At 250E, MAC PDU transmission and/or retransmission can occur. At every slot, the MAC may get an indication and grant from the BS to transmit packets. This grant is shown, by way of example, as NW UL grant in FIG. 1. For new data, the packets may be retrieved quickly from the TXWIN 320 buffer in L2 internal memory, prepared with MAC data.
[0060] As shown at T0 in FIG. 3A, for retransmission data, the RETXWIN 340 buffer may first be scanned to retrieve the hybrid automatic repeat request (HARQ) data, such as an unacknowledged packet 350. If the data is outside the RETXWIN 340, and/or is already overwritten/deleted (for example, due to the limited size of RETXWIN 340), then the L3 data may be accessed again from external memory. In this case, the retrieved data may traverse the L3 to L2 processing data path, where new L2 local buffers may be allocated for these packets. For example, as shown at 360 at T1 in FIG. 3B, packets previously sent and found only in the L3 data window at T0, shown at 375, may be added back into RETXWIN 340 at 370. Previously sent packets still within the RETXWIN 340 may be aggregated by moving, for example, to the left as shown at 365.
[0061] As shown in FIG. 2, at 260F, there can be a refill of the L2 local buffer. When data have been drawn out from the L2 internal memory for transmission to the PHY layer, these buffers may be saved into the internal memory RETXWIN region. This may be accomplished by moving the bits from one region of internal memory to another region of the internal memory. Another approach may be to redefine the region of internal memory that was previously part of the TXWIN region to be in the RETXWIN.
[0062] Then, old data may be overwritten or otherwise deleted, making space for incoming
TXWIN and RETXWIN. Deleting here can also include dereferencing the bits, without any requirement to zero the bits or otherwise alter them.
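The re-labeling approach described at 260F, in which a span of internal memory is redefined from the TXWIN region to the RETXWIN region rather than copied, can be sketched with index bounds over a single shared buffer. The class and field names are illustrative, not part of the disclosed design:

```python
# Illustrative sketch: windows are index ranges over one shared internal
# buffer, so "moving" transmitted data into RETXWIN is a pointer update.

class Windows:
    def __init__(self, buf):
        self.buf = buf
        self.retx_start = 0    # RETXWIN spans [retx_start, tx_start)
        self.tx_start = 0      # TXWIN spans [tx_start, len(buf))

    def on_transmitted(self, nbytes):
        self.tx_start += nbytes    # re-label the span; no byte movement

    def retxwin(self):
        return bytes(self.buf[self.retx_start:self.tx_start])

w = Windows(bytearray(b"pkt1pkt2"))
w.on_transmitted(4)    # first packet handed to the PHY layer
```

After the 4-byte packet is transmitted, the same bytes that were in TXWIN are reachable through `retxwin()` without any copy; advancing `retx_start` later would "delete" old data by dereferencing it, as described above.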
[0063] Additional L3 data may be drawn into the L2 internal memory, as described above, after PDCP processing, header additions, and prioritized MAC PDU creation. This is shown in FIG. 3B, at T1, where the transmission window and retransmission window have moved forward to the right by one packet, as illustrated by the arrow for the windows' movement direction. This one-packet adjustment is just for illustration. If multiple packets are sent at the same time, the adjustment could be multiple packets at the same time. Likewise, while the directional arrow points to the right, this is simply to illustrate memories in which contiguous blocks of memory are arranged in a left-to-right order. Other arrangements of memory are also permitted, with the illustrated arrangement provided simply by way of example.
[0064] FIG. 4A illustrates a method according to some embodiments. As shown in FIG.
4A, a method 400 for memory handling can include, at 410, maintaining, by circuitry, layer three (L3) data according to at least one first window. The L3 data can be stored in external memory. The method 400 may also include, at 420, maintaining, by the circuitry, layer two (L2) data according to at least one second window shorter than the first window. The L2 data can be stored in internal memory. An illustration of this approach can be seen in FIGs. 3A and 3B, in which the L3 data window is much larger than the windows TXWIN and RETXWIN for L2 data.
[0065] The at least one second window can include a transmission window and a retransmission window, such as TXWIN 320 and RETXWIN 340 in FIGs. 3A and 3B. As shown by way of example in FIGs. 3A and 3B, the transmission window combined with the retransmission window may still be smaller than the at least one first window, such as the L3 data window.

[0066] As shown in FIG. 4A, the method 400 may further include, at 430, dimensioning the internal memory for multiple medium access control instances. This dimensioning may occur in combination with the previously described maintaining steps as illustrated, or may be implemented separately from such steps. The dimensioning may take into account a plurality of parameters. For example, the parameters can include the number of logical channels, the data rate, the priority of each logical channel, the maximum bucket size of each logical channel, and the layer three buffer size of each logical channel.
[0067] In some embodiments, the method 400 may further include, at 440, scaling each medium access control instance size based on a ratio of a maximum internal memory size and the total size of all medium access control instances. For example, based on an initial calculation of the needs of each MAC instance, it may occur that the total need of the instances exceeds the maximum available amount of internal memory. Accordingly, using a weighted fairness approach, each of the MAC instances may be allocated memory according to its need, scaled by the ratio between the total needs and the maximum available internal memory. Other ways of handling limited internal memory are permitted.
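The scaling at 440 can be sketched as a weighted-fairness allocation. The need values, instance names, and the formula by which each need would be computed from the parameters listed above are hypothetical:

```python
# Illustrative sketch: scale each MAC instance's memory need by a single
# ratio when the total need exceeds the available internal memory.

def allocate(needs, max_internal):
    total = sum(needs.values())
    if total <= max_internal:
        return dict(needs)          # everything fits; no scaling needed
    ratio = max_internal / total    # same ratio applied to every instance
    return {mac: int(need * ratio) for mac, need in needs.items()}

needs = {"MAC0": 600, "MAC1": 300, "MAC2": 300}   # total need: 1200 units
alloc = allocate(needs, max_internal=600)         # ratio = 600/1200 = 0.5
```

Here every instance keeps its relative share: the instance that asked for half of the total still receives half of the available memory after scaling.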
[0068] The method of FIG. 4A may be performed with the architecture shown in FIG. 2 and the specific hardware illustrated in FIG. 5 and discussed in more detail below. For example, a microcontroller and/or application-specific integrated circuits (ASICs) may be responsible for maintaining, dimensioning, and scaling, as described above.
[0069] FIG. 4B illustrates a further method according to some embodiments. As with FIG.
4A, the method of FIG. 4B can be implemented in circuitry, such as the hardware and associated software illustrated in FIGs. 2 and 5. The method of FIG. 4B is usable with the method of FIG. 4A, such that both methods may be simultaneously and harmoniously implemented in the same modem of the same user equipment. Other implementations are possible, such as the methods being practiced separately from one another.
[0070] As shown in FIG. 4B, a method 405 for memory handling can include, at 415, processing, by circuitry, a header of a packet, and moving the header from an external memory configured to store layer three (L3) data to an internal memory configured to store layer two (L2) data. This is similarly illustrated at 220B, as explained above.
[0071] As shown in FIG. 4B, the method 405 can also include, at 425, processing, by the circuitry, a remainder of the packet upon the determination that at least two predetermined conditions are met. This is illustrated at 230C and 240D in FIG. 2, as discussed above. The remainder of the packet can be everything aside from the packet header that was separately processed at 220B and 415. The determination that the predetermined conditions are met, at 427, may be variously implemented. In some embodiments, the at least two predetermined conditions can include space in the internal memory being available and medium access control being ready to prepare data for the next window of transmission. This may be thought of as a just-in-time preparation technique, with the remainder of the packets being provided to the L2 memory only just in time for transmission, thereby minimizing the time that they are present in L2, and consequently also minimizing size requirements for the L2 memory.
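The two-condition gate at 427 can be sketched in a few lines; the function and parameter names are illustrative:

```python
# Illustrative sketch: the payload move from L3 external memory to L2
# internal memory is triggered only when both conditions hold.

def should_move_payload(free_internal, needed, mac_preparing_next_window):
    # condition 1: enough internal memory is available
    # condition 2: MAC is assembling data for the next transmission window
    return free_internal >= needed and mac_preparing_next_window
```

Gating the move on both conditions is what makes the preparation just-in-time: data never occupies L2 internal memory while MAC is not yet ready to consume it.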
[0072] The processing of the remainder of the packet can include packet data convergence protocol processing that includes robust header compression, integrity checking, and ciphering, as illustrated in FIG. 2 and discussed above. The remainder of the packet may be further processed by the addition of radio link control and medium access control headers. The remainder of the packet may be placed in contiguous memory in the internal memory, as illustrated in FIGs. 3A and 3B. Contiguous memory can refer to the physical or logical arrangement of the bits in memory. For example, the logical arrangement may be the physical address or the order in which bits are accessed by a controller of the memory. When contiguous memory is used, the system may be able to extract a range of bits, rather than having to receive numerous bit addresses or ranges of bits scattered throughout the memory.

[0073] As shown in FIG. 4B, the method 405 can further include, at 432, passing, by the circuitry, the remainder of the packet from the external memory to the internal memory. This is also illustrated at 230C in FIG. 2, as discussed above.
[0074] As shown in FIG. 4B, the method 405 can also include, at 402, receiving the packet and storing the packet in the external memory prior to processing the header. This is further illustrated at 210A in FIG. 2.
[0075] As shown in FIG. 4B, the method 405 can further include, at 435, passing the packet to a physical layer of the implementing device for transmission. This is also illustrated at 250E in FIG. 2, as discussed above.
[0076] The internal memory used in method 405 can include a transmission window buffer and a retransmission window buffer, as illustrated in FIGs. 3A and 3B as TXWIN 320 and RETXWIN 340. Upon the packet being passed from the transmission window buffer to the physical layer at 435, the method 405 may further include, at 437, moving the packet to the retransmission window buffer. This move is also illustrated in FIGs. 3A and 3B in the change to the scope of the windows between T0 and T1.
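The movement at 435 and 437, together with the bounded size of the retransmission window, can be sketched as follows. A deliberately tiny one-PDU RETXWIN is assumed purely for illustration of the eviction behavior:

```python
# Illustrative sketch: a PDU handed to the PHY leaves the transmission
# window and is retained in a bounded retransmission window, whose
# limited capacity evicts the oldest entries.

from collections import deque

def pass_to_phy(txwin, retxwin, phy_out):
    pdu = txwin.popleft()
    phy_out.append(pdu)     # handed to the physical layer for transmission
    retxwin.append(pdu)     # retained for possible HARQ retransmission

txwin = deque([b"p1", b"p2"])
retxwin = deque(maxlen=1)   # tiny RETXWIN: oldest PDU is evicted
phy = []
pass_to_phy(txwin, retxwin, phy)
pass_to_phy(txwin, retxwin, phy)
```

After both transmissions, only the most recent PDU survives in RETXWIN; a retransmission request for the evicted PDU would fall back to external memory, as described earlier.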
[0077] Upon the packet being passed from the transmission window buffer to the physical layer, the method 405 may further include, at 404, bringing additional layer three data from the external memory into the internal memory. The method 405 may then proceed as described above from 415 onward.
[0078] FIG. 5 illustrates a detailed block diagram of a baseband SoC 502 implementing
Layer 2 packet processing using Layer 2 circuits 508 and a microcontroller (MCU) 510 according to some embodiments of the present disclosure. FIG. 5 may be viewed as a specific implementation and example of the architecture illustrated in FIG. 2, although other implementations, including those that are more or less reliant on hardware, are also permitted.

[0079] As shown in FIG. 5, baseband SoC 502 may be one example of a software and hardware interworking system in which the software functions are implemented by MCU 510, and the hardware functions are implemented by Layer 2 circuits 508. MCU 510 may be one example of a microcontroller, and Layer 2 circuits 508 may be one example of integrated circuits, although other microcontrollers and integrated circuits are also permitted. In some embodiments, Layer 2 circuits 508 include an SDAP circuit 520, a PDCP circuit 522, an RLC circuit 524, and a MAC circuit 526. The dedicated integrated circuits (ICs) (for example, SDAP circuit 520, PDCP circuit 522, RLC circuit 524, and MAC circuit 526) controlled by MCU 510 can be used to conduct Layer 2 packet processing. In some embodiments, each of SDAP, PDCP, RLC, and MAC circuits 520, 522, 524, or 526 is an IC dedicated to performing the functions of the respective layer in the Layer 2 user plane and/or control plane. For example, each of SDAP, PDCP, RLC, and MAC circuits 520, 522, 524, or 526 may be an ASIC, which may be customized for a particular use, rather than being intended for general-purpose use. Some ASICs may have high speed, small die size, and low power consumption compared with a generic processor.
[0080] As shown in FIG. 5, baseband SoC 502 may be operatively coupled to a host processor 504 and an external memory 506 through a main bus 538. For uplink communication, host processor 504, such as an application processor (AP), may generate raw data that has not been coded and modulated yet by the PHY layer of baseband SoC 502. Similarly, for downlink communication, host processor 504 may receive data after it is initially decoded and demodulated by the PHY layer and subsequently processed by Layer 2 circuits 508. In some embodiments, the raw data is formatted into data packets, according to any suitable protocols, for example, Internet Protocol (IP) data packets. External memory 506 may be shared by host processor 504 and baseband SoC 502, or any other suitable components.
[0081] In some embodiments, external memory 506 stores the raw data (e.g., IP data packets) to be processed by Layer 2 circuits 508 of baseband SoC 502 and stores the data processed by Layer 2 circuits 508 (e.g., MAC PDUs) to be accessed by Layer 1 (e.g., the PHY layer). The reverse may be the case in a downlink flow at the user equipment, in which the external memory 506 may store data received from the PHY layer and data output from the Layer 2 circuits 508 after header removal and other tasks. External memory 506 may, or optionally may not, store any intermediate data of Layer 2 circuits 508, for example, PDCP PDUs/RLC SDUs or RLC PDUs/MAC SDUs. For example, Layer 2 circuits 508 may modify the data stored in external memory 506.
[0082] As shown in FIG. 5, baseband SoC 502 may also include direct memory access (DMA) 516 that can allow some Layer 2 circuits 508 to access external memory 506 directly, independent of host processor 504. DMA 516 may include a DMA controller and any other suitable input/output (I/O) circuits. As shown in FIG. 5, baseband SoC 502 may further include an internal memory 514, such as an on-chip memory on baseband SoC 502, which is distinguished from external memory 506, an off-chip memory not on baseband SoC 502. In some embodiments, internal memory 514 includes one or more L1, L2, L3, or L4 caches. Layer 2 circuits 508 may access internal memory 514 through main bus 538 as well. The internal memory 514 may, thus, be particular to the baseband SoC 502, as distinct from other sub-components or components of an implementing system.
[0083] As shown in FIG. 5, baseband SoC 502 may further include a memory 512 that can be shared by (e.g., both accessed by) Layer 2 circuits 508 and MCU 510. It is understood that although memory 512 is shown as an individual memory separate from internal memory 514, in some examples, memory 512 and internal memory 514 may be local partitions of the same physical memory structure, for example, a static random-access memory (SRAM). In one example, a logical partition in internal memory 514 may be dedicated to or dynamically allocated to Layer 2 circuits 508 and MCU 510 for exchanging commands and responses. In some embodiments, memory 512 includes a plurality of command queues 534 for storing a plurality of sets of commands, respectively, and a plurality of response queues 536 for storing a plurality of sets of responses, respectively. Each pair of corresponding command queue 534 and response queue 536 may be dedicated to one of Layer 2 circuits 508.
[0084] As shown in FIG. 5, baseband SoC 502 may further include a local bus 540. In some embodiments, MCU 510 may be operatively coupled to memory 512 and main bus 538 through local bus 540. MCU 510 may be configured to generate a plurality of sets of control commands and write each set of the commands into respective command queue 534 in memory 512 through local bus 540 and interrupts. MCU 510 may also read a plurality of sets of responses (e.g., processing result statuses) from response queues 536 in memory 512, respectively, through local bus 540 and interrupts. In some embodiments, MCU 510 generates a set of commands based on a set of responses from a higher layer in the Layer 2 protocol stack (e.g., the previous stage in Layer 2 uplink data processing) or a lower layer in the Layer 2 protocol stack (e.g., the previous stage in Layer 2 downlink data processing). Through the control commands in command queues 534 in memory 512, MCU 510 can be operatively coupled to Layer 2 circuits 508 and control the operations of Layer 2 circuits 508 to process the Layer 2 data. It is understood that although one MCU 510 is shown in FIG. 5, the number of MCUs is scalable, such that multiple MCUs may be used in some examples. It is also understood that in some embodiments, memory 512 may be part of MCU 510, e.g., a cache integrated with MCU 510. It is further understood that regardless of the naming, any suitable processing units that can generate control commands to control the operations of Layer 2 circuits 508 and check the responses of Layer 2 circuits 508 may be considered as MCU 510 disclosed herein.
[0085] The software and hardware interworking systems disclosed herein, such as system
200 in FIG. 2 and baseband SoC 502 in FIG. 5, may be implemented by any suitable nodes in a wireless network. For example, FIG. 6 illustrates an exemplary wireless network 600, in which some aspects of the present disclosure may be implemented, according to some embodiments of the present disclosure.
[0086] As shown in FIG. 6, wireless network 600 may include a network of nodes, such as a user equipment (UE) 602, an access node 604, and a core network element 606. User equipment 602 may be any terminal device, such as a mobile phone, a desktop computer, a laptop computer, a tablet, a vehicle computer, a gaming console, a printer, a positioning device, a wearable electronic device, a smart sensor, or any other device capable of receiving, processing, and transmitting information, such as any member of a vehicle-to-everything (V2X) network, a cluster network, a smart grid node, or an Internet-of-Things (IoT) node. It is understood that user equipment 602 is illustrated as a mobile phone simply by way of illustration and not by way of limitation.
[0087] Access node 604 may be a device that communicates with user equipment 602, such as a wireless access point, a base station (BS), a Node B, an enhanced Node B (eNodeB or eNB), a next-generation NodeB (gNodeB or gNB), a cluster master node, or the like. Access node 604 may have a wired connection to user equipment 602, a wireless connection to user equipment 602, or any combination thereof. Access node 604 may be connected to user equipment 602 by multiple connections, and user equipment 602 may be connected to other access nodes in addition to access node 604. Access node 604 may also be connected to other UEs. It is understood that access node 604 is illustrated by a radio tower by way of illustration and not by way of limitation.

[0088] Core network element 606 may serve access node 604 and user equipment 602 to provide core network services. Examples of core network element 606 may include a home subscriber server (HSS), a mobility management entity (MME), a serving gateway (SGW), or a packet data network gateway (PGW). These are examples of core network elements of an evolved packet core (EPC) system, which is a core network for the LTE system. Other core network elements may be used in LTE and in other communication systems. In some embodiments, core network element 606 includes an access and mobility management function (AMF) device, a session management function (SMF) device, or a user plane function (UPF) device, of a core network for the NR system. It is understood that core network element 606 is shown as a set of rack-mounted servers by way of illustration and not by way of limitation.
[0089] Core network element 606 may connect with a large network, such as the Internet
608, or another IP network, to communicate packet data over any distance. In this way, data from user equipment 602 may be communicated to other UEs connected to other access points, including, for example, a computer 610 connected to Internet 608, for example, using a wired connection or a wireless connection, or to a tablet 612 wirelessly connected to Internet 608 via a router 614. Thus, computer 610 and tablet 612 provide additional examples of possible UEs, and router 614 provides an example of another possible access node.
[0090] A generic example of a rack-mounted server is provided as an illustration of core network element 606. However, there may be multiple elements in the core network including database servers, such as a database 616, and security and authentication servers, such as an authentication server 618. Database 616 may, for example, manage data related to user subscription to network services. A home location register (HLR) is an example of a standardized database of subscriber information for a cellular network. Likewise, authentication server 618 may handle authentication of users, sessions, and so on. In the NR system, an authentication server function (AUSF) device may be the specific entity to perform user equipment authentication. In some embodiments, a single server rack may handle multiple such functions, such that the connections between core network element 606, authentication server 618, and database 616, may be local connections within a single rack.
[0091] Although the above description used uplink and downlink processing of a packet in a user equipment as examples in various discussions, similar techniques may likewise be used for the other direction of processing and for processing in other devices, such as access nodes and core network nodes. For example, any device that processes packets through a plurality of layers of a protocol stack may benefit from some embodiments of the present disclosure, even if not specifically listed above or illustrated in the example network of FIG. 6.
[0092] Each of the elements of FIG. 6 may be considered a node of wireless network 600.
More detail regarding the possible implementation of a node is provided by way of example in the description of a node 700 in FIG. 7 below. Node 700 may be configured as user equipment 602, access node 604, or core network element 606 in FIG. 6. Similarly, node 700 may also be configured as computer 610, router 614, tablet 612, database 616, or authentication server 618 in FIG. 6.
[0093] As shown in FIG. 7, node 700 may include a processor 702, a memory 704, and a transceiver 706. These components are shown as connected to one another by a bus 708, but other connection types are also permitted. When node 700 is user equipment 602, additional components may also be included, such as a user interface (UI), sensors, and the like. Similarly, node 700 may be implemented as a blade in a server system when node 700 is configured as core network element 606. Other implementations are also possible.
[0094] Transceiver 706 may include any suitable device for sending and/or receiving data.
Node 700 may include one or more transceivers, although only one transceiver 706 is shown for simplicity of illustration. An antenna 710 is shown as a possible communication mechanism for node 700. Multiple antennas and/or arrays of antennas may be utilized. Additionally, examples of node 700 may communicate using wired techniques rather than (or in addition to) wireless techniques. For example, access node 604 may communicate wirelessly to user equipment 602 and may communicate by a wired connection (for example, by optical or coaxial cable) to core network element 606. Other communication hardware, such as a network interface card (NIC), may be included as well.
[0095] As shown in FIG. 7, node 700 may include processor 702. Although only one processor is shown, it is understood that multiple processors can be included. Processor 702 may include microprocessors, microcontrollers, DSPs, ASICs, field-programmable gate arrays (FPGAs), programmable logic devices (PLDs), state machines, gated logic, discrete hardware circuits, and other suitable hardware configured to perform the various functions described throughout the present disclosure. Processor 702 may be a hardware device having one or more processing cores. Processor 702 may execute software. Software shall be construed broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software modules, applications, software applications, software packages, routines, subroutines, objects, executables, threads of execution, procedures, functions, etc., whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise. Software can include computer instructions written in an interpreted language, a compiled language, or machine code. Other techniques for instructing hardware are also permitted under the broad category of software. Processor 702 may be a baseband chip, such as DP hardware 204 in FIG. 2 or SoC 502 in FIG. 5. Node 700 may also include other processors, not shown, such as a central processing unit of the device, a graphics processor, or the like. Processor 702 may include internal memory (not shown in FIG. 7) that may serve as memory for L2 data, such as L2+HARQ buffer (local / internal) 206 in FIG. 2 or internal memory 514 in FIG. 5. Processor 702 may include an RF chip, for example integrated into a baseband chip, or an RF chip may be provided separately. Processor 702 may be configured to operate as a modem of node 700, or may be one element or component of a modem. Other arrangements and configurations are also permitted.
[0096] As shown in FIG. 7, node 700 may also include memory 704. Although only one memory is shown, it is understood that multiple memories can be included. Memory 704 can broadly include both memory and storage. For example, memory 704 may include random-access memory (RAM), read-only memory (ROM), SRAM, dynamic RAM (DRAM), ferro-electric RAM (FRAM), electrically erasable programmable ROM (EEPROM), CD-ROM or other optical disk storage, hard disk drive (HDD), such as magnetic disk storage or other magnetic storage devices, Flash drive, solid-state drive (SSD), or any other medium that can be used to carry or store desired program code in the form of instructions that can be accessed and executed by processor 702. Broadly, memory 704 may be embodied by any computer-readable medium, such as a non-transitory computer-readable medium. Memory 704 can be the external memory 506 in FIG. 5 or the L3 Buffer (Ext) 202 in FIG. 2. Memory 704 may be shared by processor 702 and other components of node 700, such as the unillustrated graphics processor or central processing unit.

[0097] In various aspects of the present disclosure, the functions described herein may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or encoded as instructions or code on a non-transitory computer-readable medium. Computer-readable media includes computer storage media. Storage media may be any available media that can be accessed by a computing device, such as node 700 in FIG. 7. By way of example, and not limitation, such computer-readable media can include RAM, ROM, EEPROM, CD-ROM or other optical disk storage, HDD, such as magnetic disk storage or other magnetic storage devices, Flash drive, SSD, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a processing system, such as a mobile device or a computer.
Disk and disc, as used herein, include CD, laser disc, optical disc, DVD, and floppy disk, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
[0098] According to an aspect of the present disclosure, an apparatus for memory handling can include an external memory configured to store layer three (L3) data and an internal memory configured to store layer two (L2) data. The apparatus can also include circuitry configured to process a header of a packet and move the header from the external memory to the internal memory, process a remainder of the packet upon determination that at least two predetermined conditions are met, and pass the remainder of the packet from the external memory to the internal memory. [0099] In some embodiments, the circuitry may further be configured to receive the packet and store the packet in the external memory prior to processing the header.
[0100] In some embodiments, the circuitry may further be configured to pass the packet to a physical layer of the apparatus for transmission.
[0101] In some embodiments, the internal memory may include a transmission window buffer and a retransmission window buffer.
[0102] In some embodiments, upon the packet being passed from the transmission window buffer to the physical layer, the circuitry may be configured also to move the packet to the retransmission window buffer.
[0103] In some embodiments, upon the packet being passed from the transmission window buffer to the physical layer, the circuitry may be configured to bring additional L3 data from the external memory into the internal memory.
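The transmission and retransmission window behavior described in paragraphs [0101] through [0103] can be sketched as follows, purely by way of illustration. All class, method, and variable names below are hypothetical and not part of the disclosed apparatus; the sketch assumes simple packet counts rather than byte-level buffer management.

```python
from collections import deque


class UplinkWindowBuffers:
    """Illustrative sketch: a short L2 transmission window plus a
    retransmission window held in fast internal memory, refilled from a
    larger L3 store in external memory."""

    def __init__(self, tx_window_size, retx_window_size):
        self.l3_external = deque()    # large L3 buffer (external memory)
        self.tx_window = deque()      # L2 transmission window (internal memory)
        self.retx_window = deque()    # L2 retransmission window (internal memory)
        self.tx_window_size = tx_window_size
        self.retx_window_size = retx_window_size

    def enqueue_l3(self, packet):
        # New data first lands in the external L3 buffer.
        self.l3_external.append(packet)
        self._refill()

    def _refill(self):
        # Bring additional L3 data into the internal transmission window
        # whenever internal space is available.
        while self.l3_external and len(self.tx_window) < self.tx_window_size:
            self.tx_window.append(self.l3_external.popleft())

    def pass_to_phy(self):
        # Pass the head-of-window packet to the physical layer, keep a copy
        # in the retransmission window, and pull more L3 data in behind it.
        packet = self.tx_window.popleft()
        self.retx_window.append(packet)       # retained for possible retransmission
        if len(self.retx_window) > self.retx_window_size:
            self.retx_window.popleft()        # oldest packet assumed acknowledged
        self._refill()
        return packet


buf = UplinkWindowBuffers(tx_window_size=2, retx_window_size=2)
for p in ["pkt0", "pkt1", "pkt2", "pkt3"]:
    buf.enqueue_l3(p)

sent = buf.pass_to_phy()
# sent is "pkt0"; the transmission window has been refilled from external
# memory, and the retransmission window now holds the in-flight packet.
```

Note how the combined internal footprint never exceeds tx_window_size plus retx_window_size packets, regardless of how much L3 data waits in external memory, which is the point of the shorter second window.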
[0104] In some embodiments, the remainder of the packet may be processed by packet data convergence protocol processing that includes robust header compression, integrity checking, and ciphering.
[0105] In some embodiments, the remainder of the packet may further be processed by the addition of radio link control and medium access control headers.
[0106] In some embodiments, the remainder of the packet may be placed in contiguous memory in the internal memory.
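The L2 processing chain of paragraphs [0104] through [0106] can be illustrated with a toy sketch: compress the header, attach an integrity tag, cipher the payload, then prepend RLC and MAC headers so the result lands as one contiguous buffer. The transforms below are simple stand-ins chosen for illustration only; they are not the actual robust header compression, integrity, or ciphering algorithms, and all sizes and field layouts are hypothetical.

```python
import zlib


def build_l2_pdu(ip_packet: bytes, key: int = 0x5A) -> bytes:
    """Toy sketch of the L2 chain: PDCP-like processing followed by
    RLC/MAC header addition, producing one contiguous PDU."""
    header, payload = ip_packet[:20], ip_packet[20:]      # assume a 20-byte IP header
    compressed = zlib.compress(header)                    # stand-in for ROHC
    mac_i = zlib.crc32(compressed + payload).to_bytes(4, "big")  # stand-in integrity tag
    ciphered = bytes(b ^ key for b in payload)            # toy stream cipher
    pdcp_pdu = compressed + ciphered + mac_i
    rlc_header = len(pdcp_pdu).to_bytes(2, "big")         # toy RLC header: length field
    mac_header = b"\x3d"                                  # toy MAC subheader
    # Concatenation yields a single contiguous buffer, mirroring the
    # contiguous placement in internal memory described above.
    return mac_header + rlc_header + pdcp_pdu


pdu = build_l2_pdu(bytes(range(20)) + b"hello")
```

Placing the fully built PDU contiguously means the physical layer can consume it with a single read, with no gather step at transmission time.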
[0107] In some embodiments, the at least two predetermined conditions may include space in the internal memory being available and medium access control being ready to prepare data for a next window of transmission.
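The two-condition gate of paragraph [0107] can be expressed as a short sketch. The function and parameter names are hypothetical; the point is only that the remainder of the packet is processed when both conditions hold, and deferred otherwise.

```python
def maybe_process_remainder(internal_free_bytes: int,
                            remainder_len: int,
                            mac_ready: bool) -> str:
    """Sketch of the gating: the packet remainder is processed and moved
    into internal memory only once BOTH predetermined conditions are met:
    (1) space in internal memory is available, and (2) the MAC layer is
    ready to prepare data for the next window of transmission."""
    space_available = internal_free_bytes >= remainder_len
    if space_available and mac_ready:
        return "process-and-move"   # run PDCP/RLC/MAC and copy to internal memory
    return "defer"                  # remainder stays in external memory for now
```

Deferring until both conditions hold is what keeps the internal memory small: data is only pulled in when it can actually be consumed by the next transmission window.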
[0108] According to another aspect, an apparatus for memory handling can include an external memory configured to store layer three (L3) data and an internal memory configured to store layer two (L2) data. The apparatus can further include circuitry configured to maintain L3 data according to at least one first window and maintain L2 data according to at least one second window shorter than the first window.
[0109] In some embodiments, the at least one second window can include a transmission window and a retransmission window. The transmission window combined with the retransmission window may be less than the at least one first window. [0110] In some embodiments, the circuitry may further be configured to dimension the internal memory for multiple medium access control instances.
[0111] In some embodiments, the circuitry may be configured to take into account a plurality of parameters when dimensioning the internal memory.
[0112] In some embodiments, the parameters can include a number of logical channels, data rate, priority of logical channel, maximum bucket size of logical channel, and layer three buffer size of logical channel.
[0113] In some embodiments, the circuitry may be configured to scale each medium access control instance size based on a ratio of a maximum internal memory size and total size of all medium access control instances.
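The dimensioning and scaling described in paragraphs [0110] through [0113] can be sketched as follows. The per-channel weighting shown is one plausible reading of the listed parameters, and all field names are hypothetical; the disclosure specifies only the parameters considered and the ratio-based scaling, not a particular formula.

```python
def dimension_internal_memory(mac_instances: dict, max_internal_bytes: int) -> dict:
    """Sketch: size each MAC instance from its logical-channel parameters,
    then, if the total exceeds the internal memory budget, scale every
    instance by the ratio max_internal_bytes / total."""
    sizes = {}
    for name, inst in mac_instances.items():
        size = 0
        for lc in inst["logical_channels"]:
            # Weight each channel using the parameters listed above:
            # L3 buffer size, maximum bucket size, and priority.
            size += min(lc["l3_buffer_bytes"],
                        lc["max_bucket_bytes"]) * lc["priority_weight"]
        sizes[name] = size
    total = sum(sizes.values())
    if total > max_internal_bytes:
        ratio = max_internal_bytes / total   # the ratio from paragraph [0113]
        sizes = {k: int(v * ratio) for k, v in sizes.items()}
    return sizes


instances = {
    "mac0": {"logical_channels": [
        {"l3_buffer_bytes": 4000, "max_bucket_bytes": 3000, "priority_weight": 2}]},
    "mac1": {"logical_channels": [
        {"l3_buffer_bytes": 1000, "max_bucket_bytes": 2000, "priority_weight": 1}]},
}
sizes = dimension_internal_memory(instances, max_internal_bytes=3500)
```

With the example numbers, the unscaled sizes total 7000 bytes against a 3500-byte budget, so every instance is halved, preserving the relative allocation between MAC instances.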
[0114] According to a further aspect, a method for memory handling can include processing, by circuitry, a header of a packet, and moving the header from an external memory configured to store layer three (L3) data to an internal memory configured to store layer two (L2) data. The method can also include processing, by the circuitry, a remainder of the packet upon determination that at least two predetermined conditions are met. The method can further include passing, by the circuitry, the remainder of the packet from the external memory to the internal memory.
[0115] In some embodiments, the method can also include receiving the packet and storing the packet in the external memory prior to processing the header.
[0116] In some embodiments, the method can further include passing the packet to a physical layer of a device for transmission.
[0117] In some embodiments, the internal memory can include a transmission window buffer and a retransmission window buffer.
[0118] In some embodiments, upon the packet being passed from the transmission window buffer to the physical layer, the method may further include also moving the packet to the retransmission window buffer.
[0119] In some embodiments, upon the packet being passed from the transmission window buffer to the physical layer, the method may further include bringing additional layer three data from the external memory into the internal memory.
[0120] In some embodiments, the processing of the remainder of the packet can include packet data convergence protocol processing that includes robust header compression, integrity checking, and ciphering.
[0121] In some embodiments, the remainder of the packet may be further processed by the addition of radio link control and medium access control headers.
[0122] In some embodiments, the remainder of the packet may be placed in contiguous memory in the internal memory.
[0123] In some embodiments, the at least two predetermined conditions can include space in the internal memory being available and medium access control being ready to prepare data for a next window of transmission.
[0124] According to yet another aspect, a method for memory handling can include maintaining, by circuitry, layer three (L3) data according to at least one first window, wherein the L3 data is stored in external memory. The method may also include maintaining, by the circuitry, layer two (L2) data according to at least one second window shorter than the first window, wherein the L2 data is stored in internal memory.
[0125] In some embodiments, the at least one second window can include a transmission window and a retransmission window. The transmission window combined with the retransmission window may be less than the at least one first window.
[0126] In some embodiments, the method may further include dimensioning the internal memory for multiple medium access control instances.
[0127] In some embodiments, the dimensioning may take into account a plurality of parameters. [0128] In some embodiments, the parameters can include a number of logical channels, data rate, priority of logical channel, maximum bucket size of logical channel, and layer three buffer size of logical channel.
[0129] In some embodiments, the method may further include scaling each medium access control instance size based on a ratio of a maximum internal memory size and total size of all medium access control instances.
[0130] According to still another aspect, a non-transitory computer-readable medium can encode instructions that, when executed by a microcontroller of a node, may perform a process for memory handling. The process can include any of the above-described methods.
[0131] The foregoing description of the specific embodiments will so reveal the general nature of the present disclosure that others can, by applying knowledge within the skill of the art, readily modify and/or adapt for various applications such specific embodiments, without undue experimentation, without departing from the general concept of the present disclosure. Therefore, such adaptations and modifications are intended to be within the meaning and range of equivalents of the disclosed embodiments, based on the teaching and guidance presented herein. It is to be understood that the phraseology or terminology herein is for the purpose of description and not of limitation, such that the terminology or phraseology of the present specification is to be interpreted by the skilled artisan in light of the teachings and guidance.
[0132] Embodiments of the present disclosure have been described above with the aid of functional building blocks illustrating the implementation of specified functions and relationships thereof. The boundaries of these functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternate boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed.
[0133] The Summary and Abstract sections may set forth one or more but not all exemplary embodiments of the present disclosure as contemplated by the inventor(s), and thus, are not intended to limit the present disclosure and the appended claims in any way.
[0134] Various functional blocks, modules, and steps are disclosed above. The particular arrangements provided are illustrative and without limitation. Accordingly, the functional blocks, modules, and steps may be re-ordered or combined in different ways than in the examples provided above. Likewise, some embodiments include only a subset of the functional blocks, modules, and steps, and any such subset is permitted.
[0135] The breadth and scope of the present disclosure should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.

Claims

WHAT IS CLAIMED IS:
1. An apparatus for memory handling, comprising: an external memory configured to store layer three (L3) data; an internal memory configured to store layer two (L2) data; and circuitry operatively coupled to the external memory and the internal memory and configured to: process a header of a packet and move the header from the external memory to the internal memory; process a remainder of the packet upon determination that space in the internal memory is available and a medium access control (MAC) layer is ready to prepare data for a next window of transmission; and pass the remainder of the packet from the external memory to the internal memory.
2. The apparatus of claim 1, wherein the circuitry is further configured to receive the packet and store the packet in the external memory prior to processing the header.
3. The apparatus of claim 1, further comprising: a physical layer operatively coupled to the circuitry, wherein the circuitry is further configured to pass the packet to the physical layer for transmission.
4. The apparatus of claim 1, wherein the internal memory comprises a transmission window buffer and a retransmission window buffer.
5. The apparatus of claim 4, wherein upon the packet being passed from the transmission window buffer to the physical layer, the circuitry is further configured to move the packet to the retransmission window buffer.
6. The apparatus of claim 4, wherein upon the packet being passed from the transmission window buffer to the physical layer, the circuitry is further configured to bring additional L3 data from the external memory into the internal memory.
7. The apparatus of claim 1, wherein to process the remainder of the packet, the circuitry is configured to apply packet data convergence protocol processing comprising robust header compression, integrity checking, and ciphering.
8. The apparatus of claim 7, wherein to process the remainder of the packet, the circuitry is further configured to add a radio link control (RLC) header and a MAC header to the remainder of the packet before passing the remainder of the packet to the internal memory.
9. The apparatus of claim 1, wherein to process the remainder of the packet, the circuitry is configured to place the remainder of the packet in contiguous memory in the internal memory.
10. The apparatus of claim 1, wherein the internal memory is configured to be accessed only by a baseband chip of the apparatus and the external memory is configured to be accessed by a plurality of components of the apparatus in addition to the baseband chip.
11. The apparatus of claim 10, wherein the baseband chip comprises the circuitry.
12. The apparatus of claim 1, wherein the circuitry is further configured to: maintain the L3 data according to at least one first window comprising a first plurality of packets; and maintain the L2 data according to at least one second window shorter than the first window, wherein the second window comprises a second plurality of packets fewer than the first plurality of packets.
13. An apparatus for memory handling, comprising: an external memory configured to store layer three (L3) data; an internal memory configured to store layer two (L2) data; and circuitry operatively coupled to the external memory and the internal memory and configured to: maintain the L3 data according to at least one first window comprising a first plurality of packets; and maintain the L2 data according to at least one second window shorter than the first window, wherein the second window comprises a second plurality of packets fewer than the first plurality of packets.
14. The apparatus of claim 13, wherein the at least one second window comprises a transmission window and a retransmission window, wherein the transmission window combined with the retransmission window is less than the at least one first window.
15. The apparatus of claim 13, wherein the circuitry is further configured to dimension the internal memory for multiple medium access control (MAC) instances based on a plurality of parameters.
16. The apparatus of claim 15, wherein the parameters comprise at least one of a number of logical channels, a data rate, a priority of logical channel, a maximum bucket size of logical channel, or an L3 buffer size of logical channel.
17. The apparatus of claim 15, wherein to dimension the internal memory, the circuitry is further configured to scale each MAC instance size based on a ratio of a maximum internal memory size and a total size of all MAC instances.
18. A method for memory handling, comprising: processing, by circuitry, a header of a packet and moving the header from an external memory configured to store layer three (L3) data to an internal memory configured to store layer two (L2) data; processing, by the circuitry, a remainder of the packet upon determination that space in the internal memory is available and a medium access control (MAC) layer is ready to prepare data for a next window of transmission; and passing, by the circuitry, the remainder of the packet from the external memory to the internal memory.
19. A method for memory handling, comprising: maintaining, by circuitry, layer three (L3) data according to at least one first window, wherein the L3 data is stored in external memory; and maintaining, by the circuitry, layer two (L2) data according to at least one second window shorter than the first window, wherein the L2 data is stored in internal memory.
20. A non-transitory computer-readable medium encoding instructions that, when executed by a microcontroller of a node, perform a process for memory handling, the process comprising the method according to claim 18 or claim 19.
PCT/IB2020/059912 2020-01-28 2020-10-22 Dynamic uplink end-to-end data transfer scheme with optimized memory path WO2021152369A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202080094295.7A CN115066844A (en) 2020-01-28 2020-10-22 Dynamic uplink end-to-end data transmission scheme with optimized memory path

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202062966686P 2020-01-28 2020-01-28
US62/966,686 2020-01-28

Publications (1)

Publication Number Publication Date
WO2021152369A1 true WO2021152369A1 (en) 2021-08-05

Family

ID=77078077

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2020/059912 WO2021152369A1 (en) 2020-01-28 2020-10-22 Dynamic uplink end-to-end data transfer scheme with optimized memory path

Country Status (2)

Country Link
CN (1) CN115066844A (en)
WO (1) WO2021152369A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024063785A1 (en) * 2022-09-23 2024-03-28 Zeku, Inc. Apparatus and method for logical channel prioritization (lcp) processing of high-density, high-priority small packets
WO2024092697A1 (en) * 2022-11-04 2024-05-10 华为技术有限公司 Communication method, apparatus and system
WO2024123357A1 (en) * 2022-12-09 2024-06-13 Zeku Technology (Shanghai) Corp., Ltd. Apparatus and method for robust header compression processing using a local customized shared memory
WO2024155269A1 (en) * 2023-01-16 2024-07-25 Zeku Technology (Shanghai) Corp., Ltd. Apparatus and method for using a physical layer subsystem to directly wakeup a downlink dataplane subsystem

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040030806A1 (en) * 2002-06-11 2004-02-12 Pandya Ashish A. Memory system for a high performance IP processor
US20060146831A1 (en) * 2005-01-04 2006-07-06 Motorola, Inc. Method and apparatus for modulating radio link control (RLC) ACK/NAK persistence to improve performance of data traffic
US20080056278A1 (en) * 1999-03-17 2008-03-06 Broadcom Corporation Network switch memory interface configuration
US20080130655A1 (en) * 1998-07-08 2008-06-05 Broadcom Corporation Memory management unit for a network switch
US20100274921A1 (en) * 2009-04-27 2010-10-28 Lerzer Juergen Technique for coordinated RLC and PDCP processing
US20180285254A1 (en) * 2017-04-04 2018-10-04 Hailo Technologies Ltd. System And Method Of Memory Access Of Multi-Dimensional Data

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5450563A (en) * 1992-10-30 1995-09-12 International Business Machines Corporation Storage protection keys in two level cache system
ATE540406T1 (en) * 2008-11-14 2012-01-15 Ericsson Telefon Ab L M NETWORK ACCESS DEVICE WITH SHARED MEMORY
EP2187697B1 (en) * 2008-11-14 2012-01-04 Telefonaktiebolaget L M Ericsson (publ) Modular radio network access device
KR100906098B1 (en) * 2008-12-02 2009-07-06 엠티에이치 주식회사 Communication method and device in communication system and recording medium for performing the method
EP2247020B1 (en) * 2009-04-27 2012-01-04 TELEFONAKTIEBOLAGET LM ERICSSON (publ) Technique for performing layer 2 processing using a distributed memory architecture
US8254386B2 (en) * 2010-03-26 2012-08-28 Verizon Patent And Licensing, Inc. Internet protocol multicast on passive optical networks
US9635655B2 (en) * 2014-02-24 2017-04-25 Intel Corporation Enhancement to the buffer status report for coordinated uplink grant allocation in dual connectivity in an LTE network
AU2015274511B2 (en) * 2014-06-11 2019-08-15 Commscope Technologies Llc Bitrate efficient transport through distributed antenna systems
US11381514B2 (en) * 2018-05-07 2022-07-05 Apple Inc. Methods and apparatus for early delivery of data link layer packets

Also Published As

Publication number Publication date
CN115066844A (en) 2022-09-16

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20917069

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20917069

Country of ref document: EP

Kind code of ref document: A1