WO2023009117A1 - Apparatus and method of credit-based scheduling mechanism for layer 2 transmission scheduler - Google Patents


Info

Publication number
WO2023009117A1
Authority
WO
WIPO (PCT)
Prior art keywords
size
packet
credit
layer
byte count
Prior art date
Application number
PCT/US2021/043576
Other languages
French (fr)
Other versions
WO2023009117A8 (en)
Inventor
Na CHEN
Su-Lin Low
Chun-I Lee
Yunhong Li
Tianan Tim MA
Sonali Bagchi
Original Assignee
Zeku, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zeku, Inc. filed Critical Zeku, Inc.
Priority to PCT/US2021/043576 priority Critical patent/WO2023009117A1/en
Priority to CN202180098902.1A priority patent/CN117643124A/en
Publication of WO2023009117A1 publication Critical patent/WO2023009117A1/en
Publication of WO2023009117A8 publication Critical patent/WO2023009117A8/en


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/50 Queue scheduling
    • H04L 47/52 Queue scheduling by attributing bandwidth to queues
    • H04L 47/527 Quantum based scheduling, e.g. credit or deficit based scheduling or token bank
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 5/00 Arrangements affording multiple use of the transmission path
    • H04L 5/0001 Arrangements for dividing the transmission path
    • H04L 5/0003 Two-dimensional division
    • H04L 5/0005 Time-frequency
    • H04L 5/0007 Time-frequency the frequencies being orthogonal, e.g. OFDM(A), DMT
    • H04L 5/001 Time-frequency the frequencies being orthogonal, e.g. OFDM(A), DMT the frequencies being arranged in component carriers
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 5/00 Arrangements affording multiple use of the transmission path
    • H04L 5/003 Arrangements for allocating sub-channels of the transmission path
    • H04L 5/0058 Allocation criteria
    • H04L 5/0064 Rate requirement of the data, e.g. scalable bandwidth, data priority
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 5/00 Arrangements affording multiple use of the transmission path
    • H04L 5/003 Arrangements for allocating sub-channels of the transmission path
    • H04L 5/0078 Timing of allocation
    • H04L 5/0087 Timing of allocation when data requirements change
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 5/00 Arrangements affording multiple use of the transmission path
    • H04L 5/0091 Signaling for the administration of the divided path
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 1/00 Arrangements for detecting or preventing errors in the information received
    • H04L 1/12 Arrangements for detecting or preventing errors in the information received by using return channel
    • H04L 1/16 Arrangements for detecting or preventing errors in the information received by using return channel in which the return channel carries supervisory signals, e.g. repetition request signals
    • H04L 1/18 Automatic repetition systems, e.g. Van Duuren systems
    • H04L 1/1829 Arrangements specially adapted for the receiver end
    • H04L 1/1835 Buffer management
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 5/00 Arrangements affording multiple use of the transmission path
    • H04L 5/0001 Arrangements for dividing the transmission path
    • H04L 5/0014 Three-dimensional division
    • H04L 5/0023 Time-frequency-space
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 72/00 Local resource management
    • H04W 72/12 Wireless traffic scheduling
    • H04W 72/1263 Mapping of traffic onto schedule, e.g. scheduled allocation or multiplexing of flows
    • H04W 72/1268 Mapping of traffic onto schedule, e.g. scheduled allocation or multiplexing of flows of uplink data flows

Definitions

  • Embodiments of the present disclosure relate to apparatus and method for wireless communication.
  • Wireless communication systems are widely deployed to provide various telecommunication services such as telephony, video, data, messaging, and broadcasts.
  • a radio access technology is the underlying physical connection method for a radio-based communication network.
  • Many modern terminal devices, such as mobile devices, support several RATs in one device.
  • The 3rd Generation Partnership Project (3GPP) defines Radio Layer 2 (referred to here as “Layer 2”) as part of the cellular protocol stack structure corresponding to the data plane (DP) (also referred to as the “user plane”), which includes a Service Data Adaptation Protocol (SDAP) layer, a Packet Data Convergence Protocol (PDCP) layer, a Radio Link Control (RLC) layer, and a Medium Access Control (MAC) layer, from top to bottom in the stack.
  • DP data plane
  • SDAP Service Data Adaptation Protocol
  • PDCP Packet Data Convergence Protocol
  • RLC Radio Link Control
  • MAC Medium Access Control
  • a baseband chip may include a set of transmission command queues each associated with a different component carrier (CC) and each configured to maintain packet descriptors associated with one of the different component carriers (CCs).
  • the baseband chip may also include a Layer 2 microcontroller.
  • the Layer 2 microcontroller may be configured to generate the packet descriptors for each of the different CCs based on associated uplink (UL) grant indicators.
  • the Layer 2 microcontroller may be configured to send each of the packet descriptors to the set of transmission command queues based on CC.
  • the Layer 2 microcontroller may be configured to select a credit-based scheduling mechanism from a set of credit-based scheduling mechanisms.
  • the Layer 2 microcontroller may be configured to configure a transmission scheduler with the credit-based scheduling mechanism.
  • a baseband chip is provided.
  • the baseband chip may include a set of transmission command queues each associated with a different CC and each configured to maintain packet descriptors associated with one of the different CCs.
  • the baseband chip may further include a transmission scheduler.
  • the transmission scheduler may be configured to receive configuration information associated with a credit-based scheduling mechanism from a Layer 2 microcontroller.
  • the transmission scheduler may be configured to, in response to determining that a first packet size associated with a first packet descriptor of a first transmission command queue is less than a byte count associated with the maximum credit size and that an inline buffer threshold has not been reached, service the first packet descriptor.
  • the transmission scheduler may be configured to, in response to determining that a first packet size associated with a first packet descriptor of a first transmission command queue is less than a byte count associated with the maximum credit size and that an inline buffer threshold has not been reached, increase the byte count associated with the maximum credit size by the first packet size.
  • a method of wireless communication of a transmission scheduler may include receiving configuration information of a credit-based scheduling mechanism configured by a Layer 2 microcontroller.
  • the method may include, in response to determining that a first packet size associated with a first packet descriptor of a first transmission command queue is less than a byte count associated with the maximum credit size and that an inline buffer threshold has not been reached, servicing the first packet descriptor.
  • the method may include increasing the byte count associated with the maximum credit size by the first packet size.
  • the method may include, in response to determining that a second packet size associated with a second packet descriptor of the first transmission command queue is less than the byte count associated with the maximum credit size and that the inline buffer threshold has not been reached, servicing the second packet descriptor.
  • the method may include, in response to determining that a second packet size associated with a second packet descriptor of the first transmission command queue is less than the byte count associated with the maximum credit size and that the inline buffer threshold has not been reached, increasing the byte count associated with the maximum credit size by the second packet size.
  • the method may include, in response to determining that the first packet size associated with the first packet descriptor of the first transmission command queue exceeds the maximum credit size during a first cycle of Layer 2 processing, servicing the first packet descriptor.
  • the method may include, in response to determining that the first packet size associated with the first packet descriptor of the first transmission command queue exceeds the maximum credit size during a first cycle of Layer 2 processing, decreasing a second byte count associated with the first transmission command queue during a second cycle of Layer 2 processing based on an amount by which the maximum credit size is exceeded by the first packet size associated with the first packet descriptor during the first cycle of Layer 2 processing.
  • FIG. 1 illustrates an exemplary wireless network, according to some embodiments of the present disclosure.
  • FIG. 2 illustrates a block diagram of an exemplary apparatus including a baseband chip, a radio frequency (RF) chip, and a host chip, according to some embodiments of the present disclosure.
  • RF radio frequency
  • FIG. 3A illustrates a detailed block diagram of an exemplary baseband chip, according to some embodiments of the present disclosure.
  • FIG. 3B illustrates a flow diagram of a first exemplary credit-based scheduling technique of the baseband chip of FIG. 3A, according to some embodiments of the present disclosure.
  • FIG. 3C illustrates a flow diagram of a second exemplary credit-based scheduling technique of the baseband chip of FIG. 3A, according to some embodiments of the present disclosure.
  • FIG. 3D illustrates a flow diagram of a third exemplary credit-based scheduling technique of the baseband chip of FIG. 3A, according to some embodiments of the present disclosure.
  • FIG. 4A illustrates a flow chart of a first exemplary method for UL Layer 2 data processing, according to some embodiments of the present disclosure.
  • FIG. 4B illustrates a flow chart of a second exemplary method for UL Layer 2 data processing, according to some embodiments of the present disclosure.
  • FIG. 5 illustrates a block diagram of an exemplary node, according to some embodiments of the present disclosure.
  • FIG. 6 illustrates a block diagram of a flow diagram for Layer 2 UL packet processing.
  • references in the specification to “one embodiment,” “an embodiment,” “an example embodiment,” “some embodiments,” “certain embodiments,” etc., indicate that the embodiment described may include a feature, structure, or characteristic, but every embodiment may not necessarily include the feature, structure, or characteristic. Moreover, such phrases do not necessarily refer to the same embodiment. Further, when a feature, structure, or characteristic is described in connection with an embodiment, it would be within the knowledge of a person skilled in the pertinent art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
  • the term “one or more” as used herein, depending at least in part upon context, may be used to describe any feature, structure, or characteristic in a singular sense or may be used to describe combinations of features, structures or characteristics in a plural sense.
  • terms, such as “a,” “an,” or “the,” again, may be understood to convey a singular usage or to convey a plural usage, depending at least in part upon context.
  • the term “based on” may be understood as not necessarily intended to convey an exclusive set of factors and may, instead, allow for existence of additional factors not necessarily expressly described, again, depending at least in part on context.
  • CDMA code division multiple access
  • TDMA time division multiple access
  • FDMA frequency division multiple access
  • OFDMA orthogonal frequency division multiple access
  • SC-FDMA single-carrier frequency division multiple access
  • WLAN wireless local area network
  • a CDMA network may implement a radio access technology (RAT), such as Universal Terrestrial Radio Access (UTRA), evolved UTRA (E-UTRA), CDMA 2000, etc.
  • RAT radio access technology
  • UTRA Universal Terrestrial Radio Access
  • E-UTRA evolved UTRA
  • GSM global system for mobile communications
  • An OFDMA network may implement a first RAT, such as LTE or NR.
  • a WLAN system may implement a second RAT, such as Wi-Fi.
  • the techniques described herein may be used for the wireless networks and RATs mentioned above, as well as other wireless networks and RATs.
  • Layer 2 is the protocol stack layer responsible for ensuring a reliable, error-free datalink for the wireless modem (referred to herein as a “baseband chip”) of a UE. More specifically, Layer 2 interfaces with Radio Layer 1 (also referred to as “Layer 1” or the “physical (PHY) layer”) and Radio Layer 3 (also referred to as “Layer 3” or the “Internet Protocol (IP) layer”), passing data packets up or down the protocol stack structure, depending on whether the data packets are associated with UL or DL transmissions.
  • Radio Layer 1 also referred to as “Layer 1” or the “physical (PHY) layer”
  • Radio Layer 3 also referred to as “Layer 3” or the “Internet Protocol (IP) layer”
  • IP Internet Protocol
  • Layer 2 may perform de-multiplexing / multiplexing, segmentation / reassembly, aggregation / de-aggregation, and sliding window automatic repeat request (ARQ) techniques, among others, to ensure reliable end-to-end data integrity and in-order error-free delivery of data packets.
  • Layer 3 data packets e.g., IP data packets
  • For UL transmissions, Layer 3 data packets (e.g., IP data packets) may be input into a Layer 2 packet buffer, fetched by the Layer 2 protocol stack circuit, and encoded into MAC layer packets (e.g., 5G NR packets) for transport to the PHY layer.
  • the timing for Layer 2 processing of a UL data packet proceeds based on the grant indication received from a transmitter.
  • FIG. 6 illustrates a flow diagram 600 for Layer 2 UL packet processing at a baseband chip of a user equipment (UE).
  • the baseband chip may include, e.g., a physical layer (PHY) subsystem 602 and a Layer 2 data plane (DP) subsystem 604.
  • PHY subsystem 602 may receive a UL grant (e.g., a UL resource allocation grant) in a Physical Downlink Control Channel (PDCCH) occasion that is located at the beginning of each slot.
  • a UL grant e.g., a UL resource allocation grant
  • PDCCH Physical Downlink Control Channel
  • the UE may begin preparation of the UL data transmission (e.g., a Physical Uplink Shared Channel (PUSCH) transmission), which involves operations by the PHY subsystem 602 and Layer 2 DP subsystem 604.
  • PHY subsystem 602 may process (at 601) the UL grant and send an indication of the UL grant to Layer 2 DP subsystem 604.
  • Layer 2 DP subsystem 604 may perform (at 603) logical channel prioritization (LCP) to select logical channels and allocate granted resources to the selected logical channels.
  • LCP logical channel prioritization
  • Layer 2 DP subsystem 604 may issue (at 605) a transmitter (Tx) command to the DP hardware (e.g., such as a Layer 2 circuit) of Layer 2 DP subsystem 604.
  • the DP hardware may use the Tx commands to construct (at 607) MAC service data units (SDUs) on the fly and store them in a MAC inline buffer (not shown).
  • PHY subsystem 602 may then retrieve (at 609) the MAC SDUs (also referred to herein as “data packets” or “packets”) from the MAC inline buffer and perform Tx processing before transmitting the UL data transmission via the PUSCH at the time scheduled by the UL grant.
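  • For illustration only, the following sketch models the flow at 601-609 in ordinary Python; the class names, field names, and the 1500-byte chunking are assumptions made for readability and are not taken from the disclosure.

```python
# Minimal sketch (not the patented implementation) of the UL grant handling
# flow at 601-609: the PHY subsystem forwards a grant indication, the Layer 2
# DP subsystem runs LCP and issues Tx commands, the DP hardware builds MAC
# SDUs into an inline buffer, and the PHY subsystem dequeues them.
from dataclasses import dataclass, field

@dataclass
class UlGrant:
    cc_id: int          # component carrier the grant applies to
    grant_bytes: int    # granted transport block size in bytes

@dataclass
class TxCommand:
    cc_id: int
    sdu_bytes: int      # bytes of one MAC SDU to construct

@dataclass
class Layer2DpSubsystem:
    mac_inline_buffer: dict = field(default_factory=dict)  # cc_id -> list of MAC SDUs

    def handle_grant(self, grant: UlGrant) -> list:
        # 603/605: LCP would split the grant across logical channels and issue
        # Tx commands; here we simply emit one command per 1500-byte chunk.
        offsets = range(0, grant.grant_bytes, 1500)
        return [TxCommand(grant.cc_id, min(1500, grant.grant_bytes - off)) for off in offsets]

    def construct_mac_sdus(self, commands: list) -> None:
        # 607: DP hardware builds MAC SDUs on the fly and stores them inline.
        for cmd in commands:
            self.mac_inline_buffer.setdefault(cmd.cc_id, []).append(b"\x00" * cmd.sdu_bytes)

def phy_dequeue(dp: Layer2DpSubsystem, cc_id: int) -> list:
    # 609: PHY retrieves the MAC SDUs for Tx processing by the encoding due time.
    return dp.mac_inline_buffer.pop(cc_id, [])

dp = Layer2DpSubsystem()
cmds = dp.handle_grant(UlGrant(cc_id=0, grant_bytes=4000))   # 601/603/605
dp.construct_mac_sdus(cmds)                                  # 607
print(sum(len(s) for s in phy_dequeue(dp, 0)))               # 609 -> 4000
```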
  • CA Carrier Aggregation
  • CCs active Component Carriers
  • the UE may receive multiple UL grants concurrently in one or more PDCCH occasions, where each grant is associated with a CC or serving cell.
  • Layer 2 DP subsystem 604 may generate (at 605) a list of Tx commands corresponding to a MAC SDU. These Tx commands may be pushed into multiple Tx command queues (not shown).
  • the UL MAC packet scheduling algorithm then manages the order of servicing these Tx command queues, such that all CCs have sufficient processed data, typically at least one symbol, in the MAC inline buffer by PHY subsystem 602 encoding due time (at 609).
  • One challenge of conventional UL MAC packet scheduling arises in the servicing of multiple concurrent grants from multiple CCs and/or dual connectivity configurations. This is because the UE, which is connected to two or more MAC entities that are in turn each connected to a base station with multiple CCs of different bandwidths, resources, and radio channel conditions, needs to prepare the UL data transmissions for all CCs without any de-synchronization or loss of data and in a time- and resource-optimal manner.
  • the MAC data packets for each of the CCs may not be prepared by the encoding due time, which is the time at which PHY subsystem 602 pulls the MAC data packets from the MAC inline buffer.
  • the present disclosure provides a transmission (Tx) scheduler that services a set of transmission command queues using a credit-based scheduling technique, which enables the preparation of at least one symbol of each UL transmission by the encoding due time.
  • the credit-based scheduling technique may use a fixed credit size.
  • the credit size e.g., how many bytes will be processed
  • the Tx scheduler services all CCs equally in terms of processed data bytes, which results in the same data processing rate for all CCs.
  • the credit-based scheduling technique may use a symbol-sliced credit to reduce the first symbol preparation time.
  • the credit size may be different for each CC and proportional to the data size (e.g., symbol size) transmitted in one orthogonal frequency-division multiplexed (OFDM) symbol, which may differ from CC-to-CC.
  • the number of cycles required for processing one symbol of data is equal for all CCs.
  • the first symbol of data of all CCs may be ready in the MAC inline buffer as early as possible, which reduces the risk of missing the encoding due time of the PHY subsystem.
  • the credit-based scheduling technique may use a time-sliced credit for optimal inline buffer usage.
  • the credit size may be proportional to the data transmission rate of the CC, which is typically the dequeue rate of the PHY subsystem.
  • the size of the MAC inline buffer for each CC may be optimized. Additional details of the present credit-based scheduling technique are provided below in connection with FIGs. 1-5.
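  • As a rough, hedged illustration of the three credit-size policies summarized above (fixed, symbol-sliced, and time-sliced), the sketch below computes a per-CC credit in bytes for each policy; the helper names, the fixed value, the half-symbol fraction, and the time-slice duration are assumptions, not values specified by the disclosure.

```python
# Hedged sketch of the three credit-size policies described above.

def fixed_credit(ccs: dict, credit_bytes: int = 1000) -> dict:
    """Fixed credit: every CC gets the same number of bytes per cycle."""
    return {cc: credit_bytes for cc in ccs}

def symbol_sliced_credit(ccs: dict, fraction_of_symbol: float = 0.5) -> dict:
    """Symbol-sliced credit: proportional to each CC's OFDM symbol size,
    so every CC needs the same number of cycles per symbol."""
    return {cc: int(info["symbol_bytes"] * fraction_of_symbol) for cc, info in ccs.items()}

def time_sliced_credit(ccs: dict, slice_duration_us: float = 10.0) -> dict:
    """Time-sliced credit: proportional to each CC's dequeue (transmission)
    rate, matching enqueue and dequeue rates at the MAC inline buffer."""
    return {cc: int(info["dequeue_bytes_per_us"] * slice_duration_us) for cc, info in ccs.items()}

# Example with three hypothetical CCs of different symbol sizes and rates.
ccs = {
    "CC0": {"symbol_bytes": 3000, "dequeue_bytes_per_us": 100.0},
    "CC1": {"symbol_bytes": 1500, "dequeue_bytes_per_us": 150.0},
    "CC2": {"symbol_bytes": 6000, "dequeue_bytes_per_us": 120.0},
}
print(fixed_credit(ccs), symbol_sliced_credit(ccs), time_sliced_credit(ccs))
```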
  • Although described in connection with Layer 2 data processing, the same or similar techniques may be applied to Layer 3 and/or Layer 4 data processing to optimize power consumption at Layer 3 and/or Layer 4 subsystems without departing from the scope of the present disclosure.
  • FIG. 1 illustrates an exemplary wireless network 100, in which some aspects of the present disclosure may be implemented, according to some embodiments of the present disclosure.
  • wireless network 100 may include a network of nodes, such as a user equipment 102, an access node 104, and a core network element 106.
  • User equipment 102 may be any terminal device, such as a mobile phone, a desktop computer, a laptop computer, a tablet, a vehicle computer, a gaming console, a printer, a positioning device, a wearable electronic device, a smart sensor, or any other device capable of receiving, processing, and transmitting information, such as any member of a vehicle to everything (V2X) network, a cluster network, a smart grid node, or an Internet-of-Things (IoT) node.
  • V2X vehicle to everything
  • IoT Internet-of-Things
  • Access node 104 may be a device that communicates with user equipment 102, such as a wireless access point, a base station (BS), a Node B, an enhanced Node B (eNodeB or eNB), a next-generation NodeB (gNodeB or gNB), a cluster master node, or the like. Access node 104 may have a wired connection to user equipment 102, a wireless connection to user equipment 102, or any combination thereof. Access node 104 may be connected to user equipment 102 by multiple connections, and user equipment 102 may be connected to other access nodes in addition to access node 104. Access node 104 may also be connected to other user equipments.
  • BS base station
  • eNodeB or eNB enhanced Node B
  • gNodeB or gNB next-generation NodeB
  • gNodeB next-generation NodeB
  • access node 104 may operate in millimeter wave (mmW) frequencies and/or near mmW frequencies in communication with the user equipment 102.
  • mmW millimeter wave
  • the access node 104 may be referred to as an mmW base station.
  • Extremely high frequency (EHF) is part of the RF in the electromagnetic spectrum. EHF has a range of 30 GHz to 300 GHz and a wavelength between 1 millimeter and 10 millimeters. Radio waves in the band may be referred to as a millimeter wave.
  • Near mmW may extend down to a frequency of 3 GHz with a wavelength of 100 millimeters.
  • the super high frequency (SHF) band extends between 3 GHz and 30 GHz, also referred to as centimeter wave. Communications using the mmW or near mmW radio frequency band have extremely high path loss and a short range.
  • the mmW base station may utilize beamforming with user equipment 102 to compensate for the extremely high path loss and short range. It is understood that access node 104 is illustrated by a radio tower by way of illustration and not by way of limitation.
  • Access nodes 104, which are collectively referred to as E-UTRAN in the evolved packet core network (EPC) and as NG-RAN in the 5G core network (5GC), interface with the EPC and 5GC, respectively, through dedicated backhaul links (e.g., S1 interface).
  • EPC evolved packet core network
  • 5GC 5G core network
  • access node 104 may perform one or more of the following functions: transfer of user data, radio channel ciphering and deciphering, integrity protection, header compression, mobility control functions (e.g., handover, dual connectivity), inter-cell interference coordination, connection setup and release, load balancing, distribution for non-access stratum (NAS) messages, NAS node selection, synchronization, radio access network (RAN) sharing, multimedia broadcast multicast service (MBMS), subscriber and equipment trace, RAN information management (RIM), paging, positioning, and delivery of warning messages.
  • Access nodes 104 may communicate directly or indirectly (e.g., through the 5GC) with each other over backhaul links (e.g., X2 interface).
  • the backhaul links may be wired or wireless.
  • Core network element 106 may serve access node 104 and user equipment 102 to provide core network services.
  • core network element 106 may include a home subscriber server (HSS), a mobility management entity (MME), a serving gateway (SGW), or a packet data network gateway (PGW).
  • HSS home subscriber server
  • MME mobility management entity
  • SGW serving gateway
  • PGW packet data network gateway
  • EPC evolved packet core
  • core network element 106 includes an access and mobility management function (AMF), a session management function (SMF), or a user plane function (UPF) of the 5GC for the NR system.
  • the AMF may be in communication with a Unified Data Management (UDM).
  • UDM Unified Data Management
  • the AMF is the control node that processes the signaling between the user equipment 102 and the 5GC. Generally, the AMF provides QoS flow and session management. All user Internet protocol (IP) packets are transferred through the UPF. The UPF provides UE IP address allocation as well as other functions. The UPF is connected to the IP Services. The IP Services may include the Internet, an intranet, an IP Multimedia Subsystem (IMS), a PS Streaming Service, and/or other IP services. It is understood that core network element 106 is shown as a set of rack-mounted servers by way of illustration and not by way of limitation.
  • IMS IP Multimedia Subsystem
  • Core network element 106 may connect with a large network, such as the Internet.
  • IP Internet Protocol
  • data from user equipment 102 may be communicated to other user equipments connected to other access points, including, for example, a computer 110 connected to Internet 108, for example, using a wired connection or a wireless connection, or to a tablet 112 wirelessly connected to Internet 108 via a router 114.
  • computer 110 and tablet 112 provide additional examples of possible user equipments
  • router 114 provides an example of another possible access node.
  • a generic example of a rack-mounted server is provided as an illustration of core network element 106. However, there may be multiple elements in the core network including database servers, such as a database 116, and security and authentication servers, such as an authentication server 118.
  • Database 116 may, for example, manage data related to user subscription to network services.
  • a home location register (HLR) is an example of a standardized database of subscriber information for a cellular network.
  • authentication server 118 may handle authentication of users, sessions, and so on.
  • an authentication server function (AUSF) device may be the entity to perform user equipment authentication.
  • a single server rack may handle multiple such functions, such that the connections between core network element 106, authentication server 118, and database 116, may be local connections within a single rack.
  • Each element in FIG. 1 may be considered a node of wireless network 100. More detail regarding the possible implementation of a node is provided by way of example in the description of a node 500 in FIG. 5.
  • Node 500 may be configured as user equipment 102, access node 104, or core network element 106 in FIG. 1.
  • node 500 may also be configured as computer 110, router 114, tablet 112, database 116, or authentication server 118 in FIG. 1.
  • node 500 may include a processor 502, a memory 504, and a transceiver 506. These components are shown as connected to one another by a bus, but other connection types are also permitted.
  • node 500 When node 500 is user equipment 102, additional components may also be included, such as a user interface (UI), sensors, and the like. Similarly, node 500 may be implemented as a blade in a server system when node 500 is configured as core network element 106. Other implementations are also possible.
  • UI user interface
  • Transceiver 506 may include any suitable device for sending and/or receiving data.
  • Node 500 may include one or more transceivers, although only one transceiver 506 is shown for simplicity of illustration.
  • An antenna 508 is shown as a possible communication mechanism for node 500. Multiple antennas and/or arrays of antennas may be utilized for receiving multiple spatially multiplexed data streams.
  • examples of node 500 may communicate using wired techniques rather than (or in addition to) wireless techniques.
  • access node 104 may communicate wirelessly to user equipment 102 and may communicate by a wired connection (for example, by optical or coaxial cable) to core network element 106.
  • Other communication hardware, such as a network interface card (NIC), may be included as well.
  • NIC network interface card
  • node 500 may include processor 502. Although only one processor is shown, it is understood that multiple processors can be included.
  • Processor 502 may include microprocessors, microcontroller units (MCUs), digital signal processors (DSPs), application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), programmable logic devices (PLDs), state machines, gated logic, discrete hardware circuits, and other suitable hardware configured to perform the various functions described throughout the present disclosure.
  • Processor 502 may be a hardware device having one or more processing cores.
  • Processor 502 may execute software.
  • node 500 may also include memory 504. Although only one memory is shown, it is understood that multiple memories can be included. Memory 504 can broadly include both memory and storage.
  • memory 504 may include random-access memory (RAM), read-only memory (ROM), static RAM (SRAM), dynamic RAM (DRAM), ferroelectric RAM (FRAM), electrically erasable programmable ROM (EEPROM), compact disc read only memory (CD-ROM) or other optical disk storage, hard disk drive (HDD), such as magnetic disk storage or other magnetic storage devices, Flash drive, solid-state drive (SSD), or any other medium that can be used to carry or store desired program code in the form of instructions that can be accessed and executed by processor 502.
  • RAM random-access memory
  • ROM read-only memory
  • SRAM static RAM
  • DRAM dynamic RAM
  • FRAM ferroelectric RAM
  • EEPROM electrically erasable programmable ROM
  • CD-ROM compact disc read only memory
  • HDD hard disk drive
  • SSD solid-state drive
  • memory 504 may be embodied by any computer-readable medium, such as a non-transitory computer-readable medium.
  • Processor 502, memory 504, and transceiver 506 may be implemented in various forms in node 500 for performing wireless communication functions.
  • processor 502, memory 504, and transceiver 506 of node 500 are implemented (e.g., integrated) on one or more system-on-chips (SoCs).
  • SoCs system-on-chips
  • processor 502 and memory 504 may be integrated on an application processor (AP) SoC (sometimes known as a “host,” referred to herein as a “host chip”) that handles application processing in an operating system (OS) environment, including generating raw data to be transmitted.
  • AP application processor
  • OS operating system
  • processor 502 and memory 504 may be integrated on a baseband processor (BP) SoC (sometimes known as a “modem,” referred to herein as a “baseband chip”) that converts the raw data, e.g., from the host chip, to signals that can be used to modulate the carrier frequency for transmission, and vice versa, which can run a real-time operating system (RTOS).
  • BP baseband processor
  • RTOS real-time operating system
  • processor 502 and transceiver 506 may be integrated on an RF SoC (sometimes known as a “transceiver,” referred to herein as an “RF chip”) that transmits and receives RF signals with antenna 508.
  • RF SoC sometimes known as a “transceiver,” referred to herein as an “RF chip”
  • some or all of the host chip, baseband chip, and RF chip may be integrated as a single SoC.
  • a baseband chip and an RF chip may be integrated into a single SoC that manages all the radio functions for cellular communication.
  • user equipment 102 may include a Tx scheduler that services a set of transmission command queues using a credit-based scheduling technique that enables the preparation of at least one symbol of each UL transmission by the encoding due time at the PHY subsystem.
  • the credit-based scheduling technique of user equipment 102 may use a fixed credit size.
  • the credit size e.g., how many bytes will be processed
  • the Tx scheduler may service all CCs equally in terms of processed data bytes, which may result in the same data processing rate for each CC.
  • the credit-based scheduling technique of user equipment 102 may use a symbol-sliced credit to reduce the first symbol preparation time.
  • the credit size here may be different for each CC and proportional to the data size (e.g., symbol size) transmitted in one OFDM symbol of the CC.
  • the number of cycles required for processing one symbol of data may be equal for all CCs.
  • the first symbol of data of all CCs can be ready in the MAC inline buffer as early as possible, which reduces the risk of missing the encoding due time of the PHY subsystem.
  • the credit-based scheduling technique of user equipment 102 may use a time-sliced credit for optimal inline buffer usage.
  • the credit size here may be proportional to the data transmission rate of the CC, which is typically the dequeue rate of the PHY subsystem.
  • the size of the inline buffer associated with each CC may be optimized. Additional details of the credit-based scheduling technique are provided below in connection with FIGs. 2, 3A, 3B, 3C, 3D, 4A, and 4B.
  • FIG. 2 illustrates a block diagram of an apparatus 200 including a baseband chip 202, an RF chip 204, and a host chip 206, according to some embodiments of the present disclosure.
  • Apparatus 200 may be implemented as user equipment 102 of wireless network 100 in FIG. 1. As shown in FIG. 2, apparatus 200 may include baseband chip 202, RF chip 204, host chip 206, and one or more antennas 210.
  • baseband chip 202 is implemented by processor 502 and memory 504, and RF chip 204 is implemented by processor 502, memory 504, and transceiver 506, as described above with respect to FIG. 5.
  • apparatus 200 may further include an external memory 208 (e.g., the system memory or main memory) that can be shared by each chip 202, 204, or 206 through the system/main bus.
  • external memory 208 e.g., the system memory or main memory
  • Although baseband chip 202 is illustrated as a standalone SoC in FIG. 2, in one example, baseband chip 202 and RF chip 204 may be integrated as one SoC; in another example, baseband chip 202 and host chip 206 may be integrated as one SoC; in still another example, baseband chip 202, RF chip 204, and host chip 206 may be integrated as one SoC, as described above.
  • host chip 206 may generate raw data and send it to baseband chip 202 for encoding, modulation, and mapping. Interface 214 of baseband chip 202 may receive the data from host chip 206. Baseband chip 202 may also access the raw data generated by host chip 206 and stored in external memory 208, for example, using the direct memory access (DMA). Baseband chip 202 may first encode (e.g., by source coding and/or channel coding) the raw data and modulate the coded data using any suitable modulation techniques, such as multi-phase shift keying (MPSK) modulation or quadrature amplitude modulation (QAM).
  • MPSK multi-phase shift keying
  • QAM quadrature amplitude modulation
  • Baseband chip 202 may perform any other functions, such as symbol or layer mapping, to convert the raw data into a signal that can be used to modulate the carrier frequency for transmission.
  • baseband chip 202 may send the modulated signal to RF chip 204 via interface 214.
  • RF chip 204 through the transmitter, may convert the modulated signal in the digital form into analog signals, i.e., RF signals, and perform any suitable front-end RF functions, such as filtering, digital pre-distortion, up-conversion, or sample-rate conversion.
  • Antenna 210 e.g., an antenna array
  • antenna 210 may receive RF signals from an access node or other wireless device.
  • the RF signals may be passed to the receiver (Rx) of RF chip 204.
  • RF chip 204 may perform any suitable front-end RF functions, such as filtering, IQ imbalance compensation, down-conversion, or sample-rate conversion, and convert the RF signals (e.g., transmission) into low-frequency digital signals (baseband signals) that can be processed by baseband chip 202.
  • baseband chip 202 may include a Tx scheduler 240 that services a set of Tx command queues 230 using a credit-based scheduling technique (e.g., configured by uC 220) that enables the preparation of at least one symbol of each UL transmission by Layer 2 circuit 250.
  • a credit-based scheduling technique e.g., configured by uC 220
  • the data packet may arrive in the MAC inline buffer 260 by the encoding due time of the PHY subsystem 270.
  • the credit-based scheduling technique of baseband chip 202 may use a fixed credit size. For example, the credit size (e.g., how many bytes will be processed) is fixed for all CCs.
  • the Tx scheduler services all CCs equally in terms of processed data bytes, which may result in the same data processing rate for all CCs.
  • the credit-based scheduling technique of baseband chip 202 may use a symbol-sliced credit to reduce the first symbol preparation time.
  • the credit size here may be different for each CC and proportional to the data size (e.g., symbol size) transmitted in one OFDM symbol of the CC.
  • the number of cycles required for processing one symbol of data is made equal for all CCs.
  • the first symbol of data of all CCs can be ready in the MAC inline buffer 260 as early as possible, which may reduce the risk of the encoding due time being missed.
  • the credit-based scheduling technique of baseband chip 202 may use a time-sliced credit for optimal inline buffer usage.
  • the credit size here may be proportional to the data transmission rate of the CC, which is typically the dequeue rate of the PHY subsystem 270.
  • the size of the MAC inline buffer 260 associated with each CC may be optimized. Additional details of the credit-based scheduling technique are provided below in connection with FIGs. 3A, 3B, 3C, 3D, 4A, and 4B.
  • FIG. 3A illustrates a detailed block diagram of the exemplary baseband chip 202 of FIG. 2.
  • FIG. 3B illustrates a flow diagram of a first exemplary credit-based scheduling technique 325 of the baseband chip 202 of FIG. 3A, according to some embodiments of the present disclosure.
  • FIG. 3C illustrates a flow diagram of a second exemplary credit-based scheduling technique 350 of the baseband chip 202 of FIG. 3A, according to some embodiments of the present disclosure.
  • FIG. 3D illustrates a flow diagram of a third exemplary credit-based scheduling technique 375 of the baseband chip 202 of FIG. 3A, according to some embodiments of the present disclosure.
  • FIGs. 3A-3D will be described together.
  • uC 220 may generate one or more packet descriptors 301 based on the corresponding UL grant. Packet descriptors 301 may vary depending upon the grant size. uC 220 may push packet descriptors 301 into the corresponding Tx command queue 230. Each Tx command queue 230 may correspond to a different CC (e.g., CC0, CC1, CC2, CC3, CC4, etc.).
  • Packet descriptor 301 may indicate the size of the data packet 303, the address of data packet 303 in the packet buffer 302, as well as the PDCP, RLC, and MAC header information used by Layer 2 circuit 250 to construct the MAC SDUs, which are the data packets stored in MAC inline buffer 260.
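  • The fields that packet descriptor 301 is described as carrying (packet size, packet buffer address, and PDCP/RLC/MAC header information) might be modeled roughly as below; the field names and types are illustrative assumptions only.

```python
# Rough model of the information a packet descriptor 301 is described as
# carrying. Field names, types, and example values are assumptions.
from dataclasses import dataclass

@dataclass
class PacketDescriptor:
    cc_id: int               # CC whose Tx command queue holds this descriptor
    packet_size: int         # size of data packet 303 in bytes
    packet_buffer_addr: int  # address of data packet 303 in packet buffer 302
    pdcp_header: bytes       # header information used by Layer 2 circuit 250
    rlc_header: bytes        #   to construct the MAC SDU that is stored in
    mac_header: bytes        #   MAC inline buffer 260

# A descriptor for a hypothetical 1500-byte packet on CC0.
desc = PacketDescriptor(cc_id=0, packet_size=1500, packet_buffer_addr=0x2000_0000,
                        pdcp_header=b"\x80\x01", rlc_header=b"\xc0\x00", mac_header=b"\x3d")
```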
  • Tx scheduler 240 (shown in FIG. 3A as “Tx command queue (CmdQ) scheduler 240”) may manage the order in which Tx command queues 230 are scheduled. For example, Tx scheduler 240 may service a Tx command queue 230 until one of the following conditions is met: 1) all the packet descriptors 301 (also referred to herein as “commands”) in a Tx command queue 230 have been processed, 2) the total size of processed data associated with the Tx command queue 230 surpasses the maximum credit size of a cycle of Layer 2 processing, or 3) there is no free space in MAC inline buffer 260 available for the CC associated with the Tx command queue 230.
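  • The three stop conditions above can be read as a small servicing loop, sketched below under assumed interfaces (a deque of descriptors per CC, a per-CC free-space counter for MAC inline buffer 260, and a service() callback standing in for the hand-off to Layer 2 circuit 250); this is an interpretation, not the disclosed hardware design.

```python
# Interpretive sketch of how Tx scheduler 240 might walk the Tx command
# queues 230 in CC order, servicing each queue until one of the three stop
# conditions above holds.
from collections import deque, namedtuple

Descriptor = namedtuple("Descriptor", "packet_size")   # minimal stand-in for packet descriptor 301

def service_one_cycle(queues, max_credit, inline_free, service):
    """One cycle of Layer 2 processing: visit every CC's queue once."""
    for cc, q in queues.items():
        processed = 0
        while q:                                        # stop 1: queue drained
            size = q[0].packet_size
            if size > inline_free[cc]:                  # stop 3: no room in MAC inline buffer 260
                break
            if processed >= max_credit[cc]:             # stop 2: this CC's credit is used up
                break
            service(cc, q.popleft())                    # hand the descriptor to Layer 2 circuit 250
            processed += size
            inline_free[cc] -= size

# Hypothetical two-CC example: CC0 has a larger credit than CC1.
queues = {"CC0": deque(Descriptor(500) for _ in range(4)),
          "CC1": deque(Descriptor(500) for _ in range(4))}
service_one_cycle(queues, max_credit={"CC0": 1000, "CC1": 500},
                  inline_free={"CC0": 10_000, "CC1": 10_000},
                  service=lambda cc, d: None)
print(len(queues["CC0"]), len(queues["CC1"]))   # 2 3 -> CC0 serviced two packets this cycle, CC1 one
```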
  • the credit size can be statically or dynamically configured by uC 220, as described below in connection with FIG. 4B.
  • Each Tx command queue 230 may be configured with different credit size values.
  • Tx scheduler 240 may select a packet descriptor 301 to service and send information associated with the packet descriptor 301 to Layer 2 circuit 250.
  • Layer 2 circuit 250 may fetch a data packet 303 from packet buffer 302 and perform Layer 2 processing, e.g., such as PDCP processing, RLC processing, MAC processing, etc. based on the information of packet descriptor 301.
  • Layer 2 circuit 250 may store the processed data packet 303 in MAC inline buffer 260.
  • PHY subsystem 270 may dequeue a data packet 303 from MAC inline buffer 260 and prepare it for transmission.
  • PHY subsystem 270 may dequeue data packets 303 on a code block (CB) or symbol basis, which may enable pipelined processing such that the size of MAC inline buffer 260 is optimized, while minimizing latency.
  • baseband chip 202 prepares data packets 303 for dequeuing such that at least one symbol of data is located in MAC inline buffer 260 by the encoding due time of PHY subsystem 270. This may be implemented by Tx scheduler 240 using one of the example credit-based scheduling techniques described below.
  • Tx scheduler 240 may implement the credit-based scheduling technique using a fixed credit size.
  • the credit size e.g., how many bytes will be processed
  • Tx scheduler 240 may service all CCs equally in terms of processed data bytes, resulting in the same data processing rate for all CCs.
  • An example of this embodiment is depicted in FIG. 3B for three CCs.
  • Tx scheduler 240 implements three cycles of Layer 2 processing before the first data symbol for CC2 is ready in MAC inline buffer 260, by which time CC0 and CC1 already have one-and-a-half and three symbols ready in MAC inline buffer 260, respectively.
  • uC 220 may determine the fixed credit size for each CC during cell establishment or re-establishment. In some examples, uC 220 may determine the fixed credit size based on, e.g., the packet size indicated by the UL grant.
  • Tx scheduler 240 may implement the credit-based scheduling technique using a symbol-sliced credit that may reduce the first symbol preparation time, as compared to the embodiment described above in connection with FIG. 3B.
  • the credit size may be different for each CC and proportional to the symbol size associated with the CC.
  • the number of cycles required for processing one symbol of data is equal for all CCs.
  • the first symbol of data of all CCs may be ready in the MAC inline buffer 260 as early as possible, which reduces the chance of missing the encoding due time of PHY subsystem 270.
  • An example of this embodiment is depicted in FIG. 3C for three CCs.
  • In this example, the credit size is set to half of one OFDM symbol for each CC.
  • Accordingly, by the time CC2 has its first symbol ready in MAC inline buffer 260, CC0 and CC1 also have one symbol ready in MAC inline buffer 260.
  • Tx scheduler 240 may implement the credit-based scheduling technique using a time-sliced credit for optimal MAC inline buffer 260 usage.
  • the credit size in this embodiment may be proportional to the data transmission rate of the CC, which is typically the dequeue rate of the PHY subsystem 270.
  • the size of MAC inline buffer 260 may be optimized for each CC. An example of this embodiment is depicted in FIG. 3D for three CCs.
  • CC0 and CC2 are assumed to dequeue one symbol of data within the time that CC1 dequeues two symbols of data.
  • the credit size is set to half of a symbol for CC0 and CC2, and one symbol for CC1 in this example.
  • CC0 and CC1 have one symbol and two symbols ready in MAC inline buffer 260, respectively.
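  • To make the trade-off among FIGs. 3B-3D concrete, the short sketch below counts how many cycles of Layer 2 processing each CC needs before its first symbol is ready under a fixed credit versus a symbol-sliced credit; the symbol sizes are hypothetical and chosen only to keep the arithmetic easy to follow.

```python
import math

# Hypothetical symbol sizes in bytes for three CCs (not taken from the figures).
symbol_bytes = {"CC0": 1000, "CC1": 500, "CC2": 3000}

def cycles_to_first_symbol(credit_bytes: dict) -> dict:
    """Cycles of Layer 2 processing a CC needs before one full symbol is ready."""
    return {cc: math.ceil(symbol_bytes[cc] / credit_bytes[cc]) for cc in symbol_bytes}

# FIG. 3B style: one fixed credit for all CCs -> the large-symbol CC lags behind.
fixed = {cc: 1000 for cc in symbol_bytes}
# FIG. 3C style: credit = half of each CC's own symbol -> every CC needs two cycles.
symbol_sliced = {cc: size // 2 for cc, size in symbol_bytes.items()}

print(cycles_to_first_symbol(fixed))          # {'CC0': 1, 'CC1': 1, 'CC2': 3}
print(cycles_to_first_symbol(symbol_sliced))  # {'CC0': 2, 'CC1': 2, 'CC2': 2}
```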
  • FIG. 4A illustrates a flow chart of a first exemplary method 400 for UL Layer 2 data processing, according to some embodiments of the present disclosure.
  • Exemplary method 400 may be performed by an apparatus for wireless communication, e.g., such as user equipment 102, apparatus 200, baseband chip 202, uC 220, Tx command queues 230, Tx scheduler 240, Layer 2 circuit 250, MAC inline buffer 260, PHY subsystem 270, packet buffer 302, and/or node 500.
  • Method 400 may include steps 402-422 as described below. It is to be appreciated that some of the steps may be optional, and some of the steps may be performed simultaneously, or in a different order than shown in FIG. 4A.
  • the apparatus may initialize the byte count for all CCs to zero.
  • uC 220 may configure Tx scheduler 240 with a credit-based scheduling technique. Before beginning the credit-based scheduling technique, Tx scheduler 240 may set the byte count to zero for each of the Tx command queues 230. Tx scheduler 240 may use the byte count to keep track of whether the maximum credit size for a CC has been reached during a cycle of Layer 2 processing.
  • the apparatus may begin the credit-based scheduling technique for one of the Tx command queues.
  • Tx scheduler 240 may begin the credit-based scheduling technique for the Tx command queue 230 of CC0.
  • the apparatus may determine whether the Tx command queue 230 is empty.
  • Tx scheduler 240 may determine whether the Tx command queue for CC0 is empty. In response to determining that the Tx command queue is not empty, the operation may move to 408. Otherwise, in response to determining that the Tx command queue is empty, the operations may move to 420.
  • the apparatus may check the packet size indicated by the packet descriptor in the Tx command queue.
  • Tx scheduler may check the packet size indicated by the first packet descriptor 301 in the Tx command queue for CC0.
  • the apparatus may determine whether the MAC inline buffer associated with that CC has enough space to accommodate a packet of the size indicated by the packet descriptor. For example, referring to FIG. 3A, Tx scheduler 240 may determine whether the MAC inline buffer 260 for CC0 has enough space to accommodate a data packet 303 of the size indicated by the first packet descriptor 301 from Tx command queue 230 of CC0. In response to determining that the MAC inline buffer does have enough space, the operations may move to 412. Otherwise, in response to determining that the MAC inline buffer does not have enough space, the operations may move to 420.
  • the apparatus may determine whether the byte count associated with this Tx command queue is less than the maximum credit size. For example, referring to FIG. 3A, Tx scheduler 240 may determine whether the byte count for CC0 during the first cycle of Layer 2 processing is less than the maximum credit size. In response to determining that the byte count is less than the maximum credit size, the operations may move to 414. Otherwise, in response to determining that the byte count is greater than or equal to the maximum credit size, the operations may move to 418.
  • the apparatus may service the first packet descriptor in the Tx command queue.
  • Tx scheduler 240 may determine the information (e.g., packet size, packet location in packet buffer 302, etc.) included in the first packet descriptor 301 and send this information to Layer 2 circuit 250.
  • Layer 2 circuit 250 may fetch a data packet 303 from packet buffer 302 and perform Layer 2 processing of the data packet 303. Once processed, Layer 2 circuit 250 may store the data packet 303 in MAC inline buffer 260.
  • the apparatus may increase the byte count by the number of bytes associated with the data packet that was serviced. For example, referring to FIG. 3A, assuming the data packet 303 has a byte count of 500 bytes and the maximum credit size is 1000 bytes, Tx scheduler 240 may increase the byte count from 0 bytes to 500 bytes, which is still less than the maximum credit size. Once the byte count has been increased, the operations may return to 406.
  • the apparatus may determine whether the Tx command queue for the same CC includes a subsequent packet descriptor and may determine (at 408) whether the subsequent packet descriptor indicates a byte size that would exceed the maximum credit size for that cycle of Layer 2 processing. For example, referring to FIG. 3A, assuming the byte count is 500 bytes after the first data packet was serviced, the maximum credit size is 1000 bytes, and the data size indicated by the second packet descriptor 301 is 700 bytes, Tx scheduler 240 may still service the second packet descriptor 301 such that the data packet of 700 bytes is processed by Layer 2 circuit 250. Now the first cycle of Layer 2 processing of CC0 is complete based on the credit-based scheduling technique.
  • the apparatus may decrease the byte count associated with a subsequent cycle of Layer 2 processing for that CC by the amount by which the maximum credit size was exceeded when processing the second data packet. For example, referring to FIG. 3A, using the same example, Tx scheduler 240 would set the byte count for the second cycle of Layer 2 processing for CC0 to 200 bytes or decrease the maximum credit size for the second cycle for CC0 to 800 bytes.
  • the apparatus may set the byte count to zero. Then, at 422, the apparatus may move to the next Tx command queue. For example, referring to FIG. 3A, Tx scheduler 240 may move to the Tx command queue 230 associated with CC1 after performing the credit-based scheduling technique for CC0.
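  • An interpretive sketch of the byte-count bookkeeping described for FIG. 4A, including the carryover of any overshoot into the next cycle, is given below; the step numbers in the comments refer to method 400, while the data structures and helper names are assumptions. The worked example mirrors the text (500-byte and 700-byte packets against a 1000-byte credit).

```python
# Interpretive sketch of the byte-count bookkeeping described for FIG. 4A.
# A packet is serviced whenever the running byte count is still below the
# maximum credit size; any overshoot is carried into the next cycle by
# starting that cycle's byte count above zero.
from collections import deque

def run_one_cycle(cmd_queue: deque, max_credit: int, carryover: int,
                  inline_free_bytes: int, service) -> int:
    """Services one Tx command queue for one cycle; returns the carryover
    (bytes by which this cycle exceeded the credit) for the next cycle."""
    byte_count = carryover                     # 418: start above zero if the last cycle overshot
    while cmd_queue:                           # 406: queue not empty
        pkt_size = cmd_queue[0]                # 408: packet size from the descriptor
        if pkt_size > inline_free_bytes:       # 410: MAC inline buffer out of space
            return 0                           # 420: reset byte count, move to next queue
        if byte_count >= max_credit:           # 412: credit for this cycle exhausted
            return byte_count - max_credit     # 418: overshoot becomes next cycle's carryover
        service(cmd_queue.popleft())           # 414: hand the descriptor to the Layer 2 circuit
        byte_count += pkt_size                 # 416: grow the byte count
        inline_free_bytes -= pkt_size
    return 0                                   # 420: queue drained, reset for the next cycle

# Worked example from the text: packets of 500 and 700 bytes, credit 1000.
q = deque([500, 700, 400])
carry = run_one_cycle(q, max_credit=1000, carryover=0,
                      inline_free_bytes=10_000, service=lambda d: None)
print(carry)   # 200 -> the next cycle for this CC effectively has 800 bytes of credit
```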
  • FIG. 4B illustrates a flow chart of a second exemplary method 425 for UL Layer 2 data processing, according to some embodiments of the present disclosure.
  • Exemplary method 425 may be performed by an apparatus for wireless communication, e.g., such as user equipment 102, apparatus 200, baseband chip 202, uC 220, Tx command queues 230, Tx scheduler 240, Layer 2 circuit 250, MAC inline buffer 260, PHY subsystem 270, packet buffer 302, and/or node 500.
  • Method 425 may include steps 430-442 as described below. It is to be appreciated that some of the steps may be optional, and some of the steps may be performed simultaneously, or in a different order than shown in FIG. 4B.
  • the apparatus may select a first credit-based scheduling technique from a set of credit-based scheduling techniques. For example, referring to FIG. 3A, uC 220 may select the credit-based scheduling technique described above in connection with FIG. 3B.
  • the apparatus may receive a UL grant from a PHY subsystem.
  • uC 220 may receive a UL grant from PHY subsystem 270.
  • the UL grant may include information such as the grant size (e.g., the byte size of the scheduled data packet), the number of symbols in the data packet (e.g., associated with a CC), and the symbol timing (e.g., associated with subcarrier spacing), just to name a few.
  • grant size e.g., the byte size of scheduled data packet
  • the number of symbols in the data packet e.g., associated with a CC
  • the symbol timing e.g., associated with subcarrier spacing
  • uC 220 may determine the time that the first symbol of the data packet will arrive in MAC inline buffer 260 based on the credit size using the first credit-based scheduling technique and/or the information included in the UL grant.
  • the apparatus may determine whether the first symbol ready time is greater than a due time threshold associated with the encoding due time of the PHY subsystem. For example, referring to FIG. 3A, uC 220 may determine whether the first symbol will arrive in MAC inline buffer 260 before the encoding due time threshold.
  • the encoding due time threshold may be the encoding due time of PHY subsystem 270 or within a window of time prior to the encoding due time.
  • In response to determining that the first symbol ready time is greater than the encoding due time threshold, the operations may move to 438. Otherwise, in response to determining that the first symbol ready time is less than or equal to the encoding due time threshold, the operations may move to 440.
  • the apparatus may update the credit size for each CC based on a second credit-based scheduling technique.
  • uC 220 may switch to the credit-based scheduling technique of FIG. 3C and update the associated credit size used by Tx scheduler.
  • the credit size may be different for each CC and proportional to the symbol size associated with the CC. As such, the number of cycles required for processing one symbol of data is equal for all CCs.
  • the first symbol of data of all CCs may be ready in the MAC inline buffer 260 as early as possible, which reduces the risk that the encoding due time of the PHY subsystem 270 is missed.
  • the apparatus may determine whether the amount of free space in the MAC inline buffer is less than a space threshold. For example, referring to FIG. 3A, uC 220 may determine whether the space in MAC inline buffer 260 associated with the CC of the received UL grant is less than a space threshold for that CC. In response to determining that the free space is less than the space threshold, the operations may move to 442. Otherwise, in response to determining that the free space is greater than or equal to the space threshold, the operations may return to 432.
  • the apparatus may update the credit size for each CC based on a third credit- based scheduling technique.
  • uC 220 may switch to the credit-based scheduling technique of FIG. 3D and update the associated credit size used by Tx scheduler 240 when the free space is less than the space threshold.
  • a time-sliced credit for optimal MAC inline buffer 260 usage may be implemented by Tx scheduler 240.
  • the credit size in this embodiment may be proportional to the data transmission rate of the CC, which is typically the dequeue rate of the PHY subsystem 270. With matched enqueue and dequeue rates, the size of MAC inline buffer 260 may be optimized for each CC.
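  • An interpretive sketch of the adaptation described for FIG. 4B is given below: the first-symbol ready time is predicted from the current credit size, the scheduler switches to the symbol-sliced policy of FIG. 3C if that time would exceed the encoding due-time threshold, and switches to the time-sliced policy of FIG. 3D if MAC inline buffer free space falls below a space threshold; all numeric values and helper names are assumptions.

```python
# Interpretive sketch of the adaptation loop described for FIG. 4B. The
# predicted first-symbol ready time and the free-space check decide whether
# the uC switches the Tx scheduler to the symbol-sliced (FIG. 3C) or
# time-sliced (FIG. 3D) credit policy.

FIXED, SYMBOL_SLICED, TIME_SLICED = "fixed", "symbol_sliced", "time_sliced"

def first_symbol_ready_time_us(symbol_bytes: int, credit_bytes: int, cycle_time_us: float) -> float:
    """Predicted arrival time of the first symbol in the MAC inline buffer."""
    cycles_needed = -(-symbol_bytes // credit_bytes)   # ceiling division
    return cycles_needed * cycle_time_us

def select_technique(current: str, symbol_bytes: int, credit_bytes: int,
                     cycle_time_us: float, due_time_threshold_us: float,
                     inline_free_bytes: int, space_threshold_bytes: int) -> str:
    # 434/436: would the first symbol arrive after the encoding due-time threshold?
    if first_symbol_ready_time_us(symbol_bytes, credit_bytes, cycle_time_us) > due_time_threshold_us:
        return SYMBOL_SLICED        # 438: switch to the FIG. 3C policy
    # 440: is the MAC inline buffer running low on free space?
    if inline_free_bytes < space_threshold_bytes:
        return TIME_SLICED          # 442: switch to the FIG. 3D policy
    return current                  # otherwise keep the current policy

# Hypothetical CC: 3000-byte symbol, 1000-byte credit, 10 us per cycle, and a
# 25 us due-time threshold -> predicted 30 us, so switch to symbol-sliced.
print(select_technique(FIXED, 3000, 1000, 10.0, 25.0,
                       inline_free_bytes=50_000, space_threshold_bytes=8_000))
```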
  • the credit-based scheduling technique of the present disclosure optimizes the usage of Layer 2 processing resources when UL data packets are prepared concurrently for multiple CCs. Moreover, the present techniques optimize Layer 2 processing time to expedite UL data packet preparation for multiple CCs. Still further, using the present credit-based scheduling techniques, latency uncertainties associated with packet arrival and the encoding due time may be eliminated, even when preparing UL data packets for multiple CC grants.
  • the functions described herein may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or encoded as instructions or code on a non-transitory computer-readable medium.
  • Computer-readable media includes computer storage media. Storage media may be any available media that can be accessed by a computing device, such as node 500 in FIG. 5.
  • such computer-readable media can include RAM, ROM, EEPROM, CD-ROM or other optical disk storage, HDD, such as magnetic disk storage or other magnetic storage devices, Flash drive, SSD, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a processing system, such as a mobile device or a computer.
  • Disk and disc include CD, laser disc, optical disc, DVD, and floppy disk, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
  • a baseband chip may include a set of transmission command queues each associated with a different CC and each configured to maintain packet descriptors associated with one of the different CCs.
  • the baseband chip may also include a Layer 2 microcontroller.
  • the Layer 2 microcontroller may be configured to generate the packet descriptors for each of the different CCs based on associated UL grant indicators.
  • the Layer 2 microcontroller may be configured to send each of the packet descriptors to the set of transmission command queues based on CC.
  • the Layer 2 microcontroller may be configured to select a credit-based scheduling mechanism from a set of credit-based scheduling mechanisms.
  • the Layer 2 microcontroller may be configured to configure a transmission scheduler with the credit-based scheduling mechanism.
  • the set of credit-based scheduling mechanisms may be associated with a first credit size fixed for each of the different CCs, a second credit size that is proportional to a symbol size associated with each of the different CCs, or a third credit size that is proportional to a data transmission rate associated with each of the different CCs.
  • the first credit size, the second credit size, and the third credit size may each be associated with a number of bytes to be processed by a Layer 2 circuit.
  • the credit-based scheduling mechanism may indicate a maximum credit size for at least one cycle of Layer 2 processing by a Layer 2 circuit.
  • the transmission scheduler is configured to, in response to determining that a first packet size associated with a first packet descriptor of a first transmission command queue is less than a byte count associated with the maximum credit size and that an inline buffer threshold has not been reached, service the first packet descriptor.
  • the transmission scheduler is configured to, in response to determining that a first packet size associated with a first packet descriptor of a first transmission command queue is less than a byte count associated with the maximum credit size and that an inline buffer threshold has not been reached, increase the byte count associated with the maximum credit size by the first packet size (an illustrative sketch of this byte count accounting is provided after this summary list).
  • the Layer 2 circuit may be configured to receive the first packet descriptor from the transmission scheduler after the servicing. In some embodiments, the Layer 2 circuit may be configured to obtain a packet from a packet buffer based on the first packet descriptor. In some embodiments, the Layer 2 circuit may be configured to perform Layer 2 processing of the packet to generate a Layer 2 packet. In some embodiments, the Layer 2 circuit may be configured to send the Layer 2 packet to an inline buffer queue of an inline buffer.
  • the transmission scheduler may be further configured to in response to determining that a second packet size associated with a second packet descriptor of the first transmission command queue is less than the byte count associated with the maximum credit size and that the inline buffer threshold has not been reached, service the second packet descriptor. In some embodiments, the transmission scheduler may be further configured to in response to determining that a second packet size associated with a second packet descriptor of the first transmission command queue is less than the byte count associated with the maximum credit size and that the inline buffer threshold has not been reached, increase the byte count associated with the maximum credit size by the second packet size.
  • the transmission scheduler may be further configured to in response to determining that the first packet size associated with the first packet descriptor of the first transmission command queue exceeds the maximum credit size during a first cycle of Layer 2 processing, service the first packet descriptor. In some embodiments, the transmission scheduler may be further configured to in response to determining that the first packet size associated with the first packet descriptor of the first transmission command queue exceeds the maximum credit size during a first cycle of Layer 2 processing, decrease a second byte count associated with the first transmission command queue during a second cycle of Layer 2 processing based on an amount by which the maximum credit size is exceeded by the first packet size associated with the first packet descriptor during the first cycle of Layer 2 processing.
  • the transmission scheduler may be further configured to in response to determining that a second packet size associated with a second packet descriptor of a second transmission command queue is less than the byte count associated with the maximum credit size and that the inline buffer threshold has not been reached, service the second packet descriptor. In some embodiments, the transmission scheduler may be further configured to in response to determining that a second packet size associated with a second packet descriptor of a second transmission command queue is less than the byte count associated with the maximum credit size and that the inline buffer threshold has not been reached, increase the byte count associated with the maximum credit size by the second packet size.
  • the transmission scheduler may be further configured to in response to determining that the inline buffer threshold has been reached, set the second byte count associated with the first transmission command queue during the second cycle of Layer 2 processing to zero. In some embodiments, the transmission scheduler may be further configured to in response to determining that the inline buffer threshold has been reached, retrieve a third packet descriptor from the second transmission command queue during the first cycle of Layer 2 processing.
  • the Layer 2 microcontroller may be configured to configure the transmission scheduler with the credit-based scheduling mechanism by configuring a first credit size associated with a first credit-based scheduling mechanism. In some embodiments, the Layer 2 microcontroller may be configured to configure the transmission scheduler with the credit-based scheduling mechanism by, in response to determining that a first symbol ready time is greater than a due time threshold, updating the first credit size to a second credit size and implementing a second credit-based scheduling mechanism.
  • the Layer 2 microcontroller may be configured to configure the transmission scheduler with the credit-based scheduling mechanism by, in response to determining that the first symbol ready time is less than the due time threshold and to determining that an amount of free space in an inline buffer is less than an inline buffer threshold, updating the first credit size to a third credit size and implementing a third credit-based scheduling mechanism.
  • a baseband chip may include a set of transmission command queues each associated with a different CC and each configured to maintain packet descriptors associated with one of the different CCs.
  • the baseband chip may further include a transmission scheduler.
  • the transmission scheduler may be configured to receive configuration information associated with a credit-based scheduling mechanism from a Layer 2 microcontroller.
  • the transmission scheduler may be configured to, in response to determining that a first packet size associated with a first packet descriptor of a first transmission command queue is less than a byte count associated with the maximum credit size and that an inline buffer threshold has not been reached, service the first packet descriptor.
  • the transmission scheduler may be configured to, in response to determining that a first packet size associated with a first packet descriptor of a first transmission command queue is less than a byte count associated with the maximum credit size and that an inline buffer threshold has not been reached, increase the byte count associated with the maximum credit size by the first packet size.
  • the at least one microcontroller may be further configured to, in response to determining that a second packet size associated with a second packet descriptor of the first transmission command queue is less than the byte count associated with the maximum credit size and that the inline buffer threshold has not been reached, service the second packet descriptor. In some embodiments, the at least one microcontroller may be further configured to, in response to determining that a second packet size associated with a second packet descriptor of the first transmission command queue is less than the byte count associated with the maximum credit size and that the inline buffer threshold has not been reached, increase the byte count associated with the maximum credit size by the second packet size.
  • the at least one microcontroller may be further configured to, in response to determining that the first packet size associated with the first packet descriptor of the first transmission command queue exceeds the maximum credit size during a first cycle of Layer 2 processing, service the first packet descriptor. In some embodiments, the at least one microcontroller may be further configured to, in response to determining that the first packet size associated with the first packet descriptor of the first transmission command queue exceeds the maximum credit size during a first cycle of Layer 2 processing, decrease a second byte count associated with the first transmission command queue during a second cycle of Layer 2 processing based on an amount by which the maximum credit size is exceeded by the first packet size associated with the first packet descriptor during the first cycle of Layer 2 processing.
  • the at least one microcontroller may be further configured to, in response to determining that a second packet size associated with a second packet descriptor of a second transmission command queue is less than the byte count associated with the maximum credit size and that the inline buffer threshold has not been reached, service the second packet descriptor. In some embodiments, the at least one microcontroller may be further configured to, in response to determining that a second packet size associated with a second packet descriptor of a second transmission command queue is less than the byte count associated with the maximum credit size and that the inline buffer threshold has not been reached, increase the byte count associated with the maximum credit size by the second packet size.
  • the at least one microcontroller may be further configured to, in response to determining that the inline buffer threshold has been reached, set the second byte count associated with the first transmission command queue during the second cycle of Layer 2 processing to zero. In some embodiments, the at least one microcontroller may be further configured to, in response to determining that the inline buffer threshold has been reached, retrieve a third packet descriptor from the second transmission command queue during the first cycle of Layer 2 processing.
  • the at least one microcontroller may be further configured to implement a first credit size associated with a first credit-based scheduling mechanism. In some embodiments, the at least one microcontroller may be further configured to, in response to determining that a first symbol ready time is greater than a due time threshold, update the first credit size to a second credit size and implement a second credit-based scheduling mechanism. In some embodiments, the at least one microcontroller may be further configured to, in response to determining that the first symbol ready time is less than the due time threshold and to determining that an amount of free space in an inline buffer is less than an inline buffer threshold, update the first credit size to a third credit size and implement a third credit-based scheduling mechanism.
  • a method of wireless communication of a transmission scheduler may include receiving configuration information of a credit-based scheduling mechanism from a Layer 2 microcontroller.
  • the method may include, in response to determining that a first packet size associated with a first packet descriptor of a first transmission command queue is less than a byte count associated with the maximum credit size and that an inline buffer threshold has not been reached, servicing the first packet descriptor.
  • the method may include increasing the byte count associated with the maximum credit size by the first packet size.
  • the method may include, in response to determining that a second packet size associated with a second packet descriptor of the first transmission command queue is less than the byte count associated with the maximum credit size and that the inline buffer threshold has not been reached, servicing the second packet descriptor.
  • the method may include, in response to determining that a second packet size associated with a second packet descriptor of the first transmission command queue is less than the byte count associated with the maximum credit size and that the inline buffer threshold has not been reached, increasing the byte count associated with the maximum credit size by the second packet size.
  • the method may include, in response to determining that the first packet size associated with the first packet descriptor of the first transmission command queue exceeds the maximum credit size during a first cycle of Layer 2 processing, servicing the first packet descriptor.
  • the method may include, in response to determining that the first packet size associated with the first packet descriptor of the first transmission command queue exceeds the maximum credit size during a first cycle of Layer 2 processing, decreasing a second byte count associated with the first transmission command queue during a second cycle of Layer 2 processing based on an amount by which the maximum credit size is exceeded by the first packet size associated with the first packet descriptor during the first cycle of Layer 2 processing.
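A non-normative sketch of the byte count accounting summarized in the items above follows. It is only one plausible software rendering of the described behavior: the descriptor, queue, callback, and threshold names are invented for illustration, the actual transmission scheduler may be realized in hardware, and corner cases such as the inline-buffer-full handling of the later items are omitted.

```python
from collections import deque, namedtuple

# Hypothetical stand-in for a packet descriptor; only the size field is used here.
PacketDescriptor = namedtuple("PacketDescriptor", ["size", "address"])


class CreditAccountingSketch:
    """Per-cycle credit accounting for one transmission command queue."""

    def __init__(self, max_credit_bytes, inline_buffer_threshold):
        self.max_credit = max_credit_bytes        # maximum credit size per cycle
        self.buf_threshold = inline_buffer_threshold
        self.carryover_deficit = 0                # overshoot charged to the next cycle

    def run_cycle(self, cmd_queue, inline_buffer_fill, service):
        """Service descriptors for one cycle of Layer 2 processing."""
        byte_count = 0
        budget = max(self.max_credit - self.carryover_deficit, 0)
        self.carryover_deficit = 0
        while cmd_queue and inline_buffer_fill < self.buf_threshold:
            pkt_size = cmd_queue[0].size
            if byte_count + pkt_size <= budget:
                # Packet fits within the remaining credit: service it and
                # increase the byte count by the packet size.
                service(cmd_queue.popleft())
                byte_count += pkt_size
                inline_buffer_fill += pkt_size
            else:
                # Packet exceeds the remaining credit: it may still be serviced,
                # but the overshoot reduces the credit of the following cycle.
                service(cmd_queue.popleft())
                self.carryover_deficit = byte_count + pkt_size - budget
                break
        return byte_count


# Minimal usage with made-up numbers.
queue_cc0 = deque([PacketDescriptor(1500, 0x1000), PacketDescriptor(900, 0x2000)])
sched = CreditAccountingSketch(max_credit_bytes=2000, inline_buffer_threshold=16384)
sched.run_cycle(queue_cc0, inline_buffer_fill=0, service=lambda desc: None)
```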

Abstract

According to one aspect of the disclosure, a baseband chip is provided. The baseband chip may include a set of transmission command queues each associated with a different component carrier (CC) and each configured to maintain packet descriptors associated with one of the different component carriers (CCs). The baseband chip may also include a Layer 2 microcontroller. The Layer 2 microcontroller may be configured to generate the packet descriptors for each of the different CCs based on associated uplink (UL) grant indicators. The Layer 2 microcontroller may be configured to send each of the packet descriptors to the set of transmission command queues based on CC. The Layer 2 microcontroller may be configured to select a credit-based scheduling mechanism from a set of credit-based scheduling mechanisms. The Layer 2 microcontroller may be configured to configure a transmission scheduler with the credit-based scheduling mechanism.

Description

APPARATUS AND METHOD OF CREDIT-BASED SCHEDULING MECHANISM FOR LAYER 2 TRANSMISSION SCHEDULER
BACKGROUND
[0001] Embodiments of the present disclosure relate to apparatus and method for wireless communication.
[0002] Wireless communication systems are widely deployed to provide various telecommunication services such as telephony, video, data, messaging, and broadcasts. A radio access technology (RAT) is the underlying physical connection method for a radio-based communication network. Many modern terminal devices, such as mobile devices, support several RATs in one device. In cellular communication, such as the 4th-generation (4G) Long Term Evolution (LTE) and the 5th-generation (5G) New Radio (NR), the 3rd Generation Partnership Project (3GPP) defines a Radio Layer 2 (referred to here as “Layer 2”) as part of the cellular protocol stack structure corresponding to the data plane (DP) (also referred to as the “user plane”), which includes a Service Data Adaptation Protocol (SDAP) layer, a Packet Data Convergence Protocol (PDCP) layer, a Radio Link Control (RLC) layer, and a Medium Access Control (MAC) layer, from top to bottom in the stack.
SUMMARY
[0003] Embodiments of apparatus and method for Layer 2 packet processing are disclosed herein.
[0004] According to one aspect of the disclosure, a baseband chip is provided. The baseband chip may include a set of transmission command queues each associated with a different component carrier (CC) and each configured to maintain packet descriptors associated with one of the different component carriers (CCs). The baseband chip may also include a Layer 2 microcontroller. The Layer 2 microcontroller may be configured to generate the packet descriptors for each of the different CCs based on associated uplink (UL) grant indicators. The Layer 2 microcontroller may be configured to send each of the packet descriptors to the set of transmission command queues based on CC. The Layer 2 microcontroller may be configured to select a credit- based scheduling mechanism from a set of credit-based scheduling mechanisms. The Layer 2 microcontroller may be configured to configure a transmission scheduler with the credit-based scheduling mechanism. [0005] In another aspect of the disclosure, a baseband chip is provided. The baseband chip may include a set of transmission command queues each associated with a different CC and each configured to maintain packet descriptors associated with one of the different CCs. The baseband chip may further include a transmission scheduler. The transmission scheduler may be configured to receive configuration information associated with a credit-based scheduling mechanism from a Layer 2 microcontroller. The transmission scheduler may be configured to, in response to determining that a first packet size associated with a first packet descriptor of a first transmission command queue is less than a byte count associated with the maximum credit size and that an inline buffer threshold has not been reached, service the first packet descriptor. The transmission scheduler may be configured to, in response to determining that a first packet size associated with a first packet descriptor of a first transmission command queue is less than a byte count associated with the maximum credit size and that an inline buffer threshold has not been reached, increase the byte count associated with the maximum credit size by the first packet size.
[0006] In yet another aspect of the disclosure, a method of wireless communication of a transmission scheduler is provided. The method may include receiving configuration information of credit-based scheduling mechanism configured from a Layer 2 microcontroller. The method may include, in response to determining that a first packet size associated with a first packet descriptor of a first transmission command queue is less than a byte count associated with the maximum credit size and that an inline buffer threshold has not been reached, servicing the first packet descriptor. The method may include increasing the byte count associated with the maximum credit size by the first packet size. The method may include, in response to determining that a second packet size associated with a second packet descriptor of the first transmission command queue is less than the byte count associated with the maximum credit size and that the inline buffer threshold has not been reached, servicing the second packet descriptor. The method may include, in response to determining that a second packet size associated with a second packet descriptor of the first transmission command queue is less than the byte count associated with the maximum credit size and that the inline buffer threshold has not been reached, increasing the byte count associated with the maximum credit size by the second packet size. The method may include, in response to determining that the first packet size associated with the first packet descriptor of the first transmission command queue exceeds the maximum credit size during a first cycle of Layer 2 processing, servicing the first packet descriptor. The method may include, in response to determining that the first packet size associated with the first packet descriptor of the first transmission command queue exceeds the maximum credit size during a first cycle of Layer 2 processing, decreasing a second byte count associated with the first transmission command queue during a second cycle of Layer 2 processing based on an amount by which the maximum credit size is exceeded by the first packet size associated with the first packet descriptor during the first cycle of Layer 2 processing.
BRIEF DESCRIPTION OF THE DRAWINGS
[0007] The accompanying drawings, which are incorporated herein and form a part of the specification, illustrate embodiments of the present disclosure and, together with the description, further serve to explain the principles of the present disclosure and to enable a person skilled in the pertinent art to make and use the present disclosure.
[0008] FIG. 1 illustrates an exemplary wireless network, according to some embodiments of the present disclosure.
[0009] FIG. 2 illustrates a block diagram of an exemplary apparatus including a baseband chip, a radio frequency (RF) chip, and a host chip, according to some embodiments of the present disclosure.
[0010] FIG. 3A illustrates a detailed block diagram of an exemplary baseband chip, according to some embodiments of the present disclosure.
[0011] FIG. 3B illustrates a flow diagram of a first exemplary credit-based scheduling technique of the baseband chip of FIG. 3A, according to some embodiments of the present disclosure.
[0012] FIG. 3C illustrates a flow diagram of a second exemplary credit-based scheduling technique of the baseband chip of FIG. 3A, according to some embodiments of the present disclosure.
[0013] FIG. 3D illustrates a flow diagram of a third exemplary credit-based scheduling technique of the baseband chip of FIG. 3A, according to some embodiments of the present disclosure.
[0014] FIG. 4A illustrates a flow chart of a first exemplary method for UL Layer 2 data processing, according to some embodiments of the present disclosure.
[0015] FIG. 4B illustrates a flow chart of a second exemplary method for UL Layer 2 data processing, according to some embodiments of the present disclosure.
[0016] FIG. 5 illustrates a block diagram of an exemplary node, according to some embodiments of the present disclosure.
[0017] FIG. 6 illustrates a flow diagram for Layer 2 UL packet processing.
[0018] Embodiments of the present disclosure will be described with reference to the accompanying drawings.
DETAILED DESCRIPTION
[0019] Although some configurations and arrangements are discussed, it should be understood that this is done for illustrative purposes only. A person skilled in the pertinent art will recognize that other configurations and arrangements can be used without departing from the spirit and scope of the present disclosure. It will be apparent to a person skilled in the pertinent art that the present disclosure can also be employed in a variety of other applications.
[0020] It is noted that references in the specification to “one embodiment,” “an embodiment,” “an example embodiment,” “some embodiments,” “certain embodiments,” etc., indicate that the embodiment described may include a feature, structure, or characteristic, but every embodiment may not necessarily include the feature, structure, or characteristic. Moreover, such phrases do not necessarily refer to the same embodiment. Further, when a feature, structure, or characteristic is described in connection with an embodiment, it would be within the knowledge of a person skilled in the pertinent art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
[0021] In general, terminology may be understood at least in part from usage in context.
For example, the term “one or more” as used herein, depending at least in part upon context, may be used to describe any feature, structure, or characteristic in a singular sense or may be used to describe combinations of features, structures or characteristics in a plural sense. Similarly, terms, such as “a,” “an,” or “the,” again, may be understood to convey a singular usage or to convey a plural usage, depending at least in part upon context. In addition, the term “based on” may be understood as not necessarily intended to convey an exclusive set of factors and may, instead, allow for existence of additional factors not necessarily expressly described, again, depending at least in part on context.
[0022] Various aspects of wireless communication systems will now be described with reference to various apparatus and methods. These apparatus and methods will be described in the following detailed description and illustrated in the accompanying drawings by various blocks, modules, units, components, circuits, steps, operations, processes, algorithms, etc. (collectively referred to as “elements”). These elements may be implemented using electronic hardware, firmware, computer software, or any combination thereof. Whether such elements are implemented as hardware, firmware, or software depends upon the application and design constraints imposed on the overall system.
[0023] The techniques described herein may be used for various wireless communication networks, such as code division multiple access (CDMA) system, time division multiple access (TDMA) system, frequency division multiple access (FDMA) system, orthogonal frequency division multiple access (OFDMA) system, single-carrier frequency division multiple access (SC- FDMA) system, wireless local area network (WLAN) system, and other networks. The terms “network” and “system” are often used interchangeably. A CDMA network may implement a radio access technology (RAT), such as Universal Terrestrial Radio Access (UTRA), evolved UTRA (E-UTRA), CDMA 2000, etc. A TDMA network may implement a RAT, such as global system for mobile communications (GSM). An OFDMA network may implement a first RAT, such as LTE or NR. A WLAN system may implement a second RAT, such as Wi-Fi. The techniques described herein may be used for the wireless networks and RATs mentioned above, as well as other wireless networks and RATs.
[0024] In cellular and/or Wi-Fi communication, Layer 2 is the protocol stack layer responsible for ensuring a reliable, error-free datalink for the wireless modem (referred to herein as a “baseband chip”) of a UE. More specifically, Layer 2 interfaces with Radio Layer 1 (also referred to as “Layer 1” or the “physical (PHY) layer”) and Radio Layer 3 (also referred to as “Layer 3” or the “Internet Protocol (IP) layer”), passing data packets up or down the protocol stack structure, depending on whether the data packets are associated with UL or DL transmissions.
[0025] Furthermore, Layer 2 may perform de-multiplexing / multiplexing, segmentation / reassembly, aggregation / de-aggregation, and sliding window automatic repeat request (ARQ) techniques, among others, to ensure reliable end-to-end data integrity and in-order error-free delivery of data packets. For a UL data packet, Layer 3 data packets (e.g., IP data packets) may be input into a Layer 2 packet buffer, fetched by the Layer 2 protocol stack circuit, and encoded into MAC layer packets (e.g., 5G NR MAC packets) for transport to the PHY layer. The timing for Layer 2 processing of a UL data packet proceeds based on the grant indication received from a transmitter. The grant indication may indicate UL packet conditions, such as packet due time, byte size, etc., as shown in FIG. 6.
[0026] FIG. 6 illustrates a flow diagram 600 for Layer 2 UL packet processing at a baseband chip of a user equipment (UE). As seen in FIG. 6, the baseband chip may include, e.g., a physical layer (PHY) subsystem 602 and a Layer 2 data plane (DP) subsystem 604. PHY subsystem 602 may receive a UL grant (e.g., a UL resource allocation grant) in a Physical Downlink Control Channel (PDCCH) occasion that is located at the beginning of each slot.
[0027] Upon reception of a UL grant, the UE may begin preparation of the UL data transmission (e.g., a Physical Uplink Shared Channel (PUSCH) transmission), which involves operations by the PHY subsystem 602 and Layer 2 DP subsystem 604. For example, PHY subsystem 602 may process (at 601) the UL grant and send an indication of the UL grant to Layer 2 DP subsystem 604. Layer 2 DP subsystem 604 may perform (at 603) logical channel prioritization (LCP) to select logical channels and allocate granted resources to the selected logical channels. For each MAC packet selected for transmission, Layer 2 DP subsystem 604 may issue (at 605) a transmitter (Tx) command to the DP hardware (e.g., such as a Layer 2 circuit) of Layer 2 DP subsystem 604. The DP hardware may use the Tx commands to construct (at 607) MAC sub protocol data units (SDUs) on the fly and store them in a MAC inline buffer (not shown). PHY subsystem 602 may then retrieve (at 609) the MAC SDUs (also referred to herein as “data packets” or “packets”) from the MAC inline buffer and perform Tx processing before transmitting the UL data transmission via the PUSCH at the time scheduled by the UL grant.
[0028] When using Carrier Aggregation (CA), multiple active Component Carriers (CCs) are aggregated for transmission. The UE may receive multiple UL grants concurrently in one or more PDCCH occasions, where each grant is associated with a CC or serving cell. For each UL grant, Layer 2 DP subsystem 604 may generate (at 605) a list of Tx commands corresponding to a MAC SDU. These Tx commands may be pushed into multiple Tx command queues (not shown). The UL MAC packet scheduling algorithm then manages the order of servicing these Tx command queues, such that all CCs have sufficient processed data, typically at least one symbol, in the MAC inline buffer by PHY subsystem 602 encoding due time (at 609).
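As a rough illustration of the fan-out just described, the per-CC Tx command queues can be modeled as independent FIFOs keyed by CC; the container and function names below are invented for this sketch and do not correspond to any identifier used in the disclosure.

```python
from collections import defaultdict, deque

# Hypothetical model of the Tx command queues: one FIFO per component carrier.
tx_command_queues = defaultdict(deque)

def push_tx_commands(cc_id, tx_commands):
    """Queue the Tx commands generated for one UL grant of carrier cc_id.

    Each command corresponds to one MAC packet selected by logical channel
    prioritization for that grant.
    """
    tx_command_queues[cc_id].extend(tx_commands)

# Example: two concurrent grants received in the same PDCCH occasion.
push_tx_commands(0, ["cc0_cmd_0", "cc0_cmd_1"])
push_tx_commands(1, ["cc1_cmd_0"])
```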
[0029] One challenge of conventional UL MAC packet scheduling arises in the servicing of multiple concurrent grants from multiple CCs and/or a dual connectivity configuration. This is because the UE, which is connected to two or more MAC entities that in turn are each connected to a base station with multiple CCs of different bandwidths, resources, and radio channel conditions, needs to prepare the UL data transmissions for all CCs without any de-synchronization or loss of data and in a time- and resource-optimal manner. Using conventional transmission scheduling techniques, the MAC data packets for each of the CCs may not be prepared by the encoding due time, which is the time at which PHY subsystem 602 pulls the MAC data packets from the MAC inline buffer. This problem is exacerbated in low-latency scenarios, where meeting stringent delay-sensitive timing requirements for multiple UL grants that schedule the UL transmission in the same slot in which the grant arrived can be problematic. Moreover, conventional scheduling techniques often use processing resources inefficiently when preparing MAC data packets. Due to the non-optimized packet scheduling of the conventional technique, a UE is forced to use additional memory resources, which is undesirable in terms of power consumption and processing overhead.
[0030] Thus, there exists an unmet need for a Layer 2 scheduling technique that ensures that multiple concurrent UL grants received across multiple CCs are serviced in a time-sensitive manner such that at least one symbol of each UL packet is ready by the encoding due time.
[0031] To overcome these and other challenges, the present disclosure provides a transmission (Tx) scheduler that services a set of transmission command queues using a credit- based scheduling technique, which enables the preparation of at least one symbol of each UL transmission by the encoding due time. In some embodiments, the credit-based scheduling technique may use a fixed credit size. For example, the credit size (e.g., how many bytes will be processed) is fixed for all CCs. Here, the Tx scheduler services all CCs equally in terms of processed data bytes, which results in the same data processing rate for all CCs. In another embodiment, the credit-based scheduling technique may use a symbol-sliced credit to reduce the first symbol preparation time. Here, for example, the credit size may be different for each CC and proportional to the data size (e.g., symbol size) transmitted in one orthogonal frequency-division multiplexed (OFDM) symbol, which may differ from CC-to-CC. As such, the number of cycles required for processing one symbol of data is equal for all CCs. Hence, the first symbol of data of all CCs may be ready in the MAC inline buffer as early as possible, which reduces the risk of missing the encoding due time of the PHY subsystem. In yet another embodiment, the credit-based scheduling technique may use a time-sliced credit for optimal inline buffer usage. Here, for example, the credit size may be proportional to the data transmission rate (e.g., dequeue rate of the PHY subsystem) of the CC, which is typically the dequeue rate of the PHY subsystem. With matched enqueue and dequeue rates, the size of the MAC inline buffer for each CC may be optimized. Additional details of the present credit-based scheduling technique are provided below in connection with FIGs. 1-5.
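One way to read the switching between the three credit definitions (fixed, symbol-sliced, and time-sliced) is sketched below. The function, argument names, and thresholds are assumptions made for illustration; they loosely mirror the conditions described later for the Layer 2 microcontroller and are not the disclosed implementation.

```python
def select_credit_mechanism(first_symbol_ready_time, due_time_threshold,
                            inline_buffer_free, space_threshold):
    """Pick a credit-based scheduling mechanism for the next configuration.

    Loosely: switch to the symbol-sliced credit when the first symbol would
    be ready too late, switch to the time-sliced credit when the MAC inline
    buffer is running low on free space, and otherwise keep the fixed credit.
    """
    if first_symbol_ready_time > due_time_threshold:
        return "symbol_sliced"   # reduce first-symbol preparation time
    if inline_buffer_free < space_threshold:
        return "time_sliced"     # match the enqueue rate to the PHY dequeue rate
    return "fixed"               # equal credit, and thus equal processing rate, per CC
```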
[0032] Although the following processing techniques are described in connection with
Layer 2 data processing, the same or similar techniques may be applied to Layer 3 and/or Layer 4 data processing to optimize power consumption at Layer 3 and/or Layer 4 subsystems without departing from the scope of the present disclosure.
[0033] FIG. 1 illustrates an exemplary wireless network 100, in which some aspects of the present disclosure may be implemented, according to some embodiments of the present disclosure. As shown in FIG. 1, wireless network 100 may include a network of nodes, such as a user equipment 102, an access node 104, and a core network element 106. User equipment 102 may be any terminal device, such as a mobile phone, a desktop computer, a laptop computer, a tablet, a vehicle computer, a gaming console, a printer, a positioning device, a wearable electronic device, a smart sensor, or any other device capable of receiving, processing, and transmitting information, such as any member of a vehicle-to-everything (V2X) network, a cluster network, a smart grid node, or an Internet-of-Things (IoT) node. It is understood that user equipment 102 is illustrated as a mobile phone simply by way of illustration and not by way of limitation.
[0034] Access node 104 may be a device that communicates with user equipment 102, such as a wireless access point, a base station (BS), a Node B, an enhanced Node B (eNodeB or eNB), a next-generation NodeB (gNodeB or gNB), a cluster master node, or the like. Access node 104 may have a wired connection to user equipment 102, a wireless connection to user equipment 102, or any combination thereof. Access node 104 may be connected to user equipment 102 by multiple connections, and user equipment 102 may be connected to other access nodes in addition to access node 104. Access node 104 may also be connected to other user equipments. When configured as a gNB, access node 104 may operate in millimeter wave (mmW) frequencies and/or near mmW frequencies in communication with the user equipment 102. When access node 104 operates in mmW or near mmW frequencies, the access node 104 may be referred to as an mmW base station. Extremely high frequency (EHF) is part of the RF in the electromagnetic spectrum. EHF has a range of 30 GHz to 300 GHz and a wavelength between 1 millimeter and 10 millimeters. Radio waves in the band may be referred to as a millimeter wave. Near mmW may extend down to a frequency of 3 GHz with a wavelength of 100 millimeters. The super high frequency (SHF) band extends between 3 GHz and 30 GHz, also referred to as centimeter wave. Communications using the mmW or near mmW radio frequency band have extremely high path loss and a short range. The mmW base station may utilize beamforming with user equipment 102 to compensate for the extremely high path loss and short range. It is understood that access node 104 is illustrated by a radio tower by way of illustration and not by way of limitation.
[0035] Access nodes 104, which are collectively referred to as E-UTRAN in the evolved packet core network (EPC) and as NG-RAN in the 5G core network (5GC), interface with the EPC and 5GC, respectively, through dedicated backhaul links (e.g., S1 interface). In addition to other functions, access node 104 may perform one or more of the following functions: transfer of user data, radio channel ciphering and deciphering, integrity protection, header compression, mobility control functions (e.g., handover, dual connectivity), inter-cell interference coordination, connection setup and release, load balancing, distribution for non-access stratum (NAS) messages, NAS node selection, synchronization, radio access network (RAN) sharing, multimedia broadcast multicast service (MBMS), subscriber and equipment trace, RAN information management (RIM), paging, positioning, and delivery of warning messages. Access nodes 104 may communicate directly or indirectly (e.g., through the 5GC) with each other over backhaul links (e.g., X2 interface). The backhaul links may be wired or wireless.
[0036] Core network element 106 may serve access node 104 and user equipment 102 to provide core network services. Examples of core network element 106 may include a home subscriber server (HSS), a mobility management entity (MME), a serving gateway (SGW), or a packet data network gateway (PGW). These are examples of core network elements of an evolved packet core (EPC) system, which is a core network for the LTE system. Other core network elements may be used in LTE and in other communication systems. In some embodiments, core network element 106 includes an access and mobility management function (AMF), a session management function (SMF), or a user plane function (UPF) of the 5GC for the NR system. The AMF may be in communication with a Unified Data Management (UDM). The AMF is the control node that processes the signaling between the user equipment 102 and the 5GC. Generally, the AMF provides QoS flow and session management. All user Internet protocol (IP) packets are transferred through the UPF. The UPF provides UE IP address allocation as well as other functions. The UPF is connected to the IP Services. The IP Services may include the Internet, an intranet, an IP Multimedia Subsystem (IMS), a PS Streaming Service, and/or other IP services. It is understood that core network element 106 is shown as a set of rack-mounted servers by way of illustration and not by way of limitation.
[0037] Core network element 106 may connect with a large network, such as the Internet
108, or another Internet Protocol (IP) network, to communicate packet data over any distance. In this way, data from user equipment 102 may be communicated to other user equipments connected to other access points, including, for example, a computer 110 connected to Internet 108, for example, using a wired connection or a wireless connection, or to a tablet 112 wirelessly connected to Internet 108 via a router 114. Thus, computer 110 and tablet 112 provide additional examples of possible user equipments, and router 114 provides an example of another possible access node. [0038] A generic example of a rack-mounted server is provided as an illustration of core network element 106. However, there may be multiple elements in the core network including database servers, such as a database 116, and security and authentication servers, such as an authentication server 118. Database 116 may, for example, manage data related to user subscription to network services. A home location register (HLR) is an example of a standardized database of subscriber information for a cellular network. Likewise, authentication server 118 may handle authentication of users, sessions, and so on. In the NR system, an authentication server function (AUSF) device may be the entity to perform user equipment authentication. In some embodiments, a single server rack may handle multiple such functions, such that the connections between core network element 106, authentication server 118, and database 116, may be local connections within a single rack.
[0039] Each element in FIG. 1 may be considered a node of wireless network 100. More detail regarding the possible implementation of a node is provided by way of example in the description of a node 500 in FIG. 5. Node 500 may be configured as user equipment 102, access node 104, or core network element 106 in FIG. 1. Similarly, node 500 may also be configured as computer 110, router 114, tablet 112, database 116, or authentication server 118 in FIG. 1. As shown in FIG. 5, node 500 may include a processor 502, a memory 504, and a transceiver 506. These components are shown as connected to one another by a bus, but other connection types are also permitted. When node 500 is user equipment 102, additional components may also be included, such as a user interface (UI), sensors, and the like. Similarly, node 500 may be implemented as a blade in a server system when node 500 is configured as core network element 106. Other implementations are also possible.
[0040] Transceiver 506 may include any suitable device for sending and/or receiving data.
Node 500 may include one or more transceivers, although only one transceiver 506 is shown for simplicity of illustration. An antenna 508 is shown as a possible communication mechanism for node 500. Multiple antennas and/or arrays of antennas may be utilized for receiving multiple spatially multiplexed data streams. Additionally, examples of node 500 may communicate using wired techniques rather than (or in addition to) wireless techniques. For example, access node 104 may communicate wirelessly to user equipment 102 and may communicate by a wired connection (for example, by optical or coaxial cable) to core network element 106. Other communication hardware, such as a network interface card (NIC), may be included as well.
[0041] As shown in FIG. 5, node 500 may include processor 502. Although only one processor is shown, it is understood that multiple processors can be included. Processor 502 may include microprocessors, microcontroller units (MCUs), digital signal processors (DSPs), application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), programmable logic devices (PLDs), state machines, gated logic, discrete hardware circuits, and other suitable hardware configured to perform the various functions described throughout the present disclosure. Processor 502 may be a hardware device having one or more processing cores. Processor 502 may execute software. Software shall be construed broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software modules, applications, software applications, software packages, routines, subroutines, objects, executables, threads of execution, procedures, functions, etc., whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise. Software can include computer instructions written in an interpreted language, a compiled language, or machine code. Other techniques for instructing hardware are also permitted under the broad category of software. [0042] As shown in FIG. 5, node 500 may also include memory 504. Although only one memory is shown, it is understood that multiple memories can be included. Memory 504 can broadly include both memory and storage. For example, memory 504 may include random-access memory (RAM), read-only memory (ROM), static RAM (SRAM), dynamic RAM (DRAM), ferro electric RAM (FRAM), electrically erasable programmable ROM (EEPROM), compact disc read only memory (CD-ROM) or other optical disk storage, hard disk drive (HDD), such as magnetic disk storage or other magnetic storage devices, Flash drive, solid-state drive (SSD), or any other medium that can be used to carry or store desired program code in the form of instructions that can be accessed and executed by processor 502. Broadly, memory 504 may be embodied by any computer-readable medium, such as a non-transitory computer-readable medium.
[0043] Processor 502, memory 504, and transceiver 506 may be implemented in various forms in node 500 for performing wireless communication functions. In some embodiments, processor 502, memory 504, and transceiver 506 of node 500 are implemented (e.g., integrated) on one or more system-on-chips (SoCs). In one example, processor 502 and memory 504 may be integrated on an application processor (AP) SoC (sometimes known as a “host,” referred to herein as a “host chip”) that handles application processing in an operating system (OS) environment, including generating raw data to be transmitted. In another example, processor 502 and memory 504 may be integrated on a baseband processor (BP) SoC (sometimes known as a “modem,” referred to herein as a “baseband chip”) that converts the raw data, e.g., from the host chip, to signals that can be used to modulate the carrier frequency for transmission, and vice versa, which can run a real-time operating system (RTOS). In still another example, processor 502 and transceiver 506 (and memory 504 in some cases) may be integrated on an RF SoC (sometimes known as a “transceiver,” referred to herein as an “RF chip”) that transmits and receives RF signals with antenna 508. It is understood that in some examples, some or all of the host chip, baseband chip, and RF chip may be integrated as a single SoC. For example, a baseband chip and an RF chip may be integrated into a single SoC that manages all the radio functions for cellular communication.
[0044] Referring back to FIG. 1, in some embodiments, user equipment 102 may include a Tx scheduler that services a set of transmission command queues using a credit-based scheduling technique that enables the preparation of at least one symbol of each UL transmission by the encoding due time at the PHY subsystem. In some embodiments, the credit-based scheduling technique of user equipment 102 may use a fixed credit size. For example, the credit size (e.g., how many bytes will be processed) is fixed for all CCs. Here, the Tx scheduler may service all CCs equally in terms of processed data bytes, which may result in the same data processing rate for each CC. In another embodiment, the credit-based scheduling technique of user equipment 102 may use a symbol-sliced credit to reduce the first symbol preparation time. For example, the credit size here may be different for each CC and proportional to the data size (e.g., symbol size) transmitted in one OFDM symbol of the CC. As such, the number of cycles required for processing one symbol of data may be equal for all CCs. Hence, the first symbol of data of all CCs can be ready in the MAC inline buffer as early as possible, which reduces the risk of missing the encoding due time of the PHY subsystem. In yet another embodiment, the credit-based scheduling technique of user equipment 102 may use a time-sliced credit for optimal inline buffer usage. For example, the credit size here may be proportional to the data transmission rate (e.g., dequeue rate of the PHY subsystem) of the CC, which is typically the dequeue rate of the PHY subsystem. With matched enqueue and dequeue rates, the size of the inline buffer associated with each CC may be optimized. Additional details of the credit-based scheduling technique are provided below in connection with FIGs. 2, 3 A, 3B, 3C, 3D, 4A, and 4B.
[0045] FIG. 2 illustrates a block diagram of an apparatus 200 including a baseband chip
202, an RF chip 204, and a host chip 206, according to some embodiments of the present disclosure. Apparatus 200 may be implemented as user equipment 102 of wireless network 100 in FIG. 1. As shown in FIG. 2, apparatus 200 may include baseband chip 202, RF chip 204, host chip 206, and one or more antennas 210. In some embodiments, baseband chip 202 is implemented by processor 502 and memory 504, and RF chip 204 is implemented by processor 502, memory 504, and transceiver 506, as described above with respect to FIG. 5. Besides the on-chip memory 218 (also known as “internal memory,” e.g., registers, buffers, or caches) on each chip 202, 204, or 206, apparatus 200 may further include an external memory 208 (e.g., the system memory or main memory) that can be shared by each chip 202, 204, or 206 through the system/main bus. Although baseband chip 202 is illustrated as a standalone SoC in FIG. 2, it is understood that in one example, baseband chip 202 and RF chip 204 may be integrated as one SoC; in another example, baseband chip 202 and host chip 206 may be integrated as one SoC; in still another example, baseband chip 202, RF chip 204, and host chip 206 may be integrated as one SoC, as described above.
[0046] In the uplink, host chip 206 may generate raw data and send it to baseband chip 202 for encoding, modulation, and mapping. Interface 214 of baseband chip 202 may receive the data from host chip 206. Baseband chip 202 may also access the raw data generated by host chip 206 and stored in external memory 208, for example, using the direct memory access (DMA). Baseband chip 202 may first encode (e.g., by source coding and/or channel coding) the raw data and modulate the coded data using any suitable modulation techniques, such as multi-phase shift keying (MPSK) modulation or quadrature amplitude modulation (QAM). Baseband chip 202 may perform any other functions, such as symbol or layer mapping, to convert the raw data into a signal that can be used to modulate the carrier frequency for transmission. In the uplink, baseband chip 202 may send the modulated signal to RF chip 204 via interface 214. RF chip 204, through the transmitter, may convert the modulated signal in the digital form into analog signals, i.e., RF signals, and perform any suitable front-end RF functions, such as filtering, digital pre-distortion, up-conversion, or sample-rate conversion. Antenna 210 (e.g., an antenna array) may transmit the RF signals provided by the transmitter of RF chip 204.
[0047] In the downlink, antenna 210 may receive RF signals from an access node or other wireless device. The RF signals may be passed to the receiver (Rx) of RF chip 204. RF chip 204 may perform any suitable front-end RF functions, such as filtering, IQ imbalance compensation, down-conversion, or sample-rate conversion, and convert the RF signals (e.g., transmission) into low-frequency digital signals (baseband signals) that can be processed by baseband chip 202.
[0048] As seen in FIG. 2, baseband chip 202 may include a Tx scheduler 240 that services a set of Tx command queues 230 using a credit-based scheduling technique (e.g., configured by uC 220) that enables the preparation of at least one symbol of each UL transmission by Layer 2 circuit 250. In so doing, the data packet may arrive in the MAC inline buffer 260 by the encoding due time of the PHY subsystem 270. In some embodiments, the credit-based scheduling technique of baseband chip 202 may use a fixed credit size. For example, the credit size (e.g., how many bytes will be processed) is fixed for all CCs. Here, the Tx scheduler services all CCs equally in terms of processed data bytes, which may result in the same data processing rate for all CCs. In another embodiment, the credit-based scheduling technique of baseband chip 202 may use a symbol-sliced credit to reduce the first symbol preparation time. For example, the credit size here may be different for each CC and proportional to the data size (e.g., symbol size) transmitted in one OFDM symbol of the CC. As such, the number of cycles required for processing one symbol of data is made equal for all CCs. Hence, the first symbol of data of all CCs can be ready in the MAC inline buffer 260 as early as possible, which may reduce the risk of the encoding due time being missed. In yet another embodiment, the credit-based scheduling technique of baseband chip 202 may use a time-sliced credit for optimal inline buffer usage. For example, the credit size here may be proportional to the data transmission rate (e.g., dequeue rate of the PHY subsystem 270) of the CC, which is typically the dequeue rate of the PHY subsystem 270. With matched enqueue and dequeue rates, the size of the MAC inline buffer 260 associated with each CC may be optimized. Additional details of the credit-based scheduling technique are provided below in connection with FIGs. 3A, 3B, 3C, 3D, 4A, and 4B.
[0049] FIG. 3 A illustrates a detailed block diagram of the exemplary baseband chip 202 of
FIG. 2, according to some embodiments of the present disclosure. FIG. 3B illustrates a flow diagram of a first exemplary credit-based scheduling technique 325 of the baseband chip 202 of FIG. 3A, according to some embodiments of the present disclosure. FIG. 3C illustrates a flow diagram of a second exemplary credit-based scheduling technique 350 of the baseband chip 202 of FIG. 3A, according to some embodiments of the present disclosure. FIG. 3D illustrates a flow diagram of a third exemplary credit-based scheduling technique 375 of the baseband chip 202 of FIG. 3 A, according to some embodiments of the present disclosure. FIGs. 3 A-3D will be described together.
[0050] Referring to FIG. 3A, for each UL grant indication from the PHY subsystem 270, uC 220 may generate one or more packet descriptors 301 based on the corresponding UL grant. Packet descriptors 301 may vary depending upon the grant size. uC 220 may push packet descriptors 301 into the corresponding Tx command queue 230. Each Tx command queue 230 may correspond to a different CC (e.g., CC0, CC1, CC2, CC3, CC4, etc.). Packet descriptor 301 may indicate the size of the data packet 303, the address of data packet 303 in the packet buffer 302, as well as the PDCP, RLC, and MAC header information used by Layer 2 circuit 250 to construct the MAC SDUs, which are the data packets stored in MAC inline buffer 260.
[0051] Tx scheduler 240 (shown in FIG. 3A as “Tx command queue (CmdQ) scheduler
240”) may manage the order in which Tx command queues 230 are scheduled. For example, Tx scheduler 240 may service a Tx command queue 230 until one of the following conditions is met: 1) all the packet descriptors 301 (also referred to herein as “commands”) in a Tx command queue 230 have been processed, 2) the total size of processed data associated with the Tx command queue 230 surpasses the maximum credit size of a cycle of Layer 2 processing, or 3) there is no free space in MAC inline buffer 260 available for the CC associated with the Tx command queue 230. The credit size can be statically or dynamically configured by uC 220, as described below in connection with FIG. 4B. Each Tx command queue 230 may be configured with different credit size values.
[0052] Thus, for each cycle of Layer 2 processing, Tx scheduler 240 may select a packet descriptor 301 to service and send information associated with the packet descriptor 301 to Layer 2 circuit 250. Layer 2 circuit 250 may fetch a data packet 303 from packet buffer 302 and perform Layer 2 processing, such as PDCP processing, RLC processing, MAC processing, etc., based on the information of packet descriptor 301. Once processed, Layer 2 circuit 250 may store the processed data packet 303 in MAC inline buffer 260. At encoding due time, PHY subsystem 270 may dequeue a data packet 303 from MAC inline buffer 260 and prepare it for transmission. In some embodiments, PHY subsystem 270 may dequeue data packets 303 on a code block (CB) or symbol basis, which may enable pipelined processing such that the size of MAC inline buffer 260 is optimized, while minimizing latency. Thus, baseband chip 202 prepares data packets 303 for dequeuing such that at least one symbol of data is located in MAC inline buffer 260 by the encoding due time of PHY subsystem 270. This may be implemented by Tx scheduler 240 using one of the example credit-based scheduling techniques described below.
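The three stop conditions of paragraph [0051] can be captured in a single predicate, sketched below with illustrative argument names; the real decision is made by Tx scheduler 240 in hardware.

```python
def keep_servicing(cmd_queue, processed_bytes, max_credit_bytes, buffer_free_bytes):
    """Return True while the current Tx command queue may keep being serviced."""
    if not cmd_queue:                            # 1) all commands processed
        return False
    if processed_bytes >= max_credit_bytes:      # 2) per-cycle credit exhausted
        return False
    if buffer_free_bytes <= 0:                   # 3) no MAC inline buffer space left for this CC
        return False
    return True
```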
[0053] For example, in some embodiments, Tx scheduler 240 may implement the credit- based scheduling technique using a fixed credit size. In this example embodiment, the credit size (e.g., how many bytes will be processed) is fixed for all CCs. Here, Tx scheduler 240 may service all CCs equally in terms of processed data bytes, resulting in the same data processing rate for all CCs. An example of this embodiment is depicted in FIG. 3B for three CCs.
[0054] Referring to FIG. 3B, the size of an OFDM symbol is assumed different for each of the three CCs. The number of cycles (also referred to as “rounds”) used to transfer one data symbol data is two for CC0, one for CC1, and three for CC2. Therefore, Tx scheduler 240 implements three cycles of Layer 2 processing before the first data symbol for CC2 in MAC inline buffer 260, by which time CC0 and CC1 already have one-and-a-half and three symbols ready MAC inline buffer 260, respectively. uC 220 may determine the fixed credit size for each CC during cell establishment or re-establishment. In some examples, uC 220 may determine the fixed credit size based on, e.g., the packet size indicated by the UL grant.
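A small arithmetic check of the fixed-credit example of FIG. 3B, using assumed byte sizes chosen only so that the symbol sizes of CC0, CC1, and CC2 correspond to two, one, and three rounds of the fixed credit, respectively:

```python
import math

fixed_credit_bytes = 500                                  # assumed fixed credit per cycle
symbol_bytes = {"CC0": 1000, "CC1": 500, "CC2": 1500}     # assumed OFDM symbol sizes

rounds_to_first_symbol = {
    cc: math.ceil(size / fixed_credit_bytes) for cc, size in symbol_bytes.items()
}
print(rounds_to_first_symbol)   # {'CC0': 2, 'CC1': 1, 'CC2': 3}
```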
[0055] In another embodiment, Tx scheduler 240 may implement the credit-based scheduling technique using a symbol-sliced credit that may reduce the first symbol preparation time, as compared to the embodiment described above in connection with FIG. 3B. In this example, the credit size may be different for each CC and proportional to the symbol size associated with the CC. As such, the number of cycles required for processing one symbol of data is equal for all CCs. Hence, the first symbol of data of all CCs may be ready in the MAC inline buffer 260 as early as possible, which reduces the chance of missing the encoding due time of PHY subsystem 270. An example of this embodiment is depicted in FIG. 3C for three CCs.
[0056] Referring to FIG. 3C, the credit size is set as half of the OFDM symbol for each CC.
Thus, by the time CC2 has its first symbol ready in MAC inline buffer 260, CC0 and CC1 also have one symbol ready in MAC inline buffer 260. uC 220 can calculate the credit size for each UL grant during run time using, e.g., Equation (1): creditSize = grantSize / numberSyms / K (1), where grantSize indicates the size of the data packet scheduled by the UL grant, numberSyms is the number of symbols that make up the data packet, and K is a positive integer.
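A minimal sketch of the Equation (1) calculation follows; the function name, the use of integer division, and the example numbers are assumptions for illustration only:

```python
def symbol_sliced_credit(grant_size_bytes: int, number_syms: int, k: int = 2) -> int:
    """creditSize = grantSize / numberSyms / K, i.e., a 1/K slice of one symbol per cycle."""
    return grant_size_bytes // number_syms // k

# A hypothetical 14-symbol grant of 28,000 bytes with K = 2 yields a 1,000-byte credit,
# i.e., half an OFDM symbol per cycle as in FIG. 3C.
print(symbol_sliced_credit(28_000, 14, k=2))   # -> 1000
```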
[0057] In yet another embodiment, Tx scheduler 240 may implement the credit-based scheduling technique using a time-sliced credit for optimal MAC inline buffer 260 usage. For example, the credit size in this embodiment may be proportional to the data transmission rate of the CC, which is typically the dequeue rate of the PHY subsystem 270. With matched enqueue and dequeue rates, the size of MAC inline buffer 260 may be optimized for each CC. An example of this embodiment is depicted in FIG. 3D for three CCs.
[0058] Referring to FIG. 3D, CC0 and CC2 are assumed to dequeue one symbol of data within the time that CC1 dequeues two symbols of data. To match the dequeue rate of CC1, the credit size is set to half of a symbol for CC0 and CC2, and one symbol for CC1 in this example. Hence, by the time the first symbol for CC2 is ready in MAC inline buffer 260, CC0 and CC1 have one symbol and two symbols ready in MAC inline buffer 260, respectively. In this example, uC 220 may determine the credit size using Equation (2): creditSize = T_slice × grantSize / numberSyms / T_SYM (2), where grantSize indicates the size of the data packet scheduled by the UL grant, numberSyms is the number of symbols that make up the data packet, 0 < T_slice < 1, and T_SYM may be the symbol timing, which may be determined based on the subcarrier spacing and cyclic prefix length used for the CC. [0059] FIG. 4A illustrates a flow chart of a first exemplary method 400 for UL Layer 2 data processing, according to some embodiments of the present disclosure. Exemplary method 400 may be performed by an apparatus for wireless communication, e.g., such as user equipment 102, apparatus 200, baseband chip 202, uC 220, Tx command queues 230, Tx scheduler 240, Layer 2 circuit 250, MAC inline buffer 260, PHY subsystem 270, packet buffer 302, and/or node 500. Method 400 may include steps 402-422 as described below. It is to be appreciated that some of the steps may be optional, and some of the steps may be performed simultaneously, or in a different order than shown in FIG. 4A.
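Before walking through FIG. 4A, a short sketch of the Equation (2) calculation may help. It assumes T_slice and T_SYM are expressed in the same time unit; the function name and all numbers are illustrative rather than taken from the disclosure:

```python
def time_sliced_credit(grant_size_bytes: int, number_syms: int,
                       t_slice: float, t_sym: float) -> int:
    """creditSize = T_slice x grantSize / numberSyms / T_SYM (bytes per cycle)."""
    bytes_per_symbol = grant_size_bytes / number_syms
    return int(bytes_per_symbol * t_slice / t_sym)

# Hypothetical numbers in the spirit of FIG. 3D: CC1 dequeues a symbol twice as fast as CC0,
# so with a common T_slice CC0 earns half a symbol of credit and CC1 a full symbol per cycle.
print(time_sliced_credit(28_000, 14, t_slice=0.5, t_sym=1.0))   # CC0 -> 1000 bytes (half symbol)
print(time_sliced_credit(14_000, 14, t_slice=0.5, t_sym=0.5))   # CC1 -> 1000 bytes (one symbol)
```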
[0060] Referring to FIG. 4A, at 402, the apparatus may initialize the byte count for all CCs to zero. For example, referring to FIG. 3A, uC 220 may configure Tx scheduler 240 with a credit-based scheduling technique. Before beginning the credit-based scheduling technique, Tx scheduler 240 may set the byte count to zero for each of the Tx command queues 230. Tx scheduler 240 may use the byte count to keep track of whether the maximum credit size for a CC has been reached during a cycle of Layer 2 processing.
[0061] At 404, the apparatus may begin the credit-based scheduling technique for one of the Tx command queues. For example, referring to FIG. 3A, Tx scheduler 240 may begin the credit-based scheduling technique for the Tx command queue 230 of CC0.
[0062] At 406, the apparatus may determine whether the Tx command queue 230 is empty.
For example, referring to FIG. 3A, Tx scheduler 240 may determine whether the Tx command queue for CC0 is empty. In response to determining that the Tx command queue is not empty, the operations may move to 408. Otherwise, in response to determining that the Tx command queue is empty, the operations may move to 420.
[0063] At 408, the apparatus may check the packet size indicated by the packet descriptor in the Tx command queue. For example, referring to FIG. 3A, Tx scheduler 240 may check the packet size indicated by the first packet descriptor 301 in the Tx command queue for CC0.
[0064] At 410, the apparatus may determine whether the MAC inline buffer associated with that CC has enough space to accommodate a packet of the size indicated by the packet descriptor. For example, referring to FIG. 3A, Tx scheduler 240 may determine whether the MAC inline buffer 260 for CC0 has enough space to accommodate a data packet 303 of the size indicated by the first packet descriptor 301 from Tx command queue 230 of CC0. In response to determining that the MAC inline buffer does have enough space, the operations may move to 412. Otherwise, in response to determining that the MAC inline buffer does not have enough space, the operations may move to 420.
[0065] At 412, the apparatus may determine whether the byte count associated with this
CC and cycle of Layer 2 processing is less than the maximum credit size associated with the credit-based scheduling technique. For example, referring to FIG. 3A, Tx scheduler 240 may determine whether the byte count for CC0 during the first cycle of Layer 2 processing is less than the maximum credit size. In response to determining that the byte count is less than the maximum credit size, the operations may move to 414. Otherwise, in response to determining that the byte count is greater than or equal to the maximum credit size, the operations may move to 418.
[0066] At 414, the apparatus may service the first packet descriptor in the Tx command queue. For example, referring to FIG. 3A, Tx scheduler 240 may determine the information (e.g., packet size, packet location in packet buffer 302, etc.) included in the first packet descriptor 301 and send this information to Layer 2 circuit 250. Layer 2 circuit 250 may fetch a data packet 303 from packet buffer 302 and perform Layer 2 processing of the data packet 303. Once processed, Layer 2 circuit 250 may store the data packet 303 in MAC inline buffer 260.
[0067] At 416, the apparatus may increase the byte count by the number of bytes associated with the data packet that was serviced. For example, referring to FIG. 3A, assuming the data packet 303 has a byte count of 500 bytes and the maximum credit size is 1000 bytes, Tx scheduler 240 may increase the byte count from 0 bytes to 500 bytes, which is still less than the maximum credit size. Once the byte count has been increased, the operations may return to 406.
[0068] At 406, the apparatus may determine whether the Tx command queue for the same
CC that was just serviced is now empty. If not, the apparatus may determine (at 408) whether the subsequent packet descriptor indicates a byte size that would exceed the maximum credit size for that cycle of Layer 2 processing. For example, referring to FIG. 3A, assuming the byte count is 500 bytes after the first data packet was serviced, the maximum credit size is 1000 bytes, and the data size indicated by the second packet descriptor 301 is 700 bytes, Tx scheduler 240 may still service the second packet descriptor 301 such that the data packet of 700 bytes is processed by Layer 2 circuit 250. Now the first cycle of Layer 2 processing of CC0 is complete based on the credit-based scheduling technique. At 418, the apparatus may decrease the byte count associated with a subsequent cycle of Layer 2 processing for that CC by the amount by which the maximum credit size was exceeded by processing the second data packet. For example, referring to FIG. 3A and continuing the same example, Tx scheduler 240 would set the byte count for the second cycle of Layer 2 processing for CC0 to 200 bytes or decrease the maximum credit size for the second cycle for CC0 to 800 bytes.
[0069] In instances when the operations move to 420, the apparatus may set the byte count to zero. Then, at 422, the apparatus may move to the next Tx command queue. For example, referring to FIG. 3A, Tx scheduler 240 may move to the Tx command queue 230 associated with CC1 after performing the credit-based scheduling technique for CC0.
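To tie steps 402-422 together, here is a compact Python sketch of one scheduling pass over a single Tx command queue. It is a simplified reading of FIG. 4A rather than the disclosed implementation: packet descriptors are reduced to their byte sizes, Layer 2 processing is abstracted into a callback, and all names are illustrative.

```python
from collections import deque
from typing import Callable, Deque, Dict

def run_credit_cycle(cc: str,
                     queues: Dict[str, Deque[int]],          # pending packet sizes per CC
                     byte_count: Dict[str, int],             # per-CC byte count (402 starts it at 0)
                     max_credit_size: int,
                     inline_buffer_free: Dict[str, int],     # free MAC inline buffer bytes per CC
                     process_packet: Callable[[str, int], None]) -> None:
    """One scheduling pass over a single Tx command queue, following FIG. 4A."""
    queue = queues[cc]
    while queue:                                      # 406: queue not empty
        packet_size = queue[0]                        # 408: check size of next descriptor
        if inline_buffer_free[cc] < packet_size:      # 410: inline buffer cannot take the packet
            break                                     #      -> falls through to 420 below
        if byte_count[cc] >= max_credit_size:         # 412: credit for this cycle is used up
            byte_count[cc] -= max_credit_size         # 418: carry the overshoot to the next cycle
            return                                    # 422: caller moves to the next CC
        queue.popleft()                               # 414: service the descriptor
        process_packet(cc, packet_size)               #      (Layer 2 processing abstracted away)
        inline_buffer_free[cc] -= packet_size
        byte_count[cc] += packet_size                 # 416: add the packet size to the byte count
    byte_count[cc] = 0                                # 420: queue empty or buffer full
                                                      # 422: caller moves to the next CC

# Example with the numbers used above: 500- and 700-byte packets against a 1000-byte credit.
queues = {"CC0": deque([500, 700, 400])}
byte_count = {"CC0": 0}
inline_free = {"CC0": 10_000}
run_credit_cycle("CC0", queues, byte_count, 1000, inline_free, lambda cc, size: None)
print(byte_count["CC0"])   # -> 200: both packets serviced; 200 bytes carry into CC0's next cycle
```

Note that, as in the example of paragraph [0068], the carry-over at 418 only triggers while another descriptor is still pending; if the queue empties or the buffer fills, the count resets to zero at 420.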
[0070] FIG. 4B illustrates a flow chart of a second exemplary method 425 for UL Layer 2 data processing, according to some embodiments of the present disclosure. Exemplary method 425 may be performed by an apparatus for wireless communication, e.g., such as user equipment 102, apparatus 200, baseband chip 202, uC 220, Tx command queues 230, Tx scheduler 240, Layer 2 circuit 250, MAC inline buffer 260, PHY subsystem 270, packet buffer 302, and/or node 500. Method 425 may include steps 430-442 as described below. It is to be appreciated that some of the steps may be optional, and some of the steps may be performed simultaneously, or in a different order than shown in FIG. 4B.
[0071] At 430, the apparatus may select a first credit-based scheduling technique from a set of credit-based scheduling techniques. For example, referring to FIG. 3A, uC 220 may select the credit-based scheduling technique described above in connection with FIG. 3B.
[0072] At 432, the apparatus may receive a UL grant from a PHY subsystem. For example, referring to FIG. 3A, uC 220 may receive a UL grant from PHY subsystem 270. The UL grant may include information, e.g., such as the grant size (e.g., the byte size of the scheduled data packet), the number of symbols in the data packet (e.g., associated with a CC), the symbol timing (e.g., associated with subcarrier spacing), just to name a few. [0073] At 434, the apparatus may calculate the first symbol ready time for the UL grant.
For example, referring to FIG. 3A, uC 220 may determine the time that the first symbol of the data packet will arrive in MAC inline buffer 260 based on the credit size of the first credit-based scheduling technique and/or the information included in the UL grant.
[0074] At 436, the apparatus may determine whether the first symbol ready time is greater than a due time threshold associated with the encoding due time of the PHY subsystem. For example, referring to FIG. 3A, uC 220 may determine whether the first symbol will arrive in MAC inline buffer 260 before the encoding due time threshold. The encoding due time threshold may be the encoding due time of PHY subsystem 270 or within a window of time prior to the encoding due time. In response to determining that the first symbol ready time is greater than the encoding due time threshold, the operations may move to 438. Otherwise, in response to determining that the first symbol ready time is less than or equal to the encoding due time threshold, the operations may move to 440.
[0075] At 438, the apparatus may update the credit size for each CC based on a second credit-based scheduling technique. For example, referring to FIG. 3A, uC 220 may switch to the credit-based scheduling technique of FIG. 3C and update the associated credit size used by Tx scheduler 240. Using the credit-based scheduling technique of FIG. 3C, the credit size may be different for each CC and proportional to the symbol size associated with the CC. As such, the number of cycles required for processing one symbol of data is equal for all CCs. Hence, by switching to the second credit-based scheduling technique, the first symbol of data of all CCs may be ready in the MAC inline buffer 260 as early as possible, which reduces the risk that the encoding due time of the PHY subsystem 270 is missed.
[0076] At 440, the apparatus may determine whether the amount of free space in the MAC inline buffer is less than a space threshold. For example, referring to FIG. 3A, uC 220 may determine whether the space in MAC inline buffer 260 associated with the CC of the received UL grant is less than a space threshold for that CC. In response to determining that the free space is less than the space threshold, the operations may move to 442. Otherwise, in response to determining that the free space is greater than or equal to the space threshold, the operations may return to 432.
[0077] At 442, the apparatus may update the credit size for each CC based on a third credit-based scheduling technique. For example, referring to FIG. 3A, uC 220 may switch to the credit-based scheduling technique of FIG. 3D and update the associated credit size used by Tx scheduler 240 when the free space is less than the space threshold. Using the credit-based scheduling technique of FIG. 3D, a time-sliced credit for optimal MAC inline buffer 260 usage may be implemented by Tx scheduler 240. For example, the credit size in this embodiment may be proportional to the data transmission rate of the CC, which is typically the dequeue rate of the PHY subsystem 270. With matched enqueue and dequeue rates, the size of MAC inline buffer 260 may be optimized for each CC.
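Steps 430-442 amount to a small decision function that the uC may re-run for each UL grant. The sketch below is a hypothetical condensation of FIG. 4B; the threshold values, units, and names are assumptions, and the three return values stand for the techniques of FIGs. 3B, 3C, and 3D.

```python
FIXED_CREDIT, SYMBOL_SLICED, TIME_SLICED = "FIG. 3B", "FIG. 3C", "FIG. 3D"

def select_scheduling_technique(first_symbol_ready_time: float,
                                due_time_threshold: float,
                                inline_buffer_free_bytes: int,
                                space_threshold_bytes: int,
                                current_technique: str = FIXED_CREDIT) -> str:
    """Re-evaluate the credit-based scheduling technique after a UL grant (steps 434-442)."""
    if first_symbol_ready_time > due_time_threshold:          # 436 -> 438
        return SYMBOL_SLICED                                  # earliest first-symbol readiness
    if inline_buffer_free_bytes < space_threshold_bytes:      # 440 -> 442
        return TIME_SLICED                                    # match enqueue rate to dequeue rate
    return current_technique                                  # otherwise keep the current credit size

# Example: the first symbol would be ready after the due time threshold, so the
# symbol-sliced credit of FIG. 3C is selected for the next cycle.
print(select_scheduling_technique(1.2, 1.0, 8_000, 4_000))    # -> "FIG. 3C"
```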
[0078] Using the above-described credit-based scheduling techniques of FIGs. 1-5, various advantages over conventional approaches may be realized. For example, the credit-based scheduling technique of the present disclosure optimizes the usage of Layer 2 processing resources when UL data packets are prepared concurrently for multiple CCs. Moreover, the present techniques optimize Layer 2 processing time to expedite UL data packet preparation for multiple CCs. Still further, using the present credit-based scheduling techniques, latency uncertainties associated with packet arrival and the encoding due time may be eliminated, even when preparing UL data packets for multiple CC grants.
[0079] In various aspects of the present disclosure, the functions described herein may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or encoded as instructions or code on a non-transitory computer-readable medium. Computer-readable media includes computer storage media. Storage media may be any available media that can be accessed by a computing device, such as node 500 in FIG. 5. By way of example, and not limitation, such computer-readable media can include RAM, ROM, EEPROM, CD-ROM or other optical disk storage, HDD, such as magnetic disk storage or other magnetic storage devices, Flash drive, SSD, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a processing system, such as a mobile device or a computer. Disk and disc, as used herein, includes CD, laser disc, optical disc, DVD, and floppy disk where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
[0080] According to one aspect of the disclosure, a baseband chip is provided. The baseband chip may include a set of transmission command queues each associated with a different CC and each configured to maintain packet descriptors associated with one of the different CCs. The baseband chip may also include a Layer 2 microcontroller. The Layer 2 microcontroller may be configured to generate the packet descriptors for each of the different CCs based on associated UL grant indicators. The Layer 2 microcontroller may be configured to send each of the packet descriptors to the set of transmission command queues based on CC. The Layer 2 microcontroller may be configured to select a credit-based scheduling mechanism from a set of credit-based scheduling mechanisms. The Layer 2 microcontroller may be configured to configure a transmission scheduler with the credit-based scheduling mechanism.
[0081] In some embodiments, the set of credit-based scheduling mechanisms may be associated with a first credit size fixed for each of the different CCs, a second credit size that is proportional to a symbol size associated with each of the different CCs, or a third credit size that is proportional to a data transmission rate associated with each of the different CCs.
[0082] In some embodiments, the first credit size, the second credit size, and the third credit size may each be associated with a number of bytes to be processed by a Layer 2 circuit.
[0083] In some embodiments, the credit-based scheduling mechanism may indicate a maximum credit size for at least one cycle of Layer 2 processing by a Layer 2 circuit.
[0084] In some embodiments, based on the credit-based scheduling mechanism configured by the Layer 2 microcontroller, the transmission scheduler is configured to, in response to determining that a first packet size associated with a first packet descriptor of a first transmission command queue is less than a byte count associated with the maximum credit size and that an inline buffer threshold has not been reached, service the first packet descriptor. In some embodiments, based on the credit-based scheduling mechanism configured by the Layer 2 microcontroller, the transmission scheduler is configured to, in response to determining that a first packet size associated with a first packet descriptor of a first transmission command queue is less than a byte count associated with the maximum credit size and that an inline buffer threshold has not been reached, increase the byte count associated with the maximum credit size by the first packet size.
[0085] In some embodiments, the Layer 2 circuit may be configured to receive the first packet descriptor from the transmission scheduler after the servicing. In some embodiments, the Layer 2 circuit may be configured to obtain a packet from a packet buffer based on the first packet descriptor. In some embodiments, the Layer 2 circuit may be configured to perform Layer 2 processing of the packet to generate a Layer 2 packet. In some embodiments, the Layer 2 circuit may be configured to send the Layer 2 packet to an inline buffer queue of an inline buffer.
[0086] In some embodiments, the transmission scheduler may be further configured to, in response to determining that a second packet size associated with a second packet descriptor of the first transmission command queue is less than the byte count associated with the maximum credit size and that the inline buffer threshold has not been reached, service the second packet descriptor. In some embodiments, the transmission scheduler may be further configured to, in response to determining that a second packet size associated with a second packet descriptor of the first transmission command queue is less than the byte count associated with the maximum credit size and that the inline buffer threshold has not been reached, increase the byte count associated with the maximum credit size by the second packet size.
[0087] In some embodiments, the transmission scheduler may be further configured to, in response to determining that the first packet size associated with the first packet descriptor of the first transmission command queue exceeds the maximum credit size during a first cycle of Layer 2 processing, service the first packet descriptor. In some embodiments, the transmission scheduler may be further configured to, in response to determining that the first packet size associated with the first packet descriptor of the first transmission command queue exceeds the maximum credit size during a first cycle of Layer 2 processing, decrease a second byte count associated with the first transmission command queue during a second cycle of Layer 2 processing based on an amount by which the maximum credit size is exceeded by the first packet size associated with the first packet descriptor during the first cycle of Layer 2 processing.
[0088] In some embodiments, the transmission scheduler may be further configured to, in response to determining that a second packet size associated with a second packet descriptor of a second transmission command queue is less than the byte count associated with the maximum credit size and that the inline buffer threshold has not been reached, service the second packet descriptor. In some embodiments, the transmission scheduler may be further configured to, in response to determining that a second packet size associated with a second packet descriptor of a second transmission command queue is less than the byte count associated with the maximum credit size and that the inline buffer threshold has not been reached, increase the byte count associated with the maximum credit size by the second packet size.
[0089] In some embodiments, the transmission scheduler may be further configured to, in response to determining that the inline buffer threshold has been reached, set the second byte count associated with the first transmission command queue during the second cycle of Layer 2 processing to zero. In some embodiments, the transmission scheduler may be further configured to, in response to determining that the inline buffer threshold has been reached, retrieve a third packet descriptor from the second transmission command queue during the first cycle of Layer 2 processing.
[0090] In some embodiments, the Layer 2 microcontroller may be configured to configure the transmission scheduler with the credit-based scheduling mechanism by configuring a first credit size associated with a first credit-based scheduling mechanism. In some embodiments, the Layer 2 microcontroller may be configured to configure the transmission scheduler with the credit-based scheduling mechanism by, in response to determining that a first symbol ready time is greater than a due time threshold, updating the first credit size to a second credit size and implementing a second credit-based scheduling mechanism. In some embodiments, the Layer 2 microcontroller may be configured to configure the transmission scheduler with the credit-based scheduling mechanism by, in response to determining that the first symbol ready time is less than the due time threshold and to determining that an amount of free space in an inline buffer is less than an inline buffer threshold, updating the first credit size to a third credit size and implementing a third credit-based scheduling mechanism.
[0091] In another aspect of the disclosure, a baseband chip is provided. The baseband chip may include a set of transmission command queues each associated with a different CC and each configured to maintain packet descriptors associated with one of the different CCs. The baseband chip may further include a transmission scheduler. The transmission scheduler may be configured to receive configuration information associated with a credit-based scheduling mechanism from a Layer 2 microcontroller. The transmission scheduler may be configured to, in response to determining that a first packet size associated with a first packet descriptor of a first transmission command queue is less than a byte count associated with the maximum credit size and that an inline buffer threshold has not been reached, service the first packet descriptor. The transmission scheduler may be configured to, in response to determining that a first packet size associated with a first packet descriptor of a first transmission command queue is less than a byte count associated with the maximum credit size and that an inline buffer threshold has not been reached, increase the byte count associated with the maximum credit size by the first packet size.
[0092] In some embodiments, the transmission scheduler may be further configured to, in response to determining that a second packet size associated with a second packet descriptor of the first transmission command queue is less than the byte count associated with the maximum credit size and that the inline buffer threshold has not been reached, service the second packet descriptor. In some embodiments, the transmission scheduler may be further configured to, in response to determining that a second packet size associated with a second packet descriptor of the first transmission command queue is less than the byte count associated with the maximum credit size and that the inline buffer threshold has not been reached, increase the byte count associated with the maximum credit size by the second packet size.
[0093] In some embodiments, the transmission scheduler may be further configured to, in response to determining that the first packet size associated with the first packet descriptor of the first transmission command queue exceeds the maximum credit size during a first cycle of Layer 2 processing, service the first packet descriptor. In some embodiments, the transmission scheduler may be further configured to, in response to determining that the first packet size associated with the first packet descriptor of the first transmission command queue exceeds the maximum credit size during a first cycle of Layer 2 processing, decrease a second byte count associated with the first transmission command queue during a second cycle of Layer 2 processing based on an amount by which the maximum credit size is exceeded by the first packet size associated with the first packet descriptor during the first cycle of Layer 2 processing.
[0094] In some embodiments, the transmission scheduler may be further configured to, in response to determining that a second packet size associated with a second packet descriptor of a second transmission command queue is less than the byte count associated with the maximum credit size and that the inline buffer threshold has not been reached, service the second packet descriptor. In some embodiments, the transmission scheduler may be further configured to, in response to determining that a second packet size associated with a second packet descriptor of a second transmission command queue is less than the byte count associated with the maximum credit size and that the inline buffer threshold has not been reached, increase the byte count associated with the maximum credit size by the second packet size.
[0095] In some embodiments, the transmission scheduler may be further configured to, in response to determining that the inline buffer threshold has been reached, set the second byte count associated with the first transmission command queue during the second cycle of Layer 2 processing to zero. In some embodiments, the transmission scheduler may be further configured to, in response to determining that the inline buffer threshold has been reached, retrieve a third packet descriptor from the second transmission command queue during the first cycle of Layer 2 processing.
[0096] In some embodiments, the Layer 2 microcontroller may be configured to implement a first credit size associated with a first credit-based scheduling mechanism. In some embodiments, the Layer 2 microcontroller may be further configured to, in response to determining that a first symbol ready time is greater than a due time threshold, update the first credit size to a second credit size and implement a second credit-based scheduling mechanism. In some embodiments, the Layer 2 microcontroller may be further configured to, in response to determining that the first symbol ready time is less than the due time threshold and to determining that an amount of free space in an inline buffer is less than an inline buffer threshold, update the first credit size to a third credit size and implement a third credit-based scheduling mechanism. [0097] In yet another aspect of the disclosure, a method of wireless communication of a transmission scheduler is provided. The method may include receiving configuration information of a credit-based scheduling mechanism from a Layer 2 microcontroller. The method may include, in response to determining that a first packet size associated with a first packet descriptor of a first transmission command queue is less than a byte count associated with a maximum credit size and that an inline buffer threshold has not been reached, servicing the first packet descriptor. The method may include increasing the byte count associated with the maximum credit size by the first packet size. The method may include, in response to determining that a second packet size associated with a second packet descriptor of the first transmission command queue is less than the byte count associated with the maximum credit size and that the inline buffer threshold has not been reached, servicing the second packet descriptor. The method may include, in response to determining that a second packet size associated with a second packet descriptor of the first transmission command queue is less than the byte count associated with the maximum credit size and that the inline buffer threshold has not been reached, increasing the byte count associated with the maximum credit size by the second packet size. The method may include, in response to determining that the first packet size associated with the first packet descriptor of the first transmission command queue exceeds the maximum credit size during a first cycle of Layer 2 processing, servicing the first packet descriptor. The method may include, in response to determining that the first packet size associated with the first packet descriptor of the first transmission command queue exceeds the maximum credit size during a first cycle of Layer 2 processing, decreasing a second byte count associated with the first transmission command queue during a second cycle of Layer 2 processing based on an amount by which the maximum credit size is exceeded by the first packet size associated with the first packet descriptor during the first cycle of Layer 2 processing.
[0098] The foregoing description of the embodiments will so reveal the general nature of the present disclosure that others can, by applying knowledge within the skill of the art, readily modify and/or adapt for various applications such embodiments, without undue experimentation, without departing from the general concept of the present disclosure. Therefore, such adaptations and modifications are intended to be within the meaning and range of equivalents of the disclosed embodiments, based on the teaching and guidance presented herein. It is to be understood that the phraseology or terminology herein is for the purpose of description and not of limitation, such that the terminology or phraseology of the present specification is to be interpreted by the skilled artisan in light of the teachings and guidance.
[0099] Embodiments of the present disclosure have been described above with the aid of functional building blocks illustrating the implementation of specified functions and relationships thereof. The boundaries of these functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternate boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed.
[0100] The Summary and Abstract sections may set forth one or more but not all exemplary embodiments of the present disclosure as contemplated by the inventor(s), and thus, are not intended to limit the present disclosure and the appended claims in any way.
[0101] Various functional blocks, modules, and steps are disclosed above. The arrangements provided are illustrative and without limitation. Accordingly, the functional blocks, modules, and steps may be re-ordered or combined in different ways than in the examples provided above. Likewise, certain embodiments include only a subset of the functional blocks, modules, and steps, and any such subset is permitted.
[0102] The breadth and scope of the present disclosure should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.

Claims

WHAT IS CLAIMED IS:
1. A baseband chip, comprising: a set of transmission command queues each associated with a different component carrier (CC) and each configured to maintain packet descriptors associated with one of the different component carriers (CCs); and a Layer 2 microcontroller configured to: generate the packet descriptors for each of the different CCs based on associated uplink (UL) grant indicators; send each of the packet descriptors to the set of transmission command queues based on CC; select a credit-based scheduling mechanism from a set of credit-based scheduling mechanisms; and configure a transmission scheduler with the credit-based scheduling mechanism.
2. The baseband chip of claim 1, wherein the set of credit-based scheduling mechanisms are associated with a first credit size fixed for each of the different CCs, a second credit size that is proportional to a symbol size associated with each of the different CCs, or a third credit size that is proportional to a data transmission rate associated with each of the different CCs.
3. The baseband chip of claim 2, wherein the first credit size, the second credit size, and the third credit size are each associated with a number of bytes to be processed by a Layer 2 circuit.
4. The baseband chip of claim 1, wherein the credit-based scheduling mechanism indicates a maximum credit size for at least one cycle of Layer 2 processing by a Layer 2 circuit.
5. The baseband chip of claim 4, wherein the transmission scheduler, based on the credit-based scheduling mechanism configured by the Layer 2 microcontroller, is configured to: in response to determining that a first packet size associated with a first packet descriptor of a first transmission command queue is less than a byte count associated with the maximum credit size and that an inline buffer threshold has not been reached, service the first packet descriptor, and increase the byte count associated with the maximum credit size by the first packet size.
6. The baseband chip of claim 5, wherein the Layer 2 circuit is configured to: receive the first packet descriptor from the transmission scheduler after the servicing; obtain a packet from a packet buffer based on the first packet descriptor; perform Layer 2 processing of the packet to generate a Layer 2 packet; and send the Layer 2 packet to an inline buffer queue of an inline buffer.
7. The baseband chip of claim 5, wherein the transmission scheduler is further configured to: in response to determining that a second packet size associated with a second packet descriptor of the first transmission command queue is less than the byte count associated with the maximum credit size and that the inline buffer threshold has not been reached, service the second packet descriptor, and increase the byte count associated with the maximum credit size by the second packet size.
8. The baseband chip of claim 5, wherein the transmission scheduler is further configured to: in response to determining that the first packet size associated with the first packet descriptor of the first transmission command queue exceeds the maximum credit size during a first cycle of Layer 2 processing, service the first packet descriptor, and decrease a second byte count associated with the first transmission command queue during a second cycle of Layer 2 processing based on an amount by which the maximum credit size is exceeded by the first packet size associated with the first packet descriptor during the first cycle of Layer 2 processing.
9. The baseband chip of claim 8, wherein the transmission scheduler is further configured to: in response to determining that a second packet size associated with a second packet descriptor of a second transmission command queue is less than the byte count associated with the maximum credit size and that the inline buffer threshold has not been reached, service the second packet descriptor, and increase the byte count associated with the maximum credit size by the second packet size.
10. The baseband chip of claim 8, wherein the transmission scheduler is further configured to: in response to determining that the inline buffer threshold has been reached, set the second byte count associated with the first transmission command queue during the second cycle of Layer 2 processing to zero, and retrieve a third packet descriptor from a second transmission command queue during the first cycle of Layer 2 processing.
11. The baseband chip of claim 1, wherein the Layer 2 microcontroller is configured to configure the transmission scheduler with the credit-based scheduling mechanism by: configuring a first credit size associated with a first credit-based scheduling mechanism; in response to determining that a first symbol ready time is greater than a due time threshold, updating the first credit size to a second credit size and implement a second credit-based scheduling mechanism; and in response to determining that the first symbol ready time is less than the due time threshold and to determining that an amount of free space in an inline buffer is less than an inline buffer threshold, updating the first credit size to a third credit size and implement a third credit-based scheduling mechanism.
12. A baseband chip comprising: a set of transmission command queues each associated with a different component carrier (CC) and each configured to maintain packet descriptors associated with one of the different component carriers (CCs); and a transmission scheduler configured to: receive configuration information associated with a credit-based scheduling mechanism from a Layer 2 microcontroller; in response to determining that a first packet size associated with a first packet descriptor of a first transmission command queue is less than a byte count associated with the maximum credit size and that an inline buffer threshold has not been reached, service the first packet descriptor, and increase the byte count associated with the maximum credit size by the first packet size.
13. The baseband chip of claim 12, wherein the transmission scheduler is further configured to: in response to determining that a second packet size associated with a second packet descriptor of the first transmission command queue is less than the byte count associated with the maximum credit size and that the inline buffer threshold has not been reached, service the second packet descriptor, and increase the byte count associated with the maximum credit size by the second packet size.
14. The baseband chip of claim 12, wherein the transmission scheduler is further configured to: in response to determining that the first packet size associated with the first packet descriptor of the first transmission command queue exceeds the maximum credit size during a first cycle of Layer 2 processing, service the first packet descriptor, and decrease a second byte count associated with the first transmission command queue during a second cycle of Layer 2 processing based on an amount by which the maximum credit size is exceeded by the first packet size associated with the first packet descriptor during the first cycle of Layer 2 processing.
15. The baseband chip of claim 14, wherein the transmission scheduler is further configured to: in response to determining that a second packet size associated with a second packet descriptor of a second transmission command queue is less than the byte count associated with the maximum credit size and that the inline buffer threshold has not been reached, service the second packet descriptor, and increase the byte count associated with the maximum credit size by the second packet size.
16. The baseband chip of claim 14, wherein the transmission scheduler is further configured to: in response to determining that the inline buffer threshold has been reached, set the second byte count associated with the first transmission command queue during the second cycle of Layer 2 processing to zero, and retrieve a third packet descriptor from a second transmission command queue during the first cycle of Layer 2 processing.
17. The baseband chip of claim 12, further comprising a Layer 2 circuit configured to: receive the first packet descriptor from the transmission scheduler after the servicing; obtain a packet from a packet buffer based on the first packet descriptor; perform Layer 2 processing of the packet to generate a Layer 2 packet; and send the Layer 2 packet to an inline buffer queue of an inline buffer.
18. The baseband chip of claim 12, wherein the set of credit-based scheduling mechanisms are associated with a first credit size fixed for each of the different CCs, a second credit size that is proportional to a symbol size associated with each of the different CCs, or a third credit size that is proportional to a data transmission rate associated with each of the different CCs.
19. The baseband chip of claim 12, wherein the Layer 2 microcontroller is configured to: implement a first credit size associated with a first credit-based scheduling mechanism; in response to determining that a first symbol ready time is greater than a due time threshold, update the first credit size to a second credit size and implement a second credit-based scheduling mechanism; and in response to determining that the first symbol ready time is less than the due time threshold and to determining that an amount of free space in an inline buffer is less than an inline buffer threshold, update the first credit size to a third credit size and implement a third credit-based scheduling mechanism.
20. A method of wireless communication of a transmission scheduler, comprising: receiving configuration information of a credit-based scheduling mechanism from a Layer 2 microcontroller; in response to determining that a first packet size associated with a first packet descriptor of a first transmission command queue is less than a byte count associated with a maximum credit size and that an inline buffer threshold has not been reached, servicing the first packet descriptor; and increasing the byte count associated with the maximum credit size by the first packet size; in response to determining that a second packet size associated with a second packet descriptor of the first transmission command queue is less than the byte count associated with the maximum credit size and that the inline buffer threshold has not been reached, servicing the second packet descriptor; and increasing the byte count associated with the maximum credit size by the second packet size; in response to determining that the first packet size associated with the first packet descriptor of the first transmission command queue exceeds the maximum credit size during a first cycle of Layer 2 processing, servicing the first packet descriptor; and decreasing a second byte count associated with the first transmission command queue during a second cycle of Layer 2 processing based on an amount by which the maximum credit size is exceeded by the first packet size associated with the first packet descriptor during the first cycle of Layer 2 processing.
PCT/US2021/043576 2021-07-28 2021-07-28 Apparatus and method of credit-based scheduling mechanism for layer 2 transmission scheduler WO2023009117A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/US2021/043576 WO2023009117A1 (en) 2021-07-28 2021-07-28 Apparatus and method of credit-based scheduling mechanism for layer 2 transmission scheduler
CN202180098902.1A CN117643124A (en) 2021-07-28 2021-07-28 Apparatus and method for credit-based scheduling mechanism for layer 2 transmission scheduler

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/US2021/043576 WO2023009117A1 (en) 2021-07-28 2021-07-28 Apparatus and method of credit-based scheduling mechanism for layer 2 transmission scheduler

Publications (2)

Publication Number Publication Date
WO2023009117A1 (en)
WO2023009117A8 WO2023009117A8 (en) 2024-02-01

Family

ID=85088076

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2021/043576 WO2023009117A1 (en) 2021-07-28 2021-07-28 Apparatus and method of credit-based scheduling mechanism for layer 2 transmission scheduler

Country Status (2)

Country Link
CN (1) CN117643124A (en)
WO (1) WO2023009117A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080008135A1 (en) * 2004-12-28 2008-01-10 Toshiyuki Saito Communication Control Method, Wireless Communication System, And Wireless Communication Device
US20100303044A1 (en) * 2009-05-29 2010-12-02 Motorola, Inc. System and method for credit-based channel transmission scheduling (cbcts)
US20180295540A1 (en) * 2017-04-10 2018-10-11 Qualcomm Incorporated Transmission of buffer status reports on multiple component carriers
US20190207737A1 (en) * 2017-12-29 2019-07-04 Comcast Cable Communications, Llc Selection of Grant and CSI
US20200084150A1 (en) * 2018-09-09 2020-03-12 Mellanox Technologies, Ltd. Adjusting rate of outgoing data requests for avoiding incast congestion

Also Published As

Publication number Publication date
WO2023009117A8 (en) 2024-02-01
CN117643124A (en) 2024-03-01

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21952071

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE