WO2024049405A1 - Apparatus and method for two-dimensional scheduling of downlink layer 1 operations - Google Patents


Info

Publication number
WO2024049405A1
Authority
WO
WIPO (PCT)
Prior art keywords
instruction
task
ccs
core
buffer
Prior art date
Application number
PCT/US2022/041864
Other languages
French (fr)
Inventor
Dinesh Dharmaraju
Bao Vuong
Srinivas VYAS
Chenxi Wang
Charles Pandana
Original Assignee
Zeku, Inc.
Application filed by Zeku, Inc. filed Critical Zeku, Inc.
Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W72/00Local resource management
    • H04W72/12Wireless traffic scheduling
    • H04W72/1263Mapping of traffic onto schedule, e.g. scheduled allocation or multiplexing of flows
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L5/00Arrangements affording multiple use of the transmission path
    • H04L5/0001Arrangements for dividing the transmission path
    • H04L5/0003Two-dimensional division
    • H04L5/0005Time-frequency
    • H04L5/0007Time-frequency the frequencies being orthogonal, e.g. OFDM(A), DMT
    • H04L5/001Time-frequency the frequencies being orthogonal, e.g. OFDM(A), DMT the frequencies being arranged in component carriers

Definitions

  • Embodiments of the present disclosure relate to apparatus and method for wireless communication.
  • Wireless communication systems are widely deployed to provide various telecommunication services such as telephony, video, data, messaging, and broadcasts.
  • In cellular communication, such as the 4th-generation (4G) Long Term Evolution (LTE) and the 5th-generation (5G) New Radio (NR), the 3rd Generation Partnership Project (3GPP) defines various mechanisms for scheduling downlink (DL) Layer 1 operations implemented by a baseband chip.
  • a baseband chip may include a first task sequencer (TS) configured to generate a first set of commands for a first set of hardware accelerators associated with at least one first timing group.
  • the baseband chip may include a second TS configured to generate a second set of commands for a second set of hardware accelerators associated with at least one second timing group.
  • the baseband chip may include a microcontroller cluster with a master core, a first core, and a second core.
  • the master core may be configured to identify a first set of component carriers (CCs) as the at least one first timing group and a second set of CCs as the at least one second timing group.
  • the master core may be configured to assign the at least one first timing group to the first core and the at least one second timing group to the second core.
  • the first core may be configured to control first operations performed by the first TS to generate the first set of commands for the first set of hardware accelerators.
  • the second core may be configured to control second operations performed by the second TS to generate the second set of commands for the second set of hardware accelerators.
  • a microcontroller cluster for a baseband chip may include a master core, a first core, and a second core.
  • the master core may be configured to identify a first set of CCs as the at least one first timing group and a second set of CCs as the at least one second timing group.
  • the master core may be configured to assign the at least one first timing group to the first core and the at least one second timing group to the second core.
  • the first core may be configured to control first operations performed by a first TS to generate a first set of commands for the first set of hardware accelerators.
  • the second core may be configured to control second operations performed by a second TS to generate a second set of commands for the second set of hardware accelerators.
  • a method of wireless communication of a baseband chip may include identifying, by a master core of a microcontroller cluster, a first set of CCs as at least one first timing group and a second set of CCs as at least one second timing group.
  • the method may include assigning, by the master core of the microcontroller cluster, the at least one first timing group to the first core and the at least one second timing group to the second core.
  • the method may include controlling, by a first core of the microcontroller cluster, first operations performed by a first TS to generate a first set of commands for a first set of hardware accelerators associated with the at least one first timing group.
  • the method may include controlling, by a second core of the microcontroller cluster, second operations performed by the second TS to generate a second set of commands for a second set of hardware accelerators associated with the at least one second timing group.
  • the method may include generating, by the first TS, a first set of commands for the first set of hardware accelerators associated with the at least one first timing group based on first task instructions from the first core.
  • the method may include generating, by the second TS, a second set of commands for the second set of hardware accelerators associated with the at least one second timing group based on a second set of task instructions from the second core.
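The scheduling flow summarized above can be sketched in software. The Python class and method names below are illustrative stand-ins (the disclosure describes hardware task sequencers and a microcontroller cluster, not this API), and grouping CCs by subcarrier spacing is only one of the timing characteristics the master core may use.

```python
from dataclasses import dataclass, field

@dataclass
class TaskSequencer:
    """Stand-in for a hardware-based task sequencer (TS)."""
    commands: list = field(default_factory=list)

    def generate_commands(self, task_instructions):
        # Expand each task instruction into a command for a hardware accelerator.
        self.commands = [f"cmd:{t}" for t in task_instructions]
        return self.commands

@dataclass
class Core:
    """Stand-in for a uC-cluster core that drives its own TS."""
    sequencer: TaskSequencer
    timing_group: list = field(default_factory=list)

    def control(self):
        # Symbol-level scheduling: issue one task instruction per CC in the group.
        return self.sequencer.generate_commands(f"decode:{cc}" for cc in self.timing_group)

def master_identify_and_assign(ccs, cores):
    """Master-core step: partition CCs into timing groups by a shared timing
    characteristic (here, SCS in kHz) and assign one group per core."""
    groups = {}
    for cc, scs in ccs.items():
        groups.setdefault(scs, []).append(cc)
    for core, group in zip(cores, groups.values()):
        core.timing_group = group
    return groups

cores = [Core(TaskSequencer()), Core(TaskSequencer())]
ccs = {"cc0": 15, "cc1": 15, "cc2": 30}  # CC -> subcarrier spacing (kHz)
master_identify_and_assign(ccs, cores)
print(cores[0].control())  # ['cmd:decode:cc0', 'cmd:decode:cc1']
print(cores[1].control())  # ['cmd:decode:cc2']
```

Each core owns its own sequencer, which is the point of the architecture: the two timing groups are scheduled in parallel without sharing a command path.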
  • FIG. 1 illustrates an exemplary wireless network, according to some embodiments of the present disclosure.
  • FIG. 2 illustrates a block diagram of an exemplary node, according to some embodiments of the present disclosure.
  • FIG. 3 illustrates a block diagram of an exemplary apparatus including a baseband chip, a radio frequency (RF) chip, and a host chip, according to some embodiments of the present disclosure.
  • FIG. 4 illustrates an exemplary two-dimensional scheduling diagram for DL Layer 1 operations implemented by the baseband chip of FIG. 3, according to some embodiments of the present disclosure.
  • FIG. 5 is a flowchart of a first method of wireless communication, according to some embodiments of the present disclosure.
  • references in the specification to “one embodiment,” “an embodiment,” “an example embodiment,” “some embodiments,” “certain embodiments,” etc. indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases do not necessarily refer to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it would be within the knowledge of a person skilled in the pertinent art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
  • terminology may be understood at least in part from usage in context.
  • the term “one or more” as used herein, depending at least in part upon context may be used to describe any feature, structure, or characteristic in a singular sense or may be used to describe combinations of features, structures or characteristics in a plural sense.
  • terms, such as “a,” “an,” or “the,” again, may be understood to convey a singular usage or to convey a plural usage, depending at least in part upon context.
  • the term “based on” may be understood as not necessarily intended to convey an exclusive set of factors and may, instead, allow for existence of additional factors not necessarily expressly described, again, depending at least in part on context.
  • The techniques described herein may be used for various wireless communication networks, such as code division multiple access (CDMA), time division multiple access (TDMA), frequency division multiple access (FDMA), orthogonal frequency division multiple access (OFDMA), and single-carrier frequency division multiple access (SC-FDMA) networks, as well as wireless local area network (WLAN) systems.
  • a CDMA network may implement a radio access technology (RAT), such as Universal Terrestrial Radio Access (UTRA), evolved UTRA (E-UTRA), CDMA 2000, etc.
  • A TDMA network may implement a RAT, such as Global System for Mobile Communications (GSM).
  • An OFDMA network may implement a RAT, such as LTE or NR.
  • a WLAN system may implement a RAT, such as Wi-Fi.
  • the techniques described herein may be used for the wireless networks and RATs mentioned above, as well as other wireless networks and RATs.
  • In cellular and/or Wi-Fi communication, Layer 1 (also referred to as “Radio Layer 1” or the “physical (PHY) layer”) is responsible for error detection, forward error correction (FEC) encoding/decoding of the transport channel, hybrid automatic repeat request (HARQ) soft-combining, de-rate-matching, demapping, demodulation of the physical channels, and channel estimation and other radio characteristic measurements, just to name a few.
  • Layer 1 interfaces with Layer 2 and passes data packets up or down the protocol stack structure, depending on whether the data packets are associated with uplink (UL) or downlink (DL) transmission.
  • a user equipment receives a DL transmission via time/frequency resources in a physical downlink shared channel (PDSCH), which the base station allocates statically or dynamically.
  • the UE’s shared channel (SCH) activity can be either asynchronous or synchronous.
  • the base station generally sends a DL grant before each DL transmission to indicate the time/frequency resources in which the UE will receive an incoming DL packet.
  • DL grants are sent using predefined time/frequency resources in a physical downlink control channel (PDCCH).
  • the UE may be required to monitor and decode the predefined time/frequency resources of the PDCCH to determine whether it has an incoming DL transmission.
  • the base station may allocate predefined time/frequency resources in the PDSCH using semi-persistent scheduling (SPS).
  • the UE may not be required to monitor the PDCCH for DL grants since it knows the interval of the PDSCH resources used to carry DL transmissions.
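The SPS behavior described above can be illustrated with a short sketch: once the UE knows the start slot and periodicity of the configured PDSCH resources, it can enumerate its DL occasions without monitoring the PDCCH for grants. The function and parameter names are hypothetical.

```python
def sps_occasions(start_slot, periodicity_slots, num_occasions):
    """Enumerate PDSCH occasions under semi-persistent scheduling: with a known
    start slot and slot periodicity, no DL grant decoding is needed."""
    return [start_slot + k * periodicity_slots for k in range(num_occasions)]

# An SPS configuration starting at slot 4 with a 10-slot period:
print(sps_occasions(4, 10, 4))  # [4, 14, 24, 34]
```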
  • When the UE is configured for carrier aggregation (CA), multiple CCs are typically aggregated for reception and transmission. As such, the UE may receive multiple DL grants concurrently, one from each CC and cell, which identify the scheduled DL packet transmission on each CC. The CCs used in CA are not required to have the same transmission time interval (TTI) or subcarrier spacing (SCS).
  • Thus, the slot and symbol at which the UE is required to perform DL Layer 1 operations may differ across CCs, and scheduling DL Layer 1 operations for multiple CCs with different timing requirements poses a significant challenge for software-based techniques in terms of time, computational resources, and power consumption.
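The timing mismatch follows directly from the 5G NR numerology: subcarrier spacing is 15 × 2^μ kHz and slot duration is 1/2^μ ms (3GPP TS 38.211), so aggregated CCs with different SCS have slot boundaries that do not line up. A quick check:

```python
def slot_duration_ms(scs_khz):
    """5G NR numerology mu: SCS = 15 * 2**mu kHz, slot duration = 1 / 2**mu ms."""
    mu = (scs_khz // 15).bit_length() - 1  # 15 -> 0, 30 -> 1, 60 -> 2, 120 -> 3
    assert 15 * 2**mu == scs_khz, "SCS must be 15 * 2**mu kHz"
    return 1.0 / 2**mu

# A 15 kHz CC has 1 ms slots, while a 120 kHz CC fits 8 slots in that same 1 ms,
# so the DL Layer 1 deadlines of the two CCs fall at different symbol times.
for scs in (15, 30, 60, 120):
    print(f"{scs} kHz -> {slot_duration_ms(scs)} ms per slot")
```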
  • the present disclosure provides a baseband chip with an exemplary two-dimensional control architecture.
  • the two-dimensional control architecture may include a slot scheduler configured to schedule DL Layer 1 operations at the slot-level and a microcontroller (uC) cluster configured to schedule DL Layer 1 operations at the symbol-level.
  • the slot scheduler may generate monitoring-occasion information (e.g., CCH) and grant-type information (e.g., SCH), which are used by the uC cluster to schedule DL Layer 1 task instructions that are executed by a TS.
  • Based on the task instructions, the TS generates commands, which are implemented by Layer 1 hardware accelerators to perform the DL Layer 1 operations at the appropriate time.
  • the uC cluster may include multiple cores, each of which is assigned a particular set of CCs by a master core.
  • the master core may identify two or more timing groups. Different timing groups may include, e.g., CCs with synchronous SCH activity, CCs with asynchronous SCH activity, CCs with a first TTI, CCs with a second TTI, CCs with a first SCS (also referred to as “numerology”), CCs with a second SCS, etc.
  • Each timing group may be assigned a different core, which performs symbol-level scheduling of tasks that are sequenced by a dedicated hardware-based TS.
  • the hardware-based TS executes the task instructions, as dictated by its core, to generate commands implemented by Layer 1 hardware accelerators to perform various DL Layer 1 operations.
  • the slot scheduler and uC cluster may implement a two-dimensional DL Layer 1 scheduling mechanism that is more efficient and requires fewer computational resources and less power than software-based scheduling techniques. Additional details of the two-dimensional control architecture of the present baseband chip and the associated DL Layer 1 scheduling technique are provided below in connection with FIGs. 1-5.
  • FIG. 1 illustrates an exemplary wireless network 100, in which some aspects of the present disclosure may be implemented, according to some embodiments of the present disclosure.
  • wireless network 100 may include a network of nodes, such as user equipment 102, an access node 104, and a core network element 106.
  • User equipment 102 may be any terminal device, such as a mobile phone, a desktop computer, a laptop computer, a tablet, a vehicle computer, a gaming console, a printer, a positioning device, a wearable electronic device, a smart sensor, or any other device capable of receiving, processing, and transmitting information, such as any member of a vehicle-to-everything (V2X) network, a cluster network, a smart grid node, or an Internet-of-Things (IoT) node.
  • Access node 104 may be a device that communicates with user equipment 102, such as a wireless access point, a base station (BS), a Node B, an enhanced Node B (eNodeB or eNB), a next-generation NodeB (gNodeB or gNB), a cluster master node, or the like. Access node 104 may have a wired connection to user equipment 102, a wireless connection to user equipment 102, or any combination thereof. Access node 104 may be connected to user equipment 102 by multiple connections, and user equipment 102 may be connected to other access nodes in addition to access node 104. Access node 104 may also be connected to other user equipments.
  • access node 104 may operate in millimeter wave (mmW) frequencies and/or near mmW frequencies in communication with the user equipment 102.
  • the access node 104 may be referred to as an mmW base station.
  • Extremely high frequency (EHF) is part of the radio frequency (RF) portion of the electromagnetic spectrum. EHF has a range of 30 GHz to 300 GHz and a wavelength between 1 millimeter and 10 millimeters. Radio waves in this band may be referred to as millimeter waves.
  • Near mmW may extend down to a frequency of 3 GHz with a wavelength of 100 millimeters.
  • the super high frequency (SHF) band extends between 3 GHz and 30 GHz, also referred to as centimeter wave. Communications using the mmW or near mmW radio frequency band have extremely high path loss and a short range.
  • the mmW base station may utilize beamforming with user equipment 102 to compensate for the extremely high path loss and short range. It is understood that access node 104 is illustrated by a radio tower by way of illustration and not by way of limitation.
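The frequency and wavelength figures in this passage follow from λ = c/f. A quick sanity check (the helper name is illustrative):

```python
C = 299_792_458  # speed of light in m/s

def wavelength_mm(freq_ghz):
    """Wavelength in millimeters for a carrier frequency given in GHz (lambda = c/f)."""
    return C / (freq_ghz * 1e9) * 1e3

# The 1-10 mm "millimeter wave" range corresponds to roughly 30-300 GHz,
# and the ~100 mm near-mmW lower edge corresponds to 3 GHz:
print(round(wavelength_mm(30), 2))   # 9.99
print(round(wavelength_mm(300), 2))  # 1.0
print(round(wavelength_mm(3), 1))    # 99.9
```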
  • Access nodes 104, which are collectively referred to as E-UTRAN in the evolved packet core (EPC) network and as NG-RAN in the 5G core network (5GC), interface with the EPC and 5GC, respectively, through dedicated backhaul links (e.g., the S1 interface).
  • access node 104 may perform one or more of the following functions: transfer of user data, radio channel ciphering and deciphering, integrity protection, header compression, mobility control functions (e.g., handover, dual connectivity), inter-cell interference coordination, connection setup and release, load balancing, distribution for non-access stratum (NAS) messages, NAS node selection, synchronization, radio access network (RAN) sharing, multimedia broadcast multicast service (MBMS), subscriber and equipment trace, RAN information management (RIM), paging, positioning, and delivery of warning messages.
  • Access nodes 104 may communicate directly or indirectly (e.g., through the 5GC) with each other over backhaul links (e.g., X2 interface).
  • the backhaul links may be wired or wireless.
  • Core network element 106 may serve access node 104 and user equipment 102 to provide core network services.
  • core network element 106 may include a home subscriber server (HSS), a mobility management entity (MME), a serving gateway (SGW), or a packet data network gateway (PGW).
  • core network element 106 includes an access and mobility management function (AMF), a session management function (SMF), or a user plane function (UPF) of the 5GC for the NR system.
  • the AMF may be in communication with a Unified Data Management (UDM).
  • the AMF is the control node that processes the signaling between the user equipment 102 and the 5GC. Generally, the AMF provides QoS flow and session management. All user Internet protocol (IP) packets are transferred through the UPF.
  • the UPF provides user equipment (UE) IP address allocation as well as other functions.
  • the UPF is connected to the IP Services.
  • the IP Services may include the Internet, an intranet, an IP Multimedia Subsystem (IMS), a PS Streaming Service, and/or other IP services. It is understood that core network element 106 is shown as a set of rack-mounted servers by way of illustration and not by way of limitation.
  • Core network element 106 may connect with a large network, such as the Internet 108, or another Internet Protocol (IP) network, to communicate packet data over any distance.
  • data from user equipment 102 may be communicated to other user equipments connected to other access points, including, for example, a computer 110 connected to Internet 108, for example, using a wired connection or a wireless connection, or to a tablet 112 wirelessly connected to Internet 108 via a router 114.
  • computer 110 and tablet 112 provide additional examples of possible user equipments
  • router 114 provides an example of another possible access node.
  • a generic example of a rack-mounted server is provided as an illustration of core network element 106.
  • Database 116 may, for example, manage data related to user subscription to network services.
  • a home location register (HLR) is an example of a standardized database of subscriber information for a cellular network.
  • authentication server 118 may handle authentication of users, sessions, and so on.
  • an authentication server function (AUSF) device may be the entity to perform user equipment authentication.
  • a single server rack may handle multiple such functions, such that the connections between core network element 106, authentication server 118, and database 116, may be local connections within a single rack.
  • Each element in FIG. 1 may be considered a node of wireless network 100. More detail regarding the possible implementation of a node is provided by way of example in the description of a node 200 in FIG. 2.
  • Node 200 may be configured as user equipment 102, access node 104, or core network element 106 in FIG. 1.
  • node 200 may also be configured as computer 110, router 114, tablet 112, database 116, or authentication server 118 in FIG. 1.
  • node 200 may include a processor 202, a memory 204, and a transceiver 206. These components are shown as connected to one another by a bus, but other connection types are also permitted.
  • When node 200 is user equipment 102, additional components may also be included, such as a user interface (UI), sensors, and the like. Similarly, node 200 may be implemented as a blade in a server system when node 200 is configured as core network element 106. Other implementations are also possible.
  • Transceiver 206 may include any suitable device for sending and/or receiving data.
  • Node 200 may include one or more transceivers, although only one transceiver 206 is shown for simplicity of illustration.
  • An antenna 208 is shown as a possible communication mechanism for node 200. Multiple antennas and/or arrays of antennas may be utilized for receiving multiple spatially multiplex data streams.
  • examples of node 200 may communicate using wired techniques rather than (or in addition to) wireless techniques.
  • access node 104 may communicate wirelessly to user equipment 102 and may communicate by a wired connection (for example, by optical or coaxial cable) to core network element 106.
  • Other communication hardware such as a network interface card (NIC), may be included as well.
  • node 200 may include processor 202. Although only one processor is shown, it is understood that multiple processors can be included.
  • Processor 202 may include microprocessors, microcontroller units (MCUs), digital signal processors (DSPs), application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), programmable logic devices (PLDs), state machines, gated logic, discrete hardware circuits, and other suitable hardware configured to perform the various functions described throughout the present disclosure.
  • Processor 202 may be a hardware device having one or more processing cores.
  • Processor 202 may execute software.
  • node 200 may also include memory 204. Although only one memory is shown, it is understood that multiple memories can be included. Memory 204 can broadly include both memory and storage.
  • memory 204 may include random-access memory (RAM), read-only memory (ROM), static RAM (SRAM), dynamic RAM (DRAM), ferroelectric RAM (FRAM), electrically erasable programmable ROM (EEPROM), compact disc read-only memory (CD-ROM) or other optical disk storage, hard disk drive (HDD) or other magnetic storage devices, flash drive, solid-state drive (SSD), or any other medium that can be used to carry or store desired program code in the form of instructions that can be accessed and executed by processor 202.
  • memory 204 may be embodied by any computer-readable medium, such as a non-transitory computer-readable medium.
  • Processor 202, memory 204, and transceiver 206 may be implemented in various forms in node 200 for performing wireless communication functions.
  • processor 202, memory 204, and transceiver 206 are integrated into a single system- on-chip (SoC) or a single system-in-package (SiP).
  • processor 202, memory 204, and transceiver 206 of node 200 are implemented (e.g., integrated) on one or more SoCs.
  • processor 202 and memory 204 may be integrated on an application processor (AP) SoC (sometimes known as a “host,” referred to herein as a “host chip”) that handles application processing in an operating system (OS) environment, including generating raw data to be transmitted.
  • processor 202 and memory 204 may be integrated on a baseband processor (BP) SoC (sometimes known as a “modem,” referred to herein as a “baseband chip”) that converts the raw data, e.g., from the host chip, to signals that can be used to modulate the carrier frequency for transmission, and vice versa, and that can run a real-time operating system (RTOS).
  • processor 202 and transceiver 206 may be integrated on an RF SoC (sometimes known as a “transceiver,” referred to herein as an “RF chip”) that transmits and receives RF signals with antenna 208.
  • some or all of the host chip, baseband chip, and RF chip may be integrated as a single SoC.
  • a baseband chip and an RF chip may be integrated into a single SoC that manages all the radio functions for cellular communication.
  • user equipment 102 includes a baseband chip designed with an exemplary two-dimensional control architecture, which achieves efficient, low-power scheduling of DL Layer 1 operations.
  • the exemplary two-dimensional control architecture includes a slot scheduler configured to schedule DL Layer 1 operations at the slot-level and a uC cluster configured to schedule DL Layer 1 operations at the symbol-level.
  • the slot scheduler may generate monitoring-occasion information (e.g., CCH) and grant-type information (e.g., SCH), which are used by the uC cluster to schedule DL Layer 1 task instructions that are executed by a hardware-based TS.
  • Based on the task instructions, the TS generates commands, which are implemented by Layer 1 hardware accelerators to perform the DL Layer 1 operations at the appropriate time. Additional details of the exemplary two-dimensional control architecture and the associated DL Layer 1 scheduling technique are provided below in connection with FIGs. 3-5.
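The two dimensions of the control architecture can be sketched end to end: the slot scheduler emits per-slot grant information, the uC cluster expands it into per-symbol task instructions, and the task sequencer turns those into accelerator commands. All data shapes and names here are invented for illustration; the disclosure specifies the roles, not this representation.

```python
def slot_scheduler(slot):
    """Slot-level dimension: monitoring-occasion (CCH) and grant-type (SCH)
    information for one slot. The field names are invented."""
    return {
        "slot": slot,
        "sch_grants": [{"cc": "cc0", "start_symbol": 2, "num_symbols": 12}],
    }

def uc_cluster(slot_info):
    """Symbol-level dimension: expand slot-level grants into per-symbol tasks."""
    return [
        (grant["cc"], sym)
        for grant in slot_info["sch_grants"]
        for sym in range(grant["start_symbol"],
                         grant["start_symbol"] + grant["num_symbols"])
    ]

def task_sequencer(tasks):
    """Generate one hardware-accelerator command per task instruction."""
    return [f"demod {cc} sym{sym}" for cc, sym in tasks]

cmds = task_sequencer(uc_cluster(slot_scheduler(slot=0)))
print(len(cmds), cmds[0], cmds[-1])  # 12 demod cc0 sym2 demod cc0 sym13
```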
  • FIG. 3 illustrates a block diagram of an apparatus 300 including a baseband chip 302, an RF chip 304, and a host chip 306, according to some embodiments of the present disclosure.
  • FIG. 4 illustrates an exemplary two-dimensional scheduling diagram 400 for DL Layer 1 operations implemented by baseband chip 302 in FIG. 3, according to some embodiments of the present disclosure. FIGs. 3 and 4 will be described together.
  • apparatus 300 may be implemented as user equipment 102 of wireless network 100 in FIG. 1.
  • apparatus 300 may include baseband chip 302, RF chip 304, host chip 306, and one or more antennas 310.
  • baseband chip 302 is implemented by a processor and a memory
  • RF chip 304 is implemented by a processor, a memory, and a transceiver.
  • apparatus 300 may further include an external memory 308 (e.g., the system memory or main memory) that can be shared by each chip 302, 304, or 306 through the system/main bus.
  • Although baseband chip 302 is illustrated as a standalone SoC in FIG. 3, it is understood that in one example, baseband chip 302 and RF chip 304 may be integrated as one SoC or one SiP; in another example, baseband chip 302 and host chip 306 may be integrated as one SoC or one SiP; in still another example, baseband chip 302, RF chip 304, and host chip 306 may be integrated as one SoC or one SiP, as described above.
  • host chip 306 may generate raw data and send it to baseband chip 302 for encoding, modulation, and mapping. Interface 314 of baseband chip 302 may receive the data from host chip 306. Baseband chip 302 may also access the raw data generated by host chip 306 and stored in external memory 308, for example, using direct memory access (DMA). Baseband chip 302 may first encode (e.g., by source coding and/or channel coding) the raw data and modulate the coded data using any suitable modulation techniques, such as multi-phase shift keying (MPSK) modulation or quadrature amplitude modulation (QAM).
  • Baseband chip 302 may perform any other functions, such as symbol or layer mapping, to convert the raw data into a signal that can be used to modulate the carrier frequency for transmission.
  • baseband chip 302 may send the modulated signal to RF chip 304 via interface 314.
  • RF chip 304, through the transmitter, may convert the modulated signal from digital form into analog signals, i.e., RF signals, and perform any suitable front-end RF functions, such as filtering, digital pre-distortion, up-conversion, or sample-rate conversion.
  • Antenna 310 (e.g., an antenna array) may receive RF signals from an access node or other wireless device.
  • the RF signals may be passed to the receiver (Rx) of RF chip 304.
  • RF chip 304 may perform any suitable front-end RF functions, such as filtering, IQ imbalance compensation, down-conversion, or sample-rate conversion, and convert the received RF signals into low-frequency digital signals (baseband signals) that can be processed by baseband chip 302.
  • baseband chip 302 includes a Layer 1 subsystem 350 designed with the exemplary two-dimensional control architecture.
  • Layer 1 subsystem 350 includes a lower-Layer 1 slot scheduler (LL1) 320, a uC cluster 322, a first TS 326a, a second TS 326b, a third TS 326c, a first set of hardware accelerators 330a (as used herein “a set of hardware accelerators” may include one or more hardware accelerators), a second set of hardware accelerators 330b, a third set of hardware accelerators 330c, etc.
  • first set of hardware accelerators 330a may perform demapping functions, and second set of hardware accelerators 330b may likewise perform demapping.
  • Third set of hardware accelerators 330c may perform the same or different function(s) as first set of hardware accelerators 330a and/or second set of hardware accelerators 330b.
  • a single TS may include primitives to handle multiple priorities of different timing groups.
  • the granularity of atomicity of tasks for each task buffer (also referred to as “task queue”) may be adjusted by specially designed primitives for exclusive access acquisition to resources by a single queue and the release of the TS resources by an exclusive access release by that same queue.
  • the “exclusive access” acquisition process per task buffer itself is contention-based with priorities assignable to each task buffer.
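The per-task-buffer exclusive access described above resembles a priority-arbitrated lock. The sketch below is a software analogy of that contention scheme (the actual mechanism is implemented by TS primitives in hardware); the class and method names are made up.

```python
import heapq

class SequencerResource:
    """Software analogy of per-task-buffer exclusive access to TS resources.
    Contending task buffers queue by assignable priority; only the owner releases."""
    def __init__(self):
        self.owner = None
        self.waiting = []  # min-heap of (priority, buffer): lower value wins

    def acquire(self, buffer, priority):
        if self.owner is None:
            self.owner = buffer      # uncontended: acquire immediately
            return True
        heapq.heappush(self.waiting, (priority, buffer))
        return False                 # contended: wait, ordered by priority

    def release(self, buffer):
        assert self.owner == buffer, "only the owning task buffer may release"
        self.owner = heapq.heappop(self.waiting)[1] if self.waiting else None

ts = SequencerResource()
ts.acquire("queue_A", priority=2)  # queue_A owns the TS resource
ts.acquire("queue_B", priority=1)  # waits, but with higher priority
ts.acquire("queue_C", priority=3)  # waits
ts.release("queue_A")
print(ts.owner)  # queue_B -- the highest-priority waiter wins the contention
```

Because a queue's tasks run only between its acquire and release, the acquire/release pair is what sets the granularity of atomicity per task buffer.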
  • the same timing group may be assigned to multiple hardware accelerators.
  • uC cluster 322 may include a plurality of cores, e.g., a master core 324e, a first core 324a, a second core 324b, a third core 324c, an auxiliary core 324d, etc. It is understood that uC cluster 322 may include more or fewer than five cores without departing from the scope of the present disclosure.
  • master core 324e may identify, from the plurality of CCs, different timing groups. A timing group may be identified based on timing characteristics that are shared among a set of CCs assigned to apparatus 300, where the set of CCs is less than the total number of CCs assigned for CA.
  • the timing characteristic(s) used to identify a timing group may include, e.g., a TTI of a first length, a TTI of a second length different than the first length, synchronous SCH activity (e.g., SPS-based SCH activity), asynchronous SCH activity (e.g., grant-based SCH activity), a first SCS, a second SCS different than the first SCS, just to name a few.
  • more than one timing group may be identified based on timing characteristics and assigned to the same task sequencer.
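Timing-group identification from shared timing characteristics can be sketched as follows. The dictionary keys (`scs_khz`, `tti_ms`, `id`) and the choice of SCS plus TTI length as the grouping key are illustrative assumptions; any of the characteristics listed above could serve as the key.

```python
from collections import defaultdict


def identify_timing_groups(ccs):
    """Hypothetical sketch: group component carriers (CCs) that share
    timing characteristics (here, SCS and TTI length) into timing groups
    labeled TG0, TG1, ... in order of first appearance."""
    groups = defaultdict(list)
    for cc in ccs:
        key = (cc["scs_khz"], cc["tti_ms"])  # shared timing characteristics
        groups[key].append(cc["id"])
    return {f"TG{i}": members for i, (_, members) in enumerate(groups.values() and groups.items())}
```

For example, two CCs at 15 kHz SCS with a 1 ms TTI and one CC at 30 kHz SCS with a 0.5 ms TTI would yield two timing groups.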
  • master core 324e may determine the clock frequency needed for DL Layer 1 scheduling/operations for each set of CCs based on the timing group’s timing characteristics.
  • the clock frequency may be identified based on a look-up table that correlates clock frequencies with timing characteristics, for example.
  • the look-up table may be maintained in on-chip memory 318 and/or external memory 308.
  • Master core 324e may assign a clock frequency to each of first core 324a, second core 324b, and third core 324c.
  • the clock frequencies may be the same or different.
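The look-up-table approach to clock-frequency selection can be sketched as below. The table contents and the fallback default are purely illustrative assumptions, not values from the disclosure; the point is only that timing characteristics index into a table of clock frequencies.

```python
# Hypothetical look-up table correlating timing characteristics with the
# clock frequency needed for DL Layer 1 scheduling; all values illustrative.
CLOCK_FREQ_LUT_MHZ = {
    (15, 1.0): 300,     # (SCS in kHz, TTI in ms) -> core clock in MHz
    (30, 0.5): 600,
    (120, 0.125): 1200,
}


def clock_frequency_for(scs_khz, tti_ms, default_mhz=600):
    """Return the clock frequency for a timing group's characteristics,
    falling back to a default when the LUT has no matching entry."""
    return CLOCK_FREQ_LUT_MHZ.get((scs_khz, tti_ms), default_mhz)
```

The master core would evaluate this once per timing group and assign the result to the corresponding core; such a table could be maintained in on-chip memory 318 and/or external memory 308 as noted above.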
  • master core 324e identifies three different timing groups, timing group 0 (TG0), timing group 1 (TG1), and timing group 2 (TG2), from among the plurality of CCs assigned to apparatus 300.
  • master core 324e may assign TG0 to first core 324a, TG1 to second core 324b, and TG2 to third core 324c.
  • First core 324a is responsible for symbol-level scheduling of first task instructions executed by first TS 326a
  • second core 324b is responsible for symbol-level scheduling of task instructions executed by second TS 326b
  • third core 324c is responsible for symbol-level scheduling of task instructions executed by third TS 326c.
  • Each of first, second, and third cores 324a, 324b, 324c may perform symbol-level scheduling for their respective TS using slot-level scheduling information generated by LL1 320.
  • master core 324e may assign TG0 and TG1 to first core 324a and TG2 to second core 324b.
  • LL1 320 may generate monitoring- occasion information (e.g., CCH) and grant-type information (e.g., SCH), which are written to a corresponding mailbox 360.
  • Each timing group may have an assigned CCH fast mailbox (e.g., a first slot-scheduler buffer) into which LL1 320 writes/pushes monitoring-occasion information associated with a PDCCH, a first SCH fast mailbox (e.g., a second slot-scheduler buffer) into which LL1 320 writes/pushes first grant-type information associated with regular-priority PDSCH activity, and a second SCH fast mailbox (e.g., a third slot-scheduler buffer) into which LL1 320 writes/pushes second grant-type information associated with high-priority PDSCH activity.
  • Regular-priority SCH activity may be associated with DL Layer 1 operations baseband chip 302 performs for PDSCH resources located in a slot that occurs later in the time domain (e.g., k0 > 0) than the slot in which the DL grant allocating those PDSCH resources is received.
  • the turnaround time for scheduling DL Layer 1 operations for same-slot PDSCH resources is shorter than that for different-slot PDSCH resources, and same-slot SCH activity is hence given a higher priority by the core.
  • Higher-priority SCH activity may be associated with a retransmission and/or ultra-reliable low-latency communication (URLLC), for example.
  • Monitoring-occasion information may indicate one or more slots in which apparatus 300 is required to monitor the PDCCH for a DL grant or other downlink control information (DCI). Additionally and/or alternatively, the monitoring-occasion information may indicate a slot in which DL Layer 1 operations, e.g., demodulation, de-mapping, de-rate-matching, channel estimation, etc., are performed using the PDCCH. Grant-type information may indicate information such as the starting resource block (RB), the ending RB, the start symbol, and the end symbol of the PDSCH resources allocated for a DL transmission.
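The three per-timing-group fast mailboxes and their priority ordering can be sketched as a minimal model. The class name, the use of queues for the mailboxes, and the drain method are illustrative assumptions; the sketch shows only the ordering stated above: all CCH monitoring-occasion entries first, then high-priority SCH grants, then regular-priority SCH grants.

```python
from collections import deque


class SlotSchedulerMailboxes:
    """Hypothetical sketch of the three per-timing-group fast mailboxes
    written by the lower-Layer 1 slot scheduler (LL1)."""

    def __init__(self):
        self.cch = deque()       # monitoring-occasion information (PDCCH)
        self.sch_high = deque()  # e.g., retransmission/URLLC grants (k0 = 0)
        self.sch_reg = deque()   # regular-priority grants (k0 > 0)

    def drain_by_priority(self):
        """Empty the mailboxes highest-priority first: CCH, then
        high-priority SCH, then regular-priority SCH."""
        out = []
        for box in (self.cch, self.sch_high, self.sch_reg):
            while box:
                out.append(box.popleft())
        return out
```

A core draining these mailboxes would therefore always see monitoring-occasion information before any grant-type information, regardless of arrival order.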
  • master core 324e may assign first TS 326a to first core 324a, second TS 326b to second core 324b, and third TS 326c to third core 324c.
  • First TS 326a may execute task instructions to generate first commands implemented by first set of hardware accelerators 330a for TG0
  • second TS 326b may execute second task instructions to generate second commands implemented by second set of hardware accelerators 330b for TG1
  • third TS 326c may execute third task instructions to generate third commands implemented by third set of hardware accelerators 330c for TG2.
  • Each TS may execute task instructions based on symbol-level scheduling of DL Layer 1 tasks by its respective core.
  • a core may schedule symbol-level DL Layer 1 operations by pushing/writing the memory address of an instruction into a task buffer located at its TS.
  • the TS may include multiple task buffers each associated with a particular CC of its timing group.
  • the instructions described below may be maintained in on-chip memory 318, external memory 308, host chip 306, or elsewhere in apparatus 300.
  • first core 324a may retrieve monitoring-occasion information from its CCH fast mailbox first, higher-priority grant-type information from the higher-priority SCH fast mailbox second, and regular-priority grant-type information from the regular-priority SCH fast mailbox third.
  • first core 324a may retrieve all monitoring-occasion information (e.g., highest-priority) from the CCH fast mailbox first, retrieve all grant-type information (e.g., second highest-priority) from the higher-priority SCH fast mailbox second, and retrieve all grant-type information (e.g., lowest-priority) from the lower-priority SCH fast mailbox third.
  • first core 324a may perform symbol-level scheduling of tasks in the order of priority. In some other examples, first core 324a may perform round-robin retrieval of monitoring-occasion information, higher-priority grant-type information, and lower-priority grant-type information for a first CC before doing the same for a second CC.
  • the following example of symbol-level scheduling by first core 324a and second core 324b is described in connection with the round-robin embodiment. It is understood that the same or similar operations may be performed but in a different order in the embodiment in which the CCH fast mailbox is emptied before moving on to retrieving grant-type information from the higher-priority SCH mailbox, and so on, without departing from the scope of the present disclosure.
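The round-robin embodiment can be sketched as follows. The input structure (`per_cc_info` mapping a CC id to its three kinds of slot-level information) is an illustrative assumption; the sketch shows only the ordering: all information for a first CC is retrieved (monitoring occasion, then higher-priority grant, then regular-priority grant) before the core moves on to the next CC.

```python
def round_robin_schedule(per_cc_info):
    """Hypothetical sketch of round-robin retrieval: per CC, retrieve
    monitoring-occasion info, then the higher-priority grant, then the
    regular-priority grant, before moving on to the next CC."""
    order = []
    for cc_id, info in per_cc_info.items():
        for kind in ("cch", "sch_high", "sch_reg"):
            if info.get(kind) is not None:
                order.append((cc_id, kind))
    return order
```

Contrast this with the priority-ordered embodiment above, which would instead empty the entire CCH mailbox across all CCs before touching any SCH mailbox.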
  • in this example, symbol-level scheduling is based on slot-level monitoring-occasion information and slot-level higher-priority grant-type information.
  • the same or similar operations may be performed for symbol-level scheduling that additionally and/or alternatively includes regular-priority grant-type information without departing from the scope of the present disclosure.
  • first core 324a may identify a first memory address of a first instruction associated with the first monitoring-occasion information.
  • First core 324a may write/push a first task instruction that includes the first memory address of the first instruction into a first task buffer 328a of first TS 326a.
  • the first task buffer 328a may be associated with the first CC of TG0.
  • first core 324a may retrieve first grant-type information (e.g., associated with the first CC of TG0) from the higher-priority SCH fast mailbox.
  • First core 324a may identify a second memory address of a second instruction associated with the first grant-type information.
  • First core 324a may write/push a second task instruction that includes the second memory address into the first task buffer 328a of first TS 326a. Assuming there is no other monitoring-occasion information and/or grant-type information for the first CC in TG0, first core 324a may move on to the second CC of TG0.
  • first core 324a retrieves second monitoring-occasion information (e.g., associated with a second CC in TG0) from the CCH fast mailbox.
  • First core 324a may identify a third memory address associated with a third instruction based on the second monitoring-occasion information.
  • First core 324a may write/push a third task instruction that includes the third memory address of the third instruction into a second task buffer 328b of first TS 326a.
  • second task buffer 328b may be associated with the second CC of TG0.
  • first core 324a may retrieve second grant-type information (e.g., associated with the second CC of TG0) from the higher-priority SCH fast mailbox.
  • First core 324a may identify a fourth memory address associated with a fourth instruction based on the second grant-type information. First core 324a may write/push a fourth task instruction that includes the fourth memory address into second task buffer 328b of first TS 326a.
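The core-side pushes described in the walk-through above can be condensed into a sketch: a task sequencer keeps one task buffer per CC, and the core pushes task instructions that carry only the memory address of the real instruction. The class name, the instruction-address table, and the address values are illustrative assumptions.

```python
from collections import defaultdict


class TaskSequencer:
    """Hypothetical sketch: one task buffer per CC; a core pushes task
    instructions containing only the memory address of the instruction."""

    def __init__(self):
        self.task_buffers = defaultdict(list)  # cc_id -> task instructions

    def push(self, cc_id, instr_addr):
        self.task_buffers[cc_id].append({"addr": instr_addr})


# A core resolving slot-level info to instruction addresses (addresses are
# illustrative) and pushing them per CC, as in the TG0 walk-through above.
instruction_table = {"mo_cc0": 0x1000, "grant_cc0": 0x1040,
                     "mo_cc1": 0x1080, "grant_cc1": 0x10C0}
ts = TaskSequencer()
for cc_id, keys in ((0, ("mo_cc0", "grant_cc0")), (1, ("mo_cc1", "grant_cc1"))):
    for key in keys:
        ts.push(cc_id, instruction_table[key])
```

Passing addresses rather than instruction bodies keeps each task-buffer entry small, which matches the fast-mailbox/task-buffer design described above.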
  • First core 324a may include a time-stamp trigger and/or event-trigger that causes first TS 326a to retrieve and execute the instructions associated with the task instructions in its different task buffers when the associated trigger is met. For example, in response to a time-stamp trigger being met, first TS 326a may access the first task instruction from first task buffer 328a and identify the first memory address of the first instruction therefrom. First TS 326a may retrieve and execute the first instruction to generate a first command. First command may be sent to first set of hardware accelerators 330a, which perform first DL CCH Layer 1 operation(s) based on the first command.
  • first TS 326a may identify the second memory address of the second instruction based on the second task instruction in first task buffer 328a. First TS 326a may retrieve the second instruction from a second memory location associated with the second memory address. First TS 326a may execute the second instruction to generate a second command for first set of hardware accelerators 330a. Using the second command, first set of hardware accelerators 330a may perform first DL SCH Layer 1 operation(s).
  • an event-trigger may be implemented by first core 324a that causes first TS 326a to access the third task instruction in second task buffer 328b.
  • the event trigger may be the completion of command generation for the first CC of TGO.
  • first TS 326a may identify, from the third task instruction, a third memory address associated with the third instruction.
  • first TS 326a may execute the third instruction to generate a third command, which is sent to first set of hardware accelerators 330a.
  • First set of hardware accelerators 330a may perform second DL CCH Layer 1 operation(s) based on the third command.
  • first TS 326a may identify a fourth memory address of a fourth instruction based on the fourth task instruction in second task buffer 328b. First TS 326a may retrieve the fourth instruction from a fourth memory location associated with the fourth memory address. First TS 326a may execute the fourth instruction to generate a fourth command for first set of hardware accelerators 330a. First set of hardware accelerators 330a may perform second DL Layer 1 SCH operation(s) based on the fourth command. In this way, first core 324a may be configured to control operations performed by first TS 326a to generate a set of commands (e.g., first command, second command, third command, and fourth command) for first set of hardware accelerators 330a.
  • DL Layer 1 operations may be implemented in a timely manner by first set of hardware accelerators 330a for the CCs in TG0.
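The trigger-driven TS execution described above can be sketched in a few lines: once a time-stamp or event trigger fires, the TS walks one CC's task buffer, dereferences each task instruction's memory address, executes the instruction, and dispatches the resulting command to the hardware accelerators. The function name and the modeling of the accelerator as an output log are illustrative assumptions.

```python
def execute_on_trigger(task_buffer, instruction_memory, accelerator_log):
    """Hypothetical sketch: on a time-stamp or event trigger, the TS
    dereferences each task instruction's address, executes the retrieved
    instruction, and sends the command to the hardware accelerators
    (modeled here as appending to a log)."""
    for task in task_buffer:
        instruction = instruction_memory[task["addr"]]  # retrieve instruction
        command = f"cmd:{instruction}"                  # execute -> command
        accelerator_log.append(command)                 # dispatch to HW accel
    return accelerator_log
```

In the TG0 walk-through, the time-stamp trigger would drive this loop for first task buffer 328a, and the event trigger (completion of command generation for the first CC) would then drive it for second task buffer 328b.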
  • Second core 324b and third core 324c may perform the same or similar operations in parallel with first core 324a.
  • second core 324b may identify a fifth memory address of a fifth instruction associated with the third monitoring-occasion information.
  • Second core 324b may write/push a fifth task instruction that includes the fifth memory address of the fifth instruction into a third task buffer 328c of second TS 326b.
  • Third task buffer 328c may be associated with the first CC in TG1 (also referred to herein as “third CC”).
  • second core 324b may retrieve third grant-type information (e.g., associated with the first CC of TG1) from the higher-priority SCH fast mailbox.
  • Second core 324b may identify a sixth memory address of a sixth instruction associated with the third grant-type information. Second core 324b may write/push a sixth task instruction that includes the sixth memory address into third task buffer 328c of second TS 326b. Assuming there is no other monitoring-occasion information and/or grant-type information for the first CC in TG1, second core 324b may move on to the second CC of TG1 (also referred to herein as “fourth CC”).
  • second core 324b may retrieve fourth monitoring-occasion information (e.g., associated with a second CC in TG1) from the CCH fast mailbox. Second core 324b may identify a seventh memory address associated with a seventh instruction based on the fourth monitoring-occasion information. Second core 324b may write/push a seventh task instruction that includes the seventh memory address of the seventh instruction into a fourth task buffer 328d of second TS 326b. In this example, fourth task buffer 328d may be associated with the second CC of TG1. Then, second core 324b may retrieve fourth grant-type information (e.g., associated with the second CC of TG1) from the higher-priority SCH fast mailbox. Second core 324b may identify an eighth memory address associated with an eighth instruction based on the fourth grant-type information. Second core 324b may write/push an eighth task instruction that includes the eighth memory address into fourth task buffer 328d of second TS 326b.
  • Second core 324b may include a time-stamp trigger and/or event-trigger that causes second TS 326b to retrieve and execute the instructions associated with the task instructions in its different task buffers when the associated trigger is met. For example, in response to a time-stamp trigger being met, second TS 326b may access the fifth task instruction from third task buffer 328c and identify the fifth memory address of the fifth instruction therefrom. Second TS 326b may retrieve and execute the fifth instruction to generate a fifth command for second set of hardware accelerators 330b. Fifth command may be sent to second set of hardware accelerators 330b, which perform third DL CCH Layer 1 operation(s) based on the fifth command.
  • second TS 326b may identify the sixth memory address of the sixth instruction based on the sixth task instruction in third task buffer 328c. Second TS 326b may retrieve the sixth instruction from a sixth memory location associated with the sixth memory address. Second TS 326b may execute the sixth instruction to generate a sixth command for second set of hardware accelerators 330b. Using the sixth command, second set of hardware accelerators 330b may perform third DL SCH Layer 1 operation(s).
  • an event-trigger may be implemented by second core 324b that causes second TS 326b to access the seventh task instruction in fourth task buffer 328d.
  • the event trigger may be the completion of command generation for the first CC of TG1.
  • second TS 326b may identify, from the seventh task instruction, a seventh memory address associated with the seventh instruction.
  • second TS 326b may execute the seventh instruction to generate a seventh command, which is sent to second set of hardware accelerators 330b.
  • Second set of hardware accelerators 330b may perform fourth DL CCH Layer 1 operation(s) based on the seventh command.
  • second TS 326b may identify an eighth memory address of an eighth instruction based on the eighth task instruction in fourth task buffer 328d. Second TS 326b may retrieve the eighth instruction from an eighth memory location associated with the eighth memory address. Second TS 326b may execute the eighth instruction to generate an eighth command for second set of hardware accelerators 330b. Second set of hardware accelerators 330b may perform fourth DL Layer 1 SCH operation(s) based on the eighth command. In this way, second core 324b may be configured to control operations performed by second TS 326b to generate a set of commands (e.g., fifth command, sixth command, seventh command, and eighth command) for second set of hardware accelerators 330b. By controlling these operations with a symbol-level degree of granularity, DL Layer 1 operations may be implemented in a timely manner by second set of hardware accelerators 330b for the CCs in TG1.
  • third core 324c, third TS 326c, and third set of hardware accelerators 330c may each perform the same or similar operations as described above in connection with TG0 and TG1.
  • master core 324e may determine that one or more of first core 324a, second core 324b, or third core 324c may be unable to generate task instructions in a timely manner, e.g., based on the number of CCs in its timing group.
  • auxiliary core 324d may be assigned some of the tasks to reduce the workload of the other cores and ensure the timely completion of DL Layer 1 operations by the hardware accelerators.
  • Auxiliary core 324d may have its own dedicated TS and hardware accelerators, or it may send task instructions to the TS associated with the timing group for which it generates task instructions.
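The offload decision described above can be sketched as a simple rebalancing step: if a core's CC count exceeds what it can serve in time, the master core hands the excess CCs to the auxiliary core. The capacity model (a fixed per-core CC count) and all names are illustrative assumptions; a real implementation could instead estimate timeliness from the timing group's characteristics.

```python
def rebalance(core_loads, capacity, aux_core="aux"):
    """Hypothetical sketch: the master core offloads each core's overflow
    CCs (beyond `capacity`) to the auxiliary core to ensure timely
    completion of DL Layer 1 operations."""
    aux_ccs = []
    balanced = {}
    for core, ccs in core_loads.items():
        balanced[core] = ccs[:capacity]
        aux_ccs.extend(ccs[capacity:])  # overflow goes to the auxiliary core
    balanced[aux_core] = aux_ccs
    return balanced
```

For example, with a per-core capacity of two CCs, a core assigned three CCs would keep two and hand one to the auxiliary core.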
  • regular-priority DL SCH Layer 1 operations may also be scheduled by the cores and implemented by the hardware accelerators.
  • a core may use various techniques, e.g., a mutex lock on the associated task buffer in the TS, to cause its TS to execute the instructions for a particular CC in their entirety before moving on to the instructions for another CC.
  • a core may stitch DL Layer 1 operations across CCs when appropriate.
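The mutex technique above can be sketched minimally: holding a lock on a CC's task buffer for the duration of its instruction run guarantees that one CC's instructions execute in their entirety before another CC is serviced. The function and variable names are illustrative assumptions.

```python
import threading


def run_cc_atomically(buffer_lock, tasks, executed):
    """Hypothetical sketch: a mutex on the task buffer lets the TS execute
    one CC's instructions in their entirety before servicing another CC."""
    with buffer_lock:  # exclusive access for this CC's instruction run
        for task in tasks:
            executed.append(task)


lock = threading.Lock()
executed = []
run_cc_atomically(lock, ["cc0_i0", "cc0_i1"], executed)  # first CC, whole run
run_cc_atomically(lock, ["cc1_i0"], executed)            # then the next CC
```

The same lock could be released early when stitching DL Layer 1 operations across CCs is appropriate, as the preceding bullet notes.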
  • FIG. 5 illustrates a flowchart of an exemplary method 500 of wireless communication, according to embodiments of the disclosure.
  • Exemplary method 500 may be performed by an apparatus for wireless communication, e.g., such as a UE, a baseband chip, a Layer 1 subsystem, an LL1, a uC cluster, a core, a TS, a hardware accelerator, an on-chip memory, or an external memory, just to name a few.
  • Method 500 may include steps 502-512 as described below. It is to be appreciated that some of the steps may be optional, and some of the steps may be performed simultaneously, or in a different order than shown in FIG. 5.
  • the apparatus may identify, by a master core of a microcontroller cluster, a first set of CCs as a first timing group and a second set of CCs as a second timing group.
  • master core 324e may identify, from the plurality of CCs, different timing groups.
  • a timing group may be identified based on timing characteristics that are shared among a set of CCs assigned to apparatus 300, where the set of CCs is less than the total number of CCs assigned for CA.
  • the timing characteristic(s) used to identify a timing group may include, e.g., a TTI of a first length, a TTI of a second length different than the first length, synchronous SCH activity (e.g., SPS-based SCH activity), asynchronous SCH activity (e.g., grant-based SCH activity), a first SCS, a second SCS different than the first SCS, just to name a few.
  • the apparatus may assign, by the master core of the microcontroller cluster, the at least one first timing group to the first core and the at least one second timing group to the second core.
  • master core 324e may assign TG0 to first core 324a and TG1 to second core 324b.
  • the apparatus may control, by a first core of the microcontroller cluster, first operations performed by a first TS to generate a first set of commands for a first set of hardware accelerators associated with the at least one first timing group.
  • first core 324a may generate first, second, third, and fourth task instructions, which are used by first TS 326a to generate commands for first set of hardware accelerators 330a, as described above.
  • the apparatus may control, by a second core of the microcontroller cluster, second operations performed by the second TS to generate a second set of commands for a second set of hardware accelerators associated with the at least one second timing group.
  • second core 324b may generate fifth, sixth, seventh, and eighth task instructions, which are used by second TS 326b to generate commands for second set of hardware accelerators 330b, as described above.
  • the apparatus may generate, by the first TS, a first set of commands for the first set of hardware accelerators associated with a first timing group based on first task instructions from the first core. For example, referring to FIGs. 3 and 4, first TS 326a may generate first, second, third, and fourth commands implemented by first set of hardware accelerators 330a to perform DL Layer 1 operations for TG0.
  • the apparatus may generate, by the second TS, a second set of commands for the second set of hardware accelerators associated with a second timing group based on a second set of task instructions from the second core.
  • second TS 326b may generate fifth, sixth, seventh, and eighth commands implemented by second set of hardware accelerators 330b to perform DL Layer 1 operations for TG1.
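Steps 502-512 of method 500 can be condensed into one end-to-end sketch. Everything here is an illustrative assumption (grouping by SCS alone, one command per CC, the naming scheme); the sketch only traces the flow: identify timing groups, assign each group to a core, and have each core's TS produce commands for its set of hardware accelerators.

```python
def method_500(ccs):
    """Hypothetical end-to-end sketch of method 500: identify timing
    groups, assign them to cores, and generate per-CC commands."""
    # Step 502: identify timing groups by a shared timing characteristic
    # (here, SCS alone, as an illustrative key).
    groups = {}
    for cc in ccs:
        groups.setdefault(cc["scs_khz"], []).append(cc["id"])
    # Step 504: assign each timing group to a core.
    assignment = {f"core{i}": members
                  for i, members in enumerate(groups.values())}
    # Steps 506-512: each core controls its TS, which generates a set of
    # commands (one per CC here) for its hardware accelerators.
    commands = {core: [f"cmd_cc{cc}" for cc in members]
                for core, members in assignment.items()}
    return assignment, commands
```

With three CCs at two different SCS values, two timing groups are formed, each core receives one, and each TS emits commands only for the CCs of its own group.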
  • the functions described herein may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or encoded as instructions or code on a non-transitory computer-readable medium.
  • Computer-readable media includes computer storage media. Storage media may be any available media that can be accessed by a computing device, such as node 200 in FIG. 2.
  • such computer-readable media can include RAM, ROM, EEPROM, CD-ROM or other optical disk storage, HDD, such as magnetic disk storage or other magnetic storage devices, Flash drive, SSD, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a processing system, such as a mobile device or a computer.
  • Disk and disc include CD, laser disc, optical disc, digital video disc (DVD), and floppy disk, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
  • a baseband chip is provided.
  • the baseband chip may include a first TS configured to generate a first set of commands for a first set of hardware accelerators associated with at least one first timing group.
  • the baseband chip may include a second TS configured to generate a second set of commands for a second set of hardware accelerators associated with at least one second timing group.
  • the baseband chip may include a microcontroller cluster with a master core, a first core, and a second core.
  • the master core may be configured to identify a first set of CCs as the at least one first timing group and a second set of CCs as the at least one second timing group.
  • the master core may be configured to assign the at least one first timing group to the first core and the at least one second timing group to the second core.
  • the first core may be configured to control first operations performed by the first TS to generate the first set of commands for the first set of hardware accelerators.
  • the second core may be configured to control second operations performed by the second TS to generate the second set of commands for the second set of hardware accelerators.
  • the first set of CCs in the at least one first timing group may be each associated with a first TTI, synchronous SCH activity, or a first SCS.
  • the second set of CCs in the at least one second timing group may be each associated with a second TTI different than the first TTI, asynchronous SCH activity, or a second SCS different than the first SCS.
  • the master core may be further configured to identify a first clock frequency associated with first packet processing for the at least one first timing group and a second clock frequency associated with the at least one second timing group. In some embodiments, the master core may be further configured to assign the first clock frequency to the first core and the second clock frequency to the second core.
  • the first TS may include a first task buffer associated with a first CC in the first set of CCs and a second task buffer associated with a second CC in the first set of CCs.
  • the second TS comprises a third task buffer associated with a third CC in the second set of CCs and a fourth task buffer associated with a fourth CC in the second set of CCs.
  • the first core is configured to retrieve, from a first slot-scheduler buffer, first monitoring-occasion information for a first CCH of the first CC in the first set of CCs. In some embodiments, to control the first operations performed by the first TS to generate the first set of commands for the first set of hardware accelerators, the first core is configured to retrieve, from a second slot-scheduler buffer, first grant-type information for a first SCH of the first CC in the first set of CCs.
  • the first core is configured to push a first task instruction associated with the first monitoring-occasion information for the first CCH of the first CC in the first set of CCs into the first task buffer of the first TS. In some embodiments, to control the first operations performed by the first TS to generate the first set of commands for the first set of hardware accelerators, the first core is configured to push a second task instruction associated with the first grant-type information for the first SCH of the first CC in the first set of CCs into the first task buffer of the first TS.
  • the first core is configured to retrieve, from the first slot-scheduler buffer, second monitoring-occasion information for a second CCH of the second CC in the first set of CCs. In some embodiments, to control the first operations performed by the first TS to generate the first set of commands for the first set of hardware accelerators, the first core is configured to retrieve, from the second slot-scheduler buffer, second grant-type information for a second SCH of the second CC in the first set of CCs.
  • the first core is configured to push a third task instruction associated with the second monitoring-occasion information for the second CCH of the second CC in the first set of CCs into the second task buffer of the first TS.
  • the first core is configured to push a fourth task instruction associated with the second grant-type information for the second SCH of the second CC in the first set of CCs into the second task buffer of the first TS.
  • the first task buffer and the second task buffer may be different.
  • the first TS may be further configured to, in response to a first timestamp-trigger or first event-trigger, access the first task instruction from the first task buffer. In some embodiments, the first TS may be further configured to identify a first memory address of a first instruction based on the first task instruction. In some embodiments, the first TS may be further configured to retrieve the first instruction from a first location associated with the first memory address. In some embodiments, the first TS may be further configured to execute the first instruction to generate a first command for the first set of hardware accelerators. In some embodiments, the first TS may be further configured to, after executing the first instruction, identify a second memory address of a second instruction based on the second task instruction. In some embodiments, the first TS may be further configured to retrieve the second instruction from a second location associated with the second memory address. In some embodiments, the first TS may be further configured to execute the second instruction to generate a second command for the first set of hardware accelerators.
  • the first TS may be further configured to, in response to a second timestamp-trigger or a second event-trigger, access the third task instruction from the second task buffer. In some embodiments, the first TS may be further configured to identify a third memory address of a third instruction based on the third task instruction. In some embodiments, the first TS may be further configured to retrieve the third instruction from a third memory location associated with the third memory address. In some embodiments, the first TS may be further configured to execute the third instruction to generate a third command for the first set of hardware accelerators. In some embodiments, the first TS may be further configured to, after executing the third instruction, identify a fourth memory address of a fourth instruction based on the fourth task instruction.
  • the first TS may be further configured to retrieve the fourth instruction from a fourth memory location associated with the fourth memory address. In some embodiments, the first TS may be further configured to execute the fourth instruction to generate a fourth command for the first set of hardware accelerators.
  • the second event-trigger may be associated with a completed execution of the second instruction.
  • the second core is configured to retrieve, from a third slot-scheduler buffer, third monitoring-occasion information for a third CCH of the third CC in the second set of CCs. In some embodiments, to control the second operations performed by the second TS to generate the second set of commands for the second set of hardware accelerators, the second core is configured to retrieve, from a fourth slot-scheduler buffer, third grant-type information for a third SCH of the third CC in the second set of CCs.
  • the second core is configured to push a fifth task instruction associated with the third monitoring-occasion information for the third CCH of the third CC in the second set of CCs into the third task buffer of the second TS. In some embodiments, to control the second operations performed by the second TS to generate the second set of commands for the second set of hardware accelerators, the second core is configured to push a sixth task instruction associated with the third grant-type information for the third SCH of the third CC in the second set of CCs into the third task buffer of the second TS.
  • the second core is configured to retrieve, from the third slot-scheduler buffer, fourth monitoring-occasion information for a fourth CCH of the fourth CC in the second set of CCs. In some embodiments, to control the second operations performed by the second TS to generate the second set of commands for the second set of hardware accelerators, the second core is configured to retrieve, from the fourth slot-scheduler buffer, fourth grant-type information for a fourth SCH of the fourth CC in the second set of CCs.
  • the second core is configured to push a seventh task instruction associated with the fourth monitoringoccasion information for the fourth CCH of the fourth CC in the second set of CCs into the fourth task buffer of the second TS.
  • the second core is configured to push an eighth task instruction associated with the fourth grant-type information for the fourth SCH of the fourth CC in the second set of CCs into the fourth task buffer of the second TS.
  • the third task buffer and the fourth task buffer may be different.
  • the second TS may be configured to, in response to a third timestamp-trigger or a third event-trigger, access the fifth task instruction from the third task buffer. In some embodiments, the second TS may be configured to identify a fifth memory address of a fifth instruction based on the fifth task instruction. In some embodiments, the second TS may be configured to retrieve the fifth instruction from a fifth location associated with the fifth memory address. In some embodiments, the second TS may be configured to execute the fifth instruction to generate a fifth command for the second set of hardware accelerators. In some embodiments, the second TS may be configured to, after executing the fifth instruction, identify a sixth memory address of a sixth instruction based on the sixth task instruction. In some embodiments, the second TS may be configured to retrieve the sixth instruction from a sixth location associated with the sixth memory address. In some embodiments, the second TS may be configured to execute the sixth instruction to generate a sixth command for the second set of hardware accelerators.
  • the second TS may be further configured to, in response to a fourth timestamp-trigger or a fourth event-trigger, access the seventh task instruction from the fourth task buffer. In some embodiments, the second TS may be further configured to identify a seventh memory address of a seventh instruction based on the seventh task instruction. In some embodiments, the second TS may be further configured to retrieve the seventh instruction from a seventh memory location associated with the seventh memory address. In some embodiments, the second TS may be further configured to execute the seventh instruction to generate a seventh command for the second set of hardware accelerators. In some embodiments, the second TS may be further configured to, after executing the seventh instruction, identify an eighth memory address of an eighth instruction based on the eighth task instruction. In some embodiments, the second TS may be further configured to retrieve the eighth instruction from an eighth memory location associated with the eighth memory address. In some embodiments, the second TS may be further configured to execute the eighth instruction to generate an eighth command for the second set of hardware accelerators.
  • a microcontroller cluster for a baseband chip may include a master core, a first core, and a second core.
  • the master core may be configured to identify a first set of CCs as the at least one first timing group and a second set of CCs as the at least one second timing group.
  • the master core may be configured to assign the at least one first timing group to the first core and the at least one second timing group to the second core.
  • the first core may be configured to control first operations performed by a first TS to generate a first set of commands for the first set of hardware accelerators.
  • the second core may be configured to control second operations performed by a second TS to generate a second set of commands for the second set of hardware accelerators.
  • the first set of CCs in the at least one first timing group may be each associated with a first TTI.
  • the second set of CCs in the at least one second timing group may be each associated with a second TTI.
  • the first TTI and the second TTI may be different.
  • the first set of CCs in the at least one first timing group may be each associated with synchronous SCH activity.
  • the second set of CCs in the at least one second timing group may be each associated with asynchronous SCH activity.
  • the master core may be further configured to identify a first clock frequency associated with first packet processing for the at least one first timing group and a second clock frequency associated with the at least one second timing group. In some embodiments, the master core may be further configured to assign the first clock frequency to the first core and the second clock frequency to the second core.
  • a method of wireless communication of a baseband chip may include identifying, by a master core of a microcontroller cluster, a first set of CCs as at least one first timing group and a second set of CCs as at least one second timing group.
  • the method may include assigning, by the master core of the microcontroller cluster, the at least one first timing group to the first core and the at least one second timing group to the second core.
  • the method may include controlling, by a first core of the microcontroller cluster, first operations performed by a first TS to generate a first set of commands for a first set of hardware accelerators associated with the at least one first timing group.
  • the method may include controlling, by a second core of the microcontroller cluster, second operations performed by the second TS to generate a second set of commands for a second set of hardware accelerators associated with the at least one second timing group.
  • the method may include generating, by the first TS, a first set of commands for the first set of hardware accelerators associated with the at least one first timing group based on first task instructions from the first core.
  • the method may include generating, by the second TS, a second set of commands for the second set of hardware accelerators associated with the at least one second timing group based on a second set of task instructions from the second core.
  • the first set of CCs in the at least one first timing group may be each associated with a first TTI.
  • the second set of CCs in the at least one second timing group may be each associated with a second TTI.
  • the first TTI and the second TTI may be different.
  • the first set of CCs in the at least one first timing group may be each associated with synchronous SCH activity.
  • the second set of CCs in the at least one second timing group may be each associated with asynchronous SCH activity.
  • the first TS may include a first task buffer associated with a first CC in the first set of CCs and a second task buffer associated with a second CC in the first set of CCs.
  • the second TS may include a third task buffer associated with a third CC in the second set of CCs and a fourth task buffer associated with a fourth CC in the second set of CCs.
  • the controlling, by the first core of the microcontroller cluster, the first operations performed by the first TS to generate the first set of commands for the first set of hardware accelerators comprises retrieving, from a first slot-scheduler buffer, first monitoring-occasion information for a first control channel (CCH) of the first CC in the first set of CCs. In some embodiments, the controlling, by the first core of the microcontroller cluster, the first operations performed by the first TS to generate the first set of commands for the first set of hardware accelerators comprises retrieving, from a second slot-scheduler buffer, first grant-type information for a first SCH of the first CC in the first set of CCs.
  • the controlling, by the first core of the microcontroller cluster, the first operations performed by the first TS to generate the first set of commands for the first set of hardware accelerators comprises pushing a first task instruction associated with the first monitoring-occasion information for the first CCH of the first CC in the first set of CCs into the first task buffer of the first TS. In some embodiments, the controlling, by the first core of the microcontroller cluster, the first operations performed by the first TS to generate the first set of commands for the first set of hardware accelerators comprises pushing a second task instruction associated with the first grant-type information for the first SCH of the first CC in the first set of CCs into the first task buffer of the first TS.
  • the controlling, by the first core of the microcontroller cluster, the first operations performed by the first TS to generate the first set of commands for the first set of hardware accelerators comprises retrieving, from the first slot-scheduler buffer, second monitoring-occasion information for a second CCH of the second CC in the first set of CCs. In some embodiments, the controlling, by the first core of the microcontroller cluster, the first operations performed by the first TS to generate the first set of commands for the first set of hardware accelerators comprises retrieving, from the second slot-scheduler buffer, second grant-type information for a second SCH of the second CC in the first set of CCs.
  • the controlling, by the first core of the microcontroller cluster, the first operations performed by the first TS to generate the first set of commands for the first set of hardware accelerators comprises pushing a third task instruction associated with the second monitoring-occasion information for the second CCH of the second CC in the first set of CCs into the second task buffer of the first TS. In some embodiments, the controlling, by the first core of the microcontroller cluster, the first operations performed by the first TS to generate the first set of commands for the first set of hardware accelerators comprises pushing a fourth task instruction associated with the second grant-type information for the second SCH of the second CC in the first set of CCs into the second task buffer of the first TS. In some embodiments, the first task buffer and the second task buffer may be different.
  • the method may further include, in response to a first timestamp-trigger or first event-trigger, accessing, by the first TS, the first task instruction from the first task buffer.
  • the method may include identifying, by the first TS, a first memory address of a first instruction based on the first task instruction.
  • the method may include retrieving, by the first TS, the first instruction from a first location associated with the first memory address.
  • the method may include executing, by the first TS, the first instruction to generate a first command for the first set of hardware accelerators.
  • the method may include, after executing the first instruction, identifying, by the first TS, a second memory address of a second instruction based on the second task instruction. In some embodiments, the method may include retrieving, by the first TS, the second instruction from a second location associated with the second memory address. In some embodiments, the method may include executing, by the first TS, the second instruction to generate a second command for the first set of hardware accelerators.
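The task-buffer flow recited in the bullets above — the core pushes per-CC task instructions, and the TS, on a timestamp- or event-trigger, identifies each instruction's memory address, retrieves it, and executes it to generate a command for the hardware accelerators — can be modeled in software. The following Python sketch is purely illustrative: the TS described in this disclosure is a hardware block, and every class, field, and address name below is an assumption made for the example, not taken from the disclosure.

```python
from collections import deque


class TaskSequencer:
    """Illustrative software model of a per-timing-group task sequencer (TS)."""

    def __init__(self, instruction_memory):
        # instruction_memory: hypothetical map of memory address -> instruction
        # (modeled here as a callable that produces an accelerator command).
        self.instruction_memory = instruction_memory
        self.task_buffers = {}  # one task buffer per component carrier (CC)

    def push_task(self, cc_id, task_instruction):
        # The core pushes a task instruction (e.g., a monitoring-occasion or
        # grant-type work item) into the task buffer for that CC.
        self.task_buffers.setdefault(cc_id, deque()).append(task_instruction)

    def on_trigger(self, cc_id):
        # On a timestamp-trigger or event-trigger, drain the CC's buffer in
        # order: identify the instruction's memory address, retrieve the
        # instruction, and execute it to generate an accelerator command.
        commands = []
        buf = self.task_buffers.get(cc_id, deque())
        while buf:
            task = buf.popleft()
            addr = task["instr_addr"]              # identify memory address
            instr = self.instruction_memory[addr]  # retrieve instruction
            commands.append(instr(task))           # execute -> command
        return commands
```

In this model, chained execution ("after executing the first instruction, identify the second memory address") falls out of draining the buffer in FIFO order on a single trigger.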

Abstract

According to an aspect of the present disclosure, a microcontroller cluster for a baseband chip is provided. The microcontroller cluster may include a master core, a first core, and a second core. The master core may identify a first set of component carriers (CCs) as at least one first timing group and a second set of CCs as at least one second timing group. The master core may assign the at least one first timing group to the first core and the at least one second timing group to the second core. The first core may control first operations performed by a first task sequencer (TS) to generate a first set of commands for the first set of hardware accelerators. The second core may control second operations performed by a second TS to generate a second set of commands for the second set of hardware accelerators.

Description

APPARATUS AND METHOD FOR TWO-DIMENSIONAL SCHEDULING OF DOWNLINK LAYER 1 OPERATIONS
BACKGROUND
[0001] Embodiments of the present disclosure relate to apparatus and method for wireless communication.
[0002] Wireless communication systems are widely deployed to provide various telecommunication services such as telephony, video, data, messaging, and broadcasts. In cellular communication, such as the 4th-generation (4G) Long Term Evolution (LTE) and the 5th-generation (5G) New Radio (NR), the 3rd Generation Partnership Project (3GPP) defines various mechanisms for scheduling downlink (DL) Layer 1 operations implemented by a baseband chip.
SUMMARY
[0003] According to one aspect of the present disclosure, a baseband chip is provided. The baseband chip may include a first task sequencer (TS) configured to generate a first set of commands for a first set of hardware accelerators associated with at least one first timing group. The baseband chip may include a second TS configured to generate a second set of commands for a second set of hardware accelerators associated with at least one second timing group. The baseband chip may include a microcontroller cluster with a master core, a first core, and a second core. The master core may be configured to identify a first set of component carriers (CCs) as the at least one first timing group and a second set of CCs as the at least one second timing group. The master core may be configured to assign the at least one first timing group to the first core and the at least one second timing group to the second core. The first core may be configured to control first operations performed by the first TS to generate the first set of commands for the first set of hardware accelerators. The second core may be configured to control second operations performed by the second TS to generate the second set of commands for the second set of hardware accelerators.
[0004] According to another aspect of the present disclosure, a microcontroller cluster for a baseband chip is provided. The microcontroller cluster may include a master core, a first core, and a second core. The master core may be configured to identify a first set of CCs as the at least one first timing group and a second set of CCs as the at least one second timing group. The master core may be configured to assign the at least one first timing group to the first core and the at least one second timing group to the second core. The first core may be configured to control first operations performed by a first TS to generate a first set of commands for the first set of hardware accelerators. The second core may be configured to control second operations performed by a second TS to generate a second set of commands for the second set of hardware accelerators.
[0005] According to still another aspect of the present disclosure, a method of wireless communication of a baseband chip is provided. The method may include identifying, by a master core of a microcontroller cluster, a first set of CCs as at least one first timing group and a second set of CCs as at least one second timing group. The method may include assigning, by the master core of the microcontroller cluster, the at least one first timing group to the first core and the at least one second timing group to the second core. The method may include controlling, by a first core of the microcontroller cluster, first operations performed by a first TS to generate a first set of commands for a first set of hardware accelerators associated with the at least one first timing group. The method may include controlling, by a second core of the microcontroller cluster, second operations performed by the second TS to generate a second set of commands for a second set of hardware accelerators associated with the at least one second timing group. The method may include generating, by the first TS, a first set of commands for the first set of hardware accelerators associated with the at least one first timing group based on first task instructions from the first core. The method may include generating, by the second TS, a second set of commands for the second set of hardware accelerators associated with the at least one second timing group based on a second set of task instructions from the second core.
[0006] These illustrative embodiments are mentioned not to limit or define the present disclosure, but to provide examples to aid understanding thereof. Additional embodiments are discussed in the Detailed Description, and further description is provided there.
BRIEF DESCRIPTION OF THE DRAWINGS
[0007] The accompanying drawings, which are incorporated herein and form a part of the specification, illustrate embodiments of the present disclosure and, together with the description, further serve to explain the principles of the present disclosure and to enable a person skilled in the pertinent art to make and use the present disclosure.
[0008] FIG. 1 illustrates an exemplary wireless network, according to some embodiments of the present disclosure.
[0009] FIG. 2 illustrates a block diagram of an exemplary node, according to some embodiments of the present disclosure.
[0010] FIG. 3 illustrates a block diagram of an exemplary apparatus including a baseband chip, a radio frequency (RF) chip, and a host chip, according to some embodiments of the present disclosure.
[0011] FIG. 4 illustrates an exemplary two-dimensional scheduling diagram for DL Layer 1 operations implemented by the baseband chip of FIG. 3, according to some embodiments of the present disclosure.
[0012] FIG. 5 is a flowchart of a first method of wireless communication, according to some embodiments of the present disclosure.
[0013] Embodiments of the present disclosure will be described with reference to the accompanying drawings.
DETAILED DESCRIPTION
[0014] Although specific configurations and arrangements are discussed, it should be understood that this is done for illustrative purposes only. A person skilled in the pertinent art will recognize that other configurations and arrangements can be used without departing from the spirit and scope of the present disclosure. It will be apparent to a person skilled in the pertinent art that the present disclosure can also be employed in a variety of other applications.
[0015] It is noted that references in the specification to “one embodiment,” “an embodiment,” “an example embodiment,” “some embodiments,” “certain embodiments,” etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases do not necessarily refer to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it would be within the knowledge of a person skilled in the pertinent art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
[0016] In general, terminology may be understood at least in part from usage in context. For example, the term “one or more” as used herein, depending at least in part upon context, may be used to describe any feature, structure, or characteristic in a singular sense or may be used to describe combinations of features, structures or characteristics in a plural sense. Similarly, terms, such as “a,” “an,” or “the,” again, may be understood to convey a singular usage or to convey a plural usage, depending at least in part upon context. In addition, the term “based on” may be understood as not necessarily intended to convey an exclusive set of factors and may, instead, allow for existence of additional factors not necessarily expressly described, again, depending at least in part on context.
[0017] Various aspects of wireless communication systems will now be described with reference to various apparatus and methods. These apparatus and methods will be described in the following detailed description and illustrated in the accompanying drawings by various blocks, modules, units, components, circuits, steps, operations, processes, algorithms, etc. (collectively referred to as “elements”). These elements may be implemented using electronic hardware, firmware, computer software, or any combination thereof. Whether such elements are implemented as hardware, firmware, or software depends upon the particular application and design constraints imposed on the overall system.
[0018] The techniques described herein may be used for various wireless communication networks, such as code division multiple access (CDMA) system, time division multiple access (TDMA) system, frequency division multiple access (FDMA) system, orthogonal frequency division multiple access (OFDMA) system, single-carrier frequency division multiple access (SC- FDMA) system, wireless local area network (WLAN) system, and other networks. The terms “network” and “system” are often used interchangeably. A CDMA network may implement a radio access technology (RAT), such as Universal Terrestrial Radio Access (UTRA), evolved UTRA (E-UTRA), CDMA 2000, etc. A TDMA network may implement a RAT, such as the Global System for Mobile Communications (GSM). An OFDMA network may implement a RAT, such as LTE or NR. A WLAN system may implement a RAT, such as Wi-Fi. The techniques described herein may be used for the wireless networks and RATs mentioned above, as well as other wireless networks and RATs.
[0019] In cellular and/or Wi-Fi communication, Layer 1 (also referred to as “Radio Layer 1” or the “physical (PHY) layer”) is responsible for error detection, forward error correction (FEC) encoding/decoding of the transport channel, hybrid-automatic repeat-request (HARQ) soft-combining, de-rate-matching, demapping, demodulation of the physical channels, channel estimation and other radio characteristic measurements, just to name a few. Layer 1 interfaces with Layer 2 and passes data packets up or down the protocol stack structure, depending on whether the data packets are associated with uplink (UL) or downlink (DL) transmission.
[0020] A user equipment (UE) receives a DL transmission via time/frequency resources in a physical downlink shared channel (PDSCH), which the base station allocates statically or dynamically. Thus, the UE’s shared channel (SCH) activity can be either asynchronous or synchronous. For asynchronous SCH activity, the base station generally sends a DL grant before each DL transmission to indicate the time/frequency resources in which the UE will receive an incoming DL packet. DL grants are sent using predefined time/frequency resources in a physical downlink control channel (PDCCH). The UE may be required to monitor and decode the predefined time/frequency resources of the PDCCH to determine whether it has an incoming DL transmission. Thus, even though the UE’s SCH activity may be asynchronous, its control channel (CCH) activity is synchronous because the time/frequency resource(s) the UE is required to monitor is/are predefined. For synchronous SCH activity, the base station may allocate predefined time/frequency resources in the PDSCH using semi-persistent scheduling (SPS). When SPS is configured, the UE may not be required to monitor the PDCCH for DL grants since it knows the interval of the PDSCH resources used to carry DL transmissions.
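The distinction drawn in the paragraph above — asynchronous SCH activity requires PDCCH grant monitoring, while SPS-configured (synchronous) SCH activity recurs at a known interval — can be sketched as a per-slot classification. This is a hedged illustration only: the field names (`sps_period`, `sps_offset`, `pdcch_period`) and the slot-granular model are assumptions made for the example, not parameters defined in the disclosure.

```python
def downlink_activity(cc, slot):
    """Classify a CC's DL activity in a given slot (illustrative model)."""
    if cc.get("sps_period") is not None:
        # Synchronous SCH activity: SPS is configured, so PDSCH occasions
        # recur at a known period and no DL-grant monitoring is needed.
        pdsch = slot % cc["sps_period"] == cc.get("sps_offset", 0)
        return {"monitor_pdcch": False, "pdsch_expected": pdsch}
    # Asynchronous SCH activity: the UE must decode its predefined PDCCH
    # monitoring occasions to learn whether a DL packet is scheduled; note
    # that the CCH activity itself is still synchronous (predefined).
    monitor = slot % cc.get("pdcch_period", 1) == 0
    return {"monitor_pdcch": monitor, "pdsch_expected": None}
```

For example, an SPS-configured CC with a period of 4 slots never requires grant monitoring, while a dynamically scheduled CC must check each of its predefined monitoring occasions.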
[0021] When the UE is configured for carrier aggregation (CA), multiple CCs are typically aggregated for reception and transmission. As such, the UE may receive multiple DL grants concurrently, one from each CC and cell, which identify the scheduled DL packet transmission on each CC. The CCs used in CA are not required to have the same transmission time interval (TTI) or subcarrier spacing (SCS). Thus, the slot and symbol at which the UE is required to perform DL Layer 1 operations (e.g., demodulation, de-mapping, de-rate-matching, channel estimation, etc.) may differ from one CC to another, and scheduling DL Layer 1 operations for multiple CCs with different timing requirements using software-based techniques poses a significant challenge in terms of time, computational resources, and power consumption.
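The timing mismatch described above follows directly from 5G NR numerology (3GPP TS 38.211): SCS = 15 × 2^µ kHz, and each 1 ms subframe holds 2^µ slots, so slot duration is 1/2^µ ms. The sketch below computes the slot duration per CC to show why slot boundaries of aggregated CCs with different SCS do not align; the function name is an assumption for the example.

```python
def slot_duration_ms(scs_khz):
    # 5G NR numerology: SCS = 15 * 2**mu kHz; a 1 ms subframe contains
    # 2**mu slots, so each slot lasts 1 / 2**mu milliseconds.
    mu = {15: 0, 30: 1, 60: 2, 120: 3, 240: 4}[scs_khz]
    return 1.0 / (2 ** mu)
```

A CC at 30 kHz SCS (0.5 ms slots) and a CC at 120 kHz SCS (0.125 ms slots) hit four slot boundaries on the second carrier for every one on the first, which is why their Layer 1 operations cannot share a single slot-level schedule.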
[0022] Thus, there exists an unmet need for a scheduling technique that schedules DL Layer 1 operations (e.g., demodulation, de-mapping, de-rate-matching, channel estimation, etc.) for multiple CCs with different timing requirements efficiently and using fewer computational resources and less power.
[0023] To overcome these and other challenges, the present disclosure provides a baseband chip with an exemplary two-dimensional control architecture. The two-dimensional control architecture may include a slot scheduler configured to schedule DL Layer 1 operations at the slot-level and a microcontroller (uC) cluster configured to schedule DL Layer 1 operations at the symbol-level. For instance, the slot scheduler may generate monitoring-occasion information (e.g., CCH) and grant-type information (e.g., SCH), which are used by the uC cluster to schedule DL Layer 1 task instructions that are executed by a TS. Based on the task instructions, the TS generates commands, which are implemented by Layer 1 hardware accelerators to perform the DL Layer 1 operations at the appropriate time.
[0024] The uC cluster may include multiple cores, each of which is assigned a particular set of CCs by a master core. Among the CCs assigned to the UE for CA, the master core may identify two or more timing groups. Different timing groups may include, e.g., CCs with synchronous SCH activity, CCs with asynchronous SCH activity, CCs with a first TTI, CCs with a second TTI, CCs with a first SCS (also referred to as “numerology”), CCs with a second SCS, etc. Each timing group may be assigned a different core, which performs slot-level scheduling of tasks that are sequenced by a dedicated hardware-based TS. The hardware-based TS executes slot-level instructions, as dictated by its core, to generate commands implemented by Layer 1 hardware accelerators to perform various DL Layer 1 operations. By separating DL Layer 1 operations by timing group, the slot scheduler and uC cluster may implement a two-dimensional DL Layer 1 scheduling mechanism that is more efficient and requires fewer computational resources and less power than software-based scheduling techniques. Additional details of the two-dimensional control architecture of the present baseband chip and the associated DL Layer 1 scheduling technique are provided below in connection with FIGs. 1-5.
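The master core's role described above — partitioning CCs into timing groups by shared timing characteristics and assigning each group to its own core — can be sketched as follows. This is a minimal illustration under assumed inputs: the dictionary keys (`tti_ms`, `scs_khz`, `synchronous_sch`) and the round-robin assignment policy are hypothetical, not specified by the disclosure.

```python
def form_timing_groups(ccs):
    # Group CCs that share timing characteristics (TTI, SCS/numerology,
    # and synchronous vs. asynchronous SCH activity) into timing groups.
    groups = {}
    for cc in ccs:
        key = (cc["tti_ms"], cc["scs_khz"], cc["synchronous_sch"])
        groups.setdefault(key, []).append(cc["id"])
    return groups


def assign_to_cores(groups, core_ids):
    # Assign each timing group to a dedicated core of the uC cluster;
    # each core then drives its own hardware-based task sequencer (TS).
    return {core_ids[i % len(core_ids)]: group
            for i, group in enumerate(groups.values())}
```

Grouping first and assigning second keeps each core's slot-level schedule homogeneous: every CC it manages shares the same slot boundaries and SCH timing model.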
[0025] Although the following scheduling techniques are described in connection with DL Layer 1 operations, the same or similar techniques may be used to schedule UL Layer 2, UL Layer 3, and/or UL Layer 4 data packet processing to improve efficiency and optimize power consumption at the higher-layer subsystems in the UL direction without departing from the scope of the present disclosure.
[0026] FIG. 1 illustrates an exemplary wireless network 100, in which some aspects of the present disclosure may be implemented, according to some embodiments of the present disclosure. As shown in FIG. 1, wireless network 100 may include a network of nodes, such as user equipment 102, an access node 104, and a core network element 106. User equipment 102 may be any terminal device, such as a mobile phone, a desktop computer, a laptop computer, a tablet, a vehicle computer, a gaming console, a printer, a positioning device, a wearable electronic device, a smart sensor, or any other device capable of receiving, processing, and transmitting information, such as any member of a vehicle to everything (V2X) network, a cluster network, a smart grid node, or an Internet-of-Things (IoT) node. It is understood that user equipment 102 is illustrated as a mobile phone simply by way of illustration and not by way of limitation.
[0027] Access node 104 may be a device that communicates with user equipment 102, such as a wireless access point, a base station (BS), a Node B, an enhanced Node B (eNodeB or eNB), a next-generation NodeB (gNodeB or gNB), a cluster master node, or the like. Access node 104 may have a wired connection to user equipment 102, a wireless connection to user equipment 102, or any combination thereof. Access node 104 may be connected to user equipment 102 by multiple connections, and user equipment 102 may be connected to other access nodes in addition to access node 104. Access node 104 may also be connected to other user equipments. When configured as a gNB, access node 104 may operate in millimeter wave (mmW) frequencies and/or near mmW frequencies in communication with the user equipment 102. When access node 104 operates in mmW or near mmW frequencies, the access node 104 may be referred to as an mmW base station. Extremely high frequency (EHF) is part of the radio frequency (RF) in the electromagnetic spectrum. EHF has a range of 30 GHz to 300 GHz and a wavelength between 1 millimeter and 10 millimeters. Radio waves in the band may be referred to as a millimeter wave. Near mmW may extend down to a frequency of 3 GHz with a wavelength of 100 millimeters. The super high frequency (SHF) band extends between 3 GHz and 30 GHz, also referred to as centimeter wave. Communications using the mmW or near mmW radio frequency band have extremely high path loss and a short range. The mmW base station may utilize beamforming with user equipment 102 to compensate for the extremely high path loss and short range. It is understood that access node 104 is illustrated by a radio tower by way of illustration and not by way of limitation.
[0028] Access nodes 104, which are collectively referred to as E-UTRAN in the evolved packet core network (EPC) and as NG-RAN in the 5G core network (5GC), interface with the EPC and 5GC, respectively, through dedicated backhaul links (e.g., S1 interface). In addition to other functions, access node 104 may perform one or more of the following functions: transfer of user data, radio channel ciphering and deciphering, integrity protection, header compression, mobility control functions (e.g., handover, dual connectivity), inter-cell interference coordination, connection setup and release, load balancing, distribution for non-access stratum (NAS) messages, NAS node selection, synchronization, radio access network (RAN) sharing, multimedia broadcast multicast service (MBMS), subscriber and equipment trace, RAN information management (RIM), paging, positioning, and delivery of warning messages. Access nodes 104 may communicate directly or indirectly (e.g., through the 5GC) with each other over backhaul links (e.g., X2 interface). The backhaul links may be wired or wireless.
[0029] Core network element 106 may serve access node 104 and user equipment 102 to provide core network services. Examples of core network element 106 may include a home subscriber server (HSS), a mobility management entity (MME), a serving gateway (SGW), or a packet data network gateway (PGW). These are examples of core network elements of an evolved packet core (EPC) system, which is a core network for the LTE system. Other core network elements may be used in LTE and in other communication systems. In some embodiments, core network element 106 includes an access and mobility management function (AMF), a session management function (SMF), or a user plane function (UPF) of the 5GC for the NR system. The AMF may be in communication with a Unified Data Management (UDM). The AMF is the control node that processes the signaling between the user equipment 102 and the 5GC.
Generally, the AMF provides QoS flow and session management. All user Internet protocol (IP) packets are transferred through the UPF. The UPF provides user equipment (UE) IP address allocation as well as other functions. The UPF is connected to the IP Services. The IP Services may include the Internet, an intranet, an IP Multimedia Subsystem (IMS), a PS Streaming Service, and/or other IP services. It is understood that core network element 106 is shown as a set of rack-mounted servers by way of illustration and not by way of limitation.
[0030] Core network element 106 may connect with a large network, such as the Internet 108, or another Internet Protocol (IP) network, to communicate packet data over any distance. In this way, data from user equipment 102 may be communicated to other user equipments connected to other access points, including, for example, a computer 110 connected to Internet 108, for example, using a wired connection or a wireless connection, or to a tablet 112 wirelessly connected to Internet 108 via a router 114. Thus, computer 110 and tablet 112 provide additional examples of possible user equipments, and router 114 provides an example of another possible access node. [0031] A generic example of a rack-mounted server is provided as an illustration of core network element 106. However, there may be multiple elements in the core network including database servers, such as a database 116, and security and authentication servers, such as an authentication server 118. Database 116 may, for example, manage data related to user subscription to network services. A home location register (HLR) is an example of a standardized database of subscriber information for a cellular network. Likewise, authentication server 118 may handle authentication of users, sessions, and so on. In the NR system, an authentication server function (AUSF) device may be the entity to perform user equipment authentication. In some embodiments, a single server rack may handle multiple such functions, such that the connections between core network element 106, authentication server 118, and database 116, may be local connections within a single rack.
[0032] Each element in FIG. 1 may be considered a node of wireless network 100. More detail regarding the possible implementation of a node is provided by way of example in the description of a node 200 in FIG. 2. Node 200 may be configured as user equipment 102, access node 104, or core network element 106 in FIG. 1. Similarly, node 200 may also be configured as computer 110, router 114, tablet 112, database 116, or authentication server 118 in FIG. 1. As shown in FIG. 2, node 200 may include a processor 202, a memory 204, and a transceiver 206. These components are shown as connected to one another by a bus, but other connection types are also permitted. When node 200 is user equipment 102, additional components may also be included, such as a user interface (UI), sensors, and the like. Similarly, node 200 may be implemented as a blade in a server system when node 200 is configured as core network element 106. Other implementations are also possible.
[0034] Transceiver 206 may include any suitable device for sending and/or receiving data. Node 200 may include one or more transceivers, although only one transceiver 206 is shown for simplicity of illustration. An antenna 208 is shown as a possible communication mechanism for node 200. Multiple antennas and/or arrays of antennas may be utilized for receiving multiple spatially multiplexed data streams. Additionally, examples of node 200 may communicate using wired techniques rather than (or in addition to) wireless techniques. For example, access node 104 may communicate wirelessly to user equipment 102 and may communicate by a wired connection (for example, by optical or coaxial cable) to core network element 106. Other communication hardware, such as a network interface card (NIC), may be included as well.
[0034] As shown in FIG. 2, node 200 may include processor 202. Although only one processor is shown, it is understood that multiple processors can be included. Processor 202 may include microprocessors, microcontroller units (MCUs), digital signal processors (DSPs), application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), programmable logic devices (PLDs), state machines, gated logic, discrete hardware circuits, and other suitable hardware configured to perform the various functions described throughout the present disclosure. Processor 202 may be a hardware device having one or more processing cores. Processor 202 may execute software. Software shall be construed broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software modules, applications, software applications, software packages, routines, subroutines, objects, executables, threads of execution, procedures, functions, etc., whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise. Software can include computer instructions written in an interpreted language, a compiled language, or machine code. Other techniques for instructing hardware are also permitted under the broad category of software. [0035] As shown in FIG. 2, node 200 may also include memory 204. Although only one memory is shown, it is understood that multiple memories can be included. Memory 204 can broadly include both memory and storage. 
For example, memory 204 may include random-access memory (RAM), read-only memory (ROM), static RAM (SRAM), dynamic RAM (DRAM), ferroelectric RAM (FRAM), electrically erasable programmable ROM (EEPROM), compact disc read-only memory (CD-ROM) or other optical disk storage, hard disk drive (HDD), such as magnetic disk storage or other magnetic storage devices, Flash drive, solid-state drive (SSD), or any other medium that can be used to carry or store desired program code in the form of instructions that can be accessed and executed by processor 202. Broadly, memory 204 may be embodied by any computer-readable medium, such as a non-transitory computer-readable medium.
[0036] Processor 202, memory 204, and transceiver 206 may be implemented in various forms in node 200 for performing wireless communication functions. In some embodiments, at least two of processor 202, memory 204, and transceiver 206 are integrated into a single system- on-chip (SoC) or a single system-in-package (SiP). In some embodiments, processor 202, memory 204, and transceiver 206 of node 200 are implemented (e.g., integrated) on one or more SoCs. In one example, processor 202 and memory 204 may be integrated on an application processor (AP) SoC (sometimes known as a “host,” referred to herein as a “host chip”) that handles application processing in an operating system (OS) environment, including generating raw data to be transmitted. In another example, processor 202 and memory 204 may be integrated on a baseband processor (BP) SoC (sometimes known as a “modem,” referred to herein as a “baseband chip”) that converts the raw data, e.g., from the host chip, to signals that can be used to modulate the carrier frequency for transmission, and vice versa, which can run a real-time operating system (RTOS). In still another example, processor 202 and transceiver 206 (and memory 204 in some cases) may be integrated on an RF SoC (sometimes known as a “transceiver,” referred to herein as an “RF chip”) that transmits and receives RF signals with antenna 208. It is understood that in some examples, some or all of the host chip, baseband chip, and RF chip may be integrated as a single SoC. For example, a baseband chip and an RF chip may be integrated into a single SoC that manages all the radio functions for cellular communication. [0037] Referring back to FIG. 1, in some embodiments, user equipment 102 includes a baseband chip designed with an exemplary two-dimensional control architecture, which achieves efficient, low-power scheduling of DL Layer 1 operations. 
The exemplary two-dimensional control architecture includes a slot scheduler configured to schedule DL Layer 1 operations at the slot-level and a uC cluster configured to schedule DL Layer 1 operations at the symbol-level. For instance, the slot scheduler may generate monitoring-occasion information (e.g., CCH) and grant-type information (e.g., SCH), which are used by the uC cluster to schedule DL Layer 1 task instructions that are executed by a hardware-based TS. Using the task instructions, the TS generates commands, which are implemented by Layer 1 hardware accelerators to perform the DL Layer 1 operations at the appropriate time. Additional details of the exemplary two-dimensional control architecture and the associated DL Layer 1 scheduling technique are provided below in connection with FIGs. 3-5.
[0038] FIG. 3 illustrates a block diagram of an apparatus 300 including a baseband chip 302, an RF chip 304, and a host chip 306, according to some embodiments of the present disclosure. FIG. 4 illustrates an exemplary two-dimensional scheduling diagram 400 for DL Layer 1 operations implemented by baseband chip 302 in FIG. 3, according to some embodiments of the present disclosure. FIGs. 3 and 4 will be described together.
[0039] Referring to FIG. 3, apparatus 300 may be implemented as user equipment 102 of wireless network 100 in FIG. 1. As shown in FIG. 3, apparatus 300 may include baseband chip 302, RF chip 304, host chip 306, and one or more antennas 310. In some embodiments, baseband chip 302 is implemented by a processor and a memory, and RF chip 304 is implemented by a processor, a memory, and a transceiver. Besides the on-chip memory 318 (also known as “internal memory,” e.g., registers, buffers, or caches) on each chip 302, 304, or 306, apparatus 300 may further include an external memory 308 (e.g., the system memory or main memory) that can be shared by each chip 302, 304, or 306 through the system/main bus. Although baseband chip 302 is illustrated as a standalone SoC in FIG. 3, it is understood that in one example, baseband chip 302 and RF chip 304 may be integrated as one SoC or one SiP; in another example, baseband chip 302 and host chip 306 may be integrated as one SoC or one SiP; in still another example, baseband chip 302, RF chip 304, and host chip 306 may be integrated as one SoC or one SiP, as described above.
[0040] In the uplink, host chip 306 may generate raw data and send it to baseband chip 302 for encoding, modulation, and mapping. Interface 314 of baseband chip 302 may receive the data from host chip 306. Baseband chip 302 may also access the raw data generated by host chip 306 and stored in external memory 308, for example, using direct memory access (DMA). Baseband chip 302 may first encode (e.g., by source coding and/or channel coding) the raw data and modulate the coded data using any suitable modulation techniques, such as multi-phase shift keying (MPSK) modulation or quadrature amplitude modulation (QAM). Baseband chip 302 may perform any other functions, such as symbol or layer mapping, to convert the raw data into a signal that can be used to modulate the carrier frequency for transmission. Baseband chip 302 may send the modulated signal to RF chip 304 via interface 314. RF chip 304, through the transmitter, may convert the modulated signal in the digital form into analog signals, i.e., RF signals, and perform any suitable front-end RF functions, such as filtering, digital pre-distortion, up-conversion, or sample-rate conversion. Antenna 310 (e.g., an antenna array) may transmit the RF signals provided by the transmitter of RF chip 304.
[0041] In the downlink, antenna 310 may receive RF signals from an access node or other wireless device. The RF signals may be passed to the receiver (Rx) of RF chip 304. RF chip 304 may perform any suitable front-end RF functions, such as filtering, IQ imbalance compensation, down-conversion, or sample-rate conversion, and convert the RF signals (e.g., transmissions) into low-frequency digital signals (baseband signals) that can be processed by baseband chip 302.
[0042] Still referring to FIG. 3, baseband chip 302 includes a Layer 1 subsystem 350 designed with the exemplary two-dimensional control architecture. For instance, Layer 1 subsystem 350 includes a lower-Layer 1 slot scheduler (LL1) 320, a uC cluster 322, a first TS 326a, a second TS 326b, a third TS 326c, a first set of hardware accelerators 330a (as used herein “a set of hardware accelerators” may include one or more hardware accelerators), a second set of hardware accelerators 330b, a third set of hardware accelerators 330c, etc. By way of example and not limitation, first set of hardware accelerators 330a may perform demapping functions, while second set of hardware accelerators 330b may also perform demapping. Third set of hardware accelerators 330c may perform the same or different function(s) as first set of hardware accelerators 330a and/or second set of hardware accelerators 330b. Moreover, a single TS may include primitives to handle multiple priorities of different timing groups. The granularity of atomicity of tasks for each task buffer (also referred to as “task queue”) may be adjusted by specially designed primitives for exclusive access acquisition to resources by a single queue and the release of the TS resources by an exclusive access release by that same queue. The “exclusive access” acquisition process per task buffer itself is contention-based, with priorities assignable to each task buffer. In some embodiments, the same timing group may be assigned to multiple hardware accelerators.
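The contention-based exclusive-access acquisition described above can be illustrated with a minimal sketch. All names, the priority encoding, and the arbitration policy are illustrative assumptions, not details from the disclosure: each task buffer requests the TS resource, and when the resource is free the highest-priority pending request wins.

```python
# Hypothetical sketch of the per-task-buffer exclusive-access primitive:
# buffers contend for the TS, with an assignable priority per buffer.
class TaskSequencerLock:
    def __init__(self):
        self.owner = None      # task buffer currently holding the TS
        self.pending = []      # (priority, buffer_id) outstanding requests

    def request(self, buffer_id, priority):
        """A task buffer requests exclusive access; higher value wins arbitration."""
        self.pending.append((priority, buffer_id))
        self._arbitrate()

    def release(self, buffer_id):
        """Only the owning buffer may release the TS resource."""
        if self.owner == buffer_id:
            self.owner = None
            self._arbitrate()

    def _arbitrate(self):
        # Grant the TS to the highest-priority pending buffer, if idle.
        if self.owner is None and self.pending:
            self.pending.sort(reverse=True)
            _, self.owner = self.pending.pop(0)
```

For example, if buffer "q0" acquires an idle TS and "q1" then requests with a higher priority, "q1" is granted the TS only once "q0" releases it, which models the atomicity boundary per queue.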
[0043] uC cluster 322 may include a plurality of cores, e.g., a master core 324e, a first core 324a, a second core 324b, a third core 324c, an auxiliary core 324d, etc. It is understood that uC cluster 322 may include more or fewer than five cores without departing from the scope of the present disclosure. When CA is configured for apparatus 300, master core 324e may identify, from the plurality of CCs, different timing groups. A timing group may be identified based on timing characteristics that are shared among a set of CCs assigned to apparatus 300, where the set of CCs is less than the total number of CCs assigned for CA. The timing characteristic(s) used to identify a timing group may include, e.g., a TTI of a first length, a TTI of a second length different than the first length, synchronous SCH activity (e.g., SPS-based SCH activity), asynchronous SCH activity (e.g., grant-based SCH activity), a first SCS, a second SCS different than the first SCS, just to name a few. In some non-limiting embodiments, more than one timing group may be identified based on timing characteristics and assigned to the same task sequencer. Once the timing groups are identified, master core 324e may determine the clock frequency needed for DL Layer 1 scheduling/operations for each set of CCs based on the timing group’s timing characteristics. The clock frequency may be identified based on a look-up table that correlates clock frequencies with timing characteristics, for example. The look-up table may be maintained in on-chip memory 318 and/or external memory 308. Master core 324e may assign a clock frequency to each of first core 324a, second core 324b, and third core 324c. The clock frequencies may be the same or different. [0044] By way of example and not limitation, assume master core 324e identifies three different timing groups, timing group 0 (TG0), timing group 1 (TG1), and timing group 2 (TG2), from among the plurality of CCs assigned to apparatus 300.
In this instance, master core 324e may assign TG0 to first core 324a, TG1 to second core 324b, and TG2 to third core 324c. First core 324a is responsible for symbol-level scheduling of first task instructions executed by first TS 326a, second core 324b is responsible for symbol-level scheduling of second task instructions executed by second TS 326b, and third core 324c is responsible for symbol-level scheduling of third task instructions executed by third TS 326c. Each of first, second, and third cores 324a, 324b, 324c may perform symbol-level scheduling for their respective TS using slot-level scheduling information generated by LL1 320. Additionally and/or alternatively, master core 324e may assign TG0 and TG1 to first core 324a and TG2 to second core 324b.
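The grouping and clock-frequency selection performed by the master core can be sketched as follows. The grouping key (SCS and TTI length) and the look-up-table values are illustrative assumptions chosen from the timing characteristics listed above; they are not taken from the disclosure.

```python
# Illustrative sketch: group CCs that share timing characteristics into
# timing groups, then pick a scheduling clock per group from a look-up table.
def identify_timing_groups(ccs):
    """Group CC ids whose (SCS, TTI length) characteristics match."""
    groups = {}
    for cc in ccs:
        key = (cc["scs_khz"], cc["tti_symbols"])   # shared timing characteristics
        groups.setdefault(key, []).append(cc["id"])
    return list(groups.values())

# Hypothetical look-up table correlating SCS (kHz) with a clock frequency (MHz).
CLOCK_LUT_MHZ = {15: 200, 30: 400, 120: 800}

def clock_for_group(ccs, group):
    """Look up the clock frequency for a timing group via its first CC's SCS."""
    scs = next(c["scs_khz"] for c in ccs if c["id"] == group[0])
    return CLOCK_LUT_MHZ[scs]
```

Here two 15-kHz CCs would land in one timing group and a 30-kHz CC in another, with the second group assigned a faster scheduling clock.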
[0045] Referring to FIG. 4, for each timing group, LL1 320 may generate monitoring-occasion information (e.g., CCH) and grant-type information (e.g., SCH), which are written to a corresponding mailbox 360. Each timing group may have an assigned CCH fast mailbox (e.g., a first slot-scheduler buffer) into which LL1 320 writes/pushes monitoring-occasion information associated with a PDCCH, a first SCH fast mailbox (e.g., a second slot-scheduler buffer) into which LL1 320 writes/pushes first grant-type information associated with regular-priority PDSCH activity, and a second SCH fast mailbox (e.g., a third slot-scheduler buffer) into which LL1 320 writes/pushes second grant-type information associated with high-priority PDSCH activity. Regular-priority SCH activity may be associated with DL Layer 1 operations baseband chip 302 performs for PDSCH resources located in a slot that occurs later in the time domain (e.g., k0 > 0) than the slot in which the DL grant allocating those PDSCH resources is received. Higher-priority SCH activity, on the other hand, may be associated with DL Layer 1 operations performed for PDSCH resources located in the same slot (e.g., k0 = 0) in which the DL grant allocating those PDSCH resources is received. Thus, the turnaround time for scheduling DL Layer 1 operations for same-slot PDSCH resources is shorter than that of different-slot PDSCH resources, and hence, same-slot activity is given a higher priority by the core. Higher-priority SCH activity may be associated with a retransmission and/or ultra-reliable low-latency communication (URLLC), for example.
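The three-mailbox split above reduces to a simple routing rule on the slot offset k0 between the DL grant and its PDSCH resources. The following sketch uses hypothetical mailbox names and entry fields to illustrate that rule; it is not an interface from the disclosure.

```python
# Simplified sketch of routing slot-scheduler output into the per-timing-group
# fast mailboxes: CCH info, same-slot (k0 = 0) SCH info, later-slot SCH info.
def route_to_mailbox(entry):
    if entry["kind"] == "CCH":
        return "cch_mailbox"                  # monitoring-occasion information
    if entry["k0"] == 0:
        return "sch_high_priority_mailbox"    # same-slot PDSCH: short turnaround
    return "sch_regular_priority_mailbox"     # later-slot PDSCH (k0 > 0)
```

Same-slot grants route to the high-priority mailbox precisely because their scheduling turnaround is the shortest.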
[0046] Monitoring-occasion information may indicate one or more slots in which the apparatus 300 is required to monitor the PDCCH for a DL grant or other downlink control information (DCI). Additionally and/or alternatively, the monitoring-occasion information may indicate a slot in which DL Layer 1 operations, e.g., such as demodulation, de-mapping, de-rate- matching, channel estimation, etc., are performed using the PDCCH. Grant-type information may indicate information such as the starting resource block (RB), the ending RB, the start symbol, the end symbol of the PDSCH resources allocated for a DL transmission.
[0047] Referring again to FIG. 3, master core 324e may assign first TS 326a to first core 324a, second TS 326b to second core 324b, and third TS 326c to third core 324c. First TS 326a may execute first task instructions to generate first commands implemented by first set of hardware accelerators 330a for TG0, second TS 326b may execute second task instructions to generate second commands implemented by second set of hardware accelerators 330b for TG1, and third TS 326c may execute third task instructions to generate third commands implemented by third set of hardware accelerators 330c for TG2. Each TS may execute task instructions based on symbol-level scheduling of DL Layer 1 tasks by its respective core. A core may schedule symbol-level DL Layer 1 operations by pushing/writing the memory address of an instruction into a task buffer located at its TS. The TS may include multiple task buffers each associated with a particular CC of its timing group. Moreover, the instructions described below may be maintained in on-chip memory 318, external memory 308, host chip 306, or elsewhere in apparatus 300.
[0048] For instance, first core 324a may retrieve monitoring-occasion information from its CCH fast mailbox first, higher-priority grant-type information from the higher-priority SCH fast mailbox second, and regular-priority grant-type information from the regular-priority SCH fast mailbox third. In some examples, first core 324a may retrieve all monitoring-occasion information (e.g., highest-priority) from the CCH fast mailbox first, retrieve all grant-type information (e.g., second highest-priority) from the higher-priority SCH fast mailbox second, and retrieve all grant-type information (e.g., lowest-priority) from the lower-priority SCH fast mailbox third. In so doing, first core 324a may perform symbol-level scheduling of tasks in the order of priority. In some other examples, first core 324a may perform round-robin retrieval of monitoring-occasion information, higher-priority grant-type information, and lower-priority grant-type information for a first CC before doing the same for a second CC. The following example of symbol-level scheduling by first core 324a and second core 324b is described in connection with the round-robin embodiment. It is understood that the same or similar operations may be performed but in a different order in the embodiment in which the CCH fast mailbox is emptied before moving on to retrieving grant-type information from the higher-priority SCH mailbox, and so on, without departing from the scope of the present disclosure. The following examples are limited to symbol-level scheduling based on slot-level monitoring-occasion information and slot-level higher-priority grant-type information. The same or similar operations may be performed for symbol-level scheduling that additionally and/or alternatively includes regular-priority grant-type information without departing from the scope of the present disclosure.
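The two retrieval orders described above can be contrasted in a short sketch. Function and variable names are hypothetical; each mailbox is modeled as a simple list of entries.

```python
# Illustrative sketch of the two mailbox-draining orders a core may use.
def drain_by_priority(cch, sch_hi, sch_lo):
    """Empty each mailbox fully, in strict priority order:
    all CCH first, then all higher-priority SCH, then all lower-priority SCH."""
    return list(cch) + list(sch_hi) + list(sch_lo)

def drain_round_robin(per_cc):
    """For each CC in turn, take its CCH entry, then its higher-priority SCH
    entry, then its lower-priority SCH entry, before moving to the next CC."""
    out = []
    for cc_id, (cch, sch_hi, sch_lo) in per_cc.items():
        out += cch + sch_hi + sch_lo
    return out
```

With two CCs, priority-order draining interleaves nothing (all monitoring-occasion entries come out first), while round-robin draining finishes one CC completely before starting the next.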
[0049] By way of example and not limitation, after retrieving first monitoring-occasion information, which is associated with a first CC in TG0, first core 324a may identify a first memory address of a first instruction associated with the first monitoring-occasion information. First core 324a may write/push a first task instruction that includes the first memory address of the first instruction into a first task buffer 328a of first TS 326a. The first task buffer 328a may be associated with the first CC of TG0. Then, first core 324a may retrieve first grant-type information (e.g., associated with the first CC of TG0) from the higher-priority SCH fast mailbox. First core 324a may identify a second memory address of a second instruction associated with the first grant-type information. First core 324a may write/push a second task instruction that includes the second memory address into the first task buffer 328a of first TS 326a. Assuming there is no other monitoring-occasion information and/or grant-type information for the first CC in TG0, first core 324a may move on to the second CC of TG0.
[0050] In this example, first core 324a retrieves second monitoring-occasion information (e.g., associated with a second CC in TG0) from the CCH fast mailbox. First core 324a may identify a third memory address associated with a third instruction based on the second monitoring-occasion information. First core 324a may write/push a third task instruction that includes the third memory address of the third instruction into a second task buffer 328b of first TS 326a. In this example, second task buffer 328b may be associated with the second CC of TG0. Then, first core 324a may retrieve second grant-type information (e.g., associated with the second CC of TG0) from the higher-priority SCH fast mailbox. First core 324a may identify a fourth memory address associated with a fourth instruction based on the second grant-type information. First core 324a may write/push a fourth task instruction that includes the fourth memory address into second task buffer 328b of first TS 326a.
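The round-robin enqueueing walked through in the two paragraphs above can be condensed into a sketch: a task instruction is simply the memory address of an instruction, pushed into the per-CC task buffer of the TS. The class, the address values, and the address map are hypothetical illustrations.

```python
# Minimal sketch of a core pushing task instructions (instruction memory
# addresses) into the per-CC task buffers of its task sequencer (TS).
class TaskSequencer:
    def __init__(self, cc_ids):
        # One task buffer per CC of the timing group this TS serves.
        self.task_buffers = {cc: [] for cc in cc_ids}

    def push(self, cc_id, instr_addr):
        self.task_buffers[cc_id].append(instr_addr)

# Hypothetical map from (info kind, CC) to the instruction's memory address.
INSTR_ADDR = {("CCH", 0): 0x1000, ("SCH", 0): 0x1100,
              ("CCH", 1): 0x2000, ("SCH", 1): 0x2100}

ts = TaskSequencer(cc_ids=[0, 1])
for cc in (0, 1):                          # round-robin over the CCs of TG0
    ts.push(cc, INSTR_ADDR[("CCH", cc)])   # monitoring-occasion instruction
    ts.push(cc, INSTR_ADDR[("SCH", cc)])   # higher-priority grant instruction
```

After the loop, each task buffer holds the CCH instruction address followed by the SCH instruction address for its CC, mirroring the first-through-fourth task instructions in the example.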
[0051] First core 324a may include a time-stamp trigger and/or event-trigger that causes first TS 326a to retrieve and execute the instructions associated with the task instructions in its different task buffers when the associated trigger is met. For example, in response to a time-stamp trigger being met, first TS 326a may access the first task instruction from first task buffer 328a and identify the first memory address of the first instruction therefrom. First TS 326a may retrieve and execute the first instruction to generate a first command. First command may be sent to first set of hardware accelerators 330a, which perform first DL CCH Layer 1 operation(s) based on the first command.
[0052] After executing the first instruction, first TS 326a may identify the second memory address of the second instruction based on the second task instruction in first task buffer 328a. First TS 326a may retrieve the second instruction from a second memory location associated with the second memory address. First TS 326a may execute the second instruction to generate a second command for first set of hardware accelerators 330a. Using the second command, first set of hardware accelerators 330a may perform first DL SCH Layer 1 operation(s).
[0053] By way of example and not limitation, an event-trigger may be implemented by first core 324a that causes first TS 326a to access the third task instruction in second task buffer 328b. The event trigger may be the completion of command generation for the first CC of TG0. Once triggered, first TS 326a may identify, from the third task instruction, a third memory address associated with the third instruction. After retrieval, first TS 326a may execute the third instruction to generate a third command, which is sent to first set of hardware accelerators 330a. First set of hardware accelerators 330a may perform second DL CCH Layer 1 operation(s) based on the third command. After executing the third instruction, first TS 326a may identify a fourth memory address of a fourth instruction based on the fourth task instruction in second task buffer 328b. First TS 326a may retrieve the fourth instruction from a fourth memory location associated with the fourth memory address. First TS 326a may execute the fourth instruction to generate a fourth command for first set of hardware accelerators 330a. First set of hardware accelerators 330a may perform second DL Layer 1 SCH operation(s) based on the fourth command. In this way, first core 324a may be configured to control operations performed by first TS 326a to generate a set of commands (e.g., first command, second command, third command, and fourth command) for first set of hardware accelerators 330a. By controlling these operations with a symbol-level degree of granularity, DL Layer 1 operations may be implemented in a timely manner by first set of hardware accelerators 330a for the CCs in TG0. Second core 324b and third core 324c may perform the same or similar operations in parallel with first core 324a.
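The trigger mechanism in the paragraphs above can be sketched compactly: a time-stamp trigger gates execution of the first CC's buffer, and finishing one buffer serves as the event trigger for the next. All names are hypothetical, and each "instruction" is reduced to the command string it would generate.

```python
# Illustrative sketch of trigger-driven TS execution: no commands are
# generated before the time-stamp trigger is met; once met, buffers are
# drained in order, completion of one acting as the event trigger for the next.
def run_on_trigger(task_buffers, instructions, now, start_time):
    commands = []
    if now < start_time:          # time-stamp trigger not yet met
        return commands
    for buf in task_buffers:      # completing a buffer event-triggers the next
        for addr in buf:
            commands.append(instructions[addr])
    return commands
```

Before the time stamp, the TS emits nothing; at or after it, the commands come out in task-buffer order, i.e., all of the first CC's commands before any of the second CC's.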
[0054] For example, after retrieving third monitoring-occasion information, which is associated with a first CC in TG1, second core 324b may identify a fifth memory address of a fifth instruction associated with the third monitoring-occasion information. Second core 324b may write/push a fifth task instruction that includes the fifth memory address of the fifth instruction into a third task buffer 328c of second TS 326b. Third task buffer 328c may be associated with the first CC in TG1 (also referred to herein as “third CC”). Then, second core 324b may retrieve third grant-type information (e.g., associated with the first CC of TG1) from the higher-priority SCH fast mailbox. Second core 324b may identify a sixth memory address of a sixth instruction associated with the third grant-type information. Second core 324b may write/push a sixth task instruction that includes the sixth memory address into third task buffer 328c of second TS 326b. Assuming there is no other monitoring-occasion information and/or grant-type information for the first CC in TG1, second core 324b may move on to the second CC of TG1 (also referred to herein as “fourth CC”).
[0055] For example, second core 324b may retrieve fourth monitoring-occasion information (e.g., associated with a second CC in TG1) from the CCH fast mailbox. Second core 324b may identify a seventh memory address associated with a seventh instruction based on the fourth monitoring-occasion information. Second core 324b may write/push a seventh task instruction that includes the seventh memory address of the seventh instruction into a fourth task buffer 328d of second TS 326b. In this example, fourth task buffer 328d may be associated with the second CC of TG1. Then, second core 324b may retrieve fourth grant-type information (e.g., associated with the second CC of TG1) from the higher-priority SCH fast mailbox. Second core 324b may identify an eighth memory address associated with an eighth instruction based on the fourth grant-type information. Second core 324b may write/push an eighth task instruction that includes the eighth memory address into fourth task buffer 328d of second TS 326b.
[0056] Second core 324b may include a time-stamp trigger and/or event-trigger that causes second TS 326b to retrieve and execute the instructions associated with the task instructions in its different task buffers when the associated trigger is met. For example, in response to a time-stamp trigger being met, second TS 326b may access the fifth task instruction from third task buffer 328c and identify the fifth memory address of the fifth instruction therefrom. Second TS 326b may retrieve and execute the fifth instruction to generate a fifth command for second set of hardware accelerators 330b. Fifth command may be sent to second set of hardware accelerators 330b, which perform third DL CCH Layer 1 operation(s) based on the fifth command.
[0057] After executing the fifth instruction, second TS 326b may identify the sixth memory address of the sixth instruction based on the sixth task instruction in third task buffer 328c. Second TS 326b may retrieve the sixth instruction from a sixth memory location associated with the sixth memory address. Second TS 326b may execute the sixth instruction to generate a sixth command for second set of hardware accelerators 330b. Using the sixth command, second set of hardware accelerators 330b may perform third DL SCH Layer 1 operation(s).
[0058] By way of example and not limitation, an event-trigger may be implemented by second core 324b that causes second TS 326b to access the seventh task instruction in fourth task buffer 328d. The event trigger may be the completion of command generation for the first CC of TG1. Once triggered, second TS 326b may identify, from the seventh task instruction, a seventh memory address associated with the seventh instruction. After retrieval, second TS 326b may execute the seventh instruction to generate a seventh command, which is sent to second set of hardware accelerators 330b. Second set of hardware accelerators 330b may perform fourth DL CCH Layer 1 operation(s) based on the seventh command. After executing the seventh instruction, second TS 326b may identify an eighth memory address of an eighth instruction based on the eighth task instruction in fourth task buffer 328d. Second TS 326b may retrieve the eighth instruction from an eighth memory location associated with the eighth memory address. Second TS 326b may execute the eighth instruction to generate an eighth command for second set of hardware accelerators 330b. Second set of hardware accelerators 330b may perform fourth DL Layer 1 SCH operation(s) based on the eighth command. In this way, second core 324b may be configured to control operations performed by second TS 326b to generate a set of commands (e.g., fifth command, sixth command, seventh command, and eighth command) for second set of hardware accelerators 330b. By controlling these operations with a symbol-level degree of granularity, DL Layer 1 operations may be implemented in a timely manner by second set of hardware accelerators 330b for the CCs in TG1.
[0059] For TG2, third core 324c, third TS 326c, and third set of hardware accelerators 330c may each perform the same or similar operations as described above in connection with TG0 and TG1. Referring to FIG. 4, in some scenarios, master core 324e may determine that one or more of first core 324a, second core 324b, or third core 324c may be unable to generate task instructions in a timely manner, e.g., based on the number of CCs in its timing group. When this happens, auxiliary core 324d may be assigned some of the tasks to reduce the workload of the other cores and ensure the timely completion of DL Layer 1 operations by the hardware accelerators. Auxiliary core 324d may have its own dedicated TS and hardware accelerators, or it may send task instructions to the TS associated with the timing group for which it generates task instructions. As also shown in FIG. 4, regular-priority DL SCH Layer 1 operations may also be scheduled by the cores and implemented by the hardware accelerators. A core may use various techniques to cause its TS to execute the instructions for a particular CC in their entirety before moving on to those instructions for another CC, e.g., with the use of a mutex lock on the associated task buffer in the TS. Still further, by using event-based triggers, a core may stitch DL Layer 1 operations across CCs when appropriate.
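The auxiliary-core offload decision above can be sketched as a simple capacity check. The per-core CC threshold and all names are invented for illustration; the disclosure does not specify how the master core quantifies "timely manner."

```python
# Rough sketch of the master core spilling excess CCs to the auxiliary core
# when a timing group exceeds what its assigned core can schedule in time.
def assign_with_auxiliary(group_sizes, max_ccs_per_core):
    """Return {core: cc_count}, with overflow assigned to the 'aux' core."""
    assignment, overflow = {}, 0
    for core, n in group_sizes.items():
        assignment[core] = min(n, max_ccs_per_core)   # core keeps what it can handle
        overflow += max(0, n - max_ccs_per_core)      # remainder offloaded
    assignment["aux"] = overflow
    return assignment
```

For instance, with a hypothetical limit of four CCs per core, a core holding five CCs keeps four and the auxiliary core absorbs the fifth.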
[0060] FIG. 5 illustrates a flowchart of an exemplary method 500 of wireless communication, according to embodiments of the disclosure. Exemplary method 500 may be performed by an apparatus for wireless communication, such as a UE, a baseband chip, a Layer 1 subsystem, an LL1, a uC cluster, a core, a TS, a hardware accelerator, an on-chip memory, or an external memory, just to name a few. Method 500 may include steps 502-512 as described below. It is to be appreciated that some of the steps may be optional, and some of the steps may be performed simultaneously, or in a different order than shown in FIG. 5.
[0061] Referring to FIG. 5, at 502, the apparatus may identify, by a master core of a microcontroller cluster, a first set of CCs as a first timing group and a second set of CCs as a second timing group. For example, referring to FIGs. 3 and 4, when CA is configured for apparatus 300, master core 324e may identify, from the plurality of CCs, different timing groups. A timing group may be identified based on timing characteristics that are shared among a set of CCs assigned to apparatus 300, where the set of CCs is less than the total number of CCs assigned for CA. The timing characteristic(s) used to identify a timing group may include, e.g., a TTI of a first length, a TTI of a second length different than the first length, synchronous SCH activity (e.g., SPS-based SCH activity), asynchronous SCH activity (e.g., grant-based SCH activity), a first SCS, a second SCS different than the first SCS, just to name a few.
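The timing-group identification of step 502 can be sketched as a grouping of CCs by shared timing characteristics. The trait tuple used here (TTI length, SCS, synchronous vs. asynchronous SCH activity) follows the examples in the paragraph above, but the function name, data shapes, and sample values are illustrative assumptions, not the disclosed grouping criteria.

```python
from collections import defaultdict

def identify_timing_groups(ccs):
    """Group component carriers that share timing characteristics.

    `ccs` maps a CC id to a tuple of (TTI in ms, SCS in kHz, SCH mode).
    CCs with identical tuples land in the same timing group; any subset
    of these traits (or others) could serve as the grouping key.
    """
    groups = defaultdict(list)
    for cc_id, traits in sorted(ccs.items()):
        groups[traits].append(cc_id)
    return list(groups.values())

ccs = {
    "CC0": (1.0, 15, "sync"),    # e.g., SPS-based SCH activity
    "CC1": (1.0, 15, "sync"),
    "CC2": (0.5, 30, "async"),   # e.g., grant-based SCH activity
    "CC3": (0.5, 30, "async"),
}
print(identify_timing_groups(ccs))  # two timing groups of two CCs each
```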
[0062] At 504, the apparatus may assign, by the master core of the microcontroller cluster, the at least one first timing group to the first core and the at least one second timing group to the second core. For example, referring to FIG. 3, master core 324e may assign TG0 to first core 324a and TG1 to second core 324b.
[0063] At 506, the apparatus may control, by a first core of the microcontroller cluster, first operations performed by a first TS to generate a first set of commands for a first set of hardware accelerators associated with the at least one first timing group. For example, referring to FIGs. 3 and 4, first core 324a may generate first, second, third, and fourth task instructions, which are used by first TS 326a to generate commands for first set of hardware accelerators 330a, as described above.
[0064] At 508, the apparatus may control, by a second core of the microcontroller cluster, second operations performed by the second TS to generate a second set of commands for a second set of hardware accelerators associated with the at least one second timing group. For example, referring to FIGs. 3 and 4, second core 324b may generate fifth, sixth, seventh, and eighth task instructions, which are used by second TS 326b to generate commands for second set of hardware accelerators 330b, as described above.
[0065] At 510, the apparatus may generate, by the first TS, a first set of commands for the first set of hardware accelerators associated with a first timing group based on first task instructions from the first core. For example, referring to FIGs. 3 and 4, first TS 326a may generate first, second, third, and fourth commands implemented by first set of hardware accelerators 330a to perform DL Layer 1 operations for TG0.
[0066] At 512, the apparatus may generate, by the second TS, a second set of commands for the second set of hardware accelerators associated with a second timing group based on a second set of task instructions from the second core. For example, referring to FIGs. 3 and 4, second TS 326b may generate fifth, sixth, seventh, and eighth commands implemented by second set of hardware accelerators 330b to perform DL Layer 1 operations for TG1.
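Steps 502-512 of method 500 can be composed into one end-to-end sketch: the master core identifies timing groups, assigns one group per worker core, each core emits task instructions per CC, and each TS converts them into accelerator commands. All names, string encodings, and the CCH-then-SCH ordering are illustrative assumptions for this flow, not the disclosed implementation.

```python
from collections import defaultdict

def schedule_dl(ccs, worker_cores):
    """Minimal end-to-end sketch of method 500 (steps 502-512)."""
    # 502: master core identifies timing groups by shared traits.
    groups = defaultdict(list)
    for cc_id, traits in sorted(ccs.items()):
        groups[traits].append(cc_id)
    # 504: master core assigns each timing group to a worker core.
    assignment = dict(zip(worker_cores, groups.values()))
    # 506/508: each core generates task instructions per CC (CCH, then SCH).
    task_instructions = {
        core: [(cc, ch) for cc in group for ch in ("CCH", "SCH")]
        for core, group in assignment.items()
    }
    # 510/512: each TS turns task instructions into hardware commands.
    return {
        core: [f"{cc}:{ch}:cmd" for cc, ch in tasks]
        for core, tasks in task_instructions.items()
    }

ccs = {"CC0": ("1ms", 15), "CC1": ("1ms", 15),
       "CC2": ("0.5ms", 30), "CC3": ("0.5ms", 30)}
print(schedule_dl(ccs, ["first_core", "second_core"]))
```

Each worker core ends up with a command stream covering only the CCs in its own timing group, which is the two-dimensional (per-group, per-channel) scheduling the method describes.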
[0067] In various aspects of the present disclosure, the functions described herein may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or encoded as instructions or code on a non-transitory computer-readable medium. Computer-readable media includes computer storage media. Storage media may be any available media that can be accessed by a computing device, such as node 200 in FIG. 2. By way of example, and not limitation, such computer-readable media can include RAM, ROM, EEPROM, CD-ROM or other optical disk storage, HDD, such as magnetic disk storage or other magnetic storage devices, Flash drive, SSD, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a processing system, such as a mobile device or a computer. Disk and disc, as used herein, includes CD, laser disc, optical disc, digital video disc (DVD), and floppy disk where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
[0068] According to one aspect of the present disclosure, a baseband chip is provided. The baseband chip may include a first TS configured to generate a first set of commands for a first set of hardware accelerators associated with at least one first timing group. The baseband chip may include a second TS configured to generate a second set of commands for a second set of hardware accelerators associated with at least one second timing group. The baseband chip may include a microcontroller cluster with a master core, a first core, and a second core. The master core may be configured to identify a first set of CCs as the at least one first timing group and a second set of CCs as the at least one second timing group.
The master core may be configured to assign the at least one first timing group to the first core and the at least one second timing group to the second core. The first core may be configured to control first operations performed by the first TS to generate the first set of commands for the first set of hardware accelerators. The second core may be configured to control second operations performed by the second TS to generate the second set of commands for the second set of hardware accelerators.
[0069] In some embodiments, the first set of CCs in the at least one first timing group may be each associated with a first TTI, synchronous SCH activity, or a first SCS. In some embodiments, the second set of CCs in the at least one second timing group may be each associated with a second TTI different than the first TTI, asynchronous SCH activity, or a second SCS different than the first SCS.
[0070] In some embodiments, the master core may be further configured to identify a first clock frequency associated with first packet processing for the at least one first timing group and a second clock frequency associated with the at least one second timing group. In some embodiments, the master core may be further configured to assign the first clock frequency to the first core and the second clock frequency to the second core.
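The clock-frequency assignment in [0070] can be sketched as follows. The scaling rule (frequency proportional to SCS and CC count) is purely a hypothetical load metric chosen for illustration; the disclosure does not specify how the master core derives the per-group clock frequencies.

```python
def assign_clock_frequencies(timing_groups, base_mhz=100):
    """Assign a packet-processing clock frequency per core.

    `timing_groups` maps a core name to (SCS in kHz, list of CC ids).
    The load metric below (SCS relative to a 15 kHz baseline, times the
    number of CCs) is an assumption made for this sketch only.
    """
    freqs = {}
    for core, (scs_khz, cc_list) in timing_groups.items():
        load = (scs_khz / 15) * len(cc_list)
        freqs[core] = base_mhz * load
    return freqs

groups = {"first_core": (15, ["CC0", "CC1"]),
          "second_core": (30, ["CC2", "CC3"])}
print(assign_clock_frequencies(groups))
```

Under this assumed metric, the core serving the higher-SCS timing group is clocked faster, reflecting the idea that different timing groups may warrant different packet-processing clock rates.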
[0071] In some embodiments, the first TS may include a first task buffer associated with a first CC in the first set of CCs and a second task buffer associated with a second CC in the first set of CCs. In some embodiments, the second TS comprises a third task buffer associated with a third CC in the second set of CCs and a fourth task buffer associated with a fourth CC in the second set of CCs.
[0072] In some embodiments, to control the first operations performed by the first TS to generate the first set of commands for the first set of hardware accelerators, the first core is configured to retrieve, from a first slot-scheduler buffer, first monitoring-occasion information for a first CCH of the first CC in the first set of CCs. In some embodiments, to control the first operations performed by the first TS to generate the first set of commands for the first set of hardware accelerators, the first core is configured to retrieve, from a second slot-scheduler buffer, first grant-type information for a first SCH of the first CC in the first set of CCs. In some embodiments, to control the first operations performed by the first TS to generate the first set of commands for the first set of hardware accelerators, the first core is configured to push a first task instruction associated with the first monitoring-occasion information for the first CCH of the first CC in the first set of CCs into the first task buffer of the first TS. In some embodiments, to control the first operations performed by the first TS to generate the first set of commands for the first set of hardware accelerators, the first core is configured to push a second task instruction associated with the first grant-type information for the first SCH of the first CC in the first set of CCs into the first task buffer of the first TS. In some embodiments, to control the first operations performed by the first TS to generate the first set of commands for the first set of hardware accelerators, the first core is configured to retrieve, from the first slot-scheduler buffer, second monitoring-occasion information for a second CCH of the second CC in the first set of CCs.
In some embodiments, to control the first operations performed by the first TS to generate the first set of commands for the first set of hardware accelerators, the first core is configured to retrieve, from the second slot-scheduler buffer, second grant-type information for a second SCH of the second CC in the first set of CCs. In some embodiments, to control the first operations performed by the first TS to generate the first set of commands for the first set of hardware accelerators, the first core is configured to push a third task instruction associated with the second monitoring-occasion information for the second CCH of the second CC in the first set of CCs into the second task buffer of the first TS. In some embodiments, to control the first operations performed by the first TS to generate the first set of commands for the first set of hardware accelerators, the first core is configured to push a fourth task instruction associated with the second grant-type information for the second SCH of the second CC in the first set of CCs into the second task buffer of the first TS. In some embodiments, the first task buffer and the second task buffer may be different.
[0073] In some embodiments, the first TS may be further configured to, in response to a first timestamp-trigger or first event-trigger, access the first task instruction from the first task buffer. In some embodiments, the first TS may be further configured to identify a first memory address of a first instruction based on the first task instruction. In some embodiments, the first TS may be further configured to retrieve the first instruction from a first location associated with the first memory address. In some embodiments, the first TS may be further configured to execute the first instruction to generate a first command for the first set of hardware accelerators. In some embodiments, the first TS may be further configured to, after executing the first instruction, identify a second memory address of a second instruction based on the second task instruction. In some embodiments, the first TS may be further configured to retrieve the second instruction from a second location associated with the second memory address. In some embodiments, the first TS may be further configured to execute the second instruction to generate a second command for the first set of hardware accelerators.
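The trigger-driven fetch-and-execute flow in [0073] (access the task instruction, identify the memory address it carries, retrieve the instruction from that location, execute it to produce a command) can be sketched as follows. The memory model, the use of callables as "instructions," and all names are illustrative assumptions for this sketch.

```python
class TriggeredSequencer:
    """Toy model of a TS driven by timestamp- or event-triggers.

    A task instruction is modeled here simply as the memory address of an
    instruction; `memory` maps addresses to executable instructions
    (callables). Both choices are assumptions for illustration only.
    """

    def __init__(self, memory):
        self.memory = memory          # address -> instruction (a callable)
        self.task_buffer = []
        self.commands = []

    def push_task(self, address):
        self.task_buffer.append(address)

    def on_trigger(self):
        # A trigger fires: drain the task buffer in order.
        while self.task_buffer:
            address = self.task_buffer.pop(0)      # identify memory address
            instruction = self.memory[address]     # retrieve the instruction
            self.commands.append(instruction())    # execute -> command

memory = {0x10: lambda: "cch_cmd", 0x20: lambda: "sch_cmd"}
seq = TriggeredSequencer(memory)
seq.push_task(0x10)   # first task instruction (CCH)
seq.push_task(0x20)   # second task instruction (SCH)
seq.on_trigger()      # e.g., a timestamp-trigger fires
print(seq.commands)
```

The indirection through a memory address is what lets the task buffer stay small: it holds pointers to instructions rather than the instructions themselves.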
[0074] In some embodiments, the first TS may be further configured to, in response to a second timestamp-trigger or a second event-trigger, access the third task instruction from the second task buffer. In some embodiments, the first TS may be further configured to identify a third memory address of a third instruction based on the third task instruction. In some embodiments, the first TS may be further configured to retrieve the third instruction from a third memory location associated with the third memory address. In some embodiments, the first TS may be further configured to execute the third instruction to generate a third command for the first set of hardware accelerators. In some embodiments, the first TS may be further configured to, after executing the third instruction, identify a fourth memory address of a fourth instruction based on the fourth task instruction. In some embodiments, the first TS may be further configured to retrieve the fourth instruction from a fourth memory location associated with the fourth memory address. In some embodiments, the first TS may be further configured to execute the fourth instruction to generate a fourth command for the first set of hardware accelerators.
[0075] In some embodiments, the second event-trigger may be associated with a completed execution of the second instruction.
[0076] In some embodiments, to control the second operations performed by the second TS to generate the second set of commands for the second set of hardware accelerators, the second core is configured to retrieve, from a third slot-scheduler buffer, third monitoring-occasion information for a third CCH of the third CC in the second set of CCs. In some embodiments, to control the second operations performed by the second TS to generate the second set of commands for the second set of hardware accelerators, the second core is configured to retrieve, from a fourth slot-scheduler buffer, third grant-type information for a third SCH of the third CC in the second set of CCs. In some embodiments, to control the second operations performed by the second TS to generate the second set of commands for the second set of hardware accelerators, the second core is configured to push a fifth task instruction associated with the third monitoring-occasion information for the third CCH of the third CC in the second set of CCs into the third task buffer of the second TS. In some embodiments, to control the second operations performed by the second TS to generate the second set of commands for the second set of hardware accelerators, the second core is configured to push a sixth task instruction associated with the third grant-type information for the third SCH of the third CC in the second set of CCs into the third task buffer of the second TS. In some embodiments, to control the second operations performed by the second TS to generate the second set of commands for the second set of hardware accelerators, the second core is configured to retrieve, from the third slot-scheduler buffer, fourth monitoring-occasion information for a fourth CCH of the fourth CC in the second set of CCs.
In some embodiments, to control the second operations performed by the second TS to generate the second set of commands for the second set of hardware accelerators, the second core is configured to retrieve, from the fourth slot-scheduler buffer, fourth grant-type information for a fourth SCH of the fourth CC in the second set of CCs. In some embodiments, to control the second operations performed by the second TS to generate the second set of commands for the second set of hardware accelerators, the second core is configured to push a seventh task instruction associated with the fourth monitoring-occasion information for the fourth CCH of the fourth CC in the second set of CCs into the fourth task buffer of the second TS. In some embodiments, to control the second operations performed by the second TS to generate the second set of commands for the second set of hardware accelerators, the second core is configured to push an eighth task instruction associated with the fourth grant-type information for the fourth SCH of the fourth CC in the second set of CCs into the fourth task buffer of the second TS. In some embodiments, the third task buffer and the fourth task buffer may be different.
[0077] In some embodiments, the second TS may be configured to, in response to a third timestamp-trigger or a third event-trigger, access the fifth task instruction from the third task buffer. In some embodiments, the second TS may be configured to identify a fifth memory address of a fifth instruction based on the fifth task instruction. In some embodiments, the second TS may be configured to retrieve the fifth instruction from a fifth location associated with the fifth memory address. In some embodiments, the second TS may be configured to execute the fifth instruction to generate a fifth command for the second set of hardware accelerators. In some embodiments, the second TS may be configured to, after executing the fifth instruction, identify a sixth memory address of a sixth instruction based on the sixth task instruction. In some embodiments, the second TS may be configured to retrieve the sixth instruction from a sixth location associated with the sixth memory address. In some embodiments, the second TS may be configured to execute the sixth instruction to generate a sixth command for the second set of hardware accelerators.
[0078] In some embodiments, the second TS may be further configured to, in response to a fourth timestamp-trigger or a fourth event-trigger, access the seventh task instruction from the fourth task buffer. In some embodiments, the second TS may be further configured to identify a seventh memory address of a seventh instruction based on the seventh task instruction. In some embodiments, the second TS may be further configured to retrieve the seventh instruction from a seventh memory location associated with the seventh memory address. In some embodiments, the second TS may be further configured to execute the seventh instruction to generate a seventh command for the second set of hardware accelerators. In some embodiments, the second TS may be further configured to, after executing the seventh instruction, identify an eighth memory address of an eighth instruction based on the eighth task instruction. In some embodiments, the second TS may be further configured to retrieve the eighth instruction from an eighth memory location associated with the eighth memory address. In some embodiments, the second TS may be further configured to execute the eighth instruction to generate an eighth command for the second set of hardware accelerators.
[0079] According to another aspect of the present disclosure, a microcontroller cluster for a baseband chip is provided. The microcontroller cluster may include a master core, a first core, and a second core. The master core may be configured to identify a first set of CCs as the at least one first timing group and a second set of CCs as the at least one second timing group. The master core may be configured to assign the at least one first timing group to the first core and the at least one second timing group to the second core. The first core may be configured to control first operations performed by a first TS to generate a first set of commands for the first set of hardware accelerators. The second core may be configured to control second operations performed by a second TS to generate a second set of commands for the second set of hardware accelerators.
[0080] In some embodiments, the first set of CCs in the at least one first timing group may be each associated with a first TTI. In some embodiments, the second set of CCs in the at least one second timing group may be each associated with a second TTI. In some embodiments, the first TTI and the second TTI may be different.
[0081] In some embodiments, the first set of CCs in the at least one first timing group may be each associated with synchronous SCH activity. In some embodiments, the second set of CCs in the at least one second timing group may be each associated with asynchronous SCH activity.
[0082] In some embodiments, the master core may be further configured to identify a first clock frequency associated with first packet processing for the at least one first timing group and a second clock frequency associated with the at least one second timing group. In some embodiments, the master core may be further configured to assign the first clock frequency to the first core and the second clock frequency to the second core.
[0083] According to still another aspect of the present disclosure, a method of wireless communication of a baseband chip is provided. The method may include identifying, by a master core of a microcontroller cluster, a first set of CCs as a first timing group and a second set of CCs as a second timing group. The method may include assigning, by the master core of the microcontroller cluster, the at least one first timing group to the first core and the at least one second timing group to the second core. The method may include controlling, by a first core of the microcontroller cluster, first operations performed by a first TS to generate a first set of commands for a first set of hardware accelerators associated with the at least one first timing group. The method may include controlling, by a second core of the microcontroller cluster, second operations performed by the second TS to generate a second set of commands for a second set of hardware accelerators associated with the at least one second timing group. The method may include generating, by the first TS, a first set of commands for the first set of hardware accelerators associated with a first timing group based on first task instructions from the first core. The method may include generating, by the second TS, a second set of commands for the second set of hardware accelerators associated with a second timing group based on a second set of task instructions from the second core.
[0084] In some embodiments, the first set of CCs in the at least one first timing group may be each associated with a first TTI. In some embodiments, the second set of CCs in the at least one second timing group may be each associated with a second TTI. In some embodiments, the first TTI and the second TTI may be different.
[0085] In some embodiments, the first set of CCs in the at least one first timing group may be each associated with synchronous SCH activity. In some embodiments, the second set of CCs in the at least one second timing group may be each associated with asynchronous SCH activity.
[0086] In some embodiments, the first TS may include a first task buffer associated with a first CC in the first set of CCs and a second task buffer associated with a second CC in the first set of CCs. In some embodiments, the second TS may include a third task buffer associated with a third CC in the second set of CCs and a fourth task buffer associated with a fourth CC in the second set of CCs. In some embodiments, the controlling, by the first core of the microcontroller cluster, the first operations performed by the first TS to generate the first set of commands for the first set of hardware accelerators comprises retrieving, from a first slot-scheduler buffer, first monitoring-occasion information for a first control channel (CCH) of the first CC in the first set of CCs. In some embodiments, the controlling, by the first core of the microcontroller cluster, the first operations performed by the first TS to generate the first set of commands for the first set of hardware accelerators comprises retrieving, from a second slot-scheduler buffer, first grant-type information for a first SCH of the first CC in the first set of CCs. In some embodiments, the controlling, by the first core of the microcontroller cluster, the first operations performed by the first TS to generate the first set of commands for the first set of hardware accelerators comprises pushing a first task instruction associated with the first monitoring-occasion information for the first CCH of the first CC in the first set of CCs into the first task buffer of the first TS. In some embodiments, the controlling, by the first core of the microcontroller cluster, the first operations performed by the first TS to generate the first set of commands for the first set of hardware accelerators comprises pushing a second task instruction associated with the first grant-type information for the first SCH of the first CC in the first set of CCs into the first task buffer of the first TS.
In some embodiments, the controlling, by the first core of the microcontroller cluster, the first operations performed by the first TS to generate the first set of commands for the first set of hardware accelerators comprises retrieving, from the first slot-scheduler buffer, second monitoring-occasion information for a second CCH of the second CC in the first set of CCs. In some embodiments, the controlling, by the first core of the microcontroller cluster, the first operations performed by the first TS to generate the first set of commands for the first set of hardware accelerators comprises retrieving, from the second slot-scheduler buffer, second grant-type information for a second SCH of the second CC in the first set of CCs. In some embodiments, the controlling, by the first core of the microcontroller cluster, the first operations performed by the first TS to generate the first set of commands for the first set of hardware accelerators comprises pushing a third task instruction associated with the second monitoring-occasion information for the second CCH of the second CC in the first set of CCs into the second task buffer of the first TS. In some embodiments, the controlling, by the first core of the microcontroller cluster, the first operations performed by the first TS to generate the first set of commands for the first set of hardware accelerators comprises pushing a fourth task instruction associated with the second grant-type information for the second SCH of the second CC in the first set of CCs into the second task buffer of the first TS. In some embodiments, the first task buffer and the second task buffer may be different.
[0087] In some embodiments, the method may further include, in response to a first timestamp-trigger or first event-trigger, accessing, by the first TS, the first task instruction from the first task buffer. In some embodiments, the method may include identifying, by the first TS, a first memory address of a first instruction based on the first task instruction. In some embodiments, the method may include retrieving, by the first TS, the first instruction from a first location associated with the first memory address. In some embodiments, the method may include executing, by the first TS, the first instruction to generate a first command for the first set of hardware accelerators. In some embodiments, the method may include, after executing the first instruction, identifying, by the first TS, a second memory address of a second instruction based on the second task instruction. In some embodiments, the method may include retrieving, by the first TS, the second instruction from a second location associated with the second memory address. In some embodiments, the method may include executing, by the first TS, the second instruction to generate a second command for the first set of hardware accelerators.
[0088] The foregoing description of the specific embodiments will so reveal the general nature of the present disclosure that others can, by applying knowledge within the skill of the art, readily modify and/or adapt for various applications such specific embodiments, without undue experimentation, without departing from the general concept of the present disclosure. Therefore, such adaptations and modifications are intended to be within the meaning and range of equivalents of the disclosed embodiments, based on the teaching and guidance presented herein. It is to be understood that the phraseology or terminology herein is for the purpose of description and not of limitation, such that the terminology or phraseology of the present specification is to be interpreted by the skilled artisan in light of the teachings and guidance.
[0089] Embodiments of the present disclosure have been described above with the aid of functional building blocks illustrating the implementation of specified functions and relationships thereof. The boundaries of these functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternate boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed.
[0090] The Summary and Abstract sections may set forth one or more but not all exemplary embodiments of the present disclosure as contemplated by the inventor(s), and thus, are not intended to limit the present disclosure and the appended claims in any way.
[0091] Various functional blocks, modules, and steps are disclosed above. The particular arrangements provided are illustrative and without limitation. Accordingly, the functional blocks, modules, and steps may be re-ordered or combined in different ways than in the examples provided above. Likewise, certain embodiments include only a subset of the functional blocks, modules, and steps, and any such subset is permitted.
[0092] The breadth and scope of the present disclosure should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.

Claims

WHAT IS CLAIMED IS:
1. A baseband chip, comprising: a first task sequencer (TS) configured to: generate a first set of commands for a first set of hardware accelerators associated with at least one first timing group; and a second TS configured to: generate a second set of commands for a second set of hardware accelerators associated with at least one second timing group; and a microcontroller cluster, comprising: a master core, a first core, and a second core, wherein the master core is configured to: identify a first set of component carriers (CCs) as the at least one first timing group and a second set of CCs as the at least one second timing group; assign the at least one first timing group to the first core and the at least one second timing group to the second core; and wherein the first core is configured to: control first operations performed by the first TS to generate the first set of commands for the first set of hardware accelerators, and wherein the second core is configured to: control second operations performed by the second TS to generate the second set of commands for the second set of hardware accelerators.
2. The baseband chip of claim 1, wherein: the first set of CCs in the at least one first timing group are each associated with a first transmission time interval (TTI), synchronous shared channel (SCH) activity, or a first subcarrier spacing (SCS), the second set of CCs in the at least one second timing group are each associated with a second TTI, asynchronous SCH activity, or a second SCS, the first TTI and the second TTI are different, and the first SCS and the second SCS are different.
3. The baseband chip of claim 1, wherein the master core is further configured to: identify a first clock frequency associated with first packet processing for the at least one first timing group and a second clock frequency associated with the at least one second timing group; and assign the first clock frequency to the first core and the second clock frequency to the second core.
4. The baseband chip of claim 1, wherein: the first TS comprises a first task buffer associated with a first component carrier (CC) in the first set of CCs and a second task buffer associated with a second CC in the first set of CCs, and the second TS comprises a third task buffer associated with a third CC in the second set of CCs and a fourth task buffer associated with a fourth CC in the second set of CCs.
5. The baseband chip of claim 4, wherein, to control the first operations performed by the first TS to generate the first set of commands for the first set of hardware accelerators, the first core is configured to: retrieve, from a first slot-scheduler buffer, first monitoring-occasion information for a first control channel (CCH) of the first CC in the first set of CCs; retrieve, from a second slot-scheduler buffer, first grant-type information for a first shared channel (SCH) of the first CC in the first set of CCs; push a first task instruction associated with the first monitoring-occasion information for the first CCH of the first CC in the first set of CCs into the first task buffer of the first TS; push a second task instruction associated with the first grant-type information for the first SCH of the first CC in the first set of CCs into the first task buffer of the first TS; retrieve, from the first slot-scheduler buffer, second monitoring-occasion information for a second CCH of the second CC in the first set of CCs; retrieve, from the second slot-scheduler buffer, second grant-type information for a second SCH of the second CC in the first set of CCs; push a third task instruction associated with the second monitoring-occasion information for the second CCH of the second CC in the first set of CCs into the second task buffer of the first TS; and push a fourth task instruction associated with the second grant-type information for the second SCH of the second CC in the first set of CCs into the second task buffer of the first TS, wherein the first task buffer and the second task buffer are different.
6. The baseband chip of claim 5, wherein the first TS is configured to: in response to a first timestamp-trigger or a first event-trigger, access the first task instruction from the first task buffer; identify a first memory address of a first instruction based on the first task instruction; retrieve the first instruction from a first location associated with the first memory address; execute the first instruction to generate a first command for the first set of hardware accelerators; after executing the first instruction, identify a second memory address of a second instruction based on the second task instruction; retrieve the second instruction from a second location associated with the second memory address; and execute the second instruction to generate a second command for the first set of hardware accelerators.
7. The baseband chip of claim 6, wherein the first TS is further configured to: in response to a second timestamp-trigger or a second event-trigger, access the third task instruction from the second task buffer; identify a third memory address of a third instruction based on the third task instruction; retrieve the third instruction from a third memory location associated with the third memory address; execute the third instruction to generate a third command for the first set of hardware accelerators; after executing the third instruction, identify a fourth memory address of a fourth instruction based on the fourth task instruction; retrieve the fourth instruction from a fourth memory location associated with the fourth memory address; and execute the fourth instruction to generate a fourth command for the first set of hardware accelerators.
8. The baseband chip of claim 7, wherein the second event-trigger is associated with a completed execution of the second instruction.
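Claims 6 through 8 describe how the TS drains one task buffer once a timestamp- or event-trigger fires: each task instruction resolves to a memory address, the instruction at that address is fetched and executed to produce a hardware-accelerator command, and the completed execution of one instruction event-triggers the next. A minimal sketch, assuming a dictionary address table and callable instructions (both hypothetical names, not the claimed implementation):

```python
from collections import deque

def drain_on_trigger(task_buffer, addr_table, instr_memory):
    """Trigger-driven execution from claims 6-8: resolve each task instruction to a
    memory address, fetch the instruction stored at that location, and execute it to
    generate an accelerator command; the completed execution serves as the
    event-trigger for the next task instruction, so the loop continues in order."""
    commands = []
    while task_buffer:
        task_instr = task_buffer.popleft()      # access the next task instruction
        addr = addr_table[task_instr]           # identify the instruction's memory address
        instruction = instr_memory[addr]        # retrieve the instruction from that location
        commands.append(instruction())          # execute it -> command for an accelerator
    return commands

# Illustrative usage: two chained task instructions for one CC.
buf = deque([("CCH", "mo1"), ("SCH", "grant1")])
addrs = {("CCH", "mo1"): 0x100, ("SCH", "grant1"): 0x140}
memory = {0x100: lambda: "decode_pdcch_cmd", 0x140: lambda: "decode_pdsch_cmd"}
cmds = drain_on_trigger(buf, addrs, memory)
```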
9. The baseband chip of claim 7, wherein, to control the second operations performed by the second TS to generate the second set of commands for the second set of hardware accelerators, the second core is configured to: retrieve, from a third slot-scheduler buffer, third monitoring-occasion information for a third CCH of the third CC in the second set of CCs; retrieve, from a fourth slot-scheduler buffer, third grant-type information for a third SCH of the third CC in the second set of CCs; push a fifth task instruction associated with the third monitoring-occasion information for the third CCH of the third CC in the second set of CCs into the third task buffer of the second TS; push a sixth task instruction associated with the third grant-type information for the third SCH of the third CC in the second set of CCs into the third task buffer of the second TS; retrieve, from the third slot-scheduler buffer, fourth monitoring-occasion information for a fourth CCH of the fourth CC in the second set of CCs; retrieve, from the fourth slot-scheduler buffer, fourth grant-type information for a fourth SCH of the fourth CC in the second set of CCs; push a seventh task instruction associated with the fourth monitoring-occasion information for the fourth CCH of the fourth CC in the second set of CCs into the fourth task buffer of the second TS; and push an eighth task instruction associated with the fourth grant-type information for the fourth SCH of the fourth CC in the second set of CCs into the fourth task buffer of the second TS, wherein the third task buffer and the fourth task buffer are different.
10. The baseband chip of claim 9, wherein the second TS is configured to: in response to a third timestamp-trigger or a third event-trigger, access the fifth task instruction from the third task buffer; identify a fifth memory address of a fifth instruction based on the fifth task instruction; retrieve the fifth instruction from a fifth location associated with the fifth memory address; execute the fifth instruction to generate a fifth command for the second set of hardware accelerators; after executing the fifth instruction, identify a sixth memory address of a sixth instruction based on the sixth task instruction; retrieve the sixth instruction from a sixth location associated with the sixth memory address; and execute the sixth instruction to generate a sixth command for the second set of hardware accelerators.
11. The baseband chip of claim 10, wherein the second TS is further configured to: in response to a fourth timestamp-trigger or a fourth event-trigger, access the seventh task instruction from the fourth task buffer; identify a seventh memory address of a seventh instruction based on the seventh task instruction; retrieve the seventh instruction from a seventh memory location associated with the seventh memory address; execute the seventh instruction to generate a seventh command for the second set of hardware accelerators; after executing the seventh instruction, identify an eighth memory address of an eighth instruction based on the eighth task instruction; retrieve the eighth instruction from an eighth memory location associated with the eighth memory address; and execute the eighth instruction to generate an eighth command for the second set of hardware accelerators.
12. A microcontroller for a baseband chip, comprising: a master core, a first core, and a second core, wherein the master core is configured to: identify a first set of component carriers (CCs) as a first timing group and a second set of CCs as a second timing group; assign the first timing group to the first core and the second timing group to the second core; and wherein the first core is configured to: control first operations performed by a first task sequencer (TS) to generate a first set of commands for a first set of hardware accelerators associated with the first timing group, and wherein the second core is configured to: control second operations performed by a second TS to generate a second set of commands for a second set of hardware accelerators associated with the second timing group.
13. The microcontroller of claim 12, wherein: the first set of CCs in the first timing group are each associated with a first transmission time interval (TTI), the second set of CCs in the second timing group are each associated with a second TTI, and the first TTI and the second TTI are different.
14. The microcontroller of claim 12, wherein: the first set of CCs in the first timing group are each associated with synchronous shared channel (SCH) activity, and the second set of CCs in the second timing group are each associated with asynchronous SCH activity.
15. The microcontroller of claim 12, wherein the master core is further configured to: identify a first clock frequency associated with first packet processing for the first timing group and a second clock frequency associated with second packet processing for the second timing group; and assign the first clock frequency to the first core and the second clock frequency to the second core.
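Claims 12 through 15 allocate carriers to cores along the second scheduling dimension: the master core partitions CCs into timing groups (for example by TTI, per claim 13), pins each group to its own core, and selects a per-group clock frequency. A sketch under stated assumptions; the CC descriptor dictionaries and the TTI-to-clock mapping below are purely illustrative:

```python
def form_timing_groups(ccs):
    """Partition CCs into timing groups keyed by TTI (claims 12-13). Each CC is a
    hypothetical descriptor dict such as {"id": "CC1", "tti_us": 1000}."""
    groups = {}
    for cc in ccs:
        groups.setdefault(cc["tti_us"], []).append(cc["id"])
    return groups

def assign_to_cores(groups, cores, clock_mhz_for_tti):
    """Pin each timing group, plus a clock frequency sized to its packet-processing
    load (claim 15), to its own core."""
    assignment = {}
    for core, (tti, cc_ids) in zip(cores, sorted(groups.items())):
        assignment[core] = {"tti_us": tti, "ccs": cc_ids,
                            "clock_mhz": clock_mhz_for_tti[tti]}
    return assignment

# Illustrative usage: shorter-TTI CCs get a faster core clock than longer-TTI CCs.
ccs = [{"id": "CC1", "tti_us": 1000}, {"id": "CC2", "tti_us": 1000},
       {"id": "CC3", "tti_us": 250}, {"id": "CC4", "tti_us": 250}]
plan = assign_to_cores(form_timing_groups(ccs), ["core1", "core2"], {1000: 300, 250: 600})
```

Grouping by TTI is one of the two criteria the claims name; claim 14's synchronous/asynchronous SCH split could be sketched the same way by keying the groups on an activity flag instead of `tti_us`.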
16. A method of wireless communication of a baseband chip, comprising: identifying, by a master core of a microcontroller cluster, a first set of component carriers (CCs) as a first timing group and a second set of CCs as a second timing group; assigning, by the master core of the microcontroller cluster, the first timing group to a first core and the second timing group to a second core; controlling, by the first core of the microcontroller cluster, first operations performed by a first task sequencer (TS) to generate a first set of commands for a first set of hardware accelerators associated with the first timing group; controlling, by the second core of the microcontroller cluster, second operations performed by a second TS to generate a second set of commands for a second set of hardware accelerators associated with the second timing group; generating, by the first TS, the first set of commands for the first set of hardware accelerators associated with the first timing group based on a first set of task instructions from the first core; and generating, by the second TS, the second set of commands for the second set of hardware accelerators associated with the second timing group based on a second set of task instructions from the second core.
17. The method of claim 16, wherein: the first set of CCs in the first timing group are each associated with a first transmission time interval (TTI), the second set of CCs in the second timing group are each associated with a second TTI, and the first TTI and the second TTI are different.
18. The method of claim 16, wherein: the first set of CCs in the first timing group are each associated with synchronous shared channel (SCH) activity, and the second set of CCs in the second timing group are each associated with asynchronous SCH activity.
19. The method of claim 16, wherein: the first TS comprises a first task buffer associated with a first component carrier (CC) in the first set of CCs and a second task buffer associated with a second CC in the first set of CCs, the second TS comprises a third task buffer associated with a third CC in the second set of CCs and a fourth task buffer associated with a fourth CC in the second set of CCs, and the controlling, by the first core of the microcontroller cluster, the first operations performed by the first TS to generate the first set of commands for the first set of hardware accelerators comprises: retrieving, from a first slot-scheduler buffer, first monitoring-occasion information for a first control channel (CCH) of the first CC in the first set of CCs; retrieving, from a second slot-scheduler buffer, first grant-type information for a first shared channel (SCH) of the first CC in the first set of CCs; pushing a first task instruction associated with the first monitoring-occasion information for the first CCH of the first CC in the first set of CCs into the first task buffer of the first TS; pushing a second task instruction associated with the first grant-type information for the first SCH of the first CC in the first set of CCs into the first task buffer of the first TS; retrieving, from the first slot-scheduler buffer, second monitoring-occasion information for a second CCH of the second CC in the first set of CCs; retrieving, from the second slot-scheduler buffer, second grant-type information for a second SCH of the second CC in the first set of CCs; pushing a third task instruction associated with the second monitoring-occasion information for the second CCH of the second CC in the first set of CCs into the second task buffer of the first TS; and pushing a fourth task instruction associated with the second grant-type information for the second SCH of the second CC in the first set of CCs into the second task buffer of the first TS, wherein the first task buffer and the second task buffer are different.
20. The method of claim 19, further comprising: in response to a first timestamp-trigger or a first event-trigger, accessing, by the first TS, the first task instruction from the first task buffer; identifying, by the first TS, a first memory address of a first instruction based on the first task instruction; retrieving, by the first TS, the first instruction from a first location associated with the first memory address; executing, by the first TS, the first instruction to generate a first command for the first set of hardware accelerators; after executing the first instruction, identifying, by the first TS, a second memory address of a second instruction based on the second task instruction; retrieving, by the first TS, the second instruction from a second location associated with the second memory address; and executing, by the first TS, the second instruction to generate a second command for the first set of hardware accelerators.
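Taken together, the method claims describe scheduling along two dimensions: across component carriers (one task buffer per CC, one timing group per core) and in time (trigger-driven, in-order execution within each buffer). The self-contained sketch below combines the two; all names and data shapes are illustrative assumptions, not the claimed implementation:

```python
from collections import deque

def two_dimensional_schedule(timing_groups, instr_for_task):
    """Carrier dimension: iterate timing groups (one per core) and the per-CC task
    buffers inside each; time dimension: drain every buffer in order, where the
    completed execution of one instruction event-triggers the next (claims 16-20)."""
    command_streams = {}
    for core, cc_buffers in timing_groups.items():
        stream = []
        for cc, tasks in cc_buffers.items():
            buf = deque(tasks)
            while buf:
                task = buf.popleft()
                stream.append(instr_for_task[task]())  # execute -> accelerator command
        command_streams[core] = stream
    return command_streams

# Illustrative usage: two cores, two CCs each, a CCH task then an SCH task per CC.
groups = {"core1": {"CC1": ["cch1", "sch1"], "CC2": ["cch2", "sch2"]},
          "core2": {"CC3": ["cch3", "sch3"], "CC4": ["cch4", "sch4"]}}
instrs = {t: (lambda t=t: f"cmd_{t}") for g in groups.values() for b in g.values() for t in b}
streams = two_dimensional_schedule(groups, instrs)
```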
PCT/US2022/041864 2022-08-29 2022-08-29 Apparatus and method for two-dimensional scheduling of downlink layer 1 operations WO2024049405A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/US2022/041864 WO2024049405A1 (en) 2022-08-29 2022-08-29 Apparatus and method for two-dimensional scheduling of downlink layer 1 operations


Publications (1)

Publication Number Publication Date
WO2024049405A1 true WO2024049405A1 (en) 2024-03-07

Family

ID=90098481

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2022/041864 WO2024049405A1 (en) 2022-08-29 2022-08-29 Apparatus and method for two-dimensional scheduling of downlink layer 1 operations

Country Status (1)

Country Link
WO (1) WO2024049405A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6515979B1 (en) * 1998-06-09 2003-02-04 Nec Corporation Baseband signal processor capable of dealing with multirate and multiuser communication with a small structure
US20120084543A1 (en) * 2010-10-01 2012-04-05 Intel Mobile Communications Technology Dresden GmbH Hardware accelerator module and method for setting up same
US20120166763A1 (en) * 2010-12-22 2012-06-28 Via Technologies, Inc. Dynamic multi-core microprocessor configuration discovery
US20140215236A1 (en) * 2013-01-29 2014-07-31 Nvidia Corporation Power-efficient inter processor communication scheduling
US20180077691A1 (en) * 2012-11-14 2018-03-15 Lg Electronics Inc. Method for operating terminal in carrier aggregation system, and apparatus using said method


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
BEN-SHIMOL ET AL.: "Two-dimensional mapping for wireless OFDMA systems", IEEE TRANSACTIONS ON BROADCASTING, vol. 52, no. 3, September 2006 (2006-09), pages 388 - 396, XP007903844, Retrieved from the Internet <URL:https://ieeexplore.ieee.org/abstract/document/1677815> [retrieved on 20221016], DOI: 10.1109/TBC.2006.879937 *

Similar Documents

Publication Publication Date Title
US11122580B2 (en) Evolved node-b (ENB), user equipment (UE) and methods for flexible duplex communication
WO2020020180A1 (en) Resource allocation method and device
WO2020052514A1 (en) Information sending method, information receiving method, and device
EP3375213B1 (en) Method and device for performing uplink transmission
WO2020029996A1 (en) Method for detecting dci, method for configuring pdcch, and communication apparatus
WO2019233398A1 (en) Data transmission method, communication apparatus and storage medium
KR20210126607A (en) User equipment and systems that perform transmit and receive operations
WO2018228537A1 (en) Information sending and receiving method and apparatus
AU2018417481A1 (en) Data transmission method, terminal device and network device
US20220312459A1 (en) Enhanced Configured Grants
WO2018171461A1 (en) Information transmission method, apparatus and system
WO2020200012A1 (en) Communication method and communication device
WO2018107457A1 (en) Data multiplexing device, method, and communication system
WO2019192515A1 (en) Method and apparatus for transmitting feedback information
WO2022228117A1 (en) Method and device for determining ptrs pattern
US20220232619A1 (en) Method processing for split resources and processing device
WO2017114218A1 (en) Method and device for dividing resource
WO2024049405A1 (en) Apparatus and method for two-dimensional scheduling of downlink layer 1 operations
WO2019200507A1 (en) Control of d2d duplication
WO2022036527A1 (en) Uplink control information transmission method, communication apparatus, and related device
WO2023282888A1 (en) Latency-driven data activity scheme for layer 2 power optimization
WO2022213653A1 (en) Frequency domain resource location determination method and apparatus, terminal, and network device
US20230019102A1 (en) Data plane scalable architecture for wireless communication
US20230014887A1 (en) Uplink data grant scheduling
WO2022206893A1 (en) Communication method and communication apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22957571

Country of ref document: EP

Kind code of ref document: A1