WO2021152368A1 - Adaptable grant prediction for enhanced packet data transmission - Google Patents

Adaptable grant prediction for enhanced packet data transmission

Info

Publication number
WO2021152368A1
Authority
WO
WIPO (PCT)
Prior art keywords
grant
prediction
packet data
processor
transmission
Prior art date
Application number
PCT/IB2020/059911
Other languages
English (en)
Inventor
Su-Lin Low
Tianan Tim Ma
Hong Kui Yang
Hausting Hong
Original Assignee
Zeku Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zeku Inc. filed Critical Zeku Inc.
Priority to CN202080094168.7A priority Critical patent/CN115398948A/zh
Publication of WO2021152368A1 publication Critical patent/WO2021152368A1/fr

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W28/00Network traffic management; Network resource management
    • H04W28/02Traffic management, e.g. flow control or congestion control
    • H04W28/0278Traffic management, e.g. flow control or congestion control using buffer status reports
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W28/00Network traffic management; Network resource management
    • H04W28/02Traffic management, e.g. flow control or congestion control
    • H04W28/0231Traffic management, e.g. flow control or congestion control based on communication conditions
    • H04W28/0236Traffic management, e.g. flow control or congestion control based on communication conditions radio quality, e.g. interference, losses or delay

Definitions

  • Embodiments of the present disclosure relate to apparatuses and methods for grant prediction, which may be applicable to communication systems, such as wireless communication systems.
  • Communication systems such as wireless communication systems
  • wireless communication systems are widely deployed to provide various telecommunication services such as telephony, video, data, messaging, and broadcasts.
  • a modem having a protocol stack embodied in hardware and software may pass the packets down the protocol stack with a physical layer, including a radio frequency (RF) module, ultimately converting the bits of the packet into radio waves.
  • RF radio frequency
  • terminal devices such as user equipment
  • the user equipment may receive an actual grant in downlink control information (DCI) on the physical downlink control channel (PDCCH).
  • DCI downlink control information
  • PDCCH physical downlink control channel
  • NW network dynamic allocated grant.
  • the actual grant is received in the DCI in PDCCH, decoded and calculated, then used for gathering packets for priority transmission before the transmission (TX) deadline.
  • Embodiments of apparatuses and methods for data packet processing, including grant prediction and preparation of data packets in advance of a grant, are disclosed herein.
  • the apparatuses may be variously implemented as user equipment, systems-on-chip, or the components or sub-components thereof.
  • an apparatus for data packet processing can include at least one memory configured to store packet data for transmission.
  • the memory may be a local memory when, for example, the apparatus is a system-on-chip.
  • the memory may, alternatively, be external to a system-on-chip, and the apparatus may be a user equipment that includes the memory and the system-on-chip.
  • the apparatus can also include at least one processor, such as a system-on-chip or processor portion thereof, operatively connected to the at least one memory and configured to process the packet data for transmission.
  • the processor can be configured to, when processing the packet data for transmission, predict a grant amount of a future actual grant for the transmission. Prediction of the grant amount by the processor can take into account at least one past prediction of the apparatus.
  • the processor can also be configured to prepare, before receiving the future actual grant, the packet data for transmission based on the prediction of the grant amount.
  • a method can include predicting, by a processor of an apparatus, a grant amount of a future actual grant for transmission. Prediction of the grant amount by the processor may take into account at least one past prediction of the apparatus. The method can also include preparing, by the processor of the apparatus, before receiving the future actual grant, the packet data for transmission based on the prediction of the grant amount.
  • a non-transitory computer-readable medium can be encoded with instructions that, when executed by a processor of an apparatus, perform a process.
  • the process can include predicting a grant amount of a future actual grant for transmission. Prediction of the grant amount by the processor can take into account at least one past prediction of the apparatus.
  • the process can also include preparing, before receiving the future actual grant, the packet data for transmission based on the prediction of the grant amount.
  • FIG. 1 illustrates data processing in a protocol stack, according to some embodiments of the present disclosure.
  • FIG. 2A illustrates a method for predicted grant calculation according to certain embodiments.
  • FIG. 2B illustrates a grant prediction factor function that may be used in the method of FIG. 2A.
  • FIG. 3 illustrates a timing diagram of a grant prediction mechanism according to certain embodiments.
  • FIG. 4 is a flow chart corresponding to the timing diagram of FIG. 3.
  • FIG. 5 illustrates a detailed block diagram of a baseband system on chip (SoC) implementing Layer 2 packet processing using Layer 2 circuits and a microcontroller (MCU) according to some embodiments of the present disclosure.
  • FIG. 6 illustrates an exemplary wireless network that may incorporate data packet processing including grant prediction, in which some aspects of the present disclosure may be implemented, according to some embodiments of the present disclosure.
  • SoC system on chip
  • MCU microcontroller
  • FIG. 7 illustrates a node that may be used for grant prediction and other aspects of data packet processing, according to some embodiments of the present disclosure.
  • references in the specification to “one embodiment,” “an embodiment,” “an example embodiment,” “some embodiments,” etc. indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases do not necessarily refer to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it would be within the knowledge of a person skilled in the pertinent art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
  • terminology may be understood at least in part from usage in context.
  • the term “one or more” as used herein, depending at least in part upon context may be used to describe any feature, structure, or characteristic in a singular sense or may be used to describe combinations of features, structures or characteristics in a plural sense.
  • terms, such as “a,” “an,” or “the,” again, may be understood to convey a singular usage or to convey a plural usage, depending at least in part upon context.
  • the term “based on” may be understood as not necessarily intended to convey an exclusive set of factors and may, instead, allow for existence of additional factors not necessarily expressly described, again, depending at least in part on context.
  • CDMA code division multiple access
  • TDMA time division multiple access
  • FDMA frequency division multiple access
  • OFDMA orthogonal frequency division multiple access
  • SC-FDMA single-carrier frequency division multiple access
  • a CDMA network may implement a radio access technology (RAT) such as Universal Terrestrial Radio Access (UTRA), evolved UTRA (E-UTRA), CDMA 2000, etc.
  • RAT radio access technology
  • UTRA Universal Terrestrial Radio Access
  • E-UTRA evolved UTRA
  • CDMA 2000 etc.
  • TDMA network may implement a RAT such as GSM.
  • An OFDMA network may implement a RAT, such as long term evolution (LTE) or new radio (NR).
  • LTE long term evolution
  • NR new radio
  • the techniques and system described herein may be used for the wireless networks and RATs mentioned above, as well as other wireless networks and RATs. Likewise, the techniques and systems described herein may also be applied to wired networks, such as networks based on optical fibers, coaxial cables, or twisted-pairs, or to satellite networks.
  • In any dynamic-allocation uplink (UL) medium access control (MAC) transmission scheme, there may be an absence of a priori knowledge of the NW-allocated grant. Accordingly, there may be very limited time to decode and calculate the UL grant allocated by the NW in the PDCCH's DCI, and to compose and transmit the MACPDU with the given grant size within K2 slots (or symbols) from the current slot n. This is most computationally demanding when K2 is 1 slot or less than 1 slot (in symbols).
  • The present disclosure reveals that, in the approach described above, there is an inability to compose the MACPDU in advance of the NW grant allocation receipt time, insufficient time for LCP, and excessive delay in forming MACPDUs from L2 to MAC to PHY. Additionally, the present disclosure reveals that the above approach may result in large data transmission errors, large memory storage needed for L2 queues, and increased power due to large memory storage and increased data movement.
  • a simple, practical, and adaptable approach to predict the upcoming dynamic Network Grant allocation for the 5G UL MAC transmission is proposed.
  • the MACPDU for upcoming transmission can be prepared in advance using the predicted grant size, thus alleviating the critical timing and MIPS challenges of composing the MACPDU upon receiving the actual NW-allocated grant and transmitting within less than 1 slot.
  • Certain embodiments provide a simple, practical, and adaptable technique to predict the upcoming dynamic network grant allocation for the 5G UL MAC transmission.
  • the medium access control (MAC) protocol data unit (MACPDU) for upcoming transmission can be prepared in advance using the predicted grant size. Such an approach may alleviate the critical time and processing speed challenges in composing the MACPDU when both receiving the actual NW allocated grant and transmitting within less than 1 slot.
  • MAC medium access control
  • Certain embodiments include at least three aspects.
  • a first aspect may be that a predicted grant may be used for MACPDU transmission (TX) preparation in advance with logical channel prioritization (LCP).
  • a second aspect may be a grant prediction method.
  • a third aspect may relate to tunable factors for the grant prediction method.
  • the first aspect may relate to the use of a predicted grant for preparation in advance with LCP.
  • With a predicted grant, MAC may be able to perform the Logical Channel Prioritization (LCP) of packets in the logical channels well in advance, pulling data packets from different logical channels to compose the MACPDU. This may allow sufficient time to adjust the packet list to the actual size when the actual NW grant arrives, as well as time to encode and stream the data out for transmission.
  • LCP Logical Channel Prioritization
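  • As an illustration of this first aspect only (not taken from the disclosure itself), the following Python sketch shows how a predicted grant size might drive LCP in advance: packets are pulled from the highest-priority logical channels until the predicted grant is filled, and the pre-built list is later trimmed to the actual grant size. All names and the simplified LCP rule are assumptions for illustration.

```python
# Minimal sketch (not from the patent): preparing a MACPDU packet list in advance
# using a predicted grant, then adjusting it once the actual NW grant is decoded.
# LogicalChannel, prepare_packet_list, and adjust_to_actual_grant are hypothetical.

from dataclasses import dataclass, field
from typing import List


@dataclass
class LogicalChannel:
    priority: int                                     # lower value = higher LCP priority
    queue: List[int] = field(default_factory=list)    # pending packet sizes in bytes


def prepare_packet_list(channels: List[LogicalChannel], predicted_grant: int) -> List[int]:
    """Run a simplified LCP pass ahead of time: pull packets from the
    highest-priority channels first until the predicted grant is filled."""
    packet_list, remaining = [], predicted_grant
    for ch in sorted(channels, key=lambda c: c.priority):
        while ch.queue and ch.queue[0] <= remaining:
            pkt = ch.queue.pop(0)
            packet_list.append(pkt)
            remaining -= pkt
    return packet_list


def adjust_to_actual_grant(packet_list: List[int], actual_grant: int) -> List[int]:
    """When the actual grant arrives, trim the pre-built list so the MACPDU fits
    the allocated size (packets dropped here would return to their queues)."""
    adjusted, used = [], 0
    for pkt in packet_list:
        if used + pkt > actual_grant:
            break
        adjusted.append(pkt)
        used += pkt
    return adjusted
```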
  • The second aspect may base its prediction on the last actual NW grant value and may take into account modem-measurable inputs including total buffer queue sizes, transmission data rate, reception power, and network traffic loading. In addition, the method may attempt to converge on its prediction by accommodating feedback from prediction errors from the previous slot. According to the third aspect, the method can include tunable factors that serve to scale the input values. The tunable factors may also allow the method to be adaptable and applicable to various systems that may have different network characteristics when allocating a dynamic grant.
  • 5G fifth-generation
  • certain embodiments may be applied to other communication systems, such as other communication systems that have dynamic grants.
  • the packet data protocol stack includes the modem layer 3 IP layer, the packet data convergence protocol (PDCP) layer, the radio link control (RLC) layer, and the medium access control (MAC) layer.
  • PDCP packet data convergence protocol
  • RLC radio link control
  • MAC medium access control
  • incoming IP packets are queued into L3 QoS flows in each data radio bearer (DRB) queue after IP layer processing.
  • DRB data radio bearer
  • These packets undergo PDCP processing into L2 logical channels (LC) queues at the RLC layer.
  • the PDCP layer processing includes ROHC compression, integrity checking, and ciphering.
  • RLC layer processing includes link layer error recoveries where status and retransmissions may also be put into LC queues for transmission.
  • UE At the MAC layer, in order to transmit data packets, the UE first sends scheduling requests (SRs) and buffer status reports (BSRs) to request dynamically allocated grants from the network.
  • the UL scheduler at NW then sends the UE’s allocated grant in the DCI of the PDCCH each slot.
  • The UE decodes and calculates the NW-allocated grant size, then runs logical channel prioritization (LCP) to dequeue packets from each logical channel to compose the MAC PDU for the next transmission.
  • LCP logical channel prioritization
  • K2 The interval from receipt of the DCI grant to the transmission deadline is denoted by K2, which may be expressed, for example, in terms of slots or symbols.
  • FIG. 2A illustrates a method for predicted grant calculation according to certain embodiments.
  • FIG. 2B illustrates a grant prediction factor function that may be used in the method of FIG. 2A.
  • a predicted grant can be calculated according to the function described below.
  • the predicted grant, calculated at current slot n for the upcoming transmission (Tx) in the next slot (n+1), can be modeled by an infinite impulse response (IIR) difference equation, such as Equation (1).
  • IIR infinite impulse response
  • In Equation (1), the following may be the meanings of each term:
  • G_A(n−1) Actual network grant that was predicted at the previous slot, slot (n−1)
  • the upcoming predicted grant G(n) can be largely based on what the network allocated in the previous grant, especially in a steady state after the initial radio resource control (RRC) connection setup when data transfer is first scheduled.
  • This can be weighted by a weighting factor, F(t), which may be greater than zero and less than one.
  • the weighting factor can be selected as desired and may be based on slot-to-slot correlation expectations of the communication system. For example, if it is expected that the grants will be highly correlated to preceding slot grants, then the weighting factor may be close to 1, whereas if it is expected that the grants will be highly uncorrelated to preceding slot grants, then the weighting factor may be close to 0.
  • the weighting factor is shown as being applied by a multiplier.
  • the multiplier may be factory configured or may be configurable by software or hardware in a user device that includes the multiplier.
  • the multiplier may be configured by the network.
  • the weighting factor can itself be dynamic, dependent on factors such as the length of a current communication session or connection to a given base station or other network devices. For example, when data transfer first starts, the network-allocated grant may be small and then may ramp up as time progresses. Hence, early in a communication session, the expected NW grant may be dependent on several modem input values, with a weightage of (1−F(t)). As time progresses, more weightage of F(t) may be given to the previous actual network grant value G_A(n−1), with a small adjustment factor of (1−F(t)) from other modem inputs to predict the upcoming new network grant.
  • F(t) may ramp up from closer to zero to closer to one, based on the duration of the connection or session.
  • the prediction factor function F(t) can be modeled as a ramp function of time, which starts off from zero and saturates at a near-constant peak of 0.9-0.99 in a steady state.
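  • The following minimal Python sketch illustrates one way such a ramp-shaped prediction factor function F(t) could be modeled; the ramp duration and the exact saturation value (0.95 here, within the stated 0.9-0.99 range) are assumed tuning parameters, not values from the disclosure.

```python
# Illustrative sketch of the prediction factor function F(t) described above:
# a ramp that starts near zero at connection setup and saturates at a
# near-constant peak. The ramp length and peak are assumed tuning parameters.

def prediction_factor(slots_since_setup: int,
                      ramp_slots: int = 100,
                      peak: float = 0.95) -> float:
    """Linear ramp from 0 to `peak` over `ramp_slots` slots, then constant."""
    if slots_since_setup <= 0:
        return 0.0
    return min(peak, peak * slots_since_setup / ramp_slots)
```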
  • the factors K1 - K5 may be the weights for various inputs to the grant prediction, as mentioned above. More or fewer weights can be used, and the weights may be dynamic. Additionally, the weights may be tuned on a per MAC instance basis, such that each MAC instance may potentially have different weights. As another option, each user equipment may have its own set of weights.
  • the current total buffer size (Q) of all the logical channels may be a major input and may be weighted with the factor of K1.
  • K1 may be selected such that the value of Q multiplied by K1 is larger than the weighted values of other parameters.
  • K1 may be larger or smaller than other weighting factors of parameters, but the result may be that the weighted contribution of current buffer size is relatively large, such as being half or more of the contribution of the parameters.
  • the total buffer size may be the value provided in a buffer status report (BSR) to the network, for example, upon request by the network.
  • BSR buffer status report
  • the current UL data rate (R) of the transmission (Tx) carrier channel may directly affect the grant size allocated.
  • R may be weighted with K2.
  • the receive (Rx) power (P) at the modem may directly affect the grant size allocated by the NW.
  • a strong signal may indicate that the NW would allocate more grants. This may be weighted with K3.
  • Other similar parameters, such as signal-to-noise ratio (SNR) or signal-to-interference-plus-noise ratio (SINR), may similarly be considered and weighted.
  • SNR signal-to-noise ratio
  • SINR signal to interference plus noise ratio
  • the network traffic load (L) at the modem may be indicative of how busy the network is surrounding the UE and, consequently, the number of other UEs that may be sharing the network resources.
  • a heavy load may indicate that the NW scheduler would reduce the grant to the UE.
  • This parameter may be derived from the Ec/Io values at the UE, where Ec/Io is a measure of energy prior to de-spreading compared to the interference present in a wide-band radio propagation signal. This input is weighted with K4 and impacts the grant negatively. Other ratios may similarly be used, such as Eb/No, the ratio of energy after de-spreading to the noise of a wide-band radio propagation signal.
  • the device may attempt to distinguish interference from the network providing grants, as distinct from other networks and other radio access technologies (RATs) in the area.
  • RATs radio access technologies
  • the above parameters are provided as examples. In practice, more or fewer parameters may be used. If the network provides some indication of network load, or other factors, those network indications can be taken into consideration by the user equipment.
  • the grant prediction error (E) may be weighted by K5.
  • K5a and K5b Two options are shown in FIG. 2A, namely K5a and K5b.
  • the grant prediction error for a current slot n can be modeled by Equation (2):
  • Equation (2): E(n) = G(n−1) − G_A(n−1)
  • In Equation (2), the error, E(n), can represent the difference between the predicted grant and the actual network grant allocated for the prediction interval of slot (n−1). Note that the actual NW grant of the previous slot, G_A(n−1), is only decoded at the very beginning of slot n, when the DCI/PDCCH is decoded, whereas the predictions are all made one slot ahead, at slot (n−1).
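  • Because the body of Equation (1) is not reproduced in this extract, the following Python sketch combines the described terms in one plausible way only: the previous actual grant weighted by F(t), the modem inputs Q, R, P, and L weighted by K1-K4 and scaled by (1 − F(t)), and the prediction error E(n) of Equation (2) fed back with weight K5. The exact combination of terms and the sign of the error feedback are assumptions.

```python
# Hedged sketch of the predictor suggested by Equations (1) and (2). The exact
# form of Equation (1) is not shown in this extract, so the way the terms are
# combined below is an assumption consistent with the surrounding description.

def prediction_error(predicted_prev: float, actual_prev: float) -> float:
    """Equation (2): E(n) = G(n-1) - G_A(n-1)."""
    return predicted_prev - actual_prev


def predict_grant(actual_prev: float,     # G_A(n-1), decoded at the start of slot n
                  predicted_prev: float,  # G(n-1), computed one slot earlier
                  q: float, r: float, p: float, load: float,
                  f_t: float,
                  k1: float, k2: float, k3: float, k4: float, k5: float) -> float:
    """Return G(n), the predicted grant for the upcoming transmission."""
    e_n = prediction_error(predicted_prev, actual_prev)
    modem_inputs = k1 * q + k2 * r + k3 * p - k4 * load  # heavy load reduces the grant
    g_n = f_t * actual_prev + (1.0 - f_t) * modem_inputs - k5 * e_n
    return max(g_n, 0.0)  # a grant size cannot be negative
```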
  • FIG. 3 illustrates a timing diagram of a grant prediction mechanism according to certain embodiments.
  • the timing diagram illustrates the execution time sequence of an example implementation of a method, which may be implemented by a device, such as a baseband chip of a UE.
  • FIG. 4 is a flow chart of method 400, corresponding to the timing diagram of FIG. 3. As shown in FIG. 3, at 305, the DCI and PDCCH are shown as the references.
  • FIG. 3 illustrates the case where K2 is less than one slot, although the same principles discussed herein may similarly be applied to cases where K2 is longer than one slot.
  • the DCI for slot n is provided in the physical downlink control channel (PDCCH) in a first portion of slot n, while the scheduled transmission may occur later in the same slot.
  • PDCCH physical downlink control channel
  • the scheduled transmission may occur later in the same slot.
  • the same may be true in slot n+1, as well as slot n-1, although the DCI and scheduled transmission are not shown in slot n-1.
  • the L1/PHY can decode PDCCH and DCI and can calculate the actual network grant size G_A(n−1).
  • the grant size may be signaled implicitly or explicitly by the network.
  • MAC software can service the actual NW grant size G_A(n−1), which may involve adjusting a previously prepared MAC packet list to fit the actual grant size. Then, the PHY layer can encode and stream the MACPDU for UL transmission.
  • the device may calculate the grant prediction error for the prediction executed at slot (n−1). This grant prediction error may be calculated by comparing the value of the predicted grant calculated at that previous time with the actual NW-allocated grant just decoded at 310, at the beginning of slot n.
  • the device may gather other inputs, such as values of other parameters that may be used in calculating a predicted grant.
  • values of the following parameters may be retrieved: total buffer size, UL data rate, reception (Rx) power, and network traffic load. These may be retrieved from the memory of the device itself.
  • the device may calculate a new predicted grant for the next transmission, G(n).
  • the calculation may be based on the calculated error, parameter values, and any other inputs.
  • the calculation may take into account factor adjustments, such as the weights K1, K2, and so on, in Equation (1), above.
  • the device can prepare a MACPDU packet list in advance, with a size based on the predicted grant G(n), mainly by running logical channel prioritization and packet data convergence protocol (PDCP) processing.
  • PDCP packet data convergence protocol
  • the packets prepared during grant prediction at 360 in slot n can be adjusted at 320 in slot n+1 and sent in slot n+1 at 305.
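  • Tying the sketches above together, the following hypothetical per-slot driver mirrors the described sequence: decode the actual grant, adjust and send the pre-built packet list, compute the prediction error, gather modem inputs, predict the next grant, and prepare the next MACPDU packet list in advance. The state layout, the modem_inputs() helper, and the weight tuple are assumptions for illustration only.

```python
# Minimal per-slot driver using the helper sketches above (prepare_packet_list,
# adjust_to_actual_grant, prediction_factor, predict_grant). State names and
# callables are hypothetical; weights K1-K5 would be tuned per system or per
# MAC instance.

def run_slot(state, channels, modem_inputs, weights, slot_index):
    actual_grant = state["decode_actual_grant"]()           # from DCI/PDCCH at start of slot n
    tx_list = adjust_to_actual_grant(state["prepared_list"], actual_grant)
    state["transmit"](tx_list)                               # encode and stream the MACPDU

    q, r, p, load = modem_inputs()                           # buffer size, UL rate, Rx power, load
    f_t = prediction_factor(slot_index)
    g_next = predict_grant(actual_grant, state["predicted_prev"],
                           q, r, p, load, f_t, *weights)     # weights = (k1, k2, k3, k4, k5)

    state["predicted_prev"] = g_next
    state["prepared_list"] = prepare_packet_list(channels, int(g_next))
    return g_next
```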
  • Certain embodiments of the present disclosure may have various benefits and/or advantages. For example, certain embodiments may provide a practical and adaptable approach that achieves its purpose with low complexity. Moreover, certain embodiments may be computationally efficient and readily implementable. Additionally, certain embodiments may take advantage of tunable weighting factors, which can be tuned for each system or for each MAC instance, as desired.
  • Certain embodiments provide an adaptable and flexible approach that allows easy addition of new input factors. Moreover, in certain embodiments, grant prediction errors may still improve system performance and may not cause performance degradation.
  • Certain embodiments may rely on little on-chip memory or CPU MIPS, and may not consume much power. Moreover, by the use of certain embodiments, the transmission timeline can be met easily when the K2 offset is less than 1 slot, namely when same-slot transmission is scheduled when the NW grant is received.
  • Certain embodiments may reduce latency in preparing MACPDU packets for transmission. Moreover, certain embodiments can coexist with non-predicted grant allocation schemes and fixed grant allocation schemes. Certain embodiments may be applicable to different wireless technologies requiring dynamic uplink grant allocation by the base station, such as 5G, LTE, or future 3GPP or other standards.
  • Certain embodiments may apply other techniques, such as machine learning.
  • machine learning may be used to adjust the weight of the various parameters and to consider additional parameters for inclusion.
  • machine learning and other forms of artificial intelligence can be used for fine-tuning the factors with collected data.
  • FIG. 5 illustrates a detailed block diagram of a baseband SoC 502 implementing Layer 2 packet processing using Layer 2 circuits 508 and a microcontroller (MCU) 510 according to some embodiments of the present disclosure.
  • baseband SoC 502 may be one example of a software and hardware interworking system in which the software functions are implemented by MCU 510, and the hardware functions are implemented by Layer 2 circuits 508.
  • MCU 510 may be one example of a microcontroller
  • Layer 2 circuits 508 may be one example of integrated circuits, although other microcontrollers and integrated circuits are also permitted.
  • Layer 2 circuits 508 include an SDAP circuit 520, a PDCP circuit 522, an RLC circuit 524, and a MAC circuit 526.
  • the dedicated integrated circuits (ICs) (for example, SDAP circuit 520, PDCP circuit 522, RLC circuit 524, and MAC circuit 526) controlled by MCU 510 can be used to conduct Layer 2 packet processing.
  • each of SDAP, PDCP, RLC, and MAC circuits 520, 522, 524, or 526 is an IC dedicated to performing the functions of the respective layer in the Layer 2 user plane and/or control plane.
  • each of SDAP, PDCP, RLC, and MAC circuits 520, 522, 524, or 526 may be an ASIC, which may be customized for a particular use, rather than being intended for general-purpose use.
  • Some ASICs may have high speed, small die size, and low power consumption compared with a generic processor.
  • baseband SoC 502 may be operatively coupled to a host processor 504 and an external memory 506 through a main bus 538.
  • host processor 504 such as an application processor (AP)
  • AP application processor
  • host processor 504 may generate raw data that has not been coded and modulated yet by the PHY layer of baseband SoC 502.
  • host processor 504 may receive data after it is initially decoded and demodulated by the PHY layer and subsequently processed by Layer 2 circuits 508.
  • the raw data is formatted into data packets, according to any suitable protocols, for example, Internet Protocol (IP) data packets.
  • IP Internet Protocol
  • External memory 506 may be shared by host processor 504 and baseband SoC 502, or any other suitable components.
  • external memory 506 stores the raw data (e.g., IP data packets) to be processed by Layer 2 circuits 508 of baseband SoC 502 and stores the data processed by Layer 2 circuits 508 (e.g., MAC PDUs) to be accessed by Layer 1 (e.g., the PHY layer).
  • Layer 2 circuits 508 e.g., MAC PDUs
  • Layer 1 e.g., the PHY layer
  • External memory 506 may, or optionally may not, store any intermediate data of Layer 2 circuits 508, for example, PDCP PDUs/RLC SDUs or RLC PDUs/MAC SDUs.
  • Layer 2 circuits 508 may modify the data stored in external memory 506.
  • baseband SoC 502 may also include direct memory access (DMA) 516, which can allow some Layer 2 circuits 508 to access external memory 506 directly, independent of host processor 504.
  • DMA 516 may include a DMA controller and any other suitable input/output (I/O) circuits.
  • baseband SoC 502 may further include an internal memory 514, such as an on-chip memory on baseband SoC 502, which is distinguished from external memory 506, an off-chip memory not on baseband SoC 502.
  • internal memory 514 includes one or more L1, L2, L3, or L4 caches.
  • Layer 2 circuits 508 may access internal memory 514 through main bus 538 as well.
  • the internal memory 514 may thus be particular to the baseband SoC 502, as distinct from other components or sub-components of an implementing system.
  • baseband SoC 502 may further include a memory 512 that can be shared by (e.g., both accessed by) Layer 2 circuits 508 and MCU 510. It is understood that although memory 512 is shown as an individual memory separate from internal memory 514, in some examples, memory 512 and internal memory 514 may be local partitions of the same physical memory structure, for example, a static random-access memory (SRAM). In one example, a logical partition in internal memory 514 may be dedicated to or dynamically allocated to Layer 2 circuits 508 and MCU 510 for exchanging commands and responses.
  • SRAM static random-access memory
  • memory 512 includes a plurality of command queues 534 for storing a plurality of sets of commands, respectively, and a plurality of response queues 536 for storing a plurality of sets of responses, respectively.
  • Each pair of corresponding command queue 534 and response queue 536 may be dedicated to one of Layer 2 circuits 508.
  • baseband SoC 502 may further include a local bus 540.
  • MCU 510 may be operatively coupled to memory 512 and main bus 538 through local bus 540.
  • MCU 510 may be configured to generate a plurality of sets of control commands and write each set of commands into the respective command queue 534 in memory 512 through local bus 540 and interrupts.
  • MCU 510 may also read a plurality of sets of responses (e.g., processing result statuses) from response queues 536 in memory 512, respectively, through local bus 540 and interrupts.
  • MCU 510 generates a set of commands based on a set of responses from a higher layer in the Layer 2 protocol stack (e.g., the previous stage in Layer 2 uplink data processing) or a lower layer in the Layer 2 protocol stack (e.g., the previous stage in Layer 2 downlink data processing).
  • MCU 510 can be operatively coupled to Layer 2 circuits 508 and control the operations of Layer 2 circuits 508 to process the Layer 2 data. It is understood that although one MCU 510 is shown in FIG. 5, the number of MCUs is scalable, such that multiple MCUs may be used in some examples.
  • memory 512 may be part of MCU 510, e.g., a cache integrated with MCU 510. It is further understood that regardless of the naming, any suitable processing units that can generate control commands to control the operations of Layer 2 circuits 508 and check the responses of Layer 2 circuits 508 may be considered as MCU 510 disclosed herein.
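  • For illustration only, the following Python model sketches the command/response queue pairing described above, with one dedicated queue pair per Layer 2 circuit; in the actual SoC these queues would reside in shared on-chip memory (e.g., SRAM partitions) rather than in Python objects, and all class and method names here are hypothetical.

```python
# Illustration only: one dedicated command queue and response queue per Layer 2
# circuit (SDAP, PDCP, RLC, MAC), showing the MCU-side exchange pattern.

from collections import deque


class CircuitQueues:
    """A command queue and a response queue dedicated to one Layer 2 circuit."""
    def __init__(self, name: str):
        self.name = name
        self.commands = deque()   # written by the MCU, read by the circuit
        self.responses = deque()  # written by the circuit, read by the MCU


class Layer2Mcu:
    """MCU-side view: issue command sets and collect processing-result statuses."""
    def __init__(self):
        self.queues = {name: CircuitQueues(name)
                       for name in ("SDAP", "PDCP", "RLC", "MAC")}

    def issue_commands(self, circuit: str, commands):
        self.queues[circuit].commands.extend(commands)

    def read_responses(self, circuit: str):
        q = self.queues[circuit].responses
        responses = list(q)
        q.clear()
        return responses
```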
  • FIG. 6 illustrates an exemplary wireless network 600, in which some aspects of the present disclosure may be implemented, according to some embodiments of the present disclosure.
  • wireless network 600 may include a network of nodes, such as a UE 602, an access node 604, and a core network element 606.
  • User equipment 602 may be any terminal device, such as a mobile phone, a desktop computer, a laptop computer, a tablet, a vehicle computer, a gaming console, a printer, a positioning device, a wearable electronic device, a smart sensor, or any other device capable of receiving, processing, and transmitting information, such as any member of a vehicle to everything (V2X) network, a cluster network, a smart grid node, or an Internet-of-Things (IoT) node.
  • V2X vehicle to everything
  • IoT Internet-of-Things
  • Access node 604 may be a device that communicates with user equipment 602, such as a wireless access point, a base station (BS), a Node B, an enhanced Node B (eNodeB or eNB), a next-generation NodeB (gNodeB or gNB), a cluster master node, or the like. Access node 604 may have a wired connection to user equipment 602, a wireless connection to user equipment 602, or any combination thereof. Access node 604 may be connected to user equipment 602 by multiple connections, and user equipment 602 may be connected to other access nodes in addition to access node 604. Access node 604 may also be connected to other UEs. It is understood that access node 604 is illustrated by a radio tower by way of illustration and not by way of limitation.
  • Core network element 606 may serve access node 604 and user equipment 602 to provide core network services.
  • core network element 606 may include a home subscriber server (HSS), a mobility management entity (MME), a serving gateway (SGW), or a packet data network gateway (PGW).
  • HSS home subscriber server
  • MME mobility management entity
  • SGW serving gateway
  • PGW packet data network gateway
  • core network elements of an evolved packet core (EPC) system which is a core network for the LTE system.
  • EPC evolved packet core
  • core network element 606 includes an access and mobility management function (AMF) device, a session management function (SMF) device, or a user plane function (UPF) device, of a core network for the NR system.
  • AMF access and mobility management function
  • SMF session management function
  • UPF user plane function
  • Core network element 606 may connect with a large network, such as the Internet 608, or another IP network, to communicate packet data over any distance.
  • data from user equipment 602 may be communicated to other UEs connected to other access points, including, for example, a computer 610 connected to Internet 608, for example, using a wired connection or a wireless connection, or to a tablet 612 wirelessly connected to Internet 608 via a router 614.
  • computer 610 and tablet 612 provide additional examples of possible UEs
  • router 614 provides an example of another possible access node.
  • a generic example of a rack-mounted server is provided as an illustration of core network element 606.
  • core network element 606 there may be multiple elements in the core network including database servers, such as a database 616, and security and authentication servers, such as an authentication server 618.
  • Database 616 may, for example, manage data related to user subscription to network services.
  • a home location register (HLR) is an example of a standardized database of subscriber information for a cellular network.
  • authentication server 618 may handle authentication of users, sessions, and so on.
  • an authentication server function (AUSF) device may be the specific entity to perform user equipment authentication.
  • a single server rack may handle multiple such functions, such that the connections between core network element 606, authentication server 618, and database 616, may be local connections within a single rack.
  • similar techniques may likewise be used for the other direction of processing and for processing in other devices, such as access nodes, and core network nodes.
  • any device that processes packets through a plurality of layers of a protocol stack may benefit from some embodiments of the present disclosure, even if not specifically listed above or illustrated in the example network of FIG. 6.
  • Each of the elements of FIG. 6 may be considered a node of wireless network 600. More detail regarding the possible implementation of a node is provided by way of example in the description of a node 700 in FIG. 7 below.
  • Node 700 may be configured as user equipment 602, access node 604, or core network element 606 in FIG. 6.
  • node 700 may also be configured as computer 610, router 614, tablet 612, database 616, or authentication server 618 in FIG. 6.
  • node 700 may include a processor 702, a memory 704, and a transceiver 706. These components are shown as connected to one another by bus 708, but other connection types are also permitted. When node 700 is user equipment 602, additional components may also be included, such as a user interface (UI), sensors, and the like. Similarly, node 700 may be implemented as a blade in a server system when node 700 is configured as core network element 606. Other implementations are also possible.
  • Transceiver 706 may include any suitable device for sending and/or receiving data.
  • Node 700 may include one or more transceivers, although only one transceiver 706 is shown for simplicity of illustration.
  • An antenna 710 is shown as a possible communication mechanism for node 700. Multiple antennas and/or arrays of antennas may be utilized. Additionally, examples of node 700 may communicate using wired techniques rather than (or in addition to) wireless techniques.
  • access node 604 may communicate wirelessly to user equipment 602 and may communicate by a wired connection (for example, by optical or coaxial cable) to core network element 606.
  • Other communication hardware, such as a network interface card (NIC), may be included as well.
  • NIC network interface card
  • node 700 may include processor 702. Although only one processor is shown, it is understood that multiple processors can be included.
  • Processor 702 may include microprocessors, microcontrollers, DSPs, ASICs, field-programmable gate arrays (FPGAs), programmable logic devices (PLDs), state machines, gated logic, discrete hardware circuits, and other suitable hardware configured to perform the various functions described throughout the present disclosure.
  • Processor 702 may be a hardware device having one or many processing cores.
  • Processor 702 may execute software.
  • Software shall be construed broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software modules, applications, software applications, software packages, routines, subroutines, objects, executables, threads of execution, procedures, functions, etc., whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise.
  • Software can include computer instructions written in an interpreted language, a compiled language, or machine code. Other techniques for instructing hardware are also permitted under the broad category of software.
  • Processor 702 may be a baseband chip, such as SoC 502 in FIG. 5.
  • the node 700 may also include other processors, not shown, such as a central processing unit of the device, a graphics processor, or the like.
  • the processor 702 may include internal memory (not shown in FIG. 7) that may serve as memory for L2 data, such as internal memory 514 in FIG. 5.
  • Processor 702 may include an RF chip, for example, integrated into a baseband chip, or an RF chip may be provided separately.
  • Processor 702 may be configured to operate as a modem of node 700, or may be one element or component of a modem. Other arrangements and configurations are also permitted. As shown in FIG. 7, node 700 may also include memory 704. Although only one memory is shown, it is understood that multiple memories can be included.
  • Memory 704 can broadly include both memory and storage.
  • memory 704 may include random-access memory (RAM), read-only memory (ROM), SRAM, dynamic RAM (DRAM), ferro-electric RAM (FRAM), electrically erasable programmable ROM (EEPROM), CD-ROM or other optical disk storage, hard disk drive (HDD), such as magnetic disk storage or other magnetic storage devices, Flash drive, solid-state drive (SSD), or any other medium that can be used to carry or store desired program code in the form of instructions that can be accessed and executed by processor 702.
  • RAM random-access memory
  • ROM read-only memory
  • DRAM dynamic RAM
  • FRAM ferro-electric RAM
  • EEPROM electrically erasable programmable ROM
  • CD-ROM or other optical disk storage
  • HDD hard disk drive
  • Flash drive
  • the functions described herein may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or encoded as instructions or code on a non-transitory computer-readable medium.
  • Computer-readable media includes computer storage media. Storage media may be any available media that can be accessed by a computing device, such as node 700 in FIG. 7.
  • such computer-readable media can include RAM, ROM, EEPROM, CD-ROM or other optical disk storage, HDD, such as magnetic disk storage or other magnetic storage devices, Flash drive, SSD, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a processing system, such as a mobile device or a computer.
  • Disk and disc include CD, laser disc, optical disc, DVD, and floppy disk, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
  • an apparatus for packet data processing can include at least one memory configured to store packet data for transmission.
  • the apparatus can also include at least one processor operatively connected to the at least one memory and configured to process the packet data for transmission.
  • the processor can be configured to, when processing the packet data for transmission, predict a grant amount of a future actual grant for the transmission. Prediction of the grant amount by the processor may take into account at least one past prediction of the apparatus.
  • the processor can also be configured to prepare, before receiving the future actual grant, the packet data for transmission based on the prediction of the grant amount.
  • the processor can be further configured to calculate prediction error by comparing the at least one past prediction with an actual past grant.
  • the prediction error can be taken into account in the prediction of the grant amount.
  • the processor may further be configured to gather values of a plurality of parameters associated with a network associated with the grant amount.
  • the plurality of parameters may be taken into account in the prediction of the grant amount.
  • the processor when preparing the packet data for transmission, may be configured to prepare the packet data in a MAC packet data unit packet list.
  • the processor may further be configured to weight the values of the plurality of parameters.
  • the processor when weighting the values, may further be configured to weight the values on a per-MAC instance basis.
  • the parameters comprise at least one of total buffer size, uplink data rate, reception power, or network traffic load.
  • a method for grant prediction and preparation can include predicting, by a processor of an apparatus, a grant amount of a future actual grant for transmission, wherein prediction of the grant amount by the processor takes into account at least one past prediction of the apparatus.
  • the method can also include preparing, by the processor of the apparatus, before receiving the future actual grant, the packet data for transmission based on the prediction of the grant amount.
  • the method can further include calculating, by the processor of the apparatus, prediction error by comparing the at least one past prediction with an actual past grant.
  • the prediction error may be taken into account in the prediction of the grant amount.
  • the method can also include gathering, by the processor of the apparatus, values of a plurality of parameters associated with a network associated with the grant amount.
  • the plurality of parameters may be taken into account in the prediction of the grant amount.
  • the preparing the packet data for transmission comprises preparing the packet data in a MAC packet data unit packet list.
  • the method can further include weighting the values of the plurality of parameters.
  • the weighting the values is performed on a per-MAC instance basis.
  • the parameters can include total buffer size, uplink data rate, reception power, network traffic load, or any combination thereof.
  • a non-transitory computer-readable medium encoded with instructions that, when executed by a processor of an apparatus, perform a process.
  • the process can include predicting a grant amount of a future actual grant for transmission. Prediction of the grant amount by the processor can take into account at least one past prediction of the apparatus.
  • the process can also include preparing, before receiving the future actual grant, the packet data for transmission based on the prediction of the grant amount.
  • the process can include calculating prediction error by comparing the at least one past prediction with an actual past grant.
  • the prediction error can be taken into account in the prediction of the grant amount.
  • the process can include gathering values of a plurality of parameters associated with a network associated with the grant amount.
  • the plurality of parameters can be taken into account in the prediction of the grant amount.
  • the preparing the packet data for transmission can include preparing the packet data in a MAC packet data unit packet list.
  • the process may further include weighting the values of the plurality of parameters.
  • the parameters can include at least one of total buffer size, uplink data rate, reception power, or network traffic load.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

Embodiments of apparatuses and methods for grant prediction and preparation may be applicable to communication systems, such as wireless communication systems. In one example, an apparatus for grant prediction and preparation can include at least one memory configured to store packet data for transmission. The apparatus also includes at least one processor operatively connected to the at least one memory and configured to process the packet data for transmission. The processor can be configured to predict a grant amount of a future actual grant for the transmission. Prediction of the grant amount by the processor can take into account at least one past prediction of the apparatus. The processor can also be configured to prepare, before receiving the future actual grant, the packet data for transmission based on the prediction of the grant amount.
PCT/IB2020/059911 2020-01-29 2020-10-22 Prédiction d'autorisation adaptable pour une transmission de données par paquets améliorée WO2021152368A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202080094168.7A CN115398948A (zh) 2020-01-29 2020-10-22 用于增强分组数据传输的自适应授权预测

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202062967459P 2020-01-29 2020-01-29
US62/967,459 2020-01-29

Publications (1)

Publication Number Publication Date
WO2021152368A1 true WO2021152368A1 (fr) 2021-08-05

Family

ID=77078129

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2020/059911 WO2021152368A1 (fr) 2020-01-29 2020-10-22 Prédiction d'autorisation adaptable pour une transmission de données par paquets améliorée

Country Status (2)

Country Link
CN (1) CN115398948A (fr)
WO (1) WO2021152368A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030013454A1 (en) * 2001-03-09 2003-01-16 Denso Corporation Relative future activity indicators for assisting in selecting the source of received communications
US20100262881A1 (en) * 2009-04-08 2010-10-14 Via Telecom, Inc. Apparatus and method for reverse link transmission in an access terminal
US20150038156A1 (en) * 2013-07-31 2015-02-05 Qualcomm Incorporated Adapting mobile device behavior using predictive mobility
US20150124605A1 (en) * 2013-11-07 2015-05-07 Qualcomm Incorporated Method and apparatus for lte uplink throughput estimation
US20160113031A1 (en) * 2013-05-31 2016-04-21 Telefonaktiebolaget L M Ericsson (Publ) Predictive scheduling for uplink transmission in a cellular network

Also Published As

Publication number Publication date
CN115398948A (zh) 2022-11-25

Similar Documents

Publication Publication Date Title
CN110474854B (zh) 资源分配的方法和装置
EP3668259B1 (fr) Appareil et procédé pour fournir un réseau de service dans un système de communication sans fil
JP7262591B2 (ja) 無線通信方法及び装置
JP2019525622A (ja) グラントフリー伝送のためのアップリンクデータスケジューリングのためのシステムおよび方法
JP6627966B2 (ja) 無線アクセスネットワークノード、外部ノード、及びこれらの方法
WO2020220954A1 (fr) Procédé et appareil de détermination de priorité de planification
US20230006935A1 (en) Mini-token bucket for uplink transmission
WO2021152369A1 (fr) Schéma de transfert dynamique de données de bout en bout en liaison montante avec trajet de mémoire optimisé
CN114073157A (zh) 信道接入优先级的选择
EP4104334A1 (fr) Procédés et dispositifs de communication
EP3500037B1 (fr) Procédé de transmission de données, dispositif terminal et dispositif de réseau
CN115349285A (zh) 用于分组延迟预算受限场景的模式2资源(重新)选择的通信装置和通信方法
US20230101531A1 (en) Uplink medium access control token scheduling for multiple-carrier packet data transmission
US20230019547A1 (en) Uplink data transmission scheduling
US20240267179A1 (en) User equipment, scheduling node, method for user equipment, and method for scheduling node
WO2021152368A1 (fr) Prédiction d'autorisation adaptable pour une transmission de données par paquets améliorée
WO2023282888A1 (fr) Schéma d'activité de données à latence pour une optimisation de puissance de couche 2
US20210274374A1 (en) Method and network node for supporting a service over a radio bearer
KR20160140098A (ko) 무선 패킷 네트워크를 위한 스케줄링 방법 및 장치
CN112566259B (zh) 数据传输方法、装置、基站和存储介质
KR20180045705A (ko) 무선랜 시스템의 네트워크 장치에서의 스케줄링을 위한 방법 및 장치
WO2024092633A1 (fr) Rapport d'informations d'ue et gestion de retard de paquets dans une communication sans fil
US20230014887A1 (en) Uplink data grant scheduling
WO2021109792A1 (fr) Procédé de traitement de données et appareil associé
WO2022225500A1 (fr) Appareil et procédé de planification d'autorisation de liaison montante basée sur une tranche à porteuses multiples

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20917196

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20917196

Country of ref document: EP

Kind code of ref document: A1