CN115398948A - Adaptive grant prediction for enhanced packet data transmission - Google Patents

Adaptive grant prediction for enhanced packet data transmission

Info

Publication number
CN115398948A
Authority
CN
China
Prior art keywords
prediction
grant
packet data
processor
authorization
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202080094168.7A
Other languages
Chinese (zh)
Inventor
刘素琳
马天安
杨鸿魁
H·洪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zeku Technology Shanghai Corp Ltd
Original Assignee
Zheku Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zheku Technology Co ltd filed Critical Zheku Technology Co ltd
Publication of CN115398948A

Links

Images

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W28/00 Network traffic management; Network resource management
    • H04W28/02 Traffic management, e.g. flow control or congestion control
    • H04W28/0278 Traffic management, e.g. flow control or congestion control, using buffer status reports
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W28/00 Network traffic management; Network resource management
    • H04W28/02 Traffic management, e.g. flow control or congestion control
    • H04W28/0231 Traffic management, e.g. flow control or congestion control, based on communication conditions
    • H04W28/0236 Traffic management, e.g. flow control or congestion control, based on communication conditions: radio quality, e.g. interference, losses or delay

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

Embodiments of an apparatus and method for grant prediction and preparation may be applied to communication systems, such as wireless communication systems. In an example, an apparatus for grant prediction and preparation may include at least one memory configured to store packet data for transmission. The apparatus also includes at least one processor operatively connected to the at least one memory and configured to process the packet data for transmission. The processor may be configured to predict a grant amount for a future actual grant for the transmission. The prediction of the grant amount by the processor may take into account at least one past prediction of the apparatus. The processor may be further configured to prepare the packet data for transmission based on the prediction of the grant amount prior to receiving the future actual grant.

Description

Adaptive grant prediction for enhanced packet data transmission
Cross Reference to Related Applications
This application is related to and claims priority from U.S. provisional patent application No. 62/967,459, filed on January 29, 2020, which is incorporated herein by reference in its entirety.
Technical Field
Embodiments of the present disclosure relate to an apparatus and method for grant prediction, which can be applied to a communication system such as a wireless communication system.
Background
Communication systems, such as wireless communication systems, are widely deployed to provide various telecommunication services such as telephony, video, data, messaging, and broadcasting. When a packet is to be transmitted over a medium, such as over the air in the case of wireless communications, a modem having a protocol stack embodied in hardware and software may pass the packet down the stack to a physical layer (including a Radio Frequency (RF) module), ultimately converting the bits of the packet into radio waves.
In certain communication systems, terminal devices, such as user equipment, are given dynamic bandwidth grants for transmissions. In this case, the user equipment may receive the actual grant in Downlink Control Information (DCI) on a Physical Downlink Control Channel (PDCCH). No prediction of the network (NW) dynamically allocated grant is made. The actual grant is received and decoded from the DCI in the PDCCH, its size is calculated, and it is then used to collect packets for prioritized transmission before the Transmission (TX) deadline.
Disclosure of Invention
Embodiments of an apparatus and method for data packet processing, including grant prediction and preparation of data packets prior to grant reception, are disclosed herein. The apparatus may be variously implemented as a user equipment, a system on a chip, or a component or sub-component thereof.
In one example, an apparatus for data packet processing, such as grant prediction, may include at least one memory configured to store packet data for transmission. For example, when the apparatus is a system on a chip, the memory may be a local memory. Alternatively, the memory may be external to the system on a chip, and the apparatus may be a user equipment comprising the memory and the system on a chip. The apparatus may also include at least one processor, such as a system on a chip or a processor portion thereof, operatively connected to the at least one memory and configured to process the packet data for transmission. The processor may be configured to predict a grant amount for a future actual grant for the transmission when processing the packet data for the transmission. The prediction of the grant amount by the processor may take into account at least one past prediction of the apparatus. The processor may be further configured to prepare the packet data for transmission based on the prediction of the grant amount prior to receiving the future actual grant.
In another example, a method may include predicting, by a processor of an apparatus, a grant amount for a future actual grant for a transmission. The prediction of the grant amount by the processor may take into account at least one past prediction of the apparatus. The method may also include preparing, by the processor of the apparatus, the packet data for transmission based on the prediction of the grant amount prior to receiving the future actual grant.
In yet another example, a non-transitory computer-readable medium may be encoded with instructions that, when executed by a processor of an apparatus, perform a process. The process may include predicting a grant amount for a future actual grant for a transmission. The prediction of the grant amount by the processor may take into account at least one past prediction of the apparatus. The process may also include preparing the packet data for transmission based on the prediction of the grant amount prior to receiving the future actual grant.
Drawings
The accompanying drawings, which are incorporated herein and form a part of the specification, illustrate embodiments of the present disclosure and, together with the detailed description, further serve to explain the principles of the disclosure and to enable a person skilled in the pertinent art to make and use the disclosure.
Fig. 1 illustrates data processing in a protocol stack according to some embodiments of the present disclosure.
Fig. 2A illustrates a method for computing a predicted grant, in accordance with certain embodiments.
Fig. 2B illustrates a grant predictor function that may be used in the method of fig. 2A.
Fig. 3 illustrates a timing diagram of a grant prediction mechanism, in accordance with certain embodiments.
Fig. 4 is a flow chart corresponding to the timing diagram of fig. 3.
Fig. 5 illustrates a detailed block diagram of a baseband system on chip (SoC) implementing layer 2 packet processing using layer 2 circuitry and a Microcontroller (MCU), according to some embodiments of the present disclosure.
Fig. 6 illustrates an example wireless network in which some aspects of the present disclosure may be implemented, and which may incorporate data packet processing including grant prediction, in accordance with some embodiments of the present disclosure.
Fig. 7 illustrates a node that may be used for grant prediction and other aspects of data packet processing according to some embodiments of the present disclosure.
Detailed Description
While specific configurations and arrangements are discussed, it should be understood that this is done for illustration only. Those skilled in the art will recognize that other configurations and arrangements may be used without departing from the spirit and scope of the present disclosure. It will be apparent to those skilled in the relevant art that the present disclosure may also be used in various other applications.
It is noted that references in the specification to "one embodiment," "an embodiment," "one example embodiment," "some embodiments," etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the relevant art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
Generally, terms are understood at least in part from the context of usage. For example, the term "one or more" as used herein may be used to describe any feature, structure, or characteristic in the singular or may be used to describe a combination of features, structures, or characteristics in the plural, depending, at least in part, on the context. Similarly, terms such as "a" or "the" may also be understood to refer to a singular use or to a plural use, depending at least in part on the context. Moreover, the term "based on" may be understood to not necessarily be meant to represent a dedicated set of factors, but rather may allow for the presence of some additional factors not necessarily explicitly described, also depending at least in part on the context.
Various aspects of a wireless communication system will now be described with reference to various apparatus and methods. These apparatus and methods are described in the following detailed description and illustrated in the accompanying drawings by various blocks, modules, units, components, circuits, steps, operations, procedures, algorithms, etc. (collectively referred to as "elements"). These elements may be implemented using electronic hardware, firmware, computer software, or any combination thereof. Whether such elements are implemented as hardware, firmware, or software depends upon the particular application and design constraints imposed on the overall system.
The techniques described herein may be used for various wireless communication networks such as Code Division Multiple Access (CDMA) systems, Time Division Multiple Access (TDMA) systems, Frequency Division Multiple Access (FDMA) systems, Orthogonal Frequency Division Multiple Access (OFDMA) systems, Single-Carrier Frequency Division Multiple Access (SC-FDMA) systems, and other networks. The terms "network" and "system" are often used interchangeably. A CDMA network may implement a Radio Access Technology (RAT), such as Universal Terrestrial Radio Access (UTRA), Evolved UTRA (E-UTRA), CDMA 2000, and so on. A TDMA network may implement a RAT such as GSM. An OFDMA network may implement a RAT such as Long Term Evolution (LTE) or New Radio (NR). The techniques and systems described herein may be used for the wireless networks and RATs mentioned above as well as other wireless networks and RATs. Likewise, the techniques and systems described herein may also be applied to wired networks, such as fiber optic, coaxial, or twisted-pair based networks, or satellite networks.
In any dynamically allocated Uplink (UL) Medium Access Control (MAC) transmission mechanism, there may be no a priori knowledge of the NW-allocated grant. Thus, the time for decoding and calculating the UL grant allocated by the NW in the DCI of the PDCCH can be very limited, and the MAC protocol data unit (MAC PDU) needs to be composed and transmitted with the given grant size within K2 slots (or symbols) starting from the current slot n. This is computationally most demanding when K2 is one slot or less than one slot (in symbols).
The present disclosure recognizes that, in the above approach, the MAC PDU cannot be composed before the NW-allocated grant is received, the time available for Logical Channel Prioritization (LCP) is not sufficient, and there is excessive delay in forming the MAC PDU and moving it from L2 through the MAC to the PHY. In addition, the present disclosure recognizes that the above approach may result in large data transmission errors, large memory storage required for the L2 queues, and increased power consumption due to the large memory storage and increased data movement.
Particular embodiments provide simple, practical, and adaptive techniques to predict the upcoming dynamic network grant allocation for 5G UL MAC transmissions. In particular embodiments, a predicted grant size may be used to prepare a MAC PDU in advance for an upcoming transmission. This approach can mitigate the critical time and processing speed (MIPS) challenges of composing a MAC PDU and transmitting it in less than one slot when the actual NW-allocated grant is received.
Particular embodiments include at least three aspects. A first aspect may be that the predicted grant may be used for advance MAC PDU transmission (Tx) preparation with Logical Channel Prioritization (LCP). A second aspect may be the grant prediction method itself. A third aspect may relate to tunable factors for the grant prediction method.
As mentioned above, the first aspect may involve using the predicted grant for advance preparation with LCP. By predicting the dynamic grant assigned by the network, the MAC can perform Logical Channel Prioritization (LCP) of packets in the logical channels in advance to extract data packets from different logical channels and compose a MAC PDU. This may allow enough time to resize the packet list to the actual size when the actual NW grant arrives, as well as allow the data to be encoded and streamed out for transmission.
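For illustration only, the following C sketch shows one simplified way such advance packet-list preparation could be carried out against a predicted grant size. It uses a strict priority fill and omits details of the full LCP procedure (such as prioritized bit rates); the structure and function names are assumptions made for this example rather than elements of the embodiments described above.

```c
#include <stddef.h>
#include <stdint.h>

typedef struct {
    uint8_t  priority;        /* lower value = higher LCP priority */
    uint32_t pending_bytes;   /* data waiting in this logical channel */
} logical_channel_t;

typedef struct {
    size_t   lc_index;        /* which logical channel the bytes come from */
    uint32_t bytes;           /* bytes reserved from that logical channel */
} prepared_entry_t;

/* Walk the logical channels in priority order and reserve bytes up to the
 * predicted grant, producing a packet list that can later be resized to
 * the actual grant.  Returns the number of entries written to out[]. */
size_t prepare_packet_list(const logical_channel_t *lc, size_t num_lc,
                           uint32_t predicted_grant_bytes,
                           prepared_entry_t *out, size_t out_cap)
{
    size_t produced = 0;
    uint32_t remaining = predicted_grant_bytes;

    for (unsigned prio = 0; prio < 256 && remaining > 0; prio++) {
        for (size_t i = 0; i < num_lc && remaining > 0; i++) {
            if (lc[i].priority != prio || lc[i].pending_bytes == 0)
                continue;
            if (produced == out_cap)
                return produced;                 /* prepared list is full */
            uint32_t take = (lc[i].pending_bytes < remaining)
                                ? lc[i].pending_bytes : remaining;
            out[produced].lc_index = i;
            out[produced].bytes    = take;
            produced++;
            remaining -= take;
        }
    }
    return produced;
}
```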
In a second aspect, the grant prediction method may base its prediction on the last actual NW grant value and take into account measurable inputs of the modem, including total buffer queue size, transmission data rate, received power, and network traffic load. Furthermore, the method may attempt to converge in its prediction by feeding back the prediction error from the previous slot.
According to a third aspect, the method may comprise an adjustable factor for scaling the input value. The adjustable factor may also allow the method to be adaptable and suitable for use in a variety of systems that may have different network characteristics when assigning dynamic grants. Thus, while a fifth generation (5G) uplink is used as an example, particular embodiments may be applied to other communication systems, such as other communication systems with dynamic grants.
As shown in fig. 1, in a 5G cellular radio modem, the packet data protocol stack includes a modem Layer 3 (L3) IP layer, a Packet Data Convergence Protocol (PDCP) layer, a Radio Link Control (RLC) layer, and a Medium Access Control (MAC) layer. Each layer is responsible for handling user-plane packet data in the form of IP data or raw user data and for ensuring that data transmission is secure, on time, and error-free.
In the uplink direction, incoming IP packets are queued into L3 Quality of Service (QoS) flow queues per Data Radio Bearer (DRB) after undergoing IP layer processing. These packets enter an L2 Logical Channel (LC) queue in the RLC layer after undergoing PDCP processing. PDCP layer processing includes robust header compression (ROHC), integrity checking, and ciphering. RLC layer processing includes link layer error recovery, in which status reports and retransmissions may also be placed in the LC queue for transmission.
At the MAC layer, in order to transmit a data packet, the UE first transmits a Scheduling Request (SR) and a Buffer Status Report (BSR) to request a dynamically allocated grant from the network. The UL scheduler at the NW then sends the UE's allocated grant in the DCI of the PDCCH every slot. The UE decodes and computes the size of the NW-allocated grant and then runs Logical Channel Prioritization (LCP) to take packets from each logical channel and compose a MAC PDU for the next transmission. The interval between receiving a DCI grant and the transmission deadline is denoted by K2, which may be expressed, for example, in slots or symbols.
Fig. 2A illustrates a method for computing a predicted grant, in accordance with certain embodiments. Fig. 2B illustrates a grant predictor function that may be used in the method of fig. 2A. As shown in fig. 2A, the predicted grant may be calculated according to the function described below. The predicted grant for the upcoming transmission (Tx) in the next slot (n+1), calculated at the current slot n, may be modeled by an Infinite Impulse Response (IIR) difference equation, e.g., by equation (1).
Equation (1):
G(n) = F(t) * G_A(n-1) + [1 - F(t)] * [K1*Q(n) + K2*R(n) + K3*P(n) - K4*L(n) - K5*E(n)]
In equation (1), the terms have the following meanings:
G(n): predicted grant at the current slot n
F(t): grant predictor function
G_A(n-1): actual network grant allocated for the previous slot, i.e., slot (n-1)
Q: total buffer size
R: uplink data rate
P: received power
L: network traffic load
E: previous prediction error
K1-K5: input multiplication factors
The upcoming predicted grant G(n) may be based largely on what the network allocated in the previous grant, especially in a steady state after the initial Radio Resource Control (RRC) connection establishment when data transmission is first scheduled. This contribution may be weighted by a weighting factor F(t), which may be greater than 0 and less than 1. The weighting factor may be selected as desired, for example based on the expected slot-to-slot grant correlation of the communication system. For example, the weighting factor may be close to 1 if the expected grant is highly correlated with the previous slot's grant, and close to 0 if the expected grant is largely uncorrelated with the previous slot's grant.
In fig. 2A, the weighting factors are shown at the multipliers. In this exemplary implementation, the multipliers may be factory configured, or may be configured by software or hardware in the user equipment that includes them. Alternatively, the multipliers may be configured by the network.
Further, the weighting factors themselves may be dynamic, depending on factors such as the length of the current communication session or of the connection to a given base station or other network device. For example, the network-allocated grant may be small when data transmission first begins and may then gradually increase over time. Thus, early in the communication session, the expected NW grant may depend mostly on the modem input values weighted by (1 - F(t)). Over time, the previous actual network grant value G_A(n-1) may be given a greater weight F(t), while the adjustment factor (1 - F(t)) applied to the other modem input values becomes small, to predict the upcoming new network grant. In other words, F(t) may rise from close to 0 to close to 1 based on the duration of the connection or session.
As shown in fig. 2B, the predictor function F(t) can be modeled as a ramp function of time, starting from zero and reaching saturation at a near-constant peak of 0.9 to 0.99 at steady state.
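For illustration only, equation (1) together with a ramp-shaped F(t) could be realized as in the following C sketch. The saturation peak of 0.95, the ramp duration, and the clamping of negative results to zero are assumptions for this example; the actual weight values may be tuned per system or per MAC instance as described below.

```c
typedef struct {
    double k1, k2, k3, k4, k5;   /* input multiplication factors */
} grant_weights_t;

/* F(t): ramp from 0 toward a near-constant peak as the connection ages.
 * The peak (0.95) and ramp_slots are assumed tuning constants. */
double predictor_weight(unsigned slots_since_connect, unsigned ramp_slots)
{
    const double peak = 0.95;
    if (slots_since_connect >= ramp_slots)
        return peak;
    return peak * (double)slots_since_connect / (double)ramp_slots;
}

/* Equation (1):
 * G(n) = F(t)*G_A(n-1) + [1-F(t)]*[K1*Q(n)+K2*R(n)+K3*P(n)-K4*L(n)-K5*E(n)] */
double predict_grant(double prev_actual_grant,   /* G_A(n-1), e.g., in bytes */
                     double buffer_bytes,        /* Q(n): total buffer size  */
                     double ul_rate,             /* R(n): uplink data rate   */
                     double rx_power,            /* P(n): received power     */
                     double traffic_load,        /* L(n): network load       */
                     double prev_error,          /* E(n): previous error     */
                     const grant_weights_t *w,
                     double f_t)                 /* F(t) from the ramp above */
{
    double modem_term = w->k1 * buffer_bytes + w->k2 * ul_rate
                      + w->k3 * rx_power  - w->k4 * traffic_load
                      - w->k5 * prev_error;
    double g = f_t * prev_actual_grant + (1.0 - f_t) * modem_term;
    return (g > 0.0) ? g : 0.0;   /* a grant size cannot be negative */
}
```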
As described above, the factors K1 through K5 may be weights for the various inputs of the grant prediction. More or fewer weights may be used, and these weights may be dynamic. Further, these weights may be adjusted on a per MAC instance basis such that each MAC instance may potentially have a different weight. As another option, each user device may have its own set of weights.
The current total buffer size (Q) of all logical channels may be the primary input and may be weighted using a factor K1. K1 may be selected such that the value of Q multiplied by K1 is greater than the weighted values of the other parameters. Since different parameters have different units, K1 may be larger or smaller than the other parameter weighting factors, but the result may be that the weighted contribution of the current buffer size is relatively large, e.g., half or more of the total contribution of the parameters. The total buffer size may be the value provided to the network in a Buffer Status Report (BSR), for example, when requested by the network.
The current UL data rate (R) of the transmission (Tx) carrier channel may directly affect the allocated grant size. R may be weighted with K2.
The receive (Rx) power (P) at the modem may directly affect the grant size allocated by the NW. A strong signal may indicate that the NW will assign more authorizations. This may be weighted with K3. Other similar parameters, such as signal-to-noise ratio (SNR) or signal-to-interference-plus-noise ratio (SINR), may be similarly considered and weighted.
The network traffic load (L) at the modem may indicate how busy the network around the UE is, and thus the number of other UEs that may share network resources. Heavy loading may indicate that the NW scheduler will reduce the grant for the UE. This parameter may be derived from the Ec/Io value at the UE, where Ec/Io is a measure of the energy before despreading compared to the interference present in the wideband radio propagation signal. This input is weighted with K4 and negatively affects the grant. Other ratios may alternatively be used, e.g., Eb/No, the ratio of the energy after despreading to the noise of the wideband radio propagation signal.
The device may attempt to distinguish interference originating from the granting network from interference caused by other networks and other Radio Access Technologies (RATs) in the area.
The above parameters are provided as examples. In practice, more or fewer parameters may be used. If the network provides some indication of the network load or other factors, the user equipment may take those network indications into account.
As a further aspect, the grant prediction error (E) may be weighted by K5. Two options are shown in fig. 2A, namely K5a and K5b. The grant prediction error for the current slot n can be modeled by equation (2):
equation (2):
E(n) = G(n-1) - G_A(n-1).
In equation (2), the error E(n) may represent the difference between the predicted grant and the actual network grant allocated for the predicted slot (n-1). Note that the actual NW grant G_A(n-1) for the previous slot is decoded only at the very beginning of slot n, when the DCI/PDCCH is decoded, whereas the prediction was completed in slot (n-1), i.e., the previous slot.
This error can be taken into account using a factor K5 in the grant prediction for the current slot n. However, depending on the error direction, the factor K5 may be taken as either K5a or K5b.
If the error E(n) is positive (corresponding to the E+ path with weight K5a), this means that the previous predicted grant was larger than the actual network grant, which may be preferable to the opposite case. Trimming the prepared packet list to fit the actual network grant size may be easier than padding or otherwise further populating the prepared packet list. Thus, a lower weight K5a may reduce the impact on the current grant prediction.
If the error E(n) is negative (corresponding to the E- path with weight K5b), this means that the previous predicted grant was smaller than the actual network grant. In this case, additional steps may need to be taken to extract more data packets from the L2 logical channel queues with LCP prioritization and append them to the currently prepared packet list. Thus, for example, to minimize this, K5b may be given more weight to make the new grant prediction as close to the network grant value as possible, or even slightly larger.
Thus, in particular embodiments, K5b > K5a. Other implementations are possible, as are other details.
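The error of equation (2) and the asymmetric choice between K5a and K5b may then be sketched as follows; the example values in the comment are placeholders chosen only so that K5b > K5a, not values taken from the description above.

```c
typedef struct {
    double k5a;   /* applied when the previous prediction was too large */
    double k5b;   /* applied when the previous prediction was too small */
} error_weights_t;

/* Equation (2): E(n) = G(n-1) - G_A(n-1).
 * A positive result means the previous prediction was larger than the
 * grant the network actually allocated. */
double grant_prediction_error(double predicted_prev, double actual_prev)
{
    return predicted_prev - actual_prev;
}

/* Pick the K5 factor to apply in equation (1) based on the error sign,
 * e.g., k5a = 0.1 and k5b = 0.5 so that under-prediction is corrected
 * more aggressively than over-prediction. */
double select_k5(double error, const error_weights_t *w)
{
    return (error >= 0.0) ? w->k5a : w->k5b;
}
```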
Fig. 3 illustrates a timing diagram of a grant prediction mechanism, in accordance with certain embodiments. The timing diagram illustrates an execution time sequence of an example implementation of the method, which may be implemented by a device such as a baseband chip of a UE. Fig. 4 is a flow chart of a method 400 corresponding to the timing diagram of fig. 3. As shown in fig. 3, the DCI and PDCCH are shown for reference at 305. Further, fig. 3 shows the case where K2 is less than one slot, but the same principles discussed herein may be similarly applied to the case where K2 is longer than one slot. In this case, the DCI for slot n is provided in a Physical Downlink Control Channel (PDCCH) in the first portion of slot n, and the scheduled transmission may occur later in the same slot. Although the DCI and scheduled transmission are not shown in slot (n-1), the same pattern may apply in slot (n+1) as well as in slot (n-1).
As further shown in fig. 3 and 4, at 310, the L1/PHY may decode the PDCCH and DCI, and may calculate the actual network grant size G_A(n-1). The grant size may be signaled implicitly or explicitly by the network.
As shown in fig. 3 and 4, at 320, the MAC Software (SW) may service the actual NW grant size G_A(n-1), which may include adjusting a previously prepared list of MAC packets to fit the actual grant size. The PHY layer may then encode and stream the MAC PDU for UL transmission.
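As an illustration, fitting the pre-built list to the actual decoded grant might look like the following sketch; the prepared_entry_t structure is the same hypothetical one used in the earlier LCP example, and any shortfall returned here would prompt extraction of additional packets as described above.

```c
#include <stddef.h>
#include <stdint.h>

typedef struct {
    size_t   lc_index;
    uint32_t bytes;
} prepared_entry_t;

/* Trim the prepared list so it fits the actual grant.  Returns the number
 * of extra bytes still needed (0 if the list was trimmed down or already
 * fits exactly). */
uint32_t resize_to_actual_grant(prepared_entry_t *list, size_t *count,
                                uint32_t actual_grant_bytes)
{
    uint32_t used = 0;
    size_t kept = 0;

    for (size_t i = 0; i < *count; i++) {
        if (used >= actual_grant_bytes)
            break;                               /* drop the remaining entries */
        uint32_t room = actual_grant_bytes - used;
        if (list[i].bytes > room)
            list[i].bytes = room;                /* truncate the last entry */
        used += list[i].bytes;
        kept++;
    }
    *count = kept;
    return (used < actual_grant_bytes) ? (actual_grant_bytes - used) : 0;
}
```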
At 330 in fig. 3 and 4, the device may calculate the error of the grant prediction performed at slot (n-1). The grant prediction error may be calculated by comparing the predicted grant value calculated at the previous slot with the actual NW-allocated grant just decoded at 310 at the beginning of slot n.
At 340, the device may collect other inputs, such as values for other parameters that may be used to calculate the predicted grant. For example, in the example of equation (1) above, the values of the following parameters may be obtained: total buffer size, UL data rate, receive (Rx) power, and network traffic load. These may be retrieved from the memory of the device itself.
At 350, the device may calculate a new predicted grant G(n) for the next transmission. The calculation may be based on the calculated error, the parameter values, and any other inputs. The calculation may take into account the factor adjustments in equation (1) above, such as the weights K1, K2, and so on.
At 360, the device may prepare in advance a MAC PDU packet list having a size based on the predicted grant G(n), primarily by running Logical Channel Prioritization and Packet Data Convergence Protocol (PDCP) processing. This prepared list can be easily and quickly updated for data transmission when the actual NW grant for the next slot (n+1) arrives.
As shown in fig. 3, a packet prepared at 360 during grant prediction in slot n may be adjusted in slot n +1 at 320 and transmitted in slot n +1 at 305.
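Putting the numbered steps together, a per-slot driver corresponding to 310 through 360 might be organized as in the following sketch. The read_*() and other hook functions are assumed to exist elsewhere in the modem software and are named here only for illustration; F(t) is shown at an assumed steady-state value.

```c
typedef struct {
    double predicted_grant;     /* G(n-1) computed in the previous slot */
    double k1, k2, k3, k4;      /* input multiplication factors */
    double k5a, k5b;            /* asymmetric error weights */
} slot_state_t;

/* Hooks assumed to be provided elsewhere in the modem software. */
double decode_actual_grant_bytes(void);             /* 310: from DCI/PDCCH   */
void   service_actual_grant(double grant_bytes);    /* 320: adjust + encode  */
double read_total_buffer_bytes(void);               /* 340: Q(n)             */
double read_ul_data_rate(void);                     /* 340: R(n)             */
double read_rx_power(void);                         /* 340: P(n)             */
double read_traffic_load(void);                     /* 340: L(n)             */
void   prepare_mac_pdu_list(double grant_bytes);    /* 360: advance LCP/PDCP */

void on_slot_boundary(slot_state_t *s)
{
    /* 310: decode the grant the network actually allocated. */
    double actual = decode_actual_grant_bytes();

    /* 320: fit the pre-built MAC PDU list to the actual grant and send it. */
    service_actual_grant(actual);

    /* 330: error of the prediction made in the previous slot, equation (2). */
    double err = s->predicted_grant - actual;
    double k5  = (err >= 0.0) ? s->k5a : s->k5b;

    /* 340: gather the modem inputs used by equation (1). */
    double q = read_total_buffer_bytes();
    double r = read_ul_data_rate();
    double p = read_rx_power();
    double l = read_traffic_load();

    /* 350: predict the grant for the next transmission per equation (1). */
    double f = 0.95;   /* F(t), assumed saturated value in steady state */
    double modem_term = s->k1 * q + s->k2 * r + s->k3 * p - s->k4 * l - k5 * err;
    s->predicted_grant = f * actual + (1.0 - f) * modem_term;
    if (s->predicted_grant < 0.0)
        s->predicted_grant = 0.0;

    /* 360: build the MAC PDU packet list in advance for the next slot. */
    prepare_mac_pdu_list(s->predicted_grant);
}
```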
Particular embodiments of the present disclosure may have various benefits and/or advantages. For example, certain embodiments may provide a practical and adaptive method to achieve its purpose with low complexity. In addition, particular embodiments may be computationally efficient and easy to implement. Moreover, particular embodiments may utilize adjustable weighting factors that may be adjusted as needed for each system or for each MAC instance.
Particular embodiments provide an adaptive and flexible approach that allows new input factors to be easily added. Additionally, in particular embodiments, even in the presence of grant prediction errors, system performance may still improve and not degrade.
Particular embodiments may require little additional on-chip memory or CPU MIPS and may not consume much power. In addition, by using certain embodiments, the transmission timeline may be easily satisfied when the K2 offset is less than one slot, i.e., when a same-slot transmission is scheduled upon receiving the NW grant.
Particular embodiments may reduce the latency of preparing a MAC PDU packet for transmission. In addition, certain embodiments may coexist with non-predictive grant allocation schemes and fixed grant allocation schemes. Particular embodiments may be applicable to different wireless technologies that require dynamic uplink grant allocation by a base station, such as 5G, LTE, or future 3GPP or other standards.
Particular embodiments may apply other techniques, such as machine learning. For example, machine learning may be used to adjust the weights of the various parameters and to take additional parameters into account. Additionally, machine learning and other forms of artificial intelligence can be used to fine-tune the factors using collected data.
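As one purely illustrative possibility, and not a method described above, the collected per-slot data could be used to tune the K factors with a simple stochastic-gradient step on the squared prediction error. The learning rate and the update rule are assumptions.

```c
typedef struct { double k1, k2, k3, k4; } tunable_weights_t;

/* One gradient step on error^2 = (G(n) - G_A(n))^2 with respect to the
 * K factors of equation (1); d(G)/d(Ki) = (1 - F(t)) * input_i, where
 * L enters equation (1) with a minus sign. */
void tune_weights(tunable_weights_t *w,
                  double q, double r, double p, double l,  /* inputs used   */
                  double f_t,                              /* F(t) applied  */
                  double predicted, double actual,         /* G(n), G_A(n)  */
                  double lr)                               /* learning rate */
{
    double g = 2.0 * (predicted - actual) * (1.0 - f_t);
    w->k1 -= lr * g * q;
    w->k2 -= lr * g * r;
    w->k3 -= lr * g * p;
    w->k4 -= lr * g * (-l);
}
```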
Fig. 5 illustrates a detailed block diagram of a baseband SoC 502 implementing layer 2 packet processing using layer 2 circuitry 508 and a Microcontroller (MCU) 510, according to some embodiments of the present disclosure.
As shown in fig. 5, the baseband SoC 502 may be an example of a software and hardware interworking system, where software functions are implemented by the MCU 510 and hardware functions are implemented by the layer 2 circuit 508. MCU 510 may be one example of a microcontroller and layer 2 circuit 508 may be one example of an integrated circuit, although other microcontrollers and integrated circuits are also permitted. In some embodiments, the layer 2 circuitry 508 includes SDAP circuitry 520, PDCP circuitry 522, RLC circuitry 524, and MAC circuitry 526. Application-specific integrated circuits (ICs) controlled by MCU 510 (e.g., SDAP circuit 520, PDCP circuit 522, RLC circuit 524, and MAC circuit 526) may be used for layer 2 packet processing. In some embodiments, the SDAP circuit 520, the PDCP circuit 522, the RLC circuit 524, and the MAC circuit 526 are each ICs dedicated to performing the functions of the respective layers in the layer 2 user plane and/or the control plane. For example, the SDAP circuitry 520, the PDCP circuitry 522, the RLC circuitry 524, and the MAC circuitry 526 can each be an Application-Specific Integrated Circuit (ASIC) that can be customized for a particular use, rather than intended for general-purpose use. Some ASICs may have high speed, small die size, and low power consumption compared to general-purpose processors.
As shown in fig. 5, the baseband SoC 502 may be operably coupled to the host processor 504 and the external memory 506 through the main bus 538. For uplink communications, a host processor 504, such as an Application Processor (AP), may generate raw data that has not been encoded and modulated by the PHY layer of the baseband SoC 502. Similarly, for downlink communications, the host processor 504 may receive the data after it is initially decoded and demodulated by the PHY layer and then processed by the layer 2 circuitry 508. In some embodiments, the raw data is formatted into data packets according to any suitable protocol, such as Internet Protocol (IP) data packets. External memory 506 may be shared by host processor 504 and baseband SoC 502 or any other suitable component.
In some embodiments, the external memory 506 stores raw data (e.g., IP data packets) to be processed by the layer 2 circuitry 508 of the baseband SoC 502 and stores data processed by the layer 2 circuitry 508 (e.g., MAC PDUs) to be accessed by layer 1 (e.g., the PHY layer). In a downlink stream at the user equipment, the situation may be reversed, where the external memory 506 may store data received from the PHY layer and data output from the layer 2 circuitry 508 after header removal and other tasks. The external memory 506 may or may not (optionally) store any intermediate data of the layer 2 circuitry 508, such as PDCP PDUs/RLC SDUs or RLC PDUs/MAC SDUs. For example, the layer 2 circuitry 508 may modify data stored in the external memory 506.
As shown in fig. 5, the baseband SoC 502 may also include Direct Memory Access (DMA) 516, which may allow some of the layer 2 circuitry 508 to access the external memory 506 directly, independent of the host processor 504. The DMA 516 may include a DMA controller and any other suitable input/output (I/O) circuitry. As shown in fig. 5, the baseband SoC 502 may further include an internal memory 514, e.g., an on-chip memory on the baseband SoC 502; the internal memory 514 is distinct from the external memory 506, which is an off-chip memory not on the baseband SoC 502. In some embodiments, the internal memory 514 includes one or more L1, L2, L3, or L4 caches. The layer 2 circuitry 508 may also access the internal memory 514 through the main bus 538. Thus, the internal memory 514 may be specific to the baseband SoC 502, as opposed to other subcomponents or components implementing the system.
As shown in fig. 5, the baseband SoC 502 may also include a memory 512, the memory 512 being shared by the layer 2 circuitry 508 and the MCU 510 (e.g., accessible by both the layer 2 circuitry 508 and the MCU 510). It should be appreciated that although the memory 512 is illustrated as a separate memory from the internal memory 514, in some examples the memory 512 and the internal memory 514 may be local partitions of the same physical memory structure (e.g., static Random Access Memory (SRAM)). In one example, a logical partition in internal memory 514 may be dedicated or dynamically allocated to layer 2 circuitry 508 and MCU 510 for exchanging commands and responses. In some embodiments, memory 512 includes multiple command queues 534, respectively, for storing multiple sets of commands and multiple response queues 536, respectively, for storing multiple sets of responses. Each pair of corresponding command queue 534 and response queue 536 may be dedicated to one of the plurality of layer 2 circuits 508.
As shown in fig. 5, baseband SoC 502 may also include a local bus 540. In some embodiments, MCU 510 may be operatively coupled to memory 512 and main bus 538 by local bus 540. MCU 510 may be configured to generate multiple sets of control commands and write each set of commands to a respective command queue 534 in memory 512 via local bus 540 and interrupts. MCU 510 may also read sets of responses (e.g., processing result status) from multiple response queues 536 in memory 512 via local bus 540 and interrupts, respectively. In some embodiments, MCU 510 generates a set of commands based on a set of responses from a higher layer in a layer 2 protocol stack (e.g., a previous stage in layer 2 uplink data processing) or a lower layer in a layer 2 protocol stack (e.g., a previous stage in layer 2 downlink data processing). MCU 510 may be operably coupled to layer 2 circuitry 508 and control the operation of layer 2 circuitry 508 to process layer 2 data through control commands in command queue 534 in memory 512. It should be understood that although one MCU 510 is shown in fig. 5, the number of MCUs is scalable such that multiple MCUs may be used in some examples. It should also be understood that in some embodiments, memory 512 may be part of MCU 510, e.g., a cache integrated with MCU 510. It is to be further understood that, regardless of nomenclature, any suitable processing unit that can generate control commands to control the operation of the layer 2 circuitry 508 and check the response of the layer 2 circuitry 508 can be considered to be the MCU 510 disclosed herein.
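For illustration, one hypothetical layout of a command/response queue pair shared between MCU 510 and one of the layer 2 circuits 508 might look like the following; the field names, sizes, and queue depth are assumptions and are not taken from the embodiments above.

```c
#include <stdint.h>

#define L2_QUEUE_DEPTH 16   /* assumed depth */

typedef struct {
    uint32_t opcode;      /* e.g., run LCP, build MAC PDU, cipher a block */
    uint32_t data_addr;   /* address of the payload in internal/external memory */
    uint32_t length;      /* payload length in bytes */
    uint32_t tag;         /* matches a command to its response */
} l2_command_t;

typedef struct {
    uint32_t tag;         /* echoes the command tag */
    uint32_t status;      /* processing result status */
} l2_response_t;

typedef struct {
    l2_command_t      cmd[L2_QUEUE_DEPTH];
    l2_response_t     rsp[L2_QUEUE_DEPTH];
    volatile uint32_t cmd_head, cmd_tail;   /* written by MCU, read by the circuit */
    volatile uint32_t rsp_head, rsp_tail;   /* written by the circuit, read by MCU */
} l2_queue_pair_t;
```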
The software and hardware interworking system disclosed herein, such as the baseband SoC 502 in fig. 5, may be implemented by any suitable node in a wireless network. For example, fig. 6 illustrates an example wireless network 600 in which some aspects of the disclosure may be implemented, according to some embodiments of the disclosure.
As shown in fig. 6, wireless network 600 may include a network of nodes, such as a user equipment (UE) 602, an access node 604, and a core network element 606. The user equipment 602 may be any terminal device, such as a mobile phone, a desktop computer, a laptop computer, a tablet, a vehicle computer, a gaming console, a printer, a positioning device, a wearable electronic device, a smart sensor, or any other device capable of receiving, processing, and sending information, such as a node of a vehicle-to-everything (V2X) network, a swarm network, a smart grid, or the Internet of Things (IoT). It should be understood that the user equipment 602 is shown as a mobile telephone by way of illustration only and not by way of limitation.
The access node 604 may be a device that communicates with the user equipment 602, such as a wireless access point, a Base Station (BS), a Node B, an enhanced Node B (eNodeB or eNB), a next-generation Node B (gNodeB or gNB), a cluster master node, and so on. The access node 604 may have a wired connection to the user equipment 602, a wireless connection to the user equipment 602, or any combination thereof. The access node 604 may be connected to the user equipment 602 through multiple connections, and the user equipment 602 may be connected to other access nodes in addition to the access node 604. The access node 604 may also be connected to other UEs. It should be understood that the access node 604 is shown by way of illustration, and not by way of limitation, by a radio tower.
The core network element 606 may serve the access node 604 and the user equipment 602 to provide core network services. Examples of the core network element 606 may include a Home Subscriber Server (HSS), a Mobility Management Entity (MME), a Serving Gateway (SGW), or a packet data network gateway (PGW). These are examples of core network elements of an Evolved Packet Core (EPC) system, which is the core network of an LTE system. Other core network elements may be used in LTE and other communication systems. In some embodiments, the core network element 606 comprises an access and mobility management function (AMF) device, a Session Management Function (SMF) device, or a User Plane Function (UPF) device for a core network of the NR system. It should be understood that the core network element 606 is shown illustratively, but not restrictively, as a collection of rack-mounted servers.
The core network element 606 may be connected to a large network, such as the internet 608, or another IP network to transport packet data over any distance. In this manner, data from the user equipment 602 may be communicated to other UEs connected to other access points, including, for example, a computer 610 connected to the internet 608 using a wired or wireless connection or a tablet 612 wirelessly connected to the internet 608 via a router 614. Thus, computer 610 and tablet 612 provide further examples of possible UEs, while router 614 provides an example of another possible access node.
A general example of a rack-mounted server is provided and illustrated as core network element 606. However, there may be multiple elements in the core network, including a database server (e.g., database 616) and a security and authentication server (e.g., authentication server 618). For example, database 616 may manage data related to a user's subscription to a network service. A Home Location Register (HLR) is an example of a standardized database of subscriber information for cellular networks. Likewise, authentication server 618 can handle authentication of users, sessions, and the like. In NR systems, an authentication server function (AUSF) device may be a specific entity that performs user equipment authentication. In some embodiments, a single server chassis may handle multiple such functions, such that the connections between the core network elements 606, the authentication server 618, and the database 616 may be local connections within the single chassis.
Although the above description uses uplink and downlink processing of packets in user equipment as an example in various discussions, similar techniques may be used for the other direction of processing as well as for processing in other devices such as access nodes and core network nodes. For example, any device that processes packets through multiple layers of a protocol stack may benefit from some embodiments of the present disclosure, even if not specifically listed above or shown in the example network of fig. 6.
Each element of fig. 6 may be considered a node of wireless network 600. In the following description of node 700 in fig. 7, more details are provided, by way of example, regarding possible implementations of the node. The node 700 may be configured as the user equipment 602, the access node 604, or the core network element 606 in fig. 6. Similarly, node 700 may also be configured as computer 610, router 614, tablet 612, database 616, or authentication server 618 in fig. 6.
As shown in fig. 7, node 700 may include a processor 702, a memory 704, and a transceiver 706. These components are shown connected to each other by a bus 708, although other connection types are also permissible. When the node 700 is a user device 602, further components may be included, such as User Interfaces (UIs), sensors, etc. Similarly, when node 700 is configured as core network element 606, node 700 may be implemented as a blade in a server system. Other implementations are possible.
The transceiver 706 may include any suitable device for transmitting and/or receiving data. Node 700 may include one or more transceivers, but only one transceiver 706 is shown for simplicity of illustration. Antenna 710 is shown as a possible communication mechanism for node 700. Multiple antennas and/or antenna arrays may be utilized. Further, examples of the node 700 may communicate using wired techniques instead of, or in addition to, wireless techniques. For example, the access node 604 may communicate with the user equipment 602 in a wireless manner and may communicate with the core network element 606 over a wired connection (e.g., over an optical or coaxial cable). Other communication hardware, such as a Network Interface Card (NIC), may also be included.
As shown in fig. 7, node 700 may include a processor 702. Although only one processor is shown, it will be understood that multiple processors may be included. The processor 702 may include a microprocessor, microcontroller, DSP, ASIC, field Programmable Gate Array (FPGA), programmable Logic Device (PLD), state machine, gated logic, discrete hardware circuitry, and other suitable hardware configured to perform the various functions described throughout this disclosure. The processor 702 may be a hardware device having one or more processing cores. The processor 702 may execute software. Software shall be construed broadly to mean instructions, instruction sets, code segments, program code, programs, subprograms, software modules, applications, software packages, routines, subroutines, objects, executables, threads of execution, procedures, functions, etc., whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise. Software may include computer instructions written in an interpreted language, a compiled language, or machine code. Other techniques for indicating hardware are also allowed within the broad scope of software. Processor 702 may be a baseband chip, such as SoC 502 in fig. 5, and node 700 may also include other processors not shown, such as a central processing unit of a device, a graphics processor, and so forth. The processor 702 may include an internal memory (not shown in fig. 7) that may be used as a memory for L2 data, such as the internal memory 514 in fig. 5. The processor 702 may include, for example, an RF chip integrated into a baseband chip, or the RF chip may be separately provided. Processor 702 may be configured to operate as, or may be an element or component of, a modem of node 700. Other arrangements and configurations are also permitted.
As shown in fig. 7, node 700 may also include a memory 704. Although only one memory is shown, it should be understood that multiple memories may be included. The memory 704 may broadly include both storage and memory. For example, memory 704 may include Random Access Memory (RAM), Read-Only Memory (ROM), SRAM, Dynamic RAM (DRAM), Ferroelectric RAM (FRAM), Electrically Erasable Programmable ROM (EEPROM), CD-ROM or other optical disk storage, a Hard Disk Drive (HDD) (e.g., a magnetic disk storage or other magnetic storage device), a flash memory drive, a Solid State Drive (SSD), or any other medium that can be used to carry or store desired program code in the form of instructions that can be accessed and executed by the processor 702. Broadly, the memory 704 can be embodied by any computer-readable medium, such as a non-transitory computer-readable medium. The memory 704 may be the external memory 506 in fig. 5, and the memory 704 may be shared by the processor 702 and other components of the node 700 (e.g., a graphics processor or central processing unit, not shown).
In various aspects of the disclosure, the functions described herein may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or encoded as instructions or code on a non-transitory computer-readable medium. Computer-readable media include computer storage media. A storage medium may be any available medium that can be accessed by a computing device, such as node 700 in fig. 7. By way of example and not limitation, such computer-readable media may include RAM, ROM, EEPROM, CD-ROM or other optical disk storage, HDDs such as magnetic disk storage or other magnetic storage devices, flash drives, SSDs, or any other medium that may be used to carry or store desired program code in the form of instructions or data structures and that may be accessed by a processing system (e.g., a mobile device or computer). Disk and disc, as used herein, include CD, laser disc, optical disc, DVD, and floppy disk, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
According to one aspect of the disclosure, an apparatus for packet data processing (such as grant prediction and preparation) may include at least one memory configured to store packet data for transmission. The apparatus may also include at least one processor operatively connected to the at least one memory and configured to process the packet data for transmission. The processor may be configured to predict a grant amount for a future actual grant for the transmission when processing the packet data for the transmission. The prediction of the grant amount by the processor may take into account at least one past prediction of the apparatus. The processor may be further configured to prepare the packet data for transmission based on the prediction of the grant amount prior to receiving the future actual grant.
In some embodiments, the processor may be further configured to calculate a prediction error by comparing the at least one past prediction to an actual past grant. The prediction error may be taken into account in the prediction of the grant amount.
In some embodiments, the processor may be further configured to collect values for a plurality of parameters associated with a network that are related to the grant amount. The plurality of parameters may be considered in the prediction of the grant amount.
In some embodiments, when preparing the packet data for transmission, the processor may be configured to prepare the packet data in a MAC packet data unit packet list.
In some embodiments, the processor may be further configured to weight the values of the plurality of parameters.
In some embodiments, when weighting the values, the processor may be further configured to weight the values on a per MAC instance basis.
In some embodiments, the plurality of parameters may include at least one of a total buffer size, an uplink data rate, a received power, and a network traffic load.
According to another aspect of the disclosure, a method for grant prediction and preparation may include: predicting, by a processor of an apparatus, a grant amount for a future actual grant for a transmission, wherein the prediction of the grant amount by the processor takes into account at least one past prediction of the apparatus. The method may further include: preparing, by the processor of the apparatus, the packet data for transmission based on the prediction of the grant amount prior to receiving the future actual grant.
In some embodiments, the method may further include calculating, by the processor of the apparatus, a prediction error by comparing the at least one past prediction to an actual past grant. The prediction error may be taken into account in the prediction of the grant amount.
In some embodiments, the method may further include collecting, by the processor of the apparatus, values for a plurality of parameters associated with a network that are related to the grant amount. The plurality of parameters may be considered in the prediction of the grant amount.
In some embodiments, preparing the packet data for transmission includes preparing the packet data in a MAC packet data unit packet list.
In some embodiments, the method may further comprise weighting the values of the plurality of parameters.
In some embodiments, the weighting is performed on the value on a per-MAC instance basis.
In some embodiments, the plurality of parameters may include a total buffer size, an uplink data rate, a received power, a network traffic load, or any combination thereof.
According to yet another aspect of the disclosure, a non-transitory computer-readable medium is encoded with instructions that, when executed by a processor of an apparatus, perform a process. The process may include predicting a grant amount for a future actual grant for a transmission. The prediction of the grant amount by the processor may take into account at least one past prediction of the apparatus. The process may also include preparing the packet data for transmission based on the prediction of the grant amount prior to receiving the future actual grant.
In some embodiments, the process may include calculating a prediction error by comparing the at least one past prediction to an actual past grant. The prediction error may be taken into account in the prediction of the grant amount.
In some embodiments, the process may include collecting values for a plurality of parameters associated with a network that are related to the grant amount. The plurality of parameters may be taken into account in the prediction of the grant amount.
In some embodiments, preparing the packet data for transmission may include preparing the packet data in a MAC packet data unit packet list.
In some embodiments, the process may further include weighting the values of the plurality of parameters.
In some embodiments, the plurality of parameters may include at least one of a total buffer size, an uplink data rate, a received power, and a network traffic load.
The foregoing description of the specific embodiments will so reveal the general nature of the disclosure that others can, by applying knowledge within the skill of the art, readily modify and/or adapt for various applications such specific embodiments without undue experimentation, without departing from the general concept of the present disclosure. Therefore, such adaptations and modifications are intended to be within the meaning and range of equivalents of the disclosed embodiments, based on the teaching and guidance presented herein. It is to be understood that the phraseology or terminology herein is for the purpose of description and not of limitation, such that the terminology or phraseology of the present specification is to be interpreted by the skilled artisan in light of the teachings and guidance.
Embodiments of the present disclosure have been described above with the aid of functional building blocks illustrating the implementation of specified functions and relationships thereof. Boundaries of these functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternate boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed.
This summary and abstract sections can set forth one or more, but not all exemplary embodiments of the present disclosure as contemplated by the inventors and are, therefore, not intended to limit the present disclosure and the appended claims in any way.
Various functional blocks, modules, and steps have been disclosed above. The particular arrangements provided are illustrative rather than limiting. Accordingly, the functional blocks, modules, and steps may be reordered or combined in a manner different from the examples provided above. Similarly, some embodiments include only a subset of the functional blocks, modules, and steps, and allow for any such subset.
The breadth and scope of the present disclosure should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.

Claims (20)

1. An apparatus for packet data processing, comprising:
at least one memory configured to store packet data for transmission; and
at least one processor operatively connected to the at least one memory and configured to process the packet data for transmission,
wherein the at least one processor is configured to, when processing the packet data for transmission:
predicting a grant amount for a future actual grant for the transmission, wherein the prediction of the grant amount by the processor takes into account at least one past prediction of the apparatus; and
preparing the packet data for transmission based on the prediction of the grant amount prior to receiving the future actual grant.
2. The apparatus of claim 1, wherein the at least one processor is further configured to calculate a prediction error by comparing the at least one past prediction to an actual past grant, wherein the prediction error is taken into account in the prediction of the grant amount.
3. The apparatus of claim 1, wherein the at least one processor is further configured to collect values for a plurality of parameters associated with a network associated with the grant amount, wherein the plurality of parameters are considered in the prediction of the grant amount.
4. The apparatus of claim 3, in which the at least one processor is further configured to weight the values of the plurality of parameters.
5. The apparatus of claim 4, wherein when weighting the values, the processor is configured to weight the values on a per MAC instance basis.
6. The apparatus of claim 3, wherein the plurality of parameters comprises at least one of a total buffer size, an uplink data rate, a received power, and a network traffic load.
7. The apparatus of claim 1, wherein when preparing the packet data for transmission, the at least one processor is configured to prepare the packet data in a Medium Access Control (MAC) packet data unit packet list.
8. A method for packet data processing, comprising:
predicting, by a processor of an apparatus, a grant amount for a future actual grant for transmission, wherein the prediction of the grant amount by the processor takes into account at least one past prediction of the apparatus; and
preparing, by the processor of the apparatus, the packet data for transmission based on the prediction of the grant amount prior to receiving the future actual grant.
9. The method of claim 8, further comprising:
calculating, by the processor of the apparatus, a prediction error by comparing the at least one past prediction with an actual past grant, wherein the prediction error is taken into account in the prediction of the grant amount.
10. The method of claim 8, further comprising:
collecting, by the processor of the apparatus, values of a plurality of parameters associated with a network and with the grant amount, wherein the plurality of parameters are taken into account in the prediction of the grant amount.
11. The method of claim 10, further comprising:
weighting the values of the plurality of parameters.
12. The method of claim 11, wherein the weighting of the values is performed on a per-MAC instance basis.
13. The method of claim 10, wherein the plurality of parameters includes at least one of a total buffer size, an uplink data rate, a received power, and a network traffic load.
14. The method of claim 8, wherein the preparing the packet data for transmission comprises preparing the packet data in a Medium Access Control (MAC) packet data unit packet list.
15. A non-transitory computer-readable medium encoded with instructions that, when executed by a processor of an apparatus, cause the processor to perform a process for packet data processing, the process comprising:
predicting a grant amount for a future actual grant for transmission, wherein the prediction of the grant amount by the processor takes into account at least one past prediction of the apparatus; and
preparing the packet data for transmission based on the prediction of the grant amount prior to receiving the future actual grant.
16. The non-transitory computer-readable medium of claim 15, wherein the process further comprises:
calculating a prediction error by comparing the at least one past prediction with an actual past grant, wherein the prediction error is taken into account in the prediction of the grant amount.
17. The non-transitory computer-readable medium of claim 15, wherein the process further comprises:
collecting values of a plurality of parameters associated with a network and with the grant amount, wherein the plurality of parameters are taken into account in the prediction of the grant amount.
18. The non-transitory computer-readable medium of claim 17, wherein the process further comprises:
weighting the values of the plurality of parameters.
19. The non-transitory computer-readable medium of claim 17, wherein the plurality of parameters includes at least one of a total buffer size, an uplink data rate, a received power, and a network traffic load.
20. The non-transitory computer-readable medium of claim 15, wherein the preparing the packet data for transmission comprises preparing the packet data in a Medium Access Control (MAC) packet data unit packet list.
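
Illustrative sketch (not part of the patent text): the claims above describe the grant prediction only functionally and do not fix a concrete formula. The Python sketch below is one hypothetical way such a predictor could be organized, assuming a simple linear weighting of the collected network parameters (total buffer size, uplink data rate, received power, network traffic load) and a correction term derived from the error of past predictions against the grants actually received; the predicted amount is then used to pre-build a packet list before the actual grant arrives. All names, weights, and the prediction rule are assumptions made for illustration, not details taken from the claims.

from collections import deque

class GrantPredictor:
    """Sketch of adaptive grant prediction (illustrative, not the patented algorithm)."""

    def __init__(self, weights, history_len=8):
        # Per-parameter weights, e.g. {"buffer_size": 0.4, "uplink_rate": 0.3, ...} (assumed values).
        self.weights = weights
        self.history = deque(maxlen=history_len)  # recent (predicted, actual) grant pairs

    def predict(self, params):
        # Weighted combination of the collected network parameter values.
        raw = sum(self.weights.get(name, 0.0) * value for name, value in params.items())
        # Correct by the mean signed error of past predictions, if any are recorded.
        bias = 0.0
        if self.history:
            bias = sum(actual - predicted for predicted, actual in self.history) / len(self.history)
        return max(0, int(raw + bias))

    def record_actual_grant(self, predicted, actual):
        # Feed back how the last prediction compared with the grant actually received.
        self.history.append((predicted, actual))

def prepare_packet_list(pending_bytes, predicted_grant, pdu_size=1500):
    # Pre-build (offset, length) segments up to the predicted grant, before the grant arrives.
    budget = min(pending_bytes, predicted_grant)
    packets, offset = [], 0
    while offset < budget:
        packets.append((offset, min(pdu_size, budget - offset)))
        offset += pdu_size
    return packets

# Example usage with made-up numbers:
predictor = GrantPredictor({"buffer_size": 0.4, "uplink_rate": 0.3,
                            "rx_power": 0.1, "traffic_load": -0.2})
params = {"buffer_size": 12000, "uplink_rate": 9000, "rx_power": 80, "traffic_load": 3000}
predicted = predictor.predict(params)
packet_list = prepare_packet_list(pending_bytes=20000, predicted_grant=predicted)
predictor.record_actual_grant(predicted, actual=8000)  # later, when the real grant is known

In practice the weighting could be maintained per MAC instance, as the dependent claims suggest, but a simple moving-average correction is enough to illustrate the feedback loop between past prediction error and the next predicted grant amount.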
CN202080094168.7A 2020-01-29 2020-10-22 Adaptive grant prediction for enhanced packet data transmission Pending CN115398948A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US202062967459P 2020-01-29 2020-01-29
US62/967,459 2020-01-29
PCT/IB2020/059911 WO2021152368A1 (en) 2020-01-29 2020-10-22 Adaptable grant prediction for enhanced packet data transmission

Publications (1)

Publication Number Publication Date
CN115398948A 2022-11-25 (en)

Family

ID=77078129

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202080094168.7A Pending CN115398948A (en) 2020-01-29 2020-10-22 Adaptive grant prediction for enhanced packet data transmission

Country Status (2)

Country Link
CN (1) CN115398948A (en)
WO (1) WO2021152368A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030013454A1 (en) * 2001-03-09 2003-01-16 Denso Corporation Relative future activity indicators for assisting in selecting the source of received communications
US20100262881A1 (en) * 2009-04-08 2010-10-14 Via Telecom, Inc. Apparatus and method for reverse link transmission in an access terminal
US20150038156A1 (en) * 2013-07-31 2015-02-05 Qualcomm Incorporated Adapting mobile device behavior using predictive mobility

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9900904B2 (en) * 2013-05-31 2018-02-20 Telefonaktiebolaget L M Ericsson (Publ) Predictive scheduling for uplink transmission in a cellular network
US9451489B2 (en) * 2013-11-07 2016-09-20 Qualcomm Incorporated Method and apparatus for LTE uplink throughput estimation

Also Published As

Publication number Publication date
WO2021152368A1 (en) 2021-08-05

Similar Documents

Publication Publication Date Title
KR101580116B1 (en) A scheduling concept
EP2634950B1 (en) Method and Apparatus for Power Headroom Reporting
RU2565247C1 (en) Method and assembly for processing scheduling request
US8605586B2 (en) Apparatus and method for load balancing
US9407563B2 (en) Methods and apparatuses for adapting application uplink rate to wireless communications network
JP4397928B2 (en) A method for allocating resources of a wireless communication network to traffic to be transmitted to user equipment over a network channel
US8331248B2 (en) System and method for dynamic resource allocation in wireless communications networks
RU2712826C1 (en) Method and system for scheduling data in an uplink for transmitting without granting permission
CN110622618A (en) Method and apparatus associated with direct communication in a radio access network
EP3138341B1 (en) Method and radio network node for scheduling of wireless devices in a cellular network
KR20110082471A (en) Method and apparatus of power increase/decrease request of a mobile station using a plurality of frequencies in a wireless communication system
JP2014507097A (en) LTE scheduling
CN113812199A (en) Logical channel prioritization
WO2015027481A1 (en) Passive inter modulation signal interference scheduling method and apparatus
CN114073157A (en) Selection of channel access priority
CN115349285A (en) Communication apparatus and communication method for mode 2 resource (re) selection for packet delay budget limited scenarios
CN110268740B (en) Beam avoidance method and base station
US10397922B2 (en) Method for allocating time-frequency resources for the transmission of data packets via a frequency selective channel
CN115398948A (en) Adaptive grant prediction for enhanced packet data transmission
EP3939192B1 (en) Early releasing uplink retransmission memory based upon prediction of uplink retransmission indicator
JP7423121B2 (en) Resource allocation method and device in wireless communication system
EP4104334A1 (en) Methods and communications devices
US20210274374A1 (en) Method and network node for supporting a service over a radio bearer
KR20160140098A (en) Method and Apparatus of scheduling for wireless packet network
WO2024092633A1 (en) Ue information reporting and packet delay management in wireless communication

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
Effective date of registration: 20230731
Address after: Room 01, 8th floor, No.1 Lane 61, shengxia Road, China (Shanghai) pilot Free Trade Zone, Pudong New Area, Shanghai, 200120
Applicant after: Zheku Technology (Shanghai) Co.,Ltd.
Address before: California, USA
Applicant before: Zheku Technology Co.,Ltd.
WD01 Invention patent application deemed withdrawn after publication
Application publication date: 20221125