WO2018133625A1 - Data processing method and apparatus for air interface protocol data plane - Google Patents

Data processing method and apparatus for air interface protocol data plane

Info

Publication number
WO2018133625A1
WO2018133625A1 (PCT/CN2017/118081)
Authority
WO
WIPO (PCT)
Prior art keywords
user data
hardware
cell
processing
thread group
Prior art date
Application number
PCT/CN2017/118081
Other languages
French (fr)
Chinese (zh)
Inventor
黄勇
吴治鸣
Original Assignee
京信通信系统(中国)有限公司
京信通信系统(广州)有限公司
京信通信技术(广州)有限公司
天津京信通信系统有限公司
Priority date
Filing date
Publication date
Priority to CN201710044159.7A (granted as CN106851667B)
Application filed by 京信通信系统(中国)有限公司, 京信通信系统(广州)有限公司, 京信通信技术(广州)有限公司, 天津京信通信系统有限公司 filed Critical 京信通信系统(中国)有限公司
Publication of WO2018133625A1 publication Critical patent/WO2018133625A1/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING; COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/30 - Arrangements for executing machine instructions, e.g. instruction decode
    • G06F 9/38 - Concurrent instruction execution, e.g. pipeline, look ahead
    • G06F 9/3885 - Concurrent instruction execution, e.g. pipeline, look ahead using a plurality of independent parallel functional units
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING; COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 - Multiprogramming arrangements
    • G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 - Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5011 - Allocation of resources to service a request, the resources being hardware resources other than CPUs, servers and terminals
    • G06F 9/5016 - Allocation of resources to service a request, the resources being hardware resources other than CPUs, servers and terminals, the resource being the memory
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04W - WIRELESS COMMUNICATION NETWORKS
    • H04W 16/00 - Network planning, e.g. coverage or traffic planning tools; Network deployment, e.g. resource partitioning or cells structures
    • H04W 16/18 - Network planning tools
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04W - WIRELESS COMMUNICATION NETWORKS
    • H04W 28/00 - Network traffic or resource management
    • H04W 28/02 - Traffic management, e.g. flow control or congestion control
    • H04W 28/0205 - Traffic management, e.g. flow control or congestion control at the air interface
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04W - WIRELESS COMMUNICATION NETWORKS
    • H04W 72/00 - Local resource management, e.g. wireless traffic scheduling or selection or allocation of wireless resources
    • H04W 72/12 - Dynamic wireless traffic scheduling; Dynamically scheduled allocation on shared channel
    • H04W 72/1205 - Schedule definition, set-up or creation
    • H04W 72/1252 - Schedule definition, set-up or creation based on load

Abstract

The present invention discloses a data processing method and apparatus for an air interface protocol data plane. The method comprises: acquiring the total number of hardware cores Y of a base station and the number of cells N that the base station needs to support; determining, according to the relationship between the number of cells N that the base station needs to support and the total number of hardware cores Y of the base station, a number of hardware cores M1 for processing cell MAC layer scheduling; and determining, according to the total number of hardware cores Y of the base station and the number of hardware cores M1 for processing cell MAC layer scheduling, a number of hardware cores M2 for processing user data, where M1 + M2 ≤ Y. The total number of hardware cores is appropriately allocated into M1 hardware cores for processing cell MAC layer scheduling and M2 hardware cores for processing user data. The classified parallel processing of a multi-core processor is thereby exploited so that the LTE air interface protocol stack data plane can support high throughput, multiple cells, and multiple users.

Description

Data processing method and device for air interface protocol data plane

This application claims priority to Chinese patent application No. 201710044159.7, filed with the Chinese Patent Office on January 19, 2017 and entitled "Data processing method and device for air interface protocol data plane", which is incorporated herein by reference in its entirety.

Technical field

The present invention relates to the field of LTE (Long Term Evolution) technology, and in particular, to a data processing method and apparatus for an air interface protocol data plane.

Background

As one of the standard technologies for fourth-generation mobile communication, LTE is a wireless communication standard designed for mobile high-bandwidth applications. The radio interface can be divided into three protocol layers: the physical layer L1, the data link layer L2, and the network layer L3, as shown in FIG. 1.

The data plane L2 of the LTE wireless communication air interface protocol includes a PDCP (Packet Data Convergence Protocol) layer, an RLC (Radio Link Control) layer, and a MAC (Media Access Control) layer. The PDCP layer is responsible for header compression/decompression, integrity protection, encryption/decryption, PDCP SN (Sequence Number) maintenance, in-order delivery, handover data forwarding, and timed discarding. The RLC layer is responsible for data transmission in TM (Transparent Mode), UM (Unacknowledged Mode), and AM (Acknowledged Mode), as well as segmentation, concatenation, reassembly, re-segmentation, and ARQ (Automatic Repeat Request). The MAC layer is responsible for mapping between logical channels and transport channels, logical channel multiplexing and demultiplexing, HARQ (Hybrid Automatic Repeat Request), dynamic scheduling, semi-persistent scheduling, transport format selection, and other functions.

In the data plane processing flow of the existing LTE air interface protocol, for downlink data transmission, the PDCP adds PDCP header information to data packets from the upper layer, performs header compression and encryption, and passes them to the RLC. The RLC segments, concatenates, and pads the RLC service data units from the PDCP according to the size scheduled by the MAC layer, and adds the corresponding RLC header information to form RLC protocol data units. The MAC layer then multiplexes data from different logical channels onto the transport channel; the multiplexing is based on a set of TB (Transport Block) sizes and transport formats, and involves concatenation, padding, and adding the corresponding MAC header information and MAC control information to form a MAC PDU (Protocol Data Unit). The TB formed after MAC processing is passed to the lower layer and transmitted wirelessly to the terminal over the air interface.

In the data plane processing flow of the existing LTE air interface protocol, for uplink data transmission, the MAC layer removes the MAC header from the received uplink data, demultiplexes it, sends the demultiplexed data to the RLC, and passes the demultiplexed MAC control information to MAC scheduling. The RLC removes the RLC header from the received data, reassembles it, and sends it to the PDCP. The PDCP decrypts the received data, decompresses the header, removes the PDCP header, and then delivers the packets to the upper layer in order.
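To make the two directions of this pipeline concrete, the following C sketch lays out the per-packet processing order described above; it is only an illustration, and every type and function name in it (l2_pkt_t, pdcp_dl, and so on) is a hypothetical stub rather than part of any real protocol stack.

/* Illustrative sketch of the L2 processing order described above.
 * All names are hypothetical stubs; real PDCP/RLC/MAC processing is omitted. */
#include <stddef.h>
#include <stdint.h>

typedef struct { uint8_t *buf; size_t len; } l2_pkt_t;

static void pdcp_dl(l2_pkt_t *p)          { (void)p; /* header compression, ciphering, add PDCP header */ }
static void rlc_dl(l2_pkt_t *p, size_t n) { (void)p; (void)n; /* segment/concatenate/pad to scheduled size n, add RLC header */ }
static void mac_dl(l2_pkt_t *p)           { (void)p; /* multiplex logical channels into a TB, add MAC header */ }
static void mac_ul(l2_pkt_t *p)           { (void)p; /* remove MAC header, demultiplex to logical channels */ }
static void rlc_ul(l2_pkt_t *p)           { (void)p; /* remove RLC header, reassemble */ }
static void pdcp_ul(l2_pkt_t *p)          { (void)p; /* decipher, decompress header, remove PDCP header */ }

/* Downlink: PDCP -> RLC -> MAC; the resulting TB is handed to L1 for transmission. */
void l2_downlink(l2_pkt_t *pkt, size_t mac_scheduled_len) {
    pdcp_dl(pkt);
    rlc_dl(pkt, mac_scheduled_len);
    mac_dl(pkt);
}

/* Uplink: MAC -> RLC -> PDCP; packets are then delivered to the upper layer in order. */
void l2_uplink(l2_pkt_t *tb) {
    mac_ul(tb);
    rlc_ul(tb);
    pdcp_ul(tb);
}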

With a single carrier, the LTE uplink data rate can reach 50 Mbps and the downlink can reach 100 Mbps; with carrier aggregation, the rate is multiplied by the number of aggregated carriers. Meanwhile, the number of intelligent terminal devices is growing sharply, and the number of users that a base station must support has increased by an order of magnitude. However, the data plane architectures of the prior art 2G and 3G air interface protocols are based on single-threaded or multi-threaded designs running on single-core or multi-core hardware processors, and it is difficult for them to meet the high-speed performance requirements of LTE.

In summary, the data plane architecture of the existing air interface protocol cannot meet the high-throughput, multi-cell, multi-user data plane requirements of LTE base station equipment.

Summary of the invention

The present invention provides a data processing method and device for an air interface protocol data plane, which are used to solve the problem that the data plane architecture of the air interface protocol in the prior art cannot meet the high-throughput, multi-cell, multi-user data plane requirements of LTE base station equipment.

In a first aspect, an embodiment of the present invention provides a data processing method for an air interface protocol data plane, including:

Obtaining a total number of hardware cores Y of the base station and a number of cells N that the base station needs to support;

Determining, according to the relationship between the number of cells N that the base station needs to support and the total number of hardware cores Y of the base station, the hardware core number M1 for processing the MAC layer scheduling of the cell;

Determining a hardware core number M2 for processing user data according to the total number of hardware cores Y of the base station and the hardware core number M1 for processing the cell medium access control MAC layer scheduling, where M1+M2≤Y.

Preferably, the hardware core number M1 for processing the MAC layer scheduling of the cell is determined according to the relationship between the number of cells N to be supported by the base station and the total number of hardware cores Y of the base station, including:

According to the principle that each cell occupies one hardware core for cell MAC layer scheduling and one hardware core for processing user data, it is determined whether Y is greater than or equal to 2N;

When Y≥2N, it is determined that the hardware core number M1 for processing the cell MAC layer scheduling is N, and the hardware core number M2=Y-N for processing the user data is determined.

Preferably, the method further includes:

When Y < 2N, according to the principle that one hardware core that processes cell MAC layer scheduling serves two cells, the number of hardware cores M1 for processing cell MAC layer scheduling is determined, where M1 is an integer not less than N/2; the number of hardware cores for processing user data is M2 = Y - M1.

Preferably, after the determining the hardware core number M2 for processing the user data, the method further includes:

Determining a cell scheduling thread group and a user data thread group of the cell in the base station;

The cell scheduling thread group is deployed on the M1 hardware cores that process the MAC layer scheduling of the cell, and the user data thread group is deployed on the M2 hardware cores that process the user data;

The deployment of the user accessing the user data thread group is adjusted according to the load of the hardware core processing the user data.

Preferably, the adjusting the deployment of the user accessing the user data thread group according to the load of the hardware core processing the user data comprises:

According to the number of users accessed by each user data thread group, the user data thread group with the fewest accessed users is used as the user data thread group for a newly accessed user;

or,

According to the average CPU load, recorded by each user data thread group, of the hardware core to which it belongs within a set duration, the user data thread group deployed on the hardware core with the lowest average CPU load is used as the user data thread group for a newly accessed user;

or,

According to the average CPU load, recorded by each user data thread group, of the hardware core to which it belongs within a set duration, if the difference between the maximum average CPU load and the minimum average CPU load exceeds a set threshold, the users of the user data thread group deployed on the hardware core with the maximum average CPU load are moved to the user data thread group deployed on the hardware core with the minimum average CPU load.

In a second aspect, the embodiment of the present invention further provides a data processing apparatus for a data plane of an air interface protocol, including:

The acquiring unit is configured to acquire a total number of hardware cores Y of the base station and a number of cells N that the base station needs to support;

a first determining unit, configured to determine, according to a relationship between a number N of cells that the base station needs to support and a total number of hardware cores Y of the base station, a hardware core number M1 for processing a cell MAC layer scheduling;

a second determining unit, configured to determine, according to the total number of hardware cores Y of the base station and the number of hardware cores M1 for processing cell medium access control MAC layer scheduling, a number of hardware cores M2 for processing user data, where M1 + M2 ≤ Y.

Preferably, the first determining unit is specifically configured to:

According to the principle that each cell occupies one hardware core for cell MAC layer scheduling, each cell occupies a hardware core to process user data, and determines whether Y is greater than or equal to 2N;

When Y≥2N, it is determined that the hardware core number M1 for processing the cell MAC layer scheduling is N; the second determining unit determines the hardware core number M2=Y-N for processing the user data.

Preferably, the first determining unit is further configured to:

When Y < 2N, according to the principle that one hardware core that processes cell MAC layer scheduling serves two cells, the number of hardware cores M1 for processing cell MAC layer scheduling is determined, where M1 is an integer not less than N/2; the second determining unit determines that the number of hardware cores for processing user data is M2 = Y - M1.

Preferably, the device further comprises an adjustment unit for:

Determining a cell scheduling thread group and a user data thread group of the cell in the base station;

The cell scheduling thread group is deployed on the M1 hardware cores that process the MAC layer scheduling of the cell, and the user data thread group is deployed on the M2 hardware cores that process the user data;

The deployment of the user accessing the user data thread group is adjusted according to the load of the hardware core processing the user data.

Preferably, the adjusting unit is further configured to:

According to the number of users accessed by each user data thread group, the user data thread group with the fewest accessed users is used as the user data thread group for a newly accessed user;

or,

According to the average CPU load, recorded by each user data thread group, of the hardware core to which it belongs within a set duration, the user data thread group deployed on the hardware core with the lowest average CPU load is used as the user data thread group for a newly accessed user;

or,

According to the average CPU load, recorded by each user data thread group, of the hardware core to which it belongs within a set duration, if the difference between the maximum average CPU load and the minimum average CPU load exceeds a set threshold, the users of the user data thread group deployed on the hardware core with the maximum average CPU load are moved to the user data thread group deployed on the hardware core with the minimum average CPU load.

In a third aspect, an embodiment of the present application further provides a base station, including: at least one processor, a transceiver, and a memory communicably connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the data processing method for an air interface protocol data plane according to the embodiments of the present application.

An embodiment of the present application further provides a non-volatile computer-readable storage medium storing computer-executable instructions, where the computer-executable instructions are configured to cause a computer to execute the data processing method for an air interface protocol data plane according to the embodiments of the present application.

An embodiment of the present application further provides a computer program product, the computer program product comprising a computer program stored on a non-volatile computer-readable storage medium, the computer program comprising computer-executable instructions that, when executed by a computer, cause the computer to execute the data processing method for an air interface protocol data plane according to the embodiments of the present application.

Embodiments of the present invention provide a data processing method and apparatus for an air interface protocol data plane, which acquire the total number of hardware cores Y of a base station and the number of cells N that the base station needs to support; determine, according to the relationship between the number of cells N that the base station needs to support and the total number of hardware cores Y of the base station, the number of hardware cores M1 for processing cell MAC layer scheduling; and determine, according to the total number of hardware cores Y of the base station and the number of hardware cores M1 for processing cell medium access control MAC layer scheduling, the number of hardware cores M2 for processing user data, where M1 + M2 ≤ Y. By reasonably allocating the total number of hardware cores into M1 hardware cores that process cell MAC layer scheduling and M2 hardware cores that process user data, the classified parallel processing of a multi-core processor achieves the goal of meeting the high-throughput, multi-cell, multi-user requirements of the LTE air interface protocol stack data plane.

Brief description of the drawings

In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and those skilled in the art can obtain other drawings based on these drawings without creative effort.

FIG. 1 is a schematic diagram of the data plane of an LTE wireless communication air interface protocol according to an embodiment of the present invention;

FIG. 2 is a schematic diagram of a data plane software architecture of an LTE air interface protocol according to an embodiment of the present invention;

FIG. 3 is a schematic flowchart of a data processing method for an air interface protocol data plane according to an embodiment of the present disclosure;

FIG. 4 is a flowchart of a data processing method for an air interface protocol data plane according to an embodiment of the present invention;

FIG. 5 is a schematic structural diagram of a data processing apparatus for an air interface protocol data plane according to an embodiment of the present disclosure;

FIG. 6 is a schematic structural diagram of a base station according to an embodiment of the present disclosure.

Detailed description

The present invention will be further described in detail below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.

A base station, i.e., a public mobile communication base station, is a form of radio station, and refers to a radio transceiver station that, within a certain radio coverage area, transmits information between a mobile communication switching center and mobile telephone terminals.

The relevant terms in the embodiments of the present invention are explained below.

PDCP_UL THD (PDCP Up Link Thread): responsible for decryption, header decompression, removing PDCP header information, and in-order delivery of data packets to the upper layer (network layer).

PDCP_DL THD (PDCP Down Link Thread): responsible for encryption, header compression, adding PDCP header information, timed discarding, and delivery of data packets to the lower layer (RLC).

RLC_UL THD (RLC Up Link Thread): responsible for concatenation, reassembly, reordering, uplink ARQ, and removing RLC header information.

RLC_DL THD (RLC Down Link Thread): responsible for segmentation, padding, re-segmentation, downlink ARQ, and adding RLC header information.

MAC_UL THD (MAC Up Link Thread): responsible for demultiplexing transport channel data onto logical channels and removing MAC header information.

MAC_DL THD (MAC Down Link Thread): responsible for multiplexing logical channel data onto the transport channel, padding, and adding MAC header information.

MAC_SCH_UL THD (MAC Schedule Up Link Thread): responsible for the uplink MAC scheduling function and uplink grants.

MAC_SCH_DL THD (MAC Schedule Down Link Thread): responsible for the downlink MAC scheduling function and downlink grants.

In the embodiment of the present invention, the user data thread group includes a PDCP uplink processing thread PDCP_UL THD, a PDCP downlink processing thread PDCP_DL THD, an RLC uplink processing thread RLC_UL THD, an RLC downlink processing thread RLC_DL THD, a MAC uplink processing thread MAC_UL THD, and a MAC downlink processing thread MAC_DL THD; the cell scheduling thread group includes an uplink MAC scheduling thread MAC_SCH_UL THD and a downlink MAC scheduling thread MAC_SCH_DL THD.

The data processing method and device for the air interface protocol data plane provided by the embodiments of the present invention are based on the following data plane software architecture. FIG. 2 shows a data plane software architecture diagram of the LTE air interface protocol: the total number of available processor cores is S1 + S2, each core processes one thread group, S1 cores are used to process S1 user data thread groups, and S2 cores are used to process S2 cell scheduling thread groups. By separating the cell scheduling thread groups from the user data thread groups and reasonably allocating the multi-core processor to process the two types of thread groups in parallel, the data plane of the LTE air interface protocol stack can meet high-throughput, multi-cell, multi-user requirements.
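As an illustration of this one-core-per-thread-group deployment, the following C sketch groups the threads named above into the two thread-group types and pins a thread to its group's core. The struct layout and the Linux-specific affinity call are assumptions made for illustration, not the patent's own implementation.

/* Sketch of the two thread-group types described above (illustrative only). */
#define _GNU_SOURCE
#include <sched.h>
#include <pthread.h>

/* One user data thread group: the six L2 data-processing threads, all deployed on one core. */
typedef struct {
    pthread_t pdcp_ul_thd, pdcp_dl_thd;   /* PDCP_UL THD, PDCP_DL THD */
    pthread_t rlc_ul_thd,  rlc_dl_thd;    /* RLC_UL THD,  RLC_DL THD  */
    pthread_t mac_ul_thd,  mac_dl_thd;    /* MAC_UL THD,  MAC_DL THD  */
    int core_id;                          /* the single hardware core hosting this group */
} user_data_thread_group_t;

/* One cell scheduling thread group: uplink and downlink MAC scheduling threads. */
typedef struct {
    pthread_t mac_sch_ul_thd, mac_sch_dl_thd;  /* MAC_SCH_UL THD, MAC_SCH_DL THD */
    int core_id;
} cell_sched_thread_group_t;

/* Pin a thread to the hardware core its group is deployed on (Linux-specific). */
static int pin_to_core(pthread_t thd, int core_id) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core_id, &set);
    return pthread_setaffinity_np(thd, sizeof(set), &set);
}

With S1 user data groups and S2 cell scheduling groups deployed this way, S1 + S2 cores are consumed, matching the architecture of FIG. 2.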

Embodiments of the present invention provide a data processing method for an air interface protocol data plane. As shown in FIG. 3, a schematic flowchart of a data processing method for an air interface protocol data plane according to an embodiment of the present invention includes:

Step 301: Acquire a total number of hardware cores Y of the base station and a number N of cells that the base station needs to support.

Specifically, the number of processor cores of the base station is queried to obtain the total number of hardware cores Y, and the number of cells supported by the base station is queried to obtain the number of cells N.

Step 302: Determine, according to the relationship between the number of cells N that the base station needs to support and the total number of hardware cores Y of the base station, the hardware core number M1 for processing the MAC layer scheduling of the cell;

Step 303: Determine a hardware core number M2 for processing user data according to the total number of hardware cores Y of the base station and the hardware core number M1 for processing the MAC layer scheduling of the cell, where M1+M2≤Y.

Specifically, in step 302, it is determined whether Y is greater than or equal to 2N according to the principle that each cell occupies one hardware core for cell MAC layer scheduling and one hardware core for processing user data. At this point, to balance the load on the hardware cores, it is first considered that each cell scheduling thread group processes the user scheduling of one cell, and each cell is configured with one user data thread group to process its user data.

When Y ≥ 2N, it is determined that the hardware core number M1 for processing the cell MAC layer scheduling is N, and the hardware core number M2 = Y-N for processing the user data is determined.

At this time, the cell scheduling thread groups occupy N hardware cores, and the remaining hardware cores can be used to process user data. Since Y - N ≥ N, that is, the number of remaining hardware cores is greater than or equal to the number of cells, all or some of the remaining hardware cores can be used to deploy user data thread groups; in this case, each cell can be configured with at least one user data thread group to process its user data.

For example, if the total number of hardware cores of the base station is Y = 8 and the number of cells of the base station is N = 3, then, since Y > 2N, the number of hardware cores for processing cell MAC layer scheduling is M1 = 3 and the number of hardware cores for processing user data is M2 = 5. That is, the cell scheduling thread groups occupy 3 hardware cores, and the remaining 5 hardware cores are used to process user data. These five hardware cores need not all host user data thread groups; they can be deployed in whole or in part according to the number of users that need to be supported. Preferably, each cell is configured with at least one hardware core hosting a user data thread group.

When Y < 2N, according to the principle that one hardware core that processes cell MAC layer scheduling serves two cells, the number of hardware cores M1 for processing cell MAC layer scheduling is determined, where M1 is an integer not less than N/2; the number of hardware cores for processing user data is M2 = Y - M1.

At this time, one cell scheduling thread group processes user scheduling of two cells, at least N/2 hardware cores are needed to deploy a cell scheduling thread group, and the remaining hardware cores are used to deploy a user data thread group.

Specifically, when N/2 is not an integer, rounding up is required. For example, if the number of cells is N=3, then N/2=1.5, and M1 needs to be rounded up, that is, M1=2.

It should be noted that, when the total number of hardware cores Y is small, a hardware core that processes cell MAC layer scheduling may also serve more than two cells, that is, one cell scheduling thread group may also process the user scheduling of more than two cells, which is not limited herein.

When N ≤ Y - M1, that is, when the number of remaining hardware cores is greater than or equal to the number of cells, all or some of the remaining hardware cores may be used to deploy user data thread groups, in which case each cell may be configured with at least one user data thread group to process its user data.

For example, if the total number of hardware cores of the base station is Y = 7 and the number of cells of the base station is N = 4, then, since Y < 2N, one hardware core that processes cell MAC layer scheduling serves two cells, that is, one cell scheduling thread group processes the user scheduling of two cells. In this case, the number of hardware cores for processing cell MAC layer scheduling is M1 = 2, and the number of hardware cores for processing user data is M2 = 5. Since N < M2, the remaining 5 hardware cores are used to process the user data of the 4 cells.

For another example, if the total number of hardware cores of the base station is Y = 9 and the number of cells of the base station is N = 5, then, since Y < 2N, one hardware core that processes cell MAC layer scheduling serves two cells, that is, one cell scheduling thread group processes the user scheduling of two cells. In this case, the number of hardware cores M1 for processing cell MAC layer scheduling is rounded up to 3, and the number of hardware cores for processing user data is M2 = 6. Since N < M2, the remaining 6 hardware cores are used to process the user data of the 5 cells.

When N > Y - M1, that is, when the number of remaining hardware cores is smaller than the number of cells, all or some of the remaining hardware cores may be used to deploy user data thread groups, and in this case one user data thread group processes the user data of at least one cell.

For example, if the total number of hardware cores of the base station is Y = 5 and the number of cells of the base station is N = 4, then, since Y < 2N, one hardware core that processes cell MAC layer scheduling serves two cells, that is, one cell scheduling thread group processes the user scheduling of two cells. In this case, the number of hardware cores for processing cell MAC layer scheduling is M1 = 2, and the number of hardware cores for processing user data is M2 = 3. Since N > M2, the remaining 3 hardware cores are used to process the user data of the 4 cells.

It should be noted that when N > Y - M1 and N/2 is not an integer, M1 may also be obtained by rounding down, which is not limited herein. For example, if the total number of hardware cores of the base station is Y = 6 and the number of cells is N = 5, the number of hardware cores M1 for processing cell MAC layer scheduling may be set to 2, and the number of hardware cores for processing user data is then M2 = 4. Since N > M2, the remaining 4 hardware cores are used to process the user data of the 5 cells.
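The allocation rule described in steps 302 and 303 can be summarized in a few lines of C. This is a minimal sketch assuming the ceil(N/2) variant when Y < 2N (the rounding-down variant mentioned above would simply replace that line), and the function name is hypothetical.

/* Minimal sketch of the core allocation rule: M1 cores for cell MAC scheduling,
 * M2 cores for user data, with M1 + M2 <= Y. */
#include <assert.h>

typedef struct { int m1; int m2; } core_alloc_t;

static core_alloc_t allocate_cores(int y /* total hardware cores */,
                                   int n /* cells to support    */) {
    core_alloc_t a;
    if (y >= 2 * n) {
        a.m1 = n;            /* one scheduling core per cell                      */
    } else {
        a.m1 = (n + 1) / 2;  /* one scheduling core serves two cells: ceil(N / 2) */
    }
    a.m2 = y - a.m1;         /* remaining cores process user data                 */
    assert(a.m1 + a.m2 <= y);
    return a;
}

Applied to the examples above, allocate_cores(8, 3) gives M1 = 3 and M2 = 5, allocate_cores(9, 5) gives M1 = 3 and M2 = 6, and allocate_cores(5, 4) gives M1 = 2 and M2 = 3.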

It should be noted that, in an LTE system, the number of users that a base station needs to support is on the order of hundreds or thousands. It is therefore necessary to allocate users appropriately among the user data thread groups so that the load of the hardware cores remains balanced. Accordingly, after step 303, it is also necessary to adjust which user data thread group an accessing user is assigned to according to the load of the hardware cores that process user data. Based on the above data processing method for an air interface protocol data plane, the embodiments of the present invention further provide three methods for balancing the load of the hardware cores when the number of users is large.

Method 1: According to the number of users accessed by each user data thread group, the user data thread group with the least number of accessed users is used as the user data thread group of the new access user.

Specifically, each user data thread group records the number S of users it has accessed; S is incremented by one whenever the user data thread group accesses a new user, and decremented by one whenever a user is released. Each time a new user accesses, the user data thread group whose recorded number of accessed users S is smallest among the M2 user data thread groups is selected as the user data thread group of the new access user. This method is relatively simple and easy to implement, and the number of users processed by each user data thread group is basically the same; however, because different users run different services with different data volumes, the load of the hardware cores may still be somewhat unbalanced.
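A minimal sketch of this counting scheme follows; the array size and the function names are illustrative assumptions.

/* Method 1 sketch: each user data thread group keeps a count S of accessed users,
 * and a newly accessed user is assigned to the group with the smallest S. */
#define MAX_UDT_GROUPS 64                     /* illustrative upper bound on M2 */

static int g_user_count[MAX_UDT_GROUPS];      /* S for each user data thread group */

/* Called when a new user accesses; returns the index of the chosen group. */
int pick_group_min_users(int m2 /* number of user data thread groups */) {
    int best = 0;
    for (int i = 1; i < m2; i++) {
        if (g_user_count[i] < g_user_count[best]) best = i;
    }
    g_user_count[best]++;                     /* S is incremented on access... */
    return best;
}

/* ...and decremented when a user is released. */
void release_user(int group) { g_user_count[group]--; }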

Method 2:

According to the average CPU load, recorded by each user data thread group, of the hardware core to which it belongs within a set time period, the user data thread group deployed on the hardware core with the lowest average CPU load is used as the user data thread group of the new access user.

Specifically, each user data thread group records the average CPU processing load of its associated hardware core over a duration of T seconds. When a new user accesses, the user data thread group deployed on the hardware core with the lowest average CPU processing load is selected as the user data thread group of the new access user. This method is more complicated to implement, but it basically balances the load of each hardware core.
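The selection step of Method 2 can be sketched as follows; the load array is assumed to be refreshed elsewhere every T seconds, and all names are illustrative.

/* Method 2 sketch: assign a new user to the group whose core has the lowest
 * average CPU load over the last T seconds. */
#define MAX_UDT_GROUPS 64

static double g_avg_cpu_load[MAX_UDT_GROUPS]; /* per-group core load, refreshed every T seconds */

int pick_group_min_cpu_load(int m2 /* number of user data thread groups */) {
    int best = 0;
    for (int i = 1; i < m2; i++) {
        if (g_avg_cpu_load[i] < g_avg_cpu_load[best]) best = i;
    }
    return best;
}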

Method 3:

According to the average CPU processing load, recorded by each user data thread group, of the hardware core to which it belongs within a set duration, if the difference between the maximum average CPU load and the minimum average CPU load exceeds a set threshold, the users of the user data thread group deployed on the hardware core with the maximum average CPU load are moved to the user data thread group deployed on the hardware core with the minimum average CPU load.

Method 1 and Method 2 both only decide which user data thread group a new user is assigned to at access time. However, the volume of each user's service data changes dynamically: even if the hardware core load is balanced when users access, it changes as the data volume of each user's services changes. It is therefore necessary to dynamically adjust the user data thread group to which a user belongs.

Specifically, each user data thread group records the real-time average CPU processing load of its associated hardware core over a duration of T seconds. If the difference between the maximum and minimum average CPU loads exceeds a set threshold H, the users of the user data thread group deployed on the hardware core with the maximum average CPU load are moved to the user data thread group deployed on the hardware core with the minimum average CPU load, and the above dynamic adjustment process is then repeated, so that the difference between the real-time CPU processing loads of the hardware cores to which the user data thread groups belong does not exceed the set threshold. This method is the most complicated, but it balances the load of the hardware cores in real time.
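A minimal sketch of this periodic rebalancing follows; migrate_one_user() is a hypothetical placeholder for re-homing a user's L2 context, and the load array is assumed to be refreshed every T seconds before each call.

/* Method 3 sketch: if the gap between the most and least loaded cores exceeds the
 * threshold H, move users from the busiest group toward the idlest one. Calling this
 * after every load refresh keeps the gap within the threshold over time. */
#define MAX_UDT_GROUPS 64

extern double g_avg_cpu_load[MAX_UDT_GROUPS];     /* per-group core load over the last T seconds */
extern void migrate_one_user(int from, int to);   /* hypothetical: move one user between groups  */

void rebalance_once(int m2, double threshold_h) {
    int lo = 0, hi = 0;
    for (int i = 1; i < m2; i++) {
        if (g_avg_cpu_load[i] < g_avg_cpu_load[lo]) lo = i;
        if (g_avg_cpu_load[i] > g_avg_cpu_load[hi]) hi = i;
    }
    if (g_avg_cpu_load[hi] - g_avg_cpu_load[lo] > threshold_h)
        migrate_one_user(hi, lo);                 /* shift load toward the least loaded core */
}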

The embodiment of the present invention provides a data processing method for an air interface protocol data plane. By reasonably allocating the hardware cores, they are divided into M1 hardware cores that process cell MAC layer scheduling and M2 hardware cores that process user data. The multi-core processor is thus reasonably allocated to process the two types of thread groups in parallel, meeting the high-throughput, multi-cell, multi-user requirements of the LTE air interface protocol stack while balancing the hardware core load and improving hardware resource utilization and system stability.

An embodiment of the present invention further provides a data processing method for an air interface protocol data plane. As shown in FIG. 4, a flowchart of a data processing method for an air interface protocol data plane according to an embodiment of the present invention includes:

Step 401: Acquire a total number of hardware cores Y of the base station and a number N of cells of the base station.

Step 402: It is determined whether Y≥2N is established. If yes, step 403 is performed, otherwise step 404 is performed.

Step 403: Determine a hardware core number M1=N for processing cell MAC layer scheduling, and determine a hardware core number M2=Y-N for processing user data.

For example, if the total number of hardware cores of the base station is Y = 8 and the number of cells of the base station is N = 3, then, since Y > 2N, the number of hardware cores for processing cell MAC layer scheduling is M1 = 3 and the number of hardware cores for processing user data is M2 = 5. That is, the cell scheduling thread groups occupy 3 hardware cores, and the remaining 5 hardware cores are used to process user data.

Step 404: Determine the number of hardware cores M1 for processing cell MAC layer scheduling, where M1 is an integer not less than N/2, and proceed to step 405.

Step 405: It is determined whether N>Y-M1 is established. If yes, step 406 is performed, otherwise step 407 is performed.

Step 406: Configure a user data thread group to process user data of at least one cell.

For example, if the total number of hardware cores of the base station is Y = 5 and the number of cells of the base station is N = 4, then, since Y < 2N, one hardware core that processes cell MAC layer scheduling serves two cells, that is, one cell scheduling thread group processes the user scheduling of two cells. In this case, the number of hardware cores for processing cell MAC layer scheduling is M1 = 2, and the number of hardware cores for processing user data is M2 = 3. Since N > M2, the remaining 3 hardware cores are used to process the user data of the 4 cells, and a user data thread group is therefore configured to process the user data of at least one cell.

Step 407: Configure at least one user data thread group to process user data of one cell.

For example, if the total number of hardware cores of the base station is Y = 7 and the number of cells of the base station is N = 4, then, since Y < 2N, one hardware core that processes cell MAC layer scheduling serves two cells, that is, one cell scheduling thread group processes the user scheduling of two cells. In this case, the number of hardware cores for processing cell MAC layer scheduling is M1 = 2, and the number of hardware cores for processing user data is M2 = 5. Since N < M2, the remaining 5 hardware cores are used to process the user data of the 4 cells, and at least one user data thread group is configured to process the user data of one cell.

Based on the same inventive concept, the embodiment of the present invention further provides a data processing device for the data plane of the air interface protocol. As shown in FIG. 5, the schematic diagram of the data processing device for the data plane of the air interface protocol provided by the embodiment of the present invention includes:

The obtaining unit 501 is configured to acquire a total number of hardware cores Y of the base station and a number of cells N that the base station needs to support;

The first determining unit 502 is configured to determine, according to the relationship between the number of cells N that the base station needs to support and the total number of hardware cores Y of the base station, a hardware core number M1 for processing cell MAC layer scheduling;

a second determining unit 503, configured to determine a hardware core number M2 for processing user data according to the total number of hardware cores Y of the base station and the hardware core number M1 for processing the cell medium access control MAC layer scheduling, where M1+M2≤Y.

Preferably, the first determining unit 502 is specifically configured to:

According to the principle that each cell occupies one hardware core for cell MAC layer scheduling, each cell occupies a hardware core to process user data, and determines whether Y is greater than or equal to 2N;

When Y≥2N, it is determined that the hardware core number M1 for processing the cell MAC layer scheduling is N; the second determining unit determines the hardware core number M2=Y-N for processing the user data.

Preferably, the first determining unit 502 is further configured to:

When Y < 2N, according to the principle that one hardware core that processes cell MAC layer scheduling serves two cells, the number of hardware cores M1 for processing cell MAC layer scheduling is determined, where M1 is an integer not less than N/2; the second determining unit determines that the number of hardware cores for processing user data is M2 = Y - M1.

Preferably, the device further includes an adjusting unit 504, configured to:

Determining a cell scheduling thread group and a user data thread group of the cell in the base station;

The cell scheduling thread group is deployed on the M1 hardware cores that process the MAC layer scheduling of the cell, and the user data thread group is deployed on the M2 hardware cores that process the user data;

The deployment of the user accessing the user data thread group is adjusted according to the load of the hardware core processing the user data.

Preferably, the adjusting unit 504 is further configured to:

According to the number of users accessed by each user data thread group, the user data thread group with the fewest accessed users is used as the user data thread group for a newly accessed user;

or,

According to the average CPU load, recorded by each user data thread group, of the hardware core to which it belongs within a set duration, the user data thread group deployed on the hardware core with the lowest average CPU load is used as the user data thread group for a newly accessed user;

or,

According to the average CPU load, recorded by each user data thread group, of the hardware core to which it belongs within a set duration, if the difference between the maximum average CPU load and the minimum average CPU load exceeds a set threshold, the users of the user data thread group deployed on the hardware core with the maximum average CPU load are moved to the user data thread group deployed on the hardware core with the minimum average CPU load.

The embodiment of the present invention provides a data processing apparatus for an air interface protocol data plane, which acquires the total number of hardware cores Y of a base station and the number of cells N that the base station needs to support; determines, according to the relationship between the number of cells N that the base station needs to support and the total number of hardware cores Y of the base station, the number of hardware cores M1 for processing cell MAC layer scheduling; and determines, according to the total number of hardware cores Y of the base station and the number of hardware cores M1 for processing cell medium access control MAC layer scheduling, the number of hardware cores M2 for processing user data, where M1 + M2 ≤ Y. By reasonably allocating the total number of hardware cores into M1 hardware cores that process cell MAC layer scheduling and M2 hardware cores that process user data, the multi-core processor is reasonably allocated to process the two types of thread groups in parallel, meeting the high-throughput, multi-cell, multi-user requirements of the LTE air interface protocol stack while balancing the hardware core load and improving hardware resource utilization and system stability.

It is worth mentioning that each unit involved in the above embodiments is a logical unit. In practical applications, a logical unit may be implemented by one physical unit, by a part of a physical unit, or by a combination of multiple physical units. In addition, in order to highlight the innovative part of the present application, this embodiment does not introduce units that are not closely related to solving the technical problem proposed by the present application, but this does not mean that no other units exist in this embodiment.

Based on the same inventive concept, the embodiment of the present application further provides a base station, as shown in FIG. 6, including: at least one processor 600, a transceiver 610, and a memory 620 communicatively coupled to the at least one processor 600. The base station may be the data processing apparatus for an air interface protocol data plane in the foregoing embodiments, and the steps performed by the functional units in the data processing apparatus for an air interface protocol data plane may be performed by the processor in the base station provided by the embodiment of the present invention.

The processor 600 is configured to read a program in the memory 620 and perform the following processes:

Obtaining a total number of hardware cores Y of the base station and a number of cells N to be supported by the base station; determining, according to the relationship between the number of cells N that the base station needs to support and the total number of hardware cores Y of the base station, determining hardware for processing the MAC layer scheduling of the cell a core number M1; determining, according to the total number of hardware cores Y of the base station and the hardware core number M1 for processing the cell medium access control MAC layer scheduling, a hardware core number M2 for processing user data, where M1+M2≤ Y.

Optionally, the processor 600 can:

According to the principle that each cell occupies one hardware core for cell MAC layer scheduling, each cell occupies a hardware core to process user data, and determines whether Y is greater than or equal to 2N;

When Y≥2N, it is determined that the hardware core number M1 for processing the cell MAC layer scheduling is N, and the hardware core number M2=Y-N for processing the user data is determined.

Optionally, the processor 600 is further capable of:

When Y < 2N, according to the principle that one hardware core that processes cell MAC layer scheduling serves two cells, the number of hardware cores M1 for processing cell MAC layer scheduling is determined, where M1 is an integer not less than N/2; the number of hardware cores for processing user data is M2 = Y - M1.

Optionally, the processor 600 is further capable of:

Determining a cell scheduling thread group and a user data thread group of the cell in the base station;

The cell scheduling thread group is deployed on the M1 hardware cores that process the MAC layer scheduling of the cell, and the user data thread group is deployed on the M2 hardware cores that process the user data;

The deployment of the user accessing the user data thread group is adjusted according to the load of the hardware core processing the user data.

Optionally, the processor 600 can:

According to the number of users accessed by each user data thread group, the user data thread group with the fewest accessed users is used as the user data thread group for a newly accessed user;

or,

According to the average CPU load, recorded by each user data thread group, of the hardware core to which it belongs within a set duration, the user data thread group deployed on the hardware core with the lowest average CPU load is used as the user data thread group for a newly accessed user;

or,

According to the average CPU load, recorded by each user data thread group, of the hardware core to which it belongs within a set duration, if the difference between the maximum average CPU load and the minimum average CPU load exceeds a set threshold, the users of the user data thread group deployed on the hardware core with the maximum average CPU load are moved to the user data thread group deployed on the hardware core with the minimum average CPU load.

The transceiver 610 is configured to receive and transmit data under the control of the processor 600.

In FIG. 6, the bus architecture may include any number of interconnected buses and bridges, which link together various circuits including one or more processors represented by the processor 600 and memories represented by the memory 620. The bus architecture may also link various other circuits such as peripherals, voltage regulators, and power management circuits, which are well known in the art and are therefore not further described herein. The bus interface provides an interface. The transceiver 610 may comprise a plurality of components, i.e., a transmitter and a receiver, providing a means for communicating with various other devices over a transmission medium.

The processor 600 is responsible for managing the bus architecture and general processing, and the memory 620 can store data used by the processor 600 in performing operations.

Optionally, the processor 600 may be a CPU (Central Processing Unit), an ASIC (Application Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array), or a CPLD (Complex Programmable Logic Device).

Based on the same inventive concept, the present application provides a non-volatile computer storage medium storing computer-executable instructions, where the computer-executable instructions are configured to cause a computer to execute the data processing method for an air interface protocol data plane in any of the above embodiments.

Based on the same inventive concept, the present application provides a computer program product, the computer program product comprising a computer program stored on a non-volatile computer-readable storage medium, the computer program comprising computer-executable instructions that, when executed by a computer, cause the computer to perform the data processing method for an air interface protocol data plane in any of the above embodiments.

The device embodiments described above are merely illustrative. The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the embodiment. Those of ordinary skill in the art can understand and implement the embodiments without creative effort.

Through the description of the above embodiments, those skilled in the art can clearly understand that the various embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and certainly also by hardware. Based on such understanding, the essence of the above technical solutions, or the part contributing to the prior art, may be embodied in the form of a software product, which may be stored in a computer-readable storage medium such as a ROM/RAM, a magnetic disk, or an optical disc, and which includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform the methods described in the embodiments or in parts of the embodiments.

The present invention has been described with reference to flowcharts and/or block diagrams of methods, apparatuses (systems), and computer program products according to embodiments of the invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or other programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

The computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to operate in a particular manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus, the instruction apparatus implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, and the instructions executed on the computer or other programmable device thus provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

While preferred embodiments of the present invention have been described, those skilled in the art can make additional changes and modifications to these embodiments once they learn of the basic inventive concept. Therefore, the appended claims are intended to be construed as including the preferred embodiments and all changes and modifications falling within the scope of the present invention.

It is apparent that those skilled in the art can make various modifications and variations to the invention without departing from the spirit and scope of the invention. Thus, if these modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to cover them.

Claims (17)

  1. A data processing method for an air interface protocol data plane, comprising:
    Obtaining a total number of hardware cores Y of the base station and a number of cells N that the base station needs to support;
    Determining, according to the relationship between the number of cells N that the base station needs to support and the total number of hardware cores Y of the base station, the hardware core number M1 for processing the MAC layer scheduling of the cell;
    Determining a hardware core number M2 for processing user data according to the total number of hardware cores Y of the base station and the hardware core number M1 for processing the cell medium access control MAC layer scheduling, where M1+M2≤Y.
  2. The method according to claim 1, wherein the hardware core number M1 for processing the MAC layer scheduling of the cell is determined according to the relationship between the number N of cells supported by the base station and the total number of hardware cores Y of the base station, comprising:
    According to the principle that each cell occupies one hardware core for cell MAC layer scheduling, each cell occupies a hardware core to process user data, and determines whether Y is greater than or equal to 2N;
    When Y≥2N, it is determined that the hardware core number M1 for processing the cell MAC layer scheduling is N, and the hardware core number M2=Y-N for processing the user data is determined.
  3. The method of claim 2, further comprising:
    When Y < 2N, according to the principle that one hardware core that processes cell MAC layer scheduling serves two cells, the number of hardware cores M1 for processing cell MAC layer scheduling is determined, where M1 is an integer not less than N/2; the number of hardware cores for processing user data is M2 = Y - M1.
  4. The method according to any one of claims 1 to 3, wherein after the determining the number of hardware cores M2 for processing the user data, the method further comprises:
    Determining a cell scheduling thread group and a user data thread group of the cell in the base station;
    The cell scheduling thread group is deployed on the M1 hardware cores that process the MAC layer scheduling of the cell, and the user data thread group is deployed on the M2 hardware cores that process the user data;
    The deployment of the user accessing the user data thread group is adjusted according to the load of the hardware core processing the user data.
  5. The method according to claim 4, wherein the adjusting the deployment of the user accessing the user data thread group according to the load of the hardware core processing the user data comprises:
    According to the number of users accessed by each user data thread group, the user data thread group with the fewest accessed users is used as the user data thread group for a newly accessed user;
    or,
    According to the average CPU load, recorded by each user data thread group, of the hardware core to which it belongs within a set duration, the user data thread group deployed on the hardware core with the lowest average CPU load is used as the user data thread group for a newly accessed user;
    or,
    According to the average CPU load, recorded by each user data thread group, of the hardware core to which it belongs within a set duration, if the difference between the maximum average CPU load and the minimum average CPU load exceeds a set threshold, the users of the user data thread group deployed on the hardware core with the maximum average CPU load are moved to the user data thread group deployed on the hardware core with the minimum average CPU load.
  6. A data processing device for a data plane of an air interface protocol, comprising:
    The acquiring unit is configured to acquire a total number of hardware cores Y of the base station and a number of cells N that the base station needs to support;
    a first determining unit, configured to determine, according to the relationship between the number of cells N that the base station needs to support and the total number of hardware cores Y of the base station, a hardware core number M1 for processing cell MAC layer scheduling;
    a second determining unit, configured to determine, according to the total number of hardware cores Y of the base station and the hardware core number M1 for processing cell medium access control (MAC) layer scheduling, a hardware core number M2 for processing user data, where M1+M2≤Y.
  7. The device according to claim 6, wherein the first determining unit is specifically configured to:
    Determining, according to the principle that each cell occupies one hardware core for cell MAC layer scheduling and one hardware core for processing user data, whether Y is greater than or equal to 2N;
    When Y≥2N, determining that the hardware core number M1 for processing cell MAC layer scheduling is N; and the second determining unit determines the hardware core number M2=Y-N for processing user data.
  8. The device according to claim 7, wherein the first determining unit is further configured to:
    When Y<2N, determining the hardware core number M1 for processing cell MAC layer scheduling according to the principle that each hardware core for processing cell MAC layer scheduling serves two cells, where M1 is an integer not less than N/2; and the second determining unit determines the hardware core number M2=Y-M1 for processing user data.
  9. The apparatus according to any one of claims 6 to 8, further comprising an adjusting unit configured to:
    Determine a cell scheduling thread group and a user data thread group for the cells in the base station;
    Deploy the cell scheduling thread group on the M1 hardware cores that process cell MAC layer scheduling, and deploy the user data thread group on the M2 hardware cores that process user data;
    Adjust the deployment of accessed users among the user data thread groups according to the load of the hardware cores that process user data.
  10. The apparatus according to claim 9, wherein the adjusting unit is further configured to:
    Select, according to the number of users accessing each user data thread group, the user data thread group with the fewest accessed users as the user data thread group of a newly accessing user;
    or,
    Select, according to the average CPU load of each user data thread group's hardware core within a set duration, the user data thread group deployed on the hardware core with the smallest average CPU load as the user data thread group of a newly accessing user;
    or,
    Record, for each user data thread group, the average CPU load of its hardware core within the set duration; and if the difference between the maximum average CPU load and the minimum average CPU load exceeds a set threshold, move users of the user data thread group deployed on the hardware core with the maximum average CPU load to the user data thread group deployed on the hardware core with the minimum average CPU load.
  11. A base station, comprising: at least one processor, a transceiver, and a memory communicatively coupled to the at least one processor;
    The memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor to enable the at least one processor to:
    Obtaining a total number of hardware cores Y of the base station and a number of cells N that the base station needs to support;
    Determining, according to the relationship between the number of cells N that the base station needs to support and the total number of hardware cores Y of the base station, the hardware core number M1 for processing the MAC layer scheduling of the cell;
    Determining a hardware core number M2 for processing user data according to the total number of hardware cores Y of the base station and the hardware core number M1 for processing cell medium access control (MAC) layer scheduling, where M1+M2≤Y.
  12. The base station according to claim 11, wherein the at least one processor is further enabled to:
    Determine, according to the principle that each cell occupies one hardware core for cell MAC layer scheduling and one hardware core for processing user data, whether Y is greater than or equal to 2N;
    When Y≥2N, determine that the hardware core number M1 for processing cell MAC layer scheduling is N, and determine the hardware core number M2=Y-N for processing user data.
  13. The base station according to claim 12, wherein the at least one processor is further enabled to:
    When Y<2N, determine the hardware core number M1 for processing cell MAC layer scheduling according to the principle that each hardware core for processing cell MAC layer scheduling serves two cells, where M1 is an integer not less than N/2; and determine the hardware core number M2=Y-M1 for processing user data.
  14. The base station according to any one of claims 11 to 13, wherein the at least one processor is further enabled to:
    Determine a cell scheduling thread group and a user data thread group for the cells in the base station;
    Deploy the cell scheduling thread group on the M1 hardware cores that process cell MAC layer scheduling, and deploy the user data thread group on the M2 hardware cores that process user data;
    Adjust the deployment of accessed users among the user data thread groups according to the load of the hardware cores that process user data.
  15. The base station according to claim 14, wherein the at least one processor is further enabled to:
    Select, according to the number of users accessing each user data thread group, the user data thread group with the fewest accessed users as the user data thread group of a newly accessing user;
    or,
    Select, according to the average CPU load of each user data thread group's hardware core within a set duration, the user data thread group deployed on the hardware core with the smallest average CPU load as the user data thread group of a newly accessing user;
    or,
    Record, for each user data thread group, the average CPU load of its hardware core within the set duration; and if the difference between the maximum average CPU load and the minimum average CPU load exceeds a set threshold, move users of the user data thread group deployed on the hardware core with the maximum average CPU load to the user data thread group deployed on the hardware core with the minimum average CPU load.
  16. A non-transitory computer-readable storage medium, characterized in that the storage medium stores computer-executable instructions for causing a computer to perform the data processing method for an air interface protocol data plane according to any one of claims 1 to 5.
  17. A computer program product, comprising a computer program stored on a non-transitory computer-readable storage medium, the computer program comprising computer-executable instructions which, when executed by a computer, cause the computer to perform the data processing method for an air interface protocol data plane according to any one of claims 1 to 5.
PCT/CN2017/118081 2017-01-19 2017-12-22 Data processing method and apparatus for air interface protocol data plane WO2018133625A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201710044159.7A CN106851667B (en) 2017-01-19 2017-01-19 A data processing method and device for an air interface protocol data plane
CN201710044159.7 2017-01-19

Publications (1)

Publication Number Publication Date
WO2018133625A1 true WO2018133625A1 (en) 2018-07-26

Family

ID=59119247

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/118081 WO2018133625A1 (en) 2017-01-19 2017-12-22 Data processing method and apparatus for air interface protocol data plane

Country Status (2)

Country Link
CN (1) CN106851667B (en)
WO (1) WO2018133625A1 (en)

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100508499C (en) * 2006-11-02 2009-07-01 杭州华三通信技术有限公司 Multi-core processor for realizing adaptive dispatching and multi-core processing method
FI20085217A0 (en) * 2008-03-07 2008-03-07 Nokia Corp Data Processing device
CN102868643B (en) * 2012-08-31 2015-06-17 苏州简约纳电子有限公司 Long-term evolution (LTE) data surface software architecture
CN105827654A (en) * 2016-05-26 2016-08-03 西安电子科技大学 Multi-core parallel protocol stack structure design method based on GMR-1 3G system

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7620753B1 (en) * 2005-03-17 2009-11-17 Apple Inc. Lockless access to a ring buffer
CN103154897A (en) * 2010-10-14 2013-06-12 阿尔卡特朗讯公司 Core abstraction layer for telecommunication network applications
CN103838552A (en) * 2014-03-18 2014-06-04 北京邮电大学 System and method for processing multi-core parallel assembly line signals of 4G broadband communication system
CN103906257A (en) * 2014-04-18 2014-07-02 北京邮电大学 LTE broadband communication system calculation resource dispatcher based on GPP and dispatching method thereof

Also Published As

Publication number Publication date
CN106851667B (en) 2019-07-02
CN106851667A (en) 2017-06-13

Similar Documents

Publication Publication Date Title
CN103416017B (en) For performing channel aggregation and the method and apparatus of media access control re-transmission
JP5694388B2 (en) Method and system implemented in a base station or mobile station
CN104303446B (en) The method and apparatus that hybrid automatic repeat-request (HARQ) for carrier aggregation (CA) maps
EP2849524A1 (en) Scheduling virtualization for mobile RAN cloud and separation of of cell and user plane schedulers
US8532030B2 (en) Techniques for initiating communication in a wireless network
US9930596B2 (en) Method and apparatus for controlling small data transmission on the uplink
CN101971544B (en) Buffer status reporting system and method
ES2617750T3 (en) Procedures for intra base station handover optimizations
CN104335625B (en) Method and apparatus for the waiting for an opportunity property radio resources allocation in multi-carrier communications systems
CN104488308B (en) Method and apparatus for transmitting and receiving data in mobile communication system
TWM283441U (en) Apparatus having medium access control layer architecture for supporting enhanced uplink
CN104768206B (en) The data transmission method and device of device-to-device communication
US10110355B2 (en) Uplink transmission on unlicensed radio frequency band component carriers
TWI431996B (en) Apparatus for initializing, preserving and reconfiguring token buckets
CN101848487A (en) Method and communication apparatus for power headroom reporting
US20160150440A1 (en) Method for triggering a burffer status reporting and a device therefor
CN104782223A (en) Wireless communication in multi-RAT system
EP2568759A1 (en) Method and device for sending buffer status report in wireless network
JP2015536599A (en) Multi-RAT wireless communication system, operation method, and base station apparatus
DE112005002986T5 (en) Method and medium access controller for wireless broadband communication with variable size of the data units and delayed construction of data units
CN102111751B (en) Method and device for reporting buffer state report
CN106576318A (en) Timing alignment procedures for dual pucch
US9699800B2 (en) Systems, methods, and appartatuses for bearer splitting in multi-radio HetNet
EP2995161B1 (en) Uplink scheduling information reporting apparatus and system supporting multi-connectivity
EP3121981A1 (en) Data packet processing method and device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17893261

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase in:

Ref country code: DE