WO2018133625A1 - Data processing method and apparatus for an air interface protocol data plane - Google Patents


Info

Publication number
WO2018133625A1
WO2018133625A1 (PCT/CN2017/118081)
Authority
WO
WIPO (PCT)
Prior art keywords
user data
hardware
cell
thread group
processing
Prior art date
Application number
PCT/CN2017/118081
Other languages
English (en)
Chinese (zh)
Inventor
黄勇
吴治鸣
Original Assignee
京信通信系统(中国)有限公司
京信通信系统(广州)有限公司
京信通信技术(广州)有限公司
天津京信通信系统有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 京信通信系统(中国)有限公司, 京信通信系统(广州)有限公司, 京信通信技术(广州)有限公司, 天津京信通信系统有限公司 filed Critical 京信通信系统(中国)有限公司
Publication of WO2018133625A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W16/00 Network planning, e.g. coverage or traffic planning tools; Network deployment, e.g. resource partitioning or cells structures
    • H04W16/18 Network planning tools
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30 Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/38 Concurrent instruction execution, e.g. pipeline, look ahead
    • G06F9/3885 Concurrent instruction execution, e.g. pipeline, look ahead using a plurality of independent parallel functional units
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011 Allocation of resources, e.g. of the central processing unit [CPU] to service a request, the resources being hardware resources other than CPUs, servers and terminals
    • G06F9/5016 Allocation of resources, e.g. of the central processing unit [CPU] to service a request, the resource being the memory
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W28/00 Network traffic management; Network resource management
    • H04W28/02 Traffic management, e.g. flow control or congestion control
    • H04W28/0205 Traffic management, e.g. flow control or congestion control at the air interface
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W72/00 Local resource management
    • H04W72/50 Allocation or scheduling criteria for wireless resources
    • H04W72/52 Allocation or scheduling criteria for wireless resources based on load

Definitions

  • the present invention relates to the field of LTE (Long Term Evolution) technology, and in particular, to a data processing method and apparatus for an air interface protocol data plane.
  • LTE Long Term Evolution
  • LTE is a wireless communication standard designed for mobile high-bandwidth applications.
  • the wireless interface can be divided into three protocol layers: physical layer L1, data link layer L2, and network layer L3, as shown in FIG. 1.
  • the data plane L2 of the LTE wireless communication air interface protocol includes a PDCP (Packet Data Convergence Protocol) layer, an RLC (Radio Link Control) layer, and a MAC (Medium Access Control) layer.
  • PDCP Packet Data Convergence Protocol
  • RLC Radio Link Control
  • MAC Medium Access Control
  • the PDCP layer is responsible for header compression/decompression, integrity protection, encryption/decryption, PDCP SN (Sequence Number) maintenance, in-order delivery, data forwarding at handover, and timer-based discarding.
  • the RLC protocol layer is responsible for data transfer in TM (Transparent Mode), UM (Unacknowledged Mode), and AM (Acknowledged Mode).
  • the MAC protocol layer is responsible for mapping between logical channels and transport channels, logical channel multiplexing and demultiplexing, HARQ (Hybrid Automatic Repeat Request), dynamic scheduling, semi-persistent scheduling, transport format selection, and other features.
  • in the downlink direction, the PDCP adds PDCP header information to each data packet from the upper layer, performs header compression and encryption, and delivers the packet to the RLC. The RLC segments, concatenates, and pads the radio link control layer service data units from the PDCP according to the length scheduled by the MAC layer, and adds the corresponding RLC header information to form radio link control layer protocol data units. The MAC layer then multiplexes the data from the different logical channels onto the transport channel.
  • the multiplexing is performed according to a set of TB (Transport Block) sizes and transport formats, and involves concatenation, padding, and adding the corresponding MAC header information and MAC control information to form a MAC PDU (Protocol Data Unit); the TB formed after MAC processing is delivered to the bottom layer and transmitted over the air interface to the terminal.
  • TB Transport Block
  • in the uplink direction, the MAC layer removes the MAC header from the received data, demultiplexes it, and sends the demultiplexed data to the RLC, while the demultiplexed MAC control information is passed to MAC scheduling. The RLC removes the RLC header from the received data, reassembles it, and sends it to the PDCP. The PDCP decrypts the received data, decompresses the header, removes the PDCP header, and then delivers the packets to the upper layer in order.
  • the LTE uplink data rate can reach up to 50 Mbps and the downlink up to 100 Mbps.
  • with carrier aggregation, the rate is multiplied by the number of aggregated carriers, and the number of smart terminal devices is increasing sharply.
  • the number of users supported by the base station has increased by an order of magnitude.
  • the data plane architectures of the prior-art 2G and 3G air interface protocols are based on single-threaded or multi-threaded designs running on single-core or multi-core hardware processors, which makes it difficult to meet the high-rate performance requirements of LTE.
  • the data plane architecture of the existing air interface protocol therefore cannot meet the data plane requirements of high-throughput, multi-cell, multi-user LTE base station equipment.
  • the present invention provides a data processing method and apparatus for an air interface protocol data plane, which are used to solve the problem that the data plane architecture of the prior-art air interface protocol cannot meet the data plane requirements of high-throughput, multi-cell, multi-user LTE base station equipment.
  • an embodiment of the present invention provides a data processing method for an air interface protocol data plane, including:
  • the number of hardware cores M1 for processing cell MAC layer scheduling is determined according to the relationship between the number of cells N that the base station needs to support and the total number of hardware cores Y of the base station, including:
  • according to the principle that each cell occupies one hardware core for cell MAC layer scheduling and one hardware core for processing user data, determining whether Y is greater than or equal to 2N;
  • the method further includes:
  • the number of hardware cores M1 for processing cell MAC layer scheduling is determined, where M1 is an integer not less than N/2;
  • the number of hardware cores for processing user data is M2 = Y - M1.
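As an illustration, the core-partitioning rule described above can be sketched in Python (the function name and signature are our own; the document itself specifies only the arithmetic):

```python
import math

def allocate_cores(total_cores: int, num_cells: int) -> tuple[int, int]:
    """Split Y hardware cores into M1 scheduling cores and M2 user data cores.

    If Y >= 2N, each cell can have a dedicated MAC scheduling core (M1 = N);
    otherwise one cell scheduling thread group serves two cells, so M1 is
    an integer not less than N/2 (rounded up here). M2 = Y - M1.
    """
    if total_cores >= 2 * num_cells:
        m1 = num_cells
    else:
        m1 = math.ceil(num_cells / 2)
    return m1, total_cores - m1
```

For example, a base station with Y = 8 cores and N = 3 cells would use 3 scheduling cores and 5 user data cores; with Y = 4 and N = 3 the scheduling side drops to 2 cores.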
  • the method further includes:
  • the cell scheduling thread group is deployed on the M1 hardware cores that process the MAC layer scheduling of the cell, and the user data thread group is deployed on the M2 hardware cores that process the user data;
  • the deployment of the user accessing the user data thread group is adjusted according to the load of the hardware core processing the user data.
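A minimal sketch of the deployment step, assuming cores 0 to M1-1 host cell scheduling thread groups and the next M2 cores host user data thread groups (this index layout is our assumption, not stated in the source):

```python
def deploy_thread_groups(m1: int, m2: int) -> dict[int, str]:
    """Map each hardware core index to the kind of thread group it hosts:
    the first M1 cores run cell scheduling thread groups, the next M2
    cores run user data thread groups."""
    plan = {core: "cell_scheduling" for core in range(m1)}
    plan.update({m1 + i: "user_data" for i in range(m2)})
    return plan
```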
  • the adjusting of the deployment of users across the user data thread groups according to the load of the hardware cores processing user data comprises:
  • the user data thread group with the fewest accessed users is used as the user data thread group for a newly accessing user;
  • the average CPU load of each hardware core over a set time period is recorded, and the user data thread group deployed on the hardware core with the smallest average CPU load is used as the user data thread group for a newly accessing user;
  • the average CPU load of each hardware core over a set time period is recorded; if the difference between the maximum and minimum average CPU loads exceeds a set threshold, the users of the user data thread group deployed on the hardware core with the maximum average CPU load are moved to the user data thread group deployed on the hardware core with the minimum average CPU load.
  • the embodiment of the present invention further provides a data processing apparatus for a data plane of an air interface protocol, including:
  • the acquiring unit is configured to acquire a total number of hardware cores Y of the base station and a number of cells N that the base station needs to support;
  • a first determining unit configured to determine, according to a relationship between a number N of cells that the base station needs to support and a total number of hardware cores Y of the base station, a hardware core number M1 for processing a cell MAC layer scheduling;
  • a second determining unit configured to determine, according to the total number of hardware cores Y of the base station and the number of hardware cores M1 for processing cell medium access control (MAC) layer scheduling, a number of hardware cores M2 for processing user data, where M1 + M2 ≤ Y.
  • the first determining unit is specifically configured to:
  • according to the principle that each cell occupies one hardware core for cell MAC layer scheduling and one hardware core for processing user data, determining whether Y is greater than or equal to 2N;
  • the first determining unit is further configured to:
  • the number of hardware cores M1 for processing cell MAC layer scheduling is determined, where M1 is an integer not less than N/2;
  • the device further comprises an adjustment unit for:
  • the cell scheduling thread group is deployed on the M1 hardware cores that process the MAC layer scheduling of the cell, and the user data thread group is deployed on the M2 hardware cores that process the user data;
  • the deployment of the user accessing the user data thread group is adjusted according to the load of the hardware core processing the user data.
  • the adjusting unit is further configured to:
  • the user data thread group with the fewest accessed users is used as the user data thread group for a newly accessing user;
  • the user data thread group deployed on the hardware core with the smallest average CPU load is used as the user data thread group for a newly accessing user;
  • the average CPU load of each hardware core over a set time period is recorded; if the difference between the maximum and minimum average CPU loads exceeds a set threshold, the users of the user data thread group deployed on the hardware core with the maximum average CPU load are moved to the user data thread group deployed on the hardware core with the minimum average CPU load.
  • the embodiment of the present application further provides a base station, including: at least one processor, a transceiver, and a memory communicably connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor to enable the at least one processor to perform the data processing method for an air interface protocol data plane of an embodiment of the present application.
  • the embodiment of the present application further provides a non-volatile computer readable storage medium storing computer-executable instructions for causing a computer to execute the data processing method for an air interface protocol data plane of an embodiment of the present application.
  • the embodiment of the present application further provides a computer program product, the computer program product comprising a computer program stored on a non-transitory computer readable storage medium, the computer program comprising computer-executable instructions that, when executed by a computer, cause the computer to execute the data processing method for the air interface protocol data plane of the embodiment of the present application.
  • An embodiment of the present invention provides a data processing method and apparatus for an air interface protocol data plane, which acquire the total number of hardware cores Y of a base station and the number of cells N that the base station needs to support; determine, according to the relationship between the number of cells N that the base station needs to support and the total number of hardware cores Y of the base station, the number of hardware cores M1 for processing cell MAC layer scheduling; and determine, according to the total number of hardware cores Y of the base station and the number of hardware cores M1 for processing medium access control (MAC) layer scheduling, the number of hardware cores M2 for processing user data, where M1 + M2 ≤ Y.
  • classified parallel processing on the multi-core processor thus achieves the goal of satisfying the high-throughput, multi-cell, multi-user requirements of the LTE air interface protocol stack.
  • FIG. 1 is a schematic diagram of the data plane of the LTE wireless communication air interface protocol according to an embodiment of the present invention;
  • FIG. 2 is a schematic diagram of a data plane software architecture of an LTE air interface protocol according to an embodiment of the present invention
  • FIG. 3 is a schematic flowchart of a data processing method for an air interface protocol data plane according to an embodiment of the present disclosure
  • FIG. 4 is a flowchart of a data processing method for an air interface protocol data plane according to an embodiment of the present invention
  • FIG. 5 is a schematic structural diagram of a data processing apparatus for an air interface protocol data plane according to an embodiment of the present disclosure
  • FIG. 6 is a schematic structural diagram of a base station according to an embodiment of the present disclosure.
  • the base station, which is a public mobile communication base station, is a form of radio station, and refers to a radio transceiver station that transmits information between a mobile communication switching center and mobile telephone terminals within a certain radio coverage area.
  • PDCP_UL THD (PDCP Up Link Thread): responsible for decryption, header decompression, removing PDCP header information, and in-order delivery of data packets to the upper layer (network layer).
  • PDCP_DL THD (PDCP Down Link Thread): responsible for encryption, header compression, adding PDCP header information, timer-based discarding, and delivery of data packets to the lower layer (RLC).
  • RLC_UL THD (RLC Up Link Thread): responsible for reassembly, reordering, uplink ARQ, and removing RLC header information.
  • RLC_DL THD (RLC Down Link Thread): responsible for segmentation, concatenation, padding, re-segmentation, downlink ARQ, and adding RLC header information.
  • MAC_UL THD (MAC Up Link Thread): responsible for demultiplexing transport channel data to the logical channels and removing MAC header information.
  • MAC_DL THD (MAC Down Link Thread): responsible for multiplexing logical channel data onto the transport channel, padding, and adding MAC header information.
  • MAC_SCH_UL THD (MAC Schedule Up Link Thread): responsible for uplink MAC scheduling and uplink grants.
  • MAC_SCH_DL THD (MAC Schedule Down Link Thread): responsible for downlink MAC scheduling and downlink grants.
  • the user data thread group includes a PDCP uplink processing thread PDCP_UL THD, a PDCP downlink processing thread PDCP_DL THD, an RLC uplink processing thread RLC_UL THD, an RLC downlink processing thread RLC_DL THD, a MAC uplink processing thread MAC_UL THD, and a MAC downlink processing thread MAC_DL THD; the cell scheduling thread group includes an uplink MAC scheduling thread MAC_SCH_UL THD and a downlink MAC scheduling thread MAC_SCH_DL THD.
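The two thread-group compositions listed above can be captured as simple constants (a sketch; the grouping mirrors the listing in this document):

```python
# Threads in one user data thread group (per the listing above).
USER_DATA_THREADS = [
    "PDCP_UL THD", "PDCP_DL THD",
    "RLC_UL THD", "RLC_DL THD",
    "MAC_UL THD", "MAC_DL THD",
]

# Threads in one cell scheduling thread group.
CELL_SCHEDULING_THREADS = ["MAC_SCH_UL THD", "MAC_SCH_DL THD"]
```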
  • the data processing method and apparatus for the air interface protocol data plane provided by the embodiment of the present invention are based on this data plane software architecture.
  • FIG. 2 shows the data plane software architecture of the LTE air interface protocol provided by the present invention: the total number of available processor cores is S1 + S2, each core processes one thread group, S1 cores are used to process S1 user data thread groups, and S2 cores are used to process S2 cell scheduling thread groups.
  • the multi-core processor is reasonably allocated to process the two types of thread groups in parallel, so as to satisfy the high-throughput, multi-cell, multi-user requirements of the LTE air interface protocol stack.
  • Embodiments of the present invention provide a data processing method for an air interface protocol data plane.
  • a schematic flowchart of a data processing method for an air interface protocol data plane according to an embodiment of the present invention includes:
  • Step 301 Acquire a total number of hardware cores Y of the base station and a number N of cells that the base station needs to support.
  • the number of processor cores of the base station is queried to obtain the total number of hardware cores Y, and the number of cells that the base station supports is queried to obtain the number of cells N.
  • Step 302 Determine, according to the relationship between the number of cells N that the base station needs to support and the total number of hardware cores Y of the base station, the hardware core number M1 for processing the MAC layer scheduling of the cell;
  • Step 303 Determine the number of hardware cores M2 for processing user data according to the total number of hardware cores Y of the base station and the number of hardware cores M1 for processing cell MAC layer scheduling, where M1 + M2 ≤ Y.
  • in step 302, according to the principle that each cell occupies one hardware core for cell MAC layer scheduling and one hardware core for processing user data, it is determined whether Y is greater than or equal to 2N. For load balancing across the hardware cores, it is first considered that each cell scheduling thread group separately processes the user scheduling of one cell, and each cell is configured with one user data thread group to process user data.
  • in this case the cell scheduling thread groups occupy N hardware cores, and the remaining hardware cores can be used to process user data. Since Y - N ≥ N, that is, the number of remaining hardware cores is greater than or equal to the number of cells, all or some of the remaining hardware cores can be used to deploy user data thread groups, and each cell can then be configured with at least one user data thread group to process user data.
  • the remaining hardware cores need not all be used to deploy user data thread groups; they can be deployed according to the number of users that need to be supported.
  • each cell is configured with at least one hardware core processing a user data thread group.
  • otherwise (Y < 2N), the number of hardware cores M1 for processing cell MAC layer scheduling is determined, where M1 is an integer not less than N/2;
  • the number of hardware cores for processing user data is M2 = Y - M1.
  • when one cell scheduling thread group processes the user scheduling of two cells, at least N/2 hardware cores are needed to deploy the cell scheduling thread groups, and the remaining hardware cores are used to deploy user data thread groups.
  • if N/2 is not an integer, M1 is rounded up.
  • a hardware core that processes cell MAC layer scheduling may also serve two or more cells, that is, one cell scheduling thread group may also process the user scheduling of more than two cells; no restriction is imposed here.
  • if N ≤ Y - M1, the remaining hardware cores may be used in whole or in part for deploying user data thread groups, in which case each cell may be configured with at least one user data thread group to process user data.
  • if N > Y - M1, the remaining hardware cores may be used in whole or in part for deploying user data thread groups, and in this case one user data thread group processes the user data of at least one cell.
  • M1 may also be rounded down; no limitation is imposed herein.
  • the embodiment of the present invention further provides three methods for ensuring equalization of each hardware core load in the case where the number of users is large.
  • Method 1: According to the number of users accessed by each user data thread group, the user data thread group with the fewest accessed users is used as the user data thread group for a newly accessing user.
  • each user data thread group records the number S of users it has accessed; S is incremented by one when the user data thread group accepts a new user, and decremented by one when a user is released.
  • each time a new user accesses, the user data thread group with the smallest recorded user count S among the M2 user data thread groups is selected as the user data thread group of the new user.
  • this method is relatively simple and easy to implement.
  • the number of users processed by each user data thread group is basically the same, but because each user's service is different, the amount of data differs, and the load of the hardware cores may therefore be somewhat imbalanced.
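Method 1 amounts to least-connections selection; a minimal sketch (the function and dictionary names are ours):

```python
def pick_group_for_new_user(user_counts: dict[str, int]) -> str:
    """Return the user data thread group currently serving the fewest
    users (S); the caller increments S on attach and decrements it on
    release, per Method 1."""
    return min(user_counts, key=user_counts.get)
```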
  • Method 2: The user data thread group deployed on the hardware core with the smallest average CPU load is used as the user data thread group for a newly accessing user.
  • each user data thread group records the average CPU processing load of its associated hardware core over a duration of T seconds.
  • when a new user accesses, the user data thread group deployed on the hardware core with the smallest average CPU processing load is selected as the user data thread group of the new user. This method is more complex to implement, but it basically balances the load across the hardware cores.
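Method 2 replaces the user count with a measured load; a sketch, assuming each group's average core load over the last T seconds is already available (names are ours):

```python
def pick_group_by_load(avg_load: dict[str, float]) -> str:
    """Return the user data thread group whose hardware core had the
    smallest average CPU load over the measurement window (Method 2)."""
    return min(avg_load, key=avg_load.get)
```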
  • Method 3: The average CPU processing load of each hardware core over a set duration is recorded. If the difference between the maximum and minimum average CPU loads exceeds a set threshold, the users of the user data thread group deployed on the hardware core with the maximum average CPU load are moved to the user data thread group deployed on the hardware core with the minimum average CPU load.
  • the first and second methods only determine the user data thread group to which a new user is attached.
  • the size of each user's service data changes dynamically: although the hardware load is balanced at access time, as the data volume of each user's service changes, the hardware core load also changes dynamically. Therefore, it is necessary to dynamically adjust the user data thread group to which a user belongs.
  • each user data thread group records the real-time average CPU processing load of its associated hardware core over a duration of T seconds. If the difference between the maximum and minimum CPU loads exceeds a set threshold H, the users of the user data thread group deployed on the hardware core with the maximum average CPU load are moved to the user data thread group deployed on the hardware core with the minimum average CPU load. The above adjustment is then repeated so that the difference between the real-time CPU processing loads of the hardware cores of the user data thread groups does not exceed the set threshold. This method is the most complex, but it balances the load of the hardware cores in real time.
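One possible reading of Method 3 as code (a sketch; the document moves "the users" of the busiest group without saying how many, so moving all of them here is our simplification, and all names are ours):

```python
def rebalance(avg_load: dict[str, float], members: dict[str, set],
              threshold: float) -> None:
    """If the spread between the most- and least-loaded cores exceeds
    the threshold H, move the users of the group on the busiest core
    to the group on the least-loaded core (Method 3)."""
    busiest = max(avg_load, key=avg_load.get)
    idlest = min(avg_load, key=avg_load.get)
    if avg_load[busiest] - avg_load[idlest] > threshold:
        members[idlest] |= members[busiest]
        members[busiest].clear()
```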
  • the embodiment of the present invention provides a data processing method for an air interface protocol data plane.
  • through a reasonable allocation of hardware cores, the number of hardware cores processing MAC layer scheduling is M1, and the number of hardware cores processing user data is M2.
  • the multi-core processor thus allocates the two types of thread groups for parallel processing, meeting the high-throughput, multi-cell, multi-user requirements of the LTE air interface protocol stack while balancing the hardware core load and improving hardware resource utilization and system stability.
  • An embodiment of the present invention further provides a data processing method for an air interface protocol data plane.
  • a flowchart of a data processing method for an air interface protocol data plane according to an embodiment of the present invention includes:
  • Step 401 Acquire the total number of hardware cores Y of the base station and the number of cells N of the base station.
  • Step 402 Determine whether Y ≥ 2N holds. If yes, perform step 403; otherwise, perform step 404.
  • Step 404 Determine that the number of hardware cores for processing cell MAC layer scheduling is M1, where M1 is an integer not less than N/2, and continue with step 405.
  • Step 405 Determine whether N > Y - M1 holds. If yes, perform step 406; otherwise, perform step 407.
  • Step 406 Configure one user data thread group to process the user data of at least one cell.
  • Step 407 Configure at least one user data thread group to process the user data of one cell.
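Steps 405 to 407 decide how cells map onto the M2 user data thread groups; a round-robin sketch (the round-robin policy is our assumption; the document only requires that one group serves several cells when N > Y - M1):

```python
def map_cells_to_groups(num_cells: int, m2: int) -> dict[int, list[int]]:
    """Assign cells to the M2 user data thread groups round-robin:
    when N > M2 one group serves several cells (step 406); when
    N <= M2 each cell has at least one group of its own (step 407)."""
    mapping: dict[int, list[int]] = {g: [] for g in range(m2)}
    for cell in range(num_cells):
        mapping[cell % m2].append(cell)
    return mapping
```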
  • the embodiment of the present invention further provides a data processing device for the data plane of the air interface protocol.
  • the schematic diagram of the data processing device for the data plane of the air interface protocol provided by the embodiment of the present invention includes:
  • the obtaining unit 501 is configured to acquire a total number of hardware cores Y of the base station and a number of cells N that the base station needs to support;
  • the first determining unit 502 is configured to determine, according to the relationship between the number of cells N that the base station needs to support and the total number of hardware cores Y of the base station, the number of hardware cores M1 for processing cell MAC layer scheduling;
  • a second determining unit 503 configured to determine the number of hardware cores M2 for processing user data according to the total number of hardware cores Y of the base station and the number of hardware cores M1 for processing cell medium access control (MAC) layer scheduling, where M1 + M2 ≤ Y.
  • the first determining unit 502 is specifically configured to:
  • according to the principle that each cell occupies one hardware core for cell MAC layer scheduling and one hardware core for processing user data, determining whether Y is greater than or equal to 2N;
  • the first determining unit 502 is further configured to:
  • the number of hardware cores M1 for processing cell MAC layer scheduling is determined, where M1 is an integer not less than N/2;
  • the device further includes an adjusting unit 504, configured to:
  • the cell scheduling thread group is deployed on the M1 hardware cores that process the MAC layer scheduling of the cell, and the user data thread group is deployed on the M2 hardware cores that process the user data;
  • the deployment of the user accessing the user data thread group is adjusted according to the load of the hardware core processing the user data.
  • the adjusting unit 504 is further configured to:
  • the user data thread group with the fewest accessed users is used as the user data thread group for a newly accessing user;
  • the user data thread group deployed on the hardware core with the smallest average CPU load is used as the user data thread group for a newly accessing user;
  • the average CPU load of each hardware core over a set time period is recorded; if the difference between the maximum and minimum average CPU loads exceeds a set threshold, the users of the user data thread group deployed on the hardware core with the maximum average CPU load are moved to the user data thread group deployed on the hardware core with the minimum average CPU load.
  • An embodiment of the present invention provides a data processing apparatus for an air interface protocol data plane, which acquires the total number of hardware cores Y of a base station and the number of cells N that the base station needs to support; determines, according to the relationship between the number of cells N that the base station needs to support and the total number of hardware cores Y of the base station, the number of hardware cores M1 for processing cell MAC layer scheduling; and determines the number of hardware cores M2 for processing user data, where M1 + M2 ≤ Y.
  • the multi-core processor thus allocates the two types of thread groups for parallel processing, meeting the high-throughput, multi-cell, multi-user requirements of the LTE air interface protocol stack while balancing the hardware core load and improving hardware resource utilization and system stability.
  • each unit involved in the above embodiments is a logical unit.
  • a logical unit may be implemented as one physical unit, a part of a physical unit, or a combination of multiple physical units.
  • this embodiment does not introduce units that are not closely related to solving the technical problem proposed by the present application, but this does not indicate that no other units exist in this embodiment.
  • the embodiment of the present application further provides a base station, as shown in FIG. 6, including: at least one processor 600; a transceiver 610; and a memory 620 communicatively coupled to the at least one processor 600.
  • the base station may be the data processing apparatus for the air interface protocol data plane in the foregoing embodiments, and the steps performed by the respective functional units in that data processing apparatus may be performed by the processor in the base station provided by the embodiment of the present invention.
  • the processor 600 is configured to read a program in the memory 620 and perform the following processes:
  • the processor 600 can:
  • according to the principle that each cell occupies one hardware core for cell MAC layer scheduling and one hardware core for processing user data, determining whether Y is greater than or equal to 2N;
  • processor 600 is further capable of:
  • otherwise, the number of hardware cores M1 for processing cell MAC-layer scheduling is determined, where M1 is an integer not less than N/2;
  • the number of hardware cores M2 for processing user data is M2 = Y − M1.
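The core-allocation rule above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the function name and the choice of the smallest admissible M1 in the scarce-core branch are our own assumptions, and in the Y ≥ 2N branch we read "M2 = Y − M1" as dedicating all remaining cores to user data.

```python
import math

def allocate_cores(total_cores: int, num_cells: int) -> tuple[int, int]:
    """Split Y hardware cores into M1 cores for cell MAC-layer
    scheduling and M2 cores for user-data processing, so that
    M1 + M2 <= Y (here, M1 + M2 == Y)."""
    if total_cores >= 2 * num_cells:
        # Enough cores: one MAC-scheduling core per cell; the
        # remaining cores all process user data.
        m1 = num_cells
    else:
        # Scarce cores: M1 is an integer not less than N/2
        # (we take the smallest such value as an illustration).
        m1 = math.ceil(num_cells / 2)
    m2 = total_cores - m1
    return m1, m2
```

For example, with Y = 8 cores and N = 3 cells, `allocate_cores(8, 3)` yields (3, 5); with Y = 4 and N = 6, it yields (3, 1).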
  • processor 600 is further capable of:
  • the cell scheduling thread group is deployed on the M1 hardware cores that process the MAC layer scheduling of the cell, and the user data thread group is deployed on the M2 hardware cores that process the user data;
  • the assignment of accessing users to user data thread groups is adjusted according to the load of the hardware cores processing user data.
  • the processor 600 can:
  • the user data thread group with the least number of accessed users is used as the user data thread group of the new access user;
  • the user data thread group deployed on the hardware core with the smallest CPU average load is used as the user data thread group of the new access user;
  • the average CPU load of each hardware core over a set duration is recorded; if the difference between the maximum and minimum average CPU loads exceeds a set threshold, users of the user data thread group deployed on the hardware core with the maximum average CPU load are migrated to the user data thread group deployed on the hardware core with the minimum average CPU load.
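The three user-assignment policies above can be sketched as follows. This is an illustrative model, not the patent's code: the `ThreadGroup` structure is hypothetical, and where the patent does not say how many users to migrate at once, the rebalance sketch moves a single user per invocation.

```python
from dataclasses import dataclass, field

@dataclass
class ThreadGroup:
    core_id: int                       # hardware core this group is pinned to
    users: set = field(default_factory=set)
    avg_cpu_load: float = 0.0          # average CPU load over the set duration

def pick_group_fewest_users(groups):
    # Policy 1: a new user joins the thread group with the fewest users.
    return min(groups, key=lambda g: len(g.users))

def pick_group_lowest_load(groups):
    # Policy 2: a new user joins the group on the least-loaded core.
    return min(groups, key=lambda g: g.avg_cpu_load)

def rebalance(groups, threshold: float):
    # Policy 3: if the gap between the most- and least-loaded cores
    # exceeds the set threshold, migrate a user from the hot group
    # to the cold one. Returns the migrated user, or None.
    hot = max(groups, key=lambda g: g.avg_cpu_load)
    cold = min(groups, key=lambda g: g.avg_cpu_load)
    if hot.avg_cpu_load - cold.avg_cpu_load > threshold and hot.users:
        user = hot.users.pop()
        cold.users.add(user)
        return user
    return None
```

For instance, with one group at 80% load holding two users and another at 20% holding one, both selection policies pick the second group, and `rebalance(groups, 0.5)` moves one user from the first group to the second.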
  • the transceiver 610 is configured to receive and transmit data under the control of the processor 600.
  • the bus architecture can include any number of interconnected buses and bridges, linking together various circuits including one or more processors represented by processor 600 and memory represented by memory 620.
  • the bus architecture can also link various other circuits such as peripherals, voltage regulators, and power management circuits, which are well known in the art and, therefore, will not be further described herein.
  • the bus interface provides an interface.
  • Transceiver 610 can be a plurality of components, including a transmitter and a receiver, providing means for communicating with various other devices on a transmission medium.
  • the processor 600 is responsible for managing the bus architecture and general processing, and the memory 620 can store data used by the processor 600 in performing operations.
  • the processor 600 may be a CPU (Central Processing Unit), an ASIC (Application Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array), or a CPLD (Complex Programmable Logic Device).
  • the present application provides a non-volatile computer storage medium storing computer-executable instructions for causing a computer to execute the data processing method for the air interface protocol data plane in any of the above embodiments.
  • the present application provides a computer program product comprising a computer program stored on a non-transitory computer-readable storage medium, the computer program comprising computer-executable instructions which, when executed by a computer, cause the computer to perform the data processing method for the air interface protocol data plane in any of the above embodiments.
  • the device embodiments described above are merely illustrative; the units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units, i.e., they may be located in one place or distributed across multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the embodiment, which those of ordinary skill in the art can understand and implement without creative effort.
  • the computer program instructions can also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • these computer program instructions can also be loaded onto a computer or other programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.

Abstract

The present invention relates to a data processing method and apparatus for an air interface protocol data plane. The method comprises: acquiring the total number of hardware cores Y of a base station and the number of cells N that the base station needs to support; determining, based on the relationship between the number of cells N and the total number of hardware cores Y, the number of hardware cores M1 for processing cell MAC-layer scheduling; and determining, based on the total number of hardware cores Y and the number M1, the number of hardware cores M2 for processing user data, where M1 + M2 ≤ Y. The total number of hardware cores is allocated appropriately, specifically divided into M1 hardware cores for cell MAC-layer scheduling and M2 hardware cores for user-data processing. Partitioned, parallel processing on the multi-core processor achieves the goals of high throughput, multiple cells, and multiple users on the data plane of the LTE air interface protocol stack.
PCT/CN2017/118081 2017-01-19 2017-12-22 Procédé et appareil de traitement de données pour un plan de données de protocole d'interface radio WO2018133625A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710044159.7A CN106851667B (zh) 2017-01-19 2017-01-19 一种针对空口协议数据面的数据处理方法及装置
CN201710044159.7 2017-01-19

Publications (1)

Publication Number Publication Date
WO2018133625A1 true WO2018133625A1 (fr) 2018-07-26

Family

ID=59119247

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/118081 WO2018133625A1 (fr) 2017-01-19 2017-12-22 Procédé et appareil de traitement de données pour un plan de données de protocole d'interface radio

Country Status (2)

Country Link
CN (1) CN106851667B (fr)
WO (1) WO2018133625A1 (fr)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111190728B (zh) * 2019-12-13 2023-08-25 北京山石网科信息技术有限公司 资源调整方法及装置
CN112612221A (zh) * 2020-10-31 2021-04-06 泰州物族信息科技有限公司 应用spi通信的智能化控制平台及方法
CN117149373A (zh) * 2021-06-17 2023-12-01 安科讯(福建)科技有限公司 一种mac层的数据调度方法及终端
CN113535401A (zh) * 2021-07-19 2021-10-22 大唐网络有限公司 5g通讯中rlc层的数据处理方法、装置、系统以及处理器

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7620753B1 (en) * 2005-03-17 2009-11-17 Apple Inc. Lockless access to a ring buffer
CN103154897A (zh) * 2010-10-14 2013-06-12 阿尔卡特朗讯公司 用于电信网络应用的核抽象层
CN103838552A (zh) * 2014-03-18 2014-06-04 北京邮电大学 4g宽带通信系统多核并行流水线信号的处理系统和方法
CN103906257A (zh) * 2014-04-18 2014-07-02 北京邮电大学 基于gpp的lte宽带通信系统计算资源调度器及其调度方法

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100508499C (zh) * 2006-11-02 2009-07-01 杭州华三通信技术有限公司 实现自适应调度的多内核处理器及多内核处理方法
FI20085217A0 (fi) * 2008-03-07 2008-03-07 Nokia Corp Tietojenkäsittelyjärjestely
CN102868643B (zh) * 2012-08-31 2015-06-17 苏州简约纳电子有限公司 一种lte数据面装置
CN105827654A (zh) * 2016-05-26 2016-08-03 西安电子科技大学 基于gmr-1 3g系统多核并行协议栈结构设计方法


Also Published As

Publication number Publication date
CN106851667B (zh) 2019-07-02
CN106851667A (zh) 2017-06-13

Similar Documents

Publication Publication Date Title
US10708940B2 (en) Method and apparatus for reporting buffer state by user equipment in communication system
CN106961741B (zh) 一种上行资源分配方法和装置
WO2018133625A1 (fr) Procédé et appareil de traitement de données pour un plan de données de protocole d'interface radio
CN105557014B (zh) 触发和报告缓冲器状态的方法及其设备
CN110691419B (zh) 平衡用于双连接的上行链路传输
CN107484183B (zh) 一种分布式基站系统、cu、du及数据传输方法
CN103222320B (zh) 一种载波聚合调度装置、载波聚合调度方法和基站
CN104640223B (zh) 一种上报bsr的方法、基站和终端
CN107277856A (zh) 一种配置、触发缓冲区状态上报的方法及装置
CN104854940B (zh) 用户装置以及发送控制方法
CN107359968B (zh) 一种单层序列号的数据传输方法及装置
CN106605441A (zh) 无线通信系统中的低延迟、低带宽和低占空比操作
CN106105307B (zh) 用户装置以及上行链路数据发送方法
CN105027511B (zh) 在无线通信中用于并行化分组处理的方法和系统
CN105813213B (zh) 双连接方案中传输数据的方法、基站和系统
CN107249197A (zh) 一种缓冲区状态上报的方法、系统和设备
WO2018219229A1 (fr) Procédé de communication et dispositif de réseau
CN108811154A (zh) 数据包传输方法和设备
CN108616909A (zh) 数据传输方法及装置
TW200931853A (en) Method and apparatus for performing buffer status reporting
CN107113862A (zh) 用于无线接入的网络功能的灵活分配
US20230103808A1 (en) Method and apparatus for scheduling in wireless communication system
CN110324902A (zh) 通信方法、通信装置和系统
CN102868643A (zh) 一种lte数据面软件架构
WO2014117347A1 (fr) Procédé et appareil de planification de données

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17893261

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 25/11/2019)

122 Ep: pct application non-entry in european phase

Ref document number: 17893261

Country of ref document: EP

Kind code of ref document: A1