CN113407320A - MAC (media Access control) layer scheduling method and terminal based on 5G small base station - Google Patents
- Publication number
- CN113407320A (application CN202110669640.1A, filed 2021-06-17)
- Authority
- CN
- China
- Prior art keywords
- scheduling
- processing
- threads
- processed
- thread
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/30—Arrangements for executing machine instructions, e.g. instruction decode
- G06F9/38—Concurrent instruction execution, e.g. pipeline or look ahead
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
- G06F9/505—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D30/00—Reducing energy consumption in communication networks
- Y02D30/70—Reducing energy consumption in communication networks in wireless communication networks
Abstract
The invention discloses a MAC (Media Access Control) layer scheduling method and terminal based on a 5G small base station. A corresponding scheduling event is generated according to the scheduling relationship between each uplink and downlink physical channel; scheduling events that can be processed concurrently are configured into different threads for concurrent processing, while scheduling events that can only be processed serially are configured into the same thread for serial processing. When the serially processing thread would overrun its deadline, the most time-consuming serial scheduling events that cannot be processed concurrently are distributed across multiple threads: in a manner similar to a multi-stage pipeline, the original processing flow is divided into several parts of similar execution time, each handled by an additional thread with a delay. Because threads are added relative to the original flow, the number of users scheduled in the same time increases correspondingly once a large number of user equipments have accessed the cell. The number of mobile devices the small base station processes in each scheduling period is thereby increased without changing the CPU's processing capability, and the overall scheduling rate is improved.
Description
Technical Field
The invention relates to the field of mobile communication, and in particular to a MAC layer scheduling method and terminal based on a 5G small base station.
Background
With the increase in bandwidth, the data processed by NR (New Radio) per TTI (Transmission Time Interval) grows roughly tenfold compared to LTE (Long Term Evolution), while the scheduling period of each TTI shrinks from 1 ms to 1 slot, so a larger amount of data and scheduling must be handled in a shorter time.
However, because a single CPU core of an NR small base station has limited processing performance, the number of user equipments (UEs) that can be scheduled per TTI is small, and it is difficult to process a large amount of data in a short time.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a MAC layer scheduling method and terminal based on a 5G small base station that increase the number of mobile devices the small base station processes in each scheduling period and improve the overall scheduling rate.
In order to solve the technical problems, the invention adopts the technical scheme that:
a MAC layer scheduling method based on a 5G small base station comprises the following steps:
generating a corresponding scheduling event according to the scheduling relationship between each uplink physical channel and each downlink physical channel in the MAC layer;
distributing scheduling events that can be processed concurrently to different threads for concurrent processing, distributing scheduling events that can only be processed serially to the same thread for serial processing, and binding each thread to a different CPU core;
and distributing the serially processed scheduling events to a plurality of independent threads for delayed processing.
In order to solve the technical problem, the invention adopts another technical scheme as follows:
a MAC layer scheduling terminal based on a 5G small cell, comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the following steps when executing the computer program:
generating a corresponding scheduling event according to a scheduling relation between each uplink physical channel and each downlink physical channel in an MAC layer;
distributing scheduling events which can be processed concurrently in different threads for concurrent processing, distributing scheduling events which can only be processed serially in the same thread for serial processing, and binding each thread in different CPU cores;
and distributing the scheduling events processed in series to a plurality of independent threads for delayed processing.
The invention has the following beneficial effects: a corresponding scheduling event is generated according to the scheduling relationship between each uplink and downlink physical channel; scheduling events that can be processed concurrently are configured into different threads for concurrent processing, while events that can only be processed serially are configured into the same thread for serial processing. When the serially processing thread would overrun, the most time-consuming serial scheduling events that cannot be processed concurrently are distributed across multiple threads: in a manner similar to a multi-stage pipeline, the original processing flow is divided into several parts of similar execution time, each handled by an additional thread with a delay. Because threads are added relative to the original flow, the number of users scheduled in the same time increases correspondingly once a large number of user equipments have accessed the cell. Thus, by introducing multiple threads and multiple cores, the amount of data processed per unit time is raised without changing the CPU's processing capability, the number of mobile devices the small base station processes in each scheduling period is increased, and the overall scheduling rate is improved.
Drawings
Fig. 1 is a flowchart of a MAC layer scheduling method based on a 5G small base station according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a MAC layer scheduling terminal based on a 5G small base station according to an embodiment of the present invention;
fig. 3 is a schematic diagram of MAC layer scheduling in a MAC layer scheduling method based on a 5G small base station according to an embodiment of the present invention;
fig. 4 is a schematic diagram of the MAC layer scheduled serially in time order in a MAC layer scheduling method based on a 5G small base station according to an embodiment of the present invention;
fig. 5 is a schematic diagram of the relationship between the abstracted serial steps and the actual MAC layer flow in a MAC layer scheduling method based on a 5G small base station according to an embodiment of the present invention;
FIG. 6 is a diagram of the single-thread model for MAC layer scheduling in the prior art;
fig. 7 is a schematic diagram of the two-thread model of a MAC layer scheduling method based on a 5G small base station according to an embodiment of the present invention.
Detailed Description
In order to explain technical contents, achieved objects, and effects of the present invention in detail, the following description is made with reference to the accompanying drawings in combination with the embodiments.
Referring to fig. 1, an embodiment of the present invention provides a MAC layer scheduling method based on a 5G small cell, including:
generating a corresponding scheduling event according to the scheduling relationship between each uplink physical channel and each downlink physical channel in the MAC layer;
distributing scheduling events that can be processed concurrently to different threads for concurrent processing, distributing scheduling events that can only be processed serially to the same thread for serial processing, and binding each thread to a different CPU core;
and distributing the serially processed scheduling events to a plurality of independent threads for delayed processing.
From the above description, the beneficial effects of the present invention are as follows: a corresponding scheduling event is generated according to the scheduling relationship between each uplink and downlink physical channel; events that can be processed concurrently are configured into different threads for concurrent processing, while events that can only be processed serially are configured into the same thread for serial processing. When the serially processing thread would overrun, the most time-consuming serial scheduling events that cannot be processed concurrently are distributed across multiple threads: in a manner similar to a multi-stage pipeline, the original flow is divided into several parts of similar execution time, each handled by an additional thread with a delay. The number of users scheduled in the same time thus increases correspondingly once a large number of user equipments have accessed, so the amount of data processed per unit time rises without changing the CPU's processing capability, the number of mobile devices the small base station processes in each scheduling period increases, and the overall scheduling rate improves.
Further, distributing the concurrently processable scheduling events to different threads for concurrent processing comprises:
distributing the concurrently processable scheduling events to different threads;
and synchronizing the different threads in time on a slot basis, then processing the synchronized threads concurrently.
As described above, distributing the concurrently processable scheduling events to different threads and slot-synchronizing those threads guarantees that such events are processed concurrently within the same slot, improving scheduling-event processing efficiency.
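No code appears in the patent; the slot-based synchronization described above can be sketched as the following minimal Python illustration, in which a `threading.Barrier` stands in for the PHY slot-indication message and the event names (`ULSCH`, `RACH`) and per-slot work functions are hypothetical:

```python
import threading

class SlotSync:
    """Align concurrent scheduling threads on slot boundaries: each thread
    processes its own event type for the slot, then waits at a barrier so
    no thread runs ahead into the next slot (mimicking slot indication)."""

    def __init__(self, n_threads):
        self.barrier = threading.Barrier(n_threads)
        self.lock = threading.Lock()
        self.results = []            # (slot, event_name, output) records

    def run_slot(self, name, slot, work):
        out = work(slot)             # process this thread's scheduling event
        with self.lock:
            self.results.append((slot, name, out))
        self.barrier.wait()          # slot-time synchronization point

def run_demo(n_slots=3):
    sync = SlotSync(2)
    def worker(name, work):
        for slot in range(n_slots):
            sync.run_slot(name, slot, work)
    threads = [
        threading.Thread(target=worker, args=("ULSCH", lambda s: f"decode@{s}")),
        threading.Thread(target=worker, args=("RACH", lambda s: f"alloc@{s}")),
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sync.results
```

Because both threads must reach the barrier before either proceeds, every slot-k record precedes every slot-(k+1) record, which is exactly the "processed concurrently in the same slot" guarantee.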
Further, distributing the serially processed scheduling events to a plurality of independent threads for delayed processing comprises:
acquiring the processing time of the serially processed scheduling events;
dividing the serially processing thread, according to that processing time, into a plurality of serial sub-threads each with a processing time of at most one slot;
and allocating the resulting serial sub-threads to different CPU cores for delayed processing.
As described above, dividing the serially processing thread into serial sub-threads whose processing time is at most one slot guarantees that the serial processing completes within one slot, avoiding a missed air-interface transmission opportunity caused by processing time exceeding one slot.
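The division of the serial flow into sub-threads that each fit within one slot can be sketched as a greedy packing of measured step times. This is a hypothetical illustration — the patent does not specify a partitioning algorithm, and the microsecond durations are invented:

```python
def split_steps(step_times_us, slot_us):
    """Greedily pack consecutive serial steps into sub-thread stages whose
    total processing time stays within one slot. Returns a list of stages,
    each a list of step indices."""
    stages, cur, cur_t = [], [], 0
    for i, t in enumerate(step_times_us):
        if t > slot_us:
            raise ValueError(f"step {i} alone exceeds one slot")
        if cur_t + t > slot_us:      # current stage would overrun the slot
            stages.append(cur)
            cur, cur_t = [], 0
        cur.append(i)
        cur_t += t
    if cur:
        stages.append(cur)
    return stages
```

For example, with step times `[200, 150, 180, 100, 220]` µs and a 500 µs slot, the flow splits into two stages, each fitting within one slot.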
Further, allocating the serial sub-threads obtained after the division to different CPU cores for delayed processing comprises:
placing the plurality of serial sub-threads into corresponding independent threads for processing, and judging whether a serial sub-thread is first in the processing order: if so, receiving data from the air interface and the upper-layer service and processing it; if not, acquiring the data from a preset queue and processing it;
and judging whether the serial sub-thread is last in the processing order: if so, sending the processing result over the air interface; if not, storing the processing result in the preset queue.
As described above, by storing each sub-thread's processing result in a preset queue from which the next sub-thread acquires its data, pipeline-style scheduling is achieved; the processing time of each serial sub-thread is reduced, the number of user devices that can be processed increases, and the overall scheduling rate rises.
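The preset-queue handoff can be sketched as follows; this is an assumed minimal model in which the stage functions are placeholders, the input list stands in for data received from the air interface and upper layer, and the returned list stands in for data sent over the air interface:

```python
import queue
import threading

SENTINEL = object()  # end-of-stream marker passed down the pipeline

def run_pipeline(stage_fns, inputs):
    """Chain serial sub-stages with preset queues: stage 0 consumes the
    input stream, every non-final stage parks its result in the next
    preset queue, and the final stage collects ("sends") its output."""
    n = len(stage_fns)
    qs = [queue.Queue() for _ in range(n - 1)]   # one preset queue per handoff
    sent = []                                    # stands in for air-interface TX

    def stage(i):
        while True:
            item = in_q.get() if i == 0 else qs[i - 1].get()
            if item is SENTINEL:
                if i < n - 1:
                    qs[i].put(SENTINEL)          # propagate shutdown downstream
                return
            out = stage_fns[i](item)
            if i < n - 1:
                qs[i].put(out)                   # not last: store in preset queue
            else:
                sent.append(out)                 # last: send over the air interface

    in_q = queue.Queue()
    for x in inputs:
        in_q.put(x)
    in_q.put(SENTINEL)
    threads = [threading.Thread(target=stage, args=(i,)) for i in range(n)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sent
```

Because each stage is a single thread feeding a FIFO queue, output order matches input order while the stages overlap in time, which is the pipeline behaviour the paragraph describes.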
Further, the method also comprises:
scheduling in advance the thread corresponding to the scheduling event that acquires the resources required for scheduling, and allocating those resources;
and, after that advance scheduling, scheduling the resource-acquiring event once in every slot.
As described above, because the resources required for scheduling are periodic, fixed, or predictable, scheduling the corresponding thread in advance and pre-allocating the resources reduces subsequent scheduling time.
Referring to fig. 2, another embodiment of the present invention provides a MAC layer scheduling terminal based on a 5G small base station, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the following steps when executing the computer program:
generating a corresponding scheduling event according to the scheduling relationship between each uplink physical channel and each downlink physical channel in the MAC layer;
distributing scheduling events that can be processed concurrently to different threads for concurrent processing, distributing scheduling events that can only be processed serially to the same thread for serial processing, and binding each thread to a different CPU core;
and distributing the serially processed scheduling events to a plurality of independent threads for delayed processing.
As can be seen from the above description, a corresponding scheduling event is generated according to the scheduling relationship between each uplink and downlink physical channel; events that can be processed concurrently are configured into different threads for concurrent processing, while events that can only be processed serially are configured into the same thread for serial processing. When the serially processing thread would overrun, the most time-consuming serial scheduling events that cannot be processed concurrently are distributed across multiple threads: in a manner similar to a multi-stage pipeline, the original flow is divided into several parts of similar execution time, each handled by an additional thread with a delay. The number of users scheduled in the same time thus increases correspondingly once a large number of user equipments have accessed, so the amount of data processed per unit time rises without changing the CPU's processing capability, the number of mobile devices the small base station processes in each scheduling period increases, and the overall scheduling rate improves.
Further, distributing the concurrently processable scheduling events to different threads for concurrent processing comprises:
distributing the concurrently processable scheduling events to different threads;
and synchronizing the different threads in time on a slot basis, then processing the synchronized threads concurrently.
As described above, distributing the concurrently processable scheduling events to different threads and slot-synchronizing those threads guarantees that such events are processed concurrently within the same slot, improving scheduling-event processing efficiency.
Further, distributing the serially processed scheduling events to a plurality of independent threads for delayed processing comprises:
acquiring the processing time of the serially processed scheduling events;
dividing the serially processing thread, according to that processing time, into a plurality of serial sub-threads each with a processing time of at most one slot;
and allocating the resulting serial sub-threads to different CPU cores for delayed processing.
As described above, dividing the serially processing thread into serial sub-threads whose processing time is at most one slot guarantees that the serial processing completes within one slot, avoiding a missed air-interface transmission opportunity caused by processing time exceeding one slot.
Further, allocating the serial sub-threads obtained after the division to different CPU cores for delayed processing comprises:
placing the plurality of serial sub-threads into corresponding independent threads for processing, and judging whether a serial sub-thread is first in the processing order: if so, receiving data from the air interface and the upper-layer service and processing it; if not, acquiring the data from a preset queue and processing it;
and judging whether the serial sub-thread is last in the processing order: if so, sending the processing result over the air interface; if not, storing the processing result in the preset queue.
As described above, by storing each sub-thread's processing result in a preset queue from which the next sub-thread acquires its data, pipeline-style scheduling is achieved; the processing time of each serial sub-thread is reduced, the number of user devices that can be processed increases, and the overall scheduling rate rises.
Further, the steps also comprise:
scheduling in advance the thread corresponding to the scheduling event that acquires the resources required for scheduling, and allocating those resources;
and, after that advance scheduling, scheduling the resource-acquiring event once in every slot.
As described above, because the resources required for scheduling are periodic, fixed, or predictable, scheduling the corresponding thread in advance and pre-allocating the resources reduces subsequent scheduling time.
The MAC layer scheduling method and terminal based on the 5G small base station according to the present invention apply to scheduling the MAC layer with multiple concurrent threads when the processing capability of a single CPU in an NR small base station is insufficient; they increase the number of user equipments scheduled in each transmission time interval and thereby the overall scheduling efficiency. Specific embodiments are described below:
example one
Referring to fig. 1, a MAC layer scheduling method based on a 5G small cell includes the steps of:
and S1, generating a corresponding scheduling event according to the scheduling relation between each uplink physical channel and each downlink physical channel in the MAC layer.
Specifically, referring to fig. 3, the 5G base station has multiple kinds of scheduling information to process and generates corresponding scheduling events according to the scheduling relationship between uplink and downlink channels. In this embodiment, the ULSCH carries uplink data and requires MAC decoding; the RACH (Random Access Channel) is the random access channel, for which CCEs (Control Channel Elements) and uplink RBs (Resource Blocks) must be allocated during scheduling; the CRC result drives the uplink HARQ (Hybrid Automatic Repeat reQuest) process, and together with the BSR (Buffer Status Report) or SR (Scheduling Request) information the UE is scheduled for retransmission or new transmission; SRS (Sounding Reference Signal) information is used to determine codebook-based or frequency-selective scheduling of the UE; the RLC SDU (Radio Link Control Service Data Unit) is downlink data delivered by the upper-layer service, HARQ feedback is reported on the PUCCH (Physical Uplink Control Channel), and both are used for downlink retransmission or new transmission of the UE; CSI (Channel State Information) is used to determine the configuration of downlink transmission, such as the MCS (Modulation and Coding Scheme), PMI (Precoding Matrix Indicator), and layer count.
Referring to fig. 4, in the prior art the MAC layer of a 5G small base station generally schedules each scheduling event serially in event order. This approach processes the whole flow in a single thread on a single CPU, so the time consumed by these flows must stay below one slot; if it exceeds one slot, the air-interface transmission opportunity is missed and data transmission fails. Since the CPU used in a small base station generally has relatively poor performance, MAC scheduling time is controlled by limiting the number of UEs processed in each slot.
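The single-thread slot budget described above can be illustrated with a back-of-the-envelope calculation; the per-UE and fixed-overhead figures below are invented for illustration, not taken from the patent:

```python
def max_ues_per_slot(per_ue_us, fixed_us, slot_us):
    """Number of UEs a single-threaded scheduler can serve per slot before
    the serial flow overruns the slot and the air-interface transmit
    opportunity is missed. All timing figures are illustrative."""
    budget_us = slot_us - fixed_us           # time left after fixed per-slot work
    return max(budget_us // per_ue_us, 0)    # whole UEs that fit in the budget
```

With an assumed 500 µs slot, 200 µs of fixed per-slot processing, and 120 µs per UE, only two UEs fit — comparable in spirit to the two-UE-per-TTI limitation used as the baseline in the second embodiment.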
S2, distributing scheduling events that can be processed concurrently to different threads for concurrent processing, distributing scheduling events that can only be processed serially to the same thread for serial processing, and binding each thread to a different CPU core.
Here, distributing the concurrently processable scheduling events to different threads for concurrent processing includes:
distributing the concurrently processable scheduling events to different threads;
and synchronizing the different threads in time on a slot basis, then processing the synchronized threads concurrently.
Specifically, in this embodiment the scheduling modules are allocated to different threads, each thread is bound to a different CPU core, and slot time synchronization among the threads is performed through the slot indication message of the PHY (Physical layer).
The method further comprises: scheduling in advance the thread corresponding to the scheduling event that acquires the resources required for scheduling, and allocating those resources;
and, after that advance scheduling, scheduling the resource-acquiring event once in every slot.
Specifically, the common channel information is periodic, fixed, or predictable, so it can be scheduled N slots in advance with CCEs and RBs pre-allocated; initially the common channel is scheduled 3 slots ahead, and thereafter it is scheduled once per slot.
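This advance scheduling of the common channel can be sketched as follows — a hypothetical model in which the allocation itself (2 CCEs, 4 RBs per slot) is a placeholder, not a value from the patent:

```python
class CommonChannelScheduler:
    """Pre-schedule the periodic/predictable common channel several slots
    ahead: prime `lead` slots at start-up, then top up one slot per tick,
    so allocations are always ready when a slot arrives."""

    def __init__(self, lead=3):
        self.lead = lead
        self.prepared = {}               # slot -> pre-allocated resources
        for s in range(lead):            # initial priming, 3 slots ahead
            self.prepared[s] = self.allocate(s)
        self.next = lead

    def allocate(self, slot):
        # Hypothetical fixed allocation standing in for CCE/RB assignment.
        return {"cce": 2, "rb": 4, "slot": slot}

    def tick(self, current_slot):
        """Called once per slot: schedule one more future slot in advance
        and hand back the allocation prepared earlier for this slot."""
        self.prepared[self.next] = self.allocate(self.next)
        self.next += 1
        return self.prepared.pop(current_slot)
```

The slot's allocation is thus never computed on the critical path — it was produced `lead` slots earlier, which is the claimed reduction in scheduling time.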
Referring to fig. 5, in this embodiment the PUSCH uplink decoding uses an independent processing thread; the remaining scheduling events are processed in serial steps 1 to N.
S3, distributing the serially processed scheduling events to a plurality of independent threads for delayed processing.
The serially processed scheduling events are divided into several steps; each step is allocated to a different thread for delayed processing, and each thread in the serial chain runs once per slot in the serial-processing order. Placing the serial events in different threads of different slots in this way scales better and further improves the efficiency of concurrent processing. Since the common-channel processing thread is scheduled once per slot, an uplink decoding step is added correspondingly in each slot to decode and acquire the information in the common channel.
The concurrently processable scheduling events are distributed to different threads for concurrent processing, and each thread's time consumption is low; as long as no thread's maximum time consumption in a slot exceeds one slot, the number of UEs processed per TTI can be increased.
Example two
This embodiment differs from the first in that it further defines how the serially processed scheduling events are allocated to multiple threads for delayed processing:
Specifically, distributing the serially processed scheduling events to a plurality of independent threads for delayed processing comprises:
acquiring the processing time of the serially processed scheduling events;
dividing the serially processing thread, according to that processing time, into a plurality of serial sub-threads each with a processing time of at most one slot;
and allocating the resulting serial sub-threads to different CPU cores for delayed processing.
In this embodiment, the serially processing thread is divided, according to the processing time of the serially processed scheduling events, into serial sub-threads each taking at most one slot. For example, the steps to be processed in each TTI of the base station are abstracted as serial steps 1 to N; because the total processing time of the N steps is long, they are divided into two parts of similar execution time, the first comprising steps 1 to N1 and the second comprising steps N2 to N, where N2 = N1 + 1, so that each part's total processing time falls within one slot.
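Choosing the cut point N1 so the two parts have similar total execution time can be sketched as a scan over all contiguous cuts; this is an assumed approach, since the patent only requires "similar execution time" without fixing an algorithm:

```python
def balanced_split(step_times):
    """Split a list of serial step times into two contiguous parts
    (steps 1..N1 and N2..N with N2 = N1 + 1) whose totals are as
    close as possible."""
    total = sum(step_times)
    best_cut, best_diff, prefix = 1, float("inf"), 0
    # Try every cut after step i; diff = |second total - first total|.
    for i, t in enumerate(step_times[:-1], start=1):
        prefix += t
        diff = abs(total - 2 * prefix)
        if diff < best_diff:
            best_cut, best_diff = i, diff
    return step_times[:best_cut], step_times[best_cut:]
```

For invented step times `[100, 200, 150, 160]`, the best cut is after the second step, giving parts of 300 and 310 — "similar execution time" in the sense the embodiment describes.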
Specifically, the plurality of serial sub-threads are placed on independent threads bound to independent CPU cores for processing, and whether a serial sub-thread is first in the processing order is judged: if so, data is received from the air interface and the upper-layer service and processed; if not, data is acquired from the preset queue and processed;
whether the serial sub-thread is last in the processing order is then judged: if so, the processing result is sent over the air interface; if not, the processing result is stored in the preset queue.
Referring to fig. 5, the periodic and predictable channels such as the common channel are scheduled several slots in advance; uplink decoding is carried out on each TTI by one thread; the processing of the remaining channels is handled in serial steps 1 to N.
Referring to fig. 6, before this embodiment is implemented, each TTI receives a downlink data packet from an upper layer service and an uplink data packet from a physical layer in a single thread according to a time sequence, then performs all serial steps, and sends related data to the physical layer and then sends the related data out through an air interface by the physical layer after the processing is completed. Suppose that only 2 UEs can be handled per TTI.
Referring to fig. 7, after this embodiment is implemented, the serial steps are split into two parts with similar execution times, where the first part comprises steps 1 to N1 and the second part comprises steps N2 to N. The two parts for each user equipment are executed in two separate threads, and each thread is bound to a separate CPU core, so that the two parts can run simultaneously at any time.
Specifically, in TTI 1, thread 2 schedules the first part for UEs 1 to 4;
in TTI 2, thread 1 schedules the second part for UEs 1 to 4, and thread 2 schedules the first part for UEs 5 to 8;
and so on: in each subsequent TTI, thread 1 schedules the second part for the UEs of the previous TTI, and thread 2 schedules the first part for the UEs of the current TTI.
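The TTI-by-TTI assignment above can be tabulated by a small helper. This is illustrative Python, assuming 4 UEs enter the pipeline per TTI as in the figures; the function name is hypothetical:

```python
def pipeline_schedule(tti, ues_per_tti=4):
    """Return what each thread works on in a given TTI (1-based):
    thread 2 runs part one for the current TTI's UEs, and
    thread 1 runs part two for the previous TTI's UEs."""
    first = f"part 1, UEs {(tti - 1) * ues_per_tti + 1}-{tti * ues_per_tti}"
    second = (None if tti == 1 else
              f"part 2, UEs {(tti - 2) * ues_per_tti + 1}-{(tti - 1) * ues_per_tti}")
    return {"thread 1": second, "thread 2": first}
```

For example, in TTI 2 thread 1 finishes part two for UEs 1 to 4 while thread 2 starts part one for UEs 5 to 8.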
That is, the arrangement resembles two-stage pipeline processing: the first part is processed by thread 2 and its result is placed in the queue, and the remaining second part is processed by thread 1 starting from the next TTI. One additional thread therefore runs concurrently, and the number of UEs that can be processed in each TTI is doubled.
In the above steps the serial steps are divided into two parts processed by 2 threads, i.e., two-stage pipeline processing; according to specific requirements, the serial steps can also be divided into N parts processed by N threads, expanding the pipeline into an N-stage pipeline and further improving the overall performance of the system.
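The N-stage generalization can be sketched with one worker thread per stage and a queue between adjacent stages. This is an illustrative Python simulation, not the patented implementation; the CPU-core binding that the embodiment requires is omitted because it is platform-specific:

```python
import queue
import threading

def make_pipeline(stage_funcs, source, sink):
    """Run len(stage_funcs) worker threads as an N-stage pipeline.
    Adjacent stages are connected by FIFO queues; in a real MAC scheduler
    each thread would additionally be pinned to its own CPU core."""
    qs = [source] + [queue.Queue() for _ in stage_funcs[:-1]] + [sink]

    def worker(i, fn):
        while True:
            item = qs[i].get()
            if item is None:            # sentinel: propagate and stop
                qs[i + 1].put(None)
                return
            qs[i + 1].put(fn(item))     # run this stage, hand to the next

    threads = [threading.Thread(target=worker, args=(i, fn))
               for i, fn in enumerate(stage_funcs)]
    for t in threads:
        t.start()
    return threads

# Three-stage example: each "part" of the serial steps is one function.
src, out = queue.Queue(), queue.Queue()
threads = make_pipeline([lambda x: x + 1, lambda x: x * 2, lambda x: x - 3],
                        src, out)
for item in [1, 2, 3, None]:            # None terminates the pipeline
    src.put(item)
for t in threads:
    t.join()
```

Because every stage has a single worker and the queues are FIFO, results leave the pipeline in input order, mirroring the in-order hand-off between serial sub-threads described above.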
EXAMPLE III
Referring to fig. 2, a MAC layer scheduling terminal based on a 5G small cell base station comprises a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the MAC layer scheduling method based on a 5G small cell base station according to the first embodiment or the second embodiment.
In summary, in the MAC layer scheduling method and terminal based on a 5G small cell base station, corresponding scheduling events are generated according to the scheduling relationship between the uplink and downlink physical channels; scheduling events that can be processed concurrently are assigned to different threads for concurrent processing, and scheduling events that can only be processed serially are assigned to the same thread for serial processing. When the serial processing thread is pipelined, the most time-consuming serial scheduling events that cannot be processed concurrently are distributed to a plurality of threads for processing. Although this introduces a delay, it increases the amount of data processed per unit time by using multiple cores, without changing the processing capacity of each CPU. The serially processed thread is divided, according to the processing time of the serially processed scheduling events, into a plurality of serial sub-threads each with a processing time less than or equal to one time slot, which ensures that each serial processing stage finishes within one time slot and avoids missing an air-interface sending opportunity because the processing time exceeds one time slot. Scheduling events are thus processed concurrently wherever possible, and scheduling events that can only be processed serially are distributed across CPU cores, so that the existing serial-only MAC layer scheduling is executed on different CPU cores, the number of mobile devices processed in each scheduling period of the small base station is increased, and the overall scheduling rate is improved.
The above description is only an embodiment of the present invention, and not intended to limit the scope of the present invention, and all equivalent changes made by using the contents of the present specification and the drawings, or applied directly or indirectly to the related technical fields, are included in the scope of the present invention.
Claims (10)
1. A MAC layer scheduling method based on a 5G small base station is characterized by comprising the following steps:
generating a corresponding scheduling event according to a scheduling relationship between each uplink physical channel and each downlink physical channel in the MAC layer;
distributing the scheduling events that can be processed concurrently to different threads for concurrent processing, distributing the scheduling events that can only be processed serially to a same thread for serial processing, and binding each thread to a different CPU core;
and distributing the scheduling events processed in series to a plurality of independent threads for delayed processing.
2. The MAC layer scheduling method based on a 5G small cell base station according to claim 1, wherein distributing the scheduling events that can be processed concurrently to different threads for concurrent processing comprises:
distributing the concurrently processable scheduling events to different threads;
and performing time-slot-based time synchronization on the different threads, and processing the synchronized threads concurrently.
3. The MAC layer scheduling method based on a 5G small cell base station according to claim 1, wherein distributing the serially processed scheduling events to a plurality of independent threads for delayed processing comprises:
acquiring the processing time of the serially processed scheduling events;
dividing the serially processed thread, according to the processing time, into a plurality of serial sub-threads each with a processing time less than or equal to one time slot;
and distributing the serial sub-threads obtained after the division to different CPU cores for delayed processing.
4. The MAC layer scheduling method based on a 5G small cell base station according to claim 3, wherein distributing the serial sub-threads obtained after the division to different CPU cores for delayed processing comprises:
placing the plurality of serial sub-threads into corresponding independent threads for processing, and judging whether a serial sub-thread is the first in the processing sequence: if so, receiving data from the air interface and the upper-layer service and processing the serial sub-thread; if not, acquiring data from a preset queue and processing the serial sub-thread;
and judging whether the serial sub-thread is the last in the processing sequence: if not, storing the processing result in the preset queue; if so, sending the processing result through the air interface.
5. The MAC layer scheduling method based on a 5G small cell base station according to any one of claims 1 to 4, further comprising:
pre-scheduling the thread corresponding to the scheduling event that obtains the resources required for scheduling, and allocating the resources required for scheduling;
and, after the thread corresponding to the scheduling event that obtains the resources required for scheduling has been pre-scheduled, scheduling that scheduling event once in each time slot.
6. A MAC layer scheduling terminal based on a 5G small cell base station, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the following steps:
generating a corresponding scheduling event according to a scheduling relationship between each uplink physical channel and each downlink physical channel in the MAC layer;
distributing the scheduling events that can be processed concurrently to different threads for concurrent processing, distributing the scheduling events that can only be processed serially to a same thread for serial processing, and binding each thread to a different CPU core;
and distributing the scheduling events processed in series to a plurality of independent threads for delayed processing.
7. The MAC layer scheduling terminal based on a 5G small cell base station according to claim 6, wherein distributing the scheduling events that can be processed concurrently to different threads for concurrent processing comprises:
distributing the concurrently processable scheduling events to different threads;
and performing time-slot-based time synchronization on the different threads, and processing the synchronized threads concurrently.
8. The MAC layer scheduling terminal based on a 5G small cell base station according to claim 6, wherein distributing the serially processed scheduling events to a plurality of independent threads for delayed processing comprises:
acquiring the processing time of the serially processed scheduling events;
dividing the serially processed thread, according to the processing time, into a plurality of serial sub-threads each with a processing time less than or equal to one time slot;
and distributing the serial sub-threads obtained after the division to different CPU cores for delayed processing.
9. The MAC layer scheduling terminal based on a 5G small cell base station according to claim 8, wherein distributing the serial sub-threads obtained after the division to different CPU cores for delayed processing comprises:
placing the plurality of serial sub-threads into corresponding independent threads for processing, and judging whether a serial sub-thread is the first in the processing sequence: if so, receiving data from the air interface and the upper-layer service and processing the serial sub-thread; if not, acquiring data from a preset queue and processing the serial sub-thread;
and judging whether the serial sub-thread is the last in the processing sequence: if not, storing the processing result in the preset queue; if so, sending the processing result through the air interface.
10. The MAC layer scheduling terminal based on a 5G small cell base station according to any one of claims 6 to 9, further comprising:
pre-scheduling the thread corresponding to the scheduling event that obtains the resources required for scheduling, and allocating the resources required for scheduling;
and, after the thread corresponding to the scheduling event that obtains the resources required for scheduling has been pre-scheduled, scheduling that scheduling event once in each time slot.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110669640.1A CN113407320B (en) | 2021-06-17 | 2021-06-17 | MAC layer scheduling method and terminal based on 5G small cell |
CN202310891744.6A CN117149373A (en) | 2021-06-17 | 2021-06-17 | Data scheduling method and terminal of MAC layer |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110669640.1A CN113407320B (en) | 2021-06-17 | 2021-06-17 | MAC layer scheduling method and terminal based on 5G small cell |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310891744.6A Division CN117149373A (en) | 2021-06-17 | 2021-06-17 | Data scheduling method and terminal of MAC layer |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113407320A true CN113407320A (en) | 2021-09-17 |
CN113407320B CN113407320B (en) | 2023-08-11 |
Family
ID=77684565
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110669640.1A Active CN113407320B (en) | 2021-06-17 | 2021-06-17 | MAC layer scheduling method and terminal based on 5G small cell |
CN202310891744.6A Pending CN117149373A (en) | 2021-06-17 | 2021-06-17 | Data scheduling method and terminal of MAC layer |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310891744.6A Pending CN117149373A (en) | 2021-06-17 | 2021-06-17 | Data scheduling method and terminal of MAC layer |
Country Status (1)
Country | Link |
---|---|
CN (2) | CN113407320B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117149441A (en) * | 2023-10-27 | 2023-12-01 | 南京齐芯半导体有限公司 | Task scheduling optimization method applied to IoT |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1874538A (en) * | 2005-07-20 | 2006-12-06 | 华为技术有限公司 | Concurrent method for treating calling events |
US20150195850A1 (en) * | 2012-09-17 | 2015-07-09 | Huawei Technologies Co., Ltd. | Scheduling method, base station, user equipment, and system |
CN106851667A (en) * | 2017-01-19 | 2017-06-13 | 京信通信系统(广州)有限公司 | A kind of data processing method and device for air protocol data surface |
CN108174463A (en) * | 2018-02-10 | 2018-06-15 | 北京理工大学 | A kind of media access control sublayer design of soft base station and configuration method towards more scenes |
-
2021
- 2021-06-17 CN CN202110669640.1A patent/CN113407320B/en active Active
- 2021-06-17 CN CN202310891744.6A patent/CN117149373A/en active Pending
Non-Patent Citations (1)
Title |
---|
曹玉树: "《基于通用处理器的LTE系统MAC子层的设计与实现》" * |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117149441A (en) * | 2023-10-27 | 2023-12-01 | 南京齐芯半导体有限公司 | Task scheduling optimization method applied to IoT |
CN117149441B (en) * | 2023-10-27 | 2024-01-05 | 南京齐芯半导体有限公司 | Task scheduling optimization method applied to IoT |
Also Published As
Publication number | Publication date |
---|---|
CN117149373A (en) | 2023-12-01 |
CN113407320B (en) | 2023-08-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110535609B (en) | Determination method of target parameter, communication node and storage medium | |
US11665756B2 (en) | Communicating control data in a wireless communication network | |
KR101361022B1 (en) | Method and apparatus for reporting buffer status | |
CN109150458B (en) | Control information transmission method and device | |
US20170311337A1 (en) | Data Processing Implementation Method, Base Station and User Equipment | |
WO2019158013A1 (en) | Channel transmission method and apparatus, network device, and computer readable storage medium | |
CN101873704A (en) | Method, system and equipment for resource scheduling in long-term evolution system | |
CN108886716A (en) | A kind of data transmission method and device | |
AU2019460320B2 (en) | Sharing HARQ processes by multiple configured grants resources | |
JP6567779B2 (en) | Method and apparatus for transmitting uplink control information UCI | |
CN107567105B (en) | PUCCH resource scheduling method and device | |
JP2020511870A (en) | System parameter set configuration method and apparatus, and storage medium | |
CN112911713A (en) | Configuration method and equipment of uplink control channel | |
CN111835478B (en) | PUCCH resource allocation method and device | |
CN113407320B (en) | MAC layer scheduling method and terminal based on 5G small cell | |
CN112399595A (en) | Communication method and device | |
US10728921B2 (en) | Information processing method and device | |
CN115190596A (en) | Method, terminal and equipment for transmitting UCI on PUSCH | |
EP3527023B1 (en) | Methods and devices for transmitting/receiving scheduling commands | |
CN115362741A (en) | PUCCH repetition number indication | |
CN111757485A (en) | Method, device and system for allocating uplink control information resources | |
EP4373193A1 (en) | Channel processing method and apparatus | |
WO2023134661A1 (en) | Uci transmission method and apparatus, terminal, network device, and storage medium | |
Yang et al. | Research and Implementation of RB Allocation in LTE | |
CN115715022A (en) | Channel processing method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||