CN113254080A - PDSCH channel parallel processing and multi-thread task dynamic allocation method - Google Patents
PDSCH channel parallel processing and multi-thread task dynamic allocation method
- Publication number
- CN113254080A (application CN202110539753.XA)
- Authority
- CN
- China
- Legal status: Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/30—Arrangements for executing machine instructions, e.g. instruction decode
- G06F9/38—Concurrent instruction execution, e.g. pipeline or look ahead
- G06F9/3818—Decoding for concurrent execution
- G06F9/3822—Parallel decoding, e.g. parallel decode units
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L1/00—Arrangements for detecting or preventing errors in the information received
- H04L1/004—Arrangements for detecting or preventing errors in the information received by using forward error control
- H04L1/0056—Systems characterized by the type of code used
- H04L1/0061—Error detection codes
- H04L1/0067—Rate matching
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Mobile Radio Communication Systems (AREA)
Abstract
The invention discloses a PDSCH channel parallel processing method, comprising: parallel preprocessing that decouples the code blocks processed on the PDSCH; and multi-thread task allocation that creates parallel tasks, which multiple threads then process in parallel. By processing the downlink PDSCH signal in parallel, the invention increases processing speed and achieves the low latency and high throughput required by a 5G system; compared with a hardware implementation, it is simple to realize, highly flexible, a white-box design, and easy to upgrade, maintain, and extend.
Description
Technical Field
The invention belongs to the technical field of communications, and in particular relates to a physical downlink shared channel (PDSCH) parallel processing and multi-thread dynamic task allocation method.
Background
The fifth-generation mobile communication access technology (5G), also known as NR, defines three major application scenarios: enhanced mobile broadband (eMBB), ultra-reliable low-latency communication (URLLC), and massive machine-type communication (mMTC). eMBB pursues high-speed transmission: it requires a peak rate of 10 Gbps, guarantees a connection rate of at least 100 Mbps in any setting, and supports high-speed mobility scenarios up to 500 km/h. To reach these targets, 5G significantly upgrades physical-layer signal processing relative to 4G, for example by using large transmission bandwidths, introducing millimeter-wave bands, deploying massive MIMO, and adopting flexible resource allocation. Intuitively, as the transmission rate rises, so does the required processing speed of the data channel. For the downlink, service data is processed and transmitted on the Physical Downlink Shared Channel (PDSCH), so the PDSCH processing delay directly determines whether a 5G base station's downlink can meet the required transmission-rate targets.
The payload processed by the PDSCH is called a Transport Block (TB). The first step after a transport block enters the physical layer from the MAC layer via the downlink transport channel (DL-SCH) is TB CRC attachment, followed by selection of the LDPC base graph: 5G supports two base graphs, with base graph 1 used for larger transport blocks and base graph 2 for smaller ones. After base graph selection, larger transport blocks are split in a step called code block segmentation. The reason is that channel-coding complexity grows sharply with data length and can be reduced significantly by limiting the coder's input length. For the LDPC coding used on the PDSCH, the 3GPP protocol specifies that the coder input must not exceed 8448 bits for base graph 1 and 3840 bits for base graph 2. After segmentation, a 24-bit CRC is appended to each code block, so the maximum code block payload is 8424 bits for base graph 1 and 3816 bits for base graph 2. The code blocks produced by segmentation are of equal length. Next, each CRC-appended code block is LDPC-encoded, including generation of the LDPC parity bits and rate matching. Finally, the code blocks are concatenated in their pre-segmentation order. The above is the so-called bit-level processing.
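As a rough sketch of the segmentation arithmetic just described — a simplified reading of the cited limits, not the full TS 38.212 procedure (filler-bit and lifting-size handling are omitted, and the function name is ours):

```python
import math

def segment_transport_block(B: int, base_graph: int):
    """Code block segmentation sketch based on the limits cited above:
    LDPC base graph 1 caps the coder input at 8448 bits and base graph 2
    at 3840 bits; when a TB is split, each code block carries a 24-bit
    CB CRC, so the payload limits drop to 8424 / 3816 bits."""
    K_cb = 8448 if base_graph == 1 else 3840
    if B <= K_cb:
        return 1, B                        # single code block, no CB CRC
    L = 24                                 # per-code-block CRC length
    C = math.ceil(B / (K_cb - L))          # number of code blocks
    K_prime = math.ceil((B + C * L) / C)   # bits per (equal-length) code block
    return C, K_prime
```

For example, a 10000-bit block on base graph 1 splits into two code blocks of 5024 bits each (including their CB CRCs).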
After PDSCH bit-level processing, the channel-coded codewords are obtained; one PDSCH channel can carry one or two codewords simultaneously as input to symbol-level processing. First, each codeword is scrambled with a pseudo-random sequence, then modulated to produce a series of complex-valued modulation symbols. The 3GPP protocol specifies four modulation schemes for the PDSCH: QPSK, 16QAM, 64QAM, and 256QAM. Next comes layer mapping, in which the modulation symbols of one codeword are mapped onto at most four MIMO layers. 5G precoding is transparent to the protocol; whether it is applied depends on the base station implementation. If precoding is used, the data and the DMRS are multiplied by the same precoding matrix and the outputs correspond to antenna ports; if not, the layers are mapped one-to-one onto antenna ports. Finally, the antenna-port data is mapped onto the corresponding time-frequency resource grid.
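The scrambling and modulation steps just described can be sketched as follows, using the standard QPSK constellation of TS 38.211 (plain Python lists rather than optimized buffers; function names are ours):

```python
import math

def scramble(bits, c):
    """Bitwise XOR of the codeword bits with the pseudo-random sequence c."""
    return [b ^ ci for b, ci in zip(bits, c)]

def qpsk_modulate(bits):
    """QPSK mapping as in TS 38.211: each bit pair (b0, b1) becomes
    d = ((1 - 2*b0) + 1j*(1 - 2*b1)) / sqrt(2)."""
    s = 1 / math.sqrt(2)
    return [complex(s * (1 - 2 * bits[2 * i]), s * (1 - 2 * bits[2 * i + 1]))
            for i in range(len(bits) // 2)]
```

Higher orders (16QAM, 64QAM, 256QAM) follow the same pattern with 4, 6, or 8 bits per symbol.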
A base station traditionally implements PDSCH processing in a dedicated chip or DSP, i.e., in hardware. This approach is fast and can deliver the low latency and high throughput 5G requires, but it is inflexible and hard to upgrade or extend. In recent years, software-implemented base stations have drawn industry attention for their flexibility, low cost, and ease of upgrading and extension; the biggest problem a software implementation must solve, however, is meeting the real-time and throughput requirements.
In base-station-side PDSCH processing, data from the Medium Access Control (MAC) layer is first handled by the physical layer's bit-level processing module; the symbol-level module then maps the data to symbols according to the higher-layer configuration, and the symbols are sent over the fronthaul interface to the remote radio unit (RRU), where they are converted into a radio signal and transmitted over the air interface. Air-interface time is divided into consecutive slots, and the data for a given slot must be fully processed before that slot's air time arrives, which imposes real-time requirements on downlink signal processing.
Disclosure of Invention
In order to solve the above problems, an object of the present invention is to provide a PDSCH channel parallel processing method, including:
parallel preprocessing for decoupling the code blocks processed by the PDSCH;
creating parallel tasks through multi-thread task allocation, with multiple threads processing the tasks in parallel.
Preferably, the parallel preprocessing includes separating scrambling-sequence generation from the scrambling operation: the scrambling sequence is generated in the higher-layer parameter analysis step, and the address offset of the scrambling sequence needed by each code block is computed at the same time, so that during parallel computation each thread fetches its scrambling sequence by the address offset of its current code block.
Preferably, the parallel preprocessing comprises the steps of:
analyzing higher-layer parameters; calculating codeword parameters; calculating per-code-block parameters; DMRS sequence generation and resource mapping; scrambling-sequence generation; and transport block CRC attachment.
Preferably, the parallel preprocessing further includes partitioning in advance the RE resource list that each code block uses in resource mapping, so that each code block is mapped directly to the positions specified by its list.
Preferably, the parallel processing tasks include: code block CRC attachment, LDPC coding, LDPC rate matching, code block scrambling, code block modulation, code block layer mapping, and code block resource mapping.
The multi-thread task allocation method comprises the following steps:
judging whether the task involves a single user or multiple users: for a single user, dividing the parallel tasks by code block and adding the resulting tasks to a task queue; for multiple users, dividing each user's PDSCH parallel tasks by code block and adding every user's tasks to the task queue; if a user has only one code block, performing no division and adding that user's PDSCH processing to the queue as a single task;
acquiring an idle thread through a pre-established thread pool interface;
each idle thread takes one task from the task queue and processes it, where a task comprises code block CRC attachment, LDPC coding, LDPC rate matching, code block scrambling, code block modulation, code block layer mapping, and code block resource mapping;
a thread that finishes its task continues taking tasks from the queue until all tasks in the queue have been executed.
The invention has the following beneficial effects: by processing the downlink PDSCH signal in parallel, it increases processing speed and achieves the low latency and high throughput required by a 5G system; compared with a hardware implementation, it is simple to realize, highly flexible, a white-box design, and easy to upgrade, maintain, and extend.
Drawings
fig. 1 is the downlink PDSCH channel processing flow;
fig. 2 is the single-user scenario PDSCH parallel processing flow;
fig. 3 is the multi-user scenario PDSCH parallel processing flow;
fig. 4 is the x86 platform parameter configuration table;
fig. 5 is the ARM platform parameter configuration table;
fig. 6 is the single-user full-load configuration table;
fig. 7 is the multi-user full-load configuration table.
Detailed Description
The invention will be further described with reference to the accompanying drawings in which:
As shown in fig. 1, the PDSCH channel parallel processing method of the present invention comprises:
parallel preprocessing for decoupling the code blocks processed by the PDSCH;
creating parallel tasks through multi-thread task allocation, with multiple threads processing the tasks in parallel.
Preferably, the parallel preprocessing includes separating scrambling-sequence generation from the scrambling operation: the scrambling sequence is generated in the higher-layer parameter analysis step, and the address offset of the scrambling sequence needed by each code block is computed at the same time, so that during parallel computation each thread fetches its scrambling sequence by the address offset of its current code block.
Preferably, the parallel preprocessing comprises the steps of:
analyzing higher-layer parameters; calculating codeword parameters; calculating per-code-block parameters; DMRS sequence generation and resource mapping; scrambling-sequence generation; and transport block CRC attachment.
Preferably, the parallel preprocessing further includes partitioning in advance the RE resource list that each code block uses in resource mapping, so that each code block is mapped directly to the positions specified by its list.
Preferably, the parallel processing tasks include: code block CRC attachment, LDPC coding, LDPC rate matching, code block scrambling, code block modulation, code block layer mapping, and code block resource mapping.
The multi-thread task allocation method comprises the following steps:
judging whether the task involves a single user or multiple users: for a single user, dividing the parallel tasks by code block and adding the resulting tasks to a task queue; for multiple users, dividing each user's PDSCH parallel tasks by code block and adding every user's tasks to the task queue; if a user has only one code block, performing no division and adding that user's PDSCH processing to the queue as a single task;
acquiring an idle thread through a pre-established thread pool interface;
each idle thread takes one task from the task queue and processes it, where a task comprises code block CRC attachment, LDPC coding, LDPC rate matching, code block scrambling, code block modulation, code block layer mapping, and code block resource mapping;
a thread that finishes its task continues taking tasks from the queue until all tasks in the queue have been executed.
As shown in fig. 1, in the 5G PDSCH processing flow the PDSCH payload is divided after code block segmentation into several parts, each called a Code Block (CB); the subsequent LDPC encoding is performed independently on each code block with the same encoding parameters, so the LDPC encoding of the code blocks is naturally parallel. After encoding, the code blocks are concatenated and scrambled with an iteratively generated scrambling sequence; both scrambling-sequence generation and the scrambling itself operate on the whole transport block (TB), and the subsequent modulation, layer mapping, and resource mapping steps likewise operate on the whole TB.
The serial processing flow in a PDSCH multi-CB scenario comprises the following steps:
a. higher-layer parameter analysis: parse the configuration parameters from higher layers and generate the parameters required by each subsequent signal-processing module, such as each CB's LDPC coding parameters;
b. TB CRC attachment;
c. code block segmentation: divide the TB-CRC-appended data block into N_CB code blocks of equal length, each L_CB bits long;
d. for each CB in turn, attach the CB CRC, then perform LDPC coding and rate matching;
e. code block concatenation: join the outputs of step d for all CBs end to end; the resulting data block is called a codeword;
f. scrambling: generate a binary sequence of the same length as the codeword and XOR it bitwise with the codeword to obtain the scrambled codeword;
g. modulation: map the scrambled codeword to modulation symbols according to the modulation order;
h. layer mapping: map the modulation symbols onto the layers according to the number of layers configured by higher layers;
i. resource mapping: map each layer's modulation symbols onto the time-frequency resource grid according to the higher-layer configuration.
As the serial flow above shows, only step d has no dependency between CBs and can already run in parallel; the remaining steps all operate on the whole codeword. To achieve the best parallel effect, the inter-CB dependencies of the other steps must be removed so that every CB performs all steps independently. Scrambling-sequence generation, however, cannot be parallelized directly, because it is recursive: the scrambling sequence of a later CB depends on that of the preceding CBs.
The invention separates scrambling-sequence generation from the scrambling operation: the sequence is generated in the higher-layer parameter analysis step, and the address offset of the scrambling sequence needed by each CB is computed at the same time, so that during parallel computation each thread fetches its scrambling sequence by the offset of its current CB.
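The idea can be sketched with the length-31 Gold sequence generator of TS 38.211 §5.2.1: the whole sequence is produced once during preprocessing, and a thread handling a given CB merely slices it at a precomputed offset (the offset values below are illustrative, not from the patent):

```python
def gold_sequence(c_init: int, length: int, nc: int = 1600) -> list:
    """Pseudo-random (Gold) scrambling sequence of TS 38.211:
    c(n) = (x1(n + Nc) + x2(n + Nc)) mod 2, with Nc = 1600."""
    total = nc + length
    x1 = [0] * (total + 31)
    x2 = [0] * (total + 31)
    x1[0] = 1                                  # fixed x1 initialization
    for n in range(31):                        # x2 initialized from c_init
        x2[n] = (c_init >> n) & 1
    for n in range(total):
        x1[n + 31] = (x1[n + 3] + x1[n]) % 2
        x2[n + 31] = (x2[n + 3] + x2[n + 2] + x2[n + 1] + x2[n]) % 2
    return [(x1[n + nc] + x2[n + nc]) % 2 for n in range(length)]

# Preprocessing generates the whole sequence once and records per-CB offsets;
# a worker thread then just slices -- no recursion inside the parallel region.
seq = gold_sequence(c_init=0x5A5A, length=1024)
cb_offsets = [0, 256, 512, 768]        # illustrative: 4 CBs of 256 bits each
cb2_bits = seq[cb_offsets[2]: cb_offsets[2] + 256]
```

Because the recursion happens once, up front, no thread ever waits on another thread's portion of the sequence.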
The parallel processing flow of the invention is as follows:
(1) Parallel preprocessing removes the coupling between the code blocks processed by the PDSCH. The principle is as follows: the signal-processing parameters that each CB needs in the subsequent steps are computed in advance, and in particular each CB's buffers are isolated during signal processing, avoiding unexpected results caused by thread contention. In addition, the scrambling sequence of the serial flow is generated in advance and each CB's scrambling-sequence offset is computed from the configuration, so a thread processing a CB never touches the recursive sequence generation and simply fetches the sequence at its offset. The preprocessing also partitions in advance the RE resource list each CB uses in resource mapping, so that during mapping each CB writes directly to the positions specified by its list, without regard to the other CBs;
the parallel preprocessing specifically comprises the steps of: higher-layer parameter analysis, parsing the configuration parameters issued by higher layers; calculating codeword parameters, such as the codeword length and scrambling initialization parameters; calculating per-CB parameters, including LDPC coding parameters, scrambling-sequence offsets, resource-mapping positions, and so on; DMRS sequence generation and resource mapping; scrambling-sequence generation; and TB CRC attachment;
(2) parallel tasks are created through multi-thread task allocation and processed by multiple threads in parallel. The parallel tasks comprise: code block CRC attachment, LDPC coding, LDPC rate matching, code block scrambling, code block modulation, code block layer mapping, and code block resource mapping.
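The per-CB RE-list idea in step (1) can be sketched as follows. This is a hypothetical helper: a real mapper would also skip DMRS REs and follow the layer/antenna-port layout, but the point is that each worker thread receives its own disjoint list and never writes another CB's positions.

```python
def split_re_list(re_indices, num_cbs):
    """Preprocessing sketch: partition the codeword's RE positions into
    one list per code block, so each thread writes only its own slice
    of the resource grid."""
    base, extra = divmod(len(re_indices), num_cbs)
    lists, start = [], 0
    for i in range(num_cbs):
        size = base + (1 if i < extra else 0)  # spread any remainder evenly
        lists.append(re_indices[start:start + size])
        start += size
    return lists
```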
Once the coupling between code blocks is removed, PDSCH processing executes tasks with multiple threads in parallel, improving overall performance. The minimum granularity of PDSCH parallel processing is one CB, i.e., one thread processes one CB; if a PDSCH flow contains N (N > 1) CBs, N threads could in principle process them simultaneously. In engineering practice, however, the product implementation must weigh the characteristics of the multi-core platform against the number of tasks. For example, under the 5G protocol in a 4T4R scenario, a fully loaded single user can produce up to 150 CBs, which would require 150 threads if each CB had its own; in a real product the number of cores available for PDSCH processing is far below 150, constrained by the processor's core count and the base station's remaining computing tasks.
Considering the application scope of the invention, the embodiment is described for the case of 4 threads; other thread counts are handled analogously and fall within the protection scope of this patent.
The invention adopts dynamic task allocation for multi-thread scheduling. In a single-user scenario, the physical layer receives, within one TTI, data and the corresponding processing parameters from one layer-2 user and then starts PDSCH processing; in a multi-user scenario, it receives data and parameters from multiple layer-2 users before starting. The specific steps are as follows:
determining whether to be a single user or multiple users according to the layer 2 message;
if there is a single user, divide the parallel tasks by code block and add the resulting tasks to a task queue; if there are multiple users, divide each user's PDSCH parallel tasks by code block and add every user's tasks to the queue; if a user has only 1 CB, do not divide, and add that user's PDSCH processing to the queue as one task;
acquiring 4 idle threads through a pre-established thread pool interface;
each idle thread takes one task from the task queue and processes it, where a task comprises code block CRC attachment, LDPC coding, LDPC rate matching, code block scrambling, code block modulation, code block layer mapping, and code block resource mapping;
a thread that finishes its task continues taking tasks from the queue until all tasks in the queue have been executed.
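The allocation loop above amounts to a shared task queue drained by a fixed pool of workers. A minimal sketch follows; the names `run_pdsch_tasks` and `process_cb` are ours, and `process_cb` stands in for the per-CB chain (CB CRC, LDPC coding, rate matching, scrambling, modulation, layer and resource mapping):

```python
import queue
import threading

def run_pdsch_tasks(cb_tasks, process_cb, num_threads=4):
    """Dynamic allocation sketch: every code block is one task in a
    shared queue; num_threads workers pull tasks until the queue is
    empty, so faster threads naturally pick up more of the work."""
    q = queue.Queue()
    for task in cb_tasks:
        q.put(task)
    results, lock = [], threading.Lock()

    def worker():
        while True:
            try:
                cb = q.get_nowait()     # grab the next code block task
            except queue.Empty:
                return                  # queue drained: worker exits
            out = process_cb(cb)
            with lock:
                results.append(out)     # collect under a lock

    threads = [threading.Thread(target=worker) for _ in range(num_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

All tasks are enqueued before the workers start, so `get_nowait` raising `Empty` reliably signals that the queue is drained.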
The task execution process for the single-user scenario is shown in fig. 2, and the multi-user scenario execution process is shown in fig. 3.
The invention was tested on an x86 platform and an ARM platform. The x86 platform parameter configuration table is shown in fig. 4 and the ARM platform parameter configuration table in fig. 5; the single-user full-load configuration is shown in fig. 6 and the multi-user full-load configuration in fig. 7.
By processing the downlink PDSCH signal in parallel, the invention increases processing speed and achieves the low latency and high throughput required by a 5G system; compared with a hardware implementation, it is simple to realize, highly flexible, a white-box design, and easy to upgrade, maintain, and extend.
The technical solution of the present invention is not limited by the specific embodiments described above; all technical modifications made according to the technical solution of the present invention fall within its protection scope.
Claims (6)
1. A method for processing PDSCH channels in parallel, comprising:
parallel preprocessing for decoupling the code blocks processed by the PDSCH;
creating parallel tasks through multi-thread task allocation, with multiple threads processing the tasks in parallel.
2. The method of claim 1, wherein the parallel preprocessing comprises separating scrambling-sequence generation from the scrambling operation, the scrambling sequence being generated in the higher-layer parameter analysis step while the address offset of the scrambling sequence required by each code block is calculated, so that during parallel computation each thread obtains its scrambling sequence according to the address offset of its current code block.
3. The method for parallel processing of PDSCH channels according to claim 1, wherein the parallel pre-processing comprises the steps of:
analyzing higher-layer parameters; calculating codeword parameters; calculating per-code-block parameters; DMRS sequence generation and resource mapping; scrambling-sequence generation; and transport block CRC attachment.
4. The method of claim 1, wherein the parallel pre-processing further includes partitioning a resource list of REs corresponding to each code block during resource mapping, and each code block is mapped directly onto a position specified in the list during resource mapping.
5. The method for parallel processing of PDSCH channels according to claim 1, wherein the parallel processing task includes: adding code block CRC, LDPC coding, LDPC rate matching, code block scrambling, code block modulation, code block layer mapping and code block resource mapping.
6. A multi-thread task allocation method for dynamic task allocation, characterized by comprising the following steps:
judging whether the task involves a single user or multiple users: for a single user, dividing the parallel tasks by code block and adding the resulting tasks to a task queue; for multiple users, dividing each user's PDSCH parallel tasks by code block and adding every user's tasks to the task queue; if a user has only one code block, performing no division and adding that user's PDSCH processing to the queue as a single task;
acquiring an idle thread through a pre-established thread pool interface;
each idle thread takes one task from the task queue and processes it, where a task comprises code block CRC attachment, LDPC coding, LDPC rate matching, code block scrambling, code block modulation, code block layer mapping, and code block resource mapping;
a thread that finishes its task continues taking tasks from the queue until all tasks in the queue have been executed.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110539753.XA CN113254080A (en) | 2021-05-18 | 2021-05-18 | PDSCH channel parallel processing and multi-thread task dynamic allocation method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110539753.XA CN113254080A (en) | 2021-05-18 | 2021-05-18 | PDSCH channel parallel processing and multi-thread task dynamic allocation method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113254080A true CN113254080A (en) | 2021-08-13 |
Family
ID=77182490
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110539753.XA Pending CN113254080A (en) | 2021-05-18 | 2021-05-18 | PDSCH channel parallel processing and multi-thread task dynamic allocation method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113254080A (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102299735A (en) * | 2010-06-25 | 2011-12-28 | 普天信息技术研究院有限公司 | Method for decreasing bandwidth of Ir interface and distributed base station |
US20130051354A1 (en) * | 2010-05-05 | 2013-02-28 | Xiaojing Ling | De-rate matching method and device for downlink traffic channel in long term evolution |
CN106416166A (en) * | 2015-03-17 | 2017-02-15 | 华为技术有限公司 | Method and communication device for data processing |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114499768A (en) * | 2022-04-15 | 2022-05-13 | 成都爱瑞无线科技有限公司 | Data processing method and device for PDSCH (physical Downlink shared channel) and storage medium |
CN114499768B (en) * | 2022-04-15 | 2022-06-10 | 成都爱瑞无线科技有限公司 | Data processing method and device for PDSCH (physical Downlink shared channel) and storage medium |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20210813 |