CN107241163B - Interleaving processing method and device - Google Patents

Interleaving processing method and device

Info

Publication number
CN107241163B
Authority
CN
China
Prior art keywords
interleaving
block
processed
task
storage space
Prior art date
Legal status
Active
Application number
CN201710295108.1A
Other languages
Chinese (zh)
Other versions
CN107241163A (en)
Inventor
吴晨璐
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN201710295108.1A priority Critical patent/CN107241163B/en
Publication of CN107241163A publication Critical patent/CN107241163A/en
Application granted
Publication of CN107241163B publication Critical patent/CN107241163B/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 1/00 Arrangements for detecting or preventing errors in the information received
    • H04L 1/004 Arrangements for detecting or preventing errors in the information received by using forward error control
    • H04L 1/0056 Systems characterized by the type of code used
    • H04L 1/0071 Use of interleaving
    • H ELECTRICITY
    • H03 ELECTRONIC CIRCUITRY
    • H03M CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M 13/00 Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M 13/27 Coding, decoding or code conversion, for error detection or error correction, using interleaving techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Error Detection And Correction (AREA)
  • Detection And Prevention Of Errors In Transmission (AREA)

Abstract

An interleaving processing method and device. The method obtains an interleaving task to be processed and determines the memory space needed by an interleaver to interleave that task. The memory space of the interleaver is divided into N block memory spaces according to the memory space required by the interleaving task to be processed and the maximum block memory space of the interleaver, where N is a positive integer and the size of each of the N block memory spaces is smaller than or equal to the size of the maximum block memory space. The interleaving task to be processed is divided into at least one interleaving block, and the interleaving blocks are written into the N block memory spaces in units of interleaving blocks, so as to solve the problems of high storage overhead, long processing delay, and frequent switching between tasks of large and small specifications.

Description

Interleaving processing method and device
Technical Field
The present application relates to the field of communications technologies, and in particular, to an interleaving method and apparatus.
Background
The physical layer is the lowest layer of the radio interface and directly influences the capacity of a radio link and the performance of the system. Bit-level information processing is a complex processing step in the physical layer. On a variable-parameter channel such as that of land mobile communication, bit errors often occur in bursts, and important bit information may be destroyed when burst interference is encountered.
Interleaving is an effective technique to overcome burst interference. By scrambling the correlation between symbols, it randomizes burst interference and reduces the influence of channel fading and interference. At present, a "row write, column read" channel interleaving mode may be adopted at the transmitting end, and correspondingly a "column write, row read" de-interleaving mode is used at the receiving end. For example, the transmitting end fills the original data into the interleaving matrix by rows and reads it out by columns to obtain the interleaved data, which is sent to the receiving end; the receiving end writes the interleaved data into the interleaving matrix by columns and then reads it out by rows, thereby de-interleaving the data.
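As an illustration only (not part of the patent text), the following is a minimal sketch of the "row write, column read" interleaving and the matching "column write, row read" de-interleaving described above, using a plain Python matrix:

def interleave(data, n_rows, n_cols):
    """Fill the matrix row by row, then read it out column by column."""
    assert len(data) == n_rows * n_cols
    matrix = [data[r * n_cols:(r + 1) * n_cols] for r in range(n_rows)]
    return [matrix[r][c] for c in range(n_cols) for r in range(n_rows)]

def deinterleave(data, n_rows, n_cols):
    """Fill the matrix column by column, then read it out row by row."""
    assert len(data) == n_rows * n_cols
    matrix = [[None] * n_cols for _ in range(n_rows)]
    for i, value in enumerate(data):
        matrix[i % n_rows][i // n_rows] = value
    return [matrix[r][c] for r in range(n_rows) for c in range(n_cols)]

original = list(range(12))                      # 3 rows x 4 columns
sent = interleave(original, n_rows=3, n_cols=4)
assert deinterleave(sent, n_rows=3, n_cols=4) == original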
Interleaving may be implemented by an interleaver. Assume Cmux is the number of columns of the interleaving matrix, Rmux is the number of rows of the interleaving matrix, and Lmux is the granularity of the interleaved data of the interleaving matrix; the interleaver then needs a storage overhead of Cmux*Rmux*Lmux to implement interleaving. When interleaving is performed by the interleaver, one position of the interleaver is filled every clock cycle in "row" order, each position storing Lmux of data; after all rows are fully written, the data are read out sequentially in "column" order, which completes the interleaving process. The processing time of interleaving is therefore Cmux*Rmux clock cycles, and interleaving can start only after all "rows" have been written, so the start-up delay is Cmux*Rmux. Similarly, when de-interleaving is performed by a de-interleaver, a storage overhead of Cmux*Rmux*Lmux is also required: one position is filled every clock cycle in "column" order, each position storing Lmux of data, and after all columns are fully written the data are read out sequentially in "row" order, which completes the de-interleaving process.
In the interleaving/de-interleaving process, completing "row write, column read"/"column write, row read" requires a storage overhead of Cmux*Rmux*Lmux, which is large and costly. Interleaving/de-interleaving can start only after all rows/columns have been filled, which takes Cmux*Rmux clock cycles, so the start-up of interleaving/de-interleaving is slow. Furthermore, only one data item can be filled per clock cycle, and filling the whole matrix takes Cmux*Rmux clock cycles, so the interleaving/de-interleaving efficiency is also low. In addition, as protocols evolve, the variation in the numbers of rows and columns of the interleaving matrix increases; in multi-scenario concurrency, tasks of large and small specifications are switched frequently, and the switching delay is large.
Disclosure of Invention
The embodiments of the present application provide an interleaving processing method and device, which aim to solve the problems of high storage overhead, long processing delay, and frequent switching between tasks of large and small specifications.
In a first aspect, an interleaving processing method is provided: an interleaving task to be processed is obtained, the storage space of the interleaver is divided into a plurality of block storage spaces according to the size of the storage space required by the interleaving task to be processed, the interleaving task to be processed is divided into at least one interleaving block, and the interleaving task is processed in parallel in the plurality of block storage spaces in units of interleaving blocks.
In the embodiment of the application, the maximum block storage space can be set, and the size of each block storage space is smaller than or equal to that of the maximum block storage space, that is, the storage space required by the interleaving task processed by each block storage space does not exceed the maximum block storage space, so that the size of the interleaving task processed by each block storage space can be limited, and the processing time delay is distributed in a balanced manner. The memory space of the interleaver can be divided into N block memory spaces according to the acquired memory space required by the interleaving task to be processed and the maximum block memory space of the interleaving memory. Wherein N is a positive integer.
If the storage space required by the interleaving task to be processed is less than or equal to the maximum block storage space B, the task is not divided; it is written as a single interleaving block into a first block storage space, among the N block storage spaces, whose size equals the storage space required by the task. If there are several interleaving tasks whose required storage space is smaller than the maximum block storage space B, they can all be stored in the storage space of the interleaver without changing that storage space.
If the memory space required by the interleaving task to be processed is larger than the maximum block memory space B, the task can be divided into a plurality of interleaving blocks; these interleaving blocks are written, in units of interleaving blocks, into at least one second block storage space, among the N block storage spaces, whose size equals the maximum block storage space, and the interleaving blocks are processed in parallel.
In one possible design, the interleaving task to be processed may be partitioned when its required memory space is larger than the maximum block memory space. Specifically, the block granularity R may be determined from the maximum block memory space B and the number of columns Cmux of the interleaving matrix of the task, and the task is then block-processed according to R. For example, with a block granularity of R = B/Cmux, the Cmux*Rmux interleaving matrix is divided into a plurality of Cmux*R interleaving blocks. That is, after block processing, the number of rows of the interleaving matrix in an interleaving block is the quotient of the maximum block storage space and the number of columns of the interleaving matrix corresponding to the task, and the number of columns of the interleaving matrix in an interleaving block is the number of columns of the interleaving matrix corresponding to the task. The interleaving blocks obtained by the division are written, in units of interleaving blocks, into block storage spaces of the interleaver of the maximum block size, so that the "row write, column read"/"column write, row read" interleaving operation can be completed within one interleaving block.
In another possible design, a circular buffer mechanism may be used to store the interleaving blocks. For example, when switching between different interleaving-block tasks, the remaining address space may not be enough to store a complete interleaving block; in that case the remaining space is filled according to a dynamic waterline, the data not yet input is back-pressured, and it is input when space allows. With this circular buffer mechanism, the N block storage spaces obtained by dividing the storage space of the interleaver include a third block storage space whose size is smaller than the maximum block storage space, and the interleaving block written into the third block storage space is a partial interleaving block among the interleaving blocks obtained by block processing the interleaving task to be processed. The circular buffer mechanism avoids the scheduling-efficiency loss caused by interleaving large and small interleaving blocks after blocking.
In another possible design, when a multi-layer codeword needs layer mapping, the layers may be folded into the columns of the interleaving matrix; that is, the number of columns of the equivalent interleaving matrix is the product of the number of columns of the interleaving matrix corresponding to the interleaving task to be processed and the number of layers of the layer mapping. Interleaving is then performed with the interleaving processing method described above, so that layer mapping is completed while the row-column interleaving is completed.
In yet another possible design, for the interleaving-bypass scenario, each column after interleaving bypass may be treated as a small task of Lmux columns; the number of columns of the equivalent interleaving matrix is Ceq = Lmux, the number of tasks is T = Cmux, and the number of blocks is Cmux*Mmux. Interleaving is then performed according to the interleaving processing method of the first aspect, so that layer mapping can be completed while interleaving is bypassed.
In a second aspect, an interleave processing apparatus having all the functions of an interleaver in the interleave processing method is provided. The functions can be realized by hardware, and the functions can also be realized by executing corresponding software by hardware. The hardware or software includes one or more modules corresponding to the above-described functions. The modules may be software and/or hardware.
In one possible design, the interleaving device includes an obtaining unit, a processing unit, and an interleaving unit. The functions of the obtaining unit, the processing unit and the interleaving unit may correspond to the steps of each method, and are not described herein again.
In a third aspect, an interleaver is provided that includes a mapping circuit, a read-write circuit, and an interleaving memory. The mapping circuit and the read-write circuit are used for executing the interleaving processing method in the first aspect or any possible design of the first aspect, and storing data in the interleaving processing process into the interleaving memory.
In a fourth aspect, there is provided a computer readable storage medium or a computer program product for storing a computer program for performing the method of the first aspect as well as any possible design of the first aspect.
Drawings
fig. 1 is a schematic structural diagram of a bit-level implementation apparatus according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of another bit-level implementation apparatus according to an embodiment of the present application;
fig. 3 is a flowchart of an interleaving processing method according to an embodiment of the present application;
fig. 4 is a schematic diagram illustrating a division of the interleaver memory space according to an embodiment of the present application;
fig. 5 is a schematic diagram illustrating another division of the interleaver memory space according to an embodiment of the present application;
fig. 6 is a schematic diagram of an interleaving task processing process according to an embodiment of the present application;
fig. 7 is another schematic diagram of an interleaving task processing process according to an embodiment of the present application;
fig. 8 is a schematic diagram of the storage data format of a 4-block dual-port TP-RAM according to an embodiment of the present application;
fig. 9 is a schematic diagram of a process for writing data according to an embodiment of the present application;
fig. 10 is a schematic diagram of a process for reading data according to an embodiment of the present application;
fig. 11 is a schematic diagram of a circular storage process according to an embodiment of the present application;
fig. 12 is a schematic diagram of an interleaving process according to an embodiment of the present application;
fig. 13 is a schematic diagram illustrating a process of performing interleaving and layer mapping simultaneously according to an embodiment of the present application;
fig. 14 is a schematic diagram of a data storage process in which interleaving and layer mapping are performed simultaneously according to an embodiment of the present application;
fig. 15 is a schematic diagram of a process of completing layer mapping while implementing interleaving bypass according to an embodiment of the present application;
fig. 16 is a schematic structural diagram of an interleaving processing apparatus according to an embodiment of the present application;
fig. 17 is another schematic structural diagram of an interleaving processing apparatus according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the accompanying drawings.
In order to flexibly extend bit-level processing and support multiple scenarios as protocols evolve, the transmitting end may adopt the 5G-oriented, highly extensible bit-level implementation apparatus shown in fig. 1, and the receiving end may adopt the one shown in fig. 2; various frame formats are flexibly supported, and the modules can be flexibly combined.
In fig. 1, the subtask controller is used to control the scheduling of subtasks. The modulation module is used to calculate the time-domain/frequency-domain modulation symbol mean value. The scrambling module is used to scramble the data. The Uplink Control Information (UCI) multiplexing module is configured to perform Channel Quality Indicator (CQI), Rank Indicator (RI), and Acknowledgement (ACK) coding, and to multiplex CQI, RI, ACK, pilot, and data. The interleaver is used to process data in blocks and to read and write the Random Access Memory (RAM) with out-of-order read/write addresses. A Double Data Rate memory (DDR0) is used to store the data before interleaving, and DDR1 is used to store the interleaved data. In a physical implementation, DDR0 and DDR1 may also be replaced by different address segments in the same cache block.
In fig. 2, the subtask controller controls the subtask scheduling. The demodulation module is configured to calculate the soft information of each data item, the descrambling module is configured to recover the data before scrambling, and the Uplink Control Information (UCI) detection module is configured to perform RI and ACK mapping and decoding. The de-interleaver is used to process data in blocks and to read and write the RAM with out-of-order read/write addresses. The UCI demultiplexing and detection module is used to separate RI, ACK, pilot, and data and to calculate the RI/ACK detection results. DDR0 is an external DDR that stores the data before interleaving, and DDR1 is an external DDR that stores the interleaved data. In a physical implementation, DDR0 and DDR1 may also be replaced by different address segments in the same cache block.
In the process of bit-level data processing by the processing modules of the bit-level implementation apparatuses shown in fig. 1 and fig. 2, the subtask controller can flexibly configure the start-up and processing flow of each module to adapt to various scenarios. For example, through various switch combinations, under the various changes of 5G only the changed module is bypassed while the remaining functions operate normally; or dozens or even hundreds of scenarios can be evolved, covering most of the possible extended scenarios. The contents and formats of the output data are flexibly configured through switches, and under the various changes of 5G certain data can be output via a bypass for processing by software or other hardware modules. For example, in the process of de-interleaving and layer mapping in 5G, the frame format and the interleaving mode give rise to three complex scenarios: layered de-interleaving, bit-level de-interleaving, and symbol-level de-interleaving. For these complex scenarios, the bit-level implementation apparatus shown in fig. 2 is applied, and the subtask controller may be configured as shown in table 1 below.
TABLE 1
(Table 1 is provided as an image in the original publication; its contents are not reproduced here.)
In order to save scheduling delay, a complex frame format can be split into several subtasks for processing. In the bit-level implementation apparatuses shown in fig. 1 and fig. 2, the subtask controller can schedule multiple subtasks with a single scheduling operation: after a complex task is split, it needs to be scheduled only once, the apparatus automatically executes the subtasks in succession, and finally sends a completion message. For example, for frame formats that do not conform to LTE (Long Term Evolution), using subtasks reduces the scheduling delay, and splitting and combining tasks realizes processing in complex scenarios with the advantages of low cost and low delay.
The bit-level implementation apparatuses shown in fig. 1 and fig. 2 may support RI/ACK mapping with various bitmap schemes, for example RI/ACK bitmaps distributed by column, as well as RI, ACK, and pilot bitmaps distributed at the Resource Element (RE) level, so that demodulation, descrambling, de-interleaving, and demultiplexing of the data path can be completed normally in various frame-format scenarios and the subsequent stage can be started in advance. The RI/ACK can be demodulated, descrambled, separated, de-interleaved, and de-layer-mapped according to the specified format, thereby minimizing the software overhead and reducing the processing delay.
The bit-level implementation apparatus shown in fig. 1 can flexibly support interleaving and layer mapping of data of various specifications, and the bit-level implementation apparatus shown in fig. 2 can flexibly support de-interleaving and de-layer mapping of data of various specifications, so as to solve the problems of large storage overhead, long processing delay, and frequent switching between tasks of large and small specifications.
The following description of the embodiments of the present application mainly concerns the interleaving process implemented with the bit-level implementation apparatus shown in fig. 1; since de-interleaving is the inverse of interleaving, the de-interleaving process is not described in detail in the embodiments of the present application.
Fig. 3 is a flowchart of an interleaving processing method according to an embodiment of the present application. The method shown in fig. 3 may be executed by an interleaver, which may also be referred to as an interleaving memory. Referring to fig. 3, the interleaving processing method includes:
s101: and acquiring an interleaving task to be processed, and determining a storage space required by an interleaver for interleaving the interleaving task to be processed.
The interleaving task to be processed in the embodiment of the present application may be understood as data to be interleaved in the form of an interleaving matrix. The interleaver is the component that performs interleaving processing on the data. During interleaving, the interleaver needs to allocate memory space to store the interleaved data, and the size of the allocated memory space can be expressed by the number of rows and columns of the interleaving matrix and the granularity of the interleaved data. For example, if the interleaver interleaves a Cmux*Rmux interleaving matrix whose interleaved data granularity is Lmux, and the matrix is not processed in blocks, the size of the memory space the interleaver allocates for the interleaving matrix is Cmux*Rmux*Lmux.
S102: and dividing the storage space of the interleaver into N block storage spaces according to the storage space required by the interleaving task to be processed and the maximum block storage space of the interleaving memory. Wherein N is a positive integer.
In the embodiment of the application, in order to increase the processing parallelism and reduce the processing delay and the storage overhead, the storage space of the interleaver can be divided into a plurality of block storage spaces according to the size of the storage space required by the interleaved task to be processed, and the plurality of block storage spaces process the interleaved task in parallel. In the embodiment of the application, the maximum block storage space can be set, and the size of each block storage space is smaller than or equal to that of the maximum block storage space, that is, the storage space required by the interleaving task processed by each block storage space does not exceed the maximum block storage space, so that the size of the interleaving task processed by each block storage space can be limited, and the processing time delay is distributed in a balanced manner.
In the embodiment of the present application, the following method may be adopted to divide the storage space of the interleaver into N block storage spaces according to the storage space required by the interleaving task to be processed and the maximum block storage space of the interleaving memory:
Assume the size of the maximum block storage space is B. If the storage space required by the interleaving task to be processed is less than or equal to B, a first block storage space whose size equals the storage space required by the task is divided off within the storage space of the interleaver, and the interleaving task to be processed is written into this first block storage space. If the storage space required by the interleaving task to be processed is greater than B, at least one second block storage space of size B is divided off within the storage space of the interleaver; the number of second block storage spaces is determined by the storage space required by the task, that is, the divided second block storage spaces together can accommodate the whole interleaving task to be processed. A part of the interleaving task to be processed is written into each second block storage space, the storage space occupied by that part being equal to, or smaller than, the maximum block storage space B. Such a partial interleaving task may be understood as an interleaving block obtained by block processing an interleaving task whose required storage space is greater than the maximum block storage space B.
For the storage space remaining after the first or second block storage spaces have been divided, division can continue in the manner described above according to the storage space required by other interleaving tasks to be processed, and those tasks, or the interleaving blocks obtained after block processing them, are written into the divided storage spaces.
For convenience in description in the embodiment of the application, the number of the partitioned storage spaces obtained by partitioning the storage space of the interleaver is set to be N, where N is a positive integer, and the specific value is determined according to the number of the interleaving tasks to be processed and the size of the required storage space.
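As a rough illustration only, the following sketch (with assumed function names and example values that are not taken from the patent) shows how the storage space of the interleaver might be divided into N block storage spaces according to the rules above:

import math

def divide_storage(task_sizes, max_block_b):
    """Return one (task_index, space_size) entry per allocated block space."""
    spaces = []
    for idx, size in enumerate(task_sizes):
        if size <= max_block_b:
            spaces.append((idx, size))                     # first block storage space
        else:
            for _ in range(math.ceil(size / max_block_b)):
                spaces.append((idx, max_block_b))          # second block storage spaces
    return spaces

# Example: B = 2048; two small tasks share the interleaver with one large task.
print(divide_storage([1024, 512, 4096], max_block_b=2048))
# -> [(0, 1024), (1, 512), (2, 2048), (2, 2048)], i.e. N = 4 block storage spaces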
S103: and dividing the interleaving task to be processed into at least one interleaving block, and writing the interleaving block into the N block storage spaces by taking the interleaving block as a unit.
In the embodiment of the present application, the interleaver is divided into N block storage spaces, and interleaving tasks can be written into the N block storage spaces respectively, so that several interleaving tasks are processed in parallel. With the interleaving processing method of the embodiment of the present application, the interleaving task to be processed is divided into at least one interleaving block, and the interleaving blocks are written into the N block storage spaces in units of interleaving blocks. If the storage space required by the interleaving task to be processed is less than or equal to the maximum block storage space B, the task is not divided; it is written as a single interleaving block into a first block storage space, among the N block storage spaces, whose size equals the storage space required by the task. If there are several interleaving tasks whose required storage space is smaller than the maximum block storage space B, they can all be stored in the storage space of the interleaver without changing that storage space. As shown in fig. 4, six interleaving tasks Bs0, Bs1, Bs2, Bs3, Bs4, and Bs5, each requiring a storage space smaller than the maximum block storage space B, are stored in the memory space of the interleaver and can be processed in parallel by the interleaver.
If the memory space required by the interleaving task to be processed is larger than the maximum block memory space B, the task can be divided into a plurality of interleaving blocks, and these interleaving blocks are written, in units of interleaving blocks, into at least one second block storage space, among the N block storage spaces, whose size equals the maximum block storage space. For example, if the interleaving task to be processed Bb0 requires a storage space of 2B, it is divided into two interleaving blocks, Bb0SLC0 and Bb0SLC1, each requiring a storage space equal to the maximum block storage space B. Bb0SLC0 and Bb0SLC1 are written into two block storage spaces, among the N block storage spaces, whose sizes equal the maximum block storage space, as shown in fig. 5. In fig. 5, the two interleaving blocks of the interleaving task to be processed are written into the memory space of the interleaver, and the interleaver can process the two interleaving blocks in parallel.
The process provided by the embodiment of the present application, in which the memory space of the interleaver is divided into a plurality of block memory spaces according to the size of the interleaving tasks and several interleaving tasks are processed in parallel, realizes an interleaving mode with variable block granularity; a schematic diagram of the implementation is shown in fig. 6. In fig. 6, interleaving blocks 0, 1, 2, and 3 are written in sequence in units of interleaving blocks. After the writing of interleaving block 0 is completed, the read operation of interleaving block 0 can be performed, and the read operations of interleaving blocks 1, 2, and 3 follow in the same way, as shown in fig. 6.
In the embodiment of the present application, if the storage space required by the interleaving task to be processed is less than or equal to the maximum block storage space, the task does not need to be divided into blocks: it is written into the block storage space divided for it and interleaved in the original interleaving manner of the task. The following description focuses on the block processing of the interleaving task to be processed, and on the interleaving performed after the interleaving blocks are written, for the case where the memory space required by the interleaving task to be processed is larger than the maximum block memory space.
In the embodiment of the present application, assume that the interleaving matrix of the interleaving task to be processed has Cmux columns and Rmux rows, and that the maximum block storage space is B. Then, with a block granularity of R = B/Cmux, the Cmux*Rmux interleaving matrix is block-processed into a plurality of Cmux*R interleaving blocks. That is, the number of rows of the interleaving matrix in an interleaving block obtained by block processing is the quotient of the maximum block storage space and the number of columns of the interleaving matrix corresponding to the task, and the number of columns of the interleaving matrix in an interleaving block is the number of columns of the interleaving matrix corresponding to the task. The interleaving blocks obtained by the division are written, in units of interleaving blocks, into block storage spaces of the maximum block size in the interleaver, so that the "row write, column read"/"column write, row read" interleaving operation can be completed within one interleaving block, as shown in fig. 7. The interleaving blocks divided in fig. 7 are denoted Cmux*R0, Cmux*R1, ..., Cmux*Rn. Data is written into the interleaver in "row/column" order; once the Cmux*R0 amount of data has been written, all data in that interleaving block can be read out in "column/row" order. When interleaving, an address offset is applied to the column output so that the data output by each interleaving block is finally arranged contiguously in the DDR. When de-interleaving, the data of an interleaving block is read with address jumps on column input, which is equivalent to reading the contiguous interleaved data in the DDR. Writing then continues into the interleaver in "row/column" order, and once the Cmux*R1 amount of data has been written, all data in that interleaving block is read out in "column/row" order. Subsequent interleaving blocks are processed in sequence until all interleaving blocks obtained by the division have been interleaved. With this interleaving processing manner, the start-up delay of interleaving is reduced to Cmux*R instead of Cmux*Rmux, which reduces the processing delay.
In the following, the above interleaving process is described by taking the channel interleaving of one codeword in the LTE protocol as an example. According to the protocol, the Single-Carrier Frequency-Division Multiple Access (SC-FDMA) symbol sequence of the time-domain resource corresponds to the column index Cmux of the interleaving matrix, and the frequency-domain resource (RE) sequence corresponds to the row index Rmux of the interleaving matrix. Assume a data block size of Cmux*Rmux = 16*300. When B = 2048, R = 2048/16 = 128, that is, block interleaving is performed with R = 128 and the whole data block is divided into 3 interleaving blocks: R0 = 128, R1 = 128, R2 = 44. For the first two blocks, data are written 16 per row for 128 rows, and after writing is finished the 128 data of each column are output in sequence. So that the output data remain contiguous, the output address is offset: for the first block, column 0 is output to positions 0-127, column 1 is offset by 300 data positions relative to column 0 and is written to positions 300-427, and so on. When the second block is output, column 0 is written to positions 128-255 and column 1 to positions 428-555; when the third block is output, column 0 is written to positions 256-299 and column 1 to positions 556-599, and so on. If the data block size is Cmux*Rmux = 64*300 and B = 2048, then R = 2048/64 = 32, that is, block interleaving is performed with R = 32 and the whole data block is divided into 10 interleaving blocks: R0 = 32, R1 = 32, ..., R9 = 12.
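For illustration only, the following sketch (the function and variable names are not from the patent) reproduces the block-granularity calculation and the per-column output offsets of the first example above (Cmux = 16, Rmux = 300, B = 2048):

def block_rows(c_mux, r_mux, b):
    """Split Rmux rows into blocks of granularity R = B // Cmux."""
    r = b // c_mux
    rows = []
    remaining = r_mux
    while remaining > 0:
        rows.append(min(r, remaining))
        remaining -= min(r, remaining)
    return r, rows

def column_output_range(col, block_idx, rows, r_mux):
    """Output positions of one column of one block in the final (DDR) order."""
    start = col * r_mux + sum(rows[:block_idx])   # column offset + rows already output
    return start, start + rows[block_idx] - 1

r, rows = block_rows(c_mux=16, r_mux=300, b=2048)
print(r, rows)                                # 128, [128, 128, 44]
print(column_output_range(0, 0, rows, 300))   # (0, 127)
print(column_output_range(1, 0, rows, 300))   # (300, 427)
print(column_output_range(1, 1, rows, 300))   # (428, 555)
print(column_output_range(1, 2, rows, 300))   # (556, 599)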
By setting the size of B, the interleaving processing method in the embodiment of the present application can set a variable block granularity R for different interleaving tasks to be processed and determine the block granularity and the number of blocks from R, so that interleaving blocks of any specification can be processed with a fixed storage space size. Moreover, when the number of columns of the interleaving task is small, the block granularity R can be large, which improves bus efficiency.
In the embodiment of the present application, in order to improve the "row write, column read"/"column write, row read" throughput, the interleaver is organized as 4 dual-port TP-RAM blocks so that two positions can be written or read in each clock cycle. With 4 dual-port TP-RAMs, in order to ensure that the read/write ports of the TP-RAMs do not collide during the interleaving read/write process, the data in the TP-RAMs need to be stored in a specific format, for example in the manner shown in fig. 8. This arrangement effectively avoids read/write collisions.
When writing to the 4 dual-port TP-RAM blocks during interleaving, 2 data items can be written per clock cycle. For example, as shown in fig. 9, x0 and x1 are written in the first clock cycle, to address 0 of RAM0 and RAM1 respectively; x2 and x3 are written in the second clock cycle, to address 1 of RAM0 and RAM1 respectively; and so on until all data are written. The read operation is similar to the write operation: the first clock cycle reads address 0 of RAM0 and RAM2, the second clock cycle reads address 6 of RAM0 and RAM2, and so on until all data are read.
The process of the write operation and the read operation can be implemented as follows:
x(k) -> x(r, c), where r = 0, 1, 2, ..., Rmax-1 and c = 0, 1, 2, ..., Cmax-1. Rmax is the total number of rows of the interleaving matrix and Cmax is the total number of columns of the interleaving matrix, and the following relationships hold:
k = r*Cmax + c
r = floor(k/Cmax), where floor() denotes rounding down
c = k % Cmax, where % denotes the modulo operation
Suppose that wr0, wr1, wr2 and wr3 respectively represent write enable signals of the INTL RAMs 0-3. waddr0 to waddr3 respectively indicate write addresses of INTL RAMs 0 to 3. Then:
wr0=~(r%2)&~(c%2);
wr1=~(r%2)&(c%2);
wr2=(r%2)&~(c%2);
wr3=(r%2)&(c%2);
(The write-address calculation formula for waddr0 to waddr3 is given as an image in the original publication.)
where MaxAddr is the actual depth of the physical RAM.
Let rd0, rd1, rd2, rd3 denote the read enable signals of the INTL RAMs 0-3, respectively. The raddr0 to raddr3 respectively indicate read addresses of the INTL RAMs 0 to 3. Then:
rd0=~(r%2)&~(c%2);
rd1=~(r%2)&(c%2);
rd2=(r%2)&~(c%2);
rd3=(r%2)&(c%2);
It can be seen that the read and write enable and address calculation formulas are the same; the difference is the traversal order: the write operation increments the column index c first and then the row index r, whereas the read operation increments the row index r first and then the column index c.
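For illustration only, the following sketch models the bank-selection and enable logic above in software. The enable expressions follow the formulas given in the text; the in-bank address formula is an assumption, since the original write-address expression is only provided as an image, and the names are not the patent's:

def rc_from_k(k, c_max):
    """Map the linear index k to (row r, column c) of the interleaving matrix."""
    return k // c_max, k % c_max              # r = floor(k/Cmax), c = k % Cmax

def bank_enables(r, c):
    """One-hot enables for the 4 RAM banks (same form for wr0..wr3 and rd0..rd3)."""
    en0 = (r % 2 == 0) and (c % 2 == 0)       # ~(r%2) & ~(c%2)
    en1 = (r % 2 == 0) and (c % 2 == 1)       # ~(r%2) &  (c%2)
    en2 = (r % 2 == 1) and (c % 2 == 0)       #  (r%2) & ~(c%2)
    en3 = (r % 2 == 1) and (c % 2 == 1)       #  (r%2) &  (c%2)
    return en0, en1, en2, en3

def bank_address(r, c, c_max, max_addr):
    """Assumed in-bank address: each bank holds every other (row, column) pair,
    wrapped modulo MaxAddr (the actual depth of the physical RAM)."""
    cols_per_bank = (c_max + 1) // 2
    return ((r // 2) * cols_per_bank + (c // 2)) % max_addr

# Write order increments c first, then r; read order increments r first, then c.
C_MAX, R_MAX, MAX_ADDR = 4, 4, 1024
write_order = [(r, c) for r in range(R_MAX) for c in range(C_MAX)]
read_order = [(r, c) for c in range(C_MAX) for r in range(R_MAX)]
for r, c in write_order:
    print((r, c), bank_enables(r, c), bank_address(r, c, C_MAX, MAX_ADDR))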
In one possible implementation manner in this embodiment of the present application, in order to improve the blocking efficiency, the interleaver may use a circular buffer mechanism: a buffer space of B*2 data is reserved for the task to be processed, which is sufficient for interleaving any Cmux*Rmux matrix. The circular buffer mechanism can be understood as follows: when switching between different interleaving-block tasks, the remaining address space may not be enough to store a complete interleaving block; in that case the remaining space is filled according to a dynamic waterline, the data not yet input is back-pressured, and it is input when space allows. As shown in fig. 11, after interleaving blocks Bb0SLC2, Bs1, Bs2, and Bs3 are stored in RAM2, the remaining storage space is not sufficient to store interleaving block Bb1SLC0 completely; in this case part of interleaving block Bb1SLC0 is stored first, and the remainder of Bb1SLC0 is written when space allows, as shown in fig. 11. Similarly, in RAM3, part of interleaving block Bb2SLC0 is stored circularly. In the embodiment of the present application, when the above circular buffer mechanism is adopted, it may be understood that the N block storage spaces obtained by dividing the storage space of the interleaver include a third block storage space whose size is smaller than the maximum block storage space, and the interleaving block written into the third block storage space is a partial interleaving block among the interleaving blocks obtained by block processing the interleaving task to be processed. With this circular buffer mechanism, the scheduling-efficiency loss caused by interleaving large and small interleaving blocks after blocking can be avoided.
In the embodiment of the present application, with the circular buffer mechanism, the actual write address of the physical RAM is obtained by taking the computed logical address modulo MaxAddr.
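A minimal sketch of this circular-buffer behaviour, with assumed class and method names and a simple back-pressure scheme standing in for the dynamic waterline described above:

class CircularInterleaveBuffer:
    def __init__(self, max_addr):
        self.max_addr = max_addr   # MaxAddr: actual depth of the physical RAM
        self.wr_ptr = 0            # logical write pointer
        self.rd_ptr = 0            # logical read pointer

    def free_space(self):
        return self.max_addr - (self.wr_ptr - self.rd_ptr)

    def try_write(self, count):
        """Accept up to `count` entries; the remainder is back-pressured."""
        accepted = min(count, self.free_space())
        phys_addrs = [(self.wr_ptr + i) % self.max_addr for i in range(accepted)]
        self.wr_ptr += accepted
        return phys_addrs, count - accepted     # (physical addresses, back-pressured)

    def read(self, count):
        self.rd_ptr += min(count, self.wr_ptr - self.rd_ptr)

buf = CircularInterleaveBuffer(max_addr=2048)
addrs, held = buf.try_write(2500)       # 2048 entries accepted, 452 back-pressured
buf.read(1000)                          # reading frees space ...
addrs2, held2 = buf.try_write(held)     # ... so the back-pressured data can enter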
The bit-level implementation apparatuses shown in fig. 1 and fig. 2 apply the interleaving processing method of the embodiment of the present application; the interleaving process is shown schematically in fig. 12. In fig. 12, for interleaving ("row write, column read"), after two-dimensional matrix mapping the data x0, x1, x2, ..., x15, ... are written sequentially into the interleaving memory and then read from it in the order x0, x12, x24, x36, ..., x1, x13, .... For de-interleaving ("column write, row read"), after two-dimensional matrix mapping the data x0, x12, x24, x36, ..., x1, x13, ... are written sequentially into the de-interleaving memory and then read from it in the order x0, x1, x2, ..., x15, ....
In another embodiment of the present application, when a multi-layer codeword needs layer mapping, the layers may be folded into the columns of the interleaving matrix; that is, the number of columns of the equivalent interleaving matrix is the product of the number of columns of the interleaving matrix corresponding to the interleaving task to be processed and the number of layers of the layer mapping. Interleaving is then performed with the interleaving processing method described above, so that layer mapping is completed while the row-column interleaving is completed. For example, this may be accomplished as follows:
assuming the number of layers is LmuxThe number of columns of the interleaving matrix is CmuxThen the column number of the equivalent interleaving matrix is represented as Ceq=Lmux*Cmux. When writing in "row" order, the data of column 0 layer 0, column 0 layer 1, column 0 layer 2, column 0 layer 3, column 1 layer 0, column 1 layer 1, column 1 layer 2, and column 1 layer 3 … … are equivalently written together as columns 0, 1, 2, 3, 5, 6, 7, 8 … …. When reading out according to the 'column' sequence, firstly reading out all data of column 0 in the interleaving block, then reading out data of column 1, then reading out data of column 2, then reading out data of column 3, then reading out column 4, namely in the same interleaving processThe hierarchical mapping is completed. The implementation process is shown in fig. 13.
When the circular buffer mechanism of the above embodiment is used for this interleaving, the data arrangement in the buffer changes accordingly; taking two layers as an example, the data arrangement is as shown in fig. 14. In addition, when calculating the output address offset, an inter-layer offset needs to be added. The specific implementation process is as follows (a code sketch follows the list):
1) According to the codeword output start address cw_o_base, the output start address of each Sym can be calculated:
sym_o_base=cw_o_base+Rmux*sym_idx*data_width*lay_num
2) calculating an output offset address according to the RE serial number of the data in the current interleaving block:
sym_o_offset=(re_idx+Rmux*lay_idx)*data_width
3) outputting data to DDR1 according to Sym layer granularity in each interleaving block, wherein the starting address of each output is as follows:
sym_o_addr=sym_o_base+sym_o_offset
4) The data length of each output is R*data_width. After Cmux*lay_num*N such outputs, the data of the whole interleaved codeword can be obtained from the DDR1 (where N is the number of interleaving blocks; when the remaining data amount of the last interleaving block is less than R, the output is performed according to the actual size).
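For illustration only, the steps above can be collected into a single address computation; the variable names follow the formulas in the text, while the example values are arbitrary:

def sym_output_address(cw_o_base, r_mux, sym_idx, re_idx, lay_idx,
                       data_width, lay_num):
    # 1) output start address of each Sym
    sym_o_base = cw_o_base + r_mux * sym_idx * data_width * lay_num
    # 2) offset from the RE index of the data within the current interleaving block
    sym_o_offset = (re_idx + r_mux * lay_idx) * data_width
    # 3) start address of each output burst to DDR1
    return sym_o_base + sym_o_offset

# Example: codeword base 0, Rmux = 300 REs, 64-bit data, 4 layers.
addr = sym_output_address(cw_o_base=0, r_mux=300, sym_idx=1, re_idx=0,
                          lay_idx=2, data_width=64, lay_num=4)
print(addr)   # each burst then transfers R*data_width of data (step 4)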
In another embodiment of the present application, for the interleaving-bypass scenario in the LTE protocol, each column after interleaving bypass may be treated as a small task of Lmux columns; the number of columns of the equivalent interleaving matrix is Ceq = Lmux, the number of tasks is T = Cmux, and the number of blocks is Cmux*Mmux. The interleaving is then performed according to the interleaving processing method of the above embodiment, so that layer mapping can be completed while interleaving is bypassed. As shown in fig. 15, when the data of each column is written in "row" order, the data of column 0 layer 0, column 0 layer 1, column 0 layer 2, and column 0 layer 3 are equivalently written as columns 0, 1, 2, and 3. When reading out in "column" order, all data of column 0 in the interleaving block are read first, then the data of column 1, then column 2, then column 3. After one column is finished, the next column is processed in the same way; that is, layer mapping is completed while interleaving is bypassed.
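A short sketch of this interleaving-bypass equivalence, with illustrative names and values (here lay_num stands for the number of layers, i.e. the Lmux of the equivalent matrix):

def bypass_tasks(c_mux, lay_num):
    """Each original column becomes one small task whose equivalent matrix has
    Ceq = lay_num columns (one per layer); there are T = Cmux such tasks."""
    return [{"task": col, "equivalent_columns": list(range(lay_num))}
            for col in range(c_mux)]

for task in bypass_tasks(c_mux=3, lay_num=4):
    # each task writes its layers as columns 0..lay_num-1 in "row" order and
    # reads them back column by column, which is exactly the layer-mapping order
    print(task)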
According to the interleaving processing method provided by the embodiment of the application, the storage space of the interleaver is divided into a plurality of block storage spaces according to the size of the storage space required by the interleaving task, the interleaving tasks corresponding to a plurality of interleaving blocks are processed in parallel in the plurality of block storage spaces, and in the interleaving read-write process, the interleaving blocks are independent from each other, so that the reading operation or the writing operation is performed by taking the interleaving blocks as a unit. And the size of the interleaving block can be flexibly configured, and the interleaving block processing of any specification can be supported under the condition of fixing the size of the storage space.
To explain the beneficial effects of the present application more clearly, take a full-specification transport code block in LTE as an example: with 256-Quadrature Amplitude Modulation (256QAM), the data size of code-block channel interleaving is 3300RE * 64Sym * 4Lay * 64Bit, whereas with the interleaving processing method of the embodiment of the present application only 2048RE * 64Bit * 2Buffer is needed to realize stream processing. The storage capacity, processing delay, switching delay, and start-up delay required by the prior art and by the present application are shown in table 2 below:
TABLE 2
(Table 2 is provided as images in the original publication; its contents are not reproduced here.)
As can be seen from table 2, the present application can reduce the required storage capacity, processing delay, switching delay, and start-up delay in the interleaving process.
Based on the interleaving processing method provided by the above embodiment, the embodiment of the present application further provides an interleaving processing apparatus. It is understood that the interleaving processing device includes hardware structures and/or software modules for performing the above functions. The elements and algorithm steps of the various examples described in connection with the embodiments disclosed herein may be embodied in hardware or in a combination of hardware and computer software. Whether a function is performed as hardware or computer software drives hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present teachings.
In the embodiment of the present application, functional units may be divided according to the above method example, for example, each functional unit may be divided corresponding to each function, or two or more functions may be integrated into one processing unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit. It should be noted that the division of the unit in the embodiment of the present application is schematic, and is only a logic function division, and there may be another division manner in actual implementation.
In the case of using an integrated unit, fig. 16 shows a schematic structural diagram of an interleaving processing apparatus. The interleaving processing apparatus 100 includes an obtaining unit 101, a processing unit 102, and an interleaving unit 103. The obtaining unit 101 is configured to obtain an interleaving task to be processed. The processing unit 102 is configured to determine the storage space required by the interleaver to interleave the interleaving task to be processed acquired by the obtaining unit 101, and to divide the storage space of the interleaver into N block storage spaces according to the storage space required by the interleaving task to be processed and the maximum block storage space of the interleaver, where N is a positive integer and the size of each of the N block storage spaces is smaller than or equal to the size of the maximum block storage space. The interleaving unit 103 is configured to divide the interleaving task to be processed, acquired by the obtaining unit 101, into at least one interleaving block, and to write the interleaving blocks, in units of interleaving blocks, into the N block storage spaces obtained by the division performed by the processing unit 102.
In a possible implementation manner, if the storage space required by the interleaving task to be processed is less than or equal to the maximum block storage space, the N block storage spaces obtained by the division performed by the processing unit 102 include a first block storage space whose size equals the storage space required by the interleaving task to be processed, and the interleaving unit 103 writes the interleaving task to be processed into the first block storage space obtained by the processing unit 102.
In another possible implementation manner, if the storage space required by the interleaving task to be processed is greater than the maximum block storage space, the N block storage spaces obtained by the division performed by the processing unit 102 include at least one second block storage space whose size equals the maximum block storage space, and the interleaving unit 103 writes the interleaving blocks obtained by block processing the interleaving task to be processed into the second block storage spaces obtained by the division performed by the processing unit 102.
In another possible implementation manner, the N blocking storage spaces further include a third blocking storage space, and the interleaving block written in the third blocking storage space is a partial interleaving block in the interleaving blocks obtained by blocking the interleaving task to be processed.
In another possible implementation manner, if the memory space required by the interleaving task to be processed is greater than the maximum blocking memory space, the interleaving unit 103 performs blocking processing on the interleaving task to be processed to obtain the number of rows of an interleaving matrix in an interleaving block, which is the quotient of the maximum blocking memory space and the number of columns of the interleaving matrix corresponding to the interleaving task to be processed; and performing block processing on the interleaving task to be processed to obtain the number of columns of an interleaving matrix in an interleaving block as the number of the interleaving matrix columns corresponding to the interleaving task to be processed.
In yet another possible implementation manner, the number of columns of the interleaving matrix in an interleaving block obtained by block processing the interleaving task to be processed is the product of the number of columns of the interleaving matrix corresponding to the interleaving task to be processed and the number of layers for performing layer mapping on the interleaving matrix.
When implemented in hardware, the obtaining unit 101 and the processing unit 102 may be mapping circuits, and the interleaving unit 103 may be a read-write circuit. When the acquiring unit and the processing unit are mapping circuits and the interleaving unit is a read-write circuit, the interleaving processing means may be an interleaver as shown in fig. 17.
Fig. 17 is a schematic diagram illustrating another structure of an interleaving apparatus according to an embodiment of the present application. In fig. 17, the interleaving processing apparatus may be an interleaver 1000, and the interleaver 1000 includes a mapping circuit 1001, a read/write circuit 1002, and an interleaving memory 1003.
The mapping circuit 1001 is configured to obtain the interleaving task to be processed and to perform two-dimensional index mapping on the data to be interleaved in the task so as to obtain the interleaving matrix, such that each data item of the data to be interleaved corresponds to one element of the interleaving matrix. The mapping circuit 1001 is further configured to determine the storage space required by the interleaver to interleave the interleaving task to be processed, and to divide the storage space of the interleaving memory 1003 into N block storage spaces according to the storage space required by the interleaving task to be processed and the maximum block storage space of the interleaving memory 1003, where N is a positive integer and the size of each of the N block storage spaces is smaller than or equal to the size of the maximum block storage space. The read-write circuit 1002 is configured to divide the interleaving task to be processed into at least one interleaving block and to write the interleaving blocks, in units of interleaving blocks, into the N block storage spaces. The interleaving memory 1003 stores the data during the interleaving process and generally includes one or more RAMs.
The mapping circuit 1001 and the read-write circuit 1002 perform the corresponding functions in the interleaving processing method of the above embodiments; for the specific implementation process, reference may be made to the related descriptions of the above embodiments, which are not repeated here.
In the embodiments of the present application, the interleaving processing apparatus 100 and the interleaver 1000 have the function of performing the interleaving processing performed by the interleaver in the above method embodiments. For details that are not exhaustively described here, reference may be made to the related descriptions of the above embodiments, which are not repeated in the embodiments of the present application.
Based on the foregoing embodiments, the present application further provides a computer-readable medium or a computer program product for storing computer software instructions used by the interleaving processing apparatus and the interleaver described above, including a program for executing the interleaving processing method of the foregoing embodiments.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, embodiments of the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present application are described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the embodiments of the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the embodiments of the present application fall within the scope of the claims of the present application and their equivalents, the present application is also intended to encompass such modifications and variations.

Claims (10)

1. An interleaving processing method, characterized in that the method comprises:
acquiring an interleaving task to be processed, and determining a storage space required by an interleaver for interleaving the interleaving task to be processed;
dividing a storage space of the interleaver into N block storage spaces according to the storage space required by the interleaving task to be processed and a maximum block storage space of the interleaver, wherein N is a positive integer, and the size of each block storage space in the N block storage spaces is smaller than or equal to the size of the maximum block storage space;
dividing the interleaving task to be processed into at least one interleaving block, and writing the interleaving blocks into the N block storage spaces by taking an interleaving block as a unit; wherein, if the storage space required by the interleaving task to be processed is greater than the maximum block storage space, the number of rows of an interleaving matrix in each interleaving block obtained by performing block processing on the interleaving task to be processed is the quotient of the maximum block storage space and the number of columns of the interleaving matrix corresponding to the interleaving task to be processed, and the number of columns of the interleaving matrix in each interleaving block obtained by performing block processing on the interleaving task to be processed is the number of columns of the interleaving matrix corresponding to the interleaving task to be processed.
2. The method according to claim 1, wherein if the storage space required by the interleaving task to be processed is less than or equal to the maximum block storage space, the N block storage spaces include a first block storage space whose size is the storage space required by the interleaving task to be processed, and the interleaving block written in the first block storage space is the interleaving task to be processed.
3. The method according to claim 1, wherein if the storage space required by the interleaving task to be processed is greater than the maximum block storage space, the N block storage spaces include at least one second block storage space whose size is that of the maximum block storage space, and the interleaving blocks written in the second block storage space are interleaving blocks obtained by performing block processing on the interleaving task to be processed.
4. The method according to any one of claims 1 to 3, wherein the N block storage spaces further include a third block storage space, and the interleaving blocks written in the third block storage space are part of the interleaving blocks obtained by performing block processing on the interleaving task to be processed.
5. The method according to claim 4, wherein the number of columns of the interleaving matrix in each interleaving block obtained by performing block processing on the interleaving task to be processed is the product of the number of columns of the interleaving matrix corresponding to the interleaving task to be processed and the number of layers used for layer mapping of the interleaving matrix.
6. An interleaving processing apparatus, comprising:
an acquiring unit, configured to acquire an interleaving task to be processed;
a processing unit, configured to determine a storage space required by an interleaver for performing interleaving processing on the interleaving task to be processed acquired by the acquiring unit, and to divide a storage space of the interleaver into N block storage spaces according to the storage space required by the interleaving task to be processed and a maximum block storage space of the interleaver, wherein N is a positive integer, and the size of each block storage space in the N block storage spaces is smaller than or equal to the size of the maximum block storage space; and
an interleaving unit, configured to divide the interleaving task to be processed acquired by the acquiring unit into at least one interleaving block, and to write the interleaving blocks into the N block storage spaces obtained by the division performed by the processing unit by taking an interleaving block as a unit; wherein the interleaving unit divides the interleaving task to be processed into at least one interleaving block in the following manner: if the storage space required by the interleaving task to be processed is greater than the maximum block storage space, the number of rows of an interleaving matrix in each interleaving block obtained by performing block processing on the interleaving task to be processed is the quotient of the maximum block storage space and the number of columns of the interleaving matrix corresponding to the interleaving task to be processed, and the number of columns of the interleaving matrix in each interleaving block obtained by performing block processing on the interleaving task to be processed is the number of columns of the interleaving matrix corresponding to the interleaving task to be processed.
7. The apparatus according to claim 6, wherein the processing unit divides the storage space of the interleaver into N block storage spaces according to the storage space required by the interleaving task to be processed and the maximum block storage space of the interleaver in the following manner:
if the storage space required by the interleaving task to be processed is less than or equal to the maximum block storage space, obtaining, through the division, N block storage spaces including a first block storage space whose size is the storage space required by the interleaving task to be processed; and
the interleaving unit writes the interleaving blocks into the N block storage spaces by taking an interleaving block as a unit in the following manner:
writing the interleaving task to be processed into the first block storage space obtained by the division performed by the processing unit.
8. The apparatus according to claim 7, wherein the processing unit divides the storage space of the interleaver into N block storage spaces according to the storage space required by the interleaving task to be processed and the maximum block storage space of the interleaver in the following manner:
if the storage space required by the interleaving task to be processed is greater than the maximum block storage space, obtaining, through the division, N block storage spaces including at least one second block storage space whose size is that of the maximum block storage space; and
the interleaving unit writes the interleaving blocks into the N block storage spaces by taking an interleaving block as a unit in the following manner:
writing the interleaving blocks obtained by performing block processing on the interleaving task to be processed into the second block storage space obtained by the division performed by the processing unit.
9. The apparatus according to any one of claims 6 to 8, wherein the N block storage spaces further include a third block storage space, and the interleaving blocks written in the third block storage space are part of the interleaving blocks obtained by performing block processing on the interleaving task to be processed.
10. The apparatus according to claim 9, wherein the number of columns of the interleaving matrix in each interleaving block obtained by performing block processing on the interleaving task to be processed is the product of the number of columns of the interleaving matrix corresponding to the interleaving task to be processed and the number of layers used for layer mapping of the interleaving matrix.
CN201710295108.1A 2017-04-28 2017-04-28 Interleaving processing method and device Active CN107241163B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710295108.1A CN107241163B (en) 2017-04-28 2017-04-28 Interleaving processing method and device

Publications (2)

Publication Number Publication Date
CN107241163A CN107241163A (en) 2017-10-10
CN107241163B (en) 2020-02-21

Family

ID=59985521

Country Status (1)

Country Link
CN (1) CN107241163B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115865267A (en) * 2019-03-29 2023-03-28 中兴通讯股份有限公司 Resource scheduling method, device and storage medium
CN114741329B (en) * 2022-06-09 2022-09-06 芯动微电子科技(珠海)有限公司 Multi-granularity combined memory data interleaving method and interleaving module

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101662336A (en) * 2009-09-16 2010-03-03 北京海尔集成电路设计有限公司 Configurable interleave and deinterleave method and device thereof
CN101719810A (en) * 2009-11-13 2010-06-02 清华大学 Simulation generation method for parallel interleaver
CN101924608A (en) * 2010-09-01 2010-12-22 北京天碁科技有限公司 Method, device and transmitter for realizing block interleaving
CN102356554A (en) * 2011-08-23 2012-02-15 华为技术有限公司 Turbo code data interweaving process method and interweaving device used for interweaving turbo code data
CN104184536A (en) * 2013-05-21 2014-12-03 华为技术有限公司 Sub block interleaving control method based on LTE (Long Term Evolution) Turbo decoding, device and equipment
CN105490776A (en) * 2015-11-26 2016-04-13 华为技术有限公司 Interleaving method and interleaver
CN106603191A (en) * 2015-10-15 2017-04-26 普天信息技术有限公司 Parallel-processing-based block interleaving method and apparatus

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9128781B2 (en) * 2012-12-28 2015-09-08 Intel Corporation Processor with memory race recorder to record thread interleavings in multi-threaded software
US9509545B2 (en) * 2013-07-19 2016-11-29 Blackberry Limited Space and latency-efficient HSDPA receiver using a symbol de-interleaver

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant