WO2024098687A1 - Method and device for processing interleaved data, storage medium, and electronic device - Google Patents


Info

Publication number
WO2024098687A1
Authority
WO
WIPO (PCT)
Application number
PCT/CN2023/092082
Other languages
English (en)
French (fr)
Inventor
李钊 (Li Zhao)
Original Assignee
深圳市中兴微电子技术有限公司 (Shenzhen ZTE Microelectronics Technology Co., Ltd.)
Application filed by 深圳市中兴微电子技术有限公司 (Shenzhen ZTE Microelectronics Technology Co., Ltd.)
Publication of WO2024098687A1


Definitions

  • Embodiments of the present application relate to the field of communications, and in particular, to a method, device, storage medium, and electronic device for processing interleaved data.
  • In the field of communications, forward error correction (FEC) coding can ensure the credibility of communication signals and achieve reliable transmission over long distances.
  • the receiving end can correct the error and restore the payload data after decoding.
  • the receiving side may encounter a burst error (BE) in a certain block of data, that is, when an error occurs, the errors are overly concentrated in a single payload or a continuous payload interval, exceeding the system's correctable error-carrying capacity and resulting in the loss of the original payload data.
  • to disperse possible over-limit burst errors and make full use of the system's error-correction coding capability, the existing technology introduces interleaving after encoding and deinterleaving before decoding.
  • the data encoded on the sending side is arranged into a matrix in rows and columns, and the matrix is divided into small blocks.
  • the data in the small blocks are rearranged and interleaved in sequence; on the receiving side, the interleaved data is restored to the original encoded data according to a similar processing method on the interleaving side, and output to the decoding unit for data decoding and error correction processing.
  • the interleaving module needs to write the entire interleaved block into the cache space before it can read the data according to the interleaved processing method.
  • a ping-pong operation is adopted.
  • the conventional ping-pong operation processing method requires twice the storage space of the interleaved data, and the system resource consumption is large, which becomes a bottleneck restricting the system performance.
  • the embodiments of the present application provide a method, device, storage medium and electronic device for processing interleaved data, so as to at least solve the problem in the related art that processing interleaved data occupies more storage resources.
  • a method for processing interleaved data comprising: determining a first cache space for first interleaved data, wherein the first cache space includes N cache units, and the N is a natural number greater than 1;
  • in the first cache space and a second cache space, the first interleaved data and other interleaved data are written and read out in a ping-pong read-write manner, wherein the second cache space includes M cache units, and the M is a natural number less than the N.
  • a device for processing interleaved data comprising: a first determining module, configured to determine a first cache space for first interleaved data, wherein the first cache space includes N cache units, and N is a natural number greater than 1; a first processing module, configured to write and read the first interleaved data and other interleaved data in the first cache space and a second cache space in a ping-pong read-write manner, wherein the second cache space includes M cache units, where M is a natural number smaller than N.
  • the above-mentioned first determination module includes: a first determination unit, configured to determine the data amount of the above-mentioned first interleaved data; a first generation unit, configured to generate matrix code elements of preset rows and columns according to the data amount of the above-mentioned first interleaved data, wherein the above-mentioned matrix code elements include N matrix areas, each of the above-mentioned matrix areas corresponds to one of the above-mentioned cache units, and each of the above-mentioned matrix areas includes multiple matrix units; a second determination unit, configured to determine the above-mentioned matrix code elements as the above-mentioned first cache space.
  • the first determination unit includes: a first acquisition subunit, configured to acquire interleaved data output by the first encoder and interleaved data output by the second encoder; a first determination subunit, configured to determine the data amount of the interleaved data output by the first encoder and the data amount of the interleaved data output by the second encoder as the data amount of the first interleaved data, wherein the interleaved data output by the first encoder is cached in the matrix space of the even rows in the matrix code elements, and the interleaved data output by the second encoder is cached in the matrix space of the odd rows in the matrix code elements.
  • the first processing module includes: a first cache unit, configured to cache the first interleaved data into the first cache space; a first reading unit, configured to write part of the other interleaved data into M cache units when part of the first interleaved data is read from K cache units, wherein the K cache units are cache units located in the same column among the N cache units, and the cache spaces of the K cache units and the M cache units are the same; a first writing unit, configured to write the remaining data of the other interleaved data into the K cache units when part of the first interleaved data is fully read from the K cache units, and continue to read the remaining data of the first interleaved data in N-K cache units, wherein the amount of data of the remaining data of the other interleaved data is the same as the cache space in the K cache units, and the N-K cache units are cache units located in the same column among the N cache units.
  • the above-mentioned device also includes: a first marking module, which is configured to mark the current state of each of the above-mentioned cache units after writing and reading the above-mentioned first interleaved data and other interleaved data in the above-mentioned first cache space and the second cache space in a ping-pong read and write manner, wherein the current state of each of the above-mentioned cache units includes at least one of the following: in a reading state, a read empty or empty state, and in a writing state, a write full or full state; a first control module, which is configured to control the reading or writing of the above-mentioned cache unit based on the current state of the above-mentioned cache unit.
  • the device further includes: a first reading module, configured to, after determining a first cache space for the first interleaved data, read the first interleaved data when the first interleaved data is cached in the first cache space, so as to store the first interleaved data in a storage device.
  • the above-mentioned first reading module includes: a third determination unit, configured to determine the multiple columns of matrix space included in each of the above-mentioned matrix areas, wherein each column of the above-mentioned matrix space includes multiple of the above-mentioned matrix units, each column of the above-mentioned matrix space corresponds to a storage area in the above-mentioned storage device, and one of the above-mentioned storage areas includes a group of storage units; a fourth determination unit, configured to determine the multiple sub-matrix spaces included in each of the above-mentioned matrix units; a second reading unit, configured to read the data blocks in each sub-column in each of the above-mentioned sub-matrix spaces in turn, so as to store the data blocks in each of the above-mentioned sub-columns in each of the above-mentioned storage units.
  • the apparatus further includes: a second determination module, configured to determine a read-write mapping table of the first interleaved data before sequentially reading the data blocks in each sub-column in each of the sub-matrix spaces and storing the data blocks of each sub-column in each of the storage units, wherein the read-write mapping table includes a read address for reading the first interleaved data from the first cache space, a write address for writing the first interleaved data into the storage device, and an operation mode for remapping the first interleaved data to the storage units in a row-storage-column-access manner.
  • the first search module is configured to use the read-write mapping table to search and read the address of each data block in the first interleaved data.
  • the above-mentioned device also includes: a second reading module, which is configured to read the data blocks in each sub-column in each of the above-mentioned sub-matrix spaces in turn, so as to store the data blocks in each of the above-mentioned sub-columns in each of the above-mentioned storage units, and then read the data blocks in the storage addresses corresponding to each of the above-mentioned sub-columns in each of the above-mentioned storage units in turn to read out the above-mentioned first interleaved data.
  • a computer-readable storage medium in which a computer program is stored, wherein the computer program is configured to execute the steps of any of the above method embodiments when running.
  • an electronic device including a memory and a processor, wherein the memory stores a computer program, and the processor is configured to run the computer program to execute the steps in any one of the above method embodiments.
  • FIG1 is a hardware structure block diagram of a mobile terminal of a method for processing interleaved data according to an embodiment of the present application
  • FIG2 is a flow chart of a method for processing interleaved data according to an embodiment of the present application
  • FIG3 is a schematic diagram of partitioning a cache space according to an embodiment of the present application.
  • FIG4 is a schematic diagram of a state of a cache unit according to an embodiment of the present application.
  • FIG5 is a schematic diagram of a cache space area division according to an embodiment of the present application.
  • FIG6 is a schematic diagram of an interleaved data output method according to an embodiment of the present application.
  • FIG7-1 is a diagram (I) of an interleaved data cache read/write address and read/write pointer mapping according to an embodiment of the present application
  • FIG7-2 is a diagram (II) of an interleaved data cache read/write address and read/write pointer mapping according to an embodiment of the present application
  • FIG8 is a structural block diagram of an interleaving system according to an embodiment of the present application.
  • FIG9 is a flowchart of processing interleaved data according to an embodiment of the present application.
  • FIG10-1 is a recursive diagram (I) of the interleaved data input rearrangement of the 0th region according to an embodiment of the present application;
  • FIG10-2 is a recursive diagram (II) of the interleaved data input rearrangement of the 0th region according to an embodiment of the present application;
  • FIG10-3 is a recursive diagram (I) of the interleaved data input rearrangement of the first region according to an embodiment of the present application;
  • FIG10-4 is a recursive diagram (II) of the interleaved data input rearrangement of the first region according to an embodiment of the present application;
  • FIG10-5 is a recursive diagram (I) of the interleaved data input rearrangement of the second region according to an embodiment of the present application;
  • FIG10-6 is a recursive diagram (II) of the interleaved data input rearrangement of the second region according to an embodiment of the present application;
  • FIG10-7 is a recursive diagram (I) of the interleaved data input rearrangement of the third region according to an embodiment of the present application;
  • FIG10-8 is a recursive diagram (II) of the interleaved data input rearrangement of the third region according to an embodiment of the present application;
  • FIG11 is a structural block diagram of the processing of interleaved data according to an embodiment of the present application.
  • FIG1 is a hardware structure block diagram of a mobile terminal of a method for processing interleaved data in an embodiment of the present application.
  • the mobile terminal may include one or more (only one is shown in FIG1 ) processors 102 (the processor 102 may include but is not limited to a processing device such as a microprocessor MCU or a programmable logic device FPGA) and a memory 104 configured to store data, wherein the mobile terminal may also include a transmission device 106 and an input/output device 108 for communication functions.
  • FIG1 is only for illustration and does not limit the structure of the mobile terminal.
  • the mobile terminal may also include more or fewer components than those shown in FIG1 , or have a configuration different from that shown in FIG1 .
  • the memory 104 may be configured to store computer programs, for example, software programs and modules of application software, such as the computer program corresponding to the method for processing interleaved data in the embodiment of the present application.
  • the processor 102 executes various functional applications and data processing by running the computer program stored in the memory 104, that is, to implement the above method.
  • the memory 104 may include a high-speed random access memory, and may also include a non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory.
  • the memory 104 may further include a memory remotely arranged relative to the processor 102, and these remote memories may be connected to the mobile terminal via a network. Examples of the above-mentioned network include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
  • the transmission device 106 is configured to receive or send data via a network.
  • Specific examples of the above-mentioned network may include a wireless network provided by a communication provider of the mobile terminal.
  • the transmission device 106 includes a network adapter (Network Interface Controller, referred to as NIC), which can be connected to other network devices through a base station so as to communicate with the Internet.
  • the transmission device 106 can be a radio frequency (Radio Frequency, referred to as RF) module, which is configured to communicate with the Internet wirelessly.
  • the principle of FEC includes: pre-encoding the signal (data payload) to be processed according to a certain algorithm, adding redundant codes (code block data for error correction) with the characteristics of the data payload itself, and sending it to the transmission channel. Then, at the receiving end, the data is decoded according to the corresponding algorithm, the bit errors are found and corrected, and the original data payload is restored.
  • FEC effectively resists the interference caused by the transmission channel at a very low cost, improves the reliability of the communication system, extends the transmission distance of the communication signal, and reduces the cost of the communication system.
  • the bit error rate is within the correctable range before the bit error can be corrected and the original payload data can be restored.
  • N bits of payload data are input, K bits of redundant code are generated according to the relevant algorithm, and then (N+K) bits of data are sent to the receiving end.
  • the payload data and redundant code are susceptible to environmental interference, which causes errors. If the errors are within the correctable range, the receiving end can correct them and restore the N-bit payload data after decoding.
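As a toy illustration of the N + K framing described above (and not the FEC code this application actually targets), a single parity bit can stand in for the redundant code with K = 1; real FEC codes such as RS or BCH are far stronger, but the encode/check round trip has the same shape:

```python
# Toy sketch of N payload bits plus K redundant bits, with K = 1 parity.
# This is an illustrative stand-in, not the FEC algorithm of the application.

def encode(payload_bits):
    """Append one parity bit so the codeword has even parity."""
    parity = sum(payload_bits) % 2
    return payload_bits + [parity]          # N + K bits go to the channel

def check(codeword):
    """Even parity holds only if an even number of bits (ideally zero) flipped."""
    return sum(codeword) % 2 == 0
```

A single flipped bit is detected (though not corrected here), which is why practical systems use stronger codes and, as discussed next, interleave them to withstand burst errors.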
  • the receiving side may encounter a burst error in a certain (N+K) bit data, that is, when an error occurs, the error is overly concentrated in a single payload or a certain continuous payload interval, exceeding the system's correctable error carrying capacity, resulting in the loss of the original payload data.
  • the possible burst over-limit errors are dispersed, the system's error correction coding capabilities are fully utilized, interleaving is introduced after encoding, and deinterleaving is introduced before decoding.
  • the M (N+K) bit data encoded by the sending side are arranged in rows and columns into an M row*(N+K) column matrix, and the matrix is divided into A*A small blocks, where both M and (N+K) are divisible by A.
  • the data in the small blocks are rearranged in sequence from top to bottom and from left to right, that is, written in rows (columns) and read out in columns (rows) in the cache space for interleaving processing; on the receiving side, the interleaved data is restored to the original encoded data according to a similar processing method on the interleaving side, and output to the decoding unit for data decoding and error correction processing.
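The write-by-row, read-by-column rearrangement described above can be sketched as follows; the dimensions used in the test are illustrative placeholders, not the application's actual parameters:

```python
# Minimal sketch of matrix interleaving: write data into a rows x cols matrix
# by rows, read it out by columns. Deinterleaving is the mirror operation.

def interleave(data, rows, cols):
    """Arrange `data` row by row, then emit it column by column."""
    assert len(data) == rows * cols
    matrix = [data[r * cols:(r + 1) * cols] for r in range(rows)]
    return [matrix[r][c] for c in range(cols) for r in range(rows)]

def deinterleave(data, rows, cols):
    # Writing by columns and reading by rows equals interleaving with the
    # dimensions swapped, restoring the original order on the receiving side.
    return interleave(data, cols, rows)
```

A burst of adjacent errors in the interleaved stream lands in different rows after deinterleaving, so each codeword sees only a few of them.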
  • the entire interleaving block needs to be written into the cache space before the data can be read out according to the interleaving processing method.
  • a ping-pong operation is adopted. Two storage areas, p0 and p1, are used. Data is first written to p0. When p0 is full, data is written to p1. At the same time, the data in p0 is read out. When p1 is full, p0 is empty. This reciprocating operation is used to achieve uninterrupted data flow processing.
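A sequential sketch of the conventional two-buffer ping-pong operation (the names p0/p1 follow the text; real hardware drains one buffer while filling the other in the same cycle):

```python
# Conventional ping-pong: two block-sized buffers alternate between the
# "being written" and "being read out" roles, so storage is 2x a block.

def pingpong(blocks):
    buffers = [[], []]                      # p0 and p1
    out = []
    for i, block in enumerate(blocks):
        read_buf = buffers[(i + 1) % 2]     # the buffer filled last time
        if read_buf:
            out.extend(read_buf)            # drain it while the other fills
            read_buf.clear()
        buffers[i % 2].extend(block)        # fill the alternate buffer
    out.extend(buffers[(len(blocks) - 1) % 2])  # flush the last filled buffer
    return out
```

This is the 2x-storage baseline; the embodiments described later reduce the requirement to 1.5x the interleaved block size.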
  • FIG. 2 is a flow chart of the method for processing interleaved data according to an embodiment of the present application. As shown in FIG. 2 , the process includes the following steps:
  • Step S202 determining a first cache space for first interleaved data, wherein the first cache space includes N cache units, where N is a natural number greater than 1;
  • Step S204 writing and reading the first interleaved data and other interleaved data in the first cache space and the second cache space in a ping-pong read-write manner, wherein the second cache space includes M cache units, where M is a natural number less than N.
  • the first cache space and the second cache space can be the two storage areas p0 and p1 mentioned above. Interleaved data can be written to p0, and when p0 is full, data is written to p1, and data in p0 is read out at the same time. When p1 is full, p0 is empty, and this reciprocating process is repeated to achieve uninterrupted processing of data flow.
  • the second cache space is smaller than the first cache space, and the size of the second cache space is half the size of the first cache space, that is, twice the cache space is not needed, and only 1.5 times the cache space is needed to implement the ping-pong operation.
  • the other interleaved data may be interleaved data output by an encoder.
  • the execution subject of the above steps may be a terminal, a server, a specific processor provided in the terminal or the server, or a processor or processing device provided relatively independently from the terminal or the server, but is not limited thereto.
  • a first cache space for the first interleaved data is determined, wherein the first cache space includes N cache units, and N is a natural number greater than 1; in the first cache space and the second cache space, the first interleaved data and other interleaved data are written and read in a ping-pong read-write manner, wherein the second cache space includes M cache units, and M is a natural number less than N.
  • a cache space twice the amount of interleaved data is not required, and only a small amount of cache space is required to achieve uninterrupted processing of interleaved data. Therefore, the problem of occupying more storage resources for processing interleaved data in the related art can be solved, thereby achieving the effect of reducing storage resources.
  • determining a first buffer space for first interleaved data includes:
  • matrix code elements of preset rows and columns according to the data amount of the first interleaved data, wherein the matrix code elements include N matrix regions, each matrix region corresponds to a cache unit, and each matrix region includes a plurality of matrix units;
  • the data volume of the first interleaved data can be determined by the original payload data and redundant data in the FEC system.
  • the original payload data in the FEC system is 111 bits
  • the redundant data is 17 bits
  • the complete first interleaved data that needs to be interleaved after encoding is 128 bits*16*84, a total of 172032 bits.
  • the first interleaved data is cut into 84 rows ⁇ 8 columns of matrix code elements, each small block of the code element is a 16 ⁇ 16 bit matrix unit, and the entire interleaved data frame occupies 84 ⁇ 8 ⁇ 16 ⁇ 16 bits, that is, 172032 bits.
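The sizes quoted above are self-consistent, as a few lines of arithmetic confirm:

```python
# Check of the frame-size arithmetic above: a 128-bit codeword (111 payload +
# 17 redundant bits), a frame of 128 * 16 * 84 bits, and an 84 x 8 grid of
# 16x16-bit matrix units all describe the same 172032 bits.

PAYLOAD_BITS = 111
REDUNDANT_BITS = 17
CODEWORD_BITS = PAYLOAD_BITS + REDUNDANT_BITS   # 128
FRAME_BITS = CODEWORD_BITS * 16 * 84            # 172032
GRID_BITS = 84 * 8 * 16 * 16                    # 84 x 8 blocks of 16x16 bits
assert FRAME_BITS == GRID_BITS == 172032
```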
  • determining the data amount of the first interleaved data includes:
  • the amount of data of the first interleaved data comes from the output of the first encoder and the second encoder.
  • the data output by the first encoder enc0 is cached in the even rows of the 84-row ⁇ 8-column matrix
  • the data output by the second encoder enc1 is cached in the odd rows of the 84-row ⁇ 8-column matrix.
  • the two encoders are delayed by 4 clock cycles in the order of 0/4, 1/5, 2/6, and 3/7 columns to fill the matrix space of the odd and even rows.
  • a total of 84/2 ⁇ 4 beats, or 168 clock cycle delays, are required to complete the processing.
  • writing and reading the first interleaved data and other interleaved data in a ping-pong read-write manner includes:
  • the first cache space can be a matrix space of 84 rows ⁇ 8 columns.
  • the 84-row × 8-column matrix space is cut into four cache units (N=4), u0, u1, u2 and u3, by upper/lower halves and left/right halves, representing rows 0 to 41 columns 0 to 3, rows 0 to 41 columns 4 to 7, rows 42 to 83 columns 0 to 3, and rows 42 to 83 columns 4 to 7, respectively.
  • each cache unit is backed by the same set of storage units; each storage unit is a single-port SDRAM with a depth of 84 and a width of 32 bits.
  • the upper half of the first interleaved data is first written into u0 and u1, and then the lower half of the first interleaved data is written into u2 and u3.
  • once filled, the first interleaved data is read out; by the time half of it has been read out, half of the next frame of interleaved data has been written in.
  • the number of bits still to be written for the next frame equals the number of bits already read out of the previous cache units, so the remaining half of the next frame can be stored in the space that has already been read out.
  • u4 and u5 are introduced, u4 is used for the upper left part of the next other interleaved data, and u5 is used for the upper right part of the next other interleaved data, and the current interleaved data frame is read out by column at the same time, thereby achieving the purpose of using 1.5 times the cache space to realize the ping-pong operation.
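The 1.5x figure follows directly from the unit sizes (labels u0..u5 as in the text): each unit holds one quarter of a frame, four units hold the current frame, and the two extra units u4/u5 bring the total to 1.5 frames:

```python
# Capacity sketch for the six-unit scheme above. Each cache unit covers
# 42 rows x 4 columns of 16x16-bit blocks, i.e. one quarter of a frame.

UNIT_BITS = 42 * 4 * 16 * 16          # one cache unit (e.g. u0): 43008 bits
FRAME_BITS = 84 * 8 * 16 * 16         # whole interleaved frame: 172032 bits
assert 4 * UNIT_BITS == FRAME_BITS            # u0..u3 hold one full frame
assert 2 * (6 * UNIT_BITS) == 3 * FRAME_BITS  # six units = 1.5 frames
```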
  • the method further includes:
  • S1 marking the current state of each cache unit, wherein the current state of each cache unit includes at least one of the following: in a reading state, a read empty or empty state, in a writing state, a write full or full state;
  • this embodiment processes interleaved data in a ping-pong operation manner and in accordance with the state of the cache unit.
  • the ping-pong operation manner specifically includes:
  • the method further includes:
  • the first cache space is divided into a plurality of subsets according to the output of the first interleaved data.
  • the interleaved cache space is first divided into two upper and lower regions by row (each region is 42 rows and 8 columns), and then divided into 4 subsets in turn, each subset space having 21*8 16*16 bit data blocks.
  • the first interleaved data is read according to the subset index in FIG5 , thereby realizing a ping-pong operation.
  • reading the first interleaved data to store the first interleaved data in a storage device includes:
  • each matrix space column includes a plurality of matrix units, each matrix space column corresponds to a storage region in a storage device, and a storage region includes a group of storage units;
  • the data blocks in each sub-column in each sub-matrix space can be read in sequence according to the subset index shown in FIG5.
  • the small column here refers to one 16-bit column of a 16*16bit data block
  • the same data acquisition operation is performed on the 1st to 127th small columns of data in sequence.
  • b(x, y, z), that is, data block b, where x and y indicate that data block b lies in the xth row and yth column of the 84-row × 8-column matrix of the interleaved cache space, and z indicates the zth column of the 16*16bit data block at coordinate (x, y); x ranges from 0 to 83, y from 0 to 7, and z from 0 to 15.
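A small helper can make the b(x, y, z) ranges concrete. The linear bit offset used here is a hypothetical row-major layout chosen only for illustration; the application does not fix one:

```python
# Hypothetical flattening of b(x, y, z): block (x, y) in the 84 x 8 grid,
# small column z inside the 16x16-bit block. The layout is an assumption.

def block_column_offset(x, y, z):
    """Bit offset of small column z of block (x, y) under a row-major layout."""
    assert 0 <= x <= 83 and 0 <= y <= 7 and 0 <= z <= 15
    block_index = x * 8 + y               # row-major block position in the grid
    return block_index * 256 + z * 16     # 256 bits per block, 16 per column
```

Under this layout the last small column of the last block ends exactly at bit 172032, the frame size.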
  • the method before sequentially reading the data blocks in each sub-column in each sub-matrix space to store the data blocks in each sub-column in each storage unit, the method further includes:
  • the read-write mapping table includes a read address for reading the first interleaved data from a first cache space, a write address for writing the first interleaved data into a storage device, and an operation mode for remapping the first interleaved data to a storage unit according to a row-storage-column-access operation mode;
  • the read-write mapping table includes a read pointer, a read address, a write pointer, and a write address, as shown in FIG7-1 and FIG7-2.
  • the address of each data block can be quickly found according to the read-write mapping table.
  • the method further includes:
  • the writing and reading of data are processed in units of a small column of a certain 16*16bit small block.
  • the 0th/1st row is stored in SDRAM0 to SDRAM15 address 0; the 2nd/3rd row is stored in SDRAM0 to SDRAM15 address 1, ..., and the 40th/41st row is stored in SDRAM0 to SDRAM15 address 20.
  • a small column of data in the 0th column is read out from SDRAM0 to SDRAM15 in sequence.
  • the first beat outputs (0, 0, 0) (1, 0, 0) from SDRAM0 address 0; SDRAM1 address 1 outputs (2, 0, 0) (3, 0, 0); ...; SDRAM15 address 15 outputs (30, 0, 0) (31, 0, 0). Since the (32, 0, 0) (33, 0, 0) data is at address 16 and the (34, 0, 0) (35, 0, 0) data is at address 17, ..., the second beat must output the remaining data of the 0th small column together with part of the 1st small column, that is, read (32, 0, 0) (33, 0, 0) from SDRAM0 address 16, read (34, 0, 0) (35, 0, 0) from SDRAM1 address 17, ..., read (40, 0, 0) (41, 0, 0) from SDRAM4 address 20, read (0, 0, 1) (1, 0, 1) from SDRAM5 address 20, and so on.
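The row-pair-to-address rule quoted above ("0th/1st row at address 0, ..., 40th/41st row at address 20") reduces to a one-line mapping. The helper below captures only that rule, not the per-beat SDRAM selection:

```python
# Address of the pair (row, row+1) within one column area: rows are stored
# two per address, so 42 rows occupy addresses 0..20.

def pair_address(row):
    """SDRAM address holding rows (row, row+1); `row` must be even."""
    assert row % 2 == 0 and 0 <= row <= 40
    return row // 2
```

Because each SDRAM is single-port, one beat can read only one address per SDRAM; with 21 pairs but 16 SDRAMs, the last five pairs of a small column spill into the next beat, which is why the second beat above mixes the tail of the 0th column with the head of the 1st.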
  • This embodiment is described by taking the interleaving system for the interleaving data generated by the FEC system as an example.
  • FIG8 is a structural block diagram of the interleaving system in this embodiment.
  • the encoded data of the FEC system is sent to the interleaver in the interleaving system.
  • the interleaver divides the entire interleaved data into several areas according to the system parallelism and the size of the entire interleaved data block.
  • the input data is pre-processed and reordered.
  • the code block unit of the interleaved data block is stored in or read out of the interleaving cache according to the ping-pong operation.
  • the data is re-ordered in the output pre-processing module to obtain the final required data format.
  • FIG9 is a flowchart of processing interleaved data in this embodiment, which includes the following steps:
  • the interleaved data is first pre-processed by data rearrangement
  • This embodiment takes the u0 area shown in Figure 3 (rows 0 to 41 and columns 0 to 3 in the 84-row * 8-column interleaved cache) as an example to illustrate the read and write control of the interleaved cache and the derivation of input data remapping of this embodiment.
  • this area contains 16 blocks of SDRAM, each with a depth of 84 and a width of 32 bits, for a total storage capacity of 43008 bits.
  • the amount of data written in each beat is 32*16 bits.
  • the data from encoders enc0 and enc1 shown in FIG3 is divided into 16 parts at 16-bit granularity; each pair of corresponding parts is then remapped and spliced into 32 bits and stored in the 16 blocks of SDRAM respectively. A total of 84 beats are required to complete the storage of the interleaved data.
  • the storage space of the 0th area is 21*32*16 bits.
  • the 0th column of the u0 area in FIG3 contains 42*256 bits; that is, the 0th to 3rd areas store the data of the 0th to 3rd columns in area u0 respectively.
  • the data is dispersed and stored in 16 SDRAMs respectively, and the data of b(x, 0, 0) is written to ram0 to ram15 in sequence.
  • wr_point is the write pointer, which indicates the number of beats to be written.
  • the 0th beat is filled into the 0th and 1st rows and the 0th columns
  • the 1st beat is filled into the 0th and 1st rows and the 1st columns
  • the 2nd beat is filled into the 0th and 1st rows and the 2nd columns
  • the 3rd beat is filled into the 0th and 1st rows and the 3rd columns.
  • the 0th beat is filled into the 0th area shown in Figures 10-1 and 10-2 of this embodiment
  • the 1st beat is filled into the 1st area shown in Figures 10-3 and 10-4 of this embodiment
  • the 2nd beat is filled into the 2nd area shown in Figures 10-5 and 10-6 of this embodiment
  • the 3rd beat is filled into the 3rd area shown in Figures 10-7 and 10-8 of this embodiment...
  • Each odd and even large row of the interleaved cache matrix needs 4 beats to be filled.
  • the specific interleaved data write cache operation is explained by taking the input data from the 0th to the 4th beat as an example.
  • the 0th beat corresponds to write pointer 0 in area 0
  • the 1st beat corresponds to write pointer 1 in area 1
  • the 2nd beat corresponds to write pointer 2 in area 2
  • the 3rd beat corresponds to write pointer 3 in area 3
  • the 4th beat corresponds to write pointer 4 in area 0.
  • the write pointers of area 0 shown in FIG. 10-1 and FIG. 10-2 are 0, 4, 8, 12, ..., 72, 76, 80 in sequence; likewise, the write pointers of area 1 are 1, 5, 9, 13, ..., 73, 77, 81 in sequence; the write pointers of area 2 are 2, 6, 10, ..., 82 in sequence; and the write pointers of area 3 are 3, 7, 11, ..., 75, 79, 83 in sequence.
  • the addresses of areas 0 to 3 are shown in FIG. 7 in ascending order.
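The beat-to-area and beat-to-address relationships listed above can be put into executable form. This is a sketch: the modulo/divide mapping below is inferred from the quoted pointer sequences rather than stated as a formula in the text.

```python
# Inferred mapping: with 4 areas filled round-robin, beat (write pointer) t
# lands in area t % 4 at address t // 4 within that area.
def write_location(beat: int):
    return beat % 4, beat // 4   # (area, address)

# Area 0 receives beats 0, 4, 8, ..., 80 at addresses 0..20, matching the
# pointer sequence 0, 4, 8, 12, ..., 72, 76, 80 quoted for area 0.
area0_beats = [t for t in range(84) if write_location(t)[0] == 0]
print(area0_beats[:4], area0_beats[-1])   # [0, 4, 8, 12] 80
```

Under this mapping the last pointer of area 3 is beat 83, consistent with the 84-beat fill of the whole u0 region.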
  • the data of the 0th column is stored in the 0th area shown in FIG. 10-1 and FIG. 10-2.
  • the interleaved data is read as shown in FIG. 6 of this embodiment: small columns 0 to 127 are read out in sequence.
  • This embodiment continuously reads out a given small column of data from the interleaved cache at 1024-bit parallelism according to the ping-pong operation shown in FIG. 5. Since the data is stored by whole small columns while the output is produced as the upper half and then the lower half of a small column, the output data must be rearranged by pre-processing on the interleaved output side.
  • the writing and reading of data are processed in units of a small column of a 16*16bit small block.
  • the 0th/1st row is stored in SDRAM0 to SDRAM15 address 0; the 2nd/3rd row is stored in SDRAM0 to SDRAM15 address 1, ..., and the 40th/41st row is stored in SDRAM0 to SDRAM15 address 20.
  • a small column of data in the 0th column is read from SDRAM0 to SDRAM15 in sequence.
  • the first beat outputs (0, 0, 0) (1, 0, 0) from address 0 of SDRAM0; address 1 of SDRAM1 outputs (2, 0, 0) (3, 0, 0); ...; address 15 of SDRAM15 outputs (30, 0, 0) (31, 0, 0). Since the data (32, 0, 0) (33, 0, 0) is at address 16 and the data (34, 0, 0) (35, 0, 0) is at address 17, ..., the second beat needs to output the remaining data of the 0th small column and part of the data of the 1st small column at the same time. That is, (32, 0, 0) (33, 0, 0) is read from SDRAM0 address 16, (34, 0, 0) (35, 0, 0) is read from SDRAM1 address 17, ..., (40, 0, 0) (41, 0, 0) is read from SDRAM4 address 20, (0, 0, 1) (1, 0, 1) is read from SDRAM5 address 0, (2, 0, 1) (3, 0, 1) is read from SDRAM6 address 1, ..., and (20, 0, 1) (21, 0, 1) is read from SDRAM15 address 10.
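The read schedule above follows a simple pattern: numbering the 32-bit row pairs of the column consecutively (21 pairs per small column), pair k of the stream sits in SDRAM k mod 16 at address k mod 21. This is our inferred formulation, not one stated in the text, so treat it as a sketch:

```python
# k runs over row pairs: small column k // 21, row pair k % 21.
def read_location(k: int):
    return k % 16, k % 21        # (sdram index, address)

# First beat of small column 0: pairs 0..15 come from SDRAM 0..15
# at addresses 0..15, e.g. (0,0,0)(1,0,0) from SDRAM0 address 0.
assert read_location(0) == (0, 0)
assert read_location(15) == (15, 15)

# Second beat wraps: pair 20 -> SDRAM4 addr 20 holds (40,0,0)(41,0,0),
# pair 21 -> SDRAM5 addr 0 holds (0,0,1)(1,0,1),
# pair 31 -> SDRAM15 addr 10 holds (20,0,1)(21,0,1).
print(read_location(20), read_location(21), read_location(31))
```

The printed tuples (4, 20), (5, 0), (15, 10) match the SDRAM/address pairs quoted for the second beat.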
  • the arrangement order of the input data after rearrangement can then be derived. Taking the input at address 0 as an example, the data is split into 16 parts in units of 16 bits.
  • the input order (0, 0, 0) (1, 0, 0), (0, 0, 1) (1, 0, 1), ..., (0, 0, 15) (1, 0, 15) is mapped, via the output derivation, to the write order (0, 0, 0) (1, 0, 0), (0, 0, 13) (1, 0, 13), (0, 0, 10) (1, 0, 10), ..., (0, 0, 3) (1, 0, 3).
  • the input data therefore needs to be rearranged before being written.
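The quoted write order can be reproduced from the same row-pair mapping: small column z starts at row pair 21*z, so the small column stored at address 0 of SDRAM s satisfies 21*z ≡ s (mod 16), i.e. z = 13*s mod 16 (since 21 ≡ 5 and 5·13 ≡ 1 mod 16). The modular-inverse step is our derivation rather than the text's, so this is a sketch:

```python
# Small column stored at address 0 of SDRAM s: solve 21*z % 16 == s.
order = [(13 * s) % 16 for s in range(16)]
print(order)
# The sequence starts 0, 13, 10, 7, ... and ends with 3, i.e.
# (0,0,0)(1,0,0), (0,0,13)(1,0,13), (0,0,10)(1,0,10), ..., (0,0,3)(1,0,3),
# matching the quoted rearranged write order.
for s, z in enumerate(order):
    assert (21 * z) % 16 == s
```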
  • the read and write pointers of the u1, u2, and u3 areas and the remapping of the written data can be derived in a similar way.
  • the ping-pong operation applied to interleaved data flow control in this embodiment can realize interleaved data processing with less resource consumption. Compared with the existing interleaved ping-pong storage method, the storage resources of this embodiment can be reduced by 25%, effectively reducing system costs and project risks.
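The 25% figure follows directly from the unit counts: one frame occupies 4 quarter-frame cache units, conventional ping-pong double-buffering needs a full second frame (8 units), while the scheme above adds only 2 extra units (6 in total, i.e. 1.5 frames). A quick check:

```python
frame_units = 4                       # u0..u3 hold one interleaved frame
conventional = 2 * frame_units        # full double buffering: 8 units
this_scheme = frame_units + 2         # u0..u5: 1.5x one frame
saving = 1 - this_scheme / conventional
print(f"{saving:.0%}")                # 25%
```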
  • the method according to the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, or of course by hardware, but in many cases the former is the better implementation.
  • the essence of the technical solution of the present application, or the part contributing to the prior art, can be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disc) and includes a number of instructions that cause a terminal device (which may be a mobile phone, computer, server, or network device, etc.) to execute the methods described in the embodiments of the present application.
  • a processing device for interleaved data is also provided, and the device is configured to implement the above-mentioned embodiments and preferred implementation modes, and the descriptions that have been made will not be repeated.
  • the term "module” can implement a combination of software and/or hardware of a predetermined function.
  • the device described in the following embodiments is preferably implemented in software, the implementation of hardware, or a combination of software and hardware, is also possible and conceived.
  • FIG. 11 is a structural block diagram of an apparatus for processing interleaved data according to an embodiment of the present application. As shown in FIG. 11, the apparatus includes:
  • a first determining module 1102 is configured to determine a first cache space for first interleaved data, wherein the first cache space includes N cache units, where N is a natural number greater than 1;
  • the first processing module 1104 is configured to write and read the first interleaved data and other interleaved data in a first cache space and a second cache space in a ping-pong read-write manner, wherein the second cache space includes M cache units, and M is a natural number less than N.
  • the first determining module includes:
  • a first determining unit configured to determine the data amount of the first interleaved data
  • a first generating unit is configured to generate matrix code elements of preset rows and columns according to the data amount of the first interleaved data, wherein the matrix code elements include N matrix regions, each of the matrix regions corresponds to one of the cache units, and each of the matrix regions includes a plurality of matrix units;
  • the second determining unit is configured to determine the matrix code element as the first cache space.
  • the first determining unit includes:
  • a first acquisition subunit is configured to acquire interleaved data output by the first encoder and interleaved data output by the second encoder;
  • the first determining subunit is configured to determine the data amount of the interleaved data output by the first encoder and the data amount of the interleaved data output by the second encoder as the data amount of the first interleaved data, wherein the interleaved data output by the first encoder is cached in the matrix space of the even rows in the matrix code elements, and the interleaved data output by the second encoder is cached in the matrix space of the odd rows in the matrix code elements.
  • the first processing module comprises:
  • a first cache unit configured to cache the first interleaved data into the first cache space
  • a first reading unit configured to write part of the other interleaved data into the M cache units when part of the first interleaved data is read from the K cache units, wherein the K cache units are cache units located in the same column of the N cache units, and the cache spaces of the K cache units and the M cache units are the same;
  • the first writing unit is configured to write the remaining data of the other interleaved data into the K cache units when part of the data in the first interleaved data is completely read out from the K cache units, and continue to read the remaining data of the first interleaved data in the N-K cache units, wherein the data amount of the remaining data in the other interleaved data is the same as the cache space in the K cache units, and the N-K cache units are cache units located in the same column of the N cache units.
  • the apparatus further comprises:
  • a first marking module is configured to mark the current state of each of the cache units after writing and reading the first interleaved data and other interleaved data in the first cache space and the second cache space in a ping-pong read-write manner, wherein the current state of each of the cache units includes at least one of the following: in a read state, a read empty or empty state, in a write state, a write full or full state;
  • the first control module is configured to control the reading or writing of the cache unit based on the current state of the cache unit.
  • the device further includes: a first reading module, configured to, after determining a first cache space for the first interleaved data, read the first interleaved data when the first interleaved data is cached in the first cache space, so as to store the first interleaved data in a storage device.
  • the first reading module comprises:
  • the third determining unit is configured to determine a plurality of columns of matrix space included in each of the matrix regions, wherein each column of the matrix space includes a plurality of the matrix units, each column of the matrix space corresponds to one storage region in the storage device, and one storage region includes a group of storage units;
  • a fourth determining unit configured to determine a plurality of sub-matrix spaces included in each of the above-mentioned matrix units
  • the second reading unit is configured to sequentially read the data blocks in each sub-column in each of the sub-matrix spaces, so as to store the data blocks in each of the sub-columns in each of the storage units.
  • the apparatus further includes: a second determination module, configured to sequentially read the data blocks in each sub-column in each of the sub-matrix spaces, so as to determine a read-write mapping table of the first interleaved data before storing the data blocks in each of the sub-columns in each of the storage units, wherein the read-write mapping table includes a read address for reading the first interleaved data from the first cache space, a write address for writing the first interleaved data into the storage device, and an operation mode for remapping the first interleaved data to the storage unit according to a row-storage-column-access operation mode;
  • the first search module is configured to use the read-write mapping table to search for an address of each of the data blocks in the first interleaved data.
  • the apparatus further comprises:
  • the second reading module is configured to sequentially read the data blocks in each subcolumn in each of the above-mentioned submatrix spaces, and after storing the data blocks in each of the above-mentioned subcolumns in each of the above-mentioned storage units, sequentially read the data blocks in the storage addresses corresponding to each of the above-mentioned subcolumns in each of the above-mentioned storage units to read out the above-mentioned first interleaved data.
  • the above modules can be implemented by software or hardware. For the latter, it can be implemented in the following ways, but not limited to: the above modules are all located in the same processor; or the above modules are located in different processors in any combination.
  • An embodiment of the present application further provides a computer-readable storage medium, in which a computer program is stored, wherein the computer program is configured to execute the steps of any of the above method embodiments when run.
  • the above-mentioned computer-readable storage medium may include, but is not limited to: a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a mobile hard disk, a magnetic disk or an optical disk, and other media that can store computer programs.
  • An embodiment of the present application further provides an electronic device, including a memory and a processor, wherein a computer program is stored in the memory, and the processor is configured to run the computer program to execute the steps in any one of the above method embodiments.
  • the electronic device may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
  • modules or steps of the present application can be implemented by a general-purpose computing device, which can be concentrated on a single computing device or distributed on a network composed of multiple computing devices. They can be implemented with program codes executable by a computing device, so that they can be stored in a storage device and executed by the computing device, and in some cases, the steps shown or described can be performed in a different order than herein, or they can be made into individual integrated circuit modules, or multiple modules or steps therein can be made into a single integrated circuit module for implementation.
  • the present application is not limited to any particular combination of hardware and software.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Error Detection And Correction (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The present application discloses a method and apparatus for processing interleaved data, a storage medium, and an electronic apparatus. The method includes: determining a first cache space for first interleaved data, wherein the first cache space includes N cache units, N being a natural number greater than 1; and writing and reading the first interleaved data and other interleaved data in the first cache space and a second cache space in a ping-pong read-write manner, wherein the second cache space includes M cache units, M being a natural number less than N.

Description

Method and Apparatus for Processing Interleaved Data, Storage Medium, and Electronic Apparatus

Cross-Reference to Related Applications

This application is based on, and claims priority to, Chinese patent application CN202211415638.2, entitled "Method and Apparatus for Processing Interleaved Data, Storage Medium, and Electronic Apparatus" and filed on November 11, 2022, the entire disclosure of which is incorporated herein by reference.
Technical Field

The embodiments of the present application relate to the field of communications, and in particular to a method and apparatus for processing interleaved data, a storage medium, and an electronic apparatus.

Background

In the field of communications, forward error correction (FEC) codes ensure the credibility of communication signals and enable reliable transmission over long distances. During decoding, FEC can correct errors and recover the original payload data only when the symbol error rate (SER) is within the correctable range. During transmission, the payload data and redundancy codes are easily corrupted by environmental factors. If the resulting errors are within the correctable range, the receiving end can correct them after decoding and recover the payload data. If the encoded payload and redundancy data are transmitted directly, the receiving side may encounter a burst error (BE) in some data, that is, errors overly concentrated in a single payload or a continuous payload interval, exceeding the correctable capacity of the system and causing loss of the original payload data.

To solve this problem, the existing technology disperses potential burst errors that exceed the correction limit, so as to make full use of the system's error-correction coding capability, by introducing interleaving after encoding and de-interleaving before decoding. The encoded data on the transmitting side is arranged into a matrix by rows and columns, and the matrix is divided into small blocks. The data within the small blocks is rearranged and interleaved in sequence; on the receiving side, the interleaved data is restored to the original encoded data by a processing method similar to that on the interleaving side, and output to the decoding unit for decoding and error correction. The interleaving module can read out data in the interleaved order only after the entire interleaved block has been written into the cache space. To keep the data stream uninterrupted, a ping-pong operation is adopted. A conventional ping-pong operation requires twice the storage space of the interleaved data, which consumes substantial system resources and becomes a bottleneck restricting system performance.
Summary

The embodiments of the present application provide a method and apparatus for processing interleaved data, a storage medium, and an electronic apparatus, so as to at least solve the problem in the related art that processing interleaved data occupies excessive storage resources.

According to an embodiment of the present application, a method for processing interleaved data is provided, including: determining a first cache space for first interleaved data, wherein the first cache space includes N cache units, N being a natural number greater than 1;

and writing and reading the first interleaved data and other interleaved data in the first cache space and a second cache space in a ping-pong read-write manner, wherein the second cache space includes M cache units, M being a natural number less than N.

According to an embodiment of the present application, an apparatus for processing interleaved data is provided, including: a first determining module configured to determine a first cache space for first interleaved data, wherein the first cache space includes N cache units, N being a natural number greater than 1; and a first processing module configured to write and read the first interleaved data and other interleaved data in the first cache space and a second cache space in a ping-pong read-write manner, wherein the second cache space includes M cache units, M being a natural number less than N.
In an exemplary embodiment, the first determining module includes: a first determining unit configured to determine the data amount of the first interleaved data; a first generating unit configured to generate a matrix of code elements with preset rows and columns according to the data amount of the first interleaved data, wherein the matrix of code elements includes N matrix regions, each matrix region corresponds to one cache unit, and each matrix region includes a plurality of matrix units; and a second determining unit configured to determine the matrix of code elements as the first cache space.

In an exemplary embodiment, the first determining unit includes: a first acquiring subunit configured to acquire interleaved data output by a first encoder and interleaved data output by a second encoder; and a first determining subunit configured to determine the data amounts of the interleaved data output by the first encoder and by the second encoder as the data amount of the first interleaved data, wherein the interleaved data output by the first encoder is cached in the matrix space of the even rows of the matrix of code elements, and the interleaved data output by the second encoder is cached in the matrix space of the odd rows.

In an exemplary embodiment, the first processing module includes: a first cache unit configured to cache the first interleaved data into the first cache space; a first reading unit configured to, when part of the first interleaved data is read from K cache units, write part of the other interleaved data into M cache units, wherein the K cache units are cache units located in the same column among the N cache units, and the cache spaces of the K cache units and the M cache units are the same; and a first writing unit configured to, when the part of the first interleaved data has been completely read out of the K cache units, write the remaining data of the other interleaved data into the K cache units and continue reading the remaining first interleaved data in the N-K cache units, wherein the data amount of the remaining data of the other interleaved data is the same as the cache space of the K cache units, and the N-K cache units are cache units located in the same column among the N cache units.

In an exemplary embodiment, the apparatus further includes: a first marking module configured to mark the current state of each cache unit after the first interleaved data and the other interleaved data are written and read in the first cache space and the second cache space in the ping-pong read-write manner, wherein the current state of each cache unit includes at least one of: being read, read-empty or empty, being written, and write-full or full; and a first control module configured to control reading from or writing to a cache unit based on its current state.

In an exemplary embodiment, the apparatus further includes: a first reading module configured to, after the first cache space for the first interleaved data is determined and when the first interleaved data has been cached into the first cache space, read the first interleaved data so as to store it in a storage device.

In an exemplary embodiment, the first reading module includes: a third determining unit configured to determine the plurality of columns of matrix space included in each matrix region, wherein each column of matrix space includes a plurality of the matrix units, each column of matrix space corresponds to one storage region in the storage device, and one storage region includes a group of storage units; a fourth determining unit configured to determine a plurality of sub-matrix spaces included in each matrix unit; and a second reading unit configured to sequentially read the data blocks in each sub-column of each sub-matrix space, so as to store the data blocks of each sub-column in each storage unit.

In an exemplary embodiment, the apparatus further includes: a second determining module configured to, before the data blocks in each sub-column of each sub-matrix space are sequentially read and stored in each storage unit, determine a read-write mapping table of the first interleaved data, wherein the read-write mapping table includes read addresses for reading the first interleaved data out of the first cache space, write addresses for writing the first interleaved data into the storage device, and an operation mode for remapping the first interleaved data to the storage units in a row-store, column-fetch manner; and a first searching module configured to use the read-write mapping table to look up the address of each data block of the first interleaved data.

In an exemplary embodiment, the apparatus further includes: a second reading module configured to, after the data blocks in each sub-column of each sub-matrix space are sequentially read and stored in each storage unit, sequentially read the data blocks at the storage addresses corresponding to each sub-column in each storage unit, so as to read out the first interleaved data.
According to a further embodiment of the present application, a computer-readable storage medium is provided, in which a computer program is stored, wherein the computer program is configured to execute, when run, the steps of any one of the above method embodiments.

According to a further embodiment of the present application, an electronic apparatus is provided, including a memory and a processor, wherein a computer program is stored in the memory and the processor is configured to run the computer program to execute the steps of any one of the above method embodiments.
Brief Description of the Drawings

FIG. 1 is a block diagram of the hardware structure of a mobile terminal for a method of processing interleaved data according to an embodiment of the present application;

FIG. 2 is a flowchart of a method for processing interleaved data according to an embodiment of the present application;

FIG. 3 is a schematic diagram of the spatial division of the cache space according to an embodiment of the present application;

FIG. 4 is a schematic diagram of the states of the cache units according to an embodiment of the present application;

FIG. 5 is a schematic diagram of the region division of the cache space according to an embodiment of the present application;

FIG. 6 is a schematic diagram of the interleaved data output mode according to an embodiment of the present application;

FIG. 7-1 is a mapping diagram (part 1) of the read/write addresses and read/write pointers of the interleaved data cache according to an embodiment of the present application;

FIG. 7-2 is a mapping diagram (part 2) of the read/write addresses and read/write pointers of the interleaved data cache according to an embodiment of the present application;

FIG. 8 is a structural block diagram of an interleaving system according to an embodiment of the present application;

FIG. 9 is a flowchart of processing interleaved data according to an embodiment of the present application;

FIG. 10-1 is a derivation diagram (part 1) of the input rearrangement of interleaved data for region 0 according to an embodiment of the present application;

FIG. 10-2 is a derivation diagram (part 2) of the input rearrangement of interleaved data for region 0 according to an embodiment of the present application;

FIG. 10-3 is a derivation diagram (part 1) of the input rearrangement of interleaved data for region 1 according to an embodiment of the present application;

FIG. 10-4 is a derivation diagram (part 2) of the input rearrangement of interleaved data for region 1 according to an embodiment of the present application;

FIG. 10-5 is a derivation diagram (part 1) of the input rearrangement of interleaved data for region 2 according to an embodiment of the present application;

FIG. 10-6 is a derivation diagram (part 2) of the input rearrangement of interleaved data for region 2 according to an embodiment of the present application;

FIG. 10-7 is a derivation diagram (part 1) of the input rearrangement of interleaved data for region 3 according to an embodiment of the present application;

FIG. 10-8 is a derivation diagram (part 2) of the input rearrangement of interleaved data for region 3 according to an embodiment of the present application;

FIG. 11 is a structural block diagram of an apparatus for processing interleaved data according to an embodiment of the present application.
Detailed Description

Hereinafter, the embodiments of the present application will be described in detail with reference to the accompanying drawings and in combination with the embodiments.

It should be noted that the terms "first", "second", and the like in the specification, claims, and drawings of the present application are used to distinguish similar objects and are not necessarily intended to describe a particular order or sequence.
The method embodiments provided in the embodiments of the present application may be executed in a mobile terminal, a computer terminal, or a similar computing device. Taking execution on a mobile terminal as an example, FIG. 1 is a block diagram of the hardware structure of a mobile terminal for a method of processing interleaved data according to an embodiment of the present application. As shown in FIG. 1, the mobile terminal may include one or more processors 102 (only one is shown in FIG. 1; the processor 102 may include, but is not limited to, a processing device such as a microprocessor (MCU) or a programmable logic device (FPGA)) and a memory 104 configured to store data, and may further include a transmission device 106 for communication functions and an input/output device 108. A person of ordinary skill in the art will understand that the structure shown in FIG. 1 is merely illustrative and does not limit the structure of the mobile terminal. For example, the mobile terminal may include more or fewer components than shown in FIG. 1, or have a configuration different from that shown in FIG. 1.

The memory 104 may be configured to store computer programs, for example, software programs and modules of application software, such as the computer program corresponding to the method for processing interleaved data in the embodiments of the present application. By running the computer programs stored in the memory 104, the processor 102 executes various functional applications and data processing, that is, implements the above method. The memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory remotely located with respect to the processor 102; such remote memory may be connected to the mobile terminal via a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.

The transmission device 106 is configured to receive or send data via a network. Specific examples of the network may include a wireless network provided by the communication provider of the mobile terminal. In one example, the transmission device 106 includes a network interface controller (NIC), which can be connected to other network devices through a base station so as to communicate with the Internet. In another example, the transmission device 106 may be a radio frequency (RF) module, which is configured to communicate with the Internet wirelessly.
This embodiment describes the principle of FEC:

The principle of FEC includes: pre-processing the signal to be processed (the data payload) by encoding it according to a certain algorithm, adding redundancy codes (code-block data used for error correction) that carry characteristics of the data payload itself, and sending the result into the transmission channel. At the receiving end, the data is decoded according to the corresponding algorithm, errors are located and corrected, and the original data payload is recovered. At a very small overhead cost, FEC effectively resists the interference introduced by the transmission channel, improves the reliability of the communication system, extends the transmission distance of communication signals, and reduces the cost of the communication system.

During decoding, FEC can correct errors and recover the original payload data only when the symbol error rate is within the correctable range. For example, N bits of payload data are input, K bits of redundancy code are generated according to the relevant algorithm, and the (N+K) bits of data are then sent to the receiving end. During transmission, the payload data and redundancy codes are easily corrupted by environmental factors. If the errors are within the correctable range, the receiving end can correct them after decoding and recover the N bits of payload data. If the encoded payload and redundancy data are transmitted directly, the receiving side may encounter a burst error in some (N+K)-bit data, that is, errors overly concentrated in a single payload or a continuous payload interval, exceeding the correctable capacity of the system and causing loss of the original payload data. To solve this problem, potential burst errors that exceed the correction limit are dispersed, making full use of the system's error-correction coding capability, by introducing interleaving after encoding and de-interleaving before decoding. The M encoded (N+K)-bit data words on the transmitting side are arranged into an M-row by (N+K)-column matrix, which is divided into A*A small blocks, where both M and (N+K) are divisible by A. The data within the small blocks is rearranged and interleaved in sequence from top to bottom and left to right, i.e., written into the cache space by row (column) and read out by column (row); on the receiving side, the interleaved data is restored to the original encoded data by a processing method similar to that of the interleaving side and output to the decoding unit for decoding and error correction.

Because the interleaving module writes by row and reads by column (or writes by column and reads by row), the entire interleaved block must be written into the cache space before the data can be read out in the interleaved order. To keep the data stream uninterrupted, a ping-pong operation is adopted: two storage regions p0 and p1 are used; data is first written into p0; when p0 is full, data is written into p1 while the data in p0 is read out; when p1 is full, p0 has been read empty; this is repeated so that the data stream is processed without interruption.
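The burst-error dispersion that motivates interleaving can be illustrated with a toy block interleaver. The dimensions below are hypothetical, chosen for readability, and are not the scheme of this application:

```python
# Toy illustration: a block interleaver writes codewords row by row and
# transmits column by column, so a burst of channel errors is spread
# across many codewords instead of hitting one codeword repeatedly.
ROWS, COLS = 4, 8          # 4 codewords of 8 symbols each (hypothetical)

codewords = [[f"w{r}s{c}" for c in range(COLS)] for r in range(ROWS)]

# Interleave: read the row-written matrix out column by column.
stream = [codewords[r][c] for c in range(COLS) for r in range(ROWS)]

# A burst hitting 4 consecutive transmitted symbols...
burst = set(stream[8:12])

# ...lands on one symbol of each codeword after de-interleaving,
# instead of 4 symbols of a single codeword.
hits_per_codeword = [sum(1 for c in range(COLS) if f"w{r}s{c}" in burst)
                     for r in range(ROWS)]
print(hits_per_codeword)   # [1, 1, 1, 1]
```

Each codeword absorbs a single error, which stays within the per-codeword correction capability.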
This embodiment provides a method for processing interleaved data. FIG. 2 is a flowchart of the method for processing interleaved data according to an embodiment of the present application. As shown in FIG. 2, the flow includes the following steps:

Step S202: determining a first cache space for first interleaved data, wherein the first cache space includes N cache units, N being a natural number greater than 1;

Step S204: writing and reading the first interleaved data and other interleaved data in the first cache space and a second cache space in a ping-pong read-write manner, wherein the second cache space includes M cache units, M being a natural number less than N.

Optionally, the first cache space and the second cache space may be the two storage regions p0 and p1 described above: interleaved data is written into p0; when p0 is full, data is written into p1 while the data in p0 is read out; when p1 is full, p0 has been read empty; this is repeated so that the data stream is processed without interruption.

Optionally, the second cache space is smaller than the first cache space; the size of the second cache space is half that of the first cache space. That is, twice the cache space is not needed; only 1.5 times the cache space is required to implement the ping-pong operation.

Optionally, the other interleaved data may be interleaved data output by an encoder.

The above steps may be executed by a terminal, a server, a specific processor provided in the terminal or server, or a processor or processing device provided relatively independently of the terminal or server, but are not limited thereto.

Through the above steps, a first cache space for first interleaved data is determined, wherein the first cache space includes N cache units, N being a natural number greater than 1; and the first interleaved data and other interleaved data are written and read in the first cache space and a second cache space in a ping-pong read-write manner, wherein the second cache space includes M cache units, M being a natural number less than N. This method does not require cache space twice the amount of the interleaved data; uninterrupted processing of interleaved data can be achieved with less cache space. Therefore, the problem in the related art that processing interleaved data occupies excessive storage resources can be solved, achieving the effect of reducing storage resources.
In an exemplary embodiment, determining the first cache space for the first interleaved data includes:

S1: determining the data amount of the first interleaved data;

S2: generating a matrix of code elements with preset rows and columns according to the data amount of the first interleaved data, wherein the matrix of code elements includes N matrix regions, each matrix region corresponds to one cache unit, and each matrix region includes a plurality of matrix units;

S3: determining the matrix of code elements as the first cache space.

Optionally, the data amount of the first interleaved data may be determined by the original payload data and the redundancy data in the FEC system. For example, if the original payload data in the FEC system is 111 bits and the redundancy data is 17 bits, the complete first interleaved data to be interleaved after encoding is 128 bits * 16 * 84, i.e., 172032 bits in total. The first interleaved data is partitioned into a matrix of code elements with 84 rows * 8 columns, each small block of which is a 16*16-bit matrix unit; the entire interleaved data frame occupies 84*8*16*16 bits, i.e., 172032 bits. By determining the matrix of code elements of the first interleaved data, the required first cache space can be determined accurately.
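The frame-size arithmetic above can be checked directly; the values are taken from the text:

```python
# Sanity check of the frame-size figures quoted for the FEC example.
payload_bits, parity_bits = 111, 17
codeword_bits = payload_bits + parity_bits          # 128-bit codewords
frame_bits = codeword_bits * 16 * 84                # 128 bit * 16 * 84
matrix_bits = 84 * 8 * (16 * 16)                    # 84x8 blocks of 16x16 bit
print(frame_bits, matrix_bits)                      # 172032 172032
```

Both countings of the frame, by codewords and by 16*16-bit blocks, give the same 172032 bits, so the 84-row * 8-column matrix exactly holds one interleaved frame.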
In an exemplary embodiment, determining the data amount of the first interleaved data includes:

S1: acquiring interleaved data output by a first encoder and interleaved data output by a second encoder;

S2: determining the data amounts of the interleaved data output by the first encoder and by the second encoder as the data amount of the first interleaved data, wherein the interleaved data output by the first encoder is cached in the matrix space of the even rows of the matrix of code elements, and the interleaved data output by the second encoder is cached in the matrix space of the odd rows.

Optionally, the data amount of the first interleaved data comes from the outputs of the first encoder and the second encoder. For example, the data output by the first encoder enc0 is cached in the even rows of the 84-row * 8-column matrix, and the data output by the second encoder enc1 is cached in the odd rows. After delays of 4 clock beats in the column order 0/4, 1/5, 2/6, 3/7, the two encoders fill the matrix space of the odd and even rows. In total, 84/2 * 4 beats, i.e., a processing delay of 168 clock cycles, are required.
In an exemplary embodiment, writing and reading the first interleaved data and the other interleaved data in the first cache space and the second cache space in the ping-pong read-write manner includes:

S1: caching the first interleaved data into the first cache space;

S2: when part of the first interleaved data is read from K cache units, writing part of the other interleaved data into M cache units, wherein the K cache units are cache units located in the same column among the N cache units, and the cache spaces of the K cache units and the M cache units are the same;

S3: when the part of the first interleaved data has been completely read out of the K cache units, writing the remaining data of the other interleaved data into the K cache units and continuing to read the remaining first interleaved data in the N-K cache units, wherein the data amount of the remaining data of the other interleaved data is the same as the cache space of the K cache units, and the N-K cache units are cache units located in the same column among the N cache units.

Optionally, the first cache space may be a matrix space of 84 rows * 8 columns. As shown in FIG. 3, the 84-row * 8-column matrix space is cut by its upper/lower and left/right halves into four cache units u0, u1, u2, u3 (N=4), representing rows 0-41 columns 0-3, rows 0-41 columns 4-7, rows 42-83 columns 0-3, and rows 42-83 columns 4-7, respectively. Each cache unit is stored in an identical group of storage units; each storage unit is a single-port SDRAM with a depth of 84 and a width of 32. For example, when one first-interleaved-data frame is written, the upper half of the first interleaved data is first written into u0 and u1, and then the lower half into u2 and u3. After the first interleaved data is full, it is output; when half of it has been output, half of the next frame of other interleaved data has been filled in. At this point, the total number of remaining bits to be filled for the next frame equals the total number of bits already read out of the previous cache units, so the half of the next frame still to be filled can be stored in the portions that have already been read out. For example, two additional cache units u4 and u5 (M=2) are introduced, u4 for the upper-left part of the next frame of other interleaved data and u5 for its upper-right part, while the current interleaved data frame is read out by column. In this way, the ping-pong operation is achieved with 1.5 times the cache space.
In an exemplary embodiment, after the first interleaved data and the other interleaved data are written and read in the first cache space and the second cache space in the ping-pong read-write manner, the method further includes:

S1: marking the current state of each cache unit, wherein the current state of each cache unit includes at least one of: being read, read-empty or empty, being written, and write-full or full;

S2: controlling reading from or writing to a cache unit based on its current state.

Optionally, this embodiment processes the interleaved data in the ping-pong manner according to the states of the cache units. For example, as shown in FIG. 4, where the cache-unit states include states s0 to s13, the ping-pong operation specifically includes:

State s0: data is written into u0 and u1. State s1: data is written into u2 and u3. Once u2 and u3 are full, i.e., the first interleaved data frame has been written, the flow jumps to state s2 and begins reading data from u0 and u2 while writing data into u4 and u5; when u4 and u5 are full, u0 and u2 have been read empty. The flow then jumps to state s3 and begins reading from u1 and u3, using u0 to store the lower-left part of the next interleaved data frame and u2 its lower-right part. When u1 and u3 are empty and u0 and u2 are full, the flow jumps to state s4, reading from u4 and u0 and writing the next interleaved block's data into u1 and u3. When u4 and u0 are empty and u1 and u3 are full, it jumps to s5, reading from u5 and u2 and writing into u4 and u0. It then jumps to s6, reading from u1 and u4 and writing into u5 and u2. When u1 and u4 are empty and u5 and u2 are full, it jumps to s7, reading from u3 and u0 and writing into u2 and u4. When u3 and u0 are empty and u2 and u4 are full, it jumps to s8, reading from u5 and u1 and writing new data into u3 and u0. When u5 and u1 are empty and u3 and u0 are full, it jumps to s9, reading from u2 and u4 and writing new data into u5 and u1. When u2 and u4 are empty and u5 and u1 are full, it jumps to s10, reading from u3 and u5 and writing new data into u2 and u4. When u3 and u5 are empty and u2 and u4 are full, it jumps to s11, reading from u0 and u1 and writing new data into u3 and u5. States s12 and s13 are simply s0 and s1 again, so interleaved data frames can be processed continuously without additional storage space. The storage space originally required, at least twice the data amount of an interleaved data frame, is thus reduced to 1.5 times.
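The s2 to s11 schedule described above can be encoded as a table and checked for consistency. The encoding of each state as a (read units, write units) pair is a hypothetical sketch of the figure's content:

```python
# Hypothetical encoding of the s2..s11 schedule: per state, which cache
# units are read (drained) and which are written (filled).
schedule = {
    "s2":  ({"u0", "u2"}, {"u4", "u5"}),
    "s3":  ({"u1", "u3"}, {"u0", "u2"}),
    "s4":  ({"u4", "u0"}, {"u1", "u3"}),
    "s5":  ({"u5", "u2"}, {"u4", "u0"}),
    "s6":  ({"u1", "u4"}, {"u5", "u2"}),
    "s7":  ({"u3", "u0"}, {"u2", "u4"}),
    "s8":  ({"u5", "u1"}, {"u3", "u0"}),
    "s9":  ({"u2", "u4"}, {"u5", "u1"}),
    "s10": ({"u3", "u5"}, {"u2", "u4"}),
    "s11": ({"u0", "u1"}, {"u3", "u5"}),
}

# In every state the units being read and the units being written are
# disjoint, and the whole schedule only ever touches the six units u0..u5
# (4 units for one frame + 2 extra = 1.5 frames of storage).
units = set()
for reads, writes in schedule.values():
    assert not (reads & writes)
    units |= reads | writes
print(sorted(units))   # ['u0', 'u1', 'u2', 'u3', 'u4', 'u5']
```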
In an exemplary embodiment, after the first cache space for the first interleaved data is determined, the method further includes:

S1: when the first interleaved data has been cached into the first cache space, reading the first interleaved data so as to store it in a storage device.

Optionally, after the first cache space for the first interleaved data is determined, the first cache space is divided into multiple subsets according to the output of the first interleaved data. As shown in FIG. 5, the interleaving cache space is first divided by rows into upper and lower regions (each 42 rows * 8 columns) and then into 4 subsets in sequence, each subset space containing 21*8 data blocks of 16*16 bits. The first interleaved data is read according to the subset indices in FIG. 5, thereby realizing the ping-pong operation.
In an exemplary embodiment, reading the first interleaved data when it has been cached into the first cache space, so as to store it in a storage device, includes:

S1: determining the plurality of columns of matrix space included in each matrix region, wherein each column of matrix space includes a plurality of the matrix units, each column of matrix space corresponds to one storage region in the storage device, and one storage region includes a group of storage units;

S2: determining a plurality of sub-matrix spaces included in each matrix unit;

S3: sequentially reading the data blocks in each sub-column of each sub-matrix space, so as to store the data blocks of each sub-column in each storage unit.

Optionally, the data blocks in each sub-column of each sub-matrix space may be read in sequence according to the subset indices shown in FIG. 5. For example, as shown in FIG. 6, all the data in small column 0 of the first cache space is read out first (relative to the 84-row * 8-column interleaving matrix, a "small column" here refers to one small column of a 16*16-bit data block), and the same fetch operation is then performed on small columns 1 to 127 in turn. The notation b(x, y, z) is introduced for data block b, where x and y indicate that block b is at row x, column y of the 84-row * 8-column matrix of the interleaving cache space, and z indicates column z of the 16*16-bit data block at position (x, y); x ranges from 0 to 83, y from 0 to 7, and z from 0 to 15.

For the data in column 0 of the interleaving cache space, 8 bits are read from each subset shown in FIG. 6 in turn. Specifically, small columns 0 to 15 are read in the order: row 0 column 0 of subset 0, row 1 column 0 of subset 1, row 42 column 0 of subset 2, row 43 column 0 of subset 3. That is, first the upper 8 bits of small column 0 of data blocks b(0,0,0), b(1,0,0), b(42,0,0), b(43,0,0) are taken, then the lower 8 bits of b(0,0,0), b(1,0,0), b(42,0,0), b(43,0,0); next the upper 8 bits of b(2,0,0), b(3,0,0), b(44,0,0), b(45,0,0), followed by their lower 8 bits, ..., and finally the upper and lower 8 bits of b(40,0,0), b(41,0,0), b(82,0,0), b(83,0,0). In total, 42*8 fetch operations are required to read out all the data of small column 0 of the 84-row * 8-column interleaving cache space.
In an exemplary embodiment, before the data blocks in each sub-column of each sub-matrix space are sequentially read and stored in each storage unit, the method further includes:

S1: determining a read-write mapping table of the first interleaved data, wherein the read-write mapping table includes read addresses for reading the first interleaved data out of the first cache space, write addresses for writing the first interleaved data into the storage device, and an operation mode for remapping the first interleaved data to the storage units in a row-store, column-fetch manner;

S2: using the read-write mapping table to look up the address of each data block of the first interleaved data.

Optionally, the read-write mapping table, as shown in FIG. 7, includes read pointers, read addresses, write pointers, and write addresses. With the read-write mapping table, the address of each data block can be looked up quickly.
In an exemplary embodiment, after the data blocks in each sub-column of each sub-matrix space are sequentially read and stored in each storage unit, the method further includes:

S1: sequentially reading the data blocks at the storage addresses corresponding to each sub-column in each storage unit, so as to read out the first interleaved data.

Optionally, taking the data of column 0 of the u0 region as an example, both writing and reading of data are processed in units of one small column of a 16*16-bit small block. Rows 0/1 are stored at address 0 of SDRAM0 to SDRAM15; rows 2/3 at address 1, ..., and rows 40/41 at address 20. On readout, the data of a given small column of column 0 is read from SDRAM0 to SDRAM15 in turn. For example, in the first beat, (0,0,0)(1,0,0) is output from address 0 of SDRAM0; (2,0,0)(3,0,0) from address 1 of SDRAM1; ...; (30,0,0)(31,0,0) from address 15 of SDRAM15. Since (32,0,0)(33,0,0) is at address 16 and (34,0,0)(35,0,0) at address 17, ..., the second beat must output the remaining data of small column 0 together with part of small column 1: (32,0,0)(33,0,0) is read from address 16 of SDRAM0, (34,0,0)(35,0,0) from address 17 of SDRAM1, ..., (40,0,0)(41,0,0) from address 20 of SDRAM4, (0,0,1)(1,0,1) from address 0 of SDRAM5, (2,0,1)(3,0,1) from address 1 of SDRAM6, ..., and (20,0,1)(21,0,1) from address 10 of SDRAM15.
The present application is described below with reference to a specific embodiment:

This embodiment is described taking the interleaving system processing the interleaved data produced by an FEC system as an example.

FIG. 8 is a structural block diagram of the interleaving system in this embodiment. The data encoded by the FEC system is fed into the interleaver of the interleaving system, and the interleaver divides the entire interleaved data into several regions according to the system parallelism and the size of the entire interleaved data block. The input data is first reordered by input pre-processing; then, under the control of the cache management unit, the code-block units of the interleaved data block are stored into or read out of the interleaving cache according to the ping-pong operation; finally, the output pre-processing module reorders the data to obtain the final required data format.

FIG. 9 is a flowchart of processing interleaved data in this embodiment, including the following steps:

S901: the interleaved data first undergoes data-rearrangement pre-processing;

S902: under the read-write control of the cache management unit, the cached data is written or read according to the new ping-pong operation;

S903: the output data is reordered by the interleaving output pre-processing unit and then output to the downstream processing module.

This embodiment takes the u0 region shown in FIG. 3 (rows 0-41, columns 0-3 of the 84-row * 8-column interleaving cache) as an example to describe the read-write control of the interleaving cache and the derivation of the input-data remapping in this embodiment. As shown in FIG. 10-1 to FIG. 10-8, this region contains 16 SDRAM blocks, each with a depth of 84 and a width of 32 bits, for a storage capacity of 43008 bits. The amount of data written per beat is 32*16 bits; the outputs of the encoders enc0 and enc1 shown in FIG. 3 are split into 16 parts at 16-bit granularity, and each corresponding part is remapped and spliced into 32 bits that are stored into the 16 SDRAM blocks respectively. A total of 84 beats is required to complete the storage of the interleaved data.

Taking region 0 shown in FIG. 10-1 and FIG. 10-2 as an example, the storage space of region 0 is 21*32*16 bits; column 0 of the u0 region in FIG. 3 contains 4*256 bits, i.e., regions 0 to 3 store the data of columns 0 to 3 of region u0, respectively. Taking the data of big column 0, small column 0 of u0 as an example to describe the specific input rearrangement for data storage and the address-mapping process, the data is dispersed into the 16 SDRAM blocks, so the data of b(x, 0, 0) is written into ram0 to ram15 in turn. wr_point is the write pointer, indicating the index of the beat being written.

According to the way the encoded output data fills the interleaving cache shown in FIG. 3 of this embodiment, beat 0 fills rows 0-1 column 0, beat 1 fills rows 0-1 column 1, beat 2 fills rows 0-1 column 2, beat 3 fills rows 0-1 column 3, and so on. That is, beat 0 fills region 0 shown in FIG. 10-1 and FIG. 10-2, beat 1 fills region 1 shown in FIG. 10-3 and FIG. 10-4, beat 2 fills region 2 shown in FIG. 10-5 and FIG. 10-6, beat 3 fills region 3 shown in FIG. 10-7 and FIG. 10-8, and so on. For the mapping of write pointers and write addresses, see FIG. 7 and FIG. 3; each odd/even big row of the interleaving cache matrix requires 4 beats to fill. Taking the input data of beats 0 to 4 as an example to describe the specific operation of writing interleaved data into the cache: beat 0 corresponds to write pointer 0 in region 0, beat 1 to write pointer 1 in region 1, beat 2 to write pointer 2 in region 2, beat 3 to write pointer 3 in region 3, and beat 4 to write pointer 4 in region 0. The write pointers of region 0 shown in FIG. 10-1 and FIG. 10-2 are thus 0, 4, 8, 12, ..., 72, 76, 80 in sequence; likewise, the write pointers of region 1 are 1, 5, 9, 13, ..., 73, 77, 81; those of region 2 are 2, 6, 10, ..., 82; and those of region 3 are 3, 7, 11, ..., 75, 79, 83. The addresses of regions 0 to 3 increase in sequence as shown in FIG. 7.
In the u0 region shown in FIG. 3 of this embodiment, the data of column 0 is stored in region 0 shown in FIG. 10-1 and FIG. 10-2. The interleaved data is read as shown in FIG. 6 of this embodiment: columns 0 to 127 are read out in sequence in small-column order. This embodiment continuously reads out a given small column of data from the interleaving cache at 1024-bit parallelism according to the ping-pong operation shown in FIG. 5. Since the data is stored by whole small columns while the output is produced as the upper half and then the lower half of a small column, the output data must be rearranged by pre-processing on the interleaving output side. The operation [(a:b)-c*d] is introduced (where a, b, c, d are all positive integers); [(a:b)-c*d] is equivalent to [a-c*d : b-c*d]. The 1024 bits of rearranged data are given by Formula 1 and Formula 2 (j is an integer with 0 <= j < 16):

dat[(1023:992)-64*j] = {r0[(511:504)-32*j], r0[(495:488)-32*j], r1[(511:504)-32*j], r1[(495:488)-32*j]};  (Formula 1)

dat[(991:960)-64*j] = {r0[(503:496)-32*j], r0[(487:480)-32*j], r1[(503:496)-32*j], r1[(487:480)-32*j]};  (Formula 2)

As can be seen from FIG. 3 and FIG. 10-1 to FIG. 10-8, taking the data of column 0 of the u0 region as an example, both writing and reading of data are processed in units of one small column of a 16*16-bit small block. Rows 0/1 are stored at address 0 of SDRAM0 to SDRAM15; rows 2/3 at address 1, ..., and rows 40/41 at address 20. On readout, the data of a given small column of column 0 is read from SDRAM0 to SDRAM15 in turn. For example, in the first beat, (0,0,0)(1,0,0) is output from address 0 of SDRAM0; (2,0,0)(3,0,0) from address 1 of SDRAM1; ...; (30,0,0)(31,0,0) from address 15 of SDRAM15. Since (32,0,0)(33,0,0) is at address 16 and (34,0,0)(35,0,0) at address 17, ..., the second beat must output the remaining data of small column 0 together with part of small column 1: (32,0,0)(33,0,0) is read from address 16 of SDRAM0, (34,0,0)(35,0,0) from address 17 of SDRAM1, ..., (40,0,0)(41,0,0) from address 20 of SDRAM4, (0,0,1)(1,0,1) from address 0 of SDRAM5, (2,0,1)(3,0,1) from address 1 of SDRAM6, ..., and (20,0,1)(21,0,1) from address 10 of SDRAM15. From this, the arrangement order of the input data after rearrangement can be derived. Taking the input at address 0 as an example, the data is split into 16 parts in units of 16 bits; the input order (0,0,0)(1,0,0), (0,0,1)(1,0,1), ..., (0,0,15)(1,0,15) is mapped, via the output derivation, to the write order (0,0,0)(1,0,0), (0,0,13)(1,0,13), (0,0,10)(1,0,10), ..., (0,0,3)(1,0,3). That is, before the interleaved data is written into the interleaving cache as shown in FIG. 9, the input data must be rearranged. The read/write pointers of the u1, u2, and u3 regions and the remapping of their written data can all be derived in a similar way.
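Formula 1 and Formula 2 can be expressed as executable bit manipulation. This is a sketch assuming r0 and r1 are 512-bit words, dat is the 1024-bit output, and [hi:lo] denotes Verilog-style inclusive bit slices; the helper names are ours:

```python
def bits(value: int, hi: int, lo: int) -> int:
    """Extract bits hi..lo (inclusive) of an integer, Verilog-style."""
    return (value >> lo) & ((1 << (hi - lo + 1)) - 1)

def rearrange(r0: int, r1: int) -> int:
    """Build the 1024-bit dat word from Formulas 1 and 2."""
    dat = 0
    for j in range(16):
        # Formula 1: dat[(1023:992)-64*j]
        hi1 = ((bits(r0, 511 - 32*j, 504 - 32*j) << 24)
               | (bits(r0, 495 - 32*j, 488 - 32*j) << 16)
               | (bits(r1, 511 - 32*j, 504 - 32*j) << 8)
               | bits(r1, 495 - 32*j, 488 - 32*j))
        # Formula 2: dat[(991:960)-64*j]
        hi2 = ((bits(r0, 503 - 32*j, 496 - 32*j) << 24)
               | (bits(r0, 487 - 32*j, 480 - 32*j) << 16)
               | (bits(r1, 503 - 32*j, 496 - 32*j) << 8)
               | bits(r1, 487 - 32*j, 480 - 32*j))
        dat |= hi1 << (992 - 64*j)
        dat |= hi2 << (960 - 64*j)
    return dat

# With byte i of r0 set to i and byte i of r1 set to i + 64, the topmost
# 32-bit chunk (j = 0, Formula 1) picks r0 bytes 63 and 61 followed by
# r1 bytes 63 and 61, interleaving the two encoders' half-columns.
r0 = sum(i << (8 * i) for i in range(64))
r1 = sum((i + 64) << (8 * i) for i in range(64))
top = rearrange(r0, r1) >> 992
print(hex(top))   # 0x3f3d7f7d
```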
The ping-pong operation applied to interleaved data-flow control in this embodiment can implement interleaved data processing with lower resource consumption. Compared with the existing interleaved ping-pong storage method, the storage resources of this embodiment can be reduced by 25%, effectively lowering system cost and project risk.

From the description of the above implementations, a person skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, or of course by hardware, but in many cases the former is the better implementation. Based on this understanding, the essence of the technical solution of the present application, or the part contributing to the prior art, can be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disc) and includes a number of instructions that cause a terminal device (which may be a mobile phone, computer, server, or network device, etc.) to execute the methods described in the embodiments of the present application.

This embodiment also provides an apparatus for processing interleaved data, which is configured to implement the above embodiments and preferred implementations; what has already been described will not be repeated. As used below, the term "module" may refer to a combination of software and/or hardware that implements a predetermined function. Although the apparatus described in the following embodiments is preferably implemented in software, an implementation in hardware, or in a combination of software and hardware, is also possible and conceived.
FIG. 11 is a structural block diagram of an apparatus for processing interleaved data according to an embodiment of the present application. As shown in FIG. 11, the apparatus includes:

a first determining module 1102 configured to determine a first cache space for first interleaved data, wherein the first cache space includes N cache units, N being a natural number greater than 1;

a first processing module 1104 configured to write and read the first interleaved data and other interleaved data in the first cache space and a second cache space in a ping-pong read-write manner, wherein the second cache space includes M cache units, M being a natural number less than N.

In an exemplary embodiment, the first determining module includes:

a first determining unit configured to determine the data amount of the first interleaved data;

a first generating unit configured to generate a matrix of code elements with preset rows and columns according to the data amount of the first interleaved data, wherein the matrix of code elements includes N matrix regions, each matrix region corresponds to one cache unit, and each matrix region includes a plurality of matrix units;

a second determining unit configured to determine the matrix of code elements as the first cache space.

In an exemplary embodiment, the first determining unit includes:

a first acquiring subunit configured to acquire interleaved data output by the first encoder and interleaved data output by the second encoder;

a first determining subunit configured to determine the data amounts of the interleaved data output by the first encoder and by the second encoder as the data amount of the first interleaved data, wherein the interleaved data output by the first encoder is cached in the matrix space of the even rows of the matrix of code elements, and the interleaved data output by the second encoder is cached in the matrix space of the odd rows.

In an exemplary embodiment, the first processing module includes:

a first cache unit configured to cache the first interleaved data into the first cache space;

a first reading unit configured to, when part of the first interleaved data is read from K cache units, write part of the other interleaved data into M cache units, wherein the K cache units are cache units located in the same column among the N cache units, and the cache spaces of the K cache units and the M cache units are the same;

a first writing unit configured to, when the part of the first interleaved data has been completely read out of the K cache units, write the remaining data of the other interleaved data into the K cache units and continue reading the remaining first interleaved data in the N-K cache units, wherein the data amount of the remaining data of the other interleaved data is the same as the cache space of the K cache units, and the N-K cache units are cache units located in the same column among the N cache units.

In an exemplary embodiment, the apparatus further includes:

a first marking module configured to mark the current state of each cache unit after the first interleaved data and the other interleaved data are written and read in the first cache space and the second cache space in the ping-pong read-write manner, wherein the current state of each cache unit includes at least one of: being read, read-empty or empty, being written, and write-full or full;

a first control module configured to control reading from or writing to a cache unit based on its current state.

In an exemplary embodiment, the apparatus further includes: a first reading module configured to, after the first cache space for the first interleaved data is determined and when the first interleaved data has been cached into the first cache space, read the first interleaved data so as to store it in a storage device.

In an exemplary embodiment, the first reading module includes:

a third determining unit configured to determine the plurality of columns of matrix space included in each matrix region, wherein each column of matrix space includes a plurality of the matrix units, each column of matrix space corresponds to one storage region in the storage device, and one storage region includes a group of storage units;

a fourth determining unit configured to determine a plurality of sub-matrix spaces included in each matrix unit;

a second reading unit configured to sequentially read the data blocks in each sub-column of each sub-matrix space, so as to store the data blocks of each sub-column in each storage unit.

In an exemplary embodiment, the apparatus further includes: a second determining module configured to, before the data blocks in each sub-column of each sub-matrix space are sequentially read and stored in each storage unit, determine a read-write mapping table of the first interleaved data, wherein the read-write mapping table includes read addresses for reading the first interleaved data out of the first cache space, write addresses for writing the first interleaved data into the storage device, and an operation mode for remapping the first interleaved data to the storage units in a row-store, column-fetch manner;

a first searching module configured to use the read-write mapping table to look up the address of each data block of the first interleaved data.

In an exemplary embodiment, the apparatus further includes:

a second reading module configured to, after the data blocks in each sub-column of each sub-matrix space are sequentially read and stored in each storage unit, sequentially read the data blocks at the storage addresses corresponding to each sub-column in each storage unit, so as to read out the first interleaved data.

It should be noted that the above modules can be implemented by software or hardware; in the latter case, this can be done in, but is not limited to, the following ways: the above modules are all located in the same processor, or the above modules are located in different processors in any combination.

An embodiment of the present application further provides a computer-readable storage medium in which a computer program is stored, wherein the computer program is configured to execute, when run, the steps of any one of the above method embodiments.

In an exemplary embodiment, the computer-readable storage medium may include, but is not limited to, a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, an optical disc, or any other medium that can store a computer program.

An embodiment of the present application further provides an electronic apparatus, including a memory and a processor, wherein a computer program is stored in the memory and the processor is configured to run the computer program to execute the steps of any one of the above method embodiments.

In an exemplary embodiment, the electronic apparatus may further include a transmission device and an input/output device, both of which are connected to the processor.

For specific examples in this embodiment, reference may be made to the examples described in the above embodiments and exemplary implementations; details are not repeated here.

Obviously, a person skilled in the art should understand that the modules or steps of the present application can be implemented by a general-purpose computing device; they can be concentrated on a single computing device or distributed over a network composed of multiple computing devices; they can be implemented with program code executable by a computing device, so that they can be stored in a storage device and executed by the computing device; in some cases, the steps shown or described can be performed in an order different from that herein, or they can be made into individual integrated circuit modules, or multiple modules or steps among them can be made into a single integrated circuit module. Thus, the present application is not limited to any particular combination of hardware and software.

The above are only preferred embodiments of the present application and are not intended to limit the present application; for a person skilled in the art, the present application may have various modifications and changes. Any modification, equivalent replacement, improvement, and the like made within the principles of the present application shall be included within the protection scope of the present application.

Claims (12)

  1. 一种交织数据的处理方法,包括:
    确定第一交织数据的第一缓存空间,其中,所述第一缓存空间中包括N个缓存单元,所述N是大于1的自然数;
    在所述第一缓存空间和第二缓存空间中,按照乒乓读写的方式写入和读出所述第一交织数据和其他交织数据,其中,所述第二缓存空间中包括M个缓存单元,所述M是小于所述N的自然数。
  2. 根据权利要求1所述的方法,确定第一交织数据的第一缓存空间,包括:
    确定所述第一交织数据的数据量;
    按照所述第一交织数据的数据量生成预设行列的矩阵码元,其中,所述矩阵码元中包括N个矩阵区域,每个所述矩阵区域对应一个所述缓存单元,每个所述矩阵区域中均包括多个矩阵单元;
    将所述矩阵码元确定为所述第一缓存空间。
  3. 根据权利要求2所述的方法,确定所述第一交织数据的数据量,包括:
    获取第一编码器输出的交织数据,以及第二编码器输出的交织数据;
    将所述第一编码器输出的交织数据的数据量和所述第二编码器输出的交织数据的数据量确定为所述第一交织数据的数据量,其中,所述第一编码器输出的交织数据缓存在所述矩阵码元中偶数行的矩阵空间中,所述第二编码器输出的交织数据缓存在所述矩阵码元中奇数行的矩阵空间中。
  4. 根据权利要求1所述的方法,在所述第一缓存空间和第二缓存空间中,按照乒乓读写的方式写入和读出所述第一交织数据和其他交织数据,包括:
    将所述第一交织数据缓存至所述第一缓存空间;
    在从K个所述缓存单元中读取所述第一交织数据中的部分数据的情况下,将所述其他交织数据中的部分数据写入M个所述缓存单元中,其中,K个所述缓存单元是N个所述缓存单元中位于同一列的缓存单元,K个所述缓存单元和M个所述缓存单元的缓存空间相同;
    在所述第一交织数据中的部分数据从K个所述缓存单元中全部读出的情况下,将所述其他交织数据中的剩余数据写入K个所述缓存单元中,并继续读取N-K个所述缓存单元中的所述第一交织数据的剩余数据,其中,所述其他交织数据中的剩余数据的数据量与K个所述缓存单元中的缓存空间相同,N-K个所述缓存单元是N个所述缓存单元中位于同一列的缓存单元。
  5. 根据权利要求1所述的方法,在所述第一缓存空间和第二缓存空间中,按照乒乓读写的方式写入和读出所述第一交织数据和其他交织数据之后,所述方法还包括:
    标记每个所述缓存单元的当前状态,其中,每个所述缓存单元的当前状态包括以下至少之一:在读状态,读空或空状态,在写状态,写满或满状态;
    基于所述缓存单元的当前状态控制所述缓存单元的读取或写入。
  6. 根据权利要求2所述的方法,确定第一交织数据的第一缓存空间之后,所述方法还包括:
    在所述第一交织数据缓存至所述第一缓存空间的情况下,读取所述第一交织数据,以将所述第一交织数据存储至存储设备中。
  7. 根据权利要求6所述的方法,在所述第一交织数据缓存至所述第一缓存空间的情况下,读取所述第一交织数据,以将所述第一交织数据存储至存储设备中,包括:
    确定每个所述矩阵区域中包括的多列矩阵空间,其中,每列所述矩阵空间中均包括多个所述矩阵单元,每列所述矩阵空间均对应所述存储设备中的一个存储区域,一个所述存储区域中包括一组存储单元;
    确定每个所述矩阵单元中包括的多个子矩阵空间;
    依次读取每个所述子矩阵空间中每个子列中的数据块,以将每个所述子列中的数据块存储至每个所述存储单元中。
  8. The method according to claim 7, wherein before sequentially reading the data blocks in each sub-column of each sub-matrix space so as to store the data blocks of each sub-column into each storage unit, the method further comprises:
    determining a read/write mapping table for the first interleaved data, wherein the read/write mapping table comprises read addresses for reading the first interleaved data out of the first cache space, write addresses for writing the first interleaved data into the storage device, and an operation manner of remapping the first interleaved data to the storage units in a row-store/column-fetch manner;
    looking up, by means of the read/write mapping table, the address for reading each data block of the first interleaved data.
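The row-store/column-fetch mapping table of claim 8 can be sketched as follows, assuming a small row-major cache that is read out column by column. The dimensions and the sequential write order into the storage device are assumptions for the sketch, not the patent's actual addressing.

```python
# Row-store / column-fetch remapping: data are written into the cache row
# by row, read out column by column, and the table pairs each cache read
# address with the storage-device write address it lands on.
ROWS, COLS = 3, 4

# Read order: walk the columns of the row-major buffer.
read_addrs = [r * COLS + c for c in range(COLS) for r in range(ROWS)]
# Write order into the storage device: simply sequential (an assumption).
mapping = {read: write for write, read in enumerate(read_addrs)}

buffer = list(range(ROWS * COLS))  # row-major cache contents 0..11
stored = [None] * (ROWS * COLS)
for read_addr, write_addr in mapping.items():
    stored[write_addr] = buffer[read_addr]

# Column-major readout: column 0 (0, 4, 8), then column 1 (1, 5, 9), ...
assert stored[:3] == [0, 4, 8]
```

Precomputing the table turns the transpose into a plain table lookup per data block, which is why the claim looks up each block's read address in the table rather than recomputing it.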
  9. The method according to claim 7, wherein after sequentially reading the data blocks in each sub-column of each sub-matrix space so as to store the data blocks of each sub-column into each storage unit, the method further comprises:
    sequentially reading the data blocks at the storage addresses corresponding to each sub-column in each storage unit, so as to read out the first interleaved data.
  10. An apparatus for processing interleaved data, comprising:
    a first determining module configured to determine a first cache space for first interleaved data, wherein the first cache space comprises N cache units, N being a natural number greater than 1;
    a first processing module configured to write and read the first interleaved data and other interleaved data in the first cache space and a second cache space in a ping-pong read/write manner, wherein the second cache space comprises M cache units, M being a natural number smaller than N.
  11. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the method according to any one of claims 1 to 9.
  12. An electronic apparatus, comprising a memory and a processor, wherein a computer program is stored in the memory and the processor is configured to run the computer program to perform the method according to any one of claims 1 to 9.
PCT/CN2023/092082 2022-11-11 2023-05-04 Interleaved data processing method and apparatus, storage medium and electronic apparatus WO2024098687A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211415638.2 2022-11-11
CN202211415638.2A CN118074853A (zh) 2022-11-11 2022-11-11 Interleaved data processing method and apparatus, storage medium and electronic apparatus

Publications (1)

Publication Number Publication Date
WO2024098687A1 true WO2024098687A1 (zh) 2024-05-16

Family

ID=91031859

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/092082 WO2024098687A1 (zh) 2022-11-11 2023-05-04 Interleaved data processing method and apparatus, storage medium and electronic apparatus

Country Status (2)

Country Link
CN (1) CN118074853A (zh)
WO (1) WO2024098687A1 (zh)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101188429A (zh) * 2007-12-24 2008-05-28 北京创毅视讯科技有限公司 Bit interleaver and bit interleaving method
CN101800619A (zh) * 2009-12-28 2010-08-11 福州瑞芯微电子有限公司 Block-interleaving-based interleaving or deinterleaving method and device
CN103677655A (zh) * 2012-09-26 2014-03-26 北京信威通信技术股份有限公司 Method and device for reading and writing a two-dimensional array data stream in a memory
CN103916140A (zh) * 2014-04-18 2014-07-09 上海高清数字科技产业有限公司 Method and device for implementing convolutional interleaving/deinterleaving
US20170371812A1 (en) * 2016-06-27 2017-12-28 Qualcomm Incorporated System and method for odd modulus memory channel interleaving
CN113726476A (zh) * 2021-07-26 2021-11-30 上海明波通信技术股份有限公司 Channel interleaving processing method and processing module


Also Published As

Publication number Publication date
CN118074853A (zh) 2024-05-24

Similar Documents

Publication Publication Date Title
KR100370239B1 (ko) Memory device and memory access method for a high-speed block-pipelined Reed-Solomon decoder, and Reed-Solomon decoder having the memory device
US7840859B2 (en) Block interleaving with memory table of reduced size
US7600177B2 (en) Delta syndrome based iterative Reed-Solomon product code decoder
US6839870B2 (en) Error-correcting code interleaver
CN109120276 (zh) Information processing method and communication apparatus
US20100017682A1 (en) Error correction code striping
KR100403634B1 (ko) Memory device and memory access method for a high-speed pipelined Reed-Solomon decoder, and Reed-Solomon decoder having the memory device
US6687860B1 (en) Data transfer device and data transfer method
WO2024098687A1 (zh) Interleaved data processing method and apparatus, storage medium and electronic apparatus
CN107659319B (zh) Method and device for encoding Turbo product codes
KR100499467B1 (ko) Block interleaving method and apparatus therefor
US7299387B2 (en) Address generator for block interleaving
CN102318249B (zh) Interleaving and deinterleaving method, interleaver and deinterleaver
CN105577196 (zh) Turbo code data interleaving method and interleaver based on a wideband OFDM power-line communication system
CN108023662B (zh) Configurable block interleaving method and interleaver
JP2000137651A (ja) Data error correction device and method
US8176395B2 (en) Memory module and writing and reading method thereof
JPH02121027A (ja) Image processing device
KR20000022714A (ko) Error correction system, error correction method, and data storage system having an error correction function
KR100651567B1 (ko) Deinterleaving apparatus and method using an internal memory and an external memory
JP2000124816A (ja) Encoding interleaving device
CN101160729B (zh) Addressing architecture for parallel processing of recursive data
US8205133B2 (en) Error corrector with a high use efficiency of a memory
KR100390120B1 (ko) Signal processing device
CN117879619A (zh) Parallel high-speed dual-space SC-LDPC decoder and decoding method