WO2021073338A1 - Decoding method and decoder - Google Patents
- Publication number: WO2021073338A1 (application PCT/CN2020/115383)
- Authority
- WO
- WIPO (PCT)
Classifications
- H—ELECTRICITY
- H03—ELECTRONIC CIRCUITRY
- H03M—CODING; DECODING; CODE CONVERSION IN GENERAL
- H03M13/00—Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
- H03M13/03—Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words
- H03M13/05—Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words using block codes, i.e. a predetermined number of check bits joined to a predetermined number of information bits
- H03M13/13—Linear codes
- This application relates to the field of communications, and more specifically to a decoding method and decoder for polar codes in the field of communications.
- Polar codes were first proposed by Arikan in 2008 and were proven to achieve the Shannon capacity limit.
- The encoding of polar codes is mainly based on the theory of channel polarization.
- Channel polarization mainly comprises channel combining and channel splitting; when the number of combined channels tends to infinity, channel polarization occurs.
- Under channel polarization, the channel capacities clearly tend toward two extremes: one part of the channels tends toward capacity 1, i.e., noiseless channels, while the other part tends toward capacity 0, i.e., pure-noise channels.
- Since the channels are polarized, this polarization feature can be exploited for encoding: information bits are placed on the noiseless channels, while fixed bits carrying no information, such as 0 or 1, are placed on the pure-noise channels; these fixed bits are called frozen bits.
- The decoding complexity of polar codes is O(N log2 N), where N is the code length.
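The polarization phenomenon described above can be illustrated numerically with a standard textbook sketch for the binary erasure channel (not taken from this application): one polarization step maps a Bhattacharyya parameter z to the degraded/upgraded pair (2z - z*z, z*z).

```python
def polarize(n_levels, z0=0.5):
    """Track Bhattacharyya parameters of the synthetic channels of a
    binary erasure channel: each polarization step maps z to the pair
    (2z - z*z, z*z), i.e. one degraded and one upgraded channel."""
    zs = [z0]
    for _ in range(n_levels):
        zs = [w for z in zs for w in (2 * z - z * z, z * z)]
    return zs

zs = polarize(10)                              # N = 2**10 = 1024 synthetic channels
noiseless = sum(1 for z in zs if z < 1e-3)     # channels tending to capacity 1
pure_noise = sum(1 for z in zs if z > 1 - 1e-3)  # channels tending to capacity 0
```

The total "erasure mass" is conserved (each pair sums to 2z), while the individual parameters drift toward the 0 and 1 extremes, which is exactly the two-level split the text describes.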
- The encoder performs polar encoding on an information sequence to obtain the encoded sequence.
- The recursive generation of polar codes can be expressed as a matrix multiplication x = u·G_N, where G_N = F^{⊗n} is the polarization (generator) matrix, F = [[1, 0], [1, 1]], and n = log2 N.
- The K relatively reliable bit positions are selected to carry the information bits, i.e., the bits corresponding to the noiseless channels above, and this set is denoted A; the values of the N-K relatively unreliable bits are set to a fixed value, for example 0 or 1.
- These bits are called frozen bits, and their positions can also be called fixed positions, i.e., the bits corresponding to the pure-noise channels above.
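Under these definitions, encoding itself is a single GF(2) matrix multiplication with the Kronecker-power generator matrix. A minimal generic sketch (the bit values and information set below are arbitrary illustrative choices, not taken from the application):

```python
def kron(a, b):
    """Kronecker product of two 0/1 matrices given as lists of lists."""
    return [[x * y for x in ra for y in rb] for ra in a for rb in b]

def polar_encode(info_bits, info_set, N):
    """Place info bits on the reliable positions, freeze the rest to 0,
    and multiply by the generator matrix G_N = F^{(x)n} over GF(2)."""
    F = [[1, 0], [1, 1]]
    G = F
    while len(G) < N:
        G = kron(G, F)
    u = [0] * N                      # frozen positions stay 0
    for pos, bit in zip(info_set, info_bits):
        u[pos] = bit                 # information positions from set A
    return [sum(u[i] * G[i][j] for i in range(N)) % 2 for j in range(N)]

# Example: N = 8, K = 4, information set {3, 5, 6, 7} (illustrative only).
x = polar_encode([1, 0, 1, 1], info_set=[3, 5, 6, 7], N=8)
```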
- The main decoding methods for polar codes include serial decoding methods and parallel decoding methods.
- Serial decoding methods include the successive cancellation (SC) decoding method, the successive cancellation list (SCL) decoding method, and the cyclic redundancy check (CRC) aided SCL (CA-SCL) decoding method, among others.
- Parallel decoding methods include the belief propagation (BP) decoding method, the min-sum (MS) decoding method, and deep neural network (DNN) based decoding, among others. These decoding methods have their own strengths.
- Serial decoding methods have better decoding performance, but the decoding delay is large and the decoding throughput is limited; parallel decoding methods have high parallelism, but their decoding performance often lags well behind that of serial decoding methods.
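For reference, the two kernel operations that SC-family serial decoders apply at each 2x2 stage, written in their common min-sum form (standard polar-decoding formulas, not specific to this application):

```python
import math

def f(a, b):
    """Check-node ('f') operation, min-sum approximation:
    sign(a) * sign(b) * min(|a|, |b|)."""
    return math.copysign(1.0, a) * math.copysign(1.0, b) * min(abs(a), abs(b))

def g(a, b, u):
    """Variable-node ('g') operation given the partial-sum bit u:
    b + (1 - 2u) * a."""
    return b + (1 - 2 * u) * a
```

The serial dependency comes from g: it needs the already-decided bit u, which is why SC/SCL decoders process bits one by one while BP/MS decoders update all nodes in parallel.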
- enhanced mobile broadband (eMBB)
- low density parity check (LDPC)
- Therefore, the polar code decoder needs to be designed to better meet the practical application requirements of polar codes in communication systems.
- The embodiments of the present application provide parallel decoders for cascaded decoding, cascaded decoders, and methods for cascaded decoding, which improve the decoding performance of parallel decoding while maintaining a relatively high throughput rate.
- the embodiment of the present application also provides a multi-cascade decoder, which can further reduce the hardware overhead of the multi-mode decoder in the communication device.
- The embodiments of the present application provide a parallel decoder for cascaded decoding, which is used to decode one or more LLR sequences of length N_in with a maximum of S iterations, and to provide one or more input LLR sequences of length N_out to the next-stage decoder in each iteration.
- The parallel decoder includes a first-stage update unit and a second-stage update unit. For the s-th iteration:
- the first-stage update unit is used to perform the first-stage update on the input LLR sequence of the parallel decoder to obtain the output LLR sequence
- the output LLR sequence is used to provide the input LLR sequence of the next-stage decoder
- the input LLR sequence of the parallel decoder includes N_in LLRs;
- the output LLR sequence includes N_in LLRs;
- the input LLR sequence of the next-stage decoder includes N_out LLRs from the output LLR sequence;
- the second-stage update unit is used to obtain the output LLR sequence of the next-stage decoder, which includes N_out LLRs.
- the second-stage update unit is configured to perform a second-stage update on a second update input LLR sequence to obtain a second update output LLR sequence, where the second update input LLR sequence is obtained from the results of the (s-1)-th iteration;
- the first-stage update unit is configured to perform a first-stage update on the second update output LLR sequence to obtain an output LLR sequence, which is used to provide the input LLR sequence of the next-stage decoder.
- The above parallel decoder can perform large-block-length decoding calculations in cascaded decoding and can be paired with a next-stage decoder that has better decoding performance at small block lengths, which helps to improve the overall decoding throughput rate while leveraging the decoding performance of the next-stage decoder.
- The second update input LLR sequence is the output LLR sequence of the (s-1)-th iteration, with its ((s-2)·N_out+1)-th through ((s-1)·N_out)-th LLRs replaced by the N_out LLRs of the next-stage decoder's output LLR sequence obtained in the (s-1)-th iteration.
- The parallel decoder outputs the output LLR sequence to the next-stage decoder, so that the next-stage decoder obtains its input LLR sequence from the ((s-1)·N_out+1)-th through (s·N_out)-th LLRs of the output LLR sequence.
- Alternatively, the parallel decoder determines the ((s-1)·N_out+1)-th through (s·N_out)-th LLRs from the output LLR sequence and outputs them to the next-stage decoder as its input LLR sequence.
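The index arithmetic in these windowing rules can be sketched directly (1-based iteration index s as in the text; Python list slicing is 0-based):

```python
def next_stage_input(output_llrs, s, n_out):
    """s-th iteration: hand the ((s-1)*N_out+1)-th .. (s*N_out)-th LLRs
    of the output sequence to the next-stage decoder."""
    start = (s - 1) * n_out
    return output_llrs[start:start + n_out]

def second_update_input(prev_output_llrs, next_stage_output, s, n_out):
    """s-th iteration: take the (s-1)-th iteration's output sequence and
    replace its ((s-2)*N_out+1)-th .. ((s-1)*N_out)-th LLRs with the
    N_out LLRs returned by the next-stage decoder."""
    start = (s - 2) * n_out
    spliced = list(prev_output_llrs)
    spliced[start:start + n_out] = next_stage_output
    return spliced

window = next_stage_input(list(range(16)), s=2, n_out=4)
```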
- The parallel decoder includes at least n_in - n_out decoding layers of the factor graph, from the (n_out+1)-th layer to the (n_in+1)-th layer.
- The first-stage update unit is specifically configured to update the N_in LLR nodes of the (n_in+1)-th layer with the input LLR sequence, and to perform soft-value updates layer by layer from the (n_in+1)-th layer toward the (n_out+1)-th layer to obtain the N_in LLR nodes of the (n_out+1)-th layer; the output LLR sequence includes the N_in LLR nodes of the (n_out+1)-th layer.
- The second-stage update unit is specifically configured to update the N_in LLR nodes of the (n_out+1)-th layer with the second update input LLR sequence, and to perform soft-value updates layer by layer from the (n_out+1)-th layer toward the (n_in+1)-th layer to obtain the N_in LLR nodes of the (n_in+1)-th layer; the second update output LLR sequence includes the N_in LLR nodes of the (n_in+1)-th layer.
- The first-stage update unit is specifically further configured to determine the LDPC check matrix corresponding to the input LLR sequence, and the first-stage update unit and the second-stage update unit update the input LLR sequence based on the LDPC check matrix.
- In this way, the polar code is decoded by the computation units of the LDPC code, making it possible to decode polar codes in a common mode with LDPC codes and save overhead.
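A computation unit shared between LDPC decoding and parallel polar decoding would center on the min-sum check-node update. A generic sketch of that update (not the application's specific hardware unit):

```python
def check_node_update(incoming):
    """Min-sum check-node update: each outgoing message is the product of
    the signs of the other incoming messages times their minimum magnitude."""
    out = []
    for i, _ in enumerate(incoming):
        others = incoming[:i] + incoming[i + 1:]
        sign = 1.0
        for v in others:
            if v < 0:
                sign = -sign
        out.append(sign * min(abs(v) for v in others))
    return out
```

The same sign/min datapath serves both an LDPC check row and a polar BP processing element, which is the kind of reuse the overhead-saving claim refers to.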
- The second-stage update unit is further configured to return the decoded LLR sequence to the upper-stage decoder, where the decoded LLR sequence includes the second update output sequence.
- the parallel decoder includes one or more of the following: a BP decoder, an MS decoder, or a DNN decoder.
- The value of N_in can be any of the following: 8192, 4096, 2048, 1024, 512, 256, 128, 64, 32; the value of N_out can be any of the following: 1024, 512, 256, 128, 64, 32, 16, 8, 4, 2, 1.
- An embodiment of the present application provides a cascaded decoding method, which performs at most T iterations of cascaded decoding on an input LLR sequence, where the t-th iteration includes:
- the length of each second input LLR sequence is N;
- the length of each second output LLR sequence is N;
- each third input LLR sequence includes N_s LLRs from the corresponding second output LLR sequence.
- The above method uses a parallel decoding algorithm to perform parallel decoding calculations for large block lengths, which improves decoding throughput, and provides small block lengths to the next-stage serial decoding algorithm, using the small-block-length serial decoding algorithm to improve decoding performance. Combining the advantages of the two allows decoding to achieve both high throughput and good decoding performance.
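The iteration structure just described can be sketched as a control loop. All four callables below are hypothetical identity-like stand-ins for the parallel and serial stages, chosen only to make the data flow visible; early termination and list handling are omitted:

```python
def cascaded_decode(channel_llrs, first_stage, second_stage, serial_decode,
                    soft_output, T, n_s):
    """Control-flow sketch of the cascaded iterations: the parallel stage
    produces a length-N soft sequence, a window of N_s LLRs feeds the
    serial stage, and the serial stage's soft output is spliced back."""
    second_in = list(channel_llrs)          # t = 1 starts from the channel LLRs
    second_out = first_stage(second_in)
    for t in range(1, T + 1):
        start = (t - 1) * n_s
        third_in = second_out[start:start + n_s]   # window for the serial stage
        paths = serial_decode(third_in)
        if t == T:
            return paths
        third_out = soft_output(paths)
        spliced = list(second_out)
        spliced[start:start + n_s] = third_out     # feed back serial soft values
        second_in = second_stage(spliced)
        second_out = first_stage(second_in)

# Demo with pass-through stand-ins; the serial soft output adds an offset
# so the splice-back is observable.
result = cascaded_decode(
    list(range(8)),
    first_stage=lambda x: list(x),
    second_stage=lambda x: list(x),
    serial_decode=lambda w: list(w),
    soft_output=lambda p: [v + 10 for v in p],
    T=2, n_s=4)
```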
- Each third input LLR sequence includes the ((t-1)·N_s+1)-th through (t·N_s)-th LLRs of the corresponding second output LLR sequence.
- Decoding the L(t) second input LLR sequences with the parallel decoding algorithm to obtain L(t) second output LLR sequences includes:
- a parallel decoding algorithm is used to perform a first stage update on the L(t) second input LLR sequences respectively to obtain the L(t) second output LLR sequences.
- the second input LLR sequence is the input LLR sequence of the cascaded decoder.
- the t-1th iterative decoding also includes:
- M(t-1) third output LLR sequences are obtained, and each third output LLR sequence includes N_s LLRs.
- the t-th iteration further includes:
- the parallel decoding algorithm is used to perform a second stage update on the L(t) second updated input LLR sequences to obtain the L(t) second input LLR sequences.
- The parallel decoding algorithm includes at least n - n_s decoding layers, from the (n_s+1)-th layer to the (n+1)-th layer, and each layer includes N LLR nodes; the first-stage update includes:
- the second phase update includes:
- Decoding the L(t) third input LLR sequences with a serial decoding algorithm to obtain M(t) decoding paths includes:
- the M(t) decoding paths are the M(t) paths with the largest path metrics among the L(t)·2^k decoding paths, or the M(t) paths among the L(t)·2^k decoding paths that have the largest path metrics and pass the CRC check.
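The path-selection rule can be sketched as a top-M(t) selection (generic list pruning; `crc_ok` is a hypothetical predicate standing in for the CRC check):

```python
def prune_paths(paths, metrics, m, crc_ok=None):
    """Keep the m paths with the largest path metrics; if a CRC predicate
    is supplied, only paths passing the check are eligible."""
    candidates = list(range(len(paths)))
    if crc_ok is not None:
        candidates = [i for i in candidates if crc_ok(paths[i])]
    best = sorted(candidates, key=lambda i: metrics[i], reverse=True)[:m]
    return [paths[i] for i in best]
```

In SCL terms, each surviving path is first expanded by the 2^k extensions of the next k bits, then this selection keeps the best m survivors.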
- Decoding the L(t) second input LLR sequences with the parallel decoding algorithm to obtain L(t) second output LLR sequences includes:
- the parallel decoding algorithm includes: BP decoding algorithm, MS decoding algorithm, or DNN decoding algorithm;
- the serial decoding algorithm includes: the SCL decoding algorithm or the CA-SCL decoding algorithm.
- N is any of the following: 1024, 512, 256, 128, 64, 32; the value of N_s is any of the following: 128, 64, 32, 16, 8, 4, 2, 1.
- When t < T and the t-th iteration does not meet the early-termination condition, the t-th iteration also includes:
- M(t) third output LLR sequences are obtained.
- When t = T, or the t-th iteration satisfies the early-termination condition, the t-th iteration also includes:
- the serial decoder obtains the decoding result according to the M(t) decoding paths, and terminates iteration.
- the input LLR sequence is the information sequence
- The information sequence includes a plurality of information bits, or one or more information bits and one or more frozen bits; obtaining a decoding result according to the M(t) decoding paths includes: performing a hard decision on the decoding path with the largest path metric among the M(t) decoding paths, or on the decoding path with the largest path metric that passes the CRC check, to obtain each information bit of the information sequence.
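The final hard decision on the surviving path is a per-LLR sign decision (assuming the common convention LLR = log(P(bit=0)/P(bit=1)); the text does not fix the sign convention):

```python
def hard_decision(llrs):
    """Map each soft value to a bit: non-negative LLR -> 0, negative -> 1,
    under the assumed convention LLR = log(P(bit=0) / P(bit=1))."""
    return [0 if llr >= 0 else 1 for llr in llrs]
```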
- An embodiment of the present application provides a cascaded decoder, which includes a serial decoder and a second parallel decoder according to the first aspect or any possible implementation of the first aspect.
- The second parallel decoder decodes L(t) second input LLR sequences to obtain L(t) second output LLR sequences; each second input LLR sequence has a length of N, and each second output LLR sequence has a length of N;
- the serial decoder decodes the L(t) third input LLR sequences to obtain M(t) decoding paths, and each third input LLR sequence includes N_s LLRs from the corresponding second output LLR sequence;
- the serial decoder determines to continue the iteration, or the serial decoder determines to terminate the iteration.
- The above cascaded decoder uses a parallel decoder to perform parallel decoding calculations for large block lengths to improve decoding throughput, and provides small-block-length decoding to the serial decoder, using the small-block-length serial decoder to improve decoding performance. Combining the advantages of the two allows decoding to achieve both high throughput and good decoding performance.
- Each third input LLR sequence includes the ((t-1)·N_s+1)-th through (t·N_s)-th LLRs of the corresponding second output LLR sequence.
- Decoding the L(t) second input LLR sequences by the second parallel decoder to obtain L(t) second output LLR sequences includes:
- the second parallel decoder performs a first stage update on the L(t) second input LLR sequences respectively to obtain the L(t) second output LLR sequences.
- The second input LLR sequence is the input LLR sequence of the cascaded decoder.
- the t-1th iterative decoding also includes:
- the serial decoder obtains M(t-1) third output LLR sequences according to the M(t-1) decoding paths, and each third output LLR sequence includes N_s LLRs.
- the t-th iteration further includes:
- the second parallel decoder obtains the L(t) second update input LLR sequences of the t-th iteration according to the M(t-1) third output LLR sequences and the M(t-1) second output LLR sequences of the (t-1)-th iteration, where L(t) equals M(t-1);
- the second parallel decoder performs a second stage update on the L(t) second updated input LLR sequences to obtain the L(t) second input LLR sequences.
- The second parallel decoder includes at least n - n_s decoding layers, from the (n_s+1)-th layer to the (n+1)-th layer, and each layer includes N LLR nodes;
- the second parallel decoder performs a first stage update on each second input LLR sequence to obtain a corresponding second output LLR sequence, including:
- the second parallel decoder updates the N LLR nodes of the (n+1)-th layer with the N LLRs of the second input LLR sequence;
- the second parallel decoder performs soft-value updates from the (n+1)-th layer toward the (n_s+1)-th layer to obtain the N LLR nodes of the (n_s+1)-th layer, and the corresponding second output LLR sequence includes the N LLR nodes of the (n_s+1)-th layer;
- the second parallel decoder performs a second stage update on each second update input LLR sequence to obtain the corresponding second input LLR sequence, including:
- the second parallel decoder updates the N LLR nodes of the (n_s+1)-th layer with the N LLRs of the second update input LLR sequence;
- the second parallel decoder performs soft-value updates from the (n_s+1)-th layer toward the (n+1)-th layer to obtain the N LLR nodes of the (n+1)-th layer, and the corresponding second input LLR sequence includes the N LLR nodes of the (n+1)-th layer.
- Decoding the L(t) third input LLR sequences by the serial decoder to obtain M(t) decoding paths includes:
- the serial decoder decodes the L(t) third input LLR sequences to obtain L(t)·2^k decoding paths, where k is a positive integer;
- the M(t) decoding paths are the M(t) paths with the largest path metrics among the L(t)·2^k decoding paths, or the M(t) paths among the L(t)·2^k decoding paths that have the largest path metrics and pass the CRC check.
- The second parallel decoder is configured to determine the corresponding LDPC check matrices for the L(t) second input LLR sequences respectively;
- the second parallel decoder decodes the L(t) second input LLR sequences based on the LDPC check matrix to obtain L(t) second output sequences.
- The second parallel decoder includes one or more of the following: a BP decoder, an MS decoder, or a DNN decoder; and the serial decoder includes an SCL decoder or a CA-SCL decoder.
- The value of N is any one of the following: 1024, 512, 256, 128, 64, 32; the value of N_s is any one of the following: 128, 64, 32, 16, 8, 4, 2, 1.
- the serial decoder determines to continue iteration, including:
- the serial decoder determines that t < T and that the t-th iteration does not meet the early-termination condition;
- the t-th iteration further includes:
- the serial decoder obtains M(t) third output LLR sequences according to the M(t) decoding paths.
- the serial decoder determines to terminate the iteration, including:
- the t-th iteration further includes:
- the serial decoder obtains the decoding result according to the M(t) decoding paths, and terminates iteration.
- the input LLR sequence of the cascaded decoder is an information sequence
- The serial decoder performs a hard decision on the decoding path with the largest path metric among the M(t) decoding paths, or on the decoding path with the largest path metric that passes the CRC check, to obtain each information bit of the information sequence.
- The second-stage iteration is a cascaded decoding iteration according to the second aspect or any possible implementation of the second aspect, with a maximum of T iterations; the i-th iteration, i ≤ I, includes:
- the length of each first input LLR sequence is N_p;
- the length of each first output LLR sequence is N_p;
- each second input LLR sequence includes N LLRs from the corresponding first output LLR sequence.
- the multi-cascade decoding method can further share part of the parallel decoding units of the LDPC codes, saving system overhead.
- Each second input LLR sequence includes the ((i-1)·N+1)-th through (i·N)-th LLRs of the corresponding first output LLR sequence.
- Decoding the K(i) first input LLR sequences with the first parallel decoding algorithm to obtain K(i) first output LLR sequences includes:
- the first parallel decoding algorithm performs a first stage update on the K(i) first input LLR sequences respectively to obtain the K(i) first output LLR sequences.
- the first input LLR sequence is the initial LLR sequence.
- the i-1th iterative decoding also includes:
- the i-th iteration further includes:
- The first parallel decoding algorithm includes at least n_p - n decoding layers, from the (n+1)-th layer to the (n_p+1)-th layer, and each layer includes N_p LLR nodes;
- Performing the first-stage update on each first input LLR sequence with the first parallel decoding algorithm to obtain the corresponding first output LLR sequence includes:
- updating the N_p LLR nodes of the (n_p+1)-th layer with the first input LLR sequence, and performing soft-value updates from the (n_p+1)-th layer toward the (n+1)-th layer to obtain the N_p LLR nodes of the (n+1)-th layer, where the corresponding first output LLR sequence includes the N_p LLR nodes of the (n+1)-th layer.
- the second stage update of each second updated input LLR sequence by using the first parallel decoding algorithm to obtain the corresponding first input LLR sequence includes:
- the t-th iteration of the cascaded decoding method includes: obtaining M(i,t) third output LLR sequences according to the M(i,t) decoding paths.
- the t iterations include:
- M(i,t) third output LLR sequences are obtained.
- the M(i,t) second update input LLR sequences of the t-th iteration are obtained;
- the t-th iteration of the cascaded decoding method includes:
- The first parallel decoding algorithm includes one or more of the following: the BP decoding algorithm, the MS decoding algorithm, or the DNN decoding algorithm.
- The value of N_p includes any one of the following: 8192, 4096, 2048, 1024, 512, 256, 128.
- The cascaded decoder is a lower-stage decoder of the first parallel decoder, and the first parallel decoder is used to decode one or more LLR sequences of length N_p.
- The first parallel decoder performs first-stage iterative decoding on K(i) first input LLR sequences to obtain K(i) first output LLR sequences; the length of each first input LLR sequence is N_p, and the length of each first output LLR sequence is N_p;
- the cascaded decoder performs second-stage iterative decoding on K(i) second input LLR sequences, and each second input LLR sequence includes N LLRs from the corresponding first output LLR sequence.
- the multi-cascade decoder can further share part of the parallel decoding unit of the LDPC decoder to save system overhead.
- Each second input LLR sequence includes the ((i-1)·N+1)-th through (i·N)-th LLRs of the corresponding first output LLR sequence.
- Decoding the K(i) first input LLR sequences by the first parallel decoder to obtain K(i) first output LLR sequences includes:
- the first parallel decoder performs a first stage update on the K(i) first input LLR sequences respectively to obtain the K(i) first output LLR sequences.
- the first input LLR sequence is the initial LLR sequence.
- the i-1th iterative decoding also includes:
- the cascaded decoder outputs M(i-1,t) second decoded LLR sequences to the first parallel decoder, and the second decoded LLR sequence includes N LLRs.
- the i-th iteration further includes:
- the first parallel decoder obtains the M(i-1,t) first output LLR sequences corresponding to the M(i-1,t) second decoded LLR sequences of the (i-1)-th iteration;
- the first parallel decoder performs a second stage update on the K(i) second updated input LLR sequences to obtain the L(i, 1) first input LLR sequences.
- The first parallel decoder includes at least n_p - n decoding layers, from the (n+1)-th layer to the (n_p+1)-th layer, and each layer includes N_p LLR nodes;
- the first parallel decoder performs a first stage update on each first input LLR sequence to obtain the corresponding first output LLR sequence, including:
- the first parallel decoder performs soft-value updates from the (n_p+1)-th layer toward the (n+1)-th layer to obtain the N_p LLR nodes of the (n+1)-th layer, and the corresponding first output LLR sequence includes the N_p LLR nodes of the (n+1)-th layer;
- the first parallel decoder performs a second stage update on each second update input LLR sequence to obtain the corresponding first input LLR sequence, including:
- the first parallel decoder updates the N_p LLR nodes of the (n+1)-th layer with the N_p LLRs of the second update input LLR sequence;
- the first parallel decoder performs soft-value updates from the (n+1)-th layer toward the (n_p+1)-th layer to obtain the N_p LLR nodes of the (n_p+1)-th layer, and the corresponding first input LLR sequence includes the N_p LLR nodes of the (n_p+1)-th layer.
- The serial decoder determines that t < T and that the t-th iteration does not meet the early-termination condition, and the t-th iteration of the cascaded decoder includes: the serial decoder obtains M(i,t) third output LLR sequences according to the M(i,t) decoding paths.
- The t-th iteration satisfies the early-termination condition of the cascaded decoder, and the t-th iteration of the cascaded decoder includes:
- the serial decoder obtains M(i,t) third output LLR sequences according to the M(i,t) decoding paths.
- the second parallel decoder obtains the M(i,t) second update input LLR sequences of the t-th iteration according to the M(i,t) third output LLR sequences and the M(i,t) second output LLR sequences of the t-th iteration;
- the second parallel decoder performs a second stage update on the M(i,t) second update input LLR sequences to obtain M(i,t) second decoded LLR sequences;
- the second parallel decoder outputs the M(i,t) second decoded LLR sequences to the first parallel decoder.
- The i-th stage iteration satisfies the early-termination condition of the multi-cascade decoder, and the t-th iteration of the cascaded decoder includes:
- the serial decoder obtains the decoding result according to the M(i,t) decoding paths, and terminates the iteration.
- the first parallel decoder includes one or more of the following: a BP decoder, an MS decoder, or a DNN decoder.
- the value of N p includes any one of the following: 8192, 4096, 2048, 1024, 512, 256, 128.
- an embodiment of the present application provides a decoding device, which has the function of implementing the method described in any one of the possible designs of the second aspect and the fourth aspect.
- the function can be realized by hardware, or by hardware executing corresponding software.
- the hardware or software includes one or more modules or units corresponding to the above-mentioned functions.
- When part or all of the functions are realized by hardware, the decoding device includes: an input interface circuit for obtaining the LLR sequence corresponding to the bit sequence to be decoded; a logic circuit for executing the method described in the second aspect or the fourth aspect or any one of their possible designs; and an output interface circuit for outputting information bits.
- the decoding device may be a chip or an integrated circuit.
- When part or all of the functions are realized by software, the decoding device includes: a memory for storing a program, and a processor for executing the program stored in the memory; when the program is executed, the decoding device can implement the method described in the second aspect or the fourth aspect or any one of their possible designs.
- the foregoing memory may be a physically independent unit, or may be integrated with the processor.
- Alternatively, when part or all of the functions are implemented by software, the decoding device includes a processor.
- the memory for storing the program is located outside the decoding device, and the processor is connected to the memory through a circuit/wire for reading and executing the program stored in the memory.
- the communication device provided in the sixth aspect includes a processor and a transceiver component, and the processor and the transceiver component can be used to implement the functions of each part of the foregoing encoding or decoding method.
- When the communication device is a terminal, a base station, or other network equipment, its transceiver component can be a transceiver.
- When the communication device is a baseband chip or a baseband board, its transceiver component can be an input/output circuit of the baseband chip or baseband board, where the input/output circuit is used to receive/send input/output signals.
- the communication device may further include a memory for storing data and/or instructions.
- an embodiment of the present application provides a network device, including any possible decoder as in the first aspect, the third aspect, or the fifth aspect, or the decoding device of the sixth aspect.
- an embodiment of the present application provides a terminal device, including any possible decoder as in the first aspect, the third aspect, or the fifth aspect, or the decoding device in the sixth aspect.
- an embodiment of the present application provides a communication system, which includes the network device of the seventh aspect and the terminal device of the eighth aspect.
- an embodiment of the present application provides a computer storage medium that stores a computer program, and the computer program includes instructions for executing the method described in any one of the above-mentioned second or fourth aspects.
- A computer program product containing instructions which, when run on a computer, cause the computer to execute the method described in any one of the possible designs of the second or fourth aspects.
- Figure 1 is an architecture diagram of the communication system provided by this application.
- Figure 2a is a schematic diagram of the decoding path of an SCL decoding algorithm provided by this application.
- Figure 2b is a schematic diagram of the decoding path of an SCL decoding algorithm provided by this application.
- Fig. 3a is a schematic diagram of a basic processing unit of a parallel decoding algorithm provided by this application;
- FIG. 3b is a schematic diagram of iterative calculation of a parallel decoding algorithm butterfly network provided by this application.
- Figure 3c is a schematic diagram of an iterative operation unit of a DNN decoding algorithm provided by this application;
- FIG. 4 is an example of a Tanner graph of an LDPC code provided by an embodiment of the application.
- FIG. 5 is a schematic structural diagram of a parallel decoder provided by an embodiment of this application.
- FIG. 6 is a schematic structural diagram of a cascaded decoder provided by an embodiment of this application.
- FIG. 7 is a flowchart of a cascaded decoding method provided by an embodiment of this application.
- FIG. 8 is a schematic structural diagram of a multi-cascade decoder provided by an embodiment of this application.
- FIG. 9 is a flowchart of a cascaded decoding method provided by an embodiment of this application.
- Fig. 10 is a decoding performance diagram of the cascaded decoding method provided by an embodiment of this application compared with other decoding algorithms.
- the embodiments of the present application can be applied to various fields that adopt Polar coding, such as: data storage field, optical network communication field, wireless communication field, and so on.
- The wireless communication systems involved in the embodiments of the present application include but are not limited to: the global system for mobile communications (GSM), code division multiple access (CDMA) systems, wideband code division multiple access (WCDMA) systems, the general packet radio service (GPRS), long term evolution (LTE) systems in frequency division duplex (FDD) or time division duplex (TDD) mode, the universal mobile telecommunication system (UMTS), and worldwide interoperability for microwave access (WiMAX) systems.
- Vehicle-to-everything (V2X) communication can include vehicle-to-network (V2N), vehicle-to-vehicle (V2V), vehicle-to-infrastructure (V2I), vehicle-to-pedestrian (V2P), etc.
- Related scenarios also include long term evolution-vehicle (LTE-V) for inter-vehicle communication, the Internet of Vehicles, machine type communication (MTC), the Internet of Things (IoT), long term evolution-machine (LTE-M), machine to machine (M2M) communication, etc.
- the communication device involved in this application may be a chip (such as a baseband chip, or a data signal processing chip, or a general-purpose chip, etc.), a terminal, a base station, or other network equipment.
- a terminal is a device with a communication function, which can communicate with one or more core networks via a radio access network (Radio Access Network, RAN).
- the terminal may include a handheld device with a wireless communication function, a vehicle-mounted device, a wearable device, a computing device, or other processing device connected to a wireless modem.
- terminals can be called different names in different networks, such as: user equipment (UE), mobile station (MS), subscriber unit, station, cellular phone, personal digital assistant, wireless modem, wireless communication equipment, handheld device, laptop computer, cordless phone, wireless local loop station, etc.
- a base station (also called a base station device) may have different names in different wireless access systems. In a universal mobile telecommunication system (UMTS) network, the base station is called a NodeB; in an LTE network, the base station is called an evolved NodeB (eNB or eNodeB); and in a new radio (NR) network, the base station is called a transmission reception point (TRP) or a next generation NodeB (gNB). The base station can also be a relay station, an access point, an in-vehicle device, a wearable device, or network equipment in a future evolved public land mobile network (PLMN); other names may also be used in various other evolved networks.
- the present invention is not limited to this.
- FIG. 1 is an architecture diagram of the communication system provided by this application. It should be noted that FIG. 1 merely illustrates an architecture diagram of a communication system in the form of an example, and is not a limitation on the architecture diagram of the communication system.
- FIG. 1 includes a communication device 101 and a communication device 102.
- this application takes the communication device 101 as the transmitting end device that sends a signal and the communication device 102 as the receiving end device that receives the signal as an example for description.
- the communication device 102 can also send information to the communication device 101, and the communication device 101 receives the signal accordingly, then the communication device 102 is the transmitting end device, and the communication device 101 is the receiving end device.
- the embodiment of the present invention is not limited to this.
- the transmitting end device includes an encoder
- the receiving end device includes a decoder. Since the communication device may be a transmitting end device or a receiving end device, it may include an encoder and a decoder.
- the communication device 101 can, for the information sequence to be sent (such as signaling transmitted on the control channel), perform Polar encoding and output the encoded sequence; after rate matching, interleaving, and modulation, the encoded sequence is transmitted to the communication device 102 on the control channel.
- the communication device 102 performs processing such as demodulation on the received signal to obtain a log-likelihood ratio (LLR) sequence. The number of LLR soft values in the sequence is the same as the number of bits in the information sequence, namely N; it can also be said that its length is N, where N is a positive integer greater than 0.
- the communication device 102 performs Polar decoding according to the received LLR sequence.
- due to channel noise, the communication device 102 may make a misjudgment;
- for a received signal r corresponding to a transmitted bit b, the LLR is defined from the probability p(r|b=0) that the bit is correctly judged as 0 by the receiving end device and the probability p(r|b=1) that it is judged as 1: LLR = ln[p(r|b=0)/p(r|b=1)];
- the LLR can be a floating point number.
- serial decoding algorithms mainly include the SC decoding algorithm and the SCL decoding algorithm. There are many improved decoding algorithms based on the SCL decoding algorithm, such as the CA-SCL algorithm with CRC check.
- if i ∉ A, u_i is a frozen bit whose value is known (for example, fixed to 0 or 1), so it can be judged directly, and the decision result of this bit is used for the decision of the next bit u_{i+1}; if i ∈ A, u_i is an information bit: after the decision results of all bits before this bit are obtained, its decoding LLR is calculated, a hard decision is made on the LLR to obtain the decision result, and the decision result of this bit is used for the decision of the next bit u_{i+1}.
- the decision function of the above polarization code is as follows:
- the decoding LLR corresponding to u i is defined as follows:
- in the recursive definition, one operand is the subsequence of odd-indexed elements of the sequence, and the other is the subsequence of even-indexed elements of the sequence.
- in this way, the LLR sequence of length N is reduced to two LLR sequences of length N/2 for calculation; applying this recursion repeatedly reduces the problem to LLR sequences of length 1, that is, to the calculation of a single LLR soft value.
- for example, for N = 8, the LLR sequence of length 8 is first reduced to 2 LLR sequences of length 4, then to a total of 4 LLR sequences of length 2, and further recursively to 8 LLR sequences of length 1.
- the soft value of one LLR can be obtained according to the following formula:
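The per-LLR formula itself is left to the drawings of the patent. As a hedged sketch (not the patent's exact formula), a commonly used form is the min-sum approximation of the SC f/g pair; the names `f_op`, `g_op`, and `reduce_step` are hypothetical, and the first-half/second-half split below stands in for the odd/even subsequence split described above:

```python
import math

def f_op(a: float, b: float) -> float:
    # Upper-branch combine: min-sum approximation of
    # 2*atanh(tanh(a/2)*tanh(b/2)).
    return math.copysign(1.0, a) * math.copysign(1.0, b) * min(abs(a), abs(b))

def g_op(a: float, b: float, u: int) -> float:
    # Lower-branch combine: uses the hard decision u of the already-decoded bit.
    return b + (1 - 2 * u) * a

def reduce_step(llrs, u_hat=None):
    # One recursion step: a length-N LLR sequence becomes one of length N/2.
    half = len(llrs) // 2
    first, second = llrs[:half], llrs[half:]
    if u_hat is None:
        return [f_op(a, b) for a, b in zip(first, second)]
    return [g_op(a, b, u) for a, b, u in zip(first, second, u_hat)]
```

Applying `reduce_step` log2(N) times reduces the sequence to a single soft value, matching the recursive reduction described above.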
- the SC decoding process can be described as a depth-first search process on a code tree.
- Figure 2a shows an example of a code tree.
- each layer corresponds to an information bit or a frozen bit, and the two edges between each parent node and its two child nodes are marked as the 0 path and the 1 path respectively; a total of 2^N paths can be expanded.
- the SC decoder starts decoding from the root node u_1 and selects the 0 path or the 1 path each time according to the decision result of the current bit. After reaching a leaf node, the decision of the N bits ends, and the path traced by the SC decoder in the code tree is the decoding result, as shown in Figure 2a.
- the SC decoding algorithm selects the 0 path or the 1 path at each node according to the current decision result, so each step is a locally optimal choice. If a certain bit is judged incorrectly, the decoder continues to expand along that path: the error cannot be corrected, and it affects the subsequent decoding process.
- the SCL decoding algorithm changes the hard decision in the SC decoding algorithm to a soft decision, that is, both the 0 and 1 decisions are expanded and up to L paths are retained, where L is the search width.
- for each path expansion, the path metric (PM) must be calculated. The PM is the probability of the decoding sequence corresponding to a path, usually expressed in logarithmic form as follows:
- the SCL decoding algorithm sorts the PM values and retains the L decoding paths with the largest PM values. At the last bit, the path with the largest PM value is selected as the decoding output.
- the CA-SCL decoding algorithm is an optimization of the SCL decoding algorithm.
- the CRC is introduced into the information sequence, and the CRC is used to assist the decision, and the path that passes the CRC check and the PM value is the largest is selected.
- the path formed from the root node to any node in the code tree corresponds to a path metric value; each time the path is expanded, the L paths with the largest path metric value in the current layer are selected .
- the decoding sequences corresponding to the L paths are output in the order of the metric value from small to large, forming a set of candidate decoding sequences. Perform a CRC check on the candidate decoding sequence, and select the path with the largest path metric value that can pass the CRC check as the final decoding result.
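The pruning and final selection just described can be sketched as follows; this is an illustrative stand-in, not the patent's formulas, and the CRC predicate `crc_ok` is a placeholder for a real CRC check:

```python
def scl_prune(paths, L):
    # Keep the L candidate paths with the largest path metric (PM).
    # Each path is represented as a (pm, bits) pair.
    return sorted(paths, key=lambda p: p[0], reverse=True)[:L]

def ca_scl_select(paths, crc_ok):
    # CA-SCL final selection: among the surviving paths, pick the
    # largest-PM path that passes the CRC check; if none passes,
    # fall back to the largest-PM path overall.
    passing = [p for p in paths if crc_ok(p[1])]
    candidates = passing if passing else paths
    return max(candidates, key=lambda p: p[0])
```

For example, with an even-parity predicate standing in for a CRC, a larger-PM path that fails the check is skipped in favor of the best passing path.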
- parallel decoding algorithms include the BP decoding algorithm, the MS decoding algorithm and the DNN decoding algorithm; they can be used for both Polar code decoding and LDPC code decoding.
- the parallel decoding algorithm for Polar codes is decoded based on the factor graph of the generator matrix G.
- the following takes the BP decoding algorithm as an example to introduce.
- the factor graph is composed of basic processing elements (PE).
- each node in the first layer represents the information sequence
- information is transferred layer by layer from left to right, that is, from the first layer to the (n+1)-th layer; this is called the right operation (R operation). Information transferred from right to left, that is, from the (n+1)-th layer to the first layer, is called the left operation (L operation).
- the information transferred between layers is LLRs, where t is the iteration index, 0 < t ≤ T, and T is the maximum number of iterations of the BP decoding algorithm.
- the basic processing unit is represented as a butterfly operation unit in the figure, for example: the butterfly unit connecting nodes (1,1), (1,2), (2,1) and (2,2); the butterfly unit connecting nodes (2,2), (2,3), (4,2) and (4,3); the butterfly unit connecting nodes (4,3), (4,4), (8,3) and (8,4); and so on.
- the L operation updates the LLRs layer by layer from right to left; after reaching the left end, the R operation updates them layer by layer from left to right. When all nodes have been visited once, one iteration is completed. After each iteration, a hard decision is made on the LLR values of the information bits and the CRC is checked; if the CRC passes or the maximum number of iterations is reached, the iteration stops, otherwise it continues.
- the LLR sequence of length N can be regarded as being reduced to two LLR subsequences of length N/2 for the next layer's update. For each layer updated toward the left, the length of the LLR subsequences is halved relative to the previous layer.
- at the j-th layer, the sequence can be regarded as 2^(n+1-j) LLR subsequences, each of length 2^(j-1).
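The layer-wise bookkeeping can be checked numerically; a minimal sketch for N = 2^n (the function name is a hypothetical helper, not from the patent):

```python
def layer_subsequences(n: int, j: int):
    # At the j-th layer of the factor graph (1 <= j <= n+1), the length-N
    # sequence is regarded as 2**(n+1-j) subsequences of length 2**(j-1).
    count, length = 2 ** (n + 1 - j), 2 ** (j - 1)
    assert count * length == 2 ** n  # every layer still holds all N LLRs
    return count, length
```

For N = 8 (n = 3), layer j = 2 gives 4 subsequences of length 2, and layer j = 4 gives a single subsequence of length 8.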
- the iterative process of the MS decoding algorithm is similar to the iterative process of the BP decoding algorithm, and it will not be described separately below, and it is generally called the BP/MS decoding algorithm.
- the node update process of DNN can simulate the node update process of one or more iterations in the BP/MS decoding algorithm, and it can also assign different weights to each edge during the update process to improve the decoding performance.
- when a DNN is used to implement Polar decoding, it can imitate the structure of the BP/MS decoding algorithm. Figure 3c shows the architecture of the DNN decoding algorithm.
- the right-to-left L operation and the left-to-right R operation of the butterfly unit network are cascaded to form one iterative operation unit; by cascading T such iterative operation units, the DNN decoding corresponding to T iterations can be completed.
- the parallel decoding algorithm for LDPC codes is decoded based on a check matrix.
- the parallel decoding algorithm for LDPC codes can also be updated in two stages, corresponding to the L operation and the R operation in the Polar code factor graph parallel decoding algorithm.
- information transfer in one direction corresponds to the L operation in the Polar code factor-graph parallel decoding algorithm, and transfer in the other direction corresponds to the R operation. Since row-column exchange of the LDPC matrix does not change its decoding properties, the corresponding R operation can also be calculated row by row from bottom to top of the check matrix, and the corresponding L operation row by row from top to bottom.
- the LDPC check matrix can correspond to the Tanner graph.
- an example of the LDPC code check matrix and its corresponding check equation is:
- the Tanner graph corresponding to the check matrix can be represented as shown in Figure 4.
- each circular node in Figure 4 is a variable node, representing a column of the check matrix H;
- each square node is a check node, representing a row of the check matrix H;
- each edge connecting a check node and a variable node in Figure 4 represents a non-zero element at the intersection of the row and column corresponding to the two nodes.
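Since the example matrix itself appears only in the drawing, here is a hedged illustration with a hypothetical 2×4 check matrix `H`: each non-zero entry at row i, column j yields one Tanner-graph edge between check node i and variable node j, exactly as described above.

```python
def tanner_edges(H):
    # Each non-zero element at row i (check node), column j (variable node)
    # of the check matrix corresponds to one edge of the Tanner graph.
    return [(i, j) for i, row in enumerate(H) for j, h in enumerate(row) if h]

# Hypothetical example check matrix (not the one in Figure 4):
H = [
    [1, 1, 0, 1],  # check node 0: v0 + v1 + v3 = 0
    [0, 1, 1, 1],  # check node 1: v1 + v2 + v3 = 0
]
```

Here `tanner_edges(H)` lists six edges, one per non-zero entry of `H`.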
- decoding algorithms such as BP/MS can also be used.
- for BP decoding, the decoding formula can be written as:
- R_ij represents the LLR that needs to be updated for the j-th variable node, and Q_ji represents the LLRs passed by the other variable nodes to the current check node.
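The formula itself is left to the drawing; as a hedged stand-in, the min-sum (MS) version of the check-node update computes each outgoing LLR from the signs and the minimum magnitude of the *other* incoming LLRs (the extrinsic principle). The function name is hypothetical:

```python
import math

def min_sum_check_update(q_in):
    # q_in: LLRs arriving at one check node from its connected variable nodes.
    # Returns, per edge, the LLR sent back, computed from all the other
    # incoming LLRs: product of their signs times their minimum magnitude.
    out = []
    for j in range(len(q_in)):
        others = q_in[:j] + q_in[j + 1:]
        sign = 1.0
        for v in others:
            sign *= math.copysign(1.0, v)
        out.append(sign * min(abs(v) for v in others))
    return out
```

For a degree-3 check node, each returned value depends only on the two other inputs.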
- the communication system puts forward higher performance requirements and throughput requirements for decoding.
- since the 5G communication system requires both Polar coding/decoding and LDPC coding/decoding, reducing the overhead of the decoder is also a problem that needs to be solved.
- FIG. 5 is a schematic structural diagram of a parallel decoder 500 for cascaded decoding according to an embodiment of the application, which is used to decode one or more log-likelihood ratio (LLR) sequences of length N_in and to provide one or more input LLR sequences of length N_out to the next-stage decoder.
- N_in and N_out are both integers, N_out ≤ N_in, and N_out and N_in are generally powers of 2.
- N_in can be any of the following: 8192, 4096, 2048, 1024, 512, 256, 128, 64, 32; and N_out can be any of the following: 1024, 512, 256, 128, 64, 32, 16, 8, 4, 2, 1.
- the notation parallel decoder (N_in, N_out) represents a parallel decoder 500 whose decoding input LLR sequence has length N_in and whose input LLR sequence provided to the next stage has length N_out. For example, parallel decoder (1024, 64) denotes a parallel decoder whose decoding input LLR sequence has length 1024 and which provides an input LLR sequence of length 64 to the next stage; parallel decoder (8192, 512) denotes one whose decoding input LLR sequence has length 8192 and which provides an input LLR sequence of length 512 to the next stage. It should be noted that these are only examples.
- the parallel decoder 500 can update the information of the decoding input LLR sequence according to any of the aforementioned parallel decoding algorithms; the length of the output LLR sequence is usually equal to that of the decoding input LLR sequence, both being N_in.
- the parallel decoder 500 provides an input LLR sequence of length N_out to the next-stage decoder.
- the parallel decoder 500 may determine N_out LLRs from the output LLR sequence of length N_in as the input LLR sequence of the next-stage decoder.
- alternatively, the input LLR sequence of the next-level decoder can be determined by the next-level decoder itself according to the output LLR sequence of length N_in of the parallel decoder 500, with N_out LLRs used as the input of the next-level decoder.
- the manner of selecting the LLR sequence is not limited in this embodiment of the application. If multiple decoding iterations are performed, in one possible implementation, for the s-th iteration the input LLR sequence of the next-stage decoder consists of the ((s-1)·N_out + 1)-th LLR to the (s·N_out)-th LLR of the output LLR sequence. In this way, the maximum number of iterations is N_in/N_out.
- in other possible implementations, the selection may differ, and the decoding process is adjusted accordingly.
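The per-iteration indexing just described can be written out directly (0-based slicing; the function name is a hypothetical helper):

```python
def next_stage_input(output_llrs, s, n_out):
    # For the s-th iteration (s >= 1), take the ((s-1)*N_out + 1)-th through
    # (s*N_out)-th LLRs of the output LLR sequence as the next-stage input.
    return output_llrs[(s - 1) * n_out : s * n_out]

# With N_in = 8 and N_out = 2, the maximum number of iterations is
# N_in / N_out = 4, and iteration s picks the s-th length-2 chunk.
```

Iteration 1 yields the first chunk, iteration N_in/N_out yields the last.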
- the parallel decoder 500 may include a first-stage update unit 510 and a second-stage update unit 520. Among them, for the sth iteration, 0 ⁇ s ⁇ S, and S is the maximum number of iterations:
- the first-stage update unit 510 can be used to perform the first-stage update on the decoding input LLR sequence of the parallel decoder 500 to obtain the output LLR sequence of the parallel decoder 500;
- the output LLR sequence is used to provide the input LLR sequence of the next-stage decoder;
- in this case, the input LLR sequence of the first-stage update unit 510 is the decoding input LLR sequence of the parallel decoder 500.
- the second-stage update unit 520 may be used to:
- perform the second-stage update on the second update input LLR sequence of the s-th iteration to obtain the second update output LLR sequence;
- the second update input LLR sequence is the LLR sequence obtained by replacing the ((s-2)·N_out + 1)-th LLR to the ((s-1)·N_out)-th LLR of the output LLR sequence with the N_out LLRs returned by the next-stage decoder.
- when the input LLR sequence of the first-stage update unit 510 is the second update output LLR sequence obtained by the second-stage update unit 520, the first-stage update unit 510 may be used to perform the first-stage update on this input LLR sequence to obtain the output LLR sequence of the parallel decoder 500;
- the output LLR sequence is used to provide the input LLR sequence of the next-stage decoder.
- the parallel decoder 500 may include at least the (n_out+1)-th to the (n_in+1)-th decoding layers, where N_in = 2^(n_in) and N_out = 2^(n_out);
- the N_in LLR nodes of the (n_in+1)-th layer are assigned the values of the input LLR sequence;
- the N_in LLR nodes of the (n_out+1)-th layer are recorded as the output LLR sequence.
- the first-stage update unit 510 can, starting from the (n_in+1)-th layer, update the LLR soft values of the nodes layer by layer down to the (n_out+1)-th layer, and take the N_in LLR nodes of the (n_out+1)-th layer as the output LLR sequence;
- specifically, the first-stage update unit 510 takes the input LLR sequence as the values of the N_in LLR nodes of the (n_in+1)-th layer; the first-stage update comprises n_in − n_out layers of LLR soft-value updates in the direction from the (n_in+1)-th layer to the (n_out+1)-th layer, and the update formulas can refer to formulas (10) and (11) of the L operation in the aforementioned parallel decoding algorithm, or formulas (14) or (15) calculated row by row from top to bottom;
- the N_in LLR nodes obtained are the output LLR sequence, which may be regarded as N_in/N_out LLR subsequences of length N_out.
- alternatively, the first-stage update unit 510 takes the input LLR sequence as the values of the N_in LLR nodes of the (n_in+1)-th layer, and the first-stage update comprises updates from the (n_in+1)-th layer toward the (n_out+1)-th layer and back from the (n_out+1)-th layer toward the (n_in+1)-th layer, for a total of 2·m·(n_in − n_out) additional layer updates of the LLR soft values, where m is an integer, m ≥ 0;
- the calculation formula for updating from the (n_out+1)-th layer to the (n_in+1)-th layer can refer to formulas (12) and (13) of the R operation in the aforementioned parallel decoding algorithm;
- the LLR nodes are updated layer by layer, and when the update finally reaches the (n_out+1)-th layer, the N_in LLR nodes obtained are the output LLR sequence, which may be regarded as N_in/N_out LLR subsequences of length N_out. It should be noted that the value of m may be different in each iteration of the parallel decoder 500.
- the input LLR sequence of the first-stage update unit 510 may be the decoding input LLR sequence of the parallel decoder 500, or the second update output LLR sequence obtained by the second-stage update unit 520.
- the first-stage update unit 510 may also be used to determine, from the output LLR sequence, the input LLR sequence provided to the next-stage decoder;
- the input LLR sequence provided to the next-stage decoder consists of N_out LLRs of the output LLR sequence. Since the output LLR sequence may be regarded as N_in/N_out LLR subsequences of length N_out, the first-stage update unit 510 may determine one of these LLR subsequences as the input LLR sequence provided to the next-stage decoder;
- for the s-th iteration, this LLR subsequence is the s-th subsequence of the output LLR sequence, i.e., it may include the ((s-1)·N_out + 1)-th LLR to the (s·N_out)-th LLR of the output LLR sequence.
- the second-stage update unit 520 can be used to take the second update input LLR sequence as the values of the N_in LLR nodes of the (n_out+1)-th layer and to perform the second-stage update starting from the (n_out+1)-th layer;
- the second-stage update comprises n_in − n_out layers of LLR soft-value updates in the direction from the (n_out+1)-th layer to the (n_in+1)-th layer; the update formulas can refer to formulas (12) and (13) of the R operation in the aforementioned parallel decoding algorithm, or formulas (14) or (15) calculated row by row from bottom to top;
- the second-stage update unit 520 updates the LLR nodes layer by layer starting from the (n_out+1)-th layer; when the update reaches the (n_in+1)-th layer, the values of the N_in LLR nodes form the second update output LLR sequence, whose length is N_in.
- the decoding input LLR sequence of the second parallel decoder is derived from the input LLR sequence provided by the first parallel decoder, and the second parallel decoder is also used to return one or more decoded LLR sequences of length N_in2 (or N_out1) to its upper-level decoder, the first parallel decoder.
- one or more parallel decoders 500 may be cascaded with a serial decoder supporting a smaller code length.
- the parallel decoder 620 (N, N S ) and the serial decoder 630 supporting a code length of N S are cascaded.
- the parallel decoder 810 (N P , N ), the parallel decoder 620 (N, N S ) and the serial decoder 630 supporting a code length of N S are cascaded.
- N can be any of the following: 1024, 512, 256, 128, 64, 32; N_S can be any of the following: 128, 64, 32, 16, 8, 4, 2, 1; and N_P can be any of the following: 8192, 4096, 2048, 1024, 512, 256, 128. It should be noted that these are only examples.
- the parallel decoder 620 is referred to as a second parallel decoder
- the parallel decoder 810 is referred to as a first parallel decoder.
- the parallel decoder provided by the embodiment of the present invention can be used for cascaded decoding: it converts a larger code length into a smaller code length through parallel decoding and outputs the result to the next-stage decoder, which can improve the throughput rate and reduce the implementation overhead of the next-level decoder.
- FIG. 6 is a schematic structural diagram of a cascaded decoder 600 in which a parallel decoder 620 and a serial decoder 630 are cascaded according to an embodiment of the present invention, wherein the serial decoder 630 is the next-level decoder of the parallel decoder 620.
- the parallel decoder 620 can use a BP decoding algorithm, an MS decoding algorithm or a DNN decoding algorithm
- the serial decoder 630 can use an SCL decoding algorithm, a CA-SCL decoding algorithm, and so on.
- the length of the input LLR sequence provided by the parallel decoder 620 to the serial decoder 630 is N S.
- the length of the input LLR sequence of the parallel decoder 620 is 8, and the length of the input LLR sequence provided to the serial decoder 630 is 4.
- FIG. 7 it is a flowchart of a decoding method of a cascade decoder according to an embodiment of the present invention.
- the decoding method of the cascaded decoder shown in FIG. 7 will be described below in conjunction with the cascaded decoder 600 of FIG. 6.
- the input LLR sequence of the cascaded decoder 600 includes N input LLRs, and the input LLR sequence is subjected to at most T iterations of cascaded decoding, where the t-th iteration, 0 < t ≤ T, includes the method steps shown in Figure 7:
- Step 710 The second parallel decoder 620 decodes the L(t) second input LLR sequences to obtain L(t) second output LLR sequences.
- the length of each second input LLR sequence is N, and the length of each second output LLR sequence is also N.
- in each iteration, the second parallel decoder 620 obtains the corresponding second output LLR sequence, which is used to provide the third input LLR sequence of the serial decoder 630.
- the third input LLR sequence consists of Ns LLRs of the second output LLR sequence, for example: cLLR_{(t-1)·Ns+1}, cLLR_{(t-1)·Ns+2}, ..., cLLR_{t·Ns}.
- the second parallel decoder 620 decodes the second input LLR sequence; for the decoding process, refer to the description of the parallel decoder in the foregoing embodiment, which will not be repeated here.
- in one possible implementation, the second parallel decoder 620 determines the third input LLR sequence according to the second output LLR sequence and outputs it to the serial decoder 630; in another possible implementation, the second parallel decoder 620 outputs the second output LLR sequence to the serial decoder 630, and the serial decoder 630 determines the third input LLR sequence according to the second output LLR sequence.
- for the first iteration, the second parallel decoder 620 uses the initial input LLR sequence of the cascaded decoder as the second input LLR sequence, i.e., there is 1 second input LLR sequence.
- the initial input LLR sequence can be the LLR sequence obtained through demodulation and other processing after the receiving end device receives the signal, and it corresponds to the information sequence.
- the second parallel decoder 620 performs the first-stage update on the second input LLR sequence to obtain the second output LLR sequence.
- the second parallel decoder 620 obtains from the serial decoder 630 the M(t-1) third output LLR sequences of the previous iteration, that is, the (t-1)-th iteration, where each third output LLR sequence includes N_S LLRs.
- the second parallel decoder 620 obtains the M(t-1) second output LLR sequences corresponding to the M(t-1) third output LLR sequences of the (t-1)-th iteration, since in the (t-1)-th iteration the second parallel decoder generated L(t-1) second output LLR sequences.
- the serial decoder 630 returns M(t-1) third output LLR sequences through path selection; the second parallel decoder 620 determines the second output LLR sequences corresponding to the parent paths of these third output LLR sequences, and thereby obtains the corresponding M(t-1) second output LLR sequences.
- the second parallel decoder 620 replaces, in each corresponding second output LLR sequence, the LLRs with the corresponding sequence numbers, cLLR_{(t-2)·Ns+1}, cLLR_{(t-2)·Ns+2}, ..., cLLR_{(t-1)·Ns}, with the M(t-1) third output LLR sequences, obtaining M(t-1) sequences.
- the second parallel decoder 620 can also be used to determine, for the L(t) second input LLR sequences, the corresponding L(t) LDPC check matrices; the second parallel decoder 620 then decodes the L(t) second input LLR sequences respectively based on the L(t) LDPC check matrices to obtain the L(t) second output sequences.
- the second parallel decoder 620 decodes the second input LLR sequence; for the decoding process, refer to the description of the parallel decoder in the foregoing embodiment, which will not be repeated here.
- Step 720: The serial decoder 630 decodes the L(t) third input LLR sequences to obtain M(t) decoding paths, where each third input LLR sequence consists of Ns LLRs of the corresponding second output LLR sequence.
- the maximum number of reserved decoding paths of the serial decoder 630 is M, and M is an integer greater than zero.
- the serial decoder 630 serially decodes the L(t) third input LLR sequences to obtain M(t) decoding paths.
- the M(t) decoding paths are the M(t) paths with the largest path metric values among the L(t)·2^k decoding paths generated by the serial decoder 630, or the M(t) paths among the L(t)·2^k generated paths that have the largest path metric values and pass the CRC check, where k is a positive integer.
- M(t) is the minimum of M and L(t)·2^k.
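A small sketch of the surviving-path count, with k and M as in the text (the function name is a hypothetical helper):

```python
def surviving_paths(M: int, L_t: int, k: int) -> int:
    # M(t) is the minimum of the path limit M and the L(t)*2**k
    # paths generated by the serial decoder in this iteration.
    return min(M, L_t * 2 ** k)
```

For example, with M = 8, a single input sequence (L(t) = 1) and k = 2, four paths survive; with L(t) = 4 the cap M = 8 applies.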
- Step 730 The serial decoder 630 determines to continue the next iteration process and executes step 740, or the serial decoder 630 determines to terminate the iteration process and executes step 750.
- the serial decoder 630 executes step 740 and continues with the (t+1)-th iteration.
- step 750 is executed.
- Step 740 The serial decoder 630 obtains M(t) third output LLR sequences according to the M(t) decoding paths.
- the serial decoder 630 judges the M(t) decoding paths to obtain M(t) third output LLR sequences, and each third output LLR sequence includes Ns LLR soft values, denoted as
- Step 750 The serial decoder 630 obtains the decoding result according to the M(t) decoding paths, and terminates the iteration.
- the serial decoder 630 makes a hard decision on the one of the M(t) decoding paths with the largest path metric value, or on the one decoding path with the largest path metric value that also passes the CRC check, and obtains the information bits corresponding to the information sequence.
- the information sequence includes multiple information bits, or one or more information bits and one or more frozen bits; after the hard decision, the serial decoder 630 only needs to output the one or more information bits.
- the serial decoder only needs to execute one short decoding per iteration. This method can greatly increase the decoding throughput, and the decoding performance of the serial decoder can compensate for that of the parallel decoder, so that both the overall decoding performance and the throughput are improved.
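Steps 710 to 750 above can be summarized as a control-flow sketch; all function names here are hypothetical stand-ins for the decoders' operations, not the patent's interfaces:

```python
def cascaded_decode(init_llrs, n_s, T, parallel_step, serial_step, done):
    # parallel_step: second parallel decoder 620 (step 710)
    # serial_step:   serial decoder 630 (step 720), returns decoding paths
    # done:          termination test (step 730)
    seqs = [init_llrs]                                     # L(1) = 1 sequence
    returned = None
    for t in range(1, T + 1):
        out_seqs = parallel_step(seqs, returned)           # step 710
        thirds = [o[(t - 1) * n_s : t * n_s] for o in out_seqs]
        paths = serial_step(thirds)                        # step 720
        if done(paths, t):                                 # steps 730 / 750
            return paths
        returned = paths                                   # step 740
        seqs = out_seqs
    return paths
```

With trivial pass-through stand-ins for the two decoders, the loop simply walks through successive length-Ns chunks until the termination test fires.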
- the performance curve of the SC decoding algorithm is represented by a diamond curve
- the performance curve of the SCL8 decoding algorithm is represented by a square curve
- the performance curve of the BP decoding algorithm is represented by an X-shaped curve.
- the performance curve of the cascaded decoding method of this application is represented by a circular curve.
- 5G communication systems support both Polar codes and LDPC codes. A decoder using DNN, BP, MS, or other parallel decoding algorithms has a general architecture that can support both Polar code decoding and LDPC code decoding. Therefore, the cascaded decoder of the embodiment of this application can share the parallel decoding operation units of the LDPC code, which can save hardware implementation overhead and avoid waste.
- the first parallel decoder 810 (Np, N), the second parallel decoder 620 (N, Ns) and the serial decoder 630 are cascaded according to another embodiment of this application.
- the first-level decoder is a parallel decoder 810
- the second-level decoder is a parallel decoder 620
- the third-level decoder 630 is a serial decoder. The structure can also be regarded as the parallel decoder 810 cascaded with the cascaded decoder 600 shown in FIG. 6, where the parallel decoder 810 performs the first-level iterative decoding and the cascaded decoder 600 performs the second-level iterative decoding.
- the first parallel decoder 810 may use a BP decoding algorithm, an MS decoding algorithm or a DNN decoding algorithm.
- the length of the input LLR sequence provided to the second parallel decoder 620 is N.
- the decoding process of the first parallel decoder 810 for the input sequence is similar to that of the second parallel decoder 620; refer to the description of the foregoing embodiment. The difference is that the second parallel decoder 620, as the next-stage decoder, also needs to return decoded LLR sequences to its upper-level decoder, the first parallel decoder 810.
- the maximum number of iterations of the first-level iteration is I, where I is at most N_p/N; the input of the first parallel decoder 810 is K(i) first input LLR sequences, each of length N_p.
- the cascaded decoder 600, in which the second parallel decoder 620 and the serial decoder 630 are cascaded, decodes each second input LLR sequence to obtain the second output LLR sequence.
- the process can refer to the method steps described in Figure 7.
- the difference is that the second parallel decoder 620 also needs to return the decoded LLR sequence to the first parallel decoder, and the output of the serial decoder 630 differs depending on whether the first-level and second-level iterations have terminated or not.
- Step 910 The first parallel decoder 810 performs the first stage decoding on the K(i) first input LLR sequences to obtain K(i) first output LLR sequences.
- the length of each first input LLR sequence is N p
- the length of each first output LLR sequence is N p
- i is the number of iterations of the first level
- t is the number of iterations of the second level.
- in each iteration, the first parallel decoder 810 obtains the corresponding first output LLR sequence, which is used to provide the second input LLR sequence of the cascaded decoder 600. The second input LLR sequence includes N LLRs of the corresponding first output LLR sequence, for example: eLLR (i-1)×N+1 , eLLR (i-1)×N+2 , ..., eLLR i×N .
- to simplify the description, the second input LLR sequence is denoted consistently with the expression in FIG. 7.
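The windowing described above — the i-th second input LLR sequence taking LLRs eLLR (i-1)×N+1 through eLLR i×N out of the concatenated first output LLR stream — can be sketched as follows; the function and variable names are illustrative, and the 1-based indices of the text are mapped to 0-based Python indices:

```python
def second_input_from_first_output(e_llr_stream, i, N):
    """Return the i-th (1-based) window of N LLRs from the concatenated
    first-output LLR stream, i.e. eLLR_{(i-1)*N+1} .. eLLR_{i*N}."""
    start = (i - 1) * N  # 1-based LLR index (i-1)*N+1 -> 0-based offset
    return e_llr_stream[start:start + N]
```

With N p = I×N, each first-level iteration i simply advances this window by N positions through the length-N p sequence.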
- for the decoding process of the first parallel decoder 810 on the first input LLR sequence, refer to the description of the parallel decoder decoding process in the foregoing embodiment, which is not repeated here.
- in one possible implementation manner, the first parallel decoder 810 determines the second input LLR sequence according to the first output LLR sequence and outputs it to the next-level decoder (the cascade decoder 600, or the second parallel decoder 620); in another possible implementation manner, the first parallel decoder 810 outputs the first output LLR sequence to the next-level decoder (the cascade decoder 600, or the second parallel decoder 620), and the next-level decoder determines the second input LLR sequence according to the first output LLR sequence.
- in the first first-level iteration, the first parallel decoder 810 uses the initial input LLR sequence of the multi-cascade decoder as the first input LLR sequence, i.e., there is one first input LLR sequence.
- the initial input LLR sequence may be the LLR sequence obtained through demodulation and other processing after the receiving-end device receives the signal, and it corresponds to the information sequence.
- the first parallel decoder 810 performs the first-stage update on the first input LLR sequence to obtain the first output LLR sequence.
- the first parallel decoder 810 obtains, from the next-stage decoder, the M(i-1,t) second decoded LLR sequences of the previous iteration, i.e., the (i-1)-th iteration, as shown in FIG. 8. Each second decoded LLR sequence includes N LLRs and corresponds to the (i-2)×N+1-th to (i-1)×N-th LLRs of the corresponding first output LLR sequence of the (i-1)-th iteration.
- the first parallel decoder 810 obtains the M(i-1,t) first output LLR sequences corresponding to the M(i-1,t) second decoded LLR sequences of the (i-1)-th iteration. In the (i-1)-th iteration, the first parallel decoder 810 generates K(i-1) first output LLR sequences; after t iterations, the cascaded decoder 600 obtains M(i-1,t) second decoded LLR sequences, and the parent path where each second decoded LLR sequence is located corresponds to one first output LLR sequence.
- the first parallel decoder 810 replaces, with the M(i-1,t) second decoded LLR sequences, the LLRs of the corresponding sequence numbers eLLR (i-2)×N+1 , eLLR (i-2)×N+2 , ..., eLLR (i-1)×N in the corresponding first output LLR sequences, obtaining M(i-1,t) updated sequences.
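The segment replacement just described — overwriting positions eLLR (i-2)×N+1 through eLLR (i-1)×N of a first output LLR sequence with the N LLRs returned from the next stage — can be sketched as follows; names are illustrative, and the 1-based indexing of the text is mapped to 0-based Python indices:

```python
def replace_segment(e_llr_seq, returned_llrs, i, N):
    """Overwrite the (i-2)*N+1 .. (i-1)*N segment (1-based, iteration i-1's
    window) of a first output LLR sequence with the N second-decoded LLRs
    returned from the next-stage decoder, leaving the rest unchanged."""
    seq = list(e_llr_seq)          # work on a copy of the length-Np sequence
    start = (i - 2) * N            # 0-based start of iteration i-1's window
    seq[start:start + N] = returned_llrs
    return seq
```

This replacement is what feeds the refined extrinsic information from the cascaded stage back into the first-level iteration.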
- the first parallel decoder 810 performs the second-stage update on each of the M(i-1,t) updated sequences to obtain M(i-1,t) second update output sequences, and regards these M(i-1,t) second update output sequences as the first input LLR sequences of the i-th iteration.
- for the decoding process of the first parallel decoder 810 on the first input LLR sequence, refer to the parallel decoder in the foregoing embodiment, which will not be repeated here.
- Step 920 The cascade decoder 600 performs the second stage iterative decoding on the K(i) second input LLR sequences.
- the iterative process of the cascade decoder 600 decoding the K(i) second input LLR sequences is the second-level iteration, and the maximum number of iterations is T times.
- for the process of the t-th iteration, refer to step 710 to step 750.
- the maximum total number of iterations is the product of the iteration counts of the two decoding stages, T×I.
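The two-level iteration budget — at most I first-level iterations, each running at most T second-level iterations, for a total bound of T×I — can be sketched as a nested loop; the step and termination callables are placeholders for the decoder operations, not part of this application:

```python
def two_level_iterate(num_level1, num_level2, level1_step, level2_step, terminated):
    """Skeleton of the two-level iteration: outer loop over first-level
    iterations i, inner loop over second-level iterations t. Returns the
    number of second-level iterations actually executed (<= T*I)."""
    total = 0
    for i in range(1, num_level1 + 1):
        level1_step(i)                    # first parallel decoder 810
        for t in range(1, num_level2 + 1):
            total += 1
            level2_step(i, t)             # cascaded decoder 600
            if terminated(i, t):          # serial decoder's stopping decision
                return total
    return total
```

Early termination by the serial decoder is what keeps the average iteration count well below the T×I worst case.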
- the tth iteration includes the following steps:
- Step 9201 The second parallel decoder 620 decodes the L(i,t) second input LLR sequences to obtain L(i,t) second output LLR sequences.
- the length of each second input LLR sequence is N, and the length of each second output LLR sequence is N.
- the second parallel decoder 620 performs the first-stage update on each second input LLR sequence to obtain the second output LLR sequence, which is used to provide the third input LLR sequence of the serial decoder 630.
- the second parallel decoder 620 obtains, from the serial decoder 630, the M(i,t-1) third output LLR sequences of the previous iteration, i.e., the (t-1)-th iteration, where each third output LLR sequence includes N s LLRs.
- the second parallel decoder 620 obtains the M(i,t-1) second output LLR sequences corresponding to the M(i,t-1) third output LLR sequences of the (t-1)-th iteration, since in the (t-1)-th iteration the second parallel decoder generates L(i,t-1) second output LLR sequences.
- the serial decoder 630 returns M(i,t-1) third output LLR sequences through path selection, and the second parallel decoder 620 determines the second output LLR sequences corresponding to the parent paths of these third output LLR sequences, obtaining the corresponding M(i,t-1) second output LLR sequences.
- the second parallel decoder 620 replaces, with the M(i,t-1) third output LLR sequences, the LLRs of the corresponding sequence numbers cLLR (t-2)×Ns+1 , cLLR (t-2)×Ns+2 , ..., cLLR (t-1)×Ns in the corresponding second output LLR sequences, obtaining M(i,t-1) updated sequences.
- for the decoding process of the second parallel decoder 620 on the second input LLR sequence, refer to the description of the parallel decoder in the foregoing embodiment, which will not be repeated here.
- Step 9202 The serial decoder 630 decodes the L(i,t) third input LLR sequences to obtain M(i,t) decoding paths.
- Step 9203: The serial decoder 630 determines to continue the next iterative process and executes step 9204; or determines to terminate the second-level iterative process but not the first-level iterative process, and executes steps 9204 to 9205; or determines to terminate the first-level iterative process, and executes step 9206.
- if the next iteration is to be continued, the serial decoder 630 executes step 9204 and proceeds with the (t+1)-th second-level iteration; if the second-level iteration is terminated but the first-level iteration is not, steps 9204 and 9205 are executed; if the first-level iteration is terminated, step 9206 is executed.
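The three-way decision of step 9203 can be sketched as a small dispatch function mapping the serial decoder's stopping decision to the step numbers it executes; the function and its boolean inputs are illustrative placeholders for the decoder's internal checks (e.g., a CRC pass):

```python
def next_steps(continue_level2, terminate_level1):
    """Map the serial decoder's decision in step 9203 to the steps executed:
    continue            -> step 9204 only;
    stop level 2 only   -> steps 9204 and 9205;
    stop level 1        -> step 9206 (output the decoding result)."""
    if terminate_level1:
        return [9206]
    if continue_level2:
        return [9204]
    return [9204, 9205]
```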
- Step 9204 The serial decoder 630 obtains M(i,t) third output LLR sequences according to the M(i,t) decoding paths.
- the serial decoder 630 determines M(i,t) third output LLR sequences from the M(i,t) decoding paths, and each third output LLR sequence includes N s LLR soft values.
- Step 9205 The second parallel decoder 620 obtains the second decoded LLR sequence according to the M(i,t) third output LLR sequences, and terminates the second stage iteration.
- the second parallel decoder 620 obtains the M(i,t) second output LLR sequences of the t-th iteration corresponding to the M(i,t) third output LLR sequences.
- the second parallel decoder 620 obtains, according to the M(i,t) third output LLR sequences and the M(i,t) second output LLR sequences of the t-th iteration, the M(i,t) second updated input LLR sequences of iteration t.
- the second parallel decoder 620 performs the second-stage update on the M(i,t) second updated input LLR sequences to obtain M(i,t) second decoded LLR sequences.
- the second parallel decoder 620 outputs the M(i,t) second decoded LLR sequences to the first parallel decoder 810 and terminates the second-level iteration.
- the M(i,t) second decoded LLR sequences can also be regarded as the output of the cascaded decoder 600 to the upper-level decoder, the first parallel decoder 810.
- Step 9206 The serial decoder 630 obtains the decoding result according to the M(i,t) decoding paths, and terminates the multi-cascade decoding iteration.
- the serial decoder 630 makes a hard decision on the M(i,t) decoding paths to obtain the information bits corresponding to the information sequence.
- the information sequence includes multiple information bits, or one or more information bits and one or more frozen bits, and the serial decoder 630 only needs to output the information bits after making the hard decision.
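A minimal sketch of the hard decision described above, assuming the common convention LLR = log(P(bit=0)/P(bit=1)), so a non-negative LLR maps to bit 0; the function name and the position list are illustrative, not from this application:

```python
def hard_decision_info_bits(llrs, info_positions):
    """Hard-decide each LLR (LLR >= 0 -> bit 0, else bit 1) and keep only
    the information-bit positions, discarding frozen bits."""
    bits = [0 if llr >= 0 else 1 for llr in llrs]
    return [bits[p] for p in info_positions]
```

Frozen-bit positions carry no payload (they are known at the receiver), which is why only the information positions appear in the final output.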
- the multi-cascade decoder can share part of the parallel decoding units of an LDPC decoder, saving system overhead.
- in this case, the second parallel decoder 620 can convert the factor graph of the second input LLR sequence, determine the corresponding LDPC check matrix, perform LDPC decoding, and perform path selection through a small-block-length serial decoder. This method not only improves the decoding throughput and the decoding performance of parallel decoders, but also provides a basis for common mode with decoders of other codes such as LDPC, saving system overhead.
- the cascaded decoding method provided in the embodiments of the present application may be executed by a decoding device or a chip in a decoding device in various network equipment or terminal equipment.
- An embodiment of the present application also provides a decoding device.
- the decoding device may adopt the structure of FIG. 6 or FIG. 8 to execute the decoding method shown in FIG. 7 or FIG. 9. Some or all of these decoding methods can be implemented by hardware or software.
- the decoding device may include: an input interface circuit for obtaining the LLR sequence corresponding to the information sequence; a logic circuit for implementing the decoding method shown in FIG. 7 or FIG. 9; and an output interface circuit for outputting information bits.
- the decoding device may be a chip or an integrated circuit in specific implementation.
- An embodiment of the present application also provides another decoding device.
- the decoding device may adopt the structure of FIG. 6 or FIG. 8 to execute the decoding method shown in FIG. 7 or FIG. 9. Some or all of these decoding methods can be implemented by hardware or software.
- the decoding device may include: a memory for storing a program; and a processor for executing the program stored in the memory. When the program is executed, the decoding device can implement the decoding method shown in FIG. 7 or FIG. 9.
- the foregoing memory may be a physically independent unit, or may be integrated with the processor.
- the decoding device may also only include a processor.
- the memory for storing the program is located outside the decoding device, and the processor is connected to the memory through a circuit/wire for reading and executing the program stored in the memory.
- the processor may be a central processing unit (CPU), a network processor (NP), or a combination of a CPU and an NP.
- the processor may further include a hardware chip.
- the aforementioned hardware chip may be an application-specific integrated circuit (ASIC), a programmable logic device (PLD), or a combination thereof.
- the above-mentioned PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), a generic array logic (GAL) or any combination thereof.
- the memory may include volatile memory, such as random-access memory (RAM); the memory may also include non-volatile memory, such as flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); the memory may also include a combination of the foregoing types of memory.
- an embodiment of the present application also provides a computer storage medium storing a computer program, and the computer program includes instructions for executing the decoding method provided in the foregoing method embodiments.
- the embodiments of the present application also provide a computer program product containing instructions, which when run on a computer, cause the computer to execute the decoding method provided by the foregoing method embodiments.
- Any decoding device provided in the embodiments of the present application may also be a chip.
- this application may be provided as a method, a system, or a computer program product. Therefore, this application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, this application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) containing computer-usable program code.
- these computer program instructions can also be stored in a computer-readable memory capable of guiding a computer or other programmable data processing equipment to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device that implements the functions specified in one or more processes in the flowchart and/or one or more blocks in the block diagram.
- these computer program instructions can also be loaded onto a computer or other programmable data processing equipment, so that a series of operational steps are executed on the computer or other programmable equipment to produce computer-implemented processing, such that the instructions executed on the computer or other programmable equipment provide steps for implementing the functions specified in one or more processes in the flowchart and/or one or more blocks in the block diagram.
Abstract
The invention relates to a cascaded decoder (600) and a decoding method. The cascaded decoder (600) comprises a parallel decoder (620) and a serial decoder (630), and iterative decoding is performed for at most T iterations; the parallel decoder (620) is configured to decode one or more LLR sequences of length N; the serial decoder (630) is configured to decode one or more LLR sequences of length NS, where NS<N, and the t-th iteration is performed. The method comprises the following steps: the parallel decoder (620) decodes L(t) second input LLR sequences to obtain L(t) second output LLR sequences (710); the serial decoder (630) decodes L(t) third input LLR sequences to obtain M(t) decoding paths (720), each third input LLR sequence respectively comprising N LLRs of the corresponding second output LLR sequence; the serial decoder (630) determines whether to terminate the iteration (730). By adopting the cascaded decoder (600), the decoding throughput is improved by means of the parallel decoder (620), and the parallel decoding performance is improved by means of the serial decoder (630), so that the decoding performance and throughput are improved overall.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910972581.8A CN112737600B (zh) | 2019-10-14 | 2019-10-14 | 译码方法和译码器 |
CN201910972581.8 | 2019-10-14 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021073338A1 true WO2021073338A1 (fr) | 2021-04-22 |
Family
ID=75537704
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/115383 WO2021073338A1 (fr) | 2019-10-14 | 2020-09-15 | Procédé de décodage et décodeur |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN112737600B (fr) |
WO (1) | WO2021073338A1 (fr) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114039699A (zh) * | 2021-10-14 | 2022-02-11 | 中科南京移动通信与计算创新研究院 | 数据链通信方法、装置及可读介质 |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113708892B (zh) * | 2021-08-13 | 2023-01-10 | 上海交通大学 | 基于稀疏二分图的多模通用译码系统及方法 |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160056843A1 (en) * | 2011-11-08 | 2016-02-25 | Warren GROSS | Methods and systems for decoding polar codes |
CN109004939A (zh) * | 2017-06-06 | 2018-12-14 | 华为技术有限公司 | 极化码译码装置和方法 |
CN109495116A (zh) * | 2018-10-19 | 2019-03-19 | 东南大学 | 极化码的sc-bp混合译码方法及其可调式硬件架构 |
-
2019
- 2019-10-14 CN CN201910972581.8A patent/CN112737600B/zh active Active
-
2020
- 2020-09-15 WO PCT/CN2020/115383 patent/WO2021073338A1/fr active Application Filing
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160056843A1 (en) * | 2011-11-08 | 2016-02-25 | Warren GROSS | Methods and systems for decoding polar codes |
CN109004939A (zh) * | 2017-06-06 | 2018-12-14 | 华为技术有限公司 | 极化码译码装置和方法 |
CN109495116A (zh) * | 2018-10-19 | 2019-03-19 | 东南大学 | 极化码的sc-bp混合译码方法及其可调式硬件架构 |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114039699A (zh) * | 2021-10-14 | 2022-02-11 | 中科南京移动通信与计算创新研究院 | 数据链通信方法、装置及可读介质 |
Also Published As
Publication number | Publication date |
---|---|
CN112737600B (zh) | 2023-07-18 |
CN112737600A (zh) | 2021-04-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7757150B2 (en) | Structured puncturing of irregular low-density parity-check (LDPC) codes | |
KR101217925B1 (ko) | 높은 쓰루풋 어플리케이션을 위한 harq 레이트 호환가능 저 밀도 패리티-체크 (ldpc) 코드 | |
CN109314600B (zh) | 用于在使用通用极化码时进行速率匹配的系统和方法 | |
WO2020077596A1 (fr) | Procédé et appareil de décodage pour codes ldpc | |
WO2013152605A1 (fr) | Procédé de décodage et dispositif de décodage de code polaire | |
CN108282259B (zh) | 一种编码方法及装置 | |
WO2019134553A1 (fr) | Procédé et dispositif de décodage | |
CN110326342A (zh) | 一种用于指定编码子信道的有序序列的装置和方法 | |
US11323727B2 (en) | Alteration of successive cancellation order in decoding of polar codes | |
WO2021063217A1 (fr) | Procédé et appareil de décodage | |
WO2021073338A1 (fr) | Procédé de décodage et décodeur | |
US10892783B2 (en) | Apparatus and method for decoding polar codes | |
WO2019206136A1 (fr) | Procédé et dispositif d'adaptation de débit et de désadaptation de débit de code polaire | |
US20240128988A1 (en) | Method and device for polar code encoding and decoding | |
Cao et al. | CRC-aided sparse regression codes for unsourced random access | |
WO2020088256A1 (fr) | Procédé et dispositif de décodage | |
KR20090012189A (ko) | Ldpc 부호의 성능 개선을 위한 스케일링 기반의 개선된min-sum 반복복호알고리즘을 이용한 복호 장치 및그 방법 | |
CN113472360A (zh) | 极化码的译码方法和译码装置 | |
CN109639290B (zh) | 一种半随机分组叠加编码及译码方法 | |
CN110324111B (zh) | 一种译码方法及设备 | |
WO2017214851A1 (fr) | Procédé de transfert de signal, terminal émetteur et terminal récepteur | |
CN114124108A (zh) | 基于低密度奇偶校验的编码方法、译码方法和相关装置 | |
Oliveira et al. | Polarization-driven puncturing for polar codes in 5g systems | |
US12088321B2 (en) | Device and method for decoding polar code in communication system | |
TWI783727B (zh) | 使用極化碼之通訊系統及其解碼方法 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20875829 Country of ref document: EP Kind code of ref document: A1 |
NENP | Non-entry into the national phase |
Ref country code: DE |
122 | Ep: pct application non-entry in european phase |
Ref document number: 20875829 Country of ref document: EP Kind code of ref document: A1 |