WO2019137231A1 - Decoding method and apparatus - Google Patents

Decoding method and apparatus (一种译码方法及装置)

Info

Publication number
WO2019137231A1
Authority
WO
WIPO (PCT)
Prior art keywords
vector
inverting
llr
vectors
decoding
Prior art date
Application number
PCT/CN2018/124375
Other languages
English (en)
French (fr)
Inventor
童佳杰
张华滋
乔云飞
李榕
刘小成
王俊
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Priority to EP18900114.2A (patent EP3731418A4)
Publication of WO2019137231A1
Priority to US16/923,898 (patent US11171673B2)

Classifications

    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/37Decoding methods or techniques, not specific to the particular type of coding provided for in groups H03M13/03 - H03M13/35
    • H03M13/45Soft decoding, i.e. using symbol reliability information
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/03Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words
    • H03M13/05Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words using block codes, i.e. a predetermined number of check bits joined to a predetermined number of information bits
    • H03M13/13Linear codes
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/03Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words
    • H03M13/05Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words using block codes, i.e. a predetermined number of check bits joined to a predetermined number of information bits
    • H03M13/09Error detection only, e.g. using cyclic redundancy check [CRC] codes or single parity bit
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/03Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words
    • H03M13/05Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words using block codes, i.e. a predetermined number of check bits joined to a predetermined number of information bits
    • H03M13/11Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words using block codes, i.e. a predetermined number of check bits joined to a predetermined number of information bits using multiple parity bits
    • H03M13/1102Codes on graphs and decoding on graphs, e.g. low-density parity check [LDPC] codes
    • H03M13/1105Decoding
    • H03M13/1111Soft-decision decoding, e.g. by means of message passing or belief propagation algorithms
    • H03M13/1125Soft-decision decoding, e.g. by means of message passing or belief propagation algorithms using different domains for check node and bit node processing, wherein the different domains include probabilities, likelihood ratios, likelihood differences, log-likelihood ratios or log-likelihood difference pairs
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/37Decoding methods or techniques, not specific to the particular type of coding provided for in groups H03M13/03 - H03M13/35
    • H03M13/3707Adaptive decoding and hybrid decoding, e.g. decoding methods or techniques providing more than one decoding algorithm for one code
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/65Purpose and implementation aspects
    • H03M13/6561Parallelized implementations

Definitions

  • the present application relates to the field of coding and decoding technologies, and in particular, to a decoding method and apparatus.
  • the rapid evolution of wireless communication indicates that the fifth generation (5th generation, 5G) communication system will present some new features.
  • The three most typical communication scenarios include enhanced mobile broadband (eMBB), massive machine type communication (mMTC), and ultra reliable low latency communication (URLLC), which place new requirements on the system compared with long term evolution (LTE).
  • Channel coding is one of the important research topics for meeting the needs of 5G communication.
  • Polar Codes are selected as the control channel coding method in the 5G standard.
  • The polarization code, also known as the Polar code, is the first and only known channel coding method that can be rigorously proven to "reach" the channel capacity.
  • Under different code lengths, especially for finite code lengths, the performance of Polar codes is much better than that of Turbo codes and low density parity check (LDPC) codes. In addition, Polar codes have lower computational complexity in encoding and decoding. These advantages give Polar codes great development and application prospects in 5G.
  • The decoding process of the existing successive cancellation (SC) method is as follows: after receiving the information to be decoded (including information bits and fixed bits), the decoder calculates the log likelihood ratio (LLR) of each bit one by one and performs a bit-by-bit decision.
  • If the LLR of an information bit is > 0, the decoding result is 0; if the LLR of an information bit is < 0, the decoding result is 1. For a fixed bit in the information to be decoded, the decoding result is set to 0 regardless of the LLR. All bits are decoded sequentially in order, and the result of the previous decoded bit is used as an input for the calculation of the next decoded bit; once a decision error is made, the error spreads with no chance to recover, so the decoding performance is not high.
  • In successive cancellation list (SCL) decoding, a path width L is preset and a path metric (PM) value is used to judge whether a path is good or bad; the PM value is calculated from the LLRs.
  • At each decision, the L best paths are saved and continue to be extended to decode the subsequent decoding bits. For each level of decoding bits, the PM values of the paths are sorted from small to large, and the better paths are filtered by the PM value, and so on, until the last bit is decoded.
  • When the number of decoding bits is very large, the PM values of all paths under each decoding bit must be calculated and all paths must be sorted based on the PM values, so the computational complexity and the decoding delay caused by sorting are high.
  • the embodiment of the present application provides a decoding method and apparatus for improving parallelism of decoding bit decisions and reducing decoding delay.
  • According to a first aspect, a decoding method is provided.
  • the execution body of the method is a decoding device.
  • The decoding device implements the method as follows: a hard decision is performed on each LLR in the input LLR vector to obtain an original vector, where the length of the LLR vector is M, M ≤ N, N is the length of the information to be decoded, and N and M are positive integer powers of 2. Based on the original vector, Y vectors to be diagnosed are determined, where each vector to be diagnosed is obtained by inverting at least 0 of X elements of the original vector, the positions of the X elements in the original vector are consistent with the positions of the first X LLRs of the LLR vector sorted by absolute value from small to large, and Y ≤ 2^X. Based on each of the Y vectors to be diagnosed, at least one candidate vector is determined, where the manner of determining at least one candidate vector based on any vector to be diagnosed is: determining an intermediate decoding vector of the vector to be diagnosed according to the generator matrix, selecting a symptom vector from the intermediate decoding vector according to the positions of the frozen bits, selecting at least one diagnosis vector from a symptom diagnosis table according to the symptom vector, and performing an exclusive-OR operation between each diagnosis vector and the vector to be diagnosed to obtain at least one candidate vector; the symptom diagnosis table includes the correspondence between symptom vectors and diagnosis vectors. Among the at least Y candidate vectors obtained from the Y vectors to be diagnosed, L candidate vectors are selected, and the decoding result of the LLR vector is determined according to the L candidate vectors.
  • In this way, the processes of path splitting, PM value accumulation, error correction, and bit decision can be moved from the last level to an intermediate level, the number of LLRs at the intermediate level can be any value, and the information to be decoded or the sub-blocks to be decoded can be decided in parallel for any number of information bits, which helps to reduce the computational complexity. In particular, when M is greater than 4, the above decoding method largely reduces the computational complexity relative to the exhaustive expansion of the existing maximum likelihood (ML) decoding method.
  • The input LLR vector may be interleaved, and a hard decision is performed on each LLR in the interleaved LLR vector to obtain the original vector, where the first bit sequence undergoes the same interleaving to obtain a second bit sequence and the positions of the frozen bits are determined by the second bit sequence; each of the L candidate vectors is deinterleaved, and the decoding result of the LLR vector is determined according to the L deinterleaved candidate vectors.
  • In this way, the input LLR vector can obtain the decoding result through a symptom diagnosis table corresponding to a different information bit position arrangement.
  • De-duplication processing is performed on the at least Y candidate vectors, and L candidate vectors are selected from the de-duplicated candidate vectors, where any two of the de-duplicated candidate vectors are different. In this way, L candidate vectors can be selected while avoiding that fewer than L candidate vectors are available due to repetition.
  • Alternatively, the X elements of an all-zero vector are inverted to obtain a deduplication vector, where the positions of the X elements are consistent with the positions of the first X LLRs of the LLR vector sorted by absolute value from small to large.
  • Each diagnosis vector is ANDed with the deduplication vector; if the resulting vector contains an element equal to 1, the corresponding diagnosis vector is marked as unavailable, or the PM value of the candidate vector obtained from the corresponding diagnosis vector is set to infinity, so that these vectors are filtered out when the preferred paths are selected according to the PM value.
  • The diagnosis vectors of the 2i-th rows of the symptom diagnosis table are pre-stored, and the diagnosis vector of the (2i+1)-th row of the symptom diagnosis table is calculated online, where the online calculation inverts the last element of the stored diagnosis vector of the 2i-th row, and i is a non-negative integer. This saves storage space.
  • Alternatively, the symptom diagnosis table stores only all the information of the first row, all the information of the first column, and the correspondence between the column indexes of each row and those of the zeroth row. This can further save storage space.
  • When the encoding side adopts a shortened coding mode, the obtained L candidate vectors are compared with the positions of the shortened bits; unmatched candidate vectors are deleted, or the PM values of the unmatched candidate vectors are marked as infinity, where a mismatch means that an element of the candidate vector at a shortened bit position is not 0.
  • According to a second aspect, a decoding method is provided.
  • The execution body of the method is a decoding device.
  • The decoding device implements the method by performing a hard decision on each LLR in the input LLR vector to obtain a first vector, and then sequentially performing at least the following operations: inverting the first element in the first vector to obtain a second vector; inverting the second element in the first vector to obtain a third vector; inverting the third element in the first vector to obtain a fourth vector; inverting the fourth element in the first vector to obtain a fifth vector; inverting the fifth element in the first vector to obtain a sixth vector; inverting the sixth element in the first vector to obtain a seventh vector; inverting the seventh element in the first vector to obtain an eighth vector; and inverting the first element and the second element in the first vector to obtain a ninth vector. The first L of these vectors are then selected, and the decoding result of the LLR vector is determined according to the L vectors.
  • In this way, the processes of path splitting, PM value accumulation, error correction, and bit decision can be moved from the last level to an intermediate level, the number of LLRs at the intermediate level can be any value, and the information to be decoded or the sub-blocks to be decoded can be decided in parallel for any number of information bits, which helps to reduce the computational complexity. In particular, when M is greater than 4, the above decoding method largely reduces the computational complexity relative to the exhaustive expansion of the existing ML decoding method.
  • The first 7 LLRs of the LLR vector sorted by absolute value from small to large are denoted [LLR0, LLR1, LLR2, ..., LLR6].
  • The positions of the first to seventh elements in the first vector correspond one-to-one to the positions of [LLR0, LLR1, LLR2, ..., LLR6] in the LLR vector; that is, the position of the first element in the first vector coincides with the position of LLR0 in the LLR vector, the position of the second element coincides with the position of LLR1, and the positions of the other elements are determined similarly.
  • In this way, the processes of path splitting, PM value accumulation, error correction, and bit decision can be moved from the last level to an intermediate level, the number of LLRs at the intermediate level can be any value, and the information to be decoded or the sub-blocks to be decoded can be decided in parallel for any number of information bits, which helps to reduce the computational complexity. In particular, when M is greater than 4, the above decoding method largely reduces the computational complexity relative to the exhaustive expansion of the existing ML decoding method.
  • If the verification fails, at least the following L operations are performed in sequence: inverting the first element in the first vector to obtain a second vector; inverting the second element in the first vector to obtain a third vector; inverting the third element in the first vector to obtain a fourth vector; inverting the fourth element in the first vector to obtain a fifth vector; inverting the fifth element in the first vector to obtain a sixth vector; inverting the sixth element in the first vector to obtain a seventh vector; inverting the seventh element in the first vector to obtain an eighth vector; inverting the eighth element in the first vector to obtain a ninth vector; inverting the first element, the second element, and the third element in the first vector to obtain a tenth vector; inverting the first element, the second element, and the fourth element in the first vector to obtain an eleventh vector; inverting the first element, the third element, and the fourth element in the first vector to obtain a twelfth vector; and inverting the second element, the third element, and the fourth element in the first vector to obtain a thirteenth vector.
  • According to another aspect, a decoding method is provided, where the execution body of the method is a decoding device, and the decoding device implements the method by receiving information to be decoded, where the length of the information to be decoded is N, the information to be decoded includes Q subcode blocks, the length of one subcode block is M, M ≤ N, and M is a positive integer power of 2; for any subcode block of the Q subcode blocks, L first candidate vectors are determined.
  • The method for determining the L first candidate vectors for any subcode block is performed according to the method for determining the L candidate vectors described in the first aspect or any possible design of the first aspect, or according to the method for determining L vectors in the second aspect or any possible design of the second aspect, or according to the method for determining L vectors in the third aspect or any possible design of the third aspect.
  • A decoding apparatus is provided, having the functionality to implement the method described in the first aspect and any possible design of the first aspect.
  • The functions may be implemented by hardware, or by hardware executing corresponding software.
  • The hardware or software includes one or more modules corresponding to the functions described above.
  • When part or all of the functionality is implemented by hardware, the decoding apparatus includes: an input interface circuit for acquiring the information to be decoded; a logic circuit for performing the behavior described in the first aspect and any possible design of the first aspect; and an output interface circuit for outputting the decoding result.
  • The decoding apparatus may be a chip or an integrated circuit.
  • When part or all of the functionality is implemented by software, the decoding apparatus includes: a memory for storing a program, and a processor for executing the program stored in the memory; when the program is executed, the decoding apparatus can implement the method described in the first aspect and any possible design of the first aspect.
  • The above memory may be a physically separate unit or may be integrated with the processor.
  • Alternatively, when some or all of the functionality is implemented by software, the decoding apparatus includes a processor, and the memory for storing the program is located outside the decoding apparatus; the processor is connected to the memory through a circuit/wire for reading and executing the program stored in the memory.
  • A decoding apparatus is provided, having the functionality to implement the method described in the second aspect and any possible design of the second aspect.
  • The functions may be implemented by hardware, or by hardware executing corresponding software.
  • The hardware or software includes one or more modules corresponding to the functions described above.
  • When part or all of the functionality is implemented by hardware, the decoding apparatus includes: an input interface circuit for acquiring the information to be decoded; a logic circuit for performing the behavior described in the second aspect and any possible design of the second aspect; and an output interface circuit for outputting the decoding result.
  • The decoding apparatus may be a chip or an integrated circuit.
  • When part or all of the functionality is implemented by software, the decoding apparatus includes: a memory for storing a program, and a processor for executing the program stored in the memory; when the program is executed, the decoding apparatus can implement the method described in the second aspect and any possible design of the second aspect.
  • The above memory may be a physically separate unit or may be integrated with the processor.
  • Alternatively, when some or all of the functionality is implemented by software, the decoding apparatus includes a processor, and the memory for storing the program is located outside the decoding apparatus; the processor is connected to the memory through a circuit/wire for reading and executing the program stored in the memory.
  • A decoding apparatus is provided, having the functionality to implement the method described in the third aspect and any possible design of the third aspect.
  • The functions may be implemented by hardware, or by hardware executing corresponding software.
  • The hardware or software includes one or more modules corresponding to the functions described above.
  • When part or all of the functionality is implemented by hardware, the decoding apparatus includes: an input interface circuit for acquiring the information to be decoded; a logic circuit for performing the behavior described in the third aspect and any possible design of the third aspect; and an output interface circuit for outputting the decoding result.
  • The decoding apparatus may be a chip or an integrated circuit.
  • When part or all of the functionality is implemented by software, the decoding apparatus includes: a memory for storing a program, and a processor for executing the program stored in the memory; when the program is executed, the decoding apparatus can implement the method described in the third aspect and any possible design of the third aspect.
  • The above memory may be a physically separate unit or may be integrated with the processor.
  • Alternatively, when some or all of the functionality is implemented by software, the decoding apparatus includes a processor, and the memory for storing the program is located outside the decoding apparatus; the processor is coupled to the memory through a circuit/wire for reading and executing the program stored in the memory.
  • A decoding apparatus is provided, having the functionality to implement the method described in the fourth aspect and any possible design of the fourth aspect.
  • The functions may be implemented by hardware, or by hardware executing corresponding software.
  • The hardware or software includes one or more modules corresponding to the functions described above.
  • When part or all of the functionality is implemented by hardware, the decoding apparatus includes: an input interface circuit for acquiring the information to be decoded; a logic circuit for performing the behavior described in the fourth aspect and any possible design of the fourth aspect; and an output interface circuit for outputting the decoding result.
  • The decoding apparatus may be a chip or an integrated circuit.
  • When part or all of the functionality is implemented by software, the decoding apparatus includes: a memory for storing a program, and a processor for executing the program stored in the memory; when the program is executed, the decoding apparatus can implement the method described in the fourth aspect and any possible design of the fourth aspect.
  • The above memory may be a physically separate unit or may be integrated with the processor.
  • Alternatively, when some or all of the functionality is implemented by software, the decoding apparatus includes a processor, and the memory for storing the program is located outside the decoding apparatus; the processor is connected to the memory through a circuit/wire for reading and executing the program stored in the memory.
  • a communication system comprising a network device and a terminal, both of which can perform the method as described in the above aspects or possible designs.
  • a computer storage medium is provided, stored with a computer program comprising instructions for performing the methods described above in various aspects or possible designs.
  • a computer program product comprising instructions for causing a computer to perform the methods described in the above aspects when executed on a computer is provided.
  • FIG. 1 is a schematic diagram of a SCL decoding method in the prior art
  • FIG. 2 is a schematic diagram of a SC decoding method in the prior art
  • FIG. 3 is a schematic structural diagram of a communication system in an embodiment of the present application.
  • FIG. 4 is a schematic diagram of a partial decoding process in an embodiment of the present application.
  • FIG. 5 is a schematic diagram of a decoding method in an embodiment of the present application.
  • FIG. 6 is a schematic diagram of interleaving processing in an embodiment of the present application.
  • FIG. 7 is a second schematic diagram of a decoding method in an embodiment of the present application.
  • FIG. 8 is a third schematic diagram of a decoding method according to an embodiment of the present application.
  • FIG. 9 is a schematic diagram of a decoding process in an application scenario according to an embodiment of the present application.
  • FIG. 10 is a schematic diagram of a decoding process in another application scenario in the embodiment of the present application.
  • FIG. 11 is a fourth schematic diagram of a decoding method in an embodiment of the present application.
  • FIG. 12 is a schematic structural diagram of a decoding apparatus according to an embodiment of the present application.
  • FIG. 13 is a second schematic structural diagram of a decoding apparatus according to an embodiment of the present application.
  • FIG. 14 is a third schematic structural diagram of a decoding apparatus according to an embodiment of the present application.
  • FIG. 15 is a fourth schematic structural diagram of a decoding apparatus according to an embodiment of the present application.
  • FIG. 16 is a fifth schematic structural diagram of a decoding apparatus according to an embodiment of the present application.
  • FIG. 17 is a schematic structural diagram of a decoding apparatus according to an embodiment of the present application.
  • FIG. 18 is a schematic structural diagram of a decoding apparatus according to an embodiment of the present application.
  • FIG. 19 is a schematic structural diagram of a decoding apparatus according to an embodiment of the present application.
  • FIG. 20 is a schematic structural diagram of a decoding apparatus according to an embodiment of the present application.
  • FIG. 21 is a schematic structural diagram of a decoding apparatus according to an embodiment of the present application.
  • FIG. 22 is a schematic structural diagram of a decoding apparatus according to an embodiment of the present application.
  • FIG. 23 is a schematic structural diagram of a decoding apparatus according to an embodiment of the present application.
  • The present application provides a decoding method and apparatus for increasing the number of bits decided in parallel in the decoding process, reducing the decoding depth, reducing the computational complexity of decoding, and reducing the decoding delay, on the basis of ensuring decoding performance.
  • The method and the apparatus are based on the same inventive concept. Since the principles by which the method and the apparatus solve the problem are similar, the implementations of the apparatus and the method can refer to each other, and repeated descriptions are omitted.
  • the Polar code is the first channel coding method that can theoretically be proven to "reach" the channel capacity.
  • The Polar code is a linear block code whose generator matrix is G_N, and its encoding process is x_1^N = u_1^N · G_N, where u_1^N is a binary row vector of length N (i.e., the code length) and G_N = B_N · F^{⊗ log2(N)}, with F being the 2×2 kernel matrix [[1, 0], [1, 1]].
  • B_N is an N × N permutation matrix, such as a bit-reversal permutation matrix; B_N is optional, and the construction of the generator matrix G_N can omit the operation of B_N.
  • u_1^N multiplied by the generator matrix G_N gives the encoded bits, and the process of multiplication is the process of encoding.
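For illustration only, the following minimal Python sketch (not taken from the patent text) shows this encoding relation, building G_N as the Kronecker power of the kernel F and omitting the optional B_N; the function names and the example input vector are illustrative assumptions.

```python
import numpy as np

def polar_generator_matrix(n):
    """G_N = Kronecker power of the 2x2 kernel F = [[1, 0], [1, 1]]; bit-reversal B_N omitted."""
    F = np.array([[1, 0], [1, 1]], dtype=np.uint8)
    G = np.array([[1]], dtype=np.uint8)
    for _ in range(n):
        G = np.kron(G, F)
    return G

def polar_encode(u):
    """Encode a binary row vector u of length N = 2**n as x = u * G_N (mod 2)."""
    N = len(u)
    G = polar_generator_matrix(int(np.log2(N)))
    return np.mod(np.array(u, dtype=np.uint8) @ G, 2)

# Hypothetical example: N = 8, with zeros placed at the frozen-bit positions.
print(polar_encode([0, 0, 0, 1, 0, 1, 1, 1]))
```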
  • In the encoding process of the Polar code, some of the bits of u_1^N are used to carry information and are called information bits; the set of indexes of these information bits is recorded as the information bit index set.
  • The other bits are set to fixed values pre-agreed by the transmitting and receiving ends and are called fixed bits (frozen bits); their indexes form the complement of the information bit index set.
  • The fixed bits are usually set to 0 and only need to be pre-agreed by the transmitting and receiving ends; the fixed bit sequence can be arbitrarily set.
  • In SC decoding, the LLR of each information bit is calculated one by one. If the LLR of the information bit is > 0, the decoding result is 0; if the LLR of the information bit is < 0, the decoding result is 1. For a fixed bit, the decoding result is set to 0 regardless of the LLR.
  • FIG. 2 is a schematic diagram of the SC decoding calculation process. Taking 4 decoding bits as an example, there are 8 computing nodes in FIG. 2, of which 4 are F nodes and 4 are G nodes, corresponding to the F function and the G function respectively. The calculation of an F node requires two LLR inputs from the right side; the calculation of a G node requires the LLR inputs from the right side as well as the output of the previous stage as input, and the output can be calculated only after all of its inputs are available. According to this calculation rule, starting from the right (signal) side of FIG. 2, the 8 nodes are calculated in sequence, the decoded bits are obtained one by one in order, and the decoding is completed.
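The exact F and G formulas are not written out in this extract; as an assumption, the following Python sketch uses the common min-sum form of the F function and the standard G function, purely for illustration.

```python
import math

def f_node(a, b):
    """F node (min-sum approximation): check-node style combination of two LLRs."""
    return math.copysign(1.0, a) * math.copysign(1.0, b) * min(abs(a), abs(b))

def g_node(a, b, u):
    """G node: combines two LLRs given the already-decided partial-sum bit u (0 or 1)."""
    return b + (1 - 2 * u) * a

# Hypothetical LLR inputs and a previously decided bit u = 0.
print(f_node(1.5, -0.5))   # -0.5
print(g_node(1.5, -0.5, 0))  # 1.0
```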
  • the information to be decoded is also referred to as a codeword to be decoded, a code block to be decoded, a codeword, and a code block.
  • the information to be decoded can be divided into a plurality of subcode blocks in parallel for decoding processing.
  • the length of the information to be decoded is denoted by N, and the length of the subcode block decoded in parallel is denoted by M.
  • The number of information bits contained in a to-be-decoded subcode block of length M is denoted by K.
  • FIG. 3 shows an architecture of a possible communication system to which the decoding method provided by the embodiment of the present application is applicable.
  • the communication system 300 includes: a network device 301 and one or more terminals 302.
  • the network device 301 can also be connected to the core network.
  • Network device 301 can communicate with IP network 303, for example, IP network 303 can be: the Internet, a private IP network, or other data network or the like.
  • Network device 301 provides services to terminals 302 within coverage.
  • network device 301 provides wireless access to one or more terminals 302 within the coverage of network device 301.
  • Network devices may also be in communication with one another, for example, network device 301 may communicate with network device 301'.
  • the network device 301 is a device that connects the terminal 302 to the wireless network in the communication system to which the present application is applied.
  • the network device 301 is a node in a radio access network (RAN), which may also be referred to as a base station, and may also be referred to as a RAN node (or device).
  • Some examples of the network device 301 are: a gNB/NR-NB, a transmission reception point (TRP), an evolved Node B (eNB), a radio network controller (RNC), a Node B (NB), a base station controller (BSC), a base transceiver station (BTS), a home base station (HNB), a base band unit (BBU), a wireless fidelity (WiFi) access point (AP), a network side device in a 5G communication system, or a network side device in a possible future communication system.
  • the terminal 302 also referred to as a user equipment (UE), a mobile station (MS), a mobile terminal (MT), etc., is a device that provides voice and/or data connectivity to the user.
  • the terminal 302 includes a handheld device having a wireless connection function, an in-vehicle device, and the like.
  • the terminal 302 can be: a mobile phone, a tablet, a laptop, a palmtop, a mobile internet device (MID), a wearable device (such as a smart watch, a smart bracelet, a pedometer, etc.).
  • The terminal 302 may also be an in-vehicle device (e.g., in a car, bicycle, electric vehicle, airplane, ship, train, or high-speed rail), a virtual reality (VR) device, an augmented reality (AR) device, a wireless terminal in industrial control, a smart home device (e.g., a refrigerator, television, air conditioner, or electric meter), an intelligent robot, workshop equipment, a wireless terminal in self driving, a wireless terminal in remote medical surgery, a wireless terminal in a smart grid, a wireless terminal in transportation safety, a wireless terminal in a smart city, a wireless terminal in a smart home, or a flying device (e.g., an intelligent robot, a hot air balloon, a drone, or an airplane).
  • the decoding method provided by the embodiment of the present application may be performed by the network device 301 or by the terminal 302.
  • The decoding method provided in this embodiment of the present application may be applicable to various wireless communication scenarios, including but not limited to the enhanced mobile broadband (eMBB), massive machine type communication (mMTC), and ultra reliable low latency communication (URLLC) scenarios.
  • N can also be regarded as the length of the Polar code mother code, and the information to be decoded is decoded to obtain a decoding result (ie, a decoding bit).
  • the transceiver end pre-arranges the position of the fixed bit, and the fixed bit is usually set to 0.
  • The content of the information bits actually needs to be obtained through the decoding process. In practical applications, N may be large. If the existing SCL decoding method is adopted, the LLR vector of length N corresponding to the information to be decoded passes through multiple levels of F/G operations until it reaches the last level, and bit decisions are performed on the LLRs of the last level to obtain the decoded bits.
  • The decoded bits are split into paths bit by bit, the optimal L paths are selected according to the PM values, and the path splitting continues, so the computational complexity is very high.
  • The embodiment of the present application implements an M-bit parallel decision, M ≤ N.
  • the LLR vector length of this level is M.
  • When the number of information bits in the code block or subcode block corresponding to this level is large, the number of split paths increases exponentially; the method provided by the present application helps to reduce the number of split paths and reduce the computational complexity.
  • the right side is the LLR input side, or the codeword side; the left side is the information side, or the decoding bit side.
  • Yi is the information to be decoded, and ui is the decoding bit.
  • In the example of FIG. 4, N = 16.
  • Path splitting is performed directly at the level with M LLRs, and the M decoded bits are determined in parallel.
  • M can be the number of LLRs at any level reached from the N LLRs through F/G operations. In the embodiment of the present application, N and M are both positive integer powers of 2.
  • the decoding method provided by the embodiment of the present application is specifically as follows.
  • The execution body of the decoding method is a decoding device, and the decoding device may be the network device 301 shown in FIG. 3 or the terminal 302 shown in FIG. 3.
  • Any two or more consecutive steps may separately form a solution to be protected by the embodiments of the present application; for example, steps 503 to 507 form one set of solutions.
  • step 501 and step 502 are optional steps.
  • Step 502: Perform at least one level of F/G operations on the N LLRs corresponding to the information to be decoded, until the length of the LLR vector at a level after the F/G operations is equal to M, and then perform step 503.
  • Step 503 Perform a hard decision on each LLR in the input LLR vector to obtain an original vector.
  • the length of the original vector is M.
  • Specifically, the M LLRs corresponding to the information to be decoded or the subcode block are hard-decided one by one. The hard decision function used in the hard decision may be h(x) = 0 if x ≥ 0 and h(x) = 1 if x < 0, where x is the value of the LLR.
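A minimal Python sketch of this hard decision (assuming, as above, that a non-negative LLR maps to bit 0 and a negative LLR to bit 1); the example reuses the LLR vector {1, -3, -2, 2, -1, 3, -4, 1} from the example later in this description.

```python
def hard_decision(llr_vector):
    """Hard-decide each LLR: non-negative LLR -> bit 0, negative LLR -> bit 1."""
    return [0 if llr >= 0 else 1 for llr in llr_vector]

# LLR vector from the example later in the description.
print(hard_decision([1, -3, -2, 2, -1, 3, -4, 1]))  # [0, 1, 1, 0, 1, 0, 1, 0]
```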
  • Step 504 Determine Y to-be-diagnosed vectors based on the original vector.
  • the length of the vector to be diagnosed is M.
  • The vector to be diagnosed is obtained by inverting at least 0 of the X elements of the original vector, where the positions of the X elements in the original vector are consistent with the positions of the first X LLRs of the LLR vector sorted by absolute value from small to large, and Y ≤ 2^X. When an element is inverted, 0 becomes 1 and 1 becomes 0.
  • The value of X can be adjusted arbitrarily, and the value of Y can also be adjusted.
  • The value of Y is 2^X; the value of Y can also be less than 2^X.
  • The values of X and Y can be determined based on a balance between decoding accuracy and computational complexity.
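For illustration, a minimal Python sketch (not from the patent text) of step 504 under the above definitions: every subset, including the empty set, of the X least-reliable positions is inverted, giving up to Y = 2^X vectors to be diagnosed. The helper name is hypothetical; the example values are the ones used later in this description.

```python
from itertools import combinations

def vectors_to_be_diagnosed(original, llr_vector, X):
    """Invert every subset (including the empty set) of the X least-reliable positions."""
    positions = sorted(range(len(llr_vector)), key=lambda i: abs(llr_vector[i]))[:X]
    result = []
    for size in range(X + 1):
        for subset in combinations(positions, size):
            v = list(original)
            for p in subset:
                v[p] ^= 1  # inverting: 0 becomes 1, 1 becomes 0
            result.append(v)
    return result  # up to Y = 2**X vectors

# Example values: X = 3, smallest |LLR| at positions 0, 4 and 7.
llrs = [1, -3, -2, 2, -1, 3, -4, 1]
original = [0, 1, 1, 0, 1, 0, 1, 0]
print(len(vectors_to_be_diagnosed(original, llrs, X=3)))  # 8
```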
  • Step 505 Determine at least one candidate vector based on each of the Y to-be-diagnosed vectors.
  • The method for determining at least one candidate vector based on any vector to be diagnosed is: determine an intermediate decoding vector of the vector to be diagnosed according to the generator matrix, select a symptom vector from the intermediate decoding vector according to the positions of the frozen bits, select at least one diagnosis vector from the symptom diagnosis table according to the symptom vector, and XOR each diagnosis vector with the vector to be diagnosed to obtain at least one candidate vector.
  • the symptom diagnosis table includes the correspondence relationship between the symptom vector and the diagnosis vector.
  • Specifically, the generator matrix is G_N; the vector to be diagnosed is multiplied (modulo 2) by G_N to obtain the intermediate decoding vector, and according to the positions of the frozen bits in the subcode block corresponding to the LLR vector of length M, one or more elements of the intermediate decoding vector at those positions are selected to form the symptom vector. Alternatively, the vector to be diagnosed may be multiplied by a block check matrix H to obtain the symptom vector.
  • Step 506 Select, among the at least Y candidate vectors obtained by the Y to-be-diagnosed vectors, L candidate vectors.
  • Generally, the path width of SCL decoding is denoted by L, and the number of candidate vectors selected at the level of this LLR vector is also denoted by L; however, the number of candidate vectors may be the same as or different from the path width.
  • Step 507 Determine a decoding result of the LLR vector according to the L candidate vectors.
  • Specifically, each of the L candidate vectors is operated with the generator matrix to obtain L candidate results, and the decoding result of the information to be decoded is determined from the L candidate results.
  • Alternatively, each of the L candidate vectors is operated with the generator matrix to obtain L candidate results, and a partial decoding result of the information to be decoded, or the decoding result of the subcode block, is determined from the L candidate results; after the decoding of all subcode blocks is completed, the decoding result of the information to be decoded is output.
  • For example, suppose the input LLR vector {LLR0, LLR1, ..., LLR7} = {1, -3, -2, 2, -1, 3, -4, 1}.
  • Sorting the LLRs of the LLR vector by absolute value from small to large, the first X = 3 LLRs are those at the 0th, 4th, and 7th positions.
  • the X elements of the original vector are the elements at the 0th, 4th, and 7th positions.
  • If the set E_0 is the empty set, 0 elements of the original vector are inverted; that is, the obtained vector to be diagnosed is equal to the original vector {0, 1, 1, 0, 1, 0, 1, 0}.
  • If E_1 = {a_0}, the element at the 0th position of the original vector is inverted, and the vector to be diagnosed is {1, 1, 1, 0, 1, 0, 1, 0}.
  • If E_2 = {a_1}, the element at the 4th position of the original vector is inverted, and the vector to be diagnosed is {0, 1, 1, 0, 0, 0, 1, 0}.
  • If E_3 = {a_0, a_1}, the elements at the 0th and 4th positions of the original vector are inverted, and the vector to be diagnosed is {1, 1, 1, 0, 0, 0, 1, 0}.
  • If the elements at the 0th, 4th, and 7th positions of the original vector are all inverted, the vector to be diagnosed is {1, 1, 1, 0, 0, 0, 1, 1}; the remaining vectors to be diagnosed are obtained similarly, for a total of up to 2^X = 8.
  • Certainly, the number Y of vectors to be diagnosed obtained in step 504 may be less than or equal to 8; that is, only a part of the 8 possible vectors to be diagnosed may be selected to proceed to step 505.
  • In step 505, an intermediate decoding vector is determined for each vector to be diagnosed. For example, for the vector to be diagnosed {1, 1, 1, 0, 0, 0, 1, 0}, multiplying it (modulo 2) by the matrix G_N gives the intermediate decoding vector {0, 1, 0, 0, 1, 0, 1, 0}. If the positions of the information bits and frozen bits in the code block or subcode block corresponding to the M LLRs are given by {0, 0, 0, 1, 0, 1, 1, 1} (0 indicating a frozen bit), the elements of the intermediate decoding vector {0, 1, 0, 0, 1, 0, 1, 0} at the frozen-bit positions, that is, the elements at the 0th, 1st, 2nd, and 4th positions, are selected to obtain the symptom vector {0, 1, 0, 1}.
  • each vector to be diagnosed can obtain a symptom vector in the manner described above.
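A minimal Python sketch of this symptom-vector computation, reproducing the numbers of the example above; here the generator matrix is built as the Kronecker power of the kernel F = [[1, 0], [1, 1]] with the bit-reversal B_N omitted, which matches the intermediate decoding vector shown above. The helper names are illustrative.

```python
import numpy as np

def generator_matrix(M):
    F = np.array([[1, 0], [1, 1]], dtype=np.uint8)
    G = np.array([[1]], dtype=np.uint8)
    for _ in range(int(np.log2(M))):
        G = np.kron(G, F)
    return G

def symptom(vector_to_be_diagnosed, frozen_positions):
    """Multiply (mod 2) by the generator matrix and keep the elements at the frozen-bit positions."""
    v = np.array(vector_to_be_diagnosed, dtype=np.uint8)
    intermediate = np.mod(v @ generator_matrix(len(v)), 2)
    return intermediate[frozen_positions]

# Example above: frozen bits at positions 0, 1, 2 and 4 (mask {0,0,0,1,0,1,1,1}).
print(symptom([1, 1, 1, 0, 0, 0, 1, 0], [0, 1, 2, 4]))  # [0 1 0 1]
```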
  • the symptom diagnosis table may also be referred to as a checklist.
  • the symptom diagnosis table stores the correspondence between the symptom vector and the diagnosis vector.
  • One symptom vector may correspond to one or more diagnosis vectors; the length of the symptom vector is the number of frozen bits in the subcode block to be decoded, and the length of the diagnosis vector is M.
  • the corresponding symptom diagnosis table is selected according to the number of frozen bits in the subcode block to be decoded.
  • a symptom diagnosis table contains one or more lines, which are stored in order according to the decimal size of the symptom vector, for example, according to the decimal size of the symptom vector from small to large.
  • Z diagnosis vectors can be selected according to the symptom vector, Z ≥ 1. The value of Z can be adjusted: the larger Z is, the higher the decoding precision and the higher the computational complexity; the smaller Z is, the lower the decoding precision and the lower the computational complexity. The value of Z can be determined based on a balance between decoding accuracy and computational complexity.
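For illustration, a minimal Python sketch (not from the patent text) of the table lookup and XOR step. The table fragment below is hypothetical, except that the entry for symptom (0, 0) is chosen to be consistent with the M = 8, two-frozen-bit example given later in this description (FIG. 10).

```python
def candidates_from_table(vector_to_be_diagnosed, symptom, table, Z):
    """Take at most Z diagnosis vectors for this symptom and XOR each with the vector to be diagnosed."""
    out = []
    for diagnosis in table.get(tuple(symptom), [])[:Z]:
        out.append([a ^ b for a, b in zip(vector_to_be_diagnosed, diagnosis)])
    return out

# Hypothetical table fragment: symptom -> diagnosis vectors.
table = {
    (0, 0): [[0, 0, 0, 0, 0, 0, 0, 0], [1, 0, 1, 0, 0, 0, 0, 0],
             [1, 0, 0, 0, 1, 0, 0, 0], [1, 0, 0, 0, 0, 0, 1, 0]],
}
print(candidates_from_table([0, 0, 0, 0, 0, 0, 0, 0], [0, 0], table, Z=4))
```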
  • part of the correspondence between the symptom vector and the diagnosis vector is stored in the symptom diagnosis table, and another part is calculated online by using a part of the storage.
  • the line number in the symptom diagnosis table starts from 0.
  • The symptom diagnosis table designed in the embodiment of the present application stores only the diagnosis vectors of the even-numbered rows of the traditional symptom diagnosis table; the diagnosis vectors of the odd-numbered rows of the traditional table are obtained by online calculation, specifically by inverting the last element of the diagnosis vector of the corresponding even-numbered row.
  • The traditional symptom diagnosis table is called the original table, and the symptom diagnosis table provided by the embodiment of the present application is called the new table.
  • Table[2i+1] = Table[2i] ⊕ 0x0001, where Table[2i+1] represents an odd row and Table[2i] represents an even row.
  • The size of the new table is 1/2 of the original table size, which saves half the storage space compared with the original table.
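For illustration, a Python sketch of this storage scheme under the assumption that each diagnosis vector of length M = 16 is packed into a 16-bit word, so that inverting the last element is exactly the XOR with 0x0001 shown above; the stored values are hypothetical.

```python
def diagnosis_row(stored_even_rows, row_index):
    """Only rows 2i are stored; row 2i+1 is derived online as Table[2i] XOR 0x0001
    (inverting the last element of the stored even row)."""
    word = stored_even_rows[row_index // 2]
    if row_index % 2 == 1:
        word ^= 0x0001
    return word

stored_even_rows = [0x0000, 0x8001, 0x4002]  # hypothetical packed diagnosis vectors
print(hex(diagnosis_row(stored_even_rows, 2)))  # 0x8001 (stored even row)
print(hex(diagnosis_row(stored_even_rows, 3)))  # 0x8000 (computed online)
```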
  • If the rows of the symptom diagnosis table are denoted by i and the columns by j, the table can be further reduced to three sets of values: only all the information of the first row, all the information of the first column, and, for each row, the correspondence between the column indexes j of that row and those of the zeroth row are stored. This can further save storage space.
  • Alternatively, only the odd rows may be stored, and the diagnosis vectors of the even rows are obtained by online calculation by inverting the last element of the diagnosis vector of the corresponding odd row; the principle is the same and is not described again.
  • the corresponding symptom diagnosis table is determined according to the length K of the information bit. That is, different K values correspond to different symptom diagnosis tables.
  • the symptom diagnosis table corresponding to the K value is selected based on the value of K, and the diagnosis vector is determined based on the symptom diagnosis table, and finally the candidate vector is obtained.
  • one or more information bit sequences may occur for a given code length M and information bit length K.
  • Generally, one K value corresponds to one symptom diagnosis table, that is, one information bit sequence corresponds to one symptom diagnosis table. If the information bit sequence corresponding to the code block or subcode block to be decoded does not correspond to the information bit sequence of the stored symptom diagnosis table, the information bit sequence of the code block or subcode block is first interleaved so that the interleaved information bit sequence is the same as the information bit sequence corresponding to the symptom diagnosis table; correspondingly, the LLR vector is subjected to the same interleaving, and the intermediate decoding result is deinterleaved in the same manner, so that the decoding result of the input LLR vector can finally be obtained through the steps shown in FIG. 5. Specifically, before step 503, the input LLR vector is interleaved; in step 507, the L candidate vectors are deinterleaved, and then the decoding result is determined.
  • For example, the LLR vector LLR_0, LLR_1, ..., LLR_15 is denoted [l0, l1, l2, l3, l4, l5, l6, l7, l8, l9, l10, l11, l12, l13, l14, l15].
  • The second bit sequence corresponding to the symptom diagnosis table is [i_0, i_1, i_2, ...].
  • The information bit sequence is used to indicate the positions of the information bits and the frozen bits.
  • The first bit sequence is interleaved as shown in FIG. 6.
  • the input LLR vector needs to be interleaved as shown in FIG. 6, that is, LLR 4 to LLR 7 in the input LLR vector are exchanged with LLR 8 to LLR 11 .
  • the LLR vectors after the interleaving process are [l0, l1, l2, l3, l8, l9, l10, l11, l4, l5, l6, l7, l12, l13, l14, l15].
  • the intermediate decoding result or the partial intermediate decoding result is deinterleaved in the manner of the above interleaving processing.
  • If the intermediate decoding result or partial intermediate decoding result is [b_0, b_1, b_2, b_3, b_4, b_5, b_6, b_7, b_8, b_9, b_10, b_11, b_12, b_13, b_14, b_15], the elements at the 4th to 7th positions and the 8th to 11th positions of the sequence are interchanged to obtain the final decoding result or the final partial decoding result.
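A minimal Python sketch of this interleave/deinterleave pair for the 16-LLR example above (positions 4-7 exchanged with positions 8-11); the permutation is written out explicitly and the helper names are illustrative.

```python
# Permutation for the example: positions 4-7 are exchanged with positions 8-11.
PERM = [0, 1, 2, 3, 8, 9, 10, 11, 4, 5, 6, 7, 12, 13, 14, 15]

def interleave(vec, perm=PERM):
    return [vec[p] for p in perm]

def deinterleave(vec, perm=PERM):
    out = [None] * len(vec)
    for i, p in enumerate(perm):
        out[p] = vec[i]
    return out

llrs = [f"l{i}" for i in range(16)]
print(interleave(llrs))                        # [l0..l3, l8..l11, l4..l7, l12..l15]
print(deinterleave(interleave(llrs)) == llrs)  # True: the round trip restores the order
```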
  • In the present application, the at least Y candidate vectors are first subjected to de-duplication processing, and L candidate vectors are selected from the de-duplicated candidate vectors, where de-duplication means that only one copy of each repeated candidate vector is retained, so that any two candidate vectors after de-duplication are different.
  • Alternatively, the X elements of an all-zero vector of length M are inverted to obtain a deduplication vector, where the positions of the X elements are consistent with the positions of the first X LLRs of the LLR vector sorted by absolute value from small to large, and the definition of X is consistent with the above description.
  • Each diagnosis vector is ANDed with the deduplication vector; if the resulting vector contains an element equal to 1, the corresponding diagnosis vector is marked as unavailable, or the PM value of the candidate vector obtained from the corresponding diagnosis vector is set to infinity, so that these vectors are filtered out when the preferred paths are selected according to the PM value.
  • In the example above, the elements at the 0th, 4th, and 7th positions are inverted to obtain the deduplication vector {1, 0, 0, 0, 1, 0, 0, 1}, and each diagnosis vector is ANDed with this deduplication vector as just described.
  • For example, if an obtained diagnosis vector is {0, 0, 0, 0, 1, 1, 0, 0}, the result of ANDing it with the deduplication vector is {0, 0, 0, 0, 1, 0, 0, 0}.
  • Since the result contains a 1, the diagnosis vector {0, 0, 0, 0, 1, 1, 0, 0} is not available; it is marked as unavailable, or the PM value of the candidate vector obtained from the diagnosis vector {0, 0, 0, 0, 1, 1, 0, 0} is set to infinity.
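A minimal Python sketch of this filtering rule, reusing the deduplication vector and diagnosis vector of the example above; the PM values passed in are hypothetical placeholders.

```python
import math

def filter_duplicates(diagnosis_vectors, dedup_vector, pm_values):
    """Mark a diagnosis vector as unavailable (PM = infinity) if its AND with the
    deduplication vector contains any element equal to 1."""
    for idx, diagnosis in enumerate(diagnosis_vectors):
        if any(d & m for d, m in zip(diagnosis, dedup_vector)):
            pm_values[idx] = math.inf
    return pm_values

dedup = [1, 0, 0, 0, 1, 0, 0, 1]                      # 1s at positions 0, 4 and 7
diags = [[0, 0, 0, 0, 1, 1, 0, 0], [0, 0, 1, 1, 0, 0, 0, 0]]
print(filter_duplicates(diags, dedup, [0.5, 0.7]))    # [inf, 0.7]
```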
  • When the encoding side adopts a shortened coding mode, the L candidate vectors obtained in step 506 are compared with the positions of the shortened bits; unmatched candidate vectors are deleted, or the PM values of the unmatched candidate vectors are marked as infinity, where a mismatch means that an element of the candidate vector at a shortened bit position is not 0.
  • If the number of information bits corresponding to the information to be decoded or the subcode block to be decoded is K, the decoding method shown in FIG. 5 is suitable for any number of information bits K from 0 to M.
  • For example, when M = 16, the decoding method shown in FIG. 5 applies to K from 0 to 16.
  • Deciding in parallel the information to be decoded or a subcode block containing any number of information bits helps to reduce the computational complexity.
  • In particular, when M is greater than 4, the decoding method shown in FIG. 5 can shorten the decoding time by 40% compared with the exhaustive expansion of the existing ML decoding method.
  • The execution body of the decoding method is a decoding device, and the decoding device may be the network device 301 shown in FIG. 3 or the terminal 302 shown in FIG. 3.
  • Step 702: Perform at least one level of F/G operations on the N LLRs corresponding to the information to be decoded, until the length of the LLR vector at a level after the F/G operations is equal to M, and then perform step 703.
  • Step 703 Perform a hard decision on each LLR in the input LLR vector to obtain an original vector.
  • the original vector may also be referred to as a first vector.
  • Step 704: Perform at least the following (L-1) operations in sequence (inverting the first element of the first vector to obtain a second vector, inverting the second element to obtain a third vector, and so on, as described in the second aspect above).
  • The first 7 LLRs of the LLR vector sorted by absolute value from small to large are denoted [LLR0, LLR1, LLR2, ..., LLR6], and the positions of the first to seventh elements in the first vector correspond one-to-one to the positions of [LLR0, LLR1, LLR2, ..., LLR6] in the LLR vector; that is, the position of the first element in the first vector coincides with the position of LLR0 in the LLR vector, the position of the second element coincides with the position of LLR1, and the positions of the other elements are determined similarly.
  • Step 705: From the first vector and the vectors obtained in step 704, select the first L vectors in order starting from the first vector.
  • Step 706 Determine a decoding result of the LLR vector according to the L vectors.
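For illustration, a minimal Python sketch (not from the patent text) of steps 703-705 as described here: the first vector is the hard decision, and each subsequent vector inverts one element at one of the least-reliable (smallest |LLR|) positions; names and the example LLRs are illustrative.

```python
def flip_based_vectors(llr_vector, L):
    """First vector: hard decision. Following vectors: one single-element inversion each,
    at the positions of the smallest-magnitude LLRs, in order of increasing |LLR|."""
    first = [0 if llr >= 0 else 1 for llr in llr_vector]
    order = sorted(range(len(llr_vector)), key=lambda i: abs(llr_vector[i]))
    vectors = [first]
    for pos in order[:L - 1]:          # at least (L - 1) inversion operations
        v = list(first)
        v[pos] ^= 1
        vectors.append(v)
    return vectors[:L]                 # first L vectors, starting from the first vector

print(flip_based_vectors([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8], L=4))
```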
  • The execution body of the decoding method is a decoding device, and the decoding device may be the network device 301 shown in FIG. 3 or the terminal 302 shown in FIG. 3.
  • Step 802: Perform at least one level of F/G operations on the N LLRs corresponding to the information to be decoded, until the length of the LLR vector at a level after the F/G operations is equal to M, and then perform step 803.
  • Step 803 Perform a hard decision on each LLR in the input LLR vector to obtain an original vector.
  • the original vector may also be referred to as a first vector.
  • Step 804 Perform parity check on the first vector. If the verification passes, perform steps 805 to 807. If the verification fails, perform steps 805' to 807'.
  • Step 805: Perform at least the following (L-1) inversion operations in sequence.
  • The first 8 LLRs of the LLR vector sorted by absolute value from small to large are denoted [LLR0, LLR1, LLR2, ..., LLR7], and the positions of the first to eighth elements in the first vector correspond one-to-one to the positions of [LLR0, LLR1, LLR2, ..., LLR7] in the LLR vector; that is, the position of the first element in the first vector coincides with the position of LLR0 in the LLR vector, the position of the second element coincides with the position of LLR1, and the positions of the other elements are determined similarly.
  • Step 806: From the first vector and the vectors obtained in step 805, select the first L vectors in order starting from the first vector.
  • Step 807: Determine the decoding result of the LLR vector according to the L vectors.
  • Step 805': Perform at least the first L inversion operations in sequence (for the verification-failure case: the single-element inversions followed by the three-element inversions described above).
  • The first 8 LLRs of the LLR vector sorted by absolute value from small to large are denoted [LLR0, LLR1, LLR2, ..., LLR7], and the positions of the first to eighth elements in the first vector correspond one-to-one to the positions of [LLR0, LLR1, LLR2, ..., LLR7] in the LLR vector, as described above.
  • Step 806': From the vectors obtained in step 805', select the first L vectors in order starting from the second vector.
  • Step 807' determining a decoding result of the LLR vector according to the L vectors.
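For illustration, a Python sketch of the verification-failure branch only (steps 805'-806'), following the operation list given in the summary above: single-element inversions at the eight least-reliable positions, followed by three-element inversions among the four least-reliable positions. The operations of the verification-pass branch are not reproduced here; the example LLRs are hypothetical.

```python
from itertools import combinations

def fail_branch_vectors(llr_vector, L):
    """Verification failed: generate candidates by inverting single elements and then
    triples of elements at the least-reliable positions; keep the first L of them
    (the original hard-decision vector itself is skipped)."""
    first = [0 if llr >= 0 else 1 for llr in llr_vector]
    order = sorted(range(len(llr_vector)), key=lambda i: abs(llr_vector[i]))[:8]

    def invert(positions):
        v = list(first)
        for p in positions:
            v[p] ^= 1
        return v

    out = [invert([p]) for p in order]                            # second .. ninth vectors
    out += [invert(list(c)) for c in combinations(order[:4], 3)]  # tenth vector onward
    return out[:L]

print(fail_branch_vectors([0.1, -0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8], L=4))
```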
  • In addition, candidate vectors may also be obtained by the exhaustive expansion of the existing maximum likelihood (ML) decoding method.
  • The method of FIG. 5 is applicable when the number of information bits K ranges from 0 to M, whereas the exhaustive expansion of the existing ML decoding method is applicable when the K value is not greater than a threshold; for example, the threshold can be set to 6.
  • In that case, decoding can be performed by the exhaustive expansion of the existing ML decoding method.
  • the right side is the LLR input side, or the code word side; the left side is the information side, or the decoding bit side.
  • the LLR input vector is [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8].
  • the path is directly split at the level of the LLR input.
  • The encoded codewords [c_0, c_1, c_2, ..., c_7] may have 8 possible cases; that is, 8 possible candidate vectors are split at the level of the LLR input, respectively: [0,0,0,0,0,0,0,0], [1,1,1,1,1,1,1,1], [1,0,1,0,1,0,1,0], [0,1,0,1,0,1,0,1], ..., [1,0,0,1,1,0,0,1].
  • the PM values are calculated for the eight candidate vectors, and the PM values are obtained as 0, 3.6, 1.6, 2.0, ..., 1.8.
  • L candidate vectors are selected among the 8 candidate vectors according to the magnitude of the PM value.
  • the formula for calculating the PM value (denoted ΔPM) at the LLR level of length 8 is ΔPM = Σ_{i=0}^{7} |L_i| · |c_i - (1 - sgn(L_i))/2|, where c_i denotes the ith element of the candidate vector and L_i denotes the ith element of the LLR vector; the term |c_i - (1 - sgn(L_i))/2| is 0 when the ith element of the candidate vector matches the hard decision of L_i and 1 otherwise, so ΔPM accumulates the magnitudes of the mismatched LLRs.
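  • As a check, the following sketch (the function name pm_increment is ours) reproduces the PM values quoted for this example.

```python
import numpy as np

def pm_increment(candidate, llr):
    hard = (1 - np.sign(llr)) / 2             # (1 - sgn(L_i)) / 2: hard decision of each LLR
    mismatch = np.abs(candidate - hard)       # 1 where the candidate disagrees, 0 otherwise
    return float(np.sum(np.abs(llr) * mismatch))

llr = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8])
print(round(pm_increment(np.zeros(8), llr), 3))                          # 0.0
print(round(pm_increment(np.ones(8), llr), 3))                           # 3.6
print(round(pm_increment(np.array([1, 0, 1, 0, 1, 0, 1, 0]), llr), 3))   # 1.6
print(round(pm_increment(np.array([0, 1, 0, 1, 0, 1, 0, 1]), llr), 3))   # 2.0
print(round(pm_increment(np.array([1, 0, 0, 1, 1, 0, 0, 1]), llr), 3))   # 1.8
```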
  • As shown in FIG. 10, if M = 8 and K = 6, decoding can be performed by the method shown in FIG. 5.
  • the right side is the LLR input side, or the code word side; the left side is the information side, or the decoding bit side.
  • the LLR input vector [L0, L1, ..., L7] is [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8].
  • the path is split directly at the level of the LLR input, but the splitting is performed according to the method shown in FIG. 5. Specifically, the original vector obtained after the hard decision of the LLR vector is [0, 0, 0, 0, 0, 0, 0, 0].
  • Assuming X = 2, the first two LLRs of the LLR vector sorted by absolute value from smallest to largest are L0 and L1, that is, the 0th position and the 1st position. Inverting at least 0 of the elements at the 0th and 1st positions of the original vector [0,0,0,0,0,0,0,0] yields up to 4 vectors to be diagnosed, respectively: [0000 0000], [1000 0000], [0100 0000], [1100 0000]. An intermediate decoding vector of each vector to be diagnosed is determined according to the generation matrix, and a symptom vector is selected from the intermediate decoding vector according to the positions of the frozen bits. As can be seen from FIG. 10, the frozen bits are located at the 0th position and the 1st position, and the selected symptom vectors are [00], [11], [01], [10].
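  • The symptom-vector extraction can be sketched as follows. It is illustrative only: the generation matrix is taken as the n-fold Kronecker power of [[1, 0], [1, 1]] without the optional bit-reversal permutation, so the exact symptom values depend on the index convention of the code construction, and the function names are ours. The printed example is the M = 8 worked case given elsewhere in the description (frozen bits at positions 0, 1, 2 and 4).

```python
import numpy as np

def kron_power(n):
    """n-fold Kronecker power of the polar kernel F_2 = [[1, 0], [1, 1]]."""
    G = np.array([[1]], dtype=int)
    F = np.array([[1, 0], [1, 1]], dtype=int)
    for _ in range(n):
        G = np.kron(G, F)
    return G

def symptom_vector(diag_vec, frozen_positions):
    """Intermediate decoding vector u = c * G_M (mod 2), restricted to the frozen positions."""
    M = len(diag_vec)
    G = kron_power(int(np.log2(M)))
    u = (np.asarray(diag_vec, dtype=int) @ G) % 2
    return u[list(frozen_positions)]

# Worked example from the description: c = [1,1,1,0,0,0,1,0] gives the intermediate
# decoding vector [0,1,0,0,1,0,1,0]; with frozen bits at positions 0, 1, 2, 4 the
# symptom vector is [0, 1, 0, 1].
print(symptom_vector([1, 1, 1, 0, 0, 0, 1, 0], [0, 1, 2, 4]))
```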
  • the diagnosis vector is selected in the symptom diagnosis table based on the symptom vector.
  • Table 1 shows some rows of the symptom diagnosis table; one part of Table 1 can be stored in advance, and the other part can be computed online.
  • Each diagnosis vector is XORed with the corresponding vector to be diagnosed, yielding 16 candidate vectors.
  • the candidate vectors that appear repeatedly among the 16 candidate vectors are deleted. After deduplication, the remaining candidate vectors are {[0000 0000], [1010 0000], [1000 1000], [1000 0010], [1101 0000], [1100 0100], [1100 0001]}.
  • Alternatively, the repeated candidate vectors can be kept and their PM values marked as infinity in the subsequent PM calculation. The PM values of the above 16 candidate vectors are {0, ∞, ∞, ∞}, {∞, 0.4, 0.6, 0.8}, {∞, ∞, ∞, ∞}, {∞, 0.7, 0.9, 1.1}.
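  • Both deduplication strategies mentioned above (deleting repeats, or keeping them and forcing their PM values to infinity so that PM-based selection never picks them) can be written compactly; the function names in this sketch are ours.

```python
def dedup_remove(candidates):
    """Keep only the first occurrence of each candidate vector."""
    seen, kept = set(), []
    for c in candidates:
        t = tuple(c)
        if t not in seen:
            seen.add(t)
            kept.append(c)
    return kept

def dedup_mark(candidates, pms):
    """Keep duplicates but set their PM to infinity (pms is a mutable list)."""
    seen = set()
    for i, c in enumerate(candidates):
        t = tuple(c)
        if t in seen:
            pms[i] = float("inf")   # filtered out later by PM-based selection
        else:
            seen.add(t)
    return pms
```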
  • L candidate vectors are selected from the deduplicated candidate vectors according to the PM values, and L decoding results of the LLR vector are determined according to the L candidate vectors and the generation matrix.
  • the decoding result includes frozen bits and information bits.
  • In order to reduce CRC false alarms, a decoding method as shown in FIG. 11 may be adopted.
  • the execution body of the decoding method is a decoding device, and the decoding device may be the network device 301 shown in FIG. 3, or may be the terminal 302 shown in FIG. 3.
  • Step 1101 Receive information to be decoded, the length of the information to be decoded is N, and the information to be decoded includes Q subcode blocks, and the length of one subcode block is M, M ⁇ N, and M is a positive integer power of 2;
  • Step 1102 Determine, for any one of the Q subcode blocks, L first candidate vectors
  • Step 1103 Among the legal candidate vectors in the Q*L first candidate vectors determined for the Q subcode blocks, select the L second candidate vectors with the best PM values as the decoding result of the information to be decoded, where a candidate vector is legal if the positions of the auxiliary bits in the candidate result determined from it and the generation matrix conform to the setting on the encoding side.
  • In step 1102, the L first candidate vectors for any subcode block may be determined according to the method for determining L candidate vectors in the method shown in FIG. 5, or according to the method for determining L vectors in the method shown in FIG. 7 or FIG. 8. Repeated details are not described here again.
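  • A possible reading of steps 1101 to 1103 as a sketch. The legality test and the per-sub-block decoder are passed in as callables, because the auxiliary-bit convention is fixed on the encoding side and the per-block candidates come from the FIG. 5/7/8 procedures; all function and parameter names here are ours.

```python
def decode_blockwise(subblock_llrs, per_block_decoder, pm_of, is_legal, L):
    """subblock_llrs: Q LLR vectors of length M; per_block_decoder returns the
    L first candidate vectors of one sub-block (e.g. via the FIG. 5/7/8 procedures)."""
    pool = []
    for llr in subblock_llrs:                    # Q sub-blocks
        for cand in per_block_decoder(llr, L):   # L first candidate vectors each
            if is_legal(cand):                   # auxiliary bits match the encoder's setting
                pool.append((pm_of(cand, llr), cand))
    pool.sort(key=lambda item: item[0])          # best (smallest) PM first
    return [cand for _, cand in pool[:L]]        # the L second candidate vectors
```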
  • Based on the decoding method shown in FIG. 5, the embodiment of the present application further provides a decoding apparatus 1200, where the decoding apparatus 1200 is configured to execute the decoding method shown in FIG. 5. As shown in FIG. 12, the decoding apparatus 1200 includes:
  • the hard decision unit 1201 is configured to perform a hard decision on each LLR of the input log likelihood ratio LLR vector to obtain an original vector.
  • the length of the LLR vector is M, M ≤ N, where N is the length of the information to be decoded, and N and M are positive integer powers of 2;
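  • For reference, the hard decision performed by the hard decision unit amounts to a sign test; the function name and the mapping of an exactly-zero LLR to bit 0 are assumptions of this sketch.

```python
def hard_decision(llr_vector):
    # LLR >= 0 -> bit 0, LLR < 0 -> bit 1 (matches the decision rule used in the examples)
    return [0 if llr >= 0 else 1 for llr in llr_vector]
```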
  • the determining unit 1202 is configured to determine Y to-be-diagnosed vectors based on the original vector obtained by the hard decision unit 1201, where a to-be-diagnosed vector is obtained by inverting at least 0 of X elements of the original vector, the positions of the X elements in the original vector coincide with the positions of the first X LLRs of the LLR vector sorted by absolute value from smallest to largest, and Y ≤ 2^X; and is configured to determine at least one candidate vector based on each of the Y to-be-diagnosed vectors, where at least one candidate vector is determined based on any to-be-diagnosed vector as follows: an intermediate decoding vector of the to-be-diagnosed vector is determined according to the generation matrix, a symptom vector is selected from the intermediate decoding vector according to the positions of the frozen bits, at least one diagnosis vector is selected from the symptom diagnosis table according to the symptom vector, and each diagnosis vector is XORed with the to-be-diagnosed vector to obtain at least one candidate vector; the symptom diagnosis table includes the correspondence between symptom vectors and diagnosis vectors;
  • a selecting unit 1203, configured to select L candidate vectors among at least Y candidate vectors obtained by the Y to-be-diagnosed vectors determined by the determining unit 1202;
  • the determining unit 1202 is further configured to determine a decoding result of the LLR vector according to the L candidate vectors selected by the selecting unit 1203.
  • the decoding apparatus 1200 further includes an interleaving unit 1204, configured to:
  • if the first bit sequence corresponding to the LLR vector differs from the set second bit sequence, interleave the input LLR vector and perform a hard decision on each LLR in the interleaved LLR vector to obtain the original vector, where the second bit sequence is obtained by applying the same interleaving process to the first bit sequence, and the positions of the frozen bits are determined by the second bit sequence;
  • the interleaving unit 1204 is further configured to: perform deinterleave processing on each of the L candidate vectors, and determine a decoding result of the LLR vector according to the L candidate vectors after the deinterleaving process.
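  • A permutation-based sketch of the interleaving unit is given below. The permutation shown is hypothetical: it reproduces the block swap of the FIG. 6 example for M = 16 (positions 4 to 7 exchanged with positions 8 to 11), while in general the permutation is whatever maps the code block's first bit sequence onto the second bit sequence of the stored symptom diagnosis table.

```python
# Hypothetical permutation for M = 16 reproducing the FIG. 6 example.
PERM = list(range(0, 4)) + list(range(8, 12)) + list(range(4, 8)) + list(range(12, 16))

def interleave(vec, perm=PERM):
    return [vec[p] for p in perm]            # applied to the input LLR vector

def deinterleave(vec, perm=PERM):
    out = [None] * len(vec)
    for i, p in enumerate(perm):
        out[p] = vec[i]                      # applied to each of the L candidate vectors
    return out
```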
  • the selecting unit 1203 is configured to: if there are duplicate candidate vectors among the at least Y candidate vectors obtained from the Y to-be-diagnosed vectors, perform deduplication processing on the at least Y candidate vectors and select L candidate vectors from the deduplicated candidate vectors, where any two of the deduplicated candidate vectors are different.
  • the embodiment of the present application further provides a decoding apparatus 1300.
  • the decoding apparatus 1300 is configured to execute the decoding method shown in FIG. 7. As shown in FIG. 13, the decoding apparatus 1300 includes:
  • the hard decision unit 1301 is configured to perform hard decision on each LLR of the input log likelihood ratio LLR vector to obtain a first vector.
  • the length of the LLR vector is M, K = M ≤ N, where N is the length of the information to be decoded, N and M are positive integer powers of 2, and K is the length of the information bits;
  • the inversion unit 1302 is configured to perform, in sequence, at least the first (L-1) of the element-inversion operations for the K = M case (the ordered patterns are summarized in the sketch following this apparatus description), where:
  • the positions of the first element to the Xth element in the first vector correspond to the positions of the first X LLRs in the LLR vector sorted according to the absolute value from small to large;
  • a selecting unit 1303, configured to select the first L vectors in order, starting from the first vector, among the obtained vectors;
  • the determining unit 1304 is configured to determine a decoding result of the LLR vector according to the L vectors.
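  • For completeness, the ordered inversion patterns for the K = M case (the operations performed by the inversion unit 1302, referenced above) are transcribed below from the description; indices refer to the X least-reliable positions, 0 being the smallest |LLR|.

```python
# Ordered flip patterns for the K = M case (FIG. 7): the empty pattern keeps the
# first (hard-decision) vector, the remaining entries yield the 2nd through 13th vectors.
FIG7_PATTERNS = [(), (0,), (1,), (2,), (3,), (4,), (5,), (6,),
                 (0, 1), (0, 2), (0, 3), (1, 2), (0, 1, 2)]
# The first L of these patterns, applied to the first vector as in the sketch after
# step 807', give the L vectors from which the decoding result is determined.
```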
  • the embodiment of the present application further provides a decoding apparatus 1400.
  • the decoding apparatus 1400 is configured to execute the decoding method shown in FIG. 8. As shown in FIG. 14, the decoding apparatus 1400 includes:
  • the hard decision unit 1401 is configured to perform hard decision on each LLR of the input log likelihood ratio LLR vector to obtain a first vector.
  • a checking unit 1402 configured to perform parity check on the first vector obtained by the hard decision unit 1401;
  • the inversion unit 1403 is configured to: if the check performed by the checking unit 1402 passes, perform, in sequence, at least the first (L-1) of the inversion operations listed in step 805, where:
  • the positions of the first element to the Xth element in the first vector correspond to the positions of the first X LLRs in the LLR vector sorted according to the absolute value from small to large;
  • a selecting unit 1404, configured to select the first L vectors in order, starting from the first vector, among the obtained vectors;
  • the determining unit 1405 is configured to determine a decoding result of the LLR vector according to the L vectors.
  • the inversion unit 1403 is further configured to: if the check performed by the checking unit 1402 fails, perform, in sequence, at least the first L of the inversion operations listed in step 805', where:
  • the positions of the first element to the Xth element in the first vector correspond to the positions of the first X LLRs in the LLR vector sorted according to the absolute value from small to large;
  • the selecting unit 1404 is further configured to select the first L vectors in order, starting from the second vector, among the obtained vectors;
  • the determining unit 1405 is further configured to determine a decoding result of the LLR vector according to the L vectors.
  • Based on the decoding method shown in FIG. 11, the embodiment of the present application further provides a decoding apparatus 1500, which is configured to execute the decoding method shown in FIG. 11. As shown in FIG. 15, the decoding apparatus 1500 includes:
  • the receiving unit 1501 is configured to receive information to be decoded, where the length of the information to be decoded is N, the information to be decoded includes Q subcode blocks, the length of one subcode block is M, M ≤ N, and M is a positive integer power of 2;
  • a determining unit 1502 configured to determine L first candidate vectors for any one of the Q subcode blocks
  • the selecting unit 1503 is configured to select, among the legal candidate vectors in the Q*L first candidate vectors determined for the Q subcode blocks, the L second candidate vectors with the best PM values as the decoding result of the information to be decoded, where a candidate vector is legal if the positions of the auxiliary bits in the candidate result determined from it and the generation matrix conform to the setting on the encoding side.
  • the determining unit 1502 is configured to: when determining the L first candidate vectors for any subcode block, follow the method for determining L candidate vectors in the method shown in FIG. 5, or the method for determining L vectors in the method of FIG. 7 or FIG. 8.
  • It should be noted that the division into modules of the decoding apparatuses shown in FIG. 12 to FIG. 15 in the embodiments of the present application is schematic and is merely a logical function division; in actual implementation, another division manner may be used.
  • the functional units in the various embodiments of the present application may be integrated into one processing unit, or may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
  • Based on the decoding method shown in FIG. 5, the embodiment of the present application further provides a decoding apparatus 1600, which is used to execute the decoding method shown in FIG. 5.
  • Some or all of the decoding methods shown in FIG. 5 may be implemented by hardware or by software.
  • the decoding apparatus 1600 includes: an input interface circuit 1601 for acquiring information to be decoded;
  • the logic circuit 1602 is configured to execute the decoding method shown in FIG. 5; and the output interface circuit 1603 is configured to output a decoding result.
  • the decoding device 1600 may be a chip or an integrated circuit when implemented.
  • When part or all of the decoding method shown in FIG. 5 is implemented by software, as shown in FIG. 17, the decoding apparatus 1700 includes: a memory 1701 for storing a program; and a processor 1702 for executing the program stored in the memory 1701, where, when the program is executed, the decoding apparatus 1700 can implement the decoding method shown in FIG. 5.
  • the foregoing memory 1701 may be a physically independent unit or may be integrated with the processor 1702.
  • the decoding apparatus 1700 may also include only the processor 1702.
  • the memory 1701 for storing programs is located outside the decoding device 1700, and the processor 1702 is connected to the memory 1701 through circuits/wires for reading and executing programs stored in the memory 1701.
  • the processor 1702 may be a central processing unit (CPU), a network processor (NP), or a combination of a CPU and an NP.
  • the processor 1702 may further include a hardware chip.
  • the hardware chip may be an application-specific integrated circuit (ASIC), a programmable logic device (PLD), or a combination thereof.
  • the PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), a general array logic (GAL), or any combination thereof.
  • the memory 1701 may include a volatile memory such as a random-access memory (RAM); the memory 1701 may also include a non-volatile memory such as a flash memory, a hard disk drive (HDD) or a solid-state drive (SSD); the memory 1701 may also include a combination of the above types of memories.
  • a decoding apparatus 1800 for performing the decoding method shown in FIG. 7 is also provided in the embodiment of the present application.
  • Some or all of the decoding methods shown in FIG. 7 may be implemented by hardware or by software.
  • the decoding apparatus 1800 includes: an input interface circuit 1801, configured to acquire information to be decoded;
  • the logic circuit 1802 is configured to execute the decoding method shown in FIG. 7;
  • the output interface circuit 1803 is configured to output the decoding result.
  • the decoding device 1800 may be a chip or an integrated circuit when implemented.
  • When part or all of the decoding method shown in FIG. 7 is implemented by software, as shown in FIG. 19, the decoding apparatus 1900 includes: a memory 1901 for storing a program; and a processor 1902 for executing the program stored in the memory 1901, where, when the program is executed, the decoding apparatus 1900 can implement the decoding method shown in FIG. 7.
  • the foregoing memory 1901 may be a physically independent unit or may be integrated with the processor 1902.
  • the decoding apparatus 1900 may also include only the processor 1902.
  • the memory 1901 for storing programs is located outside the decoding device 1900, and the processor 1902 is connected to the memory 1901 through circuits/wires for reading and executing programs stored in the memory 1901.
  • the processor 1902 may be a central processing unit (CPU), a network processor (NP), or a combination of a CPU and an NP.
  • the processor 1902 may further include a hardware chip.
  • the hardware chip may be an application-specific integrated circuit (ASIC), a programmable logic device (PLD), or a combination thereof.
  • the PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), a general array logic (GAL), or any combination thereof.
  • the memory 1901 may include a volatile memory such as a random-access memory (RAM); the memory 1901 may also include a non-volatile memory such as a flash memory, a hard disk drive (HDD) or a solid-state drive (SSD); the memory 1901 may also include a combination of the above types of memories.
  • Based on the decoding method shown in FIG. 8, the embodiment of the present application further provides a decoding apparatus 2000, which is used to execute the decoding method shown in FIG. 8.
  • Some or all of the decoding methods shown in FIG. 8 may be implemented by hardware or by software.
  • the decoding apparatus 2000 includes: an input interface circuit 2001 for acquiring information to be decoded;
  • the logic circuit 2002 is configured to execute the decoding method shown in FIG. 8;
  • the output interface circuit 2003 is configured to output a decoding result.
  • the decoding device 2000 may be a chip or an integrated circuit in a specific implementation.
  • When part or all of the decoding method shown in FIG. 8 is implemented by software, as shown in FIG. 21, the decoding apparatus 2100 includes: a memory 2101 for storing a program; and a processor 2102 for executing the program stored in the memory 2101, where, when the program is executed, the decoding apparatus 2100 can implement the decoding method shown in FIG. 8.
  • the foregoing memory 2101 may be a physically separate unit or may be integrated with the processor 2102.
  • the decoding apparatus 2100 may also include only the processor 2102.
  • the memory 2101 for storing programs is located outside the decoding device 2100, and the processor 2102 is connected to the memory 2101 through circuits/wires for reading and executing programs stored in the memory 2101.
  • the processor 2102 can be a central processing unit (CPU), a network processor (NP), or a combination of a CPU and an NP.
  • the processor 2102 can also further include a hardware chip.
  • the hardware chip may be an application-specific integrated circuit (ASIC), a programmable logic device (PLD), or a combination thereof.
  • the PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), a general array logic (GAL), or any combination thereof.
  • the memory 2101 may include a volatile memory such as a random-access memory (RAM); the memory 2101 may also include a non-volatile memory such as a flash memory, a hard disk drive (HDD) or a solid-state drive (SSD); the memory 2101 may also include a combination of the above types of memories.
  • Based on the decoding method shown in FIG. 11, the embodiment of the present application further provides a decoding apparatus 2200, which is used to execute the decoding method shown in FIG. 11.
  • Some or all of the decoding methods shown in FIG. 11 may be implemented by hardware or by software.
  • the decoding apparatus 2200 includes: an input interface circuit 2201 for acquiring information to be decoded;
  • the logic circuit 2202 is configured to execute the decoding method shown in FIG. 11;
  • the output interface circuit 2203 is configured to output a decoding result.
  • the decoding device 2200 may be a chip or an integrated circuit when implemented.
  • When part or all of the decoding method shown in FIG. 11 is implemented by software, as shown in FIG. 23, the decoding apparatus 2300 includes: a memory 2301 for storing a program; and a processor 2302 for executing the program stored in the memory 2301, where, when the program is executed, the decoding apparatus 2300 can implement the decoding method shown in FIG. 11.
  • the foregoing memory 2301 may be a physically independent unit or may be integrated with the processor 2302.
  • the decoding apparatus 2300 may also include only the processor 2302.
  • a memory 2301 for storing a program is located outside the decoding device 2300, and the processor 2302 is connected to the memory 2301 via a circuit/wire for reading and executing a program stored in the memory 2301.
  • the processor 2302 can be a central processing unit (CPU), a network processor (NP), or a combination of a CPU and an NP.
  • the processor 2302 may further include a hardware chip.
  • the hardware chip may be an application-specific integrated circuit (ASIC), a programmable logic device (PLD), or a combination thereof.
  • the PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), a general array logic (GAL), or any combination thereof.
  • the memory 2301 may include a volatile memory such as a random-access memory (RAM); the memory 2301 may also include a non-volatile memory such as a flash memory, a hard disk drive (HDD) or a solid-state drive (SSD); the memory 2301 may also include a combination of the above types of memories.
  • the embodiment of the present application provides a computer storage medium, which stores a computer program, and the computer program includes instructions for executing the decoding method provided by the foregoing method embodiments.
  • the embodiment of the present application provides a computer program product comprising instructions, when executed on a computer, causing a computer to execute the decoding method provided by the foregoing method embodiment.
  • Any of the decoding devices provided by the embodiments of the present application may also be a chip.
  • embodiments of the present application can be provided as a method, system, or computer program product.
  • the present application can take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment in combination of software and hardware.
  • the application can take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) including computer usable program code.
  • These computer program instructions may also be stored in a computer readable memory that can direct a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer readable memory produce an article of manufacture including an instruction apparatus, where the instruction apparatus implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operation steps are performed on the computer or other programmable device to produce computer-implemented processing, and the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

Landscapes

  • Physics & Mathematics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Error Detection And Correction (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)

Abstract

一种译码方法及装置,用以提高译码比特判决的并行度,降低译码时延。该方法为:对输入的长度为M的LLR向量中的每一个LLR进行硬判决,得到原始向量,M≤N,N为待译码信息的长度;基于所述原始向量,确定Y个待诊断向量,其中,所述待诊断向量为所述原始向量的X个元素中的至少0个取反得到,所述X个元素在所述原始向量中的位置与所述LLR向量中按照绝对值由小到大排序的前X个LLR的位置一致,Y≤2X;基于所述Y个待诊断向量中的每一个待诊断向量,根据症状诊断表,均确定至少一个候选向量;在由所述Y个待诊断向量获得的至少Y个候选向量中,选择L个候选向量,根据所述L个候选向量确定所述LLR向量的译码结果。

Description

一种译码方法及装置
本申请要求在2018年01月09日提交中国专利局、申请号为201810020396.4、发明名称为“一种译码方法及装置”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及编译码技术领域,尤其涉及一种译码方法及装置。
背景技术
无线通信的快速演进预示着未来第五代(5th generation,5G)通信系统将呈现出一些新的特点,最典型的三个通信场景包括增强型移动互联网(enhance mobile broadband,eMBB)、海量机器连接通信(massive machine type communication,mMTC)和高可靠低延迟通信(ultra reliable low latency communication,URLLC),这些通信场景的需求将对现有长期演进(long term evolution,LTE)技术提出新的挑战。信道编码作为最基本的无线接入技术,是满足5G通信需求的重要研究对象之一。极化码(Polar Codes)在5G标准中被选作控制信道编码方式。极化码也可以称为Polar码,是第一种、也是已知的唯一一种能够被严格证明“达到”信道容量的信道编码方法。在不同码长下,尤其对于有限码,Polar码的性能远优于Turbo码和低密度奇偶校验码(low density parity check,LDPC)码。另外,Polar码在编译码方面具有较低的计算复杂度。这些优点让Polar码在5G中具有很大的发展和应用前景。
在Polar码的译码方法中,现有的一种逐比特消除译码方法(Successive Cancellation,SC)的译码过程为:接收到待译码信息(包括信息比特和固定比特)后,对于待译码信息中信息比特,逐个计算每一个信息比特的对数似然比(Log Likelihood Ratio,LLR),进行逐比特判决,若信息比特的LLR>0,则译码结果为0,若信息比特的LLR<0,则译码结果为1,对于待译码信息中的固定比特,无论LLR为多少译码结果都置为0,按顺序依次译出所有的比特,前一个译码比特的结果作为后一个译码比特计算的一个输入,一旦判错,会导致错误扩散,且没有机会挽回,因此译码性能不高。为解决这一问题,在逐次消除列表算法(Successive Cancellation List,SCL)中,SCL算法在译码每个信息比特时,将0和1对应的译码结果都保存作为2个分支译码路径(简称路径分裂),图1为SCL算法中的译码路径示意图,如图1所示,每一层代表1个译码比特,若译码结果为0,则沿着左子树发展路径,若译码结果为1,则沿着右子树发展路径,当译码路径的总数超过预设的路径宽度L(一般L=2 l)时,选择出路径度量(Path Metric,PM)值最佳的L条路径保存并继续发展路径以译出后续的译码比特,其中的PM值用于判断路径的好坏,PM值通过LLR计算得出。对于每一级的译码比特,对L条路径的PM值按照从小到大排序,并通过PM值筛选出正确的路径,如此反复,直到译完最后一个比特。
在实际应用中,译码比特的数目是非常大的,使用SCL译码方法,对于每一个译码比特,都要计算每一个译码比特下所有路径的PM值,并对所有路径根据PM值进行一次排序, 其计算复杂度和由于排序带来的译码时延都很高。
发明内容
本申请实施例提供一种译码方法及装置,用以提高译码比特判决的并行度,降低译码时延。
本申请实施例提供的具体技术方案如下:
第一方面,提供一种译码方法,该方法的执行主体为译码设备,译码设备通过以下步骤实现该方法:对输入的LLR向量中的每一个LLR进行硬判决,得到原始向量,所述LLR向量的长度为M,M≤N,N为待译码信息的长度,N、M为2的正整数次幂;基于所述原始向量,确定Y个待诊断向量,其中,所述待诊断向量为所述原始向量的X个元素中的至少0个取反得到,所述X个元素在所述原始向量中的位置与所述LLR向量中按照绝对值由小到大排序的前X个LLR的位置一致,Y≤2 X;基于所述Y个待诊断向量中的每一个待诊断向量,均确定至少一个候选向量,其中,基于任一待诊断向量确定至少一个候选向量的的方式为:根据生成矩阵,确定所述待诊断向量的中间译码向量,并根据冻结比特的位置在所述中间译码向量选择出症状向量,根据所述症状向量在症状诊断表中选择至少一个诊断向量,将每一个所述诊断向量与待诊断向量进行异或运算,得到至少一个候选向量,所述症状诊断表中包括症状向量与诊断向量的对应关系;在由所述Y个待诊断向量获得的至少Y个候选向量中,选择L个候选向量,根据所述L个候选向量确定所述LLR向量的译码结果。通过上述步骤,可以将路径分裂、PM值累计、纠错、比特判决等过程由最后一个层级移至中间层级上,若中间层级LLR的个数可以为任意值,对于包含任意数量信息比特的待译码信息或者待译码子码块并行判决,有助于降低计算复杂度。尤其对于M大于4时,采用上述译码方法,相对于现有ML译码方法的穷举展开方式,能够很大程度上降低计算复杂度。
在一个可能的设计中,若所述LLR向量对应的第一比特序列与设定的第二比特序列不相同,则对所述输入的LLR向量进行交织处理,对交织处理后的LLR向量中的每一个LLR进行硬判决,得到原始向量;其中,所述第一比特序列进行相同的所述交织处理得到所述第二比特序列,所述冻结比特的位置由所述第二比特序列确定;对所述L个候选向量中的每一个候选向量进行解交织处理,根据解交织处理后的L个候选向量确定所述LLR向量的译码结果。这样,输入的LLR才能通过信息比特位置相对应的症状诊断表获得译码结果。
在一个可能的设计中,若由所述Y个待诊断向量获得的至少Y个候选向量中存在重复的候选向量,则对所述至少Y个候选向量进行去重处理,在去重处理后的候选向量中选择L个候选向量,其中,所述去重处理后的候选向量中任意两个候选向量不同。这样,才能选出L个候选向量,避免选择的候选向量因重复而导致数量小于L。
可选的,将原始向量的X个元素取反,得到去重复向量,其中,X个元素在原始向量中的位置与LLR向量中按照绝对值由小到大排序的前X个LLR的位置一致,将诊断向量和去重复向量做“与”操作,若获得的结果向量中存在含有1的元素,则将对应的诊断向量标记为不可用,或者将对应诊断向量获得的候选向量的PM值设为无穷大,这样在按照PM值筛选较优路径时便会过滤掉这些向量。
在一个可能的设计中,所述症状诊断表中第2i行的诊断向量为预先存储的,所述症状诊断表中第2i+1行的诊断向量为在线计算所得,其中,所述在线计算的方式为将存储的所 述第2i行的诊断向量中的最后一个元素取反,i为非负整数。这样,能够节省存储空间。
可选的,症状诊断表仅存储第一行的全部信息、第一列的全部信息、以及每行中每个i和第零行的j的对应关系。这样能够进一步节省存储空间。
在一个可能的设计中,若编码侧采用缩短(shorten)的编码方式,则待译码信息或者待译码的子码块的译码结果可能会存在缩短比特。对于这种情况,将得到的L个候选向量与缩短比特的位置进行比较,将不匹配的候选向量删除,或者将不匹配的候选向量的PM值标记为无穷大,其中,不匹配是指候选向量中缩短比特位置的元素不为0。
第二方面,提供一种译码方法,该方法的执行主体为译码设备,译码设备通过以下步骤实现该方法:对输入的LLR向量中的每一个LLR进行硬判决,得到第一向量,所述LLR向量的长度为M,K=M≤N,N为待译码信息的长度,N、M为2的正整数次幂,K为信息比特的长度;按序执行以下至少前(L-1)个操作:将所述第一向量中第一元素进行取反,得到第二向量;将所述第一向量中第二元素进行取反,得到第三向量;将所述第一向量中第三元素进行取反,得到第四向量;将所述第一向量中第四元素进行取反,得到第五向量;将所述第一向量中第五元素进行取反,得到第六向量;将所述第一向量中第六元素进行取反,得到第七向量;将所述第一向量中第七元素进行取反,得到第八向量;将所述第一向量中第一元素和第二元素进行取反,得到第九向量;将所述第一向量中第一元素和第三元素进行取反,得到第十向量;将所述第一向量中第一元素和第四元素进行取反,得到第十一向量;将所述第一向量中第二元素和第三元素进行取反,得到第十二向量;将所述第一向量中第一元素、第二元素和第三元素进行取反,得到第十三向量;其中,所述第一元素~所述第X元素在所述第一向量中的位置与所述LLR向量中按照绝对值由小到大排序的前X个LLR的位置相对应;在得到的向量中从所述第一向量开始依次选择前L个向量,根据所述L个向量确定所述LLR向量的译码结果。通过上述步骤,可以将路径分裂、PM值累计、纠错、比特判决等过程由最后一个层级移至中间层级上,若中间层级LLR的个数可以为任意值,对于包含任意数量信息比特的待译码信息或者待译码子码块并行判决,有助于降低计算复杂度。尤其对于M大于4时,采用上述译码方法,相对于现有ML译码方法的穷举展开方式,能够很大程度上降低计算复杂度。
在一个可能的设计中,若X=7,则LLR向量中按照绝对值由小到大排序的前7个LLR假设用[LLR0、LLR1、LLR2、……、LLR6]来表示,则第一向量中第一元素~第七元素在第一向量中的位置与[LLR0、LLR1、LLR2、……、LLR6]在LLR向量中的位置一一对应。即,第一元素在第一向量的位置与LLR0在LLR向量中的位置一致,第二元素在第一向量的位置与LLR1在LLR向量中的位置一致,类似的判断其它元素的位置。
第三方面,提供一种译码方法,该方法的执行主体为译码设备,译码设备通过以下步骤实现该方法:对输入的对数似然比LLR向量中的每一个LLR进行硬判决,得到第一向量,所述LLR向量的长度为M,(K+1)=M≤N,N为待译码信息的长度,N、M为2的正整数次幂,K为信息比特的长度;对第一向量进行奇偶校验,若校验通过,则:按序执行以下至少前(L-1)个操作:将所述第一向量中第一元素和第二元素进行取反,得到第二向量;将所述第一向量中第一元素和第三元素进行取反,得到第三向量;将所述第一向量中第一元素和第四元素进行取反,得到第四向量;将所述第一向量中第一元素和第五元素进行取反,得到第五向量;
将所述第一向量中第一元素和第六元素进行取反,得到第六向量;将所述第一向量中 第一元素和第七元素进行取反,得到第七向量;将所述第一向量中第一元素和第八元素进行取反,得到第八向量;将所述第一向量中第二元素和第三元素进行取反,得到第九向量;将所述第一向量中第二元素和第四元素进行取反,得到第十向量;将所述第一向量中第二元素和第五元素进行取反,得到第十一向量;将所述第一向量中第三元素和第四元素进行取反,得到第十二向量;将所述第一向量中第一元素~第四元素进行取反,得到第十三向量;其中,所述第一元素~所述第X元素在所述第一向量中的位置与所述LLR向量中按照绝对值由小到大排序的前X个LLR的位置相对应;在得到的向量中从所述第一向量开始依次选择前L个向量,根据所述L个向量确定所述LLR向量的译码结果。通过上述步骤,可以将路径分裂、PM值累计、纠错、比特判决等过程由最后一个层级移至中间层级上,若中间层级LLR的个数可以为任意值,对于包含任意数量信息比特的待译码信息或者待译码子码块并行判决,有助于降低计算复杂度。尤其对于M大于4时,采用上述译码方法,相对于现有ML译码方法的穷举展开方式,能够很大程度上降低计算复杂度。
在一个可能的设计中,若校验不通过,则:按序执行以下至少前L个操作:将所述第一向量中第一元素进行取反,得到第二向量;将所述第一向量中第二元素进行取反,得到第三向量;将所述第一向量中第三元素进行取反,得到第四向量;将所述第一向量中第四元素进行取反,得到第五向量;将所述第一向量中第五元素进行取反,得到第六向量;将所述第一向量中第六元素进行取反,得到第七向量;将所述第一向量中第七元素进行取反,得到第八向量;将所述第一向量中第八元素进行取反,得到第九向量;将所述第一向量中第一元素、第二元素和第三元素进行取反,得到第十向量;将所述第一向量中第一元素、第二元素和第四元素进行取反,得到第十一向量;将所述第一向量中第一元素、第三元素和第四元素进行取反,得到第十二向量;将所述第一向量中第二元素、第三元素和第四元素进行取反,得到第十三向量;将所述第一向量中第一元素、第二元素和第五元素进行取反,得到第十四向量;其中,所述第一元素~所述第X元素在所述第一向量中的位置与所述LLR向量中按照绝对值由小到大排序的前X个LLR的位置相对应;在得到的向量中从所述第二向量开始依次选择前L个向量,根据所述L个向量确定所述LLR向量的译码结果。
第四方面,提供一种译码方法,该方法的执行主体为译码设备,译码设备通过以下步骤实现该方法:接收待译码信息,所述待译码信息的长度为N,所述待译码信息包括Q个子码块,一个子码块的长度为M,M≤N,M为2的正整数次幂;针对所述Q个子码块中的任一子码块,均确定L个第一候选向量;在由所述Q个子码块确定的Q*L个第一候选向量中的合法候选向量中,选择PM值最优的L个第二候选向量作为所述待译码信息的译码结果,其中,所述合法候选向量与生成矩阵确定的候选结果中辅助比特的位置符合编码侧的设置。这样,能够通过比较对译码结果进行检错,从而避免CRC虚警的问题。
在一个可能的设计中,所述根据任一子码块确定L个第一候选向量的方法,按照如第一方面或第一方面的任一种可能的设计中所述的方法中L个候选向量的确定方法执行,或者,按照第二方面或第二方面的任一种可能的设计中确定L个向量的方法执行,或者,按照第三方面或第三方面的任一种可能的设计中确定L个向量的方法执行。
第五方面,提供一种译码装置,该装置具有实现上述第一方面和第一方面的任一种可能的设计中所述的方法的功能。所述功能可以通过硬件实现,也可以通过硬件执行相应的软件实现。所述硬件或软件包括一个或多个与上述功能相对应的模块。
在一个可能的设计中,当所述功能的部分或全部通过硬件实现时,所述译码装置包括: 输入接口电路,用于获取待译码信息;逻辑电路,用于执行上述第一方面和第一方面的任一种可能的设计中所述的行为;输出接口电路,用于输出译码结果。
可选的,所述译码装置可以是芯片或者集成电路。
在一个可能的设计中,当所述功能的部分或全部通过软件实现时,所述译码装置包括:存储器,用于存储程序;处理器,用于执行所述存储器存储的所述程序,当所述程序被执行时,所述交织装置可以实现如上述第一方面和第一方面的任一种可能的设计中所述的方法。
可选的,上述存储器可以是物理上独立的单元,也可以与处理器集成在一起。
在一个可能的设计中,当所述功能的部分或全部通过软件实现时,所述译码装置包括处理器。用于存储程序的存储器位于所述译码装置之外,处理器通过电路/电线与存储器连接,用于读取并执行所述存储器中存储的程序。
第六方面,提供一种译码装置,该装置具有实现上述第二方面和第二方面的任一种可能的设计中所述的方法的功能。所述功能可以通过硬件实现,也可以通过硬件执行相应的软件实现。所述硬件或软件包括一个或多个与上述功能相对应的模块。
在一个可能的设计中,当所述功能的部分或全部通过硬件实现时,所述译码装置包括:输入接口电路,用于获取待译码信息;逻辑电路,用于执行上述第二方面和第二方面的任一种可能的设计中所述的行为;输出接口电路,用于输出译码结果。
可选的,所述译码装置可以是芯片或者集成电路。
在一个可能的设计中,当所述功能的部分或全部通过软件实现时,所述译码装置包括:存储器,用于存储程序;处理器,用于执行所述存储器存储的所述程序,当所述程序被执行时,所述交织装置可以实现如上述第二方面和第二方面的任一种可能的设计中所述的方法。
可选的,上述存储器可以是物理上独立的单元,也可以与处理器集成在一起。
在一个可能的设计中,当所述功能的部分或全部通过软件实现时,所述译码装置包括处理器。用于存储程序的存储器位于所述译码装置之外,处理器通过电路/电线与存储器连接,用于读取并执行所述存储器中存储的程序。
第七方面,提供一种译码装置,该装置具有实现上述第三方面和第三方面的任一种可能的设计中所述的方法的功能。所述功能可以通过硬件实现,也可以通过硬件执行相应的软件实现。所述硬件或软件包括一个或多个与上述功能相对应的模块。
在一个可能的设计中,当所述功能的部分或全部通过硬件实现时,所述译码装置包括:输入接口电路,用于获取待译码信息;逻辑电路,用于执行上述第三方面和第三方面的任一种可能的设计中所述的行为;输出接口电路,用于输出译码结果。
可选的,所述译码装置可以是芯片或者集成电路。
在一个可能的设计中,当所述功能的部分或全部通过软件实现时,所述译码装置包括:存储器,用于存储程序;处理器,用于执行所述存储器存储的所述程序,当所述程序被执行时,所述交织装置可以实现如上述第三方面和第三方面的任一种可能的设计中所述的方法。
可选的,上述存储器可以是物理上独立的单元,也可以与处理器集成在一起。
在一个可能的设计中,当所述功能的部分或全部通过软件实现时,所述译码装置包括处理器。用于存储程序的存储器位于所述译码装置之外,处理器通过电路/电线与存储器连 接,用于读取并执行所述存储器中存储的程序。
第八方面,提供一种译码装置,该装置具有实现上述第四方面和第四方面的任一种可能的设计中所述的方法的功能。所述功能可以通过硬件实现,也可以通过硬件执行相应的软件实现。所述硬件或软件包括一个或多个与上述功能相对应的模块。
在一个可能的设计中,当所述功能的部分或全部通过硬件实现时,所述译码装置包括:输入接口电路,用于获取待译码信息;逻辑电路,用于执行上述第四方面和第四方面的任一种可能的设计中所述的行为;输出接口电路,用于输出译码结果。
可选的,所述译码装置可以是芯片或者集成电路。
在一个可能的设计中,当所述功能的部分或全部通过软件实现时,所述译码装置包括:存储器,用于存储程序;处理器,用于执行所述存储器存储的所述程序,当所述程序被执行时,所述交织装置可以实现如上述第四方面和第四方面的任一种可能的设计中所述的方法。
可选的,上述存储器可以是物理上独立的单元,也可以与处理器集成在一起。
在一个可能的设计中,当所述功能的部分或全部通过软件实现时,所述译码装置包括处理器。用于存储程序的存储器位于所述译码装置之外,处理器通过电路/电线与存储器连接,用于读取并执行所述存储器中存储的程序。
第九方面,提供了一种通信系统,该通信系统包括网络设备和终端,所述网络设备、所述终端均可以执行如上述各方面或可能的设计所述的方法。
第十方面,提供了一种计算机存储介质,存储有计算机程序,该计算机程序包括用于执行上述各方面或可能的设计所述的方法的指令。
第十一方面,提供了一种包含指令的计算机程序产品,当其在计算机上运行时,使得计算机执行上述各方面所述的方法。
附图说明
图1为现有技术中SCL译码方法示意图;
图2为现有技术中SC译码方法示意图;
图3为本申请实施例中通信系统架构示意图;
图4为本申请实施例中部分译码过程示意图;
图5为本申请实施例中译码方法示意图之一;
图6为本申请实施例中交织处理示意图;
图7为本申请实施例中译码方法示意图之二;
图8为本申请实施例中译码方法示意图之三;
图9为本申请实施例中一种应用场景下译码过程示意图;
图10为本申请实施例中另一种应用场景下译码过程示意图;
图11为本申请实施例中译码方法示意图之四;
图12为本申请实施例中译码装置结构示意图之一;
图13为本申请实施例中译码装置结构示意图之二;
图14为本申请实施例中译码装置结构示意图之三;
图15为本申请实施例中译码装置结构示意图之四;
图16为本申请实施例中译码装置结构示意图之五;
图17为本申请实施例中译码装置结构示意图之六;
图18为本申请实施例中译码装置结构示意图之七;
图19为本申请实施例中译码装置结构示意图之八;
图20为本申请实施例中译码装置结构示意图之九;
图21为本申请实施例中译码装置结构示意图之十;
图22为本申请实施例中译码装置结构示意图之十一;
图23为本申请实施例中译码装置结构示意图之十二。
具体实施方式
本申请提供一种译码方法及装置,用以提高译码过程中并行译码比特的长度,降低译码深度,减少译码的运算复杂度,在保证译码性能的基础上降低译码时延。其中,方法和装置是基于同一发明构思的,由于方法及装置解决问题的原理相似,因此装置与方法的实施可以相互参见,重复之处不再赘述。
以下,对本申请中的部分用语和Polar码的基础知识进行解释说明,以便于本领域技术人员理解。
1)Polar码
Polar码是第一种在理论上能够被证明“达到”信道容量的信道编码方法。Polar码是一种线性块码，其生成矩阵为G_N，其编码过程为 x_1^N = u_1^N · G_N，其中 u_1^N = (u_1, u_2, …, u_N) 是一个二进制的行矢量，长度为N（即码长）；且 G_N = B_N · F_2^{⊗(log2 N)}，这里 F_2 = [[1, 0], [1, 1]]，B_N是一个N×N的转置矩阵，例如比特逆序转置矩阵；其中，B_N是可选量，生成矩阵G_N的运算过程可以省略B_N的运算。F_2^{⊗(log2 N)} 定义为 log2 N 个矩阵F_2的克罗内克（Kronecker）乘积，x_1^N是编码后的比特（也叫码字），u_1^N与生成矩阵G_N相乘后就得到编码后的比特，相乘的过程就是编码的过程。在Polar码的编码过程中，u_1^N中的一部分比特用来携带信息，称为信息比特，信息比特的索引的集合记作A；u_1^N中另外的一部分比特置为收发端预先约定的固定值，称之为固定比特，其索引的集合用A的补集A^c表示。固定比特通常被设为0，只需要收发端预先约定，固定比特序列可以被任意设置。
2)现有的SC译码方法
接收到信号后,逐个计算信息比特的LLR,若信息比特的LLR>0,则译码结果为0,若信息比特的LLR<0,则译码结果为1,固定比特无论LLR为多少译码结果都置为0。图2为SC译码计算过程示意图,以译码比特为4个为例,图2中共有8个计算节点,其中有4个F节点,4个G节点,F节点和G节点分别对应F函数和G函数。F节点的计算需要其右侧2项LLR输入,G节点的计算需要其右侧2项LLR输入以及上一级的输出也作为输入,只有输入项计算完成后,才能计算输出。按照上述计算规则,图2中从右侧接收信号开始,按序计算8个节点,获得的译码比特依次为①→②→③→④,至此译码完成。
3)现有的SCL译码方法
具体如图1所示的方法中的描述,在此不再赘述。
4)待译码信息
本申请中,待译码信息又称为待译码码字、待译码码块、码字、码块。可以将待译码 信息分为多个子码块并行做译码处理。待译码信息的长度用N表示,并行译码的子码块长度用M表示。长度为M的待译码的子码块包含信息比特的个数用K表示。
5)在本申请的描述中,字符“/”一般表示前后关联对象是一种“或”的关系。“第一”、“第二”等词汇,仅用于区分描述的目的,而不能理解为指示或暗示相对重要性,也不能理解为指示或暗示顺序。
下面将结合附图,对本申请实施例进行详细描述。
以下介绍一下本申请实施例适用的通信系统架构。
图3示出了本申请实施例提供的译码方法适用的一种可能的通信系统的架构,参阅图3所示,通信系统300中包括:网络设备301和一个或多个终端302。当通信系统300包括核心网时,网络设备301还可以与核心网相连。网络设备301可以与IP网络303进行通信,例如,IP网络303可以是:因特网(internet),私有的IP网,或其它数据网等。网络设备301为覆盖范围内的终端302提供服务。例如,参见图3所示,网络设备301为网络设备301覆盖范围内的一个或多个终端302提供无线接入。除此之外,网络设备之间的覆盖范围可以存在重叠的区域,例如网络设备301和网络设备301’。网络设备之间还可以可以互相通信,例如,网络设备301可以与网络设备301’之间进行通信。
网络设备301是本申请应用的通信系统中将终端302接入到无线网络的设备。网络设备301为无线接入网(radio access network,RAN)中的节点,又可以称为基站,还可以称为RAN节点(或设备)。目前,一些网络设备301的举例为:gNB/NR-NB、传输接收点(transmission reception point,TRP)、演进型节点B(evolved Node B,eNB)、无线网络控制器(radio network controller,RNC)、节点B(Node B,NB)、基站控制器(base station controller,BSC)、基站收发台(base transceiver station,BTS)、家庭基站(例如,home evolved NodeB,或home Node B,HNB)、基带单元(base band unit,BBU),或无线保真(wireless fidelity,Wifi)接入点(access point,AP),或5G通信系统或者未来可能的通信系统中的网络侧设备等。
终端302,又称之为用户设备(user equipment,UE)、移动台(mobile station,MS)、移动终端(mobile terminal,MT)等,是一种向用户提供语音和/或数据连通性的设备。例如,终端302包括具有无线连接功能的手持式设备、车载设备等。目前,终端302可以是:手机(mobile phone)、平板电脑、笔记本电脑、掌上电脑、移动互联网设备(mobile internet device,MID)、可穿戴设备(例如智能手表、智能手环、计步器等),车载设备(例如,汽车、自行车、电动车、飞机、船舶、火车、高铁等)、虚拟现实(virtual reality,VR)设备、增强现实(augmented reality,AR)设备、工业控制(industrial control)中的无线终端、智能家居设备(例如,冰箱、电视、空调、电表等)、智能机器人、车间设备、无人驾驶(self driving)中的无线终端、远程手术(remote medical surgery)中的无线终端、智能电网(smart grid)中的无线终端、运输安全(transportation safety)中的无线终端、智慧城市(smart city)中的无线终端,或智慧家庭(smart home)中的无线终端、飞行设备(例如,智能机器人、热气球、无人机、飞机)等。
本申请实施例提供的译码方法可以由网络设备301来执行,也可以由终端302来执行。本申请实施例提供的译码方法可以适用于各种无线通信场景,可以但不限于包括适用于增强型移动互联网(enhance mobile broadband,eMBB)、海量机器连接通信(massive machine type communication,mMTC)和高可靠低延迟通信(ultra reliable low latency communication, URLLC)的场景。
以下简单介绍一下本申请实施例的基本思想。
假设待译码信息的长度为N,N也可认为是Polar码母码长度,对待译码信息进行译码获得译码结果(即译码比特)。收发端预先约定了固定比特的位置,固定比特通常设为0,通过译码过程实际上需要获得信息比特的内容。实际应用中,N的数目可能会很大。若采用现有SCL译码方法,则:待译码信息对应的长度为N的LLR向量经过多个层级的F/G运算,到达最后一个层级,对最后一个层级的LLR进行比特判决,得到一个译码比特,采用逐比特分裂路径,在路径数量大于L时,根据PM值选择最优的L条路径,并继续分裂路径,计算复杂度非常高。本申请实施例实现M个比特并行判决,M≤N。M=N时,接收信号对应的层级的LLR向量长度为M;M<N时,待译码信息对应的长度为N的LLR向量经过一个或多个层级的F/G运算,到达一个层级,该层级的LLR向量长度为M。在LLR向量长度为M的层级上分裂路径,当该层级对应的码块或子码块中信息比特数量较大时,分裂路径的数量会呈指数增长,通过采取本申请提供的方法有助于减少分裂路径的数量,减少计算复杂度。
结合图4来介绍一下上段描述中层级的概念。如图4所示,右侧为LLR输入侧,或者称为码字侧;左侧为信息侧,或者称为译码比特侧。yi为待译码信息,ui为译码比特。从译码开始,层级依次为s=4、s=3、s=2、s=1和s=0。假设待译码信息的长度N=16,若采用现有的SCL译码方法,则在s=4的层级上,待译码信息对应的16个LLR进行F/G运算,得到s=3的层级上的8个LLR。则s=3的层级上的8个LLR继续进行F/G运算,得到s=2的层级上的4个LLR,s=2的层级上的4个LLR继续进行F/G运算,得到s=1的层级上的2个LLR,s=1的层级上的2个LLR继续进行F/G运算,得到s=0的层级上的1个LLR,在s=0的层级上逐比特分裂路径。
本申请实施例中,直接在M个LLR的层级上进行分裂路径,达到M个译码比特并行判决。如图4中,若M=16,则通过本申请提供的方法直接在s=4的层级上分裂路径,达到16个译码比特并行判决;若M=8,则通过本申请提供的方法直接在s=3的层级上分裂路径,达到8个译码比特并行判决。当然,N、M可以取其他的数值,例如N=32、64、128、256、512、1024。M可以是N个LLR经过F/G运算达到的任意一层。本申请实施例中N、M均为2的正整数次幂。
以下具体介绍一下本申请实施例提供的译码方法。
如图5所示,本申请实施例提供的译码方法具体如下所述,该译码方法的执行主体为译码设备,译码设备可以是图3所示的网络设备301,也可以是图3所示的终端302。
以下描述中,任意至少两个连续的步骤均可以单独形成本申请实施例需要保护的方案,例如,步骤503~步骤507形成一组方案,步骤501和步骤502为可选步骤。
步骤501、判断待译码信息的长度N与M的大小关系,若待译码信息的长度N>M,则执行步骤502;若待译码信息的长度N=M,则执行步骤503。
步骤502、对待译码信息对应的N个LLR逐层级进行至少一个层级的F/G运算,直到经过F/G运算后层级上的LLR向量长度等于M,执行步骤503。
步骤503、对输入的LLR向量中的每一个LLR进行硬判决,得到原始向量。
原始向量的长度为M。
在此描述一下硬判决的方法。对待译码信息或者子码块对应的M个LLR逐一进行硬判决，硬判决采用的硬判决函数可以是：h(x) = 0（x ≥ 0），h(x) = 1（x < 0），其中，x为LLR的值。
步骤504、基于原始向量,确定Y个待诊断向量。
待诊断向量的长度为M。
具体的,待诊断向量为原始向量的X个元素中的至少0个取反得到,X个元素在原始向量中的位置与LLR向量中按照绝对值由小到大排序的前X个LLR的位置一致,Y≤2 X。取反即元素0变为1,元素1变为0。
X的取值可以任意调整。X的取值越大,译码精度越高,计算复杂度越高;X的取值越小,译码精度越低,计算复杂度越低。相对应的,Y的取值也可以调整。一般情况下,Y的取值为2 X。Y的取值也可以小于2 X。Y的取值越大,译码精度越高,计算复杂度越高;Y的取值越小,译码精度越低,计算复杂度越低。X、Y的取值可以根据译码精度和计算复杂度两者之间的平衡来确定。
步骤505、基于Y个待诊断向量中的每一个待诊断向量,均确定至少一个候选向量。
具体的,基于任一待诊断向量确定至少一个候选向量的的方式为:根据生成矩阵,确定待诊断向量的中间译码向量,并根据冻结比特的位置在中间译码向量选择出症状向量,根据症状向量在症状诊断表中选择至少一个诊断向量,将每一个诊断向量与待诊断向量进行异或运算,得到至少一个候选向量。这样根据Y个待诊断向量可以获得至少Y个候选向量。其中,症状诊断表中包括症状向量与诊断向量的对应关系。生成矩阵为G N,待诊断向量模2乘以G N后得到中间译码向量,根据长度为M的LLR向量对应的子码块中冻结比特的位置,在中间译码向量中选择位于该位置上的一个或多个元素,组成症状向量。或者,待诊断向量乘以字块校验矩阵H,得到症状向量。
步骤506、在由Y个待诊断向量获得的至少Y个候选向量中,选择L个候选向量。
计算所述至少Y个候选向量对应的PM值,选择最优的L个候选向量。
需要说明的是,本申请实施例中,若SCL的路径宽度为L,在长度为LLR向量的层级上选择的候选向量数量虽然也用L表示,但是候选向量的数量可以与路径宽度相同,也可以不同。
步骤507、根据L个候选向量确定LLR向量的译码结果。
具体的,若N=M,则将L个候选向量中的每一个候选向量与生成矩阵进行运算,获得L个候选结果,在L个候选结果进行判决,得到待译码信息的译码结果。
若N>M,将L个候选向量中的每一个候选向量与生成矩阵进行运算,获得L个候选结果,在L个候选结果进行判决,得到待译码信息的部分译码结果,或者说得到子码块的译码结果,待所有子码块译码结束后,输出待译码信息的译码结果。
以下通过举例对上述步骤503~步骤505做进一步说明。
假设M=8，输入的LLR向量{LLR0-LLR7}={1,-3,-2,2,-1,3,-4,1}，通过硬判决得到原始向量={0,1,1,0,1,0,1,0}。LLR向量中LLR的绝对值由小到大排序的前X个LLR所在位置记作A={a_0,a_1,…,a_{X-1}}，假设X=3，绝对值由小到大排序的前3个LLR在LLR向量中的位置为第0个位置、第4个位置、第7个位置，即{a_0,a_1,a_2}={0,4,7}。从A中选取任意数量元素组成翻转集合E_i（E_i⊆A），翻转集合E_i的数量为2^X个。X=3时，共有8个翻转集合，具体为：E_0=∅（空集），
E 1={a 0},E 2={a 1},E 3={a 0,a 1},E 4={a 2},E 5={a 0,a 2},E 6={a 1,a 2},E 7={a 0,a 1,a 2}。通过翻转集合E i,将原始向量的X个元素的至少0个元素进行取反,得到待诊断向量。其中,原始向量的X个元素为第0、4、7位置上的元素。例如,如果E i为空集,则将原始向量中的0个元素进行取反,即得到的待诊断向量等于原始向量{0,1,1,0,1,0,1,0};若E 1={a 0},则将原始向量的第0个位置上的元素进行取反,得到待诊断向量为{1,1,1,0,1,0,1,0};若E 2={a 1},则将原始向量的第4个位置上的元素进行取反,得到待诊断向量为{0,1,1,0,0,0,1,0};若E 3={a 0,a 1},则将原始向量的第0、4个位置上的元素进行取反,得到待诊断向量为{1,1,1,0,0,0,1,0};若E 4={a 2},则将原始向量的第7个位置上的元素进行取反,得到待诊断向量为{0,1,1,0,1,0,1,1};若E 5={a 0,a 2},则将原始向量的第0、4个位置上的元素进行取反,得到待诊断向量为{1,1,1,0,0,0,1,0};若E 6={a 1,a 2},则将原始向量的第4、7个位置上的元素进行取反,得到待诊断向量为{0,1,1,0,0,0,1,1};若E 7={a 0,a 1,a 2},则将原始向量的第0、4、7个位置上的元素进行取反,得到待诊断向量为{1,1,1,0,0,0,1,1}。综上,输入的LLR向量{LLR0-LLR7}={1,-3,-2,2,-1,3,-4,1},X=3时,由原始向量获得的8个待诊断向量分别为:{0,1,1,0,1,0,1,0};{1,1,1,0,1,0,1,0};{0,1,1,0,0,0,1,0};{1,1,1,0,0,0,1,0};{0,1,1,0,1,0,1,1};{1,1,1,0,0,0,1,0};{0,1,1,0,0,0,1,1};{1,1,1,0,0,0,1,1}。其中,X=3时,步骤504获得的Y个待诊断向量可以小于等于8个,即选择8个待诊断向量的其中一部分进入步骤505。步骤505中,任一待诊断向量确定中间译码向量,例如待诊断向量为{1,1,1,0,0,0,1,0},模2乘以矩阵G N后得到中间译码向量{0,1,0,0,1,0,1,0},若该M个LLR对应的待译码码块或者子码块中信息比特和冻结比特的位置设置为{0,0,0,1,0,1,1,1},则在中间译码向量{0,1,0,0,1,0,1,0}中选择冻结比特的位置的元素,即选择第0、1、2、4个位置上的元素,得到症状向量{0,1,0,1}。类似的,每一个待诊断向量都可以按照上述方式,得到症状向量。
以下介绍一下本申请实施例上述描述中的症状诊断表。
症状诊断表也可以称为校验表,症状诊断表中存储有症状向量和诊断向量的对应关系,一个症状向量可以对应一个或多个诊断向量,症状向量的长度为待译码的子码块中冻结比特的个数,诊断向量的长度为M。现有的症状诊断表的大小与待译码的子码块中冻结比特的个数或者信息比特的个数有关,假设信息比特的个数为K,子码块的大小为M,则症状诊断表的大小=2 (M-K)。译码器或者译码装置(设备)中会根据不同的K存储不同的症状诊断表,在步骤505中,根据待译码的子码块中冻结比特的个数,选择对应的症状诊断表。通常情况下,一张症状诊断表中包含一行或者多行,按照症状向量的十进制大小按顺序排列进行存储,例如按照症状向量的十进制大小从小到大排列。每获取一个症状向量,在选择的症状诊断表中选择对应行,在对应行中确定该症状向量对应的诊断向量。具体的,可以通过症状向量选择Z个诊断向量,Z≥1。Z的取值可以调整,Z的取值越大,译码精度越高,计算复杂度越高;Z的取值越小,译码精度越低,计算复杂度越低。Z的取值可以根据译码精度和计算复杂度两者之间的平衡来确定。
本申请实施例中,为了节省症状诊断表占用的存储空间,将症状向量和诊断向量的对应关系的一部分存储在症状诊断表中,另一部分通过存储的一部分进行在线计算。假设症状诊断表中的行号从0开始,可选的,症状诊断表中第2i行的诊断向量为预先存储的,症 状诊断表中第2i+1行的诊断向量为在线计算所得,其中,在线计算的方式为将存储的第2i行的诊断向量中的最后一个元素取反,其中,i为非负整数。即i=0、1、2、……。也就是,本申请实施例设计的症状诊断表中仅存储传统症状诊断表中偶数行的诊断向量,传统症状诊断表中奇数行的诊断向量通过在线计算得到,具体通过偶数行的诊断向量的最后一个元素取反得到。例如,M=16,K=7,症状诊断表的大小为512。传统症状诊断表称为原表,本申请实施例提供的症状诊断表称为新表。则原表中奇数行和偶数行只相差最后一个比特,关系表示为:Table[2i+1]=Table[2i]^0x0001,Table[2i+1]用于表示奇数行,Table[2i]表示偶数行。新表Table_new[i]通过原表进行表示为:Value[2i]=Table_new[i];Value[2i+1]=Table_new[i]^0x0001。新表中第x行的第i个翻转信息可以表示为:Value[x][i]=Value[x][0]^Value[0][j]。新表的大小为原表大小的1/2,也就是相对于原表节省了一半的存储空间。进一步的,症状诊断表的行用i表示,列用j来表示,可以进一步缩小为三组值,仅存储第一行的全部信息、第一列的全部信息、以及每行中每个i和第零行的j的对应关系。这样能够进一步节省存储空间。当然,也可以只存奇数行,而偶数行的的诊断向量通过在线计算得到,具体通过奇数行的诊断向量的最后一个元素取反得到,原理是一致的,不再赘述。
本申请实施例中,对于大小为M的待译码码块或者子码块,根据包含信息比特的长度K来确定对应的症状诊断表。也就是,不同的K值对应不同的症状诊断表。首先,根据K的值来选择K值对应的症状诊断表,再根据症状诊断表确定诊断向量,最后获得候选向量。但是根据Polar码的构造,对于给定的码长M和信息比特长度K,可能会出现一个或多个信息比特序列。本申请实施例中,一个K值对应一个症状诊断表,即一种信息比特序列对应一个症状诊断表,若待译码码块或者子码块对应的信息比特序列与症状诊断表对应的信息比特序列不对应,则需要对待译码码块或者子码块对应的信息比特序列先进行同码重交织,使得交织后的信息比特序列与症状诊断表对应的信息比特序列相同,相对应的,需要将LLR向量进行相同的交织处理,对中间译码结果做相同方式的解交织处理,这样,输入的LLR才能够通过上述图5所示的步骤最终获得译码结果。具体的,在步骤503之前,对输入的LLR向量进行交织处理,在步骤507中,先对L个候选向量进行解交织处理,再根据交织处理后的候选向量确定LLR向量的译码结果。
举例来说,假设M=16,K=7,输入的LLR向量表示为:[LLR 0,LLR 1,…,LLR 15]=[l0,l1,l2,l3,l4,l5,l6,l7,l8,l9,l10,l11,l12,l13,l14,l15]。与症状诊断表对应的第二比特序列为:[i 0,i 1,i 2...i 15]=[0,0,0,0,0,0,1,1,0,0,0,1,1,1,1,1],该信息比特序列用于表示信息比特和冻结比特的位置,待译码的码块或子码块对应的第一比特序列输入为:[i 0,i 1,i 2...i 15]=[0,0,0,0,0,0,0,1,0,0,1,1,1,1,1,1]。可见,两个信息比特序列不相同,即信息比特和冻结比特的位置不同。将第一比特序列做如图6所示的交织处理能够得到第二比特序列,即将第一比特序列的i 4~i 7与i 8~i 11做交换,可得到第二比特序列。相对应的,需要将输入LLR向量做如图6所示的交织处理,即将输入的LLR向量中LLR 4~LLR 7与LLR 8~LLR 11做交换。交织处理后的LLR向量为[l0,l1,l2,l3,l8,l9,l10,l11,l4,l5,l6,l7,l12,l13,l14,l15]。在获得长度为M的LLR向量的候选向量后,进一步获得长度为M的待译码信息的中间译码结果,或者获得长度为M的子码块对应的部分中间译码结果后,将该中间译码结果或者该部分中间译码结果按照上述交织处理的方式进行解交织处理。例如,该中间译码结果或者该部分中间译码结果为[b 0,b 1,b 2,b 3,b 4,b 5,b 6,b 7,b 8,b 9,b 10,b 11,b 12,b 13,b 14,b 15],将该序列的第4~7个位置和 第8~11个位置的元素进行互换,得到最终的译码结果或最终的部分译码结果:[b 0,b 1,b 2,b 3,b 8,b 9,b 10,b 11,b 4,b 5,b 6,b 7,b 12,b 13,b 14,b 15]。
需要说明的是,在图5所示的方法中,在Y个候选向量中选择L个候选向量的过程中,需要考虑Y个候选向量中存在重复向量的可能性,所以,本申请先对Y个候选向量进行去重处理,在去重处理后的候选向量中选择L个候选向量,其中,所述去重处理是指,将重复的候选向量只保留一个,在去重处理后的候选向量中任意两个候选向量不同。
以下介绍一下去重处理的方法。
将原始向量的X个元素取反,得到去重复向量,其中,X个元素在原始向量中的位置与LLR向量中按照绝对值由小到大排序的前X个LLR的位置一致,X的定义与上文描述一致。将诊断向量和去重复向量做“与”操作,若获得的结果向量中存在含有1的元素,则将对应的诊断向量标记为不可用,或者将对应诊断向量获得的候选向量的PM值设为无穷大,这样在按照PM值筛选较优路径时便会过滤掉这些向量。
例如,上文中的举例中,X=3,绝对值由小到大排序的前3个LLR在LLR向量中的位置为第0个位置、第4个位置、第7个位置,M=8,对原始向量的第0、4、7个位置上的元素进行取反,得到{1,0,0,0,1,0,0,1},称为去重复向量。将得到诊断向量和去重复向量做“与”操作,若获得的结果向量中存在含有1的元素,则将对应的诊断向量标记为不可用,或者将对应诊断向量获得的候选向量的PM值设为无穷大。例如得到的诊断向量为{0,0,0,0,1,1,0,0},和去重复向量做“与”操作后的结果为{0,0,0,0,1,0,0,0},因此诊断向量{0,0,0,0,1,1,0,0}不可用,将诊断向量{0,0,0,0,1,1,0,0}标记为不可用,或者将诊断向量{0,0,0,0,1,1,0,0}获得的候选向量的PM值设为无穷大。
另外,若编码侧采用缩短(shorten)的编码方式,则待译码信息或者待译码的子码块的译码结果可能会存在缩短比特。对于这种情况,将步骤506中得到的L个候选向量与缩短比特的位置进行比较,将不匹配的候选向量删除,或者将不匹配的候选向量的PM值标记为无穷大,其中,不匹配是指候选向量中缩短比特位置的元素不为0。
综上所述,对于长度为M的待译码信息或者待译码的子码块,该待译码信息或者待译码的子码块对应的信息比特数量为K,图5所示的译码方法适用于0<信息比特的数量K≤M。例如,当M=16时,图5所示的译码方法适用于0<K≤16。通过图5所示的译码方法,对于包含任意数量信息比特的待译码信息或者待译码子码块并行判决,有助于降低计算复杂度。尤其对于M大于4时,采用图5所示的译码方法,相对于现有最大似然估计(maximum like-hood,ML)译码方法的穷举展开方式,能够很大程度上降低计算复杂度。对于路径宽度L=8,即采用SCL-8译码时,图5所示的译码方法相对于传统ML译码方法的穷举展开方式能够缩短40%的译码时长。
基于图5所示的译码方法,本申请实施例中,对于K=M时,可以采用下述图7所示的译码方法。
如图7所示,本申请实施例提供的K=M时的译码方法具体如下所述,该译码方法的执行主体为译码设备,译码设备可以是图3所示的网络设备301,也可以是图3所示的终端302。
步骤701、判断待译码信息的长度N与M的大小关系,若待译码信息的长度N>M,则执行步骤702;若待译码信息的长度N=M,则执行步骤703。
步骤702、对待译码信息对应的N个LLR逐层级进行至少一个层级的F/G运算,直到 经过F/G运算后层级上的LLR向量长度等于M,执行步骤703。
步骤703、对输入的LLR向量中的每一个LLR进行硬判决,得到原始向量。为方便说明,原始向量也可以称为第一向量。
步骤704、按序执行以下至少前(L-1)个操作:
将第一向量中第一元素进行取反,得到第二向量;
将第一向量中第二元素进行取反,得到第三向量;
将第一向量中第三元素进行取反,得到第四向量;
将第一向量中第四元素进行取反,得到第五向量;
将第一向量中第五元素进行取反,得到第六向量;
将第一向量中第六元素进行取反,得到第七向量;
将第一向量中第七元素进行取反,得到第八向量;
将第一向量中第一元素和第二元素进行取反,得到第九向量;
将第一向量中第一元素和第三元素进行取反,得到第十向量;
将第一向量中第一元素和第四元素进行取反,得到第十一向量;
将第一向量中第二元素和第三元素进行取反,得到第十二向量;
将第一向量中第一元素、第二元素和第三元素进行取反,得到第十三向量;
其中,第一元素~第X元素在第一向量中的位置与LLR向量中按照绝对值由小到大排序的前X个LLR的位置相对应;若X=7,则LLR向量中按照绝对值由小到大排序的前7个LLR假设用[LLR0、LLR1、LLR2、……、LLR6]来表示,则第一向量中第一元素~第七元素在第一向量中的位置与[LLR0、LLR1、LLR2、……、LLR6]在LLR向量中的位置一一对应。即,第一元素在第一向量的位置与LLR0在LLR向量中的位置一致,第二元素在第一向量的位置与LLR1在LLR向量中的位置一致,类似的判断其它元素的位置。
步骤705、在得到的向量中从第一向量开始依次选择前L个向量。
步骤706、根据L个向量确定LLR向量的译码结果。
具体的,若L=8,则前L个向量为第一向量、第二向量、……、第八向量。如L=4,则前L个向量为第一向量、第二向量、……、第四向量。
基于图5所示的译码方法,本申请实施例中,对于K=M-1时,可以采用下述图8所示的译码方法。
如图8所示,本申请实施例提供的K=M-1时的译码方法具体如下所述,该译码方法的执行主体为译码设备,译码设备可以是图3所示的网络设备301,也可以是图3所示的终端302。
步骤801、判断待译码信息的长度N与M的大小关系,若待译码信息的长度N>M,则执行步骤802;若待译码信息的长度N=M,则执行步骤803。
步骤802、对待译码信息对应的N个LLR逐层级进行至少一个层级的F/G运算,直到经过F/G运算后层级上的LLR向量长度等于M,执行步骤803。
步骤803、对输入的LLR向量中的每一个LLR进行硬判决,得到原始向量。为方便说明,原始向量也可以称为第一向量。
步骤804、对第一向量进行奇偶校验,若校验通过,则执行步骤805~步骤807,若校验不通过,则执行步骤805’~步骤807’。
步骤805、按序执行以下至少前(L-1)个操作:
将第一向量中第一元素和第二元素进行取反,得到第二向量;
将第一向量中第一元素和第三元素进行取反,得到第三向量;
将第一向量中第一元素和第四元素进行取反,得到第四向量;
将第一向量中第一元素和第五元素进行取反,得到第五向量;
将第一向量中第一元素和第六元素进行取反,得到第六向量;
将第一向量中第一元素和第七元素进行取反,得到第七向量;
将第一向量中第一元素和第八元素进行取反,得到第八向量;
将第一向量中第二元素和第三元素进行取反,得到第九向量;
将第一向量中第二元素和第四元素进行取反,得到第十向量;
将第一向量中第二元素和第五元素进行取反,得到第十一向量;
将第一向量中第三元素和第四元素进行取反,得到第十二向量;
将第一向量中第一元素~第四元素进行取反,得到第十三向量;
其中,第一元素~第X元素在第一向量中的位置与LLR向量中按照绝对值由小到大排序的前X个LLR的位置相对应;若X=8,则LLR向量中按照绝对值由小到大排序的前8个LLR假设用[LLR0、LLR1、LLR2、……、LLR7]来表示,则第一向量中第一元素~第八元素在第一向量中的位置与[LLR0、LLR1、LLR2、……、LLR7]在LLR向量中的位置一一对应。即,第一元素在第一向量的位置与LLR0在LLR向量中的位置一致,第二元素在第一向量的位置与LLR1在LLR向量中的位置一致,类似的判断其它元素的位置。
步骤806、在第一向量和步骤805得到的向量中,从第一向量开始依次选择前L个向量。
步骤807、根据L个向量确定LLR向量的译码结果。
步骤805’、按序执行以下至少前L个操作:
将第一向量中第一元素进行取反,得到第二向量;
将第一向量中第二元素进行取反,得到第三向量;
将第一向量中第三元素进行取反,得到第四向量;
将第一向量中第四元素进行取反,得到第五向量;
将第一向量中第五元素进行取反,得到第六向量;
将第一向量中第六元素进行取反,得到第七向量;
将第一向量中第七元素进行取反,得到第八向量;
将第一向量中第八元素进行取反,得到第九向量;
将第一向量中第一元素、第二元素和第三元素进行取反,得到第十向量;
将第一向量中第一元素、第二元素和第四元素进行取反,得到第十一向量;
将第一向量中第一元素、第三元素和第四元素进行取反,得到第十二向量;
将第一向量中第二元素、第三元素和第四元素进行取反,得到第十三向量;
将第一向量中第一元素、第二元素和第五元素进行取反,得到第十四向量;
其中,第一元素~第X元素在第一向量中的位置与LLR向量中按照绝对值由小到大排序的前X个LLR的位置相对应;若X=8,则LLR向量中按照绝对值由小到大排序的前8个LLR假设用[LLR0、LLR1、LLR2、……、LLR7]来表示,则第一向量中第一元素~第八元素在第一向量中的位置与[LLR0、LLR1、LLR2、……、LLR7]在LLR向量中的位置一一对应。即,第一元素在第一向量的位置与LLR0在LLR向量中的位置一致,第二元素在 第一向量的位置与LLR1在LLR向量中的位置一致,类似的判断其它元素的位置。
步骤806’、在步骤805’得到的向量中从第二向量开始依次选择前L个向量.
步骤807’、根据L个向量确定LLR向量的译码结果。
本申请实施例中,可选的,对于K的值比较小的情况,也可以选择现有ML译码方法的穷举展开方式获得候选向量。
综上所述,根据K值的大小,可以选择图5、图7、图8或者现有ML译码方法的穷举展开方式来进行译码。其中,图7的方法适用于M=K的情况,图8的方法适用于M=K+1的情况,图5的方法适用于0<K<M的情况,现有ML译码方法的穷举展开方式适用于K值不大于阈值的情况,例如阈值可以设置为6。
例如,M=16时,若K≤6,则选择现有ML译码方法的穷举展开方式进行译码;若6<K小于14,则选择图5所示的方法进行译码;若K=14则选择图7所示的方法进行译码;若K=15则选择图8所示的方法进行译码。
以下通过具体的例子来对现有的现有ML译码方法的穷举展开方式进行说明,以及对本申请实施例图5所示的方法进一步说明。
如图9所示,若M=8,K=3,可以采用现有ML译码方法的穷举展开方式进行译码。右侧为LLR输入侧,或者称为码字侧;左侧为信息侧,或者称为译码比特侧。LLR输入向量为[0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8]。在LLR输入的层级上直接分裂路径,由于左侧信息侧信息比特的位置为第5、6、7位,假设信息比特用u 5、u 6、u 7来表示,则信息比特[u 5、u 6、u 7]可能有8种情况,分别为:[0,0,0]、[0,0,1]、[0,1,0]、[0,1,1]、…、[1,1,1]。相对应的,编码后码字[c 0,c 1,c 2,…,c 7]也可能有8种情况,即在LLR输入的层级上分裂出8种可能的候选向量,分别为:[0,0,0,0,0,0,0,0]、[1,1,1,1,1,1,1,1]、[1,0,1,0,1,0,1,0]、[0,1,0,1,0,1,0,1]、…、[1,0,0,1,1,0,0,1]。对这8个候选向量计算PM值,得到PM值为0、3.6、1.6、2.0、…、1.8。再根据PM值的大小在8个候选向量中选择L个候选向量。其中,在长度为8的LLR层级上PM值(用ΔPM表示)的计算公式为:
ΔPM = ∑_{i=0}^{7} |L_i| · |c_i - (1 - sgn(L_i))/2|
其中,c i用于表示候选向量的第i个元素,L i用于表示LLR向量的第i个元素,c i-(1-sgn(L i))/2用于计算LLR向量的第i个元素与候选向量的第i个元素是否相符。
如图10所示,若M=8,K=6,可以采用图5所示的方法进行译码。右侧为LLR输入侧,或者称为码字侧;左侧为信息侧,或者称为译码比特侧。LLR输入向量[L 0,L 1,…,L 7]为[0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8]。在LLR输入的层级上直接分裂路径,但需要根据图5所示的方法进行分裂。具体的,LLR向量进行硬判决后得到的原始向量为[0,0,0,0,0,0,0,0]。假设X=2,LLR向量中绝对值从小到大排序的前2个LLR为L 0和L 1,即第0个位置和第1个位置,原始向量[0,0,0,0,0,0,0,0]中第0个位置和第1个位置的至少0个位置的元素进行取反,最多可得到4个待诊断向量,分别为:[0000 0000],[1000 0000],[0100 0000],[1100 0000]。根据生成矩阵确定待诊断向量的中间译码向量,并根据冻结比特的位置在中间译码向量选择出症状向量,由图10可知,冻结比特的位置为第0个位置和第1个位置,选择出的症状向量分别为[00],[11],[01],[10]。根据症状向量在症状诊断表中选择出诊断向量。表1示出了症状诊断表的部分行,其中表1中的一部分可以预先存储,另一部分可以在线计算。
表1
将每一个诊断向量与待诊断向量进行异或运算，得到16个候选向量。
将16个候选向量中重复出现的候选向量进行删除,重复出现的候选向量如上加粗显示的候选向量。去重处理后获得候选向量为{[0000 0000],[1010 0000],[1000 1000],[1000 0010],[1101 0000],[1100 0100],[1100 0001]}。
或者将重复出现的候选向量在后续PM值计算时将PM值标记为无穷大。上述16个候选向量的PM值={0,∞,∞,∞},{∞,0.4,0.6,0.8},{∞,∞,∞,∞},{∞,0.7,0.9,1.1}。
在去重处理后的候选向量中根据PM值选择L个候选向量,根据L个候选向量与生成矩阵,确定LLR向量的L个译码结果。译码结果中包括冻结比特和信息比特。
基于上述图5、图7和图8所示的译码方法,本申请实施例中,为了减少CRC虚警,可以采用如图11所示的译码方法。
如图11所示,本申请实施例提供的另一种译码方法具体如下所述,该译码方法的执行主体为译码设备,译码设备可以是图3所示的网络设备301,也可以是图3所示的终端302。
步骤1101、接收待译码信息,待译码信息的长度为N,待译码信息包括Q个子码块,一个子码块的长度为M,M≤N,M为2的正整数次幂;
步骤1102、针对Q个子码块中的任一子码块,均确定L个第一候选向量;
步骤1103、在由Q个子码块确定的Q*L个第一候选向量中的合法候选向量中,选择PM值最优的L个第二候选向量作为待译码信息的译码结果,其中,合法候选向量与生成矩阵确定的候选结果中辅助比特的位置符合编码侧的设置。
其中,步骤1102中根据任一子码块确定L个第一候选向量的方法,可以按照如图5所示的方法中L个候选向量的确定方法执行,或者,也可以按照图7或图8所述的方法中确定L个向量的方法执行。重复之处在此不再赘述。
基于图5所示的译码方法,如图12所示,本申请实施例还提供一种译码装置1200,译码装置1200用于执行图5所示的译码方法,该译码装置1200包括:
硬判决单元1201,用于对输入的对数似然比LLR向量中的每一个LLR进行硬判决,得到原始向量,LLR向量的长度为M,M≤N,N为待译码信息的长度,N、M为2的正整数次幂;
确定单元1202,用于基于硬判决单元1201得到的原始向量,确定Y个待诊断向量,其中,待诊断向量为原始向量的X个元素中的至少0个取反得到,X个元素在原始向量中 的位置与LLR向量中按照绝对值由小到大排序的前X个LLR的位置一致,Y≤2 X;以及,用于基于Y个待诊断向量中的每一个待诊断向量,均确定至少一个候选向量,其中,基于任一待诊断向量确定至少一个候选向量的的方式为:根据生成矩阵,确定待诊断向量的中间译码向量,并根据冻结比特的位置在中间译码向量选择出症状向量,根据症状向量在症状诊断表中选择至少一个诊断向量,将每一个诊断向量与待诊断向量进行异或运算,得到至少一个候选向量,症状诊断表中包括症状向量与诊断向量的对应关系;
选择单元1203,用于在由确定单元1202确定的Y个待诊断向量获得的至少Y个候选向量中,选择L个候选向量;
确定单元1202,还用于根据选择单元1203选择的L个候选向量确定LLR向量的译码结果。
可选的,该译码装置1200还包括交织单元1204,用于:
若LLR向量对应的第一比特序列与设定的第二比特序列不相同,则对输入的LLR向量进行交织处理,对交织处理后的LLR向量中的每一个LLR进行硬判决,得到原始向量;其中,第一比特序列进行相同的交织处理得到第二比特序列,冻结比特的位置由第二比特序列确定;
交织单元1204还用于:对L个候选向量中的每一个候选向量进行解交织处理,根据解交织处理后的L个候选向量确定LLR向量的译码结果。
可选的,选择单元1203用于:若由Y个待诊断向量获得的至少Y个候选向量中存在重复的候选向量,则对至少Y个候选向量进行去重处理,在去重处理后的候选向量中选择L个候选向量,其中,去重处理后的候选向量中任意两个候选向量不同。
基于图7所示的译码方法,如图13所示,本申请实施例还提供一种译码装置1300,译码装置1300用于执行图7所示的译码方法,该译码装置1300包括:
硬判决单元1301,用于对输入的对数似然比LLR向量中的每一个LLR进行硬判决,得到第一向量,LLR向量的长度为M,K=M≤N,N为待译码信息的长度,N、M为2的正整数次幂,K为信息比特的长度;
取反单元1302,用于按序执行以下至少前(L-1)个操作:
将第一向量中第一元素进行取反,得到第二向量;
将第一向量中第二元素进行取反,得到第三向量;
将第一向量中第三元素进行取反,得到第四向量;
将第一向量中第四元素进行取反,得到第五向量;
将第一向量中第五元素进行取反,得到第六向量;
将第一向量中第六元素进行取反,得到第七向量;
将第一向量中第七元素进行取反,得到第八向量;
将第一向量中第一元素和第二元素进行取反,得到第九向量;
将第一向量中第一元素和第三元素进行取反,得到第十向量;
将第一向量中第一元素和第四元素进行取反,得到第十一向量;
将第一向量中第二元素和第三元素进行取反,得到第十二向量;
将第一向量中第一元素、第二元素和第三元素进行取反,得到第十三向量;
其中,第一元素~第X元素在第一向量中的位置与LLR向量中按照绝对值由小到大排序的前X个LLR的位置相对应;
选择单元1303,用于在得到的向量中从第一向量开始依次选择前L个向量;
确定单元1304,用于根据L个向量确定LLR向量的译码结果。
基于图8所示的译码方法,如图14所示,本申请实施例还提供一种译码装置1400,译码装置1400用于执行图8所示的译码方法,该译码装置1400包括:
硬判决单元1401,用于对输入的对数似然比LLR向量中的每一个LLR进行硬判决,得到第一向量,LLR向量的长度为M,(K+1)=M≤N,N为待译码信息的长度,N、M为2的正整数次幂,K为信息比特的长度;
校验单元1402,用于对硬判决单元1401得到的第一向量进行奇偶校验;
取反单元1403,用于若校验单元1402置执行的校验通过,则:
按序执行以下至少前(L-1)个操作:
将第一向量中第一元素和第二元素进行取反,得到第二向量;
将第一向量中第一元素和第三元素进行取反,得到第三向量;
将第一向量中第一元素和第四元素进行取反,得到第四向量;
将第一向量中第一元素和第五元素进行取反,得到第五向量;
将第一向量中第一元素和第六元素进行取反,得到第六向量;
将第一向量中第一元素和第七元素进行取反,得到第七向量;
将第一向量中第一元素和第八元素进行取反,得到第八向量;
将第一向量中第二元素和第三元素进行取反,得到第九向量;
将第一向量中第二元素和第四元素进行取反,得到第十向量;
将第一向量中第二元素和第五元素进行取反,得到第十一向量;
将第一向量中第三元素和第四元素进行取反,得到第十二向量;
将第一向量中第一元素~第四元素进行取反,得到第十三向量;
其中,第一元素~第X元素在第一向量中的位置与LLR向量中按照绝对值由小到大排序的前X个LLR的位置相对应;
选择单元1404,用于在得到的向量中从第一向量开始依次选择前L个向量;
确定单元1405,用于根据L个向量确定LLR向量的译码结果。
可选的,取反单元1403还用于:若校验单元置执行的校验不通过,则:
按序执行以下至少前L个操作:
将第一向量中第一元素进行取反,得到第二向量;
将第一向量中第二元素进行取反,得到第三向量;
将第一向量中第三元素进行取反,得到第四向量;
将第一向量中第四元素进行取反,得到第五向量;
将第一向量中第五元素进行取反,得到第六向量;
将第一向量中第六元素进行取反,得到第七向量;
将第一向量中第七元素进行取反,得到第八向量;
将第一向量中第八元素进行取反,得到第九向量;
将第一向量中第一元素、第二元素和第三元素进行取反,得到第十向量;
将第一向量中第一元素、第二元素和第四元素进行取反,得到第十一向量;
将第一向量中第一元素、第三元素和第四元素进行取反,得到第十二向量;
将第一向量中第二元素、第三元素和第四元素进行取反,得到第十三向量;
将第一向量中第一元素、第二元素和第五元素进行取反,得到第十四向量;
其中,第一元素~第X元素在第一向量中的位置与LLR向量中按照绝对值由小到大排序的前X个LLR的位置相对应;
选择单元1404还用于,在得到的向量中从第二向量开始依次选择前L个向量;
确定单元1405还用于,根据L个向量确定LLR向量的译码结果。
基于图11所示的译码方法,如图15所示,本申请实施例还提供一种译码装置1500,译码装置1500用于执行图11所示的译码方法,该译码装置1500包括:
接收单元1501,用于接收待译码信息,待译码信息的长度为N,待译码信息包括Q个子码块,一个子码块的长度为M,M≤N,M为2的正整数次幂;
确定单元1502,用于针对Q个子码块中的任一子码块,均确定L个第一候选向量;
选择单元1503,用于在由Q个子码块确定的Q*L个第一候选向量中的合法候选向量中,选择PM值最优的L个第二候选向量作为待译码信息的译码结果,其中,合法候选向量与生成矩阵确定的候选结果中辅助比特的位置符合编码侧的设置。
可选的,确定单元1502用于:
在根据任一子码块确定L个第一候选向量时,按照如图5所示的方法中L个候选向量的确定方法执行,或者,按照图7或图8的方法中确定L个向量的方法执行。
需要说明的是,本申请实施例中图12~图15所示的译码装置对模块的划分是示意性的,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
基于图5所示的译码方法的同一发明构思,如图16所示,本申请实施例中还提供一种译码装置1600,该译码装置1600用于执行图5所示的译码方法。图5所示的译码方法中的部分或全部可以通过硬件来实现也可以通过软件来实现,当通过硬件实现时,译码装置1600包括:输入接口电路1601,用于获取待译码信息;逻辑电路1602,用于执行图5所示的译码方法;输出接口电路1603,用于输出译码结果。
可选的,译码装置1600在具体实现时可以是芯片或者集成电路。
可选的,当图5所示的译码方法中的部分或全部通过软件来实现时,如图17所示,译码装置1700包括:存储器1701,用于存储程序;处理器1702,用于执行存储器1701存储的程序,当程序被执行时,使得译码装置1700可以实现图5所示的译码方法。
可选的,上述存储器1701可以是物理上独立的单元,也可以与处理器1702集成在一起。
可选的,当图5所示的译码方法中的部分或全部通过软件实现时,解交织装置1700也可以只包括处理器1702。用于存储程序的存储器1701位于译码装置1700之外,处理器1702通过电路/电线与存储器1701连接,用于读取并执行存储器1701中存储的程序。
处理器1702可以是中央处理器(central processing unit,CPU),网络处理器(network processor,NP)或者CPU和NP的组合。
处理器1702还可以进一步包括硬件芯片。上述硬件芯片可以是专用集成电路(application-specific integrated circuit,ASIC),可编程逻辑器件(programmable logic device,PLD)或其组合。上述PLD可以是复杂可编程逻辑器件(complex programmable logic device, CPLD),现场可编程逻辑门阵列(field-programmable gate array,FPGA),通用阵列逻辑(generic array logic,GAL)或其任意组合。
存储器1701可以包括易失性存储器(volatile memory),例如随机存取存储器(random-access memory,RAM);存储器1701也可以包括非易失性存储器(non-volatile memory),例如快闪存储器(flash memory),硬盘(hard disk drive,HDD)或固态硬盘(solid-state drive,SSD);存储器1701还可以包括上述种类的存储器的组合。
基于图7所示的译码方法的同一发明构思,如图18所示,本申请实施例中还提供一种译码装置1800,该译码装置1800用于执行图7所示的译码方法。图7所示的译码方法中的部分或全部可以通过硬件来实现也可以通过软件来实现,当通过硬件实现时,译码装置1800包括:输入接口电路1801,用于获取待译码信息;逻辑电路1802,用于执行图7所示的译码方法;输出接口电路1803,用于输出译码结果。
可选的,译码装置1800在具体实现时可以是芯片或者集成电路。
可选的,当图7所示的译码方法中的部分或全部通过软件来实现时,如图19所示,译码装置1900包括:存储器1901,用于存储程序;处理器1902,用于执行存储器1901存储的程序,当程序被执行时,使得译码装置1900可以实现图7所示的译码方法。
可选的,上述存储器1901可以是物理上独立的单元,也可以与处理器1902集成在一起。
可选的,当图7所示的译码方法中的部分或全部通过软件实现时,解交织装置1900也可以只包括处理器1902。用于存储程序的存储器1901位于译码装置1900之外,处理器1902通过电路/电线与存储器1901连接,用于读取并执行存储器1901中存储的程序。
处理器1902可以是中央处理器(central processing unit,CPU),网络处理器(network processor,NP)或者CPU和NP的组合。
处理器1902还可以进一步包括硬件芯片。上述硬件芯片可以是专用集成电路(application-specific integrated circuit,ASIC),可编程逻辑器件(programmable logic device,PLD)或其组合。上述PLD可以是复杂可编程逻辑器件(complex programmable logic device,CPLD),现场可编程逻辑门阵列(field-programmable gate array,FPGA),通用阵列逻辑(generic array logic,GAL)或其任意组合。
存储器1901可以包括易失性存储器(volatile memory),例如随机存取存储器(random-access memory,RAM);存储器1901也可以包括非易失性存储器(non-volatile memory),例如快闪存储器(flash memory),硬盘(hard disk drive,HDD)或固态硬盘(solid-state drive,SSD);存储器1901还可以包括上述种类的存储器的组合。
基于图8所示的译码方法的同一发明构思,如图20所示,本申请实施例中还提供一种译码装置2000,该译码装置2000用于执行图8所示的译码方法。图8所示的译码方法中的部分或全部可以通过硬件来实现也可以通过软件来实现,当通过硬件实现时,译码装置2000包括:输入接口电路2001,用于获取待译码信息;逻辑电路2002,用于执行图8所示的译码方法;输出接口电路2003,用于输出译码结果。
可选的,译码装置2000在具体实现时可以是芯片或者集成电路。
可选的,当图8所示的译码方法中的部分或全部通过软件来实现时,如图21所示,译码装置2100包括:存储器2101,用于存储程序;处理器2102,用于执行存储器2101存储的程序,当程序被执行时,使得译码装置2100可以实现图8所示的译码方法。
可选的,上述存储器2101可以是物理上独立的单元,也可以与处理器2102集成在一起。
可选的,当图8所示的译码方法中的部分或全部通过软件实现时,解交织装置2100也可以只包括处理器2102。用于存储程序的存储器2101位于译码装置2100之外,处理器2102通过电路/电线与存储器2101连接,用于读取并执行存储器2101中存储的程序。
处理器2102可以是中央处理器(central processing unit,CPU),网络处理器(network processor,NP)或者CPU和NP的组合。
处理器2102还可以进一步包括硬件芯片。上述硬件芯片可以是专用集成电路(application-specific integrated circuit,ASIC),可编程逻辑器件(programmable logic device,PLD)或其组合。上述PLD可以是复杂可编程逻辑器件(complex programmable logic device,CPLD),现场可编程逻辑门阵列(field-programmable gate array,FPGA),通用阵列逻辑(generic array logic,GAL)或其任意组合。
存储器2101可以包括易失性存储器(volatile memory),例如随机存取存储器(random-access memory,RAM);存储器2101也可以包括非易失性存储器(non-volatile memory),例如快闪存储器(flash memory),硬盘(hard disk drive,HDD)或固态硬盘(solid-state drive,SSD);存储器2101还可以包括上述种类的存储器的组合。
基于图11所示的译码方法的同一发明构思,如图22所示,本申请实施例中还提供一种译码装置2200,该译码装置2200用于执行图11所示的译码方法。图11所示的译码方法中的部分或全部可以通过硬件来实现也可以通过软件来实现,当通过硬件实现时,译码装置2200包括:输入接口电路2201,用于获取待译码信息;逻辑电路2202,用于执行图11所示的译码方法;输出接口电路2203,用于输出译码结果。
可选的,译码装置2200在具体实现时可以是芯片或者集成电路。
可选的,当图11所示的译码方法中的部分或全部通过软件来实现时,如图23所示,译码装置2300包括:存储器2301,用于存储程序;处理器2302,用于执行存储器2301存储的程序,当程序被执行时,使得译码装置2300可以实现图11所示的译码方法。
可选的,上述存储器2301可以是物理上独立的单元,也可以与处理器2302集成在一起。
可选的,当图11所示的译码方法中的部分或全部通过软件实现时,解交织装置2300也可以只包括处理器2302。用于存储程序的存储器2301位于译码装置2300之外,处理器2302通过电路/电线与存储器2301连接,用于读取并执行存储器2301中存储的程序。
处理器2302可以是中央处理器(central processing unit,CPU),网络处理器(network processor,NP)或者CPU和NP的组合。
处理器2302还可以进一步包括硬件芯片。上述硬件芯片可以是专用集成电路(application-specific integrated circuit,ASIC),可编程逻辑器件(programmable logic device,PLD)或其组合。上述PLD可以是复杂可编程逻辑器件(complex programmable logic device,CPLD),现场可编程逻辑门阵列(field-programmable gate array,FPGA),通用阵列逻辑(generic array logic,GAL)或其任意组合。
存储器2301可以包括易失性存储器(volatile memory),例如随机存取存储器(random-access memory,RAM);存储器2301也可以包括非易失性存储器(non-volatile memory),例如快闪存储器(flash memory),硬盘(hard disk drive,HDD)或固态硬盘 (solid-state drive,SSD);存储器2301还可以包括上述种类的存储器的组合。
An embodiment of this application provides a computer storage medium that stores a computer program, and the computer program includes instructions for performing the decoding method provided in the foregoing method embodiments.
An embodiment of this application provides a computer program product including instructions. When the computer program product runs on a computer, the computer is enabled to perform the decoding method provided in the foregoing method embodiments.
Any decoding apparatus provided in the embodiments of this application may alternatively be a chip.
A person skilled in the art should understand that the embodiments of this application may be provided as a method, a system, or a computer program product. Therefore, this application may use a form of a hardware-only embodiment, a software-only embodiment, or an embodiment combining software and hardware. In addition, this application may use a form of a computer program product implemented on one or more computer-usable storage media (including but not limited to a disk memory, a CD-ROM, an optical memory, and the like) that include computer-usable program code.
This application is described with reference to the flowcharts and/or block diagrams of the method, the device (system), and the computer program product according to the embodiments of this application. It should be understood that computer program instructions may be used to implement each process and/or each block in the flowcharts and/or the block diagrams and a combination of a process and/or a block in the flowcharts and/or the block diagrams. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or the other programmable data processing device produce an apparatus for implementing a function specified in one or more processes of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may alternatively be stored in a computer-readable memory that can instruct a computer or another programmable data processing device to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture that includes an instruction apparatus, and the instruction apparatus implements a function specified in one or more processes of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may alternatively be loaded onto a computer or another programmable data processing device, so that a series of operation steps are performed on the computer or the other programmable device to produce computer-implemented processing, and the instructions executed on the computer or the other programmable device provide steps for implementing a function specified in one or more processes of the flowcharts and/or one or more blocks of the block diagrams.
Although preferred embodiments of this application have been described, a person skilled in the art may make additional changes and modifications to these embodiments once they learn of the basic inventive concept. Therefore, the appended claims are intended to be construed as covering the preferred embodiments and all changes and modifications that fall within the scope of this application.
Obviously, a person skilled in the art may make various changes and variations to the embodiments of this application without departing from the spirit and scope of the embodiments of this application. This application is also intended to cover these changes and variations provided that they fall within the scope of the claims of this application and their equivalent technologies.

Claims (25)

  1. A decoding method, comprising:
    performing, by a decoding device, a hard decision on each log-likelihood ratio (LLR) in an input LLR vector to obtain an original vector, wherein a length of the LLR vector is M, M ≤ N, N is a length of to-be-decoded information, and N and M are positive-integer powers of 2;
    determining, by the decoding device, Y to-be-diagnosed vectors based on the original vector, wherein a to-be-diagnosed vector is obtained by inverting at least zero of X elements of the original vector, positions of the X elements in the original vector are consistent with positions of the first X LLRs in the LLR vector sorted in ascending order of absolute value, and Y ≤ 2^X;
    determining, by the decoding device, at least one candidate vector based on each of the Y to-be-diagnosed vectors, wherein the manner of determining at least one candidate vector based on any to-be-diagnosed vector is: determining an intermediate decoded vector of the to-be-diagnosed vector according to a generator matrix, selecting a syndrome vector from the intermediate decoded vector according to positions of frozen bits, selecting at least one diagnosis vector from a syndrome diagnosis table according to the syndrome vector, and performing an exclusive-OR operation on each diagnosis vector and the to-be-diagnosed vector to obtain at least one candidate vector, wherein the syndrome diagnosis table includes a correspondence between syndrome vectors and diagnosis vectors; and
    selecting, by the decoding device, L candidate vectors from the at least Y candidate vectors obtained from the Y to-be-diagnosed vectors, and determining a decoding result of the LLR vector according to the L candidate vectors.
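The following Python sketch is an editorial illustration of the candidate-generation flow recited in claim 1 and is not part of the original disclosure. The generator matrix G, the frozen-bit positions, the syndrome diagnosis table (a mapping from syndromes to diagnosis vectors), and the hard-decision sign convention are assumptions supplied by a concrete implementation; the sketch enumerates all 2^X flip patterns (i.e. Y = 2^X) and leaves the final metric-based selection of the L candidates open.

```python
import numpy as np

def candidates_from_llr(llr, G, frozen_positions, syndrome_table, X=3, L=8):
    """Illustrative sketch (not the authoritative implementation) of claim 1.

    llr:               input LLR vector of length M
    G:                 generator matrix over GF(2), shape (M, M)
    frozen_positions:  indices of the frozen bits in the intermediate decoded vector
    syndrome_table:    dict mapping a syndrome (tuple of bits) to a list of diagnosis vectors
    """
    llr = np.asarray(llr, dtype=float)
    # Hard decision: LLR < 0 -> bit 1, otherwise bit 0 (sign convention is an assumption).
    original = (llr < 0).astype(np.uint8)

    # Positions of the X least-reliable LLRs (smallest absolute value).
    weakest = np.argsort(np.abs(llr))[:X]

    candidates = []
    # Y <= 2^X to-be-diagnosed vectors: here every subset of the weakest positions is flipped.
    for mask in range(2 ** X):
        trial = original.copy()
        for j in range(X):
            if mask & (1 << j):
                trial[weakest[j]] ^= 1

        # Intermediate decoded vector obtained with the generator matrix (arithmetic over GF(2)).
        intermediate = (trial.astype(int) @ np.asarray(G, dtype=int)) % 2

        # Syndrome vector: the entries of the intermediate vector at the frozen-bit positions.
        syndrome = tuple(int(b) for b in intermediate[frozen_positions])

        # XOR each diagnosis vector listed for this syndrome with the to-be-diagnosed vector.
        for diagnosis in syndrome_table.get(syndrome, []):
            candidates.append(trial ^ np.asarray(diagnosis, dtype=np.uint8))

    # The final step of claim 1 selects L candidates; a path-metric based choice is assumed here.
    return candidates[:L]
```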
  2. The method according to claim 1, wherein the performing, by the decoding device, a hard decision on each LLR in the input LLR vector to obtain an original vector comprises:
    if a first bit sequence corresponding to the LLR vector is different from a preset second bit sequence, interleaving, by the decoding device, the input LLR vector, and performing a hard decision on each LLR in the interleaved LLR vector to obtain the original vector, wherein the second bit sequence is obtained by performing the same interleaving on the first bit sequence, and the positions of the frozen bits are determined by the second bit sequence; and
    the determining, by the decoding device, a decoding result of the LLR vector according to the L candidate vectors comprises:
    de-interleaving, by the decoding device, each of the L candidate vectors, and determining the decoding result of the LLR vector according to the L de-interleaved candidate vectors.
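Purely as an editorial illustration of claim 2, the sketch below interleaves the LLR vector before the hard decision and de-interleaves each candidate afterwards. The permutation convention (interleaved[i] = original[perm[i]]) and the decode_fn callable standing in for the candidate selection of claim 1 are assumptions.

```python
import numpy as np

def decode_with_interleaving(llr, perm, decode_fn):
    """Sketch of claim 2: interleave the input LLRs, decode, then de-interleave the candidates.

    perm:      interleaving permutation, assumed convention interleaved[i] = original[perm[i]]
    decode_fn: callable returning L candidate vectors for an LLR vector (e.g. the claim 1 sketch)
    """
    llr = np.asarray(llr)
    interleaved_llr = llr[perm]                  # interleave the input LLR vector
    candidates = decode_fn(interleaved_llr)      # candidate vectors in the interleaved bit order

    inverse = np.argsort(perm)                   # inverse permutation used for de-interleaving
    return [c[inverse] for c in candidates]      # de-interleave every candidate vector
```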
  3. The method according to claim 1 or 2, wherein the selecting, by the decoding device, L candidate vectors from the at least Y candidate vectors obtained from the Y to-be-diagnosed vectors comprises:
    if duplicate candidate vectors exist in the at least Y candidate vectors obtained from the Y to-be-diagnosed vectors, de-duplicating, by the decoding device, the at least Y candidate vectors, and selecting L candidate vectors from the de-duplicated candidate vectors, wherein any two of the de-duplicated candidate vectors are different.
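A minimal sketch of the de-duplication step in claim 3, assuming the candidate vectors arrive already ordered by preference so that keeping the first occurrence of each distinct vector and truncating to L matches the intended selection.

```python
def select_unique_candidates(candidates, L):
    """Drop duplicate candidate vectors, then keep the first L distinct ones."""
    seen = set()
    unique = []
    for candidate in candidates:
        key = tuple(int(bit) for bit in candidate)   # hashable representation of the bit vector
        if key not in seen:
            seen.add(key)
            unique.append(candidate)
    return unique[:L]
```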
  4. The method according to any one of claims 1 to 3, wherein a diagnosis vector in row 2i of the syndrome diagnosis table is pre-stored, and a diagnosis vector in row 2i+1 of the syndrome diagnosis table is computed online, wherein the online computation is performed by inverting the last element of the stored diagnosis vector in row 2i, and i is a non-negative integer.
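A minimal sketch of the storage-saving rule in claim 4, assuming diagnosis vectors are stored as bit lists: only the even-numbered rows of the syndrome diagnosis table are kept, and each odd-numbered row 2i+1 is derived online by inverting the last element of the stored row 2i.

```python
def diagnosis_row(stored_even_rows, row_index):
    """Return the diagnosis vector in the given row of the syndrome diagnosis table.

    stored_even_rows: list holding only rows 0, 2, 4, ... (row 2i is stored at index i)
    row_index:        row number, either 2i (pre-stored) or 2i+1 (computed online)
    """
    row = list(stored_even_rows[row_index // 2])   # pre-stored diagnosis vector of row 2i
    if row_index % 2 == 1:                         # row 2i+1 is computed online:
        row[-1] ^= 1                               # invert the last element of row 2i
    return row

# Example: row 3 equals stored row 2 with its last element inverted.
even_rows = [[0, 0, 0, 0], [1, 0, 1, 0]]
assert diagnosis_row(even_rows, 3) == [1, 0, 1, 1]
```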
  5. A decoding method, comprising:
    performing, by a decoding device, a hard decision on each log-likelihood ratio (LLR) in an input LLR vector to obtain a first vector, wherein a length of the LLR vector is M, K = M ≤ N, N is a length of to-be-decoded information, N and M are positive-integer powers of 2, and K is a length of information bits;
    performing, by the decoding device, at least the first (L-1) of the following operations in order:
    inverting a first element of the first vector to obtain a second vector;
    inverting a second element of the first vector to obtain a third vector;
    inverting a third element of the first vector to obtain a fourth vector;
    inverting a fourth element of the first vector to obtain a fifth vector;
    inverting a fifth element of the first vector to obtain a sixth vector;
    inverting a sixth element of the first vector to obtain a seventh vector;
    inverting a seventh element of the first vector to obtain an eighth vector;
    inverting the first element and the second element of the first vector to obtain a ninth vector;
    inverting the first element and the third element of the first vector to obtain a tenth vector;
    inverting the first element and the fourth element of the first vector to obtain an eleventh vector;
    inverting the second element and the third element of the first vector to obtain a twelfth vector;
    inverting the first element, the second element, and the third element of the first vector to obtain a thirteenth vector;
    wherein positions of the first element to an X-th element in the first vector correspond to positions of the first X LLRs in the LLR vector sorted in ascending order of absolute value; and
    selecting, by the decoding device, the first L vectors in order starting from the first vector among the obtained vectors, and determining a decoding result of the LLR vector according to the L vectors.
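As an editorial illustration of claim 5 (the K = M case), the sketch below encodes the recited flip schedule as 1-based ranks of the least-reliable positions; the empty pattern stands for the first vector itself, so taking the first L schedule entries yields the first vector plus the first (L-1) inversion operations. The hard-decision sign convention is an assumption.

```python
import numpy as np

# Flip schedule recited in claim 5; each tuple lists 1-based reliability ranks to invert,
# and the empty tuple stands for the (unmodified) first vector.
FLIP_SCHEDULE_K_EQ_M = [
    (), (1,), (2,), (3,), (4,), (5,), (6,), (7,),
    (1, 2), (1, 3), (1, 4), (2, 3), (1, 2, 3),
]

def candidates_rate_one(llr, L):
    """Sketch of claim 5: hard-decide, then apply the first L schedule entries."""
    llr = np.asarray(llr)
    first_vector = (llr < 0).astype(np.uint8)   # hard decision (sign convention assumed)
    order = np.argsort(np.abs(llr))             # positions sorted by ascending reliability

    vectors = []
    for pattern in FLIP_SCHEDULE_K_EQ_M[:L]:
        v = first_vector.copy()
        for rank in pattern:                    # invert the rank-th least-reliable element
            v[order[rank - 1]] ^= 1
        vectors.append(v)
    return vectors
```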
  6. A decoding method, comprising:
    performing, by a decoding device, a hard decision on each log-likelihood ratio (LLR) in an input LLR vector to obtain a first vector, wherein a length of the LLR vector is M, (K+1) = M ≤ N, N is a length of to-be-decoded information, N and M are positive-integer powers of 2, and K is a length of information bits;
    performing, by the decoding device, a parity check on the first vector, and if the check passes:
    performing at least the first (L-1) of the following operations in order:
    inverting a first element and a second element of the first vector to obtain a second vector;
    inverting the first element and a third element of the first vector to obtain a third vector;
    inverting the first element and a fourth element of the first vector to obtain a fourth vector;
    inverting the first element and a fifth element of the first vector to obtain a fifth vector;
    inverting the first element and a sixth element of the first vector to obtain a sixth vector;
    inverting the first element and a seventh element of the first vector to obtain a seventh vector;
    inverting the first element and an eighth element of the first vector to obtain an eighth vector;
    inverting the second element and the third element of the first vector to obtain a ninth vector;
    inverting the second element and the fourth element of the first vector to obtain a tenth vector;
    inverting the second element and the fifth element of the first vector to obtain an eleventh vector;
    inverting the third element and the fourth element of the first vector to obtain a twelfth vector;
    inverting the first element to the fourth element of the first vector to obtain a thirteenth vector;
    wherein positions of the first element to an X-th element in the first vector correspond to positions of the first X LLRs in the LLR vector sorted in ascending order of absolute value; and
    selecting, by the decoding device, the first L vectors in order starting from the first vector among the obtained vectors, and determining a decoding result of the LLR vector according to the L vectors.
  7. The method according to claim 6, wherein if the check fails:
    performing, by the decoding device, at least the first L of the following operations in order:
    inverting a first element of the first vector to obtain a second vector;
    inverting a second element of the first vector to obtain a third vector;
    inverting a third element of the first vector to obtain a fourth vector;
    inverting a fourth element of the first vector to obtain a fifth vector;
    inverting a fifth element of the first vector to obtain a sixth vector;
    inverting a sixth element of the first vector to obtain a seventh vector;
    inverting a seventh element of the first vector to obtain an eighth vector;
    inverting an eighth element of the first vector to obtain a ninth vector;
    inverting the first element, the second element, and the third element of the first vector to obtain a tenth vector;
    inverting the first element, the second element, and the fourth element of the first vector to obtain an eleventh vector;
    inverting the first element, the third element, and the fourth element of the first vector to obtain a twelfth vector;
    inverting the second element, the third element, and the fourth element of the first vector to obtain a thirteenth vector;
    inverting the first element, the second element, and the fifth element of the first vector to obtain a fourteenth vector;
    wherein positions of the first element to an X-th element in the first vector correspond to positions of the first X LLRs in the LLR vector sorted in ascending order of absolute value; and
    selecting, by the decoding device, the first L vectors in order starting from the second vector among the obtained vectors, and determining a decoding result of the LLR vector according to the L vectors.
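The sketch below is an editorial illustration of claims 6 and 7 for the (K+1) = M case with a single overall parity bit: when the parity check on the first vector passes, only even-weight flip patterns are tried and the first vector is kept; when it fails, only odd-weight patterns are tried and the selection starts from the second vector. Even parity (an even number of ones passes) is an assumption.

```python
import numpy as np

# Flip schedules recited in claims 6 and 7; each tuple lists 1-based reliability ranks to invert.
EVEN_WEIGHT_SCHEDULE = [        # claim 6: used when the parity check on the first vector passes
    (1, 2), (1, 3), (1, 4), (1, 5), (1, 6), (1, 7), (1, 8),
    (2, 3), (2, 4), (2, 5), (3, 4), (1, 2, 3, 4),
]
ODD_WEIGHT_SCHEDULE = [         # claim 7: used when the parity check fails
    (1,), (2,), (3,), (4,), (5,), (6,), (7,), (8,),
    (1, 2, 3), (1, 2, 4), (1, 3, 4), (2, 3, 4), (1, 2, 5),
]

def candidates_single_parity(llr, L):
    """Sketch of claims 6 and 7 for the (K+1) = M case with one overall parity bit."""
    llr = np.asarray(llr)
    first_vector = (llr < 0).astype(np.uint8)   # hard decision (sign convention assumed)
    order = np.argsort(np.abs(llr))             # positions sorted by ascending reliability

    def flip(pattern):
        v = first_vector.copy()
        for rank in pattern:
            v[order[rank - 1]] ^= 1
        return v

    if int(first_vector.sum()) % 2 == 0:        # parity check passes (even parity assumed)
        vectors = [first_vector] + [flip(p) for p in EVEN_WEIGHT_SCHEDULE]
    else:                                       # parity check fails: start from the second vector
        vectors = [flip(p) for p in ODD_WEIGHT_SCHEDULE]
    return vectors[:L]
```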
  8. A decoding method, comprising:
    receiving, by a decoding device, to-be-decoded information, wherein a length of the to-be-decoded information is N, the to-be-decoded information includes Q sub-blocks, a length of one sub-block is M, M ≤ N, and M is a positive-integer power of 2;
    determining, by the decoding device, L first candidate vectors for each of the Q sub-blocks; and
    selecting, by the decoding device, from legal candidate vectors among the Q*L first candidate vectors determined from the Q sub-blocks, L second candidate vectors with optimal path metric (PM) values as a decoding result of the to-be-decoded information, wherein, in a candidate result determined from a legal candidate vector and the generator matrix, positions of assistant bits comply with the setting on the encoding side.
  9. The method according to claim 8, wherein the determining, by the decoding device, L first candidate vectors from any sub-block is performed according to the method for determining the L candidate vectors in the method according to any one of claims 1 to 4, or according to the method for determining the L vectors in the method according to any one of claims 5 to 7.
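For claims 8 and 9, the following sketch (again illustrative only) decodes each of the Q sub-blocks with any of the per-block routines above and then keeps, among the legal candidates, the L with the best path metric. The legality test, the path-metric function, and the assumption that a lower PM value is better are all placeholders.

```python
def decode_segmented(llr_blocks, subblock_decoder, is_legal, path_metric, L):
    """Sketch of claims 8 and 9: decode Q sub-blocks, then keep the L legal candidates
    with the best path metric (PM) values.

    llr_blocks:       iterable of Q LLR vectors, one per sub-block of length M
    subblock_decoder: callable returning L first candidate vectors for one sub-block
                      (for example, one of the sketches above)
    is_legal:         callable checking that assistant-bit positions match the encoder setting
    path_metric:      callable scoring a candidate against its LLR vector (lower assumed better)
    """
    scored = []
    for llr in llr_blocks:
        for candidate in subblock_decoder(llr, L):       # Q * L first candidate vectors in total
            if is_legal(candidate):                      # keep only the legal candidates
                scored.append((path_metric(candidate, llr), candidate))

    scored.sort(key=lambda item: item[0])                # best (lowest) PM first
    return [candidate for _, candidate in scored[:L]]    # L second candidate vectors
```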
  10. A decoding apparatus, comprising:
    a hard decision unit, configured to perform a hard decision on each log-likelihood ratio (LLR) in an input LLR vector to obtain an original vector, wherein a length of the LLR vector is M, M ≤ N, N is a length of to-be-decoded information, and N and M are positive-integer powers of 2;
    a determining unit, configured to determine Y to-be-diagnosed vectors based on the original vector obtained by the hard decision unit, wherein a to-be-diagnosed vector is obtained by inverting at least zero of X elements of the original vector, positions of the X elements in the original vector are consistent with positions of the first X LLRs in the LLR vector sorted in ascending order of absolute value, and Y ≤ 2^X; and further configured to determine at least one candidate vector based on each of the Y to-be-diagnosed vectors, wherein the manner of determining at least one candidate vector based on any to-be-diagnosed vector is: determining an intermediate decoded vector of the to-be-diagnosed vector according to a generator matrix, selecting a syndrome vector from the intermediate decoded vector according to positions of frozen bits, selecting at least one diagnosis vector from a syndrome diagnosis table according to the syndrome vector, and performing an exclusive-OR operation on each diagnosis vector and the to-be-diagnosed vector to obtain at least one candidate vector, wherein the syndrome diagnosis table includes a correspondence between syndrome vectors and diagnosis vectors; and
    a selection unit, configured to select L candidate vectors from the at least Y candidate vectors obtained from the Y to-be-diagnosed vectors determined by the determining unit, wherein
    the determining unit is further configured to determine a decoding result of the LLR vector according to the L candidate vectors selected by the selection unit.
  11. The apparatus according to claim 10, wherein the apparatus further comprises an interleaving unit, configured to:
    if a first bit sequence corresponding to the LLR vector is different from a preset second bit sequence, interleave the input LLR vector, and perform a hard decision on each LLR in the interleaved LLR vector to obtain the original vector, wherein the second bit sequence is obtained by performing the same interleaving on the first bit sequence, and the positions of the frozen bits are determined by the second bit sequence; and
    the interleaving unit is further configured to:
    de-interleave each of the L candidate vectors, and determine the decoding result of the LLR vector according to the L de-interleaved candidate vectors.
  12. The apparatus according to claim 10 or 11, wherein the selection unit is configured to:
    if duplicate candidate vectors exist in the at least Y candidate vectors obtained from the Y to-be-diagnosed vectors, de-duplicate the at least Y candidate vectors, and select L candidate vectors from the de-duplicated candidate vectors, wherein any two of the de-duplicated candidate vectors are different.
  13. The apparatus according to any one of claims 10 to 12, wherein a diagnosis vector in row 2i of the syndrome diagnosis table is pre-stored, and a diagnosis vector in row 2i+1 of the syndrome diagnosis table is computed online, wherein the online computation is performed by inverting the last element of the stored diagnosis vector in row 2i, and i is a non-negative integer.
  14. A decoding apparatus, comprising:
    a hard decision unit, configured to perform a hard decision on each log-likelihood ratio (LLR) in an input LLR vector to obtain a first vector, wherein a length of the LLR vector is M, K = M ≤ N, N is a length of to-be-decoded information, N and M are positive-integer powers of 2, and K is a length of information bits;
    an inverting unit, configured to perform at least the first (L-1) of the following operations in order:
    inverting a first element of the first vector to obtain a second vector;
    inverting a second element of the first vector to obtain a third vector;
    inverting a third element of the first vector to obtain a fourth vector;
    inverting a fourth element of the first vector to obtain a fifth vector;
    inverting a fifth element of the first vector to obtain a sixth vector;
    inverting a sixth element of the first vector to obtain a seventh vector;
    inverting a seventh element of the first vector to obtain an eighth vector;
    inverting the first element and the second element of the first vector to obtain a ninth vector;
    inverting the first element and the third element of the first vector to obtain a tenth vector;
    inverting the first element and the fourth element of the first vector to obtain an eleventh vector;
    inverting the second element and the third element of the first vector to obtain a twelfth vector;
    inverting the first element, the second element, and the third element of the first vector to obtain a thirteenth vector;
    wherein positions of the first element to an X-th element in the first vector correspond to positions of the first X LLRs in the LLR vector sorted in ascending order of absolute value;
    a selection unit, configured to select the first L vectors in order starting from the first vector among the obtained vectors; and
    a determining unit, configured to determine a decoding result of the LLR vector according to the L vectors.
  15. A decoding apparatus, comprising:
    a hard decision unit, configured to perform a hard decision on each log-likelihood ratio (LLR) in an input LLR vector to obtain a first vector, wherein a length of the LLR vector is M, (K+1) = M ≤ N, N is a length of to-be-decoded information, N and M are positive-integer powers of 2, and K is a length of information bits;
    a check unit, configured to perform a parity check on the first vector obtained by the hard decision unit;
    an inverting unit, configured to, if the check performed by the check unit passes:
    perform at least the first (L-1) of the following operations in order:
    inverting a first element and a second element of the first vector to obtain a second vector;
    inverting the first element and a third element of the first vector to obtain a third vector;
    inverting the first element and a fourth element of the first vector to obtain a fourth vector;
    inverting the first element and a fifth element of the first vector to obtain a fifth vector;
    inverting the first element and a sixth element of the first vector to obtain a sixth vector;
    inverting the first element and a seventh element of the first vector to obtain a seventh vector;
    inverting the first element and an eighth element of the first vector to obtain an eighth vector;
    inverting the second element and the third element of the first vector to obtain a ninth vector;
    inverting the second element and the fourth element of the first vector to obtain a tenth vector;
    inverting the second element and the fifth element of the first vector to obtain an eleventh vector;
    inverting the third element and the fourth element of the first vector to obtain a twelfth vector;
    inverting the first element to the fourth element of the first vector to obtain a thirteenth vector;
    wherein positions of the first element to an X-th element in the first vector correspond to positions of the first X LLRs in the LLR vector sorted in ascending order of absolute value;
    a selection unit, configured to select the first L vectors in order starting from the first vector among the obtained vectors; and
    a determining unit, configured to determine a decoding result of the LLR vector according to the L vectors.
  16. The apparatus according to claim 15, wherein the inverting unit is further configured to:
    if the check performed by the check unit fails:
    perform at least the first L of the following operations in order:
    inverting a first element of the first vector to obtain a second vector;
    inverting a second element of the first vector to obtain a third vector;
    inverting a third element of the first vector to obtain a fourth vector;
    inverting a fourth element of the first vector to obtain a fifth vector;
    inverting a fifth element of the first vector to obtain a sixth vector;
    inverting a sixth element of the first vector to obtain a seventh vector;
    inverting a seventh element of the first vector to obtain an eighth vector;
    inverting an eighth element of the first vector to obtain a ninth vector;
    inverting the first element, the second element, and the third element of the first vector to obtain a tenth vector;
    inverting the first element, the second element, and the fourth element of the first vector to obtain an eleventh vector;
    inverting the first element, the third element, and the fourth element of the first vector to obtain a twelfth vector;
    inverting the second element, the third element, and the fourth element of the first vector to obtain a thirteenth vector;
    inverting the first element, the second element, and the fifth element of the first vector to obtain a fourteenth vector;
    wherein positions of the first element to an X-th element in the first vector correspond to positions of the first X LLRs in the LLR vector sorted in ascending order of absolute value;
    the selection unit is further configured to select the first L vectors in order starting from the second vector among the obtained vectors; and
    the determining unit is further configured to determine a decoding result of the LLR vector according to the L vectors.
  17. A decoding apparatus, comprising:
    a receiving unit, configured to receive to-be-decoded information, wherein a length of the to-be-decoded information is N, the to-be-decoded information includes Q sub-blocks, a length of one sub-block is M, M ≤ N, and M is a positive-integer power of 2;
    a determining unit, configured to determine L first candidate vectors for each of the Q sub-blocks; and
    a selection unit, configured to select, from legal candidate vectors among the Q*L first candidate vectors determined from the Q sub-blocks, L second candidate vectors with optimal path metric (PM) values as a decoding result of the to-be-decoded information, wherein, in a candidate result determined from a legal candidate vector and the generator matrix, positions of assistant bits comply with the setting on the encoding side.
  18. The apparatus according to claim 17, wherein the determining unit is configured to:
    when determining L first candidate vectors from any sub-block, perform the determination according to the method for determining the L candidate vectors in the method according to any one of claims 1 to 4, or according to the method for determining the L vectors in the method according to any one of claims 5 to 7.
  19. A decoding apparatus, comprising:
    a memory, configured to store a program; and
    a processor, configured to execute the program stored in the memory, wherein when the program is executed, the processor is configured to perform the method according to any one of claims 1 to 9.
  20. The apparatus according to claim 19, wherein the decoding apparatus is a chip or an integrated circuit.
  21. A decoding apparatus, comprising:
    an input interface circuit, configured to obtain to-be-decoded information;
    a logic circuit, configured to perform the method according to any one of claims 1 to 9 based on the obtained to-be-decoded information to obtain a decoding result; and
    an output interface circuit, configured to output the decoding result.
  22. A chip, comprising:
    a memory, configured to store a program; and
    a processor, configured to execute the program stored in the memory, wherein when the program is executed, the processor is configured to perform the method according to any one of claims 1 to 9.
  23. A chip, comprising:
    an input interface circuit, configured to obtain to-be-decoded information;
    a logic circuit, configured to perform the method according to any one of claims 1 to 9 based on the obtained to-be-decoded information to obtain a decoding result; and
    an output interface circuit, configured to output the decoding result.
  24. A computer-readable storage medium, wherein the computer storage medium stores computer-readable instructions, and when a computer reads and executes the computer-readable instructions, the computer is enabled to perform the method according to any one of claims 1 to 9.
  25. A computer program product, wherein when a computer reads and executes the computer program product, the computer is enabled to perform the method according to any one of claims 1 to 9.
PCT/CN2018/124375 2018-01-09 2018-12-27 Decoding method and apparatus WO2019137231A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP18900114.2A EP3731418A4 (en) 2018-01-09 2018-12-27 DECODING PROCESS AND DEVICE
US16/923,898 US11171673B2 (en) 2018-01-09 2020-07-08 Decoding method and apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810020396.4A CN110022158B (zh) 2018-01-09 Decoding method and apparatus
CN201810020396.4 2018-01-09

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/923,898 Continuation US11171673B2 (en) 2018-01-09 2020-07-08 Decoding method and apparatus

Publications (1)

Publication Number Publication Date
WO2019137231A1 (zh)

Family

ID=67187855

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/124375 WO2019137231A1 (zh) 2018-01-09 2018-12-27 一种译码方法及装置

Country Status (4)

Country Link
US (1) US11171673B2 (zh)
EP (1) EP3731418A4 (zh)
CN (1) CN110022158B (zh)
WO (1) WO2019137231A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114421975A (zh) * 2022-01-18 2022-04-29 Chongqing University of Posts and Telecommunications Polar code SCLF decoding method based on flip sets
US11695430B1 (en) * 2022-06-09 2023-07-04 Hon Lin Technology Co., Ltd. Method for decoding polar codes and apparatus thereof

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020188906A1 (en) * 2001-06-06 2002-12-12 Kurtas Erozan M. Method and coding apparatus using low density parity check codes for data storage or data transmission
CN104038234A (zh) * 2013-03-07 2014-09-10 Huawei Technologies Co., Ltd. Decoding method and decoder for polar codes
CN107425857A (zh) * 2017-06-19 2017-12-01 Huawei Technologies Co., Ltd. Polar code encoding and decoding method and apparatus

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7441110B1 (en) * 1999-12-10 2008-10-21 International Business Machines Corporation Prefetching using future branch path information derived from branch prediction
US6940429B2 (en) * 2003-05-28 2005-09-06 Texas Instruments Incorporated Method of context based adaptive binary arithmetic encoding with decoupled range re-normalization and bit insertion
WO2011127287A1 (en) * 2010-04-08 2011-10-13 Marvell World Trade Ltd. Non-binary ldpc code decoder
US9176927B2 (en) * 2011-11-08 2015-11-03 The Royal Institution For The Advancement Of Learning/Mcgill University Methods and systems for decoding polar codes
CN104242957B (zh) * 2013-06-09 2017-11-28 Huawei Technologies Co., Ltd. Decoding processing method and decoder
CN104158549A (zh) * 2014-07-24 2014-11-19 Nanjing University Polar code decoding method and decoding apparatus
US9793923B2 (en) * 2015-11-24 2017-10-17 Texas Instruments Incorporated LDPC post-processor architecture and method for low error floor conditions
US20170222754A1 (en) * 2016-01-28 2017-08-03 Lg Electronics Inc. Error correcting coding method based on cross-layer error correction with likelihood ratio and apparatus thereof
CN106788453B (zh) * 2016-11-11 2020-06-19 Shandong University of Science and Technology Parallel polar code decoding method and apparatus
CN107040262B (zh) * 2017-03-28 2020-07-28 Beihang University Method for computing list prediction values for SCL+CRC decoding of polar codes
CN107528597B (zh) * 2017-09-25 2020-12-08 Guilin University of Electronic Technology LDPC code post-processing decoding method based on a CRC check code

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020188906A1 (en) * 2001-06-06 2002-12-12 Kurtas Erozan M. Method and coding apparatus using low density parity check codes for data storage or data transmission
CN104038234A (zh) * 2013-03-07 2014-09-10 Huawei Technologies Co., Ltd. Decoding method and decoder for polar codes
CN107425857A (zh) * 2017-06-19 2017-12-01 Huawei Technologies Co., Ltd. Polar code encoding and decoding method and apparatus

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3731418A4 *

Also Published As

Publication number Publication date
CN110022158A (zh) 2019-07-16
CN110022158B (zh) 2021-04-09
EP3731418A1 (en) 2020-10-28
EP3731418A4 (en) 2021-03-03
US11171673B2 (en) 2021-11-09
US20200343916A1 (en) 2020-10-29

Similar Documents

Publication Publication Date Title
JP7471357B2 (ja) Encoding method, decoding method, apparatus, and apparatus
KR102621627B1 (ko) Apparatus and method for encoding using cyclic redundancy check and polar codes
US11025278B2 (en) Polar coding encoding/decoding method and apparatus
CN107370560B (zh) Polar code encoding and rate matching method, apparatus, and device
US11171741B2 (en) Polar code transmission method and apparatus
CN112953558B (zh) Polar code encoding method and apparatus
CN108282259B (zh) Encoding method and apparatus
JP7027520B2 (ja) Polar encoding method and apparatus
US11245423B2 (en) Interleaving method and apparatus
WO2019001436A1 (zh) Polar code encoding method and apparatus
WO2019137231A1 (zh) Decoding method and apparatus
WO2020252792A1 (zh) Polar code decoding method and apparatus, chip, storage medium, and program product
US11075715B2 (en) Encoding method and apparatus
WO2019029205A1 (zh) Encoding method and apparatus
CN110324111B (zh) Decoding method and device
WO2022257718A1 (zh) Polar code encoding method, decoding method, and apparatus
CN103401566A (zh) Parallel encoding method and apparatus for parameterized BCH error correction codes
WO2012109872A1 (zh) Cyclic redundancy check processing method and apparatus in a communication system, and LTE terminal
CN111713023B (zh) Polar code decoding method and decoding apparatus
WO2019144787A1 (zh) Interleaving method and interleaving device
CN111600613B (zh) Check method and apparatus, decoder, receiver, and computer storage medium
WO2016165395A1 (zh) Decoding method and decoder
CN113067582A (zh) Parallel decoding method and apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18900114

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2018900114

Country of ref document: EP

Effective date: 20200722