WO2015000100A1 - Method for implementing Turbo equalization compensation, and Turbo equalizer and system - Google Patents

Method for implementing Turbo equalization compensation, and Turbo equalizer and system

Info

Publication number
WO2015000100A1
WO2015000100A1 · PCT/CN2013/078570 · CN2013078570W
Authority
WO
WIPO (PCT)
Prior art keywords
data
data block
data segments
convolutional code
ldpc convolutional
Prior art date
Application number
PCT/CN2013/078570
Other languages
English (en)
French (fr)
Inventor
常德远
肖治宇
喻凡
赵羽
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 filed Critical 华为技术有限公司
Priority to ES13888637.9T priority Critical patent/ES2683084T3/es
Priority to CN201380001159.9A priority patent/CN103688502B/zh
Priority to EP13888637.9A priority patent/EP3001570B1/en
Priority to PCT/CN2013/078570 priority patent/WO2015000100A1/zh
Publication of WO2015000100A1 publication Critical patent/WO2015000100A1/zh
Priority to US14/984,351 priority patent/US10574263B2/en


Classifications

    • H03M13/1154: Low-density parity-check convolutional codes [LDPC-CC]
    • H03M13/2957: Turbo codes and decoding
    • H03M13/2987: Particular arrangement of the component decoders using more component decoders than component codes, e.g. pipelined turbo iterations
    • H03M13/3746: Decoding methods or techniques, not specific to the particular type of coding, with iterative decoding
    • H03M13/3972: Sequence estimation using sliding window techniques or parallel windows
    • H03M13/616: Matrix operations, especially for generator matrices or check matrices
    • H03M13/6331: Error control coding in combination with equalisation
    • H03M13/6561: Parallelized implementations
    • H04B10/25073: Fibre transmission, reduction of distortion or dispersion using spectral equalisation
    • H04B10/2543: Fibre transmission, reduction of distortion due to fibre non-linearities, e.g. Kerr effect
    • H04B10/2569: Fibre transmission, reduction of distortion due to polarisation mode dispersion [PMD]
    • H04B10/6161: Coherent receivers, compensation of chromatic dispersion
    • H04B10/6162: Coherent receivers, compensation of polarization related effects, e.g. PMD, PDL
    • H04L1/0045: Forward error control arrangements at the receiver end
    • H04L1/0057: Forward error control using block codes
    • H04L25/03057: Removal of intersymbol interference, adaptive time-domain processing with a recursive structure
    • H04L25/03171: Arrangements involving maximum a posteriori probability [MAP] detection
    • H04L2025/03789: Codes for reference signals used in intersymbol-interference removal
    • H04L25/0202: Channel estimation

Definitions

  • The present invention relates to the field of optical communications, and in particular to a method for implementing Turbo equalization compensation, a Turbo equalizer, and a Turbo equalizer system.
  • In the background art, Turbo equalization iterates between an LDPC decoder and a BCJR module, where the BCJR module uses a conventional serial-structure sliding-window BCJR operating on the full LDPC codeword.
  • The LDPC codewords used in optical communication reach tens of thousands of bits in length, and the entire codeword must be stored in the BCJR module, so the BCJR module requires very large storage resources.
  • A Turbo equalizer with a feedback structure, long LDPC codewords, and a complex BCJR module all limit system throughput.
  • The present invention proposes a method for implementing Turbo equalization compensation, together with an equalizer and a system, aiming to solve the problem of limited throughput when Turbo equalization compensation is implemented in a high-speed optical fiber transmission system.
  • A method for implementing Turbo equalization compensation includes: dividing a first data block into n data segments, where two adjacent data segments of the n data segments overlap by D bits, n is a positive integer greater than or equal to 2, and D is a positive integer greater than or equal to 1; recursively processing each of the n data segments; combining the n recursively processed data segments to obtain a second data block; and iteratively decoding the second data block to output a third data block. The data lengths of the first data block, the second data block, and the third data block are all 1/T of the code length of the low-density parity-check (LDPC) convolutional code, where T is the number of layers of the staircase-shaped parity-check matrix of the LDPC convolutional code.
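The segmentation-and-merge step above can be pictured with a short sketch. It is a minimal illustration only, assuming soft values held in a NumPy array and a block length divisible by n; the function names and the choice of NumPy are ours, not the patent's.

```python
# Minimal sketch (not the patent's implementation): split a data block into n
# segments whose neighbours overlap by D symbols, then merge per-segment results
# back into one block of the original length.
import numpy as np

def split_overlapping(block, n, D):
    """Return n sub-arrays of `block`; adjacent sub-arrays share D symbols."""
    L = len(block)
    core = L // n                          # region each segment is responsible for
    segments = []
    for i in range(n):
        start = max(0, i * core - D)       # extend D symbols into the previous segment
        stop = min(L, (i + 1) * core + D)  # and D symbols into the next one
        segments.append((start, stop, block[start:stop].copy()))
    return segments, core

def merge_segments(segments, core, L):
    """Keep only each segment's own `core` region; the overlaps are discarded."""
    out = np.empty(L)
    for i, (start, stop, seg) in enumerate(segments):
        own_lo, own_hi = i * core, min((i + 1) * core, L)
        out[own_lo:own_hi] = seg[own_lo - start:own_hi - start]
    return out

if __name__ == "__main__":
    llr_block = np.random.randn(1024)          # e.g. one block of length N/T
    segs, core = split_overlapping(llr_block, n=4, D=32)
    # ... each segment would be processed independently (in parallel) here ...
    merged = merge_segments(segs, core, len(llr_block))
    assert np.allclose(merged, llr_block)      # identity when no processing is applied
```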
  • Recursively processing each of the n data segments may include performing a forward recursive operation and a backward recursive operation on each of the n data segments in parallel.
  • Recursively processing each of the n data segments may instead include performing only a forward recursive operation on each of the n data segments in parallel.
  • Recursively processing each of the n data segments may instead include performing only a backward recursive operation on each of the n data segments in parallel.
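The forward and backward recursions named above are the standard alpha/beta recursions of a BCJR-type processor. The sketch below is a generic log-domain version over a toy two-state trellis with a Gaussian branch metric; the trellis, the metric, and all names are illustrative assumptions and are not taken from the patent.

```python
# Hedged sketch of the forward/backward (alpha/beta) recursions a BCJR-type
# processor runs over one segment.  The 2-state trellis and the Gaussian branch
# metric are placeholders only.
import numpy as np

N_STATES = 2                                   # assumed toy trellis

def branch_metric(y, s, s_next, sigma2=0.5):
    """log p(y | transition s -> s_next) for a toy 2-state / ±1 trellis."""
    x = 1.0 if s_next == s else -1.0           # illustrative symbol mapping
    return -((y - x) ** 2) / (2.0 * sigma2)

def forward_recursion(y_seg, alpha0):
    """alpha[k][s]: unnormalised log-probability of state s after k observations."""
    alpha = [alpha0]
    for y in y_seg:
        prev = alpha[-1]
        nxt = np.full(N_STATES, -np.inf)
        for s in range(N_STATES):
            for s2 in range(N_STATES):
                nxt[s2] = np.logaddexp(nxt[s2], prev[s] + branch_metric(y, s, s2))
        alpha.append(nxt - nxt.max())          # normalise to avoid overflow
    return alpha

def backward_recursion(y_seg, betaT):
    beta = [betaT]
    for y in reversed(y_seg):
        nxt = beta[0]
        prev = np.full(N_STATES, -np.inf)
        for s in range(N_STATES):
            for s2 in range(N_STATES):
                prev[s] = np.logaddexp(prev[s], nxt[s2] + branch_metric(y, s, s2))
        beta.insert(0, prev - prev.max())
    return beta

# Equiprobable start states, as used for the primary equalizer in the text:
equiprobable = np.zeros(N_STATES)              # log(1/|S|) up to a constant
y_segment = np.random.randn(64)
alphas = forward_recursion(y_segment, equiprobable)
betas = backward_recursion(y_segment, equiprobable)
```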
  • Iteratively decoding the second data block to output the third data block includes: receiving the second data block; decoding the received second data block together with T-1 other data blocks that have already been iteratively decoded, where the data length of each of those T-1 iteratively decoded blocks is 1/T of the code length of the LDPC convolutional code; and outputting, as the third data block, the block that has undergone the most decoding passes.
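The scheduling implied by this step, decoding each newly received block jointly with the T-1 blocks already held and then releasing the block that has accumulated the most decoding passes, can be sketched as follows. The class name and the `decode_pass` placeholder are our own illustrative choices; the actual layered message-passing update of an LDPC convolutional code is outside the scope of the sketch.

```python
# Scheduling sketch only: a decoder window holding the T most recent blocks.
from collections import deque

class SlidingLdpcCcDecoder:
    def __init__(self, T, decode_pass):
        self.T = T
        self.window = deque()           # up to T blocks, each of length N/T
        self.decode_pass = decode_pass  # placeholder for one joint soft-decoding pass

    def push(self, new_block):
        """Add one incoming block; return the block that has had T passes, if any."""
        self.window.append(new_block)
        self.decode_pass(list(self.window))   # decode new block with the T-1 older ones
        if len(self.window) == self.T:
            return self.window.popleft()      # oldest block: decoded the most times
        return None

# Usage with a do-nothing placeholder pass:
dec = SlidingLdpcCcDecoder(T=4, decode_pass=lambda blocks: None)
outputs = [dec.push([i]) for i in range(8)]   # the first T-1 results are None
```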
  • Before the first data block is divided into n data segments, the method may further include performing conditional transition probability distribution estimation on the first data block to determine the parameter information of the channel estimate.
  • A Turbo equalizer includes: an overlapped parallel BCJR (OP-BCJR) unit, configured to divide a first data block into n data segments, where two adjacent data segments of the n data segments overlap by D bits, n is a positive integer greater than or equal to 2, and D is a positive integer greater than or equal to 1, to recursively process each of the n data segments, and to combine the n recursively processed data segments to obtain a second data block; and a low-density parity-check (LDPC) convolutional code decoding unit, connected to the OP-BCJR unit and configured to iteratively decode the second data block to output a third data block. The data lengths of the first data block, the second data block, and the third data block are all 1/T of the code length of the LDPC convolutional code, where T is the number of layers of the staircase-shaped parity-check matrix of the LDPC convolutional code.
  • The OP-BCJR unit includes: a segmentation module, configured to divide the first data block into n data segments, where two adjacent data segments of the n data segments overlap by D bits, n is a positive integer greater than or equal to 2, and D is a positive integer greater than or equal to 1; a recursive module, configured to recursively process each of the n data segments; and a merging module, configured to combine the n recursively processed data segments to obtain the second data block.
  • the recursive module is specifically configured to: perform forward recursive operation on each of the n data segments in parallel and Backward recursive operation.
  • the recursive module is specifically configured to: perform a forward recursive operation on each of the n data segments in parallel.
  • the recursive module is specifically configured to: perform a backward recursive operation on each of the n data segments in parallel.
  • The LDPC convolutional code decoding unit includes: a receiving module, configured to receive the second data block; a decoding module, configured to decode the received second data block together with T-1 other data blocks that have already been iteratively decoded, where the data length of each of those T-1 blocks is 1/T of the code length of the LDPC convolutional code; and an output module, configured to output the third data block, i.e., the block that has undergone the most decoding passes.
  • The Turbo equalizer may further include a channel estimation unit, configured to perform conditional transition probability distribution estimation on the first data block before the OP-BCJR unit divides the first data block into n data segments, so as to determine the parameter information of the channel estimate.
  • A Turbo equalizer system includes at least one Turbo equalizer, where each of the at least one Turbo equalizer includes: an OP-BCJR unit, configured to divide a first data block into n data segments, where two adjacent data segments of the n data segments overlap by D bits, n is a positive integer greater than or equal to 2, and D is a positive integer greater than or equal to 1, to recursively process each of the n data segments, and to combine the n recursively processed data segments to obtain a second data block; and an LDPC convolutional code decoding unit, connected to the OP-BCJR unit and configured to iteratively decode the second data block to output a third data block. The data length of each block is 1/T of the code length of the low-density parity-check LDPC convolutional code, where T is the number of layers of the staircase-shaped parity-check matrix of the LDPC convolutional code.
  • Another Turbo equalizer system includes: at least one Turbo equalizer, where each of the at least one Turbo equalizer includes an overlapped parallel OP-BCJR unit, configured to divide a first data block into n data segments, where two adjacent data segments of the n data segments overlap by D bits, n is a positive integer greater than or equal to 2, and D is a positive integer greater than or equal to 1, to recursively process each of the n data segments, and to combine the n recursively processed data segments to obtain a second data block, and an LDPC convolutional code decoding unit, connected to the OP-BCJR unit and configured to iteratively decode the second data block to output a third data block, where the data lengths of the first, second, and third data blocks are all 1/T of the code length of the LDPC convolutional code and T is the number of layers of the staircase-shaped parity-check matrix of the LDPC convolutional code; and at least one low-density parity-check LDPC convolutional code decoding unit, where each of the at least one LDPC convolutional code decoding unit receives the third data block output by one of the at least one Turbo equalizer and iteratively decodes the third data block to output a fourth data block, the data length of the fourth data block being 1/T of the code length of the LDPC convolutional code.
  • Embodiments of the present invention are applied at the receiving end of a high-speed optical fiber transmission system.
  • By segmenting the received data block and applying forward and backward recursive operations in the OP-BCJR unit, and by performing Turbo iterative processing in the LDPC convolutional code decoding unit on the data obtained from the OP-BCJR unit, system throughput is effectively increased.
  • FIG. 1 is a flow chart of a method of Turbo equalization compensation in accordance with an embodiment of the present invention.
  • FIG. 2 is a schematic structural diagram of a turbo equalizer according to an embodiment of the present invention.
  • FIG. 3 is a schematic structural view of an OP-BCJR unit in a turbo equalizer according to an embodiment of the present invention.
  • FIG. 4 is a block diagram showing the structure of an LDPC convolutional code decoding unit in a turbo equalizer according to an embodiment of the present invention.
  • FIG. 5 is a schematic structural diagram of a turbo equalizer according to another embodiment of the present invention.
  • FIG. 6 is a block diagram showing the structure of a Turbo equalizer system in accordance with an embodiment of the present invention.
  • FIG. 7 is a schematic structural diagram of a turbo equalizer system according to another embodiment of the present invention.
  • FIG. 8 is a block diagram of a Turbo equalizer system in accordance with an embodiment of the present invention.
  • Figure 9 is a block diagram of a Turbo equalizer in accordance with an embodiment of the present invention.
  • Figure 10 is a timing diagram of an iterative process of an OP-BCJR unit in a Turbo equalizer in accordance with an embodiment of the present invention.
  • FIG. 11 is a schematic diagram showing a specific processing procedure of an OP-BCJR unit in a turbo equalizer according to an embodiment of the present invention.
  • FIG. 12 is a block diagram of a Turbo equalizer system in accordance with another embodiment of the present invention.
  • Detailed description:
  • GSM Global System of Mobile Communication
  • CDMA Code Division Multiple Access
  • WCDMA Wideband Code Division Multiple Access
  • GPRS General Packet Radio Service
  • LTE Long Term Evolution
  • A user equipment (UE), which may also be referred to as a mobile terminal or a mobile station, may communicate with one or more core networks via a radio access network (RAN).
  • the UE exchanges voice and/or data with the radio access network.
  • the base station may be a base station (BTS, Base Transceiver Station) in GSM or CDMA, or may be a base station (Node B) in WCDMA, or may be an evolved base station (eNB or e-NodeB, evolved Node B) in LTE.
  • BTS Base Transceiver Station
  • Node B base station
  • eNB evolved base station
  • e-NodeB evolved Node B
  • the embodiment of the present invention proposes a method for implementing Turbo equalization compensation applied to the receiving end of the high-speed optical fiber transmission system.
  • At the transmitting end, after the transmit signal passes through a framing unit (framer) in an optical transport unit (OTU), convolutional-code encoding is performed in an LDPC convolutional code encoder and differential encoding is performed in a differential encoder; finally, the optical signal is sent into the optical fiber transmission network through an optical modulator.
  • At the receiving end, the optical signal undergoes coherent detection, analog-to-digital converter (ADC) sampling, and conventional signal equalization, then enters the Turbo equalizer system where Turbo equalization compensation is implemented, and finally passes through a deframing unit (deframer) in the OTU to form the received signal.
  • The Turbo equalizer system may include at least one Turbo equalizer, where each Turbo equalizer includes an OP-BCJR unit and an LDPC convolutional code decoding unit.
  • The Turbo equalizer system may also include at least one independent LDPC convolutional code decoding unit.
  • The OP-BCJR unit in the Turbo equalizer divides the first data block into n data segments, where two adjacent data segments of the n data segments overlap by D bits, n is a positive integer greater than or equal to 2, and D is a positive integer greater than or equal to 1; it recursively processes each of the n data segments and combines the n recursively processed data segments to obtain the second data block.
  • The data lengths of the first data block and the second data block are both 1/T of the code length of the LDPC convolutional code, where T is the number of layers of the staircase-shaped parity-check matrix of the LDPC convolutional code.
  • the state values of the starting symbols of the overlapping D bits are equal probability distributions.
  • the equal probability distribution means that the probability distribution of the state at the bit is equal in each possible state.
  • The code length of the LDPC convolutional code refers to the data length that satisfies one layer of the check relation; "satisfying one layer of the check relation" means that x * H_i^T = 0 for i = 1, 2, ..., T, where x is the hard-decision bit data satisfying the relation and H_i^T is the transpose of the i-th layer H_i of the parity-check matrix of the LDPC convolutional code.
  • H_1 through H_T together constitute the parity-check matrix of the LDPC convolutional code.
  • That is, T indicates that a total of T codeword blocks are combined to jointly satisfy the check relation of the LDPC convolutional code.
  • T is determined by the staircase structure parameter (i.e., the number of layers) of the parity-check matrix H of the LDPC convolutional code. For example, suppose the i-th layer H_i and the (i+1)-th layer H_{i+1} of the parity-check matrix are shifted from each other by N_T columns and each layer of the parity-check matrix has N columns; N_T and N are generally taken as constants, so T = N / N_T.
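For a purely illustrative set of numbers (not taken from the patent): if each layer of the parity-check matrix spans N = 2048 columns and consecutive layers H_i and H_{i+1} are shifted by N_T = 512 columns, then T = N / N_T = 2048 / 512 = 4, so four consecutive code blocks jointly satisfy one layer of the check relation and each block handled by the equalizer is one quarter of the LDPC convolutional code length.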
  • performing recursive processing on each of the n data segments respectively may include: performing forward recursive operation and backward recursion on each of the n data segments in parallel Operation.
  • the performing recursive processing for each of the n data segments may also include: performing forward recursive operation on each of the n data segments in parallel.
  • the recursive processing of each of the n data segments may also include: performing backward recursive operations on each of the n data segments in parallel.
  • performing recursive processing on each of the n data segments respectively may include: performing forward recursive processing on a part of the n data segments, and remaining data segments Perform backward recursive processing.
  • the OP-BCJR unit needs to use the probability density function (PDF) and the transfer probability parameter of the channel in the forward recursive operation and the backward recursive operation.
  • PDF probability density function
  • In some scenarios these parameters are known in advance, while in other application scenarios they can only be obtained through channel estimation. Therefore, before the first data block is divided into n data segments, conditional transition probability distribution estimation is also performed on the first data block to determine the parameter information of the channel estimate.
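Where these channel parameters must be learned, one common way to estimate a conditional transition probability is to fit per-symbol statistics to received samples of a known training sequence. The sketch below assumes a Gaussian model and BPSK training symbols purely for illustration; the patent does not specify the estimator.

```python
# Hedged sketch: estimate the channel's conditional statistics from a known
# training sequence by fitting, per transmitted symbol value, a Gaussian to the
# received samples.  The Gaussian assumption and all names are ours.
import numpy as np

def estimate_conditional_pdf(tx_train, rx_train):
    """Return {symbol: (mean, variance)} describing p(y | x = symbol)."""
    params = {}
    for sym in np.unique(tx_train):
        y = rx_train[tx_train == sym]
        params[sym] = (float(y.mean()), float(y.var()) + 1e-12)
    return params

def log_likelihood(y, sym, params):
    mu, var = params[sym]
    return -0.5 * np.log(2 * np.pi * var) - (y - mu) ** 2 / (2 * var)

# Toy usage: BPSK training symbols through an AWGN-like channel.
tx = np.random.choice([-1.0, 1.0], size=4096)
rx = tx + 0.3 * np.random.randn(tx.size)
pdf_params = estimate_conditional_pdf(tx, rx)
# `log_likelihood` would then supply the branch metrics used by the OP-BCJR unit.
```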
  • The LDPC convolutional code decoding unit in the Turbo equalizer iteratively decodes the second data block to output a third data block.
  • The data length of the third data block is also 1/T of the code length of the LDPC convolutional code.
  • Iteratively decoding the second data block to output the third data block includes: receiving the second data block; decoding the received second data block together with T-1 other data blocks that have already been iteratively decoded; and outputting the third data block, i.e., the block that has undergone the most decoding passes.
  • The data length of each of the other T-1 iteratively decoded data blocks is 1/T of the code length of the LDPC convolutional code.
  • the embodiment of the present invention is applied to the receiving end of a high speed optical fiber transmission system.
  • segmentation processing and forward-backward recursive operation on the received data block in the OP-BCJR unit, and performing Turbo iterative processing on the data acquired from the OP-BCJR unit in the LDPC convolutional code decoding unit, Effectively increase system throughput.
  • FIG. 2 illustrates a Turbo equalizer that implements Turbo equalization compensation in accordance with an embodiment of the present invention.
  • The method of implementing Turbo equalization compensation described above is explained in detail below in conjunction with the Turbo equalizer of FIG. 2.
  • The Turbo equalizer 20 includes an OP-BCJR unit 21 and an LDPC convolutional code decoding unit 22, where:
  • the OP-BCJR unit 21 is configured to segment the first data block into n data segments, wherein two adjacent data segments of the n data segments overlap D bits, and n is a positive integer greater than or equal to 2.
  • D is a positive integer greater than or equal to 1, recursively processing each of the n data segments separately, and combining the n data segments after the recursive processing to obtain a second data block;
  • The LDPC convolutional code decoding unit 22 is connected to the OP-BCJR unit 21 and is configured to iteratively decode the second data block to output a third data block.
  • The data lengths of the first data block, the second data block, and the third data block are all 1/T of the code length of the low-density parity-check LDPC convolutional code, where T is the number of layers of the staircase-shaped parity-check matrix of the LDPC convolutional code.
  • The OP-BCJR unit 21 may include a segmentation module 211, a recursive module 212, and a merging module 213, where:
  • the segmentation module 211 is configured to segment the first data block into n data segments, wherein two adjacent data segments of the n data segments overlap D bits, and n is a positive integer greater than or equal to 2, D a positive integer greater than or equal to 1;
  • the recursive module 212 is configured to perform recursive processing on each of the n data segments separately;
  • the merging module 213 is configured to merge the n data segments after the recursive processing to obtain a second data block.
  • the recursive module 212 is configured to perform a forward recursive operation and a backward recursive operation on each of the n data segments in parallel; or, in parallel, each of the n data segments The data segment performs a forward recursive operation; or, a backward recursive operation is performed on each of the n data segments in parallel.
  • the recursive module 212 is further configured to perform forward recursive processing on a portion of the n data segments and backward recursive processing on the remaining data segments.
  • The LDPC convolutional code decoding unit 22 may include a receiving module 221, a decoding module 222, and an output module 223, where:
  • the receiving module 221 is configured to receive the second data block.
  • The decoding module 222 is configured to decode the received second data block together with T-1 other data blocks that have already been iteratively decoded, where the data length of each of those T-1 iteratively decoded blocks is 1/T of the code length of the LDPC convolutional code.
  • The output module 223 is configured to output the third data block, i.e., the block that has undergone the most decoding passes.
  • the embodiment of the present invention is applied to the receiving end of a high speed optical fiber transmission system.
  • segmentation processing and forward-backward recursive operation on the received data block in the OP-BCJR unit, and performing Turbo iterative processing on the data acquired from the OP-BCJR unit in the LDPC convolutional code decoding unit, Effectively increase system throughput.
  • A channel estimation unit 23 may further be included, configured to perform conditional transition probability distribution estimation on the first data block before the OP-BCJR unit divides the first data block into n data segments, so as to determine the parameter information of the channel estimate.
  • In this way, the channel PDF distribution parameters and transition probability parameters needed by the OP-BCJR unit can be obtained through channel estimation.
  • The above embodiments take a single Turbo equalizer in the Turbo equalizer system as an example.
  • In practice, to obtain a better Turbo equalization compensation effect, the Turbo equalizer system may include at least one Turbo equalizer as described above.
  • Alternatively, the Turbo equalizer system may include at least one Turbo equalizer as described above and at least one LDPC convolutional code decoding unit as described above, where the relative positions of the Turbo equalizers and the LDPC convolutional code decoding units may be arranged arbitrarily and are not limited.
  • Data blocks whose data length is 1/T of the code length of the convolutional code can then pass sequentially through multiple stages of BCJR and LDPC convolutional code decoding; because the OP-BCJR units and LDPC convolutional code decoding units are connected in series, each data block undergoes iterative Turbo equalization processing.
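The serial chaining described here can be pictured as a feed-forward pipeline in which every stage consumes the block produced by the previous one. The sketch below only shows the data flow; in hardware the stages would operate concurrently on successive blocks, which is where the throughput gain comes from. The stage names in the comment are illustrative.

```python
# Structural sketch only: a feed-forward chain in which each stage (a Turbo
# equalizer, or a stand-alone LDPC-CC decoding unit) consumes the block produced
# by the previous stage.  Stage internals are represented by callables.
def run_pipeline(stages, blocks):
    """`stages` is an ordered list of callables; `blocks` an iterable of data blocks."""
    for block in blocks:
        for stage in stages:
            block = stage(block)        # output of stage i feeds stage i+1
        yield block

# e.g. pipeline = [turbo_eq_1, turbo_eq_2, ldpc_cc_dec_1]; the names are hypothetical.
```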
  • the Turbo equalizer system 60 shown in Figure 6 includes at least one Turbo equalizer 20 as shown in Figure 2.
  • the Turbo equalizer system 70 shown in Fig. 7 includes at least one Turbo equalizer 20 as shown in Fig. 2, and at least one LDPC convolutional code decoding unit 22.
  • Each LDPC convolutional code decoding unit of the at least one LDPC convolutional code decoding unit receives the third data block output by one of the at least one Turbo equalizer, or by another LDPC convolutional code decoding unit of the at least one LDPC convolutional code decoding unit, and iteratively decodes the third data block to output a fourth data block, where the data length of the fourth data block is 1/T of the code length of the LDPC convolutional code.
  • The third data block output by the first Turbo equalizer 20 is supplied to the second Turbo equalizer 20 as the first data block of the second Turbo equalizer, the third data block output by the second Turbo equalizer 20 is supplied to the third Turbo equalizer 20 as the first data block of the third Turbo equalizer, and so on.
  • The third data block output by the last Turbo equalizer 20 is supplied to the first LDPC convolutional code decoding unit 22 as the second data block on which the first LDPC convolutional code decoding unit 22 performs iterative decoding, so that it outputs an iteratively decoded third data block; the third data block output by the first LDPC convolutional code decoding unit 22 in turn serves as the second data block on which the second LDPC convolutional code decoding unit 22 performs iterative decoding, and so on.
  • Optionally, the Turbo equalizers 20 and the LDPC convolutional code decoding units 22 in the Turbo equalizer system may also be interleaved with one another; the stages are iterated in sequence, each stage taking as its input the output of the preceding processing module (Turbo equalizer 20 or LDPC convolutional code decoding unit 22).
  • At the transmitting end, the LDPC convolutional code encoder performs encoding, differential encoding is then applied, and the optical modulator sends the optical signal into the optical fiber transmission network; at the receiving end, after coherent detection, analog-to-digital converter sampling, and conventional signal equalization in the equalizer, the signal enters the Turbo equalizer system.
  • The Turbo equalizer system includes a primary Turbo equalizer (i.e., the Turbo equalizer connected to the conventional signal equalizer) and M subsequent Turbo equalizers. The primary Turbo equalizer differs from a subsequent Turbo equalizer in how the start-symbol state values are set when the OP-BCJR unit operates.
  • The start-symbol state values of the OP-BCJR unit in the primary Turbo equalizer are equal-probability-distribution state values.
  • The start-symbol state values of the OP-BCJR unit in a subsequent Turbo equalizer are the state values at those bits computed by the previous-stage OP-BCJR unit and read from memory.
  • Both the primary Turbo equalizer and the subsequent Turbo equalizers consist of an OP-BCJR unit and an LDPC convolutional code decoding unit, as shown in FIG. 9.
  • the Turbo equalizer system shown in Figure 8 also includes N independent LDPC convolutional code decoding units. It can be understood that the positions of the M subsequent turbo equalizers and the N LDPC convolutional code decoding units in the turbo equalizer system may not be limited to the connection manner shown in FIG. 8, and may be indirectly interleaved.
  • FIG. 9 through FIG. 11 illustrate the working principle of the Turbo equalizer.
  • As shown in FIG. 9, inside the LDPC convolutional code decoding unit, C_1, C_2, C_3, ..., C_T together form a codeword sequence that must satisfy the check relation of the k-th layer of the parity-check matrix of the LDPC convolutional code, and decoding and soft-information computation are performed according to that layer's check relation.
  • At the same time, the OP-BCJR unit segments the received data block C_0 according to the state bits in the trellis diagram, with D bits of overlap between adjacent segments, and the n overlapping data segments are sent to n segment processing units (BPU_1 to BPU_n) for BCJR operation processing (consisting of forward and/or backward recursive operations; see FIG. 11 and the related description).
  • When the LDPC convolutional code decoding unit has finished updating the soft information of C_1 through C_T, it outputs data block C_T to the next-stage Turbo equalizer.
  • At the same time, it receives the processed data block C_0 from the OP-BCJR unit of this stage.
  • C_0, together with C_1, C_2, C_3, ..., C_{T-1}, which are still in the LDPC convolutional code decoder unit, forms a codeword sequence that must satisfy the check relation of the next layer (one layer beyond the k-th) of the parity-check matrix of the LDPC convolutional code, and decoding processing and soft-information computation are performed according to that layer's check relation.
  • This Turbo iterative process is shown as a timing diagram in FIG. 10, taking T = 4 as an example. At the first time instant, in the LDPC convolutional code decoding unit of the (i-1)-th stage, C_1, C_2, C_3, and C_4 together form a codeword sequence that must satisfy the check relation of the (k+3)-th layer of the parity-check matrix H of the LDPC convolutional code, and decoding processing and soft-information computation are performed according to that layer's check relation; at the same time, the OP-BCJR unit in the same (i-1)-th stage performs BCJR parallel operation processing, by overlapped segments, on the received data block C_0.
  • At the second time instant, the LDPC convolutional code decoding unit in the (i-1)-th stage outputs data block C_4 to the i-th stage Turbo equalizer.
  • At the same time, it receives the processed data block C_0 from the OP-BCJR unit of this stage.
  • C_0, together with C_1, C_2, and C_3, which are still in the LDPC convolutional code decoder unit, forms a codeword sequence that must satisfy the check relation of the (k+4)-th layer of the parity-check matrix H, and decoding and soft-information computation are performed according to that layer's check relation.
  • The specific processing of the overlapped parallel OP-BCJR unit is shown in FIG. 11. The data block C_0 is divided in advance into several mutually overlapping segments, and a BPU module is responsible for updating the information of each segment, as indicated by BPU_1, BPU_2, and BPU_3 at the bottom of the figure.
  • The bit segment for which each BPU module actually needs to update the a posteriori soft information is the first part of the BPU_1, BPU_2, and BPU_3 modules, denoted BPU_1-1, BPU_2-1, and BPU_3-1, respectively.
  • The overlap regions whose state values are only updated using the state values obtained by the adjacent segment in the previous iteration are the second part (whose start-symbol state values use the forward state values obtained by the previous segment in the previous iteration) and the third part (whose start-symbol state values use the backward state values obtained by the following segment in the previous iteration) of the BPU_1, BPU_2, and BPU_3 modules, denoted BPU_1-2, BPU_2-2, BPU_3-2 and BPU_1-3, BPU_2-3, BPU_3-3, respectively.
  • The processing flow of the OP-BCJR unit is as follows: (1) In each BPU module, the forward state value of the start symbol of the bit segment overlapping the previous segment (the second part, shown as hollow small blocks on the bit axis) is read from memory, and the backward state value of the start symbol of the bit segment overlapping the following segment (the third part, shown as solid small blocks on the bit axis) is read from memory; in the OP-BCJR unit of the primary Turbo equalizer, the corresponding start-symbol state values are equal-probability-distribution state values instead. (2) Each BPU module performs an overlapped forward recursive operation over the overlapping bit segment of the second part (dotted line in the figure) up to the end bit of that segment, and an overlapped backward recursive operation over the overlapping bit segment of the third part (dashed line in the figure) up to the end bit of that segment. (3) Taking the end bits of the second-part and third-part segments as start symbols, each BPU module performs a forward recursive operation and a backward recursive operation over the first-part bit segment that it is actually responsible for updating, and computes the a posteriori soft information of each bit from the resulting forward and backward state values. (4) Each BPU module saves the forward and backward state values of the start symbols of the second and third parts, which overlap the adjacent bit segments, for use in the operation of the next-stage OP-BCJR unit.
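The four-step BPU flow above can be summarised in a schematic routine. The recursion and soft-output operations are passed in as callables because their details depend on the trellis; the region names (`y_lead`, `y_core`, `y_trail`) and the exact handling of the stored boundary states are our assumptions rather than the patent's specification.

```python
# Schematic of the per-segment (BPU) flow; states may be any representation the
# supplied callables understand (scalars in the toy demo below).
def bpu_process(y_core, y_lead, y_trail,
                fwd_state_in, bwd_state_in,
                forward_step, backward_step, soft_output):
    """
    y_lead  : D-symbol region shared with the previous segment (part 2)
    y_core  : the segment's own region whose soft information is updated (part 1)
    y_trail : D-symbol region shared with the next segment (part 3)
    fwd_state_in / bwd_state_in : boundary states read from memory, or an
        equiprobable distribution in the primary equalizer.
    """
    # (1)-(2) warm-up recursions over the overlap regions only
    a = fwd_state_in
    for y in y_lead:
        a = forward_step(a, y)
    b = bwd_state_in
    for y in reversed(y_trail):
        b = backward_step(b, y)

    # (3) full forward and backward recursions over the segment's own region,
    #     then a posteriori soft information per symbol
    alphas, betas = [a], [b]
    for y in y_core:
        alphas.append(forward_step(alphas[-1], y))
    for y in reversed(y_core):
        betas.insert(0, backward_step(betas[0], y))
    llr = [soft_output(alphas[k], betas[k + 1], y) for k, y in enumerate(y_core)]

    # (4) boundary entries of alphas/betas would be stored for the
    #     next-stage OP-BCJR unit, following step (4) of the text
    return llr, alphas, betas

# Trivial demo with identity-style placeholders (states are plain scalars here):
llr, alphas, betas = bpu_process(
    y_core=[0.1, -0.2, 0.3], y_lead=[0.0], y_trail=[0.0],
    fwd_state_in=0.0, bwd_state_in=0.0,
    forward_step=lambda s, y: s + y, backward_step=lambda s, y: s + y,
    soft_output=lambda a, b, y: a + b + y)
```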
  • Fig. 11 describes the process of performing forward recursive processing and backward recursive processing for each data segment (i.e., BPU module). It should be understood that for the recursive processing, only forward recursive processing or backward recursive processing may be performed for each data segment (ie, the BPU module); or, some data segments may be forward recursively processed, and another partial data block may be processed. Perform backward recursive processing.
  • This embodiment segments the received data block and applies forward and backward recursive operations in the OP-BCJR unit, and performs Turbo iterative processing in the LDPC convolutional code decoding unit on the data obtained from the OP-BCJR unit, which effectively improves the throughput of Turbo equalization compensation and reduces the required storage resources.
  • Figure 12 illustrates another embodiment of a turbo equalizer system in accordance with an embodiment of the present invention.
  • The output signal of the conventional equalizer first passes through a channel estimation unit (the conditional transition probability distribution estimator in FIG. 12) to determine the channel estimation parameters (e.g., the channel's PDF distribution parameters and transition probability parameters) before entering the primary Turbo equalizer.
  • The conditional transition probability distribution to be used by the OP-BCJR unit in the primary Turbo equalizer is therefore estimated from the training sequence in the system; in other words, the nonlinear impairments and PMD-effect impairments arising in the fiber channel are compensated.
  • This embodiment likewise segments the received data block and applies forward and/or backward recursive operations in the OP-BCJR unit, and performs Turbo iterative processing in the LDPC convolutional code decoding unit on the data obtained from the OP-BCJR unit, which effectively improves the throughput of Turbo equalization compensation, reduces the required storage resources, and at the same time compensates the nonlinear and PMD-effect impairments arising in the fiber channel.
  • the disclosed systems, devices, and methods may be implemented in other ways.
  • the device embodiments described above are merely illustrative.
  • The division into units is merely a division by logical function; in actual implementation there may be other ways of dividing them, for example multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be electrical, mechanical or otherwise.
  • the units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solution of the embodiment.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the functions, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer readable storage medium.
  • The technical solution of the present invention, or the part of it that contributes over the prior art, may be embodied in the form of a software product stored in a storage medium and including a number of instructions that cause a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the various embodiments of the present invention.
  • The foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.


Abstract

Embodiments of the present invention relate to a method for implementing Turbo equalization compensation, and to an equalizer and a system. The method for implementing Turbo equalization compensation includes: dividing a first data block into n data segments, where two adjacent data segments of the n data segments overlap by D bits; recursively processing each of the n data segments; combining the n recursively processed data segments to obtain a second data block; and iteratively decoding the second data block to output a third data block, where the data lengths of the first data block, the second data block, and the third data block are all 1/T of the code length of the LDPC convolutional code. Embodiments of the present invention are applied at the receiving end of a high-speed optical fiber transmission system. By segmenting the received data block and applying forward and backward recursive operations in the OP-BCJR unit, and by performing Turbo iterative processing in the LDPC convolutional code decoding unit on the data obtained from the OP-BCJR unit, system throughput can be effectively increased.

Description

Method for implementing Turbo equalization compensation, and Turbo equalizer and system

Technical field
The present invention relates to the field of optical communications, and in particular to a method for implementing Turbo equalization compensation, a Turbo equalizer, and a Turbo equalizer system.

Background
At present, transmission rates in high-speed optical fiber transmission systems keep increasing, for example from 40 Gb/s to 100 Gb/s and even to 400 Gb/s. However, various effects in the optical fiber transmission system, such as nonlinear effects, polarization mode dispersion (PMD), and the differential-encoding penalty, already severely limit the transmission distance of high-speed optical fiber transmission systems. It is well known that the action of these impairments can be described by a trellis diagram, so they can be compensated to a certain extent by the forward-backward recursive BCJR (Bahl, Cocke, Jelinek and Raviv) compensation algorithm.
To further mitigate the limits these effects impose on the transmission distance of high-speed optical fiber transmission systems, Turbo equalization at the receiving end has been proposed, that is, system performance is improved through interactive iterations between multiple soft-information processing modules. For example, iterating between a low-density parity-check (LDPC) decoder and a BCJR module compensates the differential-encoding effect, nonlinear effects, the PMD effect, and so on. By compensating the impairments in the channel, such Turbo equalization can greatly improve system performance. Here, the soft information of a bit is the probability that the decision for this bit is 0 or 1; to simplify computation, the logarithm of the ratio of the probability of deciding 0 to the probability of deciding 1 is usually used.
In the prior art, the BCJR module operating on the LDPC codeword uses a conventional serial-structure sliding-window BCJR. In general, the LDPC codewords used in optical communication are tens of thousands of bits long, and the entire LDPC codeword must be stored in the BCJR module, so the BCJR module must have very large storage resources. Moreover, the feedback-structured Turbo equalizer, the huge LDPC codeword, and the complex BCJR module all limit system throughput.
It follows that, in ultra-100G high-speed optical fiber communication systems, the above Turbo equalization approach cannot support large-capacity, high-speed transmission at throughputs above 100 Gb/s.

Summary of the invention
The present invention proposes a method for implementing Turbo equalization compensation, together with an equalizer and a system, aiming to solve the problem of limited throughput when Turbo equalization compensation is implemented in a high-speed optical fiber transmission system.
According to a first aspect, a method for implementing Turbo equalization compensation is provided, including: dividing a first data block into n data segments, where two adjacent data segments of the n data segments overlap by D bits, n is a positive integer greater than or equal to 2, and D is a positive integer greater than or equal to 1; recursively processing each of the n data segments; combining the n recursively processed data segments to obtain a second data block; and iteratively decoding the second data block to output a third data block, where the data lengths of the first data block, the second data block, and the third data block are all 1/T of the code length of the low-density parity-check (LDPC) convolutional code, and T is the number of layers of the staircase-shaped parity-check matrix of the LDPC convolutional code.
With reference to the first aspect, in a first implementation of the first aspect, recursively processing each of the n data segments includes: performing a forward recursive operation and a backward recursive operation on each of the n data segments in parallel.
With reference to the first aspect, in a second implementation of the first aspect, recursively processing each of the n data segments includes: performing a forward recursive operation on each of the n data segments in parallel.
With reference to the first aspect, in a third implementation of the first aspect, recursively processing each of the n data segments includes: performing a backward recursive operation on each of the n data segments in parallel.
With reference to the first aspect or its first, second, or third implementation, in a fourth implementation of the first aspect, iteratively decoding the second data block to output the third data block includes: receiving the second data block; decoding the received second data block together with T-1 other data blocks that have already been iteratively decoded, where the data length of each of those T-1 iteratively decoded data blocks is 1/T of the code length of the LDPC convolutional code; and outputting the third data block that has undergone the most decoding passes.
With reference to the first aspect or its first, second, third, or fourth implementation, in a fifth implementation of the first aspect, before the first data block is divided into n data segments, the method further includes: performing conditional transition probability distribution estimation on the first data block to determine the parameter information of the channel estimate.
According to a second aspect, a Turbo equalizer is provided, including: an overlapped parallel BCJR (OP-BCJR) unit, configured to divide a first data block into n data segments, where two adjacent data segments of the n data segments overlap by D bits, n is a positive integer greater than or equal to 2, and D is a positive integer greater than or equal to 1, to recursively process each of the n data segments, and to combine the n recursively processed data segments to obtain a second data block; and a low-density parity-check (LDPC) convolutional code decoding unit, connected to the OP-BCJR unit and configured to iteratively decode the second data block to output a third data block, where the data lengths of the first data block, the second data block, and the third data block are all 1/T of the code length of the LDPC convolutional code, and T is the number of layers of the staircase-shaped parity-check matrix of the LDPC convolutional code.
With reference to the second aspect, in a first implementation of the second aspect, the OP-BCJR unit includes: a segmentation module, configured to divide the first data block into n data segments, where two adjacent data segments of the n data segments overlap by D bits, n is a positive integer greater than or equal to 2, and D is a positive integer greater than or equal to 1; a recursive module, configured to recursively process each of the n data segments; and a merging module, configured to combine the n recursively processed data segments to obtain the second data block.
With reference to the first implementation of the second aspect, in a second implementation of the second aspect, the recursive module is specifically configured to perform a forward recursive operation and a backward recursive operation on each of the n data segments in parallel.
With reference to the first implementation of the second aspect, in a third implementation of the second aspect, the recursive module is specifically configured to perform a forward recursive operation on each of the n data segments in parallel.
With reference to the first implementation of the second aspect, in a fourth implementation of the second aspect, the recursive module is specifically configured to perform a backward recursive operation on each of the n data segments in parallel.
With reference to the second aspect or its first, second, third, or fourth implementation, in a fifth implementation of the second aspect, the LDPC convolutional code decoding unit includes: a receiving module, configured to receive the second data block; a decoding module, configured to decode the received second data block together with T-1 other data blocks that have already been iteratively decoded, where the data length of each of those T-1 iteratively decoded data blocks is 1/T of the code length of the LDPC convolutional code; and an output module, configured to output the third data block that has undergone the most decoding passes.
With reference to the second aspect or its first, second, third, fourth, or fifth implementation, in a sixth implementation of the second aspect, the Turbo equalizer further includes: a channel estimation unit, configured to perform conditional transition probability distribution estimation on the first data block before the OP-BCJR unit divides the first data block into n data segments, so as to determine the parameter information of the channel estimate.
According to a third aspect, a Turbo equalizer system is provided, including at least one Turbo equalizer, where each of the at least one Turbo equalizer includes: an OP-BCJR unit, configured to divide a first data block into n data segments, where two adjacent data segments of the n data segments overlap by D bits, n is a positive integer greater than or equal to 2, and D is a positive integer greater than or equal to 1, to recursively process each of the n data segments, and to combine the n recursively processed data segments to obtain a second data block; and an LDPC convolutional code decoding unit, connected to the OP-BCJR unit and configured to iteratively decode the second data block to output a third data block, where the data lengths of the first data block, the second data block, and the third data block are all 1/T of the code length of the low-density parity-check LDPC convolutional code, and T is the number of layers of the staircase-shaped parity-check matrix of the LDPC convolutional code.
According to a fourth aspect, a Turbo equalizer system is provided, including: at least one Turbo equalizer, where each of the at least one Turbo equalizer includes an overlapped parallel OP-BCJR unit, configured to divide a first data block into n data segments, where two adjacent data segments of the n data segments overlap by D bits, n is a positive integer greater than or equal to 2, and D is a positive integer greater than or equal to 1, to recursively process each of the n data segments, and to combine the n recursively processed data segments to obtain a second data block, and an LDPC convolutional code decoding unit, connected to the OP-BCJR unit and configured to iteratively decode the second data block to output a third data block, where the data lengths of the first data block, the second data block, and the third data block are all 1/T of the code length of the low-density parity-check LDPC convolutional code and T is the number of layers of the staircase-shaped parity-check matrix of the LDPC convolutional code; and at least one low-density parity-check LDPC convolutional code decoding unit, where each of the at least one LDPC convolutional code decoding unit receives the third data block output by one of the at least one Turbo equalizer and iteratively decodes the third data block to output a fourth data block, the data length of the fourth data block being 1/T of the code length of the LDPC convolutional code.
Embodiments of the present invention are applied at the receiving end of a high-speed optical fiber transmission system. By segmenting the received data block and applying forward and backward recursive operations in the OP-BCJR unit, and by performing Turbo iterative processing in the LDPC convolutional code decoding unit on the data obtained from the OP-BCJR unit, system throughput can be effectively increased.

Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings required for the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 is a flow chart of a method of Turbo equalization compensation according to an embodiment of the present invention.
FIG. 2 is a schematic structural diagram of a Turbo equalizer according to an embodiment of the present invention.
FIG. 3 is a schematic structural diagram of the OP-BCJR unit in a Turbo equalizer according to an embodiment of the present invention.
FIG. 4 is a schematic structural diagram of the LDPC convolutional code decoding unit in a Turbo equalizer according to an embodiment of the present invention.
FIG. 5 is a schematic structural diagram of a Turbo equalizer according to another embodiment of the present invention.
FIG. 6 is a schematic structural diagram of a Turbo equalizer system according to an embodiment of the present invention.
FIG. 7 is a schematic structural diagram of a Turbo equalizer system according to another embodiment of the present invention.
FIG. 8 is a structural diagram of a Turbo equalizer system according to a specific embodiment of the present invention.
FIG. 9 is a structural diagram of a Turbo equalizer according to a specific embodiment of the present invention.
FIG. 10 is a time-slot diagram of the iterative processing of the OP-BCJR unit in a Turbo equalizer according to a specific embodiment of the present invention.
FIG. 11 is a schematic diagram of the specific processing procedure of the OP-BCJR unit in a Turbo equalizer according to a specific embodiment of the present invention.
FIG. 12 is a structural diagram of a Turbo equalizer system according to another specific embodiment of the present invention.

Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
The technical solutions of the present invention may be applied to various communication systems, for example: the Global System for Mobile Communications (GSM), Code Division Multiple Access (CDMA) systems, Wideband Code Division Multiple Access (WCDMA), the General Packet Radio Service (GPRS), Long Term Evolution (LTE), and so on.
A user equipment (UE), which may also be referred to as a mobile terminal or a mobile station, may communicate with one or more core networks via a radio access network (RAN). The UE exchanges voice and/or data with the radio access network.
A base station may be a base transceiver station (BTS) in GSM or CDMA, a Node B in WCDMA, or an evolved Node B (eNB or e-NodeB) in LTE. In addition, one base station may support/manage one or more cells; when a UE needs to communicate with the network, it selects a cell to initiate network access.
To solve the problem of limited system throughput when implementing Turbo equalization compensation in a high-speed optical fiber transmission system, embodiments of the present invention provide a method for implementing Turbo equalization compensation that is applied at the receiving end of the high-speed optical fiber transmission system.
For example, at the transmitting end, after the transmit signal passes through a framing unit (framer) in an optical transport unit (OTU), it is successively encoded with a convolutional code in an LDPC convolutional code encoder and differentially encoded in a differential encoder, and finally an optical modulator sends the optical signal into the optical fiber transmission network. At the receiving end, the optical signal undergoes coherent detection, analog-to-digital converter (ADC) sampling, and conventional signal equalization, then enters the Turbo equalizer system where Turbo equalization compensation is implemented, and finally passes through a deframing unit (deframer) in the OTU to form the received signal.
In the embodiments of the present invention, the Turbo equalizer system may include at least one Turbo equalizer; for example, each Turbo equalizer includes an OP-BCJR unit and an LDPC convolutional code decoding unit. In addition, the Turbo equalizer system may also include at least one independent LDPC convolutional code decoding unit.
The method for implementing Turbo equalization compensation according to an embodiment of the present invention is described below, taking a Turbo equalizer system that includes one Turbo equalizer as an example, with reference to the following steps.
S11: The OP-BCJR unit in the Turbo equalizer divides a first data block into n data segments, where two adjacent data segments of the n data segments overlap by D bits, n is a positive integer greater than or equal to 2, and D is a positive integer greater than or equal to 1; recursively processes each of the n data segments; and combines the n recursively processed data segments to obtain a second data block.
Here, the data lengths of the first data block and the second data block are both 1/T of the code length of the LDPC convolutional code, where T is the number of layers of the staircase-shaped parity-check matrix of the LDPC convolutional code. In addition, the state values of the start symbols of the overlapping D bits follow an equal probability distribution, which means that, at that bit position, every possible state is assigned the same probability.
Here, the code length of the LDPC convolutional code refers to the data length that satisfies one layer of the check relation. "Satisfying one layer of the check relation" means that x * H_i^T = 0 for i = 1, 2, ..., T, where x is the hard-decision bit data satisfying the relation and H_i^T is the transpose of the i-th layer H_i of the parity-check matrix of the LDPC convolutional code. H_1 through H_T constitute the parity-check matrix of the LDPC convolutional code.
That is, T indicates that a total of T codeword blocks are combined to jointly satisfy the check relation of the LDPC convolutional code. T is determined by the staircase structure parameter (i.e., the number of layers) of the parity-check matrix H of the LDPC convolutional code. For example, suppose the number of columns by which the i-th layer H_i and the (i+1)-th layer H_{i+1} of the parity-check matrix of the LDPC convolutional code are shifted from each other is N_T, and the number of columns in each layer of the parity-check matrix is N; N_T and N are generally taken as constants, so T = N / N_T.
It follows that, because the data length of each block processed in the Turbo equalizer is only 1/T of the code length of the LDPC convolutional code, the storage resources required by the OP-BCJR unit can be reduced.
Further, recursively processing each of the n data segments may include performing a forward recursive operation and a backward recursive operation on each of the n data segments in parallel. It may also include performing only a forward recursive operation on each of the n data segments in parallel, or performing only a backward recursive operation on each of the n data segments in parallel. Optionally, it may include performing forward recursive processing on some of the n data segments and backward recursive processing on the remaining data segments.
In addition, when performing the forward and backward recursive operations, the OP-BCJR unit needs the probability density function (PDF) distribution parameters and the transition probability parameters of the channel. In some scenarios these parameters are known in advance, while in other application scenarios they can only be obtained through channel estimation. Therefore, before the first data block is divided into n data segments, conditional transition probability distribution estimation must also be performed on the first data block to determine the parameter information of the channel estimate.
S12: The LDPC convolutional code decoding unit in the Turbo equalizer iteratively decodes the second data block to output a third data block. Here, the data length of the third data block is also 1/T of the code length of the LDPC convolutional code.
Because the data blocks processed in the Turbo equalizer have a length of only 1/T of the code length, system throughput can be effectively increased.
Specifically, iteratively decoding the second data block to output the third data block includes: receiving the second data block; decoding the received second data block together with T-1 other data blocks that have already been iteratively decoded; and outputting the third data block that has undergone the most decoding passes. Here, the data length of each of the other T-1 iteratively decoded data blocks is 1/T of the code length of the LDPC convolutional code.
As can be seen from the above, embodiments of the present invention are applied at the receiving end of a high-speed optical fiber transmission system. By segmenting the received data block and applying forward and backward recursive operations in the OP-BCJR unit, and by performing Turbo iterative processing in the LDPC convolutional code decoding unit on the data obtained from the OP-BCJR unit, system throughput can be effectively increased.
FIG. 2 shows a Turbo equalizer implementing Turbo equalization compensation according to an embodiment of the present invention. The method for implementing Turbo equalization compensation described above is explained in detail below in conjunction with the Turbo equalizer of FIG. 2.
In FIG. 2, the Turbo equalizer 20 includes an OP-BCJR unit 21 and an LDPC convolutional code decoding unit 22, where:
the OP-BCJR unit 21 is configured to divide a first data block into n data segments, where two adjacent data segments of the n data segments overlap by D bits, n is a positive integer greater than or equal to 2, and D is a positive integer greater than or equal to 1, to recursively process each of the n data segments, and to combine the n recursively processed data segments to obtain a second data block;
the LDPC convolutional code decoding unit 22 is connected to the OP-BCJR unit 21 and is configured to iteratively decode the second data block to output a third data block.
The data lengths of the first data block, the second data block, and the third data block are all 1/T of the code length of the low-density parity-check LDPC convolutional code, where T is the number of layers of the staircase-shaped parity-check matrix of the LDPC convolutional code.
Further, as shown in FIG. 3, the OP-BCJR unit 21 may include a segmentation module 211, a recursive module 212, and a merging module 213, where:
the segmentation module 211 is configured to divide the first data block into n data segments, where two adjacent data segments of the n data segments overlap by D bits, n is a positive integer greater than or equal to 2, and D is a positive integer greater than or equal to 1; the recursive module 212 is configured to recursively process each of the n data segments;
the merging module 213 is configured to combine the n recursively processed data segments to obtain the second data block.
Specifically, the recursive module 212 is configured to perform a forward recursive operation and a backward recursive operation on each of the n data segments in parallel; or to perform a forward recursive operation on each of the n data segments in parallel; or to perform a backward recursive operation on each of the n data segments in parallel. Optionally, the recursive module 212 may also be configured to perform forward recursive processing on some of the n data segments and backward recursive processing on the remaining data segments.
Further, as shown in FIG. 4, the LDPC convolutional code decoding unit 22 may include a receiving module 221, a decoding module 222, and an output module 223, where:
the receiving module 221 is configured to receive the second data block;
the decoding module 222 is configured to decode the received second data block together with T-1 other data blocks that have already been iteratively decoded, where the data length of each of those T-1 iteratively decoded data blocks is 1/T of the code length of the LDPC convolutional code;
the output module 223 is configured to output the third data block that has undergone the most decoding passes.
As can be seen from the above, embodiments of the present invention are applied at the receiving end of a high-speed optical fiber transmission system. By segmenting the received data block and applying forward and backward recursive operations in the OP-BCJR unit, and by performing Turbo iterative processing in the LDPC convolutional code decoding unit on the data obtained from the OP-BCJR unit, system throughput can be effectively increased.
In the Turbo equalizer 50 shown in FIG. 5, in addition to the OP-BCJR unit 21 and the LDPC convolutional code decoding unit 22, a channel estimation unit 23 is included, which is configured to perform conditional transition probability distribution estimation on the first data block before the OP-BCJR unit divides the first data block into n data segments, so as to determine the parameter information of the channel estimate.
In this way, the channel PDF distribution parameters and transition probability parameters that the OP-BCJR unit needs when performing the forward and/or backward recursive operations can be obtained through channel estimation.
The above embodiments all take a single Turbo equalizer in the Turbo equalizer system as an example. In fact, to achieve a better Turbo equalization compensation effect, the Turbo equalizer system usually includes at least one Turbo equalizer as described above. Alternatively, the Turbo equalizer system may include at least one Turbo equalizer as described above and at least one LDPC convolutional code decoding unit as described above, where the relative positions of the Turbo equalizers and the LDPC convolutional code decoding units may be varied arbitrarily and are not limited. Data blocks whose data length is 1/T of the code length of the convolutional code can then pass sequentially through multiple stages of BCJR and LDPC convolutional code decoding; because the OP-BCJR units and LDPC convolutional code decoding units are connected in series, each data block undergoes iterative Turbo equalization processing.
The Turbo equalizer system 60 shown in FIG. 6 includes at least one Turbo equalizer 20 shown in FIG. 2.
The Turbo equalizer system 70 shown in FIG. 7 includes at least one Turbo equalizer 20 shown in FIG. 2 and at least one LDPC convolutional code decoding unit 22, where one LDPC convolutional code decoding unit of the at least one LDPC convolutional code decoding unit receives the third data block output by one Turbo equalizer of the at least one Turbo equalizer or by another LDPC convolutional code decoding unit of the at least one LDPC convolutional code decoding unit, and performs iterative decoding on the third data block to output a fourth data block, where the data length of the fourth data block is 1/T of the code length of the LDPC convolutional code.
For example, according to the schematic structural diagram of the Turbo equalizer system in FIG. 7, multiple Turbo equalizers 20 are connected in cascade and then connected to one or more LDPC convolutional code decoding units 22. That is, the third data block output by the first Turbo equalizer 20 is provided to the second Turbo equalizer 20 as the first data block of the second Turbo equalizer, the third data block output by the second Turbo equalizer 20 is provided to the third Turbo equalizer 20 as the first data block of the third Turbo equalizer, and so on. Then, the third data block output by the last Turbo equalizer 20 is provided to the first LDPC convolutional code decoding unit 22 as the second data block on which the first LDPC convolutional code decoding unit 22 performs iterative decoding so as to output an iteratively decoded third data block; the third data block output by the first LDPC convolutional code decoding unit 22 serves as the second data block on which the second LDPC convolutional code decoding unit 22 performs iterative decoding so as to output an iteratively decoded third data block; and so on. A minimal sketch of this cascading follows.
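For illustration only, the following Python sketch shows the chaining described above with caller-supplied placeholder callables standing in for the Turbo equalizer stages and the LDPC convolutional code decoding stages; the names run_chain, turbo_stages, and ldpcc_stages are assumptions of the sketch.

```python
def run_chain(block, turbo_stages, ldpcc_stages):
    """Cascade sketch for FIG. 7: each stage's output block is the next
    stage's input block; all stage callables are placeholders."""
    for turbo in turbo_stages:   # third data block -> next equalizer's first data block
        block = turbo(block)
    for dec in ldpcc_stages:     # second data block -> iteratively decoded output block
        block = dec(block)
    return block

# Toy usage with identity stages standing in for real equalizers/decoders.
print(run_chain([0.3, -1.2, 0.8],
                turbo_stages=[lambda b: b] * 3,
                ldpcc_stages=[lambda b: b] * 2))
```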
Optionally, the Turbo equalizers 20 and the LDPC convolutional code decoding units 22 in the Turbo equalizer system may also be connected in an interleaved manner. It can be learned from the above that the output of each preceding processing module (Turbo equalizer 20 or LDPC convolutional code decoding unit 22) serves as the input of the following processing module, and the iterations proceed in sequence.
The following describes the working principle of the Turbo equalizer system in detail with reference to FIG. 8.
As shown in FIG. 8, at the transmit end, encoding is performed by an LDPC convolutional code encoder, differential encoding is then performed, and an optical modulator sends the optical signal into the optical fiber transmission network; at the receive end, after coherent detection, analog-to-digital converter sampling, and conventional signal equalization by an equalizer, the signal enters the Turbo equalizer system.
The Turbo equalizer system includes a primary Turbo equalizer (that is, the Turbo equalizer connected to the conventional signal equalizer) and M subsequent Turbo equalizers, where the primary Turbo equalizer differs from the subsequent Turbo equalizers in how the starting-symbol state values are set for the operations of the OP-BCJR unit. For example, the starting-symbol state values of the OP-BCJR unit in the primary Turbo equalizer are equiprobable-distribution state values, whereas the starting-symbol state values of the OP-BCJR unit in a subsequent Turbo equalizer are the state values at those bits obtained by the operations of the previous-stage OP-BCJR unit and read from memory. Both the primary Turbo equalizer and the subsequent Turbo equalizers consist of one OP-BCJR unit and one LDPC convolutional code decoding unit, as shown in FIG. 9.
In addition, the Turbo equalizer system shown in FIG. 8 further includes N independent LDPC convolutional code decoding units. It can be understood that the positions of the M subsequent Turbo equalizers and the N LDPC convolutional code decoding units in the Turbo equalizer system are not limited to the connection manner shown in FIG. 8; they may also be interleaved with one another.
FIG. 9 to FIG. 11 together illustrate the working principle of the Turbo equalizer.
As shown in FIG. 9, in the LDPC convolutional code decoding unit, C1, C2, C3, ..., CT together form a codeword sequence that needs to satisfy the check relationship of the k-th layer of the parity-check matrix of the LDPC convolutional code, and decoding and soft-information computation are performed according to the check relationship of that layer. At the same time, the OP-BCJR unit segments the received data block C0 according to the state bits in the trellis diagram, with an overlap of D bits between adjacent segments, and the n overlapping data segments are respectively sent to n segment processing units (for example, BPU_1 to BPU_n) for BCJR operation processing (consisting of forward recursion operations and/or backward recursion operations; see FIG. 11 and the related description for details).
After the LDPC convolutional code decoding unit has completed the soft-information updates of C1, C2, C3, ..., CT, it outputs the data block CT to the next-stage Turbo equalizer. At the same time, it receives the processed data block C0 from the OP-BCJR unit of the current stage; C0 and C1, C2, C3, ..., C(T-1), which are still in the LDPC convolutional code decoding unit, together form a codeword sequence that needs to satisfy the check relationship of the (k+1)-th layer of the parity-check matrix of the LDPC convolutional code, and decoding processing and soft-information computation are performed according to the check relationship of that layer.
The foregoing Turbo iterative processing procedure is represented by the timing diagram shown in FIG. 10, using T = 4 as an example. At the first moment, in the LDPC convolutional code decoding unit of the (i-1)-th stage Turbo module, C1, C2, C3, and C4 together form a codeword sequence that needs to satisfy the check relationship of layer Hc,3 of the parity-check matrix Hc of the LDPC convolutional code, and decoding processing and soft-information computation are performed according to the check relationship of that layer; at the same time, the OP-BCJR unit in the same (i-1)-th stage performs overlapped-segment parallel BCJR operation processing on the received data block C0.
At the second moment, the LDPC convolutional code decoding unit in the (i-1)-th stage outputs the data block C4 to the i-th stage Turbo equalizer. At the same time, it receives the processed data block C0 from the OP-BCJR unit of the current stage; C0 and C1, C2, and C3, which are still in the LDPC convolutional code decoding unit, together form a codeword sequence that needs to satisfy the check relationship of layer Hc,4 of the parity-check matrix Hc of the LDPC convolutional code, and decoding and soft-information computation are performed according to the check relationship of that layer.
The specific processing procedure of the overlapped parallel OP-BCJR unit is shown in FIG. 11. The data block C0 is divided into multiple segments in advance, with overlaps between the segments; the BPU modules are responsible for the information updates of the respective segments, such as BPU_1, BPU_2, and BPU_3 marked at the bottom of FIG. 11. The bit segment for which each BPU module is truly responsible for updating a-posteriori soft information is the first part of the BPU_1, BPU_2, and BPU_3 modules, namely BPU_1-1, BPU_2-1, and BPU_3-1, respectively. The overlapping parts, which use the state values obtained by the adjacent segments in the previous iteration and only update state values, are the second part of the BPU_1, BPU_2, and BPU_3 modules (in this part, the state value of the starting symbol uses the forward state value obtained by the preceding segment in the previous iteration) and the third part (in this part, the state value of the starting symbol uses the backward state value obtained by the following segment in the previous iteration), namely BPU_1-2, BPU_2-2, BPU_3-2 and BPU_1-3, BPU_2-3, BPU_3-3, respectively.
The processing flow of the OP-BCJR unit is as follows: (1) in each BPU module, read from memory the forward state value of the starting symbol (shown as a hollow small block on the bit axis) of the bit segment overlapping the preceding segment (the second part), and read the backward state value of the starting symbol (shown as a solid small block on the bit axis) of the bit segment overlapping the following segment (the third part); if it is the OP-BCJR unit in the primary Turbo equalizer, the corresponding starting-symbol state values are equiprobable-distribution state values; (2) each BPU module performs an overlapped forward recursion operation on the overlapping bit segment of the second part (shown by the dotted line in the figure) up to the end bit of the second part, and performs an overlapped backward recursion operation on the overlapping bit segment of the third part (shown by the dashed line in the figure) up to the end bit of the third part; (3) taking the end bits of the second and third parts as starting symbols, each BPU module performs forward and backward recursion operations on the first part, that is, the bit segment it is truly responsible for updating, and computes the a-posteriori soft information of each bit from the resulting forward and backward state values; (4) each BPU module saves the forward and backward state values of the starting symbols of the second and third parts, which overlap the adjacent bit segments, for use in the operations of the next-stage OP-BCJR unit.
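For illustration only, the following Python sketch mirrors steps (1) to (4) for a single BPU module. The per-bit branch metrics, the mapping from state posteriors to per-bit a-posteriori soft information, and the exact boundary-state bookkeeping of FIG. 11 are left as placeholders or simplifications; the function name bpu_update and its arguments are assumptions of the sketch, not the embodiment's interfaces.

```python
import numpy as np

def bpu_update(gammas, part1, part2, part3, alpha_in, beta_in):
    """Structural sketch of one BPU in the overlapped parallel BCJR step.

    gammas  : list of S x S per-bit branch-metric matrices (placeholders here)
    part1   : indices of the bits this BPU truly updates (the "first part")
    part2   : indices overlapping the preceding segment (forward warm-up)
    part3   : indices overlapping the following segment (backward warm-up)
    alpha_in: stored forward state for the start of part 2 (uniform in the
              primary Turbo equalizer)
    beta_in : stored backward state for the far edge of part 3
    """
    # (1)-(2) warm-up recursions over the two overlap parts.
    alpha = np.asarray(alpha_in, dtype=float)
    for k in part2:                       # forward recursion across part 2
        alpha = alpha @ gammas[k]
        alpha /= alpha.sum()
    beta = np.asarray(beta_in, dtype=float)
    for k in reversed(part3):             # backward recursion across part 3
        beta = gammas[k] @ beta
        beta /= beta.sum()

    # (3) full recursions over part 1, starting from the warm-up results.
    alphas = [alpha]                      # alphas[j]: state distribution before bit part1[j]
    for k in part1:
        a = alphas[-1] @ gammas[k]
        alphas.append(a / a.sum())
    betas = [beta]                        # built backwards, then reversed below
    for k in reversed(part1):
        b = gammas[k] @ betas[-1]
        betas.append(b / b.sum())
    betas.reverse()                       # betas[j]: backward message before bit part1[j]
                                          # (last entry: after the final bit of part 1)

    # Posterior of the trellis state after each bit of part 1; turning these
    # (together with the branch metrics) into per-bit soft LLRs is omitted.
    post = []
    for j in range(len(part1)):
        p = alphas[j + 1] * betas[j + 1]
        post.append(p / p.sum())

    # (4) boundary states that neighbouring segments can read in the next
    # stage (bookkeeping simplified relative to FIG. 11).
    return post, alphas[-1], betas[0]

# Toy usage: 2-state trellis, uniform branch metrics, uniform starting states.
g = [np.full((2, 2), 0.25)] * 12
print(bpu_update(g, part1=range(4, 8), part2=range(0, 4), part3=range(8, 12),
                 alpha_in=[0.5, 0.5], beta_in=[0.5, 0.5])[0][0])
```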
In the above, the embodiment described with reference to FIG. 11 covers performing both forward recursion processing and backward recursion processing on each data segment (that is, each BPU module). It should be understood that, to simplify the recursion processing, only forward recursion processing or only backward recursion processing may be performed on each data segment (that is, each BPU module); alternatively, forward recursion processing may be performed on some of the data segments and backward recursion processing on the remaining data segments.
It can thus be seen that, by performing segmented processing and forward/backward recursion operations on the received data blocks in the OP-BCJR unit, and performing Turbo iterative processing in the LDPC convolutional code decoding unit on the data obtained from the OP-BCJR unit, this embodiment effectively improves the throughput of Turbo equalization compensation and reduces the required storage resources.
FIG. 12 shows another specific embodiment of a Turbo equalizer system according to an embodiment of the present invention. Here, the output signal of the prior-art equalizer passes through a channel estimation unit (the conditional transition probability distribution estimator in FIG. 12) to determine the channel estimation parameters (for example, the channel PDF distribution parameters and the transition probability parameters) before entering the primary Turbo equalizer. Accordingly, the conditional transition probability distribution to be used by the OP-BCJR unit in the primary Turbo equalizer is estimated from the training sequence in the system. That is, the nonlinear and PMD-effect impairments arising in the optical fiber channel are compensated.
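For illustration only, the following Python sketch shows one simple way such a conditional distribution could be estimated from a known training sequence, by conditional histograms of the received samples. The function estimate_conditional_pdf, its parameters, and the BPSK-like toy data are assumptions of the sketch and are not taken from the embodiment.

```python
import numpy as np

def estimate_conditional_pdf(tx_train, rx_train, symbols, bins=64, span=(-4.0, 4.0)):
    """Histogram-based estimate of P(received sample | transmitted symbol)
    from a known training sequence -- an illustrative stand-in for the
    conditional transition probability distribution estimator of FIG. 12."""
    edges = np.linspace(span[0], span[1], bins + 1)
    pdf = np.zeros((len(symbols), bins))
    for i, s in enumerate(symbols):
        hist, _ = np.histogram(rx_train[tx_train == s], bins=edges)
        pdf[i] = (hist + 1e-9) / (hist.sum() + bins * 1e-9)   # normalized, small floor
    return edges, pdf

# Toy usage: BPSK-like training symbols observed in Gaussian noise.
gen = np.random.default_rng(0)
tx = gen.choice([-1.0, 1.0], size=5000)
rx = tx + 0.5 * gen.normal(size=tx.size)
edges, pdf = estimate_conditional_pdf(tx, rx, symbols=[-1.0, 1.0])
print(pdf.shape)   # (2, 64): one conditional PDF estimate per transmitted symbol
```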
Obviously, this embodiment also performs segmented processing and/or forward/backward recursion operations on the received data blocks in the OP-BCJR unit and performs Turbo iterative processing in the LDPC convolutional code decoding unit on the data obtained from the OP-BCJR unit, thereby effectively improving the throughput of Turbo equalization compensation and reducing the required storage resources, while also compensating the nonlinear and PMD-effect impairments arising in the optical fiber channel.
It should be understood that the solution described in each claim of the present invention should also be regarded as an embodiment, and the features in the claims may be combined; for example, the steps of the different branches executed after a judgment step in the present invention may serve as different embodiments.
A person of ordinary skill in the art may be aware that the units and algorithm steps of the examples described with reference to the embodiments disclosed herein can be implemented by electronic hardware or by a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the particular application and design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but such implementation should not be considered to go beyond the scope of the present invention.
It may be clearly understood by a person skilled in the art that, for convenience and brevity of description, for the specific working processes of the systems, apparatuses, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not described here again.
In the several embodiments provided in this application, it should be understood that the disclosed system, apparatus, and method may be implemented in other manners. For example, the described apparatus embodiments are merely illustrative; for example, the division of the units is merely a logical function division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. Furthermore, the mutual couplings or direct couplings or communication connections shown or discussed may be implemented through some interfaces, and the indirect couplings or communication connections between apparatuses or units may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
When the functions are implemented in the form of a software functional unit and sold or used as an independent product, they may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present invention essentially, or the part contributing to the prior art, or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for instructing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The foregoing descriptions are merely specific implementations of the present invention, but the protection scope of the present invention is not limited thereto. Any variation or replacement readily conceivable by a person skilled in the art within the technical scope disclosed in the present invention shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims

1. A method for implementing Turbo equalization compensation, comprising:
dividing a first data block into n data segments, wherein two adjacent data segments of the n data segments overlap by D bits, n is a positive integer greater than or equal to 2, and D is a positive integer greater than or equal to 1, performing recursion processing on each of the n data segments separately, and combining the n data segments that have undergone the recursion processing, to obtain a second data block; and
performing iterative decoding on the second data block to output a third data block;
wherein the data lengths of the first data block, the second data block, and the third data block are all 1/T of the code length of a low-density parity-check (LDPC) convolutional code, and T is the number of layers of the staircase-shaped parity-check matrix of the LDPC convolutional code.
2. The method according to claim 1, wherein the performing recursion processing on each of the n data segments separately comprises:
performing a forward recursion operation and a backward recursion operation on each of the n data segments in parallel.
3. The method according to claim 1, wherein the performing recursion processing on each of the n data segments separately comprises:
performing a forward recursion operation on each of the n data segments in parallel.
4. The method according to claim 1, wherein the performing recursion processing on each of the n data segments separately comprises:
performing a backward recursion operation on each of the n data segments in parallel.
5. The method according to any one of claims 1 to 4, wherein the performing iterative decoding on the second data block to output a third data block comprises:
receiving the second data block;
performing decoding processing on the received second data block together with other T-1 data blocks that have already undergone iterative decoding, wherein the data length of each of the other T-1 data blocks that have already undergone iterative decoding is 1/T of the code length of the LDPC convolutional code; and
outputting, as the third data block, the data block that has undergone the most decoding passes.
6. The method according to any one of claims 1 to 5, further comprising, before the dividing a first data block into n data segments:
performing conditional transition probability distribution estimation on the first data block to determine parameter information of channel estimation.
7. A Turbo equalizer, comprising:
an overlapped parallel OP-BCJR unit, configured to segment a first data block into n data segments, wherein two adjacent data segments of the n data segments overlap by D bits, n is a positive integer greater than or equal to 2, and D is a positive integer greater than or equal to 1, perform recursion processing on each of the n data segments separately, and combine the n data segments that have undergone the recursion processing, to obtain a second data block; and
a low-density parity-check (LDPC) convolutional code decoding unit, connected to the OP-BCJR unit and configured to perform iterative decoding on the second data block to output a third data block;
wherein the data lengths of the first data block, the second data block, and the third data block are all 1/T of the code length of the LDPC convolutional code, and T is the number of layers of the staircase-shaped parity-check matrix of the LDPC convolutional code.
8. The Turbo equalizer according to claim 7, wherein the OP-BCJR unit comprises:
a segmentation module, configured to segment the first data block into n data segments, wherein two adjacent data segments of the n data segments overlap by D bits, n is a positive integer greater than or equal to 2, and D is a positive integer greater than or equal to 1;
a recursion module, configured to perform recursion processing on each of the n data segments separately; and a combining module, configured to combine the n data segments that have undergone the recursion processing, to obtain the second data block.
9. The Turbo equalizer according to claim 8, wherein the recursion module is specifically configured to:
perform a forward recursion operation and a backward recursion operation on each of the n data segments in parallel.
10. The Turbo equalizer according to claim 8, wherein the recursion module is specifically configured to:
perform a forward recursion operation on each of the n data segments in parallel.
11. The Turbo equalizer according to claim 8, wherein the recursion module is specifically configured to:
perform a backward recursion operation on each of the n data segments in parallel.
12. The Turbo equalizer according to any one of claims 8 to 11, wherein the LDPC convolutional code decoding unit comprises:
a receiving module, configured to receive the second data block;
a decoding module, configured to perform decoding processing on the received second data block together with other T-1 data blocks that have already undergone iterative decoding, wherein the data length of each of the other T-1 data blocks that have already undergone iterative decoding is 1/T of the code length of the LDPC convolutional code; and
an output module, configured to output, as the third data block, the data block that has undergone the most decoding passes.
13. The Turbo equalizer according to any one of claims 7 to 12, further comprising:
a channel estimation unit, configured to perform conditional transition probability distribution estimation on the first data block before the OP-BCJR unit divides the first data block into n data segments, to determine parameter information of channel estimation.
14. A Turbo equalizer system, comprising at least one Turbo equalizer, wherein each Turbo equalizer of the at least one Turbo equalizer comprises:
an overlapped parallel OP-BCJR unit, configured to segment a first data block into n data segments, wherein two adjacent data segments of the n data segments overlap by D bits, n is a positive integer greater than or equal to 2, and D is a positive integer greater than or equal to 1, perform recursion processing on each of the n data segments separately, and combine the n data segments that have undergone the recursion processing, to obtain a second data block; and
a low-density parity-check (LDPC) convolutional code decoding unit, connected to the OP-BCJR unit and configured to perform iterative decoding on the second data block to output a third data block;
wherein the data lengths of the first data block, the second data block, and the third data block are all 1/T of the code length of the LDPC convolutional code, and T is the number of layers of the staircase-shaped parity-check matrix of the LDPC convolutional code.
15. A Turbo equalizer system, comprising:
at least one Turbo equalizer, wherein each Turbo equalizer of the at least one Turbo equalizer comprises:
an overlapped parallel OP-BCJR unit, configured to segment a first data block into n data segments, wherein two adjacent data segments of the n data segments overlap by D bits, n is a positive integer greater than or equal to 2, and D is a positive integer greater than or equal to 1, perform recursion processing on each of the n data segments separately, and combine the n data segments that have undergone the recursion processing, to obtain a second data block, and a low-density parity-check (LDPC) convolutional code decoding unit, connected to the OP-BCJR unit and configured to perform iterative decoding on the second data block to output a third data block,
wherein the data lengths of the first data block, the second data block, and the third data block are all 1/T of the code length of the LDPC convolutional code, and T is the number of layers of the staircase-shaped parity-check matrix of the LDPC convolutional code; and
at least one low-density parity-check (LDPC) convolutional code decoding unit, wherein one LDPC convolutional code decoding unit of the at least one LDPC convolutional code decoding unit receives the third data block output by one Turbo equalizer of the at least one Turbo equalizer or by another LDPC convolutional code decoding unit of the at least one LDPC convolutional code decoding unit, and performs iterative decoding on the third data block to output a fourth data block, wherein the data length of the fourth data block is 1/T of the code length of the LDPC convolutional code.
PCT/CN2013/078570 2013-07-01 2013-07-01 实现Turbo均衡补偿的方法以及Turbo均衡器和系统 WO2015000100A1 (zh)

Priority Applications (5)

Application Number Priority Date Filing Date Title
ES13888637.9T ES2683084T3 (es) 2013-07-01 2013-07-01 Método de ecualización Turbo y sistema de ecualización Turbo
CN201380001159.9A CN103688502B (zh) 2013-07-01 2013-07-01 实现Turbo均衡补偿的方法以及Turbo均衡器和系统
EP13888637.9A EP3001570B1 (en) 2013-07-01 2013-07-01 Turbo equalization method and turbo equalization system
PCT/CN2013/078570 WO2015000100A1 (zh) 2013-07-01 2013-07-01 实现Turbo均衡补偿的方法以及Turbo均衡器和系统
US14/984,351 US10574263B2 (en) 2013-07-01 2015-12-30 Method for implementing turbo equalization compensation, turbo equalizer and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2013/078570 WO2015000100A1 (zh) 2013-07-01 2013-07-01 实现Turbo均衡补偿的方法以及Turbo均衡器和系统

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US14/984,351 Continuation US10574263B2 (en) 2013-07-01 2015-12-30 Method for implementing turbo equalization compensation, turbo equalizer and system

Publications (1)

Publication Number Publication Date
WO2015000100A1 true WO2015000100A1 (zh) 2015-01-08

Family

ID=50323339

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2013/078570 WO2015000100A1 (zh) 2013-07-01 2013-07-01 实现Turbo均衡补偿的方法以及Turbo均衡器和系统

Country Status (5)

Country Link
US (1) US10574263B2 (zh)
EP (1) EP3001570B1 (zh)
CN (1) CN103688502B (zh)
ES (1) ES2683084T3 (zh)
WO (1) WO2015000100A1 (zh)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9660845B2 (en) * 2015-10-06 2017-05-23 Huawei Technologies Co., Ltd. System and method for state reduction in trellis equalizers using bounded state enumeration
US9800437B2 (en) * 2016-03-16 2017-10-24 Northrop Grumman Systems Corporation Parallelizable reduced state sequence estimation via BCJR algorithm
JP2017175352A (ja) * 2016-03-23 2017-09-28 パナソニック株式会社 ターボ等化装置およびターボ等化方法
CN109687935B (zh) * 2017-10-18 2022-09-13 吕文明 译码方法和装置
CN110166171A (zh) * 2018-03-19 2019-08-23 西安电子科技大学 多元ldpc码基于ems的分段式补偿高性能译码方案
US10491432B1 (en) * 2018-10-01 2019-11-26 Huawei Technologies Co., Ltd. System and method for turbo equalization and decoding in a receiver
CN114073045B (zh) 2020-05-06 2023-08-04 华为技术有限公司 用于解码和均衡的设备和方法

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020174401A1 (en) * 2001-04-30 2002-11-21 Zhongfeng Wang Area efficient parallel turbo decoding
CN101060481A (zh) * 2007-02-05 2007-10-24 中兴通讯股份有限公司 一种Turbo码传输块的分段方法
CN101321043A (zh) * 2007-06-08 2008-12-10 大唐移动通信设备有限公司 低密度校验码编码的译码方法及译码装置
CN101442321A (zh) * 2007-12-27 2009-05-27 美商威睿电通公司 涡轮码的并行译码以及数据处理方法和装置
CN102340320A (zh) * 2011-07-08 2012-02-01 电子科技大学 卷积Turbo码双向并行译码方法

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
MY108838A (en) * 1992-07-03 1996-11-30 Koninklijke Philips Electronics Nv Adaptive viterbi detector
JP3798640B2 (ja) * 2001-03-02 2006-07-19 富士通株式会社 受信装置及び受信信号の波形劣化補償方法並びに波形劣化検出装置及び方法並びに波形測定装置及び方法
US7283694B2 (en) * 2001-10-09 2007-10-16 Infinera Corporation Transmitter photonic integrated circuits (TxPIC) and optical transport networks employing TxPICs
US7457538B2 (en) * 2002-05-15 2008-11-25 Nortel Networks Limited Digital performance monitoring for an optical communications system
WO2006039801A1 (en) * 2004-10-12 2006-04-20 Nortel Networks Limited System and method for low density parity check encoding of data
US8006163B2 (en) 2006-12-27 2011-08-23 Nec Laboratories America, Inc. Polarization mode dispersion compensation using BCJR equalizer and iterative LDPC decoding
US8185796B2 (en) 2008-08-20 2012-05-22 Nec Laboratories America, Inc. Mitigation of fiber nonlinearities in multilevel coded-modulation schemes
US8924811B1 (en) * 2010-01-12 2014-12-30 Lockheed Martin Corporation Fast, efficient architectures for inner and outer decoders for serial concatenated convolutional codes
CN101951266B (zh) * 2010-08-24 2013-04-24 中国科学院计算技术研究所 Turbo并行译码的方法及译码器
US8566665B2 (en) * 2011-06-24 2013-10-22 Lsi Corporation Systems and methods for error correction using low density parity check codes using multiple layer check equations
CN102725964B (zh) * 2011-11-17 2014-02-26 华为技术有限公司 一种编码方法、译码方法及编码装置、译码装置
WO2013097174A1 (zh) * 2011-12-30 2013-07-04 华为技术有限公司 前向纠错编、解码方法、装置及系统

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020174401A1 (en) * 2001-04-30 2002-11-21 Zhongfeng Wang Area efficient parallel turbo decoding
CN101060481A (zh) * 2007-02-05 2007-10-24 中兴通讯股份有限公司 一种Turbo码传输块的分段方法
CN101321043A (zh) * 2007-06-08 2008-12-10 大唐移动通信设备有限公司 低密度校验码编码的译码方法及译码装置
CN101442321A (zh) * 2007-12-27 2009-05-27 美商威睿电通公司 涡轮码的并行译码以及数据处理方法和装置
CN102340320A (zh) * 2011-07-08 2012-02-01 电子科技大学 卷积Turbo码双向并行译码方法

Also Published As

Publication number Publication date
EP3001570A1 (en) 2016-03-30
CN103688502B (zh) 2016-06-08
EP3001570B1 (en) 2018-06-06
EP3001570A4 (en) 2016-08-10
CN103688502A (zh) 2014-03-26
US10574263B2 (en) 2020-02-25
US20160112065A1 (en) 2016-04-21
ES2683084T3 (es) 2018-09-24


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13888637

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2013888637

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: DE