US20030097633A1 - High speed turbo codes decoder for 3G using pipelined SISO Log-Map decoders architecture - Google Patents

High speed turbo codes decoder for 3G using pipelined SISO Log-Map decoders architecture

Info

Publication number
US20030097633A1
Authority
US
United States
Prior art keywords
decoder
data
log
map
memory
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US10/065,408
Inventor
Quang Nguyen
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
IComm Tech Inc
Original Assignee
IComm Tech Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Family has litigation
First worldwide family litigation filed (https://patents.darts-ip.com/?family=24733789&utm_source=google_patent&utm_medium=platform_link&utm_campaign=public_patent_search&patent=US20030097633(A1)). "Global patent litigation dataset" by Darts-ip is licensed under a Creative Commons Attribution 4.0 International License.
Application filed by IComm Tech Inc
Priority to US10/065,408
Priority to US10/248,245 (US6799295B2)
Publication of US20030097633A1
Legal status: Abandoned

Classifications

    • H04L1/005 Iterative decoding, including iteration between signal detection and decoding operation
    • H03M13/2957 Turbo codes and decoding
    • H03M13/3905 Maximum a posteriori probability [MAP] decoding or approximations thereof based on trellis or lattice decoding, e.g. forward-backward algorithm, log-MAP decoding, max-log-MAP decoding
    • H03M13/3922 Add-Compare-Select [ACS] operation in forward or backward recursions
    • H03M13/3961 Arrangements of methods for branch or transition metric calculation
    • H03M13/6577 Representation or format of variables, register sizes or word-lengths and quantization
    • H04L1/0052 Realisations of complexity reduction techniques, e.g. pipelining or use of look-up tables
    • H04L1/0055 MAP-decoding
    • H04L1/0071 Use of interleaving

Abstract

The invention encompasses an improved Turbo Codes Decoder method and apparatus that provide a more suitable, practical and simpler way to implement a Turbo Codes Decoder in ASIC or DSP code. (1) Two pipelined Log-MAP decoders are used for iterative decoding of received data. (2) A Sliding Window of Block N data is used on the input Memory for pipeline operations. (3) The output block N data from the first decoder A are stored in RAM memory A, and the second decoder B stores output data in RAM memory B, such that in pipeline mode Decoder A decodes block N data from RAM memory B while Decoder B decodes block N data from RAM memory A in the same clock cycle. (4) Log-MAP decoders are simpler to implement in ASIC and DSP code, using only adder circuits, and have low power consumption. (5) The pipelined Log-MAP decoder architecture provides high-speed data throughput: one output per clock cycle.

Description

    CROSS REFERENCE TO RELATED APPLICATIONS
  • This is a continuation of patent application Ser. No. 09/681,093 filed Jan. 2, 2001.[0001]
  • BACKGROUND OF INVENTION
  • 1. Field of the Invention [0002]
  • This invention relates to wireless baseband processors and Forward Error-Correction (FEC) codes for 3G Wireless Mobile Communications; and more particularly, to a very high speed Turbo Codes Decoder using a pipelined Log-MAP decoder architecture for 3G CDMA2000 and 3G WCDMA. [0003]
  • 2. Description of Prior Art [0004]
  • Turbo Codes are based upon classic forward error correction concepts that include the use of recursive systematic constituent encoders (RSC) and an interleaver to reduce Eb/N0 for power-limited wireless applications such as digital 3G Wireless Mobile Communications. [0005]
  • A Turbo Codes Decoder is an important baseband processor of the digital wireless communication receiver; it is used to reconstruct the corrupted and noisy received data and to improve BER (10−6) throughput. FIG. 1 shows an example of a 3G receiver with a Turbo Codes Decoder 13, which decodes data from the Demodulator 11 and De-mapping 12 modules and sends decoded data to the MAC layer 14. [0006]
  • The most widely used FEC in both wired and wireless applications is the Viterbi Algorithm Decoder. Its drawback is that it requires a long wait for decisions until the whole sequence has been received; a delay of six times the memory of the received data is required for decoding. One of the more effective FECs, with higher complexity, is the MAP algorithm used to decode the received message; it is very computationally complex, requiring many multiplications and additions per bit to compute the posteriori probability. The major difficulty with the use of the MAP algorithm has been its implementation in semiconductor ASIC devices: the complexity of the multiplications and additions slows down the decoding process and reduces the throughput data rate. Furthermore, even under the best conditions, each multiplication used in the MAP algorithm creates large circuits in the ASIC. The result is costly and yields low performance in bit-rate throughput. [0007]
  • A new class of error correction codes recently introduced by the 3GPP organization, parallel concatenated codes (PCCC), which include the use of the classic recursive systematic constituent encoders (RSC) and an interleaver as shown in FIG. 3, offers great improvement. An example of the 3GPP Turbo Codes PCCC with 8 states and rate 1/3 is shown in FIG. 3: data enters the two systematic encoders 31 33 separated by an interleaver 32. An output codeword consists of the source data bit followed by the output bits of the two encoders. [0008]
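  • For illustration only (an assumption about one possible software model, not part of the claimed decoder), a minimal C sketch of one 8-state constituent RSC encoder of the 3GPP PCCC is given below, assuming the 3GPP TS 25.212 generator polynomials g0(D) = 1 + D^2 + D^3 (feedback) and g1(D) = 1 + D + D^3 (feedforward); the rate-1/3 codeword is the systematic bit u plus one parity bit from each of the two encoders, the second encoder being fed through the interleaver:

    /* One 8-state RSC constituent encoder of the 3GPP PCCC (illustrative sketch).
       The state is the three shift-register bits s0 (D^1), s1 (D^2), s2 (D^3). */
    typedef struct { unsigned s0, s1, s2; } rsc_state_t;

    static unsigned rsc_step(rsc_state_t *st, unsigned u)
    {
        unsigned fb     = u  ^ st->s1 ^ st->s2;   /* feedback taps: g0 = 1 + D^2 + D^3 */
        unsigned parity = fb ^ st->s0 ^ st->s2;   /* output taps:   g1 = 1 + D   + D^3 */
        st->s2 = st->s1;                          /* shift the register                */
        st->s1 = st->s0;
        st->s0 = fb;
        return parity;                            /* systematic bit u is sent as-is    */
    }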
  • Other prior work on error correction codes was done by Berrou et al., describing parallel concatenated codes whose much more complex encoding structure is not suitable for portable wireless devices. Another patent, U.S. Pat. No. 6,023,783 by Divsalar and Pollara, describes an encoding method improved over Berrou and the mathematical concepts of parallel concatenated codes. However, the patents by Berrou et al., Divsalar et al., and others only describe the concept of parallel concatenated codes using mathematical equations, which are good for research in deep-space communications and other government projects but are not feasible, economical, or suitable for consumers' portable wireless devices. The encoding of data is simple and can easily be implemented with a few XOR and flip-flop logic gates, but decoding the Turbo Codes is much more difficult to implement in ASIC or software. The prior art describes only briefly the implementation of Turbo Codes Decoders, which are mostly for deep-space communications and require much more hardware, power, and cost. [0009]
  • Another prior art example, a 16-state Superorthogonal Turbo Code (SOTC), is shown in FIG. 2. It is identical to the previous 3GPP Turbo Codes PCCC except that a Walsh Code Generator substitutes for the XOR binary adder. Data enters the two systematic encoders 21 23 separated by an interleaver 22. An output codeword consists of the two Walsh Code outputs of the two encoders. [0010]
  • All the prior art on Turbo Codes fails to provide a simpler and suitable method and architecture for a Turbo Codes Decoder as required and desired for 3G cellular phones and 3G personal communication devices, including high speed data throughput, low power consumption, lower cost, limited bandwidth, and a limited-power transmitter in a noisy environment. [0011]
  • SUMMARY OF INVENTION
  • The present invention concentrates only on the Turbo Codes Decoder, to implement a more efficient, practical and suitable architecture and method that achieves the requirements for 3G cellular phones and 3G personal communication devices, including higher speed data throughput, lower power consumption, lower cost, and suitability for implementation in ASIC or DSP code. The present invention encompasses an improved and simplified Turbo Codes Decoder method and apparatus to deliver higher speed and lower power, especially for 3G applications. As shown in FIG. 5 and FIG. 4, our Turbo Codes Decoder utilizes two pipelined and serially concatenated SISO Log-MAP Decoders. The two decoders function in a pipelined scheme: while the first decoder is decoding data in the second decoder's memory, the second decoder is decoding data in the first decoder's memory, which as a result produces a decoded output every clock cycle. As shown in FIG. 6, our Turbo Codes Decoder utilizes a Sliding Window of Block N on the input buffer memory to decode one block of N data at a time for improved processing efficiency. Accordingly, several objects and advantages of our Turbo Codes Decoder are: [0012]
  • To deliver higher speed throughput and lower power consumption [0013]
  • To utilize SISO Log-MAP decoders for faster decoding and simplified implementation in ASIC and DSP code, with the use of binary adders for computation. [0014]
  • To perform iterative decoding of data back and forth between the two Log-MAP decoders in a pipelined scheme until a decision is made. In such a pipelined scheme, a decoded output is produced each clock cycle. [0015]
  • To utilize a Sliding Window of Block N on the input buffer memory to decode one block of N data at a time for improved pipeline processing efficiency. [0016]
  • To achieve higher performance in terms of symbol error probability and low BER (10−6) for 3G applications such as 3G W-CDMA and 3G CDMA2000 operating at very high bit rates, up to 100 Mbps, in a low-power noisy environment. [0017]
  • To utilize a simplified and improved architecture of the SISO Log-MAP decoder, including a branch-metric (BM) calculation module, a recursive state-metric (SM) forward/backward calculation module, an Add-Compare-Select (ACS) circuit, a Log-MAP posteriori probability calculation module, and an output decision module. [0018]
  • To reduce the complexity of multiplier circuits in the MAP algorithm by performing the entire MAP algorithm in the Log-Max approximation with the use of binary adder circuits, which are more suitable for ASIC and DSP code implementation while still maintaining a high level of output performance (a brief illustrative sketch of this approximation follows this list). [0019]
  • To design an improved Log-MAP Decoder using a high-level design language (HDL) such as Verilog, SystemC, or VHDL, which can be synthesized into custom ASIC and FPGA devices. [0020]
  • To implement an improved Log-MAP Decoder in a DSP (digital signal processor) using an optimized high-level language such as C or C++, or assembly language. [0021]
  • Still further objects and advantages will become apparent to one skilled in the art from a consideration of the ensuing description and accompanying drawings. [0022]
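  • As a hedged illustration of the Log-Max approximation mentioned in the objects above (one possible software view, not the patented circuit itself): multiplications of probabilities become additions of log-likelihoods, and the log of a sum of exponentials is approximated by the larger operand, so only adders and comparators are needed:

    /* Max-Log approximation used throughout the decoder (sketch):
       ln(e^a + e^b) = max(a, b) + ln(1 + e^-|a-b|)  ~=  max(a, b),
       i.e. the small correction term is dropped, leaving add/compare hardware only. */
    static int max_log(int a, int b)
    {
        return (a > b) ? a : b;
    }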
  • BRIEF DESCRIPTION OF DRAWINGS
  • FIG. 1. is a typical 3G Receiver Functional Block Diagram which uses a Turbo Codes Decoder for error correction. (Prior Art). [0023]
  • FIG. 2. is an example of a 16-state Superorthogonal Turbo Code (SOTC) Encoder with a Walsh code generator. (Prior Art). [0024]
  • FIG. 3. is a block diagram of the 8-state 3GPP Parallel Concatenated Convolutional Codes. (Prior Art). [0025]
  • FIG. 4. is the Turbo Codes Decoder System Block Diagram showing Log-MAP Decoders, Interleavers, Memory Buffers, and control logics. [0026]
  • FIG. 5. is a Turbo Codes Decoder State Diagram. [0027]
  • FIG. 6. is the Block N Sliding Window Diagram. [0028]
  • FIG. 7. is a block diagram of the SISO Log-MAP Decoder showing the Branch Metric module, State Metric module, Log-MAP module, and State and Branch Memory modules. [0029]
  • FIG. 8a. is the 8-state Trellis Diagram of a SISO Log-MAP Decoder used for the 3GPP 8-state PCCC Turbo codes. [0030]
  • FIG. 8b. is the 16-state Trellis Diagram of a SISO Log-MAP Decoder used for the Superorthogonal Turbo codes (SOTC). [0031]
  • FIG. 9. is a block diagram of the BRANCH METRIC COMPUTING module. [0032]
  • FIG. 10a. is a block diagram of the Log-MAP computing for u=0. [0033]
  • FIG. 10b. is a block diagram of the Log-MAP computing for u=1. [0034]
  • FIG. 11. is a block diagram of the Log-MAP Compare & Select maximum logic for each state. [0035]
  • FIG. 12. is a block diagram of the Soft Decode module. [0036]
  • FIG. 13. is a block diagram of the Computation of Forward Recursion of State Metric module (FACS). [0037]
  • FIG. 14. is a block diagram of the Computation of Backward Recursion of State Metric module (BACS). [0038]
  • FIG. 15. showing State Metric Forward computing of Trellis state transitions. [0039]
  • FIG. 16. showing State Metric Backward computing of Trellis state transitions. [0040]
  • FIG. 17. is a block diagram of the State Machine operations of Log-MAP Decoder. [0041]
  • FIG. 18. is a block diagram of the BM dual-port Memory Module. [0042]
  • FIG. 19. is a block diagram of the SM dual-port Memory Module. [0043]
  • FIG. 20. is a block diagram of the De-Interleaver dual-port RAM Memory Module for interleaved input R2. [0044]
  • FIG. 21. is a block diagram of the dual-port RAM Memory Module for inputs R0, R1. [0045]
  • FIG. 24. is a block diagram of the intrinsic feedback Adder of the Turbo Codes Decoder. [0046]
  • FIG. 23. is a block diagram of the Iterative decoding feedback control.[0047]
  • DETAILED DESCRIPTION
  • Turbo Codes Decoder [0048]
  • An exhibition of a 3GPP 8-state Parallel Concatenated Convolutional Code (PCCC), with coding rate 1/3 and constraint length K=4, using SISO Log-MAP Decoders is provided for simplicity in the description of the invention. As shown in FIG. 4, a Turbo Codes Decoder has two concatenated Log-MAP SISO Decoders A 42 and B 44 connected in a feedback loop with dual-port Memory 43 and dual-port Memory 45 in between. An input interleaver Memory 41, shown in detail in FIG. 20, has one interleaver 201 and a dual-port RAM memory 202. Input Memory blocks 48 49, shown in detail in FIG. 21, have dual-port RAM memory 202. A control logic module (CLSM) 47 consists of various state machines which control all the operations of the Turbo Codes Decoder. The hard-decoder module 46 outputs the final decoded data. Signals R2, R1, R0 are the received soft decision data from the system receiver. Signals XO1 and XO2 are the output soft decisions of the Log-MAP Decoders A 42 and B 44 respectively, which are stored in the buffer Memory 43 and Memory 45 modules. Signals Z2 and Z1 are the outputs of the buffer Memory 43 and Memory 45, where Z2 is fed into Log-MAP decoder B 44, and Z1 is fed back through an Adder 231 into Log-MAP decoder A 42 for iterative decoding. [0049]
  • More particularly, R0 is the data bit corresponding to the transmit data bit u, R1 is the first parity bit corresponding to the output bit of the first RSC encoder, and R2 is the interleaved second parity bit corresponding to the output bit of the second RSC encoder, with reference to FIG. 3. [0050]
  • In accordance with the invention, the R0 data is added to the feedback Z1 data and then fed into decoder A, and R1 is also fed into decoder A, for the first stage of decoding, producing output XO1. Z2 and R2 are fed into decoder B for the second stage of decoding, producing output XO2. [0051]
  • In accordance with the invention, as shown in FIG. 6, the Turbo Codes Decoder utilizes a Sliding Window of Block N 61 on the input buffers 62 to decode one block of N data at a time; the next block of N data is decoded after the previous block is done, in a circular wrap-around scheme for pipeline operations (a small indexing sketch follows). [0052]
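  • A minimal software sketch (block size and buffer length are illustrative assumptions) of how the Sliding Window of Block N can index a circular input buffer so that the next block fills while the current block is being decoded:

    #define N_BLOCK  512                    /* block size N (illustrative)            */
    #define BUF_LEN  (4 * N_BLOCK)          /* circular input buffer capacity         */

    /* Physical buffer index of symbol k (0..N_BLOCK-1) of the block starting at
       'base'; after a block is decoded, base = (base + N_BLOCK) % BUF_LEN, so the
       window wraps around the buffer for continuous pipeline operation. */
    static unsigned win_index(unsigned base, unsigned k)
    {
        return (base + k) % BUF_LEN;
    }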
  • In accordance with the invention, the Turbo Codes Decoder decodes an 8-state Parallel Concatenated Convolutional Code (PCCC), and also decodes a 16-state Superorthogonal Turbo Code (SOTC) with different code rates. [0053]
  • As shown in FIG. 4. the Turbo Codes Decoder functions effectively as follows: [0054]
  • Received soft decision data (RXData[2:0]) are stored in three input buffer Memories 48 49 41 to produce the R0, R1, and R2 output data words. Each output data word R0, R1, R2 contains a number of binary bits. [0055]
  • A Sliding Window of Block N is imposed onto each input memory to produce the R0, R1, and R2 output data words. [0056]
  • When a block of N input data is ready, the Turbo Decoder starts the Log-MAP Decoder A to decode the N input data based on the soft values of R0, Z1 and R1, then stores the outputs in the buffer Memory A. [0057]
  • The Turbo Decoder also starts the Log-MAP Decoder B at the same time to decode the N input data based on the soft values of R2 and Z2, then stores the outputs in the De-Interleaver Memory. [0058]
  • The Turbo Decoder performs the iterative decoding for L number of times (L=1, 2, . . . M). The Log-MAP Decoder A uses the sum of Z1 and R0, together with R1, as inputs. The Log-MAP Decoder B uses the data Z2 and R2 as inputs. [0059]
  • When the iterative decoding sequences are done, the Turbo Decoder starts the hard-decision operations to compute and produce the hard-decision outputs (a software sketch of this decode sequence follows this list). [0060]
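  • The decode sequence above can be summarized by the following hedged C sketch; the helper names (siso_logmap, interleave, deinterleave) and the sign convention are assumptions, and the sketch serializes what the hardware performs in a pipelined, simultaneous fashion:

    #define N_MAX 5120                      /* maximum block size (illustrative)       */

    /* Hypothetical helpers assumed to exist elsewhere. */
    void siso_logmap(const int *sys, const int *par, int *out, int n);
    void interleave(const int *in, int *out, int n);
    void deinterleave(const int *in, int *out, int n);

    /* One block of N symbols, iterated L times (data flow of FIG. 4, as a sketch). */
    void turbo_decode_block(const int *R0, const int *R1, const int *R2,
                            int *hard_out, int N, int L)
    {
        int Z1[N_MAX] = {0};                /* feedback from decoder B (Memory B)      */
        int Z2[N_MAX], XO1[N_MAX], XO2[N_MAX], A_in[N_MAX];

        for (int it = 0; it < L; it++) {
            for (int k = 0; k < N; k++)     /* Adder 231: systematic plus feedback     */
                A_in[k] = R0[k] + Z1[k];
            siso_logmap(A_in, R1, XO1, N);  /* decoder A; output to buffer Memory A    */
            interleave(XO1, Z2, N);

            siso_logmap(Z2, R2, XO2, N);    /* decoder B works on interleaved data     */
            deinterleave(XO2, Z1, N);       /* De-Interleaver Memory feeds back to A   */
        }
        for (int k = 0; k < N; k++)         /* hard decision on systematic + feedback  */
            hard_out[k] = (R0[k] + Z1[k] >= 0) ? 1 : 0;
    }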
  • SISO Log-MAP Decoder [0061]
  • As shown in FIG. 7, a SISO Log-MAP Decoder 42 44 comprises a Branch Metric (BM) computation module 71, a State Metric (SM) computation module 72, a Log-MAP computation module 73, a BM Memory module 74, an SM Memory module 75, and a Control Logic State Machine module 76. Soft-value inputs enter the Branch Metric (BM) computation module 71, where the Euclidean distance is calculated for each branch; the output branch metrics are stored in the BM Memory module 74. The State Metric (SM) computation module 72 reads branch metrics from the BM Memory 74 and computes the state metric for each state; the output state metrics are stored in the SM Memory module 75. The Log-MAP computation module 73 reads both branch metrics and state metrics from the BM memory 74 and SM memory 75 modules to compute the Log Maximum a Posteriori probability and produce the soft-decision output. The Control Logic State-machine module 76 provides the overall operations of the decoding process. [0062]
  • As shown in FIG. 7, and using the 3GPP Turbo Codes as the primary example, the Log-MAP Decoder 42 44 functions effectively as follows: [0063]
  • The Log-MAP Decoder 42 44 reads each soft-value (SD) data pair input, then computes branch-metric (BM) values for all paths in the Turbo Codes Trellis 80 as shown in FIG. 8a (and Trellis 85 in FIG. 8b), then stores all BM data into the BM Memory 74. It repeats computing BM values for each input data until all N samples are calculated and stored in the BM Memory 74. [0064]
  • The Log-MAP Decoder 42 44 reads BM values from the BM Memory 74 and SM values from the SM Memory 75, and computes the forward state metric (SM) for all states in the Trellis 80 as shown in FIG. 8a (and Trellis 85 in FIG. 8b), then stores all forward SM data into the SM Memory 75. It repeats computing forward SM values for each input data until all N samples are calculated and stored in the SM Memory 75. [0065]
  • The Log-MAP Decoder 42 44 reads BM values from the BM Memory 74 and SM values from the SM Memory 75, and computes the backward state metric (SM) for all states in the Trellis 80 as shown in FIG. 8a (and Trellis 85 in FIG. 8b), then stores all backward SM data into the SM Memory 75. It repeats computing backward SM values for each input data until all N samples are calculated and stored in the SM Memory 75. [0066]
  • The Log-MAP Decoder 42 44 then computes the Log-MAP posteriori probability for u=0 and u=1 using the BM values and SM values from the BM Memory 74 and SM Memory 75. It repeats computing the Log-MAP posteriori probability for each input data until all N samples are calculated. The Log-MAP Decoder then decodes data by making a soft decision based on the posteriori probability for each stage and produces the soft-decision output, until all N inputs are decoded. [0067]
  • Branch Metric Computation module [0068]
  • The Branch Metric (BM) computation module 71 computes the Euclidean distance for each branch in the 8-state Trellis 80, as shown in FIG. 8a, based on the following equation: [0069]
  • Local Euclidean distance value = SD0*G0 + SD1*G1
  • SD0 and SD1 are the soft-value input data; G0 and G1 are the expected inputs for each path in the Trellis 80. G0 and G1 are coded as signed antipodal values, meaning that 0 corresponds to +1 and 1 corresponds to −1. Therefore, the local Euclidean distances for each path in the Trellis 80 are computed by the following equations: [0070]
  • M1 = SD0 + SD1
  • M2 = −M1
  • M3 = M2
  • M4 = M1
  • M5 = −SD0 + SD1
  • M6 = −M5
  • M7 = M6
  • M8 = M5
  • M9 = M6
  • M10 = M5
  • M11 = M5
  • M12 = M6
  • M13 = M2
  • M14 = M1
  • M15 = M1
  • M16 = M2
  • As shown in FIG. 9, the Branch Metric Computing module comprises one L-bit Adder 91, one L-bit Subtracter 92, and a 2's complementer 93. It computes the Euclidean distances for paths M1 and M5. Path M2 is the 2's complement of path M1, and path M6 is the 2's complement of M5. Path M3 is the same as path M2, path M4 is the same as path M1, path M7 is the same as path M6, path M8 is the same as path M5, path M9 is the same as path M6, path M10 is the same as path M5, path M11 is the same as path M5, path M12 is the same as path M6, path M13 is the same as path M2, path M14 is the same as path M1, path M15 is the same as path M1, and path M16 is the same as path M2. [0071]
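  • A hedged software sketch of the branch-metric step follows (the hardware uses only the adder, subtracter and 2's complementer above; the array indexing here is an illustrative assumption):

    /* Branch metrics for one trellis stage (sketch).  Only two distinct magnitudes
       exist, M1 = SD0 + SD1 and M5 = -SD0 + SD1; every one of the sixteen branch
       metrics is one of M1, -M1, M5, -M5, exactly as listed in the text above. */
    static void branch_metrics(int sd0, int sd1, int bm[17])   /* bm[1..16] used */
    {
        int m1 = sd0 + sd1;             /* L-bit adder 91        */
        int m5 = sd1 - sd0;             /* L-bit subtracter 92   */
        int m2 = -m1, m6 = -m5;         /* 2's complementer 93   */

        bm[1]  = m1;  bm[2]  = m2;  bm[3]  = m2;  bm[4]  = m1;
        bm[5]  = m5;  bm[6]  = m6;  bm[7]  = m6;  bm[8]  = m5;
        bm[9]  = m6;  bm[10] = m5;  bm[11] = m5;  bm[12] = m6;
        bm[13] = m2;  bm[14] = m1;  bm[15] = m1;  bm[16] = m2;
    }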
  • State Metric Computing Module [0072]
  • The State Metric Computing module 72 calculates the probability A(k) of each state transition in forward recursion and the probability B(k) in backward recursion. FIG. 13 shows the implementation of the state metric in forward recursion with Add-Compare-Select (ACS) logic, and FIG. 14 shows the implementation of the state metric in backward recursion with Add-Compare-Select (ACS) logic. The calculations are performed at each node in the Turbo Codes Trellis 80 (FIG. 8a) in both forward and backward recursion. FIG. 15 shows the forward state transitions in the Turbo Codes Trellis 80 (FIG. 8a), and FIG. 16 shows the backward state transitions in the Turbo Codes Trellis 80 (FIG. 8a). Each node in the Trellis 80, as shown in FIG. 8a, has two entering paths: a one-path 84 and a zero-path 83 from the two nodes in the previous stage. [0073]
  • The ACS logic comprises an Adder 132, an Adder 134, a Comparator 131, and a Multiplexer 133. In the forward recursion, the Adder 132 computes the sum of the branch metric and state metric on the one-path 84 from the state s(k−1) of the previous stage (k−1). The Adder 134 computes the sum of the branch metric and state metric on the zero-path 83 from the state s(k−1) of the previous stage (k−1). The Comparator 131 compares the two sums and the Multiplexer 133 selects the larger sum for the state s(k) of the current stage (k). In the backward recursion, the Adder 142 computes the sum of the branch metric and state metric on the one-path 84 from the state s(j+1) of the previous stage (j+1). The Adder 144 computes the sum of the branch metric and state metric on the zero-path 83 from the state s(j+1) of the previous stage (j+1). The Comparator 141 compares the two sums and the Multiplexer 143 selects the larger sum for the state s(j) of the current stage (j). [0074]
  • The Equations for the ACS are shown below: [0075]
  • A(k) = MAX[(bm0 + sm0(k−1)), (bm1 + sm1(k−1))]
  • B(j) = MAX[(bm0 + sm0(j+1)), (bm1 + sm1(j+1))]
  • Time (k−1) is the previous stage of (k) in the forward recursion, as shown in FIG. 15, and time (j+1) is the previous stage of (j) in the backward recursion, as shown in FIG. 16. [0076]
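  • A minimal C sketch of one ACS (Add-Compare-Select) node, assuming the two incoming state metrics and the two branch metrics are already available (names are illustrative); the same node serves the forward recursion of FIG. 13 and the backward recursion of FIG. 14:

    /* One ACS node: add the branch metric to each incoming state metric,
       compare the two sums, and select the larger. */
    static int acs(int sm_zero, int bm_zero, int sm_one, int bm_one)
    {
        int sum0 = sm_zero + bm_zero;       /* Adder on the zero-path 83 */
        int sum1 = sm_one  + bm_one;        /* Adder on the one-path 84  */
        return (sum1 > sum0) ? sum1 : sum0; /* Comparator + Multiplexer  */
    }

    /* Forward:  A(k) = acs(sm0(k-1), bm0, sm1(k-1), bm1)
       Backward: B(j) = acs(sm0(j+1), bm0, sm1(j+1), bm1)  */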
  • Log-MAP Computing Module [0077]
  • The Log-MAP computing module calculates the posteriori probability for u=0 and u=1 for each path entering each state in the Turbo Codes Trellis 80 corresponding to u=0 and u=1, referred to as the zero-path 83 and the one-path 84. The accumulated probabilities are compared and the u with the larger probability is selected. The soft decisions are made based on the final probability selected for each bit. FIG. 10a shows the implementation for calculating the posteriori probability for u=0. FIG. 10b shows the implementation for calculating the posteriori probability for u=1. FIG. 11 shows the implementation of the compare-and-select of the u with the larger probability. FIG. 12 shows the implementation of the soft-decode compare logic to produce output bits based on the posteriori probabilities of u=0 and u=1. The equations for calculating the accumulated probabilities for each state and the compare-and-select are shown below (a software sketch follows the equations): [0078]
  • sum_s00 = sm0i + bm1 + sm0j
  • sum_s01 = sm3i + bm7 + sm1j
  • sum_s02 = sm4i + bm9 + sm2j
  • sum_s03 = sm7i + bm15 + sm3j
  • sum_s04 = sm1i + bm4 + sm4j
  • sum_s05 = sm2i + bm6 + sm5j
  • sum_s06 = sm5i + bm12 + sm6j
  • sum_s07 = sm6i + bm14 + sm7j
  • sum_s10 = sm1i + bm3 + sm0j
  • sum_s11 = sm2i + bm5 + sm1j
  • sum_s12 = sm5i + bm11 + sm2j
  • sum_s13 = sm6i + bm13 + sm3j
  • sum_s14 = sm0i + bm2 + sm4j
  • sum_s15 = sm3i + bm8 + sm5j
  • sum_s16 = sm4i + bm10 + sm6j
  • sum_s17 = sm7i + bm16 + sm7j
  • s00sum = MAX[sum_s00, 0]
  • s01sum = MAX[sum_s01, s00sum]
  • s02sum = MAX[sum_s02, s01sum]
  • s03sum = MAX[sum_s03, s02sum]
  • s04sum = MAX[sum_s04, s03sum]
  • s05sum = MAX[sum_s05, s04sum]
  • s06sum = MAX[sum_s06, s05sum]
  • s07sum = MAX[sum_s07, s06sum]
  • s10sum = MAX[sum_s10, 0]
  • s11sum = MAX[sum_s11, s10sum]
  • s12sum = MAX[sum_s12, s11sum]
  • s13sum = MAX[sum_s13, s12sum]
  • s14sum = MAX[sum_s14, s13sum]
  • s15sum = MAX[sum_s15, s14sum]
  • s16sum = MAX[sum_s16, s15sum]
  • s17sum = MAX[sum_s17, s16sum]
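  • A hedged sketch of how the accumulated metrics above can be folded into a soft decision in software (array names are illustrative assumptions; sums0[] holds sum_s00..sum_s07 and sums1[] holds sum_s10..sum_s17 computed from the equations above):

    /* Soft decision for one stage (FIGS. 10a, 10b, 11, 12 as a sketch):
       run the MAX chains over all u = 0 branches and all u = 1 branches,
       then compare the two winners. */
    static int soft_decide(const int sums0[8], const int sums1[8], int *soft)
    {
        int s0 = 0, s1 = 0;                 /* chains start at 0, as in s00sum = MAX[sum_s00, 0] */
        for (int n = 0; n < 8; n++) {
            if (sums0[n] > s0) s0 = sums0[n];   /* s00sum .. s07sum */
            if (sums1[n] > s1) s1 = sums1[n];   /* s10sum .. s17sum */
        }
        *soft = s1 - s0;                    /* soft output: positive favours u = 1 */
        return (*soft >= 0) ? 1 : 0;        /* hard bit, if required               */
    }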
  • Control Logics—State Machine (CLSM) Module [0079]
  • As shown in FIG. 7, the Control Logics module controls the overall operations of the Log-MAP Decoder. The control logic state machine 171, referred to as the CLSM, is shown in FIG. 17. The CLSM module 171 (FIG. 17) operates effectively as follows. Initially, it stays in the IDLE state 172. When the decoder is enabled, the CLSM transitions to the CALC-BM state 173; it then starts the Branch Metric (BM) module operations and monitors for completion. When the Branch Metric calculations are done, referred to as bm-done, the CLSM transitions to the CALC-FWD-SM state 174; it then starts the State Metric module (SM) in forward recursion operation. When the forward SM state metric calculations are done, referred to as fwd-sm-done, the CLSM transitions to the CALC-BWD-SM state 175; it then starts the State Metric module (SM) in backward recursion operation. When the backward SM state metric calculations are done, referred to as bwd-sm-done, the CLSM transitions to the CALC-Log-MAP state 176; it then starts the Log-MAP computation module to calculate the maximum a posteriori probability and produce the soft decode output. When the Log-MAP calculations are done, referred to as log-map-done, it transitions back to the IDLE state 172. [0080]
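  • A hedged C sketch of the CLSM sequencing (a software analogue of FIG. 17; the flag names are assumptions mirroring the bm-done, fwd-sm-done, bwd-sm-done, and log-map-done conditions above):

    /* Control logic state machine of the Log-MAP decoder (FIG. 17), as a sketch. */
    typedef enum { IDLE, CALC_BM, CALC_FWD_SM, CALC_BWD_SM, CALC_LOG_MAP } clsm_t;

    static clsm_t clsm_next(clsm_t s, int enable, int bm_done,
                            int fwd_sm_done, int bwd_sm_done, int log_map_done)
    {
        switch (s) {
        case IDLE:         return enable       ? CALC_BM      : IDLE;
        case CALC_BM:      return bm_done      ? CALC_FWD_SM  : CALC_BM;
        case CALC_FWD_SM:  return fwd_sm_done  ? CALC_BWD_SM  : CALC_FWD_SM;
        case CALC_BWD_SM:  return bwd_sm_done  ? CALC_LOG_MAP : CALC_BWD_SM;
        case CALC_LOG_MAP: return log_map_done ? IDLE         : CALC_LOG_MAP;
        }
        return IDLE;
    }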
  • BM Memory and SM Memory [0081]
  • The Branch-Metric Memory 74 and the State-Metric Memory 75 are shown in FIG. 7 as the data storage components for the BM module 71 and SM module 72. The Branch Metric Memory module is a dual-port RAM containing N memory locations of M bits each, as shown in FIG. 18. The State Metric Memory module is a dual-port RAM containing N memory locations of K bits each, as shown in FIG. 19. Data can be written into one port while reading from the other port. [0082]
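  • A minimal sketch of the dual-port behaviour (write on one port while reading from the other in the same cycle); the size and word width are illustrative assumptions:

    /* Dual-port RAM model: N locations, one write port (A) and one read port (B). */
    #define MEM_N 5120
    typedef struct { int cell[MEM_N]; } dpram_t;

    static void dpram_write(dpram_t *m, unsigned addr_a, int data) { m->cell[addr_a] = data; }
    static int  dpram_read (const dpram_t *m, unsigned addr_b)     { return m->cell[addr_b]; }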
  • Buffer Memory [0083]
  • As shown in FIG. 4, the buffer Memory A 43 stores data for the first decoder A 42, and the buffer Memory B 45 stores data for the second decoder B 44. In iterative pipelined decoding, the decoder A 42 reads data from buffer memory B 45 and writes result data into buffer memory A 43, while the decoder B 44 reads data from buffer memory A 43 and writes results into buffer memory B 45. [0084]
  • As shown in FIG. 20, the De-Interleaver memory 41 comprises a De-Interleaver module 201 and a dual-port RAM 202 containing N memory locations of M bits each. The Interleaver is a Turbo code internal interleaver as defined by the 3GPP standard ETSI TS 125 222 V3.2.1 (2000-05), or another source. The Interleaver permutes the address input of port A for all write operations into the dual-port RAM module. Reading data from output port B is done with normal address input. [0085]
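  • A hedged sketch of the De-Interleaver memory addressing: writes go through the interleaver permutation, reads use the natural address. The permutation table pi[] would come from the 3GPP internal interleaver and its construction is not shown here:

    /* De-Interleaver memory 41 (FIG. 20) addressing, as a sketch. */
    static void deint_write(int mem[], const unsigned pi[], unsigned k, int data)
    {
        mem[pi[k]] = data;                  /* port A: permuted write address */
    }

    static int deint_read(const int mem[], unsigned k)
    {
        return mem[k];                      /* port B: natural read address   */
    }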
  • As shown in FIG. 21, each buffer memory 43 45 comprises a dual-port RAM 212 containing N memory locations of M bits each. [0086]
  • Turbo Codes Decoder Control Logics—State Machine (TDCLSM) [0087]
  • As shown in FIG. 4, the Turbo Decoder Control Logics module 47, referred to as the TDCLSM, controls the overall operations of the Turbo Codes Decoder. Log-MAP A 42 starts operating on the data in Memory B 45. At the same time, Log-MAP B starts operating on the data in Memory A 43. When Log-MAP A 42 and Log-MAP B 44 are done for a block of N data, the TDCLSM 47 starts the iterative decoding for L number of times. When the iterative decoding sequences are done, the TDCLSM 47 transitions to HARD-DEC to generate the hard-decode outputs. Then the TDCLSM 47 transitions to start decoding another block of data. [0088]
  • Iterative Decoding [0089]
  • The Turbo Codes decoder performs iterative decoding L times by feeding back the output Z1 of the second Log-MAP decoder B into the first Log-MAP decoder A, before making a decision for the hard-decoding output. As shown in FIG. 23, the Counter 233 counts the preset number of L iterations. [0090]

Claims (14)

1. An apparatus of a turbo codes decoder used as a baseband processor subsystem for iteratively decoding a plurality of sequences of received data Rn, representative of coded data Xn generated by a turbo codes encoder from a source of original data un, into decoded data Yn, comprising:
(a) two pipelined SISO Log-MAP Decoders, each decoding input data from the other's output data in an iterative mode.
(b) the first SISO Log-MAP Decoder A having three inputs: R0 and R1 connecting from the two Input Memory modules 48 49, and Z1 feeding back from the output of the buffer Memory B module 45; the output of the Adder 231 of the two input values R0 and Z1 is connected to Decoder A 42; and the first Decoder output is connected to a buffer Memory A module 43.
(c) the second SISO Log-MAP Decoder B having two inputs: R2 connecting from the Input Memory module 41, and Z2 connecting from the buffer Memory A module output; and the second Decoder output is connected to a buffer Memory B module 45.
(d) a buffer Memory A module 43 storing decoded data from the first Log-MAP Decoder A 42, feeding data to the second Log-MAP Decoder B 44.
(e) a buffer Memory B module 45 storing decoded data from the second Log-MAP Decoder B 44, feeding-back data to the first Log-MAP Decoder A 42.
(f) an Adder 231 to produce the sum value of the two inputs R0 and Z1 as output for the first Log-MAP Decoder A 42.
(g) three Input Buffer Memory modules 48 49 41 storing input soft decision received data, and feeding data to the two Log-MAP Decoders.
(h) a Control logic state machine 47 controlling the overall operations of the Turbo Codes Decoder.
(i) a hard-decoder logic 46 producing a final decision of either logic zero 0 or logic one 1 at the end of the iterations.
2. The Decoder system of claim 1, wherein each Log-MAP decoder uses the logarithm maximum a posteriori probability algorithm.
The Decoder system of claim 1, wherein each Log-MAP decoder uses the soft-input, soft-output (SISO) maximum a posteriori probability algorithm.
The Decoder system of claim 1, wherein each Log-MAP decoder uses the Log Max approximation algorithm.
3. The Decoder system of claim 1, wherein the two serially connected SISO Log-MAP Decoders each decode input data from the other's output data in pipeline mode to produce soft decoded data each clock cycle.
4. The Decoder system of claim 1, wherein the Memory modules use dual-port memory RAM.
The Decoder system of claim 1, wherein the input buffer Interleaver Memory module uses an interleaver to generate the write-address sequences of the Memory core in write mode. In read mode, the memory core read addresses are normal sequences.
5. The Decoder system of claim 1, wherein a Sliding Window of Block N is used on the input buffer Memory so that each block of N data is decoded one block after another in a pipeline scheme.
The Decoder system of claim 1, wherein the Sliding Window of Block N is used on the input buffer Memory in a continuous circular wrap-around scheme for pipeline operations.
6. A method for iteratively decoding a plurality of sequences of received data Rn, representative of coded data Xn generated by a Turbo Codes Encoder from a source of original data un, into decoded data Yn, comprising the steps of:
(a) coupling two pipelined Log-MAP decoders serially connected, having buffer Memory A and buffer Memory B for storing decoded output and providing feedback input for the decoders.
(b) applying feedback signal from the output of the buffer Memory B to the first decoder A, by adding the intrinsic values Z1 with the received signal R0 input, to generate a first decoded output XO1.
(c) applying the first decoded output to the buffer Memory A, and feeding the data with the received signal R2 input into the second decoder B to generate a second decoded output XO2.
(d) applying the second decoded output XO2 to the buffer Memory B and feeding back the data to the first decoder A.
(e) executing operations in both Log-MAP Decoders at the same time such that each decoder uses the other's output as an input in iterative decoding.
(f) applying a Sliding Window of Block N on the input buffer Memory so that each block of N data is decoded one block after another in a pipeline scheme.
(g) applying iterative decoding to each input data L times until the desired soft decision is achieved and a hard decoded output is generated.
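A compact sketch of the method steps above, assuming a trivial placeholder refinement in place of real Log-MAP passes: the input stream is consumed in sliding-window blocks of N (step (f)), each block is iterated L times, and a hard decision is taken at the end (step (g)). Block size, iteration count, and sample data are illustrative only.

```c
/* Sketch of steps (f) and (g): block-wise sliding window, L iterations
 * per block, then hard decision.  The per-iteration work is a placeholder,
 * not a real Log-MAP pass. */
#include <stdio.h>

#define N 4     /* sliding-window block size (illustrative) */
#define L 3     /* iterations per block (illustrative)      */

int main(void)
{
    double rx[] = { 0.9, -1.2, 0.3, -0.4, 1.1, -0.2, -0.8, 0.6 };
    int total = sizeof rx / sizeof rx[0];

    for (int base = 0; base < total; base += N) {     /* one block after another */
        double soft[N];
        for (int k = 0; k < N; k++) soft[k] = rx[base + k];

        for (int it = 0; it < L; it++)                /* iterative decoding, L times */
            for (int k = 0; k < N; k++)
                soft[k] *= 1.5;                       /* placeholder refinement */

        for (int k = 0; k < N; k++)                   /* hard decode output */
            printf("%d", soft[k] >= 0.0 ? 1 : 0);
    }
    printf("\n");
    return 0;
}
```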
7. A SISO Log-MAP Decoder apparatus for decoding a plurality of sequences of soft-input data SD0 and SD1 generated by a receiver to produce decoded soft-output data Y, comprising:
(a) a Branch Metric module computing branch metric values from the two soft-input data SD0 and SD1 for each branch in the trellis.
(b) a Branch Metric Memory module storing the branch metric values for each stage k=0 . . . N.
(c) a State Metric module computing state metric values for each state in the trellis using the branch metric values, calculating the probability A(k) of each state transition in forward recursion and the probability B(k) in backward recursion.
(d) an Add-Compare-Select (ACS) circuit to compute state metric values at each node in the Trellis.
(e) a State Metric Memory module storing state metric values for each stage k=0 . . . N.
(f) a Log-MAP module computing the soft decision output based on the branch metric values and state metric values using the log maximum a posteriori probability algorithm.
(g) a Control Logic state machine module controlling the overall operations of the Log-MAP decoder.
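The Log-MAP combining operation in element (f) is the max* operator, max*(a,b) = ln(e^a + e^b) = max(a,b) + ln(1 + e^-|a-b|); dropping the correction term yields the Log Max (Max-Log) approximation referenced in the dependent claims. A short sketch of both operators:

```c
/* Exact Log-MAP combining (max*) versus the Max-Log approximation. */
#include <math.h>
#include <stdio.h>

static double max_star(double a, double b)      /* exact Log-MAP       */
{
    double m = (a > b) ? a : b;
    return m + log(1.0 + exp(-fabs(a - b)));
}

static double max_log(double a, double b)       /* Max-Log approximation */
{
    return (a > b) ? a : b;
}

int main(void)
{
    double a = 1.3, b = 0.9;
    printf("max* = %f, max-log = %f\n", max_star(a, b), max_log(a, b));
    return 0;
}
```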
8. The Decoder system of claim 7, wherein the decoder uses the logarithm maximum a posteriori probability algorithm.
The Decoder system of claim 7, wherein the decoder uses the Log Max approximation algorithm.
The Decoder system of claim 7, wherein the decoder uses the soft-input soft-output (SISO) Log maximum a posteriori probability algorithm.
9. The Decoder system of claim 7, wherein the decoder implements state-metric in forward recursion with Add-Compare-Select (ACS).
The Decoder system of claim 7, wherein the decoder implements state-metric in backward recursion with Add-Compare-Select (ACS).
10. The Decoder system of claim 7, wherein the decoder uses an 8-state Trellis state transition diagram for 3GPP PCCC Turbo Codes.
The Decoder system of claim 7, wherein the decoder uses a 16-state Trellis state transition diagram for Superorthogonal Turbo Codes (SOTC).
The Decoder system of claim 7, wherein the decoder uses an N-state trellis state transition diagram for higher-order Superorthogonal Turbo Codes (SOTC).
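For the 8-state 3GPP PCCC case, the trellis can be generated from the standard 3GPP constituent RSC encoder with feedback polynomial g0 = 1 + D^2 + D^3 and feedforward polynomial g1 = 1 + D + D^3 (octal 13/15). The sketch below builds the next-state and parity tables for all eight states; the state-bit ordering is an implementation choice of this sketch, not something fixed by the claim.

```c
/* Builds the 8-state trellis (next state and parity bit) for the 3GPP
 * constituent RSC encoder, feedback 1+D^2+D^3 and feedforward 1+D+D^3. */
#include <stdio.h>

int main(void)
{
    int next[8][2], parity[8][2];

    for (int s = 0; s < 8; s++) {              /* s = (s1 s2 s3), s1 = MSB */
        int s1 = (s >> 2) & 1, s2 = (s >> 1) & 1, s3 = s & 1;
        for (int u = 0; u < 2; u++) {
            int a = u ^ s2 ^ s3;               /* recursive (feedback) bit  */
            parity[s][u] = a ^ s1 ^ s3;        /* feedforward taps 1+D+D^3  */
            next[s][u]   = (a << 2) | (s1 << 1) | s2;
        }
    }

    printf("state  u=0: next/par   u=1: next/par\n");
    for (int s = 0; s < 8; s++)
        printf("  %d      %d / %d          %d / %d\n",
               s, next[s][0], parity[s][0], next[s][1], parity[s][1]);
    return 0;
}
```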
11. The Decoder system of claim 7, wherein the branch metric module uses a binary adder, a binary subtracter, and two binary two's-complementer logic blocks.
The Decoder system of claim 7, wherein the state metric module uses a binary adder, a comparator, and Mux selector logic.
The Decoder system of claim 7, wherein the Log-MAP module uses binary adders and binary maximum selector logic.
The Decoder system of claim 7, wherein the branch metric memory module uses dual-port RAM.
The Decoder system of claim 7, wherein the soft decoder module uses a soft-decision algorithm.
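Assuming the usual antipodal mapping of bit 0 to +1 and bit 1 to -1 (an assumption made here only for illustration), the four branch metrics for a branch label (u, p) are simply +/-SD0 +/- SD1, which matches the adder, subtracter, and two's-complementer structure recited above:

```c
/* Branch-metric arithmetic: one adder, one subtracter, two negations. */
#include <stdio.h>

int main(void)
{
    double sd0 = 0.8, sd1 = -1.1;    /* soft inputs (systematic, parity) */

    double sum  = sd0 + sd1;         /* binary adder      */
    double diff = sd0 - sd1;         /* binary subtracter */

    double bm[2][2];                 /* bm[u][p]                           */
    bm[0][0] =  sum;                 /* (+1, +1)                           */
    bm[0][1] =  diff;                /* (+1, -1)                           */
    bm[1][0] = -diff;                /* (-1, +1): two's complement of diff */
    bm[1][1] = -sum;                 /* (-1, -1): two's complement of sum  */

    for (int u = 0; u < 2; u++)
        for (int p = 0; p < 2; p++)
            printf("bm[%d][%d] = %5.2f\n", u, p, bm[u][p]);
    return 0;
}
```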
12. A method for Log-MAP decoding a plurality of sequences of received data SD0 and SD1 generated by a receiver to produce decoded soft-output data Y, comprising the steps of:
(a) computing the branch metric for each input data in a block of N data, for the branches entering each state in the Trellis, then storing the results into the BM Memory.
(b) computing the forward recursion state metric with ACS for each data in BM Memory, for a block of N data, for each state in the Trellis, then storing the results into the SM Memory.
(c) computing the backward recursion state metric with ACS for each data in BM Memory, for a block of N data, for each state in the Trellis, then storing the accumulated results into the SM Memory.
(d) computing the Log-MAP values from each data in BM Memory and SM Memory, for a block of N data, for each state in the Trellis.
(e) applying a soft-decision algorithm for each state and generating soft decoded outputs.
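A self-contained Max-Log-MAP sketch of steps (a) through (e), using a small 4-state RSC trellis (feedback 1 + D + D^2, feedforward 1 + D^2) rather than the full 3GPP 8-state trellis so the example stays short; the BM and SM memories are modelled as plain arrays, the noiseless test input is illustrative, and the max-only combining is the Log Max approximation rather than full Log-MAP.

```c
/* Max-Log-MAP over a 4-state RSC trellis: branch metrics -> forward
 * recursion -> backward recursion -> per-bit LLR and hard decision. */
#include <stdio.h>

#define K  8                    /* block size N in the claim (illustrative) */
#define NS 4                    /* number of trellis states                 */
#define NEG (-1.0e9)            /* "log zero"                               */

static double dmax(double a, double b) { return a > b ? a : b; }

int main(void)
{
    int next[NS][2], par[NS][2];

    /* Trellis for a 4-state RSC: state = (a_{k-1}, a_{k-2}). */
    for (int s = 0; s < NS; s++) {
        int s1 = (s >> 1) & 1, s2 = s & 1;
        for (int u = 0; u < 2; u++) {
            int a = u ^ s1 ^ s2;            /* feedback 1 + D + D^2  */
            par[s][u]  = a ^ s2;            /* feedforward 1 + D^2   */
            next[s][u] = (a << 1) | s1;
        }
    }

    /* Test input, encoded over a noiseless channel (0 -> +1, 1 -> -1). */
    int u_in[K] = { 1, 0, 1, 1, 0, 0, 1, 0 };
    double sd0[K], sd1[K];
    int st = 0;
    for (int k = 0; k < K; k++) {
        sd0[k] = u_in[k] ? -1.0 : 1.0;
        sd1[k] = par[st][u_in[k]] ? -1.0 : 1.0;
        st = next[st][u_in[k]];
    }

    /* (a) Branch metrics for every state/input pair -> "BM Memory". */
    double bm[K][NS][2];
    for (int k = 0; k < K; k++)
        for (int s = 0; s < NS; s++)
            for (int u = 0; u < 2; u++)
                bm[k][s][u] = (u ? -sd0[k] : sd0[k])
                            + (par[s][u] ? -sd1[k] : sd1[k]);

    /* (b) Forward recursion A(k) with add-compare-select -> "SM Memory". */
    double A[K + 1][NS], B[K + 1][NS];
    for (int s = 0; s < NS; s++) A[0][s] = (s == 0) ? 0.0 : NEG;
    for (int k = 0; k < K; k++) {
        for (int s = 0; s < NS; s++) A[k + 1][s] = NEG;
        for (int s = 0; s < NS; s++)
            for (int u = 0; u < 2; u++)
                A[k + 1][next[s][u]] =
                    dmax(A[k + 1][next[s][u]], A[k][s] + bm[k][s][u]);
    }

    /* (c) Backward recursion B(k) with add-compare-select. */
    for (int s = 0; s < NS; s++) B[K][s] = 0.0;
    for (int k = K - 1; k >= 0; k--)
        for (int s = 0; s < NS; s++) {
            B[k][s] = NEG;
            for (int u = 0; u < 2; u++)
                B[k][s] = dmax(B[k][s], bm[k][s][u] + B[k + 1][next[s][u]]);
        }

    /* (d)+(e) Soft output (max-log LLR) and hard decision per bit. */
    for (int k = 0; k < K; k++) {
        double m0 = NEG, m1 = NEG;
        for (int s = 0; s < NS; s++) {
            m0 = dmax(m0, A[k][s] + bm[k][s][0] + B[k + 1][next[s][0]]);
            m1 = dmax(m1, A[k][s] + bm[k][s][1] + B[k + 1][next[s][1]]);
        }
        double llr = m0 - m1;               /* > 0 favours bit 0 */
        printf("k=%d  LLR=%7.2f  hard=%d  (sent %d)\n",
               k, llr, llr < 0.0 ? 1 : 0, u_in[k]);
    }
    return 0;
}
```

With the noiseless test vector above, the hard decisions reproduce the transmitted bits, which is a quick sanity check on the recursions.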
13. An ACS (add-compare-select) apparatus for computing a plurality of sequences of sm0, bm0, sm1, bm1 data to select the maximum output data A, comprising:
(a) an Adder0 to compute the sum of state metric sm0 and branch metric bm0 data,
(b) an Adder1 to compute the sum of state metric sm1 and branch metric bm1 data,
(c) a Comparator to compare the two sums,
(d) and a Multiplexer to select the larger sum for the state s(k).
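The ACS unit of claim 13 maps directly to two additions, one comparison, and one selection, as in this small sketch (sample values are illustrative):

```c
/* Add-compare-select: two adders, a comparator, and a multiplexer. */
#include <stdio.h>

static double acs(double sm0, double bm0, double sm1, double bm1)
{
    double sum0 = sm0 + bm0;                 /* Adder0                  */
    double sum1 = sm1 + bm1;                 /* Adder1                  */
    return (sum0 > sum1) ? sum0 : sum1;      /* Comparator + Multiplexer */
}

int main(void)
{
    printf("s(k) = %f\n", acs(2.0, 1.5, 3.0, -0.5));   /* -> 3.5 */
    return 0;
}
```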
14. A Super Orthogonal Turbo Codes (SOTC) Decoder apparatus used as a baseband processor subsystem for iteratively decoding a plurality of sequences of received Walsh code data RWi and RWj, representative of Walsh coded data Wi and Wj generated by a Super Orthogonal Turbo Codes (SOTC) Encoder from a source of original data un, into decoded data Yn, comprising:
(a) two pipelined SISO Log-MAP Decoders, each decoding input data from the other's output data in an iterative mode.
(d) a buffer Memory A module storing decoded data from the first Log-MAP Decoder A, feeding data to the second Log-MAP Decoder B.
(e) a buffer Memory B module storing decoded data from the second Log-MAP Decoder B, feeding-back data to the first Log-MAP Decoder A.
(f) an Adder producing the sum of the two inputs RWi and Z1 as input for the first Log-MAP Decoder A.
(g) the Input Buffer Memory modules storing input soft-decision received data and feeding data to the two Log-MAP Decoders.
(h) a Control logic state machine controlling the overall operations of the Turbo Codes Decoder.
(i) a hard-decoder logic producing a final decision of either logic zero 0 or logic one 1 at the end of the iterations.
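Claim 14 operates on received Walsh-code data RWi. One plausible, purely illustrative front end, not specified by the claim, is to correlate the received vector against the rows of a Hadamard (Walsh) matrix to obtain per-codeword soft metrics, sketched here with a hypothetical length-8 code and synthetic "noise".

```c
/* Illustrative Walsh correlator: the largest correlation marks the most
 * likely transmitted Walsh codeword.  Code length and data are assumptions. */
#include <stdio.h>

#define M 8    /* Walsh code length (illustrative) */

int main(void)
{
    /* Walsh/Hadamard rows: w[i][n] = (-1)^popcount(i & n). */
    double w[M][M];
    for (int i = 0; i < M; i++)
        for (int n = 0; n < M; n++) {
            int b = i & n, p = 0;
            while (b) { p ^= (b & 1); b >>= 1; }
            w[i][n] = p ? -1.0 : 1.0;
        }

    /* Received noisy Walsh codeword (here: row 5 plus a small perturbation). */
    double rw[M];
    for (int n = 0; n < M; n++) rw[n] = w[5][n] + 0.2 * ((n % 3) - 1);

    /* Correlate against every Walsh row. */
    for (int i = 0; i < M; i++) {
        double corr = 0.0;
        for (int n = 0; n < M; n++) corr += rw[n] * w[i][n];
        printf("W%d: %6.2f\n", i, corr);
    }
    return 0;
}
```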
US10/065,408 2001-01-02 2002-10-15 High speed turbo codes decoder for 3G using pipelined SISO Log-Map decoders architecture Abandoned US20030097633A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US10/065,408 US20030097633A1 (en) 2001-01-02 2002-10-15 High speed turbo codes decoder for 3G using pipelined SISO Log-Map decoders architecture
US10/248,245 US6799295B2 (en) 2001-01-02 2002-12-30 High speed turbo codes decoder for 3G using pipelined SISO log-map decoders architecture

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US09/681,093 US6813742B2 (en) 2001-01-02 2001-01-02 High speed turbo codes decoder for 3G using pipelined SISO log-map decoders architecture
US10/065,408 US20030097633A1 (en) 2001-01-02 2002-10-15 High speed turbo codes decoder for 3G using pipelined SISO Log-Map decoders architecture

Related Parent Applications (2)

Application Number Title Priority Date Filing Date
US09/681,093 Continuation-In-Part US6813742B2 (en) 2001-01-02 2001-01-02 High speed turbo codes decoder for 3G using pipelined SISO log-map decoders architecture
US09/681,093 Continuation US6813742B2 (en) 2001-01-02 2001-01-02 High speed turbo codes decoder for 3G using pipelined SISO log-map decoders architecture

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US10/248,245 Continuation-In-Part US6799295B2 (en) 2001-01-02 2002-12-30 High speed turbo codes decoder for 3G using pipelined SISO log-map decoders architecture

Publications (1)

Publication Number Publication Date
US20030097633A1 true US20030097633A1 (en) 2003-05-22

Family

ID=24733789

Family Applications (2)

Application Number Title Priority Date Filing Date
US09/681,093 Expired - Lifetime US6813742B2 (en) 2001-01-02 2001-01-02 High speed turbo codes decoder for 3G using pipelined SISO log-map decoders architecture
US10/065,408 Abandoned US20030097633A1 (en) 2001-01-02 2002-10-15 High speed turbo codes decoder for 3G using pipelined SISO Log-Map decoders architecture

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US09/681,093 Expired - Lifetime US6813742B2 (en) 2001-01-02 2001-01-02 High speed turbo codes decoder for 3G using pipelined SISO log-map decoders architecture

Country Status (1)

Country Link
US (2) US6813742B2 (en)


Families Citing this family (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1333531C (en) * 2001-02-23 2007-08-22 皇家菲利浦电子有限公司 Turbo decoder system comprising parallel decoders
US20030123563A1 (en) * 2001-07-11 2003-07-03 Guangming Lu Method and apparatus for turbo encoding and decoding
KR100438537B1 (en) * 2001-07-19 2004-07-03 엘지전자 주식회사 Decoder in mobile communication apparatus and control method thereof
US6996767B2 (en) * 2001-08-03 2006-02-07 Combasis Technology, Inc. Memory configuration scheme enabling parallel decoding of turbo codes
KR100703307B1 (en) * 2002-08-06 2007-04-03 삼성전자주식회사 Turbo decoding apparatus and method
US20100278287A1 (en) * 2003-03-27 2010-11-04 Nokia Corporation List Output Viterbi Deconder with Blockwise ACS and Traceback
US7246295B2 (en) * 2003-04-14 2007-07-17 Agere Systems Inc. Turbo decoder employing simplified log-map decoding
US7496164B1 (en) * 2003-05-02 2009-02-24 At&T Mobility Ii Llc Systems and methods for interference cancellation in a radio receiver system
JP4217887B2 (en) * 2003-07-22 2009-02-04 日本電気株式会社 Receiver
KR20050042869A (en) * 2003-11-04 2005-05-11 삼성전자주식회사 MAP decoder having a simple design and a method decoding thereof
US7702968B2 (en) * 2004-02-27 2010-04-20 Qualcomm Incorporated Efficient multi-symbol deinterleaver
KR20070029744A (en) * 2004-05-18 2007-03-14 코닌클리즈케 필립스 일렉트로닉스 엔.브이. Turbo decoder input reordering
CN100369403C (en) * 2006-02-20 2008-02-13 东南大学 Parallel realizing method accepted by iterative detection decoding of wireless communication system
US7761772B2 (en) * 2006-10-18 2010-07-20 Trellisware Technologies, Inc. Using no-refresh DRAM in error correcting code encoder and decoder implementations
US7743287B2 (en) * 2006-10-18 2010-06-22 Trellisware Technologies, Inc. Using SAM in error correcting code encoder and decoder implementations
US8583983B2 (en) * 2006-11-01 2013-11-12 Qualcomm Incorporated Turbo interleaver for high data rates
US8271858B2 (en) * 2009-09-03 2012-09-18 Telefonaktiebolget L M Ericsson (Publ) Efficient soft value generation for coded bits in a turbo decoder
US20110202819A1 (en) * 2010-02-12 2011-08-18 Yuan Lin Configurable Error Correction Encoding and Decoding
US8862971B1 (en) * 2011-03-01 2014-10-14 Sk Hynix Memory Solutions Inc. Inter-track interference (ITI) correlation and cancellation for disk drive applications
US9143166B1 (en) 2011-03-23 2015-09-22 Sk Hynix Memory Solutions Inc. Adaptive scheduling of turbo equalization based on a metric
US8843812B1 (en) * 2011-03-23 2014-09-23 Sk Hynix Memory Solutions Inc. Buffer management in a turbo equalization system
US9003266B1 (en) * 2011-04-15 2015-04-07 Xilinx, Inc. Pipelined turbo convolution code decoder
US8843807B1 (en) 2011-04-15 2014-09-23 Xilinx, Inc. Circular pipeline processing system
US9385756B2 (en) * 2012-06-07 2016-07-05 Avago Technologies General Ip (Singapore) Pte. Ltd. Data processing system with retained sector reprocessing
KR20150061253A (en) * 2013-11-27 2015-06-04 한국전자통신연구원 Half pipelined turbo decoder and method for controlling thereof
US10205470B2 (en) * 2014-02-14 2019-02-12 Samsung Electronics Co., Ltd System and methods for low complexity list decoding of turbo codes and convolutional codes
US11838033B1 (en) * 2022-09-20 2023-12-05 Western Digital Technologies, Inc. Partial speed changes to improve in-order transfer

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2675968B1 (en) 1991-04-23 1994-02-04 France Telecom METHOD FOR DECODING A CONVOLUTIVE CODE WITH MAXIMUM LIKELIHOOD AND WEIGHTING OF DECISIONS, AND CORRESPONDING DECODER.
FR2675971B1 (en) 1991-04-23 1993-08-06 France Telecom CORRECTIVE ERROR CODING METHOD WITH AT LEAST TWO SYSTEMIC CONVOLUTIVE CODES IN PARALLEL, ITERATIVE DECODING METHOD, CORRESPONDING DECODING MODULE AND DECODER.
FR2712760B1 (en) 1993-11-19 1996-01-26 France Telecom Method for transmitting bits of information by applying concatenated block codes.
US5721745A (en) 1996-04-19 1998-02-24 General Electric Company Parallel concatenated tail-biting convolutional code and decoder therefor
US6023783A (en) 1996-05-15 2000-02-08 California Institute Of Technology Hybrid concatenated codes and iterative decoding
US6000054A (en) 1997-11-03 1999-12-07 Motorola, Inc. Method and apparatus for encoding and decoding binary information using restricted coded modulation and parallel concatenated convolution codes
US6292918B1 (en) * 1998-11-05 2001-09-18 Qualcomm Incorporated Efficient iterative decoding
JP3670520B2 (en) * 1999-06-23 2005-07-13 富士通株式会社 Turbo decoder and turbo decoder
US6516437B1 (en) * 2000-03-07 2003-02-04 General Electric Company Turbo decoder control for use with a programmable interleaver, variable block length, and multiple code rates
US6307901B1 (en) * 2000-04-24 2001-10-23 Motorola, Inc. Turbo decoder with decision feedback equalization

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7023934B2 (en) * 2000-09-12 2006-04-04 Broadcom Corporation Method and apparatus for min star calculations in a map decoder
US20020048329A1 (en) * 2000-09-12 2002-04-25 Tran Hau Thien Method and apparatus for min star calculations in a map decoder
US20040068744A1 (en) * 2000-11-14 2004-04-08 Claussen Paul J. Proximity detection using wireless connectivity in a communications system
US20050278603A1 (en) * 2002-09-05 2005-12-15 Stmicroelectronics N.V. Combined turbo-code/convolutional code decoder, in particular for mobile radio systems
US7191377B2 (en) * 2002-09-05 2007-03-13 Stmicroelectronics N.V. Combined turbo-code/convolutional code decoder, in particular for mobile radio systems
US7266757B1 (en) * 2004-01-29 2007-09-04 Xilinx, Inc. Pipelined architecture implementing recursion processes for forward error correction
US7395461B2 (en) 2005-05-18 2008-07-01 Seagate Technology Llc Low complexity pseudo-random interleaver
US20060282712A1 (en) * 2005-05-18 2006-12-14 Seagate Technology Llc Low complexity pseudo-random interleaver
US20080215831A1 (en) * 2005-05-18 2008-09-04 Seagate Technology Llc Interleaver With Linear Feedback Shift Register
US7788560B2 (en) 2005-05-18 2010-08-31 Seagate Technology Llc Interleaver with linear feedback shift register
US20070177696A1 (en) * 2006-01-27 2007-08-02 Pei Chen Map decoder with bidirectional sliding window architecture
US7929646B2 (en) * 2006-01-27 2011-04-19 Qualcomm Incorporated Map decoder with bidirectional sliding window architecture
US20120106683A1 (en) * 2009-06-18 2012-05-03 Zte Corporation Method and apparatus for parallel turbo decoding in long term evolution system (lte)
US20110134969A1 (en) * 2009-12-08 2011-06-09 Samsung Electronics Co., Ltd. Method and apparatus for parallel processing turbo decoder
US8811452B2 (en) * 2009-12-08 2014-08-19 Samsung Electronics Co., Ltd. Method and apparatus for parallel processing turbo decoder
US20160315638A1 (en) * 2015-04-21 2016-10-27 National Tsing Hua University Iterative decoding device, iterative signal detection device and information update method for the same
US9973217B2 (en) * 2015-04-21 2018-05-15 National Tsing Hua University SISO (soft input soft output) system for use in a wireless communication system and an operational method thereof

Also Published As

Publication number Publication date
US20020124227A1 (en) 2002-09-05
US6813742B2 (en) 2004-11-02

Similar Documents

Publication Publication Date Title
US20030097633A1 (en) High speed turbo codes decoder for 3G using pipelined SISO Log-Map decoders architecture
US6799295B2 (en) High speed turbo codes decoder for 3G using pipelined SISO log-map decoders architecture
Bickerstaff et al. A 24Mb/s radix-4 logMAP turbo decoder for 3GPP-HSDPA mobile wireless
Wang et al. VLSI implementation issues of turbo decoder design for wireless applications
JP4092352B2 (en) Decoding device, decoding method, and receiving device
US8112698B2 (en) High speed turbo codes decoder for 3G using pipelined SISO Log-MAP decoders architecture
EP1391995A2 (en) MAX-LOG-MAP decoding with windowed processing of forward/backward recursions
US6434203B1 (en) Memory architecture for map decoder
US8082483B2 (en) High speed turbo codes decoder for 3G using pipelined SISO Log-MAP decoders architecture
JP2004343716A (en) Method and decoder for blind detection of transmission format of convolution-encoded signal
US6807239B2 (en) Soft-in soft-out decoder used for an iterative error correction decoder
JP2007194684A (en) Decoding apparatus, decoding method, and receiver
KR100390416B1 (en) Method for decoding Turbo
US20020021763A1 (en) Encoding and decoding methods and devices and systems using them
Martina et al. A flexible UMTS-WiMax turbo decoder architecture
US20120246539A1 (en) Wireless system with diversity processing
Halter et al. Reconfigurable signal processor for channel coding and decoding in low SNR wireless communications
US20080115032A1 (en) Efficient almost regular permutation (ARP) interleaver and method
Zhang et al. High-throughput radix-4 logMAP turbo decoder architecture
Huang et al. A high speed turbo decoder implementation for CPU-based SDR system
Mathana et al. Low complexity reconfigurable turbo decoder for wireless communication systems
Mathana et al. FPGA implementation of high speed architecture for Max Log Map turbo SISO decoder
Berns et al. Channel decoder architecture for 3G mobile wireless terminals
Anghel et al. FPGA implementation of a CTC Decoder for H-ARQ compliant WiMAX systems
Fang et al. An implementation of turbo decoder for high speed wireless packet transmission system

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION