WO2004038928A1 - Unite de communication et procede de decodage - Google Patents

Unite de communication et procede de decodage Download PDF

Info

Publication number
WO2004038928A1
Authority
WO
WIPO (PCT)
Prior art keywords
backward
path metrics
processor
data
communication unit
Prior art date
Application number
PCT/EP2003/011570
Other languages
English (en)
Other versions
WO2004038928A9 (fr)
Inventor
Keren Kesselman
Avi Ben-Zur
Original Assignee
Modem-Art Limited
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Modem-Art Limited filed Critical Modem-Art Limited
Priority to AU2003301623A priority Critical patent/AU2003301623A1/en
Publication of WO2004038928A1 publication Critical patent/WO2004038928A1/fr
Publication of WO2004038928A9 publication Critical patent/WO2004038928A9/fr


Classifications

    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/37Decoding methods or techniques, not specific to the particular type of coding provided for in groups H03M13/03 - H03M13/35
    • H03M13/39Sequence estimation, i.e. using statistical methods for the reconstruction of the original codes
    • H03M13/3905Maximum a posteriori probability [MAP] decoding or approximations thereof based on trellis or lattice decoding, e.g. forward-backward algorithm, log-MAP decoding, max-log-MAP decoding
    • H03M13/3927Log-Likelihood Ratio [LLR] computation by combination of forward and backward metrics into LLRs
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/27Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes using interleaving techniques
    • H03M13/2703Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes using interleaving techniques the interleaver involving at least two directions
    • H03M13/271Row-column interleaver with permutations, e.g. block interleaving with inter-row, inter-column, intra-row or intra-column permutations
    • H03M13/2714Turbo interleaver for 3rd generation partnership project [3GPP] universal mobile telecommunications systems [UMTS], e.g. as defined in technical specification TS 25.212
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/29Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes combining two or more codes or code structures, e.g. product codes, generalised product codes, concatenated codes, inner and outer codes
    • H03M13/2957Turbo codes and decoding
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/37Decoding methods or techniques, not specific to the particular type of coding provided for in groups H03M13/03 - H03M13/35
    • H03M13/39Sequence estimation, i.e. using statistical methods for the reconstruction of the original codes
    • H03M13/3972Sequence estimation, i.e. using statistical methods for the reconstruction of the original codes using sliding window techniques or parallel windows
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/65Purpose and implementation aspects
    • H03M13/6502Reduction of hardware complexity or efficient processing
    • H03M13/6505Memory efficient implementations

Definitions

  • This invention relates to communication units employing coding techniques.
  • the invention is applicable to, but not limited to, a third generation communication unit utilising a turbo coding technique.
  • Third generation (3G) wireless communication systems, for example those of the third generation partnership project (3GPP), have as a requirement the support of speech and data.
  • a 3G-subscriber unit termed user equipment (UE) in universal mobile telecommunication system (UMTS) parlance
  • UE user equipment
  • UMTS universal mobile telecommunication system
  • the expression "soft samples” may be understood to encompass samples that are stored and contain information about the perturbations met by the data symbols during the signal transmission, such as thermal noise, fading, etc.
  • the soft samples need to be quantized with a large number of bits per sample.
  • the UE receiver's memory requirements are typically substantial.
  • the same consideration applies to any element in the communication system that decodes data, for example a base station (or node B in UMTS parlance) .
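  • As a rough indication of the buffer size involved, the short sketch below computes the soft-sample storage needed for one received code block; the block length, code rate and quantisation width used are example assumptions only, not figures taken from the description.

```python
# Illustrative soft-buffer sizing only: the block length, code rate and bits per
# soft sample below are assumed example values, not figures from the text.
def soft_buffer_bits(info_bits, code_rate, bits_per_sample):
    coded_samples = int(info_bits / code_rate)   # systematic + parity soft samples
    return coded_samples * bits_per_sample

K = 5114                                          # example information block size
print(soft_buffer_bits(K, 1 / 3, 6), "bits")      # roughly 92 kbit for one block
```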
  • QoS Quality of Service
  • data services can be divided into two groups, real-time and non-real-time services.
  • the real-time services typically require a low average data block delay, arranged such that they can cope with a certain error rate.
  • non-real-time services typically require some form of limitation against delay, but often need a very low error rate.
  • FEC forward error correction
  • non-real-time services usually tolerate variable and high delays but cannot cope with a high error rate.
  • delays become too high, the risk of the communication link being broken is increased.
  • FIG. 1 illustrates a generic block diagram of a "turbo encoder" 100.
  • the turbo encoder 100 is comprised of two constituent encoders 120, 125 and an interleaver 115.
  • the constituent encoders 120, 125 are commonly convolutional encoders.
  • the input to the first encoder is a block of 'K' information bits 105. In principle, more than two constituent encoders may be used, with each additional encoder preceded by an interleaver.
  • the turbo encoder outputs three types of bits:
  • the 2*K bits (from 'B' and 'C') generated by the two constituent encoders 120, 125 are commonly referred to as "parity bits" 130, 135.
  • the fundamental feature of the turbo encoder is that the second constituent encoder 125 operates on the bits presented at the input to the turbo encoder after the interleaver 115 has processed the data block.
  • the information bits 105 are encoded twice: once in the original order and once after permutation by the interleaver 115.
  • the design of the interleaver 115 is of paramount significance as the features of the interleaver 115 have a large effect on the performance of the turbo- coding scheme.
  • a parallel-to-serial device may multiplex the systematic bits 110 and the parity bits 130, 135 at the output of the turbo encoder before they are transmitted over the channel.
  • the coding rate of the generic scheme in FIG. 1 is, for example, 1/3, whereby three code bits are transmitted for every information bit.
  • a known technique of puncturing can be applied. By transmitting only one parity bit for every information bit, the coding rate can be increased to 1/2.
  • the puncturing may be carried out by the optional switch 140 in FIG. 1, which alternates between the two streams of parity bits 130, 135. In this manner, only half of the 2K parity bits are transmitted.
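  • By way of illustration only, the following sketch mirrors the encoder structure of FIG. 1: two identical recursive systematic convolutional constituent encoders, an interleaver, parallel-to-serial multiplexing and optional puncturing to raise the rate from 1/3 to 1/2. The constituent polynomials (octal 13/15, as used by the 3GPP constituent code) and the random permutation standing in for the interleaver 115 are assumptions of this sketch, not details mandated by the description.

```python
# Minimal sketch of the rate-1/3 turbo encoder of FIG. 1, with optional puncturing.
# Polynomials (octal 13/15) and the random stand-in interleaver are illustrative.
import random

def rsc_encode(bits):
    """Recursive systematic convolutional encoder, g0 = 1+D^2+D^3 (feedback),
    g1 = 1+D+D^3 (feedforward).  Returns only the parity stream."""
    r1 = r2 = r3 = 0
    parity = []
    for u in bits:
        a = u ^ r2 ^ r3              # feedback bit
        parity.append(a ^ r1 ^ r3)   # parity (feedforward) bit
        r1, r2, r3 = a, r1, r2       # shift the register
    return parity

def turbo_encode(info_bits, interleaver):
    """Systematic stream 'A' 110 plus parity streams 'B' 130 and 'C' 135."""
    systematic = list(info_bits)
    parity1 = rsc_encode(info_bits)                               # encoder 120
    parity2 = rsc_encode([info_bits[i] for i in interleaver])     # encoder 125, after 115
    return systematic, parity1, parity2

def multiplex(systematic, parity1, parity2, puncture=False):
    """Parallel-to-serial multiplexing; with puncture=True the switch 140 alternates
    between the parity streams so only one parity bit per information bit is sent."""
    out = []
    for k, s in enumerate(systematic):
        out.append(s)
        if puncture:
            out.append(parity1[k] if k % 2 == 0 else parity2[k])  # coding rate 1/2
        else:
            out.extend([parity1[k], parity2[k]])                  # coding rate 1/3
    return out

K = 40
info = [random.randint(0, 1) for _ in range(K)]
pi = list(range(K)); random.shuffle(pi)           # stand-in for the interleaver 115
tx = multiplex(*turbo_encode(info, pi), puncture=True)
```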
  • turbo codes require a decoder with higher complexity than that of a conventional decoder used for convolutional codes.
  • One of the factors that affect the complexity of decoding turbo codes is the memory required for storing metrics during the decoding process.
  • the generic block diagram of a "turbo decoder" 200 is illustrated in FIG. 2.
  • Systematic bits 110, together with parity bits 130, are received from the first encoder over a noisy channel, and are input to a first a-posteriori probability (APP) module 240.
  • APP a-posteriori probability
  • the extrinsic information 215 that is output from the first APP module (APP module-1) 240 is input to an interleaver 245.
  • Systematic bits 110 are also input directly to a second interleaver 220.
  • the outputs from the first interleaver 245 and second interleaver 220, together with parity bits 135 from the second encoder, are input to a second APP module (APP module-2) 225.
  • the extrinsic information 230 output from APP module-2 225 is input into a de-interleaver 235.
  • the output of the de- interleaver 235 is input to APP module-1 240, as shown. However, for the first iteration, zeros are input into APP module-1 240 instead of the de-interleaver output.
  • the turbo decoder 200 operates on a vector of received data Y that is a noisy version of the signal generated by the turbo encoder. Furthermore, let us assume that the vector Y is of length K and is defined as Y = {y_k}, k = 1, ..., K, where each element y_k = (y_k^sys, y_k^p1, y_k^p2).
  • y_k^sys is the systematic bit 110 and y_k^p1, y_k^p2 are the two parity bit streams 130, 135 generated by the two respective constituent encoders 120, 125.
  • for each information bit the decoder forms an a-posteriori log-likelihood ratio (the logarithm of the ratio of the probabilities of the two possible bit values, given Y). The sign of the above expression indicates the most probable value of the y_k bit.
  • the magnitude of the expression is a measure of the reliability of the decision based on the sign - the larger the magnitude, the higher the reliability. If the magnitude is zero, then the two values of the bit are equally probable, i.e. there is no information on the value of this bit.
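  • As a small illustration of this sign/magnitude interpretation, the sketch below derives a hard decision and a reliability figure from a log-likelihood ratio; the mapping of a positive LLR to bit value 0 is an assumed convention of the sketch, not a convention stated in the description.

```python
# Hard decision and reliability from an a-posteriori log-likelihood ratio (LLR).
# The convention "positive LLR -> bit 0" is an assumption of this sketch.
def decide(llr):
    bit = 0 if llr > 0 else 1      # sign gives the most probable bit value
    reliability = abs(llr)         # magnitude gives the confidence; 0 means no information
    return bit, reliability

print(decide(+3.2))                # (0, 3.2) - a confident zero
print(decide(-0.1))                # (1, 0.1) - a weak one
```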
  • Turbo codes are commonly used with long blocks of data since, for a given constituent code and signal-to-noise ratio, the achieved performance improves with increasing size of the data block.
  • the computation of the a-posteriori probabilities requires the availability of the branch metrics and forward and backward path metrics. Therefore, if the four procedures outlined above are carried out sequentially, the decoder needs several buffers. Each buffer needs to be the length of the received block of data, for storing intermediate results. It is noteworthy that, in order to achieve good results, such data blocks are of a considerable word-length.
  • the APP Module-1 240 calculates the a-posteriori probabilities for all information bits in the data block. The most important ingredient of this information is the extrinsic information 215.
  • the extrinsic information 215 for a particular information bit is the novel information derived by APP Module-1 240.
  • the extrinsic information 215 generated by APP Module-1 240 is provided to APP Module-2 225 as a-priori information.
  • the salient feature of the turbo decoding algorithm is the application of iterative decoding.
  • Once APP Module-2 225 has completed its operation, the extrinsic information 230 generated by APP Module-2 225 is fed back to APP Module-1 240, which processes the data again in order to generate new extrinsic information that can be provided to APP Module-2 225 for further processing.
  • a single round of processing of the data by both APP Modules 225, 240 is referred to as one iteration.
  • the performance of the coding scheme usually improves as the number of iterations increases. In a practical scheme, the total number of iterations is generally constrained by considerations of complexity and tolerable delay in the decoder design.
  • the information has been interleaved 115 before being encoded by the second constituent encoder 125. Consequently, in FIG. 2, the extrinsic information 215 at the output of APP Module-1 240 has to be interleaved 245 before being fed to APP Module-2 225. For the same reason, the received systematic bits 110 have to be interleaved 220. Evidently, the extrinsic information 230 at the output of APP Module-2 225 has to be de-interleaved 235 before it is provided to APP Module-1 240 as a-priori information.
  • the final decision 255 of the turbo decoder is derived by taking the de-interleaved (in de-interleaver 250) a-posteriori probabilities output from APP Module-2 225. As is evident from equation [3], all that is needed for the final decision on a particular bit is to determine the sign of the computation made by APP Module-2 225.
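  • The data flow of FIG. 2 can be summarised in the following sketch. The app_module() function is a hypothetical placeholder for a soft-in/soft-out APP module (it is stubbed out so that the sketch executes); only the interleave / de-interleave / feedback wiring and the final sign decision are intended to be illustrative.

```python
# Data-flow sketch of the iterative turbo decoder of FIG. 2.
# app_module() is a hypothetical placeholder, NOT a real APP implementation:
# it is stubbed to return zero extrinsic information so the sketch runs.
def app_module(y_sys, y_par, apriori):
    """Placeholder soft-in/soft-out module: returns (extrinsic LLRs, a-posteriori LLRs)."""
    ext = [0.0] * len(y_sys)
    app = [s + a for s, a in zip(y_sys, apriori)]
    return ext, app

def deinterleave(x, pi):
    out = [0.0] * len(x)
    for k, p in enumerate(pi):
        out[p] = x[k]
    return out

def turbo_decode(y_sys, y_par1, y_par2, pi, iterations=8):
    K = len(y_sys)
    fed_back = [0.0] * K                          # zeros into APP Module-1 on iteration 1
    y_sys_int = [y_sys[i] for i in pi]            # interleaver 220 on the systematic bits
    for _ in range(iterations):
        ext1, _ = app_module(y_sys, y_par1, apriori=fed_back)         # APP Module-1 240
        ext1_int = [ext1[i] for i in pi]                              # interleaver 245
        ext2, app2 = app_module(y_sys_int, y_par2, apriori=ext1_int)  # APP Module-2 225
        fed_back = deinterleave(ext2, pi)                             # de-interleaver 235
    app_final = deinterleave(app2, pi)                                # de-interleaver 250
    return [0 if llr > 0 else 1 for llr in app_final]                 # sign decision 255
```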
  • FIG. 2 is a generic block diagram and alternative implementations of a turbo decoder may be used, depending upon the prevalent engineering considerations.
  • one decoder implementation may have only one APP Module that operates intermittently as APP Module-1 and APP Module-2.
  • the same modules may be used for carrying out all iterations.
  • it is possible to design a turbo decoder by concatenating several decoding blocks in a pipeline configuration, so that each block performs one iteration.
  • the straightforward forward-backward algorithm for calculating the a-posteriori probabilities in the APP Module is illustrated in the flowchart 300 of FIG. 3.
  • the main feature of the straightforward approach, from the point of view of memory requirements, is that the branch and path metrics from the computation of the a-posteriori probabilities for all 'K' bits in the data block need to be stored in three vectors G, A and B. These vectors are of length 'K' and they store the branch metrics, the forward path metrics and the backward path metrics, respectively.
  • the a-posteriori probabilities (vector P) are then computed using the data stored in vectors A, B and G, as shown in step 325.
  • the decoder needs a large amount of memory in order to store intermediate results of the forward path metrics and of the backward path metrics. This is a significant problem in the implementation of practical turbo-decoders.
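  • As a concrete (and deliberately simplified) illustration of this straightforward approach, the sketch below runs a max-log-MAP forward-backward pass over a single 4-state RSC code, buffering the branch metrics G, forward metrics A and backward metrics B for the whole block, as in FIG. 3. The (7,5) constituent code, the bit mapping 0 -> +1 / 1 -> -1 and the equal-weight initialisation of the backward metrics are assumptions of this sketch, not details taken from the description.

```python
# Max-log-MAP forward-backward sketch mirroring FIG. 3: G, A and B are all buffered
# over the whole block of K symbols before the a-posteriori values P are formed.
import math
import random

# trellis of the (7,5) RSC: state = (r1, r2); feedback a = u ^ r1 ^ r2; parity = a ^ r2
def trellis_edges():
    edges = []                       # (state, input_bit, parity_bit, next_state)
    for s in range(4):
        r1, r2 = (s >> 1) & 1, s & 1
        for u in (0, 1):
            a = u ^ r1 ^ r2
            edges.append((s, u, a ^ r2, (a << 1) | r1))
    return edges

def max_log_map(y_sys, y_par):
    """y_sys, y_par: received soft samples.  Returns the a-posteriori LLR vector P
    (positive value -> bit 0 more likely under the assumed mapping)."""
    K, edges, NEG = len(y_sys), trellis_edges(), -math.inf
    # step 305/310: branch metrics, one entry per trellis edge and per symbol
    G = [[y_sys[k] * (1 - 2 * u) + y_par[k] * (1 - 2 * c) for (s, u, c, s2) in edges]
         for k in range(K)]
    # step 315: forward path metrics A (length K+1); encoder starts in the zero state
    A = [[NEG] * 4 for _ in range(K + 1)]
    A[0][0] = 0.0
    for k in range(K):
        for e, (s, u, c, s2) in enumerate(edges):
            A[k + 1][s2] = max(A[k + 1][s2], A[k][s] + G[k][e])
    # step 320: backward path metrics B (length K+1), equal weights at the block end
    B = [[0.0] * 4 for _ in range(K + 1)]
    for k in range(K - 1, -1, -1):
        for st in range(4):
            B[k][st] = max(B[k + 1][s2] + G[k][e]
                           for e, (s, u, c, s2) in enumerate(edges) if s == st)
    # step 325: a-posteriori values P from A, B and G
    P = []
    for k in range(K):
        best = {0: NEG, 1: NEG}
        for e, (s, u, c, s2) in enumerate(edges):
            best[u] = max(best[u], A[k][s] + G[k][e] + B[k + 1][s2])
        P.append(best[0] - best[1])
    return P

noisy = [random.uniform(-1, 1) for _ in range(16)]
llrs = max_log_map(noisy, noisy)     # toy call on random soft samples, to exercise the code
```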
  • the purpose of the "sliding window” technique is to perform the calculation of the a-posteriori probabilities, whilst avoiding the necessity for allocating large memory buffers for storing intermediate results in the communication unit's receiver.
  • a consequence of using a “sliding window” approach is a slight deterioration in performance.
  • the timing diagram 400 of FIG. 4 illustrates the "sliding window" approach as described in US Patent 5,933,462 whereby the decoder uses one "Forward Processor” and two "Backward Processors". Notably, both Backward Processors access the storage memory.
  • the notation Sm indicates the data in the m-th time slot.
  • the block of data {y_k} of length 'K' is partitioned into data slots of length 'L'.
  • when there is not an integer number of data slots in a data block, the last data slot would be configured to be a shorter or a longer data slot. In the 3GPP system, an opportunity for the last data block to be a shorter data block exists, since there is a tail at the end (i.e. the output state is known).
  • the integer 'M' (the number of data slots) is assumed to be even, but all results can be readily adapted to an odd 'M'.
  • the length of the data slot should be large enough to facilitate the proper operation of the Backward Processor (s) .
  • the length of a data slot needs to be several times the size of the constraint length of the constituent encoder.
  • Data slot Sm is defined as Sm = (y_(mL+1), ..., y_((m+1)L)), for m = 0, 1, ..., M-1, so that a block of data is given by the concatenation of the data slots S0, S1, ..., S(M-1).
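  • The partitioning just described can be sketched as follows; the block length, slot length and the handling of a shorter final slot are illustrative choices only.

```python
# Small sketch of the data-slot partitioning: a block of K soft samples is split
# into M slots of length L.  K and L below are example values, not mandated ones.
def partition(block, L):
    return [block[i:i + L] for i in range(0, len(block), L)]

K, L = 100, 32
slots = partition(list(range(K)), L)
M = len(slots)              # here M = 4, and the last slot holds only 4 samples
```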
  • turbo coding scheme with a block interleaver is used.
  • the operation of the decoder starts after the entire block of data has been received.
  • the clock rate for all processors should be much higher than the clock rate of the transmitted data. This is because the iterative decoding procedure for turbo codes should be capable of completing several iterations for each transmitted block of data in order to achieve a good performance.
  • the first timing diagram 405 presents the operation of the Forward Processor. Since the trellis encoder in the transmitter is initialised at a known trellis state, the Forward Processor in the receiver can be properly initialised. In this manner, the receiver is able to operate on the received block of data in a time slot-by-time slot fashion, i.e. processing data from slot S0 through to slot S(M-1).
  • the Backward Processors in timing diagrams 410, 415 do not start their operation at the end of the data block.
  • the Backward Processors 410, 415 cannot be properly initialised. Therefore, the two Backward Processors always operate on pairs of adjacent time slots in the following manner.
  • the first Backward Processor then proceeds backwards from the end of the second time slot computing backward path metrics that will not be used for decoding.
  • This operation may be termed a "warm-up" mode of operation, as its sole purpose is to provide correct initialisation for computing path metrics in the first time slot of the pair.
  • the second Backward Processor then computes backward path metrics for the first time slot of the pair. This operation may be termed "active" mode.
  • the second time diagram 410 in FIG. 4 presents the operation of the first Backward Processor 410.
  • the first Backward Processor starts operating at the same time as the Forward Processor 405 and operates according to the following rule in time slot m (where 0 ≤ m ≤ M-1): (i) if m is an even number, then the first Backward Processor 410 processes the data in a "warm-up" mode up to data slot S(m+1) (i.e. where 0 ≤ m ≤ M-2); (ii) if m is an odd number, then the first Backward Processor 410 processes the data in an "active" mode up to data slot S(m-1) (i.e. where 1 ≤ m ≤ M-1).
  • the backward path metrics calculated by the first Backward Processor 410 are never stored in memory. These metrics are used as follows:
  • the operation of the second Backward Processor 415 is carried out as presented in the third time diagram in FIG. 4.
  • the operation of the second Backward Processor 415 starts at a delay of one time slot relative to the operation of the Forward Processor 405 and therefore ends one time slot later.
  • the second Backward Processor 415 operates according to the following rule in time slot m (where 1 ≤ m ≤ M):
  • the odd time slot (M-1) fails to provide data for the second Backward Processor 415 to operate on in the "warm-up" mode. Therefore, the second Backward Processor 415 is just kept idle. In the even time slot M, the second Backward Processor 415 operates on the data in the last time slot, S(M-1). There is no "warm-up" result for this time slot. Therefore, if the data block is properly terminated, the second Backward Processor 415 uses the known initialisation states. Otherwise, the second Backward Processor 415 just assigns equal weights to all states. In the latter case, the performance for the last data slot is poorer than for other data slots.
  • the backward path metrics calculated by the second Backward Processor 415 are also never stored in memory. These metrics are used as follows:
  • the operation of the APP Calculator is illustrated in the fourth timing diagram 420 in FIG. 4.
  • the APP Calculator computes the a-posteriori probabilities. Notably, its operation is delayed by one time slot relative to the operation of the Forward Processor.
  • the APP Calculator uses the output of the Forward Processor 405 and either of the two Backward Processors 410 and 415 as follows:
  • the backward path metrics computed by the second Backward Processor 415 are used for data slots Sm when m is odd. These metrics are used in even time slots.
  • the APP Calculator reads the forward path metrics computed by the Forward Processor 405 from memory. However, the backward path metrics are used on a symbol-by-symbol basis, immediately after the first and second Backward Processors 410, 415 compute them. As explained above, backward path metrics are provided by the first Backward Processor 410 for even time slots and by the second Backward Processor 415 for the odd time slots.
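  • To make this prior-art scheduling concrete, the sketch below prints, slot by slot, which data slot each processor works on under the rules just described. The table is illustrative only and simply restates FIG. 4; M = 6 is an arbitrary example and M is assumed even, as in the text.

```python
# Schedule sketch reproducing the prior-art timing of FIG. 4 (US 5,933,462): the
# Forward Processor stores its metrics, while the two Backward Processors alternate
# between "warm-up" and "active" modes and their outputs are consumed immediately.
def prior_art_schedule(M):
    rows = []
    for m in range(M + 1):
        fwd = f"store A for S{m}" if m < M else "-"
        if m % 2 == 0:
            bwd1 = f"warm-up on S{m+1}" if m <= M - 2 else "-"
            bwd2 = f"active on S{m-1}" if 1 <= m <= M else "-"
        else:
            bwd1 = f"active on S{m-1}" if m <= M - 1 else "-"
            bwd2 = f"warm-up on S{m+1}" if m <= M - 2 else "-"   # idle in slot M-1
        app = f"APP for S{m-1}" if m >= 1 else "-"
        rows.append((m, fwd, bwd1, bwd2, app))
    return rows

for row in prior_art_schedule(6):
    print("slot %d | Fwd: %-14s | Bwd1: %-16s | Bwd2: %-16s | %s" % row)
```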
  • EP-1261139-A2 discloses a known mechanism and associated architecture to perform standard MAP decoding.
  • the known mechanism calculates beta metrics in a reverse order, stores in beta state metrics random access memory (RAM) and then calculates alpha state metrics.
  • the technique disclosed in EP 1261139-A2 combines alpha state metrics and beta state metrics in an extrinsic block.
  • the technique disclosed in EP-1261139-A2 subsequently computes beta state metrics for a next sliding window.
  • EP-1261139-A2 uses a prolog section for the alpha calculation, which is basically a warm-up mode for the Forward Processor.
  • the addressing scheme for the alpha metrics block is extremely complicated, i.e. it uses interleaved forward-reverse addressing.
  • EP 1261139-A2 describes a known technique that suffers from many of the complexity and inefficiency problems of other known mechanisms.
  • a communication unit as claimed in Claim 14.
  • a storage medium storing processor-implementable instructions, as claimed in Claim 16.
  • inventive concepts hereinafter described propose an efficient implementation of a sliding window technique that can be used in a turbo decoder.
  • inventive concepts present a method for implementing a forward-backward decoding algorithm for decoding turbo codes that is more memory and processing efficient than the known prior art.
  • backward path metrics are stored in storage memory thereby enabling the APP Calculator to immediately use computed forward path metrics.
  • FIG. 1 illustrates a block diagram of a known turbo encoder
  • FIG. 2 illustrates a block diagram of a known turbo decoder
  • FIG. 3 is a flow chart illustrating a computation operation of the a-posteriori probabilities using a known straightforward forward-backward approach in a turbo decoder
  • FIG. 4 shows a series of timing diagrams illustrating deficiencies in applying a sliding window approach to a turbo decoder according to US Patent 5,933,462.
  • FIG. 5 illustrates a data communication unit adapted in accordance with the preferred embodiment of the present invention
  • FIG. 6 illustrates a flowchart of a communication unit's receiver function with two Backward Processors having access to a storage memory, in accordance with the preferred embodiment of the present invention
  • FIG. 7 is a timing diagram illustrating the timing associated with the flowchart of FIG. 6;
  • FIG. 8 illustrates a flowchart of the communication unit's receiver function with a single Backward Processor having access to a storage memory, in accordance with an enhanced embodiment of the present invention
  • FIG. 9 is a timing diagram illustrating the timing associated with the flowchart of FIG. 8.
  • a subscriber unit capable of turbo decoding, for example a subscriber unit operating on a universal mobile telecommunication system (UMTS) wireless communication system.
  • UMTS universal mobile telecommunication system
  • the invention is applicable to the Third Generation Partnership Project (3GPP) specification for the wide-band code-division multiple access (WCDMA) standard relating to the UTRAN radio interface (described in the 3G TS 25.xxx series of specifications).
  • 3GPP Third Generation Partnership Project
  • WCDMA wide-band code-division multiple access
  • the inventive concepts hereinafter described are equally applicable to other wireless or wired communication systems, for example, a CDMA-2000 system that supports turbo-decoding techniques.
  • inventive concepts hereinafter described could be used within any element in the UMTS infrastructure that is capable of performing decoding operations, for example, a base transceiver station (termed a Node-B in UMTS parlance) .
  • decoding operations may be alternatively controlled, implemented in full or implemented in part by adapting any suitable part of the communication system.
  • equivalent elements such as intermediate fixed communication units in other types of systems may, in appropriate circumstances, be adapted to provide or facilitate the turbo decoding mechanism as described herein.
  • the turbo decoding operation includes computation of the a-posteriori probabilities (APPs) using an APP Calculator.
  • the APP Calculator uses branch metrics, forward path metrics and backward path metrics generated by one or more processors.
  • the APP Calculator is a preferred example of a processor function that processes received data blocks in order to produce a data decoded output. In this regard, it may be viewed as an example of a data block determination function.
  • Referring to FIG. 5, there is shown a block diagram of a wireless subscriber unit 500, generally termed a user equipment (UE) or user terminal (UT) in UMTS parlance, adapted to support the inventive concepts of the preferred embodiments of the present invention.
  • the UE 500 contains an antenna 502 preferably coupled to a duplex filter or antenna switch 504 that provides isolation between receive and transmit chains within the UE 500.
  • the receiver chain 510 includes receiver front-end circuitry 506 (effectively providing reception, filtering and intermediate or baseband frequency conversion) .
  • the front-end circuit is serially coupled to a signal processing function 508.
  • a portion of the signal processing function 508 is configured as a turbo decoder 509 to provide turbo-decoding of received signals.
  • the signal processing function 508, in the context of the present invention will perform numerous other signal processing tasks (not shown), such as: symbol timing recovery, demodulation, de-multiplexing, de-interleaving, reordering, etc.
  • the receiver chain 510 also includes received signal strength indicator (RSSI) circuitry 512, which in turn is coupled to a controller 514 for maintaining overall UE control.
  • the controller 514 is also coupled to the receiver front-end circuitry 506 and the signal processing function 508 (generally realised by a digital signal processor (DSP) or dedicated hardware such as an application specific integrated circuit (ASIC) ) .
  • the controller is also coupled to a memory device 516 that stores operating regimes, such as decoding/encoding functions, synchronisation patterns, code sequences and the like. It is within the contemplation of the invention that the memory device, and/or a further memory element may be included within the signal processing function 508 and/or turbo decoder 509.
  • the transmit chain essentially includes an input device 520, such as a keypad, coupled in series through transmitter/modulation circuitry 522 and a power amplifier 524 to the antenna 502.
  • the transmitter/modulation circuitry 522 and the power amplifier 524 are operationally responsive to the controller.
  • a timer 518 is operably coupled to the controller 514 to control the timing of operations (transmission or reception of time-dependent signals) within the UE 500.
  • the various components within the UE 500 can be realised in discrete or integrated component form, with an ultimate structure based on design, size and cost considerations.
  • the signal processing function 508, which may be a baseband (back-end) signal processing receiver integrated circuit (IC) in some embodiments, and particularly the turbo decoder 509, has been adapted to incorporate the inventive concepts described below.
  • IC back-end signal processing receiver integrated circuit
  • the turbo decoder 509 has been adapted to compute backward path metrics and store the backward path metrics in a more time, processing and memory efficient manner than known prior art turbo decoders.
  • the use of memory device 516 has been adapted such that it temporarily stores backward path metrics - as provided by a Backward Processor function.
  • the turbo decoder 509 has been further adapted to compute forward path metrics and to calculate and output a-posteriori probabilities of the received data blocks based on the stored backward path metrics and the substantially immediately computed forward path metrics.
  • the Forward Processor is delayed to operate at a substantially concurrent time as the APP Calculator, thereby enabling the turbo decoder 509 to determine the a-posteriori probabilities of the received bits immediately, i.e. without a need to store forward path metrics.
  • the turbo decoder 509 is able to determine the a-posteriori probabilities of the received symbols in a sequentially ordered manner. The operation of the adapted turbo decoder 509 is described further with respect to the flowcharts of FIG. 6 and FIG. 8.
  • the various adapted components within the UE 500 can be realised in discrete or integrated component form. More generally, the functionality associated with turbo decoding of received data blocks may be implemented in a respective communication unit in any suitable manner. For example, new signal processing functionality may be added to a conventional UE (or Node B), or alternatively existing processing functions of a conventional communication unit may be adapted, for example by reprogramming one or more processors therein. As such, the adaptation of, for example, the signal processing function 508 and/or turbo decoder 509 in the preferred embodiment of the present invention, may be implemented in the form of processor-implementable instructions stored on a storage medium, such as a floppy disk, hard disk, PROM, RAM or any combination of these or other storage media.
  • a storage medium such as a floppy disk, hard disk, PROM, RAM or any combination of these or other storage media.
  • the timing of the APP Calculator operations is performed in a much-improved manner using a significantly reduced amount of memory.
  • the preferred timing of operations is presented in the timing diagrams of FIG. 7 and FIG. 9. It is convenient to start the description of the improved decoding procedure with the Backward Processors. As is evident from comparing the timing diagrams in FIG. 7 to those in FIG. 4, the Backward Processors operate with the same timing for computation of backward path metrics of respective data slots (S0 to S(M-1)). However, the inventors of the present invention have recognised the benefits to be gained from storing backward path metrics, rather than storing forward path metrics, in storage memory.
  • the procedures related to the Backward Processors are as illustrated in the flowchart 600 of FIG.6.
  • both Backward Processors have access to a memory, say memory device 516 in FIG. 5.
  • the two Backward Processors alternate between "active" mode and "warm-up” mode.
  • the first Backward Processor is arranged to be in its "active” mode
  • the second Backward Processor is arranged to be in a "warm-up” mode, for example, during odd time slots.
  • the first Backward Processor is arranged to be in a "warm-up” mode, whilst the second Backward Processor is arranged to be in an "active" mode.
  • If it is determined, in step 610, that 'm' is '0' or even and less than or equal to 'M-2', then on data slot S(m+1), the first Backward Processor is set to function in a "warm-up" mode, in step 615. The process then moves on to step 620. If it is determined, in step 610, that 'm' is either odd or greater than 'M-2', then the process also moves to step 620.
  • If it is determined, in step 620, that 'm' is even, greater than '0' and less than or equal to 'M', then on data slot S(m-1), the second Backward Processor is set to function in an "active" mode, in step 625.
  • the process then moves to step 630. If it is determined, in step 620, that 'm' is either odd or greater than 'M', then the process also moves to step 630.
  • the second Backward Processor computes the backward path metrics and stores the computed metrics in the storage memory.
  • In step 630, a further determination is made as to whether 'm' is odd and less than or equal to 'M-1'. If it is determined, in step 630, that 'm' is odd and less than or equal to 'M-1', then the first Backward Processor is set to function in an "active" mode on data slot S(m-1). The process then moves to step 636. If it is determined, in step 630, that 'm' is even or greater than 'M-1', then the process also moves on to step 636. In step 636, a determination is made as to whether 'm' is odd and less than or equal to 'M-3'. If it is determined, in step 636, that 'm' is odd and less than or equal to 'M-3', then the second Backward Processor functions in a "warm-up" mode on data slot S(m+1), as shown in step 638.
  • If it is determined, in step 636, that 'm' is even or greater than 'M-3', then the process also moves on to step 640.
  • In step 640, a further determination is made as to whether 'm' is within the range 1 < m ≤ M+1. If it is determined, in step 640, that 'm' is within that range, then the Forward Processor is activated on data slot S(m-2). If it is determined, in step 640, that 'm' is greater than 'M+1' or less than '2', then the process moves on to step 650.
  • the Forward Processor operates on the data in sequential time slots and starts its operation at a delay of two time slots relative to the start of the operation of the first Backward Processor.
  • the Forward Processor is able to compute forward path metrics immediately after, and independently of, the first or second Backward Processor has computed backward path metrics.
  • step 645 the a-posteriori probability (APP) Calculator is activated.
  • the APP Calculator now immediately uses the computed forward path metrics on a symbol-by-symbol basis, without incurring any delay.
  • the received data blocks are decoded and output, as shown in step 648.
  • the process then moves onto step 650.
  • In step 650, a determination is made as to whether the current time slot being processed is the last time slot in the data block, i.e. is m ≤ M. If m ≤ M in step 650, the data being processed in the time slot is not the last data in the data block, and m is incremented so that the next time slot is selected for processing, in step 655.
  • the previously stored backward metrics in data slot S(m-3) are then preferably deleted in step 658, after they have been used in the calculation of APPs. In this manner, there is a minimum use of the storage memory.
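  • The improved scheduling of FIG. 6 can be summarised in a slot-by-slot sketch (illustrative only, with M = 6 chosen arbitrarily): backward path metrics are now the quantities written to and deleted from memory, and the APP Calculator runs in the same time slot as the Forward Processor.

```python
# Schedule sketch of the preferred timing of FIG. 6 / FIG. 7: both Backward
# Processors run as in the prior art, but the backward path metrics are what is
# stored, the Forward Processor is delayed by two time slots, and the APP
# Calculator consumes the forward metrics as they are computed.  M assumed even.
def preferred_schedule(M):
    for m in range(M + 2):
        acts = []
        if m % 2 == 0:
            if m <= M - 2:
                acts.append(f"Bwd1 warm-up on S{m+1}")
            if 2 <= m <= M:
                acts.append(f"Bwd2 active on S{m-1} -> store B[S{m-1}]")
        else:
            if m <= M - 1:
                acts.append(f"Bwd1 active on S{m-1} -> store B[S{m-1}]")
            if m <= M - 3:
                acts.append(f"Bwd2 warm-up on S{m+1}")
        if 2 <= m <= M + 1:
            acts.append(f"Fwd on S{m-2} and APP for S{m-2} (reads stored B[S{m-2}])")
        if m >= 3:
            acts.append(f"delete B[S{m-3}]")
        print(f"time slot {m}: " + "; ".join(acts))

preferred_schedule(6)
```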
  • Referring to the timing diagram 700 of FIG. 7, the timing of the Forward Processor operations 705 is illustrated in the first timing diagram. As shown, the Forward Processor delays the processing of data by two time slots with respect to the first Backward Processor commencing in a 'warm-up' mode.
  • the second timing diagram illustrates the timing and mode of operation of the first Backward Processor 710.
  • the third timing diagram shows the timing and mode of operation of the second Backward Processor 715.
  • the first Backward Processor is in an 'active' mode generating backward path metrics
  • the second Backward Processor is operating in a 'warm-up' mode.
  • the first Backward Processor is operating in a 'warm-up' mode.
  • the respective Backward Processor that is operating in a 'warm-up' mode calculates backward path metrics so that it is able to provide proper initialisation for computing and storing backward path metrics by the alternate Backward Processor operating in an 'active' mode.
  • the timing of the APP Calculator 720 differs in that the APP Calculator 720 operates, in any given time slot, on the same data slot that the Forward Processor operates on.
  • the APP Calculator uses the forward path metrics generated by the Forward Processor 705, on a symbol-by-symbol basis, as soon as the Forward Processor computes them.
  • the APP Calculator operates on the symbols within each data slot in the correct (forward) sequence.
  • the Calculator provides the a-posteriori probabilities for all symbols in the data block in their correct sequential order.
  • the two Backward Processors are configured to alternate their operations in "active" mode. This implies that both Backward Processors also take turns in storing the completed backward path metrics in memory.
  • the inventors of the present invention have recognised that, for some hardware architectures, it might be advantageous to have only one Backward Processor accessing the memory device.
  • a single Backward Processor can be provided access to the memory device, in the manner illustrated in the flowchart 800 of FIG. 8, and operating in accordance with the timing diagrams presented in FIG. 9.
  • the Forward Processor and the APP Calculator operate in the same manner as the improved procedure described with reference to FIG. 6 and FIG. 7.
  • Referring to FIG. 8, a flowchart of an enhanced embodiment of the present invention is shown.
  • the second Backward Processor operates in an "active" mode, having access to the storage memory.
  • the other (first) Backward Processor continuously operates in a "warm-up” mode.
  • the reverse arrangement of the Backward Processors could be applied, i.e. the first Backward Processor could operate continuously in an "active" mode, whilst the second Backward Processor operates continuously in a "warm-up" mode.
  • a determination is then made, in step 810, as to whether 'm' is less than 'M-1'. If it is determined, in step 810, that 'm' is less than 'M-1', then on data slot S(m+1), the first Backward Processor is set to function in a "warm-up" mode, in step 815.
  • the process then moves to step 820, where a further determination is made as to whether 'm' is greater than '0' and less than or equal to 'M'. If it is determined, in step 810, that 'm' is greater than or equal to 'M-1', then the process also moves to step 820.
  • If it is determined, in step 820, that 'm' is greater than '0' and less than or equal to 'M', then on data slot S(m-1), the second Backward Processor is set to function in an "active" mode, as shown in step 825.
  • only the second Backward Processor computes the backward path metrics and stores the computed metrics in the memory device.
  • In step 830, a further determination is made as to whether 'm' is in the range 1 < m ≤ M+1. If it is determined, in step 820, that 'm' is greater than 'M', then the process also moves to step 830. If it is determined, in step 830, that 'm' is in the range 1 < m ≤ M+1, then the Forward Processor is activated on data slot S(m-2).
  • the Forward Processor operates on the data sequentially.
  • the Forward Processor also starts its operation at a delay of one time slot relative to the start of operation of the second Backward Processor, as indicated in FIG. 9. In this manner, the Forward Processor is able to compute forward path metrics immediately after the second Backward Processor computes and stores backward path metrics. Thus, there is again no need to store forward path metrics .
  • step 835 the APP Calculator is activated to compute the a-posteriori probabilities (APP) for the received sequence.
  • APP a-posteriori probabilities
  • the APP Calculator now immediately uses the computed forward path metrics on a symbol-by-symbol basis.
  • the APP module reads the backward path metrics from storage memory.
  • the process then moves onto step 838. In this manner, data blocks are decoded and output, as shown in step 838.
  • If it is determined, in step 830, that 'm' is greater than 'M+1' or less than '2', then the process also moves on to step 840.
  • In step 840, a determination is made as to whether the current time slot being processed is the last time slot in the data block, i.e. is m ≤ M. If m ≤ M in step 840, the data just processed is not the last data from the data block, and m is incremented so that the data in the next time slot can be processed, in step 845.
  • the previously stored backward metrics in data slot S(m-3) are then preferably deleted in step 848, after the metrics are used in the calculation of APPs. The process then moves to step 810, and repeats.
  • step 850 the process is completed for that data block in step 850.
  • the backward path metrics for the previous respective odd or even time slot are deleted before starting the new backward path metric computations in the "active" mode.
  • Referring to FIG. 9, a series of timing diagrams 900 illustrate the timing of the Forward Processor 905, Backward Processors 910, 915 and APP Calculator 920 of the enhanced embodiment of the present invention.
  • the primary difference between the timing diagrams of the preferred embodiment of FIG. 7 and the enhanced embodiment of FIG. 9 is in the division of tasks between the two Backward Processors 910 and 915.
  • the first Backward Processor 910 operates continuously in the "warm-up" mode whilst the second Backward Processor 915 operates continuously in the "active" mode.
  • the first Backward Processor 910 therefore provides proper initialisation for the backward path metrics to be subsequently computed by the second Backward Processor 915 in a subsequent time slot. Since only the second Backward Processor 915 operates in an "active" mode, only the second Backward Processor 915 writes the backward path metrics to the storage memory.
  • both Backward Processors process data slots in a sequential manner (S(m-1), Sm, S(m+1), etc.).
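  • The enhanced scheduling can again be summarised slot by slot (illustrative only, with M = 6 chosen arbitrarily): the first Backward Processor never leaves "warm-up" mode, and only the second Backward Processor writes backward path metrics to the storage memory.

```python
# Schedule sketch of the enhanced timing of FIG. 8 / FIG. 9: the first Backward
# Processor stays permanently in "warm-up" mode, the second stays permanently in
# "active" mode and is the only processor writing backward path metrics to memory.
def enhanced_schedule(M):
    for m in range(M + 2):
        acts = []
        if m <= M - 2:
            acts.append(f"Bwd1 warm-up on S{m+1}")
        if 1 <= m <= M:
            acts.append(f"Bwd2 active on S{m-1} -> store B[S{m-1}]")
        if 2 <= m <= M + 1:
            acts.append(f"Fwd on S{m-2} and APP for S{m-2} (reads stored B[S{m-2}])")
        if m >= 3:
            acts.append(f"delete B[S{m-3}]")
        print(f"time slot {m}: " + "; ".join(acts))

enhanced_schedule(6)
```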
  • turbo decoder used in a subscriber device.
  • any device or processor function in any element in the communication system that is capable of performing decoding operations, particularly turbo decoding, could be adapted to incorporate the inventive concepts described herein.
  • the decoding operation may be performed by any communication unit, for example a subscriber unit or a base transceiver station (BTS) in any suitable communication system, for example CDMA 2000.
  • a communication unit for example a subscriber unit or a base transceiver station (BTS) in any suitable communication system, for example CDMA 2000.
  • BTS base transceiver station
  • the improved decoding operation uses one Forward Processor and two Backward Processors.
  • decoding configurations may benefit from the inventive concepts of the present invention, for example by using a single Backward Processor and using it alternately as the first Backward Processor and the second Backward Processor.
  • This configuration would save hardware and/or save processing time when it is implemented in software, at the expense of a longer delay. It is therefore envisaged that such a configuration may be employed when delay is less important than savings on processor functionality.
  • the turbo decoding process of the preferred and enhanced embodiments described above provides a superior sliding window decoding technique with reduced storage memory of path metrics, compared to US 5,933,462. This has been achieved by changing the timing of the Forward Processor and of the two Backward Processors. Furthermore, in this manner, the APP Calculator is able to compute a-posteriori probabilities for the received data in the correct (forward) sequence.
  • the turbo decoding process of the enhanced embodiment can operate according to the timing diagram of FIG. 9. In this manner, the first Backward Processor is configured to be always in the "warm-up" mode and the second Backward Processor is always in the "active mode". Hence, only one Backward Processor is configured to store the calculated backward path metrics in the storage memory.

Landscapes

  • Physics & Mathematics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computing Systems (AREA)
  • Error Detection And Correction (AREA)

Abstract

The invention relates to a method (600, 800) of turbo decoding one or more blocks of data. The method comprises receiving (602, 802) one or more blocks of data in a plurality of time slots at a communication unit. At least one backward processor computes (625) backward path metrics for a plurality of time slots and stores the backward path metrics in a memory element. A forward processor computes (645, 835) forward path metrics for a plurality of data slots. A data block determination function calculates and outputs (648, 838) decoded data for the data blocks based on the forward path metrics and the stored backward path metrics. By storing the backward path metrics in a turbo decoding operation, the data block determination function, for example an a-posteriori probability module, calculates and outputs decoded data using a reduced amount of memory compared with known forward path metric storage techniques. This results in a timing advantage over known "sliding window" decoding techniques.
PCT/EP2003/011570 2002-10-23 2003-10-17 Unite de communication et procede de decodage WO2004038928A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2003301623A AU2003301623A1 (en) 2002-10-23 2003-10-17 Communication unit and method of decoding

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB0224667.6 2002-10-23
GB0224667A GB2394627B (en) 2002-10-23 2002-10-23 Communication unit and method of decoding

Publications (2)

Publication Number Publication Date
WO2004038928A1 true WO2004038928A1 (fr) 2004-05-06
WO2004038928A9 WO2004038928A9 (fr) 2005-06-09

Family

ID=9946435

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2003/011570 WO2004038928A1 (fr) 2002-10-23 2003-10-17 Unite de communication et procede de decodage

Country Status (3)

Country Link
AU (1) AU2003301623A1 (fr)
GB (1) GB2394627B (fr)
WO (1) WO2004038928A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106936448A (zh) * 2017-03-07 2017-07-07 西北工业大学 一种适用于激光通信浮标的Turbo码编码FDAPPM方法

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5933462A (en) * 1996-11-06 1999-08-03 Qualcomm Incorporated Soft decision output decoder for decoding convolutionally encoded codewords
EP1115209A1 (fr) * 2000-01-07 2001-07-11 Motorola, Inc. Dispositif et procédé de décodage SISO en parallèle
GB2365291A (en) * 2000-02-10 2002-02-13 Motorola Inc Soft output decoder for convolutional codes using a sliding window technique which involves a learning period, stored backward recursion and forward recursion
US20020174401A1 (en) * 2001-04-30 2002-11-21 Zhongfeng Wang Area efficient parallel turbo decoding

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7027531B2 (en) * 2000-12-29 2006-04-11 Motorola, Inc. Method and system for initializing a training period in a turbo decoding device
US6993704B2 (en) * 2001-05-23 2006-01-31 Texas Instruments Incorporated Concurrent memory control for turbo decoders

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5933462A (en) * 1996-11-06 1999-08-03 Qualcomm Incorporated Soft decision output decoder for decoding convolutionally encoded codewords
EP1115209A1 (fr) * 2000-01-07 2001-07-11 Motorola, Inc. Dispositif et procédé de décodage SISO en parallèle
GB2365291A (en) * 2000-02-10 2002-02-13 Motorola Inc Soft output decoder for convolutional codes using a sliding window technique which involves a learning period, stored backward recursion and forward recursion
US20020174401A1 (en) * 2001-04-30 2002-11-21 Zhongfeng Wang Area efficient parallel turbo decoding

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
VITERBI A J: "AN INTUITIVE JUSTIFICATION AND A SIMPLIFIED IMPLEMENTATION OF THE MAP DECODER FOR CONVOLUTIONAL CODES", IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, IEEE INC. NEW YORK, US, vol. 16, no. 2, 1 February 1998 (1998-02-01), pages 260 - 264, XP000741780, ISSN: 0733-8716 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106936448A (zh) * 2017-03-07 2017-07-07 西北工业大学 一种适用于激光通信浮标的Turbo码编码FDAPPM方法
CN106936448B (zh) * 2017-03-07 2020-05-01 西北工业大学 一种适用于激光通信浮标的Turbo码编码FDAPPM方法

Also Published As

Publication number Publication date
WO2004038928A9 (fr) 2005-06-09
GB0224667D0 (en) 2002-12-04
AU2003301623A1 (en) 2004-05-13
GB2394627B (en) 2004-09-08
GB2394627A (en) 2004-04-28

Similar Documents

Publication Publication Date Title
US7810018B2 (en) Sliding window method and apparatus for soft input/soft output processing
JP4298170B2 (ja) マップデコーダ用の区分されたデインターリーバメモリ
JP4101653B2 (ja) インターリーバ・メモリ内の復調データのスケーリング
JP4805883B2 (ja) 空間効率のよいターボデコーダ
JP3730885B2 (ja) 誤り訂正ターボ符号の復号器
US8225170B2 (en) MIMO wireless system with diversity processing
KR20090015913A (ko) 디펑처 모듈을 갖는 터보 디코더
KR20020018643A (ko) 고속 map 디코딩을 위한 방법 및 시스템
WO2000033527A1 (fr) Systemes et procedes permettant de recevoir un signal module contenant des bits codes et decodes au moyen d'une demodulation multipasse
US8082483B2 (en) High speed turbo codes decoder for 3G using pipelined SISO Log-MAP decoders architecture
US8069401B2 (en) Equalization techniques using viterbi algorithms in software-defined radio systems
EP1471677A1 (fr) Méthode de détection aveugle du format de transport d'un signal incident encodé par code convolutionnel, et décodeur de code convolutionnel correspondant
US8050363B2 (en) Turbo decoder and method for turbo decoding a double-binary circular recursive systematic convolutional encoded signal
JP2002152056A (ja) ターボコード復号化装置、復号化方法および記録媒体
JP2004147329A (ja) ターボ符号の復号化方法
EP1142183B1 (fr) Procede et systeme de decodage maximal a posteriori rapide
JP2002026879A (ja) データ誤り訂正装置
US20070168820A1 (en) Linear approximation of the max* operation for log-map decoding
WO2004038928A1 (fr) Unite de communication et procede de decodage
JP2001257602A (ja) データ誤り訂正方法及び装置
JP2001326578A (ja) データ誤り訂正装置
JP2004222197A (ja) データ受信方法及び装置
WO2011150295A1 (fr) Architectures de métrique d'état en base 2 et en base 4 pour turbo-décodeur

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
COP Corrected version of pamphlet

Free format text: PAGE 1/8, DRAWINGS, REPLACED BY A NEW PAGE 1/8; AFTER RECTIFICATION OF OBVIOUS ERRORS AUTHORIZED BY THE INTERNATIONAL SEARCH AUTHORITY

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP

WWW Wipo information: withdrawn in national office

Country of ref document: JP