WO2017032255A1 - System and method for data decoding - Google Patents

System and method for data decoding

Info

Publication number
WO2017032255A1
Authority
WO
WIPO (PCT)
Prior art keywords
metric
sequence
bits
metric differences
nodes
Prior art date
Application number
PCT/CN2016/095699
Other languages
French (fr)
Inventor
Qin Huang
Zulin WANG
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University filed Critical Beihang University
Publication of WO2017032255A1 publication Critical patent/WO2017032255A1/en

Classifications

    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/37Decoding methods or techniques, not specific to the particular type of coding provided for in groups H03M13/03 - H03M13/35
    • H03M13/39Sequence estimation, i.e. using statistical methods for the reconstruction of the original codes
    • H03M13/41Sequence estimation, i.e. using statistical methods for the reconstruction of the original codes using the Viterbi algorithm or Viterbi processors
    • H03M13/4107Sequence estimation, i.e. using statistical methods for the reconstruction of the original codes using the Viterbi algorithm or Viterbi processors implementing add, compare, select [ACS] operations
    • H03M13/4138Sequence estimation, i.e. using statistical methods for the reconstruction of the original codes using the Viterbi algorithm or Viterbi processors soft-output Viterbi algorithm based decoding, i.e. Viterbi decoding with weighted decisions
    • H03M13/4146Sequence estimation, i.e. using statistical methods for the reconstruction of the original codes using the Viterbi algorithm or Viterbi processors soft-output Viterbi algorithm based decoding, i.e. Viterbi decoding with weighted decisions soft-output Viterbi decoding according to Battail and Hagenauer in which the soft-output is determined using path metric differences along the maximum-likelihood path, i.e. "SOVA" decoding
    • H03M13/65Purpose and implementation aspects
    • H03M13/6577Representation or format of variables, register sizes or word-lengths and quantization
    • H03M13/658Scaling by multiplication or division

Definitions

  • the present disclosure relates to data decoding, and more particularly, relates to a system and method for decoding using a modified Viterbi algorithm.
  • the Viterbi algorithm is a decoding algorithm for convolutional codes.
  • the algorithm uses dynamic programming to search for the maximum likelihood (ML) path, i.e. the ML code word, on the trellis graph of convolutional codes.
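To make the ML search concrete, the following is a minimal hard-decision Viterbi decoder for a common rate-1/2, constraint-length-3 convolutional code with generators (7, 5) octal. The specific code, the Hamming branch metric, and the zero-flushing termination are illustrative assumptions for this sketch, not details taken from the disclosure.

```python
G = [0b111, 0b101]  # generator polynomials (7, 5) octal -- an illustrative code
K = 3               # constraint length -> 4 trellis states

def encode(bits):
    state, out = 0, []
    for b in bits:
        reg = (b << (K - 1)) | state
        out += [bin(reg & g).count("1") % 2 for g in G]  # parity of tapped bits
        state = reg >> 1
    return out

def viterbi_decode(received):
    n_states = 1 << (K - 1)
    # path metric per state; Hamming metric, smaller is better; start in state 0
    pm = [0] + [float("inf")] * (n_states - 1)
    paths = [[] for _ in range(n_states)]
    for i in range(0, len(received), 2):
        sym = received[i:i + 2]
        new_pm = [float("inf")] * n_states
        new_paths = [None] * n_states
        for s in range(n_states):
            for b in (0, 1):
                reg = (b << (K - 1)) | s
                exp = [bin(reg & g).count("1") % 2 for g in G]
                ns = reg >> 1
                m = pm[s] + sum(x != y for x, y in zip(exp, sym))
                if m < new_pm[ns]:          # add-compare-select (ACS)
                    new_pm[ns] = m
                    new_paths[ns] = paths[s] + [b]
        pm, paths = new_pm, new_paths
    return paths[pm.index(min(pm))]  # survivor path of the best end state

msg = [1, 0, 1, 1, 0, 0]  # trailing zeros flush the register
assert viterbi_decode(encode(msg)) == msg
```

With a noiseless channel the true path accumulates a metric of zero, so the dynamic program recovers the transmitted bits exactly; with channel errors, the surviving path is the one at minimum Hamming distance from the received sequence.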
  • Feldman proposed the method commonly known as the Lazy VA, which finds the ML path by normalizing branch metrics to non-positive numbers and introducing a priority queue (PQ) that pops the possible nodes in sequence.
  • Chen, et al. proposed to update LLRs from two separate directions, referred to as bi-SOVA.
  • the bi-SOVA performs slightly better than the Max-Log-MAP algorithm at the cost of doubling the calculation complexity.
  • Huang, et al. used two scaling factors to further improve the performance of SOVA.
  • Some embodiments of the present disclosure relate to a method of decoding.
  • the method may include trimming metric differences to calculate reliability values.
  • methods of decoding may include determining a maximum likelihood path and related competitive paths based on a received sequence of L bits, and calculating reliability values relating to the bits based on no more than N metric differences, wherein L and N are integers greater than 1; and wherein L is greater than N.
  • the metric differences may be calculated based on the maximum likelihood path and the related competitive paths.
  • the method may further include conducting a backtracking operation on the related competitive paths.
  • the calculating reliability values may include estimating at least some of the reliability values corresponding to some of the bits based on intrinsic information or extrinsic information.
  • the intrinsic information may include a sequence transmitted through a channel, and the extrinsic information may indicate the quality of reliability values relating to at least some of the sequence bits.
  • the reliability value relating to a first bit within a backtracking depth of a second bit may be updated during the backtracking operation, and the second bit corresponds to one of the N metric differences.
  • N may equal L divided by a trimming factor M, wherein M is a number greater than 1.
  • N may be an integer or a decimal.
  • the calculating reliability values may include obtaining a plurality of metric differences and obtaining N minimum metric differences among the plurality of metric differences.
  • the metric differences may be calculated by a Viterbi algorithm.
  • the metric differences may be determined by a modified Lazy Viterbi algorithm, including obtaining a priority queue including a plurality of nodes, and calculating a metric difference at a node of the plurality of nodes, wherein the node has been popped more than once from the priority queue.
  • the plurality of nodes are popped in a sequence ranked based on path metric values of the plurality of nodes.
  • the method further includes modifying branch metrics relating to the sequence into non-positive values.
  • a system of decoding may include an input configured to retrieve a sequence of L bits, a calculating module configured to determine a maximum likelihood path and related competitive paths based on the L bits, a trimming module configured to obtain no more than N metric differences based on the maximum likelihood path and the related competitive paths, and an output configured to output reliability values relating to the bits calculated based on the metric differences, wherein L and N are integers greater than 1; and wherein L is greater than N.
  • FIG. 1 is a block diagram of a data transmission system according to some embodiments of the present disclosure
  • FIG. 2 is a block diagram of a decoding process according to some embodiments of the present disclosure
  • FIG. 3 is an exemplary structure of a decoder according to some embodiments of the present disclosure.
  • FIG. 4 is a process for decoding a convolutional code according to some embodiments of the present disclosure
  • FIG. 5 is an exemplary structure of the trimming module according to some embodiments of the present disclosure.
  • FIG. 6 is a process for searching the ML path and calculating the metric differences with a modified lazy VA according to some embodiments of the present disclosure
  • FIG. 7 is a block diagram of a calculating module according to some embodiments of the present disclosure.
  • FIG. 8 is an exemplary backtracking process 800 according to some embodiments of the present disclosure.
  • FIGs. 9A-9B are plots of extrinsic information according to some embodiments of the present disclosure.
  • FIGs. 10A-10C are plots of the error performance of various decoding algorithms respectively for Code IIA, Code IIB and Code IIIA according to some embodiments of the present disclosure.
  • FIG. 11 is a table of the average number of backtracking operations of SOVA and T-SOVA according to some embodiments of the present disclosure.
  • FIGs. 12A-12C are tables of average number of nodes expanded by T-SOVA and SOVA respectively for Code IIA, IIB and IIIA according to some embodiments of the present disclosure.
  • the terms “system,” “engine,” “unit,” “module,” and/or “block” used herein are one way to distinguish different components, elements, parts, sections, or assemblies at different levels in ascending order. However, these terms may be replaced by other expressions that achieve the same purpose.
  • the present disclosure relates to methods and related systems for data decoding. Specifically, the present disclosure relates to systems and methods that are capable of reducing calculation complexity for data decoding. Particularly, methods and systems provided herein include features such as trimming the metric differences, estimating reliability values (e.g., log-likelihood ratio (LLR) ) , or the like, or a combination thereof.
  • FIG. 1 illustrates a data transmission system 100 according to some embodiments of the present disclosure.
  • the system 100 may include a transmitter 110 and a receiver 120.
  • the transmitter 110 may include a data source 101, which may be a memory or input device for receiving data to be transmitted via the transmission system 100.
  • a data encoder 102 may encode (e.g., compress) the data obtained from the data source 101 to increase the data rate, for example according to the MPEG or H264 standard.
  • the data encoded may be in the form of a bit sequence.
  • the encoded data may be provided to a channel encoder 103, which performs channel encoding on the encoded data.
  • the channel encoder 103 may generate data symbols based on an encoded bit sequence by, for example, a convolutional encoding algorithm.
  • the convolutional encoding algorithm may generate a data symbol to be transmitted based on one or more previous input bits from the data encoder 102.
  • a data symbol may include one or more bits.
  • a modulator 104 may modulate the data symbols received from the channel encoder 103 for transmission over a transmission channel 105.
  • the modulator 104 may use any suitable modulation method readily known in the art or to be established in the future.
  • Exemplary modulation methods include amplitude modulation, frequency modulation, phase modulation, or pulse modulation.
  • Exemplary modulations may include PAM (Pulse-amplitude modulation) , MASK (multi-amplitude shift keying) , BPSK (binary phase shift keying) , GMSK (Gaussian Filtered Minimum Shift Keying) , or the like, or a combination thereof.
  • the transmission channel 105 may be in any suitable form either readily known in the art, or to be established in the future.
  • Exemplary embodiments of a transmission channel that may be used in connection with the present disclosure include a wireless channel, e.g., a satellite broadcasting channel, a WLAN (wireless local area network) channel, a terrestrial digital television channel, a mobile network channel, or the like, or a combination thereof.
  • the channel could be wired, such as a cable or ADSL (Asymmetric digital subscriber line) interface.
  • the transmission channel 105 may exhibit noises, artifacts, and/or other impairments, such as frequency and phase distortion and a variety of fading characteristics during the transmission of data symbols. The noises and/or other impairments or artifacts may affect the transmitted data symbols, and thus the decoding process for data symbols originating from a convolutional encoder.
  • the receiver 120 may include a demodulator 106 for demodulating the transmitted data symbols to produce an estimate of the encoded bit sequence, such as log-likelihood ratio (LLR) values.
  • the LLR value refers to the logarithm of the ratio of the probability of a bit being 1 to that of the bit being 0 at an arbitrary time point of a turbo decoding trellis.
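The definition above can be sketched numerically. The closed-form channel LLR shown below assumes BPSK modulation over an AWGN channel, which is a standard illustration and not a mapping specified by the disclosure.

```python
import math

# LLR per the definition above: log of the ratio P(bit = 1) / P(bit = 0).
def llr(p1):
    return math.log(p1 / (1.0 - p1))

# For BPSK (0 -> -1, 1 -> +1) over an AWGN channel with noise variance
# sigma2, the channel LLR of a received sample r has the well-known
# closed form 2*r/sigma2. (BPSK/AWGN are illustrative assumptions here.)
def channel_llr(r, sigma2):
    return 2.0 * r / sigma2

assert llr(0.5) == 0.0          # equally likely -> LLR of zero
assert llr(0.9) > 0 > llr(0.1)  # sign indicates the more likely bit
```

The sign of an LLR gives the hard decision and its magnitude the reliability, which is why the reliability values discussed throughout take this form.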
  • the LLR values may be provided to a channel decoder 107, which may perform channel decoding on the data symbols received from the transmission channel.
  • the channel decoder 107 may use a certain kind of decoding algorithm, such as a Viterbi algorithm (VA) , a soft-input soft-output Viterbi algorithm (SOVA) , a lazy Viterbi algorithm, a modified lazy SOVA, or the like, to retrieve the originally encoded (e.g., compressed) data.
  • the encoded data may be either stored, or processed by a data decoder 108 to recover the original data.
  • the recovered data may further be transmitted to an output 109.
  • the output 109 may be a storage device, a memory, a display or other output devices.
  • the output 109 may be in wired or wireless connection with the data decoder 108 or other components of the receiver 120.
  • the receiver 120 may be a set-top box, which can be connected to a television for receiving a cable, terrestrial or satellite signal.
  • the receiver 120 may be a mobile telephone, or other electronic device arranged to receive encoded data transmitted over a transmission channel.
  • the communication between different components in the system 100 may be wireless or wired.
  • the components in the transmitter 110 and/or the receiver 120 may be integrated on a same chipset, or may be on separate chips or chipsets apart from each other.
  • the description of the transmission system 100 is for illustrative purposes, and not intended to limit the scope of the present disclosure.
  • the present decoding methods and systems may be used in connection with an optical disk reproducing apparatus, or a Viterbi equalizer, or the like, or a combination thereof.
  • the present decoding methods may be applied to various electronic systems, in particular, pattern recognition systems based on a finite state Markov process and digital communication systems. Exemplary applications of the various electronic systems may be found in data detection in a communication system, storage system, image recognition, speech recognition, musical melody recognition, forward error correction, software digital radio, or the like, or a combination thereof.
  • FIG. 2 illustrates a data decoding process that may be performed by the present system according to some embodiments of the present disclosure.
  • the system may include a first decoder 201, a second decoder 203, an interleaver 202 and a deinterleaver 204.
  • the first decoder 201 may receive an encoded sequence r 1l from a transmission channel and a feedback sequence L a (l) 24 from the deinterleaver 204.
  • the encoded sequence r 1l may be a turbo code.
  • the first decoder 201 and the second decoder 203 may take the form of, for example, VA decoders, or SOVA decoders, or the like, to decode the turbo code.
  • the first decoder 201 may include a processor connecting to a memory (not shown in FIG. 2) .
  • the memory may be configured to store a Viterbi algorithm or Viterbi-like decoding algorithm that specifies Viterbi-like decoding operations to be performed by the processor.
  • a Viterbi algorithm or Viterbi-like decoding algorithm refers to a dynamic programming algorithm for finding the most likely sequence of states, especially in the context of Markov information sources and hidden Markov models. Exemplary Viterbi-like decoding algorithms are described in detail below with reference to FIG. 5 and FIG. 6.
  • the processor may include a processor core including functional units, such as an addition/subtraction functional unit to perform addition and subtraction functions, a multiplication functional unit to perform multiplication functions, an arithmetic logic functional unit to perform logic functions, or the like, or a combination thereof.
  • the encoded sequence r 1l and/or the feedback sequence L a (l) may correspond to intrinsic information relating to the input of a decoder. As used herein, the intrinsic information refers to the log-likelihood ratio regarding the bits/symbols received from the transmission channel. In some embodiments, the intrinsic information may be affected by the environmental factors, such as the transmission conditions of the channel.
  • the output 21 of the first decoder 201 may be transmitted to the interleaver 202 to be interleaved so as to prevent a burst error by shuffling the encoded sequences r 1l across, for example, several bits, thereby creating a more uniform distribution of errors.
  • the output 21 may be hard-decision bits, or soft values such as reliability values relating to different bits.
  • reliability values refer to the possibility of valid decoding, in the form of, for example, the log-likelihood ratio (LLR) .
  • a higher reliability value may represent a higher probability of valid decoding.
  • the output 21 may be referred to as extrinsic information, which indicates the quality of LLRs relating to at least some of the sequence bits.
  • the quality of LLRs may refer to the reliability of the LLRs.
  • the LLRs relating to sequence bits may be updated during a decoding process to achieve more reliable values in reflecting the accuracy of bits being decoded.
  • extrinsic information may be used to measure the reliability of one bit according to another bit in an encoded sequence.
  • the extrinsic information is the difference between the LLRs and intrinsic information.
  • the extrinsic information produced in decoding an encoded sequence by one decoder is used by another decoder for further decoding the encoded sequence.
  • the interleaver 202 may be a uniform interleaver, a convolutional interleaver, a random interleaver, an S-random interleaver, or any possible construction that may take the form of a contention-free quadratic permutation polynomial.
  • the second decoder 203 may decode the received sequences 22 (e.g., soft values relating to bits) based on sequence r 2l from the transmission channel, and output decoded bits 23 relating to the sequence bits.
  • the decoded bits 23 may include hard-decision bits, or soft values relating to the LLRs of at least some of sequence bits.
  • the sequence r 2l may correlate with the sequence r 1l by way of, for example, interleaving.
  • further extrinsic information 24 relating to the LLRs of at least some of the sequence bits may be deinterleaved in the deinterleaver 204 and provide input 24 to the first decoder 201 (also referred to as “feedback sequence L a (l) ” ) , together with the encoded sequence r 1l .
  • the decoding algorithm may be performed as an iterative process. Specifically, the input 24 may be updated based on the output 23 provided by the second decoder 203, which in turn is generated based on both the encoded sequence r 1l and the input 24 from the previous iteration.
  • the deinterleaver 204 may be configured to deinterleave hard decision values based on the output of the second decoder 203. In some embodiments, the deinterleaver 204 may deinterleave the sequences according to the interleaving mechanism of interleaver 202. For example, the deinterleaving may be the reverse process of the interleaving.
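The interleave/deinterleave pairing described above can be sketched with a simple permutation. The seeded random permutation is an illustrative construction; the disclosure also contemplates uniform, convolutional, S-random, and QPP interleavers.

```python
import random

# A toy random interleaver and its inverse, showing that deinterleaving
# is the reverse process of interleaving (same permutation, inverted).
def make_interleaver(length, seed=0):
    perm = list(range(length))
    random.Random(seed).shuffle(perm)   # seeded, so both ends agree on perm
    return perm

def interleave(seq, perm):
    return [seq[p] for p in perm]

def deinterleave(seq, perm):
    out = [None] * len(perm)
    for i, p in enumerate(perm):
        out[p] = seq[i]                 # undo the permutation
    return out

perm = make_interleaver(8)
data = [1, 0, 1, 1, 0, 0, 1, 0]
assert deinterleave(interleave(data, perm), perm) == data
```

Shuffling bit positions this way spreads a burst of channel errors across the codeword, which is the burst-error protection mentioned for the interleaver 202.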
  • FIG. 3 illustrates an exemplary structure of a decoder that can be used in connection with the present disclosure, such as the first decoder 201 or the second decoder 203 as shown in FIG. 2.
  • the decoder 300 may include an input 301, a calculating module 302, a trimming module 303, a backtracking module 304, an estimating module 305, and an output 306.
  • the input 301 may be configured to receive information relating to an encoded sequence.
  • the information may take the form of a data symbol sequence (e.g., the sequence r 1l ) transmitted through a transmission channel, or a sequence of reliability values relating to a bit sequence.
  • the information may include a priori sequence L a (l) (also referred as “feedback sequence” ) which may relate to extrinsic information provided by another decoder as described in relation to FIG. 2.
  • the calculating module 302 may be configured to process the sequences received by the input 301 so as to perform calculations relating to the sequences. Merely by way of example, the calculating module 302 may search the maximum likelihood (ML) path based on the sequences received.
  • a trellis graph refers to a state transition diagram, with one direction representing time intervals and another direction representing the states corresponding to one or more bits.
  • the transition from a state at one time point to a state at the next time point is defined as a branch which corresponds to a branch metric.
  • the branch metric represents the probability of the transition from a particular state to a particular target state.
  • Multiple branches form a path which corresponds to a path metric.
  • the path metric refers to the accumulated branch metrics, representing the probability of traversing the set of states in the path.
  • the maximum likelihood path refers to the path corresponding to the maximum path metric, which represents the most likely chain of states.
  • a competitive path refers to a path other than the ML path.
  • the metric difference refers to the difference between two path metrics, such as those of the ML path and a corresponding competitive path.
  • the calculating module 302 may search for the ML path using a Viterbi-like algorithm, such as the Viterbi algorithm (VA), lazy VA, or a modified lazy VA.
  • a Viterbi-like algorithm backtracks through possible bit sequences to determine which bit sequence is most likely to have been transmitted.
  • the ML path may be determined.
  • the modified lazy VA will be described in further detail in relation to FIG. 6.
  • the trimming module 303 may be configured to reduce the complexity of calculations by trimming the calculation process. For example, in some embodiments, the trimming module 303 may set an upper limit on the number of metric differences to be calculated. Take the SOVA as an example: assume that the code length is L; there may then be up to L metric differences between the ML path and the respective competitive paths. In some embodiments, the upper limit may be a value relating to the code length L. For example, the trimming module 303 may trim the calculations of metric differences by a factor M, which means the number of metric differences calculated may be L/M in this case. The factor M may be a number greater than one. In some embodiments, the upper limit may be a specific value smaller than the code length L.
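The trimming idea, keeping only the N = L/M smallest (most informative) metric differences instead of all L, can be sketched as below. The example data and the choice of keeping the smallest differences follow the selector described later; the variable names are illustrative.

```python
import heapq

# Trimming sketch: from the L metric differences a full SOVA would
# backtrack on, keep only the N = L/M smallest ones together with the
# trellis positions they belong to. M may be a non-integer (e.g. 2.5).
def trim_metric_differences(deltas, M):
    L = len(deltas)
    N = max(1, int(L / M))   # upper limit on backtracking operations
    # the smallest differences mark the least reliable decisions,
    # i.e. the ones most worth backtracking
    return heapq.nsmallest(N, enumerate(deltas), key=lambda kv: kv[1])

deltas = [4.2, 0.3, 2.9, 0.8, 5.1, 1.7, 0.1, 3.3]  # one per trellis step
kept = trim_metric_differences(deltas, M=4)        # N = 8/4 = 2
assert [i for i, _ in kept] == [6, 1]              # positions of 0.1 and 0.3
```

Only the kept positions trigger backtracking, so the number of backtracking operations drops from L to at most L/M.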
  • the upper limit may be determined by the conditions under which the decoding process is conducted. For example, decoding algorithms used in an optical disk reproducing apparatus and in a Viterbi equalizer may be assigned with different upper limits. Furthermore, in various electronic systems where the present decoding algorithm may apply, such as pattern recognition systems based on a finite state Markov process and digital communication systems, a specific upper limit of the number of metric differences used in a backtracking process may be assigned. Applications of the various electronic systems may also be found, for example, in image recognition, speech recognition, musical melody recognition, forward error correction, software digital radio, or the like, or a combination thereof.
  • the backtracking module 304 may be configured to conduct backtracking operations on specific competitive paths relating to respective metric differences.
  • the backtracking operations may update the LLRs corresponding to specific sequence bits. Details regarding the backtracking operations will be discussed in relation to FIG. 8.
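The reliability update performed during such a backtracking operation can be sketched as follows. This follows the classical Battail/Hagenauer SOVA rule (lower a bit's reliability to the metric difference wherever the ML and competitor decisions disagree within the backtracking depth); the toy data and names are illustrative, not taken from the disclosure.

```python
# SOVA-style reliability update during backtracking: for a selected
# metric difference delta at trellis position t, every bit within the
# backtracking depth whose competitor decision differs from the ML
# decision has its reliability lowered to min(current, delta).
def sova_backtrack_update(reliab, ml_bits, comp_bits, t, delta, depth):
    for j in range(max(0, t - depth), t + 1):
        if ml_bits[j] != comp_bits[j]:        # decisions disagree here
            reliab[j] = min(reliab[j], delta)
    return reliab

inf = float("inf")
reliab = [inf] * 6                # reliabilities start unbounded
ml    = [1, 0, 1, 1, 0, 1]        # decisions along the ML path
comp  = [1, 1, 1, 0, 0, 1]        # competitor path into the state at t = 3
sova_backtrack_update(reliab, ml, comp, t=3, delta=0.7, depth=3)
assert reliab == [inf, 0.7, inf, 0.7, inf, inf]
```

Signing each reliability with the corresponding hard decision then yields the output LLRs.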
  • the estimating module 305 may be configured to estimate the LLRs omitted due to the reduction of metric differences.
  • the estimation may include interpolating LLRs omitted due to the reduction of metric differences.
  • some sequence bits may have fewer, or even no, metric differences available for calculating the LLR.
  • the estimating module 305 may estimate the LLRs based on two aspects, namely the extrinsic information and the intrinsic information mentioned elsewhere in the present disclosure.
  • the output 306 may be configured to receive the calculation results relating to the LLRs corresponding to the sequence bits.
  • the calculation results may be derived from the estimating module 305.
  • the calculation results may be derived from the backtracking module 304 when no estimation of LLRs is needed.
  • FIG. 4 illustrates a process 400 for decoding a convolutional code according to some embodiments of the present disclosure.
  • the decoding process may be performed by the decoder as described in relation to FIG. 3.
  • an input may be obtained.
  • the input may include an encoded sequence r l , and/or a priori sequence L a (l) .
  • the sequence r l may contain intrinsic information
  • the a priori sequence L a (l) may be a sequence output by another decoder, which contains extrinsic information.
  • the process 400 may search the ML path based on the sequence received in step 401, and calculate the metric differences between the ML path and respective competitive paths.
  • the determination of the ML path may be performed on a trellis by, for example, Viterbi algorithm (VA) , lazy VA, or modified lazy VA.
  • the complexity of calculation in the decoding process may be reduced by trimming the number of metric differences based on which backtracking process is conducted.
  • a determination may be made as to whether the number of the metric differences being calculated exceeds a threshold value.
  • the threshold may be a value relating to the length of the sequence to be decoded, or may be a value set by default.
  • the threshold value may be set as L/M, where L denotes the code length of the sequence, and M denotes a trimming factor (e.g., a number greater than 1) . If the number of the metric differences is above the threshold value, the process 400 may proceed to step 404 to pick up a certain number of metric differences.
  • the process 400 may pick up the smallest L/M metric differences among all the metric differences acquired.
  • the process 400 may pick up the metric differences by the following method: first, dividing all the metric differences into, for example, L/M blocks; then, selecting one metric difference from each block to get L/M metric differences. Specifically, the blocks of metric differences may be divided according to the positions of the bits in the sequence. If the number of the metric differences is less than the threshold value, the process 400 may proceed to step 405. In step 405, backtracking operations may be performed to backtrack the competitive paths of each state. Furthermore, the LLRs for previous bits from one state within a backtracking length δ may be updated based on the backtracking operations.
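The block-wise alternative just described can be sketched as below. Picking the per-block minimum is an assumption for illustration; the text only specifies that one metric difference is selected from each block of positions.

```python
# Block-wise selection sketch: divide the L metric differences into L/M
# blocks by bit position and pick one (here, the smallest) from each
# block, yielding L/M differences spread evenly over the sequence.
def blockwise_pick(deltas, M):
    n_blocks = len(deltas) // M
    picks = []
    for b in range(n_blocks):
        block = list(enumerate(deltas))[b * M:(b + 1) * M]
        picks.append(min(block, key=lambda kv: kv[1]))  # per-block minimum
    return picks

deltas = [4.2, 0.3, 2.9, 0.8, 5.1, 1.7, 0.1, 3.3]
assert blockwise_pick(deltas, M=4) == [(1, 0.3), (6, 0.1)]
```

Compared with globally taking the L/M smallest differences, this variant guarantees that the kept backtracking positions are distributed across the whole codeword rather than clustered in one noisy region.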
  • in step 406, LLR values corresponding to each state are checked. If an LLR is omitted due to the reduction of metric differences, the process 400 may estimate the LLR based on, for example, intrinsic information and/or extrinsic information as described elsewhere in the disclosure.
  • the decoding results may be output by generating, for example, the extrinsic sequence with hard-decision bits, or soft values with LLRs.
  • FIG. 5 illustrates an exemplary structure of the trimming module 303 according to some embodiments of the present disclosure.
  • the trimming module 303 may include a comparator 501, a trimming factor control unit 502, and a selector 503.
  • the trimming factor control unit 502 may be configured to determine the trimming factor M automatically or according to input by a user.
  • the automatic determination may be achieved by selecting M from a set of trimming factors based on the condition of the calculation process. In some embodiments, the determination may be obtained based on history of similar calculations.
  • the trimming factor M may be set to be the same value as in a historical calculation that performed the trimming process in a similar channel condition.
  • the trimming factor M may be acquired according to the encoding algorithm.
  • the trimming factor M may be any positive number.
  • M may be any positive integer, for instance, 2, 4, 8, 16, etc.
  • M may be any positive decimal, for instance, 2.5, 4.5, 6.45, etc.
  • the comparator 501 may compare the number of the metric differences calculated in step 402 by the calculating module 302, with L/M as described in step 403, where L is the code length of the sequence. If the number of the metric differences is greater than L/M, the selector 503 may select a certain number (e.g., L/M) of the metric differences in a specific manner.
  • the selector 503 may select the smallest L/M metric differences among all the metric differences. If the number of the metric differences is smaller than L/M, the selector 503 may pick up all the metric differences calculated by the calculating module 302.
  • the trimming module 303 is merely provided for the purposes of illustration, and not intended to limit the scope of the present disclosure.
  • the trimming factor control unit 502 may be replaced by an upper limit control unit that may determine the upper limit of the number of metric differences to be used during the backtracking process.
  • the upper limit may be determined based on various conditions as described elsewhere in the disclosure.
  • FIG. 6 illustrates a process 600 for searching the ML path and calculating the metric differences with a modified lazy VA according to some embodiments of the present disclosure.
  • branch metrics relating to the received bits between different time points may be calculated.
  • the branch metrics denote the connections between separate states in the trellis.
  • the decoding algorithm may traverse the trellis to determine the probabilities of the individual states associated with the branch metrics.
  • the process 600 may modify the branch metrics at the same time point.
  • the branch metrics may be modified to become non-positive values by the branch metric unit 701. Merely by way of example, the modification may be conducted by negating the branch metrics.
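As one possible reading of the modification above, a minimal sketch, assuming the original branch metrics are non-negative (e.g., squared distances), so that negation makes them non-positive:

```python
def modify_branch_metrics(metrics):
    # take the opposite number of each branch metric at a time point,
    # turning non-negative metrics into non-positive ones
    return [-m for m in metrics]
```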
  • a priority queue (PQ) may be loaded with a start node on the trellis.
  • the priority queue may be used to indicate the next state on the trellis to be processed.
  • the priority queue may be updated with values corresponding to a node at a specific time point.
  • the modified branch metrics corresponding to a time point may be stored in the priority queue for subsequent use.
  • the priority queue may be configured to store the nodes (k, v), where k represents the path metric and v represents the condition of the state (s, t), where s denotes the state of the node and t represents the time point.
  • the top node of PQ may be popped as the current node.
  • the nodes in the PQ may be ranked according to the path metric values of the states.
  • the top node of PQ may represent the state with the largest path metric value.
  • the process 600 may check if a node with the same state and time point has been popped. If the node with the same state and time point has been popped, the process 600 may proceed to step 606 to calculate the metric difference on the node. The calculated metric difference may be stored for subsequent use. Then the process 600 may further pop the top node in the PQ until the process ends.
  • the process 600 may calculate the path metric relating to the current node and further update the PQ by inserting the calculated path metric into the PQ in step 607.
  • the process 600 may determine whether the top node of PQ is an end node. If the top node of PQ is an end node, the ML path is determined and the backtracking operation may be conducted based on the ML path in step 609.
  • the process 600 may output hard decision bits and metric differences relating to nodes on the ML path. If the top node of PQ is not an end node, the process 600 may return to step 604 to pop the top node in the updated PQ. The backtracking process may be conducted based on the number of metric differences calculated in step 606. If the number of metric differences exceeds a threshold value, the number of metric differences may be trimmed as described elsewhere in the disclosure.
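Steps 602 through 607 can be sketched with Python's `heapq`. Since `heapq` is a min-heap, negated path metrics are pushed so the largest metric pops first. The trellis interface (`branch[t]` mapping transitions to modified, non-positive branch metrics) is a hypothetical convention, and for brevity the sketch exhausts the queue instead of stopping at the first end node, so that all metric differences are collected:

```python
import heapq

def lazy_viterbi(branch, T):
    # branch[t][(s_prev, s_next)] is the modified (non-positive) branch
    # metric of the transition s_prev -> s_next between times t and t+1
    pq = [(-0.0, 0, 0)]    # (negated path metric, state, time): start node
    best = {}              # first (largest) path metric per (state, time)
    diffs = {}             # metric differences on re-popped nodes (step 606)
    while pq:
        neg_k, s, t = heapq.heappop(pq)          # step 604: pop the top node
        k = -neg_k
        if (s, t) in best:
            # step 605/606: node popped before, record its metric difference
            diffs.setdefault((s, t), best[(s, t)] - k)
            continue
        best[(s, t)] = k
        if t == T:
            continue                             # end node reached on this path
        for (s_prev, s_next), bm in branch[t].items():
            if s_prev == s:                      # step 607: extend and update PQ
                heapq.heappush(pq, (-(k + bm), s_next, t + 1))
    return best, diffs
```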
  • FIG. 7 illustrates a block diagram of a calculating module using modified lazy VA according to some embodiments of the present disclosure.
  • the calculating module 700 may include a branch metric unit 701, a path metric unit 702, a metric difference unit 703 and a priority queue unit 704.
  • the branch metric unit 701 may be configured to calculate the branch metrics of possible output sequences between possible time points.
  • the path metric unit 702 may be configured to calculate the path metrics of possible sequences based on the branch metrics calculated by the branch metric unit 701. In some embodiments, the path metric unit 702 may be coupled to the branch metric unit 701 and update the path metrics based on the branch metrics for each state.
  • the metric difference unit 703 connected with the path metric unit 702 may be configured to calculate the metric difference as described in step 606.
  • the priority queue unit 704 may be configured to store the nodes (k, v) loaded to the PQ, as described elsewhere in the disclosure. For example, the priority queue unit 704 may pop the top node of PQ as the current node on the trellis.
  • the top node of PQ may be the node with the maximum path metric in the PQ relating to the maximum value of k.
  • the priority queue unit 704 may be coupled with the path metric unit 702 such that the priority queue may be updated by inserting the calculated path metric into the priority queue.
  • FIG. 8 illustrates an exemplary backtracking process 800 according to some embodiments of the present disclosure.
  • the LLR value of each node on the ML path may be initialized to a certain value.
  • the certain value may be set as +∞.
  • the backtracking process 800 may start from a specific node on the ML path in step 802.
  • a time point t may be used to indicate the node where the backtracking proceeds.
  • t may be initially set to L, which is the code length of the received sequence.
  • the path decision bit relating to a node on the ML path may be compared with the path decision bit relating to the node on a competitive path.
  • the path decision bits may be compared at consecutive nodes, such as from time point t down to time point t−δ, where δ denotes the backtracking depth.
  • the process 800 may check if the decision bits of the ML path and the competitive path at the same time slot are the same. If the decision bits of the ML path and the competitive path are the same, the LLR of the corresponding bit may remain unchanged in step 805, which is +∞ as initially set according to some embodiments. If the decision bits of the ML path and the competitive path are different, the LLR of the corresponding node on the ML path may be updated in step 806.
  • the updated LLR may be the smaller value between the LLR of the corresponding bit and the metric difference Δ(s, t) between the ML path and the competitive path based on the corresponding bit.
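The backtracking update described above can be sketched as follows. The data layout (retained metric differences keyed by time point, and per-node competitive decision bits) is a hypothetical convention chosen for illustration:

```python
import math

def backtrack_llrs(deltas, ml_bits, comp_bits, depth):
    # deltas: {t: metric difference at the node merging at time t}
    # ml_bits: decision bits of the ML path; comp_bits[t]: decision bits
    # of the competitive path merging at time t; depth: backtracking depth
    llr = [math.inf] * len(ml_bits)      # step 801: initialize LLRs to +inf
    for t, delta in deltas.items():
        for u in range(t, max(t - depth, 0) - 1, -1):
            if ml_bits[u] != comp_bits[t][u]:
                # step 806: keep the smaller of the current LLR and the
                # metric difference; step 805: equal bits leave it unchanged
                llr[u] = min(llr[u], delta)
    # step 809: the sign of each LLR follows the ML-path decision bit
    return [(v if b else -v) for v, b in zip(llr, ml_bits)]
```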
  • the next node corresponding to time point t’ may be selected.
  • the backtracking may be conducted on consecutive nodes prior to the current node.
  • the next node may be the previous node according to time point t-1.
  • the backtracking may be conducted on specific nodes.
  • the specific nodes may be the nodes on which metric differences are calculated.
  • the specific nodes may be the nodes calculated by modified lazy VA algorithm as described in step 606.
  • the specific nodes may be the nodes corresponding to L/M metric differences as described elsewhere in the disclosure.
  • the specific nodes may be the nodes corresponding to the smallest L/M metric differences among all the calculated nodes.
  • the nodes corresponding to the L/M metric differences may be selected from a plurality of blocks. At least one of the nodes may correspond to a smallest metric difference in a block.
  • the plurality of blocks may be formed by dividing all the calculated nodes into, for example, L/M blocks.
  • the calculated nodes may be divided evenly or unevenly into the plurality of blocks.
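One way to realize the block-wise selection above, assuming an even division for simplicity (the last block may be shorter):

```python
def smallest_per_block(deltas, num_blocks):
    # deltas: list of (node, metric_difference) in trellis order, divided
    # into num_blocks blocks; the node with the smallest metric difference
    # in each block is selected
    size = -(-len(deltas) // num_blocks)     # ceiling division
    return [min(deltas[i:i + size], key=lambda nd: nd[1])
            for i in range(0, len(deltas), size)]
```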
  • the process 800 may determine whether the nodes relating to the smallest L/M metric differences are backtracked in step 808.
  • if not, the process 800 may return to step 802 to backtrack another node in the ML path. If yes, the process 800 may proceed to step 809 to assign the sign of the LLRs with the sign of the corresponding path decision bits of the ML path. Due to the reduction of metric differences as described above, multiple sequence bits may have fewer metric differences, or even no metric difference, in the LLR calculations. To ensure the quality of the LLRs, an estimation process may be carried out to compensate for the omitted metric differences. As shown in FIG. 3, the estimating module 305 may be used to accomplish the estimation. In some embodiments, the estimating module 305 may estimate an omitted LLR based on, for example, an extrinsic sequence and/or an intrinsic sequence.
  • the estimation of LLRs from the intrinsic sequence can be described as below:
  • p denotes the probability of different decision bits and/or different symbols
  • r denotes the received symbol at time point l.
  • the term ln p(r | u_l = +1) in equation (1) may be rewritten as:
  • the right side of (3) can be divided into three parts based on the property of Markov chains. Particularly,
  • β_{l+1} denotes the backward metric, i.e., the metric accumulated from the node at time point L to the node at time point l+1
  • s_{l+1} denotes the state on the trellis at time point l+1.
  • d(·) = ln β_{l+1}(m_{ls}(l+1)) − ln β_{l+1}(s_{l+1}).
  • the estimation may ignore d(·) and leave it for neighbouring LLRs,
  • L_c = 4E_s/N_0 denotes the channel reliability value
  • r_l denotes the received symbol at time point l
  • u_l denotes the sequence bit at time point l
  • c_l denotes the corresponding code word
  • L_a(l) denotes the prior sequence at time point l
  • s_l denotes the state on the trellis at time point l
  • m_{ls}(l) denotes the ML state at time point l.
  • the LLR estimation from the extrinsic sequence can be described as below.
  • Δ_i denotes the metric difference corresponding to node i.
  • if LLR(t) is omitted due to the trimming process as described elsewhere in the disclosure, then according to equation (10), all metric differences in {Δ_i}, i ∈ D_t, are omitted.
  • Δ_j, j ∈ D_t, is the minimum one among these metric differences.
  • Lemma 3 indicates that
  • LLR(j + k), 0 ≤ k ≤ δ, is the closest obtained LLR of Δ_j.
  • any omitted LLR has a larger magnitude than that of its minimal neighbouring obtained LLR. Then, the omitted LLR can be estimated by its minimum neighbouring obtained LLR,
  • LLR(q) has the minimum magnitude among the neighbouring obtained LLRs
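The neighbour-based rule above can be sketched as follows, under the assumed convention that omitted LLRs are marked `None` in the sequence:

```python
def estimate_omitted_llrs(llr):
    # each omitted LLR is replaced by the neighbouring obtained LLR of
    # minimum magnitude; signs are later aligned with the ML-path bits
    out = list(llr)
    for i, v in enumerate(llr):
        if v is None:
            neighbours = [llr[j] for j in (i - 1, i + 1)
                          if 0 <= j < len(llr) and llr[j] is not None]
            out[i] = min(neighbours, key=abs) if neighbours else 0.0
    return out
```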
  • the estimating module 305 may combine the intrinsic sequence (e.g., as described in equation (9)) and the extrinsic sequence (e.g., neighbouring LLRs as described in equation (15)) to give a better estimation of the LLRs of the sequence bits.
  • the estimating module 305 may use two scaling factors, α_1 and α_2, to modify the values of the extrinsic sequence. Then, the extrinsic sequence output by the estimating module 305 may be estimated as follows,
  • the determination of the scaling factors may be based on statistical results. For example, using SOVA may result in larger values of the extrinsic sequence, so applying scaling factors that lie in the range (0, 1) may modify the values of the extrinsic sequence to achieve a higher accuracy.
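A minimal sketch of applying the two scaling factors. The split (α_1 for LLRs obtained by backtracking, α_2 for estimated ones) is an assumption, since the disclosure leaves the exact assignment open:

```python
def scale_extrinsic(extrinsic, alpha1, alpha2, estimated):
    # alpha1, alpha2 in (0, 1) shrink the extrinsic values produced by
    # SOVA; positions listed in `estimated` were filled by estimation
    return [alpha2 * v if i in estimated else alpha1 * v
            for i, v in enumerate(extrinsic)]
```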
  • a tangible and non-transitory machine-readable medium or media may be provided, having instructions recorded thereon for a processor or computer to perform one or more functions of the modules or units described elsewhere herein, for example, to implement the decoding processes described in the present disclosure.
  • the medium or media may be any type of CD-ROM, DVD, floppy disk, hard disk, optical disk, flash RAM drive, or other type of computer-readable medium or a combination thereof.
  • the various embodiments and/or components may be implemented as part of one or more computers or processors.
  • the computer or processor may include a computing device, an input device, a display unit and an interface, for example, for accessing the Internet or communicating with a cloud server.
  • the computer or processor may include a microprocessor.
  • the microprocessor may be connected to a communication bus.
  • the computer or processor may also include a memory.
  • the memory may include Random Access Memory (RAM) and Read Only Memory (ROM) .
  • the computer or processor further may include a storage device including, for example, a hard disk drive or a removable storage drive such as a floppy disk drive, optical disk drive, or the like, or a combination thereof.
  • the storage device may also be other similar means for loading computer programs or other instructions into the computer or processor.
  • the computer or processor executes a set of instructions that are stored in one or more storage elements, in order to process input data.
  • the storage elements may also store data or other information as desired or needed.
  • the storage element may be in the form of an information source or a physical memory element within a processing machine.
  • FIGs. 9A-9B illustrate the quality of LLR using a standard Turbo code and a convolutional code according to some embodiments of the present disclosure.
  • a non-recursive convolutional code (CC) with 6 register bits, a 1/2 code rate, and (171, 133) code polynomials may be defined as Code I.
  • Code IIA may be a standard Turbo code in CDMA2000, with 3 register bits, 1146 message bits, a 1/3 code rate, and (13, 15) RSC polynomials;
  • Code IIB may be a standard Turbo code the same as Code IIA except for a 1/2 code rate.
  • Code IIIA may be a standard Turbo code with 4 register bits, 378 message bits, a 1/3 code rate, and (21, 37) RSC polynomials; Code IIIB may be a standard Turbo code the same as Code IIIA except for 10^5 message bits.
  • the standard Turbo code may be Code II.
  • Codes I and III have similar results to Code II.
  • the extrinsic information may be used to indicate the quality of LLR.
  • FIGs. 9A-9B plot the extrinsic information of M-SOVA, S-SOVA and T-SOVA against that of Log-MAP for CC Code I and Turbo Code IIA, each of which includes 5000 data points obtained after the first iteration at a given E_b/N_0.
  • FIGs. 10A-10C plot the error performance of various decoding algorithms respectively for Code IIA, Code IIB and Code IIIA according to some embodiments of the present disclosure.
  • the error performance includes the bit error rate (BER) and block error rate (BLER)
  • the decoding algorithms are Log-MAP, M-SOVA, bi-SOVA, S-SOVA, and T-SOVA.
  • M = 16 and S-SOVA.
  • the BLER of various decoding algorithms for Code IIB is similar to the results of BER as illustrated in FIG. 10B.
  • There is about a 0.1 dB performance gap between T-SOVA with M = 16 and S-SOVA.
  • the BLER of various decoding algorithms for Code IIIA is similar to the results of BER as illustrated in FIG. 10C.
  • FIG. 11 and FIGs. 12A-12C illustrate the complexity of SOVA and T-SOVA according to some embodiments of the present disclosure. Recall that the number of backtracking operations determines the complexity of SOVA. FIG. 11 shows the average number of backtracking operations of SOVA and T-SOVA. For Codes IIA, IIB, and IIIA, T-SOVA requires at most 1/M of the backtracking operations of SOVA.
  • the memory consumption of SOVA and T-SOVA is determined by the number of expanded nodes in the first stage.
  • in FIGs. 12A-12C, the average number of nodes expanded by T-SOVA and SOVA with different E_b/N_0 in each iteration is represented.
  • the above described method embodiments may take the form of computer or controller implemented processes and apparatuses for practicing those processes.
  • the disclosure can also be embodied in the form of computer program code containing instructions embodied in tangible media, such as floppy diskettes, CD-ROMs, hard drives, or any other computer-readable storage medium, wherein, when the computer program code is loaded into and executed by a computer or controller, the computer becomes an apparatus for practicing the invention.
  • the disclosure may also be embodied in the form of computer program code or signal, for example, whether stored in a storage medium, loaded into and/or executed by a computer or controller, or transmitted over some transmission medium, such as over electrical wiring or cabling, through fiber optics, or via electromagnetic radiation, wherein, when the computer program code is loaded into and executed by a computer, the computer becomes an apparatus for practicing the invention.
  • the computer program code segments configure the microprocessor to create specific logic circuits.


Abstract

The present disclosure relates to a system and method for decoding a data sequence. The system and method include retrieving a sequence of L bits and determining a maximum likelihood path thereof. Further, a trimming SOVA with a trimming factor M is used to select a maximum number of L/M metric differences to be calculated, and reliability values relating to the sequence are calculated based on the selected metric differences.

Description

SYSTEM AND METHOD FOR DATA DECODING
CROSS REFERENCE TO RELATED APPLICATIONS
This application claims priority to Chinese Patent Application No. 201510523515.4 filed on August 24, 2015, which is hereby incorporated by reference in its entirety.
TECHNICAL FIELD
The present disclosure relates to data decoding, and more particularly, relates to a system and method for decoding using a modified Viterbi algorithm.
BACKGROUND
The Viterbi algorithm (VA) is a decoding algorithm for convolutional codes. The algorithm uses dynamic programming to search for the maximum likelihood (ML) path, i.e. the ML code word, on the trellis graph of convolutional codes. In 2002, Feldman proposed the method commonly known as the Lazy VA, which finds the ML path by normalizing branch metrics as non-positive numbers and introducing a priority queue (PQ) to pop all possible nodes in a sequence.
In the past two decades, many researchers have dedicated themselves to improving the quality of log-likelihood ratios (LLRs). In 1998, Fossorier, et al. proposed the modified soft-input soft-output Viterbi algorithm (M-SOVA) and proved the equivalence between M-SOVA and the Max-Log-MAP algorithm. In 2000, Chen, et al. proposed to update LLRs from two separate directions, referred to as bi-SOVA. The bi-SOVA performs slightly better than the Max-Log-MAP algorithm at the cost of doubling the calculation complexity. Huang, et al. used two scaling factors to further improve the performance of SOVA.
SUMMARY OF THE INVENTION
Some embodiments of the present disclosure relate to a method of decoding. The method may include trimming metric differences to calculate reliability values.
According to one aspect of the present disclosure, methods of decoding may include determining a maximum likelihood path and related competitive paths based on a received  sequence of L bits, and calculating reliability values relating to the bits based on no more than N metric differences, wherein L and N are integers greater than 1; and wherein L is greater than N. The metric differences may be calculated based on the maximum likelihood path and the related competitive paths.
In some embodiments, the method may further include conducting a backtracking operation on the related competitive paths.
In some embodiments, the calculating reliability values may include estimating at least some of the reliability values corresponding to some of the bits based on intrinsic information or extrinsic information. The intrinsic information may include a sequence transmitted through a channel, and the extrinsic information may indicate the quality of reliability values relating to at least some of the sequence bits.
In some embodiments, the reliability value relating to a first bit within a backtracking depth of a second bit may be updated during the backtracking operation, and the second bit corresponds to one of the N metric differences.
In some embodiments, N may be equal to L divided by a trimming factor M, wherein M is a number greater than 1. N may be an integer or a decimal.
In some embodiments, the calculating reliability values may include obtaining a plurality of metric differences and obtaining N minimum metric differences among the plurality of metric differences.
In some embodiments, the metric differences may be calculated by a Viterbi algorithm.
In some embodiments, the metric differences may be determined by a modified Lazy Viterbi algorithm, including obtaining a priority queue including a plurality of nodes, and calculating a metric difference on a node of the plurality of nodes, wherein the node has been popped more than once based on the priority queue.
In some embodiments, the plurality of nodes are popped in a sequence ranked based on path metric values of the plurality of nodes.
In still yet another embodiment of the present disclosure, the method further includes modifying branch metrics relating to the sequence into non-positive values.
According to another aspect of the present disclosure, a system of decoding is provided. The system may include an input configured to retrieve a sequence of L bits, a calculating module configured to determine a maximum likelihood path and related competitive paths based on the L bits, a trimming module configured to obtain no more than N metric differences based on the maximum likelihood path and the related competitive paths, and an output configured to output reliability values relating to the bits calculated based on the metric differences, wherein L and N are integers greater than 1; and wherein L is greater than N.
BRIEF DESCRIPTION OF THE DRAWINGS
The present disclosure is further described in terms of exemplary embodiments. These exemplary embodiments are described in detail with reference to the drawings. These embodiments are non-limiting exemplary embodiments, in which like reference numerals represent similar structures throughout the several views of the drawings, and wherein:
FIG. 1 is a block diagram of a data transmission system according to some embodiments of the present disclosure;
FIG. 2 is a block diagram of a decoding process according to some embodiments of the present disclosure;
FIG. 3 is an exemplary structure of a decoder according to some embodiments of the present disclosure;
FIG. 4 is a process for decoding a convolutional code according to some embodiments of the present disclosure;
FIG. 5 is an exemplary structure of the trimming module according to some embodiments of the present disclosure;
FIG. 6 is a process for searching the ML path and calculating the metric differences with a modified lazy VA according to some embodiments of the present disclosure;
FIG. 7 is a block diagram of a calculating module according to some embodiments of the present disclosure;
FIG. 8 is an exemplary backtracking process 800 according to some embodiments of the present disclosure;
FIGs. 9A-9B are plots of extrinsic information according to some embodiments of the present disclosure;
FIGs. 10A-10C are plots of the error performance of various decoding algorithms respectively for Code IIA, Code IIB and Code IIIA according to some embodiments of the present disclosure;
FIG. 11 is a table of the average number of backtracking operations of SOVA and T-SOVA according to some embodiments of the present disclosure;
FIGs. 12A-12C are tables of the average number of nodes expanded by T-SOVA and SOVA respectively for Codes IIA, IIB and IIIA according to some embodiments of the present disclosure.
DETAILED DESCRIPTION
In the following detailed description, numerous specific details are set forth by way of example in order to provide a thorough understanding of the relevant disclosure. However, it should be apparent to those skilled in the art that the present disclosure may be practiced without such details. In other instances, well known methods, procedures, systems, components, and/or circuitry have been described at a relatively high-level, without detail, in order to avoid unnecessarily obscuring aspects of the present disclosure. Various modifications to the disclosed embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the present disclosure. Thus, the present disclosure is not limited to the embodiments shown, but to be accorded the widest scope consistent with the claims.
It will be understood that the term “system, ” “engine, ” “unit, ” “module, ” and/or “block” used herein are one method to distinguish different components, elements, parts, section or assembly of different level in ascending order. However, the terms may be displaced by other expression if they may achieve the same purpose.
It will be understood that when a unit, engine, module or block is referred to as being “on, ” “connected to” or “coupled to” another unit, engine, module, or block, it may be directly on, connected or coupled to, or communicate with the other unit, engine, module, or block, or an  intervening unit, engine, module, or block may be present, unless the context clearly indicates otherwise. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items.
The present disclosure relates to methods and related systems for data decoding. Specifically, the present disclosure relates to systems and methods that are capable of reducing calculation complexity for data decoding. Particularly, methods and systems provided herein include features such as trimming the metric differences, estimating reliability values (e.g., log-likelihood ratio (LLR) ) , or the like, or a combination thereof. Some description of the present methods and systems provided in connection with turbo code is only for illustration purposes, and the methods and systems are not so limited and can be used in connection with other coding formats.
FIG. 1 illustrates a data transmission system 100 according to some embodiments of the present disclosure. The system 100 may include a transmitter 110 and a receiver 120.
The transmitter 110 may include a data source 101, which may be a memory or input device for receiving data to be transmitted via the transmission system 100. A data encoder 102 may encode (e.g., compress) the data obtained from the data source 101 to increase the data rate, for example according to the MPEG or H.264 standard. In some embodiments, the data encoded may be in the form of a bit sequence. The encoded data may be provided to a channel encoder 103, which performs channel encoding on the encoded data. In some embodiments, the channel encoder 103 may generate data symbols based on an encoded bit sequence by, for example, a convolutional encoding algorithm. The convolutional algorithm may generate a data symbol to be transmitted based on one or more previous input bits from the data encoder 102. A data symbol may include one or more bits. A modulator 104 may modulate the data symbols received from the channel encoder 103 for transmission over a transmission channel 105. The modulator 104 may use any suitable modulation method readily known in the art or to be established in the future. Exemplary modulation methods include amplitude modulation, frequency modulation, phase modulation, or pulse modulation. Exemplary modulations may include PAM (pulse-amplitude modulation), MASK (multi-amplitude shift keying), BPSK (binary phase shift keying), GMSK (Gaussian filtered minimum shift keying), or the like, or a combination thereof. The transmission channel 105 may be in any suitable form either readily known in the art, or to be established in the future. Exemplary embodiments of a transmission channel that may be used in connection with the present disclosure include a wireless channel, e.g., a satellite broadcasting channel, a WLAN (wireless local area network) channel, a terrestrial digital television channel, a mobile network channel, or the like, or a combination thereof.
Alternatively, the channel could be wired, such as a cable or ADSL (Asymmetric digital subscriber line) interface. In some embodiments, the transmission channel 105 may exhibit noises, artifacts, and/or other impairments, such as frequency and phase distortion and a variety of fading characteristics during the transmission of data symbols. The noises and/or other impairments or artifacts may affect the transmitted data symbols, and thus the decoding process for data symbols originating from a convolutional encoder.
The receiver 120 may include a demodulator 106 for demodulating the transmitted data symbols to produce an estimate of the encoded bit sequence, such as log-likelihood ratio (LLR) values. As used herein, the LLR value refers to the logarithm of the ratio of the probability of a bit being 1 and that of the bit being 0 at an arbitrary time point of a turbo decoding trellis. The LLR values may be provided to a channel decoder 107, which may perform channel decoding on the data symbols received from the transmission channel. As an example, the channel decoder 107 may use a certain kind of decoding algorithm, such as a Viterbi algorithm (VA) , a soft-input soft-output Viterbi algorithm (SOVA) , a lazy Viterbi algorithm, a modified lazy SOVA, or the like, to retrieve the originally encoded (e.g., compressed) data. Details regarding various decoding algorithms will be discussed in the following parts of the description.
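The LLR definition above reads, as a one-line sketch:

```python
import math

def llr(p1):
    # logarithm of the ratio of P(bit = 1) to P(bit = 0),
    # where p1 is the probability of the bit being 1
    return math.log(p1 / (1.0 - p1))
```

A positive LLR thus favours the bit being 1, a negative LLR favours 0, and equal probabilities give an LLR of zero.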
The encoded data may be either stored, or processed by a data decoder 108 to recuperate the original data. The recuperated data may further be transmitted to an output 109. The output 109 may be a storage device, a memory, a display or other output devices. In some embodiments, the output 109 may be in wired or wireless connection with the data decoder 108 or other components of the receiver 120. In some embodiments, the receiver 120 may be a set-top box, which can be connected to a television for receiving a cable, terrestrial or satellite signal. In some embodiments, the receiver 120 may be a mobile telephone, or other electronic device arranged to receive encoded data transmitted over a transmission channel. The  communication between different components in the system 100 may be wireless or wired. The components in the transmitter 110 and/or the receiver 120 may be integrated on a same chipset, or may be on separate chip or chipset apart from each other.
It shall be noted that the description of the transmission system 100 is for illustrative purposes, and not intended to limit the scope of the present invention. For instance, the present decoding methods and systems may be used in connection with an optical disk reproducing apparatus, or a Viterbi equalizer, or the like, or a combination thereof. Furthermore, the present decoding methods may be applied to various electronic systems, in particular, pattern recognition systems based on a finite state Markov process and digital communication systems. Exemplary applications of the various electronic systems may be found in data detection in a communication system, storage system, image recognition, speech recognition, musical melody recognition, forward error correction, software digital radio, or the like, or a combination thereof.
FIG. 2 illustrates a data decoding process that may be performed by the present system according to some embodiments of the present disclosure. As shown in FIG. 2, the system may include a first decoder 201, a second decoder 203, an interleaver 202 and a deinterleaver 204.
The first decoder 201 may receive an encoded sequence r1l from a transmission channel and a feedback sequence La (l) 24 from the deinterleaver 204. In some embodiments, the encoded sequence r1l may be a turbo code. The first decoder 201 and the second decoder 203 may take the form of, for example, VA decoders, or SOVA decoders, or the like, to decode the turbo code. Specifically, the first decoder 201 may include a processor connecting to a memory (not shown in FIG. 2) . The memory may be configured to store a Viterbi algorithm or Viterbi-like decoding algorithm that specifies Viterbi-like decoding operations to be performed by the processor. As used herein, a Viterbi algorithm or Viterbi-like decoding algorithm refers to dynamic programming algorithms for finding the most likely bit (s) of a state, especially in the context of Markov information sources and hidden Markov models. Exemplary Viterbi-like decoding algorithms are described in detail below with reference to FIG. 5 and FIG. 6. The processor may include a processor core including functional units, such as an addition/subtraction functional units to perform addition functions and subtraction functions, a  multiplication function unit to perform multiplication functions, an arithmetic logic functional unit to perform logic functions, or the like, or a combination thereof. The encoded sequence r1l and/or the feedback sequence La (l) may correspond to intrinsic information relating to the input of a decoder. As used herein, the intrinsic information refers to the log-likelihood ratio regarding the bits/symbols received from the transmission channel. In some embodiments, the intrinsic information may be affected by the environmental factors, such as the transmission conditions of the channel.
The output 21 of the first decoder 201 may be transmitted to the interleaver 202 to be interleaved so as to mitigate burst errors by shuffling the encoded sequences r1l across, for example, several bits, thereby creating a more uniform distribution of errors. The output 21 may be hard-decision bits, or soft values such as reliability values relating to different bits. As used herein, reliability values refer to the probability of valid decoding, in the form of, for example, the log-likelihood ratio (LLR) . A higher reliability value may represent a higher probability of valid decoding. Specifically, in some embodiments, the output 21 may be referred to as extrinsic information, which indicates the quality of the LLRs relating to at least some of the sequence bits. Specifically, the quality of LLRs may refer to the reliability of the LLRs. The LLRs relating to sequence bits may be updated during a decoding process to achieve more reliable values reflecting the accuracy of the bits being decoded. According to the present disclosure, extrinsic information may be used to measure the reliability of one bit according to another bit in an encoded sequence. In some embodiments, the extrinsic information is the difference between the LLRs and the intrinsic information. In some embodiments, the extrinsic information produced in decoding an encoded sequence by one decoder is used by another decoder for further decoding the encoded sequence. In some embodiments, the interleaver 202 may be a uniform interleaver, a convolutional interleaver, a random interleaver, an S-random interleaver, or any possible construction such as a contention-free quadratic permutation polynomial interleaver. The second decoder 203 may decode the received sequences 22 (e.g., soft values relating to bits) based on the sequence r2l from the transmission channel, and output decoded bits 23 relating to the sequence bits.
The decoded bits 23 may include hard-decision bits, or soft values relating to the LLRs of at least some of the sequence bits. In some embodiments, the sequence r2l may correlate with the sequence r1l by way of, for example, interleaving. Another piece of extrinsic information 24 relating to the LLRs of at least some of the sequence bits may be deinterleaved in the deinterleaver 204 and provide input 24 to the first decoder 201 (also referred to as the “feedback sequence La (l) ” ) , together with the encoded sequence r1l. In some embodiments, the decoding algorithm may be performed as an iterative process. Specifically, the input 24 may be updated based on the output 23 provided by the second decoder 203, which in turn is generated based on both the encoded sequence r1l and the input 24 from the previous iteration. In some embodiments, the deinterleaver 204 may be configured to deinterleave hard decision values based on the output of the second decoder 203. In some embodiments, the deinterleaver 204 may deinterleave the sequences according to the interleaving mechanism of the interleaver 202. For example, the deinterleaving may be the reverse process of the interleaving.
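The interleave/deinterleave relationship above can be sketched in Python. The pseudo-random permutation below is an illustrative assumption (the disclosure equally allows uniform, convolutional, S-random, or contention-free QPP constructions); the point is only that the deinterleaver applies the inverse permutation of the interleaver, i.e., the reverse process.

```python
import random

def make_interleaver(length, seed=0):
    # Illustrative pseudo-random permutation; not a construction mandated
    # by the disclosure.
    rng = random.Random(seed)
    perm = list(range(length))
    rng.shuffle(perm)
    inv = [0] * length
    for out_pos, in_pos in enumerate(perm):
        inv[in_pos] = out_pos       # inverse permutation for the deinterleaver
    return perm, inv

def interleave(seq, perm):
    return [seq[p] for p in perm]

def deinterleave(seq, inv):
    # The reverse process of interleaving: apply the inverse permutation.
    return [seq[p] for p in inv]

bits = [0, 1, 1, 0, 1, 0, 0, 1]
perm, inv = make_interleaver(len(bits))
restored = deinterleave(interleave(bits, perm), inv)
```

Interleaving followed by deinterleaving restores the original sequence, which is the property the iterative loop between the two decoders relies on.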
FIG. 3 illustrates an exemplary structure of a decoder that can be used in connection with the present disclosure, such as the first decoder 201 or the second decoder 203 as shown in FIG. 2. The decoder 300 may include an input 301, a calculating module 302, a trimming module 303, a backtracking module 304, an estimating module 305, and an output 306.
The input 301 may be configured to receive information relating to an encoded sequence. In some embodiments, the information may take the form of a data symbol sequence (e.g., the sequence r1l) transmitted through a transmission channel, or a sequence of reliability values relating to a bit sequence. In some embodiments, the information may include an a priori sequence La (l) (also referred to as a “feedback sequence” ) which may relate to extrinsic information provided by another decoder as described in relation to FIG. 2. The calculating module 302 may be configured to process the sequences received by the input 301 so as to perform calculations relating to the sequences. Merely by way of example, the calculating module 302 may search for the maximum likelihood (ML) path based on the sequences received. Furthermore, the calculating module 302 may calculate the metric differences between the ML path and its competitive paths corresponding to at least some of the bits in the sequences according to a trellis graph. In some embodiments, a trellis graph refers to a state transition diagram, with one direction representing time intervals and another direction representing the states corresponding to one or more bits. The transition from a state at one time point to a state at the next time point is defined as a branch, which corresponds to a branch metric. The branch metric represents the probability of a transition from a particular state to a particular target state. Multiple branches form a path, which corresponds to a path metric. The path metric refers to the cumulated branch metrics, representing the probability of transmitting through the set of states in the path. The maximum likelihood path refers to the path corresponding to the maximum path metric, which represents the most likely chain of states. A competitive path refers to a path other than the ML path.
The metric difference refers to the difference between two path metrics, such as those of the ML path and a corresponding competitive path.
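These definitions can be illustrated with a minimal Python sketch. The two-state trellis and its log-domain branch metrics below are illustrative assumptions, not values from the disclosure; the sketch accumulates branch metrics into path metrics and records a metric difference wherever two paths merge into one node.

```python
# Toy 2-state trellis over 3 time steps. branch[t][(s_prev, s_next)] is the
# log-domain branch metric of that transition (larger = more likely).
branch = [
    {(0, 0): -0.1, (0, 1): -2.0},
    {(0, 0): -0.3, (0, 1): -1.5, (1, 0): -1.2, (1, 1): -0.2},
    {(0, 0): -0.4, (1, 0): -0.9},
]

metrics = {0: 0.0}   # path metric of the survivor ending in each state
deltas = []          # metric differences recorded where paths merge
for stage in branch:
    incoming = {}
    for (s_prev, s_next), bm in stage.items():
        if s_prev in metrics:
            incoming.setdefault(s_next, []).append(metrics[s_prev] + bm)
    metrics = {}
    for s_next, cands in incoming.items():
        cands.sort(reverse=True)
        metrics[s_next] = cands[0]               # survivor (ML) path metric
        if len(cands) > 1:
            deltas.append(cands[0] - cands[1])   # survivor minus competitor
```

With these assumed numbers the final survivor metric is -0.8 and the recorded metric differences come out as 2.8, 0.6 and 1.7 (up to floating-point rounding); a SOVA-style decoder uses exactly such differences as reliability information.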
In some embodiments, the calculating module 302 may search for the ML path using a Viterbi-like algorithm, such as the Viterbi algorithm (VA) , the lazy VA, or a modified lazy VA. Specifically, a Viterbi-like algorithm tracks back through possible bit sequences to determine which bit sequence is most likely to have been transmitted. During the decoding process, by eliminating those transitions that are less probable or not permissible, the ML path may be determined. The modified lazy VA will be described in further detail in relation to FIG. 7.
The trimming module 303 may be configured to reduce the complexity of calculations by trimming the calculation process. For example, in some embodiments, the trimming module 303 may set an upper limit for the amount of calculations of the metric differences. Taking the SOVA as an example, assume that the code length is L; there may then be a number of metric differences between the ML path and the respective competitive paths. In some embodiments, the upper limit may be a value relating to the code length L. For example, the trimming module 303 may trim the calculations of metric differences by a factor M, meaning that the number of metric differences to be calculated is reduced to L/M in this case. The factor M may be a number greater than one. In some embodiments, the upper limit may be a specific value smaller than the code length L. In some embodiments, the upper limit may be determined by the conditions under which the decoding process is conducted. For example, decoding algorithms used in an optical disk reproducing apparatus and in a Viterbi equalizer may be assigned different upper limits. Furthermore, in various electronic systems where the present decoding algorithm may apply, such as pattern recognition systems based on a finite state Markov process and digital communication systems, a specific upper limit on the number of metric differences used in a backtracking process may be assigned. Applications of the various electronic systems may also be found, for example, in image recognition, speech recognition, musical melody recognition, forward error correction, software digital radio, or the like, or a combination thereof.
Details regarding the structure of the trimming module 303 will be discussed in relation to FIG. 5. The backtracking module 304 may be configured to conduct backtracking operations on specific competitive paths relating to respective metric differences. The backtracking operations may update the LLRs corresponding to specific sequence bits. Details regarding the backtracking operations will be discussed in relation to FIG. 8.
The estimating module 305 may be configured to estimate the LLRs omitted due to the reduction of metric differences. In some embodiments, the estimation may include interpolating LLRs omitted due to the reduction of metric differences. During the trimming process performed by the trimming module 303, some sequence bits may have fewer, or even no, metric differences for calculating the LLR. To compensate for the missing LLRs corresponding to specific bits, the estimating module 305 may estimate the LLRs based on two aspects, namely the extrinsic information and the intrinsic information as mentioned elsewhere in the present disclosure.
The output 306 may be configured to receive the calculation results relating to the LLRs corresponding to the sequence bits. In some embodiments, the calculation results may be derived from the estimating module 305. In some embodiments, the calculation results may be derived from the backtracking module 304 when no estimation of LLRs is needed.
FIG. 4 illustrates a process 400 for decoding a convolutional code according to some embodiments of the present disclosure. In some embodiments, the decoding process may be performed by the decoder as described in relation to FIG. 3.
In step 401, an input may be obtained. The input may include an encoded sequence rl and/or a priori sequence La (l) . In some embodiments, as described above, the sequence rl may contain intrinsic information, and the a priori sequence La (l) may be a sequence output by a decoder, which contains extrinsic information. In step 402, the process 400 may search for the ML path based on the sequence received in step 401, and calculate the metric differences between the ML path and the respective competitive paths. The determination of the ML path may be performed on a trellis by, for example, the Viterbi algorithm (VA) , the lazy VA, or a modified lazy VA. Taking the SOVA as an example, the complexity of calculation in the decoding process may be reduced by trimming the number of metric differences on which the backtracking process is conducted. In step 403, a determination may be made as to whether the number of the metric differences being calculated exceeds a threshold value. The threshold may be a value relating to the length of the sequence to be decoded, or may be a value set by default. For example, the threshold value may be set as L/M, where L denotes the code length of the sequence, and M denotes a trimming factor (e.g., a number greater than 1) . If the number of the metric differences is above the threshold value, the process 400 may proceed to step 404 to select a certain number of metric differences. In some embodiments, the process 400 may select the smallest L/M metric differences among all the metric differences acquired. In some embodiments, the process 400 may select the metric differences by the following method: first, dividing all the metric differences into, for example, L/M blocks; then, selecting one metric difference from each block to obtain L/M metric differences. Specifically, the blocks of metric differences may be divided according to the positions of the bits in the sequence.
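The two selection strategies of step 404 can be sketched in Python; the metric-difference values and the trimming factor below are illustrative assumptions.

```python
def pick_smallest(deltas, keep):
    # Keep the positions of the `keep` smallest metric differences.
    order = sorted(range(len(deltas)), key=lambda i: deltas[i])
    return sorted(order[:keep])

def pick_blockwise(deltas, keep):
    # Divide the differences into `keep` position-based blocks and keep the
    # smallest one in each block.
    n = len(deltas)
    picked = []
    for b in range(keep):
        lo, hi = b * n // keep, (b + 1) * n // keep
        picked.append(min(range(lo, hi), key=lambda i: deltas[i]))
    return picked

deltas = [3.0, 0.5, 0.4, 1.1, 0.9, 4.0, 2.2, 1.8]   # L = 8 (assumed values)
keep = len(deltas) // 4                              # trimming factor M = 4
smallest = pick_smallest(deltas, keep)
blockwise = pick_blockwise(deltas, keep)
```

With M = 4 and L = 8, both strategies keep L/M = 2 differences: the smallest-value rule keeps positions 1 and 2, while the block-wise rule keeps the per-block minima at positions 2 and 4, preserving a more even spread over the sequence.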
If the number of the metric differences is less than the threshold value, the process 400 may proceed to step 405. In step 405, backtracking operations may be performed to backtrack the competitive paths of each state. Furthermore, the LLRs for previous bits from one state within a backtracking length δ may be updated based on the backtracking operations. Details regarding the backtracking operations will be described in relation to FIG. 8. Then, in step 406, the LLR values corresponding to each state are checked. If an LLR is omitted due to the reduction of metric differences, the process 400 may estimate the LLR based on, for example, intrinsic information and/or extrinsic information as described elsewhere in the disclosure. In step 407, the decoding results may be output by generating, for example, the extrinsic sequence with hard-decision bits, or soft values with LLRs.
FIG. 5 illustrates an exemplary structure of the trimming module 303 according to some embodiments of the present disclosure. The trimming module 303 may include a comparator 501, a trimming factor control unit 502, and a selector 503. The trimming factor control unit 502 may be configured to determine the trimming factor M automatically or according to input by a user. The automatic determination may be achieved by selecting M from a set of trimming factors based on the condition of the calculation process. In some embodiments, the determination may be obtained based on the history of similar calculations. For example, the trimming factor M may be set to the same value as in a historical calculation that performed the trimming process under a similar channel condition. In some embodiments, the trimming factor M may be acquired according to the encoding algorithm.
Referring back to FIG. 5, the trimming factor M may be any positive number. In some embodiments, M may be a positive integer, for instance, 2, 4, 8, or 16. In some embodiments, M may be a positive non-integer, for instance, 2.5, 4.5, or 6.45. When the trimming factor M is determined, the comparator 501 may compare the number of the metric differences calculated in step 402 by the calculating module 302 with L/M as described in step 403, where L is the code length of the sequence. If the number of the metric differences is greater than L/M, the selector 503 may select a certain number (e.g., L/M) of the metric differences in a specific manner. Merely by way of example, the selector 503 may select the smallest L/M metric differences among all the metric differences. If the number of the metric differences is smaller than L/M, the selector 503 may select all the metric differences calculated by the calculating module 302.
It shall be noted that the above description of the trimming module 303 is merely provided for the purposes of illustration, and not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, multiple variations and modifications may be made under the teachings of the present disclosure. For example, the trimming factor control unit 502 may be replaced by an upper limit control unit that may determine the upper limit of the number of metric differences to be used during the backtracking process. Furthermore, the upper limit may be determined based on various conditions as described elsewhere in the disclosure.
FIG. 6 illustrates a process 600 for searching the ML path and calculating the metric differences with a modified lazy VA according to some embodiments of the present disclosure.
In step 601, branch metrics relating to the received bits between different time points may be calculated. As used herein, a branch metric denotes the connection between separate states in the trellis. When decoding a received sequence, the decoding algorithm may traverse the trellis to determine the probabilities of the individual states associated with the branch metrics. In step 602, the process 600 may modify the branch metrics at the same time point. In some embodiments, the branch metrics may be modified to become non-positive values by the branch metric unit 701. Merely by way of example, the modification may be conducted by negating the branch metrics. In step 603, a priority queue (PQ) may be loaded with a start node on the trellis. According to the present disclosure, the priority queue may be used to indicate the next state on the trellis to be processed. During the decoding process, the priority queue may be updated with values corresponding to a node at a specific time point. For instance, the modified branch metrics corresponding to a time point may be stored in the priority queue for subsequent use. Specifically, the priority queue may be configured to store the nodes (k, v) , where k represents the path metric and v represents the condition of the state (s, t) , where s denotes the state of the node and t represents the time point. In step 604, the top node of the PQ may be popped as the current node. In some embodiments, the nodes in the PQ may be ranked according to the path metric values of the states. For example, the top node of the PQ may represent the state with the largest path metric value. In step 605, the process 600 may check whether a node with the same state and time point has been popped. If a node with the same state and time point has been popped, the process 600 may proceed to step 606 to calculate the metric difference on the node. The metric difference calculated may be stored for subsequent use.
Then the process 600 may further pop the top node in the PQ until the process ends. If a node with the same state and time point has not been popped, the process 600 may calculate the path metric relating to the current node and further update the PQ by inserting the calculated path metric into the PQ in step 607. In step 608, the process 600 may determine whether the top node of the PQ is an end node. If the top node of the PQ is an end node, the ML path is determined and a backtracking operation may be conducted based on the ML path in step 609. In step 610, the process 600 may output hard decision bits and metric differences relating to nodes on the ML path. If the top node of the PQ is not an end node, the process 600 may return to step 604 to pop the top node in the updated PQ. The backtracking process may be conducted based on the number of metric differences calculated in step 606. If the number of metric differences exceeds a threshold value, the number of metric differences may be trimmed as described elsewhere in the disclosure.
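Process 600 can be sketched with Python's heapq. The toy two-state trellis with non-positive branch metrics (step 602) is an illustrative assumption, and for simplicity the queue is drained to the end so that every metric difference is collected rather than stopping at step 608.

```python
import heapq

# branch[t][(s_prev, s_next)] holds non-positive branch metrics (illustrative).
branch = [
    {(0, 0): -0.1, (0, 1): -2.0},
    {(0, 0): -0.3, (0, 1): -1.5, (1, 0): -1.2, (1, 1): -0.2},
    {(0, 0): -0.4, (1, 0): -0.9},
]
T = len(branch)

pq = [(0.0, 0, 0)]   # (negated path metric k, time t, state s): the start node
best = {}            # first (largest) path metric popped per (t, s)
deltas = {}          # metric differences found at re-popped nodes (step 606)
while pq:
    neg_k, t, s = heapq.heappop(pq)        # step 604: pop the top node
    k = -neg_k
    if (t, s) in best:                     # step 605: node popped before
        deltas[(t, s)] = best[(t, s)] - k  # step 606: metric difference
        continue
    best[(t, s)] = k
    if t == T:                             # end node: nothing to expand
        continue
    for (s_prev, s_next), bm in branch[t].items():   # step 607: expand
        if s_prev == s:
            heapq.heappush(pq, (-(k + bm), t + 1, s_next))
```

Each node's first pop carries its largest path metric, so a later pop of the same (t, s) directly yields the metric difference of step 606; with the assumed metrics the ML end-node metric is -0.8.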
FIG. 7 illustrates a block diagram of a calculating module using modified lazy VA according to some embodiments of the present disclosure. The calculating module 700 may include a branch metric unit 701, a path metric unit 702, a metric difference unit 703 and a priority queue unit 704.
The branch metric unit 701 may be configured to calculate the branch metrics of possible output sequences between possible time points. The path metric unit 702 may be configured to calculate the path metrics of possible sequences based on the branch metrics calculated by the branch metric unit 701. In some embodiments, the path metric unit 702 may be coupled to the branch metric unit 701 and update the path metrics based on the branch metrics for each state. The metric difference unit 703 connected with the path metric unit 702 may be configured to calculate the metric difference as described in step 606. The priority queue unit 704 may be configured to store the nodes (k, v) loaded to the PQ, as described elsewhere in the disclosure. For example, the priority queue unit 704 may pop the top node of PQ as the current node on the trellis. The top node of PQ may be the node with the maximum path metric in the PQ relating to the maximum value of k. The priority queue unit 704 may be coupled with the path metric unit 702 such that the priority queue may be updated by inserting the calculated path metric into the priority queue.
FIG. 8 illustrates an exemplary backtracking process 800 according to some embodiments of the present disclosure. In step 801, the LLR value of each node on the ML path may be initialized to a certain value. In some embodiments, the certain value may be set as ∞. The backtracking process 800 may start from a specific node on the ML path in step 802. For illustration purposes, a time point t may be used to indicate the node where the backtracking proceeds. As an example, t may be initially set to L, which is the code length of the received sequence. In step 803, the path decision bit relating to a node on the ML path may be compared with the path decision bit relating to the node on a competitive path. In some embodiments, the path decision bits may be compared at consecutive nodes, such as from time point t down to time point t-δ, where δ denotes the backtracking depth. In step 804, the process 800 may check whether the decision bits of the ML path and the competitive path at the same time slot are the same. If the decision bits of the ML path and the competitive path are the same, the LLR of the corresponding bit may remain unchanged in step 805, which is ∞ as initially set according to some embodiments. If the decision bits of the ML path and the competitive path are different, the LLR of the corresponding node on the ML path may be updated in step 806. The updated LLR may be the smaller value between the LLR of the corresponding bit and the metric difference Δ (s, t) between the ML path and the competitive path for the corresponding bit. In step 807, the next node corresponding to time point t’ may be selected. In some embodiments, the backtracking may be conducted on consecutive nodes prior to the current node. For example, the next node may be the previous node corresponding to time point t-1. In some embodiments, the backtracking may be conducted on specific nodes.
For example, the specific nodes may be the nodes on which metric differences are calculated. Specifically, the specific nodes may be the nodes calculated by the modified lazy VA algorithm as described in step 606. Alternatively, the specific nodes may be the nodes corresponding to the L/M metric differences as described elsewhere in the disclosure. In some embodiments, the specific nodes may be the nodes corresponding to the smallest L/M metric differences among all the calculated nodes. In some embodiments, the nodes corresponding to the L/M metric differences may be selected from a plurality of blocks. At least one of the nodes may correspond to a smallest metric difference in a block. The plurality of blocks may be formed by dividing all the calculated nodes into, for example, L/M blocks. The calculated nodes may be divided evenly or unevenly into the plurality of blocks. The process 800 may determine whether the nodes relating to the smallest L/M metric differences have been backtracked in step 808. If not, the process 800 may return to step 802 to backtrack another node in the ML path. If yes, the process 800 may proceed to step 809 to assign the sign of the LLRs according to the corresponding path decision bits of the ML path. Due to the reduction of metric differences as described above, multiple sequence bits may have fewer metric differences, or even no metric difference, in the LLR calculations. To ensure the quality of the LLRs, an estimation process may be carried out to compensate for the omitted metric differences. As shown in FIG. 3, the estimating module 305 may be used to accomplish the estimation. In some embodiments, the estimating module 305 may estimate an omitted LLR based on, for example, an extrinsic sequence and/or an intrinsic sequence.
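The reliability update of steps 801 through 809 can be sketched as follows; the ML-path decisions, competitive-path decisions, and metric-difference values are illustrative assumptions.

```python
INF = float("inf")

ml_bits = [1, 0, 1, 1, 0]       # path decisions of the ML path (assumed)
llr = [INF] * len(ml_bits)      # step 801: initialize reliabilities to infinity

# Each entry: (metric difference Delta, node time t,
#              competitive-path decisions over the window [t - depth, t]).
updates = [
    (1.7, 4, [1, 0, 0, 0, 1]),  # backtrack from the node at t = 4
    (0.6, 2, [1, 1, 1]),        # backtrack from the node at t = 2
]
depth = 4                       # backtracking depth (delta)

for delta, t, comp_bits in updates:
    lo = max(0, t - depth)
    for j in range(lo, t + 1):
        if comp_bits[j - lo] != ml_bits[j]:   # step 804: decisions differ?
            llr[j] = min(llr[j], delta)       # step 806: keep the smaller value
# step 809: the sign of each LLR follows the ML-path decision bit
signed = [(l if b == 1 else -l) for l, b in zip(llr, ml_bits)]
```

Bits never contradicted by a competitive path keep the initial value ∞, while contradicted bits end up with the smallest metric difference seen, which is the SOVA reliability semantics described above.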
According to some embodiments, the estimation of LLRs from the intrinsic sequence can be described as follows:
According to the definition of LLR, assume the sequence bit at time point l is ul=+1,
LLR (l) = ln [p (ul = +1|r) /p (ul = -1|r) ]  (1)
where p denotes the probability of different decision bits and/or different symbols, and r denotes the sequence of received symbols.
Using the max-log approximation,
LLR (l) ≈ ln p (r|ul = +1) -ln p (r|ul = -1) +La (l)  (2)
The term ln p (r|ul = +1) in equation (1) may be rewritten as:
ln p (r|ul = +1) ≈ ln p (rt<l, rl, rt>l|ul = +1, mls (l) )  (3)
The right side of (3) can be divided into three parts based on the property of Markov chain. Particularly,
ln p (rt<l, rl, rt>l|ul = +1, mls (l) ) = ln p (rt<l|ul = +1, mls (l) ) +ln p (rt>l|ul = +1, mls (l) ) +ln p (rl|ul = +1, mls (l) )  (4)
where mls (l) denotes the ML state at time point l. Because rt<l is conditionally independent of ul given mls (l) , the first part on the right side of equation (4) can be rewritten as:
ln p (rt<l|ul = +1, mls (l) )
=ln p (rt<l|ul = -1, mls (l) )
=ln p (rt<l|mls (l) )    (5)
The second part on the right side of equation (4) can be rewritten as:
ln p (rt>l|ul = +1, mls (l) )
=ln p (rt>l|sl+1)
=ln βl+1 (sl+1) (6)
where βl+1 denotes the backward metric, i.e., the accumulated metric from the node at time point L to the node at time point l+1, and sl+1 denotes the state on the trellis at time point l+1. The third part of (4) may be expressed as
ln p (rl|ul = +1, mls (l) ) = ln p (rl|cl) = 0.5Lc·rl·cl (7)
where cl denotes the corresponding code word at time point l, and Lc denotes the channel reliability factor, which can be described as 4Es/N0. Since ul = +1, the LLR at time point l can be written as
LLR (l)
≈0.5Lc· (rl·cl (ul = +1) -rl·cl (ul = -1) ) +La (l) +d (β) (8)
where d (β) = lnβl+1 (mls (l + 1) ) -lnβl+1 (sl+1) . Here the impact of the channel and the a priori sequence may be considered. The estimation may ignore d (β) and leave it to the neighboring LLRs,
|LLR (l) |≈|0.5Lc· (rl·cl (ul = +1) -rl·cl (ul = -1) ) +La (l) | (9)
where, Lc = 4Es/N0, is the channel reliability factor, rl denotes the received symbol at time point l. ul denotes the sequence bit at time point l, cl denotes the corresponding code word, La (l) denotes the prior sequence at time point l, sl denotes the state on trellis at time point l, mls (l) denotes the ML state at time point l.
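A numeric sketch of the intrinsic estimate of equation (9); the rate-1/2 code words, channel symbols, channel reliability factor, and a priori value below are all illustrative assumptions.

```python
Lc = 2.0                    # channel reliability factor 4Es/N0 (assumed value)
r_l = [0.8, -1.1]           # received symbols at time point l (assumed)
c_plus = [+1, +1]           # code word corresponding to u_l = +1 (assumed)
c_minus = [-1, +1]          # code word corresponding to u_l = -1 (assumed)
La_l = 0.4                  # a priori value at time point l (assumed)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Equation (9): |LLR(l)| from channel symbols and the a priori value,
# ignoring the backward-metric term d(beta).
llr_i = abs(0.5 * Lc * (dot(r_l, c_plus) - dot(r_l, c_minus)) + La_l)
```

Only the code-word positions where the two hypotheses differ contribute to the correlation difference, so the estimate reduces to the channel evidence on the systematic part plus the a priori value.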
According to some embodiments, LLR estimation from the extrinsic sequence can be described as follows.
To show that LLR estimation from the extrinsic sequence may be based on neighboring LLRs, the following lemmas may be considered.
Lemma 1. For RSC, the path decisions of the competitive path and the survivor path of the same node are different.
Lemma 2. For the lazy VA, a node popped earlier from the PQ has a larger path metric than one popped later.
Lemma 3. Any metric difference omitted by the modified Lazy VA is larger than its closest obtained LLR.
The backtracking operations of SOVA show that
|LLR (t) | = min {Δi, i∈Dt}  (10)
where Δi denotes the metric difference corresponding to node i. Assume that LLR (t) is omitted due to the trimming process as described elsewhere in the disclosure, according to equation (10) , all metric differences in the {Δi} , i∈Dt, are omitted. Assume that Δj , j∈Dt, is the minimum one among these metric differences. Thus,
|LLR (t) | = Δj (11)
Moreover, Lemma 3 indicates that
Δj ≥|LLR (j + k) | (12)
where LLR (j + k) , 0 < k ≤ δ, is the closest obtained LLR of Δj .
Based on equation (11) and equation (12) ,
|LLR (t) | ≥ |LLR (j + k) | (13)
where t < j + k ≤ t +2δ.
Suppose that |LLR (q) |, t < q ≤ t +2δ, is the minimal magnitude of the obtained LLRs among {|LLR (t +1) |, |LLR (t + 2) |, …, |LLR (t +2δ) |} , thus
|LLR (t) | ≥ |LLR (q) | (14)
Thus, it shall be appreciated that any omitted LLR has a larger magnitude than that of its minimal neighbouring obtained LLR. Then, the omitted LLR can be estimated by its minimum neighbouring obtained LLR,
|LLRn (t) | ≈ |LLR (q) | (15)
where LLR (q) has the minimum magnitude among {|LLR (t +1) |, |LLR (t + 2) |, …, |LLR (t +2δ) |} , t +1≤ q ≤ t +2δ.
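Equation (15) reduces to taking the minimum magnitude over the window of obtained neighbours; a sketch with assumed window values follows.

```python
def estimate_omitted(neighbours):
    # neighbours: obtained LLR values at t+1 ... t+2*delta.
    return min(abs(v) for v in neighbours)

# Assumed window for delta = 2, i.e. 2*delta = 4 obtained neighbours.
window = [2.4, -0.9, 3.1, -1.6]
llr_n = estimate_omitted(window)
```

Here the estimate is 0.9, the minimum neighbouring magnitude; by equation (14) the true omitted |LLR(t)| is guaranteed to be at least this large, so the estimate is conservative.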
The estimating module 305 may combine the intrinsic sequence (e.g., as described in equation (9) ) and the extrinsic sequence (e.g., neighbouring LLRs as described in equation (15) ) to give a better estimation of the LLRs of the sequence bits.
LLR (l) = |LLRi (l) | +|LLRn (l) | (16)
In some embodiments, the estimating module 305 may use two scaling factors, θ1 and θ2, to modify the values of the extrinsic sequence. Then, the extrinsic sequence output by the estimating module 305 may be estimated as follows,
Le (l) = θ2 · (θ1 ·LLR (l) -Li (l) ) (17)
In some embodiments, the determination of the scaling factors may be based on statistical results. For example, the SOVA may produce overly large values in the extrinsic sequence; applying scaling factors that lie in the range (0, 1) may therefore modify the values of the extrinsic sequence to achieve a higher accuracy.
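Equations (16) and (17) can be sketched numerically. The estimate values, the intrinsic value Li, and the choice θ1 = θ2 = 0.9 below are illustrative assumptions (0.9 matches the value used in the examples later in this disclosure).

```python
llr_i, llr_n = 2.0, 0.9        # e.g. results of equations (9) and (15)
Li = 1.2                        # intrinsic information at time point l (assumed)
theta1, theta2 = 0.9, 0.9       # scaling factors in the range (0, 1)

llr = llr_i + llr_n                       # equation (16): combined estimate
Le = theta2 * (theta1 * llr - Li)         # equation (17): scaled extrinsic output
```

Subtracting the intrinsic term Li before the final scaling leaves only the extrinsic contribution, which is what is fed back to the other constituent decoder.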
In some embodiments, a tangible and non-transitory machine-readable medium or media having instructions recorded thereon for a processor or computer to perform one or more functions of the modules or units described elsewhere herein, for example, to implement the decoding processes or the LLR estimation described above, may be provided. The medium or media may be any type of CD-ROM, DVD, floppy disk, hard disk, optical disk, flash RAM drive, or other type of computer-readable medium or a combination thereof.
The various embodiments and/or components, for example, the modules, units, processors, components and controllers, may be implemented as part of one or more computers or processors. The computer or processor may include a computing device, an input device, a display unit and an interface, for example, for accessing the Internet or communicating with a cloud server. The computer or processor may include a microprocessor. The microprocessor may be connected to a communication bus. The computer or processor may also include a memory. The memory may include Random Access Memory (RAM) and Read Only Memory (ROM) . The computer or processor further may include a storage device including, for example, a hard disk drive or a removable storage drive such as a floppy disk drive, optical disk drive, or the like, or a combination thereof. The storage device may also be other similar means for loading computer programs or other instructions into the computer or processor. The computer or processor executes a set of instructions that are stored in one or more storage elements, in order to process input data. The storage elements may also store data or other information as desired or needed. The storage element may be in the form of an information source or a physical memory element within a processing machine.
EXAMPLES
The following examples are provided for illustration purposes, and not intended to limit the scope of the present disclosure.
FIGs. 9A-9B illustrate the quality of the LLRs using a standard Turbo code and a convolutional code according to some embodiments of the present disclosure. For illustration purposes, a non-recursive convolutional code (CC) with 6 register bits, a 1/2 code rate, and (171, 133) code polynomials may be defined as Code I. Code IIA may be a standard Turbo code in CDMA2000, with 3 register bits, 1146 message bits, a 1/3 code rate, and (13, 15) RSC polynomials; Code IIB may be a standard Turbo code the same as Code IIA except for a 1/2 code rate. Code IIIA may be a standard Turbo code with 4 register bits, 378 message bits, a 1/3 code rate, and (21, 37) RSC polynomials; Code IIIB may be a standard Turbo code the same as Code IIIA except for 105 message bits. As used herein, the standard Turbo code refers to Code II, and Codes I and III show results similar to Code II. Specifically, the extrinsic information may be used to indicate the quality of the LLRs.
Particularly, FIGs. 9A-9B plot the extrinsic information of M-SOVA, S-SOVA and T-SOVA against that of Log-MAP for CC Code I and Turbo Code IIA, each of which includes 5000 data points obtained after the first iteration at a given Eb/N0. FIGs. 9A and 9B illustrate the results of the CC Code I at Eb/N0 = 2.0 dB and the Turbo Code IIA at SNR = 0.6 dB, respectively. They show that the extrinsic information of M-SOVA is much more optimistic than that of Log-MAP, while the extrinsic information of S-SOVA matches well with that of Log-MAP. With trimming factors M = 4 and M = 8, the extrinsic information of T-SOVA matches well with that of Log-MAP. If the trimming factor M increases to 16, the difference between the extrinsic information of T-SOVA and Log-MAP becomes larger in the small-LLR region around LLR = 0. As trimming becomes very aggressive, e.g., M = 64, the difference becomes obvious in the small-LLR region, which may cause performance degradation. For the other CC and Turbo codes, the analysis results of LLR quality are similar.
FIGs. 10A-10C plot the error performance of various decoding algorithms for Code IIA, Code IIB and Code IIIA, respectively, according to some embodiments of the present disclosure. As used herein, the error performance is the bit-error rate (BER) and block-error rate (BLER) , and the decoding algorithms are Log-MAP, M-SOVA, bi-SOVA, S-SOVA and T-SOVA. As shown in the figures, when the trimming factor is moderate, e.g., M = 4, 8, 16, T-SOVA with scaling factors θ1 = 0.9 and θ2 = 0.9 performs closely to Log-MAP and S-SOVA, and outperforms the other variants of SOVA.
For Code IIA, T-SOVA with trimming factors M = 4 and M = 8 can approach Log-MAP and S-SOVA within 0.1 dB at a BER of 10^-6 in FIG. 10A. There is about a 0.2 dB performance gap between T-SOVA with M = 16 and S-SOVA. However, it still performs as well as bi-SOVA and outperforms M-SOVA. As the trimming factor becomes more aggressive, e.g., M = 64, the performance gap between T-SOVA and S-SOVA increases to 0.8 dB. As shown in FIG. 10A, T-SOVA with trimming factors M = 4 and M = 8 can also approach Log-MAP and S-SOVA within 0.1 dB at a BLER of 10^-4, and perform as well as bi-SOVA. As the trimming factor becomes more aggressive, e.g., M = 64, the BLER performance gap between T-SOVA and S-SOVA likewise increases to 0.8 dB.
For Code IIB, T-SOVA with trimming factors M = 4 and M = 8 can approach Log-MAP and S-SOVA within 0.2 dB and 0.1 dB, respectively, at a BER of 10^-6 in FIG. 10B. T-SOVA with trimming factor M = 16 performs as well as bi-SOVA and outperforms M-SOVA. As the trimming factor becomes more aggressive, e.g., M = 64, the performance gap between T-SOVA and S-SOVA increases to 0.8 dB. The BLER of the various decoding algorithms for Code IIB is similar to the BER results illustrated in FIG. 10B.
For Code IIIA, T-SOVA with trimming factors M = 4 and M = 8 can approach Log-MAP and S-SOVA within 0.2 dB and 0.05 dB, respectively, at a BER of 10^-6 in FIG. 10C. There is about a 0.1 dB performance gap between T-SOVA with M = 16 and S-SOVA. However, it still performs as well as bi-SOVA and outperforms M-SOVA. As the trimming factor becomes more aggressive, e.g., M = 64, the performance gap between T-SOVA and S-SOVA increases to 0.4 dB. The BLER of the various decoding algorithms for Code IIIA is similar to the BER results illustrated in FIG. 10C.
FIG. 11 and FIGs. 12A-12C illustrate the complexity of SOVA and T-SOVA according to some embodiments of the present disclosure. Recall that the number of backtracking operations determines the complexity of SOVA. FIG. 11 shows the average number of backtracking operations of SOVA and T-SOVA. For Code IIA, IIB, and IIIA, T-SOVA requires at most 1/M of the backtracking operations of SOVA.
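The source of this 1/M reduction can be sketched as follows. This is an illustrative sketch, not the patented implementation; the function name and data layout are assumptions. Keeping only the N = L/M smallest metric differences means backtracking is performed from at most 1/M of the trellis positions:

```python
import heapq

def select_trimmed_positions(metric_diffs, M):
    """Keep only the N = L // M smallest metric differences.

    Backtracking is then performed only from the positions whose
    metric differences are among the N smallest (the least reliable
    bits), so at most 1/M of SOVA's backtracking operations remain.
    Illustrative names, not the patented implementation.
    """
    L = len(metric_diffs)
    N = max(1, L // M)
    # nsmallest returns the N (difference, position) pairs with the
    # smallest metric differences; we report the positions in order.
    return sorted(pos for _, pos in heapq.nsmallest(
        N, ((d, i) for i, d in enumerate(metric_diffs))))

diffs = [3.2, 0.4, 5.1, 0.9, 2.7, 0.1, 4.4, 1.8]
print(select_trimmed_positions(diffs, 4))  # -> [1, 5]
```

With L = 8 and M = 4, only N = 2 positions survive trimming, matching the at-most-1/M bound on backtracking operations stated above.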
The memory consumption of SOVA and T-SOVA is determined by the number of nodes expanded in the first stage. FIGs. 12A-12C show the average number of nodes expanded by T-SOVA and SOVA at different Eb/N0 in each iteration. For instance, T-SOVA expands about 22%, 68%, and 45% of the nodes of SOVA at Eb/N0 = 3.2 dB for Code IIA, IIB, and IIIA, respectively.
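The node counts compared in FIGs. 12A-12C can be illustrated with a minimal best-first trellis expansion in the lazy Viterbi style: nodes are popped from a priority queue in order of path metric, branch metrics are taken as non-positive so path metrics never increase, and a repeated pop of an already-settled node is where a metric difference would be computed. All names and the trellis layout here are hypothetical, not the patented implementation:

```python
import heapq

def lazy_expand(trellis, L, start=0):
    """Best-first trellis expansion with a priority queue (illustrative).

    trellis[t][s] is assumed to map each next state to a non-positive
    branch metric, so a min-heap keyed on the negated path metric pops
    nodes in best-first order. Returns the best path metric per
    (state, time) node and the number of nodes expanded, which is the
    quantity that drives memory consumption.
    """
    heap = [(0.0, start, 0)]          # (negated path metric, state, time)
    settled = set()                   # nodes whose best metric is final
    best = {(start, 0): 0.0}
    expanded = 0
    while heap:
        neg_pm, s, t = heapq.heappop(heap)
        if (s, t) in settled:
            # A repeated pop is a competitor path reaching a settled
            # node; a metric difference would be computed here.
            continue
        settled.add((s, t))
        expanded += 1
        if t == L:
            continue
        for s_next, bm in trellis[t][s].items():
            pm = -neg_pm + bm         # bm <= 0, so metrics only decrease
            key = (s_next, t + 1)
            if pm > best.get(key, float("-inf")):
                best[key] = pm
                heapq.heappush(heap, (-pm, s_next, t + 1))
    return best, expanded

# Tiny 2-state, length-2 example trellis.
trellis = [
    {0: {0: -1.0, 1: -2.0}},
    {0: {0: -1.0, 1: -3.0}, 1: {0: -2.0, 1: -1.0}},
]
best, expanded = lazy_expand(trellis, 2)
print(best[(0, 2)], expanded)  # -> -2.0 5
```

Because expansion stops refining a node once it is settled, fewer nodes are touched at higher Eb/N0, consistent with the percentages reported above.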
As will be also appreciated, the above described method embodiments may take the form of computer or controller implemented processes and apparatuses for practicing those processes. The disclosure can also be embodied in the form of computer program code containing instructions embodied in tangible media, such as floppy diskettes, CD-ROMs, hard drives, or any other computer-readable storage medium, wherein, when the computer program code is loaded into and executed by a computer or controller, the computer becomes an apparatus for practicing the invention. The disclosure may also be embodied in the form of computer program code or signal, for example, whether stored in a storage medium, loaded into and/or executed by a computer or controller, or transmitted over some transmission medium, such as over electrical wiring or cabling, through fiber optics, or via electromagnetic radiation, wherein, when the computer program code is loaded into and executed by a computer, the computer becomes an apparatus for practicing the invention. When implemented on a general-purpose microprocessor, the computer program code segments configure the microprocessor to create specific logic circuits.
The various methods and techniques described above provide a number of ways to carry out the application. Of course, it is to be understood that not necessarily all objectives or advantages described can be achieved in accordance with any particular embodiment described herein. Thus, for example, those skilled in the art will recognize that the methods may be performed in a manner that achieves or optimizes one advantage or group of advantages as taught herein without necessarily achieving other objectives or advantages as taught or suggested herein. A variety of alternatives are mentioned herein. It is to be understood that some embodiments specifically include one, another, or several features, while others specifically exclude one, another, or several features, while still others mitigate a particular feature by inclusion of one, another, or several advantageous features.
Furthermore, the skilled artisan will recognize the applicability of various features from different embodiments. Similarly, the various elements, features, and steps discussed above, as well as other known equivalents for each such element, feature, or step, may be employed in various combinations by one of ordinary skill in this art to perform methods in accordance with the principles described herein. Among the various elements, features, and steps, some will be specifically included and others specifically excluded in diverse embodiments.
Although the application has been disclosed in the context of certain embodiments and examples, it will be understood by those skilled in the art that the embodiments of the application extend beyond the specifically disclosed embodiments to other alternative embodiments and/or uses and modifications and equivalents thereof.
In some embodiments, the terms “a” and “an” and “the” and similar references used in the context of describing a particular embodiment of the application (especially in the context of certain of the following claims) may be construed to cover both the singular and the plural. The recitation of ranges of values herein is merely intended to serve as a shorthand method of referring individually to each separate value falling within the range. Unless otherwise indicated herein, each individual value is incorporated into the specification as if it were individually recited herein. All methods described herein may be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The use of any and all examples, or exemplary language (for example, “such as” ) provided with respect to certain embodiments herein is intended merely to better illuminate the application and does not pose a limitation on the scope of the application otherwise claimed. No language in the specification should be construed as indicating any non-claimed element essential to the practice of the application.
Some embodiments of this application are described herein. Variations on those embodiments will become apparent to those of ordinary skill in the art upon reading the foregoing description. It is contemplated that skilled artisans may employ such variations as appropriate, and the application may be practiced otherwise than specifically described herein. Accordingly, many embodiments of this application include all modifications and equivalents of the subject matter recited in the claims appended hereto as permitted by applicable law. Moreover, any combination of the above-described elements in all possible variations thereof is  encompassed by the application unless otherwise indicated herein or otherwise clearly contradicted by context.
All patents, patent applications, publications of patent applications, and other material, such as articles, books, specifications, publications, documents, things, and/or the like, referenced herein are hereby incorporated herein by this reference in their entirety for all purposes, excepting any prosecution file history associated with same, any of same that is inconsistent with or in conflict with the present document, or any of same that may have a limiting effect as to the broadest scope of the claims now or later associated with the present document. By way of example, should there be any inconsistency or conflict between the description, definition, and/or the use of a term associated with any of the incorporated material and that associated with the present document, the description, definition, and/or the use of the term in the present document shall prevail.
In closing, it is to be understood that the embodiments of the application disclosed herein are illustrative of the principles of the embodiments of the application. Other modifications that may be employed may be within the scope of the application. Thus, by way of example, but not of limitation, alternative configurations of the embodiments of the application may be utilized in accordance with the teachings herein. Accordingly, embodiments of the present application are not limited to that precisely as shown and described.

Claims (22)

  1. A method of decoding comprising:
    retrieving a sequence of L bits;
    determining a maximum likelihood path and related competitive paths based on the L bits;
    calculating reliability values relating to the bits based on no more than N metric differences,
    wherein the metric differences are calculated based on the maximum likelihood path and the related competitive paths;
    wherein L and N are integers greater than 1; and
    wherein L is greater than N.
  2. The method according to claim 1, further comprising conducting a backtracking operation on the related competitive paths.
  3. The method according to claim 2, wherein the calculating reliability values comprises estimating at least some of the reliability values corresponding to some of the bits based on intrinsic information or extrinsic information.
  4. The method according to claim 3, wherein the intrinsic information comprises a sequence transmitted through a channel.
  5. The method according to claim 2, wherein the reliability value relating to a first bit within a backtracking depth of a second bit is updated during the backtracking operation, and wherein the second bit corresponds to one of the N metric differences.
  6. The method according to claim 1, wherein N equals L divided by a trimming factor M; and wherein M is a number greater than 1.
  7. The method according to claim 1, wherein the calculating reliability values comprises obtaining a plurality of metric differences and obtaining N minimum metric differences among the plurality of metric differences.
  8. The method according to claim 1, wherein the metric differences are calculated by a Viterbi algorithm.
  9. The method according to claim 1, wherein the metric differences are determined by a  modified Lazy Viterbi algorithm comprising:
    obtaining a priority queue comprising a plurality of nodes; and
    calculating a metric difference on a node of the plurality of nodes;
    wherein the node has been popped more than once based on the priority queue.
  10. The method according to claim 9, wherein the plurality of nodes are popped in a sequence ranked based on path metric values of the plurality of nodes.
  11. The method according to claim 9, further comprising modifying branch metrics relating to the sequence into non-positive values.
  12. A system of decoding comprising:
    an input configured to retrieve a sequence of L bits;
    a calculating module configured to determine a maximum likelihood path and related competitive paths based on the L bits;
    a trimming module configured to obtain no more than N metric differences based on the maximum likelihood path and the related competitive paths;
    an output configured to output reliability values relating to the bits calculated based on the metric differences,
    wherein L and N are integers greater than 1; and
    wherein L is greater than N.
  13. The system according to claim 12, further comprising a backtracking module configured to conduct a backtracking operation on the related competitive paths.
  14. The system according to claim 13, further comprising an estimating module configured to estimate at least some of the reliability values corresponding to some of the bits based on intrinsic information or extrinsic information.
  15. The system according to claim 14, wherein the intrinsic information comprises a sequence transmitted through a channel.
  16. The system according to claim 13, wherein the backtracking module is configured to update the reliability value relating to a first bit within a backtracking depth of a second bit during the backtracking operation, and wherein the second bit corresponds to one of the N metric differences.
  17. The system according to claim 12, wherein the trimming module is configured to provide a trimming factor M; wherein M is an integer greater than 1; and wherein N equals L divided by M.
  18. The system according to claim 12, wherein the trimming module is configured to obtain N minimum metric differences among a plurality of metric differences.
  19. The system according to claim 12, wherein the calculating module is configured to calculate the metric differences by a Viterbi algorithm.
  20. The system according to claim 12,
    wherein the calculating module is configured to calculate the metric differences by a modified Lazy Viterbi algorithm;
    wherein the calculating module comprises:
    a priority queue unit configured to obtain a priority queue comprising a plurality of nodes; and
    a metric difference unit configured to calculate a metric difference on a node of the plurality of nodes; wherein the node has been popped more than once based on the priority queue.
  21. The system according to claim 20, wherein the priority queue unit is further configured to pop the plurality of nodes in a sequence ranked based on path metric values of the plurality of nodes.
  22. The system according to claim 20, further comprising a branch metric unit configured to modify branch metrics relating to the sequence into non-positive values.
PCT/CN2016/095699 2015-08-24 2016-08-17 System and method for data decoding WO2017032255A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510523515.4A CN106487392B (en) 2015-08-24 2015-08-24 Down-sampled interpretation method and device
CN201510523515.4 2015-08-24

Publications (1)

Publication Number Publication Date
WO2017032255A1 true WO2017032255A1 (en) 2017-03-02

Family

ID=58099596

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/095699 WO2017032255A1 (en) 2015-08-24 2016-08-17 System and method for data decoding

Country Status (2)

Country Link
CN (1) CN106487392B (en)
WO (1) WO2017032255A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108923887A (en) * 2018-06-26 2018-11-30 中国人民解放军国防科技大学 Soft decision decoder structure of multi-system partial response CPM signal
CN110612669A (en) * 2017-05-24 2019-12-24 华为技术有限公司 Decoding method and device

Families Citing this family (1)

Publication number Priority date Publication date Assignee Title
TWI729755B (en) * 2020-04-01 2021-06-01 智原科技股份有限公司 Receiver and internal tcm decoder and associated decoding method

Citations (2)

Publication number Priority date Publication date Assignee Title
US20110010601A1 (en) * 2009-07-09 2011-01-13 Kabushiki Kaisha Toshiba Data reproducing apparatus and data reproducing method
CN103548084A (en) * 2011-06-17 2014-01-29 日立民用电子株式会社 Optical information reproduction device and method for reproducing optical information

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
CN101395669B (en) * 2007-02-21 2011-05-04 松下电器产业株式会社 Maximum likelihood decoder and information reproducing device
CN102340317B (en) * 2010-07-21 2014-06-25 中国科学院微电子研究所 High-throughput rate decoder and decoding method of structured LDPC code

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
US20110010601A1 (en) * 2009-07-09 2011-01-13 Kabushiki Kaisha Toshiba Data reproducing apparatus and data reproducing method
CN103548084A (en) * 2011-06-17 2014-01-29 日立民用电子株式会社 Optical information reproduction device and method for reproducing optical information

Non-Patent Citations (1)

Title
QIN HUANG ET AL.: "Trimming Soft-Input Soft-Output Viterbi Algorithms", IEEE TRANSACTIONS ON COMMUNICATIONS, vol. 64, no. 7, 12 July 2016 (2016-07-12), pages 2952 - 2960, XP011616633, ISSN: 0090-6778 *

Cited By (3)

Publication number Priority date Publication date Assignee Title
CN110612669A (en) * 2017-05-24 2019-12-24 华为技术有限公司 Decoding method and device
US11477170B2 (en) 2017-05-24 2022-10-18 Huawei Technologies Co., Ltd. Decoding method and apparatus
CN108923887A (en) * 2018-06-26 2018-11-30 中国人民解放军国防科技大学 Soft decision decoder structure of multi-system partial response CPM signal

Also Published As

Publication number Publication date
CN106487392B (en) 2019-11-08
CN106487392A (en) 2017-03-08


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16838514

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16838514

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 20/03/2019)