WO2014194761A1 - Soft bit coding method and apparatus for radio receiving device - Google Patents

Soft bit coding method and apparatus for radio receiving device

Info

Publication number
WO2014194761A1
Authority
WO
WIPO (PCT)
Prior art keywords
soft bit
bit data
soft
mbit
decoding
Prior art date
Application number
PCT/CN2014/078051
Other languages
French (fr)
Chinese (zh)
Inventor
刘中伟 (Liu Zhongwei)
李钦昕 (Li Qinxin)
邱宁 (Qiu Ning)
邢艳楠 (Xing Yannan)
Original Assignee
中兴通讯股份有限公司 (ZTE Corporation)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ZTE Corporation (中兴通讯股份有限公司)
Publication of WO2014194761A1


Classifications

    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/65Purpose and implementation aspects
    • H03M13/6577Representation or format of variables, register sizes or word-lengths and quantization
    • H03M13/6588Compression or short representation of variables
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/37Decoding methods or techniques, not specific to the particular type of coding provided for in groups H03M13/03 - H03M13/35
    • H03M13/45Soft decoding, i.e. using symbol reliability information
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/37Decoding methods or techniques, not specific to the particular type of coding provided for in groups H03M13/03 - H03M13/35
    • H03M13/39Sequence estimation, i.e. using statistical methods for the reconstruction of the original codes
    • H03M13/3905Maximum a posteriori probability [MAP] decoding or approximations thereof based on trellis or lattice decoding, e.g. forward-backward algorithm, log-MAP decoding, max-log-MAP decoding
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/37Decoding methods or techniques, not specific to the particular type of coding provided for in groups H03M13/03 - H03M13/35
    • H03M13/39Sequence estimation, i.e. using statistical methods for the reconstruction of the original codes
    • H03M13/3905Maximum a posteriori probability [MAP] decoding or approximations thereof based on trellis or lattice decoding, e.g. forward-backward algorithm, log-MAP decoding, max-log-MAP decoding
    • H03M13/3911Correction factor, e.g. approximations of the exp(1+x) function
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/37Decoding methods or techniques, not specific to the particular type of coding provided for in groups H03M13/03 - H03M13/35
    • H03M13/39Sequence estimation, i.e. using statistical methods for the reconstruction of the original codes
    • H03M13/41Sequence estimation, i.e. using statistical methods for the reconstruction of the original codes using the Viterbi algorithm or Viterbi processors

Definitions

  • the present invention relates to a soft bit decoding technology suitable for a wireless receiving device, and more particularly to a soft bit decoding method and related device for a wireless receiving device.
  • BACKGROUND OF THE INVENTION In a digital communication system, bit errors are generally unavoidable, and for a wireless communication system with poor channel conditions the error situation is particularly serious; reducing bit errors and improving the reliability of the communication system have always been among the main goals pursued in communication system design.
  • Channel coding is an error control technique developed to improve communication reliability, and mainly includes two categories: error detection codes and error correction codes. Error correction codes are commonly used in channel coding, and they can be further divided into two classes: block codes and convolutional codes.
  • the block code is a memoryless error correction code.
  • at the receiving end, only the current n inputs can be used to correct the transmitted information bits, and such codes are often denoted as (n, k) codes.
  • n is the number of symbols of a codeword, that is, the codeword length
  • k is the number of information symbols
  • n-k is the number of check (parity) symbols.
  • the convolutional code is a memory error correction code.
  • at the receiving end, not only the current n inputs but also m previously received groups of n bits can be used to estimate the currently transmitted information bits, and such codes are denoted as (n, k, m) codes.
  • the linear block code only uses the current code word information for error correction, and the convolutional code can simultaneously use the previous multiple code word information for error correction, and thus, the error correction capability of the convolutional code is much stronger.
  • in wireless receiving systems operating in harsh environments, convolutional codes are mainly used for channel coding.
  • Commonly used convolutional codes include two types: one is the convolutional code in the usual sense, and the other is the turbo code, which is an improvement on the ordinary convolutional code in that two convolutional codes are used simultaneously,
  • where the input of one of the convolutional codes is an interleaved version of the input of the other convolutional code.
  • the convolutional code encoder in the TD protocol is shown in Figure 1, with a constraint length of 9.
  • the initial values of the eight shift registers D of the encoder are set to all zeros, and eight bits 0 are added at the end of the input bits.
  • in each clock cycle the operation "modulo-2 addition first, then register shift" is performed.
  • According to the 3G protocol, as a result of code block segmentation, the maximum amount of input data encoded by the convolutional encoder at one time is 504 bits.
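To make the shift-register description above concrete, here is a minimal C sketch of a feed-forward convolutional encoder with constraint length 9 and eight appended zero tail bits. The generator polynomials (octal 561/753, often quoted for the 3GPP rate-1/2, K=9 code) and the bit-ordering convention are assumptions added for illustration; they are not specified in this text and may need adjusting to match a particular standard.

```c
#include <stdint.h>
#include <stddef.h>

/* Generic feed-forward convolutional encoder, constraint length K = 9.
 * Assumed generator polynomials (octal 561, 753) for a rate-1/2 code;
 * replace them (and the bit ordering) to match the target standard.    */
#define K           9
#define G0          0561u   /* octal */
#define G1          0753u   /* octal */
#define TAIL_BITS   (K - 1) /* eight zero bits appended to flush the registers */

static int parity(uint32_t v)            /* XOR of all bits = modulo-2 sum */
{
    v ^= v >> 16; v ^= v >> 8; v ^= v >> 4; v ^= v >> 2; v ^= v >> 1;
    return (int)(v & 1u);
}

/* in[0..n_in-1] are information bits (0/1); out must hold 2*(n_in + 8) bits. */
static size_t conv_encode_r12(const uint8_t *in, size_t n_in, uint8_t *out)
{
    uint32_t sr = 0;                      /* eight shift registers, initially all zero */
    size_t   n_out = 0;

    for (size_t i = 0; i < n_in + TAIL_BITS; ++i) {
        uint8_t  bit   = (i < n_in) ? in[i] : 0;    /* tail: append zero bits    */
        uint32_t state = (sr << 1) | bit;           /* current bit + K-1 past bits */
        out[n_out++] = (uint8_t)parity(state & G0); /* modulo-2 add of the taps  */
        out[n_out++] = (uint8_t)parity(state & G1);
        sr = state & ((1u << (K - 1)) - 1u);        /* register shift            */
    }
    return n_out;
}
```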
  • the function of the Viterbi decoder is to decode the rate-1/2 and rate-1/3 convolutional codes.
  • the Viterbi decoding algorithm is a maximum likelihood decoding method proposed by Viterbi in 1967: the decoder always selects as its output the codeword that maximizes the conditional probability of the received sequence. According to the maximum likelihood decoding principle, the path most similar to the received sequence (the one with the smallest distance) is found among all possible paths, and the decision output is obtained by path traceback. This method has been proven to have the best error correction decoding performance.
  • the Viterbi decoding algorithm mainly consists of the add-compare-select operation on the path metrics, the update of the accumulated metrics, and the traceback of the maximum likelihood path.
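A minimal sketch of the add-compare-select step mentioned above, for one trellis stage. The state numbering (newest bit in the least-significant position) and the use of minimum-distance metrics are illustrative assumptions, not details taken from this text.

```c
#include <stdint.h>

#define N_STATES 256u   /* 2^(K-1) states for a constraint-length-9 code (assumed) */

/* One add-compare-select (ACS) stage of a Viterbi decoder.
 * old_metric[s]        : accumulated path metric of state s before this symbol
 * branch_metric[s][b]  : metric of the branch leaving state s with input bit b
 *                        (e.g. a soft-bit distance computed from the received symbol)
 * new_metric[s]        : best accumulated metric of state s after this symbol
 * survivor[s]          : predecessor state chosen for s (stored for traceback)   */
static void viterbi_acs_stage(const uint32_t old_metric[N_STATES],
                              const uint32_t branch_metric[N_STATES][2],
                              uint32_t new_metric[N_STATES],
                              uint16_t survivor[N_STATES])
{
    for (uint32_t next = 0; next < N_STATES; ++next) {
        /* With the newest bit shifted into the LSB, state 'next' is reached from
         * two predecessors p0 and p1, which differ only in their oldest bit.     */
        uint32_t bit = next & 1u;                     /* input bit causing the transition */
        uint32_t p0  = next >> 1;                     /* predecessor with oldest bit 0    */
        uint32_t p1  = (next >> 1) | (N_STATES >> 1); /* predecessor with oldest bit 1    */

        uint32_t m0 = old_metric[p0] + branch_metric[p0][bit];   /* add              */
        uint32_t m1 = old_metric[p1] + branch_metric[p1][bit];

        if (m0 <= m1) {                                          /* compare + select */
            new_metric[next] = m0;
            survivor[next]   = (uint16_t)p0;
        } else {
            new_metric[next] = m1;
            survivor[next]   = (uint16_t)p1;
        }
    }
}
```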
  • the turbo code decoder is shown in Fig. 2.
  • the input information sequences X, YP1 and YP2 are formed by adding channel noise to the encoder output sequences X, XP1 and XP2, respectively.
  • the input of decoder 1 is K1 = (X, YP1), and the input of decoder 2 is K2 = (X, YP2).
  • the optimal decoding strategy for Turbo codes is to compute the a posteriori probability P(u_k | K1, K2), u_k = 0, 1, which has a high computational complexity.
  • the practical scheme is for the component decoders to compute P(u_k | K1) and P(u_k | K2) separately, giving a suboptimal strategy with acceptable decoding complexity; through iterations these estimates are made to converge towards P(u_k | K1, K2). This is precisely the basic idea of iterative decoding: the decoding of a complex long code is divided into several steps, while ensuring that the probability (soft information) passed between the decoding steps causes almost no loss of information.
  • For Turbo codes there are two main decoding schemes: the maximum a posteriori probability (MAP) family, which includes the MAP, Log-MAP and Max-Log-MAP algorithms, and the soft-output Viterbi algorithm (SOVA). MAP is the optimal decoding algorithm, but its drawbacks are high computational complexity and a large storage requirement. The performance of the Log-MAP algorithm is close to that of the MAP algorithm; it is a suboptimal algorithm which, by moving the computation to the logarithmic domain, turns multiplications into additions and thus greatly reduces the computational complexity. The Max-Log-MAP algorithm further ignores the logarithmic correction term in the Log-MAP likelihood addition and replaces the summation of likelihood values with a maximum operation, reducing the complexity further at a performance cost of 0.3-0.5 dB relative to MAP. The decoding performance of SOVA is the worst, about 0.5-1 dB away from the MAP algorithm, and the gap does not shrink as the signal-to-noise ratio increases; however, its computational complexity is low, which is favourable for hardware implementation.
  • the choice of Turbo decoding algorithm should consider the balance between performance and complexity.
  • the Max-Log-MAP algorithm is usually used.
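The difference between Log-MAP and Max-Log-MAP described above reduces to how the max* operation ln(e^a + e^b) is evaluated. A small illustrative sketch (generic, not tied to this text's decoder):

```c
#include <math.h>

/* Log-MAP core operation: max*(a, b) = ln(e^a + e^b)
 *                                    = max(a, b) + ln(1 + e^-|a-b|).        */
static double max_star_logmap(double a, double b)
{
    double m = (a > b) ? a : b;
    return m + log1p(exp(-fabs(a - b)));   /* exact, includes the correction term */
}

/* Max-Log-MAP simply drops the logarithmic correction term, trading roughly
 * 0.3-0.5 dB of performance for a much cheaper operation.                    */
static double max_star_maxlog(double a, double b)
{
    return (a > b) ? a : b;
}
```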
  • For Viterbi decoding and turbo decoding there are two decision methods: hard decision and soft decision. Compared with soft decision, hard decision suffers a performance loss of 2-3 dB, so the soft decision scheme is clearly superior. However, the soft decision scheme is limited by the quantization bit width of the hardware: the wider the quantization, the larger the RAM area required to hold the soft decision bits.
  • Taking TD-SCDMA as an example, the general decoding flow of the downlink is shown in FIG. 3, including physical channel demapping, subframe splicing, second de-interleaving, physical channel splicing, de-bit scrambling, physical channel demultiplexing, reverse rate matching, wireless frame splicing, first de-interleaving, wireless frame reverse equalization, channel decoding, code block concatenation/transport block segmentation, CRC detection and other processes.
  • Because of the second de-interleaving and the first de-interleaving, the UE terminal of TD-SCDMA must store the demapped data of one TTI period and one radio frame, which means that the wider the bits of the soft bits used for Viterbi decoding and turbo decoding, the larger the required buffer, and therefore the larger the chip area of the wireless receiving device.
  • Embodiments of the present invention provide a soft bit decoding method and apparatus for a wireless receiving device, which can better solve the problem of reducing the chip area of a wireless receiving device without substantially losing performance.
  • a soft bit decoding method of a wireless receiving device, including: compressing the Mbit soft bit data to be decoded during downlink processing to obtain compressed Nbit soft bit data; and performing subsequent soft bit decoding processing on the compressed Nbit soft bit data, wherein M and N are positive integers and M>N.
  • Preferably, the Mbit soft bit data to be decoded is compressed before the physical channel demapping processing to obtain the compressed Nbit soft bit data.
  • in this case, the subsequent soft bit decoding processing includes: sequentially performing physical channel demapping, second de-interleaving, first de-interleaving, and decoding on the compressed Nbit soft bit data.
  • Preferably, the Mbit soft bit data to be decoded is compressed after the physical channel demapping processing to obtain the compressed Nbit soft bit data.
  • in this case, the subsequent soft bit decoding processing includes: storing the compressed Nbit soft bit data in a buffer; decompressing the Nbit soft bit data read from the buffer to obtain Mbit soft bit data; and performing subframe splicing on the Mbit soft bit data.
  • the subsequent soft bit decoding processing further includes: compressing the Mbit soft bit data obtained after the subframe splicing to obtain compressed Nbit soft bit data; storing the compressed Nbit soft bit data in a buffer; decompressing the Nbit soft bit data read from the buffer to obtain Mbit soft bit data; and performing the second de-interleaving on the Mbit soft bit data.
  • the subsequent soft bit decoding processing further includes: compressing the Mbit soft bit data obtained after the second de-interleaving to obtain compressed Nbit soft bit data; storing the compressed Nbit soft bit data in a buffer; decompressing the Nbit soft bit data read from the buffer to obtain Mbit soft bit data; and performing the first de-interleaving on the Mbit soft bit data.
  • the step of compressing comprises: converting, by mapping, the Mbit uniformly quantized soft bit data to obtain Nbit non-uniformly quantized soft bit data.
  • the step of decompressing comprises: converting, by inverse mapping, the Nbit non-uniformly quantized soft bit data read from the buffer to obtain Mbit uniformly quantized soft bit data.
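The text does not specify the actual mapping, so the following sketch illustrates the idea with an assumed companding table: 8-bit uniformly quantized soft bits (M = 8) are mapped to 4-bit non-uniformly quantized codes (N = 4) with fine steps near zero and coarse steps for large magnitudes, and the inverse mapping expands each 4-bit code back to an 8-bit value. All thresholds, levels and function names are illustrative assumptions.

```c
#include <stdint.h>
#include <stdlib.h>

/* Illustrative compression of 8-bit uniform soft bits to 4-bit non-uniform codes.
 * Magnitude thresholds grow roughly exponentially: fine quantization for small
 * inputs (where decoding is sensitive), coarse quantization for large inputs.   */
static const int16_t kThreshold[7] = { 2, 5, 10, 20, 40, 70, 110 };    /* assumed */
static const int16_t kLevel[8]     = { 1, 3, 7, 15, 30, 55, 90, 120 }; /* assumed */

/* Map one 8-bit uniformly quantized soft bit (-128..127) to a 4-bit code
 * (sign bit + 3-bit magnitude index).                                          */
uint8_t softbit_compress(int8_t s8)
{
    uint8_t sign = (s8 < 0) ? 1u : 0u;
    int16_t mag  = (int16_t)abs((int)s8);
    uint8_t idx  = 7;                         /* default: largest magnitude bin  */
    for (uint8_t i = 0; i < 7; ++i) {
        if (mag < kThreshold[i]) { idx = i; break; }
    }
    return (uint8_t)((sign << 3) | idx);      /* 4 bits total                    */
}

/* Inverse mapping: expand a 4-bit code back to an 8-bit uniform soft bit. */
int8_t softbit_decompress(uint8_t s4)
{
    int16_t mag = kLevel[s4 & 0x7u];
    return (int8_t)((s4 & 0x8u) ? -mag : mag);
}
```

In practice the thresholds would be derived from the statistics of the demodulated soft bits (or from the quantization-SNR analysis discussed later) rather than taken from a fixed table.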
  • a decoding apparatus including the above soft bit data buffer processing unit, configured to buffer, after the physical channel demapping processing, the soft bit data obtained by the soft decision processing.
  • a soft bit decoding apparatus of a wireless receiving device, including: a compression module configured to compress the Mbit soft bit data to be decoded during downlink processing to obtain compressed Nbit soft bit data; and a decoding module configured to perform subsequent soft bit decoding processing on the compressed Nbit soft bit data; wherein M and N are positive integers and M>N.
  • FIG. 1 is a schematic structural diagram of a convolutional encoder in the TD protocol provided by the prior art;
  • FIG. 2 is a schematic structural diagram of a Turbo code decoder provided by the prior art;
  • FIG. 3 is a flowchart of the GDTR downlink decoding of TD-SCDMA provided by the prior art;
  • FIG. 4 is a flowchart of a soft bit decoding method of a wireless receiving device according to an embodiment of the present invention;
  • FIG. 5 is a schematic diagram of the cache processing structure of a wireless receiving device in the soft bit decoding process according to an embodiment of the present invention;
  • FIG. 6 is a schematic diagram comparing the quantization signal-to-noise ratios of linear domain decoding and transform domain decoding according to an embodiment of the present invention;
  • FIG. 7 is a schematic diagram comparing the performance of 8-bit linear domain decoding and 4-bit transform domain decoding under an additive white Gaussian noise (AWGN) channel according to an embodiment of the present invention;
  • FIG. 8 is a schematic diagram comparing the performance of 8-bit linear domain decoding and 4-bit transform domain decoding under the fading channel Case1 according to an embodiment of the present invention;
  • FIG. 9 is a further schematic diagram comparing the performance of 8-bit linear domain decoding and 4-bit transform domain decoding under a fading channel according to an embodiment of the present invention.
  • Step 401: During downlink processing, the Mbit soft bit data to be decoded is compressed to obtain compressed Nbit soft bit data, where M and N are positive integers and M>N.
  • Step 402: Subsequent soft bit decoding processing is performed on the compressed Nbit soft bit data.
  • When step 401 is performed before the physical channel demapping processing, the subsequent soft bit decoding processing in step 402 includes: sequentially performing physical channel demapping, second de-interleaving, first de-interleaving, and decoding on the compressed Nbit soft bit data. That is, after the wireless receiving device performs descrambling and despreading, it obtains the Mbit soft bit data to be decoded, and the embodiment of the present invention compresses the Mbit soft bit data to obtain compressed Nbit soft bit data.
  • The subsequent processing, including physical channel demapping, subframe splicing, second de-interleaving, physical channel splicing, de-bit scrambling, physical channel demultiplexing, reverse rate matching, wireless frame splicing, first de-interleaving, and wireless frame reverse equalization, all uses the compressed Nbit soft bit data; that is, fully compression-transformed soft bit decoding is realized.
  • When step 401 is performed after the physical channel demapping processing, the subsequent soft bit decoding processing in step 402 includes: storing the compressed Nbit soft bit data in a buffer, decompressing the Nbit soft bit data read from the buffer to obtain Mbit soft bit data, and performing subframe splicing on the Mbit soft bit data. Then the Mbit soft bit data obtained after the subframe splicing is compressed to obtain compressed Nbit soft bit data, the compressed Nbit soft bit data is stored in a buffer, the Nbit soft bit data read from the buffer is decompressed to obtain Mbit soft bit data, and the second de-interleaving is performed on the Mbit soft bit data.
  • Finally, the Mbit soft bit data obtained after the second de-interleaving is compressed to obtain compressed Nbit soft bit data, the compressed Nbit soft bit data is stored in the buffer, the Nbit soft bit data read from the buffer is decompressed to obtain Mbit soft bit data, and the first de-interleaving is performed on the Mbit soft bit data.
  • That is to say, it is only necessary to perform Mbit-to-Nbit compression and Nbit-to-Mbit decompression of the data to be buffered after physical channel demapping, before the second de-interleaving, and before the first de-interleaving, which yields a simplified soft bit decoding process.
  • The fully compression-transformed soft bit decoding process requires the entire soft bit decoding chain to be modified from Mbit to Nbit, which involves a large amount of work, whereas the simplified soft bit decoding process involves very little. Because the buffers required for physical channel demapping, the second de-interleaving, and the first de-interleaving occupy a large chip area, compressing from Mbit to Nbit at these points can significantly reduce the chip area.
  • the compression processing includes: converting, by mapping, the Mbit uniformly quantized soft bit data to obtain Nbit non-uniformly quantized soft bit data.
  • the decompression processing includes: converting, by inverse mapping, the Nbit non-uniformly quantized soft bit data read from the buffer to obtain Mbit uniformly quantized soft bit data.
  • the above cache is a random access memory RAM.
  • the embodiments of the present invention are applicable to soft bit decoding of a wireless receiving device, and are widely applicable to a decoding process of a convolutional code and a turbo code.
  • the embodiment of the present invention further provides a soft bit decoding apparatus of a wireless receiving device, including: a compression module configured to compress the Mbit soft bit data to be decoded during downlink processing to obtain compressed Nbit soft bit data, where M and N are positive integers and M>N; and a decoding module configured to perform subsequent soft bit decoding processing on the compressed Nbit soft bit data.
  • When fully compression-transformed soft bit decoding is used, the subsequent soft bit decoding processing, including physical channel demapping, subframe splicing, second de-interleaving, physical channel splicing, de-bit scrambling, physical channel demultiplexing, reverse rate matching, wireless frame splicing, first de-interleaving, and wireless frame reverse equalization, is performed on the compressed Nbit soft bit data.
  • When simplified compression-transformed soft bit decoding is used, the subsequent soft bit decoding processing includes: performing Mbit-to-Nbit compression on the data to be buffered after physical channel demapping, before the second de-interleaving, and before the first de-interleaving; during the actual processing, Nbit-to-Mbit decompression is performed first, and the Mbit soft bit data is then used for the corresponding processing.
  • FIG. 5 is a schematic diagram of the cache processing structure of a wireless receiving device in the soft bit decoding process according to an embodiment of the present invention. As shown in FIG. 5, it includes: a compression module Comp configured to compress, during downlink processing, the Mbit soft bit data to be stored, obtaining compressed Nbit soft bit data.
  • The Comp converts the Mbit uniformly quantized soft bit data by mapping into Nbit non-uniformly quantized soft bit data, thereby implementing the compression.
  • a cache RAM configured to store the compressed Nbit soft bit data; and a decompression module De-Comp configured to decompress the Nbit soft bit data read from the cache to obtain Mbit soft bit data.
  • The De-Comp converts the Nbit non-uniformly quantized soft bit data read from the cache by inverse mapping into Mbit uniformly quantized soft bit data, thereby implementing the decompression.
  • M and N are positive integers and M>N.
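A minimal sketch of how the Comp / RAM / De-Comp arrangement of FIG. 5 could be wrapped around a buffer, reusing the illustrative softbit_compress / softbit_decompress mapping sketched earlier (all names are assumptions): only packed Nbit codes are ever stored, and Mbit values exist only transiently on either side of the RAM.

```c
#include <stddef.h>
#include <stdint.h>

/* Assumed to be the illustrative mapping functions sketched earlier. */
uint8_t softbit_compress(int8_t s8);
int8_t  softbit_decompress(uint8_t s4);

/* Write M-bit (8-bit) soft bits into the buffer as packed N-bit (4-bit) codes,
 * so the RAM only ever holds the compressed representation.                   */
static void softbit_ram_write(uint8_t *ram, const int8_t *s8, size_t n)
{
    for (size_t i = 0; i < n; ++i) {
        uint8_t code = softbit_compress(s8[i]) & 0x0Fu;
        if (i & 1u) ram[i / 2] |= (uint8_t)(code << 4);   /* high nibble */
        else        ram[i / 2]  = code;                   /* low nibble  */
    }
}

/* Read packed N-bit codes back and expand them to M-bit soft bits just before
 * subframe splicing / de-interleaving / decoding needs them.                   */
static void softbit_ram_read(const uint8_t *ram, int8_t *s8, size_t n)
{
    for (size_t i = 0; i < n; ++i) {
        uint8_t code = (i & 1u) ? (uint8_t)(ram[i / 2] >> 4)
                                : (uint8_t)(ram[i / 2] & 0x0Fu);
        s8[i] = softbit_decompress(code);
    }
}
```

With M = 8 and N = 4 the RAM needed for a given number of soft bits is halved, which is the source of the chip-area saving discussed below.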
  • the DTR process is a general term for the downlink transport processing; for a TD-SCDMA system it includes two branches:
  • GDTR (General Download Transport Processing)
  • HDTR (HSDPA Download Transport Processing)
  • the main purpose of the DTR process is to process the soft demodulated symbol data, decode and demultiplex the data, and pass it to the upper layer.
  • the DTR process is closely related to the protocol standard and is the inverse of the physical layer coding and multiplexing.
  • Channel coding implements forward error correction by convolutional codes and Turbo codes, so that error correction capability close to the Shannon limit can be obtained under the AWGN channel.
  • Under a fading channel, however, deep fades and burst interference may cause concentrated errors over a period of time; in that case the error correction code alone is powerless, and the solution is interleaving and retransmission.
  • Interleaving and de-interleaving are commonly used as methods of combating fading in the process of channel coding and decoding of ordinary services.
  • In HSDPA services, retransmission is additionally used as a method of combating fading.
  • In the DTR link, de-interleaving and retransmission combining are therefore important steps, and the problem they bring is that a large storage space is required for the soft demodulated data.
  • Taking de-interleaving as an example: if the Transmission Time Interval (TTI) is 20 ms, the data of 4 subframes of the service must be stored, and if the TTI is 40 ms, the data of 8 subframes must be stored; the subsequent decoding can only be performed after de-interleaving.
  • This storage amount increases with the number of quantization bits used to save the soft bit data. For example, if one subframe requires 2,000 stored data values, then for a 40 ms TTI 16,000 values must be stored in advance: at 8 bits per value this requires 128,000 bits of storage, whereas at 4 bits per value the required storage is halved to 64,000 bits. The number of bits used for data storage in the DTR link is determined by the input soft bit width required by the Viterbi decoder and the turbo decoder.
  • Considering turbo soft bit decoding, the information bits are discriminated by the log-likelihood ratio, which can be written as L2(u_k) = Lc·x_k + Λe2(u_k) + Λa2(u_k),
  • where L2(u_k) represents the output of the turbo decoder and x_k represents the uniformly quantized soft bit at the input;
  • Λe2(u_k) represents the "net" extrinsic information produced by this decoder after decoding, which is passed to the first sub-decoder as its a priori information Λa1;
  • Λa2(u_k) represents the a priori information at the input of the current decoder,
  • i.e. the log-likelihood ratio derived from the extrinsic output of the other sub-decoder.
  • The decision output of the turbo decoder is linear in the input x_k: when the input soft bit is large, L2(u_k) is large and there is more redundancy, so a certain amount of quantization error does not easily cause a misjudgment.
  • When L2(u_k) is small, even a small quantization error may cause a misjudgment. Therefore, the demodulated soft bits can be processed with non-uniform quantization: coarser quantization for larger inputs and finer quantization for smaller inputs. That is, the DTR process can be shifted from the uniformly quantized linear domain to the non-uniformly quantized transform domain.
  • the above behaviour can also be explained in terms of the quantization signal-to-noise ratio.
  • As shown in Fig. 6, when the soft bit data is uniformly quantized, quantization noise is in effect added.
  • For a given quantization step, the quantization noise is fixed.
  • When the value of the signal soft bit is large, the signal-to-noise ratio (SNR) between the signal soft bit and the quantization noise is relatively large.
  • When the value of the signal soft bit is small, the SNR between the signal soft bit and the quantization noise is relatively small.
  • When the soft bit values are large the quantization SNR always satisfies the requirement, but when the soft bit values are small the quantization SNR drops sharply, even below 0 dB, thereby causing loss of soft bit information.
  • Line 2 is the signal-to-noise ratio curve of 4-bit uniform quantization. When the input value is small, the quantization SNR is lower than the operating point of HSDPA, resulting in a large quantization error that affects the accuracy of the final decoding.
  • Line 3 is the signal-to-noise ratio curve of 4-bit transform domain decoding. When the absolute value of the input is large, a larger quantization interval is used; the quantization SNR is then close to that of 4-bit uniform quantization, but since it remains above the HSDPA operating point it does not affect the decoding performance. When the absolute value of the input is small, a smaller quantization interval is used, and the quantization SNR is then close to that of 8-bit uniform quantization.
  • Therefore, the bit width of the soft bit demodulation link can be significantly reduced without substantial loss of decoding performance.
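The quantization-SNR behaviour described above can be reproduced with a small numerical experiment. The sketch below estimates the quantization SNR versus input magnitude for 8-bit uniform, 4-bit uniform and 4-bit non-uniform quantization; the quantizer ranges and the non-uniform level set are illustrative assumptions, not values taken from this text.

```c
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

/* Mid-rise uniform quantizer with 'bits' bits over the range [-1, 1]. */
static double quant_uniform(double x, int bits)
{
    double step = 2.0 / (double)(1 << bits);
    double q = (floor(x / step) + 0.5) * step;
    if (q >  1.0 - step / 2) q =  1.0 - step / 2;   /* clip to the outermost level */
    if (q < -1.0 + step / 2) q = -1.0 + step / 2;
    return q;
}

/* Illustrative 4-bit non-uniform quantizer: 8 magnitude levels spaced roughly
 * exponentially, plus a sign bit (assumed levels, not taken from this text).   */
static double quant_nonuniform4(double x)
{
    static const double lvl[8] = { 0.01, 0.03, 0.06, 0.12, 0.25, 0.45, 0.70, 1.00 };
    double a = fabs(x), best = lvl[0];
    for (int i = 1; i < 8; ++i)
        if (fabs(a - lvl[i]) < fabs(a - best)) best = lvl[i];
    return (x < 0) ? -best : best;
}

int main(void)
{
    srand(1);
    printf("  |x|   uni8(dB)  uni4(dB)  nonuni4(dB)\n");
    for (double a = 0.01; a <= 1.0; a *= 2.0) {
        double sig = 0, e8 = 0, e4 = 0, en = 0;
        for (int i = 0; i < 100000; ++i) {
            double x = a * (0.9 + 0.2 * rand() / (double)RAND_MAX); /* spread around a */
            if (rand() & 1) x = -x;
            sig += x * x;
            e8 += pow(x - quant_uniform(x, 8), 2);
            e4 += pow(x - quant_uniform(x, 4), 2);
            en += pow(x - quant_nonuniform4(x), 2);
        }
        printf("%6.3f   %7.1f   %7.1f   %9.1f\n", a,
               10 * log10(sig / e8), 10 * log10(sig / e4), 10 * log10(sig / en));
    }
    return 0;
}
```

For small magnitudes the 4-bit non-uniform quantizer stays close to the 8-bit uniform curve, while the 4-bit uniform quantizer collapses, which mirrors the qualitative shape of Lines 2 and 3 in FIG. 6.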
  • Thus an Nbit transform domain decoder can be used instead of the Mbit uniform quantization decoder, which is equivalent to modifying the Mbit decoding link in the DTR to an Nbit decoding link.
  • That is, for Viterbi decoding and turbo decoding, the input information is no longer Mbit uniformly quantized soft bit data but the Nbit non-uniformly quantized soft bits of the transform domain.
  • Decoding is then performed on this transform domain data.
  • The output of the decoder is decided, and the final 0/1 bits are obtained. If the modification were made on the basis of the original Mbit link, modifying the entire link could take a long time. The workaround is to keep each processing module of the Mbit decoding link unchanged and to compress only the large RAMs that dominate the area of the DTR link.
  • the embodiments of the present invention are applicable to the GDTR and HDTR downlinks of the TD-SCDMA system. Since WCDMA has a GDTR and HDTR downlink procedure similar to TD-SCDMA, this scheme is also applicable to the WCDMA system.
  • The core of the invention is that, in the channel decoding process of the wireless receiving device, the uniformly quantized soft bits are converted from the Mbit linear domain to the Nbit transform domain, where M and N are configurable and M is always greater than N. In this way, a significant reduction in chip cost is achieved with only a small performance degradation (no more than 0.2 dB).
  • the demapped soft bits use 8-bit uniformly quantized soft bit data.
  • 4 bit transform domain decoding and 8 bit linear decoding are respectively used to perform performance simulation.
  • the simulation results show that the performance of the two is basically equivalent, and the performance degradation is not more than 0.2 dB. It is shown that it is feasible to use a 4-bit transform domain decoding scheme instead of an 8-bit linear decoding scheme.
  • the following is a comparison of the performance of the usual 8bit and 4bit schemes.
  • the RAM occupies a large area, and the RAM in the downlink GDTR link and the downlink HDTR link usually occupies more than 80% of the chip RAM area.
  • Using the 4-bit transform domain decoding scheme, the RAM area in the DTR link can be reduced by half, which is equivalent to a reduction of about 40% of the RAM area in the baseband chip. This means that, using the transform domain decoding scheme, the chip area can be greatly reduced with essentially no loss of performance, thereby effectively saving the cost of the baseband chip.
  • A simplified implementation of transform domain decoding is described below, taking the transform domain decoding of the GDTR link of TD-SCDMA as an example; for the specific flow, reference may be made to FIG. 3. For the GDTR link of TD-SCDMA, the complete transform domain decoding method converts the received data symbols into non-uniform transform domain soft bits right after soft demodulation, and the entire GDTR link then performs channel decoding on the Nbit transform domain data, finally decoding the 0 and 1 bits.
  • To reduce the link modification, a simplified implementation can be used: the DTR link structure is not greatly modified and Mbit linear decoding is still used, but only the larger RAMs in the DTR link are wrapped with compression and decompression. In this way, modifications to the link are minimized.
  • Before Mbit soft bit data is stored into such a RAM, it is mapped once, so that the Mbit soft bit data becomes Nbit soft bit data, and the Nbit data is what is stored in the RAM.
  • When the data is read out, the Nbit soft bit data in the RAM is mapped back into Mbit soft bit data by an inverse mapping.
  • Step 1. Physical channel demapping. Physical channel demapping is the inverse process of the physical layer mapping: by applying the physical channel mapping rule in reverse, the user's subframe data is recovered and the chip-level operation is converted into a bit-level operation.
  • The equalized received data is soft-decided and quantized into appropriate soft bit data for the subsequent Viterbi soft bit decoding and turbo soft bit decoding.
  • The soft bit data after the soft decision is stored in the frame RAM and can be compressed and decompressed in the manner described above.
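A minimal sketch of the soft-decision step of Step 1: the equalized soft value is scaled, clipped and uniformly quantized to an 8-bit soft bit (M = 8 assumed), which would then be compressed before being written to the frame RAM. The scale factor is an assumption and would normally come from an amplitude or SNR estimate.

```c
#include <stdint.h>
#include <math.h>

/* Quantize one equalized soft value to an 8-bit soft bit (M = 8, assumed).
 * 'scale' maps the expected soft-value range onto the quantizer range and
 * would normally be derived from an estimate of the signal amplitude/SNR.   */
static int8_t soft_decision_q8(double soft_value, double scale)
{
    double v = soft_value * scale;
    if (v >  127.0) v =  127.0;          /* clip to the 8-bit range */
    if (v < -128.0) v = -128.0;
    return (int8_t)lround(v);
}
```

The resulting 8-bit value is what the softbit_compress mapping sketched earlier would reduce to 4 bits before storage in the frame RAM.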
  • Step 2 Sub-frame splicing
  • a subframe splicing unit needs to be added between the second de-interleaving unit and the physical channel demapping unit to splice the two subframes.
  • Sub-frame splicing only changes the order in which the input soft bits are stored, and does not change the value of the input soft bits.
  • Step 3. Second de-interleaving. The main purpose of the second interleaving is to turn burst errors into random errors, so that the decoder can perform error correction and resist the influence of fast fading.
  • The purpose of the second de-interleaving is to restore the second-interleaved data to its original order. The data needs to be stored in the second de-interleaving RAM before the second de-interleaving is performed, and this data can likewise be compressed and decompressed in the manner described above.
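Step 3 can be illustrated with a generic block (row/column) interleaver and its inverse. The actual second interleaver of the standard additionally applies a fixed column permutation, which is omitted here; the sketch only shows why the whole block must sit in RAM before de-interleaving can begin.

```c
#include <stddef.h>
#include <stdint.h>

/* Generic block interleaver: write row by row, read column by column.
 * For this simplified sketch the block length must equal rows * cols.   */
static void block_interleave(const int8_t *in, int8_t *out,
                             size_t rows, size_t cols)
{
    for (size_t c = 0; c < cols; ++c)
        for (size_t r = 0; r < rows; ++r)
            *out++ = in[r * cols + c];
}

/* De-interleaver: the exact inverse permutation. Because the last input soft
 * bit can land anywhere in the block, the full block must be buffered before
 * the original order can be restored.                                        */
static void block_deinterleave(const int8_t *in, int8_t *out,
                               size_t rows, size_t cols)
{
    for (size_t c = 0; c < cols; ++c)
        for (size_t r = 0; r < rows; ++r)
            out[r * cols + c] = *in++;
}
```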
  • Step 4 Physical channel splicing
  • When the transmitting end uses multiple physical channels to transmit one coded composite transport channel (CCTrCH), the receiving end needs to perform physical channel splicing, that is, the data of the several physical channels belonging to the same CCTrCH is serially spliced together.
  • the physical channel splicing only changes the storage location of the input soft bits, and does not change the value of the input soft bits.
  • Step 5 De-bit scrambling
  • De-bit scrambling is the inverse of the bit scrambling process; bit scrambling is the modulo-2 sum of the input sequence with the scrambling code sequence p_k. De-bit scrambling only changes the order in which the input soft bits are stored, and does not change the value of the input soft bits.
  • Step 6. Transport channel demultiplexing. The 10 ms radio frame data of each transport channel multiplexed on one CCTrCH is separated out and sent to the corresponding transport channel. The demultiplexing of the transport channels only changes the order in which the input soft bits are stored, and does not change the value of the input soft bits.
  • Step 7. Reverse rate matching. Reverse rate matching restores the data after the rate matching processing to the data before the rate matching.
  • Reverse rate matching only changes the order in which the input soft bits are stored, and does not change the value of the input soft bits.
  • Step 8. Wireless frame splicing. When the transmission time interval (TTI) of the transport channel is greater than 10 ms, the 10 ms radio frames mapped to the transport channel by the reverse rate matching module are spliced into one frame in the wireless frame splicing module. The wireless frame splicing only changes the storage order of the input soft bits, and does not change the value of the input soft bits.
  • Step 9 The first deinterleaving restores the first interleaved data to the original order.
  • the first deinterleaving only changes the order in which the input soft bits are stored, and does not change the value of the input soft bits.
  • The data needs to be stored in the first de-interleaving RAM before de-interleaving, and the data in the first de-interleaving RAM needs to be compressed and decompressed in the foregoing manner. That is, before the first de-interleaving, the soft bit data is compressed and stored in the first de-interleaving RAM, and when the first de-interleaving is to be performed, the data read from the first de-interleaving RAM is decompressed.
  • Step 10 Radio frame reverse equalization
  • the radio frame reverse equalization process is the inverse process of the radio frame equalization process, and the reverse radio frame equalization rule can be used to recover the user data before the radio frame equalization.
  • Radio frame reverse equalization only changes the order in which the input soft bits are stored, and does not change the value of the input soft bits.
  • Step 11. Channel decoding
  • Commonly used channel codes mainly include convolutional codes and turbo codes.
  • Channel decoding mainly uses the Viterbi algorithm and the MAX-LOG-MAP algorithm. After the compression and decompression processing in steps 1, 3 and 9, the Viterbi decoding process and the turbo decoding process still use the usual decoding algorithms, and the input information is still 8-bit soft bit data. After multiple iterations, the output of the decoder is decided, and the final 0 and 1 bits are obtained.
  • the embodiment of the present invention can greatly reduce the area of the wireless receiving chip under the condition that the performance is not substantially affected, thereby effectively saving the cost of the baseband chip.
  • Although the invention has been described in detail above, the invention is not limited thereto, and various modifications may be made by those skilled in the art in accordance with the principles of the invention. Therefore, modifications made in accordance with the principles of the invention should be construed as falling within the scope of the invention.
  • INDUSTRIAL APPLICABILITY As described above, the soft bit decoding method and apparatus for a wireless receiving device provided by the embodiments of the present invention have the following beneficial effect: the wireless receiving chip area can be greatly reduced without substantially affecting performance, thereby effectively saving the cost of the baseband chip.

Abstract

A soft bit decoding method and apparatus for a radio receiving device. The method comprises: during downlink processing, compressing to-be-decoded Mbit soft bit data to obtain compressed Nbit soft bit data; and performing subsequent soft bit decoding processing on the compressed Nbit soft bit data, M and N being positive integers and M being greater than N. By means of the method and the apparatus, during soft bit decoding in a radio receiving device, at the cost of only a slight performance reduction, the amount of buffered soft bit data is reduced, so that the size of the cache is reduced and the chip area of the radio receiving device is significantly reduced.

Description

一种无线接收设备的软比特译码方法及装置 技术领域 本发明涉及适用于无线接收设备的软比特译码技术, 特别涉及一种无线接收设备 的软比特译码方法及相关的装置。 背景技术 在数字通信系统中, 一般误码是不可避免的, 而对于通信条件恶劣的无线通信系 统, 误码情况尤其严重, 而减少误码、 提高通信系统的可靠性一直是通信系统设计所 追求的主要目标之一。 信道编码是为了提高通信可靠性而发展起来的一种差错控制技术, 主要包括检错 码和纠错码两大类。 在信道编码中常用的是纠错码, 纠错码又可以分为两类: 分组码 和卷积码。 分组码是无记忆的纠错码, 在接收端仅能使用当前的 n个输入对发送的信息比特 进行纠错, 常记作 (n, k)码。其中 n是一个码字的码元数, 即码字长, k是信息码元数, n-k是监督码元数。 卷积码是有记忆的纠错码, 在接收端不仅能使用当前的 n个输入, 还能使用过去 接收到的 m组 n个比特对当前发送的信息比特进行推测, 记作 (n, k, m)码。 线性分组码只是使用当前码字信息进行纠错, 而卷积码可以同时使用前面多个码 字信息进行纠错, 因而, 卷积码的纠错能力要强得多。在环境恶劣的无线接收系统中, 主要使用卷积码进行信道编码。 常用的卷积码又包括两种, 一种是通常意义上的卷积码, 另一种是 turbo码, 所 述 turbo码是对普通卷积码的改进, 即同时使用了两个卷积码, 其中一个卷积码的输 入是另一个卷积码输入的交织。  TECHNICAL FIELD The present invention relates to a soft bit decoding technology suitable for a wireless receiving device, and more particularly to a soft bit decoding method and related device for a wireless receiving device. BACKGROUND OF THE INVENTION In a digital communication system, a general error is unavoidable, and for a wireless communication system with poor communication conditions, the error condition is particularly serious, and reducing the bit error and improving the reliability of the communication system have always been pursued by the communication system design. One of the main goals. Channel coding is an error control technique developed to improve communication reliability, and mainly includes two types of error detection codes and error correction codes. Error correction codes are commonly used in channel coding, and error correction codes can be further divided into two categories: block codes and convolutional codes. The block code is a memoryless error correction code. At the receiving end, only the current n inputs can be used to correct the transmitted information bits, which are often recorded as (n, k) codes. Where n is the number of symbols of a codeword, that is, the codeword length, k is the number of information symbols, and n-k is the number of supervised symbols. The convolutional code is a memory error correction code. At the receiving end, not only the current n inputs but also the m bits and n bits received in the past can be used to estimate the currently transmitted information bits, which is denoted as (n, k). , m) code. The linear block code only uses the current code word information for error correction, and the convolutional code can simultaneously use the previous multiple code word information for error correction, and thus, the error correction capability of the convolutional code is much stronger. In a harsh wireless receiving system, a convolutional code is mainly used for channel coding. Commonly used convolutional codes include two types, one is a convolutional code in the usual sense, and the other is a turbo code, which is an improvement on a common convolutional code, that is, two convolutional codes are simultaneously used. The input of one of the convolutional codes is the interleaving of another convolutional code input.
TD协议中的卷积码编码器如图 1所示, 约束长度为 9。 在开始编码前, 编码器的 8个移位寄存器 D的初值设为全 0,并在输入比特的末尾添加 8个比特 0。进行编码时, 每个节拍进行 "先模二加, 再寄存器移位 "的操作。 按照 3G协议的规定, 码块分段的 结果, 卷积编码器一次编码的最大输入数据量为 504比特。 维特比 viterbi译码器的作 用是完成对该 1/2卷积码和 1/3卷积码的译码。 Viterbi译码算法是由 Viterbi于 1967年提出的一种最大似然译码方法, 即译码器 选择的输出总是使接收序列条件概率最大的码字。 根据最大似然译码原理, 在所有可 能的路径中求取与接收序列最相似的一条(距离最小的一条),进行路径回溯获得判决 输出, 该方法已被证明具有最佳纠错译码性能。 Viterbi译码算法主要由路径度量的 "加 比选"运算、 累积度量的更新、 最大似然路径的回溯等过程组成。 The convolutional code encoder in the TD protocol is shown in Figure 1, with a constraint length of 9. Before starting the encoding, the initial values of the eight shift registers D of the encoder are set to all zeros, and eight bits 0 are added at the end of the input bits. When encoding, each beat performs the operation of "first mode plus two, then register shift". According to the 3G protocol, as a result of the code block segmentation, the maximum input data amount encoded by the convolutional encoder at one time is 504 bits. The function of the Viterbi Viterbi decoder is to perform decoding of the 1/2 convolutional code and the 1/3 convolutional code. The Viterbi decoding algorithm is a maximum likelihood decoding method proposed by Viterbi in 1967, that is, the output selected by the decoder always makes the codeword with the highest conditional probability of the received sequence. According to the principle of maximum likelihood decoding, one of the most suitable paths (the one with the smallest distance) is obtained in all possible paths, and the path backtracking is used to obtain the decision output. This method has been proved to have the best error correction decoding performance. . The Viterbi decoding algorithm is mainly composed of the "plus-selection" operation of the path metric, the update of the cumulative metric, and the backtracking of the maximum likelihood path.
Turbo码译码器如图 2所示, 输入信息序列 X YP1和 ΥΡ2分别是编码端输出序列 X、 ΧΡ1和 ΧΡ2加入信道噪声后形成的。译码器 1的输入为 ^^ 1), 译码器 2的输 入为 K2 = (X YP2)。 Turbo码的最佳译码策略是计算后验概率?(¾ | ^^2),¾ = 0,1., 计算 复杂度高。 实际采用的方案是由成员译码器分别计算 P^ l ^ )和 P(uk | K2,i ), 得到 译码复杂度可以接受的次优策略。 通过循环迭代, 使他们收敛于?(¾ | ]^ 2)。 这正是 迭代译码的基本思想: 将复杂长码的解码过程分为几个步骤, 并且保证译码步骤之间 的概率 (软信息) 传递几乎不导致信息的损失。 对 Turbo码而言, 主要有两种译码方案: 一种是最大后验概率 (MAP)系列, 包括 MAP算法、 Log-MAP算法和 Max-Log-MAP算法;另一类是软输出维特比算法 SOVA。 MAP 是最优的译码算法, 但其缺点是运算复杂度高、 需要较大的存储空间;The turbo code decoder is shown in Fig. 2. The input information sequences X YP 1 and ΥΡ 2 are formed by adding the channel noise to the encoder output sequences X, Χ Ρ 1 and Χ Ρ 2 , respectively. The input of decoder 1 is ^^ 1 ), and the input of decoder 2 is K 2 = (X YP 2 ). The best decoding strategy for Turbo codes is to calculate the posterior probability? ( 3⁄4 | ^^ 2 ), 3⁄4 = 0,1., high computational complexity. The practical scheme is to calculate P^ l ^ ) and P(u k | K 2 , i ) by the member decoder respectively, and obtain a suboptimal strategy that the decoding complexity is acceptable. By looping iterations, let them converge? ( 3⁄4 | ]^ 2 ). This is the basic idea of iterative decoding: The decoding process of complex long codes is divided into several steps, and the probability (soft information) transfer between decoding steps is guaranteed to cause little loss of information. For Turbo codes, there are two main decoding schemes: one is the maximum a posteriori probability (MAP) series, including the MAP algorithm, the Log-MAP algorithm and the Max-Log-MAP algorithm; the other is the soft output Viterbi. Algorithm SOVA. MAP is the optimal decoding algorithm, but its disadvantage is that it has high computational complexity and requires a large storage space.
Log-MAP算法与 MAP算法的性能较接近, 是次优的译码算法, 由于将运算转移到对 数域, 将相乘运算变为相加运算, 从而大大降低了运算复杂度; Max-Log-MAP算法忽 略了 Log-MAP 算法似然值加法表达式中的对数分量, 把似然值相加变为求最大值运 算, 进一步降低了计算复杂度, 性能比 MAP低 0.3〜0.5dB; SOVA的译码性能最差, 它与 MAP算法相差 0.5〜ldB左右, 并且随着信噪比的增大, 差值没有降低的趋势, 但其运算复杂度较低, 有利于硬件的实现。 Turbo译码算法的选择要考虑性能与复杂 度的平衡, 通常选用 Max-Log-MAP算法。 对于 viterbi译码和 turbo译码, 有硬判决和软判决两种方法。 硬判决的性能和软 判决相比, 在性能上有 2〜3dB 的下降, 显然以软判决方案为优。 但软判决方案受限 于硬件的量化比特, 量化比特越宽, 则用于保存软判决比特所需要的 RAM面积越大。 以 TD-SCDMA为例, 其下行链路的通用解码流程如附图 3所示, 包括物理信道 去映射、 子帧拼接、 第二次解交织、 物理信道拼接、 去比特加扰、 物理信道去复用、 反向速率匹配、 无线帧拼接、 第一次解交织、 无线帧反向均衡、 信道解码、 编码块拼 接 /传输块分段、 CRC 检测等过程。 由于第二次解交织和第一次解交织的存在, TD-SCDMA的 UE终端必须对一个 ΤΉ周期和一个无线帧的解映射的数据进行存储, 这意味着 viterbi译码和 turbo译码使用的软比特的比特越宽, 需要的缓存越大, 因此 无线接收设备的芯片面积也越大。 发明内容 本发明实施例在于提供一种无线接收设备的软比特译码方法及装置, 能更好地解 决在性能基本无损失的情况下减小无线接收设备的芯片面积的问题。 根据本发明实施例的一个方面, 提供了一种无线接收设备的软比特译码方法, 包 括: 在下行链路处理期间, 将待译码的 Mbit的软比特数据进行压缩处理, 得到压缩的 Nbit的软比特数据; 对所述压缩的 bit的软比特数据进行后续的软比特译码处理; 其中, M和 N是正整数, 且 M>N。 优选地, 在物理信道去映射处理前, 对所述待译码的 Mbit的软比特数据进行压缩 处理, 得到压缩的 bit的软比特数据。 此时, 所述后续的软比特译码处理的步骤包括: 对所述压缩的 bit的软比特数据依次进行物理信道去映射处理、第二次解交织处 理、 第一次解交织处理、 译码处理。 优选地, 在物理信道去映射处理后, 对所述待译码的 Mbit的软比特数据进行压缩 处理, 得到压缩的 bit的软比特数据。 此时, 所述后续的软比特译码处理的步骤包括: 将所述压缩的 bit的软比特数据存储至缓存; 将从缓存中读取的所述 bit的软比特数据进行解压缩处理, 得到 Mbit的软比特 数据, 并对所述 Mbit的软比特数据进行子帧拼接处理。 所述后续的软比特译码处理的步骤还包括: 对所述子帧拼接处理后得到的 Mbit的软比特数据进行压缩处理,得到压缩的 bit 的软比特数据; 将所述压缩的 bit的软比特数据存储至缓存; 将从缓存中读取的所述 bit的软比特数据进行解压缩处理, 得到 Mbit的软比特 数据, 并对所述 Mbit的软比特数据进行第二次解交织处理。 所述后续的软比特译码处理的步骤还包括: 对第二次解交织处理后得到的 Mbit 的软比特数据进行行压缩处理, 得到压缩的The performance of Log-MAP algorithm is close to that of MAP algorithm. It is a suboptimal decoding algorithm. Because the operation is transferred to the logarithmic domain, the multiplication operation is added to the addition operation, which greatly reduces the computational complexity. Max-Log The -MAP algorithm ignores the logarithmic component in the addition expression of the Log-MAP algorithm likelihood value, and adds the likelihood values to the maximum value operation, further reducing the computational complexity, and the performance is 0.3~0.5 dB lower than the MAP; The decoding performance of SOVA is the worst. It differs from the MAP algorithm by about 0.5~ldB, and as the signal-to-noise ratio increases, the difference does not decrease, but its computational complexity is low, which is beneficial to hardware implementation. The choice of Turbo decoding algorithm should consider the balance between performance and complexity. The Max-Log-MAP algorithm is usually used. For viterbi decoding and turbo decoding, there are two methods of hard decision and soft decision. Compared with the soft decision, the performance of the hard decision has a 2~3dB drop in performance. Obviously, the soft decision scheme is superior. However, the soft decision scheme is limited by the quantization bits of the hardware. The wider the quantization bits, the larger the RAM area required to hold the soft decision bits. Taking TD-SCDMA as an example, the general decoding process of the downlink is as shown in FIG. 3, including physical channel demapping, subframe splicing, second de-interleaving, physical channel splicing, de-bit scrambling, physical channel going. Multiplexing, reverse rate matching, wireless frame splicing, first de-interlacing, radio frame backward equalization, channel decoding, coding block splicing/transport block segmentation, CRC detection, etc. 
Due to the existence of the second deinterleaving and the first deinterleaving, the UE terminal of the TD-SCDMA must store the data demapped for one frame period and one radio frame, This means that the wider the bits of the soft bits used for viterbi decoding and turbo decoding, the larger the buffer required, and therefore the larger the chip area of the wireless receiving device. SUMMARY OF THE INVENTION Embodiments of the present invention provide a soft bit decoding method and apparatus for a wireless receiving device, which can better solve the problem of reducing the chip area of a wireless receiving device without substantially losing performance. According to an aspect of the embodiments of the present invention, a soft bit decoding method of a wireless receiving device is provided, including: compressing Mbit soft bit data to be decoded during downlink processing to obtain a compressed Nbit Soft bit data; performing subsequent soft bit decoding processing on the soft bit data of the compressed bit; wherein, M and N are positive integers, and M>N. Preferably, before the physical channel demapping process, the soft bit data of the Mbit to be decoded is compressed to obtain soft bit data of the compressed bit. In this case, the step of the subsequent soft bit decoding process includes: sequentially performing physical channel demapping processing, second deinterleaving processing, first deinterleaving processing, and decoding on the soft bit data of the compressed bit. deal with. Preferably, after the physical channel demapping process, the soft bit data of the Mbit to be decoded is compressed to obtain soft bit data of the compressed bit. In this case, the step of the subsequent soft bit decoding process includes: storing soft bit data of the compressed bit into a buffer; decompressing the soft bit data of the bit read from the buffer to obtain Mbit soft bit data, and sub-frame splicing processing of the Mbit soft bit data. The step of the subsequent soft bit decoding process further includes: performing compression processing on the Mbit soft bit data obtained by the splicing process of the sub-frame to obtain soft bit data of the compressed bit; And storing soft bit data of the compressed bit into a buffer; decompressing the soft bit data of the bit read from the buffer to obtain soft bit data of Mbit, and performing soft bit data of the Mbit The second de-interlacing process. The step of the subsequent soft bit decoding process further includes: performing line compression processing on the Mbit soft bit data obtained after the second deinterleave process to obtain a compressed
Nbit的软比特数据; 将所述压缩的 bit的软比特数据存储至缓存中; 将从缓存中读取的所述 bit的软比特数据进行解压缩处理, 得到 Mbit的软比特 数据, 并对所述 Mbit的软比特数据进行第一次解交织处理。 优选地, 所述压缩处理的步骤包括: 通过映射, 将 Mbit均匀量化的软比特数据进行转换, 得到 bit非均匀量化的软 比特数据。 优选地, 所述解压缩处理的步骤包括: 通过逆映射, 将从缓存中读取的所述 bit非均匀量化的软比特数据进行转换, 得 到 Mbit均匀量化的软比特数据。 根据本发明实施例的另一方面, 提供了一种译码装置, 包括上述的软比特数据的 缓存处理单元, 设置为在物理信道去映射处理后, 将软判决处理得到的软比特数据进 行缓存。 根据本发明实施例的另一方面, 提供了一种无线接收设备的软比特译码装置, 包 括: 压缩模块, 设置为在下行链路处理期间, 将待译码的 Mbit的软比特数据进行压缩 处理, 得到压缩的 bit的软比特数据; 译码模块, 设置为对所述压缩的 bit的软比特数据进行后续的软比特译码处理; 其中, M和 N是正整数, 且 M>N。 与现有技术相比较, 本发明实施例的有益效果在于: 本发明实施例能够在性能基本无损失或性能略微下降 (不大于 0.2dB ) 的情况下, 减小用于软比特译码的比特数, 从而显著减小无线接收设备的芯片面积, 有效节省基 带芯片的成本。 附图说明 图 1是现有技术提供的 TD协议中的卷积编码器结构示意图; 图 2是现有技术提供的 Turbo码译码器结构示意图; 图 3是现有技术提供的 TD-SCDMA的 GDTR下行链路解码和无复用流程图; 图 4是本发明实施例提供的无线接收设备的软比特译码方法流程图; 图 5是本发明实施例提供的无线接收设备在软比特译码过程中的缓存处理结构示 意图; 图 6是本发明实施例提供的线性域译码与变换域译码的量化信噪比对比示意图; 图 7是本发明实施例提供的加性高斯白噪声 AWGN信道下 8bit线性域译码和 4bit 变换域译码的性能对比示意图; 图 8是本发明实施例提供的衰落信道 Casel下 8bit线性域译码和 4bit变换域译码 的性能对比示意图; 图 9是本发明实施例提供的衰落信道 Casel下 8bit线性域译码和 4bit变换域译码 的性能对比示意图。 具体实施方式 以下结合附图对本发明的优选实施例进行详细说明, 应当理解, 以下所说明的优 选实施例仅用于说明和解释本发明, 并不用于限定本发明。 图 4是本发明实施例提供的无线接收设备的软比特译码方法流程图,如图 4所示, 步骤包括: 步骤 401、 在下行链路处理期间, 将待译码的 Mbit的软比特数据进行压缩处理, 得到压缩的 bit的软比特数据, 其中, M和 N是正整数, 且 M>N。 步骤 402、 对所述压缩的 bit的软比特数据进行后续的软比特译码处理。 当上述步骤 401执行于物理信道去映射处理之前时, 步骤 402中的所述后续的软 比特译码处理的步骤包括: 对所述压缩的 bit的软比特数据依次进行物理信道去映射 处理、 第二次解交织处理、 第一次解交织处理、 译码处理。 也就是说, 无线接收设备 进行解扰解扩后, 得到待译码的 Mbit的软比特数据, 本发明实施例对所述 Mbit的软 比特数据进行压缩处理, 得到压缩的 bit的软比特数据, 在后续的包括物理信道去映 射、 子帧拼接、 第二次解交织、 物理信道拼接、 去比特加扰、 物理信道去复用、 反向 速率匹配、 无线帧拼接、 第一次解交织、 无线帧反向均衡等处理过程中, 均使用所述 压缩的 bit的软比特数据, 即实现了完全的经过压缩变换的软比特译码。 当上述步骤 401执行于物理信道去映射处理之后时, 步骤 402中的所述后续的软 比特译码处理的步骤包括: 将所述压缩的 bit的软比特数据存储至缓存, 并将从缓存 中读取的所述 bit的软比特数据进行解压缩处理, 得到 Mbit的软比特数据, 以便对 所述 Mbit的软比特数据进行子帧拼接处理。然后,对所述子帧拼接处理后得到的 Mbit 的软比特数据进行压缩处理,得到压缩的 bit的软比特数据,将所述压缩的 bit的软 比特数据存储至缓存, 并将从缓存中读取的所述 bit的软比特数据进行解压缩处理, 得到 Mbit的软比特数据, 并对所述 Mbit的软比特数据进行第二次解交织处理。最后, 对第二次解交织处理后得到的 Mbit的软比特数据进行行压缩处理, 得到压缩的 bit 的软比特数据, 将所述压缩的 bit的软比特数据存储至缓存中, 并将从缓存中读取的 所述 bit的软比特数据进行解压缩处理, 得到 Mbit的软比特数据, 并对所述 Mbit的 软比特数据进行第一次解交织处理。 也就是说, 仅需要在进行物理信道去映射之后、 第二次解交织之前、 第一次解交织之前, 对待缓存的数据进行 Mbit到 bit的压缩和 Nbit到 Mbit的解压缩, 实现了简化的软比特译码过程。完全经过压缩变换的软比特译 码过程要求对整个软比特译码过程从 Mbit到 bit的链路改造, 工作量较大, 而简化 的软比特译码过程, 工作量很小, 由于物理信道去映射、 第二次解交织、 第一次解交 织所需要的缓存的芯片面积较大, 因此, 从 Mbit到 bit的压缩, 可以显著减少芯片 面积。 上述压缩处理的步骤包括: 通过映射, 将 Mbit均匀量化的软比特数据进行转换, 得到 bit非均匀量化的软比特数据。 上述解压缩处理的步骤包括: 通过逆映射, 将从缓存中读取的所述 bit非均匀量化的软比特数据进行转换, 得 到 Mbit均匀量化的软比特数据。 上述缓存是随机存储器 RAM。 本发明实施例适用于无线接收设备的软比特译码, 可广泛适用于卷积码和 turbo 码的译码过程。 本发明实施例还提供了一种无线接收设备的软比特译码装置, 包括: 压缩模块, 设置为在下行链路处理期间, 将待译码的 Mbit的软比特数据进行压缩 处理, 得到压缩的 bit的软比特数据, 其中, M和 N是正整数, 且 >^^; 译码模块, 设置为对所述压缩的 bit的软比特数据进行后续的软比特译码处理。 当采用完全的经过压缩变换的软比特译码时, 所述后续的软比特译码处理包括: 物理信道去映射处理、 子帧拼接处理、 第二次解交织处理、 物理信道拼接处理、 去比 特加扰处理、 物理信道去复用处理、 反向速率匹配处理、 无线帧拼接处理、 第一次解 交织处理、 无线帧反向均衡处理等处理过程中, 均使用压缩的 bit的软比特数据进行 处理。 当采用简化的经过压缩变换的软比特译码时, 所述后续的软比特译码处理包括: 在物理信道去映射之后、 第二次解交织之前、 第一次解交织之前, 对待缓存的数据进 行 Mbit到 bit的压缩,而在实际处理过程中, 需要进行 bit到 Mbit的解压缩后, 使 用 Mbit的软比特数据进行相应的处理。图 5是本发明实施例提供的无线接收设备在软 比特译码过程中的缓存处理结构示意图, 如图 5所示, 包括: 压缩模块 Comp,设置为在下行链路处理期间,将待存储的 Mbit的软比特数据进行 压缩处理,得到压缩的 bit的软比特数据。 所述 Comp通过映射, 将 Mbit均匀量化的 软比特数据进行转换, 得到 bit非均匀量化的软比特数据, 实现压缩。 缓存 RAM, 设置为存储所述压缩的 bit的软比特数据; 解压缩模块 De-Comp,设置为将从缓存中读取的所述 bit的软比特数据进行解压 缩处理, 得到 Mbit的软比特数据。 所述 De-Comp通过逆映射, 将从缓存中读取的所 述 bit非均匀量化的软比特数据进行转换, 得到 Mbit均匀量化的软比特数据, 实现 解压缩。 其中, M和 N是正整数, 且 M>N。 图 5所述的缓存处理结构可以广泛适用于 DTR (Download Transport Processing) 的译码装置, 通过减小用于缓存软比特译码的比特数, 显著缩减芯片面积, 从而显著 减小无线接收设备的芯片面积。 DTR过程是下行链路处理过程的总称, 对于 TD-SCDMA系统而言, 包括 GDTR ( General Download Transport Processing ) 禾口 HDTR ( HSDPA Download Transport Processing)两个分支。 
DTR过程的主要目的是对经过软解调的符号数据进行处理, 将 数据经过解码和解复用后, 再传递给上层。 DTR过程和协议标准紧密相关, 是物理层 编码和复用的逆过程。 信道编码通过卷积码和 Turbo码来实现前向纠错,从而可以在 AWGN信道下获得 接近香侬极限的纠错能力。 但在衰落信道下, 由于信道的深衰落和突发干扰的影响, 可能导致一段时间内的集中错误, 这种情况, 仅仅使用纠错码是无能为力的, 解决的 方法就是交织和重传。 在普通业务的信道编码和译码的过程中, 普遍使用了交织和解 交织作为对抗衰落的方法。 而在 HSDPA业务的信道编码和译码的过程中, 更使用重 传作为对抗衰落的方法。 因此, 在 DTR链路中, 解交织和重传合并是其中的重要步骤, 带来的问题就是对 于软解调后的数据, 需要较大的存储空间。 以解交织为例, 如果一个传输时间间隔 ( Transmission Time Intervel, TTI) 为 20ms, 就需要存储该业务 4个子帧的数据, 如 果一个 ΤΉ为 40ms, 就需要存储该业务 8个子帧的数据, 经解交织后才能进行随后的 解码。 这个存储量随保存软比特数据的量化位数的增加而增加, 假定一个子帧需要存 储的数据位 2000个, 那么, 对于 40msTTI, 就需要预先存储 16000个数据, 如果每个 数据以 8bit表示, 就需要 128000bit的存储空间, 如果每个数据以 4bit表示, 那么需 要的存储空间就可以减小到一半, 仅需要 64,000bit的存储空间即可。 决定 DTR链路 中数据存储量比特数的则是 vertibi译码器和 turbo译码器对输入软比特位数的要求。 考虑 turbo软比特译码, 是通过对数似然比来对信息比特做出判别, 用公式表示 如下: Nbit soft bit data; storing soft bit data of the compressed bit into a buffer; decompressing soft bit data of the bit read from the buffer to obtain Mbit soft bit data, and The Mbit soft bit data is subjected to the first deinterleaving process. Preferably, the step of compressing comprises: converting, by mapping, Mbit uniformly quantized soft bit data to obtain bit non-uniformly quantized soft bit data. Preferably, the step of decompressing comprises: converting, by inverse mapping, the bit non-uniformly quantized soft bit data read from the buffer to obtain Mbit uniformly quantized soft bit data. According to another aspect of the present invention, there is provided a decoding apparatus, comprising the above-described soft bit data buffer processing unit, configured to cache soft bit data obtained by soft decision processing after physical channel demapping processing . According to another aspect of the present invention, a soft bit decoding apparatus for a wireless receiving device is provided, comprising: a compression module configured to compress Mbit soft bit data to be decoded during downlink processing Processing, obtaining soft bit data of the compressed bit; the decoding module is configured to perform subsequent soft bit decoding processing on the soft bit data of the compressed bit; wherein, M and N are positive integers, and M>N. Compared with the prior art, the beneficial effects of the embodiments of the present invention are: The embodiment of the invention can reduce the number of bits used for soft bit decoding under the condition that the performance is basically no loss or the performance is slightly decreased (not more than 0.2 dB), thereby significantly reducing the chip area of the wireless receiving device, and effectively saving the baseband. The cost of the chip. BRIEF DESCRIPTION OF THE DRAWINGS FIG. 1 is a schematic structural diagram of a convolutional encoder in a TD protocol provided by the prior art; FIG. 2 is a schematic structural diagram of a Turbo code decoder provided by the prior art; FIG. 3 is a schematic diagram of a TD-SCDMA provided by the prior art. GDTR downlink decoding and non-multiplexing flowchart; FIG. 4 is a flowchart of a soft bit decoding method of a wireless receiving device according to an embodiment of the present invention; FIG. 5 is a soft bit decoding of a wireless receiving device according to an embodiment of the present invention; FIG. 6 is a schematic diagram of a comparison of quantized signal to noise ratios between linear domain decoding and transform domain decoding according to an embodiment of the present invention; FIG. 7 is an additive white Gaussian white noise AWGN channel according to an embodiment of the present invention; FIG. 
8 is a schematic diagram showing the performance comparison between the 8-bit linear domain decoding and the 4-bit transform domain decoding of the fading channel Casel according to the embodiment of the present invention; FIG. 9 is a schematic diagram of performance comparison between the 8-bit linear domain decoding and the 4-bit transform domain decoding. A performance comparison diagram of 8-bit linear domain decoding and 4-bit transform domain decoding in the fading channel Casel provided by the embodiment of the present invention. DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS The preferred embodiments of the present invention are described in detail below with reference to the accompanying drawings. 4 is a flowchart of a method for decoding a soft bit of a wireless receiving device according to an embodiment of the present invention. As shown in FIG. 4, the method includes: Step 401: During the downlink processing, the soft bit data of the Mbit to be decoded is performed. Compression processing is performed to obtain compressed bit soft bit data, where M and N are positive integers, and M > N. Step 402: Perform subsequent soft bit decoding processing on the soft bit data of the compressed bit. When the foregoing step 401 is performed before the physical channel demapping process, the step of the subsequent soft bit decoding process in step 402 includes: performing physical channel demapping processing on the soft bit data of the compressed bit in sequence, Secondary deinterleave processing, first deinterleave processing, and decoding processing. That is, the wireless receiving device performs descrambling and despreading to obtain soft bit data of the Mbit to be decoded, and the embodiment of the present invention compresses the soft bit data of the Mbit to obtain soft bit data of the compressed bit. Subsequent includes physical channel demapping, subframe splicing, second de-interleaving, physical channel splicing, de-bit scrambling, physical channel demultiplexing, reverse rate matching, wireless frame splicing, first de-interlacing, wireless In the process of frame inverse equalization and the like, soft bit data of the compressed bit is used, that is, full compression-transformed soft bit decoding is realized. When the above step 401 is performed after the physical channel demapping process, the step of the subsequent soft bit decoding process in step 402 includes: storing the soft bit data of the compressed bit to a cache, and The read soft bit data of the bit is subjected to decompression processing to obtain Mbit soft bit data, so as to perform sub-frame splicing processing on the Mbit soft bit data. Then, compressing the soft bit data of the Mbit obtained after the splicing process of the sub-frame, obtaining soft bit data of the compressed bit, storing the soft bit data of the compressed bit into a buffer, and reading from the cache The soft bit data of the bit is decompressed to obtain Mbit soft bit data, and the Mbit soft bit data is subjected to a second deinterleaving process. Finally, the Mbit soft bit data obtained after the second deinterleaving process is subjected to line compression processing to obtain compressed bit soft bit data, and the compressed bit soft bit data is stored in the buffer, and the buffer is buffered. The soft bit data of the bit read is subjected to decompression processing to obtain Mbit soft bit data, and the Mbit soft bit data is subjected to a first deinterleave process. 
That is to say, it is only necessary to perform Mbit-to-Nbit compression and Nbit-to-Mbit decompression on the data to be cached after physical channel demapping, before the second de-interleaving, and before the first de-interleaving, which simplifies the implementation of the soft bit decoding process. Fully compressed transform-domain soft bit decoding requires the entire soft bit decoding chain to be modified from Mbit to Nbit, which is a large amount of work, whereas the simplified soft bit decoding process requires only a small amount of work; since the RAM required for physical channel demapping, the second de-interleaving, and the first de-interleaving accounts for most of the chip area, compressing from Mbit to Nbit at these points can still significantly reduce the chip area. The compression processing comprises: converting the Mbit uniformly quantized soft bit data by mapping to obtain Nbit non-uniformly quantized soft bit data. The decompression processing comprises: converting the Nbit non-uniformly quantized soft bit data read from the buffer by inverse mapping to obtain Mbit uniformly quantized soft bit data. The buffer is a random access memory (RAM). The embodiments of the present invention are applicable to soft bit decoding in a wireless receiving device, and in particular to the decoding of convolutional codes and turbo codes. An embodiment of the present invention further provides a soft bit decoding apparatus for a wireless receiving device, including: a compression module configured to compress, during downlink processing, the Mbit soft bit data to be decoded to obtain compressed Nbit soft bit data, where M and N are positive integers and M>N; and a decoding module configured to perform subsequent soft bit decoding processing on the compressed Nbit soft bit data. When fully compressed transform-domain soft bit decoding is employed, the subsequent soft bit decoding processing includes: physical channel demapping, subframe splicing, second de-interleaving, physical channel splicing, de-bit scrambling, physical channel demultiplexing, reverse rate matching, radio frame splicing, first de-interleaving, and radio frame reverse equalization, all performed on the compressed Nbit soft bit data. When simplified compressed transform-domain soft bit decoding is employed, the subsequent soft bit decoding processing includes: performing Mbit-to-Nbit compression on the data to be cached after physical channel demapping, before the second de-interleaving, and before the first de-interleaving, and performing Nbit-to-Mbit decompression before the actual processing, so that the corresponding processing still operates on Mbit soft bit data. FIG. 5 is a schematic diagram of a cache processing structure of a wireless receiving device in the soft bit decoding process according to an embodiment of the present invention. As shown in FIG. 5, the structure includes: a compression module Comp, configured to compress the Mbit soft bit data to be stored during downlink processing to obtain compressed Nbit soft bit data; the Comp converts the Mbit uniformly quantized soft bit data by mapping into Nbit non-uniformly quantized soft bit data to implement the compression;
a cache RAM, configured to store the compressed Nbit soft bit data; and a decompression module De-Comp, configured to decompress the Nbit soft bit data read from the buffer to obtain Mbit soft bit data; the De-Comp converts the Nbit non-uniformly quantized soft bit data read from the buffer by inverse mapping into Mbit uniformly quantized soft bit data to implement the decompression; where M and N are positive integers and M>N. The buffer processing structure shown in FIG. 5 can be widely applied to the decoding devices of DTR (Download Transport Processing); by reducing the number of bits used to buffer data for soft bit decoding, it significantly reduces the chip area of the wireless receiving device. The DTR process is a general term for the downlink processing; for the TD-SCDMA system it includes two branches, GDTR (General Download Transport Processing) and HDTR (HSDPA Download Transport Processing). The main purpose of the DTR process is to process the soft-demodulated symbol data, decode and demultiplex it, and pass it to the upper layer. The DTR process is closely tied to the protocol standard and is the inverse of the physical layer coding and multiplexing. Channel coding implements forward error correction with convolutional codes and Turbo codes, so that error correction capability close to the Shannon limit can be obtained on an AWGN channel. On a fading channel, however, deep fades and burst interference may cause errors concentrated within a period of time; error correction codes alone cannot cope with this, and the solution is interleaving and retransmission. Interleaving and de-interleaving are commonly used against fading in the channel coding and decoding of ordinary services, while in the channel coding and decoding of HSDPA services retransmission is additionally used against fading. Therefore, de-interleaving and retransmission combining are important steps in the DTR link, and the resulting problem is that the soft-demodulated data requires a large amount of storage. Taking de-interleaving as an example, if a Transmission Time Interval (TTI) is 20 ms, the data of 4 subframes of the service must be stored; if a TTI is 40 ms, the data of 8 subframes must be stored, and the subsequent decoding can only be performed after de-interleaving. This storage grows with the number of quantization bits used to save the soft bit data. Assuming that one subframe requires 2,000 stored data values, then for a 40 ms TTI 16,000 values must be stored in advance; if each value is represented with 8 bits, 128,000 bits of storage are needed, whereas if each value is represented with 4 bits, the required storage is halved to only 64,000 bits. The number of bits stored per value in the DTR link is determined by the input soft bit width required by the Viterbi decoder and the turbo decoder.
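To make the storage trade-off concrete, the small C program below recomputes the buffer sizes quoted above for a 40 ms TTI (8 subframes of 2,000 soft values each) at 8-bit and 4-bit quantization. It is purely an illustrative calculation; the subframe size and TTI are the figures assumed in the example above, not fixed by the scheme.

```c
#include <stdio.h>

/* Illustrative buffer-size calculation for the de-interleaving memory. */
int main(void)
{
    const unsigned values_per_subframe = 2000;  /* soft values stored per subframe        */
    const unsigned subframes_40ms_tti  = 8;     /* 40 ms TTI -> 8 subframes of 5 ms each  */
    const unsigned total_values = values_per_subframe * subframes_40ms_tti;

    const unsigned bits_uniform   = 8;          /* Mbit uniform quantization      */
    const unsigned bits_companded = 4;          /* Nbit non-uniform quantization  */

    printf("values to buffer        : %u\n", total_values);                                     /*  16000 */
    printf("storage at %u bit/value : %u bit\n", bits_uniform,   total_values * bits_uniform);  /* 128000 */
    printf("storage at %u bit/value : %u bit\n", bits_companded, total_values * bits_companded);/*  64000 */
    return 0;
}
```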
Considering turbo soft bit decoding, the information bits are discriminated by the log-likelihood ratio, expressed as L2(uk) = Lc·xk + λe2(uk) + λa2(uk), where L2(uk) denotes the output of the turbo decoder, xk denotes the uniformly quantized soft bit at its input, λe2(uk) denotes the "net" (extrinsic) information produced by this constituent decoder after decoding, which is passed to the first constituent decoder as its a priori information, and λa2(uk) denotes the log-likelihood ratio of the a priori probability at the current decoder input, which comes from the output of the other constituent decoder. The formula shows that the turbo decoder output decision is linear in the decoder input xk. When the soft bit input is large, L2(uk) has a large margin, so a certain quantization error is unlikely to cause a wrong decision; when the soft bit input is small, even a small quantization error may cause a wrong decision. Therefore, the demodulated soft bits can be quantized non-uniformly: coarse quantization for large inputs and fine quantization for small inputs. In other words, the DTR process can be moved from the uniformly quantized linear domain to a non-uniformly quantized transform domain.
The above can also be explained in terms of quantization signal-to-noise ratio (SNR). As shown in FIG. 6, uniformly quantizing the soft bit data effectively adds quantization noise. For uniform quantization the quantization noise is fixed, so when the soft bit value is large the SNR between the signal soft bit and the quantization noise is large, and when the soft bit value is small the SNR is correspondingly small. With uniform quantization, the quantization SNR is always sufficient when the signal soft bit value is large, but when the value is small the quantization SNR drops sharply, even below 0 dB, which degrades the soft bit decoding performance. With the transform-domain decoding described above, i.e. non-uniform quantization, a larger quantization step is used when the signal soft bit value is large and a smaller quantization step when the value is small, so the quantization SNR falls off much more gently across the quantization range. This keeps the quantization SNR essentially constant over the whole range and preserves the subsequent channel decoding performance. FIG. 6 takes 8-bit to 4-bit transform-domain decoding as an example. Line 1 is the SNR curve of 8-bit uniform quantization: the larger the absolute value of the input, the higher the quantization SNR, but since the HSDPA operating point is about 14 to 15 dB, a higher SNR does not further help the accuracy of soft demodulation. Line 2 is the SNR curve of 4-bit uniform quantization: when the input value is small, the quantization SNR falls below the HSDPA operating point, causing a large quantization error that degrades the final decoding accuracy. Line 3 is the SNR curve of 4-bit transform-domain decoding: when the absolute value of the input is large, a larger quantization interval is used and the quantization SNR is close to that of 4-bit uniform quantization, but since it remains above the HSDPA operating point this does not affect the decoding performance; when the absolute value of the input is small, a smaller quantization interval is used and the quantization SNR is close to that of 8-bit uniform quantization. In this way, the bit width of the soft bit demodulation link can be significantly reduced with essentially no loss of decoding performance.
To implement DTR transform-domain decoding, an Nbit transform-domain decoder can be used instead of the Mbit uniformly quantized decoder, which amounts to changing the Mbit decoding link in the DTR into an Nbit decoding link. That is, the Viterbi and turbo decoding processes still use the usual decoding algorithms, but their input is no longer uniformly quantized Mbit soft bit data; it is the non-uniformly quantized Nbit soft bits of the transform domain. After several iterations the decoder output is sliced to obtain the final 0/1 bits. If this is done by modifying the existing Mbit link, reworking the whole link may take a long time. A workaround is to keep every processing module of the Mbit decoding link and only wrap the large RAMs in the DTR link: Mbit-to-Nbit compression is performed when the demodulated soft bits are written to RAM, and when the data stored in RAM is needed it is decompressed, mapping the Nbit data back to Mbit data, as shown in FIG. 4 and FIG. 5. In other words, when data is written to RAM, the Mbit uniformly quantized soft bit data is converted by mapping into the Nbit non-uniformly quantized transform domain, and data read from RAM is converted by the inverse mapping back to Mbit uniformly quantized soft bit data, thereby compressing the RAM overhead of the DTR link. The mapping from Mbit uniformly quantized soft bit data to the Nbit non-uniformly quantized transform domain may be expressed by the following formula:
(Formula shown as image imgf000012_0001 in the original publication.)
The formula is further expressed as follows:
(Formula shown as image imgf000012_0002 in the original publication.)
In the formula, alpha is a probability value. The embodiments of the present invention are applicable to the GDTR and HDTR downlinks of the TD-SCDMA system; since WCDMA has GDTR and HDTR downlink flows similar to those of TD-SCDMA, the scheme is also applicable to WCDMA systems. The core point of the scheme is that, in the channel decoding process of the wireless receiving device, the uniformly quantized soft bits are converted from the Mbit linear domain to the Nbit transform domain; M and N are configurable and M is always greater than N, so that a significant reduction in chip cost is obtained at the price of a small performance degradation (not more than 0.2 dB). In common Viterbi and turbo decoding schemes, the demapped soft bits are 8-bit uniformly quantized soft bit data. The embodiments of the present invention were simulated with 4-bit transform-domain decoding and with 8-bit linear decoding; as shown in FIG. 7 to FIG. 9, the simulation results show that the two are essentially equivalent in performance, with a degradation of not more than 0.2 dB, which indicates that replacing the 8-bit linear decoding scheme with a 4-bit transform-domain decoding scheme is feasible. The figures compare the performance of the usual 8-bit scheme and the 4-bit scheme. In the baseband part of a wireless receiving chip, RAM occupies a large area, and the RAM of the downlink GDTR and HDTR links usually accounts for more than 80% of the chip RAM area. This means that the M-to-Nbit transform, taking the 8-to-4-bit case as an example, can cut the RAM area of the DTR link in half, equivalent to reducing the RAM area of the baseband chip by 40%; using the transform-domain decoding scheme therefore greatly shrinks the chip area with essentially no performance loss, effectively saving baseband chip cost. A simplified implementation of transform-domain decoding is described below, taking the GDTR link of TD-SCDMA as an example; the corresponding flowchart is shown in FIG. 3. For the GDTR link of TD-SCDMA, the complete transform-domain decoding method converts the received data symbols into non-linear soft bits after soft demodulation, uses the Nbit transform-domain values throughout the GDTR link, and performs channel decoding on the transform-domain Nbit values to obtain the final 0/1 bits. Considering the complexity of modifying the link, a simplified approach can be used instead: the DTR link structure is left largely unchanged and Mbit linear decoding is still used, and only the larger RAMs in the DTR link are wrapped, so that the modification to the link is minimized. Specifically, when Mbit soft bit data is written to RAM it is mapped once into Nbit soft bit data and stored in that form; when the data in RAM is needed, another mapping converts the Nbit soft bit data back to Mbit soft bit data. In this way most of the chip area saving is obtained while the changes to the original linear link are kept to a minimum.
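The exact mapping used by the patent is given only in the image formulas above, so the sketch below substitutes a generic logarithmic (mu-law-style) compander as a stand-in to show the map-on-write, inverse-map-on-read idea: 8-bit uniformly quantized soft bits are companded to 4-bit codes when stored and expanded back to 8-bit values when read. The function names, the MU constant, and the choice of compander are assumptions for illustration, not the patent's formula; the pair could serve as a drop-in replacement for the truncation placeholders in the earlier buffer sketch.

```c
#include <math.h>
#include <stdint.h>

/* Illustrative 8-bit -> 4-bit non-uniform quantizer built on a mu-law style
 * logarithmic compander: large soft values get coarse steps, small values
 * get fine steps, as described in the text.  The actual mapping of the
 * patent is defined by the image formulas above and may differ. */
#define MU 15.0

uint8_t compress_soft(int8_t s)               /* Mbit (8) -> Nbit (4) */
{
    double x = (double)s / 128.0;             /* normalise to [-1, 1)         */
    double y = copysign(log1p(MU * fabs(x)) / log1p(MU), (double)s);
    int code = (int)lrint(y * 7.0);           /* signed 4-bit code in -7..7   */
    return (uint8_t)(code & 0x0F);            /* two's-complement low nibble  */
}

int8_t decompress_soft(uint8_t c)             /* Nbit (4) -> Mbit (8) */
{
    int code = (c & 0x08) ? (int)c - 16 : (int)c;   /* undo 4-bit two's complement */
    double y = (double)code / 7.0;
    double x = copysign((pow(1.0 + MU, fabs(y)) - 1.0) / MU, y);
    return (int8_t)lrint(x * 127.0);
}
```

The non-uniform step sizes come directly from the logarithm: near zero the curve is steep, so neighbouring 8-bit inputs map to distinct 4-bit codes, while large inputs share codes, matching the coarse/fine behaviour argued for above.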
The specific implementation steps are as follows:
Step 1. Physical channel demapping. Physical channel demapping is the inverse of the physical layer mapping; the physical channel mapping rules are applied in reverse to recover the user's subframe data, converting chip-level operations into bit-level operations. After physical channel demapping, the equalized received data is soft-decided and quantized into suitable soft bit data for the subsequent Viterbi and turbo soft bit decoding. The soft bit data after the soft decision is stored in the frame RAM and is compressed and decompressed in the manner described above; that is, the soft bit data is compressed before being stored in the frame RAM, and the data read from that RAM is decompressed whenever subsequent processing needs it.
Step 2. Subframe splicing. When the TTI length of the coded composite transport channel is greater than 5 ms, a subframe splicing unit must be added between the second de-interleaving unit and the physical channel demapping unit to splice the two subframes together. Subframe splicing only changes the storage order of the input soft bits; it does not change their values.
Step 3. Second de-interleaving. The main purpose of interleaving is to turn burst errors into random errors so that the decoder can correct them, resisting the effects of fast fading. Both the first and the second interleaver are rectangular interleavers that follow the write-in-rows, read-out-in-columns principle. The purpose of the second de-interleaving is to restore the data permuted by the second interleaving to its original order. Before the second de-interleaving, the data must be stored in the second de-interleaving RAM, and the data in that RAM is compressed and decompressed in the manner described above; that is, before the second de-interleaving, the soft bit data is compressed and stored in the second de-interleaving RAM, and when the second de-interleaving is performed, the data read from that RAM is decompressed.
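To illustrate the write-in-rows, read-out-in-columns principle just mentioned, the C sketch below performs a plain rectangular interleave and the matching de-interleave. It is a generic illustration only: the column count passed in is an assumed parameter, and the inter-column permutation prescribed for the actual TD-SCDMA interleavers in the 3GPP specifications is deliberately omitted.

```c
/* Generic rectangular (de)interleaver: write row by row, read column by
 * column.  Incomplete last rows are pruned consistently on both sides. */
void rect_interleave(const signed char *in, signed char *out,
                     unsigned len, unsigned ncols)
{
    unsigned nrows = (len + ncols - 1) / ncols;
    unsigned k = 0;
    for (unsigned c = 0; c < ncols; c++)          /* read out by columns        */
        for (unsigned r = 0; r < nrows; r++) {
            unsigned idx = r * ncols + c;         /* row-by-row write position  */
            if (idx < len)
                out[k++] = in[idx];
        }
}

void rect_deinterleave(const signed char *in, signed char *out,
                       unsigned len, unsigned ncols)
{
    unsigned nrows = (len + ncols - 1) / ncols;
    unsigned k = 0;
    for (unsigned c = 0; c < ncols; c++)          /* undo the column read-out   */
        for (unsigned r = 0; r < nrows; r++) {
            unsigned idx = r * ncols + c;
            if (idx < len)
                out[idx] = in[k++];
        }
}
```

Both routines only permute the storage order of the soft bits; the values themselves are untouched, which is exactly the property the text notes for each of these steps.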
Step 4. Physical channel splicing. When the transmitting end uses multi-code transmission, i.e. one coded composite transport channel (CCTrCH) is transmitted over several physical channels, the receiving end needs physical channel splicing, in which the data of the several physical channels belonging to the same CCTrCH is concatenated serially. Physical channel splicing only changes the storage location of the input soft bits; it does not change their values.
Step 5. De-bit scrambling. De-bit scrambling is the inverse of bit scrambling; bit scrambling forms the modulo-2 sum of the input sequence
(input sequence shown as image imgf000014_0001 in the original publication)
and the scrambling code sequence pk. De-bit scrambling only changes the storage order of the input soft bits; it does not change their values.
Step 6. Transport channel demultiplexing. Transport channel demultiplexing detaches, one by one, the 10 ms radio frame data of the transport channels multiplexed onto one CCTrCH and delivers it to the corresponding transport channels. Transport channel demultiplexing only changes the storage order of the input soft bits; it does not change their values.
Step 7. Reverse rate matching. Reverse rate matching restores the rate-matched data to the data as it was before rate matching. Reverse rate matching only changes the storage order of the input soft bits; it does not change their values.
Step 8. Radio frame splicing. When the transmission time interval (TTI) of a transport channel is greater than 10 ms, the F (= TTI / 10 ms) radio frames of 10 ms mapped onto the transport channel by the reverse rate matching module are spliced into one TTI frame in the radio frame splicing module. Radio frame splicing only changes the storage order of the input soft bits; it does not change their values.
Step 9. First de-interleaving. The first de-interleaving restores the data permuted by the first interleaving to its original order. The first de-interleaving only changes the storage order of the input soft bits; it does not change their values. Before the first de-interleaving, the data must be stored in the first de-interleaving RAM, and the data in that RAM is compressed and decompressed in the manner described above; that is, before the first de-interleaving, the soft bit data is compressed and stored in the first de-interleaving RAM, and when the first de-interleaving is performed, the data read from that RAM is decompressed.
Step 10. Radio frame reverse equalization. Radio frame reverse equalization is the inverse of the radio frame equalization process; the radio frame equalization rules are applied in reverse to recover the user data as it was before radio frame equalization. Radio frame reverse equalization only changes the storage order of the input soft bits; it does not change their values.
Step 11. Channel decoding. Channel decoding is the inverse of channel coding. The commonly used channel codes are mainly convolutional codes and turbo codes, and channel decoding mainly uses Viterbi decoding and the MAX-LOG-MAP algorithm. After the compression and decompression of steps 1, 3 and 9, the Viterbi and turbo decoding processes still use the usual decoding algorithms, and their input is still 8-bit soft bit data; after several iterations the decoder output is sliced, and the final 0/1 bits are obtained.
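As general background on the MAX-LOG-MAP algorithm mentioned in Step 11, the snippet below contrasts the exact log-MAP correction term with the max-log approximation commonly used in hardware. This is textbook material, not code from the patent, and the function names are invented for illustration.

```c
#include <math.h>

/* Jacobian logarithm: ln(e^a + e^b) = max(a,b) + ln(1 + e^-|a-b|).
 * Log-MAP keeps the correction term; MAX-LOG-MAP drops it, which also makes
 * the decoder insensitive to an overall scaling of its input LLRs. */
double max_star_logmap(double a, double b)
{
    return fmax(a, b) + log1p(exp(-fabs(a - b)));
}

double max_star_maxlog(double a, double b)
{
    return fmax(a, b);               /* approximation used by MAX-LOG-MAP */
}
```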
In summary, the embodiments of the present invention can greatly reduce the area of the wireless receiving chip with essentially no impact on performance, thereby effectively saving baseband chip cost. Although the present invention has been described in detail above, the present invention is not limited thereto, and those skilled in the art can make various modifications according to the principles of the present invention. Therefore, any modification made in accordance with the principles of the present invention should be understood as falling within the scope of the present invention.
INDUSTRIAL APPLICABILITY As described above, the soft bit decoding method and apparatus for a wireless receiving device provided by the embodiments of the present invention have the following beneficial effect: the area of the wireless receiving chip can be greatly reduced with essentially no impact on performance, thereby effectively saving baseband chip cost.

Claims

1. A soft bit decoding method for a wireless receiving device, comprising: during downlink processing, compressing Mbit soft bit data to be decoded to obtain compressed Nbit soft bit data; and performing subsequent soft bit decoding processing on the compressed Nbit soft bit data; wherein M and N are positive integers, and M>N.
2. The method according to claim 1, wherein the Mbit soft bit data to be decoded is compressed before the physical channel demapping processing to obtain the compressed Nbit soft bit data.
3. The method according to claim 2, wherein the subsequent soft bit decoding processing comprises: sequentially performing physical channel demapping processing, second de-interleaving processing, first de-interleaving processing, and decoding processing on the compressed Nbit soft bit data.
4. The method according to claim 1, wherein the Mbit soft bit data to be decoded is compressed after the physical channel demapping processing to obtain the compressed Nbit soft bit data.
5. The method according to claim 4, wherein the subsequent soft bit decoding processing comprises: storing the compressed Nbit soft bit data in a buffer; and decompressing the Nbit soft bit data read from the buffer to obtain Mbit soft bit data, and performing subframe splicing processing on the Mbit soft bit data.
6. The method according to claim 5, wherein the subsequent soft bit decoding processing further comprises: compressing the Mbit soft bit data obtained after the subframe splicing processing to obtain compressed Nbit soft bit data; storing the compressed Nbit soft bit data in a buffer; and decompressing the Nbit soft bit data read from the buffer to obtain Mbit soft bit data, and performing second de-interleaving processing on the Mbit soft bit data.
7. The method according to claim 6, wherein the subsequent soft bit decoding processing further comprises: compressing the Mbit soft bit data obtained after the second de-interleaving processing to obtain compressed Nbit soft bit data; storing the compressed Nbit soft bit data in a buffer; and decompressing the Nbit soft bit data read from the buffer to obtain Mbit soft bit data, and performing first de-interleaving processing on the Mbit soft bit data.
8. The method according to any one of claims 1 to 7, wherein the compression processing comprises: converting the Mbit uniformly quantized soft bit data by mapping to obtain Nbit non-uniformly quantized soft bit data.
9. The method according to claim 8, wherein the decompression processing comprises: converting the Nbit non-uniformly quantized soft bit data read from the buffer by inverse mapping to obtain Mbit uniformly quantized soft bit data.
10. A soft bit decoding apparatus for a wireless receiving device, comprising: a compression module, configured to compress, during downlink processing, Mbit soft bit data to be decoded to obtain compressed Nbit soft bit data; and a decoding module, configured to perform subsequent soft bit decoding processing on the compressed Nbit soft bit data; wherein M and N are positive integers, and M>N.
PCT/CN2014/078051 2013-06-05 2014-05-21 Soft bit coding method and apparatus for radio receiving device WO2014194761A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201310222092.3A CN104218956B (en) 2013-06-05 2013-06-05 Soft bit decoding method and device for a radio receiving device
CN201310222092.3 2013-06-05

Publications (1)

Publication Number Publication Date
WO2014194761A1 true WO2014194761A1 (en) 2014-12-11

Family

ID=52007534

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2014/078051 WO2014194761A1 (en) 2013-06-05 2014-05-21 Soft bit coding method and apparatus for radio receiving device

Country Status (2)

Country Link
CN (1) CN104218956B (en)
WO (1) WO2014194761A1 (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105843106B (en) * 2015-01-12 2019-04-30 深圳市中兴微电子技术有限公司 Dedicated digital signal processor and its device and method for realizing data interaction conversion
US10075970B2 (en) 2015-03-15 2018-09-11 Qualcomm Incorporated Mission critical data support in self-contained time division duplex (TDD) subframe structure
US9936519B2 (en) 2015-03-15 2018-04-03 Qualcomm Incorporated Self-contained time division duplex (TDD) subframe structure for wireless communications
US9814058B2 (en) 2015-05-15 2017-11-07 Qualcomm Incorporated Scaled symbols for a self-contained time division duplex (TDD) subframe structure
US9992790B2 (en) 2015-07-20 2018-06-05 Qualcomm Incorporated Time division duplex (TDD) subframe structure supporting single and multiple interlace modes
CN107624251B (en) * 2016-01-15 2021-01-08 高通股份有限公司 Wireless communication
CN112104394B (en) * 2020-11-18 2021-01-29 北京思凌科半导体技术有限公司 Signal processing method, signal processing device, storage medium and electronic equipment
CN113347667B (en) * 2021-06-08 2022-07-05 武汉虹信科技发展有限责任公司 Bit compression method and system for reporting UCI (uplink control information)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1392741A (en) * 2002-08-12 2003-01-22 北京邮电大学 Turbine code decoder for logarithmic compression-expansion code and its realizing method
US20070189248A1 (en) * 2006-02-14 2007-08-16 Chang Li F Method and system for HSDPA bit level processor engine
CN101237241A (en) * 2007-01-31 2008-08-06 大唐移动通信设备有限公司 A method and system for realizing mixed automatic request re-transfer processing and channel decoding

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8000411B2 (en) * 2008-01-04 2011-08-16 Qualcomm Incorporated Decoding scheme using multiple hypotheses about transmitted messages
US8005152B2 (en) * 2008-05-21 2011-08-23 Samplify Systems, Inc. Compression of baseband signals in base transceiver systems
CN101414848B (en) * 2008-11-18 2012-12-12 华为技术有限公司 Method, apparatus and system for analyzing frame control head


Also Published As

Publication number Publication date
CN104218956A (en) 2014-12-17
CN104218956B (en) 2019-04-26

Similar Documents

Publication Publication Date Title
WO2014194761A1 (en) Soft bit coding method and apparatus for radio receiving device
US6975692B2 (en) Scaling of demodulated data in an interleaver memory
KR100671075B1 (en) Decoder, decoding system and method for decoding to facilitate the use of turbo coding
JP4955150B2 (en) Highly parallel MAP decoder
JP3662766B2 (en) Iterative demapping
US8713414B2 (en) Method and apparatus for soft information transfer between constituent processor circuits in a soft-value processing apparatus
WO2005013543A1 (en) Scaling and quantizing soft-decision metrics for decoding
US6606724B1 (en) Method and apparatus for decoding of a serially concatenated block and convolutional code
US6859906B2 (en) System and method employing a modular decoder for decoding turbo and turbo-like codes in a communications network
JP4709119B2 (en) Decoding device and decoding method
JP3613448B2 (en) Data transmission method, data transmission system, transmission device, and reception device
GB2432495A (en) Terminating turbo decoding of an entire transport block once decoding has failed in a single code block divided therefrom
US6795507B1 (en) Method and apparatus for turbo decoding of trellis coded modulated signal transmissions
US8413021B2 (en) Efficient soft value generation for coded bits in a turbo decoder
JP4543479B2 (en) Communication system and method
KR100912600B1 (en) Tail-biting turbo code for arbitrary number of information bits
WO2000010254A1 (en) Memory architecture for map decoder
US7333419B2 (en) Method to improve performance and reduce complexity of turbo decoder
US8196003B2 (en) Apparatus and method for network-coding
JP2001211088A (en) Method and device for data error correction
JP2001257602A (en) Method and device for data error correction
US20050172200A1 (en) Data receiving method and apparatus
JP3514213B2 (en) Direct concatenated convolutional encoder and direct concatenated convolutional encoding method
Anghel et al. FPGA implementation of a CTC Decoder for H-ARQ compliant WiMAX systems
Singh et al. Modified joint channel source decoding structure for reliable image transmission

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14807325

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14807325

Country of ref document: EP

Kind code of ref document: A1