WO2022002272A1 - On-demand decoding method and apparatus - Google Patents

On-demand decoding method and apparatus

Info

Publication number
WO2022002272A1
Authority
WO
WIPO (PCT)
Prior art keywords
decoding
syndrome
syndromes
group
decision
Prior art date
Application number
PCT/CN2021/104390
Other languages
English (en)
French (fr)
Inventor
梁伟光
黄科超
马会肖
肖世尧
耿东玉
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司
Priority to EP21832381.4A priority Critical patent/EP4170914A4/en
Publication of WO2022002272A1 publication Critical patent/WO2022002272A1/zh
Priority to US18/146,794 priority patent/US20230136251A1/en

Classifications

    • H ELECTRICITY
    • H03 ELECTRONIC CIRCUITRY
    • H03M CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00 Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/03 Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words
    • H03M13/05 Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words, using block codes, i.e. a predetermined number of check bits joined to a predetermined number of information bits
    • H03M13/13 Linear codes
    • H03M13/15 Cyclic codes, i.e. cyclic shifts of codewords produce other codewords, e.g. codes defined by a generator polynomial, Bose-Chaudhuri-Hocquenghem [BCH] codes
    • H03M13/151 Cyclic codes using error location or error correction polynomials
    • H03M13/152 Bose-Chaudhuri-Hocquenghem [BCH] codes
    • H03M13/159 Remainder calculation, e.g. for encoding and syndrome calculation
    • H03M13/19 Single error correction without using particular properties of the cyclic codes, e.g. Hamming codes, extended or generalised Hamming codes
    • H03M13/37 Decoding methods or techniques, not specific to the particular type of coding provided for in groups H03M13/03 - H03M13/35
    • H03M13/3707 Adaptive decoding and hybrid decoding, e.g. decoding methods or techniques providing more than one decoding algorithm for one code
    • H03M13/3746 Decoding methods or techniques, not specific to the particular type of coding provided for in groups H03M13/03 - H03M13/35, with iterative decoding
    • H03M13/45 Soft decoding, i.e. using symbol reliability information
    • H03M13/61 Aspects and characteristics of methods and arrangements for error correction or error detection, not provided for otherwise
    • H03M13/611 Specific encoding aspects, e.g. encoding by means of decoding
    • H03M13/65 Purpose and implementation aspects
    • H03M13/6561 Parallelized implementations

Definitions

  • the present application relates to a decoding technology, and in particular, to a low-power on-demand decoding technology.
  • FEC: forward error correction coding.
  • The principle of forward error correction coding is that the transmitter adds parity bits during encoding, and the receiver corrects errors in the received, corrupted code stream by computing over those parity bits.
  • OSNR: optical signal-to-noise ratio.
  • The present application provides a decoding method that addresses the high decoding complexity and high decoding power consumption of the prior art by prioritizing input codewords and scheduling decoding on demand.
  • A decoding method: obtain the syndrome corresponding to each codeword in a plurality of codewords; group the obtained syndromes and perform priority sorting within each group of syndromes; and select syndromes for decoding according to the priority sorting result of each group of syndromes.
  • The embodiment of the present application does not apply the same decoding process to the syndrome of every codeword. This avoids the problem of the traditional static decoding scheme, in which the same number of decoding passes is required regardless of whether the codeword itself is already correct, thereby realizing on-demand decoding and reducing the demand for decoding resources and system power consumption.
  • Non-zero syndromes have priority over syndromes whose value is zero. Further, a non-zero syndrome that has been decoded more times has lower priority than a non-zero syndrome that has been decoded fewer times. The number of decoding attempts can also be limited: once the number of times a syndrome has been decoded reaches a threshold, it is not decoded again. For example, with the threshold set to 3, a syndrome that has already been decoded three times is no longer selected, which increases the chance that syndromes still waiting to be decoded are scheduled and thereby improves decoding efficiency.
  • In the case of hard-decision decoding, the priority of non-zero syndromes is always higher than that of syndromes whose value is zero; in the case of soft-decision decoding, the priority of non-zero syndromes can likewise be set higher than that of syndromes whose value is zero.
  • A syndrome whose value is zero can also be prioritized by decoding count. For example, regardless of whether the value of a syndrome is zero, a syndrome that has been decoded more times has lower priority than one decoded fewer times; if two syndromes have been decoded the same number of times, the non-zero syndrome has higher priority than the zero-valued one.
  • Priorities may also be sorted according to the reliability of the soft information; this is not limited in this application.
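  • As an illustrative sketch only (not part of the claimed method), the ordering described above can be expressed as a sort key: non-zero syndromes rank ahead of zero-valued ones, syndromes decoded fewer times rank ahead of those decoded more often, and syndromes that have reached an assumed threshold of 3 decoding attempts are excluded from scheduling. The class and function names are hypothetical.

```python
from dataclasses import dataclass

MAX_DECODE_ATTEMPTS = 3  # assumed threshold from the example above


@dataclass
class SyndromeEntry:
    index: int          # number or storage address of the syndrome
    value: int          # 0 means the codeword currently checks out
    decode_count: int   # how many times this syndrome has already been decoded


def is_schedulable(entry: SyndromeEntry) -> bool:
    """A syndrome that has reached the decode-count threshold is never rescheduled."""
    return entry.decode_count < MAX_DECODE_ATTEMPTS


def priority_key(entry: SyndromeEntry):
    """Smaller key = higher priority.
    Rank 0: non-zero syndromes (need correction); rank 1: zero-valued syndromes.
    Within a rank, fewer past decoding attempts come first, then the smaller index."""
    rank = 0 if entry.value != 0 else 1
    return (rank, entry.decode_count, entry.index)


def sort_group(group):
    """Order one group of syndromes from highest to lowest scheduling priority."""
    return sorted((e for e in group if is_schedulable(e)), key=priority_key)
```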
  • the decoding method is applied to a decoding apparatus including a plurality of decoding units;
  • Selecting syndromes for decoding includes: selecting at most one syndrome from each group and sending the selected syndromes to different decoding units for hard-decision or soft-decision decoding, where the selected syndromes are all non-zero syndromes.
  • A syndrome whose value is zero does not need to be decoded; if all syndromes in a group have the value 0, no syndrome from that group is selected for decoding, which is why at most one syndrome is picked from each group. In this case, the storage unit of each group of syndromes only needs to be connected to its corresponding decoding unit, which reduces the wiring complexity to some extent.
  • At most two or more syndromes can also be selected from each group; in that case, the number of decoding units connected to the storage unit of each group of syndromes becomes two or more. It should be understood that in the case of soft-decision decoding, a zero-valued syndrome can be decoded to improve decoding performance, or left undecoded to reduce decoding complexity.
  • the decoding method is applied to a decoding device including a plurality of decoding units;
  • Selecting syndromes for decoding includes: selecting one syndrome from each group and sending the selected syndromes to different decoding units for soft-decision decoding.
  • the storage units of each group of syndromes only need to be connected to the corresponding decoding units, and the wiring complexity will be reduced to some extent.
  • Two or more syndromes can also be selected from each group; in that case, the number of decoding units connected to the storage unit of each group of syndromes becomes two or more.
  • The number of groups is the same as the number of decoding units, so as to maximize the utilization of decoding resources.
  • The decoding method is applied to a decoding device including a plurality of decoding units, and selecting syndromes for decoding includes: selecting at most one syndrome from each group and sending the selected syndromes to different decoding units for hard-decision or soft-decision decoding; then prioritizing the syndromes again within every two groups and, according to the sorting result, selecting at most one further syndrome from every two groups and sending it to a different decoding unit for hard-decision or soft-decision decoding, where the syndromes selected in the two passes are different and the selected syndromes are all non-zero syndromes.
  • the decoding method is applied to a decoding device including a plurality of decoding units;
  • Selecting syndromes for decoding includes: selecting one syndrome from each group and sending the selected syndromes to different decoding units for soft-decision decoding; then prioritizing the syndromes again within every two groups and, according to the sorting result, selecting another syndrome from every two groups and sending it to a different decoding unit for soft-decision decoding.
  • Grouping the obtained syndromes includes: dividing the obtained syndromes into 2n/3 groups, where n is the number of decoding units and n is an integer multiple of 3.
  • each group includes the same number of syndromes to achieve uniform grouping, which can ensure that the algorithm complexity is low when selecting syndromes.
  • the syndromes included in each group have different numbers or addresses, and the corresponding syndromes can be identified according to the different numbers or addresses, so as to prioritize them.
  • The method further includes: sorting each group of syndromes by priority again, selecting syndromes again according to the result of this priority sorting, and decoding the selected syndromes.
  • The prioritization methods used in the two sorting passes may be different.
  • The method further includes: if decoding of a first syndrome succeeds, updating the first syndrome and the codeword corresponding to the first syndrome according to the decoding result, where the first syndrome is one of the decoded syndromes.
  • Updating the first syndrome and the codeword corresponding to the first syndrome according to the decoding result specifically includes: the decoding result includes an incremental syndrome and flip bits corresponding to the first syndrome; the incremental syndrome is superimposed on the first syndrome to obtain an updated syndrome; and, according to the flip bits, the bits corresponding to the flip bits in the corresponding codeword are flipped.
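  • A minimal sketch of this update step, assuming binary (GF(2)) arithmetic so that superimposing the incremental syndrome is an XOR and a flip bit is simply a bit position to invert; all names are illustrative.

```python
def apply_decoding_result(syndrome: int, codeword: list,
                          incremental_syndrome: int, flip_positions: list) -> tuple:
    """Superimpose the incremental syndrome onto the stored syndrome (XOR over GF(2))
    and invert the indicated bits of the corresponding codeword."""
    updated_syndrome = syndrome ^ incremental_syndrome
    updated_codeword = list(codeword)
    for pos in flip_positions:
        updated_codeword[pos] ^= 1  # 0 -> 1 or 1 -> 0
    return updated_syndrome, updated_codeword


# Example: a successful decode reports one flipped bit and an incremental syndrome
# that cancels the stored syndrome, leaving the all-zero (valid) syndrome.
s, cw = apply_decoding_result(syndrome=0b1011, codeword=[1, 0, 0, 1, 1, 0, 1],
                              incremental_syndrome=0b1011, flip_positions=[2])
assert s == 0 and cw[2] == 1
```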
  • Each syndrome is stored for the same length of time; that is, every syndrome remains in the storage unit for the same duration, say 2 microseconds, after which it is overwritten by a newly received syndrome. Likewise, the storage time of the codeword corresponding to each syndrome is the same. Further, the time window in which each syndrome may be decoded is the same, say 1 microsecond: once a syndrome has been stored for 1 microsecond, its address is treated as an invalid address regardless of whether the syndrome has been decoded, and the syndrome is not decoded again until it is overwritten by a newly stored syndrome.
  • more decoding resources can be used for newly stored syndromes, instead of syndromes that have been stored for a long time but have not yet obtained accurate results, so as to realize on-demand allocation of decoding resources and improve decoding efficiency.
  • the method further includes: decoding the soft information amplitude corresponding to the selected syndrome.
  • The syndrome and its corresponding soft information amplitude need to be decoded together. If decoding succeeds, the updated soft information amplitude is obtained in addition to the incremental syndrome and flip bits; the incremental syndrome is superimposed on the corresponding syndrome to obtain an updated syndrome; according to the flip bits, the bits corresponding to the flip bits in the corresponding codeword are flipped; and the updated soft information amplitude then replaces the original soft information amplitude to complete the decoding.
  • the method further includes: storing the syndromes corresponding to the first frame in groups, and the number of syndromes corresponding to the first frame stored in each group differs by at most one, wherein the first frame A frame includes multiple codewords.
  • the number of syndromes corresponding to codewords from the same frame is the same in each group of storage, that is, the number in each storage unit is the same, so that uniform storage is achieved.
  • For each storage group, the load it handles is the average of the storage load, so this design ensures that all storage units carry a substantially consistent load, achieving thermal-density balance and avoiding local overheating.
  • A decoding device comprising a controller and a decoder, where the controller is configured to obtain the syndrome corresponding to each codeword in a plurality of codewords and group the obtained syndromes, and is further configured to perform priority sorting within each group of syndromes and to select syndromes and send them to the decoder according to the priority sorting result of each group of syndromes; and the decoder is configured to decode the received syndromes.
  • The embodiment of the present application does not apply the same decoding process to the syndrome corresponding to every codeword. This avoids the problem of the traditional static decoding scheme, in which the same number of decoding passes is required regardless of whether the codeword itself is already correct, realizing on-demand decoding and reducing the demand for decoding resources and system power consumption.
  • Non-zero syndromes have priority over syndromes whose value is zero. Further, a non-zero syndrome that has been decoded more times has lower priority than a non-zero syndrome that has been decoded fewer times. The number of decoding attempts can also be limited: once the number of times a syndrome has been decoded reaches a threshold, it is not decoded again. For example, with the threshold set to 3, a syndrome that has already been decoded three times is no longer selected, which increases the chance that syndromes still waiting to be decoded are scheduled and thereby improves decoding efficiency.
  • The controller is configured to select at most one syndrome from each group and send them to different decoding units in the decoder for hard-decision or soft-decision decoding, where the selected syndromes are all non-zero syndromes.
  • A syndrome whose value is zero does not need to be decoded; if all syndromes in a group have the value 0, no syndrome from that group is selected for decoding, which is why at most one syndrome is picked from each group. In this case, the storage unit of each group of syndromes only needs to be connected to its corresponding decoding unit, which reduces the wiring complexity.
  • At most two or more syndromes can also be selected from each group; in that case, the number of decoding units connected to the storage unit of each group of syndromes becomes two or more. It should be understood that in the case of soft-decision decoding, a zero-valued syndrome can be decoded to improve decoding performance, or left undecoded to reduce decoding complexity.
  • the controller is configured to select a syndrome from each group and send it to different decoding units in the decoder respectively for soft-decision decoding.
  • In soft-decision decoding, a syndrome may be selected for decoding regardless of whether its value is 0.
  • the storage units of each group of syndromes only need to be connected to the corresponding decoding units, and the wiring complexity will be reduced to some extent.
  • Two or more syndromes can also be selected from each group; in that case, the number of decoding units connected to the storage unit of each group of syndromes becomes two or more.
  • the number of groups of packets is the same as the number of decoding units, so as to maximize the utilization of decoding resources.
  • The controller is further configured to select at most one syndrome from each group and send them to different decoding units in the decoder for hard-decision or soft-decision decoding; the syndromes are then prioritized again within every two groups and, according to the sorting result, at most one further syndrome is selected from every two groups and sent to a different decoding unit in the decoder for hard-decision or soft-decision decoding, where the syndromes selected in the two passes are different and the selected syndromes are all non-zero syndromes.
  • The controller is further configured to select one syndrome from each group and send it to different decoding units in the decoder for soft-decision decoding; the syndromes are then prioritized again within every two groups and, according to the sorting result, another syndrome is selected from every two groups and sent to a different decoding unit in the decoder for soft-decision decoding, where the syndromes selected in the two passes are different.
  • The controller is further configured to divide the obtained syndromes into 2n/3 groups, where n is the number of decoding units and n is an integer multiple of 3.
  • each group includes the same number of syndromes to achieve uniform grouping, which can ensure that the algorithm complexity is low when selecting syndromes.
  • the syndromes included in each group have different numbers or addresses, and the corresponding syndromes can be identified according to the different numbers or addresses, so as to prioritize them.
  • The controller is further configured to, after sending the selected syndromes to the decoder, prioritize each group of syndromes again and, according to the result of this priority sorting, again select syndromes from each group and send them to the decoder. Further, the prioritization methods used in the two sorting passes may be different.
  • The controller is configured to, when a first syndrome is successfully decoded, update the first syndrome and the codeword corresponding to the first syndrome according to the decoding result, where the first syndrome is one of the syndromes sent to the decoder.
  • the decoding apparatus further includes a memory
  • The decoder is further configured to obtain the incremental syndrome and the flip bits corresponding to the first syndrome and to send the incremental syndrome and the flip bits to the memory; the controller is configured to superimpose the incremental syndrome on the first syndrome so that the memory stores the updated syndrome, and is further configured to flip, according to the flip bits, the bits corresponding to the flip bits in the corresponding codeword so that the memory stores the updated bits.
  • Each syndrome is stored for the same length of time; that is, every syndrome remains in the storage unit for the same duration, say 2 microseconds, after which it is overwritten by a newly received syndrome.
  • the storage time of the codeword corresponding to the syndrome is also the same.
  • The time window in which each syndrome may be decoded is the same, say 1 microsecond: once a syndrome has been stored for 1 microsecond, its address is treated as an invalid address regardless of whether the syndrome has been decoded, and the syndrome is not decoded again until it is overwritten by a newly stored syndrome.
  • more decoding resources can be used for newly stored syndromes, instead of syndromes that have been stored for a long time but have not yet obtained accurate results, so as to realize on-demand allocation of decoding resources and improve decoding efficiency.
  • The controller is further configured to send the soft information amplitude corresponding to the selected syndrome to the decoder, and the decoder is further configured to decode the soft information amplitude.
  • The syndrome and its corresponding soft information amplitude need to be decoded together. If decoding succeeds, the updated soft information amplitude is obtained in addition to the incremental syndrome and flip bits.
  • The decoder sends the incremental syndrome, the flip bits and the updated soft information amplitude to the memory, which stores them; the controller superimposes the incremental syndrome on the corresponding syndrome so that the memory stores the updated syndrome, and, according to the flip bits, flips the bits corresponding to the flip bits in the corresponding codeword so that the memory stores the updated bits.
  • the decoding apparatus further includes a memory, the memory includes a plurality of storage units, and the number of syndromes corresponding to the first frame stored in each storage unit differs by at most one, wherein the The first frame includes a plurality of codewords.
  • the number of syndromes corresponding to codewords from the same frame is the same in each storage unit, so as to achieve uniform storage.
  • For each storage unit, the load it handles is the average of the storage load, so this design ensures that all storage units carry a substantially consistent load, achieving thermal-density balance and avoiding local overheating.
  • The decoding apparatus further includes a scheduling unit whose main functions include: sending syndromes from the memory to the decoder according to instructions from the controller, and sending the incremental syndromes and flip bits output by the decoder to the memory.
  • the scheduling unit also sends the soft information amplitude in the memory to the decoder according to the instruction of the controller, and sends the soft information amplitude output from the decoder to the memory.
  • the bandwidth of the scheduling unit can be constrained.
  • the number of incremental syndromes and the number of flip bits sent to the memory at each moment is limited not to exceed a certain threshold.
  • the scheduling unit will cache the incremental syndrome and flip bits that exceed the threshold, and send them to the memory at the next moment.
  • A computer-readable storage medium stores instructions that, when executed on a terminal device, cause the terminal device to perform the operations described in the first aspect and the third aspect.
  • A fourth aspect provides a computer program product containing instructions that, when run on a terminal device, cause the terminal device to execute the method described in the first aspect and any possible implementation manner of the first aspect.
  • The embodiment of the present application does not apply the same decoding process to the syndrome corresponding to every codeword. This avoids the problem of the traditional static decoding scheme, in which the same number of decoding passes is required regardless of whether the codeword itself is already correct, realizing on-demand decoding and reducing the demand for decoding resources and system power consumption.
  • The group-based selection method can reduce the complexity of the algorithm, and the storage unit of each group of syndromes only needs to be connected to its corresponding decoding unit, so the wiring complexity is also reduced.
  • FIG. 1 is a structural block diagram of a communication system;
  • FIG. 2 is a basic architecture diagram of on-demand decoding provided by the present application;
  • FIG. 3 is a flowchart of an on-demand decoding method provided by the present application;
  • FIG. 4 is a diagram of a correspondence between each group of syndromes and decoding units provided by the present application;
  • FIG. 5 is a diagram of another correspondence between each group of syndromes and decoding units provided by the present application;
  • FIG. 6 is a diagram of another correspondence between each group of syndromes and decoding units provided by the present application;
  • FIG. 8 is a diagram of an on-demand decoding apparatus provided by the present application;
  • FIG. 9 is a diagram of another on-demand decoding apparatus provided by the present application.
  • Fig. 1 shows the structural block diagram of the communication system.
  • the source provides the data stream to be sent;
  • The encoder receives the data stream, encodes it, and sends the resulting codeword information, which combines check bits and information bits, over the channel to the receiving end. After receiving codeword information corrupted by noise or other channel impairments, the receiving end decodes it with a decoding apparatus, recovers the original data, and delivers it to the sink.
  • the decoding method provided by the present application is applied to the decoding apparatus shown in FIG. 1 , which is a very important part in the communication system.
  • The decoding method provided by the present application is a dynamic on-demand decoding method; its basic architecture is shown in Figure 2 and includes codeword-sequence grouping and priority sorting, dynamic decoding scheduling, decoder decoding, and codeword and syndrome update.
  • the specific steps are shown in Figure 3, including:
  • The syndrome is obtained from the parity-check matrix and the transpose of the codeword to be decoded.
  • The decoder can determine the error pattern (that is, the flip bits) from the syndrome and flip the corresponding bits in the codeword to be decoded accordingly, thereby obtaining the decoded codeword.
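  • A small sketch of the syndrome computation for a binary linear block code, assuming GF(2) arithmetic and using the (7, 4) Hamming code only as a compact stand-in for the BCH codes discussed here: the syndrome is the parity-check matrix times the transpose of the received word, it is all-zero exactly for a valid codeword, and flipping the bit indicated by the error pattern restores the codeword.

```python
import numpy as np

# Parity-check matrix of the (7, 4) Hamming code, used only as a small stand-in.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]], dtype=np.uint8)


def syndrome(received: np.ndarray) -> np.ndarray:
    """s = H * r^T over GF(2); an all-zero syndrome means no detectable error."""
    return (H @ received) % 2


valid = np.array([0, 1, 1, 0, 0, 1, 1], dtype=np.uint8)
assert not syndrome(valid).any()           # a valid codeword gives a zero syndrome

corrupted = valid.copy()
corrupted[4] ^= 1                          # channel error: flip the bit at index 4 (position 5)
s = syndrome(corrupted)
# For this H, the syndrome read as a binary number equals the 1-based error position.
assert int(s[0]) + 2 * int(s[1]) + 4 * int(s[2]) == 5
corrupted[4] ^= 1                          # flipping the indicated bit restores the codeword
assert not syndrome(corrupted).any()
```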
  • Group the obtained syndromes and perform priority sorting within each group. For example, different syndromes have different numbers: assuming there are 100 syndromes, they are numbered from 1 to 100 and divided into different groups. If they are all non-zero syndromes, they can be prioritized within each group by number, from smallest to largest. Further, non-zero syndromes have higher priority than syndromes with a value of 0: if some syndromes are zero, the non-zero syndromes in each group are prioritized from smallest to largest number and the zero-valued syndromes come last. It should be understood that the non-zero syndromes can also be sorted in descending order, or in an order such as 1, 3, 5, ...; in addition, the number can be replaced by a storage address, and the sorting method is the same, which is not repeated here.
  • For example, if there are 64 syndromes to be grouped, they are divided into 4 groups, with syndromes 1-16, 17-32, 33-48 and 49-64 forming the four groups, and each group is prioritized in ascending order of number. In this round of decoding, if all 64 syndromes are non-zero, the priorities follow the ascending order and the highest-priority syndromes in the four groups are 1, 17, 33 and 49. If a syndrome has the value zero, its priority is lowered; for example, if syndrome 1 has the value 0 and the remaining syndromes are non-zero, the highest-priority syndromes in the four groups are 2, 17, 33 and 49; if syndrome 2 is also 0, then syndrome 3 has the highest priority in the first group; and if syndromes 1-16 in that group are all 0, the group is still sorted according to the preset order.
  • The priority of non-zero syndromes with more decoding attempts is lower than that of non-zero syndromes with fewer decoding attempts; that is, when sorting non-zero syndromes, those with fewer decoding attempts are promoted. For example, suppose one group includes 16 syndromes, syndromes 1-5 are zero syndromes and syndromes 6-16 are non-zero, where syndromes 10-16 have not yet been decoded and the remaining non-zero syndromes have each been decoded once. Then syndromes 10-16 have higher priority than syndromes 6-9, and syndromes 6-9 have higher priority than syndromes 1-5; ordering each of the three classes by ascending number, the priority order within this group is 10, 11, ..., 16, 6, 7, ..., 9, 1, 2, ..., 5.
  • In the case of hard decision, a syndrome whose value is zero does not need to be decoded; in the case of soft decision, every syndrome may be decoded regardless of whether its value is 0. Therefore, in the case of hard decision, if all syndromes in a group have the value 0, none of the syndromes in that group needs to be decoded. In addition, in the case of soft decision, it is also possible to consider only the number of decoding attempts: for example, a syndrome with more decoding attempts has lower priority than one with fewer attempts, without distinguishing whether the value of the syndrome is 0.
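  • The grouping and per-group selection described above could look roughly as follows; this is a sketch with an assumed data layout (index, value, decode count), not the claimed implementation.

```python
# Each syndrome is represented as (index, value, decode_count); value 0 means "no error detected".
def selection_key(entry):
    index, value, decode_count = entry
    return (0 if value != 0 else 1, decode_count, index)


def split_into_groups(entries, num_groups):
    """Divide syndromes evenly by their numbering: 1-16, 17-32, ... for 64 syndromes in 4 groups."""
    size = len(entries) // num_groups
    return [entries[i * size:(i + 1) * size] for i in range(num_groups)]


def select_per_group(groups, allow_zero=False):
    """Pick at most one syndrome per group: the highest-priority entry.
    In hard decision (allow_zero=False), a group whose syndromes are all zero yields nothing."""
    selected = []
    for group in groups:
        best = min(group, key=selection_key)
        if best[1] == 0 and not allow_zero:
            continue          # nothing in this group needs correction
        selected.append(best)
    return selected


# 64 syndromes numbered 1..64; a handful are non-zero for the sake of the example.
entries = [(i, 1 if i in (7, 20, 21, 50) else 0, 0) for i in range(1, 65)]
chosen = select_per_group(split_into_groups(entries, num_groups=4))
assert [e[0] for e in chosen] == [7, 20, 50]   # group 33-48 is all-zero, so it yields nothing
```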
  • The embodiments of the present application do not apply the same decoding process to every codeword, which avoids the problem of the traditional static decoding scheme, in which the same number of decoding passes is required regardless of whether the codeword itself is already correct, and thereby reduces decoding resource requirements and system power consumption.
  • Parallel decoding can be used, meaning that multiple decoding units decode simultaneously; the degree of parallelism equals the number of decoding units. For example, with 4 decoding units, 4 syndromes can be decoded at the same time, the degree of parallelism is 4, and in this case the number of selected syndromes should not exceed 4.
  • the soft information amplitude and the sign bit are collectively referred to as soft information
  • the sign bit is the value (0 or 1) of each bit in the codeword
  • The soft information amplitude indicates the probability that each bit is 0 or 1; that is, the soft information indicates the probability that the corresponding bit is 0 or 1.
  • each group includes the same number of syndromes, so that the grouping of syndromes is the most uniform, and the complexity of prioritization is reduced.
  • the decoding method of the present application can be applied to a decoding device including a plurality of decoding units, wherein, there are several ways to select a syndrome for decoding:
  • In one manner, the obtained syndromes are divided into n groups, where n is a positive integer not greater than the degree of parallelism; optionally, n equals the degree of parallelism. At most one syndrome is selected from each group and sent to a different decoding unit for hard-decision or soft-decision decoding.
  • Alternatively, with n equal to the degree of parallelism, one syndrome is selected from each group and sent to a different decoding unit for soft-decision decoding.
  • In this manner, the complexity is lower, and only the selected syndromes need to be decoded, which avoids most unnecessary decoding operations and reduces power consumption. In addition, the storage unit of each group of syndromes only needs to be connected to its corresponding decoding unit; for example, the storage unit of syndromes 1-16 only needs to be connected to the first decoding unit and the storage unit of syndromes 17-32 only to the second decoding unit, so the wiring complexity is reduced. The selection steps for the different groups can be performed in parallel, further reducing the decoding time.
  • In another manner, the obtained syndromes are divided into 2n/3 groups, where n is not greater than the degree of parallelism and n is an integer multiple of 3; optionally, n equals the degree of parallelism. At most one syndrome is selected from each group for decoding, that is, at most 2n/3 syndromes; then, according to a priority sorting performed again over every two groups, at most one further syndrome is selected from every two groups, that is, at most n/3 more syndromes, so that no more than n syndromes are selected for decoding in total.
  • For example, the 64 syndromes are divided into 4 groups of 16 each. As shown in Figure 5, at most one syndrome is selected from syndromes 1-16 and sent to the first decoding unit, at most one from syndromes 17-32 to the second decoding unit, at most one from syndromes 33-48 to the third decoding unit, and at most one from syndromes 49-64 to the fourth decoding unit.
  • Then, from syndromes 1-32 with the already-selected syndromes removed, at most one syndrome is selected and sent to the fifth decoding unit; from syndromes 33-64 with the already-selected syndromes removed, at most one syndrome is selected and sent to the sixth decoding unit. The first to fourth decoding units are first-level decoding units, and the fifth and sixth decoding units are second-level decoding units.
  • the algorithm complexity is low; and only the selected syndrome needs to be decoded, avoiding most of the unnecessary decoding operations, and the power consumption is low;
  • The storage unit of each group of syndromes again only needs to be connected to its corresponding decoding units: the storage unit of syndromes 1-16 only needs to be connected to the first and fifth decoding units, the storage unit of syndromes 17-32 to the second and fifth decoding units, the storage unit of syndromes 33-48 to the third and sixth decoding units, and the storage unit of syndromes 49-64 to the fourth and sixth decoding units, so the wiring complexity is low.
  • the steps of selecting at most one syndrome from each of the four groups can be performed in parallel, and the steps of selecting at most one syndrome from 1-32 and 33-64 can also be performed in parallel, further reducing the decoding time.
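  • A sketch, with assumed structure, of the two-level selection for the 64-syndrome, six-decoding-unit example: four first-level picks, one per group of 16, followed by two second-level picks over syndromes 1-32 and 33-64 with the already-selected syndromes excluded. The same pattern extends to the three-level 4n/7 scheme described next.

```python
def selection_key(entry):
    index, value, decode_count = entry          # value 0 means "no error detected"
    return (0 if value != 0 else 1, decode_count, index)


def pick_one(candidates, exclude):
    """Highest-priority non-zero syndrome not already selected, or None."""
    pool = [e for e in candidates if e[0] not in exclude and e[1] != 0]
    return min(pool, key=selection_key) if pool else None


def two_level_select(entries):
    """First level: one pick per group of 16 (decoding units 1-4).
    Second level: one pick per pair of groups (decoding units 5 and 6)."""
    groups = [entries[i:i + 16] for i in range(0, 64, 16)]
    chosen, used = [], set()
    for group in groups:                        # first-level decoding units
        pick = pick_one(group, used)
        if pick:
            chosen.append(pick)
            used.add(pick[0])
    for pair in (groups[0] + groups[1], groups[2] + groups[3]):   # second level
        pick = pick_one(pair, used)
        if pick:
            chosen.append(pick)
            used.add(pick[0])
    return chosen                               # at most 6 syndromes in total


entries = [(i, 1 if i in (3, 5, 18, 40, 41, 60) else 0, 0) for i in range(1, 65)]
picks = [e[0] for e in two_level_select(entries)]
assert picks == [3, 18, 40, 60, 5, 41]
```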
  • In yet another manner, the syndromes are divided into 4n/7 groups, where n is not greater than the degree of parallelism and n is an integer multiple of 7; optionally, n equals the degree of parallelism. At most one syndrome is selected from each group for decoding, that is, at most 4n/7 syndromes; every two groups are then re-prioritized and, according to the sorting result, at most one further syndrome is selected from every two groups, that is, at most 2n/7 syndromes; finally, every four groups are re-prioritized and at most one further syndrome is selected from every four groups, that is, at most n/7 syndromes, so that no more than n syndromes are decoded in total, where the syndromes selected in the three passes are all different.
  • For example, the 64 syndromes are divided into 4 groups of 16 each. As shown in Figure 6, at most one syndrome is selected from syndromes 1-16 and sent to the first decoding unit, at most one from syndromes 17-32 to the second decoding unit, at most one from syndromes 33-48 to the third decoding unit, and at most one from syndromes 49-64 to the fourth decoding unit.
  • the first to fourth decoding units are first-level decoding units.
  • Then, from syndromes 1-32 with the already-selected syndromes removed, at most one syndrome is selected and sent to the fifth decoding unit; from syndromes 33-64 with the already-selected syndromes removed, at most one syndrome is selected and sent to the sixth decoding unit. The fifth and sixth decoding units are second-level decoding units. Finally, from syndromes 1-64 with the already-selected syndromes removed, at most one syndrome is selected and sent to the seventh decoding unit, which is the third-level decoding unit.
  • The step of selecting syndromes for decoding according to the priority ordering is thus divided into three levels: in the first level, the selections of at most one syndrome from each of the four groups can be performed in parallel; in the second level, the selections of at most one syndrome from syndromes 1-32 and from syndromes 33-64 can also be performed in parallel; finally, in the third level, at most one syndrome is selected from syndromes 1-64. This increases the parallelism of the selection and reduces the decoding time.
  • more than one syndrome can be selected in each group.
  • For example, if there are 64 syndromes and 6 decoding units that can decode in parallel, the 64 syndromes are divided into 4 groups of 16 each. In this case, at most two syndromes can be selected from each of two groups for decoding and at most one syndrome from each of the remaining two groups; the storage units of the first two groups are then each connected to two corresponding decoding units, and the storage units of the remaining two groups are each connected to one corresponding decoding unit. Alternatively, at most three syndromes are selected from one group for decoding and at most one syndrome from each of the remaining three groups; the storage unit of that group is then connected to three corresponding decoding units, and the storage units of the remaining three groups are each connected to one corresponding decoding unit.
  • In the soft-decision case, one syndrome can be selected from each group, with additional syndromes selected from some groups so that a total of 6 syndromes are obtained for decoding. The selection method is to select two syndromes from each of two groups and one syndrome from each of the remaining two groups; or to select three syndromes from one group and one syndrome from each of the remaining three groups.
  • more than one syndrome may also be selected from each of the two groups.
  • Optionally, the priority order preset in each group is different from the priority order at the previous decoding time.
  • For example, if the priority order in each group is 1, 2, 3, ..., then at the next decoding time the priority order can become 2, 3, 4, ..., 1; the priority order is in a cyclic-shift relationship with that of the previous decoding time.
  • the priority of non-zero syndromes is higher than the priority of syndromes with a value of zero at any decoding time.
  • For example, syndromes 1-16, 17-32, 33-48 and 49-64 are divided into four groups. At the current decoding time, selection within the groups follows the orders (1, 2, ..., 16), (17, 18, ..., 32), (33, 34, ..., 48), (49, 50, ..., 64), and syndromes 1, 17, 33 and 49 are selected for decoding. At the next decoding time, syndromes 1-16, 17-32, 33-48 and 49-64 are again divided into four groups, but the orders become (2, 3, ..., 16, 1), (18, 19, ..., 32, 17), (34, 35, ..., 48, 33), (50, 51, ..., 64, 49), and syndromes 2, 18, 34 and 50 are decoded, and so on.
  • If, at the current decoding time, syndromes 1-3 and 17-20 are zero syndromes, then according to the same priority order, with non-zero syndromes ranked above zero syndromes, syndromes 4, 21, 33 and 49 are selected for decoding. If, at the next decoding time, syndromes 4-6 and 18 among the 64 syndromes are zero syndromes, then according to the shifted priority order, again with non-zero syndromes ranked above zero syndromes, syndromes 2, 19, 34 and 50 are selected for decoding.
  • the priority order of selecting at most one syndrome from each group may not follow the cyclic shift relationship.
  • the priority order of the non-zero syndromes is randomly set, etc., which is not specifically limited in this application.
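  • A sketch of the cyclic-shift idea: the preset order inside each group is rotated by one position at each decoding time, while non-zero syndromes still rank ahead of zero-valued ones. The rotation step of one and the function names are assumptions for illustration; the numbers reproduce the example above.

```python
def rotated_order(group_indices, decode_time):
    """Cyclically shift the preset order: (1, 2, ..., 16) at time 0 becomes (2, ..., 16, 1) at time 1."""
    shift = decode_time % len(group_indices)
    return group_indices[shift:] + group_indices[:shift]


def select_from_group(group_indices, values, decode_time):
    """Pick the first non-zero syndrome in the rotated order; return None if the whole group is zero."""
    for idx in rotated_order(group_indices, decode_time):
        if values[idx] != 0:
            return idx
    return None


group1, group2 = list(range(1, 17)), list(range(17, 33))

values_t0 = {i: 1 for i in range(1, 65)}
for z in (1, 2, 3, 17, 18, 19, 20):
    values_t0[z] = 0                       # zero syndromes at the current decoding time
assert select_from_group(group1, values_t0, decode_time=0) == 4
assert select_from_group(group2, values_t0, decode_time=0) == 21

values_t1 = {i: 1 for i in range(1, 65)}
for z in (4, 5, 6, 18):
    values_t1[z] = 0                       # zero syndromes at the next decoding time
assert select_from_group(group1, values_t1, decode_time=1) == 2    # order is now (2, 3, ..., 16, 1)
assert select_from_group(group2, values_t1, decode_time=1) == 19   # order is now (18, 19, ..., 32, 17)
```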
  • The syndrome corresponding to each codeword in the obtained plurality of codewords can be selected from the syndromes stored in the memory. For example, if 64 syndromes are to be obtained and the memory includes 64 storage units, one syndrome is selected from each storage unit to obtain the 64 syndromes; alternatively, with 32 storage units, two syndromes may be selected from each storage unit, which is not specifically limited in this application. Moreover, the syndromes obtained at two decoding times may differ: 64 syndromes are obtained anew for the next decoding, and they may be exactly the same as last time, partly different, or even completely different.
  • When the selected syndromes are decoded, taking one of them (the first syndrome) as an example: if the decoding fails, no operation is performed, or only the corresponding soft information amplitude is updated; if the decoding succeeds, the first syndrome and the codeword corresponding to the first syndrome are updated according to the decoding result.
  • The decoding result includes the incremental syndrome and the flip bits corresponding to the first syndrome. The incremental syndrome is superimposed on the first syndrome to obtain an updated syndrome, which replaces the original first syndrome; then, according to the flip bits, the bits corresponding to the flip bits in the corresponding codeword are flipped. For example, if the codeword corresponding to the first syndrome includes 100 bits and the flip bits indicate the 30th bit, the 30th bit of the codeword is flipped, that is, 0 becomes 1 or 1 becomes 0.
  • the soft information amplitude corresponding to the syndrome is also decoded together.
  • If decoding of the first syndrome succeeds, the incremental syndrome, flip bits and updated soft information amplitude corresponding to the first syndrome are obtained; the incremental syndrome is superimposed on the first syndrome to obtain an updated syndrome that replaces the original first syndrome; according to the flip bits, the bits corresponding to the flip bits in the corresponding codeword are flipped; and the updated soft information amplitude replaces the original soft information amplitude to complete the decoding.
  • Each syndrome is stored for the same length of time; that is, each syndrome can only be stored in the storage unit for a fixed time, say 2 microseconds, after which it is overwritten by a newly received syndrome.
  • the storage time of the codeword corresponding to the syndrome is also the same.
  • The time window in which each syndrome may be decoded is the same, say 1 microsecond: once a syndrome has been stored for 1 microsecond, its address is treated as an invalid address regardless of whether the syndrome has been decoded, and the syndrome is not decoded again until it is overwritten by a newly stored syndrome.
  • The number of decoding attempts for a syndrome can be further limited. For example, if the threshold on the number of decoding attempts is set to 3, each syndrome is decoded at most 3 times; once a syndrome has been decoded three times, it is no longer selected for decoding. It is also possible not to limit the storage time or the time available for decoding, but to limit only the number of decoding attempts: as soon as the number of times a syndrome has been decoded reaches the threshold, its address is regarded as an invalid address until it is overwritten by a newly stored syndrome, and the corresponding codeword is output from the corresponding storage unit.
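  • A sketch of these lifetime and retry limits, using the example figures above (2-microsecond storage lifetime, 1-microsecond decoding window, at most 3 decoding attempts) as assumed constants; an entry whose window has expired or whose retry budget is spent is treated as an invalid address until it is overwritten.

```python
from dataclasses import dataclass

STORAGE_LIFETIME_US = 2.0   # after this, the entry is overwritten by a newly received syndrome
DECODE_WINDOW_US = 1.0      # after this, the address is treated as invalid for scheduling
MAX_DECODE_ATTEMPTS = 3     # retry budget per syndrome


@dataclass
class StoredSyndrome:
    stored_at_us: float
    decode_count: int = 0


def is_decodable(entry: StoredSyndrome, now_us: float) -> bool:
    """A syndrome may still be scheduled only while its decoding window is open
    and its retry budget has not been exhausted."""
    age = now_us - entry.stored_at_us
    return age < DECODE_WINDOW_US and entry.decode_count < MAX_DECODE_ATTEMPTS


def can_overwrite(entry: StoredSyndrome, now_us: float) -> bool:
    """The storage slot may be reused once the storage lifetime has elapsed."""
    return now_us - entry.stored_at_us >= STORAGE_LIFETIME_US


e = StoredSyndrome(stored_at_us=0.0)
assert is_decodable(e, now_us=0.5) and not is_decodable(e, now_us=1.2)
assert not can_overwrite(e, now_us=1.5) and can_overwrite(e, now_us=2.0)
```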
  • BCH: Bose-Chaudhuri-Hocquenghem code.
  • A code with an overall rate of 0.917 is constructed from 64 cyclic BCH(968, 928) codes, each with a code length of 968 and an information length of 928 and able to correct 4 bit errors; that is, each codeword contains 968 bits, namely 928 information bits and 40 parity bits.
  • Half of the bits in each BCH (968, 928) are from the previously formed codeword.
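  • A short consistency check of these parameters, under the assumption that the component code is a shortened binary BCH code over GF(2^m) and that the stated rate of 0.917 is computed over the half of each codeword that is newly formed; both assumptions are inferences for illustration, not statements from the application.

```python
n, k, t = 968, 928, 4              # code length, information length, correctable bits (from the text)
parity = n - k
assert parity == 40                # 40 parity bits per codeword

# Assumption: a binary BCH code over GF(2^m) needs 2**m - 1 >= n, giving m = 10 here,
# and the usual bound m * t >= n - k is met with equality.
m = next(m for m in range(2, 32) if 2 ** m - 1 >= n)
assert m == 10 and m * t == parity

# Half of each codeword's bits come from previously formed codewords, so each codeword
# carries n/2 new bits, of which n/2 - parity are new information bits.
new_bits = n // 2
new_info = new_bits - parity
assert abs(new_info / new_bits - 0.917) < 1e-3   # consistent with the stated code rate of 0.917
```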
  • Another embodiment of the present application provides a storage load-balancing solution, in which the syndromes corresponding to a first frame are stored in groups and the numbers of syndromes corresponding to the first frame stored in the different groups differ by at most one, where the first frame includes multiple codewords; optionally, each group stores the same number of syndromes corresponding to the codewords.
  • the following description will be given by taking as an example that each group stores the same number of syndromes corresponding to codewords from the same frame.
  • A data frame includes k codewords and each codeword corresponds to one syndrome; the syndromes corresponding to the codewords of the data frame are stored in k groups, one syndrome per group. The decoding window of the decoder is b frames long, where k and b are both positive integers, so the storage units need to hold k*b syndromes in total.
  • The j-th storage unit 701 stores the syndromes of the j-th codewords of different frames, with frame numbers cyclically accumulating from 1 to b. Assuming the frame number of the currently input frame is 1, the codewords of the current frame contain the most bit errors and therefore require the most decoding and storage updates, so this scheme distributes the syndromes corresponding to all codewords of current frame 1 uniformly across all storage units 701. Similarly, the syndromes corresponding to all codewords of frames 2, 3, ..., b are also distributed uniformly across all storage units 701, but these frames contain fewer bit errors than the first frame and require fewer decoding and storage updates. For each storage unit 701, the load it handles is the average of the storage load over the decoding window of b frames; the design therefore ensures that all storage units 701 carry a substantially uniform load, achieves thermal-density balance, and avoids local overheating.
  • The number of storage groups can also differ from the number of codewords contained in a data frame. For example, if each data frame contains 11 codewords stored in 10 groups, one group stores the syndromes corresponding to two codewords and each of the remaining nine groups stores the syndrome corresponding to one codeword, which keeps storage nearly uniform and avoids large local power consumption. It should be understood that the storage of the codewords themselves can also satisfy the above conditions, so that the codewords of each frame are stored evenly and local power consumption is reduced.
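  • A sketch of the even-storage rule under an assumed 0-based indexing: the syndrome of the j-th codeword of every frame goes to the j-th storage unit, and frame numbers cycle from 1 to b, so each of the k storage units holds exactly one syndrome per frame in the decoding window.

```python
def storage_unit_for(codeword_index: int, k: int) -> int:
    """The syndrome of the j-th codeword of any frame goes to the j-th storage unit (0-based)."""
    return codeword_index % k


def slot_for(frame_number: int, b: int) -> int:
    """Frame numbers cycle from 1 to b, so each storage unit keeps one slot per frame in the window."""
    return (frame_number - 1) % b


k, b = 11, 4                       # example values: 11 codewords per frame, a 4-frame decoding window
load = [[None] * b for _ in range(k)]
for frame in range(1, b + 1):
    for j in range(k):
        load[storage_unit_for(j, k)][slot_for(frame, b)] = (frame, j)

# Every storage unit ends up holding exactly b syndromes, one from each frame in the window,
# so the storage (and hence thermal) load is spread evenly across the units.
assert all(all(slot is not None for slot in unit) for unit in load)
```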
  • A correspondence between syndrome storage units and decoding units is thereby established, so the decoding load of each decoding unit is likewise the average of the decoding load over the decoding window of b frames. Since the number of decodings handled by each decoding unit is balanced, the power consumed in the chip area where each decoding unit is located is also similar, which can significantly improve thermal-density balance and reduce the engineering difficulty of chip implementation.
  • The present application provides a decoding apparatus that, as shown in FIG. 8, includes a controller 801 and a decoder 802.
  • The controller 801 is configured to obtain the syndrome corresponding to each codeword in the plurality of codewords and to group the obtained syndromes; it is further configured to perform priority sorting within each group of syndromes and, according to the priority sorting result of each group, to select syndromes and send them to the decoder 802. The decoder 802 is configured to decode the received syndromes.
  • The decoding device disclosed in the embodiment of the present application does not apply the same decoding process to every codeword, which avoids the problem of the traditional decoding scheme, in which the same number of decoding passes is required regardless of whether the codeword itself is already correct, realizing on-demand decoding and reducing the demand for decoding resources and system power consumption.
  • The priority of a non-zero syndrome is higher than that of a syndrome whose value is zero; further, a non-zero syndrome with more decoding attempts has lower priority than a non-zero syndrome with fewer decoding attempts, that is, when sorting non-zero syndromes, those with fewer decoding attempts are given higher priority.
  • Specific examples are as described in the foregoing method embodiments and are not repeated here.
  • There are many ways to select syndromes for decoding, which have been described in detail in the foregoing method embodiments and are not repeated in this embodiment.
  • each group includes the same number of syndromes, so that the grouping of syndromes is the most uniform, and the complexity of prioritization is reduced.
  • In the case of soft-decision decoding, zero-valued syndromes may also need to be decoded, and the decoding device also receives the soft information amplitude corresponding to the bits included in each codeword.
  • In addition to sending the selected syndromes to the decoder 802 for decoding, the controller 801 also needs to send the corresponding soft information amplitudes to the decoder 802.
  • the soft information amplitude and the sign bit are collectively referred to as soft information
  • the sign bit is the value (0 or 1) of each bit in the codeword
  • The soft information amplitude indicates the probability that each bit is 0 or 1; that is, the soft information indicates the probability that the corresponding bit is 0 or 1.
  • Optionally, the priority order preset in each group is different from the priority order at the previous decoding time.
  • For example, if the priority order in each group is 1, 2, 3, ..., then at the next decoding time the priority order can become 2, 3, 4, ..., 1; the priority order is in a cyclic-shift relationship with that of the previous decoding time.
  • the priority of non-zero syndromes is higher than the priority of syndromes with a value of zero at any decoding time.
  • For example, syndromes 1-16, 17-32, 33-48 and 49-64 are divided into four groups. At the current decoding time, selection within the groups follows the orders (1, 2, ..., 16), (17, 18, ..., 32), (33, 34, ..., 48), (49, 50, ..., 64), and syndromes 1, 17, 33 and 49 are selected for decoding. At the next decoding time, syndromes 1-16, 17-32, 33-48 and 49-64 are again divided into four groups, but the orders become (2, 3, ..., 16, 1), (18, 19, ..., 32, 17), (34, 35, ..., 48, 33), (50, 51, ..., 64, 49), and syndromes 2, 18, 34 and 50 are decoded, and so on.
  • If, at the current decoding time, syndromes 1-3 and 17-20 are zero syndromes, then according to the same priority order, with non-zero syndromes ranked above zero syndromes, syndromes 4, 21, 33 and 49 are selected for decoding. If, at the next decoding time, syndromes 4-6 and 18 among the 64 syndromes are zero syndromes, then according to the shifted priority order, again with non-zero syndromes ranked above zero syndromes, syndromes 2, 19, 34 and 50 are selected for decoding.
  • the priority order of selecting at most one syndrome from each group may not follow the cyclic shift relationship.
  • the priority order of the non-zero syndromes is randomly set, etc., which is not specifically limited in this application.
  • the decoding apparatus further includes a memory 803, and the syndrome corresponding to each codeword in the obtained multiple codewords can be selected from the syndromes stored in the memory 803, for example, the number of obtained multiple syndromes is 64, the memory 803 includes 64 storage units, and 64 syndromes can be obtained by selecting one syndrome from each storage unit.
  • The storage units may be different locations in the same memory or in multiple memories, or each memory may itself constitute one storage unit; this is not limited in this application.
  • The syndromes obtained at two decoding times may differ: 64 syndromes are obtained anew for the next decoding, and they may be exactly the same as last time, partly different, or even completely different.
  • The decoder 802 decodes the received syndromes; if decoding of the first syndrome succeeds, the first syndrome and the codeword corresponding to the first syndrome are updated according to the decoding result, and if decoding fails, no operation is performed.
  • The decoder 802 performs hard-decision decoding on the received syndromes. Taking one of them (the first syndrome) as an example, if the decoding succeeds, the incremental syndrome and flip bits corresponding to the first syndrome are obtained.
  • the controller 801 is used to superimpose the incremental syndrome and the first syndrome, so that the memory 803 stores the updated syndrome; it is also used to store the updated syndrome according to the inversion bit. bit, flip the bit corresponding to the flip bit in the corresponding code word, so that the memory 803 stores the updated bit; assuming that the code word corresponding to the first syndrome includes 100 bits, and the flip bit indicates the 30th bit, then the code The 30th bit in the word is flipped, i.e. from 0 to 1 or from 1 to 0.
  • when the decoder 802 performs soft-decision decoding on the received syndrome, the soft information amplitude corresponding to the syndrome needs to be decoded together with it.
  • for the first syndrome, if the decoding succeeds, the incremental syndrome, the flip bits and the updated soft information amplitude corresponding to the first syndrome are obtained and sent to the memory 803.
  • the controller 801 is configured to superimpose the incremental syndrome on the first syndrome so that the memory 803 stores the updated syndrome, and to flip, according to the flip bits, the bits of the corresponding codeword indicated by them so that the memory 803 stores the updated bits (an update sketch covering both cases is given after this list).
  • the memory 803 may include different storage units, for example, a syndrome storage unit, a data storage unit and a soft information storage unit, which are respectively used to store syndromes, codewords and corresponding soft information.
  • the decoding apparatus further includes a scheduling unit 804, as shown in FIG. 9, whose main functions include: sending syndromes from the memory 803 to the decoder 802 according to the instructions of the controller 801, and sending the incremental syndromes and flip bits output by the decoder 802 to the memory 803.
  • for soft-decision decoding, the scheduling unit 804 also sends the soft information amplitudes in the memory 803 to the decoder 802 according to the instructions of the controller 801, and sends the soft information amplitudes output by the decoder 802 to the memory 803.
  • in particular, in an actual implementation the bandwidth of the scheduling unit 804 can be constrained to reduce power consumption, for example by limiting the number of incremental syndromes and flip bits sent into the memory 803 at each moment to a specific threshold; the scheduling unit 804 then buffers the incremental syndromes and flip bits exceeding the threshold and sends them to the memory at the next moment (a scheduler sketch is given after this list).
  • each syndrome is stored for the same amount of time, that is, each syndrome can only be stored in the memory 803 for a fixed time; assuming 2 microseconds, once the storage time reaches 2 microseconds the syndrome is overwritten by a newly received syndrome.
  • likewise, the codeword corresponding to the syndrome is stored for the same amount of time.
  • the pending-decoding time of each syndrome is also the same; assuming 1 microsecond, once a syndrome has been stored for 1 microsecond its address is regarded as an invalid address regardless of whether the syndrome has been decoded, and the syndrome will not be decoded again until it is overwritten by a newly stored syndrome.
  • while keeping the storage time or pending-decoding time the same, the number of times a syndrome is decoded can be further limited; for example, if the threshold of the decoding count is set to 3, each syndrome is decoded at most 3 times, and once a syndrome has been decoded 3 times it will no longer be selected for decoding.
  • alternatively, the storage time or pending-decoding time may be left unlimited and only the decoding count limited: as soon as the number of times a syndrome has been decoded reaches the threshold, its address in the memory 803 is regarded as an invalid address until it is overwritten by a newly stored syndrome, and the corresponding codeword is output from the corresponding memory (an eligibility sketch is given after this list).
  • the memory includes multiple storage units, and the numbers of syndromes corresponding to the first frame stored in the respective storage units differ by at most one, so that storage is uniform, where the first frame includes multiple codewords; further, each storage unit may store the same number of syndromes corresponding to codewords belonging to the same frame, and the following description takes this case as an example.
  • assume the decoding window of the decoder spans b frames and each data frame contains k codewords, where k and b are both positive integers; the memory 803 then needs to store k*b syndromes in total.
  • An embodiment of the present application provides a load balancing solution.
  • the jth storage unit stores the syndromes of the jth codewords of different frames, and the frame numbers are cyclically accumulated from 1 to b.
  • the number of storage units can also differ from the number of codewords contained in a data frame; for example, if each data frame contains 100 codewords and there are 10 storage units, each storage unit stores the syndromes corresponding to 10 codewords of that frame, and if each data frame contains 11 codewords while there are still 10 storage units, one storage unit stores the syndromes corresponding to two codewords and each of the remaining 9 storage units stores the syndrome corresponding to one codeword; this embodiment ensures uniform storage and avoids large local power consumption (a storage-mapping sketch is given after this list).
  • the received codewords and syndromes are stored in different storage units, and the codewords can be stored according to the above syndrome storage scheme; in the case of soft decision, the soft information amplitude corresponding to each bit in the codeword is also received, and the soft information amplitudes can likewise be stored according to the above syndrome storage scheme.
  • through the on-demand decoding scheme provided by the foregoing embodiments, a correspondence between the storage units storing the syndromes and the decoding units is established, so the decoding load of each decoding unit is likewise the average of the decoding load within the decoding window of b frames in total; since the number of decodings each decoding unit participates in is balanced, the power consumption generated over the chip area of each decoding unit is also similar, which significantly improves the level of thermal density balance and reduces the engineering difficulty of chip implementation.
  • the decoding apparatus involved in this application may be composed of an application-specific integrated circuit (ASIC) and a field-programmable gate array (FPGA), where each functional device, including the memory, the scheduling unit and so on, can be implemented by an ASIC or an FPGA, together constituting the decoding apparatus.
  • the embodiments of the present application provide a computer-readable storage medium or a computer program product for storing a computer program, and the computer program is used to execute the decoding method disclosed in the method embodiments of the present application.
  • the disclosed apparatus and method may be implemented in other manners.
  • the apparatus embodiments described above are only illustrative.
  • the division of the units is only a logical function division; in actual implementation there may be other division manners, for example multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units may be implemented in the form of hardware, or may be implemented in the form of software functional units.
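
The selection sketch referenced in the list above illustrates the per-group, priority-based choice of syndromes. It is a minimal illustration rather than the implementation of the application: it assumes syndromes are represented as integers (0 denoting a zero syndrome), that each syndrome carries a decode counter, that groups are equal-sized, and that the preset in-group order is cyclically shifted from one decoding moment to the next; all function and variable names are illustrative.

```python
# Minimal sketch of per-group priority selection (not the application's implementation).
# Assumptions: syndromes are integers (0 = zero syndrome), one decode counter per syndrome,
# groups are equal-sized, and the base order inside a group is cyclically shifted each moment.

def select_syndromes(syndromes, decode_counts, num_groups, moment, hard_decision=True):
    """Pick at most one syndrome index per group for this decoding moment."""
    group_size = len(syndromes) // num_groups
    selected = []
    for g in range(num_groups):
        base = g * group_size
        # Cyclic shift of the preset order: moment 0 -> 1,2,...,16; moment 1 -> 2,...,16,1; etc.
        order = [base + (moment + k) % group_size for k in range(group_size)]
        # Non-zero syndromes rank above zero syndromes; fewer decodings rank above more.
        candidates = sorted(
            order,
            key=lambda i: (syndromes[i] == 0, decode_counts[i], order.index(i)),
        )
        best = candidates[0]
        if hard_decision and syndromes[best] == 0:
            continue  # hard decision: a group whose syndromes are all zero selects nothing
        selected.append(best)
    return selected

# Example: 64 syndromes in 4 groups; syndromes 1-3 (indices 0-2) are zero.
syndromes = [0, 0, 0, 5] + [1] * 60
counts = [0] * 64
print(select_syndromes(syndromes, counts, num_groups=4, moment=0))  # -> [3, 16, 32, 48]
```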
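
The update sketch referenced above covers the update of the first syndrome and of its codeword after a successful decoding, for both hard and soft decision. The application only states that the incremental syndrome is superimposed on the first syndrome; the sketch assumes that this superposition is a bitwise XOR (a common realisation for binary codes) and that flip bits are given as 0-based positions, and the names are hypothetical.

```python
# Illustrative update step after a successful decoding (hard or soft decision).
# Assumption: "superimposing" the incremental syndrome is modelled as a bitwise XOR;
# the application itself only says the incremental syndrome is superimposed.

def apply_decoding_result(syndrome, codeword, incremental_syndrome, flip_positions,
                          soft_amplitudes=None, updated_amplitudes=None):
    """Return the updated syndrome, codeword and (optionally) soft information amplitudes."""
    # Superimpose the incremental syndrome on the stored syndrome.
    new_syndrome = [s ^ d for s, d in zip(syndrome, incremental_syndrome)]
    # Flip the indicated bits of the corresponding codeword.
    new_codeword = list(codeword)
    for pos in flip_positions:
        new_codeword[pos] ^= 1
    # Soft decision only: replace the stored amplitudes with the updated ones.
    new_amplitudes = soft_amplitudes
    if soft_amplitudes is not None and updated_amplitudes is not None:
        new_amplitudes = list(updated_amplitudes)
    return new_syndrome, new_codeword, new_amplitudes

# Example: a 100-bit codeword whose 30th bit (index 29) is flipped, as in the text above.
codeword = [0] * 100
syndrome = [1, 0, 1, 1, 0]
updated = apply_decoding_result(syndrome, codeword, [1, 0, 1, 1, 0], [29])
assert updated[0] == [0, 0, 0, 0, 0] and updated[1][29] == 1
```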
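
The scheduler sketch referenced above illustrates the bandwidth constraint on the scheduling unit 804. It is a simplified sketch under the assumption that the constraint is a per-moment cap on the number of write-back records (incremental syndrome plus flip bits) sent into the memory, with the excess buffered until the next moment; the class and method names are not taken from the application.

```python
# Simplified sketch of a bandwidth-constrained write-back path of the scheduling unit.
from collections import deque

class WriteBackScheduler:
    def __init__(self, max_records_per_moment):
        self.max_records = max_records_per_moment
        self.pending = deque()        # records buffered because the cap was exceeded

    def submit(self, records):
        """Queue decoder outputs (incremental syndrome, flip bits) for write-back."""
        self.pending.extend(records)

    def tick(self, memory_write):
        """Write at most max_records records this moment; keep the rest for the next one."""
        for _ in range(min(self.max_records, len(self.pending))):
            memory_write(self.pending.popleft())

# Example: a cap of 2 records per moment.
written = []
sched = WriteBackScheduler(max_records_per_moment=2)
sched.submit([("inc0", [3]), ("inc1", [7]), ("inc2", [12])])
sched.tick(written.append)   # writes 2 records now
sched.tick(written.append)   # writes the buffered third record at the next moment
assert len(written) == 3
```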
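
The eligibility sketch referenced above combines the storage-time, pending-decoding-time and decode-count limits into a small check. It only illustrates the rules stated in the list, with assumed names and with time in arbitrary units (microseconds in the example).

```python
# A syndrome stops being eligible for decoding once its pending-decoding time or its decode
# count reaches a threshold; its address is then treated as invalid until it is overwritten.
from dataclasses import dataclass

@dataclass
class StoredSyndrome:
    stored_at: float          # time the syndrome was written into its storage unit
    decode_count: int = 0     # how many times it has already been decoded

def is_decodable(entry, now, max_pending_time=1.0, max_decodes=3):
    """True if the syndrome at this address may still be selected for decoding."""
    if now - entry.stored_at >= max_pending_time:
        return False          # address regarded as invalid: waited too long
    if entry.decode_count >= max_decodes:
        return False          # address regarded as invalid: decoded often enough
    return True

# Example with the thresholds used in the text (1 microsecond, 3 decodings).
entry = StoredSyndrome(stored_at=0.0, decode_count=2)
assert is_decodable(entry, now=0.5)        # still eligible
assert not is_decodable(entry, now=1.2)    # pending-decoding time exceeded
entry.decode_count = 3
assert not is_decodable(entry, now=0.5)    # decode-count threshold reached
```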
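
The storage-mapping sketch referenced above shows the load-balanced placement of syndromes as an index mapping: codeword j of every frame goes to storage unit j modulo the number of units, so that per frame the unit loads differ by at most one. The dictionary-based "memory" below is purely illustrative and not the hardware organisation of the application.

```python
# Sketch of the uniform storage mapping: codeword j of every frame goes to storage unit
# j % num_units, so the per-unit load differs by at most one syndrome per frame.

def storage_unit_for(codeword_index, num_units):
    """Storage unit that holds the syndrome of this codeword, for every frame."""
    return codeword_index % num_units

def place_frame(memory, frame_id, syndromes, num_units):
    """Distribute the syndromes of one frame over the storage units."""
    for j, syndrome in enumerate(syndromes):
        memory[storage_unit_for(j, num_units)].append((frame_id, j, syndrome))

# Example: 11 codewords per frame, 10 storage units, decoding window of b = 3 frames.
num_units, k, b = 10, 11, 3
memory = {u: [] for u in range(num_units)}
for frame_id in range(1, b + 1):
    place_frame(memory, frame_id, syndromes=[0] * k, num_units=num_units)
loads = sorted(len(v) for v in memory.values())
assert loads == [3] * 9 + [6]   # one unit holds two codewords per frame, the rest hold one
```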

Landscapes

  • Physics & Mathematics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Algebra (AREA)
  • General Physics & Mathematics (AREA)
  • Pure & Applied Mathematics (AREA)
  • Error Detection And Correction (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

This application discloses a decoding method that can be applied to multiple scenarios such as metropolitan area networks, backbone networks and data center interconnection, meeting the requirements of optical transmission. The method includes: obtaining a syndrome corresponding to each of multiple codewords; grouping the obtained syndromes and performing priority ordering within each group of syndromes; and selecting syndromes for decoding according to the priority ordering result of each group. Because this decoding method does not perform the same decoding processing on every codeword, it avoids the problem in conventional static decoding schemes that every codeword must be decoded the same number of times regardless of whether the codeword itself is correct, thereby achieving on-demand decoding and reducing the demand for decoding resources and the power consumption.

Description

按需译码方法及装置
本申请要求于2020年7月3日提交中国国家知识产权局、申请号为202010631750.4、申请名称为“按需译码方法及装置”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及一种译码技术,尤其涉及一种低功耗的按需译码技术。
背景技术
前向纠错编码(Forward Error Correction,FEC)技术已经广泛应用于无线蜂窝、无线网络、存储及高速光传输系统中。前向纠错编码技术的出发点是在发射机编码时通过加入某些校验比特,在已经产生了误码的接收端码流中,通过对校验比特进行计算来纠正码流中的误码,以降低接收端的信噪比(Optical Signal Noise Ratio,OSNR)容限,从而达到改善系统的误码率性能,提高系统通信的可靠性,延长信号的传输距离,降低发射机发射功率以及降低系统成本的目的。
近几年来,光通信系统经历了飞速发展,从100Gbps发展到400Gbps,进而到未来使用的800Gbps光通信系统,对FEC编码增益提出了更高的要求,FEC编码也离香农极限越来越近。随之而来,FEC译码的复杂度越来越高,译码功耗越来越大,无法满足产品的需求。
发明内容
本申请提供一种译码方法,通过对输入码字进行优先级排序以及按需调度译码,解决了现有技术中译码的复杂度高,译码功耗大的问题。
第一方面,提供一种译码方法,获得多个码字中每个码字对应的校正子;对得到的所述校正子进行分组,在每一组校正子中进行优先级排序;根据所述每一组校正子的优先级排序结果,挑选校正子进行译码。
本申请实施例并不对每一个码字的校正子都进行相同的译码处理,避免了传统静态译码方案中,无论码字本身是否正确均需要进行同等次数译码的问题,实现按需译码,降低译码资源的需求和系统功耗。
在一种可能的实现方式中,非零校正子的优先级高于值为零的校正子。进一步地,译码次数多的非零校正子的优先级低于译码次数少的非零校正子。而且,还可以限定译码次数,当校正子被译码次数达到阈值时,将不会再被译码,例如,可以将阈值设为3次,只要译码次数达到3次,则不会再被译码。提高需要译码的校正子被译码的机会,从而提高译码效率。
可选地,在硬判决的情况下,非零校正子的优先级始终高于值为零的校正子;针对软判决译码的情况下,非零校正子的优先级可以始终高于值为零的校正子,也可以是译码次数优先,例如,无论校正子的值是否为零,译码次数多的校正子优先级低于译码次数少的校正子,如果两个校正子译码次数相同,非零校正子的优先级再高于零校正子。此外,在采用软判决译码的情况下,还可以根据软信息的可靠度进行优先级排序,本申请不做限定。
在一种可能的实现方式中,所述译码方法应用于包括多个译码单元的译码装置;所述挑选校正子进行译码,包括:从每组中挑选最多一个校正子,分别发送给不同的译码单元进行硬判决或软判决译码,其中,选出的校正子均为非零校正子。由于采用硬判决译码时,值为零的校正子无需译码;如果某一组中所有校正子的值均为0,则该组中不会有校正子被选出进行译码,因此,每组中最多挑选一个。此时,每组校正子的存储单元只需连接对应的译码单元,连线复杂度会有所降低。当然,每组也可以挑选出至多两个校正子或更多,这样的话,每组校正子的存储单元连接的译码单元数量变为两个或更多。应理解,软判决译码的情况下,零校正子可以进行译码,提高译码性能;也可以不进行译码,降低译码复杂度。
在一种可能的实现方式中,所述译码方法应用于包括多个译码单元的译码装置;所述挑选校正子进行译码,包括:从每组中挑选一个校正子,分别发送给不同的译码单元进行软判决译码。此时,每组校正子的存储单元只需连接对应的译码单元,连线复杂度会有所降低。当然,每组也可以挑选出两个校正子或更多,这样的话,每组校正子的存储单元连接的译码单元数量变为两个或更多。
可选地,分组的组数与译码单元的个数相同,最大化利用译码资源。
在一种可能的实现方式中,所述译码方法应用于包括多个译码单元的译码装置;所述挑选校正子进行译码,包括:从每一组中挑选最多一个校正子,分别发送给不同的译码单元进行硬判决或软判决译码;在每两个分组中再次对校正子进行优先级排序,根据排序结果,从每两组中再挑选最多一个校正子,分别发送给不同的译码单元进行硬判决或软判决译码,其中,两次挑选的校正子不同,且选出的校正子均为非零校正子。在一种可能的实现方式中,所述译码方法应用于包括多个译码单元的译码装置;所述挑选校正子进行译码,包括:从每一组中挑选一个校正子,分别发送给不同的译码单元进行软判决译码;在每两个分组中再次对校正子进行优先级排序,根据排序结果,从每两组中再挑选一个校正子,分别发送给不同的译码单元进行软判决译码,其中,两次挑选的校正子不同。进一步地,所述对得到的所述校正子进行分组,包括:将得到的所述校正子分为2/3n组,其中,n为译码单元的个数,且n为3的整数倍。
本方案中,也可以保证每次选取都是选出最多一个校正子,算法复杂度低;且只需对选出的校正子进行译码,避免了大部分非必要的译码运算,功耗低;校正子的存储单元也是只需连接对应的译码单元,连线复杂度低。可选地,每组也可以挑选出两个校正子或更多的校正子。
另外,需要考虑译码单元当前的工作状态,如果有译码单元是空闲的,尽量安排到空闲的译码单元,避免译码单元负载出现不均衡的情况。
在一种可能的实现方式中,每一组包括的校正子数目相同,实现均匀分组,可以保证挑选校正子时,算法复杂度较低。
在一种可能的实现方式中,每一组中包括的校正子具有不同的编号或地址,可以根据不同的编号或地址识别对应的校正子,从而对其进行优先级排序。
在一种可能的实现方式中,在对选出的校正子进行译码之后,所述方法还包括:再次对每一组的校正子进行优先级排序,根据本次优先级排序结果,再次挑选校正子进行译码。进一步地,两次对每一组的校正子进行优先级排序的过程中,所述优先级排序的方法可以不同。
在一种可能的实现方式中,所述方法还包括:如果对第一校正子译码成功,则根据译码结果,对所述第一校正子和所述第一校正子对应的码字进行更新,其中,所述第一校正子为所述进行译码的校正子中的一个。
结合上一种可能的实现方式,在本实现方式中,所述根据译码结果,对第一校正子和所述第一校正子对应的码字进行更新,具体包括:所述译码结果包括所述第一校正子对应的增量校正子和翻转比特位;将所述增量校正子与所述第一校正子叠加,得到更新的校正子;根据所述翻转比特位,翻转对应的码字中与所述翻转比特位对应的比特。
在一种可能的实现方式中,每个校正子存储的时间相同。也就是说,每个校正子在存储单元中存储的时间相同,假设2微秒,则存储时间达到2微秒之后,该校正子会被新接收的校正子覆盖。同理,与校正子对应的码字的存储时间也相同。进一步地,每个校正子的待译码时间相同,假设1微秒,即校正子的存储时间达到1微秒,无论该校正子是否被译码都将该校正子的地址视为无效地址,该校正子不会再被译码了,直到被新存储的校正子覆盖。
本申请实施例可以让译码资源多用于新存储的校正子,而不是已经存储了很久,却仍没有得到准确结果的校正子,实现译码资源的按需分配,提高译码效率。
在一种可能的实现方式中,所述方法还包括:对选出的校正子对应的软信息幅值进行译码。在执行软判决译码时,需要将校正子及其对应的软信息幅值一起译码,如果译码成功,在得到增量校正子和翻转比特位之外,还会得到更新的软信息幅值;将所述增量校正子与对应的校正子叠加,得到更新的校正子;根据所述翻转比特位,翻转对应的码字中与所述翻转比特位对应的比特,再用更新的软信息幅值替换掉原有的软信息幅值,完成译码。
在一种可能的实现方式中,所述方法还包括:将第一帧对应的校正子分组存储,每组存储的所述第一帧对应的校正子的数目最多相差一个,其中,所述第一帧包括多个码字。可选地,来自同一个帧的码字对应的校正子,在每组存储中的个数相同,也就是说,在每个存储单元中的个数相同,实现均匀存储。在本申请实施例中,对于每个存储单元来说,它所处理的负载是存储负载的平均值,因此该设计保证了所有存储单元具有大体一致的负载,实现热密度均衡,避免局部过热。
第二方面,提供一种译码装置,其特征在于,包括:控制器和译码器,所述控制器,用于获得多个码字中每个码字对应的校正子,对得到的所述校正子进行分组;还用于在每一组校正子中进行优先级排序,根据所述每一组校正子的优先级排序结果,挑选校正子发送给所述译码器;译码器,用于对收到的校正子进行译码。
本申请实施例并不对每一个码字对应的校正子都进行相同的译码处理,避免了传统静态译码方案中,无论码字本身是否正确均需要进行同等次数译码的问题,实现按需译码,降低译码资源的需求和系统功耗。
在一种可能的实现方式中,非零校正子的优先级高于值为零的校正子。进一步地,译码次数多的非零校正子的优先级低于译码次数少的非零校正子。而且,还可以限定译码次数,当校正子被译码次数达到阈值时,将不会再被译码,例如,可以将阈值设为3次,只要译码次数达到3次,则不会再被译码。提高需要译码的校正子被译码的机会,从而提高译码效率。
在一种可能的实现方式中,所述控制器用于从每一组中挑选最多一个校正子,分别发送给所述译码器中的不同的译码单元进行硬判决或软判决译码,其中,选出的校正子均为非零校正子。由于采用硬判决译码时,值为零的校正子无需译码;如果某一组中所有校正子的值均为0,则该组中不会有校正子被选出进行译码,因此,每组中最多挑选一个。此时,校正子的存储单元只需连接对应的译码单元,连线复杂度会有所降低。当然,每组也可以挑选出至多两个校正子或更多,这样的话,每组校正子的存储单元连接的译码单元数量变为两个或更多。应理解,软判决译码的情况下,零校正子可以进行译码,提高译码性能;也可以不进行译码,降低译码复杂度。
在一种可能的实现方式中,所述控制器用于从每一组中挑选一个校正子,分别发送给所述译码器中的不同的译码单元进行软判决译码。软判决译码的时候,无论校正子是否为0,均可能被选出进行译码。此时,每组校正子的存储单元只需连接对应的译码单元,连线复杂度会有所降低。当然,每组也可以挑选出两个校正子或更多,这样的话,每组校正子的存储单元连接的译码单元数量变为两个或更多。
可选地,分组的组数与译码单元的个数相同,最大化利用译码资源。
在一种可能的实现方式中,所述控制器还用于从每一组中挑选最多一个校正子,分别发送给所述译码器中的不同的译码单元进行硬判决或软判决译码;在每两个分组中再次对校正子进行优先级排序,根据排序结果,从每两组中再挑选最多一个校正子,分别发送给所述译码器中的不同的译码单元进行硬判决或软判决译码,其中,两次挑选的校正子不同,且选出的校正子均为非零校正子。在一种可能的实现方式中,所述控制器还用于从每一组中挑选一个校正子,分别发送给所述译码器中的不同的译码单元进行软判决译码;在每两个分组中再次对校正子进行优先级排序,根据排序结果,从每两组中再挑选一个校正子,分别发送给所述译码器中的不同的译码单元进行软判决译码,其中,两次挑选的校正子不同。进一步地,所述控制器还用于将得到的所述校正子分为2/3n组,其中,n为译码单元的个数,且n为3的整数倍。
本方案中,也可以保证每次选取都是选出最多一个校正子,算法复杂度低;且只需对选出的校正子进行译码,避免了大部分非必要的译码运算,功耗低;校正子的存储单元也是只需连接对应的译码单元,连线复杂度低。可选地,每组也可以挑选出两个校正子或更多的校正子。
在一种可能的实现方式中,每一组包括的校正子数目相同,实现均匀分组,可以保证挑选校正子时,算法复杂度较低。
在一种可能的实现方式中,每一组中包括的校正子具有不同的编号或地址,可以根据不同的编号或地址识别对应的校正子,从而对其进行优先级排序。
在一种可能的实现方式中,所述控制器还用于在对选出的校正子发送给所述译码器之后,再次对每一组的校正子进行优先级排序,根据本次优先级排序结果,再次从每组中挑选校正子发送给所述译码器。进一步地,两次对每一组的校正子进行优先级排序的过程中,所述优先级排序的方法可以不同。
在一种可能的实现方式中,所述控制器,用于在第一校正子译码成功时,根据译码结果,对所述第一校正子和所述第一校正子对应的码字进行更新,其中,所述第一校正子为发送给所述译码器的校正子中的一个。
结合上一种可能的实现方式,在一种可能的实现方式中,所述译码装置还包括存储器,所述译码器,还用于在对第一校正子译码成功时,得到所述第一校正子对应的增量校正子和翻转比特位;将所述增量校正子和所述翻转比特位发送给所述存储器;所述控制器,用于将所述增量校正子与所述第一校正子叠加,使所述存储器存储更新的校正子;还用于根据所述翻转比特位,翻转对应的码字中与所述翻转比特位对应的比特,使所述存储器存储更新的比特。
在一种可能的实现方式中,每个校正子存储的时间相同,也就是说,每个校正子在存储单元中存储的时间相同,假设2微秒,则存储时间达到2微秒之后,该校正子会被新接收的校正子覆盖。同理,与校正子对应的码字的存储时间也相同。进一步地,每个校正子的待译码时间相同,假设1微秒,即校正子的存储时间达到1微秒,无论该校正子是否被译码都将 该校正子的地址视为无效地址,该校正子不会再被译码了,直到被新存储的校正子覆盖。
本申请实施例可以让译码资源多用于新存储的校正子,而不是已经存储了很久,却仍没有得到准确结果的校正子,实现译码资源的按需分配,提高译码效率。
在一种可能的实现方式中,所述控制器,还用于将选出的校正子对应的软信息幅值发送给所述译码器;所述译码器,还用于对所述软信息幅值进行译码。在执行软判决译码时,需要将校正子及其对应的软信息幅值一起译码,如果译码成功,在得到增量校正子和翻转比特位之外,还会得到更新的软信息幅值。进一步地,所述译码器将所述增量校正子,所述翻转比特位和所述更新的软信息幅值发送给所述存储器;所述存储器,用于存储所述增量校正子,翻转比特位以及更新的软信息幅值;所述控制器,用于将所述增量校正子与对应的校正子叠加,使所述存储器存储更新的校正子;还用于根据所述翻转比特位,翻转对应的码字中与所述翻转比特位对应的比特,使所述存储器存储更新的比特。
在一种可能的实现方式中,所述译码装置还包括存储器,所述存储器包括多个存储单元,每个存储单元存储的第一帧对应的校正子的数目最多相差一个,其中,所述第一帧包括多个码字。可选地,来自同一个帧的码字对应的校正子,在每个存储单元中的个数相同,实现均匀存储。在本申请实施例中,对于每个存储单元来说,它所处理的负载是存储负载的平均值,因此该设计保证了所有存储单元具有大体一致的负载,实现热密度均衡,避免局部过热。
在一种可能的实现方式中,所述译码装置还包括调度单元,其主要功能包括:根据控制器的指示将存储器中的校正子送入译码器,将译码器输出的增量校正子和翻转比特位送入存储器。对于软判决译码,调度单元还根据控制器的指示将存储器中的软信息幅值送入译码器,将译码器输出的软信息幅值送入存储器。
特别地,在实际实现时,为了降低功耗可以对调度单元的带宽进行约束,比如对每个时刻送入存储器的增量校正子个数和翻转比特个数进行限制不超过特定阈值,这时调度单元会将超过阈值的增量校正子和翻转比特进行缓存,待下一时刻再送入存储器。
第三方面,提供一种计算机可读存储介质,其特征在于,所述计算机可读存储介质存储指令,当所述指令在终端设备上运行时,使得所述终端设备执行如第一方面及第一方面中任一种可能的实现方式所述的方法。
第四方面,提供一种包含指令的计算机程序产品,其特征在于,当在终端设备上运行时,使得终端设备执行如第一方面及第一方面中任一种可能的实现方式所述的方法。
本申请实施例并不对每一个码字对应的校正子都进行相同的译码处理,避免了传统静态译码方案中,无论码字本身是否正确均需要进行同等次数译码的问题,实现按需译码,降低译码资源的需求和系统功耗。且采用分组选取的方式,可以降低算法复杂度,且校正子的存储单元只需连接对应的译码单元,连线复杂度也会有所降低。
附图说明
图1为通信系统的结构框图;
图2为本申请提供的按需译码的基本架构图;
图3为本申请提供的一种按需译码方法的流程图;
图4为本申请提供的一种每组校正子与译码单元的对应关系图;
图5为本申请提供的另一种每组校正子与译码单元的对应关系图;
图6为本申请提供的另一种每组校正子与译码单元的对应关系图;
图7为本申请提供的一种校正子与存储单元的对应关系图;
图8为本申请提供的一种按需译码装置图;
图9为本申请提供的另一种按需译码装置图。
具体实施方式
在对本申请实施例进行详细地解释说明之前,先对本申请实施例的应用场景予以说明。图1示出通信系统的结构框图,在发送端,信源提供待发送的数据流;编码器接收该数据流,并对其进行编码,编码获得校验比特和信息比特合并的码字信息进行发送,经过信道传输,到达接收端;接收端接收到因为信道中的噪声或者其他损伤产生错误的码字信息后,通过译码装置进行译码,恢复出原有数据,发给信宿。其中,本申请提供的译码方法应用于图1所示的译码装置中,是通信系统中非常重要的一环。
本申请提供的译码方法是一种动态按需的译码方式,其基本架构如图2所示,该译码架构包括码字序列分组优先级排序、译码动态调度、译码器译码、码字及校正子(syndrome)更新。其具体的步骤如图3所示,包括:
301、获得多个码字中每个码字对应的校正子。其中,校正子根据待译码的码字与奇偶校验矩阵的转置得出,通常情况下,校正子为待译码的码字与奇偶校验矩阵的转置的内积,用于在译码过程中确定错误比特。由于码字传输入中可能由于干扰而出错,例如发送的码字为A,接收到的待译码码字却是B,则误码为E=A-B,即待译码码字B=A+E,此时,S=B·H T即为校正子,其中,H为奇偶校验矩阵。由于原始码字和H矩阵的转置乘积为零,因此,S=A·H T+E·H T=E·H T,如果校正子S为0,则传输无误码或者误码E为合法码字,如果校正子S是一个非零矢量,则传输有误码;译码器可以根据校正子确定错误图样(即翻转比特位),据此对待译码码字中相应位置的比特进行翻转,获得译码码字。
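As a small illustration of the syndrome computation S = B·H^T described in the preceding paragraph, the following sketch uses a toy [7,4] Hamming parity-check matrix over GF(2) rather than the BCH codes discussed later; it is an example only, and none of the names come from the application.

```python
# Toy illustration of S = B * H^T over GF(2): a zero syndrome means the received word is a
# valid codeword (or the error pattern is itself a codeword); a non-zero syndrome flags errors.
import numpy as np

H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])   # parity-check matrix of the [7,4] Hamming code

def syndrome(received):
    """Compute S = received * H^T over GF(2)."""
    return np.mod(received @ H.T, 2)

codeword = np.array([1, 0, 1, 1, 0, 1, 0])   # a valid codeword of the toy code
print(syndrome(codeword))                     # -> [0 0 0], no error detected
received = codeword.copy()
received[4] ^= 1                              # flip one bit during "transmission"
print(syndrome(received))                     # non-zero: an error is detected
```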
302、对得到的校正子进行分组,在每一组校正子中进行优先级排序。例如,不同的校正子具备不同的编号,假设有100个校正子,则校正子编号从1到100,分到不同的组中;如果均为非零校正子,则在不同组中可以按照编号从小到大的顺序进行优先级排序。进一步地,非零校正子的优先级高于值为零的校正子,如果存在值为0的校正子,则在不同组中将非零校正子从小到大的顺序进行优先级排序,将值为零的校正子排在最后。应理解,对非零校正子也可以按照从大到小或1、3、5…来排序,满足要求的优先级排序方式还可以有很多种,本申请不做限定。此外,编号也可以用存储地址来代替,排序方法相同,不再赘述。
具体地,假设待分组的校正子为64,分为4组,校正子1-16、17-32、33-48、49-64分别被分到四个组中,每组按照从小到大的顺序进行优先级排序;在本次译码时,如果64个校正子均为非零校正子,则按照从小到大的顺序进行优先级排序,每组中优先级最高的校正子分别为1、17、33和49;如果存在值为零的校正子,则将其优先级降低,例如,如果校正子1的值为0,其余校正子均为非零校正子,则每组中优先级最高的校正子分别为2、17、33和49;如果校正子2也为0,则校正子3在第一组中的优先级最高;如果本组中校正子1-16的值均为0,则仍然按照预设的顺序排序。
可选地,译码次数多的非零校正子的优先级低于译码次数少的非零校正子,即在对非零校正子排序的时候,需要提高译码次数少的非零校正子的优先级。例如,假设其中一组包括16个校正子,校正子1-5为零校正子,校正子6-16为非零校正子,其中,校正子10-16均没有经过任何译码,其余的非零校正子均经过一次译码,此时,校正子10-16的优先级高于校 正子6-9,校正子6-9的优先级高于校正子1-5;在三类校正子中,以编号从小到大排序为例,可以得出该组中的优先级排序为10、11…16、6、7…9、1、2…5。
应理解,在硬判决情况下,校正子的值为零,无需译码;在软判决的情况下,无论值是否为0,每个校正子均可能被译码。因此,在硬判决的情况下,如果其中某一组的所有校正子的值均为0,则该组中的所有校正子都不需要译码。另外,在软判决情况下,还可以只看译码次数,例如,译码次数多的校正子的优先级低于译码次数少的校正子,而不区分校正子的值是否为0。
303、根据所述每一组校正子的优先级排序结果,挑选校正子进行译码。
本申请实施例并不对每一个码字都进行相同的译码处理,避免了传统静态译码方案中,无论码字本身是否正确均需要进行同等次数译码的问题,实现按需译码,降低译码资源的需求和系统功耗。
需要说明的是,译码可以采用并行译码,即多个译码单元同时译码的意思,而并行译码的个数与译码单元的个数相同,如果存在4个译码单元,可以支持4个校正子同时进行译码,则并行译码的个数为4,此时,挑选出来的校正子个数不要超过4个。
如前所述,在对获得的码字进行硬判决译码的情况下,只需对非零校正子进行译码;在对获得的码字进行软判决译码的情况下,值为零的校正子也可能需要译码,而且,还会获得每个码字包含的比特对应的软信息幅值;此时,在步骤203中,除了对选出的校正子进行译码之外,还需将该校正子对应的软信息幅值一起进行译码。
需要说明的是,一般情况,将软信息幅值和符号位统称为软信息,符号位为码字中每个比特的值(0或1),软信息幅值则表示每个比特是0或1的概率,即软信息指出对应的比特为0的概率或为1的概率。
进一步地,每一组包括的校正子数目并不限定,例如,10个校正子分为4组,可以有两组包括3个校正子,两组包括2个校正子;也可以一组包括4个校正子,剩余三组都包括2个校正子。可选地,每一组包括的校正子数目相同,这样对校正子的分组最均匀,降低优先级排序的复杂度。
本申请的译码方法可应用于包括多个译码单元的译码装置,其中,挑选校正子进行译码有如下几种方式:
(1)将校正子分为n组,n为不大于并行译码个数的正整数;可选地,n为并行译码的个数,从每组中挑选最多一个校正子,分别发送给不同的译码单元进行硬判决或软判决译码。可选地,n为并行译码的个数,从每组中挑选一个校正子,分别发送给不同的译码单元进行软判决译码。
本申请实施例假设校正子的数目为64,有4个可以并行译码的译码单元,即并行译码的个数为4,将64个校正子分为4组(即n=4),每组16个校正子,如图4所示,从校正子1-16中最多选出一个校正子送入第一译码单元,从校正子17-32中最多选出一个校正子送入第二译码单元,从校正子33-48中最多选出一个校正子送入第三译码单元,从校正子49-64中最多选出一个校正子送入第四译码单元,分组选取的方式相比直接从64个中选出4个的方式复杂度更低;且只需对选出的校正子进行译码,避免了大部分非必要的译码运算,降低了功耗;此外,校正子的存储单元只需连接对应的译码单元,例如,校正子1-16的存储单元只需要与第一译码单元连接,校正子17-32的存储单元只需要与第二译码单元连接,连线复杂度会有所降低。从每组中挑选的步骤可以并行执行,进一步降低译码时间。
需要说明的是,在进行硬判决译码,且每一个组中的校正子均为0的情况下,是不会选 出任何一个校正子进行译码的,因此,一组中可能选出一个校正子进行译码,或者一组中没有任何校正子被选出进行译码。软判决情况下,每组中可以选择一个校正子进行译码。当然,选择组内优先级最高的校正子进行译码。在后续的实施例中,无论是从每一组中,还是每两组中,甚至更多组中选出最多一个校正子进行译码时,也需满足上述要求,本申请不再赘述。
(2)从每一组中挑选最多一个校正子,分别发送给不同的译码单元进行硬判决或软判决译码;在每两个分组中再次对校正子进行优先级排序,根据排序结果,从每两组中再挑选最多一个校正子,分别发送给不同的译码单元进行硬判决或软判决译码,其中,两次挑选的校正子不同,且选出的校正子均为非零校正子。可选地,从每一组中挑选一个校正子,分别发送给不同的译码单元进行软判决译码;在每两个分组中再次对校正子进行优先级排序,根据排序结果,从每两组中再挑选一个校正子,分别发送给不同的译码单元进行软判决译码,其中,两次挑选的校正子不同。具体地,将得到的校正子分为2n/3个分组,其中,n不大于并行译码的个数,且n为3的整数倍;可选地,n为并行译码的个数,且n为3的整数倍。在每组中根据优先级排序结果,挑选最多一个校正子进行译码,即选出最多2n/3个校正子进行译码;再从每两个分组中,根据再次得到的优先级排序结果挑选最多一个校正子,即选出最多n/3个校正子进行译码,总共选出不超过n个校正子进行译码。
例如,仍假设校正子的数目为64,有6个可以并行译码的译码单元,则根据本方案,将64个校正子分为4组,每组包括16个校正子,如图5所示,从校正子1-16中最多选出一个校正子送入第一译码单元,从校正子17-32中最多选出一个校正子送入第二译码单元,从校正子33-48中最多选出一个校正子送入第三译码单元,从校正子49-64中最多选出一个校正子送入第四译码单元。然后,再从校正子1-32中,除去已经被选中的校正子,再选出最多一个较正子送入第五译码单元;再从校正子33-64中,除去已经被选中的校正子,再选出最多一个较正子送入第六译码单元,其中,第一到第四译码单元为第一级译码单元、第五和第六译码单元为第二级译码单元。本方案中,保证每次选取都是选出最多一个校正子,算法复杂度低;且只需对选出的校正子进行译码,避免了大部分非必要的译码运算,功耗低;校正子的存储单元也是只需连接对应的译码单元,例如,校正子1-16的存储单元只需要与第一译码单元和第五译码单元连接,校正子17-32的存储单元只需要与第二译码单元与第五译码单元连接,校正子33-48的存储单元只需要与第三译码单元和第六译码单元连接,校正子49-64的存储单元只需要与第四译码单元与第六译码单元连接,连线复杂度低。
特别地,从四组中各选出最多一个校正子的步骤可以并行操作,从1-32以及33-64中再分别选出最多一个校正子的步骤也可以并行操作,进一步降低译码时间。
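A compact sketch of the two-stage selection of scheme (2) just described (64 syndromes in 4 groups of 16, 6 decoding units, hard decision so zero syndromes are skipped) follows; the simple lowest-index-first ranking below merely stands in for whatever in-group priority order an implementation would use, and the names are illustrative.

```python
# Two-stage selection sketch for scheme (2): groups 1-4 feed decoding units 1-4 (at most one
# non-zero syndrome per group); a second stage then picks at most one further non-zero
# syndrome from groups 1-2 for unit 5 and from groups 3-4 for unit 6, excluding earlier picks.

def best_nonzero(indices, syndromes, exclude):
    """Lowest-numbered non-zero syndrome among `indices` that is not already selected."""
    for i in indices:
        if syndromes[i] != 0 and i not in exclude:
            return i
    return None

def two_stage_select(syndromes):
    groups = [list(range(g * 16, (g + 1) * 16)) for g in range(4)]
    chosen = []
    # First stage: at most one syndrome per group, sent to decoding units 1-4.
    for group in groups:
        pick = best_nonzero(group, syndromes, chosen)
        if pick is not None:
            chosen.append(pick)
    # Second stage: at most one more from groups 1-2 (unit 5) and from groups 3-4 (unit 6).
    for half in (groups[0] + groups[1], groups[2] + groups[3]):
        pick = best_nonzero(half, syndromes, chosen)
        if pick is not None:
            chosen.append(pick)
    return chosen

syndromes = [1] * 64                 # all non-zero
print(two_stage_select(syndromes))   # -> [0, 16, 32, 48, 1, 33], i.e. syndromes 1,17,33,49,2,34
```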
(3)在方案(1)只有一级译码单元、方案(2)存在两级译码单元的基础上,还可以存在三级译码单元,此时,从每一组中挑选最多一个校正子,分别发送给不同的译码单元进行硬判决或软判决译码;在每两个分组中再次对校正子进行优先级排序,根据排序结果,从每两组中再挑选最多一个校正子,分别发送给不同的译码单元进行硬判决或软判决译码;再从每四个分组中再次对校正子进行优先级排序,根据排序结果,从每四组中再挑选最多一个校正子,分别发送给不同的译码单元进行硬判决或软判决译码,其中,三次挑选的校正子不同。另外,执行软判决的情况下,从每一组,每两组和每四组中选择的时候,各选择一个校正子去进行软判决译码。
具体地,分组方式可以为,将校正子分为4n/7个分组,其中,n不大于并行译码的个数,且n为7的整数倍;可选地,n等于并行译码的个数,且n为7的整数倍。在每组中根据优先级排序结果,挑选最多一个校正子进行译码,即选出最多4n/7个校正子进行译码;在每两 个分组中重新进行优先级排序,根据优先级排序结果,从每两组中挑选最多一个校正子进行译码,即选出最多2n/7个校正子进行译码;在每四个分组中重新进行优先级排序,根据优先级排序结果,挑选出最多一个校正子,即再次选出最多n/7个校正子进行译码,总共有不超过n个校正子去译码,其中,三次挑选的校正子均不相同。
例如,仍假设校正子的数目为64,存在7个可以并行译码的译码单元,则根据本方案,将64个校正子分为4组,每组包括16个校正子,如图6所示,从校正子1-16中最多选出一个校正子送入第一译码单元,从校正子17-32中最多选出一个校正子送入第二译码单元,从校正子33-48中最多选出一个校正子送入第三译码单元,从校正子49-64中最多选出一个校正子送入第四译码单元。第一至第四译码单元为第一级译码单元。然后,再从校正子1-32中,除去已经被选中的校正子,再选出最多一个较正子送入第五译码单元;再从校正子33-64中,除去已经被选中的校正子,再选出最多一个较正子送入第六译码单元,第五和第六译码单元为第二级译码单元。再从校正子1-64中,除去已经被选中的校正子,再选出最多一个校正子送入第七译码单元,第七译码单元为第三级译码单元。
本方案中,也可以保证每次选取都是选出最多一个校正子,保证了较低的算法复杂度;且只需对选出的校正子进行译码,避免了大部分非必要的译码运算,功耗低;存储单元也是只需连接对应的译码单元,连线复杂度低。相比于前两种方案,译码时间更长一些,但译码性能也更好。
此外,根据优先级排序来挑选校正子进行译码的步骤分3个层级,其中,在第一个层级中,从四组中各选出最多一个校正子的步骤可以并行操作;第二层级中,从1-32以及33-64个校正子中再分别选出最多一个校正子的步骤也可以并行操作,最后是第三层级,执行从1-64个校正子中选取最多一个校正子,提高挑选的并行度,降低译码时间。
上述几种方案为仅为本申请提供的几种实施方式,还可以有其他的方式,例如,分更多的层级,或在分组的时候,每组包括的校正子数量不相同等。
此外,每组中还可以选出不止一个校正子,例如,仍假设校正子的数目为64,有6个可以并行译码的译码单元,将64个校正子分为4组,每组包括16个校正子;此时,可以从其中两组中,每组选出最多两个校正子进行译码,剩余两组中每组选出最多一个校正子进行译码;此种情况下,两组校正子的存储单元分别连接对应的两个译码单元,剩余两组校正子的存储单元分别连接对应的一个译码单元。或者,从其中一组中最多选出三个校正子进行译码,剩余三组中每组选出最多一个校正子进行译码,此时,一组校正子的存储单元连接对应的三个译码单元,剩余三组校正子的存储单元分别连接对应的一个译码单元。此外,还可以从每组中选出一个校正子,再从其中三组校正子中选出两个校正子或其中两组校正子中选出两个校正子,总共得到6个校正子进行译码;存在多种类似的挑选方式,均在本申请的保护范围之内。
进一步地,由于软判决情况下,无论校正子是否为0均需要译码,不存在没有任何校正子需要译码的情况。此时,选择方式变为从其中两组中,每组选出两个校正子进行译码,剩余两组中每组选出一个校正子进行译码;或者,从其中一组中选出三个校正子进行译码,剩余三组中每组选出一个校正子进行译码。另外,如果存在多级的译码单元,例如,按照上面的方法(2)来执行,从每两组中也可以选取不止一个校正子。
可选地,在下一个译码时刻,在每个分组中预设的优先级顺序与上一次译码时刻的优先 级顺序不同。例如,本次译码时,假设在每个组中优先级顺序为1、2、3…,下一个译码时刻,优先级可以变为2、3、4…1,和上一译码时刻的优先级顺序呈循环位移关系。可选地,无论在哪个译码时刻,非零校正子的优先级高于值为零的校正子的优先级。
具体地,假设校正子的数目为64,校正子1-16、17-32、33-48、49-64分别被分到四个组中,在本次译码时,按照(1、2…16),(17、18…32),(33、34…48),(49,50…64)的顺序来挑选,挑选出校正子1、17、33和49进行译码;下一译码时刻,存在与之前不完全相同的64个校正子,编号仍为1-64,校正子1-16、17-32、33-48、49-64也同样被分别分到四个组中,优先级顺序变为(2、3…16、1),(18、19…32、17),(34、35…48、33),(50、51…64、49),挑选出校正子2、18、34和50进行译码,以此类推。此外,假设本次译码时,校正子1-3和17-20均为零校正子,则按照同样的优先级顺序,且非零校正子优先级高于零校正子的优先级排序方式,挑选出校正子4、21、33和49进行译码;下一译码时刻,在64个校正子中,校正子4-6和18为零校正子,根据变化后的优先级顺序以及非零校正子优先级高于零校正子,挑选出校正子2、19、34和50进行译码。
当然,在两次译码时刻,从每组中选出最多一个校正子的优先级顺序也可以不遵循循环位移关系,例如,在非零校正子优先级高于零校正子的条件下,每次随机设置非零校正子的优先级顺序等,本申请不作具体限定。
应理解,获得的多个码字中每个码字对应的校正子可以从存储在存储器中的校正子中挑选,例如,获得的多个校正子的数目为64,存储器包括64个存储单元,从每个存储单元中挑选出一个校正子,即可得到64个校正子。当然存在32个存储单元,从每个存储单元中挑选两个校正子也可以,本申请不作具体限定。而且,在两次译码时刻,获得的校正子可以不同,即后一次译码的时候,会从新获取64个新的校正子,有可能正好挑出来的校正子和上一次完全相同,也有可能有部分校正子是不同的,甚至有可能获得完全不同的校正子。
在对选出的校正子进行译码时,以其中一个校正子(第一校正子)为例,如果译码失败了,则不执行任何操作,或者更新对应的软信息幅值;如果译码成功,则根据译码结果,对第一校正子和第一校正子对应的码字进行更新。
如果对获得的码字进行硬判决译码,译码结果包括第一校正子对应的增量校正子和翻转比特位,将增量校正子与第一校正子叠加,得到更新的校正子,替换掉原来的第一校正子;再根据翻转比特位,翻转对应的码字中与翻转比特位对应的比特,假设第一校正子对应的码字包括100个比特,翻转比特位指示第30个比特,则将码字中第30个比特进行翻转,即0变成1,或1变成0。
如果对获得的码字进行软判决译码,则校正子对应的软信息幅值也一起进行译码。对第一校正子来说,如果译码成功,则得到第一校正子对应的增量校正子,翻转比特位以及更新的软信息幅值;将增量校正子与第一校正子叠加,得到更新的校正子,替换掉原来的第一校正子;根据翻转比特位,翻转对应的码字中与翻转比特位对应的比特;并用更新的软信息幅值替换掉原来的软信息幅值,完成译码。
关于码字及对应的校正子的输出,有几种不同的机制,例如,每个校正子存储的时间相同,也就是说,每个校正子只能在存储单元中存储一个固定时间,假设2微秒,则存储时间达到2微秒之后,该校正子会被新接收的校正子覆盖。同理,与校正子对应的码字的存储时间也相同。又例如,每个校正子的待译码时间相同,假设1微秒,即校正子的存储时间达到1微秒,无论该校正子是否被译码都将该校正子的地址视为无效地址,该校正子不会再被译 码了,直到被新存储的校正子覆盖。
在保证存储时间或待译码时间相同的条件下,还可以进一步限定校正子的被译码次数,例如,译码次数的阈值设为3,则每个校正子最多被译码3次,如果某一个校正子被译码次数达到3次时,将不会再选择该校正子去译码。此外,也可以不限定存储时间或待译码时间,只限定译码次数,只要校正子被译码次数达到阈值,就将该校正子的地址视为无效地址,直到被新存储的校正子覆盖,对应的码字将从对应的存储单元中输出。
本申请实施例还以码率为444/484=0.917的空间耦合博斯-查德胡里-霍昆格姆(Bose-Chaudhuri-Hocquenghem,BCH)码为例,对译码性能进行了仿真,其中,其利用码率为0.917的BCH码构造了64个码长为968,信息长度为928,可纠错4个比特的循环BCH(968,928)码,即每个码字包含968个信息比特和40个校验比特。每个BCH(968,928)中有一半比特是来自于前面形成的码字。然后,在获得每个码字对应的校正子(共64个校正子)之后,采用如上述实施例中方案(2)的方式选出最多6个非零校正子进行硬判决译码,结果表明,在输入误比特率为6.05E-3下,采用本申请提供的动态译码的译码进行译码的误比特率(也称为输出误比特率)约为1E-15,性能符合要求,与此同时,本方案算法复杂度低;只需对选出的校正子进行译码,避免了大部分非必要的译码运算,功耗低;而且,存储单元也无需连接所有的译码单元,连线复杂度也低。
本申请另一实施例提供一种关于存储的负载均衡方案,将第一帧对应的校正子分组存储,每组存储的所述第一帧对应的校正子的数目最多相差一个,其中,所述第一帧包括多个码字;可选地,每组存储的属于同一个帧的码字对应的校正子的数目相同。下面以每组存储相同数目的来自同一个帧的码字对应的校正子为例,进行说明。
例如,一个数据帧包括k个码字,每个码字对应一个校正子,将该数据帧中的码字对应的校正子分k组存储,每组存储一个校正子;译码器的译码窗长为b帧,k和b均为正整数,此时存储单元共需存储k*b个校正子。在一种负载均衡方案中,总共存在k个存储单元,如图7所示,其中Ci,j表示第i帧中第j个码字的校正子,i=1,2…b,j=1,2…k。第j个存储单元701存储不同帧的第j个码字的校正子,帧号从1至b循环累加。假定当前输入帧的帧号为1,由于当前帧包含的码字含有最多的误码,因此需要最多的译码和存储更新,那么本方案就是将当前帧1的所有码字对应的校正子均匀分布到所有存储单元701上。类似的,第2,3…b帧的所有码字对应的校正子也均匀分布到所有存储单元701上,但它们相比第1帧具有较少的误码,需要较少的译码和存储更新。对于每个存储单元701来说,它所处理的负载是n帧的译码窗长内的存储负载的平均值。因此该设计保证了所有存储单元701具有大体一致的负载,实现热密度均衡,避免局部过热。
此外,分组存储的组数也可以与数据帧包含的码字个数不相同,例如,每个数据帧包含11个码字,分10组存储,则其中一组需要存储两个码字对应的校正子,剩余九组各存储一个码字对应的校正子,保证存储均匀,避免局部功耗大。应理解,码字的存储也可以满足上述的条件,保证每一个帧包括的码字均匀存储,降低局部功耗。
通过前述实施例提供的按需译码方案,实现了校正子存储单元到译码单元的对应,因此译码单元译码负载的也是总共b帧的译码窗长内的译码负载的平均值,由于每个译码单元参与的译码数目是均衡的,这样每个译码单元所在的芯片面积上产生的功耗也是差不多的,可以明显提升热密度均衡水平,降低芯片实现的工程难度。
本申请提供一种译码装置,如图8所示,控制器801和译码器802。控制器801,用于获得多个码字中每个码字对应的校正子,对得到的校正子进行分组;还用于在每一组校正子中进行优先级排序,根据每一组校正子的优先级排序结果,挑选校正子发送给译码器802;译码器802用于对收到的校正子进行译码。
本申请实施例公开的译码装置并不对每一个码字都进行相同的译码处理,避免了传统译码方案中,无论码字本身是否正确均需要进行同等次数译码的问题,实现按需译码,降低译码资源的需求和系统功耗。
可选地,非零校正子的优先级高于值为零的校正子;进一步地,译码次数多的非零校正子的优先级低于译码次数少的非零校正子,即在对非零校正子排序的时候,需要提高译码次数少的非零校正子的优先级,译码次数越少的校正子优先级越高,具体的例子如前面的方法实施例所述,本申请实施例不再赘述。而且,挑选校正子进行译码的方式有多种,在前面的方法实施例中已经详细描述过,本实施例也不再赘述。
进一步地,每一组包括的校正子数目并不限定,例如,10个校正子分为4组,可以有两组包括3个校正子,两组包括2个校正子;也可以一组包括4个校正子,剩余三组都包括2个校正子。可选地,每一组包括的校正子数目相同,这样对校正子的分组最均匀,降低优先级排序的复杂度。
在对获得的码字进行硬判决译码的情况下,只需对非零校正子进行译码;在对获得的码字进行软判决译码的情况下,值为零的校正子也可能需要译码,而且,译码装置还会收到每个码字包含的比特对应的软信息幅值。此时,控制器801除了将选出的校正子发送给译码器802进行译码之外,还需将其对应的软信息幅值一起发送给译码器802。一般情况,将软信息幅值和符号位统称为软信息,符号位为码字中每个比特的值(0或1),软信息幅值则表示每个比特是0或1的概率,即软信息指出对应的比特为0的概率或为1的概率。
可选地,在下一个译码时刻,在每个分组中预设的优先级顺序与上一次译码时刻的优先级顺序不同。例如,本次译码时,假设在每个组中优先级顺序为1、2、3…,下一个译码时刻,优先级可以变为2、3、4…1,和上一译码时刻的优先级顺序呈循环位移关系。可选地,无论在哪个译码时刻,非零校正子的优先级高于值为零的校正子的优先级。
具体地,假设校正子的数目为64,校正子1-16、17-32、33-48、49-64分别被分到四个组中,在本次译码时,按照(1、2…16),(17、18…32),(33、34…48),(49,50…64)的顺序来挑选,挑选出校正子1、17、33和49进行译码;下一译码时刻,存在与之前不完全相同的64个校正子,编号仍为1-64,校正子1-16、17-32、33-48、49-64也同样被分别分到四个组中,优先级顺序变为(2、3…16、1),(18、19…32、17),(34、35…48、33),(50、51…64、49),挑选出校正子2、18、34和50进行译码,以此类推。此外,假设本次译码时,校正子1-3和17-20均为零校正子,则按照同样的优先级顺序,且非零校正子优先级高于零校正子的优先级排序方式,挑选出校正子4、21、33和49进行译码;下一译码时刻,在64个校正子中,校正子4-6和18为零校正子,根据变化后的优先级顺序以及非零校正子优先级高于零校正子,挑选出校正子2、19、34和50进行译码。
当然,在两次译码时刻,从每组中选出最多一个校正子的优先级顺序也可以不遵循循环位移关系,例如,在非零校正子优先级高于零校正子的条件下,每次随机设置非零校正子的优先级顺序等,本申请不作具体限定。
通常情况下,译码装置还包括存储器803,获得的多个码字中每个码字对应的校正子可 以从存储在存储器803中的校正子中挑选,例如,获得的多个校正子的数目为64,存储器803包括64个存储单元,从每个存储单元中挑选出一个校正子,即可得到64个校正子。当然存在32个存储单元,从每个存储单元中挑选两个校正子也可以,本申请不作具体限定。其中,存储单元可以为同一个存储器或多个存储器中的不同位置;或者一个存储器即为一个存储单元,对此本申请不做限定。
进一步地,在两次译码时刻,获得的校正子可以不同,即后一次译码的时候,会从新获取64个新的校正子,有可能正好挑出来的校正子和上一次完全相同,也有可能有部分校正子是不同的,甚至有可能获得完全不同的校正子。
在译码器802对收到的校正子进行译码时,如果译码成功,则根据译码结果,对第一校正子和第一校正子对应的码字进行更新;如果译码失败,则不执行任何操作。
译码器803对收到的校正子进行硬判决译码,以其中一个校正子(第一校正子)为例,如果译码成功,会得到第一校正子对应的增量校正子和翻转比特位,然后将增量校正子和翻转比特位发送给存储器803;控制器801,用于将增量校正子与第一校正子叠加,使存储器803存储更新的校正子;还用于根据翻转比特位,翻转对应的码字中与翻转比特位对应的比特,使存储器803存储更新的比特;假设第一校正子对应的码字包括100个比特,翻转比特位指示第30个比特,则将码字中第30个比特进行翻转,即从0变成1或从1变成0。
如果译码器802对收到的校正子进行软判决译码,则需要将校正子对应的软信息幅值也一起进行译码。对第一校正子来说,如果译码成功,则得到第一校正子对应的增量校正子,翻转比特位以及更新的软信息幅值;将增量校正子,翻转比特位和更新的软信息幅值发送给存储器803。控制器801,用于将增量校正子与第一校正子叠加,使存储器803存储更新的校正子;并根据翻转比特位,翻转对应的码字中与翻转比特位对应的比特,使存储器803存储更新的比特。
需要说明的是,存储器803可以包括不同的存储单元,例如,校正子存储单元,数据存储单元和软信息存储单元,分别用来存储校正子、码字以及对应的软信息。
可选地,译码装置还包括调度单元804,如图9所示,其主要功能包括:根据控制器801的指示将存储器803中的校正子送入译码器802,将译码器输802出的增量校正子和翻转比特位送入存储器803。对于软判决译码,调度单元804还根据控制器801的指示将存储器803中的软信息幅值送入译码器802,将译码器802输出的软信息幅值送入存储器803。
特别地,在实际实现时,为了降低功耗可以对调度单元804的带宽进行约束,比如对每个时刻送入存储器803的增量校正子个数和翻转比特个数进行限制不超过特定阈值,这时调度单元804会将超过阈值的增量校正子和翻转比特进行缓存,待下一时刻再送入存储器。
关于码字及对应的校正子的输出,有几种不同的机制,例如,每个校正子存储的时间相同,也就是说,每个校正子只能在存储器803中存储一个固定时间,假设2微秒,则存储时间达到2微秒之后,该校正子会被新接收的校正子覆盖。同理,与校正子对应的码字的存储时间也相同。又例如,每个校正子的待译码时间相同,假设1微秒,即校正子的存储时间达到1微秒,无论该校正子是否被译码都该校正子的地址视为无效地址,该校正子不会再被译码,直到被新存储的校正子覆盖。
在保证存储时间或待译码时间相同的条件下,还可以进一步限定校正子的被译码次数,例如,译码次数的阈值设为3,则每个校正子最多被译码3次,如果某一个校正子被译码次数达到3次时,将不会再选择该校正子去译码。此外,也可以不限定存储时间或待译码时间, 只限定译码次数,只要校正子被译码次数达到阈值,就该校正子在存储器803中的地址视为无效地址,直到被新存储的校正子覆盖为止,对应的码字也将从对应的存储器中输出。
可选地,存储器包括多个存储单元,每个存储单元存储的第一帧对应的校正子的数目最多相差一个,实现均匀存储,其中,所述第一帧包括多个码字;进一步地,每个存储单元存储的属于同一个帧的码字对应的校正子的数目相同。下面以每个存储单元存储相同数目的来自同一个帧的码字对应的校正子为例,进行说明。
假设译码器802的译码窗长为b帧,每个数据帧含有k个码字,k和b均为正整数,此时存储器803共需存储k*b个校正子。本申请实施例提供一种负载均衡方案,存储器包含k个存储单元,大小均为b,如图6所示,其中Ci,j表示第i帧中第j个码字的码字信息,i=1,2…b,j=1,2…k。第j个存储单元存储不同帧的第j个码字的校正子,帧号从1至b循环累加。假定当前输入帧的帧号为1,由于当前帧包含的码字含有最多的误码,因此需要最多的译码和存储更新,那么本方案就是将当前帧1的所有码字对应的校正子均匀分布到所有存储单元上。类似的,第2,3…b帧的所有码字对应的校正子也均匀分布到所有存储单元上,但它们相比第1帧具有较少的误码,需要较少的译码和存储更新。对于每个存储单元来说,它所处理的负载是n帧的译码窗长内的存储负载的平均值。因此该设计保证了所有存储单元具有大体一致的负载,实现热密度均衡,避免局部过热。
此外,存储单元的个数也可以与数据帧包含的码字个数不相同,例如,每个数据帧包含100个码字,存储单元的数目为10个,则每个存储单元存储该数据帧中的10个码字对应的校正子;如果每个数据帧包含11个码字,存储单元的数目仍为10个,则其中一个存储单元存储两个码字对应的校正子,剩余9个存储单元各存储一个码字对应的校正子;本实施例可以保证均匀存储,避免局部功耗大。应理解,接收的码字和校正子存储在不同的存储单元,存储码字的方案也可按照上述校正子的存储方案进行存储;如果是软判决,还会接收到码字中每个比特对应的软信息幅值,软信息幅值也可按照上述校正子存储方案进行存储。
通过前述实施例提供的按需译码方案,实现了存储校正子的存储单元到译码单元的对应,因此译码单元译码负载的也是总共b帧的译码窗长内的译码负载的平均值,由于每个译码单元参与的译码数目是均衡的,这样每个译码单元所在的芯片面积上产生的功耗也是差不多的,可以明显提升热密度均衡水平,降低芯片实现的工程难度。
需要说明的是,本申请中涉及的译码装置可以由专用集成电路(Application Specific Integrated Circuit,ASIC)、现场可编程门阵列(Field Programmable Gata Array,FPGA)组成,其中,每个功能器件,包括存储器、调度单元等,均可以由ASIC或FPGA实现,最终构成译码装置。
本申请实施例提供了一种计算机可读存储介质或计算机程序产品,用于存储计算机程序,该计算机程序用于执行本申请方法实施例中公开的译码方法。
应理解,说明书通篇提到的“一个实施例”或“一实施例”意味着与实施例有关的特定特征、结构或特性包括在本发明的至少一个实施例中。因此,在整个说明书各处出现的“在一个实施例中”或“在一实施例中”未必一定指相同的实施例。此外,这些特定的特征、结构或特性可以任意适合的方式结合在一个或多个实施例中。在本发明的各种实施例中,上述各过程的序号大小并不意味着执行顺序的先后,各过程的执行顺序应以其功能和内在逻辑确定,而不应对本发明实施例的实施过程构成任何限定。
在本申请所提供的几个实施例中,应该理解到,所揭露的装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。
另外,在本发明各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以是两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
总之,以上所述仅为本发明技术方案的较佳实施例而已,并非用于限定本发明的保护范围。凡在本发明的精神和原则之内,所作的任何修改、等同替换、改进等,均应包含在本发明的保护范围之内。

Claims (33)

  1. 一种译码方法,其特征在于,
    获得多个码字中每个码字对应的校正子;
    对得到的所述校正子进行分组,在每一组校正子中进行优先级排序;
    根据所述每一组校正子的优先级排序结果,挑选校正子进行译码。
  2. 根据权利要求1所述的译码方法,其特征在于,非零校正子的优先级高于值为零的校正子。
  3. 根据权利要求2所述的译码方法,其特征在于,译码次数多的非零校正子的优先级低于译码次数少的非零校正子。
  4. 根据权利要求1-3中任一项所述的译码方法,其特征在于,所述译码方法应用于包括多个译码单元的译码装置;
    所述挑选校正子进行译码,包括:从每组中挑选最多一个校正子,分别发送给不同的译码单元进行硬判决或软判决译码,其中,选出的校正子均为非零校正子。
  5. 根据权利要求1-3中任一项所述的译码方法,其特征在于,所述译码方法应用于包括多个译码单元的译码装置;
    所述挑选校正子进行译码,包括:从每组中挑选一个校正子,分别发送给不同的译码单元进行软判决译码。
  6. 根据权利要求4或5所述的译码方法,其特征在于,分组的组数与译码单元的个数相同。
  7. 根据权利要求1-3中任一项所述的译码方法,其特征在于,所述译码方法应用于包括多个译码单元的译码装置;
    所述挑选校正子进行译码,包括:
    从每一组中挑选最多一个校正子,分别发送给不同的译码单元进行硬判决或软判决译码;
    在每两个分组中再次对校正子进行优先级排序,根据排序结果,从每两组中再挑选最多一个校正子,分别发送给不同的译码单元进行硬判决或软判决译码,其中,两次挑选的校正子不同,且选出的校正子均为非零校正子。
  8. 根据权利要求1-3中任一项所述的译码方法,其特征在于,所述译码方法应用于包括多个译码单元的译码装置;
    所述挑选校正子进行译码,包括:
    从每一组中挑选一个校正子,分别发送给不同的译码单元进行软判决译码;
    在每两个分组中再次对校正子进行优先级排序,根据排序结果,从每两组中再挑选一个校正子,分别发送给不同的译码单元进行软判决译码,其中,两次挑选的校正子不同。
  9. 根据权利要求7或8所述的译码方法,其特征在于,所述对得到的所述校正子进行分组,包括:将得到的所述校正子分为2/3n组,其中,n为译码单元的个数,且n为3的整数倍。
  10. 根据权利要求1-9中任一项所述的译码方法,其特征在于,每一组包括的校正子数目相同。
  11. 根据权利要求1-10中任一项所述的译码方法,其特征在于,在对选出的校正子进行译码之后,所述方法还包括:
    再次对每一组的校正子进行优先级排序,根据本次优先级排序结果,再次挑选校正子进行译码。
  12. 根据权利要求11所述的译码方法,其特征在于,两次对每一组的校正子进行优先级排序的过程中,所述优先级排序的方法不同。
  13. 根据权利要求1-12中任一项所述的译码方法,其特征在于,所述方法还包括:
    如果对第一校正子译码成功,得到所述第一校正子对应的增量校正子和翻转比特位,其中,第一校正子为进行译码的校正子中的一个;
    将所述增量校正子与所述第一校正子叠加,得到更新的校正子;
    根据所述翻转比特位,翻转对应的码字中与所述翻转比特位对应的比特。
  14. 根据权利要求1-13中任一项所述的译码方法,其特征在于,每个校正子存储的时间相同。
  15. 根据权利要求1-14中任一项所述的译码方法,其特征在于,所述方法还包括:
    将第一帧对应的校正子分组存储,每组存储的所述第一帧对应的校正子的数目最多相差一个,其中,所述第一帧包括多个码字。
  16. 一种译码装置,其特征在于,包括:控制器和译码器,
    所述控制器,用于获得多个码字中每个码字对应的校正子,对得到的所述校正子进行分组;还用于在每一组校正子中进行优先级排序,根据所述每一组校正子的优先级排序结果,挑选校正子发送给所述译码器;
    译码器,用于对收到的校正子进行译码。
  17. 根据权利要求16所述的译码装置,其特征在于,非零校正子的优先级高于值为零的校正子。
  18. 根据权利要求17所述的译码装置,其特征在于,译码次数多的非零校正子的优先级低于译码次数少的非零校正子。
  19. 根据权利要求16-18中任一项所述的译码装置,其特征在于,所述控制器用于从每一组中挑选最多一个校正子,分别发送给所述译码器中的不同的译码单元进行硬判决或软判决译码,其中,选出的校正子均为非零校正子。
  20. 根据权利要求16-18中任一项所述的译码装置,其特征在于,所述控制器用于从每一组中挑选一个校正子,分别发送给所述译码器中的不同的译码单元进行软判决译码。
  21. 根据权利要求19或20所述的译码装置,其特征在于,分组的组数与译码单元的个数相同。
  22. 根据权利要求16-18中任一项所述的译码装置,其特征在于,所述控制器还用于从每一组中挑选一个校正子,分别发送给所述译码器中的不同的译码单元进行硬判决或软判决译码;在每两个分组中再次对校正子进行优先级排序,根据排序结果,从每两组中再挑选一个校正子,分别发送给所述译码器中的不同的译码单元进行硬判决或软判决译码,其中,两次挑选的校正子不同,且选出的校正子均为非零校正子。
  23. 根据权利要求16-18中任一项所述的译码装置,其特征在于,所述控制器还用于从每一组中挑选一个校正子,分别发送给所述译码器中的不同的译码单元进行软判决译码;在每两个分组中再次对校正子进行优先级排序,根据排序结果,从每两组中再挑选一个校正子,分别发送给所述译码器中的不同的译码单元进行软判决译码,其中,两次挑选的校正子不同。
  24. 根据权利要求22或23所述的译码装置,其特征在于,所述控制器还用于将得到的所述校正子分为2/3n组,其中,n为译码单元的个数,且n为3的整数倍。
  25. 根据权利要求16-24中任一项所述的译码装置,其特征在于,每一组包括的校正子数目相同。
  26. 根据权利要求16-25中任一项所述的译码装置,其特征在于,所述控制器还用于在对选出的校正子发送给所述译码器之后,再次对每一组的校正子进行优先级排序,根据本次优先级排序结果,再次挑选校正子发送给所述译码器。
  27. 根据权利要求26所述的译码装置,其特征在于,两次对每一组的校正子进行优先级排序的过程中,所述优先级排序的方法不同。
  28. 根据权利要求16-27中任一项所述的译码装置,其特征在于,所述译码装置还包括存储器,
    所述译码器,还用于在对第一校正子译码成功时,得到所述第一校正子对应的增量校正子和翻转比特位;将所述增量校正子和所述翻转比特位发送给所述存储器;
    所述控制器,用于将所述增量校正子与所述第一校正子叠加,使所述存储器存储更新的校正子;还用于根据所述翻转比特位,翻转对应的码字中与所述翻转比特位对应的比特,使所述存储器存储更新的比特。
  29. 根据权利要求16-28中任一项所述的译码装置,其特征在于,每个校正子存储的时间相同。
  30. 根据权利要求16-27中任一项所述的译码装置,其特征在于,所述译码装置还包括存储器,
    所述译码器在对第一校正子和对应的软信息幅值译码成功时,得到所述第一校正子对应的增量校正子,翻转比特位以及更新的软信息幅值,将所述增量校正子,所述翻转比特位和所述更新的软信息幅值发送给所述存储器,其中,所述第一校正子为发送给所述译码器的校正子中的一个;
    所述存储器,用于存储所述增量校正子,翻转比特位以及更新的软信息幅值;
    所述控制器,用于将所述增量校正子与所述第一校正子叠加,使所述存储器存储更新的校正子;还用于根据所述翻转比特位,翻转对应的码字中与所述翻转比特位对应的比特,使所述存储器存储更新的比特。
  31. 根据权利要求16-30中任一项所述的译码装置,其特征在于,所述译码装置还包括存储器,所述存储器包括多个存储单元,每个存储单元存储第一帧对应的校正子的数目最多相差一个,其中,所述第一帧包括多个码字。
  32. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质存储指令,当所述指令在终端设备上运行时,使得所述终端设备执行如权利要求1-15中任一项所述的方法。
  33. 一种包含指令的计算机程序产品,其特征在于,当在终端设备上运行时,使得终端设备执行如权利要求1-15中任一项所述的方法。
PCT/CN2021/104390 2020-07-03 2021-07-03 按需译码方法及装置 WO2022002272A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP21832381.4A EP4170914A4 (en) 2020-07-03 2021-07-03 METHOD AND DEVICE FOR DECODING ON DEMAND
US18/146,794 US20230136251A1 (en) 2020-07-03 2022-12-27 On-demand decoding method and apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010631750.4A CN113890545A (zh) 2020-07-03 2020-07-03 按需译码方法及装置
CN202010631750.4 2020-07-03

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/146,794 Continuation US20230136251A1 (en) 2020-07-03 2022-12-27 On-demand decoding method and apparatus

Publications (1)

Publication Number Publication Date
WO2022002272A1 (zh)

Family

ID=79013151

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/104390 WO2022002272A1 (zh) 2020-07-03 2021-07-03 按需译码方法及装置

Country Status (4)

Country Link
US (1) US20230136251A1 (zh)
EP (1) EP4170914A4 (zh)
CN (1) CN113890545A (zh)
WO (1) WO2022002272A1 (zh)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102045071A (zh) * 2009-10-12 2011-05-04 马维尔国际贸易有限公司 改善用于低功率应用的ldpc解码器中的功耗
US8938660B1 (en) * 2011-10-10 2015-01-20 Marvell International Ltd. Systems and methods for detection and correction of error floor events in iterative systems
US20170272097A1 (en) * 2016-03-17 2017-09-21 Silicon Motion Inc. Low power scheme for bit flipping low density parity check decoder
WO2019178107A1 (en) * 2018-03-14 2019-09-19 Cypress Semiconductor Corporation Bit error correction for wireless retransmission communications systems

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6990624B2 (en) * 2001-10-12 2006-01-24 Agere Systems Inc. High speed syndrome-based FEC encoder and decoder and system using same
US8386894B2 (en) * 2009-03-23 2013-02-26 Applied Micro Circuits Corporation Parallel forward error correction with syndrome recalculation
US8850289B2 (en) * 2012-07-27 2014-09-30 Lsi Corporation Quality based priority data processing with soft guaranteed iteration
RU2612593C1 (ru) * 2015-11-23 2017-03-09 Федеральное Государственное Унитарное Предприятие Ордена Трудового Красного Знамени Научно-Исследовательский Институт Радио (Фгуп Ниир) Устройство параллельного декодирования циклических кодов на программируемых логических интегральных схемах

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4170914A4

Also Published As

Publication number Publication date
EP4170914A1 (en) 2023-04-26
CN113890545A (zh) 2022-01-04
US20230136251A1 (en) 2023-05-04
EP4170914A4 (en) 2024-03-27

Similar Documents

Publication Publication Date Title
JP6963620B2 (ja) インタリーブを伴う連接ポーラ符号
US7395495B2 (en) Method and apparatus for decoding forward error correction codes
WO2014116041A1 (en) Method and system for encoding and decoding data using concatenated polar codes
US6996767B2 (en) Memory configuration scheme enabling parallel decoding of turbo codes
US10735154B2 (en) Methods and apparatus for coding sub-channel selection
KR20190053899A (ko) 폴라 코드를 사용하여 데이터를 인코딩하는 방법 및 장치
WO2018153263A1 (en) A method to generate ordered sequence for polar codes
CN1625859A (zh) 以太网中的前向纠错编码
CN1459148A (zh) 在通信系统中生成和解码代码的设备和方法
US7836376B2 (en) Method and apparatus for encoding blocks of data with a blocks oriented code and for decoding such blocks with a controllable latency decoding, in particular for a wireless communication system of the WLAN or WPAN type
US10548158B2 (en) Message passing algorithm decoder and methods
WO2017011946A1 (zh) 基于不等差错保护的数据传输方法、装置和设备
CA3069594A1 (en) Media content-based adaptive method, device and system for fec coding and decoding of systematic code, and medium
JP2009524316A (ja) 高速な符号化方法および復号方法ならびに関連する装置
EP2203979A1 (en) Optimum distance spectrum feedforward tail-biting convolutional codes
WO2022002272A1 (zh) 按需译码方法及装置
JP5937194B2 (ja) 低密度パリティ検査符号を使用するシステムにおける信号マッピング/デマッピング装置及び方法
US20230021167A1 (en) Coding method and apparatus for data communication
WO2009075507A1 (en) Method of error control
Xia et al. A two-staged adaptive successive cancellation list decoding for polar codes
JP7142977B1 (ja) データ通信システム、送信装置、および受信装置
US8769372B2 (en) System and method for assigning code blocks to constituent decoder units in a turbo decoding system having parallel decoding units
WO2024001313A1 (zh) 译码处理方法、装置、存储介质及电子装置
RU2541844C1 (ru) Способ декодирования кода-произведения с использованием упорядоченного по весу смежного класса векторов ошибок и устройство его реализующее
CN112290956A (zh) 一种基于流水线结构的ctc编码器及编码方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21832381

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021832381

Country of ref document: EP

Effective date: 20230117

NENP Non-entry into the national phase

Ref country code: DE