CN115037312B - Method, device and equipment for quantizing LDPC decoding soft information - Google Patents


Info

Publication number
CN115037312B
CN115037312B (application CN202210968862.8A)
Authority
CN
China
Prior art keywords: soft information, neural network, mapping relation, optimal, quantization
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Application number
CN202210968862.8A
Other languages: Chinese (zh)
Other versions: CN115037312A (en)
Inventor
李晓雷
李德建
赵东艳
赵旭
张玉冰
王勇
刘晗
李正浩
甘杰
Current and Original Assignee: State Grid Shandong Electric Power Co Ltd; Beijing Smartchip Microelectronics Technology Co Ltd
Application filed by State Grid Shandong Electric Power Co Ltd, Beijing Smartchip Microelectronics Technology Co Ltd filed Critical State Grid Shandong Electric Power Co Ltd
Priority to CN202210968862.8A priority Critical patent/CN115037312B/en
Publication of CN115037312A publication Critical patent/CN115037312A/en
Application granted granted Critical
Publication of CN115037312B publication Critical patent/CN115037312B/en

Classifications

    • H ELECTRICITY
    • H03 ELECTRONIC CIRCUITRY
    • H03M CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00 Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/03 Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words
    • H03M13/05 Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words using block codes, i.e. a predetermined number of check bits joined to a predetermined number of information bits
    • H03M13/11 Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words using block codes, i.e. a predetermined number of check bits joined to a predetermined number of information bits using multiple parity bits
    • H03M13/1102 Codes on graphs and decoding on graphs, e.g. low-density parity check [LDPC] codes
    • H03M13/1105 Decoding
    • H03M13/1111 Soft-decision decoding, e.g. by means of message passing or belief propagation algorithms
    • H03M13/1125 Soft-decision decoding, e.g. by means of message passing or belief propagation algorithms using different domains for check node and bit node processing, wherein the different domains include probabilities, likelihood ratios, likelihood differences, log-likelihood ratios or log-likelihood difference pairs
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L1/00 Arrangements for detecting or preventing errors in the information received
    • H04L1/004 Arrangements for detecting or preventing errors in the information received by using forward error control
    • H04L1/0045 Arrangements at the receiver end
    • H04L1/0047 Decoding adapted to other signal detection operation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L1/00 Arrangements for detecting or preventing errors in the information received
    • H04L1/004 Arrangements for detecting or preventing errors in the information received by using forward error control
    • H04L1/0056 Systems characterized by the type of code used
    • H04L1/0061 Error detection codes

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)

Abstract

The invention relates to the technical field of wireless communication, and provides a method, a device and equipment for quantizing LDPC decoding soft information. The quantization method comprises the following steps: acquiring a mapping relation between soft information before quantization and soft information after quantization; and fitting the mapping relation with a neural network to obtain a neural network quantizer, which outputs the corresponding quantized soft information for any input soft information. The provided implementations improve the performance robustness of the decoder and reduce the number of storage bits for soft information with only a small performance loss.

Description

Method, device and equipment for quantizing LDPC decoding soft information
Technical Field
The present invention relates to the field of wireless communication, and in particular, to a method and an apparatus for quantizing LDPC decoding soft information, and a computer-readable storage medium.
Background
At present, mainstream quantization methods are divided into uniform and non-uniform quantization; however, whether the LLR is quantized uniformly or non-uniformly, the performance loss of the decoder is large. Although improved uniform and non-uniform quantization methods exist that reduce the LLR bit width to some extent while keeping the decoder's performance loss within a certain range, they may suffer from a robustness problem under different MCS (modulation and coding scheme) scenarios, i.e., the quantization method may be very lossy in some scenarios.
In some current schemes the threshold α is essentially fixed and does not adapt to the scenario; but actual scenarios are complex, and under different channels and MCSs such a quantization scheme may incur large losses.
The main idea of some other current schemes is to determine a first range and a second range of the soft information and to apply different quantization schemes to them: uniform (linear) quantization for values in the first range and non-uniform quantization for values in the second range. However, actual scenarios are complex and the dynamic range of the soft information differs across channels and MCSs, so the threshold separating the two ranges is hard to determine. It would be preferable to truncate the data before quantizing, but these schemes provide no solution for doing so.
Disclosure of Invention
An object of the embodiments of the present invention is to provide a method, an apparatus, and a device for quantizing LDPC decoding soft information, so as to solve the above problems.
In order to achieve the above object, a first aspect of the present invention provides a method for quantizing LDPC decoding soft information, the method comprising: acquiring a mapping relation between soft information before quantization and soft information after quantization; fitting the mapping relation through a neural network to obtain a neural network quantizer; the neural network quantizer is used for outputting corresponding soft information according to the input soft information.
Preferably, the obtaining of the mapping relationship between the soft information before quantization and the soft information after quantization includes: a processing function based on soft information is used as the mapping relation; or obtaining a normalized expression as the mapping relation based on a distribution function of the soft information.
Preferably, the parameters of the processing function include a truncation threshold, which is determined by: determining a search range for the truncation threshold and an optimal-quantization evaluation index; calculating the value of the evaluation index from the transmitted bits and the decoded bits that the decoder obtains from the truncated soft information; and selecting, according to the value of the evaluation index, the optimal truncation threshold from the search range as the optimal parameter.
Preferably, the parameters in the processing function include compression coefficients, and the compression coefficients are determined by: obtaining a value of a performance index of the decoder according to a decoding bit and a transmitting bit obtained by the decoder according to the quantized soft information; and obtaining the optimal compression coefficient as the optimal parameter according to the value of the performance index of the decoder.
Preferably, the fitting the mapping relationship through a neural network to obtain a neural network quantizer includes: training the neural network by adopting a data sample containing the mapping relation, and taking the trained neural network as the neural network quantizer; or extracting a mathematical expression of the neural network, determining parameters in the mathematical expression according to the data sample containing the mapping relation, and taking the mathematical expression after determining the parameters as the neural network quantizer; or fitting the mapping relation by adopting a fitting curve, determining characteristic parameters in the fitting curve through a data sample containing the mapping relation and the neural network, and taking the fitting curve after determining the characteristic parameters as the neural network quantizer.
Preferably, the fitting curve is a polygonal line, the characteristic parameter is a vertex of the polygonal line, and the vertex is determined by an output value of the neural network.
Preferably, the polygonal line consists of three segments, whose vertices are (0, 0), (x_1, y_1), (x_2, 1) and (1, 1), where x_1, y_1 and x_2 are determined by the output values of the neural network.
Preferably, the neural network can be replaced with other machine learning models.
In a second aspect of the present invention, there is also provided an apparatus for quantizing LDPC decoding soft information, the apparatus comprising: the mapping acquisition module is used for acquiring the mapping relation between the input soft information of the quantizer and the output soft information corresponding to the input soft information; the mapping fitting module is used for fitting the mapping relation through a neural network to obtain a neural network quantizer; and the quantization application module is used for outputting corresponding soft information according to the input soft information by the neural network quantizer.
Preferably, the obtaining of the mapping relationship between the soft information before quantization and the soft information after quantization includes: a processing function based on soft information is used as the mapping relation; or obtaining a normalized expression as the mapping relation based on a distribution function of the soft information.
Preferably, the parameters of the processing function include a truncation threshold, which is determined by: determining a search range for the truncation threshold and an optimal-quantization evaluation index; calculating the value of the evaluation index from the transmitted bits and the decoded bits that the decoder obtains from the truncated soft information; and selecting, according to the value of the evaluation index, the optimal truncation threshold from the search range as the optimal parameter.
Preferably, the parameters in the processing function include compression coefficients, which are determined by: obtaining a value of a performance index of the decoder according to a decoding bit and a transmitting bit obtained by the decoder according to the quantized soft information; and obtaining the optimal compression coefficient as the optimal parameter according to the value of the performance index of the decoder.
Preferably, the fitting the mapping relationship through a neural network to obtain a neural network quantizer includes: training the neural network by adopting a data sample containing the mapping relation, and taking the trained neural network as the neural network quantizer; or extracting a mathematical expression of the neural network, determining parameters in the mathematical expression according to the data sample containing the mapping relation, and taking the mathematical expression after determining the parameters as the neural network quantizer; or fitting the mapping relation by adopting a fitting curve, determining characteristic parameters in the fitting curve through a data sample containing the mapping relation and the neural network, and taking the fitting curve after determining the characteristic parameters as the neural network quantizer.
Preferably, the fitting curve is a polygonal line, the characteristic parameter is a vertex of the polygonal line, and the vertex is determined by an output value of the neural network.
Preferably, the polygonal line consists of three segments, whose vertices are (0, 0), (x_1, y_1), (x_2, 1) and (1, 1), where x_1, y_1 and x_2 are determined by the output values of the neural network.
Preferably, the neural network can be replaced with other machine learning models.
In a third aspect of the present invention, there is also provided a quantization apparatus for LDPC decoding soft information, comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the aforementioned quantization method for LDPC decoding soft information when executing the computer program.
In a fourth aspect of the present invention, there is also provided a computer-readable storage medium, having stored therein instructions, which when run on a computer, cause the computer to perform the aforementioned steps of the quantization method of LDPC decoding soft information.
A fifth aspect of the present invention provides a computer program product comprising a computer program which, when executed by a processor, implements the aforementioned method of quantizing LDPC decoding soft information.
The technical scheme at least has the following beneficial effects:
(1) The problem that the performance loss of a direct uniform quantization or non-uniform quantization decoder is large due to the large dynamic range of LLR data under different channels and MCSs is solved.
(2) Under different MCSs, the performance robustness of the decoder is improved: at a BLER of 10%, the signal-to-noise-ratio loss is at most 0.1 dB.
Additional features and advantages of embodiments of the invention will be set forth in the detailed description which follows.
Drawings
The accompanying drawings, which are included to provide a further understanding of the embodiments of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the embodiments of the invention without limiting the embodiments of the invention. In the drawings:
FIG. 1 is a schematic diagram illustrating an implementation of a quantization method for LDPC decoding soft information according to an embodiment of the present invention;
FIG. 2 schematically illustrates a diagram of an LDPC decoding process and an inference process;
FIG. 3 schematically illustrates a flow diagram for searching for a truncation threshold according to an embodiment of the present invention;
FIG. 4 schematically shows a flow chart for searching for a scaling factor according to an embodiment of the present invention;
FIG. 5 schematically illustrates a diagram of an expert non-uniform quantization transfer function obtained from a simulation according to an embodiment of the present invention;
FIG. 6 schematically shows a schematic structural diagram of a neural network in accordance with an embodiment of the present invention;
FIG. 7 schematically illustrates a diagram of a neural network quantization function, according to an embodiment of the present invention;
FIG. 8 schematically illustrates a diagram of a neural network assisted quantization function, according to an embodiment of the present invention;
FIG. 9 schematically illustrates a general training process of a neural network in accordance with an embodiment of the present invention;
FIG. 10 schematically illustrates a simulation result diagram according to an embodiment of the present invention;
FIG. 11 schematically shows a simulation result diagram according to another embodiment of the present invention;
fig. 12 is a schematic diagram illustrating the structure of a quantization apparatus for LDPC decoding of soft information according to an embodiment of the present invention.
Detailed Description
The following detailed description of embodiments of the invention refers to the accompanying drawings. It should be understood that the detailed description and specific examples, while indicating embodiments of the invention, are given by way of illustration and explanation only, not limitation.
Fig. 1 schematically shows an implementation diagram of a quantization method for LDPC decoding soft information according to an embodiment of the present invention. As shown in fig. 1, a quantization method for LDPC decoding soft information includes:
s01, acquiring a mapping relation between soft information before quantization and soft information after quantization; the soft information is also referred to as log-likelihood ratio (LLR). The quantization referred to in this embodiment includes quantization methods that can be employed in the related art. The mapping herein may include data samples embodying the mapping.
S02, fitting the mapping relation through a neural network to obtain a neural network quantizer. The mapping relation obtained in the foregoing step may have certain limitations, such as discreteness, interval limitation, or value adaptation. By using machine learning techniques such as neural networks, the mapping relation can be transformed into a corresponding mapping model, overcoming these limitations and yielding a neural network quantizer that can accept arbitrary input soft information. Fig. 2 schematically shows the LDPC decoding process and the inference process; the dotted path in fig. 2 is active only during training. In this step the expert non-uniform quantizer (the mapping relation) is fitted by the neural network, and the fitting result replaces the original quantizer.
And S03, the neural network quantizer is used for outputting corresponding soft information according to the input soft information.
The neural network quantizer obtained by adopting the steps performs mapping processing on the input soft information through deployment, so that a better mapping effect is realized.
Through this implementation, the decoding efficiency of the soft information can be improved, the LLR bit width can be reduced from 6 bits (or more) down to 4 bits, and the silicon area required for the soft-information memory is reduced by about 15%.
In some embodiments of the present invention, obtaining a mapping relationship between soft information before quantization and soft information after quantization includes: a processing function based on soft information is used as the mapping relation; the method specifically comprises the following steps:
(1) Obtain the optimal truncation threshold and compression coefficient of the soft information, recorded as [β_{qm,cr}, η_{qm,cr}], where qm = 2, 4, 6, 8 denotes the modulation modes QPSK, 16QAM, 64QAM and 256QAM respectively, and cr denotes the code rate, cr ∈ (0, 1). Evidently, qm and cr together uniquely identify one MCS combination.
1) Record the candidate truncation thresholds as r_k ∈ (7, 31), k = 0, 1, …, K−1, where K is the number of candidate thresholds. Define the initial soft information as the first soft information, denoted LLR_j^{1st}. The second soft information is obtained by clipping the first soft information to the candidate threshold:

LLR_j^{2nd} = min(max(LLR_j^{1st}, −r_k), r_k)
2) Input the second soft information into the decoder to obtain the decoded bits, denoted b̂_{l,j}, where l represents the slot index, l = 0, 1, …, L−1, j represents the bit index, j = 0, 1, …, M−1, and L, M respectively denote the number of slots and the number of bits.
3) Calculate the cross-entropy loss from the decoded bits of step 2) and the transmitted bits b_{l,j}:

CE_k = −(1/(L·M)) Σ_l Σ_j [ b_{l,j}·log(b̂_{l,j}) + (1 − b_{l,j})·log(1 − b̂_{l,j}) ]

where k = 0, 1, …, K−1 indexes the truncation threshold.
4) Calculate the optimal truncation threshold:

β_{qm,cr} = r_{k*}, where k* = argmin_k CE_k
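The threshold search in steps 1) to 4) can be sketched as follows. This is a minimal illustration, not the patent's implementation: the LDPC decoder is abstracted into an evaluation callable passed to the search, and the helper names `clip_llr`, `cross_entropy` and `search_threshold` are hypothetical.

```python
import math

def clip_llr(llr, r):
    """Clip first soft information to [-r, r] (step 1)."""
    return max(-r, min(r, llr))

def cross_entropy(tx_bits, decoded_soft, eps=1e-12):
    """Cross-entropy between transmitted bits and decoded soft outputs (step 3)."""
    total = 0.0
    for b, p in zip(tx_bits, decoded_soft):
        p = min(max(p, eps), 1.0 - eps)  # guard against log(0)
        total -= b * math.log(p) + (1 - b) * math.log(1 - p)
    return total / len(tx_bits)

def search_threshold(candidates, loss_of):
    """Step 4: return the candidate r_k minimising the loss CE_k."""
    return min(candidates, key=loss_of)
```

In practice `loss_of` would clip the soft information at each candidate threshold, run the decoder, and return the resulting cross-entropy.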
5) After the truncation threshold β_{qm,cr} is determined, determine the optimal compression coefficient η_{qm,cr}, where the compression coefficient is a parameter of the non-uniform quantizer (whose transfer function is given in the original only as an image). The procedure is as follows.
step 1, for the first soft informationLLR j 1st And carrying out amplitude limiting to obtain second soft information.
Figure 482325DEST_PATH_IMAGE007
Step 2: assign η_{qm,cr} = z_t, t = 0, 1, …, T−1, where z_t ∈ (1, 10) and T is the number of candidate compression coefficients. Input the absolute value of the second soft information into the quantizer to obtain the third soft information LLR_i^{3rd} (the quantizer expression is given in the original only as an image).
Step 3: restore the sign of the third soft information and input it into the decoder to obtain the decoder's performance index BLER, recorded as Bler_t.
Step 4: calculate the optimal compression coefficient:

η_{qm,cr} = z_{t*}, where t* = argmin_t Bler_t
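Steps 1 to 4 of the compression-coefficient search can be sketched similarly. Since the exact quantizer formula appears in the original only as an image, the sketch below assumes a μ-law-style compressor as a stand-in, and abstracts the decoder into a BLER-evaluation callable; all names are hypothetical.

```python
import math

def compress_quantize(llr_abs, eta, beta, levels=8):
    """Map |LLR| in [0, beta] to one of `levels` indices, denser near zero."""
    x = min(llr_abs, beta) / beta                # normalise magnitude to [0, 1]
    y = math.log1p(eta * x) / math.log1p(eta)    # mu-law-style compression (assumed form)
    return round(y * (levels - 1))

def search_eta(candidates, bler_of):
    """Step 4: pick the coefficient z_t with the lowest block error rate."""
    return min(candidates, key=bler_of)
```

The compression coefficient `eta` controls how strongly small magnitudes are expanded relative to large ones; `bler_of` would quantize with each candidate z_t, decode, and return Bler_t.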
5) Repeating the steps 1) to 4) to obtain the truncation threshold and the compression coefficient under all MCS combinations, and obtaining the expert non-uniform transfer functions corresponding to different MCSs to be recorded asf qm,cr
Another example: obtain a normalized expression as the mapping relation based on the distribution function of the soft information. Let the distribution function of the soft information be p_L(x); the mapping relation is then expressed as a normalized integral transform of p_L(x), where H is a normalization factor (the exact expressions are given in the original only as images).
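Because the normalized expression itself is given in the original only as an image, the following sketch uses one classical member of this family purely as an assumption: the cumulative integral of the density of |LLR|, normalised by the factor H and approximated from samples. The function names are hypothetical.

```python
def cdf_mapping(samples, num_bins=64):
    """Build a normalised cumulative mapping from |LLR| samples (assumed form)."""
    hi = max(samples)
    counts = [0] * num_bins
    for s in samples:
        idx = min(int(s / hi * num_bins), num_bins - 1)
        counts[idx] += 1
    total = sum(counts)            # plays the role of the normalisation factor H
    cdf, acc = [], 0
    for c in counts:
        acc += c
        cdf.append(acc / total)    # monotone map from [0, hi] onto (0, 1]
    return cdf

def apply_mapping(x, cdf, hi):
    """Map a value x in [0, hi] through the normalised cumulative mapping."""
    idx = min(int(x / hi * len(cdf)), len(cdf) - 1)
    return cdf[idx]
```

A mapping of this shape spends more output resolution where the soft-information density is high, which is the qualitative behaviour a distribution-based quantizer aims for.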
The above embodiments provide only two alternative approaches, thef qm,cr The method can be obtained by any technical means, and is not limited to theoretical derivation, traversal simulation search and the like.
Fig. 3 schematically shows a flow diagram for searching for the truncation threshold according to an embodiment of the present invention. As shown in fig. 3, the truncation threshold is determined by: determining a search range for the truncation threshold and an optimal-quantization evaluation index; calculating the value of the evaluation index from the transmitted bits and the decoded bits that the decoder obtains from the truncated soft information; and determining, according to the value of the evaluation index, the optimal truncation threshold from the search range as the optimal parameter.
FIG. 4 schematically shows a flow chart for searching for a scaling factor according to an embodiment of the present invention; as shown in fig. 4, the compression factor is determined by: obtaining a value of a performance index of the decoder according to a decoding bit and a transmitting bit obtained by the decoder according to the quantized soft information; and obtaining the optimal compression coefficient as the optimal parameter according to the value of the performance index of the decoder.
FIG. 5 schematically shows the expert non-uniform quantization transfer functions obtained by simulation according to an embodiment of the present invention; in the figure, the thin solid line can be represented as f_{2,0.19}, the thin dotted line as f_{2,0.30}, and the thick solid line as f_{2,0.44}.
In some embodiments of the present invention, the neural network quantizer is obtained by fitting the mapping relationship through a neural network, which includes the following modes:
in some optional embodiments, the neural network is trained by using a data sample containing the mapping relationship, and the trained neural network is used as the neural network quantizer. The neural network can classify or regress the input parameters after being trained, and the trained neural network enables the mapping relation in the data evolution to be changed from a discrete state to a continuous state.
In some optional embodiments, a mathematical expression of the neural network is extracted, the parameters in the expression are determined from data samples containing the mapping relation, and the parameterized expression is used as the neural network quantizer. Specifically, the network adopts a [1-2-2-1] structure, as shown in fig. 6. Fig. 6 schematically shows the structure of a neural network according to an embodiment of the present invention. The SELU activation function applied at each layer of the network is:

selu(x) = λ·x, if x > 0;  selu(x) = λ·α·(e^x − 1), if x ≤ 0
The parameters (weights and biases) of layers 1, 2 and 3 are given in the original only as images.
Therefore, the 6-bit soft information is input directly into the network and a 4-bit result is obtained (the exact expression is given in the original as an image).
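The forward pass of such a [1-2-2-1] SELU network can be sketched as follows. The trained weights appear in the original only as images, so the layer values used in any example must be placeholders; the SELU constants below are the standard λ ≈ 1.0507, α ≈ 1.6733, which the patent may or may not use.

```python
import math

SELU_LAMBDA, SELU_ALPHA = 1.0507, 1.6733

def selu(x):
    """Standard SELU activation (constants assumed, not taken from the patent)."""
    return SELU_LAMBDA * x if x > 0 else SELU_LAMBDA * SELU_ALPHA * (math.exp(x) - 1)

def dense(xs, weights, biases):
    """One fully connected layer followed by SELU."""
    return [selu(sum(w * x for w, x in zip(row, xs)) + b)
            for row, b in zip(weights, biases)]

def quantize_6bit_to_4bit(llr6, layers):
    """Forward pass: 6-bit soft value in, 4-bit quantised value out."""
    xs = [llr6 / 31.0]                         # normalise the 6-bit input
    for weights, biases in layers:
        xs = dense(xs, weights, biases)
    return max(0, min(15, round(xs[0] * 15)))  # clamp result to a 4-bit range
```

With trained parameters in `layers`, quantization reduces to storing 13 scalars and performing a handful of multiply-add operations per soft value, as the text describes.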
The parameters in each of the above layers are determined by the data samples and a loss function. Thus, the neural network quantizer completes 4-bit quantization by storing the network parameters and performing multiply-add operations on the 6-bit soft information. One complication is that the SELU function involves an exponential; the conventional implementation uses a lookup table, but to save storage e^x can be approximated (the approximation formula is given in the original as an image).
the parameters are configured as follows during the training of the neural network:
TABLE 1 (the training configuration values are given in the original as an image)
The neural network quantization function thus obtained is shown in fig. 7, and fig. 7 schematically shows a schematic diagram of the neural network quantization function according to an embodiment of the present invention.
In the above embodiment the neural network quantizer needs no truncation, but it stores 13 parameters and performs several multiply-add operations. To reduce storage and computational complexity, the mapping relation may instead be fitted with a fitting curve: the characteristic parameters of the curve are determined from the data samples containing the mapping relation together with the neural network, and the curve with its parameters determined is used as the neural network quantizer.
FIG. 8 schematically shows a diagram of a neural network assisted quantization function, according to an embodiment of the present invention. As shown in fig. 8. In this embodiment, the fitted curve is a polygonal line, and the characteristic parameter is a vertex of the polygonal line, and the vertex is determined according to an output value of the neural network. In this case, the corresponding method is multi-segment linear quantization, and the multi-segment linear quantization curve is obtained by the neural network learning. The specific operation is as follows:
1) Changing a network structure, wherein a neural network adopts a [1-4-2-1] structure;
2) Replace the activation function with relu: relu(x) = max(0, x).
For the neural network quantization function of the previous embodiment, each fitted curve can be well replaced by a three-segment polyline, and each straight segment can be uniquely and approximately represented by two points; the parameter coordinates are recorded as (x_1^{qm,cr}, y_1^{qm,cr}) and (x_2^{qm,cr}, 1), where the superscript qm,cr indicates that x_1, y_1 and x_2 depend on the combination of modulation scheme and code rate. Adopting the polyline fit further reduces storage and computational complexity.
The process and steps for performing quantization by adopting the fitting curve are as follows:
1) Look up the parameter coordinate table to obtain the parameters x_1^{qm,cr}, y_1^{qm,cr}, x_2^{qm,cr} under the current MCS;
2) Normalize the 6-bit first soft information to be quantized at the decoder entrance, i.e. x_j = LLR_j^{1st} / 31 (referred to as the second soft information);
3) Map the second soft information x_j through the polyline to obtain the normalized 4-bit quantization value y_j:

y_j = y_1·x_j/x_1, if 0 ≤ x_j ≤ x_1;  y_j = y_1 + (1 − y_1)·(x_j − x_1)/(x_2 − x_1), if x_1 < x_j ≤ x_2;  y_j = 1, if x_j > x_2
4) Obtain the third soft information from y_j (the processing formula is given in the original only as an image).
description of the drawings: the quantization process is only for the case that the soft information is greater than or equal to 0, and for the case that the soft information is less than 0, the absolute value of the soft information is only required to be taken in the quantizer, and then multiplied by-1, and the processing mode is strictly symmetrical to the soft information greater than 0. Fig. 9 schematically shows a general training process diagram of a neural network according to an embodiment of the present invention. As shown in fig. 9, the first soft information is normalized, and the neural network is trained and adjusted based on the principle that the MSE Loss between the neural network and the output result of the expert non-uniform quantizer is the minimum.
3. Send the third soft information (4 bits) to the NMS-based LDPC decoder for the decoding operation. The decoder is configured as follows: the number of decoding iterations is 15, and the normalization factor is α = 0.75.
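The broken-line quantization steps 1)–4) above, including the symmetric handling of negative soft information, can be sketched as follows. This is a minimal sketch, not the patent's implementation: the broken-line parameters and the signed 4-bit output range [-7, 7] used for de-normalization are illustrative assumptions, since neither is specified numerically in this excerpt.

```python
def broken_line_quantize(llr_6bit, x1, y1, x2):
    """Quantize a 6-bit LLR (range [-31, 31]) to 4 bits via a
    three-segment broken line with vertices (0,0), (x1,y1), (x2,1), (1,1).
    The final scaling to a signed 4-bit magnitude (0..7) is an assumption."""
    sign = -1 if llr_6bit < 0 else 1
    x = min(abs(llr_6bit) / 31.0, 1.0)        # step 2): normalize |LLR| to [0, 1]
    if x <= x1:                                # first segment: (0,0) -> (x1,y1)
        y = (y1 / x1) * x
    elif x <= x2:                              # second segment: (x1,y1) -> (x2,1)
        y = y1 + (1.0 - y1) * (x - x1) / (x2 - x1)
    else:                                      # third segment: saturated at 1
        y = 1.0
    return sign * round(y * 7)                 # step 4): assumed de-normalization

# Illustrative parameters (not from the patent): x1=0.2, y1=0.6, x2=0.5
print(broken_line_quantize(31, 0.2, 0.6, 0.5))   # saturates at 7
print(broken_line_quantize(-3, 0.2, 0.6, 0.5))   # small |LLR| -> -2
```

In hardware, the per-MCS parameter lookup of step 1) would supply x1, y1 and x2 before this mapping is applied.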
The effect of the embodiments of the present invention is verified by simulating different quantization methods; the decoder performance is compared as follows.
Description of different quantization methods involved in comparison:
TABLE 2
(Table 2, describing the compared quantization methods, appears only as an image in the source.)
Among these, the 4-bit uniQ and 4-bit nonuniQ methods perform poorly and are therefore excluded from the comparisons below.
Fig. 10 schematically shows a simulation result diagram according to an embodiment of the present invention. In Fig. 10, "NMS AWGN Qm=2 cr=0.3" means: the decoder uses the NMS algorithm over an AWGN channel, the modulation scheme is QPSK, and the code rate is 0.3. Fig. 11 schematically shows a simulation result diagram according to another embodiment of the present invention. In Fig. 11, "NMS AWGN Qm=2 cr=0.44" means: the decoder uses the NMS algorithm over an AWGN channel, the modulation scheme is QPSK, and the code rate is 0.44.
From the above results it can be seen that, at both code rates, the embodiments of the present invention outperform the other comparison methods, whether the neural network non-uniform quantizer or the neural-network-assisted quantizer is used. The neural-network-assisted quantizer performs better, and at the BLER = 10% operating point the signal-to-noise-ratio loss is less than 0.1 dB. Therefore, after comprehensively weighing indexes such as storage occupation and computational complexity, the neural-network-assisted quantizer is the better choice for chip implementation.
The 4-bit alpha-nonuniQ method with alpha = 0.5 first truncates the 6-bit LLR according to the alpha threshold and then performs non-uniform quantization; its quantization table is as follows:
TABLE 3
(Table 3, the alpha-nonuniQ quantization table, appears only as an image in the source.)
The 4-bit beta-uniQ method truncates the 6-bit LLR with a truncation threshold and then performs uniform quantization; its quantization table is as follows:
TABLE 4
(Table 4, the beta-uniQ quantization table, appears only as an image in the source.)
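As an illustration of the truncate-then-quantize baselines described above (whose actual quantization tables exist only as images in the source), a uniform quantizer with a truncation threshold might be sketched as follows; the threshold value and the signed 4-bit range [-7, 7] are assumptions, not values from the patent:

```python
def truncate_uniform_quantize(llr_6bit, beta):
    """Truncate a 6-bit LLR to [-beta, beta], then uniformly map it to a
    signed 4-bit value. beta is the truncation threshold; the 4-bit range
    [-7, 7] is an illustrative assumption."""
    clipped = max(-beta, min(beta, llr_6bit))   # truncation step
    return round(clipped / beta * 7)            # uniform 4-bit mapping

print(truncate_uniform_quantize(31, beta=16))   # clipped to 16 -> 7
print(truncate_uniform_quantize(-5, beta=16))   # -5/16*7 = -2.1875 -> -2
```

The non-uniform variant (alpha-nonuniQ) would replace the uniform mapping with a table lookup after the same truncation step.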
Based on the same inventive concept, the present invention also provides an apparatus for quantizing LDPC decoding soft information. Fig. 12 schematically shows the structure of an apparatus for quantizing LDPC decoding soft information according to an embodiment of the present invention. As shown in Fig. 12, the apparatus comprises: a mapping acquisition module for acquiring the mapping relation between the input soft information of the quantizer and the corresponding output soft information; a mapping fitting module for fitting the mapping relation through a neural network to obtain a neural network quantizer; and a quantization application module for outputting corresponding soft information according to the input soft information by using the neural network quantizer.
In some optional embodiments, obtaining the mapping relation between the soft information before quantization and the soft information after quantization comprises: using a processing function based on the soft information as the mapping relation; or obtaining a normalized expression based on a distribution function of the soft information as the mapping relation.
In some optional embodiments, the parameters in the processing function include a truncation threshold, which is determined by: determining a search range for the truncation threshold and an optimal quantization evaluation index; calculating the value of the optimal quantization evaluation index from the transmitted bits and the decoded bits; and determining, according to the optimal quantization evaluation index, an optimal truncation threshold from the search range as the optimal parameter.
Preferably, the parameters in the processing function include a compression coefficient, which is determined by: obtaining the value of a performance index of the decoder from the transmitted bits and the decoded bits; and obtaining, according to the value of the performance index of the decoder, the optimal compression coefficient as the optimal parameter.
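The parameter searches described above (first the truncation threshold, then the compression coefficient) both amount to selecting the candidate that optimizes an evaluation metric computed from transmitted and decoded bits. A minimal sketch follows; the evaluation hook, search range, and toy metric are illustrative assumptions, not the patent's actual procedure:

```python
def grid_search_best(candidates, evaluate):
    """Return the candidate minimizing an evaluation metric.

    `evaluate` is a hypothetical hook standing in for running the decoder
    on truncated/compressed soft information and scoring the decoded bits
    against the transmitted bits (e.g. cross-entropy loss or BLER).
    """
    best, best_score = None, float("inf")
    for c in candidates:
        score = evaluate(c)
        if score < best_score:
            best, best_score = c, score
    return best

# Toy example: pretend the metric is minimized at threshold 12
thresholds = range(4, 32)                 # assumed search range for a 6-bit LLR
best_t = grid_search_best(thresholds, lambda t: (t - 12) ** 2)
print(best_t)  # -> 12
```

In practice the same search would be repeated per MCS combination, with the compression coefficient searched after the threshold is fixed.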
In some optional embodiments, fitting the mapping relation through a neural network to obtain a neural network quantizer comprises: training the neural network with data samples containing the mapping relation, and using the trained neural network as the neural network quantizer; or extracting a mathematical expression of the neural network, determining the parameters in the mathematical expression from data samples containing the mapping relation, and using the mathematical expression with determined parameters as the neural network quantizer; or fitting the mapping relation with a fitted curve, determining the characteristic parameters of the fitted curve from data samples containing the mapping relation and the neural network, and using the fitted curve with determined characteristic parameters as the neural network quantizer.
In some alternative embodiments, the fitted curve is a broken line, and the characteristic parameters are the vertices of the broken line, the vertices being determined from the output values of the neural network.
Preferably, the broken line is a three-segment broken line whose vertices are (0, 0), (x1, y1), (x2, 1) and (1, 1), wherein x1, y1 and x2 are determined from the output values of the neural network.
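Each of the fitting options above reduces to minimizing the MSE between a model's output and the expert quantizer's output (cf. Fig. 9). A minimal one-parameter sketch of such MSE fitting by gradient descent follows; the model form, data, and learning rate are illustrative assumptions standing in for the neural network training the patent describes:

```python
def fit_mapping_mse(samples, lr=0.1, epochs=200):
    """Fit y = w * x to (input, expert-quantizer output) pairs by gradient
    descent on the MSE loss -- a one-parameter stand-in for training a
    neural network quantizer against an expert non-uniform quantizer."""
    w = 0.0
    n = len(samples)
    for _ in range(epochs):
        # d/dw of mean((w*x - y)^2) over the samples
        grad = sum(2 * (w * x - y) * x for x, y in samples) / n
        w -= lr * grad
    return w

# Expert quantizer behaving like y = 0.5 * x on normalized soft information
data = [(x / 10, 0.5 * (x / 10)) for x in range(1, 11)]
w = fit_mapping_mse(data)
print(round(w, 3))  # converges near 0.5
```

A real implementation would use a small multilayer network and then read the broken-line vertices (x1, y1), (x2, 1) off the trained model's output.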
In some alternative embodiments, the neural network can be replaced with other machine learning models.
For specific limitations of each functional block in the LDPC decoding soft information quantization apparatus, reference may be made to the above limitations on the LDPC decoding soft information quantization method, and details are not repeated here. The various modules in the above-described apparatus may be implemented in whole or in part by software, hardware, and combinations thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In some embodiments, the present invention also provides an LDPC decoding soft information quantization device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, the steps of the foregoing LDPC decoding soft information quantization method are implemented. The processor here has numerical calculation and logical operation functions, and includes at least a central processing unit (CPU) with data processing capability, random access memory (RAM), read-only memory (ROM), various I/O ports, an interrupt system, and the like. The processor comprises a kernel, which calls the corresponding program unit from the memory; one or more kernels can be provided, and the method is realized by adjusting kernel parameters. The memory may include volatile memory in a computer-readable medium, random access memory (RAM) and/or non-volatile memory such as read-only memory (ROM) or flash memory (flash RAM), and includes at least one memory chip.
In an embodiment of the present invention, there is also provided a computer-readable storage medium having stored therein instructions which, when executed on a computer, cause a processor to be configured to perform the above-mentioned quantization method of LDPC decoding soft information.
In one embodiment provided by the present invention, a computer program product is provided, which comprises a computer program, and the computer program realizes the quantization method of LDPC decoding soft information described above when being executed by a processor.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and so forth) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory in a computer readable medium, random Access Memory (RAM) and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). The memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a …" does not exclude the presence of additional identical elements in the process, method, article, or apparatus comprising the element.
The above are merely examples of the present application and are not intended to limit the present application. Various modifications and changes may occur to those skilled in the art to which the present application pertains. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (16)

1. A quantization method for LDPC decoding soft information, the method comprising:
acquiring a mapping relation between soft information before quantization and soft information after quantization, which comprises: using a processing function based on the soft information as the mapping relation, or obtaining a normalized expression based on a distribution function of the soft information as the mapping relation;
fitting the mapping relation through a neural network to obtain a neural network quantizer;
the neural network quantizer is used for outputting corresponding soft information according to the input soft information;
wherein using the processing function based on the soft information as the mapping relation comprises: 1) Defining the initial soft information as first soft information, and obtaining second soft information by performing a truncation (clipping) operation on the first soft information; 2) Inputting the second soft information into a decoder to obtain decoded bits; 3) Calculating a cross-entropy loss from the decoded bits obtained in step 2) and the transmitted bits; 4) Calculating an optimal truncation threshold; 5) After the truncation threshold is determined, determining an optimal compression coefficient; 6) Repeating steps 1) to 5) to obtain the truncation thresholds and compression coefficients under all MCS combinations, thereby obtaining the non-uniform transfer functions corresponding to different MCSs.
2. The method of claim 1, wherein the parameter in the processing function comprises a truncation threshold, the truncation threshold determined by:
determining a search range for the truncation threshold and an optimal quantization evaluation index;
calculating the value of the optimal quantization evaluation index from the decoded bits, obtained by the decoder from the truncated soft information, and the transmitted bits;
and determining, according to the value of the optimal quantization evaluation index, an optimal truncation threshold from the search range as the optimal parameter.
3. The method of claim 1, wherein the parameters in the processing function include compression factors, the compression factors determined by:
obtaining the value of a performance index of the decoder from the decoded bits, obtained by the decoder from the quantized soft information, and the transmitted bits;
and obtaining the optimal compression coefficient as the optimal parameter according to the value of the performance index of the decoder.
4. The method of claim 1, wherein fitting the mapping relationship through a neural network to obtain a neural network quantizer comprises:
training the neural network by adopting a data sample containing the mapping relation, and taking the trained neural network as the neural network quantizer; or
Extracting a mathematical expression of the neural network, determining parameters in the mathematical expression according to a data sample containing the mapping relation, and taking the mathematical expression after determining the parameters as the neural network quantizer; or alternatively
And fitting the mapping relation by adopting a fitting curve, determining characteristic parameters in the fitting curve through the data sample containing the mapping relation and the neural network, and taking the fitting curve after determining the characteristic parameters as the neural network quantizer.
5. The method of claim 4, wherein the fitted curve is a polyline, and the characteristic parameter is a vertex of the polyline, the vertex being determined from the output values of the neural network.
6. The method of claim 5, wherein the polyline is a three-segment polyline whose vertices are (0, 0), (x1, y1), (x2, 1) and (1, 1); wherein x1, y1 and x2 are determined from the output values of the neural network.
7. The method of claim 1, wherein the neural network can be replaced with other machine learning models.
8. An apparatus for quantizing LDPC decoding soft information, the apparatus comprising:
the mapping obtaining module is configured to obtain a mapping relationship between the soft information before quantization and the soft information after quantization, and includes: taking a processing function based on soft information as the mapping relation, or taking a normalized expression obtained based on a distribution function of the soft information as the mapping relation;
the mapping fitting module is used for fitting the mapping relation through a neural network to obtain a neural network quantizer; and
the quantization application module is used for outputting corresponding soft information according to the input soft information by utilizing the neural network quantizer;
wherein using the processing function based on the soft information as the mapping relation comprises: 1) Defining the initial soft information as first soft information, and obtaining second soft information by performing a truncation (clipping) operation on the first soft information; 2) Inputting the second soft information into a decoder to obtain decoded bits; 3) Calculating a cross-entropy loss from the decoded bits obtained in step 2) and the transmitted bits; 4) Calculating an optimal truncation threshold; 5) After the truncation threshold is determined, determining an optimal compression coefficient; 6) Repeating steps 1) to 5) to obtain the truncation thresholds and compression coefficients under all MCS combinations, thereby obtaining the non-uniform transfer functions corresponding to different MCSs.
9. The apparatus of claim 8, wherein the parameters in the processing function comprise a truncation threshold, the truncation threshold determined by:
determining a search range for the truncation threshold and an optimal quantization evaluation index;
calculating the value of the optimal quantization evaluation index from the decoded bits, obtained by the decoder from the truncated soft information, and the transmitted bits;
and determining, according to the value of the optimal quantization evaluation index, an optimal truncation threshold from the search range as the optimal parameter.
10. The apparatus of claim 8, wherein the parameters in the processing function comprise compression factors, and wherein the compression factors are determined by:
obtaining the value of a performance index of the decoder from the decoded bits, obtained by the decoder from the quantized soft information, and the transmitted bits;
and obtaining the optimal compression coefficient as the optimal parameter according to the value of the performance index of the decoder.
11. The apparatus of claim 8, wherein fitting the mapping relationship through a neural network to obtain a neural network quantizer comprises:
training the neural network by adopting a data sample containing the mapping relation, and taking the trained neural network as the neural network quantizer; or alternatively
Extracting a mathematical expression of the neural network, determining parameters in the mathematical expression according to a data sample containing the mapping relation, and taking the mathematical expression after determining the parameters as the neural network quantizer; or
And fitting the mapping relation by adopting a fitting curve, determining characteristic parameters in the fitting curve through the data sample containing the mapping relation and the neural network, and taking the fitting curve after determining the characteristic parameters as the neural network quantizer.
12. The apparatus of claim 11, wherein the fitted curve is a polygonal line, and the characteristic parameter is a vertex of the polygonal line, the vertex being determined from an output value of the neural network.
13. The apparatus of claim 12, wherein the broken line is a three-segment broken line whose vertices are (0, 0), (x1, y1), (x2, 1) and (1, 1); wherein x1, y1 and x2 are determined from the output values of the neural network.
14. The apparatus of claim 8, wherein the neural network is replaceable with other machine learning models.
15. A quantization apparatus for LDPC decoding soft information, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the method for quantizing LDPC decoding soft information as claimed in any one of claims 1 to 7.
16. A computer-readable storage medium having stored therein instructions which, when run on a computer, cause the computer to perform the steps of the method for quantizing LDPC decoding soft information of any one of claims 1 to 7.
CN202210968862.8A 2022-08-12 2022-08-12 Method, device and equipment for quantizing LDPC decoding soft information Active CN115037312B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210968862.8A CN115037312B (en) 2022-08-12 2022-08-12 Method, device and equipment for quantizing LDPC decoding soft information

Publications (2)

Publication Number Publication Date
CN115037312A CN115037312A (en) 2022-09-09
CN115037312B true CN115037312B (en) 2023-01-17

Family

ID=83130982

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant