CN108777584A - Fast optimization method for a polar code decoding parameter - Google Patents

Fast optimization method for a polar code decoding parameter (Download PDF)

Info

Publication number
CN108777584A
CN108777584A (application CN201810735831.1A)
Authority
CN
China
Prior art keywords
decoding
polar code
basis function
neural network
radial basis
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810735831.1A
Other languages
Chinese (zh)
Inventor
李世宝
卢丽金
潘荔霞
刘建航
黄庭培
陈海华
邓云强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China University of Petroleum East China
Original Assignee
China University of Petroleum East China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China University of Petroleum (East China)
Priority to CN201810735831.1A
Publication of CN108777584A
Legal status: Pending

Classifications

    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/03Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words
    • H03M13/05Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words using block codes, i.e. a predetermined number of check bits joined to a predetermined number of information bits
    • H03M13/13Linear codes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24133Distances to prototypes
    • G06F18/24137Distances to cluster centroïds
    • G06F18/2414Smoothing the distance, e.g. radial basis function networks [RBFN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L1/00Arrangements for detecting or preventing errors in the information received
    • H04L1/004Arrangements for detecting or preventing errors in the information received by using forward error control
    • H04L1/0045Arrangements at the receiver end
    • H04L1/0054Maximum-likelihood or sequential decoding, e.g. Viterbi, Fano, ZJ algorithms
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L1/00Arrangements for detecting or preventing errors in the information received
    • H04L1/004Arrangements for detecting or preventing errors in the information received by using forward error control
    • H04L1/0056Systems characterized by the type of code used
    • H04L1/0057Block codes

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biophysics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Evolutionary Biology (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Error Detection And Correction (AREA)

Abstract

The present invention provides a fast optimization method for a polar code decoding parameter. The method first collects and organizes sample data; it then builds a model according to the characteristics and size of the sample data and trains the network with supervised learning and a stochastic gradient optimization method; next, the likelihood ratio computed from the received signal is input into the trained radial basis function neural network model, which outputs M; finally, L is initialized to M and successive cancellation list decoding is executed. By combining radial basis function neural network techniques with polar code decoding techniques, the method avoids unnecessary computation and thus greatly reduces the decoding complexity of polar codes.

Description

Fast optimization method for a polar code decoding parameter
Technical field
The invention belongs to the field of communication technology, and in particular relates to a method that uses a radial basis function neural network to optimize a polar code decoding parameter of successive cancellation list decoding.
Background technology
Polar codes are a new type of channel code proposed by E. Arikan in 2008, and they are the first constructive coding scheme rigorously proven by mathematical methods to achieve channel capacity. Successive cancellation (SC) decoding was proposed at the same time as polar codes. SC decoding can be viewed as a path search on a binary tree: the SC decoding algorithm starts from the root node of the code tree and searches layer by layer down to the leaf layer, and after expanding each layer it extends only the better of the two successors. SC decoding has two main characteristics: on the one hand, its complexity is low and its decoder structure is simple; on the other hand, it is proven in theory to reach the Shannon limit when the code length is sufficiently large. However, under finite code-length configurations, the error-correction performance of the SC decoding algorithm is unsatisfactory. To improve performance, successive cancellation list (SCL) decoding was proposed. SCL decoding is a modified version of SC decoding: unlike SC, the SCL decoding algorithm no longer extends only the better of the two successors, but retains up to L successor paths, and when extending at the next layer, all of these at most L candidate paths are extended. When the extension reaches the leaf layer, at most L candidate paths are retained in the list. Because SCL decoding achieves maximum-likelihood decoding performance only at higher SNR, a cyclic redundancy check (CRC) is introduced to improve the decoding performance of polar codes: the L candidate paths are checked with the CRC, and the decoder finally outputs the most likely candidate path that passes the CRC. CRC-aided SCL has better decoding performance than Turbo codes and LDPC codes, but as L increases, the decoding complexity also increases. To solve this problem, the adaptive successive cancellation list (AD-SCL) decoding algorithm controls L adaptively, avoiding the computation of unnecessary paths and greatly reducing the decoding complexity.
However, at lower SNR, AD-SCL frequently exhibits high decoding complexity. The AD-SCL algorithm always initializes L to 1. If decoding with L = 1 fails, L is updated to 2L and decoding continues, until L = Lmax, where Lmax is the maximum list size set according to practical conditions. At low SNR and with L = 1, the probability that the AD-SCL algorithm fails is high, so L must be updated frequently, which increases complexity. If instead L is initialized to a suitable value when decoding begins, a single decoding attempt is likely to succeed and the decoding complexity can be significantly reduced.
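The following is a minimal Python sketch of the AD-SCL control loop described above, intended only to illustrate the doubling of L; the functions scl_decode and crc_passes are hypothetical placeholders for an SCL decoder and a CRC check and are not part of this disclosure.

```python
def ad_scl_decode(llrs, scl_decode, crc_passes, l_init=1, l_max=32):
    """AD-SCL control loop: start from l_init and double L after each failed attempt."""
    L = l_init
    while L <= l_max:
        # scl_decode is assumed to return up to L (path, metric) candidates
        candidates = scl_decode(llrs, list_size=L)
        passing = [(path, metric) for path, metric in candidates if crc_passes(path)]
        if passing:
            # output the most likely candidate path that passes the CRC check
            return max(passing, key=lambda pm: pm[1])[0]
        L *= 2  # decoding failed: double L and try again
    return None  # decoding is abandoned once L exceeds l_max
```

With l_init = 1 this is the baseline AD-SCL behaviour; the method of this patent replaces l_init with the value M predicted by the radial basis function neural network.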
To reduce decoding complexity, a suitable value of L is sought that lowers the amount of computation while guaranteeing the decoding performance of the polar code. This patent therefore proposes a fast optimization method for a polar code decoding parameter: a radial basis function neural network is built and trained to optimize the value of L, so as to reduce the amount of computation and, ultimately, the decoding complexity.
Summary of the invention
The present invention proposes a parameter optimization method for the SCL decoding algorithm assisted by a radial basis function neural network, which reduces the decoding complexity by optimizing the value of L while keeping the decoding performance unchanged; the optimized value of L is denoted M.
In the sample data preparation stage, different signal-to-noise ratio are based on, 50000 adaptive serial list decodings of offsetting is executed and calculates Method, the likelihood ratio being calculated by reception signal when will be successfully decoded each time and it is successfully decoded when corresponding L record, one The likelihood ratio that secondary decoding success is recorded constitutes one group of sample data with corresponding L.10000 groups of sample datas are randomly selected, from 75% group of data is randomly selected in this 10000 groups of data as training sample, and using remaining 25% group of data as test specimens This.
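As an illustration of the data collection just described, the following Python sketch shows how the (likelihood ratio, L) pairs could be gathered and split; run_ad_scl_once is a hypothetical helper that performs one adaptive SCL decoding at a given SNR and, on success, returns the likelihood ratio computed from the received signal together with the list size L that succeeded.

```python
import random

def collect_samples(run_ad_scl_once, snrs, n_runs=50000, n_keep=10000, train_frac=0.75):
    samples = []
    for i in range(n_runs):
        snr = snrs[i % len(snrs)]              # cycle over the configured SNR points
        result = run_ad_scl_once(snr)          # None if this decoding attempt failed
        if result is not None:
            llr, successful_L = result
            samples.append((llr, successful_L))  # one group of sample data
    kept = random.sample(samples, min(n_keep, len(samples)))  # keep 10000 groups at random
    random.shuffle(kept)
    split = int(train_frac * len(kept))
    return kept[:split], kept[split:]          # 75% training samples, 25% test samples
```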
A radial basis function neural network (RBFNN) is a three-layer feed-forward network with a single hidden layer and strong classification capability, so it can be used as a classifier; its layer structure and training rules can be set according to practical conditions. When the radial basis function neural network model is built, the structure contains 1 input layer, 1 hidden layer and 1 output layer: the number of input-layer nodes is set to 1, the number of hidden-layer nodes to 300 and the number of output-layer nodes to 6, and the layers are fully connected. The hidden-layer neurons differ from the output-layer neurons: the activation function of the hidden-layer nodes is a radial basis function (a Gaussian function), while the activation function of the output-layer nodes is linear. The likelihood ratio is the network input, and the label L takes 6 different values, i.e. L ∈ {1, 2, 4, 8, 16, 32}, so the samples are divided into 6 classes. The radial basis function neural network is trained with supervised learning and the error back-propagation algorithm: stochastic gradient descent is used to train and optimize, under supervision, the centers and variances of the radial basis functions and the weights from the hidden layer to the output layer, correcting each parameter until the training of the network is complete.
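A minimal numpy sketch of the network structure described above is given below: one scalar input (the likelihood ratio), 300 Gaussian hidden units and 6 linear output units, one per class L in {1, 2, 4, 8, 16, 32}. The initial parameter values are illustrative assumptions, not part of the disclosure.

```python
import numpy as np

L_CLASSES = [1, 2, 4, 8, 16, 32]  # output index k corresponds to list size L_CLASSES[k]

class RBFNN:
    def __init__(self, n_hidden=300, n_out=6, seed=0):
        rng = np.random.default_rng(seed)
        self.centers = rng.normal(size=n_hidden)        # centers of the radial basis functions
        self.sigmas = np.ones(n_hidden)                 # spreads (variances) of the Gaussians
        self.weights = rng.normal(scale=0.1, size=(n_hidden, n_out))  # hidden-to-output weights

    def hidden(self, llr):
        # Gaussian radial basis activation on the distance between the input and each center
        return np.exp(-((llr - self.centers) ** 2) / (2.0 * self.sigmas ** 2))

    def forward(self, llr):
        # the output layer is a linear weighted sum of the hidden-unit outputs
        return self.hidden(llr) @ self.weights
```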
At the polar-code decoding end, the likelihood ratio computed from the received signal is input into the radial basis function neural network model to obtain M; L is initialized to M and successive cancellation list decoding is executed.
The parameter optimization procedure comprises the following steps:
Step 1: prepare the sample data and pre-process it;
Step 2: build the radial basis function neural network and train it;
Step 3: in the polar-code decoding stage, input the likelihood ratio into the radial basis function neural network model to obtain a value M, initialize L to M, and execute successive cancellation list decoding.
Here, preparing the sample data in Step 1 means executing 50000 adaptive successive cancellation list decodings and recording, for each successful decoding, the likelihood ratio computed from the received signal together with the corresponding L; the likelihood ratio and the corresponding L recorded for one successful decoding form one group of sample data. 10000 groups of sample data are randomly selected; 75% of these 10000 groups are randomly selected as training samples and the remaining 25% are used as test samples. Building the radial basis function neural network in Step 2 means setting the number of input-layer nodes to 1, the number of hidden layers to 1, the number of hidden-layer nodes to 300, and the number of output-layer nodes to 6.
Advantageous effect
Compared with the prior art, the present invention has the following innovative points:
Parameter L is optimized with a radial basis function neural network. At the polar-code decoding end, the likelihood ratio is input into the trained radial basis function neural network, which outputs M, thereby achieving fast classification of the likelihood ratio. On this basis, and while guaranteeing polar-code performance, the decoding algorithm can use a suitable value of L to reduce the decoding complexity.
Radial basis function neural network techniques and polar-code decoding techniques are combined. In the sample data preparation stage, the sample data is obtained by repeatedly executing adaptive successive cancellation list decoding. In the stage of building and training the radial basis function neural network, the input of the input layer is the likelihood ratio and the output of the output layer is L. At the decoding end, based on the SCL decoding algorithm, L is initialized to M. The probability that SCL decoding succeeds is then higher, L does not need to be updated frequently, unnecessary arithmetic operations are avoided, and the amount of computation in the decoder is reduced, so the computational complexity is greatly reduced.
Description of the drawings
Fig. 1 is a schematic diagram of the radial basis function neural network built when its structure is determined;
Fig. 2 is the flow chart of the method for optimizing parameter L.
Detailed description of the embodiments
The present invention is further described below with reference to the drawings and an embodiment.
The present invention provides a fast optimization method for a polar code decoding parameter, which mainly comprises three parts: preparing the sample data, building and training the radial basis function neural network, and decoding. In the sample data preparation stage, 50000 adaptive successive cancellation list decodings are first executed at different SNRs, and for each successful decoding the likelihood ratio computed from the received signal and the corresponding L are recorded; 10000 groups of sample data are then randomly selected, and finally 75% of these 10000 groups are randomly selected as training samples while the remaining 25% are used as test samples. In the stage of building and training the radial basis function neural network, the layer structure and parameters of the network are first determined according to the characteristics and size of the training samples and the network is built; the centers and variances of the radial basis functions and the weights from the hidden layer to the output layer are then all trained under supervision by stochastic gradient descent, and each parameter is corrected and adjusted. At the polar-code decoding end, the likelihood ratio computed from the received signal is first input into the radial basis function neural network model to obtain M; L is finally initialized to M and successive cancellation list decoding is executed.
In the sample data preparation stage, 50000 adaptive successive cancellation list decodings are executed at different SNRs, and for each successful decoding the likelihood ratio computed from the received signal and the corresponding L are recorded; the likelihood ratio and the corresponding L recorded for one successful decoding form one group of sample data. 10000 groups of sample data are then randomly selected; finally, 75% of these 10000 groups are randomly selected as training samples and the remaining 25% are used as test samples, and the sample data is converted to normalized numeric data. In this embodiment, the code length of adaptive successive cancellation list decoding is set to 1024, the code rate to 0.5, Lmax = 32, and the CRC length to 16; the numbers of training and test samples are therefore 7500 and 2500 respectively.
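The normalization and labelling step mentioned above could look like the following sketch; the min-max normalization is one plausible choice, since the embodiment only states that the sample data is converted to normalized numeric data, and the preprocess function assumes the samples collected earlier.

```python
import numpy as np

L_CLASSES = [1, 2, 4, 8, 16, 32]

def preprocess(samples):
    """Convert (likelihood ratio, L) pairs to normalized inputs and one-hot targets."""
    llrs = np.array([llr for llr, _ in samples], dtype=float)
    labels = np.array([L_CLASSES.index(L) for _, L in samples])       # class index 0..5
    llrs = (llrs - llrs.min()) / (llrs.max() - llrs.min() + 1e-12)    # normalize to [0, 1]
    targets = np.eye(len(L_CLASSES))[labels]                          # 6-dimensional one-hot targets
    return llrs, targets
```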
In the stage of building and training the radial basis function neural network, the network built is shown in Fig. 1 and contains 1 input layer, 1 hidden layer and 1 output layer. Because the network input is a single likelihood ratio, the number of input-layer nodes is 1; the number of hidden-layer nodes is set to 300. L is the label and each sample corresponds to one label; in this embodiment the label takes 6 different values, i.e. L = 1, 2, 4, 8, 16, 32, so the samples can be divided into 6 classes and the number of output-layer nodes is set to 6. The input layer, hidden layer and output layer are fully connected. The hidden-layer neurons differ from the output-layer neurons: the activation function of the hidden-layer nodes is a radial basis function and the activation function of the output-layer nodes is linear. In this embodiment the basis function of the hidden-layer nodes uses the Euclidean distance with a Gaussian function as the activation function, and the network output is a linear weighted sum of the hidden-unit outputs. The network is trained with supervised learning using the back-propagation algorithm; the cost function of this embodiment is the squared error between the network output and the desired output. On this basis, the cost function is optimized with stochastic gradient descent: the centers and variances of the radial basis functions and the weights from the hidden layer to the output layer are all trained under supervision, and at each iteration the parameters are adjusted and corrected along the negative gradient of the error with a certain learning rate. According to the resulting training rule, the centers, variances and hidden-to-output weights are corrected continuously until all samples have been trained.
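The following sketch shows one stochastic-gradient-descent pass over the training samples under the assumptions of the earlier RBFNN sketch: the cost is the squared error between the network output and the one-hot target, and the centers, variances and hidden-to-output weights are adjusted along the negative error gradient with a fixed learning rate. The learning rate is illustrative.

```python
import numpy as np

def train_epoch(net, llrs, targets, lr=0.01):
    for x, t in zip(llrs, targets):
        h = net.hidden(x)                 # hidden-layer outputs
        y = h @ net.weights               # network output (linear weighted sum)
        err = y - t                       # output error for the squared-error cost
        delta_h = net.weights @ err       # error back-propagated to the hidden layer
        diff = x - net.centers
        grad_w = np.outer(h, err)                              # gradient w.r.t. the weights
        grad_c = delta_h * h * diff / net.sigmas ** 2          # gradient w.r.t. the centers
        grad_s = delta_h * h * diff ** 2 / net.sigmas ** 3     # gradient w.r.t. the spreads
        net.weights -= lr * grad_w
        net.centers -= lr * grad_c
        net.sigmas -= lr * grad_s
```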
At the polar-code decoding end, parameter L is optimized by the radial basis function neural network; the flow chart of the optimization method for parameter L is shown in Fig. 2. First, the likelihood ratio computed from the received signal is input into the trained radial basis function neural network, which outputs M. L is then initialized to M and successive cancellation list decoding is executed. CRC checks are then performed on the L candidate decoding paths: if one or more candidate paths pass the CRC check, the most likely such path is output and decoding terminates; otherwise, L is updated to 2L. After the update, if L does not exceed Lmax, successive cancellation list decoding continues; otherwise decoding terminates. In this embodiment the code length of successive cancellation list decoding is set to 1024, the code rate to 0.5, Lmax = 32, and the CRC length to 16.
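A sketch of this decoding flow is given below, reusing the hypothetical scl_decode and crc_passes helpers and the trained RBFNN from the earlier sketches. The patent specifies a single likelihood ratio as the network input, so the same quantity is passed to both the network and the decoder here purely for illustration.

```python
import numpy as np

L_CLASSES = [1, 2, 4, 8, 16, 32]

def optimized_scl_decode(llr, net, scl_decode, crc_passes, l_max=32):
    # classify the likelihood ratio with the trained RBFNN to obtain M
    M = L_CLASSES[int(np.argmax(net.forward(llr)))]
    L = M                                             # initialize L to M instead of 1
    while L <= l_max:
        candidates = scl_decode(llr, list_size=L)     # successive cancellation list decoding
        passing = [(p, m) for p, m in candidates if crc_passes(p)]
        if passing:
            return max(passing, key=lambda pm: pm[1])[0]  # most likely CRC-valid path
        L *= 2                                        # no candidate passed CRC: update L to 2L
    return None                                       # exit once L exceeds l_max
```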
The above description is only a specific embodiment of the present invention, but the protection scope of the present invention is not limited to it; any person skilled in the art can readily conceive of variations or replacements within the technical scope proposed by the present invention, and all of these should be covered by the protection scope of the present invention.

Claims (3)

1. A fast optimization method for a polar code decoding parameter, characterized in that the method uses a radial basis function neural network to optimize the list size L of successive cancellation list decoding, the parameter optimization method comprising the following steps:
Step 1: prepare the sample data and pre-process it;
Step 2: build the radial basis function neural network and train it;
Step 3: in the polar-code decoding stage, input the likelihood ratio into the radial basis function neural network model to obtain a value M, initialize L to M, and execute successive cancellation list decoding.
2. The fast optimization method for a polar code decoding parameter according to claim 1, characterized in that preparing the sample data in Step 1 means executing 50000 adaptive successive cancellation list decodings and recording, for each successful decoding, the likelihood ratio computed from the received signal together with the corresponding L, the likelihood ratio and the corresponding L recorded for one successful decoding forming one group of sample data; randomly selecting 10000 groups of sample data; randomly selecting 75% of these 10000 groups as training samples; and using the remaining 25% as test samples.
3. The fast optimization method for a polar code decoding parameter according to claim 1, characterized in that building the radial basis function neural network in Step 2 means setting the number of input-layer nodes to 1, the number of hidden layers to 1, the number of hidden-layer nodes to 300, and the number of output-layer nodes to 6.
CN201810735831.1A 2018-07-06 2018-07-06 Fast optimization method for a polar code decoding parameter Pending CN108777584A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810735831.1A CN108777584A (en) 2018-07-06 2018-07-06 Fast optimization method for a polar code decoding parameter

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810735831.1A CN108777584A (en) 2018-07-06 2018-07-06 Fast optimization method for a polar code decoding parameter

Publications (1)

Publication Number Publication Date
CN108777584A true CN108777584A (en) 2018-11-09

Family

ID=64029623

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810735831.1A Pending CN108777584A (en) Fast optimization method for a polar code decoding parameter

Country Status (1)

Country Link
CN (1) CN108777584A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109450459A (en) * 2019-01-16 2019-03-08 中国计量大学 A kind of polarization code FNSC decoder based on deep learning
CN110212922A (en) * 2019-06-03 2019-09-06 南京宁麒智能计算芯片研究院有限公司 A kind of polarization code adaptive decoding method and system
CN111030708A (en) * 2019-12-27 2020-04-17 哈尔滨工业大学(深圳) Iterative adjustable soft serial offset list decoding method and device for polarization code
CN111224677A (en) * 2018-11-27 2020-06-02 华为技术有限公司 Encoding method, decoding method and device
CN111697975A (en) * 2020-06-01 2020-09-22 西安工业大学 Polarization code continuous deletion decoding optimization algorithm based on full-connection neural network

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1359697A1 (en) * 2002-04-30 2003-11-05 Psytechnics Ltd Method and apparatus for transmission error characterisation
CN101194452A (en) * 2005-05-27 2008-06-04 同流技术控股有限公司 Evolutionary synthesis of a modem for band-limited non-linear channels
CN101594154A (en) * 2009-07-14 2009-12-02 北京航空航天大学 TCM neural network demodulation method based on code restoration
CN105488528A (en) * 2015-11-26 2016-04-13 北京工业大学 Improved adaptive genetic algorithm based neural network image classification method
CN106209113A (en) * 2016-07-29 2016-12-07 中国石油大学(华东) A kind of decoding method of polarization code
CN106506009A (en) * 2016-10-31 2017-03-15 中国石油大学(华东) A kind of interpretation method of polarization code
US20170126360A1 (en) * 2015-11-04 2017-05-04 Mitsubishi Electric Research Laboratories, Inc. Fast Log-Likelihood Ratio (LLR) Computation for Decoding High-Order and High-Dimensional Modulation Schemes
CN106656212A (en) * 2016-12-05 2017-05-10 东南大学 Self-adaptive continuous erasure decoding method and architecture based on polarization code
CN106953821A (en) * 2017-03-29 2017-07-14 西安电子科技大学 A kind of time-frequency overlapped signal Modulation Identification method under Underlay frequency spectrum shares
CN107241106A (en) * 2017-05-24 2017-10-10 东南大学 Polarization code decoding algorithm based on deep learning
CN107947803A (en) * 2017-12-12 2018-04-20 中国石油大学(华东) A kind of method for rapidly decoding of polarization code
CN107994973A (en) * 2017-12-04 2018-05-04 电子科技大学 A kind of adaptive modulation and coding method

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1359697A1 (en) * 2002-04-30 2003-11-05 Psytechnics Ltd Method and apparatus for transmission error characterisation
CN101194452A (en) * 2005-05-27 2008-06-04 同流技术控股有限公司 Evolutionary synthesis of a modem for band-limited non-linear channels
CN101594154A (en) * 2009-07-14 2009-12-02 北京航空航天大学 TCM neural network demodulation method based on code restoration
US20170126360A1 (en) * 2015-11-04 2017-05-04 Mitsubishi Electric Research Laboratories, Inc. Fast Log-Likelihood Ratio (LLR) Computation for Decoding High-Order and High-Dimensional Modulation Schemes
WO2017077945A1 (en) * 2015-11-04 2017-05-11 Mitsubishi Electric Corporation Method and receiver for decoding symbol transmitted over channel
CN105488528A (en) * 2015-11-26 2016-04-13 北京工业大学 Improved adaptive genetic algorithm based neural network image classification method
CN106209113A (en) * 2016-07-29 2016-12-07 中国石油大学(华东) A kind of decoding method of polarization code
CN106506009A (en) * 2016-10-31 2017-03-15 中国石油大学(华东) A kind of interpretation method of polarization code
CN106656212A (en) * 2016-12-05 2017-05-10 东南大学 Self-adaptive continuous erasure decoding method and architecture based on polarization code
CN106953821A (en) * 2017-03-29 2017-07-14 西安电子科技大学 A kind of time-frequency overlapped signal Modulation Identification method under Underlay frequency spectrum shares
CN107241106A (en) * 2017-05-24 2017-10-10 东南大学 Polarization code decoding algorithm based on deep learning
CN107994973A (en) * 2017-12-04 2018-05-04 电子科技大学 A kind of adaptive modulation and coding method
CN107947803A (en) * 2017-12-12 2018-04-20 中国石油大学(华东) A kind of method for rapidly decoding of polarization code

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
HUGO TORRES SALAMEA et al.: "Design of a Fault Tolerant Digital Communication System, by means of RBF Networks. Comparison Simulations with the Encoding and Decoding Algorithms BCH (7,4,1)", IEEE Latin America Transactions *
刘亚军 et al.: "A low-latency successive cancellation list decoding algorithm for polar codes" (一种低时延极化码列表连续删除译码算法), Computer Engineering (《计算机工程》) *
樊婷婷 et al.: "Quantization issues in the SC decoding algorithm for polar codes" (Polar码SC译码算法的量化问题), Journal of Beijing University of Posts and Telecommunications (《北京邮电大学学报》) *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111224677A (en) * 2018-11-27 2020-06-02 华为技术有限公司 Encoding method, decoding method and device
CN111224677B (en) * 2018-11-27 2021-10-15 华为技术有限公司 Encoding method, decoding method and device
JP2022511771A (en) * 2018-11-27 2022-02-01 華為技術有限公司 Coding method and device and decoding method and device
JP7228042B2 (en) 2018-11-27 2023-02-22 華為技術有限公司 Encoding method and device and decoding method and device
CN109450459A (en) * 2019-01-16 2019-03-08 中国计量大学 A kind of polarization code FNSC decoder based on deep learning
CN110212922A (en) * 2019-06-03 2019-09-06 南京宁麒智能计算芯片研究院有限公司 A kind of polarization code adaptive decoding method and system
CN110212922B (en) * 2019-06-03 2022-11-11 南京宁麒智能计算芯片研究院有限公司 Polarization code self-adaptive decoding method and system
CN111030708A (en) * 2019-12-27 2020-04-17 哈尔滨工业大学(深圳) Iterative adjustable soft serial offset list decoding method and device for polarization code
CN111030708B (en) * 2019-12-27 2023-05-12 哈尔滨工业大学(深圳) Iterative adjustable soft serial cancellation list decoding method and device for polarization codes
CN111697975A (en) * 2020-06-01 2020-09-22 西安工业大学 Polarization code continuous deletion decoding optimization algorithm based on full-connection neural network

Similar Documents

Publication Publication Date Title
CN108777584A (en) A kind of fast Optimization of polarization code decoding parameter
CN108880568A (en) A kind of serial counteracting list decoding parameter optimization method based on convolutional neural networks
Shen et al. Fractional skipping: Towards finer-grained dynamic CNN inference
CN110278002B (en) Bit-flipping-based polarization code belief propagation list decoding method
CN109245776A (en) A kind of polarization code decoding parameter determination method based on deep neural network
CN103929210B (en) Hard decision decoding method based on genetic algorithm and neural network
Wang et al. Learning to flip successive cancellation decoding of polar codes with LSTM networks
Lugosch et al. Learning from the syndrome
CN116680540A (en) Wind power prediction method based on deep learning
CN110138390A (en) A kind of polarization code SSCL algorithm decoder based on deep learning
Zhao et al. Genetic optimization of radial basis probabilistic neural networks
CN112713903B (en) Polarization code construction method based on general partial order and genetic algorithm under SCL decoder
JPWO2018167885A1 (en) Information processing apparatus, information processing method, and information processing program
CN106452675A (en) Sphere decoding method for polar codes
Yuan et al. A novel hard decision decoding scheme based on genetic algorithm and neural network
CN112712178A (en) Bayesian network structure learning method and system based on genetic algorithm
CN110572166B (en) BCH code decoding method based on deep learning
Zor et al. BeamECOC: A local search for the optimization of the ECOC matrix
CN114401016B (en) Two-stage construction method for rate compatible shortened polarization code
US20130318017A1 (en) Devices for Learning and/or Decoding Messages, Implementing a Neural Network, Methods of Learning and Decoding and Corresponding Computer Programs
CN110212922A (en) A kind of polarization code adaptive decoding method and system
CN113823322A (en) Simplified and improved Transformer model-based voice recognition method
CN108388942A (en) Information intelligent processing method based on big data
CN114217580A (en) Functional fiber production scheduling method based on improved differential evolution algorithm
CN108900198A (en) A kind of serial fast determination method for offsetting list decoding parameter

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20181109