CN111490798B - Decoding method and decoding device - Google Patents

Decoding method and decoding device

Info

Publication number
CN111490798B
CN111490798B (application CN201910087689.9A)
Authority
CN
China
Prior art keywords
decoding
neural network
initial
decoding unit
model
Prior art date
Legal status
Active
Application number
CN201910087689.9A
Other languages
Chinese (zh)
Other versions
CN111490798A
Inventor
张朝阳
宋旭冉
秦康剑
朱致焕
徐晨
于天航
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Priority to CN201910087689.9A
Priority to PCT/CN2020/071341 (published as WO2020156095A1)
Publication of CN111490798A
Application granted
Publication of CN111490798B
Legal status: Active

Classifications

    • H ELECTRICITY
    • H03 ELECTRONIC CIRCUITRY
    • H03M CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M 13/00 Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M 13/03 Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words
    • H03M 13/05 Error detection or forward error correction by redundancy in data representation using block codes, i.e. a predetermined number of check bits joined to a predetermined number of information bits
    • H03M 13/13 Linear codes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks

Abstract

The application provides a decoding method and a decoding apparatus. The method includes: obtaining soft information of N bits to be decoded, where N is an integer greater than or equal to 2; and decoding the soft information through a decoding model to obtain a decoding result, where the decoding model is composed of a plurality of neural network decoding units, each neural network decoding unit supports an exclusive-OR operation on the soft information, and the decoding model is obtained through at least one training process. The decoding model of the embodiments of the application can meet the requirements of high-rate transmission and low decoding delay, and has good decoding performance.

Description

Decoding method and decoding device
Technical Field
The present application relates to the field of communications, and in particular, to a decoding method and a decoding apparatus.
Background
The rapid evolution of wireless communication indicates that the fifth generation (5G) communication system will exhibit new features. The three most typical communication scenarios are enhanced mobile broadband (eMBB), massive machine-type communication (mMTC), and ultra-reliable low-latency communication (URLLC), and the requirements of these scenarios pose new challenges to the existing long term evolution (LTE) technology. Channel coding, as the most basic radio access technology, is one of the important research objects for meeting the requirements of 5G communication. Polar codes were selected as the control channel coding scheme in the 5G standard. The Polar code is the first, and so far the only, channel coding method that can be strictly proven to "reach" the channel capacity. Polar codes perform far better than Turbo codes and low density parity check (LDPC) codes at different code lengths, especially at finite code lengths. In addition, Polar codes have low computational complexity in encoding and decoding. These advantages give Polar codes great development and application prospects in 5G.
Although maximum likelihood decoding has the best decoding performance, the received modulation symbols need to be correlated with all possible codewords, which makes maximum likelihood decoding practically infeasible at realistic code-length configurations.
Therefore, in order to meet the requirements of high-rate transmission and low decoding delay, designing a Polar code decoding model with good decoding performance has become an urgent problem to be solved.
Disclosure of Invention
The application provides a decoding method and a decoding device, which have good decoding performance.
In a first aspect, a method for decoding is provided, where the method includes:
acquiring soft information of N bits to be decoded, wherein N is an integer greater than or equal to 2;
decoding the soft information through a decoding model to obtain a decoding result, wherein the decoding model is composed of a plurality of neural network decoding units, each neural network decoding unit supports the exclusive OR operation of the soft information, and the decoding model is obtained through at least one training process.
According to the embodiments of the present application, a plurality of neural network decoding units form the decoding model. Because the decoding model is obtained by connecting small neural network decoding units, the learning process for decoding can generalize to the entire codeword space from a small set of learning samples, which weakens the impact of long-codeword information on the complexity and learning difficulty of the neural network. The decoding model of the embodiments of the present application can meet the requirements of high-rate transmission and low decoding delay, and has good decoding performance.
With reference to the first aspect, in one implementation, the plurality of neural network decoding units in the decoding model form a log2(N)-layer structure, where the output of the neural network decoding units in the previous layer serves as the input of the next layer.
With reference to the first aspect, in one implementation, each neural network decoding unit has 2 inputs and 2 outputs and at least one hidden layer.
Optionally, each hidden layer may include Q nodes, and Q is an integer greater than or equal to 2.
It should be understood that the hidden layer in the embodiments of the present application may also be referred to by other equivalent names, and the embodiments of the present application are not limited thereto.
With reference to the first aspect, in an implementation manner, the neural network decoding unit includes neural network decoding unit parameters, the neural network decoding unit parameters are used to indicate a mapping relationship between input information and output information of the neural network decoding unit, and the neural network decoding unit parameters include a weight matrix and an offset vector.
With reference to the first aspect, in an implementation manner, an input vector input to one neural network decoding unit and the output vector output by that neural network decoding unit have the following mapping relationship:

h = g1(w1y + b1), x = g2(w2h + b2)

wherein y = (y1, y2)^T represents the input vector, x = (x1, x2)^T represents the output vector, w1 and w2 represent the weight matrices, b1 and b2 represent the offset vectors, h represents the hidden layer element vector, and g1 and g2 represent activation functions; w1 and w2 are real matrices, and b1, b2, h, y and x are real vectors.

With reference to the first aspect, in an implementation manner, for any value of the output vector x among (0,0)^T, (0,1)^T, (1,0)^T and (1,1)^T, the output vector y has the following mapping relation with x:

x1 = y1 ⊕ y2, x2 = y2

wherein ⊕ represents an exclusive-OR operation.
With reference to the first aspect, in an implementation manner, before the decoding of the soft information by the decoding model, the method further includes:
obtaining the decoding model.
It should be understood that the decoding model may be trained by the decoding apparatus itself, or may be trained by another apparatus; the embodiments of the present application are not limited thereto.
In the case where the decoding model is trained by another apparatus, the decoding apparatus obtaining the decoding model includes the decoding apparatus obtaining the decoding model from that other apparatus.
In this case, the decoding model may have been trained by that other apparatus or by yet another apparatus, and the embodiments of the present application are not limited thereto.
In the embodiments of the present application, because the decoding model is trained by another apparatus, the decoding apparatus can obtain the decoding model from that apparatus and use it directly without training the model itself, avoiding the overhead of repeated training.
Optionally, in the case where the decoding apparatus trains the decoding model itself, the decoding apparatus obtaining the decoding model includes the decoding apparatus training to obtain the decoding model.
In this case, after the decoding apparatus obtains the decoding model through training, it can send the decoding model to another apparatus for use, so that the other apparatus can use the decoding model directly without training, avoiding the overhead of repeated training.
It should be understood that, in practical applications, once the decoding apparatus has trained the decoding model, it can use the model directly in later decoding without training again. That is, the decoding apparatus may train the decoding model in advance and, during decoding, use it directly rather than train it again. Optionally, the decoding apparatus may instead train the decoding model only when a decoding requirement arises and then perform decoding; the embodiments of the present application are not limited thereto.
With reference to the first aspect, in one implementation manner, the decoding model is obtained through two training processes.
With reference to the first aspect, in an implementation manner, the obtaining the decoding model includes:
constructing an initial neural network decoding unit and setting initial neural network decoding unit parameters, where the initial neural network decoding unit parameters are used to indicate a mapping relationship between input information and output information of the initial neural network decoding unit, and the initial neural network decoding unit parameters include an initial weight matrix and an initial offset vector;
training the initial neural network decoding unit by using a preset first sample set, and updating the initial neural network decoding unit parameters to intermediate neural network decoding unit parameters to obtain an intermediate neural network decoding unit, where the intermediate neural network decoding unit includes the intermediate neural network decoding unit parameters, the intermediate neural network decoding unit parameters are used to indicate a mapping relationship between input information and output information of the intermediate neural network decoding unit, the intermediate neural network decoding unit parameters include an intermediate weight matrix and an intermediate offset vector, the first sample set includes at least one first sample, one first sample includes a first column vector of length 2 and a second column vector of length 2, and the second column vector is an expected vector for decoding the first column vector;
combining a plurality of the intermediate neural network decoding units together to obtain a first initial decoding model; and
training the first initial decoding model by using a preset second sample set, and updating the intermediate neural network decoding unit parameters in the intermediate neural network decoding units to neural network decoding unit parameters to obtain the decoding model, where the second sample set includes a third column vector of length N and a fourth column vector of length N, and the fourth column vector is an expected vector for decoding the third column vector.
With reference to the first aspect, in an implementation manner, the combining a plurality of the intermediate neural network decoding units together to obtain a first initial decoding model includes:
obtaining a decoding network graph, where the decoding network graph includes at least one decoding butterfly graph, and the decoding butterfly graph is used to indicate a check relationship between input information of the decoding butterfly graph and output information of the decoding butterfly graph; and
replacing the decoding butterfly graphs in the decoding network graph with the intermediate neural network decoding units to obtain the first initial decoding model.
With reference to the first aspect, in an implementation manner, the decoding model is obtained through a single training process.
With reference to the first aspect, in an implementation manner, the obtaining the decoding model includes:
constructing an initial neural network decoding unit and setting initial neural network decoding unit parameters, where the initial neural network decoding unit parameters are used to indicate a mapping relationship between input information and output information of the initial neural network decoding unit, and the initial neural network decoding unit parameters include an initial weight matrix and an initial offset vector;
combining a plurality of the initial neural network decoding units together to obtain a second initial decoding model; and
training the second initial decoding model by using a preset third sample set, and updating the initial neural network decoding unit parameters in the initial neural network decoding units to neural network decoding unit parameters to obtain the decoding model, where the third sample set includes a fifth column vector of length N and a sixth column vector of length N, and the sixth column vector is an expected vector for decoding the fifth column vector.
With reference to the first aspect, in an implementation manner, the combining a plurality of the initial neural network decoding units to obtain a second initial decoding model includes:
obtaining a decoding network graph, where the decoding network graph includes at least one decoding butterfly graph, and the decoding butterfly graph is used to indicate a check relationship between input information of the decoding butterfly graph and output information of the decoding butterfly graph; and
replacing the decoding butterfly graphs in the decoding network graph with the initial neural network decoding units to obtain the second initial decoding model.
According to the embodiments of the present application, a plurality of neural network decoding units form the decoding model. Because the decoding model is obtained by connecting small neural network decoding units, the learning process for decoding can generalize to the entire codeword space from a small set of learning samples, which weakens the impact of long-codeword information on the complexity and learning difficulty of the neural network. The decoding model of the embodiments of the present application can meet the requirements of high-rate transmission and low decoding delay, and has good decoding performance.
In a second aspect, a decoding apparatus is provided, which includes various modules or units for performing the method of the first aspect or any one of the possible implementations of the first aspect.
In a third aspect, a decoding apparatus is provided that includes a transceiver, a processor, and a memory. The processor is configured to control the transceiver to transceive signals, the memory is configured to store a computer program, and the processor is configured to retrieve and execute the computer program from the memory, so that the decoding apparatus executes the method of the first aspect and possible implementations thereof.
In a fourth aspect, a computer-readable medium is provided, on which a computer program is stored, which computer program, when being executed by a computer, carries out the method of the first aspect and its possible implementations.
In a fifth aspect, a computer program product is provided, which when executed by a computer implements the method of the first aspect and its possible implementations.
In a sixth aspect, a processing apparatus is provided that includes a processor and an interface.
In a seventh aspect, a processing apparatus is provided that includes a processor, an interface, and a memory.
In the sixth aspect or the seventh aspect, the processor is configured to perform the method in the first aspect or any possible implementation manner of the first aspect, where the related data interaction process (for example, receiving information sent by a transmitting end, such as bits to be decoded) is completed through the interface. In a specific implementation process, the interface may further complete the data interaction process through a transceiver.
It should be understood that the processing apparatus in the sixth aspect or the seventh aspect may be a chip, and the processor may be implemented by hardware or by software. When implemented by hardware, the processor may be a logic circuit, an integrated circuit, or the like; when implemented by software, the processor may be a general-purpose processor that reads software code stored in the memory, where the memory may be integrated in the processor or located outside the processor as a stand-alone component.
Drawings
Fig. 1 is a schematic diagram of a scenario to which an embodiment of the present application is applicable.
Fig. 2 is a schematic diagram of a wireless communication process according to an embodiment of the present application.
Fig. 3 is a flowchart of a decoding method according to an embodiment of the present application.
FIG. 4 is a schematic diagram of a neural network decoding unit according to an embodiment of the present application.
FIG. 5 is a diagram of a decoding model according to one embodiment of the present application.
FIG. 6 is a schematic diagram of a method for training a decoding model twice according to an embodiment of the present application.
FIG. 7 is a diagram of a decoding network according to one embodiment of the present application.
FIG. 8 is a schematic diagram of a butterfly operation according to an embodiment of the present application.
Fig. 9 is a schematic diagram of a method of generating a first initial decoding model according to an embodiment of the present application.
FIG. 10 is a diagram illustrating a method for one-time training of a decoding model according to an embodiment of the present application.
Fig. 11 is a diagram illustrating a method of generating a second initial decoding model according to an embodiment of the present application.
FIG. 12 is a graph illustrating simulation of decoding performance by a decoding model according to an embodiment of the present application.
FIG. 13 is a graph comparing the decoding performance of a decoding model according to the present application with that of a prior-art model.
FIG. 14 is a block diagram of a decoding apparatus according to an embodiment of the present application.
FIG. 15 is a block diagram of a decoding apparatus according to another embodiment of the present application.
Detailed Description
The technical solution in the present application will be described below with reference to the accompanying drawings.
The technical solutions of the embodiments of the present application can be applied to various communication systems, for example: a global system for mobile communications (GSM) system, a code division multiple access (CDMA) system, a wideband code division multiple access (WCDMA) system, a general packet radio service (GPRS) system, a long term evolution (LTE) system, an LTE frequency division duplex (FDD) system, an LTE time division duplex (TDD) system, a universal mobile telecommunications system (UMTS), a worldwide interoperability for microwave access (WiMAX) communication system, a future fifth generation (5G) system, a new radio (NR) system, and the like.
Fig. 1 shows a schematic diagram of a communication system 100 suitable for use in the method and apparatus for transmitting and receiving of embodiments of the present application. As shown, the communication system 100 may include at least one network device, such as the network device 110 shown in fig. 1; the communication system 100 may also include at least one terminal device, such as the terminal device 120 shown in fig. 1. Network device 110 and terminal device 120 may communicate via a wireless link.
Each communication device, such as network device 110 or terminal device 120 in fig. 1, may be configured with multiple antennas. The plurality of antennas may include at least one transmit antenna for transmitting signals and at least one receive antenna for receiving signals. Additionally, each communication device can additionally include a transmitter chain and a receiver chain, each of which can comprise a plurality of components associated with signal transmission and reception (e.g., processors, modulators, multiplexers, demodulators, demultiplexers, antennas, etc.), as will be appreciated by one skilled in the art. Therefore, the network equipment and the terminal equipment can communicate through the multi-antenna technology.
It should be understood that the network device in the wireless communication system may be any device having a wireless transceiving function, including but not limited to: a base transceiver station (BTS) in a global system for mobile communications (GSM) system or a code division multiple access (CDMA) system, a NodeB (NB) in a wideband code division multiple access (WCDMA) system, an evolved NodeB (eNB or eNodeB) in an LTE system, or a radio controller in a cloud radio access network (CRAN) scenario. Alternatively, the network device may be a relay station, an access point, a vehicle-mounted device, a wearable device, or a network device in a future 5G network or in a future evolved PLMN network, for example, a transmission and reception point (TRP) or transmission point (TP) in an NR system, a base station (gNB) in an NR system, or one antenna panel or a group of antenna panels (including multiple antenna panels) of a base station in a 5G system. This is not particularly limited in the embodiments of the present application.
In some deployments, the gNB may include a centralized unit (CU) and a distributed unit (DU). The gNB may also include a radio unit (RU). The CU implements part of the functions of the gNB and the DU implements another part: for example, the CU implements the functions of the radio resource control (RRC) layer and the packet data convergence protocol (PDCP) layer, while the DU implements the functions of the radio link control (RLC) layer, the medium access control (MAC) layer, and the physical (PHY) layer. Since RRC layer information eventually becomes, or is converted from, PHY layer information, under this architecture higher-layer signaling such as RRC layer signaling may also be considered to be sent by the DU, or by the DU and the CU (for example, the higher-layer information is determined by the CU, delivered to the DU, and then sent by the DU). It should be understood that the network device may be a CU node, a DU node, or a device including a CU node and a DU node. In addition, the CU may be classified as a network device in a radio access network (RAN) or as a network device in a core network (CN), which is not limited in this application.
It should also be understood that terminal equipment in the wireless communication system may also be referred to as User Equipment (UE), an access terminal, a subscriber unit, a subscriber station, a mobile station, a remote terminal, a mobile device, a user terminal, a wireless communication device, a user agent, or a user equipment. The terminal device in the embodiment of the present application may be a mobile phone (mobile phone), a tablet computer (pad), a computer with a wireless transceiving function, a Virtual Reality (VR) terminal device, an Augmented Reality (AR) terminal device, a wireless terminal in industrial control (industrial control), a wireless terminal in self driving (self driving), a wireless terminal in remote medical (remote medical), a wireless terminal in smart grid (smart grid), a wireless terminal in transportation safety (transportation safety), a wireless terminal in smart city (smart city), a wireless terminal in smart home (smart home), a terminal device in a future 5G network, or a terminal device in a future evolved Public Land Mobile Network (PLMN), and the like, and the present application is not limited thereto.
In the embodiments of the present application, the terminal device or the network device includes a hardware layer, an operating system layer running on the hardware layer, and an application layer running on the operating system layer. The hardware layer includes hardware such as a central processing unit (CPU), a memory management unit (MMU), and a memory (also referred to as main memory). The operating system may be any one or more computer operating systems that implement service processing through processes, such as a Linux, Unix, Android, iOS, or Windows operating system. The application layer includes applications such as a browser, an address book, word-processing software, and instant-messaging software. Furthermore, the embodiments of the present application do not particularly limit the specific structure of the execution body of the provided method, as long as it can communicate according to the method provided in the embodiments of the present application by running a program recording the code of that method; for example, the execution body may be a terminal device or a network device, or a functional module in a terminal device or network device that can call and execute the program.
In addition, various aspects or features of the present application may be implemented as a method, apparatus, or article of manufacture using standard programming and/or engineering techniques. The term "article of manufacture" as used herein is intended to encompass a computer program accessible from any computer-readable device, carrier, or media. For example, computer-readable media can include but are not limited to magnetic storage devices (e.g., hard disk, floppy disk, magnetic strips, etc.), optical disks (e.g., Compact Disk (CD), Digital Versatile Disk (DVD), etc.), smart cards, and flash memory devices (e.g., erasable programmable read-only memory (EPROM), card, stick, or key drive, etc.). In addition, various storage media described herein can represent one or more devices and/or other machine-readable media for storing information. The term "machine-readable medium" can include, without being limited to, wireless channels and various other media capable of storing, containing, and/or carrying instruction(s) and/or data.
Unless specifically stated otherwise, a reference to an element in the singular in the present application is intended to mean "one or more" rather than "one and only one". "Some" may mean one or more.
It should be understood that in the embodiments shown below, the first, second, third, fourth and various numerical numbers are only used for convenience of description and are not used to limit the scope of the embodiments of the present application.
The technical solution of the present application may be applied to a wireless communication system, for example, the communication system 100 shown in fig. 1. Two communication devices in a wireless communication system may have a wireless communication connection relationship between them. One of the communication devices may correspond to, for example, network device 110 shown in fig. 1, such as network device 110 or a chip configured in network device 110, and the other of the two communication devices may correspond to, for example, terminal device 120 in fig. 1, such as terminal device 120 or a chip configured in terminal device 120.
In the above communication system, when the terminal device communicates with the network device, each may act as the transmitting end or the receiving end: when the terminal device sends a signal to the network device, the terminal device is the transmitting end and the network device is the receiving end; conversely, when the network device sends a signal to the terminal device, the network device is the transmitting end and the terminal device is the receiving end. Specifically, the basic flow of wireless communication is shown in fig. 2, where in fig. 2:
At the transmitting end, the source is sequentially subjected to source coding, channel coding, and modulation mapping before transmission. At the receiving end, demapping and demodulation, channel decoding, and source decoding are performed in sequence to output the sink.
It should be noted that, when the terminal device acts as the transmitting end, the encoding process in fig. 2 (source coding, channel coding, modulation mapping, and the like) is performed by the terminal device; when the terminal device acts as the receiving end, the decoding process in fig. 2 (demapping and demodulation, channel decoding, source decoding, and the like) is performed by the terminal device. The same applies to network devices.
Current channel encoding/decoding methods include, but are not limited to, Hamming codes and Polar codes.
In the prior art, the learning process for encoding and decoding mainly performs learning on samples of the whole codeword space. However, for encoding/decoding schemes with longer code lengths, for example Polar codes, when the information bit length K is 32 there are already 2^32 codewords. Because of this increasing difficulty and complexity, the prior art cannot complete the learning of such encoding and decoding.
In view of this, the present application provides an encoding/decoding method that can generalize to the entire codeword space by sampling the codeword space in a small range. The method forms a neural network encoding/decoding model from neural network units generated based on encoding/decoding, and encodes and/or decodes the information to be encoded/decoded according to this neural network encoding/decoding model. The encoding/decoding model of the embodiments of the present application can meet the requirements of high-rate transmission and low decoding delay, and has good performance.
By way of example and not limitation, the decoding method of the embodiments of the present application is described in detail below with reference to the accompanying drawings. It should be understood that a similar method may be used for encoding: the encoding model used in the encoding process is similar to the decoding model used in the decoding process. To avoid repetition, only decoding is described below as an example, and the encoding process may correspond to the decoding process described hereinafter. Alternatively, an existing method may be used for encoding in the present application; the embodiments of the present application are not limited thereto.
As shown in fig. 3, which is a flowchart illustrating a decoding method in the embodiment of the present application, the method shown in fig. 3 may be applied to the system shown in fig. 1 and executed by a decoding apparatus (which may also be referred to as a receiving end). Specifically, the decoding apparatus may be a network device during uplink transmission, and the decoding apparatus may be a terminal device during downlink transmission, which is not limited in this embodiment of the present application.
Specifically, the method shown in fig. 3 includes:
310, obtaining soft information of N bits to be decoded, wherein N is an integer greater than or equal to 2;
it should be understood that, in the embodiment of the present application, the soft information of the bits to be decoded may also be Log Likelihood Ratios (LLRs) of the bits to be decoded, where each bit to be decoded of the N bits to be decoded has one LLR, and the N bits to be decoded correspond to the N LLRs.
It should be understood that in the embodiment of the present application, N may be regarded as a code length of Polar code, and the embodiment of the present application is not limited thereto.
It should be understood that, in the embodiments of the present application, the soft information of the bits to be decoded may also be referred to as information to be decoded. The information to be decoded may also be called a codeword to be decoded, a code block to be decoded, a codeword, or a code block. The decoding apparatus may decode the information to be decoded as a whole, or may divide it into a plurality of sub-code blocks and decode them in parallel; the embodiments of the present application are not limited thereto.
And 320, decoding the soft information through a decoding model to obtain a decoding result, wherein the decoding model is composed of a plurality of neural network decoding units, each neural network decoding unit supports the exclusive or operation of the soft information, and the decoding model is obtained through at least one training process.
According to the embodiments of the present application, a plurality of neural network decoding units form the decoding model. Because the decoding model is obtained by connecting small neural network decoding units, the learning process for decoding can generalize to the entire codeword space from a small set of learning samples, which weakens the impact of long-codeword information on the complexity and learning difficulty of the neural network. The decoding model of the embodiments of the present application can meet the requirements of high-rate transmission and low decoding delay, and has good decoding performance.
Optionally, as an embodiment, each neural network decoding unit has 2 inputs and 2 outputs and at least one hidden layer.
Optionally, each hidden layer may include Q nodes, and Q is an integer greater than or equal to 2.
It should be understood that the hidden layer in the embodiments of the present application may also be referred to by other equivalent names, and the embodiments of the present application are not limited thereto.
Optionally, as another embodiment, the neural network decoding unit includes neural network decoding unit parameters, the neural network decoding unit parameters are used to indicate a mapping relationship between input information and output information of the neural network decoding unit, and the neural network decoding unit parameters include a weight matrix and an offset vector.
For example, as shown in fig. 4, the neural network decoding unit has 2 inputs and 2 outputs and one hidden layer, where the hidden layer includes 3 nodes.
Specifically, as shown in fig. 4, the neural network decoding unit includes an input layer, an output layer and a hidden layer. The information input by the input layer is an input vector, and the information output by the output layer is an output vector.
Further, as another embodiment, as shown in fig. 4, an input vector input to one neural network decoding unit and the output vector output by that neural network decoding unit have the following mapping relationship:

h = g1(w1y + b1), x = g2(w2h + b2)

wherein y = (y1, y2)^T represents the input vector, x = (x1, x2)^T represents the output vector, w1 and w2 represent the weight matrices, b1 and b2 represent the offset vectors, h represents the hidden layer element vector, and g1 and g2 represent activation functions; w1 and w2 are real matrices, and b1, b2, h, y and x are real vectors.

Further, as another implementation, for any value of the output vector x among (0,0)^T, (0,1)^T, (1,0)^T and (1,1)^T, the output vector y has the following mapping relation with x:

x1 = y1 ⊕ y2, x2 = y2

wherein ⊕ represents an exclusive-OR operation.
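Purely as an illustration of this mapping relation, the following minimal sketch builds one 2-input 2-output unit with a single hidden layer of Q = 3 nodes. The randomly initialized parameters stand in for trained values, and tanh is an assumed choice for the activation functions g1 and g2; neither is prescribed by the patent text.

    import numpy as np

    rng = np.random.default_rng(1)

    # Shapes follow fig. 4: w1 is 3x2 (input -> hidden), w2 is 2x3 (hidden -> output).
    # Random values stand in for trained parameters in this sketch.
    w1, b1 = rng.standard_normal((3, 2)), np.zeros(3)
    w2, b2 = rng.standard_normal((2, 3)), np.zeros(2)
    g1 = g2 = np.tanh    # assumed activation functions

    def unit_forward(y):
        # h = g1(w1 y + b1), x = g2(w2 h + b2), as in the mapping relation above
        h = g1(w1 @ y + b1)
        return g2(w2 @ h + b2)

    x = unit_forward(np.array([0.7, -1.3]))    # y = (y1, y2)^T of soft information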
Optionally, as an embodiment, the plurality of neural network decoding units in the decoding model form a log2(N)-layer structure, where the output of the neural network decoding units in the previous layer serves as the input of the next layer.
For example, fig. 5 shows a decoding model for N = 16, which has log2(16) = 4 layers. The input information of each layer is y, the output information is x, and the output information x of the previous layer serves as the input information y of the current layer.
It should be understood that the example of fig. 5 is merely illustrative, and the connection relationship between the layers in fig. 5 may be changed or modified arbitrarily, and the embodiment of the present application is not limited thereto.
The input information of the decoding model shown in fig. 5 is soft information of 16 bits to be decoded, and the output information is 16 decoded bits.
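To make the layered structure concrete, the following sketch passes N = 16 soft values through log2(16) = 4 layers of such units. The pairing used here (in layer s, positions whose indices differ by 2^s feed one unit, as in a standard Polar butterfly network) is an assumption of this sketch and need not match the exact wiring of fig. 5.

    import numpy as np

    def model_forward(llr, unit_forward):
        # llr: length-N vector of soft information (N a power of two);
        # unit_forward: a 2-input/2-output unit such as the one sketched above.
        y = np.asarray(llr, dtype=float)
        n = y.size
        for s in range(n.bit_length() - 1):      # log2(N) layers
            x = np.empty_like(y)
            step = 1 << s
            for i in range(n):
                if i & step == 0:                # i is the upper input of its unit
                    x[i], x[i + step] = unit_forward(np.array([y[i], y[i + step]]))
            y = x                                # each layer's output feeds the next
        return y

    decoded = model_forward(llr, unit_forward)   # llr and unit_forward from the earlier sketches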
Optionally, as another embodiment, before step 320, the method may further include: obtaining, by the decoding apparatus, the decoding model.
It should be understood that the decoding model may be trained by the decoding apparatus that executes the method shown in fig. 3, or may be trained by another apparatus; the embodiments of the present application are not limited thereto.
In the case where the decoding model is trained by another apparatus, the decoding apparatus obtaining the decoding model includes the decoding apparatus obtaining the decoding model from that other apparatus.
In this case, the decoding model may have been trained by that other apparatus or by yet another apparatus, and the embodiments of the present application are not limited thereto.
In the embodiments of the present application, because the decoding model is trained by another apparatus, the decoding apparatus can obtain the decoding model from that apparatus and use it directly without training the model itself, avoiding the overhead of repeated training.
Optionally, in the case where the decoding apparatus trains the decoding model itself, the decoding apparatus obtaining the decoding model includes the decoding apparatus training to obtain the decoding model.
In this case, after the decoding apparatus obtains the decoding model through training, it can send the decoding model to another apparatus for use, so that the other apparatus can use the decoding model directly without training, avoiding the overhead of repeated training.
It should be understood that, in practical applications, once the decoding apparatus has trained the decoding model, it can use the model directly in later decoding without training again. That is, the decoding apparatus may train the decoding model in advance and, during decoding, use it directly rather than train it again. Optionally, the decoding apparatus may instead train the decoding model only when a decoding requirement arises and then perform decoding; the embodiments of the present application are not limited thereto.
The following describes a specific scheme of training a decoding model in the embodiment of the present application.
It should be understood that the following training scheme may refer to a scheme adopted for training a decoding model in advance, or may be a scheme adopted for training a decoding model when decoding is currently required.
It should be understood that the decoding model in the embodiment of the present application may be obtained through at least one training process.
For example, in one implementation, the decoding model is obtained through two training processes.
The following first describes in detail a specific scheme of obtaining a decoding model through the two training processes in the embodiment of the present application.
Specifically, as shown in fig. 6, the method for obtaining the decoding model through two training processes in the embodiment of the present application includes:
and 610, constructing an initial neural network decoding unit, and setting initial neural network decoding unit parameters, wherein the initial neural network decoding unit parameters are used for indicating a mapping relation between input information and output information which are input into the initial neural network decoding unit, and the initial neural network decoding unit parameters comprise an initial weight matrix and an initial offset vector.
Optionally, the initial neural network decoding unit includes at least one hidden layer, each hidden layer includes Q nodes, and Q is an integer greater than or equal to 2. For example, the initial neural network decoding unit includes one hidden layer having 3 nodes.
For example, an initial neural network decoding unit includes an input layer, an output layer, and at least one hidden layer. In the embodiments of the present application, the initial neural network decoding unit further includes initial neural network decoding unit parameters, which may include an initial weight matrix w and an initial offset vector b. It should be noted that the initial neural network decoding unit parameters are generally randomly generated; optionally, they may also be preset values, and the embodiments of the present application are not limited thereto.
It should be noted that, in the embodiments of the present application, there may be one or more hidden layers; the more hidden layers, the greater the complexity of the neural network, but also the greater its generalization capability. Therefore, when setting the number of hidden layers of the initial neural network decoding unit and of the other neural networks in the embodiments of the present application, the user may choose based on actual requirements, considering factors such as the processing and computing capabilities of the device; the present application is not limited thereto.
In this embodiment, taking Polar code as an example, the initial neural network decoding unit is constructed as shown in fig. 4.
In the embodiments of the present application, the number of nodes in the hidden layer of the initial neural network decoding unit is greater than the code length of the input information and the output information. That is, when the code length of the input information and the output information is 2, the number of nodes in the hidden layer is an integer greater than 2. Fig. 4 is described in detail only for the case where the initial neural network decoding unit has one hidden layer with 3 nodes, but the embodiments of the present invention are not limited thereto.
Then, the decoding apparatus trains the initial neural network decoding unit (that is, the first training process) to obtain the intermediate neural network decoding unit. For the specific training process, refer to step 620.
620, training the initial neural network decoding unit by using a preset first sample set, and updating the initial neural network decoding unit parameters to intermediate neural network decoding unit parameters to obtain an intermediate neural network decoding unit, where the intermediate neural network decoding unit includes the intermediate neural network decoding unit parameters, the intermediate neural network decoding unit parameters are used to indicate a mapping relationship between input information and output information of the intermediate neural network decoding unit, the intermediate neural network decoding unit parameters include an intermediate weight matrix and an intermediate offset vector, the first sample set includes at least one first sample, one first sample includes a first column vector of length 2 and a second column vector of length 2, and the second column vector is an expected vector for decoding the first column vector;
specifically, the initial neural network decoding unit is trained based on the initial neural network decoding unit parameters until an error between output information of the initial neural network decoding unit and an expected check result (i.e., a second column vector) of input information (i.e., a first column vector) is smaller than a first preset threshold. It should be understood that, when the initial neural network decoding unit is trained, the initial neural network decoding unit parameter is updated to obtain the intermediate neural network decoding unit parameter.
Alternatively, in one embodiment, the error between the output information and the expected verification result of the input information may be the difference between the output information and the expected verification result.
Alternatively, in another embodiment, the error between the output information and the expected verification result of the input information may be a mean square error between the output information and the expected verification result.
An operator can choose how to compute the error between the output information and the expected check result according to actual requirements; this is not limited in the present application.
It should be understood that the threshold corresponding to the error between the output information and the expected verification result may be set according to the error calculation mode, and the application is not limited thereto.
In an embodiment of the present application, the trained initial neural network decoding unit is an intermediate neural network decoding unit in the embodiment of the present application. After the initial neural network decoding unit is trained, the initial neural network decoding unit parameters contained in the initial neural network decoding unit are updated to be the intermediate neural network decoding unit parameters.
In the embodiments of the present application, the intermediate neural network decoding unit can achieve the following result: after input training information (for example, a first column vector) is decoded based on the intermediate neural network decoding unit parameters contained therein, the output information is equal or close to the expected check result of the first column vector (that is, the second column vector).
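One possible construction of the first sample set of step 620 is sketched below: the first column vector is a pair of noisy LLRs and the second column vector is the expected pair (b1 ⊕ b2, b2), following the exclusive-OR check relation described earlier. The BPSK/AWGN channel, the noise level, and the sample count are assumptions of this example.

    import numpy as np

    rng = np.random.default_rng(2)

    def make_first_sample(sigma=0.8):
        # First column vector: LLRs of two coded bits over an assumed BPSK/AWGN channel.
        # Second column vector: the expected decoding (b1 XOR b2, b2) of those bits.
        b = rng.integers(0, 2, size=2)
        received = (1.0 - 2.0 * b) + sigma * rng.standard_normal(2)
        first_col = 2.0 * received / sigma**2
        second_col = np.array([b[0] ^ b[1], b[1]], dtype=float)
        return first_col, second_col

    first_sample_set = [make_first_sample() for _ in range(10000)]  # assumed set size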
For example, the parameters of the trained intermediate neural network decoding unit are shown in Table 1 below.
Table 1 (parameter values of the trained intermediate neural network decoding unit; reproduced as an image in the original publication)
Specifically, the decoding apparatus may perform the following training process on the initial neural network decoding unit based on the input information, the expected check result of the input information, and the initial neural network decoding unit parameters:
1) a loss function is obtained.
Specifically, for neurons in two adjacent layers of the initial neural network decoding unit (that is, nodes in the input layer, the output layer, or the hidden layer), the input r of a neuron in the next layer is obtained from the outputs c of the connected neurons in the previous layer: a weighted sum is computed based on the initial neural network decoding unit parameters (that is, the initial weight w set on each connection between the two layers and the initial offset vector b set at each node) and then passed through an activation function, so that the input r of each neuron is given by the following formula:
r = f(wc + b)
Then, the output x of the initial neural network decoding unit may be expressed recursively as:
x = fn(wn·fn-1 + bn)
Referring to fig. 4, based on the formula r = f(wc + b) and the formula x = fn(wn·fn-1 + bn), the input information of the initial neural network decoding unit is processed to obtain output information (to distinguish it from other training results, hereinafter referred to as training result 1).
Subsequently, the decoding apparatus obtains an error value between training result 1 and the expected check result. The error value may be calculated as described above, that is, it may be the difference or the mean square error between training result 1 and the expected check result. For details of computing the loss function, reference may be made to the prior art; the details are not repeated herein.
2) The error is propagated backwards.
Specifically, the decoding device may calculate the residual of the output layer by propagating the error backwards, perform weighted summation to obtain the residuals of the nodes in each layer, layer by layer, and then update the first-layer weights (that is, the weights between the input layer and the hidden layer) based on the learning rate and the residual value of each node of the input layer, repeating this procedure to update the corresponding weights layer by layer. The input information is then trained again using the updated weights to obtain a new training result, and the above steps are repeated; that is, the initial neural network decoding unit parameters are updated repeatedly until the error between the training result n output by the initial neural network decoding unit and the expected check result is smaller than a target value (for example, 0.0001), at which point the training is confirmed to have converged.
The above training method is gradient descent: the decoding device may iteratively optimize the initial weight w and the initial offset vector b by gradient descent so as to minimize the loss function. For details of the gradient descent method, reference may be made to the prior art; the details are not repeated in this application.
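A bare-bones rendering of steps 1) and 2) for the 2-3-2 tanh unit of the earlier sketches is given below, with mean-square-error loss, hand-written backpropagation, and plain gradient descent; the learning rate, iteration budget, and target value are illustrative assumptions, and samples may be, for example, the first_sample_set sketched above.

    import numpy as np

    def train_unit(w1, b1, w2, b2, samples, lr=0.1, target=1e-4, max_iters=200000):
        # Plain gradient descent on one 2-3-2 tanh unit: forward pass, mean-square
        # error against the expected vector, backpropagated residuals, then updates.
        # Constant factors of the MSE gradient are folded into the learning rate.
        for it in range(max_iters):
            y, expected = samples[it % len(samples)]
            h = np.tanh(w1 @ y + b1)                       # hidden layer
            x = np.tanh(w2 @ h + b2)                       # output layer
            err = x - expected
            if np.mean(err ** 2) < target:                 # error below target value
                break
            d_out = err * (1.0 - x ** 2)                   # output-layer residual
            d_hid = (w2.T @ d_out) * (1.0 - h ** 2)        # hidden-layer residual
            w2 -= lr * np.outer(d_out, h)
            b2 -= lr * d_out
            w1 -= lr * np.outer(d_hid, y)
            b1 -= lr * d_hid
        return w1, b1, w2, b2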
It should be noted that the decoding apparatus may also train the initial neural network decoding unit in the embodiments of the present application by other training methods, all of which aim to make the output value of the initial neural network decoding unit approach the optimization target while updating the initial neural network decoding unit parameters.
And 630, combining a plurality of the intermediate neural network decoding units together to obtain a first initial decoding model.
Specifically, in the embodiments of the present application, all butterfly operations (as shown in fig. 8) in a decoding network graph (a Polar code decoding structure, for example the one shown in fig. 7) may be replaced by intermediate neural network decoding units to obtain the first initial decoding model.
Specifically, as shown in fig. 9, a schematic flow chart of the step of generating the first initial decoding model is shown, and the step shown in fig. 9 includes:
and 910, acquiring a decoding network graph.
Acquiring a decoding network graph, wherein the decoding network graph comprises at least one decoding butterfly graph, and the decoding butterfly graph is used for indicating a check relation between input information of the decoding butterfly graph and output information of the decoding butterfly graph;
And 920, replacing the decoding butterfly graphs in the decoding network graph with the intermediate neural network decoding units to obtain the first initial decoding model.
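Using the same assumed pairing as in the earlier layer sketch, steps 910 and 920 can be pictured as enumerating the butterfly positions of the decoding network graph and assigning each position its own copy of the trained intermediate neural network decoding unit parameters, so that the second training stage can update them independently; this graph representation is an assumption of the sketch, not the patent's data structure.

    import copy

    def build_first_initial_model(N, intermediate_params):
        # Enumerate every butterfly position (layer s, upper input i, lower input
        # i + 2**s) of the decoding network graph, then give each position its own
        # trainable copy of the intermediate neural network decoding unit parameters.
        n_layers = N.bit_length() - 1                 # log2(N) for power-of-two N
        butterflies = [(s, i, i + (1 << s))
                       for s in range(n_layers)
                       for i in range(N) if i & (1 << s) == 0]
        return {pos: copy.deepcopy(intermediate_params) for pos in butterflies}

    # For N = 16 this yields 4 layers x 8 butterflies = 32 independently trainable units.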
And 640, training the first initial decoding model by using a preset second sample set, and updating the intermediate neural network decoding unit parameters in the intermediate neural network decoding units to neural network decoding unit parameters to obtain the decoding model, where the second sample set includes a third column vector of length N and a fourth column vector of length N, and the fourth column vector is an expected vector for decoding the third column vector.
Specifically, in the embodiments of the present application, the decoding apparatus may train the first initial decoding model until the error between the output information of the first initial decoding model and the expected check result (the fourth column vector) of the input information (the third column vector) is smaller than a second preset threshold. After the first initial decoding model is trained, the intermediate neural network decoding unit parameters in the intermediate neural network decoding units are updated to the neural network decoding unit parameters to obtain the decoding model.
In an embodiment of the present application, the trained first initial decoding model is the decoding model.
The specific steps of training the first initial decoding model may refer to the training steps of the initial neural network decoding unit, which are not described herein again.
For another example, in another implementation, the decoding model is obtained through a single training process.
The following describes in detail a specific scheme of obtaining a decoding model in a training process according to an embodiment of the present application.
Specifically, as shown in fig. 10, a method 1000 for obtaining a decoding model in a training process in the embodiment of the present application includes:
1010, constructing an initial neural network decoding unit, and setting initial neural network decoding unit parameters, wherein the initial neural network decoding unit parameters are used for indicating a mapping relation between input information and output information input into the initial neural network decoding unit, and the initial neural network decoding unit parameters include an initial weight matrix and an initial offset vector;
1010 corresponds to step 610, and is not described herein again to avoid repetition.
And 1020, combining a plurality of initial neural network decoding units together to obtain a second initial decoding model.
Specifically, in the embodiments of the present application, all butterfly operations (as shown in fig. 8) in the decoding network graph (a Polar code decoding structure, for example the one shown in fig. 7) may be replaced by initial neural network decoding units to obtain the second initial decoding model.
Specifically, as shown in fig. 11, a schematic flow chart of the step of generating the second initial decoding model is shown, and the step shown in fig. 11 includes:
1110, a decoded network map is obtained.
Acquiring a decoding network graph, wherein the decoding network graph comprises at least one decoding butterfly graph, and the decoding butterfly graph is used for indicating a check relation between input information of the decoding butterfly graph and output information of the decoding butterfly graph;
1120, replacing the decoding butterfly graph in the decoding network graph by using the initial neural network decoding unit to obtain the second initial decoding model.
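By way of example and not limitation, assembling the second initial decoding model could look like the following sketch, which reuses the hypothetical NeuralDecodingUnit class from the earlier sketch and assumes an FFT-style butterfly wiring; the exact wiring of the decoding network graph in fig. 7 may differ:

```python
import torch
import torch.nn as nn

class InitialDecodingModel(nn.Module):
    """Hypothetical second initial decoding model: log2(N) layers, each
    holding N/2 NeuralDecodingUnit instances, one per butterfly."""
    def __init__(self, N: int):
        super().__init__()
        assert N >= 2 and N & (N - 1) == 0, "N must be a power of two"
        self.stages = nn.ModuleList(
            nn.ModuleList(NeuralDecodingUnit() for _ in range(N // 2))
            for _ in range(N.bit_length() - 1)   # log2(N) layers
        )

    def forward(self, soft):                      # soft: (batch, N)
        x = soft
        for s, stage in enumerate(self.stages):   # previous layer's output
            span = 1 << s                         # feeds the next layer
            out = torch.empty_like(x)
            for u, unit in enumerate(stage):
                block, offset = divmod(u, span)
                i = block * 2 * span + offset     # first butterfly leg
                j = i + span                      # second butterfly leg
                res = unit(torch.stack([x[:, i], x[:, j]], dim=1))
                out[:, i], out[:, j] = res[:, 0], res[:, 1]
            x = out
        return x

model = InitialDecodingModel(N=16)  # 4 layers of 8 units for N = 16
```

Training this assembled model end-to-end on length-N samples (step 1030 below) can reuse the train_until loop from the earlier sketch.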
And 1030, training the second initial decoding model by using a preset third sample set, and updating the initial neural network decoding unit parameters in the initial neural network decoding unit to neural network decoding unit parameters to obtain the decoding model, wherein the third sample set comprises a fifth column vector with the length of N and a sixth column vector with the length of N, and the sixth column vector is an expected vector for decoding the fifth column vector.
Specifically, in this embodiment of the present application, the decoding apparatus may train the second initial decoding model until the error between the output information of the second initial decoding model and the expected decoding result (the sixth column vector) of the input information (the fifth column vector) is smaller than a third preset threshold. After the second initial decoding model is trained, the initial neural network decoding unit parameters in each initial neural network decoding unit are updated to the neural network decoding unit parameters, yielding the decoding model.
In an embodiment of the present application, the trained second initial decoding model is the above decoding model.
The specific steps of training the second initial decoding model may refer to the above training steps of the initial neural network decoding unit, which is not described herein again.
In the embodiments of the present application, the decoding model is built by connecting small neural network decoding units. In the learning process of decoding, the whole codeword space can therefore be generalized from a small set of learning samples, and the impact of longer codewords on the complexity and learning difficulty of the neural network is weakened. The decoding model of the embodiments of the present application can meet the requirements of high-speed transmission and low decoding delay, and has good decoding performance.
Fig. 12 is a graph comparing simulation performance when the code length N is 16, the number of information bits K is 8, and the initialization weights are set manually; specifically, fig. 12 compares the performance of a trained decoding model with that of an untrained decoding model. The abscissa is the signal-to-noise ratio Eb/No, which may represent the receiver demodulation threshold and is defined as the energy per bit divided by the noise power spectral density: Eb = S/R, where S denotes the signal power and R denotes the traffic bit rate, and No = N/W, where N denotes the noise power and W denotes the bandwidth. The ordinate is the bit error rate (BER). As fig. 12 shows, the trained decoding model retains the XOR function of each processing unit and has a certain learning ability. Specifically, the trained decoding model has better decoding performance than the untrained decoding model, and at high signal-to-noise ratio a model trained with a higher training-sample ratio p decodes better than one trained with a lower ratio.
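By way of a worked example with purely illustrative numbers (none of them taken from the patent), the definition Eb/No = (S/R)/(N/W) can be evaluated as follows:

```python
import math

# Illustrative values only: signal power S (W), traffic bit rate R (bit/s),
# noise power N (W), bandwidth W (Hz).
S, R = 1e-3, 1e6      # Eb = S / R = 1e-9 J per bit
N, W = 2e-3, 5e6      # No = N / W = 4e-10 W/Hz
eb_no_db = 10 * math.log10((S / R) / (N / W))
print(f"Eb/No = {eb_no_db:.1f} dB")  # 4.0 dB, inside the 0-14 dB test range
```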
It should be understood that, in the embodiments of the present application, p may represent the ratio of training samples to the full codeword space, and p may be, for example, 10%, 20%, 40%, 60%, 80%, or 100%; the embodiments of the present application are not limited thereto.
By way of example and not limitation, in practical applications, the training signal-to-noise ratio Eb/N0 (dB) of the decoding model in the embodiments of the present application may be 0, 1, 2, 3, 4, 5, or 6, and the test signal-to-noise ratio Eb/N0 (dB) may be in the range of 0 to 14. The embodiments of the present application are not limited thereto.
Fig. 13 shows a comparison between the decoding model of the embodiment of the present application and other neural network decoding models when the code length N is 16, the number of information bits K is 8, the initialization parameters are set manually, and p is set to 0.1. Fig. 13 shows that, with a very small training set, the neural network decoding model based on the neural network decoding unit (which may also be referred to as a polarization processing unit) proposed in the present application outperforms other existing neural network decoding models.
It should be understood that the above examples of fig. 1 to 13 are only for assisting the skilled person in understanding the embodiments of the present application, and are not intended to limit the embodiments of the present application to the specific values or specific scenarios illustrated. It will be apparent to those skilled in the art that various equivalent modifications or variations are possible in light of the examples given in fig. 1-13, and such modifications or variations are intended to be included within the scope of the embodiments of the present application.
It should be understood that, in the various embodiments of the present application, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
The method of the embodiment of the present application is described in detail above with reference to fig. 1 to 13, and the apparatus for decoding of the embodiment of the present application is described below with reference to fig. 14 to 15.
Fig. 14 is a schematic structural diagram of a decoding apparatus according to an embodiment of the present application, where the apparatus 1400 may include: a decoding module 1410 and an obtaining module 1420.
The apparatus comprises an obtaining module and a decoding module, wherein the obtaining module is used for acquiring soft information of N bits to be decoded, and N is an integer greater than or equal to 2;
and the decoding module is used for decoding the soft information through a decoding model to obtain a decoding result, wherein the decoding model is composed of a plurality of neural network decoding units, each neural network decoding unit supports the XOR operation of the soft information, and the decoding model is obtained through at least one training process.
According to the embodiment of the application, the neural network decoding units form the decoding model, and the decoding model is obtained after the small neural network decoding units are connected, so that in the learning process of decoding, the whole code word space can be generalized through small learning samples, and the influence of information with longer code words on the complexity and the learning difficulty of the neural network is weakened. The decoding model of the embodiment of the application can meet the requirements of high-speed transmission and low decoding delay and has good decoding performance.
It should be understood that the decoding apparatus 1400 has any functions performed by the decoding apparatus in the above method embodiments, and the detailed description is omitted here where appropriate.
Optionally, the plurality of neural network decoding units form a log2(N)-layer structure in the decoding model, wherein the output of the neural network decoding units in the previous layer is used as the input of the next layer.
Optionally, each of the neural network decoding units has 2 inputs and 2 outputs and at least one hidden layer.
Optionally, the neural network decoding unit includes neural network decoding unit parameters, the neural network decoding unit parameters are used to indicate a mapping relationship between input information and output information of the neural network decoding unit, and the neural network decoding unit parameters include a weight matrix and an offset vector.
Optionally, an input vector input to one neural network decoding unit and an output vector output to the one neural network decoding unit have the following mapping relationship:
Figure BDA0001962302760000141
wherein y ═ y1,y2)TRepresenting said input vector, x ═ x1,x2)TRepresents said output vector, w1And w2Representing said weight matrix, b1And b2Presentation instrumentThe offset vector, h represents a hidden layer unit vector, g1And g2Represents an activation function, said w1、w2Are all real number matrices, b1、b2H, y, x are real vectors.
Optionally, in any of the cases of the values taken by the output vector x, the output vector y has the following mapping relation with x:
x1 = y1 ⊕ y2,
x2 = y2.
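By way of example and not limitation, one common realisation of an exclusive-or over soft information is the probability-domain identity sketched below; the patent does not fix a particular soft-information representation, so this choice is an assumption made for illustration:

```python
def soft_xor(p1: float, p2: float) -> float:
    """Probability that b1 XOR b2 = 1 given P(b1 = 1) = p1 and
    P(b2 = 1) = p2 (standard identity for independent bits)."""
    return p1 * (1.0 - p2) + p2 * (1.0 - p1)

# Hard limits recover the butterfly relation x1 = y1 XOR y2, x2 = y2:
assert soft_xor(0.0, 1.0) == 1.0 and soft_xor(1.0, 1.0) == 0.0
print(soft_xor(0.9, 0.8))  # 0.26: confident inputs yield a softer output
```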
Optionally, before decoding the soft information through the decoding model, the decoding module is further configured to:
acquire the decoding model.
Optionally, the decoding model is obtained by two training processes.
Optionally, the decoding module is specifically configured to:
constructing an initial neural network decoding unit and setting initial neural network decoding unit parameters, wherein the initial neural network decoding unit parameters are used for indicating the mapping relation between input information and output information which are input into the initial neural network decoding unit, and the initial neural network decoding unit parameters comprise an initial weight matrix and an initial offset vector;
training the initial neural network decoding unit by using a preset first sample set, updating parameters of the initial neural network decoding unit to parameters of an intermediate neural network decoding unit, and obtaining the intermediate neural network decoding unit, wherein the intermediate neural network decoding unit comprises the parameters of the intermediate neural network decoding unit, the parameters of the intermediate neural network decoding unit are used for indicating a mapping relation between input information and output information which are input into the intermediate neural network decoding unit, the parameters of the intermediate neural network decoding unit comprise an intermediate weight matrix and an intermediate offset vector, the first sample set comprises at least one first sample, one first sample comprises a first column vector with the length of 2 and a second column vector with the length of 2, and the second column vector is an expected vector for decoding the first column vector;
combining a plurality of the intermediate neural network decoding units together to obtain a first initial decoding model;
training the first initial decoding model by using a preset second sample set, and updating the intermediate neural network decoding unit parameters in the intermediate neural network decoding units to the neural network decoding unit parameters to obtain the decoding model, wherein the second sample set comprises a third column vector with the length of N and a fourth column vector with the length of N, and the fourth column vector is an expected vector for decoding the third column vector.
Optionally, the decoding module is specifically configured to:
acquiring a decoding network graph, wherein the decoding network graph comprises at least one decoding butterfly graph, and the decoding butterfly graph is used for indicating a check relation between input information of the decoding butterfly graph and output information of the decoding butterfly graph;
and replacing the decoding butterfly graph in the decoding network graph by using the intermediate neural network decoding unit to obtain the first initial decoding model.
Optionally, the decoding model is obtained through a training process.
Optionally, the decoding module is specifically configured to:
constructing an initial neural network decoding unit and setting initial neural network decoding unit parameters, wherein the initial neural network decoding unit parameters are used for indicating the mapping relation between input information and output information which are input into the initial neural network decoding unit, and the initial neural network decoding unit parameters comprise an initial weight matrix and an initial offset vector;
combining a plurality of initial neural network decoding units together to obtain a second initial decoding model;
training the second initial decoding model by using a preset third sample set, and updating the initial neural network decoding unit parameters in the initial neural network decoding units to the neural network decoding unit parameters to obtain the decoding model, wherein the third sample set comprises a fifth column vector with the length of N and a sixth column vector with the length of N, and the sixth column vector is an expected vector for decoding the fifth column vector.
Optionally, the decoding module is specifically configured to:
acquiring a decoding network graph, wherein the decoding network graph comprises at least one decoding butterfly graph, and the decoding butterfly graph is used for indicating a check relation between input information of the decoding butterfly graph and output information of the decoding butterfly graph;
and replacing the decoding butterfly graph in the decoding network graph by using an initial neural network decoding unit to obtain the second initial decoding model.
It should be appreciated that the term module in the embodiments of the present application may refer to an Application Specific Integrated Circuit (ASIC), an electronic circuit, a processor (e.g., a shared, dedicated, or group processor) and memory that execute one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that support the described functionality. The "module" in the embodiment of the present application may also be referred to as a "unit" and may be implemented by hardware or software, and the embodiment of the present application is not limited thereto.
In an optional example, those skilled in the art may understand that the decoding apparatus 1400 provided in the present application corresponds to a process executed by the decoding apparatus in the foregoing method embodiments, and the functions of each unit/module in the apparatus may refer to the description above, which is not described herein again.
It should be understood that the decoding apparatus shown in fig. 14 may be a network device or a terminal device, or may be a chip or an integrated circuit installed in the network device or the terminal device.
Taking the decoding apparatus as a network device or a terminal device as an example, fig. 15 is a schematic structural diagram of a decoding apparatus provided in an embodiment of the present application. As shown in fig. 15, the decoding apparatus 1500 can be applied to the system shown in fig. 1, and performs any functions of the decoding apparatus in the above method embodiments.
As shown in fig. 15, the decoding apparatus 1500 may include at least one processor 1510 and a transceiver 1520, with the processor 1510 coupled to the transceiver 1520. Optionally, the decoding apparatus 1500 further includes at least one memory 1530 coupled to the processor 1510, and further optionally, the decoding apparatus 1500 may include a bus system 1540. The processor 1510, the memory 1530 and the transceiver 1520 may be connected by the bus system 1540; the memory 1530 may be used to store instructions. The processor 1510 may correspond to the decoding module 1410 in fig. 14, and the transceiver 1520 may correspond to the obtaining module 1420 in fig. 14. Alternatively, the decoding module and the obtaining module in fig. 14 may both be implemented by the processor 1510, and the embodiments of the present application are not limited thereto. Specifically, the processor 1510 is configured to execute the instructions stored in the memory 1530 to control the transceiver 1520 to transmit and receive information or signals.
It is to be understood that the memory 1530 may be integrated with the processor 1510, for example, the memory 1530 may be integrated in the processor 1510, and the memory 1530 may also be located outside the processor 1510 and stand alone, which is not limited to the embodiment of the present application.
It should be understood that in the embodiments of the present invention, the processor may be a Central Processing Unit (CPU), and the processor may also be other general-purpose processors, Digital Signal Processors (DSP), Application Specific Integrated Circuits (ASIC), Field Programmable Gate Arrays (FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, and so on. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory may include both read-only memory and random access memory, and provides instructions and data to the processor. A portion of the memory may also include non-volatile random access memory; for example, the memory may also store device type information.
The bus system may include a power bus, a control bus, a status signal bus, and the like, in addition to the data bus. For clarity of illustration, however, the various buses are labeled as a bus system in the figures.
In implementation, the steps of the above method may be completed by an integrated logic circuit of hardware in the processor or by instructions in the form of software. The steps of the method disclosed in the embodiments of the present invention may be directly performed by a hardware processor, or performed by a combination of hardware and software modules in the processor. The software module may be located in a storage medium well known in the art, such as random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, or a register. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware. To avoid repetition, details are not described here again.
It should be understood that the decoding apparatus 1500 shown in fig. 15 is capable of implementing the various processes related to the decoding apparatus in the above-described method embodiments. The operations and/or functions of the modules in the decoding apparatus 1500 are respectively for implementing the corresponding flows in the above method embodiments. Specifically, reference may be made to the description of the above method embodiments, and the detailed description is appropriately omitted herein to avoid redundancy.
The embodiment of the application also provides a processing device, which comprises a processor and an interface; the processor is used for executing the decoding method in any method embodiment.
It should be understood that the processing means may be a chip. For example, the processing Device may be a Field-Programmable Gate Array (FPGA), an Application-Specific Integrated Circuit (ASIC), a System on Chip (SoC), a Central Processing Unit (CPU), a Network Processor (NP), a Digital Signal processing Circuit (DSP), a Microcontroller (MCU), a Programmable Logic Device (PLD), or other Integrated chips.
It should be noted that the processor in the embodiments of the present invention may be an integrated circuit chip having signal processing capability. In implementation, the steps of the above method embodiments may be completed by an integrated logic circuit of hardware in the processor or by instructions in the form of software. The processor may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or perform the methods, steps and logical blocks disclosed in the embodiments of the present invention. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the method disclosed in the embodiments of the present invention may be directly performed by a hardware decoding processor, or performed by a combination of hardware and software modules in the decoding processor. The software module may be located in a storage medium well known in the art, such as random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, or a register. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware.
It will be appreciated that the memory in embodiments of the invention may be either volatile memory or nonvolatile memory, or may include both volatile and nonvolatile memory. The non-volatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), which is used as an external cache. By way of example, but not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct rambus RAM (DR RAM). It should be noted that the memory of the systems and methods described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
The embodiment of the present application further provides a communication system, which includes the encoding end and the decoding end.
The present application further provides a computer-readable medium storing a computer program which, when executed by a computer, implements the method in any one of the above method embodiments.
The embodiment of the present application further provides a computer program product which, when executed by a computer, implements the method in any one of the above method embodiments.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When loaded and executed on a computer, cause the processes or functions described in accordance with the embodiments of the application to occur, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored on a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, from one website, computer, server, or data center to another website, computer, server, or data center via wire (e.g., coaxial cable, fiber optic, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, wireless, microwave, etc.). The computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device, such as a server, a data center, etc., that incorporates one or more of the available media. The usable medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a Digital Video Disk (DVD)), or a semiconductor medium (e.g., a Solid State Disk (SSD)), among others.
It should be understood that the above describes a decoding method in a communication system, but the present application is not limited thereto, and optionally, the above similar scheme may also be adopted during encoding, and in order to avoid repetition, the description is omitted here.
The network device or terminal device in the foregoing apparatus embodiments corresponds to the network device or terminal device in the method embodiments, and the corresponding modules or units execute the corresponding steps; for example, the sending module (transmitter) executes the sending steps in the method embodiments, the receiving module (receiver) executes the receiving steps in the method embodiments, and steps other than sending and receiving may be executed by the processing module (processor). For the functions of specific modules, reference may be made to the corresponding method embodiments. The sending module and the receiving module may form a transceiving module, and the transmitter and the receiver may form a transceiver, to jointly implement the transceiving function; there may be one or more processors.
In the present application, "at least one" means one or more, "a plurality" means two or more. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone, wherein A and B can be singular or plural. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. "at least one of the following" or similar expressions refer to any combination of these items, including any combination of the singular or plural items. For example, at least one (one) of a, b, or c, may represent: a, b, c, a-b, a-c, b-c, or a-b-c, wherein a, b, c may be single or multiple.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. It should be understood that, in the various embodiments of the present application, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
As used in this specification, the terms "component," "module," "system," and the like are intended to refer to a computer-related entity, either hardware, firmware, a combination of hardware and software, or software in execution. For example, a component may be, but is not limited to being, a process running on a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a computing device and the computing device can be a component. One or more components can reside within a process and/or thread of execution and a component can be localized on one computer and/or distributed between 2 or more computers. In addition, these components can execute from various computer readable media having various data structures stored thereon. The components may communicate by way of local and/or remote processes such as in accordance with a signal having one or more data packets (e.g., data from two components interacting with another component in a local system, distributed system, and/or across a network such as the internet with other systems by way of the signal).
It should also be understood that reference herein to first, second, third, fourth, and various numerical designations is made only for ease of description and is not intended to limit the scope of the embodiments of the present application.
Those of ordinary skill in the art will appreciate that the various illustrative logical blocks and steps (step) described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (21)

1. A method of decoding, comprising:
acquiring soft information of N bits to be decoded, wherein N is an integer greater than or equal to 2;
decoding the soft information through a decoding model to obtain a decoding result, wherein the decoding model is composed of a plurality of neural network decoding units, each neural network decoding unit supports the exclusive-or operation of the soft information, and the decoding model is obtained through at least one training process;
the plurality of neural network decoding units form a log2(N)-layer structure in the decoding model, wherein the output of the neural network decoding units in the previous layer is used as the input of the next layer;
each neural network decoding unit has 2 inputs and 2 outputs and at least one hidden layer;
the neural network decoding unit comprises neural network decoding unit parameters, the neural network decoding unit parameters are used for indicating the mapping relation between input information and output information which are input into the neural network decoding unit, and the neural network decoding unit parameters comprise a weight matrix and an offset vector;
the input vector input into one neural network decoding unit and the output vector output from the neural network decoding unit have the following mapping relation:
h = g1(w1 · y + b1), x = g2(w2 · h + b2),
wherein y = (y1, y2)^T represents the input vector, x = (x1, x2)^T represents the output vector, w1 and w2 represent the weight matrices, b1 and b2 represent the offset vectors, h represents the hidden layer unit vector, g1 and g2 represent activation functions, w1 and w2 are both real matrices, and b1, b2, h, y and x are all real vectors.
2. The method of claim 1,
wherein, in any of the cases of the values taken by the output vector x, the output vector y has the following mapping relation with x:
x1 = y1 ⊕ y2,
x2 = y2.
3. the method of claim 2, wherein prior to coding the soft information by a coding model, the method further comprises:
and acquiring the coding model.
4. The method of claim 3,
the decoding model is obtained through two training processes.
5. The method of claim 4, wherein the acquiring the decoding model comprises:
constructing an initial neural network decoding unit and setting initial neural network decoding unit parameters, wherein the initial neural network decoding unit parameters are used for indicating the mapping relation between input information and output information which are input into the initial neural network decoding unit, and the initial neural network decoding unit parameters comprise an initial weight matrix and an initial offset vector;
training the initial neural network decoding unit by using a preset first sample set, updating parameters of the initial neural network decoding unit to parameters of an intermediate neural network decoding unit, and obtaining the intermediate neural network decoding unit, wherein the intermediate neural network decoding unit comprises the parameters of the intermediate neural network decoding unit, the parameters of the intermediate neural network decoding unit are used for indicating a mapping relation between input information and output information which are input into the intermediate neural network decoding unit, the parameters of the intermediate neural network decoding unit comprise an intermediate weight matrix and an intermediate offset vector, the first sample set comprises at least one first sample, one first sample comprises a first column vector with the length of 2 and a second column vector with the length of 2, and the second column vector is an expected vector for decoding the first column vector;
combining a plurality of the intermediate neural network decoding units together to obtain a first initial decoding model;
training the first initial decoding model by using a preset second sample set, and updating the intermediate neural network decoding unit parameters in the intermediate neural network decoding units to the neural network decoding unit parameters to obtain the decoding model, wherein the second sample set comprises a third column vector with the length of N and a fourth column vector with the length of N, and the fourth column vector is an expected vector for decoding the third column vector.
6. The method according to claim 5, wherein said combining a plurality of said intermediate neural network decoding units together to obtain a first initial decoding model comprises:
acquiring a decoding network graph, wherein the decoding network graph comprises at least one decoding butterfly graph, and the decoding butterfly graph is used for indicating a check relation between input information of the decoding butterfly graph and output information of the decoding butterfly graph;
and replacing the decoding butterfly graph in the decoding network graph by using the intermediate neural network decoding unit to obtain the first initial decoding model.
7. The method of claim 3,
the decoding model is obtained through a training process.
8. The method of claim 7, wherein the acquiring the decoding model comprises:
constructing an initial neural network decoding unit and setting initial neural network decoding unit parameters, wherein the initial neural network decoding unit parameters are used for indicating the mapping relation between input information and output information which are input into the initial neural network decoding unit, and the initial neural network decoding unit parameters comprise an initial weight matrix and an initial offset vector;
combining a plurality of initial neural network decoding units together to obtain a second initial decoding model;
training the second initial decoding model by using a preset third sample set, and updating the initial neural network decoding unit parameters in the initial neural network decoding units to the neural network decoding unit parameters to obtain the decoding model, wherein the third sample set comprises a fifth column vector with the length of N and a sixth column vector with the length of N, and the sixth column vector is an expected vector for decoding the fifth column vector.
9. The method according to claim 8, wherein said combining a plurality of said initial neural network decoding units together to obtain a second initial decoding model comprises:
acquiring a decoding network graph, wherein the decoding network graph comprises at least one decoding butterfly graph, and the decoding butterfly graph is used for indicating a check relation between input information of the decoding butterfly graph and output information of the decoding butterfly graph;
and replacing the decoding butterfly graph in the decoding network graph by using an initial neural network decoding unit to obtain the second initial decoding model.
10. A decoding apparatus, comprising:
an obtaining module and a decoding module, wherein the obtaining module is used for acquiring soft information of N bits to be decoded, and N is an integer greater than or equal to 2;
the decoding module is used for decoding the soft information through a decoding model to obtain a decoding result, wherein the decoding model is composed of a plurality of neural network decoding units, each neural network decoding unit supports the XOR operation of the soft information, and the decoding model is obtained through at least one training process;
the plurality of neural network decoding units form a log2(N)-layer structure in the decoding model, wherein the output of the neural network decoding units in the previous layer is used as the input of the next layer;
each neural network decoding unit has 2 inputs and 2 outputs and at least one hidden layer;
the neural network decoding unit comprises neural network decoding unit parameters, the neural network decoding unit parameters are used for indicating the mapping relation between input information and output information which are input into the neural network decoding unit, and the neural network decoding unit parameters comprise a weight matrix and an offset vector;
the input vector input into one neural network decoding unit and the output vector output from the neural network decoding unit have the following mapping relation:
h = g1(w1 · y + b1), x = g2(w2 · h + b2),
wherein y = (y1, y2)^T represents the input vector, x = (x1, x2)^T represents the output vector, w1 and w2 represent the weight matrices, b1 and b2 represent the offset vectors, h represents the hidden layer unit vector, g1 and g2 represent activation functions, w1 and w2 are both real matrices, and b1, b2, h, y and x are all real vectors.
11. The decoding apparatus according to claim 10,
wherein, in any of the cases of the values taken by the output vector x, the output vector y has the following mapping relation with x:
x1 = y1 ⊕ y2,
x2 = y2.
12. the decoding apparatus of claim 11, wherein before the decoding of the soft information by the decoding model, the decoding module further comprises:
and acquiring the coding model.
13. The decoding apparatus according to claim 12,
the decoding model is obtained through two training processes.
14. The decoding device according to claim 13, wherein the decoding module is specifically configured to:
constructing an initial neural network decoding unit and setting initial neural network decoding unit parameters, wherein the initial neural network decoding unit parameters are used for indicating the mapping relation between input information and output information which are input into the initial neural network decoding unit, and the initial neural network decoding unit parameters comprise an initial weight matrix and an initial offset vector;
training the initial neural network decoding unit by using a preset first sample set, updating parameters of the initial neural network decoding unit to parameters of an intermediate neural network decoding unit, and obtaining the intermediate neural network decoding unit, wherein the intermediate neural network decoding unit comprises the parameters of the intermediate neural network decoding unit, the parameters of the intermediate neural network decoding unit are used for indicating a mapping relation between input information and output information which are input into the intermediate neural network decoding unit, the parameters of the intermediate neural network decoding unit comprise an intermediate weight matrix and an intermediate offset vector, the first sample set comprises at least one first sample, one first sample comprises a first column vector with the length of 2 and a second column vector with the length of 2, and the second column vector is an expected vector for decoding the first column vector;
combining a plurality of the intermediate neural network decoding units together to obtain a first initial decoding model;
training the first initial decoding model by using a preset second sample set, and updating the intermediate neural network decoding unit parameters in the intermediate neural network decoding units to the neural network decoding unit parameters to obtain the decoding model, wherein the second sample set comprises a third column vector with the length of N and a fourth column vector with the length of N, and the fourth column vector is an expected vector for decoding the third column vector.
15. The decoding device according to claim 14, wherein the decoding module is specifically configured to:
acquiring a decoding network graph, wherein the decoding network graph comprises at least one decoding butterfly graph, and the decoding butterfly graph is used for indicating a check relation between input information of the decoding butterfly graph and output information of the decoding butterfly graph;
and replacing the decoding butterfly graph in the decoding network graph by using the intermediate neural network decoding unit to obtain the first initial decoding model.
16. The decoding apparatus according to claim 12,
the decoding model is obtained through a training process.
17. The decoding device according to claim 16, wherein the decoding module is specifically configured to:
constructing an initial neural network decoding unit and setting initial neural network decoding unit parameters, wherein the initial neural network decoding unit parameters are used for indicating the mapping relation between input information and output information which are input into the initial neural network decoding unit, and the initial neural network decoding unit parameters comprise an initial weight matrix and an initial offset vector;
combining a plurality of initial neural network decoding units together to obtain a second initial decoding model;
training the second initial decoding model by using a preset third sample set, and updating the initial neural network decoding unit parameters in the initial neural network decoding units to the neural network decoding unit parameters to obtain the decoding model, wherein the third sample set comprises a fifth column vector with the length of N and a sixth column vector with the length of N, and the sixth column vector is an expected vector for decoding the fifth column vector.
18. The decoding device according to claim 17, wherein the decoding module is specifically configured to:
acquiring a decoding network graph, wherein the decoding network graph comprises at least one decoding butterfly graph, and the decoding butterfly graph is used for indicating a check relation between input information of the decoding butterfly graph and output information of the decoding butterfly graph;
and replacing the decoding butterfly graph in the decoding network graph by using an initial neural network decoding unit to obtain the second initial decoding model.
19. A computer-readable storage medium, in which program instructions are stored which, when run on a processor, perform the method according to any one of claims 1 to 9.
20. A decoding apparatus, comprising:
a memory to store instructions;
and at least one processor communicatively coupled to the memory, wherein the at least one processor is configured to perform the method of any of claims 1-9 when executing the instructions.
21. A chip comprising a processor and a data interface, the processor reading instructions stored on a memory through the data interface, performing the method of any one of claims 1-9.
CN201910087689.9A 2019-01-29 2019-01-29 Decoding method and decoding device Active CN111490798B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910087689.9A CN111490798B (en) 2019-01-29 2019-01-29 Decoding method and decoding device
PCT/CN2020/071341 WO2020156095A1 (en) 2019-01-29 2020-01-10 Decoding method and decoding device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910087689.9A CN111490798B (en) 2019-01-29 2019-01-29 Decoding method and decoding device

Publications (2)

Publication Number Publication Date
CN111490798A CN111490798A (en) 2020-08-04
CN111490798B true CN111490798B (en) 2022-04-22

Family

ID=71812337

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910087689.9A Active CN111490798B (en) 2019-01-29 2019-01-29 Decoding method and decoding device

Country Status (2)

Country Link
CN (1) CN111490798B (en)
WO (1) WO2020156095A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114422380B (en) * 2020-10-09 2023-06-09 维沃移动通信有限公司 Neural network information transmission method, device, communication equipment and storage medium
CN115037312B (en) * 2022-08-12 2023-01-17 北京智芯微电子科技有限公司 Method, device and equipment for quantizing LDPC decoding soft information

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0904649A1 (en) * 1996-06-11 1999-03-31 Motorola Limited Msle decoder with neural network
CN101562456A (en) * 2009-06-03 2009-10-21 华北电力大学(保定) Code assisting frame synchronizing method based on soft decoding information of low-density parity check codes
WO2011014738A2 (en) * 2009-07-30 2011-02-03 Qualcomm Incorporated Method and apparatus for reliability-aided pruning of blind decoding results
CN102831026A (en) * 2012-08-13 2012-12-19 忆正科技(武汉)有限公司 MLC (multi-level cell) and method for dynamically regulating soft bit read voltage threshold of MLC
CN104079382A (en) * 2014-07-25 2014-10-01 北京邮电大学 Polar code decoder and polar code decoding method based on probability calculation
CN107248866A (en) * 2017-05-31 2017-10-13 东南大学 A kind of method for reducing polarization code decoding delay
CN108140407A (en) * 2015-08-11 2018-06-08 桑迪士克科技有限责任公司 For reading the soft bit technology of data storage device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109412608B (en) * 2017-03-24 2019-11-05 华为技术有限公司 Polar coding method and code device, interpretation method and code translator
US20180357530A1 (en) * 2017-06-13 2018-12-13 Ramot At Tel-Aviv University Ltd. Deep learning decoding of error correcting codes

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0904649A1 (en) * 1996-06-11 1999-03-31 Motorola Limited Msle decoder with neural network
CN101562456A (en) * 2009-06-03 2009-10-21 华北电力大学(保定) Code assisting frame synchronizing method based on soft decoding information of low-density parity check codes
WO2011014738A2 (en) * 2009-07-30 2011-02-03 Qualcomm Incorporated Method and apparatus for reliability-aided pruning of blind decoding results
CN102831026A (en) * 2012-08-13 2012-12-19 忆正科技(武汉)有限公司 MLC (multi-level cell) and method for dynamically regulating soft bit read voltage threshold of MLC
CN104079382A (en) * 2014-07-25 2014-10-01 北京邮电大学 Polar code decoder and polar code decoding method based on probability calculation
CN108140407A (en) * 2015-08-11 2018-06-08 桑迪士克科技有限责任公司 For reading the soft bit technology of data storage device
CN107248866A (en) * 2017-05-31 2017-10-13 东南大学 A kind of method for reducing polarization code decoding delay

Also Published As

Publication number Publication date
CN111490798A (en) 2020-08-04
WO2020156095A1 (en) 2020-08-06

Similar Documents

Publication Publication Date Title
JP7471357B2 (en) Encoding method, decoding method, and device
CN108365848B (en) Polar code decoding method and device
CN108282259B (en) Coding method and device
CN108462554B (en) Polar code transmission method and device
EP3614701A1 (en) Polar code transmission method and device
CN108365850B (en) Encoding method, encoding device, and communication device
EP3879709A1 (en) Coding method and apparatus, decoding method and apparatus
KR20190123801A (en) Code rate adjustment method and apparatus of polar code
CN108429599A (en) Method and apparatus for the data processing in communication system
CN111490798B (en) Decoding method and decoding device
KR20200093627A (en) Encoding of systematic punctured polar codes associated with internal code
US20170111142A1 (en) Decoding method and apparatus
US11012102B2 (en) Puncturing of polar codes with complementary sequences
WO2018127069A1 (en) Coding method and device
US20200036474A1 (en) Resource mapping method and apparatus thereof
WO2018210216A1 (en) Data transmission method, chip, transceiver, and computer readable storage medium
EP3734873B1 (en) Channel encoding method and encoding device
WO2022117061A1 (en) Method and device for determining polar code assistant bits
WO2024055894A1 (en) Coding/decoding method and apparatus
US20230291498A1 (en) Method and apparatus for hybrid automatic repeat request in communication system using polar codes
CN111786680B (en) Method and device for determining generator matrix
CN116158031A (en) Encoding method and decoding method for polarization code, encoding device and decoding device
CN117155410A (en) Coding and decoding method and device
CN115913453A (en) Communication method and device
CN115549848A (en) Data transmission method and communication device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant