WO2020156095A1 - Decoding method and decoding device - Google Patents

Decoding method and decoding device

Info

Publication number
WO2020156095A1
Authority
WO
WIPO (PCT)
Prior art keywords
decoding
neural network
decoding unit
initial
model
Prior art date
Application number
PCT/CN2020/071341
Other languages
English (en)
French (fr)
Inventor
Zhang Chaoyang
Song Xuran
Qin Kangjian
Zhu Zhihuan
Xu Chen
Yu Tianhang
Original Assignee
Huawei Technologies Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Publication of WO2020156095A1

Classifications

    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/03Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words
    • H03M13/05Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words using block codes, i.e. a predetermined number of check bits joined to a predetermined number of information bits
    • H03M13/13Linear codes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks

Definitions

  • This application relates to the field of communications, in particular to a decoding method and decoding device.
  • The rapid evolution of wireless communication indicates that the fifth generation (5G) communication system will exhibit new characteristics.
  • The three most typical communication scenarios are enhanced mobile broadband (eMBB), massive machine type communication (mMTC), and ultra-reliable low-latency communication (URLLC), and these scenarios impose new requirements beyond those of long term evolution (LTE) systems.
  • Channel coding is one of the important research topics for meeting the needs of 5G communication.
  • Polar codes, also called polarization codes, are selected as the control channel coding method in the 5G standard. They are the first and, so far, only known channel coding method that can be strictly proven to "achieve" channel capacity.
  • Under different code lengths, especially finite code lengths, the performance of Polar codes is much better than that of Turbo codes and low density parity check (LDPC) codes. In addition, Polar codes have lower computational complexity in encoding and decoding. These advantages give Polar codes great development and application prospects in 5G.
  • Although maximum likelihood decoding has the best decoding performance, it requires correlating the received modulation symbols with all possible codewords, which makes maximum likelihood decoding almost impossible to implement under practical code length configurations.
  • the present application provides a decoding method and decoding device, which have good decoding performance.
  • a decoding method includes:
  • N is an integer greater than or equal to 2;
  • the soft information is decoded through a decoding model to obtain the decoding result, wherein the decoding model is composed of multiple neural network decoding units, and each neural network decoding unit supports the exclusive OR (XOR) operation on the soft information;
  • the decoding model is obtained through at least one training process.
  • The embodiment of the application assembles neural network decoding units into a decoding model; by connecting small neural network decoding units to obtain the decoding model, small learning-sample sets can be used in the decoding learning process.
  • The decoding model of the embodiment of the present application can meet the requirements of high-rate transmission and low decoding delay, and has good decoding performance.
  • The multiple neural network decoding units in the decoding model form a log2(N)-layer structure, wherein the output of the neural network decoding unit of the previous layer is used as the input of the next layer.
  • each neural network decoding unit has 2 inputs and 2 outputs and has at least one hidden layer structure.
  • each hidden layer may include Q nodes, and Q is an integer greater than or equal to 2.
  • What is referred to herein as a hidden layer may also be given other names, and the embodiment of the present application is not limited thereto.
  • The neural network decoding unit includes neural network decoding unit parameters, which are used to indicate the mapping relationship between the input information and the output information of the neural network decoding unit; the neural network decoding unit parameters include a weight matrix and an offset vector.
  • The input vector y input to a neural network decoding unit and the output vector x output by the neural network decoding unit have the following mapping relationship: h = g1(w1·y + b1), x = g2(w2·h + b2), where w1 and w2 represent the weight matrices, b1 and b2 represent the offset vectors, h represents the hidden unit vector, and g1 and g2 represent the activation functions; w1 and w2 are real matrices, and b1, b2, h, y, and x are all real vectors.
  • When the output vector x takes hard values, in either case the output vector y and x have the mapping relationship y1 = x1 ⊕ x2, y2 = x2.
  • Before the soft information is decoded through the decoding model, the method further includes:
  • the decoding model may be trained by a decoding device or other devices, and the embodiment of the present application is not limited thereto.
  • obtaining the decoding model by the decoding device includes that the decoding device obtains the decoding model from another device.
  • the decoding model may be trained by the aforementioned other device or another device, and the embodiment of the present application is not limited to this.
  • Since the other device has already trained and obtained the decoding model, the decoding device does not need to train the model; it can obtain the decoding model from the other device and use it directly, avoiding the cost overhead caused by retraining.
  • The decoding device acquiring the decoding model includes the decoding device training to obtain the decoding model.
  • The decoding device can send the decoding model to another device for use, so that the other device can directly use the decoding model without training it, thereby avoiding the cost overhead caused by retraining.
  • After the decoding device has trained the decoding model, it can directly use the decoding model during subsequent decoding without retraining.
  • The decoding device may pre-train the decoding model, so that the decoding model is not trained again during decoding but is used directly.
  • the decoding device may also train to obtain the decoding model when there is a decoding requirement, and then perform decoding, and the embodiment of the present application is not limited to this.
  • the decoding model is obtained through two training processes.
  • the acquiring the decoding model includes:
  • the initial neural network decoding unit parameters include an initial weight matrix and an initial offset vector;
  • The intermediate neural network decoding unit includes intermediate neural network decoding unit parameters, which are used to indicate the mapping relationship between the input information and the output information of the intermediate neural network decoding unit; the intermediate neural network decoding unit parameters include an intermediate weight matrix and an intermediate offset vector.
  • The first sample set includes at least one first sample; one first sample includes a first column vector of length 2 and a second column vector of length 2, where the second column vector is the expected decoding vector of the first column vector.
  • The second sample set includes a third column vector of length N and a fourth column vector of length N, where the fourth column vector is the expected decoding vector of the third column vector.
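As an illustrative sketch only (not taken from the application), the first sample set described above could be generated by encoding random 2-bit vectors through the XOR butterfly, transmitting them as BPSK symbols over an AWGN channel, and pairing the resulting soft information with the expected decoded bits. The BPSK mapping, the noise model, and all function names here are assumptions:

```python
import math
import random

def make_first_sample_set(num_samples, noise_var=0.5, seed=0):
    """Illustrative first sample set: each first sample pairs a first
    column vector of length 2 (noisy soft information) with a second
    column vector of length 2 (the expected decoding vector)."""
    rng = random.Random(seed)
    samples = []
    for _ in range(num_samples):
        u = [rng.randint(0, 1), rng.randint(0, 1)]   # expected decoded bits
        c = [u[0] ^ u[1], u[1]]                      # XOR-butterfly encoding
        # BPSK (0 -> +1, 1 -> -1) over AWGN, then LLR soft information
        rx = [(1.0 - 2.0 * b) + rng.gauss(0.0, math.sqrt(noise_var)) for b in c]
        llr = [2.0 * y / noise_var for y in rx]
        samples.append((llr, u))
    return samples
```

The second sample set of length N could be produced the same way, using the full length-N encoding network instead of a single butterfly.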
  • said combining a plurality of said intermediate neural network decoding units to obtain a first initial decoding model includes:
  • The decoding network diagram includes at least one decoding butterfly diagram, and the decoding butterfly diagram is used to indicate the mapping relationship between the input information and the output information of the decoding butterfly diagram.
  • The intermediate neural network decoding unit is used to replace the decoding butterfly diagram in the decoding network diagram to obtain the first initial decoding model.
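The replacement step above can be sketched as follows. The stride-based butterfly indexing and the dictionary layout are illustrative assumptions, since the application does not fix a concrete data structure:

```python
import math

def build_first_initial_model(n, intermediate_params):
    """Generate the decoding network diagram for length n and replace each
    decoding butterfly with a copy of the trained intermediate unit
    parameters. Returns {layer_index: [((i, j), params), ...]}."""
    assert n >= 2 and n & (n - 1) == 0, "n must be a power of two"
    model = {}
    for d in range(int(math.log2(n))):
        stride = n >> (d + 1)              # stride-based indexing (assumption)
        layer = []
        for start in range(0, n, 2 * stride):
            for k in range(stride):
                pair = (start + k, start + k + stride)
                layer.append((pair, dict(intermediate_params)))
        model[d] = layer
    return model
```

Each layer contains n/2 butterflies, so a length-8 network would have 3 layers of 4 units each.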
  • the decoding model is obtained through a training process.
  • the acquiring the decoding model includes:
  • the initial neural network decoding unit parameters include an initial weight matrix and an initial offset vector;
  • The third sample set includes a fifth column vector of length N and a sixth column vector of length N, where the sixth column vector is the expected decoding vector of the fifth column vector.
  • the combining a plurality of the initial neural network decoding units to obtain a second initial decoding model includes:
  • The decoding network diagram includes at least one decoding butterfly diagram, and the decoding butterfly diagram is used to indicate the mapping relationship between the input information and the output information of the decoding butterfly diagram.
  • the initial neural network decoding unit is used to replace the decoding butterfly diagram in the decoding network diagram to obtain the second initial decoding model.
  • The embodiment of the application assembles neural network decoding units into a decoding model; by connecting small neural network decoding units to obtain the decoding model, small learning-sample sets can be used in the decoding learning process.
  • the decoding model of the embodiment of the present application can meet the requirements of high-rate transmission and low decoding delay, and has good decoding performance.
  • a decoding device which includes various modules or units for executing the method in the first aspect or any one of the possible implementation manners of the first aspect.
  • a decoding device which includes a transceiver, a processor, and a memory.
  • the processor is used to control the transceiver to send and receive signals
  • the memory is used to store a computer program
  • the processor is used to call and run the computer program from the memory, so that the decoding device executes the method in the first aspect and its possible implementation.
  • a computer-readable medium on which a computer program is stored, and when the computer program is executed by a computer, the method in the first aspect and its possible implementation manners are implemented.
  • a computer program product which implements the method in the first aspect and its possible implementation manner when the computer program product is executed by a computer.
  • a processing device including a processor and an interface.
  • a processing device including a processor, an interface, and a memory.
  • The processor is configured to execute the method in the first aspect or any possible implementation of the first aspect as the execution subject, wherein the related data interaction process (for example, receiving information sent by the sending end, such as bits to be decoded) is completed through the above-mentioned interface.
  • the above-mentioned interface may further complete the above-mentioned data interaction process through a transceiver.
  • The processing device in the above sixth or seventh aspect may be a chip, and the processor may be implemented by hardware or software.
  • When implemented by hardware, the processor may be a logic circuit, an integrated circuit, or the like; when implemented by software, the processor may be a general-purpose processor implemented by reading software code stored in the memory.
  • The memory can be integrated with the processor, for example, integrated in the processor; the memory may also be located outside the processor and exist independently.
  • Figure 1 is a schematic diagram of an applicable scenario according to an embodiment of the present application.
  • Fig. 2 is a schematic diagram of a wireless communication process according to an embodiment of the present application.
  • Fig. 3 is a flowchart of a decoding method according to an embodiment of the present application.
  • Fig. 4 is a schematic diagram of a neural network decoding unit according to an embodiment of the present application.
  • Fig. 5 is a schematic diagram of a decoding model according to an embodiment of the present application.
  • Fig. 6 is a schematic diagram of a method for training a decoding model twice according to an embodiment of the present application.
  • Fig. 7 is a schematic diagram of a decoding network according to an embodiment of the present application.
  • Fig. 8 is a schematic diagram of a butterfly operation according to an embodiment of the present application.
  • Fig. 9 is a schematic diagram of a method for generating a first initial decoding model according to an embodiment of the present application.
  • Fig. 10 is a schematic diagram of a method for training a decoding model once according to an embodiment of the present application.
  • Fig. 11 is a schematic diagram of a method for generating a second initial decoding model according to an embodiment of the present application.
  • Fig. 12 is a comparison diagram of simulation decoding performance of a decoding model according to an embodiment of the present application.
  • FIG. 13 is a comparison diagram of the decoding performance of the decoding model according to the present application and the existing model.
  • Fig. 14 is a schematic structural diagram of a decoding device according to an embodiment of the present application.
  • FIG. 15 is a schematic structural diagram of a decoding device according to another embodiment of the present application.
  • GSM global system for mobile communications
  • CDMA code division multiple access
  • WCDMA wideband code division multiple access
  • GPRS general packet radio service
  • LTE long term evolution
  • FDD frequency division duplex
  • UMTS universal mobile telecommunication system
  • WiMAX worldwide interoperability for microwave access
  • 5G fifth generation
  • NR new radio
  • FIG. 1 shows a schematic diagram of a communication system 100 applicable to the sending and receiving methods and apparatuses of the embodiments of the present application.
  • the communication system 100 may include at least one network device, such as the network device 110 shown in FIG. 1; the communication system 100 may also include at least one terminal device, such as the terminal device 120 shown in FIG. 1.
  • the network device 110 and the terminal device 120 may communicate through a wireless link.
  • Each communication device, such as the network device 110 or the terminal device 120 in FIG. 1, may be configured with multiple antennas.
  • the plurality of antennas may include at least one transmitting antenna for transmitting signals and at least one receiving antenna for receiving signals.
  • each communication device additionally includes a transmitter chain and a receiver chain.
  • Those of ordinary skill in the art can understand that each can include multiple components related to signal transmission and reception (such as a processor, modulator, multiplexer, demodulator, demultiplexer, or antenna). Therefore, multiple antenna technology can be used for communication between network devices and terminal devices.
  • the network device in the wireless communication system may be any device with a wireless transceiver function.
  • This equipment includes, but is not limited to: a base transceiver station (BTS) in a global system for mobile communications (GSM) or code division multiple access (CDMA) system;
  • BTS base transceiver station
  • GSM global system for mobile communications
  • CDMA code division multiple access
  • a base station (NodeB, NB) in a wideband code division multiple access (WCDMA) system; an evolved base station (evolved NodeB, eNB or eNodeB) in an LTE system; or a wireless controller in a cloud radio access network (CRAN).
  • eNB or eNodeB evolved NodeB
  • CRAN cloud radio access network
  • Alternatively, the network equipment may be a relay station, an access point, in-vehicle equipment, wearable equipment, network equipment in a future 5G network, or network equipment in a future evolved PLMN network, etc.
  • It may also be a transmission and reception point (TRP) or transmission point (TP) in an NR system, a base station (gNB) in an NR system, or one antenna panel or a group of antenna panels (including multiple antenna panels) of a base station in a 5G system, etc.
  • TRP transmission and reception point
  • TP transmission point
  • gNB base station
  • the embodiment of the present application does not specifically limit this.
  • the gNB may include a centralized unit (CU) and a DU.
  • the gNB may also include a radio unit (RU).
  • The CU implements some functions of the gNB and the DU implements other functions of the gNB; for example, the CU implements the radio resource control (RRC) and packet data convergence protocol (PDCP) layer functions, and the DU implements the radio link control (RLC), media access control (MAC), and physical (PHY) layer functions.
  • RRC radio resource control
  • PDCP packet data convergence protocol
  • RLC radio link control
  • MAC media access control
  • PHY physical
  • high-level signaling such as RRC layer signaling
  • the network device may be a CU node, or a DU node, or a device including a CU node and a DU node.
  • The CU can be classified as a network device in the access network (radio access network, RAN), or the CU can be classified as a network device in the core network (core network, CN), which is not limited in this application.
  • Terminal equipment in the wireless communication system may also be referred to as user equipment (UE), access terminal, subscriber unit, subscriber station, mobile station, remote station, remote terminal, mobile device, user terminal, terminal, wireless communication device, user agent, or user device.
  • UE user equipment
  • The terminal device in the embodiment of the present application may be a mobile phone, a tablet computer (pad), a computer with a wireless transceiver function, a virtual reality (VR) terminal device, an augmented reality (AR) terminal device, a wireless terminal in industrial control, a wireless terminal in self-driving, a wireless terminal in remote medical, a wireless terminal in smart grid, a wireless terminal in transportation safety, a wireless terminal in a smart city, a wireless terminal in a smart home, a terminal device in a future 5G network, or a terminal device in a future evolved public land mobile network (PLMN), which is not limited in the embodiment of the present application.
  • the terminal device or the network device includes a hardware layer, an operating system layer running on the hardware layer, and an application layer running on the operating system layer.
  • the hardware layer includes hardware such as a central processing unit (CPU), a memory management unit (MMU), and memory (also referred to as main memory).
  • the operating system may be any one or more computer operating systems that implement business processing through processes, for example, Linux operating system, Unix operating system, Android operating system, iOS operating system, or windows operating system.
  • the application layer includes applications such as browsers, address books, word processing software, and instant messaging software.
  • The embodiments of the application do not specifically limit the specific structure of the execution body of the method provided in the embodiments of the application, as long as the method can be executed by running a program that records the code of the method provided in the embodiments of the application.
  • the execution subject of the method provided in the embodiments of the present application may be a terminal device or a network device, or a functional module in the terminal device or network device that can call and execute the program.
  • various aspects or features of the present application can be implemented as methods, devices, or products using standard programming and/or engineering techniques.
  • article of manufacture used in this application encompasses a computer program that can be accessed from any computer-readable device, carrier, or medium.
  • Computer-readable media may include, but are not limited to: magnetic storage devices (for example, hard disks, floppy disks, or tapes), optical disks (for example, compact discs (CD), digital versatile discs (DVD)), smart cards, and flash memory devices (for example, erasable programmable read-only memory (EPROM), cards, sticks, or key drives).
  • various storage media described herein may represent one or more devices and/or other machine-readable media for storing information.
  • the term "machine-readable medium” may include, but is not limited to, wireless channels and various other media capable of storing, containing, and/or carrying instructions and/or data.
  • the technical solution of the present application can be applied to a wireless communication system, for example, the communication system 100 shown in FIG. 1.
  • Two communication devices in the wireless communication system may have a wireless communication connection relationship.
  • One of the communication devices may correspond to the network device 110 shown in FIG. 1, for example, it may be the network device 110 or a chip configured in the network device 110; the other of the two communication devices may correspond to the terminal device 120 in FIG. 1, for example, it may be the terminal device 120 or a chip configured in the terminal device 120.
  • When the terminal device communicates with the network device, the terminal device and the network device serve as sender and receiver for each other; that is, when the terminal device sends a signal to the network device, the terminal device acts as the sender and the network device acts as the receiver.
  • the network device when a network device sends a signal to a terminal device, the network device serves as the sender and the terminal device serves as the receiver.
  • the source is sent out after source coding, channel coding, and modulation mapping in turn.
  • the destination is output through demapping and demodulation, channel decoding, and source decoding in turn.
  • When the terminal device is the sender, the coding process (source coding, channel coding, and modulation mapping) in Figure 2 is executed by the terminal device, and the decoding process (demapping and demodulation, channel decoding, and source decoding) is executed by the network device; when the network device is the sender, the reverse applies.
  • the current channel coding/decoding methods include but are not limited to: Hamming code and Polar code.
  • In existing approaches, the learning process of encoding and decoding mainly requires learning samples covering the entire codeword space, which becomes infeasible at practical code lengths, such as those of Polar codes.
  • the embodiment of the present application proposes an encoding/decoding method that can generalize to the entire codeword space by sampling a small range of the codeword space.
  • the method constructs a neural network encoding/decoding model through neural network units generated based on encoding/decoding, and then encodes and/or decodes information to be encoded/decoded according to the neural network encoding/decoding model.
  • the encoding/decoding model of the embodiment of the present application can meet the requirements of high-rate transmission and low decoding delay, and has good performance.
  • The encoding and decoding method of the embodiment of the present application will be described in detail below with reference to the accompanying drawings. It should be understood that encoding in the embodiments of the present application can be performed using a method similar to the decoding method.
  • The encoding model used in the specific encoding process is similar to the decoding model used in the decoding process. To avoid repetition, the following description uses only decoding as an illustration; the specific encoding process may correspond to the following decoding process.
  • an existing method may also be used for encoding, and the embodiment of the application is not limited to this.
  • FIG. 3 is a schematic flowchart of the decoding method in an embodiment of this application.
  • the method shown in FIG. 3 can be applied to the system of FIG. 1 and executed by a decoding device (also called a receiving end).
  • the decoding device may be a network device
  • the decoding device may be a terminal device, and the embodiment of the present application is not limited to this.
  • the method shown in FIG. 3 includes:
  • N is an integer greater than or equal to 2;
  • The soft information of the bits to be decoded may be the log likelihood ratio (LLR) of the bits to be decoded.
  • LLR log likelihood ratio
  • Each of the N bits to be decoded has one LLR, and the N bits to be decoded correspond to N LLRs.
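For example, if the bits are BPSK-modulated (0 → +1, 1 → −1, an assumption for illustration; the application does not fix a modulation) and sent over an AWGN channel with noise variance σ², each LLR can be computed as 2y/σ²:

```python
def bpsk_llr(received, noise_var):
    """Log likelihood ratio of each received BPSK sample over AWGN.

    Assumes bit 0 is mapped to +1 and bit 1 to -1 (an assumption), so
    LLR = log(P(b=0 | y) / P(b=1 | y)) = 2 * y / noise_var.
    """
    return [2.0 * y / noise_var for y in received]

# N bits to be decoded correspond to N LLRs, one per bit
llrs = bpsk_llr([0.9, -1.1, 0.3, -0.2], noise_var=0.5)  # N = 4
```

A positive LLR indicates the bit is more likely 0, a negative LLR that it is more likely 1.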
  • N can be considered as the length of the Polar code mother code, and the embodiment of the present application is not limited to this.
  • the soft information of the bits to be decoded may also be referred to as the information to be decoded.
  • the information to be decoded may also be referred to as a code word to be decoded, a code block to be decoded, a code word, or a code block.
  • the decoding device may regard the information to be decoded as a whole for decoding, or divide the information to be decoded into multiple sub-code blocks for parallel decoding processing, and the embodiment of the present application is not limited to this.
  • The decoding model is composed of a plurality of neural network decoding units, each neural network decoding unit supports the exclusive OR (XOR) operation on the soft information, and the decoding model is obtained through at least one training process.
  • The embodiment of the application assembles neural network decoding units into a decoding model; by connecting small neural network decoding units to obtain the decoding model, small learning-sample sets can be used in the decoding learning process.
  • the decoding model of the embodiment of the present application can meet the requirements of high-rate transmission and low decoding delay, and has good decoding performance.
  • each neural network decoding unit has 2 inputs and 2 outputs and has at least one hidden layer structure.
  • each hidden layer may include Q nodes, and Q is an integer greater than or equal to 2.
  • What is referred to herein as a hidden layer may also be given other names, and the embodiment of the present application is not limited thereto.
  • the neural network decoding unit includes neural network decoding unit parameters, and the neural network decoding unit parameters are used to indicate the input information and output information input to the neural network decoding unit
  • the mapping relationship between the neural network decoding unit parameters includes a weight matrix and an offset vector.
  • As an example, the neural network decoding unit has 2 inputs and 2 outputs and has one hidden layer; the hidden layer includes 3 nodes.
  • the neural network decoding unit includes an input layer, an output layer and a hidden layer.
  • the information input by the input layer is the input vector
  • the information output by the output layer is the output vector.
  • The input vector y input to a neural network decoding unit and the output vector x output by the neural network decoding unit have the following mapping relationship: h = g1(w1·y + b1), x = g2(w2·h + b2), where w1 and w2 represent the weight matrices, b1 and b2 represent the offset vectors, h represents the hidden unit vector, and g1 and g2 represent the activation functions; w1 and w2 are real matrices, and b1, b2, h, y, and x are all real vectors.
  • When the output vector x takes hard values, in either case the output vector y and x have the mapping relationship y1 = x1 ⊕ x2, y2 = x2.
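A minimal sketch of one such 2-input/2-output unit follows. The tanh and logistic-sigmoid choices for g1 and g2 are illustrative assumptions (the application does not name the activation functions), alongside the hard XOR butterfly mapping the unit is meant to reproduce:

```python
import math

def unit_forward(y, w1, b1, w2, b2):
    """One 2-input/2-output neural network decoding unit:
    h = g1(w1*y + b1), x = g2(w2*h + b2),
    with g1 = tanh and g2 = logistic sigmoid (illustrative assumptions)."""
    h = [math.tanh(sum(w * yj for w, yj in zip(row, y)) + b)
         for row, b in zip(w1, b1)]
    x = [1.0 / (1.0 + math.exp(-(sum(w * hj for w, hj in zip(row, h)) + b)))
         for row, b in zip(w2, b2)]
    return x

def butterfly(x):
    """Hard XOR butterfly the unit supports: y = (x1 XOR x2, x2)."""
    x1, x2 = x
    return [x1 ^ x2, x2]
```

For a Q-node hidden layer, w1 is a Q×2 matrix and w2 a 2×Q matrix, matching the 2-input, 2-output shape of the unit.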
  • The multiple neural network decoding units in the decoding model form a log2(N)-layer structure, wherein the output of the neural network decoding unit of the previous layer is used as the input of the next layer.
  • For each layer, the input information is y and the output information is x; the output information x of the previous layer is used as the input information y of the current layer.
  • FIG. 5 is only schematic, and the connection relationship between the layers in FIG. 5 can be changed or deformed arbitrarily, and the embodiment of the present application is not limited thereto.
  • the input information of the decoding model shown in FIG. 5 is soft information of 16 bits to be decoded, and the output information is 16 decoded bits.
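The layered structure above can be sketched as log2(N) stages of 2-in/2-out units. The FFT-style stride pairing below is an assumption, since the application allows the connection pattern between layers to vary:

```python
import math

def model_forward(llrs, layers, unit_fn):
    """Pass N soft values through log2(N) layers of 2-in/2-out units.

    `layers` holds one parameter set per layer; `unit_fn(pair, params)`
    evaluates one neural network decoding unit. The output of each layer
    is used as the input of the next layer.
    """
    n = len(llrs)
    assert n >= 2 and n & (n - 1) == 0, "N must be a power of two"
    assert len(layers) == int(math.log2(n)), "one entry per log2(N) layer"
    x = list(llrs)
    for depth, params in enumerate(layers):
        stride = n >> (depth + 1)          # FFT-style pairing (an assumption)
        nxt = x[:]
        for start in range(0, n, 2 * stride):
            for k in range(stride):
                i, j = start + k, start + k + stride
                nxt[i], nxt[j] = unit_fn((x[i], x[j]), params)
        x = nxt
    return x
```

With N = 16 the model has log2(16) = 4 layers of units, matching the 16-bit example described for FIG. 5.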
  • the method may further include: a decoding device acquiring the decoding model.
  • the decoding model may be trained by a decoding device that executes the method shown in FIG. 3, or may be trained by other devices, and the embodiment of the present application is not limited thereto.
  • obtaining the decoding model by the decoding device includes that the decoding device obtains the decoding model from another device.
  • the decoding model may be trained by the aforementioned other device or another device, and the embodiment of the present application is not limited to this.
  • Since the other device has already trained and obtained the decoding model, the decoding device does not need to train the model; it can obtain the decoding model from the other device and use it directly, avoiding the cost overhead caused by retraining.
  • The decoding device acquiring the decoding model includes the decoding device training to obtain the decoding model.
  • after training, the decoding device can send the decoding model to another device, so that the other device can use the decoding model directly without training, avoiding the cost overhead caused by retraining.
  • the decoding model can be used directly during subsequent decoding without retraining.
  • the decoding device may pre-train the decoding model, so that the decoding model is no longer trained during decoding but is used directly.
  • the decoding device may also train to obtain the decoding model when there is a decoding requirement, and then perform decoding, and the embodiment of the present application is not limited to this.
  • the training scheme may refer to a scheme used for pre-training a decoding model, or a scheme used for training a decoding model when decoding is currently required.
  • the decoding model in the embodiment of the present application may be obtained through at least one training process.
  • the decoding model is obtained through two training processes.
  • the method for obtaining a decoding model during two training processes in the embodiment of the present application includes:
  • the initial neural network decoding unit parameters include an initial weight matrix and an initial offset vector.
  • the initial neural network decoding unit includes at least one hidden layer, each hidden layer includes Q nodes, and Q is greater than or equal to 2.
  • the initial neural network decoding unit includes a hidden layer, and the hidden layer has 3 nodes.
  • the initial neural network decoding unit includes an input layer, an output layer, and at least one hidden layer.
  • the initial neural network decoding unit further includes initial neural network decoding unit parameters, and the initial neural network decoding unit parameters may include: an initial weight matrix w and an initial bias vector b.
  • the initial neural network decoding unit parameters are generally randomly generated.
  • the initial neural network decoding unit parameters may also be preset values, and the embodiment of the present application is not limited to this.
  • the number of hidden layers may be one or more than one.
  • the constructed initial neural network decoding unit is shown in FIG. 4.
  • the number of nodes in the hidden layer of the initial neural network decoding unit is greater than the code length of the input information and the output information. That is, when the code length of the input information and the output information is 2, the number of nodes in the hidden layer is an integer greater than 2.
  • the initial neural network decoding unit has one hidden layer and the hidden layer has 3 nodes as an example for detailed description, but the embodiment of the present application is not limited to this.
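As a sketch, the initial neural network decoding unit described above (2 inputs, one hidden layer with 3 nodes, 2 outputs, with randomly generated initial weight matrices and bias vectors) could be constructed like this; the dictionary layout and NumPy usage are our own assumptions, not from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_initial_unit(hidden_nodes=3):
    """Initial neural network decoding unit: 2 -> hidden_nodes -> 2.
    w1/w2 are the initial weight matrices, b1/b2 the initial bias
    vectors, all randomly generated as described in the text."""
    return {
        "w1": rng.standard_normal((hidden_nodes, 2)),
        "b1": rng.standard_normal(hidden_nodes),
        "w2": rng.standard_normal((2, hidden_nodes)),
        "b2": rng.standard_normal(2),
    }

u = make_initial_unit()
print(u["w1"].shape, u["w2"].shape)
```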
  • the decoding device trains the initial neural network decoding unit (that is, the first training process) to obtain the neural network decoding unit. Refer to step 620 for the specific training process.
  • the intermediate neural network decoding unit includes intermediate neural network decoding unit parameters, which are used to indicate the mapping relationship between the input information and the output information of the intermediate neural network decoding unit;
  • the intermediate neural network decoding unit parameters include an intermediate weight matrix and an intermediate offset vector; the first sample set includes at least one first sample, and one first sample includes a first column vector of length 2 and a second column vector of length 2, where the second column vector is the desired decoding vector of the first column vector;
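One way the first sample set might be built, as a sketch under the assumption that the desired decode of a length-2 input follows the butterfly XOR relation (the patent does not fix the exact sample construction):

```python
import itertools
import numpy as np

def first_sample_set():
    """All length-2 input vectors paired with their expected decode.
    The desired mapping x = (y1 XOR y2, y2) is an assumption based on
    the butterfly operation the unit is meant to learn."""
    samples = []
    for y1, y2 in itertools.product([0, 1], repeat=2):
        first = np.array([y1, y2])        # first column vector
        second = np.array([y1 ^ y2, y2])  # second column vector (expected)
        samples.append((first, second))
    return samples

for y, x in first_sample_set():
    print(y, "->", x)
```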
  • the initial neural network decoding unit is trained until the error between the output information of the initial neural network decoding unit and the expected check result of the input information (i.e., the first column vector), namely the second column vector, is less than the first preset threshold. It should be understood that when the initial neural network decoding unit is trained, the initial neural network decoding unit parameters are updated to obtain the intermediate neural network decoding unit parameters.
  • the error between the expected verification result of the output information and the input information may be the difference between the output information and the expected verification result.
  • the error between the expected check result of the output information and the input information may be the mean square error between the output information and the expected check result.
  • the operator can set the method for obtaining the error between the output information and the expected verification result according to actual needs, which is not limited in this application.
  • the threshold corresponding to the error between the output information and the expected verification result can also be set according to different ways of obtaining the error, which is not limited in this application.
  • the initial neural network decoding unit after training is the intermediate neural network decoding unit in the embodiment of this application.
  • the initial neural network decoding unit parameters included in it are updated to the intermediate neural network decoding unit parameters.
  • the achievable result of the intermediate neural network decoding unit is: based on the intermediate neural network decoding unit parameters contained therein, the input training information (for example, the first column vector) is decoded, and the output information is equal to or close to the expected check result of the first column vector (i.e., the second column vector).
  • the training parameters of the decoding unit of the intermediate neural network are shown in Table 1 below.
  • the decoding device can perform the following training process on the initial neural network decoding unit based on the input information, the expected verification result of the input information, and the initial neural network decoding unit parameters, such as:
  • the input r of each neuron in the next layer is obtained from the outputs c of the previous-layer neurons connected to it: the outputs are weighted and summed based on the initial neural network decoding unit parameters (that is, the initial weight w set on each connection between the two layers and the initial bias vector b set on each node), and then passed through the activation function. The input r of each neuron is therefore as follows:
  • the output x of the initial neural network decoding unit (that is, training result 1 in the embodiment of this application) can be recursively expressed as:
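The recursive expression itself appears only as an image in the source; below is a sketch of the forward computation it describes, using sigmoid activations (an assumption, since the activation functions g1 and g2 are not named) and illustrative 2-3-2 parameters not taken from the patent:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(p, y):
    """Weighted sum with bias, then activation, layer by layer:
    h = g1(w1*y + b1), x = g2(w2*h + b2)."""
    h = sigmoid(p["w1"] @ y + p["b1"])  # hidden unit vector
    x = sigmoid(p["w2"] @ h + p["b2"])  # output of the unit
    return x, h

# illustrative parameters for a 2-3-2 unit (not taken from the patent)
params = {"w1": np.ones((3, 2)), "b1": np.zeros(3),
          "w2": np.ones((2, 3)), "b2": np.zeros(2)}
x, h = forward(params, np.array([1.0, 0.0]))
print(x.shape, h.shape)
```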
  • the decoding device obtains the error value between training result 1 and the expected check result.
  • the calculation method of the error value is as described above; it can be the difference between training result 1 and the expected check result, or the mean square error.
  • for the loss function, refer to the prior art; details are not repeated in this application.
  • the decoding device can calculate the residual of the output layer by propagating the error backward, then perform a weighted summation of the residuals of the nodes in each layer, layer by layer; then, based on the learning rate and the residual values of the input-layer nodes, update the weights of the first layer (that is, the weights between the input layer and the hidden layer), and repeat this procedure to update the corresponding weights layer by layer.
  • the input information is trained again to obtain a new training result, and the above steps are repeated; that is, the parameters of the initial neural network decoding unit are updated repeatedly until the error between the training result n output by the initial neural network decoding unit and the expected check result is less than the target value (for example, the target value can be 0.0001), at which point the training result is confirmed to have converged.
  • the above training method is the gradient descent method; the decoding device can iteratively optimize the initial weight w and the initial bias vector b through the gradient descent method so that the loss function reaches its minimum value.
  • the decoding device can also train the initial neural network decoding unit in the embodiment of the present application through other training methods, the purpose of which is to make the output value of the initial neural network decoding unit approach the optimization target and to update the initial neural network decoding unit parameters.
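A minimal gradient-descent loop in the spirit of the procedure above (mean-square-error loss, residuals propagated backward layer by layer); the learning rate, sigmoid activation, target value, and the XOR-butterfly training samples are all illustrative assumptions, not values from the patent:

```python
import itertools
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
w1, b1 = rng.standard_normal((3, 2)), np.zeros(3)  # initial parameters
w2, b2 = rng.standard_normal((2, 3)), np.zeros(2)

# training samples: input (y1, y2), expected output (y1 XOR y2, y2)
samples = [(np.array([a, b], float), np.array([a ^ b, b], float))
           for a, b in itertools.product([0, 1], repeat=2)]

lr, target = 1.0, 1e-4          # learning rate and convergence target
history = []
for epoch in range(20000):
    err = 0.0
    for y, expected in samples:
        h = sigmoid(w1 @ y + b1)            # forward pass
        x = sigmoid(w2 @ h + b2)
        e = x - expected
        err += float(e @ e) / len(samples)  # mean square error
        dx = e * x * (1 - x)                # output-layer residual
        dh = (w2.T @ dx) * h * (1 - h)      # hidden-layer residual
        w2 -= lr * np.outer(dx, h); b2 -= lr * dx  # update layer by layer
        w1 -= lr * np.outer(dh, y); b1 -= lr * dh
    history.append(err)
    if err < target:
        break
print(history[-1] < history[0])
```

The loop stops either when the error falls below the target value or after a fixed number of epochs, mirroring the convergence check described in the text.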
  • in the decoding network diagram (the Polar code decoding structure, for example, the butterfly diagram shown in FIG. 7), all butterfly operations (as shown in FIG. 8) can be replaced with the intermediate neural network decoding unit to obtain the first initial decoding model.
  • FIG. 9 is a schematic flow chart of the steps of generating the first initial decoding model, and the steps shown in FIG. 9 include:
  • the decoding network diagram includes at least one decoding butterfly diagram, and the decoding butterfly diagram is used to indicate the mapping relationship between the input information and the output information of the decoding butterfly diagram.
  • the decoding device may train the first initial decoding model until the error between the output information of the first initial decoding model and the expected check result of the input information (the third column vector), namely the fourth column vector, is less than the second preset threshold. After the first initial decoding model is trained, the intermediate neural network decoding unit parameters in the intermediate neural network decoding unit are updated to the neural network decoding unit parameters to obtain the decoding model.
  • the first initial decoding model after training is the aforementioned decoding model.
  • the decoding model is obtained through a training process.
  • a method 1000 for obtaining a decoding model in one training process in an embodiment of the present application includes:
  • step 1010 corresponds to step 610, in order to avoid repetition, it will not be repeated here.
  • in the decoding network diagram (the Polar code decoding structure, for example, the butterfly diagram shown in FIG. 7), all butterfly operations (as shown in FIG. 8) can be replaced with the initial neural network decoding unit to obtain the second initial decoding model.
  • FIG. 11 is a schematic flowchart of the steps for generating the second initial decoding model, and the steps shown in FIG. 11 include:
  • the decoding network diagram includes at least one decoding butterfly diagram, and the decoding butterfly diagram is used to indicate the mapping relationship between the input information and the output information of the decoding butterfly diagram.
  • the decoding device may train the second initial decoding model until the error between the output information of the second initial decoding model and the expected check result of the input information (the fifth column vector), namely the sixth column vector, is less than the third preset threshold. After the second initial decoding model is trained, the initial neural network decoding unit parameters in the initial neural network decoding unit are updated to the neural network decoding unit parameters to obtain the decoding model.
  • the second initial decoding model after training is the aforementioned decoding model.
  • the embodiment of the application composes neural network decoding units into a decoding model, realizing that the decoding model is obtained by connecting small neural network decoding units, so that a small set of learning samples can be used in the decoding learning process.
  • the decoding model of the embodiment of the present application can meet the requirements of high-rate transmission and low decoding delay, and has good decoding performance.
  • the abscissa represents the signal-to-noise ratio Eb/No.
  • Eb/No may represent the demodulation threshold of the receiver, which is defined as the energy per bit divided by the noise power spectral density.
  • Eb represents the signal energy per bit: Eb = S/R
  • S represents the signal energy
  • R represents the service bit rate
  • No represents the noise power spectral density: No = N/W
  • W represents the bandwidth
  • N represents the noise
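A small arithmetic check of the definitions above (Eb = S/R, No = N/W); the numeric values are illustrative only and are not taken from the patent:

```python
import math

S = 1e-3  # signal power (W), illustrative
R = 1e6   # service bit rate (bit/s)
N = 2e-4  # noise power (W)
W = 2e6   # bandwidth (Hz)

Eb = S / R  # energy per bit
No = N / W  # noise power spectral density
ebno_db = 10 * math.log10(Eb / No)
print(round(ebno_db, 1))
```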
  • the ordinate represents the bit error ratio (BER).
  • the trained decoding model retains the exclusive OR function of each processing unit, and the model has certain learning capabilities. Specifically, the trained decoding model has better decoding performance than the untrained decoding model.
  • at the same signal-to-noise ratio, a decoding model trained with a higher training-sample ratio p has better decoding performance than a decoding model trained with a lower training-sample ratio.
  • p may represent the proportion of the number of training samples in the full codeword space, and the value of p may be 10%, 20%, 40%, 60%, 80%, or 100%, but is not limited to this.
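Reading p as the fraction of the full codeword space used for training (2^K codewords for K information bits, which is our assumption about how the space is counted), the sample counts work out as follows; K = 8 is illustrative only:

```python
K = 8                 # information bits, illustrative
full_space = 2 ** K   # full codeword space size
for p in (0.10, 0.20, 0.40, 0.60, 0.80, 1.00):
    print(f"p={p:.0%}: {int(p * full_space)} of {full_space} codewords")
```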
  • the test signal-to-noise ratio Eb/N0 (dB) can range from 0 to 14. The embodiments of the application are not limited to this.
  • Figure 13 shows that, with a very small training set, the performance of the neural network decoding model based on the neural network decoding unit (also known as the polarization processing unit) proposed in this application is better than that of other existing neural network decoding models.
  • FIGS. 1 to 13 are merely to help those skilled in the art understand the embodiments of the present application, and are not intended to limit the embodiments of the present application to the specific numerical values or specific scenarios illustrated. Those skilled in the art can obviously make various equivalent modifications or changes based on the examples given in FIGS. 1 to 13, and such modifications or changes also fall within the scope of the embodiments of the present application.
  • FIG. 14 is a schematic structural diagram of a decoding device provided by an embodiment of the application.
  • the device 1400 may include a decoding module 1410 and an obtaining module 1420.
  • the acquiring module is used to acquire the soft information of N bits to be decoded, where N is an integer greater than or equal to 2;
  • the decoding module is used to decode the soft information through a decoding model to obtain the decoding result, wherein the decoding model is composed of multiple neural network decoding units, and each neural network decoding unit supports For the exclusive OR operation of soft information, the decoding model is obtained through at least one training process.
  • the embodiment of the application composes neural network decoding units into a decoding model, realizing that the decoding model is obtained by connecting small neural network decoding units, so that a small set of learning samples can be used in the decoding learning process.
  • the decoding model of the embodiment of the present application can meet the requirements of high-rate transmission and low decoding delay, and has good decoding performance.
  • the decoding device 1400 has any function performed by the decoding device in the foregoing method embodiments, and detailed descriptions are appropriately omitted here.
  • the multiple neural network decoding units in the decoding model form a log 2 N-layer structure, wherein the output of the neural network decoding unit of the previous layer is used as the input of the latter layer.
  • each neural network decoding unit has 2 inputs and 2 outputs and has at least one hidden layer structure.
  • the neural network decoding unit includes neural network decoding unit parameters, which are used to indicate the mapping relationship between the input information and the output information of the neural network decoding unit;
  • the neural network decoding unit parameters include a weight matrix and an offset vector.
  • the input vector input to one neural network decoding unit and the output vector output to the one neural network decoding unit have the following mapping relationship:
  • w1 and w2 represent the weight matrices
  • b1 and b2 represent the offset vectors
  • h represents the hidden unit vector
  • g1 and g2 represent the activation functions
  • w1 and w2 are real number matrices
  • b1, b2, h, y, and x are all real number vectors.
  • the value of the output vector x can take either of two forms (the corresponding formulas appear only as images in the source); in either case, the output vector x and the input vector y have the mapping relationship h = g1(w1·y + b1), x = g2(w2·h + b2).
  • before decoding the soft information through the decoding model, the obtaining module is further configured to acquire the decoding model.
  • the decoding model is obtained through two training processes.
  • the decoding module is specifically configured to:
  • the initial neural network decoding unit parameters include an initial weight matrix and an initial offset vector;
  • the intermediate neural network decoding unit includes intermediate neural network decoding unit parameters, which are used to indicate the mapping relationship between the input information and the output information of the intermediate neural network decoding unit;
  • the parameters of the intermediate neural network decoding unit include an intermediate weight matrix and an intermediate offset vector
  • the first sample set includes at least one first sample
  • one first sample includes a first column vector of length 2 and a second column vector of length 2, where the second column vector is the desired decoding vector of the first column vector;
  • the second sample set includes a third column vector of length N and a fourth column vector of length N, and the fourth column vector is the desired decoding vector of the third column vector.
  • the decoding module is specifically configured to:
  • the decoding network diagram includes at least one decoding butterfly diagram, and the decoding butterfly diagram is used to indicate the mapping relationship between the input information and the output information of the decoding butterfly diagram.
  • the intermediate neural network decoding unit is used to replace the decoding butterfly diagram in the decoding network diagram to obtain the first initial decoding model.
  • the decoding model is obtained through a training process.
  • the decoding module is specifically configured to:
  • the initial neural network decoding unit parameters include an initial weight matrix and an initial offset vector;
  • the third sample set includes a fifth column vector of length N and a sixth column vector of length N, and the sixth column vector is the desired decoding vector of the fifth column vector.
  • the decoding module is specifically configured to:
  • the decoding network diagram includes at least one decoding butterfly diagram, and the decoding butterfly diagram is used to indicate the mapping relationship between the input information and the output information of the decoding butterfly diagram.
  • the initial neural network decoding unit is used to replace the decoding butterfly diagram in the decoding network diagram to obtain the second initial decoding model.
  • the term "module" in the embodiments of the present application may refer to an application specific integrated circuit (ASIC), an electronic circuit, a processor for executing one or more software or firmware programs (such as a shared processor, a dedicated processor, or a group processor) and memory, a merged logic circuit, and/or other suitable components that support the described functions.
  • the “module” in the embodiment of the present application may also be referred to as a “unit” which may be implemented by hardware or software, and the embodiment of the present application is not limited thereto.
  • decoding device 1400 provided in the present application corresponds to the process performed by the decoding device in the foregoing method embodiment, and the functions of each unit/module in the device can be referred to in the above Description, not repeat them here.
  • the decoding device described in FIG. 14 may be a network device or a terminal device, or a chip or an integrated circuit installed in the network device or the decoding device.
  • FIG. 15 is a schematic structural diagram of a decoding device provided by an embodiment of the application. As shown in FIG. 15, the decoding device 1500 can be applied to the system shown in FIG. 1 to perform any function of the decoding device in the foregoing method embodiment.
  • the decoding device 1500 may include at least one processor 1510 and a transceiver 1520, and the processor 1510 is connected to the transceiver 1520.
  • the decoding device 1500 further includes at least one memory 1530, which is connected to the processor 1510.
  • the decoding device 1500 may further include a bus system 1540.
  • the processor 1510, the memory 1530, and the transceiver 1520 can be connected via the bus system 1540.
  • the memory 1530 can be used to store instructions.
  • the processor 1510 may correspond to the decoding module 1410 in FIG. 14, and the transceiver 1520 may correspond to the obtaining module 1420 in FIG. 14.
  • the processor 1510 is configured to execute instructions to control the transceiver 1520 to send and receive information or signals, and the memory 1530 stores the instructions.
  • the memory 1530 may be integrated in the processor 1510, or may be located outside the processor 1510 and exist independently; the embodiment of the present application is not limited to this.
  • the processor may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
  • the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
  • the memory may include read-only memory and random access memory, and provides instructions and data to the processor.
  • a part of the memory may also include a non-volatile random access memory.
  • the memory can also store device type information.
  • the bus system may also include a power bus, a control bus, and a status signal bus.
  • various buses are marked as bus systems in the figure.
  • the steps of the above method can be completed by hardware integrated logic circuits in the processor or instructions in the form of software.
  • the steps of the method disclosed in the embodiments of the present invention may be directly embodied as being executed and completed by a hardware processor, or executed and completed by a combination of hardware and software modules in the processor.
  • the software module can be located in a mature storage medium in the field such as random access memory, flash memory, read-only memory, programmable read-only memory, or electrically erasable programmable memory, registers.
  • the storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware. In order to avoid repetition, it will not be described in detail here.
  • the decoding device 1500 shown in FIG. 15 can implement various processes involving the decoding device in the foregoing method embodiments.
  • the operations and/or functions of each module in the decoding device 1500 are respectively for implementing the corresponding processes in the foregoing method embodiments.
  • An embodiment of the present application also provides a processing device, including a processor and an interface; the processor is configured to execute the decoding method in any of the foregoing method embodiments.
  • the processing device may be a chip.
  • the processing device may be a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), a system on chip (SoC), a central processing unit (CPU), a network processor (NP), a digital signal processing circuit (DSP), a micro controller unit (MCU), a programmable logic device (PLD), or another integrated chip.
  • the steps of the above method can be completed by hardware integrated logic circuits in the processor or instructions in the form of software.
  • the steps of the method disclosed in the embodiments of the present application may be directly embodied as being executed and completed by a hardware processor, or executed and completed by a combination of hardware and software modules in the processor.
  • the software module can be located in a mature storage medium in the field such as random access memory, flash memory, read-only memory, programmable read-only memory, or electrically erasable programmable memory, registers.
  • the storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware. In order to avoid repetition, it will not be described in detail here.
  • the processor in the embodiment of the present invention may be an integrated circuit chip with signal processing capability.
  • the steps of the foregoing method embodiments can be completed by hardware integrated logic circuits in the processor or instructions in the form of software.
  • the above-mentioned processor may be a general-purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • the methods, steps, and logical block diagrams disclosed in the embodiments of the present invention can be implemented or executed.
  • the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
  • the steps of the method disclosed in the embodiments of the present invention may be directly embodied as being executed and completed by a hardware decoding processor, or executed and completed by a combination of hardware and software modules in the decoding processor.
  • the software module can be located in a mature storage medium in the field such as random access memory, flash memory, read-only memory, programmable read-only memory, or electrically erasable programmable memory, registers.
  • the storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware.
  • the memory in the embodiment of the present invention may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memory.
  • the non-volatile memory can be read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), or flash memory.
  • the volatile memory may be random access memory (RAM), which is used as an external cache.
  • by way of example and not limitation, many forms of RAM are available, such as static random access memory (SRAM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), double data rate synchronous dynamic random access memory (DDR SDRAM), enhanced synchronous dynamic random access memory (ESDRAM), synchlink dynamic random access memory (SLDRAM), and direct rambus random access memory (DR RAM).
  • the embodiment of the present application also provides a communication system, which includes the aforementioned encoding end and decoding end.
  • the embodiment of the present application also provides a computer-readable medium on which a computer program is stored, and when the computer program is executed by a computer, the method in any of the foregoing method embodiments is implemented.
  • the embodiment of the present application also provides a computer program product, which implements the method in any of the foregoing method embodiments when the computer program product is executed by a computer.
  • the computer program product includes one or more computer instructions.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable devices.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server or data center integrated with one or more available media.
  • the usable medium can be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a high-density digital video disc (DVD)), or a semiconductor medium (for example, a solid state disk (SSD)), etc.
  • the decoding method in the communication system is described above, but the present application is not limited to this.
  • a similar solution can also be used for encoding; to avoid repetition, it is not repeated here.
  • the network equipment or terminal equipment in each of the above device embodiments corresponds completely to the network equipment or terminal equipment in the method embodiments, and the corresponding modules or units execute the corresponding steps: for example, the sending module (transmitter) executes the sending steps in the method embodiments, the receiving module (receiver) executes the receiving steps, and steps other than sending and receiving can be executed by the processing module (processor).
  • the sending module and the receiving module can form a transceiver module, and the transmitter and receiver can form a transceiver to realize the transceiver function together; there can be one or more processors.
  • At least one refers to one or more, and “multiple” refers to two or more.
  • "And/or" describes the association relationship between associated objects and indicates that three relationships can exist; for example, A and/or B can mean: A alone exists, both A and B exist, or B alone exists, where A and B can be singular or plural.
  • the character “/” generally indicates that the associated objects are in an "or” relationship.
  • "At least one of the following items" or a similar expression refers to any combination of these items, including any combination of a single item or of multiple items.
  • for example, at least one of a, b, or c can represent: a, b, c, a-b, a-c, b-c, or a-b-c, where a, b, and c can each be single or multiple.
  • terms such as "component" used in this specification are used to denote computer-related entities: hardware, firmware, a combination of hardware and software, software, or software in execution.
  • the component may be, but is not limited to, a process running on a processor, a processor, an object, an executable file, a thread of execution, a program, and/or a computer.
  • the application running on the computing device and the computing device can be components.
  • One or more components may reside in processes and/or threads of execution, and components may be located on one computer and/or distributed among two or more computers.
  • these components can be executed from various computer readable media having various data structures stored thereon.
  • a component may communicate by way of local and/or remote processes based on a signal having one or more data packets (for example, data from one component interacting with another component in a local system or a distributed system, and/or across a network such as the Internet that interacts with other systems by way of the signal).
  • the disclosed system, device, and method may be implemented in other ways.
  • the device embodiments described above are merely illustrative.
  • the division of the units is only a logical function division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional units in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the computer program product includes one or more computer instructions (programs).
  • when the computer program instructions (programs) are loaded and executed on the computer, the processes or functions described in the embodiments of the present application are produced in whole or in part.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable devices.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another by wired (for example, coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless (for example, infrared, radio, or microwave) means.
  • the computer-readable storage medium may be any available medium accessible by a computer, or a data storage device, such as a server or data center, that integrates one or more available media.
  • the usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, and a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid state disk (SSD)).

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Error Detection And Correction (AREA)

Abstract

This application provides a decoding method and a decoding apparatus. The method includes: obtaining soft information of N bits to be decoded, where N is an integer greater than or equal to 2; and decoding the soft information by using a decoding model to obtain a decoding result, where the decoding model is formed by multiple neural network decoding units, each neural network decoding unit supports an XOR operation on soft information, and the decoding model is obtained through at least one training process. The decoding model of the embodiments of this application can meet the requirements of high-rate transmission and low decoding latency, and has good decoding performance.

Description

Decoding method and decoding apparatus
This application claims priority to Chinese Patent Application No. 201910087689.9, filed with the Chinese Patent Office on January 29, 2019 and entitled "Decoding method and decoding apparatus", which is incorporated herein by reference in its entirety.
Technical Field
This application relates to the communications field, and in particular to a decoding method and a decoding apparatus.
Background
The rapid evolution of wireless communication indicates that the future fifth-generation (5th generation, 5G) communication system will exhibit new characteristics. The three most typical communication scenarios are enhanced mobile broadband (eMBB), massive machine type communication (mMTC), and ultra-reliable low-latency communication (URLLC), whose requirements pose new challenges to the existing long term evolution (LTE) technology. As the most fundamental radio access technology, channel coding is one of the important research subjects for meeting 5G communication requirements. Polar codes were selected as the control channel coding scheme in the 5G standard. Polar codes are the first, and so far the only known, channel coding method that can be rigorously proven to "achieve" channel capacity. At various code lengths, and especially at finite code lengths, the performance of Polar codes is far better than that of Turbo codes and low-density parity-check (LDPC) codes. In addition, Polar codes have relatively low computational complexity in encoding and decoding. These advantages give Polar codes great prospects for development and application in 5G.
Although maximum-likelihood decoding has the best decoding performance, it requires correlating the received modulation symbols with all possible codewords, which makes maximum-likelihood decoding practically impossible at realistic code-length configurations.
Therefore, facing the requirements of high-rate transmission and low decoding latency, designing a Polar code decoding model with good decoding performance has become an urgent problem to be solved.
Summary
This application provides a decoding method and a decoding apparatus with good decoding performance.
According to a first aspect, a decoding method is provided, including:
obtaining soft information of N bits to be decoded, where N is an integer greater than or equal to 2; and
decoding the soft information by using a decoding model to obtain a decoding result, where the decoding model is formed by multiple neural network decoding units, each neural network decoding unit supports an XOR operation on soft information, and the decoding model is obtained through at least one training process.
In the embodiments of this application, neural network decoding units are assembled into a decoding model; that is, small neural network decoding units are connected to obtain the decoding model, so that in the decoding learning process a small set of training samples can generalize to the entire codeword space, weakening the impact that long-codeword information has on the complexity and learning difficulty of the neural network. The decoding model of the embodiments of this application can meet the requirements of high-rate transmission and low decoding latency and has good decoding performance.
With reference to the first aspect, in an implementation, the multiple neural network decoding units in the decoding model form a log2 N-layer structure, where the output of a neural network decoding unit in one layer serves as the input of the next layer.
With reference to the first aspect, in an implementation, each neural network decoding unit has 2 inputs and 2 outputs and has at least one hidden layer.
Optionally, each hidden layer may contain Q nodes, where Q is an integer greater than or equal to 2.
It should be understood that in the embodiments of this application the hidden layer may also be called a concealed layer; the embodiments of this application are not limited thereto.
With reference to the first aspect, in an implementation, the neural network decoding unit includes neural network decoding unit parameters, the neural network decoding unit parameters indicate the mapping between the input information and the output information of the neural network decoding unit, and the neural network decoding unit parameters include a weight matrix and an offset vector.
With reference to the first aspect, in an implementation, the input vector of a neural network decoding unit and the output vector of that neural network decoding unit satisfy the following mapping:
h = g1(w1·y + b1)
x = g2(w2·h + b2)
where y = (y1, y2)^T denotes the input vector, x = (x1, x2)^T denotes the output vector, w1 and w2 denote the weight matrices, b1 and b2 denote the offset vectors, h denotes the hidden-layer unit vector, and g1 and g2 denote the activation functions; w1 and w2 are real matrices, and b1, b2, h, y, and x are real vectors.
With reference to the first aspect, in an implementation, when the output vector x takes any of the values
(0, 0)^T, (0, 1)^T, (1, 0)^T, (1, 1)^T,
the input vector y and the output vector x satisfy the following mapping:
x1 = y1 ⊕ y2
x2 = y2
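By way of illustration only, the 2-input/2-output unit with one hidden layer described above can be sketched as the following forward pass. The tanh/sigmoid activations for g1 and g2, the Q = 3 hidden nodes, and the randomly generated parameters are assumptions made for this sketch, not values fixed by this application:

```python
import numpy as np

def unit_forward(y, w1, b1, w2, b2):
    """h = g1(w1*y + b1), x = g2(w2*h + b2), with g1 = tanh, g2 = sigmoid."""
    h = np.tanh(w1 @ y + b1)                      # hidden-layer unit vector
    return 1.0 / (1.0 + np.exp(-(w2 @ h + b2)))   # output vector in (0, 1)

rng = np.random.default_rng(0)
w1, b1 = rng.standard_normal((3, 2)), rng.standard_normal(3)  # 3 hidden nodes
w2, b2 = rng.standard_normal((2, 3)), rng.standard_normal(2)
x = unit_forward(np.array([0.4, -1.2]), w1, b1, w2, b2)
```

With a sigmoid output activation, both components of x lie strictly between 0 and 1, which matches the unit's role of producing soft decisions for the two output bits.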
With reference to the first aspect, in an implementation, before the soft information is decoded by using the decoding model, the method further includes:
obtaining the decoding model.
It should be understood that the decoding model may be trained by the decoding apparatus or by another apparatus; the embodiments of this application are not limited thereto.
When the decoding model is trained by another apparatus, the decoding apparatus obtaining the decoding model includes the decoding apparatus obtaining the decoding model from another device.
In this case, the decoding model may be trained by the above other apparatus or by yet another apparatus; the embodiments of this application are not limited thereto.
In the embodiments of this application, because another apparatus has already trained and obtained the decoding model, the decoding apparatus does not need to train the model; it can obtain the decoding model from that other apparatus and use it directly, avoiding the cost of retraining.
Optionally, when the decoding model is trained by the decoding apparatus itself, the decoding apparatus obtaining the decoding model includes the decoding apparatus obtaining the decoding model through training.
In this case, after the decoding apparatus obtains the decoding model through training, it may send the decoding model to other apparatuses for use, so that those apparatuses can use the decoding model directly without training, avoiding the cost of retraining.
It should be understood that, in practical applications, after the decoding apparatus has trained the decoding model, it can use the model directly in subsequent decoding without training again. In other words, the decoding model may be trained in advance, so that at decoding time the model is used directly rather than retrained. Optionally, the decoding apparatus may instead train and obtain the decoding model only when a decoding need arises and then perform decoding; the embodiments of this application are not limited thereto.
With reference to the first aspect, in an implementation, the decoding model is obtained through two training processes.
With reference to the first aspect, in an implementation, obtaining the decoding model includes:
constructing an initial neural network decoding unit and setting initial neural network decoding unit parameters, where the initial neural network decoding unit parameters indicate the mapping between the input information and the output information of the initial neural network decoding unit and include an initial weight matrix and an initial offset vector;
training the initial neural network decoding unit with a preset first sample set, updating the initial neural network decoding unit parameters to intermediate neural network decoding unit parameters, and obtaining an intermediate neural network decoding unit, where the intermediate neural network decoding unit includes the intermediate neural network decoding unit parameters, the intermediate neural network decoding unit parameters indicate the mapping between the input information and the output information of the intermediate neural network decoding unit and include an intermediate weight matrix and an intermediate offset vector, the first sample set includes at least one first sample, one first sample includes a first column vector of length 2 and a second column vector of length 2, and the second column vector is the expected decoded vector of the first column vector;
combining multiple intermediate neural network decoding units to obtain a first initial decoding model; and
training the first initial decoding model with a preset second sample set, updating the intermediate neural network decoding unit parameters in the intermediate neural network decoding units to the neural network decoding unit parameters, and obtaining the decoding model, where the second sample set includes a third column vector of length N and a fourth column vector of length N, and the fourth column vector is the expected decoded vector of the third column vector.
With reference to the first aspect, in an implementation, combining multiple intermediate neural network decoding units to obtain the first initial decoding model includes:
obtaining a decoding network graph, where the decoding network graph includes at least one decoding butterfly diagram, and the decoding butterfly diagram indicates the check relationship between the input information of the decoding butterfly diagram and the output information of the decoding butterfly diagram; and
replacing the decoding butterfly diagrams in the decoding network graph with the intermediate neural network decoding units to obtain the first initial decoding model.
With reference to the first aspect, in an implementation, the decoding model is obtained through one training process.
With reference to the first aspect, in an implementation, obtaining the decoding model includes:
constructing an initial neural network decoding unit and setting initial neural network decoding unit parameters, where the initial neural network decoding unit parameters indicate the mapping between the input information and the output information of the initial neural network decoding unit and include an initial weight matrix and an initial offset vector;
combining multiple initial neural network decoding units to obtain a second initial decoding model; and
training the second initial decoding model with a preset third sample set, updating the initial neural network decoding unit parameters in the initial neural network decoding units to neural network decoding unit parameters, and obtaining the decoding model, where the third sample set includes a fifth column vector of length N and a sixth column vector of length N, and the sixth column vector is the expected decoded vector of the fifth column vector.
With reference to the first aspect, in an implementation, combining multiple initial neural network decoding units to obtain the second initial decoding model includes:
obtaining a decoding network graph, where the decoding network graph includes at least one decoding butterfly diagram, and the decoding butterfly diagram indicates the check relationship between the input information of the decoding butterfly diagram and the output information of the decoding butterfly diagram; and
replacing the decoding butterfly diagrams in the decoding network graph with initial neural network decoding units to obtain the second initial decoding model.
In the embodiments of this application, neural network decoding units are assembled into a decoding model; that is, small neural network decoding units are connected to obtain the decoding model, so that in the decoding learning process a small set of training samples can generalize to the entire codeword space, weakening the impact that long-codeword information has on the complexity and learning difficulty of the neural network. The decoding model of the embodiments of this application can meet the requirements of high-rate transmission and low decoding latency and has good decoding performance.
According to a second aspect, a decoding apparatus is provided, including modules or units for performing the method in the first aspect or any possible implementation of the first aspect.
According to a third aspect, a decoding apparatus is provided, including a transceiver, a processor, and a memory. The processor is configured to control the transceiver to send and receive signals, the memory is configured to store a computer program, and the processor is configured to invoke and run the computer program from the memory, so that the decoding apparatus performs the method in the first aspect and its possible implementations.
According to a fourth aspect, a computer-readable medium is provided, storing a computer program that, when executed by a computer, implements the method in the first aspect and its possible implementations.
According to a fifth aspect, a computer program product is provided that, when executed by a computer, implements the method in the first aspect and its possible implementations.
According to a sixth aspect, a processing apparatus is provided, including a processor and an interface.
According to a seventh aspect, a processing apparatus is provided, including a processor, an interface, and a memory.
In the sixth or seventh aspect, the processor is configured to act as the executing entity of the method in the first aspect or any possible implementation of the first aspect, where the related data exchange processes (for example, receiving information sent by a transmitting end, such as bits to be decoded) are completed through the interface. In a specific implementation, the interface may further complete the data exchange process through a transceiver.
It should be understood that the processing apparatus in the sixth or seventh aspect may be a chip. The processor may be implemented by hardware or by software: when implemented by hardware, the processor may be a logic circuit, an integrated circuit, or the like; when implemented by software, the processor may be a general-purpose processor implemented by reading software code stored in a memory, and the memory may be integrated in the processor, or may be located outside the processor and exist independently.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of a scenario to which an embodiment of this application is applicable.
FIG. 2 is a schematic diagram of a wireless communication procedure according to an embodiment of this application.
FIG. 3 is a flowchart of a decoding method according to an embodiment of this application.
FIG. 4 is a schematic diagram of a neural network decoding unit according to an embodiment of this application.
FIG. 5 is a schematic diagram of a decoding model according to an embodiment of this application.
FIG. 6 is a schematic diagram of a method for training a decoding model twice according to an embodiment of this application.
FIG. 7 is a schematic diagram of a decoding network according to an embodiment of this application.
FIG. 8 is a schematic diagram of a butterfly operation according to an embodiment of this application.
FIG. 9 is a schematic diagram of a method for generating a first initial decoding model according to an embodiment of this application.
FIG. 10 is a schematic diagram of a method for training a decoding model once according to an embodiment of this application.
FIG. 11 is a schematic diagram of a method for generating a second initial decoding model according to an embodiment of this application.
FIG. 12 is a comparison chart of simulated decoding performance of decoding models according to an embodiment of this application.
FIG. 13 is a comparison chart of decoding performance between the decoding model of this application and existing models.
FIG. 14 is a schematic structural diagram of a decoding apparatus according to an embodiment of this application.
FIG. 15 is a schematic structural diagram of a decoding apparatus according to another embodiment of this application.
Detailed Description
The technical solutions in this application are described below with reference to the accompanying drawings.
The technical solutions of the embodiments of this application can be applied to various communication systems, such as: the global system for mobile communications (GSM), code division multiple access (CDMA) systems, wideband code division multiple access (WCDMA) systems, general packet radio service (GPRS), long term evolution (LTE) systems, LTE frequency division duplex (FDD) systems, LTE time division duplex (TDD), the universal mobile telecommunication system (UMTS), worldwide interoperability for microwave access (WiMAX) communication systems, future fifth-generation (5th generation, 5G) systems, or new radio (NR).
FIG. 1 shows a schematic diagram of a communication system 100 to which the sending and receiving methods and apparatuses of the embodiments of this application are applicable. As shown in the figure, the communication system 100 may include at least one network device, such as the network device 110 shown in FIG. 1, and at least one terminal device, such as the terminal device 120 shown in FIG. 1. The network device 110 and the terminal device 120 may communicate over a wireless link.
Each communication device, such as the network device 110 or the terminal device 120 in FIG. 1, may be configured with multiple antennas, including at least one transmit antenna for sending signals and at least one receive antenna for receiving signals. In addition, each communication device additionally includes a transmitter chain and a receiver chain, each of which, as those of ordinary skill in the art will understand, may include multiple components related to signal transmission and reception (for example, a processor, a modulator, a multiplexer, a demodulator, a demultiplexer, or an antenna). Therefore, the network device and the terminal device can communicate by means of multi-antenna technology.
It should be understood that the network device in the wireless communication system may be any device with wireless transceiving functions, including but not limited to: a base transceiver station (BTS) in the global system for mobile communications (GSM) or code division multiple access (CDMA), a NodeB (NB) in a wideband code division multiple access (WCDMA) system, an evolved NodeB (eNB or eNodeB) in an LTE system, a radio controller in a cloud radio access network (CRAN) scenario, a relay station, an access point, a vehicle-mounted device, a wearable device, a network device in a future 5G network or in a future evolved PLMN network, for example a transmission and reception point (TRP) or transmission point (TP) in an NR system, a base station (gNB) in an NR system, or one antenna panel or a group of antenna panels (including multiple antenna panels) of a base station in a 5G system. The embodiments of this application are not specifically limited in this respect.
In some deployments, a gNB may include a centralized unit (CU) and a DU, and may further include a radio unit (RU). The CU implements some functions of the gNB and the DU implements others; for example, the CU implements the functions of the radio resource control (RRC) and packet data convergence protocol (PDCP) layers, and the DU implements the functions of the radio link control (RLC), media access control (MAC), and physical (PHY) layers. Because RRC-layer information ultimately becomes, or is converted from, PHY-layer information, under this architecture higher-layer signaling such as RRC-layer signaling can also be considered to be sent by the DU, or by the DU plus the CU (for example, the CU determines the higher-layer information and then delivers it to the DU, and the DU sends it). It can be understood that the network device may be a CU node, a DU node, or a device including a CU node and a DU node. In addition, the CU may be classified as a network device in the radio access network (RAN) or as a network device in the core network (CN); this application does not limit this.
It should also be understood that the terminal device in the wireless communication system may also be called user equipment (UE), an access terminal, a subscriber unit, a subscriber station, a mobile station, a mobile terminal, a remote station, a remote terminal, a mobile device, a user terminal, a terminal, a wireless communication device, a user agent, or a user apparatus. The terminal device in the embodiments of this application may be a mobile phone, a tablet (pad), a computer with wireless transceiving functions, a virtual reality (VR) terminal device, an augmented reality (AR) terminal device, a wireless terminal in industrial control, a wireless terminal in self driving, a wireless terminal in remote medical, a wireless terminal in a smart grid, a wireless terminal in transportation safety, a wireless terminal in a smart city, a wireless terminal in a smart home, a terminal device in a future 5G network, or a terminal device in a future evolved public land mobile network (PLMN); the embodiments of this application are not limited thereto.
In the embodiments of this application, a terminal device or network device includes a hardware layer, an operating system layer running on the hardware layer, and an application layer running on the operating system layer. The hardware layer includes hardware such as a central processing unit (CPU), a memory management unit (MMU), and memory (also called main memory). The operating system may be any one or more computer operating systems that implement service processing through processes, such as the Linux, Unix, Android, iOS, or Windows operating system. The application layer includes applications such as a browser, an address book, word-processing software, and instant messaging software. Moreover, the embodiments of this application do not specifically limit the structure of the entity executing the methods provided herein, as long as it can communicate according to the methods of the embodiments of this application by running a program recording their code; for example, the executing entity of the methods provided in the embodiments of this application may be a terminal device or a network device, or a functional module in a terminal device or network device capable of invoking and executing the program.
In addition, aspects or features of this application may be implemented as a method, an apparatus, or an article of manufacture using standard programming and/or engineering techniques. The term "article of manufacture" used in this application covers a computer program accessible from any computer-readable device, carrier, or medium. For example, computer-readable media may include, but are not limited to: magnetic storage devices (for example, hard disks, floppy disks, or magnetic tapes), optical disks (for example, compact discs (CDs) and digital versatile discs (DVDs)), smart cards, and flash memory devices (for example, erasable programmable read-only memory (EPROM), cards, sticks, or key drives). In addition, the various storage media described herein may represent one or more devices and/or other machine-readable media for storing information. The term "machine-readable media" may include, without being limited to, wireless channels and various other media capable of storing, containing, and/or carrying instructions and/or data.
Elements referred to in the singular in this application are intended to mean "one or more" rather than "one and only one", unless otherwise specified. "Some" may refer to one or more.
It should be understood that in the embodiments shown below, "first", "second", "third", "fourth", and other numerical labels are merely distinctions made for convenience of description and are not intended to limit the scope of the embodiments of this application.
The technical solutions of this application can be applied in a wireless communication system, for example the communication system 100 shown in FIG. 1, in which two communication apparatuses may have a wireless communication connection. One of the communication apparatuses may, for example, correspond to the network device 110 in FIG. 1, such as the network device 110 or a chip configured in the network device 110, and the other may, for example, correspond to the terminal device 120 in FIG. 1, such as the terminal device 120 or a chip configured in the terminal device 120.
In the above communication system, when the terminal device communicates with the network device, the terminal device and the network device serve as transmitting end and receiving end for each other: when the terminal device sends a signal to the network device, the terminal device is the transmitting end and the network device is the receiving end; conversely, when the network device sends a signal to the terminal device, the network device is the transmitting end and the terminal device is the receiving end. Specifically, in the wireless communication process, the basic procedure is shown in FIG. 2, in which:
At the transmitting end, the source passes through source encoding, channel encoding, and modulation mapping in sequence before being sent. At the receiving end, the sink is output after demapping and demodulation, channel decoding, and source decoding in sequence.
It should be noted that when the terminal device acts as the transmitting end, the encoding procedure in FIG. 2 (source encoding, channel encoding, modulation mapping, etc.) is performed by the terminal device, and when the terminal device acts as the receiving end, the decoding procedure in FIG. 2 (demapping and demodulation, channel decoding, source decoding, etc.) is performed by the terminal device. The same applies to the network device.
Current channel encoding/decoding schemes include, but are not limited to, Hamming codes and Polar codes.
In the prior art, the learning process for encoding/decoding mainly learns from samples of the entire codeword space. However, for encoding/decoding schemes with long code lengths, such as Polar codes, when for example the information bit length is K = 32, there are 2^32 codewords. Owing to the sharp increase in difficulty and complexity, the prior art cannot complete the learning of encoding/decoding.
In summary, the embodiments of this application propose an encoding/decoding method in which sampling a small portion of the codeword space generalizes to the entire codeword space. The method constructs a neural network encoding/decoding model from neural network units generated for encoding/decoding, and then encodes and/or decodes the information to be encoded/decoded according to the neural network encoding/decoding model. The encoding/decoding model of the embodiments of this application can meet the requirements of high-rate transmission and low decoding latency and has good performance.
By way of example and not limitation, the encoding/decoding method of the embodiments of this application is described in detail below with reference to the drawings. It should be understood that encoding in the embodiments of this application can be performed with a method similar to the decoding method: the encoding model used in the encoding procedure is similar to the decoding model used in the decoding procedure. To avoid repetition, only decoding is described below; the encoding procedure can correspond to the decoding procedure described below. Optionally, an existing method may also be used for encoding in this application; the embodiments of this application are not limited thereto.
FIG. 3 is a schematic flowchart of the decoding method in an embodiment of this application. The method shown in FIG. 3 is applicable to the system of FIG. 1 and is performed by a decoding apparatus (which may also be called a receiving end). Specifically, in uplink transmission the decoding apparatus may be a network device, and in downlink transmission the decoding apparatus may be a terminal device; the embodiments of this application are not limited thereto.
Specifically, the method shown in FIG. 3 includes:
310: Obtain soft information of N bits to be decoded, where N is an integer greater than or equal to 2.
It should be understood that in the embodiments of this application the soft information of a bit to be decoded may be the log likelihood ratio (LLR) of the bit to be decoded; each of the N bits to be decoded has one LLR, and the N bits to be decoded correspond to N LLRs.
It should be understood that in the embodiments of this application N may be regarded as the Polar mother code length; the embodiments of this application are not limited thereto.
It should be understood that in the embodiments of this application the soft information of the bits to be decoded may also be called the information to be decoded, which in turn may be called the codeword to be decoded, the code block to be decoded, a codeword, or a code block. The decoding apparatus may decode the information to be decoded as a whole, or may divide the information to be decoded into multiple sub-code-blocks and decode them in parallel; the embodiments of this application are not limited thereto.
320: Decode the soft information by using a decoding model to obtain a decoding result, where the decoding model is formed by multiple neural network decoding units, each neural network decoding unit supports an XOR operation on soft information, and the decoding model is obtained through at least one training process.
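By way of illustration only, the per-bit soft information (LLR) obtained in step 310 can be computed, assuming BPSK transmission over an AWGN channel with the mapping 0 → +1, 1 → −1 (a channel assumption made for this sketch and not fixed by this application), as follows:

```python
import numpy as np

def bpsk_llr(received, noise_var):
    """Per-bit LLR, log(P(bit=0|y) / P(bit=1|y)), for BPSK (0 -> +1, 1 -> -1)
    over an AWGN channel with noise variance sigma^2: LLR = 2*y / sigma^2."""
    return 2.0 * np.asarray(received, dtype=float) / noise_var

# two received values; a positive LLR favors bit 0, a negative LLR favors bit 1
llrs = bpsk_llr([0.9, -1.1], 0.5)
```

The sign of each LLR gives the hard decision and its magnitude the reliability; the N such values form the input of the decoding model in step 320.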
In the embodiments of this application, neural network decoding units are assembled into a decoding model; that is, small neural network decoding units are connected to obtain the decoding model, so that in the decoding learning process a small set of training samples can generalize to the entire codeword space, weakening the impact that long-codeword information has on the complexity and learning difficulty of the neural network. The decoding model of the embodiments of this application can meet the requirements of high-rate transmission and low decoding latency and has good decoding performance.
Optionally, as an embodiment, each neural network decoding unit has 2 inputs and 2 outputs and has at least one hidden layer.
Optionally, each hidden layer may contain Q nodes, where Q is an integer greater than or equal to 2.
It should be understood that in the embodiments of this application the hidden layer may also be called a concealed layer; the embodiments of this application are not limited thereto.
Optionally, as another embodiment, the neural network decoding unit includes neural network decoding unit parameters, which indicate the mapping between the input information and the output information of the neural network decoding unit and include a weight matrix and an offset vector.
For example, as shown in FIG. 4, the neural network decoding unit has 2 inputs and 2 outputs and has one hidden layer, the one hidden layer containing 3 nodes.
Specifically, as shown in FIG. 4, the neural network decoding unit includes an input layer, an output layer, and one hidden layer, where the information input at the input layer is the input vector and the information output at the output layer is the output vector.
Further, as another embodiment, as shown in FIG. 4, the input vector of a neural network decoding unit and the output vector of that neural network decoding unit satisfy the following mapping:
h = g1(w1·y + b1)
x = g2(w2·h + b2)
where y = (y1, y2)^T denotes the input vector, x = (x1, x2)^T denotes the output vector, w1 and w2 denote the weight matrices, b1 and b2 denote the offset vectors, h denotes the hidden-layer unit vector, and g1 and g2 denote the activation functions; w1 and w2 are real matrices, and b1, b2, h, y, and x are real vectors.
Further, as another embodiment, when the output vector x takes any of the values
(0, 0)^T, (0, 1)^T, (1, 0)^T, (1, 1)^T,
the input vector y and the output vector x satisfy the following mapping:
x1 = y1 ⊕ y2
x2 = y2
where ⊕ denotes the XOR operation.
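By way of illustration, an XOR operation carried over to soft information (LLRs) is conventionally approximated by the min-sum rule; the following sketch shows that conventional approximation, not the trained neural network unit itself:

```python
import math

def soft_xor(l1, l2):
    """Min-sum approximation of the LLR of (b1 XOR b2) given LLRs l1 and l2:
    sign(l1) * sign(l2) * min(|l1|, |l2|)."""
    return math.copysign(1.0, l1) * math.copysign(1.0, l2) * min(abs(l1), abs(l2))
```

The sign of the result is the XOR of the hard decisions, and the magnitude is limited by the less reliable of the two inputs, which is the behavior a neural network decoding unit supporting XOR on soft information must reproduce.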
Optionally, as an embodiment, the multiple neural network decoding units in the decoding model form a log2 N-layer structure, where the output of a neural network decoding unit in one layer serves as the input of the next layer.
For example, FIG. 5 shows a decoding model for N = 16 with log2 16 = 4 layers, where the input information of each layer is y, the output information is x, and the output information x of the previous layer serves as the input information y of the current layer.
It should be understood that the example of FIG. 5 is merely illustrative; the connection relationships between the layers in FIG. 5 can be arbitrarily transformed or varied, and the embodiments of this application are not limited thereto.
The input information of the decoding model shown in FIG. 5 is the soft information of 16 bits to be decoded, and the output information is 16 decoded bits.
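By way of illustration, the layered structure above (output of one layer feeding the next) can be sketched as follows. The pairing of values into consecutive pairs within each layer is a simplification assumed for this sketch (the actual inter-layer connections, as noted, can vary), and each `unit` callable stands in for a trained 2-in/2-out decoding unit:

```python
import numpy as np

def apply_layers(llrs, layer_units):
    """Pass N soft values through log2(N) layers of 2-in/2-out units.

    layer_units[k] is the callable applied throughout layer k, mapping a
    length-2 array to a length-2 array; the output of one layer becomes the
    input of the next.
    """
    x = np.asarray(llrs, dtype=float)
    for unit in layer_units:
        nxt = np.empty_like(x)
        for i in range(0, len(x), 2):          # pair up consecutive values
            nxt[i:i + 2] = unit(x[i:i + 2])
        x = nxt
    return x

# placeholder unit that swaps its two inputs; two layers for N = 4
out = apply_layers([1.0, 2.0, 3.0, 4.0], [lambda v: v[::-1]] * 2)
```

Applying the swap unit twice restores the original order, which exercises the layer-to-layer chaining without depending on any trained parameters.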
Optionally, as another embodiment, before step 320 the method may further include: the decoding apparatus obtaining the decoding model.
It should be understood that the decoding model may be trained by the decoding apparatus performing the method shown in FIG. 3, or by another apparatus; the embodiments of this application are not limited thereto.
When the decoding model is trained by another apparatus, the decoding apparatus obtaining the decoding model includes the decoding apparatus obtaining the decoding model from another device.
In this case, the decoding model may be trained by the above other apparatus or by yet another apparatus; the embodiments of this application are not limited thereto.
In the embodiments of this application, because another apparatus has already trained and obtained the decoding model, the decoding apparatus does not need to train the model; it can obtain the decoding model from that other apparatus and use it directly, avoiding the cost of retraining.
Optionally, when the decoding model is trained by the decoding apparatus itself, the decoding apparatus obtaining the decoding model includes the decoding apparatus obtaining the decoding model through training.
In this case, after the decoding apparatus obtains the decoding model through training, it may send the decoding model to other apparatuses for use, so that those apparatuses can use the decoding model directly without training, avoiding the cost of retraining.
It should be understood that, in practical applications, after the decoding apparatus has trained the decoding model, it can use the model directly in subsequent decoding without training again. In other words, the decoding model may be trained in advance, so that at decoding time the model is used directly rather than retrained. Optionally, the decoding apparatus may instead train and obtain the decoding model only when a decoding need arises and then perform decoding; the embodiments of this application are not limited thereto.
The specific schemes for training to obtain the decoding model in the embodiments of this application are described below.
It should be understood that the following training schemes may be the schemes used to train the decoding model in advance, or the schemes used to train the decoding model when decoding is currently needed.
It should be understood that the decoding model in the embodiments of this application may be obtained through at least one training process.
For example, in one implementation, the decoding model is obtained through two training processes.
The specific scheme for obtaining the decoding model through two training processes in the embodiments of this application is first described in detail below.
Specifically, as shown in FIG. 6, the method for obtaining the decoding model through two training processes in an embodiment of this application includes:
610: Construct an initial neural network decoding unit and set initial neural network decoding unit parameters, where the initial neural network decoding unit parameters indicate the mapping between the input information and the output information of the initial neural network decoding unit and include an initial weight matrix and an initial offset vector.
Optionally, the initial neural network decoding unit includes at least one hidden layer, each hidden layer containing Q nodes with Q greater than or equal to 2; for example, the initial neural network decoding unit includes one hidden layer, the one hidden layer having 3 nodes.
For example, the initial neural network decoding unit includes an input layer, an output layer, and at least one hidden layer. In the embodiments of this application, the initial neural network decoding unit further includes initial neural network decoding unit parameters, which may include an initial weight matrix w and an initial offset vector b. It should be noted that the initial neural network decoding unit parameters are generally randomly generated; optionally, they may also be preset values, and the embodiments of this application are not limited thereto.
It should be noted that in the embodiments of this application the number of hidden layers may be one or more; the more hidden layers, the greater the complexity of the neural network but the stronger its generalization ability. Therefore, when setting the number of hidden layers of the initial neural network decoding unit and of the other neural networks in the embodiments of this application, a user may do so based on actual requirements, considering factors such as the processing and computing capability of the apparatus; this application does not limit this.
In this embodiment, taking Polar codes as an example, the constructed initial neural network decoding unit is shown in FIG. 4.
In the embodiments of this application, the number of nodes in a hidden layer of the initial neural network decoding unit is greater than the code length of the input information and the output information; that is, when the code length of the input information and output information is 2, the number of nodes in the hidden layer is an integer greater than 2. FIG. 4 illustrates in detail only the case where the initial neural network decoding unit has one hidden layer and the hidden layer has 3 nodes, but the embodiments of this application are not limited thereto.
The decoding apparatus then trains the initial neural network decoding unit (the first training process) to obtain a neural network decoding unit; for the specific training process, refer to step 620.
620: Train the initial neural network decoding unit with a preset first sample set, update the initial neural network decoding unit parameters to intermediate neural network decoding unit parameters, and obtain an intermediate neural network decoding unit, where the intermediate neural network decoding unit includes the intermediate neural network decoding unit parameters, the intermediate neural network decoding unit parameters indicate the mapping between the input information and the output information of the intermediate neural network decoding unit and include an intermediate weight matrix and an intermediate offset vector, the first sample set includes at least one first sample, one first sample includes a first column vector of length 2 and a second column vector of length 2, and the second column vector is the expected decoded vector of the first column vector.
Specifically, based on the initial neural network decoding unit parameters, the initial neural network decoding unit is trained until the error between the output information of the initial neural network decoding unit and the expected check result (the second column vector) of the input information (the first column vector) is smaller than a first preset threshold. It should be understood that when the initial neural network decoding unit is trained, its initial neural network decoding unit parameters are updated, yielding the intermediate neural network decoding unit parameters.
Optionally, in one embodiment, the error between the output information and the expected check result of the input information may be the difference between the output information and the expected check result.
Optionally, in another embodiment, the error between the output information and the expected check result of the input information may be the mean square error between the output information and the expected check result.
An operator may set the manner of computing the error between the output information and the expected check result according to actual requirements; this application does not limit this.
It should be understood that the threshold corresponding to the error between the output information and the expected check result may also be set according to the manner in which the error is computed; this application does not limit this.
In the embodiments of this application, the trained initial neural network decoding unit is the intermediate neural network decoding unit of the embodiments of this application; after the initial neural network decoding unit is trained, the initial neural network decoding unit parameters it contains are updated to the intermediate neural network decoding unit parameters.
In the embodiments of this application, the result the intermediate neural network decoding unit can achieve is: based on the intermediate neural network decoding unit parameters it contains, after processing the input training information (for example, the first column vector), its output information equals or approaches the expected check result of the first column vector (the second column vector).
For example, the trained intermediate neural network decoding unit parameters are shown in Table 1 below.
Table 1
Figure PCTCN2020071341-appb-000010
Specifically, based on the input information, the expected check result of the input information, and the initial neural network decoding unit parameters, the decoding apparatus may train the initial neural network decoding unit as follows:
1) Obtain the loss function.
Specifically, for neurons in two adjacent layers of the initial neural network decoding unit (that is, the nodes in the input layer, output layer, or hidden layer), the input r of a neuron in the next layer is the weighted sum of the outputs c of the connected neurons in the previous layer, based on the initial neural network decoding unit parameters (the initial weight value w set on each connection between the two layers and the initial offset vector b set on each node), passed through the activation function. The input r of each neuron is given by the following formula:
r = f(w·c + b)
Then the output x of the initial neural network decoding unit can be expressed recursively as:
x = f_n(w_n·f_{n-1} + b_n)
Referring to FIG. 4, the input information of the initial neural network decoding unit is processed based on the formulas r = f(w·c + b) and x = f_n(w_n·f_{n-1} + b_n), and the output information is obtained (to distinguish it from other training results, hereinafter called training result 1).
The decoding apparatus then obtains the error value between training result 1 and the expected check result. The error value is computed as described above; that is, it may be the difference, or the mean square value, between training result 1 and the expected check result. For the specific details of obtaining the loss function, refer to prior-art embodiments; this application does not elaborate.
2) Back-propagate the error.
Specifically, the decoding apparatus may back-propagate the error, compute the residual of the output layer, then compute layer by layer the weighted sum of the residuals of the nodes in each layer, and then, based on the learning rate and the residual value of each node of the input layer, update the first-layer weights (that is, the weights between the input layer and the hidden layer), repeating this method to update the corresponding weights layer by layer. Subsequently, using the updated weights, the input information is trained again to obtain a training result, and the above steps are repeated; that is, the initial neural network decoding unit parameters are updated repeatedly until the error between the training result n output by the initial neural network decoding unit and the expected check result is smaller than a target value (for example, the target value may be 0.0001), at which point the training result can be confirmed to have converged.
The above training method is the gradient descent method; the decoding apparatus can iteratively optimize the initial weight value w and the initial offset vector b through gradient descent so that the loss function reaches a minimum. For the specific details of the gradient descent method, refer to prior-art embodiments; this application does not elaborate.
It should be noted that the decoding apparatus may also train the initial neural network decoding unit of the embodiments of this application with other training methods, all aimed at making the output value of the initial neural network decoding unit approach the optimization target and updating the initial neural network decoding unit parameters it contains.
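By way of illustration, the gradient-descent training just described can be sketched as follows. The single hidden layer of 3 sigmoid nodes, the learning rate, the random initialization, and the use of a mean-squared-error loss are assumptions made for this sketch, not values fixed by this application:

```python
import numpy as np

def train_unit(samples, lr=0.5, target=1e-4, max_epochs=20000):
    """Gradient-descent training of a 2-in/2-out unit with one hidden layer
    of 3 sigmoid nodes, toward the expected check results in `samples`.

    `samples` pairs a length-2 input vector with its length-2 expected output;
    the error is back-propagated layer by layer to update weights and offsets.
    """
    rng = np.random.default_rng(1)
    w1, b1 = rng.standard_normal((3, 2)), np.zeros(3)
    w2, b2 = rng.standard_normal((2, 3)), np.zeros(2)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    loss = float("inf")
    for _ in range(max_epochs):
        loss = 0.0
        for y, x_exp in samples:
            h = sig(w1 @ y + b1)                 # hidden layer
            x = sig(w2 @ h + b2)                 # output layer
            err = x - x_exp
            loss += float(err @ err)
            d2 = err * x * (1.0 - x)             # output-layer residual
            d1 = (w2.T @ d2) * h * (1.0 - h)     # back-propagated residual
            w2 -= lr * np.outer(d2, h); b2 -= lr * d2
            w1 -= lr * np.outer(d1, y); b1 -= lr * d1
        loss /= len(samples)
        if loss < target:                        # training result converged
            break
    return w1, b1, w2, b2, loss

# expected outputs follow the unit's check relation x1 = y1 XOR y2, x2 = y2
xor_samples = [(np.array([a, b], float), np.array([a ^ b, b], float))
               for a in (0, 1) for b in (0, 1)]
*params, final_loss = train_unit(xor_samples)
```

The per-sample updates are plain stochastic gradient descent; any optimizer that drives the loss toward the target would serve the same role.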
630: Combine multiple intermediate neural network decoding units to obtain a first initial decoding model.
Specifically, in the embodiments of this application, according to the decoding network graph (the Polar code decoding structure) (for example, the butterfly graph shown in FIG. 7), all the butterfly operations in it (as in FIG. 8) can be replaced with intermediate neural network decoding units to obtain the first initial decoding model.
Specifically, FIG. 9 is a schematic flowchart of the steps for generating the first initial decoding model; the steps shown in FIG. 9 include:
910: Obtain a decoding network graph.
Obtain a decoding network graph, where the decoding network graph includes at least one decoding butterfly diagram, and the decoding butterfly diagram indicates the check relationship between the input information of the decoding butterfly diagram and the output information of the decoding butterfly diagram.
920: Replace the decoding butterfly diagrams in the decoding network graph with intermediate neural network decoding units to obtain the first initial decoding model.
640: Train the first initial decoding model with a preset second sample set, update the intermediate neural network decoding unit parameters in the intermediate neural network decoding units to the neural network decoding unit parameters, and obtain the decoding model, where the second sample set includes a third column vector of length N and a fourth column vector of length N, and the fourth column vector is the expected decoded vector of the third column vector.
Specifically, in the embodiments of this application, the decoding apparatus may train the first initial decoding model until the error between the output information of the first initial decoding model and the expected check result (the fourth column vector) of the input information (the third column vector) is smaller than a second preset threshold; and after the first initial decoding model is trained, the intermediate neural network decoding unit parameters in the intermediate neural network decoding units are updated to the neural network decoding unit parameters to obtain the decoding model.
In the embodiments of this application, the trained first initial decoding model is the above decoding model.
For the specific steps of training the first initial decoding model, refer to the training steps of the initial neural network decoding unit above; details are not repeated here.
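By way of illustration, replacing every butterfly operation in the decoding network graph requires enumerating the butterflies stage by stage. The index layout below (stage s pairing indices that differ by 2^s) is one common arrangement of a length-n polar graph, assumed here for illustration; each returned pair marks a position where a trained unit would be substituted:

```python
import math

def butterfly_stages(n):
    """Index pairs of the butterfly operations in each of the log2(n) stages
    of a length-n polar decoding graph: stage s pairs indices differing by 2**s."""
    stages = []
    for s in range(int(math.log2(n))):
        half = 1 << s
        pairs = [(start + k, start + k + half)
                 for start in range(0, n, 2 * half)   # blocks of size 2*half
                 for k in range(half)]                # one butterfly per k
        stages.append(pairs)
    return stages
```

For N = 16 this yields 4 stages of 8 butterflies each, matching the log2 N-layer structure of the decoding model.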
As another example, in another implementation, the decoding model is obtained through one training process.
The specific scheme for obtaining the decoding model through one training process in the embodiments of this application is described in detail below.
Specifically, as shown in FIG. 10, the method 1000 for obtaining the decoding model through one training process in an embodiment of this application includes:
1010: Construct an initial neural network decoding unit and set initial neural network decoding unit parameters, where the initial neural network decoding unit parameters indicate the mapping between the input information and the output information of the initial neural network decoding unit and include an initial weight matrix and an initial offset vector.
Step 1010 corresponds to step 610; to avoid repetition, details are not repeated here.
1020: Combine multiple initial neural network decoding units to obtain a second initial decoding model.
Specifically, in the embodiments of this application, according to the decoding network graph (the Polar code decoding structure) (for example, the butterfly graph shown in FIG. 7), all the butterfly operations in it (as in FIG. 8) can be replaced with initial neural network decoding units to obtain the second initial decoding model.
Specifically, FIG. 11 is a schematic flowchart of the steps for generating the second initial decoding model; the steps shown in FIG. 11 include:
1110: Obtain a decoding network graph.
Obtain a decoding network graph, where the decoding network graph includes at least one decoding butterfly diagram, and the decoding butterfly diagram indicates the check relationship between the input information of the decoding butterfly diagram and the output information of the decoding butterfly diagram.
1120: Replace the decoding butterfly diagrams in the decoding network graph with initial neural network decoding units to obtain the second initial decoding model.
1030: Train the second initial decoding model with a preset third sample set, update the initial neural network decoding unit parameters in the initial neural network decoding units to neural network decoding unit parameters, and obtain the decoding model, where the third sample set includes a fifth column vector of length N and a sixth column vector of length N, and the sixth column vector is the expected decoded vector of the fifth column vector.
Specifically, in the embodiments of this application, the decoding apparatus may train the second initial decoding model until the error between the output information of the second initial decoding model and the expected check result (the sixth column vector) of the input information (the fifth column vector) is smaller than a third preset threshold; and after the second initial decoding model is trained, the initial neural network decoding unit parameters in the initial neural network decoding units are updated to the neural network decoding unit parameters to obtain the decoding model.
In the embodiments of this application, the trained second initial decoding model is the above decoding model.
For the specific steps of training the second initial decoding model, refer to the training steps of the initial neural network decoding unit above; details are not repeated here.
In the embodiments of this application, neural network decoding units are assembled into a decoding model; that is, small neural network decoding units are connected to obtain the decoding model, so that in the decoding learning process a small set of training samples can generalize to the entire codeword space, weakening the impact that long-codeword information has on the complexity and learning difficulty of the neural network. The decoding model of the embodiments of this application can meet the requirements of high-rate transmission and low decoding latency and has good decoding performance.
FIG. 12 is a simulated performance comparison for code length N = 16, information bits K = 8, with manually set initialization weights; specifically, FIG. 12 compares the performance of the trained decoding model with that of the untrained decoding model. As shown in FIG. 12, the horizontal axis is the signal-to-noise ratio Eb/No; specifically, Eb/No can denote the receiver demodulation threshold, defined as the energy per bit divided by the noise power spectral density. Specifically, Eb denotes the signal energy per bit, Eb = S/R, where S is the signal energy and R is the service bit rate; No = N/W, where W is the bandwidth and N is the noise. The vertical axis is the bit error ratio (BER). As shown in FIG. 12, the trained decoding model retains the XOR function of each processing unit and has a certain learning ability. Specifically, the trained decoding model has better decoding performance than the untrained decoding model, and a decoding model with a high training-sample proportion p has better decoding performance at higher signal-to-noise ratios than a decoding model with a low training-sample proportion.
It should be understood that in the embodiments of this application p can denote the proportion of training samples drawn from the full codeword space; p may take the values 10%, 20%, 40%, 60%, 80%, or 100%, and the embodiments of this application are not limited thereto.
By way of example and not limitation, in practical applications the training signal-to-noise ratio Eb/N0 (dB) of the decoding model of the embodiments of this application may take the values 0, 1, 2, 3, 4, 5, or 6, and the test signal-to-noise ratio Eb/N0 (dB) may range from 0 to 14. The embodiments of this application are not limited thereto.
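By way of illustration, the simulation sweep just described needs the noise variance corresponding to each Eb/N0 point. The conversion below assumes unit-energy BPSK symbols (an assumption made for this sketch, not a constraint of this application), using Es/N0 = R · Eb/N0:

```python
def ebno_db_to_noise_var(ebno_db, rate):
    """Noise variance sigma^2 per received BPSK value for a given Eb/N0 in dB
    and code rate R, assuming unit-energy symbols: sigma^2 = 1 / (2*R*Eb/N0)."""
    ebno_linear = 10.0 ** (ebno_db / 10.0)
    return 1.0 / (2.0 * rate * ebno_linear)

# rate R = K/N = 8/16 as in the simulations of FIG. 12 and FIG. 13
sigma2 = ebno_db_to_noise_var(0.0, 8 / 16)
```

Higher Eb/N0 gives a smaller noise variance, so the per-bit LLR magnitudes fed into the decoding model grow accordingly.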
FIG. 13 compares the performance of the decoding model of the embodiments of this application with that of other neural network decoding models for code length N = 16, information bits K = 8, manually set initialization parameters, and p = 0.1. FIG. 13 shows that, with an extremely small training set, the performance of the neural network decoding model based on neural network decoding units (which may also be called polarization processing units) proposed in this application is better than that of other existing neural network decoding models.
It should be understood that the examples of FIG. 1 to FIG. 13 above are merely intended to help those skilled in the art understand the embodiments of this application, not to limit the embodiments of this application to the illustrated specific values or scenarios. Those skilled in the art can obviously make various equivalent modifications or variations from the given examples of FIG. 1 to FIG. 13, and such modifications or variations also fall within the scope of the embodiments of this application.
It should be understood that, in the various embodiments of this application, the magnitudes of the sequence numbers of the above processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of this application.
Above, the methods of the embodiments of this application have been described in detail with reference to FIG. 1 to FIG. 13; below, the decoding apparatuses of the embodiments of this application are described with reference to FIG. 14 and FIG. 15.
FIG. 14 is a schematic structural diagram of a decoding apparatus according to an embodiment of this application. The apparatus 1400 may include a decoding module 1410 and an obtaining module 1420.
The obtaining module is configured to obtain soft information of N bits to be decoded, where N is an integer greater than or equal to 2.
The decoding module is configured to decode the soft information by using a decoding model to obtain a decoding result, where the decoding model is formed by multiple neural network decoding units, each neural network decoding unit supports an XOR operation on soft information, and the decoding model is obtained through at least one training process.
In the embodiments of this application, neural network decoding units are assembled into a decoding model; that is, small neural network decoding units are connected to obtain the decoding model, so that in the decoding learning process a small set of training samples can generalize to the entire codeword space, weakening the impact that long-codeword information has on the complexity and learning difficulty of the neural network. The decoding model of the embodiments of this application can meet the requirements of high-rate transmission and low decoding latency and has good decoding performance.
It should be understood that the decoding apparatus 1400 has any of the functions performed by the decoding apparatus in the above method embodiments; detailed descriptions are appropriately omitted here.
Optionally, the multiple neural network decoding units in the decoding model form a log2 N-layer structure, where the output of a neural network decoding unit in one layer serves as the input of the next layer.
Optionally, each neural network decoding unit has 2 inputs and 2 outputs and has at least one hidden layer.
Optionally, the neural network decoding unit includes neural network decoding unit parameters, which indicate the mapping between the input information and the output information of the neural network decoding unit and include a weight matrix and an offset vector.
Optionally, the input vector of a neural network decoding unit and the output vector of that neural network decoding unit satisfy the following mapping:
h = g1(w1·y + b1)
x = g2(w2·h + b2)
where y = (y1, y2)^T denotes the input vector, x = (x1, x2)^T denotes the output vector, w1 and w2 denote the weight matrices, b1 and b2 denote the offset vectors, h denotes the hidden-layer unit vector, and g1 and g2 denote the activation functions; w1 and w2 are real matrices, and b1, b2, h, y, and x are real vectors.
Optionally, when the output vector x takes any of the values
(0, 0)^T, (0, 1)^T, (1, 0)^T, (1, 1)^T,
the input vector y and the output vector x satisfy the following mapping:
x1 = y1 ⊕ y2
x2 = y2
Optionally, before decoding the soft information by using the decoding model, the decoding module is further configured to:
obtain the decoding model.
Optionally, the decoding model is obtained through two training processes.
Optionally, the decoding module is specifically configured to:
construct an initial neural network decoding unit and set initial neural network decoding unit parameters, where the initial neural network decoding unit parameters indicate the mapping between the input information and the output information of the initial neural network decoding unit and include an initial weight matrix and an initial offset vector;
train the initial neural network decoding unit with a preset first sample set, update the initial neural network decoding unit parameters to intermediate neural network decoding unit parameters, and obtain an intermediate neural network decoding unit, where the intermediate neural network decoding unit includes the intermediate neural network decoding unit parameters, the intermediate neural network decoding unit parameters indicate the mapping between the input information and the output information of the intermediate neural network decoding unit and include an intermediate weight matrix and an intermediate offset vector, the first sample set includes at least one first sample, one first sample includes a first column vector of length 2 and a second column vector of length 2, and the second column vector is the expected decoded vector of the first column vector;
combine multiple intermediate neural network decoding units to obtain a first initial decoding model; and
train the first initial decoding model with a preset second sample set, update the intermediate neural network decoding unit parameters in the intermediate neural network decoding units to the neural network decoding unit parameters, and obtain the decoding model, where the second sample set includes a third column vector of length N and a fourth column vector of length N, and the fourth column vector is the expected decoded vector of the third column vector.
Optionally, the decoding module is specifically configured to:
obtain a decoding network graph, where the decoding network graph includes at least one decoding butterfly diagram, and the decoding butterfly diagram indicates the check relationship between the input information of the decoding butterfly diagram and the output information of the decoding butterfly diagram; and
replace the decoding butterfly diagrams in the decoding network graph with the intermediate neural network decoding units to obtain the first initial decoding model.
Optionally, the decoding model is obtained through one training process.
Optionally, the decoding module is specifically configured to:
construct an initial neural network decoding unit and set initial neural network decoding unit parameters, where the initial neural network decoding unit parameters indicate the mapping between the input information and the output information of the initial neural network decoding unit and include an initial weight matrix and an initial offset vector;
combine multiple initial neural network decoding units to obtain a second initial decoding model; and
train the second initial decoding model with a preset third sample set, update the initial neural network decoding unit parameters in the initial neural network decoding units to neural network decoding unit parameters, and obtain the decoding model, where the third sample set includes a fifth column vector of length N and a sixth column vector of length N, and the sixth column vector is the expected decoded vector of the fifth column vector.
Optionally, the decoding module is specifically configured to:
obtain a decoding network graph, where the decoding network graph includes at least one decoding butterfly diagram, and the decoding butterfly diagram indicates the check relationship between the input information of the decoding butterfly diagram and the output information of the decoding butterfly diagram; and
replace the decoding butterfly diagrams in the decoding network graph with initial neural network decoding units to obtain the second initial decoding model.
It should be understood that the term "module" in the embodiments of this application may refer to an application-specific integrated circuit (ASIC), an electronic circuit, a processor for executing one or more software or firmware programs (for example, a shared processor, a dedicated processor, or a group processor) and memory, a merged logic circuit, and/or other suitable components supporting the described functions. A "module" in the embodiments of this application may also be called a "unit" and may be implemented by hardware or by software; the embodiments of this application are not limited thereto.
In an optional example, those skilled in the art can understand that the decoding apparatus 1400 provided in this application corresponds to the procedures performed by the decoding apparatus in the above method embodiments; for the functions of the units/modules in the apparatus, refer to the descriptions above, which are not repeated here.
It should be understood that the decoding apparatus of FIG. 14 may be a network device or a terminal device, or may be a chip or integrated circuit installed in a network device or a decoding device.
Taking the decoding apparatus being a network device or a terminal device as an example, FIG. 15 is a schematic structural diagram of a decoding apparatus according to an embodiment of this application. As shown in FIG. 15, the decoding apparatus 1500 can be applied in the system shown in FIG. 1 and performs any of the functions of the decoding apparatus in the above method embodiments.
As shown in FIG. 15, the access point 1500 may include at least one processor 1510 and a transceiver 1520, the processor 1510 being connected to the transceiver 1520; optionally, the access point 1500 further includes at least one memory 1530 connected to the processor 1510; further optionally, the access point 1500 may further include a bus system 1540. The processor 1510, the memory 1530, and the transceiver 1520 may be connected through the bus system 1540; the memory 1530 may be used to store instructions, the processor 1510 may correspond to the processing module 1410 in FIG. 14, and the transceiver 1520 may correspond to the transceiver module 1420 in FIG. 14. Optionally, the processing module and the obtaining module in FIG. 14 may both also be implemented by the processor 1510 in FIG. 15; the embodiments of this application are not limited thereto. Specifically, the processor 1510 is configured to execute instructions to control the transceiver 1520 to send and receive information or signals, and the memory 1530 stores instructions.
It should be understood that the memory 1530 may be integrated with the processor 1510; for example, the memory 1530 may be integrated in the processor 1510, or the memory 1530 may be located outside the processor 1510 and exist independently; the embodiments of this application are not limited thereto.
It should be understood that, in the embodiments of the present invention, the processor may be a central processing unit ("CPU" for short), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory may include read-only memory and random access memory and provide instructions and data to the processor. A portion of the memory may also include non-volatile random access memory; for example, the memory may also store device-type information.
In addition to a data bus, the bus system may include a power bus, a control bus, a status signal bus, etc. For clarity of illustration, however, the various buses are all labeled as the bus system in the figure.
In the implementation process, the steps of the above method can be completed by integrated logic circuits of hardware in the processor or by instructions in the form of software. The steps of the method disclosed in the embodiments of the present invention may be directly embodied as being executed and completed by a hardware processor, or executed and completed by a combination of hardware and software modules in the processor. The software module may be located in a storage medium mature in the art, such as random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, or a register. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware. To avoid repetition, detailed descriptions are omitted here.
It should be understood that the decoding apparatus 1500 shown in FIG. 15 can implement the procedures involving the decoding apparatus in the above method embodiments. The operations and/or functions of the modules in the decoding apparatus 1500 are respectively intended to implement the corresponding procedures in the above method embodiments; for details, refer to the descriptions in the above method embodiments. To avoid repetition, detailed descriptions are appropriately omitted here.
An embodiment of this application further provides a processing apparatus, including a processor and an interface; the processor is configured to perform the decoding method in any of the above method embodiments.
It should be understood that the above processing apparatus may be a chip. For example, the processing apparatus may be a field-programmable gate array (FPGA), an application-specific integrated circuit (ASIC), a system on chip (SoC), a central processor unit (CPU), a network processor (NP), a digital signal processor (DSP), a micro controller unit (MCU), a programmable logic device (PLD), or another integrated chip.
In the implementation process, the steps of the above method can be completed by integrated logic circuits of hardware in the processor or by instructions in the form of software. The steps of the method disclosed in the embodiments of this application may be directly embodied as being executed and completed by a hardware processor, or executed and completed by a combination of hardware and software modules in the processor. The software module may be located in a storage medium mature in the art, such as random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, or a register. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware. To avoid repetition, detailed descriptions are omitted here.
It should be noted that the processor in the embodiments of the present invention may be an integrated circuit chip with signal processing capability. In the implementation process, the steps of the above method embodiments can be completed by integrated logic circuits of hardware in the processor or by instructions in the form of software. The above processor may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and can implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of the present invention. A general-purpose processor may be a microprocessor, or any conventional processor. The steps of the method disclosed in the embodiments of the present invention may be directly embodied as being executed and completed by a hardware decoding processor, or executed and completed by a combination of hardware and software modules in the decoding processor. The software module may be located in a storage medium mature in the art, such as random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, or a register. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware.
It can be understood that the memory in the embodiments of the present invention may be volatile memory or non-volatile memory, or may include both. The non-volatile memory may be read-only memory (ROM), programmable ROM (PROM), erasable PROM (EPROM), electrically EPROM (EEPROM), or flash memory. The volatile memory may be random access memory (RAM), used as an external cache. By way of illustration and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct rambus RAM (DR RAM). It should be noted that the memory of the systems and methods described herein is intended to include, without being limited to, these and any other suitable types of memory.
An embodiment of this application further provides a communication system, including the aforementioned encoding end and decoding end.
An embodiment of this application further provides a computer-readable medium on which a computer program is stored; when the computer program is executed by a computer, the method in any of the above method embodiments is implemented.
An embodiment of this application further provides a computer program product; when the computer program product is executed by a computer, the method in any of the above method embodiments is implemented.
In the above embodiments, implementation may be wholly or partly by software, hardware, firmware, or any combination thereof. When implemented by software, implementation may be wholly or partly in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of this application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another by wired (for example, coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless (for example, infrared, radio, or microwave) means. The computer-readable storage medium may be any available medium accessible by a computer, or a data storage device, such as a server or data center, that integrates one or more available media. The available medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a high-density digital video disc (DVD)), or a semiconductor medium (for example, a solid state disk (SSD)), etc.
It should be understood that the decoding method in the communication system is described above, but this application is not limited thereto; optionally, a similar solution can also be used for encoding. To avoid repetition, details are not repeated here.
The network devices and terminal devices in the above apparatus embodiments correspond exactly to the network devices or terminal devices in the method embodiments, with the corresponding modules or units performing the corresponding steps; for example, the sending module (transmitter) performs the sending steps in the method embodiments, the receiving module (receiver) performs the receiving steps in the method embodiments, and steps other than sending and receiving can be performed by the processing module (processor). For the functions of specific modules, refer to the corresponding method embodiments. The sending module and the receiving module can form a transceiver module, and the transmitter and receiver can form a transceiver to jointly implement the transceiving functions; there can be one or more processors.
In this application, "at least one" means one or more, and "multiple" means two or more. "And/or" describes the association relationship between associated objects and indicates that three relationships can exist; for example, A and/or B can mean: A alone exists, both A and B exist, or B alone exists, where A and B can be singular or plural. The character "/" generally indicates an "or" relationship between the associated objects before and after it. "At least one of the following items" or a similar expression refers to any combination of these items, including any combination of a single item or of multiple items. For example, at least one of a, b, or c can represent: a, b, c, a-b, a-c, b-c, or a-b-c, where a, b, and c can each be single or multiple.
It should be understood that "one embodiment" or "an embodiment" mentioned throughout the specification means that particular features, structures, or characteristics related to the embodiment are included in at least one embodiment of this application. Therefore, "in one embodiment" or "in an embodiment" appearing throughout the specification does not necessarily refer to the same embodiment. In addition, these particular features, structures, or characteristics can be combined in one or more embodiments in any suitable manner. It should be understood that, in the various embodiments of this application, the magnitudes of the sequence numbers of the above processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of this application.
The terms "component", "module", "system", and the like used in this specification denote computer-related entities: hardware, firmware, a combination of hardware and software, software, or software in execution. For example, a component may be, but is not limited to, a process running on a processor, a processor, an object, an executable file, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a computing device and the computing device can be components. One or more components may reside within a process and/or thread of execution, and a component may be located on one computer and/or distributed between two or more computers. Furthermore, these components can execute from various computer-readable media having various data structures stored thereon. A component may communicate by way of local and/or remote processes based on a signal having one or more data packets (for example, data from one component interacting with another component in a local system or a distributed system, and/or across a network such as the Internet that interacts with other systems by way of the signal).
It should also be understood that "first", "second", "third", "fourth", and the various numerical labels herein are merely distinctions made for convenience of description and are not intended to limit the scope of the embodiments of this application.
It should be understood that the term "and/or" herein is merely an association relationship describing associated objects and indicates that three relationships can exist; for example, A and/or B can mean the three cases: A alone exists, both A and B exist, and B alone exists.
Those of ordinary skill in the art may realize that the various illustrative logical blocks and steps described in connection with the embodiments disclosed herein can be implemented by electronic hardware or by a combination of computer software and electronic hardware. Whether these functions are performed by hardware or software depends on the particular application and the design constraints of the technical solution. Skilled artisans may use different methods for each particular application to implement the described functions, but such implementations should not be considered beyond the scope of this application.
Those skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the systems, apparatuses, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments; details are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; for example, the division of the units is only a logical function division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented. In addition, the mutual couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of this embodiment.
In addition, the functional units in the various embodiments of this application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
In the above embodiments, implementation may be wholly or partly by software, hardware, firmware, or any combination thereof. When implemented by software, implementation may be wholly or partly in the form of a computer program product. The computer program product includes one or more computer instructions (programs). When the computer program instructions (programs) are loaded and executed on a computer, the processes or functions described in the embodiments of this application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another by wired (for example, coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless (for example, infrared, radio, or microwave) means. The computer-readable storage medium may be any available medium accessible by a computer, or a data storage device, such as a server or data center, that integrates one or more available media. The available medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid state disk (SSD)), etc.
The above are merely specific implementations of this application, but the protection scope of this application is not limited thereto; any change or substitution readily conceivable by a person skilled in the art within the technical scope disclosed in this application shall be covered by the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.

Claims (30)

  1. 一种译码的方法,其特征在于,包括:
    获取N个待译码比特的软信息,N为大于或等于2的整数;
    通过译码模型对所述软信息进行译码,获取译码结果,其中,所述译码模型由多个神经网络译码单元构成,每个神经网络译码单元均支持软信息的异或运算,所述译码模型是通过至少一次训练过程得到的。
  2. 根据权利要求1所述的方法,其特征在于,
    在所述译码模型中所述多个神经网络译码单元形成log 2N层结构,其中,前一层神经网络译码单元的输出作为后一层的输入。
  3. 根据权利要求2所述的方法,其特征在于,
    所述每个神经网络译码单元为2输入2输出且具有至少一个隐层结构。
  4. 根据权利要求3所述的方法,其特征在于,
    所述神经网络译码单元包括神经网络译码单元参数,所述神经网络译码单元参数用于指示输入所述神经网络译码单元的输入信息与输出信息之间的映射关系,所述神经网络译码单元参数包括权重矩阵和偏移向量。
  5. The method according to claim 4, wherein
    an input vector input to a neural network decoding unit and the output vector output by that neural network decoding unit have the following mapping relationship:
    h = g₁(w₁y + b₁),  x = g₂(w₂h + b₂)
    where y = (y₁, y₂)^T denotes the input vector, x = (x₁, x₂)^T denotes the output vector, w₁ and w₂ denote the weight matrices, b₁ and b₂ denote the bias vectors, h denotes the hidden-layer unit vector, g₁ and g₂ denote activation functions, w₁ and w₂ are both real matrices, and b₁, b₂, h, y, and x are all real vectors.
  6. The method according to claim 5, wherein, in any of the cases in which the output vector x takes the values
    Figure PCTCN2020071341-appb-100002
    Figure PCTCN2020071341-appb-100003
    the input vector y and the output vector x have the following mapping relationship:
    Figure PCTCN2020071341-appb-100004
    x₂ = y₂.
  7. The method according to claim 6, wherein, before the decoding the soft information by using a decoding model, the method further comprises:
    obtaining the decoding model.
  8. The method according to claim 7, wherein
    the decoding model is obtained through two training processes.
  9. The method according to claim 8, wherein the obtaining the decoding model comprises:
    constructing an initial neural network decoding unit and setting an initial neural network decoding unit parameter, wherein the initial neural network decoding unit parameter is used to indicate a mapping relationship between input information input to the initial neural network decoding unit and output information, and the initial neural network decoding unit parameter comprises an initial weight matrix and an initial bias vector;
    training the initial neural network decoding unit by using a preset first sample set, updating the initial neural network decoding unit parameter to an intermediate neural network decoding unit parameter, and obtaining an intermediate neural network decoding unit, wherein the intermediate neural network decoding unit comprises the intermediate neural network decoding unit parameter, the intermediate neural network decoding unit parameter is used to indicate a mapping relationship between input information input to the intermediate neural network decoding unit and output information, the intermediate neural network decoding unit parameter comprises an intermediate weight matrix and an intermediate bias vector, the first sample set comprises at least one first sample, each first sample comprises a first column vector of length 2 and a second column vector of length 2, and the second column vector is the expected decoding result of the first column vector;
    combining a plurality of the intermediate neural network decoding units to obtain a first initial decoding model; and
    training the first initial decoding model by using a preset second sample set, and updating the intermediate neural network decoding unit parameters in the intermediate neural network decoding units to the neural network decoding unit parameters, to obtain the decoding model, wherein the second sample set comprises a third column vector of length N and a fourth column vector of length N, and the fourth column vector is the expected decoding result of the third column vector.
  10. The method according to claim 9, wherein the combining a plurality of the intermediate neural network decoding units to obtain a first initial decoding model comprises:
    obtaining a decoding network graph, wherein the decoding network graph comprises at least one decoding butterfly graph, and the decoding butterfly graph is used to indicate a check relationship between input information of the decoding butterfly graph and output information of the decoding butterfly graph; and
    replacing the decoding butterfly graphs in the decoding network graph with the intermediate neural network decoding units, to obtain the first initial decoding model.
  11. The method according to claim 7, wherein
    the decoding model is obtained through one training process.
  12. The method according to claim 11, wherein the obtaining the decoding model comprises:
    constructing an initial neural network decoding unit and setting an initial neural network decoding unit parameter, wherein the initial neural network decoding unit parameter is used to indicate a mapping relationship between input information input to the initial neural network decoding unit and output information, and the initial neural network decoding unit parameter comprises an initial weight matrix and an initial bias vector;
    combining a plurality of the initial neural network decoding units to obtain a second initial decoding model; and
    training the second initial decoding model by using a preset third sample set, and updating the initial neural network decoding unit parameters in the initial neural network decoding units to neural network decoding unit parameters, to obtain the decoding model, wherein the third sample set comprises a fifth column vector of length N and a sixth column vector of length N, and the sixth column vector is the expected decoding result of the fifth column vector.
  13. The method according to claim 12, wherein the combining a plurality of the initial neural network decoding units to obtain a second initial decoding model comprises:
    obtaining a decoding network graph, wherein the decoding network graph comprises at least one decoding butterfly graph, and the decoding butterfly graph is used to indicate a check relationship between input information of the decoding butterfly graph and output information of the decoding butterfly graph; and
    replacing the decoding butterfly graphs in the decoding network graph with the initial neural network decoding units, to obtain the second initial decoding model.
  14. A decoding apparatus, comprising:
    an obtaining module, configured to obtain soft information of N to-be-decoded bits, where N is an integer greater than or equal to 2; and
    a decoding module, configured to decode the soft information by using a decoding model to obtain a decoding result, wherein the decoding model is composed of a plurality of neural network decoding units, each neural network decoding unit supports an exclusive-OR (XOR) operation on soft information, and the decoding model is obtained through at least one training process.
  15. The decoding apparatus according to claim 14, wherein
    in the decoding model, the plurality of neural network decoding units form a log₂N-layer structure, wherein the output of a neural network decoding unit in a previous layer serves as the input of a subsequent layer.
  16. The decoding apparatus according to claim 15, wherein
    each neural network decoding unit has two inputs and two outputs and has at least one hidden-layer structure.
  17. The decoding apparatus according to claim 16, wherein
    the neural network decoding unit comprises a neural network decoding unit parameter, the neural network decoding unit parameter is used to indicate a mapping relationship between input information input to the neural network decoding unit and output information, and the neural network decoding unit parameter comprises a weight matrix and a bias vector.
  18. The decoding apparatus according to claim 17, wherein
    an input vector input to a neural network decoding unit and the output vector output by that neural network decoding unit have the following mapping relationship:
    h = g₁(w₁y + b₁),  x = g₂(w₂h + b₂)
    where y = (y₁, y₂)^T denotes the input vector, x = (x₁, x₂)^T denotes the output vector, w₁ and w₂ denote the weight matrices, b₁ and b₂ denote the bias vectors, h denotes the hidden-layer unit vector, g₁ and g₂ denote activation functions, w₁ and w₂ are both real matrices, and b₁, b₂, h, y, and x are all real vectors.
  19. The decoding apparatus according to claim 18, wherein, in any of the cases in which the output vector x takes the values
    Figure PCTCN2020071341-appb-100006
    Figure PCTCN2020071341-appb-100007
    the input vector y and the output vector x have the following mapping relationship:
    Figure PCTCN2020071341-appb-100008
    x₂ = y₂.
  20. The decoding apparatus according to claim 19, wherein the decoding module is further configured to:
    obtain the decoding model before decoding the soft information by using the decoding model.
  21. The decoding apparatus according to claim 20, wherein
    the decoding model is obtained through two training processes.
  22. The decoding apparatus according to claim 21, wherein the decoding module is specifically configured to:
    construct an initial neural network decoding unit and set an initial neural network decoding unit parameter, wherein the initial neural network decoding unit parameter is used to indicate a mapping relationship between input information input to the initial neural network decoding unit and output information, and the initial neural network decoding unit parameter comprises an initial weight matrix and an initial bias vector;
    train the initial neural network decoding unit by using a preset first sample set, update the initial neural network decoding unit parameter to an intermediate neural network decoding unit parameter, and obtain an intermediate neural network decoding unit, wherein the intermediate neural network decoding unit comprises the intermediate neural network decoding unit parameter, the intermediate neural network decoding unit parameter is used to indicate a mapping relationship between input information input to the intermediate neural network decoding unit and output information, the intermediate neural network decoding unit parameter comprises an intermediate weight matrix and an intermediate bias vector, the first sample set comprises at least one first sample, each first sample comprises a first column vector of length 2 and a second column vector of length 2, and the second column vector is the expected decoding result of the first column vector;
    combine a plurality of the intermediate neural network decoding units to obtain a first initial decoding model; and
    train the first initial decoding model by using a preset second sample set, and update the intermediate neural network decoding unit parameters in the intermediate neural network decoding units to the neural network decoding unit parameters, to obtain the decoding model, wherein the second sample set comprises a third column vector of length N and a fourth column vector of length N, and the fourth column vector is the expected decoding result of the third column vector.
  23. The decoding apparatus according to claim 22, wherein the decoding module is specifically configured to:
    obtain a decoding network graph, wherein the decoding network graph comprises at least one decoding butterfly graph, and the decoding butterfly graph is used to indicate a check relationship between input information of the decoding butterfly graph and output information of the decoding butterfly graph; and
    replace the decoding butterfly graphs in the decoding network graph with the intermediate neural network decoding units, to obtain the first initial decoding model.
  24. The decoding apparatus according to claim 20, wherein
    the decoding model is obtained through one training process.
  25. The decoding apparatus according to claim 24, wherein the decoding module is specifically configured to:
    construct an initial neural network decoding unit and set an initial neural network decoding unit parameter, wherein the initial neural network decoding unit parameter is used to indicate a mapping relationship between input information input to the initial neural network decoding unit and output information, and the initial neural network decoding unit parameter comprises an initial weight matrix and an initial bias vector;
    combine a plurality of the initial neural network decoding units to obtain a second initial decoding model; and
    train the second initial decoding model by using a preset third sample set, and update the initial neural network decoding unit parameters in the initial neural network decoding units to neural network decoding unit parameters, to obtain the decoding model, wherein the third sample set comprises a fifth column vector of length N and a sixth column vector of length N, and the sixth column vector is the expected decoding result of the fifth column vector.
  26. The decoding apparatus according to claim 25, wherein the decoding module is specifically configured to:
    obtain a decoding network graph, wherein the decoding network graph comprises at least one decoding butterfly graph, and the decoding butterfly graph is used to indicate a check relationship between input information of the decoding butterfly graph and output information of the decoding butterfly graph; and
    replace the decoding butterfly graphs in the decoding network graph with the initial neural network decoding units, to obtain the second initial decoding model.
  27. A computer-readable storage medium, wherein the computer-readable storage medium stores program instructions that, when run on a processor, perform the method according to any one of claims 1 to 13.
  28. A computer program which, when executed by an apparatus, performs the method according to any one of claims 1 to 13.
  29. A decoding apparatus, comprising:
    a memory, configured to store instructions; and
    at least one processor communicatively connected to the memory, wherein the at least one processor is configured to perform the method according to any one of claims 1 to 13 when running the instructions.
  30. A chip, comprising a processor and a data interface, wherein the processor reads, through the data interface, instructions stored in a memory to perform the method according to any one of claims 1 to 13.
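The structure recited in claims 1 through 5 — two-input/two-output neural network decoding units with at least one hidden layer, composed into a log₂N-layer model — can be sketched as follows. This is an illustrative reconstruction, not the patented implementation: the unit mapping h = g₁(w₁y + b₁), x = g₂(w₂h + b₂) follows the symbol definitions of claim 5, while the butterfly pairing of inputs within each layer, the activation choices, and all parameter values are assumptions made for demonstration; in practice the parameters would come from the training processes of claims 8 to 13.

```python
import numpy as np

def neural_decoding_unit(y, w1, b1, w2, b2, g1=np.tanh, g2=None):
    """2-input/2-output unit per claim 5: h = g1(w1 @ y + b1), x = g2(w2 @ h + b2).

    y is a length-2 vector of soft information; w1, w2 are real weight matrices
    and b1, b2 real bias vectors (the "unit parameters" of claim 4). g2=None
    means a linear output activation (an assumption, not from the claims).
    """
    h = g1(w1 @ y + b1)
    z = w2 @ h + b2
    return z if g2 is None else g2(z)

def decode(soft_info, params):
    """Apply log2(N) layers of decoding units (claim 2).

    params is a list of (w1, b1, w2, b2) tuples, one per layer. Pairing inputs
    whose indices differ in one bit, as in a polar decoding butterfly, is an
    illustrative assumption about the decoding network graph of claim 10.
    """
    x = np.asarray(soft_info, dtype=float)
    N = x.size
    assert N >= 2 and N & (N - 1) == 0, "N must be a power of two"
    assert len(params) == N.bit_length() - 1, "need log2(N) layers"
    for s, (w1, b1, w2, b2) in enumerate(params):
        stride = 1 << s
        out = np.empty_like(x)
        for i in range(N):
            if i & stride == 0:  # i is the upper input of a butterfly
                pair = np.array([x[i], x[i + stride]])
                out[i], out[i + stride] = neural_decoding_unit(pair, w1, b1, w2, b2)
        x = out
    return x
```

With identity weights, zero biases, and linear activations, each unit passes its pair through unchanged, which makes the log₂N-layer wiring easy to verify before any training is applied.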
PCT/CN2020/071341 2019-01-29 2020-01-10 Decoding method and decoding apparatus WO2020156095A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910087689.9A CN111490798B (zh) 2019-01-29 2019-01-29 Decoding method and decoding apparatus
CN201910087689.9 2019-01-29

Publications (1)

Publication Number Publication Date
WO2020156095A1 true WO2020156095A1 (zh) 2020-08-06

Family

ID=71812337

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/071341 WO2020156095A1 (zh) 2019-01-29 2020-01-10 Decoding method and decoding apparatus

Country Status (2)

Country Link
CN (1) CN111490798B (zh)
WO (1) WO2020156095A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114422380B (zh) * 2020-10-09 2023-06-09 Vivo Mobile Communication Co., Ltd. Neural network information transmission method and apparatus, communication device, and storage medium
CN115037312B (zh) * 2022-08-12 2023-01-17 Beijing Smartchip Microelectronics Technology Co., Ltd. Quantization method, apparatus, and device for LDPC decoding soft information

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104079382A (zh) * 2014-07-25 2014-10-01 Beijing University of Posts and Telecommunications Polar code decoder and polar code decoding method based on probability computation
CN107248866A (zh) * 2017-05-31 2017-10-13 Southeast University Method for reducing polar code decoding latency
CN108631930A (zh) * 2017-03-24 2018-10-09 Huawei Technologies Co., Ltd. Polar encoding method and encoding apparatus, and decoding method and decoding apparatus
US20180357530A1 (en) * 2017-06-13 2018-12-13 Ramot At Tel-Aviv University Ltd. Deep learning decoding of error correcting codes

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2314240B (en) * 1996-06-11 2000-12-27 Motorola Ltd Viterbi decoder for an equaliser and method of operation
CN101562456B (zh) * 2009-06-03 2012-08-22 North China Electric Power University (Baoding) Code-aided frame synchronization method based on soft information from low-density parity-check code decoding
US20110182385A1 (en) * 2009-07-30 2011-07-28 Qualcomm Incorporated Method and apparatus for reliability-aided pruning of blind decoding results
CN102831026A (zh) * 2012-08-13 2012-12-19 Memoright (Wuhan) Co., Ltd. Multi-level cell flash memory and method for dynamically adjusting its soft-information bit read voltage thresholds
US10474525B2 (en) * 2015-08-11 2019-11-12 Sandisk Technologies Llc Soft bit techniques for a data storage device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
HUAWEI ET AL.: "Overview of Polar Codes", 3GPP TSG RAN WG1 Meeting #84bis, R1-162161, 15 April 2016 (2016-04-15), XP051080007 *

Also Published As

Publication number Publication date
CN111490798A (zh) 2020-08-04
CN111490798B (zh) 2022-04-22

Similar Documents

Publication Publication Date Title
US20230198660A1 (en) Method for encoding information in communication network
CN108282259B (zh) 一种编码方法及装置
CN113273083B (zh) 使用压缩的信道输出信息来解码数据的方法和系统
US20210279584A1 (en) Encoding method and apparatus, and decoding method and apparatus
WO2018177227A1 (zh) Encoding method, decoding method, apparatus, and device
WO2021103978A1 (zh) Polar code encoding method and apparatus
WO2018137568A1 (zh) Encoding method, encoding apparatus, and communication apparatus
WO2020156095A1 (zh) Decoding method and decoding apparatus
US11558068B2 (en) Method and apparatus for encoding polar code concatenated with CRC code
WO2022161201A1 (zh) Coded modulation and demodulation-decoding method and apparatus
US20230208554A1 (en) Encoding and Decoding Method and Apparatus
US20240137147A1 (en) Data Processing Method, Apparatus, and System
WO2018127069A1 (zh) Encoding method and apparatus
WO2022268130A1 (zh) Network coding method and apparatus
WO2018210216A1 (zh) Data transmission method, chip, transceiver, and computer-readable storage medium
WO2022057599A1 (zh) Polar code encoding method and decoding method, and encoding apparatus and decoding apparatus
WO2020014988A1 (en) Polar encoding and decoding
WO2024055894A1 (zh) Encoding/decoding method and apparatus
WO2023072077A1 (zh) Communication method and related apparatus
WO2022171019A1 (zh) Encoding and decoding method and related apparatus
WO2023030236A1 (zh) Data sending method, data receiving method, and communication apparatus
WO2024055934A1 (zh) Encoding method, decoding method, communication apparatus, and computer-readable storage medium
WO2024077486A1 (zh) Method for determining cyclic redundancy check bits, communication method, and apparatus
WO2022117061A1 (zh) Method and apparatus for determining assistant bits of a polar code
WO2023109733A1 (zh) Rate matching method and apparatus

Legal Events

Code  Description
121   EP: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 20748867; Country of ref document: EP; Kind code: A1)
NENP  Non-entry into the national phase (Ref country code: DE)
122   EP: PCT application non-entry in European phase (Ref document number: 20748867; Country of ref document: EP; Kind code: A1)