CN110474716B - Method for establishing SCMA codec model based on noise reduction self-encoder - Google Patents

Method for establishing SCMA codec model based on noise reduction self-encoder

Info

Publication number
CN110474716B
CN110474716B CN201910746945.0A CN201910746945A
Authority
CN
China
Prior art keywords
scma
encoder
noise
neural network
layer
Prior art date
Legal status
Active
Application number
CN201910746945.0A
Other languages
Chinese (zh)
Other versions
CN110474716A (en)
Inventor
胡艳军
胡梦钰
蒋芳
王翊
许耀华
Current Assignee
Anhui University
Original Assignee
Anhui University
Priority date
Filing date
Publication date
Application filed by Anhui University filed Critical Anhui University
Priority to CN201910746945.0A priority Critical patent/CN110474716B/en
Publication of CN110474716A publication Critical patent/CN110474716A/en
Application granted granted Critical
Publication of CN110474716B publication Critical patent/CN110474716B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06N3/045 Combinations of networks (G06N3/04 Architecture, e.g. interconnection topology; G06N3/02 Neural networks; G06N3/00 Computing arrangements based on biological models)
    • G06N3/08 Learning methods (G06N3/02 Neural networks; G06N3/00 Computing arrangements based on biological models)
    • H04L1/0041 Arrangements at the transmitter end (H04L1/004 Arrangements for detecting or preventing errors in the information received by using forward error control)
    • H04L1/0048 Decoding adapted to other signal detection operation in conjunction with detection of multiuser or interfering signals, e.g. iteration between CDMA or MIMO detector and FEC decoder (H04L1/0045 Arrangements at the receiver end)
    • H04L1/005 Iterative decoding, including iteration between signal detection and decoding operation (H04L1/0045 Arrangements at the receiver end)
    • H04L25/03165 Arrangements for removing intersymbol interference using neural networks (H04L25/03 Shaping networks in transmitter or receiver; H04L25/00 Baseband systems)

Abstract

The invention relates to a method for establishing an SCMA codec model based on a noise reduction self-encoder, which comprises the following steps: establishing an SCMA (sparse code multiple access) encoder based on a noise reduction self-encoder and a fully-connected neural network, and mapping the original input data of each user into code words; transmitting the code words of all users on each resource block in superposition, after which the signal on each resource block is superposed with channel noise; establishing an SCMA decoder based on a fully-connected neural network at the receiving end, and decoding the original input data of all users; training the SCMA codec model based on the noise reduction self-encoder; and testing the BER performance of the SCMA codec model based on the noise reduction self-encoder. Compared with the traditional SCMA system, the invention reduces the complexity of encoding and decoding; compared with the existing SCMA system model based on deep learning, it further reduces the bit error rate; and compared with the existing SCMA system model based on deep learning, it achieves a faster training convergence rate.

Description

Method for establishing SCMA codec model based on noise reduction self-encoder
Technical Field
The invention relates to the technical field of wireless communication and deep learning, in particular to a method for establishing an SCMA codec model based on a noise reduction self-encoder.
Background
Sparse Code Multiple Access (SCMA) is a non-orthogonal multiple access scheme that provides the high spectral efficiency and massive connectivity required by 5G wireless communication systems. The encoding side and the decoding side are the main components of an SCMA system. Each user has a dedicated codebook: coded bits are mapped directly to a multi-dimensional code word in the SCMA codebook and superposed non-orthogonally on the resources; multi-user detection then exploits the sparsity of the user code words, and the algorithm currently used in most multi-user detection schemes is the Message Passing Algorithm (MPA).
However, the SCMA system still faces high complexity in both encoding and decoding. The code words in a codebook are not orthogonal to each other and consist of multi-dimensional complex values, and for different numbers of resources a codebook must be constructed manually for every possible environment, so codebook design is complex. During decoding, the complexity of the decoding algorithm also increases sharply when the number of iterations is large, the number of users grows substantially, or a larger system diversity gain is pursued. No breakthrough has yet been made in reducing this complexity, and most proposed algorithms sacrifice system performance.
Owing to new characteristics of future communication, such as complex scenarios with unknown channel models and demands for fast and accurate processing, Artificial Intelligence (AI) technology makes it possible to design and optimize communication systems beyond traditional concepts and performance, and has become a research direction of growing interest in the industry, for example the application of Deep Learning (DL) in the physical layer. The invention with publication number CN109787715A represents the decoding part of the SCMA system with a deep neural network, but does not consider the SCMA encoding part. "An Enhanced SCMA Detector Enabled by Deep Neural Network", published on ICCC by Lu C. et al., constructs a sparsely connected neural network by unfolding the MPA algorithm and assigning weights to the edges of the factor graph, thereby reducing decoding complexity; however, the bit error rate performance is not improved much, and even at low signal-to-noise ratio it is still worse than the BER performance of the traditional algorithm.
"Deep Learning-Aided SCMA", published by Kim M. et al. in IEEE Communications Letters, designs a D-SCMA system represented by deep neural networks so that it can autonomously learn the encoded code words and decode them according to the channel state. The results of that article show that, compared with the traditional SCMA system, the D-SCMA system greatly reduces the encoding and decoding complexity and the system delay. Two problems nevertheless remain: 1) compared with the traditional SCMA system, the bit error rate of the D-SCMA system is reduced because the neural network also learns a better code word mapping, but its decoding performance is still inferior to that of the MPA algorithm; 2) the slow convergence rate of the deep neural network used by the D-SCMA system leads to a long training time. The system therefore needs further improvement and optimization to raise the training convergence speed and the decoding performance.
Disclosure of Invention
The invention aims to provide a method for establishing an SCMA codec model based on a noise reduction self-encoder, which improves the error rate performance of an SCMA system and has higher convergence rate in a training phase.
In order to achieve the purpose, the invention adopts the following technical scheme: a method for establishing an SCMA codec model based on a noise reduction self-encoder is characterized in that: the method comprises the following steps in sequence:
(1) firstly, establishing a SCMA (sparse code multiple access) encoder based on a noise reduction self-encoder and a fully-connected neural network, and mapping original input data of a user into code words;
(2) the code words of all users on each resource block are transmitted in a superposition mode, and then the signal on each resource block is superposed with channel noise;
(3) an SCMA decoder based on a full-connection neural network is established at a receiving end, and original input data of all users are decoded;
(4) the SCMA encoder, the channel and the SCMA decoder jointly form an SCMA codec model based on the noise reduction self-encoder, and the SCMA codec model based on the noise reduction self-encoder is trained;
(5) the above SCMA codec model based on the noise reduction self-encoder was tested for BER performance.
The step (1) of establishing the SCMA encoder based on the noise reduction self-encoder and the fully-connected neural network specifically comprises the following steps:
(1a) encoding the original input data of each user into a one-hot vector, wherein the information transmitted by each user each time is represented as s, s is binary bit data, and there are m possible messages, where m = 2^b and b represents the number of bits per transmission; randomly generated binary input data is selected and encoded into a one-hot vector, i.e. an m-dimensional vector in which only one element is 1 and the remaining elements are 0, and the encoded one-hot vector is used as the input of each code word mapper;
(1b) establishing an SCMA code word mapper based on a noise reduction self-encoder and a full-connection neural network, adding a noise layer before each full-connection deep neural network unit, namely a DNN unit, and constructing a code word mapper corresponding to different resource blocks for each user;
(1c) acquiring a factor graph according to the condition that each user occupies a resource block, acquiring a corresponding factor graph matrix, and connecting the SCMA code word mappers according to the factor graph matrix to form an SCMA encoder; and the transmitter sends the superposed code words to the corresponding resource blocks.
The channel noise in the step (2) is Gaussian white noise, the variance of the channel noise power is β, and β is expressed as:

β = E[|x|²] / (2η(E_b/N_0))

wherein E[|x|²] represents the power of the transmitted signal, η represents the spectral efficiency, and E_b/N_0 represents the bit signal-to-noise ratio.
The step (3) specifically comprises the following steps:
(3a) establishing an input layer of the SCMA decoder, wherein the channel output signal is used as the input signal of the receiving-end decoder; the received signal on the k-th resource block is:

y_k = Σ_j h_kj f_kj(s_j; θ_1) + n_k

where the sum runs over the users occupying resource block k, θ_1 represents the parameters of the SCMA encoder based on the fully-connected neural network, f_kj(s_j; θ_1) is the code word of the j-th user on resource k after passing through the code word mapper, h_kj is the corresponding channel gain, n_k is the noise on resource k, y_k represents the received signal on resource k, and s_j is the original input data of the j-th user;
(3b) using a fully-connected neural network at the receiving end, establishing the hidden layers of the SCMA decoder for learning and extracting signal characteristics, wherein the activation function of the hidden layers is the relu activation function; the output data of the l-th hidden layer is then expressed as:

y_l = relu(W_l y_{l-1} + b_l)

wherein W_l represents the weight of the l-th layer, b_l denotes the bias of the l-th layer, W and b together form the parameters θ, and y_{l-1} represents the output of the (l-1)-th hidden layer;
(3c) to prevent overfitting, the data of each layer is batch-normalized before the nonlinear operation of the relu activation function is applied to it;
(3d) the target output is the one-hot vector fed into the SCMA encoder by each user; because the range of the target output data is [0,1], the tanh activation function is selected as the output activation function, whose expression is:

y_out = tanh(z) = (e^z - e^(-z)) / (e^z + e^(-z))

wherein z is the output value of the last hidden layer, e is the base of the natural logarithm, e ≈ 2.71828, and y_out is the final actual output of the SCMA decoder.
The training of the SCMA codec model based on the noise reduction auto-encoder in the step (4) specifically includes the following steps:
(4a) during training, the channel noise is white Gaussian noise of a fixed size, called the training noise; the variance of the channel noise power is β, expressed as:

β = E[|x|²] / (2η(E_b/N_0))

wherein E[|x|²] represents the power of the transmitted signal, η represents the spectral efficiency, and E_b/N_0 represents the bit signal-to-noise ratio; during training, E_b/N_0 is fixed at 13 dB, i.e. β is fixed, so the training noise has a fixed size;
(4b) the mean square error loss function is used as the end-to-end loss function for calculating the loss between the actual output and the expected output; the expression of the loss function is:

L(θ_1, θ_2) = (1/n) Σ_{i=1}^{n} (y_i - y_out,i)²

wherein n is the size of one batch used to train the neural network, θ_1 and θ_2 are the parameters of the encoder and the decoder respectively, y denotes the target output of the neural network, and y_out represents the actual output of the neural network;
(4c) the loss function is converged to a minimum using a gradient descent algorithm: sending a batch of input data each time to calculate the partial derivative of the loss function to the neural network parameters, obtaining the gradient of the loss function, namely the direction of the loss function which descends the fastest, and updating the parameters through the direction to make the value of the loss function smaller and smaller; the weights W and bias b of the neural network are updated with Adam optimizers so that the value of the loss function converges to a minimum.
The step (5) specifically comprises the following steps:
(5a) randomly generating binary input data bits, and encoding them into one-hot vectors to be used as input data of the SCMA encoder;
(5b) calculating BER performance of the SCMA codec based on the noise reduction self-encoder under different signal-to-noise ratios.
The step (1b) specifically comprises the following steps:
(1b1) randomly generating a binary bit data stream for each user as input data of a codeword mapper;
(1b2) adding a noise layer before each fully-connected deep neural network unit, namely a DNN unit, wherein the noise adopts Gaussian white noise;
(1b3) establishing hidden layers for learning and extracting signal characteristics, wherein the activation function is the relu activation function; the output data of the l-th hidden layer is then expressed as:

y_l = relu(W_l y_{l-1} + b_l)

wherein W_l represents the weight of the l-th layer, b_l denotes the bias of the l-th layer, and y_{l-1} represents the output of the (l-1)-th hidden layer; the output layer uses a linear activation function, i.e. the data from the previous layer is passed through unchanged; W and b are jointly called the parameters θ, where W represents a weight and b represents a bias;
(1b4) in each hidden layer, the data is batch-normalized before the nonlinear operation, as follows:
in batch training, each batch contains c training examples; the normalization operation performs the following transformation on the activation value of each neuron in the hidden layer:

x̂^(h) = (x^(h) - E[x^(h)]) / √(Var[x^(h)])        (7)

wherein h denotes the h-th neuron, x is the linear activation value of a hidden-layer neuron, i.e. x = Wu + b, where W and b are the parameters of the neural network (W represents a weight, b represents a bias) and u is the output value of a neuron in the previous layer; E(x) and Var(x) are respectively the mean and the variance of the c activation values x obtained from the c examples;
because this transformation performs normalization, the standard deviation is constrained to 1, which destroys the feature distribution learned by this layer of the network; therefore, two adjustable parameters, scale and shift, are added for each neuron; these two parameters are learned through training and are used to transform and reconstruct the normalized activations, thereby enhancing the expressive power of the network, i.e. the following scale-and-shift operation is applied to the normalized activations:

y^(h) = γ^(h) x̂^(h) + β^(h)        (8)

wherein h denotes the h-th neuron, γ and β are learnable reconstruction parameters, x̂^(h) denotes the normalized neuron activation value, and y^(h) denotes the final output value of the h-th neuron after reconstruction through the transformation of formula (8);
(1b5) the code word mapper outputs, for a user, the code word mapped onto a resource block, where f_kj(s_j; θ_1) denotes the code word that maps the j-th user's data onto the k-th resource block, θ_1 denotes the weights and biases of the code word mapper, which are the important parameters to be updated by training, and s_j is the original input data sent by user j; the output layer outputs 2-dimensional data representing the real part and the imaginary part of the complex code word respectively.
According to the technical scheme, the beneficial effects of the invention are as follows: firstly, compared with the traditional SCMA system, the invention reduces the complexity of coding and decoding; secondly, compared with the existing SCMA system model based on deep learning, the method further reduces the bit error rate; third, compared with the existing SCMA system model based on deep learning, the invention has faster training convergence speed.
Drawings
FIG. 1 is a system block diagram of a SCMA codec in the present invention;
FIG. 2 is a network structure diagram of a codeword mapper according to the present invention;
FIG. 3 is a network architecture diagram of a neural network based SCMA decoder of the present invention;
FIG. 4 is a graph of a neural network Loss using an Adam optimizer provided in the first embodiment;
FIG. 5 is a graph of a neural network Loss using the SGD optimizer provided in the first embodiment;
fig. 6 is a graph of BER performance of the SCMA codec based on the noise reduction self-encoder provided in the first embodiment.
Detailed Description
A method for establishing an SCMA codec model based on a noise reduction self-encoder is characterized in that: the method comprises the following steps in sequence:
(1) firstly, establishing a SCMA (sparse code multiple access) encoder based on a noise reduction self-encoder and a fully-connected neural network, and mapping original input data of a user into code words;
(2) the code words of all users on each resource block are transmitted in a superposition mode, and then the signal on each resource block is superposed with channel noise;
(3) an SCMA decoder based on a full-connection neural network is established at a receiving end, and original input data of all users are decoded;
(4) the SCMA encoder, the channel and the SCMA decoder jointly form an SCMA codec model based on the noise reduction self-encoder, and the SCMA codec model based on the noise reduction self-encoder is trained;
(5) the above SCMA codec model based on the noise reduction self-encoder was tested for BER performance.
The step (1) of establishing the SCMA encoder based on the noise reduction self-encoder and the fully-connected neural network specifically comprises the following steps:
(1a) encoding the original input data of each user into a one-hot vector, wherein the information transmitted by each user is represented as s, s is binary bit data, and there are m possible messages, where m = 2^b and b represents the number of bits per transmission; selecting randomly generated binary input data and encoding it into a one-hot vector, namely an m-dimensional vector in which only one element is 1 and the rest of the elements are 0, and using the encoded one-hot vector as the input of each code word mapper;
(1b) establishing an SCMA code word mapper based on a noise reduction self-encoder and a full-connection neural network, adding a noise layer before each full-connection deep neural network unit, namely a DNN unit, and constructing a code word mapper corresponding to different resource blocks for each user;
(1c) acquiring a factor graph according to the condition that each user occupies a resource block, acquiring a corresponding factor graph matrix, and connecting the SCMA code word mappers according to the factor graph matrix to form an SCMA encoder; and the transmitter sends the superposed code words to the corresponding resource blocks.
The channel noise in the step (2) is Gaussian white noise, the variance of the channel noise power is β, and β is expressed as:

β = E[|x|²] / (2η(E_b/N_0))

wherein E[|x|²] represents the power of the transmitted signal, η represents the spectral efficiency, and E_b/N_0 represents the bit signal-to-noise ratio.
The step (3) specifically comprises the following steps:
(3a) establishing an input layer of the SCMA decoder, wherein the channel output signal is used as the input signal of the receiving-end decoder; the received signal on the k-th resource block is:

y_k = Σ_j h_kj f_kj(s_j; θ_1) + n_k

where the sum runs over the users occupying resource block k, θ_1 represents the parameters of the SCMA encoder based on the fully-connected neural network, f_kj(s_j; θ_1) is the code word of the j-th user on resource k after passing through the code word mapper, h_kj is the corresponding channel gain, n_k is the noise on resource k, y_k represents the received signal on resource k, and s_j is the original input data of the j-th user;
(3b) using a fully-connected neural network at the receiving end, establishing the hidden layers of the SCMA decoder for learning and extracting signal characteristics, wherein the activation function of the hidden layers is the relu activation function; the output data of the l-th hidden layer is then expressed as:

y_l = relu(W_l y_{l-1} + b_l)

wherein W_l represents the weight of the l-th layer, b_l denotes the bias of the l-th layer, W and b together form the parameters θ, and y_{l-1} represents the output of the (l-1)-th hidden layer;
(3c) to prevent overfitting, the data of each layer is batch-normalized before the nonlinear operation of the relu activation function is applied to it;
(3d) the target output is the one-hot vector fed into the SCMA encoder by each user; because the range of the target output data is [0,1], the tanh activation function is selected as the output activation function, whose expression is:

y_out = tanh(z) = (e^z - e^(-z)) / (e^z + e^(-z))

wherein z is the output value of the last hidden layer, e is the base of the natural logarithm, e ≈ 2.71828, and y_out is the final actual output of the SCMA decoder.
The training of the SCMA codec model based on the noise reduction auto-encoder in the step (4) specifically includes the following steps:
(4a) during training, the channel noise is white Gaussian noise of a fixed size, called the training noise; the variance of the channel noise power is β, expressed as:

β = E[|x|²] / (2η(E_b/N_0))

wherein E[|x|²] represents the power of the transmitted signal, η represents the spectral efficiency, and E_b/N_0 represents the bit signal-to-noise ratio; during training, E_b/N_0 is fixed at 13 dB, i.e. β is fixed, so the training noise has a fixed size;
(4b) the mean square error loss function is used as the end-to-end loss function for calculating the loss between the actual output and the expected output; the expression of the loss function is:

L(θ_1, θ_2) = (1/n) Σ_{i=1}^{n} (y_i - y_out,i)²

wherein n is the size of one batch used to train the neural network, θ_1 and θ_2 are the parameters of the encoder and the decoder respectively, y denotes the target output of the neural network, and y_out represents the actual output of the neural network;
(4c) the loss function is converged to a minimum using a gradient descent algorithm: sending a batch of input data each time to calculate the partial derivative of the loss function to the neural network parameters, obtaining the gradient of the loss function, namely the direction of the loss function which descends the fastest, and updating the parameters through the direction to make the value of the loss function smaller and smaller; the weights W and bias b of the neural network are updated with Adam optimizers so that the value of the loss function converges to a minimum.
The step (5) specifically comprises the following steps:
(5a) randomly generating binary input data bits, and encoding them into one-hot vectors to be used as input data of the SCMA encoder;
(5b) calculating BER performance of the SCMA codec based on the noise reduction self-encoder under different signal-to-noise ratios.
The step (1b) specifically comprises the following steps:
(1b1) randomly generating a binary bit data stream for each user as input data of a codeword mapper;
(1b2) adding a noise layer before each fully-connected deep neural network unit, namely a DNN unit, wherein the noise adopts Gaussian white noise;
(1b3) establishing hidden layers for learning and extracting signal characteristics, wherein the activation function is the relu activation function; the output data of the l-th hidden layer is then expressed as:

y_l = relu(W_l y_{l-1} + b_l)

wherein W_l represents the weight of the l-th layer, b_l denotes the bias of the l-th layer, and y_{l-1} represents the output of the (l-1)-th hidden layer; the output layer uses a linear activation function, i.e. the data from the previous layer is passed through unchanged; W and b are jointly called the parameters θ, where W represents a weight and b represents a bias;
(1b4) in each hidden layer, the data is batch-normalized before the nonlinear operation, as follows:
in batch training, each batch contains c training examples; the normalization operation performs the following transformation on the activation value of each neuron in the hidden layer:

x̂^(h) = (x^(h) - E[x^(h)]) / √(Var[x^(h)])        (7)

wherein h denotes the h-th neuron, x is the linear activation value of a hidden-layer neuron, i.e. x = Wu + b, where W and b are the parameters of the neural network (W represents a weight, b represents a bias) and u is the output value of a neuron in the previous layer; E(x) and Var(x) are respectively the mean and the variance of the c activation values x obtained from the c examples;
because this transformation performs normalization, the standard deviation is constrained to 1, which destroys the feature distribution learned by this layer of the network; therefore, two adjustable parameters, scale and shift, are added for each neuron; these two parameters are learned through training and are used to transform and reconstruct the normalized activations, thereby enhancing the expressive power of the network, i.e. the following scale-and-shift operation is applied to the normalized activations:

y^(h) = γ^(h) x̂^(h) + β^(h)        (8)

wherein h denotes the h-th neuron, γ and β are learnable reconstruction parameters, x̂^(h) denotes the normalized neuron activation value, and y^(h) denotes the final output value of the h-th neuron after reconstruction through the transformation of formula (8);
(1b5) the code word mapper outputs, for a user, the code word mapped onto a resource block, where f_kj(s_j; θ_1) denotes the code word that maps the j-th user's data onto the k-th resource block, θ_1 denotes the weights and biases of the code word mapper, which are the important parameters to be updated by training, and s_j is the original input data sent by user j; the output layer outputs 2-dimensional data representing the real part and the imaginary part of the complex code word respectively.
Example one
Take 6 users, 4 resource blocks as an example.
As shown in fig. 1, based on the idea of a noise reduction self-encoder, a certain proportion of random noise is added to input data, and an SCMA encoder maps original input data of a user into a codeword; the coding code words are overlapped on the resource block in a non-orthogonal mode for transmission, and are influenced by a channel in the transmission process, and the signal is overlapped with noise; an SCMA decoder based on a full-connection neural network is established at a receiving end, and original input information of all users is decoded according to input coding code words; the SCMA codec can be regarded as a complete noise reduction self-encoder structure, and a channel noise with a fixed size is superposed to train the SCMA codec model based on the noise reduction self-encoder; the above SCMA codec model based on the noise reduction self-encoder was tested for BER performance. Compared with the traditional SCMA system, the error rate is reduced, and compared with the existing SCMA system model based on deep learning, the error rate of the system is further reduced, and the training convergence speed is higher. The method comprises the following specific steps:
1. firstly, an SCMA encoder based on a noise reduction self-encoder and a fully-connected neural network is established, and original input data of a user is mapped into code words. The method comprises the following specific steps:
1.1 encode each user's raw input data as a one-hot vector. The information transmitted by each user is represented as s, where s is binary bit data and there are m possible messages, with m = 2^b and b the number of bits per transmission. The randomly generated binary input data is encoded as an m-dimensional one-hot vector in which only one element is 1 and the remaining elements are 0. The encoded one-hot vector is used as the input of each code word mapper. Considering that each user transmits 2 bits of information at a time, there are 4 possible messages, 00, 01, 10 and 11, encoded as the one-hot vectors (1,0,0,0), (0,1,0,0), (0,0,1,0) and (0,0,0,1) respectively.
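As an illustration of this step only (the function name and the bit-to-index convention below are assumptions, not part of the original disclosure), a minimal Python sketch that maps b-bit groups to m = 2^b dimensional one-hot vectors is:

import numpy as np

def one_hot_encode(bits, b=2):
    # Map each group of b bits to an m = 2**b dimensional one-hot vector.
    m = 2 ** b
    bits = np.asarray(bits).reshape(-1, b)
    idx = bits.dot(1 << np.arange(b)[::-1])   # binary group -> integer index (MSB first)
    return np.eye(m)[idx]                     # one one-hot row per transmitted symbol

# e.g. the bit pairs 00, 01, 10, 11 give (1,0,0,0), (0,1,0,0), (0,0,1,0), (0,0,0,1)
print(one_hot_encode([0, 0, 0, 1, 1, 0, 1, 1]))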
1.2 build the SCMA code word mapper based on the noise reduction self-encoder and the fully-connected neural network. As shown in fig. 2, the idea is inspired by the noise reduction self-encoder: an encoder that can merely recover the original signal is not necessarily the best one; a good feature is one from which the "contaminated" original data can be encoded, decoded and recovered, and hence the true original data can be recovered as well. A certain proportion of random noise is therefore added before each fully-connected deep neural network unit (DNN unit) so that the neural network finds more robust features, and in this way a code word mapper corresponding to a different resource block is constructed for each user. Each code word mapper learns the complex code word onto which the transmission data of a single user is mapped on the corresponding resource block. Fig. 2 provides the network structure diagram of a code word mapper, and the code word mapper model is deployed in the following steps:
1.2.1 randomly generating a binary bit data stream for each user, wherein each user encodes every 2 bits of data into a 4-dimensional one-hot vector as the input data of its code word mapper;
1.2.2 inspired by the idea of noise reduction self-encoder, a noise layer is added before each fully connected deep neural network unit (DNN unit), wherein Gaussian white noise is adopted, and the standard deviation of the noise power is fixed to be 0.1;
1.2.3 establishing hidden layers for learning and extracting signal characteristics, wherein the activation function is the relu activation function; the output data of the l-th hidden layer is then expressed as:

y_l = relu(W_l y_{l-1} + b_l)

wherein W_l represents the weight of the l-th layer and b_l the bias of the l-th layer; y_{l-1} represents the output of the (l-1)-th hidden layer; the output layer uses a linear activation function, i.e. the input data from the previous layer is passed through unchanged;
1.2.4 in each hidden layer, the data is batch-normalized before the nonlinear operation, as follows:
in batch training, each batch contains c training examples; the normalization operation performs the following transformation on the activation value of each neuron in the hidden layer:

x̂^(h) = (x^(h) - E[x^(h)]) / √(Var[x^(h)])        (7)

wherein h denotes the h-th neuron, x is the linear activation value of a hidden-layer neuron, i.e. x = Wu + b, where W and b are the parameters of the neural network (W represents a weight, b represents a bias) and u is the output value of a neuron in the previous layer; E(x) and Var(x) are respectively the mean and the variance of the c activation values x obtained from the c examples;
because this transformation performs normalization, the standard deviation is constrained to 1, which destroys the feature distribution learned by this layer of the network; therefore, two adjustable parameters, scale and shift, are added for each neuron; these two parameters are learned through training and are used to transform and reconstruct the normalized activations, thereby enhancing the expressive power of the network, i.e. the following scale-and-shift operation is applied to the normalized activations:

y^(h) = γ^(h) x̂^(h) + β^(h)        (8)

wherein h denotes the h-th neuron, γ and β are learnable reconstruction parameters, x̂^(h) denotes the normalized neuron activation value, and y^(h) denotes the final output value of the h-th neuron after reconstruction through the transformation of formula (8);
1.2.5 the output is expressed as f_kj(s_j; θ_1), the complex code word onto which the j-th user's data is mapped on the k-th resource block, where θ_1 denotes the weights and biases of the code word mapper, the important parameters that require training updates. As shown in fig. 2, the output layer outputs 2-dimensional data representing the real part and the imaginary part of the complex code word respectively.
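One possible realization of such a code word mapper, sketched with tf.keras: the 0.1 noise standard deviation and the batch-normalization-before-relu ordering follow the description above, while the hidden-layer widths and the builder name are assumptions.

from tensorflow.keras import layers, Model

def build_codeword_mapper(m=4, hidden=(16, 16), noise_std=0.1):
    # One mapper per (user, resource) edge of the factor graph.
    s = layers.Input(shape=(m,))                    # one-hot user data
    x = layers.GaussianNoise(noise_std)(s)          # noise layer before the DNN unit (active during training only)
    for units in hidden:
        x = layers.Dense(units)(x)
        x = layers.BatchNormalization()(x)          # batch normalization before the nonlinearity
        x = layers.Activation('relu')(x)
    codeword = layers.Dense(2, activation='linear')(x)   # real and imaginary part of the complex code word
    return Model(s, codeword)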
1.3 obtaining factor graph by the condition of each user occupying resource block, thus obtaining corresponding factor graph matrix, connecting the SCMA code word mappers according to the factor graph matrix to form an SCMA coder, and then sending the superposed code word to the corresponding resource block by the transmitter. The method comprises the following specific steps:
1.3.1 obtaining a factor graph according to the condition that each user node occupies the resource node, thereby obtaining a corresponding factor graph matrix, wherein the situation of 4 resource blocks of 6 users is considered, data of 3 users are transmitted on one resource block at the same time, and data of 1 user is multiplexed on 2 resource blocks. The factor graph matrix used here is as follows:
F is a 4x6 binary factor graph matrix whose rows correspond to the 4 resource nodes (three 1s per row) and whose columns correspond to the 6 user nodes (two 1s per column).
as shown in the SCMA system coding part of fig. 1, a codeword mapper is disposed at each edge connecting a user node and a resource node according to a factor graph matrix to learn a codeword mapping input data of the user node to a corresponding resource block;
1.3.2 as shown in fig. 1, the superposition operation is performed on the output data of a plurality of user code words, i.e. code word mappers, on the same resource block, which indicates that the data of a plurality of users is multiplexed on one resource block. And the transmitter sends the superposed code words to the corresponding resource blocks.
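To make the connection concrete, the sketch below assembles an encoder from the code word mapper sketch above and superposes the mapper outputs per resource block. The particular 4x6 arrangement of F shown here is only an illustrative matrix with the stated row and column weights, not necessarily the matrix used in the patent.

import numpy as np
from tensorflow.keras import layers, Model

# Illustrative 4x6 factor graph matrix: rows = resource blocks, columns = users.
F = np.array([[1, 1, 1, 0, 0, 0],
              [1, 0, 0, 1, 1, 0],
              [0, 1, 0, 1, 0, 1],
              [0, 0, 1, 0, 1, 1]])

def build_encoder(F, m=4):
    users = [layers.Input(shape=(m,)) for _ in range(F.shape[1])]   # one one-hot input per user
    resources = []
    for k in range(F.shape[0]):
        # one code word mapper per edge (user j, resource k), then non-orthogonal superposition
        edges = [build_codeword_mapper(m)(users[j]) for j in range(F.shape[1]) if F[k, j]]
        resources.append(layers.Add()(edges))
    x = layers.Concatenate()(resources)             # 4 resource blocks x (Re, Im) = 8-dim transmit signal
    return Model(users, x)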
2. The code words of all users on each resource block are transmitted in superposition; taking the influence of channel noise into account, the signal on each resource block is superposed with the channel noise. The specific steps are as follows:
2.1 white Gaussian noise is chosen as the channel noise, where the variance of the channel noise power is β, expressed as:

β = E[|x|²] / (2η(E_b/N_0))

wherein E[|x|²] represents the transmitted signal power, η represents the spectral efficiency, and E_b/N_0 represents the bit signal-to-noise ratio.
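A hedged numpy sketch of this channel step (the function name is an assumption, and the spectral-efficiency value passed in must be chosen to match the actual system; the variance follows the expression above):

import numpy as np

def awgn_channel(x, eb_n0_db, eta):
    # eta: spectral efficiency; eb_n0_db: bit signal-to-noise ratio in dB.
    eb_n0 = 10.0 ** (eb_n0_db / 10.0)
    signal_power = np.mean(np.abs(x) ** 2)          # E[|x|^2]
    beta = signal_power / (2.0 * eta * eb_n0)       # per-dimension noise variance
    return x + np.sqrt(beta) * np.random.randn(*x.shape)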
3. According to the network architecture diagram of the neural network based SCMA decoder shown in fig. 3, a fully connected neural network based SCMA decoder is established at the receiving end. The method comprises the following specific steps:
3.1 establishing an input layer of the SCMA decoder, wherein the channel output signal is used as the input signal of the receiving-end decoder; the received signal on the k-th resource block is:

y_k = Σ_j h_kj f_kj(s_j; θ_1) + n_k

where the sum runs over the users occupying resource block k.
Because the experimental scenario considers a white Gaussian noise channel, the average channel gain is h = 1. The input of the decoder is the superposed code words on the 4 resource blocks plus Gaussian noise; since the superposed code word transmitted on each resource block is a complex signal whose real part and imaginary part are each represented by one dimension of data, the received data of the 4 resource blocks is 8-dimensional, i.e. the input data of the receiving-end decoder is 8-dimensional.
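For illustration, a short numpy sketch (helper name assumed) of how the 4 complex received samples become the 8-dimensional real-valued decoder input:

import numpy as np

def to_decoder_input(y_complex):
    # y_complex: shape (..., 4), one complex sample per resource block.
    y_complex = np.asarray(y_complex)
    return np.concatenate([y_complex.real, y_complex.imag], axis=-1)   # shape (..., 8)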
3.2 at the receiving end, a fully-connected neural network is used to establish the hidden layers of the SCMA decoder for learning and extracting signal characteristics, wherein the hidden-layer activation function is the relu activation function. The output data of the l-th hidden layer is expressed as:

y_l = relu(W_l y_{l-1} + b_l)

wherein W_l represents the weight of the l-th layer, b_l the bias of the l-th layer, and y_{l-1} the output of the (l-1)-th hidden layer.
3.3 to prevent overfitting, the data for each layer was batch normalized before being subjected to nonlinear operations.
3.4 the target output is the one-hot vector fed into the SCMA encoder by each user; because the range of the target output data is [0,1], the tanh activation function is selected as the output activation function, whose expression is:

y_out = tanh(z) = (e^z - e^(-z)) / (e^z + e^(-z))

wherein z is the output value of the last hidden layer.
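A possible tf.keras sketch of such a decoder: the 8-dimensional input and the tanh output of 6 users x 4 one-hot values follow the description above, while the hidden-layer widths and the builder name are assumptions.

from tensorflow.keras import layers, Model

def build_decoder(n_users=6, m=4, hidden=(64, 64, 64)):
    y = layers.Input(shape=(8,))                    # Re/Im of the 4 resource blocks
    x = y
    for units in hidden:
        x = layers.Dense(units)(x)
        x = layers.BatchNormalization()(x)          # batch normalization before the relu nonlinearity
        x = layers.Activation('relu')(x)
    out = layers.Dense(n_users * m, activation='tanh')(x)   # one m-dim estimate per user
    return Model(y, out)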
4. The SCMA codec can be regarded as a complete noise reduction self-encoder structure, and the SCMA codec model based on the noise reduction self-encoder is trained. The method comprises the following specific steps:
4.1 during training, the channel noise is white Gaussian noise of a fixed size, called the training noise; the variance of the channel noise power is β, expressed as:

β = E[|x|²] / (2η(E_b/N_0))

wherein E[|x|²] represents the power of the transmitted signal, η represents the spectral efficiency, and E_b/N_0 represents the bit signal-to-noise ratio;
during training, E_b/N_0 is fixed at 13 dB, i.e. β is fixed, so the training noise has a fixed size;
4.2 the mean square error loss function is used as the end-to-end loss function to calculate the loss between the actual output and the expected output:

L(θ_1, θ_2) = (1/n) Σ_{i=1}^{n} (y_i - y_out,i)²

wherein n is the size of one batch used to train the neural network, and θ_1 and θ_2 are the parameters of the encoder and the decoder respectively.
4.3 make use of gradient descent algorithm to make the loss function converge to minimum: and (3) calculating the partial derivative of the loss function to the neural network parameters by feeding a batch of input data each time, solving the gradient of the loss function, namely the direction in which the loss function descends the fastest, and updating the parameters through the direction to make the value of the loss function smaller and smaller. The trained neural network has optimal parameters to ensure that the error of output data is minimum, and the bit error rate of the SCMA system is also minimum. Here Adam optimizers are used to update the weights and biases of the neural network so that the value of the loss function converges to a minimum.
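Under the assumptions of the encoder and decoder sketches above (and additionally assuming unit transmit-signal power so that β depends only on η and E_b/N_0), the end-to-end training of steps 4.1 to 4.3 might be wired up as follows; the η value and the batch size are placeholders, not values taken from the patent.

import numpy as np
from tensorflow.keras import layers, Model, optimizers

eta = 1.5                                           # placeholder spectral efficiency
beta = 1.0 / (2.0 * eta * 10.0 ** (13.0 / 10.0))    # training-noise variance at Eb/N0 = 13 dB

encoder = build_encoder(F)                          # sketches defined above
decoder = build_decoder()

rx = layers.GaussianNoise(np.sqrt(beta))(encoder.output)   # fixed-size training noise on the channel
autoencoder = Model(encoder.inputs, decoder(rx))

autoencoder.compile(optimizer=optimizers.Adam(), loss='mse')        # MSE loss, Adam updates
# autoencoder.fit(x=list_of_user_onehots, y=concatenated_onehots, epochs=4, batch_size=256)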
Adam is an algorithm that performs first-order gradient-based optimization of a stochastic objective function, based on adaptive estimates of lower-order moments. It can replace the first-order optimization of the traditional stochastic gradient descent procedure, iteratively updating the weights of the neural network from the training data, with high computational efficiency and low memory requirements. The Adam algorithm differs from traditional stochastic gradient descent: stochastic gradient descent keeps a single learning rate (alpha) for updating all weights, and the learning rate does not change during training, whereas Adam designs independent adaptive learning rates for different parameters by computing first-order and second-order moment estimates of the gradient.
The rules for the Adam optimizer to update the parameters are as follows:
The gradient at time step t is calculated:

g_t = ∇_θ L(θ_{t-1})        (9)

where L is the loss function and θ are the parameters of the neural network. First, the exponential moving average m_t of the gradient is calculated:

m_t = β_1 m_{t-1} + (1 - β_1) g_t        (10)

wherein the coefficient β_1 is an exponential decay rate, 0.9 by default. m_0 is initialized to 0, but this biases m_t toward 0 in the early stage of training; therefore the gradient mean m_t is bias-corrected to reduce this influence on the initial training stage:

m̂_t = m_t / (1 - β_1^t)        (11)

Second, the exponential moving average v_t of the squared gradient is calculated:

v_t = β_2 v_{t-1} + (1 - β_2) g_t²        (12)

wherein the coefficient β_2 is an exponential decay rate, 0.999 by default. v_0 is initialized to 0 and, as with m_0, this biases v_t toward 0 in the initial phase of training, so the bias is corrected:

v̂_t = v_t / (1 - β_2^t)        (13)

And finally, the neural network parameters are updated:

θ_t = θ_{t-1} - α m̂_t / (√(v̂_t) + ε)        (14)

wherein ε = 10^(-8) prevents the divisor from becoming 0; θ_{t-1} denotes the parameters at step t-1 and θ_t denotes the updated parameters at step t. It can be seen from the expression that the update step size is adaptively adjusted from both the gradient mean and the gradient square, rather than being determined directly by the current gradient.
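A compact numpy sketch of one Adam update following equations (9) to (14) (the function name and argument layout are assumptions):

import numpy as np

def adam_step(theta, grad, m, v, t, alpha=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    m = beta1 * m + (1 - beta1) * grad           # equation (10)
    v = beta2 * v + (1 - beta2) * grad ** 2      # equation (12)
    m_hat = m / (1 - beta1 ** t)                 # bias correction, equation (11)
    v_hat = v / (1 - beta2 ** t)                 # bias correction, equation (13)
    theta = theta - alpha * m_hat / (np.sqrt(v_hat) + eps)   # parameter update, equation (14)
    return theta, m, v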
As can be seen from fig. 4, the loss function of the SCMA codec based on the noise reduction self-encoder converges within 4 training epochs when the Adam optimizer is used, whereas the SGD optimizer used for the existing deep-learning-based SCMA system in the simulation experiment of fig. 5 requires about 150 training epochs to converge in the same experimental environment and under the same training conditions.
5. The above SCMA codec model based on the noise reduction self-encoder was tested for BER performance. The method comprises the following specific steps:
5.1 randomly generating binary input data bits and encoding them into one-hot vectors to be used as input data of the SCMA encoder;
5.2 calculate the BER performance of the SCMA codec based on the noise reduction self-encoder under different signal-to-noise ratios. As shown in fig. 6, one curve is the BER performance of the traditional MPA algorithm, the circle-marker curve is the simulated BER performance of the SCMA system based on the deep neural network, and the diamond-marker curve is the performance of the embodiment provided by the present invention. As can be seen from fig. 6, compared with the traditional SCMA system the invention reduces the bit error rate, and compared with the existing SCMA system model based on deep learning it further reduces the bit error rate of the system. As can be seen from fig. 4 and fig. 5, the invention also further increases the convergence speed of training.
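A small numpy sketch of the BER computation itself (hard decision per user followed by a bit comparison; the helper name and the bit ordering convention are assumptions):

import numpy as np

def bit_error_rate(tx_bits, decoder_output, b=2):
    # decoder_output: concatenated m-dim estimates, one block of m = 2**b values per user symbol.
    m = 2 ** b
    symbols = decoder_output.reshape(-1, m).argmax(axis=1)        # most likely one-hot index
    rx_bits = (symbols[:, None] >> np.arange(b)[::-1]) & 1        # index -> b bits (MSB first)
    tx_bits = np.asarray(tx_bits).reshape(-1, b)
    return np.mean(rx_bits != tx_bits)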
While the invention has been described in detail with reference to specific embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (5)

1. A method for establishing an SCMA codec model based on a noise reduction self-encoder is characterized in that: the method comprises the following steps in sequence:
(1) firstly, establishing a SCMA (sparse code multiple access) encoder based on a noise reduction self-encoder and a fully-connected neural network, and mapping original input data of a user into code words;
(2) the code words of all users on each resource block are transmitted in a superposition mode, and then the signal on each resource block is superposed with channel noise;
(3) an SCMA decoder based on a full-connection neural network is established at a receiving end, and original input data of all users are decoded;
(4) the SCMA encoder, the channel and the SCMA decoder jointly form an SCMA codec model based on the noise reduction self-encoder, and the SCMA codec model based on the noise reduction self-encoder is trained;
(5) testing the BER performance of the SCMA codec model based on the noise reduction self-encoder;
the step (1) of establishing the SCMA encoder based on the noise reduction self-encoder and the fully-connected neural network specifically comprises the following steps:
(1a) the original input data of each user is encoded into a one-hot vector, and the information transmitted each time by each user is represented as s, wherein s is binary bit data and there are m possible messages, where m = 2^b and b represents the number of bits per transmission; selecting randomly generated binary input data and encoding it into a one-hot vector, namely an m-dimensional vector in which only one element is 1 and the rest of the elements are 0, and using the encoded one-hot vector as the input of each code word mapper;
(1b) establishing an SCMA code word mapper based on a noise reduction self-encoder and a full-connection neural network, adding a noise layer before each full-connection deep neural network unit, namely a DNN unit, and constructing a code word mapper corresponding to different resource blocks for each user;
(1c) acquiring a factor graph according to the condition that each user occupies a resource block, acquiring a corresponding factor graph matrix, and connecting the SCMA code word mappers according to the factor graph matrix to form an SCMA encoder; the transmitter sends the superposed code words to the corresponding resource blocks;
the step (1b) specifically comprises the following steps:
(1b1) randomly generating a binary bit data stream for each user as input data of a codeword mapper;
(1b2) adding a noise layer before each fully-connected deep neural network unit, namely a DNN unit, wherein the noise adopts Gaussian white noise;
(1b3) establishing hidden layers for learning and extracting signal characteristics, wherein the activation function is the relu activation function; the output data of the l-th hidden layer is then expressed as:

y_l = relu(W_l y_{l-1} + b_l)

wherein W_l represents the weight of the l-th layer, b_l denotes the bias of the l-th layer, and y_{l-1} represents the output of the (l-1)-th hidden layer; the output layer uses a linear activation function, i.e. the data from the previous layer is passed through unchanged; W and b are jointly called the parameters θ, where W represents a weight and b represents a bias;
(1b4) in each hidden layer, the data is batch-normalized before the nonlinear operation, as follows:
in batch training, each batch contains c training examples; the normalization operation performs the following transformation on the activation value of each neuron in the hidden layer:

x̂^(h) = (x^(h) - E[x^(h)]) / √(Var[x^(h)])        (7)

wherein h denotes the h-th neuron, x is the linear activation value of a hidden-layer neuron, i.e. x = Wu + b, where W and b are the parameters of the neural network (W represents a weight, b represents a bias) and u is the output value of a neuron in the previous layer; E(x) and Var(x) are respectively the mean and the variance of the c activation values x obtained from the c examples;
because this transformation performs normalization, the standard deviation is constrained to 1, which destroys the feature distribution learned by this layer of the network; therefore, two adjustable parameters, scale and shift, are added for each neuron; these two parameters are learned through training and are used to transform and reconstruct the normalized activations, thereby enhancing the expressive power of the network, i.e. the following scale-and-shift operation is applied to the normalized activations:

y^(h) = γ^(h) x̂^(h) + β^(h)        (8)

wherein h denotes the h-th neuron, γ and β are learnable reconstruction parameters, x̂^(h) denotes the normalized neuron activation value, and y^(h) denotes the final output value of the h-th neuron after reconstruction through the transformation of formula (8);
(1b5) the code word mapper outputs, for a user, the code word mapped onto a resource block, where f_kj(s_j; θ_1) denotes the code word that maps the j-th user's data onto the k-th resource block, θ_1 denotes the weights and biases of the code word mapper, which are the important parameters to be updated by training, and s_j is the original input data sent by user j; the output layer outputs 2-dimensional data representing the real part and the imaginary part of the complex code word respectively.
2. The method of claim 1, wherein the method comprises: the channel noise in the step (2) is Gaussian white noise, the variance of the channel noise power is β, and β is expressed as:

β = E[|x|²] / (2η(E_b/N_0))

wherein E[|x|²] represents the power of the transmitted signal, η represents the spectral efficiency, and E_b/N_0 represents the bit signal-to-noise ratio.
3. The method of claim 1, wherein the method comprises: the step (3) specifically comprises the following steps:
(3a) establishing an input layer of the SCMA decoder, wherein the channel output signal is used as the input signal of the receiving-end decoder; the received signal on the k-th resource block is:

y_k = Σ_j h_kj f_kj(s_j; θ_1) + n_k

where the sum runs over the users occupying resource block k, θ_1 represents the parameters of the SCMA encoder based on the fully-connected neural network, f_kj(s_j; θ_1) is the code word of the j-th user on resource k after passing through the code word mapper, h_kj is the corresponding channel gain, n_k is the noise on resource k, y_k represents the received signal on resource k, and s_j is the original input data of the j-th user;
(3b) using a fully-connected neural network at the receiving end, establishing the hidden layers of the SCMA decoder for learning and extracting signal characteristics, wherein the activation function of the hidden layers is the relu activation function; the output data of the l-th hidden layer is then expressed as:

y_l = relu(W_l y_{l-1} + b_l)

wherein W_l represents the weight of the l-th layer, b_l denotes the bias of the l-th layer, W and b together form the parameters θ, and y_{l-1} represents the output of the (l-1)-th hidden layer;
(3c) to prevent overfitting, the data of each layer is batch-normalized before the nonlinear operation of the relu activation function is applied to it;
(3d) the target output is the one-hot vector fed into the SCMA encoder by each user; because the range of the target output data is [0,1], the tanh activation function is selected as the output activation function, whose expression is:

y_out = tanh(z) = (e^z - e^(-z)) / (e^z + e^(-z))

wherein z is the output value of the last hidden layer, e is the base of the natural logarithm, e ≈ 2.71828, and y_out is the final actual output of the SCMA decoder.
4. The method of claim 1, wherein the method comprises: the training of the SCMA codec model based on the noise reduction auto-encoder in the step (4) specifically includes the following steps:
(4a) during training, the channel noise is white Gaussian noise of a fixed size, called the training noise; the variance of the channel noise power is β, expressed as:

β = E[|x|²] / (2η(E_b/N_0))

wherein E[|x|²] represents the power of the transmitted signal, η represents the spectral efficiency, and E_b/N_0 represents the bit signal-to-noise ratio; during training, E_b/N_0 is fixed at 13 dB, i.e. β is fixed, so the training noise has a fixed size;
(4b) the mean square error loss function is used as the end-to-end loss function for calculating the loss between the actual output and the expected output; the expression of the loss function is:

L(θ_1, θ_2) = (1/n) Σ_{i=1}^{n} (y_i - y_out,i)²

wherein n is the size of one batch used to train the neural network, θ_1 and θ_2 are the parameters of the encoder and the decoder respectively, y denotes the target output of the neural network, and y_out represents the actual output of the neural network;
(4c) the loss function is converged to a minimum using a gradient descent algorithm: sending a batch of input data each time to calculate the partial derivative of the loss function to the neural network parameters, obtaining the gradient of the loss function, namely the direction of the loss function which descends the fastest, and updating the parameters through the direction to make the value of the loss function smaller and smaller; the weights W and bias b of the neural network are updated with Adam optimizers so that the value of the loss function converges to a minimum.
5. The method of claim 1, wherein the method comprises: the step (5) specifically comprises the following steps:
(5a) randomly generating binary input data bits, and encoding them into one-hot vectors to be used as input data of the SCMA encoder;
(5b) calculating BER performance of the SCMA codec based on the noise reduction self-encoder under different signal-to-noise ratios.
CN201910746945.0A 2019-08-14 2019-08-14 Method for establishing SCMA codec model based on noise reduction self-encoder Active CN110474716B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910746945.0A CN110474716B (en) 2019-08-14 2019-08-14 Method for establishing SCMA codec model based on noise reduction self-encoder

Publications (2)

Publication Number Publication Date
CN110474716A CN110474716A (en) 2019-11-19
CN110474716B true CN110474716B (en) 2021-09-14

Family

ID=68511059

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910746945.0A Active CN110474716B (en) 2019-08-14 2019-08-14 Method for establishing SCMA codec model based on noise reduction self-encoder

Country Status (1)

Country Link
CN (1) CN110474716B (en)

Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112838908B (en) * 2019-11-22 2022-10-25 华为技术有限公司 Communication method, device and system based on deep learning
CN110942100B (en) * 2019-11-29 2023-04-07 山东大学 Working method of spatial modulation system based on deep denoising neural network
CN111130697B (en) * 2019-12-24 2022-04-19 重庆邮电大学 Method for reducing complexity of communication physical layer transmission system based on automatic encoder
CN111182705B (en) * 2020-01-03 2021-01-01 西安电子科技大学 Time-varying plasma diagnosis method and diagnosis system based on automatic encoder
CN111310331B (en) * 2020-02-12 2022-03-25 成都理工大学 Shell model construction method based on conditional variation self-coding
US11356305B2 (en) * 2020-02-24 2022-06-07 Qualcomm Incorporated Method to convey the TX waveform distortion to the receiver
US11653228B2 (en) * 2020-02-24 2023-05-16 Qualcomm Incorporated Channel state information (CSI) learning
CN111565061B (en) * 2020-05-28 2021-04-02 安徽大学 MIMO-SCMA downlink communication method based on deep neural network
CN113381799B (en) * 2021-06-08 2022-11-01 哈尔滨工业大学 Low orbit satellite-ground link end-to-end sparse code multiple access method based on convolutional neural network
CN113569464A (en) * 2021-06-21 2021-10-29 国网山东省电力公司电力科学研究院 Wind turbine generator oscillation mode prediction method and device based on deep learning network and multi-task learning strategy
CN115603859A * 2021-07-09 2023-01-13 Huawei Technologies Co., Ltd. (CN) Model training method and related device
CN113627337B (en) * 2021-08-10 2023-12-05 吉林大学 Force touch signal processing method based on stack type automatic coding
CN113794536B (en) * 2021-09-15 2024-02-23 苏州米特希赛尔人工智能有限公司 Artificial intelligent channel coding and decoding method and device
CN113992313B (en) * 2021-10-25 2023-07-25 安徽大学 Balanced network assisted SCMA encoding and decoding method based on deep learning
CN113890622B (en) * 2021-11-08 2023-01-10 西南交通大学 Long-distance passive optical network demodulation method based on graph neural network
CN114640423B (en) * 2022-01-13 2023-07-25 北京邮电大学 Transmission method and related equipment for joint coding of distributed semantic information source channels
CN114689700B (en) * 2022-04-14 2023-06-06 电子科技大学 Low-power EMAT signal noise reduction method based on stack-type self-encoder
CN114866119B (en) * 2022-04-15 2023-09-26 电子科技大学长三角研究院(湖州) Mixed wave beam forming method under imperfect channel state information condition
CN115550934B (en) * 2022-11-29 2023-04-07 安徽电信规划设计有限责任公司 Hybrid multiple access heterogeneous network multi-user detection method based on deep learning
CN115865129B (en) * 2022-12-01 2024-03-29 电子科技大学 Narrowband interference intelligent elimination method based on denoising self-encoder

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107665365A (en) * 2016-07-27 2018-02-06 三星电子株式会社 Accelerator and its operating method in convolutional neural networks
CN109039534A (en) * 2018-06-20 2018-12-18 东南大学 A kind of sparse CDMA signals detection method based on deep neural network
CN109787715A (en) * 2018-12-18 2019-05-21 中国科学院深圳先进技术研究院 The DNN coding/decoding method and decoded communications equipment of SCMA system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Deep Learning-Aided SCMA; M. Kim, N. Kim, W. Lee and D. Cho; IEEE Communications Letters; 2018-11-11; Vol. 22, No. 4; Sections II-IV *

Also Published As

Publication number Publication date
CN110474716A (en) 2019-11-19

Similar Documents

Publication Publication Date Title
CN110474716B (en) Method for establishing SCMA codec model based on noise reduction self-encoder
CN110445581B (en) Method for reducing channel decoding error rate based on convolutional neural network
CN111901024B (en) MIMO channel state information feedback method based on fitting depth learning resistance
CN113381828B (en) Sparse code multiple access random channel modeling method based on condition generation countermeasure network
CN109361404A (en) A kind of LDPC decoding system and interpretation method based on semi-supervised deep learning network
CN110932734B (en) Deep learning channel decoding method based on alternative direction multiplier method
CN107743056B (en) SCMA (sparse code multiple access) multi-user detection method based on compressed sensing assistance
CN109039534A (en) A kind of sparse CDMA signals detection method based on deep neural network
CN109728824B (en) LDPC code iterative decoding method based on deep learning
CN107864029A (en) A kind of method for reducing Multiuser Detection complexity
CN110430013B (en) RCM method based on deep learning
CN109768857B (en) CVQKD multidimensional negotiation method using improved decoding algorithm
Xie et al. Massive unsourced random access for massive MIMO correlated channels
Shao et al. Attentioncode: Ultra-reliable feedback codes for short-packet communications
CN111711455A (en) Polarization code BP decoding method based on neural network
CN109831281B (en) Multi-user detection method and device for low-complexity sparse code multiple access system
CN113114269A (en) Belief propagation-information correction decoding method
CN113992313B (en) Balanced network assisted SCMA encoding and decoding method based on deep learning
CN116938662A (en) Constellation probability shaping method and device based on recurrent neural network training optimization
CN105376185A (en) Constant modulus blind equalization processing method based on optimization of DNA shuffled frog leaping algorithm in communication system
CN111049531B (en) Deep learning channel decoding method based on alternative direction multiplier method of piecewise linearity penalty function
CN112787694A (en) Low-complexity detection algorithm of MIMO-SCMA system based on expected propagation
CN106911431B (en) Improved partial edge information transmission method applied to demodulation process of sparse code multiple access system
CN107248876B (en) Generalized spatial modulation symbol detection method based on sparse Bayesian learning
Njoku et al. BLER performance evaluation of an enhanced channel autoencoder

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant