CN113098804B - Channel state information feedback method based on deep learning and entropy coding - Google Patents


Info

Publication number
CN113098804B
CN113098804B (application CN202110334430.7A)
Authority
CN
China
Prior art keywords
entropy
decoder
encoder
feature
deep learning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110334430.7A
Other languages
Chinese (zh)
Other versions
CN113098804A (en)
Inventor
郑添月
凌泰炀
姚志伟
田佳辰
伍诗语
郑怀瑾
王闻今
李潇
金石
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southeast University
Original Assignee
Southeast University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southeast University
Priority to CN202110334430.7A
Publication of CN113098804A
Application granted
Publication of CN113098804B
Legal status: Active
Anticipated expiration

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L25/00: Baseband systems
    • H04L25/02: Details; arrangements for supplying electrical power along data transmission lines
    • H04L25/0202: Channel estimation
    • H04L25/024: Channel estimation algorithms
    • H04L25/0242: Channel estimation algorithms using matrix methods
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04B: TRANSMISSION
    • H04B7/00: Radio transmission systems, i.e. using radiation field
    • H04B7/02: Diversity systems; multi-antenna systems, i.e. transmission or reception using multiple antennas
    • H04B7/04: Diversity systems using two or more spaced independent antennas
    • H04B7/0413: MIMO systems
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L25/00: Baseband systems
    • H04L25/02: Details; arrangements for supplying electrical power along data transmission lines
    • H04L25/0202: Channel estimation
    • H04L25/024: Channel estimation algorithms
    • H04L25/0256: Channel estimation using minimum mean square error criteria
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00: Reducing energy consumption in communication networks
    • Y02D30/70: Reducing energy consumption in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Power Engineering (AREA)
  • Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Radio Transmission System (AREA)

Abstract

The invention discloses a channel state information feedback method based on deep learning and entropy coding. First, at the user end, the channel matrix of the MIMO channel state information is preprocessed and key matrix elements are selected to reduce the computational load, yielding the channel matrix H actually used for feedback. Second, a model combining a deep-learning feature encoder with entropy coding is built at the user end to encode the channel matrix H into a binary bit stream. At the base station end, a model combining a deep-learning feature decoder with entropy decoding is built to reconstruct the original channel matrix estimate from the binary bit stream. The model is trained to obtain its parameters and the reconstructed channel matrix estimate Ĥ. Finally, the trained model based on deep learning and entropy coding is used for compressed sensing and reconstruction of the channel information. The invention reduces the feedback overhead of massive MIMO channel state information.

Description

Channel state information feedback method based on deep learning and entropy coding
Technical Field
The invention relates to a large-scale MIMO channel state information feedback method based on deep learning and entropy coding.
Background
Massive MIMO (massive multiple-input multiple-output) is considered a key technology for 5G and beyond-5G/6G communication systems. By using multiple transmit and receive antennas, a MIMO system can significantly increase capacity without additional bandwidth. These potential advantages, however, rest on the base station accurately acquiring the channel state information so that multi-user interference can be eliminated through precoding. In an FDD (frequency division duplex) massive MIMO system, the uplink and downlink work on different frequency points, so the downlink channel state information must be obtained at the user end and transmitted back to the base station through a feedback link. Since the base station uses a large number of antennas, feeding back the complete channel state information would incur a huge resource overhead, which is impractical. Quantization- or codebook-based methods are therefore usually used to reduce the overhead, but such methods lose channel state information to some extent, and their overhead still grows linearly with the number of antennas, so they are not desirable in massive MIMO systems.
Research on channel state information feedback for massive MIMO systems has focused on reducing the feedback overhead by exploiting the spatial and temporal correlation of the channel state information. In particular, the correlated channel state information can be transformed into an uncorrelated sparse vector in some basis, and a sufficiently accurate estimate of the sparse vector can then be recovered from an underdetermined linear system using compressed sensing. Concretely, the channel state information is transformed into a sparse matrix under a certain basis and randomly compressed and sampled with a compressed sensing method to obtain a low-dimensional measurement; the measurement is transmitted to the base station over the feedback link at a small resource cost, and the base station reconstructs the original sparse channel matrix from it by means of compressed sensing theory. This compressed-sensing approach is among the more advanced channel feedback methods at present, but it still has the following problems: compressed sensing generally relies on the assumption that the channel is sparse in some basis, whereas in practice the channel is not completely sparse in any transform basis and has a more complex, possibly uninterpretable, structure; compressed sensing obtains the low-dimensional compressed signal by random projection, so the channel structure is not fully exploited; and most existing compressed sensing algorithms are iterative, with slow reconstruction and huge computational overhead, posing a great challenge to the real-time performance of the system.
To address these problems, CsiNet, a deep-learning-based channel state information sensing and recovery network, has been proposed. In actual communication, however, to further improve the accuracy of channel information recovery and reduce feedback overhead, the coding and decoding processes at the transmitting and receiving ends must also be considered, and at present very little literature addresses this aspect.
Disclosure of Invention
The technical problem is as follows: the invention provides a massive MIMO channel state information feedback method based on deep learning and entropy coding. By combining deep-learning feature encoding and decoding with entropy encoding and decoding, the model can quickly and accurately reconstruct channel state information from feedback information at a lower bit rate, solving the problem of high channel state information feedback overhead in massive MIMO systems and realizing a better trade-off between bit rate and channel information feedback accuracy.
The technical scheme is as follows: the invention discloses a channel state information feedback method based on deep learning and entropy coding, which comprises the following steps:
step 1, at the user end, preprocessing the channel matrix of the MIMO channel state information and selecting key matrix elements to reduce the computational load, obtaining the channel matrix H actually used for feedback;
step 2, at a user side, constructing a model combining a deep learning characteristic encoder and entropy coding, and encoding a channel matrix H into a binary bit stream;
step 3, at the base station end, constructing a model combining a deep learning feature decoder and entropy decoding, and reconstructing the original channel matrix estimate Ĥ from the binary bit stream obtained in step 2;
step 4, training the combined model obtained by combining step 2 and step 3, simultaneously optimizing the entropy output by the entropy encoder and the reconstruction mean square error MSE during training, striking a balance between the coding compression ratio and the recovery accuracy, to obtain the parameters of the combined model and the output reconstructed original channel matrix estimate Ĥ;
step 5, applying the combined deep-learning feature encoder and entropy coding model trained in step 4 to compressed sensing and reconstruction of the channel information.
Wherein,
The user-end part of the model in step 2 combining the deep learning feature encoder and entropy encoding consists of a feature encoder, a uniform unit scalar quantizer, and an entropy encoder.
The feature encoder, uniform unit scalar quantizer, and entropy encoder are specifically:
3.1. Feature encoder: parameters of each layer are randomly initialized; the channel matrix H is the input of the feature encoder, whose output is M = f_f-en(H, Θ_en), where the feature encoder parameters Θ_en are obtained by training, H is the channel matrix, and f_f-en denotes the feature encoder;
3.2. Uniform unit scalar quantizer: in the quantization stage, the uniform unit scalar quantizer rounds each element of M to the nearest integer. However, because rounding is not a differentiable function, it cannot be used in a gradient-based network structure; therefore, during training an independent, identically distributed noise matrix replaces the quantization process. The quantized feature matrix is written as:

M̂ = M + ΔM

where ΔM is a random matrix uniformly distributed in the range -0.5 to 0.5;
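A minimal NumPy sketch of this quantizer and its training-time noise surrogate (function and variable names are illustrative, not from the patent):

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(M, training):
    """Uniform unit scalar quantizer with its training-time surrogate.

    At inference each element of M is rounded to the nearest integer.
    Because rounding has zero gradient almost everywhere, training replaces
    it with additive i.i.d. uniform noise in [-0.5, 0.5]."""
    if training:
        dM = rng.uniform(-0.5, 0.5, size=M.shape)  # noise matrix ΔM
        return M + dM
    return np.rint(M)  # hard rounding at inference time

M = np.array([[0.2, 1.7], [-0.6, 2.4]])
print(quantize(M, training=False))  # -> [[ 0.  2.] [-1.  2.]]
```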
3.3. Entropy coding converts the quantized values into a binary bit stream according to an input probability model, expressed as

s = f_e-en(M̂, P)

where s is the output binary bit stream and P is the probability density function P(M̂; Θ_p), whose parameters Θ_p are obtained by training; M̂ is the quantized feature matrix and f_e-en denotes the entropy encoder.
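The bit cost an entropy coder can approach follows directly from the probability model: a symbol v ideally costs -log2 P(v) bits. A small illustration with a hypothetical probability model (the learned model in the patent is continuous; a discrete table is used here for clarity):

```python
import numpy as np

def ideal_bits(symbols, pmf):
    """Ideal entropy-code length in bits of a symbol sequence under a
    discrete probability model `pmf` (symbol -> probability). A real
    arithmetic coder such as CABAC approaches this bound."""
    return float(sum(-np.log2(pmf[s]) for s in symbols))

pmf = {0: 0.5, 1: 0.25, -1: 0.25}  # hypothetical learned model
print(ideal_bits([0, 0, 1, -1], pmf))  # -> 6.0
```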
The base-station part of the model in step 3 combining the deep learning feature decoder and entropy decoding consists of an entropy decoder and a feature decoder.
The entropy decoder and the feature decoder are specifically:
5.1. The binary bit stream s is fed back to the base station end and passed through the entropy decoder, whose output is

M̂ = f_e-de(s, P)

where P is the probability density function and f_e-de denotes the entropy decoder; according to the probability model, the entropy decoder decodes the binary bit stream into the feature matrix;
5.2. Decoding is performed by the decoder designed at the base station end; parameters of each layer are randomly initialized, and the feature decoder takes the feature matrix M̂ as input and outputs the reconstructed original channel matrix estimate, with the same dimensions as the channel matrix H:

Ĥ = f_f-de(M̂, Θ_de)

where f_f-de denotes the feature decoder, M̂ is the entropy decoder output, and the feature decoder parameters Θ_de are obtained by training; the feature decoder thus decodes the feature matrix into the channel matrix.
The parameters of the combined model in step 4 mainly comprise the convolution kernels and biases of the convolutional layers and the parameters related to entropy coding.
Step 4 trains the combined model in an end-to-end manner, jointly training the parameters of the encoder and the decoder to minimize the cost function; the cost function simultaneously optimizes the entropy output by the entropy encoder and the reconstruction MSE, striking a balance between the coding compression ratio and the recovery accuracy.
Beneficial effects: compared with the prior art, the invention improves channel reconstruction quality at a lower bit rate, realizing channel state information feedback under limited resource overhead. Experimental results show that, at a comparable channel transmission bit rate, the invention achieves a 3-4 dB gain in channel state information estimation over prior work.
Drawings
FIG. 1 is a diagram of the deep learning + entropy coding model encoder network architecture of the present invention;
FIG. 2 is a diagram of the decoder network architecture of the deep learning + entropy coding model of the present invention;
fig. 3 is a block diagram of an exemplary decoder and RefineNet unit of the present invention;
fig. 4 is a flowchart of CABAC entropy coding according to an example of the present invention (entropy decoding is the inverse of this process, and therefore is not described again).
Detailed Description
The present invention will be described in further detail below with reference to the accompanying drawings and a COST 2100 MIMO channel, in which the feature codec employs the CsiNet model and the entropy codec employs a CABAC entropy codec.
This massive MIMO channel state information feedback method based on deep learning uses a data-driven encoder-decoder framework: the encoder at the user end compresses and encodes the channel state information into a low-dimensional codeword, which is conveyed over the feedback link to the decoder at the base station end, where the channel state information is reconstructed. This reduces the feedback overhead of channel state information while improving channel reconstruction quality and speed. The method specifically comprises the following steps:
(1) In the downlink of the MIMO system, the base station end uses N_t = 32 transmitting antennas and the user end uses a single receiving antenna; the MIMO system adopts OFDM carrier modulation with Ñ_c subcarriers. Samples of the channel matrix were generated with the COST 2100 model under these conditions in a 5.3 GHz indoor picocellular scenario, divided into a training set of 100,000 samples, a validation set of 30,000 samples, and a test set of 20,000 samples. The collected data take the form of a complex Ñ_c × N_t matrix in the angular-delay domain. Since the delays between multipath arrival times lie within a limited range, the significant values are concentrated in the first 32 rows, so only the first 32 rows of the original data are needed; the sample fed to the network is therefore of size N_c × N_t = 32 × 32.
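A sketch of this preprocessing, assuming the 2D DFT to the angular-delay domain used in the CsiNet literature (NumPy FFTs stand in for the DFT matrices, and the full subcarrier count of 256 is illustrative, not from the patent):

```python
import numpy as np

rng = np.random.default_rng(1)
Nc_full, Nt, Nc = 256, 32, 32   # 256 subcarrier rows are illustrative only

# Hypothetical channel whose energy sits in the first 32 delay taps, as the
# text assumes for the limited multipath delay spread
H_delay_true = np.zeros((Nc_full, Nt), dtype=complex)
H_delay_true[:Nc, :] = (rng.standard_normal((Nc, Nt))
                        + 1j * rng.standard_normal((Nc, Nt)))

# Spatial-frequency channel on the subcarriers (inverse of the transform below)
H_freq = np.fft.fft(np.fft.ifft(H_delay_true, axis=1), axis=0)

# Preprocessing: transform to the angular-delay domain, keep the first 32 rows
H_delay = np.fft.fft(np.fft.ifft(H_freq, axis=0), axis=1)
H = H_delay[:Nc, :]             # the 32 x 32 channel matrix actually fed back

print(H.shape)                  # -> (32, 32)
```

Truncation discards only near-zero delay taps here, so the 32 × 32 matrix retains essentially all of the channel energy.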
(2) The encoder at the user end is designed as shown in the encoder portion of fig. 1. The complex-field channel matrix H is split into its real and imaginary parts, two real matrices each of size 32 × 32, which serve as the inputs of two channels. First, the feature encoder of the encoder in the CsiNet architecture is a two-channel convolutional layer: it convolves the input with two 3 × 3 two-channel convolution kernels, using appropriate zero padding, a ReLU activation function, and batch normalization, and outputs two 32 × 32 feature maps, i.e., two real 32 × 32 matrices. Second, in the quantization stage, the uniform unit scalar quantizer rounds each element of M to the nearest integer; since rounding is not differentiable and cannot be used in a gradient-based network, during training the quantization process is replaced by adding an independent, identically distributed noise matrix in [-0.5, 0.5] to the feature encoder output, which still yields two real 32 × 32 matrices. Third, to realize the coding, an EntropyBottleneck layer is designed, which models the entropy of a vector and implements a flexible probability density model to estimate the entropy of the input tensor. During training it imposes an entropy constraint on its activations, limiting the amount of information flowing through the layer; after training, the layer can compress any input tensor into a string. As shown in fig. 4, CABAC entropy coding comprises binarization, context modeling, and binary arithmetic coding; based on the input probability model, the quantized values are converted into a binary bit stream, i.e., the compressed and coded codeword s that the user end transmits to the base station.
(3) The decoder at the base station end is designed as shown in the decoder part of fig. 2 and comprises an entropy decoder and a feature decoder. The entropy decoder decodes the binary bit stream into an estimate of the feature matrix. The feature decoder of the CsiNet architecture contains two RefineNet units and one convolutional layer; each RefineNet unit contains an input layer, three convolutional layers, and a path that adds the input-layer data to the last layer, as shown in fig. 3. The output of the entropy decoder, two real matrices of size 32 × 32, is fed to the input layer of the first RefineNet unit as the initialization of the real and imaginary parts of the estimated channel matrix. The second, third, and fourth layers of the RefineNet are convolutional layers with 8, 16, and 2 convolution kernels of size 3 × 3 respectively, using appropriate zero padding, ReLU activation functions, and batch normalization, so that the feature map after each convolution keeps the 32 × 32 size of the original channel matrix H. The input-layer data are added to the output of the third convolutional layer, the last layer of the RefineNet, to form the output of the whole unit. That output, two 32 × 32 feature maps, is fed to a second RefineNet unit, whose input layer copies the output of the previous unit and whose remaining structure is identical. Its two 32 × 32 output feature maps enter the last convolutional layer of the decoder, with a sigmoid activation function that limits the output values to the interval [0, 1], so that the decoder finally outputs two real 32 × 32 matrices as the real and imaginary parts of the reconstructed channel matrix Ĥ.
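The skip path of a RefineNet unit, with the input-layer data added to the last layer's output, can be shown in skeleton form; the three conv layers are abstracted here as arbitrary callables rather than real trained convolutions:

```python
import numpy as np

def refinenet_unit(x, conv_layers):
    """Skeleton of a RefineNet unit: the input-layer data x pass through a
    stack of conv layers (abstracted as callables) and are then added back
    to the last layer's output via the skip path described in the text."""
    y = x
    for layer in conv_layers:
        y = layer(y)
    return x + y

# With the conv branch zeroed out, the skip path returns the input unchanged
x = np.ones((2, 32, 32))
print(np.allclose(refinenet_unit(x, [lambda v: 0.0 * v]), x))  # -> True
```

The skip connection lets each unit learn a residual correction to the current channel estimate instead of re-deriving the whole matrix, which is why stacking two units refines rather than replaces the initialization.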
(4) The cost function of the whole architecture is designed to optimize the entropy output by the entropy encoder and the reconstruction MSE simultaneously, striking a balance between the coding compression rate and the recovery accuracy. It can be written as

L = H(M̂) + λ · MSE(H, Ĥ)

where H(M̂) is the estimated entropy of the quantized features, MSE(H, Ĥ) is the reconstruction mean square error, and λ adjusts the weights of the two optimization objectives. Using the 100,000 training-set samples of the channel matrix H generated in step (1), the Adam optimization algorithm and an end-to-end learning mode jointly train the encoder and decoder parameters, mainly the convolution and entropy coding parameters, to minimize the cost function. The learning rate in the Adam algorithm is 0.001; each iteration computes the gradient over 200 training samples and updates the parameters according to the Adam update rule, traversing the whole training set 1000 times. During training, the validation set is used to select a well-performing model, which is the CsiNet model chosen here; the test set measures the performance of the final model.
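A sketch of this rate-distortion objective, with the rate term computed from a hypothetical discrete probability model of the quantized features (the patent's exact loss may differ in normalization; all names are illustrative):

```python
import numpy as np

def rd_loss(H, H_hat, M_hat, pmf, lam):
    """Rate-distortion cost: ideal entropy-code length (bits) of the
    quantized features under a discrete probability model `pmf`, plus a
    lambda-weighted reconstruction MSE."""
    rate = sum(-np.log2(pmf[int(v)]) for v in M_hat.ravel())
    mse = np.mean((H - H_hat) ** 2)
    return rate + lam * mse

pmf = {0: 0.5, 1: 0.25, -1: 0.25}    # hypothetical learned model
M_hat = np.array([[0, 1], [-1, 0]])  # quantized features: 1+2+2+1 = 6 bits
H = np.zeros((2, 2))
H_hat = np.full((2, 2), 0.1)         # reconstruction with MSE = 0.01
print(round(rd_loss(H, H_hat, M_hat, pmf, lam=100.0), 6))  # -> 7.0
```

Raising λ pushes training toward lower MSE at the cost of more bits; lowering it favors a shorter bit stream, which is exactly the compression-accuracy trade-off described above.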
(5) The trained CsiNet model can be used for channel state information feedback in the MIMO system. Feeding the channel matrix H of the channel state information from step (1) into the CsiNet + entropy coding structure outputs the reconstructed channel matrix Ĥ, from which the original channel state information is recovered.
The embodiments are only intended to illustrate the technical idea of the present invention, which is not limited thereby; any modification made on the basis of the technical scheme according to this technical idea falls within the scope of the present invention.

Claims (5)

1. A massive MIMO channel state information feedback method based on deep learning and entropy coding, characterized by comprising the following steps:
step 1, at the user end, preprocessing the channel matrix of the MIMO channel state information and selecting key matrix elements to reduce the computational load, obtaining the channel matrix H actually used for feedback;
step 2, at the user end, constructing a model combining a deep learning feature encoder and an entropy encoder, and encoding the channel matrix H into a binary bit stream;
step 3, at the base station end, constructing a model combining a deep learning feature decoder and an entropy decoder, and reconstructing the original channel matrix estimate Ĥ from the binary bit stream obtained in step 2;
step 4, training the combined model obtained by combining step 2 and step 3, simultaneously optimizing the entropy output by the entropy encoder and the reconstruction mean square error MSE during training, striking a balance between the coding compression ratio and the recovery accuracy, to obtain the parameters of the combined model and the output reconstructed original channel matrix estimate Ĥ;
Step 5, the deep learning feature-based encoder and entropy coding combined model trained in the step 4 is used for compressed sensing and reconstruction of channel information;
the user-end part of the model in step 2 combining the deep learning feature encoder and entropy encoding consists of a feature encoder, a uniform unit scalar quantizer, and an entropy encoder;
the feature encoder, uniform unit scalar quantizer, and entropy encoder are specifically:
3.1. Feature encoder: parameters of each layer are randomly initialized; the channel matrix H is the input of the feature encoder, whose output is M = f_f-en(H, Θ_en), where the feature encoder parameters Θ_en are obtained by training, H is the channel matrix, and f_f-en denotes the feature encoder;
3.2. Uniform unit scalar quantizer: in the quantization stage, the uniform unit scalar quantizer rounds each element of M to the nearest integer. However, because rounding is not a differentiable function, it cannot be used in a gradient-based network structure; therefore, during training an independent, identically distributed noise matrix replaces the quantization process. The quantized feature matrix is written as:

M̂ = M + ΔM

where ΔM is a random matrix uniformly distributed in the range -0.5 to 0.5;
3.3. Entropy coding converts the quantized values into a binary bit stream according to an input probability model, expressed as

s = f_e-en(M̂, P)

where s is the output binary bit stream and P is the probability density function P(M̂; Θ_p), whose parameters Θ_p are obtained by training; M̂ is the quantized feature matrix and f_e-en denotes the entropy encoder.
2. The massive MIMO channel state information feedback method based on deep learning and entropy coding of claim 1, wherein the base station part of the model combining the deep learning feature decoder and entropy decoding of step 3 is composed of an entropy decoder and a feature decoder.
3. The massive MIMO channel state information feedback method based on deep learning and entropy coding as claimed in claim 2, wherein the entropy decoder and the feature decoder are specifically:
5.1. The binary bit stream s is fed back to the base station end and passed through the entropy decoder, whose output is

M̂ = f_e-de(s, P)

where P is the probability density function and f_e-de denotes the entropy decoder; according to the probability model, the entropy decoder decodes the binary bit stream into the feature matrix;
5.2. Decoding is performed by the decoder designed at the base station; the parameters of each layer are randomly initialized, and the feature decoder takes the feature matrix M̂ as input and outputs a reconstructed estimate of the original channel matrix, with the same dimensions as the channel matrix H:
Ĥ = f_f-de(M̂, Θ_de),
where f_f-de denotes the feature decoder, M̂ is the entropy-decoder output, and the feature-decoder parameters Θ_de are obtained by training; according to these, the feature decoder decodes the feature matrix into a channel matrix.
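The full UE-to-base-station pipeline of claims 1-3 can be sketched end to end. The learned networks f_f-en and f_f-de are stood in for here by a random linear map and its pseudo-inverse (purely hypothetical stand-ins; the patent's encoders and decoders are trained neural networks), and the entropy encode/decode pair is treated as lossless transport of M̂.

```python
import numpy as np

rng = np.random.default_rng(1)

n, k = 8, 4                                       # illustrative dimensions
Theta_en = rng.normal(size=(k, n)) / np.sqrt(n)   # stand-in for f_f-en
Theta_de = np.linalg.pinv(Theta_en)               # stand-in for f_f-de

H = rng.normal(size=(n, 1))                       # channel matrix (vectorised)
M = Theta_en @ H                                  # UE: feature encoding
M_hat = np.round(M)                               # uniform unit quantization
# entropy encode -> bit stream s -> feedback link -> entropy decode
# (lossless, so the base station recovers M_hat exactly)
H_hat = Theta_de @ M_hat                          # BS: feature decoding
assert H_hat.shape == H.shape                     # same dimensions as H
```

Because the entropy coder is lossless, all reconstruction error in Ĥ comes from the compression (k < n) and the quantization of M.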
4. The massive MIMO channel state information feedback method based on deep learning and entropy coding as claimed in claim 1, wherein the parameters of the combined model in step 4 mainly comprise the convolution kernels and biases of the convolutional layers and the entropy-coding-related parameters.
5. The massive MIMO channel state information feedback method based on deep learning and entropy coding as claimed in claim 1, wherein step 4 trains the combined model in an end-to-end manner, jointly training the parameters of the encoder and the decoder so as to minimize the cost function; the cost function simultaneously optimizes the entropy output by the entropy encoder and the reconstruction MSE, striking a balance between the coding compression ratio and the recovery accuracy.
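The joint cost in claim 5 combines a distortion term with a rate term. A minimal sketch, assuming a Lagrange weight λ (the patent does not specify the weighting): the rate is the entropy-coder bit cost and the distortion is the reconstruction MSE.

```python
import numpy as np

def rd_loss(H, H_hat, rate_bits, lam=0.1):
    """Joint rate-distortion cost: reconstruction MSE + λ-weighted rate.

    `rate_bits` is the entropy of the encoder output (bits), `lam` trades
    off compression ratio against recovery accuracy.
    """
    mse = float(np.mean((H - H_hat) ** 2))
    return mse + lam * rate_bits

H = np.zeros((2, 2))
H_hat = np.full((2, 2), 0.1)            # toy reconstruction, MSE = 0.01
loss = rd_loss(H, H_hat, rate_bits=4.0, lam=0.1)
assert abs(loss - 0.41) < 1e-9
```

Minimizing this single scalar end to end is what lets the feature encoder, the probability model Θ_p, and the feature decoder be trained jointly.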
CN202110334430.7A 2021-03-29 2021-03-29 Channel state information feedback method based on deep learning and entropy coding Active CN113098804B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110334430.7A CN113098804B (en) 2021-03-29 2021-03-29 Channel state information feedback method based on deep learning and entropy coding


Publications (2)

Publication Number Publication Date
CN113098804A CN113098804A (en) 2021-07-09
CN113098804B true CN113098804B (en) 2022-08-23

Family

ID=76670727



Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109687897A (en) * 2019-02-25 2019-04-26 西华大学 Superposition CSI feedback method based on the extensive mimo system of deep learning
WO2019115865A1 (en) * 2017-12-13 2019-06-20 Nokia Technologies Oy An apparatus, a method and a computer program for video coding and decoding
CN109921882A (en) * 2019-02-20 2019-06-21 深圳市宝链人工智能科技有限公司 A kind of MIMO coding/decoding method, device and storage medium based on deep learning
WO2020183059A1 (en) * 2019-03-14 2020-09-17 Nokia Technologies Oy An apparatus, a method and a computer program for training a neural network




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant