CN113098804A - Channel state information feedback method based on deep learning and entropy coding - Google Patents

Channel state information feedback method based on deep learning and entropy coding

Info

Publication number
CN113098804A
CN113098804A (application CN202110334430.7A, granted as CN113098804B)
Authority
CN
China
Prior art keywords
entropy
decoder
deep learning
matrix
feature
Prior art date
Legal status
Granted
Application number
CN202110334430.7A
Other languages
Chinese (zh)
Other versions
CN113098804B (en
Inventor
郑添月
凌泰炀
姚志伟
田佳辰
伍诗语
郑怀瑾
王闻今
李潇
金石
Current Assignee
Southeast University
Original Assignee
Southeast University
Priority date
Filing date
Publication date
Application filed by Southeast University filed Critical Southeast University
Priority to CN202110334430.7A priority Critical patent/CN113098804B/en
Publication of CN113098804A publication Critical patent/CN113098804A/en
Application granted granted Critical
Publication of CN113098804B publication Critical patent/CN113098804B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L25/00: Baseband systems
    • H04L25/02: Details; arrangements for supplying electrical power along data transmission lines
    • H04L25/0202: Channel estimation
    • H04L25/024: Channel estimation algorithms
    • H04L25/0242: Channel estimation algorithms using matrix methods
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04B: TRANSMISSION
    • H04B7/00: Radio transmission systems, i.e. using radiation field
    • H04B7/02: Diversity systems; multi-antenna systems, i.e. transmission or reception using multiple antennas
    • H04B7/04: Diversity systems using two or more spaced independent antennas
    • H04B7/0413: MIMO systems
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L25/00: Baseband systems
    • H04L25/02: Details; arrangements for supplying electrical power along data transmission lines
    • H04L25/0202: Channel estimation
    • H04L25/024: Channel estimation algorithms
    • H04L25/0256: Channel estimation using minimum mean square error criteria
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; CROSS-SECTIONAL TECHNOLOGIES
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT]
    • Y02D30/00: Reducing energy consumption in communication networks
    • Y02D30/70: Reducing energy consumption in wireless communication networks


Abstract

The invention discloses a channel state information feedback method based on deep learning and entropy coding. First, at the user side, the channel matrix of the MIMO channel state information is preprocessed and key matrix elements are selected to reduce the amount of computation, yielding the channel matrix H actually used for feedback. Second, a model combining a deep learning feature encoder with entropy coding is built at the user side and encodes the channel matrix H into a binary bit stream. At the base station, a model combining a deep learning feature decoder with entropy decoding is constructed and reconstructs an estimate of the original channel matrix from the binary bit stream. The model is trained to obtain its parameters and the reconstructed channel matrix estimate Ĥ. Finally, the trained model based on deep learning and entropy coding is used for compressed sensing and reconstruction of the channel information. The invention reduces the feedback overhead of massive MIMO channel state information.

Description

Channel state information feedback method based on deep learning and entropy coding
Technical Field
The invention relates to a large-scale MIMO channel state information feedback method based on deep learning and entropy coding.
Background
Massive MIMO (massive multiple-input multiple-output) technology is considered a key technology for 5G and future 6G communication systems. By using multiple transmit and receive antennas, a MIMO system can significantly increase capacity without additional bandwidth. The potential advantages of a massive MIMO system rest on the base station accurately acquiring the channel state information, so that interference among multiple users can be eliminated by precoding. In an FDD (frequency division duplexing) MIMO system, however, the uplink and downlink operate at different frequencies, so the downlink channel state information must be obtained at the user side and transmitted back to the base station over a feedback link. Because the base station uses a large number of antennas, feeding back the complete channel state information would incur a huge resource overhead, which is impractical. Quantization- or codebook-based methods are therefore usually used in practice to reduce the overhead; these lose some channel state information, and their overhead still grows linearly with the number of antennas, so they are not preferable in massive MIMO systems.
Research on channel state information feedback for massive MIMO systems has focused on reducing the feedback overhead by exploiting the spatial and temporal correlation of the channel state information. In particular, the correlated channel state information can be converted into an uncorrelated sparse vector in some basis; a sufficiently accurate estimate of the sparse vector can then be obtained from an underdetermined linear system using compressed sensing. Specifically, the channel state information is transformed into a sparse matrix under a certain basis, and random compressive sampling with a compressed sensing method yields a low-dimensional measurement; the measurement is transmitted to the base station over the feedback link at small resource overhead, and the base station reconstructs the original sparse channel matrix from it by means of compressed sensing theory. This compressed-sensing approach is the current state of the art in channel feedback, but it still has the following problems. Compressed sensing algorithms generally rely on the assumption that the channel is sparse in some basis; in practice the channel is not completely sparse in any transform basis and has a more complex, possibly uninterpretable, structure. Compressed sensing obtains the low-dimensional compressed signal by random projection, so the channel structure is not fully exploited. Moreover, most existing compressed sensing algorithms are iterative, with slow reconstruction and large computational overhead, posing a great challenge to the real-time performance of the system.
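The compressed-sensing pipeline described above (sparsify, randomly project, reconstruct at the base station) can be illustrated numerically. This is a toy sketch with made-up dimensions, not the invention's method; it shows why a sparse vector survives an underdetermined random projection. For brevity the support is assumed known (an oracle stand-in for an iterative solver such as OMP), so recovery reduces to least squares on the active columns.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m, k = 256, 64, 5                 # ambient dim, measurements, sparsity
x = np.zeros(n)
support = rng.choice(n, k, replace=False)
x[support] = rng.normal(size=k)      # k-sparse "channel" vector

Phi = rng.normal(size=(m, n)) / np.sqrt(m)   # random projection, m << n
y = Phi @ x                                   # low-dimensional measurement fed back

# With the support known, least squares on the k active columns recovers x
# exactly from only m = 64 of n = 256 coordinates.
coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
x_hat = np.zeros(n)
x_hat[support] = coef

print(np.allclose(x, x_hat, atol=1e-6))
```

A real reconstruction algorithm must estimate the support itself, which is where the iterative cost criticized above comes from.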
To solve the above problems, CsiNet, a deep-learning-based channel state information sensing and recovery network, has been proposed. In actual communication, to further improve the accuracy of channel information recovery and reduce feedback overhead, the coding and decoding processes at the transmitting and receiving ends must also be considered; at present, few publications address this aspect.
Disclosure of Invention
The technical problem is as follows: the invention provides a massive MIMO channel state information feedback method based on deep learning and entropy coding. By combining deep learning feature encoding and decoding with entropy encoding and decoding, the method can quickly and accurately reconstruct channel state information from feedback information at a low bit rate, solves the problem of high channel state information feedback overhead in massive MIMO systems, and achieves a better balance between the bit rate and the channel information feedback accuracy.
The technical scheme is as follows: the invention discloses a channel state information feedback method based on deep learning and entropy coding, which comprises the following steps:
step 1, at the user side, preprocessing the channel matrix of the MIMO channel state information and selecting key matrix elements to reduce the amount of computation, obtaining the channel matrix H actually used for feedback;
step 2, at the user side, constructing a model combining a deep learning feature encoder and entropy coding, and encoding the channel matrix H into a binary bit stream;
step 3, at the base station end, constructing a model combining a deep learning feature decoder and entropy decoding, and reconstructing the original channel matrix estimate Ĥ from the binary bit stream obtained in step 2;
step 4, training the combined model obtained from steps 2 and 3, simultaneously optimizing the entropy output by the entropy encoder and the reconstruction mean square error (MSE) during training, balancing between the compression rate and the recovery accuracy of the coding, and obtaining the parameters of the combined model and the output reconstructed original channel matrix estimate Ĥ;
and step 5, applying the model combining the deep learning feature encoder/decoder and entropy coding trained in step 4 to compressed sensing and reconstruction of the channel information.
Wherein the user-side part of the model combining the deep learning feature encoder and entropy coding in step 2 consists of a feature encoder, a uniform unit scalar quantizer, and an entropy encoder.
The feature encoder, the uniform unit scalar quantizer, and the entropy encoder are specifically:
3.1. feature encoder: the parameters of each layer are randomly initialized; taking the channel matrix H as the input of the feature encoder, its output is M = f_f-en(H, Θ_en), where the feature encoder parameters Θ_en are obtained by training, H is the channel matrix, and f_f-en denotes the feature encoder;
3.2. uniform unit scalar quantizer: in the quantization stage, the uniform unit scalar quantizer rounds each element of M to the nearest integer. However, because rounding is not a differentiable function, it cannot be used in a gradient-based network structure; in training, an independent and identically distributed noise matrix therefore replaces the quantization process. The quantized feature matrix is written as
M̂ = M + ΔM,
where ΔM is a random matrix with entries uniformly distributed in the range −0.5 to 0.5;
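A minimal numpy sketch of the quantizer above, with M stood in by arbitrary example values: at inference each element is rounded to the nearest integer, while at training time the non-differentiable rounding is replaced by additive uniform noise in [−0.5, 0.5], whose error has the same distribution as the rounding error.

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(scale=3.0, size=(32, 32))    # feature-encoder output (example values)

M_test = np.round(M)                         # inference: true uniform unit quantizer
dM = rng.uniform(-0.5, 0.5, size=M.shape)    # training: i.i.d. uniform noise
M_train = M + dM                             # differentiable surrogate M̂ = M + ΔM

# Both perturbations stay within half a quantization step of M.
print(np.abs(M_test - M).max() <= 0.5, np.abs(M_train - M).max() <= 0.5)
```

Because the noisy surrogate is differentiable in M, gradients can flow through it to the feature-encoder parameters during end-to-end training.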
3.3. entropy encoder: based on an input probability model, entropy coding converts the quantized values into a binary bit stream, expressed as
s = f_e-en(M̂, P),
where s is the output binary bit stream and P = P(M̂; Θ_p) is the probability density function, whose parameters Θ_p are obtained by training; M̂ is the quantized feature matrix and f_e-en denotes the entropy encoder.
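The role of the probability model P above can be illustrated with a toy discrete example: an ideal entropy coder (arithmetic coding, as in CABAC) spends about −log2 P(symbol) bits per symbol, so the expected bit-stream length approaches the entropy of the quantized symbols. The empirical histogram used here as P is an illustrative assumption, not the patent's learned density model.

```python
import numpy as np

rng = np.random.default_rng(0)
M_hat = np.round(rng.normal(scale=2.0, size=(32, 32)))   # quantized feature matrix

symbols, counts = np.unique(M_hat, return_counts=True)
p = counts / counts.sum()                                # empirical probability model P
entropy_bits = -(p * np.log2(p)).sum() * M_hat.size      # ideal entropy-coded length

# A fixed-length code that ignores the probability model needs
# ceil(log2(#symbols)) bits per element; entropy coding does better
# whenever the symbol distribution is non-uniform.
fixed_bits = np.ceil(np.log2(len(symbols))) * M_hat.size
print(entropy_bits < fixed_bits)
```

This gap between the entropy and the fixed-length cost is exactly the bit-rate saving the entropy term in the training objective tries to enlarge.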
The base station part of the model combining the deep learning feature decoder and entropy decoding in step 3 consists of an entropy decoder and a feature decoder.
The entropy decoder and the feature decoder are specifically:
5.1. the binary bit stream s is fed back to the base station end; the entropy decoder takes s as input and outputs
M̂ = f_e-de(s, P),
where P is the probability density function and f_e-de denotes the entropy decoder; according to P, the entropy decoder decodes the binary bit stream into a feature matrix;
5.2. decoding by the feature decoder designed at the base station end: the parameters of each layer are randomly initialized; the decoder takes the feature matrix M̂ as input and outputs the reconstructed original channel matrix estimate Ĥ, of the same dimensions as the channel matrix H:
Ĥ = f_f-de(M̂, Θ_de),
where f_f-de denotes the feature decoder, M̂ is the entropy decoder output, and the feature decoder parameters Θ_de are obtained by training; according to these, the feature decoder decodes the feature matrix into a channel matrix.
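Putting steps 3.1 to 5.2 together at a shape level: the sketch below stands in for the trained networks f_f-en and f_f-de with a fixed random linear map and its pseudoinverse (an illustrative assumption; the patent uses trained convolutional networks), and elides the entropy coding of the bit stream, which is lossless and does not change M̂.

```python
import numpy as np

rng = np.random.default_rng(0)
Nc = Nt = 32
H = rng.normal(size=(Nc, Nt))                 # channel matrix used for feedback

W_en = rng.normal(size=(256, Nc * Nt)) / 32   # stand-in for f_f-en (trained in reality)
W_de = np.linalg.pinv(W_en)                   # stand-in for f_f-de

M = W_en @ H.ravel()                          # user side: feature encoding (1024 -> 256)
M_hat = np.round(M)                           # quantization; entropy-coded into s and
# ... fed back, then entropy-decoded back to M_hat at the base station ...
H_hat = (W_de @ M_hat).reshape(Nc, Nt)        # base station: feature decoding

mse = np.mean((H - H_hat) ** 2)
print(H_hat.shape, mse)
```

Because the 1024 channel entries are compressed to 256 features, the reconstruction is lossy; training the encoder/decoder pair (rather than using a random map) is what keeps this loss small.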
The parameters of the combined model in step 4 mainly comprise the convolution kernels and biases of the convolutional layers and the related parameters of the entropy coding.
In step 4 the combined model is trained end to end: the parameters of the encoder and decoder are trained jointly to minimize the cost function, which simultaneously optimizes the entropy output by the entropy encoder and the reconstruction MSE, balancing between the compression rate and the recovery accuracy of the coding.
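The cost function just described is a rate-distortion objective: an entropy term estimating the bit rate of the entropy coder plus a weighted MSE term. A numerical sketch follows; the weight λ and the factorized Gaussian density used for the rate term are assumptions for illustration, not values from the patent.

```python
import numpy as np

def rd_loss(H, H_hat, M_hat, sigma=2.0, lam=100.0):
    """Rate term: -log2 P(M_hat) under a toy factorized Gaussian density;
    distortion term: MSE between channel matrix and its reconstruction."""
    log_p = -0.5 * (M_hat / sigma) ** 2 - 0.5 * np.log(2 * np.pi * sigma**2)
    rate_bits = -(log_p / np.log(2)).sum()      # estimated code length in bits
    mse = np.mean((H - H_hat) ** 2)
    return rate_bits + lam * mse, rate_bits, mse

rng = np.random.default_rng(0)
H = rng.normal(size=(32, 32))
H_hat = H + 0.05 * rng.normal(size=H.shape)     # pretend reconstruction
M_hat = np.round(rng.normal(scale=2.0, size=(2, 32, 32)))

loss, rate, mse = rd_loss(H, H_hat, M_hat)
print(rate > 0 and mse > 0)
```

Increasing λ buys reconstruction accuracy at the cost of a longer bit stream, and vice versa, which is the bit-rate/accuracy trade-off the patent describes.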
Advantageous effects: compared with the prior art, the invention improves the channel reconstruction quality at a lower bit rate, realizing channel state information feedback under limited resource overhead. According to experimental results, at a comparable channel transmission bit rate, the invention achieves a 3-4 dB gain in channel state information estimation over previous work.
Drawings
FIG. 1 is a diagram of the deep learning + entropy coding model encoder network architecture of the present invention;
FIG. 2 is a diagram of the decoder network architecture of the deep learning + entropy coding model of the present invention;
fig. 3 is a block diagram of an exemplary decoder and RefineNet unit of the present invention;
fig. 4 is a flowchart of CABAC entropy coding according to an example of the present invention (entropy decoding is the inverse of this process, and therefore is not described again).
Detailed Description
The invention discloses a channel state information feedback method based on deep learning and entropy coding, following steps 1 to 5 and the model components (feature encoder, uniform unit scalar quantizer, entropy encoder, entropy decoder, and feature decoder) described above.
The present invention will be described in further detail below with reference to the accompanying drawings and a COST 2100MIMO channel, in which the feature codec uses the CsiNet model and the entropy codec uses the CABAC entropy codec.
The massive MIMO channel state information feedback method based on deep learning uses a data-driven encoder-decoder framework: the user-side encoder compresses and encodes the channel state information into a low-dimensional codeword, which is conveyed over the feedback link to the base-station decoder, where the channel state information is reconstructed. This reduces the feedback overhead of the channel state information while improving the channel reconstruction quality and speed. The method specifically includes the following steps:
(1) In the downlink of the MIMO system, the base station uses N_t = 32 transmit antennas, the user uses a single receive antenna, and the MIMO system adopts OFDM carrier modulation with Ñ_c subcarriers. Under these conditions, samples of the channel matrix were generated using the COST 2100 model in a 5.3 GHz indoor picocellular network scenario and divided into a training set of 100000 samples, a validation set of 30000 samples, and a test set of 20000 samples. Since the delays between multipath arrival times lie within a limited range, the significant values are concentrated in the first 32 rows of the data, so only the first 32 rows of the original data are needed; the sample used for the network is therefore of size N_c × N_t = 32 × 32.
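The preprocessing of step (1), keeping only the first rows in the delay domain, can be sketched as follows. The DFT along the subcarrier axis and the sizes used are assumptions for illustration (such a transform is common in the CSI-feedback literature; the exact transform and subcarrier count are not spelled out here).

```python
import numpy as np

rng = np.random.default_rng(0)
Nc_full, Nt, Nc = 1024, 32, 32   # subcarriers, tx antennas, retained delay taps (example sizes)

# Frequency-domain downlink channel (random stand-in for a COST 2100 sample).
H_freq = rng.normal(size=(Nc_full, Nt)) + 1j * rng.normal(size=(Nc_full, Nt))

H_delay = np.fft.fft(H_freq, axis=0) / np.sqrt(Nc_full)   # frequency -> delay domain
H = H_delay[:Nc, :]    # multipath delays concentrate in the first rows; keep 32

print(H.shape)         # the 32 x 32 matrix actually fed to the network
```

Truncating to 32 rows cuts the data fed to the network by a factor of 32 before any learned compression takes place.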
(2) As shown in the encoder portion of fig. 1, the encoder at the user side splits the complex-valued channel matrix H into its real and imaginary parts, giving two real 32 × 32 matrices as the inputs of two channels. First, the feature encoder (the encoder of the CsiNet architecture) is a two-channel convolutional layer: two 3 × 3 two-channel convolution kernels are convolved with the input, and with appropriate zero padding, a ReLU activation function, and batch normalization the layer outputs two 32 × 32 feature maps, i.e. two real 32 × 32 matrices. Second, in the quantization stage, the uniform unit scalar quantizer rounds each element of M to the nearest integer. However, since rounding is not a differentiable function and cannot be used in a gradient-based network structure, during training a noise matrix with entries independently and identically distributed in −0.5 to 0.5 is added to the feature encoder output instead of quantization; the output is still two real 32 × 32 matrices. Third, to realize the coding, an EntropyBottleneck layer is designed, which models the entropy of a vector and implements a flexible probability density model to estimate the entropy of the input tensor. During training this can be used to impose an entropy constraint on its activations, limiting the amount of information flowing through the layer; after training, the layer can compress any input tensor into a string. Referring to fig. 4, CABAC entropy coding comprises binarization, context modeling, and binary arithmetic coding; based on the input probability model, the quantized values are converted into a binary bit stream, i.e. the compressed and coded codeword s transmitted by the user side to the base station.
(3) The base station side decoder, shown in the decoder portion of fig. 2, comprises an entropy decoder and a feature decoder. The entropy decoder decodes the binary bit stream into an estimate of the feature matrix. The feature decoder in the CsiNet architecture contains two RefineNet units and one convolutional layer; each RefineNet unit contains an input layer, three convolutional layers, and a path that adds the input layer data to the last layer, as shown in fig. 3. The output of the entropy decoder is fed to the first layer of the feature decoder, i.e. a RefineNet unit whose input layer (two real 32 × 32 matrices) serves as the initialization of the real and imaginary parts of the estimated channel matrix. The second, third, and fourth layers of the RefineNet are convolutional layers with 8, 16, and 2 convolution kernels of size 3 × 3 respectively, using appropriate zero padding, ReLU activation functions, and batch normalization, so that the feature map after each convolution keeps the 32 × 32 size of the original channel matrix H. In addition, the input layer data are added to the data of the third convolutional layer, i.e. the last layer of the RefineNet, forming the output of the whole unit. The output of the first RefineNet unit, two 32 × 32 feature maps, is fed to the second RefineNet unit, whose input layer copies the previous unit's output. The remaining output is fed to the last convolutional layer of the decoder, whose sigmoid activation function limits output values to the interval [0, 1], so that the final output of the decoder is two real 32 × 32 matrices forming the real and imaginary parts of the final reconstructed channel matrix Ĥ.
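A minimal numpy sketch of one RefineNet unit as described above: three zero-padded 3 × 3 convolutions with 8, 16, and 2 output channels, and a skip path adding the input to the last layer's output, so the unit refines rather than replaces its input. The kernels are random stand-ins for trained weights, and batch normalization is omitted for brevity.

```python
import numpy as np

def conv3x3(x, k):
    """x: (C_in, 32, 32); k: (C_out, C_in, 3, 3); zero padding keeps 32x32."""
    c_out = k.shape[0]
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    out = np.zeros((c_out, 32, 32))
    for o in range(c_out):
        for i in range(x.shape[0]):
            for di in range(3):
                for dj in range(3):
                    out[o] += k[o, i, di, dj] * xp[i, di:di + 32, dj:dj + 32]
    return out

relu = lambda z: np.maximum(z, 0.0)

rng = np.random.default_rng(0)
x = rng.normal(size=(2, 32, 32))            # input layer: real + imaginary parts
k1 = rng.normal(size=(8, 2, 3, 3)) * 0.1    # random stand-ins for trained kernels
k2 = rng.normal(size=(16, 8, 3, 3)) * 0.1
k3 = rng.normal(size=(2, 16, 3, 3)) * 0.1

y = conv3x3(relu(conv3x3(relu(conv3x3(x, k1)), k2)), k3)
out = x + y                                 # skip path: add input to the last layer
print(out.shape)                            # (2, 32, 32), same size as the input
```

The skip connection means the convolutional stack only has to learn a correction to the entropy decoder's coarse estimate, which eases training of the deep decoder.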
(4) The cost function of the whole CsiNet architecture optimizes the entropy output by the entropy encoder and the reconstruction MSE simultaneously, balancing between the compression rate and the recovery accuracy of the coding. It can be written as
L = E[−log₂ P(M̂; Θ_p)] + λ · ‖H − Ĥ‖²,
where λ adjusts the weights of the two optimization objectives. Using the 100000 training-set samples of the channel matrix H generated in step (1), the Adam optimization algorithm and an end-to-end learning mode jointly train the parameters of the encoder and decoder, mainly the convolution and entropy coding parameters, to minimize the cost function. The learning rate in the Adam algorithm is 0.001; each iteration computes the gradient from 200 training-set samples and updates the parameters according to the Adam update formula, traversing the whole training set 1000 times. During training, the validation set is used to select a well-performing model (the CsiNet model here is the selected model); the test set evaluates the performance of the final model.
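The Adam update referenced above (learning rate 0.001, mini-batches of 200 samples) follows the standard rule. Below is a one-tensor sketch of a single step; the default values of β₁, β₂, and ε are assumptions, since they are not listed here.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: exponential moving averages of the gradient and its
    square, bias correction, then a per-element step of at most ~lr."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad**2
    m_hat = m / (1 - b1**t)               # bias-corrected first moment
    v_hat = v / (1 - b2**t)               # bias-corrected second moment
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

rng = np.random.default_rng(0)
theta = rng.normal(size=(3, 3))           # e.g. one 3x3 convolution kernel
m = np.zeros_like(theta)
v = np.zeros_like(theta)

grad = 2 * theta                           # gradient of a toy loss ||theta||^2
theta_new, m, v = adam_step(theta, grad, m, v, t=1)

# On the first step each element moves by roughly lr toward the minimum.
print(np.abs(theta_new - theta).max() <= 0.0011)
```

The per-element step size is bounded by the learning rate regardless of gradient scale, which is why a single rate of 0.001 can serve both the convolutional and the entropy-model parameters.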
(5) The trained CsiNet model can be used for channel state information feedback in the MIMO system: the channel matrix H of the channel state information from step (1) is input to the CsiNet + entropy coding structure, and the output reconstructed channel matrix Ĥ recovers the original channel state information.
The embodiments above only illustrate the technical idea of the present invention, which is not limited to them; any modification made on the basis of the technical scheme according to the technical idea of the invention falls within the scope of the invention.

Claims (7)

1. A channel state information feedback method based on deep learning and entropy coding is characterized by comprising the following steps:
step 1, at the user side, preprocessing the channel matrix of the MIMO channel state information and selecting key matrix elements to reduce the amount of computation, obtaining the channel matrix H actually used for feedback;
step 2, at a user side, constructing a model combining a deep learning characteristic encoder and entropy coding, and encoding a channel matrix H into a binary bit stream;
step 3, at the base station end, constructing a model combining a deep learning feature decoder and entropy decoding, and reconstructing the original channel matrix estimate Ĥ from the binary bit stream obtained in step 2;
step 4, training the combined model obtained from steps 2 and 3, simultaneously optimizing the entropy output by the entropy encoder and the reconstruction mean square error MSE during training, balancing between the compression rate and the recovery accuracy of the coding, and obtaining the parameters of the combined model and the output reconstructed original channel matrix estimate Ĥ;
and step 5, applying the model combining the deep learning feature encoder/decoder and entropy coding trained in step 4 to compressed sensing and reconstruction of the channel information.
2. The massive MIMO channel state information feedback method based on deep learning and entropy coding of claim 1, wherein the user side part of the model combining deep learning feature coder and entropy coding in step 2 is composed of a feature coder, a uniform unit scalar quantizer and an entropy coder.
3. The massive MIMO channel state information feedback method based on deep learning and entropy coding of claim 2, wherein the feature encoder, the uniform unit scalar quantizer and the entropy coder are specifically:
3.1. feature encoder: the parameters of each layer are randomly initialized; taking the channel matrix H as the input of the feature encoder, its output is M = f_f-en(H, Θ_en), where the feature encoder parameters Θ_en are obtained by training, H is the channel matrix, and f_f-en denotes the feature encoder;
3.2. uniform unit scalar quantizer: in the quantization stage, the uniform unit scalar quantizer rounds each element of M to the nearest integer. However, because rounding is not a differentiable function, it cannot be used in a gradient-based network structure; in training, an independent and identically distributed noise matrix therefore replaces the quantization process. The quantized feature matrix is written as
M̂ = M + ΔM,
where ΔM is a random matrix with entries uniformly distributed in the range −0.5 to 0.5;
3.3. entropy encoder: based on an input probability model, entropy coding converts the quantized values into a binary bit stream, expressed as
s = f_e-en(M̂, P),
where s is the output binary bit stream and P = P(M̂; Θ_p) is the probability density function, whose parameters Θ_p are obtained by training; M̂ is the quantized feature matrix and f_e-en denotes the entropy encoder.
4. The massive MIMO channel state information feedback method based on deep learning and entropy coding of claim 1, wherein the base station part of the model combining the deep learning feature decoder and entropy decoding of step (3) is composed of an entropy decoder and a feature decoder.
5. The massive MIMO channel state information feedback method based on deep learning and entropy coding as claimed in claim 4, wherein the entropy decoder and the feature decoder are specifically:
5.1. the binary bit stream s is fed back to the base station, where the entropy decoder takes s as input and outputs

M̂ = f_e-de(s, P)

where P is the probability density function and f_e-de denotes the entropy decoder; according to the probability model, the binary bit stream is decoded into the feature matrix;
5.2. decoding is performed by the feature decoder designed at the base station: the parameters of each layer are randomly initialized; the feature decoder takes the feature matrix M̂, i.e. the output of the entropy decoder, as input and outputs the reconstructed estimate Ĥ of the original channel matrix, with the same dimensions as the channel matrix H:

Ĥ = f_f-de(M̂, Θ_de)

where f_f-de denotes the feature decoder and the feature decoder parameters Θ_de are obtained by training; according to this, the feature decoder decodes the feature matrix into the channel matrix.
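The entropy coder/decoder pair described in 3.3 and 5.1 can be sketched with a toy prefix code built from a symbol probability model; this is a simplified stand-in for the trained probability density P, and all function names are illustrative:

```python
import heapq
from itertools import count

def build_code(probs):
    # Build a Huffman prefix code from a symbol -> probability model.
    tiebreak = count()
    heap = [(p, next(tiebreak), {s: ""}) for s, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p0, _, c0 = heapq.heappop(heap)
        p1, _, c1 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c0.items()}
        merged.update({s: "1" + b for s, b in c1.items()})
        heapq.heappush(heap, (p0 + p1, next(tiebreak), merged))
    return heap[0][2]

def entropy_encode(symbols, code):
    # Concatenate the codeword of each quantized value into a bit stream.
    return "".join(code[s] for s in symbols)

def entropy_decode(bits, code):
    # Walk the bit stream; a prefix code guarantees unambiguous decoding.
    inverse = {b: s for s, b in code.items()}
    out, buf = [], ""
    for bit in bits:
        buf += bit
        if buf in inverse:
            out.append(inverse[buf])
            buf = ""
    return out

# Quantized feature values and a hand-set probability model (in the patented
# method, the model parameters are learned during training).
probs = {-1: 0.2, 0: 0.5, 1: 0.2, 2: 0.1}
code = build_code(probs)
features = [0, 0, 1, -1, 0, 2, 0]
bits = entropy_encode(features, code)
recovered = entropy_decode(bits, code)  # lossless: recovered == features
```

Because entropy coding is lossless, the only reconstruction error in the full pipeline comes from quantization and the feature encoder/decoder, not from the bit-stream stage.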
6. The massive MIMO channel state information feedback method based on deep learning and entropy coding of claim 1, wherein the parameters of the combined model in step (4) mainly comprise the convolution kernels and biases of the convolutional layers and the entropy-coding-related parameters.
7. The massive MIMO channel state information feedback method based on deep learning and entropy coding as claimed in claim 1, wherein step (4) trains the combined model in an end-to-end manner, jointly training the parameters of the encoder and the decoder so as to minimize the cost function; the cost function simultaneously optimizes the entropy of the entropy coder output and the reconstruction MSE, striking a balance between the coding compression rate and the recovery accuracy.
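The joint cost of claim 7 trades off the entropy (coding rate) of the entropy coder output against the reconstruction MSE; a hedged sketch of such a rate-distortion objective follows, where the weight `lam` and the entropy estimate are illustrative assumptions, not the patent's exact formulation:

```python
import numpy as np

def rate_distortion_loss(H, H_hat, probs_of_quantized, lam=0.1):
    # Distortion: mean squared error between the true and reconstructed channel matrix.
    mse = np.mean((H - H_hat) ** 2)
    # Rate: expected bits per symbol, estimated as the negative mean log2-probability
    # of the quantized features under the (trained) probability model.
    rate = -np.mean(np.log2(probs_of_quantized))
    return mse + lam * rate

H = np.array([[1.0, 0.5], [0.2, -0.3]])       # true channel matrix (toy values)
H_hat = np.array([[0.9, 0.6], [0.2, -0.2]])   # reconstructed estimate
p = np.array([0.5, 0.25, 0.5, 0.25])          # model probabilities of the quantized features
loss = rate_distortion_loss(H, H_hat, p)       # 0.0075 (MSE) + 0.1 * 1.5 (rate) = 0.1575
```

Increasing `lam` pushes the optimizer toward shorter bit streams at the cost of reconstruction accuracy, which is the balance between compression rate and recovery accuracy described in the claim.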
CN202110334430.7A 2021-03-29 2021-03-29 Channel state information feedback method based on deep learning and entropy coding Active CN113098804B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110334430.7A CN113098804B (en) 2021-03-29 2021-03-29 Channel state information feedback method based on deep learning and entropy coding


Publications (2)

Publication Number Publication Date
CN113098804A true CN113098804A (en) 2021-07-09
CN113098804B CN113098804B (en) 2022-08-23


Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023006096A1 (en) * 2021-07-30 2023-02-02 华为技术有限公司 Communication method and apparatus
WO2023028948A1 (en) * 2021-09-02 2023-03-09 Oppo广东移动通信有限公司 Model processing method, electronic device, network device, and terminal device
CN114337849A (en) * 2021-12-21 2022-04-12 上海交通大学 Physical layer confidentiality method and system based on mutual information quantity estimation neural network
CN114337849B (en) * 2021-12-21 2023-03-14 上海交通大学 Physical layer confidentiality method and system based on mutual information quantity estimation neural network
WO2023174132A1 (en) * 2022-03-18 2023-09-21 中兴通讯股份有限公司 Channel state information feedback method and apparatus, and storage medium and electronic apparatus
WO2023202514A1 (en) * 2022-04-19 2023-10-26 华为技术有限公司 Communication method and apparatus
WO2024016936A1 (en) * 2022-07-18 2024-01-25 中兴通讯股份有限公司 Method for determining channel state information, electronic device, and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109687897A (en) * 2019-02-25 2019-04-26 西华大学 Superposition CSI feedback method based on the extensive mimo system of deep learning
WO2019115865A1 (en) * 2017-12-13 2019-06-20 Nokia Technologies Oy An apparatus, a method and a computer program for video coding and decoding
CN109921882A (en) * 2019-02-20 2019-06-21 深圳市宝链人工智能科技有限公司 A kind of MIMO coding/decoding method, device and storage medium based on deep learning
WO2020183059A1 (en) * 2019-03-14 2020-09-17 Nokia Technologies Oy An apparatus, a method and a computer program for training a neural network




Similar Documents

Publication Publication Date Title
CN113098804B (en) Channel state information feedback method based on deep learning and entropy coding
CN108390706B (en) Large-scale MIMO channel state information feedback method based on deep learning
CN110311718B (en) Quantization and inverse quantization method in massive MIMO channel state information feedback
CN110350958B (en) CSI multi-time rate compression feedback method of large-scale MIMO based on neural network
Yang et al. Deep convolutional compression for massive MIMO CSI feedback
Liu et al. An efficient deep learning framework for low rate massive MIMO CSI reporting
CN111464220B (en) Channel state information reconstruction method based on deep learning
JP5066609B2 (en) Adaptive compression of channel feedback based on secondary channel statistics
Chen et al. Deep learning-based implicit CSI feedback in massive MIMO
CN112737985A (en) Large-scale MIMO channel joint estimation and feedback method based on deep learning
CN101207464B (en) Generalized grasman code book feedback method
CN115001629B (en) Channel quantization feedback method and device, electronic equipment and storage medium
Xu et al. Deep joint source-channel coding for CSI feedback: An end-to-end approach
CN116248156A (en) Deep learning-based large-scale MIMO channel state information feedback and reconstruction method
CN115996160A (en) Method and apparatus in a communication system
CN113660020A (en) Wireless communication channel information transmission method, system and decoder
Shen et al. Clustering algorithm-based quantization method for massive MIMO CSI feedback
Yang et al. Distributed deep convolutional compression for massive MIMO CSI feedback
TW201944745A (en) Feedback method for use as a channel information based on deep learning
CN116436551A (en) Channel information transmission method and device
CN113556159A (en) Channel feedback method of large-scale MIMO multi-user system
Thulajanaik et al. An Adaptive Compression Technique for Efficient Video Reconstruction in Future Generation Wireless Network.
CN116155333A (en) Channel state information feedback method suitable for large-scale MIMO system
CN115333900B (en) Unmanned aerial vehicle collaborative channel estimation and CSI feedback method based on deep learning
CN115664477A (en) KL (KL-Loyd-Max) transform and Lloyd-Max quantization based multi-antenna channel compression feedback method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant