CN113098804B - Channel state information feedback method based on deep learning and entropy coding - Google Patents

Publication number: CN113098804B
Application number: CN202110334430.7A
Authority: CN (China)
Prior art keywords: entropy, feature, decoder, encoder, deep learning
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Other languages: Chinese (zh)
Other versions: CN113098804A
Inventors: 郑添月, 凌泰炀, 姚志伟, 田佳辰, 伍诗语, 郑怀瑾, 王闻今, 李潇, 金石
Current assignee: Southeast University (the listed assignees may be inaccurate; Google has not performed a legal analysis)
Original assignee: Southeast University
Application filed by Southeast University
Priority: CN202110334430.7A
Publication of application CN113098804A; application granted; publication of grant CN113098804B

Classifications

    • H04L25/0242: Channel estimation algorithms using matrix methods
    • H04L25/0256: Channel estimation using minimum mean square error criteria
    • H04B7/0413: MIMO systems
    • Y02D30/70: Reducing energy consumption in wireless communication networks


Abstract

The invention discloses a channel state information feedback method based on deep learning and entropy coding. First, at the user end, the channel matrix of the MIMO channel state information is preprocessed and key matrix elements are selected to reduce the amount of computation, yielding the channel matrix H actually used for feedback. Second, a model combining a deep-learning feature encoder with entropy coding is built at the user end to encode the channel matrix H into a binary bit stream. At the base station, a model combining a deep-learning feature decoder with entropy decoding is built to reconstruct an estimate of the original channel matrix from the binary bit stream. The model is trained to obtain its parameters and the reconstructed channel matrix estimate Ĥ. Finally, the trained model based on deep learning and entropy coding is used for compressed sensing and reconstruction of the channel information. The invention can reduce the feedback overhead of massive MIMO channel state information.

Description

A Channel State Information Feedback Method Based on Deep Learning and Entropy Coding

Technical Field

The invention relates to a massive MIMO channel state information feedback method based on deep learning and entropy coding.

Background

Massive MIMO (massive multiple-input multiple-output) technology is considered a key technology for 5G and future 6G communication systems. By using multiple transmit and multiple receive antennas, a MIMO system can significantly increase capacity without occupying additional bandwidth. These potential advantages of massive MIMO rest on the base station accurately knowing the channel state information, which it uses to eliminate multi-user interference through precoding. In an FDD (frequency division duplex) MIMO system, however, the uplink and downlink operate on different frequencies, so the downlink channel state information must be acquired at the user end and transmitted back to the base station over a feedback link. Since the base station uses a large number of antennas, feeding back the complete channel state information would incur enormous resource overhead and is impractical. Quantization or codebook-based methods are therefore commonly used in practice to reduce the overhead, but such methods lose channel state information to some extent, and the overhead still grows linearly with the number of antennas, so they remain unsuitable for massive MIMO systems.

Research on channel state information feedback in massive MIMO systems focuses mainly on exploiting the spatial and temporal correlation of the channel state information to reduce the feedback overhead. In particular, correlated channel state information can be transformed into an uncorrelated sparse vector in some basis; a sufficiently accurate estimate of the sparse vector can then be obtained from an underdetermined linear system using compressed sensing. Specifically, the channel state information is transformed into a sparse matrix under a certain basis and randomly compressed and sampled with compressed sensing to obtain a low-dimensional measurement; this measurement is transmitted to the base station over the feedback link at small resource cost, and the base station reconstructs the original sparse channel matrix from it using compressed sensing theory. These compressed-sensing-based methods are among the more advanced channel feedback methods at present, but the following problems remain: compressed sensing algorithms generally rely on the assumption that the channel is sparse in some basis, whereas real channels are not exactly sparse in any transform basis, have more complex structure, and may even have no interpretable structure; compressed sensing uses random projections to obtain the low-dimensional compressed signal and thus does not fully exploit the channel structure; and most existing compressed sensing algorithms are iterative, so reconstruction is slow, requires huge computational overhead, and poses a severe challenge to the real-time operation of the system.

To address these problems, CsiNet, a deep-learning-based channel state information sensing and recovery network, has been proposed. In practical communication, however, to further improve the accuracy of channel information recovery and reduce the feedback overhead, the encoding and decoding processes at the transmitting and receiving ends must also be considered, and very little literature covers this aspect.

Summary of the Invention

Technical problem: The invention proposes a massive MIMO channel state information feedback method based on deep learning and entropy coding. By combining deep-learning feature encoding/decoding with entropy encoding/decoding, the model can quickly and accurately reconstruct channel state information from feedback at a lower bit rate, solving the problem of high channel state information feedback overhead in massive MIMO systems and achieving a better trade-off between bit rate and channel information feedback accuracy.

Technical solution: The channel state information feedback method based on deep learning and entropy coding of the invention comprises the following steps:

Step 1: at the user end, preprocess the channel matrix of the MIMO channel state information, selecting key matrix elements to reduce the amount of computation and obtaining the channel matrix H actually used for feedback.

Step 2: at the user end, build a model combining a deep-learning feature encoder with entropy coding, and encode the channel matrix H into a binary bit stream.

Step 3: at the base station, build a model combining a deep-learning feature decoder with entropy decoding, and reconstruct the original channel matrix estimate Ĥ from the binary bit stream obtained in step 2.

Step 4: train the combined model obtained from steps 2 and 3. During training, simultaneously optimize the entropy of the entropy encoder output and the mean square error (MSE) of the reconstruction, striking a balance between the coding compression rate and the reconstruction accuracy; obtain the parameters of the combined model and the output reconstructed channel matrix estimate Ĥ.

Step 5: use the model based on the deep-learning feature encoder and entropy coding trained in step 4 for compressed sensing and reconstruction of the channel information.
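The five steps above can be sketched as a minimal end-to-end pipeline. This is an illustrative sketch only: the feature encoder/decoder below are stand-in stubs (the patent's actual networks are convolutional), and the entropy stage is reduced to a trivial fixed-length code rather than CABAC with a learned probability model.

```python
def feature_encode(H):
    # Stand-in for the deep-learning feature encoder f_f-en (step 2):
    # here just flattening; the patent uses a convolutional network.
    return [x for row in H for x in row]

def quantize(M):
    # Uniform unit scalar quantizer: round each element to the nearest integer.
    return [round(x) for x in M]

def entropy_encode(Mq):
    # Stand-in entropy coder: fixed-length 8-bit two's-complement code per value.
    return "".join(format(v & 0xFF, "08b") for v in Mq)

def entropy_decode(s):
    vals = [int(s[i:i + 8], 2) for i in range(0, len(s), 8)]
    return [v - 256 if v >= 128 else v for v in vals]

def feature_decode(M, shape):
    # Stand-in for the feature decoder f_f-de (step 3): reshape back to H's shape.
    rows, cols = shape
    return [M[r * cols:(r + 1) * cols] for r in range(rows)]

H = [[1.2, -0.7], [3.4, 0.1]]                      # toy channel matrix (step 1 output)
s = entropy_encode(quantize(feature_encode(H)))    # user end: binary bit stream
H_hat = feature_decode(entropy_decode(s), (2, 2))  # base station: reconstruction
```

In the real system the rounding loss is what the MSE term of the training objective controls; here the toy values make the round trip exact up to quantization.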

Wherein:

In step 2, the user-end part of the model combining the deep-learning feature encoder with entropy coding consists of a feature encoder, a uniform unit scalar quantizer, and an entropy encoder.

Specifically, the feature encoder, uniform unit scalar quantizer, and entropy encoder are:

3.1. Feature encoder: randomly initialize the parameters of each layer and take the channel matrix H as the input of the feature encoder; the feature encoder output is M = f_f-en(H, Θ_en), where the feature encoder parameters Θ_en are obtained through training, H is the channel matrix, and f_f-en denotes the feature encoder.

3.2. Uniform unit scalar quantizer: in the quantization stage, the uniform unit scalar quantizer rounds each element of M to its nearest integer. Because quantization is not a differentiable function, it cannot be used in a gradient-based network structure; therefore, during training, an independent and identically distributed noise matrix replaces the quantization step. The quantized feature matrix is written as M̂ = M + ΔM, where ΔM is a random matrix uniformly distributed over [-0.5, 0.5].

3.3. Entropy coding: based on the input probability model, the quantized values are converted into a binary bit stream. This step is expressed as s = f_e-en(M̂; P), where s is the output binary bit stream and P is the probability density function, expressed as P = p(M̂ | Θ_p); the parameters Θ_p of the probability density function are obtained through training, M̂ is the quantized feature matrix, and f_e-en denotes the entropy encoder.

In step 3, the base-station part of the model combining the deep-learning feature decoder with entropy decoding consists of an entropy decoder and a feature decoder.

Specifically, the entropy decoder and feature decoder are:

5.1. The binary bit stream s is fed back to the base station. The entropy decoder takes s as input and outputs M̂ = f_e-de(s; P), where P is the probability density function and f_e-de denotes the entropy decoder; the binary bit stream is thereby decoded into the feature matrix.

5.2. Decoding is performed by the decoder designed at the base station, with the parameters of each layer randomly initialized. The feature decoder takes the feature matrix M̂ as input and outputs the reconstructed channel matrix estimate Ĥ with the same dimensions as H: Ĥ = f_f-de(M̂, Θ_de), where f_f-de denotes the feature decoder, M̂ is the entropy decoder output, and the feature decoder parameters Θ_de are obtained through training; the feature decoder thereby decodes the feature matrix into the channel matrix.

The parameters of the combined model in step 4 mainly include the convolution kernels and biases of the convolutional layers and the entropy-coding parameters.

In step 4 the combined model is trained end to end: the parameters of the encoder and decoder are trained jointly to minimize a cost function that simultaneously optimizes the entropy of the entropy encoder output and the reconstruction MSE, striking a balance between the coding compression rate and the reconstruction accuracy.

Beneficial effects: compared with the prior art, the invention improves the channel reconstruction quality at a lower bit rate, realizing channel state information feedback under limited resource overhead. According to the experimental results, at a comparable channel transmission bit rate the invention achieves a 3-4 dB gain in channel state information estimation over existing work.

Brief Description of the Drawings

Fig. 1 is the encoder network architecture diagram of the deep learning + entropy coding model of the invention;

Fig. 2 is the decoder network architecture diagram of the deep learning + entropy coding model of the invention;

Fig. 3 is a structural diagram of the decoder and the RefineNet unit in an example of the invention;

Fig. 4 is a flow chart of CABAC entropy coding in an example of the invention (entropy decoding is the inverse process and is not described separately).

Detailed Description


The invention is described in further detail below with reference to the drawings, using a COST 2100 MIMO channel, the CsiNet model as the feature codec, and CABAC as the entropy codec.

In this deep-learning-based massive MIMO channel state information feedback method, a data-driven encoder-decoder architecture is used: at the user end the encoder compresses and encodes the channel state information into a low-dimensional codeword, which is transmitted over the feedback link to the decoder at the base station, where the channel state information is reconstructed. This reduces the channel state information feedback overhead while improving the quality and speed of channel reconstruction. The method comprises the following steps:

(1) In the downlink of a MIMO system, the base station uses N_t = 32 transmit antennas and the user end uses a single receive antenna. The MIMO system adopts OFDM carrier modulation with Ñ_c subcarriers. Under these conditions, the COST 2100 model is used to generate channel matrix samples in an indoor picocell scenario at 5.3 GHz, divided into a training set of 100,000 samples, a validation set of 30,000 samples, and a test set of 20,000 samples. The collected raw data take the form of an Ñ_c × N_t complex matrix; because the delays between multipath arrival times lie within a limited time window, the actual delays are concentrated in the first 32 rows of the data, so only the first 32 rows of the original data are needed, and the samples used for the network are of size N_c × N_t = 32 × 32.
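The row-truncation preprocessing of step (1) can be sketched in NumPy. The raw row count used below (Ñ_c = 256) and the random sample are purely illustrative stand-ins, since the patent's actual subcarrier count appears only in the elided figure expressions; only the "keep the first 32 rows, split real/imaginary into two channels" logic mirrors the text.

```python
import numpy as np

N_t = 32        # transmit antennas (from the patent)
Nc_raw = 256    # illustrative raw row count, not the patent's value

rng = np.random.default_rng(0)
# Stand-in for a raw complex channel sample of shape (Nc_raw, N_t).
H_raw = rng.standard_normal((Nc_raw, N_t)) + 1j * rng.standard_normal((Nc_raw, N_t))

# Multipath delays concentrate in the first 32 rows, so keep only those.
H = H_raw[:32, :]

# Split real and imaginary parts into the two 32x32 channels fed to the network.
H_input = np.stack([H.real, H.imag])
```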

(2) As shown in the encoder part of Fig. 1, the user-end encoder splits the real and imaginary parts of the complex channel matrix into two 32 × 32 real matrices, which form a two-channel input. First, the feature encoder of the CsiNet encoder is a two-channel convolutional layer: two 3 × 3 two-channel convolution kernels are convolved with the input, with appropriate zero padding, ReLU activations, and batch normalization, so that the layer outputs two 32 × 32 feature maps, i.e. two 32 × 32 real matrices. Second, in the quantization stage, the uniform unit scalar quantizer rounds each element of M to its nearest integer; since quantization is not differentiable and cannot be used in a gradient-based network structure, during training an i.i.d. noise matrix uniform over [-0.5, 0.5] is added to the feature encoder output in place of quantization, and the output is still two 32 × 32 real matrices. Third, to implement this coding, an EntropyBottleneck layer is designed that models the entropy of a vector by fitting a flexible probability density model to estimate the entropy of its input tensor. During training, this imposes an entropy constraint on its activations, limiting the amount of information flowing through the layer; after training, the layer can compress any input tensor into a string. As shown in Fig. 4, CABAC entropy coding comprises binarization, context modeling, and binary arithmetic coding; based on the input probability model, the quantized values are converted into a binary bit stream, which is the compressed and encoded codeword s that the user end transmits to the base station.

(3)如图2的译码器部分所示设计基站端的译码器,译码器包括熵解码器和特征解码器。熵解码器将二进制比特流解码为特征矩阵的估计值。而CsiNet架构中的特征解码器包含两个RefineNet单元和一个卷积层,RefineNet单元包含一个输入层和三个卷积层,以及一条将输入层数据加到最后一层的路径,如图3所示。熵解码器的输出,输入特征译码器的第一层,即一个RefineNet单元,该单元的第一层为输入层,即两个32×32大小的实数矩阵,分别作为估计的信道矩阵的实部和虚部的初始化。RefineNet的第二、三、四层均为卷积层,分别采用8个、16个和2个大小为3×3的卷积核,采用适当的零填充、ReLU激活函数和批归一化(batch normalization),使得每次卷积后得到的特征图大小与原信道矩阵H大小一致,为32×32。此外,输入层的数据与第三个卷积层,即RefineNet的最后一层的数据相加,作为整个RefineNet的输出。该RefineNet的输出,即两个32×32大小的特征图,输入第二个RefineNet单元,其输入层复制上一个RefineNet单元的输出,其余部分与上一个RefineNet单元一样,且其输出的两个32×32大小的特征图输入译码器的最后一个卷积层,采用sigmoid激活函数,将输出值范围限制在[0,1]区间,从而该译码器的最终输出为两个32×32大小的实数矩阵,作为最终重建的信道矩阵

(3) Design the decoder at the base station, as shown in the decoder part of FIG. 2; the decoder comprises an entropy decoder and a feature decoder. The entropy decoder decodes the binary bit stream into an estimate of the feature matrix. The feature decoder in the CsiNet architecture contains two RefineNet units and a final convolutional layer. Each RefineNet unit contains an input layer, three convolutional layers, and a skip path that adds the input-layer data to the last layer, as shown in FIG. 3. The output of the entropy decoder is fed to the first layer of the feature decoder, that is, the first RefineNet unit. The first layer of the unit is the input layer, namely two real-valued matrices of size 32×32, which serve as the initialization of the real and imaginary parts of the estimated channel matrix. The second, third, and fourth layers of RefineNet are convolutional layers with 8, 16, and 2 convolution kernels of size 3×3, respectively, each with appropriate zero padding, a ReLU activation function, and batch normalization, so that the feature maps produced by each convolution keep the size of the original channel matrix H, namely 32×32. In addition, the input-layer data is added to the output of the third convolutional layer, the last layer of RefineNet, to form the output of the entire RefineNet unit. This output, two feature maps of size 32×32, is fed to the second RefineNet unit, whose input layer copies the output of the first unit and whose remaining layers are identical to it. Its two 32×32 feature maps are then input to the last convolutional layer of the decoder, whose sigmoid activation function limits the output values to the [0,1] interval, so that the final output of the decoder is two real-valued 32×32 matrices serving as the real and imaginary parts of the final reconstructed channel matrix
Figure BDA0002996797510000072
.
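The RefineNet unit described above can be sketched minimally in numpy (assuming the stated 8/16/2-kernel layout; batch normalization and trained weights are omitted, and random kernels are used only to check the shapes and the residual path):

```python
import numpy as np

def conv2d_same(x, kernels):
    """3x3 convolution with zero padding so the spatial size is preserved.
    x: (C_in, H, W); kernels: (C_out, C_in, 3, 3)."""
    c_in, h, w = x.shape
    c_out = kernels.shape[0]
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    out = np.zeros((c_out, h, w))
    for o in range(c_out):
        for i in range(c_in):
            for dy in range(3):
                for dx in range(3):
                    out[o] += kernels[o, i, dy, dx] * xp[i, dy:dy + h, dx:dx + w]
    return out

def relu(x):
    return np.maximum(x, 0.0)

def refinenet_unit(x, k8, k16, k2):
    """Input layer + three conv layers (8, 16, 2 kernels) + skip path
    that adds the input-layer data to the last conv layer's output."""
    y = relu(conv2d_same(x, k8))
    y = relu(conv2d_same(y, k16))
    y = conv2d_same(y, k2)      # 2 output channels, matching the input
    return x + y                # the residual (skip) path of RefineNet

rng = np.random.default_rng(0)
x = rng.standard_normal((2, 32, 32))          # real/imag parts, 32x32
k8 = 0.1 * rng.standard_normal((8, 2, 3, 3))
k16 = 0.1 * rng.standard_normal((16, 8, 3, 3))
k2 = 0.1 * rng.standard_normal((2, 16, 3, 3))

out = refinenet_unit(x, k8, k16, k2)
print(out.shape)  # (2, 32, 32): same size as the channel matrix H
```

With all-zero kernels the unit reduces to the identity map, which is exactly the effect of the skip path: the convolutions only learn a refinement on top of the input.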

(4) Design the cost function of the entire CsiNet architecture to simultaneously optimize the entropy output by the entropy encoder and the reconstruction MSE, which strikes a balance between the compression ratio of the coding and the restoration accuracy. Its expression can be written as:

Figure BDA0002996797510000071
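The rendered expression is not reproduced in this text. Based on the description (an entropy term and a λ-weighted MSE term), a plausible rate-distortion form is the following; the exact placement of λ is an assumption:

```latex
L(\Theta_{en}, \Theta_{de}, \theta_p)
  = \underbrace{\mathbb{E}\!\left[-\log_2 P\!\left(\hat{M};\,\theta_p\right)\right]}_{\text{entropy (rate) of the coded features}}
  \;+\; \lambda\,\underbrace{\mathbb{E}\!\left[\lVert H - \hat{H} \rVert_2^2\right]}_{\text{reconstruction MSE}}
```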

where λ adjusts the relative weight of the two optimization objectives. Using the 100,000 training-set samples of the channel matrix H generated in (1), the parameters of the encoder and the decoder, mainly the convolution and entropy coding parameters, are trained jointly in an end-to-end manner with the Adam optimization algorithm so as to minimize the cost function. The learning rate used in the Adam algorithm is 0.001; each iteration computes the gradient on 200 samples from the training set and updates the parameters according to the Adam update formula, and the entire training set is traversed 1000 times in this way. During training, the validation set is used to select a well-performing model (the CsiNet model above is the selected model), and the test set evaluates the performance of the final model.
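The Adam update rule used in this step can be sketched on a toy problem (the two scalar "rate" and "distortion" terms below are hypothetical stand-ins, not the patent's network; the patent's learning rate of 0.001 with 200-sample mini-batches applies to the real model, while the toy uses a larger rate to converge quickly):

```python
import numpy as np

# Stand-in objective: rate-like term + lambda * MSE-like term (both convex toys).
def loss_and_grad(theta, lam):
    rate = theta[0] ** 2
    mse = (theta[1] - 3.0) ** 2
    grad = np.array([2.0 * theta[0], lam * 2.0 * (theta[1] - 3.0)])
    return rate + lam * mse, grad

theta = np.array([5.0, -5.0])
m = np.zeros_like(theta)          # first-moment estimate
v = np.zeros_like(theta)          # second-moment estimate
beta1, beta2, eps, lr, lam = 0.9, 0.999, 1e-8, 0.1, 1.0

loss0, _ = loss_and_grad(theta, lam)
for t in range(1, 2001):
    _, g = loss_and_grad(theta, lam)
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g ** 2
    m_hat = m / (1 - beta1 ** t)  # bias-corrected moments
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)

loss_final, _ = loss_and_grad(theta, lam)
print(loss0, loss_final)          # the weighted loss decreases substantially
```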

(5) The trained CsiNet model can then be used for channel state information feedback in the MIMO system. As described in (1), inputting the channel matrix H of the channel state information into the CsiNet + entropy coding architecture outputs the reconstructed channel matrix

Figure BDA0002996797510000081
, from which the original channel state information can be recovered.

The embodiment is intended only to illustrate the technical idea of the present invention and cannot be used to limit the protection scope of the present invention; any change made on the basis of the technical solution according to the technical idea proposed by the present invention falls within the protection scope of the present invention.

Claims (5)

1. A massive MIMO channel state information feedback method based on deep learning and entropy coding, characterized by comprising the following steps:
step 1, at the user side, preprocessing the channel matrix of the MIMO channel state information and selecting key matrix elements to reduce the computational load, obtaining the channel matrix H actually used for feedback;
step 2, at the user side, constructing a model combining a deep learning feature encoder and an entropy encoder, and encoding the channel matrix H into a binary bit stream;
step 3, at the base station side, constructing a model combining a deep learning feature decoder and an entropy decoder, and reconstructing from the binary bit stream obtained in step 2 an original channel matrix estimated value
Figure FDA0003736409920000011
step 4, training the combined model obtained by combining step 2 and step 3, simultaneously optimizing, during training, the entropy output by the entropy encoder and the reconstructed mean square error MSE, balancing the compression ratio of the coding against the recovery accuracy, and obtaining the parameters of the combined model and the output reconstructed original channel matrix estimated value
Figure FDA0003736409920000012
step 5, using the combined deep learning feature encoder and entropy coding model trained in step 4 for compressed sensing and reconstruction of the channel information;
the user end part of the model combining the deep learning feature encoder and the entropy encoding in the step 2 consists of a feature encoder, a uniform unit scalar quantizer and an entropy encoder;
the feature encoder, the uniform unit scalar quantizer, and the entropy encoder are specifically:
3.1. a feature encoder: randomly initializing the parameters of each layer and taking the channel matrix H as the input of the feature encoder, the output of the feature encoder is obtained as M = f f-en (H, Θ en ), wherein the feature encoder parameters Θ en are obtained by training, H is the channel matrix, and f f-en represents the feature encoder;
3.2. a uniform unit scalar quantizer: in the quantization stage, the uniform unit scalar quantizer adjusts each element in M to the nearest integer; however, because quantization is not a differentiable function, it cannot be used in a gradient-based network structure, and therefore, in training, an independently and identically distributed noise matrix is used to replace the quantization process; the quantized feature matrix is written as:
Figure FDA0003736409920000013
wherein ΔM is a random matrix uniformly distributed in the range of −0.5 to 0.5;
3.3. the quantized values are converted into a binary bit stream by entropy coding based on an input probability model, which is represented as
Figure FDA0003736409920000021
where s is the output binary bit stream and P is the probability density function, expressed as
Figure FDA0003736409920000022
the parameter θ p of the probability density function is obtained by training,
Figure FDA0003736409920000023
is the quantized feature matrix, and f e-en represents the entropy encoder.
2. The massive MIMO channel state information feedback method based on deep learning and entropy coding of claim 1, wherein the base station part of the model combining the deep learning feature decoder and entropy decoding of step 3 is composed of an entropy decoder and a feature decoder.
3. The massive MIMO channel state information feedback method based on deep learning and entropy coding as claimed in claim 2, wherein the entropy decoder and the feature decoder are specifically:
5.1. feeding the binary bit stream s back to the base station, where the entropy decoder takes s as input and outputs
Figure FDA0003736409920000024
where P is the probability density function and f e-de represents the entropy decoder, according to which the binary bit stream is decoded into the feature matrix;
5.2. decoding through the feature decoder designed at the base station, with the parameters of each layer randomly initialized; the decoder takes the feature matrix
Figure FDA0003736409920000025
as input, and outputs the reconstructed original channel matrix estimated value with the same dimension as the channel matrix H
Figure FDA0003736409920000026
wherein f f-de represents the feature decoder,
Figure FDA0003736409920000027
is the output of the entropy decoder, and the feature decoder parameters Θ de are obtained by training; according to this, the feature decoder decodes the feature matrix into the channel matrix.
4. The massive MIMO channel state information feedback method based on deep learning and entropy coding of claim 1, wherein the parameters of the combined model in step 4 mainly comprise the convolution kernels and biases of the convolutional layers and the parameters related to entropy coding.
5. The massive MIMO channel state information feedback method based on deep learning and entropy coding of claim 1, wherein step 4 trains the combined model in an end-to-end manner, jointly training the parameters of the encoder and the decoder to minimize the cost function; the cost function simultaneously optimizes the entropy output by the entropy encoder and the reconstructed MSE, achieving a balance between the compression ratio of the coding and the recovery accuracy.
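The uniform unit scalar quantizer and its training-time noise surrogate from step 3.2 of claim 1, together with the bit cost an ideal entropy coder would achieve, can be sketched as follows (the discrete Gaussian probability model with σ = 4 is an assumption for illustration only; the actual model P has trained parameters θ p):

```python
import math
import numpy as np

rng = np.random.default_rng(1)
M = 4.0 * rng.standard_normal(64)   # stand-in for the feature-encoder output

# Inference: the uniform unit scalar quantizer rounds each element to the
# nearest integer.
M_q = np.round(M)

# Training surrogate: rounding has zero gradient almost everywhere, so it is
# replaced by adding i.i.d. noise uniform on [-0.5, 0.5].
M_train = M + rng.uniform(-0.5, 0.5, size=M.shape)

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def pmf(q, sigma=4.0):
    """Assumed discrete Gaussian probability model: mass of the unit bin at q."""
    return phi((q + 0.5) / sigma) - phi((q - 0.5) / sigma)

# Rate estimate: an ideal entropy coder emits about -log2 P(q) bits per symbol.
bits = -sum(math.log2(pmf(q)) for q in M_q)
print(bits / len(M_q))              # average bits per quantized symbol
```

Both the rounded values and the noise-perturbed values stay within 0.5 of the original features, which is why the additive-noise surrogate is a faithful differentiable stand-in for quantization during training.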
CN202110334430.7A 2021-03-29 2021-03-29 Channel state information feedback method based on deep learning and entropy coding Active CN113098804B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110334430.7A CN113098804B (en) 2021-03-29 2021-03-29 Channel state information feedback method based on deep learning and entropy coding

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110334430.7A CN113098804B (en) 2021-03-29 2021-03-29 Channel state information feedback method based on deep learning and entropy coding

Publications (2)

Publication Number Publication Date
CN113098804A CN113098804A (en) 2021-07-09
CN113098804B true CN113098804B (en) 2022-08-23

Family

ID=76670727

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110334430.7A Active CN113098804B (en) 2021-03-29 2021-03-29 Channel state information feedback method based on deep learning and entropy coding

Country Status (1)

Country Link
CN (1) CN113098804B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115694722A (en) * 2021-07-30 2023-02-03 华为技术有限公司 Communication method and device
EP4398527A1 (en) * 2021-09-02 2024-07-10 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Model processing method, electronic device, network device, and terminal device
CN114337849B (en) * 2021-12-21 2023-03-14 上海交通大学 Physical layer confidentiality method and system based on mutual information quantity estimation neural network
US20230261712A1 (en) * 2022-02-15 2023-08-17 Qualcomm Incorporated Techniques for encoding and decoding a channel between wireless communication devices
CN116827489A (en) * 2022-03-18 2023-09-29 中兴通讯股份有限公司 Channel state information feedback method, device, storage medium and electronic device
CN116961711A (en) * 2022-04-19 2023-10-27 华为技术有限公司 Communication method and device
CN117479316A (en) * 2022-07-18 2024-01-30 中兴通讯股份有限公司 Channel state information determining method, electronic equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109687897A (en) * 2019-02-25 2019-04-26 西华大学 Superposition CSI feedback method based on the extensive mimo system of deep learning
WO2019115865A1 (en) * 2017-12-13 2019-06-20 Nokia Technologies Oy An apparatus, a method and a computer program for video coding and decoding
CN109921882A (en) * 2019-02-20 2019-06-21 深圳市宝链人工智能科技有限公司 A kind of MIMO coding/decoding method, device and storage medium based on deep learning
WO2020183059A1 (en) * 2019-03-14 2020-09-17 Nokia Technologies Oy An apparatus, a method and a computer program for training a neural network

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019115865A1 (en) * 2017-12-13 2019-06-20 Nokia Technologies Oy An apparatus, a method and a computer program for video coding and decoding
CN109921882A (en) * 2019-02-20 2019-06-21 深圳市宝链人工智能科技有限公司 A kind of MIMO coding/decoding method, device and storage medium based on deep learning
CN109687897A (en) * 2019-02-25 2019-04-26 西华大学 Superposition CSI feedback method based on the extensive mimo system of deep learning
WO2020183059A1 (en) * 2019-03-14 2020-09-17 Nokia Technologies Oy An apparatus, a method and a computer program for training a neural network

Also Published As

Publication number Publication date
CN113098804A (en) 2021-07-09

Similar Documents

Publication Publication Date Title
CN113098804B (en) Channel state information feedback method based on deep learning and entropy coding
CN108390706B (en) Large-scale MIMO channel state information feedback method based on deep learning
CN110311718B (en) A Quantization and Inverse Quantization Method in Massive MIMO Channel State Information Feedback
CN110350958B (en) CSI multi-time rate compression feedback method of large-scale MIMO based on neural network
CN108847876B (en) A Massive MIMO Time-varying Channel State Information Compression Feedback and Reconstruction Method
Yang et al. Deep convolutional compression for massive MIMO CSI feedback
CN109672464B (en) FCFNN-based large-scale MIMO channel state information feedback method
CN111630787B (en) MIMO multi-antenna signal transmission and detection technology based on deep learning
Liu et al. An efficient deep learning framework for low rate massive MIMO CSI reporting
Chen et al. Deep learning-based implicit CSI feedback in massive MIMO
Lu et al. Bit-level optimized neural network for multi-antenna channel quantization
CN110474716B (en) Method for establishing SCMA codec model based on noise reduction self-encoder
JP5066609B2 (en) Adaptive compression of channel feedback based on secondary channel statistics
CN112737985A (en) Large-scale MIMO channel joint estimation and feedback method based on deep learning
CN115996160A (en) Method and apparatus in a communication system
CN101304300A (en) Method and device for channel quantization of multi-user MIMO system based on limited feedback
CN101207464B (en) Generalized grasman code book feedback method
CN116248156A (en) Deep learning-based large-scale MIMO channel state information feedback and reconstruction method
Ravula et al. Deep autoencoder-based massive MIMO CSI feedback with quantization and entropy coding
Shen et al. Clustering algorithm-based quantization method for massive MIMO CSI feedback
Wang et al. A novel compression CSI feedback based on deep learning for FDD massive MIMO systems
CN115865145A (en) A Transformer-based Massive MIMO Channel State Information Feedback Method
Zhang et al. Quantization adaptor for bit-level deep learning-based massive MIMO CSI feedback
Feng et al. Deep Learning-Based Joint Channel Estimation and CSI Feedback for RIS-Assisted Communications
CN116155333A (en) A Channel State Information Feedback Method Applicable to Massive MIMO Systems

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant