WO2022206747A1 - High-efficiency mimo channel feedback method and device based on binarized neural network - Google Patents

High-efficiency MIMO channel feedback method and device based on binarized neural network

Info

Publication number
WO2022206747A1
Authority
WO
WIPO (PCT)
Prior art keywords
neural network
channel
downlink channel
binarized
base station
Prior art date
Application number
PCT/CN2022/083630
Other languages
French (fr)
Chinese (zh)
Inventor
王劲涛
陆智麟
张彧
阳辉
Original Assignee
清华大学
Priority date
Filing date
Publication date
Application filed by 清华大学 (Tsinghua University)
Publication of WO2022206747A1

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L25/00Baseband systems
    • H04L25/02Details ; arrangements for supplying electrical power along data transmission lines
    • H04L25/0202Channel estimation
    • H04L25/024Channel estimation channel estimation algorithms
    • H04L25/0242Channel estimation channel estimation algorithms using matrix methods
    • H04L25/0244Channel estimation channel estimation algorithms using matrix methods with inversion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04BTRANSMISSION
    • H04B7/00Radio transmission systems, i.e. using radiation field
    • H04B7/02Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas
    • H04B7/04Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas
    • H04B7/0413MIMO systems
    • H04B7/0417Feedback systems
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L25/00Baseband systems
    • H04L25/02Details ; arrangements for supplying electrical power along data transmission lines
    • H04L25/0202Channel estimation
    • H04L25/024Channel estimation channel estimation algorithms
    • H04L25/0254Channel estimation channel estimation algorithms using neural network algorithms
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00Reducing energy consumption in communication networks
    • Y02D30/70Reducing energy consumption in communication networks in wireless communication networks

Definitions

  • the present application relates to the field of communication technologies, and in particular, to an efficient MIMO channel feedback method and device based on a binarized neural network.
  • the channel feedback accuracy of traditional codebook-based MIMO (Multiple-Input Multiple-Output) systems is limited, and when the MIMO scale is expanded to massive MIMO, the feedback accuracy degrades further.
  • the full-channel feedback or sub-full-channel feedback of the MIMO system requires a large amount of information to be fed back, which brings unbearable feedback overhead.
  • deep-learning-based MIMO channel compression feedback achieves low compression loss at a low compression ratio by learning channel characteristics, but for resource-constrained user-end devices the existing autoencoder networks are too cumbersome in terms of storage and computation.
  • the present application aims to solve one of the technical problems in the related art at least to a certain extent.
  • one purpose of this application is to propose an efficient MIMO channel feedback method based on a binarized neural network, which can implement low-overhead autoencoder neural network deployment at the resource-constrained user end and achieve a more practical channel compression feedback scheme.
  • Another object of the present application is to propose an efficient MIMO channel feedback device based on a binarized neural network.
  • an embodiment of the present application proposes an efficient MIMO channel feedback method based on a binarized neural network, including:
  • S1: determine the dimension of the space-frequency domain downlink channel matrix of the user-end channel according to the number of OFDM subcarriers in the communication system and the number of antennas at the base station;
  • S2: input the space-frequency domain downlink channel matrix into an autoencoder based on the binarized neural network for compression to obtain a channel feature vector, and send the channel feature vector from the user end to the base station through the uplink;
  • S3: decode the received channel feature vector through a neural-network-based self-decoder at the base station to obtain the space-frequency domain downlink channel matrix.
  • an efficient MIMO channel feedback device based on a binarized neural network including:
  • the user-end preamble module is used to determine the dimension of the space-frequency domain downlink channel matrix of the user-end channel according to the number of OFDM subcarriers and the number of antennas at the base station in the communication system;
  • the first compression module is used for inputting the space-frequency domain downlink channel matrix into an autoencoder based on the binarized neural network for compression to obtain a channel feature vector, and for sending the channel feature vector from the user end to the base station through the uplink;
  • the first decompression module is configured to decode the received channel feature vector through a neural network-based self-decoder at the base station to obtain the space-frequency domain downlink channel matrix.
  • by introducing the network binarization technique, the high-efficiency MIMO channel feedback method and device based on the binarized neural network in the embodiments of the present application quantize the floating-point fully connected layer of the original autoencoder into a binarized fully connected layer.
  • in terms of storage, the binarized autoencoder is about one-thirtieth the size of the original floating-point autoencoder; in terms of computing speed, the binarized autoencoder is about twice as fast as the original floating-point autoencoder.
  • the ultra-lightweight MIMO channel compression feedback autoencoder designed through the above network binarization technique greatly facilitates network deployment on a variety of resource-constrained user devices.
  • FIG. 1 is a flowchart of an efficient MIMO channel feedback method based on a binarized neural network according to an embodiment of the present application
  • FIG. 2 is a schematic diagram of an efficient MIMO channel feedback method based on a binarized neural network according to an embodiment of the present application
  • FIG. 3 is a schematic diagram of an end-to-end training of a binarized autoencoder according to an embodiment of the present application
  • FIG. 4 is a schematic diagram of a visualization of a binarized dimensionally compressed fully-connected layer in an autoencoder and a dimensionally expanded fully-connected layer in a self-decoder according to an embodiment of the present application;
  • FIG. 5 is a schematic structural diagram of an efficient MIMO channel feedback device based on a binarized neural network according to an embodiment of the present application.
  • FIG. 1 is a flowchart of an efficient MIMO channel feedback method based on a binarized neural network according to an embodiment of the present application.
  • the efficient MIMO channel feedback method based on binarized neural network includes the following steps:
  • Step S1: Determine the dimension of the space-frequency domain downlink channel matrix of the user-end channel according to the number of OFDM subcarriers and the number of antennas at the base station in the communication system.
  • the embodiments of the present application are mainly applied in a frequency division duplex (FDD) system to reduce the storage and computing overhead of a resource-constrained user end, where the base station has a multi-antenna array and the user end has a single antenna. When channel feedback is required in a time division duplex (TDD) system, the same system structure can likewise be used for compressed feedback to reduce the storage and computing overhead of the user end.
  • after the dimension of the space-frequency domain downlink channel matrix is determined, the space-frequency domain downlink channel matrix can also be processed: it is transformed into the angle-delay domain downlink channel matrix by two discrete Fourier transforms, the non-zero sub-matrix of the angle-delay domain downlink channel matrix is intercepted by non-zero sub-matrix cutting, and the non-zero sub-matrix is taken as the input to the autoencoder.
  • the discrete Fourier transform operation applies to any number of OFDM subcarriers N_c and any number of base-station antennas N_t.
  • how the non-zero sub-matrix is intercepted is determined by the actual channel conditions: when the channel delay is large or complex, a larger non-zero sub-matrix is intercepted; otherwise, a smaller non-zero sub-matrix suffices.
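  • For illustration only, this preprocessing step can be sketched as follows (assuming NumPy, a unitary DFT convention, and example sizes N_c = 1024, N_t = 32, N_s = 32; the symbols and the exact DFT normalization are not mandated by the present application):

```python
import numpy as np

def to_angle_delay_subarray(H_sf: np.ndarray, n_s: int) -> np.ndarray:
    """Transform a space-frequency channel matrix (N_c x N_t, complex) to the
    angle-delay domain with two DFTs and keep only its first n_s rows."""
    n_c, n_t = H_sf.shape
    F_delay = np.fft.fft(np.eye(n_c)) / np.sqrt(n_c)  # unitary DFT over subcarriers
    F_angle = np.fft.fft(np.eye(n_t)) / np.sqrt(n_t)  # unitary DFT over antennas
    H_ad = F_delay @ H_sf @ F_angle                   # angle-delay domain matrix
    return H_ad[:n_s, :]                              # non-zero sub-matrix (n_s x N_t)

# Example with N_c = 1024 subcarriers, N_t = 32 antennas, N_s = 32 retained rows.
H_sf = np.random.randn(1024, 32) + 1j * np.random.randn(1024, 32)
H_sub = to_angle_delay_subarray(H_sf, n_s=32)
print(H_sub.shape)  # (32, 32)
```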
  • Step S2: the space-frequency domain downlink channel matrix is input into an autoencoder based on a binarized neural network for compression to obtain a channel feature vector, and the channel feature vector is sent from the user end to the base station through the uplink.
  • after compression by the autoencoder, the user end feeds back the compressed feature vector through the uplink instead of directly feeding back the original channel matrix, thereby greatly reducing the overhead of channel feedback.
  • the input to the autoencoder based on the binarized neural network can be the unprocessed space-frequency domain downlink channel matrix, which is compressed directly to obtain the channel feature vector; if the space-frequency domain downlink channel matrix has been processed, the non-zero sub-matrix is used as the autoencoder input for compression to obtain the channel feature vector.
  • the autoencoder based on the binarized neural network is composed of preceding convolutional layers cascaded with subsequent fully connected layers; the fully connected layers in the autoencoder are binarized, while the convolutional layers may or may not be binarized.
  • as one implementation, only the fully connected layer used to adjust the dimension of the channel matrix is binarized.
  • the preceding convolutional layers may or may not be binarized; to preserve performance, it is generally recommended not to binarize them.
  • in practice, the fully connected layer usually accounts for more than 90% of the autoencoder's parameters and computation, so binarizing the fully connected layer essentially amounts to binarizing the entire autoencoder.
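  • A minimal sketch of such an autoencoder is given below for illustration, assuming PyTorch; the single floating-point convolutional layer, the sizes N_s = N_t = 32 and the compression factor γ = 32 are illustrative assumptions rather than the disclosed design, and training the binarized layer additionally requires the gradient low-pass rule sketched further below.

```python
import torch
import torch.nn as nn

class BinaryLinear(nn.Linear):
    """Fully connected layer whose weights are binarized to +/-1 by their sign
    in the forward pass (zero maps to +1); float weights are kept for training,
    which uses the gradient low-pass rule described later."""
    def forward(self, x):
        w_bin = torch.where(self.weight >= 0,
                            torch.ones_like(self.weight),
                            -torch.ones_like(self.weight))
        return nn.functional.linear(x, w_bin, self.bias)

class Encoder(nn.Module):
    def __init__(self, n_s=32, n_t=32, gamma=32):
        super().__init__()
        self.conv = nn.Conv2d(2, 2, kernel_size=3, padding=1)  # float conv, not binarized
        dim = 2 * n_s * n_t
        self.fc = BinaryLinear(dim, dim // gamma)               # binarized compression layer
    def forward(self, h):                     # h: (batch, 2, n_s, n_t)
        x = torch.relu(self.conv(h))
        return self.fc(x.flatten(1))          # compressed feature vector v
```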
  • Step S3: Decode the received channel feature vector through a neural-network-based self-decoder at the base station to obtain the space-frequency domain downlink channel matrix, thereby completing low-overhead downlink channel feedback for the MIMO system.
  • if no discrete Fourier transform was applied, the space-frequency domain downlink channel matrix is obtained by decoding the channel feature vector directly with the self-decoder. If the discrete Fourier transform and non-zero sub-matrix cutting were performed, the received channel feature vector is decoded by the neural-network-based self-decoder at the base station to obtain the non-zero angle-delay domain downlink channel sub-matrix, and zero-padding and two inverse discrete Fourier transforms are then applied to this sub-matrix to obtain the space-frequency domain downlink channel matrix, completing the full downlink channel compression feedback process.
  • neural network-based self-decoders can have various designs, such as utilizing residual networks, utilizing multi-resolution networks, utilizing attention mechanism augmentation networks, and so on.
  • the application also covers several extensions of the binarized autoencoder of the MIMO channel compression network, mainly by expanding and strengthening the existing convolutional neural network in the autoencoder (for example, adding extra convolutional layers, using additional residual links, or enlarging convolution kernels), so as to obtain better feedback performance.
  • extensions of the self-decoder of the MIMO channel compression network mainly add redundant decoding units to the self-decoder, so as to obtain better feedback performance.
  • channel data can be generated in batches using an existing mature channel model, or collected from the actual environment using dedicated equipment.
  • the former is less expensive but may degrade in performance when deployed; the latter is more expensive but the trained network performs better in real-world environments.
  • after the obtained channel matrices are preprocessed by the discrete Fourier transform and non-zero sub-matrix cutting, a data set that can be used for end-to-end network training is generated; the data are then split into a training set and a test set, and the two sets must contain no duplicate data.
  • the autoencoder and self-decoder are then connected together for end-to-end training, where the optimizer can be ADAM and the loss function the mean squared error (MSE).
  • the output of the autoencoder is used directly as the input of the self-decoder, and the MSE is computed between the angle-delay domain non-zero channel sub-matrix output by the self-decoder and its true value. The forward pass of the network therefore ends by evaluating the MSE loss, which is then back-propagated, i.e., gradients are derived automatically by the chain rule to obtain the gradient of each layer. After the gradients are obtained, the adaptive learning rate given by the initialization and update rule of the ADAM optimizer is used, and the parameters of each layer are updated.
  • the floating-point parameters of the binarization layer are binarized by taking their sign: positive numbers and zero are binarized to 1, and negative numbers to -1.
  • a gradient low-pass filter is used as the approximate derivative of the binarization function: when the absolute value of the original parameter is greater than 1, the gradient of that parameter is taken to be 0; when the absolute value is less than 1, its gradient is taken to be the ordinary floating-point gradient.
  • with the operation defined in the previous step, the gradients of all layers of the feedback network can be obtained; the update gradient for the floating-point parameters of the binarization layer is computed from the equivalent weights of the binarization layer and its original gradient.
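  • A compact sketch of this sign binarization with the gradient low-pass backward rule is shown below, assuming PyTorch; the class name is illustrative and the threshold of 1 follows the description above.

```python
import torch

class BinarizeLowPass(torch.autograd.Function):
    """Forward: sign binarization (zero maps to +1). Backward: pass the gradient
    through unchanged where |w| < 1 and force it to zero where |w| >= 1."""
    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return torch.where(w >= 0, torch.ones_like(w), -torch.ones_like(w))

    @staticmethod
    def backward(ctx, grad_out):
        (w,) = ctx.saved_tensors
        return grad_out * (w.abs() < 1).to(grad_out.dtype)

# Inside a binarized layer one would use: w_bin = BinarizeLowPass.apply(self.weight)
```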
  • by keeping the end-to-end training and gradually decreasing the learning rate until the network converges, the autoencoder design and self-decoder design of the compression feedback network can be obtained.
  • the embodiments of the present application include a learning-rate decay strategy that is effective for MIMO channel compression networks (including the autoencoder and the self-decoder): the learning rate first rises rapidly from 0 to a peak and then decreases from the peak to 0 along a cosine curve.
  • the lowest point of the learning rate may be a small value close to zero rather than 0, and the lengths of the warm-up interval and the decay interval may differ; the former is usually much shorter.
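  • A simple sketch of such a warm-up-then-cosine schedule is given below for illustration, assuming Python; the peak learning rate, the warm-up length and the non-zero floor are placeholder values, not values disclosed by the application.

```python
import math

def warmup_cosine_lr(step, total_steps, peak_lr=1e-3, warmup_steps=500, floor=1e-6):
    """Learning rate that rises linearly from 0 to peak_lr during the short warm-up,
    then decays from peak_lr towards a small floor along a cosine curve."""
    if step < warmup_steps:
        return peak_lr * step / max(1, warmup_steps)
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return floor + (peak_lr - floor) * 0.5 * (1.0 + math.cos(math.pi * progress))
```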
  • the binarized-neural-network-based autoencoder and the neural-network-based self-decoder are trained end to end, where the key difficulty is that the binarization module in the autoencoder (such as the binarized fully connected layer) is not differentiable.
  • the present application therefore adopts the gradient low-pass approximation: for all parameters whose absolute value is less than the threshold, the gradient is back-propagated as is, while the gradient of parameters whose absolute value is greater than the threshold is forced to zero.
  • the entire feedback network can achieve complete gradient back-propagation, enabling end-to-end training.
  • the core lies in using the gradient low-pass approximation to complete end-to-end training of the binarized feedback network in an intelligent feedback communication system, rather than in the technical details of general feedback network training such as data acquisition or the Adam optimizer.
  • at the user end, the matrix H passes through the autoencoder E based on the binarized neural network.
  • the autoencoder consists of preceding convolutional layers and a subsequent fully connected layer, responsible for channel spatial feature abstraction and channel dimension compression respectively, and finally converts the input matrix H into the compressed feature vector v, as shown in the following formula: v = E(H) = E_FC(E_conv(H)).
  • the dimension of the feature vector v is several times to several tens of times smaller than that of the original matrix H; therefore, the user end only needs to feed back v rather than H, which significantly reduces the feedback overhead.
  • the convolutional layer here can have various designs, one of which is shown in the first specific embodiment.
  • the fully connected layer is constrained by the dimensions of the matrix H and the feature vector v.
  • the dimension of the matrix H is 2 × N_s × N_t, where 2 represents the real and imaginary parts (the element at each position of the channel matrix H is a complex number), and the dimension of the feature vector v is 2·N_s·N_t/γ, where γ is the compression factor, ranging from several to dozens. Therefore, the dimension of the fully connected layer is 2·N_s·N_t × (2·N_s·N_t/γ).
  • after the feature vector v is ideally fed back through the uplink, the base station receives the lossless feature vector v. Note that the uplink is assumed to be lossless digital transmission, and possible information loss of the feature vector itself during transmission is not considered; the focus is on how to recover the original channel matrix as accurately as possible from the compressed feature vector.
  • S103 Input the received feature vector v into the neural network-based self-decoder D at the base station side.
  • the structure of the self-decoder is dual to that of the self-encoder: it consists of a fully connected layer of the dual dimension followed by cascaded convolutional layers.
  • the fully connected layer of dual dimension restores the compressed feature vector v to the dimension before compression, and the subsequent convolutional layers extract and restore the spatial information of the original channel matrix H, as shown in the following formula: Ĥ = D(v) = D_conv(D_FC(v)).
  • a properly trained self-decoder can recover the original non-zero angle-delay domain sub-matrix, that is, the recovered Ĥ is sufficiently close to the original channel matrix H. It is worth noting that the dimension of the fully connected layer here is the reverse of that of the fully connected layer in the autoencoder, that is, (2·N_s·N_t/γ) × 2·N_s·N_t. Owing to the relatively strong storage and computing capabilities of the base station, the subsequent convolutional layers D_conv can be made relatively complex, so as to better restore the spatial characteristics of the channel matrix and achieve better feedback performance.
  • from the non-zero angle-delay domain sub-matrix, the base station recovers the original downlink channel through zero-padding and two inverse discrete Fourier transforms (the inverse of the transform applied at the user end), completing the full downlink channel compression feedback process.
  • the key of this application is to binarize the fully connected layer E_FC, which accounts for most of the storage/computation complexity of the autoencoder.
  • the storage complexity of this layer is reduced to about 1/32 (roughly one-thirtieth) of the original. Since multiplication is no longer required after binarization and only addition is needed, the computation cost is also greatly reduced; the operation speed is conservatively estimated to increase by more than 2 times. It is worth noting that in most current autoencoder designs, the computation and parameter count of the convolutional layers are very small compared with the fully connected layer; therefore, binarizing the convolutional layers of the self-encoder loses more than it gains, and such binarization is not adopted in the example of this application (although it is practically feasible).
  • the present application thus greatly reduces the cost of deploying the self-encoder at resource-sensitive user ends, and improves the feasibility of the neural-network autoencoder/self-decoder based MIMO channel feedback method in practical systems.
  • the above process can be implemented by the low-complexity MIMO channel compression feedback device based on network binarization depicted in FIG. 2 .
  • the above networks all need to obtain the channel compression/decompression capability through end-to-end training, and the parts that need to be trained include E_conv, E_BFC, D_FC and D_conv.
  • the network described in the above embodiment can be trained on the collected/generated data through the end-to-end training strategy described in this application; the key is handling the non-differentiable binarized fully connected layer E_BFC of the autoencoder.
  • as shown in FIG. 3, equivalent weights of its parameters are obtained and their update gradients are calculated.
  • flow_pass(F_BFC) denotes the aforementioned gradient low-pass filter, which is used to approximate the derivative of the binarization function.
  • the low-complexity MIMO channel compression feedback method based on network binarization will be further described below by way of specific embodiments.
  • two discrete Fourier transforms are applied to the downlink channel matrix to transform it from the space-frequency domain to the angle-delay domain, yielding the angle-delay domain channel matrix. This matrix is then truncated: its first 32 rows (usually 16 rows are sufficient; 32 rows further improve accuracy) are taken as the complex sub-matrix H to be fed back, so its dimension is reduced to 32 × 32, which becomes 2 × 32 × 32 after conversion to a real-valued floating-point representation.
  • the fully connected layer of the autoencoder in this application is binarized, so the parameter amount here is in fact only 16K (the original floating-point parameter amount is 512K), that is, it is reduced to about one thirty-second of the original.
  • multiplying by ⁇ 1 is equivalent to changing the sign of the current element, which eventually degenerates into addition and subtraction between vector elements.
  • the operation speed of addition is faster than multiplication on most hardware, so the forward transmission speed of the binarized fully-connected layer here is more than twice that of the original floating-point parameter fully-connected layer.
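  • As a rough worked example (the layer size below, a 2 × 32 × 32 input with a compression factor of 32, is an assumption chosen purely for illustration; it happens to reproduce the 512K/16K figures above when those are read as storage sizes in KiB):

```python
# Hypothetical compression layer: 2048 inputs -> 2048 / 32 = 64 outputs.
weights = 2048 * 64                       # 131072 weights
float_storage_kib = weights * 4 / 1024    # 32-bit floats -> 512.0 KiB
binary_storage_kib = weights / 8 / 1024   # 1 bit per weight -> 16.0 KiB
print(float_storage_kib, binary_storage_kib)  # 512.0 16.0, i.e. a 1/32 reduction
# With +/-1 weights, the matrix-vector product needs only sign flips and additions,
# which is the source of the >2x speed-up claimed above.
```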
  • the self-decoder consists of a trained dimension-expanding fully connected layer and a more powerful convolutional neural network.
  • the subsequent convolution sub-network is mainly composed of a simple convolution layer and a residual structure.
  • each residual structure includes three convolutional layers with kernel size 3; the three layers respectively expand the number of channels to 8, then to 16, and finally compress it back to 2, achieving feature recovery by scaling the channel dimension.
  • two such residual structures are cascaded, and a simple convolutional layer with kernel size 3 is added at the end to form the final self-decoder convolutional network.
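  • A sketch of this residual structure is given below for illustration, assuming PyTorch; the padding, the LeakyReLU activations and the additive skip connection are assumptions where the text does not specify them.

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Three 3x3 convolutions scaling the channel count 2 -> 8 -> 16 -> 2,
    wrapped with an additive skip connection."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(2, 8, kernel_size=3, padding=1), nn.LeakyReLU(0.3),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.LeakyReLU(0.3),
            nn.Conv2d(16, 2, kernel_size=3, padding=1),
        )
    def forward(self, x):
        return x + self.body(x)

# Two cascaded residual blocks followed by a final simple 3x3 convolution.
decoder_conv = nn.Sequential(ResidualBlock(), ResidualBlock(),
                             nn.Conv2d(2, 2, kernel_size=3, padding=1))
```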
  • the matrix finally recovered by the self-decoder still has dimension 2 × 32 × 32, i.e., 32 × 32 in complex form.
  • zero-padding the matrix restores its dimension from 32 × 32 to 1024 × 32; two inverse discrete Fourier transforms are then applied to the matrix with the restored OFDM dimension to obtain the final MIMO channel feedback result.
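  • A sketch of this recovery step is shown below for illustration, assuming NumPy and the same unitary DFT convention as in the compression-side sketch earlier; the exact normalization used by an actual system may differ.

```python
import numpy as np

def recover_space_frequency(H_sub: np.ndarray, n_c: int = 1024) -> np.ndarray:
    """Zero-pad the recovered 32 x 32 angle-delay sub-matrix to n_c x 32 and apply
    two inverse DFTs to return to the space-frequency domain."""
    n_s, n_t = H_sub.shape
    H_ad = np.zeros((n_c, n_t), dtype=complex)
    H_ad[:n_s, :] = H_sub                               # zero-padding: 32x32 -> 1024x32
    F_delay = np.fft.fft(np.eye(n_c)) / np.sqrt(n_c)    # unitary DFT matrices, so the
    F_angle = np.fft.fft(np.eye(n_t)) / np.sqrt(n_t)    # inverse is the conjugate transpose
    return F_delay.conj().T @ H_ad @ F_angle.conj().T   # back to the space-frequency domain
```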
  • the training data are collected from the actual environment, or generated by a channel model (e.g., the COST2100 model, the 3GPP TR 38.901 UMi NLOS model, or a simple Saleh-Valenzuela model).
  • through the cascaded compression feedback network of the binarized autoencoder and the self-decoder, a channel-recovery normalized mean square error (NMSE) below -10 dB can be obtained, which provides sufficiently accurate downlink channel information for subsequent beamforming.
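  • For reference, the normalized mean square error quoted above is commonly computed as sketched below (assuming NumPy); this is the standard definition and is given here as an illustration rather than as the application's own formula.

```python
import numpy as np

def nmse_db(H_true: np.ndarray, H_rec: np.ndarray) -> float:
    """Normalized MSE in dB: mean over samples of ||H - H_rec||^2 / ||H||^2.
    Inputs are shaped (num_samples, ...) and may be complex."""
    axes = tuple(range(1, H_true.ndim))
    err = np.sum(np.abs(H_true - H_rec) ** 2, axis=axes)
    power = np.sum(np.abs(H_true) ** 2, axis=axes)
    return 10.0 * np.log10(np.mean(err / power))
```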
  • the solution of the embodiment of the present application is to binarize the FC layer in the self-encoder, instead of quantizing or binarizing the compressed feature vector output by the FC layer.
  • as long as the final FC layer uses network binarization, the scheme falls within the concept of the present application, whether or not the discrete Fourier transform is used in the compression process.
  • the purpose of this application is to reduce the cost of deploying the neural network on the user-end device, so it is the neural network itself that is compressed, rather than further quantizing the CSI feature vector to be fed back in order to reduce the feedback cost.
  • the overhead reduced by the present application includes the storage overhead and the computation overhead; the channel feedback overhead itself is unchanged before and after the network compression.
  • the embodiments of the present application do not consider the compression of feature vectors, but focus on the compression of the network itself and the reduction of deployment costs.
  • the binarization mentioned in this application is network binarization, and specifically, binarization is performed on the fully connected layer of the feedback self-encoder, rather than the binarization of the compressed feature vector.
  • the floating-point fully connected layer in the original autoencoder is quantized into a binarized fully connected layer.
  • in terms of storage, the binarized autoencoder is about one-thirtieth the size of the original floating-point autoencoder; in terms of computing speed, the binarized autoencoder is about twice as fast as the original floating-point autoencoder.
  • the ultra-lightweight MIMO channel compression feedback autoencoder designed through the above network binarization technique greatly facilitates network deployment on a variety of resource-constrained user devices.
  • FIG. 5 is a schematic structural diagram of an efficient MIMO channel feedback device based on a binarized neural network according to an embodiment of the present application.
  • the high-efficiency MIMO channel feedback device based on the binarized neural network includes: a user-end preamble module 501, a first compression module 502 and a first decompression module 503.
  • the UE preamble module 501 is configured to determine the dimension of the space-frequency domain downlink channel matrix of the UE channel according to the number of OFDM subcarriers and the number of base station antennas in the communication system.
  • the first compression module 502 is configured to input the space-frequency domain downlink channel matrix into the autoencoder based on the binarized neural network for compression to obtain the channel feature vector, and to send the channel feature vector from the user end to the base station through the uplink.
  • the first decompression module 503 is configured to decode the received channel feature vector through a neural network-based self-decoder at the base station to obtain a space-frequency domain downlink channel matrix.
  • the transformation module 504 is configured to transform the space-frequency domain downlink channel matrix into the angle-delay domain downlink channel matrix through two discrete Fourier transforms, and to intercept the non-zero sub-matrix in the angle-delay domain downlink channel matrix through non-zero sub-matrix cutting.
  • the second compression module 505 is configured to input the non-zero sub-matrix into the autoencoder based on the binarized neural network for compression to obtain the channel feature vector, and to send the channel feature vector from the user end to the base station through the uplink.
  • the second decompression module 506 is configured to decode the received channel feature vector through the neural-network-based self-decoder at the base station to obtain the non-zero angle-delay domain downlink channel sub-matrix, and to perform zero-padding and two inverse discrete Fourier transforms on the non-zero angle-delay domain downlink channel sub-matrix to obtain the space-frequency domain downlink channel matrix.
  • the downlink channel of the space-frequency domain MIMO system finally obtained by the base station can be used as input to other subsequent modules, such as beamforming modules.
  • the floating-point fully connected layer in the original autoencoder is quantized into a binarized fully connected layer.
  • in terms of storage, the binarized autoencoder is about one-thirtieth the size of the original floating-point autoencoder; in terms of computing speed, the binarized autoencoder is about twice as fast as the original floating-point autoencoder.
  • the ultra-lightweight MIMO channel compression feedback autoencoder designed through the above network binarization technique greatly facilitates network deployment on a variety of resource-constrained user devices.
  • first and second are only used for descriptive purposes, and should not be construed as indicating or implying relative importance or implying the number of indicated technical features. Thus, a feature delimited with “first”, “second” may expressly or implicitly include at least one of that feature.
  • plurality means at least two, such as two, three, etc., unless expressly and specifically defined otherwise.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Power Engineering (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Radio Transmission System (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

The present application discloses a high-efficiency MIMO channel feedback method and device based on a binarized neural network. The method comprises: determining the dimension of a space-frequency domain downlink channel matrix of a user-side channel according to the number of OFDM subcarriers in a communication system and the number of antennas of a base station side; inputting the space-frequency domain downlink channel matrix into a self-encoder based on the binarized neural network for compression to obtain a channel feature vector, and sending the channel feature vector from a user side to the base station side by means of an uplink; and decoding the received channel feature vector by means of a neural network-based self-decoder of the base station side to obtain the space-frequency domain downlink channel matrix. In the solution, self-encoder neural network deployment requiring low overhead can be carried out at the user side having limited resources, and a more practical channel compression feedback solution is realized.

Description

Efficient MIMO channel feedback method and device based on binarized neural network
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority to Chinese Patent Application No. 202110356946.1, filed by Tsinghua University on April 1, 2021 and entitled "Efficient MIMO Channel Feedback Method and Device Based on Binarized Neural Network".
TECHNICAL FIELD
The present application relates to the field of communication technologies, and in particular to an efficient MIMO channel feedback method and device based on a binarized neural network.
BACKGROUND
The channel feedback accuracy of traditional codebook-based MIMO (Multiple-Input Multiple-Output) systems is limited, and when MIMO is scaled up to massive MIMO, the feedback accuracy degrades further.
Full-channel feedback or near-full-channel feedback in a MIMO system requires a large amount of information to be fed back, which brings an unbearable feedback overhead. In addition, traditional compressed sensing techniques can hardly achieve a sufficiently low compression ratio together with sufficiently low compression loss.
Deep-learning-based MIMO channel compression feedback achieves low compression loss at a low compression ratio by learning channel characteristics, but for resource-constrained user-end devices the existing autoencoder networks are too heavy in both storage and computation.
SUMMARY OF THE INVENTION
The present application aims to solve, at least to a certain extent, one of the technical problems in the related art.
To this end, one object of the present application is to propose an efficient MIMO channel feedback method based on a binarized neural network, which enables low-overhead deployment of an autoencoder neural network at a resource-constrained user end and realizes a more practical channel compression feedback scheme.
Another object of the present application is to propose an efficient MIMO channel feedback device based on a binarized neural network.
To achieve the above objects, an embodiment of one aspect of the present application proposes an efficient MIMO channel feedback method based on a binarized neural network, including:
S1: determining the dimension of the space-frequency domain downlink channel matrix of the user-end channel according to the number of OFDM subcarriers and the number of base-station antennas in the communication system;
S2: inputting the space-frequency domain downlink channel matrix into an autoencoder based on the binarized neural network for compression to obtain a channel feature vector, and sending the channel feature vector from the user end to the base station through the uplink;
S3: decoding the received channel feature vector through a neural-network-based self-decoder at the base station to obtain the space-frequency domain downlink channel matrix.
To achieve the above objects, an embodiment of another aspect of the present application proposes an efficient MIMO channel feedback device based on a binarized neural network, including:
a user-end preamble module, configured to determine the dimension of the space-frequency domain downlink channel matrix of the user-end channel according to the number of OFDM subcarriers and the number of base-station antennas in the communication system;
a first compression module, configured to input the space-frequency domain downlink channel matrix into an autoencoder based on the binarized neural network for compression to obtain a channel feature vector, and to send the channel feature vector from the user end to the base station through the uplink;
a first decompression module, configured to decode the received channel feature vector through a neural-network-based self-decoder at the base station to obtain the space-frequency domain downlink channel matrix.
By introducing the network binarization technique, the efficient MIMO channel feedback method and device based on a binarized neural network of the embodiments of the present application quantize the floating-point fully connected layer of the original autoencoder into a binarized fully connected layer. In terms of storage, the binarized autoencoder is about one-thirtieth the size of the original floating-point autoencoder; in terms of computing speed, it is about twice as fast. The ultra-lightweight MIMO channel compression feedback autoencoder designed with this network binarization technique greatly facilitates network deployment on a variety of resource-constrained user devices.
Additional aspects and advantages of the present application will be set forth in part in the following description, and in part will become apparent from the following description or be learned by practice of the present application.
BRIEF DESCRIPTION OF THE DRAWINGS
The above and/or additional aspects and advantages of the present application will become apparent and readily understood from the following description of embodiments taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a flowchart of an efficient MIMO channel feedback method based on a binarized neural network according to an embodiment of the present application;
FIG. 2 is a schematic diagram of an efficient MIMO channel feedback method based on a binarized neural network according to an embodiment of the present application;
FIG. 3 is a schematic diagram of end-to-end training of a binarized autoencoder according to an embodiment of the present application;
FIG. 4 is a visualization of the binarized dimension-compressing fully connected layer in the autoencoder and the dimension-expanding fully connected layer in the self-decoder according to an embodiment of the present application;
FIG. 5 is a schematic structural diagram of an efficient MIMO channel feedback device based on a binarized neural network according to an embodiment of the present application.
DETAILED DESCRIPTION OF THE EMBODIMENTS
Embodiments of the present application are described in detail below, and examples of the embodiments are shown in the accompanying drawings, in which the same or similar reference numerals denote the same or similar elements or elements having the same or similar functions throughout. The embodiments described below with reference to the accompanying drawings are exemplary and are intended to explain the present application, and should not be construed as limiting the present application.
The efficient MIMO channel feedback method and device based on a binarized neural network according to the embodiments of the present application are described below with reference to the accompanying drawings.
First, the efficient MIMO channel feedback method based on a binarized neural network proposed according to an embodiment of the present application is described with reference to the accompanying drawings.
FIG. 1 is a flowchart of an efficient MIMO channel feedback method based on a binarized neural network according to an embodiment of the present application.
As shown in FIG. 1, the efficient MIMO channel feedback method based on a binarized neural network includes the following steps.
Step S1: determine the dimension of the space-frequency domain downlink channel matrix of the user-end channel according to the number of OFDM subcarriers and the number of base-station antennas in the communication system.
The number of OFDM subcarriers N_c and the number of base-station antennas N_t are determined according to preset communication requirements or the preset communication system, which determines the dimension, N_c × N_t, of the space-frequency domain downlink channel matrix.
Specifically, the embodiments of the present application are mainly applied in a frequency division duplex (FDD) system to reduce the storage and computation overhead of a resource-constrained user end, where the base station has a multi-antenna array and the user end has a single antenna. When channel feedback is required in a time division duplex (TDD) system, the same system structure can likewise be used for compressed feedback to reduce the storage and computation overhead of the user end.
Further, after the dimension of the space-frequency domain downlink channel matrix of the user-end channel is determined, the space-frequency domain downlink channel matrix may also be processed: it is transformed into the angle-delay domain downlink channel matrix by two discrete Fourier transforms, the non-zero sub-matrix in the angle-delay domain downlink channel matrix is intercepted by non-zero sub-matrix cutting, and the non-zero sub-matrix is taken as the input to the autoencoder.
It can be understood that the discrete Fourier transform operation applies to any number of OFDM subcarriers N_c and any number of base-station antennas N_t. How the non-zero sub-matrix is intercepted is determined by the actual channel conditions: when the channel delay is large or complex, a larger non-zero sub-matrix is intercepted; otherwise, a smaller non-zero sub-matrix suffices.
Step S2: input the space-frequency domain downlink channel matrix into the autoencoder based on the binarized neural network for compression to obtain the channel feature vector, and send the channel feature vector from the user end to the base station through the uplink.
After compression by the autoencoder, the user end feeds back the compressed feature vector through the uplink instead of directly feeding back the original channel matrix, which greatly reduces the channel feedback overhead.
Specifically, the input to the autoencoder based on the binarized neural network can be the unprocessed space-frequency domain downlink channel matrix, which is compressed directly to obtain the channel feature vector. If the space-frequency domain downlink channel matrix has been processed, the non-zero sub-matrix is used as the autoencoder input for compression to obtain the channel feature vector.
It can be understood that, in the embodiments of the present application, both applying and not applying the discrete Fourier transform to the space-frequency domain downlink channel matrix are feasible schemes.
Further, the autoencoder based on the binarized neural network consists of preceding convolutional layers cascaded with subsequent fully connected layers; the fully connected layers of the autoencoder are binarized, while the convolutional layers may or may not be binarized.
As one implementation, only the fully connected layer used to adjust the channel matrix dimension is binarized. The preceding convolutional layers may or may not be binarized; to preserve performance, it is generally recommended not to binarize them. In practice, the fully connected layer usually accounts for more than 90% of the autoencoder's parameters and computation, so binarizing the fully connected layer essentially amounts to binarizing the entire autoencoder.
Step S3: decode the received channel feature vector through the neural-network-based self-decoder at the base station to obtain the space-frequency domain downlink channel matrix, thereby completing low-overhead downlink channel feedback for the MIMO system.
Specifically, if no discrete Fourier transform was applied to the space-frequency domain downlink channel matrix, the space-frequency domain downlink channel matrix is obtained by decoding the channel feature vector directly with the self-decoder. If the discrete Fourier transform and non-zero sub-matrix cutting were performed, the received channel feature vector is decoded by the neural-network-based self-decoder at the base station to obtain the non-zero angle-delay domain downlink channel sub-matrix, and zero-padding and two inverse discrete Fourier transforms are then applied to it to obtain the space-frequency domain downlink channel matrix, completing the full downlink channel compression feedback process.
After decompression, the recovered angle-delay domain non-zero channel sub-matrix differs somewhat from the true one; this difference is the information loss of compressed feedback. With adequate and proper training of the autoencoder and self-decoder, this loss can be made small enough that the recovered channel sub-matrix is sufficiently accurate and usable. Moreover, using more powerful autoencoders and self-decoders can further reduce this loss.
Further, the neural-network-based self-decoder can have various designs, such as using a residual network, a multi-resolution network, or an attention-enhanced network.
The application also covers several extensions of the binarized autoencoder of the MIMO channel compression network, mainly by expanding and strengthening the existing convolutional neural network in the autoencoder, such as adding extra convolutional layers, using additional residual links, or enlarging convolution kernels, so as to obtain better feedback performance. Extensions of the self-decoder of the MIMO channel compression network mainly add redundant decoding units to the self-decoder, so as to obtain better feedback performance.
Further, in the embodiments of the present application, the binarized autoencoder and the self-decoder need to be trained end to end.
First, the training and test data sets are obtained. Channel data can be generated in batches using an existing mature channel model, or collected from the actual environment with dedicated equipment. The former is cheaper but may suffer performance degradation at deployment; the latter is more expensive but the trained network performs better in the real environment. The obtained channel matrices are preprocessed by the discrete Fourier transform and non-zero sub-matrix cutting to generate a data set usable for end-to-end network training, which is then split into a training set and a test set containing no duplicate data.
The autoencoder and self-decoder are then connected together for end-to-end training, where the optimizer can be ADAM and the loss function the mean squared error (MSE). Specifically, the output of the autoencoder is used directly as the input of the self-decoder, and the MSE is computed between the angle-delay domain non-zero channel sub-matrix output by the self-decoder and its true value. The forward pass of the network therefore ends by evaluating the MSE loss, which is then back-propagated: gradients are derived automatically by the chain rule to obtain the gradient of each layer. After the gradients are obtained, the adaptive learning rate given by the initialization and update rule of the ADAM optimizer is used, and the parameters of each layer are updated.
In end-to-end training, the binarization module in the autoencoder (such as the binarized fully connected layer) is not differentiable. To enable normal end-to-end training and iteration, a gradient low-pass approximation is adopted: for all parameters whose absolute value is less than the threshold, the gradient is back-propagated as is, while the gradient of parameters whose absolute value is greater than the threshold is forced to zero. The entire feedback network can thus achieve complete gradient back-propagation and be trained end to end. As shown in FIG. 3, the specific procedure is as follows:
a) Find all binarization layers in the autoencoder; by default only the final fully connected layer is a binarization layer. Compute equivalent weights based on the current floating-point parameters of the binarization layer.
b) Binarize the floating-point parameters of the binarization layer by taking their sign, that is, positive numbers and zero are binarized to 1, and negative numbers to -1.
c) Based on the binarized parameters, run a forward pass of the entire feedback network (autoencoder plus cascaded decoder) and compute the loss function MSE.
d) Back-propagate from the loss function MSE to obtain the gradient of each layer by automatic differentiation.
e) When the back-propagation in the previous step reaches the binarization layer in the autoencoder, automatic differentiation can no longer proceed, and the derivative of the binarization function must be given manually. Here a gradient low-pass filter is used as the approximate derivative of the binarization function: when the absolute value of the original parameter is greater than 1, its gradient is taken to be 0; when the absolute value is less than 1, its gradient is taken to be the ordinary floating-point gradient.
f) With the operation defined in the previous step, the gradients of all layers of the feedback network are obtained; the update gradient for the floating-point parameters of the binarization layer is computed from the equivalent weights of the binarization layer and its original gradient.
g) Combining the binarization-layer update gradient computed in f) with the gradients of the other layers obtained in d), update the parameters of the whole feedback network (autoencoder cascaded with the self-decoder) using the optimizer (ADAM) and the corresponding learning rate.
Keep training end to end and gradually decrease the learning rate until the network converges; the autoencoder design and self-decoder design of the compression feedback network are thereby obtained.
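The following is an illustrative end-to-end training loop corresponding to steps a) to g), assuming PyTorch; the encoder and decoder modules, the data loader and the hyperparameters are placeholders, and any binarized layer inside the encoder is assumed to apply the sign forward / gradient low-pass backward rule internally.

```python
import torch

def train_feedback_network(encoder, decoder, loader, epochs=100, lr=1e-3):
    """End-to-end training of the cascaded autoencoder and self-decoder with ADAM and MSE.
    `loader` is assumed to yield angle-delay domain non-zero sub-matrices of shape
    (batch, 2, N_s, N_t)."""
    params = list(encoder.parameters()) + list(decoder.parameters())
    optimizer = torch.optim.Adam(params, lr=lr)
    loss_fn = torch.nn.MSELoss()
    for _ in range(epochs):
        for H in loader:
            v = encoder(H)              # compressed feature vector
            H_hat = decoder(v)          # recovered sub-matrix
            loss = loss_fn(H_hat, H)    # MSE against the true sub-matrix
            optimizer.zero_grad()
            loss.backward()             # full back-propagation through both networks
            optimizer.step()            # ADAM update of the floating-point parameters
```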
可以理解的是,不同的自解码器设计、不同的优化器设计、不同的学习率下降方式都可以通过梯度低通近似策略实现MIMO信道压缩二值化自编码器端到端训练。It can be understood that different self-decoder designs, different optimizer designs, and different learning rate descent methods can all achieve end-to-end training of MIMO channel compression binarized self-encoders through gradient low-pass approximation strategies.
本申请的实施例包含了一种对MIMO信道压缩网络(含自编码器和自解码器)有效的 学习率下降策略。即学习率先从0快速上升到最高点,再从最高点以余弦曲线下降为0。其中学习率的最低点可以不是0而是某个接近零的小量;学习率上升的区间和下降的区间长度可以不同,通常前者短得多。The embodiments of the present application include a learning rate reduction strategy that is effective for MIMO channel compression networks (including self-encoders and self-decoders). That is to say, the learning firstly rises rapidly from 0 to the highest point, and then decreases from the highest point to 0 in a cosine curve. The lowest point of the learning rate can be not 0 but a small amount close to zero; the length of the interval between which the learning rate rises and the interval during which it falls can be different, usually the former is much shorter.
End-to-end training is performed on the binarized-neural-network-based autoencoder and the neural-network-based decoder. The key difficulty in this end-to-end training is that the binarized module in the autoencoder (e.g., the binarized fully connected layer) is non-differentiable. To allow normal end-to-end training and iteration, the present application adopts a gradient low-pass approximation: the gradients of all parameters whose absolute value is below the threshold are back-propagated unchanged, while the gradients of parameters whose absolute value exceeds the threshold are forced to zero. In this way the entire feedback network can perform complete gradient backpropagation and thus be trained end to end. The core point is using the gradient low-pass approximation to complete end-to-end training of the binarized feedback network within the intelligent feedback communication system, rather than general feedback-network training details such as data collection or the Adam optimizer; a minimal sketch of such a binarized layer is given below.
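A minimal PyTorch sketch of such a binarized fully connected layer with the gradient low-pass approximation is given below. The class names, the clipping threshold of 1 and the layer sizes are illustrative assumptions rather than the reference implementation of the present application.

```python
import torch
import torch.nn as nn

class SignWithLowPassGrad(torch.autograd.Function):
    """sign() in the forward pass; in the backward pass the gradient is passed through
    unchanged where |x| < 1 and forced to zero where |x| >= 1 (gradient low-pass filter)."""
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        out = torch.sign(x)
        out[out == 0] = 1.0          # zeros are binarized to +1
        return out

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        return grad_output * (x.abs() < 1).to(grad_output.dtype)

class BinaryLinear(nn.Module):
    """Fully connected layer whose weights are binarized to +/-1 in the forward pass,
    while full-precision weights are kept for the gradient update."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.01)

    def forward(self, x):
        w_bin = SignWithLowPassGrad.apply(self.weight)
        return x @ w_bin.t()

# Toy end-to-end step: compress a flattened channel and reconstruct it with a float decoder.
encoder_fc = BinaryLinear(2048, 256)      # binarized compression layer (illustrative sizes)
decoder_fc = nn.Linear(256, 2048)         # floating-point expansion layer
opt = torch.optim.Adam(list(encoder_fc.parameters()) + list(decoder_fc.parameters()), lr=1e-3)

h = torch.randn(16, 2048)                 # a batch of flattened channel sub-matrices
loss = nn.functional.mse_loss(decoder_fc(encoder_fc(h)), h)
opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```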
The efficient MIMO channel feedback method based on a binarized neural network of the present application is described in detail below.
S101: For an FDD system, the number of OFDM subcarriers N_c and the number of base-station antennas N_t are determined from the existing hardware and system software of the communication system, which fixes the dimension of the overall downlink channel matrix H̃ obtained by stacking all subcarriers, namely N_c × N_t. Two discrete Fourier transforms convert H̃ from the space-frequency domain to the angle-delay domain, yielding the matrix H̃_a, as shown in the following formula:

H̃_a = X H̃ Y

where X and Y are discrete Fourier transform matrices of the corresponding dimensions. Owing to the sparsity of the FDD MIMO channel in the delay domain, only a limited and concentrated set of rows of H̃_a is non-zero. The rows that are approximately zero need not be fed back, so the non-zero sub-matrix H formed by the first N_s rows of H̃_a is extracted as the object of channel feedback, as shown in the following formula:

H = H̃_a(1:N_s, :)
S102: At the user end, the matrix H is passed through the binarized-neural-network-based autoencoder E. The autoencoder consists of a leading convolutional stage cascaded with a subsequent fully connected layer, responsible respectively for abstracting the spatial features of the channel and for compressing the channel dimension; it finally converts the input matrix H into a compressed feature vector v, as shown in the following formula:

E(H) = E_FC(E_conv(H)) = v

It is worth noting that the dimension of the feature vector v is between a few times and a few tens of times smaller than that of the original matrix H, so the user end only needs to feed back v rather than H, which significantly reduces the feedback overhead. The convolutional stage can take various designs, one of which is given in the first specific embodiment, whereas the fully connected layer is constrained by the dimensions of the matrix H and the feature vector v. The matrix H has dimension 2 × N_s × N_t, where the 2 accounts for the real and imaginary parts (each entry of the channel matrix H is complex), and the feature vector v has dimension (2 × N_s × N_t)/λ, where λ is the compression ratio, ranging from a few to a few tens. The fully connected layer therefore has dimension (2 × N_s × N_t) × (2 × N_s × N_t)/λ.
After the feature vector v is ideally fed back over the uplink, the base station receives it without loss. Note that the uplink is assumed here to be lossless digital transmission; possible information loss of the feature vector during transmission is not considered, and the focus is on recovering the original channel matrix as accurately as possible from the compressed feature vector.
S103: At the base station, the received feature vector v is input to the neural-network-based decoder D. The decoder's structure is dual to that of the autoencoder: a fully connected layer of dual dimension is cascaded with subsequent convolutional layers. The dimension-dual fully connected layer restores the compressed feature vector v to its pre-compression dimension, and the subsequent convolutional layers extract and restore the spatial information of the original channel matrix H, as shown in the following formula:

D(v) = D_conv(D_FC(v)) = Ĥ

A properly trained decoder recovers the original non-zero angle-delay domain sub-matrix, i.e., makes Ĥ sufficiently close to the original channel matrix H. It is worth noting that the dimension of the fully connected layer here is the reverse of that in the autoencoder, namely ((2 × N_s × N_t)/λ) × (2 × N_s × N_t). Because the base station has comparatively ample storage and computing capability, the subsequent convolutional stage D_conv can be built relatively complex, so as to better restore the spatial features of the channel matrix and achieve better feedback performance.
S104: Finally, by zero-padding and applying two inverse discrete Fourier transforms, the base station obtains the recovered value of the original downlink channel, denoted H̃_rec, from the recovered non-zero angle-delay domain sub-matrix Ĥ, completing the full downlink channel compression feedback process, as shown in the following formula:

H̃_rec = X⁻¹ Ĥ_zp Y⁻¹

where Ĥ_zp is the zero-padded angle-delay domain matrix, a complex matrix of dimension N_c × N_t, and X⁻¹ and Y⁻¹ are the corresponding inverse discrete Fourier transform matrices.
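A complementary sketch of S104 under the same assumed DFT convention: the recovered sub-matrix is zero-padded back to N_c rows and the two transforms are inverted. Variable names and sizes are illustrative.

```python
import numpy as np

Nc, Nt, Ns = 1024, 32, 32
H_hat = np.random.randn(Ns, Nt) + 1j * np.random.randn(Ns, Nt)  # stand-in for the decoder's recovered sub-matrix

# Zero-pad the recovered angle-delay sub-matrix back to the full Nc x Nt dimension.
H_ad_hat = np.zeros((Nc, Nt), dtype=complex)
H_ad_hat[:Ns, :] = H_hat

# Invert the two DFTs used in S101 to return to the space-frequency domain.
X = np.fft.fft(np.eye(Nc)) / np.sqrt(Nc)
Y = np.fft.fft(np.eye(Nt)) / np.sqrt(Nt)
H_sf_hat = np.linalg.inv(X) @ H_ad_hat @ np.linalg.inv(Y)   # recovered downlink channel, Nc x Nt
print(H_sf_hat.shape)
```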
The key of the present application is to binarize the fully connected layer E_FC, which accounts for the overwhelming majority of the autoencoder's storage and computational complexity. By reducing its 32-bit floating-point parameters to 1-bit binarized parameters, the storage complexity of this layer drops to 1/32 of the original. Moreover, since no multiplications are needed after binarization, only additions, the computational cost also drops sharply; a conservative estimate puts the speed-up at more than 2×. It is worth noting that in most current autoencoder designs the convolutional layers account for very little computation and very few parameters compared with the fully connected layer, so binarizing the autoencoder's convolutional layers would cost more than it gains; the examples in this application therefore do not binarize the convolutional layers (although doing so is in fact also feasible).

Let E_BFC denote the binarized fully connected layer of the autoencoder. The low-complexity MIMO channel compression feedback method based on network binarization proposed in this application can then be characterized by the following formula:

Ĥ = D_conv(D_FC(E_BFC(E_conv(H))))
Through the above procedure, the present application greatly reduces the cost of deploying the autoencoder on resource-sensitive user equipment and improves the practical feasibility of the neural-network autoencoder-decoder based MIMO channel feedback method. The above procedure can be implemented by the low-complexity, network-binarization-based MIMO channel compression feedback device depicted in FIG. 2.

In addition, all of the above networks must acquire their channel compression/decompression capability through end-to-end training; the parts to be trained include E_conv, E_BFC, D_FC and D_conv. The network described in the above embodiment can be trained on collected or generated data with the end-to-end training strategy described in this application; the key is handling the non-differentiable binarized autoencoder fully connected layer E_BFC end to end, obtaining the equivalent weight of its parameters as shown in FIG. 3 and computing its update gradient.
Let F_BFC denote the floating-point parameters of the E_BFC layer and B_BFC its binarized parameters. One instantiated way of computing the equivalent weight β of the binarized layer's parameters is as follows:

[Formula reproduced only as an image in the original: the instantiated computation of β from F_BFC.]

The corresponding computation of the update gradient of the floating-point parameters is as follows:

[Formula reproduced only as an image in the original: the update gradient of F_BFC, combining the gradient with respect to B_BFC, the equivalent weight β and f_low_pass(F_BFC).]

where f_low_pass(F_BFC) is the aforementioned gradient low-pass filter, used to approximate the derivative of the binarization function.
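Because the two formulas above are reproduced only as images in the source, the sketch below shows one common instantiation of the idea, assumed here for illustration (an XNOR-Net-style scaling rather than the patent's exact expressions): the equivalent weight β is taken as the mean absolute value of the floating-point parameters, and the update gradient combines the gradient arriving at the binarized weights with β and the gradient low-pass filter.

```python
import torch

def equivalent_weight(F_bfc: torch.Tensor) -> torch.Tensor:
    """One possible equivalent weight beta: the mean absolute value of the floating-point parameters."""
    return F_bfc.abs().mean()

def update_gradient(F_bfc: torch.Tensor, grad_B_bfc: torch.Tensor) -> torch.Tensor:
    """Gradient with respect to the floating-point parameters: the gradient arriving at the
    binarized parameters, scaled by beta and masked by the gradient low-pass filter |F| < 1."""
    beta = equivalent_weight(F_bfc)
    low_pass = (F_bfc.abs() < 1).to(F_bfc.dtype)   # pass gradients only where |F_BFC| < 1
    return grad_B_bfc * beta * low_pass

F = torch.randn(256, 2048) * 0.1       # floating-point parameters of the binarized FC layer (illustrative)
gB = torch.randn_like(F)               # gradient w.r.t. the binarized parameters from backprop
print(update_gradient(F, gB).shape)
```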
The low-complexity MIMO channel compression feedback method based on network binarization is further illustrated below through a specific embodiment.
1) Consider an FDD massive MIMO system with N_c = 1024 OFDM subcarriers and a base-station antenna array of size N_t = 32. Its downlink channel matrix H̃ then has dimension 1024 × 32. Applying two discrete Fourier transforms to H̃ converts it from the space-frequency domain to the angle-delay domain, yielding the angle-delay domain channel matrix H̃_a, which is then truncated: the first 32 rows (16 rows are generally sufficient; 32 rows further improve accuracy) are taken as the complex sub-matrix H that needs to be fed back. Its dimension is thus reduced to 32 × 32, and after conversion to a real-valued floating-point array the dimension is 2 × 32 × 32.
2) At the user end, the sub-matrix H is input to the autoencoder E, which consists of a convolutional stage and one fully connected layer. With a target compression ratio of 8, the convolutional stage is set as a cascade of two 3×3 convolutions, each followed by a batch-normalization layer and an activation layer, while the fully connected layer performs the channel-dimension compression with parameter dimension 2048 × 256, so the feature vector v output by the autoencoder E has dimension 1 × 256. In this application the autoencoder's fully connected layer is binarized, so its parameter storage here is equivalent to only 16K (the original floating-point parameter count is 512K), i.e., reduced to 1/32 of the original. Moreover, no multiplications are needed in the network computation, because multiplying by ±1 amounts to flipping the sign of the current element, so the layer degenerates into additions and subtractions between vector elements. Addition is faster than multiplication on the vast majority of hardware, so the forward-pass speed of the binarized fully connected layer here is more than twice that of the original floating-point fully connected layer. A short calculation after this paragraph checks these storage figures.
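The storage figures quoted above can be checked with a short calculation; the byte counts below assume 32-bit floats and 1-bit binarized weights and are illustrative only.

```python
in_dim, out_dim = 2048, 256
n_params = in_dim * out_dim                     # 524,288 parameters ("512K")

float_bits = n_params * 32                      # 32-bit floating-point storage
binary_bits = n_params * 1                      # 1-bit binarized storage

print(n_params, n_params // 32)                 # 524288 parameters, equivalent to 16K 32-bit words
print(float_bits / 8 / 1024, "KiB ->", binary_bits / 8 / 1024, "KiB")  # 2048 KiB -> 64 KiB (1/32)
```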
3) The feature vector v is fed back through an ideal digital link; after receiving the lossless feature vector v, the base station inputs it into the neural-network-based decoder. The decoder consists of a trained dimension-expanding fully connected layer and a comparatively powerful convolutional neural network. The dimension-expanding fully connected layer is dual to the dimension-compressing fully connected layer of the autoencoder, with parameter dimension 256 × 2048; note that the decoder parameters in this application are ordinary 32-bit floating-point numbers. After the dimension-expanding fully connected layer, the dimension of the feature vector v is expanded back to the original 2048 = 2 × 32 × 32. The subsequent convolutional sub-network consists mainly of plain convolutional layers and residual blocks; each residual block contains three convolutional layers with kernel size 3, which expand the number of channels to 8, expand it to 16, and compress it back to 2, restoring the features by scaling the channels. In this example two such residual blocks are cascaded, and a final plain convolutional layer with kernel size 3 is appended to form the decoder's convolutional network. With this structure, the matrix Ĥ finally recovered by the decoder still has dimension 2 × 32 × 32, i.e., 32 × 32 in complex terms. A sketch of this architecture is given below.
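The following PyTorch sketch assembles an encoder and decoder with the shapes described in this embodiment (two 3×3 convolutions plus a 2048→256 compression layer at the user end; a 256→2048 expansion layer, two channel-scaling residual blocks 2→8→16→2 and a final 3×3 convolution at the base station). It is one plausible reading of the described structure rather than the exact network of the application; the activation choice is an assumption, and for brevity the compression layer is an ordinary nn.Linear, with binarization handled as in the earlier sketch.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, ns=32, nt=32, feat=256):
        super().__init__()
        self.conv = nn.Sequential(                      # two 3x3 convolutions, each with BN + activation
            nn.Conv2d(2, 2, 3, padding=1), nn.BatchNorm2d(2), nn.LeakyReLU(0.3),
            nn.Conv2d(2, 2, 3, padding=1), nn.BatchNorm2d(2), nn.LeakyReLU(0.3),
        )
        self.fc = nn.Linear(2 * ns * nt, feat)          # 2048 -> 256 compression layer (binarized in the patent)

    def forward(self, h):                               # h: (batch, 2, 32, 32)
        return self.fc(self.conv(h).flatten(1))         # -> (batch, 256)

class ResBlock(nn.Module):
    """Residual block that scales the channels 2 -> 8 -> 16 -> 2 with 3x3 convolutions."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(2, 8, 3, padding=1), nn.BatchNorm2d(8), nn.LeakyReLU(0.3),
            nn.Conv2d(8, 16, 3, padding=1), nn.BatchNorm2d(16), nn.LeakyReLU(0.3),
            nn.Conv2d(16, 2, 3, padding=1), nn.BatchNorm2d(2),
        )

    def forward(self, x):
        return torch.relu(x + self.body(x))

class Decoder(nn.Module):
    def __init__(self, ns=32, nt=32, feat=256):
        super().__init__()
        self.ns, self.nt = ns, nt
        self.fc = nn.Linear(feat, 2 * ns * nt)          # 256 -> 2048 expansion layer (32-bit floats)
        self.conv = nn.Sequential(ResBlock(), ResBlock(), nn.Conv2d(2, 2, 3, padding=1))

    def forward(self, v):
        x = self.fc(v).view(-1, 2, self.ns, self.nt)
        return self.conv(x)                             # -> (batch, 2, 32, 32)

h = torch.randn(4, 2, 32, 32)
v = Encoder()(h)
print(v.shape, Decoder()(v).shape)                      # torch.Size([4, 256]) torch.Size([4, 2, 32, 32])
```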
4) The matrix Ĥ is zero-padded to restore its dimension from 32 × 32 to 1024 × 32, and two inverse discrete Fourier transforms are applied to the matrix restored to the OFDM dimension, yielding the final MIMO channel feedback result.
In the above specific example, the training data are collected from a real environment or generated by a channel model (e.g., the COST2100 model, the 3GPP TR 38.901 UMi NLOS model, or a simple Saleh-Valenzuela model). Taking the COST2100 high-frequency indoor scenario as an example, after training on batched data with the ADAM optimizer, the cascaded compression feedback network formed by the binarized autoencoder and the decoder achieves a channel-recovery normalized mean square error below -10 dB (the helper below shows how this metric is computed), providing downlink channel information accurate enough for subsequent beamforming.
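The -10 dB figure refers to the normalized mean square error between the recovered and the true channel. The small helper below shows how such a figure would be computed; the per-sample normalization is an assumed, commonly used definition of the metric.

```python
import numpy as np

def nmse_db(H_true: np.ndarray, H_hat: np.ndarray) -> float:
    """Normalized MSE in dB, averaged over the first (sample) axis."""
    err = np.sum(np.abs(H_true - H_hat) ** 2, axis=(1, 2))
    ref = np.sum(np.abs(H_true) ** 2, axis=(1, 2))
    return 10.0 * np.log10(np.mean(err / ref))

H_true = np.random.randn(100, 32, 32) + 1j * np.random.randn(100, 32, 32)
H_hat = H_true + 0.1 * (np.random.randn(100, 32, 32) + 1j * np.random.randn(100, 32, 32))
print(nmse_db(H_true, H_hat))   # roughly -20 dB for this synthetic perturbation
```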
In this specific example, a visualization of the trained binarized dimension-compressing fully connected layer in the autoencoder and of the dimension-expanding fully connected layer in the decoder is shown in FIG. 4. It can be seen that the trained fully connected layer parameters capture certain spatial characteristics of the channel.
As the above description shows, the solution of the embodiments of this application binarizes the FC layer in the autoencoder, not the compressed feature vector output by the FC layer. As long as network binarization is applied to the final FC layer, the scheme falls within the inventive concept of this application regardless of whether a discrete Fourier transform is used in the compression pipeline.

The purpose of this application is to reduce the overhead of deploying the neural network on user-end devices; it therefore compresses the neural network itself rather than reducing the feedback overhead by further quantizing the CSI feature vector to be fed back. The overhead reduced by this application comprises storage and computation overhead; the channel feedback overhead is unchanged before and after compression.

The embodiments of this application do not consider compressing the feature vector, but focus on compressing the network itself and reducing deployment cost. The binarization referred to in this application is network binarization, specifically binarization of the fully connected layer of the feedback autoencoder, not binarization of the compressed feature vector.

Regardless of whether the other layers of the preceding autoencoder are binarized (those layers may be convolutional or fully connected), as long as network binarization is used at the final fully connected layer, the scheme falls within the inventive concept of this application.
According to the efficient MIMO channel feedback method based on a binarized neural network proposed in the embodiments of this application, network binarization is introduced to quantize the floating-point fully connected layer of the original autoencoder into a binarized fully connected layer. In storage, the binarized autoencoder is about one-thirtieth of the original floating-point autoencoder; in computation speed, it is about twice as fast. The ultra-lightweight MIMO channel compression feedback autoencoder designed with this network binarization technique greatly facilitates network deployment on diverse resource-constrained user equipment.
Next, the efficient MIMO channel feedback device based on a binarized neural network proposed according to embodiments of this application is described with reference to the accompanying drawings.

FIG. 5 is a schematic structural diagram of an efficient MIMO channel feedback device based on a binarized neural network according to an embodiment of this application.

As shown in FIG. 5, the device comprises: a user-end preprocessing module 501, a first compression module 502 and a first decompression module 503.
The user-end preprocessing module 501 is configured to determine the dimension of the space-frequency domain downlink channel matrix of the user-end channel according to the number of OFDM subcarriers and the number of base-station antennas in the communication system.

The first compression module 502 is configured to input the space-frequency domain downlink channel matrix into the binarized-neural-network-based autoencoder for compression to obtain a channel feature vector, and to send the channel feature vector from the user end to the base station over the uplink.

The first decompression module 503 is configured to decode the received channel feature vector with the neural-network-based decoder at the base station to obtain the space-frequency domain downlink channel matrix.
Further, in an embodiment of this application, the device also includes:

a transform module 504, configured to transform the space-frequency domain downlink channel matrix into an angle-delay domain downlink channel matrix through two discrete Fourier transforms, and to extract the non-zero sub-matrix of the angle-delay domain downlink channel matrix by non-zero sub-matrix cropping;

a second compression module 505, configured to input the non-zero sub-matrix into the binarized-neural-network-based autoencoder for compression to obtain a channel feature vector, and to send the channel feature vector from the user end to the base station over the uplink; and

a second decompression module 506, configured to decode the received channel feature vector with the neural-network-based decoder at the base station to obtain a non-zero angle-delay domain downlink channel sub-matrix, and to obtain the space-frequency domain downlink channel matrix by zero-padding the non-zero angle-delay domain downlink channel sub-matrix and applying two inverse discrete Fourier transforms.
The base station thus finally obtains the space-frequency domain downlink channel of the MIMO system, which can serve as input to other subsequent modules, such as a beamforming module.
It should be noted that the foregoing explanation of the method embodiment also applies to the device of this embodiment and is not repeated here.

According to the efficient MIMO channel feedback device based on a binarized neural network proposed in the embodiments of this application, network binarization is introduced to quantize the floating-point fully connected layer of the original autoencoder into a binarized fully connected layer. In storage, the binarized autoencoder is about one-thirtieth of the original floating-point autoencoder; in computation speed, it is about twice as fast. The ultra-lightweight MIMO channel compression feedback autoencoder designed with this network binarization technique greatly facilitates network deployment on diverse resource-constrained user equipment.
In addition, the terms "first" and "second" are used for descriptive purposes only and shall not be understood as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined with "first" or "second" may explicitly or implicitly include at least one such feature. In the description of this application, "plurality" means at least two, for example two or three, unless explicitly and specifically defined otherwise.

In the description of this specification, reference to the terms "one embodiment", "some embodiments", "example", "specific example" or "some examples" means that a specific feature, structure, material or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of this application. In this specification, schematic uses of these terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, those skilled in the art may combine the different embodiments or examples and the features of the different embodiments or examples described in this specification, provided they do not contradict one another.

Although the embodiments of this application have been shown and described above, it should be understood that the above embodiments are exemplary and shall not be construed as limiting this application; those of ordinary skill in the art may make changes, modifications, substitutions and variations to the above embodiments within the scope of this application.

Claims (10)

  1. An efficient MIMO channel feedback method based on a binarized neural network, characterized by comprising the following steps:
    S1, determining the dimension of the space-frequency domain downlink channel matrix of the user-end channel according to the number of OFDM subcarriers and the number of base-station antennas in the communication system;
    S2, inputting the space-frequency domain downlink channel matrix into an autoencoder based on a binarized neural network for compression to obtain a channel feature vector, and sending the channel feature vector from the user end to the base station over the uplink;
    S3, decoding the received channel feature vector with a neural-network-based decoder at the base station to obtain the space-frequency domain downlink channel matrix.
  2. The method according to claim 1, characterized by further comprising:
    transforming the space-frequency domain downlink channel matrix into an angle-delay domain downlink channel matrix through two discrete Fourier transforms, and extracting the non-zero sub-matrix of the angle-delay domain downlink channel matrix by non-zero sub-matrix cropping;
    inputting the non-zero sub-matrix into the binarized-neural-network-based autoencoder for compression to obtain a channel feature vector, and sending the channel feature vector from the user end to the base station over the uplink;
    decoding the received channel feature vector with the neural-network-based decoder at the base station to obtain a non-zero angle-delay domain downlink channel sub-matrix, and zero-padding the non-zero angle-delay domain downlink channel sub-matrix and applying two inverse discrete Fourier transforms to obtain the space-frequency domain downlink channel matrix.
  3. The method according to claim 1, characterized in that
    the communication system is a frequency-division duplex system or a time-division duplex system.
  4. The method according to claim 1, characterized in that
    the binarized-neural-network-based autoencoder consists of a leading convolutional layer cascaded with a subsequent fully connected layer, the fully connected layer in the binarized-neural-network-based autoencoder is binarized, and the convolutional layer is binarized or left unbinarized.
  5. The method according to claim 1, characterized in that the neural-network-based decoder comprises a residual network, a multi-resolution network and an attention-mechanism-enhanced network.
  6. The method according to claim 1, characterized by further comprising: performing end-to-end training of the binarized-neural-network-based autoencoder and the neural-network-based decoder, specifically:
    obtaining a training data set and a test data set;
    connecting the binarized-neural-network-based autoencoder and the neural-network-based decoder together and performing end-to-end training on the training data set, wherein during training a gradient low-pass filter is used to apply the gradient low-pass approximation so as to maintain end-to-end training, the learning rate is gradually decreased until the network converges, and the trained binarized-neural-network-based autoencoder and neural-network-based decoder are tested on the test data set.
  7. The method according to claim 6, characterized in that a plurality of channel matrices are obtained from an actual channel environment or from a channel model describing the actual channel environment, and the plurality of channel matrices are subjected to discrete Fourier transforms and non-zero sub-matrix cropping to generate the training data set and the test data set, the data in the training data set and the test data set not overlapping.
  8. The method according to claim 6, characterized in that the end-to-end training further comprises:
    determining all binarized layers in the autoencoder, wherein by default only the final fully connected layer is a binarized layer, and computing an equivalent weight based on the current floating-point parameters of the binarized layer;
    binarizing the floating-point parameters of the binarized layer by taking their sign;
    performing forward propagation through the entire feedback network based on the binarized parameters to obtain the loss function;
    performing backpropagation based on the loss function to obtain the gradients of each layer by automatic differentiation, and using a gradient low-pass filter as the approximate derivative of the binarized layer in the autoencoder to obtain the gradients of all layers of the feedback network;
    computing the update gradient of the binarized layer's floating-point parameters from the equivalent weight of the binarized layer and the raw gradient of the binarized layer;
    combining the binarized-layer update gradient and the gradients of the other layers, and updating the parameters of the overall feedback network with the optimizer and the corresponding learning rate.
  9. An efficient MIMO channel feedback device based on a binarized neural network, characterized by comprising:
    a user-end preprocessing module, configured to determine the dimension of the space-frequency domain downlink channel matrix of the user-end channel according to the number of OFDM subcarriers and the number of base-station antennas in the communication system;
    a first compression module, configured to input the space-frequency domain downlink channel matrix into an autoencoder based on a binarized neural network for compression to obtain a channel feature vector, and to send the channel feature vector from the user end to the base station over the uplink;
    a first decompression module, configured to decode the received channel feature vector with a neural-network-based decoder at the base station to obtain the space-frequency domain downlink channel matrix.
  10. The device according to claim 9, characterized by further comprising:
    a transform module, configured to transform the space-frequency domain downlink channel matrix into an angle-delay domain downlink channel matrix through two discrete Fourier transforms, and to extract the non-zero sub-matrix of the angle-delay domain downlink channel matrix by non-zero sub-matrix cropping;
    a second compression module, configured to input the non-zero sub-matrix into the binarized-neural-network-based autoencoder for compression to obtain a channel feature vector, and to send the channel feature vector from the user end to the base station over the uplink;
    a second decompression module, configured to decode the received channel feature vector with the neural-network-based decoder at the base station to obtain a non-zero angle-delay domain downlink channel sub-matrix, and to obtain the space-frequency domain downlink channel matrix by zero-padding the non-zero angle-delay domain downlink channel sub-matrix and applying two inverse discrete Fourier transforms.