WO2020220278A1 - Channel estimation model training method and device - Google Patents

Channel estimation model training method and device

Info

Publication number
WO2020220278A1
Authority
WO
WIPO (PCT)
Prior art keywords
layer
channel matrix
input
fully connected
output
Application number
PCT/CN2019/085230
Other languages
French (fr)
Chinese (zh)
Inventor
黄鸿基
胡慧
刘劲楠
杨帆
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority to CN201980095867.0A (patent CN113748614B)
Priority to PCT/CN2019/085230 (WO2020220278A1)
Publication of WO2020220278A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04B: TRANSMISSION
    • H04B 7/00: Radio transmission systems, i.e. using radiation field
    • H04B 7/02: Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas
    • H04B 7/04: Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas, using two or more spaced independent antennas
    • H04B 7/0413: MIMO systems
    • H04B 7/0417: Feedback systems
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 25/00: Baseband systems
    • H04L 25/02: Details; arrangements for supplying electrical power along data transmission lines

Definitions

  • This application relates to the field of communication technology, and in particular to a channel estimation model training method and device.
  • The performance of a massive multiple-input multiple-output (Massive MIMO) system largely depends on the availability of complete and effective channel state information (CSI) for the system. Depending on how pilot symbols are used, current channel estimation algorithms can be roughly divided into the following three categories: pilot-assisted channel estimation, blind channel estimation, and semi-blind channel estimation.
  • the three channel estimation methods are introduced separately below:
  • Pilot-aided channel estimation adopts a strategy of interleaving pilot and data frequency points within a symbol. For frequency-domain estimation, the channel frequency response at each pilot point is estimated first, and the channel frequency response at the other points is then obtained through interpolation with different methods; for time-domain estimation, the time-domain resolvable path gains are estimated first, and the frequency-domain channel transmission matrix used for data detection is then obtained through a fast Fourier transform.
  • Frequency-domain estimation and time-domain estimation generally adopt the least squares method or the minimum mean square error method to realize channel estimation.
  • the computational complexity of pilot-assisted channel estimation is relatively low, but the overhead from the pilot band reduces the spectral efficiency of the Massive MIMO system.
  • Blind channel estimation algorithms do not need training sequences or pilot signals for channel estimation; they mainly use statistical characteristics of the received and transmitted signals to realize channel estimation, and such algorithms usually have relatively high computational complexity.
  • Semi-blind channel estimation mainly includes semi-blind estimation method based on subspace, semi-blind detection algorithm based on joint detection strategy and semi-blind estimation method assisted by adaptive filter.
  • The main idea of the joint-detection-based semi-blind estimation algorithm is to send fewer training sequences or pilots to obtain an initial value of the channel estimate and, based on this value, to complete channel estimation and tracking iteratively in the decoder and the detector. Because this method achieves good channel estimation performance while saving spectrum resources, it has received extensive attention from researchers. However, this type of method also has the problem of high computational complexity.
  • It can be seen that pilot-assisted channel estimation, blind channel estimation, and semi-blind channel estimation suffer either from low spectral efficiency or from high computational complexity. In view of this, techniques in this field propose a deep-learning-based approach that regards the entire Massive MIMO system as a black box and performs end-to-end learning to achieve unsupervised channel estimation.
  • the structure of the deep learning network used in the unsupervised channel estimation process is shown in Figure 1.
  • the channel matrix is independently decomposed into a gain matrix and a steering vector for estimation, for example, each angle of arrival is fixed to obtain the corresponding received signal.
  • the received signal and the angle of arrival are used as samples for training to realize the estimation of the steering vector through a deep neural network (DNN), and then the gain matrix is estimated by the same method.
  • This method does not need to rely on pilots, so the spectrum efficiency is not reduced, but because its channel estimation is based on the gain matrix and steering vector estimation, the result of channel estimation has additional errors.
  • the embodiments of the present application disclose a channel estimation model training method and equipment, which can reduce channel estimation errors.
  • In a first aspect, an embodiment of the present application provides a channel estimation model training method, which includes: converting a first channel matrix into codeword information, and reconstructing a channel matrix using the codeword information to obtain a second channel matrix; performing deep learning on a channel estimation model using the first channel matrix and the second channel matrix to obtain a trained channel estimation model, where the channel estimation model is constructed based on a deep neural network; obtaining a first signal transmitted by a terminal; and performing channel estimation on the first signal using the trained channel estimation model.
  • By performing the above method, the first channel matrix is converted into codeword information, the matrix is reconstructed according to the codeword information to obtain the second channel matrix, and the parameters of the channel estimation model are then adjusted through deep learning to reduce the deviation between the second channel matrix and the first channel matrix, thereby obtaining a trained channel estimation model; the trained channel estimation model can subsequently perform channel estimation according to the input transmitted signal. Because this process does not introduce estimation of an intermediate steering vector or gain matrix, the computational load is significantly reduced; in addition, because this application estimates only the channel matrix, rather than performing channel estimation through an estimated steering vector and gain matrix, information distortion in the intermediate links is avoided, so the error of the estimation result in the embodiment of the present application is smaller.
  • In one implementation, converting the first channel matrix into codeword information and reconstructing the channel matrix using the codeword information to obtain the second channel matrix includes: converting the real part and the imaginary part of the first channel matrix into two real vectors; converting the two real vectors into codeword information; and reconstructing the channel matrix using the codeword information to obtain the second channel matrix.
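  • For illustration only, the following minimal Python sketch shows the real/imaginary split described above, assuming the first channel matrix is available as a complex-valued NumPy array; the function and variable names are not from the patent.

```python
import numpy as np

def split_real_imag(h: np.ndarray) -> np.ndarray:
    """Stack the real and imaginary parts of a complex channel matrix
    into two real-valued vectors, as the encoder's first step."""
    real_vec = np.real(h).reshape(-1)   # real part flattened to a real vector
    imag_vec = np.imag(h).reshape(-1)   # imaginary part flattened to a real vector
    return np.stack([real_vec, imag_vec], axis=0)

# Example: a 4x4 complex channel matrix becomes a 2x16 real array.
h_example = np.random.randn(4, 4) + 1j * np.random.randn(4, 4)
print(split_real_imag(h_example).shape)  # (2, 16)
```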
  • Reconstructing the channel matrix using the codeword information to obtain the second channel matrix includes: extracting a second signal and first white noise from the codeword information, where the second signal is a transmitted signal; and reconstructing the channel matrix according to the second signal and the first white noise to obtain the second channel matrix.
  • Before the first channel matrix is converted into codeword information, the method further includes: generating the first channel matrix according to a third signal, a fourth signal, and second white noise, where the third signal is a signal sent by a network device, the fourth signal is a signal obtained when the terminal receives the third signal, and the second white noise is white noise fed back by the terminal.
  • The channel estimation model includes an encoding network and a decoding network, where the encoding network includes a first convolutional layer and a first fully connected layer, and the decoding network includes a second fully connected layer, a second convolutional layer, a maximum pooling layer, a first insertion (inception) layer, a second insertion layer, a deep connection (DepthConcat) module, a global pooling layer, and a third convolutional layer. The first channel matrix is used as the input of the first convolutional layer of the encoding network, the output of the first convolutional layer is used as the input of the first fully connected layer, the output of the first fully connected layer is used as the input of the second fully connected layer of the decoding network, the output of the second fully connected layer is used as the input of the maximum pooling layer, the output of the maximum pooling layer is used as the input of the first insertion layer, the output of the first insertion layer is used as the input of the second insertion layer, the output of the second insertion layer is used as the input of the deep connection module, the output of the deep connection module is used as the input of the global pooling layer, the output of the global pooling layer is used as the input of the third convolutional layer, and the third convolutional layer is used to generate the second channel matrix.
  • Alternatively, the channel estimation model includes an encoding network and a decoding network, where the encoding network includes a first convolutional layer and a first fully connected layer, and the decoding network includes a second fully connected layer, a second convolutional layer, a maximum pooling layer, a third convolutional layer, a fourth convolutional layer, a fifth convolutional layer, a sixth convolutional layer, and a seventh convolutional layer. The first channel matrix is used as the input of the first convolutional layer of the encoding network, the output of the first convolutional layer is used as the input of the first fully connected layer, the output of the first fully connected layer is used as the input of the second fully connected layer of the decoding network, the output of the second fully connected layer is used as the input of the maximum pooling layer, the output of the maximum pooling layer is used as the input of the third convolutional layer, the output of the third convolutional layer is used as the input of the fourth convolutional layer, and each subsequent convolutional layer takes the output of the previous one as its input, with the seventh convolutional layer used to generate the second channel matrix.
  • Alternatively, the channel estimation model includes an encoding network and a decoding network, where the encoding network includes a first convolutional layer and a first fully connected layer, and the decoding network includes a second fully connected layer, a residual network, and a third fully connected layer. The first channel matrix is used as the input of the first convolutional layer of the encoding network, the output of the first convolutional layer is used as the input of the first fully connected layer, the output of the first fully connected layer is used as the input of the second fully connected layer of the decoding network, the output of the second fully connected layer is used as the input of the residual network, the output of the residual network is used as the input of the third fully connected layer, and the third fully connected layer is used to generate the second channel matrix.
  • The deep learning network of this structure ensures that the network can avoid high complexity on the premise of deepening the model, through the method of cutting off the connection of the features.
  • the channel estimation model adopts a compressed sensing mechanism. Therefore, the dimensionality of the deep learning network is reduced, thereby reducing the computational complexity.
  • The first convolutional layer is configured to convert the real part and the imaginary part of the first channel matrix into two real vectors; the first fully connected layer is used to convert the two real vectors into the codeword information.
  • an embodiment of the present application provides a channel estimation model training device.
  • the device includes a processor and a memory.
  • the memory is used to store program instructions and model parameters.
  • The processor is used to call the program instructions and model parameters to perform the following operations: converting the first channel matrix into codeword information, and reconstructing the channel matrix using the codeword information to obtain the second channel matrix; performing deep learning on the channel estimation model using the first channel matrix and the second channel matrix to obtain a trained channel estimation model, where the channel estimation model is constructed based on a deep neural network; obtaining the first signal transmitted by the terminal; and performing channel estimation on the first signal using the trained channel estimation model.
  • By converting the first channel matrix into codeword information, reconstructing the matrix according to the codeword information to obtain the second channel matrix, and then adjusting the parameters of the channel estimation model through deep learning to reduce the deviation between the second channel matrix and the first channel matrix, a trained channel estimation model can be obtained; the trained channel estimation model can subsequently perform channel estimation according to the input transmitted signal. Because this process does not introduce estimation of an intermediate steering vector or gain matrix, the computational load is significantly reduced; in addition, because this application estimates only the channel matrix, rather than performing channel estimation through an estimated steering vector and gain matrix, information distortion in the intermediate links is avoided, so the error of the estimation result in the embodiment of the present application is smaller.
  • Converting the first channel matrix into codeword information and reconstructing the channel matrix using the codeword information to obtain the second channel matrix specifically comprises: converting the real part and the imaginary part of the first channel matrix into two real vectors; converting the two real vectors into codeword information; and reconstructing the channel matrix using the codeword information to obtain the second channel matrix.
  • Reconstructing the channel matrix using the codeword information to obtain the second channel matrix is specifically: extracting a second signal and first white noise from the codeword information, where the second signal is a transmitted signal; and reconstructing the channel matrix according to the second signal and the first white noise to obtain the second channel matrix.
  • Before the first channel matrix is converted into codeword information and the codeword information is used to reconstruct the channel matrix to obtain the second channel matrix, the processor is further configured to generate the first channel matrix according to a third signal, a fourth signal, and second white noise, where the third signal is a signal sent by the network device, the fourth signal is a signal obtained when the terminal receives the third signal, and the second white noise is white noise fed back by the terminal.
  • The channel estimation model includes an encoding network and a decoding network, where the encoding network includes a first convolutional layer and a first fully connected layer, and the decoding network includes a second fully connected layer, a second convolutional layer, a maximum pooling layer, a first insertion (inception) layer, a second insertion layer, a deep connection (DepthConcat) module, a global pooling layer, and a third convolutional layer. The first channel matrix is used as the input of the first convolutional layer of the encoding network, the output of the first convolutional layer is used as the input of the first fully connected layer, the output of the first fully connected layer is used as the input of the second fully connected layer of the decoding network, the output of the second fully connected layer is used as the input of the maximum pooling layer, the output of the maximum pooling layer is used as the input of the first insertion layer, the output of the first insertion layer is used as the input of the second insertion layer, the output of the second insertion layer is used as the input of the deep connection module, the output of the deep connection module is used as the input of the global pooling layer, the output of the global pooling layer is used as the input of the third convolutional layer, and the third convolutional layer is used to generate the second channel matrix.
  • Alternatively, the channel estimation model includes an encoding network and a decoding network, where the encoding network includes a first convolutional layer and a first fully connected layer, and the decoding network includes a second fully connected layer, a second convolutional layer, a maximum pooling layer, a third convolutional layer, a fourth convolutional layer, a fifth convolutional layer, a sixth convolutional layer, and a seventh convolutional layer. The first channel matrix is used as the input of the first convolutional layer of the encoding network, the output of the first convolutional layer is used as the input of the first fully connected layer, the output of the first fully connected layer is used as the input of the second fully connected layer of the decoding network, the output of the second fully connected layer is used as the input of the maximum pooling layer, the output of the maximum pooling layer is used as the input of the third convolutional layer, the output of the third convolutional layer is used as the input of the fourth convolutional layer, and each subsequent convolutional layer takes the output of the previous one as its input, with the seventh convolutional layer used to generate the second channel matrix.
  • Alternatively, the channel estimation model includes an encoding network and a decoding network, where the encoding network includes a first convolutional layer and a first fully connected layer, and the decoding network includes a second fully connected layer, a residual network, and a third fully connected layer. The first channel matrix is used as the input of the first convolutional layer of the encoding network, the output of the first convolutional layer is used as the input of the first fully connected layer, the output of the first fully connected layer is used as the input of the second fully connected layer of the decoding network, the output of the second fully connected layer is used as the input of the residual network, the output of the residual network is used as the input of the third fully connected layer, and the third fully connected layer is used to generate the second channel matrix.
  • The deep learning network of this structure ensures that the network can avoid high complexity on the premise of deepening the model, through the method of cutting off the connection of the features.
  • the channel estimation model adopts a compressed sensing mechanism.
  • The first convolutional layer is configured to convert the real part and the imaginary part of the first channel matrix into two real vectors; the first fully connected layer is used to convert the two real vectors into the codeword information.
  • an embodiment of the present application provides a computer-readable storage medium having program instructions stored in the computer-readable storage medium.
  • When the program instructions run on a processor, the method described in the first aspect or any possible implementation of the first aspect is implemented.
  • The embodiments of the present application further provide a computer program product which, when run on a processor, implements the method described in the first aspect or any possible implementation of the first aspect.
  • By converting the first channel matrix into codeword information, reconstructing the matrix according to the codeword information to obtain the second channel matrix, and then adjusting the parameters of the channel estimation model through deep learning to reduce the deviation between the second channel matrix and the first channel matrix, a trained channel estimation model can be obtained; the trained channel estimation model can subsequently perform channel estimation according to the input transmitted signal. Because this process does not introduce estimation of an intermediate steering vector or gain matrix, the computational load is significantly reduced; in addition, because this application estimates only the channel matrix, rather than performing channel estimation through an estimated steering vector and gain matrix, information distortion in the intermediate links is avoided, so the error of the estimation result in the embodiment of the present application is smaller.
  • FIG. 1 is a schematic diagram of the structure of a deep learning network in the prior art provided by this application;
  • FIG. 2 is a schematic diagram of a scene of a communication system provided by an embodiment of the present application.
  • FIG. 3 is a schematic flowchart of a channel estimation model training method provided by an embodiment of the present application.
  • FIG. 4 is a schematic structural diagram of a deep learning network provided by an embodiment of the present application.
  • FIG. 5 is a schematic structural diagram of a deep learning network provided by an embodiment of the present application.
  • Fig. 6 is a schematic structural diagram of a deep learning network provided by an embodiment of the present application.
  • Fig. 7 is a schematic structural diagram of a device provided by an embodiment of the present application.
  • FIG. 2 is a schematic diagram of a scene of a communication system 200 provided by an embodiment of the present application.
  • the communication system 200 includes a network device 201 and a terminal 202.
  • the network device 201 may be a base station, and the base station may be used to communicate with one or more terminals, and may also be used to communicate with one or more base stations with partial terminal functions (for example, communication between a macro base station and a micro base station).
  • The base station can be a base transceiver station (BTS) in a Time Division Synchronous Code Division Multiple Access (TD-SCDMA) system, an evolved NodeB (eNB) in an LTE system, or a base station in a 5G system or New Radio (NR) system.
  • The base station may also be an access point (AP), a transmission reception point (TRP), a central unit (CU), or another network entity, and may include some or all of the functions of the above network entities.
  • Because the network device 201 may face heavy computing pressure, a server or a server cluster can be deployed to provide computing capability separately for the network device 201; in that case, the server or server cluster can be regarded as part of the network device 201.
  • the terminals 202 may be distributed in the entire wireless communication system 200, and may be stationary or mobile, and the number thereof is usually multiple.
  • the terminal 202 may include handheld devices with wireless communication functions (for example, mobile phones, tablets, palmtop computers, etc.), vehicle-mounted devices (for example, automobiles, bicycles, electric vehicles, airplanes, ships, etc.), and wearable devices (for example, smart watches).
  • multiple transmitting antennas are deployed at the transmitting end (such as a network device), and multiple receiving antennas are deployed at the receiving end (such as a terminal), forming a Massive MIMO system.
  • FIG. 3 is a channel estimation model training method provided by an embodiment of the present application. The method includes but is not limited to the following steps:
  • Step S301 The network device generates a first channel matrix.
  • the third signal is a signal sent by the network device in a specific direction; the fourth signal is a signal received by the terminal from the specific direction, and the second white noise is the terminal Feedback white noise.
  • N_s is the number of orthogonal frequency division multiplexing (OFDM) carriers, which is equal to the dimension of the sparse precoding vector.
  • In the first step, the precoding vector v_j used for subcarrier power allocation at the j-th subcarrier, the transmitted signal x_j at the j-th subcarrier, and the white noise z_j at the j-th subcarrier determine the channel vector at the j-th subcarrier, where j takes each positive integer from 1 to N_f in turn. The transmitted signal x_j at the j-th subcarrier is the signal sent by the network device in a specific direction, that is, the third signal; the received signal y_j at the j-th subcarrier is the signal received by the terminal from that specific direction, that is, the fourth signal, which is fed back to the network device by the terminal after being acquired; the precoding vector v_j at the j-th subcarrier is a quantity predefined based on the transmission power of each subcarrier; and the white noise z_j at the j-th subcarrier, that is, the second white noise, can also be fed back by the terminal.
  • In the second step, the conjugate matrix is obtained from the channel vectors at all subcarriers.
  • In the third step, formula (2) is used to perform a dual DFT transformation on the conjugate matrix to obtain the first channel matrix H.
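  • The following Python sketch is illustrative only: formulas (1) and (2) are referenced above but not reproduced in this text, so the sketch simply assumes that the dual DFT is a two-dimensional DFT applied to the conjugate matrix; the array shapes and variable names are assumptions, not taken from the patent.

```python
import numpy as np

def first_channel_matrix(h_tilde: np.ndarray) -> np.ndarray:
    """Sketch of the 'dual DFT' step: h_tilde is the conjugate matrix assembled
    from the per-subcarrier channel vectors (shape N_f x N_t). The exact formula (2)
    is not reproduced in the text; a 2-D DFT over both dimensions is assumed here."""
    return np.fft.fft2(h_tilde)

# Assemble the conjugate matrix from per-subcarrier channel vectors (assumed shapes).
N_f, N_t = 64, 32                                   # subcarriers / transmit antennas (illustrative)
h_vectors = np.random.randn(N_f, N_t) + 1j * np.random.randn(N_f, N_t)
H = first_channel_matrix(np.conj(h_vectors))        # first channel matrix H used as a training sample
print(H.shape)                                      # (64, 32)
```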
  • The above three steps describe how to obtain the first channel matrix H for one batch (patch) of signaling.
  • The embodiment of the present application needs to obtain, by the same principle, the first channel matrix H of each batch of signaling among multiple batches of signaling.
  • The first channel matrices obtained for the M batches of signaling are H_1, H_2, H_3, ..., H_i, H_{i+1}, ..., H_M.
  • the acquired M first channel matrices need to be input into the channel estimation model as samples for training, so as to obtain an ideal channel estimation model.
  • The channel estimation model may be a deep learning network (also called a deep neural network). The deep learning network includes an encoding (encoder) network and a decoding (decoder) network, where the encoding network is used to obtain codeword information according to the first channel matrix, and the decoding network is used to reconstruct the matrix according to the codeword information. The first channel matrix H_i of each batch can be used as the input of the deep learning network, and correspondingly the output of the deep learning network is the reconstruction matrix, that is, the second channel matrix.
  • the decoding network may use a compressed sensing mechanism to reduce the dimension of the network, thereby reducing computational complexity.
  • An inception layer may be deployed in the decoding network; it includes convolution kernels of different sizes, reduces the complexity of the network through splitting, and improves the performance of the network (such as learning ability and accuracy) through splicing.
  • The encoding network includes two neural network layers, and the activation functions of both layers are the rectified linear unit (ReLU); each layer introduces a batch normalization mechanism.
  • The first layer is a convolutional layer (which may be referred to as the first convolutional layer) with a filter size of 3×3 and a stride of 2. This layer is used to obtain the real and imaginary parts of the input first channel matrix H.
  • The second layer is a fully connected layer (which may be called the first fully connected layer), and its width is related to the compression ratio of the compressed sensing. If the length, width, and number of feature maps at this point in the network are a, b, and c respectively, and the compression ratio is r, then the number of neurons in this layer is (a×b×c)/r.
  • This layer converts the real and imaginary parts of the first channel matrix H into two real vectors, which reflect the characteristics of the channel (for example, antenna position, communication link fading coefficient, angle-of-arrival gain, etc.). The codeword information s is then generated from these two real vectors.
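  • The following PyTorch sketch mirrors the encoding network described above (a 3×3 convolution with stride 2, batch normalization and ReLU, followed by a fully connected layer whose width is (a×b×c)/r). The channel counts, padding, matrix dimensions, and the two-plane real/imaginary input layout are assumptions made for illustration and are not specified in the text.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Encoding network sketch: one 3x3 conv (stride 2) + one fully connected layer.
    Each layer uses batch normalization and a ReLU activation, as described above."""
    def __init__(self, n_f=64, n_t=32, feature_maps=16, compression_ratio=4):
        super().__init__()
        # Input: the first channel matrix as two real planes (real part, imaginary part).
        self.conv = nn.Sequential(
            nn.Conv2d(2, feature_maps, kernel_size=3, stride=2, padding=1),
            nn.BatchNorm2d(feature_maps),
            nn.ReLU(),
        )
        a, b, c = n_f // 2, n_t // 2, feature_maps        # feature-map size after the stride-2 conv
        codeword_len = (a * b * c) // compression_ratio   # width tied to the compression ratio r
        self.fc = nn.Sequential(
            nn.Linear(a * b * c, codeword_len),
            nn.BatchNorm1d(codeword_len),
            nn.ReLU(),
        )

    def forward(self, h_real_imag):                       # shape: (batch, 2, N_f, N_t)
        features = self.conv(h_real_imag)
        return self.fc(features.flatten(start_dim=1))     # codeword information s

s = Encoder()(torch.randn(8, 2, 64, 32))
print(s.shape)  # torch.Size([8, 2048])
```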
  • The decoding network includes:
  • A fully connected layer (which may be called the second fully connected layer): the width of this layer is consistent with the dimension of the first channel matrix H, that is, N_f × N_t.
  • A convolutional layer: this layer generates 32 feature maps.
  • A deep connection (DepthConcat) module.
  • all neural network layers of the decoding network select the rectified linear unit ReLU as the activation function.
  • The data processing flow in the above deep neural network is as follows: the sample (the first channel matrix H_i) is input into the first convolutional layer of the encoding network (encoder), and the first convolutional layer passes the information flow to the first fully connected layer.
  • The first fully connected layer passes the information flow to the fully connected layer of the decoding network (decoder); this fully connected layer then passes the information flow to the max pooling layer; the output of the max pooling layer is passed to the first inception layer; the output of the first inception layer is passed to the second inception layer.
  • The output of the second inception layer is passed to the DepthConcat module; the DepthConcat module then passes its output to the average pooling layer, whose output is passed to the last convolutional layer, which produces the output of the network, that is, the reconstruction matrix (the second channel matrix).
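  • The following PyTorch sketch mirrors the Figure 4 data flow described above (fully connected layer of width N_f×N_t, a convolutional layer producing 32 feature maps, max pooling, two inception layers, a DepthConcat step, average pooling, and a final convolutional layer). Kernel sizes, strides, branch widths, the concatenation read of the DepthConcat module, and the choice of stride-1 pooling (so that the output keeps the N_f×N_t shape) are assumptions; the text does not specify them.

```python
import torch
import torch.nn as nn

class Inception(nn.Module):
    """Inception layer sketch: parallel convolutions with different kernel sizes,
    spliced along the channel dimension (branch widths are assumptions)."""
    def __init__(self, in_ch, branch_ch=16):
        super().__init__()
        self.b1 = nn.Sequential(nn.Conv2d(in_ch, branch_ch, 1), nn.ReLU())
        self.b3 = nn.Sequential(nn.Conv2d(in_ch, branch_ch, 3, padding=1), nn.ReLU())
        self.b5 = nn.Sequential(nn.Conv2d(in_ch, branch_ch, 5, padding=2), nn.ReLU())

    def forward(self, x):
        return torch.cat([self.b1(x), self.b3(x), self.b5(x)], dim=1)

class Decoder(nn.Module):
    """Decoding network sketch following the Figure 4 data flow described above."""
    def __init__(self, n_f=64, n_t=32, codeword_len=2048):
        super().__init__()
        self.n_f, self.n_t = n_f, n_t
        self.fc = nn.Linear(codeword_len, n_f * n_t)            # width equals N_f x N_t
        self.conv = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU())  # 32 feature maps
        self.pool = nn.MaxPool2d(3, stride=1, padding=1)        # stride 1 keeps the spatial size (assumption)
        self.inc1 = Inception(32)
        self.inc2 = Inception(48)
        self.avg = nn.AvgPool2d(3, stride=1, padding=1)         # average/global pooling step (assumption)
        self.out = nn.Conv2d(48 + 48, 2, 3, padding=1)          # final conv -> real/imag planes of the output

    def forward(self, s):
        x = self.fc(s).view(-1, 1, self.n_f, self.n_t)
        x = self.pool(self.conv(x))
        y1 = self.inc1(x)
        y2 = self.inc2(y1)
        cat = torch.cat([y1, y2], dim=1)                        # DepthConcat: concatenation of inception outputs (assumed reading)
        return self.out(self.avg(cat))                          # second (reconstructed) channel matrix

h_hat = Decoder()(torch.randn(8, 2048))
print(h_hat.shape)  # torch.Size([8, 2, 64, 32])
```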
  • The difference from the deep learning network shown in Figure 4 is that, in Figure 5, the inception layers of the decoding network in Figure 4 are replaced with common convolutional layers. The convolutional layers can be designed with a small convolution kernel (3×3).
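  • A brief sketch of this Figure 5 variant, in which the inception layers are replaced by ordinary 3×3 convolutional layers; the number of layers and the channel widths are assumptions.

```python
import torch.nn as nn

def plain_conv_stack(in_ch=32, widths=(32, 32, 32, 32), out_ch=2):
    """Figure 5 variant: the inception layers of Figure 4 are replaced with
    ordinary 3x3 convolutional layers (channel widths here are assumptions)."""
    layers = []
    for w in widths:
        layers += [nn.Conv2d(in_ch, w, kernel_size=3, padding=1), nn.ReLU()]
        in_ch = w
    layers.append(nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1))  # final conv, no activation
    return nn.Sequential(*layers)
```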
  • In this case, the deep neural network includes an encoding network and a decoding network, where the encoding network includes a first convolutional layer and a first fully connected layer, and the decoding network includes a second fully connected layer, a second convolutional layer, a maximum pooling layer, a third convolutional layer, a fourth convolutional layer, a fifth convolutional layer, a sixth convolutional layer, and a seventh convolutional layer. The first channel matrix is used as the input of the first convolutional layer of the encoding network, the output of the first convolutional layer is used as the input of the first fully connected layer, the output of the first fully connected layer is used as the input of the second fully connected layer of the decoding network, the output of the second fully connected layer is used as the input of the maximum pooling layer, the output of the maximum pooling layer is used as the input of the third convolutional layer, the output of the third convolutional layer is used as the input of the fourth convolutional layer, and each subsequent convolutional layer takes the output of the previous one as its input, with the seventh convolutional layer used to generate the second channel matrix.
  • the deep neural network includes an encoding network and a decoding network, wherein the encoding network includes a first convolutional layer and a first fully connected layer; the decoding network includes a second fully connected layer, a residual network, and a third Fully connected layer; the first channel matrix is used as the input of the first convolutional layer of the coding network, the output of the first convolutional layer is used as the input of the first fully connected layer, the The output of the first fully connected layer is used as the input of the second fully connected layer of the decoding network, the output of the second fully connected layer is used as the input of the residual network, and the output of the residual network Used as an input of the third fully connected layer, and the third fully connected layer is used to generate the second channel matrix.
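  • The following PyTorch sketch mirrors the Figure 6 decoding network described above (second fully connected layer, residual network, third fully connected layer). The width and depth of the residual network and the output layout are assumptions, since the text does not specify them.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """One block of the residual network: two fully connected layers with a skip
    connection (the block width and depth are assumptions)."""
    def __init__(self, width):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(width, width), nn.ReLU(), nn.Linear(width, width))

    def forward(self, x):
        return torch.relu(x + self.body(x))   # skip connection keeps the network easy to deepen

class ResidualDecoder(nn.Module):
    """Figure 6 decoding network sketch: second fully connected layer -> residual
    network -> third fully connected layer, which generates the second channel matrix."""
    def __init__(self, codeword_len=2048, n_f=64, n_t=32, width=1024, n_blocks=3):
        super().__init__()
        self.fc_in = nn.Linear(codeword_len, width)                      # second fully connected layer
        self.res = nn.Sequential(*[ResidualBlock(width) for _ in range(n_blocks)])
        self.fc_out = nn.Linear(width, 2 * n_f * n_t)                    # third fully connected layer
        self.n_f, self.n_t = n_f, n_t

    def forward(self, s):
        x = self.res(torch.relu(self.fc_in(s)))
        return self.fc_out(x).view(-1, 2, self.n_f, self.n_t)            # real/imag planes of the reconstruction

print(ResidualDecoder()(torch.randn(8, 2048)).shape)  # torch.Size([8, 2, 64, 32])
```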
  • Step S302 The network device converts the first channel matrix into codeword information, and uses the codeword information to reconstruct the channel matrix to obtain a second channel matrix.
  • Specifically, the encoding network converts the real part and the imaginary part of the first channel matrix into two real vectors, and converts the two real vectors into codeword information.
  • the decoding network can use the codeword information to reconstruct the channel matrix to obtain the second channel matrix.
  • Specifically, the decoding network extracts the transmitted signal and white noise from the codeword information; the transmitted signal extracted here is called the second signal, and the white noise extracted here is called the first white noise.
  • In specific implementation, the decoding network may also extract other characteristic information, such as the channel fading coefficient and channel noise; there is no restriction here on what other information is extracted. After the second signal and the first white noise are extracted, the channel matrix is reconstructed according to the second signal and the first white noise to obtain the second channel matrix.
  • Step S303: The network device uses the first channel matrix and the second channel matrix to perform deep learning on the channel estimation model to obtain a trained channel estimation model.
  • Specifically, a loss function can be introduced to constrain the deviation between the second channel matrix and the first channel matrix. If the deviation between the second channel matrix and the first channel matrix does not satisfy the constraint condition of the loss function, iterative training must continue.
  • The stochastic gradient descent (SGD) algorithm can be used for the iterative training. When the deviation between the second channel matrix and the first channel matrix satisfies the constraint of the loss function, the channel estimation model at that time is the trained channel estimation model.
  • The loss function l(θ) used in the embodiment of the present application is shown in formula (3), which follows the mean-square-error form l(θ) = (1/M) · Σ_{m=1}^{M} ‖Ĥ_m − H_m‖₂², where M represents the total number of samples, Ĥ_m represents the reconstruction matrix obtained by reconstructing the m-th sample, that is, the second channel matrix, and H_m represents the m-th first channel matrix among the M samples.
  • The loss function l(θ) is based on the idea of the mean square error, which minimizes the error between the first channel matrix and the second channel matrix.
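  • The following sketch shows one possible training loop for step S303, reusing the Encoder and Decoder sketches from the earlier listings (assumed to be defined in the same script). The loss follows the mean-square-error idea of formula (3); the learning rate, batch size, and use of plain SGD without momentum are assumptions.

```python
import torch
import torch.nn as nn

# Encoder and Decoder are the sketches from the earlier listings (assumed available here).
encoder, decoder = Encoder(), Decoder()
params = list(encoder.parameters()) + list(decoder.parameters())
optimizer = torch.optim.SGD(params, lr=1e-3)          # stochastic gradient descent, as in S303
loss_fn = nn.MSELoss()                                # mean-square-error idea of formula (3)

def training_step(h_batch):
    """One iteration: first channel matrices in, reconstructed second channel matrices out."""
    optimizer.zero_grad()
    h_hat = decoder(encoder(h_batch))                 # codeword -> reconstructed channel matrix
    loss = loss_fn(h_hat, h_batch)                    # deviation between second and first channel matrix
    loss.backward()
    optimizer.step()
    return loss.item()

# Illustrative use with random samples standing in for the M first channel matrices.
for _ in range(5):
    print(training_step(torch.randn(8, 2, 64, 32)))
```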
  • Step S304 The network device obtains the first signal transmitted by the terminal.
  • the signal transmitted by the terminal is called the first signal, and accordingly, the network device receives the signal transmitted by the terminal.
  • Step S305 The network device uses the trained channel estimation model to perform channel estimation on the first signal.
  • During training, the codeword information is generated according to the first channel matrix, the second signal and the first white noise are extracted from the codeword information, the channel matrix is reconstructed according to the second signal and the first white noise, and the channel estimation model is iteratively trained in this way; this is equivalent to summarizing the relationship between the transmitted signal and the channel matrix. Because this relationship is described by the trained channel estimation model, the obtained first signal can be input into the trained channel estimation model to perform channel estimation.
  • The second signal and the first signal here are both essentially transmitted signals, but the transmitted signal used for training is called the second signal, and the transmitted signal used in the actual estimation process is called the first signal.
  • Likewise, the first white noise and the second white noise are both essentially white noise, but the white noise used to compute the training sample (that is, the first channel matrix) is called the second white noise, and the white noise extracted from the training sample is called the first white noise.
  • Performance is evaluated by the normalized mean square error (NMSE) between H, the actual channel matrix, and the channel matrix estimated by the channel estimation model.
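  • A minimal sketch of an NMSE computation over a batch of channel matrices; averaging over the batch and the tensor layout are assumptions made for illustration.

```python
import torch

def nmse(h_true: torch.Tensor, h_est: torch.Tensor) -> float:
    """Normalized mean square error between the actual channel matrices and the
    estimates produced by the channel estimation model (averaged over the batch)."""
    err = (h_true - h_est).flatten(start_dim=1).pow(2).sum(dim=1)
    ref = h_true.flatten(start_dim=1).pow(2).sum(dim=1)
    return (err / ref).mean().item()

print(nmse(torch.randn(8, 2, 64, 32), torch.randn(8, 2, 64, 32)))
```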
  • Although the VTC-DNN method achieves a better NMSE in an indoor scene when the compression ratio is 0.03125, on the whole the CS-DNN method of the embodiment of the present application achieves better channel estimation performance.
  • the embodiment of the present application requires less running time and lower computational complexity.
  • By converting the first channel matrix into codeword information, reconstructing the matrix according to the codeword information to obtain the second channel matrix, and then adjusting the parameters of the channel estimation model through deep learning to reduce the deviation between the second channel matrix and the first channel matrix, a trained channel estimation model can be obtained; the trained channel estimation model can subsequently perform channel estimation according to the input transmitted signal. Because this process does not introduce estimation of an intermediate steering vector or gain matrix, the computational load is significantly reduced; in addition, because this application estimates only the channel matrix, rather than performing channel estimation through an estimated steering vector and gain matrix, information distortion in the intermediate links is avoided, so the error of the estimation result in the embodiment of the present application is smaller.
  • FIG. 7 shows a network device (ie, a channel estimation device) 700 provided by some embodiments of the present application.
  • The network device 700 can be implemented as central office (CO) equipment, a multi-dwelling unit (MDU), a multi-merchant unit (MTU), a digital subscriber line access multiplexer (DSLAM), a multi-service access node (MSAN), an optical network unit (ONU), and so on.
  • The network device 700 may include one or more device processors 701, a memory 702, a transmitter 704, and a receiver 705. These components can be coupled to the processor, where:
  • the processor 701 may be implemented as one or more central processing unit (CPU) chips, cores (for example, multi-core processors), field programmable gate arrays (FPGA), application specific integrated circuits (ASIC), and/or digital signal processors (DSP), and/or can be part of one or more ASICs.
  • the processor 701 may be configured to execute any of the solutions described in the above application embodiments, including data transmission methods.
  • the processor 701 may be implemented by hardware or a combination of hardware and software.
  • the memory 702 is coupled with the processor 701, and is configured to store various software programs and/or groups of program instructions.
  • the memory 702 may include a high-speed random access memory, and may also include a non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state storage devices.
  • the memory 702 may also store a network communication program, which may be used to communicate with one or more additional devices, one or more terminal devices, and one or more devices.
  • the transmitter 704 can be used as an output device of the network device 700. For example, data can be transferred out of device 700.
  • the receiver 705 can be used as an input device of the network device 700, for example, data can be transferred to the device 700.
  • the transmitter 704 may include one or more optical transmitters, and/or one or more electrical transmitters.
  • the receiver 705 may include one or more optical receivers, and/or one or more electrical receivers.
  • The transmitter 704 and receiver 705 can take the following forms: modem, modem bank, Ethernet card, universal serial bus (USB) interface card, serial interface, token ring card, fiber distributed data interface (FDDI) card, and so on.
  • the network device 700 may not have a receiver and a transmitter, but a wired communication interface, which can communicate with other devices in a wired manner.
  • The processor 701 may be used to read and execute computer-readable program instructions and related model parameters. Specifically, the processor 701 may be used to call the programs and model parameters stored in the memory 702, for example, the program instructions and model parameters that implement, on the network device side, the channel estimation model training method provided in one or more embodiments, and to execute those program instructions and model parameters. Optionally, the processor 701 performs the following operations by calling the program instructions and model parameters in the memory 702:
  • Converting the first channel matrix into codeword information, reconstructing the matrix according to the codeword information to obtain the second channel matrix, and then adjusting the parameters of the channel estimation model through deep learning to reduce the deviation between the second channel matrix and the first channel matrix, so that a trained channel estimation model can be obtained; the trained channel estimation model can subsequently perform channel estimation according to the input transmitted signal. Because this process does not introduce estimation of an intermediate steering vector or gain matrix, the computational load is significantly reduced; in addition, because this application estimates only the channel matrix, rather than performing channel estimation through an estimated steering vector and gain matrix, information distortion in the intermediate links is avoided, so the error of the estimation result in the embodiment of the present application is smaller.
  • Converting the first channel matrix into codeword information and reconstructing the channel matrix using the codeword information to obtain the second channel matrix is specifically: converting the real part and the imaginary part of the first channel matrix into two real vectors; converting the two real vectors into codeword information; and reconstructing the channel matrix using the codeword information to obtain the second channel matrix.
  • Reconstructing the channel matrix using the codeword information to obtain the second channel matrix is specifically: extracting the second signal and the first white noise from the codeword information, where the second signal is a transmitted signal; and reconstructing the channel matrix according to the second signal and the first white noise to obtain the second channel matrix.
  • Before converting the first channel matrix into codeword information and reconstructing the channel matrix using the codeword information to obtain the second channel matrix, the processor is further configured to generate the first channel matrix according to the third signal, the fourth signal, and the second white noise, where the third signal is a signal sent by a network device, the fourth signal is a signal obtained when the terminal receives the third signal, and the second white noise is white noise fed back by the terminal.
  • The channel estimation model includes an encoding network and a decoding network, where the encoding network includes a first convolutional layer and a first fully connected layer, and the decoding network includes a second fully connected layer, a second convolutional layer, a maximum pooling layer, a first insertion (inception) layer, a second insertion layer, a deep connection (DepthConcat) module, a global pooling layer, and a third convolutional layer. The first channel matrix is used as the input of the first convolutional layer of the encoding network, the output of the first convolutional layer is used as the input of the first fully connected layer, the output of the first fully connected layer is used as the input of the second fully connected layer of the decoding network, the output of the second fully connected layer is used as the input of the maximum pooling layer, the output of the maximum pooling layer is used as the input of the first insertion layer, the output of the first insertion layer is used as the input of the second insertion layer, the output of the second insertion layer is used as the input of the deep connection module, the output of the deep connection module is used as the input of the global pooling layer, the output of the global pooling layer is used as the input of the third convolutional layer, and the third convolutional layer is used to generate the second channel matrix.
  • The deep learning network with this structure has stronger learning ability and fast training speed, and the error of the channel matrix estimated by the trained channel estimation model is smaller.
  • Alternatively, the channel estimation model includes an encoding network and a decoding network, where the encoding network includes a first convolutional layer and a first fully connected layer, and the decoding network includes a second fully connected layer, a second convolutional layer, a maximum pooling layer, a third convolutional layer, a fourth convolutional layer, a fifth convolutional layer, a sixth convolutional layer, and a seventh convolutional layer. The first channel matrix is used as the input of the first convolutional layer of the encoding network, the output of the first convolutional layer is used as the input of the first fully connected layer, the output of the first fully connected layer is used as the input of the second fully connected layer of the decoding network, the output of the second fully connected layer is used as the input of the maximum pooling layer, the output of the maximum pooling layer is used as the input of the third convolutional layer, the output of the third convolutional layer is used as the input of the fourth convolutional layer, and each subsequent convolutional layer takes the output of the previous one as its input, with the seventh convolutional layer used to generate the second channel matrix.
  • Alternatively, the channel estimation model includes an encoding network and a decoding network, where the encoding network includes a first convolutional layer and a first fully connected layer, and the decoding network includes a second fully connected layer, a residual network, and a third fully connected layer. The first channel matrix is used as the input of the first convolutional layer of the encoding network, the output of the first convolutional layer is used as the input of the first fully connected layer, the output of the first fully connected layer is used as the input of the second fully connected layer of the decoding network, the output of the second fully connected layer is used as the input of the residual network, the output of the residual network is used as the input of the third fully connected layer, and the third fully connected layer is used to generate the second channel matrix.
  • the deep learning network of this structure ensures that the network can avoid high complexity on the premise of deepening the model depth through the method of cutting off the connection of the features.
  • the channel estimation model adopts a compressed sensing mechanism.
  • The first convolutional layer is used to convert the real part and the imaginary part of the first channel matrix into two real vectors; the first fully connected layer is used to convert the two real vectors into the codeword information.
  • For each operation, reference may also be made to the corresponding description of the method embodiment shown in FIG. 3.
  • An embodiment of the present application also provides a chip system.
  • The chip system includes at least one processor, a memory, and an interface circuit. The memory, the interface circuit, and the at least one processor are interconnected by wires, and instructions are stored in the at least one memory; when the instructions are executed by the processor, the method flow shown in FIG. 3 is implemented.
  • An embodiment of the present application also provides a computer-readable storage medium, which stores instructions, and when it runs on a processor, the method flow shown in FIG. 3 is implemented.
  • the embodiment of the present application also provides a computer program product.
  • When the computer program product runs on a processor, the method flow shown in FIG. 3 is implemented.
  • By converting the first channel matrix into codeword information, reconstructing the matrix according to the codeword information to obtain the second channel matrix, and then adjusting the parameters of the channel estimation model through deep learning to reduce the deviation between the second channel matrix and the first channel matrix, a trained channel estimation model can be obtained; the trained channel estimation model can subsequently perform channel estimation according to the input transmitted signal. Because this process does not introduce estimation of an intermediate steering vector or gain matrix, the computational load is significantly reduced; in addition, because this application estimates only the channel matrix, rather than performing channel estimation through an estimated steering vector and gain matrix, information distortion in the intermediate links is avoided, so the error of the estimation result in the embodiment of the present application is smaller.
  • The process can be completed by a computer program instructing relevant hardware. The program can be stored in a computer-readable storage medium and, when executed, may include the processes of the foregoing method embodiments. The aforementioned storage media include media that can store program code, such as read-only memory (ROM), random access memory (RAM), magnetic disks, and optical discs.

Abstract

Embodiments of the present application provide a channel estimation model training method and device. The method comprises: converting a first channel matrix into codeword information, and reconstructing a channel matrix by using the codeword information, so as to obtain a second channel matrix; performing deep learning on a channel estimation model by using the first channel matrix and the second channel matrix, so as to obtain a trained channel estimation model, wherein the channel estimation model is constructed by using a deep neural network; obtaining a first signal transmitted by a terminal; performing channel estimation on the first signal by using the trained channel estimation model. Using the embodiments of the present application, errors of channel estimation can be reduced.

Description

Channel estimation model training method and device
Technical Field
This application relates to the field of communication technology, and in particular to a channel estimation model training method and device.
Background
The performance of a massive multiple-input multiple-output (Massive MIMO) system largely depends on the availability of complete and effective channel state information (CSI) for the system. Depending on how pilot symbols are used, current channel estimation algorithms can be roughly divided into the following three categories: pilot-assisted channel estimation, blind channel estimation, and semi-blind channel estimation. The three channel estimation methods are introduced separately below:
1. Pilot-aided channel estimation adopts a strategy of interleaving pilot and data frequency points within a symbol. For frequency-domain estimation, the channel frequency response at each pilot point is estimated first, and the channel frequency response at the other points is then obtained through interpolation with different methods; for time-domain estimation, the time-domain resolvable path gains are estimated first, and the frequency-domain channel transmission matrix used for data detection is then obtained through a fast Fourier transform. Frequency-domain estimation and time-domain estimation generally adopt the least squares method or the minimum mean square error method to realize channel estimation. The computational complexity of pilot-assisted channel estimation is relatively low, but the pilot overhead reduces the spectral efficiency of the Massive MIMO system. 2. Blind channel estimation algorithms do not need training sequences or pilot signals for channel estimation; they mainly use statistical characteristics of the received and transmitted signals to realize channel estimation, and such algorithms usually have relatively high computational complexity. 3. Semi-blind channel estimation mainly includes subspace-based semi-blind estimation methods, semi-blind detection algorithms based on a joint detection strategy, and adaptive-filter-assisted semi-blind estimation methods. Among them, the main idea of the joint-detection-based semi-blind estimation algorithm is to send fewer training sequences or pilots to obtain an initial value of the channel estimate and, based on this value, to complete channel estimation and tracking iteratively in the decoder and the detector. Because this method achieves good channel estimation performance while saving spectrum resources, it has received extensive attention from researchers. However, this type of method also has the problem of high computational complexity. It can be seen that pilot-assisted channel estimation, blind channel estimation, and semi-blind channel estimation suffer either from low spectral efficiency or from high computational complexity. In view of this, techniques in this field propose a deep-learning-based approach that regards the entire Massive MIMO system as a black box and performs end-to-end learning to achieve unsupervised channel estimation. The structure of the deep learning network used in the unsupervised channel estimation process is shown in Figure 1.
In a specific implementation, the channel matrix is decomposed into a gain matrix and a steering vector, which are estimated independently; for example, each angle of arrival is fixed to obtain the corresponding received signal. The received signal and the angle of arrival are then used as training samples so that a deep neural network (DNN) estimates the steering vector, and the gain matrix is then estimated by the same method. This method does not need to rely on pilots, so spectral efficiency is not reduced; however, because its channel estimation is built on the estimated gain matrix and steering vector, the channel estimation result carries additional errors.
How to minimize the channel estimation error while ensuring relatively high spectral efficiency and relatively low computational complexity is a technical problem being studied by those skilled in the art.
Summary of the Invention
The embodiments of the present application disclose a channel estimation model training method and device, which can reduce channel estimation errors.
In a first aspect, an embodiment of the present application provides a channel estimation model training method, which includes: converting a first channel matrix into codeword information, and reconstructing a channel matrix using the codeword information to obtain a second channel matrix; performing deep learning on a channel estimation model using the first channel matrix and the second channel matrix to obtain a trained channel estimation model, where the channel estimation model is constructed based on a deep neural network; obtaining a first signal transmitted by a terminal; and performing channel estimation on the first signal using the trained channel estimation model.
By performing the above method, the first channel matrix is converted into codeword information, the matrix is reconstructed according to the codeword information to obtain the second channel matrix, and the parameters of the channel estimation model are then adjusted through deep learning to reduce the deviation between the second channel matrix and the first channel matrix, thereby obtaining a trained channel estimation model; the trained channel estimation model can subsequently perform channel estimation according to the input transmitted signal. Because this process does not introduce estimation of an intermediate steering vector or gain matrix, the computational load is significantly reduced; in addition, because this application estimates only the channel matrix, rather than performing channel estimation through an estimated steering vector and gain matrix, information distortion in the intermediate links is avoided, so the error of the estimation result in the embodiment of the present application is smaller.
结合第一方面,在第一方面的第一种可能的实现方式中,所述将第一信道矩阵转换为码字信息,并利用所述码字信息重建信道矩阵,得到第二信道矩阵,包括:将第一信道矩阵的实部和虚部转换为两个实向量;将所述两个实向量转换为码字信息;利用所述码字信息重建信道矩阵,得到第二信道矩阵。。With reference to the first aspect, in a first possible implementation manner of the first aspect, the converting the first channel matrix into codeword information, and reconstructing the channel matrix using the codeword information to obtain the second channel matrix includes : Convert the real part and the imaginary part of the first channel matrix into two real vectors; convert the two real vectors into codeword information; use the codeword information to reconstruct the channel matrix to obtain the second channel matrix. .
结合第一方面,或者第一方面的上述任一种可能的实现方式,在第一方面的第二种可能的实现方式中,所述利用所述码字信息重建信道矩阵,得到第二信道矩阵,包括:从所述码字信息中提取第二信号和第一白噪声,所述第二信号为发送信号;根据所述第二信号和所述第一所述白噪声重建信道矩阵,得到第二信道矩阵。With reference to the first aspect or any one of the foregoing possible implementation manners of the first aspect, in a second possible implementation manner of the first aspect, the use of the codeword information to reconstruct a channel matrix to obtain a second channel matrix , Including: extracting a second signal and a first white noise from the codeword information, where the second signal is a transmission signal; reconstructing a channel matrix according to the second signal and the first white noise to obtain the first Two-channel matrix.
结合第一方面,或者第一方面的上述任一种可能的实现方式,在第一方面的第三种可能的实现方式中,所述将第一信道矩阵转换为码字信息,并利用所述码字信息重建信道矩阵,得到第二信道矩阵之前,还包括:根据第三信号、第四信号和第二白噪声生成所述第一信道矩阵,其中,所述第三信号为网络设备发送的信号;所述第四信号为所述终端接收所述第三信号时获得的信号,所述第二白噪声为所述终端反馈的白噪声。With reference to the first aspect, or any one of the foregoing possible implementation manners of the first aspect, in a third possible implementation manner of the first aspect, the first channel matrix is converted into codeword information, and the Before reconstructing the channel matrix with codeword information to obtain the second channel matrix, the method further includes: generating the first channel matrix according to the third signal, the fourth signal, and the second white noise, wherein the third signal is sent by the network device Signal; the fourth signal is a signal obtained when the terminal receives the third signal, and the second white noise is white noise fed back by the terminal.
With reference to the first aspect, or any of the foregoing possible implementations of the first aspect, in a fourth possible implementation of the first aspect, the channel estimation model includes an encoding network and a decoding network, where the encoding network includes a first convolutional layer and a first fully connected layer, and the decoding network includes a second fully connected layer, a second convolutional layer, a max pooling layer, a first inception layer, a second inception layer, a depth concatenation (DepthConcat) module, a global pooling layer and a third convolutional layer. The first channel matrix serves as the input of the first convolutional layer of the encoding network; the output of the first convolutional layer serves as the input of the first fully connected layer; the output of the first fully connected layer serves as the input of the second fully connected layer of the decoding network; the output of the second fully connected layer serves as the input of the second convolutional layer; the output of the second convolutional layer serves as the input of the max pooling layer; the output of the max pooling layer serves as the input of the first inception layer; the output of the first inception layer serves as the input of the second inception layer; the output of the second inception layer serves as the input of the depth concatenation module; the output of the depth concatenation module serves as the input of the global pooling layer; the output of the global pooling layer serves as the input of the third convolutional layer; and the third convolutional layer generates the second channel matrix. A deep learning network with this structure has stronger learning ability and trains faster, and the channel matrix estimated by the trained channel estimation model has a smaller error.

With reference to the first aspect, or any of the foregoing possible implementations of the first aspect, in a fifth possible implementation of the first aspect, the channel estimation model includes an encoding network and a decoding network, where the encoding network includes a first convolutional layer and a first fully connected layer, and the decoding network includes a second fully connected layer, a second convolutional layer, a max pooling layer, a third convolutional layer, a fourth convolutional layer, a fifth convolutional layer, a sixth convolutional layer and a seventh convolutional layer. The first channel matrix serves as the input of the first convolutional layer of the encoding network; the output of the first convolutional layer serves as the input of the first fully connected layer; the output of the first fully connected layer serves as the input of the second fully connected layer of the decoding network; the output of the second fully connected layer serves as the input of the second convolutional layer; the output of the second convolutional layer serves as the input of the max pooling layer; the output of the max pooling layer serves as the input of the third convolutional layer; the output of the third convolutional layer serves as the input of the fourth convolutional layer; the output of the fourth convolutional layer serves as the input of the fifth convolutional layer; the output of the fifth convolutional layer serves as the input of the sixth convolutional layer; the output of the sixth convolutional layer serves as the input of the seventh convolutional layer; and the seventh convolutional layer generates the second channel matrix. In smaller-scale massive MIMO scenarios, a deep learning network with this structure still offers stronger learning ability, faster training, smaller errors and lower computational complexity than the prior art.

With reference to the first aspect, or any of the foregoing possible implementations of the first aspect, in a sixth possible implementation of the first aspect, the channel estimation model includes an encoding network and a decoding network, where the encoding network includes a first convolutional layer and a first fully connected layer, and the decoding network includes a second fully connected layer, a residual network and a third fully connected layer. The first channel matrix serves as the input of the first convolutional layer of the encoding network; the output of the first convolutional layer serves as the input of the first fully connected layer; the output of the first fully connected layer serves as the input of the second fully connected layer of the decoding network; the output of the second fully connected layer serves as the input of the residual network; the output of the residual network serves as the input of the third fully connected layer; and the third fully connected layer generates the second channel matrix. Through its shortcut (skip) connections on the features, a deep learning network with this structure can deepen the model without incurring high complexity.
With reference to the first aspect, or any of the foregoing possible implementations of the first aspect, in a seventh possible implementation of the first aspect, the channel estimation model uses a compressed sensing mechanism. This reduces the dimensionality of the deep learning network and therefore the computational complexity.
With reference to the first aspect, or any of the foregoing possible implementations of the first aspect, in an eighth possible implementation of the first aspect, the first convolutional layer is configured to convert the real part and the imaginary part of the first channel matrix into two real vectors, and the first fully connected layer is configured to convert the two real vectors into the codeword information.
In a second aspect, an embodiment of this application provides a channel estimation model training device. The device includes a processor and a memory, where the memory is configured to store program instructions and model parameters, and the processor is configured to call the program instructions and model parameters to perform the following operations: converting a first channel matrix into codeword information, and reconstructing a channel matrix from the codeword information to obtain a second channel matrix; performing deep learning on a channel estimation model using the first channel matrix and the second channel matrix to obtain a trained channel estimation model, where the channel estimation model is built on a deep neural network; obtaining a first signal transmitted by a terminal; and performing channel estimation on the first signal using the trained channel estimation model.

In the above device, the first channel matrix is converted into codeword information, a second channel matrix is reconstructed from the codeword information, and the parameters of the channel estimation model are adjusted through deep learning so as to reduce the deviation between the second channel matrix and the first channel matrix, thereby obtaining a trained channel estimation model; the trained channel estimation model can subsequently perform channel estimation based on the input transmitted signal. Because no estimation of intermediate quantities such as steering vectors or a gain matrix is introduced in this process, the computational load is significantly reduced. In addition, because this application estimates the channel matrix directly rather than performing channel estimation through estimated steering vectors and a gain matrix, information distortion in intermediate steps is avoided, so the estimation error of the embodiments of this application is smaller.

With reference to the second aspect, in a first possible implementation of the second aspect, converting the first channel matrix into codeword information and reconstructing a channel matrix from the codeword information to obtain the second channel matrix is specifically: converting the real part and the imaginary part of the first channel matrix into two real vectors; converting the two real vectors into codeword information; and reconstructing a channel matrix from the codeword information to obtain the second channel matrix.

With reference to the second aspect, or any of the foregoing possible implementations of the second aspect, in a second possible implementation of the second aspect, reconstructing a channel matrix from the codeword information to obtain the second channel matrix is specifically: extracting a second signal and first white noise from the codeword information, where the second signal is a transmitted signal; and reconstructing a channel matrix from the second signal and the first white noise to obtain the second channel matrix.

With reference to the second aspect, or any of the foregoing possible implementations of the second aspect, in a third possible implementation of the second aspect, for converting the first channel matrix into codeword information and reconstructing a channel matrix from the codeword information to obtain the second channel matrix, the processor is further configured to: generate the first channel matrix from a third signal, a fourth signal and second white noise, where the third signal is a signal sent by the network device, the fourth signal is a signal obtained by the terminal when receiving the third signal, and the second white noise is white noise fed back by the terminal.
With reference to the second aspect, or any of the foregoing possible implementations of the second aspect, in a fourth possible implementation of the second aspect, the channel estimation model includes an encoding network and a decoding network, where the encoding network includes a first convolutional layer and a first fully connected layer, and the decoding network includes a second fully connected layer, a second convolutional layer, a max pooling layer, a first inception layer, a second inception layer, a depth concatenation (DepthConcat) module, a global pooling layer and a third convolutional layer. The first channel matrix serves as the input of the first convolutional layer of the encoding network; the output of the first convolutional layer serves as the input of the first fully connected layer; the output of the first fully connected layer serves as the input of the second fully connected layer of the decoding network; the output of the second fully connected layer serves as the input of the second convolutional layer; the output of the second convolutional layer serves as the input of the max pooling layer; the output of the max pooling layer serves as the input of the first inception layer; the output of the first inception layer serves as the input of the second inception layer; the output of the second inception layer serves as the input of the depth concatenation module; the output of the depth concatenation module serves as the input of the global pooling layer; the output of the global pooling layer serves as the input of the third convolutional layer; and the third convolutional layer generates the second channel matrix. A deep learning network with this structure has stronger learning ability and trains faster, and the channel matrix estimated by the trained channel estimation model has a smaller error.

With reference to the second aspect, or any of the foregoing possible implementations of the second aspect, in a fifth possible implementation of the second aspect, the channel estimation model includes an encoding network and a decoding network, where the encoding network includes a first convolutional layer and a first fully connected layer, and the decoding network includes a second fully connected layer, a second convolutional layer, a max pooling layer, a third convolutional layer, a fourth convolutional layer, a fifth convolutional layer, a sixth convolutional layer and a seventh convolutional layer. The first channel matrix serves as the input of the first convolutional layer of the encoding network; the output of the first convolutional layer serves as the input of the first fully connected layer; the output of the first fully connected layer serves as the input of the second fully connected layer of the decoding network; the output of the second fully connected layer serves as the input of the second convolutional layer; the output of the second convolutional layer serves as the input of the max pooling layer; the output of the max pooling layer serves as the input of the third convolutional layer; the output of the third convolutional layer serves as the input of the fourth convolutional layer; the output of the fourth convolutional layer serves as the input of the fifth convolutional layer; the output of the fifth convolutional layer serves as the input of the sixth convolutional layer; the output of the sixth convolutional layer serves as the input of the seventh convolutional layer; and the seventh convolutional layer generates the second channel matrix. In smaller-scale massive MIMO scenarios, a deep learning network with this structure still offers stronger learning ability, faster training, smaller errors and lower computational complexity than the prior art.

With reference to the second aspect, or any of the foregoing possible implementations of the second aspect, in a sixth possible implementation of the second aspect, the channel estimation model includes an encoding network and a decoding network, where the encoding network includes a first convolutional layer and a first fully connected layer, and the decoding network includes a second fully connected layer, a residual network and a third fully connected layer. The first channel matrix serves as the input of the first convolutional layer of the encoding network; the output of the first convolutional layer serves as the input of the first fully connected layer; the output of the first fully connected layer serves as the input of the second fully connected layer of the decoding network; the output of the second fully connected layer serves as the input of the residual network; the output of the residual network serves as the input of the third fully connected layer; and the third fully connected layer generates the second channel matrix. Through its shortcut (skip) connections on the features, a deep learning network with this structure can deepen the model without incurring high complexity.
With reference to the second aspect, or any of the foregoing possible implementations of the second aspect, in a seventh possible implementation of the second aspect, the channel estimation model uses a compressed sensing mechanism.
With reference to the second aspect, or any of the foregoing possible implementations of the second aspect, in an eighth possible implementation of the second aspect, the first convolutional layer is configured to convert the real part and the imaginary part of the first channel matrix into two real vectors, and the first fully connected layer is configured to convert the two real vectors into the codeword information.
In a third aspect, an embodiment of this application provides a computer-readable storage medium. The computer-readable storage medium stores program instructions, and when the program instructions run on a processor, the method described in the first aspect or any possible implementation of the first aspect is implemented.

In a fourth aspect, an embodiment of this application provides a computer program product. When the computer program product runs on a processor, the method described in the first aspect or any possible implementation of the first aspect is implemented.

By implementing the embodiments of this application, the first channel matrix is converted into codeword information, a second channel matrix is reconstructed from the codeword information, and the parameters of the channel estimation model are adjusted through deep learning so as to reduce the deviation between the second channel matrix and the first channel matrix, thereby obtaining a trained channel estimation model; the trained channel estimation model can subsequently perform channel estimation based on the input transmitted signal. Because no estimation of intermediate quantities such as steering vectors or a gain matrix is introduced in this process, the computational load is significantly reduced. In addition, because this application estimates the channel matrix directly rather than performing channel estimation through estimated steering vectors and a gain matrix, information distortion in intermediate steps is avoided, so the estimation error of the embodiments of this application is smaller.
Description of the drawings
FIG. 1 is a schematic structural diagram of a deep learning network in the prior art provided by this application;
FIG. 2 is a schematic diagram of a communication system scenario provided by an embodiment of this application;
FIG. 3 is a schematic flowchart of a channel estimation model training method provided by an embodiment of this application;
FIG. 4 is a schematic structural diagram of a deep learning network provided by an embodiment of this application;
FIG. 5 is a schematic structural diagram of a deep learning network provided by an embodiment of this application;
FIG. 6 is a schematic structural diagram of a deep learning network provided by an embodiment of this application;
FIG. 7 is a schematic structural diagram of a device provided by an embodiment of this application.
Detailed description of the embodiments
The embodiments of this application are described below with reference to the accompanying drawings.

Referring to FIG. 2, FIG. 2 is a schematic diagram of a communication system 200 provided by an embodiment of this application. The communication system 200 includes a network device 201 and a terminal 202.

The network device 201 may be a base station. The base station may be configured to communicate with one or more terminals, and may also be configured to communicate with one or more base stations having some terminal functions (for example, communication between a macro base station and a micro base station). The base station may be a base transceiver station (BTS) in a Time Division Synchronous Code Division Multiple Access (TD-SCDMA) system, an evolved NodeB (eNB) in an LTE system, or a base station in a 5G system or new radio (NR) system. In addition, the base station may also be an access point (AP), a transmission node (TRP), a central unit (CU) or another network entity, and may include some or all of the functions of the above network entities. Optionally, considering that the network device 201 may face heavy computing pressure, a server or a server cluster may be deployed to provide computing capability for the network device 201 alone; in this case, the server or server cluster can be regarded as part of the network device 201.

The terminals 202 may be distributed throughout the wireless communication system 200, may be stationary or mobile, and are usually multiple in number. The terminal 202 may include handheld devices with a wireless communication function (for example, mobile phones, tablets, palmtop computers), vehicle-mounted devices (for example, automobiles, bicycles, electric vehicles, airplanes, ships), wearable devices (for example, smart watches such as iWatch, smart bands, pedometers), smart home devices (for example, refrigerators, televisions, air conditioners, electricity meters), intelligent robots, workshop equipment, other processing devices that can be connected to a wireless modem, and various forms of user equipment (UE), mobile stations (MS), terminals and terminal equipment.

In the communication system 200, multiple transmit antennas are deployed at the transmitting end (for example, a network device) and multiple receive antennas are deployed at the receiving end (for example, a terminal), forming a Massive MIMO system.

Referring to FIG. 3, FIG. 3 shows a channel estimation model training method provided by an embodiment of this application. The method includes, but is not limited to, the following steps:

Step S301: The network device generates a first channel matrix.

Specifically, the third signal is a signal sent by the network device in a specific direction; the fourth signal is a signal received by the terminal from that direction; and the second white noise is white noise fed back by the terminal.
For example, suppose the network device has N_t transmit antennas and N_s is the number of orthogonal frequency division multiplexing (OFDM) subcarriers, which equals the dimension of the sparsified precoding vector. Most elements of the first channel matrix H are close to zero, particularly in the delay domain where, owing to the delays of the multipath arrivals within a given time slot, only the first N_f rows contain non-zero values; the first channel matrix H can therefore be reduced to an N_f × N_t matrix. The first channel matrix H can be generated as follows:
In the first step, the channel vector $\tilde{\mathbf{h}}_j$ at the j-th subcarrier is determined from the received signal $y_j$ at the j-th subcarrier, the precoding vector $\mathbf{v}_j$ used for subcarrier power allocation at the j-th subcarrier, the transmitted signal $x_j$ at the j-th subcarrier, and the white noise $z_j$ at the j-th subcarrier, where j takes the positive integers from 1 to N_f in turn. The transmitted signal $x_j$ at the j-th subcarrier is the signal sent by the network device in a specific direction, i.e. the third signal; the received signal $y_j$ at the j-th subcarrier is the signal received by the terminal from that direction, i.e. the fourth signal, which the terminal feeds back to the network device after acquiring it; the precoding vector $\mathbf{v}_j$ at the j-th subcarrier is a predefined quantity determined by the transmit power of each subcarrier; and the white noise $z_j$ at the j-th subcarrier, i.e. the second white noise, can also be fed back to the network device by the terminal. The relationship between these quantities is given by formula (1):

$$y_j = \tilde{\mathbf{h}}_j^{H}\,\mathbf{v}_j\,x_j + z_j \tag{1}$$

In the second step, the conjugate matrix $\tilde{\mathbf{H}}$ is obtained from the channel vectors of the individual subcarriers, where

$$\tilde{\mathbf{H}} = \left[\tilde{\mathbf{h}}_1,\ \tilde{\mathbf{h}}_2,\ \ldots,\ \tilde{\mathbf{h}}_{N_f}\right]^{H}$$

In the third step, applying the dual DFT transform of formula (2) to the conjugate matrix $\tilde{\mathbf{H}}$ yields the first channel matrix H:

$$\mathbf{H} = \mathbf{F}_{\mathrm{d}}\,\tilde{\mathbf{H}}\,\mathbf{F}_{\mathrm{a}}^{H} \tag{2}$$

where $\mathbf{F}_{\mathrm{d}}$ and $\mathbf{F}_{\mathrm{a}}$ denote the DFT matrices applied along the two dimensions of $\tilde{\mathbf{H}}$.
The above three steps describe how to obtain the first channel matrix H for one batch of signaling. The embodiments of this application obtain, by the same principle, the first channel matrix H of each of multiple batches of signaling; for example, the channel matrices obtained for M batches of signaling are H_1, H_2, H_3, …, H_i, H_{i+1}, …, H_M. These M first channel matrices are used as samples and fed into the channel estimation model for training, in order to obtain an ideal channel estimation model.
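As a rough illustration of this sample-generation step, the following Python sketch stacks the per-subcarrier channel vectors, applies a two-dimensional DFT, and truncates to the first N_f rows. The function names, the use of numpy.fft, and the DFT normalization are assumptions made only for illustration; the text specifies the procedure only at the level of formulas (1) and (2).

```python
import numpy as np

def build_first_channel_matrix(h_tilde, n_f):
    """Illustrative sketch of generating the first channel matrix H (Step S301).

    h_tilde: complex ndarray of shape (num_subcarriers, n_t); row j is the
             channel vector at the j-th subcarrier recovered via formula (1).
    n_f:     number of rows kept after the dual DFT (delay-domain truncation).
    """
    # Dual (2-D) DFT of formula (2): frequency axis -> delay domain,
    # antenna axis -> angular domain.
    h_delay_angular = np.fft.fft(np.fft.fft(h_tilde, axis=0), axis=1)
    # Most delay-domain rows are close to zero, so only the first n_f rows
    # are kept, giving an n_f x n_t matrix.
    return h_delay_angular[:n_f, :]

# Example: a batch of M training samples H_1, ..., H_M (random placeholders).
n_s, n_t, n_f, m_samples = 1024, 32, 32, 8
samples = [build_first_channel_matrix(
               np.random.randn(n_s, n_t) + 1j * np.random.randn(n_s, n_t), n_f)
           for _ in range(m_samples)]
```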
The channel estimation model may be a deep learning network (also called a deep neural network) that includes an encoder network and a decoder network, where the encoder network obtains codeword information from the first channel matrix and the decoder network reconstructs a matrix from the codeword information. The first channel matrix H_i of each batch can be used as the input of the deep learning network, and the output of the deep learning network is then the reconstructed matrix $\hat{\mathbf{H}}_i$, which may be called the second channel matrix. The parameter set $\Theta = \{\theta_{\mathrm{en}}, \theta_{\mathrm{de}}\}$ denotes the parameters of the encoder network and the decoder network, respectively. Optionally, in the embodiments of this application, the decoder network may use a compressed sensing mechanism to reduce the dimensionality of the network and hence the computational complexity. Optionally, inception layers containing convolution kernels of different sizes may also be deployed in the decoder network; splitting the computation across branches reduces the complexity of the network, and concatenating their outputs improves the performance of the network (for example, learning ability and accuracy).
Three optional structures of the deep learning network are provided below to facilitate understanding.

First structure, as shown in FIG. 4:

The encoder network contains two neural network layers, both of which use the rectified linear unit (ReLU) as the activation function and apply batch normalization. The first layer is a convolutional layer (which may be called the first convolutional layer) with 3×3 filters and a stride of 2; this layer takes the real part and the imaginary part of the input first channel matrix H. The second layer is a fully connected layer (which may be called the first fully connected layer), whose width depends on the compression ratio of the compressed sensing: if the length and width of the network and the number of feature maps are a, b and c respectively, and the compression ratio is r, the number of neurons in this layer is (a×b×c)/r. This layer converts the real part and the imaginary part of the first channel matrix H into two real vectors that capture the characteristics of the channel (for example, antenna positions, communication link fading coefficients and angle-of-arrival gains), and further generates the codeword information s from these two real vectors.

The decoder network includes:

A fully connected layer (which may be called the second fully connected layer): the width of this layer matches the dimension of the first channel matrix H, i.e. N_f × N_t.

A convolutional layer (which may be called the second convolutional layer): 8 filters of size 3×3 with stride 2 (the "s" in FIG. 4 denotes the stride), using zero padding. This layer generates 8 feature maps.

A max pooling layer: 8 filters of size 3×3 with stride 2.

An inception layer (which may be called the first inception layer): 8 filters of size 3×3 with stride 1; 8 filters of size 5×5 with stride 1; and a max pooling branch with a 3×3 filter and stride 1. This layer uses zero padding.

An inception layer (which may be called the second inception layer): 8 filters of size 3×3 with stride 1 and 24 filters of size 1×1 with stride 1, using zero padding. This layer generates 32 feature maps.

A depth concatenation (DepthConcat) module.

A global (average) pooling layer: 8 filters of size 3×3 with stride 1.
A convolutional layer (which may be called the third convolutional layer): 2 filters of size 3×3 with stride 2, using zero padding. This layer reconstructs the second channel matrix $\hat{\mathbf{H}}$.
Optionally, all neural network layers of the decoder network use the rectified linear unit (ReLU) as the activation function.
The data processing flow in the above deep neural network is as follows: a sample (first channel matrix H_i) is fed into the first convolutional layer of the encoder network, which passes the information flow to the first fully connected layer. The first fully connected layer passes the information flow to the fully connected layer of the decoder network; this fully connected layer passes the information flow through the second convolutional layer to the max pooling layer; the output of the max pooling layer is passed to the first inception layer; the output of the first inception layer is passed to the second inception layer; the output of the second inception layer is passed to the DepthConcat module; the DepthConcat module then passes its output to the average pooling layer, whose output is passed to the last convolutional layer; the last convolutional layer produces the output of the network, i.e. the second channel matrix $\hat{\mathbf{H}}_i$.
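To make this Figure-4 style data flow concrete, the following PyTorch sketch wires the layers in the order just described. The branch widths of the inception layers follow the text, but the class names, the input size (N_f = N_t = 32), the compression ratio, the reshape between the fully connected layers and the convolutional stack, the padding choices, and the final bilinear up-sampling back to N_f × N_t are assumptions made only so that the example runs end to end; it is not the patented implementation itself.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class InceptionBlock(nn.Module):
    """Parallel convolution branches (optionally plus a 3x3 max-pool branch)
    whose outputs are concatenated along the channel axis (DepthConcat)."""
    def __init__(self, in_ch, branch_specs, use_pool):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(nn.Conv2d(in_ch, out_ch, k, stride=1, padding=k // 2),
                          nn.BatchNorm2d(out_ch), nn.ReLU())
            for out_ch, k in branch_specs])
        self.pool = nn.MaxPool2d(3, stride=1, padding=1) if use_pool else None

    def forward(self, x):
        outs = [branch(x) for branch in self.branches]
        if self.pool is not None:
            outs.append(self.pool(x))
        return torch.cat(outs, dim=1)            # DepthConcat

class InceptionCsDnn(nn.Module):
    """Sketch of the Figure-4 encoder/decoder (hypothetical class name)."""
    def __init__(self, n_f=32, n_t=32, compression_ratio=4):
        super().__init__()
        self.n_f, self.n_t = n_f, n_t
        # Encoder: first convolutional layer + first fully connected layer.
        self.enc_conv = nn.Sequential(nn.Conv2d(2, 2, 3, stride=2, padding=1),
                                      nn.BatchNorm2d(2), nn.ReLU())
        enc_feat = 2 * (n_f // 2) * (n_t // 2)
        self.enc_fc = nn.Linear(enc_feat, enc_feat // compression_ratio)
        # Decoder: FC of width N_f*N_t, conv (8 maps), max pooling, two
        # inception layers, average pooling, final conv producing 2 maps.
        self.dec_fc = nn.Linear(enc_feat // compression_ratio, n_f * n_t)
        self.dec_conv = nn.Sequential(nn.Conv2d(1, 8, 3, stride=2, padding=1),
                                      nn.BatchNorm2d(8), nn.ReLU())
        self.dec_pool = nn.MaxPool2d(3, stride=2, padding=1)
        self.incept1 = InceptionBlock(8, [(8, 3), (8, 5)], use_pool=True)    # -> 24 maps
        self.incept2 = InceptionBlock(24, [(8, 3), (24, 1)], use_pool=False)  # -> 32 maps
        self.avg_pool = nn.AvgPool2d(3, stride=1, padding=1)
        self.out_conv = nn.Conv2d(32, 2, 3, stride=1, padding=1)

    def forward(self, h):                        # h: (batch, 2, N_f, N_t)
        codeword = F.relu(self.enc_fc(self.enc_conv(h).flatten(1)))
        x = F.relu(self.dec_fc(codeword)).view(-1, 1, self.n_f, self.n_t)
        x = self.dec_pool(self.dec_conv(x))
        x = self.avg_pool(self.incept2(self.incept1(x)))
        x = self.out_conv(x)
        # Recover the original N_f x N_t resolution lost to the strided layers.
        return F.interpolate(x, size=(self.n_f, self.n_t), mode="bilinear",
                             align_corners=False)

h = torch.randn(4, 2, 32, 32)        # real/imaginary planes of four samples H_i
h_hat = InceptionCsDnn()(h)          # reconstructed second channel matrices
```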
Second structure, as shown in FIG. 5:

When the massive MIMO system is relatively small (for example, 0-64 transmit antennas and 0-16 users), the difference from the deep learning network shown in FIG. 4 is that in FIG. 5 the inception layers of the decoder network are replaced by ordinary convolutional layers. To reduce model complexity, the convolutional layers can be designed with small (3×3) kernels. For example, the deep neural network includes an encoder network and a decoder network, where the encoder network includes a first convolutional layer and a first fully connected layer, and the decoder network includes a second fully connected layer, a second convolutional layer, a max pooling layer, and a third, fourth, fifth, sixth and seventh convolutional layer. The first channel matrix serves as the input of the first convolutional layer of the encoder network; the output of the first convolutional layer serves as the input of the first fully connected layer; the output of the first fully connected layer serves as the input of the second fully connected layer of the decoder network; the output of the second fully connected layer serves as the input of the second convolutional layer; the output of the second convolutional layer serves as the input of the max pooling layer; the output of the max pooling layer serves as the input of the third convolutional layer; the output of the third convolutional layer serves as the input of the fourth convolutional layer; the output of the fourth convolutional layer serves as the input of the fifth convolutional layer; the output of the fifth convolutional layer serves as the input of the sixth convolutional layer; the output of the sixth convolutional layer serves as the input of the seventh convolutional layer; and the seventh convolutional layer generates the second channel matrix. Experimental results show that although the performance is slightly lower than that of the embodiment shown in FIG. 4, this structure still performs better than other methods in smaller-scale massive MIMO scenarios while having lower computational complexity. In addition, all neural network layers of the decoder network can use ReLU as the activation function.
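A corresponding sketch of the Figure-5 variant simply swaps the inception layers for a stack of plain 3×3 convolutions (the third to seventh convolutional layers). As above, the class name, the channel widths of the intermediate layers, the reshape, and the final up-sampling are assumptions chosen only to make the example runnable.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallScaleCsDnn(nn.Module):
    """Figure-5 style model for smaller massive MIMO setups (hypothetical name)."""
    def __init__(self, n_f=32, n_t=16, compression_ratio=4):
        super().__init__()
        self.n_f, self.n_t = n_f, n_t
        # Encoder: identical two-layer structure (conv + fully connected).
        self.enc_conv = nn.Sequential(nn.Conv2d(2, 2, 3, stride=2, padding=1),
                                      nn.BatchNorm2d(2), nn.ReLU())
        enc_feat = 2 * (n_f // 2) * (n_t // 2)
        self.enc_fc = nn.Linear(enc_feat, enc_feat // compression_ratio)
        # Decoder: FC, second conv, max pooling, then plain 3x3 conv layers.
        self.dec_fc = nn.Linear(enc_feat // compression_ratio, n_f * n_t)
        self.dec_conv2 = nn.Sequential(nn.Conv2d(1, 8, 3, stride=2, padding=1),
                                       nn.BatchNorm2d(8), nn.ReLU())
        self.dec_pool = nn.MaxPool2d(3, stride=2, padding=1)
        self.convs_3_to_6 = nn.Sequential(*[
            nn.Sequential(nn.Conv2d(8, 8, 3, stride=1, padding=1),
                          nn.BatchNorm2d(8), nn.ReLU())
            for _ in range(4)])
        self.conv_7 = nn.Conv2d(8, 2, 3, stride=1, padding=1)  # reconstruction head

    def forward(self, h):                        # h: (batch, 2, N_f, N_t)
        s = F.relu(self.enc_fc(self.enc_conv(h).flatten(1)))
        x = F.relu(self.dec_fc(s)).view(-1, 1, self.n_f, self.n_t)
        x = self.dec_pool(self.dec_conv2(x))
        x = self.conv_7(self.convs_3_to_6(x))
        return F.interpolate(x, size=(self.n_f, self.n_t), mode="bilinear",
                             align_corners=False)
```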
Third structure, as shown in FIG. 6:

Larger massive MIMO systems (for example, large millimeter-wave deployments typically have 256-512 transmit antennas and 64-128 users) are very complex, so the network structure must be deepened; in general, however, increasing the depth of a neural network causes the complexity to rise sharply. Therefore, as shown in FIG. 6 and compared with FIG. 4, the encoder network remains unchanged, a residual network (ResNet) is introduced to replace all of the decoder network except its first fully connected layer, and a fully connected layer of dimension N_f × N_t is designed as the last layer. For example, the deep neural network includes an encoder network and a decoder network, where the encoder network includes a first convolutional layer and a first fully connected layer, and the decoder network includes a second fully connected layer, a residual network and a third fully connected layer. The first channel matrix serves as the input of the first convolutional layer of the encoder network; the output of the first convolutional layer serves as the input of the first fully connected layer; the output of the first fully connected layer serves as the input of the second fully connected layer of the decoder network; the output of the second fully connected layer serves as the input of the residual network; the output of the residual network serves as the input of the third fully connected layer; and the third fully connected layer generates the second channel matrix. Through its shortcut (skip) connections on the features, this network can deepen the model without incurring high complexity. According to the results reported in K. He, X. Zhang, S. Ren, and J. Sun, "Deep Residual Learning for Image Recognition," in Proc. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, 2016, pp. 770-778, this structure achieves better performance than conventional convolutional layers, and can therefore improve the performance of this application in larger-scale massive MIMO scenarios.
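The Figure-6 variant can be sketched in the same framework: the encoder is unchanged, the decoder keeps only its first fully connected layer, a residual network replaces the rest, and a final fully connected layer of dimension N_f × N_t produces the output. The use of fully connected residual blocks, their number and width, and the single real-valued output plane are assumptions; the text states only that a residual network and a final N_f × N_t fully connected layer are used.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    """Standard residual block with an identity skip connection."""
    def __init__(self, width):
        super().__init__()
        self.fc1 = nn.Linear(width, width)
        self.fc2 = nn.Linear(width, width)

    def forward(self, x):
        return F.relu(x + self.fc2(F.relu(self.fc1(x))))

class LargeScaleCsDnn(nn.Module):
    """Figure-6 style model for larger massive MIMO systems (hypothetical name).
    The default n_t is kept small here; the millimeter-wave scenarios described
    above would use 256-512 transmit antennas."""
    def __init__(self, n_f=32, n_t=64, compression_ratio=4, num_blocks=4):
        super().__init__()
        self.n_f, self.n_t = n_f, n_t
        # Encoder: unchanged first convolutional + first fully connected layer.
        self.enc_conv = nn.Sequential(nn.Conv2d(2, 2, 3, stride=2, padding=1),
                                      nn.BatchNorm2d(2), nn.ReLU())
        enc_feat = 2 * (n_f // 2) * (n_t // 2)
        self.enc_fc = nn.Linear(enc_feat, enc_feat // compression_ratio)
        # Decoder: second FC layer, residual network, final FC of size N_f*N_t.
        self.dec_fc = nn.Linear(enc_feat // compression_ratio, n_f * n_t)
        self.res_net = nn.Sequential(*[ResidualBlock(n_f * n_t)
                                       for _ in range(num_blocks)])
        self.out_fc = nn.Linear(n_f * n_t, n_f * n_t)  # third fully connected layer

    def forward(self, h):                        # h: (batch, 2, N_f, N_t)
        s = F.relu(self.enc_fc(self.enc_conv(h).flatten(1)))
        x = self.res_net(F.relu(self.dec_fc(s)))
        # One real-valued N_f x N_t plane; in practice a 2*N_f*N_t head could
        # carry the real and imaginary parts separately.
        return self.out_fc(x).view(-1, self.n_f, self.n_t)
```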
Step S302: The network device converts the first channel matrix into codeword information, and reconstructs a channel matrix from the codeword information to obtain a second channel matrix.

Specifically, the real part and the imaginary part of the first channel matrix are converted into two real vectors, and the two real vectors are converted into codeword information; in particular, the encoder network may convert the real part and the imaginary part of the first channel matrix into the two real vectors and convert the two real vectors into the codeword information.

The decoder network may then reconstruct the channel matrix from the codeword information to obtain the second channel matrix. Optionally, the decoder network extracts the transmitted signal and white noise from the codeword information; the transmitted signal extracted here is called the second signal, and the white noise extracted here is called the first white noise. It can be understood that when the parameter set of the decoder network changes over the training iterations, quantities representing the second signal and the first white noise are still extracted, but their specific values may change. Optionally, in addition to the second signal and the first white noise, the decoder network may also extract other feature information from the codeword information, such as channel fading coefficients and channel noise; exactly what other information is extracted is not limited here. After the second signal and the first white noise are extracted, the channel matrix is reconstructed from the second signal and the first white noise to obtain the second channel matrix.

Step S303: The network device performs deep learning on the channel estimation model using the first channel matrix and the second channel matrix to obtain a trained channel estimation model.

Specifically, a loss function can be introduced to constrain the deviation between the second channel matrix and the first channel matrix. If the deviation between the second channel matrix and the first channel matrix does not satisfy the constraint of the loss function, iterative training must continue; optionally, the stochastic gradient descent (SGD) algorithm can be used for the iterative training. When the deviation between the second channel matrix and the first channel matrix satisfies the constraint of the loss function, the channel estimation model at that point is the trained channel estimation model. Optionally, the loss function l(Θ) used in the embodiments of this application is given by formula (3).
$$l(\Theta) = \frac{1}{M}\sum_{m=1}^{M}\left\lVert \hat{\mathbf{H}}_m - \mathbf{H}_m \right\rVert_2^2 \tag{3}$$

In formula (3), M is the total number of samples, $\hat{\mathbf{H}}_m$ is the reconstructed matrix obtained from the m-th sample, i.e. the second channel matrix, and $\mathbf{H}_m$ is the m-th first channel matrix among the M samples. The loss function l(Θ) is based on the mean square error and can minimize the error between the first channel matrix and the second channel matrix.
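A minimal training loop corresponding to Step S303 might look as follows, assuming the model maps a batch of first channel matrices of shape (B, 2, N_f, N_t) to reconstructions of the same shape. torch.nn.MSELoss averages over all elements, which is proportional to the per-sample squared norm of formula (3); the optimiser settings and stopping rule are illustrative assumptions rather than values taken from the text.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

def train_channel_estimator(model, samples, epochs=50, lr=1e-3, batch_size=32):
    """samples: tensor of shape (M, 2, N_f, N_t) holding the first channel
    matrices H_1 ... H_M (real and imaginary planes stacked as channels)."""
    loader = DataLoader(samples, batch_size=batch_size, shuffle=True)
    optimiser = torch.optim.SGD(model.parameters(), lr=lr)
    criterion = nn.MSELoss()                 # mean squared reconstruction error
    for _ in range(epochs):
        for h in loader:                     # h: a batch of first channel matrices
            h_hat = model(h)                 # second channel matrices (reconstructions)
            loss = criterion(h_hat, h)       # proportional to formula (3)
            optimiser.zero_grad()
            loss.backward()
            optimiser.step()
    return model
```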
Step S304: The network device obtains a first signal transmitted by the terminal.

Specifically, the signal transmitted by the terminal is called the first signal, and the network device receives this signal accordingly.

Step S305: The network device performs channel estimation on the first signal using the trained channel estimation model.

It can be understood that generating the codeword information from the first channel matrix, extracting the second signal and the first white noise from the codeword information, reconstructing the channel matrix from the second signal and the first white noise, and iteratively training the channel estimation model amount to learning the relationship between the transmitted signal and the channel matrix; this relationship is captured by the trained channel estimation model, so feeding the obtained first signal into the trained channel estimation model performs the channel estimation. It should be noted that the second signal and the first signal are both, in essence, transmitted signals; the transmitted signal used for training is called the second signal, and the transmitted signal used in the actual estimation process is called the first signal. Similarly, the first white noise and the second white noise are both white noise; the white noise used to compute the training samples (i.e. the first channel matrix) is called the second white noise, and the white noise extracted from the training samples is called the first white noise.
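For Steps S304 and S305, the inference call can be sketched as below. How the received first signal is pre-processed into the real-valued input tensor expected by the network is not specified in the text, so the argument name and its shape are purely hypothetical placeholders.

```python
import torch

def estimate_channel(trained_model, first_signal_input):
    """Step S305 sketch: feed the (pre-processed) first signal into the trained
    channel estimation model and return the estimated channel matrix.
    first_signal_input: hypothetical tensor of shape (1, 2, N_f, N_t); how the
    received first signal is turned into this representation is not specified."""
    trained_model.eval()
    with torch.no_grad():
        return trained_model(first_signal_input)
```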
To verify the performance of the trained channel estimation model, experiments were carried out on an outdoor massive MIMO system and an indoor massive MIMO system, using the first deep learning network architecture described above, with the following parameter settings:

Outdoor massive MIMO system: 128 antennas, 32 single-antenna users, 100 propagation paths, subcarriers N_s = 1024, N_f = 32, and an area width of 400 m;

Indoor massive MIMO system: 32 antennas, 16 single-antenna users, 6 propagation paths, subcarriers N_s = 1024, N_f = 32, and an area width of 20 m.

The normalized mean square error (NMSE) is used as the evaluation metric, computed as in formula (4).
$$\mathrm{NMSE} = \mathbb{E}\left\{\frac{\left\lVert \mathbf{H} - \hat{\mathbf{H}} \right\rVert_2^2}{\left\lVert \mathbf{H} \right\rVert_2^2}\right\} \tag{4}$$

In formula (4), H is the actual channel matrix and $\hat{\mathbf{H}}$ is the channel matrix estimated by the channel estimation model.
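The NMSE of formula (4) can be computed, for example, as follows; approximating the expectation by averaging over a set of test samples is an assumption about how the metric is evaluated in practice.

```python
import numpy as np

def nmse(h_true, h_est):
    """Normalized mean square error of formula (4).

    h_true, h_est: complex arrays of shape (num_samples, N_f, N_t) holding the
    actual and estimated channel matrices; the expectation is approximated by
    averaging over the sample dimension."""
    num = np.sum(np.abs(h_true - h_est) ** 2, axis=(1, 2))
    den = np.sum(np.abs(h_true) ** 2, axis=(1, 2))
    return float(np.mean(num / den))
```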
Table 1 - NMSE performance results
As can be seen from Table 1, the two deep-learning-based methods, namely the conventional deep neural network method (C-DNN), which decomposes the channel matrix into steering vectors and a gain matrix and estimates them independently, and the compressed sensing deep neural network method (CS-DNN), which performs channel estimation on the channel matrix directly, achieve better performance than the non-deep-learning methods that require pilots, i.e. the least absolute shrinkage and selection operator (LASSO) and approximate message passing (AMP) methods. Meanwhile, the embodiments of this application achieve a lower NMSE than the other methods at different compression ratios. Although the VTC-DNN method achieves a better NMSE in the indoor scenario at a compression ratio of 0.03125, overall the CS-DNN method of the embodiments of this application achieves better channel estimation performance. In addition, the comparison of average running times shows that the embodiments of this application require less running time and have lower computational complexity.
在图3所示的方法中,将第一信道矩阵转化为码字信息,再根据码字信息重建矩阵得到第二信道矩阵,然后通过深度学习调整信道估计模型的参数以缩小第二信道矩阵与第一信道矩阵之间的关系,从而得到训练后的信道估计模型;后续该训练后的信道估计模型就可以根据输入的发送信号进行信道估计。由于该过程中没有引入对中间量导向向量的估计和增益矩阵的估计,因此显著降低了运算压力;另外,由于本申请只是进行信道矩阵的估计,而不是通过估计得到的导向向量和增益矩阵进行信道估计,避免了中间环节的信息失真,因此本申请实施例的估计结果误差更小。In the method shown in Figure 3, the first channel matrix is converted into codeword information, and the matrix is reconstructed according to the codeword information to obtain the second channel matrix, and then the parameters of the channel estimation model are adjusted through deep learning to reduce the second channel matrix and The relationship between the first channel matrix is obtained, and the trained channel estimation model can be obtained; the channel estimation model after the training can perform channel estimation according to the input transmission signal. Since the estimation of the intermediate steering vector and the gain matrix is not introduced in this process, the calculation pressure is significantly reduced; in addition, because this application only estimates the channel matrix, not the steering vector and gain matrix obtained by estimation. Channel estimation avoids information distortion in the intermediate links, so the error of the estimation result in the embodiment of the present application is smaller.
The method of the embodiments of this application has been described in detail above; the apparatus of the embodiments of this application is provided below.
Referring to FIG. 7, FIG. 7 shows a network device (that is, a channel estimation device) 700 provided by some embodiments of this application. It can be understood that the network device 700 may be implemented as a central office (CO) device, a multi-dwelling unit (MDU), a multi-tenant unit (MTU), a digital subscriber line access multiplexer (DSLAM), a multi-service access node (MSAN), an optical network unit (ONU), or the like. As shown in FIG. 7, the network device 700 may include one or more device processors 701, a memory 702, a transmitter 704, and a receiver 705. These components may be connected to the processor. Specifically:
The processor 701 may be implemented as one or more central processing unit (CPU) chips, cores (for example, multi-core processors), field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), and/or digital signal processors (DSPs), and/or may be part of one or more ASICs. The processor 701 may be configured to execute any of the solutions described in the foregoing embodiments of this application, including the data transmission method. The processor 701 may be implemented by hardware or by a combination of hardware and software.
The memory 702 is coupled to the processor 701 and is configured to store various software programs and/or multiple sets of program instructions. Specifically, the memory 702 may include a high-speed random access memory, and may also include a non-volatile memory, for example, one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state storage devices. The memory 702 may also store a network communication program, which may be used to communicate with one or more additional devices, one or more terminal devices, or one or more other devices.
The transmitter 704 may serve as an output device of the network device 700, for example, to transfer data out of the device 700. The receiver 705 may serve as an input device of the network device 700, for example, to transfer data into the device 700. In addition, the transmitter 704 may include one or more optical transmitters and/or one or more electrical transmitters, and the receiver 705 may include one or more optical receivers and/or one or more electrical receivers. The transmitter 704/receiver 705 may take the form of a modem, a modem bank, an Ethernet card, a universal serial bus (USB) interface card, a serial interface, a token ring card, a fiber distributed data interface (FDDI) card, or the like. Optionally, the network device 700 may have no receiver or transmitter but instead a wired communication interface, through which it can communicate with other devices in a wired manner.
The processor 701 may be configured to read and execute computer-readable program instructions and related model parameters. Specifically, the processor 701 may be configured to call the program instructions and model parameters stored in the memory 702, for example, the program instructions and model parameters that implement, on the network device side, the channel estimation model training method provided by one or more embodiments, and to execute those program instructions and model parameters. Optionally, the processor 701 performs the following operations by calling the program instructions and model parameters in the memory 702:
converting a first channel matrix into codeword information, and reconstructing a channel matrix by using the codeword information to obtain a second channel matrix; performing deep learning on a channel estimation model by using the first channel matrix and the second channel matrix to obtain a trained channel estimation model, wherein the channel estimation model is constructed based on a deep neural network; obtaining a first signal transmitted by a terminal; and performing channel estimation on the first signal by using the trained channel estimation model.
In the foregoing network device, the first channel matrix is converted into codeword information, the channel matrix is reconstructed from the codeword information to obtain the second channel matrix, and the parameters of the channel estimation model are then adjusted through deep learning so as to reduce the difference between the second channel matrix and the first channel matrix, thereby obtaining the trained channel estimation model; the trained channel estimation model can subsequently perform channel estimation based on an input transmitted signal. Because no estimation of the intermediate steering vector or gain matrix is introduced in this process, the computational burden is significantly reduced. In addition, because this application estimates the channel matrix directly, rather than performing channel estimation through an estimated steering vector and gain matrix, information distortion in intermediate stages is avoided, and therefore the estimation error of the embodiment of this application is smaller.
In a possible implementation, the converting the first channel matrix into codeword information and reconstructing the channel matrix by using the codeword information to obtain the second channel matrix is specifically: converting the real part and the imaginary part of the first channel matrix into two real vectors; converting the two real vectors into the codeword information; and reconstructing the channel matrix by using the codeword information to obtain the second channel matrix.
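As a non-limiting sketch of this step, the following PyTorch-style code shows one way to split a complex channel matrix into two real vectors and map them to a codeword; the dimensions and the use of a single fully connected layer are assumptions made for the example.

```python
import torch
import torch.nn as nn

NR, NT, M = 32, 32, 64                      # hypothetical antenna counts and codeword length

fc = nn.Linear(2 * NR * NT, M)              # fully connected layer: two real vectors -> codeword

def encode(h_complex: torch.Tensor) -> torch.Tensor:
    """h_complex: (NR, NT) complex channel matrix -> codeword of length M."""
    real_vec = h_complex.real.reshape(-1)   # real part as one real vector
    imag_vec = h_complex.imag.reshape(-1)   # imaginary part as a second real vector
    return fc(torch.cat([real_vec, imag_vec]))

h = torch.randn(NR, NT, dtype=torch.complex64)
codeword = encode(h)                        # codeword information used for reconstruction
```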
In a possible implementation, the reconstructing the channel matrix by using the codeword information to obtain the second channel matrix is specifically: extracting a second signal and first white noise from the codeword information, where the second signal is a transmitted signal; and reconstructing the channel matrix according to the second signal and the first white noise to obtain the second channel matrix.
In a possible implementation, before the converting the first channel matrix into codeword information and reconstructing the channel matrix by using the codeword information to obtain the second channel matrix, the processor is further configured to generate the first channel matrix according to a third signal, a fourth signal, and second white noise, where the third signal is a signal sent by the network device, the fourth signal is a signal obtained when the terminal receives the third signal, and the second white noise is white noise fed back by the terminal.
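This application does not fix a particular formula for generating the first channel matrix. Purely as an illustration, the sketch below assumes the usual linear model Y = HX + N and recovers H by least squares, where X (the third signal), Y (the fourth signal), and N (the second white noise) are hypothetical stand-ins for the quantities named above.

```python
import numpy as np

rng = np.random.default_rng(0)
NR, NT, L = 4, 4, 16                                  # hypothetical antenna counts and pilot length

X = rng.standard_normal((NT, L))                      # third signal: pilots sent by the network device
H_true = rng.standard_normal((NR, NT))                # unknown channel (for simulation only)
N = 0.01 * rng.standard_normal((NR, L))               # second white noise fed back by the terminal
Y = H_true @ X + N                                    # fourth signal: what the terminal obtains

# Least-squares generation of the first channel matrix from (Y - N) and X.
H_first = (Y - N) @ np.linalg.pinv(X)
```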
In a possible implementation, the channel estimation model includes an encoding network and a decoding network, where the encoding network includes a first convolutional layer and a first fully connected layer, and the decoding network includes a second fully connected layer, a second convolutional layer, a max pooling layer, a first insertion layer, a second insertion layer, a deep connection module, a global pooling layer, and a third convolutional layer. The first channel matrix is used as the input of the first convolutional layer of the encoding network, the output of the first convolutional layer is used as the input of the first fully connected layer, the output of the first fully connected layer is used as the input of the second fully connected layer of the decoding network, the output of the second fully connected layer is used as the input of the max pooling layer, the output of the max pooling layer is used as the input of the first insertion layer, the output of the first insertion layer is used as the input of the second insertion layer, the output of the second insertion layer is used as the input of the deep connection module, the output of the deep connection module is used as the input of the global pooling layer, the output of the global pooling layer is used as the input of the third convolutional layer, and the third convolutional layer is used to generate the second channel matrix. A deep learning network with this structure has stronger learning ability and faster training, and the channel matrix estimated by the trained channel estimation model has a smaller error.
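For illustration only, a minimal PyTorch-style sketch of this layer chain is given below. The internal structure of the insertion layers and of the deep connection module is not specified here, so they appear as hypothetical placeholder modules, and all layer sizes, kernel sizes, and the size-preserving pooling choices are assumptions made for the example.

```python
import torch
import torch.nn as nn

class InsertionLayer(nn.Module):
    """Placeholder for the first/second insertion layer; the real internals are not specified here."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
    def forward(self, x):
        return torch.relu(self.body(x))

class DeepConnectionModule(nn.Module):
    """Placeholder for the deep connection module; a skip-style connection is assumed."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
    def forward(self, x):
        return x + self.body(x)

class ChannelModelA(nn.Module):
    def __init__(self, nr=32, nt=32, m=64, c=16):
        super().__init__()
        # Encoding network: first convolutional layer + first fully connected layer.
        self.conv1 = nn.Conv2d(2, c, kernel_size=3, padding=1)
        self.fc1 = nn.Linear(c * nr * nt, m)
        # Decoding network.
        self.fc2 = nn.Linear(m, c * nr * nt)
        self.maxpool = nn.MaxPool2d(kernel_size=3, stride=1, padding=1)   # size-preserving, an assumption
        self.insert1 = InsertionLayer(c)
        self.insert2 = InsertionLayer(c)
        self.deep = DeepConnectionModule(c)
        self.gpool = nn.AdaptiveAvgPool2d((nr, nt))   # stand-in for the global pooling layer
        self.conv3 = nn.Conv2d(c, 2, kernel_size=3, padding=1)
        self.nr, self.nt, self.c = nr, nt, c

    def forward(self, h):                             # h: (batch, 2, nr, nt), the first channel matrix
        z = torch.relu(self.conv1(h))
        code = self.fc1(z.flatten(1))                 # codeword information
        y = self.fc2(code).view(-1, self.c, self.nr, self.nt)
        y = self.maxpool(y)
        y = self.insert2(self.insert1(y))
        y = self.deep(y)
        y = self.gpool(y)
        return self.conv3(y)                          # second channel matrix
```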
In a possible implementation, the channel estimation model includes an encoding network and a decoding network, where the encoding network includes a first convolutional layer and a first fully connected layer, and the decoding network includes a second fully connected layer, a second convolutional layer, a max pooling layer, a third convolutional layer, a fourth convolutional layer, a fifth convolutional layer, a sixth convolutional layer, and a seventh convolutional layer. The first channel matrix is used as the input of the first convolutional layer of the encoding network, the output of the first convolutional layer is used as the input of the first fully connected layer, the output of the first fully connected layer is used as the input of the second fully connected layer of the decoding network, the output of the second fully connected layer is used as the input of the max pooling layer, the output of the max pooling layer is used as the input of the third convolutional layer, the output of the third convolutional layer is used as the input of the fourth convolutional layer, the output of the fourth convolutional layer is used as the input of the fifth convolutional layer, the output of the fifth convolutional layer is used as the input of the sixth convolutional layer, the output of the sixth convolutional layer is used as the input of the seventh convolutional layer, and the seventh convolutional layer is used to generate the second channel matrix. In a smaller-scale massive MIMO system scenario, a deep learning network with this structure still offers stronger learning ability, faster training, smaller error, and lower computational complexity than the prior art.
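Again purely as an illustration, the decoder of this variant can be sketched as a plain stack of size-preserving convolutions; the encoder would be the same as in the previous sketch, and all channel widths and kernel sizes are assumptions made for the example.

```python
import torch.nn as nn

def conv_block(cin, cout):
    # Size-preserving 3x3 convolution with ReLU, an assumption for the sketch.
    return nn.Sequential(nn.Conv2d(cin, cout, kernel_size=3, padding=1), nn.ReLU())

# Decoder for this variant: second fully connected layer, max pooling, then convolutional layers 3 to 7.
decoder_b = nn.Sequential(
    nn.Linear(64, 16 * 32 * 32),                 # second fully connected layer (codeword -> feature map)
    nn.Unflatten(1, (16, 32, 32)),
    nn.MaxPool2d(kernel_size=3, stride=1, padding=1),
    conv_block(16, 16),                          # third convolutional layer
    conv_block(16, 16),                          # fourth convolutional layer
    conv_block(16, 16),                          # fifth convolutional layer
    conv_block(16, 16),                          # sixth convolutional layer
    nn.Conv2d(16, 2, kernel_size=3, padding=1),  # seventh convolutional layer -> second channel matrix
)
```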
In a possible implementation, the channel estimation model includes an encoding network and a decoding network, where the encoding network includes a first convolutional layer and a first fully connected layer, and the decoding network includes a second fully connected layer, a residual network, and a third fully connected layer. The first channel matrix is used as the input of the first convolutional layer of the encoding network, the output of the first convolutional layer is used as the input of the first fully connected layer, the output of the first fully connected layer is used as the input of the second fully connected layer of the decoding network, the output of the second fully connected layer is used as the input of the residual network, the output of the residual network is used as the input of the third fully connected layer, and the third fully connected layer is used to generate the second channel matrix. Through its shortcut connections across features, a deep learning network with this structure can increase the model depth while avoiding high complexity.
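As a final non-limiting sketch, one possible residual decoder for this variant is shown below; the number of residual blocks, the hidden width, and the output size are assumptions made for the example.

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Fully connected residual block with a shortcut connection."""
    def __init__(self, width):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(width, width), nn.ReLU(), nn.Linear(width, width))
    def forward(self, x):
        return x + self.body(x)                     # shortcut adds depth without high complexity

# Decoder for this variant: second fully connected layer, residual network, third fully connected layer.
M, WIDTH, OUT = 64, 512, 2 * 32 * 32                # hypothetical codeword length, width, output size
decoder_c = nn.Sequential(
    nn.Linear(M, WIDTH),                            # second fully connected layer
    ResidualBlock(WIDTH),                           # residual network (two blocks assumed)
    ResidualBlock(WIDTH),
    nn.Linear(WIDTH, OUT),                          # third fully connected layer -> second channel matrix
)
```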
In a possible implementation, the channel estimation model adopts a compressed sensing mechanism.
In a possible implementation, the first convolutional layer is used to convert the real part and the imaginary part of the first channel matrix into two real vectors, and the first fully connected layer is used to convert the two real vectors into the codeword information.
It should be noted that, for the implementation of each operation, reference may also be made to the corresponding description of the method embodiment shown in FIG. 3.
An embodiment of this application further provides a chip system. The chip system includes at least one processor, a memory, and an interface circuit. The memory, the interface circuit, and the at least one processor are interconnected through lines, and the memory stores instructions; when the instructions are executed by the processor, the method flow shown in FIG. 3 is implemented.
An embodiment of this application further provides a computer-readable storage medium. The computer-readable storage medium stores instructions, and when the instructions are run on a processor, the method flow shown in FIG. 3 is implemented.
An embodiment of this application further provides a computer program product. When the computer program product runs on a processor, the method flow shown in FIG. 3 is implemented.
In summary, the first channel matrix is converted into codeword information, the channel matrix is reconstructed from the codeword information to obtain the second channel matrix, and the parameters of the channel estimation model are then adjusted through deep learning so as to reduce the difference between the second channel matrix and the first channel matrix, thereby obtaining the trained channel estimation model; the trained channel estimation model can subsequently perform channel estimation based on an input transmitted signal. Because no estimation of the intermediate steering vector or gain matrix is introduced in this process, the computational burden is significantly reduced. In addition, because this application estimates the channel matrix directly, rather than performing channel estimation through an estimated steering vector and gain matrix, information distortion in intermediate stages is avoided, and therefore the estimation error of the embodiment of this application is smaller.
A person of ordinary skill in the art can understand that all or part of the processes of the methods in the foregoing embodiments may be implemented by a computer program instructing relevant hardware. The program may be stored in a computer-readable storage medium, and when the program is executed, the processes of the foregoing method embodiments may be included. The foregoing storage medium includes various media that can store program code, such as a ROM, a random access memory (RAM), a magnetic disk, or an optical disc.

Claims (20)

  1. A channel estimation method, characterized by comprising:
    converting a first channel matrix into codeword information, and reconstructing a channel matrix by using the codeword information to obtain a second channel matrix;
    performing deep learning on a channel estimation model by using the first channel matrix and the second channel matrix to obtain a trained channel estimation model, wherein the channel estimation model is constructed based on a deep neural network;
    obtaining a first signal transmitted by a terminal; and
    performing channel estimation on the first signal by using the trained channel estimation model.
  2. The method according to claim 1, wherein the converting the first channel matrix into codeword information and reconstructing the channel matrix by using the codeword information to obtain the second channel matrix comprises:
    converting a real part and an imaginary part of the first channel matrix into two real vectors;
    converting the two real vectors into the codeword information; and
    reconstructing the channel matrix by using the codeword information to obtain the second channel matrix.
  3. The method according to claim 2, wherein the reconstructing the channel matrix by using the codeword information to obtain the second channel matrix comprises:
    extracting a second signal and first white noise from the codeword information, wherein the second signal is a transmitted signal; and
    reconstructing the channel matrix according to the second signal and the first white noise to obtain the second channel matrix.
  4. The method according to any one of claims 1 to 3, wherein before the converting the first channel matrix into codeword information and reconstructing the channel matrix by using the codeword information to obtain the second channel matrix, the method further comprises:
    generating the first channel matrix according to a third signal, a fourth signal, and second white noise, wherein the third signal is a signal sent by a network device, the fourth signal is a signal obtained when the terminal receives the third signal, and the second white noise is white noise fed back by the terminal.
  5. The method according to any one of claims 1 to 4, wherein the channel estimation model comprises an encoding network and a decoding network, wherein the encoding network comprises a first convolutional layer and a first fully connected layer, and the decoding network comprises a second fully connected layer, a second convolutional layer, a max pooling layer, a first insertion layer, a second insertion layer, a deep connection module, a global pooling layer, and a third convolutional layer; the channel matrix is used as an input of the first convolutional layer of the encoding network, an output of the first convolutional layer is used as an input of the first fully connected layer, an output of the first fully connected layer is used as an input of the second fully connected layer of the decoding network, an output of the second fully connected layer is used as an input of the max pooling layer, an output of the max pooling layer is used as an input of the first insertion layer, an output of the first insertion layer is used as an input of the second insertion layer, an output of the second insertion layer is used as an input of the deep connection module, an output of the deep connection module is used as an input of the global pooling layer, an output of the global pooling layer is used as an input of the third convolutional layer, and the third convolutional layer is used to generate the second channel matrix.
  6. The method according to any one of claims 1 to 4, wherein the channel estimation model comprises an encoding network and a decoding network, wherein the encoding network comprises a first convolutional layer and a first fully connected layer, and the decoding network comprises a second fully connected layer, a second convolutional layer, a max pooling layer, a third convolutional layer, a fourth convolutional layer, a fifth convolutional layer, a sixth convolutional layer, and a seventh convolutional layer; the channel matrix is used as an input of the first convolutional layer of the encoding network, an output of the first convolutional layer is used as an input of the first fully connected layer, an output of the first fully connected layer is used as an input of the second fully connected layer of the decoding network, an output of the second fully connected layer is used as an input of the max pooling layer, an output of the max pooling layer is used as an input of the third convolutional layer, an output of the third convolutional layer is used as an input of the fourth convolutional layer, an output of the fourth convolutional layer is used as an input of the fifth convolutional layer, an output of the fifth convolutional layer is used as an input of the sixth convolutional layer, an output of the sixth convolutional layer is used as an input of the seventh convolutional layer, and the seventh convolutional layer is used to generate the second channel matrix.
  7. The method according to any one of claims 1 to 4, wherein the channel estimation model comprises an encoding network and a decoding network, wherein the encoding network comprises a first convolutional layer and a first fully connected layer, and the decoding network comprises a second fully connected layer, a residual network, and a third fully connected layer; the channel matrix is used as an input of the first convolutional layer of the encoding network, an output of the first convolutional layer is used as an input of the first fully connected layer, an output of the first fully connected layer is used as an input of the second fully connected layer of the decoding network, an output of the second fully connected layer is used as an input of the residual network, an output of the residual network is used as an input of the third fully connected layer, and the third fully connected layer is used to generate the second channel matrix.
  8. The method according to any one of claims 1 to 7, wherein the channel estimation model adopts a compressed sensing mechanism.
  9. The method according to any one of claims 5 to 7, wherein the first convolutional layer is used to convert the real part and the imaginary part of the first channel matrix into two real vectors, and the first fully connected layer is used to convert the two real vectors into the codeword information.
  10. A channel estimation device, characterized by comprising a processor and a memory, wherein the memory is configured to store program instructions and model parameters, and the processor is configured to call the program instructions and the model parameters to perform the following operations:
    converting a first channel matrix into codeword information, and reconstructing a channel matrix by using the codeword information to obtain a second channel matrix;
    performing deep learning on a channel estimation model by using the first channel matrix and the second channel matrix to obtain a trained channel estimation model, wherein the channel estimation model is constructed based on a deep neural network;
    obtaining a first signal transmitted by a terminal; and
    performing channel estimation on the first signal by using the trained channel estimation model.
  11. The device according to claim 10, wherein the converting the first channel matrix into codeword information and reconstructing the channel matrix by using the codeword information to obtain the second channel matrix is specifically:
    converting a real part and an imaginary part of the first channel matrix into two real vectors;
    converting the two real vectors into the codeword information; and
    reconstructing the channel matrix by using the codeword information to obtain the second channel matrix.
  12. The device according to claim 11, wherein the reconstructing the channel matrix by using the codeword information to obtain the second channel matrix is specifically:
    extracting a second signal and first white noise from the codeword information, wherein the second signal is a transmitted signal; and
    reconstructing the channel matrix according to the second signal and the first white noise to obtain the second channel matrix.
  13. The device according to any one of claims 10 to 12, wherein before the converting the first channel matrix into codeword information and reconstructing the channel matrix by using the codeword information to obtain the second channel matrix, the processor is further configured to generate the first channel matrix according to a third signal, a fourth signal, and second white noise, wherein the third signal is a signal sent by a network device, the fourth signal is a signal obtained when the terminal receives the third signal, and the second white noise is white noise fed back by the terminal.
  14. The device according to any one of claims 10 to 13, wherein the channel estimation model comprises an encoding network and a decoding network, wherein the encoding network comprises a first convolutional layer and a first fully connected layer, and the decoding network comprises a second fully connected layer, a second convolutional layer, a max pooling layer, a first insertion layer, a second insertion layer, a deep connection module, a global pooling layer, and a third convolutional layer; the channel matrix is used as an input of the first convolutional layer of the encoding network, an output of the first convolutional layer is used as an input of the first fully connected layer, an output of the first fully connected layer is used as an input of the second fully connected layer of the decoding network, an output of the second fully connected layer is used as an input of the max pooling layer, an output of the max pooling layer is used as an input of the first insertion layer, an output of the first insertion layer is used as an input of the second insertion layer, an output of the second insertion layer is used as an input of the deep connection module, an output of the deep connection module is used as an input of the global pooling layer, an output of the global pooling layer is used as an input of the third convolutional layer, and the third convolutional layer is used to generate the second channel matrix.
  15. The device according to any one of claims 10 to 13, wherein the channel estimation model comprises an encoding network and a decoding network, wherein the encoding network comprises a first convolutional layer and a first fully connected layer, and the decoding network comprises a second fully connected layer, a second convolutional layer, a max pooling layer, a third convolutional layer, a fourth convolutional layer, a fifth convolutional layer, a sixth convolutional layer, and a seventh convolutional layer; the channel matrix is used as an input of the first convolutional layer of the encoding network, an output of the first convolutional layer is used as an input of the first fully connected layer, an output of the first fully connected layer is used as an input of the second fully connected layer of the decoding network, an output of the second fully connected layer is used as an input of the max pooling layer, an output of the max pooling layer is used as an input of the third convolutional layer, an output of the third convolutional layer is used as an input of the fourth convolutional layer, an output of the fourth convolutional layer is used as an input of the fifth convolutional layer, an output of the fifth convolutional layer is used as an input of the sixth convolutional layer, an output of the sixth convolutional layer is used as an input of the seventh convolutional layer, and the seventh convolutional layer is used to generate the second channel matrix.
  16. The device according to any one of claims 10 to 13, wherein the channel estimation model comprises an encoding network and a decoding network, wherein the encoding network comprises a first convolutional layer and a first fully connected layer, and the decoding network comprises a second fully connected layer, a residual network, and a third fully connected layer; the channel matrix is used as an input of the first convolutional layer of the encoding network, an output of the first convolutional layer is used as an input of the first fully connected layer, an output of the first fully connected layer is used as an input of the second fully connected layer of the decoding network, an output of the second fully connected layer is used as an input of the residual network, an output of the residual network is used as an input of the third fully connected layer, and the third fully connected layer is used to generate the second channel matrix.
  17. The device according to any one of claims 10 to 16, wherein the channel estimation model adopts a compressed sensing mechanism.
  18. The device according to any one of claims 14 to 16, wherein the first convolutional layer is used to convert the real part and the imaginary part of the first channel matrix into two real vectors, and the first fully connected layer is used to convert the two real vectors into the codeword information.
  19. A computer-readable storage medium, wherein the computer-readable storage medium stores program instructions, and when the program instructions are run on a processor, the method according to any one of claims 1 to 9 is implemented.
  20. A computer program product, wherein when the computer program product runs on a processor, the method according to any one of claims 1 to 9 is implemented.
PCT/CN2019/085230 2019-04-30 2019-04-30 Channel estimation model training method and device WO2020220278A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201980095867.0A CN113748614B (en) 2019-04-30 2019-04-30 Channel estimation model training method and device
PCT/CN2019/085230 WO2020220278A1 (en) 2019-04-30 2019-04-30 Channel estimation model training method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/085230 WO2020220278A1 (en) 2019-04-30 2019-04-30 Channel estimation model training method and device

Publications (1)

Publication Number Publication Date
WO2020220278A1 true WO2020220278A1 (en) 2020-11-05

Family

ID=73029551

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/085230 WO2020220278A1 (en) 2019-04-30 2019-04-30 Channel estimation model training method and device

Country Status (2)

Country Link
CN (1) CN113748614B (en)
WO (1) WO2020220278A1 (en)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116961707A (en) * 2022-04-15 2023-10-27 展讯半导体(南京)有限公司 Information feedback method and related device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9882675B2 (en) * 2013-08-16 2018-01-30 Origin Wireless, Inc. Time-reversal wireless systems having asymmetric architecture

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9015093B1 (en) * 2010-10-26 2015-04-21 Michael Lamport Commons Intelligent control with hierarchical stacked neural networks
CN102868422A (en) * 2012-09-07 2013-01-09 天津理工大学 MMSE-BDFE (Minimum Mean Square Error-Blind Decision Feedback Equalizer) multi-user detection system based on neural network, and working method of MMSE-BDFE multi-user detection system
CN107743103A (en) * 2017-10-26 2018-02-27 北京交通大学 The multinode access detection of MMTC systems based on deep learning and channel estimation methods
CN109672464A (en) * 2018-12-13 2019-04-23 西安电子科技大学 Extensive mimo channel state information feedback method based on FCFNN

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112910806A (en) * 2021-01-19 2021-06-04 北京理工大学 Joint channel estimation and user activation detection method based on deep neural network
WO2022236788A1 (en) * 2021-05-13 2022-11-17 Oppo广东移动通信有限公司 Communication method and device, and storage medium
WO2022267633A1 (en) * 2021-06-22 2022-12-29 华为技术有限公司 Information transmission method and apparatus
CN113890795A (en) * 2021-09-09 2022-01-04 广州杰赛科技股份有限公司 Method, device and medium for constructing large-scale MIMO channel estimation model
CN113890795B (en) * 2021-09-09 2023-06-23 广州杰赛科技股份有限公司 Method, device and medium for constructing large-scale MIMO channel estimation model
CN114363129A (en) * 2022-01-10 2022-04-15 嘉兴学院 Wireless fading channel estimation method based on deep dense residual error network
WO2023137641A1 (en) * 2022-01-19 2023-07-27 Oppo广东移动通信有限公司 Channel estimation method, channel estimation model training method, and communication device
CN114422059B (en) * 2022-01-24 2023-01-24 清华大学 Channel prediction method, device, electronic equipment and storage medium
CN114422059A (en) * 2022-01-24 2022-04-29 清华大学 Channel prediction method, device, electronic equipment and storage medium
CN114844749A (en) * 2022-04-26 2022-08-02 电子科技大学 Optical fiber channel estimation method based on neural network
CN115001916A (en) * 2022-06-06 2022-09-02 北京理工大学 MCS identification method based on deep learning and blind identification
CN115001916B (en) * 2022-06-06 2023-08-01 北京理工大学 MCS (modulation and coding scheme) identification method based on deep learning and blind identification
WO2024016936A1 (en) * 2022-07-18 2024-01-25 中兴通讯股份有限公司 Method for determining channel state information, electronic device, and storage medium
CN115913423A (en) * 2022-10-31 2023-04-04 华中科技大学 Multi-step prediction model training method and prediction method for non-stationary large-scale MIMO channel
CN116319195A (en) * 2023-04-04 2023-06-23 上海交通大学 Millimeter wave and terahertz channel estimation method based on pruned convolutional neural network
CN116319195B (en) * 2023-04-04 2023-10-20 上海交通大学 Millimeter wave and terahertz channel estimation method based on pruned convolutional neural network
CN116962121A (en) * 2023-07-27 2023-10-27 广东工业大学 LoRa system signal detection method for deep learning joint channel estimation
CN116962121B (en) * 2023-07-27 2024-02-27 广东工业大学 LoRa system signal detection method for deep learning joint channel estimation

Also Published As

Publication number Publication date
CN113748614A (en) 2021-12-03
CN113748614B (en) 2023-06-20

Similar Documents

Publication Publication Date Title
WO2020220278A1 (en) Channel estimation model training method and device
CN112737985B (en) Large-scale MIMO channel joint estimation and feedback method based on deep learning
CN111698182B (en) Time-frequency blocking sparse channel estimation method based on compressed sensing
CN111555992B (en) Large-scale multi-antenna channel estimation method based on deep convolutional neural network
CN110289898A (en) A kind of channel feedback method based on the perception of 1 bit compression in extensive mimo system
CN113872652B (en) CSI feedback method based on 3D MIMO time-varying system
CN107171984A (en) A kind of asynchronous multi-carrier system frequency domain channel estimation method
CN111555781A (en) Large-scale MIMO channel state information compression and reconstruction method based on deep learning attention mechanism
Shen et al. Deep learning for super-resolution channel estimation in reconfigurable intelligent surface aided systems
CN114884775A (en) Deep learning-based large-scale MIMO system channel estimation method
CN109391315B (en) Data model dual-drive MIMO receiver
CN109802901B (en) 3D MIMO channel estimation method and system based on angle of arrival measurement
CN107231177B (en) Efficient CR detection method and architecture based on large-scale MIMO
KR100934170B1 (en) Channel Estimation Apparatus and Method in Multi-antenna Wireless Communication System
GB2447675A (en) Incremental signal processing for subcarriers in a channel of a communication system
WO2017114053A1 (en) Method and apparatus for signal processing
CN107733487B (en) Signal detection method and device for large-scale multi-input multi-output system
CN101447969A (en) Channel estimation method of multi-band orthogonal frequency division multiplexing ultra wide band system
CN109039402B (en) MIMO topological interference alignment method based on user compression
CN115022134B (en) Millimeter wave large-scale MIMO system channel estimation method and system based on non-iterative reconstruction network
CN114430590B (en) Wireless transmission method for realizing uplink large-scale URLLC
CN116192209A (en) Gradient uploading method for air computing federal learning under MIMO channel
CN114826832A (en) Channel estimation method, neural network training method, device and equipment
CN112054826A (en) Single-user low-complexity hybrid precoding method based on intermediate channel
CN112019461A (en) Channel prediction method, wireless communication system, and storage device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19926789

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19926789

Country of ref document: EP

Kind code of ref document: A1