WO2023011472A1 - Channel state information feedback method and receiving method, terminal, base station, and computer-readable storage medium - Google Patents

Channel state information feedback method and receiving method, terminal, base station, and computer-readable storage medium

Info

Publication number
WO2023011472A1
WO2023011472A1 · PCT/CN2022/109704 · CN2022109704W
Authority
WO
WIPO (PCT)
Prior art keywords
channel information
channel
information
dimension
state information
Prior art date
Application number
PCT/CN2022/109704
Other languages
English (en)
French (fr)
Inventor
肖华华
吴昊
鲁照华
李伦
高争光
李夏
Original Assignee
ZTE Corporation (中兴通讯股份有限公司)
Priority date
Filing date
Publication date
Application filed by ZTE Corporation (中兴通讯股份有限公司)
Priority to EP22852184.5A (published as EP4362365A1)
Publication of WO2023011472A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04B TRANSMISSION
    • H04B7/00 Radio transmission systems, i.e. using radiation field
    • H04B7/02 Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas
    • H04B7/04 Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas
    • H04B7/06 Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas at the transmitting station
    • H04B7/0613 Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas at the transmitting station using simultaneous transmission
    • H04B7/0615 Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas at the transmitting station using simultaneous transmission of weighted versions of same signal
    • H04B7/0619 Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas at the transmitting station using simultaneous transmission of weighted versions of same signal using feedback from receiving side
    • H04B7/0658 Feedback reduction
    • H04B7/0621 Feedback content
    • H04B7/0626 Channel coefficients, e.g. channel state information [CSI]
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L1/00 Arrangements for detecting or preventing errors in the information received
    • H04L1/0001 Systems modifying transmission characteristics according to link quality, e.g. power backoff
    • H04L1/0023 Systems modifying transmission characteristics according to link quality, e.g. power backoff characterised by the signalling
    • H04L1/0026 Transmission of channel quality indication

Definitions

  • the present disclosure relates to the field of communication technologies, and in particular to a method for feeding back channel state information, a method for receiving channel state information, a terminal, a base station, and a computer-readable storage medium.
  • Multi-antenna technology is one of the important means of improving wireless communication performance, and is widely used in standards such as Long Term Evolution (LTE), LTE-Advanced (LTE-A), and New Radio (NR).
  • CSI: Channel State Information
  • the channel state information includes, but is not limited to, the rank of the channel, the precoding matrix matched to the channel, and the like.
  • the more accurate the channel state information, the better the performance that the multi-antenna technology can deliver; however, the more accurate the channel state information fed back, the greater the feedback overhead required.
  • for example, the number of bits fed back by a Type II codebook in NR can be as high as several hundred.
  • an embodiment of the present disclosure provides a method for feeding back channel state information, including:
  • the channel state information is fed back.
  • an embodiment of the present disclosure provides a method for receiving channel state information, including:
  • an embodiment of the present disclosure provides a terminal, including:
  • a memory on which at least one computer program is stored, where, when the at least one computer program is executed by the at least one processor, the at least one processor implements the channel state information feedback method of the first aspect of the embodiments of the present disclosure;
  • at least one I/O interface, connected between the processor and the memory, and configured to implement information exchange between the processor and the memory.
  • an embodiment of the present disclosure provides a base station, including:
  • a memory on which at least one computer program is stored, where, when the at least one computer program is executed by the at least one processor, the at least one processor implements the channel state information receiving method of the second aspect of the embodiments of the present disclosure;
  • at least one I/O interface, connected between the processor and the memory, and configured to implement information exchange between the processor and the memory.
  • an embodiment of the present disclosure provides a computer-readable storage medium, on which a computer program is stored, and when the computer program is executed by a processor, at least one of the following methods is implemented:
  • FIG. 1 is a flowchart of a channel state information feedback method in an embodiment of the present disclosure
  • FIG. 2 is a flowchart of some steps in a channel state information feedback method in an embodiment of the present disclosure
  • FIG. 3 is a flowchart of some steps in a channel state information feedback method in an embodiment of the present disclosure
  • FIG. 4 is a flow chart of some steps in a channel state information feedback method in an embodiment of the present disclosure
  • FIG. 5 is a flowchart of some steps in a channel state information feedback method in an embodiment of the present disclosure
  • FIG. 6 is a flowchart of some steps in a channel state information feedback method in an embodiment of the present disclosure
  • FIG. 7 is a flow chart of some steps in a channel state information feedback method in an embodiment of the present disclosure.
  • FIG. 8 is a flowchart of a method for receiving channel state information in an embodiment of the present disclosure.
  • FIG. 9 is a block diagram of a terminal in an embodiment of the present disclosure.
  • FIG. 10 is a block diagram of a base station in an embodiment of the present disclosure.
  • FIG. 11 is a block diagram of a computer-readable storage medium in an embodiment of the present disclosure.
  • FIG. 12 is a schematic diagram of an architecture for feeding back channel state information in an embodiment of the present disclosure.
  • FIG. 13 is a schematic diagram of an architecture for feeding back channel state information in an embodiment of the present disclosure.
  • FIG. 14 is a schematic diagram of an architecture for feeding back channel state information in an embodiment of the present disclosure.
  • FIG. 15 is a schematic diagram of a sampling point location in an embodiment of the present disclosure.
  • AI: Artificial Intelligence
  • in the related art, there is a fully connected layer in the neural network used to feed back CSI, and the fully connected layer has fixed input and output dimensions. Different scenarios, such as different numbers of antennas, different numbers of transmitted data streams, or different bandwidths, therefore need to correspond to different neural networks, which increases the overhead and complexity of the wireless communication system.
  • an embodiment of the present disclosure provides a channel state information feedback method, including steps S110 to S130.
  • step S110 neural network parameters are determined, and an encoder is constructed according to the neural network parameters.
  • step S120 the encoder is used to compress the channel information to obtain channel state information.
  • step S130 the channel state information is fed back.
  • an encoder in a terminal and a decoder in a base station are constructed based on a neural network.
  • the encoder in the terminal corresponds to the decoder in the base station, that is, when the neural network parameters of any one of the terminal and the base station are determined, the neural network parameters of the other are also determined.
  • the encoder in the terminal and the decoder in the base station together form an autoencoder.
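As a toy illustration of this pairing (the dimensions and the linear mapping below are illustrative assumptions, not the patent's actual network), the terminal-side encoder and the matched base-station-side decoder can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 32 real-valued channel coefficients compressed to 8.
N_IN, N_CODE = 32, 8

# A toy linear "autoencoder": the encoder matrix lives in the terminal,
# and the matched decoder (its pseudo-inverse) lives in the base station.
W_enc = rng.standard_normal((N_CODE, N_IN))
W_dec = np.linalg.pinv(W_enc)  # decoder matched to the encoder

def encode(h):
    """Terminal side: compress channel information into channel state information."""
    return W_enc @ h

def decode(csi):
    """Base station side: decompress CSI into an estimate of the channel."""
    return W_dec @ csi

h = rng.standard_normal(N_IN)
csi = encode(h)
h_hat = decode(csi)
assert csi.shape == (N_CODE,)
assert h_hat.shape == (N_IN,)
```

Once the parameters of either side are fixed, the other side's parameters follow, which is what makes the pair an autoencoder.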
  • the neural network parameters determined in step S110 are one of multiple sets of autoencoder neural network parameters obtained through offline training or through a combination of offline and online training.
  • the neural network parameters determined in the terminal correspond to the neural network parameters in the base station.
  • the channel information is obtained by the terminal by receiving a reference signal, such as a channel state information-reference signal (CSI-RS, Channel State Information-Reference Signal).
  • channel information can be obtained by decoding the channel state information fed back by the terminal through a decoder corresponding to the encoder in the terminal.
  • the terminal may include a mobile phone, a tablet, a notebook computer, and various kinds of mobile production equipment in a factory.
  • the present disclosure makes no special limitation on this.
  • the base station may include various macro base stations, micro base stations, home base stations, and pico base stations. The present disclosure does not specifically limit this.
  • the terminal or the base station can determine the neural network parameters according to at least one factor such as the channel scenario, the angle spread of the channel, the delay spread, and the Doppler spread, and the terminal and the base station can exchange neural network parameter information so as to construct corresponding encoders and decoders from corresponding neural network parameters. Channel state information can thus be fed back in different scenarios, such as different numbers of antennas, different numbers of transmitted data streams, or different bandwidths, which reduces the overhead and complexity of wireless communication.
  • step S120 includes steps S121 and S122.
  • step S121 the channel information is preprocessed, so that the dimension of the preprocessed channel information matches the dimension of the input data of the encoder.
  • step S122 the encoder is used to compress the preprocessed channel information to obtain the channel state information.
  • the dimension of the input data of the encoder is also determined accordingly.
  • the encoder can only process channel information for a specific number of antennas or a specific number of transmitted data streams.
  • matching the dimension of the preprocessed channel information with the dimension of the input data of the encoder means transforming, through preprocessing, the channel information into the data dimension expected by the input of the encoder.
  • the channel information may be time-domain channel information or frequency-domain channel information.
  • the present disclosure makes no special limitation on this.
  • the preprocessing the channel information so that the dimension of the preprocessed channel information matches the dimension of the input data of the encoder includes:
  • the channel information is divided into K groups to obtain K groups of channel information as the preprocessed channel information, the dimension of each group of channel information matches the dimension of the input data of the encoder, and K is a positive integer.
  • step S121 includes steps S1211 and S1212.
  • step S1211 the number K of groups of channel information is determined according to channel parameters.
  • in step S1212, the channel information is divided into K groups according to the channel parameters to obtain K groups of channel information as the preprocessed channel information, where the dimension of each group of channel information matches the input data dimension of the encoder and K is a positive integer. The channel parameters include at least one of the following: the number of antenna polarization directions, the number of antennas, the number of data streams, the number of time-domain sampling point groups, and the number of frequency-domain granularity groups.
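As an illustrative sketch (the matrix sizes and the encoder's expected input dimension below are assumptions), dividing a channel matrix into K groups so that each group matches the encoder input dimension might look like:

```python
import numpy as np

# Hypothetical channel matrix: Nt = 8 antennas, Ns = 4 sampling-point groups.
Nt, Ns = 8, 4
H = np.arange(Nt * Ns, dtype=float).reshape(Nt, Ns)

# Suppose the encoder accepts inputs of shape (2, Ns): split the Nt antenna
# rows into K groups so each group matches the encoder's input dimension.
ENC_ROWS = 2
K = Nt // ENC_ROWS
groups = np.split(H, K, axis=0)

assert len(groups) == K
assert all(g.shape == (ENC_ROWS, Ns) for g in groups)
```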
  • the number of antennas may be the number of receiving antennas and/or the number of transmitting antennas.
  • the present disclosure makes no special limitation on this.
  • the present disclosure makes no special limitation on how to divide the channel information into K groups according to channel parameters.
  • dividing the channel information into K groups according to the channel parameters includes at least one of the following:
  • the channel information is grouped according to the number of frequency-domain granularity groups.
  • the channel information may be divided into K groups according to any one of the channel parameters, namely the number of antenna polarization directions, the number of antennas, the number of data streams, the number of time-domain sampling point groups, or the number of frequency-domain granularity groups; it may also be divided into K groups according to multiple channel parameters.
  • the present disclosure makes no special limitation on this.
  • when the channel information is divided into K groups according to multiple channel parameters, for example according to both the number of antenna polarization directions and the number of antennas, it may first be grouped according to the number of antenna polarization directions and then further grouped according to the number of antennas to finally obtain K groups of channel information; alternatively, it may first be grouped according to the number of antennas and then further grouped according to the number of antenna polarization directions to finally obtain K groups of channel information.
  • the channel information is H, which is a matrix of Nt*Ns*C, where Nt represents the number of antennas, Ns represents the number of sampling point groups, and C represents the channel dimension.
  • the channel information H may be a matrix obtained by performing a fast Fourier transform on the acquired channel information to obtain a time-domain/frequency-domain channel, then truncating at points in the time/frequency domain, and normalizing.
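A minimal sketch of such preprocessing, with illustrative dimensions and a unit-peak normalization chosen purely as assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical frequency-domain channel: Nt antennas x Nf subcarriers.
Nt, Nf = 4, 64
H_freq = rng.standard_normal((Nt, Nf)) + 1j * rng.standard_normal((Nt, Nf))

# IFFT to the delay (time) domain, truncate to the first Ns taps where most
# of the energy concentrates, then normalize to unit peak magnitude.
Ns = 16
h_time = np.fft.ifft(H_freq, axis=1)
h_trunc = h_time[:, :Ns]
h_norm = h_trunc / np.max(np.abs(h_trunc))

assert h_norm.shape == (Nt, Ns)
assert np.isclose(np.max(np.abs(h_norm)), 1.0)
```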
  • the grouping the channel information according to the number of antenna polarization directions includes:
  • Channel information corresponding to the same polarization direction is grouped into the same group of channel information.
  • the grouping the channel information according to the number of antenna polarization directions further includes:
  • before the channel information corresponding to the same polarization direction is grouped into the same group of channel information, the channel information is sorted according to the antenna polarization direction.
  • when the channel information is sorted according to the antenna polarization direction, it may be sorted row by row in the channel information matrix, or it may be sorted by block.
  • the present disclosure does not specifically limit this.
  • one polarization direction corresponds to one block, and the corresponding channels in one block can be continuous or discontinuous.
  • the present disclosure does not specifically limit this.
  • when sorting by block, the channel information corresponding to polarization direction 1 is ranked, as a whole, before the channel information corresponding to polarization direction 2; the channels corresponding to polarization direction 1 are not necessarily continuous, and the channels corresponding to polarization direction 2 are not necessarily continuous.
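A small sketch of block sorting and grouping by polarization direction (the antenna-to-polarization mapping and row values are assumed examples):

```python
import numpy as np

# Hypothetical: 4 antennas, rows 0 and 2 belong to polarization direction 1,
# rows 1 and 3 to polarization direction 2; sort so each polarization's
# channel rows form one contiguous block, then group by polarization.
H = np.array([[10., 11.],   # pol 1
              [20., 21.],   # pol 2
              [12., 13.],   # pol 1
              [22., 23.]])  # pol 2
pol = np.array([1, 2, 1, 2])

order = np.argsort(pol, kind="stable")   # pol-1 rows first, then pol-2 rows
H_sorted = H[order]
groups = [H_sorted[pol[order] == p] for p in (1, 2)]

assert groups[0].shape == (2, 2) and groups[1].shape == (2, 2)
assert (groups[0] == np.array([[10., 11.], [12., 13.]])).all()
```

The same pattern applies to grouping by antenna group, data stream, sampling point group, or frequency-domain granularity group, with `pol` replaced by the corresponding index array.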
  • the grouping the channel information according to the number of antennas includes:
  • the antenna group includes a transmitting antenna group and/or a receiving antenna group.
  • the grouping the channel information according to the number of antennas further includes:
  • before the channel information corresponding to the same antenna group is divided into the same group of channel information, the channel information is sorted according to the indexes of the antennas.
  • the antenna includes a sending antenna and/or a receiving antenna.
  • when the channel information is sorted according to the antenna index, it may be sorted row by row in the channel information matrix, or it may be sorted by block.
  • the present disclosure makes no special limitation on this.
  • one antenna group corresponds to one block, and the corresponding channels in one block can be continuous or discontinuous.
  • the present disclosure does not specifically limit this.
  • when sorting by block, the channel information corresponding to antenna group i is ranked, as a whole, before the channel information corresponding to antenna group j, i < j; the channels corresponding to antenna group i are not necessarily continuous, and the channels corresponding to antenna group j are not necessarily continuous.
  • the grouping the channel information according to the number of data streams includes:
  • the channel information corresponding to the same data stream is divided into the same group of channel information.
  • the grouping the channel information according to the number of data streams further includes:
  • before the channel information corresponding to the same data stream is divided into the same group of channel information, the channel information is sorted according to the data stream index.
  • when the channel information is sorted according to the data stream index, the sorting may be performed row by row in the channel information matrix, or by block.
  • the present disclosure makes no special limitation on this.
  • when sorting by block, one data stream corresponds to one block, and the corresponding channels in one block can be continuous or discontinuous.
  • the present disclosure does not specifically limit this.
  • when sorting by block, the channel information corresponding to data stream i is ranked, as a whole, before the channel information corresponding to data stream j, i < j; the channels corresponding to data stream i are not necessarily continuous, and the channels corresponding to data stream j are not necessarily continuous.
  • the grouping the channel information according to the number of time-domain sampling point groups includes:
  • the channel information corresponding to the same sampling point group is divided into the same group of channel information.
  • the grouping the channel information according to the number of time-domain sampling point groups further includes:
  • before the channel information corresponding to the same sampling point group is divided into the same group of channel information, the channel information is sorted according to the sampling point size index.
  • when the channel information is sorted according to the sampling point size index, it may be sorted row by row in the channel information matrix, or it may be sorted by block.
  • the present disclosure makes no special limitation on this.
  • one sampling point group corresponds to one block, and the corresponding channels in one block can be continuous or discontinuous.
  • the present disclosure does not specifically limit this.
  • when sorting by block, the channel information corresponding to sampling point group i is ranked, as a whole, before the channel information corresponding to sampling point group j, i < j; the channels corresponding to sampling point group i are not necessarily continuous, and the channels corresponding to sampling point group j are not necessarily continuous.
  • the grouping the channel information according to the frequency domain granularity group number includes:
  • the channel information corresponding to the same frequency domain granularity group is divided into the same group of channel information.
  • the grouping the channel information according to the number of frequency-domain granularity groups further includes:
  • before the channel information corresponding to the same frequency domain granularity group is grouped into the same group of channel information, the channel information is sorted according to the frequency domain granularity index.
  • when the channel information is sorted according to the frequency-domain granularity index, the sorting may be performed row by row in the channel information matrix, or by block.
  • the present disclosure makes no special limitation on this.
  • when sorting by block, one frequency domain granularity group corresponds to one block, and the corresponding channels in one block can be continuous or discontinuous. The present disclosure does not specifically limit this.
  • when sorting by block, the channel information corresponding to frequency domain granularity group i is ranked, as a whole, before the channel information corresponding to frequency domain granularity group j, i < j; the channels corresponding to frequency domain granularity group i are not necessarily continuous, and the channels corresponding to frequency domain granularity group j are not necessarily continuous.
  • frequency domain granularity may be in units of subcarriers, physical resource blocks, or subbands; one physical resource block may include multiple subcarriers, one subband may include multiple physical resource blocks, and one or more frequency-domain granularities form one frequency-domain granularity group.
  • using the encoder to compress the preprocessed channel information to obtain the channel state information includes:
  • Each group of channel information in the K groups of channel information is respectively compressed by the encoder to obtain K groups of channel state information.
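A sketch of per-group compression with a toy linear encoder (the group shape and code size are assumptions chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical encoder: compresses one group of shape (2, 4) to 3 values.
W = rng.standard_normal((3, 2 * 4))

def encode_group(g):
    """Flatten one group of channel information and compress it."""
    return W @ g.reshape(-1)

# K = 3 groups of channel information, compressed group by group to obtain
# K groups of channel state information.
groups = [rng.standard_normal((2, 4)) for _ in range(3)]
csi_groups = [encode_group(g) for g in groups]

assert len(csi_groups) == 3
assert all(c.shape == (3,) for c in csi_groups)
```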
  • the feedback method of the channel state information further includes:
  • the feedback method of the channel state information further includes:
  • the K groups of channel state information are jointly encoded according to the group indexes corresponding to the K groups of channel state information.
  • Pre-processing may also include down-sampling, so that the pre-processed channel information matches the input data dimension of the encoder.
  • step S121 includes step S1213.
  • step S1213 the channel information is down-sampled to obtain the down-sampled channel information as the preprocessed channel information; the dimension of the down-sampled channel information matches the input data dimension of the encoder.
  • the present disclosure makes no special limitation on how to down-sample the channel information.
  • downsampling the channel information includes at least one of the following:
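The enumerated down-sampling options do not survive in this text; one common form is uniform decimation, sketched below with an assumed stride:

```python
import numpy as np

# Hypothetical: channel info with 16 frequency-domain points, while the
# encoder expects 8.
h = np.arange(16, dtype=float)

STRIDE = 2          # assumed down-sampling factor
h_ds = h[::STRIDE]  # keep every STRIDE-th point

assert h_ds.shape == (8,)
assert (h_ds == np.arange(0, 16, 2)).all()
```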
  • the preprocessing may also include zero padding, so that the preprocessed channel information matches the dimension of the input data of the encoder.
  • step S121 includes step S1214.
  • step S1214 the channel information is zero-padded to obtain the zero-padded channel information as the preprocessed channel information; the dimension of the zero-padded channel information matches the input data dimension of the encoder.
  • the present disclosure makes no special limitation on how to pad the channel information with zeros.
  • padding the channel information with zeros includes at least one of the following:
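The enumerated zero-padding options do not survive in this text; one plausible form is appending zeros at the tail until the encoder's input dimension is reached (the target dimension here is an assumption):

```python
import numpy as np

# Hypothetical: channel info has 10 points, encoder expects 16; pad the tail.
h = np.ones(10)
target = 16
h_padded = np.pad(h, (0, target - h.size))  # append zeros at the tail

assert h_padded.shape == (16,)
assert (h_padded[10:] == 0).all()
```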
  • step S110 includes step S111.
  • step S111 one set of the at least one set of candidate neural network parameters of a preconfigured autoencoder is selected as the neural network parameters according to channel factor information; the autoencoder includes a pair of an encoder and a decoder.
  • step S110 includes steps S112 and S113.
  • step S112 neural network parameter information is received.
  • step S113 the neural network parameters are determined according to the neural network parameter information.
  • in some embodiments, after determining the neural network parameters according to the channel conditions, the base station directly sends them to the terminal; in this case, the neural network parameter information received by the terminal in step S112 is the neural network parameters themselves.
  • the terminal and the base station respectively save the neural network parameters of multiple pairs of encoders and decoders.
  • the base station sends the index of the encoder-decoder pair to the terminal through higher-layer signaling according to the channel conditions; the neural network parameter information received by the terminal in step S112 is the index of the encoder-decoder pair, and after receiving the index the terminal knows which set of encoder neural network parameters to use.
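A schematic of this index-based selection (the parameter table contents below are purely illustrative, not taken from the patent):

```python
# Hypothetical: terminal and base station both hold the same table of
# candidate neural-network parameter sets; the base station signals an index.
candidate_params = [
    {"conv_layers": 2, "kernel": 3},
    {"conv_layers": 3, "kernel": 5},
    {"conv_layers": 4, "kernel": 3},
]

def select_params(signaled_index):
    """Terminal side: look up the parameter set named by the received index."""
    return candidate_params[signaled_index]

params = select_params(1)
assert params["conv_layers"] == 3 and params["kernel"] == 5
```

Because both sides hold the same table, signaling a small index is far cheaper than transmitting the parameters themselves.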
  • the neural network parameters include at least one of the following: the size of the convolution kernel of a convolutional layer, the number of convolutional layers, the stride of a convolutional layer, the weights of a convolutional layer, the bias of a convolutional layer, and the activation function of a convolutional layer.
  • the encoder includes a first processing layer and a compression layer;
  • the first processing layer includes a plurality of network layers, each of which includes a plurality of nodes and at least one of a network layer weight, an activation function, and a network layer bias; the first processing layer is configured to extract features of the channel information;
  • the compression layer is configured to compress the features of the channel information to obtain the channel state information.
  • the compression layer includes any one of a fully connected layer, a set of convolutional layers, and a recurrent network, and the set of convolutional layers includes at least one convolutional layer.
  • an embodiment of the present disclosure provides a method for receiving channel state information, including steps S210 to S230.
  • step S210 neural network parameters are determined, and a decoder is constructed according to the neural network parameters.
  • step S220 channel state information is received.
  • step S230 the decoder is used to decompress the channel state information to obtain second channel information.
  • an encoder in a terminal and a decoder in a base station are constructed based on a neural network.
  • the encoder in the terminal corresponds to the decoder in the base station, that is, when the neural network parameters of any one of the terminal and the base station are determined, the neural network parameters of the other are also determined.
  • the encoder in the terminal and the decoder in the base station together form an autoencoder.
  • the neural network parameters determined in step S210 are one of multiple sets of autoencoder neural network parameters obtained through offline training or through a combination of offline and online training.
  • the neural network parameters determined in the terminal correspond to the neural network parameters in the base station.
  • the terminal obtains channel information by receiving a reference signal, such as a channel state information-reference signal (CSI-RS, Channel State Information-Reference Signal).
  • the second channel information can be obtained by decoding the channel state information fed back by the terminal through a decoder corresponding to the encoder in the terminal.
  • the second channel information acquired by the base station is an estimated value of the channel information acquired by the terminal.
  • the terminal may include a mobile phone, a tablet, a notebook computer, and various kinds of mobile production equipment in a factory.
  • the present disclosure makes no special limitation on this.
  • the base station may include various macro base stations, micro base stations, home base stations, and pico base stations. The present disclosure does not specifically limit this.
  • the terminal or the base station can determine the neural network parameters according to at least one factor such as the channel scenario, the angle spread of the channel, the delay spread, and the Doppler spread, and the terminal and the base station can exchange neural network parameter information so as to construct corresponding encoders and decoders from corresponding neural network parameters. Channel state information can thus be fed back in different scenarios, such as different numbers of antennas, different numbers of transmitted data streams, or different bandwidths, which reduces the overhead and complexity of wireless communication.
  • the channel state information includes K groups of channel state information obtained by compressing each of K groups of channel information; decompressing the channel state information by using the decoder to obtain the second channel information includes:
  • K is a positive integer.
  • the obtaining the second channel information according to the K sets of channel information includes:
  • the channel parameters include at least one of the following: the number of antenna polarization directions, the number of antennas, the number of data streams, the number of time-domain sampling point groups, and the number of frequency-domain granularity groups.
  • the number of antennas may be the number of receiving antennas and/or the number of sending antennas.
  • the present disclosure makes no special limitation on this.
  • the terminal performs joint encoding on the K sets of channel state information according to the group index corresponding to the K sets of channel state information. After receiving the jointly coded channel state information, the base station needs to perform joint decoding.
  • the method for receiving channel state information further includes:
  • the terminal down-samples the channel information to obtain the down-sampled channel information as preprocessed channel information, and uses an encoder to compress the preprocessed channel information to obtain channel state information.
  • after decompressing the channel state information, the base station needs to further obtain the second channel information by methods such as upsampling, interpolation, and deconvolution.
  • using the decoder to decompress the channel state information to obtain the second channel information includes:
  • upsampling is performed in the base station in a manner corresponding to downsampling in the terminal.
  • the upsampling according to the downsampled channel information to obtain the second channel information includes at least one of the following:
  • the terminal pads the channel information with zeros to obtain the zero-padded channel information as preprocessed channel information, and uses an encoder to compress the preprocessed channel information to obtain channel state information.
  • after decompressing the channel state information, the base station needs to further perform zero removal to obtain the second channel information.
  • using the decoder to decompress the channel state information to obtain the second channel information includes:
  • zero removal is performed in the base station in a manner corresponding to zero padding in the terminal.
  • the performing zero removal according to the zero-padded channel information to obtain the second channel information includes at least one of the following:
  • the determining neural network parameters, and constructing a decoder according to the neural network parameters include:
  • one set of neural network parameters is selected, according to channel factor information, from at least one preconfigured set of candidate neural network parameters of an autoencoder; the autoencoder includes a pair of an encoder and a decoder.
  • the determining the neural network parameters, and constructing the encoder according to the neural network parameters include:
  • the neural network parameters are determined according to the neural network parameter information.
  • the terminal determines the neural network parameters according to at least one kind of channel factor information such as the channel scene, the channel angle spread, the delay spread, and the Doppler spread, and sends the neural network parameters directly to the base station; in this case, the neural network parameter information received by the base station is the neural network parameters themselves.
  • the terminal and the base station respectively save the neural network parameters of multiple pairs of encoders and decoders.
  • the terminal determines the neural network parameters according to at least one kind of channel factor information, and sends the index of the encoder and decoder pair to the base station through high-layer signaling; after receiving the index, the base station knows which set of decoder neural network parameters to use.
  • the neural network parameters include at least one of: the size of the convolution kernel of a convolutional layer, the number of convolutional layers, the stride of a convolutional layer, the weights of a convolutional layer, the bias of a convolutional layer, and the activation function of a convolutional layer.
  • the decoder includes a second processing layer and a decompression layer; the decompression layer is configured to decompress the channel state information;
  • the second processing layer includes a plurality of network layers, each of which includes a plurality of nodes and at least one of a network layer weight, an activation function, and a network layer bias; the second processing layer is configured to extract features of the decompressed channel state information to obtain the second channel information.
  • the decompression layer includes any one of a fully connected layer, a deconvolution layer group, and a recurrent network.
  • an embodiment of the present disclosure provides a terminal, including:
  • At least one processor 101 (only one is shown in FIG. 9 );
  • a memory 102 on which at least one computer program is stored, where the at least one computer program, when executed by the at least one processor 101, causes the at least one processor 101 to implement the feedback method for channel state information described in the first aspect of the embodiments of the present disclosure;
  • At least one I/O interface 103 is connected between the processor 101 and the memory 102 and is configured to realize information exchange between the processor 101 and the memory 102 .
  • the processor 101 is a device with data processing capability, including but not limited to a central processing unit (CPU); the memory 102 is a device with data storage capability, including but not limited to random access memory (RAM, more specifically SDRAM, DDR, etc.), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), and flash memory (FLASH); the I/O interface (read/write interface) 103 is connected between the processor 101 and the memory 102, can realize information interaction between the processor 101 and the memory 102, and includes but is not limited to a data bus (Bus).
  • the processor 101 , the memory 102 and the I/O interface 103 are connected to each other through the bus 104 , and are further connected to other components of the computing device.
  • an embodiment of the present disclosure provides a base station, including:
  • At least one processor 201 (only one is shown in Figure 10);
  • a memory 202 on which at least one computer program is stored, where the at least one computer program, when executed by the at least one processor 201, causes the at least one processor 201 to implement the method for receiving channel state information described in the second aspect of the embodiments of the present disclosure;
  • At least one I/O interface 203 is connected between the processor 201 and the memory 202 and is configured to realize information exchange between the processor 201 and the memory 202 .
  • the processor 201 is a device with data processing capability, including but not limited to a central processing unit (CPU); the memory 202 is a device with data storage capability, including but not limited to random access memory (RAM, more specifically SDRAM, DDR, etc.), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), and flash memory (FLASH); the I/O interface (read/write interface) 203 is connected between the processor 201 and the memory 202, can realize information interaction between the processor 201 and the memory 202, and includes but is not limited to a data bus (Bus).
  • the processor 201 , the memory 202 and the I/O interface 203 are connected to each other through the bus 204 , and further connected to other components of the computing device.
  • an embodiment of the present disclosure provides a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, at least one of the following methods is implemented:
  • the terms "index" and "indicator" are used interchangeably.
  • Transmission includes sending or receiving; terminals can include various mobile devices, such as mobile phones, data cards, notebooks, and various manufacturing equipment in factories; base stations include various macro base stations, micro base stations, home base stations, and pico base stations.
  • the indexing of the channel information is defined as follows. For example, if the channel information is a three-dimensional matrix H, then a lone colon ":" in a certain dimension means taking all the values in that dimension, while L1:L2 in a certain dimension means taking only the values whose index runs from L1 to L2 in that dimension. For example, H(1:3,2:4,:) means that the first dimension takes the values at indexes 1, 2, 3, the second dimension takes the values at indexes 2, 3, 4, and the third dimension takes the values at all indexes.
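The indexing convention can be illustrated in NumPy, bearing in mind that the patent's notation is 1-based and end-inclusive while NumPy slicing is 0-based and end-exclusive (the matrix shape here is an assumed example):

```python
import numpy as np

# Hypothetical 3-D channel matrix H of shape 4 x 5 x 2.
H = np.arange(4 * 5 * 2).reshape(4, 5, 2)

# Patent notation H(1:3, 2:4, :) uses 1-based, inclusive ranges.
# In 0-based, end-exclusive NumPy indexing this becomes H[0:3, 1:4, :].
sub = H[0:3, 1:4, :]

assert sub.shape == (3, 3, 2)   # indexes 1..3, 2..4, and all of dimension 3
```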
  • This example is used to illustrate the encoder and decoder construction of a neural network.
  • Artificial intelligence can be realized through a deep learning neural network, where the neural network is an autoencoder, including an encoder and a decoder, and the encoder is located at the terminal.
  • the terminal here can be various mobile devices, such as mobile phones, data cards, notebooks, and various manufacturing equipment in factories; while the decoder is located in the base station, which includes various macro base stations, micro base stations, femto base stations, and pico base stations.
  • the structure of feeding back channel state information is shown in FIG. 12 .
  • the terminal obtains the channel information H by receiving a reference signal, such as a channel state information reference signal (CSI-RS, Channel-state information reference signal).
  • the channel information is generally an Nt*Nr complex matrix.
  • Nt and Nr are respectively the number of transmitting antennas of the base station and the number of receiving antennas of the terminal, where the antennas may be logical antennas or various types of physical antennas.
  • after obtaining the channel information, the terminal preprocesses the channel information and inputs the preprocessed channel information to the encoder; the encoder includes a first processing layer and a compression layer.
  • the first processing layer is a sub-neural network, including C network layers
  • the i-th network layer can be a convolutional layer, a pooling layer, or a direct connection layer, etc.
  • the compression layer can be a fully connected layer, or C1 convolutional layers, or a recurrent network; for example, the recurrent network can be a long short-term memory (LSTM) network or a gated recurrent neural network (GRU).
  • the i-th layer can be a convolutional layer, a pooling layer, a direct connection layer, a combination of these, or a convolutional layer group composed of several convolutional layers, such as a residual network block; the decompression layer can be a fully connected layer, or C1 convolutional layers, or a recurrent network, for example any one of a long short-term memory network and a gated recurrent neural network.
  • the terminal inputs the preprocessed channel information to the encoder, and the encoder outputs the channel state information, and then transmits the channel state information to the base station.
  • the terminal may also perform quantization, coding, and modulation on the channel state information and transmit it to the base station.
  • the base station receives the channel state information.
  • the base station can also dequantize the channel state information, demodulate and decode it as the input of the decoder, and the decoder outputs the channel information.
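The encoder/decoder flow above can be sketched minimally with untrained, randomly initialized fully connected layers standing in for the trained compression and decompression layers; the sizes and activation are assumptions, and quantization, modulation, and the processing layers are omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

Nt, Nr, C = 8, 2, 2        # assumed antenna counts and real/imag channels
N = Nt * Nr * C            # flattened input size
M = 8                      # compressed CSI size (compression rate M/N)

# Untrained weights standing in for trained autoencoder parameters.
W_enc = rng.standard_normal((N, M))
W_dec = rng.standard_normal((M, N))

def encoder(h_flat):
    # compression layer: fully connected N -> M (terminal side)
    return np.tanh(h_flat @ W_enc)

def decoder(csi):
    # decompression layer: fully connected M -> N (base-station side)
    return csi @ W_dec

H = rng.standard_normal((Nt, Nr, C))
csi = encoder(H.reshape(-1))             # terminal: compress to CSI
H_hat = decoder(csi).reshape(Nt, Nr, C)  # base station: reconstruct

assert csi.shape == (M,)
assert H_hat.shape == H.shape
```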
  • This example is used to illustrate how to obtain the parameters of the neural network.
  • the neural network parameters of K0 sets of autoencoders are obtained through offline training, or through the process of offline training combined with online training, and each set of autoencoders includes a pair of encoders and decoders.
  • the terminal and the base station respectively store the neural network parameters of the K0 pairs of encoders and decoders.
  • according to the channel conditions, the base station configures the index of one of the K0 encoder and decoder pairs through high-layer signaling, so that after receiving the index the terminal knows which set of encoder neural network parameters to use; that is, the base station configures the index of the encoder and decoder pair, and the terminal receives the index and determines the neural network parameters corresponding to the encoder. Alternatively, the terminal selects an encoder according to at least one factor such as the channel scene, channel angle spread, delay spread, and Doppler spread, and transmits information of the selected encoder to the base station through physical layer and/or high-layer signaling.
  • both the compression layer in the encoder and the decompression layer in the decoder are a fully connected layer.
  • the structure of feeding back channel state information is shown in FIG. 13 .
  • the weight of each fully connected layer is a two-dimensional matrix. In the compression layer it is an N*M matrix, where N and M represent the input and output dimensions of the fully connected layer corresponding to the compression layer; in the decompression layer it is an M*N matrix, where M and N respectively represent the input and output dimensions of the fully connected layer corresponding to the decompression layer.
  • the base station may transmit the network parameters of the encoder to the terminal through high-layer signaling.
  • the compression layer in the encoder is a convolutional layer block composed of a set of convolutional layers, such as a convolutional layer block obtained by serializing and/or parallelizing multiple convolutional layers, for example a residual block (resblock), a dense block (denseblock), or a convolutional block consisting of multiple serially linked convolutional layers.
  • a convolutional layer block and a convolutional layer group are equivalent and interchangeable concepts.
  • the architecture for feeding back channel state information is shown in Figure 14, and multiple convolutional layer blocks constitute a compression layer.
  • the base station transmits the parameters of one or more convolutional layer blocks to the terminal by configuring high-layer and/or physical layer signaling, and the terminal obtains the parameters of the one or more convolutional layer blocks by receiving the high-layer and/or physical layer signaling; the parameters of a convolutional layer block include but are not limited to at least one of the following: the size of the convolution kernel of each convolutional layer, the stride of the convolution kernel, the data padding method, the weights, and the activation functions.
  • the decompression layer in the decoder is a deconvolution layer block composed of a set of deconvolution layers, such as a deconvolution layer block obtained by serializing and/or parallelizing multiple deconvolution layers, for example transposed convolutional layers, dilated convolutional layers, etc.
  • a deconvolutional layer block and a deconvolutional layer group are the same concept.
  • the parameters of the deconvolution block corresponding to each decoder and the convolution layer block corresponding to the encoder can be obtained through online or offline training, and appear in pairs.
  • the number of convolutional layers in the convolutional layer block and the deconvolutional layer block is determined by at least one of the following: the number of transmitting antennas, the number of receiving antennas, the rank of the channel, the number of physical resource blocks, the number of time-domain sampling points, and the compression rate, because these quantities determine the input and output sizes of the convolutional layer blocks; each convolutional layer controls the input and output sizes of its block through a different stride. The situation is similar for deconvolutional layers.
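The stride's control over a block's input/output sizes follows the standard convolution output-size formula; the kernel size and padding below are assumed values:

```python
# Standard convolution output-size formula: out = floor((in + 2p - k) / s) + 1.
# Illustrates how the stride of each layer controls a block's output size,
# the quantity the text above matches to antenna count, bandwidth, etc.

def conv_out(n_in, kernel, stride, pad=0):
    return (n_in + 2 * pad - kernel) // stride + 1

# Assumed example: 32 inputs, kernel 3, padding 1.
assert conv_out(32, 3, 1, pad=1) == 32   # stride 1 keeps the size
assert conv_out(32, 3, 2, pad=1) == 16   # stride 2 halves it
assert conv_out(32, 3, 4, pad=1) == 8    # stride 4 quarters it
```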
  • This example illustrates the preprocessing of input data by grouping channel information, where the channel information is a time-domain channel.
  • the input size of the compression layer is N
  • C is the channel dimension, which is generally equal to 2.
  • a method to reduce the number of sets of neural network parameters is to train one set of neural network parameters of a benchmark size, for example the neural network parameters corresponding to an antenna number Nt1 and a sampling point number Ns1; when Nt is not equal to Nt1, or Ns is not equal to Ns1, the input data is grouped, down-sampled, or zero-padded to match the dimension Nt1*Ns1.
  • the channel information is grouped in the antenna dimension according to the polarization direction.
  • the channel corresponding to the first polarization direction is first arranged in the channel dimension, and then the channel corresponding to the second polarization direction is arranged.
  • the first polarization direction is a polarization with a polarization angle P1, and the second polarization direction is a polarization with a polarization angle P2; P1 and P2 are different, and each may take a value of at least one of -45°, 0°, 90°, and +45°.
  • rows 1 to Nt/2 correspond to channels in the first polarization direction, and rows Nt/2+1 to Nt correspond to channels in the second polarization direction.
  • the grouping here can also be performed according to the number of transmitting antennas; it is not necessary to divide into two groups according to the polarization antennas, and the channel information can instead be divided into K groups according to the number of transmitting antennas, where K>2.
  • the first group corresponds to the channel information of H(1:Nt/2,:,:), and the second group corresponds to the channel information of H(1+Nt/2:Nt,:,:).
  • the antenna polarization direction here includes the polarization direction of the transmitting antenna and/or the polarization direction of the receiving antenna. In some implementations, it is necessary to feed back the polarization phases between antenna groups with different polarizations.
  • after passing through the decompression module, the receiving end obtains the estimated values of the K groups of channel information, i.e., K groups of second channel information; it then needs to perform an inverse operation according to the above grouping rules to concatenate the estimated values of the K groups of channel information into the estimated value of the channel information, that is, the second channel information.
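The two-group polarization split and its inverse concatenation can be sketched as follows (the dimensions are assumed values):

```python
import numpy as np

# Split the Nt antennas into two polarization groups along the first
# (antenna) dimension, then perform the inverse concatenation that the
# receiving end applies to the K=2 recovered groups.
Nt, Ns, C = 8, 16, 2
H = np.random.default_rng(1).standard_normal((Nt, Ns, C))

# Patent notation H(1:Nt/2,:,:) and H(1+Nt/2:Nt,:,:) in 0-based indexing:
group1 = H[: Nt // 2]          # first polarization direction
group2 = H[Nt // 2 :]          # second polarization direction

# Inverse operation: concatenate the groups back along the antenna axis.
H_rebuilt = np.concatenate([group1, group2], axis=0)

assert H_rebuilt.shape == (Nt, Ns, C)
assert np.array_equal(H_rebuilt, H)
```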
  • the number Nt of antennas of the channel information is relatively close to the number Nt1 of antennas of the reference neural network.
  • the antennas can be down-sampled, for example by taking the channel information H(1:Nt1,:,:) corresponding to the first Nt1 antennas, or the channel information H((Nt-Nt1+1):Nt,:,:) corresponding to the last Nt1 antennas, or the channel information corresponding to odd antenna indexes, or the channel information corresponding to even antenna indexes.
  • alternatively, Nt1/2 channels are taken in each polarization direction; for example, take the Nt1/2 channels H(1:Nt1/2,:,:) of the first polarization direction and the Nt1/2 channels H(Nt/2+1:Nt/2+Nt1/2,:,:) of the second polarization direction, and then combine them.
  • after the decompression module at the receiving end obtains the estimated value of the down-sampled channel information, i.e., the down-sampled second channel information, it is necessary to perform an inverse operation according to the above down-sampling rules, such as up-sampling (including inverse operations such as interpolation, a deconvolution layer, or a fully connected layer), to obtain the estimated value of the channel information, that is, the second channel information.
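The down-sampling options and one possible inverse (linear interpolation as the up-sampling step) can be sketched as follows; the choice of interpolation is an assumption, since the text above also allows deconvolution or fully connected layers as the inverse:

```python
import numpy as np

# Antenna down-sampling options from the text, plus a linear-interpolation
# up-sampling step at the receiving end (dimensions assumed).
Nt, Nt1, Ns, C = 16, 8, 4, 2
H = np.random.default_rng(5).standard_normal((Nt, Ns, C))

first = H[:Nt1]             # patent's H(1:Nt1,:,:)
last = H[Nt - Nt1 :]        # patent's H((Nt-Nt1+1):Nt,:,:)
odd = H[0::2]               # odd antenna indexes (1-based)
even = H[1::2]              # even antenna indexes (1-based)

assert first.shape == last.shape == odd.shape == even.shape == (Nt1, Ns, C)

# Receiving side: up-sample the Nt1 kept rows back to Nt rows.
x_src = np.arange(0, Nt, 2)          # positions of the kept (odd-index) rows
x_dst = np.arange(Nt)
H_up = np.stack([
    np.stack([np.interp(x_dst, x_src, odd[:, s, c]) for c in range(C)], axis=-1)
    for s in range(Ns)
], axis=1)
assert H_up.shape == (Nt, Ns, C)
```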
  • the number Nt of antennas of the channel information is smaller than the number Nt1 of antennas of the reference neural network.
  • when Nt < Nt1, the channel information is denoted H0, whose dimension is Nt*Ns*C; it is zero-padded to a matrix H of dimension Nt1*Ns*C.
  • zero-padding operations can be performed in the antenna dimension: for example, the first rows of H are zero-padded, that is, H(1:Nt1-Nt,:,:) takes the value zero and rows (Nt1-Nt+1):Nt1 take the value of H0; or the last rows are zero-padded, that is, H((Nt1-Nt+1):Nt1,:,:) takes the value zero and rows 1:Nt take the value of H0; or every other row of H takes the value of H0 and the other positions are filled with zeros.
  • alternatively, H(1:Nt/2,:,:) takes the Nt/2 channels of the first polarization direction of H0, and H(1+Nt1/2:Nt1/2+Nt/2,:,:) takes the Nt/2 channels of the second polarization direction of H0, while the channel information corresponding to the other antenna indexes is zero.
  • for example, the first-polarization channel information H(1:4,:,:) takes the value of the first-polarization channel information H0(1:4,:,:); the second-polarization channel information H(9:12,:,:) takes the value of the second-polarization channel information H0(5:8,:,:); and the channel information corresponding to the other antenna indexes is zero, that is, H(5:8,:,:) and H(13:16,:,:) take the value 0.
  • after the receiving end obtains the estimated value of the zero-padded channel information, i.e., the zero-padded second channel information, it performs an inverse operation such as a zero-removal operation (that is, removing the zeros introduced by the original zero padding) to obtain the estimated value of the channel information, that is, the second channel information.
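The per-polarization zero padding and the inverse zero-removal step can be sketched as follows, assuming Nt=8 and Nt1=16 as implied by the index example above:

```python
import numpy as np

# Per-polarization zero padding of an Nt=8 channel to the benchmark Nt1=16,
# followed by the inverse zero-removal step at the receiving end.
Nt, Nt1, Ns, C = 8, 16, 4, 2
H0 = np.random.default_rng(2).standard_normal((Nt, Ns, C))

H = np.zeros((Nt1, Ns, C))
H[0:4] = H0[0:4]      # patent's H(1:4,:,:)  <- first polarization of H0
H[8:12] = H0[4:8]     # patent's H(9:12,:,:) <- second polarization of H0
# H(5:8,:,:) and H(13:16,:,:) stay zero.

# Receiving side: zero removal, the inverse of the padding rule above.
H0_hat = np.concatenate([H[0:4], H[8:12]], axis=0)

assert np.array_equal(H0_hat, H0)
assert np.all(H[4:8] == 0) and np.all(H[12:16] == 0)
```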
  • the antenna dimensions are grouped according to the number of receive antennas or the rank of the channel (number of data streams).
  • the channel information is H, a matrix of Nt*Ns*C, which can be obtained by performing a fast Fourier transform on the obtained channel information to obtain the time-domain channel, and then truncating and normalizing at the time-domain sampling points. The channel dimension can also be the first dimension; the method is similar to that for the third dimension.
  • the channel information is sorted by the receiving antenna index in the channel dimension.
  • the channel corresponding to the first receiving antenna is arranged first, and then the channel corresponding to the second receiving antenna is arranged, and so on until all the receiving antennas are arranged.
  • the channel information is divided into K groups according to the number of receiving antennas, and the i-th group corresponds to the channel information of H(Ai,:,:).
  • the antenna dimensions are grouped according to the rank (number of data streams) of the channel.
  • the channel information is H, a matrix of Nt*Ns*C, which can be obtained by performing a fast Fourier transform on the obtained channel information to obtain the time-domain channel, and then truncating and normalizing at the time-domain sampling points.
  • the approach is similar to that in the third dimension.
  • Channel information is sorted by channel rank or data stream index in the channel dimension.
  • the channel corresponding to the first data stream is sorted first, and then the channel of the second data stream is sorted, and so on until all data streams are sorted.
  • Nt = Ntx*K, where Ntx is the number of transmitting antennas, such as 16; the rows of the i-th antenna index set Ai correspond to the channel information of the i-th data stream, where Ai is the set of integers from Ntx*(i-1)+1 to Ntx*i, i = 1, ..., K, and K is the channel rank or the number of data streams.
  • the channel information is divided into K groups according to the channel rank or the number of data streams, and the i-th group corresponds to the channel information of H(Ai,:,:).
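The per-stream index sets Ai and the resulting grouping can be sketched as follows (the dimensions are assumed values):

```python
import numpy as np

# The stacked channel has Nt = Ntx*K rows, and group i takes the antenna
# index set Ai = {Ntx*(i-1)+1, ..., Ntx*i} (1-based in the patent; the
# slices below are the 0-based equivalents).
Ntx, K, Ns, C = 16, 2, 4, 2
Nt = Ntx * K
H = np.random.default_rng(3).standard_normal((Nt, Ns, C))

groups = [H[Ntx * i : Ntx * (i + 1)] for i in range(K)]  # H(Ai,:,:) per stream

assert len(groups) == K
assert all(g.shape == (Ntx, Ns, C) for g in groups)
assert np.array_equal(np.concatenate(groups, axis=0), H)  # inverse regrouping
```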
  • the sampling-point dimension is grouped according to the sampling point number.
  • the channel information is H, a matrix of Nt*Ns*C, which can be obtained by performing a fast Fourier transform on the obtained channel information to obtain the time-domain channel, and then truncating and normalizing at the time-domain sampling points.
  • the channel information in the sample point dimension is arranged according to the sample point size index.
  • the channel information is divided into K groups.
  • the sampling points are sorted from large to small, and divided into K groups, that is, K sampling point groups.
  • after passing through the decompression module, the receiving end obtains the estimated values of the K groups of channel information, i.e., K groups of second channel information, and needs to perform an inverse operation according to the above grouping rules to concatenate the estimated values of the K groups of channel information into the estimated value of the channel information; this includes the cases where the input data is grouped according to at least one of the number of receiving antennas, the number of data streams, and the number of sampling point groups.
  • the locations of the non-zero sampling points, or of the sampling points greater than a threshold T0, may differ. If the number of sampling points Ns0 representing the channel is relatively close to Ns: when Ns0 < Ns, the sampling-point dimension needs to be padded, that is, H(:,Ns0:Ns,:) is assigned the value 0; when Ns0 > Ns, the channel information needs to be further truncated so that the number of truncated sampling points is Ns, and the starting position of the truncation window or the index value of the truncation point is fed back.
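The padding/truncation of the sampling-point dimension can be sketched as follows; the helper name and window-start handling are assumptions:

```python
import numpy as np

# Match the sampling-point dimension to the benchmark size Ns: pad with
# zeros when the channel has fewer points, truncate (remembering the
# window start, to be fed back) when it has more.
def match_samples(h, Ns, window_start=0):
    Nt, Ns0, C = h.shape
    if Ns0 < Ns:                      # pad: H(:,Ns0:Ns,:) = 0
        out = np.zeros((Nt, Ns, C))
        out[:, :Ns0] = h
        return out, 0
    return h[:, window_start : window_start + Ns], window_start  # truncate

h_short = np.ones((2, 3, 2))
padded, _ = match_samples(h_short, 5)
assert padded.shape == (2, 5, 2) and np.all(padded[:, 3:] == 0)

h_long = np.arange(2 * 8 * 2).reshape(2, 8, 2)
trunc, start = match_samples(h_long, 5, window_start=2)
assert trunc.shape == (2, 5, 2) and start == 2
assert np.array_equal(trunc, h_long[:, 2:7])
```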
  • the K groups of channel information are respectively input into the encoder to obtain the channel state information of each of the K groups, and the K groups of channel state information are quantized and jointly encoded to obtain the final channel state information, which is fed back to the base station.
  • after the base station obtains the channel state information, it decodes it to obtain K groups of channel state information and inputs the K groups of channel state information into the decoder respectively to obtain K groups of channel information; the final channel information is obtained by restoring the channel dimension, for example by first placing the first group of channel information in the channel dimension and then the second group, thereby restoring the final channel information.
  • This example illustrates the preprocessing of input data by grouping channel information, where the channel information is a frequency-domain channel.
  • the input size of the compression layer is N and the output is M
  • C is the channel dimension, generally equal to 2
  • Nt is the number of antennas
  • Nf is the number of frequency-domain granularities
  • the frequency-domain granularity can be in units of subcarriers, physical resource blocks, or subbands.
  • a physical resource block can include multiple subcarriers, such as 6, 12, etc., and a subband can include multiple physical resource blocks.
  • one or more frequency-domain granularities form a frequency-domain granularity group. Changing the antenna number Nt or the frequency-domain granularity number Nf may change the parameter values of the neural network; in reality the number of antennas is diverse and the frequency-domain granularity number Nf is related to the configured bandwidth, which would lead to transmitting or storing multiple sets of neural network parameters.
  • a method to reduce the number of sets of neural network parameters is to train one set of neural network parameters of a benchmark size, for example the neural network parameters corresponding to an antenna number Nt1 and a frequency-domain granularity number Ns1; for Nt>Nt1 or Ns>Ns1, the input data is grouped, down-sampled, or zero-padded to match the dimension Nt1*Ns1.
  • the channel information is grouped in the antenna dimension according to the polarization direction.
  • the channel information is H, a matrix of Nt*Nf*C, which can be obtained by normalizing the obtained channel information.
  • the channel dimension can also be in the first dimension, and the method is similar to that in the third dimension.
  • in the channel dimension, the channel corresponding to the first polarization direction is arranged first, and then the channel corresponding to the second polarization direction is arranged.
  • the first polarization direction is a polarization with a polarization angle P1, and the second polarization direction is a polarization with a polarization angle P2; P1 and P2 are different, and each may take a value of at least one of -45°, 0°, 90°, and +45°.
  • rows 1 to Nt/2 correspond to channels in the first polarization direction, and rows Nt/2+1 to Nt correspond to channels in the second polarization direction.
  • the first group corresponds to the channel information of H(1:Nt/2,:,:), and the second group corresponds to the channel information of H(1+Nt/2:Nt,:,:).
  • the antenna polarization direction here includes the polarization direction of the transmitting antenna and/or the polarization direction of the receiving antenna. In some implementations, it is necessary to feed back the polarization phases between antenna groups with different polarizations.
  • the antenna dimensions are grouped according to the number of receive antennas or the rank of the channel (number of data streams).
  • the channel information is H, which is a matrix of Nt*Nf*C, which can be obtained by normalizing the obtained channel information.
  • Nt = Ntx*K, where K is the number of receiving antennas.
  • the channel dimension can also be in the first dimension, which is similar to that in the third dimension.
  • Channel information is sorted according to the receiving antenna index in the channel dimension.
  • the channel corresponding to the first receiving antenna is arranged first, and then the channel corresponding to the second receiving antenna is arranged, and so on until all receiving antennas are arranged.
  • the channel information is divided into K groups according to the number of receiving antennas, and the i-th group corresponds to the channel information of H(Ai,:,:).
  • the antenna dimension is grouped according to the rank (number of data streams) of the channel.
  • the channel information is H, which is a matrix of Nt*Nf*C, which can be obtained by normalizing the obtained channel information.
  • Nt = Ntx*K, where Ntx is the number of transmitting antennas, such as 16, and K is the channel rank or the number of data streams.
  • the channel dimension can also be in the first dimension, and the method is similar to that in the third dimension.
  • Channel information is sorted by channel rank or data stream index in the channel dimension.
  • the channel corresponding to the first data stream is sorted first, and then the channel of the second data stream is sorted, and so on until all data streams are sorted.
  • Nt = Ntx*K, where Ntx is the number of transmitting antennas, such as 16; the rows of the i-th antenna index set Ai correspond to the channel information of the i-th data stream, where Ai is the set of integers from Ntx*(i-1)+1 to Ntx*i, i = 1, ..., K, and K is the channel rank or the number of data streams.
  • the channel information is divided into K groups according to the channel rank or the number of data streams, and the i-th group corresponds to the channel information of H(Ai,:,:).
  • The grouping is done according to frequency-domain granularity.
  • The channel information is H, an Nt*Nf*C matrix, which can be obtained by normalizing the acquired channel information.
  • The channel (real/imaginary) dimension C can also be placed as the first dimension; the approach is similar to having it as the third dimension.
  • In the frequency domain, the channel information is arranged according to the frequency-domain granularity index. However, the channel bandwidth corresponding to the system may differ.
  • The channel information is divided into K groups according to the frequency-domain granularity: the frequency-domain granularities are sorted by size and divided into K groups.
  • If Nf<Ns, zeros need to be padded in the frequency dimension, i.e., the channels of H(:,Nf:Ns,:) are set to 0; if Nf>Ns, the channel information needs to be further truncated so that the number of retained sampling points is Ns, and the indices corresponding to the truncated frequency-domain granularities are fed back.
  • Another method is to down-sample the frequency-domain channel, for example selecting only the channel information whose frequency-domain granularity index is odd or even, i.e., the channels H(:,1:2:Ns,:) or H(:,2:2:Ns,:) are selected as the input of the encoder.
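A sketch of the zero-padding, truncation, and down-sampling options just described, assuming NumPy and example sizes; Ns=16 as the encoder's frequency dimension and Nf=10 for the actual channel are illustrative assumptions:

```python
import numpy as np

Nt, C = 16, 2
Ns = 16                            # frequency points the encoder expects (assumed)
H = np.random.randn(Nt, 10, C)     # Nf = 10 < Ns here, so padding applies
Nf = H.shape[1]

if Nf < Ns:
    # Pad zeros at the end of the frequency axis.
    Hp = np.concatenate([H, np.zeros((Nt, Ns - Nf, C))], axis=1)
elif Nf > Ns:
    # Truncate; the kept granularity indices would be fed back.
    Hp = H[:, :Ns, :]
else:
    Hp = H

assert Hp.shape == (Nt, Ns, C)
assert np.all(Hp[:, Nf:, :] == 0)  # padded region is zero

# Down-sampling alternative: keep only odd-indexed granularities,
# H(:,1:2:Ns,:) in the text's 1-based notation.
Hd = Hp[:, 0::2, :]
assert Hd.shape == (Nt, Ns // 2, C)
```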
  • The frequency-domain granularity is a subband
  • The number of physical resource blocks included in a subband is related to the compression rate and the bandwidth. For example, the smaller the compression rate, the fewer physical resource blocks a subband includes.
  • For example, with the same 100 physical resource blocks, when the compression rate is 1/20 each subband includes 8 physical resource blocks, and when the compression rate is 1/10 each subband includes 4 physical resource blocks.
  • The number of physical resource blocks included in a subband is proportional to the bandwidth. For example, if each subband includes 4 physical resource blocks at a 10 MHz bandwidth, then each subband includes 8 physical resource blocks at a 20 MHz bandwidth, and so on.
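As a hedged illustration of the proportionality rule, the helper below reproduces the stated subband sizes; the function name and the 10 MHz / 4-PRB baseline are assumptions taken from the text's example, not a normative formula:

```python
# Hypothetical helper following the text's bandwidth example: PRBs per
# subband scale in proportion to bandwidth (4 PRBs at 10 MHz).
def prbs_per_subband(bandwidth_mhz, base_bw_mhz=10, base_prbs=4):
    return int(base_prbs * bandwidth_mhz / base_bw_mhz)

assert prbs_per_subband(10) == 4   # the text's 10 MHz example
assert prbs_per_subband(20) == 8   # the text's 20 MHz example
assert prbs_per_subband(40) == 16  # "and so on"
```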
  • The K groups of channel information are respectively input into the encoder, and the channel state information of each of the K groups is obtained; the K groups of channel state information are quantized and jointly encoded to obtain the final channel state information, which is fed back to the base station.
  • After the base station obtains the channel state information, it decodes it to obtain K groups of channel state information, and inputs the K groups of channel state information into the decoder respectively to obtain K groups of channel information; the K groups of channel information are then restored in the polarization dimension to obtain the final channel information.
  • For example, in the channel dimension the first group of channel information is placed first, then the second group, and so on, until the final channel information is restored.
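The end-to-end flow (split into K groups, encode each, jointly feed back, decode each, reassemble) can be sketched with a toy linear stand-in for the trained encoder/decoder; the random projection and its pseudo-inverse below are placeholders for the neural networks, chosen only to make the data flow concrete, and the sizes are example values:

```python
import numpy as np

rng = np.random.default_rng(0)
Ntx, K, Ns, C = 16, 2, 16, 2
N = Ntx * Ns * C                   # flattened input size per group
M = N // 64                        # compressed size (compression rate 1/64)

# Toy stand-ins for the trained encoder/decoder: a random linear
# projection and its pseudo-inverse. The real scheme uses neural nets.
W = rng.standard_normal((M, N))
W_pinv = np.linalg.pinv(W)

H = rng.standard_normal((Ntx * K, Ns, C))
groups = [H[Ntx * i : Ntx * (i + 1)] for i in range(K)]

# Terminal side: encode each group, then concatenate by group index
# (a simple stand-in for quantization and joint encoding).
csi = [W @ g.ravel() for g in groups]
feedback = np.concatenate(csi)     # fed back to the base station

# Base station side: split by group index, decode each group, then
# reassemble the channel dimension in order.
decoded = [(W_pinv @ feedback[M * i : M * (i + 1)]).reshape(Ntx, Ns, C)
           for i in range(K)]
H_hat = np.concatenate(decoded, axis=0)
assert H_hat.shape == H.shape
```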
  • For example, if the number of antennas is 8 but the number of antennas in the network input is 32, zeros need to be appended so that the number of antennas input to the network is consistent.
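A minimal sketch of this padding, appending zeros at the end of the antenna axis (one of several placements the disclosure allows; the sizes below are the text's example values):

```python
import numpy as np

Nt, Nt1 = 8, 32       # actual antennas vs. antennas the network expects
Ns, C = 16, 2
H0 = np.random.randn(Nt, Ns, C)

# Simplest placement: append zeros at the end of the antenna axis so the
# padded matrix matches the network's input dimension.
H = np.concatenate([H0, np.zeros((Nt1 - Nt, Ns, C))], axis=0)

assert H.shape == (Nt1, Ns, C)
assert np.array_equal(H[:Nt], H0)  # original channel is preserved
assert np.all(H[Nt:] == 0)         # appended rows are zero
```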
  • The division between the functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be executed cooperatively by several physical components.
  • Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit, digital signal processor, or microprocessor, or as hardware, or as an integrated circuit, such as an application-specific integrated circuit.
  • Such software may be distributed on computer-readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media).
  • Computer storage media include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for the storage of information, such as computer-readable instructions, data structures, program modules, or other data.
  • Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by a computer.
  • Communication media typically embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery media.


Abstract

The present disclosure provides a channel state information feedback method, including: determining neural network parameters, and constructing an encoder according to the neural network parameters; compressing channel information by using the encoder to obtain channel state information; and feeding back the channel state information. The present disclosure further provides a channel state information receiving method, a terminal, a base station, and a computer-readable storage medium.

Description

信道状态信息的反馈方法及接收方法、终端、基站、计算机可读存储介质
相关申请的交叉引用
本申请要求于2021年8月4日提交的中国专利申请NO.202110893081.2的优先权,该中国专利申请的内容通过引用的方式整体合并于此。
技术领域
本公开涉及通信技术领域,特别涉及信道状态信息的反馈方法、信道状态信息的接收方法、终端、基站、以及计算机可读存储介质。
背景技术
多天线技术是提高无线通信性能的重要手段之一,广泛应用于在长期演进(LTE,Long Term Evolution)、长期演进增强(LTE-A,Long Term Evolution-Advanced)、新无线接入技术(NR,New Radio Access Technology)等标准中。为了在频分复用等系统中充分发挥多天线技术的性能,需要终端反馈信道状态信息(CSI,Channel State Information)。信道状态信息包括但不限于信道的秩、跟信道匹配的预编码矩阵等。信道状态信息越准确,越能发挥多天线技术的性能;而反馈的信道状态信息越准确,需要的反馈开销也越大。例如,NR中的类型2码本(type II码本)反馈的比特数目高达几百比特。
在一些相关技术中,无法兼顾信道状态信息的准确度和反馈开销。
公开内容
第一方面,本公开实施例提供一种信道状态信息的反馈方法, 包括:
确定神经网络参数,根据所述神经网络参数构造编码器;
利用所述编码器对信道信息进行压缩得到信道状态信息;以及
反馈所述信道状态信息。
第二方面,本公开实施例提供一种信道状态信息的接收方法,包括:
确定神经网络参数,根据所述神经网络参数构造解码器;
接收信道状态信息;以及
利用所述解码器对所述信道状态信息进行解压缩得到第二信道信息。
第三方面,本公开实施例提供一种终端,包括:
至少一个处理器；
存储器,其上存储有至少一个计算机程序,当所述至少一个计算机程序被所述至少一个处理器执行,使得所述至少一个处理器实现第一方面所述的本公开实施例的信道状态信息的反馈方法;以及
至少一个I/O接口,连接在所述处理器与存储器之间,配置为实现所述处理器与存储器的信息交互。
第四方面,本公开实施例提供一种基站,包括:
至少一个处理器;
存储器,其上存储有至少一个计算机程序,当所述至少一个计算机程序被所述至少一个处理器执行,使得所述至少一个处理器实现第二方面所述的本公开实施例的信道状态信息的接收方法;以及
至少一个I/O接口,连接在所述处理器与存储器之间,配置为实现所述处理器与存储器的信息交互。
第五方面,本公开实施例提供一种计算机可读存储介质,其上存储有计算机程序,所述计算机程序被处理器执行时实现以下方法中的至少一者:
第一方面所述的本公开实施例的信道状态信息的反馈方法;或
第二方面所述的本公开实施例的信道状态信息的接收方法。
附图说明
图1是本公开实施例中一种信道状态信息的反馈方法的流程图;
图2是本公开实施例中一种信道状态信息的反馈方法中部分步骤的流程图;
图3是本公开实施例中一种信道状态信息的反馈方法中部分步骤的流程图;
图4是本公开实施例中一种信道状态信息的反馈方法中部分步骤的流程图;
图5是本公开实施例中一种信道状态信息的反馈方法中部分步骤的流程图;
图6是本公开实施例中一种信道状态信息的反馈方法中部分步骤的流程图;
图7是本公开实施例中一种信道状态信息的反馈方法中部分步骤的流程图;
图8是本公开实施例中一种信道状态信息的接收方法的流程图;
图9是本公开实施例中一种终端的组成框图;
图10是本公开实施例中一种基站的组成框图;
图11是本公开实施例中一种计算机可读存储介质的组成框图;
图12是本公开实施例中一种反馈信道状态信息的架构示意图;
图13是本公开实施例中一种反馈信道状态信息的架构示意图;
图14是本公开实施例中一种反馈信道状态信息的架构示意图;以及
图15是本公开实施例中一种采样点位置的示意图。
具体实施方式
为使本领域的技术人员更好地理解本公开的技术方案,下面结合附图对本公开提供的信道状态信息的反馈方法、信道状态信息的接收方法、终端、基站、计算机可读存储介质进行详细描述。
在下文中将参考附图更充分地描述示例实施例,但是所述示例实施例可以以不同形式来体现,且本公开不应当被解释为限于本文阐述的实施例。提供这些实施例的目的在于使本公开更加透彻和完整,并使本领域技术人员充分理解本公开的范围。
在不冲突的情况下,本公开各实施例及实施例中的各特征可相互组合。
如本文所使用的,术语“和/或”包括一个或多个相关列举条目的任何和所有组合。
本文所使用的术语仅用于描述特定实施例,且不意欲限制本公开。如本文所使用的,单数形式“一个”和“该”也意欲包括复数形式,除非上下文另外清楚指出。还将理解的是,当本说明书中使用术语“包括”和/或“由……制成”时,指定存在特定特征、整体、步骤、操作、元件和/或组件,但不排除存在或可添加一个或多个其它特征、整体、步骤、操作、元件、组件和/或其群组。
除非另外限定,否则本文所用的所有术语(包括技术术语和科学术语)的含义与本领域普通技术人员通常理解的含义相同。还将理解,诸如在常用字典中限定的那些术语应当被解释为具有与其在相关技术以及本公开的背景下的含义一致的含义,且将不解释为具有理想化或过度形式上的含义,除非本文明确如此限定。
人工智能(AI,Artificial Intelligence)具有强大的特征提取、分类等能力,目前被广泛用于各行各业。将主要通过深度学习的神经网络实现的AI技术应用于多天线技术的信道状态信息反馈中是一种非常值得研究的领域。应用AI技术可以用更少的反馈开销达到和传统码本相同的性能。但是,在一些相关技术中,用于反馈CSI的神经网络存在全连接层,而全连接层对输入和输出的维度是固定的,对不同的天线数目、不同传输数据流或者不同的带宽下的场景,都需要对应不同的神经网络,这会增加无线通信系统的开销或者复杂度。
第一方面,参照图1,本公开实施例提供一种信道状态信息的反馈方法,包括步骤S110至S130。
在步骤S110中,确定神经网络参数,根据所述神经网络参数构造编码器。
在步骤S120中,利用所述编码器对信道信息进行压缩得到信道状态信息。
在步骤S130中,反馈所述信道状态信息。
在本公开实施例提供的信道状态信息的反馈方法中,基于神经网络构造终端中的编码器和基站中的解码器。终端中的编码器与基站中的解码器相对应,即当终端和基站中任意一者的神经网络参数确定时,另一者的神经网络参数也就确定了。终端中的编码器与基站中的解码器共同构成自编码器。
在本公开实施例提供的信道状态信息的反馈方法中,通过步骤S110确定的神经网络参数是通过线下训练或者通过线下训练和线上训练结合得到的多套自编码器的神经网络参数中的一者。终端中确定的神经网络参数与基站中的神经网络参数相对应。
需要说明的是，在本公开实施例提供的信道状态信息的反馈方法中，信道信息是终端通过接收参考信号，例如信道状态信息参考信号(CSI-RS,Channel State Information Reference Signal)获得的。在基站中，通过与终端中编码器对应的解码器对终端反馈的信道状态信息进行解码，能够获取信道信息。
还需要说明的是，本公开实施例中，终端可以包括手机、数据卡、笔记本、工厂的各种可移动的制造设备。本公开对此不做特殊限定。基站可以包括各种宏基站、微基站、家庭基站、微微基站。本公开对此也不做特殊限定。
在本公开实施例提供的信道状态信息的反馈方法中,终端或基站能够根据信道的场景、信道的角度扩展、时延扩展、多普勒扩展等至少一个因素确定神经网络参数,并且终端和基站能够交互神经网络参数信息,以在终端和基站中使用相对应的神经网络参数分别构造相对应的编码器和解码器,从而能够针对不同的天线数目、不同的传输数据流或者不同的带宽下的场景进行信道状态信息的反馈,降低了无 线通信的开销和复杂度。
在一些实施方式中,参照图2,步骤S120包括步骤S121和S122。
在步骤S121中,对所述信道信息进行预处理,使预处理后的信道信息的维度与所述编码器的输入数据维度相匹配。
在步骤S122中,利用所述编码器对所述预处理后的信道信息进行压缩得到所述信道状态信息。
需要说明的是,在本公开实施例提供的信道状态信息的反馈方法中,神经网络参数确定之后,编码器的输入数据维度也相应确定。编码器只能对特定天线数目或特定传输数据流的信道信息进行处理。在本公开实施例提供的信道状态信息的反馈方法中,使预处理后的信道信息的维度与编码器的输入数据维度相匹配是指,通过预处理,使得信道信息变换为符合编码器的输入数据维度。
还需要说明的是,本公开实施例中,信道信息可以是时域信道信息,也可以是频域信道信息。本公开对此不做特殊限定。
在一些实施方式中,所述对所述信道信息进行预处理,使预处理后的信道信息的维度与所述编码器的输入数据维度相匹配包括:
根据所述信道信息的维度与所述编码器的输入数据维度确定所述信道信息的组数K;以及
将所述信道信息分成K组,得到K组信道信息作为所述预处理后的信道信息,每一组信道信息的维度与所述编码器的输入数据维度相匹配,K为正整数。
在一些实施方式中,参照图3,步骤S121包括步骤S1211和S1212。
在步骤S1211中,根据信道参数确定所述信道信息的组数K。
在步骤S1212中,根据所述信道参数将所述信道信息分成K组,得到K组信道信息作为所述预处理后的信道信息,每一组信道信息的维度与所述编码器的输入数据维度相匹配,K为正整数,所述信道参数包括以下至少之一:天线极化方向数目、天线数目、数据流数目、时域采样点组数目、频域粒度组数目。
在本公开实施例中,天线数目可以是接收天线数目和/或发送天 线数目。本公开对此不做特殊限定。
本公开对于如何根据信道参数将所述信道信息分成K组不做特殊限定。
在一些实施方式中,根据所述信道参数将所述信道信息分成K组包括以下至少之一:
根据天线极化方向数目对所述信道信息分组;
根据天线数目对所述信道信息分组;
根据数据流数目对所述信道信息分组;
根据时域采样点组数目对所述信道信息分组;以及
根据频域粒度组数目对所述信道信息分组。
需要说明的是,在本公开实施例提供的信道状态信息的反馈方法中,根据信道参数将信道信息分成K组,可以是根据天线极化方向数目、天线数目、数据流数目、时域采样点组数目、频域粒度组数目中的任意一者将信道信息分成K组;也可以是根据多个信道参数将信道信息分成K组。本公开对此不做特殊限定。根据多个信道参数将信道信息分成K组时,例如根据天线极化方向数目和天线数目将信道信息分成K组时,可以是先根据天线极化方向数目初步分组,在初步分组的基础上根据天线数目进一步分组,最终得到K组信道信息;也可以是先根据天线数目初步分组,在初步分组的基础上根据天线极化方向数目进一步分组,最终得到K组信道信息。
在一些实施方式中,信道信息为H,它是一个Nt*Ns*C的矩阵,Nt表示天线数目,Ns表示采样点组数目,C表示通道维度。信道信息H可以是通过对获得的信道信息进行快速傅里叶变换得到时域信道/频域信道,再通过在时域/频域点上进行截断,归一化后得到的矩阵。
相应地,在一些实施方式中,所述根据天线极化方向数目对所述信道信息分组包括:
将同一个极化方向对应的信道信息分到同一组信道信息中。
在一些实施方式中,所述根据天线极化方向数目对所述信道信 息分组还包括:
在所述将同一个极化方向对应的信道信息分到同一组信道信息中之前,根据天线极化方向对所述信道信息进行排序。
在本公开实施例提供的信道状态信息的反馈方法中，根据天线极化方向对所述信道信息进行排序时，可以按照信道信息矩阵中的各行进行排序，也可以按块进行排序。本公开对此不做特殊限定。按块进行排序时，一个极化方向对应一个块，一个块中对应的信道可以是连续的，也可以是不连续的。本公开对此也不做特殊限定。例如，按块进行排序时，极化方向1对应的信道信息整体排在极化方向2之前，极化方向1对应的信道不一定连续，极化方向2对应的信道也不一定连续。
相应地,在一些实施方式中,所述根据天线数目对所述信道信息分组包括:
将同一个天线组对应的信道信息分到同一组信道信息中;
所述天线组包括发送天线组和/或接收天线组。
在一些实施方式中,所述根据天线数目对所述信道信息分组还包括:
在所述将同一个天线组对应的信道信息分到同一组信道信息中之前,根据天线的索引对所述信道信息进行排序。
需要说明的是,本公开实施例中,天线包括发送天线和/或接收天线。
在本公开实施例提供的信道状态信息的反馈方法中,根据天线的索引对所述信道信息进行排序时,可以按照信道信息矩阵中的各行进行排序,也可以按块进行排序。本公开对此不做特殊限定。按块进行排序时,一个天线组对应一个块,一个块中对应的信道可以是连续的,也可以是不连续的。本公开对此也不做特殊限定。例如,按天线组进行排序时,天线组i对应的信道信息整体排在天线组j对应的信道信息之前,i<j;天线组i对应的信道不一定连续,天线组j对应的信道也不一定连续。
相应地,在一些实施方式中,所述根据数据流数目对所述信道信息分组包括:
将同一个数据流对应的信道信息分到同一组信道信息中。
在一些实施方式中,所述根据数据流数目对所述信道信息分组还包括:
在所述将同一个数据流对应的信道信息分到同一组信道信息中之前,根据数据流索引对所述信道信息进行排序。
在本公开实施例提供的信道状态信息的反馈方法中,根据数据流索引对所述信道信息进行排序时,可以按照信道信息矩阵中的行进行排序,也可以按块进行排序。本公开对此不做特殊限定。按块进行排序时,一个数据流对应一个块,一个块中对应的信道可以是连续的,也可以是不连续的。本公开对此也不做特殊限定。例如,按数据流进行排序时,数据流i对应的信道信息整体排在数据流j对应的信道信息之前,i<j;数据流i对应的信道不一定连续,数据流j对应的信道也不一定连续。
相应地,在一些实施方式中,所述根据时域采样点组数目对所述信道信息分组包括:
将同一个采样点组对应的信道信息分到同一组信道信息中。
在一些实施方式中,所述根据时域采样点组数目对所述信道信息分组还包括:
在所述将同一个采样点组对应的信道信息分到同一组信道信息中之前,根据采样点大小索引对所述信道信息进行排序。
在本公开实施例提供的信道状态信息的反馈方法中,根据采样点大小索引对所述信道信息进行排序时,可以按照信道信息矩阵中的各行进行排序,也可以按块进行排序。本公开对此不做特殊限定。按块进行排序时,一个采样点组对应一个块,一个块中对应的信道可以是连续的,也可以是不连续的。本公开对此也不做特殊限定。例如,按采样点组进行排序时,采样点组i对应的信道信息整体排在采样点组j对应的信道信息之前,i<j;采样点组i对应的信道不一定连续, 采样点组j对应的信道也不一定连续。
在一些实施方式中,将采样点按大到小排序,并将它分成K组,即K个采样点组,假设第i组采样点的索引集合为Bi=Ns/K*(i-1)+1至Ns/K*i的整数,第i组采样点对应信道信息H(:,Bi,:),i=1,…,K,K为正整数。
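The index sets Bi defined above can be checked with a short sketch; Ns=32 and K=4 are example values, and the construction assumes K divides Ns evenly:

```python
Ns, K = 32, 4
step = Ns // K

# 1-based index sets Bi = {Ns/K*(i-1)+1, ..., Ns/K*i}, i = 1..K,
# so group i takes the channel information H(:,Bi,:).
B = [list(range(step * (i - 1) + 1, step * i + 1)) for i in range(1, K + 1)]

assert B[0][0] == 1 and B[-1][-1] == Ns
# The K sets partition {1, ..., Ns} without gaps or overlap.
flat = [x for b in B for x in b]
assert flat == list(range(1, Ns + 1))
```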
相应地,在一些实施方式中,所述根据频域粒度组数目对所述信道信息分组包括:
将同一个频域粒度组对应的信道信息分到同一组信道信息中。
在一些实施方式中,所述根据频域粒度组数目对所述信道信息分组还包括:
在将同一个频域粒度组对应的信道信息分到同一组信道信息中之前,根据频域粒度索引对所述信道信息进行排序。
在本公开实施例提供的信道状态信息的反馈方法中,根据频域粒度索引对所述信道信息进行排序时,可以按照信道信息矩阵中的行进行排序,也可以按块进行排序。本公开对此不做特殊限定。按块进行排序时,一个频域粒度组对应一个块,一个块中对应的信道可以是连续的,也可以是不连续的。本公开对此也不做特殊限定。例如,按频域粒度组进行排序时,频域粒度组i对应的信道信息整体排在频域粒度组j对应的信道信息之前,i<j;频域粒度组i对应的信道不一定连续,频域粒度组j对应的信道也不一定连续。
在一些实施方式中,频域粒度可以以子载波或者物理资源块或者子带为单位,一个物理资源块可以包括多个子载波,一个子带可以包括多个物理资源块,一个或者多个频域粒度组成一个频域粒度组。
在一些实施方式中,将频域粒度按大小排序并分成K组,假设第i组的频域粒度索引集合为Ci=Nf/K*(i-1)+1至Nf/K*i的整数,第i个频域粒度组对应信道信息H(:,Ci,:),i=1,…,K,K为正整数。
在一些实施方式中,所述利用所述编码器对所述预处理后的信道信息进行压缩得到所述信道状态信息包括:
利用所述编码器对K组信道信息中的各组信道信息分别进行压缩,得到K组信道状态信息。
在一些实施方式中,所述信道状态信息的反馈方法还包括:
在所述利用所述编码器对K组信道信息中的各组信道信息分别进行压缩,得到K组信道状态信息之后,获取所述各组信道信息的组间相位并反馈所述组间相位。
在一些实施方式中,所述信道状态信息的反馈方法还包括:
在所述利用所述编码器对K组信道信息中的各组信道信息分别进行压缩,得到K组信道状态信息之后,根据所述K组信道状态信息对应的组索引对所述K组信道状态信息进行联合编码。
在一些实施方式中,当信道信息的天线数目或采样点组数目等信道参数大于确定的神经网络参数对应的信道参数,例如,信道信息的天线数目大于确定的神经网络参数对应的天线数目时,预处理还可以包括下采样,从而使得预处理后的信道信息与编码器的输入数据维度相匹配。
在一些实施方式中,参照图4,步骤S121包括步骤S1213。
在步骤S1213中,对所述信道信息进行下采样,得到下采样后的信道信息作为所述预处理后的信道信息,所述下采样后的信道信息的维度与所述编码器的输入数据维度相匹配。
本公开对于如何对信道信息进行下采样不做特殊限定。
在一些实施方式中,对所述信道信息进行下采样包括以下至少一者:
对天线维度进行下采样;
对时域采样点维度进行下采样;
对频域粒度维度进行下采样。
在一些实施方式中,当信道信息的天线数目或采样点组数目等信道参数小于确定的神经网络参数对应的信道参数,例如,信道信息的天线数目小于确定的神经网络参数对应的天线数目时,预处理还可以包括补零,从而使得预处理后的信道信息与编码器的输入数据维度 相匹配。
在一些实施方式中,参照图5,步骤S121包括步骤S1214。
在步骤S1214中,对所述信道信息进行补零,得到补零后的信道信息作为所述预处理后的信道信息,所述补零后的信道信息的维度与所述编码器的输入数据维度相匹配。
本公开对于如何对信道信息进行补零不做特殊限定。
在一些实施方式中,对所述信道信息进行补零包括以下至少一者:
在同一极化方向的天线维度补零;
在时域采样点维度补零;
在频域粒度维度补零。
在一些实施方式中,参照图6,步骤S110包括步骤S111。
在步骤S111中,根据信道因素信息选择预先配置的至少一套自编码器的候选神经网络参数中的一者作为所述神经网络参数;所述自编码器包括一对编码器和解码器。
在一些实施方式中,参照图7,步骤S110包括步骤S112和S113。
在步骤S112中,接收神经网络参数信息。
在步骤S113中,根据所述神经网络参数信息确定所述神经网络参数。
需要说明的是,在一些实施方式中,基站根据信道情况确定神经网络参数后,将神经网络参数直接发送到终端,终端通过步骤S112接收的神经网络参数信息即为神经网络参数。在一些实施方式中,终端和基站分别保存多对编码器和解码器的神经网络参数,在使用时,基站根据信道情况通过高层信令将编码器和解码器对的索引发送到终端,终端通过步骤S112接收的神经网络参数信息即为编码器和解码器对的索引,终端接收所述索引就知道使用哪套编码器的神经网络参数。
在一些实施方式中,所述神经网络参数包括卷积层的卷积核的大小、卷积层的个数、卷积层的步长、卷积层的权值、卷积层的偏置、 卷积层的激活函数中的至少一者。
在一些实施方式中,所述编码器包括第一处理层和压缩层;所述第一处理层包括多个网络层,每一个所述网络层包括多个节点、至少一个网络层权值、激活函数和/或网络层偏置,所述第一处理层配置为提取所述信道信息的特征;
所述压缩层配置为对所述信道信息的特征进行压缩,得到所述信道状态信息。
在一些实施方式中,所述压缩层包括全连接层、卷积层组、循环网络中的任意一者,所述卷积层组包括至少一个卷积层。
第二方面,参照图8,本公开实施例提供一种信道状态信息的接收方法,包括步骤S210至S230。
在步骤S210中,确定神经网络参数,根据所述神经网络参数构造解码器。
在步骤S220中,接收信道状态信息。
在步骤S230中,利用所述解码器对所述信道状态信息进行解压缩得到第二信道信息。
在本公开实施例提供的信道状态信息的接收方法中,基于神经网络构造终端中的编码器和基站中的解码器。终端中的编码器与基站中的解码器相对应,即当终端和基站中任意一者的神经网络参数确定时,另一者的神经网络参数也就确定了。终端中的编码器与基站中的解码器共同构成自编码器。
在本公开实施例提供的信道状态信息的接收方法中,通过步骤S210确定的神经网络参数是通过线下训练或者通过线下训练和线上训练结合得到的多套自编码器的神经网络参数中的一者。终端中确定的神经网络参数与基站中的神经网络参数相对应。
需要说明的是，在本公开实施例提供的信道状态信息的接收方法中，终端通过接收参考信号，例如信道状态信息参考信号(CSI-RS,Channel State Information Reference Signal)获得信道信息。在基站中，通过与终端中编码器对应的解码器对终端反馈的信道状态信息进行解码，能够获取第二信道信息。在一些实施方式中，基站中获取的第二信道信息为终端获取的信道信息的估计值。
还需要说明的是，本公开实施例中，终端可以包括手机、数据卡、笔记本、工厂的各种可移动的制造设备。本公开对此不做特殊限定。基站可以包括各种宏基站、微基站、家庭基站、微微基站。本公开对此也不做特殊限定。
在本公开实施例提供的信道状态信息的接收方法中,终端或基站能够根据信道的场景、信道的角度扩展、时延扩展、多普勒扩展等至少一个因素确定神经网络参数,并且终端和基站能够交互神经网络参数信息,以在终端和基站中使用相对应的神经网络参数分别构造相对应的编码器和解码器,从而能够针对不同的天线数目、不同的传输数据流或者不同的带宽下的场景进行信道状态信息的反馈,降低了无线通信的开销和复杂度。
在一些实施方式中,所述信道状态信息包括对K组信道信息中的各组信道信息分别进行压缩得到的K组信道状态信息;所述利用所述解码器对所述信道状态信息进行解压缩得到第二信道信息包括:
对所述K组信道状态信息中的各组信道状态信息分别进行解压缩,得到K组信道信息;以及
根据所述K组信道信息获得所述第二信道信息;
K为正整数。
在一些实施方式中,所述根据所述K组信道信息获得所述第二信道信息包括:
根据信道参数对所述K组信道信息进行组合,得到所述第二信道信息;
所述信道参数包括以下至少之一:天线极化方向数目、天线数目、数据流数目、时域采样点组数目、频域粒度组数目。
在本公开实施例中,天线数目可以是接收天线数目和/或发送天线数目。本公开对此不做特殊限定。
在一些实施方式中,终端根据K组信道状态信息对应的组索引 对K组信道状态信息进行联合编码。基站接收到联合编码后的信道状态信息后,需要进行联合解码。
相应地,在一些实施方式中,所述信道状态信息的接收方法还包括:
在所述利用所述解码器对所述信道状态信息进行解压缩得到第二信道信息之前,根据K组信道状态信息对应的组索引对所述K组信道状态信息进行联合解码。
在一些实施方式中,终端对信道信息进行下采样,得到下采样后的信道信息作为预处理后的信道信息,并利用编码器对预处理后的信道信息进行压缩得到信道状态信息。在基站中,对信道状态信息进行解压缩后,还需要进一步根据上采样、差值、反卷积等方式获得第二信道信息。
相应地,在一些实施方式中,所述利用所述解码器对所述信道状态信息进行解压缩得到第二信道信息包括:
利用所述解码器对所述信道状态信息进行解压缩,得到下采样后的信道信息;以及
根据所述下采样后的信道信息进行上采样获得所述第二信道信息。
本公开对于如何进行上采样不做特殊限定。例如,基站中用与终端中进行下采样对应的方式进行上采样。
相应地,在一些实施方式中,所述根据下采样后的信道信息进行上采样获得所述第二信道信息包括以下至少之一:
对天线维度进行上采样;
对时域采样点维度进行上采样;
对频域粒度维度进行上采样。
在一些实施方式中,终端对信道信息进行补零,得到补零后的信道信息作为预处理后的信道信息,并利用编码器对预处理后的信道信息进行压缩得到信道状态信息。在基站中,对信道状态信息进行解压缩后,还需要进一步进行去零以获得第二信道信息。
相应地,在一些实施方式中,所述利用所述解码器对所述信道状态信息进行解压缩得到第二信道信息包括:
利用所述解码器对所述信道状态信息进行解压缩,得到补零后的信道信息;以及
根据所述补零后的信道信息进行去零获得所述第二信道信息。
本公开对于如何进行去零不做特殊限定。例如,基站中用与终端中进行补零对应的方式进行去零。
相应地,在一些实施方式中,所述根据补零后的信道信息进行去零获得第二信道信息包括以下至少之一:
在同一极化方向的天线维度去零;
在时域采样点维度去零;
在频域粒度维度去零。
在一些实施方式中,所述确定神经网络参数,根据所述神经网络参数构造解码器包括:
根据信道因素信息选择预先配置的至少一套自编码器的候选神经网络参数中的一者作为所述神经网络参数;所述自编码器包括一对编码器和解码器。
在一些实施方式中,所述确定神经网络参数,根据所述神经网络参数构造编码器包括:
接收神经网络参数信息;以及
根据所述神经网络参数信息确定所述神经网络参数。
需要说明的是,在一些实施方式中,终端根据信道的场景、信道的角度扩展、时延扩展、多普勒扩展等至少一个信道因素信息确定神经网络参数,将神经网络参数直接发送到基站,基站接收的神经网络参数信息即为神经网络参数。在一些实施方式中,终端和基站分别保存多对编码器和解码器的神经网络参数,在使用时,终端根据信道的场景、信道的角度扩展、时延扩展、多普勒扩展等至少一个信道因素信息确定神经网络参数,并通过高层信令将编码器和解码器对的索引发送到基站,基站接收所述索引就知道使用哪套解码器的神经网络 参数。
在一些实施方式中,所述神经网络参数包括卷积层的卷积核的大小、卷积层的个数、卷积层的步长、卷积层的权值、卷积层的偏置、卷积层的激活函数中的至少一者。
在一些实施方式中,所述解码器包括第二处理层和解压层;所述解压层配置为对所述信道状态信息进行解压缩;
所述第二处理层包括多个网络层,每一个所述网络层包括多个节点、至少一个网络层权值、激活函数和/或网络层偏置,所述第二处理层配置为提取解压缩后的信道状态信息的特征,得到所述第二信道信息。
在一些实施方式中,所述解压层包括全连接层、反卷积层组、循环网络中的任意一者。
第三方面,参照图9,本公开实施例提供一种终端,包括:
至少一个处理器101(图9中仅示出一个);
存储器102,其上存储有至少一个计算机程序,当所述至少一个计算机程序被所述至少一个处理器101执行时,使得所述至少一个处理器101实现第一方面所述的本公开实施例的信道状态信息的反馈方法;以及
至少一个I/O接口103,连接在处理器101与存储器102之间,配置为实现处理器101与存储器102的信息交互。
处理器101为具有数据处理能力的器件,包括但不限于中央处理器(CPU)等;存储器102为具有数据存储能力的器件,包括但不限于随机存取存储器(RAM,更具体如SDRAM、DDR等)、只读存储器(ROM)、带电可擦可编程只读存储器(EEPROM)、闪存(FLASH);I/O接口(读写接口)103连接在处理器101与存储器102间,能实现处理器101与存储器102的信息交互,包括但不限于数据总线(Bus)等。
在一些实施方式中,处理器101、存储器102和I/O接口103通过总线104相互连接,进而与计算设备的其它组件连接。
第四方面,参照图10,本公开实施例提供一种基站,包括:
至少一个处理器201(图10中仅示出一个);
存储器202,其上存储有至少一个计算机程序,当所述至少一个计算机程序被所述至少一个处理器201执行时,使得所述至少一个处理器201实现第二方面所述的本公开实施例的信道状态信息的接收方法;以及
至少一个I/O接口203,连接在处理器201与存储器202之间,配置为实现处理器201与存储器202的信息交互。
处理器201为具有数据处理能力的器件,包括但不限于中央处理器(CPU)等;存储器202为具有数据存储能力的器件,包括但不限于随机存取存储器(RAM,更具体如SDRAM、DDR等)、只读存储器(ROM)、带电可擦可编程只读存储器(EEPROM)、闪存(FLASH);I/O接口(读写接口)203连接在处理器201与存储器202间,能实现处理器201与存储器202的信息交互,包括但不限于数据总线(Bus)等。
在一些实施方式中,处理器201、存储器202和I/O接口203通过总线204相互连接,进而与计算设备的其它组件连接。
第五方面,参照图11本公开实施例提供一种计算机可读存储介质,其上存储有计算机程序,所述计算机程序被处理器执行时实现以下方法中的至少一者:
第一方面所述的本公开实施例的信道状态信息的反馈方法;
第二方面所述的本公开实施例的信道状态信息的接收方法。
为了使本领域技术人员能够更清楚地理解本公开实施例提供的技术方案,下面通过具体的实例,对本公开实施例提供的技术方案进行详细说明。
在本公开实施例中，索引Index、指示器Indicator是可以相互替换的。传输包括发送或接收；终端可以包括各种移动的设备，比如手机、数据卡、笔记本、工厂的各种制造设备；基站包括各种宏基站、微基站、家庭基站、微微基站等。在本公开中，对信道信息进行取值的定义如下，比如信道信息为三维矩阵H，那么在某个维度上只有冒号“:”，表示取对应维度上的所有值，在某个维度上取L1:L2，则只取对应维度上索引为L1开始到L2结束的值。比如H(1:3,2:4,:)，那么表示第一个维度上取索引为1、2、3的值，第二个维度取索引为2、3、4的值，而第三个维度取所有索引下的值。
实例一
本实例用于说明神经网络的编码器和解码器构造。
人工智能可以通过深度学习的神经网络实现,这里的神经网络是一个自编码器,包括编码器和解码器,编码器位于终端。这里终端可以是各种移动的设备,比如手机、数据卡、笔记本、工厂的各种制造设备;而解码器位于基站,基站包括各种宏基站、微基站、家庭基站、微微基站。
本实例中,反馈信道状态信息的架构如图12所示。
终端通过接收参考信号,比如信道状态信息参考信号(CSI-RS,Channel-state information reference signal)获得信道信息H。所述信道信息一般为一个Nt*Nr的复数矩阵。Nt和Nr分别为基站的发送天线数目和终端的接收天线数目,这里的天线可以是逻辑的天线,也可以是各种类型的物理天线。
终端获得信道信息后对信道信息进行预处理，并将预处理后的信道信息输入到编码器，编码器包括第一处理层和压缩层。第一处理层是一个子神经网络，包括C个网络层，第i层网络层包括L_{e,i}个节点、至少一个网络层权值W_{e,i}、0至1个网络层偏置b_{e,i}、激活函数S_{e,i}，i=1,…,C。第i层网络层可以是一个卷积层、池化层或者直连层等。压缩层可以是一个全连接层、或者C1个卷积层、或者一个循环网络，例如循环网络可以是长短期记忆网络(LSTM,long short-term memory network)、门控循环神经网络(GRU,gated recurrent neural network)。在基站侧，配置有对应于终端中编码器的解码器，解码器包括解压层和第二处理层，第二处理层包括L层，第i层包括L_{d,i}个节点、至少一个网络层权值W_{d,i}、0至1个网络层偏置b_{d,i}、激活函数S_{d,i}，i=1,…,L。第i层可以是一个卷积层、池化层或者直连层，也可以是它们的组合，或者是几个卷积层组成的卷积层组，比如残差网络块，解压层可以是一个全连接层、或者C1个卷积层、或者一个循环网络，例如，循环网络可以是长短期记忆网络、门控循环神经网络中的任意一者。终端将预处理后的信道信息输入到编码器，编码器输出信道状态信息，然后将信道状态信息传输给基站。在一些实施方式中，终端也可以将信道状态信息进行量化、编码、调制后传输给基站。基站接收信道状态信息，在一些实施方式中，基站也可以将信道状态信息进行解量化、解调、解编码后作为解码器的输入，解码器输出信道信息，所述压缩层的输入数据包括N个元素，而输出包括M个元素，M和N的比值叫压缩率。比如N为2048个元素，而M为32个元素，那么压缩率为32/2048=1/64。
实例二
本实例用于说明如何获得所述神经网络的参数。
通过线下训练、或者通过线下训练结合线上训练的过程得到K0套自编码器的神经网络参数,每套自编码器包括一对编码器和解码器。终端和基站分别保存所述K0对编码器和解码器的神经网络参数。在使用时,基站根据信道情况通过高层信令配置所述K0套编码器和解码器对的索引,这样终端接收所述索引就知道使用了哪套编码器的神经网络参数,即基站配置编码器和解码器对的索引,终端接收所述编码器和解码器对的索引,确定编码器对应的神经网络参数;或者终端根据信道的场景、信道的角度扩展、时延扩展、多普勒扩展等至少一个因素选择一个编码器,并将所选的编码器信息通过物理层和/或高层信令传输给基站。
在一些实施方式中,编码器里的压缩层和解码器里的解压层都是一个全连接层,本实施方式中,反馈信道状态信息的架构如图13所示。每个全连接层的权值是一个二维的矩阵,在压缩层中,它是一个N*M的矩阵,N、M分别表示压缩层对应的全连接层的输入和输出的维度;在解压层中,它是一个M*N的矩阵,M、N分别表示解压层 对应的全连接层的输入和输出的维度。而N一般是一个比较大的数字,比如,N=Nt*Ns*C,Nt,Ns,C分别表示信道矩阵对应的天线数目、截断的时域信道样点个数、通道个数,在Nt=32,Ns=32,C=2时,N=2048,在压缩率为1/4的情况下,M=512,那么全连接层的参数个数就是N*M=2048*512=1048576,即超过100万个参数。随着天线的增加以及采样点的增加,它的参数个数是线性增加的。并且输入和输出维度N和M是固定的,每对N和M都需要传输或者保留一套参数。基站可以通过高层信令传输所述的编码器的网络参数给终端。
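The parameter count quoted above follows directly from the layer dimensions; a quick check of the text's own numbers (Nt=32, Ns=32, C=2, compression rate 1/4):

```python
# Fully connected compression layer: N inputs, M outputs, so N*M weights.
Nt, Ns, C = 32, 32, 2
N = Nt * Ns * C          # flattened input dimension
M = N // 4               # output dimension at compression rate 1/4

assert N == 2048 and M == 512
assert N * M == 1048576  # over one million parameters, as the text states
```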
在一些实施方式中,编码器里的压缩层是由一组卷积层构成的卷积层块,比如多个卷积层的串行和/或并行得到的一个卷积层块,比如残差块resblock、稠密块denseblock或者包括多个串行链接的卷积块。在本公开中,卷积层块和卷积层组是同等的可以互相替换的概念。本实例中,反馈信道状态信息的架构如图14所示,多个卷积层块构成了压缩层。基站通过配置高层和/或物理层信令的方式将一个或者多个卷积层块的参数传输给终端,终端通过接收高层信令和/或者物理层信令的方式获取所述一个或者多个卷积层块的参数,所述卷积层块的参数包括但不限于以下至少之一:每个卷积层的卷积核大小、卷积核的步长、数据填充方式、卷积核的权值、激活函数。
在一些实施方式中,解码器里的解压层是由一组逆卷积层构成的逆卷积层块,比如多个逆卷积层的串行和/或并行得到的一个逆卷积层块,比如转置卷积层、空洞卷积层等,本公开中,逆卷积层块和逆卷积层组是相同的概念。每个解码器对应的逆卷积块和编码器对应的卷积层块的参数可以通过线上或者线下训练的方式得到,并且成对的出现。卷积层块和逆卷积层块中卷积层的数量由以下至少之一确定:发送天线数目、接收天线数目、信道的秩、物理资源块个数、时域采样点个数、压缩率。因为他们的大小确定了卷积层块的输入和输出大小,每个卷积层通过不同的步长来控制卷积层块的输入和输出的大小。逆卷积层的情况类似。
实例三
本实例用来说明对输入数据的预处理,对信道信息进行分组,且信道信息为时域信道。
在压缩层或者解压层中,如果输入和输出的元素个数和大小是固定的,那么每个不同的输入和输出大小都要配置和训练一套神经网络参数,比如压缩层的输入大小为N而输出为M,而N由天线数目Nt、采样点数目Ns确定,即N=C*Nt*Ns,C为通道维度,一般等于2。改变天线的数目和采样点的数目,都可能改变神经网络的参数取值。而实际中,天线数目是多样性的,采样点数目也可能由于信道场景的不同而不同,这就会导致多套神经网络参数的传输或者保存问题。一种减小神经网络参数的套数的方法是训练一套基准大小的神经网络参数,比如天线数目为Nt1、采样点数目为Ns1对应的神经网络参数,而对于Nt不等于Nt1,或Ns不等于Ns1的情况,对输入的数据进行分组、下采样或补零等方式以匹配Nt1*Ns1的维度。这里Nt1和Ns1为大于1的正整数,不失一般性,这里假设Nt1=16,Ns1=16,其它取值情况类似。
在一些实施方式中,将信道信息按极化方向对天线维度进行分组,比如信道信息为H,它是一个Nt*Ns*C的矩阵,它可以是通过对获得的信道信息进行快速傅里叶变换得到时域信道,再通过在时域点上进行截断,归一化后得到的矩阵,比如Nt=32、Ns=16、C=2,通道维度也可以在第一个维度,做法和在第三个维度类似。信道信息在信道维度中先排第一个极化方向对应的信道,再排第二个极化方向对应的信道,比如第一个极化方向为极化角度为P1的极化,而第二个极化方向为极化角度为P2的极化,P1和P2不相同,且取值可以为-45°、0°、90°、+45°中的至少一个。比如在Nt=32时,1行到Nt/2行对应第一个极化方向的信道信息,而第Nt/2+1行到Nt行对应第二个极化方向的信道。根据极化方向将信道信息分成K组,此时K=2,需要说明的是,这里也可以是根据发送天线数目进行分组,不一定是按极化天线分成两组,可以根据发送天线数目分成K组,此时K>2。第一组对应H(1:Nt/2,:,:)的信道信息,而第二组对应H(1+Nt/2: Nt,:,:)的信道信息。这里的天线极化方向包括发送天线的极化方向和/或接收天线的极化方向。在一些实施方式中,需要反馈不同极化天线组间的极化相位。而在接收端在通过解压缩模块后得到所述K组信道信息的估计值,比如K组第二信道信息,需要根据上述分组规则,进行逆操作,将K组信道信息的估计值链接成所述信道信息的估计值,即第二信道信息。
在一些实施方式中,信道信息的天线数目Nt和基准神经网络的天线数目Nt1比较接近,在Nt>Nt1的时候,可以对天线进行下采样,比如取前1至Nt1的信道信息H(1:Nt1,:,:)、或者后(Nt-Nt1+1):Nt对应的信道信息H((Nt-Nt1+1):Nt,:,:)、或者天线索引为奇数对应的信道信息、或者天线索引为偶数对应的信道信息。或者同一个极化方向分别取Nt1/2个信道信息,比如取第一个极化方向的Nt1/2个信道信息H(1:Nt1/2,:,:),以及第二个极化方向的Nt1/2个信道信息H(Nt/2+1:Nt/2+Nt1/2,:,:),然后把他们合并起来。
而在接收端通过解压缩模块后得到所述信道信息下采样的估计值,比如下采样第二信道信息,需要根据上述下采样的规则,进行逆操作,比如上采样(包括插值、上采样、逆卷积层、全连接层等逆操作)得到所述信道信息的估计值,即第二信道信息。
在一些实施方式中,信道信息的天线数目Nt比基准神经网络的天线数目Nt1小,在Nt<Nt1的时候,即,信道信息为H0,它的维度为Nt*Ns*C的矩阵,需要把它补零到维度为Nt1*Ns*C的矩阵H。可在天线维度进行补零操作,比如对H第一个维度的前1至(Nt1-Nt+1)的信道信息补零,即H(1:Nt1-Nt,:,:)取值为零,而后(Nt1-Nt+1):Nt行取H0;或者对后(Nt1-Nt+1):Nt1对应的信道信息补零,即H((Nt1-Nt+1):Nt1,:,:)的值为0,而1:Nt的信道取值为H0;或者H的每隔一个元素取值为H0,其它地方补零。或者在同一个极化方向补零,即第一个极化方向取H0的第一个极化方向的信道的Nt/2个信道信息,即H(1:Nt/2,:,:)取H0的第一个极化方向的信道信息,第二个极化方向取H0的第二个极化方向的Nt/2个信道信息,即 H(1+Nt1/2:Nt1/2+Nt/2,:,:)取H0的第二个极化方向的信道信息,其它的天线索引对应的信道信息取值为零,例如在Nt1=16,Nt=8的情况下,第一个极化方向的信道信息H(1:4,:,:)取第一个极化方向的信道信息H0(1:4,:,:)的值,第二个极化方向的信道信息H(9:12,:,:)取第二个极化方向的信道信息H0(5:8,:,:)的值,其它的天线索引对应的信道信息取值为零,即H(5:8,:,:)和H(13:16,:,:)取值为0。
而在接收端在通过解压缩模块后得到所述信道信息补零后的估计值,比如补零后的第二信道信息,需要根据上述补零的规则,进行逆操作,比如去零操作(即,将原来补零的地方的零去掉)得到所述信道信息的估计值,即第二信道信息。
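The per-polarization zero padding with Nt1=16, Nt=8 described above, together with the receiver-side de-zeroing, can be sketched as follows (NumPy, with 0-based indices corresponding to the text's 1-based rows):

```python
import numpy as np

Nt1, Nt, Ns, C = 16, 8, 16, 2
H0 = np.random.randn(Nt, Ns, C)
H = np.zeros((Nt1, Ns, C))

# First polarization: rows 1..4 of H take rows 1..4 of H0 (1-based in
# the text, 0-based below); second polarization: rows 9..12 of H take
# rows 5..8 of H0. All other antenna rows stay zero.
H[0:4] = H0[0:4]
H[8:12] = H0[4:8]

assert np.all(H[4:8] == 0) and np.all(H[12:16] == 0)

# Receiver-side "de-zeroing" (the inverse operation): drop the padded
# rows to recover the original channel information.
H0_hat = np.concatenate([H[0:4], H[8:12]], axis=0)
assert np.array_equal(H0_hat, H0)
```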
在一些实施方式中,根据接收天线的数目或者信道的秩(数据流数)对天线维度进行分组。比如信道信息为H,它是一个Nt*Ns*C的矩阵,它可以是通过对获得的信道信息进行快速傅里叶变换得到时域信道,再通过在时域点上进行截断,归一化后得到的矩阵,比如Nt=Ntx*K时,Ntx为发送天线数目,比如为16,Ns=16,C=2,K为接收天线数目,通道维度也可以在第一个维度,做法和在第三个维度类似。信道信息在信道维度中按接收天线索引进行排序,比如先排第一个接收天线对应的信道,再排第二根接收天线对应的信道,以此类推排序,直到排完所有接收天线数目。比如,第i个天线索引集Ai行对应第i根接收天线的信道信息,Ai=Ntx*(i-1)+1至Ntx*i的整数,i=1,…,K,K为接收天线数目。根据接收天线数目将信道信息分成K组,第i组对应H(Ai,:,:)的信道信息。
在一些实施方式中,根据信道的秩(数据流数)对天线维度进行分组。比如信道信息为H,它是一个Nt*Ns*C的矩阵,它可以是通过对获得的信道信息进行快速傅里叶变换得到时域信道,再通过在时域点上进行截断,归一化后得到的矩阵,比如Nt=Ntx*K时,Ntx为发送天线数目,比如为16,Ns=16,C=2,K为信道秩或者数据流数,通道维度也可以在第一个维度,做法和在第三个维度类似。信道信息在 信道维度中按信道秩或数据流索引进行排序,比如先排第一数据流对应的信道,再排第二数据流的信道,以此类推排序,直到排完所有数据流。比如在Nt=Ntx*K时,Ntx为发送天线数目比如为16,第i个天线索引集Ai行对应第i个数据流的信道信息,Ai=Ntx*(i-1)+1至Ntx*i的整数,i=1,…,K,K为信道秩或者数据流数。根据信道秩或者数据流数将信道信息分成K组,第i组对应H(Ai,:,:)的信道信息。
在一些实施方式中,根据采样点数目对采样点维度进行分组。比如信道信息为H,它是一个Nt*Ns*C的矩阵,它可以是通过对获得的信道信息进行快速傅里叶变换得到时域信道,再通过在时域点上进行截断,归一化后得到的矩阵,比如天线数目为Nt=16,Ns=16,C=2,通道维度也可以在第一个维度,做法和在第三个维度类似。信道信息在采样点维度是根据采样点大小索引排列的。但由于信道的延迟拓展在同一场景中是不同的,比如在室内会小一些,在城市宏中会大些,这导致不同场景能代表的信道信息的截断的采样点数目可能是不同的。根据采样点数目将信道信息分成K组,这里将采样点按大到小排序,并将它分成K组,即K个采样点组,假设第i组采样点的索引集合为Bi=Ns/K*(i-1)+1至Ns/K*i的整数,第i组采样点对应信道信息H(:,Bi,:),i=1,…,K,K为正整数。
而在接收端在通过解压缩模块后得到所述K组信道信息的估计值,比如K组第二信道信息,需要根据上述分组规则,进行逆操作,将K组信道信息的估计值链接成所述信道信息的估计值,即第二信道信息,这里包括根据接收天线、数据流数、采样点组数至少之一对输入数据进行分组的情况。
在一些实施方式中,由于不同的用户离基站的远近不同,参照图15,非零采样点或者大于门限值T0的采样点位置是不同的。如果代表信道的采样点数目Ns0和Ns比较接近,如果Ns0<Ns时,需要在采样点维度上补零,即,H(:,Ns0:Ns,:)的信道赋值为0;如果Ns0>Ns,那么需要进一步对信道信息进行截断,使得截断后的采样点数为Ns, 反馈截断窗口的起始位置、或者截断点的索引值。
同理,这里也需要对解压缩后的信道信息进行去零或上采样以得到最终的信道信息。
将所述K组信道信息分别输入编码器,并分别得到K组信道信息的信道状态信息,对所述K组信道状态信息进行量化、联合编码得到最终的信道状态信息。反馈所述信道状态信息给基站。
基站得到所述的信道状态信息后,对其进行解编码得到K组信道状态信息,将K组信道状态信息分别输入解码器,分别得到K组信道信息,对所述K组信道状态信息在极化维度上还原得到最终的信道信息,比如在信道维度上按先后顺序先放第一组信道信息,再放第二组信道信息等还原为最终的信道信息。
需要说明的是,在实际中,上述对信道信息进行分组的方法可以相互组合。
实例四
本实例用来说明对输入数据的预处理,对信道信息进行分组,且信道信息为频域信道。
在压缩层或者解压层中,如果他们的输入和输出的元素个数和大小是固定的,那么每个不同的输入和输出大小都要配置和训练一套参数,比如在压缩层的输入大小为N而输出为M,而N由天线数目Nt,频域粒度Nf确定,即N=C*Nt*Nf,C为通道维度,一般等于2,Nt为天线数目,Nf为频域粒度,频域粒度可以以子载波或者物理资源块或者子带为单位,一个物理资源块可以包括多个子载波,比如6个、12个等,一个子带包括多个物理资源块,一个或者多个频域粒度组成一个频域粒度组。改变天线的数目Nt和频域粒度Nf,都可能改变神经网络的参数取值。而实际中,天线数目是多样性的,频域粒度Nf跟配置的带宽有关,这就会导致多套神经网络参数的传输或者保存问题。一种减小神经网络参数的套数的方法是训练一套基准大小的神经网络参数,比如天线数目为Nt1,频域粒度为Ns1对应的神经网络参数,而对于Nt>Nt1,或Ns>Ns1的情况,对输入的数据进行分组, 下采样或补零等方式以匹配Nt1*Ns1的维度。这里Nt1和Ns1为大于1的正整数,不失一般性,这里假设Nt1=16,Ns1=16,其它取值情况类似。
在一些实施方式中,将信道信息按极化方向对天线维度进行分组,比如信道信息为H,它是一个Nt*Nf*C的矩阵,它可以是通过对获得的信道归一化后得到的矩阵,比如Nt=32,Nf=16,C=2,通道维度也可以在第一个维度,做法和在第三个维度类似。信道信息在信道维度中先排第一个极化方向对应的信道,再排第二个极化方向的信道,比如第一个极化方向为极化角度为P1的极化,而第二个极化方向为极化角度为P2的极化,P1和P2不相同,且取值可以为-45°、0°、90°、+45°中的至少一个。比如在Nt=32时,1行到Nt/2行对应第一个极化方向的信道信息,而第Nt/2+1行到Nt行对应第二个极化方向的信道。根据极化方向将信道信息分成K组,此时K=2。第一组对应H(1:Nt/2,:,:)的信道信息,而第二组对应H(1+Nt/2:Nt,:,:)的信道信息。这里的天线极化方向包括发送天线的极化方向和/或接收天线的极化方向。在一些实施方式中,需要反馈不同极化天线组间的极化相位。
在一些实施方式中,根据接收天线的数目或者信道的秩(数据流数)对天线维度进行分组。比如信道信息为H,它是一个Nt*Nf*C的矩阵,它可以是通过对获得的信道信息归一化后得到的矩阵,比如Nt=Ntx*K时,Ntx为发送天线数目,比如为16,Nf=16,C=2,K为接收天线数目,通道维度也可以在第一个维度,做法和在第三个维度类似。信道信息在信道维度中按接收天线索引进行排序,比如先排第一个接收天线对应的信道,再排第二根接收天线的信道,以此类推排序,直到排完所有接收天线数目。比如,第i个天线索引集Ai行对应第i根接收天线的信道信息,Ai=Ntx*(i-1)+1至Ntx*i的整数,i=1,…,K,K为接收天线数目。根据接收天线数目将信道信息分成K组,第i组对应H(Ai,:,:)的信道信息。
在一些实施方式中，根据信道的秩（数据流数）对天线维度进行分组。比如信道信息为H，它是一个Nt*Nf*C的矩阵，它可以是通过对获得的信道信息归一化后得到的矩阵，比如Nt=Ntx*K时，Ntx为发送天线数目比如为16，Nf=16，C=2，K为信道秩或者数据流数，通道维度也可以在第一个维度，做法和在第三个维度类似。信道信息在信道维度中按信道秩或数据流索引进行排序，比如先排第一数据流对应的信道，再排第二数据流的信道，以此类推排序，直到排完所有数据流。比如在Nt=Ntx*K时，Ntx为发送天线数目比如为16，第i个天线索引集Ai行对应第i个数据流的信道信息，Ai=Ntx*(i-1)+1至Ntx*i的整数，i=1,…,K，K为信道秩或者数据流数。根据信道秩或者数据流数将信道信息分成K组，第i组对应H(Ai,:,:)的信道信息。
在一些实施方式中,根据频域粒度进行分组。比如信道信息为H,它是一个Nt*Nf*C的矩阵,它可以是通过对获得的信道信息归一化后得到的矩阵,比如N天线数目比如为16,Nf=32,C=2,通道维度也可以在第一个维度,做法和在第三个维度类似。信道信息在频域上根据频域粒度索引排列的。但由于系统对应的信道的带宽可能不同。根据频域粒度将信道信息分成K组,将频域粒度按大小排序并分成K组,假设第i组的频域粒度索引集合为Ci=Nf/K*(i-1)+1至Nf/K*i的整数,第i个频域粒度组对应信道信息H(:,Ci,:),i=1,…,K,K为正整数。
在一些实施方式中,如果频域的粒度Nf和Ns比较接近,如果Nf<Ns时,需要在频域维度上补零,即,H(:,Nf:Ns,:)的信道赋值为0;如果Nf>Ns,那么需要进一步对信道信息进行截断,使得截断后的采样点数为Ns,反馈阶段的频域粒度对应的索引。另外一种方法是对频域信道进行下采样,比如只选择频域粒度索引为奇数或者偶数的信道信息。即选择H(:,1:2:Ns,:)或者H(:,2:2:Ns,:)的信道作为编码器的输入。
在一些实施方式中,频域粒度为子带,那么子带(subband)中包括的物理资源块个数与压缩率以及带宽有关,比如,压缩率越小, Subband包括的物理资源块数目越少。比如,同样是100个物理资源块,在压缩率为1/20时,每个subband包括8个物理资源块,在压缩率为1/10时,每个subband包括的物理资源块数为4。另外一方面,subband包括的物理资源块个数和带宽成正比。比如10M带宽时每个subband包括4个物理资源块,那么20M带宽每个subband包括8个物理资源块,以此类推。
将所述K组信道信息分别输入编码器,并分别得到K组信道信息的信道状态信息,对所述K组信道状态信息进行量化、联合编码得到最终的信道状态信息。反馈所述信道状态信息给基站。
基站得到所述的信道状态信息后,对其进行解编码得到K组信道状态信息,将K组信道状态信息分别输入解码器,分别得到K组信道信息,对所述K组信道状态信息在极化维度上还原得到最终的信道信息,比如在信道维度上按先后顺序先放第一组信道信息,再放第二组信道信息等还原为最终的信道信息。
需要说明的是,在实际中,上述对信道信息进行分组的方法可以相互组合。
实例五
补充天线数目小于网络规定的天线数目的情况。
比如天线数目为8,但网络的输入中天线数目为32,那么需要补0,使得网络输入的天线数目一致。
比如:
1)对于同极化方向上的天线数目上补零;
2)在数组末尾补零;
3)对同极化方向上的多列天线或者多行天线上补零;
4)在空间维度补零输入。
本领域普通技术人员可以理解,上文中所公开方法中的全部或某些步骤、装置中的功能模块/单元可以被实施为软件、固件、硬件及其适当的组合。在硬件实施方式中,在以上描述中提及的功能模块/单元之间的划分不一定对应于物理组件的划分;例如,一个物理组 件可以具有多个功能,或者一个功能或步骤可以由若干物理组件合作执行。某些物理组件或所有物理组件可以被实施为由处理器(如中央处理器、数字信号处理器或微处理器)执行的软件,或者被实施为硬件,或者被实施为集成电路,如专用集成电路。这样的软件可以分布在计算机可读介质上,计算机可读介质可以包括计算机存储介质(或非暂时性介质)和通信介质(或暂时性介质)。如本领域普通技术人员公知的,术语计算机存储介质包括在用于存储信息(诸如计算机可读指令、数据结构、程序模块或其它数据)的任何方法或技术中实施的易失性和非易失性、可移除和不可移除介质。计算机存储介质包括但不限于RAM、ROM、EEPROM、闪存或其它存储器技术、CD-ROM、数字多功能盘(DVD)或其它光盘存储、磁盒、磁带、磁盘存储或其它磁存储装置、或者可以用于存储期望的信息并且可以被计算机访问的任何其它的介质。此外,本领域普通技术人员公知的是,通信介质通常包含计算机可读指令、数据结构、程序模块或者诸如载波或其它传输机制之类的调制数据信号中的其它数据,并且可包括任何信息递送介质。
本文已经公开了示例实施例,并且虽然采用了具体术语,但它们仅用于并仅应当被解释为一般说明性含义,并且不用于限制的目的。在一些实例中,对本领域技术人员显而易见的是,除非另外明确指出,否则与特定实施例相结合描述的特征、特性和/或元素可单独使用,或可与结合其它实施例描述的特征、特性和/或元件组合使用。因此,本领域技术人员将理解,在不脱离由所附的权利要求阐明的本公开的范围的情况下,可进行各种形式和细节上的改变。

Claims (38)

  1. 一种信道状态信息的反馈方法,包括:
    确定神经网络参数,根据所述神经网络参数构造编码器;
    利用所述编码器对信道信息进行压缩得到信道状态信息;以及
    反馈所述信道状态信息。
  2. 根据权利要求1所述的方法,其中,所述利用所述编码器对信道信息进行压缩得到信道状态信息包括:
    对所述信道信息进行预处理,使预处理后的信道信息的维度与所述编码器的输入数据维度相匹配;以及
    利用所述编码器对所述预处理后的信道信息进行压缩得到所述信道状态信息。
  3. 根据权利要求2所述的方法,其中,所述对所述信道信息进行预处理,使预处理后的信道信息的维度与所述编码器的输入数据维度相匹配包括:
    根据所述信道信息的维度与所述编码器的输入数据维度确定所述信道信息的组数K;以及
    将所述信道信息分成K组,得到K组信道信息作为所述预处理后的信道信息,其中,每一组信道信息的维度与所述编码器的输入数据维度相匹配,K为正整数。
  4. 根据权利要求2所述的方法,其中,所述对所述信道信息进行预处理,使预处理后的信道信息的维度与所述编码器的输入数据维度相匹配包括:
    根据信道参数确定所述信道信息的组数K;以及
    根据所述信道参数将所述信道信息分成K组,得到K组信道信息作为所述预处理后的信道信息,其中,每一组信道信息的维度与所 述编码器的输入数据维度相匹配,K为正整数,所述信道参数包括以下至少之一:天线极化方向数目、天线数目、数据流数目、时域采样点组数目、频域粒度组数目。
  5. 根据权利要求4所述的方法,其中,所述根据所述信道参数将所述信道信息分成K组包括以下至少之一:
    根据天线极化方向数目对所述信道信息分组;
    根据天线数目对所述信道信息分组;
    根据数据流数目对所述信道信息分组;
    根据时域采样点组数目对所述信道信息分组;
    根据频域粒度组数目对所述信道信息分组。
  6. 根据权利要求5所述的方法,其中,所述根据天线极化方向数目对所述信道信息分组包括:
    将同一个极化方向对应的信道信息分到同一组信道信息中。
  7. 根据权利要求5所述的方法,其中,所述根据天线数目对所述信道信息分组包括:
    将同一个天线组对应的信道信息分到同一组信道信息中;
    其中,所述天线组包括发送天线组或接收天线组中的至少之一。
  8. 根据权利要求5所述的方法,其中,所述根据数据流数目对所述信道信息分组包括:
    将同一个数据流对应的信道信息分到同一组信道信息中。
  9. 根据权利要求5所述的方法,其中,所述根据时域采样点组数目对所述信道信息分组包括:
    将同一个采样点组对应的信道信息分到同一组信道信息中。
  10. 根据权利要求5所述的方法,其中,所述根据频域粒度组数目对所述信道信息分组包括:
    将同一个频域粒度组对应的信道信息分到同一组信道信息中。
  11. 根据权利要求3至10中任意一项所述的方法,其中,所述利用所述编码器对预处理后的信道信息进行压缩得到所述信道状态信息包括:
    利用所述编码器对K组信道信息中的各组信道信息分别进行压缩,得到K组信道状态信息。
  12. 根据权利要求11所述的方法,还包括:
    在所述利用所述编码器对K组信道信息中的各组信道信息分别进行压缩,得到K组信道状态信息之后,获取所述各组信道信息的组间相位并反馈所述组间相位。
  13. 根据权利要求11所述的方法,还包括:
    在所述利用所述编码器对K组信道信息中的各组信道信息分别进行压缩,得到K组信道状态信息之后,根据所述K组信道状态信息对应的组索引对所述K组信道状态信息进行联合编码。
  14. 根据权利要求2所述的方法,其中,所述对所述信道信息进行预处理,使预处理后的信道信息的维度与所述编码器的输入数据维度相匹配包括:
    对所述信道信息进行下采样,得到下采样后的信道信息作为所述预处理后的信道信息,所述下采样后的信道信息的维度与所述编码器的输入数据维度相匹配。
  15. 根据权利要求14所述的方法,其中,所述对所述信道信息进行下采样包括以下至少之一:
    对天线维度进行下采样;
    对时域采样点维度进行下采样;
    对频域粒度维度进行下采样。
  16. 根据权利要求2所述的方法,其中,所述对所述信道信息进行预处理,使预处理后的信道信息的维度与所述编码器的输入数据维度相匹配包括:
    对所述信道信息进行补零,得到补零后的信道信息作为所述预处理后的信道信息,所述补零后的信道信息的维度与所述编码器的输入数据维度相匹配。
  17. 根据权利要求16所述的方法,其中,所述对所述信道信息进行补零包括以下至少之一:
    在同一极化方向的天线维度补零;
    在时域采样点维度补零;
    在频域粒度维度补零。
  18. 根据权利要求1至10、14至17中任意一项所述的方法,其中,所述确定神经网络参数,根据所述神经网络参数构造编码器包括:
    根据信道因素信息选择预先配置的至少一套自编码器的候选神经网络参数中的一者作为所述神经网络参数;其中,所述自编码器包括一对编码器和解码器。
  19. 根据权利要求1至10、14至17中任意一项所述的方法,其中,所述确定神经网络参数,根据所述神经网络参数构造编码器包括:
    接收神经网络参数信息;以及
    根据所述神经网络参数信息确定所述神经网络参数。
  20. 根据权利要求1至10、14至17中任意一项所述的方法,其中,所述神经网络参数包括卷积层的卷积核的大小、卷积层的个数、卷积层的步长、卷积层的权值、卷积层的偏置、卷积层的激活函数中的至少一者。
  21. 根据权利要求1至10、14至17中任意一项所述的方法,其中,所述编码器包括第一处理层和压缩层;所述第一处理层包括多个网络层,每一个所述网络层包括多个节点、至少一个网络层权值、激活函数或网络层偏置中的至少一者,所述第一处理层配置为提取所述信道信息的特征;
    所述压缩层配置为对所述信道信息的特征进行压缩,得到所述信道状态信息。
  22. 根据权利要求21所述的方法,其中,所述压缩层包括全连接层、卷积层组、循环网络中的任意一者。
  23. 一种信道状态信息的接收方法,包括:
    确定神经网络参数,根据所述神经网络参数构造解码器;
    接收信道状态信息;以及
    利用所述解码器对所述信道状态信息进行解压缩得到第二信道信息。
  24. 根据权利要求23所述的方法,其中,所述信道状态信息包括对K组信道信息中的各组信道信息分别进行压缩得到的K组信道状态信息;所述利用所述解码器对所述信道状态信息进行解压缩得到第二信道信息包括:
    对所述K组信道状态信息中的各组信道状态信息分别进行解压缩,得到K组信道信息;以及
    根据所述K组信道信息获得所述第二信道信息;
    其中,K为正整数。
  25. 根据权利要求24所述的方法,其中,所述根据所述K组信道信息获得所述第二信道信息包括:
    根据信道参数对所述K组信道信息进行组合,得到所述第二信道信息;
    其中,所述信道参数包括以下至少之一:天线极化方向数目、天线数目、数据流数目、时域采样点组数目、频域粒度组数目。
  26. 根据权利要求24或25所述的方法,还包括:
    所述利用所述解码器对所述信道状态信息进行解压缩得到第二信道信息之前,根据K组信道状态信息对应的组索引对所述K组信道状态信息进行联合解码。
  27. 根据权利要求23所述的方法,其中,所述利用所述解码器对所述信道状态信息进行解压缩得到第二信道信息包括:
    利用所述解码器对所述信道状态信息进行解压缩,得到下采样后的信道信息;以及
    根据所述下采样后的信道信息进行上采样获得所述第二信道信息。
  28. 根据权利要求27所述的方法,其中,所述根据所述下采样后的信道信息进行上采样获得所述第二信道信息包括以下至少之一:
    对天线维度进行上采样;
    对时域采样点维度进行上采样;
    对频域粒度维度进行上采样。
  29. 根据权利要求23所述的方法,其中,所述利用所述解码器 对所述信道状态信息进行解压缩得到第二信道信息包括:
    利用所述解码器对所述信道状态信息进行解压缩,得到补零后的信道信息;以及
    根据所述补零后的信道信息进行去零获得所述第二信道信息。
  30. 根据权利要求29所述的方法,其中,所述根据所述补零后的信道信息进行去零获得第二信道信息包括以下至少之一:
    在同一极化方向的天线维度去零;
    在时域采样点维度去零;
    在频域粒度维度去零。
  31. 根据权利要求23至25、27至30中任意一项所述的方法,其中,所述确定神经网络参数,根据所述神经网络参数构造解码器包括:
    根据信道因素信息选择预先配置的至少一套自编码器的候选神经网络参数中的一者作为所述神经网络参数;其中,所述自编码器包括一对编码器和解码器。
  32. 根据权利要求23至25、27至30中任意一项所述的方法,其中,所述确定神经网络参数,根据所述神经网络参数构造编码器包括:
    接收神经网络参数信息;以及
    根据所述神经网络参数信息确定所述神经网络参数。
  33. 根据权利要求23至25、27至30中任意一项所述的方法,其中,所述神经网络参数包括卷积层的卷积核的大小、卷积层的个数、卷积层的步长、卷积层的权值、卷积层的偏置、卷积层的激活函数中的至少一者。
  34. 根据权利要求23至25、27至30中任意一项所述的方法,其中,所述解码器包括第二处理层和解压层;所述解压层配置为对所述信道状态信息进行解压缩;
    所述第二处理层包括多个网络层,每一个所述网络层包括多个节点、至少一个网络层权值、激活函数或网络层偏置中的至少一者,所述第二处理层配置为提取解压缩后的信道状态信息的特征,得到所述第二信道信息。
  35. 根据权利要求34所述的方法,其中,所述解压层包括全连接层、反卷积层组、循环网络中的任意一者。
  36. A terminal, comprising:
    at least one processor;
    a memory having at least one computer program stored thereon which, when executed by the at least one processor, causes the at least one processor to implement the method for feeding back channel state information according to any one of claims 1 to 22; and
    at least one I/O interface connected between the processor and the memory and configured to implement information interaction between the processor and the memory.
  37. A base station, comprising:
    at least one processor;
    a memory having at least one computer program stored thereon which, when executed by the at least one processor, causes the at least one processor to implement the method for receiving channel state information according to any one of claims 23 to 35; and
    at least one I/O interface connected between the processor and the memory and configured to implement information interaction between the processor and the memory.
  38. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements at least one of the following methods:
    the method for feeding back channel state information according to any one of claims 1 to 22; or
    the method for receiving channel state information according to any one of claims 23 to 35.
PCT/CN2022/109704 2021-08-04 2022-08-02 Method for feeding back channel state information, method for receiving channel state information, and terminal, base station, and computer-readable storage medium WO2023011472A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP22852184.5A EP4362365A1 (en) 2021-08-04 2022-08-02 Method for feeding back channel state information, method for receiving channel state information, and terminal, base station, and computer-readable storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110893081.2A CN115706612A (zh) 2021-08-04 2021-08-04 Channel state information feedback method, receiving method, terminal, base station, and medium
CN202110893081.2 2021-08-04

Publications (1)

Publication Number Publication Date
WO2023011472A1 (zh)

Family

ID=85155242

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/109704 WO2023011472A1 (zh) 2021-08-04 2022-08-02 Method for feeding back channel state information, method for receiving channel state information, and terminal, base station, and computer-readable storage medium

Country Status (3)

Country Link
EP (1) EP4362365A1 (zh)
CN (1) CN115706612A (zh)
WO (1) WO2023011472A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116193504B (zh) * 2023-04-18 2023-07-21 Nanjing Yuncheng Semiconductor Co., Ltd. Channel state information reporting method, electronic device, and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101577968A (zh) * 2008-05-05 2009-11-11 Huawei Technologies Co., Ltd. Method, system and apparatus for acquiring downlink channel information
CN109672464A (zh) * 2018-12-13 2019-04-23 Xidian University FCFNN-based massive MIMO channel state information feedback method
CN110350958A (zh) * 2019-06-13 2019-10-18 Southeast University Neural-network-based multi-rate CSI compression feedback method for massive MIMO
WO2020091542A1 (ko) * 2018-11-02 2020-05-07 LG Electronics Inc. Method for reporting channel state information in a wireless communication system and apparatus therefor
CN111434049A (zh) * 2017-06-19 2020-07-17 Virginia Tech Intellectual Properties, Inc. Encoding and decoding of information wirelessly transmitted using multi-antenna transceivers
US20210184744A1 (en) * 2019-12-13 2021-06-17 Qualcomm Incorporated User equipment feedback of multi-path channel cluster information to assist network beam management
CN113098805A (zh) * 2021-04-01 2021-07-09 Tsinghua University Efficient MIMO channel feedback method and apparatus based on binarized neural network

Also Published As

Publication number Publication date
CN115706612A (zh) 2023-02-17
EP4362365A1 (en) 2024-05-01


Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 22852184; Country of ref document: EP; Kind code of ref document: A1)
WWE WIPO information: entry into national phase (Ref document number: 2022852184; Country of ref document: EP)
ENP Entry into the national phase (Ref document number: 2022852184; Country of ref document: EP; Effective date: 20240124)
NENP Non-entry into the national phase (Ref country code: DE)