WO2023104205A1 - Feedback, acquisition and training methods, terminal, base station, electronic device and medium

Feedback, acquisition and training methods, terminal, base station, electronic device and medium

Info

Publication number
WO2023104205A1
Authority
WO
WIPO (PCT)
Prior art keywords
sub
channel state
state information
compression
modules
Prior art date
Application number
PCT/CN2022/138160
Other languages
English (en)
French (fr)
Inventor
李伦
肖华华
吴昊
刘磊
鲁照华
Original Assignee
中兴通讯股份有限公司
Priority date
Filing date
Publication date
Application filed by 中兴通讯股份有限公司
Publication of WO2023104205A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04BTRANSMISSION
    • H04B7/00Radio transmission systems, i.e. using radiation field
    • H04B7/02Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas
    • H04B7/04Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas
    • H04B7/06Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas at the transmitting station
    • H04B7/0613Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas at the transmitting station using simultaneous transmission
    • H04B7/0615Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas at the transmitting station using simultaneous transmission of weighted versions of same signal
    • H04B7/0619Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas at the transmitting station using simultaneous transmission of weighted versions of same signal using feedback from receiving side
    • H04B7/0621Feedback content
    • H04B7/0626Channel coefficients, e.g. channel state information [CSI]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04BTRANSMISSION
    • H04B7/00Radio transmission systems, i.e. using radiation field
    • H04B7/02Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas
    • H04B7/04Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas
    • H04B7/06Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas at the transmitting station
    • H04B7/0613Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas at the transmitting station using simultaneous transmission
    • H04B7/0615Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas at the transmitting station using simultaneous transmission of weighted versions of same signal
    • H04B7/0619Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas at the transmitting station using simultaneous transmission of weighted versions of same signal using feedback from receiving side
    • H04B7/0636Feedback format
    • H04B7/0639Using selective indices, e.g. of a codebook, e.g. pre-distortion matrix index [PMI] or for beam selection

Definitions

  • Embodiments of the present disclosure relate to, but are not limited to, the field of communications, and specifically relate to a channel state information feedback method, a channel state information acquisition method, a joint training method for an encoder and a decoder, a terminal, a base station, an electronic device, and a computer-readable medium.
  • Embodiments of the present disclosure provide a method for feeding back channel state information, a method for obtaining channel state information, a method for joint training of an encoder and a decoder, a terminal, a base station, an electronic device, and a computer-readable medium.
  • According to a first aspect, a channel state information feedback method is provided for a terminal.
  • The feedback method includes: using an encoder to compress first channel state information with M compression multiples to obtain M groups of second channel state information, where M is a positive integer and M is not less than 2; and feeding back the M groups of second channel state information.
  • According to a second aspect, a channel state information acquisition method is provided for a base station, comprising: receiving M groups of second channel state information, where each received group of second channel state information is the information obtained by an encoder after compressing first channel state information by a corresponding multiple, M is a positive integer, and M is not less than 2; and using a decoder to decompress and combine each group of second channel state information to obtain target channel state information, wherein the expansion factor for decompressing the group of second channel state information with the smallest amount of data is greater than the expansion factors for decompressing the other groups of second channel state information, and the number of elements of the target channel state information is the same as that of the first channel state information.
  • According to a third aspect, a method for joint training of an encoder and a decoder is provided. The encoder is capable of compression with N kinds of compression multiples, the decoder is capable of decompression with N kinds of expansion multiples, and the N compression multiples are numerically equal to the N expansion multiples in one-to-one correspondence. The method includes: training initial parameters of the encoder to obtain final parameters of the encoder; and training initial parameters of the decoder to obtain final parameters of the decoder; wherein the training error in the step of training the initial parameters of the encoder and the training error in the step of training the initial parameters of the decoder are both the error function between the input of the encoder and the output of the decoder.
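  • As an illustrative sketch of this training relationship (not taken from the disclosure beyond the error definition), the following PyTorch code trains an encoder and decoder jointly using a mean-squared error between the encoder input and the decoder output; the layer sizes, optimizer, and loss choice are assumptions.

```python
import torch
import torch.nn as nn

N_IN = 512          # assumed number of elements of the first channel state information
COMPRESSED = 64     # assumed compressed size (compression factor 8)

# Hypothetical encoder/decoder structures; the disclosure only says they are
# deep-learning-based neural network models.
encoder = nn.Sequential(nn.Linear(N_IN, 128), nn.ReLU(), nn.Linear(128, COMPRESSED))
decoder = nn.Sequential(nn.Linear(COMPRESSED, 128), nn.ReLU(), nn.Linear(128, N_IN))

optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3
)
loss_fn = nn.MSELoss()  # one possible error function between encoder input and decoder output

for step in range(1000):                  # illustrative number of training iterations
    csi = torch.randn(32, N_IN)           # stand-in batch of first channel state information
    recovered = decoder(encoder(csi))     # encoder input -> compressed -> decoder output
    loss = loss_fn(recovered, csi)        # training error: input of encoder vs output of decoder
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```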
  • A terminal includes: a first storage module, on which a first executable program is stored; and at least one first processor, where when the at least one first processor invokes the first executable program, the feedback method provided by the first aspect of the present disclosure is implemented.
  • A base station includes: a second storage module, on which a second executable program is stored; and at least one second processor, where when the at least one second processor invokes the second executable program, the acquisition method provided by the second aspect of the present disclosure is implemented.
  • An electronic device includes: a third storage module, on which a third executable program is stored; and at least one third processor, where when the at least one third processor invokes the third executable program, the joint training method provided by the third aspect of the present disclosure is implemented.
  • a computer-readable medium on which an executable program is stored, and when the executable program is invoked, the above method provided by the present disclosure can be implemented.
  • FIG. 1 is a flowchart of an embodiment of a channel state information feedback method provided by the present disclosure
  • Fig. 2 is a schematic structural diagram of an embodiment of an encoder
  • Fig. 3 is a schematic structural diagram of another embodiment of an encoder
  • Fig. 4 is a schematic diagram showing the relationship between sub-compression modules of different compression ratios and their respective feedback cycles
  • Fig. 5 is a flow chart of another embodiment of the channel state information feedback method provided by the present disclosure.
  • Fig. 6 is a flow chart of still another implementation manner of the channel state information feedback method provided by the present disclosure.
  • FIG. 7 is a flow chart of an implementation of a method for acquiring channel state information provided in the present disclosure
  • Fig. 8 is a schematic structural diagram of a decoder
  • FIG. 9 is a flowchart of an implementation manner of step S220.
  • Fig. 10 is a flow chart of another embodiment of the method for acquiring channel state information provided by the present disclosure.
  • Fig. 11 is a schematic diagram of the corresponding relationship between N sub-compression modules in the compression module and N sub-decompression modules in the decompression module;
  • Fig. 12 is a flowchart of an implementation manner of step S221;
  • Fig. 13 is a flow chart of another embodiment of the method for acquiring channel state information provided by the present disclosure.
  • Fig. 14 is a flowchart of an embodiment of a method for joint training of an encoder and a decoder
  • Fig. 15 is a schematic diagram showing an error function.
  • The method for feeding back channel state information, the method for obtaining channel state information, the method for joint training of an encoder and a decoder, and the terminal, base station, electronic device, and computer-readable medium provided in this disclosure are described in detail below.
  • a channel state information feedback method for a terminal, as shown in FIG. 1 , the feedback method includes:
  • step S110 use the encoder to compress the first channel state information with M compression multiples to obtain M sets of second channel state information, where M is a positive integer and M is not less than 2;
  • step S120 M groups of second channel state information are fed back.
  • the terminal needs to feed back the downlink channel state information to the base station.
  • The first channel state information can reflect the downlink channel state to be fed back, and after compressing it with M kinds of compression multiples, M groups of second channel state information, each with a smaller data amount than the first channel state information, can be obtained.
  • The larger the compression factor, the smaller the amount of data of the obtained second channel state information, and the less overhead required when transmitting the second channel state information. That is, the second channel state information with the largest compression factor has the smallest data volume and the smallest transmission overhead, and the second channel state information with the smallest compression factor has the largest data volume and requires the largest transmission overhead.
  • When the receiving end receives each group of second channel state information, it can decompress each group of second channel state information to obtain multiple groups of decompressed channel state information, and then combine the decompressed groups to obtain target channel state information whose data amount is the same as that of the first channel state information.
  • the first channel state information obtained in real time is compressed and fed back with a smaller multiple at a lower frequency. It is easy to understand that the difference between the channel state information obtained after decompressing the second channel state information with the smallest compression factor and the first channel state information is the smallest, and the change of the downlink channel state is not rapid.
  • the channel state information obtained by decompressing the second channel state information with the smallest compression factor can still reflect the downlink channel state information relatively comprehensively to a certain extent.
  • the first channel state information obtained in real time is compressed with a large compression factor and fed back relatively frequently. Therefore, after the second channel state information with a larger compression factor is decompressed, channel state information that can reflect channel state changes can be obtained.
  • target channel state information that can fully reflect both downlink channel state and downlink channel state changes can be obtained.
  • the feedback method provided by the present disclosure can reduce the feedback overhead and enable the receiving end to grasp the downlink channel state in time.
  • After each group of second channel state information is received at the receiving end and decompressed to obtain the corresponding channel state information, the channel state information obtained after decompressing each group is combined to obtain the target channel state information that accurately feeds back the downlink channel state.
  • the base station can obtain accurate downlink channel state information and accurately judge the state of the downlink channel.
  • the encoder is a neural network model based on deep learning, and the encoder can better learn and extract features of the first channel state information, which is conducive to better compression and feedback of the first channel state information.
  • the overhead of state information feedback of the downlink channel increases with the increase of the number of antennas.
  • the feedback method provided by the present disclosure can compress the downlink channel state information to be fed back to a smaller amount of data, thereby ensuring that the second channel state information reflecting the downlink channel state is sent to the base station side in time. After receiving the second channel state information, the base station side decompresses and merges the second channel state information to obtain the downlink channel state.
  • When the compression step achieving a certain compression factor consists of L local compressions, formula (1) gives that compression factor as the continuous product of the local compression factors: compression factor = ∏_{j=1}^{L} γ_j, where j is the index of the local compression, 1 ≤ j ≤ L, and γ_j is the compression factor of the j-th local compression.
  • The encoder may include N sub-compression modules (respectively, sub-compression module 1, sub-compression module 2, ..., sub-compression module M, ..., sub-compression module N), where N is a positive integer not less than 2.
  • In step S110, multiple sub-compression modules among the N sub-compression modules are selected to compress the first channel state information with the M kinds of compression multiples.
  • the encoder can realize compression of N different compression ratios.
  • The kinds of compression multiples can be selected according to the amount of data in the downlink channel information. For example, when the data volume of the downlink channel information is large, more kinds of compression multiples may be used to respectively compress the downlink channel state information; when the data volume of the downlink channel information is relatively small, fewer kinds of compression multiples may be used to respectively compress the downlink channel state information.
  • M sub-compression modules among the N sub-compression modules are selected to compress the first channel state information with the M kinds of compression ratios. That is to say, the selected M sub-compression modules work in parallel, and the obtained M groups of second channel state information are fed back in parallel.
  • The first channel state information is compressed by sub-compression module 1 to obtain the first group of second channel state information CSI1; the first channel state information is compressed by sub-compression module 2 to obtain the second group of second channel state information CSI2; ...; the first channel state information is compressed by sub-compression module N to obtain the N-th group of second channel state information CSIN.
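  • The following is a minimal sketch of this parallel arrangement; the disclosure specifies only that the encoder is a deep-learning-based neural network, so the single fully connected layer per sub-compression module, the element count, and the compression factors below are illustrative assumptions.

```python
import torch
import torch.nn as nn

N_IN = 512                       # assumed element count of the first channel state information
factors = [4, 8, 16, 64]         # assumed compression factors of the sub-compression modules

# One fully connected layer per sub-compression module (an assumption; any of the
# network types listed later in the text could be used instead).
sub_compression_modules = nn.ModuleList(
    [nn.Linear(N_IN, N_IN // f) for f in factors]
)

first_csi = torch.randn(1, N_IN)
# Parallel operation: every selected sub-compression module compresses the same input,
# producing CSI1, CSI2, ..., one group per compression factor.
second_csi_groups = [module(first_csi) for module in sub_compression_modules]
for f, csi in zip(factors, second_csi_groups):
    print(f"compression factor {f}: {csi.shape[1]} elements")
```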
  • The N sub-compression modules in the encoder may also work serially. That is to say, each sub-compression module can achieve a local compression of a corresponding multiple, and a larger compression multiple can be achieved by connecting multiple sub-compression modules in series.
  • In the step of using the encoder to compress the first channel state information with M kinds of compression multiples, the compression with any one of the M compression multiples is realized by one of the sub-compression modules or by a combination of multiple sub-compression modules.
  • The compression step with the smallest compression multiple uses the fewest sub-compression modules, and the compression step with the largest compression multiple uses the most sub-compression modules.
  • a sub-compression module with a compression factor of 2 times may be selected to realize the step of compressing the first channel state information.
  • A sub-compression module with a compression factor of 2 can be selected to compress the first channel state information for the first time to obtain intermediate information, and then a sub-compression module with a compression factor of 5 can be selected to perform a second compression on the intermediate information, obtaining second channel state information with a compression factor of 10.
  • Sub-compression module 1 can be used to compress the first channel state information to obtain the first group of second channel state information CSI1; sub-compression module 1 and sub-compression module 2 can be used to compress the first channel state information to obtain the second group of second channel state information CSI2; ...; sub-compression module 1 through sub-compression module N can be used to compress the first channel state information to obtain the N-th group of second channel state information CSIN.
  • the step of compressing the first channel state information by using multiple sub-compression modules also satisfies the above formula (1), that is, the finally obtained compression factor is the continuous product of the compression factors of all the sub-compression modules performing compression.
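  • The serial arrangement can be sketched similarly; here each sub-compression module is assumed to perform a 2-fold local compression with one fully connected layer, so tapping the output after the i-th module yields an overall compression factor equal to the continuous product of the local factors, as in formula (1). The sizes are illustrative.

```python
import torch
import torch.nn as nn

N_IN = 512            # assumed element count of the first channel state information
LOCAL_FACTOR = 2      # assumed local compression factor of every sub-compression module
NUM_MODULES = 4

sizes = [N_IN // (LOCAL_FACTOR ** i) for i in range(NUM_MODULES + 1)]   # 512, 256, 128, 64, 32
serial_modules = nn.ModuleList(
    [nn.Linear(sizes[i], sizes[i + 1]) for i in range(NUM_MODULES)]
)

def compress(first_csi, output_module_index):
    """Run the data through sub-compression modules 1..output_module_index and tap the output there."""
    x = first_csi
    for module in serial_modules[:output_module_index]:
        x = module(x)
    return x

first_csi = torch.randn(1, N_IN)
csi_2 = compress(first_csi, 2)   # overall compression factor 2*2 = 4
csi_3 = compress(first_csi, 3)   # overall compression factor 2*2*2 = 8
```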
  • the feedback method may also include:
  • Step S130: one or more pieces of the identification information of the multiple sub-compression modules that realize each compression factor and meet the predetermined condition are respectively fed back.
  • According to the fed-back identification information, the receiving end can finally determine the identification information of the multiple sub-compression modules that perform each compression step, determine the M kinds of compression factors corresponding to the received M groups of second channel state information, and accordingly determine the expansion factor required for decompressing each group of second channel state information.
  • Step S120 may be executed first and then step S130, or step S130 may be executed first and then step S120, or step S120 and step S130 may be executed synchronously.
  • The N sub-compression modules are connected in series, each of the N sub-compression modules can output data independently, and the compression factor of the data output by the i-th sub-compression module is the compression ratio achieved jointly by the first sub-compression module through the i-th sub-compression module in the data flow direction. In this way, for the second channel state information of any compression factor, only the number of the sub-compression module that outputs that second channel state information needs to be fed back.
  • the identification information of the sub-compression modules includes numbers of the sub-compression modules, and the numbers of the N sub-compression modules conform to the first preset rule.
  • When step S130 is executed, only one or several of the numbers of the sub-compression modules realizing each compression factor need to be fed back, and the receiving end can derive the numbers of the remaining sub-compression modules realizing each compression factor according to the first preset rule.
  • The first preset rule includes sequentially numbering the N sub-compression modules incrementally from 1 to N, with the number of a sub-compression module positively correlated with the compression ratio of its output, and the predetermined condition includes the maximum number among the numbers of the multiple sub-compression modules that realize each compression factor. That is, in step S130, the maximum number among the numbers of the multiple sub-compression modules realizing each compression factor is fed back.
  • a single sub-compression module is used to achieve local compression. It should be noted that different sub-compression modules can achieve the same multiple of local compression, or different multiples of local compression.
  • For example, the compression factor output by the sub-compression module numbered 1 is 2 (that is, if the sub-compression module numbered 1 is used as the output module, the data is input from the input end of the sub-compression module numbered 1 and output from the output end of the sub-compression module numbered 1, so that the data is compressed by 2 times); the compression factor output by the sub-compression module numbered 2 is 4 (that is, if the sub-compression module numbered 2 is used as the output module, the data is input from the input end of the sub-compression module numbered 1 and output from the output end of the sub-compression module numbered 2; the data is compressed by the sub-compression module numbered 1 and then by the sub-compression module numbered 2, so that the data is compressed by 4 times in total).
  • the predetermined condition includes realizing the maximum number among the numbers of multiple sub-compression modules for each compression factor.
  • the first preset rule includes consecutively numbering the N sub-compression modules from number N to number 1, and the number of the sub-compression modules is the same as that of the sub-compression modules The compression factor is anticorrelated.
  • the sub-compression module that outputs the second channel state information is the sub-compression module with the smallest number.
  • the predetermined condition includes realizing the smallest number among the numbers of multiple sub-compression modules for each compression factor.
  • The receiving end can determine the sub-decompression modules from the one corresponding to the sub-compression module with the smallest number up to the one corresponding to the sub-compression module numbered N, and perform decompression processing on the group of second channel state information.
  • After receiving the second channel state information, the receiving end can determine the value of M, and then deduce the numbers of all the sub-compression modules involved in the compression steps of the various compression multiples according to the received numbers.
  • the numbers of the sub-compression modules may be bit-quantized.
  • The number of each of the N sub-compression modules can be represented with B = ⌈log2 N⌉ bits, where ⌈·⌉ is the round-up sign. Therefore, the numbers of the M sub-compression modules corresponding to the M kinds of fed-back second channel state information need M*B bits. For example, there are 4 sub-compression modules in total, which are a 64-fold compression sub-compression module, a 16-fold compression sub-compression module, an 8-fold compression sub-compression module, and a 4-fold compression sub-compression module.
  • 2 bits can be used to quantize the numbers of the 4 sub-compression modules (that is, 00 represents the 4-fold compression sub-compression module, 01 represents the 8-fold compression sub-compression module, 10 represents the 16-fold compression sub-compression module, and 11 represents the 64-fold compression sub-compression module).
  • the quantization method may also be adjusted to feed back number information.
  • For example, ⌈log2 N⌉ bits can be used to represent the largest or smallest number among the M numbers, where ⌈·⌉ is the round-up sign. Once the largest or smallest number is known, the differences between it and the numbers of the remaining M-1 sub-compression modules are expressed sequentially, and the differential distance values are represented with bits in order from small to large, from the value closest to the largest or smallest number to the farthest.
  • 8 sub-compression module numbers can be quantized with 3 bits (that is, 000 means 2 times compression, 001 means 4 times compression, 010 means 8 times compression, 011 means 16 times compression, 100 means 32 times compression, 101 means 64 times compression, 110 means 128 times compression, 111 means 256 times compression).
  • Redundant bit representation can also be reduced by non-uniform quantization, further reducing the feedback bit overhead.
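  • A small sketch of this number quantization follows; the 2-bit mapping for 4 sub-compression modules mirrors the example above, while the helper names are hypothetical.

```python
import math

factors = [4, 8, 16, 64]            # compression factors of the N = 4 sub-compression modules
N = len(factors)
B = math.ceil(math.log2(N))         # bits per fed-back number; here 2, so M numbers cost M*B bits

def encode_number(module_index):
    """Quantize a sub-compression module number (0-based) into B bits, e.g. 0 -> '00' -> 4-fold module."""
    return format(module_index, f"0{B}b")

def decode_number(bits):
    return int(bits, 2)

fed_back = [encode_number(i) for i in range(N)]   # ['00', '01', '10', '11']
recovered_factor = factors[decode_number("10")]   # 16-fold compression sub-compression module
```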
  • the encoder is a deep learning-based neural network model.
  • the sub-compression module is also a neural network model based on deep learning.
  • the parameters corresponding to the sub-compression modules that achieve the same local compression ratio are shared, so that the training steps of the encoder can be reduced.
  • the sub-compression module meets any one of the following conditions:
  • the sub-compression module includes a fully connected network, and the fully connected network includes at least two fully connected layers;
  • the subcompression module comprises a convolutional network comprising at least one of: at least one convolutional layer, at least one residual block, or at least one dense block; or
  • the sub-compression module includes a recurrent network, and the recurrent network includes a long short-term memory (LSTM) neural network or a gated recurrent unit (GRU) neural network.
  • When the sub-compression module includes a fully connected network, and the fully connected network includes at least two fully connected layers, the local compression factor of the j-th local compression is Nin_j / Nout_j, where Nin_j is the number of elements of the input data of the j-th local compression and Nout_j is the number of elements of the output data of the j-th local compression. The compression factor γ_i of the i-th sub-compression module is the continuous product of the local compression factors of the sub-compression modules numbered 1 to i, namely γ_i = ∏_{j=1}^{i} (Nin_j / Nout_j), where ∏ is the continued multiplication symbol.
  • the parameters of the fully connected network include but are not limited to at least one of the following: the number of nodes in the fully connected layer, the weight of the fully connected layer, a normalization coefficient, and an activation function.
  • The number of elements of the input data of the fully connected network is Nin, and the number of elements of the output data of the fully connected network is Nout, where Nout < Nin.
  • Likewise, for the convolutional network, Nin_j is the number of elements of the input data of the j-th local compression, Nout_j is the number of elements of the output data of the j-th local compression, and the compression factor γ_i of the i-th sub-compression module is the continuous product of the local compression factors of the sub-compression modules numbered 1 to i, namely γ_i = ∏_{j=1}^{i} (Nin_j / Nout_j).
  • the parameters of the convolution network include but are not limited to at least one of the following: input size of the convolution layer, convolution kernel parameters, convolution step parameters, data filling parameters, pooling parameters, normalization coefficients, and activation functions.
  • the recurrent network can be a long short-term memory (LSTM) or a gated recurrent neural network (GRU) or a Transformer structure based on an attention mechanism.
  • Similarly, for the recurrent network, Nin_j is the number of elements of the input data of the j-th local compression, Nout_j is the number of elements of the output data of the j-th local compression, and the compression factor γ_i of the i-th sub-compression module is the continuous product of the local compression factors of the sub-compression modules numbered 1 to i, namely γ_i = ∏_{j=1}^{i} (Nin_j / Nout_j).
  • the parameters of the recurrent network include but are not limited to at least one of the following: the number of nodes in the recurrent layer, the weight of the recurrent layer, gate parameters in the recurrent layer, attention mechanism parameters, normalization coefficients or activation functions.
  • the number M of types of compression multiples may be the same as the number N of sub-compression modules, and the number M of types of compression multiples may also be different from the number N of sub-compression modules.
  • the second channel state information with a larger compression factor may be fed back at a higher frequency, and the second channel state information with a smaller compression factor may be fed back at a lower frequency.
  • Each group of second channel state information is fed back periodically, and the length of the feedback cycle is inversely related to the compression factor corresponding to the second channel state information fed back in that cycle. It should be pointed out that, for the same compression factor, it is the first channel state information obtained in real time that is compressed in different feedback cycles.
  • The feedback period of sub-compression module 1 is T_1 and its compression factor is γ_1; the feedback period of sub-compression module 2 is T_2 and its compression factor is γ_2; ...; the feedback period of sub-compression module N is T_N and its compression factor is γ_N, where γ_1 < γ_2 < ... < γ_N and T_1 > T_2 > ... > T_N.
  • The first channel state information corresponding to sub-compression module 1 is represented by CSI_1, the first channel state information corresponding to sub-compression module 2 is represented by CSI_2, ..., and the first channel state information corresponding to sub-compression module N is represented by CSI_N. It should be understood that, at the start moment of feeding back channel state information to the base station, the first channel state information CSI_1, CSI_2, ..., CSI_N are the same.
  • T_1 represents the feedback period of the second channel state information with the smallest compression factor, and T_M represents the feedback period of the second channel state information with the largest compression factor, where T_1 > T_M. For example, T_1 = 3T_M.
  • the sending end compresses the first channel state information at time t0 with two compression factors to obtain the second channel state information with the largest compression factor and the second channel state information with the smallest compression factor;
  • the sending end feeds back the second channel state information with the largest compression factor and the second channel state information with the smallest compression factor.
  • the receiving end receives the second channel state information with the largest compression factor and the second channel state information with the smallest compression factor;
  • the receiving end decompresses and combines the received two sets of second channel state information to obtain target channel state information reflecting the downlink channel state at time t0.
  • the sending end compresses the first channel state information at time t1 with the maximum compression factor to obtain the second channel state information with the largest compression factor;
  • the receiving end receives the second channel state information with the largest compression factor
  • the receiving end decompresses and merges the received second channel state information and the second channel state information with the smallest compression factor received at time t0' to obtain the target channel state information at time t1'.
  • the sending end compresses the first channel state information at time t2 with the maximum compression factor to obtain the second channel state information with the largest compression factor;
  • the receiving end receives the second channel state information with the largest compression factor
  • the receiving end decompresses and merges the received second channel state information and the second channel state information with the smallest compression factor received at time t0' to obtain target channel state information at time t2'.
  • the sending end compresses the first channel state information at time t3 with a maximum compression factor and a minimum compression factor to obtain a second channel state information with the largest compression factor and a second channel state information with the smallest compression factor;
  • the receiving end receives the second channel state information with the largest compression factor and the second channel state information with the smallest compression factor;
  • the receiving end decompresses and merges the received second channel state information with the largest compression factor and the second channel state information with the smallest compression factor to obtain the target channel state information at time t3'.
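  • The following sketch illustrates this feedback timing under the assumption of two compression factors with T_1 = 3*T_M; the time unit and printed output are illustrative only.

```python
# Feedback instants in units of T_M (the shortest period).
T_M = 1                                    # period of the largest-compression-factor feedback
T_1 = 3 * T_M                              # period of the smallest-compression-factor feedback

for t in range(7):                         # t0, t1, ..., t6
    groups = ["largest compression factor"]           # fed back every T_M
    if t % T_1 == 0:
        groups.append("smallest compression factor")  # additionally fed back every T_1
    print(f"t{t}: feed back {', '.join(groups)}")
# The receiving end merges each newly received largest-factor group with the most
# recently received smallest-factor group to obtain the target CSI of that period.
```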
  • In each feedback period, the receiving end uses the second channel state information with the smallest compression factor for the decompression and combination processing together with the second channel state information with the other compression factors, to obtain the target channel state information of the current period. That is, the second channel state information with the smallest compression factor obtained in the current cycle can be used for the decompression and merging processing of the second channel state information with larger compression factors sent in each cycle. This not only reduces the overhead of feeding back the downlink channel state information, but also yields relatively comprehensive downlink channel state information that reflects the real-time state.
  • the feedback method may further include:
  • step S140 the length of the feedback period for each group of second channel state information is fed back.
  • Before using the encoder to compress at least one set of first channel state information with M kinds of compression multiples, the feedback method further includes:
  • step S101 the original channel information is acquired
  • step S102 using the encoder to perform feature extraction on the original channel information to obtain initial channel state information
  • step S103 the initial channel state information is preprocessed to obtain the first channel state information.
  • In some embodiments, the encoder further includes a first processing module, and the first processing module is configured to perform the step of extracting features from the original channel information.
  • the first processing module may also be a processing module based on a neural network. That is to say, the first processing module includes multiple network layers.
  • the plurality of network layers are each independently selected from any of the following layers:
  • the dimension of the first channel state information obtained in step S103 is consistent with the input dimension of the encoder.
  • The preprocessing may include, but is not limited to, performing domain transformation on the channel information (for example, transformation from the time domain to the frequency domain, from the frequency domain to the time domain, from the angle domain to the space domain, or from the space domain to the angle domain), downsampling, zero padding, grouping, truncation, etc., as long as the dimension of the obtained first channel state information conforms to the input dimension of the encoder.
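  • A minimal sketch of such preprocessing follows, assuming time-domain channel information is transformed to the frequency domain and then truncated or zero-padded to match the encoder's input dimension; all sizes are illustrative.

```python
import numpy as np

N_t, N_s = 32, 256           # assumed antenna count and number of time-domain sampling points
ENCODER_DIM = 128            # assumed input dimension expected by the encoder (second dimension)

# Stand-in time-domain channel information H_t (complex N_t x N_s matrix).
H_t = (np.random.randn(N_t, N_s) + 1j * np.random.randn(N_t, N_s)) / np.sqrt(2)

H_f = np.fft.fft(H_t, axis=1)                   # domain transformation: time domain -> frequency domain
if H_f.shape[1] >= ENCODER_DIM:
    H_in = H_f[:, :ENCODER_DIM]                 # truncation
else:
    pad = ENCODER_DIM - H_f.shape[1]
    H_in = np.pad(H_f, ((0, 0), (0, pad)))      # zero padding
# H_in now has the dimension N_t x ENCODER_DIM required at the encoder input.
```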
  • original channel information can be obtained in the following manner:
  • the terminal obtains the channel H by receiving a reference signal, for example, a channel state information reference signal (Channel-state information reference signal, CSI-RS), and the channel is generally a complex matrix of N t *N r .
  • N t and N r are respectively the number of transmitting antennas of the base station and the number of receiving antennas of the terminal, where the antennas may be logical antennas or various types of physical antennas.
  • the original channel information may be time domain channel information H t or frequency domain channel information H f .
  • the time-domain channel information H_t is a complex matrix of N_t × N_s, where N_t is the number of antennas, and N_s is the number of time-domain sampling points.
  • the number of antennas N_t is N_tx × N_rx, where N_tx is the number of transmitting antennas and N_rx is the number of receiving antennas; the number of antennas N_t can also be N_tx × R, where N_tx is the number of transmitting antennas and R is the channel rank or the number of data streams.
  • the size of the time-domain channel information H_t is N_t × N′_s, where N′_s ≤ N_s.
  • the antenna dimension N_t may not only be in the first dimension, but also may be in the second dimension, that is, N′_s × N_t. According to different requirements, the antenna dimension can also be split into the above-mentioned two-dimensional representation.
  • the frequency domain channel information H_f is a complex matrix of N_t × N_f, where N_t is the number of antennas and N_f is the frequency domain granularity; the frequency domain granularity can take a subcarrier (Subcarrier), a physical resource block (Physical Resource Block, PRB), or a subband (Subband) as a unit, wherein one physical resource block may include multiple subcarriers, such as 6, 12, etc., and one subband includes multiple physical resource blocks.
  • the number of antennas N_t is N_tx × N_rx, where N_tx is the number of transmitting antennas and N_rx is the number of receiving antennas; the number of antennas N_t can also be N_tx × R, where N_tx is the number of transmitting antennas and R is the channel rank or the number of data streams. Since the granularity in the frequency domain can be obtained by a fast Fourier transform of the time-domain sampling points, truncation can be performed in the frequency-domain granularity dimension. At this time, the size of the frequency-domain channel information H_f is N_t × N′_f, where N′_f ≤ N_f.
  • the antenna dimension N_t may not only be in the first dimension, but also may be in the second dimension, that is, N′_f × N_t. According to different requirements, the antenna dimension can also be split into the above-mentioned two-dimensional representation.
  • the frequency domain channel information H f may be precoded.
  • Among the N_f frequency-domain granularities, there are M_PRB PRB frequency band resources in total, and every K continuous PRB channels constitute one sub-band channel matrix, where the value of K is determined according to the bandwidth indication; for example, each sub-band includes 4 PRBs, or, for a 20M bandwidth, each sub-band includes 8 PRBs, etc.
  • the antenna dimension N_t may not only be in the first dimension, but also may be in the second dimension, that is, N_f × N_t. According to different requirements, the antenna dimension can also be split into the above-mentioned two-dimensional representation.
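  • A small sketch of the PRB-to-sub-band grouping follows; averaging the K consecutive PRB channels within a sub-band is one simple assumed choice, since the disclosure only states that every K continuous PRBs constitute one sub-band channel matrix.

```python
import numpy as np

N_t, num_prb, K = 32, 20, 4       # e.g. 20 PRB channels, every K = 4 consecutive PRBs form a sub-band
H_prb = np.random.randn(num_prb, N_t) + 1j * np.random.randn(num_prb, N_t)   # stand-in PRB channels

num_subbands = num_prb // K
# Average each group of K consecutive PRB channels into one sub-band channel vector (assumed rule).
H_subband = H_prb[: num_subbands * K].reshape(num_subbands, K, N_t).mean(axis=1)
# H_subband: one N_t-dimensional channel vector per sub-band (num_subbands x N_t).
```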
  • a method for acquiring channel state information is provided for a base station, as shown in FIG. 7 , the acquiring method includes:
  • step S210 M groups of second channel state information are received, and each received group of second channel state information is the information obtained by the encoder after compressing the first channel state information by a corresponding multiple, M is a positive integer, and M is not less than 2;
  • step S220 the decoder is used to decompress and merge each group of second channel state information to obtain target channel state information.
  • The expansion factor for decompressing the group of second channel state information with the smallest amount of data is greater than the expansion factors for decompressing the other groups of second channel state information, and the number of elements of the target channel state information is the same as the number of elements of the first channel state information.
  • the acquisition method provided by the second aspect of the present disclosure is used in conjunction with the feedback method provided by the first aspect of the present disclosure.
  • the M sets of second channel state information received in step S210 are the M sets of second channel state information fed back in step S120.
  • the target channel state information obtained after decompressing and combining the second channel state information corresponds to the first channel state information, and the target channel state information may reflect the downlink channel state.
  • The first channel state information obtained in real time can be compressed with a large compression factor and fed back more frequently, and can also be compressed with a small compression factor and fed back at a lower frequency.
  • the first channel state information obtained in real time is compressed and fed back with a smaller multiple at a lower frequency. It is easy to understand that the difference between the channel state information obtained after decompressing the second channel state information with the smallest compression factor and the first channel state information is the smallest, and the change of the downlink channel state is not rapid. Therefore, within a period of time, the channel state information obtained by decompressing the second channel state information with the smallest compression factor can still reflect the downlink channel state information relatively comprehensively to a certain extent. Moreover, in the present disclosure, the first channel state information obtained in real time is compressed with a large compression factor and fed back relatively frequently.
  • Therefore, after the second channel state information with a larger compression factor is decompressed, channel state information that can reflect channel state changes can be obtained.
  • By combining the channel state information that is obtained in time and can reflect channel state changes with the more comprehensive channel state information, the receiving end (i.e., the base station) can obtain target channel state information that not only fully reflects the downlink channel state but also reflects the change of the downlink channel state.
  • the decoder is a neural network model based on deep learning, therefore, the error between the channel state information obtained by decompressing the decoder and the first channel state information is relatively small.
  • the encoder includes N sub-compression modules.
  • the decoder also includes N sub-decompression modules, where N is a positive integer not less than 2.
  • the decompression and merging of each group of second channel state information includes:
  • step S221 determine a plurality of sub-decompression modules corresponding to M groups of second channel state information among the N sub-decompression modules;
  • step S222 a plurality of sub-decompression modules are used to decompress and combine M sets of second channel state information respectively to obtain target channel state information.
  • the N sub-compression modules of the encoder correspond to the N sub-decompression modules of the decoder.
  • the N sub-compression modules of the encoder with different compression multiples work in parallel to realize compression of N different compression multiples.
  • The expansion multiples of the N sub-decompression modules are different from each other; therefore, in the step of determining the multiple sub-decompression modules corresponding to the M groups of second channel state information among the N sub-decompression modules, M sub-decompression modules corresponding one-to-one to the M groups of second channel state information are determined among the N sub-decompression modules.
  • Using multiple sub-decompression modules to respectively decompress and combine the M groups of second channel state information to obtain the target channel state information (that is, step S222) includes:
  • using M sub-decompression modules to respectively decompress the M groups of second channel state information to obtain M groups of third channel state information;
  • merging the results obtained by decompression to obtain the target channel state information.
  • Combining the M groups of third channel state information includes: combining the M groups of third channel state information to obtain intermediate channel state information, and compressing the intermediate channel state information to obtain the target channel state information.
  • the number of elements of the intermediate channel state information is greater than the number of elements of the first channel state information.
  • a network layer such as a convolutional layer may be used to compress and reduce the dimensionality of the intermediate channel state information to obtain the target channel state information.
  • the step of combining M groups of third channel state information may be performed in any one of the following ways:
  • average value combination processing, square average combination processing, and weighted combination processing.
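  • The three merging options can be sketched as follows; the group count, element count, and weights are illustrative assumptions.

```python
import numpy as np

third_csi_groups = [np.random.randn(512) for _ in range(3)]   # M = 3 decompressed groups (stand-ins)
stacked = np.stack(third_csi_groups)

mean_merge = stacked.mean(axis=0)                             # average value combination processing
rms_merge = np.sqrt((stacked ** 2).mean(axis=0))              # square average combination processing
weights = np.array([0.2, 0.3, 0.5])                           # assumed weights summing to 1
weighted_merge = (weights[:, None] * stacked).sum(axis=0)     # weighted combination processing
```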
  • N sub-compression modules are connected in series, and each group of second channel state information is compressed by at least one compression module.
  • Each group of second channel state information corresponds to at least one sub-decompression module, and the higher the compression factor of a group of second channel state information, the greater the number of sub-decompression modules corresponding to it.
  • The M-1 groups of second channel state information other than the second channel state information with the smallest compression factor are decompressed into first intermediate channel state information, and the number of elements of the first intermediate channel state information is the same as the data amount of the second channel state information with the smallest compression factor.
  • There is no special limitation on how to decompress the second intermediate channel state information.
  • The number of elements of the second intermediate channel state information is the same as the data amount of the second channel state information with the smallest compression factor;
  • therefore, the one or more sub-decompression modules used for decompressing the second channel state information with the smallest compression factor can be used to decompress the second intermediate channel state information.
  • multiple sub-decompression modules may be reselected to decompress the second intermediate channel state information. It should be explained that the number of elements that can be input by the selected sub-decompression module is the same as the number of elements of the second intermediate channel state information.
  • the merging of the M-1 group of first intermediate channel state information and the second channel state information with the smallest compression factor to obtain the second intermediate channel state information includes:
  • the M-1 groups of first intermediate channel state information and the second channel state information with the smallest compression factor are concatenated to obtain the second intermediate channel state information.
  • The concatenating process here can include two ways. The first way is to directly combine the M-1 groups of first intermediate channel state information and the second channel state information with the smallest compression factor; the other is to first combine the M-1 groups of first intermediate channel state information and the second channel state information with the smallest compression factor, and then compress the combined data to obtain the second intermediate channel state information, so that the number of elements of the second intermediate channel state information is the same as the number of elements of the second channel state information with the smallest compression factor.
  • a network layer such as a convolutional layer may be used to perform the step of compressing the third intermediate channel state information to obtain the second intermediate channel state information.
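  • A minimal sketch of this second way follows, using a convolutional layer (one of the network layers mentioned in the text) to reduce the concatenated data back to the element count of the second channel state information with the smallest compression factor; the sizes are assumptions.

```python
import torch
import torch.nn as nn

WIDTH = 128     # assumed element count of the smallest-compression-factor second channel state information
M = 3           # M-1 groups of first intermediate channel state information plus that smallest-factor group

groups = [torch.randn(1, 1, WIDTH) for _ in range(M)]   # all groups share the same element count

concatenated = torch.cat(groups, dim=1)                  # stack the groups as channels: (batch, M, WIDTH)
reduce = nn.Conv1d(in_channels=M, out_channels=1, kernel_size=1)   # convolutional layer for dimension reduction
second_intermediate = reduce(concatenated)               # back to (batch, 1, WIDTH), matching the smallest group
```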
  • the step of combining the M sets of first intermediate channel state information to obtain the second intermediate channel state information may be performed in any of the following ways:
  • average value combination processing, square average combination processing, and weighted combination processing.
  • the encoder includes N sub-compression modules
  • the feedback method provided in the first aspect of the present disclosure further includes the step of feeding back to the base station side the identification information of the sub-compression modules performing compression of the various compression multiples.
  • Before decompressing each group of second channel state information to obtain M groups of third channel state information (that is, step S220), the acquisition method further includes:
  • step S201 identification information of multiple sub-compression modules is received.
  • step S221 is specifically executed as:
  • the identification information of the sub-compression modules includes the numbers of the sub-compression modules, and the numbers of the N sub-compression modules of the encoder conform to the first preset rule.
  • the identification information of the sub-compression modules outputting each group of second channel state information is fed back.
  • the received identification information of each sub-compression module is respectively the identification information of the sub-compression modules outputting each group of second channel state information.
  • N sub-decompression modules are connected in series, and each of the N sub-decompression modules can independently receive input data.
  • When data is output from the i-th sub-decompression module in the data flow direction, the decompression multiple of the output data is the decompression multiple jointly realized by the first sub-decompression module through the i-th sub-decompression module in the data flow direction. It should be pointed out that "the first sub-decompression module in the data flow direction" is not necessarily the sub-decompression module numbered 1.
  • the identification information of the sub-decompression modules includes the numbers of the sub-decompression modules, the numbers of the N sub-decompression modules of the decoder conform to the second preset rule, and there is a corresponding relationship between the first preset rule and the second preset rule.
  • In the step of determining the identification information of the sub-decompression modules required to decompress each group of second channel state information, the identification information of the sub-decompression modules into which each group of second channel state information is input is determined according to the received identification information of the sub-compression modules.
  • the sub-decompression module and subsequent sub-decompression modules in the direction of data flow can be used to decompress the second channel state information step by step.
  • For example, each sub-decompression module can independently achieve a 2-fold local expansion.
  • If the sub-decompression module numbered 1 is used as the input module, 2-fold decompression can be obtained (that is, the data is input from the input end of the sub-decompression module numbered 1 and output from the output end of the sub-decompression module numbered 1, so that the data is decompressed by 2 times); if the sub-decompression module numbered 2 is used as the input module, 4-fold decompression can be obtained (that is, the data is input from the input end of the sub-decompression module numbered 2 and output from the output end of the sub-decompression module numbered 1; the data is decompressed by the sub-decompression module numbered 2 and then by the sub-decompression module numbered 1, so that the data is decompressed by 4 times in total).
  • serial numbers of multiple sub-compression modules can be determined. According to the corresponding relationship between the first preset rule and the second preset rule, and the numbers of the multiple sub-compression modules, the numbers of the multiple sub-decompression modules can be determined.
  • the first preset rule includes consecutively numbering the N sub-compression modules incrementally from number 1 to number N, with the number of a sub-compression module positively correlated with its compression factor; the second preset rule includes consecutively numbering the N sub-decompression modules in descending order from number N to number 1, with the number of a sub-decompression module negatively correlated with its expansion multiple.
  • the correspondence between the first preset rule and the second preset rule includes: a sub-compression module corresponds to the sub-decompression module with the same number.
  • the expansion factor of the sub-decompression module numbered i is greater than the expansion factor of the sub-decompression module numbered i+1, where i is a variable and a positive integer, and 1 ≤ i ≤ N-1.
  • In step S221, the sub-decompression module corresponding to the received number is used as the sub-decompression module into which each group of second channel state information is input (this can be recorded as step S221a).
  • For example, if the received number is 3, the sub-decompression module numbered 3 is used as the input module of that group of second channel state information.
  • the sub-compression module 1 corresponds to the sub-decompression module 1
  • the second channel state information CSI 1 output by the sub-compression module 1 is fed back to the base station and decompressed by the sub-decompression module 1
  • the sub-compression module i corresponds to the sub-decompression module i. After the second channel state information CSI i output by the sub-compression module i is fed back to the base station, it is decompressed by the sub-decompression module i, and so on.
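  • A minimal sketch of this entry-point selection at the decoder follows, assuming serially connected sub-decompression modules, each realizing a 2-fold local expansion with a single fully connected layer; the sizes and numbering are illustrative.

```python
import torch
import torch.nn as nn

N_OUT = 512           # assumed element count of the target / first channel state information
NUM_MODULES = 4
sizes = [N_OUT // (2 ** i) for i in range(NUM_MODULES + 1)]   # 512, 256, 128, 64, 32

# Sub-decompression module i expands sizes[i] -> sizes[i-1] (a 2-fold local expansion), i = 1..NUM_MODULES.
sub_decompression = {i: nn.Linear(sizes[i], sizes[i - 1]) for i in range(1, NUM_MODULES + 1)}

def decompress(second_csi, received_number):
    """Enter at the sub-decompression module matching the fed-back number, then run down to module 1."""
    x = second_csi
    for i in range(received_number, 0, -1):
        x = sub_decompression[i](x)
    return x

csi_group = torch.randn(1, sizes[3])                       # a group compressed by modules 1..3 (factor 8)
recovered = decompress(csi_group, received_number=3)       # shape (1, N_OUT)
```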
  • the received identification information may also include the largest number among numbers of multiple sub-decompression modules required to perform decompression of each group of second channel state information.
  • Determining the identification information of the M sub-decompression modules corresponding to the plurality of sub-compression modules according to the identification information of the plurality of sub-compression modules also includes:
  • step S221b the numbers of the sub-decompression modules input to each group of second channel state information, up to number 1, are used as a plurality of sub-decompression modules corresponding to the plurality of sub-compression modules for obtaining each group of second channel state information one-to-one. identification information.
  • the N sub-decompression modules are numbered in descending order from N to 1.
  • Decompressing the second intermediate channel state information by using the sub-decompression modules corresponding to the second channel state information with the smallest compression factor, to obtain the target channel state information, includes:
  • determining the smallest one of the received numbers;
  • the smallest received number is determined as the number of the first sub-decompression module that decompresses the second intermediate channel state information corresponding to the smallest compression factor;
  • the sub-decompression modules from the smallest received number down to number 1 are determined as the sub-decompression modules that decompress the second intermediate channel state information corresponding to the smallest compression factor.
  • In other words, once the smallest received number is determined, the input module for decompressing the second channel state information with the smallest compression factor can be determined (the input module is the sub-decompression module corresponding to the smallest of all the fed-back numbers), and the remaining sub-decompression modules used for that decompression can then be determined. In this way, all sub-decompression modules for decompressing the second channel state information with the smallest compression factor are finally determined, and the target channel state information can be obtained by using these sub-decompression modules to decompress the second intermediate channel state information.
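A minimal sketch of this selection logic, in plain Python with illustrative names, is given below: under the descending numbering rule, the received number is the entry module for a group, its identification information runs from that number down to 1, and the chain for the smallest received number is the one that decompresses the merged second intermediate channel state information.

```python
def decompression_chains(received_numbers):
    """Map each received sub-compression module number to the identification
    information (sub-decompression module numbers) associated with that group:
    the received number is the entry module and the numbers continue down to 1."""
    chains = {n: list(range(n, 0, -1)) for n in received_numbers}
    # The group with the smallest compression factor corresponds to the smallest
    # received number; its chain is used to decompress the merged second
    # intermediate channel state information into the target channel state information.
    smallest = min(received_numbers)
    return chains, chains[smallest]

chains, final_chain = decompression_chains([3, 1])
print(chains)       # {3: [3, 2, 1], 1: [1]}
print(final_chain)  # [1]
```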
  • Of course, the present disclosure is not limited thereto. The sub-compression modules can instead be numbered in descending order, in which case the sub-decompression modules can be numbered in ascending order. Specifically:
  • the first preset rule includes consecutively numbering the N sub-compression modules in decreasing order from number N to number 1, the number of a sub-compression module being inversely correlated with its compression factor, and the second preset rule includes consecutively numbering the N sub-decompression modules in increasing order from number 1 to number N, the number of a sub-decompression module being positively correlated with its expansion factor;
  • the correspondence between the first preset rule and the second preset rule includes: sub-compression modules and sub-decompression modules with the same number correspond to each other;
  • the expansion factor of the sub-decompression module numbered i is greater than the expansion factor of the sub-decompression module numbered i+1, where i is a variable, i is a positive integer, and 1 ≤ i ≤ N-1;
  • in the step of determining, according to the determined identification information of the plurality of sub-decompression modules, the identification information of the sub-decompression modules required to decompress each group of second channel state information, the sub-decompression module corresponding to the received number serves as the sub-decompression module into which each group of second channel state information is input.
  • Optionally, determining, according to the identification information of the plurality of sub-compression modules, the identification information of the plurality of sub-decompression modules corresponding one-to-one to the plurality of sub-compression modules further includes:
  • using the numbers from the number of the sub-decompression module into which each group of second channel state information is input up to number N as the identification information of the plurality of sub-decompression modules corresponding one-to-one to the plurality of sub-compression modules.
  • Optionally, decompressing the second intermediate channel state information by using the sub-decompression modules corresponding to the second channel state information with the smallest compression factor, to obtain the target channel state information, includes: determining the largest one of the received numbers;
  • the largest received number is determined as the number of the first sub-decompression module that decompresses the second intermediate channel state information corresponding to the smallest compression factor, and the sub-decompression modules from the largest received number up to number N are determined as the sub-decompression modules that perform this decompression.
  • The above describes how serial sub-decompression modules decompress and merge each group of second channel state information.
  • The present disclosure is not limited thereto.
  • For parallel sub-decompression modules, using multiple sub-decompression modules to respectively decompress and merge the M groups of second channel state information includes: decompressing each group of second channel state information with some of the sub-decompression modules to obtain M groups of first intermediate channel state information, whose element counts are smaller than that of the first channel state information; merging the M groups of first intermediate channel state information to obtain second intermediate channel state information; and decompressing the second intermediate channel state information with the remaining sub-decompression modules to obtain the target channel state information.
  • As noted above, the decoder is a neural network model based on deep learning, and the sub-decompression module satisfies at least one of the following conditions:
  • the sub-decompression module includes a fully connected layer;
  • the sub-decompression module includes a plurality of deconvolution layers, and each deconvolution layer includes at least one of the following parts: an upsampling layer, an unpooling layer, a transposed convolution layer, a residual block, a dense block, a direct (skip) connection layer;
  • the sub-decompression module includes a recurrent network, and the recurrent network includes a long short-term memory (LSTM) neural network or a gated recurrent unit (GRU) neural network.
  • In a specific embodiment, the sub-decompression module may be composed of a fully connected network, where the fully connected network includes at least two fully connected layers.
  • In the sub-decompression module numbered i (i = 1, 2, ..., N), the number of elements of the input data of the fully connected network is N_in and the number of elements of the output data is N_out, where N_in < N_out; the local expansion factor inside the module is therefore N_out / N_in. For example, if the input has N_in = 128 elements and the expanded output has N_out = 256 elements, the local expansion factor inside the module is 256/128 = 2.
  • The expansion factor λ_i of the sub-decompression module numbered i is the product of the local expansion factors of the sub-decompression modules numbered N through i. The parameters of the fully connected network include, but are not limited to, at least one of the following: the number of nodes of the fully connected layers, the weights of the fully connected layers, the normalization coefficients, and the activation functions.
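For illustration, a fully connected sub-decompression module of the kind described above might be assembled as in the following sketch; the layer sizes are hypothetical, and the local expansion factor is simply N_out / N_in.

```python
import torch.nn as nn

def make_fc_subdecompressor(n_in, n_out):
    """Fully connected sub-decompression module with at least two fully connected
    layers and a local expansion factor of n_out / n_in (hypothetical hidden size)."""
    hidden = (n_in + n_out) // 2
    return nn.Sequential(
        nn.Linear(n_in, hidden),
        nn.BatchNorm1d(hidden),   # normalization coefficients
        nn.ReLU(),                # activation function
        nn.Linear(hidden, n_out),
    )

sub = make_fc_subdecompressor(128, 256)  # local expansion factor 256/128 = 2
```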
  • In a specific embodiment, the sub-decompression module may be composed of a convolutional network, where the convolutional network includes at least one convolutional layer, or at least one residual block (Residual block), dense block (Dense block), or a combination thereof.
  • In the sub-decompression module numbered i, the total number of data elements input to the convolutional network is N_in and the total number of output elements is N_out, where N_in < N_out, so the local expansion factor inside the module is N_out / N_in. For example, if the input data size is 8*8*2 (N_in = 128 elements) and the expanded data size is 16*16*2 (N_out = 512 elements), the local expansion factor inside the module is 512/128 = 4.
  • The parameters of the convolutional network include, but are not limited to, at least one of the following: the input size of the convolutional layers, convolution kernel parameters, convolution stride parameters, data padding parameters, unpooling parameters, upsampling coefficients, normalization coefficients, and activation functions.
  • In a specific embodiment, the sub-decompression module may be composed of a recurrent network, where the recurrent network may be a long short-term memory (LSTM) network, a gated recurrent neural network (GRU), or an attention-based Transformer structure, among others.
  • In the sub-decompression module numbered i, the total number of data elements input to the recurrent network is N_in and the total number of output elements is N_out, where N_in < N_out, so the local expansion factor inside the module is N_out / N_in.
  • The expansion factor λ_i of the sub-decompression module numbered i is the product of the local expansion factors of the sub-decompression modules numbered N through i, that is, λ_i = λ_local,N × λ_local,N-1 × ... × λ_local,i, where × denotes multiplication.
  • The parameters of the recurrent network include, but are not limited to, at least one of the following: the number of nodes in the recurrent layers, the weights of the recurrent layers, gate parameters in the recurrent layers, attention mechanism parameters, normalization coefficients, and activation functions.
  • In some embodiments, the decoder further includes a second processing module, as shown in FIG. 13.
  • In such embodiments, the acquiring method further includes:
  • in step S230, processing the target channel state information by the second processing module to extract the channel state information fed back by the terminal.
  • The second processing module is also a neural network. Optionally, the second processing module includes a neural network, the neural network includes a plurality of network layers, and each of the plurality of network layers is independently selected from at least one of the following layers: a convolutional layer, a pooling layer, a direct (skip) connection layer.
  • Optionally, time length information of the feedback cycle of each group of second channel state information is received, where the feedback cycle of the second channel state information is inversely correlated with the data amount of the second channel state information.
  • In the step of decompressing and merging each group of second channel state information, the second channel state information with a long feedback cycle can be merged with the second channel state information with short feedback cycles received within the feedback cycle span of the long-cycle information.
  • The second channel state information with the smallest compression factor may be used directly to merge with the decompressed data obtained from the second channel state information received within its feedback cycle span; alternatively, the second channel state information with the smallest compression factor may first be decompressed to obtain first decompressed data, and that first decompressed data may then be merged with the decompressed data obtained from the second channel state information received within its feedback cycle span.
  • The following explains, with reference to a specific embodiment, how the channel state information obtained by decompressing the long-feedback-cycle second channel state information within one of its feedback cycles is jointly processed with the channel state information obtained by decompressing, over multiple shorter feedback cycles within that span, the short-feedback-cycle second channel state information.
  • the two sets of data of the second channel state information with the largest compression factor and the second channel state information with the smallest compression factor are taken as examples for explanation and description.
  • T1 denotes the feedback period of the second channel state information with the smallest compression factor, and TM denotes the feedback period of the second channel state information with the largest compression factor, with T1 > TM.
  • For ease of understanding, assume T1 = 3TM. That is, within the time span of one feedback period T1, the second channel state information with the largest compression factor is fed back three times.
  • Within one period T1, the third channel state information obtained by decompressing the second channel state information with the smallest compression factor is used to jointly process the third channel state information obtained by decompressing the second channel state information of the three feedback periods TM. The details are as follows:
  • At time t0', the receiving end (i.e., the base station) receives the second channel state information with the largest compression factor and the second channel state information with the smallest compression factor;
  • the receiving end decompresses and merges the two received groups of second channel state information to obtain target channel state information reflecting the downlink channel state at time t0.
  • At time t1 (t1 = t0 + TM), the sending end compresses the first channel state information at time t1 with the maximum compression factor to obtain the second channel state information with the largest compression factor;
  • at time t1' (t1' = t0' + TM), the receiving end receives the second channel state information with the largest compression factor;
  • the receiving end decompresses and merges the received second channel state information with the second channel state information with the smallest compression factor received at time t0', to obtain the target channel state information at time t1'.
  • At time t2 (t2 = t0 + 2TM), the sending end compresses the first channel state information at time t2 with the maximum compression factor to obtain the second channel state information with the largest compression factor;
  • at time t2' (t2' = t0' + 2TM), the receiving end receives the second channel state information with the largest compression factor;
  • the receiving end decompresses and merges the received second channel state information with the second channel state information with the smallest compression factor received at time t0', to obtain the target channel state information at time t2'.
  • At time t3 (t3 = t0 + 3TM), the sending end compresses the first channel state information at time t3 with the maximum compression factor and with the minimum compression factor, to obtain the second channel state information with the largest compression factor and the second channel state information with the smallest compression factor;
  • at time t3' (t3' = t0' + 3TM), the receiving end receives the second channel state information with the largest compression factor and the second channel state information with the smallest compression factor;
  • the receiving end decompresses and merges the received second channel state information with the largest compression factor and the second channel state information with the smallest compression factor to obtain the target channel state information at time t3'.
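The timeline above can be condensed into the following sketch; the class and method names are illustrative rather than taken from the disclosure, and the simple averaging stands in for the decompress-and-merge processing performed by the decoder.

```python
class PeriodicCsiCombiner:
    """Caches the most recent small-compression-factor CSI (long period T1) and
    merges it with each large-compression-factor CSI (short period TM)."""
    def __init__(self):
        self.cached_low_rate_csi = None  # decompressed CSI with the smallest compression factor

    def on_low_rate_feedback(self, csi_small_factor):
        # Received once per long period T1 (e.g., at t0', t3', ...).
        self.cached_low_rate_csi = csi_small_factor
        return csi_small_factor

    def on_high_rate_feedback(self, csi_large_factor):
        # Received once per short period TM (e.g., at t1', t2', ...); merged with the
        # cached long-period CSI, here by a simple element-wise average.
        return 0.5 * (self.cached_low_rate_csi + csi_large_factor)
```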
  • In a specific embodiment, for the second channel information CSI_1, CSI_2, ..., CSI_M compressed and fed back by M (2 ≤ M ≤ N) different sub-compression modules, the sub-decompression modules at the base station side may apply average-value combining to the corresponding elements of the outputs CSI'_1, CSI'_2, ..., CSI'_M produced at the sub-decompression module with the smallest compression factor (or the longest feedback period), i.e., CSI' = (CSI'_1 + CSI'_2 + ... + CSI'_M) / M; the averaged CSI' is then sent to the subsequent sub-decompression modules to recover the estimated channel information (i.e., the target channel state information).
  • In another specific embodiment, square-average combining may be applied to the corresponding elements of CSI'_1, CSI'_2, ..., CSI'_M at the sub-decompression module with the smallest compression factor (or the longest feedback period); the square-averaged CSI' is then sent to the subsequent sub-decompression modules to recover the estimated channel information (i.e., the target channel state information).
  • In another specific embodiment, weighted combining may be applied: the contribution of each group of channel state information to the current channel situation may be assigned a different weight α, and the weighted CSI' is sent to the subsequent sub-decompression modules to recover the estimated channel information (i.e., the target channel state information).
  • In yet another specific embodiment, concatenation may be applied to the outputs CSI'_1, CSI'_2, ..., CSI'_M at the sub-decompression module with the smallest compression factor (or the longest feedback period): they are merged in parallel along one of their dimensions to form a new CSI', and the concatenated CSI' is sent to the subsequent sub-decompression modules to recover the estimated channel information (i.e., the target channel state information).
  • As a specific example, if CSI'_1 and CSI'_2 output by the sub-decompression module before the concatenation are both 256*1-dimensional vectors, merging them in parallel along the second dimension forms a new CSI' that is a 256*2-dimensional matrix, which is then sent to the subsequent sub-decompression modules to recover the estimated channel information.
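The four combining options above could be expressed as in the following sketch; the shapes mirror the 256*1 example, the names are illustrative, and square-average combining is interpreted here as an element-wise root-mean-square, which is an assumption rather than a formula taken from the disclosure.

```python
import numpy as np

csi_outputs = [np.random.randn(256, 1) for _ in range(2)]  # stand-ins for CSI'_1, CSI'_2

average = sum(csi_outputs) / len(csi_outputs)                                   # average-value combining
square_average = np.sqrt(sum(c ** 2 for c in csi_outputs) / len(csi_outputs))   # square-average combining (RMS assumption)
weights = [0.7, 0.3]                                                            # weights alpha per group (illustrative)
weighted = sum(w * c for w, c in zip(weights, csi_outputs))                     # weighted combining
concatenated = np.concatenate(csi_outputs, axis=1)                              # 256*2 matrix after concatenation
```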
  • After the base station side receives the second channel information fed back by the terminal under different compression factors, the information needs to be sent to the decompression module in the decoder for processing to generate the enhanced target channel state information.
  • The input of the decompression module is the second channel state information CSI_1, CSI_2, ..., CSI_M fed back by the terminal under M (2 ≤ M ≤ N) compression factors; according to the sub-compression module number associated with each fed-back group, the group is sent to the corresponding sub-decompression module to expand the compressed channel information.
  • Among the 1st, 2nd, ..., Nth sub-compression modules, the numbers of elements of the output second channel state information are P_1, P_2, ..., P_N, which are respectively the same as the numbers of input elements Q_1, Q_2, ..., Q_N of the N sub-decompression modules; that is, the number of output data elements of the i-th sub-compression module on the terminal side is consistent with the number of input data elements of the i-th sub-decompression module on the base station side, and modules with the same number together form N pairs of sub-compression and sub-decompression modules.
  • After the base station receives the M (2 ≤ M ≤ N) pieces of second channel state information CSI_1, CSI_2, ..., CSI_M fed back by the terminal in different cycles, they need to be sent to the decompression module for expansion of the compressed information, and at the output end of the sub-decompression module corresponding to the second channel state information with the longest feedback cycle (or the second channel state information with the smallest terminal-side compression factor), the channel state information recovered at that sub-decompression module in other feedback cycles is merged and coordinated; the specific process is shown in FIG. 4.
  • For example, the terminal feeds back to the base station two pieces of second channel state information with compression factors of 4 and 64; CSI_1 with the 64x compression factor is fed back every 1 TTI, and CSI_2 with the 4x compression factor is fed back every 10 TTIs.
  • The base station records and saves the channel information of CSI_2 and, at the output end of the base-station-side sub-decompression module corresponding to the compression module with the 4x compression factor, performs collaborative processing with the channel information of CSI_1 that has the short feedback period; the result is then sent to the subsequent sub-decompression modules for processing until the estimated pre-compression channel state information (i.e., the target channel state information) is recovered.
  • The above describes how the compression factor is determined from the number of the sub-compression module sent by the sending end, from which the expansion factor for decompressing each group of second channel state information is further determined. The present disclosure is not limited thereto.
  • The sending end may instead send the length of the feedback cycle of each group of second channel state information; correspondingly, the base station may determine the expansion factor for decompressing each group of second channel state information according to the received time length information of its feedback cycle.
  • As another optional embodiment, the sending end may feed back the compression factor of each group of second channel state information.
  • Correspondingly, the acquiring method further includes: receiving the compression factor information corresponding to each group of second channel state information;
  • determining the expansion factor of each group of second channel state information according to the compression factor information corresponding to that group.
  • the sending end may also feed back the index information of the encoder, and correspondingly, the base station may receive the index information of the encoder, and determine the decoder performing decompression according to the index information of the encoder.
  • encoders of different index information may be used in different application scenarios. Decoders corresponding to encoders corresponding to different index information can also be used in different application scenarios.
  • According to different actual environment scenarios, the neural network parameters of K0 sets of autoencoders are obtained through the above training process, each autoencoder including a pair of an encoder and a decoder.
  • The terminal and the base station respectively store the neural network parameters of the K0 sets of encoders and decoders.
  • In use, the base station configures, through higher-layer signaling and according to the channel conditions, the index of an encoder-decoder pair among the K0 sets, so that the terminal knows which set of encoder parameters is used after receiving the index; that is, the base station configures the index of the encoder-decoder pair, and the terminal receives the index of the encoder-decoder pair and determines the parameters corresponding to the encoder.
  • Alternatively, the terminal selects one of the encoders according to at least one factor such as the channel scenario, the channel angle spread, the delay spread, and the Doppler spread, and transmits the index of the selected encoder to the base station through physical-layer and/or higher-layer signaling.
  • The base station obtains the corresponding decoder from the encoder index fed back by the terminal and uses it to process the received second channel information; if there is only K0 = 1 set of encoder parameters, the terminal may simply feed back index 0.
  • As a third aspect of the present disclosure, a method for joint training of an encoder and a decoder is provided; the encoder is capable of compression with N compression factors, the decoder is capable of decompression with N expansion factors, and the values of the N compression factors are equal, in one-to-one correspondence, to the N expansion factors. As shown in FIG. 14, the method includes:
  • in step S310, training the initial parameters of the encoder to obtain the final parameters of the encoder;
  • in step S320, training the initial parameters of the decoder to obtain the final parameters of the decoder.
  • The training error in the step of training the initial parameters of the encoder and the training error in the step of training the initial parameters of the decoder are both an error function between the input of the encoder and the output of the decoder.
  • the encoder and the decoder are jointly trained, and in the encoder and the decoder trained by the above method, the input information of the encoder is closer to the output information of the decoder.
  • the encoder includes N sub-compression modules, and the N sub-compression modules can respectively realize the compression of N kinds of compression multiples.
  • the initial parameters of the encoder include the initial parameters of the N sub-compression modules.
  • the final parameters include final parameters of the N sub-compression modules.
  • The decoder includes N sub-decompression modules, which can respectively realize decompression with the N expansion factors; the initial parameters of the decoder include the initial parameters of the N sub-decompression modules, and the final parameters of the decoder include the final parameters of the N sub-decompression modules.
  • Correspondingly, training the initial parameters of the encoder includes respectively training the initial parameters of the N sub-compression modules, and training the initial parameters of the decoder includes respectively training the initial parameters of the N sub-decompression modules.
  • Optionally, the training error for training the initial parameters of a sub-compression module is the error function between the input of that sub-compression module and the output of the sub-decompression module whose expansion factor has the same value as the compression factor of that sub-compression module.
  • As an optional embodiment, in order to train parameters with better generalization across sub-modules of different compression factors, the error function takes a weighted-sum form: Loss = k_1·Loss_1 + k_2·Loss_2 + ... + k_N·Loss_N,
  • where i is the index of the sub-compression module and the sub-decompression module, k_i is the weighting coefficient of the i-th sub-compression module and the i-th sub-decompression module, and Loss_i is the error function between the input of the i-th sub-compression module and the output of the i-th sub-decompression module.
  • By jointly training with a weighted combination of the error functions of the N sub-compression modules and their corresponding sub-decompression modules, different degrees of training contribution can be provided to the models under multiple compression factors; depending on the actual situation, the training error functions of the models with large compression factors or with small compression factors may be assigned higher weights.
  • The sub-modules under all compression factors are trained together with this new training error function until the new training error Loss is minimal and almost unchanged, at which point the network parameters of the compression module and the decompression module are trained. Then, together with the parameter training of the first processing module in the encoder and the second processing module in the decoder, the parameter training of a set of encoder and decoder for the terminal and the base station is completed. A sketch of this weighted joint loss is given below.
  • As another optional embodiment, the error function is selected from any one of the following functions: a mean square error function, an absolute value loss function, or a logarithmic loss function.
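For illustration, a minimal sketch of the weighted joint error function is shown below; the per-pair error Loss_i is taken here as a mean squared error, which is one of the loss functions listed above, and all function and variable names are illustrative.

```python
import torch

def joint_loss(encoder_inputs, decoder_outputs, weights):
    """Weighted sum of per-pair reconstruction errors:
    Loss = sum_i k_i * Loss_i, where Loss_i is the MSE between the input of the
    i-th sub-compression module and the output of the i-th sub-decompression module."""
    losses = [torch.mean((x - y) ** 2) for x, y in zip(encoder_inputs, decoder_outputs)]
    return sum(k * l for k, l in zip(weights, losses))
```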
  • In a specific embodiment, the terminal and the base station first jointly train the sub-compression module and the sub-decompression module with the lowest compression factor γ_1; the training goal is to stop training when the training error Loss is minimal and almost unchanged, at which point the neural network parameter training of that sub-compression module and its corresponding sub-decompression module is completed.
  • The training error Loss is the error function between the input of the sub-compression module and the output of the corresponding sub-decompression module; it characterizes the loss error, or closeness, between the restored data output by the sub-decompression module and the original input data before the sub-compression module, and a mean square error function, an absolute value loss function, a logarithmic loss function, or the like may be used.
  • The specific schematic diagram is shown in FIG. 15.
  • When the network parameter training is completed, the terminal side and the base station side fix the network parameters of these two modules so that they no longer change. Then, the terminal and the base station jointly train the sub-compression module and the sub-decompression module with the second-lowest compression factor γ_2; when the training error Loss2 is lowest and remains almost unchanged, the neural network parameter training of the sub-compression module and the sub-decompression module with the second-lowest compression factor γ_2 is completed, and the network parameters of these two modules are fixed and no longer change.
  • This continues by analogy until the terminal and the base station have jointly trained the sub-compression module and the sub-decompression module with the highest compression factor γ_N (where γ_1 < γ_2 < ... < γ_N). Then, together with the parameter training of the first processing module in the encoder and the second processing module in the decoder, the parameter training of a set of encoder and decoder for the terminal and the base station is completed.
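As an illustration of this progressive schedule, the following sketch (PyTorch, with hypothetical module and data-loader names, and a fixed epoch count standing in for the "loss no longer decreasing" criterion) trains each sub-compression/sub-decompression pair in turn and then freezes its parameters.

```python
import torch

def train_pairs_progressively(pairs, data_loader, epochs_per_pair=10):
    """`pairs` is a list of (sub_compressor, sub_decompressor) modules ordered from
    the lowest to the highest compression factor; each pair is trained, then frozen."""
    for compressor, decompressor in pairs:
        params = list(compressor.parameters()) + list(decompressor.parameters())
        optimizer = torch.optim.Adam(params, lr=1e-3)
        for _ in range(epochs_per_pair):          # stand-in for "until Loss stops improving"
            for csi in data_loader:
                loss = torch.mean((decompressor(compressor(csi)) - csi) ** 2)
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
        for p in params:                          # freeze the trained pair
            p.requires_grad_(False)
```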
  • In another specific embodiment, the terminal and the base station first jointly train the sub-compression module and the sub-decompression module with the highest compression factor γ_1; the training goal is to stop training when the training error Loss is minimal and almost unchanged, at which point the neural network parameter training of that sub-compression module and its corresponding sub-decompression module is completed.
  • The training error Loss may use a mean square error function, an absolute value loss function, a logarithmic loss function, or the like.
  • When the network parameter training is completed, the terminal side and the base station side fix the network parameters of these two modules so that they no longer change.
  • Then, the terminal and the base station jointly train the sub-compression module and the sub-decompression module with the second-highest compression factor γ_2; when the training error Loss2 is lowest and remains almost unchanged, the neural network parameter training of the sub-compression module and the sub-decompression module with the second-highest compression factor γ_2 is completed, and the network parameters of these two modules are fixed and no longer change.
  • This continues by analogy until the terminal and the base station have jointly trained the sub-compression module and the sub-decompression module with the lowest compression factor γ_N (where γ_1 > γ_2 > ... > γ_N).
  • Then, together with the parameter training of the first processing module in the encoder and the second processing module in the decoder, the parameter training of a set of encoder and decoder for the terminal and the base station is completed.
  • As a fourth aspect of the present disclosure, a terminal is provided; the terminal includes:
  • a first storage module on which a first executable program is stored;
  • one or more first processors which, when they call the first executable program, implement the feedback method provided by the first aspect of the present disclosure.
  • As a fifth aspect of the present disclosure, a base station is provided; the base station includes:
  • a second storage module on which a second executable program is stored;
  • one or more second processors which, when they invoke the second executable program, implement the acquisition method provided by the second aspect of the present disclosure.
  • As a sixth aspect of the present disclosure, an electronic device is provided; the electronic device includes:
  • a third storage module on which a third executable program is stored;
  • one or more third processors which, when they call the third executable program, implement the joint training method provided by the third aspect of the present disclosure.
  • A computer-readable medium is also provided, on which an executable program is stored.
  • When the executable program is invoked, it can implement any one of the feedback method provided by the first aspect of the present disclosure, the acquisition method provided by the second aspect of the present disclosure, and the training method provided by the third aspect of the present disclosure.
  • Those of ordinary skill in the art can understand that all or some of the steps in the methods disclosed above, and the functional modules/units in the systems and devices, can be implemented as software, firmware, hardware, and appropriate combinations thereof.
  • In a hardware implementation, the division between functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be executed by several physical components in cooperation.
  • Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit, a digital signal processor, or a microprocessor, or as hardware, or as an integrated circuit, such as an application-specific integrated circuit.
  • Such software may be distributed on computer-readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media).
  • As is well known to those of ordinary skill in the art, the term computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer-readable instructions, data structures, program modules, or other data.
  • Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by a computer.
  • In addition, as is well known to those of ordinary skill in the art, communication media typically embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery media.


Abstract

The present disclosure provides a feedback method for channel state information, used for a terminal, the feedback method including: compressing first channel state information with M compression factors by using an encoder to obtain M groups of second channel state information, M being a positive integer not less than 2; and feeding back the M groups of second channel state information. The present disclosure also provides a method for acquiring channel state information, a method for jointly training an encoder and a decoder, a terminal, a base station, an electronic device, and a computer-readable medium. FIG. 1 (abstract drawing).

Description

反馈、获取及训练方法、终端、基站、电子设备和介质
相关公开的交叉引用
本公开要求在2021年12月10日提交国家知识产权局、公开号为CN202111505026.8、发明名称为“反馈、获取及训练方法、终端、基站、电子设备和介质”的中国专利申请的优先权,该申请的全部内容通过引用结合在本公开中。
技术领域
本公开的实施例涉及但不限于通信领域,具体地,涉及一种信道状态信息的反馈方法、一种信道状态信息的获取方法、一种编码器和解码器进行联合训练的方法、一种终端、一种基站、一种电子设备和一种计算机可读介质。
背景技术
从长期演进(Long Term Evolution,LTE)发展到现在的新无线接入技术(New Radio Access Technology,NR),多天线技术一直是通信标准中重要的技术之一。随着通信的标准指标在不断地提升,如何更加准确地获取信道状态信息(Channel State Information,CSI)也是多天线技术性能提升的关键所在。
发明内容
本公开实施例提供一种信道状态信息的反馈方法、一种信道状态信息的获取方法、一种编码器和解码器进行联合训练的方法、一种终端、一种基站、一种电子设备和一种计算机可读介质。
作为本公开的第一个方面,提供一种信道状态信息的反馈方法,用于 终端,所述反馈方法包括:利用编码器对第一信道状态信息进行M种压缩倍数的压缩,以获得M组第二信道状态信息,M为正整数,且M不小于2;反馈M组第二信道状态信息。
作为本公开的第二个方面,提供一种信道状态信息的获取方法,用于基站,所述获取方法包括:接收M组第二信道状态信息,接收到的每一组所述第二信道状态信息均为编码器对第一信道状态信息经相应倍数的压缩所获得的信息,M为正整数,且M不小于2;利用解码器对各组第二信道状态信息进行解压缩及合并处理,获得目标信道状态信息,其中,对数据量最小的一组第二信道状态信息进行解压缩的膨胀倍数大于对其他组第二信道状态信息进行解压缩的膨胀倍数,且所述目标信道状态信息的元素数量与所述第一信道状态信息的元素数量相同。
作为本公开的第三个方面,提供一种编码器和解码器进行联合训练的方法,所述编码器能够进行N种压缩倍数的压缩,所述解码器能够进行N种膨胀倍数的解压缩,且N中压缩倍数的数值分别与N种解压缩的膨胀倍数一一对应地相等,所述方法包括:对所述编码器的初始参数进行训练,以获得所述编码器的最终参数;对所述解码器的初始参数进行训练,以获得所述解码器的最终参数;其中,对所述编码器的初始参数进行训练的步骤中的训练误差、以及对所述解码器的初始参数进行训练的训练误差均为所述编码器的输入与所述解码器的输出之间的误差函数。
作为本公开的第四个方面,提供一种终端,所述终端包括:第一存储模块,其上存储有第一可执行程序;至少一个第一处理器,当所述至少一个第一处理器调用所述第一可执行程序时,实现本公开第一个方面所提供的反馈方法。
作为本公开的第五个方面,提供一种基站,所述基站包括:第二存储模块,其上存储有第二可执行程序;至少一个第二处理器,当所述至少一个第二处理器调用所述第二可执行程序时,实现本公开第二个方面所提供 的获取方法。
作为本公开的第六个方面,提供一种电子设备,所述电子设备包括:第三存储模块,其上存储有第三可执行程序;至少一个第三处理器,当所述至少一个第三处理器调用所述第三可执行程序时,实现本公开第三个方面所提供的联合训练的方法。
作为本公开的第五个方面,提供一种计算机可读介质,其上存储有可执行程序,当所述可执行程序被调用时,能够实现本公开所提供的上述方法。
附图说明
图1是本公开所提供的信道状态信息的反馈方法的一种实施方式的流程图;
图2是编码器的一种实施方式的结构示意图;
图3是编码器的另一种实施方式的结构示意图;
图4是展示不同压缩倍数的子压缩模块与各自的反馈周期之间的关系的示意图;
图5是本公开所提供的信道状态信息的反馈方法的另一种实施方式的流程图;
图6是本公开所提供的信道状态信息的反馈方法的还一种实施方式的流程图;
图7是本公开所提供的信道状态信息的获取方法的一种实施方式的流程图;
图8是解码器的结构示意图;
图9是步骤S220的一种实施方式的流程图;
图10是本公开所提供的信道状态信息的获取方法的另一种实施方式的流程图;
图11是压缩模块中N个子压缩模块与解压缩模块中N个子解压缩模块之间的对应关系示意图;
图12是步骤S221的一种实施方式的流程图;
图13是本公开所提供的信道状态信息的获取方法的还一种实施方式的流程图;
图14是编码器和解码器进行联合训练的方法的一种实施方式的流程图;
图15是表示误差函数的示意图。
具体实施方式
为使本领域的技术人员更好地理解本公开的技术方案,下面结合附图对本公开提供的信道状态信息的反馈方法、信道状态信息的获取方法、编码器和解码器进行联合训练的方法、终端、基站、电子设备和计算机可读介质进行详细描述。
在下文中将参考附图更充分地描述示例实施例,但是所述示例实施例可以以不同形式来体现且不应当被解释为限于本文阐述的实施例。反之,提供这些实施例的目的在于使本公开透彻和完整,并将使本领域技术人员充分理解本公开的范围。
在不冲突的情况下,本公开各实施例及实施例中的各特征可相互组合。
如本文所使用的,术语“和/或”包括一个或多个相关列举条目的任何和所有组合。
本文所使用的术语仅用于描述特定实施例,且不意欲限制本公开。如本文所使用的,单数形式“一个”和“该”也意欲包括复数形式,除非上下文另外清楚指出。还将理解的是,当本说明书中使用术语“包括”和/或“由……制成”时,指定存在所述特征、整体、步骤、操作、元件和/或组件,但不排除存在或添加一个或多个其它特征、整体、步骤、操作、元件、组件和/或其群组。
除非另外限定,否则本文所用的所有术语(包括技术和科学术语)的含义与本领域普通技术人员通常理解的含义相同。还将理解,诸如那些在常用字典中限定的那些术语应当被解释为具有与其在相关技术以及本公开的背景下的含义一致的含义,且将不解释为具有理想化或过度形式上的含义,除非本文明确如此限定。
作为本公开的第一个方面,提供一种信道状态信息的反馈方法,用于终端,如图1所示,所述反馈方法包括:
在步骤S110中,利用编码器对第一信道状态信息进行M种压缩倍数的压缩,以获得M组第二信道状态信息,M为正整数,且M不小于2;
在步骤S120中,反馈M组第二信道状态信息。
在天线系统中,终端需要将下行信道状态信息反馈给基站。在本公开所提供的反馈方法中,第一信道状态信息可以反映待反馈的下行信道状态,对下行信道状态信息进行M种压缩倍数的压缩后,可以获得M组数据量小于第一信道状态信息的数据量的第二信道状态信息。其中,压缩倍数越大,则获得的第二信道状态信息的数据量越少,传输该第二信道状态信息时所需要开销越少。即,压缩倍数最大的第二信道状态信息数据量最小,传输时开销最小,压缩倍数最大的第二信道状态信息中的数据量最大,传输时所需要的开销最大。
在接收端接收到各组第二信道状态信息时,可以对各组第二信道状态信息进行解压缩,分别获得多组解压缩后的信道状态信息,对各组解压缩后的信道状态信息进行合并处理,可以获得数据量与第一信道状态信息的数据量相同的目标信道状态信息。在本公开中,以较低的频率对实时获得的第一信道状态信息进行较小倍数的压缩及反馈。容易理解的是,压缩倍数最小的第二信道状态信息进行解压缩后获得的信道状态信息与第一信道状态信息之间的差距最小,而下行信道状态的变化并非快速变化。因此,在一段时间内,压缩倍数最小的第二信道状态信息解压缩获得的信道状态信息仍能在一定程度上较为全面地反映下行信道状态信息。并且,在本公开中,以较为频繁地对实时获得的第一信道状态信息进行大压缩倍数的压缩、及反馈。因此,压缩倍数较大的第二信道状态信息解压缩后,可以获 得能够反映信道状态变化的信道状态信息。在接收端,利用及时获得的、能够反映信道状态变化的信道状态与较为全面的信道状态信息进行合并,可以获得既能够全面反映下行信道状态、又能反映下行信道状态变化的目标信道状态信息。反馈压缩倍数小的第二信道状态信息所需要的开销较大,反馈压缩倍数大的第二信道状态信息所需要的开销较小。因此,通过本公开所提供的反馈方法可以减小反馈开销,又可以使得接收端及时掌握下行信道状态。
当然,本公开并不限于此,也可以利用相同的频率反馈不同压缩倍数的第二信道状态信息,在接收端接收到各组第二信道状态信息后,通过对各组第二信道状态信息进行解压缩获得相应的信道状态信息后,再对各组解压缩后获得的信道状态信息进行合并处理,可以获得精确反馈下行信道状态信息的目标信道状态信息。
综上,通过本公开所提供的反馈方法,可以使得基站获得精确的下行信道状态信息,并准确地对下行信道的状态进行判断。
在本公开中,编码器为基于深度学习的神经网络模型,采用编码器可以更好的学习并提取第一信道状态信息的特征,有利于更好地对第一信道状态信息进行压缩和反馈。
在使用大规模天线阵列的场景中,下行信道的状态信息反馈的开销随着天线数目的增加而增大。通过本公开所提供的反馈方法可以将待反馈的下行信道状态信息压缩至较少的数据量,从而可以确保及时地将反映下行信道状态的第二信道状态信息发送至基站侧。基站侧接收到第二信道状态信息后,对该第二信道状态信息进行解压缩及合并处理,可以获得下行信道状态。
在本公开中,对如何实现预定倍数的压缩不做特殊的限定。例如,实现压缩倍数为γ L的压缩步骤可以包括:
对第一信道状态信息进行L次局部压缩,并且,L次局部压缩满足以下公式(1):
Figure PCTCN2022138160-appb-000001
其中,j为局部压缩的次数编号,1≤j≤L;
η j为第j次局部压缩的压缩倍数。
在本公开中,如图2所示,编码器可以包括N个子压缩模块(分别为子压缩模块1、子压缩模块2、…、子压缩模块M、…子压缩模块N),N为不小于2的正整数。在所述编码器对第一信道状态信息进行M种压缩倍数的压缩的步骤(即,步骤S110)中,选用N个子压缩模块中的多个子压缩模块实现对所述第一信道状态信息进行M种压缩倍数的压缩。
换言之,所述编码器可以实现N种不同压缩倍数的压缩。可以下行信道信息的数据量大小选择压缩倍数的种类。例如,当下行信道信息的数据量较大时,可以采用较多种类的压缩倍数分别对下行信道状态信息进行压缩。当下行信道信息的数据量较小时,可以采用较少种类的压缩倍数分别对下行信道状态信息进行压缩。
作为一种可选实施方式,2≤M≤N,N个所述子压缩模块的压缩倍数互不相同。如图2中所示,在所述利用编码器对第一信道状态信息进行M种压缩倍数的压缩的步骤中,选用N个子压缩模块中的M个子压缩模块实现对所述第一信道状态信息进行M种压缩倍数的压缩。也就是说,被选中的M个子压缩模块并行作业,并且并行地对获得的M组第二信道状态信息进行反馈。具体地,在图2中所示的实施方式中,利用子压缩模块1对第一信道状态信息进行压缩,获得第一组第二信道状态信息CSI1;利用子压缩模块2对第一信道状态信息进行压缩,获得第二组第二信道状态信息CSI2;……,利用子压缩模块N对第一信道状态信息进行压缩,获得第N组第二信道状态信息CSIN。
当然,本公开并不限于此。作为另一种可选地实施方式,编码器中的N个子压缩模块也可以串行作业。也就是说,每一个子压缩模块能够实现相应倍数的局部压缩,通过多个子压缩模块串行,可以实现较大倍数的压缩。相应地,在所述利用编码器对第一信道状态信息进行M种压缩倍数的压缩的步骤中,对于M种压缩倍数中的任意一种倍数的压缩,通过一个所述子压缩模块、或者多个所述子压缩模块的组合实现。并且,压缩倍数最小的压缩步骤所用到的子压缩模块数量最少,压缩倍数最大的压缩步骤所 用到的子压缩模块数量最多。
例如,第二信道状态信息对应的压缩倍数为2倍时,可以选用一个压缩倍数为2倍的子压缩模块来实现对第一信道状态信息进行压缩的步骤。当第二信道状态信息对应的压缩倍数为10倍时,可以选用一个压缩倍数为2的子压缩模块对第一信道状态信息进行第一次压缩,获得中间信息,然后再选用一个压缩倍数为5的子压缩模块对中间信息进行第二次压缩,以获得压缩倍数为10的第二信道状态信息。依次类推,通过多种压缩倍数的子压缩模块的组合,可以实现需要的压缩倍数。
在图3和图4中所示的实施方式中,可以利用子压缩模块1对第一信道状态信息进行压缩,获得第一组第二信道状态信息CSI1;利用子压缩模块1和子压缩模块2对第一信道状态信息进行压缩,获得第二组第二信道状态信息CSI2;……,利用子压缩模块1至子压缩模块N对第一信道状态信息进行压缩,获得第N组第二信道状态信息CSIN。
在利用多个子压缩模块对第一信道状态信息进行压缩的步骤,也是满足上述公式(1)的,即,最终获得的压缩倍数为执行压缩的所有子压缩模块的压缩倍数的连乘积。
如上文中给所述,可选取N个子压缩模块中的多个子压缩模块实现对所述第一信道状态信息进行M种压缩倍数的压缩。为了便于接收端确定具体执行压缩的子压缩模块,可选地,如图5所示,所述反馈方法还可以包括:
在步骤S130中:
分别反馈实现每种压缩倍数的多个子压缩模块的标识信息;或者
分别反馈实现每种压缩倍数的多个子压缩模块的标识信息中符合预定条件的一个或多个。
通过步骤S130,可以使得接收端最终确定执行压缩步骤的多个子压缩模块的标识信息,并最终确定接收到的M组第二信道状态信息分别对应的M种压缩倍数,并据此确定对M组第二信道状态信息进行解压缩所需要的膨胀倍数。
在本公开中,对步骤S130和步骤S120之间的先后顺序没有特殊的限定。例如,可以先执行步骤S120、后执行步骤S130、也可以先执行步骤S130、后执行步骤S120,还可以同步地执行步骤S120和步骤S130。
为了减少向基站进行反馈时的数据量,作为一种可选实施方式,N个子压缩模块串联,且N个子压缩模块均能独立地输出,且第i个子压缩模块输出的数据的压缩倍数为:数据流方向上的第1个子压缩模块至第i个压缩模块共同实现的压缩倍数。这样,对于任意一种压缩倍数的第二信道状态信息,反馈时只需要反馈输出该第二信道状态信息的子压缩模块的编号即可。
所述子压缩模块的标识信息包括子压缩模块的编号,N个所述子压缩模块的编号符合第一预设规则。
这样,在执行步骤S130时,可以只反馈分别实现每种压缩倍数的个子压缩模块中的一个或多个编号,在接收端,可以根据所述第一预设规则推算实现每种压缩倍数的其余各个子压缩模块的编号。
作为一种可选实施方式,所述第一预设规则包括从编号1至N递增地对N个所述子压缩模块进行连续编号,且所述子压缩模块的编号与该子压缩模块输出的压缩倍数正相关,所述预定条件包括实现每种压缩倍数的多个子压缩模块的编号中的最大编号。即,在步骤S130中,反馈实现每种压缩倍数的多个子压缩模块的编号中的最大编号。
在N个子压缩模块串联的实施方式中,单个子压缩模块用于实现局部压缩,需要指出的是,不同的子压缩模块可以实现相同倍数的局部压缩,也可以实现不同倍数的局部压缩。
下面以不同的子压缩模块实现相同倍数的局部压缩为例进行介绍。例如,3个子压缩模块串联。每个子压缩模块均能独立地实现2倍的局部压缩。这样,编号为1的子压缩模块输出的压缩倍数为2(即,以编号为1的子压缩模块作为输出模块的话,将数据从编号为1的子压缩模块的输入端输入、从编号为1的子压缩模块的输出端输出,从而可以实现对数据的2倍压缩);编号为2的子压缩模块输出的压缩倍数为4(即,以编号为2 的子压缩模块作为输出模块的话,将数据从编号为1的子压缩模块的输入端输入、从编号为2的子压缩模块的输出端输出;数据依次经过编号为1的子压缩模块的压缩、以及编号为2的子压缩模块的压缩,从而可以实现对数据的4倍压缩);编号为3的子压缩模块输出的压缩倍数为8(即,以编号为3的子压缩模块作为输出模块的话,将数据从编号为1的子压缩模块的输入端输入、从编号为3的子压缩模块的输出端输出;数据依次经过编号为1的子压缩模块的压缩、编号为2的子压缩模块的压缩、以及编号为3的子压缩模块的压缩,从而可以实现对数据的8倍压缩)。
如上文中所述,对于任意一种压缩倍数的第一信道状态信息,反馈时只需要反馈输出该第一信道状态信息的子压缩模块的编号即可。因此,在N个子压缩模块按照从1至N递增编号的实施方式中,在反馈子压缩模块的编号时,反馈每种压缩倍数多个子压缩模块的编号中的最大编号。即,所述预定条件包括实现每种压缩倍数多个子压缩模块的编号中的最大编号。
当然,本公开并不限于此。作为另一种可选实施方式,所述第一预设规则包括从编号N至编号1递减地对N个所述子压缩模块进行连续编号,且所述子压缩模块的编号与该子压缩模块的压缩倍数反相关。这样,输出第二信道状态信息的子压缩模块为编号最小的子压缩模块。相应地,所述预定条件包括实现每种压缩倍数多个子压缩模块的编号中的最小编号。
具体地,对于任意实现一种压缩倍数的第二信道状态信息的一组子压缩模块而言,反馈时,只需要反馈这一组子压缩模块中编号最小的一者。接收端接收到编号最小的一者后,可以确定该编号最小的子压缩模块对应的子解压缩模块直至编号为N子压缩模块对应的子解压缩模块,对该组第二信道状态信息进行解压缩处理。
在本公开中,接收端接收到第二信道状态信息后,即可确定M的数值,再根据接收到的编号,即可推断出所有参与实现各种压缩倍数的压缩步骤的子压缩模块的编号。
作为一种可选实施方式,可以将子压缩模块的编号进行比特量化。N个子压缩模块的编号对应
Figure PCTCN2022138160-appb-000002
个比特,其中
Figure PCTCN2022138160-appb-000003
为向上取整符号。因 此,反馈的M种第二信道状态信息所对应的M个子压缩模块的编号需要M*B个比特。比如,总共有4个子压缩模块,分别为64倍压缩的子压缩模块、16倍压缩的子压缩模块、8倍压缩的子压缩模块、4倍压缩的子压缩模块。因此可以利用2比特量化4个子压缩模块的编号(即00表示4倍压缩的子压缩模块,01表示8倍压缩的子压缩模块,10表示16倍压缩的子压缩模块,11表示64倍压缩的子压缩模块)。现如果需要反馈4倍压缩后获得的第二信道状态信息的编号与64倍压缩后获得的第二信道状态信息编号,则需反馈00与11共4比特。
在另一个具体的实施例中,也可以调整量化方式来反馈编号信息。用
Figure PCTCN2022138160-appb-000004
个比特表示M个编号中最大或者最小的编号,其中
Figure PCTCN2022138160-appb-000005
为向上取整符号。知道了最大或最小编号,其余M-1个子压缩模块的编号距离的最大或最小编号距离值的差分值来依次表示。差分距离值用
Figure PCTCN2022138160-appb-000006
比特来从小到大依次表示距离最大或最小值的从近至远程度。比如,总共有8个子压缩模块,分别为256倍压缩的子压缩模块、128倍压缩的子压缩模块、64倍压缩的子压缩模块、32倍压缩的子压缩模块、16倍压缩的子压缩模块、8倍压缩的子压缩模块、4倍压缩的子压缩模块、2倍压缩的子压缩模块。因此可以用3比特量化8个子压缩模块编号(即000表示2倍压缩,001表示4倍压缩,010表示8倍压缩,011表示16倍压缩,100表示32倍压缩,101表示64倍压缩,110表示128倍压缩,111表示256倍压缩)。现如果需要反馈4倍压缩、64倍压缩与128倍压缩后的第二信道状态信息编号,则需反馈最大压缩倍数编号编码为110,64倍差分压缩编号编码为001,4倍差分压缩编号编码100共9比特。对于差分距离较小的编号也可以根据非均匀量化降低比特冗余表示,进一步来降低反馈比特开销。
如上文中所述,编码器为基于深度学习的神经网络模型。相应地,子压缩模块也是基于深度学习的神经网络模型。
在本公开中,对于压缩倍数不同的一组子压缩模块,共用实现相同的局部压缩倍数的子压缩模块所对应的参数,从而可以减少编码器的训练步骤。
作为一种可选实施方式,所述子压缩模块满足以下条件中的任意一者:
所述子压缩模块包括全连接网络,所述全连接网络包括至少两层全连接层;
所述子压缩模块包括卷积网络,所述卷积网络包括以下部分中的至少一者:至少一个卷积层、至少一个残差块或至少一个稠密块;或者
所述子压缩模块包括循环网络,所述循环网络包括长期记忆LSTM神经网络或者门控循环GRU神经网络。
在子压缩模块包括全连接网络、且全连接网络包括至少两层全连接层的实施方式中:
在第i个子压缩模块中(i=1,2,...,N),全连接网络的输入数据的元素数目为Nin,全连接网络输出数据的元素数目为Nout,其中Nout<Nin。因此该子压缩模块内部的第j次局部压缩倍数ηj=Nin_j/Nout_j,为第j次局部压缩的压缩倍数,Nin_j为第j次局部压缩的输入数据的元素数目,Nout_j为第j次局部压缩的输出数据的元素数目。例如,送入子压缩模块时有Nin=1024个元素,经过压缩后有Nout=256个元素,那么该子压缩模块内部的压缩倍数为1024/256=4倍。需要说明的是,第i个子压缩模块的压缩倍数γi为编号1至i的子压缩模块局部压缩倍数的积,即
Figure PCTCN2022138160-appb-000007
其中Π为连乘符号。所述全连接网络的参数包括但不限于以下至少之一:全连接层的节点数目,全连接层的权值,归一化系数,激活函数。
在所述子压缩模块包括卷积网络的实施方式中:
在第i个子压缩模块中(i=1,2,...,N),全连接网络的输入数据的元素数目为Nin,全连接网络输出数据的元素数目为Nout,其中Nout<Nin。该子压缩模块内部的第j次局部压缩倍数ηj=Nin_j/Nout_j,为第j次局部压缩的压缩倍数,Nin_j为第j次局部压缩的输入数据的元素数目,Nout_j为第j次局部压缩的输出数据的元素数目。例如,送入子压缩模块时数据大小为16*16*2,Nin=512个元素,经过压缩后数据大小为8*8*2,Nout=128个元素,那么该子压缩模块内部局部压缩倍数为512/128=4倍。需要说明的是,第i个子压缩模块的压缩倍数γi为编号1至i的子压缩模块局部压 缩倍数的积,即
Figure PCTCN2022138160-appb-000008
其中Π为连乘符号。所述卷积网络的参数包括但不限于以下至少之一:卷积层的输入尺寸,卷积核参数,卷积步长参数,数据填充参数,池化参数,归一化系数,激活函数。
在所述子压缩模块包括循环网络的实施方式中:
循环网络可以是长短期记忆(long short-term memory,LSTM)或者门控循环神经网络(gated recurrent neural network,GRU)或者基于注意力机制的Transformer结构等。在第i个子压缩模块中(i=1,2,...,N),循环网络输入的数据元素总数目为Nin,输出的数据元素总数目为Nout,其中Nout<Nin。该子压缩模块内部的第j次局部压缩倍数ηj=Nin_j/Nout_j,为第j次局部压缩的压缩倍数,Nin_j为第j次局部压缩的输入数据的元素数目,Nout_j为第j次局部压缩的输出数据的元素数目。例如,送入子压缩模块时有Nin=1024个元素,经过压缩后有Nout=256个元素,那么该子压缩模块内部的局部压缩倍数为1024/256=4倍。需要说明的是,第i个子压缩模块的压缩倍数γi为编号1至i的子压缩模块局部压缩倍数的积,即
Figure PCTCN2022138160-appb-000009
其中Π为连乘符号。所述循环网络的参数包括但不限于以下至少之一:循环层的节点数目,循环层的权值,循环层中门参数,注意力机制参数,归一化系数或激活函数。
在本公开中,压缩倍数的种类数量M可以与子压缩模块的数量N相同,压缩倍数的种类数量M也可以与子压缩模块N的数量不同。
如上文中所述,可以以较高的频率反馈压缩倍数较大的第二信道状态信息、而以较低的频率反馈压缩倍数较小的第二信道状态信息。换言之,在反馈每一组第二信道状态信息时,均周期性地对该组第二信道状态信息进行反馈,且反馈周期的长短与该周期中反馈的第二信道状态信息对应的压缩倍数反相关。需要指出的是,对于同一种压缩倍数,不同反馈周期中,对实时获得的第一信道状态信息进行压缩。
在图4中所示的实施方式中,子压缩模块1的反馈周期为T 1,压缩倍数为γ 1;子压缩模块2的反馈周期为T 2,压缩倍数为γ 2;……,子压缩模块N的反馈周期为T N,压缩倍数为γ N。γ 1<γ 2<……<γ N,T 1>T 2>……>T N
需要指出的是,虽然在图4中,子压缩模块1对应的第一信道状态信息用CSI 1表示、子压缩模块2对应的第一信道状态信息用CSI 2表示、……、子压缩模块N对应的第一信道状态信息用CSI N表示,但是,应当理解的是,在向基站反馈信道状态信息的开始时刻,第一信道状态信息用CSI 1、第一信道状态信息用CSI 2、……、第一信道状态信息用CSI N是相同的。
为了便于描述,以压缩倍数最大的第二信道状态信息和压缩倍数最小的第二信道状态信息两组数据为例进行解释和说明。
用T 1表示压缩倍数最小的第二信道状态信息的反馈周期,用T M表示压缩倍数最大的第二信道状态信息的反馈周期,T 1>T M。为了便于理解,假定T 1=3T M
在t0时刻:
发送端(即,终端)对t0时刻的第一信道状态信息进行两种压缩倍数的压缩,获得压缩倍数最大的第二信道状态信息和压缩倍数最小的第二信道状态信息;
发送端反馈压缩倍数最大的第二信道状态信息、以及压缩倍数最小的第二信道状态信息。
在t0’时刻:
接收端(即,基站)接收到压缩倍数最大的第二信道状态信息、以及压缩倍数最小的第二信道状态信息;
接收端对接收到的两组第二信道状态信息进行解压缩以及合并处理,可以获得反映t0时刻下行信道状态的目标信道状态信息。
在t1(t1=t0+T M)时刻:
发送端对t1时刻的第一信道状态信息进行最大压缩倍数的压缩,获得压缩倍数最大的第二信道状态信息;
在t1’(t1’=t0’+T M)时刻:
接收端接收到压缩倍数最大的第二信道状态信息;
接收端对接收到的第二信道状态信息与t0’时刻接收到的压缩倍数最 小的第二信道状态信息进行解压缩以及合并处理,获得t1’时刻的目标信道状态信息。
在t2(t2=t0+2T M)时刻:
发送端对t2时刻的第一信道状态信息进行最大压缩倍数的压缩,获得压缩倍数最大的第二信道状态信息;
在t2’(t2’=t0’+2T M)时刻:
接收端接收到压缩倍数最大的第二信道状态信息;
接收端对接收到的第二信道状态信息与t0’时刻接收到的压缩倍数最小的第二信道状态信息进行解压缩以及合并处理,获得t2’时刻的目标信道状态信息。
在t3(t2=t0+3T M)时刻:
发送端对t3时刻的第一信道状态信息进行最大压缩倍数的压缩、以及最小压缩倍数的压缩,获得压缩倍数最大的第二信道状态信息和压缩倍数最小的第二信道状态信息;
在t3’(t3’=t0’+3T M)时刻:
接收端接收到压缩倍数最大的第二信道状态信息、和压缩倍数最小的第二信道状态信息;
接收端对接收到的压缩倍数最大的第二信道状态和压缩倍数最小的第二信道状态信息进行解压缩及合并处理,获得t3’时刻的目标信道状态信息。
需要理解的是,虽然上文中介绍的实施方式中T 1是T M的整数倍,但是,本公开并不限于此,T 1也可以不是T M的整数倍,只要T 1>T M即可,例如,T 1=2.7T M
也就是说,接收端在接收到当前周期的、压缩倍数最小的第二信道状态信息后,利用该压缩倍数最小的第二信道状态信息与其他压缩倍数的第二信道状态信息进行解压缩以及合并处理,获得当前周期的目标信道状态信息。在接收到下一个反馈周期的压缩倍数最小的第二信道状态信息之前, 可以利用当前周期获得的压缩倍数最小的第二信道状态信息对各个周期发送的压缩倍数较大的第二信道状态信息进行解压缩以及合并处理,既减少了反馈下行信道状态信息的开销,又可以获得较为全面、且能够反映实时状态的下行信道状态信息。
为了便于接收端对各个反馈周期获得的第二信道状态信息进行处理,可选地,如图6所示,所述反馈方法还可以包括:
在步骤S140中,反馈各组第二信道状态信息的反馈周期的长短。
可选地,如图6所示,在所述利用编码器对至少一组第一信道状态信息进行M种压缩倍数的压缩之前,所述反馈方法还包括:
在步骤S101中,获取原始信道信息;
在步骤S102中,利用所述编码器对所述原始信道信息进行特征提取,以获得初始信道状态信息;
在步骤S103中,对所述初始信道状态信息进行预处理,以获得所述第一信道状态信息。
作为一种可选实施方式,所述解码器还包括第一处理模块,所述第一处理模块用于执行对所述初始信道信息进行特征提取的步骤。
所述第一处理模块也可以是基于神经网络的处理模块。也就是说,所述第一处理模块包括多个网络层。多个网络层各自独立地选自以下层中的任意一者:
全连接层、池化层、直连层、批归一化层。
可选地,在步骤S103中获得的第一信道状态信息的维度与编码器的输入维度一致。
在本公开中,对所述“预处理”的具体步骤不做特殊的限定。例如,所述预处理可以包括但不限于对信道信息进行域变换(比如,时域到频域的变换,频域到时域的边缘,角度域到空域的变换、空域到角度域的变换)、下采样、补零、分组、截断等,只有使得获得的第一信道状态信息维度符合编码器的输入维度即可。
在本公开中,对“原始信道信息”也不做特殊的限定。具体地,可以通过以下方式获得所述原始信道信息:
终端通过接收参考信号,例如,信道状态信息参考信号(Channel-state information reference signal,CSI-RS),获得信道H,所述信道一般为一个N t*N r的复数矩阵。其中,N t和N r分别为基站的发送天线数目和终端的接收天线数目,这里的天线可以是逻辑的天线,也可以是各种类型的物理天线。所述原始信道信息可以是时域信道信息H t,也可以为频域信道信息H f
在一个实施例中,时域信道信息H t是一个N t×N s的复数矩阵,N t为天线数目,N s为时域采样点数目。在天线维度中,天线数目N t为N tx×N rx,其中N tx为发送天线数目,N rx为接收天线数目;天线数目N t也可以为N tx×R,其中N tx为发送天线数目,R为信道秩或者数据流数。由于时域信道具有稀疏性,因此在时域采样点维度上可以将较小的信道值进行舍弃,即对时域点进行截断,此时时域信道信息H t尺寸为N t×N′ s,其中N′ s<N s。其中,天线维度N t不仅可以在第一维度,也可以在第二维度即N′ s×N t。根据不同需求也可以将天线维度拆成上述的两维表示。
在一个实施例中,频域信道信息H f是一个N t×N f的复数矩阵,N t为天线数目,N f为频域粒度,其中频域粒度可以子载波(Subcarrier)或者物理资源块(Physical Resource Block,PRB)或者子带(Subband)为单位,其中一个物理资源块可以包括多个子载波,比如6,12个等,一个子带包括多个物理资源块。在天线维度中,天线数目N t为N tx*N rx,其中N tx为发送天线数目,N rx为接收天线数目;天线数目N t也可以为N tx*R,其中N tx为发送天线数目,R为信道秩或者数据流数。由于频域粒度可以是时域采样点经过快速傅里叶变换得到,在频域粒度维度上可以对其进行截断,此时频域信道信息H f尺寸为N t×N′ f,其中N′ f<N f。其中,天线维度N t不仅可以在第一维度,也可以在第二维度即N′ f×N t。根据不同需求也可以将天线维度拆成上述的两维表示。
在一个实施例中,频域信道信息H f可以是预编码。在子带频域粒度N f上,共有M个PRB频带资源,每连续的K个PRB的信道构成一个子 带信道矩阵,其中K的数值根据带宽指示确定,比如10M带宽时每个子带包括4个PRB,20M带宽每个子带包括8个PRB等。我们对每一个子带中的K个信道H 1…H K分别乘以它们的共轭矩阵H 1 H…H K H,得到K个相关矩阵R 1…R K,将该K个相关矩阵取平均并做奇异值分解(Singular Value Decomposition,简称SVD),取最大特征值对应的特征向量作为该子带的特征向量V i,其中i=1…N f,将所有子带的特征向量堆叠在一起形成预编码V,其矩阵大小为N t×N f,其中N t为天线的数目,N f为子带的数目。其中,天线维度N t不仅可以在第一维度,也可以在第二维度即N f×N t。根据不同需求也可以将天线维度拆成上述的两维表示。
作为本公开的第二个方面,提供一种信道状态信息的获取方法,用于基站,如图7所示,所述获取方法包括:
在步骤S210中,接收M组第二信道状态信息,接收到的每一组所述第二信道状态信息均为编码器对第一信道状态信息经相应倍数的压缩所获得的信息,M为正整数,且M不小于2;
在步骤S220中,利用解码器对各组第二信道状态信息进行解压缩及合并处理,获目标信道状态信息。
其中,对数据量最小的一组第二信道状态信息进行解压缩的膨胀倍数大于对其他组第二信道状态信息进行解压缩的膨胀倍数,且所述目标信道状态信息的元素数量与所述第一信道状态信息的元素数量相同。
本公开第二个方面所提供的获取方法与本公开第一个方面所提供的反馈方法配合使用。步骤S210中接收到的M组第二信道状态信息为步骤S120中反馈的M组第二信道状态信息。
对第二信道状态信息进行解压缩以及合并处理后获得的目标信道状态信息与第一信道状态信息相对应,所述目标信道状态信息可以反映下行信道状态。
在本公开中,可以较为频繁地对实时获得的第一信道状态信息进行大压缩倍数的压缩、及反馈,而以较低的频率对实时获得的第一信道状态信息进行较小倍数的压缩及反馈。
在本公开中,以较低的频率对实时获得的第一信道状态信息进行较小倍数的压缩及反馈。容易理解的是,压缩倍数最小的第二信道状态信息进行解压缩后获得的信道状态信息与第一信道状态信息之间的差距最小,而下行信道状态的变化并非快速变化。因此,在一段时间内,压缩倍数最小的第二信道状态信息解压缩获得的信道状态信息仍能在一定程度上较为全面地反映下行信道状态信息。并且,在本公开中,以较为频繁地对实时获得的第一信道状态信息进行大压缩倍数的压缩、及反馈。因此,压缩倍数较大的第二信道状态信息解压缩后,可以获得能够反映信道状态变化的信道状态信息。在接收端(即,基站),利用及时获得的、能够反映信道状态变化的信道状态与较为全面的信道状态信息进行合并,可以获得既能够全面反映下行信道状态、又能反映下行信道状态变化的目标信道状态信息。
所述解码器为基于深度学习的神经网络模型,因此,利用解码器解压所获得信道状态信息与第一信道状态信息之间的误差较小。
如上文中所述,编码器包括N个子压缩模块,相应地,如图8所示,在所述获取方法中,解码器也包括N个子解压缩模块,N为不小于2的正整数。
如图9所示,所述对各组第二信道状态信息进行解压缩及合并处理(即,步骤S220),包括:
在步骤S221中,确定N个子解压缩模块中与M组第二信道状态信息对应的多个子解压缩模块;
在步骤S222中,利用多个子解压缩模块分别对M组第二信道状态信息进行解压缩及合并处理,获得目标信道状态信息。
在本公开中,编码器的N个子压缩模块和解码器的N个子解压缩模块一一对应。在一种实施方式中,编码器的N个压缩倍数互不相同的子压缩模块并行作业,可以实现N种不同压缩倍数的压缩。相应地,在解码器中,N个子解压缩模块的膨胀倍数互不相同,因此,在所述确定N个子解压缩模块中与M组第二信道状态信息对应的多个子解压缩模块的步骤中,确定 N个子解压缩模块中与M组第二信道状态信息对应的M个子解压缩模块。
相应地,所述利用多个子解压缩模块分别对M组第二信道状态信息进行解压缩及合并处理,获得M组目标信道状态信息(即,步骤S222),包括:
利用M个子解压缩模块分别对M组第二信道状态信息进行解压缩,以获得M组第三信道状态信息;
对M组第三信道状态信息进行合并处理,得到所述目标信道状态信息。
在上述实施方式中,对解压缩获得的结果进行合并处理,可以得到目标状态信息。
在本公开中,对如何对M组第三信道状态信息进行合并处理不做特殊的限定,作为一种可选实施方式,所述对如何对M组第三信道状态信息进行合并处理,包括:
对M组第三信道状态信息进行联结,以获得中间信道状态信息;
对所述中间信道状态信息进行压缩,以获得所述目标信道状态信息。
中间信道状态信息的元素数量大于第一信道状态信息的元素数量,为了获得元素数量与第一信道状态信息元素数量相同的目标信道状态信息,需要对中间信道状态信息进行压缩降维。作为一种可选实施方式,可以利用卷积层等网络层对中间信道状态信息进行压缩降维,获得目标信道状态信息。
当然,本公开并不限于此。可选地,可以通过以下方式中的任意一者执行所述对M组第三信道状态信息进行合并处理的步骤:
平均值合并处理、平方平均合并处理、加权合并处理。
在上文中所提供的另一种实施方式中,N个子压缩模块串联,每组第二信道状态信息由至少一个压缩模块压缩而成。相应地,在确定N个子解压缩模块中与M组第二信道状态信息对应的多个子解压缩模块的步骤中,每组第二信道状态信息对应至少一个子解压缩模块,并且,压缩倍数越大的第二信道状态信息对应的子解压缩模块的数量越多。
在这种实施方式中,对于多组第二信道状态信息,并非一次性将其解压缩至元素数量与压缩前的第一信道状态信息相同的状态。而是将除了压缩倍数最小的第二信道状态信息(即,压缩倍数最小的第二信道状态信息)之外的M-1组第二信道状态信息解压缩至第一中间信道状态信息,所述第一中间信道状态信息中的元素数量与压缩倍数最小的第二信道状态信息的数据量。
然后对M-1第一中间信道状态信息与压缩倍数最小的第二信道状态信息进行合并处理,获得第二中间信道状态信息,在获得了第二中间信道状态信息后,利用多个子解压缩模块将合并处理获得的第二中间信道状态信息进行解压缩,获得所述目标信道状态信息。
在本公开中,对如何对第二中间信道状态信息进行解压缩不做特殊的限定。当第二中间信道状态信息的元素数量与压缩倍数最小的第二信道的状态信息的数据量相同时,可以利用对压缩倍数最小的第二信道状态信息进行解压缩的一个或多个子解压缩模块对第二中间信道状态信息进行解压缩。
当第二中间信道状态信息的元素数量大于压缩倍数最小的第二信道的状态信息的数据量时,可以重新选择多个子解压缩模块对第二中间信道状态信息进行解压缩。需要解释的是,被选择的子解压缩模块能够输入的元素数量与第二中间信道状态信息的元素数量相同。
在本公开中,对如何对M-1组第一中间信道状态信息与压缩倍数最小的第二信道状态信息进行合并处理不做特殊的限定。例如,
所述对M-1组第一中间信道状态信息与压缩倍数最小的第二信道状态信息进行合并处理,获得第二中间信道状态信息,包括:
对M-1组第一中间信道状态信息、以及压缩倍数最小的第二信道状态信息进行联结处理,以获得第二中间信道状态信息。此处的联结处理可以包括两种方式,第一种方式为直接将M-1组第一中间信道状态信息、以及压缩倍数最小的第二信道状态信息合并;另一种是,先将M-1组第一中间信道状态信息、以及压缩倍数最小的第二信道状态信息合并、再对合并后 的数据进行压缩,获得第二中间信道状态信息,以使得第二中间信道状态信息的元素数量与压缩倍数最小的第二信道状态信息的元素数量相同。
作为一种可选实施方式,可以利用卷积层等网络层执行对所述第三中间信道状态信息进行压缩,获得所述第二中间信道状态信息的步骤。
作为另一种可选实施方式,可以通过以下方式中的任意一者执行所述对M组第一中间信道状态信息进行合并处理,获得第二中间信道状态信息的步骤:
平均值合并处理、平方平均合并处理、加权合并处理。
作为一种可选实施方式,所述编码器包括N个子压缩模块,并且,在本公开第一个方面所提供的反馈方法还包括向基站侧反馈执行多种压缩倍数的压缩的子压缩模块的标识信息的步骤。
相应地,如图10所示,在所述对各组第二信道状态信息进行解压缩,获得M组第三信道状态信息(即,步骤S220)之前,所述获取方法还包括:
在步骤S201中,接收多个子压缩模块的标识信息。
相应地,步骤S221被具体执行为:
根据接收到的多个子压缩模块的标识信息确定与该多个子压缩模块一一对应的多个子解压缩模块的标识信息;
根据确定的多个子解压缩模块的标识信息确定执行对各组第二信道状态信息进行解压缩所需要的子解压缩模块的标识信息。
如上文中所述,N个子压缩模块串联,且N个子压缩模块均能独立地输出,所述子压缩模块的标识信息包括所述子压缩模块的编号,所述编码器的N个子压缩模块的编号符合第一预设规则。在反馈子压缩模块的标识信息时,反馈输出各组第二信道状态信息的子压缩模块的标识信息。相应地,在所述获取方法中,接收到的各个子压缩模块的标识信息分别为输出各组第二信道状态信息的子压缩模块的标识信息。
相应地,N个子解压缩模块串联,且N个子解压缩模块均能独立地接 收输入数据。对于第i个子解压缩模块,输出数据的解压缩倍数为数据流方向上第1个子解压缩模块至第i个子解压缩模块共同实现的解压缩倍数。需要指出的是,“数据流方向上的第1个子解压缩模块”不一定是编号为1的子解压缩模块。
所述子解压缩模块的标识信息包括所述子解压缩模块的编号,所述解码器的N个子解压缩模块的编号符合第二预设规则,所述第一预设规则和所述第二预设规则之间存在对应关系。
相应地,在所述根据确定的多个子解压缩模块的标识信息确定执行对各组第二信道状态信息进行解压缩所需要的子解压缩模块的标识信息的步骤中,根据接收到的子压缩模块的标识信息确定各组第二信道状态信息所输入的子解压缩模块的标识信息。
确定了能够输入第二信道状态信息的子解压缩模块后,利用该子解压缩模块、以及数据流方向上的后续的各个子解压缩模块可以实现对第二信道状态信息的逐级解压缩。
例如,3个子解压缩模块串联。每个子压缩模块均能独立地实现2倍膨胀。这样,以编号为1的子解压缩模块作为输入模块的话,可以获得2倍解压缩(即,以编号为1的子解压缩模块作为输入模块的话,将数据从编号为1的子解压缩模块的输入端输入、从编号为1的子压缩模块的输出端输出,从而可以实现对数据的2倍解压缩),编号为2的子解压缩模块作为输入模块的话,可以获得4倍解压缩(即,以编号为2的子解压缩模块作为输入模块的话,将数据从编号为2的子解压缩模块的输入端输入、从编号为1的子解压缩模块的输出端输出,数据依次经过编号为2的子解压缩模块的解压缩、以及编号为1的子解压缩模块的解压缩,从而可以实现对数据的4倍解压缩),以编号为3的子压缩模块作为输入模块可以实现8倍解压缩(即,以编号为3的子解压缩模块作为输入模块的话,将数据从编号为3的子解压缩模块的输入端输入、从编号为1的子解压缩模块的输出端输出,数据依次经过编号为2的子解压缩模块的解压缩、编号为2的子解压缩模块的解压缩、以及编号为1的子解压缩模块的解压缩,从而可以实现对数据的8倍解压缩)。
根据第一预设规则和接收到的子压缩模块的编号,可以确定多个子压缩模块的编号。根据第一预设规则和第二预设规则之间的对应关系、以及多个子压缩模块的编号,可以确定多个子解压缩模块的编号。
作为一种可选实施方式,所述第一预设规则包括从编号1直至编号N递增地连续对N个所述子压缩模块进行编号,且所述子压缩模块的编号与该子压缩模块的压缩倍数正相关,所述第二预设规则包括从编号N直至编号1递减地连续对N个所述子解压缩模块进行编号,且所述子解压缩模块的编号与该子解压缩模块的膨胀倍数反相关。
所述第一预设规则与所述第二预设规则之间对应关系包括:编号相同的子压缩模块与子解压缩模块相对应。
并且,编号为i的子解压缩模块的膨胀倍数大于编号为i+1的子解压缩模块的膨胀倍数,其中,i为变量,且均为正整数,1≤i≤N-1。
在所述根据确定的多个子解压缩模块的标识信息确定执行对各组第二信道状态信息进行解压缩所需要的子解压缩模块的标识信息的步骤(即,步骤S221)中,将接收到的编号所对应的子解压缩模块作为输入各组第二信道状态信息的子解压缩模块(可以将其记作步骤S221a)。
例如,接收到编号3后,将编号为3的子解压缩模块作为第二信道状态信息的输入模块。
如图11中所示,子压缩模块1与子解压缩模块1对应,子压缩模块1输出的第二信道状态信息CSI 1被反馈至基站后,由子解压缩模块1进行解压缩。子压缩模块i与子解压缩模块i对应,子压缩模块i输出的第二信道状态信息CSI i被反馈至基站后,由子解压缩模块i进行解压缩,依次类推。
如上文中所述,反馈至基站侧的子压缩模块的编号分别为实现各种压缩倍数的各组子压缩模块中的最大编号。因此,接收到的标识信息也可以包括执行对每组第二信道状态信息进行解压缩所需要的多子解压缩模块的编号中的最大编号。相应地,如图12所示,所述根据多个子压缩模块的标识信息确定与该多个子压缩模块一一对应的M个子解压缩模块的标识信息(即,步骤S221),还包括:
在步骤S221b中,将输入至各组第二信道状态信息的子解压缩模块的编号、直至编号1作为与获得各组第二信道状态信息的多个子压缩模块一一对应的多个子解压缩模的标识信息。
下面结合图8中所示的一种具体实施方式对子解压缩模块进行描述:
N个子解压缩模块按照N到1的降序编号。相应地,每个子解压缩模块按编号i=N到1降序逐模块对压缩后的数据进行扩充或者上采样,扩充后数据的元素数Q也是逐渐递增,直至最后一个子解压缩模块的输出元素数目最大,即与原始输入压缩模块中的数据元素数相同,即N compress<Q N<Q N-1<Q N-2<......<Q 1=N total,其中N compress为初始送入解压缩模块的数据元素数目,N total为初始送入压缩模块的数据的元素数目。
类似地,每个子解压缩模块按编号i=N到1降序逐模块对数据进行扩充或者上采样,扩充倍数(或者膨胀倍数)μ逐渐递增(或者是扩充程度逐渐变大),其中μ i=N total/Q i,其中,Q i是编号为i的子解压缩模块扩充后数据的元素个数,直至最后一个子解压缩模块的扩充倍数最大,即μ 1<μ 2<…<μ N
作为一种可选实施方式,所述利用与压缩倍数最小的第二信道状态信息对应的子解压缩模块对所述第二中间信道状态信息进行解压缩,以获得所述目标信道状态信息,包括:
确定接收到的编号中最小的一者;
编号中最小的一者确定为对压缩倍数最小的第二中间信道状态信息进行解压缩的第一个子解压缩模块的编号;
将接收到的编号中最小的一者直至编号为1的子解压缩模块确定为压缩倍数最小的第二中间信道状态信息进行解压缩的子解压缩模块。
在本公开中,确定了接收到的编号中最小的一者之后,即可确定对压缩倍数最小的第二信道状态信息进行解压缩的输入模块(该输入模块即为反馈的所有编号中最小的一者所对应的子解压缩模块)。确定了对压缩倍数最小的第二信道状态信息进行解压缩的一组子解压缩模块中的输入模块之后,可以确定其余用于对压缩倍数最小的第二信道状态信息进行解压 缩的子解压缩模块。这样最终确定了所有用于对压缩倍数最小的第二信道状态信息进行解压缩的子解压缩模块,利用这些子解压缩模块对第二中间信道状态信息进行解压缩,可以获得目标信道状态信息。
当然,本公开并不限于此。如上文中所述,可以按照降序对各个子压缩模块进行编号,因此,也可以按照升序对各个子解压缩模块进行编号。具体地:
所述第一预设规则包括从编号N直至编号1递减地对N个所述子压缩模块进行连续编号,且所述子压缩模块的编号与该子压缩模块的压缩倍数反相关,所述第二预设规则包括从编号1直至编号N递增地对N个所述子解压缩模块进行连续编号,且所述子解压缩模块的编号与该子解压缩模块的膨胀倍数正相关,
所述第一预设规则与所述第二预设规则之间对应关系包括:编号相同的子压缩模块与子解压缩模块相对应;
编号为i的子解压缩模块的膨胀倍数大于编号为i+1的子解压缩模块的膨胀倍数,其中,i为变量,且为正整数,1≤i≤N-1;
在所述根据确定的多个子解压缩模块的标识信息确定执行对各组第二信道状态信息进行解压缩所需要的子解压缩模块的标识信息的步骤中,将接收到的编号所对应的子解压缩模块作为输入各组第二信道状态信息的子解压缩模块。
可选地,所述根据多个子压缩模块的标识信息确定与该多个子压缩模块一一对应的多个子解压缩模块的标识信息,还包括:
将输入各组第二信道状态信息的子解压缩模块的编号、直至编号N作为与该多个子压缩模块一一对应的多个子解压缩模块的标识信息。
可选地,所述标识信息包括所述利用与压缩倍数最小的第二信道状态信息对应的子解压缩模块对所述第二中间信道状态信息进行解压缩,以获得所述目标信道状态信息,包括:
确定接收到的编号中最大的一者;
编号中最大的一者确定为对压缩倍数最小的第二中间信道状态信息进行解压缩的第一个子解压缩模块的编号;
将接收到的编号中最大的一者直至编号为N的子解压缩模块确定为压缩倍数最小的第二中间信道状态信息进行解压缩的子解压缩模块。
上文中所描述的是串行的子解压缩模块如何对各组第二信道状态信息进行解压缩及合并处理。当然,本公开并不限于此,针对并行的子解压缩模块而言,所述利用多个子解压缩模块分别对M组第二信道状态进行解压缩及合并处理,包括:
利用多个子解压缩模块中的一部分子解压缩模块对各组的第二信道状态信息进行解压缩,以分别获得M组第一中间信道状态信息,其中,对所述第一中间信道状态信息中的元素数量小于所述第一信道状态信息的元素数量;
对M组第一中间信道状态信息进行合并处理,获得第二中间信道状态信息;
利用多个子解压缩模块中的剩余子解压缩模块对所述第二中间信道状态信息进行解压缩,以获得所述目标信道状态信息。
如上文中所述,解码器为基于深度学习的神经网络模型,所述子解压缩模块满足以下条件中的至少一者:
所述子解压缩模块包括全连接层;
所述子解压缩模块包括多个逆卷积层,所述你卷积层包括以下部分中的至少一者:上采样层、上池化层、转置卷积层、残差块、稠密块、直连层;
所述子解压缩模块包括循环网络,所述循环网络包括长短时记忆LSTM神经网络或者门控循环GRU神经网络。
在一个具体的实施例中,子解压缩模块可以是由全连接网络组成,其中全连接网络包含至少两层全连接层。在编号为i的子解压缩模块中(i=1,2,...,N),全连接网络输入数据的元素数目为N in,全连接网络输出数 据的元素数目为N out,其中N in<N out。因此该子解压缩模块内部的局部扩充倍数λ i=N out/N in。例如,送入子解压缩模块时有N in=128个元素,经过扩充后有N out=256个元素,那么该子解压缩模块内部的扩充倍数为256/128=2倍。需要说明的是,编号为i的子解压缩模块的扩充倍数λ i为编
Figure PCTCN2022138160-appb-000010
节点数目,全连接层的权值,归一化系数,激活函数。
在一个具体的实施例中,子解压缩模块可以是由卷积网络组成,其中卷积网络包含至少一个卷积层,或者至少一个残差块(Residual block),稠密块(Dense block)或者它们的组合等。在编号为i的子解压缩模块中(i=1,2,...,N),卷积网络输入的数据元素总数目为Nin,输出的数据元素总数目为Nout,其中N in<N out。因此该子解压缩模块内部的局部扩充倍数λ i=N out/N in。比如送入子解压缩模块时数据大小为8*8*2,N in=128个元素,经过扩充后数据大小为16*16*2,N out=512个元素,那么该子解压缩模块内部局部扩充倍数为512/128=4倍。需要说明的是,编号为i的子解压缩模块的扩充倍数λ i为编号N至i的子解压缩模块局部扩充倍数的积,即
Figure PCTCN2022138160-appb-000011
其中Π为连乘符号。所述卷积网络的参数包括但不限于以下至少之一:卷积层的输入尺寸,卷积核参数,卷积步长参数,数据填充参数,反池化参数,上采样系数,归一化系数,激活函数。
在一个具体的实施例中,子解压缩模块可以是由循环网络组成,其中循环网络可以是长短期记忆(long short-term memory,LSTM)或者门控循环神经网络(gated recurrent neural network,GRU)或者基于注意力机制的Transformer结构等。在编号为i的子解压缩模块中(i=1,2,...,N),循环网络输入的数据元素总数目为N in,输出的数据元素总数目为N out,其中N in<N out。因此该子解压缩模块内部的局部扩充倍数λ i=N out/N in。比如送入子解压缩模块时有N in=128个元素,经过扩充后有N out=256个元素,那么该子解压缩模块内部的局部扩充倍数为256/128=2倍。需要说明的是,编号为i的子解压缩模块的扩充倍数λ i为编号N至i的子压缩模块局部扩充倍数的积,即
Figure PCTCN2022138160-appb-000012
其中Π为连乘符号。所述循环网络的参数包 括但不限于以下至少之一:循环层的节点数目,循环层的权值,循环层中门参数,注意力机制参数,归一化系数,激活函数。
作为一种可选实施方式,所述解码器还包括第二处理模块,如图13所示,所述获取方法还包括:
在步骤S230中,通过所述第二处理模块对所述目标信道状态信息进行处理,以提取终端反馈的信道状态信息。
第二处理模块也是神经网络。作为一直用可选实施方式,
可选地,所述第二处理模块包括神经网络,所述神经网络包括多个网络层,多个网络层各自独立地选自以下层中的至少一者:
卷积层、池化层、直连层。
可选地,接收各组第二信道状态信息的反馈周期的时间长短信息,其中,所述第二信道状态信息的反馈周期与所述第二信道状态信息的数据量反相关。
在对各组第二信道状态信息进行解压缩及合并处理的步骤中,可以利用反馈周期长的第二信道状态信息对该反馈周期长的第二信道状态信息的反馈周期跨度内的反馈周期短的第二信道状态信息进行合并处理。
在本公开中,可以直接利用压缩倍数最小的第二信道状态信息对该第二信道状态信息的反馈周期跨度内接收到的第二信道状态信息进行解压缩后获得的解压缩数据进行合并处理;还可以对压缩倍数最小的第二信道状态信息进行解压缩获得第一解压缩数据,利用该第一解压缩数据对该第二信道状态信息的反馈周期跨度内接收到的第二信道状态信息进行解压缩后获得的解压缩数据进行合并处理。
下面结合一种具体实施方式,对“利用反馈周期长的第二信道状态信息在一个反馈周期内解压缩获得的信道状态信息对该反馈周期长的第二信道状态信息的反馈周期跨度内的反馈周期短的第二信道状态信息在其多个反馈周期内解压缩获得的信道状态信息进行联合处理”进行解释。
以压缩倍数最大的第二信道状态信息和压缩倍数最小的第二信道状 态信息两组数据为例进行解释和说明。
用T 1表示压缩倍数最小的第二信道状态信息的反馈周期,用T M表示压缩倍数最大的第二信道状态信息的反馈周期,T 1>T M。为了便于理解,假定T 1=3T M。也就是说,反馈周期T 1的时间跨度内,反馈了3次压缩倍数最大的第二信道状态信息。在T 1周期内,利用压缩倍数最小的第二信道状态信息解压缩获得的第三信道状态信息对三个反馈周期T M的第二信道状态信息解压缩获得的第三信道状态信息进行联合处理。具体如下:
在t0’时刻:
接收端(即,基站)接收到压缩倍数最大的第二信道状态信息、以及压缩倍数最小的第二信道状态信息;
接收端对接收到的两组第二信道状态信息进行解压缩以及合并处理,可以获得反映t0时刻下行信道状态的目标信道状态信息。
在t1(t1=t0+T M)时刻:
发送端对t1时刻的第一信道状态信息进行最大压缩倍数的压缩,获得压缩倍数最大的第二信道状态信息;
在t1’(t1’=t0’+T M)时刻:
接收端接收到压缩倍数最大的第二信道状态信息;
接收端对接收到的第二信道状态信息与t0’时刻接收到的压缩倍数最小的第二信道状态信息进行解压缩以及合并处理,获得t1’时刻的目标信道状态信息。
在t2(t2=t0+2T M)时刻:
发送端对t2时刻的第一信道状态信息进行最大压缩倍数的压缩,获得压缩倍数最大的第二信道状态信息;
在t2’(t2’=t0’+2T M)时刻:
接收端接收到压缩倍数最大的第二信道状态信息;
接收端对接收到的第二信道状态信息与t0’时刻接收到的压缩倍数最小的第二信道状态信息进行解压缩以及合并处理,获得t2’时刻的目标信 道状态信息。
在t3(t2=t0+3T M)时刻:
发送端对t3时刻的第一信道状态信息进行最大压缩倍数的压缩、以及最小压缩倍数的压缩,获得压缩倍数最大的第二信道状态信息和压缩倍数最小的第二信道状态信息;
在t3’(t3’=t0’+3T M)时刻:
接收端接收到压缩倍数最大的第二信道状态信息、和压缩倍数最小的第二信道状态信息;
接收端对接收到的压缩倍数最大的第二信道状态和压缩倍数最小的第二信道状态信息进行解压缩及合并处理,获得t3’时刻的目标信道状态信息。
需要理解的是,虽然上文中介绍的实施方式中T 1是T M的整数倍,但是,本公开并不限于此,T 1也可以不是T M的整数倍,只要T 1>T M即可,例如,T 1=2.7T M
下面介绍几种具体的实施例:
在一个具体的实施例中,上述基站侧的子解压缩模块可以对M(2≤M≤N)个不同子压缩模块压缩并反馈的第二信道信息CSI 1,CSI 2,2,CSI M对应其中压缩倍数最小(或者反馈周期最长)的子解压缩模块的输出CSI′ 1,CSI′ 2,2,CSI′ M对应元素进行平均值合并处理,即CSI′=CSI′ 1+CSI′ 2+…+CSI′ M/M,将平均处理后的CSI′继续送入后续子解压缩模块恢复出估计的信道信息(即,目标信道状态信息)。
在一个具体的实施例中,上述基站侧子解压缩模块可以对M(M≥2)个不同子压缩模块压缩并反馈的第二信道信息CSI 1,CSI 2,2,CSI M对应其中压缩倍数最小(或者反馈周期最长)子解压缩模块的输出CSI′ 1,CSI′ 2,2,CSI′ M对应元素进行平方平均合并处理,即
Figure PCTCN2022138160-appb-000013
将平方平均处理后的CSI′继续送入后续子解压缩模块恢复出估计的信道信息(即,目标信道状态信息)。
在一个具体的实施例中,上述基站侧的子解压缩模块可以对M(2≤M≤N)个不同子压缩模块压缩并反馈的第二信道信息CSI 1,CSI 2,2,CSI M对应其中压缩倍数最小(或者反馈周期最长)的子解压
Figure PCTCN2022138160-appb-000014
态信息对当前信道情况的贡献可以分配不同的权重α,将加权处理后的CSI′继续送入后续子解压缩模块恢复出估计的信道信息(即,目标信道状态信息)。
在另一个具体的实施例中,上述基站侧的子解压缩模块可以对M(2≤M≤N)个不同子压缩模块压缩并反馈的第二信道信息CSI 1,CSI 2,2,CSI M对应其中压缩倍数最小(或者反馈周期最长)的子解压缩模块输出CSI′ 1,CSI′ 2,2,CSI′ M联结进行处理,即CSI 1,CSI 2,2,CSI M按照其中的一个维度进行并列合并组成新的CSI′,将联结处理后的CSI′继续送入后续子解压缩模块恢复出估计的信道信息(即,目标信道状态信息)。
举个具体的例子,联结处理前子解压缩模块输出的CSI′ 1与CSI′ 2均为256*1维的向量,按照第二维度进行并列合并构成新的CSI′为256*2维的矩阵,再送入后续的子解压缩模块恢复出估计的信道信息。
基站侧接收终端反馈的不同压缩倍数下的第二信道信息后需要送入解码器中的解压缩模块进行处理生成增强后的目标信道状态信息。解压缩模块输入为终端反馈的M种(2≤M≤N)压缩倍数下的第二信道状态信息CSI 1,CSI 2,...,CSI M,根据反馈的第二信道状态信息中对应的子压缩模块编号将其送入对应的子解压缩模块进行压缩信道信息扩充。其中,在第1,2,...,N个子压缩模块中,输出的第二信道状态信息的元素个数分别为P 1,P 2,...,P N,其与终端N个子解压缩模块的输入元素个数Q 1,Q 2,...,Q N相同,即终端侧第i个子压缩模块的输出数据元素数目与基站侧第i个子解压缩模块的输出数据元素数目一致,二者对应编号相同共同构成N组子压缩与子解压缩模块对。
基站侧接收上述终端反馈的不同周期下的M(2≤M≤N)个第二信道状态信息CSI 1,CSI 2,...,CSI M后需要送入解压缩模块进行压缩后的信 息扩充,并且在反馈周期最长的第二信道状态信息(或者终端压缩倍数最小的第二信道状态信息)所对应的子解压缩模块输出端与其他反馈周期在该子解压缩模块处恢复的信道状态信息进行合并协同,具体流程如图4所示。比如,终端反馈两个压缩倍数为4倍与64倍的第二信道状态信息给基站,64倍压缩倍数的CSI 1每隔1个TTI反馈一次,4倍压缩倍数的CSI 2每隔10个TTI反馈一次。基站记录保存下CSI 2的信道信息,与反馈周期短的CSI 1信道信息在压缩倍数为4倍的压缩模块对应的基站侧子解压缩模块输出端进行信道信息协同处理,协同处理后送入后续子解压缩模块处理直至恢复出估计的压缩前的信道状态信息(即,目标信道状态信息)。
上文中介绍了通过发送端发送的子压缩模块的编号来确定压缩倍数、并进一步确定对各组第二信道状态信息进行解压缩的膨胀倍数。当然,本公开并不限于此。发送端可以发送各组第二信道状态信息的反馈周期的长短,相应地,基站侧可以根据接收到的各组第二信道状态信息的反馈周期的时间长短信息确定对各组第二信道状态信息进行解压缩的膨胀倍数。
作为另一种可选实施方式,发送端可以反馈各组第二信道状态信息的压缩倍数,相应地,可选地,所述获取方法还包括:
接收各组第二信道状态信息对应的压缩倍数信息;
根据各组第二信道状态信息对应的压缩倍数信息确定各组第二信道状态信息的膨胀倍数。
本公开中,发送端还可以反馈编码器的索引信息,相应地,基站侧可以接收编码器的索引信息、并根据编码器的索引信息确定执行解压缩的解码器。
作为一种可选实施方式,不同索引信息的编码器可以用于不同的应用场景。与不同索引信息对应的编码器相对应的解码器,也可以用于不同的应用场景。
根据不同实际的环境场景通过上述训练过程得到K0套自编码器的神经网络参数,所述每个自编码器包括一对编码器和译码器。终端和基站分别保存所述K0套编码器和译码器的神经网络参数。在使用时,基站根据 信道情况通过高层信令配置所述K0套编码器和译码器对的索引,这样终端接收所述索引就知道使用了哪套编码器的参数,即基站配置编码器和译码器对的索引,终端接收所述编码器和译码器对的索引,确定编码器对应的参数。其中,终端根据信道的场景,信道的角度扩展、时延扩展,多普勒扩展等至少一个因素选择其中的一个编码器,并将所选的编码器的索引通过物理层和/或高层信令传输给基站。基站通过终端反馈的编码网络索引,获得对应的译码器,对接收的第二信道信息进行处理得到第三信道信息。如果只有K0=1套编码器参数,则终端反馈索引0即可。
作为本公开的第三个方面,提供一种编码器和解码器进行联合训练的方法,所述编码器能够进行N种压缩倍数的压缩,所述解码器能够进行N种膨胀倍数的解压缩,且N中压缩倍数的数值分别与N种解压缩的膨胀倍数一一对应地相等,其中,如图14所示,所述方法包括:
在步骤S310中,对所述编码器的初始参数进行训练,以获得所述编码器的最终参数;
在步骤S320中,对所述解码器的初始参数进行训练,以获得所述解码器的最终参数。
其中,对所述编码器的初始参数进行训练的步骤中的训练误差、以及对所述解码器的初始参数进行训练的训练误差均为所述编码器的输入与所述解码器的输出之间的误差函数。
在本公开中,编码器和解码器联合训练,通过上述方法训练出的编码器和解码器中,编码器的输入信息与解码器的输出信息更加接近。
如上文中所述,所述编码器包括N个子压缩模块,N个子压缩模块分别能够实现N种压缩倍数的压缩,所述编码器的初始参数包括N个子压缩模块的初始参数,所述编码器的最终参数包括N个子压缩模块的最终参数。
所述解码器包括N个子解压缩模块，N个所述子解压缩模块分别能够实现N种膨胀倍数的解压缩，所述解码器的初始参数包括N个子解压缩模块的初始参数，所述解码器的最终参数包括N个子解压缩模块的最终参数。
相应地，对所述编码器的初始参数进行训练包括分别对N个子压缩模块的初始参数进行训练，对所述解码器的初始参数进行训练包括分别对N个子解压缩模块的初始参数进行训练。
可选地，对某一子压缩模块的初始参数进行训练时，其训练误差为该子压缩模块的输入、与膨胀倍数数值等于该子压缩模块压缩倍数数值的子解压缩模块的输出，这二者之间的误差函数。
作为一种可选实施方式，为了训练不同压缩倍数子模块下泛化性更好的参数，所述误差函数满足以下公式：
Loss = k1·Loss1 + k2·Loss2 + … + kN·LossN
其中，Loss为所述误差；
i为子压缩模块、以及子解压缩模块的序号；
ki为第i个子压缩模块以及第i个子解压缩模块的加权系数；
Lossi为第i个子压缩模块的输入与第i个子解压缩模块的输出之间的误差函数。
通过对N个不同的子压缩模块与对应子解压缩模块的训练误差函数进行加权联合训练，可以使各压缩倍数下的模型对整体训练提供不同程度的贡献，可根据实际情况对压缩倍数大或者压缩倍数小的模型的训练误差函数赋予更高的权重。以新的训练误差函数对所有压缩倍数下的子模块一起协同训练，直至新的训练误差Loss最小且几乎不变时停止训练，即压缩模块与解压缩模块的网络参数训练完毕。然后，配合编码器与解码器中的第一处理模块与第二处理模块的参数训练，终端与基站的一套编码器与译码器的参数训练完成。
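加权联合训练可以用如下 PyTorch 风格的草图示意（网络结构、维度与权重 ki 的取值均为说明用的假设，并非本公开限定的具体模型）：

```python
import torch
import torch.nn as nn

N = 3
# 以单层全连接示意 N 个子压缩/子解压缩模块，压缩倍数分别为 4、16、64（假设值）
compressors   = nn.ModuleList([nn.Linear(256, 256 // (4 ** (i + 1))) for i in range(N)])
decompressors = nn.ModuleList([nn.Linear(256 // (4 ** (i + 1)), 256) for i in range(N)])
k = [0.2, 0.3, 0.5]            # 各子模块训练误差的加权系数（假设其和为 1）

mse = nn.MSELoss()
opt = torch.optim.Adam(list(compressors.parameters()) + list(decompressors.parameters()), lr=1e-3)

x = torch.randn(64, 256)       # 一批第一信道状态信息（假设已展平为 256 维）
loss = sum(k[i] * mse(decompressors[i](compressors[i](x)), x) for i in range(N))
opt.zero_grad()
loss.backward()
opt.step()
```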
作为另一种可选实施方式,所述误差函数选自以下函数中的任意一者:
均方误差函数、绝对值损失函数、对数损失函数。
在一个具体的实施例中，终端与基站侧先对最低压缩倍数γ1的子压缩模块与子解压缩模块联合训练，训练目标为训练误差Loss最小且几乎不变时停止训练，即子压缩模块与其对应的子解压缩模块的神经网络参数训练完成。其中训练误差Loss是子压缩模块的输入与对应子解压缩模块输出之间的误差函数，其表征了子解压缩模块输出的恢复数据与子压缩模块前的原始输入数据之间的损失误差或接近程度，可以采用均方误差函数(Mean square error)、绝对值损失函数或对数损失函数等，具体示意图如图15所示。当网络参数训练完成时，终端侧与基站侧则固定该两部分模块的网络参数不再改变。然后，终端与基站再对次最低压缩倍数γ2的子压缩模块与子解压缩模块联合训练，当训练误差Loss2最低且几乎保持不变时，即次最低压缩倍数γ2的子压缩模块与子解压缩模块的神经网络参数训练完成，并固定该两部分模块的网络参数不再改变。以此类推，直至终端与基站对最高压缩倍数γN的子压缩模块与子解压缩模块联合训练完毕(其中γ1<γ2<…<γN)。然后，配合编码器与解码器中的第一处理模块与第二处理模块的参数训练，终端与基站的一套编码器与译码器的参数训练完成。
在一个具体的实施例中，终端与基站侧先对最高压缩倍数γ1的子压缩模块与子解压缩模块联合训练，训练目标为训练误差Loss最小且几乎不变时停止训练，即子压缩模块与其对应的子解压缩模块的神经网络参数训练完成，其中训练误差Loss可以采用均方误差函数(Mean square error)、绝对值损失函数或对数损失函数等。当网络参数训练完成时，终端侧与基站侧则固定该两部分模块的网络参数不再改变。然后，终端与基站再对次最高压缩倍数γ2的子压缩模块与子解压缩模块联合训练，其中当训练误差Loss2最低且几乎保持不变时即次最高压缩倍数γ2的子压缩模块与子解压缩模块的神经网络参数训练完成，并固定该两部分模块的网络参数不再改变。以此类推，直至终端与基站对最低压缩倍数γN的子压缩模块与子解压缩模块联合训练完毕(其中γ1>γ2>…>γN)。然后，配合编码器与解码器中的第一处理模块与第二处理模块的参数训练，终端与基站的一套编码器与译码器的参数训练完成。
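上述"逐个压缩倍数训练并冻结已训练子模块"的流程可以用如下草图示意（两种实施例仅在训练顺序上不同，草图以压缩倍数从低到高为例；网络结构、倍数取值与停止条件均为说明用的假设）：

```python
import torch
import torch.nn as nn

ratios = [4, 16, 64]                     # 依次训练的压缩倍数（假设值），从低到高
mse = nn.MSELoss()
trained_pairs = []

for r in ratios:
    comp, decomp = nn.Linear(256, 256 // r), nn.Linear(256 // r, 256)
    opt = torch.optim.Adam(list(comp.parameters()) + list(decomp.parameters()), lr=1e-3)
    for step in range(100):              # 实际以训练误差最小且几乎不变作为停止条件
        x = torch.randn(64, 256)
        loss = mse(decomp(comp(x)), x)
        opt.zero_grad()
        loss.backward()
        opt.step()
    # 该压缩倍数训练完成后固定其子压缩/子解压缩模块的参数，不再改变
    for p in list(comp.parameters()) + list(decomp.parameters()):
        p.requires_grad_(False)
    trained_pairs.append((comp, decomp))
```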
作为本公开的第四个方面,提供一种终端,所述终端包括:
第一存储模块,其上存储有第一可执行程序;
一个或多个第一处理器,当所述一个或多个第一处理器调用所述第一可执行程序时,实现本公开第一个方面所提供的反馈方法。
作为本公开的第五个方面,提供一种基站,所述基站包括:
第二存储模块,其上存储有第二可执行程序;
一个或多个第二处理器，当所述一个或多个第二处理器调用所述第二可执行程序时，实现本公开第二个方面所提供的获取方法。
作为本公开的第六个方面,提供一种电子设备,所述电子设备包括:
第三存储模块,其上存储有第三可执行程序;
一个或多个第三处理器,当所述一个或多个第三处理器调用所述第三可执行程序时,实现本公开第三个方面所提供的联合训练方法。
作为本公开的第七个方面,提供一种计算机可读介质,其上存储有可执行程序,当所述可执行程序被调用时,能够实现本公开第一个方面所提供的反馈方法、第二个方面所提供的获取方法和本公开第三个方面所提供的训练方法中的任意一者。
本领域普通技术人员可以理解，上文中所公开方法中的全部或某些步骤、系统、装置中的功能模块/单元可以被实施为软件、固件、硬件及其适当的组合。在硬件实施方式中，在以上描述中提及的功能模块/单元之间的划分不一定对应于物理组件的划分；例如，一个物理组件可以具有多个功能，或者一个功能或步骤可以由若干物理组件合作执行。某些物理组件或所有物理组件可以被实施为由处理器，如中央处理器、数字信号处理器或微处理器执行的软件，或者被实施为硬件，或者被实施为集成电路，如专用集成电路。这样的软件可以分布在计算机可读介质上，计算机可读介质可以包括计算机存储介质（或非暂时性介质）和通信介质（或暂时性介质）。如本领域普通技术人员公知的，术语计算机存储介质包括在用于存储信息（诸如计算机可读指令、数据结构、程序模块或其它数据）的任何方法或技术中实施的易失性和非易失性、可移除和不可移除介质。计算机存储介质包括但不限于RAM、ROM、EEPROM、闪存或其它存储器技术、CD-ROM、数字多功能盘（DVD）或其它光盘存储、磁盒、磁带、磁盘存储或其它磁存储装置、或者可以用于存储期望的信息并且可以被计算机访问的任何其它的介质。此外，本领域普通技术人员公知的是，通信介质通常包含计算机可读指令、数据结构、程序模块或者诸如载波或其它传输机制之类的调制数据信号中的其它数据，并且可包括任何信息递送介质。
本文已经公开了示例实施例,并且虽然采用了具体术语,但它们仅用于并仅应当被解释为一般说明性含义,并且不用于限制的目的。在一些实例中,对本领域技术人员显而易见的是,除非另外明确指出,否则可单独使用与特定实施例相结合描述的特征、特性和/或元素,或可与其它实施例相结合描述的特征、特性和/或元件组合使用。因此,本领域技术人员将理解,在不脱离由所附的权利要求阐明的本公开的范围的情况下,可进行各种形式和细节上的改变。

Claims (44)

  1. 一种信道状态信息的反馈方法,用于终端,所述反馈方法包括:
    利用编码器对第一信道状态信息进行M种压缩倍数的压缩,以获得M组第二信道状态信息,M为正整数,且M不小于2;
    反馈M组第二信道状态信息。
  2. 根据权利要求1所述的反馈方法,其中,所述编码器包括N个子压缩模块,N为不小于2的正整数,在所述利用编码器对第一信道状态信息进行M种压缩倍数的压缩的步骤中,选用N个子压缩模块中的多个子压缩模块实现对所述第一信道状态信息进行M种压缩倍数的压缩。
  3. 根据权利要求2所述的反馈方法,其中,2≤M≤N,N个所述子压缩模块的压缩倍数互不相同,在所述利用编码器对第一信道状态信息进行M种压缩倍数的压缩的步骤中,选用N个子压缩模块中的M个子压缩模块实现对所述第一信道状态信息进行M种压缩倍数的压缩。
  4. 根据权利要求2所述的反馈方法,其中,在所述利用编码器对第一信道状态信息进行M种压缩倍数的压缩的步骤中,对于M种压缩倍数中的任意一种倍数的压缩,通过一个所述子压缩模块、或者多个所述子压缩模块的组合实现,其中,压缩倍数最小的压缩步骤所用到的子压缩模块数量最少,压缩倍数最大的压缩步骤所用到的子压缩模块数量最多。
  5. 根据权利要求4所述的反馈方法,其中,所述反馈方法还包括:
    分别反馈实现每种压缩倍数的子压缩模块的标识信息,或者
    分别反馈实现每种压缩倍数的多个子压缩模块的标识信息中符合预定条件的一个或多个。
  6. 根据权利要求5所述的反馈方法,其中,N个子压缩模块串联,且N个子压缩模块均能独立地输出,所述子压缩模块的标识信息包括子压缩模块的编号,N个所述子压缩模块的编号符合第一预设规则,在分别反馈实现每种压缩倍数的多个子压缩模块的标识信息中符合预定条件的一个或多个的步骤中,至少反馈输出各组第二信道状态信息的子压缩模块的标识信息。
  7. 根据权利要求6所述的反馈方法,其中,所述第一预设规则包括从编号1至编号N递增地对N个所述子压缩模块进行连续编号,且所述子压缩模块的编号与该子压缩模块输出的压缩倍数正相关,所述预定条件包括实现每种压缩倍数多个子压缩模块的编号中的最大编号;或者
    所述第一预设规则包括从编号N至编号1递减地对N个所述子压缩模块进行连续编号,且所述子压缩模块的编号与该子压缩模块的压缩倍数反相关,所述预定条件包括实现每种压缩倍数多个子压缩模块的编号中的最小编号。
  8. 根据权利要求2所述的反馈方法,其中,所述子压缩模块满足以下条件中的任意一者:
    所述子压缩模块包括全连接网络,所述全连接网络包括至少两层全连接层;
    所述子压缩模块包括卷积网络,所述卷积网络包括以下部分中的至少一者:至少一个卷积层、至少一个残差块、至少一个稠密块、至少一个池化层或至少一个直连层;或者
    所述子压缩模块包括循环网络,所述循环网络包括长短时记忆LSTM神经网络或者门控循环GRU神经网络。
  9. 根据权利要求1至8中任意一项所述的反馈方法,其中,在反馈每一组第二信道状态信息时,均周期性地对该组第二信道状态信息进行反馈,且反馈周期的长短与该周期中反馈的第二信道状态信息对应的压缩倍数反相关,
    对于同一种压缩倍数,不同反馈周期中,对实时获得的第一信道状态信息进行压缩。
  10. 根据权利要求9所述的反馈方法,其中,所述反馈方法还包括:
    反馈各组第二信道状态信息的反馈周期的长短。
  11. 根据权利要求1至8中任意一项所述的反馈方法,其中,在所述利用编码器对第一信道状态信息进行M种压缩倍数的压缩之前,所述反馈方法还包括:
    获取原始信道信息;
    利用所述编码器对所述原始信道信息进行特征提取,以获得初始信道状态信息;
    对所述初始信道状态信息进行预处理,以获得所述第一信道状态信息。
  12. 根据权利要求11所述的反馈方法,其中,所述编码器还包括第一处理模块,所述第一处理模块用于执行对所述原始信道信息进行特征提取的步骤,所述第一处理模块满足以下条件中的任意一者:
    所述第一处理模块包括全连接网络,所述全连接网络包括至少两层全连接层;
    所述第一处理模块包括卷积网络,所述卷积网络包括以下部分中的至少一者:至少一个卷积层、至少一个残差块、至少一个稠密块、至少一个池化层或至少一个直连层;或者
    所述第一处理模块包括循环网络,所述循环网络包括长短时记忆LSTM神经网络或者门控循环GRU神经网络。
  13. 根据权利要求12所述的反馈方法,其中,对所述初始信道状态信息进行预处理,以获得所述第一信道状态信息的步骤中,所述第一信道状态信息的维度与所述编码器的输入维度一致。
  14. 根据权利要求1至8中任意一项所述的反馈方法,其中,所述反馈方法还包括:
    反馈M组第二信道状态信息对应的压缩倍数。
  15. 一种信道状态信息的获取方法,用于基站,所述获取方法包括:
    接收M组第二信道状态信息,接收到的每一组所述第二信道状态信息均为编码器对第一信道状态信息经相应倍数的压缩所获得的信息,M为正整数,且M不小于2;
    利用解码器对各组第二信道状态信息进行解压缩及合并处理,获得目标信道状态信息,其中,
    对数据量最小的一组第二信道状态信息进行解压缩的膨胀倍数大于对其他组第二信道状态信息进行解压缩的膨胀倍数,且所述目标信道状态信息的元素数量与所述第一信道状态信息的元素数量相同。
  16. 根据权利要求15所述的获取方法,其中,所述解码器包括N个子解压缩模块,N为不小于2的正整数,所述对各组第二信道状态信息进行解压缩及合并处理,包括:
    确定N个子解压缩模块中与M组第二信道状态信息对应的多个子解压缩模块;
    利用多个子解压缩模块分别对M组第二信道状态信息进行解压缩及合并处理，获得目标信道状态信息。
  17. 根据权利要求16所述的获取方法,其中,2≤M≤N,N个子解压缩模块的膨胀倍数互不相同,在所述确定N个子解压缩模块中与M组第二信道状态信息对应的多个子解压缩模块的步骤中,确定N个子解压缩模块中与M组第二信道状态信息对应的M个子解压缩模块,
    所述利用多个子解压缩模块分别对M组第二信道状态信息进行解压缩及合并处理，获得目标信道状态信息，包括：
    利用M个子解压缩模块分别对M组第二信道状态信息进行解压缩,以获得M组第三信道状态信息;
    对M组第三信道状态信息进行合并处理,得到所述目标信道状态信息。
  18. 根据权利要求17所述的获取方法,其中,对M组第三信道状态信息进行合并处理,包括:
    对M组第三信道状态信息进行联结,以获得中间信道状态信息;
    对所述中间信道状态信息进行压缩,以获得所述目标信道状态信息;或者,
    通过以下方式中的任意一者执行所述对M组第三信道状态信息进行合并处理的步骤:
    平均值合并处理、平方平均合并处理或加权合并处理。
  19. 根据权利要求16所述的获取方法,其中,在确定N个子解压缩模块中与M组第二信道状态信息对应的多个子解压缩模块的步骤中,每组第二信道状态信息对应至少一个子解压缩模块,并且,压缩倍数越大的第二信道状态信息对应的子解压缩模块的数量越多;
    所述利用多个子解压缩模块分别对M组第二信道状态信息进行解压缩及合并处理，包括：
    利用多个子解压缩模块中相应的子解压缩模块对除压缩倍数最小的第二信道状态信息之外的其他第二信道状态信息进行解压缩,以获得M-1组第一中间信道状态信息,其中,所述第一中间信道状态信息中的元素数量与压缩倍数最小的第二信道状态信息的元素数量相等;
    对M-1组第一中间信道状态信息与压缩倍数最小的第二信道状态信息进行合并处理,获得第二中间信道状态信息;
    利用多个子解压缩模块对所述第二中间信道状态信息进行解压缩,以获得所述目标信道状态信息。
  20. 根据权利要求19所述的获取方法,其中,所述对M-1组第一中间信道状态信息与压缩倍数最小的第二信道状态信息进行合并处理,获得第二中间信道状态信息,包括:
    对M-1组第一中间信道状态信息、以及压缩倍数最小的第二信道状态信息进行联结处理,获得所述第二中间信道状态信息;或者
    通过以下方式中的任意一者执行所述对M-1组第一中间信道状态信息与压缩倍数最小的第二信道状态信息进行合并处理,获得第二中间信道状态信息的步骤:
    平均值合并处理、平方平均合并处理或加权合并处理。
  21. 根据权利要求19所述的获取方法,其中,所述编码器包括N个子压缩模块,在所述对各组第二信道状态信息进行解压缩及合并处理之前,所述获取方法还包括:接收多个子压缩模块的标识信息;
    所述确定N个子解压缩模块中与M组第二信道状态信息对应的多个子解压缩模块,包括:
    根据接收到的多个子压缩模块的标识信息确定与该多个子压缩模块一一对应的多个子解压缩模块的标识信息;
    根据确定的多个子解压缩模块的标识信息确定执行对各组第二信道状态信息进行解压缩所需要的子解压缩模块的标识信息。
  22. 根据权利要求21所述的获取方法,其中,N个子压缩模块串联,且N个子压缩模块均能独立地输出,所述子压缩模块的标识信息包括所述子压缩模块的编号,所述编码器的N个子压缩模块的编号符合第一预设规则,接收到的各个子压缩模块的标识信息分别为输出各组第二信道状态信息的子压缩模块的标识信息;
    N个子解压缩模块串联,且N个子解压缩模块均能独立地接收输入数据,所述子解压缩模块的标识信息包括所述子解压缩模块的编号,所述解码器的N个子解压缩模块的编号符合第二预设规则,所述第一预设规则和所述第二预设规则之间存在对应关系,
    在所述根据确定的多个子解压缩模块的标识信息确定执行对各组第二信道状态信息进行解压缩所需要的子解压缩模块的标识信息的步骤中,根据接收到的子压缩模块的标识信息确定各组第二信道状态信息所输入的子解压缩模块的标识信息。
  23. 根据权利要求22所述的获取方法,其中,所述第一预设规则包括从编号1直至编号N递增地对N个所述子压缩模块进行连续编号,且所述子压缩模块的编号与该子压缩模块的压缩倍数正相关,所述第二预设规则包括从编号N直至编号1递减地对N个所述子解压缩模块进行连续编号,且所述子解压缩模块的编号与该子解压缩模块的膨胀倍数反相关,
    所述第一预设规则与所述第二预设规则之间对应关系包括:编号相同的子压缩模块与子解压缩模块相对应;
    编号为i的子解压缩模块的膨胀倍数小于编号为i+1的子解压缩模块的膨胀倍数,其中,i为变量,且为正整数,1≤i≤N-1;
    在所述根据确定的多个子解压缩模块的标识信息确定执行对各组第二信道状态信息进行解压缩所需要的子解压缩模块的标识信息的步骤中，将接收到的编号所对应的子解压缩模块作为输入各组第二信道状态信息的子解压缩模块。
  24. 根据权利要求23所述的获取方法,其中,所述根据多个子压缩模块的标识信息确定与该多个子压缩模块一一对应的多个子解压缩模块的标识信息,还包括:
    将输入各组第二信道状态信息的子解压缩模块的编号、直至编号1作为与获得各组第二信道状态信息的多个子压缩模块一一对应的多个子解压缩模块的标识信息。
  25. 根据权利要求23所述的获取方法,其中,所述利用与压缩倍数最小的第二信道状态信息对应的子解压缩模块对所述第二中间信道状态信息进行解压缩,以获得所述目标信道状态信息,包括:
    确定接收到的编号中最小的一者;
    将编号中最小的一者确定为对压缩倍数最小的第二中间信道状态信息进行解压缩的第一个子解压缩模块的编号；
    将接收到的编号中最小的一者直至编号为1的子解压缩模块确定为对压缩倍数最小的第二中间信道状态信息进行解压缩的子解压缩模块。
  26. 根据权利要求22所述的获取方法,其中,所述第一预设规则包括从编号N直至编号1递减地对N个所述子压缩模块进行连续编号,且所述子压缩模块的编号与该子压缩模块的压缩倍数反相关,所述第二预设规则包括从编号1直至编号N递增地对N个所述子解压缩模块进行连续编号,且所述子解压缩模块的编号与该子解压缩模块的膨胀倍数正相关,
    所述第一预设规则与所述第二预设规则之间对应关系包括:编号相同的子压缩模块与子解压缩模块相对应;
    编号为i的子解压缩模块的膨胀倍数大于编号为i+1的子解压缩模块的膨胀倍数,其中,i为变量,且为正整数,1≤i≤N-1;
    在所述根据确定的多个子解压缩模块的标识信息确定执行对各组第二信道状态信息进行解压缩所需要的子解压缩模块的标识信息的步骤中,将接收到的编号所对应的子解压缩模块作为输入各组第二信道状态信息的子解压缩模块。
  27. 根据权利要求26所述的获取方法,其中,所述根据多个子压缩模块的标识信息确定与该多个子压缩模块一一对应的多个子解压缩模块的标识信息,还包括:
    将输入各组第二信道状态信息的子解压缩模块的编号、直至编号N作为与该多个子压缩模块一一对应的多个子解压缩模块的标识信息。
  28. 根据权利要求26所述的获取方法，其中，所述利用与压缩倍数最小的第二信道状态信息对应的子解压缩模块对所述第二中间信道状态信息进行解压缩，以获得所述目标信道状态信息，包括：
    确定接收到的编号中最大的一者;
    将编号中最大的一者确定为对压缩倍数最小的第二中间信道状态信息进行解压缩的第一个子解压缩模块的编号；
    将接收到的编号中最大的一者直至编号为N的子解压缩模块确定为对压缩倍数最小的第二中间信道状态信息进行解压缩的子解压缩模块。
  29. 根据权利要求16所述的获取方法，其中，所述利用多个子解压缩模块分别对M组第二信道状态信息进行解压缩及合并处理，包括：
    利用多个子解压缩模块中的一部分子解压缩模块对各组的第二信道状态信息进行解压缩，以分别获得M组第一中间信道状态信息，其中，所述第一中间信道状态信息中的元素数量小于所述第一信道状态信息的元素数量；
    对M组第一中间信道状态信息进行合并处理，获得第二中间信道状态信息；
    利用多个子解压缩模块中的剩余子解压缩模块对所述第二中间信道状态信息进行解压缩,以获得所述目标信道状态信息。
  30. 根据权利要求16所述的获取方法,其中,所述子解压缩模块满足以下条件中的至少一者:
    所述子解压缩模块包括全连接层;
    所述子解压缩模块包括多个逆卷积层，所述逆卷积层包括以下部分中的至少一者：上采样层、上池化层、转置卷积层、残差块、稠密块或直连层；或者
    所述子解压缩模块包括循环网络,所述循环网络包括长短时记忆LSTM神经网络或者门控循环GRU神经网络。
  31. 根据权利要求16所述的获取方法,其中,所述解码器还包括第二处理模块,所述获取方法还包括:
    通过所述第二处理模块对所述目标信道状态信息进行处理,以提取终端反馈的信道状态信息。
  32. 根据权利要求31所述的获取方法,其中,所述第二处理模块满足以下条件中的任意一者:
    所述第二处理模块包括全连接网络,所述全连接网络包括至少两层全连接层;
    所述第二处理模块包括卷积网络,所述卷积网络包括以下部分中的至少一者:至少一个卷积层、至少一个残差块、至少一个稠密块、至少一个池化层或至少一个直连层;或者
    所述第二处理模块包括循环网络,所述循环网络包括长短时记忆LSTM神经网络或者门控循环GRU神经网络。
  33. 根据权利要求15至32中任意一项所述的获取方法,其中,所述获取方法还包括:
    接收各组第二信道状态信息的反馈周期的时间长短信息,其中,所述第二信道状态信息的反馈周期与所述第二信道状态信息的数据量反相关;
    在对各组第二信道状态信息进行解压缩及合并处理的步骤中,利用反馈周期长的第二信道状态信息对该反馈周期长的第二信道状态信息的反馈周期跨度内的反馈周期短的第二信道状态信息进行合并处理。
  34. 根据权利要求33所述的获取方法,其中,所述获取方法还包括:
    根据接收到的各组第二信道状态信息的反馈周期的时间长短信息确定对各组第二信道状态信息进行解压缩的膨胀倍数。
  35. 根据权利要求15至32中任意一项所述的获取方法,其中,所述获取方法还包括:
    接收各组第二信道状态信息对应的压缩倍数信息;
    根据各组第二信道状态信息对应的压缩倍数信息确定各组第二信道状态信息的膨胀倍数。
  36. 一种编码器和解码器进行联合训练的方法，所述编码器能够进行N种压缩倍数的压缩，所述解码器能够进行N种膨胀倍数的解压缩，且N种压缩倍数的数值分别与N种解压缩的膨胀倍数一一对应地相等，所述方法包括：
    对所述编码器的初始参数进行训练,以获得所述编码器的最终参数;
    对所述解码器的初始参数进行训练,以获得所述解码器的最终参数;其中,
    对所述编码器的初始参数进行训练的步骤中的训练误差、以及对所述解码器的初始参数进行训练的训练误差均为所述编码器的输入与所述解码器的输出之间的误差函数。
  37. 根据权利要求36所述的方法,其中,所述编码器包括N个子压缩模块,N个子压缩模块分别能够实现N种压缩倍数的压缩,所述编码器的初始参数包括N个子压缩模块的初始参数,所述编码器的最终参数包括N个子压缩模块的最终参数;
    所述解码器包括N个子解压缩模块，N个所述子解压缩模块分别能够实现N种膨胀倍数的解压缩，所述解码器的初始参数包括N个子解压缩模块的初始参数，所述解码器的最终参数包括N个子解压缩模块的最终参数；
    对所述编码器的初始参数进行训练包括分别对N个子压缩模块的初始参数进行训练;
    对所述解码器的初始参数进行训练包括分别对N个子解压缩模块的初始参数进行训练。
  38. 根据权利要求37所述的方法,其中,对子压缩模块的初始参数进行训练的训练误差为该子压缩模块的输入与膨胀倍数的数值与该子压缩模块的压缩倍数的数值相同的子解压缩模块的输出之间的误差函数。
  39. 根据权利要求37所述的方法,其中,所述误差函数满足以下公式:
    Loss = k1·Loss1 + k2·Loss2 + … + kN·LossN
    其中,Loss为所述误差;
    i为子压缩模块、以及子解压缩模块的序号;
    ki为第i个子压缩模块以及第i个子解压缩模块的加权系数;
    Lossi为第i个子压缩模块的输入与第i个子解压缩模块的输出之间的误差函数。
  40. 根据权利要求36至39中任意一项所述的方法,其中,所述误差函数选自以下函数中的任意一者:
    均方误差函数、绝对值损失函数、对数损失函数。
  41. 一种终端,所述终端包括:
    第一存储模块,其上存储有第一可执行程序;
    至少一个第一处理器,当所述至少一个第一处理器调用所述第一可执行程序时,实现权利要求1至14中任意一项所述的反馈方法。
  42. 一种基站,所述基站包括:
    第二存储模块,其上存储有第二可执行程序;
    至少一个第二处理器,当所述至少一个第二处理器调用所述第二可执行程序时,实现权利要求15至35中任意一项所述的获取方法。
  43. 一种电子设备,所述电子设备包括:
    第三存储模块,其上存储有第三可执行程序;
    至少一个第三处理器,当所述至少一个第三处理器调用所述第三可执行程序时,实现权利要求36至40中任意一项所述的联合训练的方法。
  44. 一种计算机可读介质,其上存储有可执行程序,当所述可执行程序被调用时,能够实现权利要求1至40中任意一项所述的方法。
PCT/CN2022/138160 2021-12-10 2022-12-09 反馈、获取及训练方法、终端、基站、电子设备和介质 WO2023104205A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111505026.8A CN116260494A (zh) 2021-12-10 2021-12-10 反馈、获取及训练方法、终端、基站、电子设备和介质
CN202111505026.8 2021-12-10

Publications (1)

Publication Number Publication Date
WO2023104205A1 true WO2023104205A1 (zh) 2023-06-15

Family

ID=86677936

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/138160 WO2023104205A1 (zh) 2021-12-10 2022-12-09 反馈、获取及训练方法、终端、基站、电子设备和介质

Country Status (2)

Country Link
CN (1) CN116260494A (zh)
WO (1) WO2023104205A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117411526A (zh) * 2022-07-06 2024-01-16 华为技术有限公司 一种通信方法及装置

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150036529A1 (en) * 2012-04-20 2015-02-05 Huawei Technologies Co., Ltd. Channel state information feedback methods and devices
CN111416642A (zh) * 2019-01-04 2020-07-14 财团法人工业技术研究院 基于深度学习与信道状态信息的通信系统及编解码方法
CN110350958A (zh) * 2019-06-13 2019-10-18 东南大学 一种基于神经网络的大规模mimo的csi多倍率压缩反馈方法

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LIAO YONG, YAO HAI-MEI, HUA YUAN-XIAO, ZHAO YAN: "CSI Feedback Method Based on Deep Learning for FDD Massive MIMO Systems", ACTA ELECTRONICA SINICA, ZHONGGUO DIANZI XUEHUI, CN, vol. 48, no. 6, 1 June 2020 (2020-06-01), CN , pages 1182 - 1189, XP093070189, ISSN: 0372-2112, DOI: 10.3969/j.issn.0372-2112.2020.06.020 *

Also Published As

Publication number Publication date
CN116260494A (zh) 2023-06-13

Similar Documents

Publication Publication Date Title
CN114039635B (zh) 用于压缩和/或解压缩信道状态信息的设备和方法
CN111510189A (zh) 信息反馈方法及装置
EP2475126B1 (en) Method, terminal and base station for processing channel state information
CN101965704B (zh) 具有不等差错保护的反馈的方法和装置
CN111800172B (zh) 一种通信方法及装置
CN109039406A (zh) 一种信道状态信息发送、接收方法及设备
WO2010003095A1 (en) System and method for quantization of channel state information
WO2012041103A1 (zh) 信道信息反馈方法及终端
CN101667895A (zh) 多天线系统中信道信息量化码本的构造方法及装置
WO2023104205A1 (zh) 反馈、获取及训练方法、终端、基站、电子设备和介质
US11509374B2 (en) Method for determining index of orthogonal basis vector and device
CN111436075B (zh) 信道状态信息的上报方法、解码方法、终端及网络侧设备
CN102025450B (zh) 一种进行信道信息编码反馈的方法及移动终端
WO2023011472A1 (zh) 信道状态信息的反馈方法及接收方法、终端、基站、计算机可读存储介质
WO2018113550A1 (zh) 信道编码方法、信道编码装置、芯片系统及存储介质
CN106685497A (zh) 码书限制信令的发送、信道信息的量化反馈方法及装置
TW201944745A (zh) 基於深度學習作為通道狀態資訊之回饋方法
US11522662B2 (en) Method and user equipment for generating a channel state information feedback report including jointly encoded parameters from multiple layers
WO2023030538A1 (zh) 信道状态信息的处理方法、终端、基站、计算机可读存储介质
WO2020063804A1 (en) Enhanced type ii channel state information in mobile communications
WO2023036119A1 (zh) 信道状态信息的反馈方法、反馈信息的处理方法、终端、基站、及计算机可读存储介质
US20240171246A1 (en) Channel state information transmission method and apparatus, terminal, base station, and storage medium
CN111010218A (zh) 指示和确定预编码向量的方法以及通信装置
WO2023071683A1 (zh) 用于反馈信道状态的方法和装置
WO2024093686A1 (zh) 一种下行信道状态信息上报方法及装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22903622

Country of ref document: EP

Kind code of ref document: A1