WO2023159614A1 - Precoding matrix determination method and equipment/storage medium/apparatus - Google Patents

Precoding matrix determination method and equipment/storage medium/apparatus Download PDF

Info

Publication number
WO2023159614A1
Authority
WO
WIPO (PCT)
Prior art keywords
network
channel
information
present disclosure
sub
Prior art date
Application number
PCT/CN2022/078445
Other languages
English (en)
French (fr)
Inventor
陈栋
池连刚
许威
Original Assignee
北京小米移动软件有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京小米移动软件有限公司 filed Critical 北京小米移动软件有限公司
Priority to PCT/CN2022/078445 priority Critical patent/WO2023159614A1/zh
Publication of WO2023159614A1 publication Critical patent/WO2023159614A1/zh

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04BTRANSMISSION
    • H04B7/00Radio transmission systems, i.e. using radiation field
    • H04B7/02Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas
    • H04B7/04Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas
    • H04B7/0413MIMO systems
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04BTRANSMISSION
    • H04B7/00Radio transmission systems, i.e. using radiation field
    • H04B7/02Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas
    • H04B7/04Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas
    • H04B7/0413MIMO systems
    • H04B7/0456Selection of precoding matrices or codebooks, e.g. using matrices antenna weighting

Definitions

  • the present disclosure relates to the field of communication technologies, and in particular to a precoding matrix determination method and equipment/storage medium/apparatus.
  • m-MIMO: massive Multiple-Input Multiple-Output
  • UE: User Equipment
  • the base station can determine the downlink precoding matrix according to the channel matrix.
  • the number of downlink channels corresponding to the base station is also large, resulting in a large overhead for feeding back the channel matrix. Therefore, there is an urgent need for a simple and efficient method for determining the precoding matrix under low-overhead limited feedback.
  • the methods for determining the precoding matrix mainly include:
  • Method 1: each UE uses the estimated channel matrix and a given codebook to find, through an exhaustive search, the analog-domain precoding/combining vector that maximizes the modulus of the low-dimensional baseband equivalent channel vector, and obtains the corresponding low-dimensional baseband equivalent channel vector based on the determined analog-domain precoding/combining vector; the UE then feeds back the low-dimensional baseband equivalent channel vector and the corresponding codebook information to the base station, so that the base station uses the low-dimensional equivalent channel vector to determine the digital-domain precoding matrix with the ZF (zero-forcing) method (a generic zero-forcing sketch is given below).
  • Method 2: the base station uses CNN (Convolutional Neural Network) reinforcement learning to obtain the mapping from the environment to the direction of the user beam, so as to find the most likely direction of the user beam, concentrate the sensing beam energy as much as possible, and reduce the sensing overhead.
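  • The following is a minimal, illustrative sketch of zero-forcing digital precoding from the low-dimensional baseband equivalent channels fed back by the UEs, as mentioned in Method 1. It is not the patent's implementation; the shapes, the function name and the unit-power column normalization are assumptions for illustration only.

```python
import numpy as np

def zero_forcing_precoder(H_eq):
    """Zero-forcing digital precoder from a K x M baseband equivalent channel matrix.

    H_eq: complex matrix whose k-th row is the low-dimensional equivalent channel
    vector fed back by the k-th UE (illustrative shapes only).
    """
    # Pseudo-inverse H^H (H H^H)^-1 nulls inter-user interference.
    W = H_eq.conj().T @ np.linalg.inv(H_eq @ H_eq.conj().T)
    # Normalize each column so each user's transmit power is 1 (an assumed convention).
    return W / np.linalg.norm(W, axis=0, keepdims=True)

# Example: 4 UEs, 8-dimensional baseband equivalent channels.
H_eq = (np.random.randn(4, 8) + 1j * np.random.randn(4, 8)) / np.sqrt(2)
W_zf = zero_forcing_precoder(H_eq)  # shape (8, 4)
```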
  • the exhaustive search method is used to determine the precoder/combiner in the analog domain, which has high computational complexity and low efficiency, and also requires careful design of the codebook.
  • the CNN neural network constructed at the base station fails to fully mine the internal characteristics of the channel state information, and does not consider the temporal change trend of the channel data, so that the base station cannot accurately determine the precoding matrix of the UE.
  • the precoding matrix determination method and equipment/storage medium/device proposed in the present disclosure are used to solve the technical problems of low efficiency, high computational complexity, and unsatisfactory precoding design effects in related technologies.
  • the method for determining the precoding matrix proposed by an embodiment of the present disclosure is applied to a base station, including:
  • the channel characteristic information q k,t is obtained after the channel estimation information h k,t is processed by the first sub-network at the UE side; wherein, the dimension of the channel characteristic information q k,t is lower than the dimension of the channel estimation information h k,t ; the compression rate parameter ⁇ is the compression rate of the first sub-network; and the first sub-network includes a first convolutional neural network and a first long short-term memory network (LSTM), wherein the first convolutional neural network and the first LSTM are connected in a residual network-like manner.
  • the determining the precoding matrices F t and W t based on the compression rate parameter ⁇ and channel characteristic information q k,t includes :
  • the second sub-network is composed of two parallel modules, a first network module and a second network module;
  • the first network module includes a sequentially connected second LSTM, a first residual neural network, and a calculation module, and the first residual neural network includes a first one-dimensional convolutional network and a first residual unit composed of n convolutional neural networks, where n is a positive integer, and the first one-dimensional convolutional network and the first residual unit are connected in a residual network-like manner; wherein, the channel feature information q k,t is processed by the second LSTM and the first residual neural network to obtain the phase shifter angle θ t , and the calculation module is used to calculate and output the precoding matrix F t based on the phase shifter angle θ t ;
  • the second network module includes a third LSTM, a second residual neural network, and a normalization processing module connected in sequence, the input end of the normalization processing module is also connected to the output end of the calculation module, and the second residual neural network includes a second one-dimensional convolutional network and a second residual unit composed of n convolutional neural networks, and the second one-dimensional convolutional network and the second residual unit are connected in a residual network-like manner; wherein, the channel feature information q k,t is processed through the third LSTM, the second residual neural network and the normalization processing module to obtain a precoding matrix W t and output it;
  • the matrix size corresponding to the first residual neural network is different from the matrix size corresponding to the second residual neural network.
  • the base station includes at least one second subnetwork, where different second subnetworks have different compression ratios;
  • the method also includes:
  • the precoding matrix determination network is trained based on the sample data set H.
  • the method further includes:
  • the method for determining the precoding matrix proposed by another embodiment of the present disclosure is applied to the UE, including:
  • the determining the channel characteristic information q k,t based on the channel estimation information h k,t includes:
  • the first sub-network includes a first convolutional neural network and a first LSTM, wherein the first convolutional neural network and the first LSTM are connected in a residual network-like manner, and the compression rate parameter ⁇ is the compression rate of the first subnetwork.
  • the method further includes:
  • the precoding matrix determination device proposed by the embodiment includes:
  • An acquisition module configured to acquire channel characteristic information q k,t and a compression rate parameter ⁇ from UE, where k is used to indicate the kth UE, and t is used to indicate the timestamp corresponding to the channel characteristic information q k,t ;
  • a determining module configured to determine precoding matrices F t and W t based on the compression rate parameter ⁇ and channel characteristic information q k , t .
  • the precoding matrix determination device proposed by the embodiment includes:
  • a determining module configured to determine channel estimation information h k,t , where k is used to indicate the kth UE, and t is used to indicate the time stamp corresponding to the channel estimation information h k,t ;
  • the above determining module is further configured to determine channel characteristic information q k,t based on the channel estimation information h k,t , wherein the dimension of the channel characteristic information q k,t is lower than that of the channel estimation information h k,t dimension;
  • a sending module configured to send channel characteristic information q k,t and a compression rate parameter ⁇ to the base station.
  • an embodiment provides a communication device, the device includes a processor and a memory, a computer program is stored in the memory, and the processor executes the computer program stored in the memory, so that the device executes the method provided in the embodiment of the foregoing aspect.
  • an embodiment provides a communication device, the device includes a processor and a memory, a computer program is stored in the memory, and the processor executes the computer program stored in the memory, so that the device executes the method provided in the above embodiment of another aspect.
  • a communication device provided by an embodiment of another aspect of the present disclosure includes: a processor and an interface circuit;
  • the interface circuit is used to receive code instructions and transmit them to the processor
  • the processor is configured to run the code instructions to execute the method provided in one embodiment.
  • a communication device provided by an embodiment of another aspect of the present disclosure includes: a processor and an interface circuit;
  • the interface circuit is used to receive code instructions and transmit them to the processor
  • the processor is configured to run the code instructions to execute the method provided in another embodiment.
  • the computer-readable storage medium provided by another embodiment of the present disclosure is used to store instructions, and when the instructions are executed, the method provided by the first embodiment is implemented.
  • the computer-readable storage medium provided by another embodiment of the present disclosure is used to store instructions, and when the instructions are executed, the method provided by another embodiment is implemented.
  • the base station can obtain the channel characteristic information q k,t obtained by the UE through the first sub-network, together with the compression rate parameter ⁇ , where k is used to indicate the kth UE and t is used to indicate the time stamp, and determines the precoding matrices F t and W t through the second sub-network based on the compression rate parameter ⁇ and the channel characteristic information q k,t .
  • the channel characteristic information q k,t is obtained after the first sub-network at the UE side processes the channel estimation information h k,t ; the dimension of the channel characteristic information q k,t is lower than the dimension of the channel estimation information h k,t .
  • that is, the UE first compresses the obtained channel state information into low-dimensional channel characteristic information through the first sub-network and feeds it back to the base station together with the compression rate parameter, and the base station then determines the precoding matrices through the second sub-network based on the channel characteristic information q k,t and the compression rate parameter.
  • since the information fed back by the UE is low-dimensional, the transmission complexity is low and the transmission efficiency is high, thus providing a simple and efficient method for determining the precoding matrix under low-overhead limited feedback.
  • the precoding matrix determination network in the above precoding matrix determination method is composed of the first sub-network for channel compression and reconstruction and the second sub-network for determining the precoding matrix; starting from the two-dimensional (time and space) correlation of the channel, it takes the maximization of the user sum rate at all time instants as the optimization goal and jointly optimizes channel compression and the precoding matrix, thereby realizing efficient determination of the hybrid precoding matrix under limited feedback from the UE.
  • the first sub-network is composed of the first convolutional neural network and the first LSTM, which extracts the spatial and temporal features of the channel, and through the residual cascade design of the first convolutional neural network and the first LSTM, the The obtained channel state information is compressed into low-dimensional channel feature information, thereby reducing the complexity of the network.
  • the second sub-network is composed of two parallel modules, a first network module and a second network module, each composed of an LSTM and a residual neural network; by integrating the channel reconstruction and precoding matrix calculation processes, it keeps the network lightweight.
  • the precoding matrix determination network is trained on the base station side, and after the training is completed, the network parameters corresponding to each trained simulated sub-network are distributed to the corresponding UE; the UE can therefore deploy, according to the received network parameters, multiple first sub-networks with different compression ratios, use a deployed first sub-network to process channel state information into low-dimensional channel feature information, and send the channel feature information together with the compression ratio parameter of the first sub-network to the base station, which then determines the precoding matrix, ensuring successful determination of the precoding matrix.
  • the first sub-network and the second sub-network can be flexibly adjusted, selecting networks corresponding to different compression rates according to the performance of the current precoding, thereby achieving a balance between performance and limited feedback.
  • Fig. 1a is a schematic flowchart of a method for determining a precoding matrix provided by an embodiment of the present disclosure
  • Fig. 1b is a schematic structural diagram of a first sub-network provided by an embodiment of the present disclosure
  • Fig. 1c is a schematic structural diagram of a second sub-network provided by an embodiment of the present disclosure.
  • Fig. 2a is a schematic flowchart of a method for determining a precoding matrix provided by another embodiment of the present disclosure
  • FIG. 2b is a schematic structural diagram of a precoding matrix determination network provided by an embodiment of the present disclosure
  • FIG. 3 is a schematic flowchart of a method for determining a precoding matrix provided by another embodiment of the present disclosure
  • FIG. 4 is a schematic flowchart of a method for determining a precoding matrix provided by another embodiment of the present disclosure
  • FIG. 5 is a schematic flowchart of a method for determining a precoding matrix provided by another embodiment of the present disclosure
  • FIG. 6 is a schematic structural diagram of an apparatus for determining a precoding matrix provided by an embodiment of the present disclosure
  • FIG. 7 is a schematic structural diagram of an apparatus for determining a precoding matrix provided by another embodiment of the present disclosure.
  • Fig. 8 is a block diagram of a user equipment provided by an embodiment of the present disclosure.
  • Fig. 9 is a block diagram of a network side device provided by an embodiment of the present disclosure.
  • although the embodiments of the present disclosure may use the terms first, second, third, etc. to describe various information, the information should not be limited to these terms. These terms are only used to distinguish information of the same type from one another. For example, without departing from the scope of the embodiments of the present disclosure, first information may also be called second information, and similarly, second information may also be called first information.
  • the words "if” and "if” as used herein may be interpreted as “at” or "when” or "in response to a determination.”
  • Fig. 1a is a schematic flowchart of a method for determining a precoding matrix provided by an embodiment of the present disclosure. The method is executed by a base station. As shown in Fig. 1a, the method for determining a precoding matrix may include the following steps:
  • Step 101a acquiring channel feature information q k,t and compression ratio parameter ⁇ from UE.
  • a UE may be a device that provides voice and/or data connectivity to a user.
  • Terminal equipment can communicate with one or more core networks via a RAN (Radio Access Network). The UE may be an IoT terminal, such as a sensor device, a mobile phone (also called a "cellular" phone), or a computer with an IoT terminal, which may, for example, be a fixed, portable, pocket-sized, hand-held, built-in, or vehicle-mounted device.
  • The UE may also be called a station (STA), subscriber unit, subscriber station, mobile station, mobile, remote station, access point, remote terminal, access terminal, user terminal, or user agent.
  • the UE may also be a device of an unmanned aerial vehicle.
  • the UE may also be a vehicle-mounted device, for example, it may be a trip computer with a wireless communication function, or a wireless terminal connected externally to the trip computer.
  • the UE may also be a roadside device, for example, it may be a street lamp, a signal lamp, or other roadside devices with a wireless communication function.
  • the above-mentioned channel feature information q k,t may specifically be obtained after the first sub-network at the UE side processes the channel estimation information h k,t , wherein the dimension of the channel feature information q k,t is lower than the dimension of the channel estimation information h k,t .
  • the channel estimation information h k,t may specifically be the downlink channel estimation information determined by the UE through a typical channel estimation method, where the typical channel estimation method is an existing channel estimation method; for details of this typical channel estimation method, reference may be made to the prior art, which is not repeated here in the embodiments of the present disclosure.
  • the above k is used to indicate the kth UE, and t is used to indicate the time stamp.
  • the above channel estimation information h k,t is: the downlink channel estimation information of the k-th UE at time t;
  • the channel feature information q k,t is: the channel characteristics of the k-th UE at time t information.
  • the above-mentioned compression rate parameter ⁇ is the compression rate of the first sub-network.
  • FIG. 1 b is a schematic structural diagram of a first subnetwork provided by an embodiment of the present disclosure.
  • the first sub-network may include a first convolutional neural network and a first LSTM (Long Short Term Memory, long short-term memory network).
  • the first convolutional neural network and the first LSTM are connected in a residual network-like manner, the first convolutional neural network can be used to extract channel space features, and the first LSTM is used to extract channel time features.
  • the channel estimation information h k,t can be input into the first sub-network and processed by the first convolutional neural network and the first LSTM in the first sub-network to obtain the channel feature information q k,t .
  • before inputting the channel estimation information h k,t to the first convolutional neural network, it needs to be reshaped into a 2×N t ×K×T tensor, where N t is the number of antennas at the base station, K is the total number of UEs, and T is the total number of time stamps. Based on this, assuming that there are N sets of channel estimation information h k,t , the channel estimation information h k,t can be reconstructed as an N×2×N t ×K×T matrix. Afterwards, the N×2×N t ×K×T matrix is input to the first convolutional neural network for the convolution operation (a small reshaping sketch is given below).
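  • The snippet below is a minimal illustration of this reshaping step, assuming the channel estimates are complex-valued and that the "2" dimension stacks their real and imaginary parts (an assumption; the text only states the target size N×2×N t ×K×T). All sizes are illustrative.

```python
import numpy as np

# Illustrative sizes (assumptions): N samples, Nt BS antennas, K UEs, T time stamps.
N, Nt, K, T = 8, 64, 4, 10

# Complex downlink channel estimates h_{k,t}: one Nt-vector per (sample, UE, time stamp).
h = (np.random.randn(N, Nt, K, T) + 1j * np.random.randn(N, Nt, K, T)) / np.sqrt(2)

# Stack real and imaginary parts along a new axis to obtain the real-valued
# N x 2 x Nt x K x T tensor fed to the first convolutional neural network.
x = np.stack([h.real, h.imag], axis=1)
print(x.shape)  # (8, 2, 64, 4, 10)
```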
  • the first convolutional neural network may be a one-dimensional convolutional neural network. After the N×2×N t ×K×T matrix is input to the first convolutional neural network, the first convolutional neural network performs a convolution operation on it, which can be written as y d,i,j = Σ c Σ h W d,c,h · x c,i·s+h + b d , where y d,i,j is the (d,i,j)th element of the matrix y obtained after the convolution operation of the first convolutional neural network, W d,c,h is the (d,c,h)th element of the convolution kernel weight W of the first convolutional neural network, b d is the dth element of the convolution kernel bias b of the first convolutional neural network, x c,i·s+h is the (c,i·s+h)th element of the zero-padded input x of the first convolutional neural network, and s is the convolution stride of the first convolutional neural network (for example, the stride can be 1).
  • the above-mentioned "zero padding" can be specifically obtained by adding 0 to the periphery of the input x of the first convolutional neural network to obtain a matrix x', and the matrix x' should meet the following requirements: using the first convolution
  • the size of the matrix y obtained after the neural network performs the convolution operation on the x' is the same as the size of the input x of the first convolutional neural network.
  • each convolutional neural network in the subsequent content also needs to zero-pad its input, and all convolutional layers of all convolutional neural networks use the same zero-padding strategy.
  • the activation function of the convolutional layer in the above-mentioned first convolutional neural network is the LeakyReLU function, defined as f i (x) = x for x ≥ 0 and f i (x) = a·x for x < 0, where a is a small positive slope coefficient, x is the output after the convolution operation of the first convolutional neural network (that is, the matrix y obtained after the convolution operation of the first convolutional neural network), and f i (x) is the output of the first convolutional neural network.
  • that is, after receiving the input, the first convolutional neural network sequentially performs the convolution and activation operations to obtain the output f i (x) of the first convolutional neural network.
  • the dimension of the output of the above-mentioned first convolutional network is lower than the dimension of the input of the first convolutional network (that is, the channel estimation information h k,t ).
  • the above-mentioned first convolutional neural network can take advantage of the sparsity of the millimeter wave channel while also taking into account the time correlation of the channel, and compresses the channel estimation information h k,t at time t into a low-dimensional matrix, so that the matrix size of the output of the first convolutional neural network is smaller than the matrix size of the channel estimation information h k,t , thereby reducing the computational complexity.
  • the output of the first convolutional neural network can be used as the input of the first LSTM, the first LSTM processes the output of the first convolutional neural network to obtain the output of the first LSTM, and then the output of the first LSTM and the output of the first convolutional neural network are added to obtain the channel feature information q k,t .
  • the first convolutional neural network can not only extract channel space features, but also preprocess the data input to the first LSTM and greatly reduce the data dimension seen by the first LSTM network, thereby effectively reducing the computational complexity.
  • after the output of the first convolutional neural network (that is, a matrix with a size of N×M×T) is input to the first LSTM, the first LSTM needs to perform a first transposition on the N×M×T matrix to obtain a first transformed matrix with a size of N×T×M, process the first transformed matrix to obtain a processed matrix whose size is also N×T×M, and then perform a second transposition on the processed matrix to obtain an LSTM output matrix with a size of N×M×T. After that, the LSTM output matrix is added to the output matrix of the first convolutional neural network to obtain the channel feature information q k,t , where the size of the channel feature information q k,t is N×M×T (see the sketch below).
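  • The following PyTorch sketch mirrors the processing just described: a one-dimensional convolution compresses the reshaped channel, an LSTM processes the transposed result, and the two outputs are added (the residual-network-like connection) to give q k,t . The kernel size, channel counts, and flattening convention are assumptions for illustration only, not the patent's exact design.

```python
import torch
import torch.nn as nn

class FirstSubNetwork(nn.Module):
    """Sketch of the UE-side compression network: Conv1d -> LSTM with a residual add."""

    def __init__(self, in_channels, compressed_channels):
        super().__init__()
        # 1D convolution over the time dimension compresses 2*Nt*K channels to M channels.
        self.conv = nn.Conv1d(in_channels, compressed_channels, kernel_size=3, padding=1)
        self.act = nn.LeakyReLU(0.01)
        # LSTM extracts temporal features of the compressed representation.
        self.lstm = nn.LSTM(compressed_channels, compressed_channels, batch_first=True)

    def forward(self, h):
        # h: (N, 2, Nt, K, T) real tensor; flatten the antenna/user dimensions into channels.
        N, _, Nt, K, T = h.shape
        x = h.reshape(N, 2 * Nt * K, T)
        conv_out = self.act(self.conv(x))       # (N, M, T)
        lstm_in = conv_out.permute(0, 2, 1)     # first transposition: (N, T, M)
        lstm_out, _ = self.lstm(lstm_in)
        lstm_out = lstm_out.permute(0, 2, 1)    # second transposition: back to (N, M, T)
        return conv_out + lstm_out              # residual-style addition -> q_{k,t}

# Example with assumed sizes: Nt=64 antennas, K=4 UEs, T=10 time stamps, M=32.
net = FirstSubNetwork(in_channels=2 * 64 * 4, compressed_channels=32)
q = net(torch.randn(8, 2, 64, 4, 10))
print(q.shape)  # torch.Size([8, 32, 10])
```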
  • the above-mentioned first LSTM needs to use the hidden layer information s 1,k,t of the first LSTM and its own state information c 1,k,t when processing the matrix.
  • the values of s 1,k,t and c 1,k,t vary with the input of the first sub-network; specifically, the values of s 1,1,1 and c 1,1,1 used when the input to the first sub-network is the channel estimation information h 1,1 are not the same as the values of s 1,1,2 and c 1,1,2 used when the input to the first sub-network is the channel estimation information h 1,2 . It can thus be ensured that the values of s 1,k,t and c 1,k,t correspond to the input of the first sub-network, thereby ensuring the processing accuracy.
  • the first LSTM used in the embodiment of the present disclosure is a classic LSTM model in the prior art. Based on this, the specific processing of the first LSTM for the input For the method, reference may be made to the description of the prior art, and the embodiments of the present disclosure are not described in detail here.
  • the channel characteristic information q k,t is specifically the low-dimensional information obtained by processing the channel estimation information h k,t with the first sub-network deployed at the UE side.
  • Step 102a Determine precoding matrices F t and W t based on the compression rate parameter ⁇ and channel feature information q k, t .
  • the method for determining the precoding matrices F t and W t based on the compression rate parameter ⁇ and the channel characteristic information q k,t includes: determining the second sub-network (PreNet) corresponding to the compression rate parameter ⁇ , and inputting the channel feature information q k,t into the second sub-network to output the precoding matrices F t and W t .
  • the above-mentioned compression rate parameter ⁇ is the compression rate of the above-mentioned first sub-network, and when the compression rate parameter ⁇ is different (that is, when the compression rate of the first sub-network used to obtain the channel characteristic information q k,t is different ), the second sub-network to be used when obtaining the precoding matrices F t and W t based on the channel feature information q k,t will also be different.
  • for example, when the compression ratio parameter ⁇ is 1/16 (that is, the compression ratio of the first sub-network used to obtain the channel characteristic information q k,t is 1/16), the second sub-network to be used when obtaining the precoding matrices F t and W t based on the channel characteristic information q k,t should be a second sub-network with a compression ratio of 1/32.
  • FIG. 1c is a schematic structural diagram of a second subnetwork provided by an embodiment of the present disclosure.
  • the second sub-network is composed of two parallel modules, a first network module and a second network module.
  • the first network module includes a second LSTM, a first residual neural network and a computing module connected in sequence.
  • the structure of the second LSTM is the same as that of the above-mentioned first LSTM, and details are not described in this embodiment of the present disclosure.
  • the base station acquires the channel characteristic information q k,t from the UE, it can input the channel characteristic information q k,t into the second LSTM to obtain the output of the second LSTM, and input the output of the second LSTM into to the first residual neural network.
  • the matrix size of the output of the second LSTM is the same as the matrix size of the channel characteristic information q k,t ; both are N×M×T.
  • after the first residual neural network obtains the output of the second LSTM, it can process the output of the second LSTM to obtain the output of the first residual neural network (that is, the phase shifter angle θ t ).
  • the first residual neural network may include a first one-dimensional convolutional network and a first residual unit composed of n convolutional neural networks, where n is a positive integer (for example, it can be 4). And, the first one-dimensional convolutional network and the first residual unit are connected in a residual network-like manner.
  • the structure of the first one-dimensional convolutional network is the same as that of the above-mentioned first convolutional network, and will not be described in detail here in the embodiment of the present disclosure.
  • the first residual unit can perform a residual operation, where the residual operation can be written as y = x + F(x), in which y is the output of the first residual unit, x is the input of the first residual unit, and F(x) denotes the cascaded intermediate convolutional layers of the first residual unit, i.e. F(x) = f n (W n ·f n-1 (…f 1 (W 1 ·x + b 1 )…) + b n ), where W i and b i are the weights and biases of the intermediate-layer convolutional networks of the first residual unit and f i (·) is the activation function of the intermediate-layer convolutional networks, which may be the LeakyReLU function; for the introduction of the LeakyReLU function, reference may be made to the description in the above embodiments, which is not repeated here.
  • the size of the output matrix of the last convolutional network in the first residual unit should be the same as the size of the input matrix of the first residual unit, and the output matrix of the intermediate network of the first residual unit The size is adjustable.
  • the process by which the first residual neural network processes the output of the second LSTM to obtain the output of the first residual neural network (that is, the phase shifter angle θ t ) is mainly as follows: before the output of the second LSTM is input to the first one-dimensional convolutional network, it is expanded so that the matrix size grows from N×M×T to N×2×N t ×K×T; after the expanded matrix is obtained, the first one-dimensional convolutional network in the first residual neural network performs a convolution and activation operation on the expanded matrix to obtain the output of the first one-dimensional convolutional network, and this output is input to the first residual unit, so that the first residual unit performs a residual operation on the output of the first one-dimensional convolutional network to obtain the output of the first residual unit; then, the output of the first residual unit and the output of the first one-dimensional convolutional network are added to obtain the output of the first residual neural network, that is, the phase shifter angle θ t .
  • after the calculation module obtains the phase shifter angle θ t output by the first residual neural network, it can calculate the analog precoding matrix F t based on the phase shifter angle θ t . The specific calculation formula of the calculation module is not reproduced here; a commonly used form is given below.
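  • The mapping below is a standard constant-modulus construction of an analog precoder from phase shifter angles. It is stated only as an assumed illustration of what such a calculation module typically computes; the patent's exact formula is not reproduced in this text.

```latex
\left[\mathbf{F}_t\right]_{m,n} \;=\; \frac{1}{\sqrt{N_t}}\, e^{\,j\,\theta_{t,m,n}}
```

  Here θ t,m,n denotes the (m,n)th phase shifter angle at time t and N t is the number of base station antennas.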
  • that is, the analog precoding matrix F t can be obtained after the channel feature information q k,t is processed by the above-mentioned first network module.
  • the second network module includes a third LSTM, a second residual neural network and a normalization processing module connected in sequence, and the input end of the normalization processing module is also connected to the output end of the calculation module of the first network module.
  • the structure of the third LSTM is the same as that of the above-mentioned second LSTM, and details will not be described here in this embodiment of the present disclosure.
  • the base station acquires the channel characteristic information q k,t from the UE, it can input the channel characteristic information q k,t to the third LSTM to obtain the output of the third LSTM, and input the output of the third LSTM to to the second residual neural network.
  • the matrix size of the output of the third LSTM is the same as the matrix size of the channel characteristic information q k,t ; both are N×M×T.
  • after the second residual neural network obtains the output of the third LSTM, it can process the output of the third LSTM to obtain the output of the second residual neural network (that is, the intermediate digital precoding matrix before normalization).
  • the second residual neural network may include a second one-dimensional convolutional network and a second residual unit composed of n convolutional neural networks, where n is a positive integer (for example, it can be 4). And, the second one-dimensional convolutional network and the second residual unit are connected in a residual network-like manner.
  • the structure of the second one-dimensional convolutional network is the same as that of the above-mentioned first convolutional network, and will not be described in detail here in the embodiment of the present disclosure.
  • the operation of the second residual unit is the same as that of the above-mentioned first residual unit, which will not be repeated in this embodiment of the present disclosure.
  • the size of the output matrix of the last convolutional network in the second residual unit should be the same as the size of the input matrix of the second residual unit, and the output matrix of the intermediate network of the second residual unit The size is adjustable.
  • the process by which the second residual neural network processes the output of the third LSTM to obtain the output of the second residual neural network (that is, the intermediate digital precoding matrix) is mainly as follows: before the output of the third LSTM is input to the second one-dimensional convolutional network, it is expanded so that the matrix size grows from N×M×T to N×2×K×K×T; after the expanded matrix is obtained, the second one-dimensional convolutional network in the second residual neural network performs a convolution and activation operation on the expanded matrix to obtain the output of the second one-dimensional convolutional network, and this output is input to the second residual unit, so that the second residual unit performs a residual operation on the output of the second one-dimensional convolutional network to obtain the output of the second residual unit; then, the output of the second residual unit and the output of the second one-dimensional convolutional network are added to obtain the output of the second residual neural network (that is, the intermediate digital precoding matrix), which is then input to the normalization processing module.
  • the input end of the normalization processing module is connected to the output end of the calculation module, and after the normalization processing module obtains the output of the second residual neural network, it obtains the digital precoding matrix W t based on that output and the analog precoding matrix F t .
  • the specific formula by which the normalization processing module calculates the digital precoding matrix W t from the output of the second residual neural network and the analog precoding matrix F t is not reproduced here; a commonly used form is given below.
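  • The power normalization below is a typical rule used in hybrid precoding and is given only as an assumed illustration, not as the patent's exact formula: the intermediate digital precoder (denoted here as W̄ t ) is scaled so that the overall hybrid precoder satisfies the transmit power constraint.

```latex
\mathbf{W}_t \;=\; \sqrt{K}\,\frac{\bar{\mathbf{W}}_t}{\left\|\mathbf{F}_t \bar{\mathbf{W}}_t\right\|_F}
```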
  • that is, the digital precoding matrix W t can be obtained after the channel characteristic information q k,t is processed by the above-mentioned second network module.
  • the structures of the first network module and the second network module in the above-mentioned second sub-network are similar but the corresponding matrix sizes are different; the first network module and the second network module are used to calculate the analog precoding matrix and the digital precoding matrix respectively, realizing an implicit calculation of the hybrid precoding matrix.
  • the channel reconstruction and precoding matrix calculation processes are integrated in the second sub-network, which keeps the network lightweight (a sketch of both modules is given below).
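  • The following PyTorch sketch illustrates the two parallel modules described above: an LSTM followed by a residual convolutional block producing phase shifter angles that are mapped to a constant-modulus analog precoder F t , and a parallel LSTM plus residual block producing an intermediate digital precoder that is power-normalized with F t to give W t . All layer sizes, the constant-modulus mapping, and the normalization rule are assumptions for illustration only, not the patent's exact design.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """1D convolution followed by n stacked convolutions, joined by a residual-style add."""
    def __init__(self, channels, n=2):
        super().__init__()
        self.head = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.body = nn.Sequential(*[nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size=3, padding=1),
            nn.LeakyReLU(0.01)) for _ in range(n)])

    def forward(self, x):
        head = self.head(x)
        return head + self.body(head)

class SecondSubNetwork(nn.Module):
    """Sketch of the base-station-side network producing F_t and W_t from q_{k,t}."""
    def __init__(self, feat_dim, Nt, K):
        super().__init__()
        self.Nt, self.K = Nt, K
        # First network module: LSTM -> residual block -> phase shifter angles -> F_t.
        self.lstm_f = nn.LSTM(feat_dim, feat_dim, batch_first=True)
        self.expand_f = nn.Conv1d(feat_dim, Nt * K, kernel_size=1)
        self.res_f = ResidualBlock(Nt * K)
        # Second network module: LSTM -> residual block -> intermediate digital precoder.
        self.lstm_w = nn.LSTM(feat_dim, feat_dim, batch_first=True)
        self.expand_w = nn.Conv1d(feat_dim, 2 * K * K, kernel_size=1)
        self.res_w = ResidualBlock(2 * K * K)

    def forward(self, q):
        # q: (N, M, T) channel feature information from the first sub-network.
        N, M, T = q.shape
        s = q.permute(0, 2, 1)                                   # (N, T, M) for the LSTMs
        f_feat = self.lstm_f(s)[0].permute(0, 2, 1)              # (N, M, T)
        w_feat = self.lstm_w(s)[0].permute(0, 2, 1)              # (N, M, T)
        # Phase shifter angles and constant-modulus analog precoder F_t (assumed mapping).
        theta = self.res_f(self.expand_f(f_feat)).reshape(N, self.Nt, self.K, T)
        F = torch.polar(torch.full_like(theta, self.Nt ** -0.5), theta)
        # Intermediate digital precoder, then power normalization using F_t (assumed rule).
        w_raw = self.res_w(self.expand_w(w_feat)).reshape(N, 2, self.K, self.K, T)
        W_bar = torch.complex(w_raw[:, 0], w_raw[:, 1])
        cols = []
        for t in range(T):
            fw = torch.matmul(F[..., t], W_bar[..., t])          # (N, Nt, K)
            scale = (self.K ** 0.5) / fw.flatten(1).norm(dim=1).clamp_min(1e-12)
            cols.append(W_bar[..., t] * scale.view(N, 1, 1))
        return F, torch.stack(cols, dim=-1)

# Example with assumed sizes: M=32 features, Nt=64 antennas, K=4 UEs, T=10 time stamps.
net = SecondSubNetwork(feat_dim=32, Nt=64, K=4)
F_t, W_t = net(torch.randn(8, 32, 10))
print(F_t.shape, W_t.shape)  # (8, 64, 4, 10) (8, 4, 4, 10)
```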
  • the base station can obtain the channel characteristic information q k,t obtained by the UE through the first sub-network, together with the compression rate parameter ⁇ , where k is used to indicate the kth UE and t is used to indicate the time stamp, and determines the precoding matrices F t and W t through the second sub-network based on the compression rate parameter ⁇ and the channel characteristic information q k,t .
  • the channel characteristic information q k,t is obtained after the first sub-network at the UE side processes the channel estimation information h k,t ; the dimension of the channel characteristic information q k,t is lower than the dimension of the channel estimation information h k,t .
  • that is, the UE first compresses the obtained channel state information into low-dimensional channel characteristic information through the first sub-network and feeds it back to the base station together with the compression rate parameter, and the base station then determines the precoding matrices through the second sub-network based on the channel characteristic information q k,t and the compression rate parameter.
  • since the information fed back by the UE is low-dimensional, the transmission complexity is low and the transmission efficiency is high, thus providing a simple and efficient method for determining the precoding matrix under low-overhead limited feedback.
  • the training process of the first sub-network and the second sub-network will be introduced in detail below.
  • Fig. 2a is a schematic flowchart of a method for determining a precoding matrix provided by another embodiment of the present disclosure. The method is executed by a base station. As shown in Fig. 2a, the method for determining a precoding matrix may include the following steps:
  • Step 201a deploying at least one simulated subnetwork at the base station based on the structure of the first subnetwork, wherein the simulated subnetwork has the same structure as the first subnetwork, and different simulated subnetworks have different compression ratios.
  • Step 202a connect the simulated sub-networks and the second sub-networks with corresponding compression rates to obtain at least one precoding matrix determination network.
  • the base station may include at least one second subnetwork, where different second subnetworks have different compression ratios.
  • a precoding matrix determination network can be obtained by connecting a simulated sub-network and a second sub-network, and is used to determine the hybrid precoding.
  • simulated sub-networks with different compression rates correspond to second sub-networks with different compression rates. For example, the simulated sub-network with a compression rate of 1/16 can be connected with the second sub-network with a compression rate of 1/32 to obtain a precoding matrix determination network.
  • the precoding performance of different precoding matrix determination networks will also be different.
  • FIG. 2b is a schematic structural diagram of a network for determining a precoding matrix provided in the present application.
  • Step 203a acquiring channel estimation information h k,t from each UE.
  • the channel estimation information h k,t may be, for example, a CSI (Channel State Information, channel state information) matrix.
  • for example, for a base station with N t = 256 antennas and N c = 1024 subcarriers in a 2.4 GHz outdoor picocell scenario, 5000 sets of channel estimation information h k,t from one UE can be obtained, and these 5000 sets of channel estimation information are used to build the sample data set.
  • Step 204a determine the sample data set H based on the channel estimation information h k,t of each UE.
  • where K is the total number of UEs, k = 1, 2, 3, ..., K is used to indicate the kth UE, N t is the number of BS antennas, T is the total number of time stamps, and t is the time stamp.
  • the 5000-sample data set H can be divided into a training set of 3500 samples (70%), a verification set of 1000 samples (20%), and a test set of 500 samples (10%), and the training set, verification set, and test set are used as the training data of the precoding matrix determination network (see the split sketch below).
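  • A minimal sketch of the 70%/20%/10% split described above, assuming the 5000 channel samples are stored along the first axis of an array; only the split sizes come from the text, everything else is illustrative.

```python
import numpy as np

# H: sample data set with 5000 channel realizations along the first axis (illustrative shape).
H = np.random.randn(5000, 2, 64, 4, 10).astype(np.float32)

rng = np.random.default_rng(0)
idx = rng.permutation(len(H))
train_idx, val_idx, test_idx = idx[:3500], idx[3500:4500], idx[4500:]

H_train, H_val, H_test = H[train_idx], H[val_idx], H[test_idx]   # 70% / 20% / 10%
print(len(H_train), len(H_val), len(H_test))                     # 3500 1000 500
```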
  • Step 205a train the precoding matrix determination network based on the sample data set H.
  • the optimization goal is to maximize the user sum rate at all time instants, and the precoding matrix determination network is adjusted accordingly.
  • the precoding matrix determination network may be trained based on the sample data set H by using the methods shown in FIGS. 1 to 5 above, and the loss function Loss may be calculated.
  • the loss function Loss can be defined as Loss = −Rate.
  • the above-mentioned Rate is the sum rate; it can be seen from the above formula that the loss function Loss is the negative (opposite number) of the sum rate.
  • the precoding matrix determination network is trained in an end-to-end manner until the loss function converges, at which point the training of the precoding matrix determination network is completed (a minimal training sketch is given below).
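  • The sketch below illustrates the end-to-end training described above, with the loss taken as the negative sum rate. It reuses the FirstSubNetwork and SecondSubNetwork sketches given earlier; the SINR-based rate expression, the optimizer, and all sizes are assumptions for illustration only.

```python
import torch

def sum_rate(H, F, W, noise_power=1.0):
    """Sum over time stamps and users of log2(1 + SINR); an assumed rate expression."""
    N, Nt, K, T = H.shape
    rate = 0.0
    for t in range(T):
        # Effective channel after hybrid precoding: H^H F W, shape (N, K, K).
        Heff = H[..., t].conj().transpose(1, 2) @ (F[..., t] @ W[..., t])
        sig = Heff.diagonal(dim1=1, dim2=2).abs() ** 2
        interf = (Heff.abs() ** 2).sum(dim=2) - sig
        rate = rate + torch.log2(1.0 + sig / (interf + noise_power)).sum(dim=1).mean()
    return rate

# End-to-end training: Loss = -Rate, optimized until the loss converges (sketch).
ue_net = FirstSubNetwork(in_channels=2 * 64 * 4, compressed_channels=32)
bs_net = SecondSubNetwork(feat_dim=32, Nt=64, K=4)
opt = torch.optim.Adam(list(ue_net.parameters()) + list(bs_net.parameters()), lr=1e-3)

for epoch in range(10):
    for _ in range(4):                                 # stand-in batches; real training iterates over H
        h_batch = torch.randn(8, 2, 64, 4, 10)         # (N, 2, Nt, K, T) channel samples
        H_c = torch.complex(h_batch[:, 0], h_batch[:, 1])  # complex (N, Nt, K, T) channel
        q = ue_net(h_batch)                            # UE-side compression
        F, W = bs_net(q)                               # base-station-side precoders
        loss = -sum_rate(H_c, F, W)                    # Loss = -Rate
        opt.zero_grad(); loss.backward(); opt.step()
```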
  • Step 206a determining network parameters corresponding to each simulated sub-network after training.
  • Step 207a sending the network parameters corresponding to each simulated subnetwork to the UE.
  • the UE can deploy multiple first sub-networks with different compression ratios according to the received network parameters. When determining the hybrid precoding matrix, the UE can first obtain the channel estimation information h k,t , use a deployed first sub-network to process the channel estimation information h k,t to obtain the low-dimensional channel feature information q k,t , and send the channel feature information q k,t and the compression rate parameter ⁇ of that first sub-network to the base station, so that the base station can select the second sub-network corresponding to the compression rate ⁇ to process the received channel feature information q k,t and obtain the hybrid precoding matrices F t and W t .
  • the base station can obtain the channel characteristic information q k,t obtained by the UE through the first sub-network, together with the compression rate parameter ⁇ , where k is used to indicate the kth UE and t is used to indicate the time stamp, and determines the precoding matrices F t and W t through the second sub-network based on the compression rate parameter ⁇ and the channel characteristic information q k,t .
  • the channel characteristic information q k,t is obtained after the first sub-network at the UE side processes the channel estimation information h k,t ; the dimension of the channel characteristic information q k,t is lower than the dimension of the channel estimation information h k,t .
  • that is, the UE first compresses the obtained channel state information into low-dimensional channel characteristic information through the first sub-network and feeds it back to the base station together with the compression rate parameter, and the base station then determines the precoding matrices through the second sub-network based on the channel characteristic information q k,t and the compression rate parameter.
  • since the information fed back by the UE is low-dimensional, the transmission complexity is low and the transmission efficiency is high, thus providing a simple and efficient method for determining the precoding matrix under low-overhead limited feedback.
  • FIG. 3 is a schematic flowchart of a method for determining a precoding matrix provided by another embodiment of the present disclosure. The method is executed by a base station. As shown in FIG. 3 , the method for determining a precoding matrix may include the following steps:
  • Step 301 Deploy at least one simulated subnet at the base station based on the structure of the first subnet, where the simulated subnet has the same structure as the first subnet, and different simulated subnets have different compression ratios.
  • Step 302 connect the simulation sub-network with the corresponding compression rate and the second sub-network to obtain at least one precoding matrix determination network.
  • Step 303 Obtain channel estimation information h k,t from each UE.
  • Step 304 Determine the sample data set H based on the channel estimation information h k,t of each UE.
  • Step 305 train the network for determining the precoding matrix based on the sample data set H.
  • Step 306. Determine network parameters corresponding to each simulated sub-network after training.
  • Step 307 sending the network parameters corresponding to each simulated subnet to the UE.
  • Step 308 acquire channel feature information q k,t and compression ratio parameter ⁇ from UE.
  • Step 309 Determine precoding matrices F t and W t based on the compression rate parameter ⁇ and channel characteristic information q k , t .
  • the base station can obtain the channel characteristic information q k,t obtained by the UE through the first sub-network, together with the compression rate parameter ⁇ , where k is used to indicate the kth UE and t is used to indicate the time stamp, and determines the precoding matrices F t and W t through the second sub-network based on the compression rate parameter ⁇ and the channel characteristic information q k,t .
  • the channel characteristic information q k,t is obtained after the first sub-network at the UE side processes the channel estimation information h k,t ; the dimension of the channel characteristic information q k,t is lower than the dimension of the channel estimation information h k,t .
  • that is, the UE first compresses the obtained channel state information into low-dimensional channel characteristic information through the first sub-network and feeds it back to the base station together with the compression rate parameter, and the base station then determines the precoding matrices through the second sub-network based on the channel characteristic information q k,t and the compression rate parameter.
  • since the information fed back by the UE is low-dimensional, the transmission complexity is low and the transmission efficiency is high, thus providing a simple and efficient method for determining the precoding matrix under low-overhead limited feedback.
  • FIG. 4 is a schematic flowchart of a method for determining a precoding matrix provided by another embodiment of the present disclosure. The method is executed by a UE. As shown in FIG. 4 , the method for determining a precoding matrix may include the following steps:
  • Step 401 Determine channel estimation information h k,t .
  • k is used to indicate the kth UE
  • t is used to indicate a time stamp
  • the kth UE may obtain the downlink channel estimation information h k,t at the tth moment through a typical channel estimation method.
  • Step 402. Determine channel characteristic information q k,t based on channel estimation information h k,t , wherein the dimension of channel characteristic information q k,t is lower than that of channel estimation information h k,t .
  • the method for determining the channel characteristic information q k,t based on the channel estimation information h k,t may include inputting the channel estimation information h k,t into the first sub-network to output the channel characteristic information q k,t , wherein the first sub-network includes the first convolutional neural network and the first LSTM, the first convolutional neural network and the first LSTM are connected in a residual network-like manner, and the compression rate parameter ⁇ is the compression ratio of the first sub-network.
  • Step 403 sending channel feature information q k,t and compression rate parameter ⁇ to the base station.
  • the base station receives the channel characteristic information q k,t and the compression rate parameter ⁇ sent by the UE, and then selects the corresponding second sub-network according to the compression rate parameter ⁇ to process the received channel feature information q k,t and obtain the precoding matrices.
  • the UE can flexibly select the first subnetwork used for precoding according to the precoding performance.
  • for example, the UE may first select a first sub-network with a higher compression rate to determine the channel feature information q k,t , and send the determined channel feature information q k,t together with the compression rate parameter ⁇ of that first sub-network to the base station, so that the base station selects the second sub-network corresponding to the higher compression rate for precoding; when the UE determines that the precoding performance is good this time, the UE may again select a first sub-network with a higher compression rate for the next precoding; when the UE determines that the precoding performance is poor this time, the UE may instead select a first sub-network with a lower compression rate (a sketch of this selection logic is given below).
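  • A minimal sketch of this selection logic: stay at (or move to) a higher compression rate while the precoding performance is good, and fall back to a lower compression rate (more feedback) when it degrades. The set of available rates and the one-step adjustment policy are illustrative assumptions, not values from the text.

```python
def select_compression_rate(current_rate, performance_ok, rates=(1/64, 1/32, 1/16, 1/8)):
    """Pick the first sub-network compression rate for the next feedback round.

    rates are ordered from most compressed (least feedback) to least compressed;
    the specific values and the one-step policy are illustrative assumptions.
    """
    i = rates.index(current_rate)
    if performance_ok:
        return rates[max(i - 1, 0)]               # good performance: compress at least as much
    return rates[min(i + 1, len(rates) - 1)]      # poor performance: feed back more detail

# Example: precoding performance dropped, so the UE moves from 1/32 to 1/16 compression.
print(select_compression_rate(1 / 32, performance_ok=False))  # 0.0625
```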
  • the base station can obtain the channel characteristic information q k,t obtained by the UE through the first sub-network, together with the compression rate parameter ⁇ , where k is used to indicate the kth UE and t is used to indicate the time stamp, and determines the precoding matrices F t and W t through the second sub-network based on the compression rate parameter ⁇ and the channel characteristic information q k,t .
  • the channel characteristic information q k,t is obtained after the first sub-network at the UE side processes the channel estimation information h k,t ; the dimension of the channel characteristic information q k,t is lower than the dimension of the channel estimation information h k,t .
  • that is, the UE first compresses the obtained channel state information into low-dimensional channel characteristic information through the first sub-network and feeds it back to the base station together with the compression rate parameter, and the base station then determines the precoding matrices through the second sub-network based on the channel characteristic information q k,t and the compression rate parameter.
  • since the information fed back by the UE is low-dimensional, the transmission complexity is low and the transmission efficiency is high, thus providing a simple and efficient method for determining the precoding matrix under low-overhead limited feedback.
  • FIG. 5 is a schematic flowchart of a method for determining a precoding matrix provided by another embodiment of the present disclosure. The method is executed by a UE. As shown in FIG. 5 , the method for determining a precoding matrix may include the following steps:
  • Step 501 receiving network parameters sent by the base station.
  • the network parameters may include network parameters of the first convolutional neural network and network parameters of the first LSTM network corresponding to multiple different compression rates.
  • Step 502 Deploy the first subnet based on network parameters.
  • Step 503. Determine channel estimation information h k,t , where k is used to indicate the kth UE, and t is used to indicate a time stamp.
  • Step 504. Determine channel characteristic information q k,t based on channel estimation information h k,t , wherein the dimension of channel characteristic information q k,t is lower than that of channel estimation information h k,t .
  • Step 505 sending channel feature information q k,t and compression rate parameter ⁇ to the base station.
  • the base station can obtain the channel characteristic information q k,t obtained by the UE through the first sub-network, together with the compression rate parameter ⁇ , where k is used to indicate the kth UE and t is used to indicate the time stamp, and determines the precoding matrices F t and W t through the second sub-network based on the compression rate parameter ⁇ and the channel characteristic information q k,t .
  • the channel characteristic information q k,t is obtained after the first sub-network at the UE side processes the channel estimation information h k,t ; the dimension of the channel characteristic information q k,t is lower than the dimension of the channel estimation information h k,t .
  • that is, the UE first compresses the obtained channel state information into low-dimensional channel characteristic information through the first sub-network and feeds it back to the base station together with the compression rate parameter, and the base station then determines the precoding matrices through the second sub-network based on the channel characteristic information q k,t and the compression rate parameter.
  • since the information fed back by the UE is low-dimensional, the transmission complexity is low and the transmission efficiency is high, thus providing a simple and efficient method for determining the precoding matrix under low-overhead limited feedback.
  • the precoding matrix determination network in the above precoding matrix determination method is composed of the first sub-network for channel compression and reconstruction and the second sub-network for determining the precoding matrix; starting from the two-dimensional (time and space) correlation of the channel, it takes the maximization of the user sum rate at all time instants as the optimization goal and jointly optimizes channel compression and the precoding matrix, thereby realizing efficient determination of the hybrid precoding matrix under limited feedback from the UE.
  • the first sub-network is composed of the first convolutional neural network and the first LSTM, which extracts the spatial and temporal features of the channel, and through the residual cascade design of the first convolutional neural network and the first LSTM, the The obtained channel state information is compressed into low-dimensional channel feature information, thereby reducing the complexity of the network.
  • the second sub-network is composed of two parallel modules, a first network module and a second network module, each composed of an LSTM and a residual neural network; by integrating the channel reconstruction and precoding matrix calculation processes, it keeps the network lightweight.
  • the precoding matrix determination network is trained on the base station side, and after the training is completed, the network parameters corresponding to each trained simulated sub-network are distributed to the corresponding UE; the UE can therefore deploy, according to the received network parameters, multiple first sub-networks with different compression ratios, use a deployed first sub-network to process channel state information into low-dimensional channel feature information, and send the channel feature information together with the compression ratio parameter of the first sub-network to the base station, which then determines the precoding matrix, ensuring successful determination of the precoding matrix.
  • the first sub-network and the second sub-network can be flexibly adjusted, selecting networks corresponding to different compression rates according to the performance of the current precoding, thereby achieving a balance between performance and limited feedback.
  • FIG. 6 is a schematic structural diagram of an apparatus for determining a precoding matrix provided by an embodiment of the present disclosure. As shown in FIG. 6 , the apparatus 600 may include:
  • an acquisition module 601 configured to acquire channel characteristic information q k,t and a compression rate parameter γ from a UE, where k is used to indicate the k-th UE, and t is used to indicate a time stamp;
  • a determining module 602 configured to determine precoding matrices F t and W t based on the compression rate parameter ⁇ and channel feature information q k , t .
  • the base station can obtain the channel characteristic information q k,t obtained by the UE through the first sub-network, together with the compression rate parameter γ, where k is used to indicate the k-th UE and t is used to indicate the time stamp, and determine the precoding matrices F t, W t through the second sub-network based on the compression rate parameter γ and the channel characteristic information q k,t.
  • the channel characteristic information q k,t is obtained after the first sub-network at the UE end processes the channel estimation information h k,t ; the dimension of the channel characteristic information q k,t is lower than the dimension of the channel estimation information h k,t .
  • the UE first compresses the obtained channel state information into low-dimensional channel characteristic information through the first sub-network and feeds it back, together with the compression ratio parameter, to the base station; the base station can then use the second sub-network to determine the precoding matrix based on the channel characteristic information q k,t and the compression ratio parameter.
  • since only low-dimensional information is transmitted between the UE and the base station, the transmission complexity is low and the transmission efficiency is high, thus providing a simple and efficient method for determining the precoding matrix under low-overhead limited feedback.
  • the channel characteristic information q k,t is obtained after the first sub-network at the UE side processes the channel estimation information h k,t; the dimension of the channel characteristic information q k,t is lower than the dimension of the channel estimation information h k,t; the compression rate parameter γ is the compression rate of the first sub-network; and the first sub-network includes a first convolutional neural network and a first long short-term memory network LSTM, where the first convolutional neural network and the first LSTM are connected in a residual-network-like manner.
  • the determining module 602 is also used for:
  • the channel feature information q k,t is input into the second sub-network to output precoding matrices F t , W t .
  • the second sub-network is composed of a first network module and a second network module arranged in parallel;
  • the first network module includes a second LSTM, a first residual neural network and a calculation module connected in sequence; the first residual neural network includes a first one-dimensional convolutional network and a first residual unit composed of n convolutional neural networks, where n is a positive integer, and the first one-dimensional convolutional network and the first residual unit are connected in a residual-network-like manner; the channel feature information q k,t is processed by the second LSTM and the first residual neural network to obtain the phase shifter angles Θ t, and the calculation module is used to calculate the precoding matrix F t based on the phase shifter angles Θ t and output it;
  • the second network module includes a third LSTM, a second residual neural network and a normalization processing module connected in sequence, where the input end of the normalization module is also connected with the output end of the calculation module; the second residual neural network includes a second one-dimensional convolutional network and a second residual unit composed of n convolutional neural networks, and the second one-dimensional convolutional network and the second residual unit are connected in a residual-network-like manner; the channel feature information q k,t is processed by the third LSTM, the second residual neural network and the normalization processing module to obtain the precoding matrix W t, which is output; and the matrix size corresponding to the first residual neural network is different from the matrix size corresponding to the second residual neural network.
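The two outputs can be made concrete with the small sketch below (assumptions: PyTorch, a constant-modulus analog precoder F_t = e^{jΘ_t}/√Nt for the calculation module, and Frobenius-norm power normalization for the normalization module; the exact scaling used in the patent's equations is not reproduced here).

```python
import torch

def analog_precoder(theta):
    """theta: (Nt, Nrf) real phase-shifter angles -> constant-modulus analog precoder F_t."""
    nt = theta.shape[0]
    return torch.polar(torch.ones_like(theta), theta) / nt ** 0.5

def normalize_digital(F, W_raw, total_power=1.0):
    """Scale the raw digital precoder so that ||F @ W||_F^2 equals total_power."""
    scale = total_power ** 0.5 / torch.linalg.norm(F @ W_raw)   # Frobenius norm
    return scale * W_raw

theta = torch.rand(256, 4) * 2 * torch.pi                       # illustrative: Nt=256, Nrf=4
F_t = analog_precoder(theta)
W_t = normalize_digital(F_t, torch.randn(4, 1, dtype=torch.cfloat))
```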
  • the base station includes at least one second subnetwork, where compression ratios of different second subnetworks are different.
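A hedged sketch of how the base station might keep one second sub-network per supported compression ratio and select it by the γ reported by the UE; the prenet_* callables are placeholders standing in for trained second sub-networks, not networks defined in the patent.

```python
def prenet_1_16(q): ...          # placeholder second sub-network trained for gamma = 1/16
def prenet_1_32(q): ...          # placeholder second sub-network trained for gamma = 1/32

SECOND_SUBNETWORKS = {1 / 16: prenet_1_16, 1 / 32: prenet_1_32}

def determine_precoders(q, gamma):
    """Pick the decoder matching the UE-reported compression ratio and run it."""
    return SECOND_SUBNETWORKS[gamma](q)        # expected to return (F_t, W_t)
```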
  • the above-mentioned device is also used for:
  • deploying at least one simulated sub-network at the base station side based on the structure of the first sub-network, where the simulated sub-network has the same structure as the first sub-network and different simulated sub-networks have different compression ratios; connecting the simulated sub-network and the second sub-network having corresponding compression ratios to obtain at least one precoding matrix determination network; acquiring channel estimation information h k,t from each UE; determining a sample data set H based on the channel estimation information h k,t of each UE; and training the precoding matrix determination network based on the sample data set H.
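A hedged end-to-end training sketch for the base-station side; the Adam optimizer, learning rate and epoch count are illustrative assumptions (the patent only states that training proceeds until the negative-sum-rate loss converges), and neg_sum_rate refers to the loss sketched earlier.

```python
import torch

def train(precoding_net, loader, epochs=100, lr=1e-3):
    """precoding_net: simulated first sub-network + second sub-network, mapping complex
    CSI H of shape (T, K, Nt) to (F, W); neg_sum_rate is the loss sketched earlier."""
    opt = torch.optim.Adam(precoding_net.parameters(), lr=lr)
    for _ in range(epochs):
        for H in loader:
            F, W = precoding_net(H)
            loss = neg_sum_rate(H, F, W)   # maximize the sum rate over all users and times
            opt.zero_grad()
            loss.backward()
            opt.step()
```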
  • the above-mentioned apparatus is also used for: determining the network parameters corresponding to the simulated sub-network after training is completed, and sending the network parameters to the UE.
  • FIG. 7 is a schematic structural diagram of an apparatus for determining a precoding matrix provided by another embodiment of the present disclosure. As shown in FIG. 7 , the apparatus 700 may include:
  • a determining module 701 configured to determine channel estimation information h k,t , where k is used to indicate the kth UE, and t is used to indicate the time stamp corresponding to the channel estimation information h k,t ;
  • the above determining module 701 is further configured to determine channel characteristic information q k,t based on channel estimation information h k,t , wherein the dimension of channel characteristic information q k,t is lower than the dimension of channel estimation information h k,t ;
  • the sending module 702 is configured to send the channel feature information q k,t and the compression rate parameter ⁇ to the base station.
  • the base station can obtain the channel characteristic information q k,t obtained by the UE through the first sub-network, together with the compression rate parameter γ, where k is used to indicate the k-th UE and t is used to indicate the time stamp, and determine the precoding matrices F t, W t through the second sub-network based on the compression rate parameter γ and the channel characteristic information q k,t.
  • the channel characteristic information q k,t is obtained after the first sub-network at the UE end processes the channel estimation information h k,t ; the dimension of the channel characteristic information q k,t is lower than the dimension of the channel estimation information h k,t .
  • the UE first compresses the obtained channel state information into low-dimensional channel characteristic information through the first sub-network and feeds it back, together with the compression ratio parameter, to the base station; the base station can then use the second sub-network to determine the precoding matrix based on the channel characteristic information q k,t and the compression ratio parameter.
  • since only low-dimensional information is transmitted between the UE and the base station, the transmission complexity is low and the transmission efficiency is high, thus providing a simple and efficient method for determining the precoding matrix under low-overhead limited feedback.
  • the above determination module 701 is also used to:
  • input the channel estimation information h k,t into the first sub-network to output the channel characteristic information q k,t; where the first sub-network includes the first convolutional neural network and the first LSTM, the first convolutional neural network and the first LSTM are connected in a residual-network-like manner, and the compression rate parameter γ is the compression ratio of the first sub-network.
  • the above-mentioned device is also used for:
  • receiving the network parameters sent by the base station, and deploying the first sub-network based on the network parameters.
  • Fig. 8 is a block diagram of a user equipment UE800 provided by an embodiment of the present disclosure.
  • the UE 800 may be a mobile phone, a computer, a digital broadcast terminal device, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, and the like.
  • UE800 may include at least one of the following components: a processing component 802, a memory 804, a power supply component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 813, and a communication component 816.
  • Processing component 802 generally controls the overall operations of UE 800, such as those associated with display, phone calls, data communications, camera operations, and recording operations.
  • the processing component 802 may include at least one processor 820 to execute instructions, so as to complete all or part of the steps of the above method.
  • processing component 802 can include at least one module that facilitates interaction between processing component 802 and other components.
  • processing component 802 may include a multimedia module to facilitate interaction between multimedia component 808 and processing component 802 .
  • the memory 804 is configured to store various types of data to support operations at the UE 800 . Examples of such data include instructions for any application or method operating on UE800, contact data, phonebook data, messages, pictures, videos, etc.
  • the memory 804 can be implemented by any type of volatile or non-volatile storage device or their combination, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable Programmable Read Only Memory (EPROM), Programmable Read Only Memory (PROM), Read Only Memory (ROM), Magnetic Memory, Flash Memory, Magnetic or Optical Disk.
  • the power supply component 806 provides power to various components of the UE 800 .
  • Power components 806 may include a power management system, at least one power supply, and other components associated with generating, managing, and distributing power for UE 800 .
  • the multimedia component 808 includes a screen providing an output interface between the UE 800 and the user.
  • the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user.
  • the touch panel includes at least one touch sensor to sense touches, slides, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure related to the touch or slide operation.
  • the multimedia component 808 includes a front camera and/or a rear camera. When the UE800 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front camera and rear camera can be a fixed optical lens system or have focal length and optical zoom capability.
  • the audio component 810 is configured to output and/or input audio signals.
  • the audio component 810 includes a microphone (MIC), which is configured to receive an external audio signal when the UE 800 is in an operation mode, such as a call mode, a recording mode, and a voice recognition mode. Received audio signals may be further stored in memory 804 or sent via communication component 816 .
  • the audio component 810 also includes a speaker for outputting audio signals.
  • the I/O interface 812 provides an interface between the processing component 802 and a peripheral interface module, which may be a keyboard, a click wheel, a button, and the like. These buttons may include, but are not limited to: a home button, volume buttons, start button, and lock button.
  • the sensor component 813 includes at least one sensor for providing various aspects of state assessment for the UE 800 .
  • the sensor component 813 can detect the open/closed state of the device 800 and the relative positioning of components (for example, the display and the keypad of the UE800); the sensor component 813 can also detect a position change of the UE800 or of a component of the UE800, the presence or absence of user contact with the UE800, the orientation or acceleration/deceleration of the UE800, and temperature changes of the UE800.
  • the sensor assembly 813 may include a proximity sensor configured to detect the presence of nearby objects in the absence of any physical contact.
  • the sensor assembly 813 may also include an optical sensor, such as a CMOS or CCD image sensor, for use in imaging applications.
  • the sensor component 813 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor or a temperature sensor.
  • Communication component 816 is configured to facilitate wired or wireless communications between UE 800 and other devices.
  • UE800 can access wireless networks based on communication standards, such as WiFi, 2G or 3G, or a combination thereof.
  • the communication component 816 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel.
  • the communication component 816 also includes a near field communication (NFC) module to facilitate short-range communication.
  • the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
  • UE800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above method.
  • FIG. 9 is a block diagram of a network-side device 900 provided by an embodiment of the present application.
  • the network side device 900 may be provided as a network side device.
  • the network side device 900 includes a processing component 911, which further includes at least one processor, and a memory resource represented by a memory 932 for storing instructions executable by the processing component 911, such as application programs.
  • the application program stored in memory 932 may include one or more modules each corresponding to a set of instructions.
  • the processing component 911 is configured to execute instructions, so as to execute any of the aforementioned methods applied to the network side device, for example, the method shown in FIG. 1.
  • the network side device 900 may also include a power supply component 926 configured to perform power management of the network side device 900, a wired or wireless network interface 950 configured to connect the network side device 900 to the network, and an input/output (I/O ) interface 958.
  • the network side device 900 can operate based on the operating system stored in the memory 932, such as Windows ServerTM, Mac OS XTM, UnixTM, LinuxTM, Free BSDTM or similar.
  • the methods provided in the embodiments of the present disclosure are introduced from the perspectives of the network side device and the UE respectively.
  • the network side device and the UE may include a hardware structure and a software module, and implement the above functions in the form of a hardware structure, a software module, or a hardware structure plus a software module.
  • a certain function among the above-mentioned functions may be implemented in the form of a hardware structure, a software module, or a hardware structure plus a software module.
  • the communication device may include a transceiver module and a processing module.
  • the transceiver module may include a sending module and/or a receiving module, the sending module is used to realize the sending function, the receiving module is used to realize the receiving function, and the sending and receiving module can realize the sending function and/or the receiving function.
  • the communication device may be a terminal device (such as the terminal device in the foregoing method embodiments), or a device in the terminal device, or a device that can be matched with the terminal device.
  • the communication device may be a network device, or a device in the network device, or a device that can be matched with the network device.
  • the communication device may be a network device, or a terminal device (such as the terminal device in the above method embodiments), or a chip, a chip system, or a processor that supports the network device in implementing the above method, or a chip, a chip system, or a processor that supports the terminal device in implementing the above method.
  • the device can be used to implement the methods described in the above method embodiments, and for details, refer to the descriptions in the above method embodiments.
  • a communications device may include one or more processors.
  • the processor may be a general purpose processor or a special purpose processor or the like.
  • it can be a baseband processor or a central processing unit.
  • the baseband processor can be used to process communication protocols and communication data
  • the central processor can be used to control the communication device (such as a network side device, a baseband chip, a terminal device, a terminal device chip, a DU or a CU, etc.), execute a computer program, and process data of the computer program.
  • the communication device may further include one or more memories, on which computer programs may be stored, and the processor executes the computer programs, so that the communication device executes the methods described in the foregoing method embodiments.
  • data may also be stored in the memory.
  • the communication device and the memory can be set separately or integrated together.
  • the communication device may further include a transceiver and an antenna.
  • the transceiver may be referred to as a transceiver unit, a transceiver, or a transceiver circuit, etc., and is used to implement a transceiver function.
  • the transceiver may include a receiver and a transmitter, and the receiver may be called a receiver or a receiving circuit for realizing a receiving function; the transmitter may be called a transmitter or a sending circuit for realizing a sending function.
  • the communication device may further include one or more interface circuits.
  • the interface circuit is used to receive code instructions and transmit them to the processor.
  • the processor executes the code instructions to enable the communication device to execute the methods described in the foregoing method embodiments.
  • the communication device is a terminal device (such as the terminal device in the above method embodiment): the processor is configured to execute any of the methods shown in FIGS. 1-6 .
  • the communication device is a network device: the transceiver is used to execute the method shown in any one of Fig. 7-Fig. 8 .
  • the processor may include a transceiver for implementing receiving and transmitting functions.
  • the transceiver may be a transceiver circuit, or an interface, or an interface circuit.
  • the transceiver circuits, interfaces or interface circuits used to realize the functions of receiving and sending can be separated or integrated together.
  • the above-mentioned transceiver circuit, interface or interface circuit may be used for reading and writing code/data, or the above-mentioned transceiver circuit, interface or interface circuit may be used for signal transmission or transfer.
  • the processor may store a computer program, and the computer program runs on the processor to enable the communication device to execute the methods described in the foregoing method embodiments.
  • a computer program may be embedded in a processor, in which case the processor may be implemented by hardware.
  • the communication device may include a circuit, and the circuit may implement the function of sending or receiving or communicating in the foregoing method embodiments.
  • the processors and transceivers described in this disclosure can be implemented on integrated circuits (integrated circuits, ICs), analog ICs, radio frequency integrated circuits (RFICs), mixed signal ICs, application specific integrated circuits (ASICs), printed circuit boards ( printed circuit board, PCB), electronic equipment, etc.
  • the processor and transceiver can also be fabricated using various IC process technologies, such as complementary metal oxide semiconductor (CMOS), N-type metal oxide semiconductor (NMOS), P-type metal oxide semiconductor (positive channel metal oxide semiconductor, PMOS), bipolar junction transistor (BJT), bipolar CMOS (BiCMOS), silicon germanium (SiGe), gallium arsenide (GaAs), etc.
  • the communication device described in the above embodiments may be a network device or a terminal device (such as the terminal device in the above method embodiments), but the scope of the communication device described in this disclosure is not limited thereto, and the structure of the communication device is not limited by the above description.
  • a communication device may be a stand-alone device or may be part of a larger device.
  • the communication device may be: an independent integrated circuit IC, a chip, or a chip system or subsystem; a set with one or more ICs, where optionally the IC set may also include a storage component for storing data and a computer program; an ASIC, such as a modem (Modem); a module that can be embedded in other devices; a receiver, a terminal device, an intelligent terminal device, a cellular phone, a wireless device, a handset, a mobile unit, a vehicle-mounted device, a network device, a cloud device, an artificial intelligence device, or the like; or other devices.
  • where the communication device is a chip or a chip system, the chip includes a processor and an interface.
  • the number of processors may be one or more, and the number of interfaces may be more than one.
  • the chip also includes a memory, which is used to store necessary computer programs and data.
  • An embodiment of the present disclosure also provides a communication system, which includes the communication device serving as the terminal device in the above embodiments (such as the terminal device in the above method embodiments) and the communication device serving as the network device.
  • the present disclosure also provides a readable storage medium on which instructions are stored, and when the instructions are executed by a computer, the functions of any one of the above method embodiments are realized.
  • the present disclosure also provides a computer program product, which implements the functions of any one of the above method embodiments when executed by a computer.
  • all or part of them may be implemented by software, hardware, firmware or any combination thereof.
  • software When implemented using software, it may be implemented in whole or in part in the form of a computer program product.
  • the computer program product comprises one or more computer programs. When the computer program is loaded and executed on the computer, all or part of the processes or functions according to the embodiments of the present disclosure will be generated.
  • the computer can be a general purpose computer, a special purpose computer, a computer network, or other programmable devices.
  • the computer program can be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer program can be transmitted from a website, computer, server or data center to another website, computer, server or data center by wire (such as coaxial cable, optical fiber, or digital subscriber line (DSL)) or wirelessly (such as infrared, radio, or microwave).
  • the computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or a data center integrated with one or more available media.
  • the available medium may be a magnetic medium (for example, a floppy disk, a hard disk, a magnetic tape), an optical medium (for example, a high-density digital video disc (digital video disc, DVD)), or a semiconductor medium (for example, a solid state disk (solid state disk, SSD)) etc.
  • "at least one" in the present disclosure may also be described as one or more, and "a plurality" may be two, three, four or more, which is not limited in the present disclosure. In the embodiments of the present disclosure, where a technical feature is distinguished by "first", "second", "third", "A", "B", "C", "D", and the like, there is no sequential or magnitude order among the technical features so described.


Abstract

The present disclosure provides a precoding matrix determination method and a device/storage medium/apparatus, belonging to the field of communication technologies. The method includes: acquiring channel characteristic information q k,t and a compression rate parameter γ from a user equipment UE, where k indicates the k-th UE and t indicates a time stamp; and determining precoding matrices F t, W t based on the compression rate parameter γ and the channel characteristic information q k,t. The present disclosure provides a simple and efficient method for determining a precoding matrix under low-overhead limited feedback.

Description

一种预编码矩阵确定方法及设备/存储介质/装置 技术领域
本公开涉及通信技术领域,尤其涉及一种预编码矩阵确定方法及设备/存储介质/装置。
背景技术
由于m-MIMO(massive Multiple-Input Multiple-Output,大规模多输入多输出)技术的稳定性、能量利用率、以及抗干扰能力均较好,因此,通常会利用m-MIMO系统进行无线通信。其中,在m-MIMO系统中,UE(User Equipment,用户设备)通常需要向基站反馈下行链路的信道矩阵,以使得基站可以根据该信道矩阵确定下行链路的预编码矩阵。其中,由于m-MIMO系统中基站端天线数量较多,则导致基站端对应的下行链路的数量也较多,从而导致反馈信道矩阵的开销较大,因此,亟需一种在低开销的有限反馈下,简单而高效的预编码矩阵的确定方法。
相关技术中,确定预编码矩阵的方法主要包括:
方法一:各个UE利用估计得到的信道矩阵和给定的码本,通过穷举搜索法找到使得低维的基带等效信道向量的模最大的模拟域预编码/组合向量,并基于所确定的模拟域预编码/组合向量得到对应的低维基带等效信道向量,然后UE将低维基带等效信道向量和对应码本信息反馈至基站,使得基站利用低维等效信道向量使用ZF(迫零)方法确定数字域预编码矩阵。
方法二:基站利用CNN(Convolutional Neural Networks,卷积神经网络)强化学习,得到从环境到用户波束方向的映射,从而寻找用户波束最有可能出现的方位,以尽可能集中感知波束能量,减少感知波束数量,并且在基站处利用CNN学习如何利用少量感知向量确定对应的预编码/组合器矩阵,然后基站可以基于UE发送的反馈参数结合其利用CNN学习到的少量导频,直接确定预编码矩阵。
但是,相关技术的方法一中采用穷举搜索法进行模拟域确定预编码/组合器,计算复杂度高,效率较低,同时还需要对码本精心设计。方法二中基站处构建的CNN神经网络未能充分挖掘信道状态信息的内部特征,也未考虑信道数据在时间上变化趋势,使得基站无法准确的确定出UE的预编码矩阵。
发明内容
本公开提出的预编码矩阵确定方法及设备/存储介质/装置,用于解决相关技术的效率较低、计算复杂度高、预编码设计的效果不理想的技术问题。
本公开一方面实施例提出的预编码矩阵确定方法,应用于基站,包括:
获取来自UE的信道特征信息q k,t以及压缩率参数γ,其中,k用于指示第k个UE,t用于指示时间戳;
基于所述压缩率参数γ和信道特征信息q k,t确定预编码矩阵F t、W t
可选的,在本公开的一个实施例,所述信道特征信息q k,t为:所述UE端的第一子网络对信道估计信息h k,t进行处理后所得到的;其中,所述信道特征信息q k,t的维度低于所述信道估计信息h k,t的维度;所述压缩率参数γ为所述第一子网络的压缩率;以及,所述第一子网络包括第一卷积神经网络和第一长短期记忆网络LSTM,其中,所述第一卷积神经网络和第一LSTM以类残差网络的方式进行连接。
可选的,在本公开的另一个实施例之中,所述基于所述压缩率参数γ和信道特征信息q k,t确定预编码矩阵F t、W t,包括:
确定与所述压缩率参数γ对应的第二子网络;
将所述信道特征信息q k,t输入至所述第二子网络中以输出所述预编码矩阵F t、W t
可选的,在本公开的另一实施例之中,所述第二子网络由两个并行的第一网络模块和第二网络模块组成;
所述第一网络模块包括依次连接的第二LSTM、第一残差神经网络和计算模块,所述第一残差神经网络包括第一一维卷积网络和由n个卷积神经网络组成的第一残差单元,n为正整数,所述第一一维卷积网络和第一残差单元以类残差网络的方式进行连接;其中,所述信道特征信息q k,t经由所述第二LSTM 和第一残差神经网络处理以得到移相器角度Θ t,所述计算模块用于基于所述移相器角度Θ t计算出预编码矩阵F t并输出;
所述第二网络模块包括依次连接的第三LSTM、第二残差神经网络和归一化处理模块,所述归一化模块的输入端还与所述计算模块的输出端连接,所述第二残差神经网络包括第二一维卷积网络和由n个卷积神经网络组成的第二残差单元,所述第二一维卷积网络和第二残差单元以类残差网络的方式进行连接;其中,所述信道特征信息q k,t经由所述第三LSTM、第二残差神经网络处理和归一化处理模块以得到预编码矩阵W t并输出;
以及,所述第一残差神经网络对应的矩阵大小与所述第二残差神经网络对应的矩阵大小不同。
可选的,在本公开的另一实施例之中,所述基站包括至少一个第二子网络,其中,不同第二子网络的压缩率不同;
所述方法还包括:
基于所述第一子网络的结构在所述基站端部署至少一个模拟子网络,其中,所述模拟子网络与所述第一子网络结构相同,以及不同模拟子网络的压缩率不同;
连接具有对应压缩率的所述模拟子网络和所述第二子网络以得到至少一个预编码矩阵确定网络;
获取来自各个UE的信道估计信息h k,t
基于各个UE的信道估计信息h k,t确定样本数据集H;
基于所述样本数据集H对所述预编码矩阵确定网络进行训练。
可选的,在本公开的另一实施例之中,所述方法还包括:
确定训练完成后的所述模拟子网络对应的网络参数;
将所述网络参数发送至UE。
本公开另一方面实施例提出的预编码矩阵确定方法,应用于UE,包括:
确定信道估计信息h k,t,其中,k用于指示第k个UE,t用于指示所述信道估计信息h k,t对应的时间戳;
基于所述信道估计信息h k,t确定信道特征信息q k,t,其中,所述信道特征信息q k,t的维度低于所述信道估计信息h k,t的维度;
向基站发送信道特征信息q k,t以及压缩率参数γ。
可选的,在本公开的一个实施例之中,所述基于所述信道估计信息h k,t确定信道特征信息q k,t,包括:
将所述信道估计信息h k,t输入至第一子网络以输出所述信道特征信息q k,t
其中,所述第一子网络包括第一卷积神经网络和第一LSTM,其中,所述第一卷积神经网络和第一LSTM以类残差网络的方式进行连接,所述压缩率参数γ为所述第一子网络的压缩率。
可选的,在本公开的另一个实施例之中,所述方法还包括:
接收基站发送的网络参数;
基于所述网络参数部署所述第一子网络。
本公开又一方面实施例提出的预编码矩阵确定装置,包括:
获取模块,用于获取来自UE的信道特征信息q k,t以及压缩率参数γ,其中,k用于指示第k个UE,t用于指示所述信道特征信息q k,t对应的时间戳;
确定模块,用于基于所述压缩率参数γ和信道特征信息q k,t确定预编码矩阵F t、W t
本公开又一方面实施例提出的预编码矩阵确定装置,包括:
确定模块,用于确定信道估计信息h k,t,其中,k用于指示第k个UE,t用于指示所述信道估计信息h k,t对应的时间戳;
上述确定模块,还用于基于所述信道估计信息h k,t确定信道特征信息q k,t,其中,所述信道特征信息q k,t的维度低于所述信道估计信息h k,t的维度;
发送模块,用于向基站发送信道特征信息q k,t以及压缩率参数γ。
本公开又一方面实施例提出的一种通信装置,所述装置包括处理器和存储器,所述存储器中存储有 计算机程序,所述处理器执行所述存储器中存储的计算机程序,以使所述装置执行如上一方面实施例提出的方法。
本公开又一方面实施例提出的一种通信装置,所述装置包括处理器和存储器,所述存储器中存储有计算机程序,所述处理器执行所述存储器中存储的计算机程序,以使所述装置执行如上另一方面实施例提出的方法。
本公开又一方面实施例提出的通信装置,包括:处理器和接口电路;
所述接口电路,用于接收代码指令并传输至所述处理器;
所述处理器,用于运行所述代码指令以执行如一方面实施例提出的方法。
本公开又一方面实施例提出的通信装置,包括:处理器和接口电路;
所述接口电路,用于接收代码指令并传输至所述处理器;
所述处理器,用于运行所述代码指令以执行如另一方面实施例提出的方法。
本公开又一方面实施例提出的计算机可读存储介质,用于存储有指令,当所述指令被执行时,使如一方面实施例提出的方法被实现。
本公开又一方面实施例提出的计算机可读存储介质,用于存储有指令,当所述指令被执行时,使如另一方面实施例提出的方法被实现。
综上所述,在本公开实施例提供的预编码矩阵确定方法及设备/存储介质/装置之中,基站可以获取UE通过第一子网络得到的信道特征信息q k,t以及压缩率参数γ,其中,k用于指示第k个UE,t用于指示时间戳,并基于压缩率参数γ和信道特征信息q k,t通过第二子网络确定预编码矩阵F t、W t。其中,在本公开的实施例中,信道特征信息q k,t为:UE端的第一子网络对信道估计信息h k,t进行处理后所得到的;该信道特征信息q k,t的维度低于信道估计信息h k,t的维度。由此可知,在本公开实施例中,UE会先通过第一子网络将获取到的信道状态信息压缩成低维度的信道特征信息,并与压缩率参数反馈给基站,之后,基站即可利用第二子网络基于信道特征信息q k,t和压缩率参数确定预编码矩阵。其中,鉴于UE与基站之间传输的是低纬度信息,因此传输复杂度较低且传输效率较高,从而提供了一种在低开销的有限反馈下,简单而高效的预编码矩阵的确定方法。
本公开实施例的方案还具备以下有益效果:
(1)上述预编码矩阵确定方法中的预编码矩阵确定网络由进行信道压缩重建的第一子网络和确定预编码矩阵的第二子网络构成,且从信道的时空二维相关性出发,以全部时刻用户和速率最大化为优化目标,进行了信道压缩和预编码矩阵的联合优化,从而实现了在UE有限反馈下的高效的确定混合预编码矩阵。
(2)第一子网络由第一卷积神经网络和第一LSTM组成,提取了信道的空间特征和时间特征,并通过第一卷积神经网络和第一LSTM的残差级联设计,将获取到的信道状态信息压缩成低维度的信道特征信息,从而降低了网络的复杂度。
(3)第二子网络由两个并行的第一网络模块和第二网络模块组成,网络模块均由LSTM和残差神经网络组成,同时LSTM和残差神经网络利用类残差网络方式进行级联,将信道重建和预编码矩阵计算流程融合,实现了网络的轻量化。
(4)预编码矩阵确定网络在基站侧进行训练,且在完成训练后,将训练完成后的各个模拟子网络对应的网络参数分布式地传输给对应UE,从而确保了UE可以根据接收到的网络参数在UE中部署多个具备不同压缩率的第一子网络,以便后续利用部署的第一子网络对信道状态信息进行处理得到低维的信道特征信息,并将该信道特征信息和第一子网络的压缩率参数发送至基站,由基站确定预编码矩阵,则保证了预编码矩阵的成功确定。
(5)利用上述预编码矩阵确定方法进行预编码矩阵确定的过程中,第一子网络和第二网络可以根据当前预编码的性能,灵活调整选用不同压缩率对应的网络,从而实现了性能和有限反馈的均衡。
本公开附加的方面和优点将在下面的描述中部分给出,部分将从下面的描述中变得明显,或通过本公开的实践了解到。
附图说明
本公开上述的和/或附加的方面和优点从下面结合附图对实施例的描述中将变得明显和容易理解,其中:
图1a为本公开一个实施例所提供的预编码矩阵确定方法的流程示意图;
图1b为本公开一个实施例所提供的第一子网络的结构示意图;
图1c为本公开一个实施例所提供的第二子网络的结构示意图;
图2a为本公开另一个实施例所提供的预编码矩阵确定方法的流程示意图;
图2b为本公开一个实施例所提供的预编码矩阵确定网络的结构示意图;
图3为本公开再一个实施例所提供的预编码矩阵确定方法的流程示意图;
图4为本公开又一个实施例所提供的预编码矩阵确定方法的流程示意图;
图5为本公开又一个实施例所提供的预编码矩阵确定方法的流程示意图;
图6为本公开一个实施例所提供的预编码矩阵确定装置的结构示意图;
图7为本公开另一个实施例所提供的预编码矩阵确定装置的结构示意图;
图8是本公开一个实施例所提供的一种用户设备的框图;
图9为本公开一个实施例所提供的一种网络侧设备的框图。
具体实施方式
这里将详细地对示例性实施例进行说明,其示例表示在附图中。下面的描述涉及附图时,除非另有表示,不同附图中的相同数字表示相同或相似的要素。以下示例性实施例中所描述的实施方式并不代表与本公开实施例相一致的所有实施方式。相反,它们仅是与如所附权利要求书中所详述的、本公开实施例的一些方面相一致的装置和方法的例子。
在本公开实施例使用的术语是仅仅出于描述特定实施例的目的,而非旨在限制本公开实施例。在本公开实施例和所附权利要求书中所使用的单数形式的“一种”和“该”也旨在包括多数形式,除非上下文清楚地表示其他含义。还应当理解,本文中使用的术语“和/或”是指并包含一个或多个相关联的列出项目的任何或所有可能组合。
应当理解,尽管在本公开实施例可能采用术语第一、第二、第三等来描述各种信息,但这些信息不应限于这些术语。这些术语仅用来将同一类型的信息彼此区分开。例如,在不脱离本公开实施例范围的情况下,第一信息也可以被称为第二信息,类似地,第二信息也可以被称为第一信息。取决于语境,如在此所使用的词语“如果”及“若”可以被解释成为“在……时”或“当……时”或“响应于确定”。
下面详细描述本公开的实施例,所述实施例的示例在附图中示出,其中自始至终相同或类似的标号表示相同或类似的要素。下面通过参考附图描述的实施例是示例性的,旨在用于解释本公开,而不能理解为对本公开的限制。
下面参考附图对本公开提供的预编码矩阵确定方法及设备/存储介质/装置进行详细描述。
图1a为本公开实施例所提供的一种预编码矩阵确定方法的流程示意图,该方法由基站执行,如图1所示,该预编码矩阵确定方法可以包括以下步骤:
步骤101a、获取来自UE的信道特征信息q k,t以及压缩率参数γ。
需要说明的是,在本公开的一个实施例之中,UE可以是指向用户提供语音和/或数据连通性的设备。终端设备可以经RAN(Radio Access Network,无线接入网)与一个或多个核心网进行通信,UE可以是物联网终端,如传感器设备、移动电话(或称为“蜂窝”电话)和具有物联网终端的计算机,例如,可以是固定式、便携式、袖珍式、手持式、计算机内置的或者车载的装置。例如,站(Station,STA)、订户单元(subscriber unit)、订户站(subscriber station),移动站(mobile station)、移动台(mobile)、远程站(remote station)、接入点、远程终端(remoteterminal)、接入终端(access terminal)、用户装置(user terminal)或用户代理(useragent)。或者,UE也可以是无人飞行器的设备。或者,UE也可以是车载设备,比如,可以是具有无线通信功能的行车电脑,或者是外接行车电脑的无线终端。或者,UE也可以是路边设备,比如,可以是具有无线通信功能的路灯、信号灯或者其它路边设备等。
其中,在本公开的一个实施例之中,上述信道特征信息q k,t具体可以为:UE端的第一子网络对信道估计信息h k,t进行处理后所得到的,其中,信道特征信息q k,t的维度低于信道估计信息h k,t的维度,以及,该信道估计信息h k,t具体可以为该UE通过典型信道估计方法确定的下行信道估计信息,该典型信道估计方法即为现有的信道估计方法,其中关于该典型信道估计方法的详细介绍可以参考现有技术介绍,本公开实施例在此不做赘述。进一步地,上述的k用于指示第k个UE,t用于指示时间戳。基于此,上述的信道估计信息h k,t即为:第k个UE在第t时刻的下行信道估计信息;信道特征信息q k,t即为:第k个UE在第t时刻的信道特征信息。
以及,在本公开的一个实施例之中,上述的压缩率参数γ为第一子网络的压缩率。
以下,对上述的第一子网络进行详细介绍。
在本公开的一个实施例之中,图1b为本公开实施例提供的一种第一子网络的结构示意图。参考图1b所示,在本公开的一个实施例之中,第一子网络(例如可以为CHNet)可以包括第一卷积神经网络和第一LSTM(Long Short Term Memory,长短期记忆网络)。其中,该第一卷积神经网络和第一LSTM以类残差网络的方式进行连接,该第一卷积神经网络可以用于提取信道空间特征,该第一LSTM用于提取信道时间特征。
以下,对于第一子网络中的第一卷积神经网络和第一LSTM对信息的处理流程进行详细介绍。
首先,参考图1b可知,信道估计信息h k,t可以先输入至第一子网络并经由第一子网络中的第一卷积神经网络和第一LSTM处理以得到信道特征信息q k,t
具体的,在本公开的一个实施例之中,如图1所示,信道估计信息h k,t在输入至第一卷积神经网络之前,需要先乘以2×N t×K×T以进行重构,其中,N t为基站端的天线数量,K为UE的总数量,T为时间戳的总数量。基于此,假设包括有N组信道估计信息h k,t,则在输入至第一卷积神经网络之前通过乘以2×N t×K×T可以将信道估计信息h k,t重构为一N×2×N t×K×T矩阵。之后,再将该N×2×N t×K×T矩阵输入至第一卷积神经网络以进行卷积操作。
以及,在本公开的一个实施例之中,该第一卷积神经网络可以为一维卷积神经网络,以及,N×2×N t×K×T矩阵在输入至第一卷积神经网络后,该第一卷积神经网络可以对N×2×N t×K×T矩阵进行卷积操作,其中,该第一卷积神经网络的卷积操作可以为:
$$y_{d,i,j}=\sum_{c}\sum_{h}W_{d,c,h}\,x_{c,\;i\times s+h}+b_{d}$$
其中,y d,i,j为经第一卷积神经网络的卷积操作之后得到的矩阵y中第(d,i,j)个元素,W d,c,h为第一卷积神经网络的卷积核权重矩阵W中第(d,c,h)个元素,b d为第一卷积神经网络的卷积核偏置b中第d个元素,x c,i×s+h为第一卷积神经网络的输入x经过零填充后第(c,i×s+h)个元素,s为第一卷积神经网络的卷积步长(例如卷积步长可以为1)。其中,第一卷积神经网络的卷积核可以为d i×c i×3,以及,满足d i-1=c i的约束。其中,需要说明的是,上述的“零填充”具体可以为在第一卷积神经网络的输入x的外周添加0得到矩阵x’,并且该矩阵x’应满足以下要求:利用第一卷积神经网络对该x’进行卷积操作之后得到的矩阵y的大小与第一卷积神经网络的输入x的大小相同。以及,需要说明的是,后续内容中的各个卷积神经网络也需要对输入进行零填充处理,在本公开的一个实施例之中,所有卷积神经网络的所有卷积层均采用相同的零填充策略。
以及,上述的第一卷积神经网络中卷积层的激活函数为Leakyrelu函数,定义为:
$$f_i(x)=\max(x,0)+\alpha\,\min(x,0),\quad 0<\alpha<1$$
其中,x为经第一卷积神经网络的卷积操作之后的输出(也即是上述的经第一卷积神经网络的卷积操作之后得到的矩阵y),f i(x)为第一卷积神经网络的输出。
则由上述内容可知,信道估计信息h k,t输入至第一卷积神经网络之后,该第一卷积神经网络会依次进行卷积激活操作而得到第一卷积神经网络的输出f i(x)。
进一步地,还需要说明的是,上述的第一卷积网络的输出的维度低于第一卷积网络的输入(即信道 估计信息h k,t)的维度。具体而言,在本公开的一个实施例之中,上述第一卷积神经网络可以利用毫米波信道稀疏性的特点,同时考虑到信道在时间上的相关性,将t时刻信道估计信息h k,t压缩成低维度矩阵,以使得第一卷积神经网络的输出的矩阵大小小于信道估计信息h k,t的矩阵大小,从而降低了计算复杂度。
具体的,在本公开的一个实施例之中,第一卷积神经网络的输出与信道估计信息h k,t的矩阵大小关系如下:若信道估计信息h k,t的矩阵大小为N×2×N t×K×T,则经过第一卷积神经网络后所得到的输出的矩阵大小可以为N×M×T,其中,M与第一子网络的压缩率γ相关,γ=M/(Nt×K×2)。
示例的,若第一子网络的压缩率γ=1/32,Nt=256,K=1时,则M=16。
再进一步地,参考图1b所示,本公开的一个实施例之中,第一卷积神经网络的输出可以作为第一LSTM的输入,以及,该第一LSTM对该第一卷积神经网络的输出处理得到第一LSTM的输出,再将该第一LSTM的输出与第一卷积神经网络的输出相加以得到信道特征信息q k,t
由此可知,在本公开的一个实施例之中,第一卷积神经网络不仅可以提取信道空间特征,还会对输入第一LSTM的数据进行预处理,并使得进入第一LSTM网络的数据维度大大降低,从而有效地降低了计算复杂度。
以及,以下对第一LSTM进行详细介绍。
其中,在将第一卷积神经网络的输出(即大小为N×M×T的矩阵)输入至第一LSTM之后,第一LTSM需要先将该大小为N×M×T的矩阵进行第一次转换,得到大小为N×T×M的第一转换矩阵,并对第一转换矩阵进行处理得到处理后的矩阵,该处理后的矩阵的大小为N×T×M,之后再将该处理后的矩阵进行第二次转换,得到大小为N×M×T的LTSM输出矩阵,之后,再将LTSM输出矩阵与第一卷积神经网络的输出矩阵相加,得到信道特征信息q k,t,其中,该信道特征信息q k,t的大小为N×M×T。
以及,上述的第一LSTM在对矩阵进行处理时,需要用到第一LSTM的隐藏层信息s 1,k,t和自身状态信息c 1,k,t,需要说明的是,该s 1,k,t和c 1,k,t的取值是随着第一子网络的输入不同而变化的,具体的,输入至第一子网络的是信道估计信息h 1,1时所使用到的s 1,1,1和c 1,1,1的值与输入至第一子网络的是信道估计信息h 1,2时所使用到的s 1,1,2和c 1,1,2均不相同,由此可以确保s 1,k,t和c 1,k,t的取值与第一子网络的输入相对应,则保证了处理精确度。
此外,需要说明的是,参考图1b所示的第一LSTM的结构,本公开实施例采用的第一LSTM为现有技术中的经典LSTM模型,基于此,关于第一LSTM对于输入的具体处理方法可以参见现有技术描述,本公开实施例在此不做赘述。
由上述内容可知,信道特征信息q k,t具体是由部署于UE端第一子网络对信道估计信息h k,t进行处理后所得到的低纬度信息。
步骤102a、基于压缩率参数γ和信道特征信息q k,t确定预编码矩阵F t、W t
其中,在本公开的一个实施例之中,基于压缩率参数γ和信道特征信息q k,t确定预编码矩阵F t、W t的方法包括:确定与压缩率参数γ对应的第二子网络(PreNet),并将信道特征信息q k,t输入至第二子网络中以输出预编码矩阵F t、W t
需要说明的是,在本公开的一个实施例之中,基站端部署有多个第二子网络,且不同第二子网络的压缩率不同。其中,上述的压缩率参数γ为上述第一子网络的压缩率,以及,当压缩率参数γ不同时(即当用于得到信道特征信息q k,t的第一子网络的压缩率不同时),基于信道特征信息q k,t得到预编码矩阵F t、W t时所要使用的第二子网络也会有所不相同。示例的,假设该压缩率参数γ为1/16(即用于得到信道特征信息q k,t的第一子网络的压缩率为1/16),则基于信道特征信息q k,t得到预编码矩阵F t、W t时所要使用的第二子网络应当为压缩率为1/32的第二子网络。
以及,图1c为本公开实施例提供的一种第二子网络的结构示意图。参考图1c所示,第二子网络由两个并行的第一网络模块和第二网络模块组成。
以下对上述的第一网络模块进行详细介绍。
在本公开的一个实施例之中,第一网络模块包括依次连接的第二LSTM、第一残差神经网络和计算模块。
其中,在本公开的一个实施例之中,第二LSTM与上述第一LSTM结构相同,本公开实施例在此不做赘述。以及,基站在获取到来自UE的信道特征信息q k,t后,可以将该信道特征信息q k,t输入至第二LSTM以得到第二LSTM的输出,并将该第二LSTM的输出输入至第一残差神经网络中。其中,该第二LSTM的输出的矩阵大小与信道特征信息q k,t的矩阵大小相同,均为:N×M×T。
以及,在本公开的一个实施例之中,第一残差神经网络在获取到第二LSTM的输出之后,可以对该第二LSTM的输出进行处理得到第一残差神经网络的输出(即移相器角度Θ t)。
具体的,在本公开的一个实施例之中,该第一残差神经网络可以包括第一一维卷积网络和由n个卷积神经网络组成的第一残差单元,n为正整数(例如可以为4)。以及,第一一维卷积网络和第一残差单元以类残差网络的方式进行连接。其中,该第一一维卷积网络的结构与上述的第一卷积网络的结构相同,本公开实施例在此不做赘述。以及,该第一残差单元可以进行残差操作,其中,该残差操作可以定义为:
$$y=f_n\!\left(W_n f_{n-1}\!\left(W_{n-1} f_{n-2}\left(\cdots f_1(W_1 x+b_1)\cdots\right)+b_{n-1}\right)+b_n+x\right),$$
其中,y为第一残差单元的输出,x第一残差单元的输入,W i与b i为第一残差单元的中间层卷积网络的权重与偏置,f i(·)为第一残差单元的中间层卷积网络的激活函数,其中,该激活函数可以为LeakyReLU函数,关于该LeakyReLU函数的介绍可以参考上述实施例描述,本公开实施例在此不做赘述。
以及,需要说明的是,该第一残差单元中最后一层卷积网络的输出矩阵大小应当与第一残差单元的输入矩阵大小相同,并且,该第一残差单元中间网络的输出矩阵大小是可调的。
以及,结合上述内容以及图1c可知,在本公开的一个实施例之中,第一残差神经网络对于第二LSTM的输出处理得到第一残差神经网络的输出(即移相器角度Θ t)的过程主要为:第二LSTM的输出在输入至第一卷积神经网络之前,先对其进行矩阵扩展,以将矩阵大小从N×M×T扩展为N×2×Nt×K×T得到扩展矩阵,之后,第一残差神经网络中的第一一维卷积网络对该扩展矩阵进行卷积激活操作得到第一一维卷积网络的输出,将该第一一维卷积网络的输出输入至第一残差单元,使得该第一残差单元对该第一一维卷积网络的输出进行残差操作得到第一残差单元的输出,之后,将该第一残差单元的输出与第一一维卷积网络的输出相加得到第一残差神经网络的输出(即移相器角度Θ t),再将该移相器角度Θ t输入至计算模块。
进一步地,在本公开的一个实施例之中,该计算模块在获取到第一残差神经网络输出的移相器角度Θ t之后,可以基于该移相器角度Θ t计算得到模拟预编码矩阵F t。其中,计算模块基于移相器角度Θ t计算出预编码矩阵F t的计算公式为:
$$F_t=\tfrac{1}{\sqrt{N_t}}\,e^{\,j\Theta_t}$$
由上可知,信道特征信息q k,t通过上述第一网络模块处理后可以得到模拟预编码矩阵F t
以及,以下对上述的第二网络模块进行详细介绍。
在本公开的一个实施例之中,第二网络模块包括依次连接的第三LSTM、第二残差神经网络和归一化处理模块,该归一化模块的输入端还与第一网络模块中的计算模块的输出端连接。
其中,在本公开的一个实施例之中,第三LSTM与上述第二LSTM结构相同,本公开实施例在此不做赘述。以及,基站在获取到来自UE的信道特征信息q k,t后,可以将该信道特征信息q k,t输入至第三LSTM以得到第三LSTM的输出,并将该第三LSTM的输出输入至第二残差神经网络中。其中,该第三LSTM的输出的矩阵大小与信道特征信息q k,t的矩阵大小相同,均为:N×M×T。
以及,在本公开的一个实施例之中,第二残差神经网络在获取到第三LSTM的输出之后,可以对该第三LSTM的输出进行处理得到第二残差神经网络的输出(即未归一化的矩阵$\tilde{W}_t$)。
具体的,在本公开的一个实施例之中,该第二残差神经网络可以包括第二一维卷积网络和由n个卷积神经网络组成的第二残差单元,n为正整数(例如可以为4)。以及,第二一维卷积网络和第二残差单 元以类残差网络的方式进行连接。其中,该第二一维卷积网络的结构与上述的第一卷积网络的结构相同,本公开实施例在此不做赘述。以及,该第二残差单元与上述第一残差单元的操作相同,本公开实施例在此不做赘述。
以及,需要说明的是,该第二残差单元中最后一层卷积网络的输出矩阵大小应当与第二残差单元的输入矩阵大小相同,并且,该第二残差单元中间网络的输出矩阵大小是可调的。
进一步地,结合上述内容以及图1c可知,在本公开的一个实施例之中,第二残差神经网络对于第三LSTM的输出处理得到第二残差神经网络的输出(即$\tilde{W}_t$)的过程主要为:第三LSTM的输出在输入至第二卷积神经网络之前,先对其进行矩阵扩展,以将矩阵大小从N×M×T扩展为N×2×K×K×T得到扩展矩阵,之后,第二残差神经网络中的第二一维卷积网络对该扩展矩阵进行卷积激活操作得到第二一维卷积网络的输出,将该第二一维卷积网络的输出输入至第二残差单元,使得该第二残差单元对该第二一维卷积网络的输出进行残差操作得到第二残差单元的输出,之后,将该第二残差单元的输出与第二一维卷积网络的输出相加得到第二残差神经网络的输出(即$\tilde{W}_t$),再将该$\tilde{W}_t$输入至归一化处理模块。
进一步地,在本公开的一个实施例之中,该归一化处理模块的输入端与计算模块的输出端相连接,归一化处理模块在获取到第二残差神经网络输出的$\tilde{W}_t$之后,可以基于该$\tilde{W}_t$和模拟预编码矩阵F t计算得到数字预编码矩阵W t。其中,归一化处理模块基于该$\tilde{W}_t$和模拟预编码矩阵F t计算得到数字预编码矩阵W t的计算公式为:
$$W_t=\frac{\sqrt{K}\,\tilde{W}_t}{\bigl\lVert F_t\tilde{W}_t\bigr\rVert_F}$$
由上可知,信道特征信息q k,t通过上述第一网络模块处理后可以得到数字预编码矩阵W t
以及,由上述内容可知,在本公开的一个实施例之中,上述的第二子网络中的第一网络模块和第二网络模块的结构类似但对应的矩阵大小不同,并且第一网络模块和第二网络模块分别用于计算模拟预编码矩阵和数字预编码矩阵,实现了隐式的混合预编码矩阵的计算。此外,第二子网络中还将信道重建和预编码矩阵计算流程融合,实现了网络的轻量化。
综上所述,在本公开实施例提供的预编码矩阵确定方法之中,基站可以获取UE通过第一子网络得到的信道特征信息q k,t以及压缩率参数γ,其中,k用于指示第k个UE,t用于指示时间戳,并基于压缩率参数γ和信道特征信息q k,t通过第二子网络确定预编码矩阵F t、W t。其中,在本公开的实施例中,信道特征信息q k,t为:UE端的第一子网络对信道估计信息h k,t进行处理后所得到的;该信道特征信息q k,t的维度低于信道估计信息h k,t的维度。由此可知,在本公开实施例中,UE会先通过第一子网络将获取到的信道状态信息压缩成低维度的信道特征信息,并与压缩率参数反馈给基站,之后,基站即可利用第二子网络基于信道特征信息q k,t和压缩率参数确定预编码矩阵。其中,鉴于UE与基站之间传输的是低纬度信息,因此传输复杂度较低且传输效率较高,从而提供了一种在低开销的有限反馈下,简单而高效的预编码矩阵的确定方法。
其中,在本公开的一个实施例之中,在利用第一子网络和第二子网络得出混合预编码矩阵之前,通常还需要对第一子网络和第二子网络进行训练。以下针对第一子网络和第二子网络的训练过程进行详细介绍。
图2a为本公开另一个实施例所提供的一种预编码矩阵确定方法的流程示意图,该方法由基站执行,如图2a所示,该预编码矩阵确定方法可以包括以下步骤:
步骤201a、基于第一子网络的结构在基站端部署至少一个模拟子网络,其中,模拟子网络与第一子网络结构相同,以及不同模拟子网络的压缩率不同。
其中,关于第一子网络的介绍可以参考上述实施例描述,本公开实施例在此不做赘述。
步骤202a、连接具有对应压缩率的模拟子网络和第二子网络以得到至少一个预编码矩阵确定网络。
其中,在本公开的一个实施例之中,基站可以包括至少一个第二子网络,其中,不同第二子网络的压缩率不同。
以及,在本公开的一个实施例之中,通过联立模拟子网络和第二子网络可以得到预编码矩阵确定网 络,用于确定混合预编码。其中,不同压缩率的模拟子网络对应不同压缩率的第二子网络,示例的,压缩率为1/16的模拟子网络可以与压缩率为1/32的第二子网络联立以得到预编码矩阵确定网络。并且,针对不同的预编码矩阵确定网络其预编码性能也会有所不同。
示例的,图2b为本申请提供一种预编码矩阵确定网络的结构示意图。
步骤203a、获取来自各个UE的信道估计信息h k,t
其中,关于信道估计信息h k,t的介绍可以参考上述实施例描述,本公开实施例在此不做赘述。以及,在本公开的一个实施例之中,该信道估计信息h k,t例如可以为CSI(Channel State Information,信道状态信息)矩阵。
示例的,在本公开的一个实施例之中,在一种点到点MIMO-OFDM(Orthogonal Frequency Division Multiplexing,正交频分复用)系统的下行链路中,假设在基站端以半波长间隔配置256根天线,在UE端配置单天线,即N t=256,K=1(即UE数量为一个)。则采用N c=1024个子载波,在2.4GHz室外微微蜂窝场景例如可以获取来自该一个UE的5000组信道估计信息h k,t。其中,每组样本包括T=5个时间戳上相关的信道估计信息h k,t,即h k,t={h 1,1,…,h 1,5}。将信道估计信息h 1,t做虚实分解,即
$h_{1,t}=\Re\{h_{1,t}\}+j\,\Im\{h_{1,t}\}$,其中$j^2=-1$。这样的5000组信道估计信息,即为:$\{h^{(n)}_{1,t}\mid n=1,\dots,5000;\ t=1,\dots,5\}$。
步骤204a、基于各个UE的信道估计信息h k,t确定样本数据集H。
其中,在本公开的一个实施例之中,基于各个UE的信道估计信息h k,t确定样本数据集H的方法可以包括:样本数据集H={H 1,…,H T},其中
$$H_t=\bigl[h_{1,t},\,h_{2,t},\,\dots,\,h_{K,t}\bigr],$$
h k,t为各个UE上报的信道估计信息。其中,K为UE总数量,k为第k个UE,k=1、2、3........K,N t为BS天线数量,T为时间戳总数量,t为时间戳,t==1、2、3........T。
基于此,假设K=1、T=5,且上述步骤203a中获取到来自一个UE的5000组信道估计信息,即为:
$\{h^{(n)}_{1,t}\mid n=1,\dots,5000;\ t=1,\dots,5\}$,
此时可以得到5000组样本数据集H,其中每组样本数据集H={H 1,…,H 5},其中$H_t=h_{1,t}$(K=1时每个$H_t$即为该UE在t时刻的信道估计信息)。
然后,可以将5000组样本数据集H划分为3500组样本的训练集(占比70%),1000组样本的验证集(占比20%),500组样本的测试集(占比10%),并将训练集、验证集、测试集作为预编码矩阵确定网络的训练数据。
步骤205a、基于样本数据集H对预编码矩阵确定网络进行训练。
其中,在本公开的一个实施例之中,在基于样本数据集H对预编码矩阵确定网络进行训练的过程中,以全部时刻用户和速率最大化为优化目标,调整预编码矩阵确定网络中的网络参数,以实现在有限反馈下的高效的混合预编码设计。
具体的,可以采用上述图1至图5所示的方法基于样本数据集H对该预编码矩阵确定网络进行训练,计算损失函数Loss。
其中,在本公开的一个实施例之中,损失函数Loss可以定义为:
$$\mathrm{Loss}=-\,\mathrm{Rate},\qquad \mathrm{Rate}=\sum_{t=1}^{T}\sum_{k=1}^{K}\log_2\!\Bigl(1+\frac{\bigl|h_{k,t}^{H}F_t w_{k,t}\bigr|^{2}}{\sum_{i\neq k}\bigl|h_{k,t}^{H}F_t w_{i,t}\bigr|^{2}+\sigma^{2}}\Bigr)$$
其中$w_{k,t}$为$W_t$的第k列,$\sigma^{2}$为噪声功率。
其中,上述Rate可以为合速率,以及由上述公式可知,损失函数Loss可以为合速率的相反数。
进一步地,在本公开的一个实施例之中,采用端到端的训练方式进行预编码矩阵确定网络的训练, 直至该损失函数收敛,确定预编码矩阵确定网络训练完成。
步骤206a、确定训练完成后的各个模拟子网络对应的网络参数。
步骤207a、将各个模拟子网络对应的网络参数发送至UE。
在本公开的一个实施例之中,UE接收各个模拟子网络对应的网络参数之后,UE可以根据接收到的网络参数在UE中部署多个具备不同压缩率的第一子网络,以便后续要确定混合预编码矩阵时,UE可以先估计得到信道估计信息h k,t,并利用部署的第一子网络对信道状态信息h k进行处理为得到低维的信道特征信息q k,t,并将该信道特征信息q k,t和第一子网络的压缩率参数γ发送至基站,使得基站可以选用对应压缩率γ的第二子网络,对接收到的信道特征信息q k,t进行处理以得到混合预编码矩阵F t、W t
综上所述,在本公开实施例提供的预编码矩阵确定方法之中,基站可以获取UE通过第一子网络得到的信道特征信息q k,t以及压缩率参数γ,其中,k用于指示第k个UE,t用于指示时间戳,并基于压缩率参数γ和信道特征信息q k,t通过第二子网络确定预编码矩阵F t、W t。其中,在本公开的实施例中,信道特征信息q k,t为:UE端的第一子网络对信道估计信息h k,t进行处理后所得到的;该信道特征信息q k,t的维度低于信道估计信息h k,t的维度。由此可知,在本公开实施例中,UE会先通过第一子网络将获取到的信道状态信息压缩成低维度的信道特征信息,并与压缩率参数反馈给基站,之后,基站即可利用第二子网络基于信道特征信息q k,t和压缩率参数确定预编码矩阵。其中,鉴于UE与基站之间传输的是低纬度信息,因此传输复杂度较低且传输效率较高,从而提供了一种在低开销的有限反馈下,简单而高效的预编码矩阵的确定方法。
图3为本公开再一个实施例所提供的一种预编码矩阵确定方法的流程示意图,该方法由基站执行,如图3所示,该预编码矩阵确定方法可以包括以下步骤:
步骤301、基于第一子网络的结构在基站端部署至少一个模拟子网络,其中,模拟子网络与第一子网络结构相同,以及不同模拟子网络的压缩率不同。
步骤302、连接具有对应压缩率的模拟子网络和第二子网络以得到至少一个预编码矩阵确定网络。
步骤303、获取来自各个UE的信道估计信息h k,t
步骤304、基于各个UE的信道估计信息h k,t确定样本数据集H。
步骤305、基于样本数据集H对预编码矩阵确定网络进行训练。
步骤306、确定训练完成后的各个模拟子网络对应的网络参数。
步骤307、将各个模拟子网络对应的网络参数发送至UE。
步骤308、获取来自UE的信道特征信息q k,t以及压缩率参数γ。
步骤309、基于压缩率参数γ和信道特征信息q k,t确定预编码矩阵F t、W t
其中,在本公开的一个实施例之中,关于步骤301~309的详细介绍可以参考上述实施例中的相关介绍,本公开实施例在此不做赘述。
综上所述,在本公开实施例提供的预编码矩阵确定方法之中,基站可以获取UE通过第一子网络得到的信道特征信息q k,t以及压缩率参数γ,其中,k用于指示第k个UE,t用于指示时间戳,并基于压缩率参数γ和信道特征信息q k,t通过第二子网络确定预编码矩阵F t、W t。其中,在本公开的实施例中,信道特征信息q k,t为:UE端的第一子网络对信道估计信息h k,t进行处理后所得到的;该信道特征信息q k,t的维度低于信道估计信息h k,t的维度。由此可知,在本公开实施例中,UE会先通过第一子网络将获取到的信道状态信息压缩成低维度的信道特征信息,并与压缩率参数反馈给基站,之后,基站即可利用第二子网络基于信道特征信息q k,t和压缩率参数确定预编码矩阵。其中,鉴于UE与基站之间传输的是低纬度信息,因此传输复杂度较低且传输效率较高,从而提供了一种在低开销的有限反馈下,简单而高效的预编码矩阵的确定方法。
图4为本公开再一个实施例所提供的一种预编码矩阵确定方法的流程示意图,该方法由UE执行,如图4所示,该预编码矩阵确定方法可以包括以下步骤:
步骤401、确定信道估计信息h k,t
其中,在本公开的一个实施例之中,k用于指示第k个UE,t用于指示时间戳。
以及,在本公开的一个实施例之中,第k个UE可以通过典型信道估计方法获得第t时刻的下行信道 估计信息h k,t
步骤402、基于信道估计信息h k,t确定信道特征信息q k,t,其中,信道特征信息q k,t的维度低于信道估计信息h k,t的维度。
其中,在本公开的一个实施例之中,基于信道估计信息h k,t确定信道特征信息q k,t的方法可以包括将信道估计信息h k,t输入至第一子网络以输出信道特征信息q k,t,其中,第一子网络包括第一卷积神经网络和第一LSTM,第一卷积神经网络和第一LSTM以类残差网络的方式进行连接,压缩率参数γ为第一子网络的压缩率。
关于第一子网络的详细介绍可以参考上述实施例中的相关介绍,本公开实施例在此不做赘述。
步骤403、向基站发送信道特征信息q k,t以及压缩率参数γ。
其中,在本公开的一个实施例之中,基站接收UE发送的信道特征信息q k,t和压缩率参数γ,之后,可以根据压缩率参数γ选用对应的第二子网络对接收到的信道特征信息q k,t进行处理得到预编码矩阵。
此外,需要说明的是,在本公开的一个实施例之中,UE端部署有多个第一子网络,且不同第一子网络的压缩率不同。基于此,在本公开的一个实施例之中,UE可以根据预编码的性能灵活选择用于预编码的第一子网络。具体而言,UE可以先选取一压缩率较高的第一子网络来确定出信道特征信息q k,t,并将所确定该信道特征信息q k,t和该压缩率较高的第一子网络的压缩率参数γ发送至基站,以便基站选取该较高压缩率对应的第二子网络来进行预编码;其中,当UE确定出此次预编码性能较好时,则下次预编码时,UE可以选取一压缩率更高的第一子网络来确定出信道特征信息q k,t,并将所确定该信道特征信息q k,t和该压缩率更高的第一子网络的压缩率参数γ发送至基站,以便基站选取该更高压缩率对应的第二子网络来进行预编码;其中,当UE确定出此次预编码性能较差时,则下次预编码时,UE可以选取一压缩率较低的第一子网络来确定出信道特征信息q k,t,并将所确定该信道特征信息q k,t和该压缩率较低的第一子网络的压缩率参数γ发送至基站,以便基站选取该较低压缩率对应的第二子网络来进行预编码。
综上所述,在本公开实施例提供的预编码矩阵确定方法之中,基站可以获取UE通过第一子网络得到的信道特征信息q k,t以及压缩率参数γ,其中,k用于指示第k个UE,t用于指示时间戳,并基于压缩率参数γ和信道特征信息q k,t通过第二子网络确定预编码矩阵F t、W t。其中,在本公开的实施例中,信道特征信息q k,t为:UE端的第一子网络对信道估计信息h k,t进行处理后所得到的;该信道特征信息q k,t的维度低于信道估计信息h k,t的维度。由此可知,在本公开实施例中,UE会先通过第一子网络将获取到的信道状态信息压缩成低维度的信道特征信息,并与压缩率参数反馈给基站,之后,基站即可利用第二子网络基于信道特征信息q k,t和压缩率参数确定预编码矩阵。其中,鉴于UE与基站之间传输的是低纬度信息,因此传输复杂度较低且传输效率较高,从而提供了一种在低开销的有限反馈下,简单而高效的预编码矩阵的确定方法。
图5为本公开又一个实施例所提供的一种预编码矩阵确定方法的流程示意图,该方法由UE执行,如图5所示,该预编码矩阵确定方法可以包括以下步骤:
步骤501、接收基站发送的网络参数。
其中,在本公开的一个实施例之中,网络参数可以包括多个不同压缩率对应的第一卷积神经网络的网络参数和第一LSTM网络的网络参数。
步骤502、基于网络参数部署第一子网络。
步骤503、确定信道估计信息h k,t,其中,k用于指示第k个UE,t用于指示时间戳。
步骤504、基于信道估计信息h k,t确定信道特征信息q k,t,其中,信道特征信息q k,t的维度低于信道估计信息h k,t的维度。
步骤505、向基站发送信道特征信息q k,t以及压缩率参数γ。
其中,在本公开的一个实施例之中,关于步骤501~505的详细介绍可以参考上述实施例中的相关介绍,本公开实施例在此不做赘述。
综上所述,在本公开实施例提供的预编码矩阵确定方法之中,基站可以获取UE通过第一子网络得到的信道特征信息q k,t以及压缩率参数γ,其中,k用于指示第k个UE,t用于指示时间戳,并基于压缩率参数γ和信道特征信息q k,t通过第二子网络确定预编码矩阵F t、W t。其中,在本公开的实施例中,信 道特征信息q k,t为:UE端的第一子网络对信道估计信息h k,t进行处理后所得到的;该信道特征信息q k,t的维度低于信道估计信息h k,t的维度。由此可知,在本公开实施例中,UE会先通过第一子网络将获取到的信道状态信息压缩成低维度的信道特征信息,并与压缩率参数反馈给基站,之后,基站即可利用第二子网络基于信道特征信息q k,t和压缩率参数确定预编码矩阵。其中,鉴于UE与基站之间传输的是低纬度信息,因此传输复杂度较低且传输效率较高,从而提供了一种在低开销的有限反馈下,简单而高效的预编码矩阵的确定方法。
此外,需要说明的是,本公开实施例的方案还具备以下有益效果:
(1)上述预编码矩阵确定方法中的预编码矩阵确定网络由进行信道压缩重建的第一子网络和确定预编码矩阵的第二子网络构成,且从信道的时空二维相关性出发,以全部时刻用户和速率最大化为优化目标,进行了信道压缩和预编码矩阵的联合优化,从而实现了在UE有限反馈下的高效的确定混合预编码矩阵。
(2)第一子网络由第一卷积神经网络和第一LSTM组成,提取了信道的空间特征和时间特征,并通过第一卷积神经网络和第一LSTM的残差级联设计,将获取到的信道状态信息压缩成低维度的信道特征信息,从而降低了网络的复杂度。
(3)第二子网络由两个并行的第一网络模块和第二网络模块组成,网络模块均由LSTM和残差神经网络组成,同时LSTM和残差神经网络利用类残差网络方式进行级联,将信道重建和预编码矩阵计算流程融合,实现了网络的轻量化。
(4)预编码矩阵确定网络在基站侧进行训练,且在完成训练后,将训练完成后的各个模拟子网络对应的网络参数分布式地传输给对应UE,从而确保了UE可以根据接收到的网络参数在UE中部署多个具备不同压缩率的第一子网络,以便后续利用部署的第一子网络对信道状态信息进行处理得到低维的信道特征信息,并将该信道特征信息和第一子网络的压缩率参数发送至基站,由基站确定预编码矩阵,则保证了预编码矩阵的成功确定。
(5)利用上述预编码矩阵确定方法进行预编码矩阵确定的过程中,第一子网络和第二网络可以根据当前预编码的性能,灵活调整选用不同压缩率对应的网络,从而实现了性能和有限反馈的均衡。
图6本公开一个实施例所提供的一种预编码矩阵确定装置的结构示意图,如图6所示,装置600可以包括:
获取模块601,用于获取来自UE的信道特征信息q k,t以及压缩率参数γ,其中,k用于指示第k个UE,t用于指示时间戳;
确定模块602,用于基于所述压缩率参数γ和信道特征信息q k,t确定预编码矩阵F t、W t
综上所述,在本公开实施例提供的预编码矩阵确定装置之中,基站可以获取UE通过第一子网络得到的信道特征信息q k,t以及压缩率参数γ,其中,k用于指示第k个UE,t用于指示时间戳,并基于压缩率参数γ和信道特征信息q k,t通过第二子网络确定预编码矩阵F t、W t。其中,在本公开的实施例中,信道特征信息q k,t为:UE端的第一子网络对信道估计信息h k,t进行处理后所得到的;该信道特征信息q k,t的维度低于信道估计信息h k,t的维度。由此可知,在本公开实施例中,UE会先通过第一子网络将获取到的信道状态信息压缩成低维度的信道特征信息,并与压缩率参数反馈给基站,之后,基站即可利用第二子网络基于信道特征信息q k,t和压缩率参数确定预编码矩阵。其中,鉴于UE与基站之间传输的是低纬度信息,因此传输复杂度较低且传输效率较高,从而提供了一种在低开销的有限反馈下,简单而高效的预编码矩阵的确定方法。
在本公开一个实施例之中,信道特征信息q k,t为:UE端的第一子网络对信道估计信息h k,t进行处理后所得到的;其中,信道特征信息q k,t的维度低于信道估计信息h k,t的维度;压缩率参数γ为第一子网络的压缩率;以及,第一子网络包括第一卷积神经网络和第一长短期记忆网络LSTM,其中,第一卷积神经网络和第一LSTM以类残差网络的方式进行连接。
进一步地,在本公开另一个实施例之中,确定模块602还用于:、
确定与压缩率参数γ对应的第二子网络;
将信道特征信息q k,t输入至第二子网络中以输出预编码矩阵F t、W t
进一步地,在本公开另一个实施例之中,第二子网络由两个并行的第一网络模块和第二网络模块组成;
第一网络模块包括依次连接的第二LSTM、第一残差神经网络和计算模块,第一残差神经网络包括第一一维卷积网络和由n个卷积神经网络组成的第一残差单元,n为正整数,第一一维卷积网络和第一残差单元以类残差网络的方式进行连接;其中,信道特征信息q k,t经由第二LSTM和第一残差神经网络处理以得到移相器角度Θ t,计算模块用于基于移相器角度Θ t计算出预编码矩阵F t并输出;
第二网络模块包括依次连接的第三LSTM、第二残差神经网络和归一化处理模块,归一化模块的输入端还与计算模块的输出端连接,第二残差神经网络包括第二一维卷积网络和由n个卷积神经网络组成的第二残差单元,第二一维卷积网络和第二残差单元以类残差网络的方式进行连接;其中,信道特征信息q k,t经由第三LSTM、第二残差神经网络处理和归一化处理模块以得到预编码矩阵W t并输出;以及,第一残差神经网络对应的矩阵大小与第二残差神经网络对应的矩阵大小不同。
进一步地,在本公开另一个实施例之中,基站包括至少一个第二子网络,其中,不同第二子网络的压缩率不同。
进一步地,在本公开另一个实施例之中,上述装置还用于:
基于第一子网络的结构在基站端部署至少一个模拟子网络,其中,模拟子网络与第一子网络结构相同,以及不同模拟子网络的压缩率不同;
连接具有对应压缩率的模拟子网络和第二子网络以得到至少一个预编码矩阵确定网络;
获取来自各个UE的信道估计信息h k,t
基于各个UE的信道估计信息h k,t确定样本数据集H;
基于样本数据集H对所述预编码矩阵确定网络进行训练。
进一步地,在本公开另一个实施例之中,上述装置还用于:
确定训练完成后的模拟子网络对应的网络参数;
将网络参数发送至UE。
图7为本公开另一个实施例所提供的一种预编码矩阵确定装置的结构示意图,如图7所示,装置700可以包括:
确定模块701,用于确定信道估计信息h k,t,其中,k用于指示第k个UE,t用于指示信道估计信息h k,t对应的时间戳;
上述确定模块701,还用于基于信道估计信息h k,t确定信道特征信息q k,t,其中,信道特征信息q k,t的维度低于信道估计信息h k,t的维度;
发送模块702,用于向基站发送信道特征信息q k,t以及压缩率参数γ。
综上所述,在本公开实施例提供的预编码矩阵确定装置之中,基站可以获取UE通过第一子网络得到的信道特征信息q k,t以及压缩率参数γ,其中,k用于指示第k个UE,t用于指示时间戳,并基于压缩率参数γ和信道特征信息q k,t通过第二子网络确定预编码矩阵F t、W t。其中,在本公开的实施例中,信道特征信息q k,t为:UE端的第一子网络对信道估计信息h k,t进行处理后所得到的;该信道特征信息q k,t的维度低于信道估计信息h k,t的维度。由此可知,在本公开实施例中,UE会先通过第一子网络将获取到的信道状态信息压缩成低维度的信道特征信息,并与压缩率参数反馈给基站,之后,基站即可利用第二子网络基于信道特征信息q k,t和压缩率参数确定预编码矩阵。其中,鉴于UE与基站之间传输的是低纬度信息,因此传输复杂度较低且传输效率较高,从而提供了一种在低开销的有限反馈下,简单而高效的预编码矩阵的确定方法。
在本公开一个实施例之中,上述确定模块701还用于:
将信道估计信息h k,t输入至第一子网络以输出信道特征信息q k,t
其中,第一子网络包括第一卷积神经网络和第一LSTM,其中,第一卷积神经网络和第一LSTM以类残差网络的方式进行连接,压缩率参数γ为第一子网络的压缩率。
进一步地,在本公开另一个实施例之中,上述装置还用于:
接收基站发送的网络参数;
基于网络参数部署第一子网络。
图8是本公开一个实施例所提供的一种用户设备UE800的框图。例如,UE800可以是移动电话,计算机,数字广播终端设备,消息收发设备,游戏控制台,平板设备,医疗设备,健身设备,个人数字助理等。
参照图8,UE800可以包括以下至少一个组件:处理组件802,存储器804,电源组件806,多媒体组件808,音频组件810,输入/输出(I/O)的接口812,传感器组件813,以及通信组件816。
处理组件802通常控制UE800的整体操作,诸如与显示,电话呼叫,数据通信,相机操作和记录操作相关联的操作。处理组件802可以包括至少一个处理器820来执行指令,以完成上述的方法的全部或部分步骤。此外,处理组件802可以包括至少一个模块,便于处理组件802和其他组件之间的交互。例如,处理组件802可以包括多媒体模块,以方便多媒体组件808和处理组件802之间的交互。
存储器804被配置为存储各种类型的数据以支持在UE800的操作。这些数据的示例包括用于在UE800上操作的任何应用程序或方法的指令,联系人数据,电话簿数据,消息,图片,视频等。存储器804可以由任何类型的易失性或非易失性存储设备或者它们的组合实现,如静态随机存取存储器(SRAM),电可擦除可编程只读存储器(EEPROM),可擦除可编程只读存储器(EPROM),可编程只读存储器(PROM),只读存储器(ROM),磁存储器,快闪存储器,磁盘或光盘。
电源组件806为UE800的各种组件提供电力。电源组件806可以包括电源管理系统,至少一个电源,及其他与为UE800生成、管理和分配电力相关联的组件。
多媒体组件808包括在所述UE800和用户之间的提供一个输出接口的屏幕。在一些实施例中,屏幕可以包括液晶显示器(LCD)和触摸面板(TP)。如果屏幕包括触摸面板,屏幕可以被实现为触摸屏,以接收来自用户的输入信号。触摸面板包括至少一个触摸传感器以感测触摸、滑动和触摸面板上的手势。所述触摸传感器可以不仅感测触摸或滑动动作的边界,而且还检测与所述触摸或滑动操作相关的唤醒时间和压力。在一些实施例中,多媒体组件808包括一个前置摄像头和/或后置摄像头。当UE800处于操作模式,如拍摄模式或视频模式时,前置摄像头和/或后置摄像头可以接收外部的多媒体数据。每个前置摄像头和后置摄像头可以是一个固定的光学透镜系统或具有焦距和光学变焦能力。
音频组件810被配置为输出和/或输入音频信号。例如,音频组件810包括一个麦克风(MIC),当UE800处于操作模式,如呼叫模式、记录模式和语音识别模式时,麦克风被配置为接收外部音频信号。所接收的音频信号可以被进一步存储在存储器804或经由通信组件816发送。在一些实施例中,音频组件810还包括一个扬声器,用于输出音频信号。
I/O接口812为处理组件802和外围接口模块之间提供接口,上述外围接口模块可以是键盘,点击轮,按钮等。这些按钮可包括但不限于:主页按钮、音量按钮、启动按钮和锁定按钮。
传感器组件813包括至少一个传感器,用于为UE800提供各个方面的状态评估。例如,传感器组件813可以检测到设备800的打开/关闭状态,组件的相对定位,例如所述组件为UE800的显示器和小键盘,传感器组件813还可以检测UE800或UE800一个组件的位置改变,用户与UE800接触的存在或不存在,UE800方位或加速/减速和UE800的温度变化。传感器组件813可以包括接近传感器,被配置用来在没有任何的物理接触时检测附近物体的存在。传感器组件813还可以包括光传感器,如CMOS或CCD图像传感器,用于在成像应用中使用。在一些实施例中,该传感器组件813还可以包括加速度传感器,陀螺仪传感器,磁传感器,压力传感器或温度传感器。
通信组件816被配置为便于UE800和其他设备之间有线或无线方式的通信。UE800可以接入基于通信标准的无线网络,如WiFi,2G或3G,或它们的组合。在一个示例性实施例中,通信组件816经由广播信道接收来自外部广播管理系统的广播信号或广播相关信息。在一个示例性实施例中,所述通信组件816还包括近场通信(NFC)模块,以促进短程通信。例如,在NFC模块可基于射频识别(RFID)技术,红外数据协会(IrDA)技术,超宽带(UWB)技术,蓝牙(BT)技术和其他技术来实现。
在示例性实施例中,UE800可以被至少一个应用专用集成电路(ASIC)、数字信号处理器(DSP)、数字信号处理设备(DSPD)、可编程逻辑器件(PLD)、现场可编程门阵列(FPGA)、控制器、微控制器、微处 理器或其他电子元件实现,用于执行上述方法。
图9是本申请实施例所提供的一种网络侧设备900的框图。例如,网络侧设备900可以被提供为一网络侧设备。参照图9,网络侧设备900包括处理组件911,其进一步包括至少一个处理器,以及由存储器932所代表的存储器资源,用于存储可由处理组件922的执行的指令,例如应用程序。存储器932中存储的应用程序可以包括一个或一个以上的每一个对应于一组指令的模块。此外,处理组件915被配置为执行指令,以执行上述方法前述应用在所述网络侧设备的任意方法,例如,如图1所示方法。
网络侧设备900还可以包括一个电源组件926被配置为执行网络侧设备900的电源管理,一个有线或无线网络接口950被配置为将网络侧设备900连接到网络,和一个输入输出(I/O)接口958。网络侧设备900可以操作基于存储在存储器932的操作系统,例如Windows Server TM,Mac OS XTM,Unix TM,Linux TM,Free BSDTM或类似。
上述本公开提供的实施例中,分别从网络侧设备、UE的角度对本公开实施例提供的方法进行了介绍。为了实现上述本公开实施例提供的方法中的各功能,网络侧设备和UE可以包括硬件结构、软件模块,以硬件结构、软件模块、或硬件结构加软件模块的形式来实现上述各功能。上述各功能中的某个功能可以以硬件结构、软件模块、或者硬件结构加软件模块的方式来执行。
上述本公开提供的实施例中,分别从网络侧设备、UE的角度对本公开实施例提供的方法进行了介绍。为了实现上述本公开实施例提供的方法中的各功能,网络侧设备和UE可以包括硬件结构、软件模块,以硬件结构、软件模块、或硬件结构加软件模块的形式来实现上述各功能。上述各功能中的某个功能可以以硬件结构、软件模块、或者硬件结构加软件模块的方式来执行。
本公开实施例提供的一种通信装置。通信装置可包括收发模块和处理模块。收发模块可包括发送模块和/或接收模块,发送模块用于实现发送功能,接收模块用于实现接收功能,收发模块可以实现发送功能和/或接收功能。
通信装置可以是终端设备(如上述方法实施例中的终端设备),也可以是终端设备中的装置,还可以是能够与终端设备匹配使用的装置。或者,通信装置可以是网络设备,也可以是网络设备中的装置,还可以是能够与网络设备匹配使用的装置。
本公开实施例提供的另一种通信装置。通信装置可以是网络设备,也可以是终端设备(如上述方法实施例中的终端设备),也可以是支持网络设备实现上述方法的芯片、芯片系统、或处理器等,还可以是支持终端设备实现上述方法的芯片、芯片系统、或处理器等。该装置可用于实现上述方法实施例中描述的方法,具体可以参见上述方法实施例中的说明。
通信装置可以包括一个或多个处理器。处理器可以是通用处理器或者专用处理器等。例如可以是基带处理器或中央处理器。基带处理器可以用于对通信协议以及通信数据进行处理,中央处理器可以用于对通信装置(如,网络侧设备、基带芯片,终端设备、终端设备芯片,DU或CU等)进行控制,执行计算机程序,处理计算机程序的数据。
可选的,通信装置中还可以包括一个或多个存储器,其上可以存有计算机程序,处理器执行所述计算机程序,以使得通信装置执行上述方法实施例中描述的方法。可选的,所述存储器中还可以存储有数据。通信装置和存储器可以单独设置,也可以集成在一起。
可选的,通信装置还可以包括收发器、天线。收发器可以称为收发单元、收发机、或收发电路等,用于实现收发功能。收发器可以包括接收器和发送器,接收器可以称为接收机或接收电路等,用于实现接收功能;发送器可以称为发送机或发送电路等,用于实现发送功能。
可选的,通信装置中还可以包括一个或多个接口电路。接口电路用于接收代码指令并传输至处理器。处理器运行所述代码指令以使通信装置执行上述方法实施例中描述的方法。
通信装置为终端设备(如上述方法实施例中的终端设备):处理器用于执行图1-图6任一所示的方法。
通信装置为网络设备:收发器用于执行图7-图8任一所示的方法。
在一种实现方式中,处理器中可以包括用于实现接收和发送功能的收发器。例如该收发器可以是收发电路,或者是接口,或者是接口电路。用于实现接收和发送功能的收发电路、接口或接口电路可以是 分开的,也可以集成在一起。上述收发电路、接口或接口电路可以用于代码/数据的读写,或者,上述收发电路、接口或接口电路可以用于信号的传输或传递。
在一种实现方式中,处理器可以存有计算机程序,计算机程序在处理器上运行,可使得通信装置执行上述方法实施例中描述的方法。计算机程序可能固化在处理器中,该种情况下,处理器可能由硬件实现。
在一种实现方式中,通信装置可以包括电路,所述电路可以实现上述方法实施例中发送或接收或者通信的功能。本公开中描述的处理器和收发器可实现在集成电路(integrated circuit,IC)、模拟IC、射频集成电路RFIC、混合信号IC、专用集成电路(application specific integrated circuit,ASIC)、印刷电路板(printed circuit board,PCB)、电子设备等上。该处理器和收发器也可以用各种IC工艺技术来制造,例如互补金属氧化物半导体(complementary metal oxide semiconductor,CMOS)、N型金属氧化物半导体(nMetal-oxide-semiconductor,NMOS)、P型金属氧化物半导体(positive channel metal oxide semiconductor,PMOS)、双极结型晶体管(bipolar junction transistor,BJT)、双极CMOS(BiCMOS)、硅锗(SiGe)、砷化镓(Gas)等。
以上实施例描述中的通信装置可以是网络设备或者终端设备(如上述方法实施例中的终端设备),但本公开中描述的通信装置的范围并不限于此,而且通信装置的结构可以不受的限制。通信装置可以是独立的设备或者可以是较大设备的一部分。例如所述通信装置可以是:
(1)独立的集成电路IC,或芯片,或,芯片系统或子系统;
(2)具有一个或多个IC的集合,可选的,该IC集合也可以包括用于存储数据,计算机程序的存储部件;
(3)ASIC,例如调制解调器(Modem);
(4)可嵌入在其他设备内的模块;
(5)接收机、终端设备、智能终端设备、蜂窝电话、无线设备、手持机、移动单元、车载设备、网络设备、云设备、人工智能设备等等;
(6)其他等等。
对于通信装置可以是芯片或芯片系统的情况,芯片包括处理器和接口。其中,处理器的数量可以是一个或多个,接口的数量可以是多个。
可选的,芯片还包括存储器,存储器用于存储必要的计算机程序和数据。
本领域技术人员还可以了解到本公开实施例列出的各种说明性逻辑块(illustrative logical block)和步骤(step)可以通过电子硬件、电脑软件,或两者的结合进行实现。这样的功能是通过硬件还是软件来实现取决于特定的应用和整个系统的设计要求。本领域技术人员可以对于每种特定的应用,可以使用各种方法实现所述的功能,但这种实现不应被理解为超出本公开实施例保护的范围。
本公开实施例还提供一种确定侧链路时长的系统,该系统包括上述实施例中作为终端设备(如上述方法实施例中的第一终端设备)的通信装置和作为网络设备的通信装置,或者,该系统包括上述实施例中作为终端设备(如上述方法实施例中的第一终端设备)的通信装置和作为网络设备的通信装置。
本公开还提供一种可读存储介质,其上存储有指令,该指令被计算机执行时实现上述任一方法实施例的功能。
本公开还提供一种计算机程序产品,该计算机程序产品被计算机执行时实现上述任一方法实施例的功能。
在上述实施例中,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。所述计算机程序产品包括一个或多个计算机程序。在计算机上加载和执行所述计算机程序时,全部或部分地产生按照本公开实施例所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。所述计算机程序可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一个计算机可读存储介质传输,例如,所述计算机程序可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线(digital subscriber line,DSL))或无线(例如红外、无线、微波等)方式向另一个网站站 点、计算机、服务器或数据中心进行传输。所述计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质(例如,软盘、硬盘、磁带)、光介质(例如,高密度数字视频光盘(digital video disc,DVD))、或者半导体介质(例如,固态硬盘(solid state disk,SSD))等。
本领域普通技术人员可以理解:本公开中涉及的第一、第二等各种数字编号仅为描述方便进行的区分,并不用来限制本公开实施例的范围,也表示先后顺序。
本公开中的至少一个还可以描述为一个或多个,多个可以是两个、三个、四个或者更多个,本公开不做限制。在本公开实施例中,对于一种技术特征,通过“第一”、“第二”、“第三”、“A”、“B”、“C”和“D”等区分该种技术特征中的技术特征,该“第一”、“第二”、“第三”、“A”、“B”、“C”和“D”描述的技术特征间无先后顺序或者大小顺序。
本领域技术人员在考虑说明书及实践这里公开的发明后,将容易想到本发明的其它实施方案。本公开旨在涵盖本发明的任何变型、用途或者适应性变化,这些变型、用途或者适应性变化遵循本发明的一般性原理并包括本公开未公开的本技术领域中的公知常识或惯用技术手段。说明书和实施例仅被视为示例性的,本公开的真正范围和精神由下面的权利要求指出。
应当理解的是,本公开并不局限于上面已经描述并在附图中示出的精确结构,并且可以在不脱离其范围进行各种修改和改变。本公开的范围仅由所附的权利要求来限制。

Claims (17)

  1. 一种预编码矩阵确定方法,其特征在于,应用于基站,包括:
    获取来自用户设备UE的信道特征信息q k,t以及压缩率参数γ,其中,k用于指示第k个UE,t用于指示时间戳;
    基于所述压缩率参数γ和信道特征信息q k,t确定预编码矩阵F t、W t
  2. 如权利要求1所述的方法,其特征在于,所述信道特征信息q k,t为:所述UE端的第一子网络对信道估计信息h k,t进行处理后所得到的;其中,所述信道特征信息q k,t的维度低于所述信道估计信息h k,t的维度;所述压缩率参数γ为所述第一子网络的压缩率;
    以及,所述第一子网络包括第一卷积神经网络和第一长短期记忆网络LSTM,其中,所述第一卷积神经网络和第一LSTM以类残差网络的方式进行连接。
  3. 如权利要求2所述的方法,其特征在于,所述基于所述压缩率参数γ和信道特征信息q k,t确定预编码矩阵F t、W t,包括:
    确定与所述压缩率参数γ对应的第二子网络;
    将所述信道特征信息q k,t输入至所述第二子网络中以输出所述预编码矩阵F t、W t
  4. 如权利要求3所述的方法,其特征在于,所述第二子网络由两个并行的第一网络模块和第二网络模块组成;
    所述第一网络模块包括依次连接的第二LSTM、第一残差神经网络和计算模块,所述第一残差神经网络包括第一一维卷积网络和由n个卷积神经网络组成的第一残差单元,n为正整数,所述第一一维卷积网络和第一残差单元以类残差网络的方式进行连接;其中,所述信道特征信息q k,t经由所述第二LSTM和第一残差神经网络处理以得到移相器角度Θ t,所述计算模块用于基于所述移相器角度Θ t计算出预编码矩阵F t并输出;
    所述第二网络模块包括依次连接的第三LSTM、第二残差神经网络和归一化处理模块,所述归一化模块的输入端还与所述计算模块的输出端连接,所述第二残差神经网络包括第二一维卷积网络和由n个卷积神经网络组成的第二残差单元,所述第二一维卷积网络和第二残差单元以类残差网络的方式进行连接;其中,所述信道特征信息q k,t经由所述第三LSTM、第二残差神经网络处理和归一化处理模块以得到预编码矩阵W t并输出;
    以及,所述第一残差神经网络对应的矩阵大小与所述第二残差神经网络对应的矩阵大小不同。
  5. 如权利要求3所述的方法,其特征在于,所述基站包括至少一个第二子网络,其中,不同第二子网络的压缩率不同;
    所述方法还包括:
    基于所述第一子网络的结构在所述基站端部署至少一个模拟子网络,其中,所述模拟子网络与所述第一子网络结构相同,以及不同模拟子网络的压缩率不同;
    连接具有对应压缩率的所述模拟子网络和所述第二子网络以得到至少一个预编码矩阵确定网络;
    获取来自各个UE的信道估计信息h k,t
    基于各个UE的信道估计信息h k,t确定样本数据集H;
    基于所述样本数据集H对所述预编码矩阵确定网络进行训练。
  6. 如权利要求5所述的方法,其特征在于,所述方法还包括:
    确定训练完成后的所述模拟子网络对应的网络参数;
    将所述网络参数发送至UE。
  7. 一种预编码矩阵确定方法,其特征在于,应用于UE,包括:
    确定信道估计信息h k,t,其中,k用于指示第k个UE,t用于指示所述信道估计信息h k,t对应的时间戳;
    基于所述信道估计信息h k,t确定信道特征信息q k,t,其中,所述信道特征信息q k,t的维度低于所述信道估计信息h k,t的维度;
    向基站发送信道特征信息q k,t以及压缩率参数γ。
  8. 如权利要求7所述的方法,其特征在于,所述基于所述信道估计信息h k,t确定信道特征信息q k,t,包括:
    将所述信道估计信息h k,t输入至第一子网络以输出所述信道特征信息q k,t
    其中,所述第一子网络包括第一卷积神经网络和第一LSTM,其中,所述第一卷积神经网络和第一LSTM以类残差网络的方式进行连接,所述压缩率参数γ为所述第一子网络的压缩率。
  9. 如权利要求8所述的方法,其特征在于,所述方法还包括:
    接收基站发送的网络参数;
    基于所述网络参数部署所述第一子网络。
  10. A precoding matrix determination apparatus, comprising:
    an obtaining module, configured to obtain channel characteristic information q_{k,t} and a compression ratio parameter γ from a UE, wherein k indicates the k-th UE and t indicates the time stamp corresponding to the channel characteristic information q_{k,t};
    a determining module, configured to determine precoding matrices F_t and W_t based on the compression ratio parameter γ and the channel characteristic information q_{k,t}.
  11. A precoding matrix determination apparatus, comprising:
    a determining module, configured to determine channel estimation information h_{k,t}, wherein k indicates the k-th UE and t indicates the time stamp corresponding to the channel estimation information h_{k,t};
    the determining module being further configured to determine channel characteristic information q_{k,t} based on the channel estimation information h_{k,t}, wherein the dimension of the channel characteristic information q_{k,t} is lower than the dimension of the channel estimation information h_{k,t};
    a sending module, configured to send the channel characteristic information q_{k,t} and a compression ratio parameter γ to a base station.
  12. A communication apparatus, comprising a processor and a memory, wherein a computer program is stored in the memory, and the processor executes the computer program stored in the memory to cause the apparatus to perform the method according to any one of claims 1 to 6.
  13. A communication apparatus, comprising a processor and a memory, wherein a computer program is stored in the memory, and the processor executes the computer program stored in the memory to cause the apparatus to perform the method according to any one of claims 7 to 9.
  14. A communication apparatus, comprising a processor and an interface circuit, wherein
    the interface circuit is configured to receive code instructions and transmit them to the processor;
    the processor is configured to run the code instructions to perform the method according to any one of claims 1 to 6.
  15. A communication apparatus, comprising a processor and an interface circuit, wherein
    the interface circuit is configured to receive code instructions and transmit them to the processor;
    the processor is configured to run the code instructions to perform the method according to any one of claims 7 to 9.
  16. A computer-readable storage medium storing instructions which, when executed, cause the method according to any one of claims 1 to 6 to be implemented.
  17. A computer-readable storage medium storing instructions which, when executed, cause the method according to any one of claims 7 to 9 to be implemented.
PCT/CN2022/078445 2022-02-28 2022-02-28 Precoding matrix determination method, and device/storage medium/apparatus WO2023159614A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2022/078445 WO2023159614A1 (zh) 2022-02-28 2022-02-28 Precoding matrix determination method, and device/storage medium/apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2022/078445 WO2023159614A1 (zh) 2022-02-28 2022-02-28 Precoding matrix determination method, and device/storage medium/apparatus

Publications (1)

Publication Number Publication Date
WO2023159614A1 (zh)

Family

ID=87764454

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/078445 WO2023159614A1 (zh) Precoding matrix determination method, and device/storage medium/apparatus 2022-02-28 2022-02-28

Country Status (1)

Country Link
WO (1) WO2023159614A1 (zh)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102104452A (zh) * 2009-12-22 2011-06-22 华为技术有限公司 Channel state information feedback method, channel state information acquisition method, and device
US20130142228A1 (en) * 2011-12-02 2013-06-06 Metanoia Communications Inc. Using ACE Symbols With Precoding To Save Memory And Complexity In Receiver
CN105812042A (zh) * 2016-04-27 2016-07-27 电子科技大学 Data transmission method for a massive MIMO system with C-RAN architecture based on compressed sensing
US20220038145A1 (en) * 2019-03-29 2022-02-03 Huawei Technologies Co., Ltd. Communication method and apparatus
WO2021217519A1 (zh) * 2020-04-29 2021-11-04 华为技术有限公司 Method and apparatus for adjusting a neural network

Legal Events

Date Code Title Description
121 Ep: the EPO has been informed by WIPO that EP was designated in this application
    Ref document number: 22927895
    Country of ref document: EP
    Kind code of ref document: A1