WO2023115254A1 - Data processing method and device - Google Patents

Data processing method and device

Info

Publication number
WO2023115254A1
Authority
WO
WIPO (PCT)
Prior art keywords
training data
neural network
codebook
present application
precoding matrix
Prior art date
Application number
PCT/CN2021/139596
Other languages
French (fr)
Chinese (zh)
Inventor
Xiao Han (肖寒)
Original Assignee
Guangdong OPPO Mobile Telecommunications Corp., Ltd. (Oppo广东移动通信有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong OPPO Mobile Telecommunications Corp., Ltd.
Priority to PCT/CN2021/139596 priority Critical patent/WO2023115254A1/en
Publication of WO2023115254A1 publication Critical patent/WO2023115254A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04B TRANSMISSION
    • H04B7/00 Radio transmission systems, i.e. using radiation field
    • H04B7/02 Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas
    • H04B7/04 Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas
    • H04B7/0413 MIMO systems
    • H04B7/0417 Feedback systems
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04B TRANSMISSION
    • H04B7/00 Radio transmission systems, i.e. using radiation field
    • H04B7/02 Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas
    • H04B7/04 Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas
    • H04B7/0413 MIMO systems
    • H04B7/0456 Selection of precoding matrices or codebooks, e.g. using matrices for antenna weighting

Definitions

  • the present application relates to the field of communication technology, and more specifically, to a method and device for processing data.
  • a scheme based on a codebook can be used to realize channel information feedback. Due to the elegant design of the codebook, the codebook-based channel information feedback scheme can adapt to channel information feedback in different scenarios; that is, the scheme has high generalization.
  • the channel information needs to be mapped to a precoding matrix in the codebook, and feedback of the channel information is realized by feeding back the precoding matrix. Since the precoding matrices are discrete, the mapping process introduces quantization loss, which leads to low accuracy of the codebook-based precoding scheme.
  • the present application provides a data processing method and device to solve the problem of difficulty in obtaining training data for a neural network-based channel feedback model.
  • a method for processing data includes: using a codebook to generate a plurality of training data, where the plurality of training data are used to train a neural network model, and the neural network model is used to perform channel information feedback.
  • a device for processing data includes: a first generation unit, configured to generate a plurality of training data using a codebook, where the plurality of training data are used to train a neural network model, and the neural network model is used for channel information feedback.
  • a device for processing data includes a processor, a memory, and a communication interface, where the memory is used to store one or more computer programs, and the processor is used to call the computer programs in the memory to cause the terminal device to execute the method described in the first aspect.
  • the embodiment of the present application provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program, and the computer program enables the terminal device to perform some or all of the steps in the method of the first aspect above.
  • an embodiment of the present application provides a computer program product, where the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to enable the terminal to execute some or all of the steps in the method of the above-mentioned first aspect.
  • the computer program product can be a software installation package.
  • an embodiment of the present application provides a chip, where the chip includes a memory and a processor, and the processor can call and run a computer program from the memory to implement some or all of the steps described in the method of the first aspect above.
  • a computer program product including a program, the program causes a computer to execute the method described in the first aspect.
  • a computer program causes a computer to execute the method described in the first aspect.
  • the present application uses a codebook to generate a plurality of training data and uses the training data to train a neural network-based channel feedback model. Therefore, the present application can reduce the requirement for, and the overhead of, acquiring channel data from actual scenarios, and can enhance the generalization of the neural network by leveraging the generalization capability of the codebook.
  • Fig. 1 is a wireless communication system applied in the embodiment of the present application.
  • FIG. 2 is a structural diagram of a neural network applicable to an embodiment of the present application.
  • FIG. 3 is a structural diagram of a convolutional neural network applicable to an embodiment of the present application.
  • Fig. 4 is an example diagram of an image compression process based on an autoencoder.
  • Fig. 5 is an example diagram of an image generation process based on a variational autoencoder.
  • Fig. 6 is a schematic diagram of a neural network-based channel feedback process.
  • Fig. 7 is a schematic flowchart of a method for processing data provided by an embodiment of the present application.
  • FIG. 8 is a schematic diagram of a method for grouping precoding matrices of a codebook provided by an embodiment of the present application.
  • FIG. 9 is a channel information feedback model based on a variational autoencoder provided in an embodiment of the present application.
  • FIG. 10 is a schematic structural diagram of a variational autoencoder provided by an embodiment of the present application.
  • Fig. 11 is a schematic structural diagram of an apparatus for data processing provided by an embodiment of the present application.
  • Fig. 12 is a schematic structural diagram of a device provided by an embodiment of the present application.
  • FIG. 1 is a wireless communication system 100 applied in an embodiment of the present application.
  • the wireless communication system 100 may include a network device 110 and a terminal device 120 .
  • the network device 110 may be a device that communicates with the terminal device 120 .
  • the network device 110 can provide communication coverage for a specific geographical area, and can communicate with the terminal device 120 located in the coverage area.
  • Figure 1 exemplarily shows one network device and two terminals.
  • the wireless communication system 100 may include multiple network devices, and other numbers of terminal devices may be included within the coverage area of each network device, which is not limited in this embodiment of the present application.
  • the wireless communication system 100 may further include other network entities such as a network controller and a mobility management entity, which is not limited in this embodiment of the present application.
  • the technical solutions of the embodiments of the present application can be applied to various communication systems, for example: the fifth generation (5G) system or new radio (NR), the long term evolution (LTE) system, the LTE frequency division duplex (FDD) system, LTE time division duplex (TDD), etc.
  • the technical solutions provided in this application can also be applied to future communication systems, such as the sixth generation mobile communication system, and satellite communication systems, and so on.
  • the terminal equipment in the embodiments of the present application may also be called user equipment (UE), access terminal, subscriber unit, subscriber station, mobile station (MS), mobile terminal (MT), remote station, remote terminal, mobile device, user terminal, terminal, wireless communication device, user agent, or user device.
  • the terminal device in the embodiment of the present application may be a device that provides voice and/or data connectivity to users, and can be used to connect people, objects and machines, such as handheld devices with wireless connection functions, vehicle-mounted devices, and the like.
  • the terminal device in the embodiments of the present application may be a mobile phone, tablet computer (Pad), notebook computer, palmtop computer, mobile internet device (MID), wearable device, virtual reality (VR) device, augmented reality (AR) device, a wireless terminal in industrial control, a wireless terminal in self driving, a wireless terminal in remote medical surgery, a wireless terminal in smart grid, a wireless terminal in transportation safety, a wireless terminal in smart city, a wireless terminal in smart home, etc.
  • UE can be used to act as a base station.
  • a UE may act as a scheduling entity that provides sidelink signals between UEs in V2X or D2D, etc.
  • for example, a cellular phone and an automobile may communicate with each other using sidelink signals, and a cellular phone may communicate with a smart home device without relaying the signals through a base station.
  • the network device in this embodiment of the present application may be a device for communicating with a terminal device, and the network device may also be called an access network device or a wireless access network device, for example, the network device may be a base station.
  • the network device in this embodiment of the present application may refer to a radio access network (radio access network, RAN) node (or device) that connects a terminal device to a wireless network.
  • the base station can broadly cover, or be replaced by, various names such as: Node B (NodeB), evolved base station (evolved NodeB, eNB), next generation base station (next generation NodeB, gNB), relay station, access point, transmission and reception point (transmitting and receiving point, TRP), transmission point (transmitting point, TP), primary station (MeNB), secondary station (SeNB), multi-standard radio (MSR) node, home base station, network controller, access node, wireless node, access point (AP), transmission node, transceiver node, baseband unit (BBU), remote radio unit (RRU), active antenna unit (AAU), remote radio head (RRH), central unit (CU), distributed unit (DU), positioning nodes, etc.
  • a base station may be a macro base station, a micro base station, a relay node, a donor node, or the like, or a combination thereof.
  • a base station may also refer to a communication module, modem or chip used to be set in the aforementioned equipment or device.
  • the base station can also be a mobile switching center, a device that undertakes the function of a base station in D2D, vehicle-to-everything (V2X), machine-to-machine (M2M) communication, and a device in a 6G network.
  • Base stations can support networks of the same or different access technologies. The embodiment of the present application does not limit the specific technology and specific device form adopted by the network device.
  • Base stations can be fixed or mobile.
  • a helicopter or drone can be configured to act as a mobile base station, and one or more cells can move according to the location of the mobile base station.
  • a helicopter or drone may be configured to serve as a device in communication with another base station.
  • the network device in this embodiment of the present application may refer to a CU or a DU, or, the network device includes a CU and a DU.
  • a gNB may also include an AAU.
  • Network equipment and terminal equipment can be deployed on land, including indoors or outdoors, hand-held or vehicle-mounted; they can also be deployed on water; they can also be deployed on aircraft, balloons and satellites in the air.
  • the scenarios where the network device and the terminal device are located are not limited.
  • a codebook-based scheme can be used to realize channel feature extraction and feedback. That is, after performing channel estimation, the receiver selects, according to the channel estimation result and a certain optimization criterion, the precoding matrix that best matches the current channel from a preset precoding codebook, and feeds back the precoding matrix index (PMI) information to the transmitter through the air-interface feedback link, so that the transmitter can implement precoding.
  • the receiver may also feed back the measured channel quality indication (CQI) to the transmitter for the transmitter to implement adaptive modulation and coding.
  • Channel feedback may also be called channel state information (CSI) feedback.
  • Due to the sophisticated design of the codebook, the multiple precoding matrices in the codebook can represent channel information in different scenarios. Therefore, the codebook-based channel feedback scheme can adapt to channel information feedback tasks in different scenarios.
  • mapping channel information to a precoding matrix in the codebook discretely quantizes continuous channel information, which makes the mapping process lossy and decreases the accuracy of the fed-back channel information, thereby reducing precoding performance.
  • Neural networks are commonly used architectures in artificial intelligence (AI). Common neural networks include the convolutional neural network (CNN), recurrent neural network (RNN), deep neural network (DNN), etc.
  • the neural network applicable to the embodiment of the present application is introduced below with reference to FIG. 2 .
  • the neural network shown in FIG. 2 can be divided into three types according to the position of different layers: input layer 210 , hidden layer 220 and output layer 230 .
  • the first layer is the input layer 210
  • the last layer is the output layer 230
  • the middle layer between the first layer and the last layer is the hidden layer 220 .
  • the input layer 210 is used to input data. Taking a communication system as an example, the input data may be, for example, a received signal received by a receiver.
  • the hidden layer 220 is used to process the input data, for example, to decompress the received signal.
  • the output layer 230 is used for outputting processed output data, for example, outputting a decompressed signal.
  • the neural network includes multiple layers, each layer includes multiple neurons, and the neurons between layers can be fully connected or partially connected. For connected neurons, the output of neurons in the previous layer can be used as the input of neurons in the next layer.
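The layered, fully connected structure described above can be sketched as follows; all layer sizes here are illustrative choices, not values from the present application:

```python
import numpy as np

def dense(x, W, b, activation=np.tanh):
    # One fully connected layer: the previous layer's output feeds the next layer.
    return activation(W @ x + b)

rng = np.random.default_rng(0)
x = rng.standard_normal(8)                           # input layer: 8 received-signal samples
W1, b1 = rng.standard_normal((16, 8)), np.zeros(16)  # hidden layer weights
W2, b2 = rng.standard_normal((4, 16)), np.zeros(4)   # output layer weights
h = dense(x, W1, b1)                                 # hidden-layer activations
y = dense(h, W2, b2, activation=lambda z: z)         # linear output layer
print(y.shape)  # (4,)
```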
  • deep learning algorithms based on neural networks have been proposed in recent years.
  • such deep learning algorithms introduce more hidden layers into the neural network.
  • This neural network model is widely used in pattern recognition, signal processing, optimization combination, anomaly detection and so on.
  • CNN is a deep neural network with a convolutional structure, and its structure can be shown in Figure 3.
  • the neural network shown in FIG. 3 may include an input layer 310 , a convolutional layer 320 , a pooling layer 330 , a fully connected layer 340 , and an output layer 350 .
  • Each convolutional layer 320 can include many convolution kernels.
  • the convolution kernel is also called an operator, and its function can be regarded as a filter for extracting specific information from the input signal. In essence, the convolution kernel can be a weight matrix, and this weight matrix is usually predefined.
  • weight values in these weight matrices need to be obtained through a lot of training in practical applications, and each weight matrix formed by the weight values obtained through training can extract information from the input signal, thereby helping CNN to make correct predictions.
  • the initial convolutional layers often extract more general features, which can also be called low-level features; as the depth of the CNN increases, the features extracted by later convolutional layers become more and more complex.
  • Pooling layer 330: since it is often necessary to reduce the number of training parameters, a pooling layer is often periodically introduced after the convolutional layer. For example, one convolutional layer may be followed by one pooling layer as shown in Figure 3, or multiple convolutional layers may be followed by one or more pooling layers. In signal processing, the sole purpose of the pooling layer is to reduce the spatial size of the extracted information.
  • the introduction of the convolutional layer 320 and the pooling layer 330 effectively controls the sharp increase of network parameters, limits the number of parameters, and exploits the characteristics of the local structure, improving the robustness of the algorithm.
  • Fully connected layer 340: after processing by the convolutional layer 320 and the pooling layer 330, the CNN is still not able to output the required output information, because, as mentioned above, the convolutional layer 320 and the pooling layer 330 only extract features and reduce the number of parameters brought by the input data. In order to generate the final output information (e.g., the bitstream of the original information transmitted by the transmitter), the CNN also needs to utilize the fully connected layer 340.
  • the fully connected layer 340 may include a plurality of hidden layers, and the parameters contained in these hidden layers may be pre-trained according to relevant training data of a specific task type; for example, the task type may include performing channel estimation based on the pilot signal received by the receiver.
  • the output layer 350 for outputting results.
  • the output layer 350 is provided with a loss function (for example, a loss function similar to categorical cross entropy), which is used to calculate the prediction error, that is, to evaluate the degree of difference between the result output by the CNN model (also called the predicted value) and the ideal result (also called the true value).
  • In order to minimize the loss function, the CNN model needs to be trained.
  • the CNN model may be trained using a backpropagation algorithm (BP).
  • the training process of BP consists of forward propagation process and back propagation process.
  • In forward propagation (the propagation from 310 to 350 in Fig. 3), the input data is fed into the above layers of the CNN model, processed layer by layer, and transmitted to the output layer.
  • Then, taking minimization of the above loss function as the optimization goal, backpropagation is performed (the propagation from 350 to 310 in Fig. 3): the partial derivatives of the optimization goal with respect to the weight of each neuron are computed layer by layer, and they constitute the gradient of the optimization goal with respect to the weight vector, which is used as the basis for modifying the model weights.
  • the training process of CNN is completed in the weight modification process. When the above error reaches the expected value, the training process of CNN ends.
  • the CNN shown in Figure 3 is only an example of a convolutional neural network.
  • the convolutional neural network may also exist in the form of other network models, which is not limited in this embodiment of the present application.
  • Autoencoders are a class of artificial neural networks used in semi-supervised and unsupervised learning.
  • An autoencoder is a neural network that takes an input signal as the training target.
  • An autoencoder can include an encoder (encoder) and a decoder (decoder).
  • the input of the encoder may be an image to be compressed.
  • a code stream (code) is output.
  • the number of bits occupied by the code stream output by the encoder is generally smaller than the number of bits occupied by the image to be compressed. For example, the number of bits occupied by the code stream output by the encoder shown in FIG. 4 may be less than 784 bits. From this, it can be seen that the encoder can achieve a compressed representation of the entity input to the encoder.
  • the input of the decoder can be code stream.
  • the code stream may be a code stream output by an encoder.
  • the output of the decoder is the decompressed image. It can be seen from Fig. 4 that the decompressed image is consistent with the image to be compressed input to the encoder. Therefore, the decoder can realize the reconstruction of the original entity.
  • the data to be compressed (such as the image to be compressed in Figure 4) can be used as both the input of the autoencoder (i.e., the input of the encoder) and the label (i.e., the target output of the decoder), so that the encoder and the decoder are jointly trained end-to-end.
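The end-to-end joint training described above can be illustrated with a minimal linear autoencoder in which the input also serves as the training label; the dimensions, learning rate, and linear layers are illustrative assumptions, not the application's model:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 32))          # data to be compressed (e.g. flattened images)
W_enc = 0.1 * rng.standard_normal((8, 32))  # encoder: 32 -> 8 (compressed code stream)
W_dec = 0.1 * rng.standard_normal((32, 8))  # decoder: 8 -> 32 (reconstruction)

lr = 0.01
losses = []
for _ in range(200):
    code = X @ W_enc.T                      # encoder output (code stream)
    X_hat = code @ W_dec.T                  # decoder output (reconstruction)
    err = X_hat - X                         # the input is also the training target
    losses.append(float(np.mean(err ** 2)))
    # end-to-end joint gradient step for encoder and decoder
    g_dec = 2 * err.T @ code / len(X)
    g_enc = 2 * (err @ W_dec).T @ X / len(X)
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc

print(losses[0], losses[-1])  # reconstruction error decreases
```

The code stream occupies fewer dimensions (8) than the input (32), mirroring the compression property described for the encoder in Figure 4.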
  • Variational autoencoders introduce probability distributions in the encoder.
  • the probability distribution can make the entity input to the encoder be mapped to the code stream conforming to the probability distribution.
  • different values sampled from the probability distribution can be combined, so that the input of the decoder of the variational autoencoder can be multiple different code streams.
  • the decoder can decode different code streams output by the encoder, so as to output multiple entities similar to the input entity.
  • FIG. 5 is an example diagram of an image generation process based on a variational autoencoder.
  • the input of the encoder can be the original image, and the output of the encoder can be the mean and variance.
  • the code stream can be calculated from the mean and variance output by the encoder together with values sampled from the probability distribution.
  • the code stream can be input into the decoder, and the decoder decodes the code stream and outputs an image similar to the original image.
  • the probability distribution may be a normal distribution.
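The sampling step described above can be sketched with the common reparameterization form z = mu + sigma * eps, where eps is drawn from a standard normal distribution; the mean and variance values below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
# Assume the encoder has produced a mean and a standard deviation for a 4-dim code.
mu = np.array([0.5, -1.0, 0.0, 2.0])
sigma = np.array([0.1, 0.2, 1.0, 0.5])

# Combine the encoder's mean/variance with values sampled from a standard
# normal distribution to form code streams conforming to N(mu, sigma^2).
eps = rng.standard_normal((1000, 4))
z = mu + sigma * eps

print(z.mean(axis=0))  # close to mu
print(z.std(axis=0))   # close to sigma
```

Because each sampled eps yields a different code stream, the decoder can take many distinct inputs and output multiple entities similar to the original, as described above.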
  • Channel feedback can be realized based on AI, such as neural network-based channel feedback.
  • the network device side can restore the channel information fed back by the terminal device side as much as possible through the neural network.
  • This neural network-based channel feedback can restore channel information, and also provides the possibility of reducing channel feedback overhead on the terminal device side.
  • a deep learning autoencoder can be used to implement channel feedback.
  • the input of the AI-based channel feedback model can be channel information; that is, the channel information can be regarded as the image to be compressed that is input to the autoencoder.
  • the AI-based channel feedback model can perform compressed feedback on channel information.
  • the AI-based channel feedback model can reconstruct the compressed channel information, thereby retaining the channel information to a greater extent.
  • FIG. 6 is a schematic diagram of an AI-based channel feedback process.
  • the channel feedback model shown in Fig. 6 includes an encoder and a decoder.
  • the encoder and decoder are respectively deployed at the receiving end (receive, Rx) and the sending end (transmit, Tx).
  • the receiving end can obtain the channel information matrix through channel estimation.
  • the channel information matrix can be compressed and encoded by the neural network of the encoder to form a compressed bit stream (codeword).
  • the compressed bit stream can be fed back to the sending end through an air interface feedback link.
  • the sending end can decode or restore the channel information according to the feedback bit stream through the decoder, so as to obtain complete feedback channel information.
  • the AI-based channel feedback model may have the structure shown in FIG. 6 .
  • the encoder may include several fully connected layers, and the decoder may include a residual network.
  • FIG. 6 is only an example, and the present application does not limit the structure of the network model inside the encoder and decoder, and the structure of the network model can be flexibly designed.
  • the channel feedback based on the neural network can directly compress the channel information. Therefore, the accuracy of the channel information fed back by the neural network-based channel feedback scheme is relatively high.
  • the channel feedback scheme based on neural network has better performance.
  • the performance of neural network-based channel feedback schemes is poor when the training data does not match the test data, for example, when the scenarios of the training data and the test data are inconsistent. Therefore, channel feedback schemes based on neural networks have the problem of low generalization.
  • Related technologies use multi-scenario training data to train the neural network model to improve its generalization. However, in actual operation, it is difficult to obtain such a large-scale, high-diversity training data set.
  • FIG. 7 is a schematic flowchart of a method for processing data provided by an embodiment of the present application.
  • the method shown in FIG. 7 may include step S710.
  • Step S710: use the codebook to generate a plurality of training data.
  • Multiple training data can be used to train the neural network model.
  • the neural network model can be used for channel information feedback.
  • Training data may include channel information.
  • the channel information may be a tensor composed of channel feature vectors of multiple subbands and multiple layers.
  • the channel information may be a tensor with a dimension of N ⁇ M ⁇ S ⁇ I.
  • N may be the number of transmitting antennas, and the value of N may be greater than or equal to 2.
  • M may be the number of subbands, and the value of M may be greater than or equal to 1.
  • S may be the number of transmission layers, and the value of S may be greater than or equal to 1.
  • the codebook may be the codebook used in codebook-based channel feedback in the related art.
  • a codebook may include multiple precoding matrices.
  • the present application does not limit the form of the precoding matrix.
  • the precoding matrix may be in the form of a vector, that is, the codebook may include multiple precoding vectors.
  • the codebook may be implemented based on a discrete Fourier transform (discrete fourier transform, DFT) vector, that is, the precoding matrix may include a DFT vector.
  • the length of the DFT vector may be N
  • the number of DFT vectors in the codebook may be N 1 N 2 .
  • N 1 and N 2 may be the number of antenna ports.
  • N 1 may be the number of antenna ports in the first dimension
  • N 2 may be the number of antenna ports in the second dimension.
  • N may be the total number of antenna ports.
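As a sketch, a (non-oversampled) codebook of N 1 N 2 DFT vectors of length N = N 1 N 2 can be generated via the Kronecker product of one-dimensional DFT vectors; the Kronecker construction is a common convention assumed here, and the port counts are illustrative:

```python
import numpy as np

def dft_codebook(N1, N2):
    """Generate the N1*N2 two-dimensional DFT precoding vectors (no oversampling).

    Each vector is the Kronecker product of a length-N1 and a length-N2
    one-dimensional DFT vector, giving total length N = N1 * N2.
    """
    x = [np.exp(2j * np.pi * m * np.arange(N1) / N1) for m in range(N1)]
    u = [np.exp(2j * np.pi * n * np.arange(N2) / N2) for n in range(N2)]
    return [np.kron(x[m], u[n]) for m in range(N1) for n in range(N2)]

cb = dft_codebook(N1=4, N2=2)
print(len(cb), cb[0].shape)  # 8 vectors of length N = 8
```

Distinct DFT vectors in this construction are mutually orthogonal, so each vector points to a different spatial direction (beam).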
  • the precoding matrix in the codebook can represent channel information in different scenarios, that is, the codebook naturally has generalization ability.
  • the multiple pieces of training data generated by using the codebook may also include channel information of multiple scenarios. Therefore, using the training data generated by the codebook to train the neural network model can enhance the generalization of the neural network model.
  • the process of using codebooks to generate training data is much easier than obtaining diverse training data in actual scenarios. Therefore, using the technical solution of the present application, a neural network model with high generalization can be trained more simply, so as to realize high-precision and high-performance channel information feedback.
  • the method shown in FIG. 7 may further include step S720.
  • Step S720: perform oversampling processing on multiple precoding matrices in the codebook.
  • Oversampling can be to insert more precoding matrices between adjacent precoding matrices. It can be understood that each precoding matrix may correspond to a spatial angle orientation, and there is a certain angular interval between adjacent precoding matrices. Through oversampling processing, the angle interval between adjacent precoding matrices can be made smaller, so as to achieve the purpose of fine quantization of angle.
  • the oversampling factors of the first dimension and the second dimension of the two-dimensional array antenna may be O 1 and O 2 respectively.
  • Taking the precoding matrix as a two-dimensional DFT vector as an example, the oversampled two-dimensional DFT vector a m,n can be expressed as a m,n = x m ⊗ u n , where x m and u n are oversampled one-dimensional DFT vectors, which can be expressed as:
  • x m = [1, exp(j2πm/(N 1 O 1 )), ..., exp(j2π(N 1 −1)m/(N 1 O 1 ))] T
  • u n = [1, exp(j2πn/(N 2 O 2 )), ..., exp(j2π(N 2 −1)n/(N 2 O 2 ))] T
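The oversampled two-dimensional DFT vector above can be generated as follows; combining x m and u n via the Kronecker product is an assumption consistent with common DFT codebook designs, and the sizes are illustrative:

```python
import numpy as np

def oversampled_beam(m, n, N1, N2, O1, O2):
    """Oversampled 2D DFT vector a_{m,n} built from x_m and u_n (see formulas above)."""
    x_m = np.exp(2j * np.pi * m * np.arange(N1) / (N1 * O1))
    u_n = np.exp(2j * np.pi * n * np.arange(N2) / (N2 * O2))
    return np.kron(x_m, u_n)

N1, N2, O1, O2 = 4, 4, 4, 4
# Oversampling inserts more beams between adjacent beams: N1*O1 x N2*O2 directions.
beams = [oversampled_beam(m, n, N1, N2, O1, O2)
         for m in range(N1 * O1) for n in range(N2 * O2)]
print(len(beams))  # 256 finely quantized beams instead of N1*N2 = 16
```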
  • a precoding matrix can represent a beam.
  • the actual channel information can consist of information from one or more beams. Therefore, one or more precoding matrices can be selected in the codebook to form a training data to simulate real channel information.
  • the present application does not limit the method of selecting the precoding matrix in the codebook to generate the training data.
  • the precoding matrix may be selected in a grouping manner to generate training data.
  • Multiple precoding matrices can belong to multiple groups.
  • the plurality of groups may include a first group, and the plurality of training data may include the first training data.
  • the first training data can be generated using the precoding matrices in the first group. For example, all the precoding matrices contained in the first training data can be selected from the first group. It should be noted that the present application does not limit the number of precoding matrices in the first group; for example, it may be one or more.
  • the grouping can be determined according to the beam situation corresponding to the precoding matrix. For example, real channel information often includes information about multiple similar beams. Therefore, precoding matrices corresponding to adjacent or similar beams may be grouped into one group. It can be seen that selecting training data from a group can make the training data closer to real channel information.
  • FIG. 8 is a schematic diagram of a grouping method provided by an embodiment of the present application.
  • a grid point (circular point in FIG. 8) on the two-dimensional grid represents a precoding matrix.
  • a grid block (rectangular box in FIG. 8 ) on a two-dimensional grid can represent a group.
  • the first precoding matrix may be any grid point on the two-dimensional grid.
  • grid point 811 may represent the first precoding matrix.
  • the first group can be any rectangular box on the two-dimensional grid.
  • block 821 may represent the first grouping.
  • By adjusting the first beam of a grid block, i.e. the first beam (the shaded circular point in FIG. 8), the starting position of the group can be adjusted.
  • By adjusting the size of the grid block, the number of precoding matrices in the group can be adjusted.
  • the precoding matrices in one group may be continuous or spaced, which is not limited in this application.
  • grouping can be controlled by parameters L 1 , L 2 , s 1 , s 2 , p 1 , p 2 .
  • L 1 represents the size of the grid block in the first dimension.
  • L 2 represents the size of the grid block in the second dimension.
  • L 1 × L 2 represents the size of the grid block.
  • s 1 represents the distance between the first beams of adjacent grid blocks in the first dimension.
  • s 2 represents the distance between the first beams of adjacent grid blocks in the second dimension.
  • p 1 represents the distance between adjacent beams in a grid block in the first dimension.
  • p 2 represents the distance between adjacent beams in a grid block in the second dimension.
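The role of the six parameters can be sketched as follows. The overall grid size G1 × G2, the concrete parameter values, and the boundary handling (blocks that would run off the grid are skipped) are illustrative assumptions:

```python
def enumerate_groups(G1, G2, L1, L2, s1, s2, p1, p2):
    """Enumerate grid-block groups on a G1 x G2 grid of beam indices.

    Each group is an L1 x L2 block of beams; the first beams of adjacent
    blocks are s1/s2 apart (blocks overlap when s < L*p), and beams inside
    a block are p1/p2 apart.
    """
    groups = []
    for i0 in range(0, G1 - (L1 - 1) * p1, s1):       # first-beam row index
        for j0 in range(0, G2 - (L2 - 1) * p2, s2):   # first-beam column index
            group = [(i0 + a * p1, j0 + b * p2)
                     for a in range(L1) for b in range(L2)]
            groups.append(group)
    return groups

# Example: 4 x 2 blocks on a 16 x 8 grid, overlapping in the second dimension.
groups = enumerate_groups(G1=16, G2=8, L1=4, L2=2, s1=2, s2=1, p1=1, p2=1)
```

Because s2 = 1 is smaller than L2 × p2 = 2 here, horizontally adjacent blocks share beams, which is the overlap case discussed below.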
  • the training data can be selected from the precoding matrices in the group.
  • M × S precoding matrices can be selected from the first group and further processed (for example, by dimension conversion) to generate a tensor with dimensions N × M × S × I, and this tensor can be used as the first training data.
  • the present application does not limit the selection manner of the precoding matrix.
  • the first training data may be generated by randomly selecting precoding matrices from the first group.
  • each group can generate at least one training data, and multiple groups can generate multiple training data.
  • the first group can generate the first training data and the second training data
  • the second group can generate the third training data.
  • a data set composed of multiple training data can cover all precoding matrices in the codebook. It can be understood that all precoding matrices in the codebook can cover various scenarios. Therefore, a neural network model trained with this data set can inherit the high generalization performance of the codebook.
  • precoding matrices in multiple groups may overlap. For example, the multiple groups may include a first group and a second group, the multiple precoding matrices may include a first precoding matrix, and the first precoding matrix may belong to both the first group and the second group.
  • the second grouping may be a grid block 822 .
  • the first precoding matrix 811 may belong to both the first group 821 and the second group 822 . It can be seen from the example shown in FIG. 8 that the first group 821 and the second group 822 may overlap in the first dimension. It can be understood that the first group and the second group may overlap in the second dimension. Alternatively, the first grouping and the second grouping overlap in both the first dimension and the second dimension.
  • allowing precoding matrices to belong to multiple groups makes it possible to divide the codebook into more groups, thereby obtaining more training data.
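A minimal sketch of drawing one training sample from a group, using a toy codebook. Stacking the drawn vectors stands in for the further processing (e.g. dimension conversion) mentioned above, and all concrete sizes are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy codebook: one length-8 vector per grid point of a 16 x 8 beam grid.
codebook = {(m, n): np.exp(2j * np.pi * np.arange(8) * (m + n) / 8)
            for m in range(16) for n in range(8)}

def training_sample(group, count):
    # Randomly draw `count` distinct precoding vectors from one group and
    # stack them into a single training sample of shape (count, vector length).
    idx = rng.choice(len(group), size=count, replace=False)
    return np.stack([codebook[group[k]] for k in idx])

group = [(m, n) for m in range(4) for n in range(2)]  # one 4 x 2 grid block
sample = training_sample(group, count=4)              # one training sample
```

Because the beams inside one grid block are adjacent, a sample drawn this way mimics real channel information, which typically contains several similar beams.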
  • the application does not limit the type of neural network model.
  • the neural network model may include a variational autoencoder.
  • the variational autoencoder can map the input training data to a probability distribution, so that multiple code streams conforming to the probability distribution can be obtained from one training data. Multiple codestreams provide more training data for the decoder, which can improve the generalization of the decoder and thus the generalization of the variational autoencoder.
  • the probability distribution may not be introduced, and the variational autoencoder can become an autoencoder, so that the decoder can accurately restore the channel information input to the encoder.
  • FIG. 9 is a channel information feedback model based on a variational autoencoder provided in an embodiment of the present application.
  • Both the input data of the encoder 910 and the output data of the decoder 920 may be channel information.
  • the channel information may be training data generated by a codebook, or actually obtained channel information.
  • the input data of the encoder 910 or the output data of the decoder 920 may include training data generated based on a codebook, or may include training data collected in an actual scene.
  • the input data of the encoder 910 or the output data of the decoder 920 may be channel information obtained after actual channel estimation.
  • the channel information may be reshaped to change the dimension of the channel information.
  • training data or channel information obtained from channel estimation may be reshaped.
  • reshaping can make the input data of the encoder 910 adapt to the model structure of the encoder 910, thereby simplifying the operation of the neural network model.
  • reshaping can make the output data suitable for subsequent processing, thereby simplifying subsequent operations.
  • the shaped channel information can be a three-dimensional tensor, and the dimension format of the three-dimensional tensor can be NM × S × I, NS × M × I, NI × M × S, N × MS × I, N × MI × S or N × M × SI.
  • the shaped channel information may be a two-dimensional matrix, and the dimension format of the two-dimensional matrix may be NMS × I, NMI × S, NSI × M or MSI × N.
  • the shaped channel information may be a vector, and the length of the vector may be NMSI.
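The reshaping options above amount to reshape calls on a channel tensor; for instance, with assumed sizes N = 2, M = 4, S = 8, I = 2:

```python
import numpy as np

N, M, S, I = 2, 4, 8, 2
h = np.arange(N * M * S * I, dtype=float).reshape(N, M, S, I)  # channel tensor

# Three-dimensional layout, e.g. NM x S x I:
h3 = h.reshape(N * M, S, I)
# Two-dimensional layout, e.g. NMS x I:
h2 = h.reshape(N * M * S, I)
# Flat vector of length N*M*S*I:
h1 = h.reshape(-1)
```

Which layout is chosen only changes how the same N·M·S·I values are indexed, so the encoder input can be adapted to the model structure without altering the channel information itself.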
  • m can be related to the mean
  • v can be related to the variance.
  • m and v can be of length p.
  • the input to the decoder 920 may be a vector of length p.
  • the input to the decoder 920 can be different.
  • the input of the decoder 920 may be the result of quantizing and dequantizing the vector s ⊙ exp(v) + m, where ⊙ denotes element-wise multiplication.
  • the vectors v and m are obtained from the output of the encoder 910, and the vector s can be collected from a standard normal distribution.
  • the vector s can also be of length p.
  • the input of the decoder 920 may be a quantized and dequantized vector m, and m may be obtained from the output of the encoder 910 .
  • This application does not limit the specific structure of the encoder or the decoder in the variational autoencoder.
  • it can be constructed using one or more network structures among fully connected networks, convolutional neural networks, residual networks, and self-attention networks.
  • FIG. 10 is a schematic structural diagram of a variational autoencoder provided by an embodiment of the present application.
  • the variational autoencoder shown in FIG. 10 includes an encoder 1010 and a decoder 1020 .
  • the encoder 1010 may include a fully connected layer 1012 , a fully connected layer 1013 , a fully connected layer 1014 , a fully connected layer 1015 and a fully connected layer 1016 .
  • the decoder 1020 may include a fully connected layer 1022 , a fully connected layer 1023 and a fully connected layer 1024 .
  • the quantization scheme of the variational autoencoder shown in FIG. 10 may be 3-bit uniform quantization, and the feedback resource may be 48 bits.
  • the fully connected layer 1012 can output a vector with a dimension of 1024.
  • the fully connected layer 1013 can output a vector with a dimension of 256.
  • the fully connected layer 1014 can output a vector with a dimension of 128.
  • Both the input of the fully connected layer 1015 and the fully connected layer 1016 may be the output of the fully connected layer 1014 .
  • the fully connected layer 1015 can output a vector m with a dimension of 16.
  • the fully connected layer 1016 can output a vector v with dimension 16.
  • the input of decoder 1020 may be a vector with dimension 16.
  • the fully connected layer 1022 can output a vector with a dimension of 2048.
  • the fully connected layer 1023 can output a vector with a dimension of 1024.
  • the fully connected layer 1024 can output a vector with a dimension of 768.
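The layer dimensions listed above can be sketched as plain matrix multiplications. The input length of 768 (chosen to match the decoder's output length), the ReLU activations, the omission of biases, and the random placeholder weights are all assumptions; the sketch only illustrates the tensor shapes:

```python
import numpy as np

rng = np.random.default_rng(0)

def fc(in_dim, out_dim):
    # A fully connected layer represented by its weight matrix (biases omitted).
    return rng.standard_normal((in_dim, out_dim)) * 0.01

def relu(x):
    return np.maximum(x, 0.0)

# Encoder 1010: input -> 1024 -> 256 -> 128, then two heads for m and v.
W1, W2, W3 = fc(768, 1024), fc(1024, 256), fc(256, 128)
Wm, Wv = fc(128, 16), fc(128, 16)

# Decoder 1020: 16 -> 2048 -> 1024 -> 768.
D1, D2, D3 = fc(16, 2048), fc(2048, 1024), fc(1024, 768)

def encode(x):
    h = relu(relu(relu(x @ W1) @ W2) @ W3)
    return h @ Wm, h @ Wv            # vectors m and v, each of dimension 16

def decode(z):
    return relu(relu(z @ D1) @ D2) @ D3

x = rng.standard_normal(768)
m, v = encode(x)
x_hat = decode(m)                    # inference path feeding m to the decoder
```

The 16-dimensional bottleneck is what gets quantized and fed back, while the wide decoder layers expand it back to the channel-information dimension.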
  • the multiple training data generated by using the codebook can be used to train the neural network-based channel information feedback model.
  • This application does not limit the training method of the model.
  • W can be used as the input of the encoder, and the output of the decoder can be denoted Ŵ.
  • W may include training data generated using a codebook, or may include training data collected in actual scenarios.
  • the loss function can be:
  • training data is also referred to as training samples or samples. This application is not limited to this.
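The loss expression itself is elided above. Purely as an illustration, a common loss for training a variational autoencoder of this kind combines the reconstruction error between the encoder input W and the decoder output (denoted W_hat here) with a KL-divergence regularizer on the latent distribution N(m, exp(v)²); this is an assumed form, not necessarily the application's formula:

```python
import numpy as np

def vae_loss(W, W_hat, m, v, beta=1.0):
    # Reconstruction error between encoder input W and decoder output W_hat
    # (mean squared error; works for complex-valued channel data via abs),
    # plus the KL divergence of N(m, exp(v)^2) from the standard normal N(0, 1).
    recon = np.mean(np.abs(W - W_hat) ** 2)
    kl = 0.5 * np.sum(np.exp(2 * v) + m ** 2 - 1 - 2 * v)
    return recon + beta * kl
```

Setting beta = 0 removes the distributional term, which corresponds to the plain-autoencoder variant mentioned earlier.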
  • FIG. 11 is a schematic structural diagram of an apparatus 1100 for processing data provided by an embodiment of the present application.
  • the apparatus 1100 may include a first generating unit 1110 .
  • the first generating unit 1110 may be configured to generate a plurality of training data by using a codebook, the plurality of training data are used to train a neural network model, and the neural network model is used for channel information feedback.
  • the neural network model includes a variational autoencoder.
  • the codebook includes multiple precoding matrices.
  • the apparatus 1100 may further include an oversampling unit 1120 .
  • the oversampling unit 1120 may be configured to perform oversampling processing on the multiple precoding matrices.
  • the multiple precoding matrices belong to multiple groups, the multiple groups include a first group, the multiple training data include first training data, and the first generating unit 1110 may include: a second generating unit, configured to generate the first training data by using the precoding matrices in the first group.
  • the multiple precoding matrices include a first precoding matrix
  • the multiple groups include a second group
  • the precoding matrix includes a discrete Fourier transform (DFT) vector.
  • the device further includes: a reshaping unit, configured to reshape the training data, so as to change the dimensions of the training data.
  • FIG. 12 is a schematic structural diagram of an apparatus for processing data according to an embodiment of the present application.
  • the dashed line in Figure 12 indicates that the unit or module is optional.
  • the apparatus 1200 may be used to implement the methods described in the foregoing method embodiments.
  • Apparatus 1200 may be a chip, a terminal device or a network device.
  • Apparatus 1200 may include one or more processors 1210 .
  • the processor 1210 can support the device 1200 to implement the methods described in the foregoing method embodiments.
  • the processor 1210 may be a general purpose processor or a special purpose processor.
  • the processor may be a central processing unit (central processing unit, CPU).
  • the processor may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
  • a general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like.
  • Apparatus 1200 may also include one or more memories 1220 .
  • a program is stored in the memory 1220, and the program can be executed by the processor 1210, so that the processor 1210 executes the methods described in the foregoing method embodiments.
  • the memory 1220 may be independent from the processor 1210 or may be integrated in the processor 1210 .
  • the apparatus 1200 may also include a transceiver 1230 .
  • the processor 1210 can communicate with other devices or chips through the transceiver 1230 .
  • the processor 1210 may send and receive data with other devices or chips through the transceiver 1230 .
  • the embodiment of the present application also provides a computer-readable storage medium for storing programs.
  • the computer-readable storage medium can be applied to the terminal or the network device provided in the embodiments of the present application, and the program enables the computer to execute the methods performed by the terminal or the network device in the various embodiments of the present application.
  • the embodiment of the present application also provides a computer program product.
  • the computer program product includes programs.
  • the computer program product can be applied to the terminal or the network device provided in the embodiments of the present application, and the program enables the computer to execute the methods performed by the terminal or the network device in the various embodiments of the present application.
  • the embodiment of the present application also provides a computer program.
  • the computer program can be applied to the terminal or the network device provided in the embodiments of the present application, and the computer program enables the computer to execute the methods performed by the terminal or the network device in the various embodiments of the present application.
  • the "indication" mentioned may be a direct indication, may also be an indirect indication, and may also mean that there is an association relationship.
  • A indicates B can mean that A directly indicates B, for example, B can be obtained through A; it can also mean that A indirectly indicates B, for example, A indicates C, and B can be obtained through C; it can also mean that there is an association relation between A and B.
  • B corresponding to A means that B is associated with A, and B can be determined according to A.
  • determining B according to A does not mean determining B only according to A, and B may also be determined according to A and/or other information.
  • the term "corresponding" may indicate that there is a direct or indirect correspondence between the two, or that there is an association between the two, or a relation such as indicating and being indicated, or configuring and being configured.
  • "predefined" or "preconfigured" can be realized by pre-saving corresponding codes, tables or other means that can be used to indicate relevant information in devices (for example, including terminal devices and network devices).
  • the application does not limit its specific implementation.
  • pre-defined may refer to defined in the protocol.
  • the "protocol” may refer to a standard protocol in the communication field, for example, may include the LTE protocol, the NR protocol, and related protocols applied to future communication systems, which is not limited in the present application.
  • the sequence numbers of the above-mentioned processes do not imply an order of execution; the execution order of each process should be determined by its functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
  • the disclosed systems, devices and methods may be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the units is only a logical function division; in actual implementation, there may be other division methods.
  • multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be through some interfaces, and the indirect coupling or communication connection of devices or units may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place, or may be distributed to multiple network units. Part or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit.
  • all or part of them may be implemented by software, hardware, firmware or any combination thereof.
  • When implemented using software, it may be implemented in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on the computer, the processes or functions according to the embodiments of the present application will be generated in whole or in part.
  • the computer can be a general purpose computer, a special purpose computer, a computer network, or other programmable devices.
  • the computer instructions may be stored in or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from one website, computer, server or data center to another website, computer, server or data center by wired (such as coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless (such as infrared, radio, or microwave) means.
  • the computer-readable storage medium may be any available medium that can be read by a computer, or a data storage device such as a server or a data center integrated with one or more available media.
  • the available medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a digital versatile disc (DVD)), a semiconductor medium (for example, a solid state disk (SSD)), or the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)

Abstract

The present application provides a data processing method and device. The data processing method comprises: generating a plurality of pieces of training data by using a codebook, the plurality of pieces of training data being used for training a neural network model, and the neural network model being used for performing channel information feedback. According to the present application, the plurality of pieces of training data are generated by using the codebook, and a neural-network-based channel feedback model is trained with the training data. Therefore, the present application can reduce the need for, and the overhead of, obtaining channel data from actual scenarios for the neural network, and enhances the generalization of the neural network by leveraging the generalization capability of the codebook.

Description

Method and device for processing data

Technical Field

The present application relates to the field of communication technology, and more specifically, to a method and device for processing data.

Background
In a communication system, a codebook-based scheme can be used to realize channel information feedback. Due to the exquisite design of the codebook, the codebook-based channel information feedback scheme can adapt to channel information feedback in different scenarios; that is, the scheme has high generalization. The channel information needs to be mapped to a precoding matrix in the codebook, and feedback of the channel information is realized by feeding back the precoding matrix. Since the precoding matrices are discrete, the mapping process is a lossy quantization, which leads to low accuracy of the codebook-based precoding scheme.

With the development of artificial intelligence technology, related technologies have proposed channel information feedback schemes based on neural networks. Such a scheme can directly compress and feed back channel information. Therefore, the accuracy of a neural-network-based channel information feedback scheme is relatively high.

In order to improve the generalization of the neural-network-based channel feedback model, related technologies use multi-scenario training data to train the neural network model. In practice, it is difficult to obtain large-scale, highly diverse training data. Therefore, obtaining training data consumes a lot of manpower, material resources, financial resources and time.
Summary

The present application provides a data processing method and device to solve the problem of difficulty in obtaining training data for a neural-network-based channel feedback model.

In a first aspect, a method for processing data is provided. The method includes: generating a plurality of training data by using a codebook, where the plurality of training data are used to train a neural network model, and the neural network model is used for channel information feedback.

In a second aspect, a device for processing data is provided. The device includes: a first generating unit, configured to generate a plurality of training data by using a codebook, where the plurality of training data are used to train a neural network model, and the neural network model is used for channel information feedback.

In a third aspect, a device for processing data is provided, including a processor, a memory and a communication interface, where the memory is used to store one or more computer programs, and the processor is used to call the computer programs in the memory so that the terminal device executes the method described in the first aspect.

In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium. The computer-readable storage medium stores a computer program, and the computer program enables a terminal device to perform some or all of the steps in the method of the first aspect.

In a fifth aspect, an embodiment of the present application provides a computer program product, where the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to enable a terminal to perform some or all of the steps in the method of the first aspect. In some implementations, the computer program product may be a software installation package.

In a sixth aspect, an embodiment of the present application provides a chip. The chip includes a memory and a processor, and the processor can call and run a computer program from the memory to implement some or all of the steps described in the method of the first aspect.

In a seventh aspect, a computer program product is provided, including a program, where the program causes a computer to execute the method described in the first aspect.

In an eighth aspect, a computer program is provided, where the computer program causes a computer to execute the method described in the first aspect.

The present application uses a codebook to generate a plurality of training data, and uses the training data to train a neural-network-based channel feedback model. Therefore, the present application can reduce the need for, and the overhead of, acquiring channel data from actual scenarios for the neural network, and can enhance the generalization of the neural network by leveraging the generalization capability of the codebook.
Description of the Drawings

FIG. 1 is a wireless communication system applied in an embodiment of the present application.

FIG. 2 is a structural diagram of a neural network applicable to an embodiment of the present application.

FIG. 3 is a structural diagram of a convolutional neural network applicable to an embodiment of the present application.

FIG. 4 is an example diagram of an image compression process based on an autoencoder.

FIG. 5 is an example diagram of an image generation process based on a variational autoencoder.

FIG. 6 is a schematic diagram of a neural-network-based channel feedback process.

FIG. 7 is a schematic flowchart of a method for processing data provided by an embodiment of the present application.

FIG. 8 is a schematic diagram of a method for grouping precoding matrices of a codebook provided by an embodiment of the present application.

FIG. 9 is a channel information feedback model based on a variational autoencoder provided by an embodiment of the present application.

FIG. 10 is a schematic structural diagram of a variational autoencoder provided by an embodiment of the present application.

FIG. 11 is a schematic structural diagram of an apparatus for data processing provided by an embodiment of the present application.

FIG. 12 is a schematic structural diagram of a device provided by an embodiment of the present application.
Detailed Description

The technical solutions in this application will be described below with reference to the accompanying drawings.

Communication System
FIG. 1 is a wireless communication system 100 applied in an embodiment of the present application. The wireless communication system 100 may include a network device 110 and a terminal device 120. The network device 110 may be a device that communicates with the terminal device 120. The network device 110 can provide communication coverage for a specific geographical area, and can communicate with a terminal device 120 located in the coverage area.

FIG. 1 exemplarily shows one network device and two terminals. Optionally, the wireless communication system 100 may include multiple network devices, and the coverage area of each network device may include another number of terminal devices, which is not limited in this embodiment of the present application.

Optionally, the wireless communication system 100 may further include other network entities such as a network controller and a mobility management entity, which is not limited in this embodiment of the present application.

It should be understood that the technical solutions of the embodiments of the present application can be applied to various communication systems, for example: the fifth generation (5G) system or new radio (NR), the long term evolution (LTE) system, the LTE frequency division duplex (FDD) system, LTE time division duplex (TDD), etc. The technical solutions provided in this application can also be applied to future communication systems, such as the sixth generation mobile communication system, satellite communication systems, and so on.
The terminal device in the embodiments of the present application may also be called user equipment (UE), an access terminal, a subscriber unit, a subscriber station, a mobile station (MS), a mobile terminal (MT), a remote station, a remote terminal, a mobile device, a user terminal, a terminal, a wireless communication device, a user agent, or a user apparatus. The terminal device in the embodiments of the present application may be a device that provides voice and/or data connectivity to users, and can be used to connect people, objects and machines, such as a handheld device with a wireless connection function or a vehicle-mounted device. The terminal device in the embodiments of the present application may be a mobile phone, a tablet computer (Pad), a notebook computer, a palmtop computer, a mobile internet device (MID), a wearable device, a virtual reality (VR) device, an augmented reality (AR) device, a wireless terminal in industrial control, a wireless terminal in self driving, a wireless terminal in remote medical surgery, a wireless terminal in a smart grid, a wireless terminal in transportation safety, a wireless terminal in a smart city, a wireless terminal in a smart home, etc. Optionally, a UE can be used to act as a base station. For example, a UE may act as a scheduling entity that provides sidelink signals between UEs in V2X or D2D, etc. For example, a cellular phone and an automobile communicate with each other using sidelink signals, and a cellular phone and a smart home device can communicate without relaying the communication signal through a base station.
本申请实施例中的网络设备可以是用于与终端设备通信的设备,该网络设备也可以称为接入网设备或无线接入网设备,如网络设备可以是基站。本申请实施例中的网络设备可以是指将终端设备接入到无线网络的无线接入网(radio access network,RAN)节点(或设备)。基站可以广义的覆盖如下中的各种名称,或与如下名称进行替换,比如:节点B(NodeB)、演进型基站(evolved NodeB,eNB)、下一代基站(next generation NodeB,gNB)、中继站、接入点、传输点(transmitting and receiving point,TRP)、发射点(transmitting point,TP)、主站MeNB、辅站SeNB、多制式无线(MSR)节点、家庭基站、网络控制器、接入节点、无线节点、接入点(access point,AP)、传输节点、收发节点、基带单元(base band unit,BBU)、射频拉远单元(Remote Radio Unit,RRU)、有源天线单元(active antenna unit,AAU)、射频头(remote radio head,RRH)、中心单元(central unit,CU)、分布式单元(distributed unit,DU)、定位节点等。基站可以是宏基站、微基站、中继节点、施主节点或类似物,或其组合。基站还可以指用于设置于前述设备或装置内的通信模块、调制解调器或芯片。基站还可以是移动交换中心以及设备到设备D2D、车辆外联(vehicle-to-everything,V2X)、机器到机器(machine-to-machine,M2M)通信中承担基站功能的设备、6G网络中的网络侧设备、未来的通信系统中承担基站功能的设备等。基站可以支持相同或不同接入技术的网络。本申请的实施例对网络设备所采用的具体技术和具体设备形态不做限定。The network device in this embodiment of the present application may be a device for communicating with a terminal device, and the network device may also be called an access network device or a wireless access network device, for example, the network device may be a base station. The network device in this embodiment of the present application may refer to a radio access network (radio access network, RAN) node (or device) that connects a terminal device to a wireless network. 
The term base station may broadly cover, or be replaced by, various names such as: NodeB, evolved NodeB (eNB), next generation NodeB (gNB), relay station, access point, transmitting and receiving point (TRP), transmitting point (TP), primary station MeNB, secondary station SeNB, multi-standard radio (MSR) node, home base station, network controller, access node, wireless node, access point (AP), transmission node, transceiver node, baseband unit (BBU), remote radio unit (RRU), active antenna unit (AAU), remote radio head (RRH), central unit (CU), distributed unit (DU), positioning node, and so on. A base station may be a macro base station, a micro base station, a relay node, a donor node, or the like, or a combination thereof. A base station may also refer to a communication module, modem, or chip disposed in the aforementioned devices or apparatuses. A base station may also be a mobile switching center; a device that performs base station functions in device-to-device (D2D), vehicle-to-everything (V2X), or machine-to-machine (M2M) communication; a network-side device in a 6G network; a device that performs base station functions in a future communication system; and so on. A base station may support networks with the same or different access technologies. The embodiments of the present application do not limit the specific technology or specific device form adopted by the network device.
A base station may be fixed or mobile. For example, a helicopter or a drone may be configured to act as a mobile base station, and one or more cells may move according to the location of the mobile base station. In other examples, a helicopter or a drone may be configured to serve as a device communicating with another base station.
In some deployments, the network device in the embodiments of the present application may refer to a CU or a DU, or the network device may include a CU and a DU. A gNB may also include an AAU.
Network devices and terminal devices may be deployed on land (indoors or outdoors, handheld or vehicle-mounted), on water, or in the air on aircraft, balloons, and satellites. The embodiments of the present application do not limit the scenarios in which the network device and the terminal device are located.
It should be understood that all or part of the functions of the communication device in the present application may also be implemented by software functions running on hardware, or by virtualization functions instantiated on a platform (for example, a cloud platform).
Channel feedback
In a wireless communication system, a codebook-based scheme may be used to extract and feed back channel features. That is, after the receiver performs channel estimation, it selects, according to the channel estimation result and a certain optimization criterion, the precoding matrix that best matches the current channel from a preset precoding codebook, and feeds the precoding matrix index (PMI) back to the transmitter over the air-interface feedback link, so that the transmitter can perform precoding. In some implementations, the receiver may also feed back the measured channel quality indication (CQI) to the transmitter, so that the transmitter can perform adaptive modulation and coding. Channel feedback may also be referred to as channel state information (CSI) feedback.
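The matching step described above can be sketched as follows. The toy codebook, channel vector, and inner-product magnitude criterion are illustrative assumptions, not any standardized codebook or selection rule.

```python
import cmath

def best_pmi(h, codebook):
    """Return the index of the codebook vector best matching channel h,
    using |h^H w| (inner-product magnitude) as the matching criterion."""
    def corr(w):
        return abs(sum(hi.conjugate() * wi for hi, wi in zip(h, w)))
    return max(range(len(codebook)), key=lambda i: corr(codebook[i]))

# Toy 2-port DFT codebook and a channel aligned with its second entry.
codebook = [[1, cmath.exp(2j * cmath.pi * k / 4)] for k in range(4)]
h = [1, 1j]  # phase offset pi/2, i.e. entry k=1
pmi = best_pmi(h, codebook)
```

The index `pmi`, rather than the full channel vector, is what would be fed back over the air interface.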
Owing to the careful design of the codebook, the multiple precoding matrices in the codebook can represent channel information in different scenarios. Therefore, a codebook-based channel feedback scheme can adapt to channel information feedback tasks in different scenarios.
The precoding matrices in the codebook are discrete. Mapping channel information to a precoding matrix in the codebook is therefore a process of discretely quantizing continuous channel information. This makes the mapping lossy, which reduces the accuracy of the fed-back channel information and in turn degrades precoding performance.
Neural networks
In recent years, artificial intelligence (AI) research has achieved great results in many fields such as computer vision and natural language processing, and it will continue to play an important role in people's production and life for a long time to come. The communication field has also begun to use AI technology to seek new technical ideas for solving problems that constrain traditional methods.
The neural network is a commonly used architecture in AI. Common neural networks include the convolutional neural network (CNN), the recurrent neural network (RNN), and the deep neural network (DNN).
A neural network applicable to the embodiments of the present application is introduced below with reference to FIG. 2. The layers of the neural network shown in FIG. 2 can be divided into three categories by position: the input layer 210, the hidden layers 220, and the output layer 230. Generally, the first layer is the input layer 210, the last layer is the output layer 230, and the intermediate layers between them are the hidden layers 220.
The input layer 210 is used to input data. Taking a communication system as an example, the input data may be, for example, a received signal received by a receiver. The hidden layers 220 are used to process the input data, for example, to decompress the received signal. The output layer 230 is used to output the processed output data, for example, the decompressed signal.
As shown in FIG. 2, the neural network includes multiple layers, and each layer includes multiple neurons. The neurons between layers may be fully connected or partially connected. For connected neurons, the output of a neuron in the previous layer serves as an input of a neuron in the next layer.
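The layered, fully connected structure described above can be sketched in a few lines. The layer sizes, weight values, and tanh activation below are arbitrary illustrative choices, not part of any embodiment.

```python
import math

def dense(x, weights, biases):
    """One fully connected layer: each output neuron combines all inputs
    of the previous layer through a weighted sum and an activation."""
    return [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
            for row, b in zip(weights, biases)]

# A 3-input -> 2-hidden -> 1-output network with fixed example weights.
x = [0.5, -1.0, 2.0]
hidden = dense(x, [[0.1, 0.2, 0.3], [0.4, -0.5, 0.6]], [0.0, 0.1])
output = dense(hidden, [[1.0, -1.0]], [0.0])
```

Each call to `dense` realizes one layer-to-layer connection of FIG. 2: the outputs of one layer become the inputs of the next.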
With the continuous development of neural network research, deep learning algorithms for neural networks have been proposed in recent years. These algorithms introduce more hidden layers into the neural network and perform feature learning by training the multi-hidden-layer network layer by layer, which greatly improves the learning and processing capabilities of the neural network. Such neural network models are widely used in pattern recognition, signal processing, combinatorial optimization, anomaly detection, and so on.
A CNN is a deep neural network with a convolutional structure, and its structure may be as shown in FIG. 3. The neural network shown in FIG. 3 may include an input layer 310, a convolutional layer 320, a pooling layer 330, a fully connected layer 340, and an output layer 350.
Each convolutional layer 320 may include many convolution kernels. A convolution kernel, also called an operator, can be regarded as a filter that extracts specific information from the input signal. A convolution kernel is essentially a weight matrix, and this weight matrix is usually predefined.
In practical applications, the weight values in these weight matrices need to be obtained through extensive training. The weight matrices formed by the trained weight values can extract information from the input signal, thereby helping the CNN make correct predictions.
When a CNN has multiple convolutional layers, the initial convolutional layers tend to extract more general features, which may also be called low-level features; as the CNN deepens, the features extracted by later convolutional layers become more and more complex.
Pooling layer 330: since the number of training parameters often needs to be reduced, pooling layers are often introduced periodically after convolutional layers. For example, as shown in FIG. 3, one convolutional layer may be followed by one pooling layer, or multiple convolutional layers may be followed by one or more pooling layers. In signal processing, the sole purpose of a pooling layer is to reduce the spatial size of the extracted information.
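A minimal sketch of how a pooling layer reduces spatial size, assuming 2×2 max pooling with stride 2 (one common choice; the feature-map values are arbitrary):

```python
def max_pool_2x2(grid):
    """2x2 max pooling with stride 2: halves each spatial dimension by
    keeping only the maximum of each non-overlapping 2x2 window."""
    return [[max(grid[i][j], grid[i][j + 1],
                 grid[i + 1][j], grid[i + 1][j + 1])
             for j in range(0, len(grid[0]), 2)]
            for i in range(0, len(grid), 2)]

feature_map = [[1, 3, 2, 0],
               [4, 2, 1, 1],
               [0, 1, 5, 6],
               [2, 2, 7, 8]]
pooled = max_pool_2x2(feature_map)  # 4x4 -> 2x2
```

The 4×4 input shrinks to 2×2 while the strongest response in each window is preserved, which is exactly the parameter reduction the pooling layer provides.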
The introduction of the convolutional layer 320 and the pooling layer 330 effectively controls the sharp increase of network parameters, limits the number of parameters, exploits the characteristics of local structure, and improves the robustness of the algorithm.
Fully connected layer 340: after the processing of the convolutional layer 320 and the pooling layer 330, the CNN is not yet able to output the required output information, because, as mentioned above, the convolutional layer 320 and the pooling layer 330 only extract features and reduce the parameters brought by the input data. To generate the final output information (for example, the bitstream of the original information transmitted by the transmitter), the CNN also needs the fully connected layer 340. Generally, the fully connected layer 340 may include multiple hidden layers, and the parameters contained in these hidden layers may be pre-trained on training data relevant to a specific task type. For example, the task type may include decoding a data signal received by the receiver, or performing channel estimation based on a pilot signal received by the receiver.
After the multiple hidden layers of the fully connected layer 340, the last layer of the entire CNN is the output layer 350, which outputs the result. Usually, the output layer 350 is provided with a loss function (for example, a loss function similar to categorical cross-entropy) used to compute the prediction error, that is, to evaluate the degree of difference between the result output by the CNN model (also called the predicted value) and the ideal result (also called the true value).
To minimize the loss function, the CNN model needs to be trained. In some implementations, the CNN model may be trained using the backpropagation (BP) algorithm. BP training consists of a forward propagation process and a backward propagation process. In forward propagation (the propagation from 310 to 350 in FIG. 3), the input data passes through the above layers of the CNN model, is processed layer by layer, and is passed to the output layer. If the result output at the output layer differs greatly from the ideal result, minimizing the above loss function is taken as the optimization objective and backward propagation begins (the propagation from 350 to 310 in FIG. 3): the partial derivatives of the optimization objective with respect to the weights of each neuron are computed layer by layer, forming the gradient of the optimization objective with respect to the weight vector, which serves as the basis for modifying the model weights. The training of the CNN is carried out through this weight modification process, and the training ends when the above error reaches the expected value.
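The forward/backward loop described above can be illustrated on a deliberately tiny model: a one-weight linear model with squared-error loss stands in for the CNN, and the learning rate and data are arbitrary assumptions. The same gradient-descent step is what backpropagation applies layer by layer.

```python
# Training y = w * x with squared-error loss via gradient descent.
samples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # targets follow y = 2x
w = 0.0
lr = 0.05
for _ in range(200):
    for x, y in samples:
        pred = x * w                 # forward propagation
        grad = 2 * (pred - y) * x    # d(loss)/dw for loss = (pred - y)^2
        w -= lr * grad               # weight update (backward step)
```

After training, `w` converges to 2.0, i.e. the loss has been driven to (near) zero, which is the stopping condition described above.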
It should be noted that the CNN shown in FIG. 3 is only an example of a convolutional neural network. In specific applications, a convolutional neural network may also exist in the form of other network models, which is not limited in the embodiments of the present application.
Autoencoder
An autoencoder is a class of artificial neural networks used in semi-supervised and unsupervised learning. An autoencoder is a neural network that takes its input signal as its training target. An autoencoder may include an encoder and a decoder.
The autoencoder is described below, taking the image compression shown in FIG. 4 as an example.
The input of the encoder may be an image to be compressed. In the embodiment shown in FIG. 4, the image to be compressed occupies 28×28=784 bits. After being compressed by the encoder, a code stream (code) is output. The number of bits occupied by the code stream output by the encoder is generally smaller than the number of bits occupied by the image to be compressed; for example, the code stream output by the encoder shown in FIG. 4 may occupy fewer than 784 bits. It follows that the encoder can produce a compressed representation of the entity input to it.
The input of the decoder may be a code stream, for example the code stream output by the encoder. The output of the decoder is the decompressed image. As can be seen from FIG. 4, the decompressed image is consistent with the image to be compressed that was input to the encoder. Therefore, the decoder can reconstruct the original entity.
In the process of training an autoencoder, the data to be compressed (for example, the image to be compressed in FIG. 4) can be used both as the input of the autoencoder (i.e., the encoder input) and as the label (i.e., the decoder output), and the encoder and decoder are jointly trained end to end.
Variational autoencoder
A variational autoencoder introduces a probability distribution into the encoder. The probability distribution allows the entity input to the encoder to be mapped to a code stream that conforms to the distribution. For the same input entity, different values sampled from the probability distribution can be combined, so that the input of the decoder of the variational autoencoder may be multiple different code streams. The decoder can decode the different code streams output by the encoder, thereby outputting multiple entities similar to the input entity.
The principle of the variational autoencoder is described below with reference to FIG. 5. FIG. 5 is an example diagram of an image generation process based on a variational autoencoder. The input of the encoder may be an original image, and the output of the encoder may be a mean and a variance. The code stream can be computed from the mean and variance output by the encoder together with a value sampled from the probability distribution. The code stream can be input to the decoder, which decodes it and outputs an image similar to the original image. The probability distribution may be a normal distribution.
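The sampling step described above can be sketched as follows, assuming the common reparameterization z = mean + sqrt(var)·eps with eps drawn from a standard normal; the latent dimensions and numeric values are illustrative assumptions.

```python
import random

def sample_code(mean, var):
    """Draw one latent code from N(mean, var), per latent dimension,
    via the reparameterization z = mean + sqrt(var) * eps."""
    return [m + (v ** 0.5) * random.gauss(0.0, 1.0)
            for m, v in zip(mean, var)]

random.seed(0)
mean, var = [0.5, -1.0], [0.04, 0.01]  # encoder outputs for one input
codes = [sample_code(mean, var) for _ in range(3)]
```

The three entries of `codes` are distinct code streams for the same input, which is why the decoder can output multiple similar-but-different entities.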
Channel feedback based on neural networks
Channel feedback can be implemented based on AI, for example, neural network-based channel feedback. The network device side can use a neural network to restore, as faithfully as possible, the channel information fed back by the terminal device side. Such neural network-based channel feedback can restore channel information and also makes it possible to reduce the channel feedback overhead on the terminal device side.
As an embodiment, a deep learning autoencoder can be used to implement channel feedback. The input of the AI-based channel feedback model may be channel information; that is, the channel information can be regarded as the image to be compressed that is input to the autoencoder. The AI-based channel feedback model can compress the channel information for feedback. At the transmitting end, the AI-based channel feedback model can reconstruct the compressed channel information, thereby retaining the channel information to a large extent.
FIG. 6 is a schematic diagram of an AI-based channel feedback process. The channel feedback model shown in FIG. 6 includes an encoder and a decoder, deployed at the receiving end (Rx) and the transmitting end (Tx), respectively. The receiving end can obtain a channel information matrix through channel estimation. The channel information matrix can be compressed and encoded by the neural network of the encoder to form a compressed bit stream (codeword). The compressed bit stream can be fed back to the transmitting end over the air-interface feedback link. Through the decoder, the transmitting end can decode or restore the channel information from the fed-back bit stream, thereby obtaining the complete fed-back channel information.
The AI-based channel feedback model may have the structure shown in FIG. 6. For example, the encoder may include several fully connected layers, and the decoder may include a residual network. It can be understood that FIG. 6 is only an example; the present application does not limit the structure of the network models inside the encoder and decoder, and the model structure can be designed flexibly.
Neural network-based channel feedback can directly compress the channel information for feedback. Therefore, the channel information fed back by a neural network-based channel feedback scheme has relatively high accuracy.
When the training data matches the test data, a neural network-based channel feedback scheme performs well. When the training data does not match the test data, its performance is poor; for example, performance degrades when the scenarios of the training data and the test data are inconsistent. Therefore, neural network-based channel feedback schemes suffer from low generalization. The related art trains the neural network model with training data from multiple scenarios to improve generalization, but in practice, obtaining a large-scale and highly diverse training dataset is rather difficult.
In view of the above problems, the present application proposes a method for processing data. FIG. 7 is a schematic flowchart of a method for processing data provided by an embodiment of the present application. The method shown in FIG. 7 may include step S710.
Step S710: generate multiple training data using a codebook. The multiple training data can be used to train a neural network model, and the neural network model can be used for channel information feedback.
The training data may include channel information. The present application does not limit the representation form of the channel information. For example, the channel information may be a tensor composed of the channel feature vectors of multiple subbands and multiple layers. As an embodiment, the channel information may be a tensor of dimension N×M×S×I, where N is the number of transmitting antennas (N may be greater than or equal to 2), M is the number of subbands (M may be greater than or equal to 1), and S is the number of transmission layers (S may be greater than or equal to 1). I may represent the real and imaginary parts, with I=2; or the amplitude and phase, with I=2; or the real part, imaginary part, amplitude, and phase, with I=4.
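As an illustration of the tensor layout described above, with deliberately small, assumed values for N, M, S, and I (real systems would use larger antenna and subband counts):

```python
# Build an empty N x M x S x I nested structure for channel information:
# N antennas, M subbands, S layers, I=2 for real and imaginary parts.
N, M, S, I = 4, 3, 2, 2
channel = [[[[0.0 for _ in range(I)]
             for _ in range(S)]
            for _ in range(M)]
           for _ in range(N)]
```

Each innermost pair `channel[n][m][s]` would hold the real and imaginary parts of one channel coefficient.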
The codebook may be a codebook used in codebook-based channel feedback in the related art. The codebook may include multiple precoding matrices. The present application does not limit the form of the precoding matrix; for example, a precoding matrix may be in the form of a vector, that is, the codebook may include multiple precoding vectors.
The present application does not limit the type of the codebook. For example, the codebook may be implemented based on discrete Fourier transform (DFT) vectors, that is, the precoding matrices may include DFT vectors. As an embodiment, the length of each DFT vector may be N, and the number of DFT vectors in the codebook may be N1·N2, where N1 and N2 may be numbers of antenna ports. Taking a two-dimensional planar array antenna at the transmitting end as an example, N1 may be the number of antenna ports in the first dimension and N2 the number of antenna ports in the second dimension, and N may be the total number of antenna ports, for example N=N1·N2.
As can be seen from the above, the precoding matrices in the codebook can represent channel information in different scenarios; that is, the codebook naturally has generalization ability. It can be understood that the multiple training data generated using the codebook can likewise include channel information of multiple scenarios. Therefore, training the neural network model with training data generated from the codebook can enhance the generalization of the model. Moreover, generating training data from a codebook is much easier than collecting diverse training data in real scenarios. Therefore, with the technical solution of the present application, a neural network model with high generalization can be trained more simply, thereby achieving highly accurate and high-performance channel information feedback.
As an embodiment, the method shown in FIG. 7 may further include step S720.
Step S720: perform oversampling processing on the multiple precoding matrices in the codebook.
Oversampling may consist of inserting more precoding matrices between adjacent precoding matrices. It can be understood that each precoding matrix may correspond to a spatial angular direction, and there is a certain angular interval between adjacent precoding matrices. Oversampling makes the angular interval between adjacent precoding matrices smaller, thereby achieving finer angular quantization.
As an embodiment, the oversampling factors of the first and second dimensions of the two-dimensional array antenna may be O1 and O2, respectively. There may then be N1·O1·N2·O2 oversampled precoding matrices. Taking a precoding matrix in the form of a two-dimensional DFT vector as an example, the oversampled two-dimensional DFT vector a_{m,n} can be expressed as:
a_{m,n} = x_m ⊗ u_n (where ⊗ denotes the Kronecker product),
where m = 0, 1, 2, ..., N1·O1−1 and n = 0, 1, 2, ..., N2·O2−1. x_m and u_n are oversampled one-dimensional DFT vectors, which can be expressed respectively as:
x_m = [1, ..., exp(j2π(N1−1)m/(N1·O1))]^T,
u_n = [1, ..., exp(j2π(N2−1)n/(N2·O2))]^T.
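The oversampled vectors above can be generated directly. The sketch below assumes the two-dimensional vector is formed as the Kronecker product of the two one-dimensional DFT vectors, with small illustrative values for N1, N2, O1, and O2.

```python
import cmath

def dft_vec(idx, n_ports, oversample):
    """Oversampled 1-D DFT vector of length n_ports for beam index idx:
    entry k is exp(j*2*pi*k*idx / (n_ports * oversample))."""
    return [cmath.exp(2j * cmath.pi * k * idx / (n_ports * oversample))
            for k in range(n_ports)]

def kron(a, b):
    """Kronecker product of two vectors."""
    return [ai * bi for ai in a for bi in b]

N1, N2, O1, O2 = 4, 2, 4, 4
codebook = [kron(dft_vec(m, N1, O1), dft_vec(n, N2, O2))
            for m in range(N1 * O1) for n in range(N2 * O2)]
```

The resulting codebook has N1·O1·N2·O2 = 128 vectors, each of length N = N1·N2 = 8, matching the counts given above.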
It can be understood that, in physical terms, one precoding matrix can represent one beam, and real channel information may be composed of the information of one or more beams. Therefore, one or more precoding matrices can be selected from the codebook to form one training data item that simulates real channel information. The present application does not limit the method of selecting precoding matrices from the codebook to generate training data.
As an embodiment, precoding matrices may be selected in a grouped manner to generate training data. The multiple precoding matrices may belong to multiple groups. The multiple groups may include a first group, and the multiple training data may include first training data. The first training data can be generated using precoding matrices in the first group; for example, all precoding matrices contained in the first training data may be selected from the first group. It should be noted that the present application does not limit the number of precoding matrices in the first group, which may be, for example, one or more.
The grouping may be determined according to the beams corresponding to the precoding matrices. For example, real channel information often includes information of multiple similar beams; therefore, precoding matrices corresponding to adjacent or similar beams can be placed into one group. It follows that selecting training data from one group can make the training data closer to real channel information.
FIG. 8 is a schematic diagram of a grouping method provided by an embodiment of the present application. In FIG. 8, a grid point on the two-dimensional grid (a circular point in FIG. 8) represents one precoding matrix, and a grid block (a rectangular box in FIG. 8) represents one group. The first precoding matrix may be any grid point on the two-dimensional grid; for example, in FIG. 8, grid point 811 may represent the first precoding matrix. The first group may be any rectangular box on the two-dimensional grid; for example, in FIG. 8, grid block 821 may represent the first group. By adjusting the first beam of a grid block (the shaded circular point in FIG. 8), the starting position of the group can be adjusted; by adjusting the size of the grid block, the number of precoding matrices in the group can be adjusted. It should be noted that the precoding matrices in one group may be contiguous or spaced, which is not limited in the present application.
As an embodiment, the grouping can be controlled by the parameters L1, L2, s1, s2, p1, and p2. L1 is the size of a grid block in the first dimension, L2 is its size in the second dimension, and L1×L2 is the size of the block. s1 is the spacing, in the first dimension, between the first beams of adjacent grid blocks, and s2 is the corresponding spacing in the second dimension. p1 is the spacing between adjacent beams within a grid block in the first dimension, and p2 is the corresponding spacing in the second dimension. In the grouping shown in FIG. 8, the parameter values are: L1=4, L2=2, s1=s2=2, p1=p2=1.
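The grouping parameters above can be sketched as follows. The helper function and (m, n) index convention are illustrative assumptions; the parameter values follow the FIG. 8 example.

```python
def group_indices(first_m, first_n, L1, L2, p1, p2):
    """Beam (m, n) indices in one grid block, given the block's first
    beam, block size L1 x L2, and intra-block spacings p1, p2."""
    return [(first_m + i * p1, first_n + j * p2)
            for i in range(L1) for j in range(L2)]

# Parameter values matching FIG. 8: L1=4, L2=2, s1=s2=2, p1=p2=1.
L1, L2, s1, s2, p1, p2 = 4, 2, 2, 2, 1, 1
group_a = group_indices(0, 0, L1, L2, p1, p2)
group_b = group_indices(s1, 0, L1, L2, p1, p2)  # next block along dim 1
overlap = set(group_a) & set(group_b)
```

Because the blocks are larger than the first-beam spacing (L1 > s1), adjacent blocks share beams, consistent with the overlapping groups discussed below with respect to FIG. 8.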
训练数据可以从分组中的预编码矩阵中选取。例如,可以在第一分组中选择MS个预编码矩阵,并进一步处理(例如进行维度转换),生成维度为N×M×S×I的张量,并将该张量作为第一训练数据。本申请不限制预编码矩阵的选取方式。例如,第一训练数据可以在第一分组中随机选取预编码矩阵。Training data can be selected from the precoding matrices in a group. For example, MS precoding matrices can be selected from the first group and further processed (for example, by dimension conversion) to generate a tensor of dimension N×M×S×I, which is used as the first training data. The present application does not limit the manner of selecting the precoding matrices. For example, the precoding matrices for the first training data may be selected at random from the first group.
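A minimal sketch of generating the first training data from one group is given below. Treating each precoding matrix as an N×I array, and using random stand-in matrices for the group, are assumptions made for illustration only:

```python
import numpy as np

# Hypothetical sketch: draw M*S precoding matrices at random from one group
# and stack them into an N x M x S x I training tensor. Treating each
# precoding matrix as an N x I array, and using random stand-in matrices,
# are assumptions for illustration only.
rng = np.random.default_rng(0)
N, M, S, I = 4, 3, 2, 2
first_group = [rng.standard_normal((N, I)) for _ in range(10)]  # stand-in group
idx = rng.choice(len(first_group), size=M * S, replace=False)   # random pick
stacked = np.stack([first_group[k] for k in idx], axis=1)       # N x MS x I
first_training_data = stacked.reshape(N, M, S, I)               # N x M x S x I
```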
可以理解的是,每个分组均可以生成至少一个训练数据,多个分组即可生成多个训练数据。例如,第一分组可以生成第一训练数据以及第二训练数据,第二分组可以生成第三训练数据。It can be understood that each group can generate at least one training data, and multiple groups can generate multiple training data. For example, the first group can generate the first training data and the second training data, and the second group can generate the third training data.
当获取到的训练数据足够多时,多个训练数据组成的数据集即可覆盖码本中的全部预编码矩阵。可以理解的是,码本中的全部预编码矩阵可以覆盖多样的场景。因此,利用该数据集训练的神经网络模型可以继承码本的高泛化性能。When enough training data are obtained, a data set composed of the multiple training data can cover all the precoding matrices in the codebook. It can be understood that the precoding matrices in the codebook together cover a variety of scenarios. Therefore, a neural network model trained with this data set can inherit the high generalization performance of the codebook.
可选地,多个分组中的预编码矩阵可以重叠,例如,多个分组可以包括第一分组和第二分组,多个预编码矩阵可以包括第一预编码矩阵,第一预编码矩阵可以既属于第一分组也属于第二分组。继续以图8为例,第二分组可以为格块822。第一预编码矩阵811可以既属于第一分组821,也属于第二分组822。由图8所示的示例可以看出,第一分组821和第二分组822可以在第一维度上存在重叠。可以理解的是,第一分组和第二分组可以在第二维度上存在重叠。或者,第一分组和第二分组在第一维度和第二维度上均有重叠。Optionally, the precoding matrices in multiple groups may overlap. For example, the multiple groups may include a first group and a second group, the multiple precoding matrices may include a first precoding matrix, and the first precoding matrix may belong to both the first group and the second group. Continuing with FIG. 8 as an example, the second group may be grid block 822. The first precoding matrix 811 may belong to both the first group 821 and the second group 822. It can be seen from the example shown in FIG. 8 that the first group 821 and the second group 822 may overlap in the first dimension. It can be understood that the first group and the second group may overlap in the second dimension; alternatively, the first group and the second group may overlap in both the first dimension and the second dimension.
可以理解的是,多个分组的预编码矩阵重叠,可以使得码本被分为更多的分组,从而获取到更多的训练数据。It can be understood that the overlap of precoding matrices of multiple groups can make the codebook be divided into more groups, thereby obtaining more training data.
本申请不限制神经网络模型的类型。作为一个实施例,神经网络模型可以包括变分自编码器。The application does not limit the type of neural network model. As an example, the neural network model may include a variational autoencoder.
可以理解的是,在训练的过程中,变分自编码器可以将输入的训练数据映射到一个概率分布上,从而可以由一个训练数据得到符合概率分布的多个码流。多个码流为解码器提供了更多训练数据,从而可以提高解码器的泛化性,从而提高变分自编码器的泛化性。在实际进行信道信息反馈过程(也可以称为测试阶段)中,可以不引入概率分布,则变分自编码器可以成为自编码器,从而使得解码器准确还原输入编码器的信道信息。It can be understood that during the training process, the variational autoencoder can map the input training data to a probability distribution, so that multiple code streams conforming to the probability distribution can be obtained from one training data. Multiple codestreams provide more training data for the decoder, which can improve the generalization of the decoder and thus the generalization of the variational autoencoder. In the actual channel information feedback process (also called the testing phase), the probability distribution may not be introduced, and the variational autoencoder can become an autoencoder, so that the decoder can accurately restore the channel information input to the encoder.
图9为本申请实施例提供的一种基于变分自编码器的信道信息反馈模型。FIG. 9 is a channel information feedback model based on a variational autoencoder provided in an embodiment of the present application.
编码器910的输入数据和解码器920的输出数据均可以为信道信息。信道信息可以为由码本生成的训练数据,也可以为实际获取的信道信息。在训练阶段中,编码器910的输入数据或者解码器920的输出数据可以包括基于码本生成的训练数据,也可以包括实际场景中采集的训练数据。在测试阶段,编码器910的输入数据或者解码器920的输出数据可以为实际信道估计后得到的信道信息。Both the input data of the encoder 910 and the output data of the decoder 920 may be channel information. The channel information may be training data generated by a codebook, or actually obtained channel information. In the training phase, the input data of the encoder 910 or the output data of the decoder 920 may include training data generated based on a codebook, or may include training data collected in an actual scene. In the test phase, the input data of the encoder 910 or the output data of the decoder 920 may be channel information obtained after actual channel estimation.
作为一个实施例,可以对信道信息进行重整形,以改变信道信息的维度。例如,在训练阶段或测试阶段,可以对训练数据或信道估计得到的信道信息进行重整形。对于编码器910的输入数据,重整形可以使得编码器910的输入数据适应编码器910的模型结构,从而简化神经网络模型的运算。对于解码器920的输出数据,重整形可以使得输出数据适应于后续处理,从而简化后续运算。As an embodiment, the channel information may be reshaped to change the dimension of the channel information. For example, in the training phase or the test phase, the training data or the channel information obtained from channel estimation may be reshaped. For the input data of the encoder 910, reshaping can make the input data fit the model structure of the encoder 910, thereby simplifying the operation of the neural network model. For the output data of the decoder 920, reshaping can make the output data suitable for subsequent processing, thereby simplifying subsequent operations.
以信道信息为N×M×S×I的四维张量为例。可以将该张量重整形为其他维格式,以作为编码器910的输入数据或解码器920的输出数据。例如,整形后的信道信息可以为三维张量,三维张量的维度格式可以为NM×S×I、NS×M×I、NI×M×S、N×MS×I、N×MI×S或N×M×SI。或者,整形后的信道信息可以为二维矩阵,二维矩阵的维度格式可以为NMS×I、NMI×S、NSI×M或MSI×N。或者,整形后的信道信息可以为向量,向量的长度可以为NMSI。Take the four-dimensional tensor whose channel information is N×M×S×I as an example. This tensor can be reshaped into other dimensional formats as input data to encoder 910 or output data to decoder 920 . For example, the shaped channel information can be a three-dimensional tensor, and the dimension format of the three-dimensional tensor can be NM×S×I, NS×M×I, NI×M×S, N×MS×I, N×MI×S Or N×M×SI. Alternatively, the shaped channel information may be a two-dimensional matrix, and the dimension format of the two-dimensional matrix may be NMS×I, NMI×S, NSI×M or MSI×N. Alternatively, the shaped channel information may be a vector, and the length of the vector may be NMSI.
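The reshaping variants listed above are plain tensor reshapes. A sketch with NumPy, where the array contents are placeholders and only the dimension formats matter:

```python
import numpy as np

# Sketch of the reshaping variants listed above; the array contents are
# placeholders, only the dimension formats matter here.
N, M, S, I = 4, 3, 2, 2
W = np.zeros((N, M, S, I))          # original N x M x S x I tensor

three_d = W.reshape(N * M, S, I)    # NM x S x I (one of several 3-D formats)
two_d = W.reshape(N * M * S, I)     # NMS x I (one of several 2-D formats)
vec = W.reshape(N * M * S * I)      # vector of length NMSI
```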
编码器910的输出可以包括两个向量m=[m_1,...,m_p]与v=[v_1,...,v_p]。其中,m可以与均值相关,v可以与方差相关。m和v的长度可以为p。输出长度p可以根据量化方案及反馈资源大小确定。例如,在量化方案为3比特均匀量化、反馈资源为48比特的条件下,p取值为48/3=16。The output of the encoder 910 may include two vectors m=[m_1, ..., m_p] and v=[v_1, ..., v_p], where m can be related to the mean and v can be related to the variance. Both m and v can be of length p. The output length p can be determined according to the quantization scheme and the size of the feedback resource. For example, under the condition that the quantization scheme is 3-bit uniform quantization and the feedback resource is 48 bits, p takes the value 48/3 = 16.
解码器920的输入可以为长度为p的向量。训练阶段和测试阶段,解码器920的输入可以不同。例如,在训练阶段,解码器920的输入可以为向量s×exp(v)+m进行量化、解量化后的向量。其中向量v与m由编码器910的输出获得,向量s可以从标准正态分布中采集获得。向量s的长度也可以为p。在测试阶段,解码器920的输入可以为向量m进行量化、解量化后的向量,m可以由编码器910的输出获得。The input to the decoder 920 may be a vector of length p. In the training phase and the testing phase, the input to the decoder 920 can be different. For example, in the training phase, the input of the decoder 920 may be a quantized and dequantized vector of the vector s×exp(v)+m. The vectors v and m are obtained from the output of the encoder 910, and the vector s can be collected from a standard normal distribution. The vector s can also be of length p. In the test phase, the input of the decoder 920 may be a quantized and dequantized vector m, and m may be obtained from the output of the encoder 910 .
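A hedged sketch of the two decoder-input paths described above, assuming 3-bit uniform quantization over a fixed value range [-1, 1] (the quantization range is an assumption; the text only fixes the bit width and the 48-bit feedback budget):

```python
import numpy as np

def quantize_dequantize(x, bits=3, lo=-1.0, hi=1.0):
    # Uniform scalar quantization followed by dequantization. The value
    # range [lo, hi] is an assumption; the text above only specifies the
    # bit width (3 bits) and the feedback budget (48 bits).
    levels = 2 ** bits
    step = (hi - lo) / levels
    q = np.clip(np.floor((x - lo) / step), 0, levels - 1)
    return lo + (q + 0.5) * step

p = 48 // 3                          # feedback bits / bits per element = 16
rng = np.random.default_rng(0)
m = rng.standard_normal(p)           # encoder output related to the mean
v = rng.standard_normal(p)           # encoder output related to the variance
s = rng.standard_normal(p)           # sampled from a standard normal

decoder_in_train = quantize_dequantize(s * np.exp(v) + m)  # training phase
decoder_in_test = quantize_dequantize(m)                   # test phase
```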
本申请不限制变分自编码中编码器或解码器的具体结构。例如可以采用全连接网络、卷积神经网络、残差网络、自注意力机制网络中的一种或者多种网络结构构建。This application does not limit the specific structure of the encoder or decoder in variational self-encoding. For example, it can be constructed by using one or more network structures of fully connected network, convolutional neural network, residual network, and self-attention mechanism network.
图10为本申请实施例提供的一种变分自编码器的结构示意图。图10所示的变分自编码器包括编码器1010和解码器1020。编码器1010可以包括全连接层1012、全连接层1013、全连接层1014、全连接层1015以及全连接层1016。解码器1020可以包括全连接层1022、 全连接层1023以及全连接层1024。图10所示的变分自编码器的量化方案可以为3比特均匀量化,反馈资源可以为48比特。FIG. 10 is a schematic structural diagram of a variational autoencoder provided by an embodiment of the present application. The variational autoencoder shown in FIG. 10 includes an encoder 1010 and a decoder 1020 . The encoder 1010 may include a fully connected layer 1012 , a fully connected layer 1013 , a fully connected layer 1014 , a fully connected layer 1015 and a fully connected layer 1016 . The decoder 1020 may include a fully connected layer 1022 , a fully connected layer 1023 and a fully connected layer 1024 . The quantization scheme of the variational autoencoder shown in FIG. 10 may be 3-bit uniform quantization, and the feedback resource may be 48 bits.
编码器1010的输入可以为维度为768的向量(例如NMSI=768)。全连接层1012可以输出维度为1024的向量。全连接层1013可以输出维度为256的向量。全连接层1014可以输出维度为128的向量。全连接层1015和全连接层1016的输入均可以为全连接层1014的输出。全连接层1015可以输出维度为16的向量m。全连接层1016可以输出维度为16的向量v。The input to the encoder 1010 may be a vector of dimension 768 (eg NMSI=768). The fully connected layer 1012 can output a vector with a dimension of 1024. The fully connected layer 1013 can output a vector with a dimension of 256. The fully connected layer 1014 can output a vector with a dimension of 128. Both the input of the fully connected layer 1015 and the fully connected layer 1016 may be the output of the fully connected layer 1014 . The fully connected layer 1015 can output a vector m with a dimension of 16. The fully connected layer 1016 can output a vector v with dimension 16.
解码器1020的输入可以为维度为16的向量。全连接层1022可以输出维度为2048的向量。全连接层1023可以输出维度为1024的向量。全连接层1024可以输出维度为768的向量。The input of decoder 1020 may be a vector with dimension 16. The fully connected layer 1022 can output a vector with a dimension of 2048. The fully connected layer 1023 can output a vector with a dimension of 1024. The fully connected layer 1024 can output a vector with a dimension of 768.
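The layer dimensions above can be checked with a plain matrix-multiplication sketch. The weights are random, and the ReLU activations are an assumption: the text specifies the layer sizes but not the activation functions:

```python
import numpy as np

# Shape check for the fully connected architecture described above. The
# weights are random and the ReLU activations are an assumption, since the
# text specifies layer sizes but not activation functions.
rng = np.random.default_rng(0)
fc = lambda d_in, d_out: rng.standard_normal((d_in, d_out)) * 0.01

enc = [fc(768, 1024), fc(1024, 256), fc(256, 128)]   # layers 1012-1014
head_m, head_v = fc(128, 16), fc(128, 16)            # layers 1015 and 1016
dec = [fc(16, 2048), fc(2048, 1024), fc(1024, 768)]  # layers 1022-1024

x = rng.standard_normal(768)                         # NMSI = 768 input
h = x
for W_ in enc:
    h = np.maximum(h @ W_, 0.0)
m, v = h @ head_m, h @ head_v                        # two length-16 heads
y = m                                                # decoder input (test phase)
for W_ in dec:
    y = np.maximum(y @ W_, 0.0)
```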
利用码本生成的多个训练数据可以用来训练基于神经网络的信道信息反馈模型。本申请不限制模型的训练方法。作为一种实现方式,可以将W作为编码器的输入,解码器的输出可以为W′。W可以包括利用码本生成的训练数据,也可以包括在实际场景采集的训练数据。The multiple training data generated by using the codebook can be used to train the neural-network-based channel information feedback model. This application does not limit the training method of the model. As an implementation, W can be used as the input of the encoder, and the output of the decoder can be W′. W may include training data generated using the codebook, or may include training data collected in actual scenarios.
本申请不限制损失函数的类型,损失函数例如可以为:This application does not limit the type of loss function. For example, the loss function can be:
loss = ||W - W′||_F + (exp(v) - (z + v) + m⊙m)
其中||·||_F为Frobenius范数,z=[1,1,...,1]^T∈{1}^N,⊙为Hadamard乘积。where ||·||_F is the Frobenius norm, z=[1,1,...,1]^T∈{1}^N, and ⊙ is the Hadamard product.
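A sketch of this loss function follows; interpreting the second term as a sum over the element-wise vector (the reduction to a scalar is an assumption about the intended notation):

```python
import numpy as np

def vae_loss(W, W_hat, m, v):
    # Sketch of the loss above: a Frobenius-norm reconstruction term plus
    # a KL-style regularization term. Summing the element-wise vector
    # exp(v) - (z + v) + m (Hadamard) m down to a scalar is an assumption
    # about the intended reduction.
    recon = np.linalg.norm(W - W_hat)         # Frobenius norm of W - W'
    z = np.ones_like(v)                       # z = [1, 1, ..., 1]^T
    kl = np.sum(np.exp(v) - (z + v) + m * m)  # Hadamard product -> m * m
    return recon + kl
```

For a perfectly reconstructed sample with m = 0 and v = 0 both terms vanish, which is the expected minimum of this form.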
需要说明的是,在一些实施例中,训练数据也称为训练样本或样本。本申请对此不作限制。It should be noted that, in some embodiments, training data is also referred to as training samples or samples. This application is not limited to this.
上文结合图7至图10,详细描述了本申请的方法实施例,下面结合图11,详细描述本申请的装置实施例。应理解,方法实施例的描述与装置实施例的描述相互对应,因此,未详细描述的部分可以参见前面方法实施例。The method embodiment of the present application is described in detail above with reference to FIG. 7 to FIG. 10 , and the device embodiment of the present application is described in detail below in conjunction with FIG. 11 . It should be understood that the descriptions of the method embodiments correspond to the descriptions of the device embodiments, therefore, for parts not described in detail, reference may be made to the foregoing method embodiments.
图11为本申请实施例提供的一种处理数据的装置1100的示意性结构图。装置1100可以包括生成单元1110。FIG. 11 is a schematic structural diagram of an apparatus 1100 for processing data provided by an embodiment of the present application. The apparatus 1100 may include a generating unit 1110 .
第一生成单元1110可以用于利用码本生成多个训练数据,所述多个训练数据用于训练神经网络模型,所述神经网络模型用于进行信道信息反馈。The first generating unit 1110 may be configured to generate a plurality of training data by using a codebook, the plurality of training data are used to train a neural network model, and the neural network model is used for channel information feedback.
可选地,所述神经网络模型包括变分自编码器。Optionally, the neural network model includes a variational autoencoder.
可选地,所述码本包括多个预编码矩阵。Optionally, the codebook includes multiple precoding matrices.
可选地,所述装置1100还可以包括过采样单元1120。过采样单元1120可以用于对所述多个预编码矩阵进行过采样处理。Optionally, the apparatus 1100 may further include an oversampling unit 1120 . The oversampling unit 1120 may be configured to perform oversampling processing on the multiple precoding matrices.
可选地,所述多个预编码矩阵属于多个分组,所述多个分组包括第一分组,所述多个训练数据包括第一训练数据,所述第一生成单元1110可以包括:第二生成单元,用于利用所述第一分组中的预编码矩阵生成所述第一训练数据。Optionally, the multiple precoding matrices belong to multiple groups, the multiple groups include a first group, the multiple training data include first training data, and the first generating unit 1110 may include: a second generating unit, configured to generate the first training data by using the precoding matrices in the first group.
可选地,所述多个预编码矩阵包括第一预编码矩阵,所述多个分组包括第二分组,所述第一预编码矩阵既属于所述第一分组也属于所述第二分组。可选地,所述预编码矩阵包括离散傅里叶变换DFT向量。Optionally, the multiple precoding matrices include a first precoding matrix, the multiple groups include a second group, and the first precoding matrix belongs to both the first group and the second group. Optionally, the precoding matrix includes a discrete Fourier transform (DFT) vector.
可选地,所述装置还包括:重整形单元,用于对所述训练数据进行重整形,以改变所述训练数据的维度。Optionally, the device further includes: a reshaping unit, configured to reshape the training data, so as to change the dimensions of the training data.
图12是本申请实施例的处理数据的装置的示意性结构图。图12中的虚线表示该单元或模块为可选的。该装置1200可用于实现上述方法实施例中描述的方法。装置1200可以是芯片、终端设备或网络设备。FIG. 12 is a schematic structural diagram of an apparatus for processing data according to an embodiment of the present application. The dashed line in Figure 12 indicates that the unit or module is optional. The apparatus 1200 may be used to implement the methods described in the foregoing method embodiments. Apparatus 1200 may be a chip, a terminal device or a network device.
装置1200可以包括一个或多个处理器1210。该处理器1210可支持装置1200实现前文方法实施例所描述的方法。该处理器1210可以是通用处理器或者专用处理器。例如,该处理器可以为中央处理单元(central processing unit,CPU)。或者,该处理器还可以是其他通用处理器、数字信号处理器(digital signal processor,DSP)、专用集成电路(application specific integrated circuit,ASIC)、现场可编程门阵列(field programmable gate array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。Apparatus 1200 may include one or more processors 1210. The processor 1210 can support the apparatus 1200 in implementing the methods described in the foregoing method embodiments. The processor 1210 may be a general-purpose processor or a special-purpose processor. For example, the processor may be a central processing unit (CPU). Alternatively, the processor may also be another general-purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like.
装置1200还可以包括一个或多个存储器1220。存储器1220上存储有程序,该程序可以被处理器1210执行,使得处理器1210执行前文方法实施例所描述的方法。存储器1220可以独立于处理器1210也可以集成在处理器1210中。Apparatus 1200 may also include one or more memories 1220 . A program is stored in the memory 1220, and the program can be executed by the processor 1210, so that the processor 1210 executes the methods described in the foregoing method embodiments. The memory 1220 may be independent from the processor 1210 or may be integrated in the processor 1210 .
装置1200还可以包括收发器1230。处理器1210可以通过收发器1230与其他设备或芯片进行通信。例如,处理器1210可以通过收发器1230与其他设备或芯片进行数据收发。The apparatus 1200 may also include a transceiver 1230 . The processor 1210 can communicate with other devices or chips through the transceiver 1230 . For example, the processor 1210 may send and receive data with other devices or chips through the transceiver 1230 .
本申请实施例还提供一种计算机可读存储介质,用于存储程序。该计算机可读存储介质可应用于本申请实施例提供的终端或网络设备中,并且该程序使得计算机执行本申请各个实施例中的由终端或网络设备执行的方法。The embodiment of the present application also provides a computer-readable storage medium for storing programs. The computer-readable storage medium can be applied to the terminal or the network device provided in the embodiments of the present application, and the program enables the computer to execute the methods performed by the terminal or the network device in the various embodiments of the present application.
本申请实施例还提供一种计算机程序产品。该计算机程序产品包括程序。该计算机程序产品可应用于本申请实施例提供的终端或网络设备中,并且该程序使得计算机执行本申请各个实施例中的由终端或网络设备执行的方法。The embodiment of the present application also provides a computer program product. The computer program product includes programs. The computer program product can be applied to the terminal or the network device provided in the embodiments of the present application, and the program enables the computer to execute the methods performed by the terminal or the network device in the various embodiments of the present application.
本申请实施例还提供一种计算机程序。该计算机程序可应用于本申请实施例提供的终端或网络设备中,并且该计算机程序使得计算机执行本申请各个实施例中的由终端或网络 设备执行的方法。The embodiment of the present application also provides a computer program. The computer program can be applied to the terminal or the network device provided in the embodiments of the present application, and the computer program enables the computer to execute the methods performed by the terminal or the network device in the various embodiments of the present application.
应理解,本申请中术语“系统”和“网络”可以互换使用。另外,本申请使用的术语仅用于对本申请的具体实施例进行解释,而非旨在限定本申请。本申请的说明书和权利要求书及所述附图中的术语“第一”、“第二”、“第三”和“第四”等是用于区别不同对象,而不是用于描述特定顺序。此外,术语“包括”和“具有”以及它们任何变形,意图在于覆盖不排他的包含。It should be understood that the terms "system" and "network" may be used interchangeably in this application. In addition, the terms used in this application are only used to explain the specific embodiments of the application, and are not intended to limit the application. The terms "first", "second", "third" and "fourth" in the specification, claims and drawings of the present application are used to distinguish different objects, rather than to describe a specific order. Furthermore, the terms "include" and "have", as well as any variations thereof, are intended to cover a non-exclusive inclusion.
在本申请的实施例中,提到的“指示”可以是直接指示,也可以是间接指示,还可以是表示具有关联关系。举例说明,A指示B,可以表示A直接指示B,例如B可以通过A获取;也可以表示A间接指示B,例如A指示C,B可以通过C获取;还可以表示A和B之间具有关联关系。In the embodiments of the present application, the "indication" mentioned may be a direct indication, may also be an indirect indication, and may also mean that there is an association relationship. For example, A indicates B, which can mean that A directly indicates B, for example, B can be obtained through A; it can also indicate that A indirectly indicates B, for example, A indicates C, and B can be obtained through C; it can also indicate that there is an association between A and B relation.
在本申请实施例中,“与A相应的B”表示B与A相关联,根据A可以确定B。但还应理解,根据A确定B并不意味着仅仅根据A确定B,还可以根据A和/或其它信息确定B。In this embodiment of the application, "B corresponding to A" means that B is associated with A, and B can be determined according to A. However, it should also be understood that determining B according to A does not mean determining B only according to A, and B may also be determined according to A and/or other information.
在本申请实施例中,术语“对应”可表示两者之间具有直接对应或间接对应的关系,也可以表示两者之间具有关联关系,也可以是指示与被指示、配置与被配置等关系。In this embodiment of the application, the term "corresponding" may indicate that there is a direct or indirect correspondence between the two, or that there is an association between the two, or that it indicates and is instructed, configures and is configured, etc. relation.
本申请实施例中,“预定义”或“预配置”可以通过在设备(例如,包括终端设备和网络设备)中预先保存相应的代码、表格或其他可用于指示相关信息的方式来实现,本申请对于其具体的实现方式不做限定。比如预定义可以是指协议中定义的。In this embodiment of the application, "predefined" or "preconfigured" can be realized by pre-saving corresponding codes, tables or other methods that can be used to indicate relevant information in devices (for example, including terminal devices and network devices). The application does not limit its specific implementation. For example, pre-defined may refer to defined in the protocol.
本申请实施例中,所述“协议”可以指通信领域的标准协议,例如可以包括LTE协议、NR协议以及应用于未来的通信系统中的相关协议,本申请对此不做限定。In the embodiment of the present application, the "protocol" may refer to a standard protocol in the communication field, for example, may include the LTE protocol, the NR protocol, and related protocols applied to future communication systems, which is not limited in the present application.
本申请实施例中术语“和/或”,仅仅是一种描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。另外,本文中字符“/”,一般表示前后关联对象是一种“或”的关系。The term "and/or" in the embodiment of the present application is only an association relationship describing associated objects, which means that there may be three relationships, for example, A and/or B, which can mean: A exists alone, and A and B exist at the same time , there are three cases of B alone. In addition, the character "/" in this article generally indicates that the contextual objects are an "or" relationship.
在本申请的各种实施例中,上述各过程的序号的大小并不意味着执行顺序的先后,各过程的执行顺序应以其功能和内在逻辑确定,而不应对本申请实施例的实施过程构成任何限定。In various embodiments of the present application, the sequence numbers of the above-mentioned processes do not mean the order of execution, and the execution order of each process should be determined by its functions and internal logic, rather than the implementation process of the embodiments of the present application. constitute any limitation.
在本申请所提供的几个实施例中,应该理解到,所揭露的系统、装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。In the several embodiments provided in this application, it should be understood that the disclosed systems, devices and methods may be implemented in other ways. For example, the device embodiments described above are only illustrative. For example, the division of the units is only a logical function division, and in actual implementation there may be other division methods; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be implemented through some interfaces, and the indirect coupling or communication connection of devices or units may be in electrical, mechanical or other forms.
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place, or may be distributed to multiple network units. Part or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。In addition, each functional unit in each embodiment of the present application may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit.
在上述实施例中,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。所述计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行所述计算机程序指令时,全部或部分地产生按照本申请实施例所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。所述计算机指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一个计算机可读存储介质传输,例如,所述计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线(digital subscriber line,DSL))或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。所述计算机可读存储介质可以是计算机能够读取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质,(例如,软盘、硬盘、磁带)、光介质(例如,数字通用光盘(digital video disc,DVD))或者半导体介质(例如,固态硬盘(solid state disk,SSD))等。In the above embodiments, all or part of them may be implemented by software, hardware, firmware or any combination thereof. When implemented using software, it may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on the computer, the processes or functions according to the embodiments of the present application will be generated in whole or in part. The computer can be a general purpose computer, a special purpose computer, a computer network, or other programmable devices. The computer instructions may be stored in or transmitted from one computer-readable storage medium to another computer-readable storage medium, for example, the computer instructions may be transmitted from a website, computer, server or data center Transmission to another website site, computer, server or data center by wired (such as coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (such as infrared, wireless, microwave, etc.). The computer-readable storage medium may be any available medium that can be read by a computer, or a data storage device such as a server or a data center integrated with one or more available media. 
The available medium may be a magnetic medium (for example, a floppy disk, a hard disk, a magnetic tape), an optical medium (for example, a digital versatile disc (digital video disc, DVD)), or a semiconductor medium (for example, a solid state disk (SSD)), etc.
以上所述,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。The above is only a specific implementation of the present application, but the protection scope of the present application is not limited thereto. Any variation or replacement that a person skilled in the art can readily conceive of within the technical scope disclosed in the present application shall be covered within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (22)

  1. 一种处理数据的方法,其特征在于,包括:A method for processing data, comprising:
    利用码本生成多个训练数据,所述多个训练数据用于训练神经网络模型,所述神经网络模型用于进行信道信息反馈。A codebook is used to generate a plurality of training data, the plurality of training data are used to train a neural network model, and the neural network model is used to perform channel information feedback.
  2. 根据权利要求1所述的方法,其特征在于,所述神经网络模型包括变分自编码器。The method according to claim 1, wherein the neural network model comprises a variational autoencoder.
  3. 根据权利要求1或2所述的方法,其特征在于,所述码本包括多个预编码矩阵。The method according to claim 1 or 2, wherein the codebook comprises a plurality of precoding matrices.
  4. 根据权利要求3所述的方法,其特征在于,所述方法还包括:The method according to claim 3, characterized in that the method further comprises:
    对所述多个预编码矩阵进行过采样处理。Perform oversampling processing on the multiple precoding matrices.
  5. 根据权利要求3或4所述的方法,其特征在于,所述多个预编码矩阵属于多个分组,所述多个分组包括第一分组,所述多个训练数据包括第一训练数据,所述利用码本生成多个训练数据包括:The method according to claim 3 or 4, wherein the plurality of precoding matrices belong to a plurality of groups, the plurality of groups include a first group, the plurality of training data includes first training data, and generating the plurality of training data by using the codebook comprises:
    利用所述第一分组中的预编码矩阵生成所述第一训练数据。The first training data is generated by using the precoding matrix in the first group.
  6. 根据权利要求5所述的方法,其特征在于,所述多个预编码矩阵包括第一预编码矩阵,所述多个分组包括第二分组,所述第一预编码矩阵既属于所述第一分组也属于所述第二分组。The method according to claim 5, wherein the plurality of precoding matrices include a first precoding matrix, the plurality of groups include a second group, and the first precoding matrix belongs to both the first group and the second group.
  7. 根据权利要求1~6中任一项所述的方法,其特征在于,所述预编码矩阵包括离散傅里叶变换DFT向量。The method according to any one of claims 1-6, wherein the precoding matrix comprises a discrete Fourier transform (DFT) vector.
  8. 根据权利要求1~7中任一项所述的方法,其特征在于,所述方法还包括:The method according to any one of claims 1 to 7, wherein the method further comprises:
    对所述训练数据进行重整形,以改变所述训练数据的维度。Reshaping the training data to change the dimensions of the training data.
  9. 一种处理数据的装置,其特征在于,包括:A device for processing data, characterized in that it comprises:
    第一生成单元,用于利用码本生成多个训练数据,所述多个训练数据用于训练神经网络模型,所述神经网络模型用于进行信道信息反馈。The first generation unit is configured to generate a plurality of training data using a codebook, the plurality of training data are used to train a neural network model, and the neural network model is used for channel information feedback.
  10. 根据权利要求9所述的装置,其特征在于,所述神经网络模型包括变分自编码器。The apparatus according to claim 9, wherein the neural network model comprises a variational autoencoder.
  11. 根据权利要求9或10所述的装置,其特征在于,所述码本包括多个预编码矩阵。The device according to claim 9 or 10, wherein the codebook comprises a plurality of precoding matrices.
  12. 根据权利要求11所述的装置,其特征在于,所述装置还包括:The device according to claim 11, further comprising:
    过采样单元,用于对所述多个预编码矩阵进行过采样处理。An oversampling unit, configured to perform oversampling processing on the plurality of precoding matrices.
  13. 根据权利要求11或12所述的装置,其特征在于,所述多个预编码矩阵属于多个分组,所述多个分组包括第一分组,所述多个训练数据包括第一训练数据,所述第一生成单元包括:The device according to claim 11 or 12, wherein the plurality of precoding matrices belong to a plurality of groups, the plurality of groups include a first group, the plurality of training data includes first training data, and the first generating unit comprises:
    第二生成单元,用于利用所述第一分组中的预编码矩阵生成所述第一训练数据。A second generating unit, configured to generate the first training data by using the precoding matrix in the first group.
  14. 根据权利要求13所述的装置,其特征在于,所述多个预编码矩阵包括第一预编码矩阵,所述多个分组包括第二分组,所述第一预编码矩阵既属于所述第一分组也属于所述第二分组。The device according to claim 13, wherein the plurality of precoding matrices include a first precoding matrix, the plurality of groups include a second group, and the first precoding matrix belongs to both the first group and the second group.
  15. 根据权利要求9~14中任一项所述的装置,其特征在于,所述预编码矩阵包括离散傅里叶变换DFT向量。The device according to any one of claims 9-14, wherein the precoding matrix comprises a discrete Fourier transform (DFT) vector.
  16. 根据权利要求9~15中任一项所述的装置,其特征在于,所述装置还包括:The device according to any one of claims 9-15, characterized in that the device further comprises:
    重整形单元,用于对所述训练数据进行重整形,以改变所述训练数据的维度。A reshaping unit, configured to reshape the training data to change the dimensions of the training data.
  17. 一种处理数据的装置,其特征在于,包括存储器和处理器,所述存储器用于存储程序,所述处理器用于调用所述存储器中的程序,以执行如权利要求1-8中任一项所述的方法。A device for processing data, comprising a memory and a processor, wherein the memory is configured to store a program, and the processor is configured to call the program in the memory to execute the method according to any one of claims 1-8.
  18. 一种处理数据的装置,其特征在于,包括处理器,用于从存储器中调用程序,以执行如权利要求1-8中任一项所述的方法。A device for processing data, characterized by comprising a processor for invoking a program from a memory to execute the method according to any one of claims 1-8.
  19. 一种芯片,其特征在于,包括处理器,用于从存储器调用程序,使得安装有所述芯片的设备执行如权利要求1-8中任一项所述的方法。A chip, characterized by comprising a processor, configured to call a program from a memory, so that a device installed with the chip executes the method according to any one of claims 1-8.
  20. 一种计算机可读存储介质,其特征在于,其上存储有程序,所述程序使得计算机执行如权利要求1-8中任一项所述的方法。A computer-readable storage medium, characterized in that a program is stored thereon, and the program causes a computer to execute the method according to any one of claims 1-8.
  21. 一种计算机程序产品,其特征在于,包括程序,所述程序使得计算机执行如权利要求1-8中任一项所述的方法。A computer program product, characterized by comprising a program, the program causes a computer to execute the method according to any one of claims 1-8.
  22. 一种计算机程序,其特征在于,所述计算机程序使得计算机执行如权利要求1-8中任一项所述的方法。A computer program, characterized in that the computer program causes a computer to execute the method according to any one of claims 1-8.
PCT/CN2021/139596 2021-12-20 2021-12-20 Data processing method and device WO2023115254A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/139596 WO2023115254A1 (en) 2021-12-20 2021-12-20 Data processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/139596 WO2023115254A1 (en) 2021-12-20 2021-12-20 Data processing method and device

Publications (1)

Publication Number Publication Date
WO2023115254A1 true WO2023115254A1 (en) 2023-06-29

Family

ID=86900943

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/139596 WO2023115254A1 (en) 2021-12-20 2021-12-20 Data processing method and device

Country Status (1)

Country Link
WO (1) WO2023115254A1 (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111464220A (en) * 2020-03-10 2020-07-28 西安交通大学 Channel state information reconstruction method based on deep learning
CN112583501A (en) * 2019-09-30 2021-03-30 华为技术有限公司 Channel measurement method and communication device
US20210250068A1 (en) * 2020-02-10 2021-08-12 Korea University Research And Business Foundation Limited-feedback method and device based on machine learning in wireless communication system
CN113381790A (en) * 2021-06-09 2021-09-10 东南大学 AI-based environment knowledge assisted wireless channel feedback method
CN113381950A (en) * 2021-04-25 2021-09-10 清华大学 Efficient MIMO channel feedback method and device based on network aggregation strategy
WO2021237423A1 (en) * 2020-05-25 2021-12-02 Oppo广东移动通信有限公司 Channel state information transmission methods, electronic device, and storage medium
CN113810086A (en) * 2020-06-12 2021-12-17 华为技术有限公司 Channel information feedback method, communication device and storage medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Mostafa Hussien; Kim Khoa Nguyen; Mohamed Cheriet: "PRVNet: Variational Autoencoders for Massive MIMO CSI Feedback", arXiv.org, Cornell University Library, Ithaca, NY, 9 November 2020 (2020-11-09), XP081809116 *

Similar Documents

Publication Publication Date Title
WO2021217519A1 (en) Method and apparatus for adjusting neural network
US20240137082A1 (en) Communication method and apparatus
WO2023126007A1 (en) Channel information transmission method and apparatus
US20230136416A1 (en) Neural network obtaining method and apparatus
WO2023115254A1 (en) Data processing method and device
CN114492784A (en) Neural network testing method and device
WO2023070675A1 (en) Data processing method and apparatus
WO2023283785A1 (en) Method for processing signal, and receiver
WO2023016503A1 (en) Communication method and apparatus
US20240187052A1 (en) Communication method and apparatus
WO2024008004A1 (en) Communication method and apparatus
WO2024032775A1 (en) Quantization method and apparatus
WO2023231881A1 (en) Model application method and apparatus
WO2024046288A1 (en) Communication method and apparatus
WO2023279947A1 (en) Communication method and apparatus
US20240120971A1 (en) Information transmission method and apparatus
WO2023006096A1 (en) Communication method and apparatus
WO2023236803A1 (en) Channel information transmission method, communication apparatus and communication device
WO2023133886A1 (en) Channel information feedback method, sending end device, and receiving end device
WO2023125699A1 (en) Communication method and apparatus
WO2024046215A1 (en) Communication method and apparatus
WO2024108356A1 (en) Csi feedback method, transmitter device and receiver device
WO2024036631A1 (en) Information feedback method and apparatus, device, and storage medium
WO2023231934A1 (en) Communication method and apparatus
WO2023060503A1 (en) Information processing method and apparatus, device, medium, chip, product, and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21968411

Country of ref document: EP

Kind code of ref document: A1