WO2022217506A1 - Channel information feedback method, sending end device, and receiving end device - Google Patents

Channel information feedback method, sending end device, and receiving end device

Info

Publication number
WO2022217506A1
Authority
WO
WIPO (PCT)
Prior art keywords
neural network
feature vector
target
end device
bit stream
Application number
PCT/CN2021/087288
Other languages
French (fr)
Chinese (zh)
Inventor
肖寒
田文强
刘文东
Original Assignee
Oppo广东移动通信有限公司
Application filed by Oppo广东移动通信有限公司
Priority to PCT/CN2021/087288 priority Critical patent/WO2022217506A1/en
Priority to CN202180079843.3A priority patent/CN116569527A/en
Publication of WO2022217506A1 publication Critical patent/WO2022217506A1/en

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 25/00: Baseband systems
    • H04L 25/02: Details; arrangements for supplying electrical power along data transmission lines
    • H04L 25/03: Shaping networks in transmitter or receiver, e.g. adaptive shaping networks

Definitions

  • the embodiments of the present application relate to the field of communications, and in particular, to a method for feeding back channel information, a transmitter device, and a receiver device.
  • in the related art, the codebook-based scheme is mainly used to achieve channel feature extraction and feedback. That is, after channel estimation is performed at the transmitting end, the precoding matrix that best matches the channel estimation result is selected from a preset precoding codebook according to a specific optimization criterion, and the index information of that precoding matrix is fed back to the receiving end through the feedback link of the air interface, for the receiving end to implement precoding.
  • since the precoding codebook itself is limited, the mapping of the channel information to an entry in the precoding codebook is a lossy quantization process, which reduces the accuracy of the fed-back channel information and in turn degrades the precoding performance. If the full channel information of the channel estimation is fed back instead, the CSI feedback overhead is large. Therefore, how to balance the feedback accuracy of the channel information against the CSI feedback overhead is an urgent problem to be solved.
  • the present application provides a channel information feedback method, a transmitting end device and a receiving end device, which can take into account both the feedback accuracy of the channel information and the CSI feedback overhead.
  • in a first aspect, a method for channel information feedback is provided, comprising: a transmitting end device receiving a reference signal sent by a receiving end device; performing channel estimation according to the reference signal to obtain the channel information between the transmitting end device and the receiving end device; performing feature decomposition on the channel information to obtain at least one first feature vector; encoding the at least one first feature vector through a neural network to obtain a target bit stream; and sending the target bit stream to the receiving end device.
  • in a second aspect, a method for channel information feedback is provided, comprising: a receiving end device receiving a target bit stream sent by a transmitting end device, where the target bit stream is obtained by the transmitting end device encoding at least one first feature vector, and the at least one first feature vector is obtained by the transmitting end device performing feature decomposition on the channel estimation result; and decoding the target bit stream through a neural network to obtain at least one target feature vector.
  • a transmitting end device configured to execute the method in the above-mentioned first aspect or each of its implementations.
  • the sending end device includes a functional module for executing the method in the above-mentioned first aspect or each implementation manner thereof.
  • a receiving end device is provided, which is configured to execute the method in the second aspect or each of its implementations.
  • the receiving end device includes a functional module for executing the method in the second aspect or each implementation manner thereof.
  • a transmitter device including a processor and a memory.
  • the memory is used for storing a computer program
  • the processor is used for calling and running the computer program stored in the memory to execute the method in the above-mentioned first aspect or each implementation manner thereof.
  • a receiver device including a processor and a memory.
  • the memory is used to store a computer program
  • the processor is used to call and run the computer program stored in the memory to execute the method in the second aspect or each of its implementations.
  • a chip is provided for implementing any one of the above-mentioned first aspect to the second aspect or the method in each implementation manner thereof.
  • the chip includes: a processor for invoking and running a computer program from a memory, so that a device in which the chip is installed executes the method in any one of the above-mentioned first to second aspects or each of its implementations.
  • a computer-readable storage medium for storing a computer program, the computer program causing a computer to execute the method in any one of the above-mentioned first aspect to the second aspect or each of its implementations.
  • a computer program product comprising computer program instructions, the computer program instructions causing a computer to execute the method in any one of the above-mentioned first to second aspects or the implementations thereof.
  • a computer program which, when run on a computer, causes the computer to perform the method in any one of the above-mentioned first to second aspects or the respective implementations thereof.
  • through the above technical solutions, the transmitting end device obtains at least one eigenvector by decomposing the full channel information of the channel estimation, encodes the eigenvector(s) through a neural network to obtain a target bit stream, and sends the target bit stream to the receiving end; correspondingly, the receiving end decodes the target bit stream to obtain the target feature vector(s).
  • in this way, when channel information is fed back, only the target bit stream encoding the feature vectors obtained by decomposing the full channel information needs to be sent, which is beneficial to reducing the CSI overhead.
  • in addition, encoding the eigenvectors of the full channel information, instead of directly encoding the full channel information itself, takes the correlation between the eigenvectors into account, which helps avoid compressing too much redundant information, improves the compression efficiency, and thus improves the encoding performance.
  • FIG. 1 is a schematic diagram of a communication system architecture provided by an embodiment of the present application.
  • FIG. 2 is a schematic interaction diagram of a method for feeding back channel information according to an embodiment of the present application.
  • FIG. 3 is a system architecture diagram of a method for feeding back channel information according to an embodiment of the present application.
  • FIG. 4 is a schematic structural diagram of a neural network.
  • FIG. 5 is a schematic structural diagram of an encoder according to an embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of a decoder according to an embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of the common module in FIG. 6 .
  • FIG. 8 is a schematic structural diagram of a recurrent neural network.
  • FIG. 9 is a schematic structural diagram of an encoder according to another embodiment of the present application.
  • FIG. 10 is a schematic structural diagram of a decoder according to another embodiment of the present application.
  • FIG. 11 is a schematic structural diagram of an encoder according to yet another embodiment of the present application.
  • FIG. 12 is an exemplary structural diagram of the self-attention module in FIG. 11 .
  • FIG. 13 is a schematic structural diagram of a decoder according to yet another embodiment of the present application.
  • FIG. 14 is a schematic structural diagram of the mask block in FIG. 13 .
  • FIG. 15 is a schematic structural diagram of the residual block in FIG. 13 .
  • FIG. 16 is a schematic block diagram of a transmitting end device according to an embodiment of the present application.
  • FIG. 17 is a schematic block diagram of a receiving end device according to an embodiment of the present application.
  • FIG. 18 is a schematic block diagram of a communication device provided according to an embodiment of the present application.
  • FIG. 19 is a schematic block diagram of a chip provided according to an embodiment of the present application.
  • FIG. 20 is a schematic block diagram of a communication system provided according to an embodiment of the present application.
  • GSM: Global System for Mobile communications
  • CDMA: Code Division Multiple Access
  • WCDMA: Wideband Code Division Multiple Access
  • GPRS: General Packet Radio Service
  • LTE: Long Term Evolution
  • LTE-A: Advanced Long Term Evolution
  • NR: New Radio
  • NTN: Non-Terrestrial Networks
  • UMTS: Universal Mobile Telecommunications System
  • WLAN: Wireless Local Area Network
  • WiFi: Wireless Fidelity
  • 5G: fifth-generation communication
  • D2D: Device to Device
  • M2M: Machine to Machine
  • MTC: Machine Type Communication
  • V2V: Vehicle to Vehicle
  • V2X: Vehicle to everything
  • the communication system in this embodiment of the present application may be applied to a carrier aggregation (Carrier Aggregation, CA) scenario, a dual connectivity (Dual Connectivity, DC) scenario, or a standalone (Standalone, SA) networking scenario.
  • the communication system in the embodiment of the present application may be applied to an unlicensed spectrum, where the unlicensed spectrum may also be considered a shared spectrum; alternatively, the communication system in the embodiment of the present application may also be applied to a licensed spectrum, where the licensed spectrum may also be considered an unshared spectrum.
  • the embodiments of the present application are described in conjunction with network equipment and terminal equipment, where the terminal equipment may also be referred to as user equipment (User Equipment, UE), access terminal, subscriber unit, subscriber station, mobile station, remote station, remote terminal, mobile device, user terminal, terminal, wireless communication device, user agent or user device, etc.
  • the terminal device may be a station (STATION, ST) in the WLAN, a cellular phone, a cordless phone, a Session Initiation Protocol (Session Initiation Protocol, SIP) phone, a Wireless Local Loop (WLL) station, a personal digital assistant (Personal Digital Assistant, PDA) device, a handheld device with wireless communication capabilities, a computing device or other processing device connected to a wireless modem, an in-vehicle device, a wearable device, a terminal device in a next-generation communication system such as an NR network, or a terminal device in a future evolved public land mobile network (Public Land Mobile Network, PLMN), etc.
  • the terminal device can be deployed on land, including indoor or outdoor, handheld, wearable, or vehicle-mounted; it can also be deployed on water (such as on ships); and it can also be deployed in the air (such as on airplanes, balloons, and satellites).
  • the terminal device may be a mobile phone (Mobile Phone), a tablet computer (Pad), a computer with a wireless transceiver function, a virtual reality (Virtual Reality, VR) terminal device, an augmented reality (Augmented Reality, AR) terminal device, a wireless terminal device in industrial control, a wireless terminal device in self-driving, a wireless terminal device in remote medical care, a wireless terminal device in smart grid, a wireless terminal device in transportation safety, a wireless terminal device in smart city, or a wireless terminal device in smart home, etc.
  • the terminal device may also be a wearable device.
  • wearable devices may also be called wearable smart devices, a general term for devices for daily wear, such as glasses, gloves, watches, clothing and shoes, that are intelligently designed and developed using wearable technology.
  • a wearable device is a portable device that is worn directly on the body or integrated into the user's clothing or accessories. A wearable device is not only a hardware device, but also realizes powerful functions through software support, data interaction and cloud interaction.
  • wearable smart devices include full-featured, large-sized devices that can realize complete or partial functions without relying on smartphones, such as smart watches or smart glasses, as well as devices that focus on only a certain type of application function and need to cooperate with other devices such as smartphones.
  • the network device may be a device for communicating with a mobile device; the network device may be an access point (Access Point, AP) in WLAN, a base transceiver station (Base Transceiver Station, BTS) in GSM or CDMA, a base station (NodeB, NB) in WCDMA, an evolved base station (Evolutional Node B, eNB or eNodeB) in LTE, a relay station or access point, an in-vehicle device, a wearable device, or a network device in an NR network, etc.
  • the network device may have a mobile feature, for example, the network device may be a mobile device.
  • the network device may be a satellite or a balloon station.
  • the satellite may be a low earth orbit (LEO) satellite, a medium earth orbit (MEO) satellite, a geostationary earth orbit (GEO) satellite, a high elliptical orbit (High Elliptical Orbit, HEO) satellite, etc.
  • the network device may also be a base station set in a location such as land or water.
  • a network device may provide services for a cell, and a terminal device communicates with the network device through transmission resources (for example, frequency domain resources, or spectrum resources) used by the cell. The cell may be a cell corresponding to the network device, and may belong to a macro base station or to a base station corresponding to a small cell (Small cell), such as a pico cell (Pico cell) or a femto cell (Femto cell), etc.
  • These small cells have the characteristics of small coverage and low transmission power, and are suitable for providing high-speed data transmission services.
  • the communication system 100 may include a network device 110, and the network device 110 may be a device that communicates with a terminal device 120 (or referred to as a communication terminal, a terminal).
  • the network device 110 may provide communication coverage for a particular geographic area, and may communicate with terminal devices located within the coverage area.
  • FIG. 1 exemplarily shows one network device and two terminal devices.
  • the communication system 100 may include multiple network devices, and the coverage of each network device may include other numbers of terminal devices, which is not limited in this embodiment of the present application.
  • the communication system 100 may further include other network entities such as a network controller and a mobility management entity, which are not limited in this embodiment of the present application.
  • a device having a communication function in the network/system may be referred to as a communication device.
  • the communication device may include a network device 110 and a terminal device 120 with a communication function, and the network device 110 and the terminal device 120 may be the specific devices described above, which will not be repeated here.
  • the communication device may also include other devices in the communication system 100, such as other network entities such as a network controller, a mobility management entity, etc., which are not limited in this embodiment of the present application.
  • the "indication" mentioned in the embodiments of the present application may be a direct indication, an indirect indication, or an indication of an associated relationship.
  • "A indicates B" may mean that A directly indicates B, for example, B can be obtained through A; it may also mean that A indicates B indirectly, for example, A indicates C and B can be obtained through C; or it may mean that there is an association relationship between A and B.
  • "corresponding" may indicate that there is a direct or indirect correspondence between the two, or that there is an association relationship between the two, or a relationship of indicating and being indicated, configuring and being configured, etc.
  • "predefinition" may be implemented by pre-saving corresponding codes, tables, or other means that can be used to indicate relevant information in devices (for example, including terminal devices and network devices), and the specific implementation is not limited in this application.
  • predefined may refer to the definition in the protocol.
  • the "protocol” may refer to a standard protocol in the communication field, for example, may include the LTE protocol, the NR protocol, and related protocols applied in future communication systems, which are not limited in this application.
  • the feedback of the channel information adopts a codebook-based feedback scheme.
  • the feedback scheme is to select the optimal channel information eigenvector from the precoding codebook according to the result of the channel estimation. Since the precoding codebook itself is limited, the mapping of the channel estimation result into the precoding codebook is a lossy quantization process, which reduces the accuracy of the channel information fed back, and in turn reduces the performance of precoding. If the full channel information is fed back, the CSI feedback overhead will increase. Therefore, how to take into account both the feedback accuracy of the channel information and the CSI feedback overhead is an urgent problem to be solved.
  • FIG. 2 is a schematic interaction diagram of a method 200 for channel information feedback according to an embodiment of the present application. As shown in FIG. 2 , the method 200 includes at least some of the following contents:
  • the transmitting end device receives the reference signal sent by the receiving end device
  • the transmitting end device performs channel estimation according to the reference signal, and obtains channel information between the transmitting end device and the receiving end device;
  • the transmitting end device performs feature decomposition on the channel information to obtain at least one first feature vector
  • the sending end device encodes the at least one first feature vector through a neural network to obtain a target bit stream
  • the sending end device sends the target bit stream to the receiving end device.
  • the receiving end device decodes the target bit stream to obtain at least one target feature vector.
  • the sending end device is a terminal device
  • the receiving end device is a network device.
  • the sending end device is a network device
  • the receiving end device is a terminal device
  • the transmitting end device is a terminal device
  • the receiving end device is another terminal device.
  • the sending end device is a network device
  • the receiving end device is another network device.
  • the reference signal may also differ depending on the transmitting end device and the receiving end device.
  • for example, the reference signal may be a demodulation reference signal (Demodulation Reference Signal, DMRS).
  • an encoder is deployed in the transmitting end device, and a decoder is deployed in the receiving end device.
  • in some embodiments, the encoder in the transmitting end device and the decoder in the receiving end device can be implemented by a neural network.
  • the embodiments of the present application do not limit the specific manner in which the transmitting end device performs feature decomposition on the channel information.
  • the sending end device may perform singular value decomposition (Singular Value Decomposition, SVD) on the channel information to obtain the at least one first feature vector.
  • the channel information is full channel information obtained by performing channel estimation on a reference signal, that is, unquantized channel information.
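As an illustrative sketch (not the patent's normative procedure), the feature decomposition of one subband's channel matrix via SVD can be written in a few lines of numpy; the dimensions Nr and Nt below are assumptions for illustration:

```python
import numpy as np

# Assumed dimensions: Nr receive antennas, Nt transmit antennas (illustrative).
Nr, Nt = 4, 32
rng = np.random.default_rng(0)
# Stand-in for the full (unquantized) channel estimate of one subband.
H = rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))

# SVD: H = U @ diag(s) @ Vh, with singular values s sorted in descending order.
U, s, Vh = np.linalg.svd(H, full_matrices=False)

# The dominant right singular vector is the first eigenvector of H^H @ H:
# the unit-norm precoding direction that captures the most channel energy.
w1 = Vh[0].conj()
```

Projecting the channel onto `w1` yields gain `s[0]`, the largest singular value, which is why the leading eigenvectors summarize the channel compactly.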
  • the transmitting end device obtains at least one feature vector by decomposing the full channel information obtained by channel estimation, further encodes the feature vector(s) by using a neural network to obtain a target bit stream, and sends the target bit stream to the receiving end.
  • the receiving end device decodes the target bit stream to obtain a target feature vector, and further performs precoding according to the target feature vector.
  • in this way, when transmitting channel information, the transmitting end device only needs to transmit the target bit stream obtained by encoding the eigenvectors obtained by eigendecomposition of the full channel information, which is beneficial to reducing CSI overhead.
  • in addition, encoding the feature vectors of the full channel information instead of directly compressing and encoding the full channel information takes the correlation between the feature vectors into account, which helps avoid compressing too much redundant information and improves the compression efficiency, thereby improving the encoding performance.
  • the receiving end device may perform precoding according to at least one target feature vector obtained by decoding, for example, may perform beamforming according to the at least one target feature vector.
  • the scheduling bandwidth of the transmitting end device may include multiple subcarrier groups, respectively corresponding to multiple subbands.
  • the transmitting end device may feed back the channel information on its multiple subbands respectively.
  • the transmitting end device may perform channel estimation according to the reference signal to obtain channel information on multiple subbands, and further perform feature decomposition on the channel information on the multiple subbands to obtain eigenvectors corresponding to the multiple subbands respectively.
  • the full channel information of the channel estimation includes channel information corresponding to each subcarrier group in the multiple subcarrier groups of the transmitting end device.
  • the full channel information may include channel information corresponding to each of the multiple subbands of the transmitting end device.
  • the at least one first eigenvector includes an eigenvector corresponding to each subcarrier group in the multiple subcarrier groups respectively.
  • the at least one first feature vector includes a feature vector corresponding to each of the plurality of subbands, respectively.
  • the channel information on multiple subbands is correlated; for example, the channel information on adjacent subbands shares more correlation information. By jointly compressing and encoding the eigenvectors of the multiple subbands, rather than encoding each subband separately, this correlation information can be exploited, which is beneficial to improving the compression efficiency and enhancing the feedback recovery performance.
  • FIG. 3 is a schematic system architecture diagram of a method for channel feedback according to an embodiment of the present application.
  • the transmitting end device can perform feature decomposition on the channel information of the n sub-bands to obtain the feature vectors corresponding to the n sub-bands, denoted as W_1, W_2, ..., W_n, and further input the feature vectors corresponding to the n sub-bands to the encoder.
  • the encoder uses the neural network to jointly encode the feature vectors of the n sub-bands to obtain the target bit stream, denoted as B. Further, the transmitting end device sends this target bit stream B to the receiving end device, and the decoder of the receiving end device decodes and recovers this target bit stream B to obtain n target feature vectors, denoted as W'_1, W'_2, ..., W'_n.
  • in this way, the transmitting end device performs joint compression and feedback on the eigenvectors of the channel information of multiple subbands over the scheduling bandwidth, and correspondingly, the receiving end device can decompress and recover the eigenvectors of the multiple subbands.
  • this structure is beneficial to improving the compression efficiency, thereby enhancing the compression, feedback and decompression performance of the entire system.
  • the scheduling bandwidth may be at least one BWP, or at least one carrier, or at least one frequency band, etc., which is not limited in this application.
  • the encoder in the sending end and the decoder in the receiving end may be implemented using the neural network structure in FIG. 4 .
  • the neural network includes an input layer, a hidden layer and an output layer, the input layer is used for receiving data, the hidden layer is used for processing the received data, and the processing result is generated in the output layer.
  • each node represents a processing module, which can be considered to simulate a neuron, and multiple neurons form a layer of neural network, and multiple layers of information transmission and processing construct an overall neural network.
  • the encoder of the transmitting end device may use computer vision technology to process the input at least one first feature vector, for example, use a neural network to compress and encode the at least one first feature vector as an image to be compressed, to obtain the target bit stream.
  • the decoder of the receiving end device can decode and restore the target bitstream by using the target bitstream as the information obtained by compressing the image to obtain the target image, wherein the target image includes at least one target feature vector.
  • the encoder of the sender device may use a neural network for image processing to compress and encode the at least one first feature vector
  • the neural network for image processing may be a convolutional neural network, or another neural network with good image processing performance, which is not limited in this application.
  • the decoder of the receiving end device may use a neural network for image processing to decompress the target bit stream; for example, the neural network for image processing may be a convolutional neural network, or another neural network with good image processing performance, which is not limited in this application.
  • the convolutional neural network includes an input layer, at least one convolutional layer, at least one pooling layer, a fully connected layer and an output layer.
  • the encoding of the at least one first feature vector through a neural network to obtain a target bit stream includes:
  • the at least one first eigenvector is spliced into a matrix of eigenvectors and input to the neural network;
  • the neural network encodes the feature vector matrix as an image to be compressed to obtain the target bit stream.
  • the feature vectors W_1, W_2, ..., W_n are input and spliced into the feature vector matrix W, and the feature vector matrix W is compressed and encoded as an image to be compressed through the neural network to obtain the target bit stream B.
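A minimal numpy sketch of this splicing step, with illustrative dimensions (n subbands, Nt-dimensional complex eigenvectors W_1, ..., W_n; both sizes are assumptions):

```python
import numpy as np

# Assumed sizes: n subbands, Nt transmit antennas (both illustrative).
n, Nt = 13, 32
rng = np.random.default_rng(1)
W_list = [rng.standard_normal(Nt) + 1j * rng.standard_normal(Nt) for _ in range(n)]

# Splice the per-subband eigenvectors into an Nt x n complex matrix W,
# one eigenvector per column.
W = np.stack(W_list, axis=1)

# Neural networks operate on real tensors, so one common convention is to
# split real and imaginary parts into two "image channels": shape (2, Nt, n).
W_img = np.stack([W.real, W.imag], axis=0)
```

The two-channel tensor `W_img` can then be fed to the encoder exactly like a 2-channel image.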
  • the target bit stream is decoded by the neural network to obtain at least one target feature vector, including:
  • the decoder at the receiving end takes the received target bitstream B as information obtained by compressing the image, and decompresses the target bitstream B to obtain a target image including at least one target feature vector.
  • the encoder in the transmitting end device and the decoder in the receiving end device may be implemented using the network structures shown in FIG. 5 and FIG. 6 , respectively.
  • the encoder in the transmitting end device may include: a feature extraction module, configured to receive an input eigenvector matrix W, perform feature extraction on the eigenvector matrix W, and obtain the feature map corresponding to the eigenvector matrix W.
  • the feature extraction module performs feature extraction on the feature vector matrix W through convolution kernels of different sizes to obtain feature maps with different receptive fields of the feature vector matrix W, which increases the nonlinearity in the convolution process and improves the expressive power of the convolutional neural network.
  • the feature extraction module may include 3 ⁇ 3 convolutional layers, 5 ⁇ 5 convolutional layers and 7 ⁇ 7 convolutional layers.
  • the 3x3 convolutional layer uses a 3x3 convolution kernel
  • the 5x5 convolutional layer uses a 5x5 convolution kernel
  • the 7x7 convolutional layer uses a 7x7 convolution kernel.
  • of course, convolution kernels of other sizes may also be used instead, which is not limited in this application.
  • the plurality of feature maps output by the feature extraction module are input to the splicing module to merge the feature maps, for example, the splicing module realizes the splicing of the channel dimensions of the plurality of feature maps.
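The multi-branch extraction and channel-dimension splicing can be sketched as follows; the naive convolution below is only a stand-in for the network's learned convolutional layers, and all sizes are illustrative assumptions:

```python
import numpy as np

def conv2d_same(x, k):
    """Naive 2-D cross-correlation (the 'convolution' used in CNNs) with
    zero padding so the output has the same spatial size as the input."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros(x.shape, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

rng = np.random.default_rng(2)
x = rng.standard_normal((32, 13))  # one input channel of the eigenvector "image"

# Parallel branches with 3x3, 5x5 and 7x7 kernels yield feature maps with
# different receptive fields but identical spatial size...
maps = [conv2d_same(x, rng.standard_normal((s, s))) for s in (3, 5, 7)]

# ...so the splicing module can concatenate them along the channel dimension.
stacked = np.stack(maps, axis=0)  # shape (3, 32, 13)
```

Because each branch preserves the spatial size, concatenation along the channel axis is always well defined regardless of kernel size.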
  • the splicing module can be implemented by a 1×1 convolutional layer, where the 1×1 convolutional layer uses a 1×1 convolution kernel, and in the 1×1 convolutional layer, the number of channels can be controlled by controlling the number of convolution kernels.
  • compared with a fully connected layer, the convolution process using a 1×1 convolution kernel adds a nonlinear activation function, which is beneficial to further increasing the nonlinearity of the neural network and enables the neural network to express more complex features.
  • the feature map output from the splicing module is processed by the fully connected layer and the quantization layer and converted into the target bitstream B.
  • the convolutional neural network structure in Figure 5 is only an example. In practical applications, it can be flexibly designed according to the number of subbands, encoding and decoding performance requirements, and so on, for example, by adding an activation layer or a normalization layer between the convolutional layers; the present application is not limited to such network layers.
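A hedged sketch of the multi-field-of-view feature extraction described above: each kernel size yields one feature map over the same spatial grid ("same" padding), and the maps are then stacked along the channel dimension for splicing. The kernel values and matrix size are illustrative assumptions, not from the application.

```python
import numpy as np

def conv2d_same(x, kernel):
    """'Same'-padded 2-D convolution (single channel in/out), loop-based for clarity."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    h, w = x.shape
    out = np.empty_like(x, dtype=float)
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * kernel)
    return out

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))  # toy eigenvector matrix treated as an image
# 3x3, 5x5 and 7x7 kernels give three fields of view over the same 8x8 grid.
feature_maps = [conv2d_same(W, rng.standard_normal((k, k))) for k in (3, 5, 7)]
stacked = np.stack(feature_maps)  # channel-wise splicing of the feature maps
print(stacked.shape)  # (3, 8, 8)
```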
  • the decoder may include: a fully connected layer, a dimension adjustment module and a residual module, wherein the target bitstream B received from the transmitting end device is first input to the fully connected layer and the dimension adjustment module, converted into the dimensions of the feature vector matrix W, and further input to the residual module, which outputs the feature vector matrix W′ composed of the at least one target feature vector.
  • the residual module may include a convolutional layer, an L-order common module, a convolutional layer and a summation module, where L is a positive integer.
  • the output of the dimension adjustment module is sampled and copied: one copy is input to the summation module, and the other is input to a convolutional layer whose convolution kernels enlarge the number of channels; feature information is then further extracted through the L common modules.
  • the output of the L common modules then has its number of channels reduced through a convolutional layer, and the output of that convolutional layer is summed in the summation module with the sampled copy of the dimension adjustment module's output to obtain the feature vector matrix W′.
  • the embodiments of the present application do not specifically limit the number and structural composition of the common modules.
  • the common modules may adopt the structure shown in FIG. 7 , but the present application is not limited thereto.
  • the convolutional neural network structure in FIG. 6 is only an example. In practical applications, it can be flexibly configured according to information such as the number of subbands, encoding and decoding performance requirements, and the present application is not limited thereto.
  • the model parameters of the neural networks of the encoder and the decoder are obtained by joint training. For example, the model parameters of the encoder and decoder networks are first initialized; multiple sets of feature vector matrix samples are input to the encoder's neural network and encoded to obtain multiple target bitstreams; the multiple target bitstreams are then input to the decoding end for decoding; and the model parameters of the encoder and decoder are adjusted according to the decoding results until the feature vectors output by the decoder and the feature vectors input to the encoder's neural network satisfy the convergence condition.
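As an illustration of such joint training (not the application's actual networks), a toy linear encoder/decoder pair can be trained end-to-end on reconstruction error: both parameter sets are initialized, samples are encoded and decoded, and both are updated from the same loss until the reconstruction converges. The dimensions, learning rate and data below are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, code = 8, 3                              # eigenvector dimension, compressed code size
We = rng.standard_normal((code, dim)) * 0.1   # encoder parameters (initialized)
Wd = rng.standard_normal((dim, code)) * 0.1   # decoder parameters (initialized)
X = rng.standard_normal((200, dim)) @ rng.standard_normal((dim, dim)) * 0.5  # training samples

lr = 0.01
for step in range(2000):
    Z = X @ We.T            # encoding end: compress each sample
    Xh = Z @ Wd.T           # decoding end: reconstruct
    err = Xh - X
    loss = np.mean(err ** 2)
    # joint gradient step: both encoder and decoder parameters are adjusted
    gWd = 2 * err.T @ Z / len(X)
    gWe = 2 * (err @ Wd).T @ X / len(X)
    Wd -= lr * gWd
    We -= lr * gWe
print(round(float(loss), 4))
```

After training, the reconstruction error is well below the error of a zero reconstruction, which is the "convergence condition" in miniature.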
  • the transmitting end device uses the convolutional neural network to compress and encode the feature vectors corresponding to the channel information of the multiple subbands as images to obtain the target bit stream.
  • the receiving end device decompresses and restores the target bit stream to the target image through the convolutional neural network, thereby obtaining at least one target feature vector.
  • encoding the feature vector of the full channel information is beneficial to avoid compressing excessive redundant information and reduce the CSI feedback overhead.
  • joint compression feedback is performed according to the cross-correlation information between the feature vectors of multiple subbands in the frequency domain, which is beneficial to improve the compression feedback performance.
  • the encoder of the transmitting end device may use a recurrent neural network (RNN) to process the input at least one first feature vector, for example, using the neural network to treat the at least one first feature vector as a sequence.
  • the decoder of the receiving end device can use the recurrent neural network to decompress and recover the input target bit stream, treated as information obtained by compressing and encoding a sequence, to obtain a target sequence consisting of at least one target feature vector.
  • the neural network includes an input layer, a hidden layer, and an output layer.
  • the output is controlled by an activation function, and the layers are connected by weights.
  • the activation function is predetermined, and what the neural network model learns through training is contained in the weights.
  • a basic neural network only establishes weight connections between layers; the biggest difference between an RNN and a basic neural network is that weight connections are also established between the neurons within a layer, across time steps.
  • Figure 8 is a typical RNN structure diagram. Each arrow in Figure 8 represents a transformation, that is to say, the arrow connections have weights.
  • the left part of Figure 8 is the folded RNN structure and the right part is the unrolled RNN structure; the arrow next to h in the left part indicates that the "recurrence" in this structure occurs in the hidden layer.
  • RNN can have good performance in processing sequence data, that is, RNN is a recursive neural network that performs recursion in the evolution direction of the sequence and all nodes (recurrent units) are connected in a chain.
  • Long Short-Term Memory (LSTM) is an evolved RNN. Different from the typical RNN structure, LSTM introduces the concept of a cell state. Unlike an RNN, which only considers the most recent state, the cell state of an LSTM determines which states should be kept and which should be forgotten, overcoming the shortcomings of traditional RNNs in long-term memory.
  • the at least one first feature vector may be processed through a basic RNN to obtain the target bit stream B, or the at least one first feature vector may also be processed through an LSTM to obtain the target bit stream B.
  • the encoding of the at least one first feature vector through a neural network to obtain a target bit stream includes:
  • the target bit stream is obtained by encoding each feature vector as an element of a sequence through the recurrent neural network.
  • each feature vector in the at least one first feature vector is sequentially input to the LSTM, and the LSTM encodes each feature vector as an element of a sequence to obtain the target bitstream.
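A minimal numpy sketch of feeding the per-subband feature vectors into an LSTM one sequence element at a time. The gate equations are the standard LSTM ones; the sizes, random weights and the crude 1-bit quantization of the final hidden state are illustrative assumptions, not taken from the application.

```python
import numpy as np

def lstm_encode(seq, params):
    """Run a single-layer LSTM over a sequence of per-subband feature vectors
    and return the final hidden state (used here as the code to be quantized)."""
    Wx, Wh, b = params              # stacked gate weights: input, hidden, bias
    hdim = Wh.shape[1]
    h = np.zeros(hdim)
    c = np.zeros(hdim)              # cell state: carries long-term information
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    for x in seq:                   # each feature vector is one element of the sequence
        gates = Wx @ x + Wh @ h + b
        i, f, o, g = np.split(gates, 4)
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)   # forget old / write new states
        h = sigmoid(o) * np.tanh(c)
    return h

rng = np.random.default_rng(0)
fdim, hdim, n_subbands = 8, 6, 4
params = (rng.standard_normal((4 * hdim, fdim)) * 0.3,
          rng.standard_normal((4 * hdim, hdim)) * 0.3,
          np.zeros(4 * hdim))
subband_vectors = [rng.standard_normal(fdim) for _ in range(n_subbands)]
h = lstm_encode(subband_vectors, params)
bits = (h > 0).astype(int)          # crude 1-bit quantization into a bitstream
print(bits)
```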
  • the target bit stream is decoded by the neural network to obtain at least one target feature vector, including:
  • the target bit stream is decoded by using the cyclic neural network as information obtained by encoding the sequence to obtain a target sequence, where the target sequence includes the at least one target feature vector.
  • the decoder at the receiving end takes the received target bit stream B as information obtained by compressing the sequence, and decompresses the target bit stream B to obtain a target sequence including at least one target feature vector.
  • FIG. 9 and FIG. 10 are only examples, and in practical applications, they can be flexibly configured according to information such as the number of subbands, encoding and decoding performance requirements, and the present application is not limited thereto.
  • the encoder may include: an LSTM module for sequentially receiving each feature vector in the at least one first feature vector, and using each feature vector as an element of the sequence to be processed.
  • the encoder may include a fully connected layer and a quantization layer for converting the result processed by the LSTM module to obtain the target bitstream B.
  • the decoder may include a first fully connected layer, a plurality of LSTM modules, and a fully connected layer connected to each of the plurality of LSTM modules, wherein the first fully connected layer is used to perform dimension transformation on the target bitstream B.
  • the output of the first fully connected layer is used as the input of the first LSTM module, and the output of each LSTM module is used as the input of the next LSTM module.
  • the transmitting end device uses the cyclic neural network to compress the feature vectors of multiple subbands as elements of the sequence to obtain the target bit stream.
  • the receiving end device decompresses and restores the target bit stream to elements of the sequence through the cyclic neural network, thereby obtaining at least one target feature vector.
  • encoding the feature vector of the full channel information is beneficial to avoid compressing excessive redundant information and reduce the CSI feedback overhead.
  • joint compression feedback is performed according to the cross-correlation information between the feature vectors of multiple subbands in the frequency domain, which is beneficial to improve the compression feedback performance.
  • the encoding of the at least one first feature vector through a neural network to obtain a target bit stream includes:
  • the at least one first feature vector is encoded based on an attention mechanism to obtain the target bitstream.
  • the attention mechanism may be a self-attention mechanism, or may also be another attention mechanism, which is not limited in this application.
  • the self-attention mechanism adopts a "query-key-value" model.
  • Step 1: calculate the similarity between the query and each key to obtain a weight; commonly used similarity functions include the dot product, splicing, the perceptron, etc.;
  • Step 2: use a softmax function to normalize these weights;
  • Step 3: the weights and the corresponding values are weighted and summed to obtain the final attention.
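These query-key-value steps can be sketched in a small numpy example (the orthogonal keys and random values are invented purely so the result is easy to check):

```python
import numpy as np

def attention(query, keys, values):
    """'Query-key-value' attention: dot-product similarity between the query and
    each key, softmax normalization, then a weighted sum of the values."""
    scores = keys @ query                  # step 1: similarity (dot product)
    w = np.exp(scores - scores.max())
    w = w / w.sum()                        # step 2: softmax normalization
    return w @ values, w                   # step 3: weighted sum -> final attention

rng = np.random.default_rng(0)
keys = np.eye(5, 4)                        # 5 orthogonal keys in 4 dimensions
values = rng.standard_normal((5, 4))
query = 3.0 * keys[2]                      # a query aligned with key 2
out, w = attention(query, keys, values)
print(w.argmax())                          # → 2 (most weight goes to key 2)
```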
  • the transmitting end device extracts correlation features between feature vectors of multiple subbands and correlation features between elements of feature vectors based on an attention mechanism, to further improve encoding performance and improve decompression performance.
  • the transmitting end device extracts the correlation features of the elements in the at least one first feature vector based on the attention mechanism to obtain at least one second feature vector, and further compresses the at least one second feature vector to obtain the target bitstream B.
  • the device at the receiving end may decode the target bitstream based on the attention mechanism to obtain at least one target feature vector.
  • the attention mechanism may be a self-attention mechanism, or may also be another attention mechanism, which is not limited in this application.
  • the decoder first performs feature extraction on the target bitstream to obtain a first feature map of the target bitstream; it then determines, based on an attention mechanism, the weights of the elements in the first feature map of the target bitstream.
  • the first feature map of the target bitstream and the weights of its elements are dot-multiplied to obtain a second feature map of the target bitstream, and the second feature map is decompressed to obtain the at least one target feature vector.
  • FIG. 11 and FIG. 13 are only examples, and in practical applications, they can be flexibly configured according to information such as the number of subbands, codec performance requirements, and the present application is not limited thereto.
  • the encoder may include: s cascaded self-attention modules, a splicing module, a fully connected layer and a quantization layer, where s is a positive integer.
  • each self-attention module uses the self-attention mechanism to extract the correlation features of the elements in the input first feature vectors; after the s cascaded self-attention modules, multiple second feature vectors are output. Further, the multiple second feature vectors are input to the splicing module, spliced into one feature vector, and the target bit stream B is obtained after processing by the fully connected layer and the quantization layer.
  • the attention module can be implemented using the structure shown in FIG. 12 , but the present application is not limited thereto.
  • the attention module includes: n fully connected layers, a self-attention layer and n groups of two-layer fully connected layers, wherein the n fully connected layers are each used to receive a feature vector; the feature vector can be a first feature vector described above, or a feature vector output by the previous attention module. The self-attention layer is used to receive the multiple feature vectors output from the n fully connected layers and to extract the correlation features between the elements of the multiple feature vectors.
  • each output of the self-attention layer is first copied and sampled; each output is then passed through a corresponding group of two-layer fully connected layers and summed with its sampled copy to form the output of the self-attention module.
  • the decoder may include: a fully connected layer, a dimension adjustment module, a feature extraction module and a self-attention module. The target bitstream B is first input to the fully connected layer and the dimension adjustment module, converted into the dimensions of the feature vector matrix W, and further input to the feature extraction module for feature extraction to obtain a first feature map; the output of the dimension adjustment module is also sampled and copied.
  • the feature extraction module may include a convolution layer, an activation function, a q-order residual block and a t-order residual block, where q and t are positive integers.
  • the convolutional layer is used to adjust the number of channels of the output of the module.
  • the output of the convolutional layer is input, through the activation function, to the q residual blocks, and the output of the activation function is sampled and copied.
  • after the output of the activation function passes through the q residual blocks, it is divided into two paths: one passes through the t residual blocks to obtain the second feature map, and the other is input to the self-attention module to extract the attention weights of the elements in the feature vector matrix.
  • the self-attention module can be implemented by a mask block, or can also be implemented by the attention module in FIG. 11 , which is not limited in this application.
  • the output of the q residual blocks can be input to the mask block, which extracts attention masks for the elements in the feature map.
  • the output of the mask block and the output of the t residual blocks are dot-multiplied, and the dot multiplication result is added to the sampled copy of the activation function's output.
  • FIG. 14 is an exemplary structural diagram of the mask block, but the present application is not limited thereto.
  • the mask block may include m residual blocks, a down-sampling module, 2m residual blocks, an up-sampling module, m residual blocks, a convolutional layer and a sigmoid activation function; that is, the mask block can be realized with 4m residual blocks, with down-sampling performed after the m-th residual block and up-sampling performed after the 3m-th residual block.
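A hedged sketch of the mask block's shape flow only: reduce resolution, process, restore resolution, then squash to (0, 1) attention masks with a sigmoid. The residual blocks are stubbed out as identity here, and the 2×2 average-pool down-sampling and nearest-neighbor up-sampling are illustrative choices, not from the application.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mask_block(x):
    """Mask-block shape flow: down-sample, process, up-sample, sigmoid.
    x is a single-channel (H, W) feature map with even H and W."""
    h, w = x.shape
    down = x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))  # down-sampling (2x2 average)
    mid = down                                                # residual blocks stubbed as identity
    up = np.repeat(np.repeat(mid, 2, axis=0), 2, axis=1)      # up-sampling (nearest neighbour)
    return sigmoid(up)                                        # attention mask values in (0, 1)

x = np.arange(16.0).reshape(4, 4)
mask = mask_block(x)
print(mask.shape)  # (4, 4)
```

The resulting mask has the same spatial shape as the input, so it can be dot-multiplied element-wise with the feature map as described above.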
  • the residual block may include: a convolutional layer, a normalization layer, an activation function, a second convolutional layer and a summation module, wherein the input of the residual block is sampled and copied: one copy goes to the summation module, and the other passes through the convolutional layer, the normalization layer, the activation function and the second convolutional layer, whose output is sent to the summation module; the summation module adds the input of the residual block to the output of the second convolutional layer as the output of the residual block.
  • Designing a suitable residual block in the neural network is beneficial to solve the gradient problem in the neural network.
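The residual-block structure described above (convolution, normalization, activation, convolution, then summation with a copy of the input) can be sketched as follows. For brevity the convolutions are simplified to 1×1 channel mixing, and all shapes and weights are illustrative assumptions; with zero branch weights the block reduces to the identity, which is exactly the skip path that eases the gradient problem.

```python
import numpy as np

def residual_block(x, W1, W2, eps=1e-5):
    """Residual block: conv -> normalization -> activation -> conv, then the
    block input is added back (skip connection). Convolutions are simplified
    to 1x1 channel mixing; x has shape (C, H, W)."""
    c, h, w = x.shape
    skip = x                                      # sampled copy of the input
    y = W1 @ x.reshape(c, -1)                     # first convolution (1x1)
    y = (y - y.mean()) / np.sqrt(y.var() + eps)   # normalization layer
    y = np.maximum(y, 0.0)                        # activation function (ReLU)
    y = W2 @ y                                    # second convolution (1x1)
    return skip + y.reshape(c, h, w)              # summation module: skip + branch

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 3, 3))
W1 = rng.standard_normal((4, 4)) * 0.1
W2 = rng.standard_normal((4, 4)) * 0.1
out = residual_block(x, W1, W2)
print(out.shape)  # (4, 3, 3)
```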
  • the transmitting end device uses the attention mechanism to compress the feature vectors of multiple subbands as elements of the sequence to obtain the target bit stream.
  • the receiving end device uses the attention mechanism to decompress and restore the target bit stream, thereby obtaining at least one target feature vector.
  • encoding the feature vector of the full channel information is beneficial to avoid compressing excessive redundant information and reduce the CSI feedback overhead.
  • the attention mechanism is used to jointly compress the cross-correlation information among the elements of the eigenvectors of the multiple subbands, taking into account both the correlation features between the eigenvectors of the multiple subbands and the correlation features between the elements within each eigenvector, which is beneficial to improving the compression feedback performance.
  • FIG. 16 shows a schematic block diagram of a transmitting end device 400 according to an embodiment of the present application.
  • the sender device 400 includes:
  • the communication module 410 is used for receiving the reference signal sent by the receiving end device
  • the processing module 420 is configured to perform channel estimation according to the reference signal, obtain channel information between the transmitting end device and the receiving end device, and perform feature decomposition on the channel information to obtain at least one first eigenvector ;
  • an encoding module 430 configured to encode the at least one first feature vector through a neural network to obtain a target bit stream
  • the communication module 410 is further configured to: send the target bit stream to the receiving end device.
  • the channel information includes channel information corresponding to each of the multiple subcarrier groups of the transmitting end device, and the at least one first feature vector includes the eigenvectors corresponding to each of the multiple subcarrier groups.
  • the encoding module 430 is specifically configured to:
  • the at least one first eigenvector is spliced into a matrix of eigenvectors and input to the neural network;
  • the target bit stream is obtained by encoding the feature vector matrix as an image to be compressed through the neural network.
  • the neural network is a convolutional neural network.
  • the neural network is a recurrent neural network
  • the encoding module 430 is further configured to:
  • the target bit stream is obtained by encoding each feature vector as an element of a sequence through the recurrent neural network.
  • the recurrent neural network includes a long short-term memory LSTM neural network.
  • the encoding module 430 is further configured to:
  • the at least one first feature vector is encoded based on an attention mechanism to obtain the target bitstream.
  • the encoding module 430 further includes:
  • an attention module configured to extract the correlation feature between elements in the at least one first feature vector based on the attention mechanism, to obtain at least one second feature vector;
  • a feature vector compression module configured to perform feature compression on the at least one second feature vector to obtain the target bit stream.
  • the transmitting end device is a terminal device
  • the receiving end device is a network device
  • the above-mentioned communication module may be a communication interface or a transceiver, or an input/output interface of a communication chip or a system-on-chip.
  • the above-mentioned processing module and encoding module may be one or more processors.
  • the transmitting end device 400 may correspond to the transmitting end device in the method embodiments of the present application, and the above-mentioned and other operations and/or functions of each unit in the transmitting end device 400 are respectively for implementing the corresponding flow of the transmitting end device in the method 200 shown in FIG. 2 and FIG. 15, which is not repeated here for brevity.
  • FIG. 17 is a schematic block diagram of a receiving end device according to an embodiment of the present application.
  • the receiving end device 500 of FIG. 17 includes:
  • the communication module 510 is configured to receive a target bit stream sent by the transmitting end device, where the target bit stream is obtained by the transmitting end device encoding at least one first feature vector, and the at least one first feature vector is obtained by the transmitting end device performing eigendecomposition on the channel estimation result;
  • the decoding module 520 is configured to decode the target bit stream through a neural network to obtain at least one target feature vector.
  • the channel information includes channel information corresponding to each of the multiple subcarrier groups of the transmitting end device, and the at least one first feature vector includes the eigenvectors corresponding to each of the multiple subcarrier groups.
  • the decoding module 520 is configured to:
  • the neural network is a convolutional neural network.
  • the neural network is a recurrent neural network
  • the decoding module 520 is further configured to:
  • the target bit stream is decoded by using the cyclic neural network as information obtained by encoding the sequence to obtain a target sequence, where the target sequence includes the at least one target feature vector.
  • the recurrent neural network includes a long short-term memory LSTM neural network.
  • the decoding module 520 is further configured to:
  • the target bitstream is decoded based on the attention mechanism to obtain at least one target feature vector.
  • the decoding module 520 includes:
  • a feature extraction module configured to perform feature extraction on the target bitstream to obtain a first feature map
  • an attention module configured to determine the weight of the elements in the first feature map based on the attention mechanism
  • a dot product module configured to perform dot product on the first feature map and the weights of the elements in the first feature map to obtain a second feature map
  • a decompression module configured to decompress the second feature map to obtain the at least one target feature vector.
  • the receiving end device 500 further includes:
  • a processing module configured to perform precoding according to the at least one target feature vector.
  • the above-mentioned communication module may be a communication interface or a transceiver, or an input/output interface of a communication chip or a system-on-chip.
  • the above-mentioned processing module and decoding module may be one or more processors.
  • the receiving end device 500 may correspond to the receiving end device in the method embodiments of the present application, and the above-mentioned and other operations and/or functions of each unit in the receiving end device 500 are respectively for implementing the corresponding flow of the receiving end device in the method 200 shown in FIG. 2 and FIG. 15, which is not repeated here for brevity.
  • the transmitting end device obtains at least one feature vector by decomposing the full channel information obtained by channel estimation, and further encodes the feature vector by using a neural network to obtain a target bit stream, and sends the target bit stream to the receiving end.
  • the receiving end device decodes the target bit stream to obtain a target feature vector, and further performs precoding according to the target feature vector.
  • when transmitting the channel information, the transmitting end device only needs to send the target bit stream obtained by encoding the eigenvectors obtained by eigendecomposition of the full channel information, which is beneficial to reducing the CSI overhead.
  • encoding the feature vectors, instead of directly compressing and encoding the full channel information, can take into account the correlation characteristics between the channel information, which helps avoid compressing too much redundant information, improves the compression efficiency, and thus improves the encoding performance.
  • FIG. 18 is a schematic structural diagram of a communication device 600 provided by an embodiment of the present application.
  • the communication device 600 shown in FIG. 18 includes a processor 610, and the processor 610 can call and run a computer program from a memory, so as to implement the method in the embodiment of the present application.
  • the communication device 600 may further include a memory 620 .
  • the processor 610 may call and run a computer program from the memory 620 to implement the methods in the embodiments of the present application.
  • the memory 620 may be a separate device independent of the processor 610 , or may be integrated in the processor 610 .
  • the communication device 600 may further include a transceiver 630, and the processor 610 may control the transceiver 630 to communicate with other devices, specifically, may send information or data to other devices, or receive other devices Information or data sent by a device.
  • the transceiver 630 may include a transmitter and a receiver.
  • the transceiver 630 may further include antennas, and the number of the antennas may be one or more.
  • the communication device 600 may specifically be the sending end device of the embodiments of the present application, and the communication device 600 may implement the corresponding processes implemented by the sending end device in each method of the embodiments of the present application, which are not repeated here for brevity.
  • the transceiver 630 in the communication device 600 may be used to implement the related operations of the communication module 410 in the transmitting end device 400 shown in FIG. 16 , which will not be repeated here for brevity.
  • the processor 610 in the communication device 600 may be used to implement the related operations of the processing module 420 and the encoding module 430 in the transmitting end device 400 shown in FIG. 16 , which are not repeated here for brevity.
  • the communication device 600 may specifically be the receiving end device of the embodiments of the present application, and the communication device 600 may implement the corresponding processes implemented by the receiving end device in each method of the embodiments of the present application, which are not repeated here for brevity.
  • the transceiver 630 in the communication device 600 may be used to implement the related operations of the communication module 510 in the receiving end device 500 shown in FIG. 17 , which will not be repeated here for brevity.
  • the processor 610 in the communication device 600 may be used to implement the related operations of the decoding module 520 and the processing module in the receiver device 500 shown in FIG. 17 , which are not repeated here for brevity.
  • FIG. 19 is a schematic structural diagram of a chip according to an embodiment of the present application.
  • the chip 700 shown in FIG. 19 includes a processor 710, and the processor 710 can call and run a computer program from a memory, so as to implement the method in the embodiment of the present application.
  • the chip 700 may further include a memory 720 .
  • the processor 710 may call and run a computer program from the memory 720 to implement the methods in the embodiments of the present application.
  • the memory 720 may be a separate device independent of the processor 710 , or may be integrated in the processor 710 .
  • the chip 700 may further include an input interface 730 .
  • the processor 710 may control the input interface 730 to communicate with other devices or chips, and specifically, may acquire information or data sent by other devices or chips.
  • the chip 700 may further include an output interface 740 .
  • the processor 710 can control the output interface 740 to communicate with other devices or chips, and specifically, can output information or data to other devices or chips.
  • the chip can be applied to the transmitting end device in the embodiment of the present application, and the chip can implement the corresponding processes implemented by the transmitting end device in each method of the embodiment of the present application, which is not repeated here for brevity.
  • the input interface 730 and the output interface 740 in the chip 700 may be used to implement related operations of the communication module 410 in the transmitting end device 400 shown in FIG. 16 , which are not repeated here for brevity.
  • the processor 710 in the chip 700 may be used to implement the related operations of the processing module 420 and the encoding module 430 in the transmitting end device 400 shown in FIG. 16 , which are not repeated here for brevity.
  • the chip can be applied to the receiving end device in the embodiment of the present application, and the chip can implement the corresponding processes implemented by the receiving end device in each method of the embodiment of the present application, which is not repeated here for brevity.
  • the input interface 730 and the output interface 740 in the chip 700 may be used to implement the related operations of the communication module 510 in the receiving end device 500 shown in FIG. 17 , which are not repeated here for brevity.
  • the processor 710 in the chip 700 may be used to implement the related operations of the decoding module 520 and the processing module in the receiving end device 500 shown in FIG. 17 , which are not repeated here for brevity.
  • the chip mentioned in the embodiments of the present application may also be referred to as a system-level chip, a chip system, or a system-on-chip, or the like.
  • FIG. 20 is a schematic block diagram of a communication system 900 provided by an embodiment of the present application. As shown in FIG. 20 , the communication system 900 includes a transmitter device 910 and a receiver device 920 .
  • the transmitting end device 910 can be used to realize the corresponding functions realized by the transmitting end device in the above method
  • the receiving end device 920 can be used to realize the corresponding functions realized by the receiving end device in the above method.
  • the processor in this embodiment of the present application may be an integrated circuit chip, which has a signal processing capability.
  • each step of the above method embodiments may be completed by a hardware integrated logic circuit in a processor or an instruction in the form of software.
  • the above-mentioned processor can be a general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • a general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
  • the steps of the method disclosed in conjunction with the embodiments of the present application may be directly embodied as executed by a hardware decoding processor, or executed by a combination of hardware and software modules in the decoding processor.
  • the software modules may be located in random access memory, flash memory, read-only memory, programmable read-only memory or electrically erasable programmable memory, registers and other storage media mature in the art.
  • the storage medium is located in the memory, and the processor reads the information in the memory, and completes the steps of the above method in combination with its hardware.
  • the memory in this embodiment of the present application may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memory.
  • the non-volatile memory may be a read-only memory (Read-Only Memory, ROM), a programmable read-only memory (Programmable ROM, PROM), an erasable programmable read-only memory (Erasable PROM, EPROM), an electrically erasable programmable read-only memory (Electrically EPROM, EEPROM), or a flash memory.
  • Volatile memory may be Random Access Memory (RAM), which acts as an external cache.
  • the memory in the embodiments of the present application may also be a static random access memory (static RAM, SRAM), a dynamic random access memory (dynamic RAM, DRAM), a synchronous dynamic random access memory (synchronous DRAM, SDRAM), a double data rate synchronous dynamic random access memory (double data rate SDRAM, DDR SDRAM), an enhanced synchronous dynamic random access memory (enhanced SDRAM, ESDRAM), a synchlink dynamic random access memory (synchlink DRAM, SLDRAM), a direct rambus random access memory (Direct Rambus RAM, DR RAM), and so on. That is, the memory in the embodiments of the present application is intended to include, but is not limited to, these and any other suitable types of memory.
  • Embodiments of the present application further provide a computer-readable storage medium for storing a computer program.
  • the computer-readable storage medium can be applied to the sending end device in the embodiments of the present application, and the computer program enables the computer to execute the corresponding processes implemented by the sending end device in each method of the embodiments of the present application.
  • the computer-readable storage medium can be applied to the receiving-end device in the embodiments of the present application, and the computer program enables the computer to execute the corresponding processes implemented by the receiving-end device in each method of the embodiments of the present application.
  • Embodiments of the present application also provide a computer program product, including computer program instructions.
  • the computer program product can be applied to the sending end device in the embodiments of the present application, and the computer program instructions cause the computer to execute the corresponding processes implemented by the sending end device in each method of the embodiments of the present application. This will not be repeated here.
  • the computer program product can be applied to the receiving end device in the embodiments of the present application, and the computer program instructions cause the computer to execute the corresponding processes implemented by the receiving end device in the various methods of the embodiments of the present application. This will not be repeated here.
  • the embodiments of the present application also provide a computer program.
  • the computer program can be applied to the sending end device in the embodiments of the present application, and when the computer program runs on a computer, the computer is caused to execute the corresponding processes implemented by the sending end device in each method of the embodiments of the present application. For brevity, details are not repeated here.
  • the computer program can be applied to the receiving end device in the embodiments of the present application, and when the computer program runs on a computer, the computer is caused to execute the corresponding processes implemented by the receiving end device in each method of the embodiments of the present application. For brevity, details are not repeated here.
  • the disclosed system, apparatus and method may be implemented in other manners.
  • the apparatus embodiments described above are only illustrative.
  • the division of the units is only a logical function division; in actual implementation, there may be other division manners.
  • multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the shown or discussed mutual coupling, direct coupling, or communication connection may be implemented through some interfaces; the indirect coupling or communication connection between devices or units may be electrical, mechanical, or in other forms.
  • the units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing module, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the functions, if implemented in the form of software functional units and sold or used as independent products, may be stored in a computer-readable storage medium.
  • the technical solution of the present application, in essence, or the part that contributes to the prior art, or a part of the technical solution, may be embodied in the form of a software product.
  • the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the various embodiments of the present application.
  • the aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, an optical disk, or other media that can store program codes.

Abstract

A channel information feedback method, a sending end device, and a receiving end device, which can balance the feedback precision of channel information and the CSI feedback overhead. The method comprises: the sending end device receives a reference signal sent by the receiving end device; performs channel estimation according to the reference signal to obtain channel information between the sending end device and the receiving end device; performs feature decomposition on the channel information to obtain at least one first feature vector; encodes the at least one first feature vector by means of a neural network to obtain a target bit stream; and sends the target bit stream to the receiving end device.

Description

Channel Information Feedback Method, Transmitter Device and Receiver Device

Technical Field

The embodiments of the present application relate to the field of communications, and in particular, to a method for feeding back channel information, a transmitter device, and a receiver device.

Background

In the New Radio (NR) system, the channel state information reference signal (Channel State Information Reference Signal, CSI-RS) feedback design mainly uses a codebook-based scheme to extract and feed back channel features. That is, after channel estimation is performed at the transmitting end, a precoding matrix that best matches the channel estimation result is selected from a preset precoding codebook according to a specific optimization criterion, and the index information of the precoding matrix is fed back to the receiving end through the feedback link of the air interface, so that the receiving end can implement precoding. This mapping of the channel information to the channel information in the precoding codebook is a lossy quantization, which reduces the precision of the fed-back channel information and in turn degrades the precoding performance; if instead the full channel information obtained by channel estimation is fed back, the CSI feedback overhead is large. Therefore, how to balance the feedback precision of the channel information and the CSI feedback overhead is an urgent problem to be solved.
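The codebook-based feedback described above can be illustrated with a small numerical sketch: the transmitter picks, from a preset codebook, the precoding vector that best matches the estimated channel and feeds back only its index. The DFT codebook, antenna count, and received-power criterion below are illustrative assumptions, not the NR-specified codebook design.

```python
import numpy as np

# Toy illustration of codebook-based CSI feedback (assumed sizes/criterion).
rng = np.random.default_rng(1)
Nt = 4                                                      # transmit antennas (assumed)
h = rng.standard_normal(Nt) + 1j * rng.standard_normal(Nt)  # estimated channel

# Preset codebook: K candidate DFT precoding vectors (one per row), unit norm.
K = 8
n = np.arange(Nt)
codebook = np.exp(2j * np.pi * np.outer(np.arange(K), n) / K) / np.sqrt(Nt)

# Specific optimization criterion (assumed here): maximize received power |w^H h|^2.
gains = np.abs(codebook.conj() @ h) ** 2
index = int(np.argmax(gains))        # only this index is fed back: log2(K) = 3 bits

print("selected index:", index, "gain:", gains[index])
```

Because only the index is fed back, the overhead is tiny, but mapping the channel onto a finite codebook is exactly the lossy quantization the paragraph above criticizes.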
Summary of the Invention

The present application provides a channel information feedback method, a transmitter device, and a receiver device, which can balance the feedback precision of the channel information and the CSI feedback overhead.

In a first aspect, a method for channel information feedback is provided, including: a transmitting end device receives a reference signal sent by a receiving end device; performs channel estimation according to the reference signal to obtain channel information between the transmitting end device and the receiving end device; performs feature decomposition on the channel information to obtain at least one first feature vector; encodes the at least one first feature vector through a neural network to obtain a target bit stream; and sends the target bit stream to the receiving end device.

In a second aspect, a method for channel information feedback is provided, including: a receiving end device receives a target bit stream sent by a transmitting end device, where the target bit stream is obtained by the transmitting end device by encoding at least one first feature vector, and the at least one first feature vector is obtained by the transmitting end device by performing feature decomposition on a channel estimation result; and the receiving end device decodes the target bit stream through a neural network to obtain at least one target feature vector.

In a third aspect, a transmitting end device is provided, which is configured to execute the method in the above first aspect or any of its implementations.

Specifically, the transmitting end device includes functional modules for executing the method in the above first aspect or any of its implementations.

In a fourth aspect, a receiving end device is provided, which is configured to execute the method in the above second aspect or any of its implementations.

Specifically, the receiving end device includes functional modules for executing the method in the above second aspect or any of its implementations.

In a fifth aspect, a transmitting end device is provided, including a processor and a memory. The memory is used to store a computer program, and the processor is used to call and run the computer program stored in the memory to execute the method in the above first aspect or any of its implementations.

In a sixth aspect, a receiving end device is provided, including a processor and a memory. The memory is used to store a computer program, and the processor is used to call and run the computer program stored in the memory to execute the method in the above second aspect or any of its implementations.

In a seventh aspect, a chip is provided for implementing the method in any one of the above first to second aspects or any of their implementations.

Specifically, the chip includes: a processor, configured to call and run a computer program from a memory, so that a device in which the chip is installed executes the method in any one of the above first to second aspects or any of their implementations.

In an eighth aspect, a computer-readable storage medium is provided for storing a computer program, and the computer program causes a computer to execute the method in any one of the above first to second aspects or any of their implementations.

In a ninth aspect, a computer program product is provided, including computer program instructions, and the computer program instructions cause a computer to execute the method in any one of the above first to second aspects or any of their implementations.

In a tenth aspect, a computer program is provided, which, when run on a computer, causes the computer to execute the method in any one of the above first to second aspects or any of their implementations.

Through the above technical solutions, the transmitting end device performs feature decomposition on the full channel information obtained by channel estimation to obtain at least one feature vector, further encodes the feature vector using a neural network to obtain a target bit stream, and sends the target bit stream to the receiving end; correspondingly, the receiving end decodes the target bit stream to obtain a target feature vector. On the one hand, when the channel information is sent, only the target bit stream obtained by encoding the feature vectors extracted from the full channel information needs to be sent, which helps reduce the CSI overhead. On the other hand, encoding the feature vectors of the full channel information rather than the full channel information itself takes the correlation between feature vectors into account, which helps avoid compressing excessive redundant information, improving compression efficiency and encoding performance.
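As a rough numerical illustration of the pipeline summarized above (not the patented neural-network design), the sketch below eigen-decomposes an estimated channel via SVD, uses a simple uniform scalar quantizer as a stand-in for the learned encoder/decoder, and recovers the target feature vector at the receiver. The antenna counts and the 6-bit quantizer are assumptions.

```python
import numpy as np

# Illustrative sketch of the feedback pipeline; a uniform quantizer stands
# in for the neural-network encoder/decoder. All sizes are assumptions.
rng = np.random.default_rng(0)
Nr, Nt = 2, 4                        # receive / transmit antennas (assumed)
H = rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))

# Feature decomposition: the right singular vectors of H are the
# eigenvectors of H^H H; keep the dominant one as the first feature vector.
_, _, Vh = np.linalg.svd(H)
v1 = Vh[0].conj()                    # unit-norm dominant feature vector

# Stand-in for the neural-network encoder: quantize real/imag parts
# (each in [-1, 1] for a unit-norm vector) to B bits apiece.
B = 6
levels = 2**B - 1
x = np.concatenate([v1.real, v1.imag])
q = np.clip(np.round((x + 1) / 2 * levels), 0, levels).astype(int)
bits = "".join(f"{int(s):0{B}b}" for s in q)   # target bit stream

# Receiver side: decode the bit stream back into a target feature vector.
vals = [int(bits[i:i + B], 2) for i in range(0, len(bits), B)]
xr = np.array(vals) / levels * 2 - 1
v1_hat = xr[:Nt] + 1j * xr[Nt:]
v1_hat /= np.linalg.norm(v1_hat)     # re-normalize the recovered vector

print("feedback bits:", len(bits))
print("correlation with original:", abs(np.vdot(v1, v1_hat)))
```

The feedback cost here is 2·Nt·B bits for one feature vector, far less than quantizing every entry of H; a learned encoder, as in the scheme above, can exploit correlation between feature vectors to compress further.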
Description of Drawings

FIG. 1 is a schematic diagram of a communication system architecture provided by an embodiment of the present application.
FIG. 2 is a schematic interaction diagram of a method for feeding back channel information according to an embodiment of the present application.
FIG. 3 is a system architecture diagram of a method for feeding back channel information according to an embodiment of the present application.
FIG. 4 is a schematic structural diagram of a neural network.
FIG. 5 is a schematic structural diagram of an encoder according to an embodiment of the present application.
FIG. 6 is a schematic structural diagram of a decoder according to an embodiment of the present application.
FIG. 7 is a schematic structural diagram of the common module in FIG. 6.
FIG. 8 is a schematic structural diagram of a recurrent neural network.
FIG. 9 is a schematic structural diagram of an encoder according to another embodiment of the present application.
FIG. 10 is a schematic structural diagram of a decoder according to another embodiment of the present application.
FIG. 11 is a schematic structural diagram of an encoder according to yet another embodiment of the present application.
FIG. 12 is an exemplary structural diagram of the self-attention module in FIG. 11.
FIG. 13 is a schematic structural diagram of a decoder according to yet another embodiment of the present application.
FIG. 14 is a schematic structural diagram of the mask block in FIG. 13.
FIG. 15 is a schematic structural diagram of the residual block in FIG. 13.
FIG. 16 is a schematic block diagram of a transmitting end device according to an embodiment of the present application.
FIG. 17 is a schematic block diagram of a receiving end device according to an embodiment of the present application.
FIG. 18 is a schematic block diagram of a communication device according to an embodiment of the present application.
FIG. 19 is a schematic block diagram of a chip according to an embodiment of the present application.
FIG. 20 is a schematic block diagram of a communication system according to an embodiment of the present application.
Detailed Description

The technical solutions in the embodiments of the present application will be described below with reference to the accompanying drawings in the embodiments of the present application. Obviously, the described embodiments are only some of the embodiments of the present application, rather than all of them. Based on the embodiments in the present application, all other embodiments obtained by a person of ordinary skill in the art without creative effort shall fall within the protection scope of the present application.

The technical solutions of the embodiments of the present application can be applied to various communication systems, for example: a Global System of Mobile communication (GSM) system, a Code Division Multiple Access (CDMA) system, a Wideband Code Division Multiple Access (WCDMA) system, a General Packet Radio Service (GPRS) system, a Long Term Evolution (LTE) system, an Advanced long term evolution (LTE-A) system, a New Radio (NR) system, an evolution system of the NR system, an LTE-based access to unlicensed spectrum (LTE-U) system, an NR-based access to unlicensed spectrum (NR-U) system, a Non-Terrestrial Networks (NTN) system, a Universal Mobile Telecommunication System (UMTS), Wireless Local Area Networks (WLAN), Wireless Fidelity (WiFi), a fifth-generation (5th-Generation, 5G) communication system, or other communication systems.

Generally speaking, traditional communication systems support a limited number of connections and are easy to implement. However, with the development of communication technology, mobile communication systems will not only support traditional communication, but will also support, for example, Device to Device (D2D) communication, Machine to Machine (M2M) communication, Machine Type Communication (MTC), Vehicle to Vehicle (V2V) communication, or Vehicle to everything (V2X) communication. The embodiments of the present application can also be applied to these communication systems.

Optionally, the communication system in the embodiments of the present application may be applied to a carrier aggregation (Carrier Aggregation, CA) scenario, a dual connectivity (Dual Connectivity, DC) scenario, or a standalone (Standalone, SA) network deployment scenario.

Optionally, the communication system in the embodiments of the present application may be applied to an unlicensed spectrum, where the unlicensed spectrum may also be regarded as a shared spectrum; or, the communication system in the embodiments of the present application may also be applied to a licensed spectrum, where the licensed spectrum may also be regarded as a non-shared spectrum.
The embodiments of the present application are described in conjunction with network devices and terminal devices, where a terminal device may also be referred to as user equipment (User Equipment, UE), an access terminal, a subscriber unit, a subscriber station, a mobile station, a mobile terminal, a remote station, a remote terminal, a mobile device, a user terminal, a terminal, a wireless communication device, a user agent, or a user apparatus.

The terminal device may be a station (STATION, ST) in a WLAN, and may be a cellular phone, a cordless phone, a Session Initiation Protocol (SIP) phone, a Wireless Local Loop (WLL) station, a Personal Digital Assistant (PDA) device, a handheld device with a wireless communication function, a computing device or another processing device connected to a wireless modem, an in-vehicle device, a wearable device, a terminal device in a next-generation communication system such as an NR network, or a terminal device in a future evolved Public Land Mobile Network (PLMN), and so on.

In the embodiments of the present application, the terminal device may be deployed on land, including indoors or outdoors, handheld, wearable, or vehicle-mounted; it may also be deployed on water (such as on a ship); and it may also be deployed in the air (such as on an airplane, a balloon, or a satellite).

In the embodiments of the present application, the terminal device may be a mobile phone, a tablet computer (Pad), a computer with a wireless transceiver function, a Virtual Reality (VR) terminal device, an Augmented Reality (AR) terminal device, a wireless terminal device in industrial control, a wireless terminal device in self-driving, a wireless terminal device in remote medical, a wireless terminal device in a smart grid, a wireless terminal device in transportation safety, a wireless terminal device in a smart city, a wireless terminal device in a smart home, or the like.

As an example and not a limitation, in the embodiments of the present application, the terminal device may also be a wearable device. A wearable device, also referred to as a wearable smart device, is a general term for wearable devices developed by applying wearable technology to the intelligent design of daily wear, such as glasses, gloves, watches, clothing, and shoes. A wearable device is a portable device that is worn directly on the body or integrated into the user's clothing or accessories. A wearable device is not only a hardware device, but also realizes powerful functions through software support, data interaction, and cloud interaction. In a broad sense, wearable smart devices include devices that are full-featured and large-sized and can realize complete or partial functions without relying on a smartphone, such as smart watches or smart glasses, as well as devices that focus on only one type of application function and need to be used in cooperation with other devices such as smartphones, for example various smart bracelets and smart jewelry for monitoring physical signs.

In the embodiments of the present application, the network device may be a device for communicating with a mobile device. The network device may be an Access Point (AP) in a WLAN, a Base Transceiver Station (BTS) in GSM or CDMA, a base station (NodeB, NB) in WCDMA, an evolved base station (Evolutional Node B, eNB or eNodeB) in LTE, a relay station or access point, an in-vehicle device, a wearable device, a network device (gNB) in an NR network, a network device in a future evolved PLMN, a network device in an NTN network, or the like.

As an example and not a limitation, in the embodiments of the present application, the network device may have a mobile characteristic; for example, the network device may be a mobile device. Optionally, the network device may be a satellite or a balloon station. For example, the satellite may be a low earth orbit (LEO) satellite, a medium earth orbit (MEO) satellite, a geostationary earth orbit (GEO) satellite, a High Elliptical Orbit (HEO) satellite, or the like. Optionally, the network device may also be a base station set on land, on water, or in another location.

In the embodiments of the present application, the network device may provide services for a cell, and the terminal device communicates with the network device through transmission resources (for example, frequency domain resources, or spectrum resources) used by the cell. The cell may be a cell corresponding to a network device (for example, a base station); the cell may belong to a macro base station, or to a base station corresponding to a small cell. The small cells here may include: metro cells, micro cells, pico cells, femto cells, and so on. These small cells have the characteristics of small coverage and low transmission power, and are suitable for providing high-rate data transmission services.
示例性的,本申请实施例应用的通信系统100如图1所示。该通信系统100可以包括网络设备110,网络设备110可以是与终端设备120(或称为通信终端、终端)通信的设备。网络设备110可以为特定的地理区域提供通信覆盖,并且可以与位于该覆盖区域内的终端设备进行通信。Exemplarily, a communication system 100 to which this embodiment of the present application is applied is shown in FIG. 1 . The communication system 100 may include a network device 110, and the network device 110 may be a device that communicates with a terminal device 120 (or referred to as a communication terminal, a terminal). The network device 110 may provide communication coverage for a particular geographic area, and may communicate with terminal devices located within the coverage area.
图1示例性地示出了一个网络设备和两个终端设备,可选地,该通信系统100可以包括多个网络设备并且每个网络设备的覆盖范围内可以包括其它数量的终端设备,本申请实施例对此不做限定。FIG. 1 exemplarily shows one network device and two terminal devices. Optionally, the communication system 100 may include multiple network devices and the coverage of each network device may include other numbers of terminal devices. This application The embodiment does not limit this.
可选地,该通信系统100还可以包括网络控制器、移动管理实体等其他网络实体,本申请实施例对此不作限定。Optionally, the communication system 100 may further include other network entities such as a network controller and a mobility management entity, which are not limited in this embodiment of the present application.
应理解,本申请实施例中网络/系统中具有通信功能的设备可称为通信设备。以图1示出的通信系统100为例,通信设备可包括具有通信功能的网络设备110和终端设备120,网络设备110和终端设备120可以为上文所述的具体设备,此处不再赘述;通信设备还可包括通信系统100中的其他设备,例如网络控制器、移动管理实体等其他网络实体,本申请实施例中对此不做限定。It should be understood that, in the embodiments of the present application, a device having a communication function in the network/system may be referred to as a communication device. Taking the communication system 100 shown in FIG. 1 as an example, the communication device may include a network device 110 and a terminal device 120 with a communication function, and the network device 110 and the terminal device 120 may be the specific devices described above, which will not be repeated here. ; The communication device may also include other devices in the communication system 100, such as other network entities such as a network controller, a mobility management entity, etc., which are not limited in this embodiment of the present application.
It should be understood that the terms "system" and "network" are often used interchangeably herein. The term "and/or" herein merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean that A exists alone, that both A and B exist, or that B exists alone. In addition, the character "/" herein generally indicates an "or" relationship between the associated objects before and after it.
It should be understood that the "indication" mentioned in the embodiments of the present application may be a direct indication, an indirect indication, or an indication that an association relationship exists. For example, "A indicates B" may mean that A directly indicates B (e.g., B can be obtained from A); that A indirectly indicates B (e.g., A indicates C, and B can be obtained from C); or that there is an association relationship between A and B.
In the description of the embodiments of the present application, the term "corresponding" may indicate a direct or indirect correspondence between two objects, an association relationship between them, or relationships such as indicating and being indicated, or configuring and being configured.
In the embodiments of the present application, "predefined" may be implemented by pre-storing, in devices (including, for example, terminal devices and network devices), corresponding codes, tables, or other means that can indicate relevant information; the present application does not limit the specific implementation. For example, "predefined" may refer to what is defined in a protocol.
In the embodiments of the present application, the "protocol" may refer to a standard protocol in the communication field, which may include, for example, the LTE protocol, the NR protocol, and related protocols applied in future communication systems; this is not limited in the present application.
To facilitate understanding of the technical solutions of the embodiments of the present application, the technical solutions of the present application are described in detail below through specific embodiments. The following related technologies, as optional solutions, may be combined arbitrarily with the technical solutions of the embodiments of the present application, and all such combinations fall within the protection scope of the embodiments of the present application. The embodiments of the present application include at least part of the following contents.
In the NR system, channel information is fed back using a codebook-based feedback scheme: the optimal channel-information eigenvector is selected from a precoding codebook according to the result of channel estimation. Since the precoding codebook itself is finite, the mapping from the channel estimation result to a channel in the precoding codebook is a lossy quantization. This reduces the accuracy of the fed-back channel information and thus degrades precoding performance. If the full channel information is fed back instead, the CSI feedback overhead increases. Therefore, how to balance channel-information feedback accuracy against CSI feedback overhead is a problem that urgently needs to be solved.
FIG. 2 is a schematic interaction diagram of a method 200 for channel information feedback according to an embodiment of the present application. As shown in FIG. 2, the method 200 includes at least part of the following contents:
S201: The transmitting end device receives a reference signal sent by the receiving end device.
S202: The transmitting end device performs channel estimation according to the reference signal to obtain channel information between the transmitting end device and the receiving end device.
S203: The transmitting end device performs eigendecomposition on the channel information to obtain at least one first feature vector.
S204: The transmitting end device encodes the at least one first feature vector through a neural network to obtain a target bit stream.
S205: The transmitting end device sends the target bit stream to the receiving end device.
S206: The receiving end device decodes the target bit stream to obtain at least one target feature vector.
In some embodiments, the transmitting end device is a terminal device, and the receiving end device is a network device.
In other embodiments, the transmitting end device is a network device, and the receiving end device is a terminal device.
In still other embodiments, the transmitting end device is one terminal device, and the receiving end device is another terminal device.
In still other embodiments, the transmitting end device is a network device, and the receiving end device is another network device.
It should be understood that the reference signal varies with the transmitting end device and the receiving end device. For example, if the transmitting end device is a terminal device and the receiving end device is a network device, the reference signal may be a demodulation reference signal (Demodulation Reference Signal, DMRS).
In the embodiments of the present application, an encoder is deployed in the transmitting end device and a decoder is deployed in the receiving end device; both the encoder in the transmitting end device and the decoder in the receiving end device may be implemented by neural networks.
It should be noted that the embodiments of the present application do not limit the specific manner in which the transmitting end device performs eigendecomposition on the channel information. As an example, the transmitting end device may perform singular value decomposition (Singular Value Decomposition, SVD) on the channel information to obtain the at least one first feature vector.
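As an illustration of this SVD step, the sketch below extracts the dominant right singular vector of an estimated channel matrix as the first feature vector. This is a hypothetical numpy example: the antenna dimensions and the random channel matrix H are invented for illustration and are not part of the patent's scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical downlink channel matrix H: 4 receive antennas x 32 transmit antennas
H = rng.standard_normal((4, 32)) + 1j * rng.standard_normal((4, 32))

# SVD: H = U @ diag(S) @ Vh, where row i of Vh is the conjugate transpose
# of the i-th right singular vector of H
U, S, Vh = np.linalg.svd(H, full_matrices=False)

# The first feature vector is the right singular vector for the largest
# singular value; it is also the dominant eigenvector of H^H @ H
w1 = Vh[0].conj()

assert np.isclose(np.linalg.norm(w1), 1.0)       # unit-norm feature vector
assert np.isclose(np.linalg.norm(H @ w1), S[0])  # its precoding gain equals s_1

# s_1 is the maximum of ||H w|| over unit-norm w, so w1 beats a random beam
w_rand = rng.standard_normal(32) + 1j * rng.standard_normal(32)
w_rand /= np.linalg.norm(w_rand)
assert np.linalg.norm(H @ w_rand) <= S[0] + 1e-9
```

The last assertion shows why this vector is worth feeding back: it is the unit-norm precoding direction that maximizes the received signal power for this channel.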
In some embodiments of the present application, the channel information is the full channel information obtained by performing channel estimation on the reference signal, that is, unquantized channel information.
Therefore, in the embodiments of the present application, the transmitting end device obtains at least one feature vector by performing eigendecomposition on the full channel information obtained by channel estimation, then encodes the feature vector(s) using a neural network to obtain a target bit stream, and sends the target bit stream to the receiving end device. Correspondingly, the receiving end device decodes the target bit stream to obtain the target feature vector(s) and further performs precoding according to the target feature vector(s).
Based on the channel information feedback scheme of the embodiments of the present application, on the one hand, when feeding back channel information, the transmitting end device only needs to send the target bit stream obtained by encoding the feature vectors produced by eigendecomposition of the full channel information, which helps reduce CSI overhead. On the other hand, encoding the feature vectors of the full channel information, rather than directly compressing and encoding the full channel information itself, takes the correlation within the channel information into account, which helps avoid compressing excessive redundant information, improves compression efficiency, and thus improves encoding performance.
In some embodiments of the present application, the receiving end device may perform precoding according to the at least one target feature vector obtained by decoding; for example, it may perform beamforming according to the at least one target feature vector.
In some scenarios, the scheduling bandwidth of the transmitting end device may include multiple subcarrier groups corresponding respectively to multiple subbands. In some embodiments of the present application, the transmitting end device may feed back the channel information on each of its multiple subbands.
In some embodiments, the transmitting end device may perform channel estimation according to the reference signal to obtain the channel information on multiple subbands, and then perform eigendecomposition on the channel information on the multiple subbands to obtain the feature vectors corresponding to the multiple subbands respectively.
That is, the full channel information obtained by channel estimation includes the channel information corresponding to each of the multiple subcarrier groups of the transmitting end device. In other words, the full channel information may include the channel information corresponding to each of the multiple subbands of the transmitting end device.
Correspondingly, the at least one first feature vector includes the feature vector corresponding to each of the multiple subcarrier groups; that is, the at least one first feature vector includes the feature vector corresponding to each of the multiple subbands.
Since the feature vectors of the channel information on multiple subbands are correlated (for example, the channel information on adjacent subbands carries considerable correlation), jointly compressing and encoding the feature vectors of the multiple subbands takes this correlation into account, which helps improve compression efficiency and enhance feedback and recovery performance.
FIG. 3 is a schematic system architecture diagram of a method for channel feedback according to an embodiment of the present application.
The transmitting end device may perform eigendecomposition on the channel information of n subbands to obtain the feature vectors corresponding to the n subbands, denoted W_1, W_2, ..., W_n, and then input these feature vectors into the encoder. The encoder uses a neural network to jointly encode the feature vectors of the n subbands, obtaining a target bit stream denoted B. The transmitting end device then sends the target bit stream B to the receiving end device, and the decoder of the receiving end device decodes and recovers the target bit stream B to obtain n target feature vectors, denoted W`_1, W`_2, ..., W`_n.
Therefore, in the embodiments of the present application, the transmitting end device jointly compresses and feeds back the feature vectors of the channel information of multiple subbands within the scheduling bandwidth, and correspondingly, the receiving end device decompresses and reconstructs the feature vectors of the multiple subbands. By exploiting the correlation among the feature vectors, this helps improve compression efficiency, thereby enhancing the compression, feedback, and decompression performance of the whole system.
Optionally, in some embodiments, the scheduling bandwidth may be at least one BWP, at least one carrier, at least one frequency band, or the like, which is not limited in the present application.
It should be understood that the embodiments of the present application do not specifically limit the implementations of the encoder in the transmitting end device and the decoder in the receiving end device. These implementations are described below with reference to specific embodiments.
In some embodiments of the present application, the encoder in the transmitting end and the decoder in the receiving end may be implemented using the neural network structure in FIG. 4, which includes an input layer, a hidden layer, and an output layer: the input layer receives data, the hidden layer processes the received data, and the processing result is produced at the output layer. In this neural network, each node represents a processing module and can be regarded as simulating a neuron; multiple neurons form one layer of the network, and multi-layer information transfer and processing construct the overall neural network.
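A minimal sketch of this input -> hidden -> output structure is given below. It is an illustrative numpy example only: the layer sizes, the tanh activation, and the random weights are assumptions, not the networks of FIG. 4.

```python
import numpy as np

def forward(x, layers):
    """Forward pass through an input -> hidden -> output network: each layer
    multiplies by a weight matrix, adds a bias, and applies a nonlinearity
    (the per-node 'neuron' computation described above)."""
    for W, b in layers:
        x = np.tanh(W @ x + b)
    return x

rng = np.random.default_rng(1)
layers = [
    (rng.standard_normal((16, 8)), np.zeros(16)),  # input layer -> hidden layer
    (rng.standard_normal((4, 16)), np.zeros(4)),   # hidden layer -> output layer
]

y = forward(rng.standard_normal(8), layers)
assert y.shape == (4,)
assert np.all(np.abs(y) <= 1.0)  # tanh keeps every output in [-1, 1]
```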
In some embodiments of the present application, the encoder of the transmitting end device may process the input at least one first feature vector using computer vision techniques; for example, the neural network compresses and encodes the at least one first feature vector as an image to be compressed, obtaining the target bit stream.
Correspondingly, the decoder of the receiving end device may treat the target bit stream as information obtained by compressing an image, and decode and recover the target bit stream to obtain a target image, where the target image includes the at least one target feature vector.
In some embodiments, the encoder of the transmitting end device may use a neural network for image processing to compress and encode the at least one first feature vector. For example, the neural network for image processing may be a convolutional neural network, or another neural network with good image processing performance, which is not limited in the present application.
In some embodiments, the decoder of the receiving end device may use a neural network for image processing to decompress the target bit stream. For example, the neural network for image processing may be a convolutional neural network, or another neural network with good image processing performance, which is not limited in the present application.
Optionally, the convolutional neural network includes an input layer, at least one convolutional layer, at least one pooling layer, a fully connected layer, and an output layer. Compared with the neural network architecture in FIG. 4, introducing convolutional and pooling layers effectively controls the explosion of network parameters, limits the number of parameters, facilitates exploiting local structure, and improves the robustness of the algorithm.
In some embodiments, encoding the at least one first feature vector through the neural network to obtain the target bit stream includes:
concatenating the at least one first feature vector into a feature vector matrix and inputting it into the neural network; and
encoding the feature vector matrix as an image to be compressed through the neural network to obtain the target bit stream.
For example, the feature vectors W_1, W_2, ..., W_n corresponding to the n subbands are concatenated into a feature vector matrix W = [W_1, W_2, ..., W_n]^T, which is input into the neural network as the image to be compressed; the neural network compresses and encodes the feature vector matrix W as the image to be compressed, obtaining the target bit stream B.
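The concatenation step can be sketched as follows. This is a hypothetical numpy example: the subband count, antenna count, and the convention of splitting complex values into real/imaginary planes are illustrative assumptions, not details fixed by the patent.

```python
import numpy as np

rng = np.random.default_rng(2)
n_subbands, n_tx = 12, 32

# Hypothetical per-subband feature vectors W_1 ... W_n (complex, unit-norm)
W = rng.standard_normal((n_subbands, n_tx)) + 1j * rng.standard_normal((n_subbands, n_tx))
W /= np.linalg.norm(W, axis=1, keepdims=True)

# Stack into the feature vector matrix W = [W_1, ..., W_n]^T and split the
# complex entries into real/imaginary planes, giving a 2-channel "image"
# of height n_subbands and width n_tx for a convolutional encoder
image = np.stack([W.real, W.imag])

assert image.shape == (2, n_subbands, n_tx)
assert np.allclose(np.linalg.norm(W, axis=1), 1.0)
```

Treating the subband axis as the image height is what lets a 2-D convolution see the correlation between adjacent subbands discussed above.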
Correspondingly, at the decoding end, decoding the target bit stream through the neural network to obtain the at least one target feature vector includes:
inputting the target bit stream into the neural network; and
decoding the target bit stream through the neural network, treating the target bit stream as information obtained by encoding an image, to obtain a target image, where the target image includes the at least one target feature vector.
That is, the decoder at the receiving end treats the received target bit stream B as information obtained by compressing an image, and decompresses the target bit stream B to obtain a target image that includes the at least one target feature vector.
In some embodiments, the encoder in the transmitting end device and the decoder in the receiving end device may be implemented using the network structures in FIG. 5 and FIG. 6 respectively.
In some embodiments, the encoder in the transmitting end device may include a feature extraction module, configured to receive the input feature vector matrix W and perform feature extraction on it to obtain the feature maps corresponding to the feature vector matrix W.
In some embodiments, the feature extraction module performs feature extraction on the feature vector matrix W through convolution kernels of different sizes to obtain feature maps of the feature vector matrix W with different receptive fields, which increases the nonlinearity of the convolution process and improves the expressive power of the convolutional neural network.
As an example, as shown in FIG. 5, the feature extraction module may include a 3×3 convolutional layer, a 5×5 convolutional layer, and a 7×7 convolutional layer, which use 3×3, 5×5, and 7×7 convolution kernels respectively. In practical applications, convolution kernels of other sizes may be substituted, which is not limited in the present application.
Further, the multiple feature maps output by the feature extraction module are input into a concatenation module, which merges the feature maps, for example by concatenating the multiple feature maps along the channel dimension.
As an example, as shown in FIG. 5, the concatenation module may be implemented by a 1×1 convolutional layer using 1×1 convolution kernels. In this layer, the number of output channels can be controlled by controlling the number of convolution kernels; the convolution with 1×1 kernels is comparable to the computation of a fully connected layer, and adding a nonlinear activation function helps increase the nonlinearity of the network, allowing the neural network to express more complex features.
Further, as shown in FIG. 5, the feature maps output from the concatenation module are processed by a fully connected layer and a quantization layer and converted into the target bit stream B.
It should be understood that the convolutional neural network structure in FIG. 5 is only an example. In practical applications, it can be flexibly designed according to information such as the number of subbands and the codec performance requirements, for example by inserting network layers such as activation layers and normalization layers between the convolutional layers; the present application is not limited thereto.
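The patent does not specify how the quantization layer maps the fully connected layer's output to bits, so the sketch below is only one possible reading: a plain uniform scalar quantizer in numpy. The bit width, the [-1, 1] value range, and the most-significant-bit-first packing order are all illustrative assumptions.

```python
import numpy as np

def quantize_to_bits(x, n_bits):
    """Uniform scalar quantizer sketch for the encoder's quantization layer:
    map each real value in [-1, 1] to an n_bits level index, then emit the
    index bits; the concatenation of all bits forms the target bit stream B."""
    levels = 2 ** n_bits
    idx = np.round((np.clip(x, -1.0, 1.0) + 1.0) / 2.0 * (levels - 1)).astype(int)
    # unpack each level index into its n_bits binary digits, MSB first
    return ((idx[:, None] >> np.arange(n_bits - 1, -1, -1)) & 1).ravel()

x = np.array([-1.0, -0.3, 0.4, 1.0])  # hypothetical fully connected outputs
B = quantize_to_bits(x, n_bits=2)

assert B.tolist() == [0, 0, 0, 1, 1, 0, 1, 1]
assert B.size == x.size * 2           # bit stream length = values x bits each
```

With 2 bits per value, the four inputs map to the level indices 0, 1, 2, 3, i.e. the bit pairs 00, 01, 10, 11.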
In some embodiments, as shown in FIG. 6, the decoder may include a fully connected layer, a dimension adjustment module, and a residual module. The target bit stream B received from the transmitting end device is first input into the fully connected layer and the dimension adjustment module, where it is converted to the dimensions of the feature vector matrix W, and is then input into the residual module, which outputs the feature vector matrix W` composed of the at least one target feature vector.
In some embodiments, as shown in FIG. 6, the residual module may include a convolutional layer, L common modules, another convolutional layer, and a summation module, where L is a positive integer. The output of the dimension adjustment module is first sampled and copied: one copy is input into the summation module, and the other is input into the convolutional layer, whose convolution kernels expand the number of channels; feature information is then extracted in depth through the L common modules. The output of the L common modules then passes through the other convolutional layer, which reduces the number of channels, and the output of this convolutional layer is input into the summation module, where it is summed with the copied output of the dimension adjustment module to obtain the feature vector matrix W`.
It should be understood that the embodiments of the present application do not specifically limit the number or the structural composition of the common modules. As an example, a common module may adopt the structure shown in FIG. 7, but the present application is not limited thereto.
It should be understood that the convolutional neural network structure in FIG. 6 is only an example. In practical applications, it can be flexibly configured according to information such as the number of subbands and the codec performance requirements, and the present application is not limited thereto.
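The copy-transform-sum behavior of the residual module can be sketched with plain matrices standing in for the channel-expanding convolutional layer, the common modules, and the channel-reducing convolutional layer. This is an illustrative numpy example; the dimensions and random weights are assumptions.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_module(x, W_up, W_down):
    """Sketch of the residual module: the input is copied, one copy passes
    through an expanding transform, a nonlinearity, and a reducing transform
    (stand-ins for the convolutional layers and common modules), and the
    summation module adds the untouched copy back in."""
    branch = W_down @ relu(W_up @ x)  # expand channels, transform, reduce
    return x + branch                 # skip connection via the summation module

rng = np.random.default_rng(3)
d, d_wide = 16, 64
x = rng.standard_normal(d)
W_up = rng.standard_normal((d_wide, d)) * 0.1
W_down = rng.standard_normal((d, d_wide)) * 0.1

y = residual_module(x, W_up, W_down)
assert y.shape == x.shape
# With a zero-weight branch the module reduces to the identity (the skip path)
assert np.allclose(residual_module(x, np.zeros((d_wide, d)), np.zeros((d, d_wide))), x)
```

The identity check is the point of the skip path: the transform branch only has to learn a correction on top of the dimension-adjusted input, which eases training of deep decoders.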
In some embodiments of the present application, the model parameters of the neural networks of the encoder and the decoder are obtained by joint training. For example, the model parameters of the neural networks at the encoding end and the decoding end are first initialized; multiple sets of feature vector matrix samples are input into the neural network at the encoding end for encoding, yielding multiple target bit streams; these target bit streams are then input into the decoding end for decoding; and the model parameters of the encoder and the decoder are adjusted according to the decoding results until the feature vectors output by the decoder and the feature vectors input into the neural network of the encoder satisfy a convergence condition.
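The joint training loop described above can be sketched end to end with linear toy stand-ins for the two networks. This is an illustrative numpy example: the dimensions, learning rate, iteration count, and the linear encoder/decoder are assumptions made so the sketch stays self-contained; the actual scheme uses the neural networks described above.

```python
import numpy as np

rng = np.random.default_rng(4)
d, k, lr, n = 8, 3, 0.05, 256

# Linear stand-ins for the encoder network E (d -> k) and decoder D (k -> d)
E = rng.standard_normal((k, d)) * 0.1
D = rng.standard_normal((d, k)) * 0.1
X = rng.standard_normal((n, d))  # feature vector matrix samples

def loss(E, D, X):
    # mean squared reconstruction error ||W` - W||^2 between decoder output
    # and the feature vectors fed to the encoder
    return np.mean((X @ E.T @ D.T - X) ** 2)

l0 = loss(E, D, X)
for _ in range(300):
    Z = X @ E.T             # encoder output (stand-in for the bit stream)
    R = Z @ D.T - X         # decoder reconstruction error
    gD = 2.0 * R.T @ Z / n  # gradients of the shared loss w.r.t. BOTH networks
    gE = 2.0 * (R @ D).T @ X / n
    D -= lr * gD            # encoder and decoder are updated together,
    E -= lr * gE            # which is what makes the training "joint"

assert loss(E, D, X) < l0   # joint training reduces the reconstruction error
```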
Therefore, in the embodiments of the present application, the transmitting end device uses a convolutional neural network to compress and encode, as an image, the feature vectors corresponding to the channel information of multiple subbands, obtaining the target bit stream. Correspondingly, the receiving end device uses a convolutional neural network to decompress the target bit stream and recover the target image, thereby obtaining the at least one target feature vector. On the one hand, compared with directly encoding the full channel information obtained by channel estimation, encoding the feature vectors of the full channel information helps avoid compressing excessive redundant information and reduces the CSI feedback overhead. On the other hand, performing joint compression and feedback according to the cross-correlation among the feature vectors of multiple subbands in the frequency domain helps improve the compression and feedback performance.
In other embodiments of the present application, the encoder of the transmitting end device may process the input at least one first feature vector using a recurrent neural network (RNN); for example, the neural network compresses the at least one first feature vector as elements of a sequence, obtaining the encoding result, i.e., the target bit stream.
Correspondingly, the decoder of the receiving end device may use a recurrent neural network to treat the input target bit stream as information obtained by compressing and encoding a sequence, and decompress and recover the target bit stream to obtain a target sequence composed of the at least one target feature vector.
As described above, a neural network includes an input layer, hidden layers, and an output layer; the output is controlled by activation functions, and the layers are connected by weights. The activation functions are determined in advance, and what the neural network model learns through training is contained in the weights. A basic neural network establishes weighted connections only between layers; the biggest difference between an RNN and a basic neural network is that weighted connections are also established among the neurons of the hidden layer across time steps.
FIG. 8 is a typical RNN structure diagram, in which each arrow represents one transformation, that is, each arrow connection carries weights. The left part of FIG. 8 is the folded view of the RNN structure, and the right part is the unfolded view; the arrow next to h in the left part indicates that the "recurrence" of this structure resides in the hidden layer.
As can be seen from the unfolded RNN structure diagram, the neurons of the hidden layer are also connected with weights; that is, as the sequence progresses, earlier hidden states affect later ones. In FIG. 8, x represents the input, h represents the hidden unit, O represents the output, y represents the label of the training set, L represents the loss function, K, V, and U represent weights, and t-1, t, and t+1 represent successive time instants. It can thus be seen that the "loss" also accumulates as the sequence advances. Based on this structure, an RNN performs well on sequence data: an RNN is a recursive neural network that recurses along the evolution direction of the sequence, with all nodes (recurrent units) connected in a chain.
Long Short-Term Memory (LSTM) is an evolved RNN. Unlike the typical RNN structure, LSTM introduces the concept of a cell state. Whereas an RNN considers only the most recent state, the LSTM cell state determines which states should be kept and which should be forgotten, overcoming the shortcomings of traditional RNNs in long-term memory.
In some embodiments of the present application, the at least one first feature vector may be processed by a basic RNN to obtain the target bit stream B, or the at least one first feature vector may be processed by an LSTM to obtain the target bit stream B.
In some embodiments, the encoding of the at least one first feature vector through a neural network to obtain a target bit stream includes:
inputting each feature vector in the at least one first feature vector into the recurrent neural network in turn;
encoding each feature vector as an element of a sequence through the recurrent neural network to obtain the target bit stream.
For example, each feature vector in the at least one first feature vector is sequentially input to the LSTM, and the LSTM encodes each feature vector as an element of a sequence to obtain the target bit stream.
As an example, the feature vectors W_1, W_2, …, W_n corresponding to the n subbands are input to the LSTM in turn as different elements of a sequence, and the LSTM processes the sequence to obtain the target bit stream.
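The sequential processing described above can be sketched with a basic recurrent cell; as noted earlier, the application allows either a basic RNN or an LSTM, and a basic cell keeps the sketch short. All weight matrices, dimensions, and the `tanh` update rule below are illustrative assumptions, not details from the application:

```python
import math

def rnn_step(x, h, W, U):
    """One recurrent step h' = tanh(W @ x + U @ h), with plain lists.
    W maps the input, U maps the previous hidden state."""
    def matvec(M, v):
        return [sum(m * vi for m, vi in zip(row, v)) for row in M]
    wx = matvec(W, x)
    uh = matvec(U, h)
    return [math.tanh(a + b) for a, b in zip(wx, uh)]

def encode_sequence(subband_vectors, W, U, hidden_dim):
    """Feed the subband eigenvectors W_1..W_n through the cell in order;
    the final hidden state summarizes the whole sequence and would then
    be quantized into the target bit stream B."""
    h = [0.0] * hidden_dim
    for x in subband_vectors:
        h = rnn_step(x, h, W, U)
    return h
```

Because each step reuses the hidden state of the previous step, the encoding of subband n depends on subbands 1..n-1, which is how the cross-subband correlation enters the compressed representation.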
Correspondingly, at the decoding end, the decoding of the target bit stream through the neural network to obtain at least one target feature vector includes:
inputting the target bit stream into the recurrent neural network;
decoding the target bit stream through the recurrent neural network, treating it as information obtained by encoding a sequence, to obtain a target sequence, where the target sequence includes the at least one target feature vector.
That is, the decoder at the receiving end treats the received target bit stream B as information obtained by compressing a sequence, and decompresses the target bit stream B to obtain a target sequence including at least one target feature vector.
Hereinafter, the specific encoding and decoding processes are described by taking as an example an encoder in the transmitting end device and a decoder in the receiving end device that adopt the network structures in FIG. 9 and FIG. 10, respectively. It should be understood that the neural network structures illustrated in FIG. 9 and FIG. 10 are only examples; in practical applications, they can be flexibly configured according to information such as the number of subbands and the encoding/decoding performance requirements, and the present application is not limited thereto.
In some embodiments, as shown in FIG. 9, the encoder may include: an LSTM module, configured to sequentially receive each feature vector in the at least one first feature vector and process each feature vector as an element of a sequence.
Further, the encoder may include a fully connected layer and a quantization layer, configured to convert the result processed by the LSTM module to obtain the target bit stream B.
In some embodiments, as shown in FIG. 10, the decoder may include: a fully connected layer, a plurality of LSTM modules, and a fully connected layer connected to each of the plurality of LSTM modules. The first fully connected layer performs dimension transformation on the target bit stream B; its output serves as the input of the first LSTM module, and the output of each LSTM module serves as the input of the next LSTM module. The outputs of the plurality of LSTM modules pass through their corresponding fully connected layers to output the corresponding feature vector matrix W`={W`_1, W`_2, …, W`_n}.
Therefore, in the embodiment of the present application, the transmitting end device uses the recurrent neural network to compress the feature vectors of multiple subbands as elements of a sequence to obtain the target bit stream. The receiving end device uses the recurrent neural network to decompress the target bit stream back into the elements of the sequence, thereby obtaining at least one target feature vector. On the one hand, compared with directly encoding the full channel information obtained by channel estimation, encoding the feature vectors of the full channel information helps avoid compressing excessive redundant information and reduces the CSI feedback overhead. On the other hand, joint compression feedback based on the cross-correlation information between the feature vectors of multiple subbands in the frequency domain helps improve the compression feedback performance.
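The application does not specify how the quantization layer in FIG. 9 produces bits. As one hedged illustration, a uniform scalar quantizer shows how the real-valued output of the fully connected layer could be mapped to the bit stream B; the bit width per value and the assumed [-1, 1] input range are illustrative assumptions:

```python
def quantize_to_bits(values, bits_per_value=2):
    """Uniform scalar quantizer (illustrative): clip each value to
    [-1, 1], map it to one of 2**bits_per_value levels, and emit the
    level index as a fixed-length bit pattern, concatenated into one
    bit stream."""
    levels = 2 ** bits_per_value
    stream = []
    for v in values:
        v = min(max(v, -1.0), 1.0)
        idx = min(int((v + 1.0) / 2.0 * levels), levels - 1)
        stream.extend(int(b) for b in format(idx, f"0{bits_per_value}b"))
    return stream
```

The resulting stream length is fixed at `len(values) * bits_per_value`, which is what makes the feedback overhead predictable regardless of the channel realization.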
In still other embodiments of the present application, the encoding of the at least one first feature vector through a neural network to obtain a target bit stream includes:
encoding the at least one first feature vector based on an attention mechanism to obtain the target bit stream.
Optionally, in this embodiment of the present application, the attention mechanism may be a self-attention mechanism or another attention mechanism, which is not limited in this application.
The self-attention mechanism adopts a "query-key-value" model.
The calculation of attention is mainly divided into three steps:
Step 1: compute the similarity between the query and each key to obtain the weights; commonly used similarity functions include the dot product, concatenation, and a perceptron;
Step 2: normalize these weights using a softmax function;
Finally, compute the weighted sum of the weights and the corresponding values to obtain the final attention.
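The three steps above can be sketched directly, using the dot product as the similarity function (one of the options named in step 1); the vectors in the test are illustrative:

```python
import math

def softmax(ws):
    """Step 2: normalize the raw similarity scores into weights."""
    m = max(ws)  # subtract the max for numerical stability
    exps = [math.exp(w - m) for w in ws]
    s = sum(exps)
    return [e / s for e in exps]

def attention(query, keys, values):
    """Query-key-value attention in the three steps described above."""
    # Step 1: dot-product similarity between the query and each key.
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    # Step 2: softmax normalization of the weights.
    weights = softmax(scores)
    # Step 3: weighted sum of the corresponding values.
    dim = len(values[0])
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)]
```

In a self-attention module, the queries, keys, and values are all derived from the same input feature vectors, which is what lets the module expose correlations among the elements of those vectors.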
In some embodiments of the present application, the transmitting end device extracts, based on the attention mechanism, the correlation features between the feature vectors of multiple subbands and the correlation features between the elements of the feature vectors, further improving the encoding performance while also improving the decompression performance. For example, the transmitting end device extracts, based on the attention mechanism, the correlation features of the elements in the at least one first feature vector to obtain at least one second feature vector, and further compresses the at least one second feature vector to obtain the target bit stream B.
Correspondingly, at the decoding end, the receiving end device may decode the target bit stream based on the attention mechanism to obtain at least one target feature vector.
Optionally, in this embodiment of the present application, the attention mechanism may be a self-attention mechanism or another attention mechanism, which is not limited in this application.
In some embodiments of the present application, the decoder first performs feature extraction on the target bit stream to obtain a first feature map of the target bit stream; it then determines, based on the attention mechanism, the weights of the elements in the first feature map of the target bit stream; performs point-wise multiplication of the first feature map with the weights of its elements to obtain a second feature map of the target bit stream; and decompresses the second feature map of the target bit stream to obtain the at least one target feature vector.
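The point-wise multiplication step described above is a plain element-wise product of the first feature map with its per-element attention weights; a minimal sketch with illustrative shapes:

```python
def apply_attention_weights(feature_map, weights):
    """Point-wise (element-wise) multiplication of the first feature
    map with the per-element attention weights, yielding the second
    feature map. Both inputs are assumed to have identical shapes."""
    return [[f * w for f, w in zip(frow, wrow)]
            for frow, wrow in zip(feature_map, weights)]
```

Elements whose weight is close to 0 are suppressed and elements whose weight is close to 1 pass through, which is how the decoder emphasizes the informative parts of the feature map before decompression.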
Hereinafter, the specific encoding and decoding processes are described by taking as an example an encoder in the transmitting end device and a decoder in the receiving end device that adopt the network structures in FIG. 11 and FIG. 13, respectively. It should be understood that the neural network structures illustrated in FIG. 11 and FIG. 13 are only examples; in practical applications, they can be flexibly configured according to information such as the number of subbands and the encoding/decoding performance requirements, and the present application is not limited thereto.
In some embodiments, as shown in FIG. 11, the encoder may include: s self-attention modules, a concatenation module, a fully connected layer and a quantization layer, where s is a positive integer. Each self-attention module uses the self-attention mechanism to extract the correlation features of the elements in the input first feature vectors, and the s cascaded self-attention modules output a plurality of second feature vectors. Further, the plurality of second feature vectors are input to the concatenation module and concatenated into one feature vector, which is processed by the fully connected layer and the quantization layer to obtain the target bit stream B.
In some embodiments, the attention module may be implemented with the structure shown in FIG. 12, but the present application is not limited thereto.
As an example, as shown in FIG. 12, the attention module includes: n fully connected layers, an attention layer, and n groups of two-layer fully connected layers. The n fully connected layers each receive one feature vector, which may be the first feature vector described above or the feature vector output by the previous attention module. The attention layer receives the feature vectors output from the n fully connected layers and extracts the correlation features between the elements of these feature vectors. Each output path of the self-attention layer is first duplicated by sampling, then passed through a corresponding group of two-layer fully connected layers and summed with the sampled copy, giving the output of the self-attention module.
In some embodiments, as shown in FIG. 13, the decoder may include: a fully connected layer, a dimension adjustment module, a feature extraction module and a self-attention module. The target bit stream B is first input to the fully connected layer and the dimension adjustment module and converted to the dimensions of the feature vector matrix W, and then input to the feature extraction module for feature extraction to obtain a first feature map; the output of the dimension adjustment module is also duplicated by sampling.
Optionally, as shown in FIG. 13, the feature extraction module may include a convolutional layer, an activation function, q residual blocks and t residual blocks, where q and t are positive integers.
The convolutional layer adjusts the number of channels of the output of the dimension adjustment module; the output of the convolutional layer is input to the q residual blocks through the activation function, and the output of the activation function is duplicated by sampling.
After passing through the q residual blocks, the output of the activation function is split into two paths: one passes through the t residual blocks to obtain a second feature map, and the other is input to the self-attention module to extract the attention weights of the elements in the feature vector matrix.
In some embodiments, the self-attention module may be implemented by a mask block, or by the attention module in FIG. 11, which is not limited in this application.
For example, the output of the q residual blocks may be input to the mask block, which extracts the attention masks of the elements in the feature map.
Further, the output of the mask block and the output of the t residual blocks are point-wise multiplied, and the result is added to the sampled copy of the output of the activation function.
Then, the sum is input to q residual blocks for result correction, after which a convolutional layer reduces the channel dimension; the result is added to the sampled copy of the second feature vector matrix to output the restored feature vector matrix W`=[W`_1, W`_2, …, W`_n].
FIG. 14 is an exemplary structural diagram of the mask block, but the present application is not limited thereto.
As shown in FIG. 14, the mask block may include m residual blocks, a down-sampling module, 2m residual blocks, an up-sampling module, m residual blocks, a convolutional layer and a sigmoid activation function. That is, the mask block may be implemented with 4m residual blocks in total, with down-sampling and up-sampling performed after the m-th residual block and the 3m-th residual block respectively, so that the middle 2m residual blocks can extract features from wider-area global information. Finally, the convolutional layer matches the channel dimension, and the sigmoid activation function maps the output of the convolutional layer to between 0 and 1 to obtain the attention weights.
It should be understood that the residual block in this embodiment of the present application may be implemented with the structure shown in FIG. 15, or with another equivalent structure, which is not limited in this application.
As an example, as shown in FIG. 15, the residual block may include: a convolutional layer, a normalization layer, an activation function, a convolutional layer and a summation module. One path of the residual block input is duplicated by sampling and fed to the summation module; the other path passes through the convolutional layer, the normalization layer, the activation function and the second convolutional layer, whose output is also fed to the summation module. The summation module adds the input of the residual block to the output of the convolutional layer as the output of the residual block. Designing suitable residual blocks in a neural network helps address the gradient problem in the network.
Therefore, in the embodiment of the present application, the transmitting end device uses the attention mechanism to compress the feature vectors of multiple subbands as elements of a sequence to obtain the target bit stream. Correspondingly, the receiving end device uses the attention mechanism to decompress and restore the target bit stream, thereby obtaining at least one target feature vector. On the one hand, compared with directly encoding the full channel information obtained by channel estimation, encoding the feature vectors of the full channel information helps avoid compressing excessive redundant information and reduces the CSI feedback overhead. On the other hand, using the attention mechanism to jointly compress and feed back the cross-correlation information between the elements of the feature vectors of multiple subbands takes into account both the correlation features between the feature vectors of the subbands and the correlation features between the elements of the feature vectors, which helps improve the compression feedback performance.
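The final step of the mask block maps the convolutional layer's output into the interval (0, 1) with a sigmoid to obtain the attention weights; a minimal sketch of just that mapping (the residual blocks and sampling stages are omitted):

```python
import math

def sigmoid_mask(conv_output):
    """Map each element of the convolutional layer's output to (0, 1)
    with the sigmoid 1 / (1 + exp(-x)), yielding attention weights."""
    return [1.0 / (1.0 + math.exp(-x)) for x in conv_output]
```

Large positive activations approach weight 1 (keep the element) and large negative activations approach weight 0 (mask it out), which is the intended behavior of an attention mask.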
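The summation module's skip connection described above amounts to y = x + f(x), where f stands for the conv-norm-activation-conv branch. The sketch below takes the branch as an arbitrary callable placeholder rather than the actual layers:

```python
def residual_block(x, transform):
    """y = x + f(x): the skip path copies the input and the summation
    module adds it to the output of the transform branch f (standing in
    for conv -> norm -> activation -> conv)."""
    fx = transform(x)
    return [a + b for a, b in zip(x, fx)]
```

Because the identity path always passes the input through unchanged, gradients can flow around the branch even when f is poorly conditioned, which is why residual blocks help with the gradient problem mentioned above.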
The method embodiments of the present application are described in detail above with reference to FIG. 2 to FIG. 15, and the apparatus embodiments of the present application are described in detail below with reference to FIG. 16 to FIG. 20. It should be understood that the apparatus embodiments correspond to the method embodiments, and similar descriptions may refer to the method embodiments.
FIG. 16 shows a schematic block diagram of a transmitting end device 400 according to an embodiment of the present application. As shown in FIG. 16, the transmitting end device 400 includes:
a communication module 410, configured to receive a reference signal sent by a receiving end device;
a processing module 420, configured to perform channel estimation according to the reference signal to obtain channel information between the transmitting end device and the receiving end device, and to perform eigendecomposition on the channel information to obtain at least one first feature vector;
an encoding module 430, configured to encode the at least one first feature vector through a neural network to obtain a target bit stream;
the communication module 410 is further configured to send the target bit stream to the receiving end device.
In some embodiments of the present application, the channel information includes channel information corresponding to each of a plurality of subcarrier groups of the transmitting end device, and the at least one first feature vector includes a feature vector corresponding to each of the plurality of subcarrier groups.
In some embodiments of the present application, the encoding module 430 is specifically configured to:
concatenate the at least one first feature vector into a feature vector matrix and input it to the neural network;
encode the feature vector matrix as an image to be compressed through the neural network to obtain the target bit stream.
In some embodiments of the present application, the neural network is a convolutional neural network.
In some embodiments of the present application, the neural network is a recurrent neural network, and the encoding module 430 is further configured to:
input each feature vector in the at least one first feature vector into the recurrent neural network in turn;
encode each feature vector as an element of a sequence through the recurrent neural network to obtain the target bit stream.
In some embodiments of the present application, the recurrent neural network includes a long short-term memory (LSTM) neural network.
In some embodiments of the present application, the encoding module 430 is further configured to:
encode the at least one first feature vector based on an attention mechanism to obtain the target bit stream.
In some embodiments of the present application, the encoding module 430 further includes:
an attention module, configured to extract, based on the attention mechanism, the correlation features between the elements in the at least one first feature vector to obtain at least one second feature vector;
a feature vector compression module, configured to perform feature compression on the at least one second feature vector to obtain the target bit stream.
In some embodiments of the present application, the transmitting end device is a terminal device, and the receiving end device is a network device.
Optionally, in some embodiments, the above communication module may be a communication interface or a transceiver, or an input/output interface of a communication chip or a system-on-chip. The above processing module and encoding module may be one or more processors.
It should be understood that the transmitting end device 400 according to the embodiment of the present application may correspond to the transmitting end device in the method embodiments of the present application, and that the above and other operations and/or functions of the units in the transmitting end device 400 are respectively intended to implement the corresponding processes of the transmitting end device in the method 200 shown in FIG. 2 to FIG. 15, which are not repeated here for brevity.
FIG. 17 is a schematic block diagram of a receiving end device according to an embodiment of the present application. The receiving end device 500 of FIG. 17 includes:
a communication module 510, configured to receive a target bit stream sent by a transmitting end device, where the target bit stream is obtained by the transmitting end device encoding at least one first feature vector, and the at least one first feature vector is obtained by the transmitting end device performing eigendecomposition on the result of channel estimation;
a decoding module 520, configured to decode the target bit stream through a neural network to obtain at least one target feature vector.
In some embodiments of the present application, the channel information includes channel information corresponding to each of a plurality of subcarrier groups of the transmitting end device, and the at least one first feature vector includes a feature vector corresponding to each of the plurality of subcarrier groups.
In some embodiments of the present application, the decoding module 520 is configured to:
input the target bit stream into the neural network;
decode the target bit stream through the neural network, treating it as information obtained by encoding an image, to obtain a target image, where the target image includes the at least one target feature vector.
In some embodiments of the present application, the neural network is a convolutional neural network.
In some embodiments of the present application, the neural network is a recurrent neural network, and the decoding module 520 is further configured to:
input the target bit stream into the recurrent neural network;
decode the target bit stream through the recurrent neural network, treating it as information obtained by encoding a sequence, to obtain a target sequence, where the target sequence includes the at least one target feature vector.
In some embodiments of the present application, the recurrent neural network includes a long short-term memory (LSTM) neural network.
In some embodiments of the present application, the decoding module 520 is further configured to:
decode the target bit stream based on an attention mechanism to obtain at least one target feature vector.
In some embodiments of the present application, the decoding module 520 includes:
a feature extraction module, configured to perform feature extraction on the target bit stream to obtain a first feature map;
an attention module, configured to determine, based on the attention mechanism, the weights of the elements in the first feature map;
a point-wise multiplication module, configured to perform point-wise multiplication of the first feature map with the weights of the elements in the first feature map to obtain a second feature map;
a decompression module, configured to decompress the second feature map to obtain the at least one target feature vector.
In some embodiments of the present application, the receiving end device 500 further includes:
a processing module, configured to perform precoding according to the at least one target feature vector.
Optionally, in some embodiments, the above communication module may be a communication interface or a transceiver, or an input/output interface of a communication chip or a system-on-chip. The above processing module and decoding module may be one or more processors.
It should be understood that the receiving end device 500 according to the embodiment of the present application may correspond to the receiving end device in the method embodiments of the present application, and that the above and other operations and/or functions of the units in the receiving end device 500 are respectively intended to implement the corresponding processes of the receiving end device in the method 200 shown in FIG. 2 to FIG. 15, which are not repeated here for brevity.
To sum up, the transmitting end device performs eigendecomposition on the full channel information obtained by channel estimation to obtain at least one feature vector, further encodes the feature vector using a neural network to obtain a target bit stream, and sends the target bit stream to the receiving end. Correspondingly, the receiving end device decodes the target bit stream to obtain a target feature vector and further performs precoding according to the target feature vector. On the one hand, when sending channel information, the transmitting end device only needs to send the target bit stream obtained by encoding the feature vectors derived from eigendecomposition of the full channel information, which helps reduce the CSI overhead. On the other hand, encoding the feature vectors of the full channel information rather than directly compressing and encoding the full channel information takes the correlation features of the channel information into account, which helps avoid compressing excessive redundant information, improves the compression efficiency, and thus improves the encoding performance.
FIG. 18 is a schematic structural diagram of a communication device 600 provided by an embodiment of the present application. The communication device 600 shown in FIG. 18 includes a processor 610, and the processor 610 may call and run a computer program from a memory to implement the methods in the embodiments of the present application.
Optionally, as shown in FIG. 18, the communication device 600 may further include a memory 620. The processor 610 may call and run a computer program from the memory 620 to implement the methods in the embodiments of the present application.
The memory 620 may be a separate device independent of the processor 610, or may be integrated in the processor 610.
Optionally, as shown in FIG. 18, the communication device 600 may further include a transceiver 630, and the processor 610 may control the transceiver 630 to communicate with other devices; specifically, it may send information or data to other devices, or receive information or data sent by other devices.
The transceiver 630 may include a transmitter and a receiver. The transceiver 630 may further include one or more antennas.
Optionally, the communication device 600 may specifically be the transmitting end device of the embodiments of the present application, and the communication device 600 may implement the corresponding processes implemented by the transmitting end device in the methods of the embodiments of the present application, which are not repeated here for brevity.
In some embodiments, the transceiver 630 in the communication device 600 may be used to implement the relevant operations of the communication module 410 in the transmitting end device 400 shown in FIG. 16, which are not repeated here for brevity.
In some embodiments, the processor 610 in the communication device 600 may be used to implement the relevant operations of the processing module 420 and the encoding module 430 in the transmitting end device 400 shown in FIG. 16, which are not repeated here for brevity.
Optionally, the communication device 600 may specifically be the receiving end device of the embodiments of the present application, and the communication device 600 may implement the corresponding processes implemented by the receiving end device in the methods of the embodiments of the present application, which are not repeated here for brevity.
In some embodiments, the transceiver 630 in the communication device 600 may be used to implement the relevant operations of the communication module 510 in the receiving end device 500 shown in FIG. 17, which are not repeated here for brevity.
In some embodiments, the processor 610 in the communication device 600 may be used to implement the relevant operations of the decoding module 520 and the processing module in the receiving end device 500 shown in FIG. 17, which are not repeated here for brevity.
FIG. 19 is a schematic structural diagram of a chip according to an embodiment of the present application. The chip 700 shown in FIG. 19 includes a processor 710, which can call and run a computer program from a memory to implement the methods in the embodiments of the present application.
Optionally, as shown in FIG. 19, the chip 700 may further include a memory 720. The processor 710 can call and run a computer program from the memory 720 to implement the methods in the embodiments of the present application.
The memory 720 may be a separate device independent of the processor 710, or may be integrated in the processor 710.
Optionally, the chip 700 may further include an input interface 730. The processor 710 can control the input interface 730 to communicate with other devices or chips, and in particular to acquire information or data sent by other devices or chips.
Optionally, the chip 700 may further include an output interface 740. The processor 710 can control the output interface 740 to communicate with other devices or chips, and in particular to output information or data to other devices or chips.
Optionally, the chip can be applied to the transmitting-end device in the embodiments of the present application, and the chip can implement the corresponding processes implemented by the transmitting-end device in the methods of the embodiments of the present application; for brevity, details are not repeated here.
In some embodiments, the input interface 730 and the output interface 740 in the chip 700 may be used to implement the operations of the communication module 410 in the transmitting-end device 400 shown in FIG. 16; for brevity, details are not repeated here.
In some embodiments, the processor 710 in the chip 700 may be used to implement the operations of the processing module 420 and the encoding module 430 in the transmitting-end device 400 shown in FIG. 16; for brevity, details are not repeated here.
Optionally, the chip can be applied to the receiving-end device in the embodiments of the present application, and the chip can implement the corresponding processes implemented by the receiving-end device in the methods of the embodiments of the present application; for brevity, details are not repeated here.
In some embodiments, the input interface 730 and the output interface 740 in the chip 700 may be used to implement the operations of the communication module 510 in the receiving-end device 500 shown in FIG. 17; for brevity, details are not repeated here.
In some embodiments, the processor 710 in the chip 700 may be used to implement the operations of the decoding module 520 and the processing module in the receiving-end device 500 shown in FIG. 17; for brevity, details are not repeated here.
It should be understood that the chip mentioned in the embodiments of the present application may also be referred to as a system-level chip, a system chip, a chip system, or a system-on-chip.
FIG. 20 is a schematic block diagram of a communication system 900 provided by an embodiment of the present application. As shown in FIG. 20, the communication system 900 includes a transmitting-end device 910 and a receiving-end device 920.
The transmitting-end device 910 can be used to implement the corresponding functions implemented by the transmitting-end device in the above methods, and the receiving-end device 920 can be used to implement the corresponding functions implemented by the receiving-end device in the above methods; for brevity, details are not repeated here.
It should be understood that the processor of the embodiments of the present application may be an integrated circuit chip with signal processing capability. In implementation, the steps of the above method embodiments may be completed by an integrated logic circuit of hardware in the processor or by instructions in the form of software. The above processor may be a general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field Programmable Gate Array, FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and can implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of the present application. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the methods disclosed in the embodiments of the present application may be directly embodied as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above methods in combination with its hardware.
It can be understood that the memory in the embodiments of the present application may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memory. The non-volatile memory may be a read-only memory (Read-Only Memory, ROM), a programmable read-only memory (Programmable ROM, PROM), an erasable programmable read-only memory (Erasable PROM, EPROM), an electrically erasable programmable read-only memory (Electrically EPROM, EEPROM), or a flash memory. The volatile memory may be a random access memory (Random Access Memory, RAM), which is used as an external cache. By way of example but not limitation, many forms of RAM are available, such as static random access memory (Static RAM, SRAM), dynamic random access memory (Dynamic RAM, DRAM), synchronous dynamic random access memory (Synchronous DRAM, SDRAM), double data rate synchronous dynamic random access memory (Double Data Rate SDRAM, DDR SDRAM), enhanced synchronous dynamic random access memory (Enhanced SDRAM, ESDRAM), synchlink dynamic random access memory (Synchlink DRAM, SLDRAM), and direct Rambus random access memory (Direct Rambus RAM, DR RAM). It should be noted that the memory of the systems and methods described herein is intended to include, but not be limited to, these and any other suitable types of memory.
It should be understood that the above memories are described by way of example and not limitation; for example, the memory in the embodiments of the present application may also be a static random access memory (static RAM, SRAM), a dynamic random access memory (dynamic RAM, DRAM), a synchronous dynamic random access memory (synchronous DRAM, SDRAM), a double data rate synchronous dynamic random access memory (double data rate SDRAM, DDR SDRAM), an enhanced synchronous dynamic random access memory (enhanced SDRAM, ESDRAM), a synchlink dynamic random access memory (synch link DRAM, SLDRAM), a direct Rambus random access memory (Direct Rambus RAM, DR RAM), or the like. That is, the memory in the embodiments of the present application is intended to include, but not be limited to, these and any other suitable types of memory.
Embodiments of the present application further provide a computer-readable storage medium for storing a computer program.
Optionally, the computer-readable storage medium can be applied to the transmitting-end device in the embodiments of the present application, and the computer program causes a computer to execute the corresponding processes implemented by the transmitting-end device in the methods of the embodiments of the present application; for brevity, details are not repeated here.
Optionally, the computer-readable storage medium can be applied to the receiving-end device in the embodiments of the present application, and the computer program causes a computer to execute the corresponding processes implemented by the receiving-end device in the methods of the embodiments of the present application; for brevity, details are not repeated here.
Embodiments of the present application further provide a computer program product, including computer program instructions.
Optionally, the computer program product can be applied to the transmitting-end device in the embodiments of the present application, and the computer program instructions cause a computer to execute the corresponding processes implemented by the transmitting-end device in the methods of the embodiments of the present application; for brevity, details are not repeated here.
Optionally, the computer program product can be applied to the receiving-end device in the embodiments of the present application, and the computer program instructions cause a computer to execute the corresponding processes implemented by the receiving-end device in the methods of the embodiments of the present application; for brevity, details are not repeated here.
Embodiments of the present application further provide a computer program.
Optionally, the computer program can be applied to the transmitting-end device in the embodiments of the present application; when the computer program runs on a computer, it causes the computer to execute the corresponding processes implemented by the transmitting-end device in the methods of the embodiments of the present application; for brevity, details are not repeated here.
Optionally, the computer program can be applied to the receiving-end device in the embodiments of the present application; when the computer program runs on a computer, it causes the computer to execute the corresponding processes implemented by the receiving-end device in the methods of the embodiments of the present application; for brevity, details are not repeated here.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in conjunction with the embodiments disclosed herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether these functions are executed in hardware or software depends on the specific application and design constraints of the technical solution. Skilled artisans may implement the described functions by different methods for each particular application, but such implementations should not be considered beyond the scope of the present application.
Those skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the systems, devices, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments; details are not repeated here.
In the several embodiments provided in the present application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the device embodiments described above are merely illustrative; the division of the units is merely a division by logical function, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices, or units, and may be in electrical, mechanical, or other forms.
The units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist physically alone, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of the present application essentially, or the part contributing to the prior art, or a part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or some of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, or an optical disc.
The above are merely specific implementations of the present application, but the protection scope of the present application is not limited thereto. Any person skilled in the art can readily conceive of changes or substitutions within the technical scope disclosed in the present application, and these should all be covered within the protection scope of the present application. Therefore, the protection scope of the present application should be subject to the protection scope of the claims.

Claims (46)

  1. A method for channel information feedback, characterized by comprising:
    a transmitting-end device receiving a reference signal sent by a receiving-end device;
    performing channel estimation according to the reference signal to obtain channel information between the transmitting-end device and the receiving-end device;
    performing eigendecomposition on the channel information to obtain at least one first feature vector;
    encoding the at least one first feature vector through a neural network to obtain a target bit stream; and
    sending the target bit stream to the receiving-end device.
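The pipeline of claim 1 — estimate the channel, eigendecompose it into first feature vectors, encode them into a bit stream — can be sketched numerically. This is an illustrative sketch only: the antenna dimensions and the random "estimated" channels are assumptions, and a uniform scalar quantizer stands in for the trained neural-network encoder of the claim.

```python
import numpy as np

def principal_eigenvectors(channels):
    """For each subcarrier group's channel matrix H (Nr x Nt), eigendecompose
    H^H H and keep the principal eigenvector (the 'first feature vectors')."""
    vecs = []
    for H in channels:
        w, v = np.linalg.eigh(H.conj().T @ H)  # Hermitian eigendecomposition
        vecs.append(v[:, np.argmax(w)])        # eigenvector of the largest eigenvalue
    return np.stack(vecs)                      # one row per subcarrier group

def encode_to_bits(vectors, bits_per_value=4):
    """Stand-in for the neural-network encoder: uniform scalar quantization of
    the real/imag parts into a bit stream (illustrative only)."""
    x = np.concatenate([vectors.real.ravel(), vectors.imag.ravel()])
    levels = 2 ** bits_per_value
    q = np.clip(((x + 1) / 2 * (levels - 1)).round().astype(int), 0, levels - 1)
    return "".join(format(int(v), f"0{bits_per_value}b") for v in q)

rng = np.random.default_rng(0)
channels = [rng.standard_normal((4, 8)) + 1j * rng.standard_normal((4, 8))
            for _ in range(3)]                 # 3 subcarrier groups, 8 Tx antennas (assumed)
V = principal_eigenvectors(channels)
bits = encode_to_bits(V)                       # target bit stream fed back over the air
```

The eigenvectors returned by `eigh` are unit-norm, which is what the receiving end needs for precoding.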
  2. The method according to claim 1, characterized in that the channel information comprises channel information respectively corresponding to each of multiple subcarrier groups of the transmitting-end device, and the at least one first feature vector comprises a feature vector respectively corresponding to each of the multiple subcarrier groups.
  3. The method according to claim 1 or 2, characterized in that the encoding the at least one first feature vector through a neural network to obtain a target bit stream comprises:
    splicing the at least one first feature vector into a feature vector matrix and inputting it to the neural network; and
    encoding, through the neural network, the feature vector matrix as an image to be compressed, to obtain the target bit stream.
  4. The method according to claim 3, characterized in that the neural network is a convolutional neural network.
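Claims 3-4 treat the stacked eigenvector matrix as an image and compress it with a convolutional neural network. A toy single-layer stand-in in plain NumPy can make the data flow concrete; the random kernel, ReLU, and 1-bit quantizer are assumptions, not the trained multi-layer CNN of the claims.

```python
import numpy as np

def conv2d_valid(x, k):
    """Minimal single-channel 2-D valid convolution standing in for one CNN layer."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def cnn_like_encoder(eigvec_matrix, kernel, n_bits=32):
    """Treat the spliced eigenvector matrix as a 2-channel 'image' (real/imag),
    apply one conv + ReLU, then binarize to a fixed-length bit stream."""
    img = np.stack([eigvec_matrix.real, eigvec_matrix.imag])   # 2 x groups x Nt
    feat = np.concatenate([conv2d_valid(c, kernel).ravel() for c in img])
    feat = np.maximum(feat, 0.0)                               # ReLU
    sel = feat[:n_bits]                                        # crude bottleneck
    return (sel > sel.mean()).astype(int)                      # 1-bit quantizer

rng = np.random.default_rng(1)
V = rng.standard_normal((8, 16)) + 1j * rng.standard_normal((8, 16))  # 8 groups x 16 Tx (assumed)
bits = cnn_like_encoder(V, kernel=rng.standard_normal((3, 3)))
```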
  5. The method according to claim 1 or 2, characterized in that the neural network is a recurrent neural network, and the encoding the at least one first feature vector through a neural network to obtain a target bit stream comprises:
    inputting each feature vector of the at least one first feature vector into the recurrent neural network in turn; and
    encoding, through the recurrent neural network, each feature vector as an element of a sequence, to obtain the target bit stream.
  6. The method according to claim 5, characterized in that the recurrent neural network comprises a long short-term memory (LSTM) neural network.
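Claims 5-6 instead feed the per-subcarrier-group eigenvectors into a recurrent network one by one, as elements of a sequence. A minimal LSTM cell written from scratch illustrates the sequential flow; the random untrained weights, hidden size, and sign-based binarization are all assumptions for the sketch.

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM step; gates stacked as [input, forget, cell-candidate, output]."""
    n = h.size
    z = W @ x + U @ h + b
    i = 1 / (1 + np.exp(-z[:n]))          # input gate
    f = 1 / (1 + np.exp(-z[n:2 * n]))     # forget gate
    g = np.tanh(z[2 * n:3 * n])           # cell candidate
    o = 1 / (1 + np.exp(-z[3 * n:]))      # output gate
    c = f * c + i * g
    return o * np.tanh(c), c

def rnn_encoder(eigvecs, hidden=16, seed=0):
    """Feed each group's eigenvector into the LSTM in turn (claim 5) and
    binarize the final hidden state into the target bit stream."""
    rng = np.random.default_rng(seed)
    d = 2 * eigvecs.shape[1]                       # real+imag per element
    W = rng.standard_normal((4 * hidden, d)) * 0.1
    U = rng.standard_normal((4 * hidden, hidden)) * 0.1
    b = np.zeros(4 * hidden)
    h, c = np.zeros(hidden), np.zeros(hidden)
    for v in eigvecs:                              # each vector is one sequence element
        x = np.concatenate([v.real, v.imag])
        h, c = lstm_step(x, h, c, W, U, b)
    return (h > 0).astype(int)

rng = np.random.default_rng(2)
V = rng.standard_normal((4, 8)) + 1j * rng.standard_normal((4, 8))  # 4 groups x 8 Tx (assumed)
bits = rnn_encoder(V)
```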
  7. The method according to any one of claims 1 to 6, characterized in that the encoding the at least one first feature vector through a neural network to obtain a target bit stream comprises:
    encoding the at least one first feature vector based on an attention mechanism to obtain the target bit stream.
  8. The method according to claim 7, characterized in that the encoding the at least one first feature vector based on an attention mechanism to obtain the target bit stream comprises:
    extracting, based on the attention mechanism, correlation features between elements in the at least one first feature vector to obtain at least one second feature vector; and
    performing feature compression on the at least one second feature vector to obtain the target bit stream.
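Claim 8's two steps — extract inter-element correlation features with attention to get a second feature vector, then compress it — can be sketched with single-head scaled dot-product self-attention. The element embedding, the random projection matrices, and the sign-based compression are assumptions standing in for the trained network.

```python
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_refine(x, Wq, Wk, Wv):
    """Single-head self-attention over the elements of one first feature vector,
    capturing inter-element correlations (the 'second feature vector')."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv               # x: (N, d) element embeddings
    scores = softmax(q @ k.T / np.sqrt(k.shape[1]))  # (N, N) correlation weights
    return scores @ v                               # correlation-aware features

rng = np.random.default_rng(3)
N, d = 8, 4                                        # 8 vector elements, width-4 embedding (assumed)
x = rng.standard_normal((N, d))                    # embedded first feature vector (assumed)
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
second = attention_refine(x, Wq, Wk, Wv)           # step 1: at least one second feature vector
bits = (second.ravel() > 0).astype(int)            # step 2: stand-in feature compression
```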
  9. The method according to any one of claims 1 to 8, characterized in that the transmitting-end device is a terminal device and the receiving-end device is a network device.
  10. A method for channel information feedback, characterized by comprising:
    a receiving-end device receiving a target bit stream sent by a transmitting-end device, the target bit stream being obtained by the transmitting-end device encoding at least one first feature vector, and the at least one first feature vector being obtained by the transmitting-end device performing eigendecomposition on a channel estimation result; and
    decoding the target bit stream through a neural network to obtain at least one target feature vector.
  11. The method according to claim 10, characterized in that the channel information comprises channel information respectively corresponding to each of multiple subcarrier groups of the transmitting-end device, and the at least one first feature vector comprises a feature vector respectively corresponding to each of the multiple subcarrier groups.
  12. The method according to claim 10 or 11, characterized in that the decoding the target bit stream through a neural network to obtain at least one target feature vector comprises:
    inputting the target bit stream to the neural network; and
    decoding, through the neural network, the target bit stream as information obtained by encoding an image, to obtain a target image, the target image comprising the at least one target feature vector.
  13. The method according to claim 12, characterized in that the neural network is a convolutional neural network.
  14. The method according to claim 10 or 11, characterized in that the neural network is a recurrent neural network, and the decoding the target bit stream through a neural network to obtain at least one target feature vector comprises:
    inputting the target bit stream to the recurrent neural network; and
    decoding, through the recurrent neural network, the target bit stream as information obtained by encoding a sequence, to obtain a target sequence, the target sequence comprising the at least one target feature vector.
  15. The method according to claim 14, characterized in that the recurrent neural network comprises a long short-term memory (LSTM) neural network.
  16. The method according to claim 10 or 11, characterized in that the decoding the target bit stream through a neural network to obtain at least one target feature vector comprises:
    decoding the target bit stream based on an attention mechanism to obtain the at least one target feature vector.
  17. The method according to claim 16, characterized in that the decoding the target bit stream based on an attention mechanism to obtain at least one target feature vector comprises:
    performing feature extraction on the target bit stream to obtain a first feature map;
    determining weights of elements in the first feature map based on the attention mechanism;
    performing element-wise multiplication of the first feature map and the weights of the elements in the first feature map to obtain a second feature map; and
    decompressing the second feature map to obtain the at least one target feature vector.
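The four steps of claim 17 map directly onto a short function: extract a first feature map from the bit stream, derive per-element attention weights, multiply the map by those weights element-wise, then decompress. The toy linear layers and their random weights are assumptions; in the claimed method each stage is a trained network layer.

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def attention_decode(bit_stream, W_feat, W_attn, W_out):
    """Claim 17's four steps with stand-in linear layers."""
    x = bit_stream.astype(float)
    feat1 = np.tanh(W_feat @ x)            # step 1: first feature map from the bit stream
    weights = sigmoid(W_attn @ feat1)      # step 2: attention weight per element
    feat2 = feat1 * weights                # step 3: element-wise multiplication -> second feature map
    return W_out @ feat2                   # step 4: decompress to the target feature vectors (flattened)

rng = np.random.default_rng(4)
bits = rng.integers(0, 2, 64)              # received target bit stream (assumed length)
W_feat = rng.standard_normal((32, 64)) * 0.1
W_attn = rng.standard_normal((32, 32)) * 0.1
W_out = rng.standard_normal((16, 32)) * 0.1
target = attention_decode(bits, W_feat, W_attn, W_out)
```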
  18. The method according to any one of claims 10 to 17, characterized in that the method further comprises:
    the receiving-end device performing precoding according to the at least one target feature vector.
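The precoding of claim 18 amounts to using a recovered target feature vector as the transmit direction. A minimal single-stream sketch (the dimensions and random channel are assumptions): normalize the vector to unit power, weight the data symbol with it across the antennas.

```python
import numpy as np

rng = np.random.default_rng(5)
H = rng.standard_normal((4, 8)) + 1j * rng.standard_normal((4, 8))  # Nr x Nt channel (assumed)
v = rng.standard_normal(8) + 1j * rng.standard_normal(8)            # recovered target feature vector

w = v / np.linalg.norm(v)   # unit-power precoding vector
s = 1.0 + 0.0j              # one data symbol
tx = w * s                  # precoded signal across the 8 transmit antennas
rx = H @ tx                 # what the 4 receive antennas observe
```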
  19. A transmitting-end device, characterized by comprising:
    a communication module, configured to receive a reference signal sent by a receiving-end device;
    a processing module, configured to perform channel estimation according to the reference signal to obtain channel information between the transmitting-end device and the receiving-end device, and to perform eigendecomposition on the channel information to obtain at least one first feature vector; and
    an encoding module, configured to encode the at least one first feature vector through a neural network to obtain a target bit stream;
    wherein the communication module is further configured to send the target bit stream to the receiving-end device.
  20. The transmitting-end device according to claim 19, characterized in that the channel information comprises channel information respectively corresponding to each of multiple subcarrier groups of the transmitting-end device, and the at least one first feature vector comprises a feature vector respectively corresponding to each of the multiple subcarrier groups.
  21. The transmitting-end device according to claim 19 or 20, characterized in that the encoding module is specifically configured to:
    splice the at least one first feature vector into a feature vector matrix and input it to the neural network; and
    encode, through the neural network, the feature vector matrix as an image to be compressed, to obtain the target bit stream.
  22. The transmitting-end device according to claim 21, characterized in that the neural network is a convolutional neural network.
  23. The transmitting-end device according to claim 19 or 21, characterized in that the neural network is a recurrent neural network, and the encoding module is further configured to:
    input each feature vector of the at least one first feature vector into the recurrent neural network in turn; and
    encode, through the recurrent neural network, each feature vector as an element of a sequence, to obtain the target bit stream.
  24. The transmitting-end device according to claim 23, characterized in that the recurrent neural network comprises a long short-term memory (LSTM) neural network.
  25. The transmitting-end device according to any one of claims 19 to 24, characterized in that the encoding module is further configured to:
    encode the at least one first feature vector based on an attention mechanism to obtain the target bit stream.
  26. The transmitting-end device according to claim 25, characterized in that the encoding module further comprises:
    an attention module, configured to extract, based on the attention mechanism, correlation features between elements in the at least one first feature vector to obtain at least one second feature vector; and
    a feature vector compression module, configured to perform feature compression on the at least one second feature vector to obtain the target bit stream.
  27. The transmitting-end device according to any one of claims 19 to 26, characterized in that the transmitting-end device is a terminal device and the receiving-end device is a network device.
  28. A receiving-end device, characterized by comprising:
    a communication module, configured to receive a target bit stream sent by a transmitting-end device, the target bit stream being obtained by the transmitting-end device encoding at least one first feature vector, and the at least one first feature vector being obtained by the transmitting-end device performing eigendecomposition on a channel estimation result; and
    a decoding module, configured to decode the target bit stream through a neural network to obtain at least one target feature vector.
  29. The receiving-end device according to claim 28, characterized in that the channel information comprises channel information respectively corresponding to each of multiple subcarrier groups of the transmitting-end device, and the at least one first feature vector comprises a feature vector respectively corresponding to each of the multiple subcarrier groups.
  30. The receiving-end device according to claim 28 or 29, characterized in that the decoding module is configured to:
    input the target bit stream to the neural network; and
    decode, through the neural network, the target bit stream as information obtained by encoding an image, to obtain a target image, the target image comprising the at least one target feature vector.
  31. The receiving-end device according to claim 30, characterized in that the neural network is a convolutional neural network.
  32. The receiving-end device according to claim 28 or 29, characterized in that the neural network is a recurrent neural network, and the decoding module is further configured to:
    input the target bit stream to the recurrent neural network; and
    decode, through the recurrent neural network, the target bit stream as information obtained by encoding a sequence, to obtain a target sequence, the target sequence comprising the at least one target feature vector.
  33. The receiving-end device according to claim 32, characterized in that the recurrent neural network comprises a long short-term memory (LSTM) neural network.
  34. 根据权利要求28-33中任一项所述的接收端设备,其特征在于,所述解码模块还用于:The receiver device according to any one of claims 28-33, wherein the decoding module is further configured to:
    基于注意力机制对所述目标比特流进行解码,得到至少一个目标特征向量。The target bitstream is decoded based on the attention mechanism to obtain at least one target feature vector.
  35. The receiving end device according to claim 34, wherein the decoding module comprises:
    a feature extraction module, configured to perform feature extraction on the target bit stream to obtain a first feature map;
    an attention module, configured to determine weights of elements in the first feature map based on the attention mechanism;
    a dot product module, configured to perform a dot product of the first feature map and the weights of the elements in the first feature map to obtain a second feature map; and
    a decompression module, configured to decompress the second feature map to obtain the at least one target feature vector.
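The four-module pipeline of claim 35 (feature extraction, attention weighting, dot product, decompression) can be sketched with plain linear layers. All dimensions and weight matrices below are made up for illustration; in the application each stage would be a trained network component.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(2)

# Hypothetical 32-bit feedback bitstream.
bits = rng.integers(0, 2, 32).astype(float)

# 1) Feature extraction: a made-up linear layer produces the first
#    feature map from the bitstream.
W_feat = rng.standard_normal((16, 32)) * 0.1
first_map = W_feat @ bits

# 2) Attention: per-element weights for the first feature map.
W_attn = rng.standard_normal((16, 16)) * 0.1
weights = softmax(W_attn @ first_map)

# 3) Dot product: element-wise weighting yields the second feature map.
second_map = first_map * weights

# 4) Decompression: expand the second feature map back up to the
#    dimensionality of the target feature vectors (four vectors of 16).
W_dec = rng.standard_normal((64, 16)) * 0.1
target = (W_dec @ second_map).reshape(4, 16)
```

The element-wise product in step 3 matches the claim's "dot product of the first feature map and the weights of its elements"; a trained model would learn all three weight matrices end to end.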
  36. The receiving end device according to any one of claims 28 to 35, wherein the receiving end device further comprises:
    a processing module, configured to perform precoding according to the at least one target feature vector.
  37. A sending end device, comprising: a processor and a memory, wherein the memory is configured to store a computer program, and the processor is configured to call and run the computer program stored in the memory to perform the method according to any one of claims 1 to 9.
  38. A chip, comprising: a processor, configured to call and run a computer program from a memory, so that a device in which the chip is installed performs the method according to any one of claims 1 to 9.
  39. A computer-readable storage medium, configured to store a computer program, wherein the computer program causes a computer to perform the method according to any one of claims 1 to 9.
  40. A computer program product, comprising computer program instructions, wherein the computer program instructions cause a computer to perform the method according to any one of claims 1 to 9.
  41. A computer program, wherein the computer program causes a computer to perform the method according to any one of claims 1 to 9.
  42. A network device, comprising: a processor and a memory, wherein the memory is configured to store a computer program, and the processor is configured to call and run the computer program stored in the memory to perform the method according to any one of claims 10 to 18.
  43. A chip, comprising: a processor, configured to call and run a computer program from a memory, so that a device in which the chip is installed performs the method according to any one of claims 10 to 18.
  44. A computer-readable storage medium, configured to store a computer program, wherein the computer program causes a computer to perform the method according to any one of claims 10 to 18.
  45. A computer program product, comprising computer program instructions, wherein the computer program instructions cause a computer to perform the method according to any one of claims 10 to 18.
  46. A computer program, wherein the computer program causes a computer to perform the method according to any one of claims 10 to 18.
PCT/CN2021/087288 2021-04-14 2021-04-14 Channel information feedback method, sending end device, and receiving end device WO2022217506A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2021/087288 WO2022217506A1 (en) 2021-04-14 2021-04-14 Channel information feedback method, sending end device, and receiving end device
CN202180079843.3A CN116569527A (en) 2021-04-14 2021-04-14 Channel information feedback method, transmitting end equipment and receiving end equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/087288 WO2022217506A1 (en) 2021-04-14 2021-04-14 Channel information feedback method, sending end device, and receiving end device

Publications (1)

Publication Number Publication Date
WO2022217506A1 true WO2022217506A1 (en) 2022-10-20

Family

ID=83639978

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/087288 WO2022217506A1 (en) 2021-04-14 2021-04-14 Channel information feedback method, sending end device, and receiving end device

Country Status (2)

Country Link
CN (1) CN116569527A (en)
WO (1) WO2022217506A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024088006A1 (en) * 2022-10-26 2024-05-02 Telefonaktiebolaget Lm Ericsson (Publ) Methods and apparatuses for processing of channel information

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130343479A1 (en) * 2012-06-25 2013-12-26 Samsung Electronics Co., Ltd. Method of transmitting secret information at transmitting end and method of receiving secret information at receiving end, based on mimo multiplexing using antennas
CN109714086A (en) * 2019-01-23 2019-05-03 上海大学 Optimization MIMO detection method based on deep learning
CN110581732A (en) * 2019-09-30 2019-12-17 山东建筑大学 Multi-objective optimization system and method for indoor visible light communication based on neural network
CN112425083A (en) * 2018-07-05 2021-02-26 三星电子株式会社 Method and apparatus for performing beamforming in wireless communication system
CN112511472A (en) * 2020-11-10 2021-03-16 北京大学 Time-frequency second-order equalization method based on neural network and communication system

Also Published As

Publication number Publication date
CN116569527A (en) 2023-08-08

Similar Documents

Publication Publication Date Title
WO2022033456A1 (en) Channel state information measurement feedback method, and related apparatus
CN109873665B (en) Method and device for data transmission
WO2022121797A1 (en) Data transmission method and apparatus
US11956031B2 (en) Communication of measurement results in coordinated multipoint
WO2022217506A1 (en) Channel information feedback method, sending end device, and receiving end device
CN109889247B (en) Low-overhead dynamic feedback safe transmission method and system suitable for narrow-band Internet of things
US20230136416A1 (en) Neural network obtaining method and apparatus
WO2023011472A1 (en) Method for feeding back channel state information, method for receiving channel state information, and terminal, base station, and computer-readable storage medium
WO2023123062A1 (en) Quality evaluation method for virtual channel sample, and device
WO2022222116A1 (en) Channel recovery method and receiving end device
WO2024020793A1 (en) Channel state information (csi) feedback method, terminal device and network device
WO2024108356A1 (en) Csi feedback method, transmitter device and receiver device
WO2024011456A1 (en) Data processing method and apparatus, communication method and apparatus, and terminal device and network device
WO2023004638A1 (en) Channel information feedback methods, transmitting end devices, and receiving end devices
WO2024098259A1 (en) Sample set generation method and device
WO2022236785A1 (en) Channel information feedback method, receiving end device, and transmitting end device
WO2023133886A1 (en) Channel information feedback method, sending end device, and receiving end device
WO2023028948A1 (en) Model processing method, electronic device, network device, and terminal device
WO2023004563A1 (en) Method for obtaining reference signal and communication devices
CN114157722A (en) Data transmission method and device
WO2023115254A1 (en) Data processing method and device
WO2023030538A1 (en) Method for processing channel state information, and terminal, base station and computer-readable storage medium
WO2023015499A1 (en) Wireless communication method and device
WO2022257042A1 (en) Codebook reporting method, and terminal device and network device
WO2023060503A1 (en) Information processing method and apparatus, device, medium, chip, product, and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21936398

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 202180079843.3

Country of ref document: CN

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21936398

Country of ref document: EP

Kind code of ref document: A1