CN116569527A - Channel information feedback method, transmitting end equipment and receiving end equipment - Google Patents


Info

Publication number: CN116569527A
Application number: CN202180079843.3A
Authority: CN (China)
Prior art keywords: neural network, target, feature vector, bit stream, target bit
Legal status: Pending (assumed; not a legal conclusion)
Other languages: Chinese (zh)
Inventors: 肖寒, 田文强, 刘文东
Current Assignee: Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee: Guangdong Oppo Mobile Telecommunications Corp Ltd
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Publication of CN116569527A

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 25/00: Baseband systems
    • H04L 25/02: Details; arrangements for supplying electrical power along data transmission lines
    • H04L 25/03: Shaping networks in transmitter or receiver, e.g. adaptive shaping networks


Abstract

A channel information feedback method, a transmitting end device, and a receiving end device, which help balance the feedback precision of channel information against CSI feedback overhead. The method includes: the transmitting end device receives a reference signal sent by the receiving end device; performs channel estimation according to the reference signal to obtain channel information between the transmitting end device and the receiving end device; performs feature decomposition on the channel information to obtain at least one first feature vector; encodes the at least one first feature vector through a neural network to obtain a target bit stream; and sends the target bit stream to the receiving end device.

Description

Channel information feedback method, transmitting end equipment and receiving end equipment

Technical Field
The embodiments of the present application relate to the field of communication, and in particular to a channel information feedback method, a transmitting end device, and a receiving end device.
Background
In a New Radio (NR) system, the feedback design for the channel state information reference signal (Channel State Information Reference Signal, CSI-RS) mainly uses a codebook-based scheme to extract and feed back channel characteristics. After performing channel estimation, the transmitting end selects, according to the channel estimation result and a specific optimization criterion, the precoding matrix in a preset precoding codebook that best matches the channel estimation result, and feeds back the index of that precoding matrix to the receiving end over an air-interface feedback link, so that the receiving end can perform precoding. The mapping from the estimated channel information to an entry in the precoding codebook is quantized and lossy, which reduces the accuracy of the fed-back channel information and in turn degrades precoding performance; on the other hand, feeding back the full channel information from channel estimation incurs large CSI feedback overhead. Therefore, how to balance the feedback accuracy of channel information against CSI feedback overhead is a problem to be solved.
Disclosure of Invention
The present application provides a channel information feedback method, a transmitting end device, and a receiving end device, which can balance the feedback precision of channel information against CSI feedback overhead.
In a first aspect, a channel information feedback method is provided, including: a transmitting end device receives a reference signal sent by a receiving end device; performs channel estimation according to the reference signal to obtain channel information between the transmitting end device and the receiving end device; performs feature decomposition on the channel information to obtain at least one first feature vector; encodes the at least one first feature vector through a neural network to obtain a target bit stream; and sends the target bit stream to the receiving end device.
In a second aspect, a channel information feedback method is provided, including: a receiving end device receives a target bit stream sent by a transmitting end device, where the target bit stream is obtained by the transmitting end device encoding at least one first feature vector, and the at least one first feature vector is obtained by the transmitting end device performing feature decomposition on a channel estimation result; and the receiving end device decodes the target bit stream through a neural network to obtain at least one target feature vector.
In a third aspect, a transmitting end device is provided for performing the method in the first aspect or any implementation thereof.
Specifically, the transmitting end device includes functional modules for executing the method in the first aspect or any implementation thereof.
In a fourth aspect, a receiving end device is provided for performing the method in the second aspect or each implementation manner thereof.
Specifically, the receiving end device includes a functional module for executing the method in the second aspect or each implementation manner thereof.
In a fifth aspect, a transmitting end device is provided that includes a processor and a memory. The memory is used for storing a computer program, and the processor is used for calling and running the computer program stored in the memory and executing the method in the first aspect or various implementation manners thereof.
In a sixth aspect, a receiving end device is provided that includes a processor and a memory. The memory is used for storing a computer program, and the processor is used for calling and running the computer program stored in the memory to perform the method in the second aspect or any implementation thereof.
In a seventh aspect, a chip is provided for implementing the method in any one of the first and second aspects or any implementation thereof.
Specifically, the chip includes: a processor for calling and running a computer program from a memory, causing a device in which the chip is installed to perform the method in any one of the first and second aspects or any implementation thereof.
In an eighth aspect, a computer-readable storage medium is provided for storing a computer program that causes a computer to perform the method of any one of the above-described first to second aspects or implementations thereof.
In a ninth aspect, a computer program product is provided, comprising computer program instructions that cause a computer to perform the method in any one of the first and second aspects or any implementation thereof.
In a tenth aspect, there is provided a computer program which, when run on a computer, causes the computer to perform the method of any one of the first to second aspects or implementations thereof.
According to the above technical solutions, the transmitting end device performs feature decomposition on the full channel information obtained from channel estimation to obtain at least one feature vector, further encodes the feature vector(s) through a neural network to obtain a target bit stream, and sends the target bit stream to the receiving end; correspondingly, the receiving end decodes the target bit stream to obtain the target feature vector(s). On the one hand, when feeding back channel information, only the target bit stream obtained by encoding the feature vectors produced by feature decomposition of the full channel information needs to be transmitted, which helps reduce CSI overhead. On the other hand, the feature vectors of the full channel information are encoded rather than the full channel information directly, and the correlation among the feature vectors is taken into account; this avoids compressing excessive redundant information that would lower compression efficiency, and thus improves encoding performance.
Drawings
Fig. 1 is a schematic diagram of a communication system architecture provided in an embodiment of the present application.
Fig. 2 is a schematic interaction diagram of a method for channel information feedback according to an embodiment of the present application.
Fig. 3 is a system architecture diagram of a method of channel information feedback according to an embodiment of the present application.
Fig. 4 is a schematic diagram of a neural network.
Fig. 5 is a schematic block diagram of an encoder according to one embodiment of the present application.
Fig. 6 is a schematic block diagram of a decoder according to one embodiment of the present application.
Fig. 7 is a schematic structural view of the common module in fig. 6.
Fig. 8 is a schematic diagram of a recurrent neural network.
Fig. 9 is a schematic block diagram of an encoder according to another embodiment of the present application.
Fig. 10 is a schematic block diagram of a decoder according to another embodiment of the present application.
Fig. 11 is a schematic block diagram of an encoder according to yet another embodiment of the present application.
Fig. 12 is an exemplary block diagram of the self-attention module of fig. 11.
Fig. 13 is a schematic block diagram of a decoder according to still another embodiment of the present application.
Fig. 14 is a schematic structural diagram of the mask block in fig. 13.
Fig. 15 is a schematic structural diagram of the residual block in fig. 13.
Fig. 16 is a schematic block diagram of a transmitting end device according to an embodiment of the present application.
Fig. 17 is a schematic block diagram of a receiving-end device according to an embodiment of the present application.
Fig. 18 is a schematic block diagram of a communication device provided according to an embodiment of the present application.
Fig. 19 is a schematic block diagram of a chip provided according to an embodiment of the present application.
Fig. 20 is a schematic block diagram of a communication system provided according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below with reference to the drawings in the embodiments of the present application. It is apparent that the described embodiments are some, but not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments herein without creative effort shall fall within the protection scope of the present application.
The technical solutions of the embodiments of the present application can be applied to various communication systems, for example: the Global System for Mobile communications (Global System of Mobile communication, GSM), a Code Division Multiple Access (Code Division Multiple Access, CDMA) system, a Wideband Code Division Multiple Access (Wideband Code Division Multiple Access, WCDMA) system, the General Packet Radio Service (General Packet Radio Service, GPRS), a Long Term Evolution (Long Term Evolution, LTE) system, an Advanced Long Term Evolution (Advanced Long Term Evolution, LTE-A) system, a New Radio (NR) system, an evolved system of the NR system, an LTE-based access to unlicensed spectrum (LTE-U) system, an NR-based access to unlicensed spectrum (NR-U) system, a Non-Terrestrial Network (Non-Terrestrial Networks, NTN) system, the Universal Mobile Telecommunication System (Universal Mobile Telecommunication System, UMTS), a Wireless Local Area Network (Wireless Local Area Networks, WLAN), Wireless Fidelity (Wireless Fidelity, WiFi), a fifth-generation (5th-Generation, 5G) communication system, or other communication systems.
Generally, a conventional communication system supports a limited number of connections and is easy to implement. However, with the development of communication technology, mobile communication systems will support not only conventional communication but also, for example, Device-to-Device (D2D) communication, Machine-to-Machine (Machine to Machine, M2M) communication, Machine Type Communication (MTC), Vehicle-to-Vehicle (V2V) communication, and Vehicle-to-everything (V2X) communication; the embodiments of the present application may also be applied to these communication systems.
Optionally, the communication system in the embodiments of the present application may be applied to a carrier aggregation (Carrier Aggregation, CA) scenario, a dual connectivity (Dual Connectivity, DC) scenario, or a standalone (Standalone, SA) networking scenario.
Optionally, the communication system in the embodiments of the present application may be applied to unlicensed spectrum, where unlicensed spectrum may also be considered as shared spectrum; alternatively, the communication system in the embodiments of the present application may also be applied to licensed spectrum, where licensed spectrum may also be considered as non-shared spectrum.
The embodiments of the present application are described in connection with network devices and terminal devices, where a terminal device may also be referred to as a user equipment (User Equipment, UE), an access terminal, a subscriber unit, a subscriber station, a mobile station, a remote terminal, a mobile device, a user terminal, a wireless communication device, a user agent, or the like.
The terminal device may be a station (STATION, ST) in a WLAN, a cellular telephone, a cordless telephone, a Session Initiation Protocol (SIP) phone, a Wireless Local Loop (WLL) station, a Personal Digital Assistant (PDA) device, a handheld device with wireless communication capability, a computing device or another processing device connected to a wireless modem, a vehicle-mounted device, a wearable device, a terminal device in a next-generation communication system such as an NR network, a terminal device in a future evolved Public Land Mobile Network (PLMN), or the like.
In embodiments of the present application, the terminal device may be deployed on land, including indoor or outdoor, hand-held, wearable or vehicle-mounted; can also be deployed on the water surface (such as ships, etc.); but may also be deployed in the air (e.g., on aircraft, balloon, satellite, etc.).
In the embodiment of the present application, the terminal device may be a Mobile Phone (Mobile Phone), a tablet computer (Pad), a computer with a wireless transceiving function, a Virtual Reality (VR) terminal device, an augmented Reality (Augmented Reality, AR) terminal device, a wireless terminal device in industrial control (industrial control), a wireless terminal device in unmanned driving (self driving), a wireless terminal device in remote medical (remote medical), a wireless terminal device in smart grid (smart grid), a wireless terminal device in transportation security (transportation safety), a wireless terminal device in smart city (smart city), or a wireless terminal device in smart home (smart home), and the like.
By way of example and not limitation, in the embodiments of the present application, the terminal device may also be a wearable device. A wearable device, also called a wearable smart device, is a general term for devices such as glasses, gloves, watches, clothing, and shoes developed by applying wearable technology to the intelligent design of everyday wear. A wearable device is a portable device that is worn directly on the body or integrated into the user's clothing or accessories. Wearable devices are not merely hardware devices; they also realize powerful functions through software support, data interaction, and cloud interaction. In a broad sense, wearable smart devices include full-featured, large-size devices that can realize complete or partial functions without relying on a smartphone, such as smart watches or smart glasses, as well as devices that focus on only a certain type of application function and need to be used together with other devices such as smartphones, for example, various smart bracelets and smart jewelry for vital-sign monitoring.
In this embodiment of the present application, the network device may be a device for communicating with a mobile device, where the network device may be an Access Point (AP) in WLAN, a base station (Base Transceiver Station, BTS) in GSM or CDMA, a base station (NodeB, NB) in WCDMA, an evolved base station (Evolutional Node B, eNB or eNodeB) in LTE, a relay station or an Access Point, a vehicle device, a wearable device, and a network device (gNB) in an NR network, or a network device in a PLMN network for future evolution, or a network device in an NTN network, etc.
By way of example and not limitation, in embodiments of the present application, a network device may have a mobile nature, e.g., the network device may be a mobile device. Alternatively, the network device may be a satellite, a balloon station. For example, the satellite may be a Low Earth Orbit (LEO) satellite, a medium earth orbit (medium earth orbit, MEO) satellite, a geosynchronous orbit (geostationary earth orbit, GEO) satellite, a high elliptical orbit (High Elliptical Orbit, HEO) satellite, or the like. Alternatively, the network device may be a base station disposed on land, in a water area, or the like.
In the embodiments of the present application, a network device may provide services for a cell, and a terminal device communicates with the network device through transmission resources (for example, frequency-domain resources or spectrum resources) used by the cell. The cell may be a cell corresponding to a network device (for example, a base station), and may belong to a macro base station or to a base station corresponding to a small cell (Small cell). Small cells here may include metro cells (Metro cells), micro cells (Micro cells), pico cells (Pico cells), femto cells (Femto cells), and the like; these small cells have the characteristics of small coverage and low transmit power and are suitable for providing high-rate data transmission services.
By way of example, fig. 1 shows a communication system 100 to which the embodiments of the present application apply. The communication system 100 may include a network device 110, which may be a device that communicates with a terminal device 120 (also referred to as a communication terminal or terminal). The network device 110 may provide communication coverage for a particular geographic area and may communicate with terminal devices located within the coverage area.
Fig. 1 illustrates one network device and two terminal devices by way of example, and alternatively, the communication system 100 may include a plurality of network devices and may include other numbers of terminal devices within the coverage area of each network device, which is not limited in this embodiment of the present application.
Optionally, the communication system 100 may further include a network controller, a mobility management entity, and other network entities, which are not limited in this embodiment of the present application.
It should be understood that a device having a communication function in a network/system in an embodiment of the present application may be referred to as a communication device. Taking the communication system 100 shown in fig. 1 as an example, the communication device may include a network device 110 and a terminal device 120 with communication functions, where the network device 110 and the terminal device 120 may be specific devices described above, and are not described herein again; the communication device may also include other devices in the communication system 100, such as a network controller, a mobility management entity, and other network entities, which are not limited in this embodiment of the present application.
It should be understood that the terms "system" and "network" are used interchangeably herein. The term "and/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may mean: A exists alone, both A and B exist, or B exists alone. In addition, the character "/" herein generally indicates that the associated objects before and after it are in an "or" relationship.
It should be understood that, in the embodiments of the present application, an "indication" may be a direct indication, an indirect indication, or an indication of an association relationship. For example, A indicates B may mean that A indicates B directly, e.g., B can be obtained through A; it may also mean that A indicates B indirectly, e.g., A indicates C and B can be obtained through C; it may also mean that there is an association between A and B.
In the description of the embodiments of the present application, the term "corresponding" may indicate a direct or indirect correspondence between two items, an association between the two, or a relationship such as indicating and being indicated, or configuring and being configured.
In the embodiments of the present application, "predefined" may be implemented by pre-storing corresponding codes, tables, or other means that can indicate relevant information in devices (including, for example, terminal devices and network devices); the specific implementation is not limited in this application. For example, "predefined" may refer to what is defined in a protocol.
In this embodiment of the present application, the "protocol" may refer to a standard protocol in the communication field, for example, may include an LTE protocol, an NR protocol, and related protocols applied in a future communication system, which is not limited in this application.
In order to facilitate understanding of the technical solutions of the embodiments of the present application, the technical solutions of the present application are described in detail below through specific embodiments. The following related technologies may be optionally combined with the technical solutions of the embodiments of the present application, which all belong to the protection scope of the embodiments of the present application. Embodiments of the present application include at least some of the following.
In the NR system, a codebook-based feedback scheme is used to feed back channel information. In this scheme, the optimal channel eigenvector is selected from a precoding codebook according to the channel estimation result. Because the precoding codebook is finite, the mapping from the channel estimation result to an entry in the precoding codebook is quantized and lossy, which reduces the accuracy of the fed-back channel information and in turn degrades precoding performance; if the full channel information were fed back instead, the CSI feedback overhead would increase. Therefore, how to balance the feedback accuracy of channel information against CSI feedback overhead is an urgent problem to be solved.
Fig. 2 is a schematic interaction diagram of a method 200 of channel information feedback according to an embodiment of the present application, as shown in fig. 2, the method 200 including at least part of the following:
S201, a transmitting end device receives a reference signal sent by a receiving end device;
S202, the transmitting end device performs channel estimation according to the reference signal to obtain channel information between the transmitting end device and the receiving end device;
S203, the transmitting end device performs feature decomposition on the channel information to obtain at least one first feature vector;
S204, the transmitting end device encodes the at least one first feature vector through a neural network to obtain a target bit stream;
S205, the transmitting end device sends the target bit stream to the receiving end device;
S206, the receiving end device decodes the target bit stream to obtain at least one target feature vector.
In some embodiments, the transmitting end device is a terminal device and the receiving end device is a network device.
In other embodiments, the transmitting device is a network device and the receiving device is a terminal device.
In still other embodiments, the sender device is a terminal device and the receiver device is another terminal device.
In still other embodiments, the sender device is a network device and the receiver device is another network device.
It should be understood that the reference signal may differ depending on the transmitting end device and the receiving end device; for example, if the transmitting end device is a terminal device and the receiving end device is a network device, the reference signal may be a demodulation reference signal (Demodulation Reference Signal, DMRS).
In the embodiments of the present application, an encoder is deployed in the transmitting end device and a decoder is deployed in the receiving end device, and both the encoder in the transmitting end device and the decoder in the receiving end device can be implemented through a neural network.
It should be noted that, the embodiment of the present application is not limited to a specific way for the transmitting end device to perform feature decomposition on the channel information. As an example, the transmitting device may perform singular value decomposition (Singular Value Decomposition, SVD) on the channel information to obtain the at least one first eigenvector.
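As an illustrative sketch of the SVD option (not code from the patent; the helper name, matrix shapes, and antenna counts are assumptions), the leading right-singular vector of an estimated channel matrix H can serve as a first feature vector:

```python
import numpy as np

def leading_eigenvector(H: np.ndarray) -> np.ndarray:
    """Return the right-singular vector of H for the largest singular value.

    H: (n_rx, n_tx) complex channel matrix from channel estimation.
    This vector is the dominant eigenvector of H^H @ H, a common choice
    of per-channel "first feature vector" for precoding.
    """
    # np.linalg.svd returns singular values in descending order,
    # so row 0 of Vh corresponds to the largest singular value.
    _, _, Vh = np.linalg.svd(H)
    return Vh[0].conj()

# Example: a random 4 x 8 channel (4 RX antennas, 8 TX antennas)
rng = np.random.default_rng(0)
H = rng.standard_normal((4, 8)) + 1j * rng.standard_normal((4, 8))
v1 = leading_eigenvector(H)  # unit-norm vector of length 8
```

A rank-r feedback scheme would keep the first r right-singular vectors instead of only the leading one.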
In some embodiments of the present application, the channel information is full channel information obtained by performing channel estimation on the reference signal, i.e. unquantized channel information.
Thus, in the embodiments of the present application, the transmitting end device performs feature decomposition on the full channel information obtained from channel estimation to obtain at least one feature vector, further encodes the feature vector(s) through the neural network to obtain a target bit stream, and sends the target bit stream to the receiving end. Correspondingly, the receiving end device decodes the target bit stream to obtain the target feature vector(s) and further performs precoding according to the target feature vector(s).
Based on the channel information feedback scheme of the embodiments of the present application, on the one hand, when feeding back channel information, the transmitting end device only needs to send the target bit stream obtained by encoding the feature vectors produced by feature decomposition of the full channel information, which reduces CSI overhead; on the other hand, the feature vectors of the full channel information are encoded rather than the full channel information being compressed and encoded directly, so the correlation among the feature vectors can be taken into account, excessive compression of redundant information is avoided, compression efficiency is improved, and encoding performance is further improved.
In some embodiments of the present application, the receiving end device may perform precoding according to at least one target feature vector obtained by decoding, for example, may perform beamforming according to the at least one target feature vector.
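The precoding step at the receiving end can be illustrated as textbook rank-1 beamforming with a decoded feature vector (the function name and example vector below are assumptions for illustration):

```python
import numpy as np

def precode(symbol: complex, w: np.ndarray) -> np.ndarray:
    """Map one data symbol onto n_tx antennas with precoding vector w.

    w is a decoded target feature vector (assumed unit-norm); this is
    standard rank-1 beamforming, not code from the patent.
    """
    return w * symbol  # per-antenna transmit signal

# Example with a unit-norm 4-antenna precoding vector
w = np.array([1.0, 1.0j, -1.0, -1.0j]) / 2.0
x = precode(1.0 + 0.0j, w)
```

Because w has unit norm, precoding does not change the transmit power of the symbol.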
In some scenarios, the scheduling bandwidth of the transmitting end device may include a plurality of subcarrier groups, which respectively correspond to a plurality of subbands, and in some embodiments of the application, the transmitting end device may respectively feed back channel information on the plurality of subbands of the transmitting end device.
In some embodiments, the transmitting device may perform channel estimation according to the reference signal to obtain channel information on a plurality of subbands, and further perform feature decomposition on the channel information on the plurality of subbands to obtain feature vectors corresponding to the plurality of subbands, respectively.
That is, the full channel information of the channel estimation includes channel information corresponding to each of a plurality of subcarrier groups of the transmitting-end apparatus. In other words, the full channel information may include channel information corresponding to each of a plurality of subbands of the transmitting end device, respectively.
Correspondingly, the at least one first feature vector includes a feature vector corresponding to each of the plurality of subcarrier groups; in other words, it includes a feature vector corresponding to each of the plurality of subbands.
Because the feature vectors of the channel information on the plurality of subbands are correlated (for example, channel information on adjacent subbands shares more correlated information), jointly compressing and encoding the feature vectors of the plurality of subbands exploits this correlation, which helps improve compression efficiency and enhance feedback and recovery performance.
Fig. 3 is a schematic system architecture diagram of a method of channel feedback according to an embodiment of the present application.
The transmitting end device may perform feature decomposition on the channel information of the n subbands to obtain feature vectors corresponding to the n subbands, denoted W_1, W_2, …, W_n, and then input the feature vectors corresponding to the n subbands into an encoder. The encoder jointly encodes the feature vectors of the n subbands through a neural network to obtain a target bit stream, denoted B. Further, the transmitting end device sends the target bit stream B to the receiving end device, and the decoder of the receiving end device decodes and recovers the target bit stream B to obtain n target feature vectors, denoted W'_1, W'_2, …, W'_n.
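The data flow above (n subband feature vectors in, one bit stream out, vectors recovered at the receiver) can be sketched end to end. The patent's encoder and decoder are neural networks; purely to keep the sketch self-contained, an 8-bit uniform scalar quantizer stands in for them here, and all names and shapes are assumptions:

```python
import numpy as np

def encode_subband_vectors(W: np.ndarray) -> np.ndarray:
    """Encode n subband feature vectors W_1..W_n into one bit stream B.

    W: (n_subbands, n_tx) complex, entries in [-1, 1] (unit-norm rows).
    An 8-bit uniform scalar quantizer stands in for the patent's
    neural-network encoder, purely to show the W -> B data flow.
    """
    x = np.concatenate([W.real.ravel(), W.imag.ravel()])
    q = np.clip(np.round((x + 1.0) / 2.0 * 255.0), 0, 255).astype(np.uint8)
    return np.unpackbits(q)  # target bit stream B as a 0/1 array

def decode_bitstream(B: np.ndarray, n_subbands: int, n_tx: int) -> np.ndarray:
    """Recover target feature vectors W'_1..W'_n from the bit stream B."""
    x = np.packbits(B).astype(np.float64) / 255.0 * 2.0 - 1.0
    half = n_subbands * n_tx
    return (x[:half] + 1j * x[half:]).reshape(n_subbands, n_tx)

# Round trip: 4 subbands, 8 TX antennas, unit-norm feature vector per row
rng = np.random.default_rng(3)
W = rng.standard_normal((4, 8)) + 1j * rng.standard_normal((4, 8))
W /= np.linalg.norm(W, axis=1, keepdims=True)
B = encode_subband_vectors(W)          # 4 * 8 * 2 * 8 = 512 bits
W_rec = decode_bitstream(B, 4, 8)      # stands in for W'_1..W'_n
```

A learned encoder would replace the fixed quantizer with a network that exploits the correlation across subbands, which is exactly where the joint-encoding gain described above comes from.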
Therefore, in the embodiments of the present application, the transmitting end device performs joint compression and feedback on the feature vectors of the channel information of the plurality of subbands on the scheduling bandwidth, and correspondingly, the receiving end device can decompress and reconstruct the feature vectors of the plurality of subbands. By utilizing the correlation information among the feature vectors, this helps improve compression efficiency and thus enhances the compression, feedback, and decompression performance of the overall system.
Alternatively, in some embodiments, the scheduling bandwidth may be at least one BWP, or at least one carrier, or at least one frequency band, etc., which is not limited in this application.
It should be understood that embodiments of the present application do not limit the specific implementation of the encoder in the transmitting end device or the decoder in the receiving end device. In the following, implementations of the encoder in the transmitting end device and the decoder in the receiving end device are described in connection with specific embodiments.
In some embodiments of the present application, the encoder in the transmitting end and the decoder in the receiving end may be implemented using the neural network structure in fig. 4. The neural network comprises an input layer, a hidden layer and an output layer: the input layer receives data, the hidden layer processes the received data, and the processing result is produced at the output layer. In the neural network, each node represents a processing module that can be considered to simulate a neuron; a plurality of neurons form one layer of the network, and the whole neural network is constructed through multi-layer information transfer and processing.
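The layered structure just described can be sketched minimally (all dimensions and the tanh nonlinearity are illustrative assumptions, not taken from the patent):

```python
import numpy as np

def mlp_forward(x, params):
    """Minimal input -> hidden -> output pass: each layer weights its inputs,
    adds a bias, and the hidden layer applies a nonlinearity (tanh here)."""
    W1, b1, W2, b2 = params
    h = np.tanh(x @ W1 + b1)   # hidden layer processes the received data
    return h @ W2 + b2         # output layer produces the processing result

rng = np.random.default_rng(1)
params = (rng.standard_normal((8, 16)), np.zeros(16),   # input -> hidden
          rng.standard_normal((16, 4)), np.zeros(4))    # hidden -> output
y = mlp_forward(rng.standard_normal(8), params)
print(y.shape)  # (4,)
```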
In some embodiments of the present application, the encoder of the transmitting end device may process the input at least one first feature vector using computer vision techniques, for example, compression-encoding the at least one first feature vector as an image to be compressed through a neural network to obtain the target bit stream.
Correspondingly, the decoder of the receiving end device may decode and recover the target bit stream as information obtained by compressing an image, thereby obtaining a target image, where the target image includes the at least one target feature vector.
In some embodiments, the encoder of the transmitting end device may perform compression encoding on the at least one first feature vector by using a neural network for image processing, for example, the neural network for image processing may be a convolutional neural network, or other neural network with better image processing performance, which is not limited in this application.
In some embodiments, the decoder of the receiving end device may decompress the target bit stream using a neural network for image processing; for example, the neural network for image processing may be a convolutional neural network, or another neural network with good image processing performance, which is not limited in this application.
Optionally, the convolutional neural network comprises an input layer, at least one convolutional layer, at least one pooling layer, a fully connected layer and an output layer. By introducing the convolutional and pooling layers, the rapid growth of network parameters is effectively controlled relative to the neural network architecture in fig. 4, which limits the number of parameters, facilitates mining local structural features, and improves the robustness of the algorithm.
In some embodiments, the encoding, by the neural network, the at least one first feature vector to obtain a target bitstream includes:
splicing the at least one first feature vector into a feature vector matrix, and inputting the feature vector matrix into the neural network;
and the neural network encodes the eigenvector matrix as an image to be compressed to obtain the target bit stream.
For example, the feature vectors w_1, w_2, …, w_n corresponding to the n subbands are spliced into a feature vector matrix W = [w_1, w_2, …, w_n]^T, and the feature vector matrix W is input into the neural network as an image to be compressed; the neural network compression-encodes the feature vector matrix W as an image to be compressed to obtain the target bit stream B.
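A hedged sketch of this splicing step, with hypothetical dimensions and real-valued vectors for simplicity (a real implementation would typically split complex vectors into real and imaginary channels):

```python
import numpy as np

# Hypothetical subband feature vectors w_1 ... w_n (n = 4, each of length d = 8).
n, d = 4, 8
vecs = [np.full(d, float(i)) for i in range(1, n + 1)]

W = np.stack(vecs)           # splice into W = [w_1, w_2, ..., w_n]^T, shape (n, d)
image = W[np.newaxis, ...]   # add a channel axis: treat W as a 1 x n x d "image"
print(image.shape)           # (1, 4, 8)
```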
Correspondingly, at the decoding end, the decoding the target bit stream through the neural network to obtain at least one target feature vector, including:
inputting the target bit stream to the neural network;
and decoding the target bit stream through the neural network as information obtained by encoding the image to obtain a target image, wherein the target image comprises the at least one target feature vector.
That is, the decoder at the receiving end decompresses the received target bit stream B as information obtained by compressing the image, thereby obtaining a target image including at least one target feature vector.
In some embodiments, the encoder in the sender device and the decoder in the receiver device may be implemented using the network structures of fig. 5 and 6, respectively.
In some embodiments, the encoder in the sender device may include: the feature extraction module is used for receiving an input feature vector matrix W, and extracting features of the feature vector matrix W to obtain a feature map corresponding to the feature vector matrix W.
In some embodiments, the feature extraction module performs feature extraction by convolving the feature vector matrix W with convolution kernels of different sizes, so as to obtain feature maps with different receptive fields over the feature vector matrix W, increase the nonlinearity in the convolution process, and improve the expressive capability of the convolutional neural network.
As an example, as shown in fig. 5, the feature extraction module may include a 3×3 convolution layer, a 5×5 convolution layer, and a 7×7 convolution layer. Wherein the 3x3 convolution layer uses a 3x3 convolution kernel, the 5x5 convolution layer uses a 5x5 convolution kernel, and the 7x7 convolution layer uses a 7x7 convolution kernel. In practical applications, convolution kernels of other sizes may be substituted, which is not limited in this application.
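The multi-kernel feature extraction can be illustrated with a naive single-channel convolution (implemented as cross-correlation with zero padding, as in deep-learning frameworks; kernel values and dimensions are illustrative assumptions):

```python
import numpy as np

def conv2d_same(x, k):
    """Naive single-channel 2-D convolution with zero padding ('same' output)."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

rng = np.random.default_rng(2)
W = rng.standard_normal((4, 8))                      # feature vector matrix as "image"
maps = [conv2d_same(W, rng.standard_normal((s, s)))  # 3x3, 5x5, 7x7 receptive fields
        for s in (3, 5, 7)]
stacked = np.stack(maps)                             # channel-dimension splicing
print(stacked.shape)                                 # (3, 4, 8)
```

The three same-sized feature maps stack along the channel dimension, which is what the splicing module described next consumes.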
Further, the feature images output by the feature extraction module are input to the splicing module to be combined, for example, the splicing module is used for realizing the channel dimension splicing of the feature images.
As an example, as shown in fig. 5, the splicing module may be implemented by a 1×1 convolution layer, where the 1×1 convolution layer uses a 1×1 convolution kernel, and the number of channels may be controlled by controlling the number of convolution kernels in the 1×1 convolution layer. The convolution process using the 1×1 convolution kernel is equivalent to the computation of a fully connected layer, and a nonlinear activation function is added, which helps introduce further nonlinearity into the network so that the features expressed by the neural network are more complex.
Further, as shown in fig. 5, the feature map output from the splicing module is converted into the target bit stream B after being processed by the fully connected layer and the quantization layer.
It should be understood that the convolutional neural network structure in fig. 5 is merely an example, and in practical application, the convolutional neural network structure may be flexibly designed according to information such as the number of subbands and the coding and decoding performance requirements, for example, network layers such as an activation layer and a normalization layer are added between the convolutional layers, which is not limited in this application.
In some embodiments, as shown in fig. 6, the decoder may include: a fully connected layer, a dimension adjustment module, and a residual module. The target bit stream B received from the transmitting end device is first input into the fully connected layer and the dimension adjustment module and converted to the dimensions of the feature vector matrix W, and is then input into the residual module, which outputs a feature vector matrix W' formed by the at least one target feature vector.
In some embodiments, as shown in fig. 6, the residual module may include a convolution layer, L common modules, a convolution layer, and a summation module, where L is a positive integer. The output of the dimension adjustment module is first duplicated: one copy is input to the summation module, and the other is input to the convolution layer, which expands the number of channels using its convolution kernels; deeper feature information is then extracted through the L common modules. The output of the L common modules passes through a convolution layer that reduces the number of channels, and the output of that convolution layer is input to the summation module to be summed with the duplicated output of the dimension adjustment module, thereby obtaining the feature vector matrix W'.
It should be understood that the number and internal structure of the common modules are not particularly limited in this application; as an example, the common module may adopt the structure shown in fig. 7, but the present application is not limited thereto.
It should be understood that the convolutional neural network structure in fig. 6 is merely an example, and in practical application, the convolutional neural network structure may be flexibly configured according to information such as the number of subbands, coding and decoding performance requirements, and the present application is not limited thereto.
In some embodiments of the present application, the model parameters of the neural networks of the encoder and the decoder are obtained by joint training. For example, the model parameters of the neural networks of the encoder and the decoder are first initialized; multiple sets of feature vector matrix samples are input into the neural network of the encoder for encoding to obtain multiple target bit streams; the target bit streams are then input into the decoder for decoding; and the model parameters of the encoder and the decoder are adjusted according to the decoding results until the feature vectors output by the decoder and the feature vectors input into the neural network of the encoder satisfy a convergence condition.
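The joint training procedure can be illustrated with a deliberately simplified linear encoder/decoder pair trained by gradient descent on reconstruction error (all dimensions, the mean-squared-error loss, and the optimizer are assumptions for illustration; the patent's networks are nonlinear and include quantization):

```python
import numpy as np

rng = np.random.default_rng(3)
d, k, lr = 8, 3, 0.05                  # feature dim, code dim, learning rate (assumed)
E = rng.standard_normal((k, d)) * 0.1  # initialized encoder parameters
D = rng.standard_normal((d, k)) * 0.1  # initialized decoder parameters

# Feature-vector samples from a rank-k source, so they are compressible to k numbers.
basis = rng.standard_normal((d, k))
X = rng.standard_normal((256, k)) @ basis.T

losses = []
for step in range(1000):               # joint gradient-descent training
    Z = X @ E.T                        # encode every sample
    X_hat = Z @ D.T                    # decode
    err = X_hat - X
    losses.append(float(np.mean(err ** 2)))
    g = 2 * err / err.size             # d(loss)/d(X_hat)
    gD = g.T @ Z                       # decoder gradient
    gE = (g @ D).T @ X                 # encoder gradient (chain rule through Z)
    D -= lr * gD
    E -= lr * gE

print(losses[-1] < losses[0])          # True: reconstruction error decreased
```

Encoder and decoder are updated from the same reconstruction loss, which is the essence of the joint training described above.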
Therefore, in the embodiment of the application, the transmitting end device compression-encodes the feature vectors corresponding to the channel information of the plurality of subbands as an image through the convolutional neural network to obtain the target bit stream. Correspondingly, the receiving end device decompresses the target bit stream through the convolutional neural network to restore it into a target image, thereby obtaining the at least one target feature vector. On one hand, compared with directly encoding the full channel information obtained by channel estimation, encoding the feature vectors of the full channel information helps avoid compressing excessive redundant information and reduces the CSI feedback overhead. On the other hand, joint compression feedback is performed according to the cross-correlation information among the feature vectors of the plurality of frequency-domain units, so that the compression feedback performance is improved.
In other embodiments of the present application, the encoder of the transmitting end device may process the input at least one first feature vector using a recurrent neural network (RNN), for example, compressing the at least one first feature vector as elements of a sequence through the neural network to obtain the encoding result, that is, the target bit stream.
Correspondingly, the decoder of the receiving end device may use a recurrent neural network to decompress and recover the input target bit stream as information obtained by compression-encoding a sequence, thereby obtaining a target sequence composed of the at least one target feature vector.
As described above, a neural network comprises an input layer, a hidden layer and an output layer; the output is controlled by an activation function, and the layers are connected through weights. The activation function is determined in advance, and what the neural network model learns through training is contained in the weights. A basic neural network establishes weighted connections only between layers; the biggest difference of the RNN from a basic neural network is that weighted connections are also established among the neurons of the hidden layer across time steps.
FIG. 8 is a schematic diagram of a typical RNN, where each arrow in FIG. 8 represents a transformation, that is, the arrow connections carry weights. The left side of fig. 8 is the folded view of the RNN structure and the right side is the unfolded view; the arrow beside h on the left represents the "recurrence" in this structure, which is embodied in the hidden layer.
It can be seen from the unfolded RNN structure diagram that the neurons of the hidden layer are connected by weights; that is, as the sequence advances, earlier hidden layers affect later ones. In fig. 8, x represents the input, h represents the hidden unit, o represents the output, y represents the label of the training set, L represents the loss function, U, V and W represent weights, and t-1, t and t+1 represent successive time steps. It can thus be seen that the "loss" also accumulates continuously as the sequence advances. Based on the above structure, the RNN performs well in processing sequence data; that is, the RNN is a recurrent neural network that recurses in the evolution direction of the sequence, with all nodes (recurrent units) connected in a chain.
Long short-term memory (Long Short-Term Memory, LSTM) is an evolved RNN. Unlike a typical RNN architecture, LSTM introduces the concept of a cell state: whereas an RNN considers only the most recent state, the cell state of LSTM determines which states should be kept and which should be forgotten, overcoming the shortcoming of traditional RNNs in long-term memory.
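A minimal LSTM step in the form just described — forget, input, and output gates operating on a cell state — might look like this (dimensions, initialization, and the concatenated-input parameterization are illustrative assumptions):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, P):
    """One LSTM step: the cell state c decides what to keep and what to forget."""
    z = np.concatenate([x, h])
    f = sigmoid(P["Wf"] @ z + P["bf"])  # forget gate: what to drop from c
    i = sigmoid(P["Wi"] @ z + P["bi"])  # input gate: what to write to c
    g = np.tanh(P["Wg"] @ z + P["bg"])  # candidate cell update
    o = sigmoid(P["Wo"] @ z + P["bo"])  # output gate
    c = f * c + i * g                   # new cell state
    h = o * np.tanh(c)                  # new hidden state
    return h, c

rng = np.random.default_rng(4)
d_in, d_h = 8, 16
P = {f"W{k}": rng.standard_normal((d_h, d_in + d_h)) * 0.1 for k in "figo"}
P.update({f"b{k}": np.zeros(d_h) for k in "figo"})
h, c = np.zeros(d_h), np.zeros(d_h)
for w in rng.standard_normal((4, d_in)):  # feed the n subband vectors in turn
    h, c = lstm_step(w, h, c, P)
print(h.shape)                            # (16,)
```

Feeding the subband feature vectors one per step, as in the loop above, is how the encoder described next treats them as elements of a sequence.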
In some embodiments of the present application, the at least one first feature vector may be processed by the basic RNN to obtain the target bitstream B, or may be processed by the LSTM to obtain the target bitstream B.
In some embodiments, the encoding, by the neural network, the at least one first feature vector to obtain a target bitstream includes:
inputting each of the at least one first eigenvector into the recurrent neural network in turn;
and encoding each feature vector as a sequence element through the recurrent neural network to obtain the target bit stream.
For example, each feature vector in the at least one first feature vector is sequentially input to the LSTM, and the target bitstream is obtained by encoding each feature vector as an element of a sequence through the LSTM.
As an example, the feature vectors w_1, w_2, …, w_n corresponding to the n sub-bands respectively are sequentially input as different elements of the sequence to the LSTM, and the sequence is processed by the LSTM to obtain the target bit stream.
Correspondingly, at the decoding end, the decoding the target bit stream through the neural network to obtain at least one target feature vector, including:
inputting the target bit stream to the recurrent neural network;
and decoding, through the recurrent neural network, the target bit stream as information obtained by encoding a sequence to obtain a target sequence, wherein the target sequence comprises the at least one target feature vector.
That is, the decoder at the receiving end decompresses the received target bit stream B as information obtained by compressing the sequence, to obtain a target sequence including at least one target feature vector.
In the following, specific encoding and decoding processes are described taking as an example the network structures in fig. 9 and 10 employed by the encoder in the transmitting-side apparatus and the decoder in the receiving-side apparatus, respectively. It should be understood that the neural network structures illustrated in fig. 9 and 10 are merely examples, and in practical applications, the neural network structures may be flexibly configured according to information such as the number of subbands, codec performance requirements, and the like, and the application is not limited thereto.
In some embodiments, as shown in fig. 9, the encoder may include: and the LSTM module is used for sequentially receiving each characteristic vector in the at least one first characteristic vector and processing each characteristic vector as an element of the sequence.
Further, the encoder may include a full connection layer and a quantization layer for converting the result processed by the LSTM module to obtain the target bit stream B.
In some embodiments, as shown in fig. 10, the decoder may include: a fully connected layer, a plurality of LSTM modules, and a fully connected layer connected to each of the plurality of LSTM modules. The first fully connected layer performs dimension conversion on the target bit stream B; its output serves as the input of the first LSTM module, and the output of each LSTM module serves as the input of the next LSTM module. After the outputs of the LSTM modules pass through their corresponding fully connected layers, the corresponding feature vector matrix W' = [w'_1, w'_2, …, w'_n]^T is output.
Therefore, in the embodiment of the application, the transmitting end device compresses the feature vectors of the plurality of subbands as elements of a sequence through the recurrent neural network to obtain the target bit stream. The receiving end device decompresses the target bit stream through the recurrent neural network to restore it into sequence elements, thereby obtaining the at least one target feature vector. On one hand, compared with directly encoding the full channel information obtained by channel estimation, encoding the feature vectors of the full channel information helps avoid compressing excessive redundant information and reduces the CSI feedback overhead. On the other hand, joint compression feedback is performed according to the cross-correlation information among the feature vectors of the plurality of frequency-domain units, so that the compression feedback performance is improved.
In still other embodiments of the present application, the encoding, by the neural network, the at least one first feature vector to obtain a target bitstream includes:
and encoding the at least one first feature vector based on an attention mechanism to obtain the target bit stream.
Alternatively, in the embodiment of the present application, the attention mechanism may be a self-attention mechanism, or may be another attention mechanism, which is not limited in this application.
The self-attention mechanism employs a "query-key-value" mode.
The computation of attention is mainly divided into three steps:
the first step: compute the similarity between the query and each key to obtain a weight, where common similarity functions include the dot product, concatenation, the perceptron, and the like;
the second step: normalize the weights using a softmax function;
the third step: compute the weighted sum of the weights and the corresponding values to obtain the final attention.
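The three steps above can be sketched directly (a dot-product similarity is used; dimensions and the shared projection of one input into queries, keys and values are illustrative assumptions):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Query-key-value attention: similarity -> softmax -> weighted sum."""
    scores = Q @ K.T                     # step 1: dot-product similarity
    weights = softmax(scores, axis=-1)   # step 2: softmax normalization
    return weights @ V, weights          # step 3: weighted sum of the values

rng = np.random.default_rng(5)
n, d = 4, 8                              # e.g. n subband feature vectors of length d
X = rng.standard_normal((n, d))
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
out, weights = attention(X @ Wq, X @ Wk, X @ Wv)  # self-attention: Q, K, V share X
print(out.shape)                         # (4, 8)
```

Because queries, keys and values are all projections of the same input X, this is the self-attention case discussed above: each output row mixes information from every subband's feature vector.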
In some embodiments of the present application, the transmitting end device extracts correlation features between the feature vectors of the plurality of subbands and between the elements of the feature vectors based on an attention mechanism, so as to further improve the encoding and decompression performance. For example, the transmitting end device extracts the correlation features of the elements in the at least one first feature vector based on the attention mechanism to obtain at least one second feature vector, and then compresses the at least one second feature vector to obtain the target bit stream B.
Correspondingly, at the decoding end, the receiving end device may decode the target bit stream based on the attention mechanism to obtain at least one target feature vector.
Alternatively, in the embodiment of the present application, the attention mechanism may be a self-attention mechanism, or may be another attention mechanism, which is not limited in this application.
In some embodiments of the present application, the decoder first performs feature extraction on the target bit stream to obtain a first feature map of the target bit stream; then determines, based on an attention mechanism, weights for the elements in the first feature map; multiplies the first feature map element-wise with these weights to obtain a second feature map of the target bit stream; and decompresses the second feature map to obtain the at least one target feature vector.
In the following, specific encoding and decoding processes are described taking as an example the network structures in fig. 11 and 13 employed by the encoder in the transmitting-side apparatus and the decoder in the receiving-side apparatus, respectively. It should be understood that the neural network structures illustrated in fig. 11 and 13 are merely examples, and in practical applications, the neural network structures may be flexibly configured according to information such as the number of subbands, codec performance requirements, and the like, and the present application is not limited thereto.
In some embodiments, as shown in fig. 11, the encoder may include: s cascaded self-attention modules, a splicing module, a fully connected layer and a quantization layer, where s is a positive integer. Each self-attention module extracts the correlation features of the elements in the plurality of input first feature vectors using a self-attention mechanism, and a plurality of second feature vectors are output after the s cascaded self-attention modules. Further, the plurality of second feature vectors are input to the splicing module to be spliced into one feature vector, and the target bit stream B is obtained after processing by the fully connected layer and the quantization layer.
In some embodiments, the attention module may be implemented using the structure shown in fig. 12, but the present application is not limited thereto.
As an example, as shown in fig. 12, the attention module includes: n fully connected layers, a self-attention layer, and n groups of two fully connected layers. The n fully connected layers each receive one feature vector, which may be a first feature vector as described above or a feature vector output by the previous attention module; the self-attention layer receives the feature vectors output by the n fully connected layers and extracts the correlation features among their elements. Each output of the self-attention layer is first duplicated; each output then passes through a corresponding group of two fully connected layers and is summed with its duplicated copy to form an output of the self-attention module.
In some embodiments, as shown in fig. 13, the decoder may include a fully connected layer, a dimension adjustment module, and a feature extraction module. The target bit stream B is first input into the fully connected layer and the dimension adjustment module and converted to the dimensions of the feature vector matrix W; it is then input into the feature extraction module for feature extraction, obtaining a first feature map, while the output of the dimension adjustment module is also duplicated.
Alternatively, as shown in fig. 13, the feature extraction module may include a convolution layer, an activation function, q cascaded residual blocks, and t cascaded residual blocks, where q and t are positive integers.
The convolution layer is used to adjust the number of channels of the output of the dimension adjustment module; the output of the convolution layer is input to the q residual blocks through the activation function, and the output of the activation function is also duplicated.
After passing through the q residual blocks, the output of the activation function is divided into two paths: one path passes through the t residual blocks to obtain a second feature map, and the other path is input to the self-attention module to extract the attention weights of the elements in the feature vector matrix.
In some embodiments, the self-attention module may be implemented by a masking block, or may also be implemented by the attention module in fig. 11, which is not limited in this application.
For example, the output of the q residual blocks may be input to a mask block to extract the attention mask of the elements in the feature map.
Further, the output of the mask block and the output of the t residual blocks are multiplied element-wise, and the result is added to the duplicated output of the activation function.
Then, the summed result is input to q residual blocks for result correction, the channel dimension is reduced through a convolution layer, the result is added to the duplicated output of the dimension adjustment module, and the recovered feature vector matrix W' = [w'_1, w'_2, …, w'_n]^T is output.
Fig. 14 is an exemplary block diagram of the mask block, but the present application is not limited thereto.
As shown in fig. 14, the mask block may include m residual blocks, a downsampling module, 2m residual blocks, an upsampling module, m residual blocks, a convolution layer and a sigmoid activation function; that is, the mask block may be implemented with 4m residual blocks in total. Downsampling is performed after the first m residual blocks and upsampling after the first 3m residual blocks, so that the middle 2m residual blocks extract features from global information over a wider area. Finally, the convolution layer matches the channel dimension, and the sigmoid activation function maps the output of the convolution layer to between 0 and 1, thereby obtaining the attention weights.
It should be understood that the residual block in the embodiment of the present application may be implemented by using the structure shown in fig. 15, or may also be implemented by using other equivalent structures, which is not limited in this application.
As an example, as shown in fig. 15, the residual block may include: a convolution layer, a normalization layer, an activation function, a convolution layer and a summation module. The input of the residual block is duplicated: one copy is input to the summation module, and the other passes through the first convolution layer, the normalization layer, the activation function, and the second convolution layer. The summation module sums the input of the residual block with the output of the second convolution layer as the output of the residual block. Designing suitable residual blocks in a neural network helps alleviate the gradient problems of deep networks.
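A hedged sketch of this residual structure, with the convolutions reduced to plain matrix multiplies and the normalization simplified for brevity (all names and dimensions are illustrative, not from the patent):

```python
import numpy as np

def residual_block(x, params, eps=1e-5):
    """conv -> normalization -> activation -> conv, summed with the input copy.
    Convolutions are reduced to matrix multiplies for brevity."""
    W1, W2 = params
    y = x @ W1                                    # first "convolution"
    y = (y - y.mean()) / np.sqrt(y.var() + eps)   # normalization layer
    y = np.maximum(y, 0.0)                        # activation (ReLU assumed)
    y = y @ W2                                    # second "convolution"
    return x + y                                  # identity shortcut + branch

rng = np.random.default_rng(6)
d = 8
params = (rng.standard_normal((d, d)) * 0.1, rng.standard_normal((d, d)) * 0.1)
x = rng.standard_normal(d)
print(residual_block(x, params).shape)            # (8,)
```

The identity shortcut means that even if the learned branch contributes nothing, the input passes through unchanged, which is what keeps gradients flowing in deep stacks of such blocks.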
Therefore, in the embodiment of the present application, the transmitting end device uses the attention mechanism to compress the feature vectors of the plurality of subbands to obtain the target bit stream. Correspondingly, the receiving end device decompresses and recovers the target bit stream using the attention mechanism, thereby obtaining the at least one target feature vector. On one hand, compared with directly encoding the full channel information obtained by channel estimation, encoding the feature vectors of the full channel information helps avoid compressing excessive redundant information and reduces the CSI feedback overhead. On the other hand, joint compression feedback is performed on the cross-correlation information among the elements of the feature vectors of the plurality of subbands using the attention mechanism, taking into account both the correlation among the feature vectors of the plurality of subbands and the correlation among the elements of each feature vector, so that the compression feedback performance is improved.
The method embodiments of the present application are described in detail above in connection with fig. 2 to 15, and the apparatus embodiments of the present application are described in detail below in connection with fig. 16 to 20, it being understood that the apparatus embodiments and the method embodiments correspond to each other, and similar descriptions may refer to the method embodiments.
Fig. 16 shows a schematic block diagram of a sender device 400 according to an embodiment of the present application. As shown in fig. 16, the transmitting-end apparatus 400 includes:
a communication module 410, configured to receive a reference signal sent by a receiving end device;
the processing module 420 is configured to perform channel estimation according to the reference signal, obtain channel information between the transmitting end device and the receiving end device, and perform feature decomposition on the channel information to obtain at least one first feature vector;
an encoding module 430, configured to encode the at least one first feature vector through a neural network to obtain a target bitstream;
the communication module 410 is further configured to: and sending the target bit stream to the receiving end equipment.
In some embodiments of the present application, the channel information includes channel information corresponding to each of a plurality of subcarrier groups of the transmitting end device, and the at least one first feature vector includes a feature vector corresponding to each of the plurality of subcarrier groups.
In some embodiments of the present application, the encoding module 430 is specifically configured to:
splicing the at least one first feature vector into a feature vector matrix, and inputting the feature vector matrix into the neural network;
and encoding the feature vector matrix serving as an image to be compressed through the neural network to obtain the target bit stream.
In some embodiments of the present application, the neural network is a convolutional neural network.
In some embodiments of the present application, the neural network is a recurrent neural network, and the encoding module 430 is further configured to:
inputting each of the at least one first eigenvector into the recurrent neural network in turn;
and encoding each feature vector as a sequence element through the recurrent neural network to obtain the target bit stream.
In some embodiments of the present application, the recurrent neural network includes a long short term memory LSTM neural network.
In some embodiments of the present application, the encoding module 430 is further configured to:
and encoding the at least one first feature vector based on an attention mechanism to obtain the target bit stream.
In some embodiments of the present application, the encoding module 430 further includes:
An attention module, configured to extract correlation features between elements in the at least one first feature vector based on an attention mechanism, and obtain at least one second feature vector;
and the feature vector compression module is used for carrying out feature compression on the at least one second feature vector to obtain the target bit stream.
In some embodiments of the present application, the transmitting end device is a terminal device, and the receiving end device is a network device.
Alternatively, in some embodiments, the communication module may be a communication interface or transceiver, or an input/output interface of a communication chip or a system on a chip. The processing module and the encoding module may be one or more processors.
It should be understood that the sender device 400 according to the embodiments of the present application may correspond to the sender device in the embodiments of the methods of the present application, and the foregoing and other operations and/or functions of each unit in the sender device 400 are respectively for implementing the corresponding flows of the sender device in the method 200 shown in fig. 2 to 15, which are not repeated herein for brevity.
Fig. 17 is a schematic block diagram of a receiving-end apparatus according to an embodiment of the present application. The receiving-end apparatus 500 of fig. 17 includes:
A communication module 510, configured to receive a target bit stream sent by a transmitting end device, where the target bit stream is obtained by the transmitting end device encoding at least one first feature vector, and the at least one first feature vector is obtained by the transmitting end device performing feature decomposition on a result of channel estimation;
The decoding module 520 is configured to decode the target bit stream through a neural network to obtain at least one target feature vector.
In some embodiments of the present application, the channel information includes channel information corresponding to each of a plurality of subcarrier groups of the transmitting end device, and the at least one first feature vector includes a feature vector corresponding to each of the plurality of subcarrier groups.
In some embodiments of the present application, the decoding module 520 is configured to:
input the target bit stream into the neural network;
and decode the target bit stream, as information obtained by encoding an image, through the neural network to obtain a target image, where the target image includes the at least one target feature vector.
In some embodiments of the present application, the neural network is a convolutional neural network.
In some embodiments of the present application, the neural network is a recurrent neural network, and the decoding module 520 is further configured to:
input the target bit stream into the recurrent neural network;
and decode the target bit stream, as information obtained by encoding a sequence, through the recurrent neural network to obtain a target sequence, where the target sequence includes the at least one target feature vector.
In some embodiments of the present application, the recurrent neural network includes a long short-term memory (LSTM) neural network.
In some embodiments of the present application, the decoding module 520 is further configured to:
decode the target bit stream based on an attention mechanism to obtain the at least one target feature vector.
In some embodiments of the present application, the decoding module 520 includes:
a feature extraction module, configured to perform feature extraction on the target bit stream to obtain a first feature map;
an attention module, configured to determine weights of elements in the first feature map based on an attention mechanism;
a dot multiplication module, configured to perform dot multiplication on the first feature map and the weights of the elements in the first feature map to obtain a second feature map;
and a decompression module, configured to decompress the second feature map to obtain the at least one target feature vector.
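The four decoder sub-modules above map naturally onto a squeeze-and-excitation-style weighting, sketched below with untrained, hypothetical projections standing in for the learned layers:

```python
import numpy as np

rng = np.random.default_rng(4)
n_bits, d_map, n, d = 12, 16, 4, 8   # hypothetical sizes

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

bits = rng.integers(0, 2, n_bits).astype(float)   # received target bit stream

# Feature extraction module: lift the bit stream to a first feature map.
W1 = rng.standard_normal((d_map, n_bits)) * 0.1
first_map = np.tanh(W1 @ bits)

# Attention module: per-element weights in (0, 1) computed from the map itself.
W2 = rng.standard_normal((d_map, d_map)) * 0.1
weights = sigmoid(W2 @ first_map)

# Dot multiplication module: element-wise product gives the second feature map.
second_map = first_map * weights

# Decompression module: expand back into the target feature vectors.
W3 = rng.standard_normal((n * d, d_map)) * 0.1
target_vectors = (W3 @ second_map).reshape(n, d)

assert second_map.shape == first_map.shape
assert np.all((weights > 0) & (weights < 1))
assert target_vectors.shape == (n, d)
```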
In some embodiments of the present application, the receiving end device 500 further includes:
a processing module, configured to perform precoding according to the at least one target feature vector.
Alternatively, in some embodiments, the communication module may be a communication interface or transceiver, or an input/output interface of a communication chip or a system on a chip. The processing module and the decoding module may be one or more processors.
It should be understood that the receiver device 500 according to the embodiments of the present application may correspond to the receiver device in the embodiments of the method of the present application, and the foregoing and other operations and/or functions of each unit in the receiver device 500 are respectively for implementing the corresponding flow of the receiver device in the method 200 shown in fig. 2 to 15, which are not repeated herein for brevity.
In summary, the transmitting end device performs feature decomposition on the full channel information obtained by channel estimation to obtain at least one feature vector, encodes the feature vector through a neural network to obtain a target bit stream, and sends the target bit stream to the receiving end. Correspondingly, the receiving end device decodes the target bit stream to obtain a target feature vector, and then performs precoding according to the target feature vector. On the one hand, when feeding back channel information, the transmitting end device only needs to send the target bit stream obtained by encoding the feature vectors produced by feature decomposition of the full channel information, which reduces the CSI overhead. On the other hand, encoding the feature vectors of the full channel information, rather than directly compressing and encoding the full channel information itself, allows the correlation features within the channel information to be exploited and avoids compressing excessive redundant information, thereby improving compression efficiency and, in turn, encoding performance.
Fig. 18 is a schematic structural diagram of a communication device 600 provided in an embodiment of the present application. The communication device 600 shown in fig. 18 comprises a processor 610, from which the processor 610 may call and run a computer program to implement the method in the embodiments of the present application.
Optionally, as shown in fig. 18, the communication device 600 may further comprise a memory 620. Wherein the processor 610 may call and run a computer program from the memory 620 to implement the methods in embodiments of the present application.
The memory 620 may be a separate device from the processor 610 or may be integrated into the processor 610.
Optionally, as shown in fig. 18, the communication device 600 may further include a transceiver 630, and the processor 610 may control the transceiver 630 to communicate with other devices; specifically, it may send information or data to other devices, or receive information or data sent by other devices.
The transceiver 630 may include a transmitter and a receiver, among others. Transceiver 630 may further include antennas, the number of which may be one or more.
Optionally, the communication device 600 may be specifically a transmitting device in the embodiment of the present application, and the communication device 600 may implement a corresponding flow implemented by the transmitting device in each method in the embodiment of the present application, which is not described herein for brevity.
In some embodiments, the transceiver 630 in the communication device 600 may be used to implement the relevant operation of the communication module 410 in the sender device 400 shown in fig. 16, which is not described herein for brevity.
In some embodiments, the processor 610 in the communication device 600 may be configured to implement the related operations of the processing module 420 and the encoding module 430 in the transmitting end device 400 shown in fig. 16, which are not described herein for brevity.
Optionally, the communication device 600 may be specifically a receiving end device in the embodiment of the present application, and the communication device 600 may implement a corresponding flow implemented by the receiving end device in each method in the embodiment of the present application, which is not described herein for brevity.
In some embodiments, the transceiver 630 in the communication device 600 may be used to implement the relevant operation of the communication module 510 in the receiving end device 500 shown in fig. 17, which is not described herein for brevity.
In some embodiments, the processor 610 in the communication device 600 may be configured to implement the related operations of the decoding module 520 and the processing module in the receiving end device 500 shown in fig. 17, which are not described herein for brevity.
Fig. 19 is a schematic structural diagram of a chip of an embodiment of the present application. The chip 700 shown in fig. 19 includes a processor 710, and the processor 710 may call and run a computer program from a memory to implement the method in the embodiments of the present application.
Optionally, as shown in fig. 19, chip 700 may also include memory 720. Wherein the processor 710 may call and run a computer program from the memory 720 to implement the methods in embodiments of the present application.
Wherein the memory 720 may be a separate device from the processor 710 or may be integrated into the processor 710.
Optionally, the chip 700 may also include an input interface 730. The processor 710 may control the input interface 730 to communicate with other devices or chips, and in particular, may obtain information or data sent by other devices or chips.
Optionally, the chip 700 may further include an output interface 740. The processor 710 may control the output interface 740 to communicate with other devices or chips, and in particular, may output information or data to other devices or chips.
Optionally, the chip may be applied to a transmitting end device in the embodiment of the present application, and the chip may implement a corresponding flow implemented by the transmitting end device in each method in the embodiment of the present application, which is not described herein for brevity.
In some embodiments, the input interface 730 and the output interface 740 in the chip 700 may be used to implement the related operations of the communication module 410 in the transmitting end device 400 shown in fig. 16, which are not described herein for brevity.
In some embodiments, the processor 710 in the chip 700 may be configured to implement the related operations of the processing module 420 and the encoding module 430 in the transmitting end device 400 shown in fig. 16, which are not described herein for brevity.
Optionally, the chip may be applied to the receiving end device in the embodiment of the present application, and the chip may implement a corresponding flow implemented by the receiving end device in each method of the embodiment of the present application, which is not described herein for brevity.
In some embodiments, the input interface 730 and the output interface 740 in the chip 700 may be used to implement the related operations of the communication module 510 in the receiving end device 500 shown in fig. 17, which are not described herein for brevity.
In some embodiments, the processor 710 in the chip 700 may be configured to implement the related operations of the decoding module 520 and the processing module in the receiving end device 500 shown in fig. 17, which are not described herein for brevity.
It should be understood that the chips referred to in the embodiments of the present application may also be referred to as system-on-chip chips, or the like.
Fig. 20 is a schematic block diagram of a communication system 900 provided in an embodiment of the present application. As shown in fig. 20, the communication system 900 includes a transmitting-end device 910 and a receiving-end device 920.
The transmitting device 910 may be used to implement the corresponding functions implemented by the transmitting device in the above method, and the receiving device 920 may be used to implement the corresponding functions implemented by the receiving device in the above method, which are not described herein for brevity.
It should be appreciated that the processor of an embodiment of the present application may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method embodiments may be completed by integrated logic circuits of hardware in the processor or by instructions in the form of software. The processor may be a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present application. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of a method disclosed in connection with the embodiments of the present application may be embodied directly as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. The software modules may be located in a storage medium well known in the art, such as a random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, or register. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above methods in combination with its hardware.
It will be appreciated that the memory in embodiments of the present application may be volatile memory or nonvolatile memory, or may include both volatile and nonvolatile memory. The nonvolatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), which acts as an external cache. By way of example, and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct Rambus RAM (DR RAM). It should be noted that the memory of the systems and methods described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
It should be understood that the above memories are exemplary but not limiting; for example, the memory in the embodiments of the present application may also be static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), direct Rambus RAM (DR RAM), and the like. That is, the memory in embodiments of the present application is intended to comprise, without being limited to, these and any other suitable types of memory.
Embodiments of the present application also provide a computer-readable storage medium for storing a computer program.
Optionally, the computer readable storage medium may be applied to a transmitting end device in the embodiments of the present application, and the computer program causes a computer to execute a corresponding flow implemented by the transmitting end device in each method of the embodiments of the present application, which is not described herein for brevity.
Optionally, the computer readable storage medium may be applied to a receiving end device in the embodiments of the present application, and the computer program causes a computer to execute a corresponding flow implemented by the receiving end device in each method of the embodiments of the present application, which is not described herein for brevity.
Embodiments of the present application also provide a computer program product comprising computer program instructions.
Optionally, the computer program product may be applied to a sender device in the embodiments of the present application, and the computer program instructions cause a computer to execute corresponding flows implemented by the sender device in the methods in the embodiments of the present application, which are not described herein for brevity.
Optionally, the computer program product may be applied to a receiving end device in the embodiments of the present application, and the computer program instructions cause a computer to execute corresponding processes implemented by the receiving end device in the methods in the embodiments of the present application, which are not described herein for brevity.
The embodiment of the application also provides a computer program.
Optionally, the computer program may be applied to a transmitting end device in the embodiments of the present application, and when the computer program runs on a computer, the computer is caused to execute a corresponding flow implemented by the transmitting end device in each method in the embodiments of the present application, which is not described herein for brevity.
Optionally, the computer program may be applied to the receiving end device in the embodiments of the present application, and when the computer program runs on a computer, the computer is caused to execute a corresponding flow implemented by the receiving end device in each method of the embodiments of the present application, which is not described herein for brevity.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In the several embodiments provided in this application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing module, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other various media capable of storing program code.
The foregoing is merely specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily think about changes or substitutions within the technical scope of the present application, and the changes and substitutions are intended to be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (46)

  1. A method for channel information feedback, comprising:
    the method comprises the steps that a transmitting end device receives a reference signal transmitted by a receiving end device;
    performing channel estimation according to the reference signal to obtain channel information between the transmitting end equipment and the receiving end equipment;
    performing feature decomposition on the channel information to obtain at least one first feature vector;
    encoding the at least one first feature vector through a neural network to obtain a target bit stream;
    and sending the target bit stream to the receiving end device.
  2. The method of claim 1, wherein the channel information comprises channel information corresponding to each of a plurality of subcarrier groups of the transmitting device, and wherein the at least one first eigenvector comprises an eigenvector corresponding to each of the plurality of subcarrier groups.
  3. The method according to claim 1 or 2, wherein the encoding the at least one first feature vector through a neural network to obtain a target bit stream comprises:
    splicing the at least one first feature vector into a feature vector matrix, and inputting the feature vector matrix into the neural network;
    and encoding the feature vector matrix, as an image to be compressed, through the neural network to obtain the target bit stream.
  4. A method according to claim 3, wherein the neural network is a convolutional neural network.
  5. The method according to claim 1 or 2, wherein the neural network is a recurrent neural network, and the encoding the at least one first feature vector through the neural network to obtain a target bit stream comprises:
    inputting each of the at least one first feature vector into the recurrent neural network in turn;
    and encoding each feature vector, as a sequence element, through the recurrent neural network to obtain the target bit stream.
  6. The method of claim 5, wherein the recurrent neural network comprises a long short-term memory (LSTM) neural network.
  7. The method according to any one of claims 1 to 6, wherein the encoding the at least one first feature vector through a neural network to obtain a target bit stream comprises:
    encoding the at least one first feature vector based on an attention mechanism to obtain the target bit stream.
  8. The method of claim 7, wherein the encoding the at least one first feature vector based on an attention mechanism to obtain the target bit stream comprises:
    extracting correlation features among elements in the at least one first feature vector based on an attention mechanism to obtain at least one second feature vector;
    and performing feature compression on the at least one second feature vector to obtain the target bit stream.
  9. The method according to any of claims 1-8, wherein the sender device is a terminal device and the receiver device is a network device.
  10. A method for channel information feedback, comprising:
    the method comprises the steps that a receiving end device receives a target bit stream sent by a sending end device, wherein the target bit stream is obtained by encoding at least one first feature vector by the sending end device, and the at least one first feature vector is obtained by carrying out feature decomposition on a channel estimation result by the sending end device;
    and decoding the target bit stream through a neural network to obtain at least one target feature vector.
  11. The method of claim 10, wherein the channel information comprises channel information corresponding to each of a plurality of subcarrier groups of the transmitting device, and wherein the at least one first eigenvector comprises an eigenvector corresponding to each of the plurality of subcarrier groups.
  12. The method according to claim 10 or 11, wherein the decoding the target bit stream through a neural network to obtain at least one target feature vector comprises:
    inputting the target bit stream into the neural network;
    and decoding the target bit stream, as information obtained by encoding an image, through the neural network to obtain a target image, wherein the target image comprises the at least one target feature vector.
  13. The method of claim 12, wherein the neural network is a convolutional neural network.
  14. The method according to claim 10 or 11, wherein the neural network is a recurrent neural network, and the decoding the target bit stream through the neural network to obtain at least one target feature vector comprises:
    inputting the target bit stream into the recurrent neural network;
    and decoding the target bit stream, as information obtained by encoding a sequence, through the recurrent neural network to obtain a target sequence, wherein the target sequence comprises the at least one target feature vector.
  15. The method of claim 14, wherein the recurrent neural network comprises a long short-term memory (LSTM) neural network.
  16. The method according to claim 10 or 11, wherein the decoding the target bit stream through a neural network to obtain at least one target feature vector comprises:
    decoding the target bit stream based on an attention mechanism to obtain the at least one target feature vector.
  17. The method of claim 16, wherein the decoding the target bit stream based on an attention mechanism to obtain the at least one target feature vector comprises:
    performing feature extraction on the target bit stream to obtain a first feature map;
    determining weights of elements in the first feature map based on an attention mechanism;
    performing dot multiplication on the first feature map and the weights of the elements in the first feature map to obtain a second feature map;
    and decompressing the second feature map to obtain the at least one target feature vector.
  18. The method according to any one of claims 10-17, further comprising:
    the receiving end device performs precoding according to the at least one target feature vector.
  19. A transmitting-end apparatus, characterized by comprising:
    a communication module, configured to receive a reference signal sent by a receiving end device;
    a processing module, configured to perform channel estimation according to the reference signal to obtain channel information between the transmitting end device and the receiving end device, and perform feature decomposition on the channel information to obtain at least one first feature vector;
    and an encoding module, configured to encode the at least one first feature vector through a neural network to obtain a target bit stream;
    wherein the communication module is further configured to send the target bit stream to the receiving end device.
  20. The transmitting device of claim 19, wherein the channel information comprises channel information corresponding to each of a plurality of subcarrier groups of the transmitting device, and wherein the at least one first eigenvector comprises an eigenvector corresponding to each of the plurality of subcarrier groups.
  21. The transmitting device according to claim 19 or 20, wherein the encoding module is specifically configured to:
    splicing the at least one first feature vector into a feature vector matrix, and inputting the feature vector matrix into the neural network;
    and encoding the feature vector matrix, as an image to be compressed, through the neural network to obtain the target bit stream.
  22. The transmitting-end device of claim 21, wherein the neural network is a convolutional neural network.
  23. The transmitting device according to claim 19 or 20, wherein the neural network is a recurrent neural network, and the encoding module is further configured to:
    input each of the at least one first feature vector into the recurrent neural network in turn;
    and encode each feature vector, as a sequence element, through the recurrent neural network to obtain the target bit stream.
  24. The transmitting-end device of claim 23, wherein the recurrent neural network comprises a long short-term memory (LSTM) neural network.
  25. The sender device according to any of claims 19-24, wherein the encoding module is further configured to:
    encode the at least one first feature vector based on an attention mechanism to obtain the target bit stream.
  26. The transmitting device of claim 25, wherein the encoding module further comprises:
    an attention module, configured to extract correlation features between elements in the at least one first feature vector based on an attention mechanism to obtain at least one second feature vector;
    and a feature vector compression module, configured to perform feature compression on the at least one second feature vector to obtain the target bit stream.
  27. The transmitting device according to any one of claims 19-26, wherein the transmitting device is a terminal device and the receiving device is a network device.
  28. A receiving-end apparatus, characterized by comprising:
    a communication module, configured to receive a target bit stream sent by a transmitting end device, wherein the target bit stream is obtained by the transmitting end device encoding at least one first feature vector, and the at least one first feature vector is obtained by the transmitting end device performing feature decomposition on a result of channel estimation;
    and a decoding module, configured to decode the target bit stream through a neural network to obtain at least one target feature vector.
  29. The receiver device of claim 28, wherein the channel information comprises channel information corresponding to each of a plurality of subcarrier groups of the transmitter device, and wherein the at least one first eigenvector comprises an eigenvector corresponding to each of the plurality of subcarrier groups.
  30. The receiver device of claim 28 or 29, wherein the decoding module is configured to:
    input the target bit stream into the neural network;
    and decode the target bit stream, as information obtained by encoding an image, through the neural network to obtain a target image, wherein the target image comprises the at least one target feature vector.
  31. The receiver device of claim 30, wherein the neural network is a convolutional neural network.
  32. The receiver device of claim 28 or 29, wherein the neural network is a recurrent neural network, and wherein the decoding module is further configured to:
    inputting the target bit stream to the recurrent neural network;
    and decoding, through the recurrent neural network, the target bit stream as information obtained by encoding a sequence, to obtain a target sequence, wherein the target sequence comprises the at least one target feature vector.
  33. The receiver device of claim 32, wherein the recurrent neural network comprises a long short-term memory (LSTM) neural network.
  34. The receiver device of any of claims 28-33, wherein the decoding module is further configured to:
    and decoding the target bit stream based on an attention mechanism to obtain the at least one target feature vector.
  35. The receiver device of claim 34, wherein the decoding module comprises:
    a feature extraction module, configured to perform feature extraction on the target bit stream to obtain a first feature map;
    an attention module, configured to determine weights of elements in the first feature map based on an attention mechanism;
    a point multiplication module, configured to perform point multiplication on the first feature map and the weights of the elements in the first feature map to obtain a second feature map;
    and a decompression module, configured to decompress the second feature map to obtain the at least one target feature vector.
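The four modules of claim 35 (feature extraction, attention weights, point multiplication, decompression) compose as a short pipeline. The NumPy sketch below mirrors that data flow; the random "extracted" feature map, the stand-in attention scores, and the linear decompression matrix are all assumptions in place of learned networks.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(3)
first_feature_map = rng.standard_normal(16)        # output of feature extraction (stand-in)
scores = rng.standard_normal(16)                   # stand-in attention scores
weights = softmax(scores)                          # per-element attention weights
second_feature_map = first_feature_map * weights   # point (element-wise) multiplication
W_dec = rng.standard_normal((8, 16))               # stand-in decompression map
target_vector = W_dec @ second_feature_map         # recovered target feature vector
```

The element-wise product is what the claim calls "point multiplication": each element of the first feature map is scaled by its own attention weight before decompression.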
  36. The receiver device of any of claims 28-35, wherein the receiver device further comprises:
    and a processing module, configured to perform precoding according to the at least one target feature vector.
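Claim 36 uses the recovered target feature vector for precoding. One common (assumed) reading is single-layer precoding with the unit-norm feature vector as the precoding vector; the QPSK symbols and the antenna count below are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(4)
v = rng.standard_normal(8) + 1j * rng.standard_normal(8)
v = v / np.linalg.norm(v)                           # unit-norm target feature vector
symbols = np.array([1 + 1j, -1 + 1j]) / np.sqrt(2)  # two unit-energy QPSK data symbols
precoded = np.outer(v, symbols)                     # num_tx x num_symbols transmit matrix
```

Each column of `precoded` is one transmit vector across the (assumed) eight antennas, with per-symbol transmit energy preserved by the unit-norm precoder.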
  37. A transmitting-end apparatus, characterized by comprising: a processor and a memory for storing a computer program, the processor being adapted to invoke and run the computer program stored in the memory for performing the method according to any of claims 1 to 9.
  38. A chip, comprising: a processor for calling and running a computer program from a memory, causing a device on which the chip is mounted to perform the method of any one of claims 1 to 9.
  39. A computer-readable storage medium, storing a computer program that causes a computer to perform the method of any one of claims 1 to 9.
  40. A computer program product comprising computer program instructions for causing a computer to perform the method of any one of claims 1 to 9.
  41. A computer program, characterized in that the computer program causes a computer to perform the method according to any one of claims 1 to 9.
  42. A network device, comprising: a processor and a memory for storing a computer program, the processor being configured to invoke and run the computer program stored in the memory to perform the method of any one of claims 10 to 18.
  43. A chip, comprising: a processor for calling and running a computer program from a memory, causing a device on which the chip is mounted to perform the method of any of claims 10 to 18.
  44. A computer-readable storage medium, storing a computer program that causes a computer to perform the method of any one of claims 10 to 18.
  45. A computer program product comprising computer program instructions for causing a computer to perform the method of any one of claims 10 to 18.
  46. A computer program, characterized in that the computer program causes a computer to perform the method according to any one of claims 10 to 18.
CN202180079843.3A 2021-04-14 2021-04-14 Channel information feedback method, transmitting end equipment and receiving end equipment Pending CN116569527A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/087288 WO2022217506A1 (en) 2021-04-14 2021-04-14 Channel information feedback method, sending end device, and receiving end device

Publications (1)

Publication Number Publication Date
CN116569527A true CN116569527A (en) 2023-08-08

Family

ID=83639978

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202180079843.3A Pending CN116569527A (en) 2021-04-14 2021-04-14 Channel information feedback method, transmitting end equipment and receiving end equipment

Country Status (2)

Country Link
CN (1) CN116569527A (en)
WO (1) WO2022217506A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024088006A1 (en) * 2022-10-26 2024-05-02 Telefonaktiebolaget Lm Ericsson (Publ) Methods and apparatuses for processing of channel information

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101977593B1 (en) * 2012-06-25 2019-08-28 삼성전자주식회사 Method for transmitting secret information in transmitting end and method for receiving secret information in sending end based on a mimo multiplexing using plural antennas
KR102441982B1 (en) * 2018-07-05 2022-09-13 삼성전자주식회사 Method and apparatus for performing beamforming in a wireless communication system
CN109714086B (en) * 2019-01-23 2021-09-14 上海大学 Optimized MIMO detection method based on deep learning
CN110581732B (en) * 2019-09-30 2021-02-26 山东建筑大学 Multi-objective optimization system and method for indoor visible light communication based on neural network
CN112511472B (en) * 2020-11-10 2022-04-01 北京大学 Time-frequency second-order equalization method based on neural network and communication system

Also Published As

Publication number Publication date
WO2022217506A1 (en) 2022-10-20

Similar Documents

Publication Publication Date Title
WO2022121797A1 (en) Data transmission method and apparatus
CN116569527A (en) Channel information feedback method, transmitting end equipment and receiving end equipment
CN109889247B (en) Low-overhead dynamic feedback safe transmission method and system suitable for narrow-band Internet of things
US20230136416A1 (en) Neural network obtaining method and apparatus
WO2023123062A1 (en) Quality evaluation method for virtual channel sample, and device
WO2024020793A1 (en) Channel state information (csi) feedback method, terminal device and network device
WO2024108356A1 (en) Csi feedback method, transmitter device and receiver device
WO2023004638A1 (en) Channel information feedback methods, transmitting end devices, and receiving end devices
WO2024098259A1 (en) Sample set generation method and device
WO2022236785A1 (en) Channel information feedback method, receiving end device, and transmitting end device
CN117136528A (en) Channel recovery method and receiving terminal equipment
WO2024011456A1 (en) Data processing method and apparatus, communication method and apparatus, and terminal device and network device
WO2023028948A1 (en) Model processing method, electronic device, network device, and terminal device
WO2023015499A1 (en) Wireless communication method and device
WO2023133886A1 (en) Channel information feedback method, sending end device, and receiving end device
CN114157722A (en) Data transmission method and device
WO2024077621A1 (en) Channel information feedback method, transmitting end device, and receiving end device
WO2023004563A1 (en) Method for obtaining reference signal and communication devices
WO2023030538A1 (en) Method for processing channel state information, and terminal, base station and computer-readable storage medium
WO2023115254A1 (en) Data processing method and device
WO2023060503A1 (en) Information processing method and apparatus, device, medium, chip, product, and program
EP4354782A1 (en) Information transmission method and apparatus
WO2022257121A1 (en) Communication method and device, and storage medium
WO2022236788A1 (en) Communication method and device, and storage medium
WO2022257042A1 (en) Codebook reporting method, and terminal device and network device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination