CN117203898A - Channel information feedback method, transmitting device and receiving device - Google Patents

Channel information feedback method, transmitting device and receiving device

Info

Publication number
CN117203898A
Authority
CN
China
Prior art keywords
network, channel data, data set, bit stream, channel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202180097508.6A
Other languages
Chinese (zh)
Inventor
肖寒
田文强
刘文东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Publication of CN117203898A publication Critical patent/CN117203898A/en
Pending legal-status Critical Current


Classifications

    • H: ELECTRICITY
    • H03: ELECTRONIC CIRCUITRY
    • H03M: CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00: Coding, decoding or code conversion, for error detection or error correction; coding theory basic assumptions; coding bounds; error probability evaluation methods; channel models; simulation or testing of codes
    • H03M13/35: Unequal or adaptive error protection, e.g. by providing a different level of protection according to significance of source information or by adapting the coding according to the change of transmission channel characteristics

Landscapes

  • Physics & Mathematics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Mobile Radio Communication Systems (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)

Abstract

Embodiments of the present application provide a channel information feedback method, an originating device, and a receiving device, which perform online scene migration training separately for the encoding network of the originating device and the decoding network of the receiving device. The channel information feedback method includes: the originating device performs channel scene migration training on a first encoding network deployed on the originating device according to a first channel data set and a second channel data set, to obtain a second encoding network, where the first channel data set includes channel data in a first channel scene, the second channel data set includes channel data in a second channel scene, the first encoding network is trained based on the first channel data set and is adapted to the first channel scene, and the second encoding network is adapted to both the first channel scene and the second channel scene; the originating device encodes target channel data through the second encoding network to obtain a target bit stream; and the originating device sends the target bit stream to the receiving device.

Description

Channel information feedback method, transmitting device and receiving device

Technical Field
Embodiments of the present application relate to the field of communications, and more particularly to a channel information feedback method, an originating device, and a receiving device.
Background
In wireless communication, a transmitting end can compress channel information using an encoder, and a receiving end reconstructs the channel information using a decoder. However, as the channel environment grows increasingly complex, the channels of different cells exhibit different latent characteristics. How to feed back channel information in this case is a problem to be solved.
Disclosure of Invention
Embodiments of the present application provide a channel information feedback method, an originating device, and a receiving device. When the channel scene changes, online scene migration training is performed separately on the encoding network at the transmitting end and the decoding network at the receiving end. This enables adaptive updating of the encoding and decoding networks as the channel scene changes in practical applications, improves their adaptive generalization capability, and thus improves the compression and feedback accuracy of channel information feedback when the channel environment characteristics change.
In a first aspect, a method for channel information feedback is provided, the method comprising:
an originating device performs channel scene migration training on a first encoding network deployed on the originating device according to a first channel data set and a second channel data set, to obtain a second encoding network; where the first channel data set includes channel data in a first channel scene, the second channel data set includes channel data in a second channel scene, the first encoding network is trained based on the first channel data set and is adapted to the first channel scene, and the second encoding network is adapted to both the first channel scene and the second channel scene;
the originating device encodes target channel data through the second encoding network to obtain a target bit stream; the target channel data is the channel data in the first channel scene or the second channel scene;
the originating device sends the target bitstream to a receiving device.
In a second aspect, a method for channel information feedback is provided, the method comprising:
a receiving device performs channel scene migration training on a first decoding network deployed on the receiving device according to a first channel data set, a first bit stream set, and a second bit stream set, to obtain a second decoding network; where the first channel data set includes channel data in a first channel scene, the first bit stream set includes bit streams obtained by encoding channel data in the first channel scene, the second bit stream set includes bit streams obtained by encoding channel data in a second channel scene, the first decoding network is trained based on the first channel data set and is adapted to the first channel scene, and the second decoding network is adapted to both the first channel scene and the second channel scene;
the receiving device receives a target bit stream sent by the originating device;
the receiving device decodes the target bit stream through the second decoding network to obtain target channel data, where the target channel data is channel data in the first channel scene or the second channel scene.
In a third aspect, an originating device is provided for performing the method of the first aspect described above.
In particular, the originating device comprises functional modules for performing the method in the first aspect described above.
In a fourth aspect, a receiving device is provided for performing the method in the second aspect.
Specifically, the receiving device includes functional modules for performing the method in the second aspect.
In a fifth aspect, an originating device is provided that includes a processor and a memory. The memory is used for storing a computer program, and the processor is used for calling and running the computer program stored in the memory to execute the method in the first aspect.
In a sixth aspect, a terminating device is provided that includes a processor and a memory. The memory is for storing a computer program and the processor is for calling and running the computer program stored in the memory for performing the method of the second aspect described above.
In a seventh aspect, there is provided an apparatus for implementing the method of any one of the first to second aspects.
Specifically, the device comprises: a processor for calling and running a computer program from a memory, causing a device in which the apparatus is installed to perform the method of any of the first to second aspects as described above.
In an eighth aspect, a computer-readable storage medium is provided for storing a computer program that causes a computer to execute the method of any one of the first to second aspects.
In a ninth aspect, there is provided a computer program product comprising computer program instructions for causing a computer to perform the method of any one of the first to second aspects above.
In a tenth aspect, there is provided a computer program which, when run on a computer, causes the computer to perform the method of any of the first to second aspects described above.
According to the technical solution of the first aspect, when the channel scene changes, the originating device performs channel scene migration training on the encoding network deployed on it according to the first channel data set and the second channel data set. This enables adaptive updating of the encoding network for channel scene changes in practical applications, improves its adaptive generalization capability, and thus improves the compression and feedback accuracy of channel information feedback when the channel environment characteristics change.
According to the technical solution of the second aspect, when the channel scene changes, the receiving device performs channel scene migration training on the decoding network deployed on it according to the first channel data set, the first bit stream set, and the second bit stream set. This enables adaptive updating of the decoding network for channel scene changes in practical applications, improves its adaptive generalization capability, and thus improves the compression and feedback accuracy of channel information feedback when the channel environment characteristics change.
Drawings
Fig. 1 is a schematic diagram of a communication system architecture to which embodiments of the present application apply.
Fig. 2 is a schematic diagram of a neural network provided by the present application.
Fig. 3 is a schematic diagram of a convolutional neural network provided by the present application.
Fig. 4 is a schematic diagram of an LSTM cell provided by the present application.
Fig. 5 is a schematic diagram of channel information feedback provided by the present application.
Fig. 6 is a schematic diagram of another channel information feedback provided by the present application.
Fig. 7 is a schematic diagram of still another channel information feedback provided by the present application.
Fig. 8 is a schematic flow chart of a method for channel information feedback according to an embodiment of the present application.
Fig. 9 is a schematic diagram of channel scene migration training performed by an originating device according to an embodiment of the present application.
Fig. 10 is a schematic diagram of channel scene migration training performed by another originating device according to an embodiment of the present application.
Fig. 11 is a schematic flow chart of another method for channel information feedback provided according to an embodiment of the present application.
Fig. 12 is a schematic diagram of channel scene migration training performed by a receiving device according to an embodiment of the present application.
Fig. 13 is a schematic diagram of channel scene migration training performed by another receiving device according to an embodiment of the present application.
Fig. 14 is a schematic block diagram of an originating device provided according to an embodiment of the present application.
Fig. 15 is a schematic block diagram of a receiving device according to an embodiment of the present application.
Fig. 16 is a schematic block diagram of a communication device provided in accordance with an embodiment of the present application.
Fig. 17 is a schematic block diagram of an apparatus provided in accordance with an embodiment of the present application.
Fig. 18 is a schematic block diagram of a communication system provided in accordance with an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present application without creative effort shall fall within the protection scope of the present application.
The technical solutions of the embodiments of the present application can be applied to various communication systems, such as: the Global System for Mobile communications (GSM), Code Division Multiple Access (CDMA) systems, Wideband Code Division Multiple Access (WCDMA) systems, the General Packet Radio Service (GPRS), Long Term Evolution (LTE) systems, LTE-Advanced (LTE-A) systems, New Radio (NR) systems, evolved NR systems, LTE-based access to unlicensed spectrum (LTE-U) systems, NR-based access to unlicensed spectrum (NR-U) systems, Non-Terrestrial Network (NTN) systems, the Universal Mobile Telecommunications System (UMTS), Wireless Local Area Networks (WLAN), Wireless Fidelity (WiFi), fifth-generation (5G) systems, sixth-generation (6G) systems, or other future evolved communication systems.
Generally, conventional communication systems support a limited number of connections and are easy to implement. However, as communication technology develops, mobile communication systems will support not only conventional communication but also, for example, Device-to-Device (D2D) communication, Machine-to-Machine (M2M) communication, Machine Type Communication (MTC), Vehicle-to-Vehicle (V2V) communication, and Vehicle-to-Everything (V2X) communication. Embodiments of the present application can also be applied to these communication systems.
In some embodiments, the communication system in the embodiments of the present application may be applied to a carrier aggregation (Carrier Aggregation, CA) scenario, a dual connectivity (Dual Connectivity, DC) scenario, or a Stand Alone (SA) networking scenario.
In some embodiments, the communication system in the embodiments of the present application may be applied to unlicensed spectrum, where unlicensed spectrum may also be considered as shared spectrum; alternatively, the communication system in the embodiment of the present application may also be applied to licensed spectrum, where licensed spectrum may also be considered as non-shared spectrum.
Embodiments of the present application are described in connection with a network device and a terminal device, where the terminal device may also be referred to as a User Equipment (UE), an access terminal, a subscriber unit, a subscriber station, a mobile station, a remote terminal, a mobile device, a user terminal, a wireless communication device, a user agent, or the like.
The terminal device may be a station (ST) in a WLAN, a cellular telephone, a cordless telephone, a Session Initiation Protocol (SIP) phone, a Wireless Local Loop (WLL) station, a Personal Digital Assistant (PDA) device, a handheld device with wireless communication capability, a computing device or other processing device connected to a wireless modem, a vehicle-mounted device, a wearable device, a terminal device in a next-generation communication system such as an NR network, or a terminal device in a future evolved Public Land Mobile Network (PLMN), etc.
In embodiments of the present application, the terminal device may be deployed on land, including indoor or outdoor, handheld, wearable, or vehicle-mounted; on water (e.g., on a ship); or in the air (e.g., on an aircraft, balloon, or satellite).
In embodiments of the present application, the terminal device may be a mobile phone, a tablet computer (Pad), a computer with wireless transceiving functions, a Virtual Reality (VR) terminal device, an Augmented Reality (AR) terminal device, a wireless terminal device in industrial control, a wireless terminal device in self-driving, a wireless terminal device in remote medicine, a wireless terminal device in smart grid, a wireless terminal device in transportation safety, a wireless terminal device in smart city, or a wireless terminal device in smart home, etc.
By way of example and not limitation, in embodiments of the present application, the terminal device may also be a wearable device. A wearable device, also called a wearable smart device, is a general term for devices developed by applying wearable technology to the intelligent design of everyday wear, such as glasses, gloves, watches, clothing, and shoes. A wearable device is a portable device worn directly on the body or integrated into the user's clothing or accessories. Wearable devices are not merely hardware devices; they also realize powerful functions through software support, data interaction, and cloud interaction. In a broad sense, wearable smart devices include full-featured, large-size devices that can realize complete or partial functions without relying on a smartphone, such as smart watches or smart glasses, as well as devices that focus only on a certain type of application function and need to be used with other devices such as smartphones, for example, various smart bracelets and smart jewelry for vital sign monitoring.
In embodiments of the present application, the network device may be a device that communicates with mobile devices. The network device may be an Access Point (AP) in a WLAN, a Base Transceiver Station (BTS) in GSM or CDMA, a NodeB (NB) in WCDMA, an Evolved NodeB (eNB or eNodeB) in LTE, a relay station or access point, a vehicle-mounted device, a wearable device, a network device or base station (gNB) in an NR network, a network device in a future evolved PLMN, or a network device in an NTN, etc.
By way of example and not limitation, in embodiments of the present application, a network device may be mobile; for example, it may be a mobile device. In some embodiments, the network device may be a satellite or a balloon station. For example, the satellite may be a Low Earth Orbit (LEO) satellite, a Medium Earth Orbit (MEO) satellite, a Geostationary Earth Orbit (GEO) satellite, a High Elliptical Orbit (HEO) satellite, etc. In some embodiments, the network device may also be a base station located on land, on water, etc.
In embodiments of the present application, a network device may provide services for a cell, and a terminal device communicates with the network device through transmission resources (e.g., frequency-domain or spectrum resources) used by the cell. The cell may be a cell corresponding to the network device (e.g., a base station), and may belong to a macro base station or to a base station corresponding to a small cell. Small cells may include metro cells, micro cells, pico cells, femto cells, and the like; they feature small coverage and low transmit power, and are suitable for providing high-rate data transmission services.
An exemplary communication system to which embodiments of the present application may be applied is shown in fig. 1. As shown in fig. 1, the communication system 100 may include a network device 110, and the network device 110 may be a device that communicates with a terminal device 120 (or referred to as a communication terminal, terminal). Network device 110 may provide communication coverage for a particular geographic area and may communicate with terminal devices located within the coverage area.
Fig. 1 illustrates one network device and two terminal devices, and in some embodiments, the communication system 100 may include multiple network devices and may include other numbers of terminal devices within the coverage area of each network device, which is not limited by the embodiments of the present application.
In some embodiments, the communication system 100 may further include a network controller, a mobility management entity, and other network entities, which are not limited in this embodiment of the present application.
It should be understood that a device having a communication function in a network/system according to an embodiment of the present application may be referred to as a communication device. Taking the communication system 100 shown in fig. 1 as an example, the communication device may include a network device 110 and a terminal device 120 with communication functions, where the network device 110 and the terminal device 120 may be specific devices described above, and are not described herein again; the communication device may also include other devices in the communication system 100, such as a network controller, a mobility management entity, and other network entities, which are not limited in this embodiment of the present application.
It should be understood that the terms "system" and "network" are used interchangeably herein. The term "and/or" herein merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist together, or B exists alone. In addition, the character "/" herein generally indicates an "or" relationship between the associated objects.
The terminology used in the description of the embodiments of the application herein is for the purpose of describing particular embodiments of the application only and is not intended to be limiting of the application. The terms "first," "second," "third," and "fourth" and the like in the description and in the claims and drawings are used for distinguishing between different objects and not necessarily for describing a particular sequential or chronological order. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion.
It should be understood that an "indication" mentioned in the embodiments of the present application may be a direct indication, an indirect indication, or an indication of an association relationship. For example, "A indicates B" may mean that A directly indicates B, e.g., B can be obtained from A; that A indirectly indicates B, e.g., A indicates C and B can be obtained from C; or that there is an association between A and B.
In the description of the embodiments of the present application, the term "corresponding" may indicate a direct or indirect correspondence between two items, an association between them, or a relationship such as indicating and being indicated, or configuring and being configured.
In the embodiment of the present application, the "pre-defining" or "pre-configuring" may be implemented by pre-storing corresponding codes, tables or other manners that may be used to indicate relevant information in devices (including, for example, terminal devices and network devices), and the present application is not limited to the specific implementation manner thereof. Such as predefined may refer to what is defined in the protocol.
In the embodiment of the present application, the "protocol" may refer to a standard protocol in the communication field, for example, may include an LTE protocol, an NR protocol, and related protocols applied in a future communication system, which is not limited in the present application.
In order to facilitate understanding of the technical solution of the embodiments of the present application, the technical solution of the present application is described in detail below through specific embodiments. The following related technologies may be optionally combined with the technical solutions of the embodiments of the present application, which all belong to the protection scope of the embodiments of the present application. Embodiments of the present application include at least some of the following.
To facilitate a better understanding of the embodiments of the present application, the channel information feedback related to the present application will be described.
In the NR system, the Channel State Information (CSI) feedback scheme mainly uses a codebook-based approach to extract and feed back channel characteristics. After performing channel estimation, the transmitting end selects, according to a certain optimization criterion, the precoding matrix that best matches the current channel from a preset precoding codebook, and feeds back the index of this matrix to the receiving end over an air-interface feedback link so that the receiving end can perform precoding.
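The codebook selection described above can be sketched as follows. The function name `choose_pmi`, the random unit-norm codebook, and the maximum-correlation matching criterion are illustrative assumptions for this sketch, not taken from the patent or from the NR specification:

```python
import numpy as np

rng = np.random.default_rng(0)

num_codewords, num_tx = 16, 4
# A toy precoding codebook of unit-norm vectors (NR uses structured,
# DFT-based codebooks; a random one suffices for illustration).
codebook = rng.standard_normal((num_codewords, num_tx)) \
    + 1j * rng.standard_normal((num_codewords, num_tx))
codebook /= np.linalg.norm(codebook, axis=1, keepdims=True)

def choose_pmi(h, codebook):
    """Pick the codeword best matched to the estimated channel h
    (here: maximum |w^H h|) and return only its index for feedback."""
    gains = np.abs(codebook.conj() @ h)
    return int(np.argmax(gains))

h = rng.standard_normal(num_tx) + 1j * rng.standard_normal(num_tx)
pmi = choose_pmi(h, codebook)  # only this index is fed back over the air
w = codebook[pmi]              # the other end looks up the same codeword
```

Because only the index is transmitted, the feedback is compact, but the mapping to the finite codebook is lossy, which is exactly the limitation the document revisits later.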
In order to facilitate a better understanding of the embodiments of the present application, the neural network and deep learning related to the present application will be described.
A neural network is a computational model composed of multiple interconnected neuron nodes, where each connection between nodes carries a weight representing the scaling from input signal to output signal; each node performs a weighted summation (SUM) of its different input signals and produces an output through a specific activation function (f).
A simple neural network is shown in Fig. 2. It comprises an input layer, a hidden layer, and an output layer. Different outputs can be produced through different connection patterns, weights, and activation functions of multiple neurons, thereby fitting the mapping from input to output.
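The weighted-sum-then-activation computation of such a network can be sketched in a few lines. The layer sizes and the sigmoid activation are illustrative choices, not taken from the document:

```python
import numpy as np

def sigmoid(z):
    # a typical activation function f
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
W1 = rng.standard_normal((3, 4))   # input layer (4) -> hidden layer (3)
b1 = np.zeros(3)
W2 = rng.standard_normal((2, 3))   # hidden layer (3) -> output layer (2)
b2 = np.zeros(2)

def forward(x):
    hidden = sigmoid(W1 @ x + b1)  # weighted summation, then activation
    return sigmoid(W2 @ hidden + b2)

y = forward(np.ones(4))            # output of the two-layer mapping
```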
Deep learning uses deep neural networks with multiple hidden layers, which greatly improves the network's ability to learn features and can fit complex nonlinear mappings from input to output; it is therefore widely used in speech and image processing. Besides deep neural networks, deep learning also includes other common basic structures for different tasks, such as Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN).
The basic structure of a convolutional neural network comprises an input layer, multiple convolutional layers, multiple pooling layers, a fully connected layer, and an output layer, as shown in Fig. 3. Each neuron of a convolution kernel in a convolutional layer is connected only locally to its input, and pooling layers are introduced to extract the local maximum or average features of a given layer. This effectively reduces the number of network parameters and mines local features, allowing the convolutional neural network to converge quickly and achieve excellent performance.
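The two operations named above, a locally connected convolution and max pooling, can be sketched in plain numpy. The image size, the averaging kernel, and the 2x2 pooling window are illustrative assumptions:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Valid-mode 2D convolution: each output neuron is connected
    only to a local patch of the input."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool2(x):
    """2x2 max pooling: keep the local maximum feature."""
    h, w = x.shape[0] // 2, x.shape[1] // 2
    return x[:2 * h, :2 * w].reshape(h, 2, w, 2).max(axis=(1, 3))

img = np.arange(36, dtype=float).reshape(6, 6)
feat = conv2d_valid(img, np.ones((3, 3)) / 9.0)  # 4x4 feature map
pooled = max_pool2(feat)                          # 2x2 after pooling
```

Note how the pooling step shrinks the feature map fourfold, which is the parameter-reduction effect mentioned above.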
RNNs are a class of neural networks for modeling sequence data and perform remarkably well in natural language processing, e.g., machine translation and speech recognition. Their distinctive feature is that the network memorizes information from past time steps and uses it to compute the current output; that is, nodes within the hidden layer are connected to one another, and the hidden layer's input includes not only the output of the input layer but also the output of the hidden layer at the previous time step. Common RNNs include the Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) structures. Fig. 4 shows a basic LSTM cell structure, which may contain tanh activation functions. Unlike a plain RNN, which considers only the most recent state, the LSTM cell state determines which states should be kept and which should be forgotten, remedying the shortcomings of conventional RNNs in long-term memory.
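One step of the LSTM cell described above can be sketched as follows. Dimensions are illustrative, and biases are omitted for brevity; this is a generic textbook LSTM step, not the network of the patent:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, params):
    """One LSTM time step: the gates decide which parts of the
    previous cell state to keep or forget."""
    Wf, Wi, Wo, Wc = params               # each maps [h_prev; x] to hidden
    z = np.concatenate([h_prev, x])
    f = sigmoid(Wf @ z)                   # forget gate: what to drop
    i = sigmoid(Wi @ z)                   # input gate: what new info to write
    o = sigmoid(Wo @ z)                   # output gate
    c = f * c_prev + i * np.tanh(Wc @ z)  # updated cell (long-term) state
    h = o * np.tanh(c)                    # new hidden output
    return h, c

rng = np.random.default_rng(2)
hid, inp = 3, 2
params = [rng.standard_normal((hid, hid + inp)) for _ in range(4)]
h, c = lstm_step(np.ones(inp), np.zeros(hid), np.zeros(hid), params)
```

The explicit forget gate `f` is what lets the cell state carry information across many time steps, addressing the long-term-memory weakness of plain RNNs.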
In order to better understand the embodiment of the application, the channel information feedback method based on deep learning related to the application is described.
Given the great success of Artificial Intelligence (AI) technology, especially deep learning, in computer vision, natural language processing, and other areas, the communication field has begun to use deep learning to solve technical problems that are difficult for conventional communication methods. The neural network architectures commonly used in deep learning are nonlinear and data-driven: they can extract features from actual channel matrix data and restore, at the base-station side, the channel matrix information compressed and fed back by the terminal side as faithfully as possible. This makes it possible to reduce CSI feedback overhead at the terminal side while still restoring the channel information. Deep-learning-based CSI feedback treats the channel information as an image to be compressed: a deep-learning autoencoder compresses and feeds back the channel information, and the compressed channel image is reconstructed at the receiving end, so that the channel information can be preserved to a greater extent, as shown in Fig. 5.
A typical channel information feedback system is shown in Fig. 6. The overall feedback system is divided into an encoder part and a decoder part, deployed at the transmitting end and the receiving end respectively. After the transmitting end obtains channel information through channel estimation, the channel information matrix is compressed and encoded by the encoder's neural network, the compressed bit stream is fed back to the receiving end over an air-interface feedback link, and the receiving end restores the channel information from the feedback bit stream through the decoder, thereby obtaining the complete fed-back channel information. The encoder shown in Fig. 6 uses a stack of multiple fully connected layers, while the decoder uses convolutional layers and a residual structure. With the encoder-decoder framework unchanged, the internal network model structure of the encoder and decoder can be flexibly designed.
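The encoder/decoder split of this feedback pipeline can be sketched as below. The linear layers and the 1-bit (sign) quantization are illustrative stand-ins for the trained neural networks and the real quantizer; the names `transmitter` and `receiver` are assumptions of this sketch:

```python
import numpy as np

rng = np.random.default_rng(3)
n_channel, n_code = 64, 16  # compress 64 channel coefficients to 16 bits

W_enc = rng.standard_normal((n_code, n_channel)) / np.sqrt(n_channel)
W_dec = rng.standard_normal((n_channel, n_code)) / np.sqrt(n_code)

def transmitter(h):
    code = W_enc @ h                    # encoder-side compression (toy)
    return (code > 0).astype(np.uint8)  # bit stream fed back over the air

def receiver(bits):
    code = 2.0 * bits - 1.0             # de-quantize the bits to +/-1
    return W_dec @ code                 # decoder-side reconstruction

h = rng.standard_normal(n_channel)
bits = transmitter(h)                   # 16 bits instead of 64 floats
h_hat = receiver(bits)                  # reconstructed channel information
```

With random untrained weights the reconstruction is poor; the point of the end-to-end training discussed next is to learn `W_enc` and `W_dec` jointly so that `h_hat` approximates `h`.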
In order to facilitate better understanding of the embodiments of the present application, the technology related to the present application and the problems that exist will be described.
The channel information feedback in the 5G NR standard is a codebook-based feedback scheme. However, this scheme simply selects the optimal channel information eigenvector from the codebook according to the estimated channel, and the codebook is finite: the mapping from the estimated channel to a channel in the codebook is quantized and lossy, which degrades the accuracy of the fed-back channel information and thus degrades the precoding performance.
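The quantization loss of codebook-based feedback can be seen in a toy example: the transmitter feeds back only the index of the best-matching codeword, never the estimated channel itself. The random codebook below is purely illustrative (a real system would use the standardized NR codebooks), and the selection rule shown is the usual maximum-correlation criterion.

```python
import numpy as np

rng = np.random.default_rng(1)

N_TX, CODEBOOK_SIZE = 8, 16

# A toy codebook of unit-norm precoding vectors; illustrative only.
codebook = (rng.standard_normal((CODEBOOK_SIZE, N_TX))
            + 1j * rng.standard_normal((CODEBOOK_SIZE, N_TX)))
codebook /= np.linalg.norm(codebook, axis=1, keepdims=True)

def select_pmi(h_est):
    """Pick the codebook index whose codeword best matches the channel."""
    correlations = np.abs(codebook.conj() @ h_est)
    return int(np.argmax(correlations))

h_est = rng.standard_normal(N_TX) + 1j * rng.standard_normal(N_TX)
pmi = select_pmi(h_est)  # only this index is fed back, hence the quantization loss
```

Everything about h_est other than its nearest codeword is discarded by the feedback, which is exactly the lossy mapping described above.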
AI-based channel information feedback compresses the channel information at the transmitting end using the encoder of an AI auto-encoder and reconstructs the channel information at the receiving end using the decoder of the AI auto-encoder. The AI-based scheme exploits the nonlinear fitting capability of the neural network to compress and feed back the channel information, so the compression efficiency and the feedback precision can be greatly improved. However, as channel environments grow more complex, the channels of different cells exhibit different underlying characteristics. Because of the inherent generalization problem of neural networks in practical applications, a trained network is only suitable for channel test sets with the same characteristics as the training-set channel data; that is, the training set can rarely cover all situations, and when the scene characteristics change, the trained model struggles to maintain good generalization performance.
In a CSI feedback codec framework based on an AI auto-encoder, the encoder and the decoder are deployed in a distributed manner at the transmitting end and the receiving end of the communication system. The transmitting end compresses and encodes the channel information into a bit stream using the encoder network and feeds the bit stream back to the receiving end through an air interface, and the receiving end restores the bit stream into the reconstructed channel information using the decoder network. In the offline training stage, as shown in fig. 7, the existing data set H_1 of channel scene 1 may be used to train the encoder En(·) and the decoder De(·) end to end, obtaining an auto-encoder network model adapted to channel scene 1, which is then deployed in a distributed manner on the transmitting end and the receiving end. In fig. 7, B_1 is the bit stream set obtained after the data set H_1 is encoded by the encoder En(·), and H'_1 is the data set obtained after the bit stream set B_1 is decoded by the decoder De(·).
When the channel environment changes, i.e., the transmission environment changes from channel scene 1 to channel scene 2, the already trained encoder and decoder do not adapt well to the data set H_2 of channel scene 2. Moreover, the scheme of performing offline training again on the data set H_2 and redeploying the models to the transmitting end and the receiving end is cumbersome, time-consuming, and labor-intensive.
In view of the above problems, the present application provides a channel information feedback scheme that, when the channel scene changes, performs online scene migration training separately for the coding network of the transmitting end and the decoding network of the receiving end, so as to realize adaptive updating of the coding network and the decoding network for channel scene changes in practical applications, improve their generalization capability, and further improve the compression feedback precision of channel information feedback when the channel environment characteristics change. That is, when the pre-trained model trained on the data set of channel scene 1 has been deployed in a distributed manner at the transmitting end and the receiving end, the data set of channel scene 2 is used to migrate and update the model online, so as to obtain auto-encoder models adapted to both channel scene 1 and channel scene 2.
The technical scheme of the application is described in detail below through specific embodiments.
Fig. 8 is a schematic flowchart of a channel information feedback method 200 according to an embodiment of the present application, and as shown in fig. 8, the channel information feedback method 200 may include at least some of the following:
S210, the originating device performs channel scene migration training on a first coding network deployed on the originating device according to a first channel data set and a second channel data set to obtain a second coding network; wherein the first channel data set includes channel data in a first channel scene, the second channel data set includes channel data in a second channel scene, the first coding network is trained based on the first channel data set, the first coding network adapts to the first channel scene, and the second coding network adapts to the first channel scene and the second channel scene;
S220, the originating device encodes target channel data through the second encoding network to obtain a target bit stream; the target channel data is the channel data in the first channel scene or the second channel scene;
S230, the originating device sends the target bit stream to the receiving device.
In the embodiment of the present application, the channel data included in the first channel data set and the second channel data set may be CSI data, but may also be other channel data; this is not limited in the present application.
In the embodiment of the present application, with the coding network and the decoding network deployed in a distributed manner at the transmitting end and the receiving end, when the channel environment characteristics change (i.e., the channel scene changes), online migration updating is performed separately on the single-ended network model of the transmitting end and the single-ended network model of the receiving end using a transfer learning method.
In the embodiment of the present application, when the channel scene switches or migrates from the first channel scene to the second channel scene, the originating device performs scene migration training for the coding network, so that the coding network after migration training (i.e., the second coding network) can adapt to the first channel scene and the second channel scene at the same time. This realizes adaptive updating of the coding network for channel scene changes in practical applications, improves the generalization capability of the coding network, and further improves the compression feedback precision of channel information feedback when the channel environment characteristics change.
The encoding network may be referred to as an encoder, and the decoding network may be referred to as a decoder, which is not limited in this regard.
In some embodiments, the first channel scenario and the second channel scenario may be different cells, such as the first channel scenario corresponding to an NR cell and the second channel scenario corresponding to an LTE cell.
In some embodiments, the first channel scene and the second channel scene may be different environments. For example, the first channel scene corresponds to an environment in which the terminal is stationary or slowly moving relative to the base station, and the second channel scene corresponds to an environment in which the terminal is rapidly moving relative to the base station, such as a high-speed rail, an airplane, and the like. For another example, the first channel scene corresponds to a closed or semi-closed environment and the second channel scene corresponds to an open environment.
In some embodiments, the specific forms of the first channel scene and the second channel scene are not limited, and only the difference between the first channel scene and the second channel scene is required to be satisfied.
In some embodiments, the first channel data set includes a greater amount of channel data than the second channel data set. That is, the second channel data set is generally too small to train the coding network on its own. For example, the first channel data set includes 10000 pieces of channel data, and the second channel data set includes 500 pieces of channel data.
In some embodiments, the first channel data set further includes label information to enable supervised training of the coding network, while the second channel data set does not include label information.
In some embodiments, the receiving device is a terminal device and the originating device is a network device; alternatively, the receiving device is a network device and the originating device is a terminal device.
In still other embodiments, the receiving device is one terminal device and the originating device is another terminal device. In this case, the embodiment of the present application applies to sidelink (SL) communication.
In still other embodiments, the originating device is a network device and the receiving device is another network device. In this case, the embodiment of the present application applies to backhaul link communication.
In some embodiments, the first channel data set may also include channel data from a plurality of channel scenes; since the first coding network is trained based on the first channel data set, the first coding network may then adapt to the plurality of channel scenes.
In some embodiments, the originating device performs channel scene migration training on the first coding network deployed on the originating device according to the first channel data set, the second channel data set, and a first discrimination network to obtain the second coding network.
In some embodiments, as shown in fig. 9, the originating device may specifically perform channel scene migration training through S11-S14:
S11: the originating device encodes the first channel data set through the first coding network to obtain a first bit stream set;
S12: keeping the parameters of the first coding network unchanged, the originating device trains the first discrimination network according to the first channel data set and the second channel data set until convergence; when the first discrimination network converges, it is able to distinguish the bit streams output by the first coding network for the first channel data set from the bit streams output by the first coding network for the second channel data set;
S13: keeping the parameters of the first discrimination network unchanged, the originating device trains the first coding network according to the first channel data set and the second channel data set until convergence; when the first coding network converges, the first discrimination network can no longer distinguish the bit streams output by the first coding network for the first channel data set from the bit streams output by the first coding network for the second channel data set;
S14: the originating device trains the first coding network according to the first channel data set and the first bit stream set until convergence.
In the case that the first condition is not satisfied, the originating device continues to perform S12 to S14 after performing S14 until the first condition is satisfied, and the originating device determines the first coding network obtained when the first condition is satisfied as the second coding network.
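The alternating S11–S14 procedure can be sketched as the following control-flow skeleton. The three training routines are hypothetical stubs standing in for the actual gradient-descent loops, and the first condition is taken here as a round-count threshold; all names are the author's illustration, not the application's notation.

```python
# Hypothetical stand-ins for the actual training routines; each would run
# gradient descent on the corresponding optimization function until its
# own convergence criterion is met.
def train_discriminator(encoder, discriminator, H1, H2):
    pass  # S12: encoder frozen, discriminator learns to tell B_1 from B_2

def train_encoder_adversarial(encoder, discriminator, H1, H2):
    pass  # S13: discriminator frozen, encoder learns to fool it

def train_encoder_reconstruction(encoder, H1, B1):
    pass  # S14: encoder re-anchored to the old-scene bit streams B_1

def migrate_encoder(encoder, discriminator, H1, H2, first_threshold=3):
    B1 = [encoder(h) for h in H1]          # S11: reference bit stream set
    rounds = 0
    while rounds < first_threshold:        # first condition: round count
        train_discriminator(encoder, discriminator, H1, H2)        # S12
        train_encoder_adversarial(encoder, discriminator, H1, H2)  # S13
        train_encoder_reconstruction(encoder, H1, B1)              # S14
        rounds += 1
    return encoder, rounds                 # encoder is now the "second" coding network

updated_encoder, rounds = migrate_encoder(lambda h: h, None, H1=[1.0, 2.0], H2=[3.0])
```

The same skeleton applies to S21–S24 with the coding network front layer in place of the full coding network.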
In some embodiments, the first condition is that the number of times S14 is performed is greater than or equal to a first threshold, or the first condition is that the coding performance score at the time of convergence of the first coding network in S14 is greater than or equal to a second threshold.
In some embodiments, the coding performance score when the first coding network converges in S14 may be determined by the user, by the originating device, or by a pre-trained neural network; the present application is not limited in this respect.
In some embodiments, the first threshold may be preconfigured, agreed upon by the protocol, configured by the network device, or determined by the originating device.
In some embodiments, the second threshold may be preconfigured, agreed upon by the protocol, configured by the network device, or determined by the originating device.
In S11-S14, without involving the decoding network, the originating device exploits the domain adaptation of the bit streams output by the coding network in the old and new channel scenes to realize the migration and updating of the coding network.
In some embodiments, in S12, the optimization function of the first discrimination network is:
max_{D_1} E_{H∈H_1}[log D_1(En(H))] + E_{H∈H_2}[log(1 - D_1(En(H)))]
where D_1 denotes the first discrimination network, En denotes the first coding network, H_1 denotes the first channel data set, and H_2 denotes the second channel data set.
Here En(H_1) = B_1 and En(H_2) = B_2, where B_1 denotes the bit stream set obtained after the channel data in the first channel data set is encoded by the first coding network, and B_2 denotes the bit stream set obtained after the channel data in the second channel data set is encoded by the first coding network.
In some embodiments, in S13, the optimization function of the first coding network is:
min_{En} E_{H∈H_1}[log D_1(En(H))] + E_{H∈H_2}[log(1 - D_1(En(H)))]
where En denotes the first coding network, D_1 denotes the first discrimination network, H_1 denotes the first channel data set, and H_2 denotes the second channel data set.
In some embodiments, in S14, the optimization function of the first coding network is:
min_{En} ||En(H_1) - B_1||_F^2
where En denotes the first coding network, H_1 denotes the first channel data set, B_1 denotes the first bit stream set, and ||·||_F denotes the Frobenius norm.
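Assuming discriminator outputs in (0, 1) and real-valued (pre-quantization) bit stream matrices, the three objectives above may be written as the following numpy loss functions. The function names and sign conventions (all three written as losses to be minimized) are the author's illustration, not the application's notation.

```python
import numpy as np

def discriminator_loss(d_real, d_fake):
    """S12 objective (negated for minimization): D_1 should output ~1 on
    bit streams from H_1 and ~0 on bit streams from H_2."""
    return -np.mean(np.log(d_real)) - np.mean(np.log(1.0 - d_fake))

def encoder_confusion_loss(d_real, d_fake):
    """S13 objective: the coding network is rewarded when D_1 can no longer
    separate the two sets, i.e. it minimizes the discriminator's payoff."""
    return np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))

def reconstruction_loss(encoded_H1, B1):
    """S14 objective: squared Frobenius distance between the re-encoded
    old-scene data and the reference bit stream set B_1."""
    return np.linalg.norm(encoded_H1 - B1, ord="fro") ** 2
```

By construction the S13 loss is the negative of the S12 loss, which is the usual two-player adversarial structure.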
In some embodiments, the first coding network comprises a coding network front layer and a coding network back layer; as shown in fig. 10, the originating device may specifically perform channel scene migration training through S21-S24:
S21: the originating device encodes the first channel data set through the first coding network to obtain a first bit stream set;
S22: keeping the parameters of the first coding network unchanged, the originating device trains the first discrimination network according to the first channel data set and the second channel data set until convergence; when the first discrimination network converges, it is able to distinguish the feature vectors output by the coding network front layer for the first channel data set from the feature vectors output by the coding network front layer for the second channel data set;
S23: keeping the parameters of the first discrimination network unchanged, the originating device trains the coding network front layer according to the first channel data set and the second channel data set until convergence; when the coding network front layer converges, the first discrimination network can no longer distinguish the feature vectors output by the coding network front layer for the first channel data set from the feature vectors output by the coding network front layer for the second channel data set;
S24: the originating device trains the first coding network according to the first channel data set and the first bit stream set until convergence.
In the case that the second condition is not satisfied, the originating device continues to perform S22 to S24 after performing S24 until the second condition is satisfied, and the originating device determines the first coding network obtained when the second condition is satisfied as the second coding network.
In some embodiments, the second condition is that the number of times S24 is performed is greater than or equal to a third threshold, or the second condition is that the coding performance score at the time of convergence of the first coding network in S24 is greater than or equal to a fourth threshold.
In some embodiments, the coding performance score when the first coding network converges in S24 may be determined by the user, by the originating device, or by a pre-trained neural network; the present application is not limited in this respect.
In some embodiments, the third threshold may be preconfigured, agreed upon by the protocol, configured by the network device, or determined by the originating device.
In some embodiments, the fourth threshold may be preconfigured, agreed upon by the protocol, configured by the network device, or determined by the originating device.
In S21-S24, without involving the decoding network, the originating device exploits the domain adaptation of the feature vectors output by the coding network front layer in the old and new channel scenes to realize the migration and updating of the coding network.
In some embodiments, in S22, the optimization function of the first discrimination network is:
max_{D_1} E_{H∈H_1}[log D_1(Enf(H))] + E_{H∈H_2}[log(1 - D_1(Enf(H)))]
where D_1 denotes the first discrimination network, Enf denotes the coding network front layer, H_1 denotes the first channel data set, and H_2 denotes the second channel data set.
In some embodiments, in S23, the optimization function of the coding network front layer is:
min_{Enf} E_{H∈H_1}[log D_1(Enf(H))] + E_{H∈H_2}[log(1 - D_1(Enf(H)))]
where Enf denotes the coding network front layer, D_1 denotes the first discrimination network, H_1 denotes the first channel data set, and H_2 denotes the second channel data set.
In some embodiments, in S24, the optimization function of the first coding network is:
min_{En} ||En(H_1) - B_1||_F^2
where En denotes the first coding network, H_1 denotes the first channel data set, B_1 denotes the first bit stream set, and ||·||_F denotes the Frobenius norm.
Therefore, in the embodiment of the present application, when the channel scene changes, the transmitting device performs channel scene migration training on the coding network deployed on it according to the first channel data set and the second channel data set, thereby realizing adaptive updating of the coding network for channel scene changes in practical applications, improving the generalization capability of the coding network, and further improving the compression feedback precision of channel information feedback when the channel environment characteristics change.
The originating device side embodiment of the present application is described in detail above with reference to figs. 8 to 10, and the receiving device side embodiment of the present application is described in detail below with reference to figs. 11 to 13. It should be understood that the receiving device side embodiment corresponds to the originating device side embodiment, and for similar descriptions reference may be made to the originating device side embodiment.
Fig. 11 is a schematic flow chart of a method 300 of channel information feedback according to an embodiment of the present application, as shown in fig. 11, the method 300 of channel information feedback may include at least some of the following:
S310, the receiving device performs channel scene migration training on a first decoding network deployed on the receiving device according to a first channel data set, a first bit stream set, and a second bit stream set to obtain a second decoding network; wherein the first channel data set includes channel data in a first channel scene, the first bit stream set includes bit streams obtained by encoding the channel data in the first channel scene, the second bit stream set includes bit streams obtained by encoding the channel data in a second channel scene, the first decoding network is trained based on the first channel data set, the first decoding network adapts to the first channel scene, and the second decoding network adapts to the first channel scene and the second channel scene;
S320, the receiving device receives a target bit stream sent by the originating device;
S330, the receiving device decodes the target bit stream through the second decoding network to obtain target channel data; the target channel data is the channel data in the first channel scene or the second channel scene.
In the embodiment of the present application, the channel data included in the first channel data set may be CSI data, but may also be other channel data; this is not limited in the present application.
That is, the channel data in the first channel scenario may be CSI data, and the channel data in the second channel scenario may be CSI data.
In the embodiment of the present application, with the coding network and the decoding network deployed in a distributed manner at the transmitting end and the receiving end, when the channel environment characteristics change (i.e., the channel scene changes), online migration updating is performed separately on the single-ended network model of the transmitting end and the single-ended network model of the receiving end using a transfer learning method.
In the embodiment of the present application, when the channel scene switches or migrates from the first channel scene to the second channel scene, the receiving device performs scene migration training for the decoding network, so that the decoding network after migration training (i.e., the second decoding network) can adapt to the first channel scene and the second channel scene at the same time. This realizes adaptive updating of the decoding network for channel scene changes in practical applications, improves the generalization capability of the decoding network, and further improves the compression feedback precision of channel information feedback when the channel environment characteristics change.
The encoding network may be referred to as an encoder, and the decoding network may be referred to as a decoder, which is not limited in this regard.
In some embodiments, the first channel scenario and the second channel scenario may be different cells, such as the first channel scenario corresponding to an NR cell and the second channel scenario corresponding to an LTE cell.
In some embodiments, the first bit stream set may be a bit stream set obtained by encoding the channel data with the coding network that has undergone the channel scene migration training scheme in the channel information feedback method 200 (i.e., the second coding network), or may be a bit stream set obtained by encoding the channel data with the coding network that has not undergone that training scheme (i.e., the first coding network).
Likewise, the second bit stream set may be a bit stream set obtained by encoding the channel data with the coding network that has undergone the channel scene migration training scheme in the channel information feedback method 200 (i.e., the second coding network), or may be a bit stream set obtained by encoding the channel data with the coding network that has not undergone that training scheme (i.e., the first coding network).
In some embodiments, the first channel scene and the second channel scene may be different environments. For example, the first channel scene corresponds to an environment in which the terminal is stationary or slowly moving relative to the base station, and the second channel scene corresponds to an environment in which the terminal is rapidly moving relative to the base station, such as a high-speed rail, an airplane, and the like. For another example, the first channel scene corresponds to a closed or semi-closed environment and the second channel scene corresponds to an open environment.
In some embodiments, the specific forms of the first channel scene and the second channel scene are not limited, and only the difference between the first channel scene and the second channel scene is required to be satisfied.
In some embodiments, the first set of bitstreams includes a greater number of bitstreams than the second set of bitstreams. For example, the first set of bitstreams includes 10000 bitstreams and the second set of bitstreams includes 500 bitstreams.
In some embodiments, the receiving device is a terminal device and the originating device is a network device; alternatively, the receiving device is a network device and the originating device is a terminal device.
In still other embodiments, the receiving device is one terminal device and the originating device is another terminal device. In this case, the embodiment of the present application applies to sidelink (SL) communication.
In still other embodiments, the originating device is a network device and the receiving device is another network device. In this case, the embodiment of the present application applies to backhaul link communication.
In some embodiments, the first channel data set may also include channel data from a plurality of channel scenes; since the first decoding network is trained based on the first channel data set, the first decoding network may then adapt to the plurality of channel scenes.
In some embodiments, the receiving device performs channel scene migration training on the first decoding network deployed on the receiving device according to the first channel data set, the first bit stream set, the second bit stream set, and a second discrimination network to obtain the second decoding network.
In some embodiments, as shown in fig. 12, the receiving device may specifically perform channel scene migration training through S31-S33:
S31: keeping the parameters of the first decoding network unchanged, the receiving device trains the second discrimination network according to the first bit stream set and the second bit stream set until convergence; when the second discrimination network converges, it is able to distinguish the channel data output by the first decoding network for the first bit stream set from the channel data output by the first decoding network for the second bit stream set;
S32: keeping the parameters of the second discrimination network unchanged, the receiving device trains the first decoding network according to the first bit stream set and the second bit stream set until convergence; when the first decoding network converges, the second discrimination network can no longer distinguish the channel data output by the first decoding network for the first bit stream set from the channel data output by the first decoding network for the second bit stream set;
S33: the receiving device trains the first decoding network according to the first channel data set and the first bit stream set until convergence.
In the case that the third condition is not satisfied, the receiving device continues to perform S31 to S33 after performing S33 until the third condition is satisfied, and the receiving device determines the first decoding network obtained when the third condition is satisfied as the second decoding network.
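Step S33 on its own, re-anchoring the decoder to the old-scene pairs (B_1, H_1), has a closed-form solution when the decoder is restricted to a linear map, which makes the Frobenius-norm objective concrete. The toy data below (a linear "true" decoding map and random binary bit streams) are hypothetical and only illustrate the least-squares form of the objective.

```python
import numpy as np

rng = np.random.default_rng(2)

CODE_DIM, CHANNEL_DIM, N_SAMPLES = 16, 64, 200

# Toy "old scene" data: bit streams B_1 and the channel data H_1 they encode.
B1 = rng.integers(0, 2, size=(N_SAMPLES, CODE_DIM)).astype(float)
W_true = rng.standard_normal((CODE_DIM, CHANNEL_DIM))
H1 = B1 @ W_true  # pretend the true decoding map is linear

# S33 for a linear decoder De(b) = b @ W reduces to least squares:
# minimize ||B_1 W - H_1||_F^2 over W.
W_dec, *_ = np.linalg.lstsq(B1, H1, rcond=None)
residual = np.linalg.norm(B1 @ W_dec - H1, ord="fro")
```

A neural decoder has no such closed form, but its S33 gradient-descent training minimizes exactly this residual.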
In some embodiments, the third condition is that the number of times S33 is performed is greater than or equal to a fifth threshold, or the third condition is that the decoding performance score at the time of convergence of the first decoding network in S33 is greater than or equal to a sixth threshold.
In some embodiments, the decoding performance score when the first decoding network converges in S33 may be determined by the user, by the receiving device, or by a pre-trained neural network; the present application is not limited in this respect.
In some embodiments, the fifth threshold may be preconfigured, agreed upon by the protocol, configured by the network device, or determined by the receiving device.
In some embodiments, the sixth threshold may be preconfigured, agreed upon by the protocol, configured by the network device, or determined by the receiving device.
In S31-S33, without involving the coding network, the receiving device exploits the domain adaptation of the channel data output by the decoding network in the old and new channel scenes to realize the migration and updating of the decoding network.
In some embodiments, in S31, the optimization function of the second discrimination network is:
max_{D_2} E_{B∈B_1}[log D_2(De(B))] + E_{B∈B_2}[log(1 - D_2(De(B)))]
where D_2 denotes the second discrimination network, De denotes the first decoding network, B_1 denotes the first bit stream set, and B_2 denotes the second bit stream set.
In some embodiments, in S32, the optimization function of the first decoding network is:
min_{De} E_{B∈B_1}[log D_2(De(B))] + E_{B∈B_2}[log(1 - D_2(De(B)))]
where De denotes the first decoding network, D_2 denotes the second discrimination network, B_1 denotes the first bit stream set, and B_2 denotes the second bit stream set.
In some embodiments, in S33, the optimization function of the first decoding network is:
min_{De} ||De(B_1) - H_1||_F^2
where De denotes the first decoding network, H_1 denotes the first channel data set, B_1 denotes the first bit stream set, and ||·||_F denotes the Frobenius norm.
In some embodiments, the first decoding network comprises a decoding network front layer and a decoding network back layer; as shown in fig. 13, the receiving device may specifically perform channel scene migration training through S41-S43:
S41: keeping the parameters of the first decoding network unchanged, the receiving device trains the second discrimination network according to the first bit stream set and the second bit stream set until convergence; when the second discrimination network converges, it is able to distinguish the feature vectors output by the decoding network front layer for the first bit stream set from the feature vectors output by the decoding network front layer for the second bit stream set;
S42: keeping the parameters of the second discrimination network unchanged, the receiving device trains the decoding network front layer according to the first bit stream set and the second bit stream set until convergence; when the decoding network front layer converges, the second discrimination network can no longer distinguish the feature vectors output by the decoding network front layer for the first bit stream set from the feature vectors output by the decoding network front layer for the second bit stream set;
S43: the receiving device trains the first decoding network according to the first channel data set and the first bit stream set until convergence.
In the case that the fourth condition is not satisfied, the receiving device continues to perform S41 to S43 after performing S43 until the fourth condition is satisfied, and the receiving device determines the first decoding network obtained when the fourth condition is satisfied as the second decoding network.
In some embodiments, the decoding performance score when the first decoding network converges in S43 may be determined by the user, by the receiving device, or by a pre-trained neural network; the present application is not limited in this respect.
In some embodiments, the fourth condition is that the number of times S43 is performed is greater than or equal to a seventh threshold, or the fourth condition is that the decoding performance score at the time of convergence of the first decoding network in S43 is greater than or equal to an eighth threshold.
In some embodiments, the seventh threshold may be preconfigured, agreed upon by the protocol, configured by the network device, or determined by the receiving end device.
In some embodiments, the eighth threshold may be preconfigured, agreed upon by the protocol, configured by the network device, or determined by the receiving end device.
In S41 to S43, the receiving end device exploits the domain adaptability of the feature vectors output by the decoding network front layer across the old and new channel scenes, realizing the migration and update of the decoding network without using the encoding network.
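The three-phase loop of S41 to S43 can be illustrated with a small NumPy sketch. Everything here is a hypothetical stand-in: the patent does not specify any architecture, so linear layers play the roles of the decoding network front and back layers, a logistic-regression classifier plays the second authentication network, and all sizes, learning rates, and iteration counts are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
BITS, FEAT, CH, N = 8, 6, 4, 64

B1 = rng.normal(size=(N, BITS))              # first bit stream set (old scene)
B2 = rng.normal(loc=1.0, size=(N, BITS))     # second bit stream set (new scene)
H1 = rng.normal(size=(N, CH))                # first channel data set

Wf = rng.normal(scale=0.1, size=(BITS, FEAT))  # decoding network front layer
Wb = rng.normal(scale=0.1, size=(FEAT, CH))    # decoding network back layer
wd = np.zeros(FEAT)                            # "second authentication network"

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

def recon_loss(Wf, Wb):
    # squared Frobenius norm between decoded and true channel data
    return float(np.sum((B1 @ Wf @ Wb - H1) ** 2))

loss0 = recon_loss(Wf, Wb)
for _ in range(20):                            # outer loop: "fourth condition"
    # S41: train the authentication network with the front layer frozen
    F1, F2 = B1 @ Wf, B2 @ Wf
    for _ in range(50):
        p1, p2 = sigmoid(F1 @ wd), sigmoid(F2 @ wd)
        wd -= 0.5 * (F1.T @ (p1 - 1.0) + F2.T @ p2) / N   # BCE, labels 1 vs 0
    # S42: train the front layer so its features fool the discriminator
    for _ in range(50):
        for B in (B1, B2):
            F = B @ Wf
            p = sigmoid(F @ wd)
            dF = np.outer((p - 0.5) * p * (1.0 - p), wd)  # push outputs to 0.5
            Wf -= 0.5 * (B.T @ dF) / N
    # S43: train the full decoder to reconstruct H1 from B1
    for _ in range(100):
        F1 = B1 @ Wf
        E = F1 @ Wb - H1
        Wb -= 0.02 * (F1.T @ E) / N
        Wf -= 0.02 * (B1.T @ (E @ Wb.T)) / N

assert recon_loss(Wf, Wb) < loss0   # migration preserved decoding ability
```

The sketch mirrors the alternation described above: the authentication network learns to separate old-scene and new-scene features, the front layer is then nudged to erase that separation, and a final reconstruction phase preserves decoding accuracy on the first channel scene.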
In some embodiments, in S41, the optimization function of the second authentication network is:
where D_2 denotes the second authentication network, Def denotes the decoding network front layer, B_1 denotes the first bit stream set, and B_2 denotes the second bit stream set.
In some embodiments, in S42, the optimization function of the decoding network front layer is:
where Def denotes the decoding network front layer, D_2 denotes the second authentication network, B_1 denotes the first bit stream set, and B_2 denotes the second bit stream set.
In some embodiments, in S43, the optimization function of the first decoding network is:
where De denotes the first decoding network, H_1 denotes the first channel data set, B_1 denotes the first bit stream set, and ‖·‖_F denotes the Frobenius norm.
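The optimization functions referenced in S41 to S43 appear only as images in the original publication and are not reproduced in this text. Under a standard adversarial (GAN-style) domain-adaptation formulation consistent with the symbol definitions above, plausible forms — offered as a hedged reconstruction, not the patent's exact expressions — are:

```latex
% Hypothetical reconstruction of the S41--S43 objectives (not the patent's
% verbatim formulas). Def: decoding network front layer; D_2: second
% authentication network; De: first decoding network; B_1, B_2: bit stream
% sets; H_1: first channel data set.
\begin{align}
% S41: train the authentication network to separate the two domains
\max_{D_2}\;& \mathbb{E}_{b \sim B_1}\big[\log D_2(\mathrm{Def}(b))\big]
            + \mathbb{E}_{b \sim B_2}\big[\log\big(1 - D_2(\mathrm{Def}(b))\big)\big] \\
% S42: train the front layer so the two domains become indistinguishable
\min_{\mathrm{Def}}\;& \mathbb{E}_{b \sim B_1}\big[\log D_2(\mathrm{Def}(b))\big]
            + \mathbb{E}_{b \sim B_2}\big[\log\big(1 - D_2(\mathrm{Def}(b))\big)\big] \\
% S43: retain decoding ability on the first channel scene
\min_{\mathrm{De}}\;& \mathbb{E}_{(h,\,b) \sim (H_1,\,B_1)}\,\big\|\mathrm{De}(b) - h\big\|_F^2
\end{align}
```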
Therefore, in the embodiments of the present application, when the channel scene changes, the receiving end device performs channel scene migration training on the decoding network deployed on it according to the first channel data set, the first bit stream set and the second bit stream set. This realizes adaptive updating of the decoding network for channel scene changes in practical applications, improves the generalization capability of the decoding network across scenes, and further improves the compression feedback precision of channel information feedback when the channel environment characteristics change.
The method embodiment of the present application is described in detail above with reference to fig. 8 to 13, and the apparatus embodiment of the present application is described in detail below with reference to fig. 14 to 15, it being understood that the apparatus embodiment and the method embodiment correspond to each other, and similar descriptions can be made with reference to the method embodiment.
Fig. 14 shows a schematic block diagram of an originating device 400 according to an embodiment of the application. As shown in fig. 14, the originating device 400 includes:
a processing unit 410, configured to perform channel scene migration training on a first coding network deployed on the originating device according to a first channel data set and a second channel data set, so as to obtain a second coding network; wherein the first channel data set includes channel data in a first channel scene, the second channel data set includes channel data in a second channel scene, the first coding network is trained based on the first channel data set, the first coding network is adapted to the first channel scene, and the second coding network is adapted to the first channel scene and the second channel scene;
The processing unit 410 is further configured to encode target channel data through the second encoding network to obtain a target bit stream; the target channel data is the channel data in the first channel scene or the second channel scene;
a communication unit 420, configured to send the target bitstream to a sink device.
In some embodiments, the processing unit 410 is specifically configured to:
and performing channel scene migration training on the first coding network deployed on the originating device according to the first channel data set, the second channel data set and the first authentication network to obtain the second coding network.
In some embodiments, the processing unit 410 is specifically configured to:
S11: encoding the first channel data set through the first encoding network to obtain a first bit stream set;
S12: training the first authentication network according to the first channel data set and the second channel data set until convergence while keeping parameters of the first encoding network unchanged; wherein, in the case that the first authentication network converges, the first authentication network has the capability to distinguish between the bit stream output by the first encoding network for the first channel data set and the bit stream output by the first encoding network for the second channel data set;
S13: training the first encoding network according to the first channel data set and the second channel data set until convergence while keeping parameters of the first authentication network unchanged; wherein, in the case that the first encoding network converges, the first authentication network cannot distinguish between the bit stream output by the first encoding network for the first channel data set and the bit stream output by the first encoding network for the second channel data set;
S14: training the first encoding network according to the first channel data set and the first bit stream set until convergence;
the first encoding network when the first condition is satisfied is determined to be the second encoding network.
In some embodiments, the first condition is that the number of times S14 is performed is greater than or equal to a first threshold, or the first condition is that the coding performance score at the time of convergence of the first coding network in S14 is greater than or equal to a second threshold.
In some embodiments, in S12, the optimization function of the first authentication network is:
where D_1 denotes the first authentication network, En denotes the first encoding network, H_1 denotes the first channel data set, and H_2 denotes the second channel data set.
In some embodiments, in S13, the optimization function of the first encoding network is:
where En denotes the first encoding network, D_1 denotes the first authentication network, H_1 denotes the first channel data set, and H_2 denotes the second channel data set.
In some embodiments, in S14, the optimization function of the first encoding network is:
where En denotes the first encoding network, H_1 denotes the first channel data set, B_1 denotes the first bit stream set, and ‖·‖_F denotes the Frobenius norm.
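As on the decoding side, the S12 to S14 optimization functions are not reproduced in this text. A plausible reconstruction under the same standard adversarial formulation — again a hedged sketch, not the patent's verbatim formulas — is:

```latex
% Hypothetical reconstruction of the S12--S14 objectives. En: first
% encoding network; D_1: first authentication network; H_1, H_2: channel
% data sets; B_1: first bit stream set.
\begin{align}
% S12: train the authentication network to separate the two channel scenes
\max_{D_1}\;& \mathbb{E}_{h \sim H_1}\big[\log D_1(\mathrm{En}(h))\big]
            + \mathbb{E}_{h \sim H_2}\big[\log\big(1 - D_1(\mathrm{En}(h))\big)\big] \\
% S13: train the encoding network so its outputs become indistinguishable
\min_{\mathrm{En}}\;& \mathbb{E}_{h \sim H_1}\big[\log D_1(\mathrm{En}(h))\big]
            + \mathbb{E}_{h \sim H_2}\big[\log\big(1 - D_1(\mathrm{En}(h))\big)\big] \\
% S14: keep the encoder's outputs close to the original first bit stream set
\min_{\mathrm{En}}\;& \mathbb{E}_{(h,\,b) \sim (H_1,\,B_1)}\,\big\|\mathrm{En}(h) - b\big\|_F^2
\end{align}
```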
In some embodiments, the first coding network comprises a coding network front layer and a coding network back layer; the processing unit 410 is specifically configured to:
S21: encoding the first channel data set through the first encoding network to obtain a first bit stream set;
S22: training the first authentication network according to the first channel data set and the second channel data set until convergence while keeping parameters of the first encoding network unchanged; wherein, in the case that the first authentication network converges, the first authentication network has the capability to distinguish between the feature vector output by the coding network front layer for the first channel data set and the feature vector output by the coding network front layer for the second channel data set;
S23: training the coding network front layer according to the first channel data set and the second channel data set until convergence while keeping parameters of the first authentication network unchanged; wherein, in the case that the coding network front layer converges, the first authentication network cannot distinguish between the feature vector output by the coding network front layer for the first channel data set and the feature vector output by the coding network front layer for the second channel data set;
S24: training the first coding network according to the first channel data set and the first bit stream set until convergence;
the first encoding network when the second condition is satisfied is determined to be the second encoding network.
In some embodiments, the second condition is that the number of times S24 is performed is greater than or equal to a third threshold, or the second condition is that the coding performance score at the time of convergence of the first coding network in S24 is greater than or equal to a fourth threshold.
In some embodiments, in S22, the optimization function of the first authentication network is:
where D_1 denotes the first authentication network, Enf denotes the coding network front layer, H_1 denotes the first channel data set, and H_2 denotes the second channel data set.
In some embodiments, in S23, the optimization function of the coding network front layer is:
where Enf denotes the coding network front layer, D_1 denotes the first authentication network, H_1 denotes the first channel data set, and H_2 denotes the second channel data set.
In some embodiments, in S24, the optimization function of the first encoding network is:
where En denotes the first encoding network, H_1 denotes the first channel data set, B_1 denotes the first bit stream set, and ‖·‖_F denotes the Frobenius norm.
In some embodiments, the first set of channel data includes a greater amount of channel data than the second set of channel data.
In some embodiments, the communication unit may be a communication interface or transceiver, or an input/output interface of a communication chip or a system on a chip. The processing unit may be one or more processors.
It should be understood that the originating device 400 according to the embodiment of the present application may correspond to the originating device in the embodiment of the method of the present application, and the foregoing and other operations and/or functions of each unit in the originating device 400 are respectively for implementing the corresponding flow of the originating device in the method 200 for channel information feedback shown in fig. 8, and are not described herein for brevity.
Fig. 15 shows a schematic block diagram of a sink device 500 according to an embodiment of the application. As shown in fig. 15, the sink device 500 includes:
a processing unit 510, configured to perform channel scene migration training on a first decoding network deployed on the receiving device according to a first channel data set, a first bit stream set, and a second bit stream set, to obtain a second decoding network; wherein the first channel data set includes channel data in a first channel scene, the first bit stream set includes bit streams obtained by encoding the channel data in the first channel scene, the second bit stream set includes bit streams obtained by encoding the channel data in a second channel scene, the first decoding network is trained based on the first channel data set, the first decoding network is adapted to the first channel scene, and the second decoding network is adapted to the first channel scene and the second channel scene;
a communication unit 520, configured to receive a target bit stream sent by an originating device;
the processing unit 510 is further configured to decode, through the second decoding network, the target bitstream to obtain target channel data; the target channel data is the channel data in the first channel scene or the second channel scene.
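Read together with the originating device 400 above, the split of roles between processing and communication units can be sketched as a toy end-to-end flow. All of it is illustrative: the linear maps and the sign quantiser are assumptions standing in for the trained second encoding and second decoding networks, whose architectures the patent leaves unspecified.

```python
import numpy as np

rng = np.random.default_rng(1)
CH, BITS = 4, 8

# Stand-ins for the trained second encoding network (at the originating
# device) and second decoding network (at the sink device).
W_enc = rng.normal(size=(CH, BITS))
W_dec = np.linalg.pinv(W_enc)        # decoder approximating the inverse map

def encode(h):
    # originating device, processing unit: channel data -> target bit stream
    return np.where(h @ W_enc >= 0.0, 1.0, -1.0)

def decode(b):
    # sink device, processing unit: target bit stream -> channel data
    return b @ W_dec

target_channel_data = rng.normal(size=CH)
target_bit_stream = encode(target_channel_data)   # communication unit 420 sends
recovered = decode(target_bit_stream)             # communication unit 520 receives

assert target_bit_stream.shape == (BITS,)
assert np.all(np.isin(target_bit_stream, (-1.0, 1.0)))
assert recovered.shape == (CH,)
```

The sign quantiser makes the feedback a true bit stream; because quantisation discards magnitude information, the recovered channel data only approximates the original, which is why the compression-feedback precision discussed above matters.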
In some embodiments, the processing unit 510 is specifically configured to:
and performing channel scene migration training on the first decoding network deployed on the receiving end equipment according to the first channel data set, the first bit stream set, the second bit stream set and the second authentication network to obtain the second decoding network.
In some embodiments, the processing unit 510 is specifically configured to:
S31: training the second authentication network according to the first bit stream set and the second bit stream set until convergence while keeping parameters of the first decoding network unchanged; wherein, in the case that the second authentication network converges, the second authentication network has the capability to distinguish between the channel data output by the first decoding network for the first bit stream set and the channel data output by the first decoding network for the second bit stream set;
S32: training the first decoding network according to the first bit stream set and the second bit stream set until convergence while keeping parameters of the second authentication network unchanged; wherein, in the case that the first decoding network converges, the second authentication network cannot distinguish between the channel data output by the first decoding network for the first bit stream set and the channel data output by the first decoding network for the second bit stream set;
S33: training the first decoding network according to the first channel data set and the first bit stream set until convergence;
the first decoding network when the third condition is satisfied is determined as the second decoding network.
In some embodiments, the third condition is that the number of times S33 is performed is greater than or equal to a fifth threshold, or the third condition is that the decoding performance score at the time of convergence of the first decoding network in S33 is greater than or equal to a sixth threshold.
In some embodiments, in S31, the optimization function of the second authentication network is:
where D_2 denotes the second authentication network, De denotes the first decoding network, B_1 denotes the first bit stream set, and B_2 denotes the second bit stream set.
In some embodiments, in S32, the optimization function of the first decoding network is:
where De denotes the first decoding network, D_2 denotes the second authentication network, B_1 denotes the first bit stream set, and B_2 denotes the second bit stream set.
In some embodiments, in S33, the optimization function of the first decoding network is:
where De denotes the first decoding network, H_1 denotes the first channel data set, B_1 denotes the first bit stream set, and ‖·‖_F denotes the Frobenius norm.
In some embodiments, the first decoding network comprises a decoding network front layer and a decoding network back layer; the processing unit 510 is specifically configured to:
S41: training the second authentication network according to the first bit stream set and the second bit stream set until convergence while keeping parameters of the first decoding network unchanged; wherein, in the case that the second authentication network converges, the second authentication network has the capability to distinguish between the feature vector output by the decoding network front layer for the first bit stream set and the feature vector output by the decoding network front layer for the second bit stream set;
S42: training the decoding network front layer according to the first bit stream set and the second bit stream set until convergence while keeping parameters of the second authentication network unchanged; wherein, in the case that the decoding network front layer converges, the second authentication network cannot distinguish between the feature vector output by the decoding network front layer for the first bit stream set and the feature vector output by the decoding network front layer for the second bit stream set;
S43: training the first decoding network according to the first channel data set and the first bit stream set until convergence;
The first decoding network when the fourth condition is satisfied is determined as the second decoding network.
In some embodiments, the fourth condition is that the number of times S43 is performed is greater than or equal to a seventh threshold, or the fourth condition is that the decoding performance score at the time of convergence of the first decoding network in S43 is greater than or equal to an eighth threshold.
In some embodiments, in S41, the optimization function of the second authentication network is:
where D_2 denotes the second authentication network, Def denotes the decoding network front layer, B_1 denotes the first bit stream set, and B_2 denotes the second bit stream set.
In some embodiments, in S42, the optimization function of the decoding network front layer is:
where Def denotes the decoding network front layer, D_2 denotes the second authentication network, B_1 denotes the first bit stream set, and B_2 denotes the second bit stream set.
In some embodiments, in S43, the optimization function of the first decoding network is:
where De denotes the first decoding network, H_1 denotes the first channel data set, B_1 denotes the first bit stream set, and ‖·‖_F denotes the Frobenius norm.
In some embodiments, the first set of bitstreams includes a greater number of bitstreams than the second set of bitstreams.
In some embodiments, the communication unit may be a communication interface or transceiver, or an input/output interface of a communication chip or a system on a chip. The processing unit may be one or more processors.
It should be understood that the sink device 500 according to the embodiment of the present application may correspond to the sink device in the embodiment of the method of the present application, and the foregoing and other operations and/or functions of each unit in the sink device 500 are respectively for implementing the corresponding flow of the sink device in the method 300 for channel information feedback shown in fig. 11, which is not described herein for brevity.
Fig. 16 is a schematic structural diagram of a communication device 600 provided in an embodiment of the present application. The communication device 600 shown in fig. 16 comprises a processor 610, from which the processor 610 may call and run a computer program to implement the method in an embodiment of the application.
In some embodiments, as shown in fig. 16, the communication device 600 may also include a memory 620. Wherein the processor 610 may call and run a computer program from the memory 620 to implement the method in an embodiment of the application.
The memory 620 may be a separate device from the processor 610 or may be integrated into the processor 610.
In some embodiments, as shown in fig. 16, the communication device 600 may further include a transceiver 630, and the processor 610 may control the transceiver 630 to communicate with other devices, and in particular, may transmit information or data to other devices, or receive information or data transmitted by other devices.
The transceiver 630 may include a transmitter and a receiver, among others. Transceiver 630 may further include antennas, the number of which may be one or more.
In some embodiments, the communication device 600 may be an originating device in the embodiments of the present application, and the communication device 600 may implement corresponding flows implemented by the originating device in the methods in the embodiments of the present application, which are not described herein for brevity.
In some embodiments, the communication device 600 may be a sink device in the embodiments of the present application, and the communication device 600 may implement corresponding flows implemented by the sink device in the methods in the embodiments of the present application, which are not described herein for brevity.
Fig. 17 is a schematic structural view of an apparatus of an embodiment of the present application. The apparatus 700 shown in fig. 17 includes a processor 710, and the processor 710 may call and execute a computer program from a memory to implement the method in an embodiment of the present application.
In some embodiments, as shown in fig. 17, the apparatus 700 may further include a memory 720. Wherein the processor 710 may invoke and run a computer program from the memory 720 to implement the method in the embodiments of the present application.
Wherein the memory 720 may be a separate device from the processor 710 or may be integrated into the processor 710.
In some embodiments, the apparatus 700 may further include an input interface 730. The processor 710 may control the input interface 730 to communicate with other devices or chips, and in particular, may obtain information or data sent by other devices or chips.
In some embodiments, the apparatus 700 may further comprise an output interface 740. The processor 710 may control the output interface 740 to communicate with other devices or chips, and in particular, may output information or data to other devices or chips.
In some embodiments, the apparatus may be applied to an originating device in the embodiments of the present application, and the apparatus may implement corresponding flows implemented by the originating device in each method in the embodiments of the present application, which are not described herein for brevity.
In some embodiments, the apparatus may be applied to a receiving device in the embodiments of the present application, and the apparatus may implement corresponding flows implemented by the receiving device in each method in the embodiments of the present application, which are not described herein for brevity.
In some embodiments, the device according to the embodiments of the present application may also be a chip, for example, a chip system or a system-on-chip.
Fig. 18 is a schematic block diagram of a communication system 800 provided by an embodiment of the present application. As shown in fig. 18, the communication system 800 includes a receiving device 810 and an originating device 820.
The receiving device 810 may be used to implement the corresponding functions implemented by the receiving device in the above method, and the sending device 820 may be used to implement the corresponding functions implemented by the sending device in the above method, which are not described herein for brevity.
It should be appreciated that the processor of an embodiment of the present application may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method embodiments may be completed by integrated logic circuits of hardware in the processor or by instructions in the form of software. The processor may be a general purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field programmable gate array (Field Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. The methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the methods disclosed in connection with the embodiments of the present application may be embodied directly as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium well known in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above methods in combination with its hardware.
It will be appreciated that the memory in embodiments of the present application may be volatile memory or nonvolatile memory, or may include both volatile and nonvolatile memory. The nonvolatile memory may be a read-only memory (Read-Only Memory, ROM), a programmable ROM (Programmable ROM, PROM), an erasable PROM (Erasable PROM, EPROM), an electrically erasable PROM (Electrically EPROM, EEPROM), or a flash memory. The volatile memory may be a random access memory (Random Access Memory, RAM), which acts as an external cache. By way of example, and not limitation, many forms of RAM are available, such as static RAM (Static RAM, SRAM), dynamic RAM (Dynamic RAM, DRAM), synchronous DRAM (Synchronous DRAM, SDRAM), double data rate SDRAM (Double Data Rate SDRAM, DDR SDRAM), enhanced SDRAM (Enhanced SDRAM, ESDRAM), synchlink DRAM (Synchlink DRAM, SLDRAM), and direct Rambus RAM (Direct Rambus RAM, DR RAM). It should be noted that the memory of the systems and methods described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
It should be understood that the above memory is illustrative but not restrictive. For example, the memory in the embodiments of the present application may also be static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), direct Rambus RAM (DR RAM), and the like. That is, the memory in the embodiments of the present application is intended to comprise, without being limited to, these and any other suitable types of memory.
The embodiment of the application also provides a computer readable storage medium for storing a computer program.
In some embodiments, the computer readable storage medium may be applied to the originating device in the embodiments of the present application, and the computer program causes a computer to execute corresponding processes implemented by the originating device in the methods in the embodiments of the present application, which are not described herein for brevity.
In some embodiments, the computer readable storage medium may be applied to the sink device in the embodiments of the present application, and the computer program causes a computer to execute corresponding processes implemented by the sink device in the methods in the embodiments of the present application, which are not described herein for brevity.
The embodiment of the application also provides a computer program product comprising computer program instructions.
In some embodiments, the computer program product may be applied to an originating device in the embodiments of the present application, and the computer program instructions cause the computer to execute corresponding processes implemented by the originating device in the methods in the embodiments of the present application, which are not described herein for brevity.
In some embodiments, the computer program product may be applied to the sink device in the embodiments of the present application, and the computer program instructions cause the computer to execute the corresponding processes implemented by the sink device in the methods in the embodiments of the present application, which are not described herein for brevity.
The embodiment of the application also provides a computer program.
In some embodiments, the computer program may be applied to an originating device in the embodiments of the present application, and when the computer program runs on a computer, the computer is caused to execute corresponding processes implemented by the originating device in the methods in the embodiments of the present application, which are not described herein for brevity.
In some embodiments, the computer program may be applied to the receiving device in the embodiments of the present application, and when the computer program runs on a computer, the computer is caused to execute corresponding processes implemented by the receiving device in the methods in the embodiments of the present application, which are not described herein for brevity.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In the several embodiments provided by the present application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part thereof contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a U-disk, a removable hard disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, or an optical disk.
The foregoing is merely illustrative of the present application, and the present application is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (58)

  1. A method for channel information feedback, comprising:
    the method comprises the steps that an originating device carries out channel scene migration training on a first coding network deployed on the originating device according to a first channel data set and a second channel data set to obtain a second coding network; the first channel data set comprises channel data in a first channel scene, the second channel data set comprises channel data in a second channel scene, the first coding network is trained based on the first channel data set, the first coding network is adapted to the first channel scene, and the second coding network is adapted to the first channel scene and the second channel scene;
    the originating device encodes target channel data through the second encoding network to obtain a target bit stream; the target channel data are channel data in the first channel scene or the second channel scene;
    The originating device sends the target bit stream to a receiving device.
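The feedback flow recited in claim 1 — encode channel data into a bit stream at the transmitter, feed the bit stream back, reconstruct at the receiver — can be sketched with toy stand-ins. The `encode`/`decode` maps below (a fixed random projection with 1-bit quantization, inverted by least squares) are illustrative assumptions, not the trained networks the claim recites.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (assumptions, not the claimed networks): a fixed random
# projection with 1-bit quantization as the "encoder", and a least-squares
# inverse as the "decoder".
W = rng.standard_normal((16, 64))   # projection: 64 channel values -> 16 bits
H = rng.standard_normal(64)         # target channel data (flattened)

def encode(h):
    """Transmitter side: map channel data to a 0/1 feedback bit stream."""
    return (W @ h > 0).astype(np.uint8)

def decode(b):
    """Receiver side: reconstruct channel data from the fed-back bits."""
    s = 2.0 * b - 1.0               # bits back to +/-1 quantization levels
    return np.linalg.lstsq(W, s, rcond=None)[0]

bits = encode(H)                    # the "target bit stream" of claim 1
H_hat = decode(bits)                # the receiver's channel estimate
```

The point of such a scheme is compression: 64 real channel values are fed back as only 16 bits, which is why the encoder/decoder pair must be trained (and, per the claims, migrated across channel scenes) rather than fixed.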
  2. The method of claim 1, wherein the originating device performs channel scene migration training on a first encoded network deployed on the originating device based on a first channel data set and a second channel data set to obtain a second encoded network, comprising:
    and the originating equipment performs channel scene migration training on the first coding network deployed on the originating equipment according to the first channel data set, the second channel data set and the first identification network to obtain the second coding network.
  3. The method of claim 2, wherein the originating device performs channel scene migration training on the first encoded network deployed on the originating device based on the first channel data set, the second channel data set, and a first authentication network to obtain the second encoded network, comprising:
    s11: the originating device encodes the first channel data set through the first encoding network to obtain a first bit stream set;
    s12: under the condition of keeping the parameters of the first coding network unchanged, the originating device trains the first identification network according to the first channel data set and the second channel data set until convergence; wherein, in case of convergence of the first authentication network, the first authentication network has the capability to distinguish between a bit stream output by the first encoding network for the first channel data set and a bit stream output by the first encoding network for the second channel data set;
    S13: under the condition that the parameters of the first identification network are kept unchanged, the originating device trains the first coding network according to the first channel data set and the second channel data set until convergence; wherein, in case the first encoding network converges, the first authentication network cannot distinguish between a bit stream output by the first encoding network for the first channel data set and a bit stream output by the first encoding network for the second channel data set;
    s14: the originating device trains the first coding network according to the first channel data set and the first bit stream set until convergence;
    the originating device determines the first encoding network as the second encoding network when a first condition is satisfied.
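The alternating procedure of steps S11–S14 can be summarized structurally. The `MigrationTrainer` methods below are placeholders for the gradient-based training the claim leaves unspecified, and `score` stands in for the coding performance score of claim 4; this is a control-flow sketch under those assumptions, not an implementation of the claimed networks.

```python
from dataclasses import dataclass

# Control-flow sketch of S11-S14 (assumption: the method bodies are
# placeholders for unspecified gradient training; `score` mimics the
# coding performance score of claim 4).
@dataclass
class MigrationTrainer:
    score: float = 0.0
    rounds: int = 0

    def s11_encode(self, H1):
        # S11: encode the first channel data set into the first bit stream set
        return [0 for _ in H1]                 # placeholder bit streams

    def s12_train_discriminator(self, H1, H2):
        pass  # S12: encoder frozen; train the identification network

    def s13_train_encoder_adversarial(self, H1, H2):
        pass  # S13: identification network frozen; train the encoder

    def s14_train_encoder_reconstruction(self, H1, B1):
        # S14: retrain the encoder against the saved bit stream set B_1
        self.rounds += 1
        self.score += 0.5                      # pretend performance improves

def migrate(trainer, H1, H2, max_rounds=4, score_threshold=1.0):
    B1 = trainer.s11_encode(H1)                # S11 runs once, up front
    while True:
        trainer.s12_train_discriminator(H1, H2)           # S12
        trainer.s13_train_encoder_adversarial(H1, H2)     # S13
        trainer.s14_train_encoder_reconstruction(H1, B1)  # S14
        # Claim 4's first condition: iteration count or performance score
        if trainer.rounds >= max_rounds or trainer.score >= score_threshold:
            return trainer        # now serves as the "second encoding network"
```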
  4. The method of claim 3, wherein the first condition is that the number of times S14 is performed is greater than or equal to a first threshold, or wherein the first condition is that the coding performance score at which the first coding network converges in S14 is greater than or equal to a second threshold.
  5. The method of claim 3 or 4, wherein,
    in S12, the optimization function of the first identification network is:
    wherein D_1 denotes the first authentication network, En denotes the first encoding network, H_1 denotes the first channel data set, and H_2 denotes the second channel data set.
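The equation images for these optimization functions are not reproduced in this text. A standard adversarial (GAN-style) discriminator objective consistent with claim 3's convergence criteria — score bit streams derived from H_1 toward one class and those derived from H_2 toward the other — would be the binary cross-entropy below; the exact form is an assumption, not the claimed formula.

```python
import numpy as np

def discriminator_loss(d_on_b1, d_on_b2, eps=1e-9):
    """Binary cross-entropy a discriminator D_1 would minimize: push its
    outputs on bit streams from H_1 toward 1 and on those from H_2 toward 0.
    (Assumed GAN-style form; the patent's equation images are not shown.)"""
    d_on_b1 = np.asarray(d_on_b1, dtype=float)
    d_on_b2 = np.asarray(d_on_b2, dtype=float)
    return float(-np.mean(np.log(d_on_b1 + eps))
                 - np.mean(np.log(1.0 - d_on_b2 + eps)))
```

A converged discriminator (outputs near 1 on En(H_1), near 0 on En(H_2)) drives this loss toward zero, matching S12's criterion; in S13 the encoder is trained so that the discriminator's outputs on the two sets become indistinguishable.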
  6. The method according to any one of claims 3 to 5, wherein
    in S13, the optimization function of the first coding network is:
    wherein En denotes the first encoding network, D_1 denotes the first authentication network, H_1 denotes the first channel data set, and H_2 denotes the second channel data set.
  7. The method according to any one of claims 3 to 6, wherein
    in S14, the optimization function of the first coding network is:
    where En denotes the first encoding network, H_1 denotes the first channel data set, B_1 denotes the first bit stream set, and ‖·‖_F denotes the Frobenius norm.
  8. The method of claim 2, wherein the first encoding network comprises an encoding network front layer and an encoding network back layer;
    the originating device performs channel scene migration training on the first coding network deployed on the originating device according to the first channel data set, the second channel data set and the first identification network, to obtain the second coding network, and the method includes:
    S21: the originating device encodes the first channel data set through the first encoding network to obtain a first bit stream set;
    s22: under the condition of keeping the parameters of the first coding network unchanged, the originating device trains the first identification network according to the first channel data set and the second channel data set until convergence; wherein, in case of convergence of the first authentication network, the first authentication network has the capability of distinguishing between the feature vectors output by the encoding network front part layer for the first channel data set and the feature vectors output by the encoding network front part layer for the second channel data set;
    s23: under the condition of keeping the parameters of the first authentication network unchanged, the originating device trains the front part layer of the coding network until convergence according to the first channel data set and the second channel data set; under the condition that the coding network front part layer converges, the first identification network cannot distinguish the characteristic vector output by the coding network front part layer for the first channel data set from the characteristic vector output by the coding network front part layer for the second channel data set;
    S24: the originating device trains the first coding network according to the first channel data set and the first bit stream set until convergence;
    the originating device determines the first encoding network as the second encoding network when a second condition is satisfied.
  9. The method of claim 8, wherein the second condition is that the number of times S24 is performed is greater than or equal to a third threshold, or wherein the second condition is that the coding performance score at which the first coding network converges in S24 is greater than or equal to a fourth threshold.
  10. The method of claim 8 or 9, wherein,
    in S22, the optimization function of the first identification network is:
    wherein D_1 denotes the first authentication network, Enf denotes the coding network front part layer, H_1 denotes the first channel data set, and H_2 denotes the second channel data set.
  11. The method according to any one of claims 8 to 10, wherein
    in S23, the optimization function of the coding network front layer is:
    wherein Enf denotes the coding network front part layer, D_1 denotes the first authentication network, H_1 denotes the first channel data set, and H_2 denotes the second channel data set.
  12. The method according to any one of claims 8 to 11, wherein
    in S24, the optimization function of the first coding network is:
    where En denotes the first encoding network, H_1 denotes the first channel data set, B_1 denotes the first bit stream set, and ‖·‖_F denotes the Frobenius norm.
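The S14 objective, as its variable list suggests, penalizes a Frobenius-norm distance between the retrained encoder's output on H_1 and the saved bit stream set B_1. A minimal sketch of that distance, with hypothetical numeric stand-ins for En(H_1) and B_1 (the values are assumptions; only the norm computation is the point):

```python
import numpy as np

def frobenius_distance(A, B):
    """||A - B||_F, the Frobenius norm appearing in the S14/S33 objectives."""
    return float(np.linalg.norm(np.asarray(A, float) - np.asarray(B, float),
                                "fro"))

# Hypothetical stand-ins for En(H_1) (encoder output on the first channel
# data set) and the saved first bit stream set B_1.
En_H1 = [[1.0, 0.0],
         [0.0, 1.0]]
B1 = [[1.0, 0.0],
      [1.0, 1.0]]
loss = frobenius_distance(En_H1, B1)   # a single entry differs by 1
```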
  13. The method of any of claims 1 to 12, wherein the first set of channel data comprises a greater amount of channel data than the second set of channel data.
  14. A method for channel information feedback, comprising:
    the method comprises the steps that a receiving end device performs channel scene migration training on a first decoding network deployed on the receiving end device according to a first channel data set, a first bit stream set and a second bit stream set to obtain a second decoding network; wherein the first decoding network is trained based on the first channel data set, the first decoding network is adapted to a first channel scene, and the second decoding network is adapted to the first channel scene and a second channel scene;
    The receiving end equipment receives a target bit stream sent by the transmitting end equipment;
    the receiving end equipment decodes the target bit stream through the second decoding network to obtain target channel data; the target channel data is the channel data in the first channel scene or the second channel scene.
  15. The method of claim 14, wherein the receiving device performs channel scene migration training on a first decoding network deployed on the receiving device according to a first channel data set, a first bit stream set, and a second bit stream set, to obtain a second decoding network, comprising:
    and the receiving end equipment performs channel scene migration training on the first decoding network deployed on the receiving end equipment according to the first channel data set, the first bit stream set, the second bit stream set and the second identification network to obtain the second decoding network.
  16. The method of claim 15, wherein the receiving device performs channel scene migration training on the first decoding network deployed on the receiving device according to the first channel data set, the first bit stream set, the second bit stream set, and a second authentication network, to obtain the second decoding network, comprising:
    S31: under the condition of keeping the parameters of the first decoding network unchanged, the receiving end equipment trains the second authentication network until convergence according to the first bit stream set and the second bit stream set; wherein, in the event that the second authentication network converges, the second authentication network has the ability to distinguish between channel data output by the first decoding network for the first set of bitstreams and channel data output by the first decoding network for the second set of bitstreams;
    s32: under the condition of keeping the parameters of the second authentication network unchanged, the receiving end equipment trains the first decoding network until convergence according to the first bit stream set and the second bit stream set; wherein, in case the first decoding network converges, the second authentication network cannot distinguish between channel data output by the first decoding network for the first bit stream set and channel data output by the first decoding network for the second bit stream set;
    s33: the receiving end equipment trains the first decoding network according to the first channel data set and the first bit stream set until convergence;
    The receiving device determines the first decoding network as the second decoding network when a third condition is satisfied.
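Step S33's retraining of the decoding network on the saved pairs (B_1, H_1) can be illustrated with a linear toy decoder fitted in closed form by least squares; the linear model and the synthetic data below are assumptions, since the claims specify neither the network architecture nor the optimizer.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins (assumptions): rows of B1 are saved bit streams from
# the first bit stream set, rows of H1 are the matching channel samples,
# constructed here to be exactly linear in the bits.
B1 = rng.integers(0, 2, size=(100, 16)).astype(float)
H1 = B1 @ rng.standard_normal((16, 8))

# S33 for a linear toy decoder De: minimize ||B1 @ De - H1||_F, solved in
# closed form by least squares instead of iterative training.
De, *_ = np.linalg.lstsq(B1, H1, rcond=None)

residual = float(np.linalg.norm(B1 @ De - H1, "fro"))
```

Because the synthetic channel data is exactly linear in the bits, the fitted residual is numerically zero; with a real (nonlinear) decoding network, S33 would instead run gradient descent on the same Frobenius-norm objective.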
  17. The method of claim 16, wherein the third condition is that the number of times S33 is performed is greater than or equal to a fifth threshold, or wherein the third condition is that the decoding performance score at the time of convergence of the first decoding network in S33 is greater than or equal to a sixth threshold.
  18. The method of claim 16 or 17, wherein,
    in S31, the optimization function of the second authentication network is:
    wherein D_2 denotes the second authentication network, De denotes the first decoding network, B_1 denotes the first bit stream set, and B_2 denotes the second bit stream set.
  19. The method according to any one of claims 16 to 18, wherein
    in S32, the optimization function of the first decoding network is:
    wherein De denotes the first decoding network, D_2 denotes the second authentication network, B_1 denotes the first bit stream set, and B_2 denotes the second bit stream set.
  20. The method according to any one of claims 16 to 19, wherein
    in S33, the optimization function of the first decoding network is:
    where De denotes the first decoding network, H_1 denotes the first channel data set, B_1 denotes the first bit stream set, and ‖·‖_F denotes the Frobenius norm.
  21. The method of claim 15, wherein the first decoding network comprises a decoding network front part layer and a decoding network back part layer;
    the receiving device performs channel scene migration training on the first decoding network deployed on the receiving device according to the first channel data set, the first bit stream set, the second bit stream set and the second authentication network to obtain the second decoding network, and the method includes:
    s41: under the condition of keeping the parameters of the first decoding network unchanged, the receiving end equipment trains the second authentication network until convergence according to the first bit stream set and the second bit stream set; wherein, in the event that the second authentication network converges, the second authentication network has the ability to distinguish between feature vectors output by the decoding network front-part layer for the first set of bitstreams and feature vectors output by the decoding network front-part layer for the second set of bitstreams;
    s42: under the condition of keeping the parameters of the second authentication network unchanged, the receiving end equipment trains the front part layer of the decoding network until convergence according to the first bit stream set and the second bit stream set; wherein, in case the decoding network front part layer converges, the second authentication network cannot distinguish between a feature vector output by the decoding network front part layer for the first bit stream set and a feature vector output by the decoding network front part layer for the second bit stream set;
    S43: the receiving end equipment trains the first decoding network according to the first channel data set and the first bit stream set until convergence;
    the receiving end device determines the first decoding network as the second decoding network when a fourth condition is satisfied.
  22. The method of claim 21, wherein the fourth condition is that the number of times S43 is performed is greater than or equal to a seventh threshold, or wherein the fourth condition is that the decoding performance score at the time of convergence of the first decoding network in S43 is greater than or equal to an eighth threshold.
  23. The method of claim 21 or 22, wherein,
    in S41, the optimization function of the second authentication network is:
    wherein D_2 denotes the second authentication network, Def denotes the decoding network front part layer, B_1 denotes the first bit stream set, and B_2 denotes the second bit stream set.
  24. The method according to any one of claims 21 to 23, wherein
    in S42, the optimization function of the first decoding network is:
    wherein Def denotes the decoding network front part layer, D_2 denotes the second authentication network, B_1 denotes the first bit stream set, and B_2 denotes the second bit stream set.
  25. The method according to any one of claims 21 to 24, wherein
    in S43, the optimization function of the first decoding network is:
    where De denotes the first decoding network, H_1 denotes the first channel data set, B_1 denotes the first bit stream set, and ‖·‖_F denotes the Frobenius norm.
  26. The method of any of claims 14 to 25, wherein the first set of bitstreams includes a greater number of bitstreams than the second set of bitstreams.
  27. An originating device, comprising:
    the processing unit is used for performing channel scene migration training on the first coding network deployed on the originating equipment according to the first channel data set and the second channel data set to obtain a second coding network; the first channel data set comprises channel data in a first channel scene, the second channel data set comprises channel data in a second channel scene, the first coding network is trained based on the first channel data set, the first coding network is adapted to the first channel scene, and the second coding network is adapted to the first channel scene and the second channel scene;
    The processing unit is further configured to encode target channel data through the second encoding network to obtain a target bit stream; the target channel data are channel data in the first channel scene or the second channel scene;
    and the communication unit is used for sending the target bit stream to the receiving end equipment.
  28. The originating device of claim 27, wherein the processing unit is specifically configured to:
    and performing channel scene migration training on the first coding network deployed on the originating equipment according to the first channel data set, the second channel data set and the first authentication network to obtain the second coding network.
  29. The originating device of claim 28, wherein the processing unit is specifically configured to:
    s11: encoding the first channel data set through the first encoding network to obtain a first bit stream set;
    s12: training the first authentication network according to the first channel data set and the second channel data set until convergence while keeping parameters of the first coding network unchanged; wherein, in case of convergence of the first authentication network, the first authentication network has the capability to distinguish between a bit stream output by the first encoding network for the first channel data set and a bit stream output by the first encoding network for the second channel data set;
    S13: training the first coding network according to the first channel data set and the second channel data set until convergence while keeping parameters of the first authentication network unchanged; wherein, in case the first encoding network converges, the first authentication network cannot distinguish between a bit stream output by the first encoding network for the first channel data set and a bit stream output by the first encoding network for the second channel data set;
    s14: training the first coding network according to the first channel data set and the first bit stream set until convergence;
    the first encoding network is determined as the second encoding network when a first condition is satisfied.
  30. The originating device of claim 29, wherein the first condition is that the number of times S14 is performed is greater than or equal to a first threshold, or wherein the first condition is that the coding performance score at which the first coding network converges in S14 is greater than or equal to a second threshold.
  31. The originating device of claim 29 or 30,
    in S12, the optimization function of the first identification network is:
    wherein D_1 denotes the first authentication network, En denotes the first encoding network, H_1 denotes the first channel data set, and H_2 denotes the second channel data set.
  32. The originating device of any one of claims 29 to 31, wherein
    in S13, the optimization function of the first coding network is:
    wherein En denotes the first encoding network, D_1 denotes the first authentication network, H_1 denotes the first channel data set, and H_2 denotes the second channel data set.
  33. The originating device of any one of claims 29 to 32, wherein
    in S14, the optimization function of the first coding network is:
    where En denotes the first encoding network, H_1 denotes the first channel data set, B_1 denotes the first bit stream set, and ‖·‖_F denotes the Frobenius norm.
  34. The originating device of claim 28, wherein the first encoding network comprises an encoding network front layer and an encoding network back layer; the processing unit is specifically configured to:
    s21: encoding the first channel data set through the first encoding network to obtain a first bit stream set;
    s22: training the first authentication network according to the first channel data set and the second channel data set until convergence while keeping parameters of the first coding network unchanged; wherein, in case of convergence of the first authentication network, the first authentication network has the capability of distinguishing between the feature vectors output by the encoding network front part layer for the first channel data set and the feature vectors output by the encoding network front part layer for the second channel data set;
    S23: training the coding network front layer until convergence according to the first channel data set and the second channel data set under the condition that parameters of the first identification network are kept unchanged; under the condition that the coding network front part layer converges, the first identification network cannot distinguish the characteristic vector output by the coding network front part layer for the first channel data set from the characteristic vector output by the coding network front part layer for the second channel data set;
    s24: training the first coding network according to the first channel data set and the first bit stream set until convergence;
    the first encoding network is determined as the second encoding network when a second condition is satisfied.
  35. The originating device of claim 34, wherein the second condition is that the number of times S24 is performed is greater than or equal to a third threshold, or wherein the second condition is that the coding performance score at which the first coding network converges in S24 is greater than or equal to a fourth threshold.
  36. The originating device of claim 34 or 35,
    in S22, the optimization function of the first identification network is:
    wherein D_1 denotes the first authentication network, Enf denotes the coding network front part layer, H_1 denotes the first channel data set, and H_2 denotes the second channel data set.
  37. The originating device of any one of claims 34 to 36,
    in S23, the optimization function of the coding network front layer is:
    wherein Enf denotes the coding network front part layer, D_1 denotes the first authentication network, H_1 denotes the first channel data set, and H_2 denotes the second channel data set.
  38. The originating device of any one of claims 34 to 37,
    in S24, the optimization function of the first coding network is:
    where En denotes the first encoding network, H_1 denotes the first channel data set, B_1 denotes the first bit stream set, and ‖·‖_F denotes the Frobenius norm.
  39. The originating device of any of claims 27-38, wherein the first set of channel data comprises a greater amount of channel data than the second set of channel data.
  40. A sink device, comprising:
    the processing unit is used for performing channel scene migration training on a first decoding network deployed on the receiving end device according to a first channel data set, a first bit stream set and a second bit stream set to obtain a second decoding network; wherein the first decoding network is trained based on the first channel data set, the first decoding network is adapted to a first channel scene, and the second decoding network is adapted to the first channel scene and a second channel scene;
    The communication unit is used for receiving the target bit stream sent by the originating equipment;
    the processing unit is further configured to decode a target bitstream through the second decoding network to obtain target channel data; the target channel data is the channel data in the first channel scene or the second channel scene.
  41. The sink device of claim 40, wherein the processing unit is specifically configured to:
    and performing channel scene migration training on the first decoding network deployed on the receiving end equipment according to the first channel data set, the first bit stream set, the second bit stream set and the second authentication network to obtain the second decoding network.
  42. The sink device of claim 41, wherein the processing unit is specifically configured to:
    s31: training the second authentication network according to the first bit stream set and the second bit stream set until convergence while keeping parameters of the first decoding network unchanged; wherein, in the event that the second authentication network converges, the second authentication network has the ability to distinguish between channel data output by the first decoding network for the first set of bitstreams and channel data output by the first decoding network for the second set of bitstreams;
    S32: training the first decoding network according to the first bit stream set and the second bit stream set until convergence while keeping parameters of the second authentication network unchanged; wherein, in case the first decoding network converges, the second authentication network cannot distinguish between channel data output by the first decoding network for the first bit stream set and channel data output by the first decoding network for the second bit stream set;
    s33: training the first decoding network until convergence according to the first channel data set and the first bit stream set;
    the first decoding network is determined as the second decoding network when a third condition is satisfied.
  43. The sink device of claim 42, wherein the third condition is that the number of times S33 is performed is greater than or equal to a fifth threshold, or wherein the third condition is that the decoding performance score at the time of convergence of the first decoding network in S33 is greater than or equal to a sixth threshold.
  44. The sink device of claim 42 or 43,
    in S31, the optimization function of the second authentication network is:
    wherein D_2 denotes the second authentication network, De denotes the first decoding network, B_1 denotes the first bit stream set, and B_2 denotes the second bit stream set.
  45. The sink device of any one of claims 42 to 44,
    in S32, the optimization function of the first decoding network is:
    wherein De denotes the first decoding network, D_2 denotes the second authentication network, B_1 denotes the first bit stream set, and B_2 denotes the second bit stream set.
  46. The sink device of any one of claims 42 to 45,
    in S33, the optimization function of the first decoding network is:
    where De denotes the first decoding network, H_1 denotes the first channel data set, B_1 denotes the first bit stream set, and ‖·‖_F denotes the Frobenius norm.
  47. The sink device of claim 41, wherein the first decoding network comprises a decoding network front part layer and a decoding network back part layer; the processing unit is specifically configured to:
    s41: training the second authentication network according to the first bit stream set and the second bit stream set until convergence while keeping parameters of the first decoding network unchanged; wherein, in the event that the second authentication network converges, the second authentication network has the ability to distinguish between feature vectors output by the decoding network front-part layer for the first set of bitstreams and feature vectors output by the decoding network front-part layer for the second set of bitstreams;
    S42: training the decoding network front part layer according to the first bit stream set and the second bit stream set until convergence under the condition that parameters of the second authentication network are kept unchanged; wherein, in case the decoding network front part layer converges, the second authentication network cannot distinguish between a feature vector output by the decoding network front part layer for the first bit stream set and a feature vector output by the decoding network front part layer for the second bit stream set;
    s43: training the first decoding network until convergence according to the first channel data set and the first bit stream set;
    the first decoding network is determined as the second decoding network when a fourth condition is satisfied.
  48. The sink device of claim 47, wherein the fourth condition is that the number of times S43 is performed is greater than or equal to a seventh threshold, or wherein the fourth condition is that the decoding performance score at the time of convergence of the first decoding network in S43 is greater than or equal to an eighth threshold.
  49. The sink device of claim 47 or 48,
    in S41, the optimization function of the second authentication network is:
    wherein D_2 denotes the second authentication network, Def denotes the decoding network front part layer, B_1 denotes the first bit stream set, and B_2 denotes the second bit stream set.
  50. The sink device of any one of claims 47 to 49,
    in S42, the optimization function of the first decoding network is:
    wherein Def denotes the decoding network front part layer, D_2 denotes the second authentication network, B_1 denotes the first bit stream set, and B_2 denotes the second bit stream set.
  51. The sink device of any one of claims 47 to 50,
    in S43, the optimization function of the first decoding network is:
    where De denotes the first decoding network, H_1 denotes the first channel data set, B_1 denotes the first bit stream set, and ‖·‖_F denotes the Frobenius norm.
  52. The sink device of any one of claims 40 to 51, wherein the first set of bitstreams includes a greater number of bitstreams than the second set of bitstreams.
  53. An originating device, comprising: a processor and a memory for storing a computer program, the processor being adapted to invoke and run the computer program stored in the memory, to perform the method according to any of claims 1 to 13.
  54. A sink device, comprising: a processor and a memory for storing a computer program, the processor being for invoking and running the computer program stored in the memory, performing the method of any of claims 14 to 26.
  55. A chip, comprising: a processor for calling and running a computer program from a memory, causing a device on which the chip is mounted to perform the method of any one of claims 1 to 13 or to perform the method of any one of claims 14 to 26.
  56. A computer readable storage medium storing a computer program for causing a computer to perform the method of any one of claims 1 to 13 or to perform the method of any one of claims 14 to 26.
  57. A computer program product comprising computer program instructions for causing a computer to perform the method of any one of claims 1 to 13 or to perform the method of any one of claims 14 to 26.
  58. A computer program, characterized in that the computer program causes a computer to perform the method according to any one of claims 1 to 13 or to perform the method according to any one of claims 14 to 26.
CN202180097508.6A 2021-07-28 2021-07-28 Channel information feedback method, transmitting device and receiving device Pending CN117203898A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/109003 WO2023004638A1 (en) 2021-07-28 2021-07-28 Channel information feedback methods, transmitting end devices, and receiving end devices

Publications (1)

Publication Number Publication Date
CN117203898A true CN117203898A (en) 2023-12-08

Family

ID=85086123

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202180097508.6A Pending CN117203898A (en) 2021-07-28 2021-07-28 Channel information feedback method, transmitting device and receiving device

Country Status (2)

Country Link
CN (1) CN117203898A (en)
WO (1) WO2023004638A1 (en)

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111512323A (en) * 2017-05-03 2020-08-07 弗吉尼亚科技知识产权有限公司 Learning and deployment of adaptive wireless communications
EP3642970A4 (en) * 2017-06-19 2021-03-31 Virginia Tech Intellectual Properties, Inc. Encoding and decoding of information for wireless transmission using multi-antenna transceivers
US10686859B2 (en) * 2017-12-28 2020-06-16 Intel Corporation Content scenario and network condition based multimedia communication
EP3759654A4 (en) * 2018-03-02 2021-09-08 Deepsig Inc. Learning communication systems using channel approximation
CN112671505B (en) * 2019-10-16 2023-04-11 Vivo Mobile Communication Co., Ltd. Encoding method, decoding method and device
WO2021108940A1 (en) * 2019-12-01 2021-06-10 Nokia Shanghai Bell Co., Ltd. Channel state information feedback

Also Published As

Publication number Publication date
WO2023004638A1 (en) 2023-02-02


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination