WO2024067878A1 - Channel prediction method, apparatus and communication device - Google Patents
Channel prediction method, apparatus and communication device
- Publication number
- WO2024067878A1 · PCT/CN2023/123160 · CN2023123160W
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- channel
- information
- neural network
- prediction
- feedback
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04B—TRANSMISSION
- H04B17/00—Monitoring; Testing
- H04B17/30—Monitoring; Testing of propagation channels
- H04B17/373—Predicting channel quality or other radio frequency [RF] parameters
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04B—TRANSMISSION
- H04B17/00—Monitoring; Testing
- H04B17/30—Monitoring; Testing of propagation channels
- H04B17/391—Modelling the propagation channel
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L25/00—Baseband systems
- H04L25/02—Details; arrangements for supplying electrical power along data transmission lines
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L25/00—Baseband systems
- H04L25/02—Details; arrangements for supplying electrical power along data transmission lines
- H04L25/0202—Channel estimation
- H04L25/024—Channel estimation channel estimation algorithms
- H04L25/0254—Channel estimation channel estimation algorithms using neural network algorithms
Definitions
- the present application belongs to the field of communication technology, and specifically relates to a channel prediction method, device and communication equipment.
- the pilot signal is sent only once every few time slots. Therefore, in the slots where no pilot signal is sent, the real massive MIMO channel is unknown. For this reason, existing systems generally reuse the channel estimated at the most recent pilot slot as the channel of the current slot. However, the channel obtained in this way can deviate substantially from the actual channel of the current slot.
- massive MIMO: massive multiple-input multiple-output
- the embodiments of the present application provide a channel prediction method, apparatus and communication equipment, which can solve the problem in the related art that the estimated channel deviates substantially from the actual channel of the current time slot.
- a channel prediction method comprising:
- the first device obtains N channel information estimated for N time slots, where the N time slots are time slots corresponding to the first information, and N is an integer greater than or equal to 1;
- the first device predicts a first channel through a first target neural network based on the N channel information to obtain first channel prediction information;
- the first channel is the channel of the first time slot, i.e., the next time slot corresponding to the first information after the current moment
- the first channel prediction information is used by the second target neural network to predict the second channel between the current time slot corresponding to the first information and the first time slot.
- a channel prediction method comprising:
- the second device receives first feedback information sent by the first device, where the first feedback information is obtained based on first channel prediction information, and the first channel prediction information is obtained by the first device through prediction by a first target neural network;
- the second device predicts a second channel through a second target neural network based on L channel information and the first feedback information to obtain second channel prediction information, where L is an integer greater than or equal to 1;
- the L channel information is the channel information estimated by the second device for the L time slots corresponding to the first information.
- the first channel is the channel of the first time slot, i.e., the next time slot corresponding to the first information after the current moment
- the second channel is the channel between the time slot corresponding to the first information at the current moment and the first time slot.
- a channel prediction device comprising:
- An acquisition module configured to acquire N channel information estimated for N time slots, where the N time slots are time slots corresponding to the first information, and N is an integer greater than or equal to 1;
- a first prediction module configured to predict a first channel through a first target neural network based on the N channel information to obtain first channel prediction information
- the first channel is the channel of the first time slot, i.e., the next time slot corresponding to the first information after the current moment
- the first channel prediction information is used by the second target neural network to predict the second channel between the current time slot corresponding to the first information and the first time slot.
- a channel prediction device comprising:
- a receiving module configured to receive first feedback information sent by a first device, where the first feedback information is obtained based on first channel prediction information, where the first channel prediction information is obtained by the first device through prediction by a first target neural network;
- a second prediction module configured to predict a second channel through a second target neural network based on L channel information and the first feedback information to obtain second channel prediction information, where L is an integer greater than or equal to 1;
- the L channel information is the channel information estimated by the device for the L time slots corresponding to the first information
- the first channel is the channel of the first time slot, i.e., the next time slot corresponding to the first information after the current moment
- the second channel is the channel between the time slot corresponding to the first information at the current moment and the first time slot.
- a communication device comprising a processor and a memory, wherein the memory stores a program or instruction that can be run on the processor, and when the program or instruction is executed by the processor, the steps of the method described in the first aspect are implemented, or the steps of the method described in the second aspect are implemented.
- a communication device including a processor and a communication interface, wherein the processor is used to obtain N channel information estimated for N time slots, where the N time slots are time slots corresponding to first information, and N is an integer greater than or equal to 1; based on the N channel information, a first channel is predicted by a first target neural network to obtain first channel prediction information, where the first channel is the channel of the next first time slot corresponding to the first information closest to a current moment, and the first channel prediction information is used by a second target neural network to predict a second channel between a current time slot corresponding to the first information and the first time slot;
- the communication interface is used to receive first feedback information sent by a first device, where the first feedback information is obtained based on first channel prediction information, and the first channel prediction information is obtained by the first device through a first target neural network prediction;
- the processor is used to predict the second channel through a second target neural network based on L channel information and the first feedback information to obtain second channel prediction information, where L is an integer greater than or equal to 1; wherein the L channel information is channel information estimated by the second device for L time slots corresponding to the first information, the first channel is the channel of the next first time slot corresponding to the first information closest to the current moment, and the second channel is the channel between the time slot corresponding to the first information at the current moment and the first time slot.
- a readable storage medium on which a program or instruction is stored.
- the program or instruction is executed by a processor, the steps of the method described in the first aspect are implemented, or the steps of the method described in the second aspect are implemented.
- a chip comprising a processor and a communication interface, wherein the communication interface is coupled to the processor, and the processor is used to run a program or instruction to implement the method described in the first aspect, or to implement the method described in the second aspect.
- a computer program product is provided.
- the computer program product is stored in a storage medium, and the computer program product is executed by at least one processor to implement the method as described in the first aspect, or to implement the method as described in the second aspect.
- the first device obtains N channel information estimated for N time slots corresponding to the first information, and predicts the first channel through the first target neural network based on the N channel information to obtain the first channel prediction information, where the first channel is the channel of the first time slot that sends the first information next to the current moment; and the first channel prediction information can be used by the second target neural network to predict the second channel between the time slot that currently sends the first information and the first time slot.
- FIG1a is a block diagram of a wireless communication system applicable to an embodiment of the present application.
- FIG1b is a schematic diagram of a time slot for periodically transmitting SRS/CSI-RS
- FIG1c is a schematic diagram of a time slot for periodically sending channel feedback
- FIG2 is a flow chart of a channel prediction method provided by an embodiment of the present application.
- FIG3a is a second schematic diagram of a time slot for periodically sending SRS/CSI-RS
- FIG3b is a schematic diagram of the structure of a neural network
- FIG3c is a third schematic diagram of a time slot for periodically sending SRS/CSI-RS;
- FIG4a is a second schematic diagram of a time slot for periodically sending channel feedback
- FIG4b is a third schematic diagram of a time slot for periodically sending channel feedback
- FIG5 is a flow chart of another channel prediction method provided in an embodiment of the present application.
- FIG6 is a structural diagram of a channel prediction device provided in an embodiment of the present application.
- FIG7 is a structural diagram of another channel prediction device provided in an embodiment of the present application.
- FIG8 is a structural diagram of a communication device provided in an embodiment of the present application.
- FIG9 is a structural diagram of a terminal provided in an embodiment of the present application.
- FIG10 is a structural diagram of a network-side device provided in an embodiment of the present application.
- first, second, etc. in the specification and claims of the present application are used to distinguish similar objects, and are not used to describe a specific order or sequence. It should be understood that the terms used in this way are interchangeable under appropriate circumstances, so that the embodiments of the present application can be implemented in an order other than those illustrated or described here, and the objects distinguished by “first” and “second” are generally of the same type, and the number of objects is not limited.
- the first object can be one or more.
- “and/or” in the specification and claims represents at least one of the connected objects, and the character “/” generally represents that the associated objects are in an “or” relationship.
- LTE Long Term Evolution
- LTE-A Long Term Evolution-Advanced
- CDMA Code Division Multiple Access
- TDMA Time Division Multiple Access
- FDMA Frequency Division Multiple Access
- OFDMA Orthogonal Frequency Division Multiple Access
- SC-FDMA Single-carrier Frequency Division Multiple Access
- NR new radio
- FIG1a shows a block diagram of a wireless communication system applicable to an embodiment of the present application.
- the wireless communication system includes a terminal 11 and a network side device 12 .
- the terminal 11 may be a mobile phone, a tablet computer, a laptop or notebook computer, a personal digital assistant (PDA), a handheld computer, a netbook, an ultra-mobile personal computer (UMPC), a mobile Internet device (MID), an augmented reality (AR)/virtual reality (VR) device, a robot, a wearable device, a vehicle user equipment (VUE), a pedestrian user equipment (PUE), a smart home device (a home appliance with a wireless communication function, such as a refrigerator, a television, a washing machine or furniture), a game console, a personal computer (PC), a teller machine, a self-service machine or another terminal-side device, and the wearable device includes: a smart watch, a smart bracelet, smart earphones, smart glasses, smart jewelry, etc.
- the network side device 12 may include access network equipment or core network equipment, where the access network equipment may also be referred to as radio access network equipment, radio access network (RAN), radio access network function or radio access network unit.
- the access network equipment may include base stations, wireless local area network (WLAN) access points or WiFi nodes, etc.
- the base stations may be referred to as Node B, evolved Node B (eNB), access node, Base Transceiver Station (BTS), radio base station, radio transceiver, Basic Service Set (BSS), Extended Service Set (ESS), Home Node B, Home evolved Node B, Transmitting Receiving Point (TRP), etc.
- there are N antennas at the transmitting end (terminal or base station).
- the system's data-transmission frame structure is based on time slots, and each slot is 1 ms long.
- the transmitting end sends pilot signals of N ports.
- the pilot signal can be a sounding reference signal (SRS) or a channel state information reference signal (CSI-RS).
- SRS sounding reference signal
- CSI-RS channel state information reference signal
- the pilot signal is not sent in every slot, but once every K slots (with a period of K).
- In the slots where SRS or CSI-RS is sent, after receiving the SRS or CSI-RS signal, the receiving end (base station or terminal) obtains the channel information of that slot through channel estimation. In these slots, the system can therefore obtain relatively accurate channel information on each antenna of the massive MIMO array. However, for the slots between those that carry SRS or CSI-RS, the system cannot obtain relatively accurate channel information by channel estimation because there is no pilot information.
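The fallback described above — reusing the estimate from the most recent pilot slot — can be sketched as follows. This is a minimal illustration assuming pilots are sent at slots 0, K, 2K, …; the actual pilot pattern is configured by the system.

```python
def last_pilot_slot(t, K):
    """Index of the most recent slot at or before slot t in which a pilot
    was sent, assuming (illustratively) pilots at slots 0, K, 2K, ..."""
    return (t // K) * K

def staleness(t, K):
    """How many slots old the reused channel estimate is at slot t.
    In the existing scheme this can reach K-1 slots."""
    return t - last_pilot_slot(t, K)
```

For example, with a pilot period of K = 5, the channel used at slot 9 is the estimate from slot 5, i.e., four slots stale — which is the gap the two-stage prediction is meant to close.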
- In other scenarios, in a frequency division duplex (FDD) massive MIMO system, the base station has N antennas.
- the system's data-transmission frame structure is based on slots, and each slot is 1 ms long.
- In order to enable the base station to obtain the downlink massive MIMO channel information, the base station first sends the pilot signal CSI-RS. After receiving the CSI-RS signal, the terminal obtains the channel information of that slot through channel estimation and needs to feed the channel information back through the uplink.
- In order to reduce the feedback overhead, the channel feedback is not sent in every slot, but once every K slots (with a period of K).
- the interval between two transmissions is K-1 slots.
- the value of K is configured by the base station through the radio resource control (RRC) signaling.
- RRC radio resource control
- the frame structure of the channel feedback is shown in Figure 1c.
- In the slots where channel feedback is sent, the base station can obtain relatively accurate channel information on each antenna of the massive MIMO array. For the time slots between the channel feedback slots, since there is no fed-back channel information, the system cannot obtain relatively accurate channel information.
- an embodiment of the present application provides a channel prediction method.
- FIG. 2 is a flow chart of a channel prediction method provided in an embodiment of the present application. As shown in FIG. 2 , the method includes the following steps:
- Step 201: A first device obtains N channel information estimated for N time slots, where the N time slots are time slots corresponding to first information, and N is an integer greater than or equal to 1.
- the first information may be a reference signal, such as a sounding reference signal (SRS), a channel state information reference signal (CSI-RS), etc., or the first information may also be channel feedback information, etc.
- the N time slots are time slots corresponding to the first information; for example, the N time slots may correspond to the reference signal transmission time, the reference signal reception time, or the reference signal estimation completion time, etc., and the embodiment of the present application does not specifically limit this.
- the first device obtains N channel information estimated for N time slots.
- the terminal may obtain N channel information estimated for N time slots of the transmitted CSI-RS.
- this step may also be other possible situations, which are not listed in detail here.
- Step 202: The first device predicts a first channel through a first target neural network based on the N channel information to obtain first channel prediction information.
- the first channel is the channel of the first time slot, i.e., the next time slot corresponding to the first information after the current moment
- the first channel prediction information is used by the second target neural network to predict the second channel between the current time slot corresponding to the first information and the first time slot.
- the first device may use the N channel information related to the first information as training samples to train the first target neural network, so that the trained first target neural network can predict the channel of the next first time slot corresponding to the first information at the current moment, that is, the first channel, and then obtain the first channel prediction information.
- the first device may be a device that uses the channel values estimated at the N most recent time slots in which SRS or CSI-RS signals were sent, and predicts in advance, through the first target neural network, the channel of the next slot in which SRS or CSI-RS signals will be sent after the current moment.
- the first channel prediction information is used by the second target neural network to predict a second channel between the current time slot corresponding to the first information and the first time slot corresponding to the predicted first channel.
- the transmission period of CSI-RS is once every T time slots.
- the first device uses the channel values estimated at the N CSI-RS time slots closest to the current moment to predict, through the first target neural network, the channel of the next (i.e., the (N+1)th) CSI-RS slot after the current moment, and then predicts, through the second target neural network, the channels of the T-1 slots without CSI-RS between the Nth-period CSI-RS slot and the (N+1)th-period CSI-RS slot, i.e., the second channel.
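The two-stage procedure can be sketched with simple stand-ins: linear extrapolation in place of the first target neural network and linear interpolation in place of the second. The actual networks are trained models whose architecture the method does not fix, so this only illustrates the data flow; scalars stand in for per-antenna channel matrices.

```python
def predict_next_pilot_channel(h_hist):
    """Stage-1 stand-in for the first target neural network: predict the
    channel at the next pilot (e.g., CSI-RS) slot from the previously
    estimated pilot-slot channels, here by linear extrapolation."""
    if len(h_hist) == 1:
        return h_hist[-1]
    return h_hist[-1] + (h_hist[-1] - h_hist[-2])

def predict_intermediate_channels(h_last, h_next, T):
    """Stage-2 stand-in for the second target neural network: predict the
    T-1 channels of the slots between the current pilot slot (h_last) and
    the next pilot slot (h_next), here by linear interpolation."""
    return [h_last + (h_next - h_last) * k / T for k in range(1, T)]
```

With a CSI-RS period of T = 4 and pilot-slot estimates 0.0 and 1.0, stage 1 extrapolates the next pilot-slot channel to 2.0, and stage 2 fills the three intermediate slots.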
- the first device can obtain the channel information of the time slots that do not send CSI-RS based on the prediction of the second target neural network, which effectively improves the performance of channel estimation and prediction of the communication system, thereby helping to improve system performance.
- the first device obtains N channel information estimated for N time slots corresponding to the first information, predicts the first channel through the first target neural network based on the N channel information, and obtains the first channel prediction information, where the first channel is the channel of the first time slot, i.e., the next time slot in which the first information is sent after the current moment;
- the first channel prediction information can be used by the second target neural network to predict the second channel between the time slot where the first information is currently sent and the first time slot.
- two stages of channel prediction are implemented through two neural networks, and then the channel information of the time slots where the first information is not sent between the time slots where the first information is periodically sent can be predicted, which effectively improves the performance of channel estimation and prediction of the communication system, thereby helping to improve system performance.
- the method further includes:
- the first device predicts the second channel through a second target neural network based on K channel information and the first channel prediction information to obtain second channel prediction information, wherein the K channel information is channel information estimated by the first device for K time slots corresponding to the first information, and K is an integer greater than or equal to 1; or
- the first device sends first feedback information to the second device, and the second device is used to predict the second channel through a second target neural network based on L channel information and the first feedback information to obtain second channel prediction information, the first feedback information is obtained based on the first channel prediction information, the L channel information is the channel information estimated by the second device for L time slots corresponding to the first information, and L is an integer greater than or equal to 1.
- After the first device predicts the first channel through the first target neural network based on the N channel information and obtains the first channel prediction information, the first device further predicts the second channel through the second target neural network based on the N channel information and the first channel prediction information to obtain the second channel prediction information. That is to say, the first target neural network and the second target neural network are both located on the first device side, and the first device predicts the first channel through the first target neural network and then the second channel through the second target neural network, thereby improving the channel prediction performance of the first device and hence its communication performance.
- the first device may process the first channel prediction information, such as compression processing, to obtain first feedback information, and send the first feedback information to the second device, and then the second device predicts the second channel through the second target neural network based on the N channel information and the first feedback information to obtain second channel prediction information.
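The compression step is not specified by the method; as a hypothetical example, each real coefficient of the prediction could be clipped and uniformly quantized before being sent as feedback (the bit width and range here are illustrative assumptions):

```python
def compress_prediction(h, bits=4, h_max=1.0):
    """Illustrative stand-in for turning first channel prediction
    information into first feedback information: clip each real
    coefficient to [-h_max, h_max] and uniformly quantize it to `bits`
    bits. The actual codebook/compression scheme is not specified."""
    levels = (1 << bits) - 1
    out = []
    for x in h:
        x = max(-h_max, min(h_max, x))          # clip
        out.append(round((x + h_max) / (2 * h_max) * levels))
    return out

def decompress_feedback(q_list, bits=4, h_max=1.0):
    """Inverse mapping applied by the second device before prediction."""
    levels = (1 << bits) - 1
    return [q / levels * 2 * h_max - h_max for q in q_list]
```

A round trip through this 4-bit quantizer reconstructs each coefficient to within half a quantization step, which bounds the distortion the second target neural network has to absorb.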
- the first target neural network is located on the first device side
- the second target neural network is located on the second device side; the first device realizes the prediction of the first channel, and the second device predicts the second channel. In this way, channel prediction is realized through collaboration between the first device and the second device, improving the channel prediction performance of the communication devices.
- the first information includes at least one of a pilot signal and a channel feedback.
- the first information is a pilot signal
- the N channel information is channel estimation information obtained based on the pilot signal.
- the first device predicts the second channel through the second target neural network. That is, both the first target neural network and the second target neural network are located on the first device side, and the first device can achieve prediction of the first channel and prediction of the second channel.
- When the pilot signal is a CSI-RS, the first device is a terminal; when the pilot signal is an SRS, the first device is a network side device.
- the first device predicts the second channel through the second target neural network, and the first device is a network side device.
- the first device obtains N channel information estimated for N time slots, including:
- the terminal obtains N channel information estimated for N time slots corresponding to the CSI-RS;
- the first device sends first feedback information to the second device, and the second device is used to predict the second channel through a second target neural network based on L channel information and the first feedback information, including:
- the terminal sends first feedback information to a network side device, and the network side device is used to predict the second channel through a second target neural network based on the channel information of L time slots for channel feedback and the first feedback information, where the second channel is a channel between the time slot for currently sending the channel feedback and the next time slot for sending the channel feedback.
- the terminal estimates N channel information based on the N time slots corresponding to the CSI-RS, and the terminal predicts the first channel through the first target neural network based on the N channel information to obtain the first channel prediction information.
- that is, the first channel is the channel of the next CSI-RS time slot after the current moment.
- the first target neural network can be obtained by the terminal through training based on the N channel information, that is, the terminal can train the first target neural network based on the channel estimation result of CSI-RS, and then predict the next channel for sending CSI-RS based on the trained first target neural network.
- the terminal compresses the predicted first channel prediction information to obtain first feedback information, and sends the first feedback information to the network side device.
- the network side device predicts the second channel through the second target neural network based on the channel information estimated for the L channel-feedback time slots and the first feedback information.
- the second channel is the channel between the time slot in which the network side device currently sends the channel feedback and the next time slot in which the channel feedback is sent.
- the second target neural network can be trained by the network side device based on the L channel information estimated for the L channel-feedback time slots, so that the second target neural network can also predict the channel of a channel-feedback time slot. In this way, the first target neural network and the second target neural network are trained at the terminal and the network side device respectively, improving the channel prediction performance of both the terminal and the network side device.
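Putting the two sides together, the terminal/network collaboration might look like the following sketch. Linear extrapolation, linear interpolation, and a crude rounding quantizer stand in for the trained networks and the real feedback codec — all illustrative assumptions, not the method's concrete implementation.

```python
def terminal_side(h_hist):
    """Terminal: predict the first channel from past CSI-RS estimates
    (linear-extrapolation stand-in for the first target neural network)
    and compress it into first feedback information (rounding stand-in)."""
    h_first = h_hist[-1] + (h_hist[-1] - h_hist[-2])
    return round(h_first * 100) / 100  # crude 2-decimal quantizer

def network_side(fb_hist, feedback, period):
    """Network side: combine past feedback-slot channels with the new
    feedback to predict the second channels of the intermediate slots
    (linear-interpolation stand-in for the second target neural network)."""
    h_last = fb_hist[-1]
    return [h_last + (feedback - h_last) * k / period for k in range(1, period)]
```

Here the terminal extrapolates CSI-RS estimates 0.0 and 0.5 to a first-channel value of 1.0, feeds it back, and the network side interpolates the channels of the slots between feedback occasions.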
- the number of second channels predicted by the second target neural network is M, where M is an integer greater than or equal to 1.
- the first information is CSI-RS
- CSI-RS is not sent in every time slot; for example, CSI-RS may be sent once every 5 time slots. The first channel is then the channel of the next CSI-RS transmission slot, and the second channels are the channels of the time slots between the most recent CSI-RS transmission slot and the next one, so the number of second channels is 4.
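The count of second channels follows directly from the transmission period of the first information:

```python
def num_second_channels(period):
    """Number of second channels M between two consecutive transmissions
    of the first information, given the transmission period in slots:
    every slot in the gap except the two pilot/feedback slots themselves."""
    return period - 1
```

With a period of 5 slots this gives M = 4, matching the example above.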
- the method further includes:
- the first device sends a first prediction result to the second device, where the first prediction result includes the number of the second channels and a time slot indication corresponding to each of the second channels.
- the first device may send relevant information including all the second channels to the second device, or may only send relevant information about part of the second channels.
- the method before the first device predicts the first channel through the first target neural network based on the N channel information, the method further includes:
- the first device receives a first instruction
- the first device predicts a first channel through a first target neural network based on the N channel information, including:
- the first device predicts a first channel through a first neural network based on the N channel information
- the first device predicts a first channel through a second neural network based on the N channel information, and the second neural network is a neural network trained based on default channel information.
- the first target neural network is either the first neural network or the second neural network.
- the second neural network may also be referred to as an initial neural network, and the second neural network is obtained by offline training through default channel information.
- the second neural network may be a neural network pre-trained by other devices and then sent to the first device. In other words, when the second neural network predicts the first channel, the channel information of the first channel will not be used as a training sample for the second neural network, that is, the second neural network will not be trained online.
- the first neural network may be trained online, that is, after the first neural network predicts the first channel, the channel information of the first channel may be used as a training sample for the first neural network to train the first neural network, so that the network parameters of the first neural network can be adjusted based on the latest channel information, thereby improving the prediction accuracy of the first neural network and enabling the first neural network to adapt to the current channel environment.
- the terminal may send an instruction to the network side device indicating whether to reset the network parameters (e.g., Initial_parameter_reset).
- the terminal responds to changes in the channel environment through the feedback of the Initial_parameter_reset instruction, thereby better ensuring the channel prediction performance of the network side device.
- the method may further include:
- the first device acquires channel information of an N+1th time slot, where the N+1th time slot is a next time slot closest to the current moment in which the first information is actually sent;
- the first device receives a second instruction, wherein the second instruction is used to instruct to update the first target neural network
- the first device performs a first operation, the first operation comprising:
- the first target neural network is trained based on a first training sample set, and the trained first target neural network is used to predict the channel of the time slot in which the first information is sent after the N+1th time slot, and the first training sample set includes the channel information of the N+1th time slot.
- the N time slots are the first N time slots for sending the first information closest to the current moment.
- the first device predicts the first channel through the first target neural network.
- the first channel is also the channel predicted by the first target neural network for the N+1th time slot for sending the first information.
- the first device can obtain the time slot where the first information is actually sent in real time.
- the first device can train the first target neural network based on the latest training sample set (that is, the first training sample set), that is, the online training mentioned above, and the trained first target neural network can be used to predict the channel of the next time slot for sending the first information.
- every time the first device obtains the channel information of a time slot in which the first information is actually sent, it adds the channel information to the training sample set of the first target neural network and trains the first target neural network, so as to optimize its network parameters and improve its prediction accuracy.
- the method further includes:
- the first device sends the number of training samples of the first target neural network to the second device.
- the first device may send the number of training samples in the first training sample set to the second device, so as to better support online training of the first target neural network.
- the method further comprises:
- the first device is configured with a target value, where the target value is the maximum value of the number of training samples of the first target neural network.
- the network side device may be configured with a target value through RRC, that is, the maximum number of training samples of the first target neural network that the network side device can support.
- the number of training samples input into the first target neural network is the minimum value of the target value and the number of training samples in the first training sample set.
- the first training sample set is a sample set for online training of the first target neural network; that is, each time the first device obtains channel information of a time slot in which the first information is actually sent, the channel information is added to the first training sample set, so the number of samples in the first training sample set may grow without bound.
- the number of training samples in the first training sample set is greater than the target value
- the number of training samples of the first target neural network is the target value; for example, the most recent target-value number of training samples in the first training sample set may be used, so that the computational burden of training the first target neural network with too many samples can be avoided.
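The sample-cap rule above can be sketched as follows; keeping the most recent samples (a sliding window) is an assumption about which samples are retained:

```python
def select_training_samples(sample_set, target_value):
    """Keep at most `target_value` samples for online training.
    When the accumulated set exceeds the configured maximum, use
    the most recent `target_value` samples (a sliding window)."""
    n = min(target_value, len(sample_set))
    return sample_set[-n:]

samples = list(range(10))              # 10 accumulated channel samples
batch = select_training_samples(samples, 6)
print(len(batch))   # 6 = min(target value, sample-set size)
print(batch[0])     # 4: only the most recent 6 samples are kept
```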
- the scenario of this embodiment is based on a large-scale MIMO system.
- there are N antennas at the transmitting end (terminal or base station).
- the transmitting end sends pilot signals of N ports.
- the pilot signals can be SRS or CSI-RS.
- the pilot signal is not sent in each slot.
- the pilot signal is not sent in every slot, but once every K slots (with a period of K), so there is an interval of K-1 slots between two transmissions. For the K-1 time slots in which no pilot signal is sent, accurate channel information cannot be obtained by channel estimation because there is no pilot signal.
- the communication device after receiving the signal of the slot currently sending SRS/CSI-RS, the communication device obtains the channel information of the slot through channel estimation. For the slots that do not send SRS/CSI-RS between the slot currently sending SRS/CSI-RS and the slot sending SRS/CSI-RS next time, that is, the slots during the period of sending SRS/CSI-RS, the channel information of these slots is obtained through a two-stage channel prediction method based on a neural network.
- the first stage of channel prediction uses the channel values estimated by the previous L slots for sending SRS/CSI-RS signals to predict the channel of the next slot for sending SRS/CSI-RS signals in advance through the first neural network, that is, the channel of the L+1th slot for sending SRS/CSI-RS signals.
- the time relationship of the channel estimated and predicted by the neural network is shown in Figure 3a.
- the first stage channel prediction is implemented by the first neural network, whose input X consists of the estimated channel vectors of the slots in which SRS/CSI-RS signals were sent in L periods.
- h_i = [h_{1,i} h_{2,i} ... h_{N,i}]^T is the channel vector from the N antennas of the massive MIMO to the receiving end in the i-th period of sending SRS/CSI-RS signals.
- h n,i represents the channel from the n-th antenna to the receiving end during the i-th period slot
- X can be expressed as: X = [h_1 h_2 ... h_L], an N × L matrix.
- the output of the neural network is Y, the predicted value of the channel from the N antennas to the receiving end in the (L+1)th SRS/CSI-RS transmission period slot; Y is therefore an N × 1 vector.
- To represent the channel information with the symbol h, Y can be recorded as ĥ_{L+1}, which can be expressed as: ĥ_{L+1} = [ĥ_{1,L+1} ĥ_{2,L+1} ... ĥ_{N,L+1}]^T.
- the neural network for implementing the first stage channel prediction is composed of an input layer, an output layer and several hidden layers. Taking one hidden layer as an example, the first stage prediction neural network structure is shown in FIG. 3b.
- the basic structure of each layer of the first-stage prediction neural network is as follows: the input signal (or the data from the previous layer) is multiplied by a matrix WR on the right and a matrix WL on the left, and then an offset matrix B is added. Except for the output layer, each layer passes the result through an activation function after adding the offset matrix.
- the activation function can be the ReLU function or other functions. Specifically:
- the dimension of W_L1 is M × N
- the dimension of W_R1 is L × M
- the dimension of B_1 is M × M.
- the dimension of the hidden layer is M
- the dimension of W_L2 is M × M
- the dimension of W_R2 is M × M
- the dimension of B_2 is M × M.
- the neural network is set to 3 layers, with activation functions in layers 1 and 2 and no activation function in layer 3.
- the dimension of the hidden layer is M
- the dimension of W_L3 is N × M
- the dimension of W_R3 is M × 1
- the dimension of B_3 is N × 1.
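An illustrative sketch of the per-layer operation described above (WL · Z · WR + B, with ReLU on all but the output layer). Real-valued matrices are used for simplicity (actual channels are complex), and the parameter values are random placeholders, not trained weights:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def first_stage_forward(X, params):
    """Three-layer first-stage predictor sketch: each layer computes
    WL @ Z @ WR + B; layers 1 and 2 apply ReLU, layer 3 does not.
    X is the N x L matrix of L estimated channel vectors."""
    (WL1, WR1, B1), (WL2, WR2, B2), (WL3, WR3, B3) = params
    Z = relu(WL1 @ X @ WR1 + B1)   # hidden layer 1: M x M
    Z = relu(WL2 @ Z @ WR2 + B2)   # hidden layer 2: M x M
    return WL3 @ Z @ WR3 + B3      # output layer: N x 1 predicted channel

# Random placeholder parameters with the dimensions listed above.
N, L, M = 8, 4, 16
rng = np.random.default_rng(0)
params = [
    (rng.standard_normal((M, N)), rng.standard_normal((L, M)), rng.standard_normal((M, M))),
    (rng.standard_normal((M, M)), rng.standard_normal((M, M)), rng.standard_normal((M, M))),
    (rng.standard_normal((N, M)), rng.standard_normal((M, 1)), rng.standard_normal((N, 1))),
]
Y = first_stage_forward(rng.standard_normal((N, L)), params)
print(Y.shape)  # (8, 1)
```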
- the network parameters need to be trained.
- the training data X can be taken from a large number of channels estimated over L SRS/CSI-RS cycles, and the matching target is the actual channel h_{L+1} of the SRS/CSI-RS slot in the (L+1)th cycle.
- the training optimization goal (cost function) is to minimize the normalized mean square error between the output Y of the neural network and the actual channel h_{L+1}, that is, to minimize ||Y − h_{L+1}||² / ||h_{L+1}||² (averaged over the training samples).
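The normalized mean square error cost above can be computed as in this small sketch (using the absolute value so that it also works for complex-valued channels):

```python
import numpy as np

def nmse(y_pred, h_true):
    """Normalized mean square error: ||Y - h||^2 / ||h||^2."""
    return np.sum(np.abs(y_pred - h_true) ** 2) / np.sum(np.abs(h_true) ** 2)

h = np.array([1.0, 2.0, 2.0])
print(nmse(h, h))            # 0.0 for a perfect prediction
print(nmse(np.zeros(3), h))  # 1.0 when predicting all zeros
```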
- There are two types of neural network training in the first stage of channel prediction.
- One is offline training, which uses a large amount of offline data to train network parameters. These parameters are used as the initial parameters of the network when the neural network performs channel prediction in the actual system.
- The other is online training: after the SRS/CSI-RS of the (L+1)th cycle is received, the actual channel h_{L+1} is obtained by estimation, and the network parameters are then trained online (or fine-tuned) to make the neural network model better suited to the current environment.
- the second stage of channel prediction uses the estimated channel values of the previous L slots in which SRS/CSI-RS signals were sent, together with the channel of the (L+1)th-cycle SRS/CSI-RS slot predicted in advance by the first neural network, to predict the channels of the K-1 slots without SRS/CSI-RS between the Lth-cycle SRS/CSI-RS slot and the (L+1)th-cycle SRS/CSI-RS slot.
- the second stage channel prediction is implemented using the second stage neural network, and the channel time relationship of the second stage channel prediction is shown in Figure 3c.
- the second-stage channel prediction is also implemented through a neural network.
- the input X of the neural network is the estimated channel vector of the slot for sending SRS/CSI-RS signals in L cycles plus the channel of the slot for sending SRS/CSI-RS signals in the L+1th cycle predicted in advance by the first neural network.
- h_i = [h_{1,i} h_{2,i} ... h_{N,i}]^T is the channel vector from the N antennas of massive MIMO to the receiving end during the i-th period of sending SRS/CSI-RS signals.
- h n,i represents the channel from the n-th antenna to the receiving end during the i-th period slot.
- X can be expressed as: X = [h_1 h_2 ... h_L ĥ_{L+1}], an N × (L+1) matrix.
- the output Y of the neural network is the channels of the K-1 slots that do not send SRS/CSI-RS between the slot that sends SRS/CSI-RS in the Lth period and the slot that sends SRS/CSI-RS in the (L+1)th period; Y is therefore an N × (K-1) matrix. To represent all the channel information with the symbol h, Y can be recorded as: Y = [ĥ^{(1)} ĥ^{(2)} ... ĥ^{(K-1)}].
- the neural network for implementing the second stage channel prediction in this embodiment is composed of an input layer, an output layer and several hidden layers. Taking one hidden layer as an example, its structure is still as shown in FIG. 3b.
- each layer of the neural network that implements the second-stage channel prediction works as follows: the input signal (or the data from the previous layer) is multiplied by a matrix WR on the right and a matrix WL on the left, and then an offset matrix B is added. Except for the output layer, each layer passes the result through an activation function, which can be the ReLU function or another function. Specifically:
- the dimension of W_L1 is M × N
- the dimension of W_R1 is (L+1) × M
- the dimension of B_1 is M × M.
- the dimension of the hidden layer is M
- the dimension of W_L2 is M × M
- the dimension of W_R2 is M × M
- the dimension of B_2 is M × M.
- the neural network is set to 3 layers, with activation functions in layers 1 and 2 and no activation function in layer 3.
- the dimension of the hidden layer is M
- the dimension of W_L3 is N × M
- the dimension of W_R3 is M × (K-1)
- the dimension of B_3 is N × (K-1).
- the network parameters need to be trained.
- the training data X can be taken from a large number of channels over L+1 SRS/CSI-RS cycles, and the matching target is the actual channels [h^{(1)} h^{(2)} ... h^{(K-1)}] of the K-1 slots without SRS/CSI-RS between the Lth and (L+1)th cycles.
- the training optimization target (cost function) is to minimize the normalized mean square error between the output Y of the neural network and the actual channels [h^{(1)} h^{(2)} ... h^{(K-1)}], that is, to minimize ||Y − [h^{(1)} h^{(2)} ... h^{(K-1)}]||² / ||[h^{(1)} h^{(2)} ... h^{(K-1)}]||².
- the neural network used in the first stage for channel prediction and the neural network used in the second stage for channel prediction can be used together to complete the task of channel prediction.
- channel estimation is performed to obtain the channel information of the current slot. Then, the channel information obtained by estimating the current slot and the estimated channel values of the previous L-1 slots that sent SRS/CSI-RS signals (a total of L slots that sent SRS/CSI-RS signals) are input into the neural network used to implement channel prediction in the first stage, and the neural network outputs (i.e. predicts) the channel of the next slot that sends SRS/CSI-RS signals after one cycle.
- the channel values of the L slots that transmit SRS/CSI-RS signals and the channel of the next slot that transmits SRS/CSI-RS signals in the future output by the neural network for implementing channel prediction in the first stage are input into the neural network for implementing channel prediction in the second stage, and the neural network in the second stage outputs the channels of the K-1 slots that do not transmit SRS/CSI-RS between the current slot that transmits SRS/CSI-RS and the next slot that transmits SRS/CSI-RS.
- channel prediction is completed through two neural networks.
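A minimal sketch of the two-stage flow just described, with toy stand-in predictors in place of the trained neural networks (`stage1` and `stage2` here are illustrative callables, not the patent's trained models):

```python
import numpy as np

def two_stage_prediction(H_est, stage1, stage2, K):
    """Two-stage channel prediction sketch.
    H_est: N x L channels estimated at the last L pilot slots.
    Returns (h_next, H_mid): the predicted channel of the next pilot
    slot, and the K-1 channels of the pilot-free slots in between."""
    h_next = stage1(H_est)             # first stage: N x 1 prediction
    X2 = np.hstack([H_est, h_next])    # second-stage input: N x (L+1)
    H_mid = stage2(X2)                 # second stage: N x (K-1) channels
    return h_next, H_mid

# Toy stand-in predictors: simply repeat the most recent channel estimate.
N, L, K = 4, 3, 5
stage1 = lambda H: H[:, -1:]
stage2 = lambda X: np.repeat(X[:, -1:], K - 1, axis=1)
h_next, H_mid = two_stage_prediction(np.ones((N, L)), stage1, stage2, K)
print(h_next.shape, H_mid.shape)  # (4, 1) (4, 4)
```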
- the parameters of the neural network used to implement channel prediction in the first stage are trained online (or fine-tuned), so that the neural network in the first stage can be more adapted to the current channel environment, thereby improving the performance and accuracy of the channel prediction in the first stage, and thus improving the accuracy of the channel prediction in the entire two stages.
- the scenario of this embodiment is based on an FDD massive MIMO system, and there are N antennas in the base station.
- In order to enable the base station to obtain the channel information of the downlink massive MIMO, the base station first sends a pilot signal CSI-RS. After receiving the CSI-RS signal, the mobile user obtains the channel information of the slot through channel estimation and feeds the channel information back through the uplink.
- channel feedback is not sent in every slot, but once every K slots (with a period of K), so two feedback transmissions are separated by K-1 slots. For the K-1 time slots without channel feedback, the system cannot obtain accurate channel information because no channel information is fed back.
- the value of K is configured by the base station through RRC signaling.
- after receiving the channel feedback of the current slot, the channel information fed back for that slot is obtained.
- the channel information of these slots is obtained by a two-stage channel prediction method based on a neural network.
- the first-stage channel prediction uses the channel values fed back in the previous L channel feedback slots to predict, through the first neural network, the feedback channel of the next channel feedback slot one cycle later, that is, the (L+1)th channel feedback slot.
- the time relationship of the channels fed back and predicted in the channel feedback cycle slots is shown in Figure 4a.
- the first-stage channel prediction is implemented by the first neural network, whose input X consists of the channel vectors fed back in the L channel feedback cycle slots.
- h_i = [h_{1,i} h_{2,i} ... h_{N,i}]^T is the channel vector from the N antennas of massive MIMO to the receiving end fed back in the i-th channel feedback period slot.
- h n,i represents the channel from the n-th antenna to the receiving end during the i-th period slot
- X can be expressed as: X = [h_1 h_2 ... h_L], an N × L matrix.
- the output Y of the first neural network is the predicted value of the channel from the N antennas to the receiving end in the (L+1)th channel feedback cycle slot; Y is therefore an N × 1 vector and can be expressed as: Y = ĥ_{L+1} = [ĥ_{1,L+1} ĥ_{2,L+1} ... ĥ_{N,L+1}]^T.
- the neural network for implementing the first stage channel prediction is composed of an input layer, an output layer and several hidden layers. Taking one hidden layer as an example, the first stage prediction neural network structure is shown in FIG. 3b.
- the basic structure of each layer of the first-stage prediction neural network is as follows: the input signal (or the data from the previous layer) is multiplied by a matrix WR on the right and a matrix WL on the left, and then an offset matrix B is added. Except for the output layer, each layer passes the result through an activation function after adding the offset matrix.
- the activation function can be the ReLU function or other functions. Specifically:
- the dimension of W_L1 is M × N
- the dimension of W_R1 is L × M
- the dimension of B_1 is M × M.
- the dimension of the hidden layer is M
- the dimension of W_L2 is M × M
- the dimension of W_R2 is M × M
- the dimension of B_2 is M × M.
- the neural network is set to 3 layers, with activation functions in layers 1 and 2 and no activation function in layer 3.
- the dimension of the hidden layer is M
- the dimension of W_L3 is N × M
- the dimension of W_R3 is M × 1
- the dimension of B_3 is N × 1.
- the training data X can be obtained from a large number of channels fed back over L channel feedback periods, and the matching target is the actual channel h_{L+1} fed back in the (L+1)th channel feedback period.
- the training optimization target (cost function) is to minimize the normalized mean square error between the output Y of the neural network and the actual channel h_{L+1}, that is, to minimize ||Y − h_{L+1}||² / ||h_{L+1}||².
- the second-stage channel prediction uses the estimated channel values of the previous L channel feedback slots and the channel value predicted one cycle in advance by the first neural network (that is, the predicted feedback of the (L+1)th cycle) to predict the channels of the K-1 slots without channel feedback between the Lth-cycle channel feedback slot and the (L+1)th-cycle channel feedback slot.
- the second-stage channel prediction is implemented using the second neural network, and the channel time relationship of the second-stage channel prediction is shown in Figure 4b.
- the second stage channel prediction is also implemented through a neural network, the input X of which is the channel vector of the channel feedback slot of L cycles plus the channel of the channel feedback slot of the L+1th cycle predicted in advance by the first neural network.
- h_i = [h_{1,i} h_{2,i} ... h_{N,i}]^T is the channel vector from the N antennas of the massive MIMO to the receiver in the i-th period of sending channel feedback.
- h n,i represents the channel from the n-th antenna to the receiver during the i-th period slot.
- X can be expressed as: X = [h_1 h_2 ... h_L ĥ_{L+1}], an N × (L+1) matrix.
- the output Y of the second neural network is the channels of the K-1 slots without channel feedback between the Lth-period channel feedback slot and the (L+1)th-period channel feedback slot. Y is therefore an N × (K-1) matrix, which can be expressed as: Y = [ĥ^{(1)} ĥ^{(2)} ... ĥ^{(K-1)}].
- the second neural network of this embodiment is composed of an input layer, an output layer and several hidden layers. Taking one hidden layer as an example, its structure is still as shown in Figure 3b.
- each layer of the second neural network works as follows: the input signal (or the data from the previous layer) is multiplied by a matrix WR on the right and a matrix WL on the left, and then an offset matrix B is added. Except for the output layer, each layer passes the result through an activation function after adding the offset matrix.
- the activation function can be the ReLU function or other functions. Specifically:
- the dimension of W_L1 is M × N
- the dimension of W_R1 is (L+1) × M
- the dimension of B_1 is M × M.
- the dimension of the hidden layer is M
- the dimension of W_L2 is M × M
- the dimension of W_R2 is M × M
- the dimension of B_2 is M × M.
- the neural network is set to 3 layers, with activation functions in layers 1 and 2 and no activation function in layer 3.
- the dimension of the hidden layer is M
- the dimension of W_L3 is N × M
- the dimension of W_R3 is M × (K-1)
- the dimension of B_3 is N × (K-1).
- the network parameters need to be trained.
- the training data X can be taken from a large number of channels over L+1 channel feedback cycles, and the matching target is the actual channels [h^{(1)} h^{(2)} ... h^{(K-1)}] of the K-1 slots without channel feedback between the Lth and (L+1)th cycles.
- the training optimization target (cost function) is to minimize the normalized mean square error between the output Y of the neural network and the actual channels [h^{(1)} h^{(2)} ... h^{(K-1)}], that is, to minimize ||Y − [h^{(1)} h^{(2)} ... h^{(K-1)}]||² / ||[h^{(1)} h^{(2)} ... h^{(K-1)}]||².
- the first neural network and the second neural network can be used together to complete the task of channel prediction.
- the current slot channel information and the channel values of the previous L-1 channel feedback slots are input into the first neural network.
- the first neural network outputs (that is, predicts in advance) the channel of the next channel feedback slot one cycle later.
- the channel values of the L channel feedback slots and the channel of the next channel feedback slot in the future output by the first neural network are input into the second neural network.
- the second neural network outputs the channels of the K-1 slots between the current channel feedback slot and the next channel feedback slot in the future that do not perform channel feedback. In this way, channel prediction is completed through two neural networks.
- After the channel feedback of the (L+1)th cycle slot is received and the actual channel h_{L+1} is obtained, the parameters of the first neural network in this embodiment are trained online (or fine-tuned), so that the first neural network is better adapted to the current environment, improving the performance and accuracy of the first-stage channel prediction and hence of the entire two-stage channel prediction.
- network parameters trained offline are universal and can adapt to various environments, but their performance is poorer; parameters trained online are better adapted to the current channel environment, but when the channel environment suddenly changes drastically, they take longer to adjust.
- An instruction can be fed back to control the size of the online training batch. Since the training of the neural network is carried out in batches, the batch size (the number of samples contained in each batch) affects the performance of online learning.
- the feedback of this instruction can be in two ways:
- the first way is that the mobile user directly feeds back the batch size, that is, feeds back Batch_size.
- the larger the batch, the more slowly the online-trained first neural network is adjusted, but the more stable the training.
- Mobile users can judge the speed of channel scene changes based on the results of channel estimation, adjust the online training Batch_size accordingly, and feedback to the base station;
- the second way is an online-training instruction fed back by the mobile user, such as feedback of Finetuning_indicator.
- the new data accumulated between two online trainings can be collected together as the data for the next online training; that is, the batch size is the amount of new data collected between two online trainings. Since the batch size cannot grow indefinitely, the base station needs to configure a maximum value Batch_size_max via RRC.
- the mobile user can reasonably plan the feedback of Finetuning_indicator according to Batch_size_max and the speed of channel environment changes.
- the batch size is the minimum value between Batch_size_max and the number of new data obtained after the last online training.
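The batch-size rule above can be sketched as follows (the helper name is illustrative):

```python
def online_batch_size(batch_size_max, num_new_samples):
    """Batch size for fine-tuning triggered by Finetuning_indicator:
    the minimum of the RRC-configured Batch_size_max and the number
    of new samples collected since the last online training."""
    return min(batch_size_max, num_new_samples)

print(online_batch_size(64, 20))   # 20: fewer new samples than the cap
print(online_batch_size(64, 200))  # 64: capped by Batch_size_max
```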
- FIG. 5 is a flow chart of another channel prediction method provided by an embodiment of the present application. As shown in FIG. 5 , the method includes the following steps:
- Step 501 A second device receives first feedback information sent by a first device, where the first feedback information is obtained based on first channel prediction information, where the first channel prediction information is obtained by the first device through prediction by a first target neural network;
- Step 502 The second device predicts a second channel through a second target neural network based on L channel information and the first feedback information to obtain second channel prediction information, where L is an integer greater than or equal to 1.
- the first information includes at least one of a pilot signal and a channel feedback.
- the first information includes channel feedback
- the second device is a network side device; the step 502 may include:
- the network side device predicts the second channel through a second target neural network based on the L channel information estimated for the L time slots of channel feedback and the first feedback information, where the second channel is the channel between the time slot for currently sending the channel feedback and the next time slot for sending the channel feedback.
- the method further includes:
- the second device sends a first prediction result to the first device, where the first prediction result includes the number of the second channels and a time slot indication corresponding to each of the second channels.
- the method further comprises:
- the second device sends a first instruction to the first device, where the first instruction is used to instruct to reset network parameters or not reset network parameters for the first target neural network.
- the method further comprises:
- the second device sends a second instruction to the first device, where the second instruction is used to instruct to update or not update the first target neural network.
- the method further comprises:
- the second device receives the number of training samples of the first target neural network sent by the first device.
- after receiving the first feedback information sent by the first device, the second device predicts, through the second target neural network and based on the L channel information and the first feedback information, the channels of the time slots in which the first information is not sent between the time slot in which the first information is sent at the current moment and the time slot in which the first information is sent next.
- the channel information of the time slot where the first information is not sent between the time slots where the first information is periodically sent can be predicted through the neural network, which effectively improves the accuracy of the channel estimation and prediction of the communication system, thereby helping to improve the system performance.
- the embodiment of the present application is a channel prediction method applied to the second device, which corresponds to the above-mentioned channel prediction method applied to the first device.
- the specific implementation process and related concepts of the embodiment of the present application can refer to the description in the embodiment described in Figure 2 above, and will not be repeated here.
- the channel prediction method provided in the embodiment of the present application can be executed by a channel prediction device.
- the channel prediction device provided in the embodiment of the present application is described by taking the channel prediction method executed by the channel prediction device as an example.
- FIG. 6 is a structural diagram of a channel prediction device provided in an embodiment of the present application.
- the channel prediction device 600 includes:
- An acquisition module 601 is used to acquire N channel information estimated for N time slots, where the N time slots are time slots corresponding to the first information, and N is an integer greater than or equal to 1;
- a first prediction module 602 is used to predict a first channel through a first target neural network based on the N channel information to obtain first channel prediction information;
- the first channel is the channel of the next first time slot corresponding to the first information closest to the current moment
- the first channel prediction information is used by the second target neural network to predict the second channel between the current time slot corresponding to the first information and the first time slot.
- the device further comprises:
- the second prediction module is used to predict the second channel through a second target neural network based on K channel information and the first channel prediction information to obtain second channel prediction information, wherein the K channel information is channel information estimated by the device for K time slots corresponding to the first information, and K is an integer greater than or equal to 1.
- the device further comprises:
- a first sending module is used to send first feedback information to a second device, where the second device is used to predict the second channel through a second target neural network based on L channel information and the first feedback information to obtain second channel prediction information, where the first feedback information is obtained based on the first channel prediction information, and the L channel information is channel information estimated by the second device for L time slots corresponding to the first information, where L is an integer greater than or equal to 1.
- the first information includes at least one of a pilot signal and a channel feedback.
- the device predicts the second channel through the second target neural network.
- when the pilot signal is a CSI-RS, the device is a terminal;
- the apparatus is a network-side device.
- the device predicts the second channel through the second target neural network, and the device is a network side device.
- the acquisition module 601 is further used for:
- the first prediction module 602 is further configured to:
- Sending first feedback information to a network side device, where the network side device is used to predict the second channel through a second target neural network based on L channel information estimated for L time slots of channel feedback and the first feedback information, where the second channel is the channel between the time slot for currently sending the channel feedback and the next time slot for sending the channel feedback.
- the number of the second channels is M, where M is an integer greater than or equal to 1; and the device further includes:
- the second sending module is used to send a first prediction result to the second device, where the first prediction result includes the number of the second channels and a time slot indication corresponding to each of the second channels.
- the device further comprises:
- a first receiving module used for receiving a first instruction
- the first prediction module 602 is further configured to:
- in a case where the first instruction indicates not to reset the network parameters, the first channel is predicted by a first neural network based on the N channel information; in a case where the first instruction indicates to reset the network parameters,
- the first channel is predicted by a second neural network based on the N channel information, and the second neural network is a neural network trained based on default channel information.
- the device further includes an execution module, configured to:
- Acquire channel information of an (N+1)-th time slot, where the (N+1)-th time slot is the next time slot, closest to the current moment, in which the first information is actually sent;
- a first operation is performed, wherein the first operation includes:
- the first target neural network is trained based on a first training sample set, and the trained first target neural network is used to predict the channel of the time slot in which the first information is sent after the N+1th time slot, and the first training sample set includes the channel information of the N+1th time slot.
- the device further comprises:
- the third sending module is used to send the number of training samples of the first target neural network to the second device.
- the device further comprises:
- a configuration module is used to configure a target value, where the target value is the maximum value of the number of training samples of the first target neural network.
- the number of training samples input into the first target neural network is the minimum value of the target value and the number of training samples in the first training sample set.
- two stages of channel prediction are implemented through two neural networks, so that the channel information of the time slots in which the first information is not sent, between the time slots in which the first information is periodically sent, can be predicted, effectively improving the accuracy of channel estimation and prediction in the communication system and thus helping to improve system performance.
- the channel prediction device in the embodiment of the present application can be an electronic device, such as an electronic device with an operating system, or a component in an electronic device, such as an integrated circuit or a chip.
- the electronic device may be a terminal, or may be a device other than a terminal.
- the terminal can include but is not limited to the types of terminal 11 listed above, and other devices can be servers, network attached storage (NAS), etc., which are not specifically limited in the embodiment of the present application.
- the channel prediction device provided in the embodiment of the present application can implement each process implemented by the method embodiment described in Figure 2 and achieve the same technical effect. To avoid repetition, it will not be repeated here.
- FIG. 7 is a structural diagram of another channel prediction device provided in an embodiment of the present application.
- the channel prediction device 700 includes:
- a receiving module 701 is configured to receive first feedback information sent by a first device, where the first feedback information is obtained based on first channel prediction information, where the first channel prediction information is obtained by the first device through prediction by a first target neural network;
- a second prediction module 702 is used to predict a second channel through a second target neural network based on L channel information and the first feedback information to obtain second channel prediction information, where L is an integer greater than or equal to 1;
- the L channel information is the channel information estimated by the device for the L time slots corresponding to the first information
- the first channel is the channel of the next first time slot corresponding to the first information that is closest to the current moment
- the second channel is the channel between the time slot corresponding to the first information at the current moment and the first time slot.
- the first information includes at least one of a pilot signal and a channel feedback.
- when the first information includes channel feedback, the apparatus is a network-side device.
- the second prediction module 702 is further used to:
- the second channel is predicted by a second target neural network, where the second channel is the channel between the time slot currently sending the channel feedback and the next time slot sending the channel feedback.
- the number of the second channels is M, where M is an integer greater than or equal to 1; and the device further includes:
- the first sending module is used to send a first prediction result to the first device, where the first prediction result includes the number of the second channels and a time slot indication corresponding to each of the second channels.
- the device further comprises:
- the second sending module is used to send a first instruction to the first device, where the first instruction is used to indicate whether to reset the network parameters of the first target neural network.
- the device further comprises:
- the third sending module is used to send a second instruction to the first device, where the second instruction is used to indicate whether to update the first target neural network.
- the receiving module 701 is further used for:
- the channel prediction device provided in the embodiment of the present application can implement each process implemented by the method embodiment described in Figure 5 and achieve the same technical effect. To avoid repetition, it will not be repeated here.
- an embodiment of the present application also provides a communication device 800, including a processor 801 and a memory 802, and the memory 802 stores a program or instruction that can be executed on the processor 801.
- when the program or instruction is executed by the processor 801, the various steps of the method embodiment described in Figure 2 or Figure 5 are implemented, and the same technical effect can be achieved. To avoid repetition, details are not repeated here.
- FIG. 9 is a schematic diagram of the hardware structure of a terminal implementing an embodiment of the present application.
- the terminal 900 includes, but is not limited to, at least some of the following components: a radio frequency unit 901, a network module 902, an audio output unit 903, an input unit 904, a sensor 905, a display unit 906, a user input unit 907, an interface unit 908, a memory 909, and a processor 910.
- the terminal 900 may also include a power source (such as a battery) for supplying power to various components, and the power source may be logically connected to the processor 910 through a power management system, so that the power management system can manage charging, discharging, and power consumption.
- the terminal may include more or fewer components than shown in the figure, combine certain components, or arrange the components differently, which will not be described in detail here.
- the input unit 904 may include a graphics processing unit (GPU) 9041 and a microphone 9042; the graphics processing unit 9041 processes image data of still pictures or video obtained by an image capture device (such as a camera) in video capture mode or image capture mode.
- the display unit 906 may include a display panel 9061, and the display panel 9061 may be configured in the form of a liquid crystal display, an organic light emitting diode, etc.
- the user input unit 907 includes a touch panel 9071 and at least one of other input devices 9072.
- the touch panel 9071 is also called a touch screen.
- the touch panel 9071 may include two parts: a touch detection device and a touch controller.
- Other input devices 9072 may include, but are not limited to, a physical keyboard, function keys (such as a volume control key, a switch key, etc.), a trackball, a mouse, and a joystick, which will not be repeated here.
- after receiving downlink data from the network-side device, the RF unit 901 can transmit the data to the processor 910 for processing; in addition, the RF unit 901 can send uplink data to the network-side device.
- the RF unit 901 includes but is not limited to an antenna, an amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, etc.
- the memory 909 can be used to store software programs or instructions and various data.
- the memory 909 may mainly include a first storage area for storing programs or instructions and a second storage area for storing data, wherein the first storage area may store an operating system, an application program or instruction required for at least one function (such as a sound playback function, an image playback function, etc.), etc.
- the memory 909 may include a volatile memory or a non-volatile memory, or the memory 909 may include both volatile and non-volatile memories.
- the non-volatile memory may be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), or a flash memory.
- the volatile memory may be a random access memory (RAM), a static random access memory (SRAM), a dynamic random access memory (DRAM), a synchronous dynamic random access memory (SDRAM), a double data rate synchronous dynamic random access memory (DDRSDRAM), an enhanced synchronous dynamic random access memory (ESDRAM), a synchronous link dynamic random access memory (SLDRAM), or a direct rambus random access memory (DRRAM).
- the memory 909 in the embodiment of the present application includes but is not limited to these and any other suitable types of memories.
- the processor 910 may include one or more processing units; optionally, the processor 910 integrates an application processor and a modem processor, wherein the application processor mainly processes operations related to an operating system, a user interface, and application programs, and the modem processor mainly processes wireless communication signals, such as a baseband processor. It is understandable that the modem processor may not be integrated into the processor 910.
- the terminal 900 is a first device, and the processor 910 is configured to obtain N channel information estimated for N time slots, where the N time slots are time slots corresponding to the first information, and N is an integer greater than or equal to 1;
- the first channel is the channel of the next first time slot corresponding to the first information that is closest to the current moment.
- the first channel prediction information is used by the second target neural network to predict a second channel between a time slot currently corresponding to the first information and the first time slot.
- the terminal 900 is a second device, and the radio frequency unit 901 is used to receive first feedback information sent by a first device, where the first feedback information is obtained based on first channel prediction information, and the first channel prediction information is obtained by the first device through a first target neural network prediction;
- the processor 910 is configured to predict a second channel through a second target neural network based on L channel information and the first feedback information to obtain second channel prediction information, where L is an integer greater than or equal to 1;
- the L channel information is the channel information estimated by the terminal for the L time slots corresponding to the first information
- the first channel is the channel of the next first time slot corresponding to the first information that is closest to the current moment
- the second channel is the channel between the time slot corresponding to the first information at the current moment and the first time slot.
- terminal 900 provided in the embodiment of the present application can implement all the technical processes of the method embodiment described in Figure 2 or Figure 5 above, and can achieve the same technical effect. In order to avoid repetition, it will not be described here.
- An embodiment of the present application also provides a network side device, which corresponds to the method embodiment described in Figure 2 or Figure 5 above.
- the various implementation processes and implementation methods of the above method embodiments can be applied to the network side device embodiment and can achieve the same technical effect.
- the embodiment of the present application also provides a network side device.
- the network side device 1000 includes: an antenna 1001, a radio frequency device 1002, a baseband device 1003, a processor 1004 and a memory 1005.
- the antenna 1001 is connected to the radio frequency device 1002.
- the radio frequency device 1002 receives information through the antenna 1001 and sends the received information to the baseband device 1003 for processing.
- the baseband device 1003 processes the information to be sent and sends it to the radio frequency device 1002.
- the radio frequency device 1002 processes the received information and sends it out through the antenna 1001.
- the method executed by the network-side device in the above embodiment may be implemented in the baseband device 1003, which includes a baseband processor.
- the baseband device 1003 may include, for example, at least one baseband board, on which a plurality of chips are arranged, as shown in FIG10 , wherein one of the chips is, for example, a baseband processor, which is connected to the memory 1005 through a bus interface to call a program in the memory 1005 and execute the network device operations shown in the above method embodiment.
- the network side device may also include a network interface 1006, which is, for example, a common public radio interface (CPRI).
- the network side device 1000 of the embodiment of the present application further includes instructions or programs stored in the memory 1005 and executable on the processor 1004.
- the processor 1004 calls the instructions or programs in the memory 1005 to execute the method executed by each module shown in Figure 6 or Figure 7, and achieves the same technical effect. To avoid repetition, it will not be repeated here.
- An embodiment of the present application also provides a readable storage medium, on which a program or instruction is stored.
- when the program or instruction is executed by a processor, the various processes of the method embodiment described in Figure 2 or Figure 5 are implemented, and the same technical effect can be achieved. To avoid repetition, details are not repeated here.
- the processor is the processor in the terminal described in the above embodiment.
- the readable storage medium includes computer-readable storage media, such as a computer read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
- An embodiment of the present application further provides a chip, which includes a processor and a communication interface, wherein the communication interface is coupled to the processor, and the processor is used to run programs or instructions to implement the various processes of the method embodiments described in Figures 2 or 5 above, and can achieve the same technical effect. To avoid repetition, it will not be repeated here.
- the chip mentioned in the embodiments of the present application can also be called a system-level chip, a system chip, a chip system or a system-on-chip chip, etc.
- the embodiments of the present application further provide a computer program/program product, which is stored in a storage medium and is executed by at least one processor to implement the various processes of the method embodiments described in Figures 2 or 5 above, and can achieve the same technical effect. To avoid repetition, it will not be described here.
- the technical solution of the present application can be embodied in the form of a computer software product, which is stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disk), and includes a number of instructions for enabling a terminal (which can be a mobile phone, a computer, a server, an air conditioner, or a network device, etc.) to execute the methods described in each embodiment of the present application.
Abstract
The present application discloses a channel prediction method, apparatus, and communication device, belonging to the field of communication technology. The channel prediction method of the embodiments of the present application includes: a first device obtains N channel information estimated for N time slots, where the N time slots are time slots corresponding to first information, and N is an integer greater than or equal to 1; the first device predicts a first channel through a first target neural network based on the N channel information to obtain first channel prediction information; wherein the first channel is the channel of the next first time slot corresponding to the first information that is closest to the current moment, and the first channel prediction information is used by a second target neural network to predict a second channel between the time slot currently corresponding to the first information and the first time slot.
Description
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority to Chinese Patent Application No. 202211217546.3, filed on September 30, 2022, the entire contents of which are incorporated herein by reference.
The present application belongs to the field of communication technology, and specifically relates to a channel prediction method, apparatus, and communication device.
At present, to reduce the pilot overhead of massive multiple-input multiple-output (Multiple-Input Multiple-Output, MIMO), the pilot signal is sent once every few time slots (slots). Therefore, in the slots in which no pilot signal is sent, the true massive MIMO channel cannot be known. For this reason, existing systems generally use the channel estimated in the slot in which the pilot signal was most recently sent as the channel of the current slot; however, the channel obtained in this way differs considerably from the channel actually corresponding to the current slot.
SUMMARY
Embodiments of the present application provide a channel prediction method, apparatus, and communication device, which can solve the problem in the related art that the channel obtained by channel estimation differs considerably from the channel corresponding to the actual time slot.
In a first aspect, a channel prediction method is provided, including:
a first device obtains N channel information estimated for N time slots, where the N time slots are time slots corresponding to first information, and N is an integer greater than or equal to 1;
the first device predicts a first channel through a first target neural network based on the N channel information to obtain first channel prediction information;
wherein the first channel is the channel of the next first time slot corresponding to the first information that is closest to the current moment, and the first channel prediction information is used by a second target neural network to predict a second channel between the time slot currently corresponding to the first information and the first time slot.
In a second aspect, a channel prediction method is provided, including:
a second device receives first feedback information sent by a first device, where the first feedback information is obtained based on first channel prediction information, and the first channel prediction information is obtained by the first device through prediction by a first target neural network;
the second device predicts a second channel through a second target neural network based on L channel information and the first feedback information to obtain second channel prediction information, where L is an integer greater than or equal to 1;
wherein the L channel information is channel information estimated by the second device for L time slots corresponding to first information, the first channel is the channel of the next first time slot corresponding to the first information that is closest to the current moment, and the second channel is the channel between the time slot corresponding to the first information at the current moment and the first time slot.
In a third aspect, a channel prediction apparatus is provided, including:
an acquisition module, configured to obtain N channel information estimated for N time slots, where the N time slots are time slots corresponding to first information, and N is an integer greater than or equal to 1;
a first prediction module, configured to predict a first channel through a first target neural network based on the N channel information to obtain first channel prediction information;
wherein the first channel is the channel of the next first time slot corresponding to the first information that is closest to the current moment, and the first channel prediction information is used by a second target neural network to predict a second channel between the time slot currently corresponding to the first information and the first time slot.
In a fourth aspect, a channel prediction apparatus is provided, including:
a receiving module, configured to receive first feedback information sent by a first device, where the first feedback information is obtained based on first channel prediction information, and the first channel prediction information is obtained by the first device through prediction by a first target neural network;
a second prediction module, configured to predict a second channel through a second target neural network based on L channel information and the first feedback information to obtain second channel prediction information, where L is an integer greater than or equal to 1;
wherein the L channel information is channel information estimated by the apparatus for L time slots corresponding to first information, the first channel is the channel of the next first time slot corresponding to the first information that is closest to the current moment, and the second channel is the channel between the time slot corresponding to the first information at the current moment and the first time slot.
In a fifth aspect, a communication device is provided, including a processor and a memory, where the memory stores a program or instruction executable on the processor, and when the program or instruction is executed by the processor, the steps of the method of the first aspect or the steps of the method of the second aspect are implemented.
In a sixth aspect, a communication device is provided, including a processor and a communication interface, where the processor is configured to obtain N channel information estimated for N time slots, the N time slots being time slots corresponding to first information, and N being an integer greater than or equal to 1; and to predict a first channel through a first target neural network based on the N channel information to obtain first channel prediction information, the first channel being the channel of the next first time slot corresponding to the first information that is closest to the current moment, and the first channel prediction information being used by a second target neural network to predict a second channel between the time slot currently corresponding to the first information and the first time slot;
or,
the communication interface is configured to receive first feedback information sent by a first device, where the first feedback information is obtained based on first channel prediction information, and the first channel prediction information is obtained by the first device through prediction by a first target neural network;
the processor is configured to predict a second channel through a second target neural network based on L channel information and the first feedback information to obtain second channel prediction information, where L is an integer greater than or equal to 1; wherein the L channel information is channel information estimated by the second device for L time slots corresponding to the first information, the first channel is the channel of the next first time slot corresponding to the first information that is closest to the current moment, and the second channel is the channel between the time slot corresponding to the first information at the current moment and the first time slot.
In a seventh aspect, a readable storage medium is provided, on which a program or instruction is stored, and when the program or instruction is executed by a processor, the steps of the method of the first aspect or the steps of the method of the second aspect are implemented.
In an eighth aspect, a chip is provided, including a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to run a program or instruction to implement the method of the first aspect or the method of the second aspect.
In a ninth aspect, a computer program product is provided, where the computer program product is stored in a storage medium and is executed by at least one processor to implement the method of the first aspect or the method of the second aspect.
In the embodiments of the present application, the first device obtains N channel information estimated for N time slots corresponding to first information, and predicts a first channel through a first target neural network based on the N channel information to obtain first channel prediction information, the first channel being the channel of the next first time slot in which the first information is sent that is closest to the current moment; and the first channel prediction information can be used by a second target neural network to predict a second channel between the time slot in which the first information is currently sent and the first time slot. In this way, two stages of channel prediction are implemented through two neural networks, so that the channel information of the time slots in which the first information is not sent, between the time slots in which the first information is periodically sent, can be predicted, effectively improving the accuracy of channel estimation and prediction in the communication system and thus helping to improve system performance.
FIG. 1a is a block diagram of a wireless communication system to which embodiments of the present application are applicable;
FIG. 1b is a first schematic diagram of time slots in which SRS/CSI-RS is periodically sent;
FIG. 1c is a first schematic diagram of time slots in which channel feedback is periodically sent;
FIG. 2 is a flowchart of a channel prediction method provided by an embodiment of the present application;
FIG. 3a is a second schematic diagram of time slots in which SRS/CSI-RS is periodically sent;
FIG. 3b is a schematic structural diagram of a neural network;
FIG. 3c is a third schematic diagram of time slots in which SRS/CSI-RS is periodically sent;
FIG. 4a is a second schematic diagram of time slots in which channel feedback is periodically sent;
FIG. 4b is a third schematic diagram of time slots in which channel feedback is periodically sent;
FIG. 5 is a flowchart of another channel prediction method provided by an embodiment of the present application;
FIG. 6 is a structural diagram of a channel prediction apparatus provided by an embodiment of the present application;
FIG. 7 is a structural diagram of another channel prediction apparatus provided by an embodiment of the present application;
FIG. 8 is a structural diagram of a communication device provided by an embodiment of the present application;
FIG. 9 is a structural diagram of a terminal provided by an embodiment of the present application;
FIG. 10 is a structural diagram of a network-side device provided by an embodiment of the present application.
The technical solutions in the embodiments of the present application will be described clearly below in conjunction with the accompanying drawings of the embodiments of the present application. Obviously, the described embodiments are some rather than all of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application fall within the protection scope of the present application.
The terms "first", "second", and the like in the specification and claims of the present application are used to distinguish similar objects, rather than to describe a specific order or sequence. It should be understood that the terms so used are interchangeable where appropriate, so that the embodiments of the present application can be implemented in orders other than those illustrated or described here; and the objects distinguished by "first" and "second" are usually of one class, without limiting the number of objects — for example, there may be one first object or multiple first objects. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
It is worth noting that the techniques described in the embodiments of the present application are not limited to Long Term Evolution (LTE)/LTE-Advanced (LTE-A) systems, and may also be used in other wireless communication systems, such as Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Frequency Division Multiple Access (FDMA), Orthogonal Frequency Division Multiple Access (OFDMA), Single-carrier Frequency Division Multiple Access (SC-FDMA), and other systems. The terms "system" and "network" in the embodiments of the present application are often used interchangeably, and the described techniques can be used both for the systems and radio technologies mentioned above and for other systems and radio technologies. The following description describes a New Radio (NR) system for example purposes and uses NR terminology in most of the description, but these techniques are also applicable beyond NR system applications, such as to 6th Generation (6G) communication systems.
FIG. 1a shows a block diagram of a wireless communication system to which embodiments of the present application are applicable. The wireless communication system includes a terminal 11 and a network-side device 12. The terminal 11 may be a terminal-side device such as a mobile phone, a tablet personal computer, a laptop computer (also called a notebook computer), a personal digital assistant (PDA), a palmtop computer, a netbook, an ultra-mobile personal computer (UMPC), a mobile Internet device (MID), an augmented reality (AR)/virtual reality (VR) device, a robot, a wearable device, a vehicle user equipment (VUE), a pedestrian user equipment (PUE), a smart home device (a home device with a wireless communication function, such as a refrigerator, a television, a washing machine, or furniture), a game console, a personal computer (PC), a teller machine, or a self-service machine; wearable devices include smart watches, smart bracelets, smart earphones, smart glasses, smart jewelry (smart bangles, smart chains, smart rings, smart necklaces, smart anklets, smart ankle chains, etc.), smart wristbands, smart clothing, and the like. It should be noted that the embodiments of the present application do not limit the specific type of the terminal 11. The network-side device 12 may include an access network device or a core network device, where the access network device may also be called a radio access network device, a radio access network (Radio Access Network, RAN), a radio access network function, or a radio access network unit. The access network device may include a base station, a wireless local area network (Wireless Local Area Networks, WLAN) access point, a WiFi node, or the like; the base station may be called a Node B, an evolved Node B (eNB), an access point, a base transceiver station (Base Transceiver Station, BTS), a radio base station, a radio transceiver, a basic service set (Basic Service Set, BSS), an extended service set (Extended Service Set, ESS), a home Node B, a home evolved Node B, a transmitting receiving point (Transmitting Receiving Point, TRP), or some other suitable term in the field. As long as the same technical effect is achieved, the base station is not limited to a specific technical term. It should be noted that in the embodiments of the present application, only the base station in the NR system is taken as an example, and the specific type of the base station is not limited.
For a better understanding of the technical solutions of the embodiments of the present application, related concepts and background that may be involved in the embodiments of the present application are explained below.
In some scenarios, for a massive MIMO system, the transmitting end (a terminal or a base station) has N antennas. The frame structure for data transmission in the system is based on time slots (slots), each slot lasting 1 ms. To enable the receiving end (a base station or a terminal) to obtain the massive MIMO channel information, the transmitting end sends pilot signals on N ports; the pilot signal may be a Sounding Reference Signal (SRS) or a Channel State Information Reference Signal (CSI-RS). To reduce pilot overhead, the pilot signal is not sent in every slot, but once every K slots (with period K), so two transmissions are separated by K-1 slots. Taking K=5 as an example, the frame structure for sending the pilot signal SRS or CSI-RS is shown in FIG. 1b. In a slot in which SRS or CSI-RS is sent, after receiving the SRS or CSI-RS signal, the receiving end (a base station or a terminal) obtains the channel information of that slot through channel estimation. It can be seen that in these slots the system can obtain fairly accurate channel information on each antenna of the massive MIMO array. For the slots between those in which SRS or CSI-RS is sent, however, since there is no pilot information, the system cannot obtain fairly accurate channel information using channel estimation.
In other scenarios, based on a frequency division duplex (Frequency Division Duplex, FDD) massive MIMO system, the base station has N antennas. The frame structure for data transmission in the system is based on slots, each slot lasting 1 ms. To enable the base station to obtain the downlink massive MIMO channel information, the base station first sends the pilot signal CSI-RS; after receiving the CSI-RS signal, the terminal obtains the channel information of that slot through channel estimation, and the terminal needs to feed back the channel information via the uplink. To reduce feedback overhead, the channel feedback is not sent in every slot, but once every K slots (with period K), so two transmissions are separated by K-1 slots. The value of K is configured by the base station through Radio Resource Control (RRC) signaling. Taking K=5 as an example, the frame structure of the channel feedback is shown in FIG. 1c. In a slot in which the channel feedback signal is sent, the base station can obtain fairly accurate channel information on each antenna of the massive MIMO array. For the slots between the channel-feedback slots, since there is no fed-back channel information, the system cannot obtain fairly accurate channel information.
Based on the above background, embodiments of the present application provide a channel prediction method.
The channel prediction method provided by the embodiments of the present application is described in detail below through some embodiments and their application scenarios, in conjunction with the accompanying drawings.
Referring to FIG. 2, FIG. 2 is a flowchart of a channel prediction method provided by an embodiment of the present application. As shown in FIG. 2, the method includes the following steps:
Step 201: a first device obtains N channel information estimated for N time slots, where the N time slots are time slots corresponding to first information, and N is an integer greater than or equal to 1.
It should be noted that the first device may be a communication device such as a terminal or a network-side device; the specific type of the first device may be related to the first information, which will be described in subsequent embodiments and is not elaborated here.
In the embodiments of the present application, the first information may be a reference signal, such as a Sounding Reference Signal (SRS) or a Channel State Information Reference Signal (CSI-RS), or the first information may be channel feedback information, etc. The N time slots are time slots corresponding to the first information; for example, the N time slots may be times corresponding to the moments at which the reference signal is sent, times corresponding to the moments at which the reference signal is received, or times corresponding to the moments at which estimation of the reference signal is completed, etc., which is not specifically limited in the embodiments of the present application.
In this step, the first device obtains N channel information estimated for N time slots; for example, a terminal may obtain N channel information estimated for N time slots in which CSI-RS has been sent. Of course, this step may also cover other possible cases, which are not enumerated here.
Step 202: the first device predicts a first channel through a first target neural network based on the N channel information to obtain first channel prediction information.
The first channel is the channel of the next first time slot corresponding to the first information that is closest to the current moment, and the first channel prediction information is used by a second target neural network to predict a second channel between the time slot currently corresponding to the first information and the first time slot.
In the embodiments of the present application, after obtaining the N channel information estimated for the N time slots, the first device may use the N channel information related to the first information as training samples to train the first target neural network, so that the trained first target neural network can predict the channel of the next first time slot corresponding to the first information after the current moment, that is, the first channel, thereby obtaining the first channel prediction information.
For example, the first device may use the channel values estimated in the previous N time slots (slots) in which SRS or CSI-RS signals were sent before the current moment to predict in advance, through the first target neural network, the channel of the next slot in which an SRS or CSI-RS signal will be sent after the current moment.
In the embodiments of the present application, the first channel prediction information is used by the second target neural network to predict the second channel between the time slot currently corresponding to the first information and the first time slot corresponding to the predicted first channel.
For example, suppose the CSI-RS is sent once every T time slots. The first device uses the channel values estimated in the previous N CSI-RS slots closest to the current moment, predicts through the first target neural network the channel of the next (that is, the (N+1)-th) CSI-RS slot after the current moment, and then predicts through the second target neural network the channels of the T-1 slots without CSI-RS between the CSI-RS slot of the N-th period and that of the (N+1)-th period, that is, the second channel. In this way, the first device can predict, based on the second target neural network, the channel information of the slots in which CSI-RS is not sent, effectively improving the channel estimation and prediction performance of the communication system and thus helping to improve system performance.
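As an illustration of the slot relationships described above, the following is a minimal sketch — the function name and slot indices are illustrative, not part of the embodiment — of which slot the first stage targets and which slots the second stage targets, assuming CSI-RS is sent once every `period` slots:

```python
# Hypothetical illustration of the two prediction targets: stage 1 predicts the
# channel of the next pilot slot; stage 2 predicts the channels of the
# period-1 slots in between, where no pilot is sent.

def prediction_targets(last_pilot_slot: int, period: int):
    """Return the stage-1 target slot (next pilot slot) and the stage-2
    target slots (the slots between the two pilot slots)."""
    next_pilot_slot = last_pilot_slot + period                    # first channel
    between = list(range(last_pilot_slot + 1, next_pilot_slot))   # second channels
    return next_pilot_slot, between

# With period T=5 and the most recent CSI-RS sent in slot 20,
# stage 1 targets slot 25 and stage 2 targets slots 21-24.
nxt, mid = prediction_targets(20, 5)
```

This matches the T=5 example in the text: four in-between channels per period, i.e. M = T-1 second channels.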
In the embodiments of the present application, the first device obtains N channel information estimated for N time slots corresponding to the first information and predicts the first channel through the first target neural network based on the N channel information to obtain first channel prediction information, the first channel being the channel of the next first time slot in which the first information is sent that is closest to the current moment; and the first channel prediction information can be used by the second target neural network to predict the second channel between the time slot in which the first information is currently sent and the first time slot. In this way, two stages of channel prediction are implemented through two neural networks, so that the channel information of the time slots in which the first information is not sent, between the time slots in which the first information is periodically sent, can be predicted, effectively improving the channel estimation and prediction performance of the communication system and thus helping to improve system performance.
Optionally, after step 202, the method further includes:
the first device predicts the second channel through a second target neural network based on K channel information and the first channel prediction information to obtain second channel prediction information, where the K channel information is channel information estimated by the first device for K time slots corresponding to the first information, and K is an integer greater than or equal to 1; or,
the first device sends first feedback information to a second device, where the second device is used to predict the second channel through a second target neural network based on L channel information and the first feedback information to obtain second channel prediction information; the first feedback information is obtained based on the first channel prediction information, the L channel information is channel information estimated by the second device for L time slots corresponding to the first information, and L is an integer greater than or equal to 1.
For example, in one implementation, after the first device predicts the first channel through the first target neural network based on the N channel information and obtains the first channel prediction information, the first device further predicts the second channel through the second target neural network based on the N channel information and the first channel prediction information to obtain second channel prediction information. That is, both the first target neural network and the second target neural network reside on the first device side, and the first device predicts the first channel through the first target neural network and then the second channel through the second target neural network in sequence, which can improve the channel prediction performance of the first device and thus its communication performance.
Alternatively, in another implementation, after the first device predicts the first channel through the first target neural network based on the N channel information and obtains the first channel prediction information, it may process the first channel prediction information, for example by compression, to obtain first feedback information, and send the first feedback information to a second device; the second device then predicts the second channel through the second target neural network based on the N channel information and the first feedback information to obtain second channel prediction information. In this implementation, the first target neural network resides on the first device side and the second target neural network on the second device side; the first device realizes the prediction of the first channel, and the second channel is predicted by the second device. In this way, channel prediction is realized through coordination between the first device and the second device, improving the channel prediction performance of the communication devices.
In the embodiments of the present application, the first information includes at least one of a pilot signal and channel feedback. For example, if the first information is a pilot signal, the N channel information is channel estimation information obtained based on the pilot signal.
Optionally, in a case where the first information includes only a pilot signal, the first device predicts the second channel through the second target neural network. That is, both the first target neural network and the second target neural network reside on the first device side, and the first device can realize both the prediction of the first channel and the prediction of the second channel.
Optionally, in a case where the pilot signal is a CSI-RS, the first device is a terminal; in a case where the pilot signal is an SRS, the first device is a network-side device.
Optionally, in a case where the first information includes only channel feedback, the first device predicts the second channel through the second target neural network, and the first device is a network-side device.
Optionally, as a specific implementation, the first device obtaining N channel information estimated for N time slots includes:
a terminal obtains N channel information estimated for N time slots corresponding to CSI-RS;
in this case, the first device sending first feedback information to a second device, where the second device is used to predict the second channel through a second target neural network based on L channel information and the first feedback information, includes:
the terminal sends first feedback information to a network-side device, where the network-side device is used to predict the second channel through a second target neural network based on the first feedback information and channel information of L channel-feedback time slots, the second channel being the channel between the time slot in which the channel feedback is currently sent and the next time slot in which the channel feedback is sent.
Specifically, the terminal estimates N channel information for the N time slots corresponding to CSI-RS, and predicts the first channel through the first target neural network based on the N channel information to obtain first channel prediction information; the first channel is the channel of the next CSI-RS slot closest to the current moment. The first target neural network may be obtained by the terminal through training based on the N channel information; that is, the terminal can train the first target neural network based on the CSI-RS channel estimation results and then predict the channel of the next CSI-RS transmission based on the trained first target neural network.
Further, the terminal compresses the predicted first channel prediction information to obtain the first feedback information and sends the first feedback information to the network-side device; the network-side device predicts the second channel through the second target neural network based on the channel information estimated for L channel-feedback time slots and the first feedback information. In this case, the second channel is the channel between the time slot in which the channel feedback is currently sent to the network-side device and the next time slot in which the channel feedback is sent. The second target neural network may be obtained by the network-side device through training based on the L channel information estimated for the L channel-feedback time slots, so that the second target neural network can realize the prediction of the channels of the channel-feedback time slots. In this way, the first target neural network and the second target neural network are trained at the terminal and the network-side device respectively, improving the channel prediction performance of both.
In the embodiments of the present application, the number of second channels predicted by the second target neural network is M, where M is an integer greater than or equal to 1. For example, suppose the first information is CSI-RS; to reduce system overhead, the CSI-RS is not sent in every time slot, for example it may be sent once every 5 time slots. The first channel is then the channel of the predicted next CSI-RS slot, and the second channels are the channels corresponding to the time slots between the next CSI-RS slot and the most recent CSI-RS slot, so the number of second channels is 4.
Optionally, after the first device predicts the second channel through the second target neural network based on the K channel information and the first channel prediction information, the method further includes:
the first device sends a first prediction result to the second device, the first prediction result including the number of second channels and the time slot indication corresponding to each second channel.
It can be understood that after the first device predicts the second channels through the second target neural network, if the number of second channels is greater than one, the first device may send the second device information about all of the second channels, or only information about some of them.
Optionally, the first device sends a first prediction result to the second device, the first prediction result including the number of second channels and the time slot indication corresponding to each second channel. For example, if the first device predicts 4 second channels, it may send the second device a first prediction result containing, for example, the value 4 and the time slot indication (slot index) corresponding to each of the four second channels. In this way, the second device learns the number of time slots, between the time slots in which the first information is periodically sent, in which the first information is not sent, together with their corresponding slot indications, which further facilitates the transmission of the first information between the first device and the second device.
In the embodiments of the present application, before the first device predicts the first channel through the first target neural network based on the N channel information, the method further includes:
the first device receives a first instruction;
the first device predicting the first channel through the first target neural network based on the N channel information includes:
in a case where the first instruction indicates not to reset the network parameters, the first device predicts the first channel through a first neural network based on the N channel information;
in a case where the first instruction indicates to reset the network parameters, the first device predicts the first channel through a second neural network based on the N channel information, the second neural network being a neural network trained based on default channel information.
In this case, the first target neural network is the first neural network or the second neural network.
In some embodiments, the second neural network may also be called an initial neural network; it is obtained by offline training with default channel information — for example, it may be a neural network pre-trained by another device and then sent to the first device. In other words, after the second neural network predicts the first channel, the channel information of that first channel is not used as a training sample for the second neural network; that is, the second neural network is not trained online. The first neural network, by contrast, may be trained online: after the first neural network predicts the first channel, the channel information of that first channel can serve as a training sample for the first neural network, so that the network parameters of the first neural network are adjusted based on the latest obtained channel information, improving the prediction accuracy of the first neural network and enabling it to adapt to the current channel environment.
For example, when the channel environment changes, such as changing abruptly and drastically, the terminal may send the network-side device an instruction indicating whether to reset the network parameters (for example, Initial_parameter_reset). When the network-side device receives Initial_parameter_reset=1, it predicts the first channel anew using the initial neural network obtained by offline training (i.e., the second neural network); if it receives Initial_parameter_reset=0, the network-side device continues online training to adapt to the current channel environment and predicts the first channel with the online-trained first neural network. In this way, through the feedback of the Initial_parameter_reset instruction, the terminal responds to changes in the channel environment, better guaranteeing the channel prediction performance of the network-side device.
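The reset handling described above can be sketched as a small selection rule; the flag name follows the text (Initial_parameter_reset), while the function name and the network placeholders are illustrative assumptions, not part of the embodiment:

```python
# Minimal sketch of the Initial_parameter_reset handling: a value of 1 falls
# back to the offline-trained initial network ("second neural network"); a
# value of 0 keeps the online-trained network ("first neural network").

def select_prediction_network(initial_parameter_reset: int,
                              online_trained_net, offline_initial_net):
    """Pick which network predicts the first channel."""
    if initial_parameter_reset == 1:
        return offline_initial_net
    return online_trained_net
```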
In the embodiments of the present application, the method may further include:
the first device obtains channel information of an (N+1)-th time slot, the (N+1)-th time slot being the next time slot, closest to the current moment, in which the first information is actually sent;
in a case where the first device receives a second instruction indicating to update the first target neural network, the first device performs a first operation, the first operation including:
training the first target neural network based on a first training sample set, the trained first target neural network being used to predict the channel of a time slot in which the first information is sent after the (N+1)-th time slot, and the first training sample set including the channel information of the (N+1)-th time slot.
For example, the N time slots are the previous N time slots, closest to the current moment, in which the first information was sent; the first device predicts the first channel through the first target neural network, the first channel being the channel of the (N+1)-th time slot in which the first information is sent, as predicted by the first target neural network; the first device may obtain, in real time, the time slots in which the first information is actually sent.
When the first device receives a second instruction and the second instruction indicates updating the first target neural network, the first device may train the first target neural network based on the latest training sample set (i.e., the first training sample set) — the online training described above — and the trained first target neural network can then be used to predict the channel of the next time slot in which the first information is sent. In this way, each time the first device obtains the channel information of a time slot in which the first information is actually sent, it adds that channel information to the training sample set of the first target neural network to train the network, optimizing its neural network parameters and improving the accuracy of the first target neural network.
Optionally, after the first device performs the first operation, the method further includes:
the first device sends the second device the number of training samples of the first target neural network.
For example, the first device may send the second device the number of training samples in the first training sample set, thereby better supporting the online training of the first target neural network.
Optionally, the method further includes:
the first device configures a target value, the target value being the maximum number of training samples of the first target neural network.
For example, the network-side device may configure a target value through RRC, i.e., the maximum number of training samples of the first target neural network that the network-side device can support.
Optionally, in a case where the first device performs the first operation, the number of training samples input into the first target neural network is the minimum of the target value and the number of training samples in the first training sample set. The first training sample set is the sample set used when the first target neural network is trained online; that is, each time the first device obtains the channel information of a time slot in which the first information is actually sent, it adds that channel information to the first training sample set, so the size of the first training sample set could grow without bound. In the embodiments of the present application, when the number of training samples in the first training sample set exceeds the target value, the number of training samples of the first target neural network is the target value — for example, the first target-value samples of the first training sample set — which avoids increasing the computational burden of training the first target neural network due to an excessive number of training samples.
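The sample-cap rule above reduces to taking min(target value, current set size); the sketch below assumes, as the text suggests, that the first samples of the set are kept — the function name is illustrative:

```python
# Sketch of the training-sample cap: the number of samples fed to the first
# target neural network is min(configured target value, size of the training
# set); when the set exceeds the cap, the first `target_value` samples are used.

def training_batch(sample_set, target_value):
    n = min(target_value, len(sample_set))
    return sample_set[:n]
```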
For a better understanding, the technical solutions provided by the present application are described below through specific embodiments.
Embodiment 1
The scenario of this embodiment is based on a massive MIMO system in which the transmitting end (a terminal or a base station) has N antennas. To enable the receiving end (a base station or a terminal) to obtain the massive MIMO channel information, the transmitting end sends pilot signals on N ports; the pilot signal may be an SRS or a CSI-RS. To reduce pilot overhead, the pilot signal is not sent in every slot but once every K slots (with period K), so two transmissions are separated by K-1 slots; for the K-1 slots in which no pilot signal is sent, fairly accurate channel information cannot be obtained by channel estimation, since there is no pilot signal.
In this embodiment, after receiving the signal of the current SRS/CSI-RS slot, the communication device obtains the channel information of that slot through channel estimation. For the slots without SRS/CSI-RS between the current SRS/CSI-RS slot and the next SRS/CSI-RS slot, i.e., the slots within the SRS/CSI-RS transmission period, the channel information of these slots is obtained through a neural-network-based two-stage channel prediction method.
Specifically, the first-stage channel prediction uses the channel values estimated in the previous L slots in which SRS/CSI-RS signals were sent to predict in advance, through a first neural network, the channel of the next slot in which an SRS/CSI-RS signal is sent, i.e., the channel of the slot of the (L+1)-th SRS/CSI-RS transmission period. The time relationship between the channels estimated and predicted using the neural network is shown in FIG. 3a.
The first-stage channel prediction is implemented by a first neural network whose input X consists of the channel vectors estimated in the SRS/CSI-RS slots of L transmission periods, where X can be expressed as:
X = [h1 h2 ... hL]
where hi = [h1,i h2,i ... hN,i]T is the channel vector from the N massive-MIMO antennas to the receiver in the i-th SRS/CSI-RS transmission period, and hn,i denotes the channel from the n-th antenna to the receiver during the slot of the i-th period; X is therefore an N×L matrix.
The output Y of the neural network is the predicted value of the channels from the N antennas to the receiver in the slot of the (L+1)-th SRS/CSI-RS transmission period, so Y is an N×1 vector. To express all channel information with the symbol h, Y can be written as the predicted channel vector of the (L+1)-th period.
In this embodiment, the neural network implementing the first-stage channel prediction consists of an input layer, an output layer, and several hidden layers. Taking one hidden layer as an example, the structure of the first-stage prediction neural network is shown in FIG. 3b.
The basic structure of each layer of the first-stage prediction neural network is as follows: the input signal, or the data from the previous layer, is right-multiplied by a matrix WR and left-multiplied by a matrix WL, and then an offset matrix B is added. Except for the output layer, each layer passes the result through an activation function after adding the offset matrix; the activation function may be the ReLU function, or another function may be selected as the activation function. Specifically:
1) The output of the input layer is X1 = ReLU(WL1·X·WR1 + B1).
If the dimension of the next hidden layer is M, WL1 is M×N, WR1 is L×M, and B1 is M×M.
2) The output of the hidden layer is X2 = ReLU(WL2·X1·WR2 + B2).
If the dimension of the hidden layer is M, WL2 is M×M, WR2 is M×M, and B2 is M×M.
The neural network is set to 3 layers; layers 1 and 2 have activation functions, and layer 3 has no activation function.
3) The output of the output layer is Y = WL3·X2·WR3 + B3.
If the dimension of the hidden layer is M, WL3 is N×M, WR3 is M×1, and B3 is N×1.
To use the first-stage neural network for channel estimation and prediction, the network parameters need to be trained. During training, the training data X can come from a large number of SRS/CSI-RS channel estimates over L periods, and the matching target is the actual channel hL+1 of the SRS/CSI-RS slot in the (L+1)-th period. The training optimization objective (the cost function) is to minimize the normalized mean square error between the network output Y and the actual channel hL+1, i.e., cost = ||Y - hL+1||² / ||hL+1||².
There are two kinds of training for the neural network in the first-stage channel prediction. One is offline training, in which a large amount of offline data is used to train the network parameters; these parameters serve as the initial parameters of the network when it performs channel prediction in a real system. During actual operation, when the (L+1)-th period arrives and the channel value hL+1 has been obtained from the SRS/CSI-RS of that period, the network parameters are trained online (or fine-tuned), making the neural network model better adapted to the current environment.
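The layer structure above can be sketched numerically; this is a minimal illustration under the stated shapes (input X is N×L, each layer computes WL·X·WR + B with ReLU on layers 1 and 2), with random placeholder weights rather than trained parameters, and with illustrative values N=4, L=8, M=16:

```python
import numpy as np

# Sketch of the 3-layer first-stage prediction network: input layer and hidden
# layer apply ReLU(WL @ X @ WR + B); the output layer has no activation and
# yields the N x 1 predicted channel vector of the next pilot slot.

def relu(x):
    return np.maximum(x, 0.0)

def stage1_forward(X, params):
    WL1, WR1, B1, WL2, WR2, B2, WL3, WR3, B3 = params
    X1 = relu(WL1 @ X @ WR1 + B1)    # input layer output:  M x M
    X2 = relu(WL2 @ X1 @ WR2 + B2)   # hidden layer output: M x M
    return WL3 @ X2 @ WR3 + B3       # output layer:        N x 1

N, L, M = 4, 8, 16
rng = np.random.default_rng(0)
params = (
    rng.standard_normal((M, N)), rng.standard_normal((L, M)), rng.standard_normal((M, M)),  # layer 1
    rng.standard_normal((M, M)), rng.standard_normal((M, M)), rng.standard_normal((M, M)),  # layer 2
    rng.standard_normal((N, M)), rng.standard_normal((M, 1)), rng.standard_normal((N, 1)),  # layer 3
)
h_pred = stage1_forward(rng.standard_normal((N, L)), params)  # predicted (L+1)-th channel
```

The shapes follow the text exactly: WL1 is M×N, WR1 is L×M, and the output WR3 of M×1 collapses the L periods into a single predicted slot.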
The second-stage channel prediction uses the channel values estimated in the slots of the previous L SRS/CSI-RS transmission periods, together with the channel of the SRS/CSI-RS slot of the (L+1)-th period predicted one period in advance by the first neural network, to predict the channels of the K-1 slots without SRS/CSI-RS between the SRS/CSI-RS slot of the L-th period and that of the (L+1)-th period. The second-stage channel prediction is implemented by a second-stage neural network; the channel time relationship of the second-stage prediction is shown in FIG. 3c.
The second-stage channel prediction is also implemented by a neural network, whose input X consists of the channel vectors estimated in the SRS/CSI-RS slots of L transmission periods plus the channel of the SRS/CSI-RS slot of the (L+1)-th period predicted in advance by the first neural network, where X can be expressed as:
X = [h1 h2 ... hL hL+1]
where hi = [h1,i h2,i ... hN,i]T is the channel vector from the N massive-MIMO antennas to the receiver in the i-th SRS/CSI-RS transmission period, and hn,i denotes the channel from the n-th antenna to the receiver during the slot of the i-th period; X is therefore an N×(L+1) matrix.
The output Y of the neural network comprises the channels of the K-1 slots without SRS/CSI-RS between the SRS/CSI-RS slot of the L-th period and that of the (L+1)-th period, so Y is an N×(K-1) matrix. To express all channel information with the symbol h, Y can be written as [h(1) h(2) ... h(K-1)].
本实施例实现第二阶段信道预测的神经网络由一个输入层、一个输出层和若干个隐藏层构成。以一个隐藏层为例,第二阶段预测神经网络的结构仍如图3b所示。
其中,实现第二阶段信道预测的神经网络每层基本层结构:对于输入信号或上一层来的数据,右乘一个矩阵WR,并且左乘一个矩阵WL,然后加上一个偏移量矩阵B。除输出层外,其他各层在加了偏移量矩阵后,要经过一个激活函数,激活函数可以选择ReLU函数,也可选用别的函数作为激活函数。具体而言:
1)输入层的输出X1=ReLU(WL1XWR1+B1)
如果下一隐藏层的维度为M,WL1的维度为M×N,WR1的维度为(L+1)×M,B1的维度为M×M。
2)隐藏层的输出X2=ReLU(WL2X1WR2+B2)
如果隐藏层的维度为M,WL2的维度为M×M,WR2的维度为M×M,B2的维度为M×M。
神经网络设定为3层,1、2层有激活函数,第3层没有激活函数。
3)输出层的输出Y=WL3X2WR3+B3
如果隐藏层的维度为M,WL3的维度为N×M,WR3的维度为M×(K-1),B3的维度为N×(K-1)。
为了使用第二阶段的神经网络进行信道估计和预测,网络参数需要进行训练。训练时,训练数据中的X来自大量的L+1个SRS/CSI-RS周期的信道,匹配的目标为第L个周期和第L+1个周期之间K-1个不发送SRS/CSI-RS的slot的实际信道[h(1) h(2) ... h(K-1)]。训练优化的目标(代价函数cost function)为神经网络的输出Y和实际信道[h(1) h(2) ... h(K-1)]之间的归一化均方误差最小,即:
cost=||Y-[h(1) h(2) ... h(K-1)]||F²/||[h(1) h(2) ... h(K-1)]||F²
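上述两个阶段的归一化均方误差代价函数,可以用如下示意代码统一表示(numpy示意实现,仅用于说明代价函数的计算方式;函数名nmse为本示例引入的假设名称):

```python
import numpy as np

def nmse(Y_pred, H_true):
    """归一化均方误差:||Y-H||²/||H||²。
    第一阶段H为N×1向量,第二阶段H为N×(K-1)矩阵(取Frobenius范数),计算方式相同。"""
    err = np.linalg.norm(Y_pred - H_true) ** 2
    return err / (np.linalg.norm(H_true) ** 2)

H = np.array([[1.0, 2.0], [3.0, 4.0]])
print(nmse(H, H))                 # 0.0,预测完全准确
print(nmse(np.zeros_like(H), H))  # 1.0,预测全零
```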
第二阶段的神经网络参数训练好了以后,就可以联合使用第一阶段用于实现信道预测的神经网络和第二阶段用于实现信道预测的神经网络,完成信道预测的任务。
利用上述训练得到的两个神经网络进行信道预测时,在当前发送SRS/CSI-RS的slot收到SRS/CSI-RS导频信号后,进行信道估计,得到当前slot的信道信息。然后将当前slot估计得到的信道信息和前L-1个发送SRS/CSI-RS信号的slot的估计的信道值(共L个发送SRS/CSI-RS信号的slot的信道值),输入到第一阶段用于实现信道预测的神经网络,该神经网络输出(也即预测)一个周期后,下一个发送SRS/CSI-RS信号的slot的信道。进一步地,将L个发送SRS/CSI-RS信号的slot的信道值和第一阶段用于实现信道预测的神经网络输出的未来下一个发送SRS/CSI-RS信号的slot的信道(共L+1个发送SRS/CSI-RS信号的slot的信道值),输入到第二阶段用于实现信道预测的神经网络,该第二阶段的神经网络输出当前发送SRS/CSI-RS的slot和下一次发送SRS/CSI-RS的slot之间的K-1个不发送SRS/CSI-RS的slot的信道。这样,也就通过两个神经网络完成了信道预测。
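上述两阶段联合预测的数据流,可以用如下示意代码概括(其中stage1、stage2为占位的预测函数,此处用简单外推代替真实的神经网络,仅用于演示两个阶段之间输入输出的拼接关系,并非实际实现):

```python
import numpy as np

def two_stage_predict(h_hist, stage1, stage2):
    """h_hist为N×L的历史信道矩阵;stage1/stage2为两个已训练的预测函数。
    返回下一SRS/CSI-RS周期的信道预测值(N×1)和K-1个中间slot的信道预测(N×(K-1))。"""
    h_next = stage1(h_hist)                        # 第一阶段:预测第L+1个周期的信道
    x2 = np.concatenate([h_hist, h_next], axis=1)  # 拼成N×(L+1)的第二阶段输入
    h_mid = stage2(x2)                             # 第二阶段:预测K-1个中间slot的信道
    return h_next, h_mid

# 用均值外推与末列重复作占位模型演示数据流(仅示意,非真实神经网络)
N, L, K = 4, 3, 5
stage1 = lambda X: X.mean(axis=1, keepdims=True)
stage2 = lambda X: np.repeat(X[:, -1:], K - 1, axis=1)
h_next, h_mid = two_stage_predict(np.ones((N, L)), stage1, stage2)
print(h_next.shape, h_mid.shape)  # (4, 1) (4, 4)
```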
另外,随着时间的推移,到L+1个周期发送SRS/CSI-RS的slot到来时,利用该slot的SRS/CSI-RS得到信道hL+1值后,对上述第一阶段用于实现信道预测的神经网络的参数进行在线训练(或微调fine-tuning),从而使得第一阶段的神经网络能够更加适配当前的信道环境,提高了第一阶段信道预测的性能和准确性,进而提高了整个两阶段的信道预测的准确性。
实施例二
本实施例场景为基于FDD的大规模MIMO系统,基站有N个天线。为了使基站获得下行大规模MIMO的信道信息,首先由基站发送导频信号CSI-RS,移动用户收到CSI-RS信号后,通过信道估计获得该slot的信道信息,再通过上行链路反馈信道信息。为了降低反馈开销,信道反馈不是在每个slot都发送的,而是每K个slot发送一次(周期为K),两次发送间隔K-1个slot。对于不进行信道反馈的K-1个时隙,由于没有反馈的信道信息,系统无法得到比较准确的信道信息。其中,K的值由基站通过RRC信令配置。
本实施例中,在收到当前信道反馈slot的信号后,获得该slot反馈的信道信息。对于当前信道反馈slot和下一次信道反馈slot之间的那些不进行信道反馈的slot,即信道反馈周期期间的slot,通过基于神经网络的两阶段信道预测的方法,获得这几个slot的信道信息。
具体地,第一阶段信道预测是利用前L个信道反馈slot反馈的信道值,通过第一个神经网络来提前预测出一个周期后,下一个信道反馈slot反馈的信道,即第L+1个信道反馈周期slot的信道。其中,利用第一个神经网络估计和预测的信道时间关系如图4a所示。
第一阶段信道预测是通过第一个神经网络来实现的,该神经网络的输入X为L个周期的信道反馈的slot估计的信道向量。其中,X可以表示为:
X=[h1 h2 ... hL]
其中,hi=[h1,i h2,i ... hN,i]T为第i个信道反馈周期slot反馈的大规模MIMO的N个天线到接收端的信道向量。这里hn,i表示第i个周期slot期间第n个天线到接收端的信道,进而X可以表示为如下N×L矩阵:
X=
[h1,1 h1,2 ... h1,L]
[h2,1 h2,2 ... h2,L]
[ ...  ...  ... ... ]
[hN,1 hN,2 ... hN,L]
第一个神经网络的输出Y为预测的第L+1个信道反馈周期slot的N个天线到接收端的信道的预测值,因此Y为N×1的向量,可以表示为:
Y=ĥL+1=[ĥ1,L+1 ĥ2,L+1 ... ĥN,L+1]T
本实施例中实现第一阶段信道预测的神经网络由一个输入层、一个输出层和若干个隐藏层构成。以一个隐藏层为例,第一阶段预测神经网络的结构如图3b所示。
其中,第一阶段预测神经网络每层基本层结构:对于输入信号或上一层来的数据,右乘一个矩阵WR,并且左乘一个矩阵WL,然后加上一个偏移量矩阵B。除输出层外,其他各层在加了偏移量矩阵后,要经过一个激活函数,激活函数可以选择ReLU函数,也可选用别的函数作为激活函数。具体而言:
1)输入层的输出X1=ReLU(WL1XWR1+B1)
如果下一隐藏层的维度为M,WL1的维度为M×N,WR1的维度为L×M,B1的维度为M×M。
2)隐藏层的输出X2=ReLU(WL2X1WR2+B2)
如果隐藏层的维度为M,WL2的维度为M×M,WR2的维度为M×M,B2的维度为M×M。
神经网络设定为3层,1、2层有激活函数,第3层没有激活函数。
3)输出层的输出Y=WL3X2WR3+B3
如果隐藏层的维度为M,WL3的维度为N×M,WR3的维度为M×1,B3的维度为N×1。
为了使用第一阶段的神经网络进行信道估计和预测,网络参数需要进行训练。训练时,训练数据中的X来自大量的L个周期信道反馈所反馈的信道,匹配的目标为第L+1个周期信道反馈的实际信道hL+1。训练优化的目标(代价函数cost function)为神经网络的输出Y和实际信道hL+1之间的归一化均方误差最小,即:
cost=||Y-hL+1||²/||hL+1||²
第一阶段信道预测中神经网络的训练有两种方式。一种是离线训练,即用大量的离线数据训练网络参数,该参数作为该神经网络在实际系统进行信道预测时的初始参数。实际工作时,待第L+1个周期到来时,利用该周期反馈的信道hL+1值,对该网络的参数进行在线训练(或微调fine-tuning),使得该神经网络模型更加适配当前的环境。
第二阶段信道预测是利用之前L个信道反馈slot的信道值,以及通过第一个神经网络提前预测出的第L+1个周期反馈的信道值,来预测出第L个周期信道反馈slot和第L+1个周期信道反馈slot之间的K-1个不发送信道反馈的slot的信道。第二阶段信道预测是利用第二个神经网络实现的,第二阶段信道预测的信道时间关系如图4b所示。
第二阶段信道预测也是通过一个神经网络来实现的,该神经网络的输入X为L个周期的信道反馈slot的信道向量,加上通过第一个神经网络提前预测出的第L+1个周期信道反馈slot的信道ĥL+1。其中,X可以表示为:
X=[h1 h2 ... hL ĥL+1]
其中,hi=[h1,i h2,i ... hN,i]T为第i个发送信道反馈的周期的大规模MIMO的N个天线到接收端的信道向量。这里hn,i表示第i个周期slot期间第n个天线到接收端的信道。进而,X可以表示为如下N×(L+1)矩阵:
X=
[h1,1 h1,2 ... h1,L ĥ1,L+1]
[h2,1 h2,2 ... h2,L ĥ2,L+1]
[ ...  ...  ... ...  ...   ]
[hN,1 hN,2 ... hN,L ĥN,L+1]
第二个神经网络的输出Y为第L个周期信道反馈slot和第L+1个周期信道反馈slot之间的K-1个无信道反馈的slot的信道,因此,Y为N×(K-1)矩阵,可以表示为:
Y=[ĥ(1) ĥ(2) ... ĥ(K-1)]
本实施例第二个神经网络由一个输入层、一个输出层和若干个隐藏层构成。以一个隐藏层为例,第二个神经网络的结构仍如图3b所示。
其中,第二个神经网络每层基本层结构:对于输入信号或上一层来的数据,右乘一个矩阵WR,并且左乘一个矩阵WL,然后加上一个偏移量矩阵B。除输出层外,其他各层在加了偏移量矩阵后,要经过一个激活函数,激活函数可以选择ReLU函数,也可选用别的函数作为激活函数。具体而言:
1)输入层的输出X1=ReLU(WL1XWR1+B1)
如果下一隐藏层的维度为M,WL1的维度为M×N,WR1的维度为(L+1)×M,B1的维度为M×M。
2)隐藏层的输出X2=ReLU(WL2X1WR2+B2)
如果隐藏层的维度为M,WL2的维度为M×M,WR2的维度为M×M,B2的维度为M×M。
神经网络设定为3层,1、2层有激活函数,第3层没有激活函数。
3)输出层的输出Y=WL3X2WR3+B3
如果隐藏层的维度为M,WL3的维度为N×M,WR3的维度为M×(K-1),B3的维度为N×(K-1)。
为了使用第二个神经网络进行信道估计和预测,网络参数需要进行训练。训练时,训练数据中的X来自大量的L+1个信道反馈周期的信道,匹配的目标为第L个周期和第L+1个周期之间K-1个不进行信道反馈的slot的实际信道[h(1) h(2) ... h(K-1)]。训练优化的目标(代价函数cost function)为神经网络的输出Y和实际信道[h(1) h(2) ... h(K-1)]之间的归一化均方误差最小,即:
cost=||Y-[h(1) h(2) ... h(K-1)]||F²/||[h(1) h(2) ... h(K-1)]||F²
第二个神经网络参数训练好了以后,就可以联合使用第一个神经网络和第二个神经网络,完成信道预测的任务。
利用上述两个神经网络进行信道预测时,在当前信道反馈的slot得到信道信息后,将当前slot的信道信息和前L-1个信道反馈slot的信道值(共L个信道反馈slot的信道值),输入到第一个神经网络。第一个神经网络输出(也即提前预测)一个周期后,下一个信道反馈slot的信道。接着将L个信道反馈slot的信道值和第一个神经网络输出的未来下一个信道反馈slot的信道(共L+1个信道反馈slot的信道值),输入到第二个神经网络。第二个神经网络输出当前信道反馈slot和未来下一次信道反馈slot之间的K-1个不进行信道反馈的slot的信道。这样,也即通过两个神经网络完成了信道预测。
另外,随着时间的推移,到L+1个周期信道反馈slot到来时,利用该slot的反馈得到信道hL+1值,对本实施例中第一个神经网络的参数进行在线训练(或微调fine-tuning),使得第一个神经网络更加适配当前的环境,从而提高了第一阶段信道预测的性能和准确性,
进而提高了整个两阶段信道预测的性能和准确性。
对于第一阶段的第一个神经网络,训练有两种方式。一种是离线训练,即用大量的离线数据训练网络参数,该参数作为该神经网络在实际系统进行信道预测时的初始参数。实际工作时,待第L+1个周期到来时,利用该周期反馈的信道hL+1值,对该网络的参数进行在线训练(或微调fine-tuning),使得该神经网络更加适配当前的环境。离线训练的网络参数具有普适性,能适应各种环境,但性能差一些;而在线训练的参数更适配当前的信道环境,但信道环境突发剧烈变化时,需要更长时间调整。为了更好地应对这种情况,移动用户除了周期性地反馈信道信息外,还需要反馈神经网络参数重启的指令(例如Initial_parameter_reset)。当基站收到该指令Initial_parameter_reset=1时,基站就会重新利用离线训练的神经网络来进行第一阶段信道预测;如果收到的Initial_parameter_reset=0,则继续对第一个神经网络进行在线训练来适配当前的信道环境。这样,移动用户可以通过Initial_parameter_reset指令的反馈,来应对信道环境的突然巨变。
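基站侧根据Initial_parameter_reset指令选择第一阶段网络参数的逻辑,可以用如下示意代码表示(函数名与参数名均为本示例引入的假设,并非本申请限定的实现):

```python
def select_stage1_params(initial_parameter_reset, offline_params, online_params):
    """根据移动用户反馈的Initial_parameter_reset指令选择第一阶段神经网络参数(示意)。
    reset=1:回退到离线训练得到的普适参数;reset=0:继续使用在线微调后的参数。"""
    return offline_params if initial_parameter_reset == 1 else online_params

print(select_stage1_params(1, "offline", "online"))  # offline
print(select_stage1_params(0, "offline", "online"))  # online
```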
为了更好地支持对实现第一阶段信道预测的第一个神经网络进行在线训练,移动用户除了周期性地反馈信道信息外,还需要额外反馈跟在线训练有关的指令信息。这个指令用来控制在线训练批次(batch)的大小。由于神经网络的训练是分批次进行的,批次大小(每个batch含有的样本数)会影响在线学习的性能。该指令的反馈可以有两种方式:
第一种是移动用户直接反馈批次的大小,即反馈Batch_size。Batch越大,在线训练的第一个神经网络调整得越慢,但是训练越稳定。移动用户可以根据信道估计的结果,判断信道场景的变化快慢,由此调整在线训练的Batch_size,并反馈给基站;
第二种是移动用户反馈进行在线训练的指令,例如反馈Finetuning_indicator。当基站收到Finetuning_indicator=1时,进行在线训练;如果收到的Finetuning_indicator=0,则不进行在线训练,仍沿用原来实现第一阶段信道预测的神经网络的参数进行预测。对于Batch size,两次在线学习之间未用的数据可以集中在一起作为在线学习的数据。即批次的大小就是两次在线学习之间新的数据量的大小。由于Batch size不能一直增大,因此基站需要用RRC配置一个最大的目标数值Batch_size_max。由于批次的大小不能超过Batch_size_max的值,移动用户可以根据Batch_size_max和信道环境变化的快慢合理规划Finetuning_indicator的反馈。某次在线训练时,批次大小为Batch_size_max和上次在线训练以后获得新数据的数量中的最小值。
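上述批次大小的取值规则(取Batch_size_max与上次在线训练后新积累数据量的最小值),可以用如下示意代码表示(函数名为本示例引入的假设):

```python
def next_batch_size(batch_size_max, new_samples_since_last_training):
    """某次在线训练的批次大小:取RRC配置的Batch_size_max
    与上次在线训练以后获得的新数据量两者中的最小值(示意)。"""
    return min(batch_size_max, new_samples_since_last_training)

print(next_batch_size(64, 100))  # 64:新数据充足时受上限约束
print(next_batch_size(64, 20))   # 20:新数据不足上限时全部使用
```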
请参照图5,图5是本申请实施例提供的另一种信道预测方法的流程图,如图5所示,所述方法包括以下步骤:
步骤501、第二设备接收第一设备发送的第一反馈信息,所述第一反馈信息基于第一信道预测信息得到,所述第一信道预测信息为所述第一设备通过第一目标神经网络预测得到;
步骤502、所述第二设备基于L个信道信息及所述第一反馈信息,通过第二目标神经网络预测第二信道,获得第二信道预测信息,L为大于或等于1的整数。
其中,所述L个信道信息为所述第二设备针对与所述第一信息对应的L个时隙估计的信道信息,所述第一信道为距离当前时刻最近的下一个与所述第一信息对应的第一时隙的信道,所述第二信道为当前时刻与所述第一信息对应的时隙和所述第一时隙之间的信道。
可选地,所述第一信息包括导频信号和信道反馈中的至少一种。
可选地,所述第一信息包括信道反馈,所述第二设备为网络侧设备;所述步骤502可以包括:
所述网络侧设备基于针对信道反馈的L个时隙估计的L个信道信息及所述第一反馈信息,通过第二目标神经网络预测所述第二信道,所述第二信道为当前发送所述信道反馈的时隙与下一个发送所述信道反馈的时隙之间的信道。
可选地,所述第二信道的数量为M个,M为大于或等于1的整数;所述步骤502之后,所述方法还包括:
所述第二设备向所述第一设备发送第一预测结果,所述第一预测结果包括所述第二信道的数量及每一个所述第二信道各自对应的时隙指示。
可选地,所述方法还包括:
所述第二设备向所述第一设备发送第一指令,所述第一指令用于指示针对所述第一目标神经网络重置网络参数或不重置网络参数。
可选地,所述方法还包括:
所述第二设备向所述第一设备发送第二指令,所述第二指令用于指示针对所述第一目标神经网络更新或不更新。
可选地,所述方法还包括:
所述第二设备接收所述第一设备发送的所述第一目标神经网络的训练样本的数量。
本申请实施例中,第二设备在接收到第一设备发送的第一反馈信息后,基于L个信道信息及所述第一反馈信息,通过第二目标神经网络预测出当前时刻发送第一信息的时隙与下一个发送第一信息的时隙之间不发送所述第一信息的时隙的信道。这样,也就能够通过神经网络预测得到周期性发送第一信息的时隙之间不发送所述第一信息的时隙的信道信息,有效提升了通信系统信道估计和预测的准确性,从而有助于提升系统性能。
需要说明地,本申请实施例为应用于第二设备的信道预测方法,与上述应用于第一设备的信道预测方法对应,本申请实施例的具体实现过程及相关概念可以参照上述图2所述实施例中的描述,此处不再赘述。
本申请实施例提供的信道预测方法,执行主体可以为信道预测装置。本申请实施例中以信道预测装置执行信道预测方法为例,说明本申请实施例提供的信道预测装置。
请参照图6,图6是本申请实施例提供的一种信道预测装置的结构图,如图6所示,信道预测装置600包括:
获取模块601,用于获取针对N个时隙估计的N个信道信息,所述N个时隙为与第一信息对应的时隙,N为大于或等于1的整数;
第一预测模块602,用于基于所述N个信道信息通过第一目标神经网络预测第一信道,获得第一信道预测信息;
其中,所述第一信道为距离当前时刻最近的下一个与所述第一信息对应的第一时隙的信道,所述第一信道预测信息用于第二目标神经网络预测当前与所述第一信息对应的时隙和所述第一时隙之间的第二信道。
可选地,所述装置还包括:
第二预测模块,用于基于K个信道信息及所述第一信道预测信息,通过第二目标神经网络预测所述第二信道,获得第二信道预测信息,所述K个信道信息为所述装置针对与所述第一信息对应的K个时隙估计的信道信息,K为大于或等于1的整数。
可选地,所述装置还包括:
第一发送模块,用于向第二设备发送第一反馈信息,所述第二设备用于基于L个信道信息及所述第一反馈信息,通过第二目标神经网络预测所述第二信道,获得第二信道预测信息,所述第一反馈信息基于所述第一信道预测信息得到,所述L个信道信息为所述第二设备针对与所述第一信息对应的L个时隙估计的信道信息,L为大于或等于1的整数。
可选地,所述第一信息包括导频信号和信道反馈中的至少一种。
可选地,在所述第一信息仅包括导频信号的情况下,所述装置通过所述第二目标神经网络预测所述第二信道。
可选地,在所述导频信号为CSI-RS的情况下,所述装置为终端;
在所述导频信号为SRS的情况下,所述装置为网络侧设备。
可选地,在所述第一信息仅包括信道反馈的情况下,所述装置通过所述第二目标神经网络预测所述第二信道,所述装置为网络侧设备。
可选地,所述获取模块601还用于:
获取针对与CSI-RS相关的N个时隙估计的N个信道信息;
所述第一预测模块602还用于:
向网络侧设备发送第一反馈信息,所述网络侧设备用于基于针对信道反馈的L个时隙估计的L个信道信息及所述第一反馈信息,通过第二目标神经网络预测所述第二信道,所述第二信道为当前发送所述信道反馈的时隙与下一个发送所述信道反馈的时隙之间的信道。
可选地,所述第二信道的数量为M个,M为大于或等于1的整数;所述装置还包括:
第二发送模块,用于向所述第二设备发送第一预测结果,所述第一预测结果包括所述第二信道的数量及每一个所述第二信道各自对应的时隙指示。
可选地,所述装置还包括:
第一接收模块,用于接收第一指令;
所述第一预测模块602还用于:
在所述第一指令用于指示不重置网络参数的情况下,基于所述N个信道信息通过第一神经网络预测第一信道;
在所述第一指令用于指示重置网络参数的情况下,基于所述N个信道信息通过第二神经网络预测第一信道,所述第二神经网络为基于默认的信道信息训练得到的神经网络。
可选地,所述装置还包括执行模块,用于:
获取第N+1个时隙的信道信息,所述第N+1个时隙为距离所述当前时刻最近的下一个实际发送所述第一信息的时隙;
在接收到第二指令(所述第二指令用于指示更新所述第一目标神经网络)的情况下,执行第一操作,所述第一操作包括:
基于第一训练样本集对所述第一目标神经网络进行训练,训练后的所述第一目标神经网络用于预测所述第N+1个时隙之后发送所述第一信息的时隙的信道,所述第一训练样本集包括所述第N+1个时隙的信道信息。
可选地,所述装置还包括:
第三发送模块,用于向第二设备发送所述第一目标神经网络的训练样本的数量。
可选地,所述装置还包括:
配置模块,用于配置目标数值,所述目标数值为所述第一目标神经网络的训练样本的数量的最大值。
可选地,在执行第一操作的情况下,输入所述第一目标神经网络的训练样本的数量为所述目标数值和所述第一训练样本集中训练样本数量的最小值。
本申请实施例中,通过两个神经网络实现了两个阶段的信道预测,进而能够预测得到周期性发送第一信息的时隙之间不发送所述第一信息的时隙的信道信息,有效提升了通信系统信道估计和预测的准确性,从而有助于提升系统性能。
本申请实施例中的信道预测装置可以是电子设备,例如具有操作系统的电子设备,也可以是电子设备中的部件,例如集成电路或芯片。该电子设备可以是终端,也可以为除终端之外的其他设备。示例性的,终端可以包括但不限于上述所列举的终端11的类型,其他设备可以为服务器、网络附属存储器(Network Attached Storage,NAS)等,本申请实施例不作具体限定。
本申请实施例提供的信道预测装置能够实现图2所述方法实施例实现的各个过程,并达到相同的技术效果,为避免重复,这里不再赘述。
请参照图7,图7是本申请实施例提供的另一种信道预测装置的结构图,如图7所示,信道预测装置700包括:
接收模块701,用于接收第一设备发送的第一反馈信息,所述第一反馈信息基于第一信道预测信息得到,所述第一信道预测信息为所述第一设备通过第一目标神经网络预测得到;
第二预测模块702,用于基于L个信道信息及所述第一反馈信息,通过第二目标神经网络预测第二信道,获得第二信道预测信息,L为大于或等于1的整数;
其中,所述L个信道信息为所述装置针对与所述第一信息对应的L个时隙估计的信道信息,所述第一信道为距离当前时刻最近的下一个与所述第一信息对应的第一时隙的信道,所述第二信道为当前时刻与所述第一信息对应的时隙和所述第一时隙之间的信道。
可选地,所述第一信息包括导频信号和信道反馈中的至少一种。
可选地,所述第一信息包括信道反馈,所述装置为网络侧设备;所述第二预测模块702还用于:
基于针对信道反馈的L个时隙估计的L个信道信息及所述第一反馈信息,通过第二目标神经网络预测所述第二信道,所述第二信道为当前发送所述信道反馈的时隙与下一个发送所述信道反馈的时隙之间的信道。
可选地,所述第二信道的数量为M个,M为大于或等于1的整数;所述装置还包括:
第一发送模块,用于向所述第一设备发送第一预测结果,所述第一预测结果包括所述第二信道的数量及每一个所述第二信道各自对应的时隙指示。
可选地,所述装置还包括:
第二发送模块,用于向所述第一设备发送第一指令,所述第一指令用于指示针对所述第一目标神经网络重置网络参数或不重置网络参数。
可选地,所述装置还包括:
第三发送模块,用于向所述第一设备发送第二指令,所述第二指令用于指示针对所述第一目标神经网络更新或不更新。
可选地,所述接收模块701还用于:
接收所述第一设备发送的所述第一目标神经网络的训练样本的数量。
本申请实施例提供的信道预测装置能够实现图5所述方法实施例实现的各个过程,并达到相同的技术效果,为避免重复,这里不再赘述。
可选的,如图8所示,本申请实施例还提供一种通信设备800,包括处理器801和存储器802,存储器802上存储有可在所述处理器801上运行的程序或指令,该程序或指令被处理器801执行时实现上述图2或图5所述方法实施例的各个步骤,且能达到相同的技术效果,为避免重复,这里不再赘述。
本申请实施例还提供一种终端,该终端实施例与上述方法实施例对应,上述方法实施例的各个实施过程和实现方式均可适用于该终端实施例中,且能达到相同的技术效果。具体地,图9为实现本申请实施例的一种终端的硬件结构示意图。
该终端900包括但不限于:射频单元901、网络模块902、音频输出单元903、输入单元904、传感器905、显示单元906、用户输入单元907、接口单元908、存储器909以及处理器910等中的至少部分部件。
本领域技术人员可以理解,终端900还可以包括给各个部件供电的电源(比如电池),电源可以通过电源管理系统与处理器910逻辑相连,从而通过电源管理系统实现管理充电、放电、以及功耗管理等功能。图9中示出的终端结构并不构成对终端的限定,终端可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件布置,在此不再赘述。
应理解的是,本申请实施例中,输入单元904可以包括图形处理单元(Graphics Processing Unit,GPU)9041和麦克风9042,图形处理器9041对在视频捕获模式或图像捕获模式中由图像捕获装置(如摄像头)获得的静态图片或视频的图像数据进行处理。显示单元906可包括显示面板9061,可以采用液晶显示器、有机发光二极管等形式来配置显示面板9061。用户输入单元907包括触控面板9071以及其他输入设备9072中的至少一种。触控面板9071,也称为触摸屏。触控面板9071可包括触摸检测装置和触摸控制器两个部分。其他输入设备9072可以包括但不限于物理键盘、功能键(比如音量控制按键、开关按键等)、轨迹球、鼠标、操作杆,在此不再赘述。
本申请实施例中,射频单元901接收来自网络侧设备的下行数据后,可以传输给处理器910进行处理;另外,射频单元901可以向网络侧设备发送上行数据。通常,射频单元901包括但不限于天线、放大器、收发信机、耦合器、低噪声放大器、双工器等。
存储器909可用于存储软件程序或指令以及各种数据。存储器909可主要包括存储程序或指令的第一存储区和存储数据的第二存储区,其中,第一存储区可存储操作系统、至少一个功能所需的应用程序或指令(比如声音播放功能、图像播放功能等)等。此外,存储器909可以包括易失性存储器或非易失性存储器,或者,存储器909可以包括易失性和非易失性存储器两者。其中,非易失性存储器可以是只读存储器(Read-Only Memory,ROM)、可编程只读存储器(Programmable ROM,PROM)、可擦除可编程只读存储器(Erasable PROM,EPROM)、电可擦除可编程只读存储器(Electrically EPROM,EEPROM)或闪存。易失性存储器可以是随机存取存储器(Random Access Memory,RAM),静态随机存取存储器(Static RAM,SRAM)、动态随机存取存储器(Dynamic RAM,DRAM)、同步动态随机存取存储器(Synchronous DRAM,SDRAM)、双倍数据速率同步动态随机存取存储器(Double Data Rate SDRAM,DDRSDRAM)、增强型同步动态随机存取存储器(Enhanced SDRAM,ESDRAM)、同步连接动态随机存取存储器(Synch link DRAM,SLDRAM)和直接内存总线随机存取存储器(Direct Rambus RAM,DRRAM)。本申请实施例中的存储器909包括但不限于这些和任意其它适合类型的存储器。
处理器910可包括一个或多个处理单元;可选的,处理器910集成应用处理器和调制解调处理器,其中,应用处理器主要处理涉及操作系统、用户界面和应用程序等的操作,调制解调处理器主要处理无线通信信号,如基带处理器。可以理解的是,上述调制解调处理器也可以不集成到处理器910中。
在一种实施方式中,所述终端900为第一设备,处理器910用于获取针对N个时隙估计的N个信道信息,所述N个时隙为与第一信息对应的时隙,N为大于或等于1的整数;
以及用于基于所述N个信道信息通过第一目标神经网络预测第一信道,获得第一信道预测信息;
其中,所述第一信道为距离当前时刻最近的下一个与所述第一信息对应的第一时隙的信道,所述第一信道预测信息用于第二目标神经网络预测当前与所述第一信息对应的时隙和所述第一时隙之间的第二信道。
在另一种实施方式中,所述终端900为第二设备,射频单元901用于接收第一设备发送的第一反馈信息,所述第一反馈信息基于第一信道预测信息得到,所述第一信道预测信息为所述第一设备通过第一目标神经网络预测得到;
处理器910用于基于L个信道信息及所述第一反馈信息,通过第二目标神经网络预测第二信道,获得第二信道预测信息,L为大于或等于1的整数;
其中,所述L个信道信息为所述终端针对与所述第一信息对应的L个时隙估计的信道信息,所述第一信道为距离当前时刻最近的下一个与所述第一信息对应的第一时隙的信道,所述第二信道为当前时刻与所述第一信息对应的时隙和所述第一时隙之间的信道。
需要说明地,本申请实施例提供的终端900能够实现上述图2或图5所述方法实施例的全部技术过程,并能达到相同的技术效果,为避免重复,故不在此赘述。
本申请实施例还提供一种网络侧设备,该网络侧设备实施例与上述图2或图5所述方法实施例对应,上述方法实施例的各个实施过程和实现方式均可适用于该网络侧设备实施例中,且能达到相同的技术效果。
具体地,本申请实施例还提供了一种网络侧设备。如图10所示,该网络侧设备1000包括:天线1001、射频装置1002、基带装置1003、处理器1004和存储器1005。天线1001与射频装置1002连接。在上行方向上,射频装置1002通过天线1001接收信息,将接收的信息发送给基带装置1003进行处理。在下行方向上,基带装置1003对要发送的信息进行处理,并发送给射频装置1002,射频装置1002对收到的信息进行处理后经过天线1001发送出去。
以上实施例中网络侧设备执行的方法可以在基带装置1003中实现,该基带装置1003包括基带处理器。
基带装置1003例如可以包括至少一个基带板,该基带板上设置有多个芯片,如图10所示,其中一个芯片例如为基带处理器,通过总线接口与存储器1005连接,以调用存储器1005中的程序,执行以上方法实施例中所示的网络设备操作。
该网络侧设备还可以包括网络接口1006,该接口例如为通用公共无线接口(common public radio interface,CPRI)。
具体地,本发明实施例的网络侧设备1000还包括:存储在存储器1005上并可在处理器1004上运行的指令或程序,处理器1004调用存储器1005中的指令或程序执行图6或图7所示各模块执行的方法,并达到相同的技术效果,为避免重复,故不在此赘述。
本申请实施例还提供一种可读存储介质,所述可读存储介质上存储有程序或指令,该程序或指令被处理器执行时实现上述图2或图5所述方法实施例的各个过程,且能达到相同的技术效果,为避免重复,这里不再赘述。
其中,所述处理器为上述实施例中所述的终端中的处理器。所述可读存储介质,包括计算机可读存储介质,如计算机只读存储器ROM、随机存取存储器RAM、磁碟或者光盘等。
本申请实施例另提供了一种芯片,所述芯片包括处理器和通信接口,所述通信接口和所述处理器耦合,所述处理器用于运行程序或指令,实现上述图2或图5所述方法实施例的各个过程,且能达到相同的技术效果,为避免重复,这里不再赘述。
应理解,本申请实施例提到的芯片还可以称为系统级芯片,系统芯片,芯片系统或片上系统芯片等。
本申请实施例另提供了一种计算机程序/程序产品,所述计算机程序/程序产品被存储在存储介质中,所述计算机程序/程序产品被至少一个处理器执行以实现上述图2或图5所述方法实施例的各个过程,且能达到相同的技术效果,为避免重复,这里不再赘述。
需要说明的是,在本文中,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者装置不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者装置所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括该要素的过程、方法、物品或者装置中还存在另外的相同要素。此外,需要指出的是,本申请实施方式中的方法和装置的范围不限按示出或讨论的顺序来执行功能,还可包括根据所涉及的功能按基本同时的方式或按相反的顺序来执行功能,例如,可以按不同于所描述的次序来执行所描述的方法,并且还可以添加、省去、或组合各种步骤。另外,参照某些示例所描述的特征可在其他示例中被组合。
通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到上述实施例方法可借助软件加必需的通用硬件平台的方式来实现,当然也可以通过硬件,但很多情况下前者是更佳的实施方式。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分可以以计算机软件产品的形式体现出来,该计算机软件产品存储在一个存储介质(如ROM/RAM、磁碟、光盘)中,包括若干指令用以使得一台终端(可以是手机,计算机,服务器,空调器,或者网络设备等)执行本申请各个实施例所述的方法。
上面结合附图对本申请的实施例进行了描述,但是本申请并不局限于上述的具体实施方式,上述的具体实施方式仅仅是示意性的,而不是限制性的,本领域的普通技术人员在本申请的启示下,在不脱离本申请宗旨和权利要求所保护的范围情况下,还可做出很多形式,均属于本申请的保护之内。
Claims (24)
- 一种信道预测方法,包括:第一设备获取针对N个时隙估计的N个信道信息,所述N个时隙为与第一信息对应的时隙,N为大于或等于1的整数;所述第一设备基于所述N个信道信息通过第一目标神经网络预测第一信道,获得第一信道预测信息;其中,所述第一信道为距离当前时刻最近的下一个与所述第一信息对应的第一时隙的信道,所述第一信道预测信息用于第二目标神经网络预测当前与所述第一信息对应的时隙和所述第一时隙之间的第二信道。
- 根据权利要求1所述的方法,其中,所述方法还包括:所述第一设备基于K个信道信息及所述第一信道预测信息,通过第二目标神经网络预测所述第二信道,获得第二信道预测信息,所述K个信道信息为所述第一设备针对与所述第一信息对应的K个时隙估计的信道信息,K为大于或等于1的整数;或者,所述第一设备向第二设备发送第一反馈信息,所述第二设备用于基于L个信道信息及所述第一反馈信息,通过第二目标神经网络预测所述第二信道,获得第二信道预测信息,所述第一反馈信息基于所述第一信道预测信息得到,所述L个信道信息为所述第二设备针对与所述第一信息对应的L个时隙估计的信道信息,L为大于或等于1的整数。
- 根据权利要求2所述的方法,其中,所述第一信息包括导频信号和信道反馈中的至少一种。
- 根据权利要求3所述的方法,其中,在所述第一信息仅包括导频信号的情况下,所述第一设备通过所述第二目标神经网络预测所述第二信道。
- 根据权利要求4所述的方法,其中,在所述导频信号为信道状态信息参考信号CSI-RS的情况下,所述第一设备为终端;在所述导频信号为探测参考信号SRS的情况下,所述第一设备为网络侧设备。
- 根据权利要求3所述的方法,其中,在所述第一信息仅包括信道反馈的情况下,所述第一设备通过所述第二目标神经网络预测所述第二信道,所述第一设备为网络侧设备。
- 根据权利要求2所述的方法,其中,所述第一设备获取针对N个时隙估计的N个信道信息,包括:终端获取针对与CSI-RS相关的N个时隙估计的N个信道信息;所述第一设备向第二设备发送第一反馈信息,所述第二设备用于基于L个信道信息及所述第一反馈信息,通过第二目标神经网络预测所述第二信道,包括:所述终端向网络侧设备发送第一反馈信息,所述网络侧设备用于基于针对信道反馈的L个时隙估计的L个信道信息及所述第一反馈信息,通过第二目标神经网络预测所述第二信道,所述第二信道为当前发送所述信道反馈的时隙与下一个发送所述信道反馈的时隙之间的信道。
- 根据权利要求2所述的方法,其中,所述第二信道的数量为M个,M为大于或等于1的整数;所述第一设备基于K个信道信息及所述第一信道预测信息,通过第二目标神经网络预测所述第二信道之后,所述方法还包括:所述第一设备向所述第二设备发送第一预测结果,所述第一预测结果包括所述第二信道的数量及每一个所述第二信道各自对应的时隙指示。
- 根据权利要求1所述的方法,其中,所述第一设备基于所述N个信道信息通过第一目标神经网络预测第一信道之前,所述方法还包括:所述第一设备接收第一指令;所述第一设备基于所述N个信道信息通过第一目标神经网络预测第一信道,包括:在所述第一指令用于指示不重置网络参数的情况下,所述第一设备基于所述N个信道信息通过第一神经网络预测第一信道;在所述第一指令用于指示重置网络参数的情况下,所述第一设备基于所述N个信道信息通过第二神经网络预测第一信道,所述第二神经网络为基于默认的信道信息训练得到的神经网络。
- 根据权利要求1所述的方法,其中,所述方法还包括:所述第一设备获取第N+1个时隙的信道信息,所述第N+1个时隙为距离所述当前时刻最近的下一个实际发送所述第一信息的时隙;在所述第一设备接收到第二指令,所述第二指令用于指示更新所述第一目标神经网络的情况下,所述第一设备执行第一操作,所述第一操作包括:基于第一训练样本集对所述第一目标神经网络进行训练,训练后的所述第一目标神经网络用于预测所述第N+1个时隙之后发送所述第一信息的时隙的信道,所述第一训练样本集包括所述第N+1个时隙的信道信息。
- 根据权利要求10所述的方法,其中,所述第一设备执行第一操作之后,所述方法还包括:所述第一设备向第二设备发送所述第一目标神经网络的训练样本的数量。
- 根据权利要求11所述的方法,其中,所述方法还包括:所述第一设备配置目标数值,所述目标数值为所述第一目标神经网络的训练样本的数量的最大值。
- 根据权利要求12所述的方法,其中,在所述第一设备执行第一操作的情况下,输入所述第一目标神经网络的训练样本的数量为所述目标数值和所述第一训练样本集中训练样本数量的最小值。
- 一种信道预测方法,包括:第二设备接收第一设备发送的第一反馈信息,所述第一反馈信息基于第一信道预测信息得到,所述第一信道预测信息为所述第一设备通过第一目标神经网络预测得到;所述第二设备基于L个信道信息及所述第一反馈信息,通过第二目标神经网络预测第二信道,获得第二信道预测信息,L为大于或等于1的整数;其中,所述L个信道信息为所述第二设备针对与第一信息对应的L个时隙估计的信道信息,所述第一信道为距离当前时刻最近的下一个与所述第一信息对应的第一时隙的信道,所述第二信道为当前时刻与所述第一信息对应的时隙和所述第一时隙之间的信道。
- 根据权利要求14所述的方法,其中,所述第一信息包括导频信号和信道反馈中的至少一种。
- 根据权利要求15所述的方法,其中,所述第一信息包括信道反馈,所述第二设备为网络侧设备;所述第二设备基于L个信道信息及所述第一反馈信息,通过第二目标神经网络预测第二信道,包括:所述网络侧设备基于针对信道反馈的L个时隙估计的L个信道信息及所述第一反馈信息,通过第二目标神经网络预测所述第二信道,所述第二信道为当前发送所述信道反馈的时隙与下一个发送所述信道反馈的时隙之间的信道。
- 根据权利要求14所述的方法,其中,所述第二信道的数量为M个,M为大于或等于1的整数;所述第二设备基于L个信道信息及所述第一反馈信息,通过第二目标神经网络预测第二信道之后,所述方法还包括:所述第二设备向所述第一设备发送第一预测结果,所述第一预测结果包括所述第二信道的数量及每一个所述第二信道各自对应的时隙指示。
- 根据权利要求14所述的方法,其中,所述方法还包括:所述第二设备向所述第一设备发送第一指令,所述第一指令用于指示针对所述第一目标神经网络重置网络参数或不重置网络参数。
- 根据权利要求14所述的方法,其中,所述方法还包括:所述第二设备向所述第一设备发送第二指令,所述第二指令用于指示针对所述第一目标神经网络更新或不更新。
- 根据权利要求19所述的方法,其中,所述方法还包括:所述第二设备接收所述第一设备发送的所述第一目标神经网络的训练样本的数量。
- 一种信道预测装置,包括:获取模块,用于获取针对N个时隙估计的N个信道信息,所述N个时隙为与第一信息对应的时隙,N为大于或等于1的整数;第一预测模块,用于基于所述N个信道信息通过第一目标神经网络预测第一信道,获得第一信道预测信息;其中,所述第一信道为距离当前时刻最近的下一个与所述第一信息对应的第一时隙的信道,所述第一信道预测信息用于第二目标神经网络预测当前与所述第一信息对应的时隙和所述第一时隙之间的第二信道。
- 一种信道预测装置,包括:接收模块,用于接收第一设备发送的第一反馈信息,所述第一反馈信息基于第一信道预测信息得到,所述第一信道预测信息为所述第一设备通过第一目标神经网络预测得到;第二预测模块,用于基于L个信道信息及所述第一反馈信息,通过第二目标神经网络预测第二信道,获得第二信道预测信息,L为大于或等于1的整数;其中,所述L个信道信息为所述装置针对与第一信息对应的L个时隙估计的信道信息,所述第一信道为距离当前时刻最近的下一个与所述第一信息对应的第一时隙的信道,所述第二信道为当前时刻与所述第一信息对应的时隙和所述第一时隙之间的信道。
- 一种通信设备,包括处理器和存储器,所述存储器存储可在所述处理器上运行的程序或指令,所述程序或指令被所述处理器执行时实现如权利要求1-13中任一项所述的信道预测方法的步骤,或者实现如权利要求14-20中任一项所述的信道预测方法的步骤。
- 一种可读存储介质,所述可读存储介质上存储程序或指令,所述程序或指令被处理器执行时实现如权利要求1-13中任一项所述的信道预测方法的步骤,或者实现如权利要求14-20中任一项所述的信道预测方法的步骤。
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211217546.3A CN117879737A (zh) | 2022-09-30 | 2022-09-30 | 信道预测方法、装置及通信设备 |
CN202211217546.3 | 2022-09-30 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2024067878A1 true WO2024067878A1 (zh) | 2024-04-04 |
Family
ID=90476428
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2023/123160 WO2024067878A1 (zh) | 2022-09-30 | 2023-10-07 | 信道预测方法、装置及通信设备 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN117879737A (zh) |
WO (1) | WO2024067878A1 (zh) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108429611A (zh) * | 2018-05-23 | 2018-08-21 | 东南大学 | 一种巨连接下的导频分配和信道估计方法 |
CN110661734A (zh) * | 2019-09-20 | 2020-01-07 | 西安交通大学 | 基于深度神经网络的信道估计方法、设备和可读存储介质 |
US20210351885A1 (en) * | 2019-04-16 | 2021-11-11 | Samsung Electronics Co., Ltd. | Method and apparatus for reporting channel state information |
CN114301742A (zh) * | 2021-12-23 | 2022-04-08 | 北京邮电大学 | 信道估计方法及装置 |
CN114422059A (zh) * | 2022-01-24 | 2022-04-29 | 清华大学 | 信道预测方法、装置、电子设备及存储介质 |
CN114826832A (zh) * | 2021-01-29 | 2022-07-29 | 华为技术有限公司 | 信道估计方法、神经网络的训练方法及装置、设备 |
CN114915523A (zh) * | 2022-07-19 | 2022-08-16 | 南昌大学 | 基于模型驱动的智能超表面信道估计方法及系统 |
Non-Patent Citations (1)
Title |
---|
QUALCOMM INCORPORATED: "Joint channel estimation for PUSCH", 3GPP DRAFT; R1-2107361, 3RD GENERATION PARTNERSHIP PROJECT (3GPP), MOBILE COMPETENCE CENTRE ; 650, ROUTE DES LUCIOLES ; F-06921 SOPHIA-ANTIPOLIS CEDEX ; FRANCE, vol. RAN WG1, no. e-Meeting; 20210816 - 20210827, 6 August 2021 (2021-08-06), Mobile Competence Centre ; 650, route des Lucioles ; F-06921 Sophia-Antipolis Cedex ; France , XP052038306 * |
Also Published As
Publication number | Publication date |
---|---|
CN117879737A (zh) | 2024-04-12 |