WO2020052394A1 - Channel prediction method and related device

Channel prediction method and related device

Info

Publication number
WO2020052394A1
Authority
WO
WIPO (PCT)
Prior art keywords
channel
change
value
sequence
channel coefficient
Prior art date
Application number
PCT/CN2019/100276
Other languages
English (en)
French (fr)
Inventor
皇甫幼睿
李榕
王坚
戴胜辰
王俊
余荣道
Original Assignee
华为技术有限公司
Priority date
Filing date
Publication date
Application filed by 华为技术有限公司
Priority to EP19859240.4A priority Critical patent/EP3836435B1/en
Publication of WO2020052394A1 publication Critical patent/WO2020052394A1/zh
Priority to US17/196,337 priority patent/US11424963B2/en


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L25/00 Baseband systems
    • H04L25/02 Details; arrangements for supplying electrical power along data transmission lines
    • H04L25/0202 Channel estimation
    • H04L25/0212 Channel estimation of impulse response
    • H04L25/0214 Channel estimation of impulse response of a single coefficient
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L25/00 Baseband systems
    • H04L25/02 Details; arrangements for supplying electrical power along data transmission lines
    • H04L25/0202 Channel estimation
    • H04L25/024 Channel estimation channel estimation algorithms
    • H04L25/0254 Channel estimation channel estimation algorithms using neural network algorithms
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04B TRANSMISSION
    • H04B17/00 Monitoring; Testing
    • H04B17/30 Monitoring; Testing of propagation channels
    • H04B17/373 Predicting channel quality or other radio frequency [RF] parameters
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04B TRANSMISSION
    • H04B17/00 Monitoring; Testing
    • H04B17/30 Monitoring; Testing of propagation channels
    • H04B17/391 Modelling the propagation channel
    • H04B17/3913 Predictive models, e.g. based on neural network models
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L25/00 Baseband systems
    • H04L25/02 Details; arrangements for supplying electrical power along data transmission lines
    • H04L25/0202 Channel estimation
    • H04L25/0204 Channel estimation of multiple channels
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L25/00 Baseband systems
    • H04L25/02 Details; arrangements for supplying electrical power along data transmission lines
    • H04L25/0202 Channel estimation
    • H04L25/024 Channel estimation channel estimation algorithms
    • H04L25/0242 Channel estimation channel estimation algorithms using matrix methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/044 Recurrent networks, e.g. Hopfield networks

Definitions

  • the present application relates to the technical field of wireless networks, and in particular, to a channel prediction method and related equipment.
  • a transmitter sends a signal
  • a receiver receives a signal sent by the transmitter.
  • the wireless link between the transmitter and the receiver is called a wireless channel.
  • the propagation path between the transmitter and the receiver is very complicated; for example, buildings, streets, and moving objects can cause reflection, refraction, and diffraction of signals.
  • the signal received by the receiver is a superimposed signal of all signals arriving through different transmission paths. Therefore, the wireless channel is an important factor affecting the wireless communication system.
  • channel estimation is the process of estimating the response of the wireless channel from the transmitting antenna to the receiving antenna: from the received sequence, whose amplitude and phase have been changed by the multipath channel and onto which white noise has been superimposed, the time-domain or frequency-domain characteristics of the channel are estimated.
  • the accuracy of the channel coefficients estimated by the existing channel estimation algorithms is low.
  • the embodiments of the present application provide a channel prediction method and related equipment, which can improve the accuracy of channel prediction.
  • an embodiment of the present application provides a channel prediction method, including: first obtaining a first channel coefficient sequence in a first period, where the first channel coefficient sequence includes multiple complex values of the channel coefficient; and then determining the predicted value of the channel coefficient in a second period according to the first channel coefficient sequence and a preset channel change dictionary.
  • the channel change dictionary includes a mapping relationship between each change amount of the channel coefficient and the value of the channel change.
  • the second period is later than the first period.
  • the channel change dictionary is used to realize the prediction of complex-valued channel coefficients, so that the original information of the channel coefficients is retained, and the amplitude and phase of the channel can be predicted simultaneously, which improves the accuracy of the predicted channel coefficients.
  • the first channel coefficient change sequence is first determined according to the first channel coefficient sequence, and the first channel coefficient change sequence includes multiple change amounts of the channel coefficient; then, the value of the channel change corresponding to each change amount in the first channel coefficient change sequence is looked up from the channel change dictionary, and a first channel change value sequence is generated; the first channel change value sequence is input into a channel prediction model and prediction is performed to obtain a prediction sequence of channel change values; finally, the predicted value of the channel coefficient is determined according to the prediction sequence of channel change values.
  • each change amount in the first channel coefficient change sequence is converted into a value of the channel change and input into the channel prediction model for prediction, thereby realizing the prediction of complex-valued channel coefficients and improving the accuracy of the predicted channel coefficients.
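  • The Python sketch below illustrates this overall flow end to end under assumed inputs; dict_a (change amount to channel change value, with an "unk" entry), dict_b (the inverse mapping), and model (a callable returning the predicted value sequence) are hypothetical placeholders rather than elements defined by this application.
      import numpy as np

      def predict_channel(first_coeffs, dict_a, dict_b, model, steps):
          # first channel coefficient change sequence: later value minus previous value
          changes = np.diff(np.asarray(first_coeffs))
          # first channel change value sequence: look each change amount up in dictionary A
          values = [dict_a.get(c, dict_a["unk"]) for c in changes]
          # prediction sequence of channel change values from the hypothetical channel prediction model
          predicted_values = model(values, steps)
          # convert back to change amounts via dictionary B
          predicted_changes = [dict_b[v] for v in predicted_values]
          # accumulate the changes onto the last known coefficient (consistent with "later minus previous")
          return first_coeffs[-1] + np.cumsum(predicted_changes)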
  • each change amount in the first channel coefficient change sequence may be obtained by subtracting the complex value of the channel coefficient at the previous time from the complex value of the channel coefficient at the later time in the first channel coefficient sequence (later minus previous), or by subtracting the complex value of the channel coefficient at the later time from the complex value of the channel coefficient at the previous time (previous minus later).
  • alternatively, each change amount in the first channel coefficient change sequence may be obtained by subtracting the complex value of the channel coefficient at the last moment from the complex value of each channel coefficient before the last moment in the first channel coefficient sequence, or by subtracting the complex value of the channel coefficient at the start time from the complex value of each channel coefficient after the start time.
  • a sequence of the amount of change of the channel coefficient corresponding to the predicted sequence of the value of the channel change is determined; and the predicted value of the channel coefficient is determined according to the sequence of the change of the channel coefficient.
  • the prediction sequence of the value of the channel change is converted into a sequence of the change amount of the channel coefficient, thereby obtaining the complex value of the prediction value of the channel coefficient.
  • the frequency of occurrence of each change amount in the channel coefficient change sequence can be counted; then, based on the frequency of occurrence, each change amount of the channel coefficient is assigned an integer to obtain the value of the channel change; finally, a mapping relationship between the value of the channel change and each change amount is established, and a channel change dictionary is generated.
  • the dictionary is composed of multiple key-value pairs, and the values corresponding to the keys can be obtained by searching the keys in the dictionary.
  • the channel change dictionary may include a channel change dictionary A and a channel change dictionary B.
  • the key in the channel change dictionary A is the change amount of the channel coefficient, and the value is the value of the channel change.
  • the key in the channel change dictionary B is the value of the channel change, and the value is the change amount of the channel coefficient.
  • the channel change dictionary A is used when converting a sequence of channel coefficient changes into a sequence of channel change values.
  • the channel change dictionary B is used when switching from a sequence of values of channel coefficient changes to a sequence of channel coefficient changes.
  • the channel coefficient conversion before the channel prediction is realized through the channel change dictionary A, and the channel coefficient conversion after the channel prediction is realized through the channel change dictionary B, so that the complex-valued channel coefficient prediction is realized.
  • a second channel coefficient sequence may be obtained first, and the second channel coefficient sequence includes multiple complex values of the channel coefficient. Secondly, according to the second channel coefficient sequence, a second channel coefficient change sequence is determined, and the second channel coefficient change sequence includes multiple changes in the channel coefficient. Then, a channel change value corresponding to each change amount in the second channel coefficient change sequence is looked up from the channel change dictionary, and a second channel change value sequence is generated. Finally, the second channel change value sequence is input to the neural network for training to obtain a channel prediction model. By inputting a large number of channel coefficient sequences into the neural network for training, the prediction accuracy of the channel prediction model is improved, thereby improving the accuracy of the channel prediction.
  • the size of the input dimension of the neural network and the size of the output dimension of the neural network are the same as the number of mapping relationships in the channel change dictionary
  • the probability of each predicted value output by the neural network can be obtained; then the gap between the complex values of every two change amounts in the channel change dictionary is determined, and a channel change gap matrix is generated; according to the channel change gap matrix and the probability of each predicted value, a weighted average of the probabilities of the predicted values is determined; finally, based on the weighted average, it is determined whether the neural network has been trained.
  • the redundant information related to the complex values in the channel change dictionary is added to the loss function for calculation; the loss function calculated with this redundant information replaces the original loss function after training has progressed to a certain degree, or is added to the original loss function with a gradually increasing weight as training progresses, in order to reduce interference in the real environment.
  • the channel prediction model may be trained in order to obtain an updated channel prediction model.
  • the parameters of some layers in the channel prediction model can be kept unchanged, and the parameters of other layers can be changed to obtain an updated channel prediction model.
  • the accuracy with which the channel prediction model predicts the channel coefficient is improved.
  • the structure of the channel prediction model is a model of a neural network, and parameters (such as weights) in the model can be changed by training through the neural network, or by means of assignment.
  • the communication device can send or receive the position and value of these parameters by wire or wirelessly.
  • the position may include the number of the channel prediction model to which the parameter belongs, the layer number in the channel prediction model to which the parameter belongs, and the position number in the parameter matrix of the layer in the channel prediction model to which the parameter belongs.
  • after the predicted value of the channel coefficient in the second period is determined according to the first channel coefficient sequence and the preset channel change dictionary, data may be demodulated or decoded according to the predicted value; adaptive transmission may also be performed based on the predicted value to improve system throughput.
  • a channel prediction model may be used for channel prediction.
  • the channel coefficient can be obtained using channel estimation.
  • the channel estimation value obtained through channel estimation may be used to demodulate, decode, or adaptively transmit the data in the channel estimation window T1, and the predicted value of the channel coefficient may be used to demodulate, decode, or adaptively transmit the data in the channel prediction window T2.
  • alternatively, the channel estimation value obtained through channel estimation may be acquired, and a channel prediction weight value determined according to the channel estimation value and the prediction value; according to the channel prediction weight value, the data in the channel prediction window T2 may be demodulated, decoded, or adaptively transmitted. By combining channel estimation and the channel prediction model to predict the channel coefficients, the accuracy of the channel prediction is improved.
  • extrapolation processing may be performed on the channel coefficient sequence of the channel prediction window to obtain a channel extrapolation value of the channel prediction window, and then a weighted average of the channel estimation value, the channel extrapolation value, and the prediction value is determined; the weighted average value is used for demodulation, decoding or adaptive transmission. This can improve the robustness of the channel prediction value.
  • the signal value of the channel prediction window can be obtained, and the channel coefficient is predicted by combining the signal value and the channel prediction model to improve the accuracy of the channel prediction.
  • an embodiment of the present application provides a channel prediction apparatus configured to implement the methods and functions performed by the communication device in the first aspect, which are implemented by hardware and/or software, and the hardware/software includes units corresponding to the above functions.
  • an embodiment of the present application provides a channel prediction device, including a processor, a memory, and a communication bus, where the communication bus is used to implement connection and communication between the processor and the memory, and the processor executes a program stored in the memory for implementing the steps in the channel prediction method provided in the first aspect.
  • the channel prediction device provided in the embodiment of the present application may include a module corresponding to the behavior of the channel prediction device in the above method design.
  • Modules can be software and / or hardware.
  • the processor and memory can also be integrated together.
  • the channel prediction device may be a chip.
  • an embodiment of the present application provides a computer-readable storage medium.
  • the computer-readable storage medium stores instructions that, when run on a computer, cause the computer to execute the methods in the foregoing aspects.
  • an embodiment of the present application provides a computer program product including instructions, which when executed on a computer, causes the computer to execute the methods in the foregoing aspects.
  • FIG. 1 is a schematic structural diagram of a wireless communication system according to an embodiment of the present application.
  • FIG. 2 is a linear time-varying channel model provided by an embodiment of the present application.
  • FIG. 3 is a schematic structural diagram of a convolutional neural network according to an embodiment of the present application.
  • FIG. 4 is a schematic flowchart of a channel prediction method according to an embodiment of the present application.
  • FIG. 5 is a diagram of a channel change dictionary provided by an embodiment of the present application.
  • FIG. 6 is a diagram of another channel change dictionary provided by an embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of a channel prediction apparatus according to an embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of a channel prediction device according to an embodiment of the present application.
  • FIG. 1 is a schematic structural diagram of a wireless communication system according to an embodiment of the present application.
  • the wireless communication system includes a plurality of communication devices, for example, a communication device 1, a communication device 2, and a communication device 3.
  • the communication device may be a terminal device or a base station.
  • the base station provides communication services for terminal equipment.
  • the base station sends downlink data to the terminal equipment, and uses channel coding to encode the data.
  • the channel-coded data is transmitted to the terminal equipment after constellation modulation.
  • the terminal device sends uplink data to the base station.
  • the uplink data may also be encoded using channel coding.
  • the encoded data is transmitted to the base station after constellation modulation.
  • a terminal device may refer to user equipment, a device that provides voice and/or data connectivity to a user; it may be connected to a computing device such as a laptop or desktop computer, or it may be a personal digital assistant (PDA), etc.
  • a terminal device may also be referred to as a system, a subscriber unit, a subscriber station, a mobile station, a remote station, an access point, a remote terminal, an access terminal, a user terminal, a user agent, or a user device.
  • a base station can be a device used to communicate with terminal equipment.
  • the base station can act as a router between the wireless terminal and the rest of the access network, which can include an Internet Protocol network.
  • the base station can also coordinate the management of the attributes of the air interface.
  • the wireless communication system may be applied to channel prediction.
  • the channel coefficient is obtained through a channel prediction method, and the channel coefficient is used for data demodulation, decoding, or adaptive transmission of the base station or terminal device, thereby obtaining the original transmitted data.
  • FIG. 2 is a linear time-varying channel model provided by an embodiment of the present application.
  • n(t) is the additive noise present on the modulation channel, also called additive interference, and is independent of the input signal s(t).
  • channel estimation can be described as estimating the response of the wireless channel from the transmitting antenna to the receiving antenna based on the received sequence, whose amplitude and phase have been changed by the multipath channel and onto which white noise has been superimposed.
  • the usual channel estimation algorithm only calculates based on the measured signal at the current time, without considering the channel coefficient estimated at the historical time.
  • a neural network can use seemingly random channel data from historical times to learn the channel characteristics and make inferences, so as to predict the channel coefficients at the current time and even in the future.
  • the randomness of the channel environment comes from complex multipath effects, shadow effects, and small-scale fading.
  • the value of the channel coefficient conforms to some statistical characteristics.
  • the channel models proposed by standards organizations for link and system simulation are fitted based on these statistical characteristics. Such channel models are of little use for channel prediction in the real environment, whereas neural networks are well suited to this kind of very complex, hard-to-model problem.
  • Existing neural-network-based channel prediction methods include adaptive linear (adaline) and non-linear methods, such as the multilayer perceptron (MLP), recurrent neural network (RNN), convolutional neural network (CNN), deep neural network (DNN), and so on.
  • FIG. 3 is a schematic structural diagram of a convolutional neural network provided by an embodiment of the present application.
  • the convolutional neural network includes input layers, convolutional layers, fully connected layers, and output layers.
  • the input layer can be used to input data, and the input data is generally an integer.
  • the convolutional layer is mainly used to extract the features of the input data. Different convolution kernels have different features for extracting input data. The more the number of convolution kernels in the convolution layer, the more features the extracted input data have.
  • the fully connected part can contain multiple fully connected layers.
  • each neuron node in the next layer is connected to every neuron node in the previous layer; the outputs of the neuron nodes in each layer are forward-propagated through the weights on the connections, and the weighted combination forms the input of the neuron nodes in the next layer.
  • the number of neuron nodes in the output layer can be set according to the specific application task.
  • the channel coefficient is a complex value, and the amplitude and phase of the channel coefficient represent the transformation of the transmitted signal in power and delay. Because classical neural networks cannot handle complex numbers, they can only learn the channel's amplitude (that is, power), or the amplitude and phase of the channel are input separately to the neural network for prediction, which destroys the original information of the channel coefficients, so the accuracy of the predicted channel coefficients is low. To solve the above technical problems, the embodiments of the present application provide the following solutions.
  • FIG. 4 is a schematic flowchart of a channel prediction method according to an embodiment of the present application. As shown in the figure, the steps in the embodiment of the present application include at least:
  • the communication device can estimate channel coefficients in a period of time by using a channel estimation method to form a first channel coefficient sequence.
  • the communication device may also receive the first channel coefficient sequence sent by other communication devices in a wired or wireless manner.
  • the first channel coefficient sequence includes multiple complex values of the channel coefficients.
  • the complex values of the channel coefficients can be expressed in Cartesian form, such as x + yi, or in polar form, such as a·e^(ib).
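  • As a small illustration (values assumed, not from this application), the same complex channel coefficient in both representations:
      import cmath

      h = 0.8 + 0.6j                   # Cartesian form x + yi
      a, b = abs(h), cmath.phase(h)    # amplitude a and phase b
      h_polar = a * cmath.exp(1j * b)  # polar form a*e^(ib), equal to h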
  • the channel coefficient can be a time domain channel coefficient or a frequency domain channel coefficient. Time-domain channel coefficients and frequency-domain channel coefficients can be converted to each other.
  • frequency-domain channel coefficients can be converted into time-domain channel coefficients by an inverse Fourier transform, and time-domain channel coefficients can be converted into frequency-domain channel coefficients by a Fourier transform.
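  • A minimal sketch of this conversion, with assumed frequency-domain estimates:
      import numpy as np

      h_freq = np.array([1.0 + 0.5j, 0.8 - 0.2j, 0.3 + 0.1j, 0.1 + 0.05j])  # assumed frequency-domain coefficients
      h_time = np.fft.ifft(h_freq)  # frequency domain -> time domain (inverse Fourier transform)
      h_back = np.fft.fft(h_time)   # time domain -> frequency domain (Fourier transform)
      assert np.allclose(h_back, h_freq)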
  • signal processing operations such as sampling, noise reduction, filtering, amplitude normalization, and fixed-pointing can be performed on the estimated channel coefficients in the time or frequency domain.
  • the channel estimation value may also be calculated from pilot sequences placed at nearby time-frequency positions; the calculation includes signal processing operations such as interpolation and extrapolation, and may also include inputting the values into a neural network.
  • the first period may be a time point or a time period.
  • S402. Determine a predicted value of the channel coefficient in the second period according to the first channel coefficient sequence and a preset channel change dictionary.
  • the channel change dictionary includes a mapping relationship between each change amount of the channel coefficient and a value of the channel change; the second period is later than the first period.
  • the second period may be a time point or a time period.
  • a first channel coefficient change sequence is determined according to the first channel coefficient sequence, and the first channel coefficient change sequence includes multiple changes in the channel coefficient.
  • each change amount in the first channel coefficient change sequence may be obtained by subtracting the complex value of the channel coefficient at the previous moment from the complex value of the channel coefficient at the later moment in the first channel coefficient sequence (later minus previous), or by subtracting the complex value of the channel coefficient at the later moment from the complex value of the channel coefficient at the previous moment (previous minus later).
  • for example, if the first channel coefficient sequence is {x_1, x_2, x_3, x_4, ..., x_n}, the first channel coefficient change sequence is {x_2 - x_1, x_3 - x_2, x_4 - x_3, ..., x_n - x_(n-1)}.
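  • A minimal sketch of both differencing conventions, using an assumed coefficient sequence:
      import numpy as np

      x = np.array([1.00 + 0.50j, 0.98 + 0.52j, 0.95 + 0.55j, 0.91 + 0.57j])  # assumed first channel coefficient sequence
      changes = np.diff(x)       # {x_2 - x_1, x_3 - x_2, ...}: later value minus previous value
      changes_alt = -np.diff(x)  # alternative convention: previous value minus later value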
  • alternatively, each change amount in the first channel coefficient change sequence may be obtained by subtracting the complex value of the channel coefficient at the last moment from the complex value of each channel coefficient before the last moment in the first channel coefficient sequence, or by subtracting the complex value of the channel coefficient at the start time from the complex value of each channel coefficient after the start time.
  • more generally, the change amounts in the first channel coefficient change sequence may be obtained by any characteristic calculation, such as addition, subtraction, multiplication, division, first derivative, or second derivative, that characterizes the relationship between the complex value of the channel coefficient at the previous moment and the complex value of the channel coefficient at the later moment.
  • the channel change may be fast or slow.
  • the difference in frequency offset may also cause the phase change of the channel coefficient to be fast or slow. Therefore, the first channel coefficient sequence may be sampled or interpolated first, and then the first channel coefficient change sequence may be obtained according to the first channel coefficient sequence after the sampling or interpolation operation.
  • the phase of the first channel coefficient sequence may be multiplied by a frequency offset to make the phase rotation of the channel faster or slower.
  • for example, a first channel coefficient sequence with a length of 300,000 symbols can be sampled by taking one value every 30 symbols to form a sampling sequence of length 10,000, and the first channel coefficient change sequence is then determined according to the sampling sequence.
  • an interpolation smoothing operation may be performed on a first channel coefficient sequence with a length of 1000 symbols, and nine values are inserted between every two symbols to obtain an interpolation sequence with a length of 10000, and then the first channel coefficient change sequence is determined according to the interpolation sequence.
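  • A sketch of the sampling and interpolation operations on assumed synthetic sequences:
      import numpy as np

      # Down-sampling: one value every 30 symbols of a 300,000-symbol sequence -> length 10,000.
      x = np.exp(1j * 0.001 * np.arange(300_000))  # assumed channel coefficient sequence
      sampled = x[::30]

      # Interpolation: insert values between the symbols of a 1000-symbol sequence -> length 10,000.
      y = np.exp(1j * 0.01 * np.arange(1000))      # assumed channel coefficient sequence
      t_old = np.arange(1000)
      t_new = np.linspace(0, 999, 10_000)
      interp = np.interp(t_new, t_old, y.real) + 1j * np.interp(t_new, t_old, y.imag)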
  • a value of a channel change corresponding to each change in the first channel coefficient change sequence is searched from the channel change dictionary, and a first channel change value sequence is generated.
  • the channel change dictionary includes a set consisting of a key and a value
  • the key is a change amount of the channel coefficient in the first channel coefficient change sequence.
  • the change amount of the channel coefficient may be the complex value of the channel coefficient at the later time in the first channel coefficient sequence minus the complex value of the channel coefficient at the previous time, or the complex value of the channel coefficient at the previous time minus the complex value of the channel coefficient at the later time.
  • Keys are unique in the channel change dictionary, one key corresponds to one value.
  • the value is generally an integer, that is, the value of the change in the channel coefficient corresponding to the change in the channel.
  • the channel change dictionary may include a channel change dictionary A and a channel change dictionary B.
  • the key in the channel change dictionary A is the change amount of the channel coefficient, and the value is the value of the channel change.
  • the key in the channel change dictionary B is the value of the channel change, and the value is the change amount of the channel coefficient.
  • the channel change dictionary A is used when converting a sequence of channel coefficient changes into a sequence of channel change values.
  • the channel change dictionary B is used when converting a sequence of channel change values back into a sequence of channel coefficient changes; in the channel change dictionary B, the change amount of the channel coefficient is looked up by the value of the channel change.
  • when the values of the channel change are consecutive integers, the channel change dictionary can be degraded into a vector of channel change amounts, so the integer values do not need to be stored.
  • the index of the first position of the vector is the integer 0, the index of the second position is the integer 1, the index of the third position is the integer 2, and so on, each index corresponding to the respective channel change amount.
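  • A small sketch (change amounts assumed) of this degraded vector form:
      # The channel change values are simply the positions 0, 1, 2, ... in the vector, so they need not be stored.
      change_vector = [0.00 + 0.00j, 0.01 + 0.00j, -0.01 + 0.00j, 0.00 + 0.01j]
      value = change_vector.index(-0.01 + 0.00j)  # change amount -> channel change value (here 2)
      change = change_vector[2]                   # channel change value -> change amount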
  • FIG. 5 is a diagram of a channel change dictionary B provided by an embodiment of the present application.
  • the value of the channel change in the channel change dictionary is an integer that increases from 1 or 0, and is reduced to the first 11 elements of the vector representation.
  • the channel change dictionary may be temporarily generated according to a current change amount of channel coefficients, or may be pre-stored (generated according to other channel coefficient change sequences).
  • the process of generating a channel change dictionary includes: statistics of the frequency of occurrence of each change in a plurality of changes of channel coefficients in a sequence of channel coefficient changes; and then, according to the frequency of occurrence, each change of channel coefficients An integer is assigned to obtain the value of the channel change; finally, a mapping relationship between the value of the channel change and each change amount is established, and the channel change dictionary is generated.
  • the occurrence frequency of the change amount of each channel coefficient in the channel coefficient change sequence may be counted, and sorted according to the appearance frequency from high to low, or from low to high.
  • an integer can be assigned in increments starting from 1, and the assigned integer can be used as the channel change value in the channel change dictionary.
  • the channel change dictionary can be expressed as: {key1: value1, key2: value2, key3: value3, ..., keyN: valueN}, where key is a key of the channel change dictionary and value is the corresponding value of the channel change dictionary.
  • a threshold can be set for assigning an integer to a target change amount of the channel coefficient: when the occurrence frequency of the change amount is greater than the threshold, an integer value is assigned to the change amount and it is saved to the channel change dictionary; otherwise, no integer is assigned to the change amount and it is not saved in the channel change dictionary.
  • the preset threshold is 10, and in a channel coefficient change sequence with a length of 100,000, for low frequency changes that occur only 10 times or less, these low frequency changes may not be stored in the channel change dictionary.
  • a new word can be added to the channel change dictionary and assigned a value; for example, the new word "unk" is used as a key and assigned the value 0. When a change amount and its corresponding integer cannot be found in the dictionary, the new word "unk" is used instead and corresponds to the integer 0.
  • FIG. 6 is a diagram of another channel change dictionary provided by an embodiment of the present application.
  • N is the total number of change amounts in the channel change dictionary, and the chart lists only the 10 highest-frequency change amounts and the 10 lowest-frequency change amounts together with their corresponding channel change values.
  • the channel change dictionary may retain only the change amounts whose frequency of occurrence is greater than 10; for change amounts with a frequency less than or equal to 10, "unk" is used instead, corresponding to the integer 0.
  • the first channel change value sequence is input into a channel prediction model and prediction is performed to obtain a prediction sequence of the channel change value.
  • the channel prediction model can be obtained by training the neural network. The specific method is as follows:
  • a second channel coefficient sequence may be obtained, where the second channel coefficient sequence includes multiple complex values of the channel coefficient.
  • the second channel coefficient sequence may consist of channel coefficients obtained through simulation of a channel model in a simulated scenario, or of channel coefficients collected through channel estimation by a communication device in a real communication environment.
  • a second channel coefficient change sequence is determined according to the second channel coefficient sequence, and the second channel coefficient change sequence includes multiple change amounts of the channel coefficient. This step is the same as the method for determining the first channel coefficient change sequence according to the first channel coefficient sequence, and is not described again.
  • a value of a channel change corresponding to each change amount in the second channel coefficient change sequence is looked up from the channel change dictionary, and a second channel change value sequence is generated.
  • This step is the same as the method for searching the channel change value corresponding to each change amount in the first channel coefficient change sequence from the channel change dictionary, and this step is not described again.
  • the second channel change value sequence is input to a neural network for training to obtain the channel prediction model.
  • the neural network may be a recurrent neural network RNN, a convolutional neural network CNN, or a deep neural network DNN, or any combination of the three.
  • the input of the neural network may include a signal value in addition to a value of channel change, that is, a value obtained by performing various calculations on r (t) in FIG. 2.
  • RNNs can be combined with backpropagation through time (BPTT) methods or long short-term memory networks (LSTMs) and their variants.
  • a sequence-to-sequence (Seq2Seq) network may also be used.
  • the Seq2Seq network can complete the conversion of two sequences, which is usually used for the translation of two languages.
  • the sequence at the previous moment may be regarded as one language and the sequence at the later moment as another language; when the Seq2Seq network translates between the two languages, it is equivalent to using the sequence at the previous moment to generate the sequence at the next moment, thereby achieving channel prediction.
  • the Seq2Seq network contains two RNNs, which makes it more expressive; the two RNNs can use the same channel change dictionary during training.
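  • A minimal PyTorch sketch of such a Seq2Seq predictor; the architecture and sizes are assumptions for illustration, not the model defined by this application. An embedding layer maps channel change values to word vectors, an encoder RNN reads the past value sequence, a decoder RNN emits a distribution over channel change values for each future step, and the loss is computed with cross entropy:
      import torch
      import torch.nn as nn

      class TinySeq2Seq(nn.Module):
          def __init__(self, dict_size, embed_dim=100, hidden_dim=128):
              super().__init__()
              self.embed = nn.Embedding(dict_size, embed_dim)  # embedding layer for channel change values
              self.encoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
              self.decoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
              self.out = nn.Linear(hidden_dim, dict_size)      # output dimension = dictionary size

          def forward(self, past_values, future_values):
              _, state = self.encoder(self.embed(past_values))  # summarize the past window
              dec_out, _ = self.decoder(self.embed(future_values), state)
              return self.out(dec_out)                          # logits over channel change values

      model = TinySeq2Seq(dict_size=5000)
      past = torch.randint(0, 5000, (8, 64))     # batch of past channel change value sequences
      future = torch.randint(0, 5000, (8, 16))   # teacher-forced future inputs during training
      targets = torch.randint(0, 5000, (8, 16))  # target channel change values
      logits = model(past, future)               # shape (8, 16, 5000)
      loss = nn.CrossEntropyLoss()(logits.reshape(-1, 5000), targets.reshape(-1))
      loss.backward()                            # backpropagation (BPTT through the recurrent layers)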
  • each change amount in the channel coefficient change sequence corresponds to an integer, and each change amount also corresponds to a word vector; the word vector represents the change amount in the first layer of the neural network, which can also be called an embedding layer and whose parameters are constantly updated during training.
  • the size of the input dimension of the neural network, the size of the output dimension of the neural network, and the number of mapping relationships in the channel change dictionary may be the same.
  • each amount of change can be represented by a word vector with a length of 100.
  • each word vector can be obtained by initialization before entering the neural network, or a pre-trained word vector can be used.
  • in order to accelerate the convergence of the neural network and avoid training from scratch, in addition to introducing pre-stored word vectors, a pre-stored, pre-trained channel prediction model may also be used.
  • the neural network can also be helped to converge faster based on prior information about the change amounts of the channel coefficient. For example, in the channel change dictionary, "+0.03+0.001i" corresponds to the integer 105 and "+0.03+0.002i" corresponds to the integer 2370. The relationship between the two cannot be seen from the integers, yet the two complex values are actually very close to each other, and the system could even tolerate interchanging them; this information is lost in the process of integerization.
  • each change amount corresponds to a word vector. If the difference between multiple change amounts is less than a certain value, the word vectors corresponding to the multiple change amounts are bound.
  • for example, the word vector corresponding to the smallest change amount among the plurality of change amounts may be assigned to the other change amounts, or a certain change amount among the plurality of change amounts may be selected and its word vector used for training.
  • the losses of these change amounts can also be used as a whole, and the average value may be assigned to the word vector corresponding to a certain change amount or to the word vectors corresponding to all the change amounts that are close to each other.
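  • A small numpy sketch (all arrays assumed) of one possible binding: averaging the word vectors of the change amounts whose complex values are close; copying one group member's vector, as mentioned above, would work analogously:
      import numpy as np

      changes = np.array([0.03 + 0.001j, 0.03 + 0.002j, -0.20 + 0.10j])  # assumed change amounts
      vectors = np.random.randn(len(changes), 100)                       # one length-100 word vector each
      close = np.abs(changes[:, None] - changes[None, :]) < 0.005        # pairwise complex-value gaps
      bound = np.stack([vectors[close[i]].mean(axis=0) for i in range(len(changes))])
      # bound[0] and bound[1] are now identical; bound[2] is unchanged.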
  • the gap between the complex values of each two changes in the channel change dictionary can be determined, and a channel change gap matrix can be generated.
  • the dimension of the channel change gap matrix is N * N.
  • the elements in the i-th row of the channel change gap matrix indicate the gaps between the i-th channel change amount and the 1st to N-th channel change amounts in the channel change dictionary.
  • a loss function can be set; by comparing the predicted values output by the neural network with the target values in the training set, gradient descent is performed on the neural network with the goal of minimizing the loss function. Statistical methods such as cross entropy, negative maximum likelihood, and mean square error can be used to calculate the loss function. The redundant information related to the complex values in the channel change dictionary is added to the loss function for calculation; the loss function calculated with this redundant information replaces the original loss function after training has progressed to a certain degree, or is added to the original loss function with a gradually increasing weight as training progresses, in order to reduce interference in the real environment.
  • the channel change gap matrix and the loss function are combined to determine the channel prediction model. Specifically, this includes: first obtaining the probability of each predicted value output by the neural network; determining the gap between the complex values of every two change amounts in the channel change dictionary and generating a channel change gap matrix; then determining a weighted average of the probabilities of the predicted values according to the channel change gap matrix and the probability of each predicted value; and finally determining, according to the weighted average, whether the neural network has been trained.
  • first determine the gap between the complex values of every two changes in the channel change dictionary, and generate a channel change gap matrix.
  • second, the k entries with the smallest gaps in each row of the channel change gap matrix are normalized and the other N-k entries in the row are set to zero; alternatively, the k entries in each row whose gap is less than a preset value are set to 1 and the other N-k entries in the row are set to zero.
  • as a further alternative, only one of the k entries in each row whose gap is less than the preset value is set to 1, and the other entries in the row are set to zero.
  • k is a positive integer greater than 1 and less than N
  • N is a positive integer.
  • then, the channel change gap matrix and the probabilities of the predicted values output by the neural network are retrieved, the j-th row of the channel change gap matrix is read according to the target value j, and that row is used to weight the probabilities of the predicted values to obtain the weighted average.
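  • A minimal sketch of the second alternative (entries below a preset value set to 1, others to 0), with assumed dictionary contents, probabilities, and preset value:
      import numpy as np

      changes = np.array([0.03 + 0.001j, 0.03 + 0.002j, -0.20 + 0.10j, 0.00 + 0.00j])  # assumed change amounts
      gaps = np.abs(changes[:, None] - changes[None, :])  # channel change gap matrix (N x N)
      mask = (gaps < 0.01).astype(float)                  # assumed preset value 0.01: 1 if close, else 0

      probs = np.array([0.60, 0.25, 0.05, 0.10])          # assumed probabilities of the predicted values
      target_j = 0                                        # index of the target value in the training set
      weighted_avg = float(mask[target_j] @ probs)        # 0.85: close neighbours of the target also count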
  • the channel prediction model may be trained in order to obtain an updated channel prediction model.
  • the parameters of some layers in the channel prediction model can be kept unchanged, and the parameters of other layers can be changed to obtain an updated channel prediction model.
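  • With a PyTorch model such as the TinySeq2Seq sketch above (an assumption, not the patented model), this can look as follows, freezing the encoder and retraining only the remaining parameters:
      import torch

      for p in model.encoder.parameters():
          p.requires_grad = False  # keep the encoder-layer parameters unchanged
      trainable = [p for p in model.parameters() if p.requires_grad]
      optimizer = torch.optim.Adam(trainable, lr=1e-4)  # only the embedding, decoder, and output layers are updated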
  • the communication device may send the updated channel prediction model to other communication devices in a wired or wireless manner.
  • the structure of the channel prediction model is a model of a neural network, and parameters (such as weights) in the model can be changed through training by the neural network, and can also be changed by means of assignment.
  • the communication device can send or receive the position and value of these parameters in a wired or wireless manner.
  • the position may include the number of the channel prediction model to which the parameter belongs, the layer number in the channel prediction model to which the parameter belongs, and the position number in the parameter matrix of the layer in the channel prediction model to which the parameter belongs.
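  • A hypothetical example (field names assumed) of such a parameter position and value:
      parameter_update = {
          "model_id": 3,        # number of the channel prediction model the parameter belongs to
          "layer_id": 2,        # layer number within that model
          "position": (4, 17),  # position in the parameter matrix of that layer
          "value": 0.031,       # new parameter (weight) value
      }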
  • the base station may keep parameters of some layers in the channel prediction model unchanged, and only change a small number of parameters in the channel prediction model W to predict the current channel environment.
  • the base station may also broadcast or send the parameters to be modified to the specific user when the network is idle, so that the communication equipment of the specific user updates the channel prediction model according to the modified parameters.
  • the average bit error rate obtained after each update may be used as a reward/punishment measure, and reinforcement learning is applied to the update action of the channel prediction model to obtain the optimal update action.
  • the updated channel prediction model may be used for decoding to obtain the bit error rate produced by the decoding, and a reward or punishment action is selected according to the bit error rate, or a scoring system is adopted: the lower the bit error rate, the greater the reward or the higher the score; the higher the bit error rate, the greater the penalty or the lower the score.
  • the reward, penalty, or score is then fed back to the training module of the channel prediction model, and the training module is motivated to train under the reinforcement mechanism to obtain an updated channel prediction model.
  • the throughput obtained after each update may be used as a reward/punishment measure, and reinforcement learning is applied to the update action of the channel prediction model to obtain the optimal update action.
  • the updated channel prediction model can be used for adaptive transmission to obtain the throughput of the transmission system, and a reward or punishment action is selected according to the throughput, or a scoring system is adopted: the higher the throughput, the greater the reward or the higher the score; the lower the throughput, the greater the penalty or the lower the score.
  • the reward, penalty, or score is then fed back to the training module of the channel prediction model, and the training module is motivated to train under the reinforcement mechanism to obtain an updated channel prediction model.
  • a prediction value of the channel coefficient is determined according to a prediction sequence of the value of the channel change.
  • for example, if the prediction sequence of channel coefficient changes is {y_1, y_2, y_3, y_4, ..., y_n} and y_0 is the last known complex value of the channel coefficient, the predicted values of the channel coefficients are {y_0 - y_1, y_0 - y_1 - y_2, y_0 - y_1 - y_2 - y_3, ..., y_0 - y_1 - ... - y_n}.
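  • A minimal sketch with assumed values, following the same convention as the formula above:
      import numpy as np

      y0 = 1.00 + 0.50j                                         # last known complex channel coefficient
      y = np.array([0.02 - 0.02j, 0.03 - 0.01j, 0.04 - 0.03j])  # prediction sequence {y_1, y_2, y_3}
      predicted = y0 - np.cumsum(y)                             # {y_0 - y_1, y_0 - y_1 - y_2, y_0 - y_1 - y_2 - y_3}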
  • data may be demodulated or decoded according to the predicted value.
  • adaptive transmission includes link adaptation, adaptive modulation, scheduling, power control, and selection of precoding.
  • the channel prediction model may be used for channel prediction.
  • when the system throughput generated by using the predicted value for demodulation, decoding, or adaptive transmission is not higher than a predicted-throughput threshold, traditional channel estimation may be used instead.
  • while communication device A uses the predicted value for decoding, no channel estimation is performed.
  • communication device A can directly close the channel prediction model and restart channel estimation.
  • alternatively, communication device B may be requested to close the channel prediction model; after the instruction issued by communication device B is received, the channel prediction model is closed and channel estimation is restarted.
  • the prediction termination judgment time Tp is less than or equal to the prediction time T2.
  • the channel prediction model is closed.
  • after communication device A requests communication device B to close the channel prediction model, if the instruction sent by communication device B is not received within the waiting time T, communication device A may resend the request, or directly close the channel prediction model and restart channel estimation.
  • the channel estimation value may be used to demodulate, decode, or adaptively transmit the data in the channel estimation window T1, and the predicted value of the channel coefficient may be used to demodulate, decode, or adaptively transmit the data in the channel prediction window T2.
  • alternatively, the channel estimation value obtained through channel estimation may be acquired, and a channel prediction weight value determined according to the channel estimation value and the prediction value; the data in the channel prediction window T2 is then demodulated, decoded, or adaptively transmitted according to the channel prediction weight value.
  • the channel estimation is performed in the previous time period, and then the channel prediction model is used in the later time period for prediction.
  • extrapolation processing may be performed on the channel coefficient sequence of the channel prediction window to obtain the channel extrapolation value of the channel prediction window, and then a weighted average value of the channel estimation value, the channel extrapolation value, and the prediction value is determined, and the weighted average value is used to perform Demodulation, decoding or adaptive transmission. This can improve the robustness of the channel prediction value.
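  • A sketch of the weighted combination with assumed values and weights:
      import numpy as np

      h_est, h_extrap, h_pred = 0.95 + 0.48j, 0.93 + 0.51j, 0.91 + 0.52j  # assumed estimation, extrapolation, and prediction values
      w = np.array([0.2, 0.3, 0.5])                                       # assumed weights, summing to 1
      h_combined = w @ np.array([h_est, h_extrap, h_pred])                # weighted average used in window T2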
  • in summary, a channel coefficient sequence including a plurality of complex-valued channel coefficients is acquired, the change amounts of the channel coefficients are determined, and the value of the channel change corresponding to each change amount is then found from the channel change dictionary to generate a channel change value sequence.
  • the channel change value sequence is input into the channel prediction model for prediction, and a prediction sequence of channel change values is obtained. This realizes the prediction of complex-valued channel coefficients, thereby retaining the original information of the channel coefficients and predicting the amplitude and phase of the channel simultaneously, which improves the accuracy of the predicted channel coefficients. Predicting through a channel prediction model can also improve the efficiency of obtaining channel coefficients.
  • FIG. 7 is a schematic structural diagram of a channel prediction apparatus according to an embodiment of the present application.
  • the device in the embodiment of the present application includes:
  • the obtaining module 701 is configured to obtain a first channel coefficient sequence in a first period, where the first channel coefficient sequence includes multiple complex values of a channel coefficient.
  • a processing module 702 configured to determine a predicted value of the channel coefficient in the second period according to the first channel coefficient sequence and a preset channel change dictionary, where the channel change dictionary includes each change amount of the channel coefficient A mapping relationship with a value of a channel change, the second period is later than the first period.
  • the processing module 702 is further configured to: determine a first channel coefficient change sequence according to the first channel coefficient sequence, where the first channel coefficient change sequence includes multiple change amounts of the channel coefficient; look up, from the channel change dictionary, the value of the channel change corresponding to each change amount in the first channel coefficient change sequence, and generate a first channel change value sequence; input the first channel change value sequence into the channel prediction model and perform prediction to obtain a prediction sequence of channel change values; and determine the predicted value of the channel coefficient according to the prediction sequence of channel change values.
  • the processing module 702 is further configured to subtract the complex value of the channel coefficient at the previous moment from the complex value of the channel coefficient at the later moment in the first channel coefficient sequence to obtain each change amount in the first channel coefficient change sequence, or to subtract the complex value of the channel coefficient at the later moment from the complex value of the channel coefficient at the previous moment to obtain each change amount in the first channel coefficient change sequence.
  • the processing module 702 is further configured to determine, based on the channel change dictionary, a sequence of change amounts of the channel coefficient corresponding to the prediction sequence of channel change values, and to determine the predicted value of the channel coefficient according to the sequence of change amounts of the channel coefficient.
  • the obtaining module 701 is further configured to obtain a second channel coefficient sequence, where the second channel coefficient sequence includes multiple complex values of the channel coefficient;
  • the processing module 702 is further configured to determine a second channel coefficient change sequence according to the second channel coefficient sequence, where the second channel coefficient change sequence includes multiple change amounts of the channel coefficient; to look up, from the channel change dictionary, the value of the channel change corresponding to each change amount in the second channel coefficient change sequence, and generate a second channel change value sequence; and to input the second channel change value sequence into the neural network for training to obtain the channel prediction model.
  • the obtaining module 701 is further configured to obtain a probability of each predicted value output by the neural network
  • the processing module 702 is further configured to determine a gap between complex values of every two changes in the channel change dictionary, and generate a channel change gap matrix; according to the channel change gap matrix and the probability of each predicted value To determine a weighted average of the probabilities of each predicted value; and determine whether the neural network has been trained according to the weighted average.
  • the size of the input dimension of the neural network and the size of the output dimension of the neural network are the same as the number of the mapping relationships in the channel change dictionary.
  • the processing module 702 is further configured to: count the frequency of occurrence of each change amount among the plurality of change amounts of the channel coefficient; assign an integer to each change amount of the channel coefficient according to the frequency of occurrence to obtain the value of the channel change; and establish a mapping relationship between the value of the channel change and each change amount to generate the channel change dictionary.
  • the processing module 702 is further configured to assign an integer to the target change amount of the channel coefficient when the occurrence frequency of the target change amount among the plurality of change amounts is greater than a preset threshold.
  • processing module 702 is further configured to demodulate, decode, or adaptively transmit data according to the predicted value.
  • the processing module 702 is further configured to use the channel prediction model to perform channel prediction when a system throughput rate generated by using the predicted value for demodulation, decoding, or adaptive transmission is higher than a predicted throughput threshold.
  • the obtaining module 701 is further configured to obtain a channel estimation value obtained through channel estimation;
  • the processing module 702 is further configured to determine a channel prediction weight value according to the channel estimation value and the prediction value, and perform demodulation, decoding, or adaptive transmission on the data according to the channel prediction weight value.
  • each module may also correspond to the corresponding description of the method embodiment shown in FIG. 4, and execute the methods and functions performed by the communication device in the foregoing embodiment.
  • the channel prediction device may include: at least one processor 801, at least one communication interface 802, at least one memory 803, and at least one communication bus 804.
  • the processor and the memory may also be integrated together.
  • the channel prediction device may be a chip.
  • the memory 803 may be used to store a channel prediction model and a channel change dictionary.
  • the processor 801 may include a central processing unit, a baseband processor, and a neural network processor.
  • after receiving a channel prediction instruction from the baseband processor, the central processor can read the channel prediction model from the memory 803 and load it into the neural network processor.
  • the baseband processor writes the channel coefficient sequence into the neural network processor through the central processor.
  • the neural network processor processes the channel coefficient sequence according to the channel change dictionary, and inputs the processed channel change value sequence into the channel prediction model for prediction, and finally obtains the predicted value of the channel coefficient.
  • the central processor writes the predicted value of the channel coefficient to the baseband processor, and the baseband processor demodulates, decodes, or adaptively transmits the data according to the predicted value.
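The write-back step in the preceding item implies an inverse mapping from predicted channel-change values back to complex coefficients. The sketch below shows one way to do this, assuming the variations were taken as the later moment minus the earlier moment, so the predicted coefficients are rebuilt by accumulating variations onto the last known coefficient; the names `inverse_dict` and `reconstruct_coefficients` are illustrative.

```python
import numpy as np

def reconstruct_coefficients(predicted_values, inverse_dict, last_known_coeff):
    """predicted_values: integer channel-change values output by the model.
    inverse_dict: maps an integer value back to a complex variation.
    Rebuilds predicted coefficients by cumulative summation onto the last
    known coefficient (sign convention: later minus earlier, an assumption)."""
    deltas = np.array([inverse_dict[v] for v in predicted_values], dtype=complex)
    return last_known_coeff + np.cumsum(deltas)

# Toy inverse dictionary and model output
inverse_dict = {1: 0.02 - 0.02j, 2: 0.02 + 0.02j, 3: -0.02 - 0.02j}
predicted = [1, 1, 3]
print(reconstruct_coefficients(predicted, inverse_dict, last_known_coeff=1.00 + 0.46j))
```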
  • the processor 801 may implement or execute various exemplary logical blocks, modules, and circuits described in combination with the disclosure of the present application.
  • the processor may also be a combination that implements computing functions, such as a combination including one or more microprocessors, a combination of a digital signal processor and a microprocessor, and so on.
  • the communication bus 804 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like.
  • the bus can be divided into an address bus, a data bus, a control bus, and the like. For ease of representation, only one thick line is used in FIG. 8, but it does not mean that there is only one bus or one type of bus.
  • the communication bus 804 is used to implement connection and communication between these components.
  • the communication interface 802 of the device in the embodiment of the present application is used to perform signaling or data communication with other node devices.
  • the memory 803 may include a volatile memory, such as a nonvolatile dynamic random access memory (NVRAM), a phase-change random access memory (PRAM), or a magnetoresistive random access memory (MRAM), and may also include a non-volatile memory, such as at least one magnetic disk storage device, an electrically erasable programmable read-only memory (EEPROM), or a flash memory device, for example, a NOR flash memory or a NAND flash memory, or a semiconductor device such as a solid state disk (SSD).
  • the memory 803 may optionally be at least one storage device located remotely from the foregoing processor 801.
  • the memory 803 may further store a set of program codes, and the processor 801 may optionally execute the program codes stored in the memory 803.
  • the processor 801 executes the program codes stored in the memory 803 to perform the following operations: obtaining a first channel coefficient sequence of a first period, where the first channel coefficient sequence includes multiple complex values of a channel coefficient; and determining a predicted value of the channel coefficient for a second period according to the first channel coefficient sequence and a preset channel change dictionary, where the channel change dictionary includes a mapping relationship between each variation of the channel coefficient and a value of the channel change, and the second period is later than the first period.
  • the processor 801 is further configured to perform the following operations: determining a first channel coefficient change sequence according to the first channel coefficient sequence, where the first channel coefficient change sequence includes multiple variations of the channel coefficient; looking up, from the channel change dictionary, the value of the channel change corresponding to each variation in the first channel coefficient change sequence, and generating a value sequence of the first channel change; inputting the value sequence of the first channel change to the channel prediction model for prediction to obtain a prediction sequence of the channel change values; and determining the predicted value of the channel coefficient according to the prediction sequence of the channel change values.
  • the processor 801 is further configured to perform the following operations: subtracting the complex value of the channel coefficient at a previous moment from the complex value of the channel coefficient at a following moment in the first channel coefficient sequence to obtain each variation in the first channel coefficient change sequence; or subtracting the complex value of the channel coefficient at a following moment from the complex value of the channel coefficient at a previous moment in the first channel coefficient sequence to obtain each variation in the first channel coefficient change sequence.
  • the processor 801 is further configured to perform the following operations: determining, based on the channel change dictionary, a sequence of variations of the channel coefficient corresponding to the prediction sequence of the channel change values; and determining the predicted value of the channel coefficient according to the sequence of variations of the channel coefficient.
  • the processor 801 is further configured to perform the following operations: obtaining a second channel coefficient sequence, where the second channel coefficient sequence includes multiple complex values of the channel coefficient; determining a second channel coefficient change sequence according to the second channel coefficient sequence, where the second channel coefficient change sequence includes multiple variations of the channel coefficient; looking up, from the channel change dictionary, the value of the channel change corresponding to each variation in the second channel coefficient change sequence, and generating a value sequence of the second channel change; and inputting the value sequence of the second channel change to the neural network for training to obtain the channel prediction model.
  • the processor 801 is further configured to perform the following operations: obtaining a probability of each predicted value output by the neural network; determining a gap between the complex values of every two variations in the channel change dictionary, and generating a channel change gap matrix; determining a weighted average of the probabilities of each predicted value according to the channel change gap matrix and the probability of each predicted value; and determining, according to the weighted average, whether the neural network has been trained.
  • the size of the input dimension of the neural network and the size of the output dimension of the neural network are the same as the number of the mapping relationships in the channel change dictionary.
  • the processor 801 is further configured to perform the following operations: counting the occurrence frequency of each variation among the multiple variations of the channel coefficient; assigning an integer to each variation of the channel coefficient according to the occurrence frequency to obtain the value of the channel change; and establishing a mapping relationship between the value of the channel change and each variation to generate the channel change dictionary.
  • the processor 801 is further configured to perform the following operation: assigning an integer to a target variation of the channel coefficient when the occurrence frequency of the target variation among the multiple variations is greater than a preset threshold.
  • the processor 801 is further configured to perform the following operation: demodulating, decoding, or adaptively transmitting data according to the predicted value.
  • the processor 801 is further configured to perform the following operation: using the channel prediction model for channel prediction when a system throughput produced by using the predicted value for demodulation, decoding, or adaptive transmission is higher than a prediction throughput threshold.
  • the processor 801 is further configured to perform the following operations: obtaining a channel estimation value obtained through channel estimation; determining a channel prediction weighted value according to the channel estimation value and the predicted value; and demodulating, decoding, or adaptively transmitting the data according to the channel prediction weighted value.
  • the processor may also cooperate with the memory and the communication interface to perform an operation of the channel prediction apparatus in the foregoing application embodiment.
  • the computer program product includes one or more computer instructions.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable devices.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center in a wired (for example, coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless (for example, infrared, radio, or microwave) manner.
  • the computer-readable storage medium may be any usable medium accessible to a computer, or a data storage device, such as a server or a data center, that integrates one or more usable media.
  • the available medium may be a magnetic medium (for example, a floppy disk, a hard disk, a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid state disk (SSD)).


Abstract

The embodiments of this application disclose a channel prediction method and a related device, including: obtaining a first channel coefficient sequence of a first period, where the first channel coefficient sequence includes multiple complex values of a channel coefficient; and determining a predicted value of the channel coefficient for a second period according to the first channel coefficient sequence and a preset channel change dictionary, where the channel change dictionary includes a mapping relationship between each variation of the channel coefficient and a value of a channel change, and the second period is later than the first period. The embodiments of this application can improve the accuracy of channel prediction.

Description

一种信道预测方法及相关设备 技术领域
本申请涉及无线网络技术领域,尤其涉及一种信道预测方法及相关设备。
背景技术
在无线通信系统中,发射机发送信号,接收机接收发射机发送的信号,在发射机到接收机之间的无线链路称为无线信道。发射机与接收机之间的传播路径非常复杂,如建筑物、街道和其它物体的移动会导致信号的反射、折射以及衍射等。接收机接收到的信号是经过不同的传输路径到达的所有信号的叠加信号。因此,无线信道是影响无线通信系统的一个重要因素。其中,信道估计就是估计从发送天线到接收天线之间的无线信道的响应,根据接收机多径信道影响产生了幅度和相位变化并叠加了白噪声的接收序列来估计出信道的时域或频域特性。但是,现有的信道估计算法估计的信道系数的准确度低。
发明内容
本申请实施例提供一种信道预测方法及相关设备,可以提高信道预测的准确度。
第一方面,本申请实施例提供了一种信道预测方法,包括:首先获取第一时段的第一信道系数序列,第一信道系数序列包括信道系数的多个复数值;然后根据第一信道系数序列以及预设的信道变化词典,确定第二时段的信道系数的预测值,信道变化词典包括信道系数的每个变化量与信道变化的值的映射关系,第二时段晚于第一时段。通过信道变化词典,实现对复数值的信道系数的预测,从而保留信道系数的原本信息,可以对信道的幅度和相位同时进行预测,提高了预测的信道系数准确率。
在一种可能的设计中,首先根据第一信道系数序列,确定第一信道系数变化序列,第一信道系数变化序列包括信道系数的多个变化量;然后从信道变化词典中查找与第一信道系数变化序列中每个变化量对应的信道变化的值,并生成第一信道变化的值序列;将第一信道变化的值序列输入到信道预测模型中进行预测得到信道变化的值的预测序列;最后根据信道变化的值的预测序列,确定信道系数的预测值。通过信道变化词典,将第一信道系数变化序列中每个变化量转化为信道变化的值,输入到信道预测模型中进行预测,从而实现对复数值的信道系数的预测,提高了预测的信道系数准确率。
在另一种可能的设计中,可以将第一信道系数序列中后一时刻的信道系数的复数值减去前一时刻的信道系数的复数值得到第一信道系数变化序列中的每个变化量;或将第一信道系数序列中前一时刻的信道系数的复数值减去后一时刻的信道系数的复数值得到第一信道系数变化序列中的每个变化量。通过计算变化量,如改变变化量的数值精度,可以减少信道变化词典中词的数量,提高查找效率。
在另一种可能的设计中,可以将第一信道系数序列中最后时刻之前的信道系数的复数值减去最后时刻的信道系数的复数值得到第一信道系数变化序列中的每个变化量;或将第一信道系数序列中开始时刻之后的信道系数的复数值减去开始时刻的信道系数的复数值得到第一信道系数变化序列中的每个变化量。通过计算变化量,如改变变化量的数值精度, 可以减少信道变化词典中词的数量,提高查找效率。
在另一种可能的设计中,基于信道变化词典,确定信道变化的值的预测序列对应的信道系数的变化量的序列;根据信道系数的变化量的序列,确定信道系数的预测值。通过信道变化词典,将信道变化的值的预测序列转化为信道系数的变化量的序列,从而得到信道系数的预测值的复数值。
在另一种可能的设计中,可以统计信道系数的多个变化量中的每个变化量在信道系数变化序列中的出现频次;然后根据出现频次,对信道系数的每个变化量进行赋值整数得到信道变化的值;最后建立信道变化的值与每个变化量的映射关系,生成信道变化词典。
在另一种可能的设计中,当多个变化量中目标变化量的出现频次大于预设阈值时,对信道系数的目标变化量进行赋值整数。通过设置阈值减少信道变化词典中的词的数量。
在另一种可能的设计中,词典由多个键值对组成,通过查找词典中的键可以得到键对应的值,信道变化词典可以包括信道变化词典A和信道变化词典B。其中,信道变化词典A中的键为信道系数的变化量,值为信道变化的值。信道变化词典B中的键为信道变化的值,值为信道系数的变化量。在从信道系数变化序列转换到信道系数变化的值序列时,使用信道变化词典A。在从信道系数变化的值序列转换到信道系数变化序列时,使用信道变化词典B。通过信道变化词典A实现信道预测前的信道系数转换,通过信道变化词典B实现信道预测后的信道系数转换,从而实现对复数值的信道系数的预测。
在另一种可能的设计中,可以首先获取第二信道系数序列,第二信道系数序列包括信道系数的多个复数值。其次,根据第二信道系数序列,确定第二信道系数变化序列,第二信道系数变化序列包括信道系数的多个变化量。然后,从信道变化词典中查找与第二信道系数变化序列中每个变化量对应的信道变化的值,并生成第二信道变化的值序列。最后,将第二信道变化的值序列输入到神经网络进行训练得到信道预测模型。通过将大量的信道系数序列输入到神经网络中进行训练,提高信道预测模型的预测准确度,从而提高信道预测的准确度。
在另一种可能的设计中,神经网络的输入维度的大小、神经网络的输出维度的大小与信道变化词典中映射关系的个数相同
在另一种可能的设计中,可以获取神经网络输出的每个预测值的概率;然后确定信道变化词典中每两个变化量的复数值之间的差距,并生成信道变化差距矩阵;根据信道变化差距矩阵和每个预测值的概率,确定每个预测值的概率的加权平均值;最后根据加权平均值,确定神经网络是否已经训练完成。通过将与信道变化词典中的复数值相关的冗余信息加入到损失函数进行计算,在训练进行到一定程度后替换原有的损失函数,或者随着训练的进行将逐渐增加的权值加到原有的损失函数上,以便减少真实环境中的干扰。
在另一种可能的设计中,在训练得到信道预测模型之后,可以对信道预测模型进行训练,以便得到更新后的信道预测模型。例如,可以保持信道预测模型中某些层的参数不变,改变其他部分层的参数,得到更新后的信道预测模型。通过对信道预测模型进行更新,提高信道预测模型预测信道系数得准确性。
在另一种可能的设计中,信道预测模型的结构为神经网络的模型,该模型内的参数(如权重)可以通过神经网络进行训练改变,也可以通过赋值的方式改变。通信设备可以通过 有线或无线的方式发送或接收这些参数的位置和值。其中,位置可以包括参数所属信道预测模型的编号、参数所属信道预测模型中的层编号以及参数所属信道预测模型中的层的参数矩阵中的位置编号。通过对信道预测模型进行更新,提高信道预测模型预测信道系数得准确性。
在另一种可能的设计中,在根据第一信道系数序列以及预设的信道变化词典,确定第二时段的信道系数的预测值之后,可以根据预测值,对数据进行解调或译码,也可以根据预测值进行自适应传输来提高系统吞吐。
在另一种可能的设计中,当使用预测值进行解调、译码或自适应传输所产生的系统吞吐率高于预测吞吐阈值时,可以使用信道预测模型进行信道预测。当使用预测值进行解调、译码或自适应传输所产生的系统吞吐率不高于预测吞吐阈值时,可以使用信道估计得到信道系数。
在另一种可能的设计中,可以使用通过信道估计得到的信道估计值对信道估计窗口T1内的数据进行解调、译码或自适应传输,也可以使用信道系数的预测值对信道预测窗口T2内的数据进行解调、译码或自适应传输。或者,可以获取通过信道估计得到的信道估计值,根据信道估计的信道估计值以及预测值,确定信道预测加权值;根据信道预测加权值,对信道预测窗口T2内的数据进行解调、译码或自适应传输。通过结合信道估计和信道预测模型进行信道系数的预测,提高信道预测的准确性。
在另一种可能的设计中,可以在信道预测窗口的信道系数序列上进行外插处理,得到信道预测窗口的信道外插值,然后确定信道估计值、信道外插值以及预测值的加权平均值,并使用该加权平均值进行解调、译码或自适应传输。这样可以提高信道预测值的鲁棒性。
在另一种可能的设计中,可以得到信道预测窗口的信号值,通过结合信号值和信道预测模型进行信道系数的预测,提高信道预测的准确性。
第二方面,本申请实施例提供了一种信道预测装置,该信道预测装置被配置为实现上述第一方面中通信设备所执行的方法和功能,由硬件/软件实现,其硬件/软件包括与上述功能相应的单元。
第三方面,本申请实施例提供了一种信道预测设备,包括:处理器、存储器和通信总线,其中,通信总线用于实现处理器和存储器之间连接通信,处理器执行存储器中存储的程序用于实现上述第一方面提供的一种信道预测方法中的步骤。
在一个可能的设计中,本申请实施例提供的信道预测设备可以包含用于执行上述方法设计中信道预测装置的行为相对应的模块。模块可以是软件和/或是硬件。
在另一种可能的设计中,处理器和存储器还可以集成在一起。该信道预测设备可以是芯片。
第四方面,本申请实施例提供了一种计算机可读存储介质,计算机可读存储介质中存储有指令,当其在计算机上运行时,使得计算机执行上述各方面的方法。
第五方面,本申请实施例提供了一种包含指令的计算机程序产品,当其在计算机上运行时,使得计算机执行上述各方面的方法。
附图说明
为了更清楚地说明本申请实施例或背景技术中的技术方案,下面将对本申请实施例或背景技术中所需要使用的附图进行说明。
图1是本申请实施例提供的一种无线通信系统的架构示意图;
图2是本申请实施例提供的一种线性时变信道模型;
图3是本申请实施例提供的一种卷积神经网络的结构示意图;
图4是本申请实施例提供的一种信道预测方法的流程示意图;
图5是本申请实施例提供的一种信道变化词典的图表;
图6是本申请实施例提供的另一种信道变化词典的图表;
图7是本申请实施例提供的一种信道预测装置的结构示意图;
图8是本申请实施例提出的一种信道预测设备的结构示意图。
具体实施方式
下面结合本申请实施例中的附图对本申请实施例进行描述。
请参见图1,图1是本申请实施例提供的一种无线通信系统的架构示意图。该无线通信系统包括多个通信设备,例如,通信设备1、通信设备2和通信设备3。该通信设备可以为终端设备,也可以为基站。基站为终端设备提供通信服务。基站向终端设备发送下行数据,采用信道编码对数据进行编码,信道编码后的数据经过星座调制后传输到终端设备。终端设备向基站发送上行数据,上行数据也可以采用信道编码进行编码,编码后的数据经过星座调制后传输到基站。终端设备可以指用户设备,可以是指提供到用户的语音和/或数据连接的设备,也可以被连接到诸如膝上型计算机或台式计算机等的计算设备,或者其可以是诸如个人数字助理(personal digital assistant,PDA)等的独立设备。终端设备还可以称为系统、用户单元、用户站、移动站、移动台、远程站、接入点、远程终端、接入终端、用户终端、用户代理或用户装置。基站可以是用于与终端设备通信的设备,可以为接入点、中继节点、基站收发台(base transceiver station,BTS)、节点B(nodeB,NB)、演进型节点(evolved node B,eNB)或5G基站(next generation node B,gNB),指在空中接口上通过一个或多个扇区与无线终端进行通信的接入网络中的设备。通过将已接收的空中接口帧转换为IP分组,基站可以作为无线终端和接入网络的其余部分之间的路由器,接入网络可以包括因特网协议网络。基站还可以对空中接口的属性的管理进行协调。在本申请实施例中,该无线通信系统可以应用于信道预测。通过信道预测方法获取信道系数,并将信道系数用于基站或终端设备的数据解调、译码或自适应传输,从而获取原始传输的数据。
如图2所示,图2是本申请实施例提供的一种线性时变信道模型。该线性时变信道模型存在:r(t)=s(t)*h(t;τ)+n(t),其中,s(t)是时刻t的输入信号,r(t)是时刻t的输出信号,h(t;τ)是信道的冲激响应,τ表示时延,h(t;τ)表示在时刻t、延时为τ时信道对输入信号的畸变和延时,*为卷积算子。n(t)是调制信道上存在的加性噪声,又被称为加性干扰,与输入信号s(t)无关。从该线性时变信道模型可知,信道估计可以表示为估计从发送天线到接收天线之间的无线信道的响应,根据接收机多径信道影响产生了幅度和相位变化并叠加了白噪声的接收序列来估计出信道的时域或频域特性。由于真实信道环境的随机性,通常的 信道估计算法只是基于当前时刻的测量信号进行计算,没考虑历史时刻估计的信道系数,但随着神经网络的发展,利用神经网络在看似随机的历史时刻的信道数据中学习信道特征并做出推理,预测出当前时刻甚至未来时刻的信道系数。
应理解,信道环境的随机性来自于复杂的多径效应,阴影效应以及小尺度衰落等等。信道系数的取值符合一些统计特性,标准组织提出的用于链路和系统仿真的信道模型就是基于这些统计特性拟合出来的,这些信道模型对于真实环境的信道预测无能为力,但是神经网络适合处理这种非常复杂不能建模的具体问题。现有的基于神经网络的信道预测方法,如自适应线性(adaline)或非线性方法,包括多层感知机(multilayer perceptron,MLP)、循环神经网络(recurrent neural networks,RNN)、卷积神经网络(convolutional neural network,CNN)和深度神经网络(deep neural network,DNN)等等。
如图3所示,图3是本申请实施例提供的一种卷积神经网络的结构示意图。该卷积神经网络包括输入层,卷积层,全连接层和输出层等部分。其中,输入层可以用于输入数据,输入数据一般为整数。卷积层主要用于提取输入数据的特征。不同的卷积核提取输入数据的特征不同,卷积层的卷积核的数量越多,提取的输入数据的特征越多。全连接层可以包含多个全连接层,后一层的神经元节点跟前一层的每个神经节点连接,每一层的神经元节点分别通过连接线上的权值进行前向传播,并进行加权组合得到下一层神经元节点的输入。输出层的神经元节点的数目可以根据具体应用任务进行设定。
基于上述神经网络进行信道预测过程中,信道系数是一个复数值,信道系数的幅度和相位代表传输信号在功率和延时上产生的变换。经典的神经网络由于无法处理复数,只能学习信道的幅度(即功率),或者将信道的幅度和相位分别输入到神经网络进行预测,这样破坏了信道系数的原本信息,预测的信道系数准确率低。为了解决上述技术问题,本申请实施例提供了如下解决方案。
请参考图4,图4是本申请实施例提供的一种信道预测方法的流程示意图。如图所示,本申请实施例中的步骤至少包括:
S401,获取第一时段的第一信道系数序列,所述第一信道系数序列包括信道系数的多个复数值。
具体实现中,通信设备可以通过信道估计方式估计出一段时间内的信道系数,组成第一信道系数序列。通信设备也可以通过有线或无线的方式接收其他通信设备发送的第一信道系数序列。其中,第一信道系数序列中包括信道系数的多个复数值,信道系数的复数值可以使用笛卡尔坐标表示,如x+yi的形式,也可以表示为ae ib的极坐标形式。信道系数可以是时域信道系数,也可以是频域信道系数。时域信道系数与频域信道系数之间可以相互转换,例如,频域信道系数可以通过傅里叶逆变换转换到时域信道系数,时域信道系数可以通过傅里叶变换转换到频域信道系数。在真实场景下获取信道系数时,可以对估计出的信道系数在时域或频域上进行采样、降噪、滤波、幅度归一化、定点化等信号处理操作。为了得到没有放置导频序列的时频资源处的信道系数,也可以对该时频位置附近的有放置导频序列的信道估计值进行计算。其中,计算包括向内插值、向外插值等信号处理操作。计算还包括输入一个神经网络。其中,第一时段可以为一个时间点,也可以为一个时间段。
S402,根据所述第一信道系数序列以及预设的信道变化词典,确定第二时段的所述信道系数的预测值,所述信道变化词典包括所述信道系数的每个变化量与信道变化的值的映射关系,所述第二时段晚于所述第一时段。其中,第二时段可以为一个时间点,也可以为一个时间段。本步骤具体包括:
第一,根据所述第一信道系数序列,确定第一信道系数变化序列,所述第一信道系数变化序列包括所述信道系数的多个变化量。
在一种实现方式中,可以将所述第一信道系数序列中后一时刻的所述信道系数的复数值减去前一时刻的所述信道系数的复数值得到所述第一信道系数变化序列中的每个变化量;或将所述第一信道系数序列中前一时刻的所述信道系数的复数值减去后一时刻的所述信道系数的复数值得到所述第一信道系数变化序列中的每个变化量。例如,第一信道系数序列为{x 1,x 2,x 3,x 4,…,x i,…x n},其中,x i(i=1,2,……,n)为复数值,将后一时刻的信道系数的复数值减去前一时刻的信道系数的复数值,第一信道系数变化序列为{x 2-x 1,x 3-x 2,x 4-x 3,…,x n-x n-1}。
在另一种实现方式中,可以将所述第一信道系数序列中最后时刻之前的所述信道系数的复数值减去所述最后时刻的所述信道系数的复数值得到所述第一信道系数变化序列中的每个变化量;或将所述第一信道系数序列中开始时刻之后的所述信道系数的复数值减去所述开始时刻的所述信道系数的复数值得到所述第一信道系数变化序列中的每个变化量。其中,第一信道系数变化序列包括第一信道系数序列的变化量,可以通过加减乘除、一阶导数、二阶导数等任何可以表征前一时刻的信道系数的复数值和后一时刻的信道系数的复数值的特征运算得到。
可选的,由于不同的终端设备的移动速度会导致信道变化有快有慢,在真实系统中,频率偏移的不同也会导致信道系数的相位变化有快有慢。因此,可以首先对第一信道系数序列做采样或插值操作,然后根据采样或插值操作后的第一信道系数序列,得到第一信道系数变化序列。或者可以对第一信道系数序列的相位乘以一个频率偏移量,使得信道的相位旋转变快或变慢。例如,可以对长度300000个符号的第一信道系数序列进行采样,每30个符号取一个值重新组成采样序列,采样序列长度为10000,然后根据该采样序列确定第一信道系数变化序列。又如,可以对长度1000个符号的第一信道系数序列进行插值平滑操作,每两个符号中间插入九个值,得到长度10000的插值序列,然后根据该插值序列确定第一信道系数变化序列。
第二,从所述信道变化词典中查找与所述第一信道系数变化序列中每个变化量对应的信道变化的值,并生成第一信道变化的值序列。
其中,信道变化词典包括键(key)与值(value)组成的集合,键为第一信道系数变化序列中信道系数的变化量。例如,该信道系数的变化量可以为第一信道系数序列中后一时刻的所述信道系数的复数值减去前一时刻的所述信道系数的复数值,或者第一信道系数序列中前一时刻的所述信道系数的复数值减去后一时刻的所述信道系数的复数值。键在信道变化词典中是唯一的,一个键对应一个值。值一般为整数,即信道系数的变化量对应信道变化的值。
在本申请实施例中,信道变化词典可以包括信道变化词典A和信道变化词典B。其 中,信道变化词典A中的键为信道系数的变化量,值为信道变化的值。信道变化词典B中的键为信道变化的值,值为信道系数的变化量。在从信道系数变化序列转换到信道系数变化的值序列时,使用信道变化词典A。在从信道系数变化的值序列转换到信道系数变化序列时,使用信道变化词典B。对于信道变化词典B,通过信道变化的值来查找信道变化量,如果信道变化的值是从1或0开始递增的整数,则信道变化词典可以退化成一个信道变化量的向量,因而不需要存储整数值了。例如,向量的第一个位置的索引就是整数0,向量的第二个位置索引就是整数1,向量的第三个位置索引就是整数2,以此类推,对应相应的信道变化量。如图5所示,图5是本申请实施例提供的一种信道变化词典B的图表。该信道变化词典中的信道变化的值是从1或0开始递增的整数,退化为向量表示形式的前11个元素。
其中,信道变化词典可以是根据当前信道系数的变化量而临时生成的,也可以是预存的(根据其他信道系数变化序列生成的)。生成信道变化词典的过程包括:可以统计信道系数的多个变化量中的每个变化量在信道系数变化序列中的出现频次;然后根据所述出现频次,对信道系数的所述每个变化量进行赋值整数得到所述信道变化的值;最后建立所述信道变化的值与所述每个变化量的映射关系,生成所述信道变化词典。
具体的,可以统计每个信道系数的变化量在信道系数变化序列中的出现频次,并按照出现频次从高到低进行排序、或从低到高进行排序。对于排序后的变化量,可以按照从1开始递增赋值整数,并将赋值的整数作为信道变化词典中信道变化的值。例如,信道变化词典可以表示为:{key1:value1,key2:value2,key3:value3,……,keyN:valueN},其中,key为信道变换词典的键,value为信道变换词典的值。例如,在信道变化词典中{“+0.02-0.02i”:1,“+0.02+0.02i”:2,“-0.02-0.02i”:3,……,“+0.001-0.2i”:1234},其中,信道系数的变化量+0.02-0.02i作为键对应的信道变化的值为1,信道系数的变化量-0.02-0.02i作为键对应的信道变化的值为3,信道系数的变化量+0.001-0.2i作为键对应的信道变化的值为1234。
可选的,当所述多个变化量中目标变化量的所述出现频次大于预设阈值时,对所述信道系数的所述目标变化量进行赋值整数。具体的,在统计每个信道系数的变化量在信道系数变化序列中的出现频次时,可以设置一个阈值,当某个变化量的出现频次大于该预设阈值时,对该变化量赋值整数后保存到信道变化词典,当某个变化量的出现频次小于等于该预设阈值时,不对该变化量赋值整数,并且不将该变化量保存到信道变化词典中。例如,预设阈值为10,在长度100000的信道系数变化序列中,对于仅出现了10次或10次以下的低频变化量,可以将这些低频变化量不保存到信道变化词典中。在这种情况下,可以在信道变化词典中加入一个新词并赋值,例如,使用新词“unk”作为键并赋值为0。在信道系数变化序列转换为信道变化的值序列时,对于没有保存到信道变化词典中的变化量,由于查找不到该变化量以及对应的整数,可以统一使用新词“unk”来代替,并对应整数0。
如图6所示,图6是本申请实施例提供的另一种信道变化词典的图表。其中,N为信道变化词典中的变化量的总数,图表中仅列举了频次最高10个的变化量和频次最低的10个的变化量、以及对应的信道变化的值。另外,该信道变化词典可以只保留出现频次大于10的变化量。而对于出现频次小于等于10的变化量,可以使用“unk”来代替并对应整数 0。
第三,将所述第一信道变化的值序列输入到信道预测模型中进行预测得到信道变化的值的预测序列。可以通过对神经网络进行训练得到信道预测模型,具体方法如下:
首先,可以获取第二信道系数序列,所述第二信道系数序列包括所述信道系数的多个复数值。其中,所述第二信道系数序列可以是基于模拟场景获取的信道系数,通过信道模型仿真得到;也可以是基于真实场景获取的信道系数,通过通信设备在真实通信环境中进行信道估计采集得到。其次,根据所述第二信道系数序列,确定第二信道系数变化序列,所述第二信道系数变化序列包括所述信道系数的多个变化量。本步骤与上述根据第一信道系数序列确定第二信道系数变化序列的方法相同,本步骤不再赘述。然后,从所述信道变化词典中查找与所述第二信道系数变化序列中每个变化量对应的信道变化的值,并生成第二信道变化的值序列。本步骤与上述从信道变化词典中查找与第一信道系数变化序列中每个变化量对应的信道变化的值的方法相同,本步骤不再赘述。最后,将所述第二信道变化的值序列输入到神经网络进行训练得到所述信道预测模型。
其中,神经网络可以为循环神经网络RNN,也可以为卷积神经网络CNN,也可以为深度神经网络DNN,或者是三者任意组合。其中,神经网络的输入除了信道变化的值,也可以包含信号值,即对图2中的r(t)所进行的各种计算得到的值。RNN可以结合随时间反向传播(back propagation through time,BPTT)方法或长短期记忆网络(long short-term memory,LSTM)及它们的变种,也可以利用基于RNN的序列到序列(Seq2Seq)网络及其变种。其中,Seq2Seq网络可以完成两个序列的转换,通常用于两种语言的翻译。在本申请实施例中,可以将前一时刻的序列当作一种语言,后一时刻的序列当作另一种语言,当Seq2Seq网络实现这两种语言的翻译后,相当于使用前一时刻的序列生成下一时刻的序列,实现信道预测。Seq2Seq网络具有两套RNN网络,表达能力更强。其中,两套RNN网络在训练时可以使用相同的信道变化词典。
需要说明的是,信道系数变化序列中的每个变化量对应一个整数之外,在输入神经网络之前每个变化量还对应一个词向量(word vector),词向量代表该变化量作为神经网络的第一层在神经网络中训练,该层也可称为嵌入层(embedding layer),不断更新参数。其中,神经网络的输入维度的大小、神经网络的输出维度的大小与信道变化词典中映射关系的个数可以相同。例如,每个变化量可以由一个长度为100的词向量表示,在输入神经网络之前每个词向量可以通过初始化得到,也可以使用预存的经过预训练(pre-trained)的词向量。另外,为了加快神经网络的收敛,避免训练从零开始,除了引入预存的词向量,也可以使用预先存储的预训练的信道预测模型进行替换。
在对神经网络进行训练时,可以根据信道系数的变化量的先验信息,辅助神经网络更快的收敛。例如,从信道变化词典中可知,“+0.03+0.001i”对应整数105,“+0.03+0.002i”对应整数2370,通过整数看不出两者联系,但实际上两个复数值的距离很近,甚至系统可以容忍这两个复数值互换,但是这些信息在整数化的过程中已经丢失。在训练过程中,每个变化量对应一个词向量,如果多个变化量之间的差距小于某个值,则绑定所述多个变化量对应的词向量。例如,可以将所述多个变化量中损失最小的变化量对应的词向量赋值给所述多个变化量中的其他变化量;或者,可以选中所述多个变化量中的某个变化量,其 它变化量都使用该变化量对应的词向量进行训练,在计算其它变化量损失时也可以全部使用该变化量的损失。或者,可以将所有距离接近的变化量对应的词向量取平均值后,再将平均值赋值给某个变化量对应的词向量或所有距离接近的变化量对应的词向量。另外,可以确定信道变化词典中每两个变化量的复数值之间的差距,并生成信道变化差距矩阵。如果信道变化词典中存在N个映射关系,则信道变化差距矩阵的维度为N*N。其中,信道变化矩阵中的第i行的元素表示信道变化词典中的第i个信道变化量与1~N信道变化量的之间的差距。
在对神经网络进行训练时,可以设置一个损失函数,通过比较神经网络输出的预测值和训练集中的目标值,以最小化损失函数为目标,对神经网络做梯度下降。可以采用交叉熵、负最大似然、均方误差等统计方法来计算损失函数。并且将与信道变化词典中的复数值相关的冗余信息加入到损失函数进行计算,使用这些冗余信息计算得到损失函数,并在训练进行到一定程度后替换原有的损失函数,或者随着训练的进行将逐渐增加的权值加到原有的损失函数上,以便减少真实环境中的干扰。
结合上述信道变化差距矩阵和损失函数,以便确定得到信道预测模型。具体包括:首先获取所述神经网络输出的每个预测值的概率;确定信道变化词典中每两个变化量的复数值之间的差距,并生成信道变化差距矩阵;然后根据信道变化差距矩阵和每个预测值的概率,确定每个预测值的概率的加权平均值;最后根据所述加权平均值,确定所述神经网络是否已经训练完成。
例如,首先,确定信道变化词典中每两个变化量的复数值之间的差距,并生成信道变化差距矩阵。其次,对信道变化差距矩阵中每行中的差距最小的k个数做归一化,将该行中的其他N-k个数置零,或者,可以将信道变化差距矩阵中的每行中差距小于预设值的k个数置1,将该行中的其他N-k个数置零。或者,可以将信道变化差距矩阵中的每行中差距小于预设值的k个数中的某个数置1,将该行中的其他数置零。其中,k为大于1小于N的正整数,N为正整数。然后,在损失函数中,调用信道变化差距矩阵、以及神经网络的输出的每个预测值的概率,根据目标值j读取信道变化差距矩阵的第j行,将信道变化矩阵中第j行的数据与预测值的概率进行加权求和。最后,对损失函数计算得到的所有损失值取平均值。随着训练的进行,当训练集的损失函数小于验证集的损失函数,则神经网络出现过拟合,此时可以调整超参,提高网络的泛化能力并重新训练,或者提前停止神经网络的当前状态。随着损失值的减小,神经网络训练成功,即得到信道预测模型。
可选的,在训练得到信道预测模型之后,可以对信道预测模型进行训练,以便得到更新后的信道预测模型。例如,可以保持信道预测模型中某些层的参数不变,改变其他部分层的参数,得到更新后的信道预测模型。在更新完成信道预测模型后,通信设备可以通过有线或无线的方式向其它通信设备发送更新后的信道预测模型。可选的,信道预测模型的结构为神经网络的模型,该模型内的参数(如权重)可以通过神经网络进行训练改变,也可以通过赋值的方式改变。通信设备可以通过有线或无线的方式发送或接收这些参数的位置和值。其中,位置可以包括参数所属信道预测模型的编号、参数所属信道预测模型中的层编号以及参数所属信道预测模型中的层的参数矩阵中的位置编号。例如,基站在对信道预测模型W的迁移学习中,可以保持信道预测模型中的部分层的参数不变,只改变信道 预测模型W中的少量参数,来预测当前信道环境。基站也可以在网络空闲时将需要修改的参数广播下发或者发给特定用户,以便特定用户的通信设备按照修改的参数来更新信道预测模型。
进一步可选的,在更新信道预测模型过程中,可以使用每次更新后得到的平均误码率作为奖励惩罚措施,对信道预测模型的更新动作强化学习,得到最优的更新动作。还可以撤回某个更新动作。例如,可以利用更新后的信道预测模型进行译码,得到译码所产生误码率,根据误码率选取奖励或惩罚动作,或采取打分制。误码率越低越接近奖励,或高分值。误码率越高越接近惩罚,或低分值。然后将奖励、惩罚或者分值反馈给信道预测模型的训练模块,激励训练模块在强化机制下训练,从而得到更新的信道预测模型。
进一步可选的,在更新信道预测模型过程中,可以使用每次更新后得到的吞吐率作为奖励惩罚措施,对信道预测模型的更新动作强化学习,得到最优的更新动作。还可以撤回某个更新动作。例如,可以利用更新后的信道预测模型进行自适应传输,得到传输系统所产生的吞吐率,根据吞吐率选取奖励或惩罚动作,或采取打分制。吞吐率越高越接近奖励,或高分值。吞吐率越低越接近惩罚,或低分值。然后将奖励、惩罚或者分值反馈给信道预测模型的训练模块,激励训练模块在强化机制下训练,从而得到更新的信道预测模型。
第四,根据所述信道变化的值的预测序列,确定所述信道系数的预测值。
具体实现中,可以基于所述信道变化词典,确定所述信道变化的值的预测序列对应的信道系数的变化量的序列;然后根据所述信道系数的变化量的序列,确定所述信道系数的预测值。例如,如果第一信道系数变化序列为后一时刻的信道系数减前一时刻的信道系数,第一信道系数变化序列为{x 2-x 1,x 3-x 2,x 4-x 3,……,x n-x n-1},信道系数变化预测序列为{y 1,y 2,y 3,y 4,……,y n},信道系数序列的最后一个值为y 0=x n,则信道系数的预测值为{y 0+y 1,y 0+y 1+y 2,y 0+y 1+y 2+y 3,……,y 0+y 1+……+y n}。如果第一信道系数变化序列是前一时刻的信道系数减后一时刻的信道系数,信道系数变化预测序列为{y 1,y 2,y 3,y 4,……,y n},信道系数序列的最后一个值为y 0=x n,则信道系数的预测值为{y 0-y 1,y 0-y 1-y 2,y 0-y 1-y 2-y 3,……,y 0-y 1-……-y n}。
可选的,在根据所述第一信道系数序列以及预设的信道变化词典,确定第二时段的所述信道系数的预测值之后,可以根据所述预测值,对数据进行解调或译码,也可以根据预测值进行自适应传输来提高系统吞吐,其中,自适应传输包括链路自适应、自适应调制、调度、功率控制、预编码的选择等。
可选的,当使用所述预测值进行解调、译码或自适应传输所产生的系统吞吐率高于预测吞吐阈值时,可以使用所述信道预测模型进行信道预测。当使用所述预测值进行解调、译码或自适应传输所产生的系统吞吐率不高于预测吞吐阈值时,可以使用传统的信道估计。
例如,通信设备A使用预测值进行译码时,不进行信道估计,当预测终止判断时间Tp内的误码率超过预测误码阈值Ft时,通信设备A可以直接关闭信道预测模型,并重启信道估计。或者可以向通信设备B请求关闭信道预测模型,在接收到通信设备B下发的指令之后,关闭信道预测模型,并重启信道估计。当预测终止判断时间Tp内的误码率未超过预测误码阈值Ft时,可以继续使用信道预测模型进行预测。其中,预测终止判断时 间Tp小于等于预测时长T2。可选的,在使用预测值进行译码时,当预测终止判断时间内的误码率超过预测误码阈值N次之后,关闭信道预测模型。可选的,在通信设备A向通信设备B请求关闭信道预测模型之后,如果等待时间长度T后没有接收通信设备B发送的指令,通信设备A可以重新发送请求,或者直接关闭信道预测模型并重启信道估计。
可选的,可以使用信道估计值对信道估计窗口T1内的数据进行解调、译码或自适应传输,也可以使用信道系数的预测值对信道预测窗口T2内的数据进行解调、译码或自适应传输。或者,可以获取通过信道估计得到的信道估计值,根据通过信道估计得到的信道估计值以及所述预测值,确定信道预测加权值;根据所述信道预测加权值,对信道预测窗口T2内的数据进行解调、译码或自适应传输。或者,在前面时间段进行信道估计,然后在后面时间段使用信道预测模型进行预测。或者,可以在信道预测窗口的信道系数序列上进行外插处理,得到信道预测窗口的信道外插值,然后确定信道估计值、信道外插值以及预测值的加权平均值,并使用该加权平均值进行解调、译码或自适应传输。这样可以提高信道预测值的鲁棒性。
在本申请实施例中,通过获取包括信道系数的多个复数值的信道系数序列,并确定信道系数的变化量,然后从信道变化词典中查找与每个变化量对应的信道变化的值生成信道变化的值序列,将信道变化的值序列输入到信道预测模型进行预测,得到信道变化的值的预测序列。实现对复数值的信道系数的预测,从而保留信道系数的原本信息,对信道的幅度和相位同时进行预测,提高了预测的信道系数准确度。通过信道预测模型进行预测,也可以提高获取信道系数的效率。
请参考图7,图7是本申请实施例提供的一种信道预测装置的结构示意图。如图所示,本申请实施例中的装置包括:
获取模块701,用于获取第一时段的第一信道系数序列,所述第一信道系数序列包括信道系数的多个复数值。
处理模块702,用于根据所述第一信道系数序列以及预设的信道变化词典,确定第二时段的所述信道系数的预测值,所述信道变化词典包括所述信道系数的每个变化量与信道变化的值的映射关系,所述第二时段晚于所述第一时段。
可选的,处理模块702,还用于所述第一信道系数序列,确定第一信道系数变化序列,所述第一信道系数变化序列包括所述信道系数的多个变化量;从所述信道变化词典中查找与所述第一信道系数变化序列中每个变化量对应的信道变化的值,并生成第一信道变化的值序列;将所述第一信道变化的值序列输入到信道预测模型中进行预测得到信道变化的值的预测序列;根据所述信道变化的值的预测序列,确定所述信道系数的预测值。
可选的,处理模块702,还用于将所述第一信道系数序列中后一时刻的所述信道系数的复数值减去前一时刻的所述信道系数的复数值得到所述第一信道系数变化序列中的每个变化量;或将所述第一信道系数序列中前一时刻的所述信道系数的复数值减去后一时刻的所述信道系数的复数值得到所述第一信道系数变化序列中的每个变化量。
可选的,处理模块702,还用于基于所述信道变化词典,确定所述信道变化的值的预测序列对应的信道系数的变化量的序列;根据所述信道系数的变化量的序列,确定所述信 道系数的预测值。
可选的,获取模块701,还用于获取第二信道系数序列,所述第二信道系数序列包括所述信道系数的多个复数值;
处理模块702,还用于根据所述第二信道系数序列,确定第二信道系数变化序列,所述第二信道系数变化序列包括所述信道系数的多个变化量;从所述信道变化词典中查找与所述第二信道系数变化序列中每个变化量对应的信道变化的值,并生成第二信道变化的值序列;将所述第二信道变化的值序列输入到神经网络进行训练得到所述信道预测模型。
可选的,获取模块701,还用于获取所述神经网络输出的每个预测值的概率;
处理模块702,还用于确定所述信道变化词典中每两个变化量的复数值之间的差距,并生成信道变化差距矩阵;根据所述信道变化差距矩阵和所述每个预测值的概率,确定所述每个预测值的概率的加权平均值;根据所述加权平均值,确定所述神经网络是否已经训练完成。
其中,所述神经网络的输入维度的大小、所述神经网络的输出维度的大小与所述信道变化词典中所述映射关系的个数相同。
可选的,处理模块702,还用于统计所述信道系数的多个变化量中每个变化量的出现频次;根据所述出现频次,对所述信道系数的所述每个变化量进行赋值整数得到所述信道变化的值;建立所述信道变化的值与所述每个变化量的映射关系,生成所述信道变化词典。
可选的,处理模块702,还用于当所述多个变化量中目标变化量的所述出现频次大于预设阈值时,对所述信道系数的所述目标变化量进行赋值整数。
可选的,处理模块702,还用于根据所述预测值,对数据进行解调、译码或自适应传输。
可选的,处理模块702,还用于当使用所述预测值进行解调、译码或自适应传输所产生的系统吞吐率高于预测吞吐阈值时,使用所述信道预测模型进行信道预测。
可选的,获取模块701,还用于获取通过信道估计得到的信道估计值;
处理模块702,还用于根据所述信道估计值以及所述预测值,确定信道预测加权值;根据所述信道预测加权值,对所述数据进行解调、译码或自适应传输。
需要说明的是,各个模块的实现还可以对应参照图4所示的方法实施例的相应描述,执行上述实施例中通信设备所执行的方法和功能。
请继续参考图8,图8是本申请实施例提出的一种信道预测设备的结构示意图。如图8所示,该信道预测设备801可以包括:至少一个处理器801,至少一个通信接口802,至少一个存储器803和至少一个通信总线804。当然,在有些实施方式中,处理器和存储器还可以集成在一起。该信道预测设备可以是芯片。
在本申请实施例中,存储器803可以用于存储信道预测模型和信道变化词典。处理器801可以包括中央处理器、基带处理器和神经网络处理器。例如,中央处理器在接收基带处理器的信道预测指令之后,可以从存储器803读取信道预测模型并移入到神经网络处理器中,基带处理器通过中央处理模块将信道系数序列写入神经网络处理器。神经网络处理器根据信道变化词典处理信道系数序列,并将处理得到的信道变化的值序列输入到信道预 测模型中进行预测,最后得到信道系数的预测值。最后中央处理器将信道系数的预测值写入到基带处理器,基带处理器根据预测值对数据进行解调、译码或自适应传输。
其中,处理器801可以实现或执行结合本申请公开内容所描述的各种示例性的逻辑方框,模块和电路。所述处理器也可以是实现计算功能的组合,例如包含一个或多个微处理器组合,数字信号处理器和微处理器的组合等等。通信总线804可以是外设部件互连标准PCI总线或扩展工业标准结构EISA总线等。所述总线可以分为地址总线、数据总线、控制总线等。为便于表示,图8中仅用一条粗线表示,但并不表示仅有一根总线或一种类型的总线。通信总线804用于实现这些组件之间的连接通信。其中,本申请实施例中设备的通信接口802用于与其他节点设备进行信令或数据的通信。存储器803可以包括易失性存储器,例如非挥发性动态随机存取内存(nonvolatile random access memory,NVRAM)、相变化随机存取内存(phase change RAM,PRAM)、磁阻式随机存取内存(magetoresistive RAM,MRAM)等,还可以包括非易失性存储器,例如至少一个磁盘存储器件、电子可擦除可编程只读存储器(electrically erasable programmable read-only memory,EEPROM)、闪存器件,例如反或闪存(NOR flash memory)或是反及闪存(NAND flash memory)、半导体器件,例如固态硬盘(solid state disk,SSD)等。存储器803可选的还可以是至少一个位于远离前述处理器801的存储装置。存储器803中可选的还可以存储一组程序代码,且处理器801可选的还可以执行存储器803中所执行的程序。
获取第一时段的第一信道系数序列,所述第一信道系数序列包括信道系数的多个复数值;
根据所述第一信道系数序列以及预设的信道变化词典,确定第二时段的所述信道系数的预测值,所述信道变化词典包括所述信道系数的每个变化量与信道变化的值的映射关系,所述第二时段晚于所述第一时段。
可选的,处理器801还用于执行如下操作:
根据所述第一信道系数序列,确定第一信道系数变化序列,所述第一信道系数变化序列包括所述信道系数的多个变化量;
从所述信道变化词典中查找与所述第一信道系数变化序列中每个变化量对应的信道变化的值,并生成第一信道变化的值序列;
将所述第一信道变化的值序列输入到信道预测模型中进行预测得到信道变化的值的预测序列;
根据所述信道变化的值的预测序列,确定所述信道系数的预测值。
可选的,处理器801还用于执行如下操作:
将所述第一信道系数序列中后一时刻的所述信道系数的复数值减去前一时刻的所述信道系数的复数值得到所述第一信道系数变化序列中的每个变化量;或
将所述第一信道系数序列中前一时刻的所述信道系数的复数值减去后一时刻的所述信道系数的复数值得到所述第一信道系数变化序列中的每个变化量。
可选的,处理器801还用于执行如下操作:
基于所述信道变化词典,确定所述信道变化的值的预测序列对应的信道系数的变化量的序列;
根据所述信道系数的变化量的序列,确定所述信道系数的预测值。
可选的,处理器801还用于执行如下操作:
获取第二信道系数序列,所述第二信道系数序列包括所述信道系数的多个复数值;
根据所述第二信道系数序列,确定第二信道系数变化序列,所述第二信道系数变化序列包括所述信道系数的多个变化量;
从所述信道变化词典中查找与所述第二信道系数变化序列中每个变化量对应的信道变化的值,并生成第二信道变化的值序列;
将所述第二信道变化的值序列输入到神经网络进行训练得到所述信道预测模型。
可选的,处理器801还用于执行如下操作:
获取所述神经网络输出的每个预测值的概率;
确定所述信道变化词典中每两个变化量的复数值之间的差距,并生成信道变化差距矩阵;
根据所述信道变化差距矩阵和所述每个预测值的概率,确定所述每个预测值的概率的加权平均值;
根据所述加权平均值,确定所述神经网络是否已经训练完成。
其中,所述神经网络的输入维度的大小、所述神经网络的输出维度的大小与所述信道变化词典中所述映射关系的个数相同。
可选的,处理器801还用于执行如下操作:
统计所述信道系数的多个变化量中每个变化量的出现频次;
根据所述出现频次,对所述信道系数的所述每个变化量进行赋值整数得到所述信道变化的值;
建立所述信道变化的值与所述每个变化量的映射关系,生成所述信道变化词典。
可选的,处理器801还用于执行如下操作:
当所述多个变化量中目标变化量的所述出现频次大于预设阈值时,对所述信道系数的所述目标变化量进行赋值整数。
可选的,处理器801还用于执行如下操作:
根据所述预测值,对数据进行解调、译码或自适应传输。
可选的,处理器801还用于执行如下操作:
当使用所述预测值进行解调、译码或自适应传输所产生的系统吞吐率高于预测吞吐阈值时,使用所述信道预测模型进行信道预测。
可选的,处理器801还用于执行如下操作:
获取通过信道估计得到的信道估计值;
根据所述信道估计值以及所述预测值,确定信道预测加权值;
根据所述信道预测加权值,对所述数据进行解调、译码或自适应传输。
进一步的,处理器还可以与存储器和通信接口相配合,执行上述申请实施例中信道预测装置的操作。
在上述实施例中,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。 当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。所述计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行所述计算机程序指令时,全部或部分地产生按照本申请实施例所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。所述计算机指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一个计算机可读存储介质传输,例如,所述计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线(DSL))或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。所述计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质,(例如,软盘、硬盘、磁带)、光介质(例如,DVD)、或者半导体介质(例如固态硬盘solid state disk(SSD))等。
以上所述的具体实施方式,对本申请的目的、技术方案和有益效果进行了进一步详细说明。凡在本申请的精神和原则之内,所作的任何修改、等同替换、改进等,均应包含在本申请的保护范围之内。

Claims (28)

  1. 一种信道预测方法,其特征在于,所述方法包括:
    获取第一时段的第一信道系数序列,所述第一信道系数序列包括信道系数的多个复数值;
    根据所述第一信道系数序列以及预设的信道变化词典,确定第二时段的所述信道系数的预测值,所述信道变化词典包括所述信道系数的每个变化量与信道变化的值的映射关系,所述第二时段晚于所述第一时段。
  2. 如权利要求1所述的方法,其特征在于,所述根据所述第一信道系数序列以及预设的信道变化词典,确定第二时段的所述信道系数的预测值包括:
    根据所述第一信道系数序列,确定第一信道系数变化序列,所述第一信道系数变化序列包括所述信道系数的多个变化量;
    从所述信道变化词典中查找与所述第一信道系数变化序列中每个变化量对应的信道变化的值,并生成第一信道变化的值序列;
    将所述第一信道变化的值序列输入到信道预测模型中进行预测得到信道变化的值的预测序列;
    根据所述信道变化的值的预测序列,确定所述信道系数的预测值。
  3. 如权利要求2所述的方法,其特征在于,所述根据所述第一信道系数序列,确定第一信道系数变化序列包括:
    将所述第一信道系数序列中后一时刻的所述信道系数的复数值减去前一时刻的所述信道系数的复数值得到所述第一信道系数变化序列中的每个变化量;或
    将所述第一信道系数序列中前一时刻的所述信道系数的复数值减去后一时刻的所述信道系数的复数值得到所述第一信道系数变化序列中的每个变化量。
  4. 如权利要求2或3所述的方法,其特征在于,所述根据所述信道变化的值的预测序列,确定所述信道系数的预测值包括:
    基于所述信道变化词典,确定所述信道变化的值的预测序列对应的信道系数的变化量的序列;
    根据所述信道系数的变化量的序列,确定所述信道系数的预测值。
  5. 如权利要求1-4任一项所述的方法,其特征在于,所述方法还包括:
    获取第二信道系数序列,所述第二信道系数序列包括所述信道系数的多个复数值;
    根据所述第二信道系数序列,确定第二信道系数变化序列,所述第二信道系数变化序列包括所述信道系数的多个变化量;
    从所述信道变化词典中查找与所述第二信道系数变化序列中每个变化量对应的信道变化的值,并生成第二信道变化的值序列;
    将所述第二信道变化的值序列输入到神经网络进行训练得到所述信道预测模型。
  6. 如权利要求5所述的方法,其特征在于,所述方法还包括:
    获取所述神经网络输出的每个预测值的概率;
    确定所述信道变化词典中每两个变化量的复数值之间的差距,并生成信道变化差距矩阵;
    根据所述信道变化差距矩阵和所述每个预测值的概率,确定所述每个预测值的概率的加权平均值;
    根据所述加权平均值,确定所述神经网络是否已经训练完成。
  7. 如权利要求5或6所述的方法,其特征在于,所述神经网络的输入维度的大小、所述神经网络的输出维度的大小与所述信道变化词典中所述映射关系的个数相同。
  8. 如权利要求1-7任一项所述的方法,其特征在于,所述方法还包括:
    统计所述信道系数的多个变化量中每个变化量的出现频次;
    根据所述出现频次,对所述信道系数的所述每个变化量进行赋值整数得到所述信道变化的值;
    建立所述信道变化的值与所述每个变化量的映射关系,生成所述信道变化词典。
  9. 如权利要求8所述的方法,其特征在于,所述根据所述出现频次,对所述信道系数的所述每个变化量进行赋值整数得到所述信道变化的值包括:
    当所述多个变化量中目标变化量的所述出现频次大于预设阈值时,对所述信道系数的所述目标变化量进行赋值整数。
  10. 如权利要求1-9任一项所述的方法,其特征在于,所述根据所述第一信道系数序列以及预设的信道变化词典,确定第二时段的所述信道系数的预测值之后,还包括:
    根据所述预测值,对数据进行解调、译码或自适应传输。
  11. 如权利要求10所述的方法,其特征在于,所述根据所述预测值,对数据进行解调、译码或自适应传输之后,还包括:
    当使用所述预测值进行解调、译码或自适应传输所产生的系统吞吐率高于预测吞吐阈值时,使用所述信道预测模型进行信道预测。
  12. 如权利要求10所述的方法,其特征在于,所述根据所述预测值,对数据进行解调、译码或自适应传输包括:
    获取通过信道估计得到的信道估计值;
    根据所述信道估计值以及所述预测值,确定信道预测加权值;
    根据所述信道预测加权值,对所述数据进行解调、译码或自适应传输。
  13. 一种信道预测装置,其特征在于,所述装置包括:
    获取模块,用于获取第一时段的第一信道系数序列,所述第一信道系数序列包括信道系数的多个复数值;
    处理模块,用于根据所述第一信道系数序列以及预设的信道变化词典,确定第二时段的所述信道系数的预测值,所述信道变化词典包括所述信道系数的每个变化量与信道变化的值的映射关系,所述第二时段晚于所述第一时段。
  14. 如权利要求13所述的装置,其特征在于,
    所述处理模块,还用于根据所述第一信道系数序列,确定第一信道系数变化序列,所述第一信道系数变化序列包括所述信道系数的多个变化量;从所述信道变化词典中查找与所述第一信道系数变化序列中每个变化量对应的信道变化的值,并生成第一信道变化的值序列;将所述第一信道变化的值序列输入到信道预测模型中进行预测得到信道变化的值的预测序列;根据所述信道变化的值的预测序列,确定所述信道系数的预测值。
  15. 如权利要求14所述的装置,其特征在于,
    所述处理模块,还用于将所述第一信道系数序列中后一时刻的所述信道系数的复数值减去前一时刻的所述信道系数的复数值得到所述第一信道系数变化序列中的每个变化量;或将所述第一信道系数序列中前一时刻的所述信道系数的复数值减去后一时刻的所述信道系数的复数值得到所述第一信道系数变化序列中的每个变化量。
  16. 如权利要求14或15所述的装置,其特征在于,
    所述处理模块,还用于基于所述信道变化词典,确定所述信道变化的值的预测序列对应的信道系数的变化量的序列;根据所述信道系数的变化量的序列,确定所述信道系数的预测值。
  17. 如权利要求13-16任一项所述的装置,其特征在于,
    所述获取模块,还用于获取第二信道系数序列,所述第二信道系数序列包括所述信道系数的多个复数值;
    所述处理模块,还用于根据所述第二信道系数序列,确定第二信道系数变化序列,所述第二信道系数变化序列包括所述信道系数的多个变化量;从所述信道变化词典中查找与所述第二信道系数变化序列中每个变化量对应的信道变化的值,并生成第二信道变化的值序列;将所述第二信道变化的值序列输入到神经网络进行训练得到所述信道预测模型。
  18. 如权利要求17所述的装置,其特征在于,
    所述获取模块,还用于获取所述神经网络输出的每个预测值的概率;
    所述处理模块,还用于确定所述信道变化词典中每两个变化量的复数值之间的差距,并生成信道变化差距矩阵;根据所述信道变化差距矩阵和所述每个预测值的概率,确定所述每个预测值的概率的加权平均值;根据所述加权平均值,确定所述神经网络是否已经训 练完成。
  19. 如权利要求17或18所述的装置,其特征在于,所述神经网络的输入维度的大小、所述神经网络的输出维度的大小与所述信道变化词典中所述映射关系的个数相同。
  20. 如权利要求13-19任一项所述的装置,其特征在于,
    所述处理模块,还用于统计所述信道系数的多个变化量中每个变化量的出现频次;根据所述出现频次,对所述信道系数的所述每个变化量进行赋值整数得到所述信道变化的值;建立所述信道变化的值与所述每个变化量的映射关系,生成所述信道变化词典。
  21. 如权利要求20所述的装置,其特征在于,
    所述处理模块,还用于当所述多个变化量中目标变化量的所述出现频次大于预设阈值时,对所述信道系数的所述目标变化量进行赋值整数。
  22. 如权利要求13-21任一项所述的装置,其特征在于,
    所述处理模块,还用于根据所述预测值,对数据进行解调、译码或自适应传输。
  23. 如权利要求22所述的装置,其特征在于,
    所述处理模块,还用于当使用所述预测值进行解调、译码或自适应传输所产生的系统吞吐率高于预测吞吐阈值时,使用所述信道预测模型进行信道预测。
  24. 如权利要求22所述的装置,其特征在于,
    所述获取模块,还用于获取通过信道估计得到的信道估计值;
    所述处理模块,还用于根据所述信道估计值以及所述预测值,确定信道预测加权值;根据所述信道预测加权值,对所述数据进行解调、译码或自适应传输。
  25. 一种信道预测设备,其特征在于,包括:存储器、通信总线以及处理器,其中,所述存储器用于存储程序代码,所述处理器用于调用所述程序代码,用于执行权利要求1-12任一项所述的方法。
  26. 如权利要求25所述的信道预测设备,其特征在于,所述信道预测设备为芯片。
  27. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质中存储有指令,当其在计算机上运行时,使得计算机执行权利要求1-12任一项所述的方法。
  28. 一种包含指令的计算机程序产品,其特征在于,当其在计算机上运行时,使得计算机执行权利要求1-12任一项所述的方法。
PCT/CN2019/100276 2018-09-10 2019-08-12 一种信道预测方法及相关设备 WO2020052394A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP19859240.4A EP3836435B1 (en) 2018-09-10 2019-08-12 Channel prediction method and related device
US17/196,337 US11424963B2 (en) 2018-09-10 2021-03-09 Channel prediction method and related device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811054284.7 2018-09-10
CN201811054284.7A CN110890930B (zh) 2018-09-10 2018-09-10 一种信道预测方法、相关设备及存储介质

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/196,337 Continuation US11424963B2 (en) 2018-09-10 2021-03-09 Channel prediction method and related device

Publications (1)

Publication Number Publication Date
WO2020052394A1 true WO2020052394A1 (zh) 2020-03-19

Family

ID=69745370

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/100276 WO2020052394A1 (zh) 2018-09-10 2019-08-12 一种信道预测方法及相关设备

Country Status (4)

Country Link
US (1) US11424963B2 (zh)
EP (1) EP3836435B1 (zh)
CN (1) CN110890930B (zh)
WO (1) WO2020052394A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10817903B2 (en) * 2015-02-18 2020-10-27 Verizon Media Inc. Systems and methods for inferring matches and logging-in of online users across devices

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11696119B2 (en) * 2019-12-16 2023-07-04 Qualcomm Incorporated Neural network configuration for wireless communication system assistance
CN111541505B (zh) * 2020-04-03 2021-04-27 武汉大学 一种面向ofdm无线通信系统的时域信道预测方法及系统
CN113810155B (zh) * 2020-06-17 2022-11-18 华为技术有限公司 信道编译码方法和通信装置
KR20230030850A (ko) * 2021-08-26 2023-03-07 현대자동차주식회사 머신러닝 기반 방송 정보를 제공하는 방법 및 그 장치
CN116418432A (zh) * 2021-12-30 2023-07-11 维沃移动通信有限公司 模型更新方法及通信设备
CN115001896B (zh) * 2022-06-28 2024-01-19 中国人民解放军海军工程大学 一种冗余通道的自适应切换方法
CN117811941A (zh) * 2022-09-30 2024-04-02 华为技术有限公司 一种信道时间序列的预测方法和相关设备
CN115396055B (zh) * 2022-10-27 2023-01-10 中兴通讯股份有限公司 信道预测方法及装置

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5812992A (en) * 1995-05-24 1998-09-22 David Sarnoff Research Center Inc. Method and system for training a neural network with adaptive weight updating and adaptive pruning in principal component space
CN1543085A * 2003-03-27 2004-11-03 无线通信系统
CN1578181A (zh) * 2003-07-01 2005-02-09 因芬尼昂技术股份公司 瑞克接收器之信道系数加权方法及装置
CN105142177A (zh) * 2015-08-05 2015-12-09 西安电子科技大学 复数神经网络信道预测方法
CN107135041A (zh) * 2017-03-28 2017-09-05 西安电子科技大学 一种基于相空间重构的rbf神经网络信道预测方法

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE10350362B4 (de) * 2003-10-29 2008-06-19 Infineon Technologies Ag Verfahren zum Vorhersagen eines Kanalkoeffizienten
JP4410267B2 (ja) * 2007-03-27 2010-02-03 株式会社東芝 無線通信方法と装置
SG177277A1 (en) * 2009-06-24 2012-02-28 Fraunhofer Ges Forschung Audio signal decoder, method for decoding an audio signal and computer program using cascaded audio object processing stages
US9020871B2 (en) 2010-06-18 2015-04-28 Microsoft Technology Licensing, Llc Automated classification pipeline tuning under mobile device resource constraints
WO2014055939A1 (en) 2012-10-04 2014-04-10 Huawei Technologies Co., Ltd. User behavior modeling for intelligent mobile companions
US9244888B2 (en) 2013-03-15 2016-01-26 Microsoft Technology Licensing, Llc Inferring placement of mobile electronic devices
CN104753635B (zh) * 2013-12-31 2018-03-23 展讯通信(上海)有限公司 通信系统中信道质量指示的反馈方法与装置、通信终端
US20160091965A1 (en) 2014-09-30 2016-03-31 Microsoft Corporation Natural motion-based control via wearable and mobile devices
WO2016072893A1 (en) * 2014-11-05 2016-05-12 Telefonaktiebolaget L M Ericsson (Publ) Training of models predicting the quality of service after handover for triggering handover
CN104753835B (zh) * 2015-01-23 2019-05-31 北京信息科技大学 一种阅读器多接收天线的分片调整的信道参数估计实现方法
US10514799B2 (en) 2016-09-08 2019-12-24 Google Llc Deep machine learning to perform touch motion prediction
US20180150742A1 (en) 2016-11-28 2018-05-31 Microsoft Technology Licensing, Llc. Source code bug prediction
EP3352387B1 (en) * 2017-01-23 2021-02-24 Alcatel Lucent Digital multi-channel nonlinearity compensation scheme for optical coherent communication
CN107729927B (zh) 2017-09-30 2020-12-18 南京理工大学 一种基于lstm神经网络的手机应用分类方法

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5812992A (en) * 1995-05-24 1998-09-22 David Sarnoff Research Center Inc. Method and system for training a neural network with adaptive weight updating and adaptive pruning in principal component space
CN1543085A * 2003-03-27 2004-11-03 无线通信系统
CN1578181A (zh) * 2003-07-01 2005-02-09 因芬尼昂技术股份公司 瑞克接收器之信道系数加权方法及装置
CN105142177A (zh) * 2015-08-05 2015-12-09 西安电子科技大学 复数神经网络信道预测方法
CN107135041A (zh) * 2017-03-28 2017-09-05 西安电子科技大学 一种基于相空间重构的rbf神经网络信道预测方法

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3836435A4 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10817903B2 (en) * 2015-02-18 2020-10-27 Verizon Media Inc. Systems and methods for inferring matches and logging-in of online users across devices
US20210012378A1 (en) * 2015-02-18 2021-01-14 Verizon Media Inc. Systems and methods for inferring matches and logging-in of online users across devices
US11704694B2 (en) 2015-02-18 2023-07-18 Yahoo Ad Tech LIC Systems and methods for inferring matches and logging-in of online users across devices

Also Published As

Publication number Publication date
EP3836435A4 (en) 2021-10-06
EP3836435B1 (en) 2023-01-11
EP3836435A1 (en) 2021-06-16
CN110890930A (zh) 2020-03-17
CN110890930B (zh) 2021-06-01
US20210194733A1 (en) 2021-06-24
US11424963B2 (en) 2022-08-23

Similar Documents

Publication Publication Date Title
WO2020052394A1 (zh) 一种信道预测方法及相关设备
WO2021057245A1 (zh) 带宽预测方法、装置、电子设备及存储介质
Saxena et al. Reinforcement learning for efficient and tuning-free link adaptation
CN107135041B (zh) 一种基于相空间重构的rbf神经网络信道预测方法
CN112713966B (zh) 基于似然估计修正信噪比的编码调制切换方法
Saxena et al. Contextual multi-armed bandits for link adaptation in cellular networks
CN114553963B (zh) 移动边缘计算中基于深度神经网络的多边缘节点协作缓存方法
CN105519030A (zh) 通信系统中进行快速链路自适应的装置与计算机程序产品
CN113422812B (zh) 一种服务链部署方法及装置
WO2021107697A1 (en) Smart decoder
Mashhadi et al. Deep reinforcement learning based adaptive modulation with outdated CSI
WO2022203761A4 (en) Estimating direction of arrival of electromagnetic energy using machine learning
Wang et al. Adaptive resource allocation for semantic communication networks
CN108494447B (zh) 一种物理层安全通信中的资源分配方法
CN114125962B (zh) 一种自适应网络切换方法、系统及存储介质
US20220303158A1 (en) End-to-end channel estimation in communication networks
Gao et al. An Efficient Approximation for Nakagami‐m Quantile Function Based on Generalized Opposition‐Based Quantum Salp Swarm Algorithm
CN113037409B (zh) 基于深度学习的大规模mimo系统信号检测方法
Saxena et al. Model-based adaptive modulation and coding with latent thompson sampling
Chuan et al. Machine learning based popularity regeneration in caching-enabled wireless networks
CN117581493A (zh) 链路适配
CN111769975A (zh) Mimo系统信号检测方法及系统
Wang et al. Collaborative Caching in Edge Computing via Federated Learning and Deep Reinforcement Learning
WO2022227061A1 (zh) 资源配置方法及相关设备
TWI806707B (zh) 通訊方法及其通訊裝置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19859240

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2019859240

Country of ref document: EP

Effective date: 20210308