CN113285896B - Time-varying channel prediction method based on stack type ELM - Google Patents
Time-varying channel prediction method based on stack type ELM
- Publication number: CN113285896B (application CN202110479200.XA)
- Authority: CN (China)
- Legal status: Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L25/00—Baseband systems
- H04L25/02—Details ; arrangements for supplying electrical power along data transmission lines
- H04L25/0202—Channel estimation
- H04L25/0224—Channel estimation using sounding signals
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04B—TRANSMISSION
- H04B17/00—Monitoring; Testing
- H04B17/30—Monitoring; Testing of propagation channels
- H04B17/391—Modelling the propagation channel
- H04B17/3913—Predictive models, e.g. based on neural network models
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L25/00—Baseband systems
- H04L25/02—Details ; arrangements for supplying electrical power along data transmission lines
- H04L25/0202—Channel estimation
- H04L25/024—Channel estimation channel estimation algorithms
- H04L25/0254—Channel estimation channel estimation algorithms using neural network algorithms
Abstract
The invention discloses a time-varying channel prediction method based on a stacked ELM (Extreme Learning Machine), which comprises the following steps: acquiring a time-domain channel state information estimate from the pilots; constructing an offline historical channel training sample set from this estimate; inputting the historical channel training sample set into a neural network and training it with the ELM method to obtain the hidden-layer output matrix of the neural network; obtaining the output weights and the output of the neural network from the hidden-layer output matrix; training the neural network with a stacked ELM method of depth J, outputting the final feature matrix, and calculating the hidden-layer output matrix and the initial output weights; performing online prediction with the offline-trained model to obtain the feature matrix and hidden-layer output matrix of each new sample and the final output weights of the neural network; and obtaining the predicted channel state information from the output weights of the neural network. The method comprises an offline training part and an online prediction part, which together remarkably improve time-varying channel prediction accuracy.
Description
Technical Field
The invention relates to the technical field of wireless communication channel information prediction, and in particular to a time-varying channel prediction method based on a stacked Extreme Learning Machine (ELM).
Background
In recent years, wireless communication technology has developed rapidly, and research on wireless channels has diversified. With the large-scale deployment of high-speed railways (HSRs) operating at speeds in excess of 300 km/h, wireless communication in the HSR environment is drawing increasing attention worldwide. In an HSR environment, the high-speed motion of the train causes a large Doppler shift, which makes the channel fast time-varying; in this scenario, acquiring high-precision channel state information (CSI) is critical. Although CSI can be obtained through channel estimation, owing to the fast time variation of the channel and the processing delay of channel estimation, the obtained CSI is outdated and cannot faithfully reflect the current channel condition. Channel prediction, by contrast, can infer the channel condition at a future transmission time from outdated channel state information, significantly improving system performance.
Conventional channel prediction methods mainly use linear prediction models, obtaining the CSI at a future time as a linear combination of the CSI at the current and past times. Although the linear prediction method achieves good prediction performance in many cases, it is not suitable for fast time-varying channels with nonlinear characteristics.
Typical nonlinear channel prediction methods include the support vector machine (SVM) method and deep learning methods. The article "The Rayleigh Fading Channel Prediction via Deep Learning", published in 2018 by Liao R F et al., discloses a channel prediction method based on a back-propagation (BP) neural network; however, the BP algorithm suffers from slow learning, poor generalization capability, and convergence to local optima.
The article "Extreme learning machine", published in 2006 by Huang G B et al., discloses a prediction method based on a single-hidden-layer feedforward neural network, referred to as the ELM method for short; however, the ELM method adopts a shallow structure and has certain limitations when processing raw data with complex characteristics.
The article "A fast and accurate online sequential learning algorithm for feedforward networks", published in 2006 by Liang N Y et al., discloses an online sequential extreme learning machine (OS-ELM) channel prediction method, which updates the network output weights obtained by ELM with newly arrived training samples; however, this method adopts a shallow ELM, cannot extract useful deep features from the raw data, and therefore has limited channel prediction accuracy.
Disclosure of Invention
Purpose of the invention: in view of the above problems, the invention aims to provide a time-varying channel prediction method based on a stacked ELM, which establishes an offline training procedure that trains a neural network with a stacked ELM, and then applies this offline-trained model to newly constructed historical channel sample sets online to acquire the channel information at the current moment.
Technical scheme: the time-varying channel prediction method based on a stacked ELM according to the invention comprises the following steps:
(1) Acquiring a time domain channel state information estimation value according to the pilot frequency;
(2) Constructing an offline historical channel training sample set according to the time domain channel state information estimation value, wherein the training sample set comprises input samples and ideal channel information;
(3) Inputting a historical channel training sample set into a neural network, training the neural network by using an ELM method, and obtaining a hidden layer output matrix of the neural network according to a weight vector, a bias value and an input sample;
(4) Obtaining an output weight value of the neural network according to the output matrix of the hidden layer of the neural network and the ideal channel information;
(5) Obtaining neural network output according to the neural network hidden layer output matrix and the output weight;
(6) Carrying out data processing on the original characteristic matrix by utilizing the output of the neural network to obtain a new characteristic matrix;
(7) Repeating the steps (3) to (6), training the neural network by a stacked ELM method with depth J, and outputting the final feature matrix X_J;
(8) Calculating a hidden layer output matrix and an initial output weight of the neural network with the training depth of J by using the characteristic matrix in the step (7), and finishing the construction of the offline training method;
(9) Inputting channel parameter estimated values of previous D moments into a neural network, and constructing a new historical channel sample, wherein D is the number of input neurons;
(10) Performing online prediction on the new historical channel sample in the step (9) by using an offline training method to obtain a feature matrix and a hidden layer output matrix corresponding to the sample;
(11) Updating the neural network output weight value after offline training in the step (8) by using a recursion formula according to the hidden layer output matrix obtained in the step (10) to obtain an online predicted final neural network output weight value;
(12) And obtaining the predicted channel state information of the neural network according to the output weight of the neural network.
Further, in step (1), the time-domain channel state information estimate is obtained at the receiving end from the received signal by the least squares method and linear interpolation.
Further, the training sample set X_1 of step (2) has the expression:

X_1 = {(x_1, h_1), ..., (x_u, h_u), ..., (x_U, h_U)}

where (x_u, h_u) is the u-th training sample, x_u is the input sample, and h_u is the ideal channel information, with expression:

h_u = [Γ(h(u+D)), Γ(h(u+D+1)), ..., Γ(h(u+D+P−1))]

where D denotes the number of input neurons, P denotes the number of output neurons, h(n) is the ideal time-domain channel information, and Γ(·) is the operation converting a complex number into real numbers, with expression:

Γ(h(n)) = [h_R(n), h_I(n)]

where h_R(n) = Re(h(n)) and h_I(n) = Im(h(n)) are the real-part and imaginary-part operations, respectively.
Further, step (3) comprises: according to the training sample set X_1 obtained in step (2), after the neural network is trained with ELM, the hidden-layer output matrix is obtained as:

A = [a(x_1)^T, ..., a(x_U)^T]^T, with a(x_u) = [G(w_1, b_1, x_u), ..., G(w_Q, b_Q, x_u)]

where a(x_u) is the hidden-layer output vector of the u-th sample, G(w_q, b_q, x_u) is the hidden-layer feature mapping function of the q-th hidden node, w_q denotes the weight vector between the input layer and the q-th hidden node, b_q denotes the bias of the q-th hidden node, Q is the number of hidden-layer nodes, and x_u is the input sample;

the expression of the neural network output weight β in step (4) is:

β = A†H

where A† is the Moore–Penrose generalized inverse of A and H = [h_1^T, ..., h_U^T]^T collects the ideal channel information;

the neural network output expression in step (5) is:

O_1 = Aβ

Step (6) comprises the following: according to the neural network output O_1, the original feature matrix is randomly offset once, and the offset feature matrix is transformed once with the kernel function to obtain the new feature matrix X_2, with expression:

X_2 = G(X_1 + λO_1θ_1)

where the projection matrix θ_j ∈ R^{P×D} is randomly sampled from the normal distribution N(0,1), λ is a weight parameter controlling the degree of the random offset, and j is the index of the ELM module.
Further, step (7) comprises: the input–output relationship of each ELM module is expressed as:

X_{j+1} = G(X_1 + λO_jθ_j)
further, the step (8) includes:
according to the final feature matrix X J Obtaining a hidden layer output matrix A J So as to obtain the initial output weight beta of the network after the stacking type ELM method with the depth of J (0) I.e. by
Further, the step (9) includes: the newly constructed historical channel sample x is the historical channel information of the previous D time instants.
Further, the recursion formulas of step (11) are:

C = C_0 − C_0A^T(I + AC_0A^T)^{−1}AC_0

β = β^(0) + CA^T(h − Aβ^(0))

where β denotes the final output weight, C is the intermediate matrix used in updating β^(0) to β, and I is the identity matrix.
Further, step (12) comprises: according to the updated output weight β obtained in step (11), using the relationship between the output weight and the output, the neural network output is obtained as:

ĥ = Γ^{−1}(G(δ_Jx_J + b_J)β)

where x_J is the feature vector obtained from the new sample x by the depth-J stacked ELM method, δ_J is the weight matrix between the input layer and the hidden layer generated randomly for the J-th time, with δ_J = [w_1, ..., w_Q]^T, b_J is the J-th randomly generated hidden-layer bias matrix, and Γ^{−1}(·) is the inverse operation of Γ(·), converting real numbers into complex numbers.
Beneficial effects: compared with the prior art, the invention has the following remarkable advantages. The method comprises two parts, offline training and online prediction. In offline training, deep features of the channel are extracted from historical channel data using a stacked ELM method to obtain the initial output weights of the neural network, capturing deep information in the input data. The online prediction part updates the network output weights in real time based on the newly constructed historical channel samples and the initial output weights, adapting to channel variations, and obtains the channel information at the current moment from the updated output weights.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 compares the MSE performance of channel prediction by the present invention and by the conventional stacked-ELM-based channel prediction method under stacked ELM methods of different depths;
FIG. 3 compares the MSE performance of channel prediction by the present invention with different numbers of training samples;
FIG. 4 compares the MSE performance of the present invention and prior channel prediction methods under different signal-to-noise ratios.
Detailed Description
A flowchart of the time-varying channel prediction method based on the stacked ELM described in this embodiment is shown in fig. 1, and includes the following steps:
(1) According to the pilots, the time-domain channel state information estimate ĥ is obtained at the receiving end using the received signal, the least squares method, and linear interpolation;
(2) According to the time-domain channel state information estimate ĥ, the offline historical channel training sample set X_1 is constructed, with expression:

X_1 = {(x_1, h_1), ..., (x_u, h_u), ..., (x_U, h_U)}

where (x_u, h_u) is the u-th training sample, x_u is the input sample, and h_u is the ideal channel information; x_u and h_u are respectively:

x_u = [Γ(ĥ(u)), Γ(ĥ(u+1)), ..., Γ(ĥ(u+D−1))]

h_u = [Γ(h(u+D)), Γ(h(u+D+1)), ..., Γ(h(u+D+P−1))]

where D denotes the number of input neurons, P denotes the number of output neurons, h(n) is the ideal time-domain channel information, and Γ(·) is the operation converting a complex number into real numbers, with expression:

Γ(h(n)) = [h_R(n), h_I(n)]

where h_R(n) = Re(h(n)) and h_I(n) = Im(h(n)) are the real-part and imaginary-part operations, respectively.
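As an illustrative note (not part of the patent text), the sample construction of step (2) can be sketched in Python/NumPy as follows; the function names and the interleaved [Re, Im] layout produced by gamma() are assumptions made for this sketch.

```python
import numpy as np

def gamma(h):
    """Gamma(.): convert a complex vector into interleaved [Re, Im] real pairs.
    The interleaved layout is an assumption; the patent only defines
    Gamma(h(n)) = [h_R(n), h_I(n)] per channel tap."""
    out = np.empty(2 * h.size)
    out[0::2], out[1::2] = h.real, h.imag
    return out

def build_samples(h_est, h_ideal, D, P):
    """Sliding-window sample set of step (2): x_u collects D past channel
    estimates, h_u the following P ideal channel values, both via Gamma."""
    U = len(h_est) - D - P + 1
    X = np.array([gamma(h_est[u:u + D]) for u in range(U)])
    H = np.array([gamma(h_ideal[u + D:u + D + P]) for u in range(U)])
    return X, H
```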
(3) The historical channel training sample set X_1 is input into the neural network, the neural network is trained by the ELM method, and the hidden-layer output matrix of the neural network is obtained from the weight vectors, bias values, and input samples, with expression:

A = [a(x_1)^T, ..., a(x_U)^T]^T, with a(x_u) = [G(w_1, b_1, x_u), ..., G(w_Q, b_Q, x_u)]

where a(x_u) is the hidden-layer output vector of the u-th sample, G(w_q, b_q, x_u) is the hidden-layer feature mapping function of the q-th hidden node, w_q denotes the weight vector between the input layer and the q-th hidden node, b_q denotes the bias of the q-th hidden node, Q is the number of hidden-layer nodes, and x_u is the input sample. G(·) is the hidden-layer activation function; here the Sigmoid function is chosen.
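Continuing the illustrative sketch above, the hidden-layer output matrix A of step (3) with the Sigmoid activation can be computed as follows (the row-per-sample layout is an assumption):

```python
def hidden_output(X, W, b):
    """Hidden-layer output matrix A of step (3): A[u, q] = G(w_q, b_q, x_u)
    with the Sigmoid activation chosen in the embodiment.
    X: (U, d) samples, W: (Q, d) random input weights, b: (Q,) biases."""
    return 1.0 / (1.0 + np.exp(-(X @ W.T + b)))
```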
(4) The output weight β of the neural network is obtained from the hidden-layer output matrix A and the ideal channel information, with expression:

β = A†H

where A† is the Moore–Penrose generalized inverse of A and H = [h_1^T, ..., h_U^T]^T, h_u being the ideal channel information.
(5) The output of the neural network is obtained from the hidden-layer output matrix A and the output weight β, with expression:

O_1 = Aβ
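Steps (4) and (5) then reduce to a least-squares solve; the Moore–Penrose pseudoinverse used below is the standard ELM solution and is assumed here:

```python
def elm_fit(A, H):
    """Steps (4)-(5): output weights beta = pinv(A) @ H (standard ELM
    least-squares solution, assumed here) and network output O1 = A @ beta."""
    beta = np.linalg.pinv(A) @ H
    return beta, A @ beta
```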
(6) According to the neural network output O_1, the original feature matrix is randomly offset once, and the offset feature matrix is transformed once with the kernel function to obtain the new feature matrix X_2, with expression:

X_2 = G(X_1 + λO_1θ_1)

where the projection matrix θ_j ∈ R^{P×D} is randomly sampled from the normal distribution N(0,1), λ is a weight parameter controlling the degree of the random offset, and j is the index of the ELM module.
(7) Steps (3) to (6) are repeated, and the training sample set X_1 is processed by the stacked ELM method of depth J to obtain the final feature matrix X_J; the input–output relationship of each ELM module is:

X_{j+1} = G(X_1 + λO_jθ_j)
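Putting steps (3) to (7) together, a depth-J offline training loop might look like the following sketch; the N(0,1) initialization of the hidden-layer parameters and the treatment of the last module are assumptions where the text is silent:

```python
def stacked_elm_offline(X1, H, Q, J, lam, seed=0):
    """Offline training, steps (3)-(8): J stacked ELM modules, each
    perturbing the original features X1 with its own output O_j."""
    rng = np.random.default_rng(seed)
    P2 = H.shape[1]                       # real-valued output width
    D2 = X1.shape[1]                      # real-valued input width
    Xj = X1
    for j in range(J):
        W = rng.standard_normal((Q, D2))  # random input weights (N(0,1) assumed)
        b = rng.standard_normal(Q)        # random biases (N(0,1) assumed)
        A = hidden_output(Xj, W, b)
        beta, Oj = elm_fit(A, H)
        if j < J - 1:                     # X_{j+1} = G(X1 + lam * O_j @ theta_j)
            theta = rng.standard_normal((P2, D2))   # projection theta_j ~ N(0,1)
            Xj = 1.0 / (1.0 + np.exp(-(X1 + lam * Oj @ theta)))
    # W, b of the J-th module define A_J; beta is the initial weight beta^(0)
    return W, b, beta, A, Xj
```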
(8) The hidden-layer output matrix and the initial output weight of the neural network trained to depth J are calculated from the feature matrix of step (7), completing the construction of the offline training method:

according to the final feature matrix X_J, the hidden-layer output matrix A_J is obtained, and thus the initial output weight β^(0) of the network after the depth-J stacked ELM method, i.e.:

β^(0) = A_J†H
(9) According to the number of samples needing to update the weight, constructing on-line historical channel samples; the newly constructed historical channel sample x is the historical channel information of the previous D time instants.
(10) Performing online prediction on the historical channel sample in the step (9) by using an offline training method to obtain a feature matrix and a hidden layer output matrix corresponding to the sample;
(11) Updating the neural network output weight value after offline training in the step (8) by using a recursion formula according to the hidden layer output matrix obtained in the step (10) to obtain an online predicted final neural network output weight value;
the recurrence formula is:
C=C 0 -C 0 A T (I+AC 0 A T ) -1 AC 0
β=β (0) +CA T (h-Aβ (0) )
wherein beta represents the final output weight, and C is used to convert beta (0) β is calculated and I is expressed as an identity matrix.
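A sketch of the recursive update of step (11); the initialization of C_0 suggested in the comment follows the usual OS-ELM convention and is an assumption here:

```python
def online_update(beta0, C0, A_new, H_new):
    """Step (11): RLS-style recursive update of the output weights.
    C0 would typically be initialized as inv(A_J.T @ A_J) (OS-ELM
    convention -- an assumption; the patent only gives the recursion)."""
    U_new = A_new.shape[0]
    K = np.linalg.inv(np.eye(U_new) + A_new @ C0 @ A_new.T)
    C = C0 - C0 @ A_new.T @ K @ A_new @ C0               # C = C0 - C0 A^T (I + A C0 A^T)^-1 A C0
    beta = beta0 + C @ A_new.T @ (H_new - A_new @ beta0) # beta = beta0 + C A^T (h - A beta0)
    return beta, C
```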
(12) According to the updated output weight β obtained in step (11), using the relationship between the output weight and the output, the neural network output is obtained as:

ĥ = Γ^{−1}(G(δ_Jx_J + b_J)β)

where x_J is the feature vector obtained from x by the depth-J stacked ELM method, δ_J is the weight matrix between the input layer and the hidden layer generated randomly for the J-th time, with δ_J = [w_1, ..., w_Q]^T, b_J is the J-th randomly generated hidden-layer bias matrix, and Γ^{−1}(·) is the inverse operation of Γ(·), i.e., the operation converting real numbers into complex numbers.
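Step (12) finally maps the real-valued network output back to complex CSI; the interleaved Re/Im layout below matches the gamma() helper above and is likewise an assumption:

```python
def predict_csi(xJ, W, b, beta):
    """Step (12): predicted CSI = Gamma^{-1}(a(x_J) beta)."""
    o = hidden_output(xJ[None, :], W, b) @ beta   # real-valued row, width 2P
    return o[0, 0::2] + 1j * o[0, 1::2]           # Gamma^{-1}: Re/Im pairs -> complex
```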
The effect of the stacked-ELM-based time-varying channel prediction method is further illustrated below with a specific embodiment.
An input module is constructed. Consider a single-input single-output OFDM system and assume S_m is the m-th transmitted OFDM symbol in the frequency domain, S_m = [S(m,0), S(m,1), ..., S(m,N−1)]^T, where S(m,k) denotes the transmitted signal on the k-th subcarrier of the m-th OFDM symbol and N is the OFDM symbol length. Applying the IFFT to S_m yields the time-domain transmitted signal:

s(m,n) = (1/N) Σ_{k=0}^{N−1} S(m,k)e^{j2πkn/N}, n = 0, 1, ..., N−1
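As an illustrative sketch (the QPSK modulation is an assumption; NumPy's ifft already includes the 1/N factor):

```python
N = 128                                   # FFT/IFFT length used in the simulation
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
S_m = qpsk[np.random.default_rng(1).integers(0, 4, N)]   # frequency-domain symbol
s_m = np.fft.ifft(S_m)                    # s(m,n) = (1/N) sum_k S(m,k) e^{j2*pi*kn/N}
```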
In an HSR communication environment, since the base stations are built close to the track, there is a strong direct line-of-sight (LOS) component, so the Rician channel is usually adopted as the channel model in the HSR environment, i.e., the channel impulse response is:

h(m,n) = Σ_{l=0}^{L−1} α_l(m,n)δ(n − τ_l)

where α_0(m,n) is the LOS path component of the channel, α_l(m,n), l = 1, ..., L−1 are the scattered path components obeying the Rayleigh distribution, L is the number of paths of the multipath Rician channel, and τ_l is the normalized delay of the l-th path. The Rician factor is defined as the ratio of the LOS power to the total scattered power:

K = E[|α_0(m,n)|^2] / Σ_{l=1}^{L−1} E[|α_l(m,n)|^2]
Assuming that the cyclic prefix is longer than the maximum delay of the wireless transmission channel and that the receiving end achieves ideal timing synchronization, the received signal at the n-th sampling point of the m-th OFDM symbol is:

y(m,n) = Σ_{l=0}^{L−1} α_l(m,n)s(m, n − τ_l) + z(m,n)

where z(m,n) is additive white Gaussian noise.
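A minimal sketch of this receive path, continuing the previous snippet; the per-symbol static taps, integer delays, and noise scaling are simplifying assumptions:

```python
L_paths, cp_len, snr_db = 5, 16, 20
rng = np.random.default_rng(2)
alpha = (rng.standard_normal(L_paths) + 1j * rng.standard_normal(L_paths)) / np.sqrt(2 * L_paths)
tx = np.concatenate([s_m[-cp_len:], s_m])     # cyclic prefix longer than max delay
rx = np.convolve(tx, alpha)[:tx.size]         # multipath channel, taps frozen per symbol
noise = (rng.standard_normal(tx.size) + 1j * rng.standard_normal(tx.size)) / np.sqrt(2)
rx += noise * np.sqrt(np.mean(np.abs(rx) ** 2)) * 10 ** (-snr_db / 20)   # AWGN at given SNR
```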
The performance of the stacked-ELM-based time-varying channel prediction method is analyzed through simulation. Consider an OFDM system with FFT/IFFT length 128 and a comb pilot structure with 32 evenly distributed pilots. The train speed is assumed to be 500 km/h, the channel is a 5-path Rician channel with Rician factor 5, the carrier frequency is 3.5 GHz, and the subcarrier spacing is 15 kHz. The network has D = 10 input neurons, Q = 10 hidden-layer neurons, and P = 1 output neuron. The number of samples in the initial training phase is U = 2000. For performance comparison, the ELM-based, stacked-ELM-based, and OS-ELM-based prediction methods are also simulated.
Fig. 2 shows the MSE performance curves of channel prediction by the present invention and by the stacked-ELM-based prediction method under stacked ELM methods of different depths at a carrier frequency of 3.5 GHz. As the figure shows, for both methods the prediction performance improves as the signal-to-noise ratio increases, and increasing the stack depth improves channel prediction performance, because a deeper stack extracts more useful feature representations from the raw data. At the same stack depth, the present method outperforms the stacked-ELM-based prediction method, because it fully exploits newly arrived historical channel data to update the output weights of the prediction network in real time, tracking the current channel environment more closely.
Fig. 3 shows the MSE performance curves of the present invention for different numbers of training samples with a Rician factor of 10 and a carrier frequency of 3.5 GHz. As the figure shows, the MSE performance improves as the number of training samples increases, since more training samples facilitate the extraction of data features. With the same number of samples used for updating the weights, performance improves as the number of offline training samples increases; with the same offline training samples, performance improves as the number of samples used for updating the weights increases.
Fig. 4 shows the MSE performance curves of various channel prediction methods under different signal-to-noise ratios. As the figure shows, the prediction performance of all methods improves as the signal-to-noise ratio increases. Compared with the ELM-based method, the stacked-ELM-based method adopts a deep ELM model, which helps extract complex data features and thus achieves better performance. The OS-ELM-based method uses new historical channel training samples and a recursive algorithm to update the network output weights, giving it stronger adaptability and improved performance. The present invention adopts a deep ELM model and updates the output weights of the prediction network in real time with newly arrived historical channel data: it both extracts useful feature representations from the raw data and fully exploits the historical channel to update and adjust the output weights, thereby achieving better performance.
Claims (7)
1. A time-varying channel prediction method based on a stacked ELM, characterized by comprising the following steps:
(1) Acquiring a time domain channel state information estimation value according to the pilot frequency;
(2) Constructing an offline historical channel training sample set according to the time domain channel state information estimation value, wherein the training sample set comprises input samples and ideal channel information;
(3) Inputting the historical channel training sample set into a neural network, training the neural network by the ELM method, and obtaining the hidden-layer output matrix of the neural network from the weight vectors, bias values, and input samples:

according to the training sample set X_1 obtained in step (2), after the neural network is trained with ELM, the hidden-layer output matrix is obtained as:

A = [a(x_1)^T, ..., a(x_U)^T]^T, with a(x_u) = [G(w_1, b_1, x_u), ..., G(w_Q, b_Q, x_u)]

where a(x_u) is the hidden-layer output vector of the u-th sample, G(w_q, b_q, x_u) is the hidden-layer feature mapping function of the q-th hidden node, w_q denotes the weight vector between the input layer and the q-th hidden node, b_q denotes the bias of the q-th hidden node, Q is the number of hidden-layer nodes, and x_u is the input sample;
(4) Obtaining an output weight value of the neural network according to the output matrix of the hidden layer of the neural network and the ideal channel information;
the expression of the neural network output weight β being:

β = A†H

where A† is the Moore–Penrose generalized inverse of A and H = [h_1^T, ..., h_U^T]^T collects the ideal channel information;
(5) Obtaining neural network output according to the neural network hidden layer output matrix and the output weight;
the neural network output expression is:
O 1 =Aβ
(6) Processing the original feature matrix with the neural network output to obtain a new feature matrix, comprising:

according to the neural network output O_1, the original feature matrix is randomly offset once, and the offset feature matrix is transformed once with the kernel function to obtain the new feature matrix X_2, with expression:

X_2 = G(X_1 + λO_1θ_1)

where θ_1 is the first projection matrix and λ is a weight parameter controlling the degree of the random offset;
(7) Repeating steps (3) to (6), training the neural network by the stacked ELM method of depth J, and outputting the final feature matrix X_J;
(8) Calculating a hidden layer output matrix and an initial output weight of the neural network with the training depth of J by using the characteristic matrix in the step (7), and finishing the construction of the offline training method;
(9) Inputting channel parameter estimated values of previous D moments into a neural network, and constructing a new historical channel sample, wherein D is the number of input neurons;
(10) Performing online prediction on the new historical channel sample in the step (9) by using an offline training method to obtain a feature matrix and a hidden layer output matrix corresponding to the sample;
(11) Updating the neural network output weight value after offline training in the step (8) by using a recursion formula according to the hidden layer output matrix obtained in the step (10) to obtain an online predicted final neural network output weight value;
the recursion formulas being:

C = C_0 − C_0A^T(I + AC_0A^T)^{−1}AC_0

β = β^(0) + CA^T(h − Aβ^(0))

where β denotes the final output weight, C is the intermediate matrix used in updating β^(0) to β, and I is the identity matrix;
(12) And obtaining the predicted channel state information of the neural network according to the output weight of the neural network.
3. The time-varying channel prediction method of claim 2, wherein the training sample set X_1 of step (2) has the expression:

X_1 = {(x_1, h_1), ..., (x_u, h_u), ..., (x_U, h_U)}

where (x_u, h_u) is the u-th training sample, x_u is the input sample, and h_u is the ideal channel information, with expression:

h_u = [Γ(h(u+D)), Γ(h(u+D+1)), ..., Γ(h(u+D+P−1))]

where P denotes the number of output neurons, h(n) is the ideal time-domain channel information, and Γ(·) is the operation converting a complex number into real numbers, with expression:

Γ(h(n)) = [h_R(n), h_I(n)]

where h_R(n) = Re(h(n)) and h_I(n) = Im(h(n)) are the real-part and imaginary-part operations, respectively.
4. The time-varying channel prediction method of claim 3, wherein step (7) comprises:

the input–output relationship of each ELM module is expressed as:

X_{j+1} = G(X_1 + λO_jθ_j)

where θ_j is the j-th projection matrix and j is the index of the ELM module.
5. The time-varying channel prediction method of claim 4, wherein step (8) comprises:

according to the final feature matrix X_J, the hidden-layer output matrix A_J is obtained, and thus the initial output weight β^(0) of the network after the depth-J stacked ELM method, i.e.:

β^(0) = A_J†H
6. The time-varying channel prediction method of claim 5, wherein step (9) comprises:
the newly constructed historical channel sample x is the historical channel information of the previous D time instants.
7. The time-varying channel prediction method of claim 6, wherein step (12) comprises:

according to the updated output weight β obtained in step (11), using the relationship between the output weight and the output, the neural network output is obtained as:

ĥ = Γ^{−1}(G(δ_Jx_J + b_J)β)

where δ_J is the weight matrix between the input layer and the hidden layer generated randomly for the J-th time, with δ_J = [w_1, ..., w_Q]^T, b_J is the J-th randomly generated hidden-layer bias matrix, and Γ^{−1}(·) is the inverse operation of Γ(·), converting real numbers into complex numbers.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110479200.XA CN113285896B (en) | 2021-04-30 | 2021-04-30 | Time-varying channel prediction method based on stack type ELM |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110479200.XA CN113285896B (en) | 2021-04-30 | 2021-04-30 | Time-varying channel prediction method based on stack type ELM |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113285896A CN113285896A (en) | 2021-08-20 |
CN113285896B true CN113285896B (en) | 2023-04-07 |
Family
ID=77277790
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110479200.XA Active CN113285896B (en) | 2021-04-30 | 2021-04-30 | Time-varying channel prediction method based on stack type ELM |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113285896B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113762529B (en) * | 2021-09-13 | 2023-06-20 | 西华大学 | Machine learning timing synchronization method based on statistical prior |
CN114509730B (en) * | 2022-01-14 | 2024-07-16 | 西安电子科技大学 | ELM-based interference frequency domain state online prediction method |
CN114884783B (en) * | 2022-05-07 | 2023-06-02 | 重庆邮电大学 | Method for estimating power line system channel by utilizing neural network |
CN115296761B (en) * | 2022-10-10 | 2022-12-02 | 香港中文大学(深圳) | Channel prediction method based on electromagnetic propagation model |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107045785B (en) * | 2017-02-08 | 2019-10-22 | 河南理工大学 | A method of the short-term traffic flow forecast based on grey ELM neural network |
CN109510676B (en) * | 2019-01-11 | 2021-09-21 | 杭州电子科技大学 | Wireless channel prediction method based on quantum computation |
CN110300075B (en) * | 2019-04-30 | 2020-10-02 | 北京科技大学 | Wireless channel estimation method |
CN110708129B (en) * | 2019-08-30 | 2023-01-31 | 北京邮电大学 | Wireless channel state information acquisition method |
CN112134816B (en) * | 2020-09-27 | 2022-06-10 | 杭州电子科技大学 | ELM-LS combined channel estimation method based on intelligent reflection surface |
- 2021-04-30: Application CN202110479200.XA filed; granted as CN113285896B (Active)
Also Published As
Publication number | Publication date |
---|---|
CN113285896A (en) | 2021-08-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113285896B (en) | Time-varying channel prediction method based on stack type ELM | |
Wang et al. | Distributed learning for automatic modulation classification in edge devices | |
CN112737987B (en) | Novel time-varying channel prediction method based on deep learning | |
Liao et al. | ChanEstNet: A deep learning based channel estimation for high-speed scenarios | |
CN111404849A (en) | OFDM channel estimation and signal detection method based on deep learning | |
CN112600772B (en) | OFDM channel estimation and signal detection method based on data-driven neural network | |
CN113206809B (en) | Channel prediction method combining deep learning and base extension model | |
CN111510402B (en) | OFDM channel estimation method based on deep learning | |
CN108540419B (en) | OFDM detection method for resisting inter-subcarrier interference based on deep learning | |
CN111884976B (en) | Channel interpolation method based on neural network | |
CN1802831A (en) | Method and device for adaptive phase compensation of OFDM signals | |
Jiang et al. | AI-aided online adaptive OFDM receiver: Design and experimental results | |
CN113472706A (en) | MIMO-OFDM system channel estimation method based on deep neural network | |
CN111614584B (en) | Transform domain adaptive filtering channel estimation method based on neural network | |
CN111786923A (en) | Channel estimation method for time-frequency double-channel selection of orthogonal frequency division multiplexing system | |
CN113381953B (en) | Channel estimation method of extreme learning machine based on reconfigurable intelligent surface assistance | |
CN102404268A (en) | Doppler frequency offset estimation and compensation method in Rice channel under high-speed mobile environment | |
CN112822130B (en) | Doppler frequency offset estimation method based on deep learning in 5G high-speed mobile system | |
CN112242969A (en) | Novel single-bit OFDM receiver based on model-driven deep learning | |
CN108881080B (en) | OFDM anti-ICI detection method based on sliding window and deep learning | |
Zhang et al. | Generative adversarial network-based channel estimation in high-speed mobile scenarios | |
CN113242203B (en) | OFDMA uplink carrier frequency offset estimation method and interference suppression device in high-speed mobile environment | |
CN113762529B (en) | Machine learning timing synchronization method based on statistical prior | |
CN115208729A (en) | Systems, methods, and apparatus for machine learning based symbol timing recovery | |
CN112422208A (en) | Signal detection method based on antagonistic learning under unknown channel model |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||