CN115134024B - Spectrum prediction method based on two-dimensional empirical mode decomposition - Google Patents
- Publication number
- CN115134024B (application CN202210610500.1A)
- Authority
- CN
- China
- Legal status: Active (an assumption by Google, not a legal conclusion)
Classifications
- H04B17/373 — Predicting channel quality or other radio frequency [RF] parameters (H: Electricity; H04B: Transmission; H04B17/00: Monitoring; Testing; H04B17/30: Monitoring; Testing of propagation channels)
- H04B17/382 — Monitoring; Testing of propagation channels for resource allocation, admission control or handover
Abstract
The invention discloses a spectrum prediction method based on two-dimensional empirical mode decomposition. Spectrum data are first acquired and divided into a sequence of closely connected matrix blocks. A two-dimensional empirical mode decomposition algorithm then separates each spectrum matrix by information scale, from small to large, into intrinsic mode function components of different frequencies plus a residue. A bidirectional convolutional long short-term memory (ConvLSTM) network is trained on each separated component; the trained networks predict the intrinsic mode function components and the residue, and the outputs of all the network models are combined to reconstruct the predicted value. By exploiting the correlation of the spectrum both in time and between channels, the method predicts jointly in the two dimensions of frequency and time and improves radio spectrum prediction performance.
Description
Technical Field
The invention belongs to the field of cognitive radio spectrum prediction, and particularly relates to a spectrum prediction method based on two-dimensional empirical mode decomposition.
Background
With the development of wireless communication technology, the number of wireless devices is growing rapidly and the demand for spectrum keeps increasing. However, a large number of spectrum measurement reports show that, owing to the static allocation strategy, many licensed bands are under-utilized and considerable spectrum resources are wasted. Cognitive radio was proposed to let an unlicensed (secondary) user opportunistically access the spectrum resources of a licensed (primary) user without affecting the latter, thereby improving spectrum utilization to a certain extent and relieving the shortage of spectrum resources.
Traditional spectrum prediction methods are mainly based on autoregressive moving average (ARMA) models, hidden Markov models, support vector machine models, and the like. ARMA models require the time series to be stationary, or to become stationary after differencing, a requirement that actually collected spectrum data generally do not satisfy; moreover, ARMA models capture only linear relationships, whereas the evolution of the spectrum cannot be described by a simple linear law. Hidden Markov models assume that the current state depends only on the previous state, an assumption the actual spectral dynamics often violate. Support vector machines, widely used in recent years for classification and regression prediction, suffer like other traditional machine learning methods from a strong dependence on feature engineering, and feature selection relies heavily on the researcher's experience.
In recent years, with the continuous growth of computing power and of the size and accuracy of data sets, deep learning has developed rapidly and achieved remarkable results across many research fields. Deep learning is typically realized with neural network models and offers nonlinearity, automated feature engineering, and similar advantages. For predicting a correlated time series, a recurrent neural network (RNN) is a natural choice; since the spectrum occupancy state behaves as a time series along the time axis, an RNN can predict the occupancy state of a single channel. Long short-term memory (LSTM), a typical improvement of the RNN structure that resolves the problems of conventional RNNs, is now widely used in spectrum prediction. In the spectrum prediction problem, however, the occupancy state is correlated not only in time: there is also a latent correlation between channels.
A spectrum prediction method based on two-dimensional empirical mode decomposition is therefore urgently needed, one that exploits the correlation of the spectrum across time and channels to predict jointly in the frequency and time dimensions and thus improve current radio spectrum prediction performance.
Disclosure of Invention
To address the shortcomings of the prior art, the invention provides a spectrum prediction method based on two-dimensional empirical mode decomposition that exploits the correlation of the spectrum in time and between channels, predicts jointly in the two dimensions of frequency and time, and improves radio spectrum prediction performance.
In order to achieve the above purpose, the present invention adopts the following technical scheme:
the embodiment of the invention provides a spectrum prediction method based on two-dimensional empirical mode decomposition, which comprises the following steps:
S1, acquiring spectrum data and dividing them into a number of closely connected matrix blocks; the matrix block at the current time is denoted χ_t, and the sequence of spectrum matrices is χ_{t-n+1}, χ_{t-n+2}, …, χ_t, where t is the current time and n is the number of spectrum matrix blocks used by the network to predict the future spectrum state;
S2, separating each spectrum matrix segmented in step S1 with a two-dimensional empirical mode decomposition algorithm, in order of information scale from small to large, to obtain intrinsic mode function components of different frequencies and a residue, the information scale being defined as the distance between extreme points;
S3, training a bidirectional convolutional long short-term memory (ConvLSTM) network on each of the two-dimensional intrinsic mode function (BIMF) components and on the residue obtained in step S2, yielding one trained bidirectional ConvLSTM network per separated component;
S4, predicting each intrinsic mode function component and the residue with the corresponding trained bidirectional ConvLSTM network, and combining the outputs of all the network models to reconstruct the predicted value.
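The four steps can be sketched end to end. This is a toy illustration, not the patent's implementation: `decompose` and `predict_component` are crude stand-ins (the real method uses two-dimensional empirical mode decomposition and one bidirectional ConvLSTM per component), kept only to show the decompose → predict-per-component → recombine flow of S1–S4.

```python
import numpy as np

def decompose(block):
    """Toy stand-in for 2D EMD: split a block into a crude 'detail'
    part and a 'trend' part so that the parts sum back to the block."""
    trend = np.full_like(block, block.mean())
    return [block - trend, trend]

def predict_component(history):
    """Stand-in predictor: average the component over the time axis
    (the patent uses a trained bidirectional ConvLSTM here)."""
    return np.mean(history, axis=0)

def spectrum_predict(blocks):
    # blocks: list of n spectrum matrices chi_{t-n+1}, ..., chi_t (S1)
    comps = [decompose(b) for b in blocks]                      # S2
    # regroup into one time series of matrices per component index
    per_comp = [[c[i] for c in comps] for i in range(len(comps[0]))]
    # predict each component, then reconstruct by summation (S3, S4)
    return sum(predict_component(np.stack(h)) for h in per_comp)

blocks = [np.random.rand(4, 4) for _ in range(5)]
pred = spectrum_predict(blocks)
print(pred.shape)  # (4, 4)
```

The key property preserved from the patent is that the component-wise predictions are summed to rebuild the full spectrum matrix.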
Further, in step S2 the separation proceeds as follows:
S21, taking the lower-left corner of the data block χ_t as the origin of coordinates, its horizontal direction as the time axis X and its vertical direction as the channel axis Y, so that a two-dimensional coordinate plane XOY is obtained; the value of the data block at each point is taken as the Z coordinate and denoted f(x, y). The local maximum points of f(x, y) (points whose value exceeds that of the surrounding points) and local minimum points (points whose value is below that of the surrounding points) are then identified;
S22, forming from the local maximum points a maximum envelope surface E_MAX(x, y) and from the local minimum points a minimum envelope surface E_MIN(x, y), and computing from the two envelopes and the original data matrix f(x, y) the algebraic mean E_1(x, y) and the difference D_1(x, y):

E_1(x, y) = (E_MAX(x, y) + E_MIN(x, y)) / 2;

D_1(x, y) = f(x, y) - E_1(x, y);

where D_1(x, y) is an intermediate value of the sifting of f(x, y);
S23, repeating steps S21 and S22, with the difference replacing f, until D_1k(x, y) is an intrinsic mode function, k being the number of repetitions:

D_1(k-1)(x, y) - E_1k(x, y) = D_1k(x, y);

C_1(x, y) = D_1k(x, y);

where C_1(x, y) is the separated intrinsic mode function;
S24, judging the end of the sifting of each layer's intrinsic mode function by bounding the standard deviation SD, computed as

SD = Σ_{x=0}^{M} Σ_{y=0}^{N} [ |D_{i(k-1)}(x, y) - D_{ik}(x, y)|² / D²_{i(k-1)}(x, y) ];

where SD ≤ 0.3; M and N are the extents of the spectrum matrix along the X and Y axes of the plane XOY; and i is the order of the intrinsic mode function component currently being separated;
S25, separating C_1(x, y) from the original data to obtain the remainder R_1(x, y):

f(x, y) - C_1(x, y) = R_1(x, y);
S26, taking the remainder R_1(x, y) obtained in step S25 as new data and repeating steps S21 to S25 until n components have been extracted, giving the final expression

f(x, y) = Σ_{i=1}^{n} C_i(x, y) + R_n(x, y);

where f(x, y) is the original data matrix; C_i(x, y) is the i-th decomposed intrinsic mode function component, containing the smaller-scale detail information; and R_n(x, y) is the final residual component, indicating the final large-scale trend of the data, n being the number of repetitions.
Further, in step S2 the two-dimensional empirical mode decomposition algorithm requires at least one maximum point and one minimum point; if the data contain no extreme points, a first-order or higher-order derivative of the data is taken to construct a set of data that satisfies the condition.
Further, the bidirectional ConvLSTM network consists of a forward and a backward convolutional long short-term memory network. The two networks produce hidden-layer state sequences that run in opposite directions along the time axis; connecting these states gives the final prediction:

(h_n^f, c_n^f) = ConvLSTM_f(x_n, h_{n-1}^f, c_{n-1}^f), n = 1, …, N;
(h_n^b, c_n^b) = ConvLSTM_b(x_n, h_{n+1}^b, c_{n+1}^b), n = N, …, 1;
y_n = g(W_f * h_n^f + W_b * h_n^b + B_y);

where N is the total number of input matrix blocks; n is the index of the input matrix; x_n is the n-th matrix fed to the bidirectional ConvLSTM; h_{n-1} and h_{n+1} are the hidden states of the forward and backward networks when the n-th matrix is input; c_{n-1} and c_{n+1} are the corresponding memory cell states; h_n^f and h_n^b are the output states of the forward and backward networks; y_n, combining the forward output h_n^f and the backward output h_n^b after the n-th matrix is input, is the output state of the bidirectional ConvLSTM, i.e. the single-step multi-channel prediction of the spectrum occupancy state of every input channel at the next instant; W_f and W_b are the weights of the forward and backward outputs; B_y is the bias term of y_n; * is the convolution operator; and g is the tanh function.
Further, the ConvLSTM network comprises an input gate, an output gate, a forget gate, a memory cell and a convolutional layer; the convolutional layer captures spatial correlation by replacing matrix multiplication with convolution. The computation proceeds as follows:
Convolution kernels W_xp and W_hp extract the features of the different gates and of the cell state, where p ∈ {i, f, o, c}. The calculation formulas are as follows.

The input gate:

i_t = σ(W_xi * χ_t + W_hi * h_{t-1} + W_ci ∘ c_{t-1} + B_i);

where ∘ is the Hadamard product (the Hadamard terms vanish at initialization, when the cell state is zero); h_{t-1} is the hidden state; * is the convolution operator; W_xi, W_hi, W_ci are the weights of the input-gate terms; B_i is the bias term of the input gate; and σ is the sigmoid function.

The output gate:

o_t = σ(W_xo * χ_t + W_ho * h_{t-1} + W_co ∘ c_t + B_o);

where W_xo, W_ho, W_co are the weights of the output-gate terms and B_o is the bias term of the output gate.

The forget gate:

f_t = σ(W_xf * χ_t + W_hf * h_{t-1} + W_cf ∘ c_{t-1} + B_f);

where W_xf, W_hf, W_cf are the weights of the forget-gate terms and B_f is the bias term of the forget gate.

The memory cell:

c_t = f_t ∘ c_{t-1} + i_t ∘ g(W_xc * χ_t + W_hc * h_{t-1} + B_c);

where g(W_xc * χ_t + W_hc * h_{t-1} + B_c) is the state of the current input; c_t is the new cell state, combining the current memory with the long-term memory c_{t-1}; W_xc, W_hc are the weights of the memory-cell terms; B_c is the bias term of the memory cell; and g is the tanh function.

Finally, the output gate and the cell state together determine the final output of the ConvLSTM:

h_t = o_t ∘ tanh(c_t).
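A single ConvLSTM step along these lines can be sketched in NumPy/SciPy. The peephole (Hadamard) terms are omitted and single-channel 3×3 kernels are assumed for brevity, so this illustrates the gate structure rather than reproducing the patent's network:

```python
import numpy as np
from scipy.signal import convolve2d

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def convlstm_step(x, h_prev, c_prev, W, b):
    """One ConvLSTM step: 'same'-padded 2-D convolutions replace the
    matrix products of a plain LSTM, so spatial layout is preserved."""
    conv = lambda a, k: convolve2d(a, k, mode='same')
    i = sigmoid(conv(x, W['xi']) + conv(h_prev, W['hi']) + b['i'])  # input gate
    f = sigmoid(conv(x, W['xf']) + conv(h_prev, W['hf']) + b['f'])  # forget gate
    g = np.tanh(conv(x, W['xc']) + conv(h_prev, W['hc']) + b['c'])  # candidate
    c = f * c_prev + i * g                                          # memory cell
    o = sigmoid(conv(x, W['xo']) + conv(h_prev, W['ho']) + b['o'])  # output gate
    return o * np.tanh(c), c                                        # h_t, c_t

rng = np.random.default_rng(0)
W = {k: rng.normal(scale=0.1, size=(3, 3))
     for k in ('xi', 'hi', 'xf', 'hf', 'xc', 'hc', 'xo', 'ho')}
b = {k: 0.0 for k in ('i', 'f', 'c', 'o')}
h, c = convlstm_step(rng.normal(size=(8, 8)),
                     np.zeros((8, 8)), np.zeros((8, 8)), W, b)
print(h.shape, bool(np.all(np.abs(h) < 1.0)))  # (8, 8) True
```

Because h_t = o_t ∘ tanh(c_t) with o_t ∈ (0, 1), the hidden state stays strictly inside (-1, 1), as the final assertion checks.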
The beneficial effects are that:
First, the proposed method extracts the features of the spectrum data matrix in advance: a two-dimensional empirical mode decomposition algorithm separates the spectrum matrix by information scale, from small to large, and each separated layer component is used as input data for a bidirectional ConvLSTM network to train and predict, achieving a more accurate spectrum prediction.
Secondly, the spectrum prediction method based on the two-dimensional empirical mode decomposition provided by the invention processes input data in two ways simultaneously by adopting a two-way convolution long-short-term memory network and using a two-way idea, wherein the spectrum prediction is carried out in two dimensions from the frequency and time by utilizing the correlation between the spectrum and the channel from the past to the future and from the time to the future, so that the prediction performance of the current radio spectrum is improved.
Thirdly, according to the spectrum prediction method based on the two-dimensional empirical mode decomposition, the spectrum data is decomposed by adopting a two-dimensional empirical mode decomposition algorithm, so that on one hand, the instability of the data can be reduced, and on the other hand, as each decomposed component contains the characteristics of the original data in different scales, the characteristics of the data can be accurately extracted, and the information utilization rate of the data is improved.
Drawings
Fig. 1 is a flowchart of the spectrum prediction method based on two-dimensional empirical mode decomposition according to an embodiment of the invention.
Fig. 2 is a schematic diagram of the spectrum data segmentation.
Fig. 3 is a schematic diagram of the arrangement of the segmented spectrum data matrices.
Fig. 4 is a schematic diagram of the convolutional long short-term memory network.
Fig. 5 is a schematic diagram of the bidirectional convolutional long short-term memory network.
Detailed Description
The following embodiments will give those skilled in the art a more complete understanding of the invention, but do not limit it in any way.
As shown in fig. 1 and fig. 3, this embodiment provides a spectrum prediction method based on two-dimensional empirical mode decomposition, comprising the following steps:
S1, acquiring spectrum data and dividing them into a number of closely connected matrix blocks; the matrix block at the current time is denoted χ_t, and the sequence of spectrum matrices is χ_{t-n+1}, χ_{t-n+2}, …, χ_t, where t is the current time and n is the number of spectrum matrix blocks used by the network to predict the future spectrum state.
S2, separating each spectrum matrix segmented in step S1 with a two-dimensional empirical mode decomposition algorithm, in order of information scale from small to large, to obtain intrinsic mode function components of different frequencies and a residue, the information scale being defined as the distance between extreme points.
S3, training a bidirectional ConvLSTM network on each BIMF intrinsic mode function component and on the residue obtained in step S2, yielding one trained bidirectional ConvLSTM network per separated component.
S4, predicting each intrinsic mode function component and the residue with the corresponding trained bidirectional ConvLSTM network, and combining the outputs of all the network models to reconstruct the predicted value.
1. Two-dimensional empirical mode decomposition algorithm
The two-dimensional empirical mode decomposition algorithm can adaptively decompose two-dimensional data into a series of intrinsic mode function signals and a residue. The intrinsic mode function must satisfy two points:
(1) The number of maximum points and minimum points is not less than one; if the two-dimensional data contain no extreme points, a first-order or higher-order derivative of the data is taken to construct a set of data that satisfies the condition.
(2) The feature scale of the two-dimensional data is defined as the distance between extreme points.
As shown in fig. 2, the acquired spectrum data are divided into a number of closely connected matrix blocks, denoted χ_{t-n+1}, χ_{t-n+2}, …, χ_t. Taking the data block χ_t as an example, the two-dimensional empirical mode decomposition proceeds as follows:
First, the lower-left corner of the data block χ_t is taken as the origin of coordinates, its horizontal direction as the time axis X and its vertical direction as the channel axis Y, giving a two-dimensional coordinate plane XOY; the value of the data block is taken as the Z coordinate and denoted f(x, y). The local maximum points of f(x, y) (points whose value exceeds that of the surrounding points) and local minimum points (the converse) are identified, and from them the maximum envelope surface E_MAX(x, y) and the minimum envelope surface E_MIN(x, y) are formed. The algebraic mean of the two envelopes is denoted E_1(x, y):

E_1(x, y) = (E_MAX(x, y) + E_MIN(x, y)) / 2.

Its difference from the original data matrix f(x, y) is defined as D_1(x, y), namely D_1(x, y) = f(x, y) - E_1(x, y), where D_1(x, y) is an intermediate value of the sifting of f(x, y). The above process is repeated k times, with the difference replacing f, until D_1k(x, y) is an intrinsic mode function; at that point D_1(k-1)(x, y) - E_1k(x, y) = D_1k(x, y), and defining C_1(x, y) = D_1k(x, y), C_1(x, y) is the first separated intrinsic mode function.
For this process, a stopping criterion must be fixed for the sifting of each layer. This is done by bounding the standard deviation SD; the discrimination function for ending the sifting of the i-th layer's intrinsic mode function is

SD = Σ_{x=0}^{M} Σ_{y=0}^{N} [ |D_{i(k-1)}(x, y) - D_{ik}(x, y)|² / D²_{i(k-1)}(x, y) ];

where SD ≤ 0.3; M and N are the extents of the spectrum matrix along the X and Y axes of the plane XOY; and i is the order of the intrinsic mode function component currently being separated.
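The SD criterion itself is straightforward to compute; a small regularizing `eps` (an addition of ours, not from the patent) guards against division by zero in the denominator:

```python
import numpy as np

def sd_criterion(d_prev, d_curr, eps=1e-12):
    """SD = sum over the M x N grid of |D_{i(k-1)} - D_{ik}|^2 / D_{i(k-1)}^2;
    sifting of the current layer stops once SD <= 0.3."""
    return float(np.sum(np.abs(d_prev - d_curr) ** 2 /
                        (d_prev ** 2 + eps)))

d_prev = np.ones((4, 4))
d_curr = 0.99 * d_prev        # a small change between sifting passes
print(sd_criterion(d_prev, d_curr) <= 0.3)   # True -> stop sifting
```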
Then C_1(x, y) is separated from the original data, leaving the remainder R_1(x, y), i.e. f(x, y) - C_1(x, y) = R_1(x, y). Taking R_1(x, y) as new data and repeating the above process n times gives the final expression

f(x, y) = Σ_{i=1}^{n} C_i(x, y) + R_n(x, y);

where f(x, y) is the original data matrix; C_i(x, y) is the BIMF (two-dimensional intrinsic mode function) component obtained by the i-th decomposition, containing the smaller-scale detail information; and R_n(x, y) is the final residual component, indicating the final large-scale trend of the data.
2. Bidirectional convolutional long short-term memory network
Fig. 5 is a schematic diagram of the bidirectional ConvLSTM network used by the method; its unrolled structure is shown there. A bidirectional ConvLSTM can exploit simultaneously the correlations contained in the past and the future of the data. It can be regarded as two ConvLSTM networks, one forward and one backward: the two networks produce hidden-layer state sequences that run in opposite directions along the time axis, and these states are then connected to give the output, the forward network gathering the past information and the backward network the future information of the input matrices along the time axis.
The state y_n of the bidirectional ConvLSTM when the n-th matrix is input combines a forward component h_n^f and a backward component h_n^b:

(h_n^f, c_n^f) = ConvLSTM_f(x_n, h_{n-1}^f, c_{n-1}^f), n = 1, …, N;
(h_n^b, c_n^b) = ConvLSTM_b(x_n, h_{n+1}^b, c_{n+1}^b), n = N, …, 1;
y_n = g(W_f * h_n^f + W_b * h_n^b + B_y);

where N is the total number of input matrix blocks; n is the index of the input matrix; x_n is the n-th matrix fed to the bidirectional ConvLSTM; h_{n-1} and h_{n+1} are the hidden states of the forward and backward networks when the n-th matrix is input; c_{n-1} and c_{n+1} are the corresponding memory cell states; h_n^f and h_n^b are the output states of the forward and backward networks; y_n, combining the forward output and the backward output after the n-th matrix is input, is the output state of the bidirectional ConvLSTM, i.e. the single-step multi-channel prediction of the spectrum occupancy state of every input channel at the next instant; W_f and W_b are the weights of the forward and backward outputs; B_y is the bias term of y_n; * is the convolution operator; and g is the tanh function.
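Given the two opposed hidden-state sequences, the combination step can be sketched with scalar stand-ins for the convolution kernels (an illustrative simplification — the patent convolves full kernels over each hidden state):

```python
import numpy as np

def bidirectional_combine(h_fwd, h_bwd, w_fwd, w_bwd, b):
    """Combine the forward and backward hidden states at each step:
    y_n = tanh(w_f * h_n_fwd + w_b * h_n_bwd + b)."""
    return [np.tanh(w_fwd * hf + w_bwd * hb + b)
            for hf, hb in zip(h_fwd, h_bwd)]

# Hidden states for n = 1..N from a forward pass and, already put back
# into time order, from a backward pass over the same inputs.
h_fwd = [np.full((2, 2), v) for v in (0.1, 0.2, 0.3)]
h_bwd = [np.full((2, 2), v) for v in (0.3, 0.2, 0.1)]
y = bidirectional_combine(h_fwd, h_bwd, 0.5, 0.5, 0.0)
print(len(y), y[1].shape)   # 3 (2, 2)
```

Each y_n thus sees both the past (through h_fwd) and the future (through h_bwd) of the input sequence.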
Preferably, the ConvLSTM network comprises several key elements: an input gate, an output gate, a forget gate, a memory cell and a convolutional layer. Its structure, shown in fig. 4, can be regarded as a long short-term memory network with a convolutional layer added; the convolutional layer captures spatial correlation by replacing matrix multiplication with convolution. Each part of the ConvLSTM is computed as follows:
the input gate is i t The calculation formula is as follows:
in the method, in the process of the invention,is the weight of each input quantity of the input gate; b (B) i Is an offset term of the input gate; sigma generally employs a sigmoid function: />
The output gate is o t The calculation formula is as follows:
in the method, in the process of the invention,the weight of each input quantity of the output gate; b (B) o Is the bias term of the output gate.
Forgetting door f t The calculation formula is as follows:
in the method, in the process of the invention,is the weight of each input quantity of the forgetting gate; b (B) f Is a bias term for forgetting gates.
The memory cell is c t The calculation formula is as follows:
in the method, in the process of the invention,the state of the current input unit; c t For the current memory->And long-term memory c t-1 Calculating new states of the formed input units by combining; />Is the weight of each input quantity of the memory unit; b (B) c Is the bias term of the memory cell; g generally employs the tanh function: />
The final output of the convolution long-short-period memory network is determined by the output gate and the unit state together, and the calculation formula is thatIn (1) the->Is a tanh function.
In the above calculation formula, when initialized, +.and, +.are respectively indicated by Hadamard and convolution operators; h is a t-1 Representing a hidden state;and->Extracting convolution kernels corresponding to the characteristics of different gates and cell states respectively, wherein p epsilon (i, f, o, c); x-shaped articles t Representing the input, in this patent a two-dimensional matrix of spectral data, the output unit is h t And (3) representing.
The above is only a preferred embodiment of the invention; the scope of protection is not limited to this example, and all technical solutions falling under the concept of the invention belong to its scope. Modifications and adaptations that do not depart from the principles of the invention are likewise within the protection scope defined by the claims.
Claims (3)
1. A method of spectrum prediction based on two-dimensional empirical mode decomposition, the method comprising the steps of:
s1, acquiring spectrum data, dividing the spectrum data into a plurality of closely connected matrix blocks, and setting the matrix block in the current time state as χ t The spectrum matrix takes on the value of χ t-n+1 ,χ t-n+2 ,…,χ t The method comprises the steps of carrying out a first treatment on the surface of the Wherein t is the current time, n is the number of spectrum matrix blocks used for predicting the future spectrum state by using the network;
s2, separating the spectrum matrix segmented in the step S1 according to the order from small to large of the information scale by utilizing a two-dimensional empirical mode decomposition algorithm to obtain natural mode function components and residual quantities of different frequencies, wherein the information scale is defined as the distance scale between extreme points;
s3, training the two-way convolution long-term memory network through the BIMF natural mode function components and the residual quantity obtained in the step S2 to obtain a separated and trained two-way convolution long-term memory network;
s4, predicting the components and the residual quantity of each inherent mode function by adopting a trained two-way convolution long-short-term memory network, and reconstructing a predicted value by combining each two-way convolution long-short-term memory network model;
the bidirectional convolutional long short-term memory network consists of a forward convolutional long short-term memory network and a backward convolutional long short-term memory network; the two networks yield two matrices whose hidden states run in opposite directions along the time axis, and the two hidden states are combined to obtain the final prediction result; the calculation formulas are:
h_n^f = ConvLSTM_f(x_n, h_{n-1}^f, c_{n-1}^f), for n = 1, 2, …, N;
h_n^b = ConvLSTM_b(x_n, h_{n+1}^b, c_{n+1}^b), for n = N, N-1, …, 1;
y_n = g(W_fy * h_n^f + W_by * h_n^b);
where N is the total number of input matrix blocks; n is the index of the input matrix; x_n is the n-th matrix input to the bidirectional convolutional long short-term memory network; h_{n-1}^f and h_{n+1}^b are, respectively, the hidden states of the forward and backward convolutional long short-term memory networks when the n-th matrix is input; c_{n-1}^f and c_{n+1}^b are the corresponding memory cell states; h_n^f is the output state of the forward network and h_n^b is the output state of the backward network; y_n, obtained by combining the forward output h_n^f and the backward output h_n^b after the n-th matrix is input, is the output state of the bidirectional convolutional long short-term memory network, i.e. the single-step prediction result of the multi-channel spectrum prediction, namely the predicted spectrum occupancy state of each input channel at the next time instant; W_fy is the weight of h_n^f and W_by is the weight of h_n^b; * is the convolution operator; g is the tanh function;
the convolutional long short-term memory network comprises an input gate, an output gate, a forget gate, a memory cell and a convolutional layer; the role of the convolutional layer is to capture spatial correlation by replacing matrix multiplication with the convolution operation; the specific steps are as follows:
acquiring the features of the different gates and of the cell state, and extracting the corresponding convolution kernels W_xp and W_hp, where p ∈ {i, f, o, c}; the calculation formulas are as follows:
the input gate is: i_t = σ(W_xi * x_t + W_hi * h_{t-1} + W_ci ∘ c_{t-1} + b_i);
where ∘ denotes the Hadamard product; h_{t-1} is the hidden state; * is the convolution operator; W_xi, W_hi and W_ci are the weights of the inputs to the input gate; b_i is the bias term of the input gate; σ is the sigmoid function;
the output gate is: o_t = σ(W_xo * x_t + W_ho * h_{t-1} + W_co ∘ c_t + b_o);
where W_xo, W_ho and W_co are the weights of the inputs to the output gate, and b_o is the bias term of the output gate;
the forget gate is: f_t = σ(W_xf * x_t + W_hf * h_{t-1} + W_cf ∘ c_{t-1} + b_f);
where W_xf, W_hf and W_cf are the weights of the inputs to the forget gate, and b_f is the bias term of the forget gate;
the memory cell is: c_t = f_t ∘ c_{t-1} + i_t ∘ g(W_xc * x_t + W_hc * h_{t-1} + b_c);
where g(W_xc * x_t + W_hc * h_{t-1} + b_c) is the state of the current input unit; c_t is the new cell state obtained by combining the current memory i_t ∘ g(·) with the long-term memory f_t ∘ c_{t-1}; W_xc and W_hc are the weights of the inputs to the memory cell; b_c is the bias term of the memory cell; g is the tanh function;
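The bidirectional pass of claim 1 can be sketched as below, assuming the forward/backward cell updates and the combining function are given; `step_f`, `step_b` and `combine` are hypothetical stand-ins for the trained networks' functions, not names from the patent.

```python
import numpy as np

def bidirectional_predict(xs, step_f, step_b, combine):
    """Run a forward pass (n = 1..N) and a backward pass (n = N..1) over the
    N input matrix blocks, then combine the opposite-direction hidden states
    into the outputs y_n (sketch)."""
    N = len(xs)
    h_f, h_b = [None] * N, [None] * N
    h, c = 0.0, 0.0                    # zero initial hidden/cell state
    for n in range(N):                 # forward direction
        h, c = step_f(xs[n], h, c)
        h_f[n] = h
    h, c = 0.0, 0.0
    for n in reversed(range(N)):       # backward direction
        h, c = step_b(xs[n], h, c)
        h_b[n] = h
    # y_n combines the forward and backward hidden states for block n
    return [combine(hf, hb) for hf, hb in zip(h_f, h_b)]
```

With, for instance, `combine = lambda a, b: np.tanh(0.5 * a + 0.5 * b)`, this mirrors the y_n formula with scalar weights standing in for the convolution kernels.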
2. The spectrum prediction method based on two-dimensional empirical mode decomposition according to claim 1, wherein in step S2 the separation process comprises the following specific steps:
s21, setting a data block χ t The lower left corner is the origin of coordinates, data block χ t The horizontal direction of (a) is the time axis X, the data block χ t Is channel Y, a two-dimensional coordinate plane XOY is obtained, wherein the data block χ t Setting the corresponding value as Z coordinate, and marking as f (x, y); analyzing a local maximum value point and a local minimum value point of the coordinate value of f (x, y), wherein the local maximum value point is a point with a data value larger than a surrounding data value, and the local minimum value point is a point with a data value smaller than the surrounding data value;
s22, forming a maximum envelope surface E through the local maximum points and the local minimum points respectively MAX (x, y) and minimum envelope surface E MIN (x,y),Envelope surface E using maxima MAX (x, y), minimum envelope surface E MIN The coordinate values of (x, y) and the primary data matrix f (x, y) are used for obtaining algebraic mean E 1 (x, y) and difference D 1 (x, y), the specific calculation mode is as follows:
D_1(x, y) = f(x, y) - E_1(x, y);
where D_1(x, y) is an intermediate process value of f(x, y);
s23, repeatedly executing the steps S21 to S23 until D 1k (x, y) is an intrinsic mode function, wherein k is the number of repeated execution times, and the specific calculation mode is as follows:
D_{1(k-1)}(x, y) - E_{1k}(x, y) = D_{1k}(x, y);
C_1(x, y) = D_{1k}(x, y);
where C_1(x, y) is a separated intrinsic mode function;
s24, judging the end of the screening process of the intrinsic mode function of each layer by limiting the size SD of the standard deviation, wherein the specific calculation mode is as follows:
where SD ≤ 0.3; M and N are the maximum extents of the spectrum matrix along the X axis and the Y axis of the two-dimensional coordinate plane XOY, respectively; i is the order of the intrinsic mode function component currently being separated by the algorithm;
s25, C 1 (x, y) separating from the raw data to obtain the remainder R 1 (x, y), the specific calculation mode is as follows:
f(x, y) - C_1(x, y) = R_1(x, y);
s26, the remainder R obtained in the step S25 is used 1 (x, y) as new data, repeating the above steps S24-S25,the final expression is obtained as follows:
where f(x, y) is the original data matrix; C_i(x, y) is the i-th intrinsic mode function component decomposed from the data matrix and contains the smaller-scale detail information; R_n(x, y) is the final residual component, indicating the overall large-scale trend of the data; and n is the number of repetitions.
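The sifting loop (S21-S24) and the outer decomposition loop (S25-S26) can be sketched as follows. Real BEMD interpolates smooth envelope surfaces through the local extrema; this illustration substitutes a crude 3x3 max/min filter, so it mimics only the control flow, not the exact envelopes.

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def sift(f, sd_tol=0.3, max_iter=20):
    """Extract one BIMF: repeat envelope-mean subtraction (S21-S23) until the
    standard-deviation criterion of S24 is met (SD <= 0.3)."""
    d = f.astype(float)
    for _ in range(max_iter):
        # crude stand-ins for the maximum and minimum envelope surfaces
        e1 = (maximum_filter(d, size=3) + minimum_filter(d, size=3)) / 2.0
        d_new = d - e1                         # D_1k = D_1(k-1) - E_1k
        sd = np.sum((d - d_new) ** 2 / (d ** 2 + 1e-12))
        d = d_new
        if sd <= sd_tol:                       # screening process ends
            break
    return d                                   # C_i(x, y)

def bemd(f, n_components=3):
    """Peel off BIMF components one by one (S25-S26); what remains is the
    residual R_n, so that f equals the sum of the C_i plus R_n."""
    components, residual = [], f.astype(float)
    for _ in range(n_components):
        c_i = sift(residual)
        components.append(c_i)
        residual = residual - c_i              # remainder becomes the new data
    return components, residual
```

By construction, summing the extracted components and the residual reproduces the original matrix, which is the invariant the claim's final expression states.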
3. The spectrum prediction method based on two-dimensional empirical mode decomposition according to claim 1, wherein in step S2 the numbers of maximum points and of minimum points in the two-dimensional empirical mode decomposition algorithm are each required to be at least 1; if no extreme point exists, a first-order or higher-order derivative operation is performed on the data to construct a set of data meeting the condition.
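Putting the claims together, an end-to-end sketch of steps S1-S4 might read as follows; `bemd_fn` and `predictors` are hypothetical stand-ins for the two-dimensional decomposition routine and the per-component trained bidirectional networks, chosen here only to show the wiring.

```python
import numpy as np

def predict_next_spectrum(history, predictors, bemd_fn):
    """S2: decompose the observed spectrum matrix into BIMF components and a
    residual; S4: predict each part with its own trained model, then sum the
    per-part predictions to reconstruct the next spectrum state (sketch)."""
    components, residual = bemd_fn(history)
    parts = components + [residual]
    preds = [model(part) for model, part in zip(predictors, parts)]
    return np.sum(preds, axis=0)               # reconstructed prediction
```

With identity "predictors" and an exactly reversible decomposition, the pipeline returns its input, a quick sanity check on the reconstruction step.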
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210610500.1A CN115134024B (en) | 2022-05-31 | 2022-05-31 | Spectrum prediction method based on two-dimensional empirical mode decomposition |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115134024A CN115134024A (en) | 2022-09-30 |
CN115134024B true CN115134024B (en) | 2023-07-11 |
Family
ID=83377662
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116545556B (en) * | 2023-04-28 | 2024-03-29 | 哈尔滨工程大学 | Electromagnetic spectrum occupancy rate two-dimensional prediction method based on dynamic threshold and residual convolution network |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110138475A (en) * | 2019-05-08 | 2019-08-16 | 南京邮电大学 | A kind of adaptive threshold channel occupation status prediction technique based on LSTM neural network |
CN111738520A (en) * | 2020-06-24 | 2020-10-02 | 中国电子科技集团公司第二十八研究所 | System load prediction method fusing isolated forest and long-short term memory network |
CN112383369A (en) * | 2020-07-23 | 2021-02-19 | 哈尔滨工业大学 | Cognitive radio multi-channel spectrum sensing method based on CNN-LSTM network model |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019006473A1 (en) * | 2017-06-30 | 2019-01-03 | The Johns Hopkins University | Systems and method for action recognition using micro-doppler signatures and recurrent neural networks |
US11057079B2 (en) * | 2019-06-27 | 2021-07-06 | Qualcomm Incorporated | Dynamic thresholds for antenna switching diversity |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||