CN113852432A - RCS-GRU model-based spectrum prediction sensing method - Google Patents
- Publication number
- CN113852432A CN113852432A CN202110017312.3A CN202110017312A CN113852432A CN 113852432 A CN113852432 A CN 113852432A CN 202110017312 A CN202110017312 A CN 202110017312A CN 113852432 A CN113852432 A CN 113852432A
- Authority
- CN
- China
- Prior art keywords
- channel
- model
- spectrum
- rcs
- prediction
- Prior art date
- Legal status
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04B—TRANSMISSION
- H04B17/00—Monitoring; Testing
- H04B17/30—Monitoring; Testing of propagation channels
- H04B17/382—Monitoring; Testing of propagation channels for resource allocation, admission control or handover
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/044—Recurrent networks, e.g. Hopfield networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D30/00—Reducing energy consumption in communication networks
- Y02D30/70—Reducing energy consumption in communication networks in wireless communication networks
Abstract
The invention belongs to the technical field of cognitive radio, and particularly relates to a spectrum prediction sensing method based on an RCS-GRU model. The method comprises the following steps: simulating channels with an M/M/N queuing theory model; combining and splicing the simulated channel state sequences of step 1 into matrices; establishing a convolutional neural network model based on residual CBAM, inputting the spliced channel state matrix set into the model, and extracting features of the spectrum occupancy state among channels; inputting the extracted feature data set into a GRU model, mining the temporal features of the channel state, performing spectrum prediction, and outputting the channel state of the next time slot; adopting the Adam optimization algorithm with a variable learning rate to optimize the cross-entropy loss function and train the RCS-GRU network, with the dropout method added during training; and evaluating the prediction performance of the RCS-GRU with the curve relating false-alarm prediction probability to detection prediction probability, and the root mean square error (RMSE).
Description
Technical Field
The invention belongs to the technical field of cognitive radio, and particularly relates to a spectrum prediction sensing method based on an RCS-GRU model.
Background
Cognitive Radio Networks (CRNs) can effectively resolve the contradiction between the shortage of wireless spectrum resources and the low utilization rate of the wireless spectrum, and improve the communication capacity of the system, through Dynamic Spectrum Access (DSA) and spectrum resource management technology.
However, CRN systems face a number of technical challenges, one of which is spectrum sensing. Spectrum sensing technology enables a cognitive user to learn the usage of authorized spectrum resources in a wireless communication system through effective signal detection or sensing methods. In conventional spectrum sensing, however, the cognitive user scans and senses the whole spectrum, which often causes large processing delay and energy consumption, and in turn degrades the accuracy of the cognitive user's spectrum decisions and the spectrum utilization rate. To solve these problems, a spectrum sensing scheme based on spectrum prediction has been proposed: the cognitive user first predicts the future spectrum state, then senses only the spectrum predicted to be idle and does not sense the spectrum predicted to be busy, thereby reducing the time and energy consumption of wideband spectrum sensing.
Therefore, predictive sensing technology has become a research hotspot of CRN technology. At present, with the rapid development of machine learning, deep neural networks perform well in spectrum prediction. Some scholars predict real-world spectrum data with a Taguchi-based long short-term memory (LSTM) model. The LSTM model replaces the hidden layer of a traditional recurrent neural network (RNN) with memory blocks; when the error propagates backward from the output layer, the memory block records it, which effectively avoids the exponential gradient decay of back-propagation in RNNs and allows information over longer spans to be retained. Meanwhile, the Taguchi method replaces grid search, reducing time and computing resources when designing the optimal network configuration. Other authors propose a deep neural network using convolutional long short-term memory (ConvLSTM) for long-term temporal prediction, trained to learn the joint spatio-temporal dependencies observed in spectrum use.
The above prediction-based spectrum sensing methods still have the following problems: (1) the correlation of spectrum occupancy among multiple channels is not considered, so detailed features cannot be comprehensively captured when extracting spectrum occupancy state features; (2) prediction takes too long, and prediction accuracy is insufficient.
Disclosure of Invention
Aiming at the defects in the prior art, the invention provides a spectrum prediction sensing method based on an RCS-GRU model. Compared with traditional spectrum prediction sensing methods, this method better mines the correlation among spectra, improves prediction accuracy, and reduces prediction time.
In order to solve the technical problems, the invention adopts the following technical scheme:
the spectrum prediction sensing method based on the RCS-GRU model comprises the following steps:
step 1, simulating channels with an M/M/N queuing theory model, wherein customers represent primary users and service windows represent spectrum resources;
step 2, combining and splicing the channel state sequences simulated in step 1 into matrices;
step 3, establishing a convolutional neural network model based on residual CBAM, inputting the channel state matrix set spliced in the step 2 into the model, and extracting the characteristics of the spectrum occupation state among the channels;
step 4, inputting the feature data set extracted in step 3 into the GRU model, mining the temporal features of the channel state, performing spectrum prediction, and outputting the channel state of the next time slot;
step 5, adopting the Adam optimization algorithm with a variable learning rate to optimize the cross-entropy loss function and train the RCS-GRU network of step 4, adding the dropout method in the training process;
step 6, dividing the sensing frame of the secondary user into three periods: a spectrum prediction period T_p, a spectrum sensing period T_s and a data transmission period T_d, and selecting the channels whose predicted channel state in step 4 is idle for spectrum sensing;
and step 7, evaluating the prediction performance of the RCS-GRU with the curve relating false-alarm prediction probability to detection prediction probability, and the root mean square error (RMSE).
In a further refinement of this technical scheme, in step 1 the cognitive radio network includes N authorized spectrum channels; the time from when a primary user leaves the authorized spectrum until it accesses the authorized spectrum again obeys a negative exponential distribution with parameter λ, whose probability density function (PDF) is:

f(t) = λe^(−λt), t ≥ 0 (1)
The duration of service transmission of a primary user in the authorized spectrum obeys a negative exponential distribution with parameter μ, forming a first-order Markov chain model with N+1 states. When the model is in state N the system is considered busy; in any other state a secondary user may access. The probability that the model is in the busy state is denoted by P_N.
In a further refinement of this technical scheme, the probability P_N that the system is in state N is:

P_N = (ρ^N / N!) / Σ_{n=0}^{N} (ρ^n / n!) (10)

In formula (10), ρ = λ/μ denotes the traffic intensity of the primary user.
In a further refinement of this technical scheme, in step 2 each wireless channel has two possible states: idle, represented by "0", and busy, represented by "1"; an N×1 vector X_t indicates the occupancy state of all channels in time slot t.
In a further refinement of this technical scheme, in step 3, in the residual-CBAM-based convolutional neural network model, a ResNet network is used as the backbone and CBAM is inserted after it: a channel attention mechanism is inserted after the original output matrix to emphasize what the feature is, and a spatial attention mechanism is then inserted to emphasize position information.
the input of a residual block ResBlock in the ResNet network is x, the potential mapping of the expected output is H (x), and the residual is defined as:
F(x)=H(x)-x (2)
For a ResBlock, the functional expression is:

H(x) = F(x) + x (3)
The whole flow of CBAM can be summarized as follows: an input feature map F ∈ R^{H×W×C} first passes through the channel attention module to obtain the channel weight coefficient M_c, which is multiplied with the input feature map F to obtain a feature map F′ containing more key features in the channel dimension; F′ is then taken as the input of the spatial attention module to obtain the spatial weight coefficient M_s, which is multiplied with F′ to obtain a feature map F″ containing more key spatial position information. The whole process can be expressed as:

F′ = M_c(F) ⊗ F, F″ = M_s(F′) ⊗ F′ (4)
the technical scheme is further optimized in that the channel attention module inputs a H multiplied by W multiplied by C characteristic F belonging to RH ×W×CFirst, the Average pooling layer (Average pooling) and the maximum pooling layer (Max pooling) are used to obtain the feature Favg∈R1×1×CAnd Fmax∈R1×1×CAnd sending the two characteristics into a multilayer perceptron (MLP) with a hidden layer to perform characteristic dimension reduction and dimension increasing operations, wherein the number of neurons of the hidden layer is C/r, an activation function is Relu, the number of neurons of an output layer is C, and parameters in the two layers of neural networks are shared. Then adding the two obtained characteristics and obtaining a weight coefficient M through a Sigmoid activation functionc(F)∈R1×1×CFinally, the weight coefficient and the original characteristic F are multiplied element by element to obtain a new characteristic after scaling, and the weight coefficient M of the channel attention module is obtained in the processc(F) Can be represented by the following formula:
Mc(F)=σ(MLP(AvgPool(F))+MLP(MaxPool(F)))=σ(W1(W0(Favg))+W1(W0(Fmax) Equation (11) where σ represents Sigmoid activation function, W0∈RC/r×CAnd W1∈RC×C/rRespectively representing the weights of the hidden layer and the output layer of the multi-layer perceptron.
In a further refinement of this technical scheme, the spatial attention module takes the feature F′ ∈ R^{H×W×C} output by the channel attention module as input. First, average pooling and max pooling compress the channel dimension to obtain the single-channel feature maps F′_avg ∈ R^{H×W×1} and F′_max ∈ R^{H×W×1}, which are spliced along the channel dimension into a feature map with 2 channels. A 7×7 convolutional layer compresses it in the channel dimension, and a Sigmoid activation yields the spatial attention weight coefficient M_s(F′) ∈ R^{H×W×1}. Finally, the weight coefficient is multiplied element-wise with the feature F′ to obtain the scaled new feature. In this process the spatial attention weight coefficient M_s(F′) can be expressed as:

M_s(F′) = σ(f^{7×7}([AvgPool(F′); MaxPool(F′)])) = σ(f^{7×7}([F′_avg; F′_max])) (12)

In formula (12), f^{7×7} denotes a convolutional layer with kernel size 7.
In step 4, the GRU model uses a reset gate to control how much the previous hidden layer state is updated to the current candidate hidden layer state, and the calculation formula is:
rt=σ(Wr·[ht-1,xt]+br) (5)
The update gate controls how much of the hidden layer state at the previous moment is carried into the current hidden layer state; its value ranges from 0 to 1, where a value closer to 1 means more data is "remembered" and a value closer to 0 means more data is "forgotten". The calculation formula is:
zt=σ(Wz·[ht-1,xt]+bz) (6)
In formulas (5) and (6), r_t is the reset gate vector at time t, z_t is the update gate vector at time t, σ is the Sigmoid function, W_r and W_z are weight matrices applied to the concatenated vector, h_{t−1} is the hidden layer state of the previous node, x_t is the input at time t, and b_r and b_z are biases;
After the gating signals are obtained, the reset gate is used to obtain the reset data r_t * h_{t−1}, which is then spliced with the input x_t; the activation function tanh scales the data to the range −1 to 1, and the two steps of forgetting and memorizing are performed simultaneously in the memory update stage. The calculation process is:

h̃_t = tanh(W · [r_t * h_{t−1}, x_t] + b), h_t = (1 − z_t) * h_{t−1} + z_t * h̃_t (7)
the technical scheme is further optimized, wherein the step 5 comprises that in an Adam optimization algorithm, a step length factor belongs to, and an exponential decay rate rho of first-order moment estimation1Exponential decay Rate ρ of second moment estimation2For a small constant delta with stable values, the parameter theta is initialized, the first and second moment variables s, r are initialized, the time step t is initialized, and then m samples { x ] are taken from the training set(1),…,x(m)Is corresponding to the target y(1)And calculating the gradient:
updating biased first and second moment estimates:
correcting the deviation of the first order moment and the second order moment:
and (3) updating calculation parameters:
application updating:
θ←θ+Δθ (17)。
In a further refinement of this technical scheme, the false-alarm and detection prediction probabilities in step 7 are:

P_d = P(H_1 | H_1), P_f = P(H_1 | H_0) (8)

The root mean square error is:

RMSE = sqrt((1/n) Σ_{i=1}^{n} (y′_i − y_i)^2) (9)
compared with the prior art, the invention adopting the technical scheme has the following technical effects:
(1) before the secondary user carries out spectrum sensing, the use condition of the spectrum is predicted, and the channel with the predicted result being idle is sensed, so that the energy loss is reduced.
(2) Aiming at the problem that spectrum occupancy state features are not extracted comprehensively, CBAM is added into the residual convolutional neural network so that the network concentrates on the currently most useful information; the spectrum occupancy features among different channels within a time slot, as well as those across time slots of the same channel, can be better extracted, making subsequent prediction results more accurate.
(3) While most existing spectrum prediction models adopt the LSTM model for prediction, this method predicts with the simplified LSTM model, i.e. the GRU model, which greatly reduces the computation required for network learning, improves training efficiency, reduces prediction time, learns the processed data accurately, and improves prediction accuracy.
Drawings
FIG. 1 is a schematic representation of the RCS-GRU model;
FIG. 2 is an M/M/N state transition diagram;
FIG. 3 is an integrated graph of blocks in CBAM and ResNet;
FIG. 4 is a schematic diagram of a channel attention module of the CBAM;
FIG. 5 is a schematic diagram of a spatial attention module of the CBAM;
fig. 6 is a schematic diagram of a GRU module.
Detailed Description
To further illustrate the various embodiments, the invention provides the accompanying drawings. The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate embodiments of the invention and, together with the description, serve to explain the principles of the embodiments. Those skilled in the art will appreciate still other possible embodiments and advantages of the present invention with reference to these figures. Elements in the figures are not drawn to scale and like reference numerals are generally used to indicate like elements.
The invention will now be further described with reference to the accompanying drawings and detailed description.
Referring to FIG. 1, a schematic diagram of the RCS-GRU model is shown. The invention preferably selects a spectrum prediction sensing method based on the RCS-GRU model,
The method comprises the following steps:
Step 1, an M/M/N queuing theory model is adopted to simulate the channels, wherein customers represent primary users and service windows represent spectrum resources. In a cognitive radio network including N authorized spectrum channels, the time from when a primary user leaves the authorized spectrum until it accesses it again obeys a negative exponential distribution with parameter λ, whose probability density function (PDF) is:

f(t) = λe^(−λt), t ≥ 0 (1)

The duration of service transmission by a primary user in the authorized spectrum follows a negative exponential distribution with parameter μ, forming a first-order Markov chain model with N+1 states, whose state transitions are shown in fig. 2. When the model is in state N, the system is considered busy; in any other state, a secondary user may access. The probability P_N that the system is in state N is:

P_N = (ρ^N / N!) / Σ_{n=0}^{N} (ρ^n / n!) (10)
in the formula (10), the compound represented by the formula (10),indicating the traffic strength of the primary user.
Step 2, the channel state sequences simulated in step 1 are combined and spliced into matrices through a sliding window of length 100. Each wireless channel has two possible states: idle (represented by "0") and busy (represented by "1"). An N×1 vector X_t indicates the occupancy state of all channels in time slot t.
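A minimal sketch of the step-2 splicing, assuming random 0/1 occupancy data in place of the M/M/N simulation (the channel count and slot count are illustrative; the window length 100 follows the text):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated 0/1 occupancy for N = 4 channels over 500 time slots
# (0 = idle, 1 = busy); column t is the N x 1 state vector X_t.
states = rng.integers(0, 2, size=(4, 500))

def sliding_windows(state_matrix, window=100):
    """Splice per-slot state vectors into N x window matrices with a
    length-100 sliding window, as described in step 2."""
    n_ch, n_slots = state_matrix.shape
    return np.stack([state_matrix[:, t:t + window]
                     for t in range(n_slots - window + 1)])

samples = sliding_windows(states)   # shape (401, 4, 100)
```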
Step 3, a residual-CBAM-based convolutional neural network model is established; the channel state matrices spliced in step 2 are input into the model, features of the spectrum occupancy state among channels are extracted, and a matrix containing the spectrum occupancy features among different channels is output. In this model, a ResNet network is used as the backbone and CBAM is inserted after it; CBAM is inserted into ResBlock, as shown in fig. 3, which shows the integration of CBAM with the blocks of ResNet. A channel attention mechanism is inserted after the original output matrix to mine the spectrum occupancy features among different channels, and a spatial attention mechanism is then inserted to mine the spectrum occupancy features of different time slots of the same channel.
The input of a residual block ResBlock in the ResNet network is x, the potential mapping of the expected output is H (x), and the residual is defined as:
F(x)=H(x)-x (2)
For a ResBlock, the functional expression is:

H(x) = F(x) + x (3)
The whole flow of CBAM can be summarized as follows: an input feature map F ∈ R^{H×W×C} first passes through the channel attention module to obtain the channel weight coefficient M_c, which is multiplied with the input feature map F to obtain a feature map F′ containing more key features in the channel dimension; F′ is then taken as the input of the spatial attention module to obtain the spatial weight coefficient M_s, which is multiplied with F′ to obtain a feature map F″ containing more key spatial position information. The whole process can be expressed as:

F′ = M_c(F) ⊗ F, F″ = M_s(F′) ⊗ F′ (4)
the channel attention module inputs a H W C feature F ∈ R as shown in FIG. 4H×W×CFirst, the Average pooling layer (Average pooling) and the maximum pooling layer (Max pooling) are used to obtain the feature Favg∈R1×1×CAnd Fmax∈R1×1×CAnd sending the two features into a multilayer perceptron (MLP) with a hidden layer to perform feature dimension reduction and dimension raising operations. The number of neurons in the hidden layer is C/r, the number of the neurons in the output layer is C, the number of the neurons in the hidden layer is Relu, and parameters in the neural networks in the hidden layer and the output layer are shared. Then adding the two obtained characteristics and obtaining a weight coefficient M through a Sigmoid activation functionc(F)∈R1 ×1×C. Finally, the new feature after scaling can be obtained by multiplying the weight coefficient and the original feature F element by element. The channel attention module weight coefficient M in the processc(F) Can be represented by the following formula:
Mc(F)=σ(MLP(AvgPool(F))+MLP(MaxPool(F)))=σ(W1(W0(Favg))+W1(W0(Fnax) Equation (11) where σ represents Sigmoid activation function, W0∈RC/r×CAnd W1∈RC×C/rRespectively representing the weights of the hidden layer and the output layer of the multi-layer perceptron.
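The channel attention computation of formula (11) can be sketched in plain NumPy as follows; the feature sizes, the reduction ratio r = 4 and the random weights are assumptions of this sketch, and a real implementation would use a deep-learning framework:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w0, w1):
    """Channel attention weights M_c(F) of formula (11).
    feat: H x W x C feature map; w0: (C/r, C) and w1: (C, C/r) shared MLP weights."""
    f_avg = feat.mean(axis=(0, 1))                  # average pooling -> (C,)
    f_max = feat.max(axis=(0, 1))                   # max pooling     -> (C,)
    mlp = lambda v: w1 @ np.maximum(w0 @ v, 0.0)    # ReLU hidden layer, shared weights
    return sigmoid(mlp(f_avg) + mlp(f_max))         # one weight per channel

rng = np.random.default_rng(1)
F = rng.standard_normal((8, 8, 16))                 # hypothetical 8 x 8 x 16 feature
W0 = rng.standard_normal((4, 16)) * 0.1             # reduction ratio r = 4 (assumed)
W1 = rng.standard_normal((16, 4)) * 0.1
Mc = channel_attention(F, W0, W1)
F_prime = F * Mc                                    # element-wise scaling, broadcast over H, W
```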
The spatial attention module, shown in fig. 5, takes the feature F′ ∈ R^{H×W×C} output by the channel attention module as input. First, average pooling and max pooling compress the channel dimension to obtain the single-channel feature maps F′_avg ∈ R^{H×W×1} and F′_max ∈ R^{H×W×1}, which are spliced along the channel dimension into a feature map with 2 channels. A 7×7 convolutional layer compresses it in the channel dimension, and a Sigmoid activation yields the spatial attention weight coefficient M_s(F′) ∈ R^{H×W×1}. Finally, the weight coefficient is multiplied element-wise with the feature F′ to obtain the scaled new feature. In this process the spatial attention weight coefficient M_s(F′) can be expressed as:

M_s(F′) = σ(f^{7×7}([AvgPool(F′); MaxPool(F′)])) = σ(f^{7×7}([F′_avg; F′_max])) (12)

In formula (12), f^{7×7} denotes a convolutional layer with kernel size 7.
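A rough NumPy sketch of formula (12), assuming 'same' padding for the 7×7 convolution; the feature sizes and random kernel weights are illustrative assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def spatial_attention(feat, kernel):
    """Spatial attention weights M_s(F') of formula (12).
    feat: H x W x C feature map; kernel: 7 x 7 x 2 convolution weights."""
    # Channel-wise average and max pooling, spliced into an H x W x 2 map
    pooled = np.stack([feat.mean(axis=2), feat.max(axis=2)], axis=2)
    h, w, _ = pooled.shape
    pad = kernel.shape[0] // 2
    padded = np.pad(pooled, ((pad, pad), (pad, pad), (0, 0)))
    out = np.empty((h, w))
    for i in range(h):                        # naive 'same' 7x7 convolution
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + 7, j:j + 7, :] * kernel)
    return sigmoid(out)                       # H x W weight map in (0, 1)

rng = np.random.default_rng(2)
Fp = rng.standard_normal((8, 8, 16))          # hypothetical F' from channel attention
K = rng.standard_normal((7, 7, 2)) * 0.05
Ms = spatial_attention(Fp, K)
F_pp = Fp * Ms[:, :, None]                    # scale every spatial position
```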
Step 4, the feature data set extracted in step 3 is input into the GRU model shown in fig. 6; the temporal features of the channel state are mined, spectrum prediction is performed, and the channel state of the next time slot is output.
The GRU model uses a reset gate to control how much the hidden layer state at the previous moment is updated to the current candidate hidden layer state, and the calculation formula is as follows:
rt=σ(Wr·[ht-1,xt]+br) (5)
The update gate controls how much of the hidden layer state at the previous moment is carried into the current hidden layer state; its value ranges from 0 to 1, where a value closer to 1 means more data is "remembered" and a value closer to 0 means more data is "forgotten". The calculation formula is:
zt=σ(Wz·[ht-1,xt]+bz) (6)
In formulas (5) and (6), r_t is the reset gate vector at time t, z_t is the update gate vector at time t, σ is the Sigmoid function, W_r and W_z are weight matrices applied to the concatenated vector, h_{t−1} is the hidden layer state of the previous node, x_t is the input at time t, and b_r and b_z are biases.
After the gating signals are obtained, the reset gate is used to obtain the reset data r_t * h_{t−1}, which is then spliced with the input x_t; the activation function tanh scales the data to the range −1 to 1, and the two steps of forgetting and memorizing are performed simultaneously in the memory update stage. The calculation process is:

h̃_t = tanh(W · [r_t * h_{t−1}, x_t] + b), h_t = (1 − z_t) * h_{t−1} + z_t * h̃_t (7)
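The reset gate, update gate and memory update above can be sketched as one GRU step in NumPy; the update convention h_t = (1 − z_t)·h_{t−1} + z_t·h̃_t, the layer sizes and the random weights are assumptions of this sketch:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(h_prev, x_t, Wr, Wz, Wh, br, bz, bh):
    """One GRU step: reset gate r_t (formula 5), update gate z_t (formula 6),
    candidate state, and the forget/memorize combination of the new state."""
    concat = np.concatenate([h_prev, x_t])
    r = sigmoid(Wr @ concat + br)                          # reset gate
    z = sigmoid(Wz @ concat + bz)                          # update gate
    h_tilde = np.tanh(Wh @ np.concatenate([r * h_prev, x_t]) + bh)
    return (1.0 - z) * h_prev + z * h_tilde                # forget + memorize

rng = np.random.default_rng(3)
H, X = 6, 4                                   # hypothetical hidden and input sizes
h0 = np.zeros(H)
xt = rng.standard_normal(X)
Wr, Wz, Wh = (rng.standard_normal((H, H + X)) * 0.1 for _ in range(3))
br = bz = bh = np.zeros(H)
h1 = gru_step(h0, xt, Wr, Wz, Wh, br, bz, bh)
```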
and 5, setting a variable learning rate optimization cross-loss function by adopting an Adam optimization algorithm to train the RCS-GRU network in the step 4, and adding a dropout method in the training process.
In the Adam optimization algorithm, the step factor ε is set to 0.001, the exponential decay rate ρ_1 of the first-moment estimate is set to 0.9, the exponential decay rate ρ_2 of the second-moment estimate is set to 0.999, and the small constant δ for numerical stability is set to 10^(−8). First, the parameter θ is initialized, the first- and second-moment variables are set to s = 0 and r = 0, and the time step is initialized to t = 0. Then m samples {x^(1), …, x^(m)} are taken from the training set with corresponding targets y^(i). The gradient is computed:

g ← (1/m) ∇_θ Σ_i L(f(x^(i); θ), y^(i)) (13)

The biased first- and second-moment estimates are updated:

s ← ρ_1 s + (1 − ρ_1) g, r ← ρ_2 r + (1 − ρ_2) g ⊙ g (14)

The bias of the first and second moments is corrected:

ŝ ← s / (1 − ρ_1^t), r̂ ← r / (1 − ρ_2^t) (15)

The parameter update is computed:

Δθ = −ε ŝ / (√r̂ + δ) (16)

The update is applied:
θ←θ+Δθ (17)
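A minimal NumPy sketch of one Adam step with the hyper-parameters listed above; the toy quadratic objective used to exercise it is an assumption for illustration:

```python
import numpy as np

def adam_update(theta, grad, s, r, t, eps=0.001, rho1=0.9,
                rho2=0.999, delta=1e-8):
    """One Adam step: biased moment updates, bias correction, and the
    parameter update of formula (17)."""
    s = rho1 * s + (1.0 - rho1) * grad            # first-moment estimate
    r = rho2 * r + (1.0 - rho2) * grad * grad     # second-moment estimate
    s_hat = s / (1.0 - rho1 ** t)                 # bias correction
    r_hat = r / (1.0 - rho2 ** t)
    theta = theta - eps * s_hat / (np.sqrt(r_hat) + delta)
    return theta, s, r

# Toy check: minimise f(theta) = theta^2, gradient 2*theta.
theta = np.array([1.0])
s = np.zeros(1)
r = np.zeros(1)
for t in range(1, 201):
    theta, s, r = adam_update(theta, 2.0 * theta, s, r, t)
```

After 200 steps with step factor 0.001 the parameter has moved part of the way toward the minimum at zero, which is the expected behaviour of the normalized Adam step.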
Step 6, the sensing frame of the secondary user is divided into three periods: a spectrum prediction period T_p, a spectrum sensing period T_s, and a data transmission period T_d. The channels whose predicted channel state in step 4 is idle are selected for spectrum sensing.
Step 7, the false-alarm prediction probability P_f, the detection prediction probability P_d, and the root mean square error (RMSE) are adopted to evaluate the prediction performance of the RCS-GRU.
The false-alarm and detection prediction probabilities are:

P_d = P(H_1 | H_1), P_f = P(H_1 | H_0) (8)

In formula (8), H_1 and H_0 indicate that the channel state is occupied and idle, respectively.
The root mean square error is:

RMSE = sqrt((1/n) Σ_{i=1}^{n} (y′_i − y_i)^2) (9)

In formula (9), y′_i denotes the true result and y_i denotes the predicted result.
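The step-7 evaluation quantities can be sketched as follows for 0/1 channel-state sequences; the toy true/predicted vectors are illustrative:

```python
import numpy as np

def prediction_metrics(y_true, y_pred):
    """Detection probability P_d, false-alarm probability P_f and RMSE
    for 0/1 channel states (1 = busy, H1; 0 = idle, H0)."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    p_d = np.mean(y_pred[y_true == 1] == 1)   # predicted busy given actually busy
    p_f = np.mean(y_pred[y_true == 0] == 1)   # predicted busy given actually idle
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return p_d, p_f, rmse

true = [1, 1, 1, 0, 0, 0, 1, 0]               # hypothetical true states
pred = [1, 1, 0, 0, 1, 0, 1, 0]               # hypothetical predictions
pd, pf, rmse = prediction_metrics(true, pred)
```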
While the invention has been particularly shown and described with reference to a preferred embodiment, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (10)
1. The spectrum prediction sensing method based on the RCS-GRU model is characterized by comprising the following steps:
step 1, simulating a channel by adopting an M/M/N queuing theory model, wherein a customer represents a master user, and a service window represents frequency spectrum resources;
step 2, combining and splicing the channel state sequences simulated in the step 1 into a matrix;
step 3, establishing a convolutional neural network model based on residual CBAM, inputting the channel state matrix set spliced in the step 2 into the model, and extracting the characteristics of the spectrum occupation state among the channels;
step 4, inputting the feature data set extracted in step 3 into the GRU model, mining the temporal features of the channel state, performing spectrum prediction, and outputting the channel state of the next time slot;
step 5, adopting the Adam optimization algorithm with a variable learning rate to optimize the cross-entropy loss function and train the RCS-GRU network of step 4, adding the dropout method in the training process;
step 6, dividing the sensing frame of the secondary user into three periods: a spectrum prediction period T_p, a spectrum sensing period T_s and a data transmission period T_d, and selecting the channels whose predicted channel state in step 4 is idle for spectrum sensing;
and step 7, evaluating the prediction performance of the RCS-GRU with the curve relating false-alarm prediction probability to detection prediction probability, and the root mean square error (RMSE).
2. The RCS-GRU model-based spectrum prediction sensing method of claim 1, wherein: in step 1, the cognitive radio network includes N authorized spectrum channels; the time from when a primary user leaves the authorized spectrum until it accesses the authorized spectrum again obeys a negative exponential distribution with parameter λ, whose probability density function (PDF) is:

f(t) = λe^(−λt), t ≥ 0 (1)
The duration of service transmission of a primary user in the authorized spectrum obeys a negative exponential distribution with parameter μ, forming a first-order Markov chain model with N+1 states; when the model is in state N the system is considered busy, in any other state a secondary user may access, and the probability that the model is in the busy state is denoted by P_N.
4. The RCS-GRU model-based spectrum prediction sensing method of claim 1, wherein: in said step 2, each wireless channel has two possible states: idle, represented by "0", and busy, represented by "1"; an N×1 vector X_t indicates the occupancy state of all channels in time slot t.
5. The RCS-GRU model-based spectrum prediction sensing method of claim 1, wherein: in said step 3, in the residual-CBAM-based convolutional neural network model, a ResNet network is used as the backbone and CBAM is inserted after it; a channel attention mechanism is inserted after the original output matrix to emphasize what the feature is, and a spatial attention mechanism is then inserted to highlight position information; the input of a residual block ResBlock in the ResNet network is x, the potential mapping of the expected output is H(x), and the residual is defined as:
F(x)=H(x)-x (2)
For a ResBlock, the functional expression is:

H(x) = F(x) + x (3)
The whole flow of CBAM can be summarized as follows: an input feature map F ∈ R^{H×W×C} first passes through the channel attention module to obtain the channel weight coefficient M_c, which is multiplied with the input feature map F to obtain a feature map F′ containing more key features in the channel dimension; F′ is then taken as the input of the spatial attention module to obtain the spatial weight coefficient M_s, which is multiplied with F′ to obtain a feature map F″ containing more key spatial position information. The whole process can be expressed as:

F′ = M_c(F) ⊗ F, F″ = M_s(F′) ⊗ F′ (4)
6. the RCS-GRU model-based spectrum prediction sensing method of claim 5, wherein: the channel attention module inputs a feature F epsilon R of H multiplied by W multiplied by CH×W×CFirst, the Average pooling layer (Average pooling) and the maximum pooling layer (Max pooling) are used to obtain the feature Favg∈R1×1×CAnd Fmax∈R1×1×CAnd sending the two characteristics into a multilayer perceptron (MLP) with a hidden layer to perform characteristic dimension reduction and dimension increasing operations, wherein the number of neurons of the hidden layer is C/r, an activation function is Relu, the number of neurons of an output layer is C, and parameters in the two layers of neural networks are shared. Then adding the two obtained characteristics and obtaining a weight coefficient M through a Sigmoid activation functionc(F)∈R1×1×CFinally, element-by-element multiplication is carried out on the weight coefficient and the original characteristic F to obtain the scalingThe latter new feature, in the process the channel attention module weighting factor Mc(F) Can be represented by the following formula:
M_c(F) = σ(MLP(AvgPool(F)) + MLP(MaxPool(F))) = σ(W_1(W_0(F_avg)) + W_1(W_0(F_max))) (11)
In formula (11), σ represents the Sigmoid activation function, and W_0 ∈ R^(C/r×C) and W_1 ∈ R^(C×C/r) represent the weights of the hidden layer and the output layer of the multilayer perceptron, respectively.
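Formula (11) can be sketched in NumPy. This is a minimal illustration, not the patent's implementation: the weights are random, and W0/W1 are stored transposed relative to the formula so that row-vector matrix multiplication applies:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(F, W0, W1):
    """Scale F by Mc(F) = sigmoid(MLP(AvgPool(F)) + MLP(MaxPool(F))), per formula (11).
    F: (H, W, C). W0: (C, C/r) and W1: (C/r, C) are the shared MLP weights."""
    F_avg = F.mean(axis=(0, 1))                   # global average pooling -> (C,)
    F_max = F.max(axis=(0, 1))                    # global max pooling -> (C,)
    mlp = lambda v: np.maximum(0.0, v @ W0) @ W1  # ReLU hidden layer, weights shared
    Mc = sigmoid(mlp(F_avg) + mlp(F_max))         # channel weights, each in (0, 1)
    return F * Mc                                 # element-wise scaling per channel

rng = np.random.default_rng(1)
Hh, Ww, C, r = 4, 4, 8, 2
F = rng.standard_normal((Hh, Ww, C))
W0 = 0.1 * rng.standard_normal((C, C // r))
W1 = 0.1 * rng.standard_normal((C // r, C))
F_prime = channel_attention(F, W0, W1)
```

Because each channel weight lies in (0, 1), the module can only attenuate channels, never amplify them; the network learns which channels to preserve.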
7. The RCS-GRU model-based spectrum prediction sensing method of claim 6, wherein: the spatial attention module takes the output feature F′ ∈ R^(H×W×C) of the channel attention module as its input. First, an average pooling layer and a maximum pooling layer each compress the channel dimension to obtain the one-dimensional channel feature maps F′_avg ∈ R^(H×W×1) and F′_max ∈ R^(H×W×1), which are spliced in the channel dimension into a feature map with 2 channels. This map is compressed in the channel dimension by a 7×7 convolutional layer and activated by Sigmoid to obtain the spatial attention weight coefficient M_s(F′) ∈ R^(H×W×1). Finally, the weight coefficient is multiplied element-by-element with the feature F′ to obtain the scaled new feature. In this process, the spatial attention module weight coefficient M_s(F′) can be represented by the following formula:
M_s(F′) = σ(f^(7×7)([AvgPool(F′); MaxPool(F′)])) = σ(f^(7×7)([F′_avg; F′_max])) (12)
In formula (12), f^(7×7) represents a convolutional layer with a convolution kernel size of 7×7.
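Formula (12) can likewise be sketched in NumPy. This is an illustrative sketch with a naive same-padded 7×7 convolution and a random kernel, not the patent's trained layer:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def spatial_attention(Fp, kernel):
    """Scale F' by Ms(F') = sigmoid(f7x7([AvgPool(F'); MaxPool(F')])), per formula (12).
    Fp: (H, W, C). kernel: (7, 7, 2), producing a single-channel spatial map."""
    H, W, _ = Fp.shape
    avg = Fp.mean(axis=2)                  # channel-wise average pooling -> (H, W)
    mx = Fp.max(axis=2)                    # channel-wise max pooling -> (H, W)
    stacked = np.stack([avg, mx], axis=2)  # 2-channel map, (H, W, 2)
    padded = np.pad(stacked, ((3, 3), (3, 3), (0, 0)))  # zero pad for "same" output
    Ms = np.empty((H, W))
    for i in range(H):                     # naive 7x7 convolution
        for j in range(W):
            Ms[i, j] = np.sum(padded[i:i + 7, j:j + 7, :] * kernel)
    Ms = sigmoid(Ms)                       # spatial weights, each in (0, 1)
    return Fp * Ms[:, :, None]             # broadcast the weight over all channels

rng = np.random.default_rng(2)
Fp = rng.standard_normal((5, 5, 8))
kernel = 0.1 * rng.standard_normal((7, 7, 2))
F_pp = spatial_attention(Fp, kernel)
```

The single spatial weight map is shared across all C channels, which is what lets the module highlight *where* informative content sits, complementing the channel module's *what*.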
8. The RCS-GRU model-based spectrum prediction sensing method of claim 6, wherein: in the step 4, the GRU model uses a reset gate to control how much of the hidden layer state at the previous moment is carried into the current candidate hidden layer state, and the calculation formula is:
r_t = σ(W_r·[h_(t-1), x_t] + b_r) (5)
The update gate is used to control how much of the hidden layer state at the previous moment is updated into the current hidden layer state; its value ranges from 0 to 1, where a value closer to 1 means more data is "memorized" and a value closer to 0 means more data is "forgotten". The calculation formula is:
z_t = σ(W_z·[h_(t-1), x_t] + b_z) (6)
In formulas (5) and (6), r_t is the reset gate vector at time t, z_t is the update gate vector at time t, σ is the Sigmoid function, W_r and W_z are the weight matrices applied to the concatenated vectors, h_(t-1) is the hidden layer state of the previous node, x_t is the input at time t, and b_r and b_z are biases;
After the gating signals are obtained, the reset gate is used to obtain the reset data r_t * h_(t-1), which is then spliced with the input x_t; the activation function tanh scales the data to the range −1 to 1, and the two steps of forgetting and memorizing are performed simultaneously in the memory-update stage. The calculation process is:
h̃_t = tanh(W·[r_t * h_(t-1), x_t] + b)
h_t = (1 − z_t) * h_(t-1) + z_t * h̃_t
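The gate computations (5)-(6) together with the candidate-state and memory-update steps can be sketched as one GRU time step in NumPy. The random weights are illustrative, and the blend h_t = (1 − z_t)·h_(t-1) + z_t·h̃_t follows the standard GRU convention, which is an assumption where the patent's figure is unavailable:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(x_t, h_prev, Wr, Wz, Wh, br, bz, bh):
    """One GRU time step: reset gate (5), update gate (6), candidate state, blend."""
    concat = np.concatenate([h_prev, x_t])
    r_t = sigmoid(Wr @ concat + br)   # (5) reset gate: how much history to reuse
    z_t = sigmoid(Wz @ concat + bz)   # (6) update gate: memorize vs forget
    cand = np.tanh(Wh @ np.concatenate([r_t * h_prev, x_t]) + bh)  # candidate state
    h_t = (1.0 - z_t) * h_prev + z_t * cand  # forgetting and memorizing at once
    return h_t

rng = np.random.default_rng(3)
n_h, n_x = 4, 3
x_t = rng.standard_normal(n_x)
h_prev = rng.standard_normal(n_h)
Wr, Wz, Wh = (0.1 * rng.standard_normal((n_h, n_h + n_x)) for _ in range(3))
br = bz = bh = np.zeros(n_h)
h_t = gru_step(x_t, h_prev, Wr, Wz, Wh, br, bz, bh)
```

Because z_t gates both terms of the blend, a single coefficient decides how much old state is forgotten and how much candidate state is memorized, which is the GRU's simplification over the LSTM's separate gates.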
9. The RCS-GRU model-based spectrum prediction sensing method of claim 8, wherein: the step 5 comprises, in the Adam optimization algorithm: given a step-size factor ε, an exponential decay rate ρ_1 for the first-order moment estimate, an exponential decay rate ρ_2 for the second-order moment estimate, and a small constant δ for numerical stability, the parameter θ is initialized, the first- and second-order moment variables s and r are initialized, and the time step t is initialized; then m samples {x^(1), …, x^(m)} with corresponding targets y^(i) are taken from the training set and the gradient is calculated:
g ← (1/m)·∇_θ Σ_i L(f(x^(i); θ), y^(i)), t ← t + 1
The biased first-order and second-order moment estimates are updated:
s ← ρ_1·s + (1 − ρ_1)·g
r ← ρ_2·r + (1 − ρ_2)·g⊙g
The deviations of the first-order and second-order moments are corrected:
ŝ ← s/(1 − ρ_1^t)
r̂ ← r/(1 − ρ_2^t)
The parameter update is calculated:
Δθ = −ε·ŝ/(√r̂ + δ)
The update is applied:
θ←θ+Δθ (17)。
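The Adam steps of claim 9 can be sketched as a single update function in NumPy. This is a minimal sketch of one optimization run; the quadratic objective, hyperparameter values, and iteration count are illustrative, not from the patent:

```python
import numpy as np

def adam_update(theta, grad, s, r, t, eps=1e-3, rho1=0.9, rho2=0.999, delta=1e-8):
    """One Adam step: biased moment updates, bias correction, parameter update,
    and application theta <- theta + delta_theta (formula (17))."""
    t += 1
    s = rho1 * s + (1.0 - rho1) * grad         # biased first-order moment estimate
    r = rho2 * r + (1.0 - rho2) * grad * grad  # biased second-order moment estimate
    s_hat = s / (1.0 - rho1 ** t)              # first-moment bias correction
    r_hat = r / (1.0 - rho2 ** t)              # second-moment bias correction
    delta_theta = -eps * s_hat / (np.sqrt(r_hat) + delta)
    return theta + delta_theta, s, r, t

# Drive f(theta) = sum(theta^2) toward its minimum at the origin; grad = 2*theta.
theta = np.array([1.0, -2.0])
s = np.zeros_like(theta)
r = np.zeros_like(theta)
t = 0
for _ in range(5000):
    theta, s, r, t = adam_update(theta, 2.0 * theta, s, r, t)
```

The bias-correction terms matter early on, when s and r are still close to their zero initialization; dividing by (1 − ρ^t) rescales them so the first steps are not artificially small.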
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110017312.3A CN113852432B (en) | 2021-01-07 | 2021-01-07 | Spectrum Prediction Sensing Method Based on RCS-GRU Model |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113852432A true CN113852432A (en) | 2021-12-28 |
CN113852432B CN113852432B (en) | 2023-08-25 |
Family
ID=78972180
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110017312.3A Active CN113852432B (en) | 2021-01-07 | 2021-01-07 | Spectrum Prediction Sensing Method Based on RCS-GRU Model |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113852432B (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115276855A (en) * | 2022-06-16 | 2022-11-01 | 宁波大学 | ResNet-CBAM-based spectrum sensing method |
CN115276856A (en) * | 2022-06-16 | 2022-11-01 | 宁波大学 | Channel selection method based on deep learning |
CN115276853A (en) * | 2022-06-16 | 2022-11-01 | 宁波大学 | CNN-CBAM-based spectrum sensing method |
CN115276854A (en) * | 2022-06-16 | 2022-11-01 | 宁波大学 | ResNet-CBAM-based energy spectrum sensing method for random arrival and departure of main user signal |
CN115802401A (en) * | 2022-10-19 | 2023-03-14 | 苏州大学 | Wireless network channel state prediction method, device, equipment and storage medium |
CN117238420A (en) * | 2023-11-14 | 2023-12-15 | 太原理工大学 | Method and device for predicting mechanical properties of ultrathin strip |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107070569A (en) * | 2017-03-06 | 2017-08-18 | 广西大学 | Multipoint cooperative frequency spectrum sensing method based on HMM model |
CN109194423A (en) * | 2018-08-13 | 2019-01-11 | 中国人民解放军陆军工程大学 | The single-frequency point spectrum prediction method of shot and long term memory models based on optimization |
US20200128473A1 (en) * | 2017-07-11 | 2020-04-23 | Beijing University Of Posts And Telecommunications | Frequency spectrum prediction method and apparatus for cognitive wireless network |
CN111612097A (en) * | 2020-06-02 | 2020-09-01 | 华侨大学 | GRU network-based master user number estimation method |
Non-Patent Citations (1)
Title |
---|
GUO Jia; YU Yongbin; YANG Chenyang: "Multi-step network traffic prediction based on a full attention mechanism", Signal Processing, No. 05 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113852432B (en) | Spectrum Prediction Sensing Method Based on RCS-GRU Model | |
CN110223517B (en) | Short-term traffic flow prediction method based on space-time correlation | |
WO2022027937A1 (en) | Neural network compression method, apparatus and device, and storage medium | |
CN114818515A (en) | Multidimensional time sequence prediction method based on self-attention mechanism and graph convolution network | |
CN111310672A (en) | Video emotion recognition method, device and medium based on time sequence multi-model fusion modeling | |
CN111148118A (en) | Flow prediction and carrier turn-off method and system based on time sequence | |
CN111260124A (en) | Chaos time sequence prediction method based on attention mechanism deep learning | |
CN112766496B (en) | Deep learning model safety guarantee compression method and device based on reinforcement learning | |
CN113840297B (en) | Frequency spectrum prediction method based on radio frequency machine learning model drive | |
CN112183742B (en) | Neural network hybrid quantization method based on progressive quantization and Hessian information | |
CN112910711A (en) | Wireless service flow prediction method, device and medium based on self-attention convolutional network | |
CN107704426A (en) | Water level prediction method based on extension wavelet-neural network model | |
CN115018193A (en) | Time series wind energy data prediction method based on LSTM-GA model | |
CN111832228A (en) | Vibration transmission system based on CNN-LSTM | |
CN115347571A (en) | Photovoltaic power generation power short-term prediction method and device based on transfer learning | |
CN114124260A (en) | Spectrum prediction method, apparatus, medium, and device based on composite 2D-LSTM network | |
CN114596726A (en) | Parking position prediction method based on interpretable space-time attention mechanism | |
CN117035464A (en) | Enterprise electricity consumption carbon emission prediction method based on time sequence network improved circulation network | |
CN113283576A (en) | Spectrum sensing method for optimizing LSTM based on whale algorithm | |
CN115762147B (en) | Traffic flow prediction method based on self-adaptive graph meaning neural network | |
Peng et al. | Hmm-lstm for proactive traffic prediction in 6g wireless networks | |
CN111144473A (en) | Training set construction method and device, electronic equipment and computer readable storage medium | |
CN116522594A (en) | Time self-adaptive transient stability prediction method and device based on convolutional neural network | |
CN114566048A (en) | Traffic control method based on multi-view self-adaptive space-time diagram network | |
CN115310355A (en) | Multi-energy coupling-considered multi-load prediction method and system for comprehensive energy system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||