CN112422213A - Efficient spectrum sensing method based on support vector machine - Google Patents


Publication number
CN112422213A
Authority
CN
China
Prior art keywords
matrix
signal
sample
hyperplane
covariance matrix
Prior art date
Legal status
Granted
Application number
CN202011252562.7A
Other languages
Chinese (zh)
Other versions
CN112422213B (en)
Inventor
包建荣
鲁彪
姜斌
刘超
曾嵘
吴俊
邱雨
Current Assignee
Hangzhou Dianzi University
Original Assignee
Hangzhou Dianzi University
Priority date
Filing date
Publication date
Application filed by Hangzhou Dianzi University filed Critical Hangzhou Dianzi University
Priority to CN202011252562.7A priority Critical patent/CN112422213B/en
Publication of CN112422213A publication Critical patent/CN112422213A/en
Application granted granted Critical
Publication of CN112422213B publication Critical patent/CN112422213B/en
Legal status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04B TRANSMISSION
    • H04B 17/00 Monitoring; Testing
    • H04B 17/30 Monitoring; Testing of propagation channels
    • H04B 17/382 Monitoring; Testing of propagation channels for resource allocation, admission control or handover
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 16/00 Network planning, e.g. coverage or traffic planning tools; Network deployment, e.g. resource partitioning or cells structures
    • H04W 16/14 Spectrum sharing arrangements between different networks
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 24/00 Supervisory, monitoring or testing arrangements
    • H04W 24/08 Testing, supervising or monitoring using real traffic

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Complex Calculations (AREA)

Abstract

The invention discloses an efficient spectrum sensing method based on a support vector machine, which comprises the following steps: S1, inputting a received signal to be sensed; S2, preprocessing the received signal with the principal component analysis (PCA) method and decomposing its covariance matrix with the Doolittle method to obtain feature statistics; S3, obtaining labels for the received signal through an energy detection algorithm, and forming a sample training set from the obtained labels and feature statistics; S4, inputting the formed sample training set into a support vector machine (SVM) classifier for training to obtain a spectrum classifier; and S5, inputting the collected data into the spectrum classifier for processing to obtain a classification result. The invention retains a high spectrum recognition rate under low signal-to-noise ratio; at the same time, the introduction of a non-fixed threshold lets the decision threshold change with the environment, making spectrum sensing more accurate.

Description

Efficient spectrum sensing method based on support vector machine
Technical Field
The invention relates to the technical field of digital communication, in particular to a high-efficiency spectrum sensing method based on a support vector machine.
Background
The traditional spectrum allocation mode is static, so the spectrum cannot be fully utilized; spectrum resources become increasingly scarce, limiting the development of wireless communication.
The traditional energy detection algorithm mainly takes the energy of the SU received signal as the decision criterion. The process is as follows: the received signal first passes through a band-pass filter to remove out-of-band components; the signal is then fed to an A/D converter to obtain a discrete time-domain signal, sampled according to the Shannon sampling theorem; the signal energy is computed to form a statistic, which is compared with a preset threshold. If the statistic is larger than the threshold, the spectrum is occupied; otherwise the spectrum is idle. The energy detection algorithm is simple to implement, needs no prior information, and is widely applied, but it is easily affected by noise and environmental factors, and its detection performance degrades markedly under low signal-to-noise ratio. Meanwhile, the fixed detection threshold makes the algorithm very sensitive to environmental changes, so it cannot meet the requirements of practical applications.
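As a minimal sketch of the fixed-threshold energy detector described above (the threshold value, sinusoidal signal model, and sample count are illustrative assumptions, not values from the patent):

```python
import numpy as np

def energy_detect(x, threshold):
    """Classic energy detection: compare the average energy of the
    received samples against a fixed threshold.  True means the band
    is judged occupied."""
    return np.mean(np.abs(x) ** 2) > threshold

rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1.0, 2000)                           # H0: noise only
signal = noise + np.sqrt(2) * np.sin(0.1 * np.arange(2000))  # H1: PU present
print(energy_detect(noise, 1.5), energy_detect(signal, 1.5))
```

Because the threshold is fixed, the same value of 1.5 that separates these two cases would fail at a lower SNR, which is exactly the weakness the patent addresses with a dynamic, learned decision boundary.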
With the appearance of cognitive radio (CR) technology, cognitive users can intelligently access idle spectrum not occupied by the primary user (PU), greatly improving spectrum utilization. Spectrum sensing, as the key to CR, accurately and intelligently identifies idle spectrum so that spectrum resources are fully used and the utilization rate is effectively improved. The traditional single-user spectrum sensing technique, owing to its limitations, is unsuited to real, complex environments. For this reason, multi-user cooperative spectrum sensing techniques have been developed: fusing the sensing results of multiple users effectively improves sensing performance. However, all of the above sensing methods fix the decision threshold during sensing, so uncertainty factors such as environment and noise strongly influence the sensing result. Introducing machine learning makes the decision threshold dynamic and able to adapt to more complex environments; in particular, under low signal-to-noise ratio a machine learning method can greatly improve detection performance.
Therefore, building on machine learning, the invention provides an efficient spectrum sensing method based on a support vector machine.
Disclosure of Invention
The invention aims to provide, in view of the defects of the prior art, an efficient spectrum sensing method based on a support vector machine that retains a high spectrum recognition rate under low signal-to-noise ratio, while the introduction of a non-fixed threshold lets the decision threshold change with the environment, making spectrum sensing more accurate.
In order to achieve the purpose, the invention adopts the following technical scheme:
a high-efficiency spectrum sensing method based on a support vector machine comprises the following steps:
S1, inputting a received signal to be sensed;
S2, preprocessing the received signal to be sensed with the principal component analysis (PCA) method, and decomposing its covariance matrix with the Doolittle method to obtain feature statistics;
S3, obtaining labels for the received signal to be sensed through an energy detection algorithm, and forming a sample training set from the obtained labels and feature statistics;
S4, inputting the formed sample training set into a support vector machine (SVM) classifier for training to obtain a spectrum classifier;
and S5, inputting the collected data into the spectrum classifier for processing to obtain a classification result.
Further, in step S2, the preprocessing of the to-be-sensed received signal by the principal component analysis PCA is to perform dimension reduction processing and feature extraction on the to-be-sensed received signal.
Further, the performing the dimensionality reduction processing and the feature extraction on the to-be-sensed received signal specifically includes:
given an original sample set X = {X_1, ..., X_M} containing M samples, where each sample X_i = (x_i1, x_i2, ..., x_iN) ∈ R^N, i = 1, ..., M, is a 1×N vector, arrange the samples in matrix form to obtain the sample matrix, denoted:

S = [x_11 x_12 ... x_1N; x_21 x_22 ... x_2N; ...; x_M1 x_M2 ... x_MN]

where S denotes the sample matrix, S ∈ R^(M×N), i.e., S is an M×N matrix; M denotes the number of samples, N the vector dimension, and R^N the N-dimensional real vector space;
subtract from each element of the sample matrix S the mean of its corresponding column (centering), and compute the covariance matrix of S, expressed as:

C_x = (1/M) S^T S

where C_x denotes the covariance matrix of the sample matrix S, C_x ∈ R^(N×N), and S^T denotes the transpose of the sample matrix;
diagonalise the covariance matrix C_x: compute its eigenvalues λ_1, λ_2, ..., λ_N (λ_1 ≥ λ_2 ≥ ... ≥ λ_N) and the corresponding eigenvectors μ_1, μ_2, ..., μ_N, which form a new eigenvector matrix U, denoted:

U = [μ_1, μ_2, ..., μ_N]

where U^T C_x U = Λ, with U, Λ ∈ R^(N×N);
define a variance contribution rate φ(L), where

φ(L) = (Σ_{i=1}^{L} λ_i) / (Σ_{i=1}^{N} λ_i)
When φ(L) ≥ 0.8, the first L eigenvectors form the eigenvector matrix U_L = [μ_1, μ_2, ..., μ_L], U_L ∈ R^(N×L). Using U_L as a basis, the sample matrix is linearly transformed to obtain the matrix S' after dimension reduction and feature extraction, expressed as:

S' = S · U_L

where S denotes a matrix of dimension M×N, U_L a matrix of dimension N×L, and S' a matrix of dimension M×L.
Further, in step S2, the covariance matrix R_x of the received signal to be sensed, used in obtaining the feature statistics by Doolittle decomposition, is expressed as:

R_x = σ_n² · I_M            (H_0)
R_x = R_S + σ_n² · I_M      (H_1)

where I_M denotes the M×M identity matrix and R_S the M×M statistical covariance matrix of the primary-user PU signal; X denotes the matrix of the received signal to be sensed, expressed as:

X = [x_1(1) x_1(2) ... x_1(N); x_2(1) x_2(2) ... x_2(N); ...; x_M(1) x_M(2) ... x_M(N)]

where x_i(k) denotes the signal value on the i-th antenna for the k-th received primary-user PU signal sample;

H_0 and H_1 denote the hypotheses that the primary user PU is absent or present, respectively:

H_0: x(k) = n(k)
H_1: x(k) = h(k) · s(k) + n(k)

where s(k) and n(k) (k = 1, 2, ..., N) denote, respectively, the k-th sample of the received primary-user PU signal sequence and zero-mean additive white Gaussian noise with variance σ_n²; N denotes the total number of samples in the time interval; h(k) denotes the channel gain for the k-th PU signal sample; and x(k) denotes the signal received by the cognitive user SU.
Further, in step S2 the feature statistics are obtained by decomposing the covariance matrix of the received signal to be sensed with the Doolittle method, specifically:

the covariance matrix of the received signal to be sensed is normalised, the result being denoted R'_x, the normalised covariance matrix;
since the leading principal minors of R'_x are all nonzero, the unique Doolittle decomposition is obtained as:

R'_x = B · D · B^T

where B denotes a unit lower triangular matrix, D a diagonal matrix, and B^T the transpose of B:

B = [1 0 ... 0; b_21 1 ... 0; ...; b_M1 b_M2 ... 1],  D = diag(d_1, d_2, ..., d_M)
If the elements of the normalised covariance matrix R'_x are a_ij (1 ≤ i ≤ n, 1 ≤ j ≤ n), each element under the Doolittle decomposition is computed as:

d_k = a_kk − Σ_{m=1}^{k−1} b_km² · d_m
b_jk = (a_jk − Σ_{m=1}^{k−1} b_jm · b_km · d_m) / d_k

where k, j are integers, k = 1, 2, ..., N and j = k+1, k+2, ..., N;
let D_k denote the k-th matrix D, where k = 1, 2, ..., N and N denotes the number of sensing users, and let δ_i^(k) denote the i-th eigenvalue (diagonal element) of the k-th matrix; arranging them in descending order gives

δ_1^(k) ≥ δ_2^(k) ≥ ... ≥ δ_M^(k)

where i = 1, 2, ..., N;
since B and B^T are unit triangular matrices and D is a diagonal matrix, the eigenvalues of D_k are its diagonal elements. From the normalised covariance matrix after Doolittle decomposition under the conditions H_1 and H_0, the feature statistic is constructed as:

T_k = δ_1^(k) / δ_M^(k)

where T_k denotes the constructed feature statistic.
Further, in step S3, the obtaining of the label of the received signal to be sensed through an energy detection algorithm specifically includes:
judging whether the average energy of the received signal to be sensed is greater than a threshold value; if so, the label is set to +1; if not, the label is set to -1.
Further, in step S3 the obtained labels and the obtained feature statistics form a sample training set, specifically:

count the numbers of +1 and -1 labels; if the number of +1 labels is greater than the number of -1 labels, the label of the feature vector T_k is set to +1; otherwise the label of T_k is set to -1;

the feature statistic T_k and its corresponding label f form the sample training set G = {T_k, f}.
Further, the step S4 is specifically:
let (x_1, y_1), ..., (x_L, y_L), x_i ∈ R^N, be the training sample data and y_i ∈ {+1, -1} the label corresponding to x_i, where (x_i, y_i) is the pair of received data and its corresponding label; N denotes the sample dimension and L the number of samples. The maximum-margin hyperplane is then expressed as:

w · x + d = 0

where w denotes the normal vector of the hyperplane, i.e., the vector perpendicular to the hyperplane, and d denotes the offset from the origin;
samples distributed on two sides of the hyperplane satisfy the following constraint conditions:
w · x_i + d ≥ 0  (y_i = +1)
w · x_i + d ≤ 0  (y_i = -1)
adding a mapping function φ(x) to the hyperplane, mapping x to a high-dimensional space, the decision function for each sample point is expressed as:

f(x) = sign(w · φ(x) + b)
the classification margin determined by the hyperplane is 2/||w||_2, where ||·||_2 denotes the L2 norm; the optimisation goal is to maximise the classification margin;

when ||w||_2 is minimised the classification margin is maximised, so the optimisation objective function can be expressed as:

min_{w,b} (1/2)||w||_2²
s.t. y_i(w · x_i + b) ≥ 1, i = 1, 2, ..., L
points on the margin of the hyperplane are called support vectors; a real slack variable ξ is added to adjust the margin and relieve overfitting in the high-dimensional space, and the optimised hyperplane margin problem is expressed as:

min_{w,b,ξ} (1/2)||w||_2² + C · Σ_{i=1}^{L} ξ_i
s.t. y_i(w · φ(x_i) + b) ≥ 1 − ξ_i, ξ_i ≥ 0, i = 1, 2, ..., L

where ξ_i is a real number denoting the i-th slack variable, and C denotes a penalty parameter limiting ξ_i. The optimised hyperplane margin equation is then treated with the Lagrangian method, expressed as:

L(w, b, ξ, α, β) = (1/2)||w||_2² + C · Σ_{i=1}^{L} ξ_i − Σ_{i=1}^{L} α_i [y_i(w · φ(x_i) + b) − 1 + ξ_i] − Σ_{i=1}^{L} β_i ξ_i
where the Lagrange multipliers satisfy α_i ≥ 0, β_i ≥ 0, i = 1, 2, ..., L;
taking the partial derivatives of L(w, b, ξ, α, β) with respect to w, b and ξ_i and setting them to zero gives:

∂L/∂w = 0 ⇒ w = Σ_{i=1}^{L} α_i y_i φ(x_i)
∂L/∂b = 0 ⇒ Σ_{i=1}^{L} α_i y_i = 0
∂L/∂ξ_i = 0 ⇒ C = α_i + β_i
the hyperplane optimisation (dual) problem is expressed as:

max_α Σ_{i=1}^{L} α_i − (1/2) Σ_{i=1}^{L} Σ_{j=1}^{L} α_i α_j y_i y_j K(x_i, x_j)
s.t. 0 ≤ α_i ≤ C, Σ_{i=1}^{L} α_i y_i = 0

where K(x_i, x_j) = ⟨φ(x_i), φ(x_j)⟩ denotes the kernel function, i.e., the inner product of the mapping function φ(x) in the feature space;
the core optimization problem in the SVM classifier is solved by adopting a sequence minimum optimization method to obtain a Lagrange factor alpha and an offset b, and then a classification function, namely a spectrum classifier is expressed as follows:
Figure BDA0002772053300000073
wherein sign (& lt) is a sign function.
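The training and classification of steps S4 and S5 can be sketched with an off-the-shelf SVM implementation. scikit-learn's `SVC` (a third-party library, used here only as a stand-in for the patent's classifier) solves the same dual problem with an SMO-type solver and supports the RBF kernel the description selects; the one-dimensional features and labels below are synthetic stand-ins for T_k and f:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(42)
# Synthetic stand-ins for the feature statistics T_k and labels f:
# noise-only (H0) features cluster near 1, PU-present (H1) features are larger.
t_h0 = rng.normal(1.0, 0.05, (100, 1))   # H0 samples, label -1
t_h1 = rng.normal(1.6, 0.15, (100, 1))   # H1 samples, label +1
X = np.vstack([t_h0, t_h1])
y = np.hstack([-np.ones(100), np.ones(100)])

clf = SVC(kernel="rbf", C=1.0)           # RBF kernel, SMO-based dual solver
clf.fit(X, y)
print(clf.predict([[0.98], [1.7]]))      # near the H0 / H1 cluster centres
```

The learned decision boundary sits wherever the data place it, so it shifts with the noise level of the training samples; this is the "dynamic threshold" behaviour the invention claims over fixed-threshold energy detection.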
Further, the classification result obtained in the step S5 is a spectrum occupation condition of the primary user PU signal; the frequency spectrum occupation condition is as follows: if the output result of the frequency spectrum classifier is +1, the PU signal frequency spectrum of the main user is occupied; and if the output result of the frequency spectrum classifier is-1, the frequency spectrum of the PU signal of the main user is not occupied.
Compared with the prior art, the method preprocesses the sensing signal with PCA, effectively reducing the sample dimension; constructs feature statistics from the Doolittle-decomposed covariance matrix; and trains an SVM to find the optimal hyperplane, maximising the margin between PU and noise. The method achieves a higher detection rate under low signal-to-noise ratio, effectively improves spectrum utilisation, and has high application value.
Drawings
Fig. 1 is a flowchart of a method for efficient spectrum sensing based on a support vector machine according to an embodiment;
FIG. 2 is a schematic diagram illustrating a flow of statistics for PCA preprocessing and Doolittle decomposition covariance matrix construction according to an embodiment;
FIG. 3 is a schematic diagram of a training set flow formed by generation and statistics of tags according to an embodiment;
FIG. 4 is a schematic flowchart of a training spectrum classifier according to an embodiment;
FIG. 5 is a flowchart of a test spectrum classifier according to an embodiment;
FIG. 6 is a schematic diagram of a linear maximum spacing hyperplane of an SVM according to an embodiment;
FIG. 7 is a diagram illustrating an exemplary Cognitive Radio Network (CRN) system architecture according to an embodiment;
FIG. 8 is a schematic diagram illustrating a variation of an average detection probability of each spectrum sensing scheme under different SNR according to an embodiment;
FIG. 9 is a schematic diagram illustrating a variation of an average error probability of each spectrum sensing scheme under different SNR according to an embodiment;
FIG. 10 is a schematic diagram of ROC curves for various spectral detection schemes provided in accordance with one embodiment.
Detailed Description
The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It is to be noted that the features in the following embodiments and examples may be combined with each other without conflict.
The invention aims to provide an efficient spectrum sensing method based on a support vector machine, aiming at the defects of the prior art.
Example one
This embodiment provides an efficient spectrum sensing method based on a support vector machine, as shown in fig. 1, comprising the steps of:
S1, inputting a received signal to be sensed;
S2, preprocessing the received signal to be sensed with the principal component analysis (PCA) method, and decomposing its covariance matrix with the Doolittle method to obtain feature statistics;
S3, obtaining labels for the received signal to be sensed through an energy detection algorithm, and forming a sample training set from the obtained labels and feature statistics;
S4, inputting the formed sample training set into a support vector machine (SVM) classifier for training to obtain a spectrum classifier;
and S5, inputting the collected data into the spectrum classifier for processing to obtain a classification result.
This embodiment discloses an SVM spectrum sensing method based on PCA preprocessing and Doolittle covariance-matrix decomposition. First, spectrum sensing has only two outcomes, occupied or not, which fits the binary nature of SVM classification. In addition, the dimension of the sensing signal strongly affects the running speed of the algorithm, so PCA preprocessing of the sensing signal is proposed to reduce the signal dimension while keeping the main information of the signal; compared with the traditional ED algorithm, the complexity of the method is reduced from M(2N−1) to 2M/5(4M/5+1)(2N−1). The statistic is then constructed by Doolittle decomposition of the covariance matrix, whose matrix-decomposition complexity is reduced from o(M³) for the MME and EME algorithms to o(2M³/3). Compared with the traditional SVM spectrum sensing algorithm, the method greatly reduces the sample dimension, effectively reduces complexity, and improves spectrum sensing efficiency.
This embodiment adopts the following technical scheme: the average energy of the sensing signal is extracted as a feature and compared with a known threshold to generate a label. PCA preprocessing reduces the dimension of the sensing signal, the Doolittle-decomposed covariance matrix yields the feature statistics, and finally training samples composed of the statistics and labels are input into an SVM classifier for training to obtain the spectrum classifier; the RBF kernel function is selected in the invention. Test data are input into the spectrum classifier: an output of "+1" indicates the presence of a PU signal, and an output of "-1" indicates the absence of a PU signal.
The specific implementation mode is as follows:
in step S1, a reception signal to be sensed is input.
Given an original sample set X = {X_1, ..., X_M} containing M samples, where each sample X_i = (x_i1, x_i2, ..., x_iN) ∈ R^N, i = 1, ..., M, is a 1×N vector, the samples are arranged in matrix form to obtain the sample matrix, denoted:

S = [x_11 x_12 ... x_1N; x_21 x_22 ... x_2N; ...; x_M1 x_M2 ... x_MN]

where S denotes the sample matrix, S ∈ R^(M×N), i.e., S is an M×N matrix; M denotes the number of samples, N the vector dimension, and R^N the N-dimensional real vector space.
In step S2, the received signal to be sensed is preprocessed by principal component analysis PCA, and its covariance matrix is decomposed with the Doolittle method to obtain the feature statistics, as shown in fig. 2.
In this embodiment, the preprocessing of the to-be-perceived received signal by the principal component analysis PCA is to perform dimension reduction processing and feature extraction on the to-be-perceived received signal.
Principal component analysis (PCA) is a dimension-reduction and feature-extraction method from statistics. Its essence is to transform the coordinates of the measured data in a high-dimensional space and retain the coordinates representing the main data directions as the coordinate directions of the new data in a low-dimensional space. The mean of the corresponding column is subtracted from each element of the received data to reduce correlation among the data; the eigenvalues of the covariance matrix are then computed and sorted from large to small, the eigenvectors of the first k eigenvalues with the largest share are taken to represent the information of the matrix, and a new matrix is formed by matrix transformation. Through PCA preprocessing, the data dimension is reduced and computational efficiency is improved.
The method for carrying out dimensionality reduction processing and feature extraction on the received signal to be perceived specifically comprises the following steps:
Subtract from each element of the sample matrix S the mean of its corresponding column (centering), and compute the covariance matrix of S, expressed as:

C_x = (1/M) S^T S

where C_x denotes the covariance matrix of the sample matrix S, C_x ∈ R^(N×N), and S^T denotes the transpose of the sample matrix;

diagonalise C_x: compute its eigenvalues λ_1, λ_2, ..., λ_N (λ_1 ≥ λ_2 ≥ ... ≥ λ_N) and the corresponding eigenvectors μ_1, μ_2, ..., μ_N, i.e., U^T C_x U = Λ, where U, Λ ∈ R^(N×N). The eigenvectors μ_1, μ_2, ..., μ_N form a new eigenvector matrix U, denoted:

U = [μ_1, μ_2, ..., μ_N]
Define a variance contribution rate φ(L), where

φ(L) = (Σ_{i=1}^{L} λ_i) / (Σ_{i=1}^{N} λ_i)
When φ(L) ≥ 0.8, the first L eigenvectors form the eigenvector matrix U_L = [μ_1, μ_2, ..., μ_L], U_L ∈ R^(N×L). Using U_L as a basis, the sample matrix is linearly transformed to obtain the matrix S' after dimension reduction and feature extraction, expressed as:

S' = S · U_L

where S denotes a matrix of dimension M×N, U_L a matrix of dimension N×L, and S' a matrix of dimension M×L.
Through the calculation, the dimensionality reduction processing and the characteristic extraction from the N-dimensional sample matrix to the L-dimensional matrix are completed, and a matrix S' is obtained and is used as the input of the subsequent steps.
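Under the assumption of a 1/M covariance scaling (the exact scaling does not change the eigenvectors), the dimension-reduction steps above can be sketched as:

```python
import numpy as np

def pca_reduce(S, var_keep=0.8):
    """PCA preprocessing as described above: centre the columns,
    eigendecompose the covariance matrix, keep the leading L
    eigenvectors whose variance contribution phi(L) first reaches
    var_keep, and project the samples onto them."""
    Sc = S - S.mean(axis=0)            # centre each column
    C = Sc.T @ Sc / S.shape[0]         # N x N covariance matrix (1/M scaling)
    lam, U = np.linalg.eigh(C)         # eigh returns ascending eigenvalues
    lam, U = lam[::-1], U[:, ::-1]     # sort descending
    phi = np.cumsum(lam) / lam.sum()   # variance contribution rate phi(L)
    L = int(np.searchsorted(phi, var_keep) + 1)
    return Sc @ U[:, :L]               # M x L reduced sample matrix S'

rng = np.random.default_rng(1)
S = rng.normal(size=(100, 10))
S[:, 0] *= 10.0                        # one dominant variance direction
S_red = pca_reduce(S)
print(S_red.shape)                     # dimension reduced from 10 columns
```

With one column carrying most of the variance, φ(1) already exceeds 0.8, so the ten-dimensional samples collapse to a single principal component.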
In this embodiment, the covariance matrix of the received signal to be sensed is decomposed with the Doolittle method to obtain the feature statistics.
Doolittle decomposition is a special case of triangular (LU) decomposition and can be expressed as R = LU, where L is a lower triangular matrix and U an upper triangular matrix. When the matrix R is a non-negative symmetric matrix and its leading principal minors are all nonzero, there exists a unique Doolittle decomposition R = BDB^T, where B is a unit lower triangular matrix, D a diagonal matrix, and B^T the transpose of matrix B.
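A minimal sketch of the decomposition just described. For a symmetric positive-definite matrix it coincides with the square-root-free (LDL^T) variant of Cholesky factorisation, which gives an easy correctness check:

```python
import numpy as np

def doolittle_bdbt(A):
    """Doolittle decomposition A = B D B^T for a symmetric matrix with
    nonzero leading principal minors: B unit lower triangular, d the
    diagonal of D."""
    n = A.shape[0]
    B = np.eye(n)
    d = np.zeros(n)
    for k in range(n):
        d[k] = A[k, k] - np.sum(B[k, :k] ** 2 * d[:k])
        for j in range(k + 1, n):
            B[j, k] = (A[j, k] - np.sum(B[j, :k] * B[k, :k] * d[:k])) / d[k]
    return B, d

rng = np.random.default_rng(0)
G = rng.normal(size=(4, 4))
A = G @ G.T + 4 * np.eye(4)            # symmetric positive definite
B, d = doolittle_bdbt(A)

# Reconstruction check, plus the link to Cholesky: A = L L^T with
# L = B diag(sqrt(d)), so B D B^T is a square-root-free Cholesky.
print(np.allclose(B @ np.diag(d) @ B.T, A),
      np.allclose(B * np.sqrt(d), np.linalg.cholesky(A)))
```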
In CNR, assuming that a cognitive user SU has M antennas, the SU perceptual signal can be represented as a binary hypothesis problem:
Figure BDA0002772053300000111
wherein H0And H1Respectively indicate the assumed condition whether the primary user PU exists, and s (k) and N (k) (1, 2.. N) respectively indicate the k-th received primary user PU signal sequence and the mean value is zero variance
Figure BDA0002772053300000115
Additive white gaussian noise of (1); n represents the total number of samples for a time interval; h (k) represents the channel gain of the k-th main user PU signal sequence; x (k) represents a reception signal of the cognitive user SU.
By collecting the signals of the SU's M antennas, the matrix of the received signal to be sensed is obtained, expressed as:

X = [x_1(1) x_1(2) ... x_1(N); x_2(1) x_2(2) ... x_2(N); ...; x_M(1) x_M(2) ... x_M(N)]

where the element x_i(k) of the matrix X denotes the signal value on the i-th antenna for the k-th received primary-user PU signal sample.
Under the two hypotheses above, the covariance matrix R_x of the received signal to be sensed is, respectively:

R_x = σ_n² · I_M            (H_0)
R_x = R_S + σ_n² · I_M      (H_1)

where I_M denotes the M×M identity matrix and R_S the M×M statistical covariance matrix of the primary-user PU signal.
The feature statistics are obtained as follows:

the covariance matrix of the received signal to be sensed is normalised, the result being denoted R'_x, the normalised covariance matrix;
under either hypothesis H_1 or H_0, the covariance matrix R'_x is a non-negative symmetric matrix whose leading principal minors are all nonzero, so there is a unique Doolittle decomposition:

R'_x = B · D · B^T   (4)

where B denotes a unit lower triangular matrix, D a diagonal matrix, and B^T the transpose of B.
Under H_1 and H_0 the matrices B and D obtained by Doolittle decomposition differ, but because B and B^T are unit triangular matrices, the determinant of R'_x can be expressed as the product of the diagonal elements of the diagonal matrix D. Under hypothesis H_0, D is a diagonal matrix with equal elements; under hypothesis H_1, D is a diagonal matrix with unequal diagonal elements, which can be represented as:

D = diag(d_1, d_2, ..., d_M)
If the elements of the normalised covariance matrix R'_x are a_ij (1 ≤ i ≤ n, 1 ≤ j ≤ n), each element under the Doolittle decomposition is computed as:

d_k = a_kk − Σ_{m=1}^{k−1} b_km² · d_m
b_jk = (a_jk − Σ_{m=1}^{k−1} b_jm · b_km · d_m) / d_k

where k, j are integers, k = 1, 2, ..., N and j = k+1, k+2, ..., N;
let D_k denote the k-th matrix D, where k = 1, 2, ..., N and N denotes the number of sensing users, and let δ_i^(k) denote the i-th eigenvalue (diagonal element) of the k-th matrix; arranging them in descending order gives

δ_1^(k) ≥ δ_2^(k) ≥ ... ≥ δ_M^(k)

where i = 1, 2, ..., N;
because B and B^T are both unit triangular matrices and D is a diagonal matrix, the eigenvalues of D_k are its diagonal elements. From the normalised covariance matrix after Doolittle decomposition under the conditions H_1 and H_0, the feature statistic is constructed as:

T_k = δ_1^(k) / δ_M^(k)

where T_k denotes the constructed feature statistic.
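A simulation sketch of the statistic's behaviour under the two hypotheses. The original gives the normalisation and statistic only as images, so the normalisation by the largest matrix entry and the max/min diagonal ratio below are assumptions, chosen to be consistent with the text's observation that D has equal diagonal elements under H_0 and unequal ones under H_1:

```python
import numpy as np

def doolittle_bdbt(A):
    """Doolittle decomposition A = B D B^T (B unit lower triangular,
    d the diagonal of D) for a symmetric matrix with nonzero leading
    principal minors."""
    n = A.shape[0]
    B = np.eye(n)
    d = np.zeros(n)
    for k in range(n):
        d[k] = A[k, k] - np.sum(B[k, :k] ** 2 * d[:k])
        for j in range(k + 1, n):
            B[j, k] = (A[j, k] - np.sum(B[j, :k] * B[k, :k] * d[:k])) / d[k]
    return B, d

def ratio_statistic(X):
    """Max/min ratio of the diagonal of D for the normalised sample
    covariance of X (rows = antennas).  Normalising by the largest
    entry is an illustrative stand-in for the original's step."""
    R = X @ X.T / X.shape[1]
    _, d = doolittle_bdbt(R / np.max(np.abs(R)))
    return d.max() / d.min()

rng = np.random.default_rng(3)
M, N = 4, 5000
noise = rng.normal(size=(M, N))        # H0: independent noise per antenna
pu = 2.0 * rng.normal(size=N)          # common PU waveform
occupied = noise + pu                  # H1: same PU signal on all antennas
t_h0, t_h1 = ratio_statistic(noise), ratio_statistic(occupied)
print(round(t_h0, 2), round(t_h1, 2))  # near 1 under H0, well above 1 under H1
```

The separation between the two values is what lets a trained SVM place a decision boundary between the noise-only and PU-present clusters.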
In step S3, a label of the received signal to be sensed is obtained through an energy detection algorithm, and the obtained label and the obtained feature statistics are combined into a sample training set. As shown in fig. 3.
The label of the received signal to be sensed is obtained through the energy detection algorithm, specifically:

in the energy detection algorithm, the detection threshold is

λ = σ_n² · (1 + Q⁻¹(P_f) · √(2/N))

and the average energy of the received signal is

E = (1/N) · Σ_{k=1}^{N} |x(k)|²

where Q⁻¹ is the inverse of the Q function Q(x) = (1/√(2π)) · ∫_x^∞ e^(−t²/2) dt, and P_f is the false-alarm probability.
Under the condition of high signal-to-noise ratio, judging whether the average energy of the received signal to be sensed is larger than a threshold value, if so, setting the label as + 1; if not, setting the label as-1.
Forming a sample training set by the acquired labels and the acquired feature statistics, specifically:
calculating the number of +1 and-1, judging whether the number of +1 is greater than the number of-1, if so, determining the characteristic vector TkThe label of (1) is set as + 1; if not, the feature vector T is processedkThe label of (1) is set as-1. The feature statistic TkAnd the feature statistic TkCorresponding label composition sample training set G ═ Tk,f}。
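A sketch of the labelling and majority-vote steps of S3 (the window sizes, threshold, and DC "signal" offset are illustrative assumptions):

```python
import numpy as np

def make_labels(windows, threshold):
    """Energy-detection labels: +1 if the average energy of a window
    exceeds the threshold, else -1 (step S3)."""
    return np.where(np.mean(windows ** 2, axis=1) > threshold, 1, -1)

def majority_label(labels):
    """Majority vote over per-user labels: +1 if +1s outnumber -1s."""
    return 1 if np.sum(labels == 1) > np.sum(labels == -1) else -1

rng = np.random.default_rng(7)
windows = rng.normal(0.0, 1.0, (5, 200))   # five noise-only windows
windows[:4] += 1.5                          # four windows carry a DC "signal"
labels = make_labels(windows, 1.5)
print(labels, majority_label(labels))       # one dissenting -1 is outvoted
```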
In step S4, the composed sample training set is input into a Support Vector Machine (SVM) classifier for training, and a spectrum classifier is obtained. As shown in fig. 4.
A Support Vector Machine (SVM) is a supervised classifier in machine learning that solves pattern recognition and classification problems via quadratic programming. The SVM classifier finds the optimal hyperplane of a linearly separable feature space through the support vectors, i.e., the hyperplane that maximizes the probability of correct classification while attaining the maximum margin. The hyperplane equation is (w·x) + b = 0, where w is the normal vector of the hyperplane and b is the offset from the origin. Points on the two sides of the hyperplane correspond to the two classes to be distinguished: if the class label y_i = +1, then (w·x_i) + b ≥ 0; if y_i = -1, then (w·x_i) + b ≤ 0. The classification margin determined by the hyperplane is 2/||w||_2, and the points lying on the margin are the support vectors. The maximum-margin hyperplane is the one that, subject to these conditions, maximizes 2/||w||_2, i.e.

max_{w,b} 2/||w||_2  subject to  y_i((w·x_i) + b) ≥ 1, i = 1, ..., L
Let there be samples (x_1, y_1), ..., (x_L, y_L), with x_i ∈ R^N the training sample data and y_i ∈ {+1, -1} the label corresponding to x_i, wherein (x_i, y_i) is an associated data pair representing the received data and its corresponding label; N represents the sample dimension and L the number of samples. The maximum-margin hyperplane is then represented as:
w·x+d=0 (10)
wherein w represents a normal vector of the hyperplane, i.e., a vector perpendicular to the hyperplane; d represents an offset from the origin.
Samples distributed on two sides of the hyperplane satisfy the following constraint conditions:
w·x_i + d ≥ 0 (y_i = +1) (11)
w·x_i + d ≤ 0 (y_i = -1) (12)
For a nonlinear hyperplane, a mapping function φ(x) must be added to map x to a high-dimensional space; the decision function for each sample point is then expressed as:
f(x)=sign(w·φ(x)+b) (13)
The classification margin determined by the hyperplane is 2/||w||_2, where ||·||_2 denotes the L2 norm; the optimization goal is to maximize the classification margin.
When ||w||_2 is minimized, the classification margin is maximized, so the optimization objective function can be expressed as:

min_{w,d} (1/2)||w||_2²  subject to  y_i(w·x_i + d) ≥ 1, i = 1, ..., L (14)
Points on the hyperplane margin are called support vectors. A real slack variable ξ is added to adjust the margin and alleviate overfitting in the high-dimensional space; the optimized hyperplane margin problem is expressed as:

min_{w,d,ξ} (1/2)||w||_2² + C Σ_{i=1}^{L} ξ_i (15)

subject to  y_i(w·x_i + d) ≥ 1 − ξ_i,  ξ_i ≥ 0,  i = 1, ..., L
wherein ξ_i is a real number representing the i-th slack variable; C is a penalty parameter that limits ξ_i to reduce the loss, and also reflects the sensitivity of the SVM classifier to sample misclassification: the larger C is, the more sensitive the classifier is to misclassified samples.
The optimized hyperplane margin problem is treated with the Lagrangian method, expressed as:

L_a(w, d, α, β) = (1/2)||w||_2² + C Σ_{i=1}^{L} ξ_i − Σ_{i=1}^{L} α_i [y_i(w·x_i + d) − 1 + ξ_i] − Σ_{i=1}^{L} β_i ξ_i (16)
wherein the Lagrange factors satisfy α_i ≥ 0, β_i ≥ 0, i = 1, 2, ..., L;
Taking the partial derivatives of L_a(w, d, α, β) with respect to the two variables w and d and setting them to zero yields equation (17):

w = Σ_{i=1}^{L} α_i y_i φ(x_i),  Σ_{i=1}^{L} α_i y_i = 0 (17)
Substituting equation (17) into the Lagrangian (16), the hyperplane optimization problem is converted into its dual form, equation (18), expressed as:

max_α Σ_{i=1}^{L} α_i − (1/2) Σ_{i=1}^{L} Σ_{j=1}^{L} α_i α_j y_i y_j K(x_i, x_j) (18)

subject to  0 ≤ α_i ≤ C,  Σ_{i=1}^{L} α_i y_i = 0
wherein K(x_i, x_j) = ⟨φ(x_i), φ(x_j)⟩ denotes the kernel function, i.e., the inner product of the mapping function φ(x) in the feature space.
The Lagrange factor α and the offset b are obtained by solving the SVM core optimization problem above with the Sequential Minimal Optimization (SMO) method mentioned in the background. Substituting the solved w and b into equation (13), the spectrum classifier is obtained as the classification function shown in equation (19):

f(x) = sign(Σ_{i=1}^{L} α_i y_i K(x_i, x) + b) (19)
wherein sign(·) is the sign function, which yields one of two results, +1 or -1. When f(x) = +1, hypothesis H_1 holds, i.e., the PU signal is present; otherwise the PU signal is absent.
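The training and classification stages can be sketched with an off-the-shelf RBF-kernel SVM; scikit-learn's `SVC` solves a dual problem of the form of equation (18) with an SMO-type solver, and its `predict` implements the sign decision of equation (19). The two-dimensional feature values and labels below are synthetic stand-ins for the statistics T_k and labels f:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
# Synthetic feature statistics: H0 (noise) clustered low, H1 (PU present) higher
T_h0 = rng.normal(1.0, 0.1, size=(100, 2))
T_h1 = rng.normal(1.6, 0.1, size=(100, 2))
T = np.vstack([T_h0, T_h1])
f = np.hstack([-np.ones(100), np.ones(100)])  # labels from energy detection

# RBF-kernel SVM: trains by solving the dual problem, decides by the sign function
clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(T, f)
pred = clf.predict([[0.95, 1.05], [1.65, 1.55]])
```

Here `C` is the penalty parameter and `gamma` the RBF width; both would be tuned to the actual statistics T_k rather than fixed as above.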
Sequential Minimal Optimization (SMO) is an algorithm for solving quadratic optimization problems, and its most classical application is solving the SVM problem. From the optimization objective of the SVM, the final goal is to compute an optimal set of Lagrange coefficients and the offset d. The central idea of the SMO algorithm is to select two coefficients at a time for optimization while fixing the others (two, because the constraint Σ α_i y_i = 0 implies that a single coefficient cannot change alone without violating the constraint). This process is repeated until the stopping condition is reached, at which point the algorithm exits with the required optimization result.
In step S5, the collected data is input to a spectrum classifier and processed to obtain a classification result. As shown in fig. 5.
The method comprises the steps of firstly conducting PCA preprocessing on data collected by a cognitive user, conducting Doolittle decomposition after reducing correlation and data dimensionality to obtain a characteristic statistic T, inputting the statistic into a trained spectrum classifier and processing to obtain the spectrum occupation condition of a PU signal. If the output of the frequency spectrum classifier is +1, the frequency spectrum of the PU signal is occupied; if the output is-1, it indicates that the PU signal spectrum is unoccupied.
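The PCA preprocessing step, which retains the leading eigenvectors until the cumulative variance contribution rate reaches a threshold (0.8 in the claims), can be sketched as follows; the 1/M covariance scaling and the toy correlated data are assumptions for illustration:

```python
import numpy as np

def pca_reduce(S, threshold=0.8):
    """Center the sample matrix, then keep the leading eigenvectors whose
    cumulative variance contribution rate reaches the threshold (S' = S U_L)."""
    Sc = S - S.mean(axis=0)                 # centering
    C = Sc.T @ Sc / S.shape[0]              # covariance matrix C_x (assumed 1/M scaling)
    vals, vecs = np.linalg.eigh(C)          # eigh returns ascending eigenvalues
    order = np.argsort(vals)[::-1]          # sort descending
    vals, vecs = vals[order], vecs[:, order]
    L = int(np.searchsorted(np.cumsum(vals) / vals.sum(), threshold)) + 1
    return Sc @ vecs[:, :L]                 # dimensionality-reduced samples

rng = np.random.default_rng(3)
base = rng.standard_normal((50, 3))
# Six correlated dimensions: columns 4-6 nearly duplicate columns 1-3
S = np.hstack([base, base + 0.01 * rng.standard_normal((50, 3))])
S_reduced = pca_reduce(S)
```

Because three of the six columns are near-duplicates, most of the variance lies in three directions and the reduced matrix has fewer columns than the input.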
The method has the following advantages: the PCA algorithm preprocesses the sensing signals, effectively reducing the signal dimensionality and the complexity of subsequent training and testing; feature statistics are constructed from the Doolittle-decomposed covariance matrix, reducing the correlation among sensing signals and the complexity of the statistics; and the SVM algorithm with the RBF kernel function successfully separates the noise from the PU signal, effectively improving the spectrum detection probability. The method therefore has high application value.
Fig. 6 is a schematic diagram of the linear maximum-margin hyperplane of the SVM. The key to the SVM is to maximize the classifier margin while minimizing the sum of the errors, thereby maximizing the generalization capability. In a low signal-to-noise-ratio environment, however, spectrum sensing cannot be realized through a linear hyperplane. A kernel function (i.e., a nonlinear mapping function φ(x)) is typically used to map the low-dimensional input space to a high-dimensional dot-product feature space, where the linear inseparability of the low-dimensional space can be resolved.
Fig. 7 is a schematic diagram illustrating a typical Cognitive Radio Network (CRN) system architecture. A typical CRN consists of a Primary User (PU) and a Secondary User (SU); it is generally assumed that the wireless network communications of the PU and SU are physically separated, so the SU cannot directly obtain the channel status of the PU. In this system, the PU has priority access to its occupied channels. The Cognitive Base Station (CBS) first determines the free channels in the spectrum by detecting PU signals in the channels, then relays the status of the PU receiver (PU-R) and identifies the free spectrum. Once the PU no longer occupies the spectrum, the SU can reuse it. If the spectrum used by the SU is accessed by the PU, the SU must vacate the spectrum and move its traffic into a buffer while the cognitive device detects other idle spectrum.
Fig. 8 is a schematic diagram illustrating the variation of the average detection probability of each spectrum sensing scheme under different signal-to-noise ratios. As can be seen from fig. 8, the detection probabilities of all schemes increase with the SNR, and the detection probability of the proposed method is the best: at -15 dB, the average detection probabilities of the ED and SVM algorithms are 0.25 and 0.65, respectively, while the detection probability of the proposed method is 0.8, an improvement of 55 and 15 percentage points, respectively. In the proposed method, the Doolittle decomposition of the covariance matrix extracts feature statistics containing all the information of the PU signal, and the use of the SVM classifier gives the sensing good generalization capability. Therefore, compared with traditional energy detection and other blind spectrum detection schemes, the method has a higher detection probability.
Fig. 9 is a schematic diagram illustrating the variation of the average error probability of each spectrum sensing scheme under different signal-to-noise ratios. When the signal-to-noise ratio is the same, the average error probability of the scheme provided by the invention is obviously lower than that of other schemes, and compared with algorithms such as ED and MME, the average error probability of the scheme provided by the invention is reduced more rapidly, mainly because: when the signal-to-noise ratio is low, the PU signal and the noise signal are mixed, so that the PU signal is easily covered by the noise signal, the detection performance is reduced, and the error probability is increased. However, the scheme of the invention separates PU signals and noise to the maximum extent through RBF kernel function nonlinear mapping, so that the error probability is far lower than that of ED detection. Secondly, compared with other original blind spectrum detection schemes, the characteristic statistics extracted by the Doolittle decomposition covariance matrix in the invention contains all information of PU signals, and can effectively distinguish noise and useful information, so that the invention is far superior to other traditional algorithms in the aspect of average error probability.
Fig. 10 shows a schematic diagram of the ROC curves of each spectrum detection scheme. The ROC curve plots the detection probability against the false alarm probability; as can be seen from fig. 10, the detection probability is positively correlated with the false alarm probability. When the false alarm probability is fixed, a larger signal-to-noise ratio gives higher detection performance, because the PU signal is more easily detected at high signal-to-noise ratio. As fig. 10 also shows, the SVM algorithm has better detection performance than other algorithms such as ED and KNN.
The embodiment provides efficient Support Vector Machine (SVM) spectrum sensing with PCA preprocessing and Doolittle decomposition under low signal-to-noise ratio, comprising the PCA preprocessing, the construction of the Doolittle-decomposed covariance matrix and its statistic, and the SVM training and testing processes. For the two possible spectrum sensing outcomes, occupied or unoccupied spectrum, an SVM classification model is introduced. Under low signal-to-noise ratio, the spectrum is sensed efficiently and quickly, spectrum utilization is improved, and the method has high application value.
Although the embodiments of the present invention have been clearly described, it will be apparent to those skilled in the art that various changes, modifications, substitutions and alterations can be made therein without departing from the principles and spirit of the method of the present invention, the scope of which is defined by the appended claims and their equivalents. In the method, model parameters such as the PU bandwidth, the sampling frequency and sampling time of the cognitive radio primary signal, the generation modes of the training and test signals, the dimensionality of the test samples, the construction of the covariance matrix, the parameters of the RBF kernel function, and the penalty factor in the SVM can be changed according to actual conditions; such variants still fall within the scope of the method of the invention and are still protected by the invention.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (9)

1. A high-efficiency spectrum sensing method based on a support vector machine is characterized by comprising the following steps:
s1, inputting a received signal to be sensed;
s2, preprocessing the signal to be sensed by a Principal Component Analysis (PCA) method, and decomposing the covariance matrix of the signal to be sensed by Doolittle decomposition to obtain feature statistics;
s3, obtaining a label of a received signal to be sensed through an energy detection algorithm, and forming a sample training set by the obtained label and the obtained characteristic statistics;
s4, inputting the formed sample training set into a Support Vector Machine (SVM) classifier for training to obtain a spectrum classifier;
and S5, inputting the collected data into a frequency spectrum classifier for processing to obtain a classification result.
2. The method as claimed in claim 1, wherein, in step S2, the preprocessing of the received signal to be sensed by the Principal Component Analysis (PCA) method comprises performing dimensionality reduction and feature extraction on the received signal to be sensed.
3. The efficient spectrum sensing method based on the support vector machine according to claim 2, wherein the performing the dimensionality reduction processing and the feature extraction on the received signal to be sensed specifically comprises:
given an original sample set X = {X_1, ..., X_M} containing M samples, each sample vector X_i being a 1×N-dimensional vector, i.e., X_i = (x_{i1}, x_{i2}, ..., x_{iN}) ∈ R^N, i = 1, ..., M, and arranging the samples in matrix form, resulting in a sample matrix, denoted as:

S = [ x_11 x_12 ... x_1N ; x_21 x_22 ... x_2N ; ... ; x_M1 x_M2 ... x_MN ]
wherein S represents the sample matrix, S ∈ R^{M×N}, i.e., S is an M×N-dimensional matrix; M denotes the number of samples, N denotes the vector dimension, and R^N denotes the N-dimensional real space;
subtracting from each element of the sample matrix S the mean value of its corresponding column (centering), and calculating the covariance matrix of the sample matrix S, expressed as:

C_x = (1/M) S^T S
wherein C_x represents the covariance matrix of the sample matrix S, C_x ∈ R^{N×N}; S^T represents the transpose of the sample matrix;
diagonalizing the covariance matrix C_x, calculating the eigenvalues λ_1, λ_2, ..., λ_N of C_x (λ_1 ≥ λ_2 ≥ ⋯ ≥ λ_N) and the corresponding eigenvectors μ_1, μ_2, ..., μ_N of C_x; the eigenvectors μ_1, μ_2, ..., μ_N form a new eigenvector matrix U, denoted as:

U = [μ_1, μ_2, ..., μ_N]

wherein μ^T C_x μ = Λ; μ, Λ ∈ R^{N×N};
defining a variance contribution rate φ(L), wherein

φ(L) = Σ_{i=1}^{L} λ_i / Σ_{i=1}^{N} λ_i
when φ(L) ≥ 0.8, the first L eigenvectors form the eigenvector matrix U_L = [μ_1, μ_2, ..., μ_L], U_L ∈ R^{N×L}; using it as a basis, the sample matrix is linearly transformed to obtain the matrix S' after dimensionality reduction and feature extraction, expressed as:

S' = S · U_L
wherein S represents a matrix of dimension M×N; U_L represents a matrix of dimension N×L; S' represents a matrix of dimension M×L.
4. The SVM-based efficient spectrum sensing method as claimed in claim 3, wherein, in the feature statistics obtained in step S2 by Doolittle decomposition of the covariance matrix of the received signal to be sensed, the covariance matrix R_x of the received signal to be sensed is expressed as:

R_x = E[X X^T] = R_S + σ_n² I_M
wherein I_M represents the M×M identity matrix; R_S represents the M×M statistical covariance matrix of the primary user PU signal; X represents the matrix of the received signal to be sensed, expressed as:

X = [ x_1(1) x_1(2) ... x_1(N) ; x_2(1) x_2(2) ... x_2(N) ; ... ; x_M(1) x_M(2) ... x_M(N) ]
wherein x_i(k) represents the signal value at the i-th antenna for the k-th received primary user PU signal;
H_0 and H_1, the hypotheses representing respectively the absence and presence of the primary user PU, are:

H_0: x(k) = n(k)
H_1: x(k) = h(k) s(k) + n(k),  k = 1, 2, ..., N

wherein s(k) denotes the k-th received primary user PU signal sequence and n(k) denotes additive white Gaussian noise with zero mean and variance σ_n²; N represents the total number of samples in the time interval; h(k) represents the channel gain of the k-th primary user PU signal sequence; x(k) represents the received signal of the cognitive user SU.
5. The method as claimed in claim 4, wherein, in step S2, the feature statistics are obtained from the Doolittle decomposition of the covariance matrix of the received signal to be sensed as follows:
the covariance matrix of the received signal to be sensed is normalized, expressed as:

[equation image in the original: normalized covariance matrix R'_x]
wherein R'_x represents the covariance matrix after normalization;
if all leading principal minors of R'_x are nonzero, its unique Doolittle decomposition is obtained as:

R'_x = B D B^T
wherein B represents a unit lower triangular matrix, D represents a diagonal matrix, and B^T represents the transpose of B:

B = [ 1 0 ... 0 ; b_21 1 ... 0 ; ... ; b_n1 b_n2 ... 1 ],  D = diag(d_1, d_2, ..., d_n)
if the element of the normalized covariance matrix R'_x at coordinates (i, j) is a_ij (1 ≤ i ≤ n, 1 ≤ j ≤ n), each element under the Doolittle decomposition is calculated as:

d_k = a_kk − Σ_{m=1}^{k−1} b_km² d_m

b_jk = (a_jk − Σ_{m=1}^{k−1} b_jm b_km d_m) / d_k
wherein k and j are integers, k = 1, 2, ..., N and j = k+1, k+2, ..., N;
letting D_k denote the k-th matrix D, where k = 1, 2, ..., N and N represents the number of sensing users, and letting λ_i^(k) denote the i-th eigenvalue of the k-th matrix, arranged in descending order as λ_1^(k) ≥ λ_2^(k) ≥ ⋯ ≥ λ_N^(k), wherein i = 1, 2, ..., N;
the matrices B and B^T are unit triangular matrices and the matrix D is a diagonal matrix; D_k is the k-th matrix D from the Doolittle decomposition of the normalized covariance matrix; under the hypotheses H_1 and H_0, the feature statistics are respectively constructed as:

[equation image in the original: feature statistic T_k]

wherein T_k represents the constructed feature statistic.
6. The method according to claim 1, wherein in step S3, the label of the received signal to be sensed is obtained through an energy detection algorithm, and specifically:
judging whether the average energy of the received signal to be sensed is greater than a threshold value: if so, setting the label to +1; if not, setting the label to -1.
7. The efficient spectrum sensing method based on the support vector machine according to claim 6, wherein step S3 includes forming a sample training set from the obtained labels and the obtained feature statistics, specifically:
calculating the number of +1 and -1 labels, and judging whether the number of +1 labels is greater than the number of -1 labels: if so, setting the label of the feature vector T_k to +1; if not, setting the label of T_k to -1;
the feature statistic T_k and its corresponding label form the sample training set G = {T_k, f}.
8. The method for efficient spectrum sensing based on a support vector machine according to claim 1, wherein the step S4 specifically comprises:
let (x_1, y_1), ..., (x_L, y_L), with x_i ∈ R^N being the training sample data and y_i ∈ {+1, -1} the label corresponding to x_i, wherein (x_i, y_i) is an associated data pair representing the received data and its corresponding label; N represents the sample dimension and L the number of samples; the maximum-margin hyperplane is then represented as:
w·x+d=0
wherein w represents a normal vector of the hyperplane, i.e., a vector perpendicular to the hyperplane; d represents an offset from the origin;
samples distributed on two sides of the hyperplane satisfy the following constraint conditions:
w·x_i + d ≥ 0 (y_i = +1)
w·x_i + d ≤ 0 (y_i = -1)
adding a mapping function φ(x) to the hyperplane to map x to a high-dimensional space, the decision function for each sample point is expressed as:
f(x)=sign(w·φ(x)+b)
the classification margin determined by the hyperplane is 2/||w||_2, wherein ||·||_2 represents the L2 norm; the optimization goal is to maximize the classification margin;
when ||w||_2 is minimized, the classification margin is maximized, so the optimization objective function can be expressed as:

min_{w,d} (1/2)||w||_2²

subject to  y_i(w·x_i + d) ≥ 1,  i = 1, ..., L
points on the hyperplane margin are called support vectors; a real slack variable ξ is added to adjust the margin and alleviate overfitting in the high-dimensional space, and the optimized hyperplane margin problem is expressed as:

min_{w,d,ξ} (1/2)||w||_2² + C Σ_{i=1}^{L} ξ_i

subject to  y_i(w·x_i + d) ≥ 1 − ξ_i,  ξ_i ≥ 0,  i = 1, ..., L
wherein ξ_i is a real number representing the i-th slack variable, and C represents a penalty parameter limiting ξ_i;
the optimized hyperplane margin problem is treated with the Lagrangian method, expressed as:

L_a(w, d, α, β) = (1/2)||w||_2² + C Σ_{i=1}^{L} ξ_i − Σ_{i=1}^{L} α_i [y_i(w·x_i + d) − 1 + ξ_i] − Σ_{i=1}^{L} β_i ξ_i
wherein the Lagrange factors satisfy α_i ≥ 0, β_i ≥ 0, i = 1, 2, ..., L;
taking the partial derivatives of L_a(w, d, α, β) with respect to w and d and setting them to zero, expressed as:

w = Σ_{i=1}^{L} α_i y_i φ(x_i),  Σ_{i=1}^{L} α_i y_i = 0
the hyperplane optimization problem is then expressed in dual form as:

max_α Σ_{i=1}^{L} α_i − (1/2) Σ_{i=1}^{L} Σ_{j=1}^{L} α_i α_j y_i y_j K(x_i, x_j)

subject to  0 ≤ α_i ≤ C,  Σ_{i=1}^{L} α_i y_i = 0
wherein K(x_i, x_j) = ⟨φ(x_i), φ(x_j)⟩ represents the kernel function, the inner product of the mapping function φ(x) in the feature space;
the core optimization problem in the SVM classifier is solved by the Sequential Minimal Optimization method to obtain the Lagrange factor α and the offset b, and the classification function, i.e., the spectrum classifier, is then expressed as:

f(x) = sign(Σ_{i=1}^{L} α_i y_i K(x_i, x) + b)
wherein sign(·) is the sign function.
9. The method according to claim 8, wherein the classification result obtained in step S5 is a spectrum occupation status of a primary user PU signal; the frequency spectrum occupation condition is as follows: if the output result of the frequency spectrum classifier is +1, the PU signal frequency spectrum of the main user is occupied; and if the output result of the frequency spectrum classifier is-1, the frequency spectrum of the PU signal of the main user is not occupied.
CN202011252562.7A 2020-11-11 2020-11-11 Efficient spectrum sensing method based on support vector machine Active CN112422213B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011252562.7A CN112422213B (en) 2020-11-11 2020-11-11 Efficient spectrum sensing method based on support vector machine

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011252562.7A CN112422213B (en) 2020-11-11 2020-11-11 Efficient spectrum sensing method based on support vector machine

Publications (2)

Publication Number Publication Date
CN112422213A true CN112422213A (en) 2021-02-26
CN112422213B CN112422213B (en) 2022-06-14

Family

ID=74781412

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011252562.7A Active CN112422213B (en) 2020-11-11 2020-11-11 Efficient spectrum sensing method based on support vector machine

Country Status (1)

Country Link
CN (1) CN112422213B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104135327A (en) * 2014-07-10 2014-11-05 上海大学 Spectrum sensing method based on support vector machine
CN108768563A (en) * 2018-05-17 2018-11-06 广东工业大学 A kind of cooperative frequency spectrum sensing method and relevant apparatus
CN109286458A (en) * 2018-10-31 2019-01-29 天津大学 Cooperation frequency spectrum sensing method based on fuzzy support vector machine
CN109547133A (en) * 2018-12-06 2019-03-29 杭州电子科技大学 A kind of SVM high-efficiency frequency spectrum cognitive method decomposing sample covariance matrix based on Cholesky
CN109525994A (en) * 2018-12-17 2019-03-26 中国空间技术研究院 High energy efficiency frequency spectrum sensing method based on support vector machines
CN111860602A (en) * 2020-06-22 2020-10-30 中国科学院沈阳自动化研究所 Machine learning-based efficient and rapid industrial spectrum cognition method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
JIANRONG BAO: "Improved Blind Spectrum Sensing by Covariance Matrix Cholesky Decomposition and RBF-SVM Decision Classification at Low SNRs", 《IEEE ACCESS》 *
XI YANG等: "Blind detection for primary user based on the sample covariance matrix in cognitive radio", 《IEEE COMMUNICATIONS LETTERS》 *
聂建园: "基于采样协方差矩阵的混合核SVM高效频谱感知", 《电信科学》 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113095162A (en) * 2021-03-24 2021-07-09 杭州电子科技大学 Spectrum sensing method based on semi-supervised deep learning
CN113138201A (en) * 2021-03-24 2021-07-20 北京大学 Metamaterial Internet of things system and method for wireless passive environment state detection
CN113138201B (en) * 2021-03-24 2022-05-20 北京大学 Metamaterial Internet of things system and method for wireless passive environment state detection
CN113345443A (en) * 2021-04-22 2021-09-03 西北工业大学 Marine mammal vocalization detection and identification method based on mel-frequency cepstrum coefficient
CN113705446A (en) * 2021-08-27 2021-11-26 电子科技大学 Open set identification method for individual radiation source
CN113705446B (en) * 2021-08-27 2023-04-07 电子科技大学 Open set identification method for individual radiation source
CN114531324A (en) * 2021-09-16 2022-05-24 北京理工大学 Classification method based on channel measurement
CN115022790A (en) * 2022-05-23 2022-09-06 桂林电子科技大学 Loudspeaker abnormal sound classification method based on combination of auditory perception and principal component analysis

Also Published As

Publication number Publication date
CN112422213B (en) 2022-06-14

Similar Documents

Publication Publication Date Title
CN112422213B (en) Efficient spectrum sensing method based on support vector machine
CN109547133B (en) SVM high-efficiency spectrum sensing method based on Cholesky decomposition sampling covariance matrix
CN110224956B (en) Modulation recognition method based on interference cleaning and two-stage training convolutional neural network model
CN109379153B (en) Spectrum sensing method
CN108596154A (en) Classifying Method in Remote Sensing Image based on high dimensional feature selection and multi-level fusion
Zhang et al. Machine learning techniques for spectrum sensing when primary user has multiple transmit powers
CN113541834B (en) Abnormal signal semi-supervised classification method and system and data processing terminal
CN109274625A (en) A kind of modulates information mode determines method, apparatus, electronic equipment and storage medium
CN112821968B (en) Efficient spectrum sensing method based on compressed sensing and support vector machine
CN112787736A (en) Long-short term memory cooperative spectrum sensing method based on covariance matrix
CN103714340B (en) Self-adaptation feature extracting method based on image partitioning
CN114024808A (en) Modulation signal identification method and system based on deep learning
CN116192307A (en) Distributed cooperative multi-antenna cooperative spectrum intelligent sensing method, system, equipment and medium under non-Gaussian noise
Liu et al. L1-subspace tracking for streaming data
CN115038091A (en) Method and system for sensing wireless communication frequency spectrum on arctic sea
CN111934797B (en) Collaborative spectrum sensing method based on covariance eigenvalue and mean shift clustering
CN110995631B (en) Communication signal modulation mode identification method and system based on LSTM and SVM
CN116865884A (en) Broadband spectrum sensing method based on online learning
CN114337883B (en) CNN collaborative spectrum sensing method and system for covariance matrix Cholesky decomposition
CN111401440A (en) Target classification recognition method and device, computer equipment and storage medium
CN116720060A (en) Radio frequency fingerprint identification method based on lightweight deep learning model
Valadão et al. Cooperative spectrum sensing system using residual convolutional neural network
CN115276857A (en) Total-blind spectrum detection method based on combination of Cholesky decomposition and convolutional neural network
Majumder A variational Bayesian approach to multiantenna spectrum sensing under correlated noise
CN109241886B (en) Face recognition method and system based on OLBP and PCA

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant