CN112464836A - AIS radiation source individual identification method based on sparse representation learning
- Publication number
- CN112464836A (application CN202011393425.5A)
- Authority
- CN
- China
- Prior art keywords
- dictionary
- ais
- feature
- neural network
- radiation source
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
- G06F2218/08: Aspects of pattern recognition specially adapted for signal processing; feature extraction
- G06F18/2135: Feature extraction, e.g. by transforming the feature space, based on approximation criteria, e.g. principal component analysis
- G06F18/214: Generating training patterns; bootstrap methods, e.g. bagging or boosting
- G06F18/24: Classification techniques
- G06F18/28: Determining representative reference patterns, e.g. by averaging or distorting; generating dictionaries
- G06N3/045: Neural networks; combinations of networks
- G06N3/048: Neural networks; activation functions
- G06N3/08: Neural networks; learning methods
- G06V10/454: Local feature extraction; integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
- G06F2218/12: Aspects of pattern recognition specially adapted for signal processing; classification; matching
Abstract
The invention belongs to the technical field of signal identification and relates to an AIS radiation source individual identification method based on sparse representation learning. The method extracts shallow and deep class-discriminative features with a neural network and adopts a sparse representation method based on these multi-level features. For multi-level feature extraction, the feature extraction network is trained in a supervised manner to mine the shallow and deep features in the signal that aid classification; these features are used to expand the original signal dictionary, the test sample is dimension-reduced and sparsely reconstructed on the expanded multi-level dictionary, and the classification decision is made according to the reconstruction errors. Experimental results show that the proposed method achieves good identification performance on an AIS data set collected in practice.
Description
Technical Field
The invention belongs to the technical field of signal identification, and relates to an AIS radiation source individual identification method based on sparse representation learning.
Background
The Automatic Identification System (AIS) is a shipborne broadcast transponder system. If the Maritime Mobile Service Identity (MMSI) that uniquely identifies a ship in the AIS system is tampered with, navigation safety is seriously threatened. The radio frequency fingerprint (RFF), by contrast, is an intrinsic characteristic of the physical layer of the AIS terminal's transmitting device and is difficult to tamper with. Radiation source individual identification based on radio frequency fingerprints therefore provides a physical-layer means of protecting the security of the AIS communication system and can be applied to detecting illegal radiation source signals. Effectively applying advanced AIS radiation source individual identification technology to maritime transportation can strengthen the intelligent management of maritime traffic and help build a comprehensive maritime transportation system that is safe, efficient and resource-saving.
In the field of individual identification of communication radiation sources, the traditional approach is to extract features from the signal and then classify them with a model such as a Support Vector Machine (SVM). Good identification results can be achieved with statistical features such as high-order spectra and nonlinear dynamic characteristics, or with transform-domain features obtained by decomposing the radiation source signal or applying the Hilbert-Huang transform. These methods, however, usually require manual parameter setting, depend on prior knowledge and lack generality. In recent years, Deep Learning (DL) has been widely applied in fields such as healthcare and transportation. Compared with traditional feature extraction, deep neural networks can automatically extract highly discriminative, essential features of the signal, so some researchers have begun to apply classical neural network models to radiation source individual identification, for example using a Convolutional Neural Network (CNN) to identify a specific emitter, or using a residual network (ResNet) to process the Hilbert spectrogram of the signal. However, most neural-network-based radiation source individual identification methods still rely on traditional features and share similar shortcomings with traditional methods. Networks designed specifically for time-series classification, such as InceptionTime, Encoder and ResNet, can also be used for radiation source individual identification, since the processed objects are one-dimensional time sequences.
Sparse representation of data is another research focus that has emerged in recent years. When sparse representation theory is applied to classification, the approach is known as Sparse Representation-based Classification (SRC). In SRC, the training samples of all classes form an over-complete dictionary, a test sample is sparsely represented by the basis vectors in the dictionary, and the classification decision is made from the sparse representation coefficients. The sparse representation coefficients can be solved with representative algorithms such as Basis Pursuit (BP) and Orthogonal Matching Pursuit (OMP).
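As a small illustration of SRC (not taken from the patent), the following Python sketch sparse-codes a test vector over a dictionary whose columns are training samples using scikit-learn's Orthogonal Matching Pursuit, then classifies it by per-class reconstruction error; the dictionary, labels and sparsity level are synthetic assumptions.

```python
# Minimal SRC illustration with Orthogonal Matching Pursuit (synthetic data).
import numpy as np
from sklearn.linear_model import orthogonal_mp

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 40))              # 40 training atoms of dimension 64
D /= np.linalg.norm(D, axis=0)                 # unit-norm dictionary columns
labels = np.repeat(np.arange(4), 10)           # 4 classes, 10 atoms per class
x = D[:, 3] + 0.01 * rng.standard_normal(64)   # noisy sample drawn from class 0

coef = orthogonal_mp(D, x, n_nonzero_coefs=5)  # sparse representation coefficients
errors = [np.linalg.norm(x - D @ np.where(labels == c, coef, 0.0)) for c in range(4)]
print("predicted class:", int(np.argmin(errors)))
```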
Disclosure of Invention
The invention provides, for the first time, a classification algorithm based on multi-level sparse representation learning for AIS radiation source individual identification. The technical scheme of the invention innovatively combines a neural network with a sparse representation classifier: a multi-scale convolutional neural network is designed to extract features hidden in the signal, the dictionary is expanded with features from the shallow and deep layers of the neural network, and AIS signals are sparsely represented and classified on the expanded dictionary.
The technical scheme of the invention is as follows: an AIS radiation source individual identification method based on sparse representation learning comprises the following steps:
S1, acquiring AIS signals to construct a training data set; the invention intercepts the rising edge, the training sequence and the start flag as the valid data of the AIS signal.
The radiation source individual identification process usually needs to intercept a segment of valid signal from which the radio frequency fingerprint is extracted. For general signals, the start and end positions of the valid data are usually located by detecting the changing part of the signal. For signals such as AIS, which follow a strict transmission specification, locating the valid data by means of the synchronization sequence in the signal is clearly more accurate and efficient. The rising edge, the training sequence and the start flag of an AIS signal are required to carry identical transmitted symbols, contain no data information, and include the signal segment in which the transmitter rises from zero to rated power; they therefore embody the subtle hardware-induced differences between individual AIS radiation sources and can be used for discrimination. The invention therefore intercepts the rising edge, the training sequence and the start flag as the valid data of the AIS signal.
S2, constructing a neural network: the neural network is constructed from two Inception modules with channel attention mechanisms, defined respectively as the first Inception module and the second Inception module, which are cascaded; the training data set is input to the first Inception module, and the output of the first Inception module is passed through a bottleneck layer that reduces the number of channels to 1 to obtain the shallow feature; the output of the second Inception module is global-average-pooled to obtain the deep feature;
The invention uses the Inception module from classical neural networks to extract features of the input signal and integrates a channel attention mechanism (Attention Mechanism) into the learning of the neural network, so that the network focuses on the more useful channels. Two Inception modules with channel attention mechanisms are cascaded, and a bottleneck layer in between reduces the number of channels to 1 to form the shallow feature. The output of the second Inception module is global-average-pooled along the time dimension to form the deep feature. The Inception module slides three convolution kernels of different scales over the input signal simultaneously; since kernels of different scales have different receptive fields, local information at different resolutions can be extracted from the time-series signal. Meanwhile, the integrated channel attention mechanism focuses the neural network on the useful channels, so that effective features can be obtained.
S3, training the constructed neural network by adopting a training data set to obtain a trained neural network;
S4, constructing a multi-level feature dictionary: assume the original signal dictionary is $S_o = [S_o^1, S_o^2, \dots, S_o^K] \in \mathbb{R}^{M \times N}$, where $K$ is the number of classes of all samples, $M$ is the dimension of the original signal and $N$ is the number of training samples; the original sub-dictionary of each class, $S_o^i = [s_{o,1}^i, \dots, s_{o,N_i}^i] \in \mathbb{R}^{M \times N_i}$, is built from the $N_i$ original samples of class $i$, with $\sum_{i=1}^{K} N_i = N$. Each original signal sample $s_o$ yields two corresponding features after passing through the trained feature extraction network, a shallow feature $s_s$ and a deep feature $s_d$. The original signal dictionary is expanded with these two features to obtain the expanded multi-level feature dictionary $S = [S^1, S^2, \dots, S^K]$, in which the sub-dictionary of each class is expanded to $S^i = [S_o^i, S_s^i, S_d^i]$, composed of the original sub-dictionary $S_o^i$, the shallow feature sub-dictionary $S_s^i$ and the deep feature sub-dictionary $S_d^i$;
S5, reducing the dimension of the multi-level feature dictionary $S$ by principal component analysis to obtain the multi-level dictionary $D$:

$D = A \cdot (S - m \cdot \mathbf{1})$

where $m$ is the vector formed by the mean of each row of the dictionary $S$ and $(S - m \cdot \mathbf{1})$ is the decentering operation; the $M$ eigenvalues and corresponding eigenvectors of the covariance matrix $\mathrm{Cov} = (S - m \cdot \mathbf{1}) \cdot (S - m \cdot \mathbf{1})^{T} \in \mathbb{R}^{M \times M}$ are solved, and the eigenvectors corresponding to the first $P$ eigenvalues, ordered from the largest eigenvalue to the smallest, are stacked row by row into the projection matrix $A \in \mathbb{R}^{P \times M}$, where $P$ is the feature dimension after projection;
S6, AIS radiation source individual identification: the signal to be identified $x \in \mathbb{R}^{M}$ is mapped through the projection matrix into $y = A \cdot (x - m)$, and the sparse representation coefficient of $y$ is solved as

$\hat{\theta} = \arg\min_{\theta} \|\theta\|_{1} \ \text{s.t.} \ y = D \cdot \theta$

where $\theta$ is the coding vector of $y$ on the multi-level dictionary matrix $D$ and the $\ell_1$-norm minimum solution $\hat{\theta}$ is obtained with the basis pursuit algorithm. Signal reconstruction and classification are then carried out according to $\hat{\theta}$:

$\mathrm{identity}(y) = \arg\min_{i} \| y - D \cdot \delta_i(\hat{\theta}) \|_{2}$

where $\delta_i(\hat{\theta})$ is the coding coefficient vector of class $i$, i.e. the elements of the sparse representation coefficient $\hat{\theta}$ of $y$ corresponding to class $i$ are retained and all other elements are set to zero; reconstruction is carried out on each class and the reconstruction error is computed, and the class with the minimum reconstruction error is judged to be the radiation source individual to which the AIS signal to be identified belongs.
The principle of the invention is as follows: each original signal sample $s_o$ yields two corresponding features after the feature extraction network, a shallow feature $s_s$ and a deep feature $s_d$. The original signal dictionary is expanded with these two features, and sparse reconstruction is carried out on the multi-level dictionary. Because the shallow feature dictionary $S_s$ and the deep feature dictionary $S_d$ are obtained by passing the original dictionary $S_o$ through the neural network, they can be regarded as providing higher-level features of $S_o$. Compared with the original dictionary alone, introducing shallow and deep feature dictionaries that better represent class information therefore describes the samples better. Reconstruction is carried out on each class and the reconstruction error is computed; the class with the minimum reconstruction error is judged to be the radiation source individual to which the test AIS signal belongs.
The advantage of the method is that shallow and deep class-discriminative features are extracted with a neural network and a sparse representation method based on these multi-level features is adopted. For multi-level feature extraction, the feature extraction network is trained in a supervised manner to mine the shallow and deep features in the signal that aid classification; these features are used to expand the original signal dictionary, the test sample is dimension-reduced and sparsely reconstructed on the expanded multi-level dictionary, and the classification decision is made according to the reconstruction errors. Experimental results show that the method proposed herein achieves good identification performance on the AIS data set collected in practice.
Drawings
FIG. 1 is a schematic flow chart of an implementation of the recognition method of the present invention;
FIG. 2 is a schematic diagram of a feature extraction architecture;
fig. 3 is a schematic structural diagram of the Inception module, which extracts local information at different resolutions from the time-series signal;
FIG. 4 is a graph of experimental results of different methods.
Detailed Description
The present invention is described in detail below with reference to the attached drawings so that those skilled in the art can better understand the present invention.
As shown in fig. 1, the method of the present invention mainly comprises the following steps: first, valid data interception; then, training of the feature extraction network and construction of the multi-level dictionary; finally, identification of the AIS radiation source individual.
The method comprises the following specific steps:
Step 1, valid data interception. The radiation source individual identification process usually needs to intercept a segment of valid signal from which the radio frequency fingerprint is extracted. For general signals, the start and end positions of the valid data are usually located by detecting the changing part of the signal. For signals such as AIS, which follow a strict transmission specification, locating the valid data by means of the synchronization sequence in the signal is clearly more accurate and efficient. The rising edge, the training sequence and the start flag of an AIS signal are required to carry identical transmitted symbols, contain no data information, and include the signal segment in which the transmitter rises from zero to rated power; they therefore embody the subtle hardware-induced differences between individual AIS radiation sources and can be used for discrimination. The invention therefore intercepts the rising edge, the training sequence and the start flag as the valid data of the AIS signal, as sketched below.
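A minimal Python sketch of this interception step is given below. The energy-threshold burst detector, the sample-rate handling and the assumed 8-bit ramp-up length are illustrative assumptions; the patent only specifies which fields (rising edge, training sequence, start flag) are kept.

```python
# Intercept the valid segment of one AIS burst: ramp-up + training sequence + start flag.
import numpy as np

BIT_RATE = 9600                    # AIS GMSK symbol rate (bits/s)

def intercept_valid_data(iq: np.ndarray, fs: float,
                         ramp_bits: int = 8, train_bits: int = 24, flag_bits: int = 8):
    """iq: complex baseband samples of one AIS burst; fs: sample rate in Hz."""
    sps = fs / BIT_RATE                               # samples per bit
    power = np.abs(iq) ** 2
    threshold = 0.1 * power.max()                     # crude energy threshold (assumption)
    start = int(np.argmax(power > threshold))         # first sample above threshold
    n_valid = int(round((ramp_bits + train_bits + flag_bits) * sps))
    return iq[start:start + n_valid]                  # rising edge + training seq + start flag
```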
Step 2, feature extraction network training. The architecture of the feature extraction network is shown in fig. 2. The method makes full use of the category information provided by the dictionary data set and trains the network in a supervised manner, so that the network learns features that are more effective for the subsequent dictionary expansion and classification.
In fig. 2, the network employs an Inception module to extract features of the input signal. The acquired valid data set is passed through the input layer, and three convolution kernels of different scales slide over the input signal simultaneously; as shown in the figure, the kernel sizes are set to 40, 20 and 10 respectively. Kernels of different scales have different receptive fields, so local information at different resolutions can be extracted from the time-series signal. A parallel max-pooling operation is introduced and the number of channels is adjusted through a bottleneck layer (Bottleneck), which makes the model more robust, prevents overfitting and improves its generalization ability. The channel attention mechanism (Channel Attention) is integrated into the Inception module, as shown in fig. 3, to focus the neural network on the more useful channels. A Squeeze-and-Excitation network is used as the channel attention mechanism for the time-series signal. First, global average pooling is used as the Squeeze operation to compress the features along the time dimension, turning each one-dimensional feature channel into a real number with a global receptive field. The Excitation operation is then performed by a bottleneck structure consisting of two fully connected (FC) layers based on a Softmax function, which increases nonlinearity and reduces the number of parameters, and a normalized weight w between 0 and 1 is obtained through a Sigmoid activation function. Finally, a re-weighting operation applies the weight w output by the Excitation channel by channel through multiplication, re-calibrating the original features along the channel dimension. The output of the Inception module at this point is the shallow feature.
As shown in fig. 2, two Inception modules with the channel attention mechanism are cascaded, and the output of the second Inception module is global-average-pooled along the time dimension as the deep output feature of the network; a minimal sketch of this architecture follows.
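The following PyTorch sketch shows one possible realization of this architecture: a 1-D Inception module with kernel sizes 40, 20 and 10, a parallel max-pooling branch with a bottleneck convolution, and a squeeze-and-excitation channel-attention block, with two such modules cascaded. Channel counts, the SE reduction ratio and the classification head used for supervised training are assumptions not specified in the text.

```python
# Sketch of the feature extraction network: Inception-SE modules for 1-D signals.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-Excitation block for 1-D feature maps."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                      # x: (batch, channels, time)
        w = x.mean(dim=-1)                     # squeeze: global average pooling
        w = self.fc(w).unsqueeze(-1)           # excitation: weights in (0, 1)
        return x * w                           # re-weight channel by channel

class InceptionSE(nn.Module):
    """Inception module with three kernel scales, a pooling branch and SE attention."""
    def __init__(self, in_channels: int, branch_channels: int = 32):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv1d(in_channels, branch_channels, k, padding="same")
            for k in (40, 20, 10)              # kernel sizes from the description
        ])
        self.pool_branch = nn.Sequential(
            nn.MaxPool1d(3, stride=1, padding=1),
            nn.Conv1d(in_channels, branch_channels, 1),   # bottleneck convolution
        )
        out_channels = 4 * branch_channels
        self.bn = nn.BatchNorm1d(out_channels)
        self.act = nn.ReLU(inplace=True)
        self.attention = ChannelAttention(out_channels)

    def forward(self, x):
        y = torch.cat([b(x) for b in self.branches] + [self.pool_branch(x)], dim=1)
        return self.attention(self.act(self.bn(y)))

class FeatureExtractor(nn.Module):
    """Two cascaded Inception-SE modules; returns logits, shallow and deep features."""
    def __init__(self, num_classes: int):
        super().__init__()
        self.block1 = InceptionSE(in_channels=1)
        self.to_shallow = nn.Conv1d(128, 1, kernel_size=1)    # bottleneck to 1 channel
        self.block2 = InceptionSE(in_channels=128)
        self.classifier = nn.Linear(128, num_classes)         # head for supervised training

    def forward(self, x):                      # x: (batch, 1, signal_length)
        h1 = self.block1(x)
        shallow = self.to_shallow(h1).squeeze(1)              # shallow feature, (batch, length)
        h2 = self.block2(h1)
        deep = h2.mean(dim=-1)                                # global average pooling over time
        return self.classifier(deep), shallow, deep
```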
Step 3, multi-level dictionary construction. Assume the original dictionary is $S_o = [S_o^1, S_o^2, \dots, S_o^K] \in \mathbb{R}^{M \times N}$, where $K$ is the number of classes of all samples, $M$ is the dimension of the original signal and $N$ is the number of training samples. The original sub-dictionary of each class, $S_o^i = [s_{o,1}^i, \dots, s_{o,N_i}^i] \in \mathbb{R}^{M \times N_i}$, is built from the $N_i$ original samples of class $i$, with $\sum_{i=1}^{K} N_i = N$. If sparse reconstruction is carried out on the original dictionary only, a test AIS sample $x$ can be expressed as:

$x = S_o \cdot \alpha$    (1)

where $\alpha$ is the sparse representation coefficient of the test sample $x$ on the original dictionary $S_o$, and the classification result of $x$ can be obtained by processing $\alpha$. Each original signal sample $s_o$ yields two corresponding features after the feature extraction network, a shallow feature $s_s$ and a deep feature $s_d$. The original signal dictionary is first expanded with these two features to obtain the expanded dictionary $S = [S^1, S^2, \dots, S^K]$, in which the sub-dictionary of each class is expanded to $S^i = [S_o^i, S_s^i, S_d^i]$, composed of the original sub-dictionary $S_o^i$, the shallow feature sub-dictionary $S_s^i$ and the deep feature sub-dictionary $S_d^i$. With sparse reconstruction on the multi-level dictionary, the test sample $x$ can be expressed as:

$x = S_o \cdot \alpha + S_s \cdot \beta + S_d \cdot \gamma$    (2)

Because the shallow feature dictionary $S_s$ and the deep feature dictionary $S_d$ are obtained by passing the original dictionary $S_o$ through the neural network, they can be regarded as providing higher-level features of $S_o$. Compared with formula (1), formula (2) therefore not only uses the original dictionary to represent details but also introduces shallow and deep feature dictionaries that better represent class information, and can describe the sample better. A sketch of assembling the expanded dictionary follows.
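The NumPy sketch below assembles the expanded multi-level dictionary $S = [S^1, \dots, S^K]$ with $S^i = [S_o^i, S_s^i, S_d^i]$, using the hypothetical FeatureExtractor from the previous sketch. The patent does not state how the deep-feature dimension is reconciled with the signal dimension $M$; here the features are simply zero-padded to length $M$, which is an assumption made only so the columns can be stacked.

```python
# Build the expanded multi-level dictionary S and the class label of each atom.
import numpy as np
import torch

def pad_to(v: np.ndarray, length: int) -> np.ndarray:
    return np.pad(v, (0, length - v.shape[0])) if v.shape[0] < length else v[:length]

def build_multilevel_dictionary(model, samples_by_class):
    """samples_by_class: list of K arrays, one (N_i, M) array per class."""
    model.eval()
    M = samples_by_class[0].shape[1]
    blocks, labels = [], []
    with torch.no_grad():
        for i, class_samples in enumerate(samples_by_class):
            x = torch.as_tensor(class_samples, dtype=torch.float32).unsqueeze(1)
            _, shallow, deep = model(x)                          # shallow/deep features
            S_o = class_samples.T                                # original atoms, (M, N_i)
            S_s = np.stack([pad_to(s, M) for s in shallow.numpy()], axis=1)
            S_d = np.stack([pad_to(d, M) for d in deep.numpy()], axis=1)
            S_i = np.hstack([S_o, S_s, S_d])                     # S^i = [S_o^i, S_s^i, S_d^i]
            blocks.append(S_i)
            labels.extend([i] * S_i.shape[1])                    # class of every atom
    return np.hstack(blocks), np.asarray(labels)
```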
However, the correlation between the column basis vectors of the multi-level feature dictionary $S$ is high, and applying sparse representation to it directly gives poor results. Principal Component Analysis (PCA) is therefore used to reduce the dimension and weaken the correlation between the basis vectors, yielding the multi-level dictionary $D$:

$D = A \cdot (S - m \cdot \mathbf{1})$

where $m$ is the vector formed by the mean of each row of the dictionary $S$ and $(S - m \cdot \mathbf{1})$ is the decentering operation. The $M$ eigenvalues and corresponding eigenvectors of the covariance matrix $\mathrm{Cov} = (S - m \cdot \mathbf{1}) \cdot (S - m \cdot \mathbf{1})^{T} \in \mathbb{R}^{M \times M}$ are solved, and the eigenvectors corresponding to the first $P$ eigenvalues, ordered from the largest eigenvalue to the smallest, are stacked row by row into the projection matrix $A \in \mathbb{R}^{P \times M}$, where $P$ is the feature dimension after projection. The $D$ obtained after projection is the final multi-level dictionary, which is then used for classification based on sparse representation, as sketched below.
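A minimal NumPy sketch of this projection, assuming the dictionary atoms are stored as columns; $P$ is a free parameter chosen by the user.

```python
# PCA projection of the multi-level dictionary: decenter, eigendecompose, project.
import numpy as np

def pca_project_dictionary(S: np.ndarray, P: int):
    """S: (M, N_total) multi-level dictionary with atoms as columns."""
    m = S.mean(axis=1, keepdims=True)            # mean of each row
    S_centered = S - m                           # decentering: S - m * 1
    cov = S_centered @ S_centered.T              # (M, M) covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1][:P]        # indices of the top-P eigenvalues
    A = eigvecs[:, order].T                      # projection matrix, (P, M)
    D = A @ S_centered                           # projected multi-level dictionary
    return D, A, m

# A test sample x is later mapped the same way: y = A @ (x - m.ravel())
```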
Step 4, AIS radiation source individual identification
For a test sample $x \in \mathbb{R}^{M}$, it is mapped through the projection matrix into $y = A \cdot (x - m)$, and the sparse representation coefficient of $y$ is solved:

$\hat{\theta} = \arg\min_{\theta} \|\theta\|_{1} \ \text{s.t.} \ y = D \cdot \theta$

where $\theta$ is the coding vector of the test sample $y$ on the multi-level dictionary matrix $D$; its $\ell_1$-norm minimum solution $\hat{\theta}$ can be obtained with the basis pursuit algorithm. Signal reconstruction and classification are then carried out according to $\hat{\theta}$:

$\mathrm{identity}(y) = \arg\min_{i} \| y - D \cdot \delta_i(\hat{\theta}) \|_{2}$

where $\delta_i(\hat{\theta})$ is the coding coefficient vector of class $i$, i.e. the elements of the sparse representation coefficient $\hat{\theta}$ of the test sample $y$ corresponding to class $i$ are retained and all other elements are set to zero. Reconstruction is carried out on each class and the reconstruction error is computed; the class with the minimum reconstruction error is judged to be the radiation source individual to which the test AIS signal belongs. A sketch of this step follows.
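The following sketch implements this decision rule with cvxpy as one possible basis-pursuit solver; the equality constraint is relaxed to a small tolerance eps, which is an assumption made for noisy data rather than part of the patent.

```python
# Sparse-representation classification: basis pursuit + per-class reconstruction error.
import cvxpy as cp
import numpy as np

def src_classify(y, D, atom_labels, num_classes, eps=1e-3):
    """y: (P,) projected test sample; D: (P, N_total); atom_labels: class of each atom."""
    theta = cp.Variable(D.shape[1])
    problem = cp.Problem(cp.Minimize(cp.norm1(theta)),
                         [cp.norm(D @ theta - y, 2) <= eps])    # relaxed basis pursuit
    problem.solve()
    theta_hat = theta.value
    errors = []
    for i in range(num_classes):
        delta_i = np.where(atom_labels == i, theta_hat, 0.0)    # keep class-i coefficients only
        errors.append(np.linalg.norm(y - D @ delta_i))          # class-wise reconstruction error
    return int(np.argmin(errors)), errors
```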
By comparison, as shown in fig. 4, the recognition accuracy of the neural-network-based methods is higher than that of the conventional method (SIB + SVM), which demonstrates the advantage of neural networks for AIS radiation source individual identification. Meanwhile, the proposed sparse representation method based on multi-level feature learning (MSRC) outperforms the other identification methods, InceptionTime and ResNet.
Claims (1)
1. An AIS radiation source individual identification method based on sparse representation learning is characterized by comprising the following steps:
S1, acquiring AIS signals to construct a training data set;
S2, constructing a neural network: the neural network is constructed from two Inception modules with channel attention mechanisms, defined respectively as the first Inception module and the second Inception module, which are cascaded; the training data set is input to the first Inception module, and the output of the first Inception module is passed through a bottleneck layer that reduces the number of channels to 1 to obtain the shallow feature; the output of the second Inception module is global-average-pooled to obtain the deep feature;
S3, training the constructed neural network by adopting the training data set to obtain a trained neural network;
S4, constructing a multi-level feature dictionary: assume the original signal dictionary is $S_o = [S_o^1, S_o^2, \dots, S_o^K] \in \mathbb{R}^{M \times N}$, where $K$ is the number of classes of all samples, $M$ is the dimension of the original signal and $N$ is the number of training samples; the original sub-dictionary of each class, $S_o^i = [s_{o,1}^i, \dots, s_{o,N_i}^i] \in \mathbb{R}^{M \times N_i}$, is built from the $N_i$ original samples of class $i$, with $\sum_{i=1}^{K} N_i = N$. Each original signal sample $s_o$ yields two corresponding features after passing through the trained feature extraction network, a shallow feature $s_s$ and a deep feature $s_d$. The original signal dictionary is expanded with these two features to obtain the expanded multi-level feature dictionary $S = [S^1, S^2, \dots, S^K]$, in which the sub-dictionary of each class is expanded to $S^i = [S_o^i, S_s^i, S_d^i]$, composed of the original sub-dictionary $S_o^i$, the shallow feature sub-dictionary $S_s^i$ and the deep feature sub-dictionary $S_d^i$;
S5, reducing the dimension of the multi-level feature dictionary $S$ by principal component analysis to obtain the multi-level dictionary $D$:

$D = A \cdot (S - m \cdot \mathbf{1})$

where $m$ is the vector formed by the mean of each row of the dictionary $S$ and $(S - m \cdot \mathbf{1})$ is the decentering operation; the $M$ eigenvalues and corresponding eigenvectors of the covariance matrix $\mathrm{Cov} = (S - m \cdot \mathbf{1}) \cdot (S - m \cdot \mathbf{1})^{T} \in \mathbb{R}^{M \times M}$ are solved, and the eigenvectors corresponding to the first $P$ eigenvalues, ordered from the largest eigenvalue to the smallest, are stacked row by row into the projection matrix $A \in \mathbb{R}^{P \times M}$, where $P$ is the feature dimension after projection;
S6, AIS radiation source individual identification: the signal to be identified $x \in \mathbb{R}^{M}$ is mapped through the projection matrix into $y = A \cdot (x - m)$, and the sparse representation coefficient of $y$ is solved as

$\hat{\theta} = \arg\min_{\theta} \|\theta\|_{1} \ \text{s.t.} \ y = D \cdot \theta$

where $\theta$ is the coding vector of $y$ on the multi-level dictionary matrix $D$ and the $\ell_1$-norm minimum solution $\hat{\theta}$ is obtained with the basis pursuit algorithm. Signal reconstruction and classification are then carried out according to $\hat{\theta}$:

$\mathrm{identity}(y) = \arg\min_{i} \| y - D \cdot \delta_i(\hat{\theta}) \|_{2}$

where $\delta_i(\hat{\theta})$ is the coding coefficient vector of class $i$, i.e. the elements of the sparse representation coefficient $\hat{\theta}$ of $y$ corresponding to class $i$ are retained and all other elements are set to zero; reconstruction is carried out on each class and the reconstruction error is computed, and the class with the minimum reconstruction error is judged to be the radiation source individual to which the AIS signal to be identified belongs.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011393425.5A CN112464836A (en) | 2020-12-02 | 2020-12-02 | AIS radiation source individual identification method based on sparse representation learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011393425.5A CN112464836A (en) | 2020-12-02 | 2020-12-02 | AIS radiation source individual identification method based on sparse representation learning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112464836A (en) | 2021-03-09 |
Family
ID=74805318
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011393425.5A Withdrawn CN112464836A (en) | 2020-12-02 | 2020-12-02 | AIS radiation source individual identification method based on sparse representation learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112464836A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113901974A (en) * | 2021-10-15 | 2022-01-07 | 深圳市唯特视科技有限公司 | Signal classification method and device, computer equipment and storage medium |
CN115130498A (en) * | 2022-06-09 | 2022-09-30 | 西北工业大学 | Method and device for identifying electromagnetic radiation source signal and electronic equipment |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106778863A (en) * | 2016-12-12 | 2017-05-31 | 武汉科技大学 | The warehouse kinds of goods recognition methods of dictionary learning is differentiated based on Fisher |
US20180137393A1 (en) * | 2015-06-04 | 2018-05-17 | Siemens Healthcare Gmbh | Medical pattern classification using non-linear and nonnegative sparse representations |
US20180240219A1 (en) * | 2017-02-22 | 2018-08-23 | Siemens Healthcare Gmbh | Denoising medical images by learning sparse image representations with a deep unfolding approach |
CN111934749A (en) * | 2020-08-07 | 2020-11-13 | 上海卫星工程研究所 | Satellite-borne AIS message real-time receiving and processing system with wide and narrow beam cooperation |
CN112183300A (en) * | 2020-09-23 | 2021-01-05 | 厦门大学 | AIS radiation source identification method and system based on multi-level sparse representation |
- 2020-12-02 CN CN202011393425.5A patent/CN112464836A/en not_active Withdrawn
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180137393A1 (en) * | 2015-06-04 | 2018-05-17 | Siemens Healthcare Gmbh | Medical pattern classification using non-linear and nonnegative sparse representations |
CN106778863A (en) * | 2016-12-12 | 2017-05-31 | 武汉科技大学 | The warehouse kinds of goods recognition methods of dictionary learning is differentiated based on Fisher |
US20180240219A1 (en) * | 2017-02-22 | 2018-08-23 | Siemens Healthcare Gmbh | Denoising medical images by learning sparse image representations with a deep unfolding approach |
CN111934749A (en) * | 2020-08-07 | 2020-11-13 | 上海卫星工程研究所 | Satellite-borne AIS message real-time receiving and processing system with wide and narrow beam cooperation |
CN112183300A (en) * | 2020-09-23 | 2021-01-05 | 厦门大学 | AIS radiation source identification method and system based on multi-level sparse representation |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113901974A (en) * | 2021-10-15 | 2022-01-07 | 深圳市唯特视科技有限公司 | Signal classification method and device, computer equipment and storage medium |
CN115130498A (en) * | 2022-06-09 | 2022-09-30 | 西北工业大学 | Method and device for identifying electromagnetic radiation source signal and electronic equipment |
CN115130498B (en) * | 2022-06-09 | 2024-09-20 | 西北工业大学 | Electromagnetic radiation source signal identification method and device and electronic equipment |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| WW01 | Invention patent application withdrawn after publication | Application publication date: 20210309 |