CN112529035B - Intelligent identification method for identifying individual types of different radio stations - Google Patents


Info

Publication number
CN112529035B
CN112529035B (application number CN202011190513.5A)
Authority
CN
China
Prior art keywords
layer
network
convolution
individual
radio station
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011190513.5A
Other languages
Chinese (zh)
Other versions
CN112529035A (en)
Inventor
梁先明
陈文洁
赵若冰
李奇真
曾翔宇
幸晨杰
陈涛
余博
张志�
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southwest Electronic Technology Institute No 10 Institute of Cetc
Original Assignee
Southwest Electronic Technology Institute No 10 Institute of Cetc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southwest Electronic Technology Institute No 10 Institute of Cetc
Priority to CN202011190513.5A
Publication of CN112529035A
Application granted
Publication of CN112529035B
Legal status: Active


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
        • G06F18/00 Pattern recognition → G06F18/20 Analysing
            • G06F18/24 Classification techniques
            • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
                • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
                • G06F18/217 Validation; Performance evaluation; Active pattern learning techniques
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
        • G06N3/00 Computing arrangements based on biological models → G06N3/02 Neural networks
            • G06N3/04 Architecture, e.g. interconnection topology
                • G06N3/045 Combinations of networks
                • G06N3/048 Activation functions
            • G06N3/08 Learning methods

Abstract

The invention discloses an intelligent individual recognition method based on a time-series deep network, which addresses the difficult feature extraction and poor generalization of existing radio station individual classification and identification methods. The scheme is implemented as follows: based on a time-series deep network, station individual time-series signals reflecting different station individual types are input into the deep network, the original station individual data is zero-padded, and training and test sample sets are generated in proportion; three sub-networks and a one-dimensional time-series multi-subnetwork deep integration network are constructed; the three sub-networks are trained with the training sample set and their output layers are connected in parallel to obtain a trained one-dimensional time-series multi-subnetwork deep integration network; the original station time-series data is input directly into the trained deep integration network to predict the test data set, yielding the station individual category that the network predicts for the station time-series data. The invention improves the generalization capability and robustness of deep networks for station individual identification and can be used in the technical field of station individual classification and identification.

Description

Intelligent identification method for identifying individual types of different radio stations
Technical Field
The invention relates to a radio station individual classification and identification technology in the technical field of communication, in particular to an intelligent identification method for identifying individual types of different radio stations.
Background
Because different radio stations contain different individual micro-features, distinguishing between stations requires studying their individual characteristics; this is the central link of the entire station individual identification system. As the basis for distinguishing individual stations, a communication individual's characteristics are unique to that individual, differ from station to station, and do not change significantly over time or with environmental conditions. The ultimate goal of communication station identification is individual identification, i.e. distinguishing communication stations by the radio signals they transmit. This requires not only identifying different types of station signals, but also identifying each individual station among multiple stations of the same model. In fact, the intercepted communication signals contain technical features that reflect the individual characteristics of the stations. In principle, owing to randomness in the production process, component tolerances, and the debugging of each station transmitter, the signals a station transmits carry individual characteristics that differ from those of other stations because of these hardware differences; that is, each station has unique fingerprint features that reflect its individual attributes. Communication station individual identification technology acquires and detects the individual fingerprint features that characterize a communication station in order to realize individual identification.
The intercepted communication station signals are processed and analyzed with signal-processing techniques, the attribute features of the individual signals are extracted, and each signal is quickly matched to its corresponding station. This technology can accurately obtain the attribute information of individual stations. In the civil field, communication station individual identification can be applied to radio spectrum management and frequency-band security awareness, thereby identifying non-malicious interference, frequency conflicts, and other illegal interference. Communication station identification is mainly divided into two parts: feature extraction and classification. The conventional approach processes the transient and steady-state characteristics of individual station signals. When any communication station is powered on, it gradually transitions from an under-stable working state to a stable one; immediately after power-on, all parts of the station operate in an unstable state, including circuit power-up and the initialization of the frequency source and of the frequency-conversion and amplification modules. This stage mainly exhibits nonlinear, non-stationary behavior, and the individual characteristics the station exhibits in this stage are called transient characteristics.
Transient characteristics arise from every module of the station: even stations of the same model cannot achieve exactly identical performance in component characteristics and manufacturing process, and the differences are particularly obvious in the under-stable power-on state; the overall transient characteristics are also strongly affected by how long different modules take to transition from the under-stable to the stable state. The station's transition from transient to steady state is short, so even though its individual characteristics are relatively pronounced during this period, intercepting a signal sample in such a short time has become a major bottleneck limiting progress in this field. Station identification requires information acquisition and machine learning, and in the face of increasingly complex machine characteristics, current technology cannot distinguish power-on information, impulse noise, and other interference in real time within massive signal volumes, so a station's power-on information is difficult to capture. Moreover, a station's transient characteristics depend strongly on its operating environment and on human operation, and thus have a certain randomness; transient-feature extraction methods therefore usually suffer practical shortcomings. Once a station reaches a stable working state, it can modulate and transmit signals steadily as designed; a large number of signals are transmitted during this period, leaving sufficient time to acquire and analyze them, and the individual characteristics acquired in this period are called steady-state characteristics.
Steady-state characteristics are mainly manifested as differences in signal modulation patterns, superimposed system noise, differences in signal frequency stability, superimposed spurious characteristics, and so on. In the prior art, transient approaches distinguish station individuals using an orthogonal approximation expression of bispectral feature analysis and identify each emission source by the individual information embedded in the transmitted signal; however, this method places high demands on processing time and sample length, and the feasibility of capturing equipment power-on information constrains its realization. On the steady-state side, fractal theory, chaotic feature processing, the Hilbert-Huang transform, the integral bispectrum, and similar theories have gradually been applied to station individual identification, producing various feature-extraction improvements. However, these algorithms cannot measure any single station feature in isolation; instead they extract and classify a reflection of the station's overall characteristics, so the targeting of the feature extraction is unclear and the effectiveness of the algorithms is hard to quantify. Meanwhile, traditional research methods usually require strong domain knowledge and a large amount of feature-engineering work, their processing is complex, and they suffer from poor noise immunity, high requirements on the quality of intercepted signals, difficulty of description by a single model, and high feature-extraction difficulty.
The concept of a station fingerprint has been proposed to characterize the regularity a communication station individual carries when transmitting signals and to reflect the station's individual feature information; station identification is realized by capturing this feature information, and the information carried by the fingerprint can be matched to a specific individual. These features can be summarized as a set representing the station fingerprint. In the prior art, radio-frequency fingerprinting processes the transmitter signal, extracts amplitude, phase, and wavelet coefficients, and uses an RBF neural network for classification; another method trains a CNN on a generated simulation data set and outputs a classification result. Although higher identification accuracy is obtained, a bispectral feature matrix must still be extracted manually and converted into a two-dimensional feature image, which reduces the efficiency of station individual identification; at the same time, the CNN network used for feature extraction and identification must be optimized through a large number of experiments, which is time-consuming and labor-intensive, and a single feature-extraction and identification network faces the risk of overfitting and has poor robustness.
In deep learning, the design of feature-extraction and identification networks is an important task; the feature-extraction network structure can be called the basic network model, and a high-performing basic network model largely determines the recognition performance of a station individual identification system. However, designing and optimizing the basic network model usually takes a great deal of time, and evaluating a single optimization result is very time-consuming. Meanwhile, on small data sets a single basic network model easily overfits and generalizes poorly, producing bad test-set results. Classifying station individuals with existing deep learning models therefore suffers from time- and labor-intensive model design, susceptibility to overfitting, and poor robustness and adaptability.
Because feature extraction is an important part of station identification, it requires strong domain expertise and relies heavily on expert knowledge, and the associated feature-extraction methods are complex and tedious. This directly affects classifier design and recognition performance, and extracting signal features in low signal-to-noise-ratio environments has become a difficult problem in station identification.
Disclosure of Invention
To address the shortcomings of the prior art, the invention provides an intelligent identification method for identifying the individual types of different radio stations that effectively reduces the difficulty of station feature extraction, identifies individuals efficiently, and improves the generalization capability and robustness of a single deep network for station individual identification. This intelligent individual identification method based on a time-series deep network reduces feature-extraction difficulty by feeding time-series station individual signals directly into the deep network; by designing a one-dimensional time-series multi-subnetwork deep integration network model, it improves the feature-expression capability of a single deep network model and thereby its generalization to station individuals.
To achieve the above object, the invention provides an intelligent identification method for identifying individual types of different radio stations, characterized by the following steps: based on a time-series deep network, input station individual time-series signals reflecting different station individual types into the deep network, set a fixed data length in the deep network, preprocess the original station individual data, and zero-pad it to obtain time-series data and the corresponding category labels; divide training and test sample sets: obtain the signal samples of all station individual data, zero-pad the original station individual data to form a station individual sample set, and divide this set proportionally to generate the training and test sample sets; from the training and test sample sets, construct convolution sub-networks and a one-dimensional time-series multi-subnetwork deep integration model; set the loss function of each convolution sub-network to the cross-entropy loss and, according to this loss function, compute the error loss between the predicted category score and the true category confidence score of the station individual data in the training set; train the one-dimensional time-series multi-subnetwork deep integration model until the set number of training iterations is reached, obtain the optimal network parameter weights from the iteration process, and complete the training of the multi-subnetwork deep integration model; input the test sample set into the trained one-dimensional time-series multi-subnetwork deep integration model, predict the test data, and evaluate the classification effect: average the corresponding elements of the prediction vectors to obtain a predicted result vector, and take the position of the maximum value in the result vector as the station individual category the network predicts for the station time-series data.
Compared with the prior art, the invention has the following advantages:
Based on the time-series deep network, the invention directly uses a one-dimensional convolutional network to extract features from the original station individual time-series data, simplifying the feature-extraction steps for station individual data, greatly streamlining the feature-extraction process, reducing the difficulty of data processing, and improving the performance and efficiency of the station individual identification system. No data preprocessing such as time-frequency transformation is needed, and complex manual feature extraction by experts with deep domain knowledge is avoided.
According to the set loss function, the method computes the error loss between the predicted category score and the true category confidence score of the station individual data in the training set, and trains the one-dimensional time-series multi-subnetwork deep integration model until the set number of training iterations is reached, obtaining the optimal network parameter weights from the iteration process; this improves the generalization capability and robustness of the station individual identification system. Because a one-dimensional time-series multi-subnetwork deep integration model is constructed, the overfitting risk of a single feature-extraction and identification network is reduced.
The method inputs the test sample set into the trained one-dimensional time-series multi-subnetwork deep integration model, averages the corresponding elements of the prediction vectors to obtain a predicted result vector, and takes the position of the maximum value in the result vector as the predicted station individual category. The processed features have good classification performance, making the classification of communication stations more practical. Dependence on carrier-frequency estimation is avoided, and the equivalence problem in existing spectrum-symmetry measures is resolved. The invention improves not only the efficiency of the station individual identification network but also the generalization capability and robustness of a single deep network for station individual identification, and can be used in the technical field of station individual classification and identification.
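For illustration, the averaging-and-argmax fusion described above can be sketched in a few lines (a minimal NumPy sketch; the four-class probability vectors below are hypothetical examples, and the function name is not from the patent):

```python
import numpy as np

def ensemble_predict(prob_vectors):
    """Fuse per-subnetwork class-probability vectors by element-wise
    averaging, then take the argmax as the predicted station category."""
    avg = np.mean(np.stack(prob_vectors, axis=0), axis=0)  # element-wise mean
    return int(np.argmax(avg)), avg

# Three hypothetical softmax outputs for a 4-class problem
p1 = np.array([0.10, 0.60, 0.20, 0.10])
p2 = np.array([0.05, 0.70, 0.15, 0.10])
p3 = np.array([0.20, 0.50, 0.20, 0.10])
label, avg = ensemble_predict([p1, p2, p3])
print(label)  # 1 — the class with the highest averaged confidence
```

Averaging the probability vectors before the argmax is what lets the three sub-networks compensate for each other's errors on individual samples.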
The invention can be used for identifying different individual radio station types in a complex electromagnetic environment.
Drawings
FIG. 1 is a flow chart of the present invention for intelligent identification of individual categories for different stations;
FIG. 2 is a schematic diagram of the timing waveforms of 10 station individual signals;
FIG. 3 is a graph of the results of a simulation experiment of the present invention, wherein FIG. 3 (a) shows a confusion matrix for a first sub-network on a test data set; FIG. 3 (b) shows a confusion matrix for a second sub-network on the test data set; FIG. 3 (c) shows a confusion matrix for a third sub-network on the test data set; FIG. 3 (d) shows a confusion matrix for an integrated network on a test data set.
Embodiments and effects of the present invention will be further described below with reference to the accompanying drawings.
Detailed Description
Refer to fig. 1. According to the invention, based on a time-series deep network, station individual time-series signals reflecting different station individual types are input into the deep network, a fixed data length is set in the deep network, and the original station individual data is preprocessed and zero-padded to obtain time-series data and the corresponding category labels. Training and test sample sets are divided: the signal samples of all station individual data are obtained, the original station individual data is zero-padded to form a station individual sample set, and this set is divided proportionally to generate the training and test sample sets. From the training and test sample sets, convolution sub-networks and a one-dimensional time-series multi-subnetwork deep integration model are constructed. The loss function of each convolution sub-network is set to the cross-entropy loss, and the error loss between the predicted category score and the true category confidence score of the station individual data in the training set is computed according to this loss function. The one-dimensional time-series multi-subnetwork deep integration model is trained until the set number of training iterations is reached, the optimal network parameter weights are obtained from the iteration process, and the training of the multi-subnetwork deep integration model is completed. Finally, the test sample set is input into the trained one-dimensional time-series multi-subnetwork deep integration model to predict the test data and evaluate the classification effect: the corresponding elements of the prediction vectors are averaged to obtain a predicted result vector, and the position of the maximum value in the result vector is taken as the station individual category the network predicts for the station time-series data.
In the division of the training and test sample sets, the time-series data and the corresponding category label form a signal sample pair; the signal sample pairs of all station individual data are obtained and form a station individual sample set, which is divided proportionally into training, validation, and test sample sets. A deep integration model consisting of three convolution sub-networks is built, and the convolution-layer, pooling-layer, and fully-connected-layer parameters of the three sub-networks are set; the cross-entropy loss functions, optimization algorithms, and related loss-function parameters of the three sub-networks are also set. To train the constructed one-dimensional time-series multi-subnetwork deep integration model, the order of the samples in the training set is randomly shuffled, and the samples are fed into the model in batches according to the set training step size for iterative network training; the error loss between the predicted category score and the true category confidence score of the station individual data in the training set is computed according to the set loss function. After each full pass over the training set, the sample order in the validation set is randomly shuffled and fed in batches, according to the training step size set in step (3d), into the one-dimensional time-series multi-subnetwork deep integration model for validation, obtaining the optimal network parameter weights in the iteration process. Finally, the test sample set is input into the trained one-dimensional time-series multi-subnetwork deep integration model, the corresponding elements of the three sub-models' prediction vectors are averaged to obtain a predicted result vector, and the position of the maximum value in the result vector is taken as the predicted station individual category.
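The cross-entropy loss used when training each convolution sub-network can be sketched as follows (a minimal NumPy illustration of the loss computation only; the logit values are hypothetical and no particular framework is implied):

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(logits, labels):
    """Mean cross-entropy between predicted class scores (logits)
    and the true category labels of the training samples."""
    p = softmax(logits)
    n = logits.shape[0]
    return float(-np.mean(np.log(p[np.arange(n), labels])))

# Two samples, 3 classes (hypothetical predicted scores)
logits = np.array([[2.0, 0.5, 0.1],
                   [0.2, 0.1, 3.0]])
labels = np.array([0, 2])
loss = cross_entropy(logits, labels)
```

The loss shrinks toward zero as the softmax confidence assigned to each true category approaches 1, which is what the iterative training drives the sub-network weights toward.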
The invention is realized by the following steps:
Step 1: preprocess the station individual data.
the method comprises the steps of inputting individual station data, preprocessing the individual station data, setting the fixed length of the set data to be 1024 when zero filling processing is carried out on original data of the individual station, and filling 0 in short pulses with the length of 400-550 points and parts with the length of 9000-1100 points, which are short in length, in the original data of the individual station to obtain time sequence data with the length of 1024.
Step 2: divide the training and test sample sets.
2.1) For each station individual's data, the time-series data and its corresponding category label form a signal sample pair; the signal sample pairs of all station individual data are obtained and form the station individual sample set;
2.2) Randomly select 80% of the sample data in the station individual sample set to form the training sample set, and randomly split the remaining 20% of the sample data at a 1:1 ratio into a validation sample set and a test sample set.
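The 80%/10%/10% split in steps 2.1-2.2 can be sketched as follows (a minimal NumPy sketch operating on sample indices; the seed and function name are illustrative, not from the patent):

```python
import numpy as np

def split_indices(n_samples, seed=0):
    """Randomly split sample indices 80/10/10 into train/validation/test,
    mirroring the 80% training selection and the 1:1 split of the rest."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)            # random shuffle of indices
    n_train = int(0.8 * n_samples)              # 80% -> training set
    n_val = n_train + (n_samples - n_train) // 2  # remaining 20% split 1:1
    return idx[:n_train], idx[n_train:n_val], idx[n_val:]

train_idx, val_idx, test_idx = split_indices(100)
print(len(train_idx), len(val_idx), len(test_idx))  # 80 10 10
```

Splitting by shuffled indices rather than slicing the raw array directly avoids ordering bias when the sample set is stored grouped by station.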
Step 3: construct the one-dimensional time-series multi-subnetwork deep integration model.
3.1 Build a one-dimensional time-series multi-subnetwork deep integration model composed of three subnetworks:
the first sub-network comprises an input layer, ten convolutional layers, four pooling layers, two full-connection layers, two batch normalization layers, a classifier layer and an output layer, and the structural relationship of the first sub-network is as follows in sequence: the input layer → the first buildup layer → the second buildup layer → the first pooling layer → the third buildup layer → the fourth buildup layer → the second pooling layer → the fifth buildup layer → the sixth buildup layer → the third pooling layer → the seventh buildup layer → the eighth buildup layer → the fourth pooling layer → the ninth buildup layer → the tenth buildup layer → the first fully-connected layer → the first batch of normalization layers → the second fully-connected layer → the second batch of normalization layers → the classifier layer → the output layer;
The second sub-network comprises an input layer, eight convolutional layers, three pooling layers, two fully connected layers, two batch-normalization layers, a classifier layer, and an output layer, connected in the following order: input layer → 1st convolutional layer → 2nd convolutional layer → 1st pooling layer → 3rd convolutional layer → 4th convolutional layer → 2nd pooling layer → 5th convolutional layer → 6th convolutional layer → 3rd pooling layer → 7th convolutional layer → 8th convolutional layer → 1st fully connected layer → 1st batch-normalization layer → 2nd fully connected layer → 2nd batch-normalization layer → classifier layer → output layer;
The third sub-network comprises an input layer, six convolutional layers, two pooling layers, two fully connected layers, two batch-normalization layers, a classifier layer, and an output layer, connected in the following order: input layer → first convolutional layer → second convolutional layer → first pooling layer → third convolutional layer → fourth convolutional layer → second pooling layer → fifth convolutional layer → sixth convolutional layer → first fully connected layer → first batch-normalization layer → second fully connected layer → second batch-normalization layer → classifier layer → output layer. The output layers of the three sub-networks are then connected in parallel to form the one-dimensional time-series multi-subnetwork deep integration network comprising the three sub-networks.
3.2) Set the network parameters of the three sub-networks:
The layer parameters of the first sub-network are set as follows: the input layer has 1024 neurons; the first through fourth convolutional layers each have at least 16 convolution kernels, the fifth and sixth convolutional layers at least 32, and the seventh through tenth convolutional layers at least 64, all with kernel size 1×3; all activation functions use the rectified linear unit (ReLU); the first through fourth pooling layers all use max pooling with pooling size at least 2; the first fully connected layer has at least 40 neurons and the second fully connected layer at least 10; the classifier layer uses the multi-class Softmax function;
the layer parameters in the second sub-network are set as follows: the input layer has 1024 neural units; the numbers of convolution kernels of the 1st and 2nd convolution layers are set to be not less than 16 and those of the 3rd and 4th convolution layers to 32, with kernel sizes set to 1 × 3; the numbers of convolution kernels of the 5th and 6th convolution layers are set to be not less than 64, with kernel size 1 × 5; the numbers of convolution kernels of the 7th and 8th convolution layers are set to 128, with kernel size 1 × 7; the activation functions all use the linear rectification function ReLU; the 1st, 2nd and 3rd pooling layers all use max pooling with pooling size not less than 2; the first fully-connected layer is set to not less than 40 neurons and the second fully-connected layer to not less than 10 neurons; the classifier layer uses the multi-classification function Softmax;
the layer parameters in the third sub-network are set as follows: the input layer has 1024 neural units; the numbers of convolution kernels of the first and second convolution layers are set to be not less than 16 and those of the third and fourth convolution layers to 32, with kernel sizes set to 1 × 5; the numbers of convolution kernels of the fifth and sixth convolution layers are set to be not less than 64, with kernel size 1 × 5; the activation functions all use the linear rectification function ReLU; the first and second pooling layers both use max pooling with pooling size not less than 2; the first fully-connected layer is set to not less than 40 neurons and the second fully-connected layer to not less than 10 neurons; the classifier layer uses the multi-classification function Softmax.
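As an illustrative sketch only, the third sub-network's topology and parameter settings described above can be captured as a plain layer-specification list. The spec tuple convention ("conv", kernels, size), the helper name `make_subnet3_spec`, and the choice of the minimum allowed kernel counts are assumptions introduced here, not part of the patent; only the layer counts and sizes follow the text.

```python
# Hypothetical layer-spec sketch of the third sub-network (6 conv, 2 pool,
# 2 fully-connected, 2 batch-norm layers), following the counts in the text.
# The spec format and minimum kernel counts are illustrative assumptions.

def make_subnet3_spec():
    """Return the third sub-network as a list of (layer_type, *params) tuples."""
    spec = [("input", 1024)]
    # first/second conv: >= 16 kernels of size 1x5, then max pooling
    spec += [("conv", 16, (1, 5)), ("conv", 16, (1, 5)), ("maxpool", 2)]
    # third/fourth conv: 32 kernels of size 1x5, then max pooling
    spec += [("conv", 32, (1, 5)), ("conv", 32, (1, 5)), ("maxpool", 2)]
    # fifth/sixth conv: >= 64 kernels of size 1x5
    spec += [("conv", 64, (1, 5)), ("conv", 64, (1, 5))]
    # two fully-connected layers interleaved with batch normalization
    spec += [("fc", 40), ("batchnorm",), ("fc", 10), ("batchnorm",)]
    spec += [("softmax",)]
    return spec

spec = make_subnet3_spec()
n_conv = sum(1 for s in spec if s[0] == "conv")
n_pool = sum(1 for s in spec if s[0] == "maxpool")
print(n_conv, n_pool)  # 6 2
```

The first and second sub-networks would follow the same convention with 10 conv / 4 pool and 8 conv / 3 pool layers respectively.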
3.3 Set the loss functions and optimization algorithms of the three sub-networks:
the loss functions of the three sub-networks all use the cross-entropy loss function

L = -Σ_{c=1}^{M} y_c · log(p_c)

where M represents the number of radio-station individual categories, y_c ∈ {0,1} represents the label of the true category of the station individual data, and p_c represents the network's predicted probability for category c of the training sample.
The optimization algorithms of the three sub-networks all use the Adam optimization algorithm based on adaptive moment estimation.
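The cross-entropy loss above can be sketched in a few lines of NumPy. This is a minimal illustration, not the patent's implementation: the `eps` clipping constant and the 4-category toy vectors are assumptions added to keep the example self-contained.

```python
import numpy as np

def cross_entropy(y_true, p_pred, eps=1e-12):
    """L = -sum_c y_c * log(p_c) over the M station categories.

    eps-clipping (an assumption, not from the patent) avoids log(0)."""
    p = np.clip(p_pred, eps, 1.0)
    return float(-np.sum(y_true * np.log(p)))

# One-hot true label for "station 3" among M = 4 hypothetical categories,
# and a softmax-style prediction vector for one training sample:
y = np.array([0.0, 0.0, 1.0, 0.0])
p = np.array([0.1, 0.2, 0.5, 0.2])
loss = cross_entropy(y, p)
print(loss)  # -log(0.5), about 0.693
```

A deep-learning framework's built-in categorical cross-entropy plus its Adam optimizer would play the same role in the actual training.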
3.4 Set the training step size and number of iterations of the three sub-networks: the training step size of the network is the number of samples of the training sample set fed into the three sub-networks in each batch, and the batch size Batch_size is set to 512; the number of iterations is the number of times the training sample set is repeatedly fed into the three sub-networks for training, and the number of iterations Epoch is set to 50.
Step 4: train the one-dimensional time-sequence multi-sub-network deep integration model. The order of the samples in the training sample set is randomly shuffled, and the shuffled training set is input into the model in batches according to the training step size for iterative network training, producing the training-sample prediction result vector P_c = (P_1 + P_2 + P_3)/3. The obtained prediction result vector and the label actually corresponding to each sample are substituted into the loss function to calculate the error loss between the predicted category and the true category of the station individual data in the training set; the error loss is back-propagated and gradient optimization is performed with the optimization function so as to adjust the network parameters. Each full pass over the training data set yields a candidate network: the order of the samples in the validation sample set is randomly shuffled, and the shuffled validation set is input into the model in batches according to the training step size for network validation, giving the candidate network's error loss on the validation set. This loss is compared with the minimum error loss recorded in the previous iterations; if it is less than the recorded minimum, the network weights of this iteration are taken as the optimal network weight parameters of the iteration process and the recorded minimum error loss is updated to this loss; if it is not less than the recorded minimum, no operation is performed. The model is trained repeatedly until the number of training iterations reaches 50, completing the training process of the multi-sub-network deep integration model and obtaining the optimal network weight parameters.
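The validation-driven best-weight selection in Step 4 reduces to a small bookkeeping loop. The sketch below replaces the real per-epoch validation pass with a precomputed list of losses; the function name and the toy loss values are illustrative assumptions.

```python
# Sketch of best-weight selection: after each epoch, compare the candidate
# network's validation loss with the smallest loss recorded so far, and keep
# the epoch's weights only when the loss strictly improves.
# `val_losses` stands in for losses the real model would compute by running
# the shuffled validation set through the network in batches.

def select_best_epoch(val_losses):
    best_loss = float("inf")
    best_epoch = None
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best_loss:      # strictly smaller: update the record
            best_loss = loss
            best_epoch = epoch    # this epoch's weights become the optimum
        # otherwise: no operation, previously saved weights are retained
    return best_epoch, best_loss

epoch, loss = select_best_epoch([0.9, 0.6, 0.7, 0.5, 0.55])
print(epoch, loss)  # 4 0.5
```

In a framework this corresponds to a "save best only" model checkpoint monitored on validation loss over the 50 epochs.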
In the process of predicting the test data, the test sample set is input into the trained one-dimensional time-sequence multi-sub-network deep integration model, the corresponding element values of the three sub-models' prediction vectors are averaged to obtain the elements of the predicted result vector, and the position of the maximum value in the result vector is taken as the predicted radio-station individual category. The averaged prediction result vector of the three sub-networks is P_c = (P_1 + P_2 + P_3)/3, where P_1, P_2 and P_3 represent the prediction result vectors obtained by inputting the test sample data into the first, second and third sub-networks respectively.
Step 5: the position of the maximum value in the averaged prediction result vector P_c is taken as the radio-station individual category predicted for the test sample, completing the intelligent radio-station individual identification based on the time-sequence deep network.
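The ensemble rule of Steps 4 and 5 is a simple element-wise average followed by an argmax. The three softmax vectors below are hypothetical outputs for a single test sample, used only to make the sketch runnable.

```python
import numpy as np

# Sketch of the ensemble prediction rule: P_c = (P_1 + P_2 + P_3) / 3,
# with the argmax position of P_c taken as the predicted station category.
# The three vectors are hypothetical sub-network softmax outputs.

P1 = np.array([0.1, 0.7, 0.2])
P2 = np.array([0.2, 0.5, 0.3])
P3 = np.array([0.1, 0.6, 0.3])

Pc = (P1 + P2 + P3) / 3.0            # element-wise average of the three
predicted_class = int(np.argmax(Pc)) # position of the maximum value
print(predicted_class)  # 1
```

Averaging probabilities (rather than majority-voting hard labels) preserves each sub-network's confidence, which is what allows the ensemble to outperform its members.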
The effect of the present invention will be further described with reference to simulation experiments.
1. Simulation experiment conditions:
the hardware platform of the simulation experiment is a Hewlett-Packard Z840 server with an Intel Xeon CPU and dual GTX 1080 graphics cards with 8 GB of video memory each; the software platform is Ubuntu 16.04 LTS, TensorFlow 1.6.0, Keras 2.2.0, CUDA 9.0 + cuDNN 7 and Python 3.6.
The categories and sample numbers of 10 radio station individuals in the simulation experiment of the invention are shown in the following table:
TABLE 1 Radio-station individual categories and sample numbers

Radio station class (ID number)   Label              Number of samples
4342464                           Radio station 1      935
7864436                           Radio station 2      960
7864953                           Radio station 3     3233
7865962                           Radio station 4     1362
7866074                           Radio station 5     2536
7866276                           Radio station 6     5919
7866417                           Radio station 7      672
7866627                           Radio station 8      423
7867187                           Radio station 9     1341
7867828                           Radio station 10    2204
The time-sequence signal waveforms of the 10 radio-station individual categories 4342464, 7864436, 7864953, 7865962, 7866074, 7866276, 7866417, 7866627, 7867187 and 7867828 are shown in fig. 2 (1) to 2 (10).
2. Simulation content and results:
the simulation experiment is carried out according to the steps of the invention: 80% of the radio-station time-sequence individual data is taken as training samples and 10% as validation samples and fed into the designed one-dimensional time-sequence multi-sub-network deep integration model for training and validation; after training and validation are completed, the optimal network parameter weights are loaded, and the remaining 10% of the radio-station time-sequence individual data is taken as test samples and fed into the model for prediction, obtaining the radio-station individual category of each test sample. To verify the effect of the invention, three evaluation indices are used: the overall classification accuracy OA, the average classification accuracy AA and the Kappa coefficient evaluate the classification results of each sub-network and of the deep-integration model of the method; the computed results are shown in Table 2.
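The 80/10/10 split described above can be sketched as a shuffle followed by index slicing. The seed and the use of index arrays are assumptions for the sketch; the sample total of 19,585 is the sum of the counts in Table 1.

```python
import numpy as np

# Sketch of the 80% train / 10% validation / 10% test split.
# The RNG seed is an illustrative assumption; 19585 is the Table 1 total.

def split_indices(n_samples, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)   # random shuffle before splitting
    n_train = int(0.8 * n_samples)
    n_val = int(0.1 * n_samples)
    return (idx[:n_train],
            idx[n_train:n_train + n_val],
            idx[n_train + n_val:])

train, val, test = split_indices(19585)
print(len(train), len(val), len(test))  # 15668 1958 1959
```

Shuffling before the split avoids ordering bias, since the raw samples are grouped by radio-station individual.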
TABLE 2 evaluation of the Classification results of the methods of the invention
[Table 2 appears only as an image in the original document; it reports the OA, AA and Kappa values of the three sub-networks and of the integrated network.]
As can be seen from Table 2, the integrated network improves on the three sub-networks under all three evaluation indices, demonstrating that the one-dimensional time-sequence multi-sub-network deep integration model offers high recognition accuracy and strong generalization capability for radio-station individual recognition.
In order to show the overall accuracy index OA more intuitively, the prediction results of the three sub-networks and of the integrated network on the test data are displayed as confusion matrices, as shown in fig. 3. The confusion matrix, also called an error matrix, is a standard format for accuracy evaluation: its horizontal axis represents the predicted radio-station individual category, its vertical axis the true radio-station individual category, and the value of each cell is the probability that the vertical-axis category is predicted as the horizontal-axis category.
The numerical value of the cells on the diagonal line of the confusion matrix represents the classification accuracy of each category, and the larger the numerical value of the cells on the diagonal line is, the better the network performance is.
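A row-normalized confusion matrix of this kind can be sketched directly. The toy label lists below are hypothetical; only the normalization convention (each true-category row sums to 1, diagonal = per-category accuracy) follows the text.

```python
import numpy as np

# Sketch of the row-normalized confusion matrix described above: entry
# (i, j) is the probability that true category i is predicted as j, so
# the diagonal holds per-category classification accuracies.

def confusion_matrix(y_true, y_pred, n_classes):
    cm = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm / cm.sum(axis=1, keepdims=True)  # normalize each true-class row

y_true = [0, 0, 0, 1, 1, 2, 2, 2, 2]  # hypothetical true categories
y_pred = [0, 0, 1, 1, 1, 2, 2, 2, 0]  # hypothetical predictions
cm = confusion_matrix(y_true, y_pred, 3)
print(np.diag(cm))  # per-class accuracies
```

Larger diagonal values mean better per-category performance, which is exactly the comparison made between fig. 3 (a)-(c) and fig. 3 (d) below.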
Comparing the values of the diagonal cells in fig. 3 (d) with the values of the diagonal cells in fig. 3 (a), the values of the diagonal cells in fig. 3 (d) are not less than the values of the diagonal cells in fig. 3 (a), which shows that the performance of the integrated network corresponding to fig. 3 (d) is better than the performance of the first subnetwork corresponding to fig. 3 (a);
comparing the values of the diagonal cells of fig. 3 (d) with the values of the diagonal cells of fig. 3 (b), the values of the diagonal cells of fig. 3 (d) are not less than the values of the diagonal cells of fig. 3 (b), which indicates that the performance of the integrated network corresponding to fig. 3 (d) is better than the performance of the second subnetwork corresponding to fig. 3 (b);
comparing the values of the diagonal cells in fig. 3 (d) with the values of the diagonal cells in fig. 3 (c), the values of the diagonal cells in fig. 3 (d) are not less than the values of the diagonal cells in fig. 3 (c), which indicates that the performance of the integrated network corresponding to fig. 3 (d) is better than the performance of the third subnetwork corresponding to fig. 3 (c);
the simulation experiment results show that the method directly extracts and identifies the characteristics of the original data of the radio station individual through the time sequence depth network of one-dimensional convolution, so that a large amount of characteristic extraction work based on expert knowledge is omitted, and the method has high identification accuracy; meanwhile, the one-dimensional time sequence multi-subnetwork deep integration model constructed by the method can reduce the overfitting problem caused by a single feature extraction identification network, and improves the generalization capability and robustness of the radio station individual identification system.
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (9)

1. An intelligent identification method for identifying individual types of different radio stations, characterized by comprising the following steps: based on a time-sequence deep network, inputting individual radio-station time-sequence signals reflecting different individual radio-station types into the deep network, setting a fixed data length in the deep network, preprocessing the original individual radio-station data, and performing zero-filling processing to obtain time-sequence data and their corresponding category labels; dividing a training sample set and a test sample set to obtain signal samples of all radio-station individual data, performing zero-filling processing on the original individual radio-station data to form a radio-station individual sample set, and dividing the radio-station individual sample set in proportion to generate the training sample set and the test sample set; constructing a one-dimensional time-sequence multi-sub-network deep integration model from the training sample set and the test sample set; setting the loss function of each convolution sub-network as the cross-entropy loss function, and calculating the error loss between the predicted category score and the true category confidence score of the station individual data in the training set according to the set loss function; training the one-dimensional time-sequence multi-sub-network deep integration model until the set number of training iterations is reached, obtaining the optimal network parameter weights of the iteration process and completing the training of the multi-sub-network deep integration model; inputting the test sample set into the trained one-dimensional time-sequence multi-sub-network deep integration model, predicting the test data and evaluating the classification effect, averaging the corresponding element values of the prediction vectors to obtain the predicted result vector, and taking the position of the maximum value in the result vector as the predicted radio-station individual category;
the one-dimensional time sequence multi-subnetwork deep integration model comprises three subnetworks, wherein:
the first sub-network comprises an input layer, ten convolution layers, four pooling layers, two fully-connected layers, two batch-normalization layers, a classifier layer and an output layer, whose structural relationship is, in order: input layer → first convolution layer → second convolution layer → first pooling layer → third convolution layer → fourth convolution layer → second pooling layer → fifth convolution layer → sixth convolution layer → third pooling layer → seventh convolution layer → eighth convolution layer → fourth pooling layer → ninth convolution layer → tenth convolution layer → first fully-connected layer → first batch-normalization layer → second fully-connected layer → second batch-normalization layer → classifier layer → output layer;
the second sub-network comprises an input layer, eight convolution layers, three pooling layers, two fully-connected layers, two batch-normalization layers, a classifier layer and an output layer, whose structural relationship is, in order: input layer → 1st convolution layer → 2nd convolution layer → 1st pooling layer → 3rd convolution layer → 4th convolution layer → 2nd pooling layer → 5th convolution layer → 6th convolution layer → 3rd pooling layer → 7th convolution layer → 8th convolution layer → 1st fully-connected layer → 1st batch-normalization layer → 2nd fully-connected layer → 2nd batch-normalization layer → classifier layer → output layer;
the third sub-network comprises an input layer, six convolution layers, two pooling layers, two fully-connected layers, two batch-normalization layers, a classifier layer and an output layer, whose structural relationship is, in order: input layer → first convolution layer → second convolution layer → first pooling layer → third convolution layer → fourth convolution layer → second pooling layer → fifth convolution layer → sixth convolution layer → first fully-connected layer → first batch-normalization layer → second fully-connected layer → second batch-normalization layer → classifier layer → output layer; the output layers of the three sub-networks are then connected in parallel to form the one-dimensional time-sequence multi-sub-network deep integrated network comprising the three sub-networks.
2. An intelligent recognition method for individual categories of different stations as defined in claim 1, wherein: the method comprises the steps of dividing a training sample set and a test sample set, enabling time sequence data and corresponding category labels to form a signal sample pair, obtaining signal sample pairs of all radio station individual data and forming the radio station individual sample set, dividing the training sample, a verification sample set and the test sample set according to a proportion, and building a deep integration model formed by three convolution sub-networks.
3. An intelligent recognition method for identifying individual categories of different stations as recited in claim 2, wherein: in a deep integration model composed of three convolution sub-networks, convolution layers, pooling layers and full-connection layer network parameters of the three sub-networks are set, cross entropy loss functions, optimization algorithms and related parameters of the loss functions of the three sub-networks are set, a one-dimensional time sequence multi-sub-network deep integration model is constructed, the sequence of samples in a training sample set is randomly disordered, and the samples are input into the one-dimensional time sequence multi-sub-network deep integration model in batches according to set training step lengths to carry out network iterative training.
4. An intelligent recognition method for individual classes of different stations as defined in claim 3, wherein: calculating error loss between the predicted category score and the actual category confidence score of the individual data of the radio station in the training set according to the set loss function; randomly disordering the sample sequence in the verification sample set every iteration of the training data set, inputting the samples into a one-dimensional time sequence multi-subnetwork deep integration model in batches according to the set training step length for network verification, and obtaining the optimal network parameter weight in the iteration process; inputting a test sample set into the trained one-dimensional time sequence multi-sub-network deep integration model, averaging corresponding element values of prediction vectors of the three sub-models to obtain a predicted result vector, and taking the position of the maximum value in the result vector in the vector as the predicted radio station individual category.
5. An intelligent recognition method for identifying individual categories of different stations as recited in claim 1, wherein: in the process of preprocessing the individual radio-station data, the individual radio-station data is input and preprocessed, and zero-filling processing is performed on the original individual radio-station data: the fixed data length is set to 1024, and the short pulses of 400-550 points and the pulses of 900-1100 points in the original individual radio-station data, being shorter than the fixed length, are padded with zeros to obtain time-sequence data of length 1024.
6. An intelligent recognition method for identifying individual categories of different stations as recited in claim 1, wherein: the network parameters of the three sub-networks are set, wherein the layer parameters in the first sub-network are set as follows: the input layer has 1024 neural units; the numbers of convolution kernels of the first, second, third and fourth convolution layers are set to be not less than 16, those of the fifth and sixth convolution layers to be not less than 32, and those of the seventh, eighth, ninth and tenth convolution layers to be not less than 64, with all kernel sizes set to 1 × 3; the activation functions all use the linear rectification function ReLU; the first, second, third and fourth pooling layers all use max pooling with pooling size not less than 2; the first fully-connected layer is set to not less than 40 neurons and the second fully-connected layer to not less than 10 neurons; the classifier layer uses the multi-classification function Softmax;
the layer parameters in the second subnetwork are set as follows: the number of the neural units of the input layer is 1024; the number of convolution kernels of the 1 st convolution layer and the 2 nd convolution layer is not less than 16, the number of convolution kernels of the 3 rd convolution layer and the 4 th convolution layer is 32, and the sizes of the convolution kernels are both set to be 1 x 3; the number of convolution kernels of the 5 th convolution layer and the 6 th convolution layer is set to be larger than or equal to 64, and the size of the convolution kernels is set to be 1 x 5; the number of convolution kernels of the 7 th convolution layer and the 8 th convolution layer is set to be 128, and the size of the convolution kernels is set to be 1 multiplied by 7; the activation functions all use linear rectification functions ReLU; the 1 st, 2 nd and 3 rd pooling layers are subjected to maximum pooling, and the size of the pooling is more than or equal to 2; the first layer of full-connection layer is set to be more than or equal to 40 full-connection neurons, and the second layer of full-connection layer neurons are set to be more than or equal to 10 full-connection neurons; the classifier layer uses a multi-classification function Softmax;
the layer parameters in the third sub-network are set as follows: the input layer has 1024 neural units; the numbers of convolution kernels of the first and second convolution layers are set to be not less than 16 and those of the third and fourth convolution layers to 32, with kernel sizes set to 1 × 5; the numbers of convolution kernels of the fifth and sixth convolution layers are set to be not less than 64, with kernel size 1 × 5; the activation functions all use the linear rectification function ReLU; the first and second pooling layers both use max pooling with pooling size not less than 2; the first fully-connected layer is set to not less than 40 neurons and the second fully-connected layer to not less than 10 neurons; the classifier layer uses the multi-classification function Softmax.
7. An intelligent recognition method for individual categories of different stations as defined in claim 1, wherein: setting loss functions and optimization algorithms for three sub-networks: the loss functions of the three sub-networks all use cross-entropy loss functions
L = -Σ_{c=1}^{M} y_c · log(p_c)
where M represents the number of radio-station individual categories, y_c ∈ {0,1} represents the true-category label of the station individual data, and p_c represents the network's predicted probability for category c of the training sample; the optimization algorithms of the three sub-networks all use the Adam optimization algorithm based on adaptive moment estimation.
8. An intelligent recognition method for identifying individual categories of different stations as recited in claim 1, wherein: in setting the training step size and number of iterations of the three sub-networks, the training step size of the network is the number of samples of the training sample set fed into the three sub-networks in each batch, with the batch size Batch_size set to 512; the number of iterations is the number of times the training sample set is repeatedly fed into the three sub-networks for training, with the number of iterations Epoch set to 50; in the training of the one-dimensional time-sequence multi-sub-network deep integrated model, the order of the samples in the training sample set is randomly shuffled, the shuffled training sample set is input into the model in batches according to the training step size for iterative network training, and the training-sample prediction result vector P_c = (P_1 + P_2 + P_3)/3 is obtained; the obtained prediction result vector and the label actually corresponding to the sample are substituted into the loss function, and the error loss between the predicted category and the true category of the station individual data in the training set is calculated; the error loss is back-propagated and gradient optimization is performed with the optimization function so as to adjust the network parameters;
wherein P_1 represents the prediction result vector obtained by inputting the test sample data into the first sub-network; P_2 represents the prediction result vector obtained by inputting the test sample data into the second sub-network; P_3 represents the prediction result vector obtained by inputting the test sample data into the third sub-network.
9. An intelligent recognition method for identifying individual categories of different stations as recited in claim 1, wherein: a candidate network is obtained by iterating over the training data set once; the order of the samples in the validation sample set is then randomly shuffled, and the shuffled validation sample set is input into the one-dimensional time-sequence multi-sub-network deep integration model in batches according to the training step size for network validation, obtaining the candidate network's error loss on the validation set; this error loss is compared with the minimum error loss recorded in the previous iterations, and if it is less than the recorded minimum error loss, the network weights of this iteration are taken as the optimal network weight parameters of the iteration process and the recorded minimum error loss is updated to this error loss; if the error loss is not less than the recorded minimum error loss, no operation is performed; the one-dimensional time-sequence multi-sub-network deep integration model is trained repeatedly until the number of training iterations reaches 50, completing the training process of the multi-sub-network deep integration model and obtaining the optimal network weight parameters; in the process of predicting the test data, the test sample set is input into the trained one-dimensional time-sequence multi-sub-network deep integration model, the corresponding element values of the three sub-models' prediction vectors are averaged to obtain the elements of the predicted result vector, the position of the maximum value in the result vector is taken as the predicted radio-station individual category, and the averaged prediction result vector of the three sub-networks is P_c = (P_1 + P_2 + P_3)/3.
CN202011190513.5A 2020-10-30 2020-10-30 Intelligent identification method for identifying individual types of different radio stations Active CN112529035B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011190513.5A CN112529035B (en) 2020-10-30 2020-10-30 Intelligent identification method for identifying individual types of different radio stations

Publications (2)

Publication Number Publication Date
CN112529035A CN112529035A (en) 2021-03-19
CN112529035B true CN112529035B (en) 2023-01-06

Family

ID=74979267


Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103000172A (en) * 2011-09-09 2013-03-27 中兴通讯股份有限公司 Signal classification method and device
KR20170095582A (en) * 2016-02-15 2017-08-23 한국전자통신연구원 Apparatus and method for audio recognition using neural network
WO2019104217A1 (en) * 2017-11-22 2019-05-31 The Trustees Of Columbia University In The City Of New York System method and computer-accessible medium for classifying breast tissue using a convolutional neural network
GB201720059D0 (en) * 2017-12-01 2018-01-17 Ucb Biopharma Sprl Three-dimensional medical image analysis method and system for identification of vertebral fractures
CN108984481A (en) * 2018-06-26 2018-12-11 华侨大学 A kind of homography matrix estimation method based on convolutional neural networks
CN109271926B (en) * 2018-09-14 2021-09-10 西安电子科技大学 Intelligent radiation source identification method based on GRU deep convolutional network
CN109645980A (en) * 2018-11-14 2019-04-19 天津大学 A kind of rhythm abnormality classification method based on depth migration study
CN110717416B (en) * 2019-09-24 2021-07-09 上海数创医疗科技有限公司 Neural network training method for ST segment classification recognition based on feature selection
CN111553186A (en) * 2020-03-05 2020-08-18 中国电子科技集团公司第二十九研究所 Electromagnetic signal identification method based on depth long-time and short-time memory network
CN111657915A (en) * 2020-04-30 2020-09-15 上海数创医疗科技有限公司 Electrocardiogram form recognition model based on deep learning and use method thereof


Similar Documents

Publication Publication Date Title
CN110826630B (en) Radar interference signal feature level fusion identification method based on deep convolutional neural network
CN109639739B (en) Abnormal flow detection method based on automatic encoder network
Wong et al. Clustering learned CNN features from raw I/Q data for emitter identification
CN108696331B (en) Signal reconstruction method based on generation countermeasure network
CN109450834B (en) Communication signal classification and identification method based on multi-feature association and Bayesian network
CN109299741B (en) Network attack type identification method based on multi-layer detection
CN114564982B (en) Automatic identification method for radar signal modulation type
CN109446804B (en) Intrusion detection method based on multi-scale feature connection convolutional neural network
CN113095370B (en) Image recognition method, device, electronic equipment and storage medium
CN114881093B (en) Signal classification and identification method
CN112749633B (en) Individual radiation source identification method based on separation and reconstruction
CN114692665A (en) Radiation source open set individual identification method based on metric learning
CN110458189A (en) Power quality disturbance classification method based on compressed sensing and deep convolutional neural networks
CN110502989A (en) A small-sample hyperspectral face identification method and system
CN112596016A (en) Transformer fault diagnosis method based on integration of multiple one-dimensional convolutional neural networks
CN114980122A (en) Small sample radio frequency fingerprint intelligent identification system and method
Tang et al. Specific emitter identification for IoT devices based on deep residual shrinkage networks
CN112529035B (en) Intelligent identification method for identifying individual types of different radio stations
CN111983569A (en) Radar interference suppression method based on neural network
CN115238749B (en) Modulation recognition method based on Transformer feature fusion
Xu et al. Individual identification of electronic equipment based on electromagnetic fingerprint characteristics
CN114143210A (en) Deep learning-based command control network key node identification method
CN114358058A (en) Wireless communication signal open set identification method and system based on deep neural network
Feng et al. FCGCN: Feature Correlation Graph Convolution Network for Few-Shot Individual Identification
CN113705695A (en) Power distribution network fault data identification method based on convolutional neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant