Disclosure of Invention
The invention addresses the problems that conventional radar HRRP target recognition algorithms have low recognition accuracy and do not combine feature fusion with decision fusion. Motivated by the observations that useless redundant features reduce recognition efficiency and accuracy, and that different classifiers have different recognition strengths, the invention provides a radar HRRP target recognition method based on the Stacking ensemble learning scheme. The method effectively improves recognition accuracy, has low algorithmic complexity, and offers good recognition efficiency.
The technical scheme adopted by the invention is a radar HRRP target recognition method based on the Stacking ensemble learning scheme, comprising the following steps:
S1. Establish the training set and the test set under the rule that the training set should cover data from every aspect angle of the target as fully as possible and that no sample appears in both the test set and the training set. The jth HRRP sample of the ith target class in the training set can be expressed as
The jth HRRP sample of the ith target class in the test set can be expressed as
where n denotes the nth sample point of an HRRP sample and N denotes the total number of sample points.
S2 Training stage: during training, the mean range profile feature, the radial length feature and the PCA transformation feature are extracted from the HRRP training samples. The three feature types are concatenated into combined features, which are screened with the ReliefF algorithm; the optimal feature subset is then extracted from the screened features with the SVM_RFE algorithm, and finally the trained base classifiers and the trained meta classifier are obtained with the Stacking ensemble learning scheme based on the optimal feature subset. The specific steps are as follows:
S2.1 Normalize the jth HRRP training sample of the ith target class by its 2-norm to obtain the normalized HRRP training sample signal
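The 2-norm normalization can be sketched as follows. This is a minimal Python/NumPy illustration (the patent itself works in MATLAB), and the function name is ours:

```python
import numpy as np

def l2_normalize(hrrp):
    """Normalize an HRRP sample by its 2-norm so the result has unit energy."""
    x = np.asarray(hrrp, dtype=float)
    norm = np.linalg.norm(x)
    return x if norm == 0 else x / norm  # guard against an all-zero sample
```

Each sample is scaled independently, so amplitude differences between samples do not dominate the later feature extraction.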
S2.2 Perform feature extraction on the normalized HRRP training sample signal and concatenate the three extracted feature types into a combined feature. The specific steps are as follows:
S2.2.1 Extract the mean range profile feature of the normalized HRRP training sample signal.
The mean range profile feature corresponding to the whole training set is
where z denotes that there are z training samples per target class.
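Assuming the mean range profile feature is the element-wise mean over the z normalized training samples of a class (the formula itself is not reproduced in this text), the step can be sketched as:

```python
import numpy as np

def mean_range_profile(samples):
    """Element-wise mean over a (z, N) array of normalized HRRP training
    samples of one target class, giving a length-N mean range profile."""
    return np.asarray(samples, dtype=float).mean(axis=0)
```

Averaging over many aspect angles smooths the scintillation of individual range cells.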
S2.2.2 Extract the radial length feature of the normalized HRRP training sample signal. The specific steps are as follows: compute the amplitude mean of the signal and set the threshold to this mean. Compare the amplitude of each sample point with the threshold in turn, starting from the 1st sample point and proceeding in coordinate order, and record the coordinate of the first point whose amplitude exceeds the threshold. Then compare the amplitude of each sample point with the threshold in turn, starting from the Nth sample point and proceeding in reverse coordinate order, and record the coordinate of the first point whose amplitude exceeds the threshold. The radial length feature corresponding to the jth HRRP training sample of the ith target class (the difference between the two recorded coordinates) can be expressed as
The radial length feature corresponding to the whole training set is
where z denotes that there are z training samples per target class.
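The threshold-scan procedure above can be sketched as follows, under the assumption that the feature is the coordinate difference between the two recorded boundary points (a Python/NumPy illustration; the function name is ours):

```python
import numpy as np

def radial_length_feature(hrrp):
    """Radial length of an HRRP sample: use the amplitude mean as a threshold,
    find the first above-threshold point scanning forward from sample 1 and
    the first above-threshold point scanning backward from sample N, and
    return the difference of their (1-based) coordinates."""
    x = np.asarray(hrrp, dtype=float)
    threshold = x.mean()
    above = np.nonzero(x > threshold)[0]
    if above.size == 0:
        return 0  # degenerate case: no point exceeds the mean
    left = above[0] + 1    # 1-based coordinate of the forward-scan crossing
    right = above[-1] + 1  # 1-based coordinate of the backward-scan crossing
    return int(right - left)
```

The two crossings bound the target's support region along the radar line of sight, so the feature reflects the target's physical extent.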
S2.2.3 Extract the PCA transformation feature of the normalized HRRP training sample signal,
where the features are obtained after the transformation and pca is the pca function in MATLAB. The PCA transformation feature corresponding to the whole training set is F_pca,
where z denotes that there are z training samples per target class.
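A sketch of the PCA step using scikit-learn in place of MATLAB's pca function; the number of retained components is illustrative, not taken from the patent:

```python
import numpy as np
from sklearn.decomposition import PCA

def pca_feature(train_matrix, n_components=8):
    """Fit PCA on a (num_samples, N) training matrix and return the
    transformed features together with the fitted model, which would be
    reused to transform test data consistently."""
    model = PCA(n_components=n_components)
    feats = model.fit_transform(train_matrix)
    return feats, model
```

Keeping the fitted model matters: test samples must be projected with the training-set principal axes, not refitted.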
S2.2.4 Concatenate the three feature types into a combined feature. The combined feature corresponding to the jth HRRP sample of the ith target class is
and the combined feature corresponding to the whole training set is
S2.3 Compute and sort the feature weights of F with the ReliefF algorithm: [F_r, sort_r] = ReliefF(F), where F_r contains the features ranked from high to low by weight and sort_r is the generated feature sorting matrix, which can later be used to sort the sample feature weights of the test data. This algorithm is used to screen out useless features preliminarily and to reduce the computational burden of the SVM_RFE algorithm.
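ReliefF has several variants; the following is a simplified nearest-hit/nearest-miss sketch (k = 1) in Python/NumPy, intended only to illustrate the weighting idea, not to reproduce the exact algorithm or the [F_r, sort_r] interface used above:

```python
import numpy as np

def relieff_weights(X, y):
    """Simplified ReliefF-style feature weighting (nearest hit/miss, k = 1).
    Features that separate classes receive positive weights; noise features
    stay near zero, so a threshold (e.g. 0) can screen out useless features."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    n, d = X.shape
    span = X.max(axis=0) - X.min(axis=0)
    span[span == 0] = 1.0                   # avoid division by zero
    w = np.zeros(d)
    for i in range(n):
        diff = np.abs(X - X[i]) / span      # normalized per-feature distances
        dist = diff.sum(axis=1)
        dist[i] = np.inf                    # exclude the sample itself
        same = (y == y[i])
        hit = np.argmin(np.where(same, dist, np.inf))
        miss = np.argmin(np.where(~same, dist, np.inf))
        w += diff[miss] - diff[hit]         # reward class-separating features
    return w / n
```

Sorting features by these weights and keeping those above 0 matches the screening rule given in the experimental-parameters section.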
S2.4 Find the optimal feature subset of F_r with the SVM_RFE algorithm: [F_s, sort_s] = SVM_RFE(F_r), obtaining the optimal feature subset F_s and the optimal-feature-subset extraction matrix sort_s, in preparation for extracting the optimal feature subset of the test data.
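SVM-RFE can be illustrated with scikit-learn's RFE wrapper around a linear SVM; the toy data and the number of selected features here are illustrative (the patent uses 125 features on real HRRP data):

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

# Toy data standing in for the ReliefF-screened feature matrix F_r.
X, y = make_classification(n_samples=200, n_features=20, n_informative=5,
                           random_state=0)

# SVM-RFE: recursively drop the feature with the smallest linear-SVM weight.
selector = RFE(estimator=SVC(kernel="linear"), n_features_to_select=5, step=1)
selector.fit(X, y)

X_best = selector.transform(X)   # optimal feature subset (analogue of F_s)
support = selector.support_      # boolean mask playing the role of sort_s
```

The boolean support mask is what gets reapplied to the test features, mirroring the extraction-matrix multiplication in S3.4.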
S2.5 Train three types of base classifiers and one meta classifier under the Stacking ensemble learning scheme. The first type of base classifier is the support vector machine classifier (SVM), the second type is the k-nearest-neighbor classifier (KNN), the third type is the random forest classifier (RF), and the meta classifier is a k-nearest-neighbor classifier (KNN). The operations are as follows:
S2.5.1 First, divide the optimal feature subset F_s obtained in S2.4 into 5 equal parts: F_s^1, F_s^2, F_s^3, F_s^4, F_s^5. When training the 1st classifier SVM_1 of the first-type base classifiers, use F_s^1 as the validation test data and F_s^2, F_s^3, F_s^4, F_s^5 as the training data to train classifier SVM_1, and record the validation test result as P_svm^1. When training the 2nd classifier SVM_2, use F_s^2 as the validation test data and F_s^1, F_s^3, F_s^4, F_s^5 as the training data to train classifier SVM_2, and record the validation test result as P_svm^2. Continuing in this way, use F_s^3, F_s^4 and F_s^5 in turn as the validation test data, with the remaining four parts as the training data, to obtain five trained first-type base classifiers SVM_1, SVM_2, SVM_3, SVM_4, SVM_5 and, at the same time, the five validation test results P_svm = [P_svm^1, P_svm^2, P_svm^3, P_svm^4, P_svm^5]^T.
S2.5.2 Train the second-type base classifier KNN and the third-type base classifier RF with validation testing in the same way as S2.5.1, obtaining five trained second-type base classifiers KNN_1, KNN_2, KNN_3, KNN_4, KNN_5 and five trained third-type base classifiers RF_1, RF_2, RF_3, RF_4, RF_5, together with the validation test results P_f_knn = [P_f_knn^1, P_f_knn^2, P_f_knn^3, P_f_knn^4, P_f_knn^5]^T and P_rf = [P_rf^1, P_rf^2, P_rf^3, P_rf^4, P_rf^5]^T.
S2.5.3 Splice the above validation test results into the latest training data F_last = [P_svm, P_f_knn, P_rf];
S2.6 Import F_last = [P_svm, P_f_knn, P_rf] into the meta classifier KNN for training, obtaining the trained meta classifier.
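The 5-fold scheme of S2.5 and S2.6 corresponds closely to scikit-learn's StackingClassifier with cv=5, which builds the out-of-fold prediction matrix (the analogue of F_last) internally; a sketch on synthetic data, where all dataset parameters are illustrative:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Synthetic multi-class data standing in for the optimal feature subset F_s.
X, y = make_classification(n_samples=300, n_features=20, n_informative=8,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Base classifiers SVM, KNN, RF and a KNN meta classifier; cv=5 reproduces
# the 5-fold out-of-fold validation used to build the meta training data.
stack = StackingClassifier(
    estimators=[("svm", SVC(kernel="poly", degree=2, probability=True)),
                ("knn", KNeighborsClassifier(n_neighbors=1)),
                ("rf", RandomForestClassifier(n_estimators=30, random_state=0))],
    final_estimator=KNeighborsClassifier(),
    cv=5)
stack.fit(X_tr, y_tr)
acc = stack.score(X_te, y_te)
```

With cv=5, each base classifier's meta-features come only from folds it was not trained on, which is exactly the leakage-avoidance property the patent's manual five-way split provides.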
S3 Test stage: during testing, the mean range profile feature, the radial length feature and the PCA transformation feature are extracted from the HRRP test samples. The three feature types are concatenated into combined features, which are multiplied in turn by the feature sorting matrix and the optimal-feature-subset extraction matrix obtained in training to yield the optimal feature subset. The optimal feature subset is then imported into the trained base classifiers to obtain the latest test data set, and finally the trained meta classifier classifies the latest test data set to give the final classification result. The specific steps are as follows:
S3.1 Normalize the jth HRRP test sample of the ith target class by its 2-norm to obtain the normalized HRRP test sample signal
S3.2 Perform feature extraction on the normalized HRRP test sample signal and concatenate the three extracted feature types into a combined feature. The specific steps are as follows:
S3.2.1 Extract the mean range profile feature of the normalized HRRP test sample signal.
The mean range profile feature corresponding to the whole test set is
where y denotes that there are y test samples per target class.
S3.2.2 Extract the radial length feature of the normalized HRRP test sample signal. The specific procedure is as follows: compute the amplitude mean of the signal and set the threshold to this mean. Compare the amplitude of each sample point with the threshold in turn, starting from the 1st sample point and proceeding in coordinate order, and record the coordinate of the first point whose amplitude exceeds the threshold. Then compare the amplitude of each sample point with the threshold in turn, starting from the Nth sample point and proceeding in reverse coordinate order, and record the coordinate of the first point whose amplitude exceeds the threshold. The radial length feature corresponding to the jth HRRP test sample of the ith target class (the difference between the two recorded coordinates) can be expressed as
The radial length feature corresponding to the whole test set is
where y denotes that there are y test samples per target class.
S3.2.3 Extract the PCA transformation feature of the normalized HRRP test sample signal,
where the features are obtained after the transformation and pca is the pca function in MATLAB. The PCA transformation feature corresponding to the whole test set is f_pca,
where y denotes that there are y test samples per target class.
S3.2.4 Concatenate the three feature types into a combined feature. The combined feature corresponding to the jth HRRP test sample of the ith target class is
and the combined feature corresponding to the whole test set is
S3.3 Multiply the combined feature f by sort_r obtained in S2.3 to perform feature screening, obtaining the screened feature f_r = f × sort_r.
S3.4 Multiply the screened feature f_r by sort_s obtained in S2.4 to obtain the desired optimal feature subset f_s = f_r × sort_s.
S3.5 Import the optimal feature subset f_s in turn into the five trained first-type base classifiers SVM_1, SVM_2, SVM_3, SVM_4, SVM_5 obtained in training, yielding five classification results R_svm^1, R_svm^2, R_svm^3, R_svm^4, R_svm^5; average the five classification results to obtain R_svm as one subset of the latest test data.
In the same way, import the optimal feature subset f_s in turn into the five trained second-type base classifiers KNN_1, KNN_2, KNN_3, KNN_4, KNN_5 and the five trained third-type base classifiers RF_1, RF_2, RF_3, RF_4, RF_5, obtaining the latest test data subsets R_f_knn and R_rf respectively. Splice the three test subsets into the total test data set T_last = [R_svm, R_f_knn, R_rf].
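The test-time fusion of S3.5 can be sketched as follows, assuming the "classification results" being averaged are per-class score or probability vectors (a NumPy illustration; the function names are ours):

```python
import numpy as np

def average_fold_outputs(fold_outputs):
    """Average the per-class outputs of the five fold-trained copies of one
    base classifier, giving one latest-test-data subset (e.g. R_svm)."""
    return np.mean(np.asarray(fold_outputs, dtype=float), axis=0)

def build_latest_test_set(r_svm, r_f_knn, r_rf):
    """Splice the three averaged subsets into T_last = [R_svm, R_f_knn, R_rf]."""
    return np.hstack([r_svm, r_f_knn, r_rf])
```

T_last then has one row per test sample and one column block per base-classifier type, matching the layout of the meta classifier's training data F_last.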
S4 Use the meta classifier trained in S2.6 to classify the latest test data set T_last = [R_svm, R_f_knn, R_rf], obtaining the final classification result.
The invention has the following beneficial effects: the method effectively improves the recognition accuracy for radar HRRP multi-class targets, has low algorithmic complexity and good recognition efficiency, and has important engineering application value for radar automatic target recognition.
Detailed Description
The present invention is further illustrated by the following implementation examples. It should be understood that these examples are merely illustrative of the present invention and are not intended to limit its scope; after reading the present invention, various equivalent modifications made by those skilled in the art fall within the scope defined by the appended claims of the present application.
The invention provides a radar HRRP target recognition method based on the Stacking ensemble learning scheme; the overall flow chart is shown in figure 1, and the method comprises the following steps:
S1. Establish the training set and the test set under the rule that the training set should cover data from every aspect angle of the target as fully as possible and that no sample appears in both the test set and the training set;
S2 Training stage: during training, the mean range profile feature, the radial length feature and the PCA transformation feature are extracted from the HRRP training samples. The three feature types are concatenated into combined features, which are screened with the ReliefF algorithm; the optimal feature subset is then extracted from the screened features with the SVM_RFE algorithm, and finally the trained base classifiers and the trained meta classifier are obtained with the Stacking ensemble learning scheme based on the optimal feature subset;
S3 Test stage: during testing, the mean range profile feature, the radial length feature and the PCA transformation feature are extracted from the HRRP test samples. The three feature types are concatenated into combined features, which are multiplied in turn by the feature sorting matrix and the optimal-feature-subset extraction matrix obtained in training to yield the optimal feature subset. The optimal feature subset is then imported into the trained base classifiers to obtain the latest test data set, and finally the trained meta classifier classifies the latest test data set to give the final classification result;
S4 Use the meta classifier trained in S2.6 to classify the latest test data set T_last = [R_svm, R_f_knn, R_rf], obtaining the final classification result.
The effect of the invention can be further verified and explained by the following simulation experiment:
(I) Experimental conditions
1. Experimental data
The data used in the experiment are measured high-resolution range profiles of 10 classes of ground targets. The optical images and HRRPs of the 10 target classes are shown in fig. 2; from left to right and top to bottom they are BMP2, BTR70, T72, BTR60, 2S1, BRDM2, D7, T62, ZIL and ZSU. Each target class contains full-aspect-angle data at pitch angles of 15 degrees and 17 degrees. In the experiment, the data at a 17-degree pitch angle were used for training and the data at a 15-degree pitch angle were used for testing. There are 2747 training samples and 2425 test samples.
2. Experimental Environment
Software environment of the simulation experiment: operating system Windows 10; processor Intel(R) Core(TM) i7-8700K with a 3.70 GHz main frequency; software platform MATLAB 2019b.
3. Experimental parameters
In the training stage, when screening out useless features with the ReliefF algorithm, the threshold is set to 0: features with weights above 0 are retained and features with weights below 0 are screened out.
In the training stage, when the SVM_RFE algorithm is used to find the optimal feature subset, repeated experiments showed that setting the number of features in the subset to 125 gives good classification performance.
The kernel function of the first-type base classifier SVM is a polynomial kernel, with the polynomial order set to 2;
the second-type base classifier is the k-nearest-neighbor classifier KNN, with the number of neighbors set to 1 and equal distance weights;
the third-type base classifier is the random forest classifier RF, implemented with the fitcensemble function in MATLAB, where the parameter Method is set to Bag and the parameter NumLearningCycles is set to 30.
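For reference, the stated classifier settings transcribed to scikit-learn equivalents (the patent itself uses MATLAB; the parameter names below are the sklearn counterparts, so this is a sketch rather than the original implementation):

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Settings from the experimental-parameters section, in sklearn terms.
svm_base = SVC(kernel="poly", degree=2)             # polynomial kernel, order 2
knn_base = KNeighborsClassifier(n_neighbors=1,
                                weights="uniform")  # k = 1, equal weights
rf_base = RandomForestClassifier(n_estimators=30)   # bagged, 30 learners
```

These three estimators are the natural plug-ins for the base-classifier slots of the Stacking scheme described in S2.5.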
(II) Experimental contents and results
The method of the invention is compared with other existing radar target recognition methods; the results are shown in Table 1:
TABLE 1

Classification method                                        Recognition rate
Single-classifier support vector machine (SVM)               83.3%
Single-classifier k-nearest neighbor (KNN)                   84.5%
Single-classifier random forest (RF)                         85.1%
Multi-classifier voting                                      87.4%
Bayesian compressed sensing (BCS)                            86.2%
Joint dynamic sparse representation classification (JDSRC)   86.5%
Multitask compressed sensing (MtCS)                          86.7%
Method of the invention                                      89.1%
As can be seen from Table 1, compared with the other radar HRRP target recognition methods, the method of the invention achieves a high recognition accuracy of 89.1%, clearly superior to the other methods.