CN109344767B - SAR target identification method based on multi-azimuth multi-feature collaborative representation - Google Patents
- Publication number: CN109344767B (application CN201811148970.0A)
- Authority
- CN
- China
- Prior art keywords
- azimuth
- test sample
- feature
- features
- neighborhood
- Prior art date: 2018-09-29
- Legal status: Active
Classifications
- G06V20/13: Satellite images (G: Physics; G06: Computing, calculating or counting; G06V: Image or video recognition or understanding; G06V20/00: Scenes, scene-specific elements; G06V20/10: Terrestrial scenes)
- G06F18/254: Fusion techniques of classification results, e.g. of results related to same input data (G06F: Electric digital data processing; G06F18/00: Pattern recognition; G06F18/20: Analysing; G06F18/25: Fusion techniques)
- G06F18/259: Fusion by voting
Abstract
The invention discloses an SAR target recognition method based on multi-azimuth multi-feature collaborative representation. The method improves the traditional collaborative representation algorithm by mining, for the first time, the azimuth correlation of adjacent SAR images, and provides a new multi-azimuth CRC algorithm. It not only retains the computational simplicity of collaborative representation, but also improves the accuracy of SAR target classification, with excellent noise resistance and robustness to changes in the various parameters.
Description
Technical Field
The invention relates to the technical field of radar target identification, in particular to an SAR target identification method based on multi-azimuth multi-feature collaborative representation.
Background
Synthetic aperture radar (SAR) is widely used in military and civilian applications owing to its many advantages. SAR target recognition, i.e. distinguishing and classifying specific targets from SAR imagery, is an important application of SAR.
SAR target identification methods fall mainly into three categories. (1) Template matching based methods. Their good recognition performance depends on a template library that is as complete as possible; a small template library severely degrades recognition performance, while a large one entails high space complexity and low efficiency. (2) Model-based methods. These offer better clutter robustness but place high demands on SAR image quality, and building the models requires considerable theoretical expertise and computing power, so model-based SAR target identification is not yet widely applied. (3) Machine learning based methods. Among these, sparse representation-based SAR target recognition has attracted wide attention for its good recognition performance. However, although SRC (sparse representation-based classification) classifies well, it is computationally complex and a globally optimal solution is hard to obtain. The collaborative representation algorithm was therefore proposed as an alternative representation learning method. Results show that CRC (collaborative representation-based classification), based on an L2-norm constraint, achieves classification performance similar to SRC at a much lower computational cost. To date, however, no CRC model has taken into account the dependence of SAR target scattering on azimuth angle or the complementarity between different features of the SAR target image. According to the SAR imaging mechanism, SAR target images at adjacent azimuths have similar scattering characteristics and are therefore strongly correlated in the image domain. Moreover, different features extracted from different aspects of the SAR image each describe the SAR scene from a single perspective, so the information they provide is strongly complementary.
Therefore, how to better improve the classification performance of collaborative representation on SAR image targets is an important research issue in this field.
Disclosure of Invention
Aiming at the above defects in the prior art, the application discloses an SAR target recognition method based on multi-azimuth multi-feature collaborative representation: the traditional CRC algorithm is improved by mining the azimuth correlation of adjacent SAR images, and a new multi-azimuth CRC algorithm is provided.
In order to solve the technical problem, the following technical scheme is adopted in the application:
a SAR target recognition method based on multi-azimuth multi-feature collaborative representation comprises the following steps:
(1) for multiple classes of known radar targets, respectively collecting SAR images of each known radar target at multiple azimuths as training samples, and extracting the PCA (principal component analysis) features, wavelet features and 2DS-ZMs (2D-Slice Zernike moments) features of each training sample, so that a training sample set X_{u×v} is formed from the PCA, wavelet and 2DS-ZMs features of the training samples of all classes, where u is the sample dimension and v is the total number of training samples in the training sample set; based on the training sample set X_{u×v}, constructing a feature dictionary {D_k}_{k=1,2,3}, where k is the feature type index;
(2) for the radar target to be tested, collecting SAR images of the radar target at multiple azimuths as test samples, and extracting the PCA features, wavelet features and 2DS-ZMs features of each test sample, so that a test sample set Y_{u×w} is formed from the PCA, wavelet and 2DS-ZMs features of the test samples, where w is the total number of test samples in the test sample set; any one test sample in Y_{u×w} can be represented as {y^k}_{k=1,2,3};
(3) taking one test sample in the test sample set as the central test sample y_m, where m is the index of the central test sample, and taking the central test sample together with the test samples in its azimuth neighborhood as the multi-azimuth neighborhood test sample set; the sector within η degrees adjacent to the central test sample is its azimuth neighborhood, the test samples whose azimuth angles fall within [c-η, c+η] are the test samples in that azimuth neighborhood, and c is the azimuth angle of the central test sample; the multi-azimuth neighborhood test sample set is expressed as {y_{m-p}, …, y_m, …, y_{m+q}}, where p is the number of test samples in the upper half azimuth sector of the central test sample and q is the number of test samples in the lower half azimuth sector;
(4) linearly representing each multi-azimuth neighborhood test sample in the multi-azimuth neighborhood test sample set;
(5) establishing a multi-azimuth multi-feature collaborative representation classification model of the test sample set by using each training sample in the training sample set and the linearly represented multi-azimuth neighborhood test samples;
(6) respectively calculating the temporary class labels under the PCA features, wavelet features and 2DS-ZMs features by using the multi-azimuth multi-feature collaborative representation classification model;
(7) performing multi-feature voting on the temporary class labels of the PCA, wavelet and 2DS-ZMs features to obtain the final class label of the central test sample, thereby identifying the radar target to be tested.
Preferably, in step (4), each multi-azimuth neighborhood test sample in the multi-azimuth neighborhood test sample set is linearly represented as:

y_i^k = Σ_{τ=1}^{N} D_τ^k β_{i,τ}^k + ε^k

wherein D_τ^k denotes the τ-th class sub-dictionary under feature k, N is the total number of classes, β_{i,τ}^k denotes the representation coefficient sub-vector of the i-th test sample with respect to sub-dictionary D_τ^k, and ε^k denotes the random error under feature k.
Preferably, in step (5), the multi-azimuth multi-feature collaborative representation classification model is expressed as:

min_{B^k} ‖Y_m^k - D^k B^k‖_F^2 + θ‖B^k‖_F^2

wherein B^k = [β_{m-p}^k, …, β_m^k, …, β_{m+q}^k] is the set of coefficient sub-vectors, Y_m^k = [y_{m-p}^k, …, y_m^k, …, y_{m+q}^k] is the multi-azimuth neighborhood test sample set under feature k, and θ is the multi-azimuth multi-feature collaborative representation regularization parameter.
Preferably, the step (6) comprises the steps of:
1) solving the analytic solution of the multi-azimuth multi-feature collaborative representation classification model:

B^k = ((D^k)^T D^k + θI)^{-1} (D^k)^T Y_m^k

where I denotes the identity matrix;
2) based on the analytic solution of the multi-azimuth multi-feature collaborative representation classification model, solving the representation error set {e_τ^k}_{τ=1,…,N} of the test samples in the azimuth neighborhood of the central test sample, where

e_τ^k = Σ_{i=m-p}^{m+q} ‖y_i^k - D_τ^k β_{i,τ}^k‖_2^2.
Preferably, the step (7) comprises the steps of:
denoting by l^k the temporary class label under feature k, the possibility that the central test sample belongs to the h-th class is p_h = (1/3) Σ_{k=1}^{3} δ(l^k, h), where δ(l^k, h) = 1 if l^k = h and 0 otherwise; the multi-feature voting criterion is ĥ = argmax_h p_h, and the result ĥ of the multi-feature voting criterion is the final class label of the central test sample.
In summary, the invention discloses an SAR target recognition method based on multi-azimuth multi-feature collaborative representation. The method improves the traditional collaborative representation algorithm by mining, for the first time, the azimuth correlation of adjacent SAR images, and provides a new multi-azimuth CRC algorithm. It not only retains the computational simplicity of collaborative representation, but also improves the accuracy of SAR target classification, with excellent noise resistance and robustness to changes in the various parameters.
Drawings
In order to make the purpose, technical solutions and advantages of the present application clearer, the application will be described in further detail below with reference to the accompanying drawings, in which:
FIG. 1 is a near-azimuth SAR image of the target BMP2;
FIG. 2 is a near-azimuth SAR image of the target BTR70;
FIG. 3 is a near-azimuth SAR image of the target T72;
FIG. 4 is a graph of the classification performance of the different methods (PCACRC, WCRC, 2DSZMsCRC, MFCRC, MAMFCRC) under the same parameters;
FIG. 5 is a graph of the classification performance of the five methods (PCACRC, WCRC, 2DSZMsCRC, MFCRC, MAMFCRC) as a function of the feature dimension;
FIG. 6 is a graph of the classification performance of the five methods (PCACRC, WCRC, 2DSZMsCRC, MFCRC, MAMFCRC) as a function of the regularization parameter θ;
FIG. 7 is a graph of MAMFCRC average classification accuracy versus azimuth neighborhood angle;
FIG. 8 is a graph of the classification performance of MAMFCRC under noise with different SNRs;
FIG. 9 is a SAR image of BRDM 2;
fig. 10 is a SAR image of 2S 1;
FIG. 11 is a SAR image of ZSU 23/4;
fig. 12 is a flowchart of an SAR target identification method based on multi-azimuth multi-feature collaborative representation disclosed by the present invention.
Detailed Description
The present application will now be described in further detail with reference to the accompanying drawings.
As shown in fig. 12, the present invention discloses an SAR target identification method based on multi-azimuth multi-feature collaborative representation, which comprises the following steps:
(1) for multiple classes of known radar targets, respectively collecting SAR images of each known radar target at multiple azimuths as training samples, and extracting the PCA (principal component analysis) features, wavelet features and 2DS-ZMs (2D-Slice Zernike moments) features of each training sample, so that a training sample set X_{u×v} is formed from the PCA, wavelet and 2DS-ZMs features of the training samples of all classes, where u is the sample dimension and v is the total number of training samples in the training sample set; based on the training sample set X_{u×v}, constructing a feature dictionary {D_k}_{k=1,2,3}, where k is the feature type index;
As shown in fig. 1 to 3 and 9 to 11, various typical SAR target image features may be used in the method of the present invention. These different features are extracted from different aspects of the SAR image, each describing the SAR scene from a single perspective. Because the information they provide is strongly complementary, using these different features together can improve the accuracy and robustness of SAR target identification.
PCA is a very common data compression and feature extraction technique. The data are first centered, and the eigenvalues and eigenvectors of the data covariance matrix are computed. The eigenvalues are then sorted in descending order, the largest k eigenvalues are selected, and the corresponding k eigenvectors form the projection matrix. Finally, the sample points are projected onto the selected eigenvectors to obtain the dimension-reduced PCA features.
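As a rough illustration of this step, the following Python sketch (the function name and interface are ours, not the patent's) computes k-dimensional PCA features from flattened SAR image chips:

```python
import numpy as np

def pca_features(X, k):
    """Project flattened image chips onto the top-k principal components.

    X: (n_samples, n_pixels) matrix, one flattened SAR chip per row.
    """
    Xc = X - X.mean(axis=0)                  # center the data
    cov = np.cov(Xc, rowvar=False)           # covariance matrix of the pixels
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigendecomposition (ascending order)
    order = np.argsort(eigvals)[::-1]        # re-sort eigenvalues descending
    W = eigvecs[:, order[:k]]                # top-k eigenvectors as projection matrix
    return Xc @ W                            # k-dimensional PCA features
```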
The wavelet transform can effectively extract the multi-scale characteristics of a two-dimensional image and can highlight singular points in the image. The two-dimensional discrete wavelet transform (2D-DWT) of an image decomposes its rows and columns with low-pass and high-pass filters, yielding a low-frequency approximation sub-image and horizontal, vertical and diagonal high-frequency sub-images. In the invention, the low-frequency sub-image of the two-dimensional discrete wavelet transform is used as the multi-scale wavelet feature.
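A minimal sketch of this feature, assuming the PyWavelets package and a Haar wavelet (the patent does not name a specific wavelet):

```python
import pywt

def wavelet_features(img, wavelet="haar"):
    """Single-level 2D-DWT of an image; keep only the low-frequency
    approximation sub-image as the multi-scale wavelet feature."""
    cA, (cH, cV, cD) = pywt.dwt2(img, wavelet)  # approximation + H/V/D detail sub-images
    return cA.ravel()                            # flattened low-frequency sub-image
```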
Zernike moments can easily construct arbitrarily high-order moments of an image and can reconstruct an image from relatively few moments. The present invention uses the image-domain 2DS-ZMs feature: the SAR image is evenly cut into k slices along the amplitude direction, yielding a multi-layer 2D-Slice representation of the image. Zernike moments are then used to describe each layer, so that each layer yields a Zernike moment feature vector. Finally, these feature vectors are stacked into a single column vector that describes the SAR image; this column vector is called the 2D-Slice Zernike moments (2DS-ZMs) feature vector of the SAR image.
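The following sketch shows one way the 2DS-ZMs feature could be assembled, assuming the mahotas library for the Zernike moments and illustrative values for the slice count and moment degree (both parameters are our assumptions):

```python
import numpy as np
import mahotas

def slice_zernike_features(img, n_slices=10, degree=8):
    """2D-Slice Zernike moments: cut the amplitude range into n_slices binary
    layers, describe each layer with Zernike moments, and stack the vectors."""
    radius = min(img.shape) // 2
    edges = np.linspace(img.min(), img.max(), n_slices + 1)
    feats = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        layer = ((img >= lo) & (img < hi)).astype(np.uint8)  # one amplitude slice
        feats.append(mahotas.features.zernike_moments(layer, radius, degree=degree))
    return np.concatenate(feats)  # stacked 2DS-ZMs feature vector
```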
(2) for the radar target to be tested, collecting SAR images of the radar target at multiple azimuths as test samples, and extracting the PCA features, wavelet features and 2DS-ZMs features of each test sample, so that a test sample set Y_{u×w} is formed from the PCA, wavelet and 2DS-ZMs features of the test samples, where w is the total number of test samples in the test sample set; any one test sample in Y_{u×w} can be represented as {y^k}_{k=1,2,3};
(3) taking one test sample in the test sample set as the central test sample y_m, where m is the index of the central test sample, and taking the central test sample together with the test samples in its azimuth neighborhood as the multi-azimuth neighborhood test sample set; the sector within η degrees adjacent to the central test sample is its azimuth neighborhood, the test samples whose azimuth angles fall within [c-η, c+η] are the test samples in that azimuth neighborhood, and c is the azimuth angle of the central test sample; the multi-azimuth neighborhood test sample set is expressed as {y_{m-p}, …, y_m, …, y_{m+q}}, where p is the number of test samples in the upper half azimuth sector of the central test sample and q is the number of test samples in the lower half azimuth sector;
According to the imaging mechanism of the SAR sensor, SAR images with similar azimuth angles are strongly correlated; that is, there is high correlation and similarity among the SAR target image test samples within a small azimuth neighborhood. We introduce this mechanism into the basic collaborative representation model: taking the azimuth angle of a central test sample as the center, several test samples within a small azimuth sector are gathered to form the multi-azimuth neighborhood sample set.
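As an illustrative sketch of the neighborhood construction (the function and the modulo-360 wrap-around for azimuths near 0°/360° are our assumptions), the test samples whose azimuths fall within ±η degrees of the central sample can be selected as follows:

```python
import numpy as np

def azimuth_neighborhood(azimuths, m, eta):
    """Indices of the test samples whose azimuth lies within [c - eta, c + eta]
    degrees of central sample m, with the angular difference wrapped modulo 360."""
    c = azimuths[m]
    diff = np.abs((azimuths - c + 180.0) % 360.0 - 180.0)  # wrap-around distance
    return np.flatnonzero(diff <= eta)                     # includes m itself
```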
(4) Linearly representing each multi-azimuth neighborhood test sample in the multi-azimuth neighborhood test sample set;
(5) establishing a multi-azimuth multi-feature collaborative representation classification model of the test sample set by using each training sample in the training sample set and the linearly represented multi-azimuth neighborhood test samples;
(6) respectively calculating the temporary class labels under the PCA features, wavelet features and 2DS-ZMs features by using the multi-azimuth multi-feature collaborative representation classification model;
(7) performing multi-feature voting on the temporary class labels of the PCA, wavelet and 2DS-ZMs features to obtain the final class label of the central test sample, thereby identifying the radar target to be tested.
As shown in fig. 8, which plots the classification performance of the invention under noise with different SNRs, the invention improves the traditional collaborative representation algorithm by mining, for the first time, the azimuth correlation of adjacent SAR images, and provides a new multi-azimuth CRC algorithm. The method not only retains the computational simplicity of collaborative representation, but also improves the accuracy of SAR target classification, with excellent noise resistance and robustness to changes in the various parameters.
In specific implementation, in step (4), each multi-azimuth neighborhood test sample in the multi-azimuth neighborhood test sample set is linearly represented as:

y_i^k = Σ_{τ=1}^{N} D_τ^k β_{i,τ}^k + ε^k

wherein D_τ^k denotes the τ-th class sub-dictionary under feature k, N is the total number of classes, β_{i,τ}^k denotes the representation coefficient sub-vector of the i-th test sample with respect to sub-dictionary D_τ^k, and ε^k denotes the random error under feature k.
In specific implementation, in step (5), the multi-azimuth multi-feature collaborative representation classification model is expressed as:

min_{B^k} ‖Y_m^k - D^k B^k‖_F^2 + θ‖B^k‖_F^2

wherein B^k = [β_{m-p}^k, …, β_m^k, …, β_{m+q}^k] is the set of coefficient sub-vectors, Y_m^k = [y_{m-p}^k, …, y_m^k, …, y_{m+q}^k] is the multi-azimuth neighborhood test sample set under feature k, and θ is the multi-azimuth multi-feature collaborative representation regularization parameter.
According to the strong multi-azimuth correlation characteristic of the target, the representation coefficients β_i^k of the neighborhood samples should be similar in nature. If strict requirements are placed on this similarity, i.e. every β_i^k is required to equal a common β^k, the above problem can be expressed as:

min_{β^k} Σ_{i=m-p}^{m+q} ‖y_i^k - D^k β^k‖_2^2 + θ‖β^k‖_2^2

Here Y_m^k = [y_{m-p}^k, …, y_m^k, …, y_{m+q}^k]. According to the principle of the collaborative representation classifier, the multi-azimuth multi-feature collaborative representation classification model in matrix form is obtained as:

min_{B^k} ‖Y_m^k - D^k B^k‖_F^2 + θ‖B^k‖_F^2
in specific implementation, the step (6) comprises the following steps:
1) solving the analytic solution of the multi-azimuth multi-feature collaborative representation classification model:

B^k = ((D^k)^T D^k + θI)^{-1} (D^k)^T Y_m^k

where I denotes the identity matrix;
in the above multi-azimuth multi-feature collaborative representation classification model, the Frobenius norm of the matrix can be written as l of each column vector2The sum of the norms, i.e. the formula:and
therefore, the multi-azimuth neighborhood collaborative representation model becomes the following solution problem:
on the other hand, due to noise and the like, there is a difference between the test samples at different azimuth angles, and the representation coefficients should also have a difference to ensure that sufficient complementary information is provided, which makes the representation more flexible. Thus, we will still eachConsidered different, the representation coefficients of the multi-azimuth neighborhood co-representation can thus be solved by the following optimization problem:
the above-mentioned multidirectional neighborhood collaborative representation model optimization problem has an analytic solution as follows:
2) based on the analytic solution of the multi-azimuth multi-feature collaborative representation classification model, solving the representation error set {e_τ^k}_{τ=1,…,N} of the test samples in the azimuth neighborhood of the central test sample.
From B^k, the collaborative representation coefficients of the test samples in the azimuth neighborhood are obtained, and the representation error of each class over the azimuth neighborhood is:

e_τ^k = Σ_{i=m-p}^{m+q} ‖y_i^k - D_τ^k β_{i,τ}^k‖_2^2, τ = 1, …, N

The temporary class label of the current test sample under feature k is then obtained according to the principle that the sum of the representation errors of all the test samples in the azimuth neighborhood is minimum, i.e. l^k = argmin_τ e_τ^k.
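A sketch of this class-wise residual test; the atom-to-class bookkeeping through a class_index array is our assumption about how the dictionary columns are organized:

```python
import numpy as np

def temporary_label(D, Y, B, class_index, n_classes):
    """Temporary label for one feature: the class whose sub-dictionary
    reconstructs the whole azimuth neighborhood with the smallest total error."""
    errors = np.empty(n_classes)
    for tau in range(n_classes):
        mask = (class_index == tau)        # columns of D belonging to class tau
        R = Y - D[:, mask] @ B[mask, :]    # residuals over all neighborhood samples
        errors[tau] = np.sum(R ** 2)       # summed squared l2 representation error
    return int(np.argmin(errors))
```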
In specific implementation, the step (7) comprises the following steps:
Denote by l^k the temporary class label under feature k. The possibility that the central test sample belongs to the h-th class is p_h = (1/3) Σ_{k=1}^{3} δ(l^k, h), where δ(l^k, h) = 1 if l^k = h and 0 otherwise; the multi-feature voting criterion is ĥ = argmax_h p_h, and the result ĥ is taken as the final class label of the central test sample.
In order to fully utilize the discrimination information of the different features of the SAR target, the classification results of the multi-azimuth multi-feature collaborative representation classification model under the three features (PCA, wavelet and 2DS-ZMs) of the current test sample are fused. That is, for the current unclassified sample y_m, three temporary labels {l^k}_{k=1,2,3} are obtained through the multi-azimuth multi-feature collaborative representation classification model, one under each feature, and the final label is decided by the multi-feature voting criterion above.
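A minimal sketch of the voting step; the tie-breaking rule (falling back to the first feature's label when all three temporary labels differ) is our assumption, since the text does not specify one:

```python
from collections import Counter

def multi_feature_vote(temp_labels):
    """Majority vote over the temporary labels from the PCA, wavelet and
    2DS-ZMs branches (a list of three class labels)."""
    label, votes = Counter(temp_labels).most_common(1)[0]
    return label if votes > 1 else temp_labels[0]  # assumed tie-break
```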
the technical solution of the present invention will be further described below by way of examples.
Example:
In the experiment, the five methods PCACRC, WCRC, 2DSZMsCRC, MFCRC and MAMFCRC were compared experimentally under the same parameters and the same conditions. Here, the regularization parameter of each method is 0.1, each feature dimension is 200, and the azimuth neighborhood angle in MAMFCRC is 5°. The five methods are implemented as follows:
(1) PCACRC: extract the PCA feature data of the SAR training samples and test samples, and input them into the basic CRC framework to obtain the recognition result.
(2) WCRC: extract the wavelet feature data of the SAR training samples and test samples, and input them into the basic CRC framework to obtain the recognition result.
(3) 2DSZMsCRC: extract the 2DS-ZMs feature data of the SAR training samples and test samples, and input them into the basic CRC framework to obtain the recognition result.
(4) MFCRC: first, PCA, wavelet and 2DS-ZMs features are extracted from the SAR image. Then, the training samples and the current test sample under each feature are input to the basic CRC model to obtain a temporary output label. Finally, a voting decision over the three temporary labels gives the classification result of the current test sample.
(5) MAMFCRC: first, the three typical features PCA, wavelet and 2DS-ZMs are extracted from the SAR image. Then, according to the principle that adjacent azimuths of an SAR target image are strongly correlated, the basic collaborative representation model is extended into the multi-azimuth neighborhood collaborative representation model. For each feature, the multi-azimuth neighborhood test samples of the current test sample are input into the model to obtain the temporary output label corresponding to that feature. Finally, a voting decision over the multi-azimuth neighborhood temporary output labels of the three features gives the final recognition result of the current test sample.
The results of this experiment are shown in Table 2 and FIG. 4. FIG. 4 compares the average classification performance of the five methods on the same SAR target images, and Table 2 gives the specific data. From Table 2, under the same parameters, the recognition rate of the multi-feature collaborative representation algorithm (MFCRC) improves to 96.36% compared with the single-feature CRC algorithms. Moreover, as is apparent from Table 2 and FIG. 4, the accuracy of the MAMFCRC algorithm proposed by the invention is greatly improved compared with the other four methods, reaching 99.32%; the classification accuracy on the BTR70 and T72 class data even reaches 100%, which fully illustrates the superiority of MAMFCRC. In addition, as shown in FIGS. 5 and 6, the classification performance of the method of the invention changes little with the feature dimension and the regularization parameter, so the method has better stability than the other existing methods.
TABLE 1 Types and numbers of training and test samples
TABLE 2 Performance comparison of single-feature CRC (PCACRC, WCRC, 2DSZMsCRC), MFCRC and MAMFCRC algorithms
TABLE 3 Classification accuracy of MAMFCRC algorithm under different azimuth neighborhood angles
As can be seen from Table 3 and FIG. 7, the accuracy of the MAMFCRC algorithm is high for all the azimuth neighborhood angles tested, and the classification accuracy gradually increases as the neighborhood angle increases; the optimal neighborhood angle is 10°.
Table 4: data set for pitch angle experiment
TABLE 5 MAMFCRC algorithm Classification accuracy at different pitch angles
Using the data in Table 4 as the test set, the results in Table 5 are obtained: the accuracy of the method of the invention decreases as the depression angle increases, so the method is best suited to scenes with smaller depression angles.
Finally, it is noted that the above-mentioned embodiments illustrate rather than limit the invention, and that, while the application has been described with reference to preferred embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the application as defined by the appended claims.
Claims (2)
1. A SAR target recognition method based on multi-azimuth multi-feature collaborative representation is characterized by comprising the following steps:
(1) for multiple classes of known radar targets, respectively collecting SAR images of each known radar target at multiple azimuths as training samples, and extracting the PCA (principal component analysis) features, wavelet features and 2DS-ZMs features of each training sample, so that a training sample set X_{u×v} is formed from the PCA, wavelet and 2DS-ZMs features of the training samples of all classes, where u is the sample dimension and v is the total number of training samples in the training sample set; based on the training sample set X_{u×v}, constructing a feature dictionary {D_k}_{k=1,2,3}, where k is the feature type index; the 2DS-ZMs of an SAR image refer to the 2D-Slice Zernike moments feature vector of the SAR image;
(2) for the radar target to be tested, collecting SAR images of the radar target at multiple azimuths as test samples, and extracting the PCA features, wavelet features and 2DS-ZMs features of each test sample, so that a test sample set Y_{u×w} is formed from the PCA, wavelet and 2DS-ZMs features of the test samples, where w is the total number of test samples in the test sample set; any one test sample in Y_{u×w} can be represented as {y^k}_{k=1,2,3};
(3) taking one test sample in the test sample set as the central test sample y_m, where m is the index of the central test sample, and taking the central test sample together with the test samples in its azimuth neighborhood as the multi-azimuth neighborhood test sample set; the sector within η degrees adjacent to the central test sample is its azimuth neighborhood, the test samples whose azimuth angles fall within [c-η, c+η] are the test samples in that azimuth neighborhood, and c is the azimuth angle of the central test sample; the multi-azimuth neighborhood test sample set is expressed as {y_{m-p}, …, y_m, …, y_{m+q}}, where p is the number of test samples in the upper half azimuth sector of the central test sample and q is the number of test samples in the lower half azimuth sector;
(4) linearly representing each multi-azimuth neighborhood test sample in the multi-azimuth neighborhood test sample set, wherein each multi-azimuth neighborhood test sample is linearly represented as:

y_i^k = Σ_{τ=1}^{N} D_τ^k β_{i,τ}^k + ε^k

wherein D_τ^k denotes the τ-th class sub-dictionary under feature k, N is the total number of classes, β_{i,τ}^k denotes the representation coefficient sub-vector of the i-th test sample with respect to sub-dictionary D_τ^k, and ε^k denotes the random error under feature k;
(5) establishing a multi-azimuth multi-feature collaborative representation classification model of the test sample set by using each training sample in the training sample set and the linearly represented multi-azimuth neighborhood test samples, wherein the multi-azimuth multi-feature collaborative representation classification model is expressed as:

min_{B^k} ‖Y_m^k - D^k B^k‖_F^2 + θ‖B^k‖_F^2

wherein B^k = [β_{m-p}^k, …, β_m^k, …, β_{m+q}^k] is the set of coefficient sub-vectors, Y_m^k = [y_{m-p}^k, …, y_m^k, …, y_{m+q}^k], and θ is the multi-azimuth multi-feature collaborative representation regularization parameter;
(6) respectively calculating the temporary class labels under the PCA features, wavelet features and 2DS-ZMs features by using the multi-azimuth multi-feature collaborative representation classification model, specifically comprising the following steps:
1) solving the analytic solution of the multi-azimuth multi-feature collaborative representation classification model:

B^k = ((D^k)^T D^k + θI)^{-1} (D^k)^T Y_m^k

where I denotes the identity matrix;
2) based on the analytic solution of the multi-azimuth multi-feature collaborative representation classification model, solving the representation error set {e_τ^k}_{τ=1,…,N} of the test samples in the azimuth neighborhood of the central test sample, where

e_τ^k = Σ_{i=m-p}^{m+q} ‖y_i^k - D_τ^k β_{i,τ}^k‖_2^2;
(7) performing multi-feature voting on the temporary class labels of the PCA, wavelet and 2DS-ZMs features to obtain the final class label of the central test sample, thereby identifying the radar target to be tested.
2. The SAR target recognition method based on multi-azimuth multi-feature collaborative representation according to claim 1, wherein the step (7) comprises the steps of:
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201811148970.0A (CN109344767B) | 2018-09-29 | 2018-09-29 | SAR target identification method based on multi-azimuth multi-feature collaborative representation |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN109344767A | 2019-02-15 |
| CN109344767B | 2021-09-28 |
Family
ID=65307474
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201811148970.0A (CN109344767B, Active) | SAR target identification method based on multi-azimuth multi-feature collaborative representation | 2018-09-29 | 2018-09-29 |
Country Status (1)
| Country | Link |
|---|---|
| CN | CN109344767B |
Families Citing this family (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN111161128A | 2019-11-18 | 2020-05-15 | 田树耀 | Image transformation based on frequency domain direction filtering and application thereof in sparse decomposition |
Citations (9)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN103576164A | 2012-07-20 | 2014-02-12 | 上海莱凯数码科技有限公司 | High-resolution remote sensing image fusion method based on linear Bayesian estimation |
| CN104463193A | 2014-11-04 | 2015-03-25 | 西安电子科技大学 | Polarimetric SAR image classification method based on deep sparse ICA |
| CN104732224A | 2015-04-08 | 2015-06-24 | 重庆大学 | SAR target identification method based on two-dimensional Zernike moment feature sparse representation |
| CN104751173A | 2015-03-12 | 2015-07-01 | 西安电子科技大学 | Polarimetric SAR image classification method based on collaborative representation and deep learning |
| CN104881670A | 2015-05-20 | 2015-09-02 | 电子科技大学 | Rapid target extraction method for SAR azimuth estimation |
| CN106022383A | 2016-05-26 | 2016-10-12 | 重庆大学 | SAR target recognition method based on azimuth-dependent dynamic dictionary sparse representation |
| CN106096505A | 2016-05-28 | 2016-11-09 | 重庆大学 | SAR target identification method based on multi-scale feature collaborative representation |
| CN106485279A | 2016-10-13 | 2017-03-08 | 东南大学 | Image classification method based on Zernike moment network |
| CN108229551A | 2017-12-28 | 2018-06-29 | 湘潭大学 | Hyperspectral remote sensing image classification method based on compact dictionary sparse representation |
Family Cites Families (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US9898811B2 | 2015-05-08 | 2018-02-20 | Kla-Tencor Corporation | Method and system for defect classification |
Non-Patent Citations (2)
| Title |
|---|
| Xinzheng Zhang et al., "Fusion of Multifeature Low-Rank Representation for Synthetic Aperture Radar Target Configuration Recognition," IEEE Geoscience and Remote Sensing Letters, 2018-06-19, pp. 1402-1406 |
| Zhang Xinzheng et al., "SAR image target recognition based on multi-feature and multi-representation fusion" (基于多特征-多表示融合的SAR图像目标识别), Journal of Radars (雷达学报), Vol. 6, No. 5, October 2017, pp. 492-502 |
Also Published As
| Publication number | Publication date |
|---|---|
| CN109344767A | 2019-02-15 |
Legal Events
| Date | Code | Title |
|---|---|---|
| | PB01 | Publication |
| | SE01 | Entry into force of request for substantive examination |
| | GR01 | Patent grant |