CN116310401A - Cross-view SAR identification method based on monogenic feature joint sparse representation - Google Patents
Cross-view SAR identification method based on monogenic feature joint sparse representation
- Publication number
- CN116310401A CN116310401A CN202211629285.6A CN202211629285A CN116310401A CN 116310401 A CN116310401 A CN 116310401A CN 202211629285 A CN202211629285 A CN 202211629285A CN 116310401 A CN116310401 A CN 116310401A
- Authority
- CN
- China
- Prior art keywords
- sar
- view
- correlation
- polar coordinate
- original image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/52—Scale-space analysis, e.g. wavelet analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/513—Sparse representations
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/762—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
- G06V10/765—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects using rules for classification or partitioning the feature space
Abstract
The invention discloses a cross-view SAR identification method based on monogenic feature joint sparse representation. To mine the complementary information among multi-view SAR images, a multi-view feature fusion method is proposed to realize correlation prolongation, and an overcomplete dictionary learning objective function combining classification error and representation error is established on the basis of multi-view monogenic features. In the classifier learning process, a multi-task joint discriminative dictionary learning method is adopted: the dictionary and the optimal linear classifier parameters are learned jointly, yielding a dictionary with good representation and discrimination capability together with the corresponding classifier, so that cross-view SAR target recognition is realized. Experimental results on the MSTAR data set show that the method maintains a high recognition rate under cross-view conditions and effectively improves recognition robustness in different scenes.
Description
Technical Field
The invention relates to the field of SAR target recognition, in particular to a cross-view SAR identification method based on monogenic feature joint sparse representation.
Background
As an active coherent radar imaging system, synthetic aperture radar (Synthetic Aperture Radar, SAR) offers day-and-night, all-weather operation, high resolution, strong penetrability and other advantages, and is widely applied in civil and military fields such as resource exploration, ocean monitoring and target detection. With the wide application of SAR in various fields, the number and variety of acquired SAR images keep growing, and traditional SAR target recognition methods, limited by the imaging mechanism, gradually struggle to meet the requirement of accurately interpreting massive targets. SAR images of the same target from different view angles can provide more complete feature information for target identification and thereby improve recognition performance, but they also pose a series of new problems for current SAR target identification. Unlike optical images, SAR images mainly reflect the electromagnetic scattering characteristics of targets, and the target information they contain is difficult to understand intuitively; meanwhile, SAR images contain inherent speckle noise, geometric distortion and other phenomena and are very sensitive to observation parameters, which makes robust feature description and accurate identification of SAR targets more difficult.
Because of the imaging particularity of the SAR system, even a slight change in the observed azimuth angle of the same target produces SAR images with large differences, i.e., the intra-class difference becomes larger. For different observation targets at the same azimuth angle, the images show large similarity, i.e., the inter-class difference becomes smaller. As a result, conventional single-view SAR target recognition methods cannot meet the growing image recognition requirements, and new feature extraction and classification recognition flows need to be designed for specific multi-view scenes.
Disclosure of Invention
In order to solve the above technical problems, the invention provides a cross-view SAR identification method based on monogenic feature joint sparse representation, comprising the following steps:
Step one, extracting the monogenic feature vector of each SAR original image in an SAR original image data set, generating multi-scale monogenic signals of the SAR original image, resampling the multi-scale monogenic signals, and extracting the multi-scale monogenic polar coordinate mapping feature vectors of the data set in a polar coordinate system;
Step two, preprocessing by utilizing the correlation among the SAR original images in the data set, and constructing view-angle subsets of SAR correlation images according to a clustering algorithm;
Step three, extracting the monogenic polar coordinate mapping feature vectors of the SAR correlation images in the view-angle subsets obtained in step two, inputting them into a multi-task joint sparse representation classifier for training, and storing the trained classifier; then selecting any SAR image from the SAR original image data set and inputting it into the trained classifier to obtain the recognition result.
Further, in the first step, a Log-Gabor filter is adopted to extract the monogenic feature vector of the SAR original image:
f_M(z) = (h_LG(z) * f(z)) - j · h_R(z) * (h_LG(z) * f(z))
wherein f_M(z) is the monogenic signal, f(z) is the SAR original image, h_LG(z) is the Log-Gabor kernel, h_R(z) is the Riesz transform kernel, j is the imaginary unit, and * denotes convolution. Adjusting the scale of the Log-Gabor filter generates monogenic signals of the SAR original image at different scales, denoted {f_M^(k), k = 1, ..., S}, wherein S is the number of scales and f_M^(k) is the monogenic signal at the k-th scale. The components of the monogenic signals at the different scales are expressed as:
A_k = sqrt(f_k^2 + f_{R1,k}^2 + f_{R2,k}^2), φ_k = atan2(sqrt(f_{R1,k}^2 + f_{R2,k}^2), f_k), θ_k = arctan(f_{R2,k} / f_{R1,k})
wherein f_k is the band-pass real part and f_{R1,k}, f_{R2,k} are the two Riesz components of the k-th scale monogenic signal; A_k represents its local amplitude, φ_k its local phase, and θ_k its local orientation (k = 1, ..., S).
The components A_k, φ_k and θ_k of the two-dimensional multi-scale monogenic signal in the rectangular coordinate system are resampled to generate the polar coordinate images A_k(d, α), φ_k(d, α) and θ_k(d, α); the two orthogonal coordinate axes of the polar coordinate system are the radius d and the angle α. An SAR original image is designated, the region containing the target is first segmented, and the target centroid (x_c, y_c) is then taken as the origin of the polar mapping:
x_c = (1/M) Σ_p x_p,  y_c = (1/M) Σ_q y_q
wherein M is the number of sampling points and (x_p, y_q) are the pixel coordinates of the SAR original image. The principal component analysis (PCA) algorithm is used to reduce the dimensionality of the monogenic features of the SAR original image and generate the corresponding monogenic polar coordinate mapping feature vectors. The monogenic polar coordinate mapping feature vector obtained after the dimensionality reduction is expressed as:
m_k = [vec(PCA(A_k(d, α))); vec(PCA(φ_k(d, α))); vec(PCA(θ_k(d, α)))]
wherein d and α denote the radius and angle axes of the polar coordinate system, vec(·) denotes the matrix vectorization operation, PCA(·) denotes the PCA processing of a feature matrix, A_k(d, α), φ_k(d, α) and θ_k(d, α) are the polar coordinate images generated from the monogenic components, and m_k is the monogenic polar coordinate mapping feature vector of the k-th scale.
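The multi-scale monogenic decomposition described above can be sketched as follows. This is an illustrative NumPy sketch, not the claimed implementation; the wavelength set and the Log-Gabor bandwidth parameter 0.55 are assumed values that do not appear in the patent.

```python
import numpy as np

def monogenic_features(img, wavelengths=(4, 8, 16)):
    """Multi-scale monogenic decomposition of a 2-D image.

    For each scale, band-pass the image with a Log-Gabor filter and apply
    the Riesz transform in the frequency domain; return the local
    amplitude A_k, phase phi_k and orientation theta_k per scale.
    """
    rows, cols = img.shape
    F = np.fft.fft2(img)
    u = np.fft.fftfreq(cols)
    v = np.fft.fftfreq(rows)
    U, V = np.meshgrid(u, v)                 # frequency grids, shape (rows, cols)
    radius = np.sqrt(U**2 + V**2)
    radius[0, 0] = 1.0                       # avoid log(0)/division by zero at DC
    H1 = 1j * U / radius                     # Riesz kernels: j*omega / |omega|
    H2 = 1j * V / radius
    feats = []
    for wl in wavelengths:
        f0 = 1.0 / wl                        # centre frequency of this scale
        sigma = 0.55                         # assumed Log-Gabor bandwidth ratio
        log_gabor = np.exp(-(np.log(radius / f0))**2 / (2 * np.log(sigma)**2))
        log_gabor[0, 0] = 0.0                # zero DC response
        f_band = np.real(np.fft.ifft2(F * log_gabor))       # band-pass part f_k
        r1 = np.real(np.fft.ifft2(F * log_gabor * H1))      # Riesz component 1
        r2 = np.real(np.fft.ifft2(F * log_gabor * H2))      # Riesz component 2
        A = np.sqrt(f_band**2 + r1**2 + r2**2)              # local amplitude A_k
        phase = np.arctan2(np.sqrt(r1**2 + r2**2), f_band)  # local phase phi_k
        theta = np.arctan2(r2, r1)                          # local orientation theta_k
        feats.append((A, phase, theta))
    return feats
```

Each returned triple corresponds to one scale k; the three component images are what the method subsequently resamples onto the polar grid.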
Further, the specific steps of the second step are as follows.
In the first step, the image correlation coefficient r is taken as the basic criterion for evaluating the correlation between images of different view angles:
r = max_{m,l} Σ_x Σ_y [f_1(x, y) - f̄_1][f_2(x - m, y - l) - f̄_2] / sqrt(Σ_x Σ_y [f_1(x, y) - f̄_1]^2 · Σ_x Σ_y [f_2(x - m, y - l) - f̄_2]^2)
wherein f_1(x, y) and f_2(x, y) denote the two SAR original images, f̄_1 and f̄_2 denote their respective clutter mean values, m and l denote the offsets of the correlation operation, and f_2(x - m, y - l) denotes the translation operation.
The pixel values of the two SAR original images are normalized and the target regions are segmented, the clutter mean values are calculated, the two segmented images are zero-padded and Fourier transformed, the spectrum of f_2(x, y) is conjugated, and the spectra of the two SAR original images are then multiplied to obtain the correlation coefficient r.
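The frequency-domain computation of r can be sketched as follows (an illustrative NumPy sketch; the global normalization shown is one common choice and the patent's exact normalization may differ):

```python
import numpy as np

def max_correlation(f1, f2):
    """Peak normalized cross-correlation between two equal-size images.

    Zero-pad both images, multiply the spectrum of the first by the
    conjugate of the spectrum of the second (correlation theorem), and
    search the inverse transform over all translations (m, l).
    """
    a = f1 - f1.mean()                       # subtract clutter mean values
    b = f2 - f2.mean()
    rows = f1.shape[0] + f2.shape[0] - 1     # zero-pad to avoid circular wrap
    cols = f1.shape[1] + f2.shape[1] - 1
    Fa = np.fft.fft2(a, s=(rows, cols))
    Fb = np.fft.fft2(b, s=(rows, cols))
    cc = np.real(np.fft.ifft2(Fa * np.conj(Fb)))   # correlation over all shifts
    denom = np.sqrt((a**2).sum() * (b**2).sum())
    return cc.max() / denom
```

Two identical images give r = 1, and r decreases as the view angles move apart.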
In the second step, the SAR original image data set contains n different view angles; the correlation coefficients between the SAR original images of the n different view angles are calculated respectively, and the correlation coefficient matrix of the n view-angle images of the same target is obtained as:
R = [r_ij] (i, j = 1, ..., n), with r_ii = 1 on the diagonal
In the third step, clustering calculation is performed on the multi-view SAR original images. Taking N multi-view SAR original images, the data set I = {I_1, I_2, ..., I_N} is constructed, with a correlation threshold T and an initial iteration number t = 1:
Step a, calculate the correlation coefficient matrix of the SAR original images of the different view angles;
Step b, select the first unclustered SAR original image I_1 as the initial clustering center, with the view-subset sequence S_t = {1}; for every remaining image I_i whose correlation coefficient with the center exceeds T, perform S_t = S_t ∪ {i};
Step c, obtain the group of view-subset sequences and update the data set: I = I \ S_t, t = t + 1;
Step d, repeat steps a–c until all SAR original images are clustered,
wherein ∪ denotes the union operation and \ denotes the set-difference operation.
Finally, the view-angle subsets of the SAR correlation images are obtained.
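Steps a–d above amount to a greedy pass over the correlation matrix, which can be sketched as follows (illustrative; function and variable names are assumptions):

```python
import numpy as np

def cluster_views(R, T):
    """Greedy view-subset clustering on a correlation coefficient matrix R.

    Repeatedly take the first unclustered image as the cluster centre and
    group with it every remaining image whose correlation exceeds the
    threshold T; remove the subset (I = I \\ S_t) and iterate.
    """
    remaining = list(range(R.shape[0]))
    subsets = []
    while remaining:
        centre = remaining[0]                       # initial clustering centre
        S = [i for i in remaining if i == centre or R[centre, i] > T]
        subsets.append(S)
        remaining = [i for i in remaining if i not in S]   # set difference
    return subsets
```

Every image ends up in exactly one subset, and within a subset every image correlates with its centre above T.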
Further, the specific steps of the third step are as follows.
In the view-angle subsets obtained in the second step, the monogenic polar coordinate mapping feature vectors are extracted from the SAR correlation images of the different view-angle subsets. With R view-angle subsets, the training sample set is expressed as:
X = (X_1, X_2, ..., X_R)
wherein X_g denotes the monogenic polar coordinate mapping feature vectors extracted from the g-th view-angle subset.
In the classifier training stage, a linear representation model is established for the SAR correlation image: y_g = X_g α_g + δ_g, wherein y_g is the monogenic polar coordinate mapping feature vector of the training sample, X_g is the monogenic polar coordinate mapping feature matrix of the SAR correlation images, α_g is the sparse representation coefficient vector, and δ_g is noise bounded in the g-th task by ||δ_g||_2 ≤ ε, ε denoting the noise margin. A least-squares fitting algorithm is used to minimize the following objective function and solve the sparse representation coefficients:
min_A Σ_{g=1}^{R} ||y_g - X_g α_g||_2^2 + λ ||A||_{2,1}
wherein λ is the Lagrangian multiplier and A = [α_1, α_2, ..., α_R] is the sparse representation coefficient matrix to be optimized. Joint training and joint learning on the SAR correlation images make the monogenic polar coordinate mapping feature vectors extracted from each view-angle subset share a common sparsity pattern, yielding the trained multi-task joint sparse representation classifier.
In the test stage, the monogenic polar coordinate mapping feature vectors of the input samples in the SAR original image data set are extracted, the kernelized feature vectors of the multiple tasks are generated through kernel-space mapping, the joint sparse representation coefficients of the multiple tasks are solved under the L2,1-norm constraint, and recognition and classification use the weighted minimum global reconstruction error over the tasks. The global reconstruction errors of the class-wise dictionaries on the test sample are measured to obtain the label of the test sample:
class(y) = argmin_c Σ_{g=1}^{R} w_g ||y_g - X_g^(c) α_g^(c)||_2
wherein X_g^(c) and α_g^(c) denote the sub-dictionary and coefficient vector of class c in task g, and w_g is the task weight.
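The L2,1-regularized objective above can be minimized, for example, by proximal gradient descent with row-wise soft thresholding. This is an illustrative sketch: the patent does not specify its least-squares solver, so the algorithm choice, step size and iteration count here are assumptions.

```python
import numpy as np

def joint_sparse_code(Xs, ys, lam=0.1, n_iter=200):
    """Multi-task joint sparse coding by proximal gradient descent.

    Minimises  sum_g ||y_g - X_g a_g||_2^2 + lam * ||A||_{2,1},
    where column g of A is the coefficient vector of task g. The row-wise
    L2,1 penalty forces all tasks to share a common sparse support.
    """
    n_atoms = Xs[0].shape[1]
    G = len(Xs)
    A = np.zeros((n_atoms, G))
    # Lipschitz bound on the gradient: 2 * max_g ||X_g||_2^2
    L = 2 * max(np.linalg.norm(X, 2) ** 2 for X in Xs)
    for _ in range(n_iter):
        grad = np.column_stack([2 * Xs[g].T @ (Xs[g] @ A[:, g] - ys[g])
                                for g in range(G)])
        B = A - grad / L
        # Proximal operator of (lam/L) * ||.||_{2,1}: shrink each row.
        norms = np.linalg.norm(B, axis=1, keepdims=True)
        shrink = np.maximum(1 - (lam / L) / np.maximum(norms, 1e-12), 0)
        A = B * shrink
    return A
```

Rows of the returned matrix that survive the shrinkage mark the dictionary atoms jointly selected by all view-angle tasks.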
Furthermore, the invention also provides a computer-readable storage medium storing a computer program, wherein the computer program causes a computer to execute the cross-view SAR identification method based on monogenic feature joint sparse representation.
Compared with the prior art, the invention has the advantages that:
the SAR target recognition method based on the correlation continuation idea realizes the SAR target recognition method under the cross-view condition. In order to ensure the correlation between images, the independence and the correlation between the images are analyzed by adopting a correlation clustering method, and the view angle subsets are divided. And then, extracting the single-performance characteristics from each view angle subset respectively, wherein the extracted single-performance characteristics not only contain rich classification and identification information, but also improve the robustness of characteristic representation in azimuth angle change. And finally, researching the inherent correlation of each group of SAR images based on the thought of multi-task joint sparse representation, and obtaining a reconstruction error through linear weighted fusion so as to more accurately predict the target category.
Drawings
FIG. 1 is a schematic diagram of the present invention;
FIG. 2 is a schematic diagram of a sparse representation classification algorithm;
fig. 3 is a schematic diagram of the construction of a view-incomplete dataset with a view interval of 30 ° and a loss rate of 0.5.
Detailed Description
The present invention is described in further detail below with reference to the accompanying drawings.
Fig. 1 shows a flow chart of the present invention:
A cross-view SAR target recognition system based on monogenic feature joint sparse representation is constructed based on the idea of correlation prolongation. The method comprises the following steps:
1. resampling the SAR original image dataset, and extracting multi-scale monogenic features under a polar coordinate system.
A Log-Gabor filter is adopted to extract the monogenic feature vector of the SAR image:
f_M(z) = (h_LG(z) * f(z)) - j · h_R(z) * (h_LG(z) * f(z))
wherein f_M(z) is the monogenic signal, f(z) is the SAR original image, h_LG(z) is the Log-Gabor kernel, h_R(z) is the Riesz transform kernel, j is the imaginary unit, and * denotes convolution. Adjusting the scale of the Log-Gabor filter generates monogenic signals of the SAR original image at different scales, denoted {f_M^(k), k = 1, ..., S}, wherein S is the number of scales and f_M^(k) is the monogenic signal at the k-th scale. The components of the monogenic signals at the different scales are expressed as:
A_k = sqrt(f_k^2 + f_{R1,k}^2 + f_{R2,k}^2), φ_k = atan2(sqrt(f_{R1,k}^2 + f_{R2,k}^2), f_k), θ_k = arctan(f_{R2,k} / f_{R1,k})
wherein f_k is the band-pass real part and f_{R1,k}, f_{R2,k} are the two Riesz components of the k-th scale monogenic signal; A_k represents its local amplitude, φ_k its local phase, and θ_k its local orientation (k = 1, ..., S).
The components A_k, φ_k and θ_k of the two-dimensional multi-scale monogenic signal in the rectangular coordinate system are resampled to generate the polar coordinate images A_k(d, α), φ_k(d, α) and θ_k(d, α); the two orthogonal coordinate axes of the polar coordinate system are the radius d and the angle α. An SAR original image is designated, the region containing the target is first segmented, and the target centroid (x_c, y_c) is then taken as the origin of the polar mapping:
x_c = (1/M) Σ_p x_p,  y_c = (1/M) Σ_q y_q
wherein M is the number of sampling points and (x_p, y_q) are the pixel coordinates of the SAR original image. The multi-scale monogenic components of the polar coordinate mapping are high-dimensional and highly redundant, and applying them to recognition directly causes problems such as the curse of dimensionality. The classical principal component analysis (Principal Component Analysis, PCA) algorithm is therefore used to reduce the dimensionality of the polar-mapping coefficient matrices of the SAR image, and the corresponding monogenic polar coordinate mapping feature vectors are then generated for subsequent feature fusion and classification. The monogenic polar coordinate mapping feature vector obtained after the dimensionality reduction can be expressed as:
m_k = [vec(PCA(A_k(d, α))); vec(PCA(φ_k(d, α))); vec(PCA(θ_k(d, α)))]
wherein d and α denote the radius and angle axes of the polar coordinate system, vec(·) denotes the matrix vectorization operation, PCA(·) denotes the PCA processing of a feature matrix, A_k(d, α), φ_k(d, α) and θ_k(d, α) are the polar coordinate images generated from the monogenic components, and m_k is the monogenic polar coordinate mapping feature vector of the k-th scale.
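The polar resampling and PCA reduction described above can be sketched as follows (illustrative; nearest-neighbour sampling, the grid sizes and the function names are assumptions, and the centroid is taken as (x, y) with x the column coordinate):

```python
import numpy as np

def polar_map(component, centroid, n_radius=32, n_angle=32):
    """Resample one monogenic component from Cartesian (x, y) onto a polar
    grid (d, alpha) centred at the target centroid (nearest neighbour)."""
    xc, yc = centroid                        # x = column, y = row
    rows, cols = component.shape
    d_max = min(xc, yc, rows - 1 - yc, cols - 1 - xc)   # stay inside image
    d = np.linspace(0, d_max, n_radius)
    alpha = np.linspace(0, 2 * np.pi, n_angle, endpoint=False)
    D, Al = np.meshgrid(d, alpha, indexing="ij")
    x = np.clip(np.round(xc + D * np.cos(Al)).astype(int), 0, cols - 1)
    y = np.clip(np.round(yc + D * np.sin(Al)).astype(int), 0, rows - 1)
    return component[y, x]                   # shape (n_radius, n_angle)

def pca_reduce(vectors, k):
    """Project stacked feature vectors (one per row) onto the top-k
    principal axes via SVD of the mean-centred data matrix."""
    X = vectors - vectors.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:k].T
```

Each polar image is vectorized and reduced per component; concatenating the three reduced components gives the feature vector m_k of one scale.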
2. In order to fully utilize the correlation between SAR images, the data set is preprocessed and the view-angle subsets are constructed according to the clustering idea. The specific steps are as follows.
The image correlation coefficient r is taken as the basic criterion for evaluating the correlation between images of different view angles:
r = max_{m,l} Σ_x Σ_y [f_1(x, y) - f̄_1][f_2(x - m, y - l) - f̄_2] / sqrt(Σ_x Σ_y [f_1(x, y) - f̄_1]^2 · Σ_x Σ_y [f_2(x - m, y - l) - f̄_2]^2)
wherein f_1(x, y) and f_2(x, y) denote the two SAR original images, f̄_1 and f̄_2 denote their respective clutter mean values, m and l denote the offsets of the correlation operation, and f_2(x - m, y - l) denotes the translation operation.
The pixel values of the two SAR original images are normalized and the target regions are segmented, the clutter mean values are calculated, the two segmented images are zero-padded and Fourier transformed, the spectrum of f_2(x, y) is conjugated, and the spectra of the two SAR original images are then multiplied to obtain the correlation coefficient r.
The SAR original image data set contains n different view angles; the correlation coefficients between the SAR original images of the n different view angles are calculated respectively, and the correlation coefficient matrix of the n view-angle images of the same target is obtained as:
R = [r_ij] (i, j = 1, ..., n), with r_ii = 1 on the diagonal
After the correlation coefficient matrix is obtained, clustering calculation is performed on the multi-view SAR original images according to the following algorithm. Taking N multi-view SAR original images, the data set I = {I_1, I_2, ..., I_N} is constructed, with a correlation threshold T and an initial iteration number t = 1:
Step a, calculate the correlation coefficient matrix of the SAR original images of the different view angles;
Step b, select the first unclustered SAR original image I_1 as the initial clustering center, with the view-subset sequence S_t = {1}; for every remaining image I_i whose correlation coefficient with the center exceeds T, perform S_t = S_t ∪ {i};
Step c, obtain the group of view-subset sequences and update the data set: I = I \ S_t, t = t + 1;
Step d, repeat steps a–c until all SAR original images are clustered,
wherein ∪ denotes the union operation and \ denotes the set-difference operation. Performing this correlation-clustering preprocessing on each view-angle subset yields strong intra-subset correlation: from the perspective of quantitative analysis, the correlation coefficients of the images in each obtained view-angle subset are larger than the preset threshold. Finally, the view-angle subsets of the SAR correlation images are obtained.
3. In the view-angle subsets obtained in step 2, the monogenic polar coordinate mapping feature vectors are extracted from the SAR correlation images of the different view-angle subsets. With R view-angle subsets, the training sample set is expressed as:
X = (X_1, X_2, ..., X_R)
wherein X_g denotes the monogenic polar coordinate mapping feature vectors extracted from the g-th view-angle subset.
In the classifier training stage, a linear representation model is established for the SAR correlation image: y_g = X_g α_g + δ_g, wherein y_g is the monogenic polar coordinate mapping feature vector of the training sample, X_g is the monogenic polar coordinate mapping feature matrix of the SAR correlation images, α_g is the sparse representation coefficient vector, and δ_g is noise bounded in the g-th task by ||δ_g||_2 ≤ ε, ε denoting the noise margin. A least-squares fitting algorithm is used to minimize the following objective function and solve the sparse representation coefficients:
min_A Σ_{g=1}^{R} ||y_g - X_g α_g||_2^2 + λ ||A||_{2,1}
wherein λ is the Lagrangian multiplier and A = [α_1, α_2, ..., α_R] is the sparse representation coefficient matrix to be optimized. Joint training and joint learning on the SAR correlation images make the monogenic polar coordinate mapping feature vectors extracted from each view-angle subset share a common sparsity pattern, yielding the trained multi-task joint sparse representation classifier.
In the test stage, based on the idea of multi-task joint sparse representation, the monogenic polar coordinate mapping feature vectors of the input samples in the SAR original image data set are extracted, the kernelized feature vectors of the multiple tasks are generated, the joint sparse representation coefficients of the multiple tasks are solved under the L2,1-norm constraint, and recognition and classification use the weighted minimum global reconstruction error over the tasks. The global reconstruction errors of the class-wise dictionaries on the test sample are measured to obtain the label of the test sample:
class(y) = argmin_c Σ_{g=1}^{R} w_g ||y_g - X_g^(c) α_g^(c)||_2
wherein X_g^(c) and α_g^(c) denote the sub-dictionary and coefficient vector of class c in task g, and w_g is the task weight.
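The minimum weighted reconstruction error decision rule above can be sketched as follows (illustrative; uniform task weights w_g are an assumption, and the per-class coefficients are taken as given, e.g. from a joint sparse coding step):

```python
import numpy as np

def classify(Xs_by_class, ys, alphas_by_class, weights=None):
    """Assign the test sample to the class whose sub-dictionaries give the
    smallest weighted sum of reconstruction errors over the G tasks.

    Xs_by_class[c][g]     : sub-dictionary of class c in task g
    ys[g]                 : test feature vector of task g
    alphas_by_class[c][g] : coefficient vector of class c in task g
    """
    G = len(ys)
    if weights is None:
        weights = np.ones(G) / G             # assumed uniform task weights
    errors = {}
    for c, (Xs, alphas) in enumerate(zip(Xs_by_class, alphas_by_class)):
        errors[c] = sum(weights[g] * np.linalg.norm(ys[g] - Xs[g] @ alphas[g])
                        for g in range(G))
    return min(errors, key=errors.get)       # argmin over classes
```

The class whose atoms best reconstruct the sample across all view-angle tasks wins, which is the standard sparse-representation classification rule extended to multiple tasks.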
the technical effects of the invention are further illustrated by the following specific examples and related experimental parameters:
the sparse representation classifier (Sparse Representation Classifier, SRC) is one of the most commonly used classifiers in the current SAR image target recognition, does not need a complex training process, and has certain robustness to certain interference conditions. The sparse representation has the advantage that efficient data compression can be achieved, and more importantly, intrinsic characteristics of the signals can be captured by utilizing the redundancy characteristics of the dictionary. A specific schematic diagram of sparse representation classification is shown in fig. 2.
In the experiment, the training set with a pitch angle of 15 degrees is first clustered and preprocessed, and then input into the subsequent multi-task joint sparse representation classifier (Multi-task Joint Sparse Representation Classifier, MJSRC) for offline training; the images used for testing are randomly selected from the complete data set with a pitch angle of 17 degrees. The construction of the incomplete-view data set with a loss rate of 0.5 and a view-angle interval of 30° is shown in fig. 3. The multi-scale monogenic feature dimension extracted in the experiment is 4096 and the clustering threshold is 0.5; all experimental parameters are listed in table 1.
TABLE 1
Parameter | Value
Clustering algorithm threshold | 0.5
Data set miss rate | 20%
Target classes | 10
Total number of data set images | 1914
Monogenic feature dimension | 4096
Classifier training time | 24.5 s
Table 2 compares the recognition rates of the method of the invention with five other mainstream recognition methods. The experimental results show that, under cross-view conditions, the proposed method achieves the highest recognition accuracy on the ten target classes. The method extracts more stable features and trains them effectively, so that the SAR target recognition task in cross-view scenes can be realized.
TABLE 2
The foregoing is only a preferred embodiment of the invention, it being noted that: it will be apparent to those skilled in the art that various modifications and adaptations can be made without departing from the principles of the present invention, and such modifications and adaptations are intended to be comprehended within the scope of the invention.
Claims (5)
1. A cross-view SAR identification method based on monogenic feature joint sparse representation, characterized by comprising the following steps:
step one, extracting the monogenic feature vector of each SAR original image in an SAR original image data set, generating multi-scale monogenic signals of the SAR original image, resampling the multi-scale monogenic signals, and extracting the multi-scale monogenic polar coordinate mapping feature vectors of the data set in a polar coordinate system;
step two, preprocessing by utilizing the correlation among the SAR original images in the data set, and constructing view-angle subsets of SAR correlation images according to a clustering algorithm;
step three, extracting the monogenic polar coordinate mapping feature vectors of the SAR correlation images in the view-angle subsets obtained in step two, inputting them into a multi-task joint sparse representation classifier for training, and storing the trained classifier; selecting any SAR image from the SAR original image data set and inputting it into the trained classifier to obtain the recognition result.
2. The cross-view SAR identification method based on monogenic feature joint sparse representation according to claim 1, wherein in the first step, a Log-Gabor filter is adopted to extract the monogenic feature vector of the SAR original image:

f_M(z) = (h_LG(z)*f(z)) - j(h_R(z)*h_LG(z)*f(z))

wherein f_M(z) is the monogenic feature vector, f(z) is the SAR original image, h_LG(z) is the Log-Gabor kernel, h_R(z) is the Riesz transform kernel, j is the imaginary unit, and * denotes two-dimensional convolution; the scale of the Log-Gabor filter is adjusted to generate monogenic signals of the SAR original image at different scales, expressed as {f_M^(k)}, k = 1,...,S, wherein S is the number of scales and f_M^(k) denotes the monogenic signal at the kth scale; the components of the monogenic signals at the different scales are expressed as:

A_k = sqrt(f_k^2 + f_x,k^2 + f_y,k^2), φ_k = arctan2(sqrt(f_x,k^2 + f_y,k^2), f_k), θ_k = arctan2(f_y,k, f_x,k)

wherein f_k is the band-passed SAR image at the kth scale, (f_x,k, f_y,k) is its Riesz transform, A_k represents the local amplitude of the kth-scale monogenic signal, φ_k represents the local phase of the kth-scale monogenic signal, and θ_k represents the local orientation of the kth-scale monogenic signal, (k = 1,...,S),

the components A_k, φ_k and θ_k of the two-dimensional multi-scale monogenic signal in the rectangular coordinate system are resampled to generate polar coordinate images A_k^P, φ_k^P and θ_k^P; the two orthogonal coordinate axes of the polar coordinate system are the radius d and the angle α; an SAR original image is arbitrarily designated, the region containing the target is first segmented, and the centroid (x_c, y_c) of the target region is then taken as the origin of the polar mapping:

x_c = (1/M) Σ_p x_p, y_c = (1/M) Σ_q y_q

wherein M is the number of sampling points and (x_p, y_q) are the pixel coordinates of the SAR original image; the principal component analysis (PCA) algorithm is used to reduce the dimensionality of the monogenic feature vectors of the SAR original image, and the corresponding monogenic polar coordinate mapping feature vectors obtained after the dimensionality-reduction operation are expressed as:

η_A^(k) = PCA(vec(A_k^P(d,α))), η_φ^(k) = PCA(vec(φ_k^P(d,α))), η_θ^(k) = PCA(vec(θ_k^P(d,α)))

wherein d represents the radius axis and α the angle axis of the polar coordinate system, vec(·) represents the matrix vectorization operation, PCA(·) represents PCA processing of a feature vector, A_k^P(d,α), φ_k^P(d,α) and θ_k^P(d,α) represent the polar coordinate images generated from the monogenic components, and η_A^(k), η_φ^(k) and η_θ^(k) represent the calculated monogenic polar coordinate mapping feature vectors of the kth scale.
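As an illustration of the feature-extraction chain in claim 2, the sketch below computes the three monogenic component maps with a radial Log-Gabor filter and the Riesz transform, then resamples one map onto a polar (d, α) grid. The function names, filter parameters, Riesz-kernel convention, intensity-weighted centroid (a stand-in for the segmented-target centroid), and nearest-neighbour interpolation are assumptions for the sketch; the claim does not fix them, and the PCA dimensionality-reduction step is omitted.

```python
import numpy as np

def monogenic_components(img, wavelength=8.0, sigma_ratio=0.55):
    """Return local amplitude A, phase phi, and orientation theta of the
    monogenic signal of `img`, band-passed by a radial Log-Gabor filter."""
    rows, cols = img.shape
    u = np.fft.fftfreq(cols)
    v = np.fft.fftfreq(rows)
    U, V = np.meshgrid(u, v)                # frequency grids, shape (rows, cols)
    radius = np.sqrt(U**2 + V**2)
    radius[0, 0] = 1.0                      # avoid log(0) / division by zero at DC

    f0 = 1.0 / wavelength                   # centre frequency of the Log-Gabor band
    log_gabor = np.exp(-(np.log(radius / f0)**2) / (2 * np.log(sigma_ratio)**2))
    log_gabor[0, 0] = 0.0                   # zero DC response

    H = (1j * U - V) / radius               # both Riesz kernels packed into one complex response
    F = np.fft.fft2(img)
    f_band = np.real(np.fft.ifft2(F * log_gabor))   # even (band-passed) part f_k
    riesz = np.fft.ifft2(F * log_gabor * H)
    f_x, f_y = np.real(riesz), np.imag(riesz)       # odd (Riesz) parts

    amplitude = np.sqrt(f_band**2 + f_x**2 + f_y**2)          # A_k
    phase = np.arctan2(np.sqrt(f_x**2 + f_y**2), f_band)      # phi_k, in [0, pi]
    orientation = np.arctan2(f_y, f_x)                        # theta_k
    return amplitude, phase, orientation

def polar_resample(component, n_r=32, n_theta=32):
    """Resample a component map onto a polar (d, alpha) grid whose origin
    is an intensity-weighted centroid; nearest-neighbour sampling."""
    rows, cols = component.shape
    ys, xs = np.mgrid[0:rows, 0:cols]
    w = np.abs(component) + 1e-12
    xc = (w * xs).sum() / w.sum()           # centroid (x_c, y_c)
    yc = (w * ys).sum() / w.sum()
    d = np.linspace(0.0, min(rows, cols) / 2.0 - 1.0, n_r)
    alpha = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
    D, AL = np.meshgrid(d, alpha, indexing="ij")
    px = np.clip(np.round(xc + D * np.cos(AL)).astype(int), 0, cols - 1)
    py = np.clip(np.round(yc + D * np.sin(AL)).astype(int), 0, rows - 1)
    return component[py, px]
```

Running the three component maps of each scale through `polar_resample`, vectorizing, and applying PCA would then yield the monogenic polar coordinate mapping feature vectors of the claim.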
3. The cross-view SAR identification method based on monogenic feature joint sparse representation according to claim 2, wherein the second step specifically comprises the following steps:
in the first step, the image correlation coefficient r is taken as the basic criterion for evaluating the correlation between images of different viewing angles, and its expression is as follows:

r = max_{m,l} Σ_x Σ_y [f_1(x,y) − f̄_1][f_2(x−m,y−l) − f̄_2] / sqrt( Σ_x Σ_y [f_1(x,y) − f̄_1]^2 · Σ_x Σ_y [f_2(x,y) − f̄_2]^2 )

wherein f_1(x,y) and f_2(x,y) represent the two SAR original images, f̄_1 and f̄_2 respectively represent the clutter means of the two SAR original images, m and l represent the offsets of the correlation operation, and f_2(x−m,y−l) represents the translation operation,

the pixel points of the two SAR original images are normalized and segmented and the clutter means are calculated; the two segmented SAR original images are respectively zero-padded and Fourier transformed; the spectrum of f_2(x,y) is conjugated, the spectra of the two SAR original images are then multiplied, and the correlation coefficient r is obtained from the result,
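The frequency-domain correlation of this step might be sketched as follows; the function name, the doubled padding size, and the use of the correlation peak are assumptions of the sketch, for equally sized single-channel images.

```python
import numpy as np

def correlation_coefficient(f1, f2):
    """Peak normalized cross-correlation between two equally sized images,
    computed in the frequency domain: remove the clutter means, zero-pad,
    multiply one spectrum by the conjugate of the other, and normalize."""
    a = f1 - f1.mean()                       # subtract clutter mean
    b = f2 - f2.mean()
    rows, cols = a.shape
    shape = (2 * rows, 2 * cols)             # zero-pad to avoid circular wrap-around
    Fa = np.fft.fft2(a, shape)
    Fb = np.fft.fft2(b, shape)
    cross = np.real(np.fft.ifft2(Fa * np.conj(Fb)))  # correlation over all offsets (m, l)
    norm = np.sqrt((a**2).sum() * (b**2).sum())
    return cross.max() / norm if norm > 0 else 0.0
```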
in the second step, the SAR original image dataset comprises n different viewing angles; the correlation coefficients between the SAR original images of the n different viewing angles are calculated respectively, and the correlation coefficient matrix of the SAR original image dataset containing n different viewing angles of the same target is obtained as:

R = [r_ij], i, j = 1,...,n

wherein r_ij denotes the correlation coefficient between the SAR original images of the ith and jth viewing angles, r_ii = 1, and R is an n×n symmetric matrix,
in the third step, clustering is performed on the multi-view SAR original images; N multi-view SAR original images are taken to construct the dataset I = {I_1, I_2, ..., I_N}, the correlation threshold is T, and the initial iteration number is t = 1,

step a, calculating the correlation coefficient matrices of the SAR original images of the different viewing angles,

step b, selecting the first unassigned SAR original image I_1 as the initial clustering center and initializing the view subset sequence as S_t = {1}, then performing the following operation: for each remaining image I_i, if the correlation coefficient between I_i and the clustering center exceeds the threshold T, then S_t = S_t ∪ {i},

step c, obtaining a group of view subset sequences and updating the dataset: I = I\S_t, t = t + 1,
step d, repeating steps a to c until all SAR original images are clustered,

wherein ∪ represents the set union operation and \ represents the set difference operation,

finally, the view subsets of the SAR correlation images are obtained.
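The greedy clustering of steps a–d can be sketched as below; `corr` is a simple pixel-domain stand-in for the frequency-domain correlation coefficient of the first step, and the function names and default threshold are illustrative assumptions.

```python
import numpy as np

def corr(f1, f2):
    # Simple pixel-domain stand-in for the frequency-domain
    # correlation coefficient r of claim 3.
    return float(np.corrcoef(f1.ravel(), f2.ravel())[0, 1])

def cluster_views(images, T=0.8):
    """Greedy correlation clustering (steps a-d): the first unassigned
    image seeds a view subset S_t; every remaining image whose correlation
    with the seed exceeds T joins it (S_t = S_t U {i}); then I = I \\ S_t
    and the process repeats until every image is clustered."""
    remaining = list(range(len(images)))
    subsets = []
    while remaining:
        seed = remaining[0]
        subset = [i for i in remaining
                  if i == seed or corr(images[seed], images[i]) > T]
        subsets.append(subset)
        remaining = [i for i in remaining if i not in subset]  # I = I \ S_t
    return subsets
```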
4. The cross-view SAR identification method based on monogenic feature joint sparse representation according to claim 3, wherein the third step specifically comprises the following steps:
in the view subsets obtained in the second step, the monogenic polar coordinate mapping feature vectors are extracted from the SAR correlation images of the different view subsets respectively; given SAR correlation images from R view subsets, the training sample set is expressed as:

X = (X_1, X_2, ..., X_R)

wherein X_g (g = 1,...,R) denotes the monogenic polar coordinate mapping feature vectors extracted from the gth view subset,
in the classifier training stage, a linear representation model is established for the SAR correlation images: y_g = X_g α_g + δ_g, wherein y_g is the monogenic polar coordinate mapping feature vector of the training sample, X_g is the monogenic polar coordinate mapping feature dictionary of the SAR correlation images, α_g is the sparse representation coefficient vector, and δ_g is the noise of the gth task with upper bound ||δ_g||_2 ≤ ε, ε representing the noise margin; a least-squares fitting algorithm is used to minimize the following objective function to solve for the sparse representation coefficients:

min_Â Σ_{g=1}^R ||y_g − X_g α̂_g||_2^2 + λ||Â||_{2,1}

wherein λ is the Lagrange multiplier and Â = [α̂_1, ..., α̂_R] is the sparse representation coefficient matrix to be optimized, whose L2,1 norm is the sum of the L2 norms of its rows; joint training and joint learning are carried out by using the SAR correlation images, so that the monogenic polar coordinate mapping feature vectors extracted from each view subset share a common sparsity pattern, and the trained multi-task joint sparse representation classifier is obtained,
in the test stage, the monogenic polar coordinate mapping feature vectors of an input sample from the SAR original image dataset are extracted, the kernel feature vectors of the multiple tasks are generated through kernel space mapping, the joint sparse representation coefficients of the multiple tasks are solved under the L2,1-norm constraint, and identification and classification are performed by using the weighted minimum global reconstruction error of the multiple tasks; the global reconstruction errors of the dictionaries of the different classes on the test sample are measured, and the label of the test sample is obtained:

identity(y) = argmin_c Σ_{g=1}^R w_g ||y_g − X_g^c α̂_g^c||_2

wherein X_g^c and α̂_g^c denote the dictionary atoms and the recovered coefficients associated with class c in the gth task, and w_g is the weight of the gth task.
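A minimal numerical sketch of the training/test procedure in claim 4, assuming a plain (non-kernelized) feature space and uniform task weights: the L2,1-regularized objective is minimized by proximal gradient descent with row-wise shrinkage (one possible solver; the claim does not fix one), and the label is the class with minimum summed reconstruction error. All function and variable names are illustrative.

```python
import numpy as np

def joint_sparse_classify(dicts, ys, labels, lam=0.01, iters=300):
    """Multi-task joint sparse representation classification sketch:
    minimize sum_g ||y_g - X_g a_g||^2 + lam * ||A||_{2,1} over the
    coefficient matrix A = [a_1, ..., a_R], then pick the class whose
    sub-dictionaries give the smallest summed reconstruction error.
    `dicts` holds R dictionaries (d_g x N) sharing one column order,
    `ys` the R test vectors, `labels` the class of each column."""
    R, N = len(dicts), dicts[0].shape[1]
    A = np.zeros((N, R))
    # Step size from the largest dictionary's Lipschitz constant.
    L = 2.0 * max(np.linalg.norm(X, 2) ** 2 for X in dicts)
    for _ in range(iters):
        # Gradient of the summed least-squares terms, one column per task.
        G = np.column_stack([2.0 * X.T @ (X @ A[:, g] - y)
                             for g, (X, y) in enumerate(zip(dicts, ys))])
        Z = A - G / L
        # Row-wise shrinkage: proximal operator of the L2,1 norm, which
        # enforces a sparsity pattern shared across all tasks.
        norms = np.linalg.norm(Z, axis=1, keepdims=True)
        A = Z * np.maximum(0.0, 1.0 - lam / (L * norms + 1e-12))
    labels = np.asarray(labels)
    best_cls, best_err = None, np.inf
    for c in np.unique(labels):
        mask = labels == c
        err = sum(np.linalg.norm(y - X[:, mask] @ A[mask, g])
                  for g, (X, y) in enumerate(zip(dicts, ys)))
        if err < best_err:
            best_cls, best_err = c, err
    return best_cls
```

The row-wise shrinkage is what couples the tasks: a dictionary atom is either used by every view subset or suppressed in all of them, which is the "common sparse mode" the claim describes.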
5. A computer-readable storage medium storing a computer program, wherein the computer program causes a computer to execute the cross-view SAR identification method based on monogenic feature joint sparse representation according to any one of claims 1 to 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211629285.6A CN116310401A (en) | 2022-12-19 | 2022-12-19 | Cross-view SAR identification method based on single-performance feature joint sparse representation |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116310401A true CN116310401A (en) | 2023-06-23 |
Family
ID=86798453
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211629285.6A Pending CN116310401A (en) | 2022-12-19 | 2022-12-19 | Cross-view SAR identification method based on single-performance feature joint sparse representation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116310401A (en) |
Citations (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101216553A (en) * | 2007-12-27 | 2008-07-09 | 南京航空航天大学 | Synthetic aperture radar polar coordinates format image-forming algorithm based on variable metric principle |
CN103049905A (en) * | 2012-12-07 | 2013-04-17 | 中国人民解放军海军航空工程学院 | Method for realizing image registration of synthetic aperture radar (SAR) by using three components of monogenic signals |
CN103630901A (en) * | 2013-03-29 | 2014-03-12 | 中国科学院电子学研究所 | Method for imaging of airborne down-looking array 3-D SAR |
CN104732224A (en) * | 2015-04-08 | 2015-06-24 | 重庆大学 | SAR object identification method based on two-dimension Zernike moment feature sparse representation |
CN104851097A (en) * | 2015-05-19 | 2015-08-19 | 西安电子科技大学 | Multichannel SAR-GMTI method based on target shape and shadow assistance |
GB201514673D0 (en) * | 2014-08-18 | 2015-09-30 | Jaguar Land Rover Ltd | Display system and method |
CN106842201A (en) * | 2017-02-22 | 2017-06-13 | 南京航空航天大学 | A kind of Ship Target ISAR chiasmal image method of discrimination based on sequence image |
CN108008387A (en) * | 2017-11-23 | 2018-05-08 | 内蒙古工业大学 | Three-D imaging method is regarded under a kind of airborne array antenna |
CN108038445A (en) * | 2017-12-11 | 2018-05-15 | 电子科技大学 | A kind of SAR automatic target recognition methods based on various visual angles deep learning frame |
CN108416290A (en) * | 2018-03-06 | 2018-08-17 | 中国船舶重工集团公司第七二四研究所 | Radar signal feature method based on residual error deep learning |
CN109583293A (en) * | 2018-10-12 | 2019-04-05 | 复旦大学 | Aircraft Targets detection and discrimination method in satellite-borne SAR image |
CN109753887A (en) * | 2018-12-17 | 2019-05-14 | 南京师范大学 | A kind of SAR image target recognition method based on enhancing nuclear sparse expression |
CN110930980A (en) * | 2019-12-12 | 2020-03-27 | 苏州思必驰信息科技有限公司 | Acoustic recognition model, method and system for Chinese and English mixed speech |
CN112001257A (en) * | 2020-07-27 | 2020-11-27 | 南京信息职业技术学院 | SAR image target recognition method and device based on sparse representation and cascade dictionary |
US20210117659A1 (en) * | 2019-10-21 | 2021-04-22 | Analog Devices International Unlimited Company | Radar-based indoor localization and tracking system |
US20210138655A1 (en) * | 2019-11-13 | 2021-05-13 | Nvidia Corporation | Grasp determination for an object in clutter |
US20210264645A1 (en) * | 2020-02-21 | 2021-08-26 | Siemens Healthcare Gmbh | Multi-contrast mri image reconstruction using machine learning |
WO2022165876A1 (en) * | 2021-02-06 | 2022-08-11 | 湖南大学 | Wgan-based unsupervised multi-view three-dimensional point cloud joint registration method |
Non-Patent Citations (2)
Title |
---|
HAICHAO ZHANG: "Multi-View Automatic Target Recognition using Joint Sparse Representation", IEEE Transactions on Aerospace and Electronic Systems * |
CHEN Jie: "Multi-view SAR image target recognition method considering independence and correlation", Electronics Optics & Control, pages 90-92 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Pei et al. | SAR automatic target recognition based on multiview deep learning framework | |
Kandaswamy et al. | Efficient texture analysis of SAR imagery | |
CN106056070B (en) | Restore the SAR target identification method with rarefaction representation based on low-rank matrix | |
CN111709313B (en) | Pedestrian re-identification method based on local and channel combination characteristics | |
CN103955701A (en) | Multi-level-combined multi-look synthetic aperture radar image target recognition method | |
CN112001257A (en) | SAR image target recognition method and device based on sparse representation and cascade dictionary | |
Wang et al. | A novel hyperspectral image change detection framework based on 3D-wavelet domain active convolutional neural network | |
CN106951822B (en) | One-dimensional range profile fusion identification method based on multi-scale sparse preserving projection | |
CN113592030B (en) | Image retrieval method and system based on complex value singular spectrum analysis | |
CN109753887B (en) | SAR image target identification method based on enhanced kernel sparse representation | |
Shrivastava et al. | Noise-invariant structure pattern for image texture classification and retrieval | |
Wen et al. | Feature extraction of hyperspectral images based on preserving neighborhood discriminant embedding | |
CN116310401A (en) | Cross-view SAR identification method based on single-performance feature joint sparse representation | |
CN107403136B (en) | SAR target model identification method based on structure-preserving dictionary learning | |
CN115205602A (en) | Zero-sample SAR target identification method based on optimal transmission distance function | |
CN109344767B (en) | SAR target identification method based on multi-azimuth multi-feature collaborative representation | |
CN111458689B (en) | Multipath scattering characteristic classification method based on polarization scattering center | |
Gao et al. | Feature matching for multi-beam sonar image sequence using KD-Tree and KNN search | |
Xu et al. | SAR target recognition based on variational autoencoder | |
CN110135280B (en) | Multi-view SAR automatic target recognition method based on sparse representation classification | |
Wu et al. | An accurate feature point matching algorithm for automatic remote sensing image registration | |
CN113869119A (en) | Multi-temporal SAR ship target tracking method, system, equipment and medium | |
Ji et al. | SAR image target recognition based on monogenic signal and sparse representation | |
Harrison et al. | Novel consensus approaches to the reliable ranking of features for seabed imagery classification | |
Kanafiah et al. | Fundamental shape discrimination of underground metal object through one-axis ground penetrating radar (GPR) scan |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||