CN106908774B - One-dimensional range profile identification method based on multi-scale kernel sparse preserving projection - Google Patents

One-dimensional range profile identification method based on multi-scale kernel sparse preserving projection

Info

Publication number
CN106908774B
CN106908774B (application CN201710010006.0A)
Authority
CN
China
Prior art keywords
scale
kernel
sparse
feature vector
space
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201710010006.0A
Other languages
Chinese (zh)
Other versions
CN106908774A (en)
Inventor
戴为龙
刘文波
张弓
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Aeronautics and Astronautics
Original Assignee
Nanjing University of Aeronautics and Astronautics
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Aeronautics and Astronautics filed Critical Nanjing University of Aeronautics and Astronautics
Priority to CN201710010006.0A priority Critical patent/CN106908774B/en
Publication of CN106908774A publication Critical patent/CN106908774A/en
Application granted granted Critical
Publication of CN106908774B publication Critical patent/CN106908774B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
    • G01S7/41Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • G01S7/411Identification of targets based on measurements of radar reflectivity

Abstract

The invention discloses a one-dimensional range profile identification method based on multi-scale kernel sparse preserving projection. Firstly, the normalized amplitude feature of each measured one-dimensional range profile sample is extracted and preprocessed by translation alignment; secondly, multi-scale kernel space mapping is performed on the amplitude features using a Gaussian kernel function; then, the sparse feature vectors of the signal in each scale space are obtained by sparse preserving projection and fused; finally, the fused features are classified with a support vector machine classifier. The method extracts multi-scale kernel-space sparsity-preserving fusion features based on Gaussian scale kernels and sparse preserving projection. Compared with the conventional multi-scale Gaussian kernel fusion feature extraction method, it achieves higher recognition accuracy and better noise robustness at the same fused feature dimension, and is therefore a robust one-dimensional range profile recognition method.

Description

One-dimensional range profile identification method based on multi-scale kernel sparse preserving projection
Technical Field
The invention belongs to the technical field of radar target identification, and particularly relates to a one-dimensional range profile identification method.
Background
In the field of radar signal processing, radar target recognition is an important research direction. A radar high-resolution range profile (HRRP) is the vector sum of the echoes of a wideband radar target's scattering points along the radar line of sight; it reflects the distribution of the target's scattering points along the line of sight and contains rich information about the target structure. Compared with SAR and ISAR images, the one-dimensional range profile has inherent advantages such as lower requirements on measurement accuracy, ease of acquisition, and small data volume. Target recognition based on HRRP is therefore one of the most promising schemes in radar target recognition. However, when HRRP echoes are collected over a wide range of aspect angles, and especially in noisy environments, the HRRP data become linearly inseparable; conventional feature extraction schemes such as principal component analysis and low-frequency wavelet feature extraction then yield features with poor separability and low classification accuracy, their performance deteriorates rapidly under noise interference, and they are difficult to apply in practical engineering.
Regarding robust recognition of one-dimensional range profiles, two main approaches are currently pursued: the first is to extract features of the one-dimensional range profile that are both discriminative and stable; the second is to improve the recognition rate under interference by designing new classifiers or by fusing classification results.
Kernel principal component analysis (KPCA) projects the original one-dimensional range profile signal into a high-dimensional space through a kernel function to improve its linear separability, and then extracts principal components from the high-dimensional features with the PCA algorithm, thereby reducing the dimensionality; this can reduce the influence of noise to a certain extent and improve the recognition rate. However, a single-kernel principal component can hardly represent all the intrinsic characteristics of a signal, different feature spaces each have their own advantages, and their adaptability to the environment differs. Researchers therefore proposed the concept of the multi-scale kernel: by introducing a scale space into the kernel method, kernel mappings in multiple scale spaces are obtained, and the features of these scale spaces are fused into a new multi-scale kernel fusion feature, which improves recognition accuracy in complex environments and is more stable and general than single-kernel features. However, the dimensionality of the final feature grows considerably as the number of scales increases, and the kernel method ignores the useful discriminative information contained in the interrelations among signal samples, so the improvement in recognition accuracy is limited.
Disclosure of Invention
In order to solve the technical problems in the background art, the invention aims to provide a one-dimensional range profile identification method based on multi-scale kernel sparse preserving projection, which combines multi-scale kernel mapping with the sparse preserving projection method, compensates for the shortcomings of multi-scale kernel mapping to a certain extent, and maintains stable and efficient recognition performance in complex environments.
In order to achieve the above technical purpose, the technical solution of the invention is as follows:
The one-dimensional range profile recognition method based on multi-scale kernel sparse preserving projection comprises a training stage and a testing stage, wherein the training stage comprises the following steps:
(1) for the one-dimensional range profile training sample set X = [x_1, x_2, ..., x_N], extract its normalized amplitude feature set H̃ = [h̃_1, h̃_2, ..., h̃_N], and then perform a translation alignment operation to obtain the aligned amplitude feature set H = [h_1, h_2, ..., h_N], where N is the number of samples;
(2) perform multi-scale kernel space mapping on the amplitude features using a Gaussian kernel function to obtain the training-sample multi-scale kernel space feature vector set [Z_1, Z_2, ..., Z_M], where M is the total number of scales;
(3) perform sparse preserving projection on [Z_1, Z_2, ..., Z_M] to obtain the multi-scale kernel sparse feature vector set [C_1, C_2, ..., C_M];
(4) perform serial feature fusion on [C_1, C_2, ..., C_M] to obtain the multi-scale kernel space sparse preserving fusion feature vector set Q = [q_1, q_2, ..., q_N];
(5) learn Q with a support vector machine classifier;
the steps of the test phase are as follows:
(6) extract the normalized amplitude feature of the one-dimensional range profile test sample y and translation-align it with the training samples to obtain the aligned amplitude feature h_y;
(7) perform multi-scale kernel space mapping on the amplitude feature h_y using the Gaussian kernel function to obtain the test-sample multi-scale kernel space feature vector set H_y = [h_y1, h_y2, ..., h_yM];
(8) perform sparse preserving projection on H_y = [h_y1, h_y2, ..., h_yM] to obtain the multi-scale kernel sparse feature vector set C_y = [c_y1, c_y2, ..., c_yM];
(9) perform serial feature fusion on C_y = [c_y1, c_y2, ..., c_yM] to obtain the multi-scale kernel space sparse preserving fusion feature vector q_y;
(10) classify q_y with the support vector machine classifier trained in step (5) to obtain the target class of the one-dimensional range profile test sample y.
Further, in step (1), the normalized amplitude feature of each one-dimensional range profile sample is extracted as

h̃_i = |x_i| / ||x_i||_2,  i = 1, 2, ..., N

where |·| denotes the element-wise modulus and ||·||_2 denotes the 2-norm.
Further, in step (1), the translation alignment of the amplitude features is performed as follows: starting from the second training sample and taking the amplitude feature of the first training sample as the reference, each sample is aligned according to the maximum-correlation criterion. The cross-correlation between h̃_1 and h̃_i shifted by p range cells is

R_1i(p) = Σ_k h̃_1(k) h̃_i(k + p)

Keeping h̃_1 fixed, h̃_i is translated by p range cells, where p satisfies

p = argmax_p R_1i(p),  i = 2, ..., N.
Further, the specific process of step (2) is as follows:
the Gaussian kernel function used is

G(h_a, h_b) = exp( -||h_a - h_b||_2^2 / (2σ_m^2) ),  a, b = 1, 2, ..., N

where σ_m is the Gaussian kernel parameter at scale m; at scale m, the N×N kernel matrix K of H = [h_1, h_2, ..., h_N] is obtained as

K_{a,b} = G(h_a, h_b)

where K_{a,b} denotes the element in row a and column b of the kernel matrix K;
the kernel matrix K is centered in the high-dimensional space to obtain the matrix K̃; principal component analysis is performed on K̃ to obtain the eigenvector matrix U_l = [α_1, α_2, ..., α_l] corresponding to the l largest eigenvalues Λ_l, with l ≤ N; a kernel space projection matrix is constructed from U_l, and kernel space feature extraction is performed on H = [h_1, h_2, ..., h_N] at scale m:

z_j = U_l^T K̃_{:,j} = [ Σ_{k=1}^{N} α_{1,k} K̃_{k,j}, ..., Σ_{k=1}^{N} α_{l,k} K̃_{k,j} ]^T,  j = 1, 2, ..., N

where (·)^T denotes transposition, α_{l,k} denotes the k-th element of the eigenvector α_l in U_l, and K̃_{k,j} is the (k, j) element of the centered kernel matrix, i.e. the centered kernel value between h_k and the kernel space mapping of h_j at scale m; this yields the kernel space feature vector set Z_m = [z_1, z_2, ..., z_N] at scale m;
the kernel space feature vector sets at the other scales are obtained in the same way, forming the training-sample multi-scale kernel space feature vector set [Z_1, Z_2, ..., Z_M].
Further, the specific process of step (3) is as follows:
for the training-sample kernel space feature vector z_s at an arbitrary scale m (m = 1, 2, ..., M; s = 1, 2, ..., N), z_s is sparsely represented by the kernel space feature vectors of the remaining training samples, and the sparse representation coefficient vector r_s is obtained by solving the constrained optimization problem

min ||r_s||_1
s.t. ||Z_m r_s - z_s||_2 ≤ ε
     1 = e^T r_s

where e denotes a column vector whose elements are all 1, r_s = [r_{s,1}, ..., r_{s,s-1}, 0, r_{s,s+1}, ..., r_{s,N}]^T is the sparse representation coefficient vector, r_{s,t} (t ≠ s) denotes the contribution of the training-sample kernel space feature vector z_t to the reconstruction of z_s, ε is the relaxation amount, and ||·||_1 denotes the 1-norm;
the sparse representation coefficient vectors of all training samples are computed to obtain the adjacency matrix R = [r_1, r_2, ..., r_N], and the generalized eigen-equation

Z_m (R + R^T - R R^T) Z_m^T w = λ Z_m Z_m^T w

is solved, where λ denotes an unknown eigenvalue and w the eigenvector corresponding to λ; the eigenvectors corresponding to the d largest eigenvalues are collected as the sparse preserving projection matrix w_d, and the kernel sparse feature vector set at scale m is then

C_m = w_d^T Z_m

The kernel sparse feature vector sets at the other scales are computed in the same way, forming the multi-scale kernel sparse feature vector set [C_1, C_2, ..., C_M].
Further, in step (5), the support vector machine classifier adopts a linear kernel support vector machine classifier.
The beneficial effects brought by the above technical solution are as follows:
1. Improved recognition accuracy: the recognition method provided by the invention is based on multi-scale kernel mapping and sparse preserving projection; it exploits both the multi-scale information of the signal and the relations among signal samples, and fuses this information for classification, so good recognition accuracy is achieved in different environments. The recognition accuracy of the conventional multi-scale kernel analysis method can be reached with a lower feature dimension, and with the final feature dimension and the classifier fixed, the recognition accuracy can be improved by 2-3 percentage points.
2. Wide application range: the recognition method provided by the invention can be adapted to different application scenarios, so it can also handle various one-dimensional signal processing problems, such as detection and recognition of target infrared spectra and recognition of speech signals.
Drawings
FIG. 1 is a schematic flow diagram of the present invention;
FIG. 2 is a diagram illustrating an echo of an original real signal of a one-dimensional range profile in an embodiment;
FIG. 3 is a schematic diagram of a normalized amplitude feature of a one-dimensional range profile of the present invention;
FIG. 4 is a schematic diagram of Gaussian kernel functions at different scales according to the present invention.
Detailed Description
The technical solution of the invention is explained in detail below with reference to the accompanying drawings.
The invention provides a one-dimensional range profile identification method based on multi-scale kernel sparse preserving projection; the overall flow is shown in FIG. 1. Measured one-dimensional range profile echo data of an aircraft are shown in FIG. 2; in practice, the echoes of aircraft of different types differ, and the echo signals of aircraft of the same type also differ at different aspect angles. The invention addresses the identification and classification of such one-dimensional echo signals.
A training stage:
step 1: for training sample set X ═ X1,x2,...,xN]As shown in fig. 3, a normalized amplitude feature set is extracted
Figure BDA0001204288150000061
Figure BDA0001204288150000062
In the formula (1), | · | represents modulo, | ·| | represents 2 norms, and N represents the feature dimension.
Because the one-dimensional range profile is translation-sensitive, each amplitude feature of the training samples is translation-aligned by a correlation alignment method. Starting from the second training sample and taking the amplitude feature of the first training sample as the reference, the alignment is performed according to the maximum-correlation criterion. The cross-correlation between h̃_1 and h̃_i shifted by p range cells is

R_1i(p) = Σ_k h̃_1(k) h̃_i(k + p),  i = 2, ..., N    (2)

Keeping h̃_1 fixed, h̃_i is translated by p range cells, where p satisfies

p = argmax_p R_1i(p)    (3)

This yields the translation-aligned amplitude feature vector set H = [h_1, h_2, ..., h_N].
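By way of illustration only, a minimal NumPy sketch of this step is given below. The helper names (normalize_amplitude, align_features) and the use of a circular shift for the p-cell translation are assumptions made for this example, not part of the patented method itself.

```python
import numpy as np

def normalize_amplitude(X):
    """Formula (1): take the element-wise modulus of each sample and scale it to unit 2-norm.
    X: complex or real array of shape (n_cells, N), one column per range profile sample."""
    A = np.abs(X)
    return A / np.linalg.norm(A, axis=0, keepdims=True)

def align_features(H_tilde):
    """Formulas (2)-(3): shift every sample so that its cross-correlation with the
    first sample is maximal (a circular shift is assumed here for simplicity)."""
    n_cells, N = H_tilde.shape
    H = H_tilde.copy()
    ref = H_tilde[:, 0]                      # the first sample is the alignment reference
    for i in range(1, N):
        # cross-correlation of the reference with every circular shift of sample i
        corr = [ref @ np.roll(H_tilde[:, i], -p) for p in range(n_cells)]
        p_best = int(np.argmax(corr))        # formula (3): p = argmax_p R(p)
        H[:, i] = np.roll(H_tilde[:, i], -p_best)
    return H

# H_tilde = normalize_amplitude(X)   # X: measured one-dimensional range profiles
# H = align_features(H_tilde)
```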
Step 2: perform multi-scale kernel space mapping on the amplitude features using the Gaussian kernel function G to obtain the training-sample multi-scale kernel space feature vector set [Z_1, Z_2, ..., Z_M]. The Gaussian kernel functions at different scales are illustrated in FIG. 4. For vectors h_a and h_b, the Gaussian kernel function is

G(h_a, h_b) = exp( -||h_a - h_b||_2^2 / (2σ_m^2) )    (4)

In formula (4), a, b = 1, 2, ..., N, σ_m is the Gaussian kernel parameter at scale m, and the total number of scales is denoted by M. At scale m, the N×N kernel matrix K of H = [h_1, h_2, ..., h_N] is obtained as

K_{a,b} = G(h_a, h_b)    (5)

In formula (5), K_{a,b} denotes the element in row a and column b of the kernel matrix K.
The kernel matrix K is centered in the high-dimensional space to obtain the matrix K̃. Principal component analysis is performed on K̃, and the eigenvector matrix U_l = [α_1, α_2, ..., α_l] corresponding to the l largest eigenvalues Λ_l (l ≤ N) is obtained. A kernel space projection matrix is constructed from U_l, and kernel space feature extraction is performed on H = [h_1, h_2, ..., h_N] at scale m:

z_j = U_l^T K̃_{:,j} = [ Σ_{k=1}^{N} α_{1,k} K̃_{k,j}, ..., Σ_{k=1}^{N} α_{l,k} K̃_{k,j} ]^T,  j = 1, 2, ..., N    (6)

In formula (6), (·)^T denotes transposition, α_{l,k} denotes the k-th element of the eigenvector α_l in U_l, and K̃_{k,j} is the (k, j) element of the centered kernel matrix, i.e. the centered kernel value between h_k and the kernel space mapping of h_j at scale m. This yields the kernel space feature vector set Z_m = [z_1, z_2, ..., z_N] at scale m.
The kernel space feature vector sets at the other scales are obtained in the same way, forming the training-sample multi-scale kernel space feature vector set [Z_1, Z_2, ..., Z_M].
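The following sketch illustrates step 2 under stated assumptions: the scale set is the one used in the comparison of Table 1 (0.8, 0.9, 1.0, 1.1, 1.2), the number of retained components n_comp (l in formula (6)) is an example value, and the usual KPCA rescaling of the eigenvectors by their eigenvalues is omitted so that the code mirrors formula (6) directly.

```python
import numpy as np

def kernel_matrix(H, sigma):
    """Formulas (4)-(5): N x N Gaussian kernel matrix of the columns of H at one scale."""
    sq = np.sum(H ** 2, axis=0)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (H.T @ H)   # pairwise squared distances
    return np.exp(-d2 / (2.0 * sigma ** 2))

def kpca_features(H, sigma, n_comp):
    """Formula (6): project onto the top principal directions of the centered kernel matrix."""
    N = H.shape[1]
    K = kernel_matrix(H, sigma)
    J = np.eye(N) - np.ones((N, N)) / N
    K_c = J @ K @ J                                     # centering in the high-dimensional space
    eigval, eigvec = np.linalg.eigh(K_c)                # eigenvalues in ascending order
    U_l = eigvec[:, -n_comp:][:, ::-1]                  # eigenvectors of the n_comp largest eigenvalues
    return U_l.T @ K_c                                  # Z_m: n_comp x N kernel space features

# multi-scale kernel mapping: one feature set Z_m per Gaussian scale parameter
# sigmas = [0.8, 0.9, 1.0, 1.1, 1.2]
# Z_list = [kpca_features(H, s, n_comp=30) for s in sigmas]   # H from step 1
```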
Step 3: perform sparse preserving projection on [Z_1, Z_2, ..., Z_M] to obtain the multi-scale kernel sparse feature vector set [C_1, C_2, ..., C_M].
For the training-sample kernel space feature vector z_s ∈ R^l at an arbitrary scale m (m = 1, 2, ..., M; s = 1, 2, ..., N), z_s is sparsely represented by the feature vectors of the remaining training samples, and the sparse representation coefficient vector r_s is obtained by solving the following constrained optimization problem:

min ||r_s||_1
s.t. ||Z_m r_s - z_s||_2 ≤ ε
     1 = e^T r_s    (7)

In formula (7), e denotes a column vector whose elements are all 1, r_s = [r_{s,1}, ..., r_{s,s-1}, 0, r_{s,s+1}, ..., r_{s,N}]^T is the sparse representation coefficient vector, r_{s,t} (t ≠ s) denotes the contribution of the training-sample kernel space feature vector z_t to the reconstruction of z_s, ε is the relaxation amount, and ||·||_1 denotes the 1-norm.
The sparse representation coefficient vectors of all training samples are computed to obtain the adjacency matrix R = [r_1, r_2, ..., r_N], and the generalized eigen-equation

Z_m (R + R^T - R R^T) Z_m^T w = λ Z_m Z_m^T w    (8)

is solved. In formula (8), λ denotes an unknown eigenvalue and w the eigenvector corresponding to λ. The eigenvectors corresponding to the d largest eigenvalues are collected as the sparse preserving projection matrix w_d, and the kernel sparse feature vector set at scale m is then

C_m = w_d^T Z_m    (9)

The kernel sparse feature vector sets at the other scales are computed in the same way, forming the multi-scale kernel sparse feature vector set [C_1, C_2, ..., C_M].
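A sketch of step 3 follows. Two simplifications are assumed for the example: the l1 problem of formula (7) is approximated with scikit-learn's Lasso (the sum-to-one constraint e^T r_s = 1 is dropped), and the generalized eigen-equation (8) is solved with scipy.linalg.eigh on a lightly regularized right-hand side. Neither substitution is prescribed by the patent.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.linear_model import Lasso

def spp_projection(Z, d, alpha=1e-3):
    """Sparse preserving projection of one scale's kernel features Z (shape: l x N).
    Returns C_m = w_d^T Z as in formula (9)."""
    l_dim, N = Z.shape
    R = np.zeros((N, N))
    for s in range(N):
        idx = [t for t in range(N) if t != s]
        # approximate formula (7): sparse coefficients reconstructing z_s from the other samples
        lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
        lasso.fit(Z[:, idx], Z[:, s])
        R[idx, s] = lasso.coef_
    S = R + R.T - R @ R.T                              # symmetric matrix of formula (8)
    A = Z @ S @ Z.T                                    # left-hand side of formula (8)
    B = Z @ Z.T + 1e-8 * np.eye(l_dim)                 # regularized right-hand side
    eigval, eigvec = eigh(A, B)                        # generalized symmetric eigenproblem
    w_d = eigvec[:, -d:][:, ::-1]                      # eigenvectors of the d largest eigenvalues
    return w_d.T @ Z                                   # C_m: d x N kernel sparse features

# C_list = [spp_projection(Z, d=20) for Z in Z_list]   # d is an example value
```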
Step 4: following the serial feature fusion method, the kernel sparse feature vector sets C_1, C_2, ..., C_M obtained at the individual scales of the training samples are serially fused into a new fusion feature vector set Q = [q_1, q_2, ..., q_N]. Since C_1, C_2, ..., C_M are all d×N matrices, the fused feature vector set Q is a (d·M)×N matrix.
Step 5: Q is learned with a linear support vector machine (SVM) classifier. A linear-kernel SVM is chosen as the classification tool because, after the multi-scale Gaussian kernel mapping and the sparse preserving projection, the signal features already possess a certain degree of linear separability, and the linear SVM is simple to design, has few parameters, and classifies quickly.
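Steps 4 and 5 can then be sketched as a simple vertical concatenation followed by a linear SVM; scikit-learn's LinearSVC is used here as an illustrative stand-in for the linear kernel support vector machine classifier, and the variable names carry over from the previous sketches.

```python
import numpy as np
from sklearn.svm import LinearSVC

def fuse_features(C_list):
    """Step 4: serial fusion -- stack the per-scale d x N matrices into one (d*M) x N matrix Q."""
    return np.vstack(C_list)

def train_classifier(Q, labels):
    """Step 5: linear-kernel SVM trained on the fused features (samples are the columns of Q)."""
    return LinearSVC(C=1.0, max_iter=10000).fit(Q.T, labels)

# Q = fuse_features(C_list)            # C_list from the step-3 sketch
# svm = train_classifier(Q, labels)    # `labels`: known target classes of the training samples
```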
A testing stage:
Step 1: extract the normalized amplitude feature of the test sample y and translation-align it with the training samples to obtain the aligned amplitude feature h_y, following step 1 of the training stage.
Step 2: perform multi-scale kernel space mapping on the amplitude feature h_y using the Gaussian kernel function G to obtain the test-sample multi-scale kernel space feature vector set H_y = [h_y1, h_y2, ..., h_yM], following step 2 of the training stage.
Step 3: perform sparse preserving projection on the test-sample multi-scale kernel space feature vector set H_y = [h_y1, h_y2, ..., h_yM] to obtain the multi-scale kernel sparse feature vector set C_y = [c_y1, c_y2, ..., c_yM], following step 3 of the training stage.
Step 4: fuse the multi-scale sparse features to obtain the new multi-scale kernel space sparse preserving fusion feature vector q_y, following step 4 of the training stage.
Step 5: classify q_y with the trained support vector machine (SVM) classifier to obtain the target class of the test sample.
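For the testing stage, the quantities produced during training must be reused: the aligned training features H, the per-scale training kernel matrices K, eigenvector matrices U_l and SPP projection matrices w_d (so the step-2 and step-3 sketches would have to be modified to also return them). Under that assumption, a minimal sketch of steps (6)-(10) for one test sample is:

```python
import numpy as np

def test_features(x_test, H, K_list, U_list, W_list, sigmas):
    """Steps (6)-(9) for one test sample: normalized amplitude, alignment against the first
    training feature, per-scale kernel mapping + KPCA projection + SPP projection, fusion."""
    # step (6): normalized amplitude and translation alignment (circular shift assumed)
    h = np.abs(x_test)
    h = h / np.linalg.norm(h)
    corr = [H[:, 0] @ np.roll(h, -p) for p in range(len(h))]
    h = np.roll(h, -int(np.argmax(corr)))

    parts = []
    for K, U_l, w_d, sigma in zip(K_list, U_list, W_list, sigmas):
        # step (7): Gaussian kernel of the test feature against all training features
        k = np.exp(-np.sum((H - h[:, None]) ** 2, axis=0) / (2.0 * sigma ** 2))
        # centering consistent with the training-stage centering of K
        k_c = k - K.mean(axis=0) - k.mean() + K.mean()
        z_y = U_l.T @ k_c                  # kernel space feature vector at this scale
        parts.append(w_d.T @ z_y)          # step (8): SPP projection at this scale
    return np.concatenate(parts)           # step (9): fused feature vector q_y

# q_y = test_features(y, H, K_list, U_list, W_list, sigmas)
# predicted_class = svm.predict(q_y.reshape(1, -1))     # step (10)
```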
Table 1 compares the recognition accuracy of the present invention with that of the conventional multi-scale Gaussian kernel fusion feature recognition (the scales are 0.8, 0.9, 1.0, 1.1 and 1.2 in both cases).
TABLE 1
[Recognition accuracy comparison; the table is reproduced as an image in the original document and its numerical values are not recoverable here.]
The data show that, for the same final feature dimension, the recognition accuracy of the proposed one-dimensional range profile recognition method is 2-3 percentage points higher than that of the conventional multi-scale Gaussian kernel fusion feature recognition, and that, for the same samples, it reaches the same accuracy with lower-dimensional features than the conventional method. Because multi-scale kernel space mapping is combined with sparse preserving projection, the linear separability of the data is enhanced while the intrinsic relations among samples are taken into account, increasing the information available for classification; the method therefore has good application prospects in engineering practice.
The above embodiments only illustrate the technical idea of the present invention and do not thereby limit its scope of protection; any modification made to the technical solution on the basis of the technical idea of the present invention falls within the scope of protection of the present invention.

Claims (5)

1. A one-dimensional range profile recognition method based on multi-scale kernel sparse preserving projection, characterized by comprising a training stage and a testing stage, wherein the training stage comprises the following steps:
(1) for the one-dimensional range profile training sample set X = [x_1, x_2, ..., x_N], extract its normalized amplitude feature set H̃ = [h̃_1, h̃_2, ..., h̃_N], and then perform a translation alignment operation to obtain the aligned amplitude feature set H = [h_1, h_2, ..., h_N], where N is the number of samples;
(2) perform multi-scale kernel space mapping on the amplitude features using a Gaussian kernel function to obtain the training-sample multi-scale kernel space feature vector set [Z_1, Z_2, ..., Z_M], where M is the total number of scales; the specific process of this step is as follows:
the Gaussian kernel function used is
G(h_a, h_b) = exp( -||h_a - h_b||_2^2 / (2σ_m^2) ),  a, b = 1, 2, ..., N
where σ_m is the Gaussian kernel parameter at scale m; at scale m, the N×N kernel matrix K of H = [h_1, h_2, ..., h_N] is obtained as
K_{a,b} = G(h_a, h_b)
where K_{a,b} denotes the element in row a and column b of the kernel matrix K;
the kernel matrix K is centered in the high-dimensional space to obtain the matrix K̃; principal component analysis is performed on K̃ to obtain the eigenvector matrix U_l = [α_1, α_2, ..., α_l] corresponding to the l largest eigenvalues Λ_l, with l ≤ N; a kernel space projection matrix is constructed from U_l, and kernel space feature extraction is performed on H = [h_1, h_2, ..., h_N] at scale m:
z_j = U_l^T K̃_{:,j} = [ Σ_{k=1}^{N} α_{1,k} K̃_{k,j}, ..., Σ_{k=1}^{N} α_{l,k} K̃_{k,j} ]^T,  j = 1, 2, ..., N
where (·)^T denotes transposition, α_{l,k} denotes the k-th element of the eigenvector α_l in U_l, and K̃_{k,j} is the (k, j) element of the centered kernel matrix, i.e. the centered kernel value between h_k and the kernel space mapping of h_j at scale m; this yields the kernel space feature vector set Z_m = [z_1, z_2, ..., z_N] at scale m;
the kernel space feature vector sets at the other scales are obtained in the same way, forming the training-sample multi-scale kernel space feature vector set [Z_1, Z_2, ..., Z_M];
(3) perform sparse preserving projection on [Z_1, Z_2, ..., Z_M] to obtain the multi-scale kernel sparse feature vector set [C_1, C_2, ..., C_M];
(4) perform serial feature fusion on [C_1, C_2, ..., C_M] to obtain the multi-scale kernel space sparse preserving fusion feature vector set Q = [q_1, q_2, ..., q_N];
(5) learn Q with a support vector machine classifier;
the steps of the test phase are as follows:
(6) extract the normalized amplitude feature of the one-dimensional range profile test sample y and translation-align it with the training samples to obtain the aligned amplitude feature h_y;
(7) perform multi-scale kernel space mapping on the amplitude feature h_y using the Gaussian kernel function to obtain the test-sample multi-scale kernel space feature vector set H_y = [h_y1, h_y2, ..., h_yM];
(8) perform sparse preserving projection on H_y = [h_y1, h_y2, ..., h_yM] to obtain the multi-scale kernel sparse feature vector set C_y = [c_y1, c_y2, ..., c_yM];
(9) perform serial feature fusion on C_y = [c_y1, c_y2, ..., c_yM] to obtain the multi-scale kernel space sparse preserving fusion feature vector q_y;
(10) classify q_y with the support vector machine classifier trained in step (5) to obtain the target class of the one-dimensional range profile test sample y.
2. The one-dimensional range profile recognition method based on multi-scale kernel sparse preserving projection according to claim 1, characterized in that in step (1) the normalized amplitude feature of each training sample is extracted as
h̃_i = |x_i| / ||x_i||_2,  i = 1, 2, ..., N
where |·| denotes the element-wise modulus and ||·||_2 denotes the 2-norm.
3. The one-dimensional range profile recognition method based on multi-scale kernel sparse preserving projection according to claim 1, characterized in that in step (1) the translation alignment of the amplitude features is performed according to the maximum-correlation criterion, starting from the second training sample and taking the amplitude feature of the first training sample as the reference, wherein the cross-correlation between h̃_1 and h̃_i shifted by p range cells is
R_1i(p) = Σ_k h̃_1(k) h̃_i(k + p)
and, keeping h̃_1 fixed, h̃_i is translated by p range cells, where p satisfies
p = argmax_p R_1i(p),  i = 2, ..., N.
4. The one-dimensional range profile recognition method based on multi-scale kernel sparse preserving projection according to claim 1, characterized in that the specific process of step (3) is as follows:
for the training-sample kernel space feature vector z_s at an arbitrary scale m (m = 1, 2, ..., M; s = 1, 2, ..., N), z_s is sparsely represented by the kernel space feature vectors of the remaining training samples, and the sparse representation coefficient vector r_s is obtained by solving the constrained optimization problem
min ||r_s||_1
s.t. ||Z_m r_s - z_s||_2 ≤ ε
     1 = e^T r_s
where e denotes a column vector whose elements are all 1, r_s = [r_{s,1}, ..., r_{s,s-1}, 0, r_{s,s+1}, ..., r_{s,N}]^T is the sparse representation coefficient vector, r_{s,t} (t ≠ s) denotes the contribution of the training-sample kernel space feature vector z_t to the reconstruction of z_s, ε is the relaxation amount, and ||·||_1 denotes the 1-norm;
the sparse representation coefficient vectors of all training samples are computed to obtain the adjacency matrix R = [r_1, r_2, ..., r_N], and the generalized eigen-equation
Z_m (R + R^T - R R^T) Z_m^T w = λ Z_m Z_m^T w
is solved, where λ denotes an unknown eigenvalue and w the eigenvector corresponding to λ; the eigenvectors corresponding to the d largest eigenvalues are collected as the sparse preserving projection matrix w_d, and the kernel sparse feature vector set at scale m is then
C_m = w_d^T Z_m;
the kernel sparse feature vector sets at the other scales are computed in the same way, forming the multi-scale kernel sparse feature vector set [C_1, C_2, ..., C_M].
5. The one-dimensional range profile recognition method based on multi-scale kernel sparse preserving projection according to claim 1, characterized in that in step (5) the support vector machine classifier is a linear kernel support vector machine classifier.
CN201710010006.0A 2017-01-06 2017-01-06 One-dimensional range profile identification method based on multi-scale kernel sparse preserving projection Expired - Fee Related CN106908774B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710010006.0A CN106908774B (en) 2017-01-06 2017-01-06 One-dimensional range profile identification method based on multi-scale kernel sparse preserving projection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710010006.0A CN106908774B (en) 2017-01-06 2017-01-06 One-dimensional range profile identification method based on multi-scale kernel sparse preserving projection

Publications (2)

Publication Number Publication Date
CN106908774A CN106908774A (en) 2017-06-30
CN106908774B true CN106908774B (en) 2020-01-10

Family

ID=59206924

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710010006.0A Expired - Fee Related CN106908774B (en) 2017-01-06 2017-01-06 One-dimensional range profile identification method based on multi-scale kernel sparse preserving projection

Country Status (1)

Country Link
CN (1) CN106908774B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107682109B (en) * 2017-10-11 2019-07-30 北京航空航天大学 A kind of interference signal classifying identification method suitable for UAV Communication system
CN108387866B (en) * 2018-01-16 2021-08-31 南京航空航天大学 Method for searching illegal broadcasting station by unmanned aerial vehicle based on reinforcement learning
CN109472239B (en) * 2018-10-28 2021-10-01 中国人民解放军空军工程大学 Individual identification method of frequency hopping radio station
CN113156416B (en) * 2021-05-17 2022-05-17 电子科技大学 Unknown target discrimination method based on multi-kernel dictionary learning


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5227801A (en) * 1992-06-26 1993-07-13 The United States Of America As Represented By The Secretary Of The Navy High resolution radar profiling using higher-order statistics
CN101144860A (en) * 2007-10-16 2008-03-19 哈尔滨工业大学 Hyperspectral image abnormal point detection method based on selective kernel principal component analysis
CN101241185B (en) * 2008-03-12 2010-09-29 电子科技大学 Radar target-range image non-linear projection recognition method
CN103544296A (en) * 2013-10-22 2014-01-29 中国人民解放军海军航空工程学院 Adaptive intelligent integration detection method of radar range extension target
JP2016151418A (en) * 2015-02-16 2016-08-22 日本電気株式会社 Target detection device, target detection method, target detection program, and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"Multiscale kernel sparse coding-based classifier for HRRP radar target recognition";Wei Xiong 等;《IET Radar, Sonar & Navigation》;20160614;正文第1-3节及图2、4 *
"基于KPCA算法的雷达目标一维距离像识别";张子刚 等;《科学技术》;20141231(第7期);第136页 *

Also Published As

Publication number Publication date
CN106908774A (en) 2017-06-30

Similar Documents

Publication Publication Date Title
CN108133232B (en) Radar high-resolution range profile target identification method based on statistical dictionary learning
CN107515895B (en) Visual target retrieval method and system based on target detection
CN106908774B (en) One-dimensional range profile identification method based on multi-scale kernel sparse preserving projection
CN110045015B (en) Concrete structure internal defect detection method based on deep learning
CN106951915B (en) One-dimensional range profile multi-classifier fusion recognition method based on category confidence
CN107194378B (en) Face recognition method and device based on mixed dictionary learning
CN108960330A (en) Remote sensing images semanteme generation method based on fast area convolutional neural networks
CN112990334A (en) Small sample SAR image target identification method based on improved prototype network
CN106443632B (en) The radar target identification method of multitask Factor Analysis Model is kept based on label
CN107085206B (en) One-dimensional range profile identification method based on adaptive sparse preserving projection
CN107133648B (en) One-dimensional range profile identification method based on adaptive multi-scale fusion sparse preserving projection
CN109543720B (en) Wafer map defect mode identification method based on countermeasure generation network
CN112149758B (en) Hyperspectral open set classification method based on Euclidean distance and deep learning
CN105654122B (en) Based on the matched spatial pyramid object identification method of kernel function
CN106951822B (en) One-dimensional range profile fusion identification method based on multi-scale sparse preserving projection
CN108734115B (en) Radar target identification method based on label consistency dictionary learning
CN112836671A (en) Data dimension reduction method based on maximization ratio and linear discriminant analysis
CN107092805B (en) Magnetic resonance parallel imaging device
CN110161480A (en) Radar target identification method based on semi-supervised depth probabilistic model
CN109871907B (en) Radar target high-resolution range profile identification method based on SAE-HMM model
CN106709428B (en) One-dimensional range profile robust identification method based on Euler kernel principal component analysis
CN108038467B (en) A kind of sparse face identification method of mirror image in conjunction with thickness level
CN109886315A (en) A kind of Measurement of Similarity between Two Images method kept based on core
CN104504391A (en) Hyperspectral image classification method based on sparse feature and Markov random field
CN112861929A (en) Image classification method based on semi-supervised weighted migration discriminant analysis

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
Granted publication date: 20200110