CN102073875A - Sparse representation-based background clutter quantification method - Google Patents


Publication number
CN102073875A
CN201110001480 · CN201110001480A · CN102073875A · CN102073875B
Authority
CN
China
Prior art keywords
matrix
background
vector
target
formula
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN 201110001480
Other languages
Chinese (zh)
Other versions
CN102073875B (en)
Inventor
杨翠
李倩
吴洁
张建奇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN 201110001480 priority Critical patent/CN102073875B/en
Publication of CN102073875A publication Critical patent/CN102073875A/en
Application granted granted Critical
Publication of CN102073875B publication Critical patent/CN102073875B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Complex Calculations (AREA)

Abstract

The invention discloses a sparse representation-based background clutter quantification method, which mainly solves the problem that conventional clutter scales neither accord with the physical essence of background clutter (its relativity to the target) nor adequately reflect the properties of human vision. The method comprises the following implementation steps: partitioning the gray background image to be quantified into a number of equal-sized small units and combining these units into a background matrix; extracting the principal features of the target vector and the background matrix to obtain a target feature vector and a background feature matrix; normalizing the target feature vector and the background feature matrix; and computing the sparsest representation of the normalized target feature vector in the normalized background feature matrix, the sum of the absolute values of whose coefficients is taken as the background clutter scale of the entire image. The method makes full use of two major characteristics of human visual search, improves the consistency between the predicted target detection probability and the detection probability observed in subjective experiments, and can be used to predict and evaluate the target acquisition performance of photoelectric imaging systems.

Description

Background clutter quantification method based on sparse representation
Technical field
The invention belongs to the technical field of image processing, and in particular relates to a background clutter quantification method based on sparse representation, which can be used for predicting and assessing the target acquisition performance of photoelectric imaging systems.
Background technology
The target acquisition performance of photoelectric imaging systems is a key concept in military missions such as electro-optical countermeasures, reconnaissance, early warning and camouflage. In recent years, with the introduction of new materials and advances in manufacturing processes, photodetectors have reached or approached the background limit, and background factors have become the key factor restricting the target acquisition performance of photoelectric imaging systems. Using a reasonable and accurate background clutter quantification scale in target acquisition performance characterization models allows their predictions to reflect the field performance of the imaging system more accurately.
Background clutter is a visual-perception effect: it refers to objects of the same class as the target that interfere with target detection, and it has two typical characteristics: it depends on both background and target features, and it is relative to the target of interest. Since the 1980s, researchers abroad have carried out a large amount of work on the quantitative characterization of background clutter and have proposed a variety of quantitative clutter scales. The most widely used are the statistical variance scale SV, see D.E. Schmieder and M.R. Weathersby, "Detection performance in clutter with variable resolution," IEEE Trans. Aerosp. Electron. Syst. AES-19(4), 622-630 (1983), and the probability of edge scale POE, see G. Tidhar, G. Reiter, Z. Avital, Y. Hadar, S.R. Rotman, V. George, and M.L. Kowalczyk, "Modeling human search and target acquisition performance: IV. Detection probability in the cluttered environment," Opt. Eng. 33, 801-808 (1994). However, the statistical variance scale SV is built on statistical processing of the photoelectric image and does not consider the perceptual characteristics of human vision, while the probability of edge scale POE refers only to background information and thus runs counter to the physical essence of background clutter, namely its relativity to the target. Consequently, clutter scales built on either of the two cannot reasonably reflect the influence of the background on the target acquisition process, and are difficult to use for accurate prediction and assessment of the field performance of photoelectric imaging systems.
Summary of the invention
The objective of the invention is to overcome the deficiencies of previous background clutter quantification methods by proposing a background clutter scale based on sparse representation, so as to improve the accuracy of field performance prediction and assessment for imaging systems.
To this end, the invention transforms the target and background information from the spatial domain to a feature domain by dimensionality reduction, and quantifies the background clutter in the feature space using sparse representation theory. The concrete steps are as follows:
(1) Column-vectorize the two-dimensional target image to obtain the target vector x.
(2) Divide the background image into N equal-sized small units, the size of each unit in the horizontal and vertical directions being twice the corresponding size of the target.
(3) Column-vectorize each two-dimensional background unit and combine the results into the background matrix Ψ.
(4) Reduce the dimensionality of the target vector x and the background matrix Ψ by principal component analysis (PCA) to obtain the target feature vector x~ and the background feature matrix Φ, respectively.
(5) Normalize the target feature vector x~ to obtain the normalized target feature vector x^:
x^ = x~ / ||x~||_2
where ||·||_2 denotes the l_2 norm of a vector.
(6) Normalize each column of the background feature matrix Φ, and arrange the results Θ_i in order of increasing subscript to form the normalized background feature matrix Θ:
Θ_i = Φ_i / ||Φ_i||_2,  i = 1, 2, ..., N
where Φ_i and Θ_i are the i-th column vectors of Φ and Θ respectively, and N is the number of column vectors in Φ.
(7) Compute the sparse representation of the normalized target feature vector x^ in the normalized background feature matrix Θ to obtain the similarity vector s, i.e. solve for the minimum l_0-norm solution s satisfying Θs = x^:
min ||s||_0  subject to  Θs = x^
(8) Take the sum of the absolute values of the nonzero elements s_γi, i = 1, 2, ..., K, of the similarity vector s as the background clutter quantification scale:
clutter scale = Σ_{i=1}^{K} |s_γi|
where K is the number of nonzero elements of s.
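The eight steps above can be sketched end to end in numpy. Everything below is illustrative, not the patented embodiment: the patent specifies background units twice the target size and a truncated-Newton interior-point l_1 solver, whereas this sketch uses target-sized units and plain iterative shrinkage for brevity, and all function names are ours.

```python
import numpy as np

def clutter_scale(target, background, energy=0.9, alpha=0.05, iters=500):
    """Illustrative end-to-end sketch of steps (1)-(8)."""
    C, D = target.shape
    x = target.reshape(-1, order="F").astype(float)       # (1) column vectorization
    A, B = background.shape
    units = [background[r:r + C, c:c + D].reshape(-1, order="F")
             for r in range(0, A - C + 1, C)
             for c in range(0, B - D + 1, D)]             # (2) tile the background
    Psi = np.column_stack(units).astype(float)            # (3) background matrix
    mean = Psi.mean(axis=1)                               # (4) PCA whitening
    X = Psi - mean[:, None]
    lam, v = np.linalg.eigh(X.T @ X)
    idx = np.argsort(lam)[::-1]
    lam, v = lam[idx], v[:, idx]
    W = int(np.searchsorted(np.cumsum(lam) / lam.sum(), energy)) + 1
    R = X @ v[:, :W] / np.sqrt(lam[:W])                   # whitening matrix R = X v D
    Phi = R.T @ X                                         # background features
    xt = R.T @ (x - mean)                                 # target features
    xh = xt / np.linalg.norm(xt)                          # (5) normalize target
    Theta = Phi / np.linalg.norm(Phi, axis=0)             # (6) normalize columns
    s = np.zeros(Theta.shape[1])                          # (7) sparse representation
    L = np.linalg.norm(Theta, 2) ** 2                     # Lipschitz constant
    for _ in range(iters):
        z = s - Theta.T @ (Theta @ s - xh) / L            # gradient step
        s = np.sign(z) * np.maximum(np.abs(z) - alpha / L, 0.0)  # soft threshold
    return np.abs(s).sum()                                # (8) clutter scale
```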
The present invention has the following advantages:
1) By applying PCA dimensionality reduction to the target and background information, the invention simulates the first stage of human visual search, the feature selection stage; by computing the sparse representation of the target feature vector in the background feature matrix, it simulates the second stage, the joint search stage. It therefore conforms to the perceptual characteristics of human vision during target acquisition.
2) In quantifying background clutter, the invention considers not only background features but also the relative influence of target features, and thus conforms to the physical essence of background clutter.
On these two points, the background clutter quantification method of the invention agrees better with the physical mechanism by which background information influences the target acquisition process in field trials. Experimental results show that, compared with commonly used clutter quantification methods, the detection probability predicted by the method of the invention is more consistent with the detection probability obtained in observers' actual tests, so its prediction of target acquisition performance is more accurate.
Description of drawings:
Fig. 1 is a flowchart of the implementation process of the invention;
Fig. 2 shows the low-clutter background image, target image and similarity-vector distribution used by the invention;
Fig. 3 shows the medium-clutter background image, target image and similarity-vector distribution used by the invention;
Fig. 4 shows the high-clutter background image, target image and similarity-vector distribution used by the invention;
Fig. 5 shows, with the Search_2 image database as experimental data, the fitted curves between each background clutter scale and the observers' actual target detection probability.
Embodiment
With reference to Fig. 1, the implementation steps of the sparse representation-based background clutter quantification method of the invention are as follows:
Step 1. Arrange the pixel values of the target image column by column, in order of increasing column index, into the target vector x:
x = {t_{1,1}, t_{2,1}, t_{3,1}, ..., t_{C,1}, t_{1,2}, ..., t_{C,D}}^T
where t_{i,j} is the target pixel value at position (i, j), C and D are the numbers of rows and columns of the target image respectively, and T denotes matrix transposition.
Step 2. Divide the background image to be quantified into N equal-sized small units, the size of each unit in the horizontal and vertical directions being twice the corresponding size of the target.
N is determined by the size A × B of the background image to be quantified and the size M = C × D of each unit, namely
N = ⌊(A × B) / M⌋
where A and B are the numbers of rows and columns of the background image respectively, and ⌊x⌋ denotes the largest integer not exceeding x.
Step 3. Column-vectorize each background unit in turn to obtain the column vectors A_i, i = 1, 2, ..., N, and combine them into the background matrix
Ψ = {A_1, A_2, ..., A_N}
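The indexing conventions of steps 1-3 can be checked with a small numpy sketch. The function names are ours, and for brevity the unit size is passed in directly rather than derived from the target dimensions:

```python
import numpy as np

def vectorize(img):
    """Stack pixels column by column: x = [t11, t21, ..., tC1, t12, ..., tCD]^T."""
    return img.reshape(-1, order="F")

def background_matrix(bg, C, D):
    """Cut the background into C x D units (left to right, top to bottom) and
    collect their column-vectorizations A_1..A_N as the columns of Psi."""
    A, B = bg.shape
    cols = [vectorize(bg[r:r + C, c:c + D])
            for r in range(0, A - C + 1, C)
            for c in range(0, B - D + 1, D)]
    return np.column_stack(cols)

t = np.array([[1, 4], [2, 5], [3, 6]])      # C = 3 rows, D = 2 columns
assert vectorize(t).tolist() == [1, 2, 3, 4, 5, 6]
Psi = background_matrix(np.arange(36).reshape(6, 6), 3, 2)
assert Psi.shape == (6, 6)                  # M = 6 pixels per unit, N = 6 units
```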
Step 4. Apply principal component analysis (PCA) to reduce the dimensionality of the target vector x and the background matrix Ψ. The concrete steps are as follows:
(4a) Subtract from each element of the background matrix Ψ the average of its row, and arrange the results X_ij, ordered by subscript (i, j), into the background difference matrix X:
X_ij = Ψ_ij − Σ_{j=1}^{N} Ψ_ij / N,  i = 1, 2, ..., M,  j = 1, 2, ..., N
where Ψ_ij and X_ij are the values at position (i, j) of Ψ and X respectively, and M and N are the numbers of rows and columns of Ψ;
(4b) Multiply the background difference matrix X on the left by its transpose X^T to obtain the covariance matrix A:
A = X^T X;
(4c) Perform eigenvalue decomposition of the covariance matrix A to obtain its nonzero eigenvalues λ_k and corresponding eigenvectors v_k, k = 1, 2, ..., t, where t is the total number of nonzero eigenvalues of A, λ_1 ≥ λ_2 ≥ ... ≥ λ_t > 0, and the eigenvectors are mutually orthogonal;
(4d) Taking 90% of the sum of the nonzero eigenvalues of A as the threshold, form the diagonal matrix D from the reciprocals of the square roots of the first W nonzero eigenvalues:
D = diag(1/√λ_1, ..., 1/√λ_W),  such that  Σ_{k=1}^{W} λ_k / Σ_{k=1}^{t} λ_k ≈ 0.9
and at the same time take the eigenvectors v_k, k = 1, 2, ..., W, corresponding to these W nonzero eigenvalues to form the eigenvector matrix v = {v_1, v_2, ..., v_W};
(4e) Multiply the background difference matrix X on the right first by the eigenvector matrix v and then by the diagonal matrix D to obtain the M × W whitening matrix R:
R = X · v · D
where the number of rows M of R is far larger than its number of columns W;
(4f) Multiply the background difference matrix X on the left by the transpose of the whitening matrix R to obtain the background feature matrix:
Φ = R^T X;
(4g) Subtract from each element of the target vector x the average of the corresponding row of the background matrix Ψ, and arrange the results d_i in order of increasing subscript into the target difference vector d = {d_1, d_2, ..., d_M}^T:
d_i = x_i − Σ_{j=1}^{N} Ψ_ij / N,  i = 1, 2, ..., M
where x_i is the i-th element of x, Ψ_ij is the value of Ψ at position (i, j), and M and N are the numbers of rows and columns of Ψ;
(4h) Multiply the target difference vector d on the left by the transpose of the whitening matrix R to obtain the target feature vector:
x~ = R^T d.
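As a cross-check of steps (4a)-(4h), here is a numpy sketch of the whitening computation. Variable names mirror the symbols in the text; this is our reading of the formulas, not reference code:

```python
import numpy as np

def pca_features(x, Psi, energy=0.9):
    """Steps (4a)-(4h): whitened PCA features of target x and background Psi."""
    M, N = Psi.shape
    mean = Psi.mean(axis=1)                      # row-wise averages
    X = Psi - mean[:, None]                      # (4a) background difference matrix
    A = X.T @ X                                  # (4b) N x N covariance matrix
    lam, v = np.linalg.eigh(A)                   # (4c) eigendecomposition
    idx = np.argsort(lam)[::-1]                  # lambda_1 >= lambda_2 >= ...
    lam, v = lam[idx], v[:, idx]
    # (4d) keep the first W eigenvalues carrying ~90% of the eigenvalue sum
    W = int(np.searchsorted(np.cumsum(lam) / lam.sum(), energy)) + 1
    D = np.diag(1.0 / np.sqrt(lam[:W]))
    R = X @ v[:, :W] @ D                         # (4e) M x W whitening matrix
    Phi = R.T @ X                                # (4f) background feature matrix
    d = x - mean                                 # (4g) target difference vector
    return R.T @ d, Phi                          # (4h) target feature vector
```

With this construction the rows of Φ come out mutually orthogonal (Φ·Φ^T is diagonal), which keeps the subsequent column normalization well behaved.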
Besides the principal component analysis (PCA) described above, the dimensionality reduction of the target vector and the background matrix in this step can also be carried out by the following methods:
1) multidimensional scaling, MDS (I. Borg and P. Groenen, "Modern Multidimensional Scaling: theory and applications," 2nd ed., Springer-Verlag New York, 2005);
2) independent component analysis, ICA (A. Hyvärinen and E. Oja, "A Fast Fixed-Point Algorithm for Independent Component Analysis," Neural Computation, vol. 9, no. 7, pp. 1483-1492, Oct. 1997);
3) non-negative matrix factorization, NMF (D.D. Lee and H.S. Seung, "Algorithms for non-negative matrix factorization," in Advances in Neural Information Processing Systems, 2001);
4) locally linear embedding, LLE (S.T. Roweis and L.K. Saul, "Nonlinear Dimensionality Reduction by Locally Linear Embedding," Science, vol. 290, no. 22, Dec. 2000);
5) Laplacian eigenmaps, LE (M. Belkin and P. Niyogi, "Laplacian Eigenmaps for Dimensionality Reduction and Data Representation," Neural Computation 15, pp. 1373-1396, 2003).
Step 5. Normalize the target feature vector x~ to obtain the normalized target feature vector x^:
x^ = x~ / ||x~||_2
where ||·||_2 denotes the l_2 norm of a vector.
Step 6. Normalize each column of the background feature matrix Φ, and arrange the results Θ_i in order of increasing subscript to form the normalized background feature matrix Θ:
Θ_i = Φ_i / ||Φ_i||_2,  i = 1, 2, ..., N
where Φ_i and Θ_i are the i-th column vectors of Φ and Θ respectively, and N is the number of column vectors in Φ.
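Steps 5 and 6 reduce to l_2 normalization of one vector and of each matrix column; a minimal numpy sketch (function name ours):

```python
import numpy as np

def normalize_features(xt, Phi):
    """Steps 5-6: l2-normalize the target feature vector and each column of
    the background feature matrix (columns are assumed nonzero)."""
    xh = xt / np.linalg.norm(xt)
    Theta = Phi / np.linalg.norm(Phi, axis=0, keepdims=True)
    return xh, Theta
```

A zero column of Φ would cause a division by zero here; in practice the whitened background features are nonzero.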
Step 7. Compute the sparse representation of the normalized target feature vector x^ in the normalized background feature matrix Θ to obtain the similarity vector s, i.e. solve for the minimum l_0-norm solution s satisfying Θs = x^:
min ||s||_0  subject to  Θs = x^
The solution procedure is as follows:
(7a) Seek the minimum l_1-norm solution s satisfying Θs = x^:
min ||s||_1  subject to  Θs = x^    <1>
(7b) Relax formula <1> to:
min ||s||_1  subject to  ||Θs − x^||_2 ≤ ε    <2>
where ε is an arbitrary constant not less than 0; when ε = 0, formula <2> degenerates to formula <1>;
(7c) Using the LASSO formulation, convert formula <2> into:
min ||Θs − x^||_2  subject to  ||s||_1 ≤ σ    <3>
where σ is an arbitrary constant not less than 0;
(7d) Using the Lagrangian method, convert formula <3> into the unconstrained optimization problem:
s* = argmin_s (1/2)||x^ − Θs||_2^2 + α||s||_1    <4>
where α is the Lagrange multiplier and argmin_s denotes the value of s that minimizes the objective function;
(7e) Using the truncated-Newton interior-point method, turn formula <4> into an inequality-constrained quadratic program:
min (1/2)||x^ − Θs||^2 + α Σ_{i=1}^{N} μ_i    <5>
subject to  −μ_i ≤ s_i ≤ μ_i,  i = 1, 2, ..., N
where s_i is the i-th element of the similarity vector s, μ_i is the factor constraining s_i, and −μ_i ≤ s_i ≤ μ_i is the constraint condition;
(7f) Construct the logarithmic barrier function for the constraints −μ_i ≤ s_i ≤ μ_i:
Φ(s, μ) = −Σ_{i=1}^{N} log(μ_i + s_i) − Σ_{i=1}^{N} log(μ_i − s_i)
Using the logarithmic barrier, convert formula <5> into finding the optimal solution of the central-path equation defined by the weight factor β:
F_β(s, μ, α) = 0    <6>
(7g) Solving equation <6> by Newton's iteration gives the iterative formula:
[s^{(k+1)}; μ^{(k+1)}; α^{(k+1)}] = [s^{(k)}; μ^{(k)}; α^{(k)}] − ∇²F_β^{−1}(s^{(k)}, μ^{(k)}, α^{(k)}) · ∇F_β(s^{(k)}, μ^{(k)}, α^{(k)})
where s^{(k)}, μ^{(k)} and α^{(k)} are the results for s, μ and α after the k-th iteration, s^{(k+1)}, μ^{(k+1)} and α^{(k+1)} those after the (k+1)-th iteration, k is a nonnegative integer not larger than 50, ∇² denotes the second derivative (Hessian) of the function, and ∇ its first derivative (gradient);
(7h) Set the weight factor β = 0.5 and the initial values of the solution variables:
s^{(0)} = Θ^T x^
μ^{(0)} = 0.95 · sgn(Θ^T x^) · Θ^T x^ + 0.1 · max(sgn(Θ^T x^) · Θ^T x^)
α^{(0)} = 1
where max denotes the maximum element of a vector and sgn the sign of a vector element:
sgn(x) = 1 if x > 0, 0 if x = 0, −1 if x < 0;
(7i) Substitute the initial values and the weight factor into step (7g) and iterate until the objective values of formula <5> at two adjacent iterations differ by no more than 10^{-3}; the value of s obtained at that point is the minimum l_1-norm solution of formula <1>, and the procedure jumps to step (7k). If the maximum number of iterations, 50, is reached without obtaining the optimal solution, execute step (7j);
(7j) Take the final iteration result as the new initial value, double the weight factor, reset the iteration counter to zero, and return to step (7i);
(7k) Verify the sparsity of the minimum l_1-norm solution s for all test images. According to the conclusion of D. Donoho, "For most large underdetermined systems of linear equations the minimal l_1-norm near-solution approximates the sparsest solution," preprint, 2004, when the minimum l_1-norm solution s is sparse, it is equivalent to the minimum l_0-norm solution. Hence the minimum l_1-norm solution s obtained by the invention is the sparse representation of the normalized target feature vector x^ in the normalized background feature matrix Θ.
Besides the algorithm given above, the minimum l_1-norm problem can also be solved by the following methods:
1) gradient projection (M. Figueiredo, R. Nowak, and S. Wright, "Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems," IEEE Journal of Selected Topics in Signal Processing, vol. 1, no. 4, pp. 586-597, 2007);
2) homotopy continuation (D. Malioutov, M. Cetin, and A. Willsky, "Homotopy continuation for sparse signal representation," in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, 2005);
3) iterative shrinkage-thresholding (I. Daubechies, M. Defrise, and C. De Mol, "An iterative thresholding algorithm for linear inverse problems with a sparsity constraint," Communications on Pure and Applied Mathematics, vol. 57, pp. 1413-1457, 2004);
4) Nesterov's method (A. Beck and M. Teboulle, "A fast iterative shrinkage-thresholding algorithm for linear inverse problems," SIAM Journal on Imaging Sciences, vol. 2, no. 1, pp. 183-202, 2009);
5) alternating direction methods (J. Yang and Y. Zhang, "Alternating direction algorithms for l_1-problems in compressive sensing," (preprint) arXiv:0912.1185, 2009).
Step 8. Take the sum of the absolute values of the nonzero elements s_γi, i = 1, 2, ..., K, of the similarity vector s as the background clutter quantification scale of the entire image:
clutter scale = Σ_{i=1}^{K} |s_γi|
where K is the number of nonzero elements of s.
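The embodiment solves the unconstrained form <4> with a truncated-Newton interior-point method (steps (7d)-(7j)). As a lighter stand-in that minimizes the same objective, here is an iterative soft-thresholding sketch together with the step-8 scale; it illustrates the idea and is not the patent's solver:

```python
import numpy as np

def sparse_rep(xh, Theta, alpha=0.01, iters=2000):
    """Minimize (1/2)||xh - Theta s||^2 + alpha ||s||_1 by proximal gradient
    descent (iterative soft-thresholding); a stand-in for the patent's solver."""
    s = np.zeros(Theta.shape[1])
    L = np.linalg.norm(Theta, 2) ** 2                        # Lipschitz constant
    for _ in range(iters):
        z = s - Theta.T @ (Theta @ s - xh) / L               # gradient step
        s = np.sign(z) * np.maximum(np.abs(z) - alpha / L, 0.0)  # soft threshold
    return s

def clutter_scale(s):
    """Step 8: sum of the absolute values of the nonzero coefficients."""
    return np.abs(s).sum()
```

When x^ coincides with a single dictionary column θ_j, the minimizer of <4> is s = (1 − α)e_j, so the resulting scale is close to 1; a target that can only be matched by mixing many dissimilar background atoms yields a larger sum, consistent with the example values for Figs. 2(a)-4(a) given later in the text.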
The rationality and superiority of the invention can be further illustrated by the following experiments and comparative analysis:
Experimental verification:
1. Experimental conditions
The rationality of the image background clutter scale of the invention and its superiority in target acquisition performance prediction are verified on the Search_2 image database provided by the TNO Human Factors Research Institute of the Netherlands. The Search_2 database contains 44 high-resolution digital natural-scene images of different background complexities, together with the concrete parameters of each scene and the results of actual observer experiments. Detailed descriptions of the database can be found in A. Toet, P. Bijl, and J.M. Valeton, "Image data set for testing search and detection models," Opt. Eng. 40(9), 1760-1767 (2001); A. Toet, P. Bijl, F.L. Kooi, and J.M. Valeton, "A high-resolution image data set for testing search and detection models," Report TM-98-A020, TNO Human Factors Research Institute (1998); and A. Toet, "Errata in Report TNO-TM 1998 A020: A high-resolution image data set for testing search and detection models" (2001).
2. Example explanation
Figs. 2, 3 and 4 show the background images, target images and similarity-vector distributions used by the invention for three clutter grades: low, medium and high. In Fig. 2, (a) is the low-clutter background image, (b) is the target-area image, i.e. the part marked by the white rectangle in Fig. 2(a), and (c) is the corresponding computed similarity-vector distribution. In Fig. 3, (a) is the medium-clutter background image, (b) is the target-area image, i.e. the part marked by the white rectangle in Fig. 3(a), and (c) is the corresponding computed similarity-vector distribution. In Fig. 4, (a) is the high-clutter background image, (b) is the target-area image, i.e. the part marked by the white rectangle in Fig. 4(a), and (c) is the corresponding computed similarity-vector distribution.
As Figs. 2(c), 3(c) and 4(c) show, for images of all three clutter grades the similarity vector of the target image with respect to the background image is sparse, which confirms the rationality of computing the sparse representation of the normalized target feature vector in the normalized background feature matrix, as done in the invention.
As Figs. 2(a) and 2(b) show, the similarity between background and target is low, the background clutter of the whole image is very low, and the target is easy to detect; as Figs. 3(a) and 3(b) show, the similarity between background and target is higher, the background clutter of the whole image is higher, and detection is more difficult; as Figs. 4(a) and 4(b) show, compared with the two preceding images the similarity between background and target is the highest, the background clutter of the whole image is the highest, and the target is the most difficult to detect. Quantifying Figs. 2(a), 3(a) and 4(a) with the background clutter scale of the invention gives 2.0195, 1.3060 and 1.0120 respectively, consistent with the subjective visual perception described above; the quantification scale of the invention thus reflects the true background clutter situation.
3. Experimental results
The 7th, 15th, 23rd, 26th and 44th images of the Search_2 database were excluded from the experimental verification. This is because the invention studies single-target detection: the first four of these images contain two targets, which is beyond the scope of the invention, while in the last image the target is too small, which belongs to weak-target detection and likewise falls outside the field studied here. Hence, in the experiment verifying the superiority of the background clutter scale of the invention in target acquisition performance prediction, the valid data are the remaining 39 images.
Fig. 5 shows the fitted curves between the observers' actual target detection probability and the quantification results of POE, SV and the background clutter scale of the invention on these 39 images; Figs. 5(a), 5(b) and 5(c) are the fitted curves for POE, SV and the scale of the invention, respectively. The fitting formula is:
PD = (X/X_50)^E / (1 + (X/X_50)^E)
where X is the background clutter scale, and X_50 and E are constants obtained by fitting to the observers' actual target detection probability. PD is the actual target detection probability obtained in the subjective experiments, given by
PD = N_c / (N_c + N_f + N_m)
where N_c, N_f and N_m are, for each image in the Search_2 database, the number of correct detections of the target, the number of false detections, and the number of missed targets, respectively.
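The two evaluation formulas can be stated directly in code. This is a sketch; the X50 and E values used in the assertions are illustrative, not the fitted values reported in Table 1:

```python
import numpy as np

def empirical_pd(n_correct, n_false, n_missed):
    """Observed detection probability PD = Nc / (Nc + Nf + Nm) for one image."""
    return n_correct / (n_correct + n_false + n_missed)

def fitted_pd(X, X50, E):
    """Logistic-style fit PD(X) = (X/X50)^E / (1 + (X/X50)^E)."""
    r = (X / X50) ** E
    return r / (1.0 + r)

assert empirical_pd(45, 3, 12) == 0.75
assert fitted_pd(1.5, X50=1.5, E=3.0) == 0.5   # PD = 1/2 exactly at X = X50
```

By construction the fit passes through PD = 0.5 at X = X_50, which is why X_50 acts as the "half-detection" clutter level and E controls the steepness of the curve.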
Table 1 gives, in numerical form, the fitting results between each background clutter scale and the detection probabilities obtained in the actual subjective experiments, including the values of X_50 and E corresponding to each scale, and the performance measures RMSE, CC and SCC evaluating the consistency between the predicted target detection probability and the observers' actual target detection probability for each scale. Here X_50 and E are the curve-fitting parameters, RMSE is the root-mean-square error, CC is the Pearson correlation coefficient, and SCC is the Spearman rank correlation coefficient.
Table 1: performance comparison of the background clutter scale of the invention with POE and SV
As Table 1 shows, the Pearson correlation coefficient and the Spearman rank correlation coefficient between the background clutter scale of the invention and the observers' actual target detection probability are both greater than those of the other background clutter scales, and its root-mean-square error is smaller, which proves the superiority of the background clutter scale of the invention in target acquisition performance prediction.

Claims (3)

1. A background clutter quantification method based on sparse representation, comprising the following steps:
(1) column-vectorizing the two-dimensional target image to obtain the target vector x;
(2) dividing the background image into N equal-sized small units, the size of each unit in the horizontal and vertical directions being twice the corresponding size of the target;
(3) column-vectorizing each two-dimensional background unit and combining the results into the background matrix Ψ;
(4) reducing the dimensionality of the target vector x and the background matrix Ψ by principal component analysis (PCA) to obtain the target feature vector x~ and the background feature matrix Φ, respectively;
(5) normalizing the target feature vector x~ to obtain the normalized target feature vector x^:
x^ = x~ / ||x~||_2
where ||·||_2 denotes the l_2 norm of a vector;
(6) normalizing each column of the background feature matrix Φ, the results Θ_i, arranged in order of increasing subscript, forming the normalized background feature matrix Θ:
Θ_i = Φ_i / ||Φ_i||_2,  i = 1, 2, ..., N
where Φ_i and Θ_i are the i-th column vectors of Φ and Θ respectively, and N is the number of column vectors in Φ;
(7) computing the sparse representation of the normalized target feature vector x^ in the normalized background feature matrix Θ to obtain the similarity vector s, i.e. solving for the minimum l_0-norm solution s satisfying Θs = x^:
min ||s||_0  subject to  Θs = x^
(8) taking the sum of the absolute values of the nonzero elements s_γi, i = 1, 2, ..., K, of the similarity vector s as the background clutter quantification scale:
clutter scale = Σ_{i=1}^{K} |s_γi|
where K is the number of nonzero elements of s.
2. background clutter quantizing method according to claim 1, wherein step (4) described by principal component analysis (PCA) PCA to object vector x and background matrix Ψ dimensionality reduction, carry out as follows:
(4a) will deduct the X as a result that place row element average obtains with each element among the background matrix Ψ Ij, with subscript (i j) is preface, constitutes background differential matrix X,
X ij = &Psi; ij - &Sigma; j = 1 N &Psi; ij / N , i = 1,2 , . . . , M , j = 1,2 , . . . , N
Wherein, Ψ IjAnd X IjBe respectively background matrix Ψ and background differential matrix X and be positioned at that (M and N are respectively line number and the columns of background matrix Ψ for i, the value of j) locating;
(4b) multiply the background difference matrix X on the left by its transpose X^T to obtain the covariance matrix A:
A = X^T X
(4c) perform eigenvalue decomposition on the covariance matrix A to obtain its nonzero eigenvalues λ_k and corresponding eigenvectors v_k, k = 1, 2, ..., t, where t is the total number of nonzero eigenvalues of the covariance matrix A, λ_1 ≥ λ_2 ≥ ... ≥ λ_t > 0, and the eigenvectors are mutually orthogonal;
(4d) taking 90% of the sum of the nonzero eigenvalues of the covariance matrix A as the threshold, form the diagonal matrix D from the reciprocals of the square roots of the first W nonzero eigenvalues:
D = diag(1/√λ_1, ..., 1/√λ_W),  where W satisfies Σ_{k=1}^{W} λ_k / Σ_{k=1}^{t} λ_k ≈ 0.9;
at the same time, take the eigenvectors v_k, k = 1, 2, ..., W, corresponding to these W nonzero eigenvalues to form the eigenvector matrix v = {v_1, v_2, ..., v_W};
(4e) multiply the background difference matrix X on the right by the eigenvector matrix v and then by the diagonal matrix D to obtain the whitening matrix R of size M × W:
R = X · v · D
where the number of rows M of the whitening matrix R is much larger than its number of columns W;
(4f) multiply the background difference matrix X on the left by the transpose of the whitening matrix R to obtain the background feature matrix:
Φ = R^T X;
(4g) subtract from each element of the target vector x the mean of the corresponding row of the background matrix Ψ to obtain the results d_i which, in ascending order of subscript, form the target difference vector d = {d_1, d_2, ..., d_M}^T:
d_i = x_i − Σ_{j=1}^{N} Ψ_ij / N,  i = 1, 2, ..., M
where x_i is the i-th element of the target vector x, Ψ_ij is the value at position (i, j) of the background matrix Ψ, and M and N are the number of rows and columns of the background matrix Ψ;
(4h) multiply the target difference vector d on the left by the transpose of the whitening matrix R to obtain the target feature vector:
x̃ = R^T d.
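The whitening procedure of steps (4a)–(4h) can be sketched with numpy as follows. The function name, the small eigenvalue cutoff, and the test data are illustrative assumptions; the 90% energy threshold follows step (4d):

```python
import numpy as np

def pca_features(Psi, x, energy=0.9):
    """Steps (4a)-(4h): PCA whitening of the background matrix Psi (M x N)
    and projection of the target vector x (length M)."""
    row_mean = Psi.mean(axis=1, keepdims=True)
    X = Psi - row_mean                        # (4a) background difference matrix
    A = X.T @ X                               # (4b) N x N covariance matrix
    lam, V = np.linalg.eigh(A)                # (4c) eigendecomposition (ascending)
    order = np.argsort(lam)[::-1]             # reorder eigenvalues descending
    lam, V = lam[order], V[:, order]
    lam_pos = lam[lam > 1e-10]                # keep nonzero eigenvalues only
    V = V[:, :lam_pos.size]
    # (4d) smallest W whose eigenvalues cover ~90% of the total energy
    W = int(np.searchsorted(np.cumsum(lam_pos) / lam_pos.sum(), energy)) + 1
    D = np.diag(1.0 / np.sqrt(lam_pos[:W]))
    R = X @ V[:, :W] @ D                      # (4e) whitening matrix, M x W
    Phi = R.T @ X                             # (4f) background feature matrix
    d = x - row_mean.ravel()                  # (4g) target difference vector
    x_tilde = R.T @ d                         # (4h) target feature vector
    return Phi, x_tilde, R

rng = np.random.default_rng(1)
Psi = rng.normal(size=(64, 10))
x = rng.normal(size=64)
Phi, x_tilde, R = pca_features(Psi, x)
# Whitening property of step (4e): R^T R is the W x W identity.
print(np.allclose(R.T @ R, np.eye(R.shape[1])))  # True
```

The check at the end follows from R^T R = D v^T A v D = D · diag(λ_1, ..., λ_W) · D = I, which is why R is called a whitening matrix.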
3. The background clutter quantification method according to claim 1, wherein the sparse representation of the normalized target feature vector x̂ in the normalized background feature matrix Θ described in step (7) is calculated as follows:
(7a) seek the minimum l_1-norm solution s satisfying x̂ = Θs:
min ||s||_1  subject to  x̂ = Θs  <1>
(7b) relax formula <1> to:
min ||s||_1  subject to  ||x̂ − Θs||_2 ≤ ε  <2>
where ε is an arbitrary constant not less than 0; when ε = 0, formula <2> degenerates to formula <1>;
(7c) using the LASSO algorithm, convert formula <2> into:
min ||x̂ − Θs||_2  subject to  ||s||_1 ≤ σ  <3>
where σ is an arbitrary constant not less than 0;
(7d) using the Lagrangian method, convert formula <3> into the unconstrained optimization formula:
s* = arg min_s (1/2)||x̂ − Θs||²_2 + α||s||_1  <4>
where α is the Lagrange multiplier and arg min_s denotes the value of the variable s that minimizes the objective function;
(7e) using the truncated Newton interior-point method, convert formula <4> into a quadratic programming formula with inequality constraints:
min (1/2)||x̂ − Θs||²_2 + α Σ_{i=1}^{N} μ_i  <5>
subject to  −μ_i ≤ s_i ≤ μ_i,  i = 1, 2, ..., N
where s_i is the i-th element of the similarity vector s, μ_i is the factor constraining s_i, and −μ_i ≤ s_i ≤ μ_i is the constraint condition;
(7f) set up the logarithmic barrier function for the constraint condition −μ_i ≤ s_i ≤ μ_i:
B(s, μ) = −Σ_{i=1}^{N} log(μ_i + s_i) − Σ_{i=1}^{N} log(μ_i − s_i)
and, using the logarithmic barrier function, convert formula <5> into solving for the optimum of the central path function F_β(s, μ, α) defined by the weight factor β, at which ∇F_β(s, μ, α) = 0:
F_β(s, μ, α) = β((1/2)||x̂ − Θs||²_2 + α Σ_{i=1}^{N} μ_i) + B(s, μ)  <6>
(7g) solve equation <6> by Newton's iteration method, which gives the iterative formula:
[s^(k+1); μ^(k+1); α^(k+1)] = [s^(k); μ^(k); α^(k)] − ∇²F_β^(−1)(s^(k), μ^(k), α^(k)) · ∇F_β(s^(k), μ^(k), α^(k))
where s^(k), μ^(k) and α^(k) denote s, μ and α after the k-th iteration, s^(k+1), μ^(k+1) and α^(k+1) denote s, μ and α after the (k+1)-th iteration, k is a nonnegative integer not greater than 50, ∇² denotes the second derivative (Hessian) of a function, and ∇ denotes the first derivative (gradient) of a function;
(7h) set the weight factor β = 0.5 and the initial value of the solution vector:
[s^(0); μ^(0); α^(0)] = [Θ^T x̂;  0.95 · sgn(Θ^T x̂) · Θ^T x̂ + 0.1 · max(sgn(Θ^T x̂) · Θ^T x̂);  1]
where max denotes the maximum element of a vector, · between vectors denotes the element-wise product, and sgn denotes the sign of each vector element:
sgn(z) = 1 for z > 0, 0 for z = 0, and −1 for z < 0;
(7i) substitute the initial values and the weight factor into step (7g) and iterate until the difference between the values of formula <5> obtained in two adjacent iterations is not greater than 10^(−3); the value of s obtained at that point is the minimum l_1-norm solution of formula <1>, so jump to step (7k); if the maximum number of iterations, 50, is reached without obtaining the optimal solution, execute step (7j);
(7j) take the final iteration result as the new initial value, update the weight factor to twice its previous value, reset the iteration count to zero, and return to step (7i);
(7k) verify the sparsity of the minimum l_1-norm solution s for all test images, and take s as the sparse representation of the normalized target feature vector x̂ in the normalized background feature matrix Θ.
CN 201110001480 2011-01-06 2011-01-06 Sparse representation-based background clutter quantification method Expired - Fee Related CN102073875B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201110001480 CN102073875B (en) 2011-01-06 2011-01-06 Sparse representation-based background clutter quantification method

Publications (2)

Publication Number Publication Date
CN102073875A true CN102073875A (en) 2011-05-25
CN102073875B CN102073875B (en) 2012-12-05

Family

ID=44032409


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102542261A (en) * 2011-12-31 2012-07-04 华中科技大学 Two-dimensional computable target detection, recognition and identification performance predicting method
CN102636575A (en) * 2012-03-27 2012-08-15 深圳职业技术学院 Optimized sawtooth coverage scanning device and method used for ultrasonic imaging detecting instrument
CN102737253A (en) * 2012-06-19 2012-10-17 电子科技大学 SAR (Synthetic Aperture Radar) image target identification method
CN103901403A (en) * 2012-12-26 2014-07-02 现代摩比斯株式会社 Target tracking method reflecting restriction condition
CN107367715A (en) * 2017-07-28 2017-11-21 西安电子科技大学 Clutter suppression method based on rarefaction representation

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101183460A (en) * 2007-11-27 2008-05-21 西安电子科技大学 Color picture background clutter quantizing method
CN101901352A (en) * 2010-08-06 2010-12-01 北京航空航天大学 Infrared background clutter quantifying method


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Journal of Xidian University, vol. 36, no. 1, 2009-02-28, He Guojing et al., "A clutter scale based on structural similarity", pp. 166-170, re claims 1-3 *


Also Published As

Publication number Publication date
CN102073875B (en) 2012-12-05


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20121205

Termination date: 20180106