CN102073875B - Sparse representation-based background clutter quantification method - Google Patents


Info

Publication number
CN102073875B
CN102073875B (application CN201110001480A)
Authority
CN
China
Prior art keywords
matrix
background
vector
target
formula
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN 201110001480
Other languages
Chinese (zh)
Other versions
CN102073875A (en)
Inventor
杨翠 (Yang Cui)
李倩 (Li Qian)
吴洁 (Wu Jie)
张建奇 (Zhang Jianqi)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN 201110001480 priority Critical patent/CN102073875B/en
Publication of CN102073875A publication Critical patent/CN102073875A/en
Application granted granted Critical
Publication of CN102073875B publication Critical patent/CN102073875B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention discloses a sparse representation-based background clutter quantification method, which mainly addresses two shortcomings of conventional clutter scales: they do not accord well with the physical essence of background clutter, namely its relativity to the target, and they do not fully reflect the properties of human vision. The method comprises the following implementation steps: partitioning the gray background image to be quantified into a number of equal-sized small units and combining these units into a background matrix; extracting the principal features of the target vector and the background matrix to obtain a target feature vector and a background feature matrix; normalizing the target feature vector and the background feature matrix; and computing the sparsest representation of the normalized target feature vector in the normalized background feature matrix, the sum of whose absolute values is taken as the background clutter scale of the entire image. The method makes full use of two major characteristics of human visual search, improves the consistency between the predicted target detection probability and the detection probability observed in subjective experiments, and can be used for predicting and assessing the target acquisition performance of photoelectric imaging systems.

Description

Background clutter quantification method based on sparse representation
Technical field
The invention belongs to the technical field of image processing and specifically relates to a background clutter quantification method based on sparse representation, which can be used for predicting and assessing the target acquisition performance of photoelectric imaging systems.
Background technology
Target acquisition performance of photoelectric imaging systems is a key concept in military missions such as electro-optical countermeasures, reconnaissance, early warning, and camouflage. In recent years, with the introduction of new materials and advances in manufacturing processes, photodetectors have reached or approached the background limit, and background factors have become the key factor restricting the target acquisition performance of photoelectric imaging systems. Using a reasonable and accurate background clutter quantification scale in models characterizing target acquisition performance allows their predictions to reflect the field performance of an imaging system more accurately.
Background clutter is a visual perception effect, referring to target-like objects that interfere with target detection. It has two typical characteristics: it is based on both background and target features, and it is relative to the target of interest. Since the 1980s, researchers abroad have conducted extensive studies on the quantitative characterization of background clutter and have proposed a variety of quantitative clutter scales. The most widely used are the statistical variance scale SV, see D. E. Schmieder and M. R. Weathersby, "Detection performance in clutter with variable resolution," IEEE Trans. Aerosp. Electron. Syst. AES-19(4), 622-630 (1983), and the probability of edge scale POE, see G. Tidhar, G. Reiter, Z. Avital, Y. Hadar, S. R. Rotman, V. George, and M. L. Kowalczyk, "Modeling human search and target acquisition performance: IV. Detection probability in the cluttered environment," Opt. Eng. 33, 801-808 (1994). However, the SV scale is built on statistical processing of the photoelectric image and does not account for human visual perception, while the POE scale refers only to background information and thus contradicts the physical essence of background clutter as being relative to the target. Consequently, neither clutter scale can reasonably reflect the influence of the background on the target acquisition process, and both are difficult to use for accurate prediction and assessment of the field performance of photoelectric imaging systems.
Summary of the invention
The objective of the invention is to overcome the deficiencies of previous background clutter quantification methods by proposing a background clutter scale based on sparse representation, so as to improve the accuracy of field performance prediction and assessment for imaging systems.
To this end, the present invention transforms the target and background information from the spatial domain to a feature domain via dimensionality reduction, and quantifies the background clutter in the feature space using sparse representation theory. The concrete steps are as follows:
(1) Column-vectorize the two-dimensional target image to obtain the target vector x;
(2) Divide the background image into N equal-sized small units, the horizontal and vertical size of each unit being twice the corresponding size of the target;
(3) Column-vectorize each two-dimensional background unit and combine the results into the background matrix Ψ;
(4) Reduce the dimensionality of the target vector x and the background matrix Ψ by principal component analysis (PCA) to obtain the target feature vector x̃ and the background feature matrix Φ, respectively;
(5) Normalize the target feature vector x̃ to obtain the normalized target feature vector x̂:
x̂ = x̃ / ||x̃||_2
where || · ||_2 denotes the l_2 norm of a vector;
(6) Normalize each column vector of the background feature matrix Φ, and arrange the results Θ_i in order of increasing index to form the normalized background feature matrix Θ:
Θ_i = Φ_i / ||Φ_i||_2, i = 1, 2, ..., N
where Φ_i and Θ_i are the i-th column vectors of Φ and Θ respectively, and N is the number of column vectors in Φ;
(7) Compute the sparse representation of the normalized target feature vector x̂ in the normalized background feature matrix Θ to obtain the similarity vector s, i.e. solve for the minimum l_0-norm solution of s satisfying Θs = x̂:
min ||s||_0 subject to Θs = x̂
(8) Sum the absolute values |s_i|, i = 1, 2, ..., K, of the nonzero elements of the similarity vector s as the background clutter quantification scale:
Σ_{i=1}^{K} |s_i|
where K is the number of nonzero elements in the similarity vector s.
The present invention has the following advantages:
1) By applying PCA dimensionality reduction to the target and background information, the invention simulates the first stage of human visual search, the feature selection stage; by computing the sparse representation of the target feature vector in the background feature matrix, it simulates the second stage, the joint search stage. The method therefore conforms to the perceptual characteristics of human vision during target acquisition.
2) The clutter quantification not only considers background features but also incorporates the relative influence of target features, conforming to the physical essence of background clutter.
Based on these two points, the background clutter quantification method of the present invention agrees better with the physical mechanism by which background information influences the target acquisition process in field trials. Experimental results show that, compared with previously used clutter quantification methods, the detection probability predicted by the present method is more consistent with the detection probability obtained in observers' actual tests, so its prediction of target acquisition performance is more accurate.
Description of drawings:
Fig. 1 is a schematic diagram of the implementation procedure of the present invention;
Fig. 2 shows a low-clutter background image, target image, and the corresponding similarity vector distribution used by the present invention;
Fig. 3 shows a medium-clutter background image, target image, and the corresponding similarity vector distribution;
Fig. 4 shows a high-clutter background image, target image, and the corresponding similarity vector distribution;
Fig. 5 shows, with the Search_2 image database as experimental data, the fitted curves between each background clutter quantification scale and the observers' actual target detection probability.
Embodiment
With reference to Fig. 1, the implementation steps of the sparse representation-based background clutter quantification method of the present invention are as follows:
Step 1. Take the pixel values of the target image column by column, in order of increasing column index, to form the target vector x:
x = {t_{1,1}, t_{2,1}, t_{3,1}, ..., t_{C,1}, t_{1,2}, ..., t_{C,D}}^T
where t_{i,j} denotes the target pixel value at position (i, j), C and D are the number of rows and columns of the target image respectively, and T denotes matrix transposition.
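The column-vectorization of Step 1 can be sketched in NumPy; the 4×3 target below is a made-up example, not data from the patent:

```python
import numpy as np

# A hypothetical 4x3 target image (C = 4 rows, D = 3 columns).
target = np.arange(12, dtype=float).reshape(4, 3)

# Column-wise vectorization: stack the columns top to bottom, giving
# x = {t_{1,1}, t_{2,1}, ..., t_{C,1}, t_{1,2}, ..., t_{C,D}}^T.
x = target.flatten(order="F")
print(x.shape)  # (12,)
```

`order="F"` (Fortran order) walks down each column first, which matches the ordering of the formula above.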
Step 2. Divide the background image to be quantified into N equal-sized small units, the horizontal and vertical size of each unit being twice the corresponding size of the target.
N is determined by the size A × B of the background image to be quantified and the size M = C × D of each unit:
N = ⌊A/C⌋ · ⌊B/D⌋
where A and B are the number of rows and columns of the background image respectively, and ⌊x⌋ denotes the largest integer less than or equal to x.
Step 3. Column-vectorize each background unit in turn to obtain the column vectors A_i, i = 1, 2, ..., N, and combine them into the background matrix Ψ:
Ψ = {A_1, A_2, ..., A_N}
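Steps 2 and 3 (tiling the background into units and stacking the vectorized units as columns of Ψ) can be sketched as follows; the helper name, the 8×8 unit size, and the random background are illustrative assumptions:

```python
import numpy as np

def build_background_matrix(bg, uC, uD):
    """Tile a background image into non-overlapping uC x uD units,
    column-vectorize each, and stack them as the columns of Psi."""
    A, B = bg.shape
    cols = []
    for i in range(A // uC):        # floor(A/uC) rows of units
        for j in range(B // uD):    # floor(B/uD) columns of units
            unit = bg[i*uC:(i+1)*uC, j*uD:(j+1)*uD]
            cols.append(unit.flatten(order="F"))
    return np.column_stack(cols)    # shape (uC*uD, N)

bg = np.random.rand(32, 32)
Psi = build_background_matrix(bg, 8, 8)
print(Psi.shape)  # (64, 16)
```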
Step 4. Apply principal component analysis (PCA) to reduce the dimensionality of the target vector x and the background matrix Ψ. The concrete steps are as follows:
(4a) Subtract from each element of the background matrix Ψ the mean of its row, and arrange the results X_ij by subscript (i, j) to form the background difference matrix X:
X_ij = Ψ_ij − Σ_{j=1}^{N} Ψ_ij / N, i = 1, 2, ..., M, j = 1, 2, ..., N
where Ψ_ij and X_ij are the values of Ψ and X at position (i, j), and M and N are the numbers of rows and columns of Ψ;
(4b) Multiply the transpose X^T of the background difference matrix by X to obtain the covariance matrix A:
A = X^T X;
(4c) Perform eigenvalue decomposition of the covariance matrix A to obtain its nonzero eigenvalues λ_k and corresponding eigenvectors v_k, k = 1, 2, ..., t, where t is the total number of nonzero eigenvalues of A, λ_1 ≥ λ_2 ≥ ... ≥ λ_t > 0, and the eigenvectors are mutually orthogonal;
(4d) Taking 90% of the sum of the nonzero eigenvalues of A as a threshold, form the diagonal matrix D from the reciprocal square roots of the first W nonzero eigenvalues:
D = diag(1/√λ_1, ..., 1/√λ_W), where W satisfies Σ_{k=1}^{W} λ_k / Σ_{k=1}^{t} λ_k ≈ 0.9.
At the same time, take the eigenvectors corresponding to these W nonzero eigenvalues, v_k, k = 1, 2, ..., W, to form the feature matrix v = {v_1, v_2, ..., v_W};
(4e) Right-multiply the background difference matrix X first by the feature matrix v and then by the diagonal matrix D to obtain the whitening matrix R of size M × W:
R = X · v · D
where the number of rows M of R is much larger than its number of columns W;
(4f) Left-multiply the background difference matrix X by the transpose of the whitening matrix R to obtain the background feature matrix:
Φ = R^T X;
(4g) Subtract from each element of the target vector x the mean of the corresponding row of the background matrix Ψ, and arrange the results d_i in order of increasing index to form the target difference vector d = {d_1, d_2, ..., d_M}^T:
d_i = x_i − Σ_{j=1}^{N} Ψ_ij / N, i = 1, 2, ..., M
where x_i is the i-th element of x, Ψ_ij is the value of Ψ at position (i, j), and M and N are the numbers of rows and columns of Ψ;
(4h) Left-multiply the target difference vector d by the transpose of the whitening matrix R to obtain the target feature vector:
x̃ = R^T d.
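Steps (4a) through (4h) can be sketched with NumPy's eigendecomposition; the function name, the relative cutoff used to discard numerically zero eigenvalues, and the toy shapes are assumptions for illustration, not the patented implementation:

```python
import numpy as np

def pca_features(x, Psi, energy=0.9):
    """Sketch of steps (4a)-(4h): PCA whitening of the background
    matrix and projection of background and target into the feature space."""
    M, N = Psi.shape
    row_mean = Psi.mean(axis=1, keepdims=True)   # (4a) per-row mean
    X = Psi - row_mean                           # background difference matrix
    A = X.T @ X                                  # (4b) N x N covariance matrix
    lam, v = np.linalg.eigh(A)                   # (4c) eigendecomposition
    order = np.argsort(lam)[::-1]                # sort eigenvalues descending
    lam, v = lam[order], v[:, order]
    lam = lam[lam > lam.max() * 1e-10]           # keep nonzero eigenvalues
    # (4d) smallest W capturing ~90% of the eigenvalue sum
    W = int(np.searchsorted(np.cumsum(lam) / lam.sum(), energy) + 1)
    D = np.diag(1.0 / np.sqrt(lam[:W]))          # reciprocal square roots
    R = X @ v[:, :W] @ D                         # (4e) whitening matrix, M x W
    Phi = R.T @ X                                # (4f) background feature matrix
    d = x - row_mean.ravel()                     # (4g) target difference vector
    x_tilde = R.T @ d                            # (4h) target feature vector
    return x_tilde, Phi

rng = np.random.default_rng(0)
x_tilde, Phi = pca_features(rng.random(64), rng.random((64, 16)))
print(x_tilde.shape, Phi.shape)
```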
Besides the principal component analysis (PCA) described above, the following methods may also be used for dimensionality reduction of the target vector and background matrix in this step:
1) multidimensional scaling MDS (I. Borg and P. Groenen, "Modern Multidimensional Scaling: theory and applications," 2nd ed., Springer-Verlag New York, 2005);
2) independent component analysis ICA (A. Hyvärinen and E. Oja, "A Fast Fixed-Point Algorithm for Independent Component Analysis," Neural Computation, vol. 9, no. 7, pp. 1483-1492, Oct. 1997);
3) nonnegative matrix factorization NMF (D. D. Lee and H. S. Seung, "Algorithms for non-negative matrix factorization," in Advances in Neural Information Processing Systems, 2001);
4) locally linear embedding LLE (S. T. Roweis and L. K. Saul, "Nonlinear Dimensionality Reduction by Locally Linear Embedding," Science, vol. 290, Dec. 2000);
5) Laplacian eigenmaps LE (M. Belkin and P. Niyogi, "Laplacian Eigenmaps for Dimensionality Reduction and Data Representation," Neural Computation 15, pp. 1373-1396, 2003).
Step 5. Normalize the target feature vector x̃ to obtain the normalized target feature vector x̂:
x̂ = x̃ / ||x̃||_2
where || · ||_2 denotes the l_2 norm of a vector.
Step 6. Normalize each column vector of the background feature matrix Φ, and arrange the results Θ_i in order of increasing index to form the normalized background feature matrix Θ:
Θ_i = Φ_i / ||Φ_i||_2, i = 1, 2, ..., N
where Φ_i and Θ_i are the i-th column vectors of Φ and Θ respectively, and N is the number of column vectors in Φ.
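Steps 5 and 6 amount to l2-normalizing one vector and each matrix column; a minimal sketch with made-up numbers:

```python
import numpy as np

# Toy target feature vector and background feature matrix.
x_tilde = np.array([3.0, 4.0])
Phi = np.array([[1.0, 0.0],
                [0.0, 2.0]])

x_hat = x_tilde / np.linalg.norm(x_tilde)                 # x_hat = x_tilde / ||x_tilde||_2
Theta = Phi / np.linalg.norm(Phi, axis=0, keepdims=True)  # Theta_i = Phi_i / ||Phi_i||_2
print(x_hat)  # [0.6 0.8]
```

After this step every column of Θ, like x̂ itself, has unit l_2 norm.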
Step 7. Compute the sparse representation of the normalized target feature vector x̂ in the normalized background feature matrix Θ to obtain the similarity vector s, i.e. solve for the minimum l_0-norm solution of s satisfying Θs = x̂:
min ||s||_0 subject to Θs = x̂
The solution procedure is as follows:
(7a) Seek the minimum l_1-norm solution of s satisfying Θs = x̂:
min ||s||_1 subject to Θs = x̂   <1>
(7b) Relax formula <1> to:
min ||s||_1 subject to ||x̂ − Θs||_2 ≤ ε   <2>
where ε is an arbitrary constant not less than 0; when ε = 0, formula <2> degenerates to formula <1>;
(7c) Using the LASSO formulation, convert formula <2> into:
min ||x̂ − Θs||_2 subject to ||s||_1 ≤ σ   <3>
where σ is an arbitrary constant not less than 0;
(7d) Using the Lagrangian method, convert formula <3> into the unconstrained optimization problem:
s* = argmin_s (1/2)||x̂ − Θs||^2 + α||s||_1   <4>
where α is a Lagrange multiplier and s* denotes the value of s that minimizes the objective function;
(7e) Using a truncated Newton interior-point method, convert formula <4> into a quadratic program with inequality constraints:
min (1/2)||x̂ − Θs||^2 + α Σ_{i=1}^{N} μ_i subject to −μ_i ≤ s_i ≤ μ_i, i = 1, 2, ..., N   <5>
where s_i is the i-th element of the similarity vector s, μ_i is the factor constraining s_i, and −μ_i ≤ s_i ≤ μ_i is the constraint condition;
(7f) Construct the logarithmic barrier function for the constraints −μ_i ≤ s_i ≤ μ_i:
Φ(s, μ) = −Σ_{i=1}^{N} [log(μ_i + s_i) + log(μ_i − s_i)]
Using the logarithmic barrier function, convert formula <5> into solving for the optimum of the central path equation defined by the weight factor β:
F_β(s, μ, α) = 0   <6>
(7g) Solve equation <6> by Newton's iteration, giving the iterative formula:
[s^(k+1); μ^(k+1); α^(k+1)] = [s^(k); μ^(k); α^(k)] − ∇²F_β^{−1}(s^(k), μ^(k), α^(k)) · ∇F_β(s^(k), μ^(k), α^(k))
where s^(k), μ^(k), and α^(k) denote the results of s, μ, and α after the k-th iteration, s^(k+1), μ^(k+1), and α^(k+1) the results after the (k+1)-th iteration, k is a nonnegative integer not greater than 50, ∇² denotes taking the second derivative of the function, and ∇ denotes taking the first derivative;
(7h) Set the weight factor β = 0.5 and the initial values of the solution vector:
s^(0) = Θ^T x̂,
μ^(0) = 0.95 · sgn(Θ^T x̂) · Θ^T x̂ + 0.1 · max(sgn(Θ^T x̂) · Θ^T x̂),
α^(0) = 1
where max denotes the maximal element of a vector and sgn the elementwise sign, so that sgn(v) · v gives the elementwise absolute value;
(7i) Substitute the initial values and the weight factor into step (7g) and iterate until the difference between the values of formula <5> at two adjacent iterations is not greater than 10^{−3}; the value of s obtained at that point is the minimum l_1-norm solution of formula <1>, and the procedure jumps to step (7k). If the maximum of 50 iterations is reached without obtaining the optimal solution, execute step (7j);
(7j) Take the final iteration result as the new initial value, double the weight factor, reset the iteration count to zero, and return to step (7i);
(7k) Verify the sparsity of the minimum l_1-norm solution s for all test images. According to the conclusion of D. Donoho, "For most large underdetermined systems of linear equations the minimal l_1-norm solution approximates the sparsest solution," preprint, 2004: when the minimum l_1-norm solution s has the sparsity property, it is equivalent to the minimum l_0-norm solution. Hence the minimum l_1-norm solution s obtained by the present invention is the sparse representation of the normalized target feature vector x̂ in the normalized background feature matrix Θ.
Besides the algorithm given in the present invention, the minimum l_1-norm problem can also be solved by the following methods:
1) gradient projection (M. Figueiredo, R. Nowak, and S. Wright, "Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems," IEEE Journal of Selected Topics in Signal Processing, vol. 1, no. 4, pp. 586-597, 2007);
2) homotopy continuation (D. Malioutov, M. Cetin, and A. Willsky, "Homotopy continuation for sparse signal representation," in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, 2005);
3) iterative shrinkage-thresholding (I. Daubechies, M. Defrise, and C. De Mol, "An iterative thresholding algorithm for linear inverse problems with a sparsity constraint," Communications on Pure and Applied Mathematics, vol. 57, pp. 1413-1457, 2004);
4) Nesterov's method (A. Beck and M. Teboulle, "A fast iterative shrinkage-thresholding algorithm for linear inverse problems," SIAM Journal on Imaging Sciences, vol. 2, no. 1, pp. 183-202, 2009);
5) alternating direction methods (J. Yang and Y. Zhang, "Alternating direction algorithms for l_1-problems in compressive sensing," (preprint) arXiv:0912.1185, 2009).
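Of the alternative solvers above, the iterative shrinkage-thresholding method of Daubechies et al. is compact enough to sketch in NumPy; the penalty alpha, the iteration count, and the toy dictionary below are illustrative choices rather than values from the patent:

```python
import numpy as np

def ista_l1(Theta, x_hat, alpha=0.01, iters=500):
    """Iterative soft-thresholding for min 0.5*||x_hat - Theta s||^2 + alpha*||s||_1.
    Step size 1/L, with L the squared spectral norm of Theta, ensures convergence."""
    L = np.linalg.norm(Theta, 2) ** 2
    s = np.zeros(Theta.shape[1])
    for _ in range(iters):
        g = s + (Theta.T @ (x_hat - Theta @ s)) / L          # gradient step
        s = np.sign(g) * np.maximum(np.abs(g) - alpha / L, 0.0)  # soft threshold
    return s

# Toy check: if x_hat is exactly one dictionary column, the recovered
# similarity vector should concentrate on that column.
rng = np.random.default_rng(1)
Theta = rng.standard_normal((20, 10))
Theta /= np.linalg.norm(Theta, axis=0)   # unit-norm columns, as in Step 6
x_hat = Theta[:, 3]
s = ista_l1(Theta, x_hat)
print(int(np.argmax(np.abs(s))))  # 3
```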
Step 8. Sum the absolute values |s_i|, i = 1, 2, ..., K, of the nonzero elements of the similarity vector s as the background clutter quantification scale of the entire image:
Σ_{i=1}^{K} |s_i|
where K is the number of nonzero elements in the similarity vector s.
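Step 8 reduces to summing the magnitudes of the nonzero entries of s, which is simply its l_1 norm; a one-line sketch with a made-up similarity vector:

```python
import numpy as np

# Hypothetical similarity vector from Step 7.
s = np.array([0.0, 1.2, -0.3, 0.0, 0.5])

# Sum of absolute values of the nonzero elements; since zeros contribute
# nothing, this equals np.linalg.norm(s, 1).
clutter = np.sum(np.abs(s[s != 0]))
print(clutter)  # 2.0
```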
The rationality and superiority of the present invention can be further illustrated by the following experiments and comparative analysis:
Experimental verification:
1. Experimental conditions
The rationality of the image background clutter scale of the present invention, and its superiority in target acquisition performance prediction, are verified using the Search_2 image database provided by the TNO Human Factors Research Institute in the Netherlands. The Search_2 database contains 44 high-resolution digital natural scene images of different background complexities, together with the concrete parameters of every scene and the results of observers' actual observation experiments. Detailed descriptions of the database can be found in A. Toet, P. Bijl, and J. M. Valeton, "Image data set for testing search and detection models," Opt. Eng. 40(9), 1760-1767 (2001); A. Toet, P. Bijl, F. L. Kooi, and J. M. Valeton, "A high-resolution image data set for testing search and detection models," Report TM-98-A020, TNO Human Factors Research Institute (1998); and A. Toet, "Errata in Report TNO-TM 1998 A020: A high-resolution image data set for testing search and detection models" (2001).
2. Example description
Figs. 2, 3, and 4 show the background images, target images, and similarity vector distributions used by the present invention for three clutter grades: low, medium, and high. Fig. 2(a) is the low-clutter background image; Fig. 2(b) is the target region image, i.e. the part marked by the white rectangle in Fig. 2(a); Fig. 2(c) is the corresponding computed similarity vector distribution. Fig. 3(a) is the medium-clutter background image; Fig. 3(b) is the target region image, i.e. the part marked by the white rectangle in Fig. 3(a); Fig. 3(c) is the corresponding computed similarity vector distribution. Fig. 4(a) is the high-clutter background image; Fig. 4(b) is the target region image, i.e. the part marked by the white rectangle in Fig. 4(a); Fig. 4(c) is the corresponding computed similarity vector distribution.
As can be seen from Figs. 2(c), 3(c), and 4(c), for images of all three clutter grades the similarity vector between target image and background image is sparse, which confirms the rationality of the algorithm adopted in the present invention for computing the sparse representation of the normalized target feature vector in the normalized background feature matrix.
As can be seen from Figs. 2(a) and 2(b), the similarity between background and target is low, the background clutter of the whole image is very low, and the target is easy to detect. From Figs. 3(a) and 3(b), the similarity between background and target is higher, the background clutter is higher, and target detection is more difficult. From Figs. 4(a) and 4(b), compared with the previous two images, the similarity between background and target is the highest, the background clutter is the highest, and the target is the most difficult to detect. Quantifying Figs. 2(a), 3(a), and 4(a) with the background clutter quantification scale of the present invention yields scales of 2.0195, 1.3060, and 1.0120 respectively. This is consistent with the subjective perception of human vision described above, showing that the quantification scale of the present invention can reflect the true condition of background clutter.
3. Experimental results
The 7th, 15th, 23rd, 26th, and 44th images of the Search_2 database were excluded from the experimental verification. This is because the present invention studies single-target detection: the first four of these images contain two targets, which is beyond the scope of the invention, while in the last image the target is too small, which belongs to dim small target detection and is also outside the research field of the invention. Thus, in the experiment verifying the superiority of the background clutter quantification scale of the present invention for target acquisition performance prediction, the final valid data are the remaining 39 images.
Fig. 5 shows the fitted curves between the quantification results of POE, SV, and the background clutter scale of the present invention on these 39 images and the observers' actual target detection probability; Figs. 5(a), 5(b), and 5(c) are the fitted curves for POE, SV, and the scale of the present invention, respectively. The fitting formula is:
PD = (X/X_50)^E / (1 + (X/X_50)^E)
where X denotes the background clutter scale, and X_50 and E are constants obtained by fitting to the observers' actual target detection probability. PD is the observers' actual target detection probability obtained from the subjective experiments, computed as:
PD = N_c / (N_c + N_f + N_m)
where N_c, N_f, and N_m are, for each image in the Search_2 database, the number of correct target detections, the number of false detections, and the number of missed detections, respectively.
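The two formulas can be transcribed directly; the sample counts in the usage lines are invented for illustration:

```python
def pd_model(X, X50, E):
    """Predicted detection probability as a function of clutter scale X,
    per the fitting formula PD = (X/X50)^E / (1 + (X/X50)^E)."""
    r = (X / X50) ** E
    return r / (1.0 + r)

def pd_empirical(Nc, Nf, Nm):
    """Observed detection probability PD = Nc / (Nc + Nf + Nm)."""
    return Nc / (Nc + Nf + Nm)

print(pd_model(2.0, 2.0, 3.0))  # 0.5 (X = X50 gives PD = 0.5 by construction)
print(pd_empirical(45, 3, 2))   # 0.9
```

Note that X_50 is, by construction, the clutter scale at which the fitted detection probability equals 0.5, and E controls the steepness of the curve.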
Table 1 gives, in numerical form, the fitting results between each background clutter scale and the detection probability obtained in the actual subjective experiments, including the values of X_50 and E for each scale and the performance measures RMSE, CC, and SCC, which evaluate the consistency between the predicted target detection probability and the observers' actual target detection probability for each scale. X_50 and E are the curve-fitting parameters, RMSE is the root-mean-square error, CC is the Pearson correlation coefficient, and SCC is the Spearman rank correlation coefficient.
Table 1: Performance comparison of the background clutter scale of the present invention with POE and SV
As Table 1 shows, the Pearson correlation coefficient and Spearman rank correlation coefficient between the background clutter quantification scale of the present invention and the observers' actual target detection probability are both greater than those of the other background clutter scales, while the root-mean-square error is smaller, which proves the superiority of the scale of the present invention in target acquisition performance prediction.

Claims (2)

1. background clutter quantizing method based on rarefaction representation comprises following process:
(1) with the target image column vectorization of two dimension, obtains object vector x;
(2) background image is divided into N equal-sized junior unit, the size of each junior unit level and vertical direction is two times of target corresponding size;
(3) background junior unit column vectorization that each is two-dimentional, and be combined into background matrix Ψ;
(4) by principal component analysis (PCA) PCA to object vector x and background matrix Ψ dimensionality reduction, obtain target feature vector
Figure FDA0000154739230000011
and background characteristics matrix Φ respectively:
(4a) will deduct the X as a result that place row element average obtains with each element among the background matrix Ψ Ij, with subscript (i j) is preface, constitutes background differential matrix X,
Figure FDA0000154739230000012
i=1,2,...,M,j=1,2,...,N
Wherein, Ψ IjAnd X IjBe respectively background matrix Ψ and background differential matrix X and be positioned at that (M and N are respectively line number and the columns of background matrix Ψ for i, the value of j) locating;
(4b) take advantage of its transposed matrix X with the background differential matrix X right side T, obtain covariance matrix A:
A=X TX
(4c) covariance matrix A is carried out characteristic value decomposition, obtain its nonzero eigenvalue λ kAnd corresponding proper vector v k, k=1,2 ..., t, wherein t is total number of covariance matrix A nonzero eigenvalue, λ 1>=λ 2>=...>=λ t>0, the proper vector mutually orthogonal;
(4d) with covariance matrix A nonzero eigenvalue summation 90% as threshold value, W subduplicate formation diagonal matrix D reciprocal of nonzero eigenvalue before getting:
Figure FDA0000154739230000013
satisfy
Figure FDA0000154739230000014
Simultaneously, get this W nonzero eigenvalue characteristic of correspondence vector v k, k=1,2 ..., W, composition characteristic matrix: v={v 1, v 2..., v W;
(4e) with background differential matrix X premultiplication eigenmatrix v, premultiplication diagonal matrix D obtains albefaction matrix R again M * W:
R=X*v*D
Wherein, the line number M of albefaction matrix R is far longer than its columns W;
(4f), obtain the background characteristics matrix with the transposed matrix premultiplication background differential matrix X of albefaction matrix R:
Φ=R TX;
(4g) will use the d as a result of corresponding row element average among each element subtracting background matrix Ψ of object vector x i,, constitute target difference vector d={d according to subscript sequence number order from small to large 1, d 2..., d M} T,
Figure FDA0000154739230000021
i=1,2,...,M,j=1,2,...,N;
Wherein, x iBe i the element of object vector x, Ψ Ij(M and N are respectively line number and the columns of background matrix Ψ for i, the value of j) locating for background matrix Ψ is positioned at;
(4h) Left-multiply the target difference vector d by the transpose of the whitening matrix R to obtain the target feature vector x̂:
x̂ = RTd;
(5) Normalize the target feature vector x̂ to obtain the normalized target feature vector x̄:
x̄ = x̂ / ||x̂||2
Wherein, ||·||2 denotes the l2 norm of a vector;
(6) Normalize each column vector of the background feature matrix Φ to obtain Θi, and arrange the results in ascending order of subscript to form the normalized background feature matrix Θ:
Θi = Φi/||Φi||2,  i = 1, 2, ..., N
Wherein, Φi and Θi are the i-th column vectors of the background feature matrix Φ and the normalized background feature matrix Θ, respectively, and N is the number of column vectors in the background feature matrix Φ;
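Steps (4g)–(6) (difference vector, feature projection, and l2 normalization) can be sketched as follows; Ψ, x and R are random stand-ins, with R given orthonormal columns so it can play the role of the whitening matrix:

```python
import numpy as np

rng = np.random.default_rng(2)
M, N, W = 64, 16, 5
Psi = rng.standard_normal((M, N))                 # background matrix
x = rng.standard_normal(M)                        # target vector
R, _ = np.linalg.qr(rng.standard_normal((M, W)))  # stand-in whitening matrix

# Step (4g): d_i = x_i - (1/N) * sum_j Psi_ij
d = x - Psi.mean(axis=1)

# Step (4h): target feature vector = R^T d
x_feat = R.T @ d

# Step (5): normalize the target feature vector to unit l2 norm
x_hat = x_feat / np.linalg.norm(x_feat)

# Step (6): normalize every column of the background feature matrix
Phi = R.T @ (Psi - Psi.mean(axis=1, keepdims=True))
Theta = Phi / np.linalg.norm(Phi, axis=0, keepdims=True)
```

After this, every column of Θ and the vector x̄ lie on the unit sphere, so the sparse coding in step (7) compares directions rather than magnitudes.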
(7) Compute the sparsest representation of the normalized target feature vector x̄ in the normalized background feature matrix Θ to obtain the similarity vector s, i.e., solve for the minimum l0-norm solution s satisfying Θs = x̄:
min ||s||0  subject to  Θs = x̄
(8) Sum the absolute values of the nonzero elements of the similarity vector s, and take this sum as the background clutter quantization scale of the image:
clutter scale = |s_n1| + |s_n2| + ... + |s_nK|
wherein K is the number of nonzero elements of the similarity vector s, and s_n1, ..., s_nK denote those nonzero elements; since the zero elements contribute nothing, this sum equals ||s||1.
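Step (8) is simply the l1 norm of the sparse similarity vector, since zero entries contribute nothing to the sum. A toy example with a hypothetical s:

```python
import numpy as np

# Hypothetical sparse similarity vector from step (7)
s = np.array([0.0, 0.8, 0.0, -0.3, 0.0, 0.1])

K = np.count_nonzero(s)                    # number of nonzero elements
clutter_scale = np.abs(s[s != 0.0]).sum()  # sum of |s_i| over nonzero entries

# Equivalently, the l1 norm of s
assert np.isclose(clutter_scale, np.linalg.norm(s, 1))
```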
2. The background clutter quantization method according to claim 1, wherein the sparse representation of the normalized target feature vector x̄ in the normalized background feature matrix Θ described in step (7) is calculated as follows:
(7a) Solve for the minimum l1-norm solution s satisfying Θs = x̄:
min ||s||1  subject to  Θs = x̄    <1>
(7b) Relax formula <1> to:
min ||s||1  subject to  ||Θs − x̄||2 ≤ ε    <2>
Wherein, ε is an arbitrary constant not less than 0; when ε = 0, formula <2> degenerates to formula <1>;
(7c) Using the LASSO algorithm, convert formula <2> into:
min ||Θs − x̄||2  subject to  ||s||1 ≤ σ    <3>
Wherein, σ is an arbitrary constant not less than 0;
(7d) Using the Lagrangian method, convert formula <3> into the unconstrained optimization formula:
ŝ = arg min_s { (1/2)||Θs − x̄||2² + α||s||1 }    <4>
Wherein, α is a Lagrange multiplier, and arg min_s denotes the value of the variable s at which the objective function attains its minimum;
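Formula <4> is the standard l1-regularized least-squares problem. The claim solves it with a truncated-Newton interior-point method; purely as a lighter-weight illustration, the same problem can be solved by proximal gradient descent (ISTA). All data below are synthetic stand-ins, and the solver is a generic substitute, not the patent's algorithm:

```python
import numpy as np

def solve_l1_ls(Theta, x_bar, alpha, n_iter=5000):
    """min_s 0.5*||Theta s - x_bar||_2^2 + alpha*||s||_1 via ISTA
    (iterative soft thresholding), a generic substitute for the
    interior-point solver described in steps (7e)-(7j)."""
    L = np.linalg.norm(Theta, 2) ** 2  # Lipschitz constant of the smooth part
    s = np.zeros(Theta.shape[1])
    for _ in range(n_iter):
        z = s - Theta.T @ (Theta @ s - x_bar) / L                # gradient step
        s = np.sign(z) * np.maximum(np.abs(z) - alpha / L, 0.0)  # soft threshold
    return s

rng = np.random.default_rng(3)
Theta = rng.standard_normal((8, 12))
Theta = Theta / np.linalg.norm(Theta, axis=0)  # unit-norm columns, as in step (6)
s_true = np.zeros(12)
s_true[[2, 7]] = [1.0, -0.5]                   # 2-sparse ground truth
x_bar = Theta @ s_true
s = solve_l1_ls(Theta, x_bar, alpha=1e-3)
```

With a small α, the recovered s should be close to the sparse ground truth, which is the behaviour the clutter scale of claim 1 relies on.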
(7e) Using the truncated Newton interior-point algorithm, convert formula <4> into a quadratic programming formula with inequality constraints:
min (1/2)||Θs − x̄||2² + α(μ1 + μ2 + ... + μN)  subject to  −μi ≤ si ≤ μi, i = 1, 2, ..., N    <5>
Wherein, si is the i-th element of the similarity vector s, μi is the factor constraining si, and −μi ≤ si ≤ μi is the constraint condition;
(7f) Construct the logarithmic barrier function for the constraint condition −μi ≤ si ≤ μi:
B(s, μ) = −Σi log(μi + si) − Σi log(μi − si)
Using the logarithmic barrier function, convert formula <5> into solving for the optimal solution of the central-path equation Fβ(s, μ, α) = 0 defined by the weight factor β:
Fβ(s, μ, α) = ∇[ β·((1/2)||Θs − x̄||2² + α Σi μi) + B(s, μ) ] = 0    <6>
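The logarithmic barrier in step (7f) is finite strictly inside the box −μi < si < μi and grows without bound at the boundary, which is what keeps the Newton iterates feasible. A minimal sketch; the barrier form −Σ log(μi + si) − Σ log(μi − si) is the standard one for this box constraint, reconstructed here because the original formula survives only as an image:

```python
import math

def log_barrier(s, mu):
    """Logarithmic barrier for -mu_i <= s_i <= mu_i: finite strictly
    inside the box, +infinity on or outside the boundary."""
    total = 0.0
    for si, mi in zip(s, mu):
        if not (-mi < si < mi):
            return math.inf
        total -= math.log(mi + si) + math.log(mi - si)
    return total
```

At the centre of a unit box the barrier is zero, and it increases steeply as any si approaches ±μi, penalising iterates that drift toward the constraint boundary.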
(7g) Solve equation <6> by the Newton iteration method, giving the iterative formula:
(s(k+1), μ(k+1), α(k+1)) = (s(k), μ(k), α(k)) − [∇Fβ]⁻¹·Fβ, evaluated at (s(k), μ(k), α(k))
Wherein, s(k), μ(k) and α(k) denote the results for s, μ and α after the k-th iteration, and s(k+1), μ(k+1) and α(k+1) the results after the (k+1)-th iteration; k is a nonnegative integer not greater than 50; since Fβ is itself a gradient, ∇Fβ corresponds to taking the second derivative of the barrier-augmented objective function, and Fβ to taking its first derivative;
(7h) Set the initial value of the weight factor β = 0.5 and the initial value of the solution vector;
Wherein, max denotes taking the maximum element of a vector, and sgn denotes taking the positive or negative sign of a vector element;
(7i) Substitute the initial values and the weight factor into step (7g) and iterate until the difference between the values of formula <5> obtained from two adjacent iterations is not greater than 10⁻³; the value of s obtained at this point is the minimum l1-norm solution of formula <1>, and execution jumps to step (7k). If the maximum number of iterations, 50, is reached without obtaining the optimal solution, execute step (7j);
(7j) Take the final iteration result as the new initial value, double the weight factor β, reset the iteration count to zero, and return to step (7i);
(7k) Verify the sparsity of the minimum l1-norm solution s for all test images, and take s as the sparse representation of the normalized target feature vector x̄ in the normalized background feature matrix Θ.
CN 201110001480 2011-01-06 2011-01-06 Sparse representation-based background clutter quantification method Expired - Fee Related CN102073875B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201110001480 CN102073875B (en) 2011-01-06 2011-01-06 Sparse representation-based background clutter quantification method

Publications (2)

Publication Number Publication Date
CN102073875A CN102073875A (en) 2011-05-25
CN102073875B (en) 2011-05-25 2012-12-05

Family

ID=44032409

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201110001480 Expired - Fee Related CN102073875B (en) 2011-01-06 2011-01-06 Sparse representation-based background clutter quantification method

Country Status (1)

Country Link
CN (1) CN102073875B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102542261B * 2011-12-31 2014-03-26 Huazhong University of Science and Technology Two-dimensional computable method for predicting target detection, recognition and identification performance
CN102636575B * 2012-03-27 2015-11-11 Shenzhen Polytechnic Optimized sawtooth coverage scanning device and method for an ultrasonic imaging detection instrument
CN102737253B * 2012-06-19 2014-03-05 University of Electronic Science and Technology of China SAR (Synthetic Aperture Radar) image target identification method
KR20140083753A * 2012-12-26 2014-07-04 Hyundai Mobis Co., Ltd. Method for tracking a target considering constraints
CN107367715B * 2017-07-28 2020-04-14 Xidian University Clutter suppression method based on sparse representation

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101183460A * 2007-11-27 2008-05-21 Xidian University Color image background clutter quantization method
CN101901352A * 2010-08-06 2010-12-01 Beihang University Infrared background clutter quantification method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
He Guojing et al., "A clutter scale based on structural similarity," Journal of Xidian University, 2009, Vol. 36, No. 1, pp. 166-170. *

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20121205

Termination date: 20180106