CN108664448A - A high-dimensional sparse vector reconstruction method based on IQR_OMP - Google Patents

A high-dimensional sparse vector reconstruction method based on IQR_OMP

Info

Publication number
CN108664448A
Authority
CN
China
Prior art keywords
vector
matrix
omp
iteration
kth
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810431881.0A
Other languages
Chinese (zh)
Other versions
CN108664448B (en)
Inventor
赵健
申富饶
董珍君
赵金煕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University
Original Assignee
Nanjing University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University filed Critical Nanjing University
Priority to CN201810431881.0A priority Critical patent/CN108664448B/en
Publication of CN108664448A publication Critical patent/CN108664448A/en
Application granted granted Critical
Publication of CN108664448B publication Critical patent/CN108664448B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10 Complex mathematical operations
    • G06F17/16 Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization

Abstract

The invention discloses a high-dimensional sparse vector reconstruction method based on IQR_OMP. Starting from the fact that the OMP algorithm computes a least-squares solution at every step, an incremental IGR_OMP algorithm is proposed based on the Greville recursion, and a series of useful recursive properties are obtained, which effectively improves the computation speed. Building on the analysis of the IGR_OMP algorithm, the relationship between the Greville recursion and the QR decomposition is used to establish an incremental IQR_OMP algorithm based on the QR decomposition; this algorithm effectively reduces the amount of computation and improves the effectiveness of the algorithm in solving large-scale problems. High-dimensional sparse vector reconstruction, simply put, provides the sparsest coefficient representation of a high-dimensional vector over an overcomplete system of vectors.

Description

A high-dimensional sparse vector reconstruction method based on IQR_OMP
Technical field
The present invention relates to a high-dimensional sparse vector reconstruction method based on IQR_OMP.
Background technology
Compressed sensing (Compressive Sensing, abbreviated CS) is a new key technology in the field of image and signal processing. Its essence is to use information of far lower dimension than the signal vector or image vector to effectively determine the sparsest coefficient representation of this high-dimensional vector over an overcomplete dictionary; the key problem is to solve for the sparse solution of an underdetermined linear system, so as to reconstruct the signal or image vector (references: Yan Jingwen et al., Compressed Sensing and Applications, National Defense Industry Press, Beijing, 2015; E.J. Candes, M. Wakin, An introduction to compressive sampling, IEEE Signal Processing Magazine, 2008, 25(2), 21-30).
High-dimensional sparse vector reconstruction is a particularly important problem in signal processing and image processing. Although the complexity of the classical OMP (Orthogonal Matching Pursuit) algorithm is lower than that of other non-greedy algorithms, it is still some distance from real-time signal processing in practical applications.
Summary of the invention
In view of the deficiencies of the prior art, the present invention discloses a high-dimensional sparse vector reconstruction method based on IGR_OMP (Incremental Greville Recursion based Orthogonal Matching Pursuit), comprising the following steps:
Let the matrix $A = [a_1, a_2, \ldots, a_K] \in \mathbb{R}^{M \times K}$, $K \ll M$, $\mathrm{rank}(A) = K$ ($\mathbb{R}^{M \times K}$: the set of $M \times K$ real matrices; $a_i$ ($i = 1, 2, \ldots, K$): the $i$-th real column vector of the dictionary matrix $A$; $\mathrm{rank}(A)$: the rank of the matrix $A$);

Write $A_1 = [a_1]$, $A_k = [A_{k-1}, a_k]$, $k = 2, 3, \ldots, K$ ($a_1$: the first column vector of $A$; $A_1$: the matrix of the 1st iteration; $A_k$: the matrix with $k$ column vectors at the $k$-th iteration; $A_{k-1}$: the matrix of iteration $k-1$).
The generalized inverse $A^+$ of the matrix $A$ is computed by the Greville recursive algorithm (reference: Ben-Israel A., Greville T.N.E., Generalized Inverses: Theory and Applications, Wiley, New York, 1974):
Initial value: $A_1^+ = a_1^+ = a_1^T/(a_1^T a_1)$.

For $k = 2, 3, \ldots, K$, the recursion computes

$d_k = A_{k-1}^+ a_k$,
$b_k = a_k - A_{k-1} d_k$,
$c_k = b_k^T/(b_k^T b_k)$,
$A_k^+ = \begin{bmatrix} A_{k-1}^+ - d_k c_k \\ c_k \end{bmatrix}$,

where $d_k$ is a temporary vector used in computing $A_k^+$, and $b_k$ is a temporary column vector of the same dimension as $a_k$. The characteristic of this algorithm is that $A_k^+$ is obtained from $A_{k-1}^+$ by recursion, i.e. $A_1^+, A_2^+, \ldots$ are computed step by step rather than computing each $A_k^+$ directly from scratch at every iteration (a numerical sketch of this update is given below).
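As an illustration, the Greville step above can be written in a few lines of numpy. This is a minimal sketch under the full-column-rank assumption of the patent ($b_k \neq 0$); the function name `greville_append` and the variable names are ours, not the patent's.

```python
import numpy as np

def greville_append(A_pinv, A, a):
    # One Greville step: given A_{k-1}^+ (shape (k-1, M)) and a new column a_k,
    # return A_k^+ for A_k = [A_{k-1}, a_k]. Assumes b_k != 0 (full column rank).
    d = A_pinv @ a                  # d_k = A_{k-1}^+ a_k
    b = a - A @ d                   # b_k = a_k - A_{k-1} d_k
    c = b / (b @ b)                 # c_k = b_k^T / (b_k^T b_k), kept as a 1-D array
    return np.vstack([A_pinv - np.outer(d, c), c])

# Initial value A_1^+ = a_1^T / (a_1^T a_1), e.g.:
#   A_pinv = a1[None, :] / (a1 @ a1)
```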
The steps of the incremental IGR_OMP sparse recovery of high-dimensional vectors in compressed sensing are as follows:

Step A1, input the observed data vector $y \in \mathbb{R}^M$ and the dictionary matrix $A = [a_1, a_2, \ldots, a_N] \in \mathbb{R}^{M \times N}$; let the initial support set index $\Omega_0 = \varnothing$; let the initial residual vector $r_0$ equal the observed data vector $y$, i.e. $r_0 = y$; let $k = 1$. Here $\mathbb{R}^M$ is the set of $M \times 1$ real column vectors, $\mathbb{R}^{M \times N}$ is the set of $M \times N$ real matrices, and $a_i$ denotes the $i$-th real column vector of the dictionary matrix $A$, $i = 1, 2, \ldots, N$;
Step A2, find the column $j_k$ of the dictionary matrix $A$ most strongly correlated with the residual vector $r_{k-1}$ of iteration $k-1$:

$j_k \in \arg\max_j |(r_{k-1}, a_j)|$, $\Omega_k = \Omega_{k-1} \cup \{j_k\}$,

where $\Omega_k$ is the support set index of the $k$-th iteration, $\Omega_{k-1}$ is the support set index of iteration $k-1$, and $a_j$ is the $j$-th real column vector of the dictionary matrix $A$;
Step A3, from the generalized inverse $A_{\Omega_{k-1}}^+$ of the dictionary submatrix corresponding to the support index $\Omega_{k-1}$ of iteration $k-1$, incrementally compute the generalized inverse $A_{\Omega_k}^+$ of the dictionary submatrix corresponding to the support index $\Omega_k$ of the $k$-th iteration:

If $k = 1$, $j_k$ is $j_1$, and the generalized inverse of the vector $A_{\Omega_1} = a_{j_1}$ equals $A_{\Omega_1}^+ = a_{j_1}^T/(a_{j_1}^T a_{j_1})$; go to step A4.

If $k \neq 1$, execute the following computations in order:

$d = A_{\Omega_{k-1}}^+ a_{j_k}$,
$b = a_{j_k} - A_{\Omega_{k-1}} d$,
$c = b^T/(b^T b)$,
$A_{\Omega_k}^+ = \begin{bmatrix} A_{\Omega_{k-1}}^+ - dc \\ c \end{bmatrix}$,

where $a_{j_k}$ is the $j_k$-th real column vector of the dictionary matrix $A$, $A_{\Omega_{k-1}}^+$ is the generalized inverse matrix of $A_{\Omega_{k-1}}$, $d$ is a temporary vector, $b$ is a temporary $M \times 1$ column vector of the same dimension as $a_{j_k}$, and $c$ is a temporary $1 \times M$ row vector.
Step A4, solve the minimization problem $\min_x \|y - A_{\Omega_k} x\|_2$; its solution is $x_k = A_{\Omega_k}^+ y$,

where $\|\cdot\|_2$ denotes the Euclidean norm of a vector;
Step A5, update the residual vector of the $k$-th iteration: $r_k = y - A_{\Omega_k} x_k$;
Step A6, judge whether the stopping criterion is met, i.e. whether the Euclidean norm of the $k$-th iteration residual vector satisfies $\|r_k\|_2 \le \varepsilon$ ($\varepsilon$ is a very small number, e.g. $10^{-6}$); if it is satisfied, execute step A7; otherwise update $k$ to $k+1$ and repeat steps A2 to A5;
Step A7, output the sparse coefficient vector $x \in \mathbb{R}^N$: the entries of $x$ indexed by $\Omega_k$ are the corresponding entries of $x_k$, and all other entries are zero,

where $x(i)$ is the $i$-th element of the sparse coefficient vector $x$, $x_k(i)$ is the $i$-th element of the vector $x_k$, and $\mathbb{R}^N$ is the set of $N \times 1$ real column vectors.
$b_k = 0$ if and only if $a_k \in R(A_{k-1}) = \mathrm{span}\{a_1, a_2, \ldots, a_{k-1}\}$,

i.e. $a_k$ is linearly dependent on $a_1, a_2, \ldots, a_{k-1}$, with $A_{k-1} = [a_1, a_2, \ldots, a_{k-1}]$ ($a_k$ denotes the $k$-th column vector of $A_k$, $R(A_{k-1})$ denotes the space spanned by the $k-1$ column vectors of $A_{k-1}$, $\mathrm{span}\{a_1, a_2, \ldots, a_{k-1}\}$ denotes the vector space spanned by $a_1, a_2, \ldots, a_{k-1}$, $A_{k-1}$ is the matrix of iteration $k-1$, and $A_{k-1}^+$ is its generalized inverse).

Indeed $b_k = (I_M - A_{k-1}A_{k-1}^+)a_k \in R(A_{k-1})^{\perp}$ is the orthogonal projection of $a_k$ onto the orthogonal complement of $R(A_{k-1})$; if $b_k = 0$, it means $a_k \in R(A_{k-1})$, so $a_k$ is linearly dependent on $R(A_{k-1})$ ($I_M$ is the $M \times M$ identity matrix, $A_{k-1}^+$ denotes the generalized inverse of $A_{k-1}$, and $R(A_{k-1})^{\perp}$ denotes the orthogonal complement of the subspace $R(A_{k-1})$).
Since $b_k = (I_M - A_{k-1}A_{k-1}^+)a_k$ is the orthogonal projection of $a_k$ onto $R(A_{k-1})^{\perp}$, we have ($a_i$, $i = 1, 2, \ldots, k-1$, being the $i$-th column vector of $A_{k-1}$):

$(b_k, a_i) = 0$, $i = 1, 2, \ldots, k-1$   (4.7)

i.e. $b_k$ is orthogonal to $a_1, a_2, \ldots, a_{k-1}$.
The invention also discloses a high-dimensional sparse vector reconstruction method based on IQR_OMP (Incremental QR decomposition based Orthogonal Matching Pursuit), comprising the following steps:
Step B1, input the observed data vector $y \in \mathbb{R}^m$ and the dictionary matrix $A = [a_1, a_2, \ldots, a_n] \in \mathbb{R}^{m \times n}$; let the initial support set index $\Omega_0 = \varnothing$; let the initial residual vector equal the observed data vector, i.e. $r_0 = y$; let $k = 1$, where $\mathbb{R}^m$ denotes the $m \times 1$ real column vectors, $\mathbb{R}^{m \times n}$ denotes the $m \times n$ real matrices, and $a_i$ denotes the $i$-th $m \times 1$ real column vector of the dictionary matrix $A$, $i = 1, 2, \ldots, n$;
Step B2, find the column $j_k$ of the dictionary matrix $A$ most strongly correlated with the residual vector $r_{k-1}$ of iteration $k-1$:

$j_k \in \arg\max_j |(r_{k-1}, a_j)|$, $\Omega_k = \Omega_{k-1} \cup \{j_k\}$,

where $\Omega_k$ is the support set index of the $k$-th iteration, $\Omega_{k-1}$ is the support set index of iteration $k-1$, and $a_j$ is the $j$-th real column vector of the dictionary matrix $A$;
Step B3, from the $Q_{k-1}$ and $R_{k-1}$ of iteration $k-1$, recursively compute the $Q_k$ and $R_k$ of the $k$-th iteration:

If $k = 1$, $j_k = j_1$, then $R_1 = \|a_{j_1}\|_2$, and the first column of the matrix $Q_1$, $Q_1(:,1)$ or $q_1$, equals $a_{j_1}/\|a_{j_1}\|_2$, i.e. $Q_1 = a_{j_1}/\|a_{j_1}\|_2$; go to step B4.

If $k \neq 1$, execute the following computations in order:

$\beta_k = Q_{k-1}^T a_{j_k}$,
$\tilde{q} = a_{j_k} - Q_{k-1}\beta_k$,
$\alpha_k = \|\tilde{q}\|_2$, $q_k = \tilde{q}/\alpha_k$,
$Q_k = [Q_{k-1}, q_k]$, $R_k = \begin{bmatrix} R_{k-1} & \beta_k \\ 0 & \alpha_k \end{bmatrix}$,

where the vector $a_{j_k}$ is the $j_k$-th column vector of the dictionary matrix $A$, $q_k$ is the $k$-th column vector of the matrix $Q$, $\beta_k$ is a temporary vector of the $k$-th iteration, and $\alpha_k$ is a temporary scalar of the $k$-th iteration;
Step B4, update the residual vector of the $k$-th iteration: $r_k = r_{k-1} - (q_k^T y)\, q_k$,

where $q_k$ is the $k$-th column vector of the matrix $Q_k$ of the $k$-th iteration;
Step B5, judge whether the stopping criterion is met, i.e. whether the Euclidean norm of the $k$-th residual vector satisfies $\|r_k\|_2 \le \varepsilon$ ($\varepsilon$ is a very small number, e.g. $10^{-6}$); if it is satisfied, execute step B6; otherwise update $k$ to $k+1$ and repeat steps B2 to B4;
Step B6, solve the minimization problem $\min_x \|y - A_{\Omega_k} x\|_2$; its solution is $x_k = R_k^{-1} Q_k^T y$,

where $A_{\Omega_k} = [a_{j_1}, a_{j_2}, \ldots, a_{j_k}]$ is the dictionary submatrix corresponding to the support index $\Omega_k$ of the $k$-th iteration, $j_1, j_2, \ldots, j_k \in \Omega_k$, $\|\cdot\|_2$ denotes the Euclidean norm of a vector, $R_k^{-1}$ is the inverse (generalized inverse) of the matrix $R_k$, and $Q_k^T$ is the transpose of the matrix $Q_k$;
Step B7, output the sparse coefficient vector $x$: the entries of $x$ indexed by $\Omega_k$ are the corresponding entries of $x_k$, and all other entries are zero,

where $x(i)$ denotes the $i$-th element of the sparse coefficient vector $x$ and $x_k(i)$ denotes the $i$-th element of the vector $x_k$.
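The incremental QR update of Step B3 (one Gram-Schmidt step) can be sketched in numpy as follows. The name `qr_append` and the temporary variable names are illustrative, and the sketch assumes $\alpha_k \neq 0$, i.e. the new column is linearly independent of the columns already in the support.

```python
import numpy as np

def qr_append(Q, R, a):
    # Given A_{k-1} = Q R (Q: M x (k-1), orthonormal columns; R: (k-1) x (k-1)
    # upper triangular) and a new column a = a_{j_k}, return Q_k, R_k with
    # [A_{k-1}, a] = Q_k R_k, as in Step B3.
    beta = Q.T @ a                       # beta_k = Q_{k-1}^T a_{j_k}
    q_tilde = a - Q @ beta               # component of a orthogonal to range(Q_{k-1})
    alpha = np.linalg.norm(q_tilde)      # alpha_k = ||q_tilde||_2, assumed nonzero
    q = q_tilde / alpha                  # q_k
    k = R.shape[0] + 1
    R_k = np.zeros((k, k))               # R_k = [[R_{k-1}, beta_k], [0, alpha_k]]
    R_k[:-1, :-1] = R
    R_k[:-1, -1] = beta
    R_k[-1, -1] = alpha
    return np.column_stack([Q, q]), R_k
```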
Using the Greville recurrence relations one obtains

$d_k = A_{k-1}^+ a_k$, $b_k = a_k - A_{k-1} d_k$,

where $d_k$ is a temporary vector used in computing $A_k^+$, and $b_k$ is a temporary column vector of the same dimension as $a_k$.
According to the QR decomposition (QR Decomposition), suppose the matrix $A_{k-1}$ of iteration $k-1$ has already been factorized:

$A_{k-1} = Q_{k-1} R_{k-1}$   (4.9)

where $A_{k-1}$ is the matrix of iteration $k-1$, $Q_{k-1} = [q_1, q_2, \ldots, q_{k-1}] \in \mathbb{R}^{M \times (k-1)}$ has orthonormal columns ($q_i$, $i = 1, 2, \ldots, k-1$, is the $i$-th column vector of $Q_{k-1}$, $q_{k-1}$ is its $(k-1)$-th column vector), and $R_{k-1}$ is a $(k-1) \times (k-1)$ upper triangular matrix. Consider the computation of the $k$-th step:

$A_k = [A_{k-1}, a_k] = Q_k R_k$, $Q_k = [Q_{k-1}, q_k]$, $R_k = \begin{bmatrix} R_{k-1} & \beta_k \\ 0 & \alpha_k \end{bmatrix}$,

where $\beta_k \in \mathbb{R}^{(k-1) \times 1}$ and $\alpha_k$ is a scalar. It follows ($I$: the identity matrix) that

$\beta_k = Q_{k-1}^T a_k$, $\alpha_k q_k = (I - Q_{k-1}Q_{k-1}^T) a_k$.

Since $A_k^+ = R_k^{-1} Q_k^T$, thus $x_k = R_k^{-1} Q_k^T y$, where $R_k^{-1}$ is the inverse matrix of $R_k$.

So: $r_k = y - A_k x_k = (I - Q_k Q_k^T) y = r_{k-1} - (q_k^T y)\, q_k$.

Likewise, computing the residual vector $r_k = s - A_k x_k$ directly at every step requires a matrix-vector multiplication ($s$ is the vector to be reconstructed, here $s = y$), and $r_k$ is needed at every step to select the column of $A$ with maximum absolute inner product with $r_k$, which is appended to $A_k$ to form $A_{k+1}$. With the recursive QR decomposition we have: $r_k$ is the orthogonal projection of $y$ onto $R(A_k)^{\perp}$, i.e.:

$r_k = (I - Q_k Q_k^T) y = r_{k-1} - (q_k^T y)\, q_k$.
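The residual recurrence just derived is easy to check numerically. The small script below is our own illustration, not part of the patent; it compares the projection form $y - Q_k Q_k^T y$ with the recurrence $r_k = r_{k-1} - (q_k^T y)\, q_k$.

```python
import numpy as np

rng = np.random.default_rng(0)
M, k = 50, 5
Q, _ = np.linalg.qr(rng.standard_normal((M, k)))  # orthonormal columns q_1, ..., q_k
y = rng.standard_normal(M)

r_proj = y - Q @ (Q.T @ y)        # r_k as the projection (I - Q_k Q_k^T) y
r = y.copy()                      # r_0 = y
for j in range(k):
    r -= (Q[:, j] @ y) * Q[:, j]  # r_k = r_{k-1} - (q_k^T y) q_k
print(np.allclose(r, r_proj))     # True: the recurrence reproduces the projection
```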
Advantageous effect:
The two incremental OMP methods for sparse reconstruction of high-dimensional signal vectors given herein, IGR_OMP and IQR_OMP, have the following features:
1. Fast computation. For a matrix $A_k = [a_1, a_2, \ldots, a_k] \in \mathbb{R}^{M \times k}$, $y \in \mathbb{R}^M$, $\mathrm{rank}(A_k) = k$, the solution of the corresponding problem $\min_x \|y - A_k x\|_2$ is $x_k = A_k^+ y$; via the QR decomposition $A_k = Q_k R_k$ this is $x_k = R_k^{-1} Q_k^T y$. For each $k$, the traditional OMP method recomputes this factorization from scratch, at a cost on the order of $Mk^2$ flops per step (reference: L.N. Trefethen, D. Bau, Numerical Linear Algebra, Chinese edition copyright 2006 by Posts and Telecom Press), while the incremental IQR_OMP algorithm needs only on the order of $Mk$ flops per step. The workload of the incremental algorithm is therefore substantially lower than that of the traditional OMP algorithm, especially for high-dimensional problems.
2. From the standpoint of computational stability, since $\mathrm{cond}_2(A_k^T A_k) = \mathrm{cond}_2(A_k)^2$, the incremental IQR_OMP method is designed around $\mathrm{cond}_2(A_k)$ rather than its square. Therefore when the condition number of $A_k$ is somewhat large, the normal-equation algorithm is not only computationally expensive but can also seriously affect the stability of the algorithm. The present invention is thus substantially better than the normal-equation method both in computational accuracy and in computational speed.
3. At present, theoretical results on the sparse reconstruction of signal vectors are difficult to verify during actual computation. In the incremental algorithms IQR_OMP and IGR_OMP given herein, the quantities $r_k$, $\alpha_k$ and $b_k$ can be used as measures of the degree of linear independence of $a_k$ relative to the preceding $a_1, a_2, \ldots, a_{k-1}$ (reference: Zhao Jinxi, The Huang method for compatible linear systems and its generalization, Numerical Mathematics: A Journal of Chinese Universities, no. 1, pp. 8-17, 1981). Being able to check the degree of linear independence of the vector system during the computation is very important. The incremental algorithms provided by the present invention are characterized by greatly reduced computational cost and strong robustness, and during the computation they can report the degree of linear independence of the currently selected atom relative to the preceding atoms. Numerical results show that the algorithms given herein have a clear advantage.
Description of the drawings
The present invention is further illustrated below in conjunction with the accompanying drawings and the specific embodiments; the above and other advantages of the present invention will become more apparent.
Fig. 1 is the flow chart of the IGR_OMP algorithm.

Fig. 2 is the flow chart of the IQR_OMP algorithm.
Specific embodiments
The present invention will be further described below with reference to the accompanying drawings and embodiments.
Among sparse reconstruction algorithms for high-dimensional signal vectors in compressed sensing, greedy (Greedy) algorithms are valued for their lower computational complexity and faster convergence rate, and the orthogonal matching pursuit (OMP) algorithm is one of the most important ones (reference: Joel A. Tropp and Anna C. Gilbert, Signal recovery from random measurements via orthogonal matching pursuit, IEEE Transactions on Information Theory, vol. 53, no. 12, December 2007).
The classical orthogonal matching pursuit algorithm (OMP, Orthogonal Matching Pursuit) is as follows:

Input: observed data vector $y \in \mathbb{R}^M$ ($\mathbb{R}^M$: the set of $M$-dimensional real column vectors) and dictionary matrix $A = [a_1, a_2, \ldots, a_N] \in \mathbb{R}^{M \times N}$ ($a_i$ ($i = 1, 2, \ldots, N$) is the $i$-th column vector of $A$; $\mathbb{R}^{M \times N}$: the set of $M \times N$ real matrices).

Output: sparse signal vector $x \in \mathbb{R}^N$ ($\mathbb{R}^N$: the set of $N$-dimensional real column vectors).

Step1. Initialize: let the initial support set index $\Omega_0 = \varnothing$; let the initial residual vector equal the observed data vector, i.e. $r_0 = y$; let $k = 1$.

Step2. Identify: find the column $j_k$ of $A$ most strongly correlated with the residual vector $r_{k-1}$ of iteration $k-1$:

$j_k \in \arg\max_j |(r_{k-1}, a_j)|$, $\Omega_k = \Omega_{k-1} \cup \{j_k\}$,

where $\Omega_k$ is the support set index of the $k$-th iteration, $\Omega_{k-1}$ is that of iteration $k-1$, and $a_j$ ($j = 1, 2, \ldots, N$) is the $j$-th column vector of $A$.

Step3. Compute the solution $x_k$ of $\min_x \|y - A_{\Omega_k} x\|_2$,

where $A_{\Omega_k} = [a_{j_1}, a_{j_2}, \ldots, a_{j_k}]$ is the dictionary submatrix corresponding to the support index $\Omega_k$ ($a_i$, $i = j_1, j_2, \ldots, j_k \in \Omega_k$, are the column vectors of $A_{\Omega_k}$).

Step4. Update the residual vector of the $k$-th iteration: $r_k = y - A_{\Omega_k} x_k$.

Step5. If some stopping criterion is met, i.e. the Euclidean norm of the $k$-th iteration residual vector satisfies $\|r_k\|_2 \le \varepsilon$ ($\varepsilon$ is a very small number, e.g. $10^{-6}$), stop the iteration and go to Step6;

otherwise let $k \leftarrow k+1$ and repeat Step2 to Step4.

Step6. Output the coefficient vector $x$ ($x(i)$: the $i$-th element of the output coefficient vector $x$; $x_k(i)$: the $i$-th element of the vector $x_k$): the entries of $x$ on the support $\Omega_k$ are the corresponding entries of $x_k$, and all other entries are zero.
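For reference, the classical algorithm above can be sketched in numpy as follows. The least-squares solve of Step3 is done here with `np.linalg.lstsq` as a stand-in for the normal-equation solve; the function and variable names are ours, not the patent's.

```python
import numpy as np

def omp_classical(A, y, eps=1e-6, max_iter=None):
    # Classical OMP (Step1-Step6): every iteration re-solves the full
    # least-squares problem min ||y - A_Omega x||_2 on the current support.
    M, N = A.shape
    support = []                                  # Omega_k
    r = y.copy()                                  # r_0 = y
    x = np.zeros(N)
    for _ in range(max_iter or M):
        j = int(np.argmax(np.abs(A.T @ r)))       # Step2: most correlated column
        support.append(j)
        xk, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)  # Step3
        r = y - A[:, support] @ xk                # Step4
        if np.linalg.norm(r) <= eps:              # Step5
            break
    x[support] = xk                               # Step6
    return x
```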
Since the number of nonzero entries of the solution vector satisfies $K \le M \ll N$, Step3 in this algorithm actually solves an $M \times k$ overdetermined linear least-squares problem. That is, the task is to find the sparse solution of an underdetermined linear system, and in fact the OMP algorithm as a whole proceeds by successively solving the $M \times k$ ($k = 1, 2, \ldots, K$) overdetermined linear least-squares problems. In other words, seeking the sparse reconstruction of a high-dimensional signal vector with a greedy algorithm is, in essence, computing the least-squares solution of the overdetermined linear system formed by the column vectors (atoms) of $A$ corresponding to the nonzero elements of the sought $x$; understanding this point is very important. The OMP algorithm has two key steps:
1. For the matrix (dictionary) $A \in \mathbb{R}^{M \times N}$, determine the column (atom) indices that match the nonzero elements of $x$. Following the greedy principle, at step $k$ one finds, among the $N$ columns of $A$, the vector with maximum absolute inner product with $r_{k-1} = y - Ax_{k-1}$ (reference: S. Mallat and Z. Zhang, Matching pursuits with time-frequency dictionaries, IEEE Trans. Signal Process., 1993, 41, pp. 3397-3415), forming the new support set; this is Step2.
2. for large-scale problem, the work needed for least square solution is solved with common normal equation method in OMP algorithms Amount will often account for nearly the 80% of entire workload.Therefore finding the more efficiently algorithm for solving the least square solution just becomes A key problem in OMP algorithms.
As noted above, the scientific core of sparse signal-vector reconstruction is to determine the sparsest solution of a large-scale, high-order underdetermined linear system. The OMP algorithm in fact introduces columns into the support set one by one and successively solves the overdetermined linear least-squares problem

$\min_x \|y - A_{\Omega_k} x\|_2$   (3.3)

The common OMP algorithm uses the normal-equation method, computing at the $k$-th iteration:

$A_{\Omega_k}^T A_{\Omega_k} x = A_{\Omega_k}^T y$, i.e. $x_k = (A_{\Omega_k}^T A_{\Omega_k})^{-1} A_{\Omega_k}^T y$   (3.4)

For solving the least-squares problem (3.3) of the overdetermined linear system, since the number of iterations satisfies $K \ll M \ll N$, obtaining (3.4) by the normal-equation method appears quite effective compared with other direct methods. But there is one basic problem here: the coefficient matrix $A_{\Omega_k}$ in (3.3) grows by one column at each iteration, and (3.3) must be solved anew at every iteration, so the accumulated amount of computation becomes very large. For small and medium problems the impact is not obvious, but for large-scale high-order problems this is a very big problem. How to make effective use of the information from the successive OMP iterations is the starting point for the incremental new algorithms constructed in the present invention.
To describe the problem more intuitively: if the overdetermined linear problem (3.3) to be solved in the present invention runs to 2000 iterations, the accumulated computation with the normal-equation method takes 1667.6 seconds, while the accumulated computation with the new algorithm of the present invention, which uses the information of the previous iteration, takes only 137.36 seconds; on the same problem the speed improves by a factor of 12. At the 2000th iteration alone, the normal-equation method takes 4.5273 seconds while the method of the present invention takes only 0.2427 seconds.
The preceding discussion used the concept of spark or the mutual coherence (cross-correlation) of $A$ to discuss the existence of large-scale sparse solutions. But it must be observed that these conditions are all rather inconvenient to verify. In fact, since what the present invention solves is an $M \times k$ overdetermined linear system, we have:
Theorem 3.1. Consider the underdetermined linear system $Ax = y$, where $A \in \mathbb{R}^{M \times N}$, $x \in \mathbb{R}^N$, $y \in \mathbb{R}^M$, $\mathrm{rank}(A) = M$, $\|x\|_0 = K$ and $K \ll M \ll N$. If every system of vectors formed by an arbitrary $K$ columns of $A = [a_1, a_2, \ldots, a_N]$ is linearly independent, then there certainly exists a solution vector $x$ satisfying the sparsity condition $\|x\|_0 = K$.
This condition appears hard to verify, but since the coefficient matrix of this overdetermined linear system is introduced column by column, it in fact suffices to ensure that the system of vectors introduced column by column is linearly independent. How to verify, during the computation, that a newly entering vector is linearly independent of the vector system already in the support set thus becomes particularly important, and the traditional OMP algorithm cannot do this. How to exploit this column-by-column introduction in the design of algorithms is the motivation for the new algorithms considered in the present invention.
As shown in Fig. 1 and Fig. 2, the invention discloses a high-dimensional sparse vector reconstruction method based on IGR_OMP and a high-dimensional sparse vector reconstruction method based on IQR_OMP.
The principle of the incremental OMP methods:
Under certain conditions the OMP algorithm essentially solves an overdetermined linear system, and the characteristic of this overdetermined system is that the column vector with maximum energy is selected step by step into the support set. That is, at the $k$-th iteration, the column vector with maximum energy and the previously selected $k-1$ column vectors form a new $M \times k$ overdetermined system $A_{\Omega_k} x = y$, where $A_{\Omega_k} \in \mathbb{R}^{M \times k}$, $y \in \mathbb{R}^M$, and its least-squares solution is then sought. Understanding and analyzing OMP from this viewpoint better captures the essence of the method and better exposes the deficiencies of the traditional OMP method: every iteration of the OMP algorithm computes the generalized inverse of the submatrix from scratch. Whether one calls a standard library function directly or computes $x_k = (A_{\Omega_k}^T A_{\Omega_k})^{-1} A_{\Omega_k}^T y$ directly by formula, much additional computation is incurred; for large-scale problems, as the number of iterations grows, this problem becomes ever more prominent. The incremental algorithms, by contrast, exploit at the $k$-th iteration the information already obtained in the previous iteration, thereby greatly improving the effectiveness of the algorithm. Two incremental OMP algorithms are given below.
The IGR_OMP algorithm:

Let $A = [a_1, a_2, \ldots, a_K] \in \mathbb{R}^{M \times K}$, $K \ll M$, $\mathrm{rank}(A) = K$ (where $a_i$, $i = 1, 2, \ldots, K$, is the $i$-th column vector of $A$, and $\mathbb{R}^{M \times K}$ is the set of $M \times K$ real matrices);

Write $A_1 = [a_1]$, $A_k = [A_{k-1}, a_k]$, $k = 2, 3, \ldots, K$ ($a_1$: the first column vector of $A$; $A_1$: the matrix of the 1st iteration; $A_k$: the matrix of the $k$-th iteration; $A_{k-1}$: the matrix of iteration $k-1$);

Compute $A^+$ by the Greville recursive algorithm (reference: Ben-Israel A., Greville T.N.E., Generalized Inverses: Theory and Applications, Wiley, New York, 1974):

Initial value: $A_1^+ = a_1^+ = a_1^T/(a_1^T a_1)$.

For $k = 2, 3, \ldots, K$, the recursion is

$d_k = A_{k-1}^+ a_k$, $b_k = a_k - A_{k-1} d_k$, $c_k = b_k^T/(b_k^T b_k)$, $A_k^+ = \begin{bmatrix} A_{k-1}^+ - d_k c_k \\ c_k \end{bmatrix}$.

The characteristic of this algorithm is that $A_k^+$ is obtained from $A_{k-1}^+$ by recursion, where $d_k$ is a temporary vector used in computing $A_k^+$ and $b_k$ is a temporary column vector of the same dimension as $a_k$. This gives the following IGR_OMP algorithm:
Step A1, input the observed data vector $y \in \mathbb{R}^M$ ($\mathbb{R}^M$: $M \times 1$ real column vectors) and the dictionary matrix $A = [a_1, a_2, \ldots, a_N] \in \mathbb{R}^{M \times N}$ ($a_i$, $i = 1, 2, \ldots, N$, is the $i$-th column vector of $A$; $\mathbb{R}^{M \times N}$: $M \times N$ real matrices); let the initial support set index $\Omega_0 = \varnothing$; let the initial residual vector equal the input observed data vector, i.e. $r_0 = y$; let $k = 1$;
Step A2, find the column $j_k$ of the dictionary matrix $A$ most strongly correlated with the residual vector $r_{k-1}$ of iteration $k-1$ ($\Omega_k$: support set index of the $k$-th iteration; $\Omega_{k-1}$: support set index of iteration $k-1$; $a_j$: the $j$-th column vector of $A$):

$j_k \in \arg\max_j |(r_{k-1}, a_j)|$, $\Omega_k = \Omega_{k-1} \cup \{j_k\}$;
Step A3, from the generalized inverse $A_{\Omega_{k-1}}^+$ of the matrix $A_{\Omega_{k-1}}$ corresponding to the support set index $\Omega_{k-1}$ of iteration $k-1$, incrementally compute the generalized inverse $A_{\Omega_k}^+$ of the matrix $A_{\Omega_k}$ corresponding to the support set index $\Omega_k$ of the $k$-th iteration ($a_{j_k}$ denotes the $j_k$-th column of $A$):

If $k = 1$, $j_k = j_1$, then the generalized inverse of the vector $A_{\Omega_1} = a_{j_1}$ equals $A_{\Omega_1}^+ = a_{j_1}^T/(a_{j_1}^T a_{j_1})$; go to step A4.

If $k \neq 1$, execute the following steps in order:

$d = A_{\Omega_{k-1}}^+ a_{j_k}$, $b = a_{j_k} - A_{\Omega_{k-1}} d$, $c = b^T/(b^T b)$, $A_{\Omega_k}^+ = \begin{bmatrix} A_{\Omega_{k-1}}^+ - dc \\ c \end{bmatrix}$,

where $d$ is a temporary vector, $b$ is a temporary column vector of the same dimension as $a_{j_k}$, and $c$ is a temporary row vector.
Step A4, the solution of the minimization problem $\min_x \|y - A_{\Omega_k} x\|_2$ is

$x_k = A_{\Omega_k}^+ y$;
Step A5, update the residual vector of the $k$-th iteration: $r_k = y - A_{\Omega_k} x_k$;
Step A6, judge whether the stopping criterion is met, i.e. whether the Euclidean norm of the $k$-th iteration residual vector satisfies $\|r_k\|_2 \le \varepsilon$ ($\varepsilon$ is a very small number, e.g. $10^{-6}$); if it is satisfied, execute step A7; otherwise update $k$ to $k+1$ and repeat steps A2 to A5;
Step A7, output the sparse coefficient vector $x \in \mathbb{R}^N$ ($\mathbb{R}^N$: $N \times 1$ real column vectors; $x(i)$: the $i$-th element of the output sparse coefficient vector $x$; $x_k(i)$: the $i$-th element of the solution vector $x_k$ of the minimization problem of step A4): the entries of $x$ on the support $\Omega_k$ are the corresponding entries of $x_k$, and all other entries are zero.
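Putting steps A1 through A7 together, a compact numpy sketch of IGR_OMP is as follows. The function and variable names are ours, and the sketch assumes the selected columns remain linearly independent ($b \neq 0$), as the patent assumes for sensing matrices.

```python
import numpy as np

def igr_omp(A, y, eps=1e-6, max_iter=None):
    # IGR_OMP sketch (Steps A1-A7): the generalized inverse of the support
    # submatrix is grown by one Greville step per iteration instead of being
    # recomputed from scratch.
    M, N = A.shape
    support = []                                   # Omega_k
    r = y.copy()                                   # r_0 = y
    pinv = None                                    # A_{Omega_k}^+
    for _ in range(max_iter or M):
        j = int(np.argmax(np.abs(A.T @ r)))        # Step A2
        support.append(j)
        a = A[:, j]
        if pinv is None:                           # Step A3, k = 1
            pinv = a[None, :] / (a @ a)
        else:                                      # Step A3, k > 1 (Greville step)
            d = pinv @ a
            b = a - A[:, support[:-1]] @ d         # assumed nonzero
            c = b / (b @ b)
            pinv = np.vstack([pinv - np.outer(d, c), c])
        xk = pinv @ y                              # Step A4
        r = y - A[:, support] @ xk                 # Step A5
        if np.linalg.norm(r) <= eps:               # Step A6
            break
    x = np.zeros(N)
    x[support] = xk                                # Step A7
    return x
```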
We now collect the properties of the Greville recursion:
Property 4.1. $b_k = 0$ if and only if $a_k \in R(A_{k-1}) = \mathrm{span}\{a_1, a_2, \ldots, a_{k-1}\}$, that is, $a_k$ is linearly dependent on $a_1, a_2, \ldots, a_{k-1}$.
This is because $b_k = a_k - A_{k-1} d_k = (I_M - A_{k-1} A_{k-1}^+) a_k$.

This shows that $b_k \in R(A_{k-1})^{\perp}$ is the orthogonal projection of $a_k$ onto the orthogonal complement of $R(A_{k-1})$; if $b_k = 0$, it means $a_k \in R(A_{k-1})$, so $a_k$ is linearly dependent on $R(A_{k-1})$, and the converse also holds.
In actual computation $\|b_k\|_2$ is a very important indicator: it reflects the degree of linear dependence of $a_k$ on $R(A_{k-1})$.
In compressed sensing problems, by the assumptions on the sensing matrix the case $b_k = 0$ does not occur. But the indicator $\|b_k\|_2$ is very important: it reflects the degree of linear dependence between $a_k$ and $R(A_{k-1})$. If $\|b_k\|_2 \neq 0$ but very small, it indicates that $a_k$ is close to linearly dependent on $a_1, a_2, \ldots, a_{k-1}$, i.e. the degree of linear independence is very weak; this is the so-called numerically unstable situation. This point is in fact extremely important: it solves the problem of checking, during the computation, the degree of linear independence of $a_k$ from the preceding vector system $a_1, a_2, \ldots, a_{k-1}$, which is a major problem in numerical computation.
Property 4.2. $b_k = (I_M - A_{k-1} A_{k-1}^+) a_k$, i.e. $b_k$ is the orthogonal projection of $a_k$ onto $R(A_{k-1})^{\perp}$, and therefore:

$(b_k, a_i) = 0$, $i = 1, 2, \ldots, k-1$   (4.7)

i.e. $b_k$ is orthogonal to $a_1, a_2, \ldots, a_{k-1}$.
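Property 4.2 can be verified numerically in a few lines. The script below is our own illustration, using numpy's built-in pseudoinverse to form $b_k$.

```python
import numpy as np

rng = np.random.default_rng(1)
M, k = 40, 6
A_prev = rng.standard_normal((M, k - 1))             # A_{k-1}, full column rank w.h.p.
a_k = rng.standard_normal(M)

b_k = a_k - A_prev @ (np.linalg.pinv(A_prev) @ a_k)  # b_k = (I - A_{k-1} A_{k-1}^+) a_k
print(np.allclose(A_prev.T @ b_k, 0))                # True: (b_k, a_i) = 0 for all i
print(np.linalg.norm(b_k))                           # ||b_k||_2, the independence measure
```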
The IQR_OMP algorithm:

Suppose we have already computed

$A_{k-1} = Q_{k-1} R_{k-1}$   (4.9)

where $Q_{k-1} = [q_1, q_2, \ldots, q_{k-1}] \in \mathbb{R}^{M \times (k-1)}$ has orthonormal columns and $R_{k-1}$ is a $(k-1) \times (k-1)$ upper triangular matrix. Consider now the computation of the $k$-th step:

$A_k = [A_{k-1}, a_k] = Q_k R_k$, $Q_k = [Q_{k-1}, q_k]$, $R_k = \begin{bmatrix} R_{k-1} & \beta_k \\ 0 & \alpha_k \end{bmatrix}$,

where $\beta_k \in \mathbb{R}^{(k-1) \times 1}$ and $\alpha_k$ is a scalar. It follows that

$\beta_k = Q_{k-1}^T a_k$, $\alpha_k q_k = (I - Q_{k-1} Q_{k-1}^T) a_k$.

Since $A_k^+ = R_k^{-1} Q_k^T$, thus $x_k = R_k^{-1} Q_k^T y$.

So: $r_k = y - A_k x_k = (I - Q_k Q_k^T) y$.
Likewise, computing the residual $r_k = s - A_k x_k$ directly at every step requires a matrix-vector multiplication ($s$: the vector to be reconstructed), and $r_k$ is needed at every step to select the column of $A$ with maximum inner product with $r_k$, which is appended to $A_k$ to form $A_{k+1}$. At this point, for the recursive QR decomposition we have:
Property 4.4. In the IQR_OMP algorithm, $r_k$ is in fact the orthogonal projection of $y$ onto $R(A_k)^{\perp}$, i.e.:

$r_k = (I - Q_k Q_k^T) y = r_{k-1} - (q_k^T y)\, q_k$, $k = 1, 2, \ldots, K$.
Using this recurrence to compute the residual vector $r_k$ brings great convenience: it avoids multiplying a matrix by a vector directly, and computing one inner product per step suffices; since the intermediate quantities $\{x_k\}$ need not be computed inside the loop, the amount of computation for large-scale problems is reduced. Therefore we have:
The IQR-OMP algorithm:
Step B1, input the observed data vector $y \in \mathbb{R}^m$ ($\mathbb{R}^m$: the set of $m \times 1$ real column vectors) and the dictionary matrix $A = [a_1, a_2, \ldots, a_n] \in \mathbb{R}^{m \times n}$ ($a_i$, $i = 1, 2, \ldots, n$, is the $i$-th column vector of $A$; $\mathbb{R}^{m \times n}$: the set of $m \times n$ real matrices); let the initial support set index $\Omega_0 = \varnothing$; let the initial residual vector equal the observed data vector, i.e. $r_0 = y$; let $k = 1$;
Step B2, find the column $j_k$ of the dictionary matrix $A$ most strongly correlated with the residual vector $r_{k-1}$ of iteration $k-1$:

$j_k \in \arg\max_j |(r_{k-1}, a_j)|$, $\Omega_k = \Omega_{k-1} \cup \{j_k\}$;

where $\Omega_k$ is the support set index of the $k$-th iteration, $\Omega_{k-1}$ is the support set index of iteration $k-1$, and $a_j$ is the $j$-th column vector of $A$.
Step B3, from the $Q_{k-1}$ and $R_{k-1}$ of iteration $k-1$, recursively compute the $Q_k$ and $R_k$ of the $k$-th iteration:

If $k = 1$, $j_k = j_1$, then $R_1 = \|a_{j_1}\|_2$, and the first column of the matrix $Q_1$, $Q_1(:,1)$ or $q_1$, equals $a_{j_1}/\|a_{j_1}\|_2$, i.e. $Q_1 = a_{j_1}/\|a_{j_1}\|_2$;

go to step B4.

If $k \neq 1$, execute the following formulas in order:

$\beta_k = Q_{k-1}^T a_{j_k}$,
$\tilde{q} = a_{j_k} - Q_{k-1}\beta_k$,
$\alpha_k = \|\tilde{q}\|_2$, $q_k = \tilde{q}/\alpha_k$,
$Q_k = [Q_{k-1}, q_k]$, $R_k = \begin{bmatrix} R_{k-1} & \beta_k \\ 0 & \alpha_k \end{bmatrix}$,

where $a_{j_k}$ is the $j_k$-th column vector of $A$, $q_k$ is the $k$-th column vector of the matrix $Q$, and $\beta_k$, $\alpha_k$ are temporary quantities of the $k$-th execution;
Step B4, update the residual vector of the $k$-th iteration: $r_k = r_{k-1} - (q_k^T y)\, q_k$;
Step B5, judge whether the stopping criterion is met, i.e. whether the Euclidean norm of the $k$-th iteration residual vector satisfies $\|r_k\|_2 \le \varepsilon$ ($\varepsilon$ is a very small number, e.g. $10^{-6}$); if it is satisfied, execute step B6;

otherwise update $k$ to $k+1$ and repeat steps B2 to B4;
Step B6, the solution of the minimization problem $\min_x \|y - A_{\Omega_k} x\|_2$ is:

$x_k = R_k^{-1} Q_k^T y$,

where $A_{\Omega_k} = [a_{j_1}, a_{j_2}, \ldots, a_{j_k}]$ is the dictionary submatrix corresponding to the support index $\Omega_k$ ($a_i$, $i = j_1, j_2, \ldots, j_k \in \Omega_k$, are the column vectors of $A_{\Omega_k}$).
Step B7, output the sparse coefficient vector $x$ ($x(i)$: the $i$-th element of the output sparse coefficient vector $x$; $x_k(i)$: the $i$-th element of the solution $x_k$ obtained in step B6): the entries of $x$ on the support $\Omega_k$ are the corresponding entries of $x_k$, and all other entries are zero.
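Putting steps B1 through B7 together, a compact numpy sketch of IQR_OMP is as follows. The names are ours, and the sketch assumes $\alpha_k \neq 0$ at every step, i.e. each selected column is linearly independent of the previous ones.

```python
import numpy as np

def iqr_omp(A, y, eps=1e-6, max_iter=None):
    # IQR_OMP sketch (Steps B1-B7): the QR factors of the support submatrix
    # grow by one column per iteration, and the residual is updated by the
    # recurrence r_k = r_{k-1} - (q_k^T y) q_k instead of a full solve.
    M, N = A.shape
    support = []                               # Omega_k
    r = y.copy()                               # r_0 = y
    Q = np.zeros((M, 0))
    R = np.zeros((0, 0))
    for _ in range(max_iter or M):
        j = int(np.argmax(np.abs(A.T @ r)))    # Step B2
        support.append(j)
        a = A[:, j]
        beta = Q.T @ a                         # Step B3: beta_k = Q_{k-1}^T a_{j_k}
        q_tilde = a - Q @ beta
        alpha = np.linalg.norm(q_tilde)        # assumed nonzero
        q = q_tilde / alpha
        Q = np.column_stack([Q, q])
        k = R.shape[0] + 1
        R_new = np.zeros((k, k))               # R_k = [[R_{k-1}, beta_k], [0, alpha_k]]
        R_new[:-1, :-1] = R
        R_new[:-1, -1] = beta
        R_new[-1, -1] = alpha
        R = R_new
        r = r - (q @ y) * q                    # Step B4: residual recurrence
        if np.linalg.norm(r) <= eps:           # Step B5
            break
    xk = np.linalg.solve(R, Q.T @ y)           # Step B6: R_k x_k = Q_k^T y
    x = np.zeros(N)
    x[support] = xk                            # Step B7
    return x
```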
Using the Greville recurrence relations one obtains $A_k^+ = \begin{bmatrix} A_{k-1}^+ - d_k c_k \\ c_k \end{bmatrix}$.

Therefore we have:

Theorem 4.5. The process of computing the M-P generalized inverse by the Greville recursion is consistent with the QR recursion. In fact, under the conditions of Theorem 3.1, $A_k^+ = R_k^{-1} Q_k^T$.
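Theorem 4.5 can likewise be checked numerically: for a full-column-rank matrix, the least-squares solution via the M-P generalized inverse coincides with the one obtained via QR. The snippet below is our own illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
M, k = 60, 8
A = rng.standard_normal((M, k))        # full column rank with high probability
y = rng.standard_normal(M)

Q, R = np.linalg.qr(A)                 # A = Q R
x_qr = np.linalg.solve(R, Q.T @ y)     # QR route: R x = Q^T y
x_gr = np.linalg.pinv(A) @ y           # M-P generalized-inverse route
print(np.allclose(x_qr, x_gr))         # True: both give the same LS solution
```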
Embodiment
The computing environment for the numerical experiments designed herein is a DELL-PC WORKGROUP with an Intel(R) Core(TM) i7-4790 at 3.6 GHz, 8.0 GB memory and Windows 7 Ultimate; the computing software is MATLAB R2010b.
In Case1, $A \in \mathbb{R}^{2000 \times 5000}$, $K = 600$;

In Case2, $A \in \mathbb{R}^{3000 \times 6000}$, $K = 1000$;

In Case3, $A \in \mathbb{R}^{10000 \times 15000}$, $K = 3000$.
The computation times (in seconds) of the incremental OMP algorithms on the 1-D cases are compared in Table 1:
Table 1
Case IQR_OMP IGR_OMP OMP
Case1 7.6331 9.112 30.8978
Case2 29.2647 34.8170 164.7083
Case3 810.4921 962.1627 10386.0
The three algorithms above are described as follows:
1. IQR_OMP is the algorithm given herein, built on the QR decomposition and on the new residual recurrence formula;
2. IGR_OMP is the algorithm given herein, built on the Greville recurrence formula and on the new residual recurrence formula;
3. OMP is the classical OMP algorithm, with the generalized inverse in normal-equation form, i.e. $x_k = (A_{\Omega_k}^T A_{\Omega_k})^{-1} A_{\Omega_k}^T y$. It should also be noted that for the same test type, the raw data $A \in \mathbb{R}^{M \times N}$ and $y \in \mathbb{R}^M$ used by the different algorithms are identical.
In the computed results, the PSNR of the three computational methods in the second case is 118.7734; in the third case the PSNR of the three methods is 144.3529, with MSE = 2.38611e-010. It can be seen that the algorithms IGR_OMP and IQR_OMP given herein have a clear advantage, and are particularly fast and effective for large-scale problems. For the third case above, the computation time of the algorithms given herein is less than one tenth of that of the OMP algorithm, as shown in Table 2 (2-D Lena image, 256 × 256) and Table 3 (2-D, 1024 × 1024):
Table 2
IGR_OMP OMP IRLS SP GBP IHT CoSaMP
CPU time (s) 0.18 0.31 8.37 1.40 4.18 0.23 2.30
PSNR 22.06 22.08 23.63 21.04 22.77 15.66 19.77
Table 3
IQR_OMP OMP IRLS SP GBP IHT CoSaMP
CPU time (s) 11.30 31.86 5809 243 2325 14.03 533
PSNR 45.90 45.90 48.70 46.34 47.83 46.79 45.27
(1) In Table 2, the image is the 256 × 256 Lena image; the measurement matrix is a random matrix, and the sparse transform is the DCT computed column by column. The methods are respectively:
IGR_OMP: the incremental OMP algorithm given herein, built on the Greville recursion;

IQR_OMP: the incremental OMP algorithm given herein, built on the QR decomposition;

OMP: the classical OMP method, built on the normal equations;

IRLS: iteratively reweighted least squares (Iteratively Reweighted Least Squares);

SP: the subspace pursuit method;

GBP: greedy basis pursuit (Greedy Basis Pursuit);

IHT: the iterative hard thresholding method;

CoSaMP: the compressive sampling matching pursuit method.
Among these methods, IQR_OMP and IGR_OMP are the methods of the present invention; all the other methods use the software provided by Dr. Chengfu Huo of the Hong Kong University of Science and Technology.
It is apparent from the computed results that the incremental algorithms IGR_OMP and IQR_OMP presented and discussed herein are superior.
The present invention provides an incremental orthogonal matching pursuit method for high-dimensional sparse vector reconstruction. There are many methods and approaches for implementing this technical solution, and the above is only a preferred embodiment of the present invention. It should be noted that, for those of ordinary skill in the art, various improvements and modifications may be made without departing from the principle of the present invention, and these improvements and modifications should also be regarded as falling within the protection scope of the present invention. Each component not made explicit in this embodiment can be implemented with the available prior art.

Claims (3)

1. A high-dimensional sparse vector reconstruction method based on IQR_OMP, characterized by comprising the following steps:
Step B1, input the observed data vector $y \in \mathbb{R}^m$ and the dictionary matrix $A = [a_1, a_2, \ldots, a_n] \in \mathbb{R}^{m \times n}$; let the initial support set index $\Omega_0 = \varnothing$; let the initial residual vector $r_0$ equal the observed data vector $y$, i.e. $r_0 = y$; let $k = 1$, where $\mathbb{R}^m$ denotes the $m \times 1$ real column vectors, $\mathbb{R}^{m \times n}$ denotes the $m \times n$ real matrices, and $a_i$ denotes the $i$-th $m \times 1$ real column vector of the dictionary matrix $A$, $i = 1, 2, \ldots, n$;
Step B2, find the column $j_k$ of the dictionary matrix $A$ most strongly correlated with the residual vector $r_{k-1}$ of iteration $k-1$:

$j_k \in \arg\max_j |(r_{k-1}, a_j)|$, $\Omega_k = \Omega_{k-1} \cup \{j_k\}$;

where $\Omega_k$ is the support set index of the $k$-th iteration, $\Omega_{k-1}$ is the support set index of iteration $k-1$, and $a_j$ is the $j$-th real column vector of the dictionary matrix $A$;
Step B3, from the $Q_{k-1}$ and $R_{k-1}$ of iteration $k-1$, recursively compute the $Q_k$ and $R_k$ of the $k$-th iteration;
Step B4, update the residual vector of the $k$-th iteration: $r_k = r_{k-1} - (q_k^T y)\, q_k$, where $q_k$ is the $k$-th column vector of the matrix $Q_k$ of the $k$-th iteration;
Step B5, judge whether the stopping criterion is met, i.e. whether the Euclidean norm of the $k$-th residual vector satisfies $\|r_k\|_2 \le \varepsilon$; if it is satisfied, execute step B6; otherwise update $k$ to $k+1$ and repeat steps B2 to B4;
Step B6, solve the minimization problem $\min_x \|y - A_{\Omega_k} x\|_2$ for the solution $x_k$:

$x_k = R_k^{-1} Q_k^T y$,

where $A_{\Omega_k} = [a_{j_1}, a_{j_2}, \ldots, a_{j_k}]$ is the dictionary submatrix corresponding to the support index $\Omega_k$ of the $k$-th iteration, $j_1, j_2, \ldots, j_k \in \Omega_k$, $R_k^{-1}$ is the generalized inverse of the matrix $R_k$, and $Q_k^T$ is the transpose of the matrix $Q_k$;
Step B7, output the sparse coefficient vector $x$: the entries of $x$ indexed by $\Omega_k$ are the corresponding entries of $x_k$, and all other entries are zero,

where $x(i)$ denotes the $i$-th element of the sparse coefficient vector $x$ and $x_k(i)$ denotes the $i$-th element of the vector $x_k$.
2. The method according to claim 1, characterized in that step B3 comprises: if $k = 1$, $j_k$ is $j_1$, then $R_1 = \|a_{j_1}\|_2$, and the 1st column of the matrix $Q_1$, $Q_1(:,1)$ or $q_1$, equals $a_{j_1}/\|a_{j_1}\|_2$, i.e. $Q_1 = a_{j_1}/\|a_{j_1}\|_2$; go to step B4;
if $k \neq 1$, execute the following computations in order:

$\beta_k = Q_{k-1}^T a_{j_k}$,
$\tilde{q} = a_{j_k} - Q_{k-1}\beta_k$,
$\alpha_k = \|\tilde{q}\|_2$, $q_k = \tilde{q}/\alpha_k$,
$Q_k = [Q_{k-1}, q_k]$, $R_k = \begin{bmatrix} R_{k-1} & \beta_k \\ 0 & \alpha_k \end{bmatrix}$,

where the vector $a_{j_k}$ is the $j_k$-th column vector of the dictionary matrix $A$, $q_k$ is the $k$-th column vector of the matrix $Q$, $\beta_k$ is a temporary vector of the $k$-th execution, and $\alpha_k$ is a temporary scalar of the $k$-th execution.
3. The method according to claim 2, characterized in that in step B6, $\|\cdot\|_2$ denotes the Euclidean norm of a vector.
CN201810431881.0A 2018-05-08 2018-05-08 High-dimensional sparse vector reconstruction method based on IQR_OMP Active CN108664448B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810431881.0A CN108664448B (en) 2018-05-08 2018-05-08 High-dimensional sparse vector reconstruction method based on IQR_OMP

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810431881.0A CN108664448B (en) 2018-05-08 2018-05-08 High-dimensional sparse vector reconstruction method based on IQR_OMP

Publications (2)

Publication Number Publication Date
CN108664448A true CN108664448A (en) 2018-10-16
CN108664448B CN108664448B (en) 2021-06-01

Family

ID=63778785

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810431881.0A Active CN108664448B (en) High-dimensional sparse vector reconstruction method based on IQR_OMP

Country Status (1)

Country Link
CN (1) CN108664448B (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102624399A (en) * 2012-03-30 2012-08-01 北京邮电大学 Reconfiguration method for compression sensing signal
CN102750262A (en) * 2012-06-26 2012-10-24 清华大学 Method for realizing sparse signal recovery on CPU (Central Processing Unit) based on OMP (Orthogonal Matching Pursuit) algorithm
US20170140079A1 (en) * 2014-06-30 2017-05-18 Cgg Services Sas Ensemble-based multi-scale history-matching device and method for reservoir characterization
CN104318046A (en) * 2014-08-18 2015-01-28 南京大学 System and method for incrementally converting high dimensional data into low dimensional data
CN104333389A (en) * 2014-10-23 2015-02-04 湘潭大学 Adaptive threshold value iterative reconstruction method for distributed compressed sensing
CN107845117A (en) * 2017-10-19 2018-03-27 武汉大学 Method for compressing high spectrum image based on block sparse expression pattern and structure dictionary

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHELUN YU, ET AL: "Fast Compressive Sensing Reconstruction Algorithm on FPGA using Orthogonal Matching Pursuit", 2016 ISCAS *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109645985A (en) * 2019-02-22 2019-04-19 南京大学 The method that a kind of pair of peak single channel pregnant woman stomach wall electricity maternal ecg R is detected

Also Published As

Publication number Publication date
CN108664448B (en) 2021-06-01


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant