CN108573262B - IGR-OMP-based high-dimensional sparse vector reconstruction method - Google Patents
- Publication number
- CN108573262B CN108573262B CN201810431421.8A CN201810431421A CN108573262B CN 108573262 B CN108573262 B CN 108573262B CN 201810431421 A CN201810431421 A CN 201810431421A CN 108573262 B CN108573262 B CN 108573262B
- Authority
- CN
- China
- Prior art keywords
- vector
- matrix
- omp
- iteration
- column
- Prior art date
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/10—Complex mathematical operations
- G06F17/16—Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/513—Sparse representations
Abstract
The invention discloses a high-dimensional sparse vector reconstruction method based on IGR_OMP. Because every step of the OMP algorithm must compute a least-squares solution, an incremental IGR_OMP algorithm is built on the Greville recursion; the recursion yields a series of useful properties and markedly increases the computation speed. Building on the analysis of IGR_OMP, an incremental IQR_OMP algorithm based on QR decomposition is then established by exploiting the relationship between the Greville recursion and QR decomposition; this algorithm further reduces the computational workload and improves effectiveness on large-scale problems. High-dimensional sparse vector reconstruction simply means finding the sparsest coefficient representation of a high-dimensional vector over an overcomplete vector system.
Description
Technical Field
The invention relates to a high-dimensional sparse vector reconstruction method based on IGR_OMP (Incremental Greville-Recursion-based Orthogonal Matching Pursuit).
Background
Compressed Sensing (CS) is a key new technology in the field of image and signal processing. Its essence is to determine the sparsest coefficient representation of a high-dimensional signal or image vector on an overcomplete dictionary using information of far lower dimension than the vector itself; the core problem is to solve an underdetermined linear system for its sparsest solution so as to reconstruct the signal or image vector (ref.: Yan et al., Compressed Sensing and Applications, National Defense Industry Press, Beijing, 2015; E.J. Candès, M. Wakin, "An introduction to compressive sampling", IEEE Signal Processing Magazine, 2008, 25(2), 21-30.).
High-dimensional sparse vector reconstruction is a very important problem in signal and image processing. Although the classical OMP (Orthogonal Matching Pursuit) algorithm has lower complexity than non-greedy algorithms, it still falls short of real-time signal processing in practical applications.
Disclosure of Invention
Aiming at the defects of the prior art, the invention discloses a high-dimensional sparse vector reconstruction method based on IGR_OMP (Incremental Greville-Recursion-based Orthogonal Matching Pursuit), which comprises the following steps:
let the matrix A = [a_1, a_2, …, a_K] ∈ R^{M×K}, K << M, rank(A) = K (R^{M×K}: set of M×K real matrices; a_i, i = 1, 2, …, K: the i-th real column vector of the dictionary matrix A; rank(A): rank of the matrix A);

denote A_1 = [a_1], A_k = [A_{k-1}, a_k], k = 2, 3, …, K (a_1: first column vector of A; A_1: the iteration-1 matrix; A_k: the iteration-k matrix with k column vectors; A_{k-1}: the iteration-(k-1) matrix);

compute A^+, i.e. the generalized inverse of the matrix A, by the Greville recursion (ref.: Ben-Israel A., Greville T.N.E., Generalized Inverses: Theory and Applications, Wiley, New York, 1974):

Initial step: A_1^+ = a_1^T/(a_1^T a_1);

Recursion step: for k = 2, 3, …, K

d_k = A_{k-1}^+ a_k;
b_k = a_k - A_{k-1} d_k;
c_k = b_k^T/(b_k^T b_k) (b_k ≠ 0);
A_k^+ = [A_{k-1}^+ - d_k c_k; c_k].

Here d_k is a temporary vector used in computing A_k^+, and b_k is a temporary column vector of the same dimension as a_k. The characteristic of the algorithm is that A_k^+ is obtained recursively from A_{k-1}^+, i.e. A_1^+, A_2^+, …, A_K^+ are computed successively rather than computing each A_k^+ directly at every iteration. For the incremental IGR_OMP high-dimensional sparse vector recovery in compressed sensing, the steps are as follows:
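The Greville recursion above can be sketched in NumPy as follows (a minimal illustration, assuming the columns of A are linearly independent so that b_k ≠ 0 at every step; the function name is illustrative, not part of the patent):

```python
import numpy as np

def greville_pinv(A):
    """Build A^+ column by column with the Greville recursion.

    A_1 = [a_1], A_k = [A_{k-1}, a_k]; each step reuses A_{k-1}^+
    instead of recomputing the pseudoinverse from scratch.
    """
    M, K = A.shape
    a1 = A[:, :1]
    Apinv = a1.T / (a1.T @ a1)           # A_1^+ for a nonzero first column
    for k in range(1, K):
        ak = A[:, k:k + 1]
        d = Apinv @ ak                   # d_k = A_{k-1}^+ a_k
        b = ak - A[:, :k] @ d            # b_k = a_k - A_{k-1} d_k
        c = b.T / (b.T @ b)              # c_k = b_k^T / (b_k^T b_k), b_k != 0
        Apinv = np.vstack([Apinv - d @ c, c])   # A_k^+ = [A_{k-1}^+ - d c; c]
    return Apinv
```

The result agrees with a from-scratch pseudoinverse, but each step costs only a few matrix-vector products.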
step A1, input the observation data vector y ∈ R^M and the dictionary matrix A = [a_1, a_2, …, a_N] ∈ R^{M×N}; set the initial support set index Ω_0 = ∅; set the initial residual vector r_0 equal to the observation data vector y, i.e. r_0 = y; let k = 1. Here R^M is the set of M×1 real column vectors, R^{M×N} is the set of M×N real matrices, and a_i is the i-th real column vector of the dictionary matrix A, i = 1, 2, …, N;
step A2, find the column j_k of the dictionary matrix A most strongly correlated with the (k-1)-th iteration residual vector r_{k-1}:

j_k ∈ argmax_j |(r_{k-1}, a_j)|, Ω_k = Ω_{k-1} ∪ {j_k},

where Ω_k is the k-th iteration support set index, Ω_{k-1} is the (k-1)-th iteration support set index, and a_j is the j-th real column vector of the dictionary matrix A;
step A3, from the generalized inverse A_{Ω_{k-1}}^+ of the dictionary submatrix A_{Ω_{k-1}} corresponding to the (k-1)-th iteration support index Ω_{k-1}, incrementally compute the generalized inverse A_{Ω_k}^+ of the dictionary submatrix A_{Ω_k} corresponding to the k-th iteration support index Ω_k:

if k = 1, then j_k is j_1, A_{Ω_1} = [a_{j_1}], its generalized inverse is A_{Ω_1}^+ = a_{j_1}^T/(a_{j_1}^T a_{j_1}), and Ω_k = {j_1}; go to step A4;

if k ≠ 1, perform the following calculations in sequence:

d = A_{Ω_{k-1}}^+ a_{j_k},
b = a_{j_k} - A_{Ω_{k-1}} d,
c = b^T/(b^T b),
A_{Ω_k}^+ = [A_{Ω_{k-1}}^+ - d c; c],

where a_{j_k} is the j_k-th real column vector of the dictionary matrix A, d is a temporary vector, b is a temporary M×1 column vector of the same dimension as a_{j_k}, and c is a temporary 1×M row vector.

Step A4, solve the least-squares problem over the current support, x_k = A_{Ω_k}^+ y;

step A5, update the residual vector, r_k = y - A_{Ω_k} x_k;
Step A6, determine whether the stopping criterion is satisfied, i.e. whether the Euclidean norm of the k-th iteration residual vector satisfies ‖r_k‖_2 ≤ ε (ε is a very small number, e.g. 10^-6); if yes, execute step A7; otherwise update k to k+1 and repeat steps A2 to A5;
step A7, output the sparse coefficient vector x ∈ R^N: the elements of x indexed by Ω_k are taken from x_k and all other elements are zero, where x(i) is the i-th element of the sparse coefficient vector x, x_k(i) is the i-th element of the vector x_k, and R^N is the set of N×1 real vectors.
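Steps A1 to A7 can be sketched as follows (a minimal NumPy illustration under the assumption that the selected columns stay linearly independent; the function name and default tolerance are illustrative, not part of the patent):

```python
import numpy as np

def igr_omp(y, A, eps=1e-6, max_iter=None):
    """Sketch of IGR_OMP: OMP whose least-squares step reuses the
    generalized inverse of the active dictionary via Greville updates."""
    M, N = A.shape
    max_iter = max_iter or M
    support, r = [], y.copy()
    Apinv, x_k = None, None
    for _ in range(max_iter):
        j = int(np.argmax(np.abs(A.T @ r)))   # step A2: most correlated column
        support.append(j)
        a = A[:, j:j + 1]
        if Apinv is None:                     # step A3, k = 1
            Apinv = a.T / (a.T @ a)
        else:                                 # step A3, k > 1 (Greville update)
            d = Apinv @ a
            b = a - A[:, support[:-1]] @ d
            c = b.T / (b.T @ b)
            Apinv = np.vstack([Apinv - d @ c, c])
        x_k = Apinv @ y                       # step A4: least-squares solve
        r = y - A[:, support] @ x_k           # step A5: residual update
        if np.linalg.norm(r) <= eps:          # step A6: stopping criterion
            break
    x = np.zeros(N)                           # step A7: embed into R^N
    x[support] = x_k
    return x
```

On an exactly sparse observation y = A x_0 the loop typically terminates after ‖x_0‖_0 iterations with a near-zero residual.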
A necessary and sufficient condition for b_k = 0 is a_k ∈ R(A_{k-1}) = span{a_1, a_2, …, a_{k-1}},

i.e. a_k is linearly dependent on a_1, a_2, …, a_{k-1}, where A_{k-1} = [a_1, a_2, …, a_{k-1}] (a_k denotes the k-th column vector of A_k, R(A_{k-1}) denotes the vector space generated by the k-1 column vectors a_1, a_2, …, a_{k-1} of A_{k-1}, A_{k-1} is the (k-1)-th iteration matrix, and A_{k-1}^+ is its generalized inverse);

b_k = (I_M - A_{k-1} A_{k-1}^+) a_k ∈ R(A_{k-1})^⊥ is the orthogonal projection of a_k onto the orthogonal complement of R(A_{k-1}); if b_k = 0 then a_k ∈ R(A_{k-1}), i.e. a_k is linearly dependent on R(A_{k-1}) (I_M is the M×M identity matrix, and R(A_{k-1})^⊥ denotes the orthogonal complement of the generated space R(A_{k-1})).

Since b_k is the orthogonal projection of a_k onto R(A_{k-1})^⊥ (a_i, i = 1, 2, …, k-1, is the i-th column vector of A_{k-1}):

(b_k, a_i) = 0, i = 1, 2, …, k-1, (4.7)

i.e. b_k is orthogonal to a_1, a_2, …, a_{k-1}.
The invention also discloses a high-dimensional sparse vector reconstruction method based on IQR_OMP (Incremental QR-decomposition-based Orthogonal Matching Pursuit), which comprises the following steps:
step B1, input the observation data vector y ∈ R^m and the dictionary matrix A = [a_1, a_2, …, a_n] ∈ R^{m×n}; set the initial support set index Ω_0 = ∅; set the initial residual vector equal to the observation data vector, i.e. r_0 = y; let k = 1. Here R^m denotes the set of m×1 real column vectors, R^{m×n} the set of m×n real matrices, and a_i the i-th m×1 real column vector of the dictionary matrix A, i = 1, 2, …, n;
step B2, find the column j_k of the dictionary matrix A most strongly correlated with the (k-1)-th iteration residual vector r_{k-1}:

j_k ∈ argmax_j |(r_{k-1}, a_j)|, Ω_k = Ω_{k-1} ∪ {j_k},

where Ω_k is the k-th iteration support set index, Ω_{k-1} is the (k-1)-th iteration support set index, and a_j is the j-th real column vector of the dictionary matrix A;
step B3, from Q_{k-1} and R_{k-1} of the (k-1)-th iteration, recursively compute Q_k and R_k of the k-th iteration:

if k = 1, then j_k = j_1, A_{Ω_1} = [a_{j_1}] = Q_1 R_1, the first column of Q_1 is q_1 = a_{j_1}/‖a_{j_1}‖_2 and R_1 = [‖a_{j_1}‖_2]; go to step B4;

if k ≠ 1, perform the following calculations in sequence:

β_k = Q_{k-1}^T a_{j_k},
q = a_{j_k} - Q_{k-1} β_k,
α_k = ‖q‖_2, q_k = q/α_k,
Q_k = [Q_{k-1}, q_k], R_k = [R_{k-1}, β_k; 0, α_k],

where a_{j_k} is the j_k-th column vector of the dictionary matrix A, q_k is the k-th column vector of the matrix Q_k, β_k is a temporary vector of the k-th iteration, and α_k is a temporary scalar of the k-th iteration;

step B4, update the residual vector by the recursion r_k = r_{k-1} - (q_k^T r_{k-1}) q_k;
step B5, determine whether the stopping criterion is satisfied, i.e. whether the Euclidean norm of the k-th iteration residual vector satisfies ‖r_k‖_2 ≤ ε (ε is a very small number, e.g. 10^-6); if yes, go to step B6; otherwise update k to k+1 and repeat steps B2 to B4;
step B6, solve for the least-squares coefficient vector over the support, x_k = R_k^{-1} Q_k^T y, where A_{Ω_k} = [a_{j_1}, a_{j_2}, …, a_{j_k}], j_1, j_2, …, j_k ∈ Ω_k, is the dictionary submatrix corresponding to the k-th iteration support index Ω_k, ‖·‖_2 denotes the Euclidean norm, R_k^{-1} is the inverse of the upper triangular matrix R_k (applied by back substitution), and Q_k^T is the transpose of Q_k;
step B7, output the sparse coefficient vector x: the elements of x indexed by Ω_k are taken from x_k and all other elements are zero, where x(i) denotes the i-th element of the sparse coefficient vector x and x_k(i) the i-th element of the vector x_k.
Using the Greville recursion relation, we obtain A_k^+ = [A_{k-1}^+ - d_k c_k; c_k], where d_k = A_{k-1}^+ a_k is a temporary vector, b_k = a_k - A_{k-1} d_k is a temporary column vector of the same dimension as a_k, and c_k = b_k^T/(b_k^T b_k).
According to QR decomposition, suppose that at step k-1 the factorization of the matrix A_{k-1} has already been computed:

A_{k-1} = Q_{k-1} R_{k-1} (4.9)

where A_{k-1} is the (k-1)-th iteration matrix, Q_{k-1} = [q_1, q_2, …, q_{k-1}] ∈ R^{M×(k-1)} is a column-orthogonal matrix (q_i, i = 1, 2, …, k-1, is the i-th column vector of Q_{k-1}, and q_{k-1} its (k-1)-th column), and R_{k-1} is a (k-1)×(k-1) upper triangular matrix. Consider the computation of step k:

A_k = [A_{k-1}, a_k] = [Q_{k-1}, q_k] [R_{k-1}, β_k; 0, α_k] = Q_k R_k,

where β_k = Q_{k-1}^T a_k ∈ R^{(k-1)×1} and α_k is a scalar, so that (I: identity matrix):

α_k q_k = a_k - Q_{k-1} β_k = (I - Q_{k-1} Q_{k-1}^T) a_k, α_k = ‖(I - Q_{k-1} Q_{k-1}^T) a_k‖_2.

Since A_k^+ = R_k^{-1} Q_k^T, where R_k^{-1} is the inverse of the matrix R_k, we therefore have:

x_k = R_k^{-1} Q_k^T y, obtained by back substitution.

Moreover, computing the residual vector r_k = s - A_k x_k at every step requires one matrix-vector multiplication (s is the vector to be reconstructed from, i.e. the observation vector), and r_k is used at each step to select from A the column with the largest absolute inner product with r_k, which is appended to A_k to form A_{k+1}. For the recursive QR decomposition, this is: r_k is the orthogonal projection of y onto R(A_k)^⊥, i.e.:

r_k = (I - Q_k Q_k^T) y = r_{k-1} - (q_k^T y) q_k.
has the advantages that:
the IGR _ OMP method and the IQR _ OMP two-increment OMP method for sparse reconstruction of high-dimensional signal vectors have the following characteristics:
1. High computation speed. For the matrix A_k = [a_1, a_2, …, a_k] ∈ R^{M×k}, y ∈ R^M, rank(A_k) = k, the solution x_k of min_x ‖y - A_k x‖_2 can be obtained by QR decomposition at a cost of about 2Mk^2 flops for each k (ref.: L.N. Trefethen, D. Bau, Numerical Linear Algebra, Chinese edition copy 2006 by Posts and Telecom Press). The conventional OMP method pays this full cost at every iteration, so its cumulative workload over K iterations is roughly the sum of these costs, whereas the incremental IQR_OMP algorithm updates the existing factorization with only O(Mk) work per iteration. The workload of the incremental algorithm is therefore clearly superior to that of the conventional OMP algorithm, especially for high-dimensional problems.
2. From the viewpoint of computational stability, the normal-equation method works with A_k^T A_k, whose spectral condition number is cond_2(A_k)^2, while the incremental IQR_OMP method depends only on cond_2(A_k) rather than its square. Thus when the condition number of A_k is even moderately large, the normal-equation algorithm is not only more expensive but also seriously degrades the numerical stability. The present method is therefore clearly superior to the normal-equation method in both computational accuracy and speed.
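The squaring of the condition number by the normal equations can be checked numerically (a small illustration with a synthetic matrix of known singular values; the construction is an assumption for demonstration, not part of the patent):

```python
import numpy as np

# Synthetic 50x4 matrix with singular values 1, 1e-1, 1e-2, 1e-3, so
# cond_2(A) = 1e3 by construction. The normal-equation matrix A^T A has
# cond_2(A^T A) = cond_2(A)^2 = 1e6, which is what degrades the
# normal-equation OMP step relative to the QR-based incremental update.
rng = np.random.default_rng(1)
U, _ = np.linalg.qr(rng.standard_normal((50, 4)))   # column-orthogonal U
V, _ = np.linalg.qr(rng.standard_normal((4, 4)))    # orthogonal V
A = U @ np.diag([1.0, 1e-1, 1e-2, 1e-3]) @ V.T

cond_A = np.linalg.cond(A)
cond_AtA = np.linalg.cond(A.T @ A)
print(cond_A, cond_AtA)   # the second is the square of the first
```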
3. At present, theoretical results on sparse reconstruction of signal vectors are difficult to check during actual computation. In the incremental algorithms IQR_OMP and IGR_OMP given here, the quantities r_{k,k} (the k-th diagonal entry of R_k, i.e. α_k) and ‖b_k‖_2 can serve as a measure of the degree of linear independence of a_k relative to the preceding a_1, a_2, …, a_{k-1} (ref.: Zhao Jinxi, Huang's method for compatible linear equations and its generalization, Numerical Mathematics — A Journal of Chinese Universities, No. 1, pp. 8-17, 1981.). Being able to check the degree of linear independence between vector systems during the computation itself is important. The incremental algorithms provided by the invention thus markedly reduce the computational expense, are strongly robust, and report the degree of linear independence of the currently selected atom relative to the previous atoms as the computation proceeds. Numerical results show that the algorithms presented here have significant advantages.
Drawings
The foregoing and other advantages of the invention will become more apparent from the following detailed description of the invention when taken in conjunction with the accompanying drawings.
FIG. 1 is a flowchart of the IGR _ OMP algorithm.
FIG. 2 is a flow chart of the IQR _ OMP algorithm.
Detailed Description
The invention is further explained below with reference to the drawings and the embodiments.
Among sparse reconstruction algorithms for high-dimensional signal vectors in compressed sensing, greedy algorithms have drawn attention because of their low computational complexity and fast convergence, and Orthogonal Matching Pursuit (OMP) is the most important one (ref.: Joel A. Tropp and Anna C. Gilbert, "Signal Recovery From Random Measurements Via Orthogonal Matching Pursuit", IEEE Transactions on Information Theory, vol. 53, no. 12, December 2007.).
The classical Orthogonal Matching Pursuit (OMP) algorithm is as follows:

Input: observation data vector y ∈ R^M (R^M: set of M-dimensional real column vectors) and dictionary matrix A = [a_1, a_2, …, a_N] ∈ R^{M×N} (a_i, i = 1, 2, …, N, is the i-th column vector of the matrix A; R^{M×N}: set of M×N real matrices).

Output: sparse signal vector x ∈ R^N (R^N: set of N-dimensional real column vectors).

Step 1. Initialization: set the initial support set index Ω_0 = ∅; set the initial residual vector equal to the observation data vector, i.e. r_0 = y; let k = 1.

Step 2. Identification: find the column j_k of the matrix A most strongly correlated with the (k-1)-th iteration residual vector r_{k-1}:

j_k ∈ argmax_j |(r_{k-1}, a_j)|, Ω_k = Ω_{k-1} ∪ {j_k},

where Ω_k is the k-th iteration support set index, Ω_{k-1} the (k-1)-th iteration support set index, and a_j the j-th column vector of A.

Step 3. Estimation: solve the least-squares problem x_k = argmin_x ‖y - A_{Ω_k} x‖_2, where A_{Ω_k} = [a_{j_1}, a_{j_2}, …, a_{j_k}] is the dictionary submatrix corresponding to the support index Ω_k (a_i, i = j_1, j_2, …, j_k ∈ Ω_k, are its column vectors).

Step 4. Residual update: r_k = y - A_{Ω_k} x_k.

Step 5. If the stopping criterion is met, i.e. the Euclidean norm of the k-th iteration residual vector satisfies ‖r_k‖_2 ≤ ε (ε is a very small number, e.g. 10^-6), stop iterating and go to Step 6; otherwise let k ← k+1 and repeat Step 2 through Step 4.

Step 6. Output the coefficient vector x: its i-th element x(i) equals the i-th element x_k(i) of the vector x_k on the support and is zero elsewhere.
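For contrast with the incremental variants, the classical OMP loop above can be sketched as follows (a minimal NumPy illustration; the least-squares subproblem is deliberately re-solved from scratch each iteration, here with `lstsq` rather than the explicit normal equation — function name and tolerance are illustrative):

```python
import numpy as np

def omp_classical(y, A, eps=1e-6, max_iter=None):
    """Classical OMP: the least-squares problem over the current support
    is re-solved from scratch at every iteration (no reuse of previous
    work), which is the cost the incremental variants avoid."""
    M, N = A.shape
    max_iter = max_iter or M
    support, r, x_k = [], y.copy(), None
    for _ in range(max_iter):
        j = int(np.argmax(np.abs(A.T @ r)))             # Step 2: identification
        support.append(j)
        Asub = A[:, support]
        x_k, *_ = np.linalg.lstsq(Asub, y, rcond=None)  # Step 3: full re-solve
        r = y - Asub @ x_k                              # Step 4: residual
        if np.linalg.norm(r) <= eps:                    # Step 5: stopping test
            break
    x = np.zeros(N)                                     # Step 6: output
    x[support] = x_k
    return x
```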
Since the number K of non-zero elements of the solution vector satisfies K < M < N, Step 3 of the algorithm actually solves an M×k overdetermined linear least-squares problem. That is, to solve an underdetermined linear system for its sparsest solution, the OMP algorithm successively solves overdetermined linear least-squares problems of size M×k (k = 1, 2, …, K). It is important to recognize that the greedy algorithm solves the sparse reconstruction problem for a high-dimensional signal vector essentially by computing the least-squares solution of the overdetermined linear system formed by the column vectors (atoms) of A corresponding to the non-zero elements of x. The OMP algorithm has two key steps:
1. For the matrix (dictionary) A ∈ R^{M×N}, determine the column (atom) indices that match the non-zero elements of x. By the greedy principle, at step k one finds among the N columns of A the vector with the largest absolute inner product with r_{k-1} = y - A x_{k-1} (ref.: S. Mallat and Z. Zhang, "Matching pursuits with time-frequency dictionaries", IEEE Trans. Signal Processing, 1993, 41, pp. 3397-3415.) and adds it to form a new support set; this is then followed by step 2;
2. For large problems, the workload of solving the least-squares solution in the OMP algorithm by the usual normal-equation method is often nearly 80% of the total workload. Finding a more efficient algorithm for the least-squares solution is therefore the core problem in the OMP algorithm.
It was pointed out above that, scientifically, sparse reconstruction of a signal vector amounts to determining the sparsest solution of a large, high-order underdetermined linear system. The OMP algorithm introduces the support set column by column and solves the overdetermined linear least-squares problem:

min_x ‖y - A_{Ω_k} x‖_2, (3.3)

whereas the usual OMP algorithm is a normal-equation method, which at the k-th iteration computes:

x_k = (A_{Ω_k}^T A_{Ω_k})^{-1} A_{Ω_k}^T y. (3.4)

For solving the least-squares problem (3.3) of the overdetermined linear system, since the number of iterations K < M < N, the normal-equation method (3.4) is reasonably effective compared with other direct methods. However, there is a fundamental problem: the coefficient matrix of (3.3) gains one column at every iteration, and (3.3) must be re-solved every time, so the accumulated computation is very large. For ordinary small and medium-sized problems the effect is not obvious, but for large high-order problems it is a serious issue. How to effectively exploit the information of successive OMP iterations is the starting point for constructing the new incremental algorithms.
To explain the problem more intuitively, consider the overdetermined linear problem (3.3) solved by the invention: over 2000 iteration steps, the accumulated computation time of the normal-equation method is 1667.6 seconds, while the new incremental algorithm, which reuses the information of previous iterations, accumulates only 137.36 seconds — a twelve-fold speed-up on the same problem. At the 2000th iteration itself, the normal-equation method took 4.5273 seconds, whereas the method of the present invention took only 0.2427 seconds.
It was stated above that the existence of sparse solutions can be discussed using the spark or the mutual-coherence of A. It must be noted, however, that neither condition is convenient to verify. In fact, since the invention solves an M×k overdetermined linear system, we have:

Theorem 3.1. Consider the underdetermined system Ax = y, where A ∈ R^{M×N}, x ∈ R^N, y ∈ R^M, rank(A) = M, ‖x‖_0 = K and K < M < N. If the vector system formed by any K columns of A = [a_1, a_2, …, a_N] is linearly independent, then there exists a solution vector x satisfying the sparsity condition ‖x‖_0 = K.

This condition seems difficult to verify, but since the coefficient matrix of the overdetermined linear system is introduced column by column, it suffices to ensure that the vector system composed of the column-by-column introduced vectors remains linearly independent. The key question is how to verify, during the computation, that a new vector is linearly independent of the vector system of the existing support set — something the conventional OMP algorithm cannot do. How to exploit this column-by-column structure to design an algorithm is the motivation for the new algorithms of the present invention.
As shown in fig. 1 and 2, the present invention discloses a high-dimensional sparse vector reconstruction method based on IGR _ OMP and a high-dimensional sparse vector reconstruction method based on IQR _ OMP.
Principle of the incremental OMP method:

Under certain conditions, the essence of the OMP algorithm is to solve an overdetermined linear system whose columns are selected successively, each time admitting into the support set the column vector with the largest energy. That is, at the k-th iteration, the column vector with the largest energy is joined to the previously selected k-1 column vectors to form a new M×k overdetermined system A_{Ω_k} x = y, where A_{Ω_k} ∈ R^{M×k} and y ∈ R^M, and its least-squares solution is computed. Understanding and analyzing OMP from this viewpoint captures the essence of the method and exposes the defect of the traditional OMP method: it computes the generalized inverse of the submatrix from scratch at every iteration. Whether by directly calling standard functions or by directly evaluating the normal-equation formula, many extra computations are added, especially for large problems, and this becomes more prominent as the number of iterations increases. The k-th iteration of an incremental algorithm reuses the information obtained in previous iterations, which greatly improves the effectiveness of the algorithm. Two incremental OMP algorithms are given below.
IGR_OMP algorithm:

Let A = [a_1, a_2, …, a_K] ∈ R^{M×K}, K < M, rank(A) = K (a_i, i = 1, 2, …, K, is the i-th column vector of the matrix A; R^{M×K}: set of M×K real matrices);

denote A_1 = [a_1], A_k = [A_{k-1}, a_k], k = 2, 3, …, K (a_1: first column vector of A; A_1: the iteration-1 matrix; A_k: the k-th iteration matrix; A_{k-1}: the (k-1)-th iteration matrix);

compute A^+ by the Greville recursion (ref.: Ben-Israel A., Greville T.N.E., Generalized Inverses: Theory and Applications, Wiley, New York, 1974.):

Initial step: A_1^+ = a_1^T/(a_1^T a_1);

Recursion step: for k = 2, 3, …, K

d_k = A_{k-1}^+ a_k; b_k = a_k - A_{k-1} d_k; c_k = b_k^T/(b_k^T b_k) (b_k ≠ 0); A_k^+ = [A_{k-1}^+ - d_k c_k; c_k].

The characteristic of the algorithm is that A_k^+ is obtained recursively from A_{k-1}^+, where d_k is a temporary vector and b_k a temporary column vector of the same dimension as a_k. Hence the following IGR_OMP algorithm:
Step A1, input the observation data vector y ∈ R^M (R^M: set of M×1 real column vectors) and the dictionary matrix A = [a_1, a_2, …, a_N] ∈ R^{M×N} (a_i, i = 1, 2, …, N, is the i-th column vector of A; R^{M×N}: set of M×N real matrices); set the initial support set index Ω_0 = ∅; set the initial residual vector equal to the input observation data vector, i.e. r_0 = y; let k = 1;

step A2, find the column j_k of the dictionary matrix A most strongly correlated with the (k-1)-th iteration residual vector r_{k-1} (Ω_k: support set index of the k-th iteration; Ω_{k-1}: support set index of the (k-1)-th iteration; a_j: j-th column vector of A):

j_k ∈ argmax_j |(r_{k-1}, a_j)|, Ω_k = Ω_{k-1} ∪ {j_k};

step A3, from the generalized inverse A_{Ω_{k-1}}^+ of the submatrix A_{Ω_{k-1}} corresponding to the (k-1)-th iteration support set index Ω_{k-1}, incrementally compute the generalized inverse A_{Ω_k}^+ of the submatrix A_{Ω_k} corresponding to the k-th iteration support set index Ω_k (a_{j_k} denotes the j_k-th column of A):

if k = 1, then j_k = j_1, A_{Ω_1} = [a_{j_1}] and its generalized inverse is A_{Ω_1}^+ = a_{j_1}^T/(a_{j_1}^T a_{j_1}), Ω_k = {j_1}; go to step A4;

if k ≠ 1, perform the following steps in sequence:

d = A_{Ω_{k-1}}^+ a_{j_k}, b = a_{j_k} - A_{Ω_{k-1}} d, c = b^T/(b^T b), A_{Ω_k}^+ = [A_{Ω_{k-1}}^+ - d c; c],

where d is a temporary vector, b is a temporary column vector of the same dimension as a_{j_k}, and c is a temporary row vector;

step A4, solve the minimization problem over the current support, x_k = A_{Ω_k}^+ y;

step A5, update the residual vector, r_k = y - A_{Ω_k} x_k;

step A6, determine whether the stopping criterion is satisfied, i.e. whether the Euclidean norm of the k-th iteration residual vector satisfies ‖r_k‖_2 ≤ ε (ε is a very small number, e.g. 10^-6); if yes, execute step A7; otherwise update k to k+1 and repeat steps A2 to A5;

step A7, output the sparse coefficient vector x ∈ R^N (R^N: set of N×1 real column vectors; x(i): i-th element of the output sparse coefficient vector x; x_k(i): i-th element of the step-A4 minimization solution vector x_k): the elements of x indexed by Ω_k are taken from x_k and all other elements are zero.
Properties of the column Greville recursion:

Property 4.1. A necessary and sufficient condition for b_k = 0 is a_k ∈ R(A_{k-1}) = span{a_1, a_2, …, a_{k-1}}, i.e. a_k is linearly dependent on a_1, a_2, …, a_{k-1}.

This is because b_k = a_k - A_{k-1} A_{k-1}^+ a_k = (I_M - A_{k-1} A_{k-1}^+) a_k.

This shows that b_k ∈ R(A_{k-1})^⊥ is the orthogonal projection of a_k onto the orthogonal complement of R(A_{k-1}); b_k = 0 means a_k ∈ R(A_{k-1}), hence a_k is linearly dependent on R(A_{k-1}), and vice versa.

In actual computation, ‖b_k‖_2 is an important index: it reflects the degree of linear dependence between a_k and R(A_{k-1}).

In the compressed sensing problem, the assumptions on the sensing matrix exclude the case b_k = 0. But the index ‖b_k‖_2 remains very important, for it reflects the degree of linear dependence between a_k and R(A_{k-1}). If ‖b_k‖_2 ≠ 0 but is very small, a_k is close to linearly dependent on a_1, a_2, …, a_{k-1}, i.e. its degree of linear independence is very weak — the situation known as numerical instability. This matters because it solves, during the computation itself, the problem of checking the linear independence of a_k with respect to the preceding vector system a_1, a_2, …, a_{k-1}, which is a major issue in numerical computation.

Since b_k is the orthogonal projection of a_k onto R(A_{k-1})^⊥:

(b_k, a_i) = 0, i = 1, 2, …, k-1, (4.7)

i.e. b_k is orthogonal to a_1, a_2, …, a_{k-1}.
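Property (4.7) can be checked numerically (a small illustration using NumPy's `pinv` in place of the recursion; the random test matrix is an assumption for demonstration):

```python
import numpy as np

# b_k = (I - A_{k-1} A_{k-1}^+) a_k is the orthogonal projection of a_k
# onto R(A_{k-1})^⊥, so (b_k, a_i) = 0 for i = 1, ..., k-1, and
# ||b_k||_2 measures how far a_k is from span{a_1, ..., a_{k-1}}.
rng = np.random.default_rng(2)
A = rng.standard_normal((20, 6))
Ak1, ak = A[:, :5], A[:, 5]                 # A_{k-1} and the new column a_k
bk = ak - Ak1 @ (np.linalg.pinv(Ak1) @ ak)  # projection residual b_k
print(np.abs(Ak1.T @ bk).max())             # inner products with a_1..a_{k-1}: ~0
```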
IQR_OMP algorithm:

Suppose the QR factorization has already been computed:

A_{k-1} = Q_{k-1} R_{k-1}, (4.9)

where Q_{k-1} = [q_1, q_2, …, q_{k-1}] ∈ R^{M×(k-1)} is a column-orthogonal matrix and R_{k-1} a (k-1)×(k-1) upper triangular matrix. The computation of step k is considered below:

A_k = [A_{k-1}, a_k] = [Q_{k-1}, q_k] [R_{k-1}, β_k; 0, α_k] = Q_k R_k,

where β_k = Q_{k-1}^T a_k ∈ R^{(k-1)×1} and α_k is a scalar. It follows that:

α_k q_k = a_k - Q_{k-1} β_k, α_k = ‖a_k - Q_{k-1} β_k‖_2.

Therefore:

x_k = R_k^{-1} Q_k^T y.

Similarly, computing the residual r_k = s - A_k x_k at every step requires one matrix-vector multiplication (s: the vector to be reconstructed from, i.e. the observation vector), and r_k is used at each step to select from A the column with the largest absolute inner product with r_k, which is appended to A_k to form A_{k+1}. In this case, for the recursive QR decomposition:

Property 4.4. In the IQR_OMP algorithm, r_k is exactly the orthogonal projection of y onto R(A_k)^⊥, i.e.:

r_k = (I - Q_k Q_k^T) y = r_{k-1} - (q_k^T y) q_k, k = 1, 2, …, K.

Computing the residual vector r_k by this recursion brings great convenience: it avoids the direct multiplication of a matrix and a vector, and since the computation of the intermediate quantity x_k is avoided in the inner loop, only one inner product per step is needed, which reduces the computational effort for large problems. Hence the following algorithm:
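The residual recursion of Property 4.4 can be checked numerically (a small NumPy illustration comparing the one-inner-product-per-step update with a direct least-squares residual; the random test data is an assumption for demonstration):

```python
import numpy as np

# With A_k = Q_k R_k, the OMP residual is r_k = (I - Q_k Q_k^T) y, so it
# can be updated with one inner product per step,
#     r_k = r_{k-1} - (q_k^T r_{k-1}) q_k,
# without forming the intermediate coefficient vector x_k in the loop.
rng = np.random.default_rng(3)
A = rng.standard_normal((30, 5))
y = rng.standard_normal(30)

Q, _ = np.linalg.qr(A)                              # column-orthogonal Q_k
r_direct = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]

r = y.copy()
for k in range(Q.shape[1]):
    qk = Q[:, k]
    r = r - (qk @ r) * qk                           # one inner product per step
print(np.linalg.norm(r - r_direct))                 # ~0
```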
The IQR_OMP algorithm:

Step B1, input the observation data vector y ∈ R^m (R^m: set of m×1 real column vectors) and the dictionary matrix A = [a_1, a_2, …, a_n] ∈ R^{m×n} (a_i, i = 1, 2, …, n, is the i-th column vector of A; R^{m×n}: set of m×n real matrices); set the initial support set index Ω_0 = ∅; set the initial residual vector equal to the observation data vector, i.e. r_0 = y; let k = 1;

step B2, find the column j_k of the dictionary matrix A most strongly correlated with the (k-1)-th iteration residual vector r_{k-1}:

j_k ∈ argmax_j |(r_{k-1}, a_j)|, Ω_k = Ω_{k-1} ∪ {j_k};

where Ω_k is the k-th iteration support set index, Ω_{k-1} the (k-1)-th iteration support set index, and a_j the j-th column vector of A.

Step B3, from Q_{k-1} and R_{k-1} of the (k-1)-th iteration, recursively compute Q_k and R_k of the k-th iteration:

if k = 1, then j_k = j_1, A_{Ω_1} = [a_{j_1}] = Q_1 R_1, with the first column of Q_1 being q_1 = a_{j_1}/‖a_{j_1}‖_2 and R_1 = [‖a_{j_1}‖_2]; go to step B4;

if k ≠ 1, perform the following in sequence:

β_k = Q_{k-1}^T a_{j_k}, q = a_{j_k} - Q_{k-1} β_k, α_k = ‖q‖_2, q_k = q/α_k,
Q_k = [Q_{k-1}, q_k], R_k = [R_{k-1}, β_k; 0, α_k],

where a_{j_k} is the j_k-th column vector of A, q_k the k-th column vector of the matrix Q_k, and β_k, α_k temporary quantities of the k-th iteration;

step B4, update the residual vector by the recursion r_k = r_{k-1} - (q_k^T r_{k-1}) q_k;

step B5, determine whether the stopping criterion is satisfied, i.e. whether the Euclidean norm of the k-th iteration residual vector satisfies ‖r_k‖_2 ≤ ε (ε is a very small number, e.g. 10^-6); if yes, go to step B6; otherwise update k to k+1 and repeat steps B2 to B4;

step B6, solve x_k = R_k^{-1} Q_k^T y by back substitution, where A_{Ω_k} = [a_{j_1}, a_{j_2}, …, a_{j_k}] (j_1, j_2, …, j_k ∈ Ω_k) is the dictionary submatrix corresponding to the support index Ω_k;

step B7, output the sparse coefficient vector x: its i-th element x(i) is taken from the corresponding element x_k(i) of the step-B6 solution x_k on the support Ω_k and is zero elsewhere.
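Steps B1 to B7 can be sketched as follows (a minimal NumPy illustration, assuming the selected columns stay linearly independent so that α_k ≠ 0; the function name and default tolerance are illustrative, not part of the patent):

```python
import numpy as np

def iqr_omp(y, A, eps=1e-6, max_iter=None):
    """Sketch of IQR_OMP: the QR factorization of the active dictionary
    grows by one column per iteration (steps B3-B4), and x_k is formed
    only once at the end by solving the triangular system (step B6)."""
    M, N = A.shape
    max_iter = max_iter or M
    support, r = [], y.copy()
    Q = np.zeros((M, 0))
    R = np.zeros((0, 0))
    z = np.zeros(0)                          # accumulates Q_k^T y entry by entry
    for _ in range(max_iter):
        j = int(np.argmax(np.abs(A.T @ r)))  # step B2: most correlated column
        support.append(j)
        a = A[:, j]
        beta = Q.T @ a                       # step B3: beta_k = Q_{k-1}^T a
        q = a - Q @ beta
        alpha = np.linalg.norm(q)            # alpha_k = ||q||_2
        qk = q / alpha
        Q = np.column_stack([Q, qk])         # Q_k = [Q_{k-1}, q_k]
        R = np.block([[R, beta[:, None]],    # R_k = [R_{k-1}, beta_k; 0, alpha_k]
                      [np.zeros((1, len(support) - 1)), np.array([[alpha]])]])
        z = np.append(z, qk @ y)
        r = r - (qk @ r) * qk                # step B4: residual recursion
        if np.linalg.norm(r) <= eps:         # step B5: stopping criterion
            break
    x_k = np.linalg.solve(R, z)              # step B6: R_k x_k = Q_k^T y
    x = np.zeros(N)                          # step B7: embed into R^n
    x[support] = x_k
    return x
```

No generalized inverse is ever formed; only one triangular solve is needed after the loop terminates.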
Using the Greville recursion relation, A_k^+ = [A_{k-1}^+ - d_k c_k; c_k]; therefore the process of computing the M-P generalized inverse by the Greville recursion is consistent with the QR recursion. In fact, under the conditions of Theorem 3.1, A_k^+ = R_k^{-1} Q_k^T.
Examples
The computing environment for the numerical experiments is a DELL PC WORKGROUP with an Intel(R) Core(TM) i7-4790 3.6 GHz CPU and 8.0 GB of memory, running the Windows 7 Ultimate edition; the computing software is MATLAB R2010b.
Case 1: A ∈ R^{2000×5000}, K = 600;
Case 2: A ∈ R^{3000×6000}, K = 1000;
Case 3: A ∈ R^{10000×15000}, K = 3000.
The computation time comparisons (in seconds) of the incremental OMP algorithms for the 1-D cases are shown in Table 1 below:
TABLE 1
Case | IQR_OMP | IGR_OMP | OMP
Case1 | 7.6331 | 9.112 | 30.8978
Case2 | 29.2647 | 34.8170 | 164.7083
Case3 | 810.4921 | 962.1627 | 10386.0
The three algorithms above are as follows:

the IQR_OMP algorithm: the algorithm given herein, based on QR decomposition and the new residual recurrence formula;

the IGR_OMP algorithm: the algorithm given herein, based on the Greville recurrence formula and the new residual recurrence formula;

the OMP algorithm: the classical OMP algorithm, in which the generalized inverse is computed via the normal equation, i.e. A_{Ω_k}^+ = (A_{Ω_k}^T A_{Ω_k})^{-1} A_{Ω_k}^T. It should be noted that for each case the different algorithms use the same raw data A ∈ R^{M×N} and y ∈ R^M.
Computation results: the PSNR of the three methods in the second case was 118.7734; in the third case the PSNR of the three methods was 144.3529 and the MSE was 2.38611e-010. It can be seen that the algorithms IGR_OMP and IQR_OMP presented here are fast and efficient and have significant advantages, particularly for large problems. For example, in the third case above, the computation time of the algorithms presented here is less than one tenth of that of the OMP algorithm, as shown in Table 2 (2-D Lena image, 256×256) and Table 3 (2-D case, 1024×1024):
TABLE 2
 | IGR_OMP | OMP | IRLS | SP | GBP | IHT | CoSaMP
CPU time (s) | 0.18 | 0.31 | 8.37 | 1.40 | 4.18 | 0.23 | 2.30
PSNR | 22.06 | 22.08 | 23.63 | 21.04 | 22.77 | 15.66 | 19.77
TABLE 3
 | IQR_OMP | OMP | IRLS | SP | GBP | IHT | CoSaMP
CPU time (s) | 11.30 | 31.86 | 5809 | 243 | 2325 | 14.03 | 533
PSNR | 45.90 | 45.90 | 48.70 | 46.34 | 47.83 | 46.79 | 45.27
(1) For the 256×256 Lena image in Table 2, the measurement matrix is a random matrix and the sparsifying transform is the DCT, computed column by column. The methods are:
IGR_OMP: the incremental OMP algorithm based on the Greville recursion, as given herein;
IQR_OMP: the incremental OMP algorithm based on QR decomposition, as set forth herein;
OMP: the classical OMP method based on the normal equations;
IRLS: Iteratively Reweighted Least Squares;
SP: Subspace Pursuit;
GBP: Greedy Basis Pursuit;
IHT: Iterative Hard Thresholding;
CoSaMP: Compressive Sampling Matching Pursuit.
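The experimental setup described above (random measurement matrix, column-by-column DCT sparsifying transform) can be sketched as follows. The sizes, variable names, and orthonormal DCT-II construction are illustrative assumptions, with 64-long columns standing in for the 256-long columns of the Lena experiment:

```python
import numpy as np

def dct_matrix(N):
    """Orthonormal DCT-II matrix C: the coefficients of a column x are
    C @ x, and C @ C.T = I, so the inverse-DCT dictionary is C.T."""
    n = np.arange(N)
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * N))
    C[0, :] /= np.sqrt(2.0)   # rescale the DC row for orthonormality
    return C

rng = np.random.default_rng(0)
N, M = 64, 32                                    # toy column length and measurement count
img = rng.standard_normal((N, N))                # stand-in for an image
Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # random measurement matrix
Psi = dct_matrix(N).T                            # inverse-DCT dictionary (atoms as columns)
Y = Phi @ img                                    # compressed measurements, column by column
A_eff = Phi @ Psi                                # effective dictionary handed to the OMP solver
```

Each column of `Y` is then reconstructed independently by running the sparse solver with dictionary `A_eff`, and the recovered DCT coefficients are mapped back through `Psi`.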
Among these methods, the IQR_OMP and IGR_OMP algorithms are the methods of the present invention; the other methods use the software provided by Dr. Chengfu Huo of the Hong Kong University of Science and Technology.
The calculations show that the incremental algorithms IGR_OMP and IQR_OMP presented and discussed herein have significant advantages.
The present invention provides an incremental orthogonal matching pursuit method for high-dimensional sparse vector reconstruction, and there are many ways and approaches to implement this technical solution. The above description is only a preferred embodiment of the present invention; it should be noted that those skilled in the art may make various improvements and modifications without departing from the principle of the present invention, and such improvements and modifications should also be regarded as falling within the protection scope of the present invention. All components not specified in this embodiment can be realized by the prior art.
Claims (1)
1. A method for fast image compression based on IGR_OMP, characterized in that compressed sensing is applied in the field of image processing: information of dimension far lower than that of the signal or image vector is used to determine the sparsest coefficient representation of the high-dimensional vector over an overcomplete dictionary, and the sparse solution of an underdetermined linear system is solved to reconstruct the image vector. The method specifically comprises the following steps:
step A1, inputting the observation data vector y ∈ R^M and the dictionary matrix A = [a_1, a_2, …, a_N] ∈ R^(M×N); letting the initial support set index Ω_0 = ∅; letting the initial residual vector r_0 equal the observation data vector y, i.e. r_0 = y; and letting k = 1, where R^M is the set of M × 1 dimensional real column vectors, R^(M×N) is the set of M × N dimensional real matrices, and a_i denotes the i-th real column vector in the dictionary matrix A, i = 1, 2, …, N;
step A2, finding, in the dictionary matrix A, the column index j_k most strongly correlated with the (k-1)-th iteration residual vector r_{k-1}:
j_k ∈ arg max_j |(r_{k-1}, a_j)|, Ω_k = Ω_{k-1} ∪ {j_k},
where Ω_k is the k-th iteration support set index, Ω_{k-1} is the (k-1)-th iteration support set index, and a_j is the j-th real column vector in the dictionary matrix A;
step A3, from the generalized inverse matrix (A_{Ω_{k-1}})^+ of the dictionary submatrix A_{Ω_{k-1}} corresponding to the (k-1)-th iteration support index Ω_{k-1}, incrementally computing the generalized inverse matrix (A_{Ω_k})^+ of the dictionary submatrix A_{Ω_k} corresponding to the k-th iteration support index Ω_k;
step A6, judging whether the stopping criterion is satisfied, i.e. whether the Euclidean norm of the k-th iteration residual vector satisfies ||r_k||_2 ≤ ε; if satisfied, performing step A7; otherwise, updating k to k + 1 and repeating steps A2 to A5;
step A7, outputting the sparse coefficient vector x ∈ R^N, where x(i) is the i-th element of the sparse coefficient vector x, x_k(i) is the i-th element of the vector x_k, and R^N is the set of N × 1 dimensional real column vectors;
if k ≠ 1, the following calculations are performed in sequence:
d = (A_{Ω_{k-1}})^+ a_{j_k}, b = a_{j_k} - A_{Ω_{k-1}} d, c = b^T/(b^T b),
where a_{j_k} is the j_k-th real column vector of the dictionary matrix A, d is a temporary substitution vector, b is a temporary substitution vector of dimension M × 1, and c is a temporary substitution vector of dimension 1 × M;
the 2-D Lena images with the formats of 256 × 256 and 1024 × 1024 respectively are compressed by using the methods of the steps A1-A7, the measurement matrix is a random matrix, and the sparse transform DCT transform is calculated column by column.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810431421.8A CN108573262B (en) | 2018-05-08 | 2018-05-08 | IGR-OMP-based high-dimensional sparse vector reconstruction method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108573262A CN108573262A (en) | 2018-09-25 |
CN108573262B true CN108573262B (en) | 2021-06-25 |
Family
ID=63572008
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810431421.8A Active CN108573262B (en) | 2018-05-08 | 2018-05-08 | IGR-OMP-based high-dimensional sparse vector reconstruction method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108573262B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110224790B (en) * | 2019-06-11 | 2022-02-15 | 东莞理工学院 | Subspace code division greedy method based on Echelon-Ferrs |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102393911A (en) * | 2011-07-21 | 2012-03-28 | 西安电子科技大学 | Background clutter quantization method based on compressive sensing |
CN102750262A (en) * | 2012-06-26 | 2012-10-24 | 清华大学 | Method for realizing sparse signal recovery on CPU (Central Processing Unit) based on OMP (Orthogonal Matching Pursuit) algorithm |
CN102833514A (en) * | 2012-08-07 | 2012-12-19 | 南昌大学 | Measurement-matrix-controlled image compressive sensing and image encryption method |
CN104181508A (en) * | 2014-08-31 | 2014-12-03 | 西安电子科技大学 | Threatening radar signal detection method based on compressed sensing |
CN104333389A (en) * | 2014-10-23 | 2015-02-04 | 湘潭大学 | Adaptive threshold value iterative reconstruction method for distributed compressed sensing |
CN104977570A (en) * | 2015-05-08 | 2015-10-14 | 西安电子科技大学 | Null-space-tuning-based dual-channel sparse SAR moving target detection improvement method |
CN106789766A (en) * | 2016-11-28 | 2017-05-31 | 杭州电子科技大学 | Sparse OFDM channel estimation method based on homotopy method |
CN107483057A (en) * | 2017-08-11 | 2017-12-15 | 大连大学 | Sparse multi-band signals reconstructing method based on conjugate gradient tracking |
CN107728211A (en) * | 2017-08-31 | 2018-02-23 | 电子科技大学 | Seismic signal algorithm based on tensor nuclear norm regularization |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10664634B2 (en) * | 2014-06-30 | 2020-05-26 | Cgg Services Sas | Ensemble-based multi-scale history-matching device and method for reservoir characterization |
CN104318046A (en) * | 2014-08-18 | 2015-01-28 | 南京大学 | System and method for incrementally converting high dimensional data into low dimensional data |
US9538126B2 (en) * | 2014-12-03 | 2017-01-03 | King Abdulaziz City For Science And Technology | Super-resolution of dynamic scenes using sampling rate diversity |
EP3274136B1 (en) * | 2015-03-23 | 2019-09-18 | Soft Robotics, Inc. | Improvements to soft robotic actuators and methods of manufacturing the same |
CN107845117B (en) * | 2017-10-19 | 2019-12-10 | 武汉大学 | hyperspectral image compression method based on block sparse expression mode and structural dictionary |
Non-Patent Citations (2)
Title |
---|
A fast approach for overcomplete sparse decomposition based on smoothed ℓ0 norm; H. Mohimani et al.; IEEE Transactions on Signal Processing; 2009; Vol. 57, No. 1; pp. 289-301 *
Research on Signal Reconstruction Algorithms Based on Compressed Sensing; Li Jinxiu; China Master's Theses Full-text Database, Information Science and Technology; 2015-07-15 (No. 7); pp. I136-78 *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||