CN107133930A - Row-and-column missing image filling method based on low-rank matrix reconstruction and sparse representation - Google Patents
Row-and-column missing image filling method based on low-rank matrix reconstruction and sparse representation
- Publication number
- CN107133930A CN107133930A CN201710298239.5A CN201710298239A CN107133930A CN 107133930 A CN107133930 A CN 107133930A CN 201710298239 A CN201710298239 A CN 201710298239A CN 107133930 A CN107133930 A CN 107133930A
- Authority
- CN
- China
- Prior art keywords
- matrix
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/77—Retouching; Inpainting; Scratch removal
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/513—Sparse representations
Abstract
The invention belongs to the field of computer vision and aims to accurately fill images with missing pixel rows and columns. The technical scheme adopted by the present invention is a row-and-column missing image filling method based on low-rank matrix reconstruction and sparse representation. The steps are: based on low-rank matrix reconstruction theory, introduce a low-rank prior to constrain the latent image; at the same time, considering that each column of the column-missing image can be sparsely represented by a column dictionary and each row of the row-missing image can be sparsely represented by a row dictionary, introduce a separable two-dimensional sparse prior based on sparse representation theory; then, based on the joint low-rank and separable two-dimensional sparse priors, express the completion problem for an image with missing rows and columns as a constrained optimization equation, the solution of which fills the missing rows and columns. The invention is mainly applied to computer vision processing.
Description
Technical field
The invention belongs to the field of computer vision, and more particularly relates to a row-and-column missing image filling method based on low-rank matrix reconstruction and sparse representation.
Background technology
The problem of recovering an unknown complete matrix from a subset of known pixels has attracted great attention in recent years. It is frequently encountered in computer vision and in many application fields of machine learning, such as image inpainting, recommender systems and background modeling.
Many research results already exist on methods for solving the image completion problem. Because matrix completion is ill-posed, current matrix completion methods generally assume that the underlying matrix is low-rank or approximately low-rank, and then fill the missing pixel values by low-rank matrix reconstruction; examples include singular value thresholding (SVT), the augmented Lagrange multiplier method (ALM) and the accelerated proximal gradient method (APG). However, these existing filling algorithms all rely solely on the low-rank property of the image to fill missing pixel values. That is effective when pixels are missing at random and every row and column of the image contains some observations, but when entire rows and columns of pixels are missing, existing algorithms cannot solve the completion problem, because matrix completion with many missing rows and columns cannot be solved under a low-rank constraint alone. In practical applications, such as image transmission and data acquisition under jitter, an image matrix may well be degraded by such row and column losses, so designing an algorithm that can effectively fill matrices with missing rows and columns is very necessary.
At the current stage, to remedy the above shortcoming of matrix completion methods based on the low-rank property, academia has introduced, on top of them, a sparsity constraint on the column vectors, achieving recovery of missing image rows. Yet because the prior conditions are insufficient, the problem of simultaneously missing rows and columns has still not been solved. The present invention therefore introduces joint low-rank and separable two-dimensional sparse priors into the model to accurately fill matrices with missing rows and columns.
The content of the invention
The invention is intended to remedy the deficiencies of the prior art, namely to accurately fill images with missing pixel rows and columns. The technical scheme adopted by the present invention is a row-and-column missing image filling method based on low-rank matrix reconstruction and sparse representation. The steps are: based on low-rank matrix reconstruction theory, introduce a low-rank prior to constrain the latent image; at the same time, considering that each column of the column-missing image can be sparsely represented by a column dictionary and each row of the row-missing image can be sparsely represented by a row dictionary, introduce a separable two-dimensional sparse prior based on sparse representation theory; then, based on the joint low-rank and separable two-dimensional sparse priors, express the completion problem for an image with missing rows and columns as a constrained optimization equation, thereby filling the missing rows and columns.
The formulation of the row-and-column missing image completion problem as a constrained optimization equation is refined as follows:
1) The completion problem for an image with missing rows and columns is expressed as the following constrained optimization equation:

$$\min_{A,\Sigma,B,C,E}\ \mathrm{tr}(W_a\Sigma)+\gamma_B\|W_b\odot B\|_1+\gamma_C\|W_c\odot C\|_1\quad\text{s.t.}\ D=A+E,\ P_\Omega(E)=0,\ A=\Phi_cB,\ A^{\mathrm{T}}=\Phi_rC \qquad(1)$$

where tr(·) is the trace of a matrix and represents the low-rank prior; ⊙ denotes the entry-wise product of two matrices; ‖·‖₁ denotes the ℓ₁ norm of a matrix, and the two ℓ₁-norm terms represent the separable two-dimensional sparse prior; Ω is the observation space, indexing the known pixels in the observed matrix D with missing rows and columns; P_Ω(·) is the projection operator that keeps the values of its argument on the domain Ω; A is the matrix to be filled; Σ = diag([σ₁, σ₂, ..., σ_n]) is the diagonal matrix formed by the singular values of A in non-increasing order; W_a, W_b and W_c are the weight matrices of the weighted low-rank term and of the separable two-dimensional sparse terms; γ_B and γ_C are the regularization coefficients of the separable two-dimensional sparse terms; Φ_c and Φ_r are the trained column dictionary and row dictionary, with corresponding coefficient matrices B and C respectively; E represents the pixels missing from the observed matrix D;
The constrained optimization problem (1) is converted into an unconstrained optimization problem using the augmented Lagrange multiplier method (ALM); the augmented Lagrangian is:

$$L=\mathrm{tr}(W_a\Sigma)+\gamma_B\|W_b\odot B\|_1+\gamma_C\|W_c\odot C\|_1+\langle Y_1,D-A-E\rangle+\frac{\mu_1}{2}\|D-A-E\|_F^2+\langle Y_2,A-\Phi_cB\rangle+\frac{\mu_2}{2}\|A-\Phi_cB\|_F^2+\langle Y_3,A^{\mathrm{T}}-\Phi_rC\rangle+\frac{\mu_3}{2}\|A^{\mathrm{T}}-\Phi_rC\|_F^2 \qquad(2)$$

where Y₁, Y₂ and Y₃ are the Lagrange multiplier matrices, μ₁, μ₂ and μ₃ are penalty factors, ⟨·,·⟩ denotes the inner product of two matrices, and ‖·‖_F denotes the Frobenius norm of a matrix.
The solution procedure is: train the column dictionary Φ_c and the row dictionary Φ_r; initialize the weight matrices W_a, W_b and W_c; then alternately update the coefficient matrices B and C, the recovered matrix A, the missing-pixel matrix E, the Lagrange multiplier matrices Y₁, Y₂ and Y₃, the penalty factors μ₁, μ₂ and μ₃, and the weight matrices W_a, W_b and W_c, until the algorithm converges; the result A^(l) of the final iteration is the final solution A of the original problem.
Specifically, training the dictionaries Φ_c and Φ_r: the column dictionary Φ_c and the row dictionary Φ_r are trained on a high-quality image data set using an online learning algorithm.
Specifically, initializing the weight matrices W_a, W_b and W_c: let l be the reweighting count; when l = 0, all entries of the initial weight matrices W_a^(0), W_b^(0) and W_c^(0) are set to 1, meaning that the first iteration is unweighted.
Specifically, using the alternating direction method (ADM), equation (2) is converted into the following sequence of subproblems and solved iteratively:

$$B^{k+1}=\arg\min_B L,\quad C^{k+1}=\arg\min_C L,\quad A^{k+1}=\arg\min_A L,\quad E^{k+1}=\arg\min_E L \qquad(3)$$

where each minimization is taken with the other variables fixed at their latest values, so B^{k+1}, C^{k+1}, A^{k+1} and E^{k+1} are the values of B, C, A and E that minimize the objective; the multipliers are then updated with the constraint residuals and the penalties multiplied by the factors ρ₁, ρ₂ and ρ₃, with k the iteration count. The iteration then proceeds according to the following steps:
1) Solve B^{k+1}: B^{k+1} is obtained with the accelerated proximal gradient algorithm.
Removing the terms of the objective in (3) that do not involve B gives the following equation:

$$\min_B\ \gamma_B\|W_b\odot B\|_1+\frac{\mu_2}{2}\Big\|A^k-\Phi_cB+\frac{Y_2^k}{\mu_2}\Big\|_F^2$$

Using a Taylor expansion, a second-order function is constructed to approximate the above expression, and the original equation is then solved through this second-order function. Let f denote the smooth quadratic term; re-introducing the variable Z, the solution is finally obtained as:

$$B^{j+1}=\mathrm{soft}\!\Big(Z^j-\frac{\nabla f(Z^j)}{L_f},\ \frac{\gamma_B W_b}{L_f}\Big)$$

where soft(·) is the shrinkage operator, ∇f(Z^j) = −μ₂Φ_c^T(A^k − Φ_cZ^j + Y_2^k/μ₂) is the gradient of f, and L_f is a constant equal to the Lipschitz constant of ∇f. The update rule for the variable Z^j is:

$$t_{j+1}=\frac{1+\sqrt{1+4t_j^2}}{2},\qquad Z^{j+1}=B^{j+1}+\frac{t_j-1}{t_{j+1}}\big(B^{j+1}-B^j\big)$$

where t_j is a sequence of constants and j is the inner iteration count;
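The shrinkage operator and the constant sequence t_j used above can be sketched as follows; this is a minimal NumPy illustration under the FISTA-style form of the t_j and Z updates reconstructed above, and the names soft, next_t and next_Z are chosen here for illustration, not taken from the patent.

```python
import numpy as np

def soft(X, T):
    """Entry-wise shrinkage (soft-thresholding) operator soft(X, T);
    T may be a scalar or a weight matrix of the same shape as X."""
    return np.sign(X) * np.maximum(np.abs(X) - T, 0.0)

def next_t(t_j):
    """Update of the constant sequence: t_{j+1} = (1 + sqrt(1 + 4 t_j^2)) / 2."""
    return (1.0 + np.sqrt(1.0 + 4.0 * t_j ** 2)) / 2.0

def next_Z(B_new, B_old, t_j, t_next):
    """Extrapolation step Z^{j+1} = B^{j+1} + ((t_j - 1)/t_{j+1}) (B^{j+1} - B^j)."""
    return B_new + ((t_j - 1.0) / t_next) * (B_new - B_old)
```

With t₁ = 1 the first extrapolation weight is zero, so the first proximal step is a plain gradient-plus-shrinkage step.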
2) Solve C^{k+1}: C^{k+1} is obtained with the accelerated proximal gradient algorithm.
Removing the terms of the objective in (3) that do not involve C gives the following equation:

$$\min_C\ \gamma_C\|W_c\odot C\|_1+\frac{\mu_3}{2}\Big\|A^{k\mathrm{T}}-\Phi_rC+\frac{Y_3^k}{\mu_3}\Big\|_F^2$$

Using a Taylor expansion, a second-order function is constructed to approximate the above expression, and the original equation is then solved through this second-order function. Let f̃ denote the smooth quadratic term; re-introducing the variable Z̃, the solution is finally obtained as:

$$C^{j+1}=\mathrm{soft}\!\Big(\tilde Z^j-\frac{\nabla\tilde f(\tilde Z^j)}{L_{\tilde f}},\ \frac{\gamma_C W_c}{L_{\tilde f}}\Big)$$

where soft(·) is the shrinkage operator, ∇f̃(Z̃^j) = −μ₃Φ_r^T(A^{kT} − Φ_rZ̃^j + Y_3^k/μ₃) is the gradient of f̃, and L_f̃ is a constant equal to the Lipschitz constant of ∇f̃. The update rule for the variable Z̃^j is:

$$t_{j+1}=\frac{1+\sqrt{1+4t_j^2}}{2},\qquad \tilde Z^{j+1}=C^{j+1}+\frac{t_j-1}{t_{j+1}}\big(C^{j+1}-C^j\big)$$

where t_j is a sequence of constants and j is the inner iteration count;
3) Solve A^{k+1}: A^{k+1} is solved using singular value thresholding (SVT).
Removing the terms of the objective in (3) that do not involve A, and completing the square, gives a problem of the form:

$$\min_A\ \mathrm{tr}(W_a\Sigma)+\frac{\mu}{2}\|A-Q^{k+1}\|_F^2,\qquad \mu=\mu_1+\mu_2+\mu_3$$

where

$$Q^{k+1}=\frac{1}{\mu}\Big[\mu_1\Big(D-E^k+\frac{Y_1^k}{\mu_1}\Big)+\mu_2\Big(\Phi_cB^{k+1}-\frac{Y_2^k}{\mu_2}\Big)+\mu_3\Big(\big(\Phi_rC^{k+1}\big)^{\mathrm{T}}-\frac{(Y_3^k)^{\mathrm{T}}}{\mu_3}\Big)\Big]$$

Q^{k+1} is then solved by singular value thresholding:

$$A^{k+1}=H^{k+1}\,\mathrm{soft}\!\Big(\Sigma^{k+1},\frac{W_a}{\mu}\Big)\big(V^{k+1}\big)^{\mathrm{T}}$$

where H^{k+1} and V^{k+1} are the left and right singular matrices of Q^{k+1};
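The weighted singular value thresholding step can be sketched as follows, assuming each singular value of Q is shrunk by the corresponding diagonal entry of the weight matrix divided by the combined penalty coefficient mu; the function svt and its argument names are illustrative, not the patent's code.

```python
import numpy as np

def svt(Q, W, mu):
    """Weighted singular value thresholding: shrink each singular value
    sigma_i of Q by W[i, i] / mu (floored at 0) and rebuild
    A = H * diag(shrunk sigma) * V^T."""
    H, s, Vt = np.linalg.svd(Q, full_matrices=False)
    s_shrunk = np.maximum(s - np.diag(W)[: len(s)] / mu, 0.0)
    return H @ np.diag(s_shrunk) @ Vt, s_shrunk
```

Because np.linalg.svd returns singular values in non-increasing order, the diagonal of W is applied to them in the same order as Σ = diag([σ₁, ..., σ_n]).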
4) Solve E^{k+1}: the solution of E^{k+1} consists of two parts. Inside the observation space Ω, the value of E is 0; outside the observation space, i.e. in the complementary space Ω̄, E is obtained by setting the first derivative of the objective with respect to E to zero. The two parts together form the final solution of E:

$$E^{k+1}=P_{\bar\Omega}\Big(D-A^{k+1}+\frac{Y_1^k}{\mu_1}\Big)$$
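A minimal sketch of this two-part update, under the Lagrangian form reconstructed above in which the stationarity condition off Ω gives E = D − A + Y₁/μ₁; update_E and mask are illustrative names, with mask marking the observed set Ω.

```python
import numpy as np

def update_E(D, A, Y1, mu1, mask):
    """E is 0 on the observed set Omega (mask == True); on the complement,
    the first-order optimality condition gives E = D - A + Y1 / mu1."""
    E = D - A + Y1 / mu1          # closed-form value on the complement
    E[mask] = 0.0                 # enforce E = 0 on the observed pixels
    return E
```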
5) Repeat steps 1), 2), 3) and 4) above until the algorithm converges; the results A^{k+1}, Σ^{k+1}, B^{k+1}, C^{k+1} and E^{k+1} of the final iteration are the unweighted results A^(l), Σ^(l), B^(l), C^(l) and E^(l) of the original problem, where l is the reweighting count;
6) Update the weight matrices W_a, W_b and W_c: to offset the influence of signal amplitude on the nuclear norm and the ℓ₁ norm, a reweighting scheme is introduced; according to the magnitudes of the currently estimated singular value matrix Σ^(l) and coefficient matrices B^(l) and C^(l), the weight matrices W_a, W_b and W_c are iteratively updated on the inverse-proportion principle:

$$W_a^{(l+1)}(i,i)=\frac{1}{\sigma_i^{(l)}+\varepsilon},\qquad W_b^{(l+1)}(i,j)=\frac{1}{|B^{(l)}(i,j)|+\varepsilon},\qquad W_c^{(l+1)}(i,j)=\frac{1}{|C^{(l)}(i,j)|+\varepsilon}$$

where (i, j) is the position coordinate of a pixel in the image and ε is an arbitrarily small positive number;
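The inverse-proportion reweighting can be sketched as follows; update_weights is an illustrative name, and the eps argument stands in for the arbitrarily small positive number ε.

```python
import numpy as np

def update_weights(sigma, B, C, eps=1e-3):
    """Weights inversely proportional to the current magnitudes:
    W_a is diagonal over the singular values, W_b and W_c are entry-wise
    over the coefficient matrices B and C."""
    Wa = np.diag(1.0 / (np.abs(sigma) + eps))
    Wb = 1.0 / (np.abs(B) + eps)
    Wc = 1.0 / (np.abs(C) + eps)
    return Wa, Wb, Wc
```

Large singular values and coefficients thus receive small weights, so they are penalized less in the next weighted iteration.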
7) Repeat steps 1)-6) above until the algorithm converges; the result A^(l) of the final iteration is the final solution A of the original problem.
Technical features and effects of the present invention:
The method of the invention addresses the image completion problem with missing rows and columns; by introducing a separable two-dimensional sparse prior, it achieves a solution of this problem. The invention has the following features:
1. It solves the subproblems with the augmented Lagrange multiplier method (ALM), the alternating direction method (ADM), the accelerated proximal gradient algorithm and singular value thresholding, incorporating the advantages of existing algorithms.
2. It sparsely represents the columns and rows of the image with a column dictionary and a row dictionary, which is more efficient than traditional patch-based dictionaries.
3. It combines low-rank matrix reconstruction theory with sparse representation theory, introduces dictionary learning into the traditional low-rank matrix reconstruction model, and proposes joint low-rank and separable two-dimensional sparsity priors, so that images with missing rows and columns can be filled accurately.
4. By jointly constraining the damaged image with low-rank and sparse priors, the filling capability is improved: both missing rows and columns and randomly missing pixels can be filled more accurately.
Brief description of the drawings
The above advantages of the present invention will be apparent and readily appreciated from the following description of embodiments with reference to the accompanying drawings, in which:
Fig. 1 is the flow chart of the present invention;
Fig. 2 is the original ground-truth image without missing pixels;
Fig. 3 shows the damaged images with row, column and random missing pixels, black denoting missing pixels, with total missing rates from left to right of: (1) 10% missing; (2) 20% missing; (3) 30% missing; (4) 50% missing;
Fig. 4 shows the filling results of the method of the invention for the damaged images at the four missing rates: (1) 10% missing filling result, PSNR = 40.79; (2) 20% missing filling result, PSNR = 37.32; (3) 30% missing filling result, PSNR = 35.69; (4) 50% missing filling result, PSNR = 32.23.
Embodiments
The row-and-column missing image filling method of the present invention, based on low-rank matrix reconstruction and sparse representation, is described in detail below with reference to the embodiments and the accompanying drawings.
The present invention combines low-rank matrix reconstruction with sparse representation and introduces a dictionary learning model on the basis of the traditional low-rank matrix reconstruction model; by constraining the missing image with joint low-rank and separable two-dimensional sparse priors, it solves the problem that existing algorithms cannot complete images with missing rows and columns. The specific method comprises the following steps:
1) Considering the inherent low-rank property of natural images, a low-rank prior is introduced based on low-rank matrix reconstruction theory to constrain the latent image; at the same time, considering that each column of the column-missing image can be sparsely represented by a column dictionary and each row of the row-missing image can be sparsely represented by a row dictionary, a separable two-dimensional sparse prior is introduced based on sparse representation theory; then, based on the joint low-rank and separable two-dimensional sparse priors, the completion problem for an image with missing rows and columns is expressed as the following constrained optimization equation:

$$\min_{A,\Sigma,B,C,E}\ \mathrm{tr}(W_a\Sigma)+\gamma_B\|W_b\odot B\|_1+\gamma_C\|W_c\odot C\|_1\quad\text{s.t.}\ D=A+E,\ P_\Omega(E)=0,\ A=\Phi_cB,\ A^{\mathrm{T}}=\Phi_rC \qquad(1)$$

where tr(·) is the trace of a matrix and represents the low-rank prior; ⊙ denotes the entry-wise product of two matrices; ‖·‖₁ denotes the ℓ₁ norm of a matrix, and the two ℓ₁-norm terms represent the separable two-dimensional sparse prior; Ω is the observation space, indexing the known pixels in the observed matrix D with missing rows and columns; P_Ω(·) is the projection operator that keeps the values of its argument on the domain Ω; A is the matrix to be filled; Σ = diag([σ₁, σ₂, ..., σ_n]) is the diagonal matrix formed by the singular values of A in non-increasing order; W_a, W_b and W_c are the weight matrices of the weighted low-rank term and of the separable two-dimensional sparse terms; γ_B and γ_C are the regularization coefficients of the separable two-dimensional sparse terms; Φ_c and Φ_r are the trained column dictionary and row dictionary, with corresponding coefficient matrices B and C respectively; E represents the pixels missing from the observed matrix D;
11) The present invention converts the constrained optimization problem (1) into an unconstrained optimization problem using the augmented Lagrange multiplier method (ALM); the augmented Lagrangian is:

$$L=\mathrm{tr}(W_a\Sigma)+\gamma_B\|W_b\odot B\|_1+\gamma_C\|W_c\odot C\|_1+\langle Y_1,D-A-E\rangle+\frac{\mu_1}{2}\|D-A-E\|_F^2+\langle Y_2,A-\Phi_cB\rangle+\frac{\mu_2}{2}\|A-\Phi_cB\|_F^2+\langle Y_3,A^{\mathrm{T}}-\Phi_rC\rangle+\frac{\mu_3}{2}\|A^{\mathrm{T}}-\Phi_rC\|_F^2 \qquad(2)$$

where Y₁, Y₂ and Y₃ are the Lagrange multiplier matrices, μ₁, μ₂ and μ₃ are penalty factors, ⟨·,·⟩ denotes the inner product of two matrices, and ‖·‖_F denotes the Frobenius norm of a matrix;
12) The solution procedure is: train the column dictionary Φ_c and the row dictionary Φ_r; initialize the weight matrices W_a, W_b and W_c; then alternately update the coefficient matrices B and C, the recovered matrix A, the missing-pixel matrix E, the Lagrange multiplier matrices Y₁, Y₂ and Y₃, the penalty factors μ₁, μ₂ and μ₃, and the weight matrices W_a, W_b and W_c;
2) Train the dictionaries Φ_c and Φ_r: the column dictionary Φ_c and the row dictionary Φ_r are trained on a high-quality image data set using an online learning algorithm.
21) The column dictionary Φ_c is constructed so that the matrix A can be sparsely represented by it, i.e. A = Φ_cB, where the coefficient matrix B is sparse; the row dictionary Φ_r is constructed so that the transpose of A can be sparsely represented by it, i.e. A^T = Φ_rC, where the coefficient matrix C is sparse. The present invention trains the column dictionary Φ_c and the row dictionary Φ_r on the Kodak image set using the Online Learning algorithm.
22) The relevant parameters of dictionary training are set as follows: the number of rows of the matrix A to be reconstructed equals the dimension m of the atoms of Φ_c, i.e. both A and Φ_c have m rows; the number of columns of A equals the dimension n of the atoms of Φ_r, i.e. A has n columns and Φ_r has n rows. The trained dictionaries Φ_c and Φ_r are overcomplete, i.e. the number of columns of each dictionary must be greater than its number of rows.
3) Initialize the weight matrices W_a, W_b and W_c: let l be the reweighting count; when l = 0, all entries of the initial weight matrices W_a^(0), W_b^(0) and W_c^(0) are set to 1, meaning that the first iteration is unweighted.
4) Using the alternating direction method (ADM), equation (2) is converted into the sequence of subproblems (3) and solved iteratively, where B^{k+1}, C^{k+1}, A^{k+1} and E^{k+1} are the values of B, C, A and E that minimize the objective with the other variables fixed, ρ₁, ρ₂ and ρ₃ are the multiplying factors of the penalties, and k is the iteration count; the initial parameter values are set, and steps 5), 6), 7) and 8) are then iterated to obtain the unweighted result.
5) Solve B^{k+1}: B^{k+1} is obtained with the accelerated proximal gradient algorithm.
51) Removing the terms of the objective in (3) that do not involve B gives the following equation:

$$\min_B\ \gamma_B\|W_b\odot B\|_1+\frac{\mu_2}{2}\Big\|A^k-\Phi_cB+\frac{Y_2^k}{\mu_2}\Big\|_F^2 \qquad(4)$$

By Taylor expansion, a second-order function is constructed to approximate the smooth term f, and the original equation is solved through this second-order function. Re-introducing the variable Z, the following function is defined:

$$Q(B,Z)=f(Z)+\langle\nabla f(Z),B-Z\rangle+\frac{L_f}{2}\|B-Z\|_F^2+\gamma_B\|W_b\odot B\|_1$$

where ∇f(Z) = −μ₂Φ_c^T(A^k − Φ_cZ + Y_2^k/μ₂) is the gradient of f, and L_f is a constant equal to the Lipschitz constant of ∇f, which guarantees F(B) ≤ Q(B, Z) for all Z.
52) Through the above transformation, equation (4) becomes the minimization of Q(B, Z^j). The update rule for the variable Z^j is:

$$t_{j+1}=\frac{1+\sqrt{1+4t_j^2}}{2},\qquad Z^{j+1}=B^{j+1}+\frac{t_j-1}{t_{j+1}}\big(B^{j+1}-B^j\big)$$

where t_j is a sequence of constants and j is the inner iteration count. The minimization is solved with the shrinkage operator:

$$B^{j+1}=\mathrm{soft}\!\Big(Z^j-\frac{\nabla f(Z^j)}{L_f},\ \frac{\gamma_B W_b}{L_f}\Big)$$

where soft(·) is the shrinkage operator.
6) Solve C^{k+1}: C^{k+1} is obtained with the accelerated proximal gradient algorithm.
61) Removing the terms of the objective in (3) that do not involve C gives the following equation:

$$\min_C\ \gamma_C\|W_c\odot C\|_1+\frac{\mu_3}{2}\Big\|A^{k\mathrm{T}}-\Phi_rC+\frac{Y_3^k}{\mu_3}\Big\|_F^2 \qquad(9)$$

Using a Taylor expansion, a second-order function is constructed to approximate the smooth term f̃, and the original equation is solved through this second-order function. Re-introducing the variable Z̃, the following function is defined:

$$\tilde Q(C,\tilde Z)=\tilde f(\tilde Z)+\langle\nabla\tilde f(\tilde Z),C-\tilde Z\rangle+\frac{L_{\tilde f}}{2}\|C-\tilde Z\|_F^2+\gamma_C\|W_c\odot C\|_1$$

where ∇f̃(Z̃) = −μ₃Φ_r^T(A^{kT} − Φ_rZ̃ + Y_3^k/μ₃) is the gradient of f̃, and L_f̃ is a constant equal to the Lipschitz constant of ∇f̃, which guarantees F̃(C) ≤ Q̃(C, Z̃) for all Z̃.
62) Through the above transformation, equation (9) becomes the minimization of Q̃(C, Z̃^j). The update rule for the variable Z̃^j is:

$$t_{j+1}=\frac{1+\sqrt{1+4t_j^2}}{2},\qquad \tilde Z^{j+1}=C^{j+1}+\frac{t_j-1}{t_{j+1}}\big(C^{j+1}-C^j\big)$$

where t_j is a sequence of constants and j is the inner iteration count. The minimization is solved with the shrinkage operator:

$$C^{j+1}=\mathrm{soft}\!\Big(\tilde Z^j-\frac{\nabla\tilde f(\tilde Z^j)}{L_{\tilde f}},\ \frac{\gamma_C W_c}{L_{\tilde f}}\Big)$$

where soft(·) is the shrinkage operator.
7) Solve A^{k+1}: A^{k+1} is solved using singular value thresholding (SVT).
Removing the terms of the objective in (3) that do not involve A, and rewriting by completing the square, gives a problem of the form:

$$\min_A\ \mathrm{tr}(W_a\Sigma)+\frac{\mu}{2}\|A-Q^{k+1}\|_F^2,\qquad \mu=\mu_1+\mu_2+\mu_3$$

where

$$Q^{k+1}=\frac{1}{\mu}\Big[\mu_1\Big(D-E^k+\frac{Y_1^k}{\mu_1}\Big)+\mu_2\Big(\Phi_cB^{k+1}-\frac{Y_2^k}{\mu_2}\Big)+\mu_3\Big(\big(\Phi_rC^{k+1}\big)^{\mathrm{T}}-\frac{(Y_3^k)^{\mathrm{T}}}{\mu_3}\Big)\Big]$$

Q^{k+1} is then solved by singular value thresholding:

$$A^{k+1}=H^{k+1}\,\mathrm{soft}\!\Big(\Sigma^{k+1},\frac{W_a}{\mu}\Big)\big(V^{k+1}\big)^{\mathrm{T}}$$

where H^{k+1} and V^{k+1} are the left and right singular matrices of Q^{k+1};
8) Solve E^{k+1}: the solution of E^{k+1} consists of two parts.
81) Inside the observation space Ω, the value of E is 0, i.e. P_Ω(E) = 0.
82) Outside the observation space, i.e. in the complementary space Ω̄, the equation for E^{k+1} is obtained by setting the first derivative of the objective with respect to E to zero, which gives E = D − A^{k+1} + Y_1^k/μ₁ on Ω̄.
83) The solutions inside and outside the domain Ω are joined together as the final solution of E:

$$E^{k+1}=P_{\bar\Omega}\Big(D-A^{k+1}+\frac{Y_1^k}{\mu_1}\Big)$$
9) Repeat steps 5), 6), 7) and 8) above until the algorithm converges; the results A^{k+1}, Σ^{k+1}, B^{k+1}, C^{k+1} and E^{k+1} of the final iteration are the unweighted results A^(l), Σ^(l), B^(l), C^(l) and E^(l) of the original problem. Here, l is the reweighting count.
10) Update the weight matrices W_a, W_b and W_c: to offset the influence of signal amplitude on the nuclear norm and the ℓ₁ norm, a reweighting scheme is introduced; according to the magnitudes of the currently estimated singular value matrix Σ^(l) and coefficient matrices B^(l) and C^(l), the weight matrices W_a, W_b and W_c are iteratively updated on the inverse-proportion principle:

$$W_a^{(l+1)}(i,i)=\frac{1}{\sigma_i^{(l)}+\varepsilon},\qquad W_b^{(l+1)}(i,j)=\frac{1}{|B^{(l)}(i,j)|+\varepsilon},\qquad W_c^{(l+1)}(i,j)=\frac{1}{|C^{(l)}(i,j)|+\varepsilon}$$

where (i, j) is the position coordinate of a pixel in the image and ε is an arbitrarily small positive number.
11) Repeat steps 4)-10) above until the algorithm converges; the result A^(l) of the final iteration is the final solution A of the original problem.
The method of the invention combines low-rank matrix reconstruction with sparse representation theory, introduces a dictionary learning model on the basis of the traditional low-rank matrix reconstruction model, and constrains the missing image with joint low-rank and separable two-dimensional sparse priors, thereby solving the problem the prior art cannot handle, namely filling images with missing rows and columns (the experiment flow chart is shown in Fig. 1). A detailed description with reference to the drawings and embodiments follows:
1) In the experiments, a 321 × 481-pixel picture randomly selected from the BSDS500 data set (shown in Fig. 2) is used as the original image, and damaged images with four missing rates of 10%, 20%, 30% and 50%, containing both row-and-column and random missing pixels, are constructed from it for testing (shown in Fig. 3). Since the invention uses dictionaries with a fixed atom size of 100, the image to be filled is first divided into several 100 × 100 image blocks with a sliding window, from top to bottom and from left to right; the step of the sliding window is 90 pixels. These 100 × 100 image blocks are filled in sequence and finally recombined to obtain the filled image at the original 321 × 481 size. When the first image block is filled, it is represented by the matrix D, and the problem of filling the current image block with missing rows and columns is expressed as the following constrained optimization equation:
$$\min_{A,\Sigma,B,C,E}\ \mathrm{tr}(W_a\Sigma)+\gamma_B\|W_b\odot B\|_1+\gamma_C\|W_c\odot C\|_1\quad\text{s.t.}\ D=A+E,\ P_\Omega(E)=0,\ A=\Phi_cB,\ A^{\mathrm{T}}=\Phi_rC \qquad(1)$$

where tr(·) is the trace of a matrix and represents the low-rank prior; ⊙ denotes the entry-wise product of two matrices; ‖·‖₁ denotes the ℓ₁ norm of a matrix, and the two ℓ₁-norm terms represent the separable two-dimensional sparse prior; Ω is the observation space, indexing the known pixels in the observed matrix D with missing rows and columns; P_Ω(·) is the projection operator that keeps the values of its argument on the domain Ω; A is the matrix to be filled; Σ = diag([σ₁, σ₂, ..., σ_n]) is the diagonal matrix formed by the singular values of A in non-increasing order; W_a, W_b and W_c are the weight matrices of the weighted low-rank term and of the separable two-dimensional sparse terms; γ_B and γ_C are the regularization coefficients of the separable two-dimensional sparse terms; Φ_c and Φ_r are the trained column dictionary and row dictionary, with corresponding coefficient matrices B and C respectively; E represents the pixels missing from the observed matrix D;
11) The present invention converts the constrained optimization problem (1) into an unconstrained optimization problem using the augmented Lagrange multiplier method (ALM); the augmented Lagrangian is:

$$L=\mathrm{tr}(W_a\Sigma)+\gamma_B\|W_b\odot B\|_1+\gamma_C\|W_c\odot C\|_1+\langle Y_1,D-A-E\rangle+\frac{\mu_1}{2}\|D-A-E\|_F^2+\langle Y_2,A-\Phi_cB\rangle+\frac{\mu_2}{2}\|A-\Phi_cB\|_F^2+\langle Y_3,A^{\mathrm{T}}-\Phi_rC\rangle+\frac{\mu_3}{2}\|A^{\mathrm{T}}-\Phi_rC\|_F^2 \qquad(2)$$

where Y₁, Y₂ and Y₃ are the Lagrange multiplier matrices, μ₁, μ₂ and μ₃ are penalty factors, ⟨·,·⟩ denotes the inner product of two matrices, and ‖·‖_F denotes the Frobenius norm of a matrix;
12) The solution procedure is: train the column dictionary Φ_c and the row dictionary Φ_r; initialize the weight matrices W_a, W_b and W_c; then alternately update the coefficient matrices B and C, the recovered matrix A, the missing-pixel matrix E, the Lagrange multiplier matrices Y₁, Y₂ and Y₃, the penalty factors μ₁, μ₂ and μ₃, and the weight matrices W_a, W_b and W_c;
2) Train the dictionaries Φ_c and Φ_r: the column dictionary Φ_c and the row dictionary Φ_r are trained on a high-quality image data set using an online learning algorithm.
21) The column dictionary Φ_c is constructed so that the matrix A can be sparsely represented by it, i.e. A = Φ_cB, where the coefficient matrix B is sparse; the row dictionary Φ_r is constructed so that the transpose of A can be sparsely represented by it, i.e. A^T = Φ_rC, where the coefficient matrix C is sparse. The present invention randomly selects a total of 230000 pixel columns of size 100 × 1 from all images of the Kodak image set as training data and trains the column dictionary Φ_c and the row dictionary Φ_r with the Online Learning algorithm.
22) The relevant parameters of dictionary training are set as follows: the number of rows of the matrix A to be reconstructed equals the atom dimension m of Φ_c, i.e. both A and Φ_c have m rows, with m = 100 in the experiments; the number of columns of A equals the atom dimension n of Φ_r, i.e. A has n columns and Φ_r has n rows, with n = 100 in the experiments. The trained dictionaries Φ_c and Φ_r are overcomplete, i.e. each dictionary must have more columns than rows; the number of columns of the row and column dictionaries is taken as 400 in the experiments, so both Φ_c and Φ_r have size 100 × 400.
3) Initialize the weight matrices W_a, W_b and W_c: let l be the reweighting count; when l = 0, all entries of the initial weight matrices W_a^(0), W_b^(0) and W_c^(0) are set to 1, meaning that the first iteration is unweighted.
4) Using the alternating direction method (ADM), equation (2) is converted into the sequence of subproblems (3) and solved iteratively, where B^{k+1}, C^{k+1}, A^{k+1} and E^{k+1} are the values of B, C, A and E that minimize the objective with the other variables fixed, ρ₁, ρ₂ and ρ₃ are the multiplying factors of the penalties, and k is the iteration count; the initial parameter values are set, and steps 5), 6), 7) and 8) are then iterated to obtain the unweighted result. The initialization in the experiments is: l = 0; k = 1; ρ₁ = ρ₂ = ρ₃ = 1.1; A¹ = B¹ = C¹ = E¹ = 0.
5) Solve B^{k+1}: B^{k+1} is obtained with the accelerated proximal gradient algorithm.
51) Removing the terms of the objective in (3) that do not involve B gives the following equation:

$$\min_B\ \gamma_B\|W_b\odot B\|_1+\frac{\mu_2}{2}\Big\|A^k-\Phi_cB+\frac{Y_2^k}{\mu_2}\Big\|_F^2 \qquad(4)$$

By Taylor expansion, a second-order function is constructed to approximate the smooth term f, and the original equation is solved through this second-order function. Re-introducing the variable Z, the following function is defined:

$$Q(B,Z)=f(Z)+\langle\nabla f(Z),B-Z\rangle+\frac{L_f}{2}\|B-Z\|_F^2+\gamma_B\|W_b\odot B\|_1$$

where ∇f(Z) = −μ₂Φ_c^T(A^k − Φ_cZ + Y_2^k/μ₂) is the gradient of f, and L_f is a constant equal to the Lipschitz constant of ∇f, which guarantees F(B) ≤ Q(B, Z) for all Z.
52) Through the above transformation, equation (4) becomes the minimization of Q(B, Z^j). The update rule for the variable Z^j is:

$$t_{j+1}=\frac{1+\sqrt{1+4t_j^2}}{2},\qquad Z^{j+1}=B^{j+1}+\frac{t_j-1}{t_{j+1}}\big(B^{j+1}-B^j\big)$$

where t_j is a sequence of constants and j is the inner iteration count. Through the above transformation, the initial parameter values are set as follows: j = 1; t₁ = 1; Z¹ = 0. At convergence the solution is obtained:

$$B^{k+1}=\mathrm{soft}\!\Big(Z^j-\frac{\nabla f(Z^j)}{L_f},\ \frac{\gamma_B W_b}{L_f}\Big)$$

where soft(·) is the shrinkage operator.
6) Solve C^{k+1}: C^{k+1} is obtained with the accelerated proximal gradient algorithm.
61) Removing the terms of the objective in (3) that do not involve C gives the following equation:

$$\min_C\ \gamma_C\|W_c\odot C\|_1+\frac{\mu_3}{2}\Big\|A^{k\mathrm{T}}-\Phi_rC+\frac{Y_3^k}{\mu_3}\Big\|_F^2 \qquad(9)$$

Using a Taylor expansion, a second-order function is constructed to approximate the smooth term f̃, and the original equation is solved through this second-order function. Re-introducing the variable Z̃, the following function is defined:

$$\tilde Q(C,\tilde Z)=\tilde f(\tilde Z)+\langle\nabla\tilde f(\tilde Z),C-\tilde Z\rangle+\frac{L_{\tilde f}}{2}\|C-\tilde Z\|_F^2+\gamma_C\|W_c\odot C\|_1$$

where ∇f̃(Z̃) = −μ₃Φ_r^T(A^{kT} − Φ_rZ̃ + Y_3^k/μ₃) is the gradient of f̃, and L_f̃ is a constant equal to the Lipschitz constant of ∇f̃, which guarantees F̃(C) ≤ Q̃(C, Z̃) for all Z̃.
62) Through the above transformation, equation (9) becomes the minimization of Q̃(C, Z̃^j). The update rule for the variable Z̃^j is:

$$t_{j+1}=\frac{1+\sqrt{1+4t_j^2}}{2},\qquad \tilde Z^{j+1}=C^{j+1}+\frac{t_j-1}{t_{j+1}}\big(C^{j+1}-C^j\big)$$

where t_j is a sequence of constants and j is the inner iteration count. Through the above transformation, the initial parameter values are set as follows: j = 1; t₁ = 1; Z̃¹ = 0. At convergence the solution is obtained:

$$C^{k+1}=\mathrm{soft}\!\Big(\tilde Z^j-\frac{\nabla\tilde f(\tilde Z^j)}{L_{\tilde f}},\ \frac{\gamma_C W_c}{L_{\tilde f}}\Big)$$

where soft(·) is the shrinkage operator.
7) Solve A^{k+1}: A^{k+1} is solved using singular value thresholding (SVT).
Removing the terms of the objective in (3) that do not involve A, and rewriting by completing the square, gives a problem of the form:

$$\min_A\ \mathrm{tr}(W_a\Sigma)+\frac{\mu}{2}\|A-Q^{k+1}\|_F^2,\qquad \mu=\mu_1+\mu_2+\mu_3$$

where

$$Q^{k+1}=\frac{1}{\mu}\Big[\mu_1\Big(D-E^k+\frac{Y_1^k}{\mu_1}\Big)+\mu_2\Big(\Phi_cB^{k+1}-\frac{Y_2^k}{\mu_2}\Big)+\mu_3\Big(\big(\Phi_rC^{k+1}\big)^{\mathrm{T}}-\frac{(Y_3^k)^{\mathrm{T}}}{\mu_3}\Big)\Big]$$

Q^{k+1} is then solved by singular value thresholding:

$$A^{k+1}=H^{k+1}\,\mathrm{soft}\!\Big(\Sigma^{k+1},\frac{W_a}{\mu}\Big)\big(V^{k+1}\big)^{\mathrm{T}}$$

where H^{k+1} and V^{k+1} are the left and right singular matrices of Q^{k+1};
8) Solve E^{k+1}: the solution of E^{k+1} consists of two parts.
81) Inside the observation space Ω, the value of E is 0, i.e. P_Ω(E) = 0.
82) Outside the observation space, i.e. in the complementary space Ω̄, the equation for E^{k+1} is obtained by setting the first derivative of the objective with respect to E to zero, which gives E = D − A^{k+1} + Y_1^k/μ₁ on Ω̄.
83) The solutions inside and outside the domain Ω are joined together as the final solution of E:

$$E^{k+1}=P_{\bar\Omega}\Big(D-A^{k+1}+\frac{Y_1^k}{\mu_1}\Big)$$
9) Repeat the above steps 5), 6), 7), 8) until the algorithm converges; the iterates A^{k+1}, Σ^{k+1}, B^{k+1}, C^{k+1} and E^{k+1} are then the unreweighted solutions A^{(l)}, Σ^{(l)}, B^{(l)}, C^{(l)} and E^{(l)} of the original problem, where l is the reweighting index.
10) Update the weight matrices W_a, W_b and W_c.
To offset the influence of signal amplitude on the nuclear norm and the ℓ1 norm, a reweighting scheme is introduced: based on the currently estimated singular value matrix Σ^{(l)} and the amplitudes of the coefficient matrices B^{(l)} and C^{(l)}, the weight matrices W_a, W_b and W_c are updated iteratively by the inverse-proportion rule:
where (î, ĵ) are the pixel coordinates in the image and ε is an arbitrarily small positive number; ε = 0.001 is used in the experiments.
11) Repeat the above steps 4), 5), 6), 7), 8), 9), 10) until the algorithm converges; the iterate A^{(l)} is then the final solution A of the original problem.
12) Process the remaining image blocks obtained in step 1) in turn until all are filled, then assemble the blocks into the final filled image (as shown in Fig. 4). During assembly, pixels in overlapping regions that are filled multiple times take the average of their filled values as the final value.
Experimental results: The present invention uses PSNR (peak signal-to-noise ratio), in dB, as the quantitative measure of image completion quality:

$$\mathrm{PSNR}=10\log_{10}\frac{255^2\,wh}{\sum_{x,y}\left|I(x,y)-I_0(x,y)\right|^2}$$

where I is the filled image, I_0 is the ground-truth image without missing pixels, w and h are the width and height of the image, (x, y) indexes the pixel in row x and column y, Σ denotes summation, and |·| denotes the absolute value. The experiments use n = 8; the filling results for the four test images with different degrees of row/column missing are shown in Fig. 4.
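The PSNR measure defined above can be computed as follows (a minimal sketch; the function name is ours):

```python
import numpy as np

def psnr(I, I0, peak=255.0):
    """Peak signal-to-noise ratio in dB between the filled image I
    and the ground-truth image I0."""
    diff = np.asarray(I, dtype=np.float64) - np.asarray(I0, dtype=np.float64)
    mse = np.mean(diff ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

Higher values indicate a better fill; identical images give infinite PSNR, so the measure is meaningful only when some error remains.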
Claims (5)
1. A row/column-missing image filling method based on low-rank matrix reconstruction and sparse representation, characterized in that the steps are: based on low-rank matrix reconstruction theory, a low-rank prior is introduced to constrain the latent image; meanwhile, considering that each column of a row/column-missing image can be sparsely represented by a column dictionary and each row can be sparsely represented by a row dictionary, a separable two-dimensional sparse prior is introduced based on sparse representation theory; based on the above joint low-rank and separable two-dimensional sparse priors, the completion problem for an image with missing rows and columns is expressed as solving a constrained optimization equation, thereby realizing the filling of the row/column-missing image.
2. The row/column-missing image filling method based on low-rank matrix reconstruction and sparse representation as claimed in claim 1, characterized in that expressing the completion problem for an image with missing rows and columns as a constrained optimization equation comprises the following specific steps:
1) express the completion problem for an image with missing rows and columns as the following constrained optimization equation:
where tr(·) is the trace of a matrix, representing the low-rank prior; the elementwise (dot) product of two matrices appears in the weighted terms; ||·||_1 denotes the ℓ1 norm of a matrix, and the two ℓ1-norm terms represent the separable two-dimensional sparse prior; Ω is the observation space, indicating the known pixels in the observation matrix D with missing rows and columns; P_Ω(·) is the projection operator that restricts a variable to the domain Ω; A is the matrix to be filled; Σ = diag([σ1, σ2, ..., σn]) is the diagonal matrix of the singular values of A in non-increasing order; W_a, W_b and W_c are the weight matrices of the weighted low-rank and separable two-dimensional sparse terms; γ_B and γ_C are the regularization coefficients of the separable two-dimensional sparse terms; Φ_c and Φ_r are the trained column dictionary and row dictionary, with corresponding coefficient matrices B and C, respectively; E represents the missing pixels in the observation matrix D;
2) the constrained optimization problem (1) is converted into an unconstrained problem using the augmented Lagrange multiplier (ALM) method; the augmented Lagrangian is as follows:
where Y_1, Y_2 and Y_3 are Lagrange multiplier matrices, μ_1, μ_2 and μ_3 are penalty factors, <·,·> denotes the inner product of two matrices, and ||·||_F denotes the Frobenius norm of a matrix;
3) the solution procedure is: train the column dictionary Φ_c and the row dictionary Φ_r, initialize the weight matrices W_a, W_b and W_c, then alternately update the coefficient matrices B and C, the recovered matrix A, the missing-pixel matrix E, the Lagrange multiplier matrices Y_1, Y_2 and Y_3, the penalty factors μ_1, μ_2 and μ_3, and the weight matrices W_a, W_b and W_c until the algorithm converges; the iterate A^{(l)} is then the final solution A of the original problem.
3. The row/column-missing image filling method based on low-rank matrix reconstruction and sparse representation as claimed in claim 2, characterized in that, specifically, the dictionaries Φ_c and Φ_r are trained as follows: the column dictionary Φ_c and the row dictionary Φ_r are trained on a high-quality image dataset using an online learning algorithm.
4. The row/column-missing image filling method based on low-rank matrix reconstruction and sparse representation as claimed in claim 2, characterized in that, specifically, the weight matrices W_a, W_b and W_c are initialized as follows: let l denote the reweighting index; for l = 0, the initial values of W_a^{(0)}, W_b^{(0)} and W_c^{(0)} are all set to 1, indicating that the first iteration is not reweighted.
5. The row/column-missing image filling method based on low-rank matrix reconstruction and sparse representation as claimed in claim 2, characterized in that, specifically, equation (2) is solved iteratively by converting it into the following sequence of subproblems using the alternating direction method (ADM):
$$
\begin{cases}
B^{k+1}=\arg\min_{B}\ L_{\mu_1^k,\mu_2^k,\mu_3^k}\!\left(A^{k},B,C^{k},E^{k},Y_1^{k},Y_2^{k},Y_3^{k}\right)\\
C^{k+1}=\arg\min_{C}\ L_{\mu_1^k,\mu_2^k,\mu_3^k}\!\left(A^{k},B^{k+1},C,E^{k},Y_1^{k},Y_2^{k},Y_3^{k}\right)\\
A^{k+1}=\arg\min_{A}\ L_{\mu_1^k,\mu_2^k,\mu_3^k}\!\left(A,B^{k+1},C^{k+1},E^{k},Y_1^{k},Y_2^{k},Y_3^{k}\right)\\
E^{k+1}=\arg\min_{P_\Omega(E)=0}\ L_{\mu_1^k,\mu_2^k,\mu_3^k}\!\left(A^{k+1},B^{k+1},C^{k+1},E,Y_1^{k},Y_2^{k},Y_3^{k}\right)\\
Y_1^{k+1}=Y_1^{k}+\mu_1^{k}\left(A^{k+1}-\Phi_c B^{k+1}\right)\\
Y_2^{k+1}=Y_2^{k}+\mu_2^{k}\left((A^{k+1})^{T}-\Phi_r C^{k+1}\right)\\
Y_3^{k+1}=Y_3^{k}+\mu_3^{k}\left(D-A^{k+1}-E^{k+1}\right)\\
\mu_1^{k+1}=\rho_1\mu_1^{k},\quad \mu_2^{k+1}=\rho_2\mu_2^{k},\quad \mu_3^{k+1}=\rho_3\mu_3^{k}
\end{cases}
\qquad(3)
$$
In the above formula, B^{k+1}, C^{k+1}, A^{k+1} and E^{k+1} denote the values of the variables B, C, A and E that minimize the objective function; ρ_1, ρ_2 and ρ_3 are multiplying factors and k is the iteration index. The iteration then proceeds as follows:
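The multiplier and penalty updates in the last four rows of (3) can be sketched as below. This is a hedged illustration: the names and the ρ values are ours, and we pair B with the column dictionary Φ_c and C with the row dictionary Φ_r, following claim 2:

```python
import numpy as np

def update_multipliers(Y1, Y2, Y3, mu, A, B, C, E, D, Phi_c, Phi_r,
                       rho=(1.1, 1.1, 1.1)):
    """One ALM dual step: ascend the Lagrange multipliers along the
    constraint residuals, then grow the penalty factors geometrically."""
    mu1, mu2, mu3 = mu
    Y1 = Y1 + mu1 * (A - Phi_c @ B)    # residual of A = Phi_c * B
    Y2 = Y2 + mu2 * (A.T - Phi_r @ C)  # residual of A^T = Phi_r * C
    Y3 = Y3 + mu3 * (D - A - E)        # residual of D = A + E
    return Y1, Y2, Y3, (rho[0] * mu1, rho[1] * mu2, rho[2] * mu3)
```

Growing the penalties geometrically (ρ > 1) is the standard ALM device for forcing the constraint residuals to zero.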
1) Solve B^{k+1}: B^{k+1} is obtained via the accelerated proximal gradient algorithm;
Removing the terms of the objective in (3) that do not involve B gives:
Using a Taylor expansion, construct a second-order surrogate function that approximates the expression above, and solve this surrogate in place of the original equation; introducing the variable Z, the final solution is:
$$
B^{k+1}=B_{j+1}^{k}=\mathrm{soft}\!\left(U_{j+1},\ \frac{\gamma_B}{L_f}W_b\right)\qquad(5)
$$
where soft(·) is the shrinkage operator, ∇f(Z_j) is the gradient of f(Z), and L_f is a constant equal to the Lipschitz constant of ∇f; the update rule for the variable Z_j is as follows:
$$
\begin{cases}
t_{j+1}=\dfrac{1+\sqrt{4t_j^{2}+1}}{2}\\[2mm]
Z_{j+1}=B_{j+1}^{k}+\dfrac{t_j-1}{t_{j+1}}\left(B_{j+1}^{k}-B_{j}^{k}\right)
\end{cases}
\qquad(6)
$$
where t_j is a sequence of constants and j is the inner iteration index;
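The t_j sequence and the extrapolation step of (6) can be sketched as follows (a minimal illustration; the function names are ours):

```python
import numpy as np

def next_t(t):
    """Momentum sequence t_{j+1} = (1 + sqrt(4 t_j^2 + 1)) / 2, with t_1 = 1."""
    return (1.0 + np.sqrt(4.0 * t * t + 1.0)) / 2.0

def extrapolate(B_new, B_old, t, t_next):
    """Z_{j+1} = B_{j+1} + ((t_j - 1) / t_{j+1}) * (B_{j+1} - B_j)."""
    return B_new + ((t - 1.0) / t_next) * (B_new - B_old)
```

The extrapolation point Z_{j+1} overshoots along the direction of the last step, which is what accelerates the proximal gradient iteration.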
2) Solve C^{k+1}: C^{k+1} is obtained via the accelerated proximal gradient algorithm;
Removing the terms of the objective in (3) that do not involve C gives:
Using a Taylor expansion, construct a second-order surrogate function that approximates the expression above, and solve this surrogate in place of the original equation; introducing the variable Z̃, the final solution is:
$$
C^{k+1}=C_{j+1}^{k}=\mathrm{soft}\!\left(\tilde{U}_{j+1},\ \frac{\gamma_C}{L_f}W_c\right)\qquad(8)
$$
where soft(·) is the shrinkage operator, ∇f̃(Z̃_j) is the gradient of f̃(Z̃), and L_f is a constant equal to the Lipschitz constant of ∇f̃; the update rule for the variable Z̃_j is as follows:
$$
\begin{cases}
t_{j+1}=\dfrac{1+\sqrt{4t_j^{2}+1}}{2}\\[2mm]
\tilde{Z}_{j+1}=C_{j+1}^{k}+\dfrac{t_j-1}{t_{j+1}}\left(C_{j+1}^{k}-C_{j}^{k}\right)
\end{cases}
\qquad(9)
$$
where t_j is a sequence of constants and j is the inner iteration index;
3) Solve A^{k+1}: A^{k+1} is obtained by singular value thresholding (SVT);
Removing the terms of the objective in (3) that do not involve A and completing the square gives an expression in an auxiliary matrix Q^{k+1}, which is then solved by singular value thresholding:
$$
A^{k+1}=H^{k+1}\,\mathrm{soft}\!\left(\Sigma^{k+1},\ \frac{1}{\mu_1^{k}+\mu_2^{k}+\mu_3^{k}}W_a\right)(V^{k+1})^{T}\qquad(11)
$$
where H^{k+1} and V^{k+1} are the left and right singular matrices of Q^{k+1}, respectively;
4) Solve E^{k+1}: the solution consists of two parts;
within the observation space Ω, E is 0; outside Ω, i.e., in the complementary space Ω̄, E is obtained by setting the first derivative to zero; the two parts together give the final solution for E:
$$
E^{k+1}=P_{\Omega}(0)+P_{\bar{\Omega}}\!\left(D-A^{k+1}+\frac{Y_3^{k}}{\mu_3^{k}}\right)\qquad(12)
$$
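The E update of (12) amounts to a simple masked assignment; a sketch (the boolean mask marking Ω is our device):

```python
import numpy as np

def update_E(D, A, Y3, mu3, observed):
    """E^{k+1}: zero on the observed set Omega, and D - A + Y3/mu3 on its
    complement. `observed` is a boolean mask that is True on Omega."""
    E = D - A + Y3 / mu3
    E[observed] = 0.0
    return E
```

Only the unobserved entries carry an error term; the observed entries are trusted, so E vanishes there by construction.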
5) Repeat the above steps 1), 2), 3), 4) until the algorithm converges; the iterates A^{k+1}, Σ^{k+1}, B^{k+1}, C^{k+1} and E^{k+1} are then the unreweighted solutions A^{(l)}, Σ^{(l)}, B^{(l)}, C^{(l)} and E^{(l)} of the original problem, where l is the reweighting index;
6) Update the weight matrices W_a, W_b and W_c;
to offset the influence of signal amplitude on the nuclear norm and the ℓ1 norm, a reweighting scheme is introduced: based on the currently estimated singular value matrix Σ^{(l)} and the amplitudes of the coefficient matrices B^{(l)} and C^{(l)}, the weight matrices W_a, W_b and W_c are updated iteratively by the inverse-proportion rule:
$$
\begin{cases}
W_a^{(l+1)}(\hat{i},\hat{i})=\dfrac{1}{\sigma_i^{(l)}+\varepsilon}\\[2mm]
W_b^{(l+1)}(\hat{i},\hat{j})=\dfrac{1}{\left|B^{(l)}(\hat{i},\hat{j})\right|+\varepsilon}\\[2mm]
W_c^{(l+1)}(\hat{i},\hat{j})=\dfrac{1}{\left|C^{(l)}(\hat{i},\hat{j})\right|+\varepsilon}
\end{cases}
\qquad(13)
$$
where (î, ĵ) are the pixel coordinates in the image and ε is an arbitrarily small positive number.
7) Repeat the above steps 1)–6) until the algorithm converges; the iterate A^{(l)} is then the final solution A of the original problem.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710298239.5A CN107133930A (en) | 2017-04-30 | 2017-04-30 | Ranks missing image fill method with rarefaction representation is rebuild based on low-rank matrix |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107133930A true CN107133930A (en) | 2017-09-05 |
Family
ID=59715788
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710298239.5A Pending CN107133930A (en) | 2017-04-30 | 2017-04-30 | Ranks missing image fill method with rarefaction representation is rebuild based on low-rank matrix |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107133930A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103093430A (en) * | 2013-01-25 | 2013-05-08 | 西安电子科技大学 | Heart magnetic resonance imaging (MRI) image deblurring method based on sparse low rank and dictionary learning |
CN103679660A (en) * | 2013-12-16 | 2014-03-26 | 清华大学 | Method and system for restoring image |
CN104867119A (en) * | 2015-05-21 | 2015-08-26 | 天津大学 | Structural lack image filling method based on low rank matrix reconstruction |
CN104978716A (en) * | 2015-06-09 | 2015-10-14 | 重庆大学 | SAR image noise reduction method based on linear minimum mean square error estimation |
CN105743611A (en) * | 2015-12-25 | 2016-07-06 | 华中农业大学 | Sparse dictionary-based wireless sensor network missing data reconstruction method |
Non-Patent Citations (4)
Title |
---|
JINGYU YANG等: ""Completion of Structurally-Incomplete Matrices with Reweighted Low-Rank and Sparsity Priors"", 《2016 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP) 》 * |
杜伟男等: ""基于残差字典学习的图像超分辨率重建方法"", 《北京工业大学学报》 * |
汪雄良等: ""基于快速基追踪算法的图像去噪"", 《计算机应用》 * |
王斌等: ""基于低秩表示和学习字典的高光谱图像异常探测"", 《红外与毫米波学报》 * |
Cited By (32)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108171215A (en) * | 2018-01-25 | 2018-06-15 | 河南大学 | Face Pseudo-median filter and camouflage category detection method based on low-rank variation dictionary and rarefaction representation classification |
CN108171215B (en) * | 2018-01-25 | 2023-02-03 | 河南大学 | Face camouflage detection and camouflage type detection method based on low-rank variation dictionary and sparse representation classification |
CN108427742A (en) * | 2018-03-07 | 2018-08-21 | 中国电力科学研究院有限公司 | A kind of distribution network reliability data recovery method and system based on low-rank matrix |
CN108427742B (en) * | 2018-03-07 | 2023-12-19 | 中国电力科学研究院有限公司 | Power distribution network reliability data restoration method and system based on low-rank matrix |
CN108734675A (en) * | 2018-05-17 | 2018-11-02 | 西安电子科技大学 | Image recovery method based on mixing sparse prior model |
CN108734675B (en) * | 2018-05-17 | 2021-09-28 | 西安电子科技大学 | Image restoration method based on mixed sparse prior model |
CN109325446A (en) * | 2018-09-19 | 2019-02-12 | 电子科技大学 | A kind of method for detecting infrared puniness target based on weighting truncation nuclear norm |
CN109325442A (en) * | 2018-09-19 | 2019-02-12 | 福州大学 | A kind of face identification method of image pixel missing |
CN109325446B (en) * | 2018-09-19 | 2021-06-22 | 电子科技大学 | Infrared weak and small target detection method based on weighted truncation nuclear norm |
CN109215025A (en) * | 2018-09-25 | 2019-01-15 | 电子科技大学 | A kind of method for detecting infrared puniness target approaching minimization based on non-convex order |
CN109215025B (en) * | 2018-09-25 | 2021-08-10 | 电子科技大学 | Infrared weak and small target detection method based on non-convex rank approach minimization |
CN109348229A (en) * | 2018-10-11 | 2019-02-15 | 武汉大学 | Jpeg image mismatch steganalysis method based on the migration of heterogeneous characteristic subspace |
CN109348229B (en) * | 2018-10-11 | 2020-02-11 | 武汉大学 | JPEG image mismatch steganalysis method based on heterogeneous feature subspace migration |
CN109671030B (en) * | 2018-12-10 | 2021-04-20 | 西安交通大学 | Image completion method based on adaptive rank estimation Riemann manifold optimization |
CN109671030A (en) * | 2018-12-10 | 2019-04-23 | 西安交通大学 | A kind of image completion method based on the optimization of adaptive rand estination Riemann manifold |
CN109754008A (en) * | 2018-12-28 | 2019-05-14 | 上海理工大学 | The estimation method of the symmetrical sparse network missing information of higher-dimension based on matrix decomposition |
CN109754008B (en) * | 2018-12-28 | 2022-07-19 | 上海理工大学 | High-dimensional symmetric sparse network missing information estimation method based on matrix decomposition |
CN109978783A (en) * | 2019-03-19 | 2019-07-05 | 上海交通大学 | A kind of color image restorative procedure |
CN111025385A (en) * | 2019-11-26 | 2020-04-17 | 中国地质大学(武汉) | Seismic data reconstruction method based on low rank and sparse constraint |
CN111597440A (en) * | 2020-05-06 | 2020-08-28 | 上海理工大学 | Recommendation system information estimation method based on internal weighting matrix three-decomposition low-rank approximation |
CN111881413A (en) * | 2020-07-28 | 2020-11-03 | 中国人民解放军海军航空大学 | Multi-source time sequence missing data recovery method based on matrix decomposition |
CN111881413B (en) * | 2020-07-28 | 2022-12-09 | 中国人民解放军海军航空大学 | Multi-source time sequence missing data recovery method based on matrix decomposition |
CN112184571A (en) * | 2020-09-14 | 2021-01-05 | 江苏信息职业技术学院 | Robust principal component analysis method based on non-convex rank approximation |
CN112564945A (en) * | 2020-11-23 | 2021-03-26 | 南京邮电大学 | IP network flow estimation method based on time sequence prior and sparse representation |
CN112564945B (en) * | 2020-11-23 | 2023-03-24 | 南京邮电大学 | IP network flow estimation method based on time sequence prior and sparse representation |
CN112561842A (en) * | 2020-12-07 | 2021-03-26 | 昆明理工大学 | Multi-source damaged image fusion and recovery combined implementation method based on dictionary learning |
CN112561842B (en) * | 2020-12-07 | 2022-12-09 | 昆明理工大学 | Multi-source damaged image fusion and recovery combined implementation method based on dictionary learning |
CN112734763A (en) * | 2021-01-29 | 2021-04-30 | 西安理工大学 | Image decomposition method based on convolution and K-SVD dictionary joint sparse coding |
CN113112563A (en) * | 2021-04-21 | 2021-07-13 | 西北大学 | Sparse angle CB-XLCT imaging method for optimizing regional knowledge prior |
CN113112563B (en) * | 2021-04-21 | 2023-10-27 | 西北大学 | Sparse angle CB-XLCT imaging method for optimizing regional knowledge prior |
CN115508835A (en) * | 2022-10-28 | 2022-12-23 | 广东工业大学 | Tomography SAR three-dimensional imaging method based on blind compressed sensing |
CN115508835B (en) * | 2022-10-28 | 2024-03-15 | 广东工业大学 | Tomographic SAR three-dimensional imaging method based on blind compressed sensing |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107133930A (en) | Row/column missing image filling method based on low-rank matrix reconstruction and sparse representation | |
CN109241491A (en) | Structural missing filling method for tensors based on joint low-rank and sparse representation | |
CN104867119B (en) | Structural missing image filling method based on low-rank matrix reconstruction | |
CN110119780A (en) | Hyperspectral image super-resolution reconstruction method based on generative adversarial network | |
CN102142137B (en) | Image super-resolution reconstruction method based on high-resolution dictionary sparse representation | |
CN103871041B (en) | Image super-resolution reconstruction method based on cognitive regularization parameter construction | |
CN106600538A (en) | Face super-resolution algorithm based on regional deep convolutional neural network | |
CN105488776B (en) | Super-resolution image reconstruction method and device | |
CN106067161A (en) | Method for performing super-resolution on images | |
CN103345643B (en) | Remote sensing image classification method | |
CN104751472B (en) | Fabric defect detection method based on B-spline wavelet and deep neural network | |
CN106919952A (en) | Hyperspectral anomaly target detection method based on structured sparse representation and internal cluster filtering | |
CN105825200A (en) | Hyperspectral anomaly detection method based on background dictionary learning and structured sparse representation | |
CN106203625A (en) | Deep neural network training method based on multiple pre-training | |
CN105118078B (en) | Undersampled CT image reconstruction method | |
CN104298974A (en) | Human body behavior recognition method based on depth video sequences | |
CN105957022A (en) | Recovery method based on low-rank matrix reconstruction for images with missing entries and random-valued impulse noise | |
CN105981050A (en) | Method and system for extracting face features from face image data | |
CN104794455B (en) | Dongba pictograph recognition method | |
CN109872278A (en) | Image cloud removal method based on U-shaped network and generative adversarial network | |
CN105719262B (en) | Panchromatic and multispectral remote sensing image fusion method based on sub-dictionary sparse reconstruction | |
CN108053456A (en) | PET reconstruction image optimization method and system | |
CN105931264A (en) | Sea-surface infrared small target detection method | |
CN110139046A (en) | Video frame synthesis method based on tensors | |
CN104361574 (en) | No-reference color image quality assessment method based on sparse representation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication | ||
Application publication date: 20170905 |