CN110223243A - Tensor completion method based on non-local self-similarity and low-rank regularization - Google Patents

Tensor completion method based on non-local self-similarity and low-rank regularization

Info

Publication number
CN110223243A
CN110223243A (application CN201910379660.8A)
Authority
CN
China
Prior art keywords
tensor
model
matrix
rank
local self
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910379660.8A
Other languages
Chinese (zh)
Inventor
李晓彤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to CN201910379660.8A
Publication of CN110223243A
Current legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G06T5/77 Retouching; Inpainting; Scratch removal
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a tensor completion method based on non-local self-similarity and low-rank regularization, comprising the following steps: S1: establish the tensor model; S2: optimize the objective function according to the proximal operator and solve the tensor model; S3: solve iteratively using a rank-increasing strategy. Using the plug-and-play framework, an implicit non-local self-similarity regularizer is designed to promote the recovery of tensor detail, and a model solver based on block successive upper-bound minimization is designed. Numerical experiments show that the proposed model NLS-LR has a clear advantage in recovering the structure, contours, and details of the target tensor, and the experimental results demonstrate that it surpasses many existing mainstream methods in visual quality and evaluation metrics.

Description

Tensor completion method based on non-local self-similarity and low-rank regularization
Technical field
The present invention relates to the technical field of image processing, and in particular to a tensor completion method based on non-local self-similarity and low-rank regularization.
Background technique
In today's information society, information is growing explosively. In real life, data such as magnetic resonance images (MRI), hyperspectral/multispectral images (HSI/MSI), color images, and video often have a high-dimensional structure. As the generalization of vectors and matrices, tensors play a very important role in representing high-dimensional data. Because information is missing or too costly to acquire, real-world tensors often present an incomplete structure. The problem of inferring the complete tensor from a tensor with missing entries is called the low-rank tensor completion (LRTC) problem. Tensor completion is widely used in practice, for example in image inpainting, MRI restoration, rain removal, and remote-sensing image restoration.
To solve the tensor completion problem, we need to exploit the intrinsic relationship between the known and unknown entries of the incomplete tensor. In fact, such data usually carry strong internal correlation, which we commonly call low-rankness. In recent years, many methods that model the relationship between known and unknown entries have appeared and have made good progress on the tensor completion problem. From the viewpoint of the mathematical model, the low-rank tensor completion problem can be expressed as:

$$\min_{\mathcal{Y}} \ \operatorname{rank}(\mathcal{Y}) \quad \text{s.t.} \quad \mathcal{P}_{\Omega}(\mathcal{Y}) = \mathcal{P}_{\Omega}(\mathcal{T}),$$

where $\mathcal{Y}$ is the target tensor, $\mathcal{T}$ is the observed tensor, $\Omega$ is the observation set of the known entries, and $\mathcal{P}_{\Omega}$ is the projection operator that keeps the entries in the observation set unchanged and sets the remaining entries to 0. In fact, the familiar low-rank matrix completion problem is exactly a second-order instance of the tensor completion problem.
Unlike for matrices, the rank of a tensor is not uniquely defined. Among the different definitions, the two most common ways of defining the tensor rank are the CP rank, based on the CP (CANDECOMP/PARAFAC) decomposition, and the Tucker rank, based on the Tucker decomposition. For an $N$-th order tensor $\mathcal{Y} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$, the CP rank of $\mathcal{Y}$ is defined as the minimum number of rank-one tensors whose sum forms the target tensor. The Tucker rank of $\mathcal{Y}$ can be defined as $(\operatorname{rank}(Y_{(1)}), \operatorname{rank}(Y_{(2)}), \ldots, \operatorname{rank}(Y_{(N)}))$, where $Y_{(n)}$ is the unfolding matrix of the tensor along the $n$-th mode.
However, directly optimizing the CP rank or the Tucker rank is NP-hard. Over the past several years, the nuclear norm has become the most reliable convex approximation of the matrix rank and has been widely used to solve all kinds of rank-optimization problems.
Although methods that use local smoothness priors of the tensor have achieved good results, they ignore the redundant non-local self-similarity information in the tensor. In fact, a non-local method can exploit not only the neighborhood information of a pixel but also the information in globally similar patches. Moreover, in many image-processing inverse problems, non-local methods outperform local methods.
Therefore, we propose a low-rank tensor completion model that uses a non-local prior to maintain the self-similarity of the target tensor, which helps restore details of the target tensor such as structure, contours, and texture. The tensor completion model can be stated as:

$$\min_{A, X, \mathcal{Y}} \ \sum_{n=1}^{3} \frac{\alpha_n}{2} \left\| Y_{(n)} - A_n X_n \right\|_F^2 + \lambda\,\Phi(\mathcal{Y}) \quad \text{s.t.} \quad \mathcal{P}_{\Omega}(\mathcal{Y}) = \mathcal{P}_{\Omega}(\mathcal{T}),$$

where $A = (A_1, A_2, A_3)$ and $X = (X_1, X_2, X_3)$ are the low-rank factor matrices that guarantee the low-rankness of the tensor along each mode, $\lambda$ is the regularization parameter, and $\Phi(\mathcal{Y})$ is the plug-and-play regularizer that promotes the non-local self-similarity of the target tensor. Meanwhile, using the plug-and-play framework, we propose an implicitly regularized optimization model. By considering global low-rankness and non-local self-similarity together, our model can effectively restore the structure of the target tensor and capture its missing details.
Summary of the invention
The purpose of the present invention is to provide a tensor completion method based on non-local self-similarity and low-rank regularization, which solves the problem that current tensor completion algorithms ignore the redundant non-local self-similarity information in the tensor.
In order to solve the above technical problems, the invention adopts the following technical scheme:
A tensor completion method based on non-local self-similarity and low-rank regularization, comprising the following steps:
S1: establish the tensor model;
S2: optimize the objective function according to the proximal operator and solve the tensor model;
S3: solve iteratively using a rank-increasing strategy.
Further, the specific method of establishing the tensor model in step S1 is:

For a three-dimensional tensor $\mathcal{T} \in \mathbb{R}^{I_1 \times I_2 \times I_3}$, the model is established as

$$\min_{A, X, \mathcal{Y}} \ \sum_{n=1}^{3} \frac{\alpha_n}{2} \left\| Y_{(n)} - A_n X_n \right\|_F^2 + \lambda\,\Phi(\mathcal{Y}) + \iota_S(\mathcal{Y}),$$

where $\alpha_n$ are nonnegative weights satisfying $\sum_{n=1}^{3} \alpha_n = 1$; $Y_{(n)}$ denotes the unfolding matrix of the tensor $\mathcal{Y}$ along the $n$-th mode; $A = (A_1, A_2, A_3)$ and $X = (X_1, X_2, X_3)$ are the low-rank factor matrices along the different modes; $\lambda$ is the regularization parameter; $\Phi(\mathcal{Y})$ is the non-local self-similarity regularizer; and $\iota_S$ is the indicator function satisfying

$$\iota_S(\mathcal{Y}) = \begin{cases} 0, & \mathcal{P}_{\Omega}(\mathcal{Y}) = \mathcal{P}_{\Omega}(\mathcal{T}), \\ +\infty, & \text{otherwise.} \end{cases}$$
Further, the method of optimizing the objective function according to the proximal operator and solving the tensor model in step S2 is:

The objective function is optimized block by block according to the proximal operator:

$$\begin{aligned} A^{k+1} &= \arg\min_{A} \ f(A, X^k, \mathcal{Y}^k) + \frac{\rho}{2}\left\| A - A^k \right\|_F^2, \\ X^{k+1} &= \arg\min_{X} \ f(A^{k+1}, X, \mathcal{Y}^k) + \frac{\rho}{2}\left\| X - X^k \right\|_F^2, \\ \mathcal{Y}^{k+1} &= \arg\min_{\mathcal{Y}} \ f(A^{k+1}, X^{k+1}, \mathcal{Y}) + \frac{\rho}{2}\left\| \mathcal{Y} - \mathcal{Y}^k \right\|_F^2, \end{aligned}$$

where $f(A, X, \mathcal{Y})$ denotes the objective of the model above and $\rho > 0$ is the proximal parameter;
By using block successive upper-bound minimization, the solution proceeds as follows:

The subproblems for $X_n$ and $A_n$ are differentiable proximal least-squares problems with closed-form solutions. The problem is thus converted into how to solve the $\mathcal{Y}$-subproblem, i.e.

$$\min_{\mathcal{Y}} \ \sum_{n=1}^{3} \frac{\alpha_n}{2} \left\| Y_{(n)} - A_n X_n \right\|_F^2 + \lambda\,\Phi(\mathcal{Y}) + \iota_S(\mathcal{Y}) + \frac{\rho}{2} \left\| \mathcal{Y} - \mathcal{Y}^k \right\|_F^2.$$
The Frobenius norm of a matrix, $\|X\|_F = \big( \sum_{i,j} x_{ij}^2 \big)^{1/2}$, is the square root of the sum of the squares of all its elements. If we define $\operatorname{fold}_n(A_n X_n)$ as the tensor obtained by folding the matrix $A_n X_n$ along the $n$-th mode, then the Frobenius norm of the matrix $A_n X_n - Y_{(n)}$ and the Frobenius norm of the tensor $\operatorname{fold}_n(A_n X_n) - \mathcal{Y}$ are equal, so the $\mathcal{Y}$-subproblem is converted into the following form:

$$\min_{\mathcal{Y}} \ \sum_{n=1}^{3} \frac{\alpha_n}{2} \left\| \operatorname{fold}_n(A_n X_n) - \mathcal{Y} \right\|_F^2 + \lambda\,\Phi(\mathcal{Y}) + \iota_S(\mathcal{Y}) + \frac{\rho}{2} \left\| \mathcal{Y} - \mathcal{Y}^k \right\|_F^2.$$
The objective function is further converted to the following form:

$$\min_{\mathcal{Y}} \ \frac{1+\rho}{2} \left\| \mathcal{Y} - \frac{\sum_{n=1}^{3} \alpha_n \operatorname{fold}_n(A_n X_n) + \rho\,\mathcal{Y}^k}{1+\rho} \right\|_F^2 + \lambda\,\Phi(\mathcal{Y}) + \iota_S(\mathcal{Y}),$$

where $\operatorname{fold}_n(A_n X_n)$ is the tensor folded from the matrix $A_n X_n$ along the $n$-th mode;

Now let $\mathcal{Z} = \dfrac{\sum_{n=1}^{3} \alpha_n \operatorname{fold}_n(A_n X_n) + \rho\,\mathcal{Y}^k}{1+\rho}$; the $\mathcal{Y}$-subproblem is then rewritten as:

$$\min_{\mathcal{Y}} \ \frac{1+\rho}{2} \left\| \mathcal{Y} - \mathcal{Z} \right\|_F^2 + \lambda\,\Phi(\mathcal{Y}) + \iota_S(\mathcal{Y}).$$
$\Phi(\mathcal{Y})$ is the non-local self-similarity regularizer designed through the plug-and-play framework, so the objective function is solved by the denoising step $\mathcal{D}_{\sigma}(\mathcal{Z})$, and the result is projected onto the feasible region by the projection operator so as to satisfy the constraint. The solution of the $\mathcal{Y}$-subproblem is thus of the following form:

$$\mathcal{Y}^{k+1} = \mathcal{P}_{\Omega^c}\left( \mathcal{D}_{\sigma}(\mathcal{Z}) \right) + \mathcal{P}_{\Omega}(\mathcal{T}),$$

where $\mathcal{P}$ is the projection operator, $\mathcal{D}_{\sigma}$ is the plug-and-play operator function, and $\sigma$ is the parameter controlling the strength of the regularization term.
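The following NumPy sketch illustrates this Y-update under the assumptions above; the `unfold`/`fold` helpers, the function names, and the generic `denoiser` callable (standing in for a plug-and-play operator such as CBM3D) are illustrative, not part of the claims.

    import numpy as np

    def unfold(t, mode):
        # Mode-n unfolding Y_(n): move axis `mode` to the front, then flatten.
        return np.moveaxis(t, mode, 0).reshape(t.shape[mode], -1)

    def fold(m, mode, shape):
        # Inverse of unfold: reshape, then move the mode axis back into place.
        rest = [s for i, s in enumerate(shape) if i != mode]
        return np.moveaxis(m.reshape([shape[mode]] + rest), 0, mode)

    def y_update(A, X, y_k, t_obs, mask, alpha, rho, denoiser, sigma):
        # One plug-and-play Y-step: average the folded factor products,
        # denoise the result, and project back onto the feasible set.
        shape = y_k.shape
        z = sum(alpha[n] * fold(A[n] @ X[n], n, shape) for n in range(3))
        z = (z + rho * y_k) / (1.0 + rho)
        d = denoiser(z, sigma)
        return np.where(mask, t_obs, d)  # P_{Omega^c}(D_sigma(Z)) + P_Omega(T)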
Further, the method of iterating with the rank-increasing strategy in step S3 is:

With the rank $r = (r_1, r_2, r_3)$ as the initial value, when the relative error of the iteration is less than the set threshold, that is:

$$\frac{\left\| \mathcal{Y}^{k+1} - \mathcal{Y}^{k} \right\|_F}{\left\| \mathcal{Y}^{k} \right\|_F} \le \gamma,$$

the corresponding rank $r_n$ is updated in the $(k+1)$-th iteration to $\min(r_n + \Delta r_n, r_n^{\max})$, where $\Delta r_n$ is a positive integer and $r_n^{\max}$ is the Tucker rank upper bound. When $r_n$ increases in the $(k+1)$-th iteration, $A_n^{k} \in \mathbb{R}^{I_n \times r_n}$ is updated to $[A_n^{k}, \hat{A}_n]$, $X_n^{k} \in \mathbb{R}^{r_n \times \prod_{j \neq n} I_j}$ is updated to $[X_n^{k}; \hat{X}_n]$, and $r_n$ is updated to $r_n + \Delta r_n$; that is, a random matrix $\hat{A}_n$ of size $I_n \times \Delta r_n$ is appended to the column group of $A_n^{k}$, and a random matrix $\hat{X}_n$ of size $\Delta r_n \times \prod_{j \neq n} I_j$ is appended to the row group of $X_n^{k}$.
Compared with the prior art, the beneficial effects of the present invention are:

The invention proposes a novel low-rank tensor completion model that combines the low-rankness of the tensor with its non-local self-similarity. On the one hand, we use the low-rank factor-matrix method to guarantee global low-rankness. On the other hand, using the plug-and-play framework, an implicit non-local self-similarity regularizer is designed to promote the recovery of tensor detail, and a model solver based on block successive upper-bound minimization is designed. Numerical experiments show that the proposed model NLS-LR has a clear advantage in recovering the structure, contours, and details of the target tensor, and the experimental results demonstrate that it surpasses many existing mainstream methods in visual quality and evaluation metrics.
Brief description of the drawings

Fig. 1 shows the color-image data set used in the experiments of the present invention.
Fig. 2 shows the recovery results of the present invention on color images with 10% random sampling.
Fig. 3 shows the inpainting results of the models TMac, MF-TV, TNN, LRTC-TV-II, SPC-QV, and NLS-LR on pictures with different kinds of contamination.
Fig. 4 shows one frame of each of the videos coastguard, suzie, news, foreman, and hall restored at a 10% sampling rate.
Fig. 5 shows pseudo-color displays (R-10, G-20, B-30) of the completion results for the multispectral images cloth, beads, and toy at a 10% sampling rate.
Fig. 6 shows the PSNR curves for different proximal parameters $\rho$ and different regularization parameters $\sigma$.
Fig. 7 shows the convergence curve against the number of iterations.
Specific embodiments

In order to make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is further elaborated below with reference to the accompanying drawings and embodiments. It should be appreciated that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit it.
We first explain the notation used in the present invention:

In the present invention, lowercase letters (a) denote vectors, capital letters (A) denote matrices, and calligraphic letters ($\mathcal{A}$) denote tensors. Some basic concepts and facts about tensors are introduced in the following part.
(1) Tensor preliminaries
Define an $N$-th order tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$, whose element at position $(i_1, i_2, \ldots, i_N)$ is $x_{i_1 i_2 \cdots i_N}$. The unfolding matrix of the tensor $\mathcal{X}$ along the $n$-th mode is defined as $X_{(n)} \in \mathbb{R}^{I_n \times \prod_{j \neq n} I_j}$, where the element at position $(i_1, i_2, \ldots, i_N)$ of the tensor corresponds to the element at position $(i_n, j)$ of the unfolding matrix $X_{(n)}$, satisfying

$$j = 1 + \sum_{\substack{k=1 \\ k \neq n}}^{N} (i_k - 1) J_k, \qquad J_k = \prod_{\substack{m=1 \\ m \neq n}}^{k-1} I_m.$$
The inverse operator of the unfolding operation is the folding operator, that is, $\operatorname{fold}_n(X_{(n)}) = \mathcal{X}$.
The Tucker rank of the tensor $\mathcal{X}$ is defined as the array $\left( \operatorname{rank}(X_{(1)}), \operatorname{rank}(X_{(2)}), \ldots, \operatorname{rank}(X_{(N)}) \right)$.
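As a quick numerical check on these definitions, the sketch below builds a toy tensor of known Tucker rank and recovers that rank from its unfoldings; the construction and the helper names are our own illustrative choices.

    import numpy as np

    def unfold(t, mode):
        # Mode-n unfolding X_(n): axis `mode` becomes the rows.
        return np.moveaxis(t, mode, 0).reshape(t.shape[mode], -1)

    # Toy tensor with Tucker rank (2, 2, 2): a random 2x2x2 core expanded by
    # random factor matrices along each of the three modes.
    rng = np.random.default_rng(1)
    core = rng.random((2, 2, 2))
    U = [rng.random((dim, 2)) for dim in (6, 7, 8)]
    x = np.einsum('abc,ia,jb,kc->ijk', core, U[0], U[1], U[2])

    tucker_rank = tuple(np.linalg.matrix_rank(unfold(x, n)) for n in range(3))
    print(tucker_rank)  # expected: (2, 2, 2)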
The inner product of two tensors $\mathcal{X}, \mathcal{Y} \in \mathbb{R}^{I_1 \times \cdots \times I_N}$ is defined as $\langle \mathcal{X}, \mathcal{Y} \rangle = \sum_{i_1, \ldots, i_N} x_{i_1 \cdots i_N} \, y_{i_1 \cdots i_N}$.
The Frobenius norm of a tensor is defined as $\left\| \mathcal{X} \right\|_F = \sqrt{\langle \mathcal{X}, \mathcal{X} \rangle}$.
It is not difficult to verify that the tensor inner product satisfies the commutative and associative laws, which we will use in the derivations of the properties below.
(2) Projection operator and proximal operator
The projection operator $\mathcal{P}_{\Omega}$ keeps the elements of $\mathcal{X}$ in the observation set $\Omega$ unchanged while setting the other elements to 0, that is,

$$\left[ \mathcal{P}_{\Omega}(\mathcal{X}) \right]_{i_1 \cdots i_N} = \begin{cases} x_{i_1 \cdots i_N}, & (i_1, \ldots, i_N) \in \Omega, \\ 0, & \text{otherwise.} \end{cases}$$
The proximal operator of a convex function $f(x)$ is defined as:

$$\operatorname{prox}_f(y) = \arg\min_{x} \ f(x) + \frac{\rho}{2} \left\| x - y \right\|^2,$$

where $\rho > 0$. There is a known result showing that the optimization problem $\min_x f(x)$ is equivalent to $\min_{x, y} f(x) + \frac{\rho}{2} \| x - y \|^2$; therefore the minimum of $f(x)$ can be obtained by iteratively solving $x^{k+1} = \operatorname{prox}_f(x^k)$.
(3) Plug and play

In the field of image restoration, much work has been devoted to pairing effective regularizers with advanced optimization algorithms. In fact, many image priors are known to us, such as sparsity and piecewise-smoothness priors, but their corresponding regularization terms $\| x \|_1$ and $\| x \|_{TV}$ are non-differentiable. To deal with the non-differentiability of many regularization terms, numerous approximation algorithms, such as ISTA and ADMM, have emerged over the past decade or more. Through these methods, many inverse problems in image processing can be transformed into equivalent constrained problems that we are able to solve, such as the image denoising problem.

In recent years, the plug-and-play framework proposed by Venkatakrishnan et al. has attracted wide attention, and a large number of experiments have proved its effectiveness. It can convert certain known methods (such as BM3D, BM4D, TNRD, and NLM) into implicit regularizers. The plug-and-play framework allows many advanced denoisers to be applied to various inverse problems in image processing, such as image deblurring and super-resolution. The plug-and-play framework is introduced next.
In many image-processing inverse problems, a subproblem of the following form often appears:

$$\min_{x} \ \frac{\rho}{2} \left\| x - z \right\|^2 + \lambda\,\Phi(x),$$

where $\Phi(x)$ denotes the regularization term and $\lambda$ denotes the parameter balancing the strength of the fidelity term against the regularization term. If we define $\sigma = \sqrt{\lambda / \rho}$, then we have:

$$\min_{x} \ \frac{1}{2\sigma^2} \left\| x - z \right\|^2 + \Phi(x).$$

If we regard $z$ as some "degraded" image, this problem is translated into minimizing the residual between the degraded image $z$ and the true image $x$ under the image prior $\Phi(x)$. The original problem can thus be converted into a denoising problem that we know how to solve; this is the main idea. Under the plug-and-play framework, some advanced denoising functions can be described as implicit regularization terms, and the solution of the original problem can then be written in the following form:

$$\hat{x} = \mathcal{D}_{\sigma}(z),$$

where $\mathcal{D}_{\sigma}$ is the denoising function. In this form, the image prior $\Phi(x)$ can be described by an implicit denoising function. For example, if the regularization term used is $\Phi(x) = \| x \|_{TV}$, then we can use a TV-based denoising function to solve this problem. Studies have shown that we may use an implicit regularization term $\Phi(x)$ as the prior, which provides some theoretical basis for our use of the non-local self-similarity regularizer.
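A minimal sketch of this substitution follows, with a simple Gaussian smoother standing in for an advanced denoiser such as BM3D; the helper names and the stand-in denoiser are assumptions made purely for illustration.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def pnp_prox_step(z, lam, rho, denoiser):
        # Replace the prox of the implicit regularizer Phi with a denoiser call:
        # argmin_x rho/2*||x - z||^2 + lam*Phi(x) ~ D_sigma(z), sigma = sqrt(lam/rho)
        sigma = np.sqrt(lam / rho)
        return denoiser(z, sigma)

    # Stand-in denoiser; in this invention's setting the slot is filled by
    # CBM3D (color images) or VBM3D (video).
    smooth = lambda z, sigma: gaussian_filter(z, sigma)

    z = np.random.default_rng(2).random((64, 64))  # a "degraded" image
    x_hat = pnp_prox_step(z, lam=1.0, rho=4.0, denoiser=smooth)  # sigma = 0.5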
Next we present the model and the solution algorithm:

(1) Establishing the model
For a three-dimensional tensor $\mathcal{T} \in \mathbb{R}^{I_1 \times I_2 \times I_3}$, the model is established as

$$\min_{A, X, \mathcal{Y}} \ \sum_{n=1}^{3} \frac{\alpha_n}{2} \left\| Y_{(n)} - A_n X_n \right\|_F^2 + \lambda\,\Phi(\mathcal{Y}) + \iota_S(\mathcal{Y}),$$

where $\alpha_n$ are nonnegative weights satisfying $\sum_{n=1}^{3} \alpha_n = 1$; $Y_{(n)}$ denotes the unfolding matrix of the tensor $\mathcal{Y}$ along the $n$-th mode; $A = (A_1, A_2, A_3)$ and $X = (X_1, X_2, X_3)$ are the low-rank factor matrices along the different modes; $\lambda$ is the regularization parameter; $\Phi(\mathcal{Y})$ is the non-local self-similarity regularizer; and $\iota_S$ is the indicator function satisfying

$$\iota_S(\mathcal{Y}) = \begin{cases} 0, & \mathcal{P}_{\Omega}(\mathcal{Y}) = \mathcal{P}_{\Omega}(\mathcal{T}), \\ +\infty, & \text{otherwise.} \end{cases}$$
The model has two parts. The first part is the low-rank regularization term $\sum_{n=1}^{3} \frac{\alpha_n}{2} \| Y_{(n)} - A_n X_n \|_F^2$. Without loss of generality, we assume that the Tucker rank of $\mathcal{Y}$ is $(r_1, r_2, r_3)$, where $A_n \in \mathbb{R}^{I_n \times r_n}$ and $X_n \in \mathbb{R}^{r_n \times \prod_{j \neq n} I_j}$ are the low-rank factor matrices. This term enhances the low-rankness of the target tensor along each mode, so that the global information of the tensor $\mathcal{Y}$ can be captured better.
The other part is the non-local self-similarity regularizer $\Phi(\mathcal{Y})$, which is used to promote the non-local self-similarity of the target tensor. Through the plug-and-play framework, $\Phi(\mathcal{Y})$ can be described as an implicit regularization term. Meanwhile, different denoising functions can be selected for different types of data $\mathcal{Y}$, which gives the model good transferability.
(2) Solution algorithm
The objective function is optimized block by block according to the proximal operator:

$$\begin{aligned} A^{k+1} &= \arg\min_{A} \ f(A, X^k, \mathcal{Y}^k) + \frac{\rho}{2}\left\| A - A^k \right\|_F^2, \\ X^{k+1} &= \arg\min_{X} \ f(A^{k+1}, X, \mathcal{Y}^k) + \frac{\rho}{2}\left\| X - X^k \right\|_F^2, \\ \mathcal{Y}^{k+1} &= \arg\min_{\mathcal{Y}} \ f(A^{k+1}, X^{k+1}, \mathcal{Y}) + \frac{\rho}{2}\left\| \mathcal{Y} - \mathcal{Y}^k \right\|_F^2, \end{aligned}$$

where $f(A, X, \mathcal{Y})$ denotes the objective of the model above and $\rho$ is the proximal parameter;
By using block successive upper-bound minimization, the solution proceeds as follows:

For the subproblems of $X_n$ and $A_n$:

$$A_n^{k+1} = \arg\min_{A_n} \ \frac{\alpha_n}{2} \left\| Y_{(n)}^k - A_n X_n^k \right\|_F^2 + \frac{\rho}{2} \left\| A_n - A_n^k \right\|_F^2, \qquad X_n^{k+1} = \arg\min_{X_n} \ \frac{\alpha_n}{2} \left\| Y_{(n)}^k - A_n^{k+1} X_n \right\|_F^2 + \frac{\rho}{2} \left\| X_n - X_n^k \right\|_F^2,$$

we note that the subproblems for $X_n$ and $A_n$ are differentiable and easy to solve directly; each subproblem has a closed-form solution. The core of the problem is converted into how to solve the $\mathcal{Y}$-subproblem, i.e.

$$\min_{\mathcal{Y}} \ \sum_{n=1}^{3} \frac{\alpha_n}{2} \left\| Y_{(n)} - A_n X_n \right\|_F^2 + \lambda\,\Phi(\mathcal{Y}) + \iota_S(\mathcal{Y}) + \frac{\rho}{2} \left\| \mathcal{Y} - \mathcal{Y}^k \right\|_F^2.$$
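Setting the gradients of these proximal least-squares subproblems to zero gives the closed-form factor updates; the sketch below is our reading of those solutions, with variable names of our own choosing.

    import numpy as np

    def update_A(Y_n, A_k, X_k, alpha_n, rho):
        # argmin_A (alpha_n/2)*||Y_n - A @ X_k||_F^2 + (rho/2)*||A - A_k||_F^2
        r = X_k.shape[0]
        lhs = alpha_n * X_k @ X_k.T + rho * np.eye(r)   # A @ lhs = rhs
        rhs = alpha_n * Y_n @ X_k.T + rho * A_k
        return np.linalg.solve(lhs.T, rhs.T).T

    def update_X(Y_n, A_new, X_k, alpha_n, rho):
        # argmin_X (alpha_n/2)*||Y_n - A_new @ X||_F^2 + (rho/2)*||X - X_k||_F^2
        r = A_new.shape[1]
        lhs = alpha_n * A_new.T @ A_new + rho * np.eye(r)
        rhs = alpha_n * A_new.T @ Y_n + rho * X_k
        return np.linalg.solve(lhs, rhs)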
The Frobenius norm of a matrix, $\|X\|_F = \big( \sum_{i,j} x_{ij}^2 \big)^{1/2}$, is the square root of the sum of the squares of all its elements. If we define $\operatorname{fold}_n(A_n X_n)$ as the tensor obtained by folding the matrix $A_n X_n$ along the $n$-th mode, then the Frobenius norm of the matrix $A_n X_n - Y_{(n)}$ and the Frobenius norm of the tensor $\operatorname{fold}_n(A_n X_n) - \mathcal{Y}$ are equal, so the $\mathcal{Y}$-subproblem is converted into the following form:

$$\min_{\mathcal{Y}} \ \sum_{n=1}^{3} \frac{\alpha_n}{2} \left\| \operatorname{fold}_n(A_n X_n) - \mathcal{Y} \right\|_F^2 + \lambda\,\Phi(\mathcal{Y}) + \iota_S(\mathcal{Y}) + \frac{\rho}{2} \left\| \mathcal{Y} - \mathcal{Y}^k \right\|_F^2.$$
The objective function is further converted to the following form:

$$\min_{\mathcal{Y}} \ \frac{1+\rho}{2} \left\| \mathcal{Y} - \frac{\sum_{n=1}^{3} \alpha_n \operatorname{fold}_n(A_n X_n) + \rho\,\mathcal{Y}^k}{1+\rho} \right\|_F^2 + \lambda\,\Phi(\mathcal{Y}) + \iota_S(\mathcal{Y}),$$

where $\operatorname{fold}_n(A_n X_n)$ is the tensor folded from the matrix $A_n X_n$ along the $n$-th mode. Now let $\mathcal{Z} = \dfrac{\sum_{n=1}^{3} \alpha_n \operatorname{fold}_n(A_n X_n) + \rho\,\mathcal{Y}^k}{1+\rho}$; the $\mathcal{Y}$-subproblem is then rewritten as:

$$\min_{\mathcal{Y}} \ \frac{1+\rho}{2} \left\| \mathcal{Y} - \mathcal{Z} \right\|_F^2 + \lambda\,\Phi(\mathcal{Y}) + \iota_S(\mathcal{Y}).$$
At this point the objective function becomes a standard constrained optimization problem. In fact, $\Phi(\mathcal{Y})$ is the non-local self-similarity regularizer we designed through the plug-and-play framework, so the objective function is solved by (2.10), i.e. the denoising step $\mathcal{D}_{\sigma}(\mathcal{Z})$, and the result is projected onto the feasible region by the projection operator so as to satisfy the constraint. The solution of the $\mathcal{Y}$-subproblem can thus be written in the following form:

$$\mathcal{Y}^{k+1} = \mathcal{P}_{\Omega^c}\left( \mathcal{D}_{\sigma}(\mathcal{Z}) \right) + \mathcal{P}_{\Omega}(\mathcal{T}),$$

where $\mathcal{P}$ is the projection operator, $\mathcal{D}_{\sigma}$ is the plug-and-play operator function, and $\sigma$ is the parameter controlling the strength of the regularization term. Note that in an i.i.d. denoising model $\sigma$ is tied to the noise level, whereas in the plug-and-play framework $\sigma$ is related to the generalized error between the original image and $\mathcal{Z}$. In our model, therefore, $\sigma$ is treated as a tunable parameter in order to obtain good results.
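Putting the pieces together, one outer iteration of the solver might look like the sketch below, reusing the `unfold`/`fold`, `update_A`/`update_X`, and `y_update` helpers sketched earlier; the sweep order is our reading of the block scheme, not a verbatim transcription.

    def nls_lr_step(A, X, y, t_obs, mask, alpha, rho, denoiser, sigma):
        # One BSUM sweep: factor updates for every mode, then the PnP Y-step.
        for n in range(3):
            Y_n = unfold(y, n)
            A[n] = update_A(Y_n, A[n], X[n], alpha[n], rho)
            X[n] = update_X(Y_n, A[n], X[n], alpha[n], rho)
        y = y_update(A, X, y, t_obs, mask, alpha, rho, denoiser, sigma)
        return A, X, y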
(3) Rank-increasing strategy

In the model, we take a very small rank $r = (r_1, r_2, r_3)$ as the initial value. When the relative error of the iteration is less than the set threshold, that is:

$$\frac{\left\| \mathcal{Y}^{k+1} - \mathcal{Y}^{k} \right\|_F}{\left\| \mathcal{Y}^{k} \right\|_F} \le \gamma,$$

the corresponding rank $r_n$ is updated in the $(k+1)$-th iteration to $\min(r_n + \Delta r_n, r_n^{\max})$, where $\Delta r_n$ is a positive integer and $r_n^{\max}$ is the Tucker rank upper bound. When $r_n$ increases in the $(k+1)$-th iteration, $A_n^{k}$ is updated to $[A_n^{k}, \hat{A}_n]$, $X_n^{k}$ is updated to $[X_n^{k}; \hat{X}_n]$, and $r_n$ is updated to $r_n + \Delta r_n$; that is, a random matrix $\hat{A}_n$ with $\Delta r_n$ columns is appended to the column group of $A_n^{k}$, and a random matrix $\hat{X}_n$ with $\Delta r_n$ rows is appended to the row group of $X_n^{k}$. Taking a very small rank as the starting point lets the low-rank factor matrices capture the global information of the target tensor very quickly; as the rank grows, the factor matrices retain more information, and we can preserve more detail of the target tensor. Here the update threshold is $\gamma = 10^{-3}$ for color images and $\gamma = 10^{-2}$ for video and multispectral images.
(4) Experimental results
We assess the performance of the model on three types of tensor data (color images, video, and multispectral images). For cross-comparison we selected five contrast models from mainstream journals and conferences: TMac, MF-TV, TNN, LRTC-TV-II, and SPC-QV. A brief introduction to these models is given in Table 1. In the experiments, our model is referred to as NLS-LR.

Table 1: introduction of some existing tensor completion models.
We use the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM) as the metrics for judging performance. The convergence condition of all methods relies on the relative error (RelCha) between two adjacent iterations, that is: $\mathrm{RelCha} = \left\| \mathcal{Y}^{k+1} - \mathcal{Y}^{k} \right\|_F / \left\| \mathcal{Y}^{k} \right\|_F \le \varepsilon$, where $\varepsilon$ is the iteration convergence threshold. In our experiments, the specific parameters are set as follows: proximal parameter $\rho = 0.1$; factor-matrix weights along all modes $\alpha_n = 1/3$ ($n = 1, 2, 3$); convergence threshold $\varepsilon = 3 \times 10^{-4}$; rank-increase step $\Delta r = (5, 5, 5)$ in the rank-increasing strategy; and the value set of the plug-and-play parameter $\sigma$ is $\{5, 10, 15, 20\}$. By using the plug-and-play framework, we can flexibly select different plug-and-play operators for different data $\mathcal{Y}$. The experiment running environment is Windows 10, MATLAB (R2018a), Intel Core i7-8700K 3.70 GHz, 32 GB RAM.
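For reference, a short sketch of the stopping criterion and the PSNR metric as described; the formulas are standard, and the peak value of 255 assumes 8-bit data (our assumption).

    import numpy as np

    def rel_cha(y_new, y_old):
        # Relative change between adjacent iterates (the RelCha criterion).
        return np.linalg.norm(y_new - y_old) / np.linalg.norm(y_old)

    def psnr(x, ref, peak=255.0):
        # Peak signal-to-noise ratio in dB against a reference tensor.
        mse = np.mean((np.asarray(x, float) - np.asarray(ref, float)) ** 2)
        return 10.0 * np.log10(peak ** 2 / mse)

    # Stop the outer loop once rel_cha(y_new, y_old) <= 3e-4 (the eps above).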
We chose 8 color images as the data set: barbara, lena, house, sailboat, tulips, sails, airplane, and pepper. All color images, shown in Fig. 1, are of size 256*256*3. The initial incomplete tensors are obtained by random element-wise sampling, with sampling rates (SR) of 5%, 10%, and 20%. The initial Tucker rank is r0 = (10, 10, 3) and the Tucker rank upper bound is rmax = (125, 125, 3). For color images, we choose CBM3D as the plug-and-play operator function, with the regularization parameter set to σ = 10.
Table 2 presents the numerical results of PSNR, SSIM, and average CPU time (seconds) for NLS-LR and the contrast models. It can be seen that our model consistently obtains the best results numerically in both PSNR and SSIM. To compare the results further, the recovery of four color images (house, barbara, tulips, and pepper) is shown in Fig. 2. By observation, NLS-LR has the best visual quality; for pictures with much detail, the advantage of our result is particularly evident. Compared with TMac, we obtain a huge performance improvement, which shows that our regularization term plays a powerful role. TMac and MF-TV consider only global low-rankness and ignore the correlation along the spectral direction, so it is hard for them to recover tensors well at low sampling rates. SPC-QV and LRTC-TV-II apply smoothness regularization to the tensor and perform well in the experiments, but we found that smoothness-based methods can produce aliasing artifacts, bringing loss of detail and blur. This is exactly where our non-local regularizer plays its role: we can see that NLS-LR maintains the details of the target tensor with a more outstanding effect.
The inpainting results for various types of contaminated pictures are shown in Fig. 3. Table 3 summarizes the experimental results of PSNR, SSIM, and average CPU time (seconds). It can be seen that the model NLS-LR still has the best visual quality and numerical indices. The restored images of NLS-LR differ only slightly from the true images, whereas the restored results of the other models still show traces of the contamination.
We test 5 video data sets, including suzie, news, foreman, and carphone. The test data are of size 144*176*150. The Tucker rank upper bound is rmax = (105, 115, 75). We select VBM3D as the plug-and-play operator function, with regularization parameter σ = 5, and test data randomly sampled at 5%, 10%, and 20%. The PSNR, SSIM, and average CPU times are presented in Table 4. As shown, our model exceeds the contrast models in both PSNR and SSIM, while its running time remains acceptable. Fig. 4 shows one frame of each of the 5 recovered video data sets. It can be seen that, because TMac and MF-TV ignore the correlation information along the time dimension, their methods perform well at high sampling rates but find it hard to recover the videos at low sampling rates. From the visual results, it is apparent that NLS-LR shows the best visual restoration and, in particular, has a good ability to capture the details of moving objects.
Table 2: PSNR, SSIM, and average CPU running time (seconds) of the models TMac, MF-TV, TNN, LRTC-TV-II, SPC-QV, and NLS-LR for completion experiments on different color images at different sampling rates. The best PSNR and SSIM results are shown in bold.
Table 3: PSNR, SSIM, and average CPU running time (seconds) of the models TMac, MF-TV, TNN, LRTC-TV-II, SPC-QV, and NLS-LR for inpainting experiments on pictures contaminated by text, stripes, and scribbles. The best PSNR and SSIM results are shown in bold.
Table 4: PSNR, SSIM, and average CPU running time (minutes) of the models TMac, MF-TV, TNN, LRTC-TV-II, SPC-QV, and NLS-LR for completion experiments on different video data at different sampling rates. The best PSNR and SSIM results are shown in bold.
Next, 3 multispectral data sets are tested; the selected multispectral images are of size 256*256*31. The Tucker rank upper bound is rmax = (185, 185, 5). Data randomly sampled at 5%, 10%, and 20% are chosen for the tests. Table 5 summarizes the experimental values of PSNR, SSIM, and average CPU time (minutes). The experimental results are illustrated in Fig. 5. It can be seen that some methods show color distortion, because they do not take the correlation along the spectral direction into account. At the same time, our model not only maintains the continuity of the spectral direction well but also recovers the details of the target tensor well.
Table 5: PSNR, SSIM, and average CPU running time (minutes) of the models TMac, MF-TV, TNN, LRTC-TV-II, SPC-QV, and NLS-LR for completion experiments on different multispectral data at different sampling rates. The best PSNR and SSIM results are shown in bold.
Parameter analysis:

We study the influence of the proximal parameter $\rho$ and the regularization parameter $\sigma$ on the model, selecting the 10%-randomly-sampled color image barbara as the test picture. We test the experimental results for different values of the proximal parameter $\rho$ and show the PSNR values along the convergence curve of the iteration in Fig. 6. By observation we find that the proximal parameter affects the convergence speed of the iteration and has some influence on the final converged value. We also test the influence of different regularization parameters $\sigma$ on the experimental results, which are shown in Fig. 6. We can see that the regularization parameter $\sigma$ has a crucial influence on the experimental results: it controls the balance of strength between the low-rank term and the non-local self-similarity term. When $\sigma$ is too large, the restored result is over-smoothed, and when $\sigma$ is too small, it is hard for the regularizer to help restore the target tensor. We should therefore choose the regularization parameter $\sigma$ carefully to obtain the best experimental results.
On convergence

Although the plug-and-play framework has been widely proved to be effective, whether it can be written as a convex model with good convergence guarantees is still an unsolved problem perplexing the field. In Fig. 7, we can see from the numerical experiments that the model has a clear convergent tendency.
The invention proposes a novel low-rank tensor completion model that combines the low-rankness of the tensor with its non-local self-similarity. On the one hand, we use the low-rank factor-matrix method to guarantee global low-rankness. On the other hand, using the plug-and-play framework, we design an implicit non-local self-similarity regularizer to promote the recovery of tensor detail, and we design a model solver based on block successive upper-bound minimization. Numerical experiments show that the proposed model NLS-LR has a clear advantage in recovering the structure, contours, and details of the target tensor, and the experimental results demonstrate that it surpasses many existing mainstream methods in visual quality and evaluation metrics.
Although the invention has been described herein with reference to a number of illustrative embodiments, it should be understood that those skilled in the art can devise many other modifications and implementations that fall within the scope and spirit disclosed in this application. More specifically, within the scope of the disclosure, the drawings, and the claims, various variations and modifications can be made to the component parts and/or the layouts of the subject combination. Besides the variations and modifications of the component parts and/or layouts, other uses will also be apparent to those skilled in the art.

Claims (4)

1. A tensor completion method based on non-local self-similarity and low-rank regularization, characterized by comprising the following steps:
S1: establish the tensor model;
S2: optimize the objective function according to the proximal operator and solve the tensor model;
S3: solve iteratively using a rank-increasing strategy.
2. The tensor completion method based on non-local self-similarity and low-rank regularization according to claim 1, characterized in that the specific method of establishing the tensor model in step S1 is:

For a three-dimensional tensor $\mathcal{T} \in \mathbb{R}^{I_1 \times I_2 \times I_3}$, the model is established as

$$\min_{A, X, \mathcal{Y}} \ \sum_{n=1}^{3} \frac{\alpha_n}{2} \left\| Y_{(n)} - A_n X_n \right\|_F^2 + \lambda\,\Phi(\mathcal{Y}) + \iota_S(\mathcal{Y}),$$

where $\alpha_n$ are nonnegative weights satisfying $\sum_{n=1}^{3} \alpha_n = 1$; $Y_{(n)}$ denotes the unfolding matrix of the tensor $\mathcal{Y}$ along the $n$-th mode; $A = (A_1, A_2, A_3)$ and $X = (X_1, X_2, X_3)$ are the low-rank factor matrices along the different modes; $\lambda$ is the regularization parameter; $\Phi(\mathcal{Y})$ is the non-local self-similarity regularizer; and $\iota_S$ is the indicator function satisfying

$$\iota_S(\mathcal{Y}) = \begin{cases} 0, & \mathcal{P}_{\Omega}(\mathcal{Y}) = \mathcal{P}_{\Omega}(\mathcal{T}), \\ +\infty, & \text{otherwise.} \end{cases}$$
3. The tensor completion method based on non-local self-similarity and low-rank regularization according to claim 1, characterized in that the method of optimizing the objective function according to the proximal operator and solving the tensor model in step S2 is:

The objective function is optimized block by block according to the proximal operator:

$$\begin{aligned} A^{k+1} &= \arg\min_{A} \ f(A, X^k, \mathcal{Y}^k) + \frac{\rho}{2}\left\| A - A^k \right\|_F^2, \\ X^{k+1} &= \arg\min_{X} \ f(A^{k+1}, X, \mathcal{Y}^k) + \frac{\rho}{2}\left\| X - X^k \right\|_F^2, \\ \mathcal{Y}^{k+1} &= \arg\min_{\mathcal{Y}} \ f(A^{k+1}, X^{k+1}, \mathcal{Y}) + \frac{\rho}{2}\left\| \mathcal{Y} - \mathcal{Y}^k \right\|_F^2, \end{aligned}$$

where $f(A, X, \mathcal{Y})$ denotes the objective of the model of claim 2 and $\rho$ is the proximal parameter;

By using block successive upper-bound minimization, the solution proceeds as follows:

For the subproblems of $X_n$ and $A_n$: the subproblems are differentiable with closed-form solutions, and the problem is converted into how to solve the $\mathcal{Y}$-subproblem, i.e.

$$\min_{\mathcal{Y}} \ \sum_{n=1}^{3} \frac{\alpha_n}{2} \left\| Y_{(n)} - A_n X_n \right\|_F^2 + \lambda\,\Phi(\mathcal{Y}) + \iota_S(\mathcal{Y}) + \frac{\rho}{2} \left\| \mathcal{Y} - \mathcal{Y}^k \right\|_F^2;$$

the Frobenius norm of a matrix, $\|X\|_F = \big( \sum_{i,j} x_{ij}^2 \big)^{1/2}$, is the square root of the sum of the squares of all its elements; if $\operatorname{fold}_n(A_n X_n)$ is defined as the tensor folded from the matrix $A_n X_n$ along the $n$-th mode, then the Frobenius norm of the matrix $A_n X_n - Y_{(n)}$ and the Frobenius norm of the tensor $\operatorname{fold}_n(A_n X_n) - \mathcal{Y}$ are equal, and the $\mathcal{Y}$-subproblem is converted into the following form:

$$\min_{\mathcal{Y}} \ \sum_{n=1}^{3} \frac{\alpha_n}{2} \left\| \operatorname{fold}_n(A_n X_n) - \mathcal{Y} \right\|_F^2 + \lambda\,\Phi(\mathcal{Y}) + \iota_S(\mathcal{Y}) + \frac{\rho}{2} \left\| \mathcal{Y} - \mathcal{Y}^k \right\|_F^2;$$

the objective function is further converted to the following form:

$$\min_{\mathcal{Y}} \ \frac{1+\rho}{2} \left\| \mathcal{Y} - \frac{\sum_{n=1}^{3} \alpha_n \operatorname{fold}_n(A_n X_n) + \rho\,\mathcal{Y}^k}{1+\rho} \right\|_F^2 + \lambda\,\Phi(\mathcal{Y}) + \iota_S(\mathcal{Y}),$$

where $\operatorname{fold}_n(A_n X_n)$ is the tensor folded from the matrix $A_n X_n$ along the $n$-th mode;

now let $\mathcal{Z} = \dfrac{\sum_{n=1}^{3} \alpha_n \operatorname{fold}_n(A_n X_n) + \rho\,\mathcal{Y}^k}{1+\rho}$; the $\mathcal{Y}$-subproblem is rewritten as:

$$\min_{\mathcal{Y}} \ \frac{1+\rho}{2} \left\| \mathcal{Y} - \mathcal{Z} \right\|_F^2 + \lambda\,\Phi(\mathcal{Y}) + \iota_S(\mathcal{Y});$$

$\Phi(\mathcal{Y})$ is the non-local self-similarity regularizer designed through the plug-and-play framework, so the objective function is solved by the denoising step $\mathcal{D}_{\sigma}(\mathcal{Z})$, and the result is projected onto the feasible region by the projection operator to satisfy the constraint; the solution of the $\mathcal{Y}$-subproblem is thus of the following form:

$$\mathcal{Y}^{k+1} = \mathcal{P}_{\Omega^c}\left( \mathcal{D}_{\sigma}(\mathcal{Z}) \right) + \mathcal{P}_{\Omega}(\mathcal{T}),$$

where $\mathcal{P}$ is the projection operator, $\mathcal{D}_{\sigma}$ is the plug-and-play operator function, and $\sigma$ is the parameter controlling the strength of the regularization term.
4. The tensor completion method based on non-local self-similarity and low-rank regularization according to claim 1, characterized in that the method of iterating with the rank-increasing strategy in step S3 is:

With the rank $r = (r_1, r_2, r_3)$ as the initial value, when the relative error of the iteration is less than the set threshold, that is:

$$\frac{\left\| \mathcal{Y}^{k+1} - \mathcal{Y}^{k} \right\|_F}{\left\| \mathcal{Y}^{k} \right\|_F} \le \gamma,$$

the corresponding rank $r_n$ is updated in the $(k+1)$-th iteration to $\min(r_n + \Delta r_n, r_n^{\max})$, where $\Delta r_n$ is a positive integer and $r_n^{\max}$ is the Tucker rank upper bound; when $r_n$ increases in the $(k+1)$-th iteration, $A_n^{k}$ is updated to $[A_n^{k}, \hat{A}_n]$, $X_n^{k}$ is updated to $[X_n^{k}; \hat{X}_n]$, and $r_n$ is updated to $r_n + \Delta r_n$; that is, a random matrix $\hat{A}_n$ with $\Delta r_n$ columns is appended to the column group of $A_n^{k}$, and a random matrix $\hat{X}_n$ with $\Delta r_n$ rows is appended to the row group of $X_n^{k}$.
CN201910379660.8A 2019-05-05 2019-05-05 Tensor completion method based on non-local self-similarity and low-rank regularization Pending CN110223243A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910379660.8A CN110223243A (en) 2019-05-05 2019-05-05 Tensor completion method based on non-local self-similarity and low-rank regularization

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910379660.8A CN110223243A (en) 2019-05-05 2019-05-05 Tensor completion method based on non-local self-similarity and low-rank regularization

Publications (1)

Publication Number Publication Date
CN110223243A true CN110223243A (en) 2019-09-10

Family

ID=67820873

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910379660.8A Pending CN110223243A (en) 2019-05-05 2019-05-05 Tensor completion method based on non-local self-similarity and low-rank regularization

Country Status (1)

Country Link
CN (1) CN110223243A (en)


Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111598798A (en) * 2020-04-27 2020-08-28 浙江工业大学 Image restoration method based on low-rank tensor chain decomposition
CN111598798B (en) * 2020-04-27 2023-09-05 浙江工业大学 Image restoration method based on low-rank tensor chain decomposition
WO2022068321A1 (en) * 2020-09-29 2022-04-07 International Business Machines Corporation Video frame synthesis using tensor neural networks
US11553139B2 (en) 2020-09-29 2023-01-10 International Business Machines Corporation Video frame synthesis using tensor neural networks
GB2614212A (en) * 2020-09-29 2023-06-28 Ibm Video frame synthesis using tensor neural networks
GB2614212B (en) * 2020-09-29 2024-02-07 Ibm Video frame synthesis using tensor neural networks
CN114119426A (en) * 2022-01-26 2022-03-01 之江实验室 Image reconstruction method and device based on non-local low-rank transform domain and fully-connected tensor decomposition

Similar Documents

Publication Title
Huang et al. Bidirectional recurrent convolutional networks for multi-frame super-resolution
Lee et al. Local texture estimator for implicit representation function
Oh et al. Learning-based video motion magnification
CN106709881B Hyperspectral image denoising method based on non-convex low-rank matrix decomposition
Zhang et al. Image restoration: From sparse and low-rank priors to deep priors [lecture notes]
Huang et al. Deep hyperspectral image fusion network with iterative spatio-spectral regularization
Zhang et al. Double low-rank matrix decomposition for hyperspectral image denoising and destriping
CN108961180B (en) Infrared image enhancement method and system
CN110223243A Tensor completion method based on non-local self-similarity and low-rank regularization
CN108288256B (en) Multispectral mosaic image restoration method
CN103093444A (en) Image super-resolution reconstruction method based on self-similarity and structural information constraint
Ye et al. CSformer: Bridging convolution and transformer for compressive sensing
CN111179196B (en) Multi-resolution depth network image highlight removing method based on divide-and-conquer
Ding et al. Tensor train rank minimization with nonlocal self-similarity for tensor completion
CN111932452B (en) Infrared image convolution neural network super-resolution method based on visible image enhancement
Huang et al. Deep gaussian scale mixture prior for image reconstruction
Lee et al. Learning local implicit fourier representation for image warping
Ma et al. Robust locally weighted regression for superresolution enhancement of multi-angle remote sensing imagery
Grm et al. Face hallucination revisited: An exploratory study on dataset bias
CN112598604A (en) Blind face restoration method and system
CN117333359A (en) Mountain-water painting image super-resolution reconstruction method based on separable convolution network
CN106447667B Visual saliency detection method based on self-learned features and low-rank matrix recovery
CN112200752A (en) Multi-frame image deblurring system and method based on ER network
CN117422619A (en) Training method of image reconstruction model, image reconstruction method, device and equipment
Hu et al. A spatial constraint and deep learning based hyperspectral image super-resolution method

Legal Events

Date Code Title Description
PB01 Publication
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20190910
