CN102842115B - Compressed sensing image super-resolution reconstruction method based on dual dictionary learning - Google Patents

Compressed sensing image super-resolution reconstruction method based on dual dictionary learning

Info

Publication number
CN102842115B
Authority
CN
China
Prior art keywords
image
value
represent
matrix
vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201210184626.3A
Other languages
Chinese (zh)
Other versions
CN102842115A (en)
Inventor
王好贤
张勇
毛兴鹏
黄建文
牛静
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao bri Futian intelligent door and window Technology Co., Ltd.
Original Assignee
Harbin Institute of Technology Weihai
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Institute of Technology Weihai
Priority to CN201210184626.3A
Publication of CN102842115A
Application granted
Publication of CN102842115B
Legal status: Active
Anticipated expiration


Landscapes

  • Image Processing (AREA)

Abstract

The present invention relates to a compressed sensing image super-resolution reconstruction method based on dual dictionary learning. It comprises the following steps: training of the redundant dictionary and encoder dictionary parameters, training of the autoregressive model weighting parameters, and super-resolution reconstruction of a single low-resolution frame using the trained redundant dictionary, encoder dictionary and autoregressive model weighting parameters. The algorithm reconstructs effectively and is applicable to medical imaging, satellite remote sensing and telemetry, military reconnaissance and positioning, urban security, and many other fields.

Description

Compressed sensing image super-resolution reconstruction method based on dual dictionary learning
Technical field:
The invention belongs to the field of digital image processing; specifically, it is an image super-resolution reconstruction method for high-fidelity magnification of a single image.
Background technology:
With the spread of security equipment, surveillance devices are used ever more widely and play an increasingly important role in daily life. Because the monitored scene is often far from the camera or covers a large field of view, few pixels are assigned to each object in the scene, so details are lost and the image becomes blurred, which hinders subsequent processing and target recognition. Super-resolution reconstruction of the observed scene is therefore particularly important. Super-resolution reconstruction technology is widely used in medical imaging, satellite remote sensing and telemetry, military reconnaissance and positioning, urban security, and other fields.
At present there are three main classes of reconstruction algorithms:
The first class consists of traditional multi-frame algorithms, which reconstruct by fusing the complementary information of several images of the same scene. These algorithms are all based on the same model and solve an ill-conditioned problem by choosing different constraint terms; they place high demands on the input and require very accurate parameter estimation, so their application is limited.
Interpolation algorithms are the simplest and fastest class, including bilinear interpolation, bicubic interpolation and cubic spline interpolation. Because they ignore the intrinsic structure (edges) of the image, they easily blur the result, so they are generally used only as pre-processing for other super-resolution reconstruction algorithms.
Algorithms based on machine learning first learn the information needed for super-resolution reconstruction from an image library and then use the learned information to reconstruct. Traditional example-based learning methods can achieve reconstruction at large magnification factors, but they require the input image and the images in the library to belong to the same class, so they are generally applied to special cases such as face reconstruction. The method of sparse representation over two redundant dictionaries obtains good results when the input image quality is high, but its reconstruction capability is insufficient for blurrier or noisier images.
The present invention uses a single redundant dictionary for sparse representation and solves the reconstruction equation for the target super-resolution image by iterative shrinkage, giving good reconstruction results.
Summary of the invention:
The object of this invention is to provide an image super-resolution reconstruction method that preserves the sharpness of the image while enlarging it.
The technical solution used in the present invention is as follows:
One, redundant dictionary and encoder dictionary parameter training:
Define Φ = [φ_1, φ_2, ..., φ_m] ∈ R^(n×m) as the redundant dictionary and Ψ = [ψ_1, ψ_2, ..., ψ_n] ∈ R^(m×n) as the encoder dictionary, where m, n are positive integers. The double dictionary refers to the redundant dictionary and the encoder dictionary together.
The first step: read the super-resolution images in the image library, convert the super-resolution colour images to grey-level images, and divide them into sample blocks of size √n × √n. Scan each block from left to right and top to bottom to form a column vector; use s_i ∈ R^n, i = 1, 2, ..., Q to denote the column vector formed from each block, where Q is the total number of column vectors;
Second step: compute the variance Var(s_i) of each s_i and keep only the vectors with Var(s_i) greater than a threshold TH, finally obtaining the training sample set S = [s_1, s_2, ..., s_M];
3rd step: solve formula (1) by an iterative method for the redundant dictionary Φ and the encoder dictionary Ψ, where Θ is the sparse coefficient matrix and λ, η are constants; ||·||_2 denotes the l_2 norm and |·|_1 the l_1 norm:
{Φ, Ψ, Θ} = argmin_{Φ,Ψ,Θ} { ||S - ΦΘ||_2^2 + η||Θ - ΨS||_2^2 + λ|Θ|_1 }    (1)
(1) Initialise the redundant dictionary Φ with a Gaussian random matrix, the encoder dictionary Ψ with the identity matrix, and the sparse coefficient matrix Θ with the all-zero matrix; set the iteration count k = 0, the maximum number of iterations Max_Iter, and the iteration convergence control factor ε;
(2) Define (T_ζ[O])_{i,j} = sign(O_{i,j}) · max{|O_{i,j}| - ζ, 0} as the thresholding operator, where ζ is the threshold variable, O is the matrix operand, O_{i,j} is the element of O with subscript (i, j), and sign(·) is the sign operator. Take σ_Θ = 2||Φ^T Φ + ηI||_F, where ||·||_F denotes the Frobenius norm and I is the identity matrix, and update the current Θ with formula (2):
Θ^(k+1) = T_{λ/(2σ_Θ)}[ (1 - η/σ_Θ)Θ^(k) + (1/σ_Θ)(Φ^T(S - ΦΘ^(k)) + ηΨS) ]    (2)
where Θ^(k+1), Θ^(k) denote the values of Θ at iterations k+1 and k respectively, and Φ^T is the transpose of Φ.
(3) Define the operation π(d) = d/max(1, ||d||), where d is a vector; this operation projects the vector onto the unit ball. Define σ_Φ = 2||ΘΘ^T||_F and update the current Φ with formula (3):
Φ^(k+1) = π( Φ^(k) + (1/σ_Φ)(S - Φ^(k)Θ)Θ^T )    (3)
Here π(·) applies the unit-length projection to each column of Φ; Φ^(k+1), Φ^(k) denote the values of Φ at iterations k+1 and k respectively, and Θ^T is the transpose of Θ.
(4) Compute σ_Ψ = 2||SS^T||_F and update the current Ψ with formula (4):
Ψ^(k+1) = π( Ψ^(k) + (1/σ_Ψ)(Θ - Ψ^(k)S)S^T )    (4)
Here π(·) applies the unit-length projection to each row of Ψ; Ψ^(k+1), Ψ^(k) denote the values of Ψ at iterations k+1 and k respectively, and S^T is the transpose of S.
(5) Increment the iteration count: k = k + 1;
(6) Repeat (2) to (5) until the maximum number of iterations is reached or the value of formula (1) changes by a sufficiently small amount, then stop the iteration and output Φ and Ψ.
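The alternating updates of formulas (2)-(4) can be summarised in a short NumPy sketch. The sketch below follows the symbols above (S, Φ, Ψ, Θ, η, λ, σ_Θ, σ_Φ, σ_Ψ); normalising the columns of Φ and the rows of Ψ, and omitting the convergence test on formula (1), are simplifying assumptions of this illustration rather than the patented implementation.

```python
import numpy as np

def soft_threshold(O, zeta):
    """(T_zeta[O])_{i,j} = sign(O_{i,j}) * max(|O_{i,j}| - zeta, 0)."""
    return np.sign(O) * np.maximum(np.abs(O) - zeta, 0.0)

def project_unit(D, axis):
    """pi(d) = d / max(1, ||d||) applied to the vectors along `axis`."""
    norms = np.maximum(1.0, np.linalg.norm(D, axis=axis, keepdims=True))
    return D / norms

def train_double_dictionary(S, m=512, eta=1.0, lam=0.1, max_iter=1000):
    """S: n x M training sample matrix; returns redundant dictionary Phi (n x m),
    encoder dictionary Psi (m x n) and sparse coefficients Theta (m x M)."""
    n, M = S.shape
    Phi = np.random.randn(n, m)          # Gaussian random initialisation
    Psi = np.eye(m, n)                   # identity-like initialisation
    Theta = np.zeros((m, M))             # all-zero initialisation
    for _ in range(max_iter):
        # formula (2): shrinkage update of the sparse coefficients
        sigma_t = 2.0 * np.linalg.norm(Phi.T @ Phi + eta * np.eye(m), 'fro')
        Theta = soft_threshold((1.0 - eta / sigma_t) * Theta
                               + (Phi.T @ (S - Phi @ Theta) + eta * Psi @ S) / sigma_t,
                               lam / (2.0 * sigma_t))
        # formula (3): projected update of the redundant dictionary
        sigma_p = 2.0 * np.linalg.norm(Theta @ Theta.T, 'fro')
        Phi = project_unit(Phi + (S - Phi @ Theta) @ Theta.T / sigma_p, axis=0)
        # formula (4): projected update of the encoder dictionary
        sigma_e = 2.0 * np.linalg.norm(S @ S.T, 'fro')
        Psi = project_unit(Psi + (Theta - Psi @ S) @ S.T / sigma_e, axis=1)
    return Phi, Psi, Theta
```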
Two, autoregressive model weighting parameter training:
The first step: read the super-resolution images in the image library and convert them to grey-level images. Convolve each image with a low-frequency Gaussian convolution kernel to obtain a low-frequency image, then subtract the low-frequency image from the original image; the difference reflects the high-frequency information of the image (called the high-frequency image here). Divide the high-frequency image into blocks of size √n × √n, find the block at the position corresponding to s_i in the super-resolution image, and scan it from left to right and top to bottom to form a column vector, denoted s_i^h; the set of high-frequency block vectors corresponding to all of S = [s_1, s_2, ..., s_M] is S^h = [s_1^h, s_2^h, ..., s_M^h].
Second step: use the K-means clustering algorithm to divide S^h into K classes {C_1, C_2, ..., C_K}. Let m_k be the number of vectors in C_k and use formula (5) to compute the centroid μ_k of each class, k = 1, 2, ..., K. According to the classification of S^h, S = [s_1, s_2, ..., s_M] is divided into the same K classes, written {S_1, S_2, ..., S_K}:
μ_k = (1/m_k) Σ_{i=1}^{m_k} s_i^h,   s_i^h ∈ C_k    (5)
3rd step: let s'_i denote the centre pixel value of s_i and q_i the vector of the neighbourhood pixel values of s'_i in s_i. Within each class, compute a_k by a least-squares fit of the centre pixel to its neighbourhood, s'_i ≈ q_i^T a_k; carrying out the same procedure for all classes gives the autoregressive model weighting parameter set {a_1, a_2, ..., a_K};
4th step: output the autoregressive model weighting parameter set {a_1, a_2, ..., a_K} and the centroid of each class {μ_1, μ_2, ..., μ_K};
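As a rough illustration of this training stage, the sketch below clusters the high-frequency patch vectors with scikit-learn's KMeans and fits one 8-element autoregressive weight vector per class by least squares, following the 7 × 7 patch and 3 × 3 neighbourhood sizes given in the embodiment. Row-major ordering of the patch vectors is an assumption of the sketch.

```python
import numpy as np
from sklearn.cluster import KMeans

def centre_and_neighbours(patches, size=7):
    """Split size*size patch vectors (one per column, row-major order assumed)
    into the centre pixel s'_i and the 8-neighbour vector q_i."""
    c = size // 2
    ring = [r * size + col for r in (c - 1, c, c + 1) for col in (c - 1, c, c + 1)]
    ring.remove(c * size + c)                     # drop the centre pixel itself
    return patches[c * size + c, :], patches[ring, :]

def train_ar_weights(S, S_h, K=200):
    """S: n x M original patch matrix; S_h: n x M matching high-frequency patches."""
    km = KMeans(n_clusters=K, n_init=10).fit(S_h.T)
    centroids = km.cluster_centers_               # the centroids mu_k
    weights = []
    for k in range(K):
        members = S[:, km.labels_ == k]           # class S_k of original patches
        centre, neigh = centre_and_neighbours(members)
        a_k, *_ = np.linalg.lstsq(neigh.T, centre, rcond=None)   # s'_i ~ q_i^T a_k
        weights.append(a_k)                       # 8 x 1 AR weight vector a_k
    return weights, centroids
```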
Three, image super-resolution reconstruction
The redundant dictionary Φ, the encoder dictionary Ψ, the autoregressive model weighting parameters {a_1, a_2, ..., a_K} and the class centroids {μ_1, μ_2, ..., μ_K} are obtained by training in advance; once trained, they can be used from then on.
The first step: read in the low-resolution image Y to be reconstructed. If it is a grey-level image, interpolate Y to the required size with bicubic interpolation and denote the result X^(0); if it is an RGB (three-colour) image, transform it to the YCbCr colour space, interpolate the Y component to the required size and denote it X^(0). If X^(0) ∈ R^(N''×1), define A and B as coefficient matrices of dimension N'' × N'';
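A minimal sketch of this first step is given below, using OpenCV as one possible implementation: the grey image, or the luminance channel of a colour image, is upsampled by bicubic interpolation to form the initial estimate X^(0). The function and parameter names are illustrative only (note that OpenCV's colour conversion uses the YCrCb channel order).

```python
import cv2

def initial_estimate(Y_img, factor=3):
    """Return the bicubic initial estimate X^(0) and, for colour input, the chroma planes."""
    if Y_img.ndim == 2:                                        # grey-level input
        X0 = cv2.resize(Y_img, None, fx=factor, fy=factor,
                        interpolation=cv2.INTER_CUBIC)
        return X0, None
    ycrcb = cv2.cvtColor(Y_img, cv2.COLOR_BGR2YCrCb)           # colour input: reconstruct luminance only
    y, cr, cb = cv2.split(ycrcb)
    X0 = cv2.resize(y, None, fx=factor, fy=factor,
                    interpolation=cv2.INTER_CUBIC)
    return X0, (cr, cb)                                        # chroma is interpolated in the final step
```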
Second step: compute the values of the coefficient matrices A and B, in the following sub-steps:
(1) Divide X^(0) into image blocks of size √n × √n, denoted x_i, i = 1, 2, ..., N, where N is the number of image blocks; adjacent blocks overlap. Convolve X^(0) with a low-frequency Gaussian convolution kernel to obtain a low-frequency image, then subtract the low-frequency image from X^(0); the difference reflects the high-frequency information of X^(0) (called the high-frequency image here). Divide the high-frequency image into blocks of size √n × √n, find the block at the position corresponding to x_i, and scan it from left to right and top to bottom to form a column vector, denoted x_i^h;
(2) Compute the Euclidean distances between x_i^h and all the centroids {μ_1, μ_2, ..., μ_K}, find the closest one and denote its index k_i, and take the weighting parameter with index k_i in {a_1, a_2, ..., a_K} as the autoregressive model weighting parameter a_{k_i} of x_i (a sketch of this selection is given after sub-step (4));
(3) Let x'_i denote the centre pixel value of x_i and χ_i the vector of the neighbourhood pixel values of x'_i in x_i; use formula (6) to compute the matrix A, where i, j are coordinate variables taking positive integer values in the range 1 to N''.
(4) Among the blocks into which X^(0) is divided, find L similar blocks for each x_i, indexed by the variable l = 1, 2, ..., L, where L is a positive integer giving the number of similar blocks. Compute the weight value of each similar block, in which a normalising factor and a constant h appear; form the weight vector and the set of centre pixel values of all the similar blocks, then use formula (7) to compute the matrix B, where i, l are coordinate variables taking positive integer values in the range 1 to N''.
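The centroid matching in sub-step (2) amounts to a nearest-neighbour search; the short sketch below illustrates it, taking `centroids` and `weights` to be the outputs of the training sketch in part two (illustrative names, not those of the patent).

```python
import numpy as np

def select_ar_weights(x_h, centroids, weights):
    """x_h: high-frequency patch vector x_i^h; returns the class index k_i and a_{k_i}."""
    dists = np.linalg.norm(centroids - x_h, axis=1)   # Euclidean distance to every mu_k
    k_i = int(np.argmin(dists))                        # index of the closest centroid
    return k_i, weights[k_i]
```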
3rd step: set the constants γ_1, γ_2, γ_3, P, e, Mid_Iter and the maximum number of iterations Max_Iter used in the formulas below, set the constant matrix τ, and initialise the iteration count k = 0;
4th step: let I denote the identity matrix, of the same size as the matrices A and B; let D denote the down-sampling matrix, set according to the reconstruction factor; and let H denote the Gaussian blur matrix, which is the matrix form of the Gaussian convolution kernel and, set according to that kernel, is a circulant matrix. Compute formula (8):
X^(k+1/2) = X^(k) + γ_3[(DH)^T Y - (DH)^T DH X^(k)] - γ_1(I - A)^T(I - A)X^(k) - γ_2(I - B)^T(I - B)X^(k)    (8)
X^(k+1/2), X^(k) denote the reconstruction results at iterations k+1/2 and k respectively.
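For illustration, the gradient half-step of formula (8) can be realised with DH applied as an operator (Gaussian blur followed by decimation) and A, B stored as sparse matrices; the sketch below, using SciPy, is an interpretation under those assumptions rather than the explicit-matrix form of the patent.

```python
import numpy as np
from scipy import sparse
from scipy.ndimage import gaussian_filter

def blur_downsample(X, sigma, factor):
    """DH X: Gaussian blur then keep every factor-th pixel."""
    return gaussian_filter(X, sigma)[::factor, ::factor]

def upsample_blur(R, sigma, factor, shape):
    """(DH)^T R: zero-filled upsampling followed by Gaussian blur (adjoint of the above)."""
    Z = np.zeros(shape)
    Z[::factor, ::factor] = R
    return gaussian_filter(Z, sigma)

def gradient_half_step(X, Y, A, B, g1, g2, g3, sigma, factor):
    """One evaluation of formula (8); A and B are sparse N'' x N'' matrices."""
    data_term = upsample_blur(Y - blur_downsample(X, sigma, factor), sigma, factor, X.shape)
    x = X.ravel()
    I = sparse.identity(A.shape[0], format='csr')
    reg_a = (I - A).T @ ((I - A) @ x)        # gamma_1 autoregressive regulariser
    reg_b = (I - B).T @ ((I - B) @ x)        # gamma_2 non-local regulariser
    return X + g3 * data_term - (g1 * reg_a + g2 * reg_b).reshape(X.shape)
```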
5th step: let R_i denote the operator that cuts x_i out of X, that is, x_i = R_i X. If the iteration count k is less than Mid_Iter, compute the sparse coefficients Θ^(k+1/2) with formula (9), Θ^(k+1/2) = [α_1, α_2, ..., α_N]; otherwise compute each α_i with formula (10):
Θ^(k+1/2) = [ΨR_1X^(k+1/2), ΨR_2X^(k+1/2), ..., ΨR_N X^(k+1/2)]    (9)
α_i = argmin_α { ||x_i - Φα||_2^2 + γ_4|α|_1 }    (10)
where γ_4 is a constant; formula (10) is solved with the feature-sign search algorithm, whose detailed procedure is as follows:
(a) Define a vector θ ∈ R^(m×1), where θ_j denotes the j-th element of θ and θ_j ∈ {-1, 0, 1}; define the active set β = { } and initialise it to the empty set;
(b) Among the elements of α that are 0, compute ∂||x_i - Φα||^2/∂α_j (α_j denotes the j-th element of α) and find the index j with the largest absolute derivative; if ∂||x_i - Φα||^2/∂α_j > γ_4, set θ_j = -1 and β = β ∪ {j}; if ∂||x_i - Φα||^2/∂α_j < -γ_4, set θ_j = 1 and β = β ∪ {j};
(c) Select the columns of Φ whose indices are in β to form Φ̄, and the elements of α and θ whose indices are in β to form ᾱ and θ̄ respectively; compute ᾱ_new = (Φ̄^T Φ̄)^(-1)(Φ̄^T x_i - γ_4 θ̄/2). Then compare the elements of ᾱ_new with those of ᾱ in the corresponding positions one by one to see which elements change sign (a sign change means going from positive to negative or from negative to positive). Suppose the values at Num positions change sign; set the element of ᾱ_new at each sign-changed position to 0 in turn (each such operation acts on only one position, the values at the other positions remaining unchanged) and let ᾱ_new denote the altered vector, so that there are Num candidate values of ᾱ_new. Substitute the Num candidate values into the objective in turn, find the value that makes the objective smallest, assign it to the elements of α whose indices are in β, and make the same changes to the corresponding elements of θ̄; remove from β the indices whose elements in α have become 0, and update θ = sign(α);
(d) Check whether the non-zero elements of α satisfy ∂||x_i - Φα||^2/∂α_j + γ_4 sign(α_j) = 0 for all α_j ≠ 0; if not, go back to step (c). Otherwise check whether the zero elements of α satisfy |∂||x_i - Φα||^2/∂α_j| ≤ γ_4; if not, go back to step (b); otherwise return the value of α (that is, α_i = α).
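Since formula (10) is a standard l1-regularised least-squares (Lasso) problem, a generic solver can stand in for the feature-sign search when illustrating the method. The sketch below uses scikit-learn's coordinate-descent Lasso; its objective is (1/(2n))||x_i - Φα||_2^2 + alpha·|α|_1, so alpha = γ_4/(2n) reproduces formula (10) up to solver tolerance. This is an illustrative substitute, not the feature-sign procedure described above.

```python
import numpy as np
from sklearn.linear_model import Lasso

def sparse_code_patch(x_i, Phi, gamma4=0.15):
    """Solve alpha_i = argmin ||x_i - Phi a||_2^2 + gamma4 |a|_1 for one patch."""
    n = x_i.shape[0]
    solver = Lasso(alpha=gamma4 / (2.0 * n), fit_intercept=False, max_iter=5000)
    solver.fit(Phi, x_i)
    return solver.coef_            # alpha_i of formula (10)
```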
6th step: compute Θ^(k+1) with formula (11), where the operation (T_τ[Z])_{i,j} = sign(Z_{i,j}) · max{|Z_{i,j}| - τ_{i,j}, 0} and Z is the operand of the operation:
Θ^(k+1) = T_τ[Θ^(k+1/2)]    (11)
7th step: use formula (12) to compute X^(k+1):
X^(k+1) = ( Σ_{i=1}^{N} R_i^T R_i )^(-1) Σ_{i=1}^{N} R_i^T Φα_i    (12)
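Formula (12) simply puts every reconstructed patch Φα_i back in its place and averages the overlapping pixels, since Σ R_i^T R_i is a diagonal matrix of overlap counts. The sketch below assumes 7 × 7 patches, row-major patch vectors and known top-left patch positions; these details are illustrative.

```python
import numpy as np

def aggregate_patches(alphas, Phi, image_shape, positions, patch=7):
    """X^(k+1) = (sum_i R_i^T R_i)^(-1) sum_i R_i^T Phi alpha_i."""
    acc = np.zeros(image_shape)    # accumulates sum_i R_i^T Phi alpha_i
    cnt = np.zeros(image_shape)    # diagonal of sum_i R_i^T R_i (overlap counts)
    for a, (r, c) in zip(alphas, positions):
        block = (Phi @ a).reshape(patch, patch)
        acc[r:r + patch, c:c + patch] += block
        cnt[r:r + patch, c:c + patch] += 1.0
    return acc / np.maximum(cnt, 1.0)
```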
8th step: if mod(k, P) == 0 and k ≥ Mid_Iter, replace X^(0) in the second step by X^(k+1), recompute A and B, and use formula (13) to compute τ_{i,j}:
τ_{i,j} = cσ_n^2 / (σ_{i,j} + δ)    (13)
where c is a constant, σ_n is the standard deviation of the image noise and δ is a small constant. σ_{i,j} is computed as follows: using X^(k+1), extract the blocks and find the blocks similar to x_i; compute the corresponding vector of each block similar to x_i, take the j-th element of each of them for l = 1, 2, ..., L, and the standard deviation of these numbers is σ_{i,j}.
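The adaptive threshold of formula (13) can be computed per coefficient position from the spread of the similar-patch vectors; the sketch below assumes those vectors are already stacked as an L × m array, which is one possible reading of the σ_{i,j} description above.

```python
import numpy as np

def adaptive_tau(similar_vectors, c=0.3, sigma_n=1.0, delta=0.35):
    """similar_vectors: L x m array of the vectors of the L patches similar to patch i;
    returns the row tau_{i, :} of the threshold matrix of formula (13)."""
    sigma_ij = similar_vectors.std(axis=0)            # one sigma_{i,j} per position j
    return c * sigma_n ** 2 / (sigma_ij + delta)
```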
9th step: increment the iteration count: k = k + 1;
Tenth step: check whether the relative change between X^(k+1) and X^(k) is smaller than e, or k ≥ Max_Iter; if either condition holds, stop the iteration and return the super-resolution image X; otherwise repeat the 4th step to the 10th step;
11th step: if the input image is a grey-level image, output X directly; if it is a colour image, interpolate the CbCr components to the same size as X, convert the luminance X and the chrominance CbCr to RGB space, and output the reconstructed image.
The present invention is an image super-resolution reconstruction method; compared with the prior art, its advantage is better image reconstruction quality.
To verify the effectiveness of the reconstruction, the results of the present method are compared with bicubic interpolation, with the sparse-representation method proposed by Jianchao Yang [Jianchao Yang, John Wright, Thomas S. Huang, Yi Ma. Image super-resolution via sparse representation [J]. IEEE Transactions on Image Processing, 2010, 19(11): 2861-2873.], and with the ASDS reconstruction algorithm proposed by Weisheng Dong [Image Deblurring and Super-Resolution by Adaptive Sparse Domain Selection and Adaptive Regularization [J]. IEEE Transactions on Image Processing, 2011, 20(7): 1838-1857.].
Brief description of the drawings: Figure 1 compares the visual results after reconstruction by the four methods; the images in Fig. 1 have been reduced in size, and the upper left corner shows a cropped part of the original image. The first column of Fig. 1 is the input low-resolution image, the second column is the result of bicubic interpolation, the third column is the result of the Jianchao Yang algorithm (Yang algorithm for short), the fourth column is the result of the Weisheng Dong algorithm (Dong algorithm for short), and the fifth column is the result of the method of the present invention. From the results (upper left corner of the images) it can be seen that bicubic interpolation gives the blurriest result; the edges in the image obtained by the sparse-representation method are not restored sufficiently and its visual quality is inferior to the ASDS method; the image quality after ASDS reconstruction is very high, but it causes partial distortion in the butterfly-wing image; the method of the present invention achieves the best visual quality, with almost no visible distortion.
To evaluate the four super-resolution reconstruction methods objectively, Table 1 gives the peak signal-to-noise ratio (PSNR: Peak Signal to Noise Ratio) and structural similarity index (SSIM: Structure Similarity Index) of the four methods. From Table 1, bicubic interpolation has the lowest PSNR and SSIM values; the Yang algorithm improves PSNR and SSIM considerably over bicubic interpolation; the Dong algorithm and the algorithm of the present invention improve every index the most, and in most cases the PSNR and SSIM of the algorithm of the present invention are slightly better than those of the Dong algorithm.
Table 1
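For reference, the two objective indices used in Table 1 can be computed with standard library routines; the sketch below uses scikit-image and assumes 8-bit images (data_range=255).

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(reference, reconstructed):
    """PSNR and SSIM of a reconstructed image against the ground-truth reference."""
    psnr = peak_signal_noise_ratio(reference, reconstructed, data_range=255)
    ssim = structural_similarity(reference, reconstructed, data_range=255)
    return psnr, ssim
```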
Embodiment:
The technical solution used in the present invention is:
One, redundant dictionary and encoder dictionary parameter training:
Φ = [φ_1, φ_2, ..., φ_m] ∈ R^(n×m) is the redundant dictionary and Ψ = [ψ_1, ψ_2, ..., ψ_n] ∈ R^(m×n) is the encoder dictionary, where m, n are positive integers, here m = 512 and n = 49; the double dictionary refers to the redundant dictionary and the encoder dictionary.
The first step: read the super-resolution images in the image library, convert them to grey-level images, and divide them into sample blocks of size 7 × 7. Scan each block from left to right and top to bottom to form a column vector; use s_i ∈ R^n, i = 1, 2, ..., Q to denote the column vector formed from each block, where Q is the total number of column vectors;
Second step: compute the variance Var(s_i) of each s_i and keep only the vectors with Var(s_i) greater than the threshold TH, where TH lies in the range 4.5 to 20, finally obtaining the training sample set S = [s_1, s_2, ..., s_M], with M greater than 120000;
3rd step: solve formula (1) by an iterative method for the redundant dictionary Φ and the encoder dictionary Ψ, where Θ is the sparse coefficient matrix and λ, η are constants; η is taken approximately equal to 1 and λ lies in the range 0.05 to 0.2; ||·||_2 denotes the l_2 norm and |·|_1 the l_1 norm:
function = {Φ, Ψ, Θ} = argmin_{Φ,Ψ,Θ} { ||S - ΦΘ||_2^2 + η||Θ - ΨS||_2^2 + λ|Θ|_1 }    (1)
(1) Initialise the redundant dictionary Φ with a Gaussian random matrix, the encoder dictionary Ψ with the identity matrix, and the sparse coefficient matrix Θ with the all-zero matrix; the iteration count k = 0, the maximum number of iterations Max_Iter is taken as 800 to 1500, and the iteration convergence control factor ε = 10^(-6);
(2) Define (T_ζ[O])_{i,j} = sign(O_{i,j}) · max{|O_{i,j}| - ζ, 0} as the thresholding operator, where ζ is the threshold variable, O is the matrix operand, O_{i,j} is the element of O with subscript (i, j), and sign(·) is the sign operator. Take σ_Θ = 2||Φ^T Φ + ηI||_F, where ||·||_F denotes the Frobenius norm and I is the identity matrix, and update the current Θ with formula (2):
Θ^(k+1) = T_{λ/(2σ_Θ)}[ (1 - η/σ_Θ)Θ^(k) + (1/σ_Θ)(Φ^T(S - ΦΘ^(k)) + ηΨS) ]    (2)
where Θ^(k+1), Θ^(k) denote the values of Θ at iterations k+1 and k respectively, and Φ^T is the transpose of Φ.
(3) Define the operation π(d) = d/max(1, ||d||), where d is a vector; this operation projects the vector onto the unit ball. Define σ_Φ = 2||ΘΘ^T||_F and update the current Φ with formula (3):
Φ^(k+1) = π( Φ^(k) + (1/σ_Φ)(S - Φ^(k)Θ)Θ^T )    (3)
Here π(·) applies the unit-length projection to each column of Φ; Φ^(k+1), Φ^(k) denote the values of Φ at iterations k+1 and k respectively, and Θ^T is the transpose of Θ.
(4) Compute σ_Ψ = 2||SS^T||_F and update the current Ψ with formula (4):
Ψ^(k+1) = π( Ψ^(k) + (1/σ_Ψ)(Θ - Ψ^(k)S)S^T )    (4)
Here π(·) applies the unit-length projection to each row of Ψ; Ψ^(k+1), Ψ^(k) denote the values of Ψ at iterations k+1 and k respectively, and S^T is the transpose of S.
(5) Increment the iteration count: k = k + 1;
(6) Substitute the currently computed Θ, Ψ, Φ and the values computed in the previous iteration into formula (1) and compute the objective value in each case; check whether ||function^(k+1) - function^(k)||_2^2 / ||function^(k)||_2^2 < ε (function^(k+1), function^(k) denote the function values computed at iterations k+1 and k respectively) or k ≥ Max_Iter; if either condition is met, stop the iteration and output Φ and Ψ; otherwise repeat (2) to (5).
Two, autoregressive model weighting parameter training:
The first step: read the super-resolution images in the image library, convert them to grey-level images and convolve each with a low-frequency Gaussian convolution kernel (kernel size 7 × 7, standard deviation 1.6) to obtain a low-frequency image; then subtract the low-frequency image from the original image, the difference reflecting the high-frequency information of the image (called the high-frequency image here). Divide the high-frequency image into blocks of size 7 × 7 (n = 49), find the block at the position corresponding to s_i in the super-resolution image, and scan it from left to right and top to bottom to form a column vector, denoted s_i^h; the set of high-frequency block vectors corresponding to all of S = [s_1, s_2, ..., s_M] is S^h = [s_1^h, s_2^h, ..., s_M^h].
Second step: use the K-means clustering algorithm to divide S^h into K classes {C_1, C_2, ..., C_K}, with K = 200. Let m_k be the number of vectors in C_k and use formula (5) to compute the centroid μ_k of each class, k = 1, 2, ..., K. According to the classification of S^h, S = [s_1, s_2, ..., s_M] is divided into the same K classes, written {S_1, S_2, ..., S_K}:
μ_k = (1/m_k) Σ_{i=1}^{m_k} s_i^h,   s_i^h ∈ C_k    (5)
3rd step: let s'_i denote the centre pixel value of s_i and q_i the vector of the neighbourhood pixel values of s'_i in s_i; the neighbourhood size is 3 × 3 (including the centre pixel; q_i is the column vector of 8 elements formed after removing the centre pixel). Within each class, compute a_k by a least-squares fit of the centre pixel to its neighbourhood, s'_i ≈ q_i^T a_k; a_k is an 8 × 1 vector. Carrying out the same procedure for all classes gives the autoregressive model weighting parameter set {a_1, a_2, ..., a_K};
4th step: output the autoregressive model weighting parameter set {a_1, a_2, ..., a_K} and the centroid of each class {μ_1, μ_2, ..., μ_K};
Three, image super-resolution reconstruction
The redundant dictionary Φ, the encoder dictionary Ψ, the autoregressive model weighting parameters {a_1, a_2, ..., a_K} and the class centroids {μ_1, μ_2, ..., μ_K} are obtained by training in advance; once trained, they can be used from then on.
The first step: read in the low-resolution image Y to be reconstructed. If it is a grey-level image, interpolate Y to the required size with bicubic interpolation and denote the result X^(0); if it is an RGB (three-colour) image, transform it to the YCbCr colour space, interpolate the Y component to the required size and denote it X^(0). If X^(0) ∈ R^(N''×1), define A and B as coefficient matrices of dimension N'' × N'';
Second step: compute the coefficient matrices A and B, in the following sub-steps:
(1) Divide X^(0) into image blocks of size 7 × 7, denoted x_i, i = 1, 2, ..., N, where N is the number of image blocks (it varies with the size of the input image); adjacent blocks overlap by 4 pixels horizontally or vertically. Convolve X^(0) with a low-frequency Gaussian convolution kernel to obtain a low-frequency image, then subtract the low-frequency image from X^(0); the difference reflects the high-frequency information of X^(0) (called the high-frequency image here). Divide the high-frequency image into blocks of size 7 × 7, find the block at the position corresponding to x_i, and scan it from left to right and top to bottom to form a column vector, denoted x_i^h;
(2) Compute the Euclidean distances between x_i^h and all the centroids {μ_1, μ_2, ..., μ_K}, find the closest one and denote its index k_i, and take the weighting parameter with index k_i in {a_1, a_2, ..., a_K} as the autoregressive model weighting parameter a_{k_i} of x_i;
(3) Let x'_i denote the centre pixel value of x_i and χ_i the vector of the neighbourhood pixel values of x'_i in x_i; use formula (6) to compute the matrix A, where i, j are coordinate variables taking positive integer values in the range 1 to N''.
(4) Among the blocks into which the image X^(0) is divided, find L similar blocks for each x_i, indexed by the variable l = 1, 2, ..., L, where L is a positive integer giving the number of similar blocks, with L in the range 7 to 10. Compute the weight value of each similar block, in which a normalising factor and a constant h appear, with h in the range 65 to 70; form the weight vector and the set of centre pixel values of all the similar blocks, then use formula (7) to compute the matrix B, where i, l are coordinate variables taking positive integer values in the range 1 to N''.
3rd step: preset the constants: γ_1 in the range 0.008 to 0.01, γ_2 in the range 0.04 to 0.1, γ_3 about 6.5, P = 20, e = 10^(-6), Mid_Iter = 100 and the maximum number of iterations Max_Iter = 150; set the constant matrix τ = 0 and initialise the iteration count k = 0;
4th step: let I denote the identity matrix, of the same size as the matrices A and B; let D denote the down-sampling matrix, set according to the reconstruction factor; and let H denote the Gaussian blur matrix, the matrix form of the Gaussian convolution kernel (when the reconstruction factor is 3, the kernel size is 7 × 7 with standard deviation 1.6; when the factor is 2, the kernel size is 5 × 5 with standard deviation about 0.9 to 1.1; when the factor is 4, the kernel size is 7 × 7 with standard deviation 1.7 to 1.8). Set according to the Gaussian convolution kernel, H is a circulant matrix. Compute formula (8):
X^(k+1/2) = X^(k) + γ_3[(DH)^T Y - (DH)^T DH X^(k)] - γ_1(I - A)^T(I - A)X^(k) - γ_2(I - B)^T(I - B)X^(k)    (8)
X^(k+1/2), X^(k) denote the reconstruction results at iterations k+1/2 and k respectively.
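The Gaussian convolution kernels listed in this step can be generated directly from the size and standard deviation; the sketch below picks mid-range standard deviations (1.0 for factor 2, 1.75 for factor 4) from the ranges given above, which is an illustrative choice.

```python
import numpy as np

def gaussian_kernel(factor):
    """Normalised 2-D Gaussian kernel for the given reconstruction factor."""
    size, sigma = {2: (5, 1.0), 3: (7, 1.6), 4: (7, 1.75)}[factor]
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    kernel = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return kernel / kernel.sum()
```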
5th step: let R_i denote the operator that cuts x_i out of X, that is, x_i = R_i X. If the iteration count k is less than Mid_Iter, compute the sparse coefficients Θ^(k+1/2) with formula (9), Θ^(k+1/2) = [α_1, α_2, ..., α_N]; otherwise compute each α_i with formula (10):
Θ^(k+1/2) = [ΨR_1X^(k+1/2), ΨR_2X^(k+1/2), ..., ΨR_N X^(k+1/2)]    (9)
α_i = argmin_α { ||x_i - Φα||_2^2 + γ_4|α|_1 }    (10)
where γ_4 is a constant in the range 0.1 to 0.2; formula (10) is solved with the feature-sign search algorithm, whose detailed procedure is as follows:
(a) Define a vector θ ∈ R^(m×1), where θ_j denotes the j-th element of θ and θ_j ∈ {-1, 0, 1}; define the active set β = { } and initialise it to the empty set;
(b) Among the elements of α that are 0, compute ∂||x_i - Φα||^2/∂α_j (α_j denotes the j-th element of α) and find the index j with the largest absolute derivative; if ∂||x_i - Φα||^2/∂α_j > γ_4, set θ_j = -1 and β = β ∪ {j}; if ∂||x_i - Φα||^2/∂α_j < -γ_4, set θ_j = 1 and β = β ∪ {j};
(c) Select the columns of Φ whose indices are in β to form Φ̄, and the elements of α and θ whose indices are in β to form ᾱ and θ̄ respectively; compute ᾱ_new = (Φ̄^T Φ̄)^(-1)(Φ̄^T x_i - γ_4 θ̄/2). Then compare the elements of ᾱ_new with those of ᾱ in the corresponding positions one by one to see which elements change sign (a sign change means going from positive to negative or from negative to positive). Suppose the values at Num positions change sign; set the element of ᾱ_new at each sign-changed position to 0 in turn (each such operation acts on only one position, the values at the other positions remaining unchanged) and let ᾱ_new denote the altered vector, so that there are Num candidate values of ᾱ_new. Substitute the Num candidate values into the objective in turn, find the value that makes the objective smallest, assign it to the elements of α whose indices are in β, and make the same changes to the corresponding elements of θ̄; remove from β the indices whose elements in α have become 0, and update θ = sign(α);
(d) Check whether the non-zero elements of α satisfy ∂||x_i - Φα||^2/∂α_j + γ_4 sign(α_j) = 0 for all α_j ≠ 0; if not, go back to step (c). Otherwise check whether the zero elements of α satisfy |∂||x_i - Φα||^2/∂α_j| ≤ γ_4; if not, go back to step (b); otherwise return the value of α (that is, α_i = α);
6th step: compute Θ^(k+1) with formula (11), where the operation (T_τ[Z])_{i,j} = sign(Z_{i,j}) · max{|Z_{i,j}| - τ_{i,j}, 0} and Z is the operand of the operation:
Θ^(k+1) = T_τ[Θ^(k+1/2)]    (11)
7th step: use formula (12) to compute X^(k+1):
X^(k+1) = ( Σ_{i=1}^{N} R_i^T R_i )^(-1) Σ_{i=1}^{N} R_i^T Φα_i    (12)
8th step: if mod(k, P) == 0 and k ≥ Mid_Iter, replace X^(0) in the second step by X^(k+1), recompute A and B, and use formula (13) to compute τ_{i,j}:
τ_{i,j} = cσ_n^2 / (σ_{i,j} + δ)    (13)
where c is a constant and σ_n is the standard deviation of the image noise; cσ_n lies in the range 0.1 to 3.6, and 0.1 to 0.6 for normal images; δ is a small constant, taken as 0.35. σ_{i,j} is computed as follows: using X^(k+1), extract the blocks and find the blocks similar to x_i; compute the corresponding vector of each block similar to x_i, take the j-th element of each of them for l = 1, 2, ..., L, and the standard deviation of these numbers is σ_{i,j}.
9th step: increment the iteration count: k = k + 1;
Tenth step: check whether the relative change between X^(k+1) and X^(k) is smaller than e, or k ≥ Max_Iter; if either condition holds, stop the iteration and return the super-resolution image X; otherwise repeat the 4th step to the 10th step;
11th step: if the input image is a grey-level image, output X directly; if it is a colour image, interpolate the CbCr components to the same size as X, convert the luminance X and the chrominance CbCr to RGB space, and output the reconstructed image.

Claims (1)

1. A compressed sensing image super-resolution reconstruction method based on dual dictionary learning, characterised by the following steps:
A. Redundant dictionary and encoder dictionary parameter training:
Φ = [φ_1, φ_2, ..., φ_m] ∈ R^(n×m) is the redundant dictionary and Ψ = [ψ_1, ψ_2, ..., ψ_n] ∈ R^(m×n) is the encoder dictionary, where m, n are positive integers, here m = 512 and n = 49; the double dictionary means that the redundant dictionary and the encoder dictionary are produced simultaneously;
The first step: read the super-resolution images in the image library, convert them to grey-level images, and divide them into sample blocks of size 7 × 7; scan each block from left to right and top to bottom to form a column vector; use s_i ∈ R^n, i = 1, 2, ..., Q to denote the column vector formed from each block, where Q is the total number of column vectors;
Second step: compute the variance Var(s_i) of each s_i and keep only the vectors with Var(s_i) greater than the threshold TH, where TH lies in the range 4.5 to 20, finally obtaining the training sample set S = [s_1, s_2, ..., s_M], with M greater than 120000;
3rd step: solve formula (1) by an iterative method for the redundant dictionary Φ and the encoder dictionary Ψ, where Θ is the sparse coefficient matrix and λ, η are constants; η is taken equal to 1 and λ lies in the range 0.05 to 0.2; ||·||_2 denotes the l_2 norm and |·|_1 the l_1 norm:
function = {Φ, Ψ, Θ} = argmin_{Φ,Ψ,Θ} { ||S - ΦΘ||_2^2 + η||Θ - ΨS||_2^2 + λ|Θ|_1 }    (1)
(1) Initialise the redundant dictionary Φ with a Gaussian random matrix, the encoder dictionary Ψ with the identity matrix, and the sparse coefficient matrix Θ with the all-zero matrix; the iteration count k = 0, the maximum number of iterations Max_Iter is taken as 800 to 1500, and the iteration convergence control factor ε = 10^(-6);
(2) Define (T_ζ[O])_{i,j} = sign(O_{i,j}) · max{|O_{i,j}| - ζ, 0} as the thresholding operator, where ζ is the threshold variable, O is the matrix operand, O_{i,j} is the element of O with subscript (i, j), and sign(·) is the sign operator. Take σ_Θ = 2||Φ^T Φ + ηI||_F, where ||·||_F denotes the Frobenius norm and I is the identity matrix, and update the current Θ with formula (2):
Θ^(k+1) = T_{λ/(2σ_Θ)}[ (1 - η/σ_Θ)Θ^(k) + (1/σ_Θ)(Φ^T(S - ΦΘ^(k)) + ηΨS) ]    (2)
where Θ^(k+1), Θ^(k) denote the values of Θ at iterations k+1 and k respectively, and Φ^T is the transpose of Φ;
(3) Define the operation π(d) = d/max(1, ||d||), where d is a vector; this operation projects the vector onto the unit ball. Define σ_Φ = 2||ΘΘ^T||_F and update the current Φ with formula (3):
Φ^(k+1) = π( Φ^(k) + (1/σ_Φ)(S - Φ^(k)Θ)Θ^T )    (3)
Here π(·) applies the unit-length projection to each column of Φ; Φ^(k+1), Φ^(k) denote the values of Φ at iterations k+1 and k respectively, and Θ^T is the transpose of Θ;
(4) Compute σ_Ψ = 2||SS^T||_F and update the current Ψ with formula (4):
Ψ^(k+1) = π( Ψ^(k) + (1/σ_Ψ)(Θ - Ψ^(k)S)S^T )    (4)
Here π(·) applies the unit-length projection to each row of Ψ; Ψ^(k+1), Ψ^(k) denote the values of Ψ at iterations k+1 and k respectively, and S^T is the transpose of S;
(5) Increment the iteration count: k = k + 1;
(6) Substitute the currently computed Θ, Ψ, Φ and the values computed in the previous iteration into formula (1) and compute the objective value in each case; check whether ||function^(k+1) - function^(k)||_2^2 / ||function^(k)||_2^2 < ε or k ≥ Max_Iter; if either condition is met, stop the iteration and output Φ and Ψ, otherwise repeat (2) to (5); function^(k+1), function^(k) denote the function values computed at iterations k+1 and k respectively;
B. Autoregressive model weighting parameter training:
The first step: read the super-resolution images in the image library, convert them to grey-level images and convolve each with a low-frequency Gaussian convolution kernel of size 7 × 7 and standard deviation 1.6 to obtain a low-frequency image; then subtract the low-frequency image from the original image, the difference reflecting the high-frequency information of the image, which we call the high-frequency image. Divide the high-frequency image into blocks of size 7 × 7 (n = 49), find the block at the position corresponding to s_i in the super-resolution image, and scan it from left to right and top to bottom to form a column vector, denoted s_i^h; the set of high-frequency block vectors corresponding to all of S = [s_1, s_2, ..., s_M] is S^h = [s_1^h, s_2^h, ..., s_M^h];
Second step: use the K-means clustering algorithm to divide S^h into K classes {C_1, C_2, ..., C_K}, with K = 200. Let m_k be the number of vectors in C_k and use formula (5) to compute the centroid μ_k of each class, k = 1, 2, ..., K. According to the classification of S^h, S = [s_1, s_2, ..., s_M] is divided into the same K classes, written {S_1, S_2, ..., S_K}:
μ_k = (1/m_k) Σ_{i=1}^{m_k} s_i^h,   s_i^h ∈ C_k    (5)
3rd step: let s'_i denote the centre pixel value of s_i and q_i the vector of the neighbourhood pixel values of s'_i in s_i; the neighbourhood size is 3 × 3, and q_i is the column vector of 8 elements formed after removing the centre pixel. Within each class, compute a_k by a least-squares fit of the centre pixel to its neighbourhood, s'_i ≈ q_i^T a_k; a_k is an 8 × 1 vector. Carrying out the same procedure for all classes gives the autoregressive model weighting parameter set {a_1, a_2, ..., a_K};
4th step: output the autoregressive model weighting parameter set {a_1, a_2, ..., a_K} and the centroid of each class {μ_1, μ_2, ..., μ_K};
C. Image super-resolution reconstruction
The redundant dictionary Φ, the encoder dictionary Ψ, the autoregressive model weighting parameters {a_1, a_2, ..., a_K} and the class centroids {μ_1, μ_2, ..., μ_K} are obtained by training in advance; once trained, they can be used from then on;
The first step: read in the low-resolution image Y to be reconstructed. If it is a grey-level image, interpolate Y to the required size with bicubic interpolation and denote the result X^(0); if it is an RGB (three-colour) image, transform it to the YCbCr colour space, interpolate the Y component to the required size and denote it X^(0). If X^(0) ∈ R^(N''×1), define A and B as coefficient matrices of dimension N'' × N'';
Second step: compute the coefficient matrices A and B, in the following sub-steps:
(1) Divide X^(0) into image blocks of size 7 × 7, denoted x_i, i = 1, 2, ..., N, where N is the number of image blocks and varies with the size of the input image; adjacent blocks overlap by 4 pixels horizontally or vertically. Convolve X^(0) with a low-frequency Gaussian convolution kernel to obtain a low-frequency image, then subtract the low-frequency image from X^(0); the difference reflects the high-frequency information of X^(0), which is called the high-frequency image here. Divide the high-frequency image into blocks of size 7 × 7, find the block at the position corresponding to x_i, and scan it from left to right and top to bottom to form a column vector, denoted x_i^h;
(2) Compute the Euclidean distances between x_i^h and all the centroids {μ_1, μ_2, ..., μ_K}, find the closest one and denote its index k_i, and take the weighting parameter with index k_i in {a_1, a_2, ..., a_K} as the autoregressive model weighting parameter a_{k_i} of x_i;
(3) Let x'_i denote the centre pixel value of x_i and χ_i the vector of the neighbourhood pixel values of x'_i in x_i; use formula (6) to compute the matrix A, where i, j are coordinate variables taking positive integer values in the range 1 to N'';
(4) Among the blocks into which the image X^(0) is divided, find L similar blocks for each x_i, indexed by the variable l = 1, 2, ..., L, where L is a positive integer giving the number of similar blocks, with L in the range 7 to 10. Compute the weight value of each similar block, in which a normalising factor and a constant h appear, with h in the range 65 to 70; form the weight vector and the set of centre pixel values of all the similar blocks, then use formula (7) to compute the matrix B, where i, l are coordinate variables taking positive integer values in the range 1 to N'';
3rd step: preset the constants: γ_1 in the range 0.008 to 0.01, γ_2 in the range 0.04 to 0.1, γ_3 = 6.5, P = 20, e = 10^(-6), Mid_Iter = 100 and the maximum number of iterations Max_Iter = 150; set the constant matrix τ = 0 and initialise the iteration count k = 0;
4th step: let I denote the identity matrix, of the same size as the matrices A and B; let D denote the down-sampling matrix, set according to the reconstruction factor; and let H denote the Gaussian blur matrix, the matrix form of the Gaussian convolution kernel (when the reconstruction factor is 3, the kernel size is 7 × 7 with standard deviation 1.6; when the factor is 2, the kernel size is 5 × 5 with standard deviation 0.9 to 1.1; when the factor is 4, the kernel size is 7 × 7 with standard deviation 1.7 to 1.8); set according to the Gaussian convolution kernel, H is a circulant matrix. Compute formula (8):
X^(k+1/2) = X^(k) + γ_3[(DH)^T Y - (DH)^T DH X^(k)] - γ_1(I - A)^T(I - A)X^(k) - γ_2(I - B)^T(I - B)X^(k)    (8)
X^(k+1/2), X^(k) denote the reconstruction results at iterations k+1/2 and k respectively;
5th step: let R_i denote the operator that cuts x_i out of X, that is, x_i = R_i X. If the iteration count k is less than Mid_Iter, compute the sparse coefficients Θ^(k+1/2) with formula (9), Θ^(k+1/2) = [α_1, α_2, ..., α_N]; otherwise compute each α_i with formula (10):
Θ^(k+1/2) = [ΨR_1X^(k+1/2), ΨR_2X^(k+1/2), ..., ΨR_N X^(k+1/2)]    (9)
α_i = argmin_α { ||x_i - Φα||_2^2 + γ_4|α|_1 }    (10)
where γ_4 is a constant in the range 0.1 to 0.2; formula (10) is solved with the feature-sign search algorithm, whose detailed procedure is as follows:
(a) Define a vector θ ∈ R^(m×1), where θ_j denotes the j-th element of θ and θ_j ∈ {-1, 0, 1}; define the active set β = { } and initialise it to the empty set;
(b) Among the elements of α that are 0, compute ∂||x_i - Φα||^2/∂α_j (α_j denotes the j-th element of α) and find the index j with the largest absolute derivative; if ∂||x_i - Φα||^2/∂α_j > γ_4, set θ_j = -1 and β = β ∪ {j}; if ∂||x_i - Φα||^2/∂α_j < -γ_4, set θ_j = 1 and β = β ∪ {j};
(c) Select the columns of Φ whose indices are in β to form Φ̄, and the elements of α and θ whose indices are in β to form ᾱ and θ̄ respectively; compute ᾱ_new = (Φ̄^T Φ̄)^(-1)(Φ̄^T x_i - γ_4 θ̄/2). Then compare the elements of ᾱ_new with those of ᾱ in the corresponding positions one by one to see which elements change sign, where a sign change means going from positive to negative or from negative to positive. Suppose the values at Num positions change sign; set the element of ᾱ_new at each sign-changed position to 0 in turn, each such operation acting on only one position with the values at the other positions unchanged, and let ᾱ_new denote the altered vector, so that there are Num candidate values of ᾱ_new. Substitute the Num candidate values into the objective in turn, find the value that makes the objective smallest, assign it to the elements of α whose indices are in β, and make the same changes to the corresponding elements of θ̄; remove from β the indices whose elements in α have become 0, and update θ = sign(α);
(d) Check whether the non-zero elements of α satisfy ∂||x_i - Φα||^2/∂α_j + γ_4 sign(α_j) = 0 for all α_j ≠ 0; if not, go back to step (c). Otherwise check whether the zero elements of α satisfy |∂||x_i - Φα||^2/∂α_j| ≤ γ_4; if not, go back to step (b); otherwise return the value of α;
6th step: compute Θ^(k+1) with formula (11), where the operation (T_τ[Z])_{i,j} = sign(Z_{i,j}) · max{|Z_{i,j}| - τ_{i,j}, 0} and Z is the operand of the operation:
Θ^(k+1) = T_τ[Θ^(k+1/2)]    (11)
7th step: use formula (12) to compute X^(k+1):
X^(k+1) = ( Σ_{i=1}^{N} R_i^T R_i )^(-1) Σ_{i=1}^{N} R_i^T Φα_i    (12)
8th step: if mod(k, P) == 0 and k ≥ Mid_Iter, replace X^(0) in the second step by X^(k+1), recompute A and B, and use formula (13) to compute τ_{i,j}:
τ_{i,j} = cσ_n^2 / (σ_{i,j} + δ)    (13)
where c is a constant and σ_n is the standard deviation of the image noise; cσ_n lies in the range 0.1 to 3.6, and 0.1 to 0.6 for normal images; δ is a small constant, taken as 0.35. σ_{i,j} is computed as follows: using X^(k+1), extract the blocks and find the blocks similar to x_i; compute the corresponding vector of each block similar to x_i, take the j-th element of each of them, and the standard deviation of these numbers is σ_{i,j};
9th step: increment the iteration count: k = k + 1;
Tenth step: check whether the relative change between X^(k+1) and X^(k) is smaller than e, or k ≥ Max_Iter; if either condition holds, stop the iteration and return the super-resolution image X; otherwise repeat the 4th step to the 10th step;
11th step: if the input image is a grey-level image, output X directly; if it is a colour image, interpolate the CbCr components to the same size as X, convert the luminance X and the chrominance CbCr to RGB space, and output the reconstructed image.
CN201210184626.3A 2012-05-31 2012-05-31 Compressed sensing image super-resolution reconstruction method based on dual dictionary learning Active CN102842115B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210184626.3A CN102842115B (en) 2012-05-31 2012-05-31 Compressed sensing image super-resolution reconstruction method based on dual dictionary learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210184626.3A CN102842115B (en) 2012-05-31 2012-05-31 Compressed sensing image super-resolution reconstruction method based on dual dictionary learning

Publications (2)

Publication Number Publication Date
CN102842115A CN102842115A (en) 2012-12-26
CN102842115B true CN102842115B (en) 2015-11-25

Family

ID=47369441

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210184626.3A Active CN102842115B (en) Compressed sensing image super-resolution reconstruction method based on dual dictionary learning

Country Status (1)

Country Link
CN (1) CN102842115B (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103237205B (en) * 2013-03-25 2016-01-20 西安电子科技大学 Digital camera based on Toeplitz matrix observation and dictionary learning compresses formation method
CN103295196B (en) * 2013-05-21 2015-09-30 西安电子科技大学 Based on the image super-resolution rebuilding method of non local dictionary learning and biregular item
CN103400348B (en) * 2013-07-19 2016-03-02 哈尔滨工业大学深圳研究生院 Based on the image restoring method and system of compressed sensing
CN103607292B (en) * 2013-10-28 2017-01-18 国家电网公司 Fast distributed monitoring method for electric-power communication network services
CN103854268B (en) * 2014-03-26 2016-08-24 西安电子科技大学 The Image Super-resolution Reconstruction method returned based on multinuclear Gaussian process
CN104159112B (en) * 2014-08-08 2017-11-03 哈尔滨工业大学深圳研究生院 The compressed sensing video transmission method and system decoded based on dual sparse model
CN106815806B (en) * 2016-12-20 2020-01-10 浙江工业大学 Single image SR reconstruction method based on compressed sensing and SVR
KR102442449B1 (en) 2017-09-01 2022-09-14 삼성전자주식회사 Image processing apparatus, method for processing image and computer-readable recording medium
CN107993205A (en) * 2017-11-28 2018-05-04 重庆大学 A kind of MRI image reconstructing method based on study dictionary with the constraint of non-convex norm minimum
CN108090873B (en) * 2017-12-20 2021-03-05 河北工业大学 Pyramid face image super-resolution reconstruction method based on regression model
CN110136055B (en) * 2018-02-02 2023-07-14 腾讯科技(深圳)有限公司 Super resolution method and device for image, storage medium and electronic device
CN110555800A (en) * 2018-05-30 2019-12-10 北京三星通信技术研究有限公司 image processing apparatus and method
CN109188389B (en) * 2018-10-16 2023-03-28 哈尔滨工业大学 Method for solving time difference measurement ambiguity in beyond-visual-distance multi-base passive radar
CN110211017B (en) * 2019-05-15 2023-12-19 北京字节跳动网络技术有限公司 Image processing method and device and electronic equipment
CN110575146B (en) * 2019-09-20 2022-03-15 福建工程学院 Pulse signal noise detection method based on enhanced Gaussian redundant dictionary
CN110717859B (en) * 2019-10-09 2023-06-23 哈尔滨工业大学 Super-resolution reconstruction method based on two-way video

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102142137A (en) * 2011-03-10 2011-08-03 西安电子科技大学 High-resolution dictionary based sparse representation image super-resolution reconstruction method
CN102360498A (en) * 2011-10-27 2012-02-22 江苏省邮电规划设计院有限责任公司 Reconstruction method for image super-resolution

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102142137A (en) * 2011-03-10 2011-08-03 西安电子科技大学 High-resolution dictionary based sparse representation image super-resolution reconstruction method
CN102360498A (en) * 2011-10-27 2012-02-22 江苏省邮电规划设计院有限责任公司 Reconstruction method for image super-resolution

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Image Super-Resolution Via Sparse Representation; Jianchao Yang; IEEE Transactions on Image Processing; 2010-11-30; Vol. 19, No. 11; 2861-2873 *
一种自适应正则化的图像超分辨率算法 (An adaptive regularization image super-resolution algorithm); 安耀祖; 自动化学报 (Acta Automatica Sinica); 2012-04-30; Vol. 38, No. 4; 602-609 *

Also Published As

Publication number Publication date
CN102842115A (en) 2012-12-26

Similar Documents

Publication Publication Date Title
CN102842115B (en) Compressed sensing image super-resolution reconstruction method based on dual dictionary learning
CN109741256B (en) Image super-resolution reconstruction method based on sparse representation and deep learning
CN106204449B (en) A kind of single image super resolution ratio reconstruction method based on symmetrical depth network
CN103150713B (en) Utilize the image super-resolution method that image block classification rarefaction representation is polymerized with self-adaptation
CN103093444B (en) Image super-resolution reconstruction method based on self-similarity and structural information constraint
CN106952228A (en) The super resolution ratio reconstruction method of single image based on the non local self-similarity of image
CN105046672A (en) Method for image super-resolution reconstruction
CN111340696B (en) Convolutional neural network image super-resolution reconstruction method fused with bionic visual mechanism
CN107067367A (en) A kind of Image Super-resolution Reconstruction processing method
CN104657962B (en) The Image Super-resolution Reconstruction method returned based on cascading linear
CN105046664A (en) Image denoising method based on self-adaptive EPLL algorithm
CN106920214A (en) Spatial target images super resolution ratio reconstruction method
CN105631807A (en) Single-frame image super resolution reconstruction method based on sparse domain selection
CN113516601A (en) Image restoration technology based on deep convolutional neural network and compressed sensing
Ding et al. Tensor train rank minimization with nonlocal self-similarity for tensor completion
CN114998167B (en) High-spectrum and multi-spectrum image fusion method based on space-spectrum combined low rank
CN105550989A (en) Image super-resolution method based on nonlocal Gaussian process regression
CN104657951A (en) Multiplicative noise removal method for image
CN104408697B (en) Image Super-resolution Reconstruction method based on genetic algorithm and canonical prior model
CN104299193B (en) Image super-resolution reconstruction method based on high-frequency information and medium-frequency information
Pan et al. FDPPGAN: remote sensing image fusion based on deep perceptual patchGAN
Zhang et al. Learning stacking regressors for single image super-resolution
Xia et al. Meta-learning-based degradation representation for blind super-resolution
CN109559278B (en) Super resolution image reconstruction method and system based on multiple features study
Sun et al. Mixed noise removal for hyperspectral images based on global tensor low-rankness and nonlocal SVD-aided group sparsity

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20180820

Address after: 266000 east of Huadong Road, Qingdao hi tech Zone, Shandong

Patentee after: Qingdao bri Futian intelligent door and window Technology Co., Ltd.

Address before: 264209 Harbin Institute of Technology, Weihai 2, Cultural West Road, Weihai, Shandong

Patentee before: Harbin Institute of Technology (Weihai)

TR01 Transfer of patent right
CP03 Change of name, title or address

Address after: 266000 east of Huadong Road, high tech Zone, Qingdao City, Shandong Province, west of planned east line 4 and north of planned east line 17

Patentee after: Qingdao Borui Futian Technology Group Co.,Ltd.

Address before: 266000 east of Huadong Road, Qingdao hi tech Zone, Shandong

Patentee before: Qingdao bri Futian intelligent door and window Technology Co.,Ltd.

CP03 Change of name, title or address