CN103871041A - Image super-resolution reconstruction method based on cognitive regularization parameters - Google Patents

Image super-resolution reconstruction method based on cognitive regularization parameters

Info

Publication number: CN103871041A (application CN201410108363.7A; granted as CN103871041B)
Authority: CN (China)
Legal status: Granted; Expired - Fee Related
Inventors: 张爱新, 金波, 徐光耀, 李建华, 王芳, 李生红
Applicant/Assignee: Shanghai Jiaotong University
Classification: Image Processing (AREA)
Abstract

The invention discloses an image super-resolution reconstruction method based on cognitive regularization parameters. A theory of human visual cognition is used to compute an appropriate regularization parameter for each image block; these parameters, called cognitive regularization parameters, are introduced into the process of solving the sparse representation. Each block is then reconstructed in combination with the super-resolution method based on sparse representation, and the reconstructed blocks are finally recombined into a high-resolution image. The method was tested on facial images, and the results show that it reconstructs images better than a method using a fixed regularization parameter.

Description

Image super-resolution reconstruction method based on cognitive regularization parameters
Technical field
The present invention relates to the technical field of image processing, and specifically to an image super-resolution reconstruction method based on cognitive regularization parameters.
Background technology
With the development of the information age, acquiring and processing information has become an increasingly important problem. For images, resolution reflects the amount of information an image contains: the higher the resolution, the richer the available detail, and the more accurate and fine-grained the description of the objective scene. Because sensor manufacturing technology has run into bottlenecks, the traditional approach of improving image resolution through better hardware is no longer practical. Against this background, super-resolution reconstruction (SRR) offers a new solution to this problem through digital signal processing: its goal is to recover frequency components that have been lost, converting a digital image from low resolution (LR) to high resolution (HR).
Image super-resolution reconstruction methods divide broadly into frequency-domain and spatial-domain algorithms. Frequency-domain super-resolution mainly solves an interpolation problem in the frequency transform domain of the image, achieving its goal by eliminating the spectral aliasing of the low-resolution images. It is an intuitive de-aliasing approach, but it is constrained by Fourier transform theory and cannot incorporate prior information about the image, so its development has been severely limited and its reconstruction results are poor. Given these limitations of frequency-domain methods, current research is conducted almost entirely in the spatial domain. Spatial-domain super-resolution must account for the spatial factors that affect imaging, including optical blur and motion blur, and builds a flexible observation model for them; it can incorporate rich prior constraints and can therefore achieve better reconstruction results. Spatial-domain algorithms can be further divided into interpolation-based, reconstruction-based and learning-based methods.
In recent years, learning-based super-resolution reconstruction has become a research hotspot in this field, showing outstanding results on single-frame image reconstruction. Learning-based methods collect pairs of high- and low-resolution images whose content belongs to the same category as the low-resolution input, compose a training set, analyze the statistical relationship between the pairs, and use it as a prior to constrain the super-resolution reconstruction process. The idea is still to obtain prior knowledge, but its source is no longer limited to the low-resolution image itself; it also includes the statistics of similar images. The advantage is that the prior information of images is fully exploited: without increasing the number of input image samples, new high-frequency details can still be generated, yielding better reconstruction than other super-resolution methods.
A search of the prior art literature shows that, with the development of the theory of compressed sensing (CS), Jianchao Yang et al. proposed a super-resolution reconstruction algorithm based on sparse representation. The algorithm projects high- and low-resolution image blocks onto the same sparse representation through different overcomplete dictionaries, uses this as the point of entry for learning and reconstruction, and achieves good reconstruction results. In this algorithm, the regularization parameter λ used when solving the sparse representation mainly balances the sparsity of the required solution vector α against the fidelity of the approximation in the reconstruction result. Simulation experiments show that the optimal value of λ differs from image to image and is related to the texture characteristics of the image; a fixed regularization parameter is therefore not an optimal solution. Moreover, the theory of human visual cognition shows that the human eye responds differently to different texture characteristics of an image. Accordingly, an image can be divided into three types of region: (1) strong edge regions, where the image brightness changes greatly and conspicuously, mostly at the contours of image content; (2) texture regions, where the brightness exhibits small but constantly repeating variations; (3) smooth regions, where the brightness changes little and inconspicuously, common in background areas of the image, such as large expanses of sky. Strong edge regions provide more contour information about the image; on the other hand, if a smooth region is distorted or contains noise, it strongly affects the observer's visual experience. Introducing the theory of human visual cognition can therefore effectively solve the problem of optimizing the regularization parameter λ in the sparse vector solution procedure.
Summary of the invention
The object of the invention is to overcome the above deficiencies of the prior art by providing an image super-resolution reconstruction method based on cognitive regularization parameters.
The technical solution of the present invention is as follows:
An image super-resolution reconstruction method based on cognitive regularization parameters, characterized in that: first, the low-resolution image to be processed is divided into blocks, and the theory of human visual cognition is used to compute a suitable regularization parameter for each block according to its texture characteristics, namely the cognitive regularization parameter; then the constructed cognitive regularization parameters are introduced into the solution procedure of the sparse representation, and all blocks are super-resolved in combination with the reconstruction method based on sparse representation; finally, all blocks are recombined to obtain the high-resolution image.
The detailed process of the method of the invention comprises the following steps:
1) Input the single low-resolution image Y_0 of size N×M and enlarge it by a factor of a with the bicubic interpolation method (a being the desired magnification factor, a a positive integer with a > 1), obtaining the matrix Y ∈ R^{aN×aM};
2) Extract features from Y block by block, obtaining the feature vector set U = {t_1, t_2, ..., t_Q}, where Q = a(N−4)·a(M−4);
3) Compute the cognitive regularization parameter λ_i of each block of matrix Y, where i = 1, 2, ..., Q;
4) Traverse the feature vector set U = {t_1, t_2, ..., t_Q}; for each vector t_i (1 ≤ i ≤ Q), set its corresponding cognitive regularization parameter λ_i, solve the sparse representation, and perform super-resolution reconstruction to obtain the feature vector z_i of the corresponding high-resolution block. After all feature vectors t_i have been traversed, the block feature vector set V = {z_1, z_2, ..., z_Q} of the reconstructed high-resolution image is obtained;
5) Recombine the high-resolution block feature vector set V = {z_1, z_2, ..., z_Q} and the matrix Y into the high-resolution image Z';
6) Apply the iterative back-projection method to perform global reconstruction on Z', further removing blur and noise, to finally obtain the high-resolution image Z;
7) Output the high-resolution image Z.
Step 1) inputs the single low-resolution image Y_0 of size N×M and performs super-resolution preprocessing with the bicubic interpolation method, enlarging it by a factor of a to obtain the matrix Y ∈ R^{aN×aM}, specifically:
1.1) Input the single low-resolution image of size N×M;
1.2) Determine whether the low-resolution image is a grayscale or a color image from the dimension d of its image matrix space.
When d = 2, the low-resolution image is grayscale; enlarge it by a factor of a with the bicubic interpolation method to obtain the matrix Y ∈ R^{aN×aM}.
When d = 3, the low-resolution image is a color image; first convert its color space from RGB to YCbCr with the formulas:
I_Y = 0.299·I_R + 0.587·I_G + 0.114·I_B
I_Cb = 0.564·(I_B − I_Y)
I_Cr = 0.713·(I_R − I_Y)
where I_R, I_G and I_B are the three components of the low-resolution image Y_0 in RGB space, and I_Y, I_Cb and I_Cr are its three components in YCbCr space. Then enlarge each of the three components I_Y, I_Cb and I_Cr by a factor of a with the bicubic interpolation method, obtaining matrices I_Y ∈ R^{aN×aM}, I_Cb ∈ R^{aN×aM} and I_Cr ∈ R^{aN×aM}. Let Y = I_Y, obtaining the matrix Y ∈ R^{aN×aM}.
The bicubic interpolation method proceeds as follows:
(a) Let the input matrix be Y = [g_1, g_2, ..., g_M], Y ∈ R^{N×M}, where g_i ∈ R^N is the i-th column of Y (i = 1, 2, ..., M). Pad Y by replicating its borders: first let Y_1 = [g_1, g_1, Y, g_M, g_M] = [h_1, h_2, ..., h_N]^T, where h_j ∈ R^{M+4} is the j-th row of Y_1 (j = 1, 2, ..., N); then let Y_2 = [h_1, h_1, Y_1, h_N, h_N]^T; finally let Y = Y_2.
(b) With magnification factor a, each element of the output matrix X ∈ R^{aN×aM} is computed as
X(i,j) = A·B·C^T,  for i = 1, 2, ..., aN and j = 1, 2, ..., aM
where
A = [S(1+u)  S(u)  S(1−u)  S(2−u)]
B = [[Y(p−1,q−1)  Y(p−1,q)  Y(p−1,q+1)  Y(p−1,q+2)],
     [Y(p,q−1)    Y(p,q)    Y(p,q+1)    Y(p,q+2)],
     [Y(p+1,q−1)  Y(p+1,q)  Y(p+1,q+1)  Y(p+1,q+2)],
     [Y(p+2,q−1)  Y(p+2,q)  Y(p+2,q+1)  Y(p+2,q+2)]]
C = [S(1+v)  S(v)  S(1−v)  S(2−v)]
S(x) = 1 − 2|x|² + |x|³ for |x| < 1; 4 − 8|x| + 5|x|² − |x|³ for 1 ≤ |x| < 2; 0 for |x| ≥ 2
u = (i % a)/a (% denotes the modulo operation)
v = (j % a)/a
p = ⌊i/a⌋ + 2 (⌊⌋ denotes rounding down)
q = ⌊j/a⌋ + 2
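The kernel S(x) and the weight vectors A and C above can be checked numerically: for any fractional offset u the four kernel weights sum to 1, so flat regions are interpolated exactly. The function names below are illustrative, not from the patent.

```python
def S(x):
    # interpolation kernel of step (b)
    x = abs(x)
    if x < 1:
        return 1 - 2 * x**2 + x**3
    if x < 2:
        return 4 - 8 * x + 5 * x**2 - x**3
    return 0.0

def weights(u):
    # the four weights A = [S(1+u), S(u), S(1-u), S(2-u)] for offset u in [0, 1)
    return [S(1 + u), S(u), S(1 - u), S(2 - u)]
```

For u = 0 the weights reduce to [0, 1, 0, 0], i.e. at a grid point the interpolated value is the grid value itself.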
Step 2) obtains the feature vector set U of matrix Y, specifically:
2.1) Compute the first- and second-order gradient feature matrices of Y in the horizontal and vertical directions, obtaining four feature matrices T_h1, T_h2, T_v1 and T_v2, each with the same dimensions as Y. The process is:
2.1.1) First-order horizontal gradient matrix T_h1 of Y: let the filter operator be f_1 = [−1, 0, 1] and compute the convolution T_1 = Y * f_1, obtaining T_1 ∈ R^{aN×(aM+2)}; delete the 1st and (aM+2)-th columns to obtain T_h1 ∈ R^{aN×aM};
2.1.2) First-order vertical gradient matrix T_v1 of Y: let the filter operator be f_2 = f_1^T = [−1, 0, 1]^T and compute the convolution T_2 = Y * f_2, obtaining T_2 ∈ R^{(aN+2)×aM}; delete the 1st and (aN+2)-th rows to obtain T_v1 ∈ R^{aN×aM};
2.1.3) Second-order horizontal gradient matrix T_h2 of Y: let the filter operator be f_3 = [1, 0, −2, 0, 1] and compute the convolution T_3 = Y * f_3, obtaining T_3 ∈ R^{aN×(aM+4)}; delete the 1st, 2nd, (aM+3)-th and (aM+4)-th columns to obtain T_h2 ∈ R^{aN×aM};
2.1.4) Second-order vertical gradient matrix T_v2 of Y: let the filter operator be f_4 = f_3^T = [1, 0, −2, 0, 1]^T and compute the convolution T_4 = Y * f_4, obtaining T_4 ∈ R^{(aN+4)×aM}; delete the 1st, 2nd, (aN+3)-th and (aN+4)-th rows to obtain T_v2 ∈ R^{aN×aM};
2.2) Operate on matrix T_h1 as follows. Trim the border of width a from T_h1, i.e. delete rows 1 to a and aN−a+1 to aN, and columns 1 to a and aM−a+1 to aM, obtaining T'_h1 ∈ R^{a(N−2)×a(M−2)}. Traverse the elements of T'_h1 in order, from left to right and from top to bottom. Block selection: if the traversed pixel coordinate (p, q) satisfies
1 ≤ p ≤ a(N−2)−b+1 and 1 ≤ q ≤ a(M−2)−b+1
with p and q integers and block size b×b, take this pixel as the top-left vertex of a block and cut out the block P_i ∈ R^{b×b}; then merge the b row vectors of P_i in order into the vector t_{h1}^i ∈ R^{b²} (where i = 1, 2, ..., Q, and Q = a(N−4)·a(M−4)). This yields the vector set U_h1 = {t_{h1}^1, t_{h1}^2, ..., t_{h1}^Q}.
2.3) Similarly, apply the same operations to T_h2, T_v1 and T_v2, obtaining the vector sets U_h2, U_v1 and U_v2 respectively.
2.4) Traverse U_h1, U_h2, U_v1 and U_v2 simultaneously and concatenate the vectors t_{h1}^i, t_{h2}^i, t_{v1}^i and t_{v2}^i into the vector t_i = [t_{h1}^i; t_{h2}^i; t_{v1}^i; t_{v2}^i] ∈ R^{4b²} (where i = 1, 2, ..., Q).
This yields the feature vector set U = {t_1, t_2, ..., t_Q} of matrix Y.
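The ordered block-to-vector scan of 2.2) can be sketched as follows (names are illustrative); each b×b block is flattened row by row, matching the "merge the b row vectors in order" rule:

```python
import numpy as np

def patches_to_vectors(T, b):
    # scan top-left corners left-to-right, top-to-bottom; flatten each
    # b x b block row by row into a length-b^2 vector
    H, W = T.shape
    return np.array([T[p:p+b, q:q+b].ravel()
                     for p in range(H - b + 1) for q in range(W - b + 1)])
```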
Step 3) computes the cognitive regularization parameter λ_i of each block of matrix Y, with the following detailed process:
3.1) According to the theory of human visual cognition, compute the texture cognition model coefficient of each pixel in Y:
3.1.1) Apply a wavelet transform to Y. Each level of the transform forms four subimages of half the size, namely the low-frequency approximation component L, the vertical high-frequency detail component H_v, the horizontal high-frequency detail component H_h, and the diagonal high-frequency detail component H_d.
3.1.2) Compute the contrast image
C = (C_v + C_h + C_d) / 3
where C_v = H_v / L is the vertical contrast, C_h = H_h / L the horizontal contrast, and C_d = H_d / L the diagonal contrast. By definition, the wavelet contrasts C_v, C_h and C_d express the ratio of the high-frequency component in each direction to the corresponding low-frequency component (the background luminance); they account for the contrast between foreground and background and thus reflect the visual distinctiveness of each pixel in the image. Then enlarge C by a factor of 2 with the bicubic interpolation method described in step 1), obtaining the contrast image C ∈ R^{aN×aM} of Y.
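A sketch of 3.1.1)–3.1.2) using a one-level Haar transform — an assumption, since the patent does not name the wavelet — with a small epsilon guarding against division by a zero low-frequency coefficient:

```python
import numpy as np

def haar_dwt2(Y):
    # one level of the 2-D Haar transform (a stand-in for the unspecified
    # wavelet); returns L, H_v, H_h, H_d at half size
    a = (Y[0::2, :] + Y[1::2, :]) / 2
    d = (Y[0::2, :] - Y[1::2, :]) / 2
    L  = (a[:, 0::2] + a[:, 1::2]) / 2
    Hh = (a[:, 0::2] - a[:, 1::2]) / 2
    Hv = (d[:, 0::2] + d[:, 1::2]) / 2
    Hd = (d[:, 0::2] - d[:, 1::2]) / 2
    return L, Hv, Hh, Hd

def contrast_image(Y, eps=1e-8):
    # C = (C_v + C_h + C_d) / 3 with C_x = |H_x| / |L| per direction
    L, Hv, Hh, Hd = haar_dwt2(Y)
    return (np.abs(Hv) + np.abs(Hh) + np.abs(Hd)) / (3 * np.abs(L) + eps)
```

A perfectly flat image has zero contrast everywhere, while an intensity step produces a positive response, as the texture model requires.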
3.1.3) Different texture regions of an image call for different emphases in super-resolution reconstruction. For strong edge regions, details should be refined, so the corresponding regularization parameter should take a smaller value; for smooth regions, the result after reconstruction should be smoother, so the regularization parameter should take a larger value. To this end, the cognitive texture model coefficient Q_{k,s} at pixel position (k, s) is constructed as:
Q_{k,s} = ρ · Σ_{(μ,η)∈N(k,s)} (|C_{μ,η}| / |n_{k,s}|) · F_{μ,η}
where ρ is a normalization parameter that keeps the range of the texture model coefficient matrix Q approximately within [0, 1]; N(k, s) denotes the neighborhood of (k, s), defined as the rectangular area of size (2δ+1)×(2θ+1) centered on pixel (k, s), with δ and θ positive integers; |n_{k,s}| is the number of wavelet coefficients in the neighborhood of (k, s); C_{μ,η} is the luminance contrast at position (μ, η), computed in 3.1.2); and F_{μ,η} is the coefficient fluctuation degree within the (2δ+1)×(2θ+1) neighborhood, defined as:
F_{μ,η} = 0.5·F_{μ,η}^h + 0.5·F_{μ,η}^v
where the horizontal fluctuation degree is
F_{μ,η}^h = (1/(2δ+1)) · Σ_{m=μ−δ}^{μ+δ} [ Σ_{n=η−θ}^{η+θ−1} |C_{m,n} − C_{m,n+1}| / Σ_{n=η−θ}^{η+θ} |C_{m,n}| ]
and the vertical fluctuation degree is
F_{μ,η}^v = (1/(2θ+1)) · Σ_{n=η−θ}^{η+θ} [ Σ_{m=μ−δ}^{μ+δ−1} |C_{m,n} − C_{m+1,n}| / Σ_{m=μ−δ}^{μ+δ} |C_{m,n}| ]
Generally speaking, the wavelet coefficients of texture regions fluctuate strongly; those of smooth regions are small and vary gently; and those of strong edge regions exhibit a peak with essentially monotone variation on either side. The fluctuation degree of the wavelet coefficients is therefore an important indicator of whether a position in the image belongs to a texture region. In addition, the model coefficients of the border region are set to 0.
3.2) Compute the cognitive regularization parameter λ_i of each image block.
In this technical scheme, the cognitive regularization parameter λ_i of a block is computed from the mean of the texture model coefficients of the pixels of that block:
λ_i = (1/γ) · (1 − (1/b²) · Σ_{(k,s)∈patch} Q_{k,s})
where b² is the block size and γ is an adjustment factor. Since Q_{k,s} ∈ [0, 1], it follows that λ_i ∈ [0, 1/γ]; γ mainly adjusts the maximum value of λ_i. For images with little noise, γ can be set to a larger value to guarantee reconstruction quality; for noisier images, γ can be set to a smaller value so as to remove the noise effectively. This way of computing the cognitive regularization parameters requires only a single pass, avoiding the need to iterate again after the super-resolution reconstruction of each block, and thus greatly reducing the computational complexity.
The {λ_1, λ_2, ..., λ_Q} obtained in this way correspond one-to-one with U = {t_1, t_2, ..., t_Q}.
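Under the formula above, a block's λ_i needs only the mean of its Q coefficients; a one-line sketch (γ = 10 is an arbitrary illustrative value, not from the patent):

```python
import numpy as np

def cognitive_lambda(Q_patch, gamma=10.0):
    # lambda_i = (1/gamma) * (1 - mean of Q over the b x b patch);
    # gamma = 10 is an assumed default for illustration only
    return (1.0 / gamma) * (1.0 - Q_patch.mean())
```

A fully smooth block (Q near 0) gets the maximal λ_i = 1/γ, a heavily textured block (Q near 1) gets λ_i near 0, as intended.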
Step 4) obtains the high-resolution block feature vector set V = {z_1, z_2, ..., z_Q}, specifically by traversing the feature vector set U = {t_1, t_2, ..., t_Q} and operating on each vector t_i as follows:
4.1) Let y = t_i and compute a sparse representation of y over the low-resolution dictionary D_l, which can be written as:
min ‖α‖_0  s.t. ‖F·D_l·α − F·y‖_2² ≤ ε
where F is a linear feature extraction operator that accurately constrains the distance between the sparse code α and y. Since α is sufficiently sparse and D_l satisfies the Restricted Isometry Property (RIP), the above can be converted into a minimum l1-norm problem:
min ‖α‖_1  s.t. ‖F·D_l·α − F·y‖_2² ≤ ε
Using the Lagrange multiplier method, this is converted into:
min_α ‖F·D_l·α − F·y‖_2² + λ_i·‖α‖_1
where the cognitive regularization parameter λ_i, computed in step 3), balances the sparsity of the solution against the fidelity of the approximation to y. On the other hand, because blocks overlap, D_h·α should also be as close as possible to the overlapping region of the already reconstructed high-resolution blocks, so the constraints are refined to:
min ‖α‖_1  s.t. ‖F·D_l·α − F·y‖_2² ≤ ε_1 and ‖P·D_h·α − w‖_2² ≤ ε_2
where the matrix P extracts the overlap between the current block and the reconstructed high-resolution image, and w contains the overlapping pixel values of the reconstructed high-resolution image. This can in turn be optimized as:
min_α ‖D̃·α − ỹ‖_2² + λ_i·‖α‖_1
where D̃ = [F·D_l; β·P·D_h] and ỹ = [F·y; β·w]. The parameter β controls the weight of the low-resolution input against finding a high-resolution image block that matches the already reconstructed adjacent blocks.
Solving as above yields the sparse representation solution α*.
4.2) Compute x = D_h·α* and let z_i = x, giving the estimate z_i of the high-resolution block feature vector.
After the traversal is complete, the high-resolution block feature vector set V = {z_1, z_2, ..., z_Q} is obtained.
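The l1-regularized least-squares problem min_α ‖D̃·α − ỹ‖² + λ_i‖α‖_1 can be solved by iterative soft-thresholding (ISTA) — one common choice; the patent does not prescribe a particular solver:

```python
import numpy as np

def ista(D, y, lam, n_iter=200):
    # iterative soft-thresholding for  min_a ||D @ a - y||_2^2 + lam * ||a||_1
    L = np.linalg.norm(D, 2) ** 2            # spectral-norm based step size
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = a - D.T @ (D @ a - y) / L        # gradient step on the quadratic term
        a = np.sign(g) * np.maximum(np.abs(g) - lam / (2 * L), 0.0)  # shrinkage
    return a
```

With D̃ and ỹ assembled as above, α* = ista(D_tilde, y_tilde, lam_i), and the high-resolution block follows as D_h @ α*.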
The dictionary pair D_h and D_l used above is obtained by prior training, as follows:
(a) First collect a number of higher-resolution images whose content is close to the images to be tested, as the high-resolution training image set. Downsample each high-resolution image bicubically by a factor of a to obtain the corresponding low-resolution training image set. Bicubic downsampling here is the bicubic interpolation method of 1.2) with magnification factor 1/a.
(b) Obtain the feature vector sets of all low-resolution training images, by the same method as step 2). Given n low-resolution training images, this yields n vector sets, which are merged in order into one set Y_l.
(c) Obtain the feature vector sets of all high-resolution training images, as follows: first enlarge each low-resolution image L_i (i = 1, 2, ..., n) by a factor of a with the bicubic interpolation method, obtaining M_i; then compute the difference Dif_i = H_i − M_i between the high-resolution image H_i and M_i; afterwards process Dif_i in the same way as T_h1 in step 2.2), obtaining n vector sets in total, which are merged in order into one set X_h.
(d) To train the two overcomplete dictionaries D_h and D_l while guaranteeing that corresponding high- and low-resolution image blocks share the same sparse representation, set up the system of equations:
D_h = argmin_{D_h,Φ} ‖X_h − D_h·Φ‖_2² + λ‖Φ‖_1
D_l = argmin_{D_l,Φ} ‖Y_l − D_l·Φ‖_2² + λ‖Φ‖_1
and combine them into one formula:
min_{D_h,D_l,Φ} (1/b²)·‖X_h − D_h·Φ‖_2² + (1/(4b²))·‖Y_l − D_l·Φ‖_2² + λ·(1/b² + 1/(4b²))·‖Φ‖_1
where the block size is b×b and Φ is the sparse representation matrix. Merging D_h and D_l gives
min_{D_C,Φ} ‖X_C − D_C·Φ‖_2² + λ̂·‖Φ‖_1
where X_C = [(1/b)·X_h; (1/(2b))·Y_l], D_C = [(1/b)·D_h; (1/(2b))·D_l] and λ̂ = λ·(1/b² + 1/(4b²)). When one of D_C and Φ is fixed, solving for the other becomes a tractable optimization problem, so D_C and Φ can be optimized by alternating iteration, as follows:
(i) Initialize D_C as a Gaussian random matrix and normalize each column;
(ii) Fix D_C and update Φ by Φ = argmin_Φ ‖X_C − D_C·Φ‖_2² + λ̂·‖Φ‖_1;
(iii) Fix Φ and update D_C by D_C = argmin_{D_C} ‖X_C − D_C·Φ‖_2² s.t. ‖D_i‖_2² ≤ 1 for i = 1, 2, ..., K, where D_i denotes the i-th column (atom) of D_C and K the number of atoms;
(iv) Iterate (ii) and (iii) until convergence;
(v) The resulting D_C is the required overcomplete dictionary; decompose it to obtain D_h and D_l.
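A toy numpy sketch of the alternating scheme (i)–(v), using ISTA for the sparse-coding step and an unconstrained least-squares dictionary update followed by projection of the atoms onto the unit ball; all names and parameter defaults are illustrative, not from the patent:

```python
import numpy as np

def soft(x, t):
    # elementwise soft-thresholding (proximal operator of t*||.||_1)
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def train_dict(X, K, lam=0.1, n_outer=20, n_inner=50, seed=0):
    # alternating optimisation for  min ||X - D @ Phi||_F^2 + lam * ||Phi||_1
    # subject to ||d_k||_2 <= 1 for every atom d_k
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((X.shape[0], K))
    D /= np.linalg.norm(D, axis=0)                 # (i) Gaussian init, unit atoms
    Phi = np.zeros((K, X.shape[1]))
    for _ in range(n_outer):
        L = max(np.linalg.norm(D, 2) ** 2, 1e-8)   # (ii) sparse coding via ISTA
        for _ in range(n_inner):
            G = Phi - D.T @ (D @ Phi - X) / L
            Phi = soft(G, lam / (2 * L))
        Dn = X @ np.linalg.pinv(Phi)               # (iii) least-squares update of D
        D = Dn / np.maximum(np.linalg.norm(Dn, axis=0), 1.0)  # enforce ||d_k|| <= 1
    return D, Phi
```

Production implementations would use a dedicated sparse-coding/dictionary-learning package; this sketch only mirrors the structure of steps (i)–(iv).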
Step 5) recombines the high-resolution block feature vector set V = {z_1, z_2, ..., z_Q} and the matrix Y into the high-resolution image Z', specifically:
5.1) Split each high-resolution block feature vector z_i ∈ V into b vectors of equal length, then merge these b vectors in order as row vectors into a block Z_i of size b×b, where i = 1, 2, ..., Q. This yields the block matrix set {Z_1, Z_2, ..., Z_Q}.
5.2) Merge all block matrices Z_i into one matrix Z' ∈ R^{a(N−2)×a(M−2)}, as follows:
Let Z' = 0 and E = 0, with Z' ∈ R^{a(N−2)×a(M−2)} and E ∈ R^{a(N−2)×a(M−2)}, and let i = 1. Traverse the elements of Z' from left to right and top to bottom; if the traversed pixel coordinate (p, q) satisfies
1 ≤ p ≤ a(N−2)−b+1 and 1 ≤ q ≤ a(M−2)−b+1
with p and q integers, then let:
Z'(p:(p+b−1), q:(q+b−1)) = Z'(p:(p+b−1), q:(q+b−1)) + Z_i
E(p:(p+b−1), q:(q+b−1)) = E(p:(p+b−1), q:(q+b−1)) + 1
i = i + 1
Each element of matrix E records the accumulation count of the pixel at the corresponding position of Z'. After the traversal, the pixels where blocks overlap are averaged: divide each element of Z' by the element at the corresponding position of E, finally obtaining Z'.
5.3) Add Z' to the central a(N−2)×a(M−2) region of Y and assign the resulting image to Z', obtaining Z' ∈ R^{aN×aM}.
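Steps 5.1)–5.2) amount to pasting each block at its scan position and dividing by the per-pixel accumulation count; a direct sketch (names are illustrative):

```python
import numpy as np

def recombine(blocks, H, W, b):
    # paste the b x b blocks at every valid top-left corner in scan order,
    # count per-pixel contributions in E, and average the overlaps
    Z = np.zeros((H, W))
    E = np.zeros((H, W))
    i = 0
    for p in range(H - b + 1):
        for q in range(W - b + 1):
            Z[p:p+b, q:q+b] += blocks[i]
            E[p:p+b, q:q+b] += 1.0
            i += 1
    return Z / np.maximum(E, 1.0)
```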
Step 6) applies the iterative back-projection method to perform global reconstruction on Z', further removing blur and noise, to finally obtain the high-resolution image Z. The iterative back-projection method solves the following optimization problem:
Z = argmin_Z ‖Θ·Ψ·Z − Y_0‖_2² + c·‖Z − Z'‖_2²
where Ψ is the blur matrix, Θ is the downsampling matrix, and Y_0 is the input low-resolution image of size N×M.
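A gradient-descent sketch of the back-projection objective, where the operator ΘΨ is approximated by a×a box averaging — an assumption, since the patent leaves Ψ and Θ unspecified:

```python
import numpy as np

def downsample_blur(Z, a):
    # stand-in for Theta*Psi: a x a box blur followed by decimation
    H, W = Z.shape
    return Z.reshape(H // a, a, W // a, a).mean(axis=(1, 3))

def upsample(Y, a):
    # adjoint of box-average downsampling, up to the 1/a^2 factor
    return np.kron(Y, np.ones((a, a)))

def ibp(Zp, Y0, a, c=0.1, step=0.5, n_iter=50):
    # gradient descent on  ||ThetaPsi Z - Y0||^2 + c * ||Z - Z'||^2
    Z = Zp.copy()
    for _ in range(n_iter):
        r = downsample_blur(Z, a) - Y0
        Z -= step * (upsample(r, a) / a**2 + c * (Z - Zp))
    return Z
```

If Z' is already consistent with the observation (its downsampling equals Y_0), the gradient vanishes and the iteration leaves Z' unchanged, as the objective requires.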
Step 7) outputs the high-resolution image Z, as follows:
If the input image is a grayscale image, the Z obtained in step 6) is the image obtained after super-resolution reconstruction.
If the input image is a color image, let I_Y = Z and transform (I_Y, I_Cb, I_Cr) back to RGB color space, that is:
I_R = I_Y + 1.402·I_Cr
I_G = I_Y − 0.344·I_Cb − 0.714·I_Cr
I_B = I_Y + 1.772·I_Cb
Then Z = (I_R, I_G, I_B) ∈ R^{aN×aM×3} is the final high-resolution image.
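The forward transform of step 1.2) and this inverse can be sketched together; the pair is consistent to roughly three decimal places, since the coefficients used in the patent are rounded:

```python
def rgb_to_ycbcr(r, g, b):
    # forward transform of step 1.2)
    y = 0.299 * r + 0.587 * g + 0.114 * b
    return y, 0.564 * (b - y), 0.713 * (r - y)

def ycbcr_to_rgb(y, cb, cr):
    # inverse transform of step 7)
    return (y + 1.402 * cr,
            y - 0.344 * cb - 0.714 * cr,
            y + 1.772 * cb)
```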
Brief description of the drawings
Fig. 1 is the flowchart of the present invention.
Fig. 2 compares the test results of the present invention with other algorithms, where (a) is the low-resolution image, (b) is the true high-resolution image, (c) is the result of the method using a fixed regularization parameter value (λ = 0.2), and (d) is the result of the method of the invention.
Embodiment
The specific embodiment of the present invention applied to super-resolution reconstruction of facial images is described in detail below with reference to the accompanying drawings.
This embodiment uses face pictures as the training set and test images. The training set comes from the Georgia Tech Face Database, which contains the face pictures of 50 people, with 15 images per person at different angles and with different expressions, in JPG format; the size of the face in each image is about 150×150 pixels.
This embodiment takes the first two pictures of each of the first 40 people in this face database, 80 images in total, as the training set. The test image is chosen at random from the images of the remaining 10 people; this embodiment selects Fig. 2(b), of size 136×190, as the test image. The desired magnification factor is set to 2, and the detailed process is as follows:
1) Downsample Fig. 2(b) bicubically by a factor of 2 to obtain Fig. 2(a) of size 68×95. Fig. 2(a) serves as the input low-resolution image of the super-resolution method, while Fig. 2(b) serves as the true high-resolution image against which the reconstructed high-resolution image is compared. The selected training images serve as the high-resolution training image set; downsampling them with bicubic interpolation by a factor of 2 yields the corresponding low-resolution training image set.
Since Fig. 2(a) is a color image, first convert its color space from RGB to YCbCr, then enlarge each of the three components Y, Cb and Cr by a factor of 2 with the bicubic interpolation algorithm, obtaining the matrices Y ∈ R^{136×190}, Cb ∈ R^{136×190} and Cr ∈ R^{136×190}; Cb and Cr are stored for later use.
2) establishing point block size is 5 × 5, and overlapping size is 4, obtains the set of eigenvectors U{t of Y 1, t 2..., t 23296, specific as follows:
2.1) 1 rank of horizontal direction and 1 rank of 2 ladder degree eigenmatrixes and vertical direction and the 2 ladder degree eigenmatrixes of difference compute matrix Y, obtain 4 eigenmatrixes and be respectively T h1, T h2, T v1and T v2;
2.2) Matrix T_h1 is processed as follows:
2.2.1) Delete rows 1~2 and 135~136 and columns 1~2 and 189~190 of T_h1, obtaining T'_h1 ∈ R^{132×186};
2.2.2) Traverse the elements of T'_h1 from left to right and top to bottom. If the traversed pixel coordinate (p, q), with p and q integers, satisfies

1 ≤ p ≤ 128 and 1 ≤ q ≤ 182

take the 5 × 5 block whose upper-left vertex is this pixel, and concatenate its 5 row vectors in order into the vector t_h1^i ∈ R^25. This yields the vector set U_h1 = {t_h1^1, t_h1^2, ..., t_h1^23296};
2.3) Apply the same operation as in 2.2) to T_h2, T_v1 and T_v2, obtaining the vector sets U_h2, U_v1 and U_v2 respectively;
2.4) Traverse the four sets U_h1, U_h2, U_v1 and U_v2 simultaneously and concatenate the vectors t_h1^i, t_h2^i, t_v1^i and t_v2^i into the vector

t_i = [t_h1^i; t_h2^i; t_v1^i; t_v2^i] ∈ R^100, where i = 1, 2, ..., 23296.

This gives the feature vector set U = {t_1, t_2, ..., t_23296} of matrix Y.
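The feature extraction of step 2 can be sketched as below. This is an illustrative numpy sketch, not the patent's implementation: the four filters are assumed to be the standard first/second-order gradient operators used in sparse-coding super-resolution (the patent's printed f_1 appears to have lost a minus sign), and the border trimming of step 2.2.1) is omitted for brevity.

```python
import numpy as np

# Sketch of step 2: per-pixel gradient maps, then overlapping 5x5 patches of
# all four maps stacked into one 100-dimensional feature vector per patch.

def gradient_features(Y):
    """First/second-order horizontal and vertical gradient maps of Y."""
    f1 = np.array([-1.0, 0.0, 1.0])            # assumed 1st-order filter
    f2 = np.array([1.0, 0.0, -2.0, 0.0, 1.0])  # 2nd-order filter
    Th1 = np.apply_along_axis(lambda r: np.convolve(r, f1, mode='same'), 1, Y)
    Tv1 = np.apply_along_axis(lambda c: np.convolve(c, f1, mode='same'), 0, Y)
    Th2 = np.apply_along_axis(lambda r: np.convolve(r, f2, mode='same'), 1, Y)
    Tv2 = np.apply_along_axis(lambda c: np.convolve(c, f2, mode='same'), 0, Y)
    return Th1, Th2, Tv1, Tv2

def extract_patch_vectors(T, b=5):
    """All overlapping b x b blocks of T (stride 1), each flattened row-wise."""
    H, W = T.shape
    return np.array([T[p:p + b, q:q + b].ravel()
                     for p in range(H - b + 1)
                     for q in range(W - b + 1)])

def feature_vectors(Y, b=5):
    """Stack the four per-filter patch vectors into one 4*b*b feature vector."""
    return np.hstack([extract_patch_vectors(T, b) for T in gradient_features(Y)])
```

For the trimmed 132 × 186 maps of the embodiment this stride-1 scheme yields 128 · 182 = 23296 vectors of length 4 · 25 = 100, matching the counts quoted above.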
3) Compute the cognitive regularization parameter λ_i of each block of matrix Y. The detailed process is as follows:
3.1) According to the human visual cognition theory, compute the texture cognition model coefficient of each pixel of matrix Y, in the following steps:
3.1.1) Apply a wavelet transform to matrix Y; each level of the transform produces 4 sub-images of size 68 × 95: the low-frequency approximation component L, the vertical high-frequency detail component H_v, the horizontal high-frequency detail component H_h, and the diagonal high-frequency detail component H_d;
3.1.2) Compute the contrast image C ∈ R^{68×95}:

C = (C_v + C_h + C_d) / 3

where C_v, C_h and C_d are the vertical, horizontal and diagonal contrasts, respectively. Magnify C by a factor of 2 with bicubic interpolation, obtaining the contrast image C ∈ R^{136×190} of matrix Y;
3.1.3) Compute the cognitive texture model coefficient Q_{k,s} at pixel (k, s), with the formula:

Q_{k,s} = ρ Σ_{(μ,η) ∈ N(k,s)} (|C_{μ,η}| / 9) F_{μ,η}

where ρ is a normalization parameter that keeps the range of the texture model coefficient matrix Q approximately within [0, 1]; N(k, s) denotes the 3 × 3 rectangular neighborhood centered at pixel (k, s); C_{μ,η} is the contrast at position (μ, η); and F_{μ,η} is the coefficient fluctuation degree over the 3 × 3 neighborhood. The formula for F_{μ,η} is:
F_{μ,η} = 0.5 F_{μ,η}^h + 0.5 F_{μ,η}^v

where the horizontal fluctuation degree F_{μ,η}^h is computed as:

F_{μ,η}^h = (1/3) Σ_{m=μ-1}^{μ+1} [ Σ_{n=η-1}^{η} |C_{m,n} − C_{m,n+1}| / Σ_{n=η-1}^{η+1} |C_{m,n}| ]

and the vertical fluctuation degree F_{μ,η}^v is computed as:

F_{μ,η}^v = (1/3) Σ_{n=η-1}^{η+1} [ Σ_{m=μ-1}^{μ} |C_{m,n} − C_{m+1,n}| / Σ_{m=μ-1}^{μ+1} |C_{m,n}| ]
3.2) Compute the cognitive regularization parameter λ_i of each block as the average of the texture model coefficients over the 5 × 5 image block:

λ_i = (1/γ) (1 − (1/5²) Σ_{(k,s) ∈ patch} Q_{k,s})

With γ = 5 this yields λ_i ∈ [0, 0.2].
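The texture statistics of step 3 can be sketched directly from the formulas above. This is an illustrative sketch, not the patent's code: ρ is taken as an assumed constant 1, and border pixels are not handled.

```python
import numpy as np

# Sketch of step 3 for one 5x5 patch: fluctuation degree F, texture
# coefficient Q, and the cognitive regularization parameter lambda_i.

def fluctuation(C, mu, eta):
    """F = 0.5*F_h + 0.5*F_v over the 3x3 neighborhood of (mu, eta)."""
    Fh = sum(sum(abs(C[m, n] - C[m, n + 1]) for n in range(eta - 1, eta + 1)) /
             sum(abs(C[m, n]) for n in range(eta - 1, eta + 2))
             for m in range(mu - 1, mu + 2)) / 3.0
    Fv = sum(sum(abs(C[m, n] - C[m + 1, n]) for m in range(mu - 1, mu + 1)) /
             sum(abs(C[m, n]) for m in range(mu - 1, mu + 2))
             for n in range(eta - 1, eta + 2)) / 3.0
    return 0.5 * Fh + 0.5 * Fv

def texture_coeff(C, k, s, rho=1.0):
    """Q_{k,s}: sum of |C|*F/9 over the 3x3 neighborhood (9 coefficients)."""
    return rho * sum(abs(C[mu, eta]) * fluctuation(C, mu, eta) / 9.0
                     for mu in range(k - 1, k + 2)
                     for eta in range(s - 1, s + 2))

def cognitive_lambda(Q_patch, gamma=5.0, b=5):
    """lambda_i = (1/gamma) * (1 - mean of Q over the b x b patch)."""
    return (1.0 / gamma) * (1.0 - np.sum(Q_patch) / b ** 2)
```

A flat (texture-free) region gives F = 0 and hence Q = 0, so λ_i takes its maximum 1/γ = 0.2; a fully textured patch with Q ≈ 1 everywhere drives λ_i toward 0, i.e. textured blocks are regularized less.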
4) Traverse U = {t_1, t_2, ..., t_23296} and process each t_i (i = 1, 2, ..., 23296) as follows:
First solve:

min_α ‖D̃ α − ỹ‖²_2 + λ_i ‖α‖_1

where D̃ = [F D_l; β P D_h] and ỹ = [F y; β w]; F is the linear feature extraction operator, matrix P extracts the overlap between the current block and the previously reconstructed high-resolution image blocks, w contains the overlapping values of the reconstructed high-resolution image blocks, and the parameter is set to β = 1. This gives the sparse representation solution α*.
Then compute x = D_h α* and set z_i = x, giving the estimate z_i of the high-resolution block feature vector.
After the traversal completes, the high-resolution block feature vector set V = {z_1, z_2, ..., z_23296} is obtained.
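The per-block l1-regularized least-squares problem above can be solved with any standard l1 solver; the patent does not prescribe one. The following is a minimal ISTA (iterative shrinkage-thresholding) sketch under that assumption, with illustrative names.

```python
import numpy as np

# Minimal ISTA sketch for the per-patch problem
#     min_alpha ||D @ alpha - y||^2 + lam * ||alpha||_1

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(D, y, lam, n_iter=500):
    # Step size 1/L, where L = 2*||D||_2^2 bounds the gradient's Lipschitz constant.
    L = 2.0 * np.linalg.norm(D, 2) ** 2
    t = 1.0 / L
    alpha = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = 2.0 * D.T @ (D @ alpha - y)      # gradient of the data term
        alpha = soft_threshold(alpha - t * grad, lam * t)
    return alpha

# The high-resolution patch feature is then x = D_h @ alpha.
```

Using a per-block λ_i here (rather than a fixed λ) is exactly where the cognitive regularization parameter enters the reconstruction.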
The dictionary pair D_h and D_l used above is obtained by prior training, as follows:
First, the training set of this embodiment comprises the 80 images described above, which form the high-resolution training image set; shrinking each high-resolution image by a factor of 2 with bicubic down-sampling yields the corresponding low-resolution training image set;
Second, obtain the feature vector sets of all low-resolution training images, with the same method as step 2); this gives 80 vector sets, which are merged in order into one set Y_l;
Third, obtain the feature vector sets of all high-resolution training images as follows: first magnify each low-resolution image L_i (i = 1, 2, ..., 80) by a factor of 2 with the bicubic interpolation method, obtaining M_i (i = 1, 2, ..., 80); then compute the difference of the high-resolution image H_i and M_i, obtaining Dif_i = H_i − M_i; then process each Dif_i in the same way as T_h1 in step 2.2), obtaining 80 vector sets, which are merged in order into one set X_h;
Finally, to train the two over-complete dictionaries D_h and D_l for high and low resolution such that corresponding high- and low-resolution image blocks share the same sparse representation, solve:

min_{D_h, D_l, Φ} ‖X_C − D_C Φ‖²_2 + λ̂ ‖Φ‖_1

where

X_C = [(1/5) X_h; (1/10) Y_l],  D_C = [(1/5) D_h; (1/10) D_l],  λ̂ = λ (1/25 + 1/100)

with λ = 0.15, and the dimension of the sparse representation vector set to 1024. Iterative optimization yields D_C, which is then split into D_h and D_l.
5) Split each high-resolution block feature vector z_i into 5 equal-length vectors and merge these 5 vectors, in order, as the row vectors of a 5 × 5 block Z_i, where i = 1, 2, ..., 23296, giving the block set {Z_1, Z_2, ..., Z_23296}. All blocks Z_i are merged into one matrix as follows:
Let Z' = 0 and E = 0 with Z' ∈ R^{132×186} and E ∈ R^{132×186}, and let i = 1. Traverse the elements of Z' from left to right and top to bottom. If the traversed pixel coordinate (p, q), with p and q integers, satisfies

1 ≤ p ≤ 128 and 1 ≤ q ≤ 182

then set:

Z'(p:(p+4), q:(q+4)) = Z'(p:(p+4), q:(q+4)) + Z_i
i = i + 1

Each element of matrix E records the accumulation count of the pixel at the corresponding position of matrix Z'. After the traversal, the superimposed pixels in the overlaps are averaged by dividing each element of matrix Z' by the element at the corresponding position of matrix E, giving Z' ∈ R^{132×186}. Add Z' to the central 132 × 186 region of Y, assign the result to Z', and obtain Z' ∈ R^{136×190}.
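The overlap-averaging merge of step 5 can be sketched as below; the accumulator and counter matrices play the roles of Z' and E above. Function names are illustrative.

```python
import numpy as np

# Sketch of step 5: accumulate overlapping b x b blocks into Z' and divide by
# a counter matrix E holding, per pixel, how many blocks covered it.

def merge_blocks(blocks, shape, b=5):
    Zp = np.zeros(shape)
    E = np.zeros(shape)
    H, W = shape
    i = 0
    for p in range(H - b + 1):            # left-to-right, top-to-bottom
        for q in range(W - b + 1):
            Zp[p:p + b, q:q + b] += blocks[i]
            E[p:p + b, q:q + b] += 1.0
            i += 1
    return Zp / E                          # average over overlapping pixels
```

A useful sanity check of this scheme: extracting every stride-1 block of an image and merging them back must reproduce the image exactly, because each pixel is then the average of identical copies of itself.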
6) Apply the iterative back-projection method to Z' for global reconstruction, further removing blur and noise, to obtain Z.
7) Let I_Y = Z and transform (I_Y, I_Cb, I_Cr) to RGB color space, that is:

I_R = I_Y + 1.402 I_Cr
I_G = I_Y − 0.344 I_Cb − 0.714 I_Cr
I_B = I_Y + 1.772 I_Cb

Then Z = (I_R, I_G, I_B) ∈ R^{136×190×3} is the final high-resolution image, i.e. Fig. 2(d).
In Fig. 2, panels (c) and (d) are the results of applying different super-resolution reconstruction methods to panel (a): (c) is the result of the fixed regularization parameter method (λ = 0.2), and (d) is the result of the method of the invention. Their PSNR values relative to the original image (b) are 38.1128 and 39.0157 respectively, showing that the reconstruction of the method of the invention is better than that of the fixed regularization parameter method.
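The PSNR figures quoted above compare each reconstruction with the ground-truth image. For 8-bit images the standard definition is PSNR = 10·log10(255² / MSE); a minimal sketch:

```python
import math

# Peak signal-to-noise ratio between two equally sized images given as flat
# sequences of pixel values (peak = 255 for 8-bit images).

def psnr(img_a, img_b, peak=255.0):
    diffs = [(a - b) ** 2 for a, b in zip(img_a, img_b)]
    mse = sum(diffs) / len(diffs)
    return 10.0 * math.log10(peak ** 2 / mse)
```

For example, a uniform error of 10 gray levels gives 20·log10(255/10) ≈ 28.13 dB, of the same order as the values reported above.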

Claims (8)

1. An image super-resolution reconstruction method based on cognitive regularization parameters, characterized in that: first, the low-resolution image to be processed is divided into blocks, and a cognitive regularization parameter is computed for each image block from its texture characteristics according to the human visual cognition theory; then, the constructed cognitive regularization parameters are introduced into the solution of the sparse representation, and super-resolution reconstruction is performed on all blocks in combination with the sparse-representation-based reconstruction method; finally, all blocks are recombined to obtain the high-resolution image.
2. The image super-resolution reconstruction method based on cognitive regularization parameters according to claim 1, characterized in that the method comprises the following steps:
1) input a single low-resolution image Y_0 of size N × M and magnify it by a factor of a with the bicubic interpolation method, where a is the desired magnification factor, a positive integer with a > 1, obtaining matrix Y ∈ R^{aN×aM};
2) extract features from matrix Y block by block, obtaining the feature vector set U = {t_1, t_2, ..., t_Q} of matrix Y, where Q = a(N−4) · a(M−4);
3) compute the cognitive regularization parameter λ_i of each block of matrix Y, where i = 1, 2, ..., Q;
Specifically:
3.1) according to the human visual cognition theory, compute the texture cognition model coefficient of each pixel of matrix Y:
3.1.1) apply a wavelet transform to matrix Y; each level of the transform produces 4 sub-images of half size: the low-frequency approximation component L, the vertical high-frequency detail component H_v, the horizontal high-frequency detail component H_h, and the diagonal high-frequency detail component H_d;
3.1.2) compute the contrast image C:

C = (C_v + C_h + C_d) / 3

where C_v, C_h and C_d are the vertical, horizontal and diagonal contrasts, respectively;
magnify C by a factor of 2 with the bicubic interpolation method of step 1), obtaining the contrast image C ∈ R^{aN×aM} of matrix Y;
3.1.3) compute the cognitive texture model coefficient Q_{k,s} at pixel position (k, s), with the formula:

Q_{k,s} = ρ Σ_{(μ,η) ∈ N(k,s)} (|C_{μ,η}| / |n_{k,s}|) F_{μ,η}

where ρ is a normalization parameter that keeps the range of the texture model coefficient matrix Q approximately within [0, 1]; N(k, s) denotes the rectangular neighborhood of size (2δ+1) × (2θ+1) centered at pixel (k, s), with δ and θ positive integers; |n_{k,s}| is the number of wavelet coefficients in the neighborhood of (k, s); C_{μ,η} denotes the luminance contrast at position (μ, η); and F_{μ,η} denotes the coefficient fluctuation degree over the (2δ+1) × (2θ+1) neighborhood, defined as:
F_{μ,η} = 0.5 F_{μ,η}^h + 0.5 F_{μ,η}^v

where the horizontal fluctuation degree F_{μ,η}^h is defined as:

F_{μ,η}^h = (1/(2δ+1)) Σ_{m=μ-δ}^{μ+δ} [ Σ_{n=η-θ}^{η+θ-1} |C_{m,n} − C_{m,n+1}| / Σ_{n=η-θ}^{η+θ} |C_{m,n}| ]

and the vertical fluctuation degree F_{μ,η}^v is defined as:

F_{μ,η}^v = (1/(2θ+1)) Σ_{n=η-θ}^{η+θ} [ Σ_{m=μ-δ}^{μ+δ-1} |C_{m,n} − C_{m+1,n}| / Σ_{m=μ-δ}^{μ+δ} |C_{m,n}| ]
3.2) compute the cognitive regularization parameter λ_i of each block, with the formula:

λ_i = (1/γ) (1 − (1/b²) Σ_{(k,s) ∈ patch} Q_{k,s})

where b × b is the block size and γ is an adjustment factor;
4) traverse the feature vector set U = {t_1, t_2, ..., t_Q}; for each vector t_i (1 ≤ i ≤ Q), set its corresponding cognitive regularization parameter λ_i, solve the sparse representation, and perform super-resolution reconstruction to obtain the feature vector z_i of the corresponding high-resolution image; after all feature vectors t_i have been traversed, the block feature vector set V = {z_1, z_2, ..., z_Q} of the reconstructed high-resolution image is obtained;
5) recombine the high-resolution image Z' from the block feature vector set V = {z_1, z_2, ..., z_Q} of the high-resolution image and matrix Y;
6) apply the iterative back-projection method to the high-resolution image Z' for global reconstruction, removing blur and noise, to obtain the optimized high-resolution image Z;
7) output the optimized high-resolution image Z.
3. The image super-resolution reconstruction method based on cognitive regularization parameters according to claim 2, characterized in that step 1) is specifically:
1.1) input a single low-resolution image of size N × M;
1.2) determine whether the low-resolution image is a grayscale image or a color image from the dimension d of the image matrix space:
when d = 2, the low-resolution image is a grayscale image; magnify it by a factor of a with the bicubic interpolation method, obtaining matrix Y ∈ R^{aN×aM};
when d = 3, the low-resolution image is a color image; first convert it from RGB to YCbCr color space, with the conversion formulas:

Y = 0.299*R + 0.587*G + 0.114*B
Cb = 0.564*(B − Y)
Cr = 0.713*(R − Y)

where R, G and B denote the three components of RGB space and Y, Cb and Cr denote the three components of YCbCr space; then magnify each of the three components Y, Cb and Cr by a factor of a with the bicubic interpolation algorithm, obtaining matrices Y ∈ R^{aN×aM}, Cb ∈ R^{aN×aM} and Cr ∈ R^{aN×aM}; save matrices Cb and Cr.
4. The image super-resolution reconstruction method based on cognitive regularization parameters according to claim 2, characterized in that step 2) is specifically:
2.1) compute the first-order and second-order gradient feature matrices of matrix Y in the horizontal direction and in the vertical direction, obtaining the 4 feature matrices T_h1, T_h2, T_v1 and T_v2, each of the same dimensions as Y, as follows:
2.1.1) the first-order horizontal gradient matrix T_h1 of matrix Y is computed as follows:
let the filter operator be f_1 = [−1, 0, 1] and compute the convolution:

T_1 = Y * f_1

obtaining T_1 ∈ R^{aN×(aM+2)}; delete column 1 and column (aM+2), obtaining T_h1 ∈ R^{aN×aM};
2.1.2) the first-order vertical gradient matrix T_v1 of matrix Y is computed as follows:
let the filter operator be f_2 = f_1^T and compute the convolution:

T_2 = Y * f_2

obtaining T_2 ∈ R^{(aN+2)×aM}; delete row 1 and row (aN+2), obtaining T_v1 ∈ R^{aN×aM};
2.1.3) the second-order horizontal gradient matrix T_h2 of matrix Y is computed as follows:
let the filter operator be f_3 = [1, 0, −2, 0, 1] and compute the convolution:

T_3 = Y * f_3

obtaining T_3 ∈ R^{aN×(aM+4)}; delete columns 1, 2, (aM+3) and (aM+4), obtaining T_h2 ∈ R^{aN×aM};
2.1.4) the second-order vertical gradient matrix T_v2 of matrix Y is computed as follows:
let the filter operator be f_4 = f_3^T and compute the convolution:

T_4 = Y * f_4

obtaining T_4 ∈ R^{(aN+4)×aM}; delete rows 1, 2, (aN+3) and (aN+4), obtaining T_v2 ∈ R^{aN×aM};
2.2) process matrix T_h1 as follows: trim the edges of T_h1 by a width of a, obtaining matrix T'_h1 ∈ R^{a(N−2)×a(M−2)}; traverse the elements of T'_h1 in order, take the blocks and convert each into a vector t_h1^i, where i = 1, 2, ..., Q and Q = a(N−4) · a(M−4), obtaining the vector set U_h1 = {t_h1^1, t_h1^2, ..., t_h1^Q}. These operations are implemented as follows:
2.2.1) trimming the edges of T_h1 by a width of a means deleting rows 1 ~ a, rows aN−a+1 ~ aN, columns 1 ~ a and columns aM−a+1 ~ aM of T_h1, obtaining T'_h1 ∈ R^{a(N−2)×a(M−2)};
2.2.2) traversing in order means traversing the elements of T'_h1 from left to right and top to bottom;
choosing a block means: if the traversed pixel coordinate (p, q), with p and q integers, satisfies

1 ≤ p ≤ a(N−2) − b + 1 and 1 ≤ q ≤ a(M−2) − b + 1

and the block size is b × b, take the block whose upper-left vertex is this pixel and concatenate its b row vectors in order into the vector t_h1^i ∈ R^{b²}; this yields the vector set U_h1 = {t_h1^1, t_h1^2, ..., t_h1^Q};
2.3) apply the same operation to T_h2, T_v1 and T_v2, obtaining the vector sets U_h2 = {t_h2^1, ..., t_h2^Q}, U_v1 = {t_v1^1, ..., t_v1^Q} and U_v2 = {t_v2^1, ..., t_v2^Q};
2.4) traverse the four sets U_h1, U_h2, U_v1 and U_v2 simultaneously and concatenate the vectors t_h1^i, t_h2^i, t_v1^i and t_v2^i into the vector

t_i = [t_h1^i; t_h2^i; t_v1^i; t_v2^i] ∈ R^{4b²}, where i = 1, 2, ..., Q;

finally the feature vector set U = {t_1, t_2, ..., t_Q} of matrix Y is obtained.
5. The image super-resolution reconstruction method based on cognitive regularization parameters according to claim 2, characterized in that step 4) is specifically:
obtain the high-resolution block feature vector set V = {z_1, z_2, ..., z_Q} by traversing the feature vector set U = {t_1, t_2, ..., t_Q} and processing each vector t_i as follows:
4.1) let y = t_i and represent y sparsely over the low-resolution image dictionary D_l, written as:

min ‖α‖_0
s.t. ‖F D_l α − F y‖²_2 ≤ ε

where F is the linear feature extraction operator;
the above formula is relaxed to the minimum l1-norm problem:

min ‖α‖_1
s.t. ‖F D_l α − F y‖²_2 ≤ ε

which the Lagrange multiplier method converts into:

min_α ‖F D_l α − F y‖²_2 + λ_i ‖α‖_1

where the cognitive regularization parameter λ_i is computed by step 3);
the constraint conditions are refined to:

min ‖α‖_1
s.t. ‖F D_l α − F y‖²_2 ≤ ε_1
     ‖P D_h α − w‖²_2 ≤ ε_2

where matrix P extracts the overlap between the current block and the reconstructed high-resolution image, and w contains the overlapping values of the reconstructed high-resolution image;
this is optimized as:

min_α ‖D̃ α − ỹ‖²_2 + λ_i ‖α‖_1

where D̃ = [F D_l; β P D_h] and ỹ = [F y; β w]; the parameter β weights the low-resolution input constraint so that the reconstructed high-resolution image block matches the adjacent blocks already reconstructed;
by the above method, the sparse representation solution α* is obtained;
4.2) compute x = D_h α* and set z_i = x, obtaining the estimate z_i of the high-resolution block feature vector.
After the traversal completes, the high-resolution block feature vector set V = {z_1, z_2, ..., z_Q} is obtained;
The dictionary pair D_h and D_l used above is obtained by prior training, as follows:
(a) first collect a number of higher-resolution images whose content is close to the image under test as the high-resolution training image set; shrinking each high-resolution image by a factor of a with bicubic down-sampling yields the corresponding low-resolution training image set; bicubic down-sampling here means the bicubic interpolation method of step 1.2) with magnification factor 1/a;
(b) obtain the feature vector sets of all low-resolution training images, with the same method as step 2); given n low-resolution training images, this gives n vector sets, which are merged in order into one set Y_l;
(c) obtain the feature vector sets of all high-resolution training images as follows: first magnify each low-resolution image L_i (i = 1, 2, ..., n) by a factor of a with the bicubic interpolation method, obtaining M_i (i = 1, 2, ..., n); then compute the difference of the high-resolution image H_i and M_i, obtaining Dif_i = H_i − M_i; then process each Dif_i in the same way as T_h1 in step 2.2), obtaining n vector sets, which are merged in order into one set X_h;
(d) to train the two over-complete dictionaries D_h and D_l for high and low resolution such that corresponding high- and low-resolution image blocks share the same sparse representation, write the system of equations:

D_h = argmin_{D_h, Φ} ‖X_h − D_h Φ‖²_2 + λ ‖Φ‖_1
D_l = argmin_{D_l, Φ} ‖Y_l − D_l Φ‖²_2 + λ ‖Φ‖_1

and combine them into one formula:

min_{D_h, D_l, Φ} (1/b²) ‖X_h − D_h Φ‖²_2 + (1/(4b²)) ‖Y_l − D_l Φ‖²_2 + λ (1/b² + 1/(4b²)) ‖Φ‖_1

where the image block size is b × b and Φ is the sparse representation matrix;
merging D_h and D_l:

min_{D_h, D_l, Φ} ‖X_C − D_C Φ‖²_2 + λ̂ ‖Φ‖_1

where

X_C = [(1/b) X_h; (1/(2b)) Y_l],  D_C = [(1/b) D_h; (1/(2b)) D_l],  λ̂ = λ (1/b² + 1/(4b²));

when one of D_C and Φ is fixed, solving for the other becomes a tractable optimization problem, so D_C and Φ are optimized by alternating iteration, as follows:
(i) initialize D_C as a Gaussian random matrix and normalize each column;
(ii) fix D_C and update Φ by:

Φ = argmin_Φ ‖X_C − D_C Φ‖²_2 + λ̂ ‖Φ‖_1

(iii) fix Φ and update D_C by:

D_C = argmin_{D_C} ‖X_C − D_C Φ‖²_2  s.t. ‖D_i‖²_2 ≤ l, i = 1, 2, ..., K

where D_i denotes the i-th column of D_C and K denotes the number of columns of D_C;
(iv) iterate (ii) and (iii) until convergence;
(v) the resulting D_C is the required over-complete dictionary; split it to obtain D_h and D_l.
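The alternating scheme (i)-(v) can be sketched compactly as below. This is an illustrative sketch, not the patent's trainer: sizes, iteration counts, the ISTA inner solver, and the least-squares-plus-renormalization dictionary update are all assumed choices.

```python
import numpy as np

# Compact sketch of alternating dictionary training: fix D_C and update the
# sparse codes Phi by ISTA, then fix Phi and refit D_C by least squares with
# the column norms capped at 1 (assuming the constraint bound l = 1).

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def update_codes(X, D, lam, n_iter=100):
    L = 2.0 * np.linalg.norm(D, 2) ** 2 + 1e-12
    Phi = np.zeros((D.shape[1], X.shape[1]))
    for _ in range(n_iter):
        Phi = soft(Phi - (2.0 / L) * D.T @ (D @ Phi - X), lam / L)
    return Phi

def update_dictionary(X, Phi):
    # Least-squares fit D = X Phi^T (Phi Phi^T)^+, then cap column norms at 1.
    D = X @ Phi.T @ np.linalg.pinv(Phi @ Phi.T)
    norms = np.maximum(np.linalg.norm(D, axis=0), 1.0)
    return D / norms

def train_dictionary(X, n_atoms=8, lam=0.15, n_outer=10, seed=0):
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((X.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)            # (i) normalized Gaussian init
    Phi = np.zeros((n_atoms, X.shape[1]))
    for _ in range(n_outer):
        Phi = update_codes(X, D, lam)         # (ii)
        D = update_dictionary(X, Phi)         # (iii)
    return D, Phi                             # (iv)-(v): D is the dictionary
```

In the patent's setting X would be the stacked X_C, and the trained D_C is split row-wise back into D_h and D_l.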
6. The image super-resolution reconstruction method based on cognitive regularization parameters according to claim 2, characterized in that step 5) is specifically:
5.1) split each high-resolution block feature vector z_i ∈ V into b equal-length vectors and merge these b vectors, in order, as the row vectors of a block Z_i of size b × b, where i = 1, 2, ..., Q, obtaining the block set {Z_1, Z_2, ..., Z_Q};
5.2) merge all blocks Z_i into one matrix Z' ∈ R^{a(N−2)×a(M−2)}, as follows:
let Z' = 0 and E = 0 with Z' ∈ R^{a(N−2)×a(M−2)} and E ∈ R^{a(N−2)×a(M−2)}, and let i = 1; traverse the elements of Z' from left to right and top to bottom; if the traversed pixel coordinate (p, q), with p and q integers, satisfies

1 ≤ p ≤ a(N−2) − b + 1 and 1 ≤ q ≤ a(M−2) − b + 1

then set:

Z'(p:(p+b−1), q:(q+b−1)) = Z'(p:(p+b−1), q:(q+b−1)) + Z_i
i = i + 1

each element of matrix E records the accumulation count of the pixel at the corresponding position of matrix Z'; after the traversal, the superimposed pixels in the overlaps are averaged by dividing each element of matrix Z' by the element at the corresponding position of matrix E, finally obtaining Z';
5.3) add Z' to the central a(N−2) × a(M−2) region of Y and assign the resulting image to Z', obtaining Z' ∈ R^{aN×aM}.
7. The image super-resolution reconstruction method based on cognitive regularization parameters according to claim 2, characterized in that the iterative back-projection method of step 6) specifically solves the optimization problem:

Z = argmin_Z ‖Θ Ψ Z − Y_0‖²_2 + c ‖Z − Z'‖²_2

where Ψ denotes the blur matrix, Θ denotes the down-sampling matrix, and Y_0 is the input low-resolution image of size N × M.
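The back-projection objective above can be minimized by plain gradient descent. The sketch below is illustrative only: the blur Ψ is taken as the identity and the down-sampling Θ as 2 × 2 block averaging, both simplifying assumptions rather than the patent's operators.

```python
import numpy as np

# Gradient-descent sketch of min_Z ||Theta Z - Y0||^2 + c ||Z - Z'||^2,
# with Theta = 2x2 block averaging (its adjoint is 0.25 * pixel replication).

def downsample2(Z):
    return 0.25 * (Z[0::2, 0::2] + Z[0::2, 1::2] + Z[1::2, 0::2] + Z[1::2, 1::2])

def upsample2(Y):
    return np.kron(Y, np.ones((2, 2)))   # replicate each pixel into a 2x2 block

def back_project(Y0, Zp, c=1.0, step=0.2, n_iter=300):
    Z = Zp.copy()
    for _ in range(n_iter):
        residual = downsample2(Z) - Y0
        grad = 2.0 * 0.25 * upsample2(residual) + 2.0 * c * (Z - Zp)
        Z -= step * grad
    return Z
```

If the sparse-coding result Z' is already consistent with the low-resolution input (down-sampling it reproduces Y_0), the gradient vanishes and back-projection leaves it unchanged; otherwise it pulls Z toward down-sampling consistency while the c-term keeps it near Z'.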
8. The image super-resolution reconstruction method based on cognitive regularization parameters according to claim 2, characterized in that step 7) is specifically:
if the input image is a grayscale image, the Z obtained in step 6) is the image obtained after super-resolution reconstruction;
if the input image is a color image, transform (Z, Cb, Cr) to RGB space using the Z obtained in step 6) and the matrices Cb and Cr saved in step 1), that is:

R = Z + 1.402 Cr
G = Z − 0.344 Cb − 0.714 Cr
B = Z + 1.772 Cb

then Z = (R, G, B) ∈ R^{aN×aM×3}.
CN201410108363.7A 2014-03-21 2014-03-21 The image super-resolution reconstructing method built based on cognitive regularization parameter Expired - Fee Related CN103871041B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410108363.7A CN103871041B (en) 2014-03-21 2014-03-21 The image super-resolution reconstructing method built based on cognitive regularization parameter

Publications (2)

Publication Number Publication Date
CN103871041A true CN103871041A (en) 2014-06-18
CN103871041B CN103871041B (en) 2016-08-17


Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104252703A (en) * 2014-09-04 2014-12-31 吉林大学 Wavelet preprocessing and sparse representation-based satellite remote sensing image super-resolution reconstruction method
CN105260730A (en) * 2015-11-24 2016-01-20 严媚 Machine learning-based contact-type imaging microfluid cell counter and image processing method thereof
CN105488776A (en) * 2014-10-10 2016-04-13 北京大学 Super-resolution image reconstruction method and apparatus
CN106157251A (en) * 2015-04-01 2016-11-23 武汉大学 A kind of face super-resolution method based on Cauchy's regularization
CN106296668A (en) * 2016-08-01 2017-01-04 南京邮电大学 A kind of global image dividing method of multiresolution analysis
CN104268829B (en) * 2014-10-17 2017-09-29 中国科学院地理科学与资源研究所 A kind of Super-resolution Mapping based on multiscale space regularization model
CN107993194A (en) * 2017-11-30 2018-05-04 天津大学 A kind of super resolution ratio reconstruction method based on Stationary Wavelet Transform
CN108133456A (en) * 2016-11-30 2018-06-08 京东方科技集团股份有限公司 Face super-resolution reconstruction method, reconstructing apparatus and computer system
CN108154474A (en) * 2017-12-22 2018-06-12 浙江大华技术股份有限公司 A kind of super-resolution image reconstruction method, device, medium and equipment
CN109033963A (en) * 2018-06-22 2018-12-18 王连圭 The trans-regional human motion posture target identification method of multiple-camera video
CN109255770A (en) * 2018-07-16 2019-01-22 电子科技大学 A kind of down-sampled method of New Image transform domain
CN110062232A (en) * 2019-04-01 2019-07-26 杭州电子科技大学 A kind of video-frequency compression method and system based on super-resolution
CN111260557A (en) * 2020-01-21 2020-06-09 中国工程物理研究院激光聚变研究中心 Deep learning-based super-resolution imaging method for remote target
WO2020118687A1 (en) * 2018-12-14 2020-06-18 深圳先进技术研究院 Method and device for dynamic magnetic resonance image reconstruction with adaptive parameter learning

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070103595A1 (en) * 2005-10-27 2007-05-10 Yihong Gong Video super-resolution using personalized dictionary
CN102142137A (en) * 2011-03-10 2011-08-03 西安电子科技大学 High-resolution dictionary based sparse representation image super-resolution reconstruction method
CN102360498A (en) * 2011-10-27 2012-02-22 江苏省邮电规划设计院有限责任公司 Reconstruction method for image super-resolution
CN102393966A (en) * 2011-06-15 2012-03-28 西安电子科技大学 Self-adapting image compressive sampling method based on multi-dimension saliency map
CN102722875A (en) * 2012-05-29 2012-10-10 杭州电子科技大学 Visual-attention-based variable quality ultra-resolution image reconstruction method
CN103247028A (en) * 2013-03-19 2013-08-14 广东技术师范学院 Multi-hypothesis prediction block compressed sensing image processing method


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
FANG WANG, et al.: "Perceptual compressive sensing scheme based on human vision system", International Conference on Computer and Information Science *
FU, Huaizheng: "Research on color image super-resolution reconstruction algorithms based on sparse representation", China Master's Theses Full-text Database (Information Science and Technology) *

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104252703B (en) * 2014-09-04 2017-05-03 吉林大学 Wavelet preprocessing and sparse representation-based satellite remote sensing image super-resolution reconstruction method
CN104252703A (en) * 2014-09-04 2014-12-31 吉林大学 Wavelet preprocessing and sparse representation-based satellite remote sensing image super-resolution reconstruction method
CN105488776A (en) * 2014-10-10 2016-04-13 北京大学 Super-resolution image reconstruction method and apparatus
CN105488776B (en) * 2014-10-10 2018-05-08 北京大学 Super-resolution image reconstruction method and device
CN104268829B (en) * 2014-10-17 2017-09-29 中国科学院地理科学与资源研究所 A kind of Super-resolution Mapping based on multiscale space regularization model
CN106157251A (en) * 2015-04-01 2016-11-23 武汉大学 A kind of face super-resolution method based on Cauchy's regularization
CN106157251B (en) * 2015-04-01 2018-10-26 武汉大学 A kind of face super-resolution method based on Cauchy's regularization
CN105260730A (en) * 2015-11-24 2016-01-20 严媚 Machine learning-based contact-type imaging microfluid cell counter and image processing method thereof
CN106296668A (en) * 2016-08-01 2017-01-04 南京邮电大学 A kind of global image dividing method of multiresolution analysis
CN106296668B (en) * 2016-08-01 2019-07-16 南京邮电大学 A kind of global image dividing method of multiresolution analysis
CN108133456A (en) * 2016-11-30 2018-06-08 京东方科技集团股份有限公司 Face super-resolution reconstruction method, reconstructing apparatus and computer system
CN107993194A (en) * 2017-11-30 2018-05-04 天津大学 A kind of super resolution ratio reconstruction method based on Stationary Wavelet Transform
CN107993194B (en) * 2017-11-30 2021-01-01 天津大学 Super-resolution reconstruction method based on stationary wavelet transform
CN108154474A (en) * 2017-12-22 2018-06-12 浙江大华技术股份有限公司 A kind of super-resolution image reconstruction method, device, medium and equipment
CN109033963A (en) * 2018-06-22 2018-12-18 王连圭 The trans-regional human motion posture target identification method of multiple-camera video
CN109033963B (en) * 2018-06-22 2021-07-06 王连圭 Multi-camera video cross-region human motion posture target recognition method
CN109255770A (en) * 2018-07-16 2019-01-22 电子科技大学 Novel image transform-domain down-sampling method
WO2020118687A1 (en) * 2018-12-14 2020-06-18 深圳先进技术研究院 Method and device for dynamic magnetic resonance image reconstruction with adaptive parameter learning
CN110062232A (en) * 2019-04-01 2019-07-26 杭州电子科技大学 Video compression method and system based on super-resolution
CN111260557A (en) * 2020-01-21 2020-06-09 中国工程物理研究院激光聚变研究中心 Deep learning-based super-resolution imaging method for remote target

Also Published As

Publication number Publication date
CN103871041B (en) 2016-08-17

Similar Documents

Publication Publication Date Title
CN103871041A (en) Image super-resolution reconstruction method based on cognitive regularization parameters
CN109741256B (en) Image super-resolution reconstruction method based on sparse representation and deep learning
CN110119780B (en) Hyper-spectral image super-resolution reconstruction method based on generation countermeasure network
CN105069825B (en) Image super-resolution rebuilding method based on depth confidence network
CN103093444B (en) Image super-resolution reconstruction method based on self-similarity and structural information constraint
US8345971B2 (en) Method and system for spatial-temporal denoising and demosaicking for noisy color filter array videos
Chierchia et al. A nonlocal structure tensor-based approach for multicomponent image recovery problems
CN102142137B (en) High-resolution dictionary based sparse representation image super-resolution reconstruction method
CN103150713B (en) Image super-resolution method using image block classification, sparse representation, and adaptive aggregation
CN102968766B (en) Dictionary database-based adaptive image super-resolution reconstruction method
CN108269244B (en) Image defogging system based on deep learning and prior constraint
CN101950415B (en) Shape semantic model constraint-based face super-resolution processing method
CN105243670A (en) Accurate video foreground object extraction method based on joint sparse and low-rank representation
CN105513033A (en) Super-resolution reconstruction method based on non-local simultaneous sparse representation
CN104851077A (en) Adaptive remote sensing image panchromatic sharpening method
CN107610093B (en) Full-reference image quality evaluation method based on similarity feature fusion
CN111340696B (en) Convolutional neural network image super-resolution reconstruction method fused with bionic visual mechanism
CN107169946B (en) Image fusion method based on nonnegative sparse matrix and hypersphere color transformation
CN104376565A (en) Non-reference image quality evaluation method based on discrete cosine transform and sparse representation
CN105160647A (en) Panchromatic multi-spectral image fusion method
CN102842124A (en) Multispectral image and full-color image fusion method based on matrix low rank decomposition
CN105550989A (en) Image super-resolution method based on nonlocal Gaussian process regression
CN104036468A (en) Super-resolution reconstruction method for single-frame images based on pre-amplification non-negative neighbor embedding
CN103971354A (en) Method for reconstructing low-resolution infrared image into high-resolution infrared image
CN105303542A (en) Gradient weighted-based adaptive SFIM image fusion algorithm

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20160817

Termination date: 20190321