CN103020937A - Method for improving face image super-resolution reconfiguration - Google Patents

Method for improving face image super-resolution reconfiguration

Info

Publication number
CN103020937A
CN103020937A CN2012105409928A CN201210540992A
Authority
CN
China
Prior art keywords
image
formula
sigma
resolution
coefficient
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN2012105409928A
Other languages
Chinese (zh)
Inventor
Inventor not disclosed
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
HUBEI WEIJIA TECHNOLOGY CO LTD
Original Assignee
HUBEI WEIJIA TECHNOLOGY CO LTD
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by HUBEI WEIJIA TECHNOLOGY CO LTD filed Critical HUBEI WEIJIA TECHNOLOGY CO LTD
Priority to CN2012105409928A priority Critical patent/CN103020937A/en
Publication of CN103020937A publication Critical patent/CN103020937A/en
Withdrawn legal-status Critical Current

Landscapes

  • Image Processing (AREA)

Abstract

The invention belongs to the field of image super-resolution reconstruction, and in particular relates to a method for improving face image super-resolution reconstruction. The method comprises the following steps: firstly, taking the input low-resolution face image I and K low-resolution reference face images I_k(x); secondly, calculating local embedding coefficients; thirdly, substituting the local embedding coefficients into a reconstruction model to calculate the super-resolution reconstructed image; and fourthly, using the image obtained in the previous step as the input image. Each low-resolution reference face image I_k(x) has the shortest Euclidean distance to the input low-resolution face image I, the low-resolution reference face images I_k(x) are translated by p units with an affine translation operator to form the images I_k(x+p), and K = 6. The method improves the precision of face recognition.

Description

An improved face image super-resolution reconstruction method
Technical field
The invention belongs to the field of image super-resolution reconstruction, and in particular relates to an improved face image super-resolution reconstruction method.
Background technology
The patent with application No. 201210164069.9 discloses a face recognition method based on multi-manifold discriminant analysis super-resolution. In the training stage, the method obtains a mapping matrix from the multi-manifold space of low-resolution face images to the multi-manifold space of high-resolution face images through multi-manifold discriminant analysis: intra-class and inter-class similarity graphs are built in the original high-resolution face image multi-manifold space, a discriminant constraint term is constructed from these two neighbour graphs, and the mapping matrix is obtained by optimizing a cost function composed of a reconstruction constraint term and the discriminant constraint term. In the recognition stage, the mapping matrix obtained by offline learning maps the low-resolution face image to be recognized into the high-resolution face image multi-manifold space, yielding the high-resolution face image.
However, the images reconstructed by existing super-resolution methods are not accurate enough, which degrades face recognition performance.
Summary of the invention
The technical problem to be solved by the invention is: in order to improve the quality of image reconstruction, this patent proposes an improved face image super-resolution reconstruction method which raises the precision of face recognition.
The technical solution adopted by the present invention to solve the above technical problem is an improved face image super-resolution reconstruction method comprising the following steps:
Step 1), take the input low-resolution face image I and the K low-resolution reference face images I_k(x) whose Euclidean distances to the input low-resolution face image I are the smallest; the image obtained by translating a low-resolution reference face image I_k(x) by p units with an affine translation operator is denoted I_k(x+p); K = 6.
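As a concrete illustration of step 1), the following Python sketch selects the K reference faces closest to the input in Euclidean distance. The function name, the use of NumPy, and the assumption that all images share the same size are choices of this sketch, not requirements of the filing.

```python
import numpy as np

def select_reference_faces(input_lr, candidates, K=6):
    """Step 1): pick the K candidate low-resolution faces whose Euclidean
    distance to the input low-resolution face is smallest.
    input_lr   : H x W array, the input low-resolution face image I
    candidates : list of H x W arrays (the reference face library)
    Returns the K chosen images and their indices in the library."""
    x = input_lr.astype(np.float64).ravel()
    dists = np.array([np.linalg.norm(c.astype(np.float64).ravel() - x)
                      for c in candidates])
    order = np.argsort(dists)[:K]
    return [candidates[i] for i in order], order
```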
Step 2), magnify the input low-resolution face image I of step 1) and the K low-resolution reference face images I_k(x) nearest to it in Euclidean distance with the composite barycentric rational interpolation algorithm; the interpolated images are denoted I_{l↑} and I_{l↑,k}, k = 1, 2, ..., K, respectively. Then apply an optical flow method to I_{l↑} and I_{l↑,k} to obtain the high-resolution optical flow field. Let E_{r,k}(x) be the registration error of the k-th reference sample at pixel x, k = 1, 2, ..., K, computed pointwise between I_{l↑} and Ĩ_{l↑,k}, where Ĩ_{l↑,k} denotes the image generated by registering I_{l↑,k} with the optical flow field (the error formula itself appears only as an equation image in the original filing). Substitute E_{r,k}(x) into formula (1.1):
$$b_k(x) = \frac{\Bigl[\sum_{q\in\Omega} E_{r,k}(x+q) + u_{\mathrm{eps}}\Bigr]^{-2}}{\sum_{k}\Bigl[\sum_{q\in\Omega} E_{r,k}(x+q) + u_{\mathrm{eps}}\Bigr]^{-2}} \qquad (1.1)$$

$$B_x = \mathrm{diag}\bigl[\,b_1(x)\;\; b_2(x)\;\cdots\;b_K(x)\,\bigr] \qquad (1.2)$$
where u_eps is a positive constant that keeps the denominator from being zero, Ω is a neighbourhood window of 7 × 7 pixels, and the sum of E_{r,k}(x+q) over q ∈ Ω reflects the registration error of the reference sample in the neighbourhood of pixel x (shifted by q units). Solve for B_x and substitute it into formula (2) as the weight that balances the local embedding coefficients. The registered high-resolution reference samples and the target image have approximately the same embedding coefficients at pixel x; the embedding coefficients are computed as follows:
$$\{w_p(x)\mid p\in C,\ x\in G\} = \arg\min \sum_x \frac{\gamma}{2}\Bigl[h(x)-\sum_{p\in C}h(x+p)\,w_p(x)\Bigr]^{T} B_x \Bigl[h(x)-\sum_{p\in C}h(x+p)\,w_p(x)\Bigr] + \sum_{p\in C}\Bigl(\sum_x \bigl|\nabla w_p(x)\bigr|\Bigr) \qquad (2)$$
where G denotes the set of all possible pixel positions in the high-resolution image; γ balances the contributions of the two terms on either side of the plus sign in formula (2), with γ = 0.5; the first term reflects the local embedding relation that w_p(x) should satisfy, and the second term is its total variation. To solve formula (2), a time-varying partial differential equation method is adopted to iterate for w_p(x):
$$\frac{\partial w_p(x)}{\partial t} = \nabla\!\cdot\!\Bigl(\frac{\nabla w_p(x)}{\bigl|\nabla w_p(x)\bigr|}\Bigr) - \gamma\, h^{T}(x+p)\, B_x \Bigl[\sum_{q\in C} h(x+q)\,w_q(x) - h(x)\Bigr]$$
In the formula For embedding the in time variable quantity of t of coefficient; The discretize following formula just can be in the hope of locally embedding coefficient w p(x) numerical solution;
The composite barycentric rational interpolation algorithm is as follows:
Step 2.1: decompose the low-resolution face image I and each of the K low-resolution images nearest to the input low-resolution face image I in Euclidean distance into the three colour channels red, green and blue; for each channel, take the pixel values in a neighbourhood window of 4 × 4 pixels as the input image pixel values f(x_i, y_j) at the interpolation nodes;
Step 2.2: perform the interpolation calculation by formula (1); after each calculation, scan the progressively computed results from left to right and from top to bottom and store the ordered results in the target image array, which forms the final interpolated, magnified image; the magnified images are denoted I_{l↑} and I_{l↑,k}, k = 1, 2, ..., K, respectively.
The mathematical model of the composite barycentric rational interpolation is

$$R(x,y) = \frac{\sum_{i=0}^{n-d_1}\lambda_i(x)\,r_i(x,y)}{\sum_{i=0}^{n-d_1}\lambda_i(x)} \qquad (1)$$

where r_i(x,y) is the partial interpolant assembled from the ψ_k(x,y) and the weights λ_k(y) (its explicit expression is given only as an equation image in the original filing), and

$$\psi_k(x,y) = \frac{\displaystyle\sum_{l=k}^{k+d_2}\frac{(-1)^{l}}{y-y_l}\,f(x,y_l)}{\displaystyle\sum_{l=k}^{k+d_2}\frac{(-1)^{l}}{y-y_l}},\qquad k = 0,1,\ldots,m-d_2$$

$$\lambda_i(x) = \frac{(-1)^{i}}{(x-x_i)(x-x_{i+1})\cdots(x-x_{i+d_1})}$$

$$\lambda_k(y) = \frac{(-1)^{k}}{(y-y_k)(y-y_{k+1})\cdots(y-y_{k+d_2})}$$

m and n are positive integers, here m = 3 and n = 3; x_i, y_j are the interpolation nodes, f(x_i, y_j) is the input image pixel value at the node, and R(x, y) is the output pixel value of the magnified image.
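Since the bivariate composite form of r_i(x, y) survives only as an image in the original filing, the sketch below applies the one-dimensional blended barycentric rational form suggested by the λ_i and ψ_k definitions separably, first along rows and then along columns, to magnify a 4 × 4 patch. The blending order d, the function names, and the separable approximation itself are assumptions of this sketch rather than the filing's exact scheme.

```python
import numpy as np

def blended_barycentric_1d(nodes, values, x, d=2):
    """Blended barycentric rational interpolant in one dimension:
    R(x) = sum_i lambda_i(x) * psi_i(x) / sum_i lambda_i(x), where
    lambda_i(x) = (-1)^i / prod_{j=i..i+d}(x - x_j) and psi_i is the
    local rational interpolant over nodes x_i .. x_{i+d}."""
    nodes = np.asarray(nodes, dtype=float)
    values = np.asarray(values, dtype=float)
    hit = np.isclose(x, nodes)
    if hit.any():                      # x falls on a node: return the sample
        return float(values[np.argmax(hit)])
    num = den = 0.0
    for i in range(len(nodes) - d):
        lam = (-1.0) ** i / np.prod(x - nodes[i:i + d + 1])
        w = (-1.0) ** np.arange(i, i + d + 1) / (x - nodes[i:i + d + 1])
        psi = np.dot(w, values[i:i + d + 1]) / w.sum()
        num += lam * psi
        den += lam
    return num / den

def upscale_patch(patch, scale=3, d=2):
    """Separable magnification of a small patch (e.g. 4 x 4 as in step 2.1):
    interpolate each row, then each column of the row-interpolated result."""
    h, w = patch.shape
    xs, ys = np.arange(w, dtype=float), np.arange(h, dtype=float)
    new_xs = np.linspace(0.0, w - 1.0, w * scale)
    new_ys = np.linspace(0.0, h - 1.0, h * scale)
    rows = np.array([[blended_barycentric_1d(xs, patch[r], x, d)
                      for x in new_xs] for r in range(h)])
    return np.array([[blended_barycentric_1d(ys, rows[:, c], y, d)
                      for c in range(rows.shape[1])] for y in new_ys])
```

For example, upscale_patch(np.random.rand(4, 4)) returns a 12 × 12 block; in the patent's pipeline such blocks would be written into the target image array in the raster order of step 2.2.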
Step 3), substitute the numerical solution of the local embedding coefficients w_p(x) obtained in step 2) into the reconstruction model to compute the super-resolution reconstructed image. The computation of the reconstruction model is as follows: first, substitute the numerical solution of w_p(x) into formula (3) and estimate the target image Î_h by maximum a posteriori probability:

$$\hat{I}_h = \arg\min_{I_h} Q(I_h) = \arg\min_{I_h}\, \bigl\|DBI_h - I_l\bigr\|^2 + \lambda \sum_x \Bigl\| I_h(x) - \sum_{p\in C} I_h(x+p)\,w_p(x) \Bigr\|^2 \qquad (3)$$

where Q(I_h) is the cost function of the column vector of the high-resolution face image. The first term of Q(I_h), ‖DBI_h − I_l‖², is the data term: after degradation, the sought high-resolution image should be consistent with the known observed sample. The second term, λ Σ_x‖I_h(x) − Σ_{p∈C} I_h(x+p) w_p(x)‖², is the prior term: it requires every pixel of the reconstructed image to satisfy the linear embedding relation with its neighbouring points. The parameter λ balances the relative contributions of the data term and the prior term.
The formula for I_h(x) in formula (3) is

$$I_h(x) = \sum_{p\in C} I_h(x+p)\,w_p(x) \qquad (4)$$

where p is the spatial offset between pixel x and its neighbouring point; C is the neighbourhood window centred at x, which defines the range of p, with 0 ≤ p ≤ 1; and w_p(x) is the linear embedding coefficient corresponding to the neighbouring point (x+p).
I_l in formula (3) is given by

$$I_l = DBI_h + n \qquad (5)$$

where I_l is the column vector of the low-resolution face image, of dimension N_1; I_h is the column vector of the high-resolution face image, of dimension N_2; B is the blur matrix of size N_2 × N_2, generated by a Gaussian point spread function and corresponding to the blur in the imaging process; D is the down-sampling matrix of size N_1 × N_2; and n is additive white Gaussian noise with zero mean.
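The degradation operators of formula (5) are N_2 × N_2 and N_1 × N_2 matrices in the filing; in practice they can be applied as functions. The sketch below is one such operator pair; the Gaussian blur width, the zero-fill upsampling used for the adjoint, and the function names are assumptions of this sketch.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade(high_res, scale=3, sigma=1.0):
    """Formula (5) without the noise term: Gaussian blur (matrix B)
    followed by down-sampling (matrix D)."""
    blurred = gaussian_filter(high_res.astype(np.float64), sigma=sigma)
    return blurred[::scale, ::scale]

def degrade_adjoint(low_res, scale=3, sigma=1.0, hr_shape=None):
    """Adjoint operator B^T D^T used in the gradient of formula (7):
    zero-fill up-sampling followed by the same symmetric Gaussian blur."""
    if hr_shape is None:
        hr_shape = (low_res.shape[0] * scale, low_res.shape[1] * scale)
    up = np.zeros(hr_shape, dtype=np.float64)
    up[::scale, ::scale] = low_res
    return gaussian_filter(up, sigma=sigma)
```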
Write Q(I_h) in formula (3) in the following matrix form:

$$Q(I_h) = \bigl\|DBI_h - I_l\bigr\|^2 + \lambda\Bigl\|\Bigl(E - \sum_{p\in C} W_p S_{-p}\Bigr) I_h\Bigr\|^2 \qquad (6)$$

where S_{-p} is the translation operator with translation amount p, a matrix of size N_2 × N_2; W_p is an N_2 × N_2 diagonal matrix in which each diagonal element is the linear embedding coefficient w_p(x) of one pixel x in the direction p; and E is the identity matrix of the same size as S_{-p} and W_p. The gradient of Q(I_h) can then be expressed as

$$\frac{\partial Q(I_h)}{\partial I_h} = 2B^{T}D^{T}\bigl(DBI_h - I_l\bigr) + 2\lambda\Bigl(E - \sum_{p\in C} W_p S_{-p}\Bigr)^{T}\Bigl(E - \sum_{p\in C} W_p S_{-p}\Bigr) I_h \qquad (7)$$
Use formula (7) to obtain the gradient of the cost function with respect to I_h(x), substitute the gradient into formula (8) below, and iterate with the gradient descent method to obtain the final super-resolution reconstructed target image Î_h:

$$\hat{I}_h^{\,t+1} = \hat{I}_h^{\,t} - \beta\,\frac{\partial Q(I_h)}{\partial I_h}\bigg|_{I_h=\hat{I}_h^{\,t}} \qquad (8)$$

where t is the current iteration number and β is the iteration step size, with β = 0.3. The initial value of the iteration, Î_h^0, is the image obtained by magnifying the input image with the composite barycentric rational interpolation.
Step 4), output the super-resolution reconstructed target image Î_h estimated by formula (8) in step 3).
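As a rough illustration of the gradient-descent iteration of formulas (6)-(8), the sketch below treats DB and B^T D^T as callables (for instance the degrade / degrade_adjoint pair sketched after formula (5)) and implements the shift operators S_{-p} with periodic np.roll shifts. The iteration count and the boundary handling are assumptions of this sketch, while λ = 0.8 and β = 0.3 follow the preferred values stated in the embodiment.

```python
import numpy as np

def reconstruct(I_l, w, offsets, forward_op, adjoint_op, init,
                lam=0.8, beta=0.3, n_iter=50):
    """Gradient-descent MAP estimate of formulas (6)-(8).
    I_l        : observed low-resolution image (2-D array)
    w          : dict mapping each offset p=(dy,dx) in C to its per-pixel
                 embedding-coefficient map w_p(x) (HR-sized array)
    offsets    : list of (dy, dx) tuples, the neighbourhood C
    forward_op : callable applying D B (blur then downsample)
    adjoint_op : callable applying B^T D^T
    init       : initial HR estimate (the interpolated input image)"""
    I_h = init.astype(np.float64).copy()
    for _ in range(n_iter):
        # data-term gradient: 2 B^T D^T (D B I_h - I_l)
        data_grad = 2.0 * adjoint_op(forward_op(I_h) - I_l)
        # prior residual (E - sum_p W_p S_{-p}) I_h; S_{-p} fetches I_h(x+p)
        residual = I_h - sum(w[p] * np.roll(I_h, shift=(-p[0], -p[1]), axis=(0, 1))
                             for p in offsets)
        # adjoint of the prior operator applied to the residual
        prior_grad = residual - sum(np.roll(w[p] * residual, shift=p, axis=(0, 1))
                                    for p in offsets)
        I_h -= beta * (data_grad + 2.0 * lam * prior_grad)
    return I_h
```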
Compared with existing patents, the technical advantage of this patent is that a high-precision image interpolation method is introduced into the reconstruction process, so that images whose precision has been degraded can be reconstructed with high accuracy, and noise such as burrs and artifacts in the reconstructed image is suppressed.
Description of drawings
Fig. 1 is a schematic flowchart of an embodiment of the invention.
Embodiment
The present invention is further illustrated below with reference to an embodiment.
As shown in Fig. 1, the detailed calculation procedure of the improved face image super-resolution reconstruction method of the embodiment of the invention is as follows:
An improved face image super-resolution reconstruction method comprises the following steps:
Step 1), take the input low-resolution face image I and the K low-resolution reference face images I_k(x) whose Euclidean distances to the input low-resolution face image I are the smallest; the image obtained by translating a low-resolution reference face image I_k(x) by p units with an affine translation operator is I_k(x+p). Preferably, K = 6, which guarantees both reconstruction accuracy and computation speed.
Step 2), magnify the input low-resolution face image I of step 1) and the K low-resolution reference face images I_k(x) nearest to it in Euclidean distance with the composite barycentric rational interpolation algorithm to a larger size, for example by a factor of 3. The interpolated images are denoted I_{l↑} and I_{l↑,k}, k = 1, 2, ..., K, respectively. Then apply an optical flow method to I_{l↑} and I_{l↑,k} to obtain the high-resolution optical flow field. Let E_{r,k}(x) be the registration error of the k-th reference sample at pixel x, k = 1, 2, ..., K, computed pointwise between I_{l↑} and Ĩ_{l↑,k}, where Ĩ_{l↑,k} denotes the image generated by registering I_{l↑,k} with the optical flow field (the error formula itself appears only as an equation image in the original filing). E_{r,k}(x) is used to balance the weight of each reference sample when learning the local embedding coefficients at pixel x in this step. Substitute E_{r,k}(x) into formula (1.1):

$$b_k(x) = \frac{\Bigl[\sum_{q\in\Omega} E_{r,k}(x+q) + u_{\mathrm{eps}}\Bigr]^{-2}}{\sum_{k}\Bigl[\sum_{q\in\Omega} E_{r,k}(x+q) + u_{\mathrm{eps}}\Bigr]^{-2}} \qquad (1.1)$$

$$B_x = \mathrm{diag}\bigl[\,b_1(x)\;\; b_2(x)\;\cdots\;b_K(x)\,\bigr] \qquad (1.2)$$
b_k(x) is the weight of the k-th reference sample; its value depends on the registration error E_{r,k}(x). In the formula, the sum of E_{r,k}(x+q) over q ∈ Ω reflects the registration error of the reference sample in the neighbourhood of pixel x (shifted by q units), and Ω is a neighbourhood window of 7 × 7 pixels. It can be seen that the weight of each reference sample at a position is approximately inversely proportional to the square of its registration error. The denominator is a normalizing factor, and u_eps is a small positive constant that keeps the denominator from being zero. Clearly, when the registration error of a reference sample at x is large, its weight b_k(x) is small, and vice versa. Because the continuity between pixels must be considered, the embedding coefficients obtained may be discontinuous; moreover, when the number K of reference samples is small (for example K < |C|, where |C| is the number of neighbouring points), the w_p(x) satisfying the condition of formula (2) is not unique. The algorithm therefore introduces total variation minimization to impose an additional smoothness constraint on the embedding coefficients. Solve for B_x and substitute it into formula (2) as the weight that balances the local embedding coefficients. The registered high-resolution reference samples and the target image have approximately the same embedding coefficients at pixel x, so the required local embedding coefficients {w_p(x)}_{p∈C} should satisfy
$$\{w_p(x)\mid p\in C,\ x\in G\} = \arg\min \sum_x \frac{\gamma}{2}\Bigl[h(x)-\sum_{p\in C}h(x+p)\,w_p(x)\Bigr]^{T} B_x \Bigl[h(x)-\sum_{p\in C}h(x+p)\,w_p(x)\Bigr] + \sum_{p\in C}\Bigl(\sum_x \bigl|\nabla w_p(x)\bigr|\Bigr) \qquad (2)$$
where G denotes the set of all possible pixel positions in the high-resolution image; γ balances the contributions of the two terms on either side of the plus sign in formula (2), with γ > 0 and preferably γ = 0.5; the first term reflects the local embedding relation that w_p(x) should satisfy, and the second term is its total variation. In image denoising, minimizing the total variation has the advantage of preserving high-frequency information such as edges and textures while removing noise. Here the algorithm uses the total variation to suppress discontinuities in the embedding coefficients while retaining the local structural features of the high-resolution image contained in the coefficients. To solve formula (2), a time-varying partial differential equation method is adopted to iterate for w_p(x):
$$\frac{\partial w_p(x)}{\partial t} = \nabla\!\cdot\!\Bigl(\frac{\nabla w_p(x)}{\bigl|\nabla w_p(x)\bigr|}\Bigr) - \gamma\, h^{T}(x+p)\, B_x \Bigl[\sum_{q\in C} h(x+q)\,w_q(x) - h(x)\Bigr]$$

where ∂w_p(x)/∂t is the change of the embedding coefficient with respect to the time variable t; discretizing the above equation yields the numerical solution of w_p(x) (see the discretized sketch after the parameter preferences at the end of this embodiment). The composite barycentric rational interpolation algorithm is as follows:
Step 2.1: decompose the low-resolution face image I and each of the K low-resolution images nearest to the input low-resolution face image I in Euclidean distance into the three colour channels red, green and blue; for each channel, take the pixel values in a neighbourhood window of 4 × 4 pixels as the input image pixel values f(x_i, y_j) at the interpolation nodes;
Step 2.2: perform the interpolation calculation by formula (1); after each calculation, scan the progressively computed results from left to right and from top to bottom and store the ordered results in the target image array, which forms the final interpolated, magnified image; the magnified images are denoted I_{l↑} and I_{l↑,k}, k = 1, 2, ..., K, respectively.
The mathematical model of the composite barycentric rational interpolation is

$$R(x,y) = \frac{\sum_{i=0}^{n-d_1}\lambda_i(x)\,r_i(x,y)}{\sum_{i=0}^{n-d_1}\lambda_i(x)} \qquad (1)$$

where r_i(x,y) is the partial interpolant assembled from the ψ_k(x,y) and the weights λ_k(y) (its explicit expression is given only as an equation image in the original filing), and

$$\psi_k(x,y) = \frac{\displaystyle\sum_{l=k}^{k+d_2}\frac{(-1)^{l}}{y-y_l}\,f(x,y_l)}{\displaystyle\sum_{l=k}^{k+d_2}\frac{(-1)^{l}}{y-y_l}},\qquad k = 0,1,\ldots,m-d_2$$

$$\lambda_i(x) = \frac{(-1)^{i}}{(x-x_i)(x-x_{i+1})\cdots(x-x_{i+d_1})}$$

$$\lambda_k(y) = \frac{(-1)^{k}}{(y-y_k)(y-y_{k+1})\cdots(y-y_{k+d_2})}$$

m and n are positive integers, here m = 3 and n = 3; x_i, y_j are the interpolation nodes, f(x_i, y_j) is the input image pixel value at the node, and R(x, y) is the output pixel value of the magnified image.
Step 3), substitute the numerical solution of the local embedding coefficients w_p(x) obtained in step 2) into the reconstruction model to compute the super-resolution reconstructed image. The computation of the reconstruction model is as follows: first, substitute the numerical solution of w_p(x) into formula (3) and estimate the target image Î_h by maximum a posteriori probability:

$$\hat{I}_h = \arg\min_{I_h} Q(I_h) = \arg\min_{I_h}\, \bigl\|DBI_h - I_l\bigr\|^2 + \lambda \sum_x \Bigl\| I_h(x) - \sum_{p\in C} I_h(x+p)\,w_p(x) \Bigr\|^2 \qquad (3)$$

where Q(I_h) is the cost function of the column vector of the high-resolution face image. The first term of Q(I_h), ‖DBI_h − I_l‖², is the data term: after degradation, the sought high-resolution image should be consistent with the known observed sample. The second term, λ Σ_x‖I_h(x) − Σ_{p∈C} I_h(x+p) w_p(x)‖², is the prior term: it requires every pixel of the reconstructed image to satisfy the linear embedding relation with its neighbouring points. The parameter λ balances the relative contributions of the data term and the prior term.
The formula for I_h(x) in formula (3) is

$$I_h(x) = \sum_{p\in C} I_h(x+p)\,w_p(x) \qquad (4)$$

where p is the spatial offset between pixel x and its neighbouring point; C is the neighbourhood window centred at x, which defines the range of p, with 0 ≤ p ≤ 1; and w_p(x) is the linear embedding coefficient corresponding to the neighbouring point (x+p).
I_l in formula (3) is given by

$$I_l = DBI_h + n \qquad (5)$$

where I_l is the column vector of the low-resolution face image, of dimension N_1; I_h is the column vector of the high-resolution face image, of dimension N_2; B is the blur matrix of size N_2 × N_2, generated by a Gaussian point spread function and corresponding to the blur in the imaging process; D is the down-sampling matrix of size N_1 × N_2; and n is additive white Gaussian noise with zero mean.
Write Q(I_h) in formula (3) in the following matrix form:

$$Q(I_h) = \bigl\|DBI_h - I_l\bigr\|^2 + \lambda\Bigl\|\Bigl(E - \sum_{p\in C} W_p S_{-p}\Bigr) I_h\Bigr\|^2 \qquad (6)$$

where S_{-p} is the translation operator with translation amount p, a matrix of size N_2 × N_2; W_p is an N_2 × N_2 diagonal matrix in which each diagonal element is the linear embedding coefficient w_p(x) of one pixel x in the direction p; and E is the identity matrix of the same size as S_{-p} and W_p. The gradient of Q(I_h) can then be expressed as

$$\frac{\partial Q(I_h)}{\partial I_h} = 2B^{T}D^{T}\bigl(DBI_h - I_l\bigr) + 2\lambda\Bigl(E - \sum_{p\in C} W_p S_{-p}\Bigr)^{T}\Bigl(E - \sum_{p\in C} W_p S_{-p}\Bigr) I_h \qquad (7)$$
Use formula (7) to obtain the gradient of the cost function with respect to I_h(x), substitute the gradient into formula (8) below, and iterate with the gradient descent method to obtain the final super-resolution reconstructed target image Î_h:

$$\hat{I}_h^{\,t+1} = \hat{I}_h^{\,t} - \beta\,\frac{\partial Q(I_h)}{\partial I_h}\bigg|_{I_h=\hat{I}_h^{\,t}} \qquad (8)$$

where t is the current iteration number and β is the iteration step size, with β = 0.3. The initial value of the iteration, Î_h^0, is the image obtained by magnifying the input image with the composite barycentric rational interpolation.
Step 4), output the super-resolution reconstructed target image Î_h estimated by formula (8) in step 3).
Preferably, λ = 0.8; this value keeps the weights of the data term and the prior term moderately balanced.
Preferably, C is a neighbourhood window of 3 × 3 or 4 × 4 pixels.
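As a discretized illustration of the time-varying PDE used to solve formula (2), the sketch below performs one update of the coefficient maps w_p(x) for a neighbourhood C such as the 3 × 3 window preferred above. It assumes that h(x) stacks the values of the K registered high-resolution references at each pixel (the filing does not spell out h explicitly), uses np.gradient for the total-variation term, and chooses the time step and the stabilising epsilon arbitrarily; γ = 0.5 follows the preferred value.

```python
import numpy as np

def tv_pde_step(w, h, b, offsets, gamma=0.5, dt=0.1, eps=1e-8):
    """One discretized time step of the PDE that iterates formula (2).
    w       : dict offset p=(dy,dx) -> current coefficient map w_p(x), (H, W)
    h       : array (K, H, W); h[k] is the k-th registered HR reference
    b       : array (K, H, W); per-pixel weights b_k(x) from formula (1.1)
    offsets : list of (dy, dx) tuples, the neighbourhood C"""
    # residual of the embedding relation: h(x) - sum_q h(x+q) w_q(x)
    shifted = {q: np.roll(h, shift=(0, -q[0], -q[1]), axis=(0, 1, 2))
               for q in offsets}
    residual = h - sum(shifted[q] * w[q] for q in offsets)      # (K, H, W)
    new_w = {}
    for p in offsets:
        # curvature term div(grad w / |grad w|) of the total variation
        gy, gx = np.gradient(w[p])
        norm = np.sqrt(gy ** 2 + gx ** 2) + eps
        div = np.gradient(gy / norm, axis=0) + np.gradient(gx / norm, axis=1)
        # coupling term: gamma * sum_k h_k(x+p) b_k(x) (h_k(x) - sum_q h_k(x+q) w_q(x))
        coupling = gamma * (shifted[p] * b * residual).sum(axis=0)
        new_w[p] = w[p] + dt * (div + coupling)
    return new_w
```

Iterating tv_pde_step until the coefficient maps stabilise gives the numerical solution w_p(x) that step 3) substitutes into the reconstruction model.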
The above is only a preferred embodiment of the present invention and does not limit the invention in any form; any simple modification, equivalent variation or modification made to the above embodiment according to the technical essence of the present invention still falls within the scope of the present invention.

Claims (1)

1. An improved face image super-resolution reconstruction method, characterized in that it comprises the following steps:
Step 1), take the input low-resolution face image I and the K low-resolution reference face images I_k(x) whose Euclidean distances to the input low-resolution face image I are the smallest; the image obtained by translating a low-resolution reference face image I_k(x) by p units with an affine translation operator is I_k(x+p); K = 6;
Step 2), magnify the input low-resolution face image I of step 1) and the K low-resolution reference face images I_k(x) nearest to it in Euclidean distance with the composite barycentric rational interpolation algorithm; the interpolated images are denoted I_{l↑} and I_{l↑,k}, k = 1, 2, ..., K, respectively; then apply an optical flow method to I_{l↑} and I_{l↑,k} to obtain the high-resolution optical flow field; let E_{r,k}(x) be the registration error of the k-th reference sample at pixel x, k = 1, 2, ..., K, computed pointwise between I_{l↑} and Ĩ_{l↑,k}, where Ĩ_{l↑,k} denotes the image generated by registering I_{l↑,k} with the optical flow field (the error formula itself appears only as an equation image in the original filing); substitute E_{r,k}(x) into formula (1.1):

$$b_k(x) = \frac{\Bigl[\sum_{q\in\Omega} E_{r,k}(x+q) + u_{\mathrm{eps}}\Bigr]^{-2}}{\sum_{k}\Bigl[\sum_{q\in\Omega} E_{r,k}(x+q) + u_{\mathrm{eps}}\Bigr]^{-2}} \qquad (1.1)$$

$$B_x = \mathrm{diag}\bigl[\,b_1(x)\;\; b_2(x)\;\cdots\;b_K(x)\,\bigr] \qquad (1.2)$$

where u_eps is a positive constant that keeps the denominator from being zero, Ω is a neighbourhood window of 7 × 7 pixels, and the sum of E_{r,k}(x+q) over q ∈ Ω reflects the registration error of the reference sample in the neighbourhood of pixel x (shifted by q units); solve for B_x and substitute it into formula (2) as the weight that balances the local embedding coefficients; the registered high-resolution reference samples and the target image have approximately the same embedding coefficients at pixel x, computed as follows:

$$\{w_p(x)\mid p\in C,\ x\in G\} = \arg\min \sum_x \frac{\gamma}{2}\Bigl[h(x)-\sum_{p\in C}h(x+p)\,w_p(x)\Bigr]^{T} B_x \Bigl[h(x)-\sum_{p\in C}h(x+p)\,w_p(x)\Bigr] + \sum_{p\in C}\Bigl(\sum_x \bigl|\nabla w_p(x)\bigr|\Bigr) \qquad (2)$$

where G denotes the set of all possible pixel positions in the high-resolution image; γ balances the contributions of the two terms on either side of the plus sign in formula (2), with γ = 0.5; the first term reflects the local embedding relation that w_p(x) should satisfy, and the second term is its total variation; to solve formula (2), a time-varying partial differential equation method is adopted to iterate for w_p(x):

$$\frac{\partial w_p(x)}{\partial t} = \nabla\!\cdot\!\Bigl(\frac{\nabla w_p(x)}{\bigl|\nabla w_p(x)\bigr|}\Bigr) - \gamma\, h^{T}(x+p)\, B_x \Bigl[\sum_{q\in C} h(x+q)\,w_q(x) - h(x)\Bigr]$$

where ∂w_p(x)/∂t is the change of the embedding coefficient with respect to the time variable t; discretizing the above equation yields the numerical solution of the local embedding coefficient w_p(x);
the composite barycentric rational interpolation algorithm is as follows:
Step 2.1: decompose the low-resolution face image I and each of the K low-resolution images nearest to the input low-resolution face image I in Euclidean distance into the three colour channels red, green and blue; for each channel, take the pixel values in a neighbourhood window of 4 × 4 pixels as the input image pixel values f(x_i, y_j) at the interpolation nodes;
Step 2.2: perform the interpolation calculation by formula (1); after each calculation, scan the progressively computed results from left to right and from top to bottom and store the ordered results in the target image array, which forms the final interpolated, magnified image; the magnified images are denoted I_{l↑} and I_{l↑,k}, k = 1, 2, ..., K, respectively;
the mathematical model of the composite barycentric rational interpolation is

$$R(x,y) = \frac{\sum_{i=0}^{n-d_1}\lambda_i(x)\,r_i(x,y)}{\sum_{i=0}^{n-d_1}\lambda_i(x)} \qquad (1)$$

where r_i(x,y) is the partial interpolant assembled from the ψ_k(x,y) and the weights λ_k(y) (its explicit expression is given only as equation images in the original filing), and

$$\psi_k(x,y) = \frac{\displaystyle\sum_{l=k}^{k+d_2}\frac{(-1)^{l}}{y-y_l}\,f(x,y_l)}{\displaystyle\sum_{l=k}^{k+d_2}\frac{(-1)^{l}}{y-y_l}},\qquad k = 0,1,\ldots,m-d_2$$

$$\lambda_i(x) = \frac{(-1)^{i}}{(x-x_i)(x-x_{i+1})\cdots(x-x_{i+d_1})}$$

$$\lambda_k(y) = \frac{(-1)^{k}}{(y-y_k)(y-y_{k+1})\cdots(y-y_{k+d_2})}$$

m and n are positive integers, here m = 3 and n = 3; x_i, y_j are the interpolation nodes, f(x_i, y_j) is the input image pixel value at the node, and R(x, y) is the output pixel value of the magnified image;
Step 3), substitute the numerical solution of the local embedding coefficients w_p(x) obtained in step 2) into the reconstruction model to compute the super-resolution reconstructed image; the computation of the reconstruction model is as follows: first, substitute the numerical solution of w_p(x) into formula (3) and estimate the target image Î_h by maximum a posteriori probability:

$$\hat{I}_h = \arg\min_{I_h} Q(I_h) = \arg\min_{I_h}\, \bigl\|DBI_h - I_l\bigr\|^2 + \lambda \sum_x \Bigl\| I_h(x) - \sum_{p\in C} I_h(x+p)\,w_p(x) \Bigr\|^2 \qquad (3)$$

where Q(I_h) is the cost function of the column vector of the high-resolution face image; the first term of Q(I_h), ‖DBI_h − I_l‖², is the data term: after degradation, the sought high-resolution image should be consistent with the known observed sample; the second term, λ Σ_x‖I_h(x) − Σ_{p∈C} I_h(x+p) w_p(x)‖², is the prior term, which requires every pixel of the reconstructed image to satisfy the linear embedding relation with its neighbouring points; the parameter λ balances the relative contributions of the data term and the prior term;
the formula for I_h(x) in formula (3) is

$$I_h(x) = \sum_{p\in C} I_h(x+p)\,w_p(x) \qquad (4)$$

where p is the spatial offset between pixel x and its neighbouring point; C is the neighbourhood window centred at x, which defines the range of p, with 0 ≤ p ≤ 1; and w_p(x) is the linear embedding coefficient corresponding to the neighbouring point (x+p);
I_l in formula (3) is given by

$$I_l = DBI_h + n \qquad (5)$$

where I_l is the column vector of the low-resolution face image, of dimension N_1; I_h is the column vector of the high-resolution face image, of dimension N_2; B is the blur matrix of size N_2 × N_2, generated by a Gaussian point spread function and corresponding to the blur in the imaging process; D is the down-sampling matrix of size N_1 × N_2; and n is additive white Gaussian noise with zero mean;
write Q(I_h) in formula (3) in the following matrix form:

$$Q(I_h) = \bigl\|DBI_h - I_l\bigr\|^2 + \lambda\Bigl\|\Bigl(E - \sum_{p\in C} W_p S_{-p}\Bigr) I_h\Bigr\|^2 \qquad (6)$$

where S_{-p} is the translation operator with translation amount p, a matrix of size N_2 × N_2; W_p is an N_2 × N_2 diagonal matrix in which each diagonal element is the linear embedding coefficient w_p(x) of one pixel x in the direction p; and E is the identity matrix of the same size as S_{-p} and W_p; the gradient of Q(I_h) can then be expressed as

$$\frac{\partial Q(I_h)}{\partial I_h} = 2B^{T}D^{T}\bigl(DBI_h - I_l\bigr) + 2\lambda\Bigl(E - \sum_{p\in C} W_p S_{-p}\Bigr)^{T}\Bigl(E - \sum_{p\in C} W_p S_{-p}\Bigr) I_h \qquad (7)$$

use formula (7) to obtain the gradient of the cost function with respect to I_h(x), substitute the gradient into formula (8) below, and iterate with the gradient descent method to obtain the final super-resolution reconstructed target image Î_h:

$$\hat{I}_h^{\,t+1} = \hat{I}_h^{\,t} - \beta\,\frac{\partial Q(I_h)}{\partial I_h}\bigg|_{I_h=\hat{I}_h^{\,t}} \qquad (8)$$

where t is the current iteration number and β is the iteration step size, with β = 0.3; the initial value of the iteration, Î_h^0, is the image obtained by magnifying the input image with the composite barycentric rational interpolation;
Step 4), output the super-resolution reconstructed target image Î_h estimated by formula (8) in step 3).
CN2012105409928A 2012-12-12 2012-12-12 Method for improving face image super-resolution reconfiguration Withdrawn CN103020937A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2012105409928A CN103020937A (en) 2012-12-12 2012-12-12 Method for improving face image super-resolution reconfiguration

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2012105409928A CN103020937A (en) 2012-12-12 2012-12-12 Method for improving face image super-resolution reconfiguration

Publications (1)

Publication Number Publication Date
CN103020937A true CN103020937A (en) 2013-04-03

Family

ID=47969504

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2012105409928A Withdrawn CN103020937A (en) 2012-12-12 2012-12-12 Method for improving face image super-resolution reconfiguration

Country Status (1)

Country Link
CN (1) CN103020937A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9384386B2 (en) 2014-08-29 2016-07-05 Motorola Solutions, Inc. Methods and systems for increasing facial recognition working rang through adaptive super-resolution
CN113010038A (en) * 2021-02-09 2021-06-22 北京工业大学 Ultrasonic lamb wave touch load identification method based on super-resolution reconstruction
CN113010038B (en) * 2021-02-09 2024-02-02 北京工业大学 Ultrasonic lamb wave touch load identification method based on super-resolution reconstruction

Similar Documents

Publication Publication Date Title
CN108734659B (en) Sub-pixel convolution image super-resolution reconstruction method based on multi-scale label
CN110119780B (en) Hyper-spectral image super-resolution reconstruction method based on generation countermeasure network
CN103871041B (en) The image super-resolution reconstructing method built based on cognitive regularization parameter
CN102136144B (en) Image registration reliability model and reconstruction method of super-resolution image
CN106952228A (en) The super resolution ratio reconstruction method of single image based on the non local self-similarity of image
CN105469360A (en) Non local joint sparse representation based hyperspectral image super-resolution reconstruction method
CN106408524A (en) Two-dimensional image-assisted depth image enhancement method
CN106920214B (en) Super-resolution reconstruction method for space target image
CN103366347B (en) Image super-resolution rebuilding method based on rarefaction representation
CN105825477A (en) Remote sensing image super-resolution reconstruction method based on multi-dictionary learning and non-local information fusion
US9865037B2 (en) Method for upscaling an image and apparatus for upscaling an image
CN102629374B (en) Image super resolution (SR) reconstruction method based on subspace projection and neighborhood embedding
CN105046672A (en) Method for image super-resolution reconstruction
CN103150713A (en) Image super-resolution method of utilizing image block classification sparse representation and self-adaptive aggregation
CN103020936B (en) A kind of face image super-resolution reconstructing method
CN105488759B (en) A kind of image super-resolution rebuilding method based on local regression model
CN113762147B (en) Facial expression migration method and device, electronic equipment and storage medium
CN113222825B (en) Infrared image super-resolution reconstruction method based on visible light image training and application
CN105513033A (en) Super-resolution reconstruction method based on non-local simultaneous sparse representation
CN113450396A (en) Three-dimensional/two-dimensional image registration method and device based on bone features
CN104036468A (en) Super-resolution reconstruction method for single-frame images on basis of pre-amplification non-negative neighbor embedding
CN107154017A (en) A kind of image split-joint method based on SIFT feature Point matching
CN111242999B (en) Parallax estimation optimization method based on up-sampling and accurate re-matching
CN104091364B (en) Single-image super-resolution reconstruction method
CN112529777A (en) Image super-resolution analysis method based on multi-mode learning convolution sparse coding network

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C04 Withdrawal of patent application after publication (patent law 2001)
WW01 Invention patent application withdrawn after publication

Application publication date: 20130403