CN102999748A - Refactoring method for optimizing super resolution of facial images - Google Patents
Abstract
The invention belongs to the field of face image super-resolution reconstruction and provides an optimized method for reconstructing face images at super resolution. The method includes: step one, inputting a low-resolution face image and K low-resolution reference face images; step two, calculating the local embedding coefficients; step three, substituting the local embedding coefficients into a reconstruction model to compute the reconstructed high-resolution image; and step four, outputting the image solved in the previous step. The method can improve face recognition accuracy.
Description
Technical field
The invention belongs to the field of image super-resolution reconstruction, and in particular relates to an optimized face image super-resolution reconstruction method.
Background technology
Patent application No. 201210164069.9 discloses a face recognition method based on multi-manifold discriminant analysis and super-resolution. In the training stage, the method obtains a mapping matrix from the multi-manifold space of low-resolution face images to the multi-manifold space of high-resolution face images by multi-manifold discriminant analysis. It builds intra-class and inter-class similarity graphs in the original high-resolution multi-manifold space, uses these two neighbor graphs to construct a discriminant constraint term, and obtains the mapping matrix by optimizing a cost function composed of a reconstruction constraint term and the discriminant constraint term. In the recognition stage, the mapping matrix learned offline maps the low-resolution face image to be recognized into the high-resolution multi-manifold space, yielding a high-resolution face image.
However, the precision of images reconstructed by existing super-resolution methods is insufficient, which degrades face recognition performance.
Summary of the invention
The technical problem to be solved by this invention is to improve the quality of image reconstruction. To this end, this patent proposes an optimized face image super-resolution reconstruction method, which raises the precision of face recognition.
The technical scheme adopted by the present invention to solve the above technical problem is an optimized face image super-resolution reconstruction method comprising the following steps:
Step 1): input a low-resolution face image I and the K low-resolution reference face images I_k(x) nearest to I in Euclidean distance; the image of I_k(x) translated by p units under the affine translation operator is I_k(x+p); take K = 6.
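Step 1) amounts to a K-nearest-neighbor search in raw pixel space. A minimal sketch in NumPy, assuming all images are arrays of the same shape and distance is the L2 norm of the pixel difference; the function name is illustrative, not from the patent:

```python
import numpy as np

def select_references(I, candidates, K=6):
    """Return the K candidate low-resolution face images closest to I
    in Euclidean distance of raw pixel values (step 1 of the method)."""
    dists = [np.linalg.norm(I.astype(float) - c.astype(float)) for c in candidates]
    order = np.argsort(dists)[:K]
    return [candidates[i] for i in order]
```

The patent fixes K = 6 as a trade-off between reconstruction precision and speed; the sketch leaves K as a parameter.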
Step 2): enlarge the input low-resolution face image I and the K reference face images I_k(x) from step 1) by the composite barycentric rational interpolation algorithm, and denote the enlarged images by I_{L↑} and I_{L↑,k}, k = 1, 2, ..., K. Then apply an optical-flow method to I_{L↑} and I_{L↑,k} to obtain the high-resolution optical flow field. Let E_{R,k}(x), k = 1, 2, ..., K, be the registration error of the k-th reference sample at pixel x, computed from I_{L↑} and the image generated by registering I_{L↑,k} with the optical flow field. Substitute E_{R,k}(x) into formula (1.1) to obtain the per-sample weights b_k(x), collected as

B_x = diag[b_1(x) b_2(x) ... b_K(x)]  (1.2)

In formula (1.1), u_eps is a small positive constant that keeps the denominator from being 0, Ω is a neighborhood window of 7 × 7 pixels, and the summand reflects the registration error of the reference sample translated by q units near pixel x. Substitute the solved B_x into formula (2) as the weights balancing the local embedding coefficients. The registered high-resolution reference samples and the target image have approximately the same embedding coefficients at pixel x, and the embedding coefficients are computed by formula (2), in which G denotes all admissible pixel positions in the high-resolution image; γ balances the contributions of the two terms of formula (2), γ = 0.5; the first term reflects the local embedding relation that w_p(x) must satisfy; and the second term is its total variation. To solve formula (2), a time-varying partial-differential-equation method iterates w_p(x), where ∂w_p(x)/∂t is the change of the embedding coefficient over time t; discretizing this equation yields the numerical solution of the local embedding coefficient w_p(x).
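The images of formulas (1.1) and (2) are not reproduced in the record, but the text pins down the behavior of the weights: b_k(x) is roughly inversely proportional to the squared registration error accumulated over the 7 × 7 window Ω, normalized across samples, with u_eps guarding the denominator. A sketch under those assumptions; the exact published formula may differ:

```python
import numpy as np

def embedding_weights(err_maps, x, y, win=7, u_eps=1e-6):
    """err_maps: list of K registration-error maps E_{R,k}.
    Weight b_k(x) ~ 1 / (u_eps + squared error summed over a win x win
    window around (x, y)), normalized to sum to 1. This follows the
    text's description of formula (1.1); the published formula image
    is not reproduced, so treat this as an assumption."""
    r = win // 2
    raw = []
    for E in err_maps:
        patch = E[max(0, x - r):x + r + 1, max(0, y - r):y + r + 1]
        raw.append(1.0 / (u_eps + float(np.sum(patch ** 2))))
    b = np.asarray(raw)
    return b / b.sum()   # diagonal of B_x = diag[b_1(x) ... b_K(x)]
```

A well-registered sample (small E_{R,k}) receives a weight near 1, a badly registered one a weight near 0, matching the text's "larger error, smaller weight" rule.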
The composite barycentric rational interpolation algorithm is as follows:
Step 2.1: decompose the low-resolution face image I and each of the K low-resolution images nearest to I in Euclidean distance into the three color channels red, green, and blue; for each channel, take the pixel values in a neighborhood window of 4 × 4 pixels as the input pixel values f(x_i, y_j) at the interpolation nodes;
Step 2.2: perform the interpolation by formula (1). After each evaluation, scan the computed results progressively from left to right and top to bottom, and store the ordered results in the target image array as the final interpolated and enlarged image. Denote the enlarged images by I_{L↑} and I_{L↑,k}, k = 1, 2, ..., K.
The mathematical model of the composite barycentric rational interpolation is formula (1), in which m, n are positive integers (here m = 3, n = 3), x_i, y_j are the interpolation nodes, f(x_i, y_j) are the input pixel values at the nodes, and R(x, y) is the output pixel value after enlargement.
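The image of formula (1) is not reproduced, but barycentric rational interpolants share the standard form R(x) = Σ_i (w_i / (x − x_i)) f_i / Σ_i (w_i / (x − x_i)). A one-dimensional evaluation sketch, with the barycentric weights supplied by the caller; the patent's specific composite weights for m = n = 3 are not reproduced here and are an assumption left to the caller:

```python
import numpy as np

def barycentric_rational(xs, fs, ws, x):
    """Evaluate R(x) = sum_i (w_i f_i / (x - x_i)) / sum_i (w_i / (x - x_i)).
    xs: interpolation nodes, fs: node values f(x_i), ws: barycentric
    weights. A 2-D image interpolant would apply this along rows then
    columns of the 4 x 4 node window described in step 2.1."""
    xs, fs, ws = (np.asarray(a, dtype=float) for a in (xs, fs, ws))
    diff = x - xs
    hit = np.isclose(diff, 0.0)
    if hit.any():                 # x coincides with a node: exact value
        return float(fs[hit][0])
    t = ws / diff
    return float(np.dot(t, fs) / np.sum(t))
```

The barycentric form reproduces node values exactly and reproduces constants for any nonzero weight choice, which is why it is attractive for artifact-free image enlargement.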
Step 3): substitute the numerical solution of the local embedding coefficients w_p(x) obtained in step 2) into the reconstruction model and compute the super-resolution image. The reconstruction model is computed as follows. First, substitute the numerical solution of w_p(x) into formula (3), and estimate the target image by maximum a posteriori probability through formula (3). In formula (3), Q(I_h) is the cost function of the column vector of the high-resolution face image. Its first term is the data term: the sought high-resolution image, after degradation, should be consistent with the known observed sample. Its second term is the prior term: it enforces the linear embedding relation that every pixel in the reconstructed image should satisfy with its neighboring points. The parameter λ balances the relative contributions of the data term and the prior term.
In formula (3), I_h(x) is computed by formula (4), in which p is the spatial offset between pixel x and its neighboring point, C is the neighborhood window centered at x that defines the range of p, with 0 ≤ p ≤ 1, and w_p(x) is the linear embedding coefficient of the neighboring point (x + p).
In formula (3), I_1 is given by
I_1 = D B I_h + n  (5)
where I_1 is the column vector of the low-resolution face image, of dimension N_1; I_h is the column vector of the high-resolution face image, of dimension N_2; B is the blur matrix of size N_2 × N_2, generated by a Gaussian point-spread function and corresponding to the blur in the imaging process; D is the down-sampling matrix of size N_1 × N_2; and n is additive white Gaussian noise with mean 0.
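The observation model (5) can be simulated directly: B is a Gaussian blur and D a decimation. A NumPy sketch, with the blur sigma, scale factor, and noise level chosen for illustration rather than taken from the patent:

```python
import numpy as np

def degrade(I_h, scale=3, sigma=1.0, noise_std=0.0, rng=None):
    """Observation model I_1 = D B I_h + n: separable Gaussian blur (B),
    down-sampling by `scale` (D), additive zero-mean Gaussian noise (n).
    sigma, scale, and noise_std are illustrative choices."""
    r = int(3 * sigma)
    t = np.arange(-r, r + 1)
    g = np.exp(-t ** 2 / (2 * sigma ** 2))
    g /= g.sum()                                  # normalized Gaussian PSF
    blur = np.apply_along_axis(lambda m: np.convolve(m, g, mode='same'), 0, I_h)
    blur = np.apply_along_axis(lambda m: np.convolve(m, g, mode='same'), 1, blur)
    low = blur[::scale, ::scale]                  # decimation D
    if noise_std > 0:
        rng = rng or np.random.default_rng(0)
        low = low + rng.normal(0.0, noise_std, low.shape)
    return low
```

Simulating (5) this way is also how one would test a reconstruction: degrade a known I_h, then check how closely the method recovers it.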
Write Q(I_h) in formula (3) in the matrix form of formula (6), in which S_{-p} is the translation operator with displacement p, a matrix of size N_2 × N_2; W_p is the N_2 × N_2 diagonal matrix whose diagonal elements are the linear embedding coefficients w_p(x) of the pixels x in direction p; and E is the identity matrix of the same size as S_{-p} and W_p. The gradient of Q(I_h) can then be expressed as formula (7).
Use formula (7) to obtain the gradient of the cost function with respect to I_h(x), substitute this gradient into formula (8), and iterate with the gradient descent method to obtain the final super-resolution target image. In formula (8), t is the current iteration number and β is the iteration step size, β = 0.3. The initial value of the iteration is the image obtained by enlarging the input image with the composite barycentric rational interpolation.
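The iteration of formula (8) is plain gradient descent, I_h^{t+1} = I_h^t − β ∇Q(I_h^t). A generic sketch; `grad_Q` stands in for the gradient (7), since the full expression needs the matrices D, B, S_{-p}, and W_p, and β = 0.3 follows the patent's stated step size:

```python
import numpy as np

def gradient_descent(grad_Q, I0, beta=0.3, iters=200):
    """Iterate I^{t+1} = I^t - beta * grad_Q(I^t) as in formula (8),
    starting from the interpolated image I0."""
    I = np.array(I0, dtype=float)
    for _ in range(iters):
        I = I - beta * grad_Q(I)
    return I
```

For example, with the stand-in quadratic cost Q(I) = ||I − T||², whose gradient is 2(I − T), the iteration contracts toward T at rate |1 − 2β| per step.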
Step 4): output the super-resolution target image estimated by formula (8) in step 3).
λ = 0.8; taking λ = 0.8 keeps the balance of the weights moderate.
C is a neighborhood window of 3 × 3 or 4 × 4 pixels.
Compared with existing patents, the technical advantage of this patent is that a high-precision image interpolation method is introduced into the reconstruction process, so that images whose precision has been degraded can be reconstructed with high accuracy, and artifacts such as burrs and ghosting in the reconstructed image are suppressed.
Description of drawings
Fig. 1 is a flow diagram of an embodiment of the invention.
Embodiment
The present invention is further illustrated below with an embodiment.
As shown in Fig. 1, the detailed computation procedure of the optimized face image super-resolution reconstruction method of the embodiment of the invention is as follows.
An optimized face image super-resolution reconstruction method comprises the following steps:
Step 1): input a low-resolution face image I and the K low-resolution reference face images I_k(x) nearest to I in Euclidean distance; the image of I_k(x) translated by p units under the affine translation operator is I_k(x+p). Preferably, K = 6, which guarantees both the precision of the reconstruction and the speed of the computation.
Step 2): enlarge the input low-resolution face image I and the K reference face images I_k(x) from step 1) to a larger size with the composite barycentric rational interpolation algorithm, for example by a factor of 3. Denote the enlarged images by I_{L↑} and I_{L↑,k}, k = 1, 2, ..., K. Then apply an optical-flow method to I_{L↑} and I_{L↑,k} to obtain the high-resolution optical flow field. Let E_{R,k}(x), k = 1, 2, ..., K, be the registration error of the k-th reference sample at pixel x, computed from I_{L↑} and the image generated by registering I_{L↑,k} with the optical flow field. E_{R,k}(x) balances the weight of each reference sample when this step learns the local embedding coefficients at pixel x. Substitute E_{R,k}(x) into formula (1.1).
Formula (1.1) yields the per-sample weights b_k(x), collected as

B_x = diag[b_1(x) b_2(x) ... b_K(x)]  (1.2)

Here b_k(x) is the weight of the k-th reference sample and depends on its registration error E_{R,k}(x). In formula (1.1), the summand reflects the registration error of the reference sample translated by q units near pixel x, Ω is a neighborhood window of 7 × 7 pixels, the denominator is a normalizing factor, and u_eps is a small positive constant that keeps the denominator from being 0. Each reference sample's weight at a position is approximately inversely proportional to the square of its registration error: when the registration error of a reference sample at x is larger, its weight b_k(x) is smaller, and vice versa.
Because the continuity between pixels must be considered, the embedding coefficients obtained may be discontinuous. Moreover, when the number K of reference samples is small (for example K < |C|, where |C| denotes the number of neighboring points), the w_p(x) satisfying the condition of formula (2) is not unique. The algorithm therefore introduces total-variation minimization to additionally constrain the smoothness of the embedding coefficients. Solve for B_x and substitute the solved B_x into formula (2) as the weights balancing the local embedding coefficients. The registered high-resolution reference samples and the target image have approximately the same embedding coefficients at pixel x, and the required local embedding coefficients {w_p(x)}_{p∈C} should satisfy formula (2). In formula (2), G denotes all admissible pixel positions in the high-resolution image; γ balances the contributions of the two terms, γ > 0, preferably γ = 0.5; the first term reflects the local embedding relation that w_p(x) must satisfy; and the second term is its total variation. In image denoising, minimizing the total variation removes noise while preserving high-frequency information such as edges and textures; here the algorithm uses the total variation to suppress discontinuity of the embedding coefficients while retaining the local structural features of the high-resolution image contained in the coefficients. To solve formula (2), a time-varying partial-differential-equation method iterates w_p(x), where ∂w_p(x)/∂t is the change of the embedding coefficient over time t; discretizing this equation yields the numerical solution of w_p(x).
The composite barycentric rational interpolation algorithm is as follows:
Step 2.1: decompose the low-resolution face image I and each of the K low-resolution images nearest to I in Euclidean distance into the three color channels red, green, and blue; for each channel, take the pixel values in a neighborhood window of 4 × 4 pixels as the input pixel values f(x_i, y_j) at the interpolation nodes;
Step 2.2: perform the interpolation by formula (1). After each evaluation, scan the computed results progressively from left to right and top to bottom, and store the ordered results in the target image array as the final interpolated and enlarged image. Denote the enlarged images by I_{L↑} and I_{L↑,k}, k = 1, 2, ..., K.
The mathematical model of the composite barycentric rational interpolation is formula (1), in which m, n are positive integers (here m = 3, n = 3), x_i, y_j are the interpolation nodes, f(x_i, y_j) are the input pixel values at the nodes, and R(x, y) is the output pixel value after enlargement.
Step 3): substitute the numerical solution of the local embedding coefficients w_p(x) obtained in step 2) into the reconstruction model and compute the super-resolution image. The reconstruction model is computed as follows. First, substitute the numerical solution of w_p(x) into formula (3), and estimate the target image by maximum a posteriori probability through formula (3). In formula (3), Q(I_h) is the cost function of the column vector of the high-resolution face image. Its first term is the data term: the sought high-resolution image, after degradation, should be consistent with the known observed sample. Its second term is the prior term: it enforces the linear embedding relation that every pixel in the reconstructed image should satisfy with its neighboring points. The parameter λ balances the relative contributions of the data term and the prior term.
In formula (3), I_h(x) is computed by formula (4), in which p is the spatial offset between pixel x and its neighboring point, C is the neighborhood window centered at x that defines the range of p, with 0 ≤ p ≤ 1, and w_p(x) is the linear embedding coefficient of the neighboring point (x + p).
In formula (3), I_1 is given by
I_1 = D B I_h + n  (5)
where I_1 is the column vector of the low-resolution face image, of dimension N_1; I_h is the column vector of the high-resolution face image, of dimension N_2; B is the blur matrix of size N_2 × N_2, generated by a Gaussian point-spread function and corresponding to the blur in the imaging process; D is the down-sampling matrix of size N_1 × N_2; and n is additive white Gaussian noise with mean 0.
Write Q(I_h) in formula (3) in the matrix form of formula (6), in which S_{-p} is the translation operator with displacement p, a matrix of size N_2 × N_2; W_p is the N_2 × N_2 diagonal matrix whose diagonal elements are the linear embedding coefficients w_p(x) of the pixels x in direction p; and E is the identity matrix of the same size as S_{-p} and W_p. The gradient of Q(I_h) can then be expressed as formula (7).
Use formula (7) to obtain the gradient of the cost function with respect to I_h(x), substitute this gradient into formula (8), and iterate with the gradient descent method to obtain the final super-resolution target image. In formula (8), t is the current iteration number and β is the iteration step size, β = 0.3. The initial value of the iteration is the image obtained by enlarging the input image with the composite barycentric rational interpolation.
Step 4): output the super-resolution target image estimated by formula (8) in step 3).
λ = 0.8; taking λ = 0.8 keeps the balance of the weights moderate.
Preferably, C is a neighborhood window of 3 × 3 or 4 × 4 pixels.
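The window C just described feeds the linear embedding of formula (4): each high-resolution pixel is a weighted combination of its neighbors. A minimal sketch, with a dictionary of 3 × 3 offsets standing in for C; the function name and the particular offsets are illustrative, not from the patent:

```python
def embed_pixel(I, x, y, weights):
    """Formula (4)-style linear embedding: the value at pixel (x, y) is
    the weighted combination sum_p w_p * I(x + p) over offsets p in the
    neighborhood window C. `weights` maps an offset (dx, dy) to its
    embedding coefficient w_p; a 3 x 3 offset set is an illustrative
    stand-in for C."""
    return sum(w * I[x + dx, y + dy] for (dx, dy), w in weights.items())
```

The prior term of formula (3) then penalizes the difference between each reconstructed pixel and this embedded prediction.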
The above is only a preferred embodiment of the present invention and does not limit the invention in any form; any simple modification, equivalent variation, or adaptation of the above embodiment made according to the technical spirit of the present invention still falls within the scope of the present invention.
Claims (1)
1. An optimized face image super-resolution reconstruction method, characterized in that it comprises the following steps:
Step 1): input a low-resolution face image I and the K low-resolution reference face images I_k(x) nearest to I in Euclidean distance; the image of I_k(x) translated by p units under the affine translation operator is I_k(x+p); take K = 6;
Step 2): enlarge the input low-resolution face image I and the K reference face images I_k(x) from step 1) by the composite barycentric rational interpolation algorithm, and denote the enlarged images by I_{L↑} and I_{L↑,k}, k = 1, 2, ..., K. Then apply an optical-flow method to I_{L↑} and I_{L↑,k} to obtain the high-resolution optical flow field. Let E_{R,k}(x), k = 1, 2, ..., K, be the registration error of the k-th reference sample at pixel x, computed from I_{L↑} and the image generated by registering I_{L↑,k} with the optical flow field. Substitute E_{R,k}(x) into formula (1.1) to obtain the per-sample weights b_k(x), collected as

B_x = diag[b_1(x) b_2(x) ... b_K(x)]  (1.2)

In formula (1.1), u_eps is a small positive constant that keeps the denominator from being 0, Ω is a neighborhood window of 7 × 7 pixels, and the summand reflects the registration error of the reference sample translated by q units near pixel x. Substitute the solved B_x into formula (2) as the weights balancing the local embedding coefficients. The registered high-resolution reference samples and the target image have approximately the same embedding coefficients at pixel x, and the embedding coefficients are computed by formula (2), in which G denotes all admissible pixel positions in the high-resolution image; γ balances the contributions of the two terms of formula (2), γ = 0.5; the first term reflects the local embedding relation that w_p(x) must satisfy; and the second term is its total variation. To solve formula (2), a time-varying partial-differential-equation method iterates w_p(x), where ∂w_p(x)/∂t is the change of the embedding coefficient over time t; discretizing this equation yields the numerical solution of the local embedding coefficient w_p(x);
The composite barycentric rational interpolation algorithm is as follows:
Step 2.1: decompose the low-resolution face image I and each of the K low-resolution images nearest to I in Euclidean distance into the three color channels red, green, and blue; for each channel, take the pixel values in a neighborhood window of 4 × 4 pixels as the input pixel values f(x_i, y_j) at the interpolation nodes;
Step 2.2: perform the interpolation by formula (1). After each evaluation, scan the computed results progressively from left to right and top to bottom, and store the ordered results in the target image array as the final interpolated and enlarged image. Denote the enlarged images by I_{L↑} and I_{L↑,k}, k = 1, 2, ..., K.
The mathematical model of the composite barycentric rational interpolation is formula (1), in which m, n are positive integers (here m = 3, n = 3), x_i, y_j are the interpolation nodes, f(x_i, y_j) are the input pixel values at the nodes, and R(x, y) is the output pixel value after enlargement;
Step 3): substitute the numerical solution of the local embedding coefficients w_p(x) obtained in step 2) into the reconstruction model and compute the super-resolution image. The reconstruction model is computed as follows. First, substitute the numerical solution of w_p(x) into formula (3), and estimate the target image by maximum a posteriori probability through formula (3). In formula (3), Q(I_h) is the cost function of the column vector of the high-resolution face image. Its first term is the data term: the sought high-resolution image, after degradation, should be consistent with the known observed sample. Its second term is the prior term: it enforces the linear embedding relation that every pixel in the reconstructed image should satisfy with its neighboring points. The parameter λ balances the relative contributions of the data term and the prior term.
In formula (3), I_h(x) is computed by formula (4), in which p is the spatial offset between pixel x and its neighboring point, C is the neighborhood window centered at x that defines the range of p, with 0 ≤ p ≤ 1, and w_p(x) is the linear embedding coefficient of the neighboring point (x + p).
In formula (3), I_1 is given by
I_1 = D B I_h + n  (5)
where I_1 is the column vector of the low-resolution face image, of dimension N_1; I_h is the column vector of the high-resolution face image, of dimension N_2; B is the blur matrix of size N_2 × N_2, generated by a Gaussian point-spread function and corresponding to the blur in the imaging process; D is the down-sampling matrix of size N_1 × N_2; and n is additive white Gaussian noise with mean 0.
Write Q(I_h) in formula (3) in the matrix form of formula (6), in which S_{-p} is the translation operator with displacement p, a matrix of size N_2 × N_2; W_p is the N_2 × N_2 diagonal matrix whose diagonal elements are the linear embedding coefficients w_p(x) of the pixels x in direction p; and E is the identity matrix of the same size as S_{-p} and W_p. The gradient of Q(I_h) can then be expressed as formula (7).
Use formula (7) to obtain the gradient of the cost function with respect to I_h(x), substitute this gradient into formula (8), and iterate with the gradient descent method to obtain the final super-resolution target image. In formula (8), t is the current iteration number and β is the iteration step size, β = 0.3. The initial value of the iteration is the image obtained by enlarging the input image with the composite barycentric rational interpolation.
Step 4): output the super-resolution target image estimated by formula (8) in step 3).
λ = 0.8;
C is a neighborhood window of 3 × 3 or 4 × 4 pixels.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2012105376229A CN102999748A (en) | 2012-12-12 | 2012-12-12 | Refactoring method for optimizing super resolution of facial images |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2012105376229A CN102999748A (en) | 2012-12-12 | 2012-12-12 | Refactoring method for optimizing super resolution of facial images |
Publications (1)
Publication Number | Publication Date |
---|---|
CN102999748A true CN102999748A (en) | 2013-03-27 |
Family
ID=47928297
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN2012105376229A Withdrawn CN102999748A (en) | 2012-12-12 | 2012-12-12 | Refactoring method for optimizing super resolution of facial images |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN102999748A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108960099A (en) * | 2018-06-22 | 2018-12-07 | 哈尔滨工业大学深圳研究生院 | Method, system, device and storage medium for estimating face tilt angle |
CN108960099B (en) * | 2018-06-22 | 2021-07-06 | 哈尔滨工业大学深圳研究生院 | Method, system, equipment and storage medium for estimating left and right inclination angles of human face |
CN110288525A (en) * | 2019-05-21 | 2019-09-27 | 西北大学 | Multi-dictionary super-resolution image reconstruction method |
CN110288525B (en) * | 2019-05-21 | 2022-12-02 | 西北大学 | Multi-dictionary super-resolution image reconstruction method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108734659B (en) | Sub-pixel convolution image super-resolution reconstruction method based on multi-scale label | |
CN110119780B (en) | Hyper-spectral image super-resolution reconstruction method based on generation countermeasure network | |
Liu et al. | Single image super-resolution using multi-scale deep encoder–decoder with phase congruency edge map guidance | |
CN105427308B (en) | A kind of sparse and dense characteristic mates the method for registering images for combining | |
CN102136144B (en) | Image registration reliability model and reconstruction method of super-resolution image | |
US9865037B2 (en) | Method for upscaling an image and apparatus for upscaling an image | |
CN102629374B (en) | Image super resolution (SR) reconstruction method based on subspace projection and neighborhood embedding | |
CN103366347B (en) | Image super-resolution rebuilding method based on rarefaction representation | |
CN105046672A (en) | Method for image super-resolution reconstruction | |
CN105825477A (en) | Remote sensing image super-resolution reconstruction method based on multi-dictionary learning and non-local information fusion | |
CN105069825A (en) | Image super resolution reconstruction method based on deep belief network | |
CN101976435A (en) | Combination learning super-resolution method based on dual constraint | |
CN108520495B (en) | Hyperspectral image super-resolution reconstruction method based on clustering manifold prior | |
CN113450396B (en) | Three-dimensional/two-dimensional image registration method and device based on bone characteristics | |
CN113762147B (en) | Facial expression migration method and device, electronic equipment and storage medium | |
CN103295197A (en) | Image super-resolution rebuilding method based on dictionary learning and bilateral holomorphy | |
CN103020936A (en) | Super-resolution reconstruction method of facial image | |
CN105488759B (en) | A kind of image super-resolution rebuilding method based on local regression model | |
CN105513033A (en) | Super-resolution reconstruction method based on non-local simultaneous sparse representation | |
He et al. | Remote sensing image super-resolution using deep–shallow cascaded convolutional neural networks | |
CN115578255A (en) | Super-resolution reconstruction method based on inter-frame sub-pixel block matching | |
CN104091364B (en) | Single-image super-resolution reconstruction method | |
CN110097499B (en) | Single-frame image super-resolution reconstruction method based on spectrum mixing kernel Gaussian process regression | |
CN106157240A (en) | Remote sensing image super resolution method based on dictionary learning | |
CN106920213B (en) | Method and system for acquiring high-resolution image |
Legal Events
Date | Code | Title | Description
---|---|---|---
| C06 | Publication |
| PB01 | Publication |
| C10 | Entry into substantive examination |
| SE01 | Entry into force of request for substantive examination |
| C04 | Withdrawal of patent application after publication (patent law 2001) |
| WW01 | Invention patent application withdrawn after publication | Application publication date: 20130327