CN102521810B - Face super-resolution reconstruction method based on local constraint representation - Google Patents
- Publication number: CN102521810B (application CN201110421452A)
- Authority
- CN
- China
- Prior art keywords
- image
- low resolution
- resolution
- image block
- training set
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Landscapes
- Image Processing (AREA)
Abstract
The invention discloses a face super-resolution reconstruction method based on local constraint representation. The method comprises the following steps: dividing the input low-resolution face image and the face images in the high- and low-resolution training sets into mutually overlapping image blocks; for each image block of the input low-resolution face image, according to the prior that the representation of image blocks is local, calculating the optimal weight coefficients for its linear reconstruction by the image blocks at the corresponding position of every image in the low-resolution training set; substituting, one to one, the image blocks at the corresponding positions of the images in the low-resolution training set with the image blocks at the corresponding positions of the images in the high-resolution training set, and synthesizing the high-resolution image blocks by weighting; and fusing the image blocks into a high-resolution face image according to their positions on the face. The method proposes a local constraint representation model that adaptively selects image blocks adjacent to the input image block from the sample-block space of the training set to linearly reconstruct the input block, thereby obtaining the optimal weight coefficients and synthesizing a high-quality high-resolution image.
Description
Technical field
The present invention relates to the field of image super-resolution, and specifically to a face super-resolution reconstruction method based on local constraint representation.
Background technology
Surveillance cameras are widely used in security systems. In many cases, however, the distance between the camera and the scene of interest (such as a human face) is large, so that the resolution of the face images captured in the video is very low; the region occupied by the face often covers only tens of pixels. Because the resolution is so low, the face image of interest has lost too much detail, which makes the faces captured by surveillance cameras difficult to distinguish effectively, whether by people or by machines. Therefore, how to improve the quality of low-resolution face images and effectively enhance the resolution of poor-quality face images in surveillance video, so as to provide enough detailed features for subsequent face recognition, has become a problem demanding a prompt solution. Face super-resolution (also called face hallucination) is an image super-resolution reconstruction technique that produces a high-resolution face image from an input low-resolution face image.
In recent years, scholars have proposed a large number of learning-based face super-resolution methods. These methods rely on the prior information carried by a training set consisting of high- and low-resolution image pairs: given an input low-resolution face image, a high-resolution face image can be reconstructed by super-resolution. For example, in 2000 Freeman et al. first proposed a Markov network method in document 1 (W. Freeman, E. Pasztor, and O. Carmichael. Learning low-level vision. IJCV, 40(1):25–47, 2000), which is also the earliest learning-based super-resolution method. In the same year, Baker and Kanade, specifically for face images, proposed a face hallucination method in document 2 (S. Baker and T. Kanade. Hallucinating faces. In FG, Grenoble, France, Mar. 2000, 83–88). Subsequently, Liu et al. proposed a two-step face reconstruction approach in document 3 (C. Liu, H.Y. Shum, and C.S. Zhang. A two-step approach to hallucinating faces: global parametric model and local nonparametric model. In CVPR, pp. 192–198, 2001), synthesizing the global and local information of the face respectively. Since then, learning-based super-resolution methods have attracted extensive attention from scholars.
Based on the theory of manifold learning (document 4: S.T. Roweis and L.K. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500):2323–2326, 2000), in 2004 Chang et al., starting from the hypothesis that the high- and low-resolution sample libraries share similar local geometric structures, proposed a neighbor-embedding image super-resolution reconstruction method in document 5 (H. Chang, D.Y. Yeung, and Y.M. Xiong. Super-resolution through neighbor embedding. In CVPR, pp. 275–282, 2004) and obtained good reconstruction results. However, because the number of neighbor blocks selected by this method is fixed, over-fitting or under-fitting can appear when the input image block is represented. For this problem, in 2010 Ma et al. proposed a position-patch-based face super-resolution method in document 6 (X. Ma, J.P. Zhang, and C. Qi. Hallucinating face by position-patch. Pattern Recognition, 43(6):3178–3194, 2010), which reconstructs the high-resolution face image using all the face image blocks in the training set co-located with the input image block, avoiding steps such as manifold learning and feature extraction, improving efficiency and at the same time the quality of the synthesized image. Yet because this method solves by least squares, the representation coefficients of an image block are not unique when the number of images in the training sample is larger than the dimension of the image block. Therefore, in 2011 Jung et al. proposed a position-patch face super-resolution method based on convex optimization in document 7 (C. Jung, L. Jiao, B. Liu, and M. Gong, "Position-Patch Based Face Hallucination Using Convex Optimization," IEEE Signal Process. Lett., vol. 18, no. 6, pp. 367–370, 2011), adding a sparsity constraint to the solution of the block representation, which solves the non-uniqueness of the solution of the equations; but in order to make the representation of the input image block as sparse as possible, the method may choose image blocks that differ widely from the input block for the linear reconstruction when synthesizing it. Locality is thus not considered, and the reconstruction effect is unsatisfactory.
Summary of the invention
The object of the invention is to provide a face image super-resolution reconstruction method based on local constraint representation that solves the inaccurate representation of the input low-resolution image blocks in existing similar algorithms and improves the quality of the finally synthesized high-resolution face image.
To achieve the above object, the technical solution adopted by the invention is a face super-resolution reconstruction method based on local constraint representation, comprising the following steps:
Step 1: inputting a low-resolution face image, and dividing the input low-resolution face image, the low-resolution face sample images in the low-resolution training set and the high-resolution face sample images in the high-resolution training set into mutually overlapping image blocks;
Step 2: for the image block at each position of the low-resolution face image, calculating, under the locality constraint, the optimal weight coefficients for linearly reconstructing it from the image blocks at this position of all low-resolution face sample images in the low-resolution training set;
Step 3: replacing the image blocks of all low-resolution face sample images with the image blocks of the high-resolution face sample images at the corresponding positions, and synthesizing the high-resolution face image block by weighting with the optimal weight coefficients obtained in step 2;
Step 4: according to their positions on the face, fusing the high-resolution face image blocks synthesized in step 3 to obtain one high-resolution face image.
Moreover, in step 1, the rollback mode is adopted to divide the input low-resolution face image, the low-resolution face sample images and the high-resolution face sample images into mutually overlapping image blocks; the concrete dividing mode is as follows. The image blocks are divided in left-to-right, top-to-bottom order. When the division reaches the image border, if the remaining size is smaller than the preset size of the image block, the block is divided after rolling back, taking the edge of the original image as the reference: when the horizontal division reaches the right edge of the image, the block rolls back to the left and is taken with the right edge as the reference; when the vertical division reaches the bottom edge of the image, the block rolls back upwards and is taken with the bottom edge as the reference.
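For illustration only, the block division with rollback described above can be sketched as follows (a minimal sketch, not the patented implementation; the function names, and the symbols `a` for block side length and `c` for overlap, are the editor's own):

```python
# Sketch of the "rollback" block-division strategy: blocks of side `a` advance
# with stride a - c; a block that would cross the edge is rolled back so that
# it ends exactly at the edge of the original image.
def block_positions(length, a, c):
    """Top (or left) coordinates of blocks along one image axis of size `length`."""
    stride = a - c
    positions = []
    pos = 0
    while True:
        if pos + a >= length:
            positions.append(length - a)  # roll back: edge of the image is the reference
            break
        positions.append(pos)
        pos += stride
    return positions

def divide_blocks(image_h, image_w, a, c):
    """Left-to-right, top-to-bottom list of (top, left) block corners."""
    rows = block_positions(image_h, a, c)
    cols = block_positions(image_w, a, c)
    return [(r, col) for r in rows for col in cols]
```

For a 30 × 25 low-resolution image with 3 × 3 blocks and a 1-pixel overlap this yields 15 × 12 block positions, with the last block in each row and column rolled back to end exactly at the image edge.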
Further, let the set of image blocks obtained by dividing the low-resolution face image X_L be {X_L(i,j) | 1 ≤ i ≤ U, 1 ≤ j ≤ V}, and let the sets of image blocks obtained by correspondingly dividing the high-resolution training set and the low-resolution training set be {Y_H^m(i,j)} and {Y_L^m(i,j)} respectively, where M denotes the number of low-resolution face sample images in the low-resolution training set and, equally, the number of high-resolution face sample images in the high-resolution training set, 1 ≤ m ≤ M; (i,j) denotes the row and column number of a divided image block; and U and V denote respectively the number of rows and columns of image blocks.
In step 2, the following formula is adopted to calculate the optimal weight coefficients:

w*(i,j) = argmin_{w(i,j)} { ‖X_L(i,j) − ∑_{m=1}^{M} w_m(i,j)·Y_L^m(i,j)‖² + τ·‖d(i,j) ∘ w(i,j)‖² }, subject to ∑_{m=1}^{M} w_m(i,j) = 1,

wherein w_m(i,j) is the reconstruction coefficient of the image block in row i, column j of the m-th low-resolution face sample image of the low-resolution training set; w(i,j) = [w_1(i,j), w_2(i,j), …, w_M(i,j)] is the row vector formed by the reconstruction coefficients of the image blocks in row i, column j of all M low-resolution face sample images; d_m(i,j) is the penalty factor of the reconstruction coefficient w_m(i,j), and d(i,j) = [d_1(i,j), d_2(i,j), …, d_M(i,j)]; τ is the regularization parameter balancing the reconstruction error and the locality constraint; "∘" denotes the element-wise product of two vectors; ‖·‖² denotes the squared Euclidean distance; and argmin returns the value w*(i,j) = [w*_1(i,j), …, w*_M(i,j)] of the variable w(i,j) at which the function attains its minimum, i.e. the desired optimal weight coefficients, where w*_m(i,j) is the optimal weight coefficient of the image block in row i, column j of the m-th low-resolution face sample image of the low-resolution training set when synthesizing the image block in row i, column j of the low-resolution face image.

The penalty factor is computed as

d_m(i,j) = ‖X_L(i,j) − Y_L^m(i,j)‖²,

wherein X_L(i,j) is the image block in row i, column j of the low-resolution face image and Y_L^m(i,j) is the image block in row i, column j of the m-th low-resolution face sample image of the low-resolution training set.
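As a minimal sketch (the editor's own, with assumed names), the penalty factors d_m(i,j) described above are simply the squared Euclidean distances between the input block and each co-located training block:

```python
import numpy as np

# Locality penalties: d_m = ||x - y_m||^2 for every co-located training block y_m.
def locality_penalties(x, G):
    """x: input block as a flat vector of shape (p,); G: (p, M) matrix whose
    columns are the M training blocks at the same position. Returns (M,)."""
    diff = G - x[:, None]           # broadcast x against every column
    return np.sum(diff * diff, axis=0)
```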
Further, in step 3, the high-resolution face image block synthesized by weighting with the optimal weight coefficients obtained in step 2 is calculated with the following formula:

X_H(i,j) = ∑_{m=1}^{M} w*_m(i,j)·Y_H^m(i,j),

wherein w*_m(i,j) is the optimal weight coefficient of the image block in row i, column j of the m-th low-resolution face sample image of the low-resolution training set when synthesizing the image block in row i, column j of the low-resolution face image in step 2, and Y_H^m(i,j) is the image block in row i, column j of the m-th high-resolution face sample image of the high-resolution training set.
By adding the locality constraint condition, the invention adaptively chooses image blocks that are neighbors of the input image block to represent it. It thereby avoids the over-fitting or under-fitting caused by a fixed number of neighbor blocks in similar algorithms (document 5), the multiple solutions caused by selecting too many image blocks (document 6), and the neglect of locality caused by over-emphasizing sparsity (document 7), so that the representation coefficients of the input image block are more accurate and a higher-quality high-resolution face image is finally obtained.
Description of drawings
Fig. 1 is the flow chart of the embodiment of the invention;
Fig. 2 shows the block division of a face image;
Fig. 3 is a schematic diagram of the rollback when the horizontal division reaches the right edge of the image in the block division of the embodiment;
Fig. 4 is a schematic diagram of the rollback when the vertical division reaches the bottom edge of the image in the block division of the embodiment;
Fig. 5 compares the mean PSNR values obtained by the method of the invention and by prior-art methods;
Fig. 6 compares the mean SSIM values obtained by the method of the invention and by prior-art methods.
Embodiment
The technical solution of the invention can be implemented in software as an automatic pipeline. The technical solution is further described below with reference to the drawings and the embodiment. Referring to Fig. 1, the concrete steps of the embodiment are as follows.
The low-resolution training set contains the low-resolution face sample images and the high-resolution training set contains the high-resolution face sample images; together the two sets provide the predefined pairs of training samples. Each low-resolution face sample image in the low-resolution training set is derived from a high-resolution face sample image in the high-resolution training set. In the embodiment, all high-resolution images have a pixel size of 120 × 100 and all low-resolution images a pixel size of 30 × 25; each low-resolution face sample image is the result of four-times bicubic down-sampling of the corresponding high-resolution face sample image.
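As an aside, producing a low-resolution sample by factor-4 down-sampling can be sketched as follows (editor's sketch; a simple 4 × 4 block mean stands in here for the bicubic down-sampling the embodiment actually uses, and the function name is the editor's own):

```python
import numpy as np

# Factor-4 down-sampling by averaging each 4x4 block of the high-resolution image.
def downsample4(hr):
    """hr: (H, W) array with H and W divisible by 4 -> (H//4, W//4) array."""
    H, W = hr.shape
    return hr.reshape(H // 4, 4, W // 4, 4).mean(axis=(1, 3))
```

Applied to a 120 × 100 high-resolution image this yields the 30 × 25 low-resolution size used in the embodiment.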
In the embodiment, let the input low-resolution face image be X_L, the high-resolution training set be {Y_H^m | 1 ≤ m ≤ M} and the low-resolution training set be {Y_L^m | 1 ≤ m ≤ M}; that is, the low-resolution training set contains the low-resolution face sample images Y_L^m and the high-resolution training set contains the high-resolution face sample images Y_H^m, 1 ≤ m ≤ M. For the sake of the subsequent replacement, the embodiment labels the low-resolution and high-resolution face sample images consistently: the low-resolution face sample image Y_L^m is the result of four-times bicubic down-sampling of the high-resolution face sample image Y_H^m. The number of low-resolution face sample images in the low-resolution training set is the same as the number of high-resolution face sample images in the high-resolution training set, namely M.

Let the set of image blocks obtained by dividing the low-resolution face image X_L be {X_L(i,j) | 1 ≤ i ≤ U, 1 ≤ j ≤ V}, and let the sets of image blocks obtained by correspondingly dividing the high-resolution and low-resolution training sets be {Y_H^m(i,j)} and {Y_L^m(i,j)} respectively. As shown in Fig. 2, (i,j) denotes the position of an image block in the block coordinate system, i.e. the row and column number of the image block. Taking the top-left of the image as the starting point of the division, the blocks obtained have positions (i,j), 1 ≤ i ≤ U, 1 ≤ j ≤ V; the blocks adjacent to the block at position (i,j) above, below, to its left and to its right have coordinates (i−1,j), (i+1,j), (i,j−1) and (i,j+1). U and V denote respectively the number of rows and columns of image blocks. The invention divides the input low-resolution face image, all low-resolution face sample images and all high-resolution face sample images into mutually overlapping image blocks in a consistent manner, i.e. the concrete values of U and V are identical for every image.

The concrete values of U and V follow from the dividing mode. The invention divides an image into mutually overlapping blocks as follows: taking the top-left of the image as the starting point, a block of size a × a (unit: pixel) is chosen each time such that it overlaps the already-divided part above it and to its left by c pixels (except when the block lies on the top or leftmost edge of the face image). The values are therefore:

U = ⌈(H − a)/(a − c)⌉ + 1,    (1)
V = ⌈(W − a)/(a − c)⌉ + 1,    (2)

wherein H and W denote respectively the number of rows and columns of the image (unit: pixel), a denotes the side length of the square image blocks, c denotes the number of overlapping pixels between adjacent image blocks, and ⌈·⌉ returns the smallest integer greater than or equal to its argument. The block at position (i,j) therefore overlaps each of its adjacent blocks above, below, left and right in a rectangular region of c × a (or a × c) pixels, except of course when the block lies at the image border.
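The number of blocks per axis implied by the division above — the smallest count of side-a blocks with stride a − c needed to cover the axis — can be checked directly (editor's sketch and names):

```python
import math

# Blocks per axis: ceil((length - a) / (a - c)) + 1, where `a` is the block
# side length and `c` the overlap between adjacent blocks, in pixels.
def block_count(length, a, c):
    return math.ceil((length - a) / (a - c)) + 1
```

With the embodiment's sizes this gives 15 × 12 blocks both for the 120 × 100 high-resolution images (a = 12, c = 4) and for the 30 × 25 low-resolution images (a = 3, c = 1).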
When dividing the image into blocks, to avoid the change of image size that cropping or padding would cause, the invention adopts a "rollback" strategy: when a block to be divided would exceed the image border (the right or the bottom edge), the block is divided after rolling back, taking the edge of the original image as the reference. As shown in Fig. 3, when the horizontal division exceeds the right edge of the image, the block rolls back to the left and is taken with the right edge as the reference; similarly, as shown in Fig. 4, when the vertical division exceeds the bottom edge of the image, the block rolls back upwards and is taken with the bottom edge as the reference.
Step 2: for the image block X_L(i,j) at each position of the low-resolution face image, calculate, under the locality constraint, the optimal weight coefficients for linearly reconstructing it from the image blocks at this position of all low-resolution face sample images in the low-resolution training set.
In the embodiment, the optimal weight coefficients are obtained from the following formula:

w*(i,j) = argmin_{w(i,j)} { ‖X_L(i,j) − ∑_{m=1}^{M} w_m(i,j)·Y_L^m(i,j)‖² + τ·‖d(i,j) ∘ w(i,j)‖² }, subject to ∑_{m=1}^{M} w_m(i,j) = 1.    (3)

This formula consists of two parts: the first is the super-resolution reconstruction constraint, and the second is the locality constraint on the representation of the input image block. Wherein, w_m(i,j) is the reconstruction coefficient of the image block in row i, column j of the m-th low-resolution face sample image of the low-resolution training set; w(i,j) = [w_1(i,j), w_2(i,j), …, w_M(i,j)] is the row vector formed by the reconstruction coefficients of the image blocks in row i, column j of all M low-resolution face sample images; d_m(i,j) is the penalty factor of the reconstruction coefficient w_m(i,j), and d(i,j) = [d_1(i,j), …, d_M(i,j)]; τ is the regularization parameter balancing the reconstruction error and the locality constraint; "∘" denotes the element-wise product of two vectors; ‖·‖² denotes the squared Euclidean distance; argmin returns the value w*(i,j) = [w*_1(i,j), …, w*_M(i,j)] of the variable w(i,j) at which the function attains its minimum, i.e. the desired optimal weight coefficients, where w*_m(i,j) is the optimal weight coefficient of the image block in row i, column j of the m-th low-resolution face sample image when synthesizing the image block in row i, column j of the low-resolution face image. X_L(i,j) is the image block in row i, column j of the input low-resolution face image, and Y_L^m(i,j) is the image block in row i, column j of the m-th low-resolution face sample image of the low-resolution training set.

When an image block Y_L^m(i,j) is far from the input image block X_L(i,j), a large penalty is imposed on w_m(i,j); conversely, when an image block Y_L^m(i,j) is close to the input image block X_L(i,j), a small penalty is imposed. Minimizing formula (3) therefore guarantees that, as far as possible, sample image blocks that are neighbors of X_L(i,j) are chosen to represent it. The embodiment of the invention uses the squared Euclidean distance as the penalty factor:

d_m(i,j) = ‖X_L(i,j) − Y_L^m(i,j)‖².    (4)

In formula (3), τ is the regularization parameter that balances the reconstruction constraint and the locality constraint. The reconstruction effect differs as τ takes different values: when τ = 0, the local constraint representation method of the invention degenerates to the least-squares representation method of document 6. The invention suggests a value of τ between 0.02 and 0.1.
In implementation, the solution of formula (3) can be obtained from the following formula:

w*(i,j)ᵀ = (C + τ·diag(d(i,j) ∘ d(i,j)))⁻¹·1ᵀ / (1·(C + τ·diag(d(i,j) ∘ d(i,j)))⁻¹·1ᵀ),    (5)

which gives, under the locality constraint, the optimal weight coefficients for linearly reconstructing the input low-resolution image block from all low-resolution face sample image blocks in the low-resolution training set. The matrix C may be defined as:

C = (X_L(i,j)·1 − G(i,j))ᵀ·(X_L(i,j)·1 − G(i,j)),    (6)

wherein G(i,j) = [Y_L^1(i,j), Y_L^2(i,j), …, Y_L^M(i,j)] is the matrix formed by the column vectors corresponding to the image blocks in row i, column j of all M low-resolution face sample images in the low-resolution training set, 1 is the row vector of length M whose elements are all 1, and (·)ᵀ denotes the matrix transpose.
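A minimal numerical sketch of the closed-form solution described above (this reflects the editor's reading of the reconstructed formulas — in particular the sum-to-one normalization and the diag(d ∘ d) term — and all names are the editor's own, not the patented implementation):

```python
import numpy as np

# Locality-constrained weights: solve (C + tau*diag(d o d)) w = 1 and normalize
# so the weights sum to 1, where C is the local covariance of the input block
# against the co-located training blocks.
def lcr_weights(x, G, tau):
    """x: input LR block, flat (p,); G: (p, M) columns are co-located LR
    training blocks; tau: regularization parameter. Returns weights (M,)."""
    M = G.shape[1]
    d = np.sum((G - x[:, None]) ** 2, axis=0)   # penalty factors (squared distances)
    E = x[:, None] - G                          # X_L(i,j)*1 - G(i,j)
    A = E.T @ E + tau * np.diag(d * d)          # C + tau*diag(d o d)
    w = np.linalg.solve(A, np.ones(M))          # (C + tau*diag(d o d))^{-1} 1^T
    return w / w.sum()                          # sum-to-one normalization
```

Training blocks near the input receive small penalties and hence large weights; distant blocks are suppressed, which is the locality behavior the patent aims at.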
Step 3: the image blocks of all low-resolution face sample images are replaced by the image blocks of the high-resolution face sample images at the corresponding positions, and the high-resolution face image block is synthesized by weighting with the optimal weight coefficients obtained in step 2. The embodiment expresses the high-resolution face image block as:

X_H(i,j) = ∑_{m=1}^{M} w*_m(i,j)·Y_H^m(i,j),    (7)

wherein w*_m(i,j) is the optimal weight coefficient of the image block in row i, column j of the m-th low-resolution face sample image of the low-resolution training set when synthesizing the image block in row i, column j of the low-resolution face image in step 2, and Y_H^m(i,j) is the image block in row i, column j of the m-th high-resolution face sample image of the high-resolution training set.
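The weighted synthesis above is a straightforward linear combination (editor's sketch and names):

```python
import numpy as np

# X_H(i,j) = sum_m w*_m(i,j) * Y_H^m(i,j): weighted sum of the co-located
# high-resolution training blocks, using the weights found on the LR blocks.
def synthesize_hr_block(w, G_hr):
    """w: (M,) optimal weights; G_hr: (q, M) columns are HR training blocks."""
    return G_hr @ w
```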
As shown in Fig. 1, for the image block at a certain position of the low-resolution face image, the weight coefficients w*_1, w*_2, …, w*_M for linearly reconstructing it from the image blocks at this position of all M low-resolution face sample images in the low-resolution training set are calculated under the local constraint representation. Representing the input image block approximately by sample image blocks incurs a certain error, so Fig. 1 uses the approximate-equality sign ≌. The image blocks of all low-resolution face sample images are then replaced by the image blocks of the high-resolution face sample images at the corresponding positions, and the corresponding high-resolution face image blocks are weighted by w*_1, w*_2, …, w*_M and summed, yielding the high-resolution image block at this position. In the replacement, the image block Y_H^m(i,j) of each high-resolution face sample image replaces the image block Y_L^m(i,j) of the low-resolution face sample image obtained from it by down-sampling.
Step 4: according to their positions on the face, the high-resolution face image blocks synthesized in step 3 are fused to obtain one high-resolution face image.
The embodiment splices all high-resolution face image blocks obtained in step 3 according to their positions on the face; the pixel values in the overlapping regions between adjacent image blocks can be obtained by averaging. For ease of implementation, an adoptable implementation procedure is given:
Step a: initialize an image matrix I of the same size as the high-resolution image (all elements 0) and an overlap-count matrix F (all elements 0);
Step b: add a high-resolution face image block into I at its position;
Step c: add 1 to all elements of F at the corresponding positions, recording that these pixel positions have been covered once more;
Step d: repeat steps b and c until all high-resolution image blocks have been merged;
Step e: divide the image matrix I element-wise by the overlap-count matrix F ("element-wise division" of two matrices of the same size means dividing the elements at corresponding positions).
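Steps a–e above can be sketched as follows (a minimal sketch with the editor's names; `I` and `F` play the roles of the image and overlap-count matrices):

```python
import numpy as np

# Fuse overlapping HR blocks: accumulate blocks into I, count coverage in F,
# then divide element-wise so that overlapped pixels are averaged.
def fuse_blocks(blocks, positions, hr_shape, a):
    """blocks: list of (a, a) HR blocks; positions: matching (top, left)
    corners; hr_shape: (H, W) of the output image; a: block side length."""
    I = np.zeros(hr_shape)                  # step a: image accumulator
    F = np.zeros(hr_shape)                  # step a: overlap-count matrix
    for blk, (r, c) in zip(blocks, positions):
        I[r:r + a, c:c + a] += blk          # step b: add block at its position
        F[r:r + a, c:c + a] += 1            # step c: record coverage
    return I / F                            # step e: element-wise average
```

Every pixel must be covered by at least one block (guaranteed by the rollback division), so F has no zero entries.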
The resulting high-resolution face image serves as the prediction output, and the prediction stage is complete.
The invention adds a locality constraint condition on the basis of the method of document 6, solving the problem that the least-squares solution is not unique when there are too many face image sample blocks. The method of document 7 adds a sparsity constraint condition but ignores the prior feature of locality. The method of the invention obtains a more accurate representation of the image blocks and can therefore synthesize a higher-quality high-resolution face image.
To illustrate the effect of the invention, an experimental comparison is given below.
The FEI face database was adopted: 200 different individuals (100 men, 100 women), each with one neutral-expression frontal face image and one smiling frontal face image; all images are normalized to 120 × 100 pixels. 360 images were chosen for training, and the remaining 40 were used as test images. Each high-resolution training image was smoothed with a 4 × 4 mean filter and down-sampled by a factor of 4 to obtain a 30 × 25 low-resolution image. The block sizes in the embodiment are: the high-resolution face images are divided into 12 × 12 image blocks with a 4-pixel overlap, and the low-resolution face images into 3 × 3 image blocks with a 1-pixel overlap; that is, for the high-resolution images H = 120, W = 100, a = 12, c = 4, and for the low-resolution images H = 30, W = 25, a = 3, c = 1.
The number of neighbor blocks K of the neighborhood-embedding method of document 5 is set to 200. The reconstruction error of the sparse-constraint method of document 7 is set to 1.0, and the sparse solver of document 9 is employed (E. Candes and J. Romberg, l1-Magic: Recovery of Sparse Signals via Convex Programming, 2005 [Online]. Available: http://www.acm.caltech.edu/l1magic/). The only parameter of the inventive method, τ, is set to 0.04.
Peak signal-to-noise ratio (PSNR, unit: dB) is the most general and most popular objective measure of image quality; SSIM (document 8: Z. Wang, A. Bovik, H. Sheikh, and E. Simoncelli, "Image quality assessment: From error visibility to structural similarity," IEEE Trans. Image Process., vol. 13, no. 4, pp. 600–612, 2004) measures the similarity of two images, and the closer its value is to 1, the better the reconstruction effect. The PSNR and SSIM values obtained by the inventive method and by the methods of documents 5, 6 and 7 (averaged over all 40 test face images) are compared in Fig. 5 and Fig. 6 respectively. The mean PSNR values of documents 5, 6, 7 and the inventive method are in turn 31.22, 31.90, 32.11 and 32.76 dB; the mean SSIM values of documents 5, 6, 7 and the inventive method are in turn 0.8972, 0.9034, 0.9052 and 0.9145. Over the best compared algorithm (document 7), the inventive method improves PSNR by 0.65 dB and SSIM by 0.0093.
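For reference, the PSNR figure of merit used in the comparison can be computed as follows (a standard definition, sketched by the editor for 8-bit images with peak value 255; SSIM is defined in document 8):

```python
import numpy as np

# PSNR in dB between a reference image and a reconstruction, peak value 255.
def psnr(ref, img):
    mse = np.mean((ref.astype(np.float64) - img.astype(np.float64)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)
```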
Claims (3)
1. A face super-resolution reconstruction method based on local constraint representation, characterized by comprising the steps of:
Step 1: inputting a low-resolution face image, and dividing the input low-resolution face image, the low-resolution face sample images in the low-resolution training set and the high-resolution face sample images in the high-resolution training set into mutually overlapping image blocks;
Step 2: for the image block at each position of the low-resolution face image, calculating, under the locality constraint, the optimal weight coefficients for linearly reconstructing it from the image blocks at this position of all low-resolution face sample images in the low-resolution training set;
Let the set of image blocks obtained by dividing the low-resolution face image X_L be {X_L(i,j) | 1 ≤ i ≤ U, 1 ≤ j ≤ V}, and let the sets of image blocks obtained by correspondingly dividing the high-resolution training set and the low-resolution training set be {Y_H^m(i,j)} and {Y_L^m(i,j)} respectively; M denotes the number of low-resolution face sample images in the low-resolution training set and the number of high-resolution face sample images in the high-resolution training set, 1 ≤ m ≤ M; (i,j) denotes the row and column number of a divided image block; U and V denote respectively the number of rows and columns of image blocks;
in step 2, the following formula is adopted to calculate the optimal weight coefficients:

w*(i,j) = argmin_{w(i,j)} { ‖X_L(i,j) − ∑_{m=1}^{M} w_m(i,j)·Y_L^m(i,j)‖² + τ·‖d(i,j) ∘ w(i,j)‖² }, subject to ∑_{m=1}^{M} w_m(i,j) = 1,

wherein w_m(i,j) is the reconstruction coefficient of the image block in row i, column j of the m-th low-resolution face sample image in the low-resolution training set; w(i,j) is the row vector formed by the reconstruction coefficients of the image blocks in row i, column j of all M low-resolution face sample images, w(i,j) = [w_1(i,j), w_2(i,j), …, w_m(i,j), …, w_M(i,j)]; d_m(i,j) is the penalty factor of the reconstruction coefficient w_m(i,j), d(i,j) = [d_1(i,j), …, d_M(i,j)]; τ is the regularization parameter balancing the reconstruction error and the locality constraint; "∘" denotes the element-wise product of two vectors; ‖·‖² denotes the squared Euclidean distance; argmin returns the value w*(i,j) of the variable w(i,j) at which the function attains its minimum, i.e. the desired optimal weight coefficients; w*_m(i,j) is the optimal weight coefficient of the image block in row i, column j of the m-th low-resolution face sample image in the low-resolution training set when synthesizing the image block in row i, column j of the low-resolution face image;

the computing formula of said penalty factor d_m(i,j) is as follows:

d_m(i,j) = ‖X_L(i,j) − Y_L^m(i,j)‖²,

wherein X_L(i,j) is the image block in row i, column j of the low-resolution face image, and Y_L^m(i,j) is the image block in row i, column j of the m-th low-resolution face sample image of the low-resolution training set;
Step 3: replacing the image blocks of all low-resolution face sample images with the image blocks of the high-resolution face sample images at the corresponding positions, and synthesizing the high-resolution face image block by weighting with the optimal weight coefficients obtained in step 2;
Step 4: according to their positions on the face, fusing the high-resolution face image blocks synthesized in step 3 to obtain one high-resolution face image.
2. The face super-resolution reconstruction method based on local constraint representation according to claim 1, characterized in that: in step 1, the rollback mode is adopted to divide the input low-resolution face image, the low-resolution face sample images and the high-resolution face sample images into mutually overlapping image blocks, the concrete dividing mode being as follows: the image blocks are divided in left-to-right, top-to-bottom order; when the division reaches the image border, if the remaining size is smaller than the preset size of the image block, the block is divided after rolling back, taking the edge of the original image as the reference, including rolling back to the left and taking the right edge as the reference when the horizontal division reaches the right edge of the image, and rolling back upwards and taking the bottom edge as the reference when the vertical division reaches the bottom edge of the image.
3. The face super-resolution reconstruction method based on local constraint representation according to claim 1, characterized in that: in step 3, the high-resolution face image block X_H(i, j) synthesized by weighting with the optimal weight coefficients obtained in step 2 is calculated by the following formula:

X_H(i, j) = Σ_{m=1}^{M} w_m(i, j) · X_H^m(i, j)

where w_m(i, j) is the optimal weight coefficient, obtained in step 2 when synthesizing the image block in row i, column j of the input low-resolution face image, for the image block in row i, column j of the m-th low-resolution face sample image in the low-resolution training set; X_H^m(i, j) is the image block in row i, column j of the m-th high-resolution face sample image in the high-resolution training set; and M is the number of sample images in the training set.
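The weighted synthesis of claim 3 reduces to a per-position weighted sum of high-resolution sample blocks. A toy numerical check in numpy, with hypothetical weights and three illustrative 4x4 sample blocks:

```python
import numpy as np

# Three hypothetical 4x4 high-resolution sample blocks at position (i, j)
# and weights carried over from step 2 (all values illustrative).
blocks = np.stack([np.full((4, 4), v) for v in (1.0, 2.0, 4.0)])
w = np.array([0.5, 0.3, 0.2])            # optimal weights, summing to one
X_H = np.tensordot(w, blocks, axes=1)    # X_H(i,j) = sum_m w_m * X_H^m(i,j)
assert np.allclose(X_H, 0.5 * 1.0 + 0.3 * 2.0 + 0.2 * 4.0)  # every pixel 1.9
```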
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN 201110421452 CN102521810B (en) | 2011-12-16 | 2011-12-16 | Face super-resolution reconstruction method based on local constraint representation |
Publications (2)
Publication Number | Publication Date |
---|---|
CN102521810A CN102521810A (en) | 2012-06-27 |
CN102521810B true CN102521810B (en) | 2013-09-18 |
Family
ID=46292714
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN 201110421452 Expired - Fee Related CN102521810B (en) | 2011-12-16 | 2011-12-16 | Face super-resolution reconstruction method based on local constraint representation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN102521810B (en) |
Families Citing this family (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103034974B (en) * | 2012-12-07 | 2015-12-23 | Face image super-resolution reconstruction method based on support-set-driven sparse coding
CN103208109B (en) * | 2013-04-25 | 2015-09-16 | Face hallucination method based on local-constraint iterative neighbor embedding
CN104574455B (en) * | 2013-10-29 | 2017-11-24 | Image reconstruction method and device
CN103824272B (en) * | 2014-03-03 | 2016-08-17 | Face super-resolution reconstruction method based on k-nearest-neighbor re-identification
CN104091320B (en) * | 2014-07-16 | 2017-03-29 | Noise-robust face super-resolution reconstruction method based on data-driven local feature conversion
CN106558018B (en) * | 2015-09-25 | 2019-08-06 | Component-decomposition-based video face hallucination method and device
CN105405097A (en) * | 2015-10-29 | 2016-03-16 | Robust face super-resolution processing method and system based on inverse manifold constraints
CN105550649B (en) * | 2015-12-09 | 2019-03-08 | Very-low-resolution face recognition method and system based on unified coupled local constraint representation
CN105469359B (en) * | 2015-12-09 | 2019-05-03 | Face super-resolution reconstruction method based on local-constraint low-rank representation
CN105787462B (en) * | 2016-03-16 | 2019-05-03 | Very-low-resolution face recognition method and system based on semi-coupled discriminative dictionary learning
CN106203269A (en) * | 2016-06-29 | 2016-12-07 | Face super-resolution processing method and system based on deformable local patches
CN106530231B (en) * | 2016-11-09 | 2020-08-11 | Super-resolution image reconstruction method and system based on deep collaborative representation
CN107292865B (en) * | 2017-05-16 | 2021-01-26 | Three-dimensional display method based on two-dimensional image processing
CN107154023B (en) * | 2017-05-17 | 2019-11-05 | Face super-resolution reconstruction method based on generative adversarial network and sub-pixel convolution
CN107203967A (en) * | 2017-05-25 | 2017-09-26 | Face super-resolution reconstruction method based on contextual image blocks
CN107633483A (en) * | 2017-09-18 | 2018-01-26 | Illumination-robust face image super-resolution method
CN109117892B (en) * | 2018-08-28 | 2021-07-27 | Deep-learning detection algorithm for training on large-size images
CN113887371B (en) * | 2021-09-26 | 2024-05-28 | Data enhancement method for low-resolution face recognition
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101635048A (en) * | 2009-08-20 | 2010-01-27 | 上海交通大学 | Super-resolution processing method of face image integrating global feature with local information |
CN102024266A (en) * | 2010-11-04 | 2011-04-20 | 西安电子科技大学 | Image structure model-based compressed sensing image reconstruction method |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4944639B2 (en) * | 2007-03-01 | 2012-06-06 | 富士フイルム株式会社 | Image processing apparatus, image processing method, and photographing apparatus |
Non-Patent Citations (3)
Title |
---|
JP 2008-219271 A (laid-open publication), 2008-09-18 |
Xiang Ma et al., "Position-based face hallucination method," ICME 2009, 2009 (entire document) * |
Also Published As
Publication number | Publication date |
---|---|
CN102521810A (en) | 2012-06-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN102521810B (en) | Face super-resolution reconstruction method based on local constraint representation | |
Zhang et al. | Residual networks for light field image super-resolution | |
CN110443842B (en) | Depth map prediction method based on visual angle fusion | |
Gan et al. | Monocular depth estimation with affinity, vertical pooling, and label enhancement | |
CN111047548B (en) | Attitude transformation data processing method and device, computer equipment and storage medium | |
CN103824272B (en) | Face super-resolution reconstruction method based on k-nearest-neighbor re-identification | |
Yan et al. | Single image superresolution based on gradient profile sharpness | |
CN101976435B (en) | Combination learning super-resolution method based on dual constraint | |
CN104156957B (en) | Stable and high-efficiency high-resolution stereo matching method | |
CN105741252A (en) | Sparse representation and dictionary learning-based video image layered reconstruction method | |
CN102693419B (en) | Super-resolution face recognition method based on multi-manifold discrimination and analysis | |
CN111861880B (en) | Image super-fusion method based on regional information enhancement and block self-attention | |
CN109801212A (en) | Fisheye image stitching method based on SIFT features | |
CN115205672A (en) | Remote sensing building semantic segmentation method and system based on multi-scale regional attention | |
CN104504672A (en) | NormLV feature based low-rank sparse neighborhood-embedding super-resolution method | |
CN104036482A (en) | Facial image super-resolution method based on dictionary asymptotic updating | |
CN103034974A (en) | Face image super-resolution reconstructing method based on support-set-driven sparse codes | |
Du et al. | Srh-net: Stacked recurrent hourglass network for stereo matching | |
KR20150065302A (en) | Method deciding 3-dimensional position of landsat imagery by Image Matching | |
CN103208109A (en) | Face hallucination method based on local-constraint iterative neighbor embedding | |
Deng et al. | Multiple frame splicing and degradation learning for hyperspectral imagery super-resolution | |
CN110335228B (en) | Method, device and system for determining image parallax | |
Chen et al. | Density-imbalance-eased lidar point cloud upsampling via feature consistency learning | |
Zhao et al. | Multiple attention network for spartina alterniflora segmentation using multitemporal remote sensing images | |
Zhu et al. | Stereoscopic image super-resolution with interactive memory learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 2013-09-18; Termination date: 2017-12-16 |