CN102521810A - Face super-resolution reconstruction method based on local constraint representation - Google Patents
- Publication number: CN102521810A
- Authority: CN (China)
- Prior art keywords: image, resolution, low resolution, row, training set
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
- Classification: Image Processing (AREA)
Abstract
The invention discloses a face super-resolution reconstruction method based on locality-constrained representation. The method comprises the following steps: dividing the input low-resolution face image and the face images in the high- and low-resolution training sets into mutually overlapping patches; for each patch of the input low-resolution face image, using the prior that patch representations are local, computing the optimal weights for linearly reconstructing it from the patches at the corresponding position of every image in the low-resolution training set; replacing, one to one, the patches at the corresponding positions of the images in the low-resolution training set with the patches at the corresponding positions of the images in the high-resolution training set, and synthesizing the high-resolution patches by weighting; and fusing the patches, according to their positions on the face, into one high-resolution face image. The method proposes a locality-constrained representation model that adaptively selects, from the space of sample patches in the training set, patches that are neighbors of the input patch to linearly reconstruct it, thereby obtaining the optimal weights and synthesizing a high-quality high-resolution image.
Description
Technical field
The present invention relates to the field of image super-resolution, and specifically to a face super-resolution reconstruction method based on locality-constrained representation.
Background technology
Surveillance cameras have been widely used in security systems. In many cases, however, the distance between the camera and the scene of interest (such as a human face) is large, so that the resolution of the captured face images in the video is very low, with the face region often occupying only tens of pixels. Because the resolution is so low, the face image of interest loses too much detail, making faces captured by surveillance cameras difficult for humans or machines to recognize effectively. How to improve the quality of low-resolution face images and effectively enhance the resolution of poor-quality face images in surveillance video, so as to provide enough distinguishing detail for subsequent face recognition, has therefore become an urgent problem. Face super-resolution (also called face hallucination) is an image super-resolution reconstruction technique that produces a high-resolution face image from an input low-resolution face image.
In recent years, scholars have proposed a large number of learning-based face super-resolution methods. These methods exploit the prior information constituted by pairs of high- and low-resolution training images, so that from one input low-resolution face image a high-resolution face image can be reconstructed by super-resolution. For example, in 2000 Freeman et al. first proposed a Markov network method in document 1 (W. Freeman, E. Pasztor, and O. Carmichael. Learning low-level vision. In IJCV, 40(1):25-47, 2000), which is also the earliest learning-based super-resolution method. In the same year, Baker and Kanade addressed face images specifically and proposed a face hallucination method in document 2 (S. Baker and T. Kanade. Hallucinating faces. In FG, Grenoble, France, Mar. 2000, 83-88.). Subsequently, Liu et al. proposed a two-step approach to face reconstruction in document 3 (C. Liu, H.Y. Shum, and C.S. Zhang. A two-step approach to hallucinating faces: global parametric model and local nonparametric model. In CVPR, pp. 192-198, 2001.), synthesizing the global and the local information of the face respectively. Since then, learning-based super-resolution methods have attracted wide attention from scholars.
According to the theory of manifold learning (document 4: S.T. Roweis and L.K. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500):2323-2326, 2000.), in 2004 Chang et al., based on the hypothesis that the high- and low-resolution sample spaces have similar local geometric structures, proposed a neighbor-embedding image super-resolution reconstruction method in document 5 (H. Chang, D.Y. Yeung, and Y.M. Xiong. Super-resolution through neighbor embedding. In CVPR, pp. 275-282, 2004.) and obtained good reconstruction results. However, because the number of neighbor patches selected by this method is fixed, over-fitting or under-fitting can occur when representing the input patch. Addressing this problem, in 2010 Ma et al. proposed a position-patch based face super-resolution method in document 6 (X. Ma, J.P. Zhang, and C. Qi. Hallucinating face by position-patch. Pattern Recognition, 43(6):3178-3194, 2010.), which reconstructs the high-resolution face image using all the patches in the training set at the same position as the input patch, avoiding steps such as manifold learning or feature extraction, improving efficiency and also improving the quality of the synthesized image. Yet because this method uses a least-squares solution, when the number of images in the training set is larger than the dimension of a patch, the representation coefficients of a patch are not unique. Therefore, in 2011 Jung et al. proposed a position-patch face super-resolution method based on convex optimization in document 7 (C. Jung, L. Jiao, B. Liu, and M. Gong, "Position-Patch Based Face Hallucination Using Convex Optimization," IEEE Signal Process. Lett., vol. 18, no. 6, pp. 367-370, 2011.), which adds a sparsity constraint to the solution of the patch representation and can resolve the non-uniqueness of the solution. However, to make the representation of the input patch as sparse as possible, this method may choose patches very different from the input patch for the linear reconstruction when synthesizing the input patch; it does not consider locality, so its reconstruction effect is unsatisfactory.
Summary of the invention
The object of the present invention is to provide a face super-resolution reconstruction method based on locality-constrained representation, solving the problem that existing similar algorithms represent the input low-resolution patches inaccurately, and improving the quality of the finally synthesized high-resolution face image.
To achieve the above object, the technical scheme adopted by the present invention is a face super-resolution reconstruction method based on locality-constrained representation, comprising the steps of:
Step 1: input a low-resolution face image, and divide the input low-resolution face image, the low-resolution face sample images in the low-resolution training set and the high-resolution face sample images in the high-resolution training set into mutually overlapping patches;
Step 2: for the patch at each position of the low-resolution face image, compute, under the locality constraint, the optimal weights for linearly reconstructing it from the patches at the same position of all low-resolution face sample images in the low-resolution training set;
Step 3: replace the patches of the low-resolution face sample images with the patches of the corresponding high-resolution face sample images at the same position, and synthesize the high-resolution face patches by weighting with the optimal weights obtained in step 2;
Step 4: fuse the high-resolution face patches synthesized in step 3 according to their positions on the face to obtain one high-resolution face image.
Moreover, in step 1, a fallback mode is adopted to divide the input low-resolution face image, the low-resolution face sample images and the high-resolution face sample images into mutually overlapping patches; the concrete division mode is as follows:
the patches are divided in order from left to right and from top to bottom; when the division reaches an image border, if the remaining size is smaller than the preset patch size, the division falls back so that the patch is aligned to the edge of the original image; this includes falling back to the left and aligning to the right edge when the horizontal division reaches the right edge of the image, and falling back upwards and aligning to the bottom edge when the vertical division reaches the bottom edge of the image.
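As a concrete reading of this fallback division, a minimal Python sketch (the function names and interface are illustrative assumptions, not code from the patent):

```python
def patch_starts(length, p, d):
    """Top-left coordinates of overlapping patches along one image axis.

    length: image extent in pixels; p: patch side; d: overlap in pixels
    with the previously divided patch. When the next patch would stick
    out past the border, it "falls back" so that it is aligned to the
    image edge, as in Figs. 3 and 4.
    """
    starts, pos, step = [], 0, p - d
    while True:
        if pos + p >= length:           # patch would reach or pass the edge
            starts.append(length - p)   # fallback: align to the border
            break
        starts.append(pos)
        pos += step
    return starts

def patch_positions(rows, cols, p, d):
    """Full 2-D division: patches taken left to right, top to bottom."""
    return [(r, c) for r in patch_starts(rows, p, d)
                   for c in patch_starts(cols, p, d)]
```

For the 120 x 100 high-resolution images of the embodiment with 12 x 12 patches overlapping by 4 pixels, this yields 15 x 12 = 180 patch positions.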
Moreover, let X = {x^(i,j)} denote the set of patches obtained by dividing the low-resolution face image, and let H = {h_m^(i,j)} and L = {l_m^(i,j)} denote the sets of patches obtained by correspondingly dividing the high-resolution training set and the low-resolution training set, m = 1, 2, ..., M, where M denotes the number of low-resolution face sample images in the low-resolution training set and equally the number of high-resolution face sample images in the high-resolution training set, (i, j) denotes the row and column index of a divided patch, and V and U denote the numbers of patches divided per column and per row, respectively;
in step 2, the following formula is adopted to compute the optimal weights:

    w^(i,j)* = argmin_{w^(i,j)} ( || x^(i,j) - Σ_{m=1}^{M} w_m^(i,j) l_m^(i,j) ||₂² + λ || d^(i,j) ⊙ w^(i,j) ||₂² ),  subject to Σ_{m=1}^{M} w_m^(i,j) = 1    (3)

where w_m^(i,j) is the reconstruction coefficient of the patch in row i, column j of the m-th low-resolution face sample image in the low-resolution training set; w^(i,j) = [w_1^(i,j), w_2^(i,j), ..., w_M^(i,j)] is the row vector formed by the reconstruction coefficients of the (i, j) patches of all M low-resolution face sample images; d_m^(i,j) is the penalty factor of the reconstruction coefficient w_m^(i,j), and d^(i,j) = [d_1^(i,j), ..., d_M^(i,j)]; λ is the regularization parameter balancing the reconstruction error and the locality constraint; "⊙" denotes the element-wise product between two vectors; ||·||₂² denotes the squared Euclidean distance; argmin returns the value w^(i,j)* of the variable w^(i,j) at which the function attains its minimum, i.e. the desired optimal weights; w_m^(i,j)* is the optimal weight of the (i, j) patch of the m-th low-resolution face sample image in the low-resolution training set when synthesizing the (i, j) patch of the low-resolution face image.
The penalty factor d_m^(i,j) is computed as

    d_m^(i,j) = || x^(i,j) - l_m^(i,j) ||₂²    (4)

where x^(i,j) is the patch in row i, column j of the low-resolution face image, and l_m^(i,j) is the (i, j) patch of the m-th low-resolution face sample image in the low-resolution training set.
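The penalty factors, being squared Euclidean distances between the input patch and each sample patch, can be computed in vectorized form; a minimal numpy sketch, assuming patches are flattened into the columns of a sample matrix (an assumed layout):

```python
import numpy as np

def penalty_factors(x, L):
    """d_m = ||x - l_m||^2 for every sample patch.

    x: input patch flattened to a vector of length p*p;
    L: (p*p, M) matrix whose m-th column is the flattened patch at the
    same position in the m-th low-resolution sample image.
    """
    return np.sum((L - x[:, None]) ** 2, axis=0)
```

Sample patches far from the input patch thus get large penalty factors, pushing their weights towards zero in the minimization.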
Moreover, in step 3, the high-resolution face patch synthesized by weighting with the optimal weights from step 2 is computed by the following formula:

    y^(i,j) = Σ_{m=1}^{M} w_m^(i,j)* h_m^(i,j)

where w_m^(i,j)* is the optimal weight, obtained in step 2, of the (i, j) patch of the m-th low-resolution face sample image in the low-resolution training set when synthesizing the (i, j) patch of the low-resolution face image, and h_m^(i,j) is the (i, j) patch of the m-th high-resolution face sample image in the high-resolution training set.
By adding a locality constraint, the present invention adaptively chooses patches that are neighbors of the input patch to represent it. This avoids the over- or under-fitting caused by a fixed number of neighbor patches in similar algorithms (document 5), the multiple solutions caused by selecting too many patches (document 6), and the neglect of locality caused by over-emphasizing sparsity (document 7), so that the representation coefficients of the input patch are more accurate and a higher-quality high-resolution face image is finally obtained.
Description of drawings
Fig. 1 is the flow chart of the embodiment of the invention;
Fig. 2 illustrates the patch division of a face image;
Fig. 3 is a schematic diagram of the fallback at the right edge during horizontal patch division in the embodiment of the invention;
Fig. 4 is a schematic diagram of the fallback at the bottom edge during vertical patch division in the embodiment of the invention;
Fig. 5 compares the mean PSNR values obtained by the method of the invention and prior-art methods;
Fig. 6 compares the mean SSIM values obtained by the method of the invention and prior-art methods.
Embodiment
The technical scheme of the present invention can be implemented in software as an automated pipeline. The scheme is further described below with reference to the drawings and an embodiment. Referring to Fig. 1, the concrete steps of the embodiment are:
The low-resolution training set contains low-resolution face sample images and the high-resolution training set contains high-resolution face sample images; together they provide the predefined training sample pairs. Each low-resolution face sample image in the low-resolution training set is derived from a high-resolution face sample image in the high-resolution training set. In the embodiment, all high-resolution images are 120 × 100 pixels and all low-resolution images are 30 × 25 pixels; each low-resolution face sample image is the result of 4× bicubic down-sampling of a high-resolution face sample image.
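The construction of one training pair can be sketched as follows. Real bicubic down-sampling would normally use an imaging library; as a dependency-free stand-in, this sketch assumes 4 × 4 block averaging (the same mean filter the experiments use for smoothing):

```python
import numpy as np

def degrade(hr, factor=4):
    """Derive a low-resolution sample from a high-resolution one by
    factor x factor block averaging (an assumed stand-in for smoothing
    followed by 4x bicubic down-sampling)."""
    h, w = hr.shape
    assert h % factor == 0 and w % factor == 0
    return hr.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
```

A 120 × 100 high-resolution sample yields a 30 × 25 low-resolution sample, the sizes used in the embodiment.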
In the embodiment, let x denote the input low-resolution face image, H the high-resolution training set and L the low-resolution training set. The low-resolution training set contains the low-resolution face sample images l_m and the high-resolution training set contains the high-resolution face sample images h_m, m = 1, 2, ..., M. For the sake of the later replacement step, the embodiment numbers the low-resolution and high-resolution face sample images consistently: the low-resolution face sample image l_m is the result of 4× bicubic down-sampling of the high-resolution face sample image h_m. The number of low-resolution face sample images in the low-resolution training set equals the number of high-resolution face sample images in the high-resolution training set, both being M.
Dividing the low-resolution face image into patches yields the set X = {x^(i,j)}; correspondingly dividing the high-resolution training set and the low-resolution training set yields the sets H = {h_m^(i,j)} and L = {l_m^(i,j)}, where M denotes the number of low-resolution face sample images in the low-resolution training set and equally the number of high-resolution face sample images in the high-resolution training set. As shown in Fig. 2, (i, j) denotes the position of a patch in the patch coordinate system, i.e. the row and column index of the patch. Taking the top-left of the image to be divided as the starting point, the patch positions obtained are (1, 1), (1, 2), ..., (V, U). In this coordinate system, the patches adjacent to the patch at position (i, j) above, below, left and right have coordinates (i - 1, j), (i + 1, j), (i, j - 1) and (i, j + 1). V and U denote the numbers of patches divided per column and per row. The present invention divides the input low-resolution face image, all low-resolution face sample images and all high-resolution face sample images into mutually overlapping patches in a consistent manner, i.e. the concrete values of V and U are identical for every divided image.
The concrete values of V and U follow from the division mode. The present invention divides each image into mutually overlapping patches: taking the top-left of the image to be divided as the starting point, a patch of size p × p (unit: pixels) is taken each time such that its top and left sides overlap the already divided parts by d pixels (except when the patch lies at the top edge or the leftmost edge of the face image).
The values are therefore given by

    V = ⌈(r - d) / (p - d)⌉,  U = ⌈(c - d) / (p - d)⌉

where r and c denote the numbers of rows and columns of the image (unit: pixels), p denotes the side length of the square patches, d denotes the number of overlapping pixels between adjacent patches, and ⌈·⌉ returns the smallest integer greater than or equal to its argument. A patch at position (i, j) thus overlaps each of its adjacent patches above, below, left and right in a rectangular region of p × d pixels, except of course when the patch lies at the image border.
When dividing an image into patches, in order to avoid the change of image size that cropping or padding would cause, the present invention adopts a "fallback" strategy: when a patch would exceed the image border (the right or the bottom), the division falls back so that the patch is aligned to the edge of the original image. As shown in Fig. 3, when the horizontal division exceeds the right edge of the image, the patch falls back to the left and is aligned to the right edge; likewise, as shown in Fig. 4, when the vertical division exceeds the bottom edge of the image, the patch falls back upwards and is aligned to the bottom edge.
In the embodiment, the optimal weights are obtained by the following formula:

    w^(i,j)* = argmin_{w^(i,j)} ( || x^(i,j) - Σ_{m=1}^{M} w_m^(i,j) l_m^(i,j) ||₂² + λ || d^(i,j) ⊙ w^(i,j) ||₂² ),  subject to Σ_{m=1}^{M} w_m^(i,j) = 1    (3)

This formula consists of two parts: the first part is the super-resolution reconstruction constraint, and the second part is the locality constraint on the representation of the input patch. Here w_m^(i,j) is the reconstruction coefficient of the (i, j) patch of the m-th low-resolution face sample image in the low-resolution training set; w^(i,j) = [w_1^(i,j), w_2^(i,j), ..., w_M^(i,j)] is the row vector formed by the reconstruction coefficients of the (i, j) patches of all M low-resolution face sample images; d_m^(i,j) is the penalty factor of the reconstruction coefficient w_m^(i,j), and d^(i,j) = [d_1^(i,j), ..., d_M^(i,j)]; λ is the regularization parameter balancing the reconstruction error and the locality constraint; "⊙" denotes the element-wise product between two vectors; ||·||₂² denotes the squared Euclidean distance; argmin returns the value w^(i,j)* of the variable w^(i,j) at which the function attains its minimum, i.e. the desired optimal weights.
Here w_m^(i,j)* is the optimal weight of the (i, j) patch of the m-th low-resolution face sample image in the low-resolution training set when synthesizing the (i, j) patch of the low-resolution face image; x^(i,j) is the (i, j) patch of the input low-resolution face image, and l_m^(i,j) is the (i, j) patch of the m-th low-resolution face sample image in the low-resolution training set. When the sample patch l_m^(i,j) is far from the input patch x^(i,j), a larger penalty is imposed on w_m^(i,j); conversely, when l_m^(i,j) is close to x^(i,j), a smaller penalty is imposed on w_m^(i,j). Minimizing formula (3) thus guarantees that, as far as possible, the sample patches chosen to represent x^(i,j) are its neighbors. The embodiment of the invention uses the squared Euclidean distance as the penalty factor:

    d_m^(i,j) = || x^(i,j) - l_m^(i,j) ||₂²    (4)
In formula (3), λ is a regularization parameter that balances the reconstruction constraint and the locality constraint. The reconstruction effect differs for different values of λ; when λ = 0, the locality-constrained representation method of the present invention degenerates to the least-squares representation method of document 6. The present invention suggests a value of λ between 0.02 and 0.1.
In a concrete implementation, the solution of formula (3) can be obtained in closed form:

    ŵ^(i,j) = (C + λD)⁻¹ 1ᵀ    (5)

    w^(i,j)* = ŵ^(i,j) / (1 ŵ^(i,j))    (6)

where w^(i,j)* is the optimal weight vector when, under the locality constraint, the input low-resolution patch is linearly reconstructed from the (i, j) patches of all low-resolution face sample images in the low-resolution training set, and D = diag((d_1^(i,j))², ..., (d_M^(i,j))²). The matrix C may be defined as

    C = (x^(i,j) 1 - L^(i,j))ᵀ (x^(i,j) 1 - L^(i,j))

where L^(i,j) is the matrix formed by the column vectors corresponding to the (i, j) patches of all M low-resolution face sample images in the low-resolution training set, 1 is a row vector whose M elements are all 1, and (·)ᵀ denotes the transpose.
Step 3: replace the patches of all low-resolution face sample images with the patches of the corresponding high-resolution face sample images at the same position, and synthesize the high-resolution face patch by weighting with the optimal weights obtained in step 2.
The embodiment uses the optimal weights obtained in step 2 to express the high-resolution face patch as

    y^(i,j) = Σ_{m=1}^{M} w_m^(i,j)* h_m^(i,j)

where w_m^(i,j)* is the optimal weight, obtained in step 2, of the (i, j) patch of the m-th low-resolution face sample image in the low-resolution training set when synthesizing the (i, j) patch of the low-resolution face image, and h_m^(i,j) is the (i, j) patch of the m-th high-resolution face sample image in the high-resolution training set.
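The weighted synthesis of step 3 is then a single matrix-vector product; a minimal sketch, assuming the high-resolution sample patches at one position are flattened into the columns of a matrix:

```python
import numpy as np

def synthesize_patch(w, H):
    """y = sum_m w_m * h_m: weight the high-resolution sample patches
    at one position by the optimal weights of their low-resolution
    counterparts. H: (pixels, M); w: (M,)."""
    return H @ w
```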
As shown in Fig. 1, for the patch at a given position of the low-resolution face image, the weights w_1*, w_2*, ..., w_M* for linearly reconstructing it, under the locality constraint, from the patches at the same position of all M low-resolution face sample images in the low-resolution training set are computed. Since representing the input patch by sample patches incurs some error, Fig. 1 uses the approximate-equality sign ≌. The patches of all low-resolution face sample images are then replaced by the patches of the corresponding high-resolution face sample images at the same position, and the high-resolution face patch for this position is synthesized by weighting the corresponding high-resolution patches with w_1*, w_2*, ..., w_M*. During the replacement, each high-resolution face sample patch replaces the patch of the low-resolution face sample image obtained from it by down-sampling.
Step 4: fuse the high-resolution face patches synthesized in step 3 according to their positions on the face to obtain one high-resolution face image.
The embodiment stitches all high-resolution face patches obtained in step 3 together according to their positions on the face; the pixel values of the overlapping parts between adjacent patches can be obtained by averaging. For reference, a possible implementation procedure is as follows:
Step a: initialize an image matrix I of the same size as the high-resolution image (all elements 0) and an overlap-count matrix F (all elements 0);
Step b: add a high-resolution face patch into the image matrix I at its position;
Step c: add 1 to all elements of the overlap-count matrix F at the corresponding positions, indicating that those pixel positions have been covered once more;
Step d: repeat steps b and c until all high-resolution patches have been merged;
Step e: divide the image matrix I element-wise by the overlap-count matrix F. ("Element-wise division" of two matrices of the same size means dividing the elements at corresponding positions of the two matrices.)
The resulting high-resolution face image is output as the prediction result, and the prediction stage is complete.
The present invention adds a locality constraint on the basis of the method of document 6, solving the non-uniqueness of the least-squares solution when there are too many face sample patches. The method of document 7 adds a sparsity constraint but ignores locality, a more essential property. The method of the present invention therefore obtains a more accurate representation of the patches and can synthesize higher-quality high-resolution face images.
To illustrate the effect of the present invention, an experimental comparison is given below.
The experiments use the FEI face database: 200 different individuals (100 male, 100 female), each with one neutral-expression and one smiling frontal face image; all images are uniformly 120 × 100. 360 images are chosen for training, and the remaining 40 are used for testing. Each high-resolution training image is smoothed (with a 4 × 4 mean filter) and 4× down-sampled to obtain a 30 × 25 low-resolution image. The patch sizes in the embodiment of the invention are: the high-resolution face images are divided into 12 × 12 patches with an overlap of 4 pixels, and the low-resolution face images are divided into 3 × 3 patches with an overlap of 1 pixel. That is, for the high-resolution images r = 120, c = 100, p = 12, d = 4; for the low-resolution images r = 30, c = 25, p = 3, d = 1.
The neighbor number K of the neighbor-embedding method of document 5 is set to 200. The reconstruction error of the sparse-constraint method of document 7 is set to 1.0, using the sparse solver of document 9 (E. Candes and J. Romberg, l1-Magic: Recovery of Sparse Signals via Convex Programming, 2005 [Online]. Available: http://www.acm.caltech.edu/l1magic/). The only parameter of the method of the invention, λ, is set to 0.04.
Peak signal-to-noise ratio (PSNR, in dB) is the most common and most widely used objective measure of image quality; SSIM (document 8: Z. Wang, A. Bovik, H. Sheikh, and E. Simoncelli, "Image quality assessment: From error visibility to structural similarity," IEEE Trans. Image Process., vol. 13, no. 4, pp. 600-612, 2004.) measures the similarity of two images, and the closer its value is to 1, the better the reconstruction. The PSNR and SSIM values (averaged over all 40 test face images) obtained by documents 5, 6, 7 and the method of the invention are compared in Fig. 5 and Fig. 6, respectively: the mean PSNR values of documents 5, 6, 7 and the method of the invention are, in order, 31.22, 31.90, 32.11 and 32.76; the corresponding mean SSIM values are 0.8972, 0.9034, 0.9052 and 0.9145. The method of the invention improves PSNR by 0.65 dB and SSIM by 0.0093 over the best compared method (document 7).
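Of the two reported metrics, PSNR is straightforward to compute; a minimal sketch assuming 8-bit images with peak value 255:

```python
import numpy as np

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference face image
    and its reconstruction; higher means a closer reconstruction."""
    mse = np.mean((ref.astype(np.float64) - img.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```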
Claims (4)
1. A face super-resolution reconstruction method based on locality-constrained representation, characterized in that it comprises the steps of:
Step 1: input a low-resolution face image, and divide the input low-resolution face image, the low-resolution face sample images in the low-resolution training set and the high-resolution face sample images in the high-resolution training set into mutually overlapping patches;
Step 2: for the patch at each position of the low-resolution face image, compute, under the locality constraint, the optimal weights for linearly reconstructing it from the patches at the same position of all low-resolution face sample images in the low-resolution training set;
Step 3: replace the patches of the low-resolution face sample images with the patches of the corresponding high-resolution face sample images at the same position, and synthesize the high-resolution face patches by weighting with the optimal weights obtained in step 2;
Step 4: fuse the high-resolution face patches synthesized in step 3 according to their positions on the face to obtain one high-resolution face image.
2. The face super-resolution reconstruction method based on locality-constrained representation according to claim 1, characterized in that: in step 1, a fallback mode is adopted to divide the input low-resolution face image, the low-resolution face sample images and the high-resolution face sample images into mutually overlapping patches, the concrete division mode being as follows:
the patches are divided in order from left to right and from top to bottom; when the division reaches an image border, if the remaining size is smaller than the preset patch size, the division falls back so that the patch is aligned to the edge of the original image; this includes falling back to the left and aligning to the right edge when the horizontal division reaches the right edge of the image, and falling back upwards and aligning to the bottom edge when the vertical division reaches the bottom edge of the image.
3. The face super-resolution reconstruction method based on locality-constrained representation according to claim 1, characterized in that: let X = {x^(i,j)} be the set of patches obtained by dividing the low-resolution face image, and let H = {h_m^(i,j)} and L = {l_m^(i,j)} be the sets of patches obtained by correspondingly dividing the high-resolution training set and the low-resolution training set, m = 1, 2, ..., M, where M denotes the number of low-resolution face sample images in the low-resolution training set and equally the number of high-resolution face sample images in the high-resolution training set, (i, j) denotes the row and column index of a divided patch, and V and U denote the numbers of patches divided per column and per row, respectively;
in step 2, the following formula is adopted to compute the optimal weights:

    w^(i,j)* = argmin_{w^(i,j)} ( || x^(i,j) - Σ_{m=1}^{M} w_m^(i,j) l_m^(i,j) ||₂² + λ || d^(i,j) ⊙ w^(i,j) ||₂² ),  subject to Σ_{m=1}^{M} w_m^(i,j) = 1

where w_m^(i,j) is the reconstruction coefficient of the patch in row i, column j of the m-th low-resolution face sample image in the low-resolution training set; w^(i,j) = [w_1^(i,j), w_2^(i,j), ..., w_M^(i,j)] is the row vector formed by the reconstruction coefficients of the (i, j) patches of all M low-resolution face sample images; d_m^(i,j) is the penalty factor of the reconstruction coefficient w_m^(i,j), and d^(i,j) = [d_1^(i,j), ..., d_M^(i,j)]; λ is the regularization parameter balancing the reconstruction error and the locality constraint; "⊙" denotes the element-wise product between two vectors; ||·||₂² denotes the squared Euclidean distance; argmin returns the value w^(i,j)* of the variable w^(i,j) at which the function attains its minimum, i.e. the desired optimal weights; w_m^(i,j)* is the optimal weight of the (i, j) patch of the m-th low-resolution face sample image in the low-resolution training set when synthesizing the (i, j) patch of the low-resolution face image;
4. The face super-resolution reconstruction method based on locality-constrained representation according to claim 1, characterized in that: in step 3, the high-resolution face patch synthesized by weighting with the weights obtained in step 2 is computed by the following formula:

    y^(i,j) = Σ_{m=1}^{M} w_m^(i,j)* h_m^(i,j)

where w_m^(i,j)* is the optimal weight, obtained in step 2, of the (i, j) patch of the m-th low-resolution face sample image in the low-resolution training set when synthesizing the (i, j) patch of the low-resolution face image, and h_m^(i,j) is the (i, j) patch of the m-th high-resolution face sample image in the high-resolution training set.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN 201110421452 CN102521810B (en) | 2011-12-16 | 2011-12-16 | Face super-resolution reconstruction method based on local constraint representation |
Publications (2)
Publication Number | Publication Date |
---|---|
CN102521810A true CN102521810A (en) | 2012-06-27 |
CN102521810B CN102521810B (en) | 2013-09-18 |
Family
ID=46292714
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN 201110421452 Expired - Fee Related CN102521810B (en) | 2011-12-16 | 2011-12-16 | Face super-resolution reconstruction method based on local constraint representation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN102521810B (en) |
Cited By (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103034974A (en) * | 2012-12-07 | 2013-04-10 | 武汉大学 | Face image super-resolution reconstructing method based on support-set-driven sparse codes |
CN103208109A (en) * | 2013-04-25 | 2013-07-17 | 武汉大学 | Local restriction iteration neighborhood embedding-based face hallucination method |
CN103824272A (en) * | 2014-03-03 | 2014-05-28 | 武汉大学 | Face super-resolution reconstruction method based on K-neighboring re-recognition |
CN104091320A (en) * | 2014-07-16 | 2014-10-08 | 武汉大学 | Noise human face super-resolution reconstruction method based on data-driven local feature conversion |
CN104574455A (en) * | 2013-10-29 | 2015-04-29 | 华为技术有限公司 | Image reestablishing method and device |
CN105405097A (en) * | 2015-10-29 | 2016-03-16 | 武汉大学 | Robustness human face super resolution processing method and system based on reverse manifold constraints |
CN105469359A (en) * | 2015-12-09 | 2016-04-06 | 武汉工程大学 | Locality-constrained and low-rank representation based human face super-resolution reconstruction method |
CN105550649A (en) * | 2015-12-09 | 2016-05-04 | 武汉工程大学 | Extremely low resolution human face recognition method and system based on unity coupling local constraint expression |
CN105787462A (en) * | 2016-03-16 | 2016-07-20 | 武汉工程大学 | Semi-coupling-crucial-dictionary-learning-based extremely-low-resolution face identification method and system |
CN106203269A (en) * | 2016-06-29 | 2016-12-07 | 武汉大学 | A kind of based on can the human face super-resolution processing method of deformation localized mass and system |
CN106530231A (en) * | 2016-11-09 | 2017-03-22 | 武汉工程大学 | Method and system for reconstructing super-resolution image based on deep collaborative representation |
CN106558018A (en) * | 2015-09-25 | 2017-04-05 | 北京大学 | The unreal structure method and device of video human face that Component- Based Development decomposes |
CN107154023A (en) * | 2017-05-17 | 2017-09-12 | 电子科技大学 | Face super-resolution reconstruction method based on generation confrontation network and sub-pix convolution |
CN107203967A (en) * | 2017-05-25 | 2017-09-26 | 中国地质大学(武汉) | A kind of face super-resolution reconstruction method based on context image block |
CN107292865A (en) * | 2017-05-16 | 2017-10-24 | 哈尔滨医科大学 | A kind of stereo display method based on two dimensional image processing |
CN107633483A (en) * | 2017-09-18 | 2018-01-26 | 长安大学 | The face image super-resolution method of illumination robustness |
CN109117892A (en) * | 2018-08-28 | 2019-01-01 | 国网福建省电力有限公司福州供电公司 | Deep learning large scale picture training detection algorithm |
CN113887371A (en) * | 2021-09-26 | 2022-01-04 | 华南理工大学 | Data enhancement method for low-resolution face recognition |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2008219271A (en) * | 2007-03-01 | 2008-09-18 | Fujifilm Corp | Image processor, image processing method and photographing device |
CN101635048A (en) * | 2009-08-20 | 2010-01-27 | 上海交通大学 | Super-resolution processing method of face image integrating global feature with local information |
CN102024266A (en) * | 2010-11-04 | 2011-04-20 | 西安电子科技大学 | Image structure model-based compressed sensing image reconstruction method |
Non-Patent Citations (1)
Title |
---|
Xiang Ma et al., "Position-based face hallucination method", ICME 2009 *
Cited By (31)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103034974B (en) * | 2012-12-07 | 2015-12-23 | 武汉大学 | The face image super-resolution reconstruction method of sparse coding is driven based on support set |
CN103034974A (en) * | 2012-12-07 | 2013-04-10 | 武汉大学 | Face image super-resolution reconstructing method based on support-set-driven sparse codes |
CN103208109A (en) * | 2013-04-25 | 2013-07-17 | 武汉大学 | Local restriction iteration neighborhood embedding-based face hallucination method |
CN103208109B (en) * | 2013-04-25 | 2015-09-16 | 武汉大学 | A kind of unreal structure method of face embedded based on local restriction iteration neighborhood |
CN104574455A (en) * | 2013-10-29 | 2015-04-29 | 华为技术有限公司 | Image reestablishing method and device |
CN104574455B (en) * | 2013-10-29 | 2017-11-24 | 华为技术有限公司 | Image rebuilding method and device |
CN103824272A (en) * | 2014-03-03 | 2014-05-28 | 武汉大学 | Face super-resolution reconstruction method based on K-neighboring re-recognition |
CN103824272B (en) * | 2014-03-03 | 2016-08-17 | 武汉大学 | The face super-resolution reconstruction method heavily identified based on k nearest neighbor |
CN104091320B (en) * | 2014-07-16 | 2017-03-29 | 武汉大学 | Based on the noise face super-resolution reconstruction method that data-driven local feature is changed |
CN104091320A (en) * | 2014-07-16 | 2014-10-08 | 武汉大学 | Noise human face super-resolution reconstruction method based on data-driven local feature conversion |
CN106558018B (en) * | 2015-09-25 | 2019-08-06 | 北京大学 | The unreal structure method and device of video human face that Component- Based Development decomposes |
CN106558018A (en) * | 2015-09-25 | 2017-04-05 | 北京大学 | The unreal structure method and device of video human face that Component- Based Development decomposes |
CN105405097A (en) * | 2015-10-29 | 2016-03-16 | 武汉大学 | Robustness human face super resolution processing method and system based on reverse manifold constraints |
CN105550649A (en) * | 2015-12-09 | 2016-05-04 | 武汉工程大学 | Extremely low resolution human face recognition method and system based on unity coupling local constraint expression |
CN105469359B (en) * | 2015-12-09 | 2019-05-03 | 武汉工程大学 | Face super-resolution reconstruction method based on local restriction low-rank representation |
CN105469359A (en) * | 2015-12-09 | 2016-04-06 | 武汉工程大学 | Locality-constrained and low-rank representation based human face super-resolution reconstruction method |
CN105787462A (en) * | 2016-03-16 | 2016-07-20 | 武汉工程大学 | Semi-coupling-crucial-dictionary-learning-based extremely-low-resolution face identification method and system |
CN105787462B (en) * | 2016-03-16 | 2019-05-03 | 武汉工程大学 | Extremely low resolution ratio face identification method and system based on half coupling judgement property dictionary learning |
CN106203269A (en) * | 2016-06-29 | 2016-12-07 | 武汉大学 | A kind of based on can the human face super-resolution processing method of deformation localized mass and system |
CN106530231A (en) * | 2016-11-09 | 2017-03-22 | 武汉工程大学 | Method and system for reconstructing super-resolution image based on deep collaborative representation |
CN106530231B (en) * | 2016-11-09 | 2020-08-11 | 武汉工程大学 | Super-resolution image reconstruction method and system based on deep cooperative expression |
CN107292865A (en) * | 2017-05-16 | 2017-10-24 | 哈尔滨医科大学 | A kind of stereo display method based on two dimensional image processing |
CN107292865B (en) * | 2017-05-16 | 2021-01-26 | 哈尔滨医科大学 | Three-dimensional display method based on two-dimensional image processing |
CN107154023B (en) * | 2017-05-17 | 2019-11-05 | 电子科技大学 | Based on the face super-resolution reconstruction method for generating confrontation network and sub-pix convolution |
CN107154023A (en) * | 2017-05-17 | 2017-09-12 | 电子科技大学 | Face super-resolution reconstruction method based on generation confrontation network and sub-pix convolution |
CN107203967A (en) * | 2017-05-25 | 2017-09-26 | 中国地质大学(武汉) | A kind of face super-resolution reconstruction method based on context image block |
CN107633483A (en) * | 2017-09-18 | 2018-01-26 | 长安大学 | The face image super-resolution method of illumination robustness |
CN109117892A (en) * | 2018-08-28 | 2019-01-01 | 国网福建省电力有限公司福州供电公司 | Deep learning large scale picture training detection algorithm |
CN109117892B (en) * | 2018-08-28 | 2021-07-27 | 国网福建省电力有限公司福州供电公司 | Deep learning large-size picture training detection algorithm |
CN113887371A (en) * | 2021-09-26 | 2022-01-04 | 华南理工大学 | Data enhancement method for low-resolution face recognition |
CN113887371B (en) * | 2021-09-26 | 2024-05-28 | 华南理工大学 | Data enhancement method for low-resolution face recognition |
Also Published As
Publication number | Publication date |
---|---|
CN102521810B (en) | 2013-09-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN102521810B (en) | Face super-resolution reconstruction method based on local constraint representation | |
Zhang et al. | Residual networks for light field image super-resolution | |
CN110443842B (en) | Depth map prediction method based on visual angle fusion | |
Lee et al. | From big to small: Multi-scale local planar guidance for monocular depth estimation | |
Gan et al. | Monocular depth estimation with affinity, vertical pooling, and label enhancement | |
CN103824272B (en) | The face super-resolution reconstruction method heavily identified based on k nearest neighbor | |
CN101976435B (en) | Combination learning super-resolution method based on dual constraint | |
JP7058277B2 (en) | Reconstruction method and reconfiguration device | |
CN105741252A (en) | Sparse representation and dictionary learning-based video image layered reconstruction method | |
CN108776971B (en) | Method and system for determining variable-split optical flow based on hierarchical nearest neighbor | |
CN102693419B (en) | Super-resolution face recognition method based on multi-manifold discrimination and analysis | |
KR101994112B1 (en) | Apparatus and method for compose panoramic image based on image segment | |
KR20100038168A (en) | Composition analysis method, image device having composition analysis function, composition analysis program, and computer-readable recording medium | |
CN110381268B (en) | Method, device, storage medium and electronic equipment for generating video | |
Guan et al. | Multistage dual-attention guided fusion network for hyperspectral pansharpening | |
KR102141319B1 (en) | Super-resolution method for multi-view 360-degree image and image processing apparatus | |
CN111861880A (en) | Image super-fusion method based on regional information enhancement and block self-attention | |
CN103034974B (en) | The face image super-resolution reconstruction method of sparse coding is driven based on support set | |
Zhou et al. | PADENet: An efficient and robust panoramic monocular depth estimation network for outdoor scenes | |
Ma et al. | Position-based face hallucination method | |
WO2022216521A1 (en) | Dual-flattening transformer through decomposed row and column queries for semantic segmentation | |
CN114387346A (en) | Image recognition and prediction model processing method, three-dimensional modeling method and device | |
KR20150065302A (en) | Method deciding 3-dimensional position of landsat imagery by Image Matching | |
Jin et al. | Light field reconstruction via deep adaptive fusion of hybrid lenses | |
Zhou et al. | Mh pose: 3d human pose estimation based on high-quality heatmap |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20130918 Termination date: 20171216 |