CN102521810A - Face super-resolution reconstruction method based on local constraint representation


Info

Publication number
CN102521810A
CN102521810A
Authority
CN
China
Prior art keywords
image
resolution
low resolution
row
training set
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2011104214523A
Other languages
Chinese (zh)
Other versions
CN102521810B (en)
Inventor
胡瑞敏
江俊君
王冰
韩镇
黄克斌
卢涛
王亦民
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University WHU filed Critical Wuhan University WHU
Priority to CN 201110421452 priority Critical patent/CN102521810B/en
Publication of CN102521810A publication Critical patent/CN102521810A/en
Application granted granted Critical
Publication of CN102521810B publication Critical patent/CN102521810B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Processing (AREA)

Abstract

The invention discloses a face super-resolution reconstruction method based on locality-constrained representation. The method comprises the following steps: dividing the input low-resolution face image and the face images in the high- and low-resolution training sets into mutually overlapping image patches; for each patch of the input low-resolution face image, using the prior that patch representations are local, calculating the optimal weight coefficients for linearly reconstructing it from the patches at the corresponding position of every image in the low-resolution training set; replacing, one to one, the patches at the corresponding position of every image in the low-resolution training set with the patches at the corresponding position of every image in the high-resolution training set, and synthesizing the high-resolution patch by weighting; and fusing the patches into a high-resolution face image according to their positions on the face. The method provides a locality-constrained representation model that adaptively selects, from the sample patch space of the training set, the patches that are neighbors of the input patch to linearly reconstruct it, thereby obtaining the optimal weight coefficients and synthesizing a high-quality high-resolution image.

Description

Face super-resolution reconstruction method based on locality-constrained representation
Technical field
The present invention relates to the field of image super-resolution, and in particular to a face super-resolution reconstruction method based on locality-constrained representation.
Background technology
Surveillance cameras are widely used in security systems. In many cases, however, the camera is far from the scene of interest (such as a human face), so the face in the captured video has very low resolution, and the face region often contains only a few tens of pixels. Because the resolution is so low, the face image of interest loses too much detail, which makes faces captured by surveillance cameras difficult for either people or machines to recognize. How to improve the quality of low-resolution face images, effectively enhance the resolution of poor-quality faces in surveillance video, and provide enough discriminative detail for subsequent face recognition has therefore become an urgent problem. Face super-resolution (also called face hallucination) is an image super-resolution reconstruction technique that produces a high-resolution face image from an input low-resolution face image.
In recent years, many learning-based face super-resolution methods have been proposed. These methods exploit the prior information contained in a training set of high- and low-resolution image pairs, so that an input low-resolution face image can be super-resolved into a high-resolution face image. For example, in 2000 Freeman et al. first proposed a Markov network method in document 1 (W. Freeman, E. Pasztor, and O. Carmichael, "Learning low-level vision," IJCV, 40(1):25-47, 2000), which is also the earliest learning-based super-resolution method. In the same year, Baker and Kanade focused specifically on face images and proposed a face hallucination method in document 2 (S. Baker and T. Kanade, "Hallucinating faces," FG, Grenoble, France, Mar. 2000, pp. 83-88). Subsequently, Liu et al. proposed a two-step face reconstruction approach in document 3 (C. Liu, H.Y. Shum, and C.S. Zhang, "A two-step approach to hallucinating faces: global parametric model and local nonparametric model," CVPR, pp. 192-198, 2001), which synthesizes the global and the local information of a face separately. Since then, learning-based super-resolution methods have attracted wide attention.
Based on manifold learning theory (document 4: S.T. Roweis and L.K. Saul, "Nonlinear dimensionality reduction by locally linear embedding," Science, 290(5500):2323-2326, 2000), Chang et al. in 2004 proposed a neighbor-embedding image super-resolution method in document 5 (H. Chang, D.Y. Yeung, and Y.M. Xiong, "Super-resolution through neighbor embedding," CVPR, pp. 275-282, 2004), based on the assumption that the high- and low-resolution sample spaces share similar local geometric structures, and obtained good reconstruction results. However, because the number of selected neighbor patches is fixed, over-fitting or under-fitting can occur when representing an input image patch. To address this problem, Ma et al. proposed a position-patch based face super-resolution method in document 6 (X. Ma, J.P. Zhang, and C. Qi, "Hallucinating face by position-patch," Pattern Recognition, 43(6):3178-3194, 2010), which reconstructs the high-resolution face image using the patches at the same position in all training images, avoiding manifold learning and feature extraction steps, improving efficiency, and also improving the quality of the synthesized image. However, because this method is solved by least squares, the representation coefficients of an image patch are not unique when the number of training images exceeds the dimension of the image patch. Therefore, in 2011 Jung et al. proposed a position-patch face super-resolution method based on convex optimization in document 7 (C. Jung, L. Jiao, B. Liu, and M. Gong, "Position-patch based face hallucination using convex optimization," IEEE Signal Process. Lett., vol. 18, no. 6, pp. 367-370, 2011), adding a sparsity constraint to the patch representation to resolve the non-uniqueness of the solution. However, to make the representation of the input patch as sparse as possible, this method may select patches that differ greatly from the input patch for the linear reconstruction; it ignores the property of locality, so its reconstruction results are unsatisfactory.
Summary of the invention
The object of the invention is to provide a face super-resolution reconstruction method based on locality-constrained representation, which solves the problem that existing similar algorithms represent the input low-resolution image patch inaccurately and improves the quality of the finally synthesized high-resolution face image.
To achieve the above object, the technical solution adopted by the present invention is a face super-resolution reconstruction method based on locality-constrained representation, comprising the following steps:
Step 1: input a low-resolution face image, and divide the input low-resolution face image, the low-resolution face sample images in the low-resolution training set, and the high-resolution face sample images in the high-resolution training set into mutually overlapping image patches;
Step 2: for the patch at each position of the low-resolution face image, calculate, under the locality constraint, the optimal weight coefficients for linearly reconstructing it from the patches at the same position in all low-resolution face sample images of the low-resolution training set;
Step 3: replace the patches of all low-resolution face sample images with the patches of the high-resolution face sample images at the corresponding positions, and synthesize the high-resolution face patch by weighting with the optimal weight coefficients obtained in step 2;
Step 4: fuse the high-resolution face patches synthesized in step 3 according to their positions on the face to obtain one high-resolution face image.
Moreover, in step 1, a rollback scheme is used to divide the input low-resolution face image, the low-resolution face sample images, and the high-resolution face sample images into mutually overlapping patches. The specific division is as follows:
patches are taken from left to right and from top to bottom; when a patch reaches the image boundary and the remaining size is smaller than the preset patch size, the patch rolls back so that it is aligned with the edge of the original image. In particular, when the horizontal division reaches the right edge of the image, the patch rolls back to the left and is aligned with the right edge; when the vertical division reaches the bottom edge of the image, the patch rolls back upwards and is aligned with the bottom edge.
Moreover, let {x_{(i,j)}} be the set of patches obtained by dividing the input low-resolution face image X, and let {h^m_{(i,j)}} and {y^m_{(i,j)}} be the sets of patches obtained by correspondingly dividing the high-resolution training set H and the low-resolution training set L, where m = 1, 2, ..., M; M denotes the number of low-resolution face sample images in the low-resolution training set and, equally, the number of high-resolution face sample images in the high-resolution training set; (i, j) denotes the row number and column number of a patch; and U and V denote, respectively, the number of patch rows and patch columns produced by the division;
In step 2, the optimal weight coefficients are calculated by the following formula:

w^*_{(i,j)} = \arg\min_{w_{(i,j)}} \Big\{ \big\| x_{(i,j)} - \sum_{m=1}^{M} w^m_{(i,j)} y^m_{(i,j)} \big\|_2^2 + \tau \sum_{m=1}^{M} d^m_{(i,j)} \big( w^m_{(i,j)} \big)^2 \Big\}

where w^m_{(i,j)} is the reconstruction coefficient of the patch in row i, column j of the m-th low-resolution face sample image of the low-resolution training set; w_{(i,j)} = (w^1_{(i,j)}, w^2_{(i,j)}, ..., w^M_{(i,j)}) is the row vector formed by the reconstruction coefficients of the row-i, column-j patches of all M low-resolution face sample images; d^m_{(i,j)} is the penalty factor on the reconstruction coefficient w^m_{(i,j)}; τ is the regularization parameter balancing the reconstruction error and the locality constraint; ||·||_2^2 denotes the squared Euclidean distance; and arg min returns the value of the variable w_{(i,j)} at which the function attains its minimum, namely the desired optimal weight coefficients w^*_{(i,j)} = (w^{*1}_{(i,j)}, ..., w^{*M}_{(i,j)}), where w^{*m}_{(i,j)} is the optimal weight coefficient of the row-i, column-j patch of the m-th low-resolution face sample image of the low-resolution training set when synthesizing the row-i, column-j patch of the low-resolution face image;
The penalty factor is computed as

d^m_{(i,j)} = \| x_{(i,j)} - y^m_{(i,j)} \|_2^2

where x_{(i,j)} is the patch in row i, column j of the low-resolution face image, and y^m_{(i,j)} is the patch in row i, column j of the m-th low-resolution face sample image of the low-resolution training set.
Moreover, in step 3, the high-resolution face patch synthesized by weighting with the optimal weight coefficients obtained in step 2 is calculated by the following formula:

x^H_{(i,j)} = \sum_{m=1}^{M} w^{*m}_{(i,j)} h^m_{(i,j)}

where w^{*m}_{(i,j)} is the optimal weight coefficient, obtained in step 2 for synthesizing the row-i, column-j patch of the low-resolution face image, of the row-i, column-j patch of the m-th low-resolution face sample image of the low-resolution training set, and h^m_{(i,j)} is the patch in row i, column j of the m-th high-resolution face sample image of the high-resolution training set.
By adding a locality constraint, the present invention adaptively selects patches that are neighbors of the input patch to represent it. It thereby avoids the over-fitting or under-fitting caused by a fixed number of neighbor patches in similar algorithms (document 5), the multiple solutions caused by selecting too many patches (document 6), and the neglect of locality caused by over-emphasizing sparsity (document 7), so that the representation coefficients of the input patch are more accurate and a higher-quality high-resolution face image is finally obtained.
Description of drawings
Fig. 1 is the flowchart of the embodiment of the invention;
Fig. 2 illustrates the patch division of a face image;
Fig. 3 is a schematic diagram of the rollback at the right edge of the image during the horizontal patch division in the embodiment of the invention;
Fig. 4 is a schematic diagram of the rollback at the bottom edge of the image during the vertical patch division in the embodiment of the invention;
Fig. 5 shows the comparison of the average PSNR values obtained by the method of the invention and by prior-art methods;
Fig. 6 shows the comparison of the average SSIM values obtained by the method of the invention and by prior-art methods.
Embodiment
The technical solution of the present invention can be implemented in software so that the process runs automatically. The technical solution is further described below with reference to the drawings and an embodiment. Referring to Fig. 1, the specific steps of the embodiment are as follows:
Step 1: input a low-resolution face image, and divide the input low-resolution face image, the low-resolution face sample images in the low-resolution training set, and the high-resolution face sample images in the high-resolution training set into mutually overlapping image patches.
The low-resolution training set contains the low-resolution face sample images and the high-resolution training set contains the high-resolution face sample images; together they provide the predefined training sample pairs. Each low-resolution face sample image in the low-resolution training set is derived from one high-resolution face sample image in the high-resolution training set. In the embodiment, all high-resolution images are 120 x 100 pixels and all low-resolution images are 30 x 25 pixels; each low-resolution face sample image is obtained from the corresponding high-resolution face sample image by 4x bicubic down-sampling.
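For illustration, a minimal sketch of how one such training pair could be produced is given below; the function name and the use of OpenCV are the editor's assumptions (the embodiment itself only specifies 4x bicubic down-sampling, and the 4 x 4 averaging filter is borrowed from the experiment section later in this description).

```python
import cv2
import numpy as np

def make_training_pair(hr_img: np.ndarray, factor: int = 4):
    """Build one (low-resolution, high-resolution) training pair.

    hr_img : grayscale high-resolution face sample image, e.g. 120 x 100.
    Returns (lr_img, hr_img), where lr_img is the factor-times bicubic
    down-sampled version (30 x 25 for a 120 x 100 input).
    """
    h, w = hr_img.shape[:2]
    # Optional smoothing before down-sampling (a 4 x 4 averaging filter,
    # as used in the experiments).
    smoothed = cv2.blur(hr_img, (factor, factor))
    lr_img = cv2.resize(smoothed, (w // factor, h // factor),
                        interpolation=cv2.INTER_CUBIC)
    return lr_img, hr_img
```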
In the embodiment, let X denote the input low-resolution face image, H the high-resolution training set, and L the low-resolution training set. The low-resolution training set L contains the low-resolution face sample images L^1, L^2, ..., L^M, and the high-resolution training set H contains the high-resolution face sample images H^1, H^2, ..., H^M. To allow the later replacement step, the embodiment numbers the low-resolution and high-resolution face sample images consistently, i.e., the low-resolution face sample image L^m is the result of 4x bicubic down-sampling of the high-resolution face sample image H^m. The number of low-resolution face sample images in the low-resolution training set equals the number of high-resolution face sample images in the high-resolution training set, namely M.
Let {x_{(i,j)}} be the set of patches obtained by dividing the input low-resolution face image X, and let {h^m_{(i,j)}} and {y^m_{(i,j)}} be the sets of patches obtained by correspondingly dividing the high-resolution training set H and the low-resolution training set L, where m = 1, 2, ..., M and M denotes the number of low-resolution face sample images in the low-resolution training set and of high-resolution face sample images in the high-resolution training set. As shown in Fig. 2, (i, j) denotes the position of a patch in the patch coordinate system, i.e., its row number and column number. Taking the top-left corner of the image to be divided as the starting point, the patch positions obtained by the division are (1, 1), (1, 2), ..., (U, V-1), (U, V). In this coordinate system, the patches adjacent to the patch at position (i, j) above, below, to the left, and to the right have coordinates (i-1, j), (i+1, j), (i, j-1), and (i, j+1). U and V denote, respectively, the number of patch rows and patch columns produced by the division. The present invention divides the input low-resolution face image, all low-resolution face sample images, and all high-resolution face sample images into overlapping patches in a consistent manner, i.e., the values of U and V are identical for every divided image.
The specific values of U and V are determined by the division scheme. The present invention divides each image into overlapping patches as follows: taking the top-left corner of the image to be divided as the starting point, square patches of size s x s pixels are taken one after another so that each patch overlaps the already divided region by v pixels on its top and on its left (except when the patch lies on the top edge or the leftmost edge of the face image). The values of U and V are therefore

U = \lceil (r - s) / (s - v) \rceil + 1        (1)

V = \lceil (c - s) / (s - v) \rceil + 1        (2)

where r and c denote the number of pixel rows and pixel columns of the image, s denotes the side length of the square patch, v denotes the number of overlapping pixels between adjacent patches, and \lceil \cdot \rceil returns the smallest integer greater than or equal to its argument. Thus the patch at position (i, j) overlaps each of its adjacent patches above, below, to the left, and to the right in a rectangular region of s x v pixels, except, of course, when the patch lies on the image boundary.
When dividing an image into patches, in order to avoid changing the image size by cropping or padding, the present invention adopts a rollback strategy: when a patch would extend beyond the image boundary (the right edge or the bottom edge), the patch is rolled back so that it is aligned with the edge of the original image. As shown in Fig. 3, when the horizontal division would extend beyond the right edge of the image, the patch is rolled back to the left and aligned with the right edge; likewise, as shown in Fig. 4, when the vertical division would extend beyond the bottom edge of the image, the patch is rolled back upwards and aligned with the bottom edge.
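The patch grid defined by formulas (1) and (2) together with the rollback rule can be written down directly. The sketch below is a minimal Python illustration (the function and variable names are the editor's, not the patent's); it returns the number of patch rows U, the number of patch columns V, and the top-left pixel coordinate of every patch, rolling the last patch of each row and column back to the image edge.

```python
import math

def patch_grid(height, width, s, v):
    """Overlapping-patch grid with rollback at the right and bottom edges.

    height, width : image size in pixels (r and c in formulas (1) and (2))
    s             : side length of the square patch
    v             : number of overlapping pixels between adjacent patches
    """
    step = s - v
    U = math.ceil((height - s) / step) + 1   # formula (1): patch rows
    V = math.ceil((width - s) / step) + 1    # formula (2): patch columns
    coords = []
    for i in range(U):
        top = min(i * step, height - s)      # rollback at the bottom edge
        for j in range(V):
            left = min(j * step, width - s)  # rollback at the right edge
            coords.append((top, left))
    return U, V, coords

# Values from the experiments: 120 x 100 high-resolution images with 12 x 12
# patches and 4 overlapping pixels, and 30 x 25 low-resolution images with
# 3 x 3 patches and 1 overlapping pixel, both give a 15 x 12 patch grid,
# so patch positions correspond one to one across resolutions.
assert patch_grid(120, 100, 12, 4)[:2] == (15, 12)
assert patch_grid(30, 25, 3, 1)[:2] == (15, 12)
```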
Step 2: for the patch x_{(i,j)} at each position of the low-resolution face image, calculate, under the locality constraint, the optimal weight coefficients for linearly reconstructing it from the patches at the same position in all low-resolution face sample images of the low-resolution training set.
In the embodiment, the optimal weight coefficients are obtained from the following formula:

w^*_{(i,j)} = \arg\min_{w_{(i,j)}} \Big\{ \big\| x_{(i,j)} - \sum_{m=1}^{M} w^m_{(i,j)} y^m_{(i,j)} \big\|_2^2 + \tau \sum_{m=1}^{M} d^m_{(i,j)} \big( w^m_{(i,j)} \big)^2 \Big\}        (3)

This formula consists of two parts: the first is the super-resolution reconstruction constraint, and the second is the locality constraint on the representation of the input patch. Here w^m_{(i,j)} is the reconstruction coefficient of the patch in row i, column j of the m-th low-resolution face sample image of the low-resolution training set; w_{(i,j)} = (w^1_{(i,j)}, w^2_{(i,j)}, ..., w^M_{(i,j)}) is the row vector formed by the reconstruction coefficients of the row-i, column-j patches of all M low-resolution face sample images; d^m_{(i,j)} is the penalty factor on the reconstruction coefficient w^m_{(i,j)}; τ is the regularization parameter balancing the reconstruction error and the locality constraint; ||·||_2^2 denotes the squared Euclidean distance; and arg min returns the value of the variable w_{(i,j)} at which the function attains its minimum, namely the desired optimal weight coefficients w^*_{(i,j)} = (w^{*1}_{(i,j)}, ..., w^{*M}_{(i,j)}), where w^{*m}_{(i,j)} is the optimal weight coefficient of the row-i, column-j patch of the m-th low-resolution face sample image of the low-resolution training set when synthesizing the row-i, column-j patch of the low-resolution face image. x_{(i,j)} is the patch in row i, column j of the input low-resolution face image, and y^m_{(i,j)} is the patch in row i, column j of the m-th low-resolution face sample image of the low-resolution training set. When the sample patch y^m_{(i,j)} is far from the input patch x_{(i,j)}, a larger penalty is imposed; conversely, when y^m_{(i,j)} is close to x_{(i,j)}, a smaller penalty is imposed. Minimizing formula (3) therefore guarantees that, as far as possible, the sample patches chosen to represent x_{(i,j)} are its neighbors. In the embodiment of the invention the penalty factor is the squared Euclidean distance:

d^m_{(i,j)} = \| x_{(i,j)} - y^m_{(i,j)} \|_2^2        (4)
In formula (3), τ is a regularization parameter used to balance the reconstruction constraint and the locality constraint. Different values of τ yield different reconstruction results; when τ = 0, so that the locality term is removed, the locality-constrained representation of the present invention degenerates to the least-squares representation of document 6. The invention suggests values of τ between 0.02 and 0.1.
In a specific implementation, the solution of formula (3) can be obtained from

w^*_{(i,j)} = \big( G_{(i,j)} + \tau D_{(i,j)} \big)^{-1} \mathbf{1}^T        (5)

where w^*_{(i,j)} is the optimal weight vector for linearly reconstructing the input low-resolution patch, under the locality constraint, from the patches at the same position in all low-resolution face sample images of the low-resolution training set. D_{(i,j)} is an M x M diagonal matrix that can be expressed as

D_{(i,j)} = \mathrm{diag}\big( d^1_{(i,j)}, d^2_{(i,j)}, \ldots, d^M_{(i,j)} \big)        (6)

i.e., the value in row m, column m of D_{(i,j)} is the penalty factor d^m_{(i,j)} of the reconstruction coefficient w^m_{(i,j)}. G_{(i,j)} is the local covariance matrix of the input patch x_{(i,j)} and can be expressed as

G_{(i,j)} = C^T C        (7)

where the matrix C is defined as

C = x_{(i,j)} \mathbf{1} - Y_{(i,j)}        (8)

Here Y_{(i,j)} = [ y^1_{(i,j)}, y^2_{(i,j)}, \ldots, y^M_{(i,j)} ] is the matrix whose columns are the column vectors corresponding to the row-i, column-j patches of all M low-resolution face sample images of the low-resolution training set, \mathbf{1} is the 1 x M row vector whose elements are all 1, and C^T denotes the transpose of the matrix C.
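As a numerical illustration of formulas (3) to (8), the weights for one patch position can be computed with a few lines of NumPy. This sketch is the editor's, not the patent's; in particular, the closed form of formula (5) is reconstructed here from the stated definitions of D, G, and C rather than copied from the original equation image, and a small ridge term eps is added only for numerical stability.

```python
import numpy as np

def lcr_weights(x, Y, tau=0.04, eps=1e-8):
    """Locality-constrained reconstruction weights for one patch position.

    x   : input low-resolution patch, flattened to a vector
    Y   : matrix whose M columns are the patches at the same position in
          the M low-resolution face sample images (Y_(i,j) in formula (8))
    tau : regularization parameter balancing reconstruction and locality
    """
    M = Y.shape[1]
    # Formula (4): penalty factor = squared Euclidean distance to each sample patch.
    d = np.sum((Y - x[:, None]) ** 2, axis=0)
    D = np.diag(d)                           # formula (6)
    C = x[:, None] @ np.ones((1, M)) - Y     # formula (8)
    G = C.T @ C                              # formula (7): local covariance matrix
    # Formula (5), solved as a linear system instead of an explicit inverse.
    w = np.linalg.solve(G + tau * D + eps * np.eye(M), np.ones(M))
    return w
```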
Step 3: replace the patches of all low-resolution face sample images with the patches of the high-resolution face sample images at the corresponding positions, and synthesize the high-resolution face patch x^H_{(i,j)} by weighting with the optimal weight coefficients obtained in step 2.
The embodiment uses the optimal weight coefficients obtained in step 2 to express the high-resolution face patch as

x^H_{(i,j)} = \sum_{m=1}^{M} w^{*m}_{(i,j)} h^m_{(i,j)}        (9)

where w^{*m}_{(i,j)} is the optimal weight coefficient, obtained in step 2 for synthesizing the row-i, column-j patch of the low-resolution face image, of the row-i, column-j patch of the m-th low-resolution face sample image of the low-resolution training set, and h^m_{(i,j)} is the patch in row i, column j of the m-th high-resolution face sample image of the high-resolution training set.
As shown in Fig. 1, for the patch at a given position of the low-resolution face image, step 2 computes, under the locality-constrained representation, the weight coefficients w^1_{(i,j)}, w^2_{(i,j)}, ..., w^M_{(i,j)} with which the patches at that position in all M low-resolution face sample images of the low-resolution training set linearly reconstruct it. Because approximating the input patch by the sample patches incurs some error, Fig. 1 uses the approximate-equality sign ≌. The patches of all low-resolution face sample images are then replaced by the patches of the high-resolution face sample images at the corresponding positions, and the corresponding high-resolution face patches are weighted by the optimal coefficients w^1_{(i,j)}, w^2_{(i,j)}, ..., w^M_{(i,j)} and summed, which yields the high-resolution patch at that position. During the replacement, the patch h^m_{(i,j)} of each high-resolution face sample image replaces the patch y^m_{(i,j)} of the low-resolution face sample image obtained from it by down-sampling.
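Continuing the previous sketch, once the optimal weights for a patch position are known, the high-resolution patch of formula (9) is simply the corresponding weighted sum of the co-located high-resolution sample patches. The helper below reuses the hypothetical lcr_weights function from the sketch above.

```python
import numpy as np

def hallucinate_patch(x_lr, Y_lr, Y_hr, tau=0.04):
    """Synthesize one high-resolution patch (formula (9)).

    x_lr : flattened input low-resolution patch
    Y_lr : matrix of co-located low-resolution sample patches (columns)
    Y_hr : matrix of co-located high-resolution sample patches (columns),
           with column m corresponding to column m of Y_lr
    """
    w = lcr_weights(x_lr, Y_lr, tau=tau)   # step 2: optimal weights
    return Y_hr @ w                        # step 3: weighted high-resolution patches
```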
Step 4: fuse the high-resolution face patches synthesized in step 3 according to their positions on the face to obtain one high-resolution face image.
The embodiment stitches all high-resolution face patches obtained in step 3 together according to their positions on the face; the pixel values in the overlapping regions between adjacent patches are obtained by averaging. For reference, the following implementation procedure can be adopted:
Step a: initialize an image matrix I of the same size as the high-resolution image (all elements 0) and an overlap-count matrix F (all elements 0);
Step b: add each high-resolution face patch into the image matrix I at its position;
Step c: add 1 to every element of the overlap-count matrix F at the corresponding positions, indicating that these pixel positions have been covered once more;
Step d: repeat steps b and c until all high-resolution patches have been merged;
Step e: divide the image matrix I element-wise by the overlap-count matrix F ("element-wise division" of two matrices of the same size means dividing the elements at corresponding positions).
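Steps a to e translate directly into an accumulate-and-average loop. The sketch below is one possible reading of them (the names are illustrative, not the patent's); `patches` maps the top-left coordinate of each high-resolution patch to the synthesized patch itself.

```python
import numpy as np

def fuse_patches(patches, img_shape, s):
    """Average overlapping high-resolution patches into one image (steps a-e).

    patches   : dict {(top, left): 2-D array of shape (s, s)}
    img_shape : (height, width) of the high-resolution image
    s         : patch side length
    """
    I = np.zeros(img_shape)   # step a: accumulated pixel values
    F = np.zeros(img_shape)   # step a: overlap count per pixel
    for (top, left), block in patches.items():
        I[top:top + s, left:left + s] += block   # step b
        F[top:top + s, left:left + s] += 1       # step c (repeated per step d)
    return I / F              # step e: element-wise division by the overlap count
```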
The resulting high-resolution face image is the prediction output, and the prediction stage is complete.
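Putting the previous sketches together, one possible end-to-end prediction loop over all patch positions is shown below. It is illustrative only and reuses the hypothetical helpers patch_grid, hallucinate_patch, and fuse_patches defined above, with the patch sizes from the experiment section.

```python
import numpy as np

def hallucinate_face(x_lr, train_lr, train_hr, scale=4,
                     s_lr=3, v_lr=1, s_hr=12, v_hr=4, tau=0.04):
    """Steps 1-4: reconstruct a high-resolution face from a low-resolution one.

    x_lr     : input low-resolution face image (e.g. 30 x 25)
    train_lr : list of M low-resolution face sample images
    train_hr : list of the M corresponding high-resolution face sample images
    """
    h_lr, w_lr = x_lr.shape
    h_hr, w_hr = h_lr * scale, w_lr * scale
    _, _, grid_lr = patch_grid(h_lr, w_lr, s_lr, v_lr)   # step 1
    _, _, grid_hr = patch_grid(h_hr, w_hr, s_hr, v_hr)
    out = {}
    for (t, l), (T, L) in zip(grid_lr, grid_hr):         # same patch grid at both scales
        x = x_lr[t:t + s_lr, l:l + s_lr].ravel()
        Y_lr = np.stack([im[t:t + s_lr, l:l + s_lr].ravel() for im in train_lr], axis=1)
        Y_hr = np.stack([im[T:T + s_hr, L:L + s_hr].ravel() for im in train_hr], axis=1)
        hr = hallucinate_patch(x, Y_lr, Y_hr, tau=tau)   # steps 2 and 3
        out[(T, L)] = hr.reshape(s_hr, s_hr)
    return fuse_patches(out, (h_hr, w_hr), s_hr)         # step 4
```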
Compared with the method of document 6, the present invention adds a locality constraint, which solves the problem that the least-squares solution is not unique when there are too many face sample patches. The method of document 7 adds a sparsity constraint but ignores the more important property of locality. The method of the present invention therefore obtains a more accurate patch representation and can synthesize a higher-quality high-resolution face image.
To illustrate the effect of the present invention, an experimental comparison is provided below.
The experiments use the FEI face database, which contains 200 individuals (100 male, 100 female), each with one neutral-expression frontal face image and one smiling frontal face image; all images are normalized to 120 x 100 pixels. Of these, 360 images are used for training and the remaining 40 images are used for testing. Each high-resolution training image is smoothed (with a 4 x 4 averaging filter) and down-sampled by a factor of 4 to obtain a 30 x 25 low-resolution image. The patch sizes used in the embodiment of the invention are: the high-resolution face images are divided into 12 x 12 patches with an overlap of 4 pixels, and the low-resolution face images are divided into 3 x 3 patches with an overlap of 1 pixel. That is, for the high-resolution images r = 120, c = 100, s = 12, v = 4, and for the low-resolution images r = 30, c = 25, s = 3, v = 1.
For the neighbor-embedding method of document 5, the number of neighbor patches K is set to 200. In the sparsity-constrained method of document 7, the reconstruction error is set to 1.0, and the sparse solver of document 9 (E. Candes and J. Romberg, l1-Magic: Recovery of Sparse Signals via Convex Programming, 2005 [Online]. Available: http://www.acm.caltech.edu/l1magic/) is adopted. The only parameter of the method of the invention, τ, is set to 0.04.
Peak signal-to-noise ratio (PSNR, in dB) is the most common and most widely used objective measure of image quality; SSIM (document 8: Z. Wang, A. Bovik, H. Sheikh, and E. Simoncelli, "Image quality assessment: From error visibility to structural similarity," IEEE Trans. Image Process., vol. 13, no. 4, pp. 600-612, 2004) measures the similarity between two images, and the closer its value is to 1, the better the reconstruction. The PSNR and SSIM values obtained by the method of the invention and by the methods of documents 5, 6, and 7, averaged over all 40 test face images, are compared in Fig. 5 and Fig. 6. The average PSNR values of documents 5, 6, 7 and of the method of the invention are 31.22, 31.90, 32.11, and 32.76 dB respectively; the average SSIM values are 0.8972, 0.9034, 0.9052, and 0.9145 respectively. The method of the invention thus improves on the best comparison algorithm (document 7) by 0.65 dB in PSNR and by 0.0093 in SSIM.
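For reference, PSNR for 8-bit grayscale images can be computed as below; this is the generic definition, not code from the patent, and the reported averages are taken over the 40 test images (SSIM is defined in document 8).

```python
import numpy as np

def psnr(reference, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB between two 8-bit grayscale images."""
    diff = reference.astype(np.float64) - reconstructed.astype(np.float64)
    mse = np.mean(diff ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# Average over the 40 test images, as reported in Fig. 5:
# mean_psnr = np.mean([psnr(gt, sr) for gt, sr in zip(ground_truth_images, results)])
```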

Claims (4)

1. A face super-resolution reconstruction method based on locality-constrained representation, characterized by comprising the following steps:
Step 1: inputting a low-resolution face image, and dividing the input low-resolution face image, the low-resolution face sample images in the low-resolution training set, and the high-resolution face sample images in the high-resolution training set into mutually overlapping image patches;
Step 2: for the patch at each position of the low-resolution face image, calculating, under the locality constraint, the optimal weight coefficients for linearly reconstructing it from the patches at the same position in all low-resolution face sample images of the low-resolution training set;
Step 3: replacing the patches of all low-resolution face sample images with the patches of the high-resolution face sample images at the corresponding positions, and synthesizing the high-resolution face patch by weighting with the optimal weight coefficients obtained in step 2;
Step 4: fusing the high-resolution face patches synthesized in step 3 according to their positions on the face to obtain one high-resolution face image.
2. The face super-resolution reconstruction method based on locality-constrained representation according to claim 1, characterized in that: in step 1, a rollback scheme is used to divide the input low-resolution face image, the low-resolution face sample images, and the high-resolution face sample images into mutually overlapping patches, specifically as follows:
patches are taken from left to right and from top to bottom; when a patch reaches the image boundary and the remaining size is smaller than the preset patch size, the patch rolls back so that it is aligned with the edge of the original image, including rolling back to the left and aligning with the right edge when the horizontal division reaches the right edge of the image, and rolling back upwards and aligning with the bottom edge when the vertical division reaches the bottom edge of the image.
3. The face super-resolution reconstruction method based on locality-constrained representation according to claim 1, characterized in that: let {x_{(i,j)}} be the set of patches obtained by dividing the input low-resolution face image X, and let {h^m_{(i,j)}} and {y^m_{(i,j)}} be the sets of patches obtained by correspondingly dividing the high-resolution training set H and the low-resolution training set L, where m = 1, 2, ..., M, M denotes the number of low-resolution face sample images in the low-resolution training set and of high-resolution face sample images in the high-resolution training set, (i, j) denotes the row number and column number of a patch, and U and V denote respectively the number of patch rows and patch columns produced by the division;
in step 2, the optimal weight coefficients are calculated by the following formula:

w^*_{(i,j)} = \arg\min_{w_{(i,j)}} \Big\{ \big\| x_{(i,j)} - \sum_{m=1}^{M} w^m_{(i,j)} y^m_{(i,j)} \big\|_2^2 + \tau \sum_{m=1}^{M} d^m_{(i,j)} \big( w^m_{(i,j)} \big)^2 \Big\}

where w^m_{(i,j)} is the reconstruction coefficient of the patch in row i, column j of the m-th low-resolution face sample image of the low-resolution training set; w_{(i,j)} = (w^1_{(i,j)}, ..., w^M_{(i,j)}) is the row vector formed by the reconstruction coefficients of the row-i, column-j patches of all M low-resolution face sample images; d^m_{(i,j)} is the penalty factor on the reconstruction coefficient w^m_{(i,j)}; τ is the regularization parameter balancing the reconstruction error and the locality constraint; ||·||_2^2 denotes the squared Euclidean distance; and arg min returns the value of the variable w_{(i,j)} at which the function attains its minimum, namely the desired optimal weight coefficients w^*_{(i,j)} = (w^{*1}_{(i,j)}, ..., w^{*M}_{(i,j)}), where w^{*m}_{(i,j)} is the optimal weight coefficient of the row-i, column-j patch of the m-th low-resolution face sample image of the low-resolution training set when synthesizing the row-i, column-j patch of the low-resolution face image;
the penalty factor d^m_{(i,j)} is computed as

d^m_{(i,j)} = \| x_{(i,j)} - y^m_{(i,j)} \|_2^2

where x_{(i,j)} is the patch in row i, column j of the low-resolution face image and y^m_{(i,j)} is the patch in row i, column j of the m-th low-resolution face sample image of the low-resolution training set.
4. The face super-resolution reconstruction method based on locality-constrained representation according to claim 1, characterized in that: in step 3, the high-resolution face patch synthesized by weighting with the weight coefficients obtained in step 2 is calculated by the following formula:

x^H_{(i,j)} = \sum_{m=1}^{M} w^{*m}_{(i,j)} h^m_{(i,j)}

where w^{*m}_{(i,j)} is the optimal weight coefficient, obtained in step 2 for synthesizing the row-i, column-j patch of the low-resolution face image, of the row-i, column-j patch of the m-th low-resolution face sample image of the low-resolution training set, and h^m_{(i,j)} is the patch in row i, column j of the m-th high-resolution face sample image of the high-resolution training set.
CN 201110421452 2011-12-16 2011-12-16 Face super-resolution reconstruction method based on local constraint representation Expired - Fee Related CN102521810B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201110421452 CN102521810B (en) 2011-12-16 2011-12-16 Face super-resolution reconstruction method based on local constraint representation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 201110421452 CN102521810B (en) 2011-12-16 2011-12-16 Face super-resolution reconstruction method based on local constraint representation

Publications (2)

Publication Number Publication Date
CN102521810A (en) 2012-06-27
CN102521810B CN102521810B (en) 2013-09-18

Family

ID=46292714

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201110421452 Expired - Fee Related CN102521810B (en) 2011-12-16 2011-12-16 Face super-resolution reconstruction method based on local constraint representation

Country Status (1)

Country Link
CN (1) CN102521810B (en)

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103034974A (en) * 2012-12-07 2013-04-10 武汉大学 Face image super-resolution reconstructing method based on support-set-driven sparse codes
CN103208109A (en) * 2013-04-25 2013-07-17 武汉大学 Local restriction iteration neighborhood embedding-based face hallucination method
CN103824272A (en) * 2014-03-03 2014-05-28 武汉大学 Face super-resolution reconstruction method based on K-neighboring re-recognition
CN104091320A (en) * 2014-07-16 2014-10-08 武汉大学 Noise human face super-resolution reconstruction method based on data-driven local feature conversion
CN104574455A (en) * 2013-10-29 2015-04-29 华为技术有限公司 Image reestablishing method and device
CN105405097A (en) * 2015-10-29 2016-03-16 武汉大学 Robustness human face super resolution processing method and system based on reverse manifold constraints
CN105469359A (en) * 2015-12-09 2016-04-06 武汉工程大学 Locality-constrained and low-rank representation based human face super-resolution reconstruction method
CN105550649A (en) * 2015-12-09 2016-05-04 武汉工程大学 Extremely low resolution human face recognition method and system based on unity coupling local constraint expression
CN105787462A (en) * 2016-03-16 2016-07-20 武汉工程大学 Semi-coupling-crucial-dictionary-learning-based extremely-low-resolution face identification method and system
CN106203269A (en) * 2016-06-29 2016-12-07 武汉大学 A kind of based on can the human face super-resolution processing method of deformation localized mass and system
CN106530231A (en) * 2016-11-09 2017-03-22 武汉工程大学 Method and system for reconstructing super-resolution image based on deep collaborative representation
CN106558018A (en) * 2015-09-25 2017-04-05 北京大学 The unreal structure method and device of video human face that Component- Based Development decomposes
CN107154023A (en) * 2017-05-17 2017-09-12 电子科技大学 Face super-resolution reconstruction method based on generation confrontation network and sub-pix convolution
CN107203967A (en) * 2017-05-25 2017-09-26 中国地质大学(武汉) A kind of face super-resolution reconstruction method based on context image block
CN107292865A (en) * 2017-05-16 2017-10-24 哈尔滨医科大学 A kind of stereo display method based on two dimensional image processing
CN107633483A (en) * 2017-09-18 2018-01-26 长安大学 The face image super-resolution method of illumination robustness
CN109117892A (en) * 2018-08-28 2019-01-01 国网福建省电力有限公司福州供电公司 Deep learning large scale picture training detection algorithm
CN113887371A (en) * 2021-09-26 2022-01-04 华南理工大学 Data enhancement method for low-resolution face recognition

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008219271A (en) * 2007-03-01 2008-09-18 Fujifilm Corp Image processor, image processing method and photographing device
CN101635048A (en) * 2009-08-20 2010-01-27 上海交通大学 Super-resolution processing method of face image integrating global feature with local information
CN102024266A (en) * 2010-11-04 2011-04-20 西安电子科技大学 Image structure model-based compressed sensing image reconstruction method

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008219271A (en) * 2007-03-01 2008-09-18 Fujifilm Corp Image processor, image processing method and photographing device
CN101635048A (en) * 2009-08-20 2010-01-27 上海交通大学 Super-resolution processing method of face image integrating global feature with local information
CN102024266A (en) * 2010-11-04 2011-04-20 西安电子科技大学 Image structure model-based compressed sensing image reconstruction method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
XIANG MA ET AL.: "Position-based face hallucination method", ICME 2009 *

Cited By (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103034974B (en) * 2012-12-07 2015-12-23 武汉大学 The face image super-resolution reconstruction method of sparse coding is driven based on support set
CN103034974A (en) * 2012-12-07 2013-04-10 武汉大学 Face image super-resolution reconstructing method based on support-set-driven sparse codes
CN103208109A (en) * 2013-04-25 2013-07-17 武汉大学 Local restriction iteration neighborhood embedding-based face hallucination method
CN103208109B (en) * 2013-04-25 2015-09-16 武汉大学 A kind of unreal structure method of face embedded based on local restriction iteration neighborhood
CN104574455A (en) * 2013-10-29 2015-04-29 华为技术有限公司 Image reestablishing method and device
CN104574455B (en) * 2013-10-29 2017-11-24 华为技术有限公司 Image rebuilding method and device
CN103824272A (en) * 2014-03-03 2014-05-28 武汉大学 Face super-resolution reconstruction method based on K-neighboring re-recognition
CN103824272B (en) * 2014-03-03 2016-08-17 武汉大学 The face super-resolution reconstruction method heavily identified based on k nearest neighbor
CN104091320B (en) * 2014-07-16 2017-03-29 武汉大学 Based on the noise face super-resolution reconstruction method that data-driven local feature is changed
CN104091320A (en) * 2014-07-16 2014-10-08 武汉大学 Noise human face super-resolution reconstruction method based on data-driven local feature conversion
CN106558018B (en) * 2015-09-25 2019-08-06 北京大学 The unreal structure method and device of video human face that Component- Based Development decomposes
CN106558018A (en) * 2015-09-25 2017-04-05 北京大学 The unreal structure method and device of video human face that Component- Based Development decomposes
CN105405097A (en) * 2015-10-29 2016-03-16 武汉大学 Robustness human face super resolution processing method and system based on reverse manifold constraints
CN105550649A (en) * 2015-12-09 2016-05-04 武汉工程大学 Extremely low resolution human face recognition method and system based on unity coupling local constraint expression
CN105469359B (en) * 2015-12-09 2019-05-03 武汉工程大学 Face super-resolution reconstruction method based on local restriction low-rank representation
CN105469359A (en) * 2015-12-09 2016-04-06 武汉工程大学 Locality-constrained and low-rank representation based human face super-resolution reconstruction method
CN105787462A (en) * 2016-03-16 2016-07-20 武汉工程大学 Semi-coupling-crucial-dictionary-learning-based extremely-low-resolution face identification method and system
CN105787462B (en) * 2016-03-16 2019-05-03 武汉工程大学 Extremely low resolution ratio face identification method and system based on half coupling judgement property dictionary learning
CN106203269A (en) * 2016-06-29 2016-12-07 武汉大学 A kind of based on can the human face super-resolution processing method of deformation localized mass and system
CN106530231A (en) * 2016-11-09 2017-03-22 武汉工程大学 Method and system for reconstructing super-resolution image based on deep collaborative representation
CN106530231B (en) * 2016-11-09 2020-08-11 武汉工程大学 Super-resolution image reconstruction method and system based on deep cooperative expression
CN107292865A (en) * 2017-05-16 2017-10-24 哈尔滨医科大学 A kind of stereo display method based on two dimensional image processing
CN107292865B (en) * 2017-05-16 2021-01-26 哈尔滨医科大学 Three-dimensional display method based on two-dimensional image processing
CN107154023B (en) * 2017-05-17 2019-11-05 电子科技大学 Based on the face super-resolution reconstruction method for generating confrontation network and sub-pix convolution
CN107154023A (en) * 2017-05-17 2017-09-12 电子科技大学 Face super-resolution reconstruction method based on generation confrontation network and sub-pix convolution
CN107203967A (en) * 2017-05-25 2017-09-26 中国地质大学(武汉) A kind of face super-resolution reconstruction method based on context image block
CN107633483A (en) * 2017-09-18 2018-01-26 长安大学 The face image super-resolution method of illumination robustness
CN109117892A (en) * 2018-08-28 2019-01-01 国网福建省电力有限公司福州供电公司 Deep learning large scale picture training detection algorithm
CN109117892B (en) * 2018-08-28 2021-07-27 国网福建省电力有限公司福州供电公司 Deep learning large-size picture training detection algorithm
CN113887371A (en) * 2021-09-26 2022-01-04 华南理工大学 Data enhancement method for low-resolution face recognition
CN113887371B (en) * 2021-09-26 2024-05-28 华南理工大学 Data enhancement method for low-resolution face recognition

Also Published As

Publication number Publication date
CN102521810B (en) 2013-09-18

Similar Documents

Publication Publication Date Title
CN102521810B (en) Face super-resolution reconstruction method based on local constraint representation
Zhang et al. Residual networks for light field image super-resolution
CN110443842B (en) Depth map prediction method based on visual angle fusion
Lee et al. From big to small: Multi-scale local planar guidance for monocular depth estimation
Gan et al. Monocular depth estimation with affinity, vertical pooling, and label enhancement
CN103824272B (en) The face super-resolution reconstruction method heavily identified based on k nearest neighbor
CN101976435B (en) Combination learning super-resolution method based on dual constraint
JP7058277B2 (en) Reconstruction method and reconfiguration device
CN105741252A (en) Sparse representation and dictionary learning-based video image layered reconstruction method
CN108776971B (en) Method and system for determining variable-split optical flow based on hierarchical nearest neighbor
CN102693419B (en) Super-resolution face recognition method based on multi-manifold discrimination and analysis
KR101994112B1 (en) Apparatus and method for compose panoramic image based on image segment
KR20100038168A (en) Composition analysis method, image device having composition analysis function, composition analysis program, and computer-readable recording medium
CN110381268B (en) Method, device, storage medium and electronic equipment for generating video
Guan et al. Multistage dual-attention guided fusion network for hyperspectral pansharpening
KR102141319B1 (en) Super-resolution method for multi-view 360-degree image and image processing apparatus
CN111861880A (en) Image super-fusion method based on regional information enhancement and block self-attention
CN103034974B (en) The face image super-resolution reconstruction method of sparse coding is driven based on support set
Zhou et al. PADENet: An efficient and robust panoramic monocular depth estimation network for outdoor scenes
Ma et al. Position-based face hallucination method
WO2022216521A1 (en) Dual-flattening transformer through decomposed row and column queries for semantic segmentation
CN114387346A (en) Image recognition and prediction model processing method, three-dimensional modeling method and device
KR20150065302A (en) Method deciding 3-dimensional position of landsat imagery by Image Matching
Jin et al. Light field reconstruction via deep adaptive fusion of hybrid lenses
Zhou et al. Mh pose: 3d human pose estimation based on high-quality heatmap

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20130918

Termination date: 20171216