CN102521810B - Face super-resolution reconstruction method based on local constraint representation - Google Patents

Face super-resolution reconstruction method based on local constraint representation

Info

Publication number
CN102521810B
Authority
CN
China
Prior art keywords
image
low resolution
resolution
image block
training set
Prior art date
Legal status
Expired - Fee Related
Application number
CN 201110421452
Other languages
Chinese (zh)
Other versions
CN102521810A (en)
Inventor
胡瑞敏
江俊君
王冰
韩镇
黄克斌
卢涛
王亦民
Current Assignee
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date
Filing date
Publication date
Application filed by Wuhan University WHU filed Critical Wuhan University WHU
Priority to CN 201110421452 priority Critical patent/CN102521810B/en
Publication of CN102521810A publication Critical patent/CN102521810A/en
Application granted granted Critical
Publication of CN102521810B publication Critical patent/CN102521810B/en

Landscapes

  • Image Processing (AREA)

Abstract

The invention discloses a face super-resolution reconstruction method based on local constraint representation. The method comprises the following steps: dividing the input low-resolution face image and the face images in the high- and low-resolution training sets into mutually overlapping image blocks; for each image block of the input low-resolution face image, according to the prior that the representation of image blocks has locality, calculating the optimal weight coefficients for linearly reconstructing it from the image blocks at the corresponding position of every image in the low-resolution training set; replacing, one to one, the image blocks at the corresponding position of every image in the low-resolution training set with the image blocks at the corresponding position of every image in the high-resolution training set, and synthesizing a high-resolution image block by weighting; and fusing the image blocks into a high-resolution face image according to their positions on the face. The method proposes a local constraint representation model that adaptively selects, from the sample image block space of the training set, image blocks that are neighbors of the input image block to linearly reconstruct it, thereby obtaining the optimal weight coefficients and synthesizing a high-quality high-resolution image.

Description

Face super-resolution reconstruction method based on local constraint representation
Technical field
The present invention relates to the field of image super-resolution, and in particular to a face super-resolution reconstruction method based on local constraint representation.
Background technology
Surveillance cameras are widely used in security systems. However, in many cases the distance between the camera and the object of interest (such as a human face) is large, so that the resolution of the face captured in the video is very low; the region occupied by the face often contains only tens of pixels. Because the resolution is so low, the face image of interest has lost too much detail, and the faces captured by surveillance cameras are difficult for either people or machines to recognize effectively. Therefore, how to improve the quality of low-resolution face images and effectively enhance the resolution of poor-quality face images in surveillance video, so as to provide sufficient facial detail for subsequent face recognition, has become an urgent problem. Face super-resolution (also called face hallucination) is an image super-resolution reconstruction technique that produces a high-resolution face image from an input low-resolution face image.
In recent years, researchers have proposed a large number of learning-based face super-resolution methods. Using the prior information contained in a training set of high- and low-resolution image pairs, these methods reconstruct a high-resolution face image from a single input low-resolution face image. For example, in 2000 Freeman et al. first proposed a Markov network method in Document 1 (W. Freeman, E. Pasztor, and O. Carmichael, "Learning low-level vision," IJCV, 40(1): 25–47, 2000), which is also the earliest learning-based super-resolution method. In the same year, Baker and Kanade proposed a face hallucination method specifically for face images in Document 2 (S. Baker and T. Kanade, "Hallucinating faces," in FG, Grenoble, France, Mar. 2000, pp. 83–88). Subsequently, Liu et al. proposed a two-step face reconstruction approach in Document 3 (C. Liu, H.Y. Shum, and C.S. Zhang, "A two-step approach to hallucinating faces: global parametric model and local nonparametric model," in CVPR, pp. 192–198, 2001), which synthesizes the global and local information of the face respectively. Since then, learning-based super-resolution methods have attracted wide attention.
Based on manifold learning theory (Document 4: S.T. Roweis and L.K. Saul, "Nonlinear dimensionality reduction by locally linear embedding," Science, 290(5500): 2323–2326, 2000), Chang et al. in 2004 proposed a neighbor embedding image super-resolution reconstruction method in Document 5 (H. Chang, D.Y. Yeung, and Y.M. Xiong, "Super-resolution through neighbor embedding," in CVPR, pp. 275–282, 2004), based on the assumption that the high- and low-resolution sample libraries have similar local geometric structure, and obtained good reconstruction results. However, because the number of neighbor blocks selected by this method is fixed, over-fitting or under-fitting can occur when representing the input image block. To address this problem, Ma et al. in 2010 proposed a position-patch based face super-resolution method in Document 6 (X. Ma, J.P. Zhang, and C. Qi, "Hallucinating face by position-patch," Pattern Recognition, 43(6): 3178–3194, 2010), which reconstructs the high-resolution face image using all face image blocks in the training set located at the same position as the input image block; this avoids steps such as manifold learning or feature extraction, improves efficiency, and also improves the quality of the synthesized image. However, because this method uses least squares to solve for the coefficients, the representation coefficients of an image block are not unique when the number of images in the training sample set is larger than the dimension of the image block. Therefore, Jung et al. in 2011 proposed a position-patch based face super-resolution method using convex optimization in Document 7 (C. Jung, L. Jiao, B. Liu, and M. Gong, "Position-Patch Based Face Hallucination Using Convex Optimization," IEEE Signal Process. Lett., vol. 18, no. 6, pp. 367–370, 2011), which adds a sparsity constraint to the solution of the image block representation and thereby resolves the non-uniqueness of the solution. However, in order to make the representation of the input image block as sparse as possible, this method may choose image blocks that differ greatly from the input image block for the linear reconstruction; because it does not consider locality, its reconstruction results are unsatisfactory.
Summary of the invention
The object of the invention is to provide a face image super-resolution reconstruction method based on local representation, which solves the problem that existing similar algorithms represent the input low-resolution image blocks inaccurately, and improves the quality of the finally synthesized high-resolution face image.
To achieve the above object, the technical solution adopted by the present invention is a face super-resolution reconstruction method based on local constraint representation, comprising the following steps:
Step 1: input a low-resolution face image, and divide the input low-resolution face image, the low-resolution face sample images in the low-resolution training set and the high-resolution face sample images in the high-resolution training set into mutually overlapping image blocks;
Step 2: for the image block at each position of the low-resolution face image, calculate, under the locality constraint, the optimal weight coefficients for linearly reconstructing it from the image blocks at the same position of all low-resolution face sample images in the low-resolution training set;
Step 3: replace the image blocks of all low-resolution face sample images with the image blocks of the high-resolution face sample images at the corresponding position, and synthesize a high-resolution face image block by weighting with the optimal weight coefficients obtained in step 2;
Step 4: fuse the high-resolution face image blocks synthesized in step 3 according to their positions on the face to obtain a high-resolution face image.
Further, in step 1 a rollback strategy is used to divide the input low-resolution face image, the low-resolution face sample images and the high-resolution face sample images into mutually overlapping image blocks, as follows:
the image blocks are divided in order from left to right and from top to bottom; when a block to be divided reaches the image border, if the remaining size is smaller than the preset image block size, the division rolls back and takes the edge of the original image as the reference. That is, when the horizontal division reaches the right edge of the image, it rolls back to the left and takes the right edge as the reference for the block; when the vertical division reaches the bottom edge of the image, it rolls back upward and takes the bottom edge as the reference for the block.
Further, let the set of image blocks obtained by dividing the input low-resolution face image X_L be {X_L(i,j) | 1 ≤ i ≤ U, 1 ≤ j ≤ V}, and let the corresponding sets of image blocks obtained by dividing the high-resolution training set {Y_H^m | 1 ≤ m ≤ M} and the low-resolution training set {Y_L^m | 1 ≤ m ≤ M} be {Y_H^m(i,j) | 1 ≤ i ≤ U, 1 ≤ j ≤ V} and {Y_L^m(i,j) | 1 ≤ i ≤ U, 1 ≤ j ≤ V} respectively. M denotes the number of low-resolution face sample images in the low-resolution training set and the number of high-resolution face sample images in the high-resolution training set; (i,j) denotes the row number and column number of a divided image block; U and V denote the number of image blocks obtained along each column and each row respectively.
In step 2, the optimal weight coefficients are obtained with the following formula:
w*(i,j) = arg min over w(i,j) of { ‖X_L(i,j) − Σ_{m=1}^{M} w_m(i,j) Y_L^m(i,j)‖₂² + τ ‖d(i,j) ∘ w(i,j)‖₂² }
where w_m(i,j) is the reconstruction coefficient of the image block in row i and column j of the m-th low-resolution face sample image in the low-resolution training set; w(i,j) = [w_1(i,j), w_2(i,j), …, w_M(i,j)] is the row vector formed by the reconstruction coefficients of the row-i, column-j image blocks of all M low-resolution face sample images; d_m(i,j) is the penalty factor of the reconstruction coefficient w_m(i,j), and d(i,j) = [d_1(i,j), d_2(i,j), …, d_M(i,j)]; τ is the regularization parameter balancing the reconstruction error and the locality constraint; "∘" denotes the point-wise (element-by-element) product of two vectors; ‖·‖₂² denotes the squared Euclidean distance; arg min returns the value w*(i,j) of the variable w(i,j) at which the function attains its minimum, i.e. the desired optimal weight coefficients, w*(i,j) = [w_1*(i,j), w_2*(i,j), …, w_M*(i,j)], where w_m*(i,j) is the optimal weight coefficient of the row-i, column-j image block of the m-th low-resolution face sample image in the low-resolution training set when synthesizing the row-i, column-j image block of the low-resolution face image.
The penalty factor d_m(i,j) is computed as follows:
d_m(i,j) = ‖X_L(i,j) − Y_L^m(i,j)‖₂²
where X_L(i,j) is the image block in row i and column j of the low-resolution face image, and Y_L^m(i,j) is the image block in row i and column j of the m-th low-resolution face sample image in the low-resolution training set.
Further, in step 3, the high-resolution face image block synthesized by weighting with the optimal weight coefficients obtained in step 2 is calculated with the following formula:
X_H(i,j) = Σ_{m=1}^{M} w_m*(i,j) Y_H^m(i,j)
where w_m*(i,j) is, as in step 2, the optimal weight coefficient of the row-i, column-j image block of the m-th low-resolution face sample image in the low-resolution training set when synthesizing the row-i, column-j image block of the low-resolution face image, and Y_H^m(i,j) is the image block in row i and column j of the m-th high-resolution face sample image in the high-resolution training set.
By adding a locality constraint, the present invention adaptively chooses image blocks that are neighbors of the input image block to represent it. This avoids the over-fitting or under-fitting caused by a fixed number of neighbor blocks in similar algorithms (Document 5), the multiple-solution problem caused by selecting too many image blocks (Document 6), and the neglect of locality caused by over-emphasizing sparsity (Document 7), so that the representation coefficients of the input image block are more accurate and a higher-quality high-resolution face image is finally obtained.
Description of drawings
Fig. 1 is the flow chart of the embodiment of the invention;
Fig. 2 illustrates the block division of a face image;
Fig. 3 is a schematic diagram of the rollback when the horizontal division reaches the right edge of the image in the image block division of the embodiment of the invention;
Fig. 4 is a schematic diagram of the rollback when the vertical division reaches the bottom edge of the image in the image block division of the embodiment of the invention;
Fig. 5 is the comparison of the mean PSNR values obtained by the method of the invention and prior-art methods;
Fig. 6 is the comparison of the mean SSIM values obtained by the method of the invention and prior-art methods.
Embodiment
The technical solution of the present invention can be implemented as an automatic workflow using software. The technical solution is further described below in conjunction with the drawings and an embodiment. Referring to Fig. 1, the concrete steps of the embodiment of the invention are as follows:
Step 1: input a low-resolution face image, and divide the input low-resolution face image, the low-resolution face sample images in the low-resolution training set and the high-resolution face sample images in the high-resolution training set into mutually overlapping image blocks.
The low-resolution training set contains the low-resolution face sample images and the high-resolution training set contains the high-resolution face sample images; together they provide predefined pairs of training samples. Each low-resolution face sample image in the low-resolution training set is derived from a high-resolution face sample image in the high-resolution training set. In the embodiment, all high-resolution images are 120 × 100 pixels and all low-resolution images are 30 × 25 pixels; each low-resolution face sample image is the result of 4× bicubic down-sampling of the corresponding high-resolution face sample image.
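For illustration, a minimal Python sketch of this training-pair preparation is given below; it is not part of the patented method, it assumes OpenCV (cv2) for the bicubic resize, and it uses the 4 × 4 mean filter mentioned in the experiments later in this description:

import cv2
import numpy as np

def make_low_res_sample(hr_image: np.ndarray, factor: int = 4) -> np.ndarray:
    """Build a low-resolution training sample from a 120 x 100 high-resolution face.

    Assumed preparation: smooth with a 4 x 4 mean filter, then down-sample by
    `factor` using bicubic interpolation (120 x 100 -> 30 x 25 for factor = 4).
    """
    smoothed = cv2.blur(hr_image, (4, 4))                      # 4 x 4 mean filter
    h, w = hr_image.shape[:2]
    lr_image = cv2.resize(smoothed, (w // factor, h // factor),
                          interpolation=cv2.INTER_CUBIC)       # bicubic down-sampling
    return lr_image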
In the embodiment, let the input low-resolution face image be X_L, and let the high-resolution training set and the low-resolution training set be {Y_H^m | 1 ≤ m ≤ M} and {Y_L^m | 1 ≤ m ≤ M}: the low-resolution training set contains the low-resolution face sample images Y_L^m and the high-resolution training set contains the high-resolution face sample images Y_H^m, 1 ≤ m ≤ M. For the purpose of the later replacement step, the embodiment numbers the low-resolution and high-resolution face sample images consistently, i.e. the low-resolution face sample image Y_L^m is the result of 4× bicubic down-sampling of the high-resolution face sample image Y_H^m. The number of low-resolution face sample images in the low-resolution training set equals the number of high-resolution face sample images in the high-resolution training set, and both are M.
Let the set of image blocks obtained by dividing the low-resolution face image X_L be {X_L(i,j) | 1 ≤ i ≤ U, 1 ≤ j ≤ V}, and let the corresponding sets of image blocks obtained by dividing the high-resolution training set {Y_H^m} and the low-resolution training set {Y_L^m} be {Y_H^m(i,j) | 1 ≤ i ≤ U, 1 ≤ j ≤ V} and {Y_L^m(i,j) | 1 ≤ i ≤ U, 1 ≤ j ≤ V} respectively, where M denotes the number of low-resolution face sample images in the low-resolution training set and the number of high-resolution face sample images in the high-resolution training set. As shown in Fig. 2, (i,j) denotes the position of a divided image block in the block coordinate system, i.e. the row number and column number of the block. Taking the upper left corner of the image as the starting point of the division, the position information of the divided image blocks is (i,j), 1 ≤ i ≤ U, 1 ≤ j ≤ V. The image blocks adjacent to the block at position (i,j) above, below, to the left and to the right have coordinates (i−1,j), (i+1,j), (i,j−1) and (i,j+1) respectively. U and V denote the number of image blocks obtained along each column and each row respectively. The input low-resolution face image, all low-resolution face sample images and all high-resolution face sample images are divided into mutually overlapping image blocks in a consistent manner, i.e. the concrete values of U and V are the same for every divided image.
The concrete values of U and V follow from the way the image is divided. The invention divides each image into mutually overlapping image blocks: starting from the upper left corner of the image, a block of size s × s (unit: pixel) is taken each time, such that the block overlaps the already divided parts above it and to its left by o pixels (except when the block lies at the top or leftmost edge of the face image). Therefore
U = ⌈(r − s) / (s − o)⌉ + 1    (1)
V = ⌈(c − s) / (s − o)⌉ + 1    (2)
where r and c denote the number of rows and columns of the image (unit: pixel), s denotes the side length of the square image blocks, o denotes the number of overlapping pixels between adjacent image blocks, and ⌈·⌉ returns the smallest integer greater than or equal to its argument. In other words, the block at position (i,j) overlaps each of its upper, lower, left and right neighbors in an s × o (or o × s) rectangular region, except of course when the block lies at the image border.
When dividing the image into blocks, in order to avoid changing the image size by cropping or padding, the invention adopts a "rollback" strategy: when a block to be divided would exceed the image border (the right edge or the bottom edge), the division rolls back and takes the edge of the original image as the reference. As shown in Fig. 3, when the horizontal division exceeds the right edge of the image, the block "rolls back" to the left and is taken with the right edge as the reference; similarly, as shown in Fig. 4, when the vertical division exceeds the bottom edge of the image, the block "rolls back" upward and is taken with the bottom edge as the reference.
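For illustration, the following Python sketch (a hypothetical helper, not part of the claimed method) generates the top-left pixel coordinates of the overlapping blocks under the rollback strategy, with U and V computed as in formulas (1) and (2):

import math
from typing import List, Tuple

def block_grid(r: int, c: int, s: int, o: int) -> Tuple[int, int, List[List[Tuple[int, int]]]]:
    """Return (U, V, corners) for an r x c image divided into s x s blocks that
    overlap neighbouring blocks by o pixels. A block that would cross the right
    or bottom border is rolled back so that it aligns with that border instead
    of being cropped or padded."""
    step = s - o
    U = math.ceil((r - s) / step) + 1          # formula (1): blocks per column
    V = math.ceil((c - s) / step) + 1          # formula (2): blocks per row
    corners = []
    for i in range(U):
        row = []
        top = min(i * step, r - s)             # rollback at the bottom edge
        for j in range(V):
            left = min(j * step, c - s)        # rollback at the right edge
            row.append((top, left))
        corners.append(row)
    return U, V, corners

# Example from the experiments below: a 120 x 100 high-resolution face divided
# into 12 x 12 blocks with a 4-pixel overlap gives a 15 x 12 grid of blocks.
U, V, corners = block_grid(120, 100, 12, 4)
assert (U, V) == (15, 12)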
Step 2: for the image block X_L(i,j) at each position of the low-resolution face image, calculate, under the locality constraint, the optimal weight coefficients for linearly reconstructing it from the image blocks at the same position of all low-resolution face sample images in the low-resolution training set.
In the embodiment, the optimal weight coefficients are obtained from the following formula:
w*(i,j) = arg min over w(i,j) of { ‖X_L(i,j) − Σ_{m=1}^{M} w_m(i,j) Y_L^m(i,j)‖₂² + τ ‖d(i,j) ∘ w(i,j)‖₂² }    (3)
This formula consists of two parts: the first is the super-resolution reconstruction constraint, and the second is the locality constraint on the representation of the input image block. Here w_m(i,j) is the reconstruction coefficient of the image block in row i and column j of the m-th low-resolution face sample image in the low-resolution training set; w(i,j) = [w_1(i,j), w_2(i,j), …, w_M(i,j)] is the row vector formed by the reconstruction coefficients of the row-i, column-j image blocks of all M low-resolution face sample images; d_m(i,j) is the penalty factor of the reconstruction coefficient w_m(i,j), and d(i,j) = [d_1(i,j), d_2(i,j), …, d_M(i,j)]; τ is the regularization parameter balancing the reconstruction error and the locality constraint; "∘" denotes the point-wise (element-by-element) product of two vectors; ‖·‖₂² denotes the squared Euclidean distance; arg min returns the value w*(i,j) of the variable w(i,j) at which the function attains its minimum, i.e. the desired optimal weight coefficients, w*(i,j) = [w_1*(i,j), w_2*(i,j), …, w_M*(i,j)], where w_m*(i,j) is the optimal weight coefficient of the row-i, column-j image block of the m-th low-resolution face sample image in the low-resolution training set when synthesizing the row-i, column-j image block of the low-resolution face image.
X_L(i,j) is the image block in row i and column j of the input low-resolution face image, and Y_L^m(i,j) is the image block in row i and column j of the m-th low-resolution face sample image in the low-resolution training set. When the image block Y_L^m(i,j) is far from the input image block X_L(i,j), it receives a large penalty; conversely, when Y_L^m(i,j) is close to the input image block X_L(i,j), it receives a small penalty. Minimizing formula (3) therefore guarantees that, as far as possible, sample image blocks that are neighbors of X_L(i,j) are chosen to represent it. In the embodiment of the invention the penalty factor is expressed with the squared Euclidean distance:
d_m(i,j) = ‖X_L(i,j) − Y_L^m(i,j)‖₂²    (4)
In formula (3), τ is the regularization parameter that balances the reconstruction constraint and the locality constraint. Different values of τ give different reconstruction results; when τ = 0, the local constraint representation method of the present invention degenerates to the least squares representation of Document 6. The invention suggests a value of τ between 0.02 and 0.1.
In practice, the solution of formula (3) can be obtained from the following formula:
w*(i,j) = (G + τD)⁻¹ · 1ᵀ    (5)
where w*(i,j) are the optimal weight coefficients, under the locality constraint, for linearly reconstructing the input low-resolution image block from the image blocks of all low-resolution face sample images in the low-resolution training set.
Here D is an M × M diagonal matrix, which can be expressed as
D_mm = d_m(i,j),  D_mn = 0 for m ≠ n    (6)
where D_mm is the value in row m and column m of the matrix D and d_m(i,j) is the penalty factor of the reconstruction coefficient w_m(i,j). G is a local covariance matrix of the input image block X_L(i,j), which can be expressed as
G = CᵀC    (7)
and the matrix C may be defined as
C = X_L(i,j)·1 − Y    (8)
where Y is the matrix formed by the column vectors corresponding to the row-i, column-j image blocks of all M low-resolution face sample images in the low-resolution training set, Y = [Y_L^1(i,j), Y_L^2(i,j), …, Y_L^M(i,j)], 1 is a 1 × M row vector whose elements are all 1, and Cᵀ denotes the transpose of the matrix C.
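For illustration, a minimal NumPy sketch of this closed-form solution is given below; the blocks are assumed to be flattened into column vectors, and the weights are additionally normalized to sum to one, which is a convention of locality-constrained coding assumed here rather than stated in the text above:

import numpy as np

def lcr_weights(x_patch: np.ndarray, Y_patches: np.ndarray, tau: float = 0.04) -> np.ndarray:
    """Optimal weights for one block position (i, j).

    x_patch   : input low-resolution block, flattened to shape (p,)
    Y_patches : matrix Y of shape (p, M); column m is Y_L^m(i, j) flattened
    tau       : regularization parameter balancing reconstruction and locality
    """
    p, M = Y_patches.shape
    d = np.sum((Y_patches - x_patch[:, None]) ** 2, axis=0)   # formula (4): d_m(i, j)
    C = x_patch[:, None] - Y_patches                          # formula (8): C = X_L(i,j)·1 − Y
    G = C.T @ C                                               # formula (7): local covariance
    w = np.linalg.solve(G + tau * np.diag(d), np.ones(M))     # formula (5)
    w /= w.sum()                                              # assumed sum-to-one normalization
    return w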
Step 3: replace the image blocks of all low-resolution face sample images with the image blocks of the high-resolution face sample images at the corresponding position, and synthesize the high-resolution face image block X_H(i,j) by weighting with the optimal weight coefficients obtained in step 2.
The embodiment uses the optimal weight coefficients obtained in step 2 to express the high-resolution face image block as
X_H(i,j) = Σ_{m=1}^{M} w_m*(i,j) Y_H^m(i,j)    (9)
where w_m*(i,j) is, as in step 2, the optimal weight coefficient of the row-i, column-j image block of the m-th low-resolution face sample image in the low-resolution training set when synthesizing the row-i, column-j image block of the low-resolution face image, and Y_H^m(i,j) is the image block in row i and column j of the m-th high-resolution face sample image in the high-resolution training set.
As shown in Fig. 1, for the image block at a certain position of the low-resolution face image, the weight coefficients w_1*(i,j), w_2*(i,j), …, w_M*(i,j) for linearly reconstructing it from the image blocks at the same position of all M low-resolution face sample images in the low-resolution training set are calculated under the local constraint representation. Approximating the input image block with the sample image blocks leaves a certain error, which is why Fig. 1 uses the approximately-equal sign ≌. The image blocks of all low-resolution face sample images are then replaced by the image blocks of the high-resolution face sample images at the corresponding position, and the corresponding high-resolution face image blocks are weighted and synthesized with the weight coefficients w_1*(i,j), w_2*(i,j), …, w_M*(i,j) to obtain the high-resolution image block at this position. During the replacement, the image block Y_H^m(i,j) of each high-resolution face sample image replaces the image block Y_L^m(i,j) of the low-resolution face sample image obtained from it by down-sampling.
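Under the same assumptions as the sketch above (hypothetical helper names), formula (9) reduces to a single matrix-vector product over the high-resolution blocks at the same position:

import numpy as np

def synthesize_hr_patch(w_star: np.ndarray, YH_patches: np.ndarray) -> np.ndarray:
    """Formula (9): X_H(i,j) = sum_m w_m*(i,j) * Y_H^m(i,j).

    w_star     : optimal weights of shape (M,), e.g. from lcr_weights()
    YH_patches : matrix of shape (P, M); column m is Y_H^m(i, j) flattened
    """
    return YH_patches @ w_star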
Step 4: fuse the high-resolution face image blocks synthesized in step 3 according to their positions on the face to obtain a high-resolution face image.
The embodiment stitches all high-resolution face image blocks obtained in step 3 together according to their positions on the face; the pixel values of the overlapping parts between adjacent image blocks can be obtained by averaging. For ease of implementation, a possible implementation procedure is given as follows:
Step a: initialize an image matrix I of the same size as the high-resolution image (all elements 0) and an overlap-count matrix F (all elements 0);
Step b: add a high-resolution face image block into the image matrix I at its position;
Step c: add 1 to every element of the overlap-count matrix F at the corresponding positions, indicating that these pixel positions have been covered once more;
Step d: repeat steps b and c until all high-resolution image blocks have been accumulated;
Step e: divide the image matrix I element-wise by the overlap-count matrix F ("element-wise division" of two matrices of the same size means dividing the elements at corresponding positions of the two matrices).
The resulting high-resolution face image is taken as the predicted output, and the prediction stage is finished.
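For illustration, a short NumPy sketch of steps a to e follows; it reuses the hypothetical block_grid() helper above for the block positions:

import numpy as np

def fuse_patches(hr_patches, corners, r: int, c: int, s: int) -> np.ndarray:
    """Steps a-e: accumulate overlapping s x s blocks and average the overlaps.

    hr_patches : hr_patches[i][j] is the synthesized s x s block at grid cell (i, j)
    corners    : corners[i][j] is the (top, left) pixel of that block, from block_grid()
    """
    I = np.zeros((r, c))                     # step a: image accumulator
    F = np.zeros((r, c))                     # step a: overlap counter
    for row_patches, row_corners in zip(hr_patches, corners):
        for patch, (top, left) in zip(row_patches, row_corners):
            I[top:top + s, left:left + s] += patch   # step b
            F[top:top + s, left:left + s] += 1       # step c
    return I / F                             # step e: element-wise average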
On the basis of the method of Document 6, the present invention adds a locality constraint, which solves the problem that the least squares solution is not unique when there are too many face image sample blocks. The method of Document 7 adds a sparsity constraint but ignores the important property of locality. The method of the present invention therefore obtains a more accurate representation of the image blocks and can synthesize a higher-quality high-resolution face image.
To illustrate the effect of the present invention, an experimental comparison is given below.
The FEI face database was adopted: 200 different individuals (100 male, 100 female), each with one neutral-expression frontal face image and one smiling frontal face image. All images were normalized to 120 × 100 pixels; 360 of them were chosen for training and the remaining 40 were used as test images. Each high-resolution training image was smoothed (with a 4 × 4 mean filter) and down-sampled by a factor of 4 to obtain a 30 × 25 low-resolution image. In the embodiment of the invention the face image blocks are divided as follows: the high-resolution face images are divided into 12 × 12 image blocks with an overlap of 4 pixels, and the low-resolution face images are divided into 3 × 3 image blocks with an overlap of 1 pixel. That is, for the high-resolution images r = 120, c = 100, s = 12, o = 4; for the low-resolution images r = 30, c = 25, s = 3, o = 1.
The number K of neighbor blocks in the neighbor embedding method of Document 5 was set to 200. The reconstruction error in the sparsity-constrained method of Document 7 was set to 1.0, and the sparse solver of Document 9 (E. Candes and J. Romberg, ℓ1-Magic: Recovery of Sparse Signals via Convex Programming, 2005 [Online]. Available: http://www.acm.caltech.edu/l1magic/) was employed. The only parameter τ of the method of the invention was set to 0.04.
Peak signal-to-noise ratio (PSNR, in dB) is the most common objective measure of image quality; SSIM (Document 8: Z. Wang, A. Bovik, H. Sheikh, and E. Simoncelli, "Image quality assessment: From error visibility to structural similarity," IEEE Trans. Image Process., vol. 13, no. 4, pp. 600–612, 2004) measures the similarity of two images, and the closer its value is to 1 the better the reconstruction. The PSNR and SSIM values (averaged over all 40 test face images) obtained by the method of the invention and by the methods of Documents 5, 6 and 7 are compared in Fig. 5 and Fig. 6 respectively. The mean PSNR values of Document 5, Document 6, Document 7 and the method of the invention are 31.22, 31.90, 32.11 and 32.76 dB in turn; the mean SSIM values are 0.8972, 0.9034, 0.9052 and 0.9145 in turn. Compared with the best comparison algorithm (Document 7), the method of the invention improves the PSNR by 0.65 dB and the SSIM by 0.0093.

Claims (3)

1. A face super-resolution reconstruction method based on local constraint representation, characterized by comprising the following steps:
Step 1: input a low-resolution face image, and divide the input low-resolution face image, the low-resolution face sample images in the low-resolution training set and the high-resolution face sample images in the high-resolution training set into mutually overlapping image blocks;
Step 2: for the image block at each position of the low-resolution face image, calculate, under the locality constraint, the optimal weight coefficients for linearly reconstructing it from the image blocks at the same position of all low-resolution face sample images in the low-resolution training set;
Let the set of image blocks obtained by dividing the low-resolution face image X_L be {X_L(i,j) | 1 ≤ i ≤ U, 1 ≤ j ≤ V}, and let the corresponding sets of image blocks obtained by dividing the high-resolution training set {Y_H^m | 1 ≤ m ≤ M} and the low-resolution training set {Y_L^m | 1 ≤ m ≤ M} be {Y_H^m(i,j) | 1 ≤ i ≤ U, 1 ≤ j ≤ V} and {Y_L^m(i,j) | 1 ≤ i ≤ U, 1 ≤ j ≤ V} respectively, where M represents the number of low-resolution face sample images in the low-resolution training set and the number of high-resolution face sample images in the high-resolution training set, 1 ≤ m ≤ M; (i,j) represents the row number and column number of a divided image block, and U and V represent the number of image blocks obtained along each column and each row respectively;
in step 2, the optimal weight coefficients are obtained with the following formula:
w*(i,j) = arg min over w(i,j) of { ‖X_L(i,j) − Σ_{m=1}^{M} w_m(i,j) Y_L^m(i,j)‖₂² + τ ‖d(i,j) ∘ w(i,j)‖₂² }
wherein w_m(i,j) is the reconstruction coefficient of the image block in row i and column j of the m-th low-resolution face sample image in the low-resolution training set; w(i,j) = [w_1(i,j), w_2(i,j), …, w_m(i,j), …, w_M(i,j)] is the row vector formed by the reconstruction coefficients of the row-i, column-j image blocks of all M low-resolution face sample images; d_m(i,j) is the penalty factor of the reconstruction coefficient w_m(i,j), and d(i,j) is the row vector formed by the penalty factors d_1(i,j), …, d_M(i,j); τ is the regularization parameter balancing the reconstruction error and the locality constraint; "∘" denotes the point-wise (element-by-element) product of two vectors; ‖·‖₂² denotes the squared Euclidean distance; arg min returns the value w*(i,j) of the variable w(i,j) at which the function attains its minimum, i.e. the desired optimal weight coefficients, w*(i,j) = [w_1*(i,j), w_2*(i,j), …, w_m*(i,j), …, w_M*(i,j)], wherein w_m*(i,j) is the optimal weight coefficient of the row-i, column-j image block of the m-th low-resolution face sample image in the low-resolution training set when synthesizing the row-i, column-j image block of the low-resolution face image;
the penalty factor d_m(i,j) is computed as follows:
d_m(i,j) = ‖X_L(i,j) − Y_L^m(i,j)‖₂²
wherein X_L(i,j) is the image block in row i and column j of the low-resolution face image, and Y_L^m(i,j) is the image block in row i and column j of the m-th low-resolution face sample image in the low-resolution training set;
Step 3: replace the image blocks of all low-resolution face sample images with the image blocks of the high-resolution face sample images at the corresponding position, and synthesize a high-resolution face image block by weighting with the optimal weight coefficients obtained in step 2;
Step 4: fuse the high-resolution face image blocks synthesized in step 3 according to their positions on the face to obtain a high-resolution face image.
2. The face super-resolution reconstruction method based on local constraint representation according to claim 1, characterized in that: in step 1, a rollback strategy is used to divide the input low-resolution face image, the low-resolution face sample images and the high-resolution face sample images into mutually overlapping image blocks, as follows:
the image blocks are divided in order from left to right and from top to bottom; when a block to be divided reaches the image border, if the remaining size is smaller than the preset image block size, the division rolls back and takes the edge of the original image as the reference, including: when the horizontal division reaches the right edge of the image, rolling back to the left and taking the right edge as the reference for the block; and when the vertical division reaches the bottom edge of the image, rolling back upward and taking the bottom edge as the reference for the block.
3. The face super-resolution reconstruction method based on local constraint representation according to claim 1, characterized in that: in step 3, the high-resolution face image block X_H(i,j) synthesized by weighting with the weight coefficients obtained in step 2 is calculated with the following formula:
X_H(i,j) = Σ_{m=1}^{M} w_m*(i,j) Y_H^m(i,j)
wherein w_m*(i,j) is the optimal weight coefficient of the row-i, column-j image block of the m-th low-resolution face sample image in the low-resolution training set when synthesizing the row-i, column-j image block of the low-resolution face image in step 2, and Y_H^m(i,j) is the image block in row i and column j of the m-th high-resolution face sample image in the high-resolution training set.
CN 201110421452 2011-12-16 2011-12-16 Face super-resolution reconstruction method based on local constraint representation Expired - Fee Related CN102521810B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201110421452 CN102521810B (en) 2011-12-16 2011-12-16 Face super-resolution reconstruction method based on local constraint representation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 201110421452 CN102521810B (en) 2011-12-16 2011-12-16 Face super-resolution reconstruction method based on local constraint representation

Publications (2)

Publication Number Publication Date
CN102521810A CN102521810A (en) 2012-06-27
CN102521810B true CN102521810B (en) 2013-09-18

Family

ID=46292714

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201110421452 Expired - Fee Related CN102521810B (en) 2011-12-16 2011-12-16 Face super-resolution reconstruction method based on local constraint representation

Country Status (1)

Country Link
CN (1) CN102521810B (en)

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103034974B (en) * 2012-12-07 2015-12-23 武汉大学 The face image super-resolution reconstruction method of sparse coding is driven based on support set
CN103208109B (en) * 2013-04-25 2015-09-16 武汉大学 A kind of unreal structure method of face embedded based on local restriction iteration neighborhood
CN104574455B (en) * 2013-10-29 2017-11-24 华为技术有限公司 Image rebuilding method and device
CN103824272B (en) * 2014-03-03 2016-08-17 武汉大学 The face super-resolution reconstruction method heavily identified based on k nearest neighbor
CN104091320B (en) * 2014-07-16 2017-03-29 武汉大学 Based on the noise face super-resolution reconstruction method that data-driven local feature is changed
CN106558018B (en) * 2015-09-25 2019-08-06 北京大学 The unreal structure method and device of video human face that Component- Based Development decomposes
CN105405097A (en) * 2015-10-29 2016-03-16 武汉大学 Robustness human face super resolution processing method and system based on reverse manifold constraints
CN105550649B (en) * 2015-12-09 2019-03-08 武汉工程大学 Extremely low resolution ratio face identification method and system based on unity couping local constraint representation
CN105469359B (en) * 2015-12-09 2019-05-03 武汉工程大学 Face super-resolution reconstruction method based on local restriction low-rank representation
CN105787462B (en) * 2016-03-16 2019-05-03 武汉工程大学 Extremely low resolution ratio face identification method and system based on half coupling judgement property dictionary learning
CN106203269A (en) * 2016-06-29 2016-12-07 武汉大学 A kind of based on can the human face super-resolution processing method of deformation localized mass and system
CN106530231B (en) * 2016-11-09 2020-08-11 武汉工程大学 Super-resolution image reconstruction method and system based on deep cooperative expression
CN107292865B (en) * 2017-05-16 2021-01-26 哈尔滨医科大学 Three-dimensional display method based on two-dimensional image processing
CN107154023B (en) * 2017-05-17 2019-11-05 电子科技大学 Based on the face super-resolution reconstruction method for generating confrontation network and sub-pix convolution
CN107203967A (en) * 2017-05-25 2017-09-26 中国地质大学(武汉) A kind of face super-resolution reconstruction method based on context image block
CN107633483A (en) * 2017-09-18 2018-01-26 长安大学 The face image super-resolution method of illumination robustness
CN109117892B (en) * 2018-08-28 2021-07-27 国网福建省电力有限公司福州供电公司 Deep learning large-size picture training detection algorithm
CN113887371B (en) * 2021-09-26 2024-05-28 华南理工大学 Data enhancement method for low-resolution face recognition

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101635048A (en) * 2009-08-20 2010-01-27 上海交通大学 Super-resolution processing method of face image integrating global feature with local information
CN102024266A (en) * 2010-11-04 2011-04-20 西安电子科技大学 Image structure model-based compressed sensing image reconstruction method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4944639B2 (en) * 2007-03-01 2012-06-06 富士フイルム株式会社 Image processing apparatus, image processing method, and photographing apparatus

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101635048A (en) * 2009-08-20 2010-01-27 上海交通大学 Super-resolution processing method of face image integrating global feature with local information
CN102024266A (en) * 2010-11-04 2011-04-20 西安电子科技大学 Image structure model-based compressed sensing image reconstruction method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
JP Laid-Open No. 2008-219271 A, 2008.09.18
Xiang Ma et al., "Position-based face hallucination method," ICME 2009. *

Also Published As

Publication number Publication date
CN102521810A (en) 2012-06-27

Similar Documents

Publication Publication Date Title
CN102521810B (en) Face super-resolution reconstruction method based on local constraint representation
Zhang et al. Residual networks for light field image super-resolution
CN110443842B (en) Depth map prediction method based on visual angle fusion
Gan et al. Monocular depth estimation with affinity, vertical pooling, and label enhancement
CN111047548B (en) Attitude transformation data processing method and device, computer equipment and storage medium
CN103824272B (en) The face super-resolution reconstruction method heavily identified based on k nearest neighbor
Yan et al. Single image superresolution based on gradient profile sharpness
CN101976435B (en) Combination learning super-resolution method based on dual constraint
CN104156957B (en) Stable and high-efficiency high-resolution stereo matching method
CN105741252A (en) Sparse representation and dictionary learning-based video image layered reconstruction method
CN102693419B (en) Super-resolution face recognition method based on multi-manifold discrimination and analysis
CN111861880B (en) Image super-fusion method based on regional information enhancement and block self-attention
CN109801212A (en) A kind of fish eye images joining method based on SIFT feature
CN115205672A (en) Remote sensing building semantic segmentation method and system based on multi-scale regional attention
CN104504672A (en) NormLV feature based low-rank sparse neighborhood-embedding super-resolution method
CN104036482A (en) Facial image super-resolution method based on dictionary asymptotic updating
CN103034974A (en) Face image super-resolution reconstructing method based on support-set-driven sparse codes
Du et al. Srh-net: Stacked recurrent hourglass network for stereo matching
KR20150065302A (en) Method deciding 3-dimensional position of landsat imagery by Image Matching
CN103208109A (en) Local restriction iteration neighborhood embedding-based face hallucination method
Deng et al. Multiple frame splicing and degradation learning for hyperspectral imagery super-resolution
CN110335228B (en) Method, device and system for determining image parallax
Chen et al. Density-imbalance-eased lidar point cloud upsampling via feature consistency learning
Zhao et al. Multiple attention network for spartina alterniflora segmentation using multitemporal remote sensing images
Zhu et al. Stereoscopic image super-resolution with interactive memory learning

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20130918

Termination date: 20171216

CF01 Termination of patent right due to non-payment of annual fee