CN110490796A - Face super-resolution processing method and system fusing high- and low-frequency components - Google Patents

Face super-resolution processing method and system fusing high- and low-frequency components Download PDF

Info

Publication number
CN110490796A
CN110490796A (application CN201910290815.0A); granted publication CN110490796B
Authority
CN
China
Prior art keywords
resolution
low
image
library
block
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910290815.0A
Other languages
Chinese (zh)
Other versions
CN110490796B (en)
Inventor
陈亮
吴怡
吴庆祥
林贵敏
徐哲鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fujian Normal University
Original Assignee
Fujian Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fujian Normal University filed Critical Fujian Normal University
Priority to CN201910290815.0A priority Critical patent/CN110490796B/en
Publication of CN110490796A publication Critical patent/CN110490796A/en
Application granted granted Critical
Publication of CN110490796B publication Critical patent/CN110490796B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • G06T3/4046Scaling the whole image or part thereof using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • G06T3/4053Super resolution, i.e. output image resolution higher than sensor resolution

Abstract

The present invention discloses a face super-resolution processing method and system fusing high- and low-frequency components, comprising: S1: constructing a multi-scale training library; S2: dividing the low-resolution face image to be processed and the images in the training library into overlapping image blocks using the same partitioning scheme; S3: a preprocessing module pre-processes the high-resolution face image library and the severe low-resolution face image library respectively, preparing precomputed neighbour relationships for each library; S4: a low-frequency component determining module determines the low-frequency component of the low-resolution face image to be processed; S5: a high-frequency component determining module takes the output of the low-frequency component determining module as input, determines the reconstructed image blocks with a neural network, and thereby determines the high-frequency component of the low-resolution face image to be processed; S6: splicing the high-resolution face image blocks. The present invention markedly improves the visual quality of restored images, and is especially suitable for restoring face images captured in low-quality surveillance environments.

Description

Face super-resolution processing method and system fusing high- and low-frequency components
Technical field
The present invention relates to the field of image processing and image restoration, and in particular to a face super-resolution processing method and system fusing high- and low-frequency components.
Background technique
Face super-resolution technology learns the correspondence between high and low resolution from an auxiliary training library, and thereby estimates a high-resolution face image from a given low-resolution face image. Face super-resolution is now widely applied in many fields, one of the most representative being face image enhancement in surveillance video. With the widespread deployment of surveillance systems, surveillance video plays an increasingly important role in criminal evidence collection and investigation, and face images, as one form of direct evidence, occupy an important position in case analysis and courtroom evidence. However, owing to the limitations of the image acquisition environment, typified by the physical conditions of surveillance settings, the number of valid face pixels of a target suspect is low, which greatly increases the difficulty of recognition and image enhancement. It is therefore necessary to use face super-resolution to raise the effective size and effective resolution of the image, achieving the purpose of restoring a high-resolution image from a low-resolution one.
To achieve effective restoration of faces in surveillance video, improving the algorithm, even sacrificing efficiency in exchange for validity and ease of use, may be necessary (owing to face alignment and large image libraries). Besides introducing effective computational tools and forms of data utilisation, the problem can also be addressed by proposing new prior models and image models as constraints.
In recent years, manifold learning has become one of the classic approaches to face super-resolution. The core idea of such methods is: given the two sample spaces of high and low resolution, find the local geometric structure around each low-resolution image data point, then map the local manifold structure of the low-resolution images non-linearly into the manifold space of the high-resolution images and project it in the corresponding high-resolution space, thereby synthesising the high-resolution image. Representative methods include the following. In 2004, Chang et al. [1] first introduced manifold learning into image super-resolution reconstruction, proposing a neighbour-embedding method. Sung Won Park [2] proposed an adaptive manifold learning method based on locality preserving projections, analysing the intrinsic features of the face from local sub-manifolds to reconstruct the high-frequency components missing from the low-resolution image. In 2010, Huang [4] proposed a method based on canonical correlation analysis (CCA), extending the principal component analysis (PCA) space to the CCA space to further address the problem. In the same year, Lan [5], targeting the severe pixel damage caused by heavy blur and noise in surveillance environments, proposed a shape-constrained face super-resolution method that adds shape constraints as a similarity-measurement criterion within the traditional PCA framework, exploiting the robustness of human shape recognition to interference by manually adding feature points as constraints, thereby optimising the reconstruction of low-quality images. In summary, most existing methods follow the traditional technical idea of using the high-frequency detail remaining in the image to be processed as the distance-measurement criterion for local relations, ignoring the fact that low-quality images also preserve relatively complete mid- and low-frequency information. Consequently, although good results can be obtained when reconstructing low-quality images in ordinary environments, when facing severely noisy images typified by surveillance footage, the damage to high-frequency detail means that a measurement criterion mainly based on high-frequency detail is no longer accurate; the precision of the local-relation description is seriously affected, the subspace information of the image itself is easily damaged, and the images recovered by traditional methods are therefore unsatisfactory.
References cited in the text:
[1] H. Chang, D. Y. Yeung, and Y. Xiong, "Super-resolution through neighbor embedding," in Proc. IEEE Conf. Comput. Vis. Pattern Recog., Jul. 2004, pp. 275–282.
[2] Sung Won Park and M. Savvides, "Breaking the limitation of manifold analysis for super-resolution of facial images," in Proc. ICASSP, 2007, pp. 573–576.
[3] Xiaogang Wang and Xiaoou Tang, "Hallucinating face by eigentransformation," IEEE Trans. Systems, Man, and Cybernetics, Part C: Applications and Reviews, vol. 35, no. 3, pp. 425–434, 2005.
[4] Hua Huang, Huiting He, Xin Fan, and Junping Zhang, "Super-resolution of human face image using canonical correlation analysis," Pattern Recognition, vol. 43, no. 7, pp. 2532–2543, 2010.
[5] C. Lan, R. Hu, and Z. Han, "A face super-resolution approach using shape semantic mode regularization," in Proc. IEEE Int. Conf. Image Processing (ICIP), pp. 2021–2024, 26–29 Sept. 2010.
Summary of the invention
In view of the problems of the prior art, the present invention provides a face super-resolution processing method and system fusing high- and low-frequency components, particularly suitable for restoring face images in low-quality surveillance video.
To solve the above technical problem, one object of the present invention adopts the following technical scheme:
A face super-resolution processing method fusing high- and low-frequency components, characterised by comprising the following steps:
S1: constructing a training library T comprising a high-resolution face image library T_c, its corresponding moderate low-resolution face image library T_b, and a severe low-resolution face image library T_a; and constructing a high-resolution general image library based on general images together with a corresponding low-resolution general image library;
S2: dividing the moderate and severe low-resolution face images to be processed and the images in the training library into overlapping image blocks using the same partitioning scheme, each image block being a square block of side psize; the severe low-resolution face image to be processed is obtained from the moderate one by down-sampling, with the same down-sampling factor as that used between the corresponding training libraries;
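For reference, the overlapping partition of S2 can be sketched as follows (a minimal sketch assuming the image dimensions fit the block grid; the helper name, the overlap parameter and the row-major traversal are assumptions not fixed by the patent):

```python
import numpy as np

def partition_blocks(img, psize, overlap):
    """Split a 2-D image into square psize x psize blocks whose
    neighbours share `overlap` pixels, recording each block's
    top-left position so the same grid can be reused for stitching."""
    step = psize - overlap
    h, w = img.shape
    blocks, positions = [], []
    for y in range(0, h - psize + 1, step):
        for x in range(0, w - psize + 1, step):
            blocks.append(img[y:y + psize, x:x + psize])
            positions.append((y, x))
    return np.array(blocks), positions

img = np.arange(64, dtype=float).reshape(8, 8)
blocks, pos = partition_blocks(img, psize=4, overlap=2)
```

Because every image in the training library is partitioned with the same grid, a block index identifies the same face region across all libraries.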
S3: on the basis of the blocking, pre-processing the high-resolution face image library and the low-resolution face image library respectively; the detailed process is:
For each image block in the high-resolution face image library, find its K nearest image blocks among the blocks at the same position in the other images of the high-resolution face image library; these are defined as the precomputed neighbours of that block, and the labels of the precomputed neighbours are stored for each block;
For each image block in the low-resolution face image library, find its K nearest image blocks among the blocks at the same position in the other images of the low-resolution face image library; these are defined as the precomputed neighbours of that block, and the labels of the precomputed neighbours are stored for each block;
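The neighbour precomputation above can be sketched per block position as follows (Euclidean distance and the exclusion of a block's own image are assumptions; the patent says only "K nearest image blocks" at the same position):

```python
import numpy as np

def precompute_neighbours(lib_blocks, K):
    """lib_blocks: (num_images, psize*psize) vectorised blocks for ONE
    block position across the whole library.  For each image's block,
    return the labels of its K nearest blocks at that position taken
    from the OTHER images of the library."""
    d = ((lib_blocks[:, None, :] - lib_blocks[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d, np.inf)          # a block is not its own neighbour
    return np.argsort(d, axis=1)[:, :K]  # stored neighbour labels

rng = np.random.default_rng(0)
lib = rng.standard_normal((10, 16))      # 10 library images, 4x4 blocks
nbr = precompute_neighbours(lib, K=3)
```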
S4: for each block of the low-resolution face image to be processed, first seek its low-frequency component based on the low-resolution face training library and the high-resolution face training library; the detailed process comprises the following sub-steps:
S4.1: for each block of the low-resolution face image to be processed, search for its nearest-neighbour block in the low-resolution training block set at the corresponding position, called the direct anchor neighbour block;
S4.2: using the correspondence, find from the low-resolution training block set and the high-resolution training block set the index set of the block sets corresponding to the direct anchor neighbour, labelled Nin, i.e. the low-frequency neighbour index set of the input low-resolution face image block to be processed;
S4.3: according to the index set Nin, take out the neighbour block sets from the severe low-resolution face training block set and the moderate low-resolution face training block set respectively, denoted T_a^{i,j} and T_b^{i,j}, where i is the label of the block and j ranges over j ∈ Nin;
S4.4: according to the two neighbour sets T_a^{i,j} and T_b^{i,j} and the moderate and severe low-resolution face image blocks to be processed, find the best neighbour coefficients w. This sub-step is specifically:
Find the best neighbour coefficients w such that the moderate and severe reconstruction errors jointly reach a minimum; the w so constrained is optimal. Let L_a^i and L_b^i denote the i-th block of the severe and of the moderate low-resolution image to be processed respectively, and let A_a^i and A_b^i be the matrices whose columns are the vectorised neighbour blocks T_a^{i,j} and T_b^{i,j}, j ∈ Nin. Expressed as a formula (reconstructed here from the surrounding definitions), the objective is
J(w) = ||L_a^i − A_a^i w||^2 + ||L_b^i − A_b^i w||^2 + λ||D w||^2
where D is a diagonal matrix whose assignment is the product of the identity matrix and a real number S, S being a real value set empirically, and λ is a real value, also set empirically. Differentiating J with respect to w and setting the derivative to zero:
∂J/∂w = −2(A_a^i)^T(L_a^i − A_a^i w) − 2(A_b^i)^T(L_b^i − A_b^i w) + 2λD^T D w = 0
which yields:
w = [(A_a^i)^T A_a^i + (A_b^i)^T A_b^i + λD^T D]^{−1} [(A_a^i)^T L_a^i + (A_b^i)^T L_b^i]
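With D = S·I as stated, the closed-form solve for the best neighbour coefficients can be sketched as follows (the helper name and the exact placement of λ and S are reconstructions, not the patent's code):

```python
import numpy as np

def best_neighbour_coeffs(Aa, Ab, xa, xb, lam, S):
    """Minimise ||xa - Aa w||^2 + ||xb - Ab w||^2 + lam * ||S*I w||^2,
    the joint severe/moderate reconstruction error of S4.4.
    Aa, Ab: (p, K) matrices of vectorised neighbour blocks;
    xa, xb: (p,) severe and moderate input block vectors."""
    K = Aa.shape[1]
    G = Aa.T @ Aa + Ab.T @ Ab + lam * (S ** 2) * np.eye(K)
    b = Aa.T @ xa + Ab.T @ xb
    return np.linalg.solve(G, b)

rng = np.random.default_rng(1)
Aa = rng.standard_normal((16, 5))
Ab = rng.standard_normal((16, 5))
xa = rng.standard_normal(16)
xb = rng.standard_normal(16)
w = best_neighbour_coeffs(Aa, Ab, xa, xb, lam=0.1, S=1.0)
```

Since the objective is convex in w, the solution of this linear system is the global minimiser.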
S4.5: according to the best neighbour coefficients w, find the reconstructed high-resolution image block Y_c^i as an intermediate result by the following formula:
Y_c^i = A_c^i w
where A_c^i denotes the neighbour block set taken out of the high-resolution training block set according to the index set Nin, with j ∈ Nin;
S4.6: from the reconstructed intermediate image block Y_c^i, the low-frequency information X_l^i can be found as
X_l^i = ds(Y_c^i)
where the operator ds(·) represents a down-sampling process, obtained empirically; here the detailed process is Gaussian filtering with specific parameters;
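A minimal sketch of S4.5–S4.6, using average pooling as a crude stand-in for the patent's empirically tuned Gaussian-filter down-sampling ds(·) (function names and the pooling choice are assumptions):

```python
import numpy as np

def low_frequency_component(Ac, w, factor=2):
    """Rebuild the intermediate high-resolution block Yc = Ac @ w from
    the high-resolution neighbour blocks, then take its low-frequency
    part by average-pool down-sampling (stand-in for ds())."""
    p = int(round(np.sqrt(Ac.shape[0])))
    Yc = (Ac @ w).reshape(p, p)
    q = p // factor
    Xl = Yc[:q * factor, :q * factor].reshape(q, factor, q, factor).mean(axis=(1, 3))
    return Yc, Xl

Ac = np.ones((16, 3))               # three 4x4 neighbour blocks, vectorised
w = np.array([0.2, 0.3, 0.5])       # neighbour coefficients summing to 1
Yc, Xl = low_frequency_component(Ac, w, factor=2)
```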
S5: for the obtained low-frequency information X_l of the low-resolution face image to be processed, seek the high-frequency component therein based on the low-resolution general image library and the high-resolution general image library; the detailed process comprises the following sub-steps:
S5.1: build a three-layer convolutional neural network; the detailed process is:
First layer of the CNN: feature extraction from the input image, using the properties of the convolutional network to extract the features of the image blocks. In formula form:
F1(Y)=max (0, W1*Y+B1)
where Y denotes the input training sample block, F1(·) denotes the processing operation of the first layer and F1(Y) its result; W1 denotes n1 convolution kernels in the form of three-dimensional matrices, n1 being the number of convolutions; B1 is an n1-dimensional feature vector; n1 is an empirical value.
Second layer of the CNN: non-linear mapping, mapping the features extracted by the first layer onto another feature dimension through non-linear convolution kernels in the form of three-dimensional matrices. In formula form:
F2(Y)=max (0, W2*F1(Y)+B2)
where F2(·) denotes the processing operation of the second layer and F2(Y) its result; W2 denotes n2 three-dimensional filters; B2 is an n2-dimensional feature vector; n2 is an empirical value;
Third layer of the CNN: a final reconstruction with convolution kernels in the form of three-dimensional matrices, rebuilding the mapped features to generate the high-resolution image. In formula form:
F (Y)=W3*F2(Y)+B3
where F(·) denotes the processing operation of the third layer and F(Y) its result, whose physical meaning is the high-resolution image pixel matrix; W3 denotes c three-dimensional filters; B3 is a c-dimensional feature vector;
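The three-layer network of S5.1 can be sketched as a plain forward pass (kernel sizes and channel counts below are assumptions in the spirit of the classic SRCNN design; the patent fixes only the layer roles and the ReLU/linear activations):

```python
import numpy as np

def conv2d(x, kernels, bias):
    """'Valid' convolution: x (H, W, Cin), kernels (k, k, Cin, Cout)."""
    k = kernels.shape[0]
    H, W = x.shape[0] - k + 1, x.shape[1] - k + 1
    out = np.empty((H, W, kernels.shape[3]))
    for i in range(H):
        for j in range(W):
            out[i, j, :] = np.tensordot(x[i:i + k, j:j + k, :],
                                        kernels, axes=3) + bias
    return out

def srcnn_forward(Y, params):
    """Feature extraction (ReLU), non-linear mapping (ReLU),
    reconstruction (linear), as in S5.1."""
    W1, B1, W2, B2, W3, B3 = params
    F1 = np.maximum(0.0, conv2d(Y, W1, B1))   # first layer
    F2 = np.maximum(0.0, conv2d(F1, W2, B2))  # second layer
    return conv2d(F2, W3, B3)                  # third layer

rng = np.random.default_rng(0)
Y = rng.standard_normal((16, 16, 1))           # low-frequency input block
params = (rng.standard_normal((5, 5, 1, 4)) * 0.1, np.zeros(4),
          rng.standard_normal((1, 1, 4, 4)) * 0.1, np.zeros(4),
          rng.standard_normal((3, 3, 4, 1)) * 0.1, np.zeros(1))
out = srcnn_forward(Y, params)
```

Valid convolutions shrink the output (here 16×16 → 10×10); a trained implementation would pad or crop accordingly.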
S5.2: train the neural network parameters using the low-resolution general image library and the high-resolution general image library, i.e. use them to obtain the parameters required by the three-layer network: W1, W2, W3, B1, B2, B3;
S5.3: feed the trained parameters, the network architecture, and the low-frequency information X_l of the low-resolution face image to be processed recovered in step S4 together as input, and predict the result Y_f as the super-resolution face block.
S6: splicing the high-resolution face image blocks Y_f to obtain the high-resolution face image.
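The splicing of S6 can be sketched with overlap averaging (the averaging rule is an assumption; the patent specifies only splicing by position):

```python
import numpy as np

def stitch_blocks(blocks, positions, shape, psize):
    """Paste reconstructed blocks back at their grid positions and
    average the pixels where neighbouring blocks overlap."""
    acc = np.zeros(shape)
    cnt = np.zeros(shape)
    for blk, (y, x) in zip(blocks, positions):
        acc[y:y + psize, x:x + psize] += blk
        cnt[y:y + psize, x:x + psize] += 1
    return acc / np.maximum(cnt, 1)

res = stitch_blocks([np.ones((2, 2))] * 2, [(0, 0), (0, 1)], (2, 3), 2)
```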
2. The face super-resolution processing method fusing high- and low-frequency components according to claim 1, characterised in that step S1 is specifically:
aligning the positions of the high-resolution face images in the high-resolution face image library and applying degradation processing to obtain the corresponding moderate low-resolution face image library;
aligning the positions of the images in the moderate low-resolution face image library and applying degradation processing to obtain the corresponding severe low-resolution face image library;
the high-resolution face image library, the moderate low-resolution face image library and the severe low-resolution face image library together constituting the face training library; applying degradation processing to the high-resolution general images in the high-resolution general image library to obtain the corresponding low-resolution general image library, the high-resolution general image library and the low-resolution general image library constituting the general image training library;
meanwhile, before step S2, making the low-resolution face image to be processed identical in size to the images in the face training library and aligned in position.
3. The face super-resolution processing method fusing high- and low-frequency components according to claim 2, characterised in that the position alignment is carried out using an affine transformation method.
4. The face super-resolution processing method fusing high- and low-frequency components according to claim 1, characterised in that step S3 is specifically: on the basis of the blocking, pre-processing the high-resolution face image library and the low-resolution face image library respectively, the detailed process being:
for each image block in the high-resolution face image library, finding its K nearest image blocks among the blocks at the same position in the other images of the high-resolution face image library, defining them as the precomputed neighbours of that block, and storing the labels of the precomputed neighbours for each block;
for each image block in the low-resolution face image library, finding its K nearest image blocks among the blocks at the same position in the other images of the low-resolution face image library, defining them as the precomputed neighbours of that block, and storing the labels of the precomputed neighbours for each block.
5. The face super-resolution processing method fusing high- and low-frequency components according to claim 1, characterised in that:
step S4 is specifically:
for each block of the low-resolution face image to be processed, first seeking its low-frequency component based on the severe low-resolution training library and the high-resolution training library, the detailed process comprising the following sub-steps:
S4.1': for each block of the low-resolution face image to be processed, searching for its nearest-neighbour block in the severe low-resolution training block set at the corresponding position, called the direct anchor neighbour block;
S4.2': using the correspondence, finding from the severe low-resolution training block set and the high-resolution training block set the index set of the block sets corresponding to the direct anchor neighbour, labelled Nin, i.e. the low-frequency neighbour index set of the input low-resolution face image block to be processed;
S4.3': according to the index set Nin, taking out the neighbour block sets from the severe low-resolution face training block set and the moderate low-resolution face training block set respectively, denoted T_a^{i,j} and T_b^{i,j}, where i is the label of the block and j ranges over j ∈ Nin;
S4.4': according to the two neighbour sets T_a^{i,j} and T_b^{i,j} and the low-resolution face image blocks to be processed, finding the best neighbour coefficients w. This sub-step is specifically:
finding the best neighbour coefficients w such that the moderate and severe reconstruction errors jointly reach a minimum; the w so constrained is optimal. Let L_a^i and L_b^i denote the i-th block of the severe and of the moderate low-resolution image to be processed respectively, and let A_a^i and A_b^i be the matrices whose columns are the vectorised neighbour blocks T_a^{i,j} and T_b^{i,j}, j ∈ Nin. Expressed as a formula (reconstructed here from the surrounding definitions), the objective is
J(w) = ||L_a^i − A_a^i w||^2 + ||L_b^i − A_b^i w||^2 + λ||D w||^2
where D is a diagonal matrix whose assignment is the product of the identity matrix and a real number S, S being a real value set empirically, and λ is a real value, also set empirically. Differentiating J with respect to w and setting the derivative to zero:
∂J/∂w = −2(A_a^i)^T(L_a^i − A_a^i w) − 2(A_b^i)^T(L_b^i − A_b^i w) + 2λD^T D w = 0
which yields:
w = [(A_a^i)^T A_a^i + (A_b^i)^T A_b^i + λD^T D]^{−1} [(A_a^i)^T L_a^i + (A_b^i)^T L_b^i]
S4.5': according to the best neighbour coefficients w, finding the reconstructed high-resolution image block Y_c^i as an intermediate result by the formula
Y_c^i = A_c^i w
where A_c^i denotes the neighbour block set taken out of the high-resolution training block set according to the index set Nin, with j ∈ Nin;
S4.6': from the reconstructed intermediate image block Y_c^i, the low-frequency information X_l^i can be found as
X_l^i = ds(Y_c^i)
where the operator ds(·) represents a down-sampling process, obtained empirically; here the detailed process is 3× down-sampling with 4×4 blur-window filtering.
6. The face super-resolution processing method fusing high- and low-frequency components according to claim 1, characterised in that:
step S5 is specifically:
for the obtained low-frequency information X_l of the low-resolution face image to be processed, seeking the high-frequency component therein based on the general image training library, the detailed process comprising the following sub-steps:
S5.1': building a three-layer convolutional neural network, the detailed process being:
first layer of the CNN: feature extraction from the input image, using the properties of the convolutional network to extract the features of the image blocks, in formula form:
F1(Y)=max (0, W1*Y+B1)
where Y denotes the input training sample block, F1(·) denotes the processing operation of the first layer and F1(Y) its result; W1 denotes n1 convolution kernels in the form of three-dimensional matrices, n1 being the number of convolutions; B1 is an n1-dimensional feature vector; n1 is an empirical value;
second layer of the CNN: non-linear mapping, mapping the features extracted by the first layer onto another feature dimension through non-linear convolution kernels in the form of three-dimensional matrices, in formula form:
F2(Y)=max (0, W2*F1(Y)+B2)
where F2(·) denotes the processing operation of the second layer and F2(Y) its result; W2 denotes n2 three-dimensional filters; B2 is an n2-dimensional feature vector; n2 is an empirical value;
third layer of the CNN: a final reconstruction with convolution kernels in the form of three-dimensional matrices, rebuilding the mapped features to generate the high-resolution image, in formula form:
F (Y)=W3*F2(Y)+B3
where F(·) denotes the processing operation of the third layer and F(Y) its result, whose physical meaning is the high-resolution image pixel matrix; W3 denotes c three-dimensional filters; B3 is a c-dimensional feature vector;
S5.2': training the neural network parameters using the general image training library, i.e. using it to obtain the parameters required by the three-layer network: W1, W2, W3, B1, B2, B3;
S5.3': feeding the trained parameters, the network architecture, and the low-frequency information X_l of the low-resolution face image to be processed recovered in step S4 together as input, and predicting the result Y_f as the super-resolution face block.
7. A face super-resolution processing system fusing high- and low-frequency components, characterised by comprising:
a training-library construction module, for constructing a training library comprising a high-resolution face image library and its corresponding low-resolution face image libraries, together with a general image training library;
a blocking module, for dividing the low-resolution face image to be processed and the images in the training library into overlapping image blocks using the same partitioning scheme, each image block being a square block of side psize;
a preprocessing module, for pre-processing the high-resolution face image library and the low-resolution face image library respectively on the basis of the blocking, preparing precomputed neighbour relationships for each library;
a low-frequency component determining module, for determining the low-frequency component of the low-resolution face image to be processed, the detailed process being to generate a complete result and then seek its low-frequency component by an artificial degradation process; it comprises the following sub-modules: a neighbour determining sub-module, which obtains the neighbour block sets according to the index set; a neighbour-coefficient solving sub-module, which obtains the optimal neighbour weight coefficients from the known data; an image reconstruction sub-module, which obtains the reconstructed image blocks from the neighbour sets and weight coefficients; and a down-sampling sub-module, which obtains the low-frequency image component by artificially down-sampling the reconstructed image blocks;
a high-frequency component determining module, for determining the high-frequency component of the low-resolution face image to be processed, the detailed process being to take the result of the low-frequency component determining module as input and determine the reconstructed image blocks with a neural network; it comprises the following sub-modules: a network construction sub-module, which builds the network architecture; a network-parameter training sub-module, which trains the network parameters on the general image training library; and a reconstruction-result prediction sub-module, which takes the reconstructed low-frequency component as input and reconstructs the final result image blocks according to the parameters and architecture;
a splicing module, for splicing the high-resolution face image blocks according to their positions i to obtain the high-resolution face image.
Compared to the prior art, the present invention has the advantages that:
At present, methods based on deep neural networks can obtain good results on the image super-resolution problem. However, deep-neural-network methods need at least thousands of content-relevant samples to complete training, and cannot meet the practical scenario in which only a few hundred, or a few dozen, relevant samples are available. Considering that the relatively complete mid- and low-frequency information preserved in low-quality images can guide reconstruction, that this guidance becomes more robust than higher-frequency information as degradation worsens, and that traditional machine-learning super-resolution methods can obtain good mid- and low-frequency information from small sample sets alone, this method divides the image super-resolution problem into two concatenated processes: high-frequency detail generation based on a deep neural network, and mid/low-frequency detail generation based on machine learning. The mid/low-frequency components of the result images generated by the latter are fed as input to the former's high-frequency detail generation. This approach fully combines the advantage of the former in generating high-frequency detail with the small-sample advantage of the latter in mid/low-frequency detail; compared with the conventional one-step super-resolution approach, it achieves higher robustness and a performance advantage on low-quality images. A robust super-resolution result can be obtained with only sample face images and general-image training samples. By applying a concatenated high/low-frequency reconstruction to the image to be processed, the method solves, with a small-sample face library, the lack of authenticity in restoring face images from low-quality environments, and markedly improves the subjective visual quality of the restored image.
The present invention has universality and can obtain good restoration results for general low-quality face images; the effect is especially marked for restoring face images in low-quality surveillance environments.
Detailed description of the invention
Fig. 1 is a flow diagram of an embodiment of the present invention;
Fig. 2 is a schematic diagram of the position-based blocking of a face image in an embodiment of the present invention.
Specific embodiment
The present invention uses a frequency-band layering of the image to complete a deep-learning-based face image super-resolution method relying only on a small-sample image library. The specific practice is to apply a machine-learning method to the small-sample face library to generate the degradation-robust low-frequency image component; based on the content of the low-frequency component, a convolutional neural network, which is more sensitive in generating high-frequency detail, generates the corresponding high-frequency content, and the two are fused as the final result, thereby improving the objective quality and similarity of the restoration.
The present invention is further described below with reference to specific embodiments and the accompanying drawings.
In specific implementation, the technical scheme of the present invention can be realised as an automatic process using computer software technology.
Referring to Fig. 1, the specific steps of the present invention are as follows:
S1: constructing a training library comprising a high-resolution face image library and its corresponding low-resolution face image libraries, and constructing a high-resolution general image library based on general images together with a corresponding low-resolution general image library.
In specific implementation, first, the eyes and mouths of the high-resolution face images are aligned in position to obtain the library, denoted T_c; then down-sampling, blur-window filtering and up-sampling are applied in turn to the high-resolution face images, obtaining the moderate low-resolution face image library T_b corresponding to the high-resolution face images.
Then, down-sampling, blur-window filtering and up-sampling are applied in turn to the moderate low-resolution face images, obtaining the severe low-resolution face image library T_a corresponding to the high-resolution face images.
The high-resolution general images in the high-resolution general image library are degraded (down-sampling, blur-window filtering, up-sampling) to obtain the corresponding low-resolution general image library; the high-resolution general image library and the low-resolution general image library constitute the general image training library.
For convenience of implementation and reference, the detailed process of face image alignment using the affine transformation method is given below:
Characteristic point mark is carried out to high-resolution human face image, characteristic point is face marginal point, such as canthus, nose, mouth Angle etc.;Then, using affine transformation method alignment feature point.
The affine transformation method is as follows:
All face images in the high-resolution face image library Tc are summed and divided by the number of samples to obtain the average face. Let (x'i, y'i) be the coordinates of the i-th feature point on the average face, and (xi, yi) the coordinates of the corresponding i-th feature point in the high-resolution face image to be aligned. Let the affine matrix be M = [[a, b, c], [d, e, f], [0, 0, 1]], where a, b, c, d, e, f are the affine transformation coefficients; then (x'i, y'i, 1)ᵀ = M·(xi, yi, 1)ᵀ expresses the relation between the i-th feature point coordinates (x'i, y'i) of the average face and (xi, yi) of the high-resolution face image to be aligned. The affine transformation matrix M is solved by the direct linear transformation method. Multiplying all coordinate points of the high-resolution face image to be aligned by the affine matrix M yields the coordinates of the aligned high-resolution face image.
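The direct linear transformation solve for the affine coefficients a..f can be sketched as an ordinary least-squares problem. A minimal numpy illustration follows; the function name `solve_affine` and the use of `np.linalg.lstsq` are implementation choices, not prescribed by the text.

```python
import numpy as np

def solve_affine(src_pts, dst_pts):
    """Least-squares affine transform mapping src_pts -> dst_pts.

    src_pts, dst_pts: (N, 2) arrays of corresponding feature points
    (e.g. eye corners, nose tip, mouth corners on the face to align
    and on the average face). Returns the 3x3 affine matrix M.
    """
    n = src_pts.shape[0]
    # Each correspondence (xi, yi) -> (x'i, y'i) gives two linear
    # equations in the six unknowns (a, b, c, d, e, f).
    A = np.zeros((2 * n, 6))
    b = np.zeros(2 * n)
    A[0::2, 0:2] = src_pts
    A[0::2, 2] = 1.0
    A[1::2, 3:5] = src_pts
    A[1::2, 5] = 1.0
    b[0::2] = dst_pts[:, 0]
    b[1::2] = dst_pts[:, 1]
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.vstack([coef.reshape(2, 3), [0.0, 0.0, 1.0]])
```

With three or more non-collinear feature points the system is overdetermined and the least-squares fit recovers the affine coefficients.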
The aligned high-resolution face images are then degraded. For example, each high-resolution face image is down-sampled by a factor of 4, filtered with a 3*3 blur window, and up-sampled by a factor of 4, yielding the moderate low-resolution face image corresponding to the high-resolution face image, and thus the moderate low-resolution face image library Tb. Each moderate low-resolution face image is in turn down-sampled by a factor of 4, filtered with a 3*3 blur window, and up-sampled by a factor of 4, yielding the severe low-resolution face image corresponding to the high-resolution face image, and thus the severe low-resolution face image library Ta.
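One degradation pass (down-sample, 3*3 blur window, up-sample) can be sketched in plain numpy. This is an illustrative sketch only: the patent does not fix the resampling kernel, so nearest-neighbour resampling and a box blur are assumed here.

```python
import numpy as np

def blur3x3(img):
    # 3*3 box ("blur window") filter with edge padding.
    p = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(3):
        for dx in range(3):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / 9.0

def degrade(img, factor=4):
    """One degradation pass: down-sample, 3x3 blur, up-sample back."""
    low = img[::factor, ::factor]              # down-sample by `factor`
    low = blur3x3(low)                         # blur-window filtering
    return np.kron(low, np.ones((factor, factor)))  # up-sample by `factor`
```

Applying `degrade` once to each high-resolution image would give the moderate library Tb, and applying it again to Tb would give the severe library Ta.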
The high-resolution face image library Tc and the low-resolution face image library Tb correspond image by image, forming high/low-resolution face image pairs; together they constitute the face training library.
The low-resolution face image to be processed is kept at the same image size as the images in the training library, and aligned in position.
The present invention processes a low-resolution face image xinput to be processed and estimates its corresponding high-resolution face image; the estimated high-resolution face image is denoted youtput.
The low-resolution face image xinput to be processed is usually acquired in a noisy, severe environment. As input, it generally undergoes preprocessing, including cropping out a face region that meets the uniform specification; that is, xinput is up-sampled so that its size matches the face images in the training library. Feature points are marked on xinput, and finally the affine transformation method described in step S1 is used to align xinput with the average face. In this way, the face images in the training library and xinput are at the same size and the same eyebrow height. If the lighting was insufficient when xinput was acquired, automatic brightness and contrast adjustment can be applied to the aligned image so that it reaches a brightness level similar to that of the low-resolution face images in the training library.
S2: Using the same partitioning scheme, divide the moderate and severe low-resolution face images to be processed and the images in the training library into image blocks with overlapping parts; each image block is a square block of side length psize.
In this step, each image in the training library is divided into N square image blocks, and the low-resolution face image xinput to be processed is likewise divided into N image blocks. Each face image is represented by its set of image blocks; the high-resolution face image youtput to be estimated will be restored from the image blocks of xinput. The i-th blocks of xinput, youtput, the j-th low-resolution face image Tbj in the training library, and the j-th high-resolution face image Tcj are denoted xinput^i, youtput^i, Tbj^i, and Tcj^i respectively, where i is the block number.
Referring to Fig. 2, the main rationale for dividing the face image into blocks is the idea of local manifolds: face images are a special class of images with specific structural meaning. For instance, all small blocks at a given position are eyes, or all blocks at another position are noses; in other words, the local blocks at each position of the image lie on a specific local geometric manifold. To preserve this local manifold, the image must be divided into several square blocks. The block size must be chosen appropriately: if the blocks are too large, small alignment errors cause ghosting; if they are too small, the positional character of each block is blurred and diluted. The width of the overlap between blocks must also be chosen. If the image were simply divided into non-overlapping square tiles, incompatibility between adjacent blocks would produce grid artifacts. Moreover, since a face image is not always square, the overlap width should be chosen so that the image is partitioned as fully as possible.
For example, the image block size is denoted psize × psize and the overlap width between adjacent image blocks is denoted d. Expressing a block position as (j, k), we then have
1 ≤ j ≤ (height − d) / (psize − d), 1 ≤ k ≤ (width − d) / (psize − d),
where height and width are respectively the height and width of the face image. In the embodiment, psize is 12 and d is 8.
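The overlapping blocking scheme can be sketched as follows, using the embodiment's psize=12 and d=8 (so the step between adjacent blocks is psize − d = 4 pixels). The function name `to_blocks` and the dict return type are illustrative choices.

```python
import numpy as np

def to_blocks(img, psize=12, d=8):
    """Split an image into overlapping psize x psize square blocks.

    d is the overlap width between adjacent blocks, so the step is
    psize - d. Returns a dict mapping block position (j, k) -> block.
    """
    step = psize - d
    h, w = img.shape
    blocks = {}
    for j, top in enumerate(range(0, h - psize + 1, step)):
        for k, left in enumerate(range(0, w - psize + 1, step)):
            blocks[(j, k)] = img[top:top + psize, left:left + psize]
    return blocks
```

For a 112*96 face image this yields 26 × 22 = 572 blocks, matching the block-position range given above.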
S3: On the basis of the partitioning, preprocess the high-resolution face image library and the low-resolution face image library respectively. The detailed process is:
For each image block in the high-resolution face image library, find its K nearest image blocks among the blocks at the same position in the other images of the high-resolution face image library; these are defined as the preprocessing neighbours of the block, and the labels of the preprocessing neighbours are stored for each block.
For each image block in the low-resolution face image library, find its K nearest image blocks among the blocks at the same position in the other images of the low-resolution face image library; these are defined as the preprocessing neighbours of the block, and the labels of the preprocessing neighbours are stored for each block.
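The neighbour precomputation above can be sketched in numpy, assuming each library is stored as an array of flattened blocks of shape (n_images, n_positions, psize*psize); the value K=5, the Euclidean distance, and the function name `precompute_neighbours` are assumptions for illustration.

```python
import numpy as np

def precompute_neighbours(library_blocks, K=5):
    """For each block, store the labels (image indices) of its K nearest
    blocks taken from the same position in the other library images.

    library_blocks: (n_images, n_positions, psize*psize) array.
    Returns an (n_images, n_positions, K) int array of labels.
    """
    n_img, n_pos, _ = library_blocks.shape
    nn = np.zeros((n_img, n_pos, K), dtype=int)
    for pos in range(n_pos):
        blocks = library_blocks[:, pos, :]      # all blocks at this position
        # pairwise squared distances between the library images here
        d2 = ((blocks[:, None, :] - blocks[None, :, :]) ** 2).sum(-1)
        np.fill_diagonal(d2, np.inf)            # exclude the block itself
        nn[:, pos, :] = np.argsort(d2, axis=1)[:, :K]
    return nn
```

Running this once per library (high-resolution and low-resolution) stores the preprocessing-neighbour labels for every block position.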
S4: For each block of the low-resolution face image to be processed, first seek the low-frequency component based on the low-resolution face training library and the high-resolution face training library. The detailed process is:
Sub-step 1: For each block of the low-resolution face image to be processed, search for its nearest-neighbour block in the low-resolution training block set at the corresponding position; this is called the direct anchor neighbour block.
Sub-step 2: Using the correspondence relation, find the index set of the block sets corresponding to the direct anchor neighbour in the low-resolution training block set and the high-resolution training block set respectively; this set is labelled Nin and is called the low-frequency neighbour index set of the input low-resolution face image block to be processed.
Sub-step 3: According to the index set Nin, take out the neighbour block sets from the moderate low-resolution face training block set and the severe low-resolution face training block set respectively, denoted Nb^i (moderate) and Na^i (severe), where i denotes the block index and j ranges over j ∈ Nin.
Sub-step 4: From the two neighbour sets Na^i and Nb^i and the low-resolution face image block to be processed, find the best neighbour coefficients w. This sub-step is specifically:
The best neighbour coefficients w are found so that the moderate and severe reconstruction errors jointly reach a minimum; the constrained optimum is expressed by the formula
J(w) = ||xa^i − Na^i·w||² + ||xb^i − Nb^i·w||² + λ||D·w||²,
where xa^i and xb^i denote respectively the i-th block of the severe and of the moderate low-resolution image to be processed, D is a diagonal matrix assigned as the product of the identity matrix and a real number S (S is a real value set empirically), and λ is a real value also set empirically. Taking the derivative of J with respect to w and setting it to zero, i.e. ∂J/∂w = 0, yields
w = (Na^iᵀNa^i + Nb^iᵀNb^i + λDᵀD)⁻¹ (Na^iᵀxa^i + Nb^iᵀxb^i).
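The closed-form solution of this sub-step can be sketched as follows, assuming flattened blocks and neighbour matrices with the neighbour blocks as columns. The function names and the sample values of S and lam are illustrative assumptions.

```python
import numpy as np

def best_neighbour_coeff(x_a, x_b, Na, Nb, S=1.0, lam=0.1):
    """Closed-form minimiser of
        J(w) = ||x_a - Na w||^2 + ||x_b - Nb w||^2 + lam ||D w||^2,
    with D = S * I (S and lam are empirically chosen real values).

    x_a, x_b : flattened severe / moderate input blocks, shape (p,)
    Na, Nb   : neighbour blocks as columns, shape (p, K)
    """
    K = Na.shape[1]
    D = S * np.eye(K)
    A = Na.T @ Na + Nb.T @ Nb + lam * (D.T @ D)
    b = Na.T @ x_a + Nb.T @ x_b
    return np.linalg.solve(A, b)

def reconstruct(Nc, w):
    # Intermediate high-resolution block: weighted combination of the
    # high-resolution neighbour blocks indexed by Nin.
    return Nc @ w
```

With lam = 0 and consistent inputs, the solver recovers the exact combination weights; the regulariser keeps the solve stable when the neighbour blocks are nearly collinear.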
Sub-step 5: According to the best neighbour coefficients w, the reconstructed high-resolution image block ŷ^i is found as the intermediate result by
ŷ^i = Σ_{j∈Nin} w_j · Nc_j^i,
where Nc^i denotes the neighbour block set taken out of the high-resolution training block set according to the index set Nin, with j ∈ Nin.
Sub-step 6: From the reconstructed intermediate image block ŷ^i, the low-frequency information can be found as
ŷlow^i = ds(ŷ^i),
where the operator ds(·) denotes a degradation process obtained empirically; here the specific process is Gaussian filtering with parameter 5.
S5: For the obtained low-frequency information of the low-resolution face image to be processed, seek the high-frequency component based on the low-resolution general image library and the high-resolution general image library. The detailed process is:
Sub-step 1: Build a three-layer convolutional neural network. The detailed process is:
First-layer CNN: a feature extraction process on the input image, using the properties of the convolutional network to extract features of the image blocks (9*9*64 convolution kernels). In formula form:
F1(Y) = max(0, W1*Y + B1)
where Y denotes the input training sample block, F1(·) denotes the first-layer processing operation, and F1(Y) denotes the first-layer processing result. W1 denotes n1 three-dimensional convolution kernels, n1 denotes the number of convolutions, and B1 is an n1-dimensional feature vector; n1 is an empirical value.
Second-layer CNN: nonlinear mapping. The features extracted by the first layer are nonlinearly mapped onto another feature dimension through convolution kernels (1*1*35 convolution kernels). In formula form:
F2(Y) = max(0, W2*F1(Y) + B2)
where F2(·) denotes the second-layer processing operation, F2(Y) denotes the second-layer processing result, W2 denotes n2 three-dimensional filters, and B2 is an n2-dimensional feature vector; n2 is an empirical value.
Third-layer CNN: a reconstruction is performed with a convolution kernel (5*5*1 convolution kernel) to rebuild the mapped features and generate the high-resolution image. In formula form:
F(Y) = W3*F2(Y) + B3
where F(·) denotes the third-layer processing operation and F(Y) denotes the third-layer processing result, whose physical meaning is the high-resolution image pixel matrix; W3 denotes c three-dimensional filters, and B3 is a c-dimensional feature vector.
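The three-layer forward pass (9*9 kernels with n1 = 64 feature maps, 1*1 kernels with n2 = 35 feature maps, a 5*5 reconstruction kernel, and max(0, ·) activations on the first two layers) can be sketched in plain numpy. This is an unoptimized illustration of the formulas above, not a training implementation; 'same' padding is an assumption.

```python
import numpy as np

def conv2d(x, kernels, bias):
    """'Same'-padded 2-D convolution: x (C_in, H, W) -> (C_out, H, W)."""
    c_out, c_in, k, _ = kernels.shape
    p = k // 2
    xp = np.pad(x, ((0, 0), (p, p), (p, p)))
    _, H, W = x.shape
    out = np.zeros((c_out, H, W))
    for o in range(c_out):
        for i in range(c_in):
            for dy in range(k):
                for dx in range(k):
                    out[o] += kernels[o, i, dy, dx] * xp[i, dy:dy + H, dx:dx + W]
        out[o] += bias[o]
    return out

def srcnn_forward(Y, W1, B1, W2, B2, W3, B3):
    """Three-layer forward pass: F1 and F2 use max(0, .), the third
    layer is a linear reconstruction, as in the formulas above."""
    F1 = np.maximum(0.0, conv2d(Y, W1, B1))   # 9x9 kernels, n1 = 64 maps
    F2 = np.maximum(0.0, conv2d(F1, W2, B2))  # 1x1 kernels, n2 = 35 maps
    return conv2d(F2, W3, B3)                 # 5x5 kernel, 1 output channel
```

In practice the parameters W1..W3, B1..B3 would come from training on the general image library; here they are placeholders.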
Sub-step 2: Train the neural network parameters using the low-resolution general image library and the high-resolution general image library; that is, the parameters required by the three-layer network, W1, W2, W3, B1, B2, B3, are trained on these two libraries.
Sub-step 3: With the trained parameters and network architecture, the low-frequency information of the low-resolution face image to be processed recovered in step S4 is given as input, and the predicted result is taken as the super-resolution face block.
S6: Splice the high-resolution face image blocks to obtain the high-resolution face image.
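The splicing step can be sketched as follows. The patent does not specify how the overlapped pixels of adjacent blocks are merged; averaging them, as done here, is a common assumption.

```python
import numpy as np

def stitch(blocks, img_shape, psize=12, d=8):
    """Assemble overlapping high-resolution blocks into the final face
    image, averaging pixels where blocks overlap.

    blocks: dict mapping block position (j, k) -> psize x psize block.
    """
    step = psize - d
    acc = np.zeros(img_shape)
    cnt = np.zeros(img_shape)
    for (j, k), blk in blocks.items():
        top, left = j * step, k * step
        acc[top:top + psize, left:left + psize] += blk
        cnt[top:top + psize, left:left + psize] += 1.0
    return acc / np.maximum(cnt, 1.0)
```

A round trip through the blocking scheme of step S2 and this stitcher reproduces the original image, since overlapping copies of the same pixel average to themselves.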
To verify the effect of the present technique, verification was carried out on the Chinese face database CAS-PEAL. 540 face samples of resolution 112*96 were selected, and the faces were aligned with the affine transformation method. 500 images were randomly taken from the face samples for training; the remaining 40 images were down-sampled by a factor of 4 (resolution 24*28) and Gaussian noise of 0.015 was added to form the test images. With the training samples as the training library, subjective images were obtained by magnifying the test images 4 times with the bicubic interpolation method; subjective images were likewise obtained with the traditional local face super-resolution method (method 1), the method of Lan [4] (method 2), and the contour-prior-based robust face super-resolution processing method [5] (method 3).
From the experimental results, although methods 1 to 3 improve on interpolation in resolution, relatively serious errors appear and the similarity to the original image is very low. Because method 2 is based on a global face framework, and global methods often fall short in detail recovery, it is slightly inferior to the method of the present invention in this respect. The quality of the images restored by the method of the present invention is significantly improved over methods 1 to 3 and the bicubic interpolation method.
Table 1 lists the objective quality corresponding to each image, including PSNR (peak signal-to-noise ratio) and SSIM (structural similarity) values. From Table 1 it can be seen that the method of the present invention also achieves a clear and stable improvement in the objective quality of the restored images.
Table 1. Comparison of objective quality of restored images
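For reference, the PSNR measure reported in Table 1 can be computed as follows (a standard definition, not specific to the patent; `peak=255` assumes 8-bit images).

```python
import numpy as np

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio (dB) between a reference image and a
    restored image: 10 * log10(peak^2 / MSE)."""
    mse = np.mean((ref.astype(float) - img.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

Higher PSNR means the restored image is closer, pixel for pixel, to the original high-resolution image.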
The method of the present invention selects different processing methods for the low-frequency and high-frequency data according to the frequency bands of the face image, providing guidance for both structure recovery and detail recovery, and thereby restores low-quality face images. The experimental results verify the effectiveness of the invention in both subjective and objective quality: the introduction of band partitioning effectively reduces the influence of strong noise on super-resolution reconstruction and avoids the under-fitting or over-fitting brought by a single-pass small-sample processing strategy, thus improving the face super-resolution result.
The specific embodiments described herein are merely examples of the spirit of the present invention. Those skilled in the art to which the present invention belongs can make various modifications or additions to the described embodiments, or substitute them in similar ways, without departing from the spirit of the invention or exceeding the scope of the appended claims.

Claims (7)

1. A face super-resolution processing method with fusion of high- and low-frequency components, characterized by comprising the following steps:
S1: constructing a training library comprising a high-resolution face image library, its corresponding moderate low-resolution face image library, and a severe low-resolution face image library; and constructing a high-resolution general image library based on general images together with the corresponding low-resolution general image library;
S2: using the same partitioning scheme, dividing the moderate and severe low-resolution face images to be processed and the images in the training library into image blocks with overlapping parts, each image block being a square block of side length psize; the severe low-resolution face image to be processed is obtained by down-sampling the moderate one, with the down-sampling factor identical to that used in building the library;
S3: on the basis of the partitioning, preprocessing the high-resolution face image library and the low-resolution face image library respectively, the detailed process being:
for each image block in the high-resolution face image library, finding its K nearest image blocks among the blocks at the same position in the other images of the high-resolution face image library, defining them as the preprocessing neighbours of the block, and storing the labels of the preprocessing neighbours for each block;
for each image block in the low-resolution face image library, finding its K nearest image blocks among the blocks at the same position in the other images of the low-resolution face image library, defining them as the preprocessing neighbours of the block, and storing the labels of the preprocessing neighbours for each block;
S4: for each block of the low-resolution face image to be processed, first seeking the low-frequency component based on the low-resolution face training library and the high-resolution face training library, the detailed process comprising the following sub-steps:
S4.1: for each block of the low-resolution face image to be processed, searching for its nearest-neighbour block in the low-resolution training block set at the corresponding position, called the direct anchor neighbour block;
S4.2: using the correspondence relation, finding the index set of the block sets corresponding to the direct anchor neighbour from the low-resolution training block set and the high-resolution training block set respectively, labelled Nin, i.e. the low-frequency neighbour index set of the input low-resolution face image block to be processed;
S4.3: according to the index set Nin, taking out the neighbour block sets from the severe low-resolution face training block set and the moderate low-resolution face training block set respectively, denoted Na^i and Nb^i, where i denotes the block index and j ranges over j ∈ Nin;
S4.4: from the two neighbour sets Na^i and Nb^i and the low-resolution face image block to be processed, finding the best neighbour coefficients w, this sub-step being specifically:
finding the best neighbour coefficients w so that the moderate and severe reconstruction errors jointly reach a minimum, the constrained optimum being expressed by the formula
J(w) = ||xa^i − Na^i·w||² + ||xb^i − Nb^i·w||² + λ||D·w||²,
where xa^i and xb^i denote respectively the i-th block of the severe and of the moderate low-resolution image to be processed, D is a diagonal matrix assigned as the product of the identity matrix and a real value S set empirically, and λ is a real value also set empirically; taking the derivative of J with respect to w and setting it to zero, i.e. ∂J/∂w = 0, yields
w = (Na^iᵀNa^i + Nb^iᵀNb^i + λDᵀD)⁻¹ (Na^iᵀxa^i + Nb^iᵀxb^i);
S4.5: according to the best neighbour coefficients w, finding the reconstructed high-resolution image block ŷ^i as an intermediate result by
ŷ^i = Σ_{j∈Nin} w_j · Nc_j^i,
where Nc^i denotes the neighbour block set taken out of the high-resolution training block set according to the index set Nin, with j ∈ Nin;
S4.6: from the reconstructed intermediate image block ŷ^i, finding the low-frequency information
ŷlow^i = ds(ŷ^i),
where the operator ds(·) denotes a down-sampling process obtained empirically, the specific process here being Gaussian filtering with a specified parameter;
S5: for the obtained low-frequency information of the low-resolution face image to be processed, seeking the high-frequency component based on the low-resolution general image library and the high-resolution general image library, the detailed process comprising the following sub-steps:
S5.1: building a three-layer convolutional neural network, the detailed process being:
first-layer CNN: a feature extraction process on the input image, using the properties of the convolutional network to extract features of the image blocks, in formula form:
F1(Y) = max(0, W1*Y + B1)
where Y denotes the input training sample block, F1(·) denotes the first-layer processing operation, and F1(Y) denotes the first-layer processing result; W1 denotes n1 three-dimensional convolution kernels, n1 denotes the number of convolutions, and B1 is an n1-dimensional feature vector, n1 being an empirical value;
second-layer CNN: nonlinear mapping, in which the features extracted by the first layer are nonlinearly mapped onto another feature dimension through convolution kernels in three-dimensional-matrix form, in formula form:
F2(Y) = max(0, W2*F1(Y) + B2)
where F2(·) denotes the second-layer processing operation, F2(Y) denotes the second-layer processing result, W2 denotes n2 three-dimensional filters, and B2 is an n2-dimensional feature vector, n2 being an empirical value;
third-layer CNN: a reconstruction performed with a convolution kernel in three-dimensional-matrix form to rebuild the mapped features and generate the high-resolution image, in formula form:
F(Y) = W3*F2(Y) + B3
where F(·) denotes the third-layer processing operation and F(Y) denotes the third-layer processing result, whose physical meaning is the high-resolution image pixel matrix; W3 denotes c three-dimensional filters, and B3 is a c-dimensional feature vector;
S5.2: training the neural network parameters, namely W1, W2, W3, B1, B2, B3, using the low-resolution general image library and the high-resolution general image library;
S5.3: with the trained parameters and network architecture, giving the low-frequency information of the low-resolution face image to be processed recovered in step S4 as input, and taking the predicted result as the super-resolution face block;
S6: splicing the high-resolution face image blocks to obtain the high-resolution face image.
2. The face super-resolution processing method with fusion of high- and low-frequency components according to claim 1, characterized in that step S1 is specifically:
aligning the high-resolution face images in the high-resolution face image library in position and degrading them to obtain the corresponding moderate low-resolution face image library;
degrading the moderate low-resolution face images in the moderate low-resolution face image library to obtain the corresponding severe low-resolution face image library;
the high-resolution face image library, the moderate low-resolution face image library, and the severe low-resolution face image library together constituting the face training library; degrading the high-resolution general images in the high-resolution general image library to obtain the corresponding low-resolution general image library, the high-resolution general image library and the low-resolution general image library constituting the general image training library;
meanwhile, before step S2, keeping the low-resolution face image to be processed at the same image size as the images in the face training library and aligned in position.
3. The face super-resolution processing method with fusion of high- and low-frequency components according to claim 2, characterized in that the position alignment is performed using the affine transformation method.
4. The face super-resolution processing method with fusion of high- and low-frequency components according to claim 1, characterized in that step S3 is specifically: on the basis of the partitioning, preprocessing the high-resolution face image library and the low-resolution face image library respectively, the detailed process being:
for each image block in the high-resolution face image library, finding its K nearest image blocks among the blocks at the same position in the other images of the high-resolution face image library, defining them as the preprocessing neighbours of the block, and storing the labels of the preprocessing neighbours for each block;
for each image block in the low-resolution face image library, finding its K nearest image blocks among the blocks at the same position in the other images of the low-resolution face image library, defining them as the preprocessing neighbours of the block, and storing the labels of the preprocessing neighbours for each block.
5. The face super-resolution processing method with fusion of high- and low-frequency components according to claim 1, characterized in that:
step S4 is specifically:
for each block of the low-resolution face image to be processed, first seeking the low-frequency component based on the severe low-resolution training library and the high-resolution training library, the detailed process comprising the following sub-steps:
S4.1': for each block of the low-resolution face image to be processed, searching for its nearest-neighbour block in the severe low-resolution training block set at the corresponding position, called the direct anchor neighbour block;
S4.2': using the correspondence relation, finding the index set of the block sets corresponding to the direct anchor neighbour from the severe low-resolution training block set and the high-resolution training block set respectively, labelled Nin, i.e. the low-frequency neighbour index set of the input low-resolution face image block to be processed;
S4.3': according to the index set Nin, taking out the neighbour block sets from the severe low-resolution face training block set and the moderate low-resolution face training block set respectively, denoted Na^i and Nb^i, where i denotes the block index and j ranges over j ∈ Nin;
S4.4': from the two neighbour sets Na^i and Nb^i and the low-resolution face image block to be processed, finding the best neighbour coefficients w, this sub-step being specifically:
finding the best neighbour coefficients w so that the moderate and severe reconstruction errors jointly reach a minimum, the constrained optimum being expressed by the formula
J(w) = ||xa^i − Na^i·w||² + ||xb^i − Nb^i·w||² + λ||D·w||²,
where xa^i and xb^i denote respectively the i-th block of the severe and of the moderate low-resolution image to be processed, D is a diagonal matrix assigned as the product of the identity matrix and a real value S set empirically, and λ is a real value also set empirically; taking the derivative of J with respect to w and setting it to zero yields
w = (Na^iᵀNa^i + Nb^iᵀNb^i + λDᵀD)⁻¹ (Na^iᵀxa^i + Nb^iᵀxb^i);
S4.5': according to the best neighbour coefficients w, finding the reconstructed high-resolution image block ŷ^i as an intermediate result by
ŷ^i = Σ_{j∈Nin} w_j · Nc_j^i,
where Nc^i denotes the neighbour block set taken out of the high-resolution training block set according to the index set Nin, with j ∈ Nin;
S4.6': from the reconstructed intermediate image block ŷ^i, finding the low-frequency information
ŷlow^i = ds(ŷ^i),
where the operator ds(·) denotes a down-sampling process obtained empirically, the specific process here being 3x down-sampling followed by 4*4 blur-window filtering.
6. The face super-resolution processing method with fusion of high- and low-frequency components according to claim 1, characterized in that:
step S5 is specifically:
for the obtained low-frequency information of the low-resolution face image to be processed, seeking the high-frequency component based on the general image training library, the detailed process comprising the following sub-steps:
S5.1': building a three-layer convolutional neural network, the detailed process being:
first-layer CNN: a feature extraction process on the input image, using the properties of the convolutional network to extract features of the image blocks, in formula form:
F1(Y)=max (0, W1*Y+B1)
where Y denotes the input training sample block, F1(·) denotes the first-layer processing operation, and F1(Y) denotes the first-layer processing result; W1 denotes n1 three-dimensional convolution kernels, n1 denotes the number of convolutions, and B1 is an n1-dimensional feature vector, n1 being an empirical value;
second-layer CNN: nonlinear mapping, in which the features extracted by the first layer are nonlinearly mapped onto another feature dimension through convolution kernels in three-dimensional-matrix form, in formula form:
F2(Y)=max (0, W2*F1(Y)+B2)
where F2(·) denotes the second-layer processing operation, F2(Y) denotes the second-layer processing result, W2 denotes n2 three-dimensional filters, and B2 is an n2-dimensional feature vector, n2 being an empirical value;
third-layer CNN: a reconstruction performed with a convolution kernel in three-dimensional-matrix form to rebuild the mapped features and generate the high-resolution image, in formula form:
F(Y)=W3*F2(Y)+B3
where F(·) denotes the third-layer processing operation and F(Y) denotes the third-layer processing result, whose physical meaning is the high-resolution image pixel matrix; W3 denotes c three-dimensional filters, and B3 is a c-dimensional feature vector;
S5.2': training the neural network parameters, namely W1, W2, W3, B1, B2, B3, using the general image training library;
S5.3': with the trained parameters and network architecture, giving the low-frequency information of the low-resolution face image to be processed recovered in step S4 as input, and taking the predicted result as the super-resolution face block.
7. a kind of human face super-resolution processing system of low-and high-frequency ingredient fusion characterized by comprising
Training library constructs module, for constructing comprising high-resolution human face image library and its corresponding low-resolution face image library Training library and general image training library;
Piecemeal module is used to divide image in low-resolution face image to be processed and training library using identical partitioned mode To have the image block of overlapping part, the image block is the square image blocks that side length is psize;
Preprocessing module, on the basis of piecemeal, respectively high-resolution human face image library and low-resolution face image library are done pre- Processing prepares pretreated neighbor relationships for each library;
Low-frequency component determining module determines low-frequency component for low-resolution face image to be processed, and detailed process is to generate completely Result after using the manually process that degrades seek wherein low-frequency component;Include following submodule: neighbour determines submodule, according to rope Draw set, obtains neighbour's set of blocks;Neighbour's coefficient seeks submodule, by known data, acquires optimal neighbour's weight system Number;Reconstruction image submodule acquires reconstruction image block by seeking neighbour's set and weight coefficient;Down-sampling submodule, passes through The artificial down-sampling of reconstruction image block is obtained into low-frequency image ingredient;
Radio-frequency component determining module determines radio-frequency component for low-resolution face image to be processed, detailed process be by low frequency at Divide the result of determining module as input, determines reconstruction image block using neural network;Include following submodule: network establishment Module builds the network architecture;Network parameter trains submodule, according to general image training library training network parameter;Reconstructed results It predicts submodule, final result image block is reconstructed to rebuild low-frequency component as input according to parameter and framework;
Splicing module, for splicing the high-resolution face image blocks according to their positions to obtain the high-resolution face image.
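The position-based splicing can be sketched as follows, assuming overlapping pixels are averaged, a common choice for block-based super-resolution (the source does not specify how the overlap is handled):

```python
import numpy as np

def splice_blocks(blocks, positions, out_shape):
    """Splice high-resolution blocks back by their top-left positions;
    pixels covered by several overlapping blocks are averaged so that
    block seams are smoothed out."""
    acc = np.zeros(out_shape)
    cnt = np.zeros(out_shape)
    for block, (y, x) in zip(blocks, positions):
        h, w = block.shape
        acc[y:y + h, x:x + w] += block
        cnt[y:y + h, x:x + w] += 1.0
    return acc / np.maximum(cnt, 1.0)   # avoid dividing uncovered pixels by 0
```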
CN201910290815.0A 2019-04-11 2019-04-11 High-low frequency component fused face super-resolution processing method and system Active CN110490796B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910290815.0A CN110490796B (en) 2019-04-11 2019-04-11 High-low frequency component fused face super-resolution processing method and system


Publications (2)

Publication Number Publication Date
CN110490796A true CN110490796A (en) 2019-11-22
CN110490796B CN110490796B (en) 2023-02-14

Family

ID=68545797

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910290815.0A Active CN110490796B (en) 2019-04-11 2019-04-11 High-low frequency component fused face super-resolution processing method and system

Country Status (1)

Country Link
CN (1) CN110490796B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112330542A (en) * 2020-11-18 2021-02-05 重庆邮电大学 Image reconstruction system and method based on CRCSAN network
CN113592715A (en) * 2021-08-05 2021-11-02 昆明理工大学 Super-resolution image reconstruction method for small sample image set

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102968775A (en) * 2012-11-02 2013-03-13 清华大学 Low-resolution face image rebuilding method based on super-resolution rebuilding technology
WO2018099405A1 (en) * 2016-11-30 2018-06-07 京东方科技集团股份有限公司 Human face resolution re-establishing method and re-establishing system, and readable medium
CN108320267A (en) * 2018-02-05 2018-07-24 电子科技大学 Super-resolution processing method for facial image
CN108447020A (en) * 2018-03-12 2018-08-24 南京信息工程大学 A kind of face super-resolution reconstruction method based on profound convolutional neural networks

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
贾洁 (Jia Jie): "Face Super-Resolution Reconstruction and Recognition Based on Generative Adversarial Networks", China Excellent Doctoral and Master's Dissertations Full-text Database (Master's), Information Science and Technology Series *


Also Published As

Publication number Publication date
CN110490796B (en) 2023-02-14

Similar Documents

Publication Publication Date Title
Cai et al. FCSR-GAN: Joint face completion and super-resolution via multi-task learning
CN102982520B (en) Robustness face super-resolution processing method based on contour inspection
CN102243711B (en) Neighbor embedding-based image super-resolution reconstruction method
Ma et al. Structure-preserving image super-resolution
CN111105352A (en) Super-resolution image reconstruction method, system, computer device and storage medium
CN109819321A (en) Video super-resolution enhancement method
CN105701770B (en) Face super-resolution processing method and system based on a context linear model
CN109214989A (en) Single-image super-resolution reconstruction method based on an orientation-feature prediction prior
CN107123091A (en) Near-infrared face image super-resolution reconstruction method based on deep learning
CN105701515B (en) Face super-resolution processing method and system based on a double-layer manifold constraint
CN109961407A (en) Face image restoration method based on face similarity
CN110490796A (en) Face super-resolution processing method and system with high- and low-frequency component fusion
CN116128820A (en) Pin state identification method based on improved YOLO model
CN106203269A (en) Face super-resolution processing method and system based on deformable local blocks
CN115578262A (en) Polarization image super-resolution reconstruction method based on AFAN model
CN108550114A (en) Face super-resolution processing method and system with multi-scale spatial constraints
CN116029902A (en) Knowledge distillation-based unsupervised real world image super-resolution method
Almasri et al. Rgb guided thermal super-resolution enhancement
Yang et al. Deep networks with detail enhancement for infrared image super-resolution
CN116343052B (en) Attention and multiscale-based dual-temporal remote sensing image change detection network
CN109934193A (en) Global-context-prior-constrained anti-occlusion face super-resolution method and system
Zhang et al. Super-resolution reconstruction algorithms based on fusion of deep learning mechanism and wavelet
CN110310228A (en) Face super-resolution processing method and system based on closed-link data re-representation
Tian et al. Retinal fundus image superresolution generated by optical coherence tomography based on a realistic mixed attention GAN
Kumar et al. Orthogonal transform based generative adversarial network for image dehazing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant