CN108171124B - Face image sharpening method based on similar sample feature fitting

Face image sharpening method based on similar sample feature fitting

Info

Publication number
CN108171124B
Authority
CN
China
Prior art keywords
clear
face image
face
image blocks
image block
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711322319.6A
Other languages
Chinese (zh)
Other versions
CN108171124A (en)
Inventor
干宗良
刘志恒
刘峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Posts and Telecommunications
Original Assignee
Nanjing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Posts and Telecommunications
Priority to CN201711322319.6A
Publication of CN108171124A
Application granted
Publication of CN108171124B
Legal status: Active (current)
Anticipated expiration

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction

Abstract

The invention provides a face image sharpening method based on similar sample feature fitting. First, a group of sharp face images whose size and pose are consistent with those of the face image to be sharpened is prepared, and these images are degraded to obtain a corresponding group of unsharp face images of the same size. The unsharp and sharp sample image blocks corresponding to each pixel position are projected into a common feature space. For the image block at each pixel position of the face image to be sharpened, the most similar unsharp sample blocks are found in the feature space. In the feature space, a nonlinear regression model between these unsharp and sharp samples is obtained using a minimum mean square error criterion. The regression model is applied to the block to be sharpened to fit the corresponding sharp image block. Finally, the sharp image blocks of all pixel positions are stitched together at the corresponding face positions to obtain a sharp face image.

Description

Face image sharpening method based on similar sample feature fitting
Technical Field
The invention belongs to the field of digital image processing, and particularly relates to a face image sharpening method based on similar sample feature fitting.
Background
With the development of artificial intelligence, face detection, face recognition and expression recognition are now applied in fields such as intelligent transportation and mobile payment, showing that computer vision technology has become part of everyday life. In real scenes, however, ordinary surveillance equipment is limited by its own hardware and by shooting conditions such as weather, distance, time of day and lighting, so the captured face images are often blurred. As a result, extracting useful face information from surveillance video or other low-quality images is difficult, and research on face image sharpening therefore has important practical significance. Face sharpening is a technique that applies a learned sharpening model to an acquired unsharp face image in order to recover a sharp face image.
Disclosure of Invention
To address these problems, the invention provides a face image sharpening method based on similar sample feature fitting, which improves the quality of the face image to be sharpened.
In order to solve the technical problems, the invention adopts the following technical scheme:
S1, degrading a group of sharp face images whose size and pose are consistent with those of the face image to be sharpened to obtain a corresponding group of unsharp face images, dividing the two groups of images into corresponding image blocks by pixel position, and constructing sharp and unsharp training sample sets for each pixel position;
S2, subtracting the mean from the image blocks in the training sample set of each pixel position and then extracting features;
S3, dividing the face image to be sharpened into overlapping blocks by pixel position to obtain face image blocks to be sharpened, subtracting the mean from each block, and then extracting features;
S4, finding, in the unsharp training sample set corresponding to the pixel position of the face image block to be sharpened, the K unsharp face image block features most similar to the feature of the block to be sharpened, finding the K corresponding sharp face image block features in the sharp training sample set, and forming training sample pairs from the K sharp and unsharp face image block features;
S5, learning a nonlinear regression relationship between the unsharp and sharp face image block features from the training sample pairs, and using the learned relationship to obtain the sharp face image block feature corresponding to the feature of the block to be sharpened;
S6, applying a back projection transform to the obtained sharp face image block feature to obtain a sharp face image block;
S7, stitching the obtained sharp face image blocks, one by one according to their positions on the face image, into the final sharp face image.
Further, step S1 specifically includes:
S11, selecting from a face sample library a group of sharp face images whose size and pose are consistent with those of the face image to be sharpened; downsampling each sharp face image to obtain a reduced sharp face image, then enlarging it back to the original size with a bicubic interpolation algorithm; applying this operation to every image in the sharp face image set yields a group of interpolated unsharp face images;
S12, using a rectangular window of fixed size to divide the sharp and unsharp face images into sliding-window blocks, ensuring that vertically and horizontally adjacent blocks overlap and that the number of overlapping pixels is the same.
S13, gathering the image blocks at the same pixel position in the sharp and unsharp face images to form a sharp training sample set and an unsharp training sample set. Assume that the unsharp training sample set at position p is defined as

X_p = {x_p^i ∈ R^J, i = 1, 2, …, M}

and the sharp training sample set is defined as

Y_p = {y_p^i ∈ R^J, i = 1, 2, …, M},

where J is the dimension of an image block in the sample set, M is the number of image blocks in the sample set, and x_p and y_p denote the image blocks of the unsharp and sharp images at position p, respectively.
Further, step S2 specifically includes:
S21, subtracting the mean from the image blocks in the sharp and unsharp training sample sets of each pixel position in S1 and extracting features. Taking position p as an example:
X and Y of S1 are projected into the feature space with the projection matrices U and V:

x̃_p^i = V^T (x_p^i − mean_X)   (1)
ỹ_p^i = U^T (y_p^i − mean_Y)   (2)

where mean_X and mean_Y are the mean values of all image blocks in the X and Y sample sets of S1, respectively, and i = 1, 2, …, M. The projected sample sets are

Z_X = {x̃_p^i ∈ R^q, i = 1, 2, …, M} and Z_Y = {ỹ_p^i ∈ R^q, i = 1, 2, …, M},

where q is less than or equal to the image block dimension J of the sample set, and x̃_p^i and ỹ_p^i denote the projections of x_p^i and y_p^i.
The projection matrices U and V are calculated as follows:
Subtract the respective means from the image blocks in the training sample sets X and Y of S1 and compute the correlation matrix C between them by formula (3):

C = (Y − mean_Y)^T (X − mean_X)   (3)

Decompose the correlation matrix C as in formula (4):

C = U Λ V^T   (4)

U and V are then obtained by solving formula (5) (given as an image in the original).
Further, step S3 specifically includes:
S31, dividing the face image to be sharpened into sliding-window blocks with a rectangular window of the same size as in S1, ensuring that vertically and horizontally adjacent blocks overlap by the same number of pixels as in S1.
S32, at position p, projecting the face image block L_p to be sharpened into the feature space with the projection matrix V of S2 according to formula (6), obtaining the feature of the block to be sharpened:

L̃_p = V^T (L_p − mean_X)   (6)
Further, step S4 specifically includes:
At position p, by solving formula (7), find in Z_X of S2 the K unsharp face image block features with the smallest Euclidean distance to the feature L̃_p of the block to be sharpened:

d_i = ‖L̃_p − x̃_p^i‖_2,  i = 1, 2, …, M,   (7)

keeping the K features with the smallest d_i. Then find in Z_Y the K sharp face image block features corresponding to these unsharp features in Z_X; together they form the training sample pair.
Further, step S5 specifically includes:
At each position on the image, solve formula (8) according to the minimum mean square error criterion to obtain the nonlinear regression relationship between the corresponding image block features in the sharp and unsharp training sample sets:

[formula (8) is given as an image in the original]

where each row of the matrix A represents an image block feature, C is a regularization parameter, I denotes a column vector whose elements are all 1, and Φ is the Sigmoid function, i.e. Φ(x) = 1/(1 + e^(−x)); β is a hidden parameter, generated randomly from a Gaussian distribution when the data are input; y denotes the output. Once w and b are found, the output corresponding to an input x' is y = w^T Φ(β, x') − b.
w and b can be obtained by solving the above model with Newton's method; w is a weight matrix and b is a bias vector.
The model target parameters w and b are found by taking the unsharp image block features in the S4 training sample pairs as the input of the regression model and the sharp image block features as its output. For the input L̃_p of S3, the corresponding sharp image block feature H̃_p is obtained by formula (10):

H̃_p = w^T Φ(β, L̃_p) − b   (10)
Further, step S6 specifically includes:
By solving formula (11), the sharp image block feature H̃_p of S5 is back projected into the original image space to obtain the sharp face image block H_p:

H_p = U H̃_p + mean_Y   (11)
Further, step S7 specifically includes:
The sharp face image blocks corresponding to the blocks to be sharpened are obtained in turn and stitched into a sharp face image according to their pixel positions; wherever pixels overlap, the mean of the overlapping pixel values is taken as the final pixel value.
The above steps involve the following modules:
Sample generation module: performs the degradation of the sharp images in S1 to obtain unsharp face images, i.e. downsamples each sharp face image to obtain a reduced sharp face image and then enlarges it back to the original size with a bicubic interpolation algorithm;
Image blocking module: divides the sharp and unsharp face sample images into image blocks of the same size, ensuring that vertically and horizontally adjacent blocks overlap and that the number of overlapping pixels is the same;
Feature extraction module: extracts features from the image blocks and projects them into a common feature space;
Sample pair generation module: finds in the unsharp training sample set the K unsharp face image block features most similar to the feature of the block to be sharpened, finds the K corresponding sharp face image block features in the sharp training sample set, and forms training sample pairs from the K sharp and unsharp features.
Advantageous effects
Compared with the prior art, the invention has the following beneficial effects:
1. The invention searches the feature space for the K unsharp image block features nearest to the feature of the block to be sharpened and uses them for the sharpening process; this locates the nearest neighbours of the block to be sharpened more accurately and therefore gives a better result.
2. The method sharpens the image to be processed using a nonlinear regression relationship between sharp and unsharp samples, and experiments show that nonlinear regression achieves better results.
Drawings
Fig. 1 is a schematic flow chart of the face image sharpening method based on similar sample feature fitting according to the present invention.
Detailed description of the preferred embodiments
The invention is further illustrated by the following figures and examples. The technical scheme is as follows:
S1, degrading a group of sharp face images whose size and pose are consistent with those of the face image to be sharpened to obtain a corresponding group of unsharp face images, dividing the two groups of images into corresponding image blocks by pixel position, and constructing sharp and unsharp training sample sets for each pixel position;
S2, subtracting the mean from the image blocks in the training sample set of each pixel position and then extracting features;
S3, dividing the face image to be sharpened into overlapping blocks by pixel position to obtain face image blocks to be sharpened, subtracting the mean from each block, and then extracting features;
S4, finding, in the unsharp training sample set corresponding to the pixel position of the face image block to be sharpened, the K unsharp face image block features most similar to the feature of the block to be sharpened, finding the K corresponding sharp face image block features in the sharp training sample set, and forming training sample pairs from the K sharp and unsharp face image block features;
S5, learning a nonlinear regression relationship between the unsharp and sharp face image block features from the training sample pairs, and using the learned relationship to obtain the sharp face image block feature corresponding to the feature of the block to be sharpened;
S6, applying a back projection transform to the obtained sharp face image block feature to obtain a sharp face image block;
S7, stitching the obtained sharp face image blocks, one by one according to their positions on the face image, into the final sharp face image.
Further, step S1 specifically includes:
S11, selecting 400 frontal sharp face images of size 100x120 from a face sample library, extending each sharp face image by two pixels at each edge, and then sliding a 100x120 window over it from left to right and from top to bottom, so that each image generates 25 images; the 400 sharp face images are thus expanded to 10000.
S12, downsampling each of the 10000 sharp face images in S11 by a factor of 4 to obtain reduced sharp face images of size 20x25, and then enlarging them by a factor of 4 back to the original 100x120 size with a bicubic interpolation algorithm, obtaining 10000 unsharp face images in total.
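For illustration, a minimal sketch of this degradation step, assuming OpenCV's bicubic resize (the patent does not name a library; the function and variable names are illustrative, only the 4x factor follows the embodiment):

```python
import cv2

def degrade(sharp_img, factor=4):
    """Simulate an unsharp face image (S12): shrink by `factor`, then
    enlarge back to the original size with bicubic interpolation."""
    h, w = sharp_img.shape[:2]
    small = cv2.resize(sharp_img, (w // factor, h // factor),
                       interpolation=cv2.INTER_CUBIC)
    unsharp = cv2.resize(small, (w, h), interpolation=cv2.INTER_CUBIC)
    return unsharp

# unsharp_faces = [degrade(img) for img in sharp_faces]  # 10000 training pairs
```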
S13, using a rectangular window of size 8x8 to divide the sharp and unsharp face images into sliding-window blocks from left to right and from top to bottom, ensuring a 4-pixel overlap between adjacent blocks, so that each face image is divided into 696 image blocks.
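This blocking, together with the vectorization described in S14 below, can be sketched as follows (an 8x8 window moved in 4-pixel steps gives the stated 696 blocks for a 100x120 image; the helper name and the dictionary layout keyed by top-left position are assumptions for illustration):

```python
import numpy as np

def extract_patches(img, patch=8, step=4):
    """Slide an 8x8 window with a 4-pixel step (4-pixel overlap) over a
    grayscale image and return a 64-d column vector per pixel position."""
    h, w = img.shape
    patches = {}
    for top in range(0, h - patch + 1, step):
        for left in range(0, w - patch + 1, step):
            block = img[top:top + patch, left:left + patch]
            patches[(top, left)] = block.reshape(-1).astype(np.float64)
    return patches  # 696 entries for a 100x120 face image

# Stacking the vectors of all 10000 training images at one position p gives
# the M x J sample matrices X_p (unsharp) and Y_p (sharp) used below.
```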
S14, at each pixel position, arranging the image blocks at that position in the sharp and unsharp face images into column vectors of size 64x1 and gathering them into a sharp and an unsharp training sample set. Assume that the unsharp training sample set at position p is

X_p = {x_p^i ∈ R^J, i = 1, 2, …, M}

and the sharp training sample set is

Y_p = {y_p^i ∈ R^J, i = 1, 2, …, M},

where J = 64, M = 10000, and x_p and y_p denote the column vectors into which the image blocks of the unsharp and sharp images at position p are arranged.
Further, step S2 specifically includes:
S21, removing the mean from the image blocks in the sharp and unsharp training sample sets at each position in S1 and extracting features. Taking position p as an example:
X and Y of S1 are projected into the feature space with the projection matrices U and V:

x̃_p^i = V^T (x_p^i − mean_X)   (1)
ỹ_p^i = U^T (y_p^i − mean_Y)   (2)

where mean_X and mean_Y are the mean values of the image blocks in the X and Y sample sets of S1, respectively, and i = 1, 2, …, M. The projected sample sets are

Z_X = {x̃_p^i ∈ R^q, i = 1, 2, …, M} and Z_Y = {ỹ_p^i ∈ R^q, i = 1, 2, …, M},

and each projected image block feature is a column vector of size 45x1, i.e. q = 45, where x̃_p^i and ỹ_p^i denote the projections of x_p^i and y_p^i.
The projection matrices U and V are calculated as follows:
Compute the correlation matrix C between the mean-removed X and Y of S1 by formula (3):

C = (Y − mean_Y)^T (X − mean_X)   (3)

Decompose the correlation matrix C as in formula (4):

C = U Λ V^T   (4)

U and V are then obtained by solving formula (5) (given as an image in the original).
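A sketch of this projection-matrix computation for one position p, assuming X and Y are stored as M x 64 matrices with one image block per row; since formula (5) is only available as an image, the truncation of U and V to the first q = 45 singular vectors shown here is an assumption:

```python
import numpy as np

def learn_projections(X, Y, q=45):
    """X, Y: M x 64 matrices of unsharp / sharp blocks at one position.
    Returns the means and the projection matrices V (unsharp) and U (sharp)."""
    mean_x = X.mean(axis=0)
    mean_y = Y.mean(axis=0)
    C = (Y - mean_y).T @ (X - mean_x)     # formula (3), a 64 x 64 matrix
    U, S, Vt = np.linalg.svd(C)           # formula (4): C = U diag(S) V^T
    return mean_x, mean_y, U[:, :q], Vt.T[:, :q]   # assumed truncation to q = 45

def project_unsharp(x, mean_x, V_q):
    """Formulas (1)/(6): 45-d feature of an unsharp block."""
    return V_q.T @ (x - mean_x)

def project_sharp(y, mean_y, U_q):
    """Formula (2): 45-d feature of a sharp block."""
    return U_q.T @ (y - mean_y)
```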
Further, in step S3, it specifically includes:
s31, performing sliding window blocking on the face image to be clarified from left to right and from top to bottom by adopting a window with the size of 8x8, and ensuring that an overlapping part of 4 pixels exists between the upper, lower, left and right adjacent position blocks, namely dividing the face image to be clarified into 696 face image blocks to be clarified.
S32, at the position p, defining L as the column vector for arranging the 8x8 to-be-clear face image blocks at p into 64x1pL is calculated by using the projection matrix V of S2 according to equation (6)pProjecting the image block to a feature space to obtain the features of the image block of the human face to be cleaned with the size of 45x1
Figure GDA0003292005600000073
Figure GDA0003292005600000074
Further, step S4 specifically includes:
Let K = 1200. At position p, by solving formula (7), find in Z_X of S2 the 1200 unsharp face image block features with the smallest Euclidean distance to the feature L̃_p of the block to be sharpened:

d_i = ‖L̃_p − x̃_p^i‖_2,  i = 1, 2, …, M,   (7)

keeping the 1200 features with the smallest d_i. Then find in Z_Y the 1200 sharp face image block features corresponding to these unsharp features in Z_X; together they form the training sample pair.
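This neighbour search reduces to a plain Euclidean nearest-neighbour lookup over the projected unsharp set Z_X; a sketch with K = 1200 as in the embodiment (function and variable names are illustrative):

```python
import numpy as np

def nearest_neighbours(feat_l, Z_X, Z_Y, K=1200):
    """feat_l: 45-d feature of the block to be sharpened.
    Z_X, Z_Y: M x 45 projected unsharp / sharp sample sets at position p.
    Returns the K closest unsharp features and the corresponding sharp ones."""
    dists = np.linalg.norm(Z_X - feat_l, axis=1)   # Euclidean distances, formula (7)
    idx = np.argsort(dists)[:K]                    # indices of the K closest samples
    return Z_X[idx], Z_Y[idx]                      # the training sample pair
```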
Further, step S5 specifically includes:
At each position on the image, solve formula (8) according to the minimum mean square error criterion to obtain the nonlinear regression relationship between the corresponding image block features in the sharp and unsharp training sample sets:

[formula (8) is given as an image in the original]

where each row of the matrix A represents an image block feature, C is a regularization parameter set to 0.001, I denotes a column vector whose elements are all 1, and Φ is the Sigmoid function, i.e. Φ(x) = 1/(1 + e^(−x)); β is a hidden parameter, generated randomly from a Gaussian distribution when the data are input; y denotes the output. Once w and b are found, the output corresponding to an input x' is y = w^T Φ(β, x') − b.
Setting the intermediate parameter HN to 240, the model is solved by a Lagrange method to obtain w and b according to formula (9):

[formula (9) is given as an image in the original]

where E = [Φ(β, A), −I].
The model target parameters w and b are obtained by taking the 1200 unsharp image block features in the S4 training sample pair as the input of the regression model and the 1200 sharp image block features as its output. Then, for the input L̃_p of S3, the corresponding 45x1 sharp image block feature H̃_p is obtained by formula (10):

H̃_p = w^T Φ(β, L̃_p) − b   (10)
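Formulas (8) and (9) are only available as images, so the following sketch assumes a standard regularized single-hidden-layer least-squares regression consistent with the surrounding text: Φ is the Sigmoid function, β is a random Gaussian hidden parameter, HN = 240 hidden nodes, E = [Φ(β, A), −I], and an input x' maps to the output w^T Φ(β, x') − b. The closed-form ridge solution used here in place of Newton's or the Lagrange method is an assumption:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_regression(A, T, hn=240, C=0.001, seed=0):
    """A: K x 45 unsharp features (inputs), T: K x 45 sharp features (outputs).
    Returns (beta, w, b) so that an output is predicted as w^T Phi(beta, x) - b."""
    rng = np.random.default_rng(seed)
    beta = rng.standard_normal((A.shape[1], hn))     # hidden parameter, Gaussian
    H = sigmoid(A @ beta)                            # Phi(beta, A), K x hn
    E = np.hstack([H, -np.ones((A.shape[0], 1))])    # E = [Phi(beta, A), -I]
    # assumed ridge least-squares solution standing in for formulas (8)-(9)
    wb = np.linalg.solve(E.T @ E + C * np.eye(hn + 1), E.T @ T)
    return beta, wb[:-1], wb[-1]                     # w: hn x 45, b: 45-d

def predict(feat_l, beta, w, b):
    """Formula (10): sharp-feature estimate for one 45-d unsharp feature."""
    return sigmoid(feat_l @ beta) @ w - b
```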
Further, step S6 specifically includes:
By solving formula (11), the sharp image block feature H̃_p of S5 is back projected into the original image space to obtain the sharp image block H_p:

H_p = U H̃_p + mean_Y   (11)

where H_p is a vector of size 64x1; rearranging its pixels into an 8x8 matrix gives the sharp face image block.
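A sketch of this back projection, assuming U has orthonormal columns so that formula (11) simply inverts the forward projection of formula (2):

```python
import numpy as np

def back_project(feat_h, U_q, mean_y, patch=8):
    """Formula (11): map a 45-d sharp feature back to pixel space and
    rearrange the 64 values into an 8x8 sharp face image block."""
    h_vec = U_q @ feat_h + mean_y          # back to the 64-d image-block space
    return h_vec.reshape(patch, patch)
```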
Further, step S7 specifically includes:
The sharp face image blocks corresponding to the blocks to be sharpened are obtained in turn and stitched into a sharp face image according to their positions; wherever pixels overlap, the mean of the overlapping pixel values is taken as the final pixel value.
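The stitching can be sketched by accumulating each reconstructed block at its position and averaging wherever blocks overlap (the block dictionary keyed by top-left position follows the earlier sketches; image size and step follow the embodiment):

```python
import numpy as np

def stitch(blocks, height=120, width=100, patch=8):
    """blocks: {(top, left): 8x8 array}. Overlapping pixels are averaged (S7)."""
    acc = np.zeros((height, width))
    cnt = np.zeros((height, width))
    for (top, left), block in blocks.items():
        acc[top:top + patch, left:left + patch] += block
        cnt[top:top + patch, left:left + patch] += 1
    return acc / np.maximum(cnt, 1)        # mean value at overlapping positions
```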

Claims (1)

1. A face image sharpening method based on similar sample feature fitting, comprising the following steps:
S1, degrading a group of sharp face images whose size and pose are consistent with those of the face image to be sharpened to obtain a corresponding group of unsharp face images, dividing the two groups of images into corresponding image blocks by pixel position, and constructing sharp and unsharp training sample sets for each pixel position;
S2, subtracting the mean from the image blocks in the training sample set of each pixel position and then extracting features;
S3, dividing the face image to be sharpened into overlapping blocks by pixel position to obtain face image blocks to be sharpened, subtracting the mean from each block, and then extracting features;
S4, finding, in the unsharp training sample set corresponding to the pixel position of the face image block to be sharpened, the K unsharp face image block features most similar to the feature of the block to be sharpened, finding the K corresponding sharp face image block features in the sharp training sample set, and forming training sample pairs from the K sharp and unsharp face image block features;
S5, learning a nonlinear regression relationship between the unsharp and sharp face image block features from the training sample pairs, and using the learned relationship to obtain the sharp face image block feature corresponding to the feature of the block to be sharpened;
S6, applying a back projection transform to the obtained sharp face image block feature to obtain a sharp face image block;
S7, stitching the obtained sharp face image blocks, one by one according to their positions on the face image, into the final sharp face image;
In step S1, specifically:
S11, selecting from a face sample library a group of sharp face images whose size and pose are consistent with those of the face image to be sharpened; downsampling each sharp face image to obtain a reduced sharp face image, then enlarging it back to the original size with a bicubic interpolation algorithm; applying this operation to every image in the sharp face image set yields a group of interpolated unsharp face images;
S12, using a rectangular window of fixed size to divide the sharp and unsharp face images into sliding-window blocks, ensuring that vertically and horizontally adjacent blocks overlap and that the number of overlapping pixels is the same;
S13, gathering the image blocks at the same pixel position in the sharp and unsharp face images to form sharp and unsharp training sample sets; assume that the unsharp training sample set at position p is defined as

X_p = {x_p^i ∈ R^J, i = 1, 2, …, M}

and the sharp training sample set is defined as

Y_p = {y_p^i ∈ R^J, i = 1, 2, …, M},

where J is the dimension of an image block in the sample set, M is the number of image blocks in the sample set, and x_p and y_p denote the image blocks of the unsharp and sharp images at position p, respectively;
In step S2, specifically:
S21, subtracting the mean from the image blocks in the sharp and unsharp training sample sets of each pixel position in S1 and extracting features; taking position p as an example:
X and Y of S1 are projected into the feature space with the projection matrices U and V:

x̃_p^i = V^T (x_p^i − mean_X)   (1)
ỹ_p^i = U^T (y_p^i − mean_Y)   (2)

where mean_X and mean_Y are the mean values of all image blocks in the X and Y sample sets of S1, respectively, and i = 1, 2, …, M; the projected sample sets are

Z_X = {x̃_p^i ∈ R^q, i = 1, 2, …, M} and Z_Y = {ỹ_p^i ∈ R^q, i = 1, 2, …, M},

where q is less than or equal to the image block dimension J of the sample set, and x̃_p^i and ỹ_p^i denote the projections of x_p^i and y_p^i;
the projection matrices U and V are calculated as follows:
subtract the respective means from the image blocks in the training sample sets X and Y of S1 and compute the correlation matrix C between them by formula (3):

C = (Y − mean_Y)^T (X − mean_X)   (3)

decompose the correlation matrix C as in formula (4):

C = U Λ V^T   (4)

U and V are then obtained by solving formula (5) (given as an image in the original);
In step S3, specifically:
S31, dividing the face image to be sharpened into sliding-window blocks with a rectangular window of the same size as in step S1, ensuring that vertically and horizontally adjacent blocks overlap by the same number of pixels as in S1;
S32, at position p, projecting the face image block L_p to be sharpened into the feature space with the projection matrix V of S2 according to formula (6), obtaining the feature of the block to be sharpened:

L̃_p = V^T (L_p − mean_X)   (6)
In step S4, specifically:
at position p, by solving formula (7), finding in Z_X of S2 the K unsharp face image block features with the smallest Euclidean distance to the feature L̃_p of the block to be sharpened:

d_i = ‖L̃_p − x̃_p^i‖_2,  i = 1, 2, …, M,   (7)

keeping the K features with the smallest d_i, and then finding in Z_Y the K sharp face image block features corresponding to these unsharp features in Z_X, so that together they form the training sample pair;
In step S6, specifically:
the sharp image block feature H̃_p of S5 is back projected into the original image space by solving formula (11) to obtain the sharp face image block H_p:

H_p = U H̃_p + mean_Y   (11)
In step S7, specifically:
the sharp face image blocks corresponding to the blocks to be sharpened are obtained in turn and stitched into a sharp face image according to their pixel positions; wherever pixels overlap, the mean of the overlapping pixel values is taken as the final pixel value;
the method is characterized in that, in the step S5, the method specifically includes:
at each position on the image, solving a formula (8) according to a minimum average error criterion to obtain a nonlinear regression relation between the characteristics of the corresponding image blocks in the clear and non-clear training sample sets:
Figure FDA0003453885590000035
wherein, each row in the matrix a represents an image block feature, C is a regularization parameter, I represents a column vector whose elements are all 1, and Φ represents a Sigmoid function, that is, Φ (x) is 1/(1+ e)-x) Beta is a hidden parameter, in the input numberAccording to the method, the output is randomly generated according to Gaussian distribution, y represents the output, if w and b are obtained, for one input x', the corresponding output y ═ wTΦ(β,x′)-b;
Solving by a Newton method to obtain w and b;
obtaining regression model target parameters w and b by taking the unclear image block features in the training sample pair of S4 as the input of a regression model and taking the clear image block features as the output of the regression model; then for the input described at S3
Figure FDA0003453885590000036
Is obtained by the formula (10)
Figure FDA0003453885590000037
Corresponding clear image block features
Figure FDA0003453885590000038
Figure FDA0003453885590000039
CN201711322319.6A 2017-12-12 2017-12-12 Face image sharpening method based on similar sample feature fitting Active CN108171124B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711322319.6A CN108171124B (en) 2017-12-12 2017-12-12 Face image sharpening method based on similar sample feature fitting

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711322319.6A CN108171124B (en) 2017-12-12 2017-12-12 Face image sharpening method based on similar sample feature fitting

Publications (2)

Publication Number Publication Date
CN108171124A CN108171124A (en) 2018-06-15
CN108171124B true CN108171124B (en) 2022-04-05

Family

ID=62525713

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711322319.6A Active CN108171124B (en) 2017-12-12 2017-12-12 Face image sharpening method based on similar sample feature fitting

Country Status (1)

Country Link
CN (1) CN108171124B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110084193B (en) * 2019-04-26 2023-04-18 深圳市腾讯计算机系统有限公司 Data processing method, apparatus, and medium for face image generation
CN110852962B (en) * 2019-10-29 2022-08-26 南京邮电大学 Dual-mapping learning compressed face image restoration method based on regression tree classification
CN113628142B (en) * 2021-08-19 2022-03-15 湖南汽车工程职业学院 Picture sharpening processing system based on similarity simulation

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102306374A (en) * 2011-08-30 2012-01-04 西安交通大学 Method for rebuilding super-resolution human face image by position block nonlinear mapping
CN102402784A (en) * 2011-12-16 2012-04-04 武汉大学 Human face image super-resolution method based on nearest feature line manifold learning
CN106096547A (en) * 2016-06-11 2016-11-09 北京工业大学 A kind of towards the low-resolution face image feature super resolution ratio reconstruction method identified

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102306374A (en) * 2011-08-30 2012-01-04 西安交通大学 Method for rebuilding super-resolution human face image by position block nonlinear mapping
CN102402784A (en) * 2011-12-16 2012-04-04 武汉大学 Human face image super-resolution method based on nearest feature line manifold learning
CN106096547A (en) * 2016-06-11 2016-11-09 北京工业大学 A kind of towards the low-resolution face image feature super resolution ratio reconstruction method identified

Also Published As

Publication number Publication date
CN108171124A (en) 2018-06-15

Similar Documents

Publication Publication Date Title
Dian et al. Regularizing hyperspectral and multispectral image fusion by CNN denoiser
CN109859147B (en) Real image denoising method based on generation of antagonistic network noise modeling
Zhang et al. Adversarial spatio-temporal learning for video deblurring
CN103761710B (en) The blind deblurring method of efficient image based on edge self-adaption
CN102326379B (en) Method for removing blur from image
Kappeler et al. Video super-resolution with convolutional neural networks
CN111145112B (en) Two-stage image rain removing method and system based on residual countermeasure refinement network
CN109685045B (en) Moving target video tracking method and system
CN111275643B (en) Real noise blind denoising network system and method based on channel and space attention
CN104408742B (en) A kind of moving target detecting method based on space time frequency spectrum Conjoint Analysis
CN108171124B (en) Face image sharpening method based on similar sample feature fitting
Dong et al. Blur kernel estimation via salient edges and low rank prior for blind image deblurring
Ding et al. U2D2Net: Unsupervised unified image dehazing and denoising network for single hazy image enhancement
Gao et al. Atmospheric turbulence removal using convolutional neural network
Chandak et al. Semantic image completion and enhancement using deep learning
Wang et al. Uneven image dehazing by heterogeneous twin network
CN108629119B (en) Time sequence MODIS quantitative remote sensing product space-time restoration and batch processing realization method
Zhou et al. Single image dehazing based on weighted variational regularized model
Han et al. MPDNet: An underwater image deblurring framework with stepwise feature refinement module
Ge et al. Blind image deconvolution via salient edge selection and mean curvature regularization
CN111986136B (en) Fuzzy image sequence fusion restoration method based on poisson probability model
Xue et al. Bwin: A bilateral warping method for video frame interpolation
CN106530259B (en) A kind of total focus image rebuilding method based on multiple dimensioned defocus information
CN112069870A (en) Image processing method and device suitable for vehicle identification
Rezayi et al. Huber Markov random field for joint super resolution

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant