CN107680037B - Improved face super-resolution reconstruction method based on nearest characteristic line manifold learning
- Publication number: CN107680037B (application CN201710817616.1A)
- Authority: CN (China)
- Legal status: Active (the status listed is an assumption and is not a legal conclusion)
Classifications
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4053—Scaling based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
- G06T3/4076—Scaling based on super-resolution using the original low-resolution images to iteratively correct the high-resolution images
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06T2207/20081—Training; Learning
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
Abstract
The invention discloses an improved face super-resolution reconstruction method based on nearest feature line manifold learning. Building on the existing reconstruction method of this kind, it treats separately the case in which a projection point falls on the extrapolation of the line connecting two sample points: when the sum of the Euclidean distances from the projection point to the two sample points is greater than W times the Euclidean distance between the two sample points, the sample point closer to the projection point replaces the projection point in the point set to be screened. This constraint keeps each projection point strongly correlated with the sample points, greatly improves how well the newly generated sample data express the input low-resolution image block, avoids as far as possible the introduction of detail that does not exist in the original image, and improves the reconstruction of the low-resolution image.
Description
Technical Field
The invention relates to the technical field of image processing, in particular to an improved face super-resolution reconstruction method based on nearest characteristic line manifold learning.
Background
The 2016 government work report emphasized: 'innovate the comprehensive social-security governance mechanism, support the construction of a crime prevention and control system with information technology, punish illegal and criminal behavior according to law, strike hard at violent terrorist activities, and strengthen the public's sense of security.' Among the many security measures currently deployed, video surveillance and image processing technologies play an increasingly important role in crime prevention. However, according to statistics, up to 60% of surveillance images captured in the daytime, and up to 95% of those captured at night, are of poor quality. How to reconstruct a recognizable, high-quality face image from the original low-quality image of a suspect has therefore become an urgent need in video investigation.
Learning-based face super-resolution reconstruction is currently a main research direction in the field of image processing. Given an observed low-resolution face image, a learning-based method uses paired high- and low-resolution face training samples to reconstruct the high-resolution face image most similar to the input. Such methods can reproduce local facial detail and thereby improve the accuracy of face recognition; compared with traditional methods, they achieve better reconstruction quality and higher magnification by exploiting the prior information contained in the training samples.
The basic premise of learning-based face super-resolution is that sample image blocks of low-resolution face images and those of high-resolution face images share a similar local geometric structure. For this assumption to hold, two preconditions must be satisfied: first, the sample data are densely sampled in the underlying manifold space; second, the samples are not disturbed by noise. Regarding the first precondition, even ignoring repeated individuals, existing face image libraries contain at most about 2000 samples; even combined into one training set and embedded in the high-dimensional face manifold space, they still form a sparse sample space. Existing face libraries therefore cannot satisfy the density assumption on which learning-based reconstruction rests. Moreover, building a face image library is a very time-consuming and complex process, and the associated algorithms consume large amounts of computing resources, so simply enlarging the library with more face samples is not a practical way to make the manifold sampling dense enough.
In 2014, Jiang et al. of Wuhan University introduced the concept of the nearest feature line into image processing and proposed a face super-resolution reconstruction method based on nearest feature line manifold learning (application number 201110421817.2), which expands the expressive capacity of the sample library by bringing the nearest feature line into super-resolution reconstruction. First, the sample images nearest to the query point are selected from the low-resolution training library. Second, the selected samples are connected pairwise to form feature lines, and the projection of the query point onto each feature line is computed; this expands the sample data and mitigates the insufficiently dense sampling of the manifold space. Then a subset of projection points nearest to the query point is selected, and the linear reconstruction weights between the query point and these nearest projections are solved. Finally, the low-resolution projection points are replaced by their corresponding high-resolution projection points, and the target high-resolution image is reconstructed.
Although this method greatly expands the expressive capacity of the sample data, it lacks the necessary constraints when selecting the nearest projection points, so detail that does not exist in the original image is introduced and the reconstruction is not ideal.
Disclosure of Invention
The invention aims to provide an improved face super-resolution reconstruction method based on nearest feature line manifold learning which, building on the existing method of this kind, avoids as far as possible the introduction of detail that does not exist in the original image and improves the reconstruction of low-resolution images.
The technical scheme adopted by the invention is an improved face super-resolution reconstruction method based on nearest feature line manifold learning, comprising the following steps.

Step 1: convert the input low-resolution face image, the high-resolution training set and the low-resolution training set into vectors, and divide each image into mutually overlapping image blocks of equal size.

Step 2: for each image block in the input low-resolution face image, take the block at the corresponding position of every low-resolution sample image in the low-resolution training set as a sample point, establish the low-resolution face sample block space, and compute the K nearest projection points in that space; when a projection point falls on the extrapolation of the line connecting two sample points, identify the projections that do not fit reality according to the constraint parameter W and compute realistic nearest points to replace them.
Step 3: for each image block in the input low-resolution face image, perform linear reconstruction with the K nearest projection points on the low-resolution face sample block space obtained in step 2, to obtain the linear reconstruction weight coefficients.

Step 4: for each image block in the input low-resolution face image, take the block at the corresponding position of every high-resolution sample image in the high-resolution training set as a sample point, establish the high-resolution face sample block space, and find the K sample points in it that correspond to the K nearest projection points of step 2.

Step 5: replace the K nearest projection points of step 2 with the K high-resolution sample points of step 4, and reconstruct the high-resolution image block as their weighted combination using the coefficients of step 3.

Step 6: superpose all weighted, reconstructed high-resolution blocks by position, then divide each pixel by the number of times its position was covered, yielding the reconstructed high-resolution face image.
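The overlap-add averaging of step 6 can be sketched as follows; this is a minimal illustration, and the function and variable names are illustrative rather than taken from the patent:

```python
import numpy as np

def overlap_add(blocks, positions, image_shape, block_size):
    """Overlap-add reconstruction: sum each reconstructed high-resolution
    block into its position, then divide every pixel by the number of
    blocks that covered it."""
    acc = np.zeros(image_shape)
    cnt = np.zeros(image_shape)
    for blk, (r, c) in zip(blocks, positions):
        acc[r:r + block_size, c:c + block_size] += blk
        cnt[r:r + block_size, c:c + block_size] += 1
    # maximum(cnt, 1) guards against division by zero at uncovered pixels
    return acc / np.maximum(cnt, 1)
```

Pixels covered by several overlapping blocks are averaged, which smooths block-boundary artefacts.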
Further, in step 1, the input low-resolution face image, the high-resolution training set and the low-resolution training set are converted into one-dimensional vectors, giving the low-resolution image x to be reconstructed, the high-resolution training samples Y = {Y_i | 1 ≤ i ≤ N} and the low-resolution training samples X = {X_i | 1 ≤ i ≤ N}, where N is the number of training sample images in each training set.
After dividing the low-resolution image x to be reconstructed and each training sample image in Y and X into mutually overlapping image blocks of equal size, the set of low-resolution blocks to be reconstructed, the high-resolution training block sets and the low-resolution training block sets are formed as {x_i | 1 ≤ i ≤ M}, {H_i | 1 ≤ i ≤ M} and {L_i | 1 ≤ i ≤ M} respectively, where M is the number of blocks each image is divided into.
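The division into mutually overlapping, equal-sized blocks can be sketched as below; the block side of 7 and overlap of 4 come from the embodiment, while the function and variable names are illustrative:

```python
import numpy as np

def extract_blocks(image, block, overlap):
    """Divide an image into mutually overlapping, equal-sized blocks.
    `block` is the block side length and `overlap` the number of pixels
    shared by adjacent blocks (7 and 4 in the embodiment)."""
    step = block - overlap
    h, w = image.shape
    blocks, positions = [], []
    for r in range(0, h - block + 1, step):
        for c in range(0, w - block + 1, step):
            blocks.append(image[r:r + block, c:c + block])
            positions.append((r, c))
    return blocks, positions
```

With the embodiment's 112 × 100 images, block size 7 and overlap 4, this grid yields 36 × 32 = 1152 blocks per image and covers every pixel.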
In step 2, for each image block in the low-resolution face image, calculating K nearest projection points in the low-resolution face sample block space specifically includes steps 2.1-2.6:
Step 2.1: for the t-th image block x_t in the set of low-resolution blocks to be reconstructed, extract the t-th block of every training sample image in the high-resolution and low-resolution training sets, forming the high-resolution training block set H_t = {h_t^j | 1 ≤ j ≤ N} and the low-resolution training block set L_t = {l_t^j | 1 ≤ j ≤ N}.
Step 2.2: from the low-resolution training block set L_t, select the Kpre sample image blocks with the smallest Euclidean distance to x_t, forming the screened low-resolution neighbour block set L_t^Kpre, the neighbourhood set of x_t, which contains Kpre image blocks.
Step 2.3: connect every two sample points l_t^{j1} and l_t^{j2} in the screened low-resolution neighbour block set L_t^Kpre to form Kpre(Kpre − 1)/2 feature lines L(l_t^{j1}, l_t^{j2}), where j1 and j2 are integers with 1 ≤ j1 < j2 ≤ Kpre.
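Enumerating the pairwise feature lines of step 2.3 is a combinations problem; a minimal sketch (names illustrative):

```python
from itertools import combinations

def feature_line_pairs(n_samples):
    """All unordered index pairs (j1, j2) of pre-screened sample points;
    each pair defines one feature line, giving n(n-1)/2 lines."""
    return list(combinations(range(n_samples), 2))
```

With the embodiment's Kpre = 60 pre-screened blocks this produces 60 · 59 / 2 = 1770 feature lines per query block.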
Step 2.4: compute the projection point p of the input image block x_t on every feature line L(l_t^{j1}, l_t^{j2}):

p = l_t^{j1} + μ·(l_t^{j2} − l_t^{j1}), where the position parameter μ = ((x_t − l_t^{j1})·(l_t^{j2} − l_t^{j1})) / ‖l_t^{j2} − l_t^{j1}‖².

The distance between the input image block x_t and the feature line L(l_t^{j1}, l_t^{j2}) can then be taken as the distance between x_t and the projection point p, i.e. d(x_t, L(l_t^{j1}, l_t^{j2})) = ‖x_t − p‖.
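The projection of a query block onto a feature line (step 2.4) can be sketched as follows; the names are illustrative, and a position parameter μ outside [0, 1] signals the extrapolation case handled in step 2.5:

```python
import numpy as np

def project_on_feature_line(x, a, b):
    """Project query block x onto the feature line through sample
    points a and b. Returns the projection p and the position
    parameter mu; mu outside [0, 1] means p lies on the extrapolation
    of segment ab."""
    d = b - a
    mu = float(np.dot(x - a, d) / np.dot(d, d))
    p = a + mu * d
    return p, mu
```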
Step 2.5: treat each projection point p according to its actual position. When p does not fall on the extrapolation of the line through the sample points l_t^{j1} and l_t^{j2} (i.e. 0 ≤ μ ≤ 1), it lies on the segment between the two sample points and need not be replaced. When p falls on the extrapolation, compute its Euclidean distances d1 = ‖p − l_t^{j1}‖ and d2 = ‖p − l_t^{j2}‖ to the two sample points; if the sum d1 + d2 is greater than the Euclidean distance ‖l_t^{j1} − l_t^{j2}‖ between the two sample points multiplied by the constraint parameter W, replace the projection point by the nearer of the two sample points before putting it into the candidate sample set; otherwise keep the projection point. The treatment is symmetric in the two sample points.
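The replacement rule of step 2.5 can be sketched in Python; this is a minimal illustration of the patent's stated condition (sum of distances to the two sample points greater than W times the inter-sample distance), with illustrative names and the embodiment's W = 1.25 as default:

```python
import numpy as np

def constrained_projection(x, a, b, W=1.25):
    """Keep the projection p when it falls on segment ab; when it falls
    on the extrapolation and the sum of its distances to a and b exceeds
    W times |ab|, replace it with the nearer sample point."""
    d = b - a
    mu = float(np.dot(x - a, d) / np.dot(d, d))
    p = a + mu * d
    if 0.0 <= mu <= 1.0:
        return p                        # on the segment: no replacement
    da, db = np.linalg.norm(p - a), np.linalg.norm(p - b)
    if da + db > W * np.linalg.norm(b - a):
        return a if da < db else b      # replace with the nearer sample
    return p                            # mild extrapolation: keep p
```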
Step 2.6: using the (possibly replaced) projection points obtained in step 2.5, search among them for the K projection points nearest to the image block x_t, i.e. the projections on the K feature lines at the smallest distance from x_t, forming the low-resolution nearest-neighbour projection point set P_t^K = {p_t^k | k ∈ C(t)}, where C(t) is the index set of the K nearest-neighbour projection points.
In step 3, the t-th image block x_t in the input set of low-resolution blocks to be reconstructed is linearly reconstructed from the set P_t^K of K nearest-neighbour projection points screened in step 2.6, giving the target reconstruction weights W_t.
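The patent treats the weight solve of step 3 as prior art; the sketch below is the standard locally-linear-embedding-style constrained least squares commonly used in this line of work, offered as an assumption rather than as the patent's exact procedure (names illustrative):

```python
import numpy as np

def reconstruction_weights(x, P, reg=1e-6):
    """Solve for weights w so that sum_k w_k * P[k] approximates x with
    sum(w) = 1. P has one projection point per row; `reg` regularizes
    the local Gram matrix against singularity."""
    D = P - x                      # differences to the query point
    G = D @ D.T                    # local Gram (covariance) matrix
    G += reg * np.trace(G) * np.eye(len(P))
    w = np.linalg.solve(G, np.ones(len(P)))
    return w / w.sum()             # enforce the sum-to-one constraint
```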
In step 4, for the t-th image block x_t in the input set of low-resolution blocks to be reconstructed, the K projection point image blocks in the high-resolution training block set H_t corresponding to the projection points in P_t^K are computed, forming the high-resolution nearest-neighbour projection point set H_t^K.
In step 5, for the t-th image block x_t in the input set of low-resolution blocks to be reconstructed, the high-resolution nearest-neighbour projection point set H_t^K obtained in step 4 is linearly combined with the synthesis coefficients W_t obtained in step 3 to form the target high-resolution image block y_t:

y_t = Σ_{k∈C(t)} W_t(k) · h_t^k.
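The weighted synthesis of step 5 is a single weighted sum of the high-resolution counterparts; a minimal sketch with illustrative names:

```python
import numpy as np

def synthesize_high_res_block(H_K, w):
    """Combine the K high-resolution counterparts (one per row of H_K)
    with the reconstruction weights w found on the low-resolution side."""
    return np.asarray(w) @ np.asarray(H_K)
```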
Further, in step 2.5 the constraint parameter W takes the value 1.25.
The main advantage of the invention is that, within the steps of the existing face super-resolution reconstruction method based on nearest feature line manifold learning, the case in which a projection point falls on the extrapolation of the line connecting two sample points is treated separately: if the sum of the Euclidean distances from the projection point to the two sample points is greater than W times the Euclidean distance between the two sample points, the sample point nearer to the projection point replaces it in the point set to be screened. Constraining the projection points in this way keeps them strongly correlated with the sample points, greatly improves how well the newly generated sample data express the input low-resolution image block, avoids as far as possible the introduction of detail that does not exist in the original image, and improves the reconstruction of the low-resolution image.
Setting the constraint parameter W to 1.25 further yields a better reconstruction effect when reconstructing low-resolution images.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a schematic diagram of a projection point falling on the extrapolation of the line connecting two sample points;
FIG. 3 is a schematic diagram of changes in an objective evaluation index PSNR under different constraints W according to the present invention;
fig. 4 is a schematic diagram of a change situation of the objective evaluation index SSIM under different constraint conditions W in the present invention.
Detailed Description
The technical scheme of the invention can be implemented as an automated software pipeline. It is further explained below with reference to an embodiment and the accompanying drawings. As shown in FIG. 1, the improved face super-resolution reconstruction method based on nearest feature line manifold learning comprises the following steps.
Step 1: convert the input low-resolution face image, the high-resolution training set and the low-resolution training set into one-dimensional vectors, giving the low-resolution image x to be reconstructed, the high-resolution training samples Y and the low-resolution training samples X; then divide x and every training sample image in Y and X into mutually overlapping image blocks of equal size, forming the set of low-resolution blocks to be reconstructed {x_i | 1 ≤ i ≤ M}, the high-resolution training block sets {H_i | 1 ≤ i ≤ M} and the low-resolution training block sets {L_i | 1 ≤ i ≤ M}, where M is the number of blocks each image is divided into.
This embodiment uses the CAS-PEAL-R1 face library, collected in a dedicated experimental environment, which covers 30871 face images of 1040 individuals under different postures, illuminations and expressions. 1040 face images with neutral expression and normal illumination are selected from the database; the face region of each image is cropped to 112 × 100 pixels, the nose tip, the two mouth corners and the two eye centres are manually labelled as feature points, and the images are aligned by affine transformation to obtain the high-resolution training set. The low-resolution training set is obtained by blurring and 4× down-sampling the high-resolution set; 1000 images serve as training samples and 40 as test images.
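The blur-and-downsample generation of the low-resolution training set can be sketched as below. The patent does not specify the blur kernel, so the box average used here is an assumption, and the names are illustrative:

```python
import numpy as np

def make_low_res(hr, factor=4):
    """Generate a low-resolution training sample from a high-resolution
    one by averaging (a simple box blur) over factor x factor cells and
    down-sampling by `factor` (4 in the embodiment)."""
    h, w = hr.shape
    h, w = h - h % factor, w - w % factor          # crop to a multiple of factor
    return hr[:h, :w].reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
```

Applied to the embodiment's 112 × 100 images this yields 28 × 25 low-resolution samples.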
The embodiment involves five parameters in total: the number Kpre of pre-screened image blocks, the number K of nearest-neighbour projections, the size of the image blocks in the three block sets, the number of overlapping pixels between adjacent blocks, and the constraint parameter W used to compare sample points and projection points. The block size is set to 7 × 7 and the overlap between adjacent blocks to 4 pixels; to keep the experiments tractable, the remaining parameters are tested separately. W must be greater than 1: for a projection point on the extrapolation line, the sum of its distances to the two sample points always exceeds the inter-sample distance, so only W > 1 allows any extrapolated projection point to be kept. Kpre satisfies 1 ≤ Kpre ≤ 1040, but too small a value gives poor results for lack of sample data, while too large a value greatly increases the algorithm's complexity and multiplies the experimental effort geometrically. After nearest feature line processing the total number of projection points is Kpre(Kpre − 1)/2, and K is required to be greater than 3 so that enough sample data are obtained for a good effect.
Step 2: for each image block in the input low-resolution face image, take the block at the corresponding position of every low-resolution sample image in the low-resolution training set as a sample point, establish the low-resolution face sample block space, and compute the K nearest projection points in that space; when a projection point falls on the extrapolation of the line connecting two sample points, identify the projections that do not fit reality according to the constraint parameter W and compute realistic nearest points to replace them.
In this step, for each image block in the low-resolution face image, calculating K nearest projection points in the low-resolution face sample block space specifically includes steps 2.1-2.6:
Step 2.1: for the t-th image block x_t in the set of low-resolution blocks to be reconstructed, extract the t-th block of every training sample image in the high-resolution and low-resolution training sets, forming the high-resolution training block set H_t and the low-resolution training block set L_t; L_t represents the low-resolution face sample block space and H_t the high-resolution face sample block space.
Step 2.2: from the low-resolution training block set L_t, select the Kpre sample image blocks with the smallest Euclidean distance to x_t, forming the screened low-resolution neighbour block set L_t^Kpre, the neighbourhood set of x_t, which contains Kpre image blocks.
Step 2.3: connect every two sample points l_t^{j1} and l_t^{j2} in the screened low-resolution neighbour block set L_t^Kpre to form Kpre(Kpre − 1)/2 feature lines L(l_t^{j1}, l_t^{j2}), where j1 and j2 are integers with 1 ≤ j1 < j2 ≤ Kpre.
Step 2.4: compute the projection point p of the input image block x_t on every feature line L(l_t^{j1}, l_t^{j2}):

p = l_t^{j1} + μ·(l_t^{j2} − l_t^{j1}), where the position parameter μ = ((x_t − l_t^{j1})·(l_t^{j2} − l_t^{j1})) / ‖l_t^{j2} − l_t^{j1}‖².

The distance between the input image block x_t and the feature line L(l_t^{j1}, l_t^{j2}) can then be taken as the distance between x_t and the projection point p, i.e. d(x_t, L(l_t^{j1}, l_t^{j2})) = ‖x_t − p‖.
Step 2.5: treat each projection point p according to its actual position. When p does not fall on the extrapolation of the line through the sample points l_t^{j1} and l_t^{j2} (i.e. 0 ≤ μ ≤ 1), it lies on the segment between the two sample points and need not be replaced. When p falls on the extrapolation, compute its Euclidean distances d1 = ‖p − l_t^{j1}‖ and d2 = ‖p − l_t^{j2}‖ to the two sample points. If the sum d1 + d2 is greater than W · ‖l_t^{j1} − l_t^{j2}‖, replace the projection point by the nearer of the two sample points before putting it into the candidate sample set (the case d1 + d2 < ‖l_t^{j1} − l_t^{j2}‖ cannot occur, by the triangle inequality, and is not considered in the present invention); otherwise the projection point effectively lies close to the segment between the two sample points and need not be replaced. The treatment is symmetric in the two sample points.
As shown in FIG. 2, the input query point x_i has projections on two feature lines; x_i is closer to the first projection, so x_i and that projection have more similar characteristics. But if that projection lies too far from the sample points defining its feature line, the unimproved face super-resolution algorithm based on nearest feature line manifold learning would still select it preferentially, which does not match reality. Therefore, for the feature line formed by two sample points, if the projection of the input query point x_i falls on its extrapolation and the sum of its Euclidean distances to the two sample points is greater than W times the Euclidean distance between the two sample points, the sample point closer to the projection point replaces it in the point set to be screened. This restriction keeps the projection points strongly correlated with the sample points and greatly improves how well the newly generated sample data express the input low-resolution image block.
To better determine how different values of W affect the reconstruction results of the improved algorithm, the PSNR and SSIM values of the reconstructed faces are measured under different W so that the algorithm's performance can be analysed. PSNR (peak signal-to-noise ratio) is an objective image-quality criterion: the larger the PSNR, the less the distortion. SSIM (structural similarity) is an index measuring the similarity of two images; it ranges from −1 to 1 and equals 1 when the two images are identical.
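The two evaluation indices can be sketched as follows. The SSIM here is a single-window (global) simplification of the usual sliding-window index, sufficient to illustrate the metric; names and the 255 peak value are illustrative assumptions:

```python
import numpy as np

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio: higher means less distortion."""
    mse = np.mean((ref.astype(float) - img.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10 * np.log10(peak ** 2 / mse)

def ssim_global(ref, img, peak=255.0):
    """Global SSIM over the whole image; equals 1 for identical images."""
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2   # standard stabilizers
    mx, my = ref.mean(), img.mean()
    vx, vy = ref.var(), img.var()
    cov = ((ref - mx) * (img - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))
```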
Referring to FIG. 3 and FIG. 4, as the constraint parameter W increases, the objective indices PSNR and SSIM first rise and then fall; as W keeps growing, both indices gradually approach those of the unimproved face super-resolution algorithm based on nearest feature line manifold learning. The PSNR of the reconstructed image peaks at W = 1.25 and the SSIM at W = 1.7; because the SSIM varies only slightly, the constraint parameter W is set to 1.25 in this embodiment.
The reconstruction quality changes with W because the selective strength of the constraint on the projection points weakens as W grows. When W is very small, projection points lying just outside the segment, at a very small Euclidean distance from the nearest sample point, are also replaced, which temporarily lowers the reconstruction quality. As W increases, the constraint fails to replace in time the projection points that are far from the sample points and that harm the reconstruction, so the result again becomes unsatisfactory. With the other parameters fixed, the number of pre-selected points and the number of nearest-neighbour projections were tested separately; the best experimental results in this embodiment are obtained with Kpre = 60 and K = 30.
Step 2.6: using the (possibly replaced) projection points obtained in step 2.5, search among them for the K projection points nearest to the image block x_t, i.e. the projections on the K feature lines at the smallest distance from x_t, forming the low-resolution nearest-neighbour projection point set P_t^K = {p_t^k | k ∈ C(t)}, where C(t) is the index set of the K nearest-neighbour projection points.
Step 3: for each image block in the input low-resolution face image, perform linear reconstruction with the K nearest projection points on the low-resolution face sample block space obtained in step 2, to obtain the linear reconstruction weight coefficients.
Specifically, in step 3 the t-th image block x_t in the input set of low-resolution blocks to be reconstructed is linearly reconstructed from the set of K nearest-neighbour projection points screened in step 2.6, giving the target reconstruction weights W_t; the computation of W_t belongs to the prior art and is not detailed here.
Step 4, for each image block in the input low-resolution face image, take the image block at the corresponding position of each high-resolution face sample image in the high-resolution training set as a sample point to establish a high-resolution face sample block space, and compute the K sample points in the high-resolution face sample block space that correspond to the K nearest projection points on the low-resolution face sample block space obtained in step 2.
Step 4 specifically: for the t-th image block x_t in the input low-resolution image block set to be reconstructed, compute in the high-resolution training image block set H_t the K image blocks corresponding to the projection points in the low-resolution nearest-neighbor sample projection point set L_t^K; these form the high-resolution nearest-neighbor sample projection point set H_t^K, where C(t) again indexes the corresponding sample blocks.
Step 5, replace the K nearest projection points on the low-resolution face sample block space obtained in step 2 with the K sample points on the high-resolution face sample block space obtained in step 4, and weight-reconstruct a high-resolution image block using the weight coefficients obtained in step 3.
Step 5 specifically: for the t-th image block x_t in the input low-resolution image block set to be reconstructed, linearly synthesize the target high-resolution image block y_t from the high-resolution nearest-neighbor sample projection point set H_t^K obtained in step 4 with synthesis coefficients W_t: y_t = H_t^K · W_t.
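Step 5 amounts to reusing the weights computed on the low-resolution manifold with the high-resolution basis swapped in. A one-line sketch (the function name is illustrative):

```python
import numpy as np

def synthesize_patch(H_K, w_t):
    """Step 5: the target high-resolution patch y_t is the linear combination
    of the K high-resolution projection-point blocks (columns of H_K) with
    the weights w_t obtained on the low-resolution manifold in step 3."""
    return H_K @ w_t
```

The underlying assumption of the method is that low- and high-resolution patch manifolds share local geometry, so the same weights transfer across resolutions.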
Step 6, superpose all the weighted, reconstructed high-resolution image blocks by position, then divide each pixel by the number of times its position was superposed, thereby reconstructing the high-resolution face image.
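Step 6 is an overlap-add with per-pixel averaging. The sketch below assumes square patches of side p extracted at known top-left positions (how step 1 tiles the image is not fully specified here, so the grid layout is an assumption):

```python
import numpy as np

def overlap_add(patches, positions, image_shape, p):
    """Accumulate reconstructed patches at their positions and divide each
    pixel by the number of patches covering it, as described in step 6."""
    acc = np.zeros(image_shape)
    cnt = np.zeros(image_shape)
    for patch, (r, c) in zip(patches, positions):
        acc[r:r + p, c:c + p] += np.asarray(patch).reshape(p, p)
        cnt[r:r + p, c:c + p] += 1.0
    return acc / np.maximum(cnt, 1.0)   # guard against uncovered pixels
```

Pixels in overlap regions receive the average of all patches covering them, which suppresses blocking artifacts at patch boundaries.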
In summary, building on the existing face super-resolution reconstruction method based on nearest-feature-line manifold learning, the invention handles separately the case where a projection point falls on the extrapolation of the line connecting two sample points: when the Euclidean distance from the projection point to the two sample points exceeds W times the Euclidean distance between the two sample points, the sample point closer to the projection point replaces the projection point, and a point set to be screened is formed. This restricts the projection points to those strongly correlated with the sample points, greatly improves the ability of the newly obtained sample data to express the input low-resolution image block, avoids as far as possible introducing detail information absent from the original image, and improves the reconstruction of the low-resolution image. The work was supported by a National Natural Science Foundation of China project (No. U1404618) and a Henan Province science and technology development plan project (No. 172102210186), and has good research value in the technical field of image processing.
Finally, it should be noted that the above examples are intended only to illustrate the technical solution of the present invention, not to limit it. Although the invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features equivalently replaced, without departing from the scope of the technical solutions of the embodiments of the present invention.
Claims (3)
1. An improved face super-resolution reconstruction method based on nearest feature line manifold learning is characterized by comprising the following steps:
step 1, inputting a low-resolution face image, and dividing the input low-resolution face image, a low-resolution face sample image in a low-resolution training set and a high-resolution face sample image in a high-resolution training set into mutually overlapped image blocks;
step 2, for each image block in the input low-resolution face image, taking the image block at the corresponding position of each low-resolution face sample image in the low-resolution training set as a sample point, establishing a low-resolution face sample block space, and calculating K nearest projection points on the low-resolution face sample block space, wherein, for the case where a projection point falls on the extrapolation of the line connecting two sample points, the projection points that do not conform to reality are identified according to a constraint parameter W, and the nearest points that conform to reality are calculated as substitutes;
step 3, for each image block in the input low-resolution face image, performing linear reconstruction by using K nearest projection points in the low-resolution face sample block space obtained in the step 2 to obtain a weight coefficient of the linear reconstruction;
step 4, for each image block in the input low-resolution face image, taking the image block at the corresponding position of each high-resolution face sample image in the high-resolution training set as a sample point, establishing a high-resolution face sample block space, and calculating K sample points which respectively correspond to K nearest projection points on the low-resolution face sample block space obtained in the step 2 in the high-resolution face sample block space;
step 5, replacing K nearest projection points on the low-resolution face sample block space obtained in the step 2 with K sample points on the high-resolution face sample block space obtained in the step 4, and weighting and reconstructing a high-resolution image block by using the weighting coefficient obtained in the step 3;
and step 6, superposing all the weighted, reconstructed high-resolution image blocks by position, then dividing each pixel by the number of times its position was superposed, to reconstruct the high-resolution face image.
2. The improved face super-resolution reconstruction method based on nearest eigen-line manifold learning of claim 1, characterized in that:
in step 1, the input low-resolution face image, the high-resolution training set and the low-resolution training set are respectively converted into one-dimensional vectors, obtaining the low-resolution image x to be reconstructed, the high-resolution image training sample set Y and the low-resolution image training sample set X, where N represents the number of training sample images in the high-resolution and low-resolution image training samples;
after the low-resolution image x to be reconstructed, each training sample image in the high-resolution image training samples Y, and each training sample image in the low-resolution image training samples X are divided into mutually overlapping image blocks of equal size, the low-resolution image block set to be reconstructed is formed as {x_i | 1 ≤ i ≤ M}, and the high-resolution and low-resolution image training sample block sets are formed analogously, where M represents the number of image blocks into which each image is divided;
in step 2, for each image block in the low-resolution face image, calculating the K nearest projection points in the low-resolution face sample block space specifically comprises steps 2.1-2.6:
step 2.1, for the t-th image block x_t in the low-resolution image block set to be reconstructed, extract the t-th image block of each divided training sample image in the high-resolution image training sample set and the low-resolution image training sample set, forming the high-resolution training image block set H_t and the low-resolution training image block set L_t;
step 2.2, from the low-resolution training image block set L_t, select the Kpre sample image blocks with the smallest Euclidean distance to image block x_t, forming the screened low-resolution neighbor image block set L_t^Kpre, where L_t^Kpre denotes the neighborhood set of x_t and Kpre denotes the number of image blocks in the neighborhood;
step 2.3, connect any two sample points x_t^{j1} and x_t^{j2} in the screened low-resolution neighbor image block set L_t^Kpre to form feature lines x_t^{j1} x_t^{j2}, where j1 and j2 are integers and 1 ≤ j1 ≤ j2 ≤ N;
step 2.4, calculate the projection point p_t^{j1,j2} of the input image block x_t on each feature line x_t^{j1} x_t^{j2}: p_t^{j1,j2} = x_t^{j1} + μ (x_t^{j2} − x_t^{j1}), where μ denotes the position parameter, μ = ((x_t − x_t^{j1}) · (x_t^{j2} − x_t^{j1})) / ((x_t^{j2} − x_t^{j1}) · (x_t^{j2} − x_t^{j1}));
the distance between the input image block x_t and the feature line x_t^{j1} x_t^{j2} can then be taken as the distance between x_t and the projection point p_t^{j1,j2}, i.e. d(x_t, x_t^{j1} x_t^{j2}) = ||x_t − p_t^{j1,j2}||;
step 2.5, treat the projection point p_t^{j1,j2} according to the actual situation: when p_t^{j1,j2} does not fall on the extrapolation of the line through the sample points x_t^{j1} and x_t^{j2}, it falls on the line segment between the two sample points and need not be replaced; when p_t^{j1,j2} falls on the extrapolation, compute its Euclidean distances to the sample points x_t^{j1} and x_t^{j2}, d1 = ||p_t^{j1,j2} − x_t^{j1}|| and d2 = ||p_t^{j1,j2} − x_t^{j2}||, and compare the smaller of them with the Euclidean distance d12 between the two sample points multiplied by the constraint parameter W; if d1 is the smaller and d1 > W · d12, let the projection point be x_t^{j1} and put it into the sample set to be selected; if d2 is the smaller, the processing is the same;
step 2.6, according to the distances obtained in step 2.5, search the low-resolution neighbor image block set L_t^Kpre for the K nearest neighbor projection points of image block x_t, which is equivalent to finding the K feature-line projection points at the smallest distance from x_t; these form the low-resolution nearest-neighbor sample projection point set L_t^K, where C(t) is the set of subscripts of the K nearest-neighbor sample projection points;
in step 3, for the t-th image block x_t in the input low-resolution image block set to be reconstructed, linear reconstruction is performed using the low-resolution nearest-neighbor sample projection point set L_t^K formed in step 2.6 from the K nearest-neighbor sample projection points screened out of the low-resolution neighbor image block set L_t^Kpre, obtaining the target reconstruction weight W_t;
in step 4, for the t-th image block x_t in the input low-resolution image block set to be reconstructed, the K image blocks corresponding to the projection points in the low-resolution nearest-neighbor sample projection point set L_t^K are computed in the high-resolution training image block set H_t, forming the high-resolution nearest-neighbor sample projection point set H_t^K, where C(t) again indexes the corresponding sample blocks;
in step 5, for the t-th image block x_t in the input low-resolution image block set to be reconstructed, the target high-resolution image block y_t is linearly synthesized from the high-resolution nearest-neighbor sample projection point set H_t^K obtained in step 4 with synthesis coefficients W_t: y_t = H_t^K · W_t.
3. The improved face super-resolution reconstruction method based on nearest eigen-line manifold learning of claim 2, characterized in that: in step 2.5, the value of the constraint parameter W is 1.25.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710817616.1A CN107680037B (en) | 2017-09-12 | 2017-09-12 | Improved face super-resolution reconstruction method based on nearest characteristic line manifold learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107680037A CN107680037A (en) | 2018-02-09 |
CN107680037B true CN107680037B (en) | 2020-09-29 |
Family
ID=61135193
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710817616.1A Active CN107680037B (en) | 2017-09-12 | 2017-09-12 | Improved face super-resolution reconstruction method based on nearest characteristic line manifold learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107680037B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108447020A (en) * | 2018-03-12 | 2018-08-24 | 南京信息工程大学 | A kind of face super-resolution reconstruction method based on profound convolutional neural networks |
CN109359655B (en) * | 2018-09-18 | 2021-07-16 | 河南大学 | Image segmentation method based on context regularization cycle deep learning |
CN109343692B (en) * | 2018-09-18 | 2021-07-23 | 河南大学 | Mobile device display power saving method based on image segmentation |
CN113516588B (en) * | 2021-04-26 | 2024-07-02 | 中国工商银行股份有限公司 | Image generation method, device and equipment |
CN114549323A (en) * | 2022-02-28 | 2022-05-27 | 福建师范大学 | Robust face super-resolution processing method and system based on empirical relationship deviation correction |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2004075093A2 (en) * | 2003-02-14 | 2004-09-02 | University Of Rochester | Music feature extraction using wavelet coefficient histograms |
CN102402784A (en) * | 2011-12-16 | 2012-04-04 | 武汉大学 | Human face image super-resolution method based on nearest feature line manifold learning |
CN103336960A (en) * | 2013-07-26 | 2013-10-02 | 电子科技大学 | Human face identification method based on manifold learning |
CN103824272A (en) * | 2014-03-03 | 2014-05-28 | 武汉大学 | Face super-resolution reconstruction method based on K-neighboring re-recognition |
CN104112147A (en) * | 2014-07-25 | 2014-10-22 | 哈尔滨工业大学深圳研究生院 | Nearest feature line based facial feature extracting method and device |
CN104933692A (en) * | 2015-07-02 | 2015-09-23 | 中国地质大学(武汉) | Reconstruction method and apparatus for the super-resolution of a face |
CN105023240A (en) * | 2015-07-08 | 2015-11-04 | 北京大学深圳研究生院 | Dictionary-type image super-resolution system and method based on iteration projection reconstruction |
CN105488776A (en) * | 2014-10-10 | 2016-04-13 | 北京大学 | Super-resolution image reconstruction method and apparatus |
CN107133921A (en) * | 2016-02-26 | 2017-09-05 | 北京大学 | The image super-resolution rebuilding method and system being embedded in based on multi-level neighborhood |
Non-Patent Citations (3)
Title |
---|
An improved classifier based on nearest feature line; Youfu Du, et al.; 《2012 International Conference on Information Security and》; 2013-02-07; pp. 321-324 *
Research on face image feature extraction and classification algorithms; Xu Zheng; China Master's Theses Full-text Database, Information Science and Technology; 2012-04-15; I138-1931 *
Application of an improved K-nearest feature line algorithm in text classification; Tan Guanqun, Ding Huafu; Journal of Harbin University of Science and Technology; 2008-12-31; Vol. 13, No. 6, pp. 19-22 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107680037B (en) | Improved face super-resolution reconstruction method based on nearest characteristic line manifold learning | |
CN111311563B (en) | Image tampering detection method based on multi-domain feature fusion | |
CN110135366B (en) | Shielded pedestrian re-identification method based on multi-scale generation countermeasure network | |
Liu et al. | LF-YOLO: A lighter and faster yolo for weld defect detection of X-ray image | |
CN109325550B (en) | No-reference image quality evaluation method based on image entropy | |
Wang et al. | LiSiam: Localization invariance Siamese network for deepfake detection | |
CN102402784B (en) | Human face image super-resolution method based on nearest feature line manifold learning | |
CN112818849B (en) | Crowd density detection algorithm based on context attention convolutional neural network for countermeasure learning | |
CN110648310A (en) | Weak supervision casting defect identification method based on attention mechanism | |
Li et al. | A review of deep learning methods for pixel-level crack detection | |
Li et al. | Image quality assessment using deep convolutional networks | |
CN111652240B (en) | CNN-based image local feature detection and description method | |
CN113344110A (en) | Fuzzy image classification method based on super-resolution reconstruction | |
CN112927783A (en) | Image retrieval method and device | |
Wang et al. | Small vehicle classification in the wild using generative adversarial network | |
US11481919B2 (en) | Information processing device | |
Li et al. | Adversarial domain adaptation via category transfer | |
Zhuang et al. | ReLoc: A restoration-assisted framework for robust image tampering localization | |
Liu et al. | Adaptive Texture and Spectrum Clue Mining for Generalizable Face Forgery Detection | |
CN116934820A (en) | Cross-attention-based multi-size window Transformer network cloth image registration method and system | |
Xiu et al. | Double discriminative face super-resolution network with facial landmark heatmaps | |
CN113222887A (en) | Deep learning-based nano-iron labeled neural stem cell tracing method | |
Yuan et al. | LR-ProtoNet: Meta-Learning for Low-Resolution Few-Shot Recognition and Classification | |
CN106204451B (en) | Based on the Image Super-resolution Reconstruction method for constraining fixed neighborhood insertion | |
Yan et al. | Multimodal Graph Learning for Deepfake Detection |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||