CN109447903B - Method for establishing semi-reference super-resolution reconstruction image quality evaluation model - Google Patents
Classifications
- G06T3/4053 — Scaling of whole images or parts thereof based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
- G06T3/4076 — Super-resolution scaling using the original low-resolution images to iteratively correct the high-resolution images
- G06T5/40 — Image enhancement or restoration using histogram techniques
- G06T5/70 — Denoising; smoothing
- G06T7/0002 — Inspection of images, e.g. flaw detection
- G06T7/40 — Analysis of texture
- G06T2207/20064 — Transform domain processing: wavelet transform [DWT]
- G06T2207/30168 — Image quality inspection
Abstract
The invention provides a method for establishing a semi-reference (reduced-reference) quality evaluation model for super-resolution reconstructed images. First, saliency detection is performed on the low-resolution image and the high-resolution image to extract the saliency features of each image. Second, each original image is multiplied by its saliency features, the product image is transformed with the discrete wavelet transform (DWT), the high-frequency information is extracted, and the information gain of the high-resolution image relative to the low-resolution image is computed. Third, texture features of the low-resolution and high-resolution images are extracted with the LBP operator and, combined with the saliency features, their texture similarity is compared via histograms. Finally, the information gain from step two and the texture similarity from step three are combined to construct the semi-reference quality evaluation model for super-resolution reconstructed images.
Description
Technical Field
The invention relates to a method for establishing a semi-reference quality evaluation model for super-resolution reconstructed images, and in particular to a method and device that extract structural and luminance distortion information via singular value decomposition of the grayscale image, and extract overall color distortion information via quaternion singular value decomposition.
Background
Image super-resolution reconstruction techniques are divided, according to the number of input images, into single-image and multi-image super-resolution reconstruction. Single-image super-resolution takes one image as input to produce a high-resolution image; multi-image super-resolution takes several low-resolution images of the same scene as input to reconstruct a high-resolution image. A high-resolution image carries more detail, has a higher pixel density, and exhibits finer image quality. The simplest and most direct way to obtain high-resolution images is to use a high-resolution camera; in many practical applications, however, this is unsuitable for reasons of engineering cost and manufacturing process. There is therefore real practical demand for obtaining high-resolution images through super-resolution reconstruction algorithms. Such techniques play an important role in many fields, such as computer vision, remote sensing, web browsing, medical imaging, high-definition television, and video surveillance.
Many image super-resolution reconstruction algorithms have been proposed in recent years, but how to evaluate their performance effectively remains an open problem. A simple and effective approach is subjective evaluation: different observers view and score the super-resolution reconstructed images, and the processed scores yield a final quality score for each image. Since human eyes are the ultimate recipients of images and video, subjective evaluation is reliable for assessing super-resolution reconstructed images. However, it is too costly and time-consuming, and it does not lend itself to parameter optimization of super-resolution reconstruction algorithms. An objective quality evaluation model is therefore highly valuable for super-resolution images and the corresponding reconstruction algorithms.
Objective evaluation of super-resolution reconstructed images is very challenging. Because no high-quality high-resolution image is available as a reference in practical applications, well-performing full-reference quality metrics such as the Peak Signal-to-Noise Ratio (PSNR), the Structural Similarity Index (SSIM), and the Multi-Scale Structural Similarity Index (MS-SSIM) are difficult to apply to super-resolution reconstructed images generated in practice. Moreover, the super-resolution reconstruction process introduces composite distortions such as blurring, ringing, and texture distortion, so existing general-purpose no-reference metrics with otherwise good performance, e.g. the Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE), the free-energy-based robust no-reference metric (NFERM), and Codebook Representation for No-Reference Image Assessment (CORNIA), also struggle to evaluate super-resolution reconstructed images effectively. For the relatively new problem of super-resolution reconstruction image quality evaluation, research on effective quality evaluation algorithms for the reconstructed images and the corresponding reconstruction algorithms is therefore significant.
Disclosure of Invention
The invention aims to solve the above problems by providing a semi-reference quality evaluation model for super-resolution reconstructed images built on two measures: information gain and texture similarity.
To this end, the invention provides a method for establishing a semi-reference super-resolution reconstruction image quality evaluation model, comprising the following steps:
Step one: perform saliency detection on the low-resolution image and the high-resolution image respectively, so as to extract the saliency features of each image;
Step two: multiply each original image by its saliency features, apply the discrete wavelet transform (DWT) to the product, extract the high-frequency information, and compute the information gain of the high-resolution image relative to the low-resolution image;
Step three: extract texture features of the low-resolution and high-resolution images with the LBP operator and, combined with the saliency features, compare their texture similarity via histograms;
Step four: combine the information gain from step two and the texture similarity from step three to construct the semi-reference super-resolution reconstruction image quality evaluation model.
Preferably, the saliency detection and saliency feature extraction in step one comprise the following:
the calculation formula of the salient feature M (r) extracted by the gazing detection module through visual salient feature extraction on the low-resolution image and the high-resolution image is as follows:
where r denotes the high-resolution or low-resolution image, rg denotes r after Gaussian filtering, C is a constant, P_x denotes the gradient in the x direction, P_y denotes the gradient in the y direction, s denotes the image signal, and * denotes convolution.
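The closed form of M(r) is not reproduced in this text (only its variable list survives), so the sketch below merely illustrates the ingredients that list names: a Gaussian-filtered copy rg of the input r, gradients in the x and y directions, and a stabilising constant C. Function names and parameter values here are illustrative assumptions, not the patent's formula.

```python
import numpy as np
from scipy import ndimage

def saliency_map(img, sigma=1.5, C=1e-3):
    """Sketch of a fixation-style saliency map M(r).

    Assumption: M(r) is built from the gradient magnitude of a
    Gaussian-filtered copy of the image (rg), normalised to [0, 1].
    sigma and C are illustrative parameters.
    """
    img = img.astype(np.float64)
    rg = ndimage.gaussian_filter(img, sigma=sigma)  # rg: Gaussian-filtered r
    Px = ndimage.sobel(rg, axis=1)                  # gradient in the x direction
    Py = ndimage.sobel(rg, axis=0)                  # gradient in the y direction
    mag = np.sqrt(Px ** 2 + Py ** 2)
    M = mag / (mag.max() + C)                       # normalise, stabilised by C
    return np.clip(M, 0.0, 1.0)
```

The map is later inverted as W(r) = 1 - M(r) and used as a weight matrix, so keeping it in [0, 1] is the only hard requirement of this sketch.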
Preferably, the specific steps of the second step are as follows:
A. invert the extracted saliency features M(r) to obtain a weight matrix W(r) = 1 - M(r), and weight the original low-resolution and high-resolution images with this matrix to obtain the saliency-weighted image information of each image;
B. apply the discrete wavelet transform (DWT) to the saliency-weighted image to obtain a horizontal high-frequency sub-band LH, a vertical high-frequency sub-band HL, a diagonal high-frequency sub-band HH, and a low-frequency sub-band LL;
C. compute the information entropy E of each sub-band of the low-resolution image LR and the high-resolution image HR separately,
where M and N denote the image size of each sub-band, i and j are the coordinate indices of the image, and P(i, j) denotes the wavelet coefficient of each sub-band image after DWT decomposition;
D. compute the sum S_1 of the increases in information entropy of the LH, HL, and HH sub-band images of the high-resolution image over those of the low-resolution image, S_1 = (E_LH − E′_LH) + (E_HL − E′_HL) + (E_HH − E′_HH),
where E_LH, E_HL, and E_HH denote the information entropy of the high-resolution sub-bands LH, HL, and HH, and E′_LH, E′_HL, and E′_HH denote the information entropy of the corresponding low-resolution sub-bands.
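The steps above can be sketched as follows. A single-level Haar DWT and the Shannon entropy of normalised coefficient magnitudes stand in for the unspecified wavelet and entropy formula; those two choices are assumptions, while the structure of S_1 (entropy gains of LH, HL, and HH, summed) follows the text. The saliency weighting W(r) is assumed to have been applied to the inputs upstream.

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2-D Haar DWT (an assumed wavelet; the text only says 'DWT')."""
    a = img[0::2, 0::2]
    b = img[0::2, 1::2]
    c = img[1::2, 0::2]
    d = img[1::2, 1::2]
    LL = (a + b + c + d) / 2.0   # low-frequency approximation
    LH = (a - b + c - d) / 2.0   # horizontal high-frequency sub-band
    HL = (a + b - c - d) / 2.0   # vertical high-frequency sub-band
    HH = (a - b - c + d) / 2.0   # diagonal high-frequency sub-band
    return LL, LH, HL, HH

def subband_entropy(coeffs, eps=1e-12):
    """Shannon entropy of normalised coefficient magnitudes (assumed form of E)."""
    p = np.abs(coeffs).ravel()
    p = p / (p.sum() + eps)
    return float(-np.sum(p * np.log2(p + eps)))

def info_gain(lr, hr):
    """S1: summed entropy increase of the HR high-frequency sub-bands over LR."""
    _, LH, HL, HH = haar_dwt2(hr.astype(np.float64))
    _, LHp, HLp, HHp = haar_dwt2(lr.astype(np.float64))
    return (subband_entropy(LH) - subband_entropy(LHp)
            + subband_entropy(HL) - subband_entropy(HLp)
            + subband_entropy(HH) - subband_entropy(HHp))
```

Because the entropies of LR and HR are computed independently, the two images need not have the same size, which is exactly the semi-reference setting.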
Preferably, the third step comprises the following specific steps:
A. use the LBP operator to extract the texture structure of the low-resolution image LR and the high-resolution image HR respectively, with the formula:
where P and R denote the number of neighbours and the neighbourhood radius, and g_c and g_i are the pixel values at the centre position and in the neighbourhood;
B. combine the obtained texture structure with the saliency-weighted image information to obtain the saliency texture map of the image
U=LBP(r)·W(r);
C. compare the saliency texture maps obtained from the low-resolution and high-resolution images using their histograms to obtain the degree of texture similarity S_2,
H = hist(U),
where H is the histogram distribution of the saliency texture map U, LR denotes low resolution, HR denotes high resolution, n is the number of histogram bins, and c is a constant.
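A minimal sketch of step three, assuming the basic P = 8, R = 1 LBP operator and using histogram intersection as a stand-in for the patent's unspecified similarity formula; the saliency weighting W(r) is omitted here for brevity, and the constant c plays the stabilising role described in the text.

```python
import numpy as np

def lbp8(img):
    """Basic 8-neighbour LBP code map (P = 8, R = 1), a sketch of the operator."""
    g = img.astype(np.float64)
    c = g[1:-1, 1:-1]                                   # centre pixels g_c
    code = np.zeros(c.shape, dtype=np.int64)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        # neighbour g_i, shifted view of the same size as the centre block
        nb = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.int64) << bit       # threshold against g_c
    return code

def texture_similarity(lr, hr, bins=256, c=1e-6):
    """S2 sketch: compare LBP-code histograms of the LR and HR images.

    Histogram intersection is an assumed stand-in for the patent's
    similarity formula; it yields a value in (0, 1], larger = more similar.
    """
    h_lr, _ = np.histogram(lbp8(lr), bins=bins, range=(0, 256), density=True)
    h_hr, _ = np.histogram(lbp8(hr), bins=bins, range=(0, 256), density=True)
    return float(np.minimum(h_lr, h_hr).sum() / (np.maximum(h_lr, h_hr).sum() + c))
```

Comparing histograms rather than code maps is what makes the comparison size-independent, so an LR image can be compared directly with a larger HR image.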
Preferably, in step four, the visual quality of the image is predicted by combining the information gain of the high-resolution image over the low-resolution image with the degree of texture similarity, where α and β are tuning parameters.
The beneficial effects of the invention are as follows. The disclosed method uses the low-resolution image as the reference information, which, unlike other methods, is feasible in practical applications. Aiming at the structural degradation and the characteristic ringing and blur distortions introduced during super-resolution reconstruction, it detects and extracts image saliency and combines it with the texture features of the image. This matches the way the human visual system concentrates on the detailed regions of a super-resolution reconstructed image, so the method agrees more closely with subjective quality judgments of such images and, compared with general-purpose image quality evaluation methods, is more targeted and yields more accurate predictions.
Drawings
FIG. 1 is a flow chart of a method for establishing a semi-reference super-resolution reconstruction image quality evaluation model provided by the invention;
FIG. 2 is a graph of information gain versus quality assessment of a high resolution reconstructed image in accordance with the present invention;
FIG. 3 is a low resolution and high resolution image and its corresponding saliency texture map and histogram;
FIG. 4 is a comparison graph for optimization of the parameters α and β.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in fig. 1, the method for establishing a semi-reference super-resolution reconstruction image quality evaluation model disclosed by the invention comprises the following steps:
Step one: perform saliency detection on the low-resolution image LR and the high-resolution image HR respectively, so as to extract the saliency features of each image;
Step two: multiply each original image by its saliency features, apply the DWT to the product, extract the high-frequency information, and compute the information gain of the high-resolution image relative to the low-resolution image;
Step three: extract texture features of the low-resolution and high-resolution images with the LBP operator and, combined with the saliency features, compare their texture similarity via histograms;
Step four: combine the information gain from step two and the texture similarity from step three to construct the semi-reference super-resolution reconstruction image quality evaluation model.
Visual saliency can improve the performance of a quality evaluation module in practical applications, but the salient regions measured by eye trackers and by most existing saliency techniques are difficult to apply effectively to the quality evaluation of super-resolution (SR) images. The invention therefore uses a saliency detection method that simulates the "fixations and saccades" of human eye movement. Since subjective tests of SR images mostly rely on fixation-style viewing, saliency detection is performed with a detection module for "fixation" only, as follows:
the calculation formula of the salient feature M (r) extracted by the gazing detection module through visual salient feature extraction on the low-resolution image and the high-resolution image is as follows:
where r denotes the high-resolution or low-resolution image, rg denotes r after Gaussian filtering, C is a constant, P_x denotes the gradient in the x direction, P_y denotes the gradient in the y direction, s denotes the image signal, and * denotes convolution;
when saliency detection is performed on the low-resolution image, the input r is the low-resolution image LR and the resulting saliency feature is M(LR); when it is performed on the high-resolution image, the input r is the high-resolution image HR and the resulting saliency feature is M(HR).
Image super-resolution reconstruction generates an HR image from an LR image so that more information can be obtained from the reconstructed HR image. Since the low-frequency part is relatively easy to reconstruct, existing super-resolution algorithms are designed mainly for the reconstruction of high-frequency information. The invention accordingly designs an information gain module for the information increase in the high-frequency part of the HR image over the LR image, as follows:
A. invert the extracted saliency features M(r) to obtain a weight matrix W(r) = 1 - M(r), and weight the original low-resolution and high-resolution images with this matrix to obtain the saliency-weighted image information of each image;
B. apply the discrete wavelet transform (DWT) to the saliency-weighted image to obtain a horizontal high-frequency sub-band LH, a vertical high-frequency sub-band HL, a diagonal high-frequency sub-band HH, and a low-frequency sub-band LL;
C. compute the information entropy E of each high-frequency sub-band of the low-resolution image LR and the high-resolution image HR separately,
where M and N denote the image size of each sub-band (LH, HL, HH), i and j are the coordinate indices of the image, and P(i, j) denotes the wavelet coefficient of each sub-band image after DWT decomposition. For the high-resolution image the resulting entropies are E_LH, E_HL, and E_HH for the sub-bands LH, HL, and HH; for the low-resolution image they are E′_LH, E′_HL, and E′_HH;
D. compute the sum S_1 of the increases in information entropy of the LH, HL, and HH sub-band images of the high-resolution image over those of the low-resolution image: S_1 = (E_LH − E′_LH) + (E_HL − E′_HL) + (E_HH − E′_HH);
To verify whether the information gain model S_1 can effectively evaluate super-resolution reconstructed images, S_1 was used to score HR images of different quality produced from the same LR image; the experimental results are shown in fig. 2. The first row shows the LR image and the corresponding HR images, and the second row shows the images of the first row after weighting by the saliency module. As fig. 2 shows, HR images of different quality differ more distinctly after saliency weighting, which is what enables the information gain module S_1 to evaluate super-resolution reconstructed images effectively. As the MOS value of the HR image increases, the predicted value of S_1 also increases. The proposed information gain model can therefore effectively evaluate super-resolution reconstructed images.
In addition, it was verified whether saliency detection improves the information gain model. The information gain model alone and the evaluation model combining saliency detection with the information gain model were each tested on the ECCV-2011 super-resolution reconstructed image library; the experimental results are shown in Table 1:
TABLE 1
As Table 1 shows, evaluating the ECCV-2011 super-resolution reconstructed image library with the information gain model alone yields KRCC, SRCC, PLCC, and RMSE of 0.6082, 0.8078, 0.7880, and 1.1860 respectively, whereas the saliency-based information gain model reaches 0.6257, 0.8180, 0.8173, and 1.1100. The saliency-based model thus clearly improves on the plain information gain model, and combining the two evaluates super-resolution reconstructed images and the corresponding reconstruction algorithms more effectively.
Image super-resolution reconstruction should both recover more image information and keep the texture of the HR image as consistent as possible with that of the LR image. Depending on the reconstruction algorithm and the magnification factor, the texture structure of the generated HR image degrades to different degrees relative to the LR image, so the texture difference between the HR and LR images can be used to evaluate the reconstructed image effectively. The invention therefore proposes an evaluation module for super-resolution reconstructed images based on the texture similarity between the LR image and the HR image generated by the super-resolution reconstruction algorithm, as follows:
A. use the LBP operator to extract the texture structure of the low-resolution image LR and the high-resolution image HR respectively, with the formula:
where P and R denote the number of neighbours and the neighbourhood radius, and g_c and g_i are the pixel values at the centre position and in the neighbourhood;
B. combine the obtained texture structure with the saliency-weighted image information to obtain the saliency texture map of the image
U=LBP(r)·W(r);
C. compare the saliency texture maps obtained from the low-resolution and high-resolution images using their histograms to obtain the degree of texture similarity S_2,
H = hist(U),
where H is the histogram distribution of the saliency texture map U, LR denotes low resolution, HR denotes high resolution, n is the number of histogram bins, and c is a constant. The resulting S_2 is the degree of texture similarity between the LR and HR images: the more similar their texture structures, the larger S_2 and the better the quality of the reconstructed HR image, and vice versa.
To verify the texture similarity feature S_2, fig. 3 shows an example: the first row contains an LR image and HR images of different quality generated from it by super-resolution reconstruction, the second row shows the saliency texture maps of the first-row images, and the third row shows the corresponding histogram distributions. The saliency texture maps of HR images of different quality differ visibly; to compare the textures of the HR images with that of the LR image more intuitively, the third row plots the histogram distributions of the texture maps, with the most distinctive differences between each HR histogram and the corresponding LR histogram circled with a wire frame. The better the quality of an HR image, the more similar its texture histogram is to that of the LR image; the differences outside the circled regions are comparatively small, but the same trend holds overall. The MOS values of the three HR images in the first row of fig. 3 are 5.65, 3.85, and 2.00, and the corresponding S_2 values are 0.8656, 0.7762, and 0.7217. As HR image quality deteriorates, S_2 decreases and the texture similarity between the LR and HR images drops. These experiments show that the proposed texture structure similarity feature can effectively evaluate super-resolution reconstructed images.
Furthermore, the necessity of combining the texture similarity model with the saliency module was verified. The texture structure similarity model alone and the evaluation model combining saliency detection with the texture similarity module were each tested on the ECCV-2014 super-resolution reconstructed image library; the experimental results are shown in Table 2:
TABLE 2
As Table 2 shows, evaluating the ECCV-2014 super-resolution reconstructed image library with the texture similarity model alone yields KRCC, SRCC, PLCC, and RMSE of 0.4249, 0.6067, 0.5966, and 1.5461 respectively, whereas the saliency-based texture similarity model reaches 0.4728, 0.6602, 0.6677, and 1.4342. The saliency-based model thus clearly improves on the plain texture similarity model, and combining the two evaluates super-resolution reconstructed images more effectively.
Finally, the information gain of the high-resolution image over the low-resolution image and the degree of texture similarity are combined to predict the visual quality of the image, where α and β are tuning parameters. The parameters α and β were validated on the ECCV-2014 image library: first α was set to 1, and then the parameter β was optimized over the interval [0, 1]; the experimental results are shown in fig. 4. The results indicate that the two components are equally important, and the sum of α and β is set to 1.
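The text states only that the final score combines S_1 and S_2 with tuning parameters α and β whose sum is set to 1; the weighted-sum form Q = αS_1 + βS_2 in the sketch below is therefore an assumption, shown together with the β sweep just described.

```python
import numpy as np

def quality_score(s1, s2, alpha=0.5, beta=0.5):
    """Sketch of the final model: Q = alpha * S1 + beta * S2.

    The weighted-sum form is an assumption; the text says only that Q
    combines S1 and S2 with tuning parameters satisfying alpha + beta = 1.
    """
    return alpha * s1 + beta * s2

def sweep(s1, s2, steps=11):
    """Scan beta over [0, 1] with alpha = 1 - beta, as in the parameter study."""
    betas = np.linspace(0.0, 1.0, steps)
    return [(float(b), quality_score(s1, s2, 1.0 - b, b)) for b in betas]
```

With equal weights (α = β = 0.5) the score reduces to the mean of the two features, which matches the conclusion that both components are equally important.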
The method of the invention is verified experimentally below.
The proposed semi-reference algorithm was performance-tested on the ECCV-2014 image library, in which the LR images are generated from high-quality images by nine different combinations of down-sampling and blurring and HR images are then reconstructed by six super-resolution algorithms, so the library provides high-quality images as reference images. To verify the effectiveness and superiority of the proposed semi-reference quality evaluation algorithm, it was compared on the ECCV-2014 library with mainstream full-reference algorithms (PSNR, SSIM, MS-SSIM, FSIM, IW-SSIM, VIF, MAD, and GSM), with general no-reference algorithms (BRISQUE, NFERM, BIQI, DIIVINE, BLIINDS-II, NIQE, and DESIQUE), and with quality evaluation algorithms proposed specifically for super-resolution reconstructed images. The results are given in Tables 3-5 below: Table 3 reports the performance of the full-reference quality metrics on the ECCV-2014 library, and Table 4 that of the no-reference metrics.
TABLE 3
Algorithm | KRCC | SRCC | PLCC | RMSE |
PSNR | 0.2068 | 0.3020 | 0.3498 | 1.8048 |
SSIM | 0.3800 | 0.5308 | 0.5817 | 1.5670 |
FSIM | 0.3928 | 0.5605 | 0.6586 | 1.4496 |
IWSSIM | 0.5936 | 0.7863 | 0.8538 | 1.0028 |
VIF | 0.6242 | 0.8130 | 0.8543 | 1.0041 |
MAD | 0.5056 | 0.6972 | 0.7528 | 1.2682 |
GSM | 0.3050 | 0.4420 | 0.5494 | 1.6097 |
MSSSIM | 0.4686 | 0.6403 | 0.6802 | 1.4122 |
Proposed(RR) | 0.6558 | 0.8503 | 0.8321 | 1.0686 |
TABLE 4
The performance of the image quality evaluation algorithms was tested using four metrics: KRCC, SRCC, PLCC and RMSE. KRCC and SRCC measure the monotonicity of an image quality evaluation model and are computed directly from the subjective and objective quality scores. PLCC and RMSE measure the accuracy of an algorithm: the objective quality scores predicted by the algorithm are first mapped through a nonlinear transformation function, and PLCC and RMSE are then computed between the subjective scores and the transformed objective scores.
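For reference, the four criteria can be computed as follows. This is a generic pure-Python sketch, not the evaluation code used in the experiments: it omits tie handling in the rank correlations and the nonlinear logistic mapping normally applied to the objective scores before PLCC and RMSE.

```python
import math

def pearson(x, y):
    """Pearson linear correlation (PLCC is this value, normally computed
    after a nonlinear logistic mapping of the objective scores, omitted here)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def ranks(x):
    """Ranks 1..n (no tie handling in this sketch)."""
    order = sorted(range(len(x)), key=lambda i: x[i])
    r = [0.0] * len(x)
    for rank, i in enumerate(order):
        r[i] = rank + 1.0
    return r

def srcc(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    return pearson(ranks(x), ranks(y))

def krcc(x, y):
    """Kendall tau-a: (concordant - discordant pairs) / total pairs."""
    n = len(x)
    num = sum((a > 0) - (a < 0)
              for i in range(n) for j in range(i + 1, n)
              for a in [(x[i] - x[j]) * (y[i] - y[j])])
    return 2.0 * num / (n * (n - 1))

def rmse(x, y):
    """Root-mean-square error between subjective and (mapped) objective scores."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)) / len(x))
```

Higher KRCC/SRCC indicate better monotonicity of the predicted scores; lower RMSE and higher PLCC indicate better accuracy.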
From the experimental results on the ECCV-2014 image library, the full-reference algorithm VIF in Table 3 performs best, with KRCC, SRCC, PLCC and RMSE values of 0.6242, 0.8130, 0.8543 and 1.0041 respectively, while the no-reference algorithm BRISQUE in Table 4 achieves 0.6280, 0.8045, 0.8818 and 0.9221, the best accuracy among the compared algorithms. The semi-reference quality evaluation algorithm provided by the invention attains KRCC, SRCC, PLCC and RMSE values of 0.6558, 0.8503, 0.8321 and 1.0686 respectively. Combining Tables 3 and 4, the proposed quality evaluation algorithm is the best in monotonicity, while in accuracy it is slightly inferior to the best-performing full-reference algorithm and falls behind the training-based quality evaluation method BRISQUE. However, monotonicity matters more than accuracy here, because image quality evaluation is mainly used to rank the quality of a series of images rather than to assign exact scores. Moreover, learning-based fully supervised quality evaluation algorithms suffer from overfitting when tested within a single image library and lack good generalization ability.
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that various changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.
Claims (4)
1. A method for establishing a semi-reference super-resolution reconstruction image quality evaluation model is characterized by comprising the following steps:
the method comprises the following steps: step one: respectively carrying out saliency detection on the low-resolution image and the high-resolution image, so as to extract the saliency features of the images;
step two: multiplying the original images by their saliency features, carrying out a DWT transform on the resulting images, extracting the high-frequency information, and calculating the information gain of the high-resolution image relative to the low-resolution image;
the second step comprises the following specific steps:
A. negating the extracted saliency features M(r) to obtain a weight matrix W(r) = 1 − M(r), and weighting the original low-resolution image and high-resolution image with this matrix to obtain the salient image information of each image;
B. carrying out DWT (discrete wavelet transform) on the obtained significant image to obtain a horizontal high-frequency sub-band LH, a vertical high-frequency sub-band HL, a diagonal high-frequency sub-band HH and a low-frequency sub-band LL;
C. the information entropy E of each sub-band of the low-resolution image LR and the high-resolution image HR is calculated separately as E = −∑_{i=1}^{M} ∑_{j=1}^{N} P(i,j)·log₂ P(i,j), wherein M and N represent the image size of each sub-band, i and j represent the coordinate indices of the image, and P(i,j) represents the wavelet coefficient of each sub-band image after DWT decomposition;
D. calculating the sum S1 of the increases in information entropy of the LH, HL and HH sub-band images of the high-resolution image compared with the low-resolution image: S1 = (E_LH − E′_LH) + (E_HL − E′_HL) + (E_HH − E′_HH), wherein E_LH, E_HL and E_HH represent the information entropy of the high-resolution sub-bands LH, HL and HH, and E′_LH, E′_HL and E′_HH represent the information entropy of the corresponding low-resolution sub-bands;
step three: extracting the texture features of the low-resolution image and the high-resolution image with an LBP operator, and, in combination with the image saliency features, comparing the texture similarity of the two images by histogram;
step four: combining the information gain obtained in step two and the texture similarity obtained in step three to construct the semi-reference super-resolution reconstruction image quality evaluation model.
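Steps one and two above (saliency weighting, DWT decomposition, sub-band entropy, information gain) can be sketched as below. This is an illustrative reading, not the patented implementation: it assumes a one-level Haar DWT and interprets P(i,j) as a normalised histogram of sub-band wavelet coefficients so that the entropy is well defined; the function names and the bin count are assumptions.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar DWT; img must have even height and width.
    Returns (LL, LH, HL, HH) sub-bands."""
    a = np.asarray(img, dtype=np.float64)
    lo_r = (a[:, 0::2] + a[:, 1::2]) / 2.0   # low-pass along rows
    hi_r = (a[:, 0::2] - a[:, 1::2]) / 2.0   # high-pass along rows
    ll = (lo_r[0::2] + lo_r[1::2]) / 2.0
    lh = (lo_r[0::2] - lo_r[1::2]) / 2.0
    hl = (hi_r[0::2] + hi_r[1::2]) / 2.0
    hh = (hi_r[0::2] - hi_r[1::2]) / 2.0
    return ll, lh, hl, hh

def subband_entropy(band, bins=64):
    """Shannon entropy of a sub-band's coefficient histogram (an assumed
    reading of P(i, j) as a normalised coefficient distribution)."""
    hist, _ = np.histogram(band, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def information_gain(hr, lr):
    """S1: sum of entropy gains over the three high-frequency sub-bands."""
    _, lh_h, hl_h, hh_h = haar_dwt2(hr)
    _, lh_l, hl_l, hh_l = haar_dwt2(lr)
    return sum(subband_entropy(h) - subband_entropy(l)
               for h, l in zip((lh_h, hl_h, hh_h), (lh_l, hl_l, hh_l)))
```

In the patent's pipeline the inputs would be the saliency-weighted images W(r)·r rather than the raw images; here the weighting is left to the caller.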
2. The method for establishing the semi-reference super-resolution reconstruction image quality evaluation model according to claim 1, characterized in that: the saliency detection in step one, which extracts the saliency image, comprises the following:
the gazing-based detection module extracts visual salient features of the low-resolution image and the high-resolution image, and a calculation formula of the extracted salient features M (r) is as follows:
wherein r represents the high-resolution image or the low-resolution image, r_g represents the image r after Gaussian filtering, C is a constant, P_x denotes the gradient in the x-direction, P_y denotes the gradient in the y-direction, s represents the image signal, and the operator ∗ denotes convolution.
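The claim's exact saliency formula is not reproduced in this text. As a stand-in, the sketch below implements a simple hypothetical gradient-magnitude saliency map in the same spirit (Gaussian smoothing followed by gradient magnitude and normalisation); it should not be read as the patented detection module.

```python
import numpy as np

def gaussian_blur(img, sigma=1.0):
    """Separable Gaussian filter via two 1-D convolutions with edge padding."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    pad = np.pad(np.asarray(img, dtype=np.float64), radius, mode='edge')
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode='valid'), 1, pad)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode='valid'), 0, tmp)

def saliency(img, eps=1e-6):
    """Hypothetical saliency map M(r): gradient magnitude of the Gaussian-
    smoothed image, normalised to [0, 1]. Not the patent's formula."""
    g = gaussian_blur(img)
    py, px = np.gradient(g)            # gradients in y- and x-direction
    m = np.sqrt(px**2 + py**2)
    return (m - m.min()) / (m.max() - m.min() + eps)
```

A map like this is high near edges and texture and low in flat regions, which is the property the weighting W(r) = 1 − M(r) in claim 1 relies on.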
3. The method for establishing the semi-reference super-resolution reconstruction image quality evaluation model according to claim 2, wherein the method comprises the following steps: the third step comprises the following specific steps:
A. adopting the LBP operator to extract the texture structures of the low-resolution image LR and the high-resolution image HR respectively: LBP_{P,R} = ∑_{i=0}^{P−1} s(g_i − g_c)·2^i, with s(x) = 1 for x ≥ 0 and s(x) = 0 otherwise, wherein P and R represent the number of neighbours and the radius of the neighbourhood, and g_c and g_i are the pixel values of the centre position and its neighbours;
B. combining the obtained texture structure with the salient image information to obtain the salient texture structure map of the image: U = LBP(r)·W(r);
C. comparing the salient texture structure maps obtained from the low-resolution image and the high-resolution image using histograms to obtain the texture-structure similarity S2 = (1/n) ∑_{i=1}^{n} (2·H_LR(i)·H_HR(i) + c) / (H_LR(i)² + H_HR(i)² + c), with H = hist(U), wherein H is the histogram distribution of the salient texture map U, the subscripts LR and HR denote the low-resolution and high-resolution images, n represents the number of histogram bins, and c is a constant.
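The LBP extraction and histogram comparison of claim 3 can be sketched as follows. The 8-neighbour, radius-1 LBP is the standard operator; the SSIM-style histogram similarity used for S2 here is an assumption about the formula not reproduced on this page, and the constant c is illustrative.

```python
import numpy as np

def lbp_8_1(img):
    """Basic LBP with P = 8 neighbours at radius R = 1 (interior pixels only)."""
    a = np.asarray(img, dtype=np.float64)
    c = a[1:-1, 1:-1]                         # centre pixels g_c
    code = np.zeros(c.shape, dtype=np.int32)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        nb = a[1 + dy:a.shape[0] - 1 + dy, 1 + dx:a.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.int32) << bit   # threshold g_i against g_c
    return code

def texture_similarity(lr_codes, hr_codes, c=1e-6):
    """S2 as an SSIM-style comparison of normalised LBP histograms --
    the exact form of the patent's similarity is an assumption here."""
    h_lr, _ = np.histogram(lr_codes, bins=256, range=(0, 256))
    h_hr, _ = np.histogram(hr_codes, bins=256, range=(0, 256))
    h_lr = h_lr / h_lr.sum()
    h_hr = h_hr / h_hr.sum()
    return float(np.mean((2 * h_lr * h_hr + c) / (h_lr**2 + h_hr**2 + c)))
```

Identical texture maps give a similarity of 1; in the patent's pipeline the codes would additionally be weighted by the saliency map W(r) before the histograms are compared.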
4. The method for establishing the semi-reference super-resolution reconstruction image quality evaluation model according to claim 3, characterized in that: in step four, the information-content gain and the texture similarity of the high-resolution image compared with the low-resolution image are combined to give the predicted visual quality of the image, Q = α·S1 + β·S2, where α and β are tuning parameters.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811213803.XA CN109447903B (en) | 2018-10-17 | 2018-10-17 | Method for establishing semi-reference super-resolution reconstruction image quality evaluation model |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109447903A CN109447903A (en) | 2019-03-08 |
CN109447903B true CN109447903B (en) | 2022-11-18 |
Family
ID=65546625
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811213803.XA Active CN109447903B (en) | 2018-10-17 | 2018-10-17 | Method for establishing semi-reference super-resolution reconstruction image quality evaluation model |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109447903B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110415242B (en) * | 2019-08-02 | 2020-05-19 | 中国人民解放军军事科学院国防科技创新研究院 | Super-resolution magnification evaluation method based on reference image |
CN113298337A (en) * | 2020-10-19 | 2021-08-24 | 阿里巴巴集团控股有限公司 | Quality evaluation method and device |
EP4318377A4 (en) * | 2021-05-24 | 2024-10-16 | Samsung Electronics Co Ltd | Image processing device and operating method therefor |
CN113436078B (en) * | 2021-08-10 | 2022-03-15 | 诺华视创电影科技(江苏)有限公司 | Self-adaptive image super-resolution reconstruction method and device |
CN117495854B (en) * | 2023-12-28 | 2024-05-03 | 淘宝(中国)软件有限公司 | Video data processing method, device and storage medium |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105825500A (en) * | 2016-03-10 | 2016-08-03 | 江苏商贸职业学院 | Camera image quality evaluation method and device |
Non-Patent Citations (6)
Title |
---|
A reduced-reference quality assessment metric for super-resolution reconstructed images with information gain and texture similarity; Lijuan Tang; Signal Processing: Image Communication; 2019-08-21; full text *
No-reference image quality assessment based on spatial and spectral entropies; Lixiong Liu et al.; Signal Processing: Image Communication; 2014-06-23; full text *
Saliency-Guided Quality Assessment of Screen Content Images; Ke Gu; IEEE Transactions on Multimedia; 2016-06-30; full text *
No-reference quality evaluation method for super-resolution reconstructed images based on SVD; Huang Huijuan et al.; Journal of Computer-Aided Design & Computer Graphics; 2012-09-15 (No. 09); full text *
Analysis of quality evaluation algorithms for super-resolution reconstructed images; Wang Xiaoyi et al.; Information & Communications; 2017-09-15 (No. 09); full text *
No-reference image quality assessment combining color-space statistics with texture features; Fan Ci'en et al.; Optics and Precision Engineering; 2018-04-15 (No. 04); full text *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109447903B (en) | Method for establishing semi-reference super-resolution reconstruction image quality evaluation model | |
Gu et al. | Multiscale natural scene statistical analysis for no-reference quality evaluation of DIBR-synthesized views | |
Kim et al. | Deep learning of human visual sensitivity in image quality assessment framework | |
Mittal et al. | Blind/referenceless image spatial quality evaluator | |
Jaya et al. | IEM: a new image enhancement metric for contrast and sharpness measurements | |
Liu et al. | A no-reference metric for perceived ringing artifacts in images | |
Zhang et al. | Kurtosis-based no-reference quality assessment of JPEG2000 images | |
CN108053396B (en) | No-reference evaluation method for multi-distortion image quality | |
CN101847257B (en) | Image denoising method based on non-local means and multi-level directional images | |
Zhang et al. | An algorithm for no-reference image quality assessment based on log-derivative statistics of natural scenes | |
Tang et al. | A reduced-reference quality assessment metric for super-resolution reconstructed images with information gain and texture similarity | |
CN103606132A (en) | Multiframe digital image denoising method based on space domain and time domain combination filtering | |
WO2016169244A1 (en) | Method of denoising and enhancing video image based on random spray retinex and device utilizing same | |
CN104021523B (en) | A kind of method of the image super-resolution amplification based on marginal classification | |
Chen et al. | Blind quality index for tone-mapped images based on luminance partition | |
Moorthy et al. | Visual perception and quality assessment | |
WO2011097696A1 (en) | Method and system for determining a quality measure for an image using a variable number of multi-level decompositions | |
CN110211084B (en) | Image multi-resolution reconstruction method based on weight wavelet transform | |
Li et al. | GridSAR: Grid strength and regularity for robust evaluation of blocking artifacts in JPEG images | |
Mansouri et al. | SSVD: Structural SVD-based image quality assessment | |
CN111445435B (en) | Multi-block wavelet transform-based reference-free image quality evaluation method | |
Ein-shoka et al. | Quality enhancement of infrared images using dynamic fuzzy histogram equalization and high pass adaptation in DWT | |
Jin et al. | Perceptual Gradient Similarity Deviation for Full Reference Image Quality Assessment. | |
CN107180427B (en) | 3D synthetic image quality evaluation method based on autoregressive local image description | |
Xue et al. | Image quality assessment with mean squared error in a log based perceptual response domain |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right |
Effective date of registration: 20231123 Address after: Room 106, Building 30, Jiangjingyuan, No. 52 Yaogang Road, Chongchuan District, Nantong City, Jiangsu Province, 226000 Patentee after: Nantong Daguo Intelligent Technology Co.,Ltd. Address before: No.48 jiangtongdao Road, Nantong City, Jiangsu Province, 226000 Patentee before: JIANGSU VOCATIONAL College OF BUSINESS |