CN113450319B - Super-resolution reconstruction image quality evaluation method based on KLT technology - Google Patents


Info

Publication number: CN113450319B
Application number: CN202110672307.6A
Authority: CN (China)
Prior art keywords: image, AGGD, resolution, super, vectorization
Other versions: CN113450319A (Chinese)
Inventors: 姜求平 (Jiang Qiuping), 刘震涛 (Liu Zhentao)
Assignee (original and current): Ningbo University
Application filed by Ningbo University; priority to CN202110672307.6A
Publication of application CN113450319A, followed by grant and publication of CN113450319B
Legal status: Active (granted)

Classifications

    • G06T 7/0002 — Image analysis; inspection of images, e.g. flaw detection
    • G06T 3/4053 — Scaling of whole images or parts thereof based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G06T 5/92 — Dynamic range modification of images or parts thereof based on global image properties
    • G06T 7/90 — Determination of colour characteristics
    • G06T 2207/20081 — Indexing scheme for image analysis or image enhancement; special algorithmic details; training; learning


Abstract

The invention discloses a super-resolution reconstructed image quality evaluation method based on the KLT (Karhunen-Loève Transform) technology. The method converts the RGB color space, via color space conversion, into a color space more closely correlated with human color perception, and computes the MSCN (Mean Subtracted Contrast Normalized) coefficient maps of the three color channels using the MSCN technique; partitions and vectorizes each MSCN coefficient map to obtain its corresponding vectorization matrix; computes the KLT coefficient matrix of each MSCN coefficient map from the vectorization matrix and the constructed KLT kernel; performs AGGD parameter estimation on the KLT coefficient matrix by the moment matching method to obtain the first feature vector of each color channel; obtains the second feature vector of each color channel by computing the energy coefficients of the KLT coefficient matrix; and thereby obtains the feature vector of the super-resolution reconstructed image. Taking the feature vector as input, the final objective quality prediction score of the super-resolution reconstructed image is obtained in combination with an SVM regression machine. The method has the advantage of effectively improving the consistency between objective evaluation results and subjective human perception.

Description

KLT (Karhunen-Loève Transform) technology-based super-resolution reconstruction image quality evaluation method
Technical Field
The invention relates to an image quality evaluation method, in particular to a super-resolution reconstruction image quality evaluation method based on the KLT (Karhunen-Loève Transform) technology.
Background
In the last two decades, many Single-Image Super-Resolution (SISR) reconstruction methods have been proposed. They restore a Low-Resolution (LR) image to a High-Resolution (HR) image, recovering as much detail as possible to improve the visual effect of the Super-Resolution (SR) reconstructed image. Different super-resolution reconstruction methods differ in performance, which raises the question of which method performs best. It is therefore necessary to perform Super-Resolution image Quality Assessment (SRQA) in order to select high-quality super-resolution reconstructed images.
Traditional full-reference quality evaluation methods mainly measure the distance and difference between a super-resolution reconstructed image and a reference image. However, super-resolution reconstructed images are prone to spatial misalignment: compared with the reference image, the image content at the pixel with the same coordinate position deviates, so the distance between the super-resolution reconstructed image and the reference image becomes large. Full-reference quality evaluation is therefore often inaccurate and deviates from subjective human perception. Consequently, no-reference quality evaluation methods are now frequently used to evaluate the quality of super-resolution reconstructed images.
Existing no-reference quality evaluation methods usually extract prior information of an image from perspectives such as the spatial domain and the transform domain to evaluate a super-resolution reconstructed image, but few methods take the characteristics of super-resolution reconstructed images into account and evaluate them specifically; as a result, these methods perform poorly and deviate considerably from subjective human perception. How to take the characteristics of the super-resolution reconstructed image into account and reasonably extract prior information for parametric modeling, so that objective evaluation results better conform to subjective human perception, is a problem to be studied and solved in the objective quality evaluation of super-resolution reconstructed images.
Disclosure of Invention
The invention aims to provide a super-resolution reconstruction image quality evaluation method based on the KLT technology, which can well measure the recovery effect of a super-resolution reconstructed image and can thereby effectively improve the consistency between objective evaluation results and subjective human perception.
The technical scheme adopted by the invention to solve the above technical problem is as follows: a KLT-technology-based quality evaluation method for super-resolution reconstructed images, characterized by comprising the following steps:
step 1: recording the distorted super-resolution reconstruction image as I, and recording the red channel, the green channel and the blue channel of I correspondingly as IR、IG、IBIs shown byRThe pixel value of the pixel point with the middle coordinate position (a, b) is marked as IR(a, b) mixing IGThe pixel value of the pixel point with the middle coordinate position (a, b) is marked as IG(a, b) mixing IBThe pixel value of the pixel point with the middle coordinate position (a, b) is marked as IB(a, b); wherein a is more than or equal to 1 and less than or equal to W, b is more than or equal to 1 and less than or equal to H, W represents the width of I, and H represents the height of I;
step 2: perform color space conversion on I to obtain the corresponding three-channel image, denoted O, and denote the 1st, 2nd and 3rd color channels of O correspondingly as O_1, O_2, O_3; denote the pixel value of the pixel point at coordinate position (a,b) in O_1 as O_1(a,b), in O_2 as O_2(a,b), and in O_3 as O_3(a,b); the conversion matrix mapping [I_R(a,b) I_G(a,b) I_B(a,b)]^T to [O_1(a,b) O_2(a,b) O_3(a,b)]^T survives only as a formula image (GDA0003602550600000021) in the original publication; then calculate the MSCN coefficient maps of O_1, O_2, O_3 using the MSCN technique, denoted correspondingly Ô_1, Ô_2, Ô_3; here O, O_1, O_2, O_3, Ô_1, Ô_2, Ô_3 all have width W and height H, and the symbol "[ ]" is the vector or matrix representation symbol;
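For concreteness, a minimal sketch of the MSCN computation in Python is given below. The Gaussian window (σ = 7/6) and the stabilizing constant C = 1 follow the common convention of the BRISQUE literature from which the MSCN technique originates; the patent invokes the existing MSCN technique without fixing these values, so they are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn_map(channel: np.ndarray, sigma: float = 7.0 / 6.0, C: float = 1.0) -> np.ndarray:
    """Return the MSCN coefficient map of one color channel (same size as input)."""
    x = channel.astype(np.float64)
    mu = gaussian_filter(x, sigma)                 # local Gaussian-weighted mean
    var = gaussian_filter(x * x, sigma) - mu * mu  # local variance
    sigma_map = np.sqrt(np.maximum(var, 0.0))      # local standard deviation
    return (x - mu) / (sigma_map + C)              # mean-subtracted, contrast-normalized
```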
step 3: divide each of Ô_1, Ô_2, Ô_3 into S non-overlapping image blocks of size √K × √K; then vectorize each image block in each of Ô_1, Ô_2, Ô_3; then splice the vectorization results obtained after all image blocks of each map are vectorized into a vectorization matrix: record the vectorization matrix spliced from the vectorization results of all image blocks in Ô_1 as Z_1, each column of Z_1 being the vectorization result, of dimension K×1, of one image block of Ô_1; record the vectorization matrix for Ô_2 as Z_2, each column of Z_2 being the K×1 vectorization result of one image block of Ô_2; and record the vectorization matrix for Ô_3 as Z_3, each column of Z_3 being the K×1 vectorization result of one image block of Ô_3; here S = ⌊W/√K⌋ × ⌊H/√K⌋, the symbol ⌊·⌋ is the rounding-down operation sign, K has a value of 4, 9, 16, 25, 36, 49 or 64, and Z_1, Z_2, Z_3 all have dimension K×S;
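The partition-and-vectorize step can be sketched as follows; the row-scanning order inside each block is the conventional vectorization described later in the text, and the NumPy height-first array layout is an assumption.

```python
import numpy as np

def vectorization_matrix(mscn: np.ndarray, K: int = 16) -> np.ndarray:
    """Split an MSCN map into non-overlapping sqrt(K) x sqrt(K) blocks and
    stack each row-scanned block as one column of a K x S matrix,
    S = floor(H / sqrt(K)) * floor(W / sqrt(K))."""
    k = int(np.sqrt(K))
    H, W = mscn.shape                  # NumPy layout: height first
    nH, nW = H // k, W // k            # integer division = rounding down
    cropped = mscn[:nH * k, :nW * k]   # drop the ragged border, if any
    blocks = (cropped.reshape(nH, k, nW, k)
                     .transpose(0, 2, 1, 3)  # (block row, block col, in-block rows/cols)
                     .reshape(-1, K))        # one row-scanned block per row
    return blocks.T                    # K x S: one block per column
```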
step 4: using a number of original high-resolution images, obtain offline the KLT kernels of the three color channels of the same kind of three-channel image as in step 3, and record the KLT kernel of the 1st color channel, the KLT kernel of the 2nd color channel and the KLT kernel of the 3rd color channel correspondingly as P_1, P_2, P_3; here P_1, P_2, P_3 all have dimension K×K;
step 5: compute the KLT coefficient matrices of Ô_1, Ô_2, Ô_3, recorded correspondingly as A_1, A_2, A_3: A_1 = (P_1)^T Z_1, A_2 = (P_2)^T Z_2, A_3 = (P_3)^T Z_3; here the superscript "T" denotes the transpose of a vector or matrix, and A_1, A_2, A_3 all have dimension K×S;
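In code, step 5 is a single matrix product per channel; a sketch under the dimensions stated above (P of size K×K from step 4, Z of size K×S from step 3):

```python
import numpy as np

def klt_coefficients(P: np.ndarray, Z: np.ndarray) -> np.ndarray:
    """A = P^T Z: project each K x 1 block vector (a column of Z) onto the
    K KLT basis vectors (the columns of P); the result is again K x S."""
    return P.T @ Z
```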
step 6: obtain the first feature vector of O_1, denoted fa_1. The acquisition process is: using the moment matching method, perform Asymmetric Generalized Gaussian Distribution (AGGD) parameter estimation on each row vector of A_1 in order, and then on the row vector obtained by splicing all row vectors of A_1, yielding K+1 groups of AGGD parameters: the 1st group of AGGD parameters corresponds to the 1st row vector of A_1, the 2nd group to the 2nd row vector, and so on; the K-th group corresponds to the K-th row vector of A_1, and the (K+1)-th group corresponds to the row vector spliced from all row vectors of A_1; each group of AGGD parameters consists of a shape parameter, a left scale parameter and a right scale parameter, all greater than 0; then take the column vector formed by arranging the K+1 groups of AGGD parameters corresponding to A_1 in order as the first feature vector fa_1 of O_1, fa_1 = [Aggd_1(1) Aggd_1(2) … Aggd_1(K) Aggd_1(K+1)]^T, where Aggd_1(k) (k = 1, …, K+1) denotes the k-th group of AGGD parameters corresponding to A_1; fa_1 has dimension 3(K+1)×1;
likewise, obtain the first feature vector of O_2, denoted fa_2, by performing the same AGGD parameter estimation on A_2: fa_2 = [Aggd_2(1) Aggd_2(2) … Aggd_2(K) Aggd_2(K+1)]^T, where Aggd_2(k) denotes the k-th group of AGGD parameters corresponding to A_2; fa_2 has dimension 3(K+1)×1;
obtain the first feature vector of O_3, denoted fa_3, by performing the same AGGD parameter estimation on A_3: fa_3 = [Aggd_3(1) Aggd_3(2) … Aggd_3(K) Aggd_3(K+1)]^T, where Aggd_3(k) denotes the k-th group of AGGD parameters corresponding to A_3; fa_3 has dimension 3(K+1)×1;
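The moment matching method for the AGGD parameters is not spelled out in the text; the sketch below follows the standard estimator used in the MSCN/BRISQUE literature (left/right second moments plus a lookup inversion of the ratio function), which is an assumption about the intended procedure.

```python
import numpy as np
from scipy.special import gamma

def estimate_aggd(x):
    """Moment-matching AGGD fit: returns (shape, left scale, right scale)."""
    x = np.asarray(x, dtype=np.float64).ravel()
    left, right = x[x < 0], x[x >= 0]
    sigma_l = np.sqrt(np.mean(left ** 2)) if left.size else 1e-6
    sigma_r = np.sqrt(np.mean(right ** 2)) if right.size else 1e-6
    gamma_hat = sigma_l / sigma_r
    r_hat = np.mean(np.abs(x)) ** 2 / np.mean(x ** 2)
    R_hat = r_hat * (gamma_hat ** 3 + 1) * (gamma_hat + 1) / (gamma_hat ** 2 + 1) ** 2
    # invert rho(alpha) = Gamma(2/alpha)^2 / (Gamma(1/alpha) * Gamma(3/alpha)) by lookup
    alphas = np.arange(0.2, 10.0, 0.001)
    rho = gamma(2 / alphas) ** 2 / (gamma(1 / alphas) * gamma(3 / alphas))
    alpha = alphas[np.argmin((rho - R_hat) ** 2)]
    scale = np.sqrt(gamma(1 / alpha) / gamma(3 / alpha))
    return alpha, sigma_l * scale, sigma_r * scale

def first_feature_vector(A):
    """fa: AGGD parameters of the K rows of A plus the spliced row, 3(K+1) values."""
    rows = [A[h] for h in range(A.shape[0])] + [A.ravel()]   # ravel = rows in order
    return np.concatenate([np.array(estimate_aggd(r)) for r in rows])
```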
step 7: obtain the second feature vector of O_1, denoted fe_1. The acquisition process is: compute the energy coefficient of each row vector of A_1, recording the energy coefficient of the h-th row vector of A_1 as e_1(h), computed from the S elements A_1(h,s) of that row (the defining formula survives only as a formula image, GDA0003602550600000051, in the original publication); then take the column vector formed by arranging the energy coefficients of all row vectors of A_1 in order as the second feature vector fe_1 of O_1, fe_1 = [e_1(1) … e_1(h) … e_1(K)]^T; here 1 ≤ h ≤ K, 1 ≤ s ≤ S, A_1(h,s) denotes the element in the h-th row and s-th column of A_1, e_1(1) denotes the energy coefficient of the 1st row vector of A_1, e_1(K) that of the K-th row vector, and fe_1 has dimension K×1;
likewise, obtain the second feature vector of O_2, denoted fe_2, from the energy coefficients e_2(h) of the row vectors of A_2, fe_2 = [e_2(1) … e_2(h) … e_2(K)]^T, and the second feature vector of O_3, denoted fe_3, from the energy coefficients e_3(h) of the row vectors of A_3, fe_3 = [e_3(1) … e_3(h) … e_3(K)]^T; fe_2 and fe_3 also have dimension K×1;
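A sketch of the energy-coefficient feature follows; since the defining formula survives only as an image, the normalized row energy below is a hypothetical but common choice, not confirmed by the text.

```python
import numpy as np

def second_feature_vector(A):
    """fe: one energy coefficient per row of the K x S KLT coefficient matrix.
    Assumed form: e(h) = sum_s A(h, s)^2 / sum_{h', s} A(h', s)^2."""
    row_energy = np.sum(A ** 2, axis=1)      # per-row energy over the S columns
    return row_energy / np.sum(row_energy)   # K values: fe = [e(1) ... e(K)]^T
```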
step 8: splice fa_1 and fe_1 into the feature vector of O_1, recorded as f_1, f_1 = [(fa_1)^T (fe_1)^T]^T; similarly, splice fa_2 and fe_2 into the feature vector of O_2, recorded as f_2, f_2 = [(fa_2)^T (fe_2)^T]^T, and splice fa_3 and fe_3 into the feature vector of O_3, recorded as f_3, f_3 = [(fa_3)^T (fe_3)^T]^T; then splice f_1, f_2, f_3 into the feature vector of I, recorded as fea, fea = [f_1^T f_2^T f_3^T]^T; here f_1, f_2, f_3 each have dimension (4K+3)×1, and fea has dimension 3(4K+3)×1;
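Stacking the per-channel features into fea is then plain concatenation; a sketch with NumPy 1-D arrays standing in for the column vectors:

```python
import numpy as np

def splice_features(fa, fe):
    """f_c = [fa_c; fe_c], dimension 4K+3: 3(K+1) AGGD values plus K energies."""
    return np.concatenate([fa, fe])

def image_feature(fa1, fe1, fa2, fe2, fa3, fe3):
    """fea = [f_1; f_2; f_3], dimension 3(4K+3)."""
    return np.concatenate([splice_features(fa1, fe1),
                           splice_features(fa2, fe2),
                           splice_features(fa3, fe3)])
```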
step 9: take the feature vector of the distorted super-resolution reconstructed image as input and, in combination with an SVM regression machine, compute the final objective quality prediction score of the distorted super-resolution reconstructed image; the larger the final objective quality prediction score, the better the quality of the distorted super-resolution reconstructed image corresponding to the input feature vector; conversely, the smaller the score, the worse the quality of the distorted super-resolution reconstructed image corresponding to the input feature vector.
The specific process of step 4 comprises the following steps:
step 4_1: select Num original high-resolution images to form an image set, and mark the i-th original high-resolution image in the image set as X_i; mark the red, green and blue channels of X_i correspondingly as X_{i,R}, X_{i,G}, X_{i,B}, and mark the pixel value of the pixel point at coordinate position (a',b') in X_{i,R} as X_{i,R}(a',b'), in X_{i,G} as X_{i,G}(a',b'), and in X_{i,B} as X_{i,B}(a',b'); here Num ≥ 1, 1 ≤ i ≤ Num, X_i has width M_i and height N_i, 1 ≤ a' ≤ M_i, 1 ≤ b' ≤ N_i;
step 4_2: perform color space conversion on each original high-resolution image in the image set to obtain the corresponding three-channel image; record the three-channel image obtained from X_i after color space conversion as Y_i, and mark the 1st, 2nd and 3rd color channels of Y_i correspondingly as Y_{i,1}, Y_{i,2}, Y_{i,3}, with pixel values Y_{i,1}(a',b'), Y_{i,2}(a',b'), Y_{i,3}(a',b') at coordinate position (a',b'); the conversion matrix is the same as in step 2 and survives only as a formula image (GDA0003602550600000071) in the original publication; then calculate, using the MSCN technique, the MSCN coefficient map of each color channel of the three-channel image obtained from each original high-resolution image in the image set, marking the MSCN coefficient maps of Y_{i,1}, Y_{i,2}, Y_{i,3} correspondingly as Ŷ_{i,1}, Ŷ_{i,2}, Ŷ_{i,3}; here Y_i, Y_{i,1}, Y_{i,2}, Y_{i,3}, Ŷ_{i,1}, Ŷ_{i,2}, Ŷ_{i,3} all have width M_i and height N_i, and "[ ]" is the vector or matrix representation symbol;
step 4_3: divide the MSCN coefficient map of each color channel of the three-channel image obtained from each original high-resolution image in the image set into non-overlapping image blocks of size √K × √K; Ŷ_{i,1}, Ŷ_{i,2}, Ŷ_{i,3} are each divided into S_i non-overlapping image blocks of size √K × √K; then vectorize each image block in each of these MSCN coefficient maps; then splice the vectorization results of all image blocks of each MSCN coefficient map into a vectorization matrix: record the vectorization matrix for Ŷ_{i,1} as L_{i,1}, each column of L_{i,1} being the K×1 vectorization result of one image block of Ŷ_{i,1}; record the vectorization matrix for Ŷ_{i,2} as L_{i,2}, each column of L_{i,2} being the K×1 vectorization result of one image block of Ŷ_{i,2}; and record the vectorization matrix for Ŷ_{i,3} as L_{i,3}, each column of L_{i,3} being the K×1 vectorization result of one image block of Ŷ_{i,3}; here S_i = ⌊M_i/√K⌋ × ⌊N_i/√K⌋, the symbol ⌊·⌋ is the rounding-down operation sign, K has a value of 4, 9, 16, 25, 36, 49 or 64, and L_{i,1}, L_{i,2}, L_{i,3} all have dimension K×S_i;
step 4_4: splice the vectorization matrices corresponding to the MSCN coefficient maps of the 1st color channel of the three-channel images obtained from all original high-resolution images in the image set, in the row direction, into a first prior matrix, recorded as V_1, V_1 = [L_{1,1} … L_{i,1} … L_{Num,1}]; similarly, splice those of the 2nd color channel into a second prior matrix, recorded as V_2, V_2 = [L_{1,2} … L_{i,2} … L_{Num,2}], and those of the 3rd color channel into a third prior matrix, recorded as V_3, V_3 = [L_{1,3} … L_{i,3} … L_{Num,3}]; then compute the covariance matrices of V_1, V_2, V_3, recorded correspondingly as C_1, C_2, C_3: C_j = (1/Sum) Σ_{t=1}^{Sum} (v_j(t) − v̄_j)(v_j(t) − v̄_j)^T, j = 1, 2, 3; here L_{i,1}, L_{i,2}, L_{i,3} are the vectorization matrices defined in step 4_3 for the i-th image (i = 1, …, Num), Sum denotes the number of column vectors in V_1 (equivalently in V_2 or V_3), 1 ≤ t ≤ Sum, v_j(t) denotes the t-th column vector of V_j and has dimension K×1, v̄_j denotes the mean vector obtained by averaging V_j along the rows and also has dimension K×1, the superscript "T" denotes the transpose of a vector or matrix, and C_j has dimension K×K;
step 4_5: using the eigenvalue decomposition technique, solve for the K eigenvalues of C_1 and the corresponding K eigenvectors; then arrange the K eigenvectors of C_1 in descending order of their corresponding eigenvalues, take the arrangement result as the prior information extracted from V_1, and use the prior information extracted from V_1 as the KLT kernel of the 1st color channel, P_1 = [p_1(1) p_1(2) … p_1(K)];
similarly, solve for the K eigenvalues of C_2 and the corresponding K eigenvectors, arrange the K eigenvectors in descending order of their corresponding eigenvalues, and use the prior information extracted from V_2 as the KLT kernel of the 2nd color channel, P_2 = [p_2(1) p_2(2) … p_2(K)];
solve for the K eigenvalues of C_3 and the corresponding K eigenvectors, arrange the K eigenvectors in descending order of their corresponding eigenvalues, and use the prior information extracted from V_3 as the KLT kernel of the 3rd color channel, P_3 = [p_3(1) p_3(2) … p_3(K)];
here each eigenvector has dimension K×1, P_1, P_2, P_3 all have dimension K×K, and p_j(1), p_j(2), …, p_j(K) (j = 1, 2, 3) denote the 1st, 2nd, …, K-th eigenvectors of C_j after the K eigenvectors are arranged in descending order of their corresponding K eigenvalues.
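Steps 4_4 and 4_5 amount to a covariance eigendecomposition per channel; a sketch follows (the 1/Sum normalization of the covariance is an assumption consistent with the reconstructed formula above, and np.linalg.eigh is used because C is symmetric):

```python
import numpy as np

def klt_kernel(V: np.ndarray) -> np.ndarray:
    """Build the K x K KLT kernel from a K x Sum prior matrix V:
    center the columns, form the covariance matrix, and sort the
    eigenvectors by descending eigenvalue, giving P = [p(1) ... p(K)]."""
    mean = V.mean(axis=1, keepdims=True)     # row-wise mean vector
    D = V - mean
    C = (D @ D.T) / V.shape[1]               # K x K covariance matrix
    eigvals, eigvecs = np.linalg.eigh(C)     # eigh returns ascending eigenvalues
    return eigvecs[:, np.argsort(eigvals)[::-1]]
```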
The specific process of step 9 is as follows:
step 9_1: select d groups of distorted super-resolution reconstructed images, each group containing ω_1 two-scale super-resolution reconstructed images, ω_2 three-scale super-resolution reconstructed images and ω_3 four-scale super-resolution reconstructed images, and form all Number distorted super-resolution reconstructed images into a super-resolution image library; here d ≥ 60, ω_1, ω_2, ω_3 are all greater than or equal to 1, Number = d(ω_1 + ω_2 + ω_3), and each group of distorted super-resolution reconstructed images corresponds to one high-resolution image;
step 9_2: obtain the feature vector of each distorted super-resolution reconstructed image in the super-resolution image library in the same way, following the processes of step 1 to step 8; and obtain the subjective score of each distorted super-resolution reconstructed image in the super-resolution image library;
step 9_3: randomly select m groups of distorted super-resolution reconstructed images from the super-resolution image library; form a first training set from the feature vectors and subjective scores of the m×ω_1 two-scale super-resolution reconstructed images in the selected m groups, a second training set from the feature vectors and subjective scores of the m×ω_2 three-scale super-resolution reconstructed images in the selected m groups, and a third training set from the feature vectors and subjective scores of the m×ω_3 four-scale super-resolution reconstructed images in the selected m groups; form a first test set from the feature vectors of the (d−m)×ω_1 two-scale super-resolution reconstructed images in the remaining d−m groups, a second test set from the feature vectors of the (d−m)×ω_2 three-scale super-resolution reconstructed images in the remaining d−m groups, and a third test set from the feature vectors of the (d−m)×ω_3 four-scale super-resolution reconstructed images in the remaining d−m groups; here 1 ≤ m < d, and the exact value of m is given by a floor expression that survives only as a formula image (GDA0003602550600000101) in the original publication, the symbol ⌊·⌋ in it being the rounding-down operation sign;
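A sketch of the random group split; the fraction 0.8 (m = ⌊0.8·d⌋) is an assumed stand-in for the expression that survives only as an image:

```python
import numpy as np

def split_groups(d: int, train_fraction: float = 0.8, rng=None):
    """Return the indices of m randomly chosen training groups and of the
    d - m remaining test groups."""
    rng = np.random.default_rng() if rng is None else rng
    m = int(np.floor(train_fraction * d))   # assumed m = floor(0.8 * d)
    perm = rng.permutation(d)
    return perm[:m], perm[m:]
```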
step 9_4: adopting an SVM regression machine as the machine learning method, input all feature vectors and all subjective scores in the first training set into the SVM regression machine for training, such that the error between the regression function value obtained through training and the subjective scores is minimized, and construct a first SVM regression model; then input each feature vector in the first test set into the first SVM regression model for testing, obtaining the objective quality prediction score of the distorted super-resolution reconstructed image corresponding to each feature vector in the first test set;
similarly, input all feature vectors and all subjective scores in the second training set into the SVM regression machine for training, construct a second SVM regression model, and input each feature vector in the second test set into the second SVM regression model for testing, obtaining the objective quality prediction score of the distorted super-resolution reconstructed image corresponding to each feature vector in the second test set;
likewise, input all feature vectors and all subjective scores in the third training set into the SVM regression machine for training, construct a third SVM regression model, and input each feature vector in the third test set into the third SVM regression model for testing, obtaining the objective quality prediction score of the distorted super-resolution reconstructed image corresponding to each feature vector in the third test set;
step 9_5: repeat step 9_3 and step 9_4 U times, so that each distorted super-resolution reconstructed image in the super-resolution image library receives at least one objective quality prediction score; then calculate the average of the objective quality prediction scores of each distorted super-resolution reconstructed image in the super-resolution image library, and take it as the final objective quality prediction score of that distorted super-resolution reconstructed image; here U ≥ 1000.
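A sketch of the train/test/repeat loop of steps 9_3 to 9_5 for one scale, using scikit-learn's SVR as the SVM regression machine; the epsilon-SVR with RBF kernel and m = ⌊0.8·d⌋ are illustrative assumptions, and feats/mos/groups are hypothetical array names.

```python
import numpy as np
from sklearn.svm import SVR

def final_scores(feats, mos, groups, d, U=1000, seed=0):
    """feats: (Number, dim) feature vectors; mos: (Number,) subjective scores;
    groups: (Number,) group id of each image. Returns the per-image average
    of the objective quality prediction scores over U random splits."""
    rng = np.random.default_rng(seed)
    preds = [[] for _ in range(len(feats))]
    for _ in range(U):
        train_g = rng.permutation(d)[:int(np.floor(0.8 * d))]   # assumed m
        tr = np.isin(groups, train_g)
        model = SVR(kernel="rbf").fit(feats[tr], mos[tr])  # minimize regression error
        for idx in np.flatnonzero(~tr):                    # images in the test groups
            preds[idx].append(float(model.predict(feats[idx:idx + 1])[0]))
    return np.array([np.mean(p) for p in preds])           # final prediction scores
```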
In step 9_1, each group of distorted super-resolution reconstructed images is generated as follows: a high-resolution image is processed by adjusting the focal length and shooting again, and degraded into a two-scale low-resolution image, a three-scale low-resolution image and a four-scale low-resolution image; then ω_1 super-resolution reconstruction methods are applied to the two-scale low-resolution image, yielding the corresponding ω_1 two-scale super-resolution reconstructed images; likewise, ω_2 super-resolution reconstruction methods are applied to the three-scale low-resolution image, yielding the corresponding ω_2 three-scale super-resolution reconstructed images, and ω_3 super-resolution reconstruction methods are applied to the four-scale low-resolution image, yielding the corresponding ω_3 four-scale super-resolution reconstructed images.
Compared with the prior art, the invention has the advantages that:
1) The method converts the RGB color space, via color space conversion, into a color space more closely correlated with human color perception, which improves how well the extracted features express image quality and allows the recovery effect of the super-resolution reconstructed image to be measured well, thereby improving the consistency between objective evaluation results and subjective human perception.
2) The method constructs the KLT kernels offline and learns the prior information of high-resolution images, which improves how well the extracted features express image quality and allows the recovery effect of the super-resolution reconstructed image to be measured well, thereby improving the consistency between objective evaluation results and subjective human perception.
3) The method takes the asymmetry of the KLT coefficient histogram distribution into account and fits the KLT coefficients with the AGGD model, which improves how well the extracted features express image quality and allows the recovery effect of the super-resolution reconstructed image to be measured well, thereby improving the consistency between objective evaluation results and subjective human perception.
Drawings
FIG. 1 is a block diagram of an overall implementation of the method of the present invention;
FIG. 2 is a high resolution image;
fig. 3a is KLT coefficient histogram distribution corresponding to the KLT coefficient matrix of the MSCN coefficient map of the 1 st color channel of the three-channel image obtained after the color space conversion of the high-resolution image shown in fig. 2;
fig. 3b is KLT coefficient histogram distribution corresponding to the KLT coefficient matrix of the MSCN coefficient map of the 2 nd color channel of the three-channel image obtained after the color space conversion of the high-resolution image shown in fig. 2;
fig. 3c is KLT coefficient histogram distribution corresponding to the KLT coefficient matrix of the MSCN coefficient map of the 3 rd color channel of the three-channel image obtained after color space conversion of the high-resolution image shown in fig. 2.
Detailed Description
The invention is described in further detail below with reference to the accompanying drawings and embodiments.
The invention provides a KLT-technology-based quality evaluation method for super-resolution reconstructed images, the overall implementation block diagram of which is shown in FIG. 1; it comprises the following steps:
step 1: recording the distorted super-resolution reconstruction image as I, and recording the red channel, the green channel and the blue channel of I correspondingly as IR、IG、IBIs shown byRThe pixel value of the pixel point with the middle coordinate position (a, b) is marked as IR(a, b) mixing IGThe pixel value of the pixel point with the middle coordinate position (a, b) is marked as IG(a, b) mixing IBThe pixel value of the pixel point with the middle coordinate position (a, b) is marked as IB(a, b); wherein, a is more than or equal to 1 and less than or equal to W, b is more than or equal to 1 and less than or equal to H, W represents the width of I, and H represents the height of I.
Step 2: performing color space conversion on the I to obtain a corresponding three-channel image, marking the three-channel image as O, and correspondingly marking the 1 st color channel, the 2 nd color channel and the 3 rd color channel of the O as O1、O2、O3Introducing O into1The pixel value of the pixel point with the middle coordinate position (a, b) is recorded as O1(a, b) reacting O2The pixel value of the pixel point with the middle coordinate position (a, b) is recorded as O2(a, b) reacting O3The pixel value of the pixel point with the middle coordinate position (a, b) is recorded as O3(a,b),
Figure GDA0003602550600000131
Then, the existing MSCN (mean sub-mapped Contrast normalized) technology is used to calculate O1、O2、O3Respective MSCN coefficientDrawing, corresponding notation
Figure GDA0003602550600000132
Figure GDA0003602550600000133
Wherein, O, O1、O2、O3
Figure GDA0003602550600000134
All of which have a width of W and a height of H, symbol]"represents a symbol as a vector or matrix.
Step 3: Divide each of Ô_1, Ô_2, Ô_3 into S non-overlapping image blocks of size √K × √K; then vectorize each image block in each of Ô_1, Ô_2, Ô_3; then splice the vectorization results obtained after all image blocks of each map are vectorized into a vectorization matrix: record the vectorization matrix spliced from the vectorization results of all image blocks in Ô_1 as Z_1, each column of Z_1 being the vectorization result, of dimension K×1, of one image block of Ô_1; record the vectorization matrix for Ô_2 as Z_2, each column of Z_2 being the K×1 vectorization result of one image block of Ô_2; and record the vectorization matrix for Ô_3 as Z_3, each column of Z_3 being the K×1 vectorization result of one image block of Ô_3. Here S = ⌊W/√K⌋ × ⌊H/√K⌋, the symbol ⌊·⌋ is the rounding-down operation sign, K has a value of 4, 9, 16, 25, 36, 49 or 64, and Z_1, Z_2, Z_3 all have dimension K×S.
Here, vectorizing an image block is a conventional technical means: all pixel points in the image block are arranged in a certain order (for example, in row-scanning order: the first row first, then the second row, and so on) to form a column vector. When the vectorization results are spliced into the vectorization matrix, they may be spliced in the order of the image blocks; for example, the 1st column of the vectorization matrix is the K×1 vectorization result of the 1st image block, and the S-th column is the K×1 vectorization result of the S-th image block. A toy illustration is sketched below.
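The following snippet shows this row-scanning vectorization for a single 2×2 block (K = 4):

```python
import numpy as np

block = np.array([[1, 2],
                  [3, 4]])      # one 2 x 2 image block (K = 4)
column = block.reshape(-1, 1)   # row-scan: first row, then second row
print(column.ravel())           # -> [1 2 3 4], the K x 1 vectorization result
```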
Step 4: Using a number of original high-resolution images, obtain offline the KLT kernels of the three color channels of the same kind of three-channel image as in step 3, and record the KLT kernel of the 1st color channel, the KLT kernel of the 2nd color channel and the KLT kernel of the 3rd color channel correspondingly as P_1, P_2, P_3; here P_1, P_2, P_3 all have dimension K×K.
In this embodiment, the specific process of step 4 is:
Step 4_1: Select Num original high-resolution images to form an image set, and record the i-th original high-resolution image in the image set as X_i; record the red, green and blue channels of X_i correspondingly as X_{i,R}, X_{i,G}, X_{i,B}, and record the pixel value of the pixel point at coordinate position (a',b') in X_{i,R} as X_{i,R}(a',b'), in X_{i,G} as X_{i,G}(a',b'), and in X_{i,B} as X_{i,B}(a',b'). Here Num ≥ 1 (in this embodiment, Num = 60), 1 ≤ i ≤ Num, X_i has width M_i and height N_i, 1 ≤ a' ≤ M_i, 1 ≤ b' ≤ N_i; the Num original high-resolution images may differ in size.
Step 4_2: Perform color space conversion on each original high-resolution image in the image set to obtain the corresponding three-channel image; record the three-channel image obtained from X_i after color space conversion as Y_i, and mark the 1st, 2nd and 3rd color channels of Y_i correspondingly as Y_{i,1}, Y_{i,2}, Y_{i,3}, with pixel values Y_{i,1}(a',b'), Y_{i,2}(a',b'), Y_{i,3}(a',b') at coordinate position (a',b'); the conversion matrix is the same as in step 2 and survives only as a formula image (GDA0003602550600000141) in the original publication. Then, using the existing MSCN (Mean Subtracted Contrast Normalized) technique, calculate the MSCN coefficient map of each color channel of the three-channel image obtained from each original high-resolution image in the image set, marking the MSCN coefficient maps of Y_{i,1}, Y_{i,2}, Y_{i,3} correspondingly as Ŷ_{i,1}, Ŷ_{i,2}, Ŷ_{i,3}. Here Y_i, Y_{i,1}, Y_{i,2}, Y_{i,3}, Ŷ_{i,1}, Ŷ_{i,2}, Ŷ_{i,3} all have width M_i and height N_i, and "[ ]" is the vector or matrix representation symbol.
Step 4_3: Divide the MSCN coefficient map of each color channel of the three-channel image obtained from each original high-resolution image in the image set into non-overlapping image blocks of size √K × √K; Ŷ_{i,1}, Ŷ_{i,2}, Ŷ_{i,3} are each divided into S_i non-overlapping image blocks of size √K × √K. Then vectorize each image block in each of these MSCN coefficient maps; then splice the vectorization results of all image blocks of each MSCN coefficient map into a vectorization matrix: record the vectorization matrix for Ŷ_{i,1} as L_{i,1}, each column of L_{i,1} being the K×1 vectorization result of one image block of Ŷ_{i,1}; record the vectorization matrix for Ŷ_{i,2} as L_{i,2}, each column of L_{i,2} being the K×1 vectorization result of one image block of Ŷ_{i,2}; and record the vectorization matrix for Ŷ_{i,3} as L_{i,3}, each column of L_{i,3} being the K×1 vectorization result of one image block of Ŷ_{i,3}. Here S_i = ⌊M_i/√K⌋ × ⌊N_i/√K⌋, the symbol ⌊·⌋ is the rounding-down operation sign, K has a value of 4, 9, 16, 25, 36, 49 or 64, and L_{i,1}, L_{i,2}, L_{i,3} all have dimension K×S_i.
The vectorization processing of an image block is a conventional technical means: all pixel points in the image block are arranged in a certain order (for example, in row-scanning order: the first row first, then the second row, and so on) to form a column vector; when the vectorization results are spliced into the vectorization matrix, they may be spliced in the order of the image blocks, e.g., the 1st column of the vectorization matrix is the K×1 vectorization result of the 1st image block, and the S_i-th column is the K×1 vectorization result of the S_i-th image block.
Step 4_4: Splice the vectorization matrices corresponding to the MSCN coefficient maps of the 1st color channel of the three-channel images obtained from all original high-resolution images in the image set, in the row direction, into a first prior matrix, recorded as V_1, V_1 = [L_{1,1} … L_{i,1} … L_{Num,1}]; similarly, splice those of the 2nd color channel into a second prior matrix, recorded as V_2, V_2 = [L_{1,2} … L_{i,2} … L_{Num,2}], and those of the 3rd color channel into a third prior matrix, recorded as V_3, V_3 = [L_{1,3} … L_{i,3} … L_{Num,3}]. Then compute the covariance matrices of V_1, V_2, V_3, recorded correspondingly as C_1, C_2, C_3: C_j = (1/Sum) Σ_{t=1}^{Sum} (v_j(t) − v̄_j)(v_j(t) − v̄_j)^T, j = 1, 2, 3. Here L_{i,1}, L_{i,2}, L_{i,3} are the vectorization matrices defined in step 4_3 for the i-th image (i = 1, …, Num), Sum denotes the number of column vectors in V_1 (equivalently in V_2 or V_3), 1 ≤ t ≤ Sum, v_j(t) denotes the t-th column vector of V_j and has dimension K×1, v̄_j denotes the mean vector obtained by averaging V_j along the rows and also has dimension K×1, the superscript "T" denotes the transpose of a vector or matrix, and C_j has dimension K×K.
Step 4_5: Using the existing eigenvalue decomposition technique, solve for the K eigenvalues of C_1 and the corresponding K eigenvectors; then arrange the K eigenvectors of C_1 in descending order of their corresponding eigenvalues, take the arrangement result as the prior information extracted from V_1, and use the prior information extracted from V_1 as the KLT kernel of the 1st color channel, P_1 = [p_1(1) p_1(2) … p_1(K)].
Likewise, solve for the K eigenvalues of C_2 and the corresponding K eigenvectors, arrange the K eigenvectors in descending order of their corresponding eigenvalues, and use the prior information extracted from V_2 as the KLT kernel of the 2nd color channel, P_2 = [p_2(1) p_2(2) … p_2(K)].
Solve for the K eigenvalues of C_3 and the corresponding K eigenvectors, arrange the K eigenvectors in descending order of their corresponding eigenvalues, and use the prior information extracted from V_3 as the KLT kernel of the 3rd color channel, P_3 = [p_3(1) p_3(2) … p_3(K)].
Here each eigenvector has dimension K×1, P_1, P_2, P_3 all have dimension K×K, and p_j(1), p_j(2), …, p_j(K) (j = 1, 2, 3) denote the 1st, 2nd, …, K-th eigenvectors of C_j after the K eigenvectors are arranged in descending order of their corresponding K eigenvalues.
Step 5: Calculate the KLT coefficient matrices of Ô_1, Ô_2, Ô_3, recorded correspondingly as A_1, A_2, A_3: A_1 = (P_1)^T Z_1, A_2 = (P_2)^T Z_2, A_3 = (P_3)^T Z_3; here the superscript "T" denotes the transpose of a vector or matrix, and A_1, A_2, A_3 all have dimension K×S.
Step 6: Obtain the first feature vector of O_1, denoted fa_1. The acquisition process is: using the moment matching method, perform Asymmetric Generalized Gaussian Distribution (AGGD) parameter estimation on each row vector of A_1 in order, and then on the row vector obtained by splicing all row vectors of A_1, yielding K+1 groups of AGGD parameters: the 1st group of AGGD parameters corresponds to the 1st row vector of A_1, the 2nd group to the 2nd row vector, and so on; the K-th group corresponds to the K-th row vector of A_1, and the (K+1)-th group corresponds to the row vector spliced from all row vectors of A_1; each group of AGGD parameters consists of a shape parameter, a left scale parameter and a right scale parameter, all greater than 0. Then take the column vector formed by arranging the K+1 groups of AGGD parameters corresponding to A_1 in order as the first feature vector fa_1 of O_1, fa_1 = [Aggd_1(1) Aggd_1(2) … Aggd_1(K) Aggd_1(K+1)]^T, where Aggd_1(k) (k = 1, …, K+1) denotes the k-th group of AGGD parameters corresponding to A_1; fa_1 has dimension 3(K+1)×1.
Likewise, obtain the first feature vector of O_2, denoted fa_2, by performing the same AGGD parameter estimation on A_2: fa_2 = [Aggd_2(1) Aggd_2(2) … Aggd_2(K) Aggd_2(K+1)]^T, where Aggd_2(k) denotes the k-th group of AGGD parameters corresponding to A_2; fa_2 has dimension 3(K+1)×1.
Obtain the first feature vector of O_3, denoted fa_3, by performing the same AGGD parameter estimation on A_3: fa_3 = [Aggd_3(1) Aggd_3(2) … Aggd_3(K) Aggd_3(K+1)]^T, where Aggd_3(k) denotes the k-th group of AGGD parameters corresponding to A_3; fa_3 has dimension 3(K+1)×1.
Here, as A1When all the line vectors in the row vector are spliced into a new line vector, the line vector is in accordance with A1The row vectors in the sequence (in the sequence of row numbers) are spliced.
Before the AGGD parameter estimation, the KLT coefficients in the KLT coefficient matrices are analyzed statistically. Fig. 2 shows a high-resolution image; following the processes of step 1 to step 5 in the same manner, the KLT coefficient matrix of the MSCN coefficient map of each color channel of the three-channel image obtained by color space conversion of this high-resolution image is computed. Then, for the KLT coefficient matrix of each color channel's MSCN coefficient map, all KLT coefficients are taken to draw a histogram: Fig. 3a shows the KLT coefficient histogram for the 1st color channel, Fig. 3b for the 2nd color channel, and Fig. 3c for the 3rd color channel. The histograms of all three color channels exhibit Gaussian-like distributions; since these distributions are not completely symmetrical, the AGGD is the more suitable model for parameter fitting in the method of the present invention.
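The moment matching method named in step 6 is not spelled out in the patent; the sketch below follows the standard AGGD moment-matching estimator from the natural scene statistics literature (as used, e.g., in BRISQUE), which is an assumption about the intended implementation.

import numpy as np
from scipy.special import gamma

def estimate_aggd(x):
    # Returns (shape, left scale, right scale), all > 0, for the sample x.
    x = np.asarray(x, dtype=np.float64).ravel()
    left, right = x[x < 0], x[x >= 0]
    sigma_l = np.sqrt(np.mean(left ** 2)) if left.size else 1e-6
    sigma_r = np.sqrt(np.mean(right ** 2)) if right.size else 1e-6
    gamma_hat = sigma_l / sigma_r
    r_hat = np.mean(np.abs(x)) ** 2 / np.mean(x ** 2)
    R_hat = r_hat * (gamma_hat ** 3 + 1) * (gamma_hat + 1) / (gamma_hat ** 2 + 1) ** 2
    # Invert rho(alpha) = Gamma(2/alpha)^2 / (Gamma(1/alpha) * Gamma(3/alpha))
    # by a grid search over candidate shape parameters.
    alphas = np.arange(0.2, 10.0, 0.001)
    rho = gamma(2 / alphas) ** 2 / (gamma(1 / alphas) * gamma(3 / alphas))
    alpha = alphas[np.argmin((rho - R_hat) ** 2)]
    return alpha, sigma_l, sigma_r

Applied to each of the K rows of a KLT coefficient matrix and once to the row vector spliced from all its rows, such an estimator yields the K+1 parameter groups, i.e. the 3(K+1) entries of the first feature vector.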
And 7: obtaining O1Second feature vector of (1), denoted as fe1,fe1The acquisition process comprises the following steps: calculation of A1Each of which isEnergy coefficient of row vector, A1The energy coefficient of the h-th row vector in (1) is denoted as e1(h),
Figure GDA0003602550600000201
Then A is mixed1A column vector formed by arranging the energy coefficients of all the row vectors in sequence is taken as O1Second feature vector fe1,fe1=[e1(1) … e1(h) … e1(K)]T(ii) a Wherein h is more than or equal to 1 and less than or equal to K, S is more than or equal to 1 and less than or equal to S, A1(h, s) represents A1S column element of the h row of1(1) Is represented by A1Energy coefficient of the 1 st row vector of (1), e1(K) Is represented by A1Energy coefficient of the K-th row vector of (1), fe1Has a dimension of K × 1.
Likewise, obtaining O2Second feature vector of (1), denoted as fe2,fe2The acquisition process comprises the following steps: calculation of A2Energy coefficient of each row vector in (1), A2The energy coefficient of the h-th row vector in (1) is denoted as e2(h),
Figure GDA0003602550600000202
Then A is mixed2A column vector formed by arranging the energy coefficients of all the row vectors in sequence is taken as O2Second feature vector fe2,fe2=[e2(1) … e2(h) … e2(K)]T(ii) a Wherein A is2(h, s) denotes A2S column element of the h row of2(1) Is shown as A2Energy coefficient of the 1 st row vector of (1), e2(K) Is shown as A2Energy coefficient of the K-th row vector of (1), fe2Has a dimension of K × 1.
Obtaining O3Second feature vector of (1), denoted as fe3,fe3The acquisition process comprises the following steps: calculation of A3Energy coefficient of each row vector in (1), A3The energy coefficient of the h-th row vector in (1) is denoted as e3(h),
Figure GDA0003602550600000203
Then A is mixed3A column vector formed by arranging the energy coefficients of all the row vectors in sequence is taken as O3Second feature vector fe3,fe3=[e3(1) … e3(h) … e3(K)]T(ii) a Wherein A is3(h, s) represents A3S column element of the h row of3(1) Is represented by A3Energy coefficient of the 1 st row vector of (1), e3(K) Is shown as A3Energy coefficient of the K-th row vector of (1), fe3Has dimension K × 1.
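Under the normalized row-energy reading of the formulas above (the equations are rendered as images in the source, so this normalization is an assumption), the second feature vector reduces to a few lines of NumPy:

import numpy as np

def energy_features(A):
    # A: K x S KLT coefficient matrix. Returns the K energy coefficients,
    # here taken as each row's energy normalized by the total energy.
    row_energy = np.sum(A ** 2, axis=1)
    return row_energy / np.sum(row_energy)    # fe, dimension K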
And 8: will fa1And fe1Are spliced into O1Feature vector of (2), noted as f1,f1=[(fa1)T (fe1)T]T(ii) a Similarly, will fa2And fe2Are spliced into O2Feature vector of (2), noted as f2,f2=[(fa2)T (fe2)T]T(ii) a Will fa is3And fe3Are spliced into O3Feature vector of (2), noted as f3,f3=[(fa3)T (fe3)T]T(ii) a Then f is mixed1、f2、f3The feature vectors spliced into I are marked as fea, fea [ f ]1 T f2 Tf3 T]T(ii) a Wherein, f1Has a dimension of (4K + 3). times.1, f2Has a dimension of (4K + 3). times.1, f3Has a dimension of (4K +3) × 1, and has a dimension of 3(4K +3) × 1.
And step 9: taking the feature vector of the distorted super-resolution reconstructed image as input, and calculating to obtain a final objective quality prediction score of the distorted super-resolution reconstructed image by combining an SVM regression machine, wherein the larger the final objective quality prediction score is, the better the quality of the distorted super-resolution reconstructed image corresponding to the input feature vector is; conversely, the worse the quality of the super-resolution reconstructed image showing the distortion corresponding to the input feature vector.
In this embodiment, the specific process of step 9 is:
step 9_ 1: sorting machineD groups of distorted super-resolution reconstruction images are taken, wherein each group comprises omega1Two-scale super-resolution reconstructed image, omega2Amplitude three-scale super-resolution reconstruction image, omega3The method comprises the steps of (1) framing four-scale super-resolution reconstruction images, and forming a super-resolution reconstruction image library by the Number-frame-total distorted super-resolution reconstruction images; wherein d is more than or equal to 60, omega1、ω2、ω3Are all greater than or equal to 1, Number ═ d (omega)123) And each group of distorted super-resolution reconstruction images corresponds to one high-resolution image.
Here, in step 9_1, each group of distorted super-resolution reconstructed images is generated as follows: a high-resolution image is degraded, by adjusting the focal length and shooting again, into a two-scale low-resolution image, a three-scale low-resolution image and a four-scale low-resolution image; then ω1 super-resolution reconstruction methods are applied to the two-scale low-resolution image to obtain the corresponding ω1 two-scale super-resolution reconstructed images; likewise, ω2 super-resolution reconstruction methods are applied to the three-scale low-resolution image to obtain the corresponding ω2 three-scale super-resolution reconstructed images, and ω3 super-resolution reconstruction methods are applied to the four-scale low-resolution image to obtain the corresponding ω3 four-scale super-resolution reconstructed images.
Step 9_ 2: acquiring the characteristic vector of each distorted super-resolution reconstruction image in the super-resolution image library in the same way according to the processes from step 1 to step 8; and acquiring the subjective score of each distorted super-resolution reconstruction image in the super-resolution image library.
Step 9_ 3: randomly selecting m groups of distorted super-resolution reconstruction images from a super-resolution image library, and multiplying m by omega in the selected m groups1Forming a first training set by the feature vectors and subjective scores of the two-scale super-resolution reconstructed images, and selecting m multiplied by omega in m groups2Forming a second training set by the feature vectors and the subjective scores of the three-scale super-resolution reconstructed images, and selecting m multiplied by omega in the m groups3For reconstructing images from super-resolution images of four dimensionsThe feature vectors and subjective scores form a third training set, and the (d-m) × ω in the remaining d-m groups1The feature vectors of the two-dimensional super-resolution reconstructed image form a first test set, and the (d-m) x omega in the rest d-m groups are used2The feature vectors of the three-scale super-resolution reconstructed images form a second test set, and the (d-m) x omega in the rest d-m groups are used3Feature vectors of the four-scale super-resolution reconstruction images form a third test set; wherein, the first and the second end of the pipe are connected with each other,
Figure GDA0003602550600000221
(symbol)
Figure GDA0003602550600000222
is a rounded-down operation sign.
Step 9_ 4: inputting all feature vectors and all subjective scores in the first training set into an SVM regression machine for training by adopting the SVM regression machine as a machine learning method, so that the error between a regression function value obtained through training and the subjective score is minimum, and constructing to obtain a first SVM regression model; and then inputting each feature vector in the first test set into a first SVM regression model for testing to obtain an objective quality prediction score of the distorted super-resolution reconstructed image corresponding to each feature vector in the first test set.
Similarly, an SVM regression machine is used as a machine learning method, all feature vectors and all subjective scores in the second training set are input into the SVM regression machine for training, so that the error between a regression function value obtained through training and the subjective score is minimum, and a second SVM regression model is constructed; and then inputting each feature vector in the second test set into a second SVM regression model for testing to obtain an objective quality prediction score of the distorted super-resolution reconstructed image corresponding to each feature vector in the second test set.
Inputting all feature vectors and all subjective scores in a third training set into the SVM regression machine for training by adopting an SVM regression machine as a machine learning method, so that the error between a regression function value obtained through training and the subjective scores is minimum, and constructing to obtain a third SVM regression model; and then inputting each feature vector in the third test set into a third SVM regression model for testing to obtain an objective quality prediction score of the distorted super-resolution reconstructed image corresponding to each feature vector in the third test set.
Step 9_ 5: repeatedly executing the step 9_3 and the step 9_4 for U times, so that the objective quality prediction score of each distorted super-resolution reconstruction image in the super-resolution image library is at least 1; then calculating the average value of a plurality of objective quality prediction scores of each distorted super-resolution reconstruction image in a super-resolution image library as the final objective quality prediction score of the distorted super-resolution reconstruction image; wherein U is more than or equal to 1000.
To further illustrate the feasibility and effectiveness of the method of the present invention, the method of the present invention was tested.
Sixty original high-resolution images are selected, and for each of them a two-scale low-resolution image, a three-scale low-resolution image and a four-scale low-resolution image are obtained by adjusting the focal length and shooting again. The image content of each low-resolution image is consistent with that of the high-resolution image; only the resolution differs: the width and height of the two-scale low-resolution image are 1/2 of the width and height of the original high-resolution image, those of the three-scale low-resolution image are 1/3, and those of the four-scale low-resolution image are 1/4.
For the two-scale low-resolution images, 10 super-resolution reconstruction methods are used for reconstruction: ASDS (W. Dong, L. Zhang, G. Shi, and X. Wu, "Image deblurring and super-resolution by adaptive sparse domain selection and adaptive regularization," IEEE Transactions on Image Processing, vol. 20, no. 7, pp. 1838-1857, 2011), USRnet (K. Zhang, L. Van Gool, and R. Timofte, "Deep unfolding network for image super-resolution," in 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020, pp. 3214-3223), VDSR (J. Kim, J. K. Lee, and K. M. Lee, "Accurate image super-resolution using very deep convolutional networks," in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 1646-1654), CSCN (Z. Wang, D. Liu, J. Yang, W. Han, and T. Huang, "Deep networks for image super-resolution with sparse prior," in 2015 IEEE International Conference on Computer Vision (ICCV), 2015, pp. 370-378), SRCNN (C. Dong, C. C. Loy, K. He, and X. Tang, "Learning a deep convolutional network for image super-resolution," in European Conference on Computer Vision (ECCV), 2014, pp. 184-199), BCI (bicubic interpolation), SPM (T. Peleg and M. Elad, "A statistical prediction model based on sparse representations for single image super-resolution," IEEE Transactions on Image Processing, vol. 23, no. 6, pp. 2569-2582, 2014), Aplus (R. Timofte, V. De Smet, and L. Van Gool, "A+: Adjusted anchored neighborhood regression for fast super-resolution," in Asian Conference on Computer Vision (ACCV), 2014, pp. 111-126), AIS (E. Pérez-Pellitero, J. Salvador, J. Ruiz-Hidalgo, and B. Rosenhahn, "Antipodally invariant metrics for fast regression-based super-resolution," IEEE Transactions on Image Processing, vol. 25, no. 6, pp. 2456-2468, 2016), and SRGAN (C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang, and W. Shi, "Photo-realistic single image super-resolution using a generative adversarial network," in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 105-114). For the three-scale low-resolution images, 9 super-resolution reconstruction methods are used: ASDS, USRnet, VDSR, CSCN, SRCNN, BCI, SPM, Aplus, and AIS. For the four-scale low-resolution images, 8 super-resolution reconstruction methods are used: ASDS, USRnet, VDSR, CSCN, SRCNN, BCI, Aplus, and SRGAN.
The super-resolution image library is composed of 600 two-scale super-resolution reconstruction images, 540 three-scale super-resolution reconstruction images and 480 four-scale super-resolution reconstruction images.
K is set to 64, and 15 existing no-reference image quality evaluation methods are selected for comparative study, including GM-LOG (W. Xue, X. Mou, L. Zhang, A. C. Bovik, and X. Feng, "Blind image quality assessment using joint statistics of gradient magnitude and Laplacian features," IEEE Transactions on Image Processing, vol. 23, no. 11, pp. 4850-4862, 2014), BLIINDS-II (M. Saad, A. C. Bovik, and C. Charrier, "Blind image quality assessment: A natural scene statistics approach in the DCT domain," IEEE Transactions on Image Processing, vol. 21, no. 8, pp. 3339-3352, 2012), CurveletQA (L. Liu, H. Dong, H. Huang, and A. C. Bovik, "No-reference image quality assessment in curvelet domain," Signal Processing: Image Communication, vol. 29, no. 4, pp. 494-505, 2014), BRISQUE (A. Mittal, A. K. Moorthy, and A. C. Bovik, "No-reference image quality assessment in the spatial domain," IEEE Transactions on Image Processing, vol. 21, no. 12, pp. 4695-4708, 2012), OG-IQA (L. Liu, Y. Hua, Q. Zhao, H. Huang, and A. C. Bovik, "Blind image quality assessment by relative gradient statistics and adaboosting neural network," Signal Processing: Image Communication, vol. 40, pp. 1-15, 2016), DIIVINE (A. K. Moorthy and A. C. Bovik, "Blind image quality assessment: From natural scene statistics to perceptual quality," IEEE Transactions on Image Processing, vol. 20, no. 12, pp. 3350-3364, 2011), RISE (L. Li, W. Xia, W. Lin, Y. Fang, and S. Wang, "No-reference and robust image sharpness evaluation based on multiscale spatial and spectral features," IEEE Transactions on Multimedia, vol. 19, no. 5, pp. 1030-1040, 2017), FRIQUEE (D. Ghadiyaram and A. C. Bovik, "Perceptual quality prediction on authentically distorted images using a bag of features approach," Journal of Vision, vol. 17, no. 1, 2017), NIQE (A. Mittal, R. Soundararajan, and A. C. Bovik, "Making a 'completely blind' image quality analyzer," IEEE Signal Processing Letters, vol. 20, no. 3, pp. 209-212, 2013), ILNIQE (L. Zhang, L. Zhang, and A. C. Bovik, "A feature-enriched completely blind image quality evaluator," IEEE Transactions on Image Processing, vol. 24, no. 8, pp. 2579-2591, 2015), PCRL (B. Hu, L. Li, H. Liu, W. Lin, and J. Qian, "Pairwise-comparison-based rank learning for benchmarking image restoration algorithms," IEEE Transactions on Multimedia, vol. 21, no. 8, pp. 2042-2056, 2019), and SR-metric (C. Ma, C. Y. Yang, X. Yang, and M. H. Yang, "Learning a no-reference quality metric for single-image super-resolution," Computer Vision and Image Understanding, vol. 158, pp. 1-16, 2017). For each evaluation method, the objective score of each super-resolution reconstructed image is calculated using the SVM group-wise training/testing procedure described above. For the super-resolution reconstructed images at each scale, the Kendall rank-order correlation coefficient (KROCC) and the Spearman rank-order correlation coefficient (SROCC) between the objective scores and the subjective scores are calculated as performance indices; the results are shown in Table 1. In Table 1, the "average" column gives the correlation coefficients at the two, three and four scales weighted-averaged by the number of super-resolution reconstructed images at each scale, and the data of the top-3 performance ranks are shown in bold.
TABLE 1 Performance comparison between the method of the present invention and 15 existing no-reference image quality evaluation methods
[Table 1 is rendered as an image in the source.]
As can be seen from Table 1, the Kendall and Spearman correlation coefficients of the method of the present invention are the largest at the two, three and four scales, demonstrating that the method achieves the best performance.
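For reference, the two correlation indices reported in Table 1 can be computed directly with SciPy; a small sketch follows (variable names are illustrative):

from scipy import stats

def rank_correlations(objective, subjective):
    # KROCC and SROCC between objective predictions and subjective scores.
    krocc, _ = stats.kendalltau(objective, subjective)
    srocc, _ = stats.spearmanr(objective, subjective)
    return krocc, srocc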
The performance of each part of the features is tested as follows. The feature vector of the super-resolution reconstructed image in the method of the present invention consists of two parts: the AGGD fitting parameters of the KLT coefficients (the first feature vector) and the KLT energy coefficients (the second feature vector). The experimental results of testing each part alone and of testing the fused features are shown in Table 2.
TABLE 2 different types of characteristic Performance test experiments
[Table 2 is rendered as an image in the source.]
It can be seen from table 2 that the AGGD fitting parameters of the KLT coefficients contribute more to the performance of the inventive method.
The performance of the features extracted from each color channel is tested as follows. The method of the present invention performs color space conversion on the original input image, converting the RGB image into a three-channel image; the features extracted from each of the three color channels are tested independently, and the test results are shown in Table 3.
TABLE 3 characteristic Performance test experiments extracted for different color channels
[Table 3 is rendered as an image in the source.]
As can be seen from table 3, the performance contributions of the different color channels to the inventive method are similar.

Claims (4)

1. A quality evaluation method for a super-resolution reconstruction image based on a KLT technology is characterized by comprising the following steps:
step 1: recording the distorted super-resolution reconstruction image as I, and recording the red channel, the green channel and the blue channel of I correspondingly as IR、IG、IBIs shown byRThe pixel value of the pixel point with the middle coordinate position (a, b) is marked as IR(a, b) mixing IGThe pixel value of the pixel point with the middle coordinate position (a, b) is marked as IG(a, b) mixing IBThe pixel value of the pixel point with the middle coordinate position (a, b) is marked as IB(a,b) (ii) a Wherein a is more than or equal to 1 and less than or equal to W, b is more than or equal to 1 and less than or equal to H, W represents the width of I, and H represents the height of I;
Step 2: Perform color space conversion on I to obtain the corresponding three-channel image, denoted O, and record the 1st color channel, 2nd color channel and 3rd color channel of O correspondingly as O1, O2, O3; record the pixel value of the pixel point with coordinate position (a,b) in O1 as O1(a,b), that in O2 as O2(a,b), and that in O3 as O3(a,b), the values O1(a,b), O2(a,b), O3(a,b) being obtained from IR(a,b), IG(a,b), IB(a,b) by the color space conversion formula (rendered as an image in the original); then calculate the MSCN coefficient maps of O1, O2, O3 using the MSCN technique; wherein O, O1, O2, O3 and their MSCN coefficient maps all have width W and height H, and the symbol "[]" is a vector or matrix representation symbol;
Step 3: Divide the MSCN coefficient maps of O1, O2 and O3 each into S non-overlapping image blocks of size √K × √K; then perform vectorization processing on each image block in each MSCN coefficient map; then splice the vectorization results of all image blocks of each MSCN coefficient map into a vectorization matrix: the vectorization matrix of the MSCN coefficient map of O1 is recorded as Z1, each column of Z1 being the vectorization result, of dimension K×1, of one image block of that map; the vectorization matrix of the MSCN coefficient map of O2 is recorded as Z2, each column of Z2 being the vectorization result, of dimension K×1, of one image block of that map; and the vectorization matrix of the MSCN coefficient map of O3 is recorded as Z3, each column of Z3 being the vectorization result, of dimension K×1, of one image block of that map; wherein S = ⌊W/√K⌋ × ⌊H/√K⌋, the symbol ⌊ ⌋ is the round-down operation sign, K takes the value 4, 9, 16, 25, 36, 49 or 64, and Z1, Z2, Z3 all have dimension K×S;
Step 4: Obtain offline, using several original high-resolution images, the KLT kernels of the three color channels of the same kind of three-channel image as in step 3, and record the KLT kernel of the 1st color channel, the KLT kernel of the 2nd color channel and the KLT kernel of the 3rd color channel as P1, P2, P3; wherein P1, P2, P3 all have dimension K×K;
and 5: calculating out
Figure FDA0003602550590000024
Respective KLT coefficient matrix, denoted A1、A2、A3,A1=(P1)TZ1,A2=(P2)TZ2,A3=(P3)TZ3(ii) a Wherein the superscript "T" denotes the transpose of the vector or matrix, A1、A2、A3The dimensions of (A) are KxS;
and 6: obtaining O1First feature vector of (a) noted fa1,fa1The acquisition process comprises the following steps: using a moment matching method to A1Sequentially carrying out asymmetric generalized Gaussian distribution parameter estimation on each row vector and row vectors spliced by all the row vectors in the set to obtain K +1 groups of AGGD parameters, wherein the 1 st group of AGGD parameters corresponds to A1The 1 st row vector in (1), the 2 nd set of AGGD parameters correspond to A1The 2 nd row vector in (1), and so on, the K-th group of AGGD parameters correspond to A1The K-th row vector of (1), the K + 1-th set of AGGD parameters correspond to A1All the row vectors in the AGGD system are spliced into row vectors, each group of AGGD parameters consists of a shape parameter, a left scale parameter and a right scale parameter, and the shape parameter, the left scale parameter and the right scale parameter are all larger than 0; then A is mixed1Column direction formed by arranging corresponding K +1 sets of AGGD parameters in sequenceAmount as O1First feature vector fa of1,fa1=[Aggd1(1) Aggd1(2)…Aggd1(K) Aggd1(K+1)]T(ii) a Wherein, Aggd1(1) Is shown as A1Corresponding set 1 AGGD parameters, Aggd1(2) Is represented by A1Corresponding group 2 AGGD parameters, Aggd1(K) Is shown as A1Corresponding K-th set of AGGD parameters, Aggd1(K +1) represents A1Corresponding K +1 th group of AGGD parameters, fa1Has a dimension of 3(K + 1). times.1;
likewise, obtaining O2First feature vector of (a) noted fa2,fa2The acquisition process comprises the following steps: using a moment matching method to A2Sequentially estimating asymmetric generalized Gaussian distribution parameters of each row vector and row vectors formed by splicing all the row vectors to obtain K +1 groups of AGGD parameters, wherein the 1 st group of AGGD parameters correspond to A2The 1 st row vector in (1), the 2 nd set of AGGD parameters correspond to A2The 2 nd row vector in (1), and so on, the K-th group of AGGD parameters corresponds to A2The K-th row vector in (1), the K + 1-th set of AGGD parameters correspond to A2All the line vectors in the set are spliced into line vectors, each group of AGGD parameters consists of a shape parameter, a left scale parameter and a right scale parameter, and the shape parameter, the left scale parameter and the right scale parameter are all larger than 0; then A is added2The column vector formed by arranging the corresponding K +1 sets of AGGD parameters in sequence is taken as O2First feature vector fa of2,fa2=[Aggd2(1) Aggd2(2)…Aggd2(K) Aggd2(K+1)]T(ii) a Wherein, Aggd2(1) Is shown as A2Corresponding group 1 AGGD parameter, Aggd2(2) Is represented by A2Corresponding group 2 AGGD parameters, Aggd2(K) Is represented by A2Corresponding K-th set of AGGD parameters, Aggd2(K +1) represents A2Corresponding K +1 th group of AGGD parameters, fa2Has a dimension of 3(K + 1). times.1;
obtaining O3First feature vector of (a) noted fa3,fa3The acquisition process comprises the following steps: using a moment matching method to A3Each row vector and the row direction formed by splicing all the row vectors in theSequentially estimating asymmetric generalized Gaussian distribution parameters according to the sequence to obtain K +1 groups of AGGD parameters, wherein the 1 st group of AGGD parameters corresponds to A3The 1 st row vector in (1), the 2 nd set of AGGD parameters correspond to A3The 2 nd row vector in (1), and so on, the K-th group of AGGD parameters corresponds to A3The K-th row vector in (1), the K + 1-th set of AGGD parameters correspond to A3All the row vectors in the AGGD system are spliced into row vectors, each group of AGGD parameters consists of a shape parameter, a left scale parameter and a right scale parameter, and the shape parameter, the left scale parameter and the right scale parameter are all larger than 0; then A is added3A column vector formed by arranging corresponding K +1 sets of AGGD parameters in sequence is taken as O3First feature vector fa of3,fa3=[Aggd3(1) Aggd3(2)…Aggd3(K) Aggd3(K+1)]T(ii) a Wherein, Aggd3(1) Is represented by A3Corresponding group 1 AGGD parameter, Aggd3(2) Is shown as A3Corresponding group 2 AGGD parameters, Aggd3(K) Is represented by A3Corresponding K-th set of AGGD parameters, Aggd3(K +1) represents A3Corresponding K +1 th group of AGGD parameters, fa3Has a dimension of 3(K + 1). times.1;
and 7: obtaining O1Second feature vector of (1), denoted as fe1,fe1The acquisition process comprises the following steps: calculation of A1Energy coefficient of each row vector in (1), A1The energy coefficient of the h-th row vector in (1) is recorded as e1(h),
Figure FDA0003602550590000041
Then A is added1A column vector formed by arranging the energy coefficients of all the row vectors in sequence is taken as O1Second feature vector fe1,fe1=[e1(1)…e1(h)…e1(K)]T(ii) a Wherein h is more than or equal to 1 and less than or equal to K, S is more than or equal to 1 and less than or equal to S, A1(h, s) represents A1S column element of the h row of (1), e1(1) Is represented by A1Energy coefficient of the 1 st row vector of (1), e1(K) Is shown as A1Energy coefficient of the K-th row vector of (1), fe1Has a dimension of K × 1;
likewise, obtain O2Second feature vector of (1), denoted as fe2,fe2The acquisition process comprises the following steps: calculation of A2Energy coefficient of each row vector in (1), A2The energy coefficient of the h-th row vector in (1) is denoted as e2(h),
Figure FDA0003602550590000042
Then A is mixed2A column vector formed by arranging the energy coefficients of all the row vectors in sequence is taken as O2Second feature vector fe2,fe2=[e2(1)…e2(h)…e2(K)]T(ii) a Wherein A is2(h, s) denotes A2S column element of the h row of2(1) Is represented by A2Energy coefficient of the 1 st row vector of (1), e2(K) Is represented by A2Energy coefficient of the K-th row vector of (1), fe2Has a dimension of K × 1;
obtaining O3Second feature vector of (1), denoted as fe3,fe3The acquisition process comprises the following steps: calculation of A3Energy coefficient of each row vector in (1), A3The energy coefficient of the h-th row vector in (1) is denoted as e3(h),
Figure FDA0003602550590000043
Then A is mixed3A column vector formed by arranging the energy coefficients of all the row vectors in sequence is taken as O3Second feature vector fe3,fe3=[e3(1)…e3(h)…e3(K)]T(ii) a Wherein A is3(h, s) denotes A3S column element of the h row of3(1) Is represented by A3Energy coefficient of the 1 st row vector of (1), e3(K) Is shown as A3Energy coefficient of the K-th row vector of (1), fe3Has a dimension of K × 1;
and 8: will fa1And fe1Are spliced into O1Feature vector of (2), noted as f1,f1=[(fa1)T (fe1)T]T(ii) a Similarly, will fa2And fe2Are spliced into O2Feature vector of (2), noted as f2,f2=[(fa2)T (fe2)T]T(ii) a Will fa is3And fe3Are spliced into O3Is noted as f3,f3=[(fa3)T (fe3)T]T(ii) a Then f is put1、f2、f3The feature vectors spliced to form I, denoted fea,
Figure FDA0003602550590000051
wherein f is1Has a dimension of (4K +3) x 1, f2Has a dimension of (4K +3) x 1, f3Has a dimension of (4K +3) × 1, and has a dimension of 3(4K +3) × 1;
and step 9: taking the feature vector of the distorted super-resolution reconstructed image as input, and calculating to obtain a final objective quality prediction score of the distorted super-resolution reconstructed image by combining an SVM regression machine, wherein the larger the final objective quality prediction score is, the better the quality of the distorted super-resolution reconstructed image corresponding to the input feature vector is; conversely, the worse the quality of the super-resolution reconstructed image, which indicates the distortion corresponding to the input feature vector.
2. The method for evaluating quality of super-resolution reconstructed image based on KLT technology as claimed in claim 1, wherein the specific process of step 4 is:
step 4_ 1: selecting Num original high-resolution images, forming an image set, and marking the ith original high-resolution image in the image set as XiIs mixing XiIs correspondingly marked as X for the red channel, the green channel and the blue channeli,R、Xi,G、Xi,BIs mixing Xi,RThe pixel value of the pixel point with the middle coordinate position (a ', b') is marked as Xi,R(a ', b') reacting Xi,GThe pixel value of the pixel point with the middle coordinate position (a ', b') is marked as Xi,G(a ', b') reacting Xi,BThe pixel value of the pixel point with the middle coordinate position (a ', b') is marked as Xi,B(a ', b'); wherein Num is more than or equal to 1, i is more than or equal to 1 and less than or equal to Num,XiHas a width of MiAnd a height of Ni,1≤a'≤Mi,1≤b'≤Ni
Step 4_ 2: color space conversion is carried out on each original high-resolution image in the image set to obtain a corresponding three-channel image, and X is convertediThe three-channel image obtained after color space conversion is recorded as YiIs a reaction of YiThe 1 st color channel, the 2 nd color channel and the 3 rd color channel are correspondingly marked as Yi,1、Yi,2、Yi,3Is a reaction of Yi,1The pixel value of the pixel point with the middle coordinate position of (a ', b') is marked as Yi,1(a ', b') converting Yi,2The pixel value of the pixel point with the middle coordinate position of (a ', b') is marked as Yi,2(a ', b') converting Yi,3The pixel value of the pixel point with the middle coordinate position (a ', b') is recorded as Yi,3(a',b'),
Figure FDA0003602550590000052
Then, by utilizing MSCN technology, calculating MSCN coefficient diagram of each color channel of three-channel image obtained by color space conversion of each original high-resolution image in image set, and converting Yi,1、Yi,2、Yi,3The respective MSCN coefficient maps are correspondingly labeled
Figure FDA0003602550590000053
Wherein Y isi、Yi,1、Yi,2、Yi,3
Figure FDA0003602550590000061
Are all M in widthiAnd the heights are all NiThe term "2]"is a vector or matrix representation symbol;
step 4_ 3: the MSCN coefficient graph of each color channel of a three-channel image obtained by color space conversion of each original high-resolution image in an image set is divided into a plurality of non-overlapping size ranges
Figure FDA0003602550590000062
Image block of
Figure FDA0003602550590000063
Are respectively divided into SiEach non-overlapping size is
Figure FDA0003602550590000064
The image block of (1); then, each image block in the MSCN coefficient graph of each color channel of the three-channel image obtained by color space conversion of each original high-resolution image in the image set is subjected to vectorization processing; then all image blocks in the MSCN coefficient graph of each color channel of the three-channel image obtained by color space conversion of each original high-resolution image in the image set are spliced into a vectorization matrix, and the vectorization matrix is formed by splicing vectorization results obtained by carrying out radial quantization processing on all image blocks in the MSCN coefficient graph of each color channel of the three-channel image
Figure FDA0003602550590000065
The vectorization matrix formed by splicing vectorization results obtained after all image blocks are subjected to radial quantization processing is recorded as Li,1,Li,1Each column in (1) is
Figure FDA0003602550590000066
A vectorization result with dimension K multiplied by 1 obtained after the image block is subjected to the quantization processing is obtained
Figure FDA0003602550590000067
The vectorization matrix formed by splicing vectorization results obtained after all image blocks are subjected to radial quantization processing is recorded as Li,2,Li,2Each column in (1) is
Figure FDA0003602550590000068
A vectorization result with dimension K multiplied by 1 obtained after the image block is subjected to the quantization processing is obtained
Figure FDA0003602550590000069
All image blocks in the image are obtained after the quantization processingThe vectorization matrix formed by splicing the vectorization results is marked as Li,3,Li,3Each column in (1) is
Figure FDA00036025505900000610
Obtaining a vectorization result with dimension K multiplied by 1 after one image block is subjected to quantization processing; wherein the content of the first and second substances,
Figure FDA00036025505900000611
(symbol)
Figure FDA00036025505900000612
for rounding-down the sign, K has a value of 4, 9, 16, 25, 36, 49 or 64, Li,1、Li,2、Li,3All dimensions of (are KxSi
Step 4_ 4: splicing vectorization matrixes corresponding to MSCN coefficient graphs of 1 st color channel of three-channel images obtained by color space conversion of all original high-resolution images in an image set into a first prior matrix in the row direction, and marking the first prior matrix as V1,V1=[L1,1…Li,1…LNum,1](ii) a Similarly, the vectorization matrices corresponding to the MSCN coefficient maps of the 2 nd color channel of the three-channel image obtained by color space conversion of all the original high-resolution images in the image set are spliced into a second prior matrix in the row direction, which is denoted as V2,V2=[L1,2…Li,2…LNum,2](ii) a Splicing vectorization matrixes corresponding to MSCN coefficient graphs of 3 rd color channels of three-channel images obtained by color space conversion of all original high-resolution images in the image set into third prior matrixes in the row direction, and marking the third prior matrixes as V3,V3=[L1,3…Li,3…LNum,3](ii) a Then calculate V1、V2、V3Respective covariance matrix, corresponding to C1、C2、C3
Figure FDA0003602550590000071
Wherein the content of the first and second substances,L1,1representing the 1 st original high resolution image X in the image set1Three-channel image Y obtained after color space conversion11 st color channel Y1,1The vectorization matrix is formed by splicing vectorization results obtained after all image blocks in the MSCN coefficient diagram are subjected to radial quantization processing, and L isNum,1Representing the Num original high resolution image X in the image setNumThree-channel image Y obtained after color space conversionNum1 st color channel YNum,1A vectorization matrix L formed by splicing vectorization results obtained by carrying out the vectorization processing on all image blocks in the MSCN coefficient diagram1,2Representing the 1 st original high resolution image X in the image set1Three-channel image Y obtained after color space conversion12 nd color channel Y1,2A vectorization matrix L formed by splicing vectorization results obtained by carrying out the vectorization processing on all image blocks in the MSCN coefficient diagramNum,2Representing the Num original high resolution image X in the image setNumThree-channel image Y obtained after color space conversionNum2 nd color channel YNum,2A vectorization matrix L formed by splicing vectorization results obtained by carrying out the vectorization processing on all image blocks in the MSCN coefficient diagram1,3Representing the 1 st original high resolution image X in the image set1Three-channel image Y obtained after color space conversion1Of the 3 rd color channel Y1,3The vectorization matrix is formed by splicing vectorization results obtained after all image blocks in the MSCN coefficient diagram are subjected to radial quantization processing, and L isNum,3Representing the Num original high resolution image X in the image setNumThree-channel image Y obtained after color space conversionNumOf the 3 rd color channel YNum,3The vectorization matrix is formed by splicing vectorization results obtained after all image blocks in the MSCN coefficient diagram are subjected to the vectorization processing, j is 1,2 and 3, and Sum represents V1、V2Or V3The number of column vectors in (1) t is less than or equal to Sum, vj(t) represents VjThe t-th column vector of (1), vj(t) has the dimension K x 1, vjIs shown to VjTaking the average value according to the rows to obtain an average value vector,
Figure FDA0003602550590000072
Figure FDA0003602550590000073
of dimension K x 1, the superscript "T" denoting the transpose of the vector or matrix, CjHas a dimension of K x K;
step 4_ 5: solving for C by eigenvalue decomposition technique1K eigenvalues and corresponding K eigenvectors; then to C1The K eigenvectors are arranged in a descending way of the corresponding K eigenvalues from big to small, and the arrangement result is taken as the order V1Will be derived from V1Taking the extracted prior information as a KLT core P of a 1 st color channel1,P1=[p1(1) p1(2)…p1(K)];
Likewise, using eigenvalue decomposition techniques, solve for C2K eigenvalues and corresponding K eigenvectors; then to C2The K eigenvectors are arranged in a descending way of the corresponding K eigenvalues from big to small, and the arrangement result is taken as the order V2From the extracted a priori information, will be from V2The extracted prior information is used as a KLT core P of a 2 nd color channel2,P2=[p2(1) p2(2)…p2(K)];
Using eigenvalue decomposition techniques to solve for C3K eigenvalues and corresponding K eigenvectors; then to C3The K eigenvectors are arranged in a descending way of the corresponding K eigenvalues from big to small, and the arrangement result is taken as the order V3From the extracted a priori information, will be from V3Taking the extracted prior information as a KLT core P of a 3 rd color channel3,P3=[p3(1) p3(2)…p3(K)];
Wherein the dimension of the feature vector is Kx 1, P1、P2、P3All dimensions of (a) are KxK, p1(1)、p1(2)、p1(K) Corresponding to represents C1The 1 st eigenvector, the 2 nd eigenvector, the Kth eigenvector, p are arranged in a descending way of the corresponding K eigenvalues from big to small2(1)、p2(2)、p2(K) Corresponding to represents C2The 1 st eigenvector, the 2 nd eigenvector, the Kth eigenvector, p are arranged in a descending way of the corresponding K eigenvalues from big to small3(1)、p3(2)、p3(K) Corresponding to represents C3The K eigenvectors are arranged according to a descending mode of the corresponding K eigenvalues from big to small, and then the 1 st eigenvector, the 2 nd eigenvector and the K eigenvector are obtained.
3. The method for evaluating quality of super-resolution reconstructed image based on KLT technology as claimed in claim 1 or 2, wherein the specific process of step 9 is:
step 9_ 1: d groups of distorted super-resolution reconstruction images are selected, wherein each group comprises omega1Two-scale super-resolution reconstructed image omega2Amplitude three-scale super-resolution reconstruction image omega3The method comprises the steps of (1) framing four-scale super-resolution reconstruction images, and forming a super-resolution reconstruction image library by the Number-frame-total distorted super-resolution reconstruction images; wherein d is more than or equal to 60, omega1、ω2、ω3Are all greater than or equal to 1, Number ═ d (omega)123) Each group of distorted super-resolution reconstruction images corresponds to a high-resolution image;
step 9_ 2: acquiring the characteristic vector of each distorted super-resolution reconstruction image in the super-resolution image library in the same way according to the processes from step 1 to step 8; obtaining the subjective score of each distorted super-resolution reconstruction image in a super-resolution image library;
step 9_ 3: randomly selecting m groups of distorted super-resolution reconstruction images from a super-resolution image library, and multiplying m by omega in the selected m groups1Forming a first training set by the characteristic vector and the subjective score of the two-scale super-resolution reconstruction image, and selecting m multiplied by omega in the m groups2The feature vector and subjective score of the super-resolution reconstructed image with three scales form the second stepTwo training sets, m x omega in the selected m groups3Forming a third training set by the feature vectors and the subjective scores of the four-scale super-resolution reconstructed images, and multiplying (d-m) x omega in the rest d-m groups1Forming a first test set by the characteristic vectors of the two-scale super-resolution reconstruction image, and multiplying (d-m) x omega in the residual d-m groups2Forming a second test set by the feature vectors of the three-scale super-resolution reconstruction images, and multiplying (d-m) x omega in the remaining d-m groups3Feature vectors of the four-scale super-resolution reconstruction images form a third test set; wherein the content of the first and second substances,
Figure FDA0003602550590000091
(symbol)
Figure FDA0003602550590000092
is a rounded-down operation sign;
step 9_ 4: an SVM regression machine is adopted as a machine learning method, all feature vectors and all subjective scores in a first training set are input into the SVM regression machine for training, so that the error between a regression function value obtained through training and the subjective scores is minimum, and a first SVM regression model is constructed; then inputting each feature vector in the first test set into a first SVM regression model for testing to obtain an objective quality prediction score of a distorted super-resolution reconstructed image corresponding to each feature vector in the first test set;
similarly, an SVM regression machine is used as a machine learning method, all feature vectors and all subjective scores in the second training set are input into the SVM regression machine for training, so that the error between a regression function value obtained through training and the subjective score is minimum, and a second SVM regression model is constructed; then inputting each feature vector in the second test set into a second SVM regression model for testing to obtain an objective quality prediction score of the distorted super-resolution reconstructed image corresponding to each feature vector in the second test set;
inputting all feature vectors and all subjective scores in a third training set into the SVM regression machine for training by adopting an SVM regression machine as a machine learning method, so that the error between a regression function value obtained through training and the subjective scores is minimum, and constructing to obtain a third SVM regression model; then inputting each feature vector in the third test set into a third SVM regression model for testing to obtain an objective quality prediction score of a distorted super-resolution reconstructed image corresponding to each feature vector in the third test set;
step 9_ 5: repeatedly executing the step 9_3 and the step 9_4 for U times, so that the objective quality prediction score of each distorted super-resolution reconstruction image in the super-resolution image library is at least 1; and then calculating the average value of a plurality of objective quality prediction scores of each distorted super-resolution reconstruction image in the super-resolution image library as the final objective quality prediction score of the distorted super-resolution reconstruction image.
4. The method for evaluating quality of super-resolution reconstructed image based on KLT technique as claimed in claim 3, wherein in step 9_1, each group of distorted super-resolution reconstructed images is generated as follows: a high-resolution image is degraded, by adjusting the focal length and shooting again, into a two-scale low-resolution image, a three-scale low-resolution image and a four-scale low-resolution image; then ω1 super-resolution reconstruction methods are applied to the two-scale low-resolution image to obtain the corresponding ω1 two-scale super-resolution reconstructed images; likewise, ω2 super-resolution reconstruction methods are applied to the three-scale low-resolution image to obtain the corresponding ω2 three-scale super-resolution reconstructed images, and ω3 super-resolution reconstruction methods are applied to the four-scale low-resolution image to obtain the corresponding ω3 four-scale super-resolution reconstructed images.
CN202110672307.6A 2021-06-15 2021-06-15 Super-resolution reconstruction image quality evaluation method based on KLT technology Active CN113450319B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110672307.6A CN113450319B (en) 2021-06-15 2021-06-15 Super-resolution reconstruction image quality evaluation method based on KLT technology

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110672307.6A CN113450319B (en) 2021-06-15 2021-06-15 Super-resolution reconstruction image quality evaluation method based on KLT technology

Publications (2)

Publication Number Publication Date
CN113450319A (en) 2021-09-28
CN113450319B (en) 2022-07-15

Family

ID=77811574

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110672307.6A Active CN113450319B (en) 2021-06-15 2021-06-15 Super-resolution reconstruction image quality evaluation method based on KLT technology

Country Status (1)

Country Link
CN (1) CN113450319B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114841901B (en) * 2022-07-01 2022-10-25 北京大学深圳研究生院 Image reconstruction method based on generalized depth expansion network

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104851098A (en) * 2015-05-22 2015-08-19 天津大学 Objective evaluation method for quality of three-dimensional image based on improved structural similarity
CN106997585A (en) * 2016-01-22 2017-08-01 同方威视技术股份有限公司 Imaging system and image quality evaluating method
WO2018058090A1 (en) * 2016-09-26 2018-03-29 University Of Florida Research Foundation Incorporated Method for no-reference image quality assessment
CN109523513B (en) * 2018-10-18 2023-08-25 天津大学 Stereoscopic image quality evaluation method based on sparse reconstruction color fusion image

Also Published As

Publication number Publication date
CN113450319A (en) 2021-09-28

Similar Documents

Publication Publication Date Title
US9846818B2 (en) Objective assessment method for color image quality based on online manifold learning
CN105049851B (en) General non-reference picture quality appraisement method based on Color perception
CN109255358B (en) 3D image quality evaluation method based on visual saliency and depth map
CN105513033B (en) A kind of super resolution ratio reconstruction method that non local joint sparse indicates
Cui et al. Blind light field image quality assessment by analyzing angular-spatial characteristics
Jiang et al. Supervised dictionary learning for blind image quality assessment using quality-constraint sparse coding
Pang et al. Image colorization using sparse representation
CN104734724B (en) Based on the Compression of hyperspectral images cognitive method for weighting Laplce's sparse prior again
CN112184672A (en) No-reference image quality evaluation method and system
CN107146220A (en) A kind of universal non-reference picture quality appraisement method
CN113450319B (en) Super-resolution reconstruction image quality evaluation method based on KLT technology
CN112950596A (en) Tone mapping omnidirectional image quality evaluation method based on multi-region and multi-layer
CN107845064B (en) Image super-resolution reconstruction method based on active sampling and Gaussian mixture model
CN111127298A (en) Panoramic image blind quality assessment method
CN108682005B (en) Semi-reference 3D synthetic image quality evaluation method based on covariance matrix characteristics
CN110910347A (en) Image segmentation-based tone mapping image no-reference quality evaluation method
Fang et al. Quality assessment for image super-resolution based on energy change and texture variation
CN112132774A (en) Quality evaluation method of tone mapping image
KR101035365B1 (en) Method and apparatus of assessing the image quality using compressive sensing
CN113192003B (en) Spliced image quality evaluation method
CN116758019A (en) Multi-exposure fusion light field image quality evaluation method based on dynamic and static region division
CN112950592B (en) Non-reference light field image quality evaluation method based on high-dimensional discrete cosine transform
CN111292238B (en) Face image super-resolution reconstruction method based on orthogonal partial least square
Wang et al. See SIFT in a rain: divide-and-conquer SIFT key point recovery from a single rainy image
Nie et al. Image restoration from patch-based compressed sensing measurement

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant