CN103475897A - Adaptive image quality evaluation method based on distortion type judgment - Google Patents
Publication number: CN103475897A (application CN201310406821A)
Authority: CN (China)
Legal status: Granted (the legal status is an assumption made by Google Patents, not a legal conclusion)
Classifications: Compression Or Coding Systems Of Tv Signals; Compression Of Band Width Or Redundancy In Fax
Abstract
The invention discloses an adaptive image quality evaluation method based on distortion-type judgment. First, the distortion type of the image is identified and assigned to one of three classes: white Gaussian noise distortion, JPEG distortion, or blur-class distortion, where the blur class covers Gaussian blur, JPEG2000, and fast-fading distortion. Using this judgment, images with white Gaussian noise distortion are evaluated with a structural similarity model based on the pixel domain, images with JPEG distortion with a structural similarity model based on the DCT domain, and images with blur-class distortion with a structural similarity model based on the wavelet domain. Experimental results show that this objective evaluation method combines the strengths of the individual structural similarity models across different distortion types, and that its evaluation results agree closely with subjective human perception.
Description
Technical field
The present invention relates to image quality evaluation technology, and in particular to an adaptive objective image quality evaluation method based on distortion-type judgment.
Background technology
With the rapid development of modern communication technology, humanity has entered the information society. Images are an important information carrier, and their quality directly affects how accurately and completely a recipient can acquire the information they convey. However, during the acquisition, processing, storage, and transmission of images, imperfect processing methods or non-standard external equipment inevitably introduce distortion or degradation, and different kinds and degrees of distortion cause different degrees of information loss. Given how widely image technology is used, measuring the information loss caused by different degrees of image distortion is particularly important; the problem of image quality evaluation exists precisely to address this practical need. Image quality evaluation methods can be divided into subjective and objective approaches. The former relies on the subjective perception of human observers to judge image quality; it is tedious and time-consuming and therefore unsuitable for integration into practical applications. The latter measures image quality with quantitative model indices that simulate the perception mechanism of the human visual system (HVS); it is simple to operate, easy to implement, and amenable to real-time algorithmic optimization, and has therefore become the research focus in image quality evaluation.

According to how they describe the HVS, objective methods can be further divided into two classes: error-sensitivity methods and structural-similarity methods. Error-sensitivity methods account for visual properties such as multi-channel processing, the band-pass character of the contrast sensitivity function (CSF), masking effects, interactions between different stimuli, and visual psychology; but because the HVS is not yet fully understood, these methods face serious obstacles to accurate modeling. Structural-similarity methods assume that natural images possess specific structure and that human perception of an image mainly extracts this structural information, so they evaluate the structural similarity of the image signals directly; they have lower implementation complexity and stronger applicability. However, many variants of the structural-similarity method exist in the prior art, and none of them guarantees accurate quality evaluation results for images of all distortion types.
Summary of the invention
The technical problem to be solved by this invention is to provide an adaptive objective image quality evaluation method that effectively improves the accuracy of quality evaluation results for images with different distortion types.
The technical scheme adopted by the present invention to solve the above technical problem is an adaptive image quality evaluation method based on distortion-type judgment, whose processing is as follows:

First, determine the distortion type of the distorted image to be evaluated.

Second, process the image according to its distortion type:

If the distortion is white Gaussian noise, divide the original undistorted image and the distorted image to be evaluated into a number of overlapping 8 × 8 image blocks in the pixel domain. From the luminance mean and standard deviation of the pixels in each block of both images, and the covariance between the luminance values of every pair of co-located blocks, obtain the pixel-domain structural similarity between every pair of co-located blocks in the two images.

If the distortion is JPEG distortion, divide the original undistorted image and the distorted image to be evaluated into overlapping 8 × 8 blocks in the DCT (Discrete Cosine Transform) domain. From the mean and standard deviation of the DCT coefficients in each block of both images, and the covariance between the coefficients of every pair of co-located blocks, obtain the DCT-domain structural similarity between every pair of co-located blocks in the two images.

If the distortion is blur-class distortion, divide the original undistorted image and the distorted image to be evaluated into overlapping 8 × 8 blocks in the wavelet domain. From the mean and standard deviation of the wavelet coefficients in each block of both images, and the covariance between the coefficients of every pair of co-located blocks, obtain the wavelet-domain structural similarity between every pair of co-located blocks in the two images.

Finally, from the structural similarities of all pairs of co-located blocks in the two images, compute the objective quality score of the distorted image to be evaluated.
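The adaptive scheme above amounts to a dispatch: classify the distortion, then hand the image pair to the matching domain-specific structural-similarity model. The following sketch shows only that control flow; all helper names and signatures (`classify`, `ssim_pixel`, `ssim_dct`, `ssim_wavelet`) are hypothetical placeholders, not functions defined by the patent.

```python
import numpy as np

def evaluate_adaptive(ref, dist, classify, ssim_pixel, ssim_dct, ssim_wavelet):
    """Dispatch to the domain-specific SSIM model selected by the
    detected distortion type (helper signatures are assumptions)."""
    kind = classify(ref, dist)          # 'wn', 'jpeg', or 'blur'
    if kind == 'wn':
        return ssim_pixel(ref, dist)    # pixel-domain structural similarity
    if kind == 'jpeg':
        return ssim_dct(ref, dist)      # DCT-domain structural similarity
    return ssim_wavelet(ref, dist)      # wavelet-domain (blur class)

# Toy stand-ins that only demonstrate the dispatch logic.
score = evaluate_adaptive(
    np.zeros((8, 8)), np.zeros((8, 8)),
    classify=lambda r, d: 'jpeg',
    ssim_pixel=lambda r, d: 0.0,
    ssim_dct=lambda r, d: 1.0,
    ssim_wavelet=lambda r, d: 0.0,
)
```

In practice the three evaluators would be the pixel-, DCT-, and wavelet-domain models described in the steps below.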
The adaptive image quality evaluation method based on distortion-type judgment of the present invention specifically comprises the following steps:

Step 1. Let X denote the original undistorted image and Y the distorted image to be evaluated. Determine the distortion type of Y with the distortion-type discrimination method; the type of Y is one of white Gaussian noise distortion, JPEG distortion, or blur-class distortion, where the blur class covers Gaussian blur distortion, JPEG2000 distortion, and fast-fading distortion.
Step 2. If the distortion type of Y is white Gaussian noise, slide an 8 × 8 window over X one pixel at a time, dividing X into M × N overlapping 8 × 8 image blocks, and denote the block at coordinate position (i, j) in X by x_{i,j}. Likewise, slide an 8 × 8 window over Y one pixel at a time, dividing Y into M × N overlapping 8 × 8 blocks, and denote the block at (i, j) in Y by y_{i,j}. Here M = H − 7 and N = W − 7 (an 8 × 8 window stepped one pixel at a time has H − 7 by W − 7 positions), H denotes the height and W the width of X and Y, and 1 ≤ i ≤ M, 1 ≤ j ≤ N.
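The pixel-wise sliding window above can be sketched as follows; this is a minimal illustration of the block extraction, not the patent's implementation, and the function name is an assumption.

```python
import numpy as np

def sliding_blocks_8x8(img):
    """All overlapping 8x8 blocks of img, with the window moving one
    pixel at a time; an H x W image yields (H-7) x (W-7) blocks."""
    H, W = img.shape
    M, N = H - 7, W - 7
    blocks = np.empty((M, N, 8, 8), dtype=img.dtype)
    for i in range(M):
        for j in range(N):
            blocks[i, j] = img[i:i + 8, j:j + 8]
    return blocks

img = np.arange(100.0).reshape(10, 10)   # tiny 10x10 test image
blocks = sliding_blocks_8x8(img)         # 3 x 3 grid of 8x8 blocks
```

The block at grid position (i, j) starts at pixel (i, j) of the image, so co-located blocks of X and Y cover identical pixel coordinates.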
If the distortion type of Y is JPEG distortion, slide an 8 × 8 window over X one pixel at a time, dividing X into M × N overlapping 8 × 8 blocks, denote the block at coordinate position (i, j) in X by x_{i,j}, apply a two-dimensional DCT to every block x_{i,j}, and denote the transformed block by x^D_{i,j}. Likewise, slide an 8 × 8 window over Y one pixel at a time, dividing Y into M × N overlapping 8 × 8 blocks, denote the block at (i, j) in Y by y_{i,j}, apply a two-dimensional DCT to every block y_{i,j}, and denote the transformed block by y^D_{i,j}. Here M = H − 7, N = W − 7, H denotes the height and W the width of X and Y, and 1 ≤ i ≤ M, 1 ≤ j ≤ N.
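The per-block two-dimensional DCT can be written directly from the orthonormal DCT-II basis; this is a generic sketch of the transform named in the text (the patent does not specify a normalization, so the orthonormal convention here is an assumption).

```python
import numpy as np

def dct2_8x8(block):
    """Orthonormal 2-D DCT-II of an 8x8 block, as used for the
    DCT-domain structural-similarity branch."""
    n = 8
    k = np.arange(n)
    # 1-D DCT-II basis matrix; row k holds frequency-k basis vector.
    C = np.cos(np.pi * np.outer(k, 2 * k + 1) / (2 * n)) * np.sqrt(2.0 / n)
    C[0, :] /= np.sqrt(2.0)             # DC row scaling for orthonormality
    return C @ block @ C.T              # separable row/column transform

block = np.ones((8, 8))                 # constant block: only DC survives
coeffs = dct2_8x8(block)
```

For a constant block all energy lands in the DC coefficient at position (1, 1); the orthonormal transform also preserves total energy (Parseval), which is why per-block statistics in the DCT domain are meaningful.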
If the distortion type of Y is blur-class distortion, apply a one-level wavelet transform to X and denote the extracted approximation component by X_a. Slide an 8 × 8 window over X_a one pixel at a time, dividing X_a into M′ × N′ overlapping 8 × 8 blocks, and denote the block at coordinate position (i′, j′) in X_a by x^a_{i′,j′}. Likewise, apply a one-level wavelet transform to Y, denote its approximation component by Y_a, slide an 8 × 8 window over Y_a one pixel at a time, dividing Y_a into M′ × N′ overlapping 8 × 8 blocks, and denote the block at (i′, j′) in Y_a by y^a_{i′,j′}. Here M′ = H′ − 7 and N′ = W′ − 7, where H′ = ⌊H/2⌋ and W′ = ⌊W/2⌋ denote the height and width of X_a and Y_a, ⌊·⌋ denotes rounding down, and 1 ≤ i′ ≤ M′, 1 ≤ j′ ≤ N′.
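The approximation (LL) component of a one-level wavelet transform can be computed without a wavelet library; the sketch below uses the orthonormal Haar filter, which the embodiment section names for the discrimination step — using it for this branch as well is an assumption.

```python
import numpy as np

def haar_approx(img):
    """Approximation (LL) subband of a one-level Haar wavelet transform;
    its height and width are floor(H/2) and floor(W/2)."""
    H, W = img.shape
    img = img[:H - H % 2, :W - W % 2]    # drop a trailing odd row/column
    # Orthonormal Haar LL coefficient: (a + b + c + d) / 2 per 2x2 cell.
    return (img[0::2, 0::2] + img[0::2, 1::2]
            + img[1::2, 0::2] + img[1::2, 1::2]) / 2.0

img = np.ones((9, 9))
LL = haar_approx(img)                    # 4x4 subband of constant value 2
```

The half-resolution LL subband is then tiled with the same pixel-wise 8 × 8 sliding window as the other branches.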
Step 3. If the distortion type of Y is white Gaussian noise, compute the luminance mean and standard deviation of the pixels in every block of X and of Y, and the covariance between the pixels of every pair of co-located blocks in X and Y. Denote the luminance mean and standard deviation of the pixels in the block x_{i,j} at coordinate position (i, j) in X by μ_{x_{i,j}} and σ_{x_{i,j}}, those of the block y_{i,j} at (i, j) in Y by μ_{y_{i,j}} and σ_{y_{i,j}}, and the covariance between the pixels of x_{i,j} and y_{i,j} by σ_{x_{i,j}y_{i,j}}:

μ_{x_{i,j}} = (1/64) Σ_{u=1}^{8} Σ_{v=1}^{8} x_{i,j}(u, v),

σ_{x_{i,j}} = [ (1/64) Σ_{u=1}^{8} Σ_{v=1}^{8} (x_{i,j}(u, v) − μ_{x_{i,j}})² ]^{1/2},

σ_{x_{i,j}y_{i,j}} = (1/64) Σ_{u=1}^{8} Σ_{v=1}^{8} (x_{i,j}(u, v) − μ_{x_{i,j}}) (y_{i,j}(u, v) − μ_{y_{i,j}}),

with μ_{y_{i,j}} and σ_{y_{i,j}} defined analogously. Here x_{i,j}(u, v) denotes the luminance value of the pixel at coordinate position (u, v) in x_{i,j}, y_{i,j}(u, v) denotes the luminance value of the pixel at (u, v) in y_{i,j}, and 1 ≤ u ≤ 8, 1 ≤ v ≤ 8.
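The per-block statistics can be sketched as below; the population normalization 1/64 is an assumption (the original formulas did not survive extraction), and the function name is illustrative.

```python
import numpy as np

def block_stats(x, y):
    """Luminance mean and standard deviation of each 8x8 block, plus the
    covariance between two co-located blocks (1/64 normalization assumed)."""
    mu_x, mu_y = x.mean(), y.mean()
    sigma_x, sigma_y = x.std(), y.std()                 # population std
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()           # population cov
    return mu_x, sigma_x, mu_y, sigma_y, cov_xy

x = np.arange(64.0).reshape(8, 8)
# y = 2x: covariance should equal twice the variance of x.
mu_x, sigma_x, mu_y, sigma_y, cov_xy = block_stats(x, 2.0 * x)
```

These five quantities per block pair are exactly the inputs of the luminance, contrast, and structure comparison functions in step 4.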
If the distortion type of Y is JPEG distortion, compute the luminance mean and standard deviation of the pixels in every block of X and of Y, then the mean and standard deviation of the DCT AC coefficients of every block of X and of Y, and finally the covariance between the DCT AC coefficients of every pair of co-located blocks in X and Y. Denote the luminance mean and standard deviation of the pixels in the block x_{i,j} at coordinate position (i, j) in X by μ_{x_{i,j}} and σ_{x_{i,j}}, and those of the block y_{i,j} at (i, j) in Y by μ_{y_{i,j}} and σ_{y_{i,j}}. Denote the mean and standard deviation of the AC coefficients of the transformed block x^D_{i,j}, obtained by applying the DCT to x_{i,j}, by μ^D_{x_{i,j}} and σ^D_{x_{i,j}}; denote those of the transformed block y^D_{i,j}, obtained by applying the DCT to y_{i,j}, by μ^D_{y_{i,j}} and σ^D_{y_{i,j}}; and denote the covariance between the AC coefficients of the co-located DCT-domain blocks x^D_{i,j} and y^D_{i,j} by σ^D_{x_{i,j}y_{i,j}}. Here x_{i,j}(u, v) denotes the luminance value of the pixel at coordinate position (u, v) in x_{i,j}, y_{i,j}(u, v) denotes the luminance value of the pixel at (u, v) in y_{i,j}, and 1 ≤ u ≤ 8, 1 ≤ v ≤ 8; x^D_{i,j}(u_d, v_d) denotes the DCT coefficient value at coordinate position (u_d, v_d) in x^D_{i,j}, and y^D_{i,j}(u_d, v_d) denotes the DCT coefficient value at (u_d, v_d) in y^D_{i,j}, with 1 ≤ u_d ≤ 8, 1 ≤ v_d ≤ 8 and u_d, v_d not simultaneously equal to 1, i.e. the DC coefficient at (1, 1) is excluded.
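Excluding the DC term at (1, 1) leaves 63 AC coefficients per block; the statistics over them can be sketched as follows (function name and 1/63-free population normalization are assumptions).

```python
import numpy as np

def ac_stats(cx, cy):
    """Mean, standard deviation, and covariance over the 63 AC
    coefficients of two 8x8 DCT blocks; the DC term at (1,1) is dropped."""
    mask = np.ones((8, 8), dtype=bool)
    mask[0, 0] = False                  # exclude the DC coefficient
    ax, ay = cx[mask], cy[mask]         # flattened AC coefficients
    mu_ax, mu_ay = ax.mean(), ay.mean()
    cov = ((ax - mu_ax) * (ay - mu_ay)).mean()
    return mu_ax, ax.std(), mu_ay, ay.std(), cov

cx = np.zeros((8, 8))
cx[0, 0] = 100.0                        # pure-DC block: all AC are zero
mu_ax, s_ax, mu_ay, s_ay, cov = ac_stats(cx, cx.copy())
```

Dropping DC makes the DCT-domain structure term sensitive to spatial-frequency content rather than overall brightness, which the luminance term l(·,·) already captures.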
If the distortion type of Y is blur-class distortion, compute the mean and standard deviation of the coefficients in every block of the approximation component X_a obtained from X by the one-level wavelet transform, and likewise for every block of the approximation component Y_a obtained from Y; then compute the covariance between the coefficients of every pair of co-located blocks of X_a and Y_a. Denote the mean and standard deviation of the coefficients in the block x^a_{i_w,j_w} at coordinate position (i_w, j_w) in X_a by μ^w_{x_{i_w,j_w}} and σ^w_{x_{i_w,j_w}}, those of the block y^a_{i_w,j_w} at (i_w, j_w) in Y_a by μ^w_{y_{i_w,j_w}} and σ^w_{y_{i_w,j_w}}, and the covariance between the coefficients of x^a_{i_w,j_w} and y^a_{i_w,j_w} by σ^w_{x_{i_w,j_w}y_{i_w,j_w}}. Here x^a_{i_w,j_w}(u_w, v_w) denotes the coefficient value at coordinate position (u_w, v_w) in x^a_{i_w,j_w}, y^a_{i_w,j_w}(u_w, v_w) denotes the coefficient value at (u_w, v_w) in y^a_{i_w,j_w}, and 1 ≤ u_w ≤ 8, 1 ≤ v_w ≤ 8.
Step 4. If the distortion type of Y is white Gaussian noise, compute the luminance, contrast, and structure comparison functions between every pair of co-located blocks in X and Y. Denote the luminance, contrast, and structure functions between the block x_{i,j} at coordinate position (i, j) in X and the block y_{i,j} at (i, j) in Y by l(x_{i,j}, y_{i,j}), c(x_{i,j}, y_{i,j}), and s(x_{i,j}, y_{i,j}):

l(x_{i,j}, y_{i,j}) = (2 μ_{x_{i,j}} μ_{y_{i,j}} + C₁) / (μ_{x_{i,j}}² + μ_{y_{i,j}}² + C₁),

c(x_{i,j}, y_{i,j}) = (2 σ_{x_{i,j}} σ_{y_{i,j}} + C₂) / (σ_{x_{i,j}}² + σ_{y_{i,j}}² + C₂),

s(x_{i,j}, y_{i,j}) = (σ_{x_{i,j}y_{i,j}} + C₃) / (σ_{x_{i,j}} σ_{y_{i,j}} + C₃),

where C₁, C₂, and C₃ are small numerical constants that prevent the denominators from becoming zero.
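The three comparison functions can be sketched directly from the formulas above, using the constants the patent specifies later (C₁ = 0.01, C₂ = 0.02, C₃ = 0.01); the function name is illustrative.

```python
C1, C2, C3 = 0.01, 0.02, 0.01   # small stabilizing constants from the patent

def lcs(mu_x, sigma_x, mu_y, sigma_y, cov_xy):
    """Luminance, contrast, and structure comparison functions between
    two co-located blocks, given their per-block statistics."""
    l = (2 * mu_x * mu_y + C1) / (mu_x ** 2 + mu_y ** 2 + C1)
    c = (2 * sigma_x * sigma_y + C2) / (sigma_x ** 2 + sigma_y ** 2 + C2)
    s = (cov_xy + C3) / (sigma_x * sigma_y + C3)
    return l, c, s

# Identical blocks (same mean, same std, cov = var) give l = c = s = 1,
# so the combined structural similarity is 1.
l, c, s = lcs(10.0, 2.0, 10.0, 2.0, 4.0)
```

Step 5 then combines these three terms multiplicatively with exponents α, β, γ.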
If the distortion type of Y is JPEG distortion, compute the luminance and contrast functions between every pair of co-located blocks in X and Y, and compute the structure function of every pair of co-located blocks in the DCT domain. Denote the luminance and contrast functions between the block x_{i,j} at coordinate position (i, j) in X and the block y_{i,j} at (i, j) in Y by l(x_{i,j}, y_{i,j}) and c(x_{i,j}, y_{i,j}), computed from the pixel-domain means and standard deviations in the same way as for the white-Gaussian-noise case, and denote the DCT-domain structure function between the transformed blocks x^D_{i,j} and y^D_{i,j} by f(x_{i,j}, y_{i,j}):

f(x_{i,j}, y_{i,j}) = (σ^D_{x_{i,j}y_{i,j}} + C₃) / (σ^D_{x_{i,j}} σ^D_{y_{i,j}} + C₃),

where C₁, C₂, and C₃ are small numerical constants that prevent the denominators from becoming zero.
If the distortion type of Y is blur-class distortion, compute the wavelet-coefficient luminance, contrast, and structure functions between every pair of co-located blocks of X_a and Y_a. Denote the wavelet-coefficient luminance, contrast, and structure functions between the co-located blocks at (i, j) by l_w(x_{i,j}, y_{i,j}), c_w(x_{i,j}, y_{i,j}), and s_w(x_{i,j}, y_{i,j}); they have the same form as the luminance, contrast, and structure functions above, but are computed from the wavelet-domain block means μ^w, standard deviations σ^w, and covariances σ^w_{xy}. Here C₁, C₂, and C₃ are small numerical constants that prevent the denominators from becoming zero.
Step 5. If the distortion type of Y is white Gaussian noise, compute from the luminance, contrast, and structure functions the structural similarity between every pair of co-located blocks in X and Y. Denote the structural similarity between the block x_{i,j} at coordinate position (i, j) in X and the block y_{i,j} at (i, j) in Y by SSIM(x_{i,j}, y_{i,j}):

SSIM(x_{i,j}, y_{i,j}) = [l(x_{i,j}, y_{i,j})]^α · [c(x_{i,j}, y_{i,j})]^β · [s(x_{i,j}, y_{i,j})]^γ,

where α, β, and γ are adjustment factors.
If the distortion type of Y is JPEG distortion, compute from the luminance and contrast functions and the DCT-domain structure function the DCT-domain structural similarity between every pair of co-located blocks in X and Y. Denote the DCT-domain structural similarity between the block x_{i,j} at (i, j) in X and the block y_{i,j} at (i, j) in Y by FSSIM(x_{i,j}, y_{i,j}):

FSSIM(x_{i,j}, y_{i,j}) = [l(x_{i,j}, y_{i,j})]^α · [c(x_{i,j}, y_{i,j})]^β · [f(x_{i,j}, y_{i,j})]^γ,

where α, β, and γ are adjustment factors.
If the distortion type of Y is blur-class distortion, compute from the wavelet-coefficient luminance, contrast, and structure functions the wavelet-domain structural similarity between every pair of co-located blocks in X and Y. Denote the wavelet-domain structural similarity between the block x_{i,j} at (i, j) in X and the block y_{i,j} at (i, j) in Y by WSSIM(x_{i,j}, y_{i,j}):

WSSIM(x_{i,j}, y_{i,j}) = [l_w(x_{i,j}, y_{i,j})]^α · [c_w(x_{i,j}, y_{i,j})]^β · [s_w(x_{i,j}, y_{i,j})]^γ,

where α, β, and γ are adjustment factors.
Step 6. If the distortion type of Y is white Gaussian noise, compute the objective quality score of Y from the pixel-domain structural similarities of all pairs of co-located blocks in X and Y, denoted Q_WN, by averaging over all blocks:

Q_WN = (1/(M·N)) Σ_{i=1}^{M} Σ_{j=1}^{N} SSIM(x_{i,j}, y_{i,j}).

If the distortion type of Y is JPEG distortion, compute the objective quality score of Y from the DCT-domain structural similarities of all pairs of co-located blocks, denoted Q_JPEG:

Q_JPEG = (1/(M·N)) Σ_{i=1}^{M} Σ_{j=1}^{N} FSSIM(x_{i,j}, y_{i,j}).

If the distortion type of Y is blur-class distortion, compute the objective quality score of Y from the wavelet-domain structural similarities of all pairs of co-located blocks, denoted Q_blur:

Q_blur = (1/(M′·N′)) Σ_{i=1}^{M′} Σ_{j=1}^{N′} WSSIM(x_{i,j}, y_{i,j}).
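The pooling step reduces the per-block similarity map to a single score; assuming mean pooling (the exact pooling formula did not survive extraction), it is a one-liner:

```python
import numpy as np

def pool_quality(ssim_map):
    """Objective quality score as the average of the per-block
    structural similarities (mean pooling assumed)."""
    return float(np.mean(ssim_map))

# A 2x2 similarity map pools to the average of its four entries.
q = pool_quality(np.array([[1.0, 0.5], [0.75, 0.75]]))
```

The same pooling applies to the SSIM, FSSIM, and WSSIM maps, yielding Q_WN, Q_JPEG, and Q_blur respectively.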
The detailed process by which the distortion-type discrimination method in step 1 determines the distortion type of Y is:

1-a. Partition X into non-overlapping 64 × 64 blocks, obtaining M′ × N′ blocks of size 64 × 64, and denote the block at coordinate position (i′, j′) in X by x′_{i′,j′}. Apply a one-level wavelet decomposition to each block x′_{i′,j′}, extract the diagonal component, find the median of the coefficient magnitudes in the diagonal component of each block, and compute each block's noise standard deviation. Denote the median of the coefficient magnitudes in the wavelet diagonal component of x′_{i′,j′} by M_{x′_{i′,j′}}; its noise standard deviation is

σ_{x′_{i′,j′}} = M_{x′_{i′,j′}} / 0.6745,

where M′ = ⌊H/64⌋, N′ = ⌊W/64⌋, and 1 ≤ i′ ≤ M′, 1 ≤ j′ ≤ N′.

Likewise, partition Y into non-overlapping 64 × 64 blocks, obtaining M′ × N′ blocks of size 64 × 64, denote the block at (i′, j′) in Y by y′_{i′,j′}, apply a one-level wavelet decomposition to each block, extract the diagonal component, find the median of the coefficient magnitudes in the diagonal component of each block, and compute each block's noise standard deviation; denote the median for y′_{i′,j′} by M_{y′_{i′,j′}} and its noise standard deviation by σ_{y′_{i′,j′}} = M_{y′_{i′,j′}} / 0.6745.

1-b. Compute the difference between the noise standard deviations of every pair of co-located blocks in X and Y; denote the difference for the blocks x′_{i′,j′} and y′_{i′,j′} at (i′, j′) by D_{i′,j′} = |σ_{x′_{i′,j′}} − σ_{y′_{i′,j′}}|. Then compute the mean of these differences over all pairs of co-located blocks, denoted D̄.

1-c. Judge whether D̄ > Th_WN holds. If it holds, the distortion type of Y is white Gaussian noise, and the procedure ends; otherwise, go to step 1-d. Here Th_WN is the white-Gaussian-noise discrimination threshold.
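The per-block noise estimate is the classic robust median estimator applied to the diagonal (HH) Haar subband; the sketch below assumes the standard MAD constant 0.6745 and implements the Haar diagonal detail directly.

```python
import numpy as np

def noise_std_estimate(block):
    """Noise standard deviation of a block, estimated from the diagonal
    subband of a one-level Haar transform: sigma = median(|HH|) / 0.6745."""
    H, W = block.shape
    b = block[:H - H % 2, :W - W % 2]
    # Orthonormal Haar diagonal (HH) detail: (a - b - c + d) / 2 per 2x2 cell.
    hh = (b[0::2, 0::2] - b[0::2, 1::2]
          - b[1::2, 0::2] + b[1::2, 1::2]) / 2.0
    return float(np.median(np.abs(hh)) / 0.6745)

# For i.i.d. Gaussian pixels of std 5, the estimate should be close to 5.
rng = np.random.default_rng(0)
sigma = noise_std_estimate(rng.normal(0.0, 5.0, (64, 64)))
```

Because the HH subband of a smooth image is nearly zero while added white noise spreads evenly into it, a large mean gap between the per-block estimates of X and Y signals white Gaussian noise.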
1-d. Compute the horizontal luminance-difference map of X, denoted X_h, where the coefficient value at coordinate position (i″, j″) is X_h(i″, j″) = |X(i″, j″) − X(i″, j″ + 1)|, with 1 ≤ i″ ≤ H and 1 ≤ j″ ≤ W − 1. Here X(i″, j″) and X(i″, j″ + 1) denote the luminance values of the pixels at coordinate positions (i″, j″) and (i″, j″ + 1) in X, and |·| denotes the absolute value.

Likewise, compute the luminance-difference map of Y, denoted Y_h, where Y_h(i″, j″) = |Y(i″, j″) − Y(i″, j″ + 1)|, with 1 ≤ i″ ≤ H and 1 ≤ j″ ≤ W − 1; Y(i″, j″) and Y(i″, j″ + 1) denote the luminance values of the pixels at (i″, j″) and (i″, j″ + 1) in Y.
1-e. Partition the luminance-difference map X_h into non-overlapping 8 × 8 blocks, obtaining M″ × N″ non-overlapping blocks of size 8 × 8, and denote the block at coordinate position (i‴, j‴) in X_h by x_h^{i‴,j‴}. Define the block self-energy and the block-edge energy of this block as

E_self(x_h^{i‴,j‴}) = Σ_{p=1}^{8} Σ_{q=1}^{7} [x_h^{i‴,j‴}(p, q)]²,   E_edge(x_h^{i‴,j‴}) = Σ_{p=1}^{8} [x_h^{i‴,j‴}(p, 8)]²,

where x_h^{i‴,j‴}(p, q) denotes the coefficient value at coordinate position (p, q) in the block, x_h^{i‴,j‴}(p, 8) denotes the coefficient value at (p, 8) (the column lying on the 8 × 8 block boundary), 1 ≤ i‴ ≤ M″, 1 ≤ j‴ ≤ N″, 1 ≤ p ≤ 8, 1 ≤ q ≤ 7.

Likewise, partition the luminance-difference map Y_h into non-overlapping 8 × 8 blocks, obtaining M″ × N″ non-overlapping blocks of size 8 × 8, denote the block at coordinate position (i‴, j‴) in Y_h by y_h^{i‴,j‴}, and define its block self-energy E_self(y_h^{i‴,j‴}) and block-edge energy E_edge(y_h^{i‴,j‴}) in the same way, where y_h^{i‴,j‴}(p, q) denotes the coefficient value at (p, q) in the block and y_h^{i‴,j‴}(p, 8) denotes the coefficient value at (p, 8).
1-f. Compute the ratio of block-edge energy to block self-energy for every block of X_h; denote the ratio for the block x_h^{i‴,j‴} at coordinate position (i‴, j‴) in X_h by R_X(i‴, j‴) = E_edge(x_h^{i‴,j‴}) / E_self(x_h^{i‴,j‴}). Likewise, compute the ratio for every block of Y_h, denoted R_Y(i‴, j‴) = E_edge(y_h^{i‴,j‴}) / E_self(y_h^{i‴,j‴}). Count the number of blocks satisfying the inequality R_Y(i‴, j‴) > R_X(i‴, j‴), i.e. blocks in which the distorted image concentrates more difference energy on the 8 × 8 block boundary than the reference image, and denote this number by N₀. Define the discrimination criterion J = N₀ / (M″ · N″).
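The per-block edge-to-self energy ratio can be sketched as follows; the squared-energy form and the function name are assumptions consistent with the definitions above, since the original formulas did not survive extraction.

```python
import numpy as np

def blockiness_ratio(diff_map_block):
    """Ratio of block-edge energy (column 8, the 8x8 block boundary) to
    block self-energy (columns 1..7) for one 8x8 tile of the horizontal
    luminance-difference map."""
    edge = float((diff_map_block[:, 7] ** 2).sum())     # boundary column
    self_e = float((diff_map_block[:, :7] ** 2).sum())  # interior columns
    return edge / self_e if self_e > 0 else float('inf')

# A tile with equal difference energy in one interior column and on the
# boundary column has a ratio of exactly 1.
tile = np.zeros((8, 8))
tile[:, 0] = 1.0
tile[:, 7] = 1.0
r = blockiness_ratio(tile)
```

JPEG compression aligns discontinuities with the 8 × 8 grid, so the distorted image's ratio rises relative to the reference; counting how often it does gives the criterion J.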
1-g. Judge whether J > Th_JPEG holds. If it holds, the distortion type of Y is JPEG distortion, and the procedure ends; otherwise, go to step 1-h. Here Th_JPEG is the JPEG-distortion discrimination threshold.

1-h. The distortion type of Y is blur-class distortion, i.e. Gaussian blur distortion, JPEG2000 distortion, or fast-fading distortion.
In step 4, take C₁ = 0.01, C₂ = 0.02, C₃ = 0.01, and α = β = γ = 1.
Compared with the prior art, the invention has the following advantages:

1) When obtaining the structural similarity between co-located blocks of the original undistorted image and the distorted image to be evaluated, the method takes the distortion type of the distorted image into account, so that it adaptively selects the pixel-domain, DCT-domain, or wavelet-domain structural similarity. This improves the accuracy of the quality evaluation results for images with different distortion types.

2) When discriminating the distortion type of the distorted image, the method exploits the distortion characteristics an image exhibits under white Gaussian noise and under JPEG distortion. Given the original reference image, it determines the distortion type of the distorted image, and this discrimination procedure is highly portable.
Description of the drawings

Fig. 1 is the overall block diagram of the method of the invention;

Fig. 2 shows the 12 original undistorted images in the training set used by the method;

Fig. 3 plots judgment accuracy against threshold size for determining whether an image has white Gaussian noise distortion;

Fig. 4 plots judgment accuracy against threshold size for determining whether an image has JPEG distortion.
Detailed description of the embodiment

The present invention is described in further detail below with reference to the accompanying drawings and an embodiment.
The present invention proposes an adaptive image quality evaluation method based on distortion-type judgment, whose processing is as follows:

First, determine the distortion type of the distorted image to be evaluated.

Second, process the image according to its distortion type: if the distortion is white Gaussian noise, divide the original undistorted image and the distorted image into overlapping 8 × 8 blocks in the pixel domain and, from the per-block luminance means and standard deviations and the covariance between co-located blocks, obtain the pixel-domain structural similarity of every pair of co-located blocks; if the distortion is JPEG distortion, divide both images into overlapping 8 × 8 blocks in the DCT (Discrete Cosine Transform) domain and, from the per-block coefficient means and standard deviations and the covariance between co-located blocks, obtain the DCT-domain structural similarity of every pair of co-located blocks; if the distortion is blur-class distortion, divide both images into overlapping 8 × 8 blocks in the wavelet domain and, from the per-block coefficient means and standard deviations and the covariance between co-located blocks, obtain the wavelet-domain structural similarity of every pair of co-located blocks.

Finally, from the structural similarities of all pairs of co-located blocks in the two images, compute the objective quality score of the distorted image to be evaluated.
Adaptive image quality method for objectively evaluating of the present invention, it totally realizes block diagram as shown in Figure 1, it specifically comprises the following steps:
1. Let X denote the original undistorted image and Y the distorted image to be evaluated, then determine the distortion type of Y by the distortion-type discrimination method; the distortion type of Y is one of white Gaussian noise distortion, JPEG distortion, or blur-class distortion.
At present, image distortion types generally fall into three classes: white Gaussian noise distortion (WN, white noise), JPEG distortion (JPEG), and blur-class distortion, where blur-class distortion comprises the three kinds Gaussian blur distortion (Gblur, Gaussian blur), JPEG2000 distortion, and fast fading distortion (FF, fast fading). Here, the distortion-type discrimination method determines which of these types Y belongs to.
The detailed process of determining the distortion type of Y by the distortion-type discrimination method in step 1. is as follows:
1.-a. Partition X into non-overlapping 64 × 64 blocks, obtaining M' × N' blocks with M' = ⌊H/64⌋ and N' = ⌊W/64⌋, and denote the block at coordinate position (i', j') in X by x'_{i',j'}. Apply a one-level wavelet decomposition (Haar wavelet) to each block, extract the diagonal subband, find the median of the coefficient magnitudes in the diagonal subband of each block, and compute the noise standard deviation of each block. The median of the coefficient magnitudes in the wavelet diagonal subband of x'_{i',j'} is denoted m_{x',i',j'} and its noise standard deviation is denoted σ_{x',i',j'}, where 1 ≤ i' ≤ M', 1 ≤ j' ≤ N', and H and W denote the height and width of X respectively;
Likewise, partition Y into non-overlapping 64 × 64 blocks, obtaining M' × N' blocks, and denote the block at coordinate position (i', j') in Y by y'_{i',j'}. Apply a one-level wavelet decomposition (Haar wavelet) to each block, extract the diagonal subband, find the median of the coefficient magnitudes in the diagonal subband of each block, and compute the noise standard deviation of each block. The median of the coefficient magnitudes in the wavelet diagonal subband of y'_{i',j'} is denoted m_{y',i',j'} and its noise standard deviation is denoted σ_{y',i',j'}, where 1 ≤ i' ≤ M', 1 ≤ j' ≤ N', H and W denote the height and width of Y respectively, and X and Y have identical size;
1.-b. Compute the difference between the noise standard deviations of every pair of co-located blocks in X and Y. For the blocks x'_{i',j'} and y'_{i',j'} at coordinate position (i', j') in X and Y, this difference is denoted Δσ_{i',j'}. Then compute the mean of Δσ_{i',j'} over all pairs of co-located blocks, denoted Δσ_avg;
1.-c. Judge whether Δσ_avg > Th_WN holds. If it holds, determine that the distortion type of Y is white Gaussian noise distortion, then finish; otherwise, execute step 1.-d. Here Th_WN is the white Gaussian noise distortion discrimination threshold; in the present embodiment the value of Th_WN is 0.8;
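Steps 1.-a to 1.-c can be sketched as follows. This is a minimal illustration, not the patent's exact formulas (which are not reproduced in this text): it assumes the widely used robust median estimator σ = median(|HH|)/0.6745 on the Haar diagonal subband, and that the white-noise decision compares the mean noise-standard-deviation difference against Th_WN. All function names are illustrative.

```python
import statistics

def haar_diagonal(block):
    """One-level Haar diagonal (HH) coefficients of a 2-D list of pixel values."""
    h, w = len(block), len(block[0])
    return [(block[r][c] - block[r][c + 1] - block[r + 1][c] + block[r + 1][c + 1]) / 2.0
            for r in range(0, h - 1, 2) for c in range(0, w - 1, 2)]

def noise_std(block):
    """Robust noise estimate from the median of |HH| magnitudes (assumed estimator)."""
    return statistics.median(abs(v) for v in haar_diagonal(block)) / 0.6745

def is_white_noise(x_blocks, y_blocks, th_wn=0.8):
    """Step 1.-c: mean noise-std difference over co-located 64x64 blocks vs. Th_WN."""
    diffs = [noise_std(yb) - noise_std(xb) for xb, yb in zip(x_blocks, y_blocks)]
    return sum(diffs) / len(diffs) > th_wn
```

Noise raises the diagonal-subband magnitudes of the distorted blocks, so a large mean difference flags white Gaussian noise.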
1.-d. Compute the horizontal luminance difference map of X, denoted X_h. The coefficient at coordinate position (i'', j'') in X_h is X_h(i'', j'') = |X(i'', j'') − X(i'', j''+1)|, where 1 ≤ i'' ≤ H, 1 ≤ j'' ≤ W−1, X(i'', j'') denotes the luminance value of the pixel at coordinate position (i'', j'') in X, X(i'', j''+1) denotes the luminance value of the pixel at (i'', j''+1) in X, and "| |" is the absolute-value symbol;
Likewise, compute the luminance difference map of Y, denoted Y_h, where Y_h(i'', j'') = |Y(i'', j'') − Y(i'', j''+1)| for 1 ≤ i'' ≤ H and 1 ≤ j'' ≤ W−1, with Y(i'', j'') and Y(i'', j''+1) the luminance values of the corresponding pixels in Y;
1.-e. Partition the luminance difference map X_h of X into non-overlapping 8 × 8 blocks, obtaining M'' × N'' blocks, and denote the block at coordinate position (i''', j''') in X_h by x_h^{(i''',j''')}. For this block, define a block self-energy, computed from its coefficients at positions (p, q), and a block edge energy, computed from the coefficients in its eighth column at positions (p, 8), where 1 ≤ i''' ≤ M'', 1 ≤ j''' ≤ N'', 1 ≤ p ≤ 8, and 1 ≤ q ≤ 7;
Likewise, partition the luminance difference map Y_h of Y into non-overlapping 8 × 8 blocks, obtaining M'' × N'' blocks, and denote the block at coordinate position (i''', j''') in Y_h by y_h^{(i''',j''')}. For this block, define a block self-energy, computed from its coefficients at positions (p, q), and a block edge energy, computed from the coefficients in its eighth column at positions (p, 8), where 1 ≤ i''' ≤ M'', 1 ≤ j''' ≤ N'', 1 ≤ p ≤ 8, and 1 ≤ q ≤ 7;
1.-f. For every block of X_h, compute the ratio between its block edge energy and its block self-energy; the ratio for the block x_h^{(i''',j''')} at coordinate position (i''', j''') in X_h is denoted R_x^{(i''',j''')}. Likewise, for every block of Y_h compute the ratio R_y^{(i''',j''')}. Count the number of block pairs whose ratios satisfy the discrimination inequality, denote this number N_0, and define the discriminant criterion J from N_0 and the total number of blocks;
1.-g. Judge whether J > Th_JPEG holds. If it holds, determine that the distortion type of Y is JPEG distortion, then finish; otherwise, execute step 1.-h. Here Th_JPEG is the JPEG distortion discrimination threshold; in the present embodiment the value of Th_JPEG is 0.57;
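Steps 1.-d to 1.-f can be sketched as follows. The patent's exact energy formulas and discrimination inequality are not reproduced in this text, so this sketch assumes sum-of-squares energies: block self-energy over columns 1–7 and block edge energy over column 8 of each 8 × 8 difference-map block (JPEG block boundaries fall on every eighth column, so blocky images concentrate energy there). The names are illustrative.

```python
def luminance_diff_map(img):
    """Step 1.-d: horizontal absolute luminance differences, one column narrower."""
    return [[abs(row[j] - row[j + 1]) for j in range(len(row) - 1)] for row in img]

def edge_to_self_ratio(block8):
    """Step 1.-e/1.-f for one 8x8 difference-map block (assumed sum-of-squares energies)."""
    edge = sum(row[7] ** 2 for row in block8)                     # eighth column (p, 8)
    body = sum(row[q] ** 2 for row in block8 for q in range(7))   # columns (p, 1..7)
    return edge / (body + 1e-12)                                  # guard against zero energy
```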
1.-h. Determine that the distortion type of Y is blur-class distortion, i.e. either Gaussian blur distortion, JPEG2000 distortion, or fast fading distortion.
In the present embodiment, the image data used are the 808 images provided by the publicly available image quality assessment database of the Laboratory for Image and Video Engineering (LIVE) of the University of Texas at Austin, comprising 29 undistorted reference images and 779 distorted images: 145 white Gaussian noise distorted images, 145 Gaussian blur distorted images, 175 JPEG distorted images, 169 JPEG2000 distorted images, and 145 fast fading distorted images. In addition, Figure 2 shows 12 undistorted images of simple, complex, and medium texture selected from the 29 undistorted reference images. These 12 undistorted images and the corresponding images of the 5 distortion types serve as the training-set images: 60 images each of white Gaussian noise, Gaussian blur, and fast fading distortion, and 70 images each of JPEG and JPEG2000 distortion. The remaining 17 undistorted images and the corresponding images of the 5 distortion types serve as the test-set images: 85 images each of white Gaussian noise, Gaussian blur, and fast fading distortion, 105 JPEG distorted images, and 99 JPEG2000 distorted images.
In the first step, the white Gaussian noise distorted images are separated from all distorted images of the training database. To determine the threshold Th_WN involved in deciding whether the distortion type of the distorted image Y is white Gaussian noise distortion, a probe value of Th_WN is taken every 0.05 within the interval [0.5, 1.5]. For each probe value, the data training of discrimination steps 1.-a, 1.-b, and 1.-c is carried out and the discrimination accuracy is computed. The relation between the value of Th_WN and the discrimination accuracy is shown in Figure 3; as can be seen from Figure 3, when Th_WN = 0.8 the white Gaussian noise distorted images are separated with 100% accuracy.
In the second step, the JPEG distorted images are separated from the non-white-Gaussian-noise distorted images of the training set. A probe value of Th_JPEG is taken every 0.01 within the interval [0.4, 0.7]. For each probe value, the data training of discrimination steps 1.-d to 1.-g is carried out and the discrimination accuracy is computed. The relation between the value of Th_JPEG and the discrimination accuracy is shown in Figure 4; as can be seen from Figure 4, when Th_JPEG = 0.57 the JPEG distorted images are separated from the non-white-Gaussian-noise distorted images with 100% accuracy.
2. If the distortion type of the distorted image Y is white Gaussian noise distortion, slide an 8 × 8 window over X pixel by pixel in row-by-row, column-by-column order, dividing X into M × N overlapping 8 × 8 image blocks, and denote the block whose coordinate position is (i, j) in X by x_{i,j}. Likewise, slide an 8 × 8 window over Y pixel by pixel in the same order, dividing Y into M × N overlapping 8 × 8 image blocks, and denote the block at (i, j) in Y by y_{i,j}. Here H denotes the height of X and Y, W denotes their width, M and N are determined from H, W, and the 8 × 8 window size, the symbol ⌊·⌋ denotes rounding down, and 1 ≤ i ≤ M, 1 ≤ j ≤ N.
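The pixel-by-pixel sliding-window division of step 2. can be sketched as follows. For a window stepped one pixel at a time, M = H − 7 and N = W − 7; that count is an assumption of the sketch, as the source does not reproduce the M, N formulas.

```python
def sliding_blocks(img, size=8):
    """Divide an H x W image into overlapping size x size blocks, stepping one pixel
    in row-by-row, column-by-column order; returns blocks indexed [i][j]."""
    h, w = len(img), len(img[0])
    return [[[row[j:j + size] for row in img[i:i + size]]
             for j in range(w - size + 1)]
            for i in range(h - size + 1)]
```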
If the distortion type of the distorted image Y is JPEG distortion, slide an 8 × 8 window over X pixel by pixel in row-by-row, column-by-column order, dividing X into M × N overlapping 8 × 8 image blocks, denote the block at coordinate position (i, j) in X by x_{i,j}, and apply a two-dimensional DCT to every block x_{i,j}; the block obtained after the transform is denoted x^{dct}_{i,j}. Likewise, slide an 8 × 8 window over Y pixel by pixel in the same order, dividing Y into M × N overlapping 8 × 8 image blocks, denote the block at (i, j) in Y by y_{i,j}, and apply a two-dimensional DCT to every block y_{i,j}; the transformed block is denoted y^{dct}_{i,j}. Here H denotes the height of X and Y, W denotes their width, and 1 ≤ i ≤ M, 1 ≤ j ≤ N.
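The two-dimensional DCT applied to each 8 × 8 block can be illustrated with the standard orthonormal DCT-II; the normalization is an assumption of this sketch, as the text does not spell it out.

```python
import math

def dct2_8x8(block):
    """Orthonormal 2-D DCT-II of an 8x8 block; out[0][0] is the DC coefficient."""
    n = 8
    def c(k):  # normalization factor per frequency index
        return math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = sum(block[p][q]
                    * math.cos((2 * p + 1) * u * math.pi / (2 * n))
                    * math.cos((2 * q + 1) * v * math.pi / (2 * n))
                    for p in range(n) for q in range(n))
            out[u][v] = c(u) * c(v) * s
    return out
```

A constant block maps entirely to the DC term, with all AC coefficients (the ones the later structure function uses) near zero.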
If the distortion type of the distorted image Y is blur-class distortion, apply a one-level wavelet transform to X and denote the extracted approximation component by X_a. Slide an 8 × 8 window over X_a point by point in row-by-row, column-by-column order, dividing X_a into M' × N' overlapping 8 × 8 image blocks, and denote the block at coordinate position (i', j') in X_a by x^{a}_{i',j'}. Likewise, apply a one-level wavelet transform to Y, denote the extracted approximation component by Y_a, slide an 8 × 8 window over Y_a point by point, dividing Y_a into M' × N' overlapping 8 × 8 image blocks, and denote the block at (i', j') in Y_a by y^{a}_{i',j'}. Here H' denotes the height of X_a and Y_a, W' denotes their width, and 1 ≤ i' ≤ M', 1 ≤ j' ≤ N'.
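A one-level wavelet approximation component can be sketched with the Haar wavelet (the same wavelet the discrimination step uses; whether the blur-class branch also uses Haar is an assumption of this sketch).

```python
def haar_approx(img):
    """One-level Haar approximation (LL) component: (a+b+c+d)/2 per 2x2 group,
    halving each dimension of the input image."""
    h, w = len(img), len(img[0])
    return [[(img[r][c] + img[r][c + 1] + img[r + 1][c] + img[r + 1][c + 1]) / 2.0
             for c in range(0, w - 1, 2)]
            for r in range(0, h - 1, 2)]
```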
3. If the distortion type of the distorted image Y is white Gaussian noise distortion, compute the luminance mean and standard deviation of all pixels in each block of X and in each block of Y, then compute the covariance between the pixels of every pair of co-located blocks in X and Y. The luminance mean and standard deviation of the pixels of the block x_{i,j} at coordinate position (i, j) in X are denoted μ_{x,i,j} and σ_{x,i,j}; those of the block y_{i,j} at (i, j) in Y are denoted μ_{y,i,j} and σ_{y,i,j}; the covariance between the pixels of x_{i,j} and y_{i,j} is denoted σ_{xy,i,j}. Here x_{i,j}(u, v) denotes the luminance value of the pixel at coordinate position (u, v) in x_{i,j}, y_{i,j}(u, v) denotes that in y_{i,j}, and 1 ≤ u ≤ 8, 1 ≤ v ≤ 8.
If the distortion type of the distorted image Y is JPEG distortion, compute the luminance mean and standard deviation of all pixels in each block of X and in each block of Y, then compute the mean and standard deviation of the DCT AC coefficients of each block of X and of each block of Y, and finally compute the covariance between the DCT AC coefficients of every pair of co-located blocks in X and Y. The luminance mean and standard deviation of the pixels of the block x_{i,j} at coordinate position (i, j) in X are denoted μ_{x,i,j} and σ_{x,i,j}; those of the block y_{i,j} at (i, j) in Y are denoted μ_{y,i,j} and σ_{y,i,j}. The mean and standard deviation of the AC coefficients of the block x^{dct}_{i,j} obtained by applying the DCT to x_{i,j} are denoted μ^{d}_{x,i,j} and σ^{d}_{x,i,j}; those of the block y^{dct}_{i,j} obtained from y_{i,j} are denoted μ^{d}_{y,i,j} and σ^{d}_{y,i,j}; the covariance between the AC coefficients of the co-located DCT-domain blocks x^{dct}_{i,j} and y^{dct}_{i,j} is denoted σ^{d}_{xy,i,j}. Here x_{i,j}(u, v) and y_{i,j}(u, v) denote the luminance values of the pixels at coordinate position (u, v) in x_{i,j} and y_{i,j}, with 1 ≤ u ≤ 8 and 1 ≤ v ≤ 8; x^{dct}_{i,j}(u_d, v_d) and y^{dct}_{i,j}(u_d, v_d) denote the DCT coefficient values at coordinate position (u_d, v_d), with 1 ≤ u_d ≤ 8, 1 ≤ v_d ≤ 8, and u_d and v_d not both equal to 1 (i.e. the DC coefficient is excluded).
If the distortion type of the distorted image Y is blur-class distortion, compute the mean and standard deviation of the coefficients in each block of the approximation component X_a of X after the one-level wavelet transform, and likewise of each block of the approximation component Y_a of Y, then compute the covariance between the coefficients of every pair of co-located blocks in X_a and Y_a. The mean and standard deviation of the coefficients of the block x^{a}_{i_w,j_w} at coordinate position (i_w, j_w) in X_a are denoted μ^{w}_{x,i_w,j_w} and σ^{w}_{x,i_w,j_w}; those of the block y^{a}_{i_w,j_w} at (i_w, j_w) in Y_a are denoted μ^{w}_{y,i_w,j_w} and σ^{w}_{y,i_w,j_w}; the covariance between the coefficients of the two blocks is denoted σ^{w}_{xy,i_w,j_w}. Here x^{a}_{i_w,j_w}(u_w, v_w) and y^{a}_{i_w,j_w}(u_w, v_w) denote the coefficient values at coordinate position (u_w, v_w) in the respective blocks, with 1 ≤ u_w ≤ 8 and 1 ≤ v_w ≤ 8.
4. If the distortion type of the distorted image Y is white Gaussian noise distortion, compute the luminance function, contrast function, and structure function between every pair of co-located blocks in X and Y. For the blocks x_{i,j} and y_{i,j} at coordinate position (i, j) these are denoted l(x_{i,j}, y_{i,j}), c(x_{i,j}, y_{i,j}), and s(x_{i,j}, y_{i,j}), taking the standard structural-similarity forms
l(x_{i,j}, y_{i,j}) = (2 μ_{x,i,j} μ_{y,i,j} + C1) / (μ_{x,i,j}² + μ_{y,i,j}² + C1),
c(x_{i,j}, y_{i,j}) = (2 σ_{x,i,j} σ_{y,i,j} + C2) / (σ_{x,i,j}² + σ_{y,i,j}² + C2),
s(x_{i,j}, y_{i,j}) = (σ_{xy,i,j} + C3) / (σ_{x,i,j} σ_{y,i,j} + C3),
where C1, C2, C3 are small numerical constants introduced to keep the denominators from being zero; in the present embodiment C1 = 0.01, C2 = 0.02, C3 = 0.01.
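The block statistics of step 3. and the luminance, contrast, and structure functions of step 4. can be sketched as follows, using population statistics over the 64 samples of each 8 × 8 block and the standard structural-similarity forms that the text describes (the source does not reproduce the formulas, so the exact forms are an assumption of the sketch).

```python
import math

def block_stats(x, y):
    """Means, standard deviations, and covariance over two equally sized 2-D blocks."""
    xs = [v for row in x for v in row]
    ys = [v for row in y for v in row]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sx = math.sqrt(sum((v - mx) ** 2 for v in xs) / n)
    sy = math.sqrt(sum((v - my) ** 2 for v in ys) / n)
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / n
    return mx, my, sx, sy, sxy

def ssim_block(x, y, c1=0.01, c2=0.02, c3=0.01):
    """SSIM of one co-located block pair: l * c * s with alpha = beta = gamma = 1."""
    mx, my, sx, sy, sxy = block_stats(x, y)
    l = (2 * mx * my + c1) / (mx ** 2 + my ** 2 + c1)   # luminance function
    c = (2 * sx * sy + c2) / (sx ** 2 + sy ** 2 + c2)   # contrast function
    s = (sxy + c3) / (sx * sy + c3)                     # structure function
    return l * c * s
```

Identical blocks score 1; any luminance, contrast, or structure discrepancy pulls the product below 1.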
If the distortion type of the distorted image Y is JPEG distortion, compute the luminance function and contrast function between every pair of co-located blocks in X and Y as above, denoted l(x_{i,j}, y_{i,j}) and c(x_{i,j}, y_{i,j}), and compute the structure function of every pair of co-located blocks in the DCT domain. The DCT-domain structure function between the block x^{dct}_{i,j} obtained from x_{i,j} by the DCT and the block y^{dct}_{i,j} obtained from y_{i,j} is denoted f(x_{i,j}, y_{i,j}) and is computed analogously to s, but from the AC-coefficient statistics:
f(x_{i,j}, y_{i,j}) = (σ^{d}_{xy,i,j} + C3) / (σ^{d}_{x,i,j} σ^{d}_{y,i,j} + C3),
where C1, C2, C3 are small numerical constants introduced to keep the denominators from being zero; in the present embodiment C1 = 0.01, C2 = 0.02, C3 = 0.01.
If the distortion type of the distorted image Y is blur-class distortion, compute the wavelet-coefficient luminance function, contrast function, and structure function between every pair of co-located blocks in X and Y, denoted l_w(x_{i,j}, y_{i,j}), c_w(x_{i,j}, y_{i,j}), and s_w(x_{i,j}, y_{i,j}) for the blocks at coordinate position (i, j). These take the same forms as l, c, and s, but are computed from the means, standard deviations, and covariance of the wavelet-domain block coefficients. Here C1, C2, C3 are small numerical constants introduced to keep the denominators from being zero; in the present embodiment C1 = 0.01, C2 = 0.02, C3 = 0.01.
5. If the distortion type of the distorted image Y is white Gaussian noise distortion, compute, from the luminance, contrast, and structure functions of every pair of co-located blocks in X and Y, the structural similarity between every such pair. The structural similarity between the block x_{i,j} at coordinate position (i, j) in X and the block y_{i,j} at (i, j) in Y is denoted SSIM(x_{i,j}, y_{i,j}), with SSIM(x_{i,j}, y_{i,j}) = [l(x_{i,j}, y_{i,j})]^α · [c(x_{i,j}, y_{i,j})]^β · [s(x_{i,j}, y_{i,j})]^γ, where α, β, and γ are adjustment factors; in the present embodiment α = β = γ = 1.
If the distortion type of the distorted image Y is JPEG distortion, compute, from the luminance and contrast functions of every pair of co-located blocks in X and Y together with their DCT-domain structure function, the DCT-domain structural similarity between every pair of co-located blocks, denoted FSSIM(x_{i,j}, y_{i,j}) for the blocks at coordinate position (i, j), with FSSIM(x_{i,j}, y_{i,j}) = [l(x_{i,j}, y_{i,j})]^α · [c(x_{i,j}, y_{i,j})]^β · [f(x_{i,j}, y_{i,j})]^γ, where α, β, and γ are adjustment factors; in the present embodiment α = β = γ = 1.
If the distortion type of the distorted image Y is blur-class distortion, compute, from the wavelet-coefficient luminance, contrast, and structure functions of every pair of co-located blocks in X and Y, the wavelet-domain structural similarity between every such pair, denoted WSSIM(x_{i,j}, y_{i,j}) for the blocks at coordinate position (i, j), with WSSIM(x_{i,j}, y_{i,j}) = [l_w(x_{i,j}, y_{i,j})]^α · [c_w(x_{i,j}, y_{i,j})]^β · [s_w(x_{i,j}, y_{i,j})]^γ, where α, β, and γ are adjustment factors; in the present embodiment α = β = γ = 1.
6. If the distortion type of the distorted image Y is white Gaussian noise distortion, compute the objective quality score of Y, denoted Q_wn, from the pixel-domain structural similarities of all pairs of co-located blocks in X and Y.
If the distortion type of the distorted image Y is JPEG distortion, compute the objective quality score of Y, denoted Q_jpeg, from the DCT-domain structural similarities of all pairs of co-located blocks in X and Y.
If the distortion type of the distorted image Y is blur-class distortion, compute the objective quality score of Y, denoted Q_blur, from the wavelet-domain structural similarities of all pairs of co-located blocks in X and Y.
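Step 6. pools the per-block similarities into a single score. The pooling formula is not reproduced in this text; the sketch below assumes simple mean pooling, the usual choice for SSIM-type indices.

```python
def quality_score(block_similarities):
    """Objective quality score: mean of the M x N per-block structural similarities
    (assumed mean pooling; applies alike to SSIM, FSSIM, and WSSIM maps)."""
    vals = [v for row in block_similarities for v in row]
    return sum(vals) / len(vals)
```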
In the present embodiment, the 29 undistorted images and 779 singly distorted images provided in LIVE are used, together with the DMOS (differential mean opinion score) value of each distorted image. The quality score Q of each distorted image is computed according to steps 1. to 6., and the objective quality scores Q of the 779 distorted images are fitted to the DMOS values with a four-parameter logistic nonlinear fit. Four objective parameters commonly used for assessing image quality evaluation methods serve as evaluation indices: the Pearson linear correlation coefficient (CC), the Spearman rank-order correlation coefficient (SROCC), the outlier ratio (OR), and the root mean squared error (RMSE). Higher CC and SROCC values together with lower OR and RMSE values indicate better correlation between the objective evaluation method and DMOS.
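The correlation indices CC and SROCC used in this evaluation can be computed as follows (a minimal sketch: tied ranks are not handled, and the four-parameter logistic fit is omitted).

```python
import math

def pearson_cc(a, b):
    """Pearson linear correlation coefficient (CC) of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

def spearman_srocc(a, b):
    """Spearman rank-order correlation (SROCC): Pearson CC of the ranks."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda k: v[k])
        r = [0.0] * len(v)
        for rank, idx in enumerate(order):
            r[idx] = float(rank)
        return r
    return pearson_cc(ranks(a), ranks(b))
```

Since DMOS increases as quality worsens while Q increases as quality improves, a good metric yields SROCC close to −1 against raw DMOS (the sign is conventionally dropped after the logistic fit).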
Table 1 lists the values of the CC, SROCC, OR, and RMSE assessment coefficients under the various distortion types. From the data listed in Table 1, the correlation between the objective quality score Q obtained in the present embodiment and the subjective score DMOS is very high: all CC values exceed 0.94, all SROCC values exceed 0.91, all OR values are below 0.41, and all RMSE values are below 5.4. These results show that the objective evaluation results of the inventive method are highly consistent with human subjective perception, fully demonstrating the method's validity.
Table 1. Correlation between the objective evaluation scores of the distorted images obtained in this embodiment and the subjective evaluation scores
Claims (5)
1. An adaptive image quality evaluation method based on distortion-type judgment, characterized in that its processing procedure is:
first, determining the distortion type of the distorted image to be evaluated;
second, processing accordingly in accordance with that distortion type;
if the distorted image has white Gaussian noise distortion, dividing the original undistorted image and the distorted image to be evaluated, in the pixel domain, into a plurality of overlapping 8 × 8 image blocks, and obtaining the pixel-domain structural similarity between every pair of co-located blocks by computing the luminance mean and standard deviation of all pixels in each block of both images together with the covariance between the luminance values of each pair of co-located blocks;
if the distorted image has JPEG distortion, dividing the original undistorted image and the distorted image to be evaluated, in the DCT domain, into a plurality of overlapping 8 × 8 image blocks, and obtaining the DCT-domain structural similarity between every pair of co-located blocks by computing the mean and standard deviation of the DCT-domain coefficients of each block of both images together with the covariance between the coefficients of each pair of co-located blocks;
if the distorted image has blur-class distortion, dividing the original undistorted image and the distorted image to be evaluated, in the wavelet domain, into a plurality of overlapping 8 × 8 image blocks, and obtaining the wavelet-domain structural similarity between every pair of co-located blocks by computing the mean and standard deviation of the wavelet-domain coefficients of each block of both images together with the covariance between the coefficients of each pair of co-located blocks;
finally, obtaining the objective quality score of the distorted image to be evaluated from the structural similarities of all pairs of co-located blocks in the original undistorted image and the distorted image to be evaluated.
2. The adaptive image quality evaluation method based on distortion-type judgment according to claim 1, characterized in that it specifically comprises the following steps:
1. letting X denote the original undistorted image and Y the distorted image to be evaluated, and determining the distortion type of Y by the distortion-type discrimination method, the distortion type of Y being one of white Gaussian noise distortion, JPEG distortion, and blur-class distortion, wherein blur-class distortion comprises Gaussian blur distortion, JPEG2000 distortion, and fast fading distortion;
2. if the distortion type of the distorted image Y is white Gaussian noise distortion, sliding an 8 × 8 window over X pixel by pixel, dividing X into M × N overlapping 8 × 8 image blocks, and denoting the block whose coordinate position is (i, j) in X by x_{i,j}; likewise, sliding an 8 × 8 window over Y pixel by pixel, dividing Y into M × N overlapping 8 × 8 image blocks, and denoting the block at (i, j) in Y by y_{i,j}; wherein H denotes the height of X and Y, W denotes their width, the symbol ⌊·⌋ denotes rounding down, and 1 ≤ i ≤ M, 1 ≤ j ≤ N;
if the distortion type of the distorted image Y is JPEG distortion, sliding an 8 × 8 window over X pixel by pixel, dividing X into M × N overlapping 8 × 8 image blocks, denoting the block at coordinate position (i, j) in X by x_{i,j}, and applying a two-dimensional DCT to every block x_{i,j}, the block obtained after the transform being denoted x^{dct}_{i,j}; likewise, sliding an 8 × 8 window over Y pixel by pixel, dividing Y into M × N overlapping 8 × 8 image blocks, denoting the block at (i, j) in Y by y_{i,j}, and applying a two-dimensional DCT to every block y_{i,j}, the transformed block being denoted y^{dct}_{i,j}; wherein H denotes the height of X and Y, W denotes their width, and 1 ≤ i ≤ M, 1 ≤ j ≤ N;
if the distortion type of the distorted image Y is blur-class distortion, applying a one-level wavelet transform to X, denoting the extracted approximation component by X_a, sliding an 8 × 8 window over X_a point by point, dividing X_a into M' × N' overlapping 8 × 8 image blocks, and denoting the block at coordinate position (i', j') in X_a by x^{a}_{i',j'}; likewise, applying a one-level wavelet transform to Y, denoting the extracted approximation component by Y_a, sliding an 8 × 8 window over Y_a point by point, dividing Y_a into M' × N' overlapping 8 × 8 image blocks, and denoting the block at (i', j') in Y_a by y^{a}_{i',j'}; wherein H' denotes the height of X_a and Y_a, W' denotes their width, and 1 ≤ i' ≤ M', 1 ≤ j' ≤ N';
3. if the distortion type of the distorted image Y is white Gaussian noise distortion, computing the luminance mean and standard deviation of all pixels in each block of X and in each block of Y, then computing the covariance between the pixels of every pair of co-located blocks in X and Y; the luminance mean and standard deviation of the pixels of the block x_{i,j} at coordinate position (i, j) in X being denoted μ_{x,i,j} and σ_{x,i,j}, those of the block y_{i,j} at (i, j) in Y being denoted μ_{y,i,j} and σ_{y,i,j}, and the covariance between the pixels of x_{i,j} and y_{i,j} being denoted σ_{xy,i,j}; wherein x_{i,j}(u, v) denotes the luminance value of the pixel at coordinate position (u, v) in x_{i,j}, y_{i,j}(u, v) denotes that in y_{i,j}, and 1 ≤ u ≤ 8, 1 ≤ v ≤ 8;
If the distortion type of the distorted image Y is JPEG distortion, calculate the brightness mean and standard deviation of all pixels in each image block of X and of Y; then calculate the mean and standard deviation of the DCT AC coefficients of each image block of X and of Y; finally calculate the covariance between all DCT AC coefficients of each pair of co-located image blocks in X and Y. For the image block x_{i,j} at coordinate position (i, j) in X, denote the brightness mean and standard deviation of all its pixels as μ_{x_{i,j}} and σ_{x_{i,j}}; for the image block y_{i,j} at coordinate position (i, j) in Y, denote them as μ_{y_{i,j}} and σ_{y_{i,j}}. Denote the new image block obtained by applying the DCT to x_{i,j} as x^D_{i,j}, and the mean and standard deviation of all its AC coefficients as μ_{x^D_{i,j}} and σ_{x^D_{i,j}}; denote the new image block obtained by applying the DCT to y_{i,j} as y^D_{i,j}, and the mean and standard deviation of all its AC coefficients as μ_{y^D_{i,j}} and σ_{y^D_{i,j}}; denote the covariance between all AC coefficients of the DCT-domain block x^D_{i,j} at coordinate position (i, j) in X and the DCT-domain block y^D_{i,j} at coordinate position (i, j) in Y as σ_{x^D_{i,j}y^D_{i,j}}. Here x_{i,j}(u, v) denotes the brightness value of the pixel at coordinate position (u, v) in x_{i,j}, y_{i,j}(u, v) denotes the brightness value of the pixel at coordinate position (u, v) in y_{i,j}, 1 ≤ u ≤ 8, 1 ≤ v ≤ 8; x^D_{i,j}(u_D, v_D) denotes the DCT coefficient value at coordinate position (u_D, v_D) in x^D_{i,j}, y^D_{i,j}(u_D, v_D) denotes the DCT coefficient value at coordinate position (u_D, v_D) in y^D_{i,j}, 1 ≤ u_D ≤ 8, 1 ≤ v_D ≤ 8, and u_D and v_D are not simultaneously equal to 1 (i.e. the DC coefficient at position (1, 1) is excluded);
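For the JPEG branch, each 8 × 8 block is transformed with the DCT and the statistics are then taken over the 63 AC coefficients only. A minimal sketch, assuming the orthonormal 2-D DCT-II (the claim does not specify the DCT normalisation):

```python
import numpy as np

def dct2_8x8(block):
    """Orthonormal 2-D DCT-II of an 8x8 block via the DCT matrix
    (same result as applying scipy's dct with norm='ortho' twice)."""
    n = 8
    k, m = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)
    return C @ block @ C.T

def ac_stats(block):
    """Mean and standard deviation of the 63 AC coefficients, i.e. every
    DCT coefficient except the DC term -- the claim's 'u_D and v_D not
    simultaneously 1' in 1-based coordinates."""
    coef = dct2_8x8(block)
    ac = np.delete(coef.ravel(), 0)   # drop the DC coefficient at (0, 0)
    return ac.mean(), ac.std()

flat = np.full((8, 8), 7.0)           # a constant block has zero AC energy
mu_ac, sd_ac = ac_stats(flat)
```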
If the distortion type of the distorted image Y is blur-class distortion, calculate the mean and standard deviation of all coefficient values in each image block of the approximation component X_a obtained from the one-level wavelet transform of X, and the mean and standard deviation of all coefficient values in each image block of the approximation component Y_a obtained from the one-level wavelet transform of Y; then calculate the covariance between all coefficients of each pair of co-located image blocks in X_a and Y_a. For the image block x^a_{i_w,j_w} at coordinate position (i_w, j_w) in X_a, denote the mean and standard deviation of all its coefficients as μ_{x^a_{i_w,j_w}} and σ_{x^a_{i_w,j_w}}; for the image block y^a_{i_w,j_w} at coordinate position (i_w, j_w) in Y_a, denote the mean and standard deviation of all its coefficients as μ_{y^a_{i_w,j_w}} and σ_{y^a_{i_w,j_w}}; denote the covariance between all coefficients of the block x^a_{i_w,j_w} at coordinate position (i_w, j_w) in X_a and all coefficients of the block y^a_{i_w,j_w} at coordinate position (i_w, j_w) in Y_a as σ_{x^a_{i_w,j_w}y^a_{i_w,j_w}};
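For the blur-class branch, the statistics are taken over blocks of the one-level wavelet approximation (LL) component. A sketch assuming the Haar wavelet, since the claim does not name the wavelet filter (PyWavelets' `pywt.dwt2(img, 'haar')[0]` would give the same approximation component):

```python
import numpy as np

def haar_approx(img):
    """Approximation (LL) component of a one-level Haar wavelet transform,
    used as a stand-in for the unspecified wavelet in the claim."""
    a = (img[0::2, :] + img[1::2, :]) / np.sqrt(2.0)   # filter rows
    return (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2.0)    # filter columns

img = np.arange(16, dtype=float).reshape(4, 4)
ll = haar_approx(img)   # each LL coefficient is the mean of a 2x2 tile * 2
```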
④ If the distortion type of the distorted image Y is white Gaussian noise distortion, calculate the luminance function, contrast function, and structure function between each pair of co-located image blocks in X and Y. Denote the luminance, contrast, and structure functions between the image block x_{i,j} at coordinate position (i, j) in X and the image block y_{i,j} at coordinate position (i, j) in Y as l(x_{i,j}, y_{i,j}), c(x_{i,j}, y_{i,j}) and s(x_{i,j}, y_{i,j}) respectively, in the standard structural-similarity form:
l(x_{i,j}, y_{i,j}) = (2 μ_{x_{i,j}} μ_{y_{i,j}} + C_1) / (μ_{x_{i,j}}² + μ_{y_{i,j}}² + C_1),
c(x_{i,j}, y_{i,j}) = (2 σ_{x_{i,j}} σ_{y_{i,j}} + C_2) / (σ_{x_{i,j}}² + σ_{y_{i,j}}² + C_2),
s(x_{i,j}, y_{i,j}) = (σ_{x_{i,j}y_{i,j}} + C_3) / (σ_{x_{i,j}} σ_{y_{i,j}} + C_3);
If the distortion type of the distorted image Y is JPEG distortion, calculate the luminance function and contrast function between each pair of co-located image blocks in X and Y, and calculate the structure function between each pair of co-located image blocks of X and Y in the DCT domain. Denote the luminance function and contrast function between the image block x_{i,j} at coordinate position (i, j) in X and the image block y_{i,j} at coordinate position (i, j) in Y as l(x_{i,j}, y_{i,j}) and c(x_{i,j}, y_{i,j}), computed from the block brightness means and standard deviations as in the white Gaussian noise case; denote the structure function between the new image block x^D_{i,j} obtained from x_{i,j} after the DCT and the new image block y^D_{i,j} obtained from y_{i,j} after the DCT as f(x_{i,j}, y_{i,j}), computed in the same form as s from the standard deviations and covariance of the DCT AC coefficients; wherein C_1, C_2 and C_3 are small numerical constants set to avoid a zero denominator;
If the distortion type of the distorted image Y is blur-class distortion, calculate the wavelet-coefficient luminance function, wavelet-coefficient contrast function, and wavelet-coefficient structure function between each pair of co-located image blocks in X and Y. Denote the wavelet-coefficient luminance, contrast, and structure functions between the pair of blocks at the same coordinate position as l_w(x_{i,j}, y_{i,j}), c_w(x_{i,j}, y_{i,j}) and s_w(x_{i,j}, y_{i,j}) respectively, computed in the same form as l, c and s from the means, standard deviations and covariance of the wavelet coefficients obtained in step ③; wherein C_1, C_2 and C_3 are small numerical constants set to avoid a zero denominator;
⑤ If the distortion type of the distorted image Y is white Gaussian noise distortion, calculate, from the luminance, contrast and structure functions between each pair of co-located image blocks in X and Y, the structural similarity between each such pair. Denote the structural similarity between the image block x_{i,j} at coordinate position (i, j) in X and the image block y_{i,j} at coordinate position (i, j) in Y as SSIM(x_{i,j}, y_{i,j}) = [l(x_{i,j}, y_{i,j})]^α · [c(x_{i,j}, y_{i,j})]^β · [s(x_{i,j}, y_{i,j})]^γ, where α, β and γ are adjustment factors;
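Putting the block statistics and the l, c, s functions together gives the per-block pixel-domain similarity. A sketch using the constant and exponent values recited in claim 5 (C_1 = 0.01, C_2 = 0.02, C_3 = 0.01, α = β = γ = 1); the l, c, s definitions follow the standard SSIM model, which is the natural reading of the claim:

```python
import numpy as np

C1, C2, C3 = 0.01, 0.02, 0.01      # values from claim 5
ALPHA = BETA = GAMMA = 1.0

def ssim_block(x, y):
    """Pixel-domain structural similarity of two co-located 8x8 blocks,
    combining luminance (l), contrast (c) and structure (s) terms."""
    mu_x, mu_y = x.mean(), y.mean()
    sd_x, sd_y = x.std(), y.std()
    cov = ((x - mu_x) * (y - mu_y)).mean()
    l = (2 * mu_x * mu_y + C1) / (mu_x ** 2 + mu_y ** 2 + C1)
    c = (2 * sd_x * sd_y + C2) / (sd_x ** 2 + sd_y ** 2 + C2)
    s = (cov + C3) / (sd_x * sd_y + C3)
    return (l ** ALPHA) * (c ** BETA) * (s ** GAMMA)

rng = np.random.default_rng(0)
blk = rng.random((8, 8))
identical = ssim_block(blk, blk)   # identical blocks score exactly 1
```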
If the distortion type of the distorted image Y is JPEG distortion, calculate, from the luminance and contrast functions between each pair of co-located image blocks in X and Y and the DCT-domain structure function between them, the DCT-domain structural similarity between each such pair. Denote the DCT-domain structural similarity between the image block x_{i,j} at coordinate position (i, j) in X and the image block y_{i,j} at coordinate position (i, j) in Y as FSSIM(x_{i,j}, y_{i,j}) = [l(x_{i,j}, y_{i,j})]^α · [c(x_{i,j}, y_{i,j})]^β · [f(x_{i,j}, y_{i,j})]^γ, where α, β and γ are adjustment factors;
If the distortion type of the distorted image Y is blur-class distortion, calculate, from the wavelet-coefficient luminance, contrast and structure functions between each pair of co-located image blocks in X and Y, the wavelet-domain structural similarity between each such pair. Denote the wavelet-domain structural similarity between the image block x_{i,j} at coordinate position (i, j) in X and the image block y_{i,j} at coordinate position (i, j) in Y as WSSIM(x_{i,j}, y_{i,j}) = [l_w(x_{i,j}, y_{i,j})]^α · [c_w(x_{i,j}, y_{i,j})]^β · [s_w(x_{i,j}, y_{i,j})]^γ, where α, β and γ are adjustment factors;
⑥ If the distortion type of the distorted image Y is white Gaussian noise distortion, calculate the objective quality score of Y, denoted Q_WN, from the pixel-domain structural similarities between all pairs of co-located image blocks in X and Y, i.e. as the average of SSIM(x_{i,j}, y_{i,j}) over all image blocks;
If the distortion type of the distorted image Y is JPEG distortion, calculate the objective quality score of Y, denoted Q_JPEG, from the DCT-domain structural similarities between all pairs of co-located image blocks in X and Y, i.e. as the average of FSSIM(x_{i,j}, y_{i,j}) over all image blocks;
If the distortion type of the distorted image Y is blur-class distortion, calculate the objective quality score of Y, denoted Q_blur, from the wavelet-domain structural similarities between all pairs of co-located image blocks in X and Y, i.e. as the average of WSSIM(x_{i,j}, y_{i,j}) over all image blocks.
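The pooling step is the same in all three branches: tile the images, score each pair of co-located blocks with the branch-specific similarity, and average. The claim's formula images for Q_WN, Q_JPEG and Q_blur are not recoverable here, so the plain block average below is an assumption:

```python
import numpy as np

def quality_score(ref, dist, block_fn, size=8):
    """Objective quality score: mean of a per-block similarity function
    over all non-overlapping co-located blocks (assumed pooling rule)."""
    h, w = ref.shape
    scores = [
        block_fn(ref[i:i + size, j:j + size], dist[i:i + size, j:j + size])
        for i in range(0, h - size + 1, size)
        for j in range(0, w - size + 1, size)
    ]
    return float(np.mean(scores))

ref = np.zeros((16, 16))
q = quality_score(ref, ref, lambda a, b: 1.0)   # every block scores 1
```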
3. The adaptive image quality evaluation method based on distortion type judgment according to claim 2, characterized in that the distortion type discrimination method in step ① determines the distortion type of Y as follows:
①-a. Partition X into non-overlapping blocks of size 64 × 64, obtaining M' × N' image blocks of size 64 × 64; denote the image block at coordinate position (i', j') in X as x'_{i',j'}. Perform a one-level wavelet decomposition on each image block x'_{i',j'} and extract its diagonal component; find the median of the coefficient magnitudes in the diagonal component of each image block, and calculate each image block's noise standard deviation. Denote the median of the coefficient magnitudes in the wavelet diagonal component of x'_{i',j'} as m^X_{i',j'}, and the noise standard deviation computed from this median as σ^X_{i',j'}; wherein 1 ≤ i' ≤ M', 1 ≤ j' ≤ N';
Likewise, partition Y into non-overlapping blocks of size 64 × 64, obtaining M' × N' image blocks of size 64 × 64; denote the image block at coordinate position (i', j') in Y as y'_{i',j'}. Perform a one-level wavelet decomposition on each image block y'_{i',j'} and extract its diagonal component; find the median of the coefficient magnitudes in the diagonal component of each image block, and calculate each image block's noise standard deviation. Denote the median of the coefficient magnitudes in the wavelet diagonal component of y'_{i',j'} as m^Y_{i',j'}, and its noise standard deviation as σ^Y_{i',j'};
①-b. Calculate the difference between the noise standard deviations of each pair of co-located image blocks in X and Y; denote the noise standard deviation difference between the image blocks x'_{i',j'} and y'_{i',j'} at coordinate position (i', j') in X and Y as Δ_{i',j'}; then calculate the average of the noise standard deviation differences over all pairs of co-located image blocks in X and Y, denoted Δ̄;
①-c. Judge whether Δ̄ exceeds the white Gaussian noise distortion discrimination threshold Th_WN; if so, determine that the distortion type of Y is white Gaussian noise distortion, then finish; otherwise, execute step ①-d;
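Step ①-a derives a per-block noise standard deviation from the median of the diagonal (HH) wavelet coefficient magnitudes. The widely used robust estimator σ ≈ median(|HH|) / 0.6745 (Donoho's median absolute deviation rule) is the natural reading of the claim's unrecoverable formula, and is sketched here under that assumption, using a Haar decomposition:

```python
import numpy as np

def haar_diagonal(img):
    """Diagonal (HH) component of a one-level Haar wavelet decomposition."""
    d = (img[0::2, :] - img[1::2, :]) / np.sqrt(2.0)
    return (d[:, 0::2] - d[:, 1::2]) / np.sqrt(2.0)

def noise_std(img):
    """Robust noise estimate sigma = median(|HH|) / 0.6745 -- an assumed
    reading of the claim's missing formula."""
    return float(np.median(np.abs(haar_diagonal(img))) / 0.6745)

rng = np.random.default_rng(1)
noisy = rng.normal(0.0, 5.0, size=(256, 256))
est = noise_std(noisy)   # should recover roughly sigma = 5
```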
①-d. Calculate the luminance difference map of X, denoted X_H; denote the coefficient value at coordinate position (i'', j'') in X_H as X_H(i'', j''), X_H(i'', j'') = |X(i'', j'') − X(i'', j'' + 1)|, wherein 1 ≤ i'' ≤ H, 1 ≤ j'' ≤ W − 1, X(i'', j'') denotes the brightness value of the pixel at coordinate position (i'', j'') in X, X(i'', j'' + 1) denotes the brightness value of the pixel at coordinate position (i'', j'' + 1) in X, and the symbol "| |" denotes taking the absolute value;
Likewise, calculate the luminance difference map of Y, denoted Y_H; denote the coefficient value at coordinate position (i'', j'') in Y_H as Y_H(i'', j''), Y_H(i'', j'') = |Y(i'', j'') − Y(i'', j'' + 1)|, wherein 1 ≤ i'' ≤ H, 1 ≤ j'' ≤ W − 1, Y(i'', j'') denotes the brightness value of the pixel at coordinate position (i'', j'') in Y, and Y(i'', j'' + 1) denotes the brightness value of the pixel at coordinate position (i'', j'' + 1) in Y;
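The luminance difference map of ①-d is simply the absolute horizontal first difference, which vectorises to one line. A short sketch:

```python
import numpy as np

def luminance_diff(img):
    """Horizontal luminance difference map of step ①-d:
    X_H(i'', j'') = |X(i'', j'') - X(i'', j'' + 1)|, an H x (W-1) map."""
    return np.abs(img[:, :-1] - img[:, 1:])

img = np.array([[1.0, 4.0, 2.0],
                [3.0, 3.0, 9.0]])
dmap = luminance_diff(img)
```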
①-e. Partition the luminance difference map X_H of X into non-overlapping blocks of size 8 × 8, obtaining M'' × N'' non-overlapping image blocks of size 8 × 8; denote the image block at coordinate position (i''', j''') in X_H as X^H_{i''',j'''}. Define the block self-energy and block edge energy of the image block X^H_{i''',j'''}, where the block self-energy is computed from the coefficient values X^H_{i''',j'''}(p, q) at the interior coordinate positions (p, q) and the block edge energy is computed from the coefficient values X^H_{i''',j'''}(p, 8) on the block boundary column; wherein 1 ≤ i''' ≤ M'', 1 ≤ j''' ≤ N'', 1 ≤ p ≤ 8, 1 ≤ q ≤ 7;
Likewise, partition the luminance difference map Y_H of Y into non-overlapping blocks of size 8 × 8, obtaining M'' × N'' non-overlapping image blocks of size 8 × 8; denote the image block at coordinate position (i''', j''') in Y_H as Y^H_{i''',j'''}. Define the block self-energy and block edge energy of the image block Y^H_{i''',j'''}, computed in the same way from the coefficient values Y^H_{i''',j'''}(p, q) at the interior coordinate positions and the coefficient values Y^H_{i''',j'''}(p, 8) on the block boundary column;
①-f. Calculate the ratio between the block edge energy and the block self-energy of each image block in X_H; denote the ratio of the block edge energy to the block self-energy of the image block X^H_{i''',j'''} at coordinate position (i''', j''') in X_H as R^X(i''', j''');
Likewise, calculate the ratio between the block edge energy and the block self-energy of each image block in Y_H; denote the ratio of the block edge energy to the block self-energy of the image block Y^H_{i''',j'''} at coordinate position (i''', j''') in Y_H as R^Y(i''', j''');
Count the number of image blocks whose ratios satisfy the discrimination inequality, and denote this number N_0; define the discriminant criterion J from N_0;
①-g. Judge whether J > Th_JPEG holds; if so, determine that the distortion type of Y is JPEG distortion, then finish; otherwise, execute step ①-h; wherein Th_JPEG is the JPEG distortion discrimination threshold;
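The idea behind ①-e to ①-g is that JPEG blockiness concentrates difference energy on the 8-pixel block boundaries. The exact inequality and the formula for J are not recoverable from the claim, so the sketch below makes two labeled assumptions: energies are squared sums, and J is the fraction of tiles whose edge/self energy ratio exceeds a threshold:

```python
import numpy as np

def block_energy_ratio(dmap_block):
    """Ratio of block-edge energy (boundary column, q = 8) to block
    self-energy (interior columns, q = 1..7) for one 8x8 tile of the
    luminance difference map. Squared-sum energies are an assumption."""
    self_e = np.sum(dmap_block[:, :7] ** 2)
    edge_e = np.sum(dmap_block[:, 7] ** 2)
    return edge_e / (self_e + 1e-12)

def jpeg_criterion(dmap, thresh=1.0):
    """Fraction of 8x8 tiles whose edge/self ratio exceeds a threshold --
    one plausible reading of the discriminant J compared against
    Th_JPEG = 0.57 in step ①-g."""
    h, w = dmap.shape
    ratios = [
        block_energy_ratio(dmap[i:i + 8, j:j + 8])
        for i in range(0, h - 7, 8)
        for j in range(0, w - 7, 8)
    ]
    return float(np.mean([r > thresh for r in ratios]))

# Strong 8-pixel-period vertical edges mimic JPEG block boundaries.
blocky = np.zeros((16, 16))
blocky[:, 7] = 10.0        # all difference energy on one boundary column
j_val = jpeg_criterion(blocky)
```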
①-h. Determine that the distortion type of Y is blur-class distortion, i.e. the distortion type of Y is Gaussian blur distortion, JPEG2000 distortion, or fast fading distortion.
4. The adaptive image quality evaluation method based on distortion type judgment according to claim 3, characterized in that in step ①-c the white Gaussian noise distortion discrimination threshold Th_WN takes the value 0.8, and in step ①-g the JPEG distortion discrimination threshold Th_JPEG takes the value 0.57.
5. The adaptive image quality evaluation method based on distortion type judgment according to claim 2, characterized in that in step ④, C_1 = 0.01, C_2 = 0.02, C_3 = 0.01, α = 1, β = 1 and γ = 1.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310406821.0A CN103475897B (en) | 2013-09-09 | 2013-09-09 | Adaptive image quality evaluation method based on distortion type judgment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103475897A true CN103475897A (en) | 2013-12-25 |
CN103475897B CN103475897B (en) | 2015-03-11 |
Family
ID=49800574
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201310406821.0A Expired - Fee Related CN103475897B (en) | 2013-09-09 | 2013-09-09 | Adaptive image quality evaluation method based on distortion type judgment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103475897B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102209257A (en) * | 2011-06-17 | 2011-10-05 | 宁波大学 | Stereo image quality objective evaluation method |
CN102333233A (en) * | 2011-09-23 | 2012-01-25 | 宁波大学 | Stereo image quality objective evaluation method based on visual perception |
CN102421007A (en) * | 2011-11-28 | 2012-04-18 | 浙江大学 | Image quality evaluating method based on multi-scale structure similarity weighted aggregate |
CN102945552A (en) * | 2012-10-22 | 2013-02-27 | 西安电子科技大学 | No-reference image quality evaluation method based on sparse representation in natural scene statistics |
CN102982532A (en) * | 2012-10-31 | 2013-03-20 | 宁波大学 | Stereo image objective quality evaluation method base on matrix decomposition |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104123723A (en) * | 2014-07-08 | 2014-10-29 | 上海交通大学 | Structure compensation based image quality evaluation method |
CN104918039A (en) * | 2015-05-05 | 2015-09-16 | 四川九洲电器集团有限责任公司 | Image quality evaluation method and image quality evaluation system |
CN105894522B (en) * | 2016-04-28 | 2018-05-25 | 宁波大学 | A kind of more distortion objective evaluation method for quality of stereo images |
CN106412569A (en) * | 2016-09-28 | 2017-02-15 | 宁波大学 | No-reference multi-distortion three-dimensional image quality evaluation method based on feature selection |
CN106412569B (en) * | 2016-09-28 | 2017-12-15 | 宁波大学 | A kind of selection of feature based without referring to more distortion stereo image quality evaluation methods |
CN106778917A (en) * | 2017-01-24 | 2017-05-31 | 北京理工大学 | Based on small echo statistical nature without reference noise image quality evaluating method |
CN108664839A (en) * | 2017-03-27 | 2018-10-16 | 北京三星通信技术研究有限公司 | A kind of image processing method and equipment |
CN108664839B (en) * | 2017-03-27 | 2024-01-12 | 北京三星通信技术研究有限公司 | Image processing method and device |
CN107770517A (en) * | 2017-10-24 | 2018-03-06 | 天津大学 | Full reference image quality appraisement method based on image fault type |
CN110415207A (en) * | 2019-04-30 | 2019-11-05 | 杭州电子科技大学 | A method of the image quality measure based on image fault type |
CN111179242A (en) * | 2019-12-25 | 2020-05-19 | Tcl华星光电技术有限公司 | Image processing method and device |
CN111179242B (en) * | 2019-12-25 | 2023-06-02 | Tcl华星光电技术有限公司 | Image processing method and device |
Also Published As
Publication number | Publication date |
---|---|
CN103475897B (en) | 2015-03-11 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | | Granted publication date: 20150311; Termination date: 20190909 |