CN103338380A - Adaptive image quality objective evaluation method - Google Patents

Adaptive image quality objective evaluation method

Publication number: CN103338380A; granted as CN103338380B
Application number: CN201310226269.7A
Authority: CN (China)
Other languages: Chinese (zh)
Inventors: 蒋刚毅, 靳鑫, 郁梅, 王晓东, 邵枫, 彭宗举, 陈芬, 李福翠
Original assignee: Ningbo University (application filed by Ningbo University)
Current assignee: Hefei Shumei Information Technology Co., Ltd. (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Legal status: Granted; Expired - Fee Related (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)

Classifications

  • Image Processing (AREA)
Abstract

The invention discloses an adaptive objective image quality evaluation method. When the method computes the structural similarity between two image blocks at the same coordinate position in the original undistorted image and in the distorted image to be evaluated, it uses not only the luminance mean and standard deviation of all pixels in each image block and the covariance between the pixels of the two blocks, but also the distortion type of the distorted image to be evaluated. The method can therefore determine the structural similarity between two image blocks adaptively, according to the distortion type of the distorted image, which improves the consistency between the objective evaluation result and the subjective perception of image quality.

Description

Adaptive objective image quality evaluation method
Technical field
The present invention relates to image quality evaluation technology, and in particular to an adaptive objective image quality evaluation method.
Background technology
Images are an important channel through which humans obtain information, and image quality directly affects the accuracy and completeness of that information. Yet during acquisition, processing, storage and transmission, images inevitably suffer distortion to varying degrees, and measuring the loss of image information caused by these distortions has become a hot research topic. The earliest and most widely used objective image quality metrics are the peak signal-to-noise ratio and the sum of absolute differences. Both are pixel-level methods that ignore the correlation between pixels, so their objective results often differ greatly from human visual perception. Two broad classes of evaluation methods incorporating perceptual characteristics of the human eye have therefore been proposed. The first class is based on the perception mechanisms of the human visual system (HVS, human visual system); because these mechanisms are not yet fully understood, visual perception cannot be simulated accurately with mathematical models when an actual algorithm is built, which has limited the development of this class. The second class measures image quality by structural similarity (SSIM, structure similarity); it holds that when observing an image the human eye actively extracts the image's structural information and perceives the image through it, so the quality of a distorted image can be estimated by measuring the degree of change of that structural information. These methods have low computational complexity and are easy to implement, so they have developed rapidly. Judging from research results at home and abroad, SSIM-based objective evaluation methods fall into two classes: the first proposes new evaluation factors, and the second weights different regions of the image. Neither class, however, adjusts the weights among the evaluation factors within the SSIM model itself, and this inevitably affects the accuracy of SSIM when evaluating distorted images of different distortion types.
Summary of the invention
The technical problem to be solved by the invention is to provide an adaptive objective image quality evaluation method that can effectively improve the consistency between the objective image quality evaluation result and subjective perception.
The technical scheme adopted by the invention to solve the above technical problem is an adaptive objective image quality evaluation method whose processing procedure is as follows.
First, determine the distortion type of the distorted image to be evaluated, and divide the original undistorted image and the distorted image to be evaluated each into a number of overlapping image blocks of size 8 × 8.
Second, compute the luminance mean and standard deviation of all pixels in each image block of the original undistorted image and of the distorted image to be evaluated, and the covariance between the pixels of every pair of blocks at the same coordinate position in the two images; combining these with the distortion type of the distorted image to be evaluated, obtain the structural similarity between every pair of co-located image blocks.
Finally, from the structural similarities of all co-located block pairs, obtain the objective quality score of the distorted image to be evaluated.
The adaptive objective image quality evaluation method of the invention is characterized in that it specifically comprises the following steps.
Step 1: Let X denote the original undistorted image and Y the distorted image to be evaluated, and determine the distortion type of Y by the distortion-type discrimination method. The distortion type of Y is one of white Gaussian noise distortion, JPEG distortion, Gaussian blur distortion and JPEG2000-like distortion, where JPEG2000-like distortion covers both JPEG2000 distortion and fast-fading distortion.
Step 2: Slide a window of size 8 × 8 over X pixel by pixel, dividing X into M × N overlapping image blocks of size 8 × 8, and denote the block of X at coordinate position (i, j) as x_{i,j}. Likewise slide an 8 × 8 window over Y pixel by pixel, dividing Y into M × N overlapping 8 × 8 blocks, and denote the block of Y at (i, j) as y_{i,j}. Here M = H − 7 and N = W − 7, where H is the height of X and Y and W their width, 1 ≤ i ≤ M, 1 ≤ j ≤ N.
Step 3: Compute the luminance mean and standard deviation of all pixels in each image block of X and of Y, and then the covariance between the pixels of every pair of co-located blocks of X and Y. Denote the luminance mean and standard deviation of the pixels of the block x_{i,j} of X as μ_{x_{i,j}} and σ_{x_{i,j}}, those of the block y_{i,j} of Y as μ_{y_{i,j}} and σ_{y_{i,j}}, and the covariance between the pixels of x_{i,j} and y_{i,j} as σ_{x_{i,j}y_{i,j}}:

μ_{x_{i,j}} = (1/64) Σ_{u=1}^{8} Σ_{v=1}^{8} x_{i,j}(u,v),   σ_{x_{i,j}} = √[ (1/64) Σ_{u=1}^{8} Σ_{v=1}^{8} (x_{i,j}(u,v) − μ_{x_{i,j}})² ],
μ_{y_{i,j}} = (1/64) Σ_{u=1}^{8} Σ_{v=1}^{8} y_{i,j}(u,v),   σ_{y_{i,j}} = √[ (1/64) Σ_{u=1}^{8} Σ_{v=1}^{8} (y_{i,j}(u,v) − μ_{y_{i,j}})² ],
σ_{x_{i,j}y_{i,j}} = (1/64) Σ_{u=1}^{8} Σ_{v=1}^{8} (x_{i,j}(u,v) − μ_{x_{i,j}})(y_{i,j}(u,v) − μ_{y_{i,j}}),

where x_{i,j}(u,v) denotes the luminance of the pixel at coordinate position (u,v) in x_{i,j}, y_{i,j}(u,v) that at (u,v) in y_{i,j}, 1 ≤ u ≤ 8, 1 ≤ v ≤ 8.
Step 4: Compute the luminance function, contrast function and structure function between every pair of co-located blocks of X and Y, denoted l(x_{i,j}, y_{i,j}), c(x_{i,j}, y_{i,j}) and s(x_{i,j}, y_{i,j}) for the blocks x_{i,j} and y_{i,j}:

l(x_{i,j}, y_{i,j}) = (2 μ_{x_{i,j}} μ_{y_{i,j}} + C1) / (μ_{x_{i,j}}² + μ_{y_{i,j}}² + C1),
c(x_{i,j}, y_{i,j}) = (2 σ_{x_{i,j}} σ_{y_{i,j}} + C2) / (σ_{x_{i,j}}² + σ_{y_{i,j}}² + C2),
s(x_{i,j}, y_{i,j}) = (σ_{x_{i,j}y_{i,j}} + C3) / (σ_{x_{i,j}} σ_{y_{i,j}} + C3),

where C1, C2 and C3 are small numerical constants set to keep the denominators from being zero.
Step 5: From the luminance, contrast and structure functions of every pair of co-located blocks of X and Y, compute the structural similarity between them; for the blocks x_{i,j} and y_{i,j} it is denoted SSIM(x_{i,j}, y_{i,j}):

SSIM(x_{i,j}, y_{i,j}) = [l(x_{i,j}, y_{i,j})]^{α_t} [c(x_{i,j}, y_{i,j})]^{β_t} [s(x_{i,j}, y_{i,j})]^{γ_t},

where t denotes the distortion type of Y: t = 1 for white Gaussian noise distortion, t = 2 for JPEG distortion, t = 3 for Gaussian blur distortion, and t = 4 for JPEG2000-like distortion; α_t is the weight factor regulating luminance, β_t the weight factor regulating contrast, and γ_t the weight factor regulating structure, satisfying α_t + β_t + γ_t = 3.
Step 6: From the structural similarities of all co-located block pairs of X and Y, compute the objective quality score of Y, denoted Q: Q = (1/(M × N)) Σ_{i=1}^{M} Σ_{j=1}^{N} SSIM(x_{i,j}, y_{i,j}).
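As an illustration of steps 2 through 6, a minimal NumPy sketch follows. It assumes X and Y are grayscale luminance images given as 2-D float arrays of the same size and that the exponent triple (alpha, beta, gamma) has already been selected from the distortion type of Y as in step 5; the function name and signature are illustrative, not part of the patent.

```python
import numpy as np

def adaptive_ssim(X, Y, alpha, beta, gamma, C1=0.01, C2=0.02, C3=0.01):
    """Objective quality score Q of steps 2-6 for one distortion type."""
    H, W = X.shape
    M, N = H - 7, W - 7                  # 8x8 windows sliding one pixel at a time
    scores = np.empty((M, N))
    for i in range(M):
        for j in range(N):
            x = X[i:i + 8, j:j + 8]
            y = Y[i:i + 8, j:j + 8]
            mx, my = x.mean(), y.mean()
            sx, sy = x.std(), y.std()    # population statistics, i.e. divide by 64
            sxy = ((x - mx) * (y - my)).mean()
            l = (2 * mx * my + C1) / (mx ** 2 + my ** 2 + C1)
            c = (2 * sx * sy + C2) / (sx ** 2 + sy ** 2 + C2)
            # s can be negative for anti-correlated blocks; the patent's
            # fractional exponents implicitly assume s > 0 there.
            s = (sxy + C3) / (sx * sy + C3)
            scores[i, j] = (l ** alpha) * (c ** beta) * (s ** gamma)
    return scores.mean()                 # Q: the mean over all M*N block pairs
```

A vectorized version would precompute the block statistics with a uniform filter, but the loop form mirrors the step-by-step description above.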
The detailed process by which step 1 determines the distortion type of Y through the distortion-type discrimination method is as follows.
Step 1-a: Apply a two-dimensional wavelet decomposition to X to obtain its approximation component X_A, horizontal component X_H^1 and vertical component X_V^1, and compute the wavelet total energy of X_H^1 and X_V^1, denoted W_X^1:

W_X^1 = [ Σ_{m'=1}^{M'_1} Σ_{n'=1}^{N'_1} log2(1 + |X_H^1(m',n')|) + Σ_{m'=1}^{M'_1} Σ_{n'=1}^{N'_1} log2(1 + |X_V^1(m',n')|) ] / (2 × M'_1 × N'_1).

Then apply a further two-dimensional wavelet decomposition to the approximation component X_A of X to obtain its subband horizontal component X_H^2 and subband vertical component X_V^2, and compute the wavelet total energy of X_H^2 and X_V^2, denoted W_X^2:

W_X^2 = [ Σ_{m''=1}^{M'_2} Σ_{n''=1}^{N'_2} log2(1 + |X_H^2(m'',n'')|) + Σ_{m''=1}^{M'_2} Σ_{n''=1}^{N'_2} log2(1 + |X_V^2(m'',n'')|) ] / (2 × M'_2 × N'_2).

Here 1 ≤ m' ≤ M'_1 and 1 ≤ n' ≤ N'_1, where M'_1 and N'_1 are the height and width of X_H^1 and X_V^1, and X_H^1(m',n') and X_V^1(m',n') are the coefficient values at coordinate position (m',n') in X_H^1 and X_V^1; similarly 1 ≤ m'' ≤ M'_2 and 1 ≤ n'' ≤ N'_2, where M'_2 and N'_2 are the height and width of X_H^2 and X_V^2, and X_H^2(m'',n'') and X_V^2(m'',n'') are the coefficient values at (m'',n'') in X_H^2 and X_V^2.
Likewise, apply a two-dimensional wavelet decomposition to Y to obtain its approximation component Y_A, horizontal component Y_H^1 and vertical component Y_V^1, and compute their wavelet total energy, denoted W_Y^1:

W_Y^1 = [ Σ_{m'=1}^{M'_1} Σ_{n'=1}^{N'_1} log2(1 + |Y_H^1(m',n')|) + Σ_{m'=1}^{M'_1} Σ_{n'=1}^{N'_1} log2(1 + |Y_V^1(m',n')|) ] / (2 × M'_1 × N'_1).

Then apply a further two-dimensional wavelet decomposition to the approximation component Y_A of Y to obtain its subband horizontal component Y_H^2 and subband vertical component Y_V^2, and compute their wavelet total energy, denoted W_Y^2:

W_Y^2 = [ Σ_{m''=1}^{M'_2} Σ_{n''=1}^{N'_2} log2(1 + |Y_H^2(m'',n'')|) + Σ_{m''=1}^{M'_2} Σ_{n''=1}^{N'_2} log2(1 + |Y_V^2(m'',n'')|) ] / (2 × M'_2 × N'_2).

Here the index ranges are as above: M'_1 and N'_1 are the height and width of Y_H^1 and Y_V^1 (which coincide with those of X_H^1 and X_V^1), M'_2 and N'_2 are the height and width of Y_H^2 and Y_V^2 (which coincide with those of X_H^2 and X_V^2), and Y_H^1(m',n'), Y_V^1(m',n'), Y_H^2(m'',n'') and Y_V^2(m'',n'') are the corresponding coefficient values.
Then compute the energy difference between the wavelet total energy corresponding to X and that corresponding to Y, denoted ΔW: ΔW = (W_X^1 + W_X^2) − (W_Y^1 + W_Y^2).
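The wavelet total energies of step 1-a can be sketched with PyWavelets, assuming the bior4.4 wavelet named later in the embodiment; pywt.dwt2 returns the approximation plus the horizontal, vertical and diagonal detail subbands, of which the diagonal one is not used here.

```python
import numpy as np
import pywt

def wavelet_energy(img):
    """W^1 + W^2 of step 1-a for one image (2-D float array)."""
    cA, (cH1, cV1, _) = pywt.dwt2(img, 'bior4.4')   # first decomposition
    W1 = (np.log2(1 + np.abs(cH1)).sum() +
          np.log2(1 + np.abs(cV1)).sum()) / (2 * cH1.size)
    _, (cH2, cV2, _) = pywt.dwt2(cA, 'bior4.4')     # decompose the approximation again
    W2 = (np.log2(1 + np.abs(cH2)).sum() +
          np.log2(1 + np.abs(cV2)).sum()) / (2 * cH2.size)
    return W1 + W2

# Step 1-b then reduces to:
# delta_W = wavelet_energy(X) - wavelet_energy(Y)
# is_white_noise = delta_W < -0.1   # noise adds detail energy, so delta_W goes negative
```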
Step 1-b: Judge whether ΔW < Th_wn holds. If it does, the distortion type of Y is white Gaussian noise distortion and the procedure ends; otherwise go to step 1-c. Here Th_wn is the white-Gaussian-noise discrimination threshold.
Step 1-c: Compute the luminance difference map of X, denoted X_h; its value at coordinate position (x, y) is X_h(x, y) = |X(x, y) − X(x, y+1)|, where 1 ≤ x ≤ H and 1 ≤ y ≤ W − 1, X(x, y) denotes the luminance of the pixel at (x, y) in X, X(x, y+1) that at (x, y+1), and |·| takes the absolute value.
Likewise compute the luminance difference map of Y, denoted Y_h, with Y_h(x, y) = |Y(x, y) − Y(x, y+1)|, where Y(x, y) and Y(x, y+1) denote the luminances of the pixels at (x, y) and (x, y+1) in Y.
Step 1-d: Apply an N_h-point discrete Fourier transform (DFT) to all pixels of each row of X_h, obtaining the Fourier energy spectrum of each row of X_h; denote the spectrum of row x as P_X^x, with P_X^x(l) = log2(1 + |X_f^x(l)|), where N_h = 2^⌈log2(W−1)⌉ (⌈·⌉ is the rounding-up symbol), l ∈ [1, N_h/2 + 1], X_f^x(l) is the l-th DFT coefficient value obtained from the N_h-point DFT of row x of X_h, and |·| takes the absolute value.
Likewise apply an N_h-point DFT to all pixels of each row of Y_h, obtaining the Fourier energy spectrum of each row of Y_h; denote the spectrum of row x as P_Y^x, where Y_f^x(l) is the l-th DFT coefficient value of row x of Y_h.
Step 1-e: From the Fourier energy spectra of the rows of X_h, compute the overall Fourier energy spectrum of X_h, denoted P_X, as the average of the row spectra, P_X(l) = (1/H) Σ_{x=1}^{H} P_X^x(l); likewise compute the overall Fourier energy spectrum P_Y of Y_h, with l ∈ [1, N_h/2 + 1]. Then compute the Fourier energy difference of P_X and P_Y, denoted ΔP:

ΔP = Σ_{k=1}^{4} [ P_Y(k × N_h/8 + 1) − P_X(k × N_h/8 + 1) ].
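Steps 1-c to 1-e can be sketched as follows. The power-of-two DFT length and the log2(1 + |·|) form of the row spectra are assumptions (the exact expressions survive only as figure references in the source text), chosen to match the rounding-up operator mentioned in step 1-d and the energy measure of step 1-a.

```python
import numpy as np

def fourier_energy_spectrum(img):
    """Overall Fourier energy spectrum of the luminance difference map."""
    D = np.abs(np.diff(img.astype(float), axis=1))  # X_h(x,y) = |X(x,y) - X(x,y+1)|
    H, Wm1 = D.shape
    Nh = 1 << int(np.ceil(np.log2(Wm1)))            # DFT length N_h (next power of two)
    F = np.abs(np.fft.rfft(D, n=Nh, axis=1))        # per-row magnitudes, N_h/2+1 bins
    return np.log2(1 + F).mean(axis=0), Nh          # average the H row spectra

def fourier_energy_difference(X, Y):
    """Delta P of step 1-e; JPEG blockiness peaks at multiples of 1/8 cycle/pixel."""
    PX, Nh = fourier_energy_spectrum(X)
    PY, _ = fourier_energy_spectrum(Y)
    idx = [k * Nh // 8 for k in range(1, 5)]        # 0-based k*N_h/8 <-> l = k*N_h/8 + 1
    return sum(PY[i] - PX[i] for i in idx)
```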
Step 1-f: Determine the distortion type of Y from ΔP and the first and second JPEG judgment thresholds Th_jpeg1 and Th_jpeg2. If ΔP ≥ Th_jpeg1, the distortion type of Y is JPEG distortion and the procedure ends. If Th_jpeg2 ≤ ΔP < Th_jpeg1, the distortion type of Y cannot yet be determined; execute step 1-g. If ΔP < Th_jpeg2, the distortion type of Y is Gaussian blur distortion or JPEG2000-like distortion; execute step 1-h.
Step 1-g: Compute the percentage by which the wavelet total energy of Y is reduced relative to that of X, denoted ΔW': ΔW' = [(W_X^1 + W_X^2) − (W_Y^1 + W_Y^2)] / (W_X^1 + W_X^2). Then judge whether ΔW' < Th_jpeg3 holds. If it does, the distortion type of Y is JPEG distortion and the procedure ends; otherwise execute step 1-h. Here Th_jpeg3 is the third JPEG judgment threshold.
Step 1-h: Apply the Sobel operator to the R, G and B color-channel subimages of X to extract their edges, then find the pixel of maximum brightness in the edge map of each color-channel subimage of X; denote the brightness of that pixel for the c-th channel of X as g_X^c, where c = R, G, B. Likewise apply the Sobel operator to the R, G and B channel subimages of Y and denote the maximum edge brightness of the c-th channel of Y as g_Y^c. Then judge whether g_Y^c < g_X^c holds for all three channels (blurring weakens edges, so a blurred Y cannot contain stronger edges than X). If all hold, execute step 1-i; otherwise the distortion type of Y is JPEG2000-like distortion and the procedure ends.
Step 1-i: Convert X from the RGB color space to the HSI color space, extract the chroma (hue) component of X, denoted X_H, and compute its average chroma value X̄_H = (1/(H × W)) Σ_{m=1}^{H} Σ_{n=1}^{W} X_H(m, n), where X_H(m, n) denotes the chroma value at coordinate position (m, n) of X_H.
Likewise convert Y from the RGB color space to the HSI color space, extract its chroma component Y_H, and compute the average chroma value Ȳ_H, where Y_H(m, n) denotes the chroma value at (m, n) of Y_H.
Afterwards compute the chroma relative difference of X and Y, denoted C, C = |X̄_H − Ȳ_H| / X̄_H. Then judge whether C < 0.3 holds. If it does, execute step 1-j; otherwise the distortion type of Y is JPEG2000-like distortion and the procedure ends.
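The chroma comparison of step 1-i can be sketched with the standard RGB-to-HSI hue formula; the relative-difference form of C below is an assumption consistent with the phrase "chroma relative difference", the exact expression again surviving only as a figure reference in the source.

```python
import numpy as np

def hsi_hue(rgb):
    """Hue component of the HSI space; rgb is a float array in [0,1], shape (H, W, 3)."""
    R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    num = 0.5 * ((R - G) + (R - B))
    den = np.sqrt((R - G) ** 2 + (R - B) * (G - B)) + 1e-12
    theta = np.arccos(np.clip(num / den, -1.0, 1.0))
    return np.where(B <= G, theta, 2 * np.pi - theta)

def chroma_relative_difference(rgb_x, rgb_y):
    """C of step 1-i, compared against the 0.3 threshold."""
    hx, hy = hsi_hue(rgb_x).mean(), hsi_hue(rgb_y).mean()
    return abs(hx - hy) / (hx + 1e-12)
```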
Step 1-j: Apply a two-dimensional DFT to the R, G and B color-channel subimages of X, obtaining the amplitude spectra of the R, G and B channel subimages of X, denoted F_X^R, F_X^G and F_X^B. Likewise apply a two-dimensional DFT to the R, G and B channel subimages of Y, obtaining the amplitude spectra F_Y^R, F_Y^G and F_Y^B.
Then compute the frequency response of each color-channel subimage; the frequency response of the c-th channel, denoted G^c, has DFT coefficient values G^c(m, n) = F_Y^c(m, n) / F_X^c(m, n), where F_X^c(m, n) and F_Y^c(m, n) denote the DFT coefficient values at coordinate position (m, n) of F_X^c and F_Y^c.
Then extract the vertical zero-frequency component of the frequency response of each color-channel subimage; the vertical zero-frequency component of G^c is denoted G_H^c, and its m-th DFT coefficient value is G_H^c(m) = G^c(m, 1), the coefficient of G^c at coordinate position (m, 1).
Afterwards move the zero frequency of each vertical zero-frequency component to the center, obtaining a new vertical zero-frequency component; that of the c-th channel is denoted G'_H^c, with m-th DFT coefficient value G'_H^c(m) = G_H^c(((m + ⌊H/2⌋ − 1) mod H) + 1), a circular shift by ⌊H/2⌋ samples, where ⌊·⌋ is the rounding-down symbol.
Again, from the new vertical zero-frequency component of each channel's frequency response, extract the frequency points within ±⌊H/8⌋ of the center point; apply a median filter of window width 3 to the extracted points of each color channel; fit the filtered points with a one-dimensional Gaussian function; and compute the Pearson correlation coefficient between the filtered points and the fitted data. Specifically, for G'_H^c: extract the points of G'_H^c within ±⌊H/8⌋ of the center point; apply the width-3 median filter to them, obtaining the filtered frequency points, denoted G''_H^c; fit G''_H^c with a one-dimensional Gaussian function, obtaining the fitted data, denoted G'''_H^c; and compute the Pearson correlation coefficient between G''_H^c and G'''_H^c, denoted Pe_c:

Pe_c = Σ_{d=1}^{D} (G''_H^c(d) − Ḡ''_c)(G'''_H^c(d) − Ḡ'''_c) / √[ Σ_{d=1}^{D} (G''_H^c(d) − Ḡ''_c)² × Σ_{d=1}^{D} (G'''_H^c(d) − Ḡ'''_c)² ],

where D = H/4 + 1 is the number of extracted frequency points, Ḡ''_c = (1/D) Σ_{d=1}^{D} G''_H^c(d) and Ḡ'''_c = (1/D) Σ_{d=1}^{D} G'''_H^c(d) are the means of the filtered and fitted data, G''_H^c(d) is the d-th filtered DFT coefficient value, and G'''_H^c(d) is the d-th fitted value.
Finally, judge whether Pe_R ≤ 0.9, Pe_G ≤ 0.9 and Pe_B ≤ 0.9 all hold. If they all hold, the distortion type of Y is JPEG2000-like distortion; otherwise it is Gaussian blur distortion. The procedure then ends.
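The whole decision flow of step 1 (Fig. 2) can then be sketched as one routing function. It takes the per-step statistics as precomputed inputs so that the branching logic is visible in one place; the argument names are hypothetical and the thresholds are the embodiment's values.

```python
TH_WN, TH_JPEG1, TH_JPEG2, TH_JPEG3 = -0.1, 0.1, -0.2, 0.075

def classify_distortion(delta_W, delta_P, delta_W_rel, edges_weaker, C, pearsons):
    """delta_W:      wavelet energy difference of step 1-a
    delta_P:      Fourier energy difference of step 1-e
    delta_W_rel:  relative wavelet energy drop of step 1-g
    edges_weaker: True if Y's maximum Sobel edge brightness is below X's
                  in every color channel (step 1-h)
    C:            chroma relative difference of step 1-i
    pearsons:     (Pe_R, Pe_G, Pe_B) from the Gaussian fit of step 1-j"""
    if delta_W < TH_WN:                                   # step 1-b
        return 'white_noise'
    if delta_P >= TH_JPEG1:                               # step 1-f
        return 'jpeg'
    if TH_JPEG2 <= delta_P < TH_JPEG1 and delta_W_rel < TH_JPEG3:
        return 'jpeg'                                     # step 1-g
    if not edges_weaker:                                  # step 1-h
        return 'jpeg2000_like'
    if C >= 0.3:                                          # step 1-i
        return 'jpeg2000_like'
    if all(pe <= 0.9 for pe in pearsons):                 # step 1-j
        return 'jpeg2000_like'
    return 'gaussian_blur'
```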
In step 1-b, the white-Gaussian-noise discrimination threshold Th_wn takes the value −0.1. In step 1-f, the first JPEG judgment threshold Th_jpeg1 takes the value 0.1 and the second JPEG judgment threshold Th_jpeg2 takes the value −0.2. In step 1-g, the third JPEG judgment threshold Th_jpeg3 takes the value 0.075.
In step 4, C1 = 0.01, C2 = 0.02 and C3 = 0.01.
In step 5, the values of α_t, β_t and γ_t are determined by the distortion type of Y: for white Gaussian noise distortion, α_t = 2.5, β_t = 0.1, γ_t = 0.4; for JPEG distortion, α_t = 1.3, β_t = 1.6, γ_t = 0.1; for Gaussian blur distortion, α_t = 2.9, β_t = 0, γ_t = 0.1; for JPEG2000-like distortion, α_t = 0.8, β_t = 2, γ_t = 0.2.
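For reference, the four exponent triples can be kept in a table keyed by the labels used in the sketches above; each triple sums to 3 as step 5 requires.

```python
SSIM_WEIGHTS = {                     # (alpha_t, beta_t, gamma_t)
    'white_noise':   (2.5, 0.1, 0.4),
    'jpeg':          (1.3, 1.6, 0.1),
    'gaussian_blur': (2.9, 0.0, 0.1),
    'jpeg2000_like': (0.8, 2.0, 0.2),
}
# e.g. Q = adaptive_ssim(X, Y, *SSIM_WEIGHTS[distortion_type])
```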
Compared with the prior art, the invention has the following advantages:
1) When obtaining the structural similarity between two co-located image blocks of the original undistorted image and the distorted image to be evaluated, the method uses not only the luminance mean and standard deviation of all pixels in each block and the covariance between the pixels of the two blocks, but also the distortion type of the distorted image to be evaluated. The structural similarity of a block pair is thus determined adaptively, according to the distortion type, which improves the consistency between the objective evaluation result and subjective perception of image quality.
2) The method adaptively adjusts the weight ratio of luminance, contrast and structure in the block structural similarity for each distortion type, so each distorted image is evaluated with the weighting best suited to it. Its computational complexity is low, and the discriminating power of structural similarity for the different distortion types is fully exploited, effectively improving the consistency between objective results and subjective perception.
3) In discriminating the distortion type of a distorted image, the method combines the distortion characteristics an image shows under white Gaussian noise distortion, under JPEG distortion, and under Gaussian blur distortion, and decides the distortion type when the original reference image is available. The discrimination procedure is highly portable and can be used in any situation where one needs to judge whether an image suffers from one of these three distortion types.
Description of drawings
Fig. 1 is the overall realization block diagram of the method of the invention;
Fig. 2 is the distortion-type decision flow chart for distorted images in the method of the invention;
Fig. 3a shows the relation between the threshold involved in deciding that a distorted image has white Gaussian noise distortion and the rate of correct judgment;
Fig. 3b shows the relation between the threshold involved in deciding that a distorted image has JPEG distortion and the rate of correct judgment;
Fig. 3c shows the relation between the Fourier energy difference and the wavelet subband energy difference of the reference image and the distorted image in the method of the invention.
Embodiment
The invention is described in further detail below in conjunction with the accompanying drawings and an embodiment.
The adaptive objective image quality evaluation method proposed by the invention proceeds as follows.
First, determine the distortion type of the distorted image to be evaluated, and divide the original undistorted image and the distorted image to be evaluated each into a number of overlapping image blocks of size 8 × 8.
Second, compute the luminance mean and standard deviation of all pixels in each image block of the original undistorted image and of the distorted image to be evaluated, and the covariance between the pixels of every pair of co-located blocks; combining these with the distortion type of the distorted image, obtain the structural similarity between every pair of co-located image blocks.
Finally, from the structural similarities of all co-located block pairs, obtain the objective quality score of the distorted image to be evaluated.
The overall realization block diagram of the adaptive objective image quality evaluation method of the invention is shown in Fig. 1. The method specifically comprises the following steps.
Step 1: Let X denote the original undistorted image and Y the distorted image to be evaluated, and determine the distortion type of Y by the distortion-type discrimination method. The distortion type of Y is one of white Gaussian noise distortion, JPEG distortion, Gaussian blur distortion and JPEG2000-like distortion.
At present the distortion types of images generally fall into four classes: white Gaussian noise distortion (WN, white noise), JPEG distortion (JPEG), Gaussian blur distortion (Gblur, Gaussian blur) and JPEG2000-like distortion, where JPEG2000-like distortion covers both JPEG2000 distortion and fast-fading distortion (FF, fast fading). Here the distortion-type discrimination method determines which of these types Y has.
In this embodiment, as shown in Fig. 2, the detailed process by which step 1 determines the distortion type of Y through the distortion-type discrimination method is as follows.
Step 1-a: Apply a two-dimensional wavelet decomposition (bior4.4 wavelet) to X to obtain its approximation component X_A, horizontal component X_H^1 and vertical component X_V^1, and compute the wavelet total energy W_X^1 of X_H^1 and X_V^1 as in step 1-a above. Then apply a further two-dimensional wavelet decomposition (bior4.4 wavelet) to the approximation component X_A to obtain the subband horizontal component X_H^2 and subband vertical component X_V^2, and compute their wavelet total energy W_X^2. The index ranges and the dimensions M'_1, N'_1, M'_2, N'_2 are as defined above.
Likewise, apply a two-dimensional wavelet decomposition (bior4.4 wavelet) to Y to obtain its approximation component Y_A, horizontal component Y_H^1 and vertical component Y_V^1, and compute their wavelet total energy W_Y^1; then apply a further two-dimensional wavelet decomposition (bior4.4 wavelet) to Y_A to obtain the subband horizontal component Y_H^2 and subband vertical component Y_V^2, and compute their wavelet total energy W_Y^2, both as in step 1-a above. The heights and widths of Y_H^1 and Y_V^1 coincide with those of X_H^1 and X_V^1 (M'_1 and N'_1), and the heights and widths of Y_H^2 and Y_V^2 coincide with those of X_H^2 and X_V^2 (M'_2 and N'_2).
Then compute the energy difference between the wavelet total energy corresponding to X and that corresponding to Y: ΔW = (W_X^1 + W_X^2) − (W_Y^1 + W_Y^2).
Step 1-b: Judge whether ΔW < Th_wn holds. If it does, the distortion type of Y is white Gaussian noise distortion and the procedure ends; otherwise go to step 1-c. Th_wn is the white-Gaussian-noise discrimination threshold; in this embodiment Th_wn = −0.1.
Step 1-c: Compute the luminance difference map X_h of X, with X_h(x, y) = |X(x, y) − X(x, y+1)| for 1 ≤ x ≤ H and 1 ≤ y ≤ W − 1, and likewise the luminance difference map Y_h of Y, with Y_h(x, y) = |Y(x, y) − Y(x, y+1)|, as in step 1-c above.
Step 1-d: Apply an N_h-point discrete Fourier transform (DFT, Discrete Fourier Transform) to all pixels of each row of X_h, obtaining the Fourier energy spectrum of each row of X_h; the spectrum of row x, denoted P_X^x, is computed as in step 1-d above, with N_h = 2^⌈log2(W−1)⌉ (⌈·⌉ rounds up), l ∈ [1, N_h/2 + 1], and X_f^x(l) the l-th DFT coefficient value obtained from the N_h-point DFT of row x of X_h. Likewise apply an N_h-point DFT to each row of Y_h, where Y_f^x(l) denotes the l-th DFT coefficient value of row x of Y_h.
Step 1-e: From the Fourier energy spectra of the rows of X_h, compute the overall Fourier energy spectrum P_X of X_h as the average of the row spectra, and likewise the overall spectrum P_Y of Y_h, with l ∈ [1, N_h/2 + 1]; then compute the Fourier energy difference of P_X and P_Y: ΔP = Σ_{k=1}^{4} [ P_Y(k × N_h/8 + 1) − P_X(k × N_h/8 + 1) ].
Step 1-f: Determine the distortion type of Y from ΔP and the first and second JPEG judgment thresholds Th_jpeg1 and Th_jpeg2. If ΔP ≥ Th_jpeg1, the distortion type of Y is JPEG distortion and the procedure ends; if Th_jpeg2 ≤ ΔP < Th_jpeg1, the distortion type of Y cannot yet be determined and step 1-g is executed; if ΔP < Th_jpeg2, the distortion type of Y is Gaussian blur distortion or JPEG2000-like distortion and step 1-h is executed. In this embodiment Th_jpeg1 = 0.1 and Th_jpeg2 = −0.2.
Step 1-g: Compute the percentage by which the wavelet total energy of Y is reduced relative to that of X, denoted ΔW': ΔW' = [(W_X^1 + W_X^2) − (W_Y^1 + W_Y^2)] / (W_X^1 + W_X^2). Then judge whether ΔW' < Th_jpeg3 holds. If it does, the distortion type of Y is JPEG distortion and the procedure ends; otherwise step 1-h is executed. Th_jpeg3 is the third JPEG judgment threshold; in this embodiment Th_jpeg3 = 0.075.
Step 1-h: Apply the Sobel operator to the R, G and B color-channel subimages of X to extract their edges, then find the pixel of maximum brightness in the edge map of each color-channel subimage of X; denote the brightness of that pixel for the c-th channel of X as g_X^c, where c = R, G, B. Likewise extract the edges of the R, G and B channel subimages of Y with the Sobel operator and denote the maximum edge brightness of the c-th channel of Y as g_Y^c. Then judge whether g_Y^c < g_X^c holds for all three channels. If all hold, step 1-i is executed; otherwise the distortion type of Y is JPEG2000-like distortion and the procedure ends.
Step 1-i: Convert X from the RGB color space to the HSI (Hue Saturation Intensity) color space, extract the chroma (hue) component of X, denoted X_H, and compute its average chroma value X̄_H = (1/(H × W)) Σ_{m=1}^{H} Σ_{n=1}^{W} X_H(m, n), where X_H(m, n) denotes the chroma value at coordinate position (m, n) of X_H. Likewise convert Y from the RGB color space to the HSI color space, extract its chroma component Y_H, and compute the average chroma value Ȳ_H, where Y_H(m, n) denotes the chroma value at (m, n) of Y_H. Afterwards compute the chroma relative difference of X and Y, C = |X̄_H − Ȳ_H| / X̄_H, and judge whether C < 0.3 holds; if it does, step 1-j is executed, otherwise the distortion type of Y is JPEG2000-like distortion and the procedure ends.
Step 1-j: Apply a two-dimensional DFT to the R, G and B color-channel subimages of X, obtaining the amplitude spectra of the R, G and B channel subimages of X, denoted F_X^R, F_X^G and F_X^B; likewise apply a two-dimensional DFT to the R, G and B channel subimages of Y, obtaining the amplitude spectra F_Y^R, F_Y^G and F_Y^B. Then compute the frequency response of each color channel; the frequency response of the c-th channel, denoted G^c, has DFT coefficient values G^c(m, n) = F_Y^c(m, n) / F_X^c(m, n), where F_X^c(m, n) and F_Y^c(m, n) denote the DFT coefficient values at coordinate position (m, n) of F_X^c and F_Y^c, 1 ≤ m ≤ H, 1 ≤ n ≤ W; the heights and widths of the amplitude spectra coincide with the height H and width W of X and Y.
Then extract the vertical zero-frequency component of the frequency response of each color-channel subimage; the vertical zero-frequency component of G^c is denoted G_H^c, and its m-th DFT coefficient value is G_H^c(m) = G^c(m, 1), the coefficient of G^c at coordinate position (m, 1).
Afterwards move the zero frequency of each vertical zero-frequency component to the center, obtaining a new vertical zero-frequency component; that of the c-th channel is denoted G'_H^c, with m-th DFT coefficient value G'_H^c(m) = G_H^c(((m + ⌊H/2⌋ − 1) mod H) + 1), a circular shift by ⌊H/2⌋ samples, where ⌊·⌋ is the rounding-down symbol.
Again, from the new vertical zero-frequency component of each channel's frequency response, extract the frequency points within ±⌊H/8⌋ of the center point; apply a median filter of window width 3 to the extracted points of each color channel; fit the filtered points with a one-dimensional Gaussian function; and compute the Pearson correlation coefficient between the filtered points and the fitted data. Specifically, for G'_H^c: extract the points of G'_H^c within ±⌊H/8⌋ of the center point; apply the width-3 median filter to them, obtaining the filtered frequency points, denoted G''_H^c; fit G''_H^c with a one-dimensional Gaussian function, obtaining the fitted data, denoted G'''_H^c; and compute the Pearson correlation coefficient between G''_H^c and G'''_H^c, denoted Pe_c:

Pe_c = Σ_{d=1}^{D} (G''_H^c(d) − Ḡ''_c)(G'''_H^c(d) − Ḡ'''_c) / √[ Σ_{d=1}^{D} (G''_H^c(d) − Ḡ''_c)² × Σ_{d=1}^{D} (G'''_H^c(d) − Ḡ'''_c)² ],

where D = H/4 + 1 is the number of extracted frequency points, Ḡ''_c and Ḡ'''_c are the means of the filtered and fitted data over d = 1, ..., D, G''_H^c(d) is the d-th filtered DFT coefficient value, and G'''_H^c(d) is the d-th fitted value.
Finally, judge whether Pe_R ≤ 0.9, Pe_G ≤ 0.9 and Pe_B ≤ 0.9 all hold. If they all hold, the distortion type of Y is JPEG2000-like distortion; otherwise it is Gaussian blur distortion. The procedure then ends.
In this embodiment, the image data are the 808 images of the publicly available image quality assessment database (LIVE) provided by the Laboratory for Image and Video Engineering of the University of Texas at Austin, comprising 29 undistorted reference images and 779 distorted images. The 779 distorted images are assigned to five sub-libraries by distortion type: a white Gaussian noise (WN, white noise) library (145 images), a Gaussian blur (Gblur, Gaussian blurring) library (145 images), a JPEG library (175 images), a JPEG2000 library (169 images) and a fast-fading (FF, fast fading) library (145 images). Each distorted image contains a single distortion type.
Following the distortion-type discrimination flow of Fig. 2, the first step separates the distorted images with white Gaussian noise distortion from all the distorted images in the database. To determine the white-Gaussian-noise discrimination threshold Th_wn involved in deciding whether the distortion type of a distorted image Y is white Gaussian noise, a probe value for Th_wn is taken every 0.08 over the interval [−0.5, 0.5]. For each probe value, 200 groups of data training of discrimination steps 1-a and 1-b are carried out and the judgment accuracy is computed, each group consisting of half the images of each distortion sub-library, chosen at random. The relation between the value of Th_wn and the discrimination accuracy is shown in Fig. 3a; it can be seen that with Th_wn = −0.1 the white-Gaussian-noise distorted images are separated with 100% accuracy.
The second step separates the JPEG distorted images from the images without white Gaussian noise distortion. A probe value for Th_jpeg is taken every 0.08 over the interval [−1, 1], and for each probe value 200 groups of data training of discrimination steps 1-c to 1-f are carried out; the experimental images come from the sub-libraries other than the white-Gaussian-noise library, half of the images of each sub-library chosen at random. The relation between the value of Th_jpeg and the rate of correct judgment is shown in Fig. 3b; it can be seen that no single Th_jpeg isolates the JPEG distorted images perfectly. Therefore, as described in discrimination step 1-f, Th_jpeg1 = 0.1 and Th_jpeg2 = −0.2 are taken, and for the images with ΔP ∈ [−0.2, 0.1) the data training of discrimination step 1-g is carried out, with the same image sources as before. The experiment yields the relation, shown in Fig. 3c, between the Fourier energy difference and the wavelet subband energy difference of the reference image (the original undistorted image X) and the distorted image Y; it can be seen that with Th_jpeg3 = 0.075 whether the distortion type of Y is JPEG can be judged 100% correctly.
Step 2: Slide an 8 × 8 window over X pixel by pixel in row-column order, dividing X into M × N overlapping 8 × 8 image blocks, and denote the block of X at coordinate position (i, j) as x_{i,j}; likewise slide an 8 × 8 window over Y pixel by pixel in row-column order, dividing Y into M × N overlapping 8 × 8 blocks, and denote the block of Y at (i, j) as y_{i,j}. Here M = H − 7 and N = W − 7, where H is the height of X and Y and W their width, 1 ≤ i ≤ M, 1 ≤ j ≤ N.
During block division, boundary parts of X and Y smaller than 8 × 8 are not divided into blocks.
Step 3: Compute the luminance mean and standard deviation of all pixels in each image block of X and of Y, and then the covariance between the pixels of every pair of co-located blocks of X and Y: μ_{x_{i,j}}, σ_{x_{i,j}}, μ_{y_{i,j}}, σ_{y_{i,j}} and σ_{x_{i,j}y_{i,j}} are computed exactly as in step 3 above, where x_{i,j}(u,v) and y_{i,j}(u,v) denote the luminances of the pixels at coordinate position (u,v) in x_{i,j} and y_{i,j}, 1 ≤ u ≤ 8, 1 ≤ v ≤ 8.
Step 4: Compute the luminance function l(x_{i,j}, y_{i,j}), contrast function c(x_{i,j}, y_{i,j}) and structure function s(x_{i,j}, y_{i,j}) between every pair of co-located blocks of X and Y as in step 4 above, where C1, C2 and C3 are small numerical constants set to keep the denominators from being zero; in this embodiment C1 = 0.01, C2 = 0.02 and C3 = 0.01.
Step 5: From the luminance, contrast and structure functions, compute the structural similarity SSIM(x_{i,j}, y_{i,j}) = [l(x_{i,j}, y_{i,j})]^{α_t} [c(x_{i,j}, y_{i,j})]^{β_t} [s(x_{i,j}, y_{i,j})]^{γ_t} between every pair of co-located blocks, where t denotes the distortion type of Y (t = 1 for white Gaussian noise distortion, t = 2 for JPEG distortion, t = 3 for Gaussian blur distortion, t = 4 for JPEG2000-like distortion); α_t, β_t and γ_t are the weight factors regulating luminance, contrast and structure, satisfying α_t + β_t + γ_t = 3, and their concrete distribution depends on the distortion type of the distorted image.
Step 6: From the structural similarities of all co-located block pairs of X and Y, compute the objective quality score of Y, denoted Q: Q = (1/(M × N)) Σ_{i=1}^{M} Σ_{j=1}^{N} SSIM(x_{i,j}, y_{i,j}).
In the present embodiment, in order to determine the best values of $\alpha_{t}$, $\beta_{t}$, $\gamma_{t}$ ($t\in\{1,2,3,4\}$, corresponding to the different distortion types) for evaluating distorted images of each distortion type, parameter optimization is carried out using the 29 undistorted images provided in the LIVE database and the sub-image databases corresponding to the four distortion types (white Gaussian noise distortion, corresponding to the white Gaussian noise distorted-image database; Gaussian blur distortion, corresponding to the Gaussian blur distorted-image database; JPEG distortion, corresponding to the JPEG distorted-image database; JPEG2000-like distortion, corresponding to the JPEG2000 and fast-fading distorted-image databases), together with the difference mean opinion score (DMOS) of each distorted image, where a larger DMOS value indicates worse quality of the distorted image, a smaller DMOS value indicates better quality, and the value range of DMOS is [0,100].
The optimization process is as follows. For each distortion class, in the first step $\alpha_{temp}\in[0,3]$ is traversed with a step size of 1, $\beta_{temp}\in[0,3-\alpha_{temp}]$ with a step size of 0.1, and $\gamma_{temp}=3-\alpha_{temp}-\beta_{temp}$; steps 1. to 6. of the method of the invention are applied to the image data of the sub-image databases corresponding to the four distortion types above, and the quality evaluation score Q of every distorted image is calculated. Four-parameter Logistic functions are then used for nonlinear fitting, and the Pearson correlation coefficient (CC, Correlation Coefficient) and the Spearman rank-order correlation coefficient (SROCC, Spearman Rank-Order Correlation Coefficient) between the fitted quality evaluation scores and the DMOS values are obtained; the CC coefficient and the SROCC coefficient between the quality evaluation scores and the DMOS values of the t-th class of distorted images are denoted $CC_{t}$ and $SROCC_{t}$ respectively, and $T_{t}=\max(CC_{t}+SROCC_{t})$ is taken as the objective function, where $t\in\{1,2,3,4\}$ corresponds to the different distortion types and max() is the maximum-taking function. The maximum for every distortion class is obtained, denoted $T_{t}^{\max}$, and the values of $\alpha_{temp}$ and $\beta_{temp}$ at which $T_{t}$ reaches the maximum $T_{t}^{\max}$ are recorded; for example, when $T_{1}$ reaches its maximum $T_{1}^{\max}$, the corresponding values of $\alpha_{temp}$ and $\beta_{temp}$ are recorded.
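The fitting inside this search can be sketched as follows. The exact four-parameter Logistic function used in the embodiment is not reproduced in this text, so the common form $f(Q)=(b_{1}-b_{2})/(1+e^{-(Q-b_{3})/b_{4}})+b_{2}$ is assumed here, and the objective T is evaluated as CC plus SROCC for one distortion class:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import pearsonr, spearmanr

def logistic4(q, b1, b2, b3, b4):
    # assumed four-parameter logistic mapping from objective score to DMOS
    return (b1 - b2) / (1.0 + np.exp(-(q - b3) / b4)) + b2

def fitted_objective(Q, DMOS):
    """CC + SROCC between logistic-fitted scores and DMOS (the quantity T_t)."""
    p0 = [DMOS.max(), DMOS.min(), Q.mean(), max(Q.std(), 1e-3)]
    params, _ = curve_fit(logistic4, Q, DMOS, p0=p0, maxfev=10000)
    cc = pearsonr(logistic4(Q, *params), DMOS)[0]
    srocc = spearmanr(Q, DMOS)[0]   # rank correlation is unaffected by the fit
    return cc + srocc
```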
In the second step, $\alpha_{temp}$ is traversed over an interval narrowed around the first-step optimum with a step size of 0.2, $\beta_{temp}\in[0,3-\alpha_{temp}]$ with a step size of 0.1, and $\gamma_{temp}=3-\alpha_{temp}-\beta_{temp}$; steps 1. to 6. of the method of the invention are applied to the image data of the sub-image databases corresponding to the four distortion types above to calculate the quality evaluation score Q of every distorted image, four-parameter Logistic functions are used for nonlinear fitting, the correlation coefficients CC and SROCC between the fitted quality evaluation scores and the DMOS values are calculated, the CC coefficient and the SROCC coefficient of the t-th class of distorted images are denoted $CC_{t}$ and $SROCC_{t}$ respectively, and $T_{t}=\max(CC_{t}+SROCC_{t})$ is taken as the objective function; the maximum for every distortion class is obtained, and the values of $\alpha_{temp}$ and $\beta_{temp}$ at which $T_{t}$ reaches this maximum are recorded.
In the third step, $\alpha_{temp}$ is traversed over an interval further narrowed around the second-step optimum and $\beta_{temp}\in[0,3-\alpha_{temp}]$, both with a step size of 0.1, and $\gamma_{temp}=3-\alpha_{temp}-\beta_{temp}$; steps 1. to 6. of the method of the invention are applied to the image data of the sub-image databases corresponding to the four distortion types above to calculate the quality evaluation score Q of every distorted image, four-parameter Logistic functions are used for nonlinear fitting, the correlation coefficients CC and SROCC between the fitted quality evaluation scores and the DMOS values are calculated, the CC coefficient and the SROCC coefficient of the t-th class of distorted images are denoted $CC_{t}$ and $SROCC_{t}$ respectively, and $T_{t}=\max(CC_{t}+SROCC_{t})$ is taken as the objective function; the maximum for every distortion class is obtained, and the values of $\alpha_{temp}$ and $\beta_{temp}$ at which $T_{t}$ reaches this maximum are recorded.
In the fourth step, the weight factors under the different distortion types are finally assigned: $\alpha_{t}$ and $\beta_{t}$ take the optimal $\alpha_{temp}$ and $\beta_{temp}$ values recorded in the third step, and $\gamma_{t}=3-\alpha_{t}-\beta_{t}$, where $t\in\{1,2,3,4\}$ corresponds to the different distortion types.
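Taken together, the four steps amount to a coarse-to-fine grid search over $(\alpha,\beta)$ under the constraint $\alpha+\beta+\gamma=3$. The sketch below assumes that each stage re-centres the α interval on the optimum of the previous stage, since the refined intervals themselves are not reproduced in this text; score_fn is a hypothetical callback that runs steps 1. to 6. plus the Logistic fit over the sub-database of one distortion class:

```python
import numpy as np

def coarse_to_fine_search(score_fn, a_steps=(1.0, 0.2, 0.1)):
    """Maximise T = score_fn(alpha, beta) with gamma = 3 - alpha - beta."""
    best_t, best_w = -np.inf, None
    a_lo, a_hi = 0.0, 3.0
    for a_step in a_steps:                              # the three stage step sizes
        for a in np.arange(a_lo, a_hi + 1e-9, a_step):
            for b in np.arange(0.0, 3.0 - a + 1e-9, 0.1):  # beta step is 0.1
                t = score_fn(a, b)
                if t > best_t:
                    best_t, best_w = t, (a, b, 3.0 - a - b)
        a_lo = max(0.0, best_w[0] - a_step)             # narrow the alpha interval
        a_hi = min(3.0, best_w[0] + a_step)             # around the current optimum
    return best_w, best_t
```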
In the present embodiment, the values of $\alpha_{t}$, $\beta_{t}$ and $\gamma_{t}$ obtained by experiment are as listed in Table 1.

Table 1. Values of $\alpha_{t}$, $\beta_{t}$ and $\gamma_{t}$ under the different distortion types

Distortion type                  α_t    β_t    γ_t
White Gaussian noise (t=1)       2.5    0.1    0.4
JPEG (t=2)                       1.3    1.6    0.1
Gaussian blur (t=3)              2.9    0      0.1
JPEG2000-like (t=4)              0.8    2      0.2
Here, using the 29 undistorted images and the 779 singly distorted images provided in the LIVE database, together with the DMOS value corresponding to each distorted image, the quality evaluation score Q of every distorted image is calculated according to steps 1. to 6., and a four-parameter Logistic nonlinear fit is performed between the quality evaluation scores Q and the DMOS values of the 779 distorted images. Four objective parameters commonly used for assessing image quality evaluation methods are adopted as evaluation indices, namely the correlation coefficients (CC, SROCC), the outlier ratio (OR, out ratio) and the root mean squared error coefficient (RMSE, rooted mean squared error). The CC coefficient and the SROCC coefficient are calculated as in the first step of the parameter optimization procedure. The OR value is calculated as $OR = Z_{0}/Z$, where $Z_{0}$ denotes the number of elements in the sample to be tested that satisfy the prescribed outlier expression, S denotes the reference sample $\{S_{z}, z=1,2,\ldots,Z\}$, O denotes the sample to be tested $\{O_{z}, z=1,2,\ldots,Z\}$, and Z is the total number of elements in the sample. The RMSE value is calculated as

$\mathrm{RMSE} = \sqrt{\frac{1}{Z}\sum_{z=1}^{Z}\left(S_{z}-O_{z}\right)^{2}}$,

where $S_{z}$, $O_{z}$ and Z have the same meanings as above. Higher CC and SROCC values indicate that the objective evaluation method correlates better with DMOS, and lower OR and RMSE values likewise indicate that the objective evaluation method correlates better with DMOS.
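These four indices can be computed as in the sketch below. The outlier criterion used for OR, a deviation of more than twice the standard deviation of the reference scores, is our assumption, since the original outlier expression is not reproduced in this text:

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def evaluation_indices(S, O):
    """S: reference sample (DMOS), O: sample to be tested (fitted scores)."""
    cc = pearsonr(O, S)[0]
    srocc = spearmanr(O, S)[0]
    rmse = np.sqrt(np.mean((S - O) ** 2))
    z0 = np.sum(np.abs(O - S) > 2.0 * np.std(S))   # assumed outlier criterion
    or_ratio = z0 / len(S)                          # OR = Z_0 / Z
    return cc, srocc, or_ratio, rmse
```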
Table 2 lists the values of the CC, SROCC, OR and RMSE coefficients assessing performance under the various distortion types. As can be seen from the data listed in Table 2, the correlation between the objective quality score Q of the distorted images obtained by the present embodiment and the subjective score DMOS is very high: the CC and SROCC values all exceed 0.91, the OR value is below 0.5, and the RMSE value is below 5.5. This shows that the objective evaluation results of the method of the invention are highly consistent with the subjective perception of the human eye, fully demonstrating the validity of the method of the invention.
Table 2. Correlation between the objective evaluation scores and the subjective evaluation scores of the distorted images obtained by the present embodiment

Claims (6)

1. An adaptive image quality objective evaluation method, characterized in that its processing procedure is:

firstly, determining the distortion type of the distorted image to be evaluated, and dividing the original undistorted image and the distorted image to be evaluated each into a plurality of overlapping image blocks of size 8×8;

secondly, by calculating the brightness mean and the standard deviation of all pixels in each image block in the original undistorted image and in the distorted image to be evaluated, and the covariance between all pixels in every pair of image blocks at the same coordinate position in the original undistorted image and the distorted image to be evaluated, and in combination with the distortion type of the distorted image to be evaluated, obtaining the structural similarity between every pair of image blocks at the same coordinate position in the original undistorted image and the distorted image to be evaluated;

finally, according to the structural similarities between all pairs of image blocks at the same coordinate position in the original undistorted image and the distorted image to be evaluated, obtaining the objective quality score of the distorted image to be evaluated.
2. The adaptive image quality objective evaluation method according to claim 1, characterized in that it specifically comprises the following steps:

1. letting X denote the original undistorted image and Y denote the distorted image to be evaluated, and then determining the distortion type of Y by a distortion-type discrimination method, the distortion type of Y being one of white Gaussian noise distortion, JPEG distortion, Gaussian blur distortion and JPEG2000-like distortion, wherein the JPEG2000-like distortion comprises both JPEG2000 distortion and fast-fading distortion;

2. using a sliding window of size 8×8 moving pixel by pixel in X, dividing X into M×N overlapping image blocks of size 8×8, and denoting the image block at coordinate position $(i,j)$ in X as $x_{i,j}$; likewise, using a sliding window of size 8×8 moving pixel by pixel in Y, dividing Y into M×N overlapping image blocks of size 8×8, and denoting the image block at coordinate position $(i,j)$ in Y as $y_{i,j}$; wherein H denotes the height of X and Y, W denotes the width of X and Y, the symbol $\lfloor\cdot\rfloor$ is the round-down symbol, $1\le i\le M$, $1\le j\le N$;
3. calculating the brightness mean and the standard deviation of all pixels in each image block in X, calculating the brightness mean and the standard deviation of all pixels in each image block in Y, and then calculating the covariance between all pixels in every pair of image blocks at the same coordinate position in X and Y; denoting the brightness mean and the standard deviation of all pixels in the image block $x_{i,j}$ at coordinate position $(i,j)$ in X as $\mu_{x_{i,j}}$ and $\sigma_{x_{i,j}}$, denoting the brightness mean and the standard deviation of all pixels in the image block $y_{i,j}$ at coordinate position $(i,j)$ in Y as $\mu_{y_{i,j}}$ and $\sigma_{y_{i,j}}$, and denoting the covariance between all pixels in $x_{i,j}$ and all pixels in $y_{i,j}$ as $\sigma_{x_{i,j}y_{i,j}}$:

$\mu_{x_{i,j}} = \frac{1}{64}\sum_{u=1}^{8}\sum_{v=1}^{8} x_{i,j}(u,v)$, $\sigma_{x_{i,j}} = \sqrt{\frac{1}{64}\sum_{u=1}^{8}\sum_{v=1}^{8}\left(x_{i,j}(u,v)-\mu_{x_{i,j}}\right)^{2}}$,

$\mu_{y_{i,j}} = \frac{1}{64}\sum_{u=1}^{8}\sum_{v=1}^{8} y_{i,j}(u,v)$, $\sigma_{y_{i,j}} = \sqrt{\frac{1}{64}\sum_{u=1}^{8}\sum_{v=1}^{8}\left(y_{i,j}(u,v)-\mu_{y_{i,j}}\right)^{2}}$,

$\sigma_{x_{i,j}y_{i,j}} = \frac{1}{64}\sum_{u=1}^{8}\sum_{v=1}^{8}\left[\left(x_{i,j}(u,v)-\mu_{x_{i,j}}\right)\left(y_{i,j}(u,v)-\mu_{y_{i,j}}\right)\right]$,

where $x_{i,j}(u,v)$ denotes the brightness value of the pixel at coordinate position $(u,v)$ in $x_{i,j}$, $y_{i,j}(u,v)$ denotes the brightness value of the pixel at coordinate position $(u,v)$ in $y_{i,j}$, $1\le u\le 8$, $1\le v\le 8$;
4. calculating the luminance function, the contrast function and the structure function between every pair of image blocks at the same coordinate position in X and Y; denoting the luminance function, the contrast function and the structure function between the image block $x_{i,j}$ at coordinate position $(i,j)$ in X and the image block $y_{i,j}$ at coordinate position $(i,j)$ in Y as $l(x_{i,j},y_{i,j})$, $c(x_{i,j},y_{i,j})$ and $s(x_{i,j},y_{i,j})$ respectively:

$l(x_{i,j},y_{i,j}) = \frac{2\mu_{x_{i,j}}\mu_{y_{i,j}}+C_{1}}{\mu_{x_{i,j}}^{2}+\mu_{y_{i,j}}^{2}+C_{1}}$, $c(x_{i,j},y_{i,j}) = \frac{2\sigma_{x_{i,j}}\sigma_{y_{i,j}}+C_{2}}{\sigma_{x_{i,j}}^{2}+\sigma_{y_{i,j}}^{2}+C_{2}}$, $s(x_{i,j},y_{i,j}) = \frac{\sigma_{x_{i,j}y_{i,j}}+C_{3}}{\sigma_{x_{i,j}}\sigma_{y_{i,j}}+C_{3}}$,

where $C_{1}$, $C_{2}$, $C_{3}$ are small numerical constants set to prevent the denominators from becoming zero;
5. according to the luminance, contrast and structure functions between every pair of image blocks at the same coordinate position in X and Y, calculating the structural similarity between every such pair of image blocks; denoting the structural similarity between the image block $x_{i,j}$ at coordinate position $(i,j)$ in X and the image block $y_{i,j}$ at coordinate position $(i,j)$ in Y as $\mathrm{SSIM}(x_{i,j},y_{i,j})$, $\mathrm{SSIM}(x_{i,j},y_{i,j}) = \left[l(x_{i,j},y_{i,j})\right]^{\alpha_{t}}\left[c(x_{i,j},y_{i,j})\right]^{\beta_{t}}\left[s(x_{i,j},y_{i,j})\right]^{\gamma_{t}}$, where $t$ denotes the distortion type of Y: $t=1$ when the distortion type of Y is white Gaussian noise distortion, $t=2$ when it is JPEG distortion, $t=3$ when it is Gaussian blur distortion, and $t=4$ when it is JPEG2000-like distortion; $\alpha_{t}$ denotes the weight factor regulating luminance, $\beta_{t}$ the weight factor regulating contrast, and $\gamma_{t}$ the weight factor regulating structure, satisfying $\alpha_{t}+\beta_{t}+\gamma_{t}=3$;
6. according to the structural similarities between all pairs of image blocks at the same coordinate position in X and Y, calculating the objective quality score of Y, denoted $Q$, $Q = \frac{1}{M\times N}\sum_{i=1}^{M}\sum_{j=1}^{N}\mathrm{SSIM}(x_{i,j},y_{i,j})$.
3. The adaptive image quality objective evaluation method according to claim 2, characterized in that the detailed process of determining the distortion type of Y by the distortion-type discrimination method in said step 1. is:

1.-a, performing one level of two-dimensional wavelet decomposition on X to obtain the approximate component $X_{A}$, the horizontal component $X_{H}^{1}$ and the vertical component $X_{V}^{1}$ of X, and calculating the total wavelet energy of $X_{H}^{1}$ and $X_{V}^{1}$, denoted $W_{X}^{1}$,

$W_{X}^{1} = \frac{\sum_{m'=1}^{M'_{1}}\sum_{n'=1}^{N'_{1}}\log_{2}\left(1+|X_{H}^{1}(m',n')|\right) + \sum_{m'=1}^{M'_{1}}\sum_{n'=1}^{N'_{1}}\log_{2}\left(1+|X_{V}^{1}(m',n')|\right)}{2\times M'_{1}\times N'_{1}}$;

then performing a further two-dimensional wavelet decomposition on the approximate component $X_{A}$ of X to obtain the subband horizontal component $X_{H}^{2}$ and the subband vertical component $X_{V}^{2}$ of $X_{A}$, and calculating the total wavelet energy of $X_{H}^{2}$ and $X_{V}^{2}$, denoted $W_{X}^{2}$,

$W_{X}^{2} = \frac{\sum_{m''=1}^{M'_{2}}\sum_{n''=1}^{N'_{2}}\log_{2}\left(1+|X_{H}^{2}(m'',n'')|\right) + \sum_{m''=1}^{M'_{2}}\sum_{n''=1}^{N'_{2}}\log_{2}\left(1+|X_{V}^{2}(m'',n'')|\right)}{2\times M'_{2}\times N'_{2}}$;

here $1\le m'\le M'_{1}$, $1\le n'\le N'_{1}$, $M'_{1}$ denotes the height of $X_{H}^{1}$ and $X_{V}^{1}$, $N'_{1}$ denotes the width of $X_{H}^{1}$ and $X_{V}^{1}$, $X_{H}^{1}(m',n')$ denotes the coefficient value at coordinate position $(m',n')$ in $X_{H}^{1}$, $X_{V}^{1}(m',n')$ denotes the coefficient value at coordinate position $(m',n')$ in $X_{V}^{1}$; $1\le m''\le M'_{2}$, $1\le n''\le N'_{2}$, $M'_{2}$ denotes the height of $X_{H}^{2}$ and $X_{V}^{2}$, $N'_{2}$ denotes the width of $X_{H}^{2}$ and $X_{V}^{2}$, $X_{H}^{2}(m'',n'')$ denotes the coefficient value at coordinate position $(m'',n'')$ in $X_{H}^{2}$, and $X_{V}^{2}(m'',n'')$ denotes the coefficient value at coordinate position $(m'',n'')$ in $X_{V}^{2}$;
likewise, performing one level of two-dimensional wavelet decomposition on Y to obtain the approximate component $Y_{A}$, the horizontal component $Y_{H}^{1}$ and the vertical component $Y_{V}^{1}$ of Y, and calculating the total wavelet energy of $Y_{H}^{1}$ and $Y_{V}^{1}$, denoted $W_{Y}^{1}$,

$W_{Y}^{1} = \frac{\sum_{m'=1}^{M'_{1}}\sum_{n'=1}^{N'_{1}}\log_{2}\left(1+|Y_{H}^{1}(m',n')|\right) + \sum_{m'=1}^{M'_{1}}\sum_{n'=1}^{N'_{1}}\log_{2}\left(1+|Y_{V}^{1}(m',n')|\right)}{2\times M'_{1}\times N'_{1}}$;

then performing a further two-dimensional wavelet decomposition on the approximate component $Y_{A}$ of Y to obtain the subband horizontal component $Y_{H}^{2}$ and the subband vertical component $Y_{V}^{2}$ of $Y_{A}$, and calculating the total wavelet energy of $Y_{H}^{2}$ and $Y_{V}^{2}$, denoted $W_{Y}^{2}$,

$W_{Y}^{2} = \frac{\sum_{m''=1}^{M'_{2}}\sum_{n''=1}^{N'_{2}}\log_{2}\left(1+|Y_{H}^{2}(m'',n'')|\right) + \sum_{m''=1}^{M'_{2}}\sum_{n''=1}^{N'_{2}}\log_{2}\left(1+|Y_{V}^{2}(m'',n'')|\right)}{2\times M'_{2}\times N'_{2}}$;

here $Y_{H}^{1}(m',n')$ and $Y_{V}^{1}(m',n')$ denote the coefficient values at coordinate position $(m',n')$ in $Y_{H}^{1}$ and $Y_{V}^{1}$ respectively, $Y_{H}^{2}(m'',n'')$ and $Y_{V}^{2}(m'',n'')$ denote the coefficient values at coordinate position $(m'',n'')$ in $Y_{H}^{2}$ and $Y_{V}^{2}$ respectively, and $M'_{1}$, $N'_{1}$, $M'_{2}$, $N'_{2}$ have the same meanings as above;

then calculating the energy difference between the total wavelet energy corresponding to X and the total wavelet energy corresponding to Y, denoted $\Delta W$, $\Delta W = (W_{X}^{1}+W_{X}^{2}) - (W_{Y}^{1}+W_{Y}^{2})$;
1.-b, judging whether $\Delta W < Th_{wn}$ holds; if it holds, determining that the distortion type of Y is white Gaussian noise distortion and then ending; otherwise, executing step 1.-c; wherein $Th_{wn}$ is the white Gaussian noise distortion discrimination threshold;
1.-c, calculating the luminance difference map of X, denoted $X_{h}$, and denoting the brightness value of the pixel at coordinate position $(x,y)$ in $X_{h}$ as $X_{h}(x,y)$, $X_{h}(x,y) = |X(x,y) - X(x,y+1)|$, wherein here $1\le x\le H$, $1\le y\le W-1$, $X(x,y)$ denotes the brightness value of the pixel at coordinate position $(x,y)$ in X, $X(x,y+1)$ denotes the brightness value of the pixel at coordinate position $(x,y+1)$ in X, and the symbol $|\cdot|$ is the absolute-value symbol;

likewise, calculating the luminance difference map of Y, denoted $Y_{h}$, and denoting the brightness value of the pixel at coordinate position $(x,y)$ in $Y_{h}$ as $Y_{h}(x,y)$, $Y_{h}(x,y) = |Y(x,y) - Y(x,y+1)|$, wherein $Y(x,y)$ denotes the brightness value of the pixel at coordinate position $(x,y)$ in Y, and $Y(x,y+1)$ denotes the brightness value of the pixel at coordinate position $(x,y+1)$ in Y;
1.-d, performing an $N_{h}$-point discrete Fourier transform on all pixels in every row of $X_{h}$ to obtain the Fourier energy spectrum of every row of $X_{h}$, the Fourier energy spectrum of the x-th row of $X_{h}$ being denoted $P_{X}^{x}$, wherein $X_{f}^{x}(l)$ denotes the l-th DFT coefficient value obtained after the $N_{h}$-point discrete Fourier transform of all pixels in the x-th row of $X_{h}$, the symbol $\lceil\cdot\rceil$ is the round-up symbol, and the symbol $|\cdot|$ is the absolute-value symbol;

likewise, performing an $N_{h}$-point discrete Fourier transform on all pixels in every row of $Y_{h}$ to obtain the Fourier energy spectrum of every row of $Y_{h}$, the Fourier energy spectrum of the x-th row of $Y_{h}$ being denoted $P_{Y}^{x}$, wherein $Y_{f}^{x}(l)$ denotes the l-th DFT coefficient value obtained after the $N_{h}$-point discrete Fourier transform of all pixels in the x-th row of $Y_{h}$;

1.-e, according to the Fourier energy spectra of all rows of $X_{h}$, calculating the whole Fourier energy spectrum of $X_{h}$, denoted $P_{X}$, and according to the Fourier energy spectra of all rows of $Y_{h}$, calculating the whole Fourier energy spectrum of $Y_{h}$, denoted $P_{Y}$, wherein $l\in[1,N_{h}/2+1]$; then calculating the Fourier energy difference between $P_{X}$ and $P_{Y}$, denoted $\Delta P$,

$\Delta P = \sum_{k=1}^{4}\left[P_{Y}\left(\frac{k\times N_{h}}{8}+1\right) - P_{X}\left(\frac{k\times N_{h}}{8}+1\right)\right]$;

1.-f, determining the distortion type of Y according to $\Delta P$, the JPEG distortion first judgment threshold $Th_{jpeg1}$ and the JPEG distortion second judgment threshold $Th_{jpeg2}$: when $\Delta P \ge Th_{jpeg1}$, determining that the distortion type of Y is JPEG distortion and then ending; when $Th_{jpeg2}\le \Delta P < Th_{jpeg1}$, leaving the distortion type of Y undetermined and executing step 1.-g; when $\Delta P < Th_{jpeg2}$, determining that the distortion type of Y is Gaussian blur distortion or JPEG2000-like distortion and executing step 1.-h;

1.-g, calculating the percentage by which the total wavelet energy of Y is reduced compared with that of X, denoted $\Delta W'$, $\Delta W' = \frac{(W_{X}^{1}+W_{X}^{2})-(W_{Y}^{1}+W_{Y}^{2})}{W_{X}^{1}+W_{X}^{2}}$; then judging whether $\Delta W' < Th_{jpeg3}$ holds; if it holds, determining that the distortion type of Y is JPEG distortion and then ending; otherwise, executing step 1.-h; wherein $Th_{jpeg3}$ is the JPEG distortion third judgment threshold;
1.-h, using the Sobel operator to perform edge extraction on each of the R, G and B color-channel subgraphs of X, then finding the pixel with the maximum brightness value in the edge of each color-channel subgraph of X and recording its brightness value, wherein c = R, G, B; likewise, using the Sobel operator to perform edge extraction on each of the R, G and B color-channel subgraphs of Y, then finding the pixel with the maximum brightness value in the edge of each color-channel subgraph of Y and recording its brightness value; judging again whether the prescribed inequalities between the recorded maximum edge brightness values of X and Y all hold; if they all hold, executing step 1.-i; otherwise, determining that the distortion type of Y is JPEG2000-like distortion and then ending;
1.-i, converting X from the RGB color space to the HSI color space, then extracting the chroma component of X, denoted $X_{H}$, and then calculating the average chroma value of $X_{H}$, wherein $X_{H}(m,n)$ denotes the chroma value at coordinate position $(m,n)$ in $X_{H}$; likewise, converting Y from the RGB color space to the HSI color space, then extracting the chroma component of Y, denoted $Y_{H}$, and then calculating the average chroma value of $Y_{H}$, wherein $Y_{H}(m,n)$ denotes the chroma value at coordinate position $(m,n)$ in $Y_{H}$; afterwards, calculating the relative chroma difference between X and Y from the two average chroma values, denoted C; judging again whether C < 0.3 holds; if it holds, executing step 1.-j; otherwise, determining that the distortion type of Y is JPEG2000-like distortion and then ending;
1.-j, performing a two-dimensional discrete Fourier transform on each of the R, G and B color-channel subgraphs of X to obtain the amplitude spectrum of the R color-channel subgraph, the amplitude spectrum of the G color-channel subgraph and the amplitude spectrum of the B color-channel subgraph of X; likewise, performing a two-dimensional discrete Fourier transform on each of the R, G and B color-channel subgraphs of Y to obtain the amplitude spectrum of the R color-channel subgraph, the amplitude spectrum of the G color-channel subgraph and the amplitude spectrum of the B color-channel subgraph of Y; then calculating the frequency response of each color-channel subgraph, the DFT coefficient value of the frequency response of the c-th color-channel subgraph at coordinate position $(m,n)$ being the ratio of the DFT coefficient value of the amplitude spectrum of the c-th color-channel subgraph of Y at $(m,n)$ to the DFT coefficient value of the amplitude spectrum of the c-th color-channel subgraph of X at $(m,n)$, wherein c = R, G, B;
then extracting the vertical zero-frequency component of the frequency response of each color-channel subgraph, the m-th DFT coefficient value of the vertical zero-frequency component of the frequency response of the c-th color-channel subgraph being the DFT coefficient value of that frequency response at coordinate position $(m,1)$; afterwards, shifting the zero frequency of the vertical zero-frequency component of the frequency response of each color-channel subgraph to the center to obtain a new vertical zero-frequency component, the symbol $\lfloor\cdot\rfloor$ being the round-down symbol;

then extracting, from the new vertical zero-frequency component of the frequency response of each color-channel subgraph, the frequency points within a distance of $\pm H/8$ of the center point, performing median filtering on the extracted frequency points of each color-channel subgraph with a filter of window width 3, fitting the filtered frequency points with a one-dimensional Gaussian function, and calculating the Pearson correlation coefficient between the filtered frequency points and the fitted data; for the c-th color-channel subgraph, the extracted frequency points are denoted $G_{H}^{c}$, the frequency points after median filtering are denoted $G''^{c}_{H}$, and the data after Gaussian fitting are denoted $G'^{c}_{H}$; the Pearson correlation coefficient between $G''^{c}_{H}$ and $G'^{c}_{H}$ is denoted $Pe_{c}$,

$Pe_{c} = \frac{\sum_{d=1}^{H/4+1}\left(G''^{c}_{H}(d)-\frac{4}{H+4}\sum_{d=1}^{H/4+1}G''^{c}_{H}(d)\right)\left(G'^{c}_{H}(d)-\frac{4}{H+4}\sum_{d=1}^{H/4+1}G'^{c}_{H}(d)\right)}{\sqrt{\sum_{d=1}^{H/4+1}\left(G''^{c}_{H}(d)-\frac{4}{H+4}\sum_{d=1}^{H/4+1}G''^{c}_{H}(d)\right)^{2}}\sqrt{\sum_{d=1}^{H/4+1}\left(G'^{c}_{H}(d)-\frac{4}{H+4}\sum_{d=1}^{H/4+1}G'^{c}_{H}(d)\right)^{2}}}$,

where $G''^{c}_{H}(d)$ is the d-th DFT coefficient value of $G''^{c}_{H}$ and $G'^{c}_{H}(d)$ is the d-th DFT coefficient value after fitting;
finally, judging whether $Pe_{R}\le 0.9$, $Pe_{G}\le 0.9$ and $Pe_{B}\le 0.9$ all hold; if they all hold, determining that the distortion type of Y is JPEG2000-like distortion and then ending; otherwise, determining that the distortion type of Y is Gaussian blur distortion and then ending.
4. The adaptive image quality objective evaluation method according to claim 3, characterized in that in said step 1.-b the value of the white Gaussian noise distortion discrimination threshold $Th_{wn}$ is -0.1; in said step 1.-f the value of the JPEG distortion first judgment threshold $Th_{jpeg1}$ is 0.1 and the value of the JPEG distortion second judgment threshold $Th_{jpeg2}$ is -0.2; and in said step 1.-g the value of the JPEG distortion third judgment threshold $Th_{jpeg3}$ is 0.075.
5. The adaptive image quality objective evaluation method according to any one of claims 2 to 4, characterized in that in said step 4. the values $C_{1}=0.01$, $C_{2}=0.02$ and $C_{3}=0.01$ are taken.
6. The adaptive image quality objective evaluation method according to claim 5, characterized in that in said step 5. the values of $\alpha_{t}$, $\beta_{t}$ and $\gamma_{t}$ are determined by the distortion type of Y: when the distortion type of Y is white Gaussian noise distortion, $\alpha_{t}=2.5$, $\beta_{t}=0.1$, $\gamma_{t}=0.4$; when the distortion type of Y is JPEG distortion, $\alpha_{t}=1.3$, $\beta_{t}=1.6$, $\gamma_{t}=0.1$; when the distortion type of Y is Gaussian blur distortion, $\alpha_{t}=2.9$, $\beta_{t}=0$, $\gamma_{t}=0.1$; and when the distortion type of Y is JPEG2000-like distortion, $\alpha_{t}=0.8$, $\beta_{t}=2$, $\gamma_{t}=0.2$.
CN201310226269.7A 2013-06-06 2013-06-06 Adaptive image quality objective evaluation method Expired - Fee Related CN103338380B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310226269.7A CN103338380B (en) 2013-06-06 2013-06-06 Adaptive image quality objective evaluation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310226269.7A CN103338380B (en) 2013-06-06 2013-06-06 Adaptive image quality objective evaluation method

Publications (2)

Publication Number Publication Date
CN103338380A true CN103338380A (en) 2013-10-02
CN103338380B CN103338380B (en) 2015-03-11

Family

ID=49246466

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310226269.7A Expired - Fee Related CN103338380B (en) 2013-06-06 2013-06-06 Adaptive image quality objective evaluation method

Country Status (1)

Country Link
CN (1) CN103338380B (en)



Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101872479A (en) * 2010-06-09 2010-10-27 宁波大学 Three-dimensional image objective quality evaluation method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LOU Bin et al.: "Structural similarity image quality assessment based on distortion models", Journal of Zhejiang University (Engineering Science) *
WANG Yuqing et al.: "An image quality assessment method based on local variance and structural similarity", Journal of Optoelectronics·Laser *

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105118057A (en) * 2015-08-18 2015-12-02 江南大学 Image sharpness evaluation method based on quaternion wavelet transform amplitudes and phase positions
CN105357411A (en) * 2015-10-29 2016-02-24 小米科技有限责任公司 Method and device for detecting image quality
CN105357411B (en) * 2015-10-29 2018-07-31 小米科技有限责任公司 The method and device of detection image quality
CN106651829A (en) * 2016-09-23 2017-05-10 中国传媒大学 Non-reference image objective quality evaluation method based on energy and texture analysis
CN106651829B (en) * 2016-09-23 2019-10-08 中国传媒大学 A kind of non-reference picture method for evaluating objective quality based on energy and texture analysis
CN106529593B (en) * 2016-11-08 2020-04-28 广东诚泰交通科技发展有限公司 Pavement disease detection method and system
CN106529593A (en) * 2016-11-08 2017-03-22 广东诚泰交通科技发展有限公司 Pavement disease detection method and system
CN106454338A (en) * 2016-11-25 2017-02-22 广州视源电子科技股份有限公司 Method and apparatus for detecting picture display effect of electronic device
CN106454338B (en) * 2016-11-25 2018-09-25 广州视源电子科技股份有限公司 Electronic equipment image display effect detection method and device
CN107105223A (en) * 2017-03-20 2017-08-29 宁波大学 A kind of tone mapping method for objectively evaluating image quality based on global characteristics
CN107105223B (en) * 2017-03-20 2018-12-07 宁波大学 A kind of tone mapping method for objectively evaluating image quality based on global characteristics
CN108682005A (en) * 2018-04-25 2018-10-19 西北工业大学 Half based on covariance matrix feature refers to 3D composograph quality evaluating methods
CN108682005B (en) * 2018-04-25 2021-06-18 西北工业大学 Semi-reference 3D synthetic image quality evaluation method based on covariance matrix characteristics
CN111836038A (en) * 2019-04-17 2020-10-27 北京地平线机器人技术研发有限公司 Method and device for determining imaging quality, storage medium and electronic equipment
CN111836038B (en) * 2019-04-17 2023-02-28 北京地平线机器人技术研发有限公司 Method and device for determining imaging quality, storage medium and electronic equipment
CN114219774A (en) * 2021-11-30 2022-03-22 浙江大华技术股份有限公司 Image quality evaluation method, device, terminal and computer readable storage medium

Also Published As

Publication number Publication date
CN103338380B (en) 2015-03-11

Similar Documents

Publication Publication Date Title
CN103338380B (en) Adaptive image quality objective evaluation method
CN103475897B (en) Adaptive image quality evaluation method based on distortion type judgment
CN108090902B (en) Non-reference image quality objective evaluation method based on multi-scale generation countermeasure network
CN103295191B (en) Multiple scale vision method for adaptive image enhancement and evaluation method
CN102170581B (en) Human-visual-system (HVS)-based structural similarity (SSIM) and characteristic matching three-dimensional image quality evaluation method
CN106447646A (en) Quality blind evaluation method for unmanned aerial vehicle image
CN104902267B (en) No-reference image quality evaluation method based on gradient information
CN101872479B (en) Three-dimensional image objective quality evaluation method
CN105447884A (en) Objective image quality evaluation method based on manifold feature similarity
CN110378232B (en) Improved test room examinee position rapid detection method of SSD dual-network
CN104811691B (en) A kind of stereoscopic video quality method for objectively evaluating based on wavelet transformation
CN103281554B (en) Video objective quality evaluation method based on human eye visual characteristics
CN116645367B (en) Steel plate cutting quality detection method for high-end manufacturing
CN103945217B (en) Based on complex wavelet domain half-blindness image quality evaluating method and the system of entropy
CN108052980A (en) Air quality grade detection method based on image
CN102421007A (en) Image quality evaluating method based on multi-scale structure similarity weighted aggregate
CN106780446A (en) It is a kind of to mix distorted image quality evaluating method without reference
CN103871054A (en) Combined index-based image segmentation result quantitative evaluation method
CN101901482B (en) Method for judging quality effect of defogged and enhanced image
CN104408473A (en) Distance metric learning-based cotton grading method and device
CN108550146A (en) A kind of image quality evaluating method based on ROI
CN109829905A (en) It is a kind of face beautification perceived quality without reference evaluation method
CN103745466A (en) Image quality evaluation method based on independent component analysis
CN102081799B (en) Method for detecting change of SAR images based on neighborhood similarity and double-window filtering
CN101625759A (en) Image quality evaluation method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20190808

Address after: Room 1,020, Nanxun Science and Technology Pioneering Park, No. 666 Chaoyang Road, Nanxun District, Huzhou City, Zhejiang Province, 313000

Patentee after: Huzhou You Yan Intellectual Property Service Co.,Ltd.

Address before: 315211 Zhejiang Province, Ningbo Jiangbei District Fenghua Road No. 818

Patentee before: Ningbo University

TR01 Transfer of patent right

Effective date of registration: 20201126

Address after: 313000 No. 818 Xinhui Road, Lianshi Town, Nanxun District, Huzhou City, Zhejiang Province

Patentee after: ZHEJIANG SANXING ELECTRICAL TECHNOLOGY Co.,Ltd.

Address before: Room 1,020, Nanxun Science and Technology Pioneering Park, No. 666 Chaoyang Road, Nanxun District, Huzhou City, Zhejiang Province, 313000

Patentee before: Huzhou You Yan Intellectual Property Service Co.,Ltd.

TR01 Transfer of patent right

Effective date of registration: 20230116

Address after: B203, Chuangzhi Huichuang Space, Floor 1, Building A3, Hefei Innovation Industrial Park, No. 800, Wangjiang West Road, High-tech Zone, Hefei, Anhui Province, 230000

Patentee after: Hefei Shumei Information Technology Co.,Ltd.

Address before: No.818 Xinhui Road, Lianshi Town, Nanxun District, Huzhou City, Zhejiang Province

Patentee before: ZHEJIANG SANXING ELECTRICAL TECHNOLOGY CO.,LTD.

CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20150311