CN104143188A - Image quality evaluation method based on multi-scale edge expression

Image quality evaluation method based on multi-scale edge expression

Info

Publication number
CN104143188A
CN104143188A (application CN201410320983.7A)
Authority
CN
China
Prior art keywords
image
scale
Prior art date
Legal status: Pending
Application number
CN201410320983.7A
Other languages
Chinese (zh)
Inventor
翟广涛 (Guangtao Zhai)
闵雄阔 (Xiongkuo Min)
杨小康 (Xiaokang Yang)
李铎 (Duo Li)
Current Assignee
Shanghai Jiaotong University
Original Assignee
Shanghai Jiaotong University
Priority date
Filing date
Publication date
Application filed by Shanghai Jiaotong University filed Critical Shanghai Jiaotong University
Priority to CN201410320983.7A priority Critical patent/CN104143188A/en
Publication of CN104143188A publication Critical patent/CN104143188A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides an image quality evaluation method based on multi-scale edge expression. Motivated by the strong dependence of the human visual system on edges and contours when understanding natural scenes, the method evaluates image quality using a multi-scale edge representation in the wavelet domain. First, the wavelet transform of the image is computed at multiple scales to obtain the modulus and argument of the wavelet coefficients. Then either the structural similarity between the wavelet-coefficient moduli of the original image and of the distorted image is computed, i.e., image quality is evaluated with the multi-scale modulus similarity (M2S) algorithm; or a multi-scale edge representation is obtained from the local maxima of the wavelet-coefficient modulus and the structural similarity of the multi-scale edges is computed, i.e., image quality is evaluated with the multi-scale edge similarity (M3S) algorithm. The method performs well: in most cases both M2S and M3S outperform the widely applied methods based on peak signal-to-noise ratio and on plain structural similarity.

Description

Image quality evaluation method based on multi-scale edge expression
Technical Field
The invention relates to the technical field of image processing, in particular to an image quality evaluation method based on multi-scale edge expression.
Background
Image quality evaluation techniques play a very important role in many image processing applications, such as system performance evaluation, image enhancement, denoising, and encoding parameter optimization. Image quality evaluation methods can be divided into three categories according to whether the original image is available: full-reference, reduced-reference, and no-reference methods. A full-reference method uses the entire original image, a reduced-reference method selectively uses partial features of the original image, and a no-reference method does not need the original image at all. Many researchers have invested in the study of computational image quality evaluation schemes in recent years, but image quality evaluation remains a very challenging task owing to our limited understanding of the human visual system.
Mean squared error (MSE) and peak signal-to-noise ratio (PSNR), although sometimes poorly correlated with the subjective scores of testers, are still the most common quality assessment criteria at present because of their simplicity. Researchers have recently proposed many full-reference image quality evaluation methods, but none is as widely adopted as MSE and PSNR; because of the complexity of feature extraction, effective reduced-reference image quality evaluation methods are difficult to design, and no-reference methods are even harder to develop.
Much earlier work in neuroscience, such as the papers of D. Hubel et al., "Receptive Fields and Functional Architecture" (Journal of Physiology, vol. 160, pp. 106-154) and "Receptive Fields and Functional Architecture of Monkey Striate Cortex" (Journal of Physiology, vol. 195, pp. 215-243), and in the field of computer vision, such as T. Lindeberg's "Scale-Space for Discrete Signals" (IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, no. 3, pp. 234-254) and J. H. Elder et al.'s "Local Scale Control for Edge Detection and Blur Estimation" (IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no. 7, pp. 699-716), has shown that multi-scale decomposition techniques simulate the psychophysiological mechanisms of the human visual system better than single-scale techniques, so multi-scale decomposition can be expected to achieve more satisfactory results on natural images. In addition, some research, such as J. H. Elder et al.'s "Evidence for boundary-specific grouping in human vision" (Vision Research, vol. 38, pp. 143-152), shows that the human visual system relies heavily on edges and contours when perceiving surface properties and understanding natural scenes.
Disclosure of Invention
Aiming at the defects in the prior art, the invention aims to provide an image quality evaluation method based on multi-scale edge expression that performs well, better in most cases than the widely used peak signal-to-noise ratio and plain structural similarity methods.
To achieve the above object, the present invention provides an image quality evaluation method based on multi-scale edge expression, which evaluates image quality in the wavelet domain using a multi-scale edge representation, motivated by the strong dependence of the human visual system on edges and contours when understanding natural scenes: first, the wavelet transform of the image is computed at multiple scales to obtain the modulus and argument of the wavelet coefficients, and then the structural similarity between the wavelet-coefficient moduli of the original and distorted images is computed, i.e., image quality is evaluated with the multi-scale modulus similarity algorithm; or the multi-scale edge representation of the image is obtained from the local maxima of the wavelet-coefficient modulus and the structural similarity of the multi-scale edges is computed, i.e., image quality is evaluated with the multi-scale edge similarity algorithm.
The method specifically comprises the following steps:
First step, perform a second-order wavelet transform of the image: first form the first-order partial derivatives of a twice-differentiable two-dimensional smoothing function $\theta(x,y)$ satisfying the conditions given below, obtaining the wavelet functions $\psi^1(x,y)$, $\psi^2(x,y)$; the dilations of the wavelet functions at scale $2^j$ are $\psi^1_{2^j}(x,y)$, $\psi^2_{2^j}(x,y)$. The second-order wavelet transform of the two-dimensional function $f(x,y)$ at scale $2^j$ can then be written in vector form as $\left[ W^1_{2^j}f(x,y),\; W^2_{2^j}f(x,y) \right]^{\mathrm T}$, where $\theta(x,y)$ is a twice-differentiable two-dimensional smoothing function satisfying certain conditions, the two-dimensional function $f(x,y)$ is the image to be evaluated, $(x,y)$ are the image coordinates, $W^1_{2^j}f(x,y)$ and $W^2_{2^j}f(x,y)$ are the horizontal and vertical wavelet coefficients of the second-order wavelet transform of the image $f(x,y)$ at scale $2^j$, and $j = 1, 2, 3, \ldots$ is a positive integer;
Second step, multi-scale edge extraction: for the wavelet-transform vector of the image at scale $2^j$, the modulus is obtained as $M_{2^j}f(x,y)$ and the direction is given by $A_{2^j}f(x,y)$; the multi-scale edges of the image are the local maxima of the modulus of its second-order wavelet transform, where $M_{2^j}f(x,y)$ is the modulus of the wavelet-transform vector $\left[ W^1_{2^j}f(x,y),\; W^2_{2^j}f(x,y) \right]^{\mathrm T}$ and $A_{2^j}f(x,y)$ is the direction of the wavelet-transform vector of the image;
Third step, compute the structural similarity between the wavelet-transform moduli of the original image and the distortion map at multiple scales, using the multi-scale modulus similarity method ($M^2S$): the similarity of the wavelet-transform moduli of the original image and the distortion map is quantified at multiple scales, and the similarities across scales are finally combined to evaluate the image quality of the distortion map; or use the maximum modulus similarity method ($M^3S$): the similarity of the multi-scale edges of the original image and the distortion map is quantified at multiple scales, and the similarities across scales are finally combined to evaluate the image quality of the distortion map.
Preferably, in the first step, the twice-differentiable two-dimensional smoothing function $\theta(x,y)$ should satisfy the following condition:
$$\theta(x,y) \ge 0, \qquad \iint_{\mathbb{R}^2} \theta(x,y)\,dx\,dy = 1$$
the first order partial derivatives are:
$$\psi^1(x,y) = \frac{\partial \theta(x,y)}{\partial x}, \qquad \psi^2(x,y) = \frac{\partial \theta(x,y)}{\partial y}$$
$\psi^1(x,y)$ and $\psi^2(x,y)$ have zero mean, so they can serve as wavelet functions, whose dilations at scale $2^j$ are expressed as:
$$\psi^1_{2^j}(x,y) = \frac{1}{2^{2j}}\,\psi^1\!\left(\frac{x}{2^j}, \frac{y}{2^j}\right), \qquad \psi^2_{2^j}(x,y) = \frac{1}{2^{2j}}\,\psi^2\!\left(\frac{x}{2^j}, \frac{y}{2^j}\right)$$
For a function $f(x,y) \in L^2(\mathbb{R}^2)$, the second-order wavelet transform at scale $2^j$ ($j = 1, 2, 3, \ldots$, a positive integer) is expressed as:
$$W^1_{2^j}f(x,y) = f * \psi^1_{2^j}(x,y), \qquad W^2_{2^j}f(x,y) = f * \psi^2_{2^j}(x,y)$$
where $W^1_{2^j}f(x,y)$ and $W^2_{2^j}f(x,y)$ are the wavelet coefficients in the horizontal and vertical directions, respectively;
Smoothing the image at multiple scales with the function $\theta(x,y)$ yields the following gradient-vector description:
$$\begin{bmatrix} W^1_{2^j}f(x,y) \\ W^2_{2^j}f(x,y) \end{bmatrix} = 2^j \begin{bmatrix} \dfrac{\partial}{\partial x}\,(f * \theta_{2^j})(x,y) \\[4pt] \dfrac{\partial}{\partial y}\,(f * \theta_{2^j})(x,y) \end{bmatrix} = 2^j\,\vec{\nabla}(f * \theta_{2^j})(x,y)$$
where $\left[ W^1_{2^j}f(x,y),\; W^2_{2^j}f(x,y) \right]^{\mathrm T}$ is the vector form of the image wavelet transform, $\theta(x,y)$ is the smoothing function, $f(x,y)$ is the image, $(f * \theta_{2^j})(x,y)$ denotes $f(x,y)$ smoothed by $\theta(x,y)$ at scale $2^j$, $*$ denotes convolution, $\partial/\partial x$ and $\partial/\partial y$ denote partial differentiation with respect to $x$ and $y$, $\frac{\partial}{\partial x}(f * \theta_{2^j})(x,y)$ and $\frac{\partial}{\partial y}(f * \theta_{2^j})(x,y)$ are the values of these partial derivatives at $(x,y)$, and $2^j\,\vec{\nabla}(f * \theta_{2^j})(x,y)$ is the gradient description of the image wavelet transform.
Preferably, in the second step, at scale $2^j$ the modulus of the wavelet-transform vector $\left[ W^1_{2^j}f(x,y),\; W^2_{2^j}f(x,y) \right]^{\mathrm T}$ of the image $f(x,y)$ is:

$$M_{2^j}f(x,y) = \sqrt{\left|W^1_{2^j}f(x,y)\right|^2 + \left|W^2_{2^j}f(x,y)\right|^2}$$
where $M_{2^j}f(x,y)$ is the modulus of the wavelet-transform vector, and $W^1_{2^j}f(x,y)$, $W^2_{2^j}f(x,y)$ are the two components of the wavelet-transform vector;
the direction of the wavelet transform of the image f (x, y) is given by:
$$A_{2^j}f(x,y) = \arctan\!\left(\frac{W^2_{2^j}f(x,y)}{W^1_{2^j}f(x,y)}\right)$$
where $A_{2^j}f(x,y)$ is the direction of the wavelet-transform vector, and $W^1_{2^j}f(x,y)$, $W^2_{2^j}f(x,y)$ are the two components of the wavelet-transform vector;
An edge point is a point $(x,y)$ at which the modulus $M_{2^j}f(x,y)$ attains a local maximum along the direction $A_{2^j}f(x,y)$; the multi-scale edges of the image are therefore the local maxima of the modulus of its second-order wavelet transform.
Preferably, in the third step, the multi-scale modulus similarity method ($M^2S$) measures the similarity of the moduli through the structural similarity (SSIM);
Let $Mo_{2^j}(x,y)$ and $Md_{2^j}(x,y)$ ($j = 1, 2, \ldots, N$) denote the wavelet-transform moduli of the original image and of the distortion map respectively, where $N$ is the number of decomposition layers; the mean structural similarity (MSSIM) between the moduli of the original and distorted images at scale $2^j$ can then be expressed as $\mathrm{MSSIM}\left[ Mo_{2^j}(x,y),\, Md_{2^j}(x,y) \right]$. The multi-scale modulus similarity for a decomposition of $N$ levels is computed as:
$$M^2S(N) = \sum_{j=1}^{N} W_j \cdot \mathrm{MSSIM}\left[ Mo_{2^j}(x,y),\; Md_{2^j}(x,y) \right]$$
where $W_j$ is the relative weight assigned to layer $j$, $N$ is the number of decomposition layers, $Mo_{2^j}(x,y)$ and $Md_{2^j}(x,y)$ are the wavelet-transform moduli of the original and distorted images, $\mathrm{MSSIM}[\cdot]$ is the mean structural similarity algorithm, and $M^2S(N)$ is the multi-scale modulus similarity for a decomposition of $N$ levels.
Preferably, in the third step, the maximum modulus similarity method ($M^3S$) measures the similarity of the multi-scale edges through the structural similarity (SSIM);
Let $MMo_{2^j}(x,y)$ and $MMd_{2^j}(x,y)$ ($j = 1, 2, \ldots, N$) denote the local maxima of the wavelet-transform moduli of the original image and of the distortion map respectively; the multi-scale maximum modulus similarity for a decomposition of $N$ levels is computed as:
$$M^3S(N) = \sum_{j=1}^{N} W_j \cdot \mathrm{MSSIM}\left[ MMo_{2^j}(x,y),\; MMd_{2^j}(x,y) \right]$$
where $W_j$ is the relative weight assigned to layer $j$, $N$ is the number of decomposition layers, $MMo_{2^j}(x,y)$ and $MMd_{2^j}(x,y)$ are the local maxima of the wavelet-transform moduli of the original and distorted images, $\mathrm{MSSIM}[\cdot]$ is the mean structural similarity algorithm, and $M^3S(N)$ is the multi-scale maximum modulus similarity for a decomposition of $N$ levels.
The principle of the invention is as follows:
Motivated by the strong dependence of the human visual system on edges and contours when understanding natural scenes, and by how well multi-scale decomposition simulates the psychophysiological mechanisms of the human visual system, the method computes the structural similarity between the wavelet-transform moduli, or between the local maxima of the moduli (the multi-scale edges), of the original and distorted images at multiple scales, and finally combines the structural similarity (SSIM) of the moduli or multi-scale edges across scales to evaluate image quality.
Compared with the prior art, the invention has the following beneficial effects:
The image quality evaluation method adopts the multi-scale modulus similarity algorithm and the multi-scale maximum modulus similarity algorithm; in most cases both perform better than the widely used peak signal-to-noise ratio algorithm and the plain structural similarity algorithm, so the method performs well.
Drawings
Other features, objects and advantages of the invention will become more apparent upon reading of the detailed description of non-limiting embodiments with reference to the following drawings:
FIG. 1 is a flow chart of a method according to an embodiment of the present invention;
FIGS. 2(a) and 2(b) show, respectively, the weighted-variance regression correlation and the Spearman rank correlation between the predictions of 19 models and the mean opinion scores on the JPEG picture set;
FIGS. 2(c) and 2(d) show, respectively, the weighted-variance regression correlation and the Spearman rank correlation between the predictions of 19 models and the mean opinion scores on the JPEG2000 picture set;
FIGS. 2(e) and 2(f) show, respectively, the weighted-variance regression correlation and the Spearman rank correlation between the predictions of 19 models and the mean opinion scores on the combined JPEG and JPEG2000 picture set;
FIGS. 3(a)-3(d) are scatter plots of the predictions of the four models PSNR, SSIM, $M^2S(3)$ and $M^3S(7)$ against the mean opinion scores (MOS) for all pictures used in the implementation of the present invention.
Detailed Description
The present invention will be described in detail with reference to specific examples. The following examples will assist those skilled in the art in further understanding the invention, but are not intended to limit the invention in any way. It should be noted that variations and modifications can be made by persons skilled in the art without departing from the spirit of the invention; all such variations and modifications fall within the scope of the present invention.
As shown in FIG. 1, the present embodiment provides an image quality evaluation method based on multi-scale edge expression, which evaluates image quality in the wavelet domain using a multi-scale edge representation, motivated by the strong dependence of the human visual system on edges and contours when understanding natural scenes. First, the wavelet transform of the image is computed at multiple scales to obtain the modulus and argument of the wavelet coefficients; then the structural similarity between the wavelet-coefficient moduli of the original and distorted images is computed, i.e., image quality is evaluated with the multi-scale modulus similarity algorithm; or the multi-scale edge representation of the image is obtained from the local maxima of the wavelet-coefficient modulus and the structural similarity of the multi-scale edges is computed, i.e., image quality is evaluated with the multi-scale edge similarity algorithm. In FIG. 1, $M^2S$ denotes the multi-scale modulus similarity method and $M^3S$ denotes the multi-scale maximum modulus similarity method.
The method specifically comprises the following steps:
First step, perform a second-order wavelet transform of the image: first form the first-order partial derivatives of a twice-differentiable two-dimensional smoothing function $\theta(x,y)$ satisfying the conditions below, obtaining the wavelet functions $\psi^1(x,y)$, $\psi^2(x,y)$; the dilations of the wavelet functions at scale $2^j$ are $\psi^1_{2^j}(x,y)$, $\psi^2_{2^j}(x,y)$, and the second-order wavelet transform of the two-dimensional function $f(x,y)$ at scale $2^j$ can be written in vector form as $\left[ W^1_{2^j}f(x,y),\; W^2_{2^j}f(x,y) \right]^{\mathrm T}$;
The twice-differentiable two-dimensional smoothing function $\theta(x,y)$ should satisfy the following condition:
$$\theta(x,y) \ge 0, \qquad \iint_{\mathbb{R}^2} \theta(x,y)\,dx\,dy = 1$$
the first order partial derivatives are:
$$\psi^1(x,y) = \frac{\partial \theta(x,y)}{\partial x}, \qquad \psi^2(x,y) = \frac{\partial \theta(x,y)}{\partial y}$$
$\psi^1(x,y)$ and $\psi^2(x,y)$ have zero mean, so they can serve as wavelet functions, whose dilations at scale $2^j$ are expressed as:
$$\psi^1_{2^j}(x,y) = \frac{1}{2^{2j}}\,\psi^1\!\left(\frac{x}{2^j}, \frac{y}{2^j}\right), \qquad \psi^2_{2^j}(x,y) = \frac{1}{2^{2j}}\,\psi^2\!\left(\frac{x}{2^j}, \frac{y}{2^j}\right)$$
Let the function $f(x,y) \in L^2(\mathbb{R}^2)$ be a two-dimensional image; its second-order wavelet transform at scale $2^j$ is expressed as:
$$W^1_{2^j}f(x,y) = f * \psi^1_{2^j}(x,y), \qquad W^2_{2^j}f(x,y) = f * \psi^2_{2^j}(x,y)$$
where $W^1_{2^j}f(x,y)$ and $W^2_{2^j}f(x,y)$ are the wavelet coefficients in the horizontal and vertical directions, respectively;
Smoothing the image at multiple scales with the function $\theta(x,y)$ yields the following gradient-vector description:
$$\begin{bmatrix} W^1_{2^j}f(x,y) \\ W^2_{2^j}f(x,y) \end{bmatrix} = 2^j \begin{bmatrix} \dfrac{\partial}{\partial x}\,(f * \theta_{2^j})(x,y) \\[4pt] \dfrac{\partial}{\partial y}\,(f * \theta_{2^j})(x,y) \end{bmatrix} = 2^j\,\vec{\nabla}(f * \theta_{2^j})(x,y)$$
where $\left[ W^1_{2^j}f(x,y),\; W^2_{2^j}f(x,y) \right]^{\mathrm T}$ is the vector form of the image wavelet transform, $\theta(x,y)$ is the smoothing function, $f(x,y)$ is the image, $(f * \theta_{2^j})(x,y)$ denotes $f(x,y)$ smoothed by $\theta(x,y)$ at scale $2^j$, $*$ denotes convolution, $\partial/\partial x$ and $\partial/\partial y$ denote partial differentiation with respect to $x$ and $y$, $\frac{\partial}{\partial x}(f * \theta_{2^j})(x,y)$ and $\frac{\partial}{\partial y}(f * \theta_{2^j})(x,y)$ are the values of these partial derivatives at $(x,y)$, and $2^j\,\vec{\nabla}(f * \theta_{2^j})(x,y)$ is the gradient description of the image wavelet transform.
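The smoothed-gradient formulation above maps directly onto a few lines of code. The following is a minimal sketch rather than the patent's reference implementation: it assumes a Gaussian for the smoothing function $\theta(x,y)$ (the patent only requires non-negativity, twice-differentiability and unit integral), approximates the partial derivatives with finite differences, and the function name `dyadic_wavelet_transform` is illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dyadic_wavelet_transform(f, levels):
    """Approximate second-order dyadic wavelet transform of an image.

    Returns a list of (W1, W2) pairs for scales 2^j, j = 1..levels, where
    W1 ~ 2^j * d/dx (f * theta_{2^j}) and W2 ~ 2^j * d/dy (f * theta_{2^j}),
    following the gradient-vector description above.
    """
    f = np.asarray(f, dtype=np.float64)
    coeffs = []
    for j in range(1, levels + 1):
        scale = 2.0 ** j
        smoothed = gaussian_filter(f, sigma=scale)  # f * theta_{2^j}, Gaussian assumed
        gy, gx = np.gradient(smoothed)              # finite-difference partials (y, then x)
        coeffs.append((scale * gx, scale * gy))     # (W1, W2) at scale 2^j
    return coeffs
```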
Second step, multi-scale edge extraction of the image: for the wavelet-transform vector of the image at scale $2^j$, the modulus is obtained as $M_{2^j}f(x,y)$ and the direction is given by $A_{2^j}f(x,y)$; the multi-scale edges of the image are the local maxima of the modulus of its second-order wavelet transform;
At scale $2^j$, the modulus of the wavelet-transform vector $\left[ W^1_{2^j}f(x,y),\; W^2_{2^j}f(x,y) \right]^{\mathrm T}$ of the image $f(x,y)$ is:

$$M_{2^j}f(x,y) = \sqrt{\left|W^1_{2^j}f(x,y)\right|^2 + \left|W^2_{2^j}f(x,y)\right|^2}$$
the direction is given by:
$$A_{2^j}f(x,y) = \arctan\!\left(\frac{W^2_{2^j}f(x,y)}{W^1_{2^j}f(x,y)}\right)$$
An edge point is a point $(x,y)$ at which the modulus $M_{2^j}f(x,y)$ attains a local maximum along the direction $A_{2^j}f(x,y)$; the multi-scale edges of the image are therefore the local maxima of the modulus of its second-order wavelet transform.
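A sketch of this step is given below, continuing the assumptions of the previous sketch. The modulus and direction follow the formulas above; the non-maximum suppression that keeps only the local maxima of the modulus along its direction is implemented here with a simple four-way quantization of the angle, which is an implementation convenience rather than anything the method prescribes.

```python
import numpy as np

def modulus_and_angle(w1, w2):
    """Modulus M_{2^j}f and direction A_{2^j}f of the wavelet-transform vector."""
    m = np.sqrt(np.abs(w1) ** 2 + np.abs(w2) ** 2)
    a = np.arctan2(w2, w1)
    return m, a

def multiscale_edges(m, a):
    """Keep only points where the modulus is locally maximal along its direction."""
    edges = np.zeros_like(m)
    # quantize the angle onto four neighbor axes: 0, 45, 90, 135 degrees
    q = (np.round(a / (np.pi / 4)).astype(int)) % 4
    offsets = {0: (0, 1), 1: (-1, 1), 2: (-1, 0), 3: (-1, -1)}
    for y in range(1, m.shape[0] - 1):
        for x in range(1, m.shape[1] - 1):
            dy, dx = offsets[q[y, x]]
            if m[y, x] >= m[y + dy, x + dx] and m[y, x] >= m[y - dy, x - dx]:
                edges[y, x] = m[y, x]
    return edges
```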
Third step, compute the structural similarity between the wavelet-transform moduli of the original image and the distortion map at multiple scales, combine the structural similarities across scales, and evaluate the image quality of the distortion map;
The multi-scale modulus similarity algorithm ($M^2S$) computes the similarity of the wavelet-transform moduli of the original and distorted images: the similarity of the moduli is quantified at multiple scales, and the similarities across scales are finally combined to evaluate the image quality of the distortion map, where the similarity of the moduli is measured by the structural similarity (SSIM); SSIM is an index for measuring the similarity of two images, proposed by Z. Wang et al. in "Image quality assessment: from error visibility to structural similarity", IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600-612;
Let $Mo_{2^j}(x,y)$ and $Md_{2^j}(x,y)$ ($j = 1, 2, \ldots, N$) denote the wavelet-transform moduli of the original image and of the distortion map respectively, where $N$ is the number of decomposition layers; the mean structural similarity (MSSIM) between the moduli of the original and distorted images at scale $2^j$ can then be expressed as $\mathrm{MSSIM}\left[ Mo_{2^j}(x,y),\, Md_{2^j}(x,y) \right]$. The multi-scale modulus similarity for a decomposition of $N$ levels is computed as:
$$M^2S(N) = \sum_{j=1}^{N} W_j \cdot \mathrm{MSSIM}\left[ Mo_{2^j}(x,y),\; Md_{2^j}(x,y) \right]$$
where $W_j$ is the relative weight assigned to layer $j$.
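Putting the pieces together, $M^2S(N)$ can be sketched as below, reusing `dyadic_wavelet_transform` and `modulus_and_angle` from the earlier sketches and the mean-SSIM implementation from scikit-image. Since the specific per-layer weights $W_j$ are not recoverable from this text, uniform weights are used purely as a placeholder.

```python
import numpy as np
from skimage.metrics import structural_similarity as mssim

def m2s(original, distorted, levels):
    """Multi-scale modulus similarity M2S(N): weighted sum over scales of the
    MSSIM between the wavelet-coefficient moduli of original and distorted images."""
    co = dyadic_wavelet_transform(original, levels)
    cd = dyadic_wavelet_transform(distorted, levels)
    weights = np.full(levels, 1.0 / levels)    # placeholder W_j (uniform)
    score = 0.0
    for j in range(levels):
        mo, _ = modulus_and_angle(*co[j])
        md, _ = modulus_and_angle(*cd[j])
        rng = max(mo.max() - mo.min(), 1e-12)  # data range of the moduli for SSIM
        score += weights[j] * mssim(mo, md, data_range=rng)
    return score
```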
Or calculating the similarity of the original image and the multi-scale edge of the distorted image-maximum mode similarity algorithm (M)3S), namely, the image quality of the distortion map is evaluated by quantifying the similarity of multi-scale edges of the original image and the distortion map on multiple scales and finally combining the similarity on the multiple scales, wherein the similarity of the multi-scale edges is measured by Structural Similarity (SSIM);
Let $MMo_{2^j}(x,y)$ and $MMd_{2^j}(x,y)$ ($j = 1, 2, \ldots, N$) denote the local maxima of the wavelet-transform moduli of the original image and of the distortion map respectively; the multi-scale maximum modulus similarity for a decomposition of $N$ levels is computed as:
$$M^3S(N) = \sum_{j=1}^{N} W_j \cdot \mathrm{MSSIM}\left[ MMo_{2^j}(x,y),\; MMd_{2^j}(x,y) \right]$$
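$M^3S(N)$ differs from $M^2S(N)$ only in that the MSSIM is computed on the multi-scale edge maps (the local maxima of the moduli) rather than on the full moduli; a sketch under the same assumptions as the previous ones:

```python
import numpy as np
from skimage.metrics import structural_similarity as mssim

def m3s(original, distorted, levels):
    """Multi-scale maximum modulus similarity M3S(N): as m2s, but the MSSIM is
    computed on the local maxima of the moduli (the multi-scale edge maps)."""
    co = dyadic_wavelet_transform(original, levels)
    cd = dyadic_wavelet_transform(distorted, levels)
    weights = np.full(levels, 1.0 / levels)    # placeholder W_j (uniform)
    score = 0.0
    for j in range(levels):
        eo = multiscale_edges(*modulus_and_angle(*co[j]))
        ed = multiscale_edges(*modulus_and_angle(*cd[j]))
        rng = max(eo.max() - eo.min(), 1e-12)
        score += weights[j] * mssim(eo, ed, data_range=rng)
    return score
```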
Following the above steps, the performance of the proposed methods is compared with that of MSE, PSNR and SSIM; the models compared in the experiments are listed as follows:
Model 1: PSNR
Model 2: MSE
Model 3: SSIM
Model 4: $M^2S(1)$
Model 5: $M^2S(2)$
...
Model 11: $M^2S(8)$
Model 12: $M^3S(1)$
Model 13: $M^3S(2)$
...
Model 19: $M^3S(8)$
The experiments were performed on the image quality evaluation database LIVE provided by H. R. Sheikh et al., which contains 233 JPEG-compressed pictures and 227 JPEG2000-compressed pictures at different bit rates. Quantitative evaluation uses the methodology proposed by the Video Quality Experts Group (VQEG) in its Phase I Full Reference-TV test: the correlation between objective and subjective scores after weighted-variance regression, the Spearman rank correlation between objective and subjective scores, and the outlier ratio of the predictions after nonlinear mapping.
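For reference, the VQEG-style indices can be computed as sketched below: the Spearman rank correlation on the raw predictions, and the Pearson correlation after a four-parameter logistic mapping of predictions onto MOS. The weighting by the variance of the subjective scores used in the weighted-variance regression, and the outlier-ratio computation, are omitted for brevity; the initial-parameter choice is an assumption.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import pearsonr, spearmanr

def logistic4(x, b1, b2, b3, b4):
    # VQEG-style 4-parameter logistic mapping of objective scores onto MOS
    return (b1 - b2) / (1.0 + np.exp(-(x - b3) / np.abs(b4))) + b2

def vqeg_metrics(pred, mos):
    pred, mos = np.asarray(pred, float), np.asarray(mos, float)
    srocc = spearmanr(pred, mos).correlation
    p0 = [mos.max(), mos.min(), np.median(pred), np.std(pred) + 1e-6]  # assumed init
    params, _ = curve_fit(logistic4, pred, mos, p0=p0, maxfev=10000)
    plcc = pearsonr(logistic4(pred, *params), mos)[0]
    return plcc, srocc
```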
FIG. 2 shows the effect of the present invention: the weighted-variance regression correlation and the Spearman rank correlation between the predictions of the 19 models and the mean opinion scores on different picture sets. FIGS. 2(a) and 2(b) show these two correlations on the JPEG picture set, where the horizontal-axis labels p1-p19 denote Model 1-Model 19 and the vertical axis shows the correlation values; FIGS. 2(c) and 2(d) show them on the JPEG2000 picture set; FIGS. 2(e) and 2(f) show them on the combined JPEG and JPEG2000 picture set, with the same axis conventions. The error bars in FIGS. 2(a)-2(f) represent the 95% confidence interval of each prediction. From FIGS. 2(a)-2(f) it can be seen that Models 4-9 and Models 13-19 perform equally excellently, Models 1-2 perform worst, and the remaining models perform in between.
Table 1 shows the performance of the 19 quality evaluation models on the LIVE JPEG picture set, the JPEG2000 picture set, and all pictures, evaluated with the 3 indices, where "Variance weighted regression" denotes the weighted-variance regression correlation value, "Spearman Correlation" denotes the Spearman rank correlation value, and "Outlier Ratio" denotes the proportion of outliers among the predictions after nonlinear mapping. The table shows that Model 6 and Model 18, i.e., $M^2S(3)$ and $M^3S(7)$, perform best.
Table 1 quality evaluation performance parameter values of different quality evaluation models on different picture sets
FIGS. 3(a)-3(d) are scatter plots of the predictions of the four models PSNR, SSIM, $M^2S(3)$ and $M^3S(7)$ against the mean opinion scores (MOS), where "+" marks the data points of MOS values against model predictions; of the three curves, the middle solid line is a four-parameter logistic fitting curve, and the dashed lines on either side of it delimit the 95% confidence interval of the fit.
It was also found in the experiments that, when evaluating JPEG and JPEG2000 pictures, $M^2S$ generally performs well at lower scales, while $M^3S$ generally performs well at higher scales. If $M^2S$ is used to evaluate image quality, only a low-level decomposition of the image is needed; $M^3S$ needs only the features contained in the multi-scale edges, but obtaining equally accurate predictions usually requires computing a higher-level wavelet decomposition.
The foregoing description of specific embodiments of the present invention has been presented. It is to be understood that the present invention is not limited to the specific embodiments described above, and that various changes and modifications may be made by one skilled in the art within the scope of the appended claims without departing from the spirit of the invention.

Claims (5)

1. An image quality evaluation method based on multi-scale edge expression, characterized by comprising the following steps:
First step, perform a second-order wavelet transform of the image: first form the first-order partial derivatives of a twice-differentiable two-dimensional smoothing function $\theta(x,y)$ satisfying the conditions given below, obtaining the wavelet functions $\psi^1(x,y)$, $\psi^2(x,y)$; the dilations of the wavelet functions at scale $2^j$ are $\psi^1_{2^j}(x,y)$, $\psi^2_{2^j}(x,y)$. The second-order wavelet transform of the two-dimensional function $f(x,y)$ at scale $2^j$ can then be written in vector form as $\left[ W^1_{2^j}f(x,y),\; W^2_{2^j}f(x,y) \right]^{\mathrm T}$, where $\theta(x,y)$ is a twice-differentiable two-dimensional smoothing function satisfying certain conditions, the two-dimensional function $f(x,y)$ is the image to be evaluated, $(x,y)$ are the image coordinates, $W^1_{2^j}f(x,y)$ and $W^2_{2^j}f(x,y)$ are the horizontal and vertical wavelet coefficients of the second-order wavelet transform of the image $f(x,y)$ at scale $2^j$, and $j = 1, 2, 3, \ldots$ is a positive integer;
Second step, multi-scale edge extraction: for the wavelet-transform vector of the image at scale $2^j$, the modulus is obtained as $M_{2^j}f(x,y)$ and the direction is given by $A_{2^j}f(x,y)$; the multi-scale edges of the image are the local maxima of the modulus of its second-order wavelet transform, where $M_{2^j}f(x,y)$ is the modulus of the wavelet-transform vector $\left[ W^1_{2^j}f(x,y),\; W^2_{2^j}f(x,y) \right]^{\mathrm T}$ of the image, and $A_{2^j}f(x,y)$ is the direction of the wavelet-transform vector of the image;
Third step, compute the structural similarity between the wavelet-transform moduli of the original image and the distortion map at multiple scales using the multi-scale modulus similarity method $M^2S$: quantify the similarity of the wavelet-transform moduli of the original image and the distortion map at multiple scales and finally combine the similarities across scales to evaluate the image quality of the distortion map; or use the maximum modulus similarity method $M^3S$: quantify the similarity of the multi-scale edges of the original image and the distortion map at multiple scales and finally combine the similarities across scales to evaluate the image quality of the distortion map.
2. The method according to claim 1, characterized in that, in the first step, the twice-differentiable two-dimensional smoothing function $\theta(x,y)$ should satisfy the following condition:
$$\theta(x,y) \ge 0, \qquad \iint_{\mathbb{R}^2} \theta(x,y)\,dx\,dy = 1$$
the first order partial derivatives are:
$$\psi^1(x,y) = \frac{\partial \theta(x,y)}{\partial x}, \qquad \psi^2(x,y) = \frac{\partial \theta(x,y)}{\partial y}$$
$\psi^1(x,y)$ and $\psi^2(x,y)$ have zero mean, so they can serve as wavelet functions, whose dilations at scale $2^j$ are expressed as:
$$\psi^1_{2^j}(x,y) = \frac{1}{2^{2j}}\,\psi^1\!\left(\frac{x}{2^j}, \frac{y}{2^j}\right), \qquad \psi^2_{2^j}(x,y) = \frac{1}{2^{2j}}\,\psi^2\!\left(\frac{x}{2^j}, \frac{y}{2^j}\right)$$
For a function $f(x,y) \in L^2(\mathbb{R}^2)$, the second-order wavelet transform at scale $2^j$ is expressed as:
$$W^1_{2^j}f(x,y) = f * \psi^1_{2^j}(x,y), \qquad W^2_{2^j}f(x,y) = f * \psi^2_{2^j}(x,y)$$
where $W^1_{2^j}f(x,y)$ and $W^2_{2^j}f(x,y)$ are the wavelet coefficients in the horizontal and vertical directions, respectively;
Smoothing the image at multiple scales with the function $\theta(x,y)$ yields the following gradient-vector description:
$$\begin{bmatrix} W^1_{2^j}f(x,y) \\ W^2_{2^j}f(x,y) \end{bmatrix} = 2^j \begin{bmatrix} \dfrac{\partial}{\partial x}\,(f * \theta_{2^j})(x,y) \\[4pt] \dfrac{\partial}{\partial y}\,(f * \theta_{2^j})(x,y) \end{bmatrix} = 2^j\,\vec{\nabla}(f * \theta_{2^j})(x,y)$$
where $\left[ W^1_{2^j}f(x,y),\; W^2_{2^j}f(x,y) \right]^{\mathrm T}$ is the vector form of the image wavelet transform, $\theta(x,y)$ is the smoothing function, $f(x,y)$ is the image, $(f * \theta_{2^j})(x,y)$ denotes $f(x,y)$ smoothed by $\theta(x,y)$ at scale $2^j$, $*$ denotes convolution, $\partial/\partial x$ and $\partial/\partial y$ denote partial differentiation with respect to $x$ and $y$, $\frac{\partial}{\partial x}(f * \theta_{2^j})(x,y)$ and $\frac{\partial}{\partial y}(f * \theta_{2^j})(x,y)$ are the values of these partial derivatives at $(x,y)$, and $2^j\,\vec{\nabla}(f * \theta_{2^j})(x,y)$ is the gradient description of the image wavelet transform.
3. The image quality evaluation method based on multi-scale edge expression according to claim 1, characterized in that, in the second step, at scale $2^j$ the modulus of the wavelet-transform vector $\left[ W^1_{2^j}f(x,y),\; W^2_{2^j}f(x,y) \right]^{\mathrm T}$ of the image $f(x,y)$ is:

$$M_{2^j}f(x,y) = \sqrt{\left|W^1_{2^j}f(x,y)\right|^2 + \left|W^2_{2^j}f(x,y)\right|^2}$$
where $M_{2^j}f(x,y)$ is the modulus of the wavelet-transform vector of the image $f(x,y)$, and $W^1_{2^j}f(x,y)$, $W^2_{2^j}f(x,y)$ are the horizontal and vertical components of the wavelet-transform vector, respectively;
the direction of the wavelet transform of the image f (x, y) is given by:
$$A_{2^j}f(x,y) = \arctan\!\left(\frac{W^2_{2^j}f(x,y)}{W^1_{2^j}f(x,y)}\right)$$
where $A_{2^j}f(x,y)$ is the direction of the wavelet-transform vector, and $W^1_{2^j}f(x,y)$, $W^2_{2^j}f(x,y)$ are the horizontal and vertical components of the wavelet-transform vector, respectively;
An edge point is a point $(x,y)$ at which the modulus $M_{2^j}f(x,y)$ attains a local maximum along the direction $A_{2^j}f(x,y)$; the multi-scale edges of the image are therefore the local maxima of the modulus of its second-order wavelet transform.
4. The image quality evaluation method based on multi-scale edge expression according to any one of claims 1 to 3, characterized in that, in the third step, the multi-scale modulus similarity method $M^2S$ measures the similarity of the moduli through the structural similarity;
Let $Mo_{2^j}(x,y)$ and $Md_{2^j}(x,y)$ ($j = 1, 2, \ldots, N$) denote the wavelet-transform moduli of the original image and of the distortion map respectively, where $N$ is the number of decomposition layers; the average structural similarity between the moduli of the original and distorted images at scale $2^j$ is expressed as $\mathrm{MSSIM}\left[ Mo_{2^j}(x,y),\, Md_{2^j}(x,y) \right]$. The multi-scale modulus similarity for a decomposition of $N$ levels is computed as:
$$M^2S(N) = \sum_{j=1}^{N} W_j \cdot \mathrm{MSSIM}\left[ Mo_{2^j}(x,y),\; Md_{2^j}(x,y) \right]$$
where $W_j$ is the relative weight assigned to layer $j$, $N$ is the number of decomposition layers, $Mo_{2^j}(x,y)$ and $Md_{2^j}(x,y)$ are the wavelet-transform moduli of the original and distorted images, $\mathrm{MSSIM}[\cdot]$ is the average structural similarity algorithm, and $M^2S(N)$ is the multi-scale modulus similarity for a decomposition of $N$ levels.
5. The image quality evaluation method based on multi-scale edge expression according to any one of claims 1 to 3, characterized in that, in the third step, the maximum modulus similarity method $M^3S$ measures the similarity of the multi-scale edges through the structural similarity;
Let $MMo_{2^j}(x,y)$ and $MMd_{2^j}(x,y)$ ($j = 1, 2, \ldots, N$) denote the local maxima of the wavelet-transform moduli of the original image and of the distortion map respectively; the multi-scale maximum modulus similarity for a decomposition of $N$ levels is computed as:
$$M^3S(N) = \sum_{j=1}^{N} W_j \cdot \mathrm{MSSIM}\left[ MMo_{2^j}(x,y),\; MMd_{2^j}(x,y) \right]$$
where $W_j$ is the relative weight assigned to layer $j$, $N$ is the number of decomposition layers, $MMo_{2^j}(x,y)$ and $MMd_{2^j}(x,y)$ are the local maxima of the wavelet-transform moduli of the original and distorted images, $\mathrm{MSSIM}[\cdot]$ is the average structural similarity algorithm, and $M^3S(N)$ is the multi-scale maximum modulus similarity for a decomposition of $N$ levels.
CN201410320983.7A 2014-07-04 2014-07-04 Image quality evaluation method based on multi-scale edge expression Pending CN104143188A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410320983.7A CN104143188A (en) 2014-07-04 2014-07-04 Image quality evaluation method based on multi-scale edge expression


Publications (1)

Publication Number Publication Date
CN104143188A 2014-11-12

Family

ID=51852356

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410320983.7A Pending CN104143188A (en) 2014-07-04 2014-07-04 Image quality evaluation method based on multi-scale edge expression

Country Status (1)

Country Link
CN (1) CN104143188A (en)



Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080240603A1 (en) * 2007-03-29 2008-10-02 Texas Instruments Incorporated Methods and apparatus for image enhancement
CN101127926A (en) * 2007-09-14 2008-02-20 西安电子科技大学 Image quality evaluation method based on multi-scale geometric analysis

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
GUANGTAO ZHAI et al.: "Image Quality Assessment Metrics Based on Multi-scale Edge Presentation", IEEE Workshop on Signal Processing Systems Design and Implementation, 2005 *
Guangtao Zhai et al.: "A fast image enhancement algorithm based on multi-scale edge representation" (基于多尺度边缘表示的图像增强快速算法), Journal of Image and Graphics (中国图象图形学报) *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106023208A (en) * 2016-05-23 2016-10-12 北京大学 Objective evaluation method for image quality
CN106023208B (en) * 2016-05-23 2019-01-18 北京大学 The method for objectively evaluating of picture quality
CN111598826A (en) * 2019-02-19 2020-08-28 上海交通大学 Image objective quality evaluation method and system based on joint multi-scale image characteristics
CN111598826B (en) * 2019-02-19 2023-05-02 上海交通大学 Picture objective quality evaluation method and system based on combined multi-scale picture characteristics
CN113222902A (en) * 2021-04-16 2021-08-06 北京科技大学 No-reference image quality evaluation method and system
CN113222902B (en) * 2021-04-16 2024-02-02 北京科技大学 No-reference image quality evaluation method and system


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20141112