CN103475897B - Adaptive image quality evaluation method based on distortion type judgment - Google Patents


Info

Publication number
CN103475897B
CN103475897B (application CN201310406821.0A)
Authority
CN
China
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201310406821.0A
Other languages
Chinese (zh)
Other versions
CN103475897A (en)
Inventor
蒋刚毅
靳鑫
郁梅
邵枫
彭宗举
陈芬
王晓东
李福翠
Current Assignee
Ningbo University
Original Assignee
Ningbo University
Application filed by Ningbo University filed Critical Ningbo University
Priority to CN201310406821.0A
Publication of CN103475897A
Application granted
Publication of CN103475897B

Landscapes

  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Compression Of Band Width Or Redundancy In Fax (AREA)

Abstract

The invention discloses an adaptive image quality evaluation method based on distortion type judgment. First, the distortion type of the image is judged and classified as white Gaussian noise distortion, JPEG distortion, or blur-class distortion, where blur-class distortion covers Gaussian blur distortion, JPEG2000 distortion and fast-fading distortion. Using the judgment result, images with white Gaussian noise distortion are evaluated with a pixel-domain structural similarity model, images with JPEG distortion with a DCT-domain structural similarity model, and images with blur-class distortion with a wavelet-domain structural similarity model. Implementation results show that this objective evaluation method, by means of the distortion judgment, combines the respective structural similarity models well for distorted images of different distortion types, and that its evaluation results conform closely to subjective human visual perception.

Description

Self-adaptive image quality evaluation method based on distortion type judgment
Technical Field
The invention relates to an image quality evaluation technology, in particular to a self-adaptive image objective quality evaluation method based on distortion type judgment.
Background
With the rapid development of modern communication technology, society has become information-driven, and the quality of images, as important information carriers, directly influences how accurately and completely recipients acquire information. However, during image acquisition, processing, storage and transmission, distortion or degradation caused by imperfect processing methods or imperfect external equipment cannot be avoided, and different degrees of distortion or degradation cause different degrees of loss of image information. Given how widely image information technology is applied, measuring the information loss caused by different degrees of image distortion is important, and image quality evaluation addresses exactly this practical problem. Methodologically, image quality evaluation can be divided into subjective quality evaluation and objective quality evaluation. Subjective quality evaluation relies on human experimenters to score image quality; it is cumbersome and time-consuming, and is not suitable for integration into practical applications. Objective quality evaluation, by contrast, is simple to operate, easy to implement and amenable to real-time algorithm optimization, and has therefore become a research focus in image quality evaluation. According to which aspects of the human visual system (HVS) they emphasize, objective methods can be divided into two types: error-sensitivity-based evaluation methods and structural-similarity-based evaluation methods.
Error-sensitivity-based methods take into account visual nonlinearity, multiple channels, the band-pass characteristic of contrast sensitivity, masking effects, interactions of different excitations among the channels, and visual psychology; but because the HVS is not yet fully understood, accurate modeling faces obvious obstacles. Structural-similarity-based methods assume that a natural image has a specific structure and that human vision perceives an image mainly by extracting this structural information, so they evaluate the structural similarity of the image signals directly; they have low implementation complexity and wide applicability. However, the structural similarity methods in the prior art are numerous, and no single one of them can guarantee accurate quality evaluation results for images with different distortion types.
Disclosure of Invention
The invention aims to provide an adaptive objective image quality evaluation method that can effectively improve the accuracy of quality evaluation results for images with different distortion types.
The technical scheme adopted by the invention to solve this technical problem is as follows: an adaptive image quality evaluation method based on distortion type judgment comprises the following processing procedure:
firstly, determining the distortion type of a distorted image to be evaluated;
secondly, corresponding processing is carried out according to the judged distortion type of the distorted image to be evaluated:
if the distorted image has white Gaussian noise distortion, the original undistorted image and the distorted image to be evaluated are divided into a number of overlapping 8×8 image blocks in the pixel domain, and the pixel-domain structural similarity between every pair of co-located image blocks of the two images is obtained by calculating the luminance mean and standard deviation of all pixels in each image block of both images, together with the covariance between the pixel luminance values of each pair of co-located blocks;
if the distorted image has JPEG distortion, both images are divided into a number of overlapping 8×8 image blocks in the DCT (Discrete Cosine Transform) domain, and the DCT-domain structural similarity between every pair of co-located blocks is obtained by calculating the mean and standard deviation of the DCT coefficients of each block of both images, together with the covariance between the DCT coefficients of each pair of co-located blocks;
if the distorted image has blur-class distortion, both images are divided into a number of overlapping 8×8 image blocks in the wavelet domain, and the wavelet-domain structural similarity between every pair of co-located blocks is obtained by calculating the mean and standard deviation of the wavelet coefficients of each block of both images, together with the covariance between the wavelet coefficients of each pair of co-located blocks;
and finally, the objective quality score of the distorted image to be evaluated is obtained from the structural similarities between all pairs of co-located image blocks of the original undistorted image and the distorted image to be evaluated.
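The three-way routing in the procedure above can be sketched as a small dispatch function; the type labels and model names below are illustrative, not taken from the patent:

```python
def select_model(distortion_type):
    """Route a judged distortion type to the structural-similarity
    domain used for evaluation. Labels are illustrative only."""
    blur_class = {"gaussian_blur", "jpeg2000", "fast_fading"}
    if distortion_type == "white_gaussian_noise":
        return "pixel_domain_ssim"      # pixel-domain structural similarity
    if distortion_type == "jpeg":
        return "dct_domain_ssim"        # DCT-domain structural similarity
    if distortion_type in blur_class:
        return "wavelet_domain_ssim"    # wavelet-domain structural similarity
    raise ValueError(f"unknown distortion type: {distortion_type!r}")
```

The point of the design is that the three blur-class distortions (Gaussian blur, JPEG2000, fast fading) all fall through to the same wavelet-domain model.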
The adaptive image quality evaluation method based on distortion type judgment according to the invention specifically comprises the following steps:
firstly, letting X denote the original undistorted image and Y the distorted image to be evaluated, and determining the distortion type of Y through a distortion type discrimination method, wherein the distortion type of Y is one of white Gaussian noise distortion, JPEG distortion and blur-class distortion, and blur-class distortion comprises Gaussian blur distortion, JPEG2000 distortion and fast-fading distortion;
if the distortion type of the distorted image Y is white Gaussian noise distortion, moving a sliding window of size 8×8 over X pixel by pixel, dividing X into M×N overlapping image blocks of size 8×8, and recording the image block at coordinate position (i, j) in X as $x_{i,j}$; similarly, moving the 8×8 sliding window over Y pixel by pixel, dividing Y into M×N overlapping 8×8 image blocks, and recording the block at (i, j) in Y as $y_{i,j}$; wherein H denotes the height of X and Y, W denotes the width of X and Y, M = H − 7 and N = W − 7 (one block per window position of the unit-stride window), 1 ≤ i ≤ M, 1 ≤ j ≤ N;
if the distortion type of the distorted image Y is JPEG distortion, moving a sliding window of size 8×8 over X pixel by pixel, dividing X into M×N overlapping 8×8 image blocks, recording the block at coordinate position (i, j) in X as $x_{i,j}$, and applying a two-dimensional DCT to every block $x_{i,j}$ to obtain the corresponding transformed block $x^{D}_{i,j}$; similarly, moving the 8×8 sliding window over Y pixel by pixel, dividing Y into M×N overlapping 8×8 image blocks, recording the block at (i, j) in Y as $y_{i,j}$, and applying a two-dimensional DCT to every block $y_{i,j}$ to obtain the corresponding transformed block $y^{D}_{i,j}$; wherein H denotes the height of X and Y, W denotes the width of X and Y, M = H − 7, N = W − 7, 1 ≤ i ≤ M, 1 ≤ j ≤ N;
If the distortion type of the distorted image Y is blur-class distortion, performing a one-level wavelet transform on X and extracting the approximation component, denoted $X_{A}$; moving a sliding window of size 8×8 over $X_{A}$ point by point, dividing $X_{A}$ into $M^{W}\times N^{W}$ overlapping 8×8 image blocks, and recording the block at coordinate position $(i^{W}, j^{W})$ in $X_{A}$ as $x^{W}_{i^{W},j^{W}}$; similarly, performing a one-level wavelet transform on Y, extracting the approximation component $Y_{A}$, dividing $Y_{A}$ into $M^{W}\times N^{W}$ overlapping 8×8 image blocks, and recording the block at $(i^{W}, j^{W})$ in $Y_{A}$ as $y^{W}_{i^{W},j^{W}}$; wherein $H^{W}$ denotes the height of $X_{A}$ and $Y_{A}$, $W^{W}$ denotes their width, $M^{W} = H^{W} − 7$, $N^{W} = W^{W} − 7$, $1 \le i^{W} \le M^{W}$, $1 \le j^{W} \le N^{W}$;
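The pixel-by-pixel sliding-window partition used in these steps can be sketched as follows; it assumes, as the unit-stride window implies, M = H − 7 rows and N = W − 7 columns of blocks:

```python
import numpy as np

def sliding_blocks(img, size=8):
    """Slide a size x size window one pixel at a time over img,
    producing M x N overlapped blocks, M = H - size + 1, N = W - size + 1."""
    H, W = img.shape
    M, N = H - size + 1, W - size + 1
    return np.array([[img[i:i + size, j:j + size] for j in range(N)]
                     for i in range(M)])

img = np.arange(100.0).reshape(10, 10)
blocks = sliding_blocks(img)   # a 10x10 image gives a 3x3 grid of 8x8 blocks
```

In practice the blocks are usually generated lazily rather than materialized, since an H×W image yields roughly H·W blocks of 64 values each.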
thirdly, if the distortion type of the distorted image Y is white Gaussian noise distortion, calculating the luminance mean and standard deviation of all pixels in each image block of X and in each image block of Y, and then the covariance between the pixel luminance values of every pair of co-located blocks of X and Y; denoting the luminance mean and standard deviation of the block $x_{i,j}$ at (i, j) in X as $\mu_{x_{i,j}}$ and $\sigma_{x_{i,j}}$, those of the block $y_{i,j}$ at (i, j) in Y as $\mu_{y_{i,j}}$ and $\sigma_{y_{i,j}}$, and the covariance between all pixels of $x_{i,j}$ and $y_{i,j}$ as $\sigma_{x_{i,j}y_{i,j}}$:

$$\mu_{x_{i,j}}=\frac{1}{64}\sum_{u=1}^{8}\sum_{v=1}^{8}x_{i,j}(u,v),\qquad \sigma_{x_{i,j}}=\sqrt{\frac{1}{64}\sum_{u=1}^{8}\sum_{v=1}^{8}\bigl(x_{i,j}(u,v)-\mu_{x_{i,j}}\bigr)^{2}},$$

$$\mu_{y_{i,j}}=\frac{1}{64}\sum_{u=1}^{8}\sum_{v=1}^{8}y_{i,j}(u,v),\qquad \sigma_{y_{i,j}}=\sqrt{\frac{1}{64}\sum_{u=1}^{8}\sum_{v=1}^{8}\bigl(y_{i,j}(u,v)-\mu_{y_{i,j}}\bigr)^{2}},$$

$$\sigma_{x_{i,j}y_{i,j}}=\frac{1}{64}\sum_{u=1}^{8}\sum_{v=1}^{8}\bigl(x_{i,j}(u,v)-\mu_{x_{i,j}}\bigr)\bigl(y_{i,j}(u,v)-\mu_{y_{i,j}}\bigr),$$

wherein $x_{i,j}(u,v)$ denotes the luminance value of the pixel at coordinate position (u, v) in $x_{i,j}$, $y_{i,j}(u,v)$ denotes the luminance value of the pixel at (u, v) in $y_{i,j}$, 1 ≤ u ≤ 8 and 1 ≤ v ≤ 8;
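A minimal sketch of the per-block statistics just defined, using NumPy and the 1/64 normalisation of the formulas above (the function name is illustrative):

```python
import numpy as np

def block_stats(x, y):
    """Luminance mean, standard deviation and covariance of one pair of
    co-located 8x8 blocks, with the 1/64 normalisation used above."""
    mu_x, mu_y = x.mean(), y.mean()
    sigma_x = np.sqrt(((x - mu_x) ** 2).mean())
    sigma_y = np.sqrt(((y - mu_y) ** 2).mean())
    sigma_xy = ((x - mu_x) * (y - mu_y)).mean()
    return mu_x, mu_y, sigma_x, sigma_y, sigma_xy
```

Note that `mean()` over a 64-element block is exactly the 1/64-weighted double sum of the text, and that the standard deviation here is the biased (population) estimate, matching the formulas.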
if the distortion type of the distorted image Y is JPEG distortion, calculating the luminance mean and standard deviation of all pixels in each image block of X and of Y exactly as in the white Gaussian noise case above, i.e. $\mu_{x_{i,j}}$, $\sigma_{x_{i,j}}$, $\mu_{y_{i,j}}$ and $\sigma_{y_{i,j}}$; then calculating the mean and standard deviation of the DCT AC coefficients of each transformed block of X and of Y, and finally the covariance between the AC coefficients of every pair of co-located transformed blocks; denoting the mean and standard deviation of all AC coefficients of the DCT-domain block $x^{D}_{i,j}$ obtained from $x_{i,j}$ as $\mu_{x^{D}_{i,j}}$ and $\sigma_{x^{D}_{i,j}}$, those of $y^{D}_{i,j}$ obtained from $y_{i,j}$ as $\mu_{y^{D}_{i,j}}$ and $\sigma_{y^{D}_{i,j}}$, and the covariance between all AC coefficients of $x^{D}_{i,j}$ and $y^{D}_{i,j}$ as $\sigma_{x^{D}_{i,j}y^{D}_{i,j}}$:

$$\mu_{x^{D}_{i,j}}=\frac{1}{64}\sum_{u^{D}=1}^{8}\sum_{v^{D}=1}^{8}x^{D}_{i,j}(u^{D},v^{D}),\qquad \sigma_{x^{D}_{i,j}}=\sqrt{\frac{1}{64}\sum_{u^{D}=1}^{8}\sum_{v^{D}=1}^{8}\bigl(x^{D}_{i,j}(u^{D},v^{D})-\mu_{x^{D}_{i,j}}\bigr)^{2}},$$

$$\mu_{y^{D}_{i,j}}=\frac{1}{64}\sum_{u^{D}=1}^{8}\sum_{v^{D}=1}^{8}y^{D}_{i,j}(u^{D},v^{D}),\qquad \sigma_{y^{D}_{i,j}}=\sqrt{\frac{1}{64}\sum_{u^{D}=1}^{8}\sum_{v^{D}=1}^{8}\bigl(y^{D}_{i,j}(u^{D},v^{D})-\mu_{y^{D}_{i,j}}\bigr)^{2}},$$

$$\sigma_{x^{D}_{i,j}y^{D}_{i,j}}=\frac{1}{64}\sum_{u^{D}=1}^{8}\sum_{v^{D}=1}^{8}\bigl(x^{D}_{i,j}(u^{D},v^{D})-\mu_{x^{D}_{i,j}}\bigr)\bigl(y^{D}_{i,j}(u^{D},v^{D})-\mu_{y^{D}_{i,j}}\bigr),$$

wherein $x^{D}_{i,j}(u^{D},v^{D})$ denotes the DCT coefficient value at coordinate position $(u^{D},v^{D})$ in $x^{D}_{i,j}$, $y^{D}_{i,j}(u^{D},v^{D})$ denotes the DCT coefficient value at $(u^{D},v^{D})$ in $y^{D}_{i,j}$, $1 \le u^{D} \le 8$, $1 \le v^{D} \le 8$, and $u^{D}$ and $v^{D}$ are not simultaneously 1 (i.e. the DC coefficient is excluded);
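The DCT-domain statistics can be sketched as below; the 2-D DCT is implemented directly as an orthonormal DCT-II matrix product, and the AC statistics keep the 1/64 normalisation of the text while excluding the DC coefficient:

```python
import numpy as np

def dct2(block):
    """Orthonormal 2-D DCT-II of an 8x8 block, computed as C @ X @ C.T."""
    n = 8
    k = np.arange(n)[:, None]          # frequency index
    m = np.arange(n)[None, :]          # spatial index
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)         # DC basis row
    return C @ block @ C.T

def ac_stats(dct_block):
    """Mean and standard deviation of the 63 AC coefficients
    (DC at index (0, 0) excluded), with the text's 1/64 normalisation."""
    mask = np.ones((8, 8), dtype=bool)
    mask[0, 0] = False                 # drop the DC coefficient
    ac = dct_block[mask]
    mu = ac.sum() / 64.0
    sigma = np.sqrt(((ac - mu) ** 2).sum() / 64.0)
    return mu, sigma
```

For a constant block all AC coefficients vanish, so both AC statistics are zero; only the DC coefficient carries the block's energy.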
if the distortion type of the distorted image Y is blur-class distortion, calculating the mean and standard deviation of all coefficient values in each image block of the level-1 approximation component $X_{A}$ of X and of the level-1 approximation component $Y_{A}$ of Y, and then the covariance between the coefficients of every pair of co-located blocks of $X_{A}$ and $Y_{A}$; denoting the mean and standard deviation of all coefficients of the block $x^{W}_{i^{W},j^{W}}$ at $(i^{W},j^{W})$ in $X_{A}$ as $\mu_{x^{W}_{i^{W},j^{W}}}$ and $\sigma_{x^{W}_{i^{W},j^{W}}}$, those of the block $y^{W}_{i^{W},j^{W}}$ at $(i^{W},j^{W})$ in $Y_{A}$ as $\mu_{y^{W}_{i^{W},j^{W}}}$ and $\sigma_{y^{W}_{i^{W},j^{W}}}$, and the covariance between all coefficients of $x^{W}_{i^{W},j^{W}}$ and $y^{W}_{i^{W},j^{W}}$ as $\sigma_{x^{W}_{i^{W},j^{W}}y^{W}_{i^{W},j^{W}}}$:

$$\mu_{x^{W}_{i^{W},j^{W}}}=\frac{1}{64}\sum_{u^{W}=1}^{8}\sum_{v^{W}=1}^{8}x^{W}_{i^{W},j^{W}}(u^{W},v^{W}),\qquad \sigma_{x^{W}_{i^{W},j^{W}}}=\sqrt{\frac{1}{64}\sum_{u^{W}=1}^{8}\sum_{v^{W}=1}^{8}\bigl(x^{W}_{i^{W},j^{W}}(u^{W},v^{W})-\mu_{x^{W}_{i^{W},j^{W}}}\bigr)^{2}},$$

$$\mu_{y^{W}_{i^{W},j^{W}}}=\frac{1}{64}\sum_{u^{W}=1}^{8}\sum_{v^{W}=1}^{8}y^{W}_{i^{W},j^{W}}(u^{W},v^{W}),\qquad \sigma_{y^{W}_{i^{W},j^{W}}}=\sqrt{\frac{1}{64}\sum_{u^{W}=1}^{8}\sum_{v^{W}=1}^{8}\bigl(y^{W}_{i^{W},j^{W}}(u^{W},v^{W})-\mu_{y^{W}_{i^{W},j^{W}}}\bigr)^{2}},$$

$$\sigma_{x^{W}_{i^{W},j^{W}}y^{W}_{i^{W},j^{W}}}=\frac{1}{64}\sum_{u^{W}=1}^{8}\sum_{v^{W}=1}^{8}\bigl(x^{W}_{i^{W},j^{W}}(u^{W},v^{W})-\mu_{x^{W}_{i^{W},j^{W}}}\bigr)\bigl(y^{W}_{i^{W},j^{W}}(u^{W},v^{W})-\mu_{y^{W}_{i^{W},j^{W}}}\bigr),$$

wherein $x^{W}_{i^{W},j^{W}}(u^{W},v^{W})$ denotes the coefficient value at coordinate position $(u^{W},v^{W})$ in $x^{W}_{i^{W},j^{W}}$, $y^{W}_{i^{W},j^{W}}(u^{W},v^{W})$ denotes the coefficient value at $(u^{W},v^{W})$ in $y^{W}_{i^{W},j^{W}}$, $1 \le u^{W} \le 8$, $1 \le v^{W} \le 8$;
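A sketch of extracting the level-1 approximation component these statistics operate on; the excerpt does not fix a particular wavelet, so a Haar transform is assumed here purely as a simple concrete choice:

```python
import numpy as np

def haar_approx(img):
    """Level-1 2-D Haar approximation (LL) sub-band: each 2x2
    neighbourhood collapses to (a + b + c + d) / 2 under the
    orthonormal Haar filters. Haar is an ASSUMED wavelet choice."""
    a = img[0::2, 0::2]
    b = img[0::2, 1::2]
    c = img[1::2, 0::2]
    d = img[1::2, 1::2]
    return (a + b + c + d) / 2.0

# a 16x16 image yields an 8x8 approximation component, i.e. exactly
# one 8x8 block in the wavelet domain
A = haar_approx(np.ones((16, 16)))
```

The per-block mean, standard deviation and covariance are then computed on this half-resolution component exactly as in the pixel-domain case, only over wavelet coefficients instead of luminance values.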
If the distortion type of the distorted image Y is Gaussian white noise distortion, calculate the luminance function, the contrast function and the structure function between every pair of image blocks at the same coordinate position in X and Y. Denote the luminance function, the contrast function and the structure function between the image block $x_{i,j}$ at coordinate position $(i,j)$ in X and the image block $y_{i,j}$ at coordinate position $(i,j)$ in Y as $l(x_{i,j},y_{i,j})$, $c(x_{i,j},y_{i,j})$ and $s(x_{i,j},y_{i,j})$ respectively: $l(x_{i,j},y_{i,j})=\frac{2\mu_{x_{i,j}}\mu_{y_{i,j}}+C_1}{\mu_{x_{i,j}}^{2}+\mu_{y_{i,j}}^{2}+C_1}$, $c(x_{i,j},y_{i,j})=\frac{2\sigma_{x_{i,j}}\sigma_{y_{i,j}}+C_2}{\sigma_{x_{i,j}}^{2}+\sigma_{y_{i,j}}^{2}+C_2}$, $s(x_{i,j},y_{i,j})=\frac{\sigma_{x_{i,j},y_{i,j}}+C_3}{\sigma_{x_{i,j}}\sigma_{y_{i,j}}+C_3}$, wherein $C_1$, $C_2$ and $C_3$ are small positive constants set to avoid a zero denominator;
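As a concrete illustration, the three comparison functions can be sketched in Python for a pair of same-sized blocks. This is a minimal sketch, not the patent's implementation; the constant values follow those suggested later in the text, and the structure term uses the covariance-based form consistent with the statistics defined above:

```python
import numpy as np

def ssim_components(x, y, C1=0.01, C2=0.02, C3=0.01):
    """Luminance, contrast and structure terms for two same-sized blocks."""
    mu_x, mu_y = x.mean(), y.mean()
    sd_x, sd_y = x.std(), y.std()            # population standard deviation
    cov = ((x - mu_x) * (y - mu_y)).mean()   # covariance of the two blocks
    l = (2 * mu_x * mu_y + C1) / (mu_x**2 + mu_y**2 + C1)
    c = (2 * sd_x * sd_y + C2) / (sd_x**2 + sd_y**2 + C2)
    s = (cov + C3) / (sd_x * sd_y + C3)
    return l, c, s

# Identical blocks give l = c = s = 1
block = np.arange(64, dtype=float).reshape(8, 8)
print(ssim_components(block, block))
```

Each term is bounded by 1 and reaches 1 only when the corresponding statistics of the two blocks agree, which is what makes the product form in step ⑤ a similarity score.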
If the distortion type of the distorted image Y is JPEG distortion, calculate the luminance function and the contrast function between every pair of image blocks at the same coordinate position in X and Y, and calculate the structure function of every such pair in the DCT domain. Denote the luminance function and the contrast function between the image block $x_{i,j}$ at coordinate position $(i,j)$ in X and the image block $y_{i,j}$ at coordinate position $(i,j)$ in Y as $l(x_{i,j},y_{i,j})$ and $c(x_{i,j},y_{i,j})$ respectively, and denote the structure function between $x^D_{i,j}$, the new image block obtained from $x_{i,j}$ by DCT transformation, and $y^D_{i,j}$, the new image block obtained from $y_{i,j}$ by DCT transformation, as $f(x_{i,j},y_{i,j})$: $c(x_{i,j},y_{i,j})=\frac{2\sigma_{x_{i,j}}\sigma_{y_{i,j}}+C_2}{\sigma_{x_{i,j}}^{2}+\sigma_{y_{i,j}}^{2}+C_2}$, $f(x_{i,j},y_{i,j})=\frac{2\sigma_{x^D_{i,j}}\sigma_{y^D_{i,j}}+C_3}{\sigma_{x^D_{i,j}}^{2}+\sigma_{y^D_{i,j}}^{2}+C_3}$, wherein $C_1$, $C_2$ and $C_3$ are small positive constants set to avoid a zero denominator;
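The DCT-domain structure term can be illustrated by transforming each 8×8 block with a 2-D DCT and comparing the standard deviations of the resulting coefficients. A minimal sketch, not the patent's exact implementation; the orthonormal DCT-II matrix is built by hand so the example stays self-contained:

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n)
    D = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    D[0, :] *= 1.0 / np.sqrt(n)
    D[1:, :] *= np.sqrt(2.0 / n)
    return D

def f_dct(x, y, C3=0.01):
    """DCT-domain structure term:
    f = (2*s_xD*s_yD + C3) / (s_xD**2 + s_yD**2 + C3),
    where s_*D is the std of the block's 2-D DCT coefficients."""
    D = dct_matrix(x.shape[0])
    xD = D @ x @ D.T   # separable 2-D DCT of block x
    yD = D @ y @ D.T   # separable 2-D DCT of block y
    sx, sy = xD.std(), yD.std()
    return (2 * sx * sy + C3) / (sx**2 + sy**2 + C3)

rng = np.random.default_rng(0)
b = rng.random((8, 8))
print(f_dct(b, b))  # identical blocks -> 1.0
```

Because JPEG quantizes DCT coefficients directly, comparing coefficient spreads in this domain is sensitive to exactly the energy that JPEG compression removes.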
If the distortion type of the distorted image Y is a blur-class distortion, calculate the wavelet-coefficient luminance function, the wavelet-coefficient contrast function and the wavelet-coefficient structure function between every pair of image blocks at the same coordinate position in X and Y. Denote the wavelet-coefficient luminance function, contrast function and structure function between the image block $x_{i,j}$ at coordinate position $(i,j)$ in X and the image block $y_{i,j}$ at coordinate position $(i,j)$ in Y as $l^W(x_{i,j},y_{i,j})$, $c^W(x_{i,j},y_{i,j})$ and $s^W(x_{i,j},y_{i,j})$ respectively: $l^W(x_{i,j},y_{i,j})=\frac{2\mu_{x^W_{i^W,j^W}}\mu_{y^W_{i^W,j^W}}+C_1}{\mu_{x^W_{i^W,j^W}}^{2}+\mu_{y^W_{i^W,j^W}}^{2}+C_1}$, $c^W(x_{i,j},y_{i,j})=\frac{2\sigma_{x^W_{i^W,j^W}}\sigma_{y^W_{i^W,j^W}}+C_2}{\sigma_{x^W_{i^W,j^W}}^{2}+\sigma_{y^W_{i^W,j^W}}^{2}+C_2}$, $s^W(x_{i,j},y_{i,j})=\frac{\sigma_{x^W_{i^W,j^W},y^W_{i^W,j^W}}+C_3}{\sigma_{x^W_{i^W,j^W}}\sigma_{y^W_{i^W,j^W}}+C_3}$, wherein $C_1$, $C_2$ and $C_3$ are small positive constants set to avoid a zero denominator;
Fifthly, if the distortion type of the distorted image Y is Gaussian white noise distortion, then, according to the luminance function, the contrast function and the structure function between every pair of image blocks at the same coordinate position in X and Y, calculate the structural similarity between every such pair. Denote the structural similarity between the image block $x_{i,j}$ at coordinate position $(i,j)$ in X and the image block $y_{i,j}$ at coordinate position $(i,j)$ in Y as $\mathrm{SSIM}(x_{i,j},y_{i,j})$, $\mathrm{SSIM}(x_{i,j},y_{i,j})=[l(x_{i,j},y_{i,j})]^{\alpha}[c(x_{i,j},y_{i,j})]^{\beta}[s(x_{i,j},y_{i,j})]^{\gamma}$, wherein $\alpha$, $\beta$ and $\gamma$ are adjusting factors;
If the distortion type of the distorted image Y is JPEG distortion, then, according to the luminance function and the contrast function between every pair of image blocks at the same coordinate position in X and Y and the structure function of every such pair in the DCT domain, calculate the DCT-domain structural similarity between every such pair. Denote the DCT-domain structural similarity between the image block $x_{i,j}$ at coordinate position $(i,j)$ in X and the image block $y_{i,j}$ at coordinate position $(i,j)$ in Y as $\mathrm{FSSIM}(x_{i,j},y_{i,j})$, $\mathrm{FSSIM}(x_{i,j},y_{i,j})=[l(x_{i,j},y_{i,j})]^{\alpha}[c(x_{i,j},y_{i,j})]^{\beta}[f(x_{i,j},y_{i,j})]^{\gamma}$, wherein $\alpha$, $\beta$ and $\gamma$ are adjusting factors;
If the distortion type of the distorted image Y is a blur-class distortion, then, according to the wavelet-coefficient luminance function, the wavelet-coefficient contrast function and the wavelet-coefficient structure function between every pair of image blocks at the same coordinate position in X and Y, calculate the wavelet-domain structural similarity between every such pair. Denote the wavelet-domain structural similarity between the image block $x_{i,j}$ at coordinate position $(i,j)$ in X and the image block $y_{i,j}$ at coordinate position $(i,j)$ in Y as $\mathrm{WSSIM}(x_{i,j},y_{i,j})$, $\mathrm{WSSIM}(x_{i,j},y_{i,j})=[l^W(x_{i,j},y_{i,j})]^{\alpha}[c^W(x_{i,j},y_{i,j})]^{\beta}[s^W(x_{i,j},y_{i,j})]^{\gamma}$, wherein $\alpha$, $\beta$ and $\gamma$ are adjusting factors;
If the distortion type of the distorted image Y is Gaussian white noise distortion, calculate the objective quality score of Y, denoted $Q_{wn}$, from the pixel-domain structural similarity between every pair of image blocks at the same coordinate position in X and Y: $Q_{wn}=\frac{1}{M\times N}\sum_{i=1}^{M}\sum_{j=1}^{N}\mathrm{SSIM}(x_{i,j},y_{i,j})$;
If the distortion type of the distorted image Y is JPEG distortion, calculate the objective quality score of Y, denoted $Q_{jpeg}$, from the DCT-domain structural similarity between every pair of image blocks at the same coordinate position in X and Y: $Q_{jpeg}=\frac{1}{M\times N}\sum_{i=1}^{M}\sum_{j=1}^{N}\mathrm{FSSIM}(x_{i,j},y_{i,j})$;
If the distortion type of the distorted image Y is a blur-class distortion, calculate the objective quality score of Y, denoted $Q_{blur}$, from the wavelet-domain structural similarity between every pair of image blocks at the same coordinate position in X and Y: $Q_{blur}=\frac{1}{M\times N}\sum_{i=1}^{M}\sum_{j=1}^{N}\mathrm{WSSIM}(x_{i,j},y_{i,j})$.
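All three branches pool the per-block similarity map the same way: a plain average over all co-located blocks. A minimal sketch, where `block_similarity` is a placeholder name for whichever of SSIM, FSSIM or WSSIM the distortion-type decision selected:

```python
import numpy as np

def objective_score(X, Y, block_similarity, step=8):
    """Average the per-block similarity over all co-located 8x8 blocks.

    X, Y: 2-D grayscale images of equal size; block_similarity is the
    branch chosen by the distortion-type decision (SSIM/FSSIM/WSSIM).
    """
    H, W = X.shape
    scores = [
        block_similarity(X[r:r + 8, c:c + 8], Y[r:r + 8, c:c + 8])
        for r in range(0, H - 7, step)
        for c in range(0, W - 7, step)
    ]
    return float(np.mean(scores))

# With a trivial similarity, identical images score 1.0
img = np.arange(64 * 64, dtype=float).reshape(64, 64)
trivial = lambda a, b: 1.0 if np.allclose(a, b) else 0.0
print(objective_score(img, img, trivial))  # -> 1.0
```

A smaller `step` yields overlapping blocks, matching the overlapped partition described later in the detailed description.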
The specific process of determining the distortion type of Y by the distortion-type discrimination method in step ① is as follows:
①-a. Divide X into non-overlapping 64×64 blocks to obtain M'×N' image blocks of size 64×64, and denote the image block at coordinate position $(i',j')$ in X as $x'_{i',j'}$. Perform a one-level wavelet decomposition on each image block $x'_{i',j'}$ and extract its diagonal component, find the median of the coefficient magnitudes in the diagonal component of each image block, and then calculate the noise standard deviation of each image block: denoting the median of the coefficient magnitudes in the diagonal wavelet component of $x'_{i',j'}$ as $\mathrm{MED}_{x'_{i',j'}}$, its noise standard deviation is $\sigma_{x'_{i',j'}}=\frac{\mathrm{MED}_{x'_{i',j'}}}{0.6745}$, wherein $1\le i'\le M'$, $1\le j'\le N'$;
Similarly, divide Y into non-overlapping 64×64 blocks to obtain M'×N' image blocks of size 64×64, and denote the image block at coordinate position $(i',j')$ in Y as $y'_{i',j'}$. Perform a one-level wavelet decomposition on each image block $y'_{i',j'}$ and extract its diagonal component, find the median of the coefficient magnitudes in the diagonal component of each image block, and then calculate the noise standard deviation of each image block: denoting the median of the coefficient magnitudes in the diagonal wavelet component of $y'_{i',j'}$ as $\mathrm{MED}_{y'_{i',j'}}$, its noise standard deviation is $\sigma_{y'_{i',j'}}=\frac{\mathrm{MED}_{y'_{i',j'}}}{0.6745}$;
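The per-block noise estimate above (median of the diagonal wavelet-coefficient magnitudes divided by 0.6745) is the classic robust median-absolute-deviation estimator. A minimal Python sketch with a hand-rolled one-level Haar transform, the wavelet the detailed description later names; only the diagonal (HH) subband is needed:

```python
import numpy as np

def noise_std_haar(block):
    """Robust noise estimate sigma = median(|HH|) / 0.6745, where HH is
    the diagonal subband of a one-level Haar wavelet decomposition."""
    a = block[0::2, 0::2]
    b = block[0::2, 1::2]
    c = block[1::2, 0::2]
    d = block[1::2, 1::2]
    hh = (a - b - c + d) / 2.0   # orthonormal Haar diagonal (HH) subband
    return float(np.median(np.abs(hh)) / 0.6745)

# On pure Gaussian noise the estimate tracks the true sigma
rng = np.random.default_rng(1)
noisy = rng.normal(0.0, 10.0, size=(64, 64))
print(noise_std_haar(noisy))  # close to 10
```

The median makes the estimate robust to image structure, which mostly concentrates in the other subbands; 0.6745 is the median of |N(0,1)|.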
①-b. Calculate the difference of the noise standard deviations between every pair of image blocks at the same coordinate position in X and Y, denoting the difference of the noise standard deviations between the image blocks $x'_{i',j'}$ and $y'_{i',j'}$ at coordinate position $(i',j')$ in X and Y as $\Delta\sigma_{i',j'}$; then calculate the mean of the differences of the noise standard deviations over all pairs of image blocks at the same coordinate position in X and Y, denoted $\overline{\Delta\sigma}$;
①-c. Judge whether $\overline{\Delta\sigma}>Th_{WN}$ holds; if so, determine that the distortion type of Y is Gaussian white noise distortion, and then end; otherwise, execute step ①-d; wherein $Th_{WN}$ is the Gaussian-white-noise decision threshold;
①-d. Calculate the luminance difference map of X, denoted $X_h$, and denote the coefficient value at coordinate position $(i'',j'')$ in $X_h$ as $X_h(i'',j'')$: $X_h(i'',j'')=|X(i'',j'')-X(i'',j''+1)|$, wherein $1\le i''\le H$, $1\le j''\le W-1$, $X(i'',j'')$ represents the luminance value of the pixel at coordinate position $(i'',j'')$ in X, $X(i'',j''+1)$ represents the luminance value of the pixel at coordinate position $(i'',j''+1)$ in X, and the symbol "| |" is the absolute-value symbol;
Similarly, calculate the luminance difference map of Y, denoted $Y_h$, and denote the coefficient value at coordinate position $(i'',j'')$ in $Y_h$ as $Y_h(i'',j'')$: $Y_h(i'',j'')=|Y(i'',j'')-Y(i'',j''+1)|$, wherein $1\le i''\le H$, $1\le j''\le W-1$, $Y(i'',j'')$ represents the luminance value of the pixel at coordinate position $(i'',j'')$ in Y, and $Y(i'',j''+1)$ represents the luminance value of the pixel at coordinate position $(i'',j''+1)$ in Y;
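The luminance difference maps $X_h$ and $Y_h$ are simple absolute horizontal first differences; as a sketch:

```python
import numpy as np

def luminance_diff_map(img):
    """X_h(i, j) = |X(i, j) - X(i, j+1)|: absolute horizontal first
    difference; the result is one column narrower than the input."""
    img = np.asarray(img, dtype=float)
    return np.abs(img[:, :-1] - img[:, 1:])

X = np.array([[10, 12, 12, 20],
              [ 5,  5,  9,  9]], dtype=float)
print(luminance_diff_map(X))
# [[2. 0. 8.]
#  [0. 4. 0.]]
```

In a JPEG-compressed image these horizontal differences spike at the columns where 8×8 coding blocks meet, which is what the energy statistics in the next step measure.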
①-e. Divide the luminance difference map $X_h$ of X into non-overlapping 8×8 blocks to obtain M''×N'' non-overlapping image blocks of size 8×8, and denote the image block at coordinate position $(i''',j''')$ in $X_h$ as $x^h_{i''',j'''}$. Define the in-block energy and the block-edge energy of $x^h_{i''',j'''}$ as $Ex^{In}_{i''',j'''}$ and $Ex^{Ed}_{i''',j'''}$ respectively: $Ex^{In}_{i''',j'''}=\frac{1}{56}\sum_{p=1}^{8}\sum_{q=1}^{7}x^h_{i''',j'''}(p,q)$, $Ex^{Ed}_{i''',j'''}=\frac{1}{8}\sum_{p=1}^{8}x^h_{i''',j'''}(p,8)$, wherein $x^h_{i''',j'''}(p,q)$ is the coefficient value at coordinate position $(p,q)$ in $x^h_{i''',j'''}$, $x^h_{i''',j'''}(p,8)$ is the coefficient value at coordinate position $(p,8)$ in $x^h_{i''',j'''}$, $1\le i'''\le M''$, $1\le j'''\le N''$, $1\le p\le 8$, $1\le q\le 7$;
Similarly, divide the luminance difference map $Y_h$ of Y into non-overlapping 8×8 blocks to obtain M''×N'' non-overlapping image blocks of size 8×8, and denote the image block at coordinate position $(i''',j''')$ in $Y_h$ as $y^h_{i''',j'''}$. Define the in-block energy and the block-edge energy of $y^h_{i''',j'''}$ as $Ey^{In}_{i''',j'''}$ and $Ey^{Ed}_{i''',j'''}$ respectively: $Ey^{In}_{i''',j'''}=\frac{1}{56}\sum_{p=1}^{8}\sum_{q=1}^{7}y^h_{i''',j'''}(p,q)$, $Ey^{Ed}_{i''',j'''}=\frac{1}{8}\sum_{p=1}^{8}y^h_{i''',j'''}(p,8)$, wherein $y^h_{i''',j'''}(p,q)$ is the coefficient value at coordinate position $(p,q)$ in $y^h_{i''',j'''}$, and $y^h_{i''',j'''}(p,8)$ is the coefficient value at coordinate position $(p,8)$ in $y^h_{i''',j'''}$;
①-f. Calculate the ratio of the block-edge energy to the in-block energy for every image block in $X_h$, denoting the ratio for the image block $x^h_{i''',j'''}$ at coordinate position $(i''',j''')$ in $X_h$ as $R^x_{i''',j'''}=\frac{Ex^{Ed}_{i''',j'''}}{Ex^{In}_{i''',j'''}}$;
Likewise, calculate the ratio of the block-edge energy to the in-block energy for every image block in $Y_h$, denoting the ratio for the image block $y^h_{i''',j'''}$ at coordinate position $(i''',j''')$ in $Y_h$ as $R^y_{i''',j'''}=\frac{Ey^{Ed}_{i''',j'''}}{Ey^{In}_{i''',j'''}}$;
Count the number of block positions that satisfy the inequality $R^y_{i''',j'''}>R^x_{i''',j'''}$, and denote this count as $N_0$; then define the decision index $J$ as $J=\frac{N_0}{M''\times N''}$;
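Steps ①-e and ①-f measure how much of the difference-map energy sits on 8×8 block boundaries, and $J$ is the fraction of blocks in which the distorted image's edge-to-interior energy ratio exceeds the reference's. A sketch; note that the exact inequality symbol is lost in the published text, so the comparison direction below is an assumption motivated by JPEG blocking raising boundary energy:

```python
import numpy as np

def block_energies(diff_block):
    """In-block energy (columns 1..7) and block-edge energy (column 8)
    of one 8x8 block of the horizontal difference map."""
    e_in = diff_block[:, :7].sum() / 56.0
    e_ed = diff_block[:, 7].sum() / 8.0
    return e_in, e_ed

def jpeg_index(Xh, Yh, eps=1e-12):
    """Fraction of co-located 8x8 blocks where the distorted map's
    edge/interior energy ratio exceeds the reference's (assumed
    inequality direction; the published text elides its exact form)."""
    H, W = Xh.shape
    n0 = total = 0
    for r in range(0, H - 7, 8):
        for c in range(0, W - 7, 8):
            x_in, x_ed = block_energies(Xh[r:r + 8, c:c + 8])
            y_in, y_ed = block_energies(Yh[r:r + 8, c:c + 8])
            total += 1
            if y_ed / (y_in + eps) > x_ed / (x_in + eps):
                n0 += 1
    return n0 / total

# A block-boundary spike in the distorted map drives J toward 1
Xh = np.ones((8, 8))
Yh = np.ones((8, 8)); Yh[:, 7] = 10.0
print(jpeg_index(Xh, Yh))  # -> 1.0
```

Comparing against the reference's own ratio, rather than a fixed constant, keeps naturally blocky content from being mistaken for JPEG distortion.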
①-g. Judge whether $J>Th_{JPEG}$ holds; if so, determine that the distortion type of Y is JPEG distortion, and then end; otherwise, execute step ①-h; wherein $Th_{JPEG}$ is the JPEG decision threshold;
①-h. Determine the distortion type of Y to be a blur-class distortion, i.e. Gaussian blur distortion, JPEG2000 distortion, or fast fading distortion.
In the fourth step, take $C_1=0.01$, $C_2=0.02$, $C_3=0.01$ and $\alpha=\beta=\gamma=1$.
Compared with the prior art, the invention has the advantages that:
1) When acquiring the structural similarity between two image blocks at the same coordinate position in the original undistorted image and the distorted image to be evaluated, the method takes the distortion type of the distorted image into account, so that it can adaptively choose to compute the pixel-domain, DCT-domain or wavelet-domain structural similarity. This adaptive choice improves the accuracy of the quality evaluation results for images with different distortion types.
2) In the course of judging the distortion type of a distorted image, the method combines the distortion characteristics exhibited under Gaussian white noise distortion with those exhibited under JPEG distortion, thereby realizing distortion-type judgment for a distorted image given the original reference image; the judgment procedure is highly portable.
Drawings
FIG. 1 is a block diagram of an overall implementation of the method of the present invention;
FIG. 2 is a graph of 12 original undistorted images in a training set involved in the method of the present invention;
FIG. 3 is a graph showing the relationship between the threshold and the judgment accuracy when the image is determined to be Gaussian white noise distortion in the method of the present invention;
FIG. 4 is a graph showing the relationship between the threshold size and the judgment accuracy when the image is determined to be JPEG distortion in the method of the present invention;
Detailed Description
The invention is described in further detail below with reference to the accompanying drawings and embodiments.
The invention provides a self-adaptive image quality evaluation method based on distortion type judgment, which comprises the following processing procedures:
firstly, determining the distortion type of a distorted image to be evaluated;
secondly, corresponding processing is carried out by combining the distortion type of the distorted image to be evaluated,
if the distorted image is Gaussian white noise distortion, dividing the original undistorted image and the distorted image to be evaluated into a plurality of overlapped image blocks with the size of 8 multiplied by 8 in a pixel domain, and acquiring the structural similarity based on the pixel domain between all two image blocks with the same coordinate position in the original undistorted image and the distorted image to be evaluated by calculating the mean value and the standard deviation of all pixel points in each image block in the original undistorted image and the distorted image to be evaluated in the pixel domain and the covariance between all pixel point brightness values in all two image blocks with the same coordinate position in the original undistorted image and the distorted image to be evaluated;
if the distorted image is JPEG distorted, dividing the original undistorted image and the distorted image to be evaluated into a plurality of overlapped image blocks with the size of 8 x 8 in a DCT (discrete Cosine transform) transformation domain, and acquiring the structural similarity based on the DCT domain between all two image blocks with the same coordinate position in the original undistorted image and the distorted image to be evaluated by calculating the mean value and the standard deviation of all the coefficients of the original undistorted image block and the distorted image to be evaluated in the DCT domain and the covariance between all the coefficients of the two image blocks with the same coordinate position in the DCT domain of the original undistorted image and the distorted image to be evaluated;
if the distorted image is similar to fuzzy distortion, dividing the original undistorted image and the distorted image to be evaluated into a plurality of overlapped image blocks with the size of 8 multiplied by 8 in a wavelet domain, and acquiring the structural similarity based on the wavelet domain between all two image blocks with the same coordinate position in the original undistorted image and the distorted image to be evaluated by calculating the mean value and the standard deviation of all the coefficients of the original undistorted image block and the distorted image to be evaluated in the wavelet domain and the covariance between all the coefficients of all the two image blocks with the same coordinate position in the wavelet domain;
and finally, obtaining the objective quality score of the distorted image to be evaluated according to the structural similarity between the two image blocks with the same coordinate positions in the original undistorted image and the distorted image to be evaluated.
The overall implementation block diagram of the adaptive image quality objective evaluation method of the invention is shown in fig. 1, and the method specifically comprises the following steps:
Firstly, let X represent the original undistorted image and Y the distorted image to be evaluated; then determine the distortion type of Y by the distortion-type discrimination method, the distortion type of Y being one of Gaussian white noise distortion, JPEG distortion, or a blur-class distortion.
At present, the distortion types of an image generally fall into three classes: Gaussian white noise distortion (WN), JPEG distortion (JPEG), and blur-class distortion, where blur-class distortion comprises Gaussian blur distortion (Gblur), JPEG2000 distortion, and fast fading distortion (FF). Here, the distortion-type discrimination method determines which type of distortion Y contains.
The specific process of determining the distortion type of Y by the distortion-type discrimination method in step ① is as follows:
①-a. Divide X into non-overlapping 64×64 blocks to obtain M'×N' image blocks of size 64×64, and denote the image block at coordinate position $(i',j')$ in X as $x'_{i',j'}$. Perform a one-level wavelet decomposition (Haar wavelet) on each image block $x'_{i',j'}$ and extract its diagonal component, find the median of the coefficient magnitudes in the diagonal component of each image block, and then calculate the noise standard deviation of each image block: denoting the median of the coefficient magnitudes in the diagonal wavelet component of $x'_{i',j'}$ as $\mathrm{MED}_{x'_{i',j'}}$, its noise standard deviation is $\sigma_{x'_{i',j'}}=\frac{\mathrm{MED}_{x'_{i',j'}}}{0.6745}$, wherein $1\le i'\le M'$, $1\le j'\le N'$, and H and W respectively denote the height and width of X;
Similarly, divide Y into non-overlapping 64×64 blocks to obtain M'×N' image blocks of size 64×64, and denote the image block at coordinate position $(i',j')$ in Y as $y'_{i',j'}$. Perform a one-level wavelet decomposition (Haar wavelet) on each image block $y'_{i',j'}$ and extract its diagonal component, find the median of the coefficient magnitudes in the diagonal component of each image block, and then calculate the noise standard deviation of each image block: denoting the median of the coefficient magnitudes in the diagonal wavelet component of $y'_{i',j'}$ as $\mathrm{MED}_{y'_{i',j'}}$, its noise standard deviation is $\sigma_{y'_{i',j'}}=\frac{\mathrm{MED}_{y'_{i',j'}}}{0.6745}$, wherein $1\le i'\le M'$, $1\le j'\le N'$, and H and W respectively denote the height and width of Y; that is, X and Y have the same size;
①-b, calculating the difference of the noise standard deviations between every pair of image blocks at the same coordinate position in X and Y: the difference of the noise standard deviations of the image blocks x'_{i',j'} and y'_{i',j'} at coordinate position (i', j') is recorded as \Delta\sigma_{i',j'} = \sigma_{y'_{i',j'}} - \sigma_{x'_{i',j'}}; then calculating the mean of the differences of the noise standard deviations over all image blocks at the same coordinate positions in X and Y, recorded as \overline{\Delta\sigma} = \frac{1}{M' \times N'} \sum_{i'=1}^{M'} \sum_{j'=1}^{N'} \Delta\sigma_{i',j'};
①-c, judging whether \overline{\Delta\sigma} > Th_{WN} holds; if yes, determining that the distortion type of Y is Gaussian white noise distortion, and then ending; otherwise, executing step ①-d; wherein Th_{WN} is the Gaussian white noise distortion discrimination threshold; in this embodiment, Th_{WN} is 0.8;
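The block-wise noise standard deviation estimate used in the discrimination above (median of the Haar diagonal coefficient amplitudes divided by 0.6745) can be sketched in NumPy as follows. This is an illustrative sketch, not the patented implementation: the helper names are hypothetical, image dimensions are assumed divisible by 64, and the block-wise differences are taken as signed values \sigma_y - \sigma_x before averaging.

```python
import numpy as np

def haar_diagonal(block):
    # One-level 2-D Haar decomposition, diagonal (HH) component only:
    # for each non-overlapping 2x2 cell [[a, b], [c, d]] the diagonal
    # coefficient is (a - b - c + d) / 2.
    a = block[0::2, 0::2]
    b = block[0::2, 1::2]
    c = block[1::2, 0::2]
    d = block[1::2, 1::2]
    return (a - b - c + d) / 2.0

def noise_sigma_map(img, bs=64):
    # Robust per-block noise estimate: median of |HH| coefficients / 0.6745.
    H, W = img.shape
    M, N = H // bs, W // bs
    sigma = np.empty((M, N))
    for i in range(M):
        for j in range(N):
            hh = haar_diagonal(img[i * bs:(i + 1) * bs, j * bs:(j + 1) * bs])
            sigma[i, j] = np.median(np.abs(hh)) / 0.6745
    return sigma

def mean_sigma_difference(x, y, bs=64):
    # Mean over all co-located blocks of (sigma_y - sigma_x); this mean is
    # compared against Th_WN to flag Gaussian white noise distortion.
    return (noise_sigma_map(y, bs) - noise_sigma_map(x, bs)).mean()
```

Added white noise inflates the distorted image's per-block estimates relative to the reference, which is why a simple mean of the signed differences suffices for the threshold test.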
①-d, calculating the luminance difference map of X, denoted X_h; the coefficient value of X_h at coordinate position (i'', j'') is recorded as X_h(i'', j''), X_h(i'', j'') = |X(i'', j'') - X(i'', j''+1)|, wherein 1 ≤ i'' ≤ H, 1 ≤ j'' ≤ W-1, X(i'', j'') represents the luminance value of the pixel point at coordinate position (i'', j'') in X, X(i'', j''+1) represents the luminance value of the pixel point at coordinate position (i'', j''+1) in X, and the symbol "| |" is the absolute value symbol;
similarly, calculating the luminance difference map of Y, denoted Y_h; the coefficient value of Y_h at coordinate position (i'', j'') is recorded as Y_h(i'', j''), Y_h(i'', j'') = |Y(i'', j'') - Y(i'', j''+1)|, wherein 1 ≤ i'' ≤ H, 1 ≤ j'' ≤ W-1, Y(i'', j'') represents the luminance value of the pixel point at coordinate position (i'', j'') in Y, and Y(i'', j''+1) represents the luminance value of the pixel point at coordinate position (i'', j''+1) in Y;
①-e, dividing the luminance difference map X_h of X into non-overlapping blocks of size 8 × 8 to obtain M'' × N'' image blocks of size 8 × 8, and recording the image block whose coordinate position in X_h is (i''', j''') as x^h_{i''',j'''}; defining the intra-block energy and the block-edge energy of the image block x^h_{i''',j'''} as Ex^{In}_{i''',j'''} and Ex^{Ed}_{i''',j'''} respectively, Ex^{In}_{i''',j'''} = \frac{1}{56} \sum_{p=1}^{8} \sum_{q=1}^{7} x^h_{i''',j'''}(p,q), Ex^{Ed}_{i''',j'''} = \frac{1}{8} \sum_{p=1}^{8} x^h_{i''',j'''}(p,8), wherein x^h_{i''',j'''}(p,q) is the coefficient value at coordinate position (p,q) in x^h_{i''',j'''}, x^h_{i''',j'''}(p,8) is the coefficient value at coordinate position (p,8) in x^h_{i''',j'''}, 1 ≤ i''' ≤ M'', 1 ≤ j''' ≤ N'', 1 ≤ p ≤ 8, and 1 ≤ q ≤ 7;
similarly, dividing the luminance difference map Y_h of Y into non-overlapping blocks of size 8 × 8 to obtain M'' × N'' image blocks of size 8 × 8, and recording the image block whose coordinate position in Y_h is (i''', j''') as y^h_{i''',j'''}; defining the intra-block energy and the block-edge energy of the image block y^h_{i''',j'''} as Ey^{In}_{i''',j'''} and Ey^{Ed}_{i''',j'''} respectively, Ey^{In}_{i''',j'''} = \frac{1}{56} \sum_{p=1}^{8} \sum_{q=1}^{7} y^h_{i''',j'''}(p,q), Ey^{Ed}_{i''',j'''} = \frac{1}{8} \sum_{p=1}^{8} y^h_{i''',j'''}(p,8), wherein y^h_{i''',j'''}(p,q) is the coefficient value at coordinate position (p,q) in y^h_{i''',j'''}, y^h_{i''',j'''}(p,8) is the coefficient value at coordinate position (p,8) in y^h_{i''',j'''}, 1 ≤ i''' ≤ M'', 1 ≤ j''' ≤ N'', 1 ≤ p ≤ 8, and 1 ≤ q ≤ 7;
①-f, calculating the ratio between the block-edge energy and the intra-block energy of each image block in X_h: the ratio for the image block x^h_{i''',j'''} at coordinate position (i''', j''') in X_h is recorded as R^x_{i''',j'''} = \frac{Ex^{Ed}_{i''',j'''}}{Ex^{In}_{i''',j'''}};
likewise, calculating the ratio between the block-edge energy and the intra-block energy of each image block in Y_h: the ratio for the image block y^h_{i''',j'''} at coordinate position (i''', j''') in Y_h is recorded as R^y_{i''',j'''} = \frac{Ey^{Ed}_{i''',j'''}}{Ey^{In}_{i''',j'''}};
counting the number of coordinate positions whose ratios satisfy the inequality R^y_{i''',j'''} > R^x_{i''',j'''}, recorded as N_0; defining the discrimination index J, J = \frac{N_0}{M'' \times N''};
①-g, judging whether J > Th_{JPEG} holds; if yes, determining that the distortion type of Y is JPEG distortion, and then ending; otherwise, executing step ①-h; wherein Th_{JPEG} is the JPEG distortion discrimination threshold; in this embodiment, Th_{JPEG} is 0.57;
①-h, determining that the distortion type of Y is blur-class distortion, namely Gaussian blur distortion, JPEG2000 distortion, or fast fading distortion.
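The JPEG blockiness discrimination based on the luminance difference map can be sketched as follows. This is an illustrative NumPy sketch with hypothetical helper names; it assumes the counted inequality compares the distorted image's edge-to-interior ratio against the reference's, and guards a possibly zero intra-block energy with a small epsilon.

```python
import numpy as np

def edge_ratio_map(img):
    # Horizontal luminance difference map, split into non-overlapping 8x8
    # blocks; for each block, ratio of block-edge energy (column 8) to
    # intra-block energy (columns 1..7), per the definitions above.
    d = np.abs(np.diff(img.astype(float), axis=1))  # shape H x (W - 1)
    M, N = d.shape[0] // 8, d.shape[1] // 8
    R = np.empty((M, N))
    for i in range(M):
        for j in range(N):
            b = d[i * 8:(i + 1) * 8, j * 8:(j + 1) * 8]
            e_in = b[:, :7].sum() / 56.0   # intra-block energy
            e_ed = b[:, 7].sum() / 8.0     # block-edge energy
            R[i, j] = e_ed / max(e_in, 1e-12)  # epsilon guard: assumption
    return R

def blockiness_index(x, y):
    # J = fraction of blocks where the distorted ratio exceeds the
    # reference ratio (assumed form of the counted inequality).
    rx, ry = edge_ratio_map(x), edge_ratio_map(y)
    return (ry > rx).mean()
```

The resulting index J is then compared against the JPEG discrimination threshold (0.57 in this embodiment); JPEG blocking artifacts concentrate energy on the 8 × 8 grid boundaries, which raises the distorted image's edge ratios.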
In the present embodiment, the image data used are the 808 images provided by the publicly available image quality assessment database (LIVE) of the Laboratory for Image and Video Engineering at the University of Texas at Austin, comprising 29 undistorted reference images and 779 distorted images: 145 Gaussian white noise distorted images, 145 Gaussian blur distorted images, 175 JPEG distorted images, 169 JPEG2000 distorted images and 145 fast fading distorted images. In addition, as shown in FIG. 2, 12 undistorted images covering simple, medium and complex textures are selected from the 29 undistorted reference images, and these 12 undistorted images together with their corresponding distorted images of the 5 distortion types are used as the training set, namely 60 Gaussian white noise distorted images, 60 Gaussian blur distorted images, 60 fast fading distorted images, 70 JPEG distorted images and 70 JPEG2000 distorted images; the remaining 17 undistorted images and their corresponding distorted images of the 5 distortion types are used as the test set, namely 85 each of the Gaussian white noise, Gaussian blur and fast fading distorted images, 105 JPEG distorted images and 99 JPEG2000 distorted images.
The first step separates the distorted images subject to Gaussian white noise distortion from all the distorted images in the training set. For the Gaussian white noise discrimination threshold Th_{WN} involved in determining whether the distortion type of the distorted image Y is Gaussian white noise distortion, one value is taken every 0.05 in the interval [-0.5, 1.5] as a candidate for Th_{WN}; for each candidate value, the training data are run through discrimination steps ①-a to ①-c and the discrimination accuracy is calculated. The relationship between Th_{WN} and the discrimination accuracy is shown in FIG. 3. As can be seen from FIG. 3, when Th_{WN} is 0.8, the Gaussian white noise distorted images can be separated with 100% accuracy.
The second step separates the JPEG distorted images from the non-Gaussian-white-noise distorted images in the training set. One value is taken every 0.01 in the interval [0.4, 0.7] as a candidate for Th_{JPEG}; for each candidate value, the training data are run through discrimination steps ①-d to ①-g and the discrimination accuracy is calculated. The relationship between Th_{JPEG} and the discrimination accuracy is shown in FIG. 4. As can be seen from FIG. 4, when Th_{JPEG} is 0.57, the JPEG distorted images can be separated from the remaining non-Gaussian-white-noise distorted images with 100% accuracy.
If the distortion type of the distorted image Y is Gaussian white noise distortion, a sliding window of size 8 × 8 is moved pixel by pixel over X in row-by-row order, dividing X into M × N overlapped image blocks of size 8 × 8, and the image block at coordinate position (i, j) in X is recorded as x_{i,j}; similarly, a sliding window of size 8 × 8 is moved pixel by pixel over Y in row-by-row order, dividing Y into M × N overlapped image blocks of size 8 × 8, and the image block at coordinate position (i, j) in Y is recorded as y_{i,j}; wherein M = H - 7, N = W - 7, H denotes the height of X and Y, W denotes the width of X and Y, 1 ≤ i ≤ M, and 1 ≤ j ≤ N;
if the distortion type of the distorted image Y is JPEG distortion, a sliding window of size 8 × 8 is moved pixel by pixel over X in row-by-row order, dividing X into M × N overlapped image blocks of size 8 × 8, the image block at coordinate position (i, j) in X is recorded as x_{i,j}, and a two-dimensional DCT is performed on every image block x_{i,j} to obtain the corresponding transformed image block x^D_{i,j}; similarly, a sliding window of size 8 × 8 is moved pixel by pixel over Y in row-by-row order, dividing Y into M × N overlapped image blocks of size 8 × 8, the image block at coordinate position (i, j) in Y is recorded as y_{i,j}, and a two-dimensional DCT is performed on every image block y_{i,j} to obtain the corresponding transformed image block y^D_{i,j}; wherein M = H - 7, N = W - 7, H represents the height of X and Y, W represents the width of X and Y, 1 ≤ i ≤ M, and 1 ≤ j ≤ N;
if the distortion type of the distorted image Y is blur-class distortion, a one-level wavelet transform is performed on X and the approximation component is extracted and denoted X_A; a sliding window of size 8 × 8 is moved point by point over X_A in row-by-row order, dividing X_A into M^W × N^W overlapped image blocks of size 8 × 8, and the image block at coordinate position (i^W, j^W) in X_A is recorded as x^W_{i^W,j^W}; similarly, a one-level wavelet transform is performed on Y and the approximation component is extracted and denoted Y_A; a sliding window of size 8 × 8 is moved point by point over Y_A in row-by-row order, dividing Y_A into M^W × N^W overlapped image blocks of size 8 × 8, and the image block at coordinate position (i^W, j^W) in Y_A is recorded as y^W_{i^W,j^W}; wherein M^W = H' - 7, N^W = W' - 7, H' represents the height of X_A and Y_A, W' represents the width of X_A and Y_A, 1 ≤ i^W ≤ M^W, and 1 ≤ j^W ≤ N^W;
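The pixel-by-pixel overlapped 8 × 8 block division shared by the three cases above, and the one-level Haar approximation component used in the blur-class case, can be sketched as follows. This is an illustrative NumPy sketch with hypothetical helper names; sliding_window_view requires NumPy 1.20 or later, and even image dimensions are assumed for the Haar step.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def overlapped_blocks(img, k=8):
    # All k x k windows stepped one pixel at a time in row-major order:
    # (H - k + 1) x (W - k + 1) blocks, i.e. M = H - 7 and N = W - 7 for k = 8.
    return sliding_window_view(img, (k, k))

def haar_approx(img):
    # One-level Haar approximation (LL) component: for each non-overlapping
    # 2x2 cell [[a, b], [c, d]] the approximation coefficient is
    # (a + b + c + d) / 2, halving each image dimension.
    a = img[0::2, 0::2]
    b = img[0::2, 1::2]
    c = img[1::2, 0::2]
    d = img[1::2, 1::2]
    return (a + b + c + d) / 2.0

img = np.arange(100.0).reshape(10, 10)
blocks = overlapped_blocks(img)   # shape (3, 3, 8, 8) for a 10x10 image
```

sliding_window_view returns a zero-copy view, so the dense overlapped division costs no extra memory until individual blocks are processed.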
Thirdly, if the distortion type of the distorted image Y is Gaussian white noise distortion, the luminance mean and standard deviation of all pixel points in each image block in X are calculated, the luminance mean and standard deviation of all pixel points in each image block in Y are calculated, and then the covariance of all pixel points in the two image blocks at the same coordinate position in X and Y is calculated. The luminance mean and standard deviation of all pixel points of the image block x_{i,j} at coordinate position (i, j) in X are recorded as \mu_{x_{i,j}} and \sigma_{x_{i,j}}; the luminance mean and standard deviation of all pixel points of the image block y_{i,j} at coordinate position (i, j) in Y are recorded as \mu_{y_{i,j}} and \sigma_{y_{i,j}}; the covariance of all pixel points of x_{i,j} and y_{i,j} is recorded as \sigma_{x_{i,j} y_{i,j}}. These quantities are given by
\mu_{x_{i,j}} = \frac{1}{64} \sum_{u=1}^{8} \sum_{v=1}^{8} x_{i,j}(u,v), \sigma_{x_{i,j}} = \sqrt{\frac{1}{64} \sum_{u=1}^{8} \sum_{v=1}^{8} (x_{i,j}(u,v) - \mu_{x_{i,j}})^2},
\mu_{y_{i,j}} = \frac{1}{64} \sum_{u=1}^{8} \sum_{v=1}^{8} y_{i,j}(u,v), \sigma_{y_{i,j}} = \sqrt{\frac{1}{64} \sum_{u=1}^{8} \sum_{v=1}^{8} (y_{i,j}(u,v) - \mu_{y_{i,j}})^2},
\sigma_{x_{i,j} y_{i,j}} = \frac{1}{64} \sum_{u=1}^{8} \sum_{v=1}^{8} [(x_{i,j}(u,v) - \mu_{x_{i,j}})(y_{i,j}(u,v) - \mu_{y_{i,j}})],
wherein x_{i,j}(u,v) denotes the luminance value of the pixel point at coordinate position (u,v) in x_{i,j}, y_{i,j}(u,v) denotes the luminance value of the pixel point at coordinate position (u,v) in y_{i,j}, 1 ≤ u ≤ 8, and 1 ≤ v ≤ 8;
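The per-block luminance statistics defined above can be sketched as follows (an illustrative sketch with a hypothetical helper name, using the 1/64-normalised, population-style definitions of the text):

```python
import numpy as np

def block_stats(bx, by):
    # Luminance mean, standard deviation and covariance of two co-located
    # 8x8 blocks, using the 1/64 normalisation of the formulas above.
    mu_x, mu_y = bx.mean(), by.mean()
    sigma_x = np.sqrt(((bx - mu_x) ** 2).mean())
    sigma_y = np.sqrt(((by - mu_y) ** 2).mean())
    sigma_xy = ((bx - mu_x) * (by - mu_y)).mean()
    return mu_x, sigma_x, mu_y, sigma_y, sigma_xy
```

These five quantities are the inputs of the pixel-domain structural similarity model applied in the Gaussian white noise case.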
If the distortion type of the distorted image Y is JPEG distortion, the luminance mean and standard deviation of all pixel points in each image block in X are calculated, the luminance mean and standard deviation of all pixel points in each image block in Y are calculated, then the mean and standard deviation of the DCT alternating-current (AC) coefficients of each image block in X are calculated, the mean and standard deviation of the DCT AC coefficients of each image block in Y are calculated, and finally the covariance between all DCT AC coefficients of the two image blocks at the same coordinate position in X and Y is calculated. The luminance mean and standard deviation of all pixel points of the image block x_{i,j} at coordinate position (i, j) in X are recorded as \mu_{x_{i,j}} and \sigma_{x_{i,j}}; the luminance mean and standard deviation of all pixel points of the image block y_{i,j} at coordinate position (i, j) in Y are recorded as \mu_{y_{i,j}} and \sigma_{y_{i,j}}; the mean and standard deviation of all AC coefficients of the new image block x^D_{i,j} obtained after DCT transformation of x_{i,j} are recorded as \mu_{x^D_{i,j}} and \sigma_{x^D_{i,j}}; the mean and standard deviation of all AC coefficients of the new image block y^D_{i,j} obtained after DCT transformation of y_{i,j} are recorded as \mu_{y^D_{i,j}} and \sigma_{y^D_{i,j}}; the covariance between all AC coefficients of the DCT-domain image block x^D_{i,j} at coordinate position (i, j) in X and the DCT-domain image block y^D_{i,j} at coordinate position (i, j) in Y is recorded as \sigma_{x^D_{i,j} y^D_{i,j}}. These quantities are given by
\mu_{x_{i,j}} = \frac{1}{64} \sum_{u=1}^{8} \sum_{v=1}^{8} x_{i,j}(u,v), \sigma_{x_{i,j}} = \sqrt{\frac{1}{64} \sum_{u=1}^{8} \sum_{v=1}^{8} (x_{i,j}(u,v) - \mu_{x_{i,j}})^2},
\mu_{y_{i,j}} = \frac{1}{64} \sum_{u=1}^{8} \sum_{v=1}^{8} y_{i,j}(u,v), \sigma_{y_{i,j}} = \sqrt{\frac{1}{64} \sum_{u=1}^{8} \sum_{v=1}^{8} (y_{i,j}(u,v) - \mu_{y_{i,j}})^2},
\mu_{x^D_{i,j}} = \frac{1}{64} \sum_{u^D=1}^{8} \sum_{v^D=1}^{8} x^D_{i,j}(u^D,v^D), \sigma_{x^D_{i,j}} = \sqrt{\frac{1}{64} \sum_{u^D=1}^{8} \sum_{v^D=1}^{8} (x^D_{i,j}(u^D,v^D) - \mu_{x^D_{i,j}})^2},
\mu_{y^D_{i,j}} = \frac{1}{64} \sum_{u^D=1}^{8} \sum_{v^D=1}^{8} y^D_{i,j}(u^D,v^D), \sigma_{y^D_{i,j}} = \sqrt{\frac{1}{64} \sum_{u^D=1}^{8} \sum_{v^D=1}^{8} (y^D_{i,j}(u^D,v^D) - \mu_{y^D_{i,j}})^2},
\sigma_{x^D_{i,j} y^D_{i,j}} = \frac{1}{64} \sum_{u^D=1}^{8} \sum_{v^D=1}^{8} [(x^D_{i,j}(u^D,v^D) - \mu_{x^D_{i,j}})(y^D_{i,j}(u^D,v^D) - \mu_{y^D_{i,j}})],
wherein x_{i,j}(u,v) denotes the luminance value of the pixel point at coordinate position (u,v) in x_{i,j}, y_{i,j}(u,v) denotes the luminance value of the pixel point at coordinate position (u,v) in y_{i,j}, 1 ≤ u ≤ 8, 1 ≤ v ≤ 8; x^D_{i,j}(u^D,v^D) denotes the DCT coefficient value at coordinate position (u^D,v^D) in x^D_{i,j}, y^D_{i,j}(u^D,v^D) denotes the DCT coefficient value at coordinate position (u^D,v^D) in y^D_{i,j}, 1 ≤ u^D ≤ 8, 1 ≤ v^D ≤ 8, and u^D and v^D are not both 1, i.e. the DC coefficient is excluded from the AC sums;
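The DCT-domain AC-coefficient statistics can be sketched as follows. This is an illustrative NumPy-only sketch with hypothetical helper names: the orthonormal DCT-II basis is built directly rather than taken from a library, and the 1/64 normalisation with the DC term excluded follows the definitions in the text.

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis matrix C, so that C @ b @ C.T is the 2-D DCT
    # of an n x n block b.
    k = np.arange(n)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)  # constant (DC) row
    return C

def ac_stats(block, C=dct_matrix()):
    # Mean and standard deviation of the 63 AC coefficients of an 8x8 DCT
    # block, keeping the 1/64 normalisation used in the text.
    d = C @ block @ C.T
    ac = np.delete(d.ravel(), 0)       # drop the DC coefficient d[0, 0]
    mu = ac.sum() / 64.0
    sigma = np.sqrt(((ac - mu) ** 2).sum() / 64.0)
    return mu, sigma
```

Building the basis once and reusing it keeps the per-block transform a pair of small matrix multiplications, which suits the dense overlapped block division used here.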
if the distortion type of the distorted image Y is blur-class distortion, calculating the mean and standard deviation of all coefficient values in each image block of the approximate component $X_A$ obtained from X after the one-level wavelet transform, calculating the mean and standard deviation of all coefficient values in each image block of the approximate component $Y_A$ obtained from Y after the one-level wavelet transform, and then calculating the covariance between all coefficients of each pair of image blocks at the same coordinate position in $X_A$ and $Y_A$; denote the mean and standard deviation of all coefficients in the image block $x^{W}_{i^{W},j^{W}}$ at coordinate position $(i^{W},j^{W})$ in $X_A$ as $\mu_{x^{W}_{i^{W},j^{W}}}$ and $\sigma_{x^{W}_{i^{W},j^{W}}}$, denote the mean and standard deviation of all coefficients in the image block $y^{W}_{i^{W},j^{W}}$ at coordinate position $(i^{W},j^{W})$ in $Y_A$ as $\mu_{y^{W}_{i^{W},j^{W}}}$ and $\sigma_{y^{W}_{i^{W},j^{W}}}$, and denote the covariance between all coefficients of these two image blocks as $\sigma_{x^{W}_{i^{W},j^{W}},\,y^{W}_{i^{W},j^{W}}}$:
$$\mu_{x^{W}_{i^{W},j^{W}}}=\frac{1}{64}\sum_{u^{W}=1}^{8}\sum_{v^{W}=1}^{8}x^{W}_{i^{W},j^{W}}(u^{W},v^{W}),\qquad \sigma_{x^{W}_{i^{W},j^{W}}}=\sqrt{\frac{1}{64}\sum_{u^{W}=1}^{8}\sum_{v^{W}=1}^{8}\bigl(x^{W}_{i^{W},j^{W}}(u^{W},v^{W})-\mu_{x^{W}_{i^{W},j^{W}}}\bigr)^{2}},$$
$$\mu_{y^{W}_{i^{W},j^{W}}}=\frac{1}{64}\sum_{u^{W}=1}^{8}\sum_{v^{W}=1}^{8}y^{W}_{i^{W},j^{W}}(u^{W},v^{W}),\qquad \sigma_{y^{W}_{i^{W},j^{W}}}=\sqrt{\frac{1}{64}\sum_{u^{W}=1}^{8}\sum_{v^{W}=1}^{8}\bigl(y^{W}_{i^{W},j^{W}}(u^{W},v^{W})-\mu_{y^{W}_{i^{W},j^{W}}}\bigr)^{2}},$$
$$\sigma_{x^{W}_{i^{W},j^{W}},\,y^{W}_{i^{W},j^{W}}}=\frac{1}{64}\sum_{u^{W}=1}^{8}\sum_{v^{W}=1}^{8}\bigl(x^{W}_{i^{W},j^{W}}(u^{W},v^{W})-\mu_{x^{W}_{i^{W},j^{W}}}\bigr)\bigl(y^{W}_{i^{W},j^{W}}(u^{W},v^{W})-\mu_{y^{W}_{i^{W},j^{W}}}\bigr),$$
wherein $x^{W}_{i^{W},j^{W}}(u^{W},v^{W})$ denotes the coefficient value at coordinate position $(u^{W},v^{W})$ in $x^{W}_{i^{W},j^{W}}$, $y^{W}_{i^{W},j^{W}}(u^{W},v^{W})$ denotes the coefficient value at coordinate position $(u^{W},v^{W})$ in $y^{W}_{i^{W},j^{W}}$, $1\le u^{W}\le 8$, $1\le v^{W}\le 8$;
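The per-block statistics above (mean, standard deviation and covariance, each normalized by the 64 coefficients of an 8x8 block) can be sketched in plain Python; the helper names are illustrative, not from the patent:

```python
import math

def block_mean(b):
    # mu = (1/64) * sum of all 64 coefficients of an 8x8 block
    return sum(sum(row) for row in b) / 64.0

def block_std(b):
    # sigma = sqrt((1/64) * sum (coef - mu)^2), the population form used above
    mu = block_mean(b)
    return math.sqrt(sum((v - mu) ** 2 for row in b for v in row) / 64.0)

def block_cov(bx, by):
    # sigma_xy = (1/64) * sum (x - mu_x)(y - mu_y) over the co-located blocks
    mux, muy = block_mean(bx), block_mean(by)
    return sum((bx[u][v] - mux) * (by[u][v] - muy)
               for u in range(8) for v in range(8)) / 64.0
```

The same three helpers serve all three branches; only the input changes (pixel luminances, DCT coefficients, or wavelet approximation coefficients).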
If the distortion type of the distorted image Y is Gaussian white noise distortion, calculating the luminance function, contrast function and structure function between each pair of image blocks at the same coordinate position in X and Y; denote the luminance function, contrast function and structure function between the image block $x_{i,j}$ at coordinate position (i, j) in X and the image block $y_{i,j}$ at coordinate position (i, j) in Y as $l(x_{i,j},y_{i,j})$, $c(x_{i,j},y_{i,j})$ and $s(x_{i,j},y_{i,j})$, respectively:
$$l(x_{i,j},y_{i,j})=\frac{2\mu_{x_{i,j}}\mu_{y_{i,j}}+C_1}{\mu_{x_{i,j}}^{2}+\mu_{y_{i,j}}^{2}+C_1},\qquad c(x_{i,j},y_{i,j})=\frac{2\sigma_{x_{i,j}}\sigma_{y_{i,j}}+C_2}{\sigma_{x_{i,j}}^{2}+\sigma_{y_{i,j}}^{2}+C_2},\qquad s(x_{i,j},y_{i,j})=\frac{\sigma_{x_{i,j},y_{i,j}}+C_3}{\sigma_{x_{i,j}}\sigma_{y_{i,j}}+C_3},$$
wherein $C_1$, $C_2$, $C_3$ are small constants set to avoid a zero denominator; in this embodiment, $C_1=0.01$, $C_2=0.02$, $C_3=0.01$;
If the distortion type of the distorted image Y is JPEG distortion, calculating the luminance function and contrast function between each pair of image blocks at the same coordinate position in X and Y, and calculating the structure function in the DCT domain between each such pair; denote the luminance function and contrast function between the image block $x_{i,j}$ at coordinate position (i, j) in X and the image block $y_{i,j}$ at coordinate position (i, j) in Y as $l(x_{i,j},y_{i,j})$ and $c(x_{i,j},y_{i,j})$, and denote the structure function between the new image block $x^{D}_{i,j}$ obtained from $x_{i,j}$ after DCT transformation and the new image block $y^{D}_{i,j}$ obtained from $y_{i,j}$ after DCT transformation as $f(x_{i,j},y_{i,j})$:
$$c(x_{i,j},y_{i,j})=\frac{2\sigma_{x_{i,j}}\sigma_{y_{i,j}}+C_2}{\sigma_{x_{i,j}}^{2}+\sigma_{y_{i,j}}^{2}+C_2},\qquad f(x_{i,j},y_{i,j})=\frac{2\sigma_{x^{D}_{i,j}}\sigma_{y^{D}_{i,j}}+C_3}{\sigma_{x^{D}_{i,j}}^{2}+\sigma_{y^{D}_{i,j}}^{2}+C_3},$$
wherein $C_1$, $C_2$, $C_3$ are small constants set to avoid a zero denominator; in this embodiment, $C_1=0.01$, $C_2=0.02$, $C_3=0.01$;
If the distortion type of the distorted image Y is blur-class distortion, calculating the wavelet-domain luminance function, contrast function and structure function between each pair of image blocks at the same coordinate position in X and Y; denote the wavelet-domain luminance function, contrast function and structure function between the image block $x_{i,j}$ at coordinate position (i, j) in X and the image block $y_{i,j}$ at coordinate position (i, j) in Y as $l^{W}(x_{i,j},y_{i,j})$, $c^{W}(x_{i,j},y_{i,j})$ and $s^{W}(x_{i,j},y_{i,j})$, respectively:
$$l^{W}(x_{i,j},y_{i,j})=\frac{2\mu_{x^{W}_{i^{W},j^{W}}}\mu_{y^{W}_{i^{W},j^{W}}}+C_1}{\mu_{x^{W}_{i^{W},j^{W}}}^{2}+\mu_{y^{W}_{i^{W},j^{W}}}^{2}+C_1},\qquad c^{W}(x_{i,j},y_{i,j})=\frac{2\sigma_{x^{W}_{i^{W},j^{W}}}\sigma_{y^{W}_{i^{W},j^{W}}}+C_2}{\sigma_{x^{W}_{i^{W},j^{W}}}^{2}+\sigma_{y^{W}_{i^{W},j^{W}}}^{2}+C_2},$$
$$s^{W}(x_{i,j},y_{i,j})=\frac{\sigma_{x^{W}_{i^{W},j^{W}},\,y^{W}_{i^{W},j^{W}}}+C_3}{\sigma_{x^{W}_{i^{W},j^{W}}}\sigma_{y^{W}_{i^{W},j^{W}}}+C_3},$$
wherein $C_1$, $C_2$, $C_3$ are small constants set to avoid a zero denominator; in this embodiment, $C_1=0.01$, $C_2=0.02$, $C_3=0.01$;
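All three branches share the same three similarity terms, differing only in which domain the block statistics come from. A minimal sketch, using the embodiment's constants C1 = 0.01, C2 = 0.02, C3 = 0.01 and illustrative function names (not from the patent):

```python
C1, C2, C3 = 0.01, 0.02, 0.01  # embodiment constants, avoid zero denominators

def luminance(mu_x, mu_y):
    # l = (2*mu_x*mu_y + C1) / (mu_x^2 + mu_y^2 + C1)
    return (2 * mu_x * mu_y + C1) / (mu_x ** 2 + mu_y ** 2 + C1)

def contrast(sigma_x, sigma_y):
    # c = (2*sigma_x*sigma_y + C2) / (sigma_x^2 + sigma_y^2 + C2)
    return (2 * sigma_x * sigma_y + C2) / (sigma_x ** 2 + sigma_y ** 2 + C2)

def structure(sigma_xy, sigma_x, sigma_y):
    # s = (sigma_xy + C3) / (sigma_x*sigma_y + C3)
    return (sigma_xy + C3) / (sigma_x * sigma_y + C3)
```

Feeding pixel, DCT or wavelet statistics into these functions yields the terms used by the three structural-similarity variants respectively.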
Fifthly, if the distortion type of the distorted image Y is Gaussian white noise distortion, calculating, from the luminance function, contrast function and structure function between each pair of image blocks at the same coordinate position in X and Y, the structural similarity between each such pair; denote the structural similarity between the image block $x_{i,j}$ at coordinate position (i, j) in X and the image block $y_{i,j}$ at coordinate position (i, j) in Y as $\mathrm{SSIM}(x_{i,j},y_{i,j})$:
$$\mathrm{SSIM}(x_{i,j},y_{i,j})=[l(x_{i,j},y_{i,j})]^{\alpha}[c(x_{i,j},y_{i,j})]^{\beta}[s(x_{i,j},y_{i,j})]^{\gamma},$$
wherein α, β and γ are adjustment factors; in this embodiment, α = β = γ = 1;
if the distortion type of the distorted image Y is JPEG distortion, calculating, from the luminance function and contrast function between each pair of image blocks at the same coordinate position in X and Y and the structure function of each such pair in the DCT domain, the DCT-domain structural similarity between each such pair; denote the DCT-domain structural similarity between the image block $x_{i,j}$ at coordinate position (i, j) in X and the image block $y_{i,j}$ at coordinate position (i, j) in Y as $\mathrm{FSSIM}(x_{i,j},y_{i,j})$:
$$\mathrm{FSSIM}(x_{i,j},y_{i,j})=[l(x_{i,j},y_{i,j})]^{\alpha}[c(x_{i,j},y_{i,j})]^{\beta}[f(x_{i,j},y_{i,j})]^{\gamma},$$
wherein α, β and γ are adjustment factors; in this embodiment, α = β = γ = 1;
if the distortion type of the distorted image Y is blur-class distortion, calculating, from the wavelet-domain luminance function, contrast function and structure function between each pair of image blocks at the same coordinate position in X and Y, the wavelet-domain structural similarity between each such pair; denote the wavelet-domain structural similarity between the image block $x_{i,j}$ at coordinate position (i, j) in X and the image block $y_{i,j}$ at coordinate position (i, j) in Y as $\mathrm{WSSIM}(x_{i,j},y_{i,j})$:
$$\mathrm{WSSIM}(x_{i,j},y_{i,j})=[l^{W}(x_{i,j},y_{i,j})]^{\alpha}[c^{W}(x_{i,j},y_{i,j})]^{\beta}[s^{W}(x_{i,j},y_{i,j})]^{\gamma},$$
wherein α, β and γ are adjustment factors; in this embodiment, α = β = γ = 1;
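With α = β = γ = 1, each of SSIM, FSSIM and WSSIM reduces to the plain product of its three factors; a one-line sketch with an assumed function name:

```python
def block_ssim(l, c, s, alpha=1.0, beta=1.0, gamma=1.0):
    # Generic per-block score: [l]^alpha * [c]^beta * [s]^gamma.
    # With the embodiment's alpha = beta = gamma = 1 this is just l * c * s,
    # regardless of whether the terms came from the pixel, DCT or wavelet domain.
    return (l ** alpha) * (c ** beta) * (s ** gamma)
```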
if the distortion type of the distorted image Y is Gaussian white noise distortion, calculating the objective quality score of Y, denoted $Q_{wn}$, from the pixel-domain structural similarity between each pair of image blocks at the same coordinate position in X and Y:
$$Q_{wn}=\frac{1}{M\times N}\sum_{i=1}^{M}\sum_{j=1}^{N}\mathrm{SSIM}(x_{i,j},y_{i,j});$$
If the distortion type of the distorted image Y is JPEG distortion, calculating the objective quality score of Y, denoted $Q_{jpeg}$, from the DCT-domain structural similarity between each pair of image blocks at the same coordinate position in X and Y:
$$Q_{jpeg}=\frac{1}{M\times N}\sum_{i=1}^{M}\sum_{j=1}^{N}\mathrm{FSSIM}(x_{i,j},y_{i,j});$$
If the distortion type of the distorted image Y is blur-class distortion, calculating the objective quality score of Y, denoted $Q_{blur}$, from the wavelet-domain structural similarity between each pair of image blocks at the same coordinate position in X and Y:
$$Q_{blur}=\frac{1}{M\times N}\sum_{i=1}^{M}\sum_{j=1}^{N}\mathrm{WSSIM}(x_{i,j},y_{i,j}).$$
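The three quality scores above share the same pooling form, the mean of the M x N per-block similarities; a sketch with an assumed function name:

```python
def pool_score(block_scores):
    # block_scores: the M*N per-block SSIM/FSSIM/WSSIM values of one image.
    # Q_wn, Q_jpeg and Q_blur are all the arithmetic mean of these values.
    return sum(block_scores) / len(block_scores)
```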
In this embodiment, using the 29 undistorted images and 779 singly-distorted images provided in the LIVE database, together with the DMOS (difference mean opinion score) value of each distorted image, the objective quality score Q of each distorted image is calculated according to the steps described above, and a four-parameter Logistic function is fitted nonlinearly between the objective quality scores Q and the DMOS values of the 779 distorted images. Four objective parameters commonly used for assessing image quality evaluation methods serve as evaluation indexes: the linear correlation coefficient CC (correlation coefficient), the Spearman rank-order correlation coefficient SROCC, the outlier ratio OR, and the root mean squared error RMSE. Higher CC and SROCC values together with lower OR and RMSE values indicate better correlation between the objective evaluation method and DMOS.
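Of the four indexes, SROCC depends only on the ranks of the two score lists. A hedged sketch of the classic formula for untied data follows; the function name and the no-ties assumption are ours, not the patent's:

```python
def srocc(a, b):
    # Spearman rank-order correlation for lists without ties:
    # 1 - 6 * sum(d_k^2) / (n * (n^2 - 1)), d_k = rank difference of item k.
    def ranks(xs):
        order = sorted(range(len(xs)), key=lambda k: xs[k])
        r = [0] * len(xs)
        for rank, k in enumerate(order):
            r[k] = rank + 1
        return r

    n = len(a)
    d2 = sum((ra - rb) ** 2 for ra, rb in zip(ranks(a), ranks(b)))
    return 1 - 6.0 * d2 / (n * (n * n - 1))
```

With ties present, averaged ranks (as in standard statistics packages) would be needed instead.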
Table 1 lists the CC, SROCC, OR and RMSE values characterizing the evaluation performance under the various distortion types. As the data in Table 1 show, the correlation between the objective quality score Q obtained in this embodiment and the subjective score DMOS of the distorted images is high: the CC values all exceed 0.94, the SROCC values all exceed 0.91, the OR values all fall below 0.41, and the RMSE values all fall below 5.4. This indicates that the objective evaluation results of the method of the present invention are highly consistent with subjective human visual perception, and fully illustrates the effectiveness of the method.
TABLE 1 Correlation between the objective evaluation and the subjective evaluation of the distorted images obtained in this embodiment

Claims (5)

1. An adaptive image quality evaluation method based on distortion type judgment, characterized in that the processing procedure is as follows:
firstly, determining the distortion type of a distorted image to be evaluated;
secondly, corresponding processing is carried out by combining the distortion type of the distorted image to be evaluated;
if the distorted image exhibits Gaussian white noise distortion, dividing the original undistorted image and the distorted image to be evaluated into a plurality of overlapped 8×8 image blocks in the pixel domain, and acquiring the pixel-domain structural similarity between each pair of image blocks at the same coordinate position in the original undistorted image and the distorted image to be evaluated by calculating the luminance mean and standard deviation of all pixel points in each image block of the two images in the pixel domain, and the covariance between the luminance values of all pixel points in each such pair of image blocks;
if the distorted image exhibits JPEG distortion, dividing the original undistorted image and the distorted image to be evaluated into a plurality of overlapped 8×8 image blocks in the DCT domain, and acquiring the DCT-domain structural similarity between each pair of image blocks at the same coordinate position in the original undistorted image and the distorted image to be evaluated by calculating the mean and standard deviation of all DCT coefficients of each image block of the two images in the DCT domain, and the covariance between all DCT coefficients of each such pair of image blocks in the DCT domain;
if the distorted image exhibits blur-class distortion, dividing the original undistorted image and the distorted image to be evaluated into a plurality of overlapped 8×8 image blocks in the wavelet domain, and acquiring the wavelet-domain structural similarity between each pair of image blocks at the same coordinate position in the original undistorted image and the distorted image to be evaluated by calculating the mean and standard deviation of all wavelet coefficients of each image block of the two images in the wavelet domain, and the covariance between all wavelet coefficients of each such pair of image blocks in the wavelet domain;
and finally, obtaining the objective quality score of the distorted image to be evaluated according to the structural similarity between the two image blocks with the same coordinate positions in the original undistorted image and the distorted image to be evaluated.
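The adaptive routing described in the steps above can be sketched as a simple dispatch table; the type labels and scorer callables below are placeholders for illustration, not the patent's implementation:

```python
def adaptive_quality(distortion, pixel_ssim, dct_ssim, wavelet_ssim):
    # Route the judged distortion type to its structural-similarity domain:
    # white noise -> pixel-domain SSIM, JPEG -> DCT-domain SSIM,
    # blur-class (Gaussian blur / JPEG2000 / fast fading) -> wavelet-domain SSIM.
    table = {
        "white_noise": pixel_ssim,
        "jpeg": dct_ssim,
        "blur_class": wavelet_ssim,
    }
    return table[distortion]()  # each callable returns that branch's Q score
```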
2. The adaptive image quality evaluation method based on distortion type judgment according to claim 1, characterized in that: the method specifically comprises the following steps:
firstly, letting X denote the original undistorted image and Y denote the distorted image to be evaluated, and determining the distortion type of Y through a distortion type discrimination method, the distortion type of Y being one of Gaussian white noise distortion, JPEG distortion and blur-class distortion, wherein blur-class distortion comprises Gaussian blur distortion, JPEG2000 distortion and fast fading distortion;
if the distortion type of the distorted image Y is Gaussian white noise distortion, moving a sliding window of size 8×8 in X pixel by pixel, dividing X into M×N overlapped image blocks of size 8×8, and denoting the image block at coordinate position (i, j) in X as $x_{i,j}$; similarly, moving the 8×8 sliding window in Y pixel by pixel, dividing Y into M×N overlapped image blocks of size 8×8, and denoting the image block at coordinate position (i, j) in Y as $y_{i,j}$; wherein H denotes the height of X and Y, W denotes the width of X and Y, 1 ≤ i ≤ M, 1 ≤ j ≤ N;
if the distortion type of the distorted image Y is JPEG distortion, moving a sliding window of size 8×8 in X pixel by pixel, dividing X into M×N overlapped image blocks of size 8×8, denoting the image block at coordinate position (i, j) in X as $x_{i,j}$, and performing a two-dimensional DCT on every image block $x_{i,j}$ to obtain the corresponding transformed image block $x^{D}_{i,j}$; similarly, moving the 8×8 sliding window in Y pixel by pixel, dividing Y into M×N overlapped image blocks of size 8×8, denoting the image block at coordinate position (i, j) in Y as $y_{i,j}$, and performing a two-dimensional DCT on every image block $y_{i,j}$ to obtain the corresponding transformed image block $y^{D}_{i,j}$; wherein H denotes the height of X and Y, W denotes the width of X and Y, 1 ≤ i ≤ M, 1 ≤ j ≤ N;
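The two-dimensional DCT of an 8×8 block can be sketched with the standard orthonormal DCT-II; whether the patent uses this exact normalization is not stated, so treat it as an assumption:

```python
import math

def dct2_8x8(block):
    # Orthonormal two-dimensional DCT-II of an 8x8 block (textbook form).
    def c(k):
        return math.sqrt(1.0 / 8) if k == 0 else math.sqrt(2.0 / 8)

    out = [[0.0] * 8 for _ in range(8)]
    for p in range(8):
        for q in range(8):
            s = 0.0
            for u in range(8):
                for v in range(8):
                    s += (block[u][v]
                          * math.cos(math.pi * (2 * u + 1) * p / 16)
                          * math.cos(math.pi * (2 * v + 1) * q / 16))
            out[p][q] = c(p) * c(q) * s  # out[0][0] is the DC coefficient
    return out
```

For a constant block of value 1, this convention puts all energy in the DC coefficient (value 8) and leaves the AC coefficients at zero.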
if the distortion type of the distorted image Y is blur-class distortion, performing a one-level wavelet transform on X and extracting the approximate component, denoted $X_A$; moving a sliding window of size 8×8 in $X_A$ point by point, dividing $X_A$ into M′×N′ overlapped image blocks of size 8×8, and denoting the image block at coordinate position (i′, j′) in $X_A$ as $x^{W}_{i',j'}$; similarly, performing a one-level wavelet transform on Y and extracting the approximate component, denoted $Y_A$; moving the 8×8 sliding window in $Y_A$ point by point, dividing $Y_A$ into M′×N′ overlapped image blocks of size 8×8, and denoting the image block at coordinate position (i′, j′) in $Y_A$ as $y^{W}_{i',j'}$; wherein H′ denotes the height of $X_A$ and $Y_A$, W′ denotes the width of $X_A$ and $Y_A$, 1 ≤ i′ ≤ M′, 1 ≤ j′ ≤ N′;
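The one-level approximation component can be illustrated with a Haar wavelet (the patent does not name its wavelet basis, so Haar is an assumption here); with this convention each LL coefficient is the sum of a non-overlapping 2×2 neighborhood divided by 2, and the component has half the height and width of the input:

```python
def haar_approx(img):
    # One-level Haar approximation (LL) component of a 2D image given as a
    # list of rows: LL[i][j] = (a + b + c + d) / 2 over the 2x2 neighborhood.
    # The /2 scaling convention is an assumption; wavelet toolboxes differ.
    h, w = len(img), len(img[0])
    return [[(img[2 * i][2 * j] + img[2 * i][2 * j + 1]
              + img[2 * i + 1][2 * j] + img[2 * i + 1][2 * j + 1]) / 2.0
             for j in range(w // 2)] for i in range(h // 2)]
```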
thirdly, if the distortion type of the distorted image Y is Gaussian white noise distortion, calculating the luminance mean and standard deviation of all pixel points in each image block in X, calculating the luminance mean and standard deviation of all pixel points in each image block in Y, and then calculating the covariance between all pixel points of each pair of image blocks at the same coordinate position in X and Y; denote the luminance mean and standard deviation of all pixel points in the image block $x_{i,j}$ at coordinate position (i, j) in X as $\mu_{x_{i,j}}$ and $\sigma_{x_{i,j}}$, denote the luminance mean and standard deviation of all pixel points in the image block $y_{i,j}$ at coordinate position (i, j) in Y as $\mu_{y_{i,j}}$ and $\sigma_{y_{i,j}}$, and denote the covariance between all pixel points of $x_{i,j}$ and $y_{i,j}$ as $\sigma_{x_{i,j},y_{i,j}}$:
$$\mu_{x_{i,j}}=\frac{1}{64}\sum_{u=1}^{8}\sum_{v=1}^{8}x_{i,j}(u,v),\qquad \sigma_{x_{i,j}}=\sqrt{\frac{1}{64}\sum_{u=1}^{8}\sum_{v=1}^{8}\bigl(x_{i,j}(u,v)-\mu_{x_{i,j}}\bigr)^{2}},$$
$$\mu_{y_{i,j}}=\frac{1}{64}\sum_{u=1}^{8}\sum_{v=1}^{8}y_{i,j}(u,v),\qquad \sigma_{y_{i,j}}=\sqrt{\frac{1}{64}\sum_{u=1}^{8}\sum_{v=1}^{8}\bigl(y_{i,j}(u,v)-\mu_{y_{i,j}}\bigr)^{2}},$$
$$\sigma_{x_{i,j},y_{i,j}}=\frac{1}{64}\sum_{u=1}^{8}\sum_{v=1}^{8}\bigl(x_{i,j}(u,v)-\mu_{x_{i,j}}\bigr)\bigl(y_{i,j}(u,v)-\mu_{y_{i,j}}\bigr),$$
wherein $x_{i,j}(u,v)$ denotes the luminance value of the pixel point at coordinate position (u, v) in $x_{i,j}$, $y_{i,j}(u,v)$ denotes the luminance value of the pixel point at coordinate position (u, v) in $y_{i,j}$, 1 ≤ u ≤ 8, 1 ≤ v ≤ 8;
if the distortion type of the distorted image Y is JPEG distortion, calculating the luminance mean and standard deviation of all pixel points in each image block in X, calculating the luminance mean and standard deviation of all pixel points in each image block in Y, then calculating the mean and standard deviation of the DCT alternating-current coefficients of each image block in X and of each image block in Y, and finally calculating the covariance between all DCT alternating-current coefficients of each pair of image blocks at the same coordinate position in X and Y; denote the luminance mean and standard deviation of all pixel points in the image block $x_{i,j}$ at coordinate position (i, j) in X as $\mu_{x_{i,j}}$ and $\sigma_{x_{i,j}}$; denote the luminance mean and standard deviation of all pixel points in the image block $y_{i,j}$ at coordinate position (i, j) in Y as $\mu_{y_{i,j}}$ and $\sigma_{y_{i,j}}$; denote the mean and standard deviation of all AC coefficients of the new image block $x^{D}_{i,j}$ obtained from $x_{i,j}$ after DCT transformation as $\mu_{x^{D}_{i,j}}$ and $\sigma_{x^{D}_{i,j}}$; denote the mean and standard deviation of all AC coefficients of the new image block $y^{D}_{i,j}$ obtained from $y_{i,j}$ after DCT transformation as $\mu_{y^{D}_{i,j}}$ and $\sigma_{y^{D}_{i,j}}$; and denote the covariance between all AC coefficients of the DCT-domain image blocks $x^{D}_{i,j}$ and $y^{D}_{i,j}$ at coordinate position (i, j) as $\sigma_{x^{D}_{i,j},y^{D}_{i,j}}$:
$$\mu_{x_{i,j}}=\frac{1}{64}\sum_{u=1}^{8}\sum_{v=1}^{8}x_{i,j}(u,v),\qquad \sigma_{x_{i,j}}=\sqrt{\frac{1}{64}\sum_{u=1}^{8}\sum_{v=1}^{8}\bigl(x_{i,j}(u,v)-\mu_{x_{i,j}}\bigr)^{2}},$$
$$\mu_{y_{i,j}}=\frac{1}{64}\sum_{u=1}^{8}\sum_{v=1}^{8}y_{i,j}(u,v),\qquad \sigma_{y_{i,j}}=\sqrt{\frac{1}{64}\sum_{u=1}^{8}\sum_{v=1}^{8}\bigl(y_{i,j}(u,v)-\mu_{y_{i,j}}\bigr)^{2}},$$
$$\mu_{x^{D}_{i,j}}=\frac{1}{64}\sum_{u^{D}=1}^{8}\sum_{v^{D}=1}^{8}x^{D}_{i,j}(u^{D},v^{D}),\qquad \sigma_{x^{D}_{i,j}}=\sqrt{\frac{1}{64}\sum_{u^{D}=1}^{8}\sum_{v^{D}=1}^{8}\bigl(x^{D}_{i,j}(u^{D},v^{D})-\mu_{x^{D}_{i,j}}\bigr)^{2}},$$
$$\mu_{y^{D}_{i,j}}=\frac{1}{64}\sum_{u^{D}=1}^{8}\sum_{v^{D}=1}^{8}y^{D}_{i,j}(u^{D},v^{D}),\qquad \sigma_{y^{D}_{i,j}}=\sqrt{\frac{1}{64}\sum_{u^{D}=1}^{8}\sum_{v^{D}=1}^{8}\bigl(y^{D}_{i,j}(u^{D},v^{D})-\mu_{y^{D}_{i,j}}\bigr)^{2}},$$
$$\sigma_{x^{D}_{i,j},y^{D}_{i,j}}=\frac{1}{64}\sum_{u^{D}=1}^{8}\sum_{v^{D}=1}^{8}\bigl(x^{D}_{i,j}(u^{D},v^{D})-\mu_{x^{D}_{i,j}}\bigr)\bigl(y^{D}_{i,j}(u^{D},v^{D})-\mu_{y^{D}_{i,j}}\bigr),$$
<mo>(</mo> <msup> <mi>u</mi> <mi>D</mi> </msup> <mo>,</mo> <msup> <mi>v</mi> <mi>D</mi> </msup> <mo>)</mo> </mrow> <mo>-</mo> <msub> <mi>&mu;</mi> <msubsup> <mi>x</mi> <mrow> <mi>i</mi> <mo>,</mo> <mi>j</mi> </mrow> <mi>D</mi> </msubsup> </msub> <mo>)</mo> </mrow> <mrow> <mo>(</mo> <msubsup> <mi>y</mi> <mrow> <mi>i</mi> <mo>,</mo> <mi>j</mi> </mrow> <mi>D</mi> </msubsup> <mrow> <mo>(</mo> <msup> <mi>u</mi> <mi>D</mi> </msup> <mo>,</mo> <msup> <mi>v</mi> <mi>D</mi> </msup> <mo>)</mo> </mrow> <mo>-</mo> <msub> <mi>&mu;</mi> <msubsup> <mi>y</mi> <mrow> <mi>i</mi> <mo>,</mo> <mi>j</mi> </mrow> <mi>D</mi> </msubsup> </msub> <mo>)</mo> </mrow> <mo>]</mo> <mo>,</mo> </mrow> </math> Wherein x isi,j(u, v) denotes xi,jThe brightness value y of the pixel point with the middle coordinate position (u, v)i,j(u, v) represents yi,jThe brightness value of the pixel point with the middle coordinate position (u, v) is that u is more than or equal to 1 and less than or equal to 8, v is more than or equal to 1 and less than or equal to 8,to representThe middle coordinate position is (u)D,vD) The value of the DCT coefficient of (a),to representThe middle coordinate position is (u)D,vD) Value of DCT coefficient of 1. ltoreq. uD≤8,1≤vDU is less than or equal to 8DAnd vDIs not 1 at the same time;
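The per-block DCT statistics above can be sketched in NumPy. This is an illustrative sketch, not the patent's reference implementation: the helper names (`dct_matrix`, `block_dct_ac_stats`) are our own, the transform is the orthonormal 8×8 DCT-II, and for simplicity the mean, standard deviation and covariance are taken over the 63 AC coefficients, whereas the patent's formulas normalise the 8×8 sums by 1/64.

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis matrix: X^D = C @ X @ C.T
    k = np.arange(n).reshape(-1, 1)
    m = np.arange(n).reshape(1, -1)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)
    return C

def block_dct_ac_stats(x_blk, y_blk):
    # Mean/std of the AC coefficients of each 8x8 block, plus their
    # covariance; the DC term (u^D = v^D = 1) is excluded as above.
    C = dct_matrix(8)
    xD = C @ x_blk.astype(float) @ C.T
    yD = C @ y_blk.astype(float) @ C.T
    ac = np.ones((8, 8), dtype=bool)
    ac[0, 0] = False                      # drop the DC coefficient
    xs, ys = xD[ac], yD[ac]               # the 63 AC coefficients
    mu_x, mu_y = xs.mean(), ys.mean()
    sd_x, sd_y = xs.std(), ys.std()
    cov = ((xs - mu_x) * (ys - mu_y)).mean()
    return mu_x, sd_x, mu_y, sd_y, cov
```

For identical blocks the covariance equals the AC variance, so the DCT-domain structure term built from these statistics evaluates to 1.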
if the distortion type of the distorted image Y is blur-class distortion, calculate the mean and standard deviation of all coefficient values in each image block of the approximate component $X^{A}$ obtained from X after a one-level wavelet transform, calculate the mean and standard deviation of all coefficient values in each image block of the approximate component $Y^{A}$ obtained from Y after a one-level wavelet transform, and then calculate the covariance between all coefficients of every two image blocks at the same coordinate position in $X^{A}$ and $Y^{A}$. Denote the mean and standard deviation of all coefficients in the image block $x^{W}_{i^{W},j^{W}}$ at coordinate position $(i^{W},j^{W})$ in $X^{A}$ as $\mu_{x^{W}_{i^{W},j^{W}}}$ and $\sigma_{x^{W}_{i^{W},j^{W}}}$, and those of the image block $y^{W}_{i^{W},j^{W}}$ at coordinate position $(i^{W},j^{W})$ in $Y^{A}$ as $\mu_{y^{W}_{i^{W},j^{W}}}$ and $\sigma_{y^{W}_{i^{W},j^{W}}}$; denote the covariance between all coefficients of the image block $x^{W}_{i^{W},j^{W}}$ in $X^{A}$ and the image block $y^{W}_{i^{W},j^{W}}$ in $Y^{A}$ as $\sigma_{x^{W}_{i^{W},j^{W}},y^{W}_{i^{W},j^{W}}}$. Then

$\mu_{x^{W}_{i^{W},j^{W}}}=\frac{1}{64}\sum_{u^{W}=1}^{8}\sum_{v^{W}=1}^{8}x^{W}_{i^{W},j^{W}}(u^{W},v^{W})$, $\sigma_{x^{W}_{i^{W},j^{W}}}=\sqrt{\frac{1}{64}\sum_{u^{W}=1}^{8}\sum_{v^{W}=1}^{8}\left(x^{W}_{i^{W},j^{W}}(u^{W},v^{W})-\mu_{x^{W}_{i^{W},j^{W}}}\right)^{2}}$,

$\mu_{y^{W}_{i^{W},j^{W}}}=\frac{1}{64}\sum_{u^{W}=1}^{8}\sum_{v^{W}=1}^{8}y^{W}_{i^{W},j^{W}}(u^{W},v^{W})$, $\sigma_{y^{W}_{i^{W},j^{W}}}=\sqrt{\frac{1}{64}\sum_{u^{W}=1}^{8}\sum_{v^{W}=1}^{8}\left(y^{W}_{i^{W},j^{W}}(u^{W},v^{W})-\mu_{y^{W}_{i^{W},j^{W}}}\right)^{2}}$,

$\sigma_{x^{W}_{i^{W},j^{W}},y^{W}_{i^{W},j^{W}}}=\frac{1}{64}\sum_{u^{W}=1}^{8}\sum_{v^{W}=1}^{8}\left[\left(x^{W}_{i^{W},j^{W}}(u^{W},v^{W})-\mu_{x^{W}_{i^{W},j^{W}}}\right)\left(y^{W}_{i^{W},j^{W}}(u^{W},v^{W})-\mu_{y^{W}_{i^{W},j^{W}}}\right)\right]$,

wherein $x^{W}_{i^{W},j^{W}}(u^{W},v^{W})$ denotes the coefficient value at coordinate position $(u^{W},v^{W})$ in $x^{W}_{i^{W},j^{W}}$, $y^{W}_{i^{W},j^{W}}(u^{W},v^{W})$ denotes the coefficient value at coordinate position $(u^{W},v^{W})$ in $y^{W}_{i^{W},j^{W}}$, $1\le u^{W}\le 8$, $1\le v^{W}\le 8$;
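The wavelet-domain statistics can be sketched similarly. The patent does not fix the wavelet family at this point, so the Haar filter below is an assumption, and `haar_approx` and `wavelet_block_stats` are our own helper names; the 1/64-normalised moments follow the formulas above.

```python
import numpy as np

def haar_approx(img):
    # One-level Haar approximation (LL) subband: L = (a + b + c + d) / 2
    # over each non-overlapping 2x2 neighbourhood.
    img = img.astype(float)
    return (img[0::2, 0::2] + img[0::2, 1::2] +
            img[1::2, 0::2] + img[1::2, 1::2]) / 2.0

def wavelet_block_stats(xA, yA, iW, jW, B=8):
    # Mean, std and covariance of the 8x8 blocks at (iW, jW) in the
    # approximate components X^A and Y^A (1/64 normalisation).
    xb = xA[iW * B:(iW + 1) * B, jW * B:(jW + 1) * B].ravel()
    yb = yA[iW * B:(iW + 1) * B, jW * B:(jW + 1) * B].ravel()
    mu_x, mu_y = xb.mean(), yb.mean()
    sd_x, sd_y = xb.std(), yb.std()
    cov = ((xb - mu_x) * (yb - mu_y)).mean()
    return mu_x, sd_x, mu_y, sd_y, cov
```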
If the distortion type of the distorted image Y is white Gaussian noise distortion, calculate the luminance function, contrast function and structure function between every two image blocks at the same coordinate position in X and Y. Denote the luminance function, contrast function and structure function between the image block $x_{i,j}$ at coordinate position $(i,j)$ in X and the image block $y_{i,j}$ at coordinate position $(i,j)$ in Y as $l(x_{i,j},y_{i,j})$, $c(x_{i,j},y_{i,j})$ and $s(x_{i,j},y_{i,j})$, respectively:

$l(x_{i,j},y_{i,j})=\frac{2\mu_{x_{i,j}}\mu_{y_{i,j}}+C_{1}}{\mu_{x_{i,j}}^{2}+\mu_{y_{i,j}}^{2}+C_{1}}$, $c(x_{i,j},y_{i,j})=\frac{2\sigma_{x_{i,j}}\sigma_{y_{i,j}}+C_{2}}{\sigma_{x_{i,j}}^{2}+\sigma_{y_{i,j}}^{2}+C_{2}}$, $s(x_{i,j},y_{i,j})=\frac{\sigma_{x_{i,j},y_{i,j}}+C_{3}}{\sigma_{x_{i,j}}\sigma_{y_{i,j}}+C_{3}}$,

wherein $C_{1}$, $C_{2}$ and $C_{3}$ are small positive constants set to avoid a zero denominator, and $\sigma_{x_{i,j},y_{i,j}}$ is the covariance between the luminance values of $x_{i,j}$ and $y_{i,j}$;
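A minimal sketch of the three component functions. The constant values are illustrative assumptions (the classic SSIM choices for 8-bit images, $C_1=(0.01\cdot255)^2$, $C_2=(0.03\cdot255)^2$, $C_3=C_2/2$); the patent only requires small positive constants.

```python
# SSIM component functions from the definitions above. The constant
# values are illustrative assumptions, not fixed by the patent.
C1 = (0.01 * 255) ** 2   # 6.5025
C2 = (0.03 * 255) ** 2   # 58.5225
C3 = C2 / 2.0            # 29.26125

def luminance(mu_x, mu_y):
    return (2 * mu_x * mu_y + C1) / (mu_x ** 2 + mu_y ** 2 + C1)

def contrast(sd_x, sd_y):
    return (2 * sd_x * sd_y + C2) / (sd_x ** 2 + sd_y ** 2 + C2)

def structure(cov, sd_x, sd_y):
    return (cov + C3) / (sd_x * sd_y + C3)
```

For a distortion-free block pair (equal means, equal standard deviations, covariance equal to the variance) each function returns exactly 1.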
if the distortion type of the distorted image Y is JPEG distortion, calculate the luminance function and contrast function between every two image blocks at the same coordinate position in X and Y, and calculate the structure function in the DCT domain of every two image blocks at the same coordinate position in X and Y. Denote the luminance function and contrast function between the image block $x_{i,j}$ at coordinate position $(i,j)$ in X and the image block $y_{i,j}$ at coordinate position $(i,j)$ in Y as $l(x_{i,j},y_{i,j})$ and $c(x_{i,j},y_{i,j})$, and denote the structure function between the new image block $x^{D}_{i,j}$ obtained after the DCT transformation of $x_{i,j}$ and the new image block $y^{D}_{i,j}$ obtained after the DCT transformation of $y_{i,j}$ as $f(x_{i,j},y_{i,j})$:

$c(x_{i,j},y_{i,j})=\frac{2\sigma_{x_{i,j}}\sigma_{y_{i,j}}+C_{2}}{\sigma_{x_{i,j}}^{2}+\sigma_{y_{i,j}}^{2}+C_{2}}$, $f(x_{i,j},y_{i,j})=\frac{\sigma_{x^{D}_{i,j},y^{D}_{i,j}}+C_{3}}{\sigma_{x^{D}_{i,j}}\sigma_{y^{D}_{i,j}}+C_{3}}$,

wherein $C_{1}$, $C_{2}$ and $C_{3}$ are small positive constants set to avoid a zero denominator, and $l(x_{i,j},y_{i,j})$ takes the same form as in the white Gaussian noise case;
if the distortion type of the distorted image Y is blur-class distortion, calculate the wavelet-coefficient luminance function, wavelet-coefficient contrast function and wavelet-coefficient structure function between every two image blocks at the same coordinate position in X and Y. Denote the wavelet-coefficient luminance function, wavelet-coefficient contrast function and wavelet-coefficient structure function between the image block $x_{i,j}$ at coordinate position $(i,j)$ in X and the image block $y_{i,j}$ at coordinate position $(i,j)$ in Y as $l^{W}(x_{i,j},y_{i,j})$, $c^{W}(x_{i,j},y_{i,j})$ and $s^{W}(x_{i,j},y_{i,j})$, respectively:

$l^{W}(x_{i,j},y_{i,j})=\frac{2\mu_{x^{W}_{i^{W},j^{W}}}\mu_{y^{W}_{i^{W},j^{W}}}+C_{1}}{\mu_{x^{W}_{i^{W},j^{W}}}^{2}+\mu_{y^{W}_{i^{W},j^{W}}}^{2}+C_{1}}$, $c^{W}(x_{i,j},y_{i,j})=\frac{2\sigma_{x^{W}_{i^{W},j^{W}}}\sigma_{y^{W}_{i^{W},j^{W}}}+C_{2}}{\sigma_{x^{W}_{i^{W},j^{W}}}^{2}+\sigma_{y^{W}_{i^{W},j^{W}}}^{2}+C_{2}}$, $s^{W}(x_{i,j},y_{i,j})=\frac{\sigma_{x^{W}_{i^{W},j^{W}},y^{W}_{i^{W},j^{W}}}+C_{3}}{\sigma_{x^{W}_{i^{W},j^{W}}}\sigma_{y^{W}_{i^{W},j^{W}}}+C_{3}}$,

wherein $C_{1}$, $C_{2}$ and $C_{3}$ are small positive constants set to avoid a zero denominator;
fifthly, if the distortion type of the distorted image Y is white Gaussian noise distortion, calculate the structural similarity between every two image blocks at the same coordinate position in X and Y from the luminance function, contrast function and structure function between them. Denote the structural similarity between the image block $x_{i,j}$ at coordinate position $(i,j)$ in X and the image block $y_{i,j}$ at coordinate position $(i,j)$ in Y as $\mathrm{SSIM}(x_{i,j},y_{i,j})$, $\mathrm{SSIM}(x_{i,j},y_{i,j})=[l(x_{i,j},y_{i,j})]^{\alpha}[c(x_{i,j},y_{i,j})]^{\beta}[s(x_{i,j},y_{i,j})]^{\gamma}$, wherein $\alpha$, $\beta$ and $\gamma$ are adjustment factors;
if the distortion type of the distorted image Y is JPEG distortion, calculate the DCT-domain structural similarity between every two image blocks at the same coordinate position in X and Y from the luminance function and contrast function between them and the structure function of the two image blocks in the DCT domain. Denote the DCT-domain structural similarity between the image block $x_{i,j}$ at coordinate position $(i,j)$ in X and the image block $y_{i,j}$ at coordinate position $(i,j)$ in Y as $\mathrm{FSSIM}(x_{i,j},y_{i,j})$, $\mathrm{FSSIM}(x_{i,j},y_{i,j})=[l(x_{i,j},y_{i,j})]^{\alpha}[c(x_{i,j},y_{i,j})]^{\beta}[f(x_{i,j},y_{i,j})]^{\gamma}$, wherein $\alpha$, $\beta$ and $\gamma$ are adjustment factors;
if the distortion type of the distorted image Y is blur-class distortion, calculate the wavelet-domain structural similarity between every two image blocks at the same coordinate position in X and Y from the wavelet-coefficient luminance function, wavelet-coefficient contrast function and wavelet-coefficient structure function between them. Denote the wavelet-domain structural similarity between the image block $x_{i,j}$ at coordinate position $(i,j)$ in X and the image block $y_{i,j}$ at coordinate position $(i,j)$ in Y as $\mathrm{WSSIM}(x_{i,j},y_{i,j})$, $\mathrm{WSSIM}(x_{i,j},y_{i,j})=[l^{W}(x_{i,j},y_{i,j})]^{\alpha}[c^{W}(x_{i,j},y_{i,j})]^{\beta}[s^{W}(x_{i,j},y_{i,j})]^{\gamma}$, wherein $\alpha$, $\beta$ and $\gamma$ are adjustment factors;
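All three branches combine their components in the same way; only the domain of the statistics changes. A hedged sketch follows (`block_similarity` is our own name, $\alpha=\beta=\gamma=1$ is the usual choice, and the constants are the illustrative SSIM values, none of which are fixed by the patent):

```python
def block_similarity(mu_x, sd_x, mu_y, sd_y, cov,
                     alpha=1.0, beta=1.0, gamma=1.0,
                     C1=6.5025, C2=58.5225, C3=29.26125):
    # [l]^alpha * [c]^beta * [s]^gamma for one block pair. Feeding pixel-,
    # DCT- or wavelet-domain statistics yields SSIM, FSSIM or WSSIM
    # respectively, since the three models share this combination step.
    l = (2 * mu_x * mu_y + C1) / (mu_x ** 2 + mu_y ** 2 + C1)
    c = (2 * sd_x * sd_y + C2) / (sd_x ** 2 + sd_y ** 2 + C2)
    s = (cov + C3) / (sd_x * sd_y + C3)
    return (l ** alpha) * (c ** beta) * (s ** gamma)
```

An undistorted block pair scores exactly 1, and any mismatch in mean, spread or structure pulls the score below 1.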
if the distortion type of the distorted image Y is white Gaussian noise distortion, calculate the objective quality score of Y from the pixel-domain structural similarity between every two image blocks at the same coordinate position in X and Y, denoted $Q_{wn}$: $Q_{wn}=\frac{1}{M\times N}\sum_{i=1}^{M}\sum_{j=1}^{N}\mathrm{SSIM}(x_{i,j},y_{i,j})$;
If the distortion type of the distorted image Y is JPEG distortion, calculate the objective quality score of Y from the DCT-domain structural similarity between every two image blocks at the same coordinate position in X and Y, denoted $Q_{jpeg}$: $Q_{jpeg}=\frac{1}{M\times N}\sum_{i=1}^{M}\sum_{j=1}^{N}\mathrm{FSSIM}(x_{i,j},y_{i,j})$;
If the distortion type of the distorted image Y is blur-class distortion, calculate the objective quality score of Y from the wavelet-domain structural similarity between every two image blocks at the same coordinate position in X and Y, denoted $Q_{blur}$: $Q_{blur}=\frac{1}{M\times N}\sum_{i=1}^{M}\sum_{j=1}^{N}\mathrm{WSSIM}(x_{i,j},y_{i,j})$.
3. The adaptive image quality evaluation method based on distortion type judgment according to claim 2, characterized in that the specific process of determining the distortion type of Y with the distortion type discrimination method in step ① is as follows:
①-a. Divide X into non-overlapping 64×64 blocks to obtain M′×N′ image blocks of size 64×64, and denote the image block at coordinate position $(i',j')$ in X as $x'_{i',j'}$. Perform a one-level wavelet decomposition on each image block $x'_{i',j'}$ and extract its diagonal component, find the median of the coefficient magnitudes in the diagonal component of each image block, and calculate the noise standard deviation of each image block. Denote the median of the coefficient magnitudes in the diagonal wavelet component of $x'_{i',j'}$ as $\mathrm{MED}_{x'_{i',j'}}$; its noise standard deviation is $\sigma_{x'_{i',j'}}=\frac{\mathrm{MED}_{x'_{i',j'}}}{0.6745}$, wherein $1\le i'\le M'$, $1\le j'\le N'$;
similarly, divide Y into non-overlapping 64×64 blocks to obtain M′×N′ image blocks of size 64×64, and denote the image block at coordinate position $(i',j')$ in Y as $y'_{i',j'}$. Perform a one-level wavelet decomposition on each image block $y'_{i',j'}$ and extract its diagonal component, find the median of the coefficient magnitudes in the diagonal component of each image block, and calculate the noise standard deviation of each image block. Denote the median of the coefficient magnitudes in the diagonal wavelet component of $y'_{i',j'}$ as $\mathrm{MED}_{y'_{i',j'}}$; its noise standard deviation is $\sigma_{y'_{i',j'}}=\frac{\mathrm{MED}_{y'_{i',j'}}}{0.6745}$;
Calculate the difference of the noise standard deviations between every two image blocks at the same coordinate position in X and Y, and denote the difference of the noise standard deviations between the image blocks $x'_{i',j'}$ and $y'_{i',j'}$ at coordinate position $(i',j')$ in X and Y as $\Delta\sigma_{i',j'}$; then calculate the mean of the differences of the noise standard deviations over all image blocks at the same coordinate position in X and Y, denoted $\overline{\Delta\sigma}$: $\overline{\Delta\sigma}=\frac{1}{M'\times N'}\sum_{i'=1}^{M'}\sum_{j'=1}^{N'}\Delta\sigma_{i',j'}$;
①-c. Judge whether $\overline{\Delta\sigma}>Th_{WN}$ holds; if it does, determine that the distortion type of Y is white Gaussian noise distortion and end; otherwise, execute step ①-d, wherein $Th_{WN}$ is the white Gaussian noise distortion judgment threshold;
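The noise estimate and the threshold test can be sketched as follows. This is an illustrative sketch only: the Haar filter, the signed (rather than absolute) per-block difference, and the threshold value `th_wn=1.0` are all assumptions, and the function names are our own.

```python
import numpy as np

def noise_std_haar(block):
    # Robust noise estimate from the diagonal (HH) subband of a one-level
    # Haar transform: sigma = median(|d|) / 0.6745 (MAD estimator).
    b = block.astype(float)
    d = (b[0::2, 0::2] - b[0::2, 1::2] - b[1::2, 0::2] + b[1::2, 1::2]) / 2.0
    return np.median(np.abs(d)) / 0.6745

def is_white_noise_distortion(X, Y, th_wn=1.0, B=64):
    # Mean increase of the per-64x64-block noise estimate from X to Y,
    # compared against the judgment threshold Th_WN.
    H, W = X.shape
    diffs = [noise_std_haar(Y[i:i + B, j:j + B]) - noise_std_haar(X[i:i + B, j:j + B])
             for i in range(0, H - B + 1, B)
             for j in range(0, W - B + 1, B)]
    return float(np.mean(diffs)) > th_wn
```

Added white Gaussian noise raises the HH-subband median in every block of Y but not of X, so the mean difference crosses the threshold for noisy images while staying near zero for the other distortion classes.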
①-d. Calculate the luminance difference map of X, denoted $X_{h}$, and denote the coefficient value at coordinate position $(i'',j'')$ in $X_{h}$ as $X_{h}(i'',j'')$, $X_{h}(i'',j'')=|X(i'',j'')-X(i'',j''+1)|$, wherein $1\le i''\le H$, $1\le j''\le W-1$, $X(i'',j'')$ denotes the luminance value of the pixel point at coordinate position $(i'',j'')$ in X, $X(i'',j''+1)$ denotes the luminance value of the pixel point at coordinate position $(i'',j''+1)$ in X, and the symbol "| |" is the absolute value symbol;
similarly, calculate the luminance difference map of Y, denoted $Y_{h}$, and denote the coefficient value at coordinate position $(i'',j'')$ in $Y_{h}$ as $Y_{h}(i'',j'')$, $Y_{h}(i'',j'')=|Y(i'',j'')-Y(i'',j''+1)|$, wherein $1\le i''\le H$, $1\le j''\le W-1$, $Y(i'',j'')$ denotes the luminance value of the pixel point at coordinate position $(i'',j'')$ in Y, and $Y(i'',j''+1)$ denotes the luminance value of the pixel point at coordinate position $(i'',j''+1)$ in Y;
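The horizontal luminance difference map is a one-liner in NumPy; `luminance_diff_map` is an assumed helper name for this sketch.

```python
import numpy as np

def luminance_diff_map(img):
    # X_h(i'', j'') = |X(i'', j'') - X(i'', j'' + 1)|; output is H x (W - 1)
    img = img.astype(float)
    return np.abs(img[:, :-1] - img[:, 1:])
```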
①-e. Divide the luminance difference map $X_{h}$ of X into non-overlapping 8×8 blocks to obtain M″×N″ image blocks of size 8×8, and denote the image block at coordinate position $(i''',j''')$ in $X_{h}$ as $x^{h}_{i''',j'''}$. Define the block energy and block edge energy of the image block $x^{h}_{i''',j'''}$ as $Ex^{In}_{i''',j'''}$ and $Ex^{Ed}_{i''',j'''}$, respectively:

$Ex^{In}_{i''',j'''}=\frac{1}{56}\sum_{p=1}^{8}\sum_{q=1}^{7}x^{h}_{i''',j'''}(p,q)$, $Ex^{Ed}_{i''',j'''}=\frac{1}{8}\sum_{p=1}^{8}x^{h}_{i''',j'''}(p,8)$,

wherein $x^{h}_{i''',j'''}(p,q)$ denotes the coefficient value at coordinate position $(p,q)$ in $x^{h}_{i''',j'''}$, $x^{h}_{i''',j'''}(p,8)$ denotes the coefficient value at coordinate position $(p,8)$ in $x^{h}_{i''',j'''}$, $1\le i'''\le M''$, $1\le j'''\le N''$, $1\le p\le 8$, $1\le q\le 7$;
Similarly, divide the luminance difference map Y^h of Y into non-overlapping 8×8 blocks, obtaining M″×N″ image blocks; denote the image block of Y^h whose coordinate position is (i‴, j‴) as $y^{h}_{i''',j'''}$. Define the block energy and the block edge energy of $y^{h}_{i''',j'''}$ as

$$Ey^{In}_{i''',j'''}=\frac{1}{56}\sum_{p=1}^{8}\sum_{q=1}^{7}y^{h}_{i''',j'''}(p,q),\qquad Ey^{Ed}_{i''',j'''}=\frac{1}{8}\sum_{p=1}^{8}y^{h}_{i''',j'''}(p,8),$$

where $y^{h}_{i''',j'''}(p,q)$ and $y^{h}_{i''',j'''}(p,8)$ are the coefficient values of $y^{h}_{i''',j'''}$ at coordinate positions (p, q) and (p, 8), respectively;
f. Calculate the ratio of block edge energy to block energy for every image block in X^h; for the image block $x^{h}_{i''',j'''}$ at coordinate position (i‴, j‴), record this ratio as

$$R^{x}_{i''',j'''}=\frac{Ex^{Ed}_{i''',j'''}}{Ex^{In}_{i''',j'''}};$$

likewise, calculate the ratio of block edge energy to block energy for every image block in Y^h; for the image block $y^{h}_{i''',j'''}$ at coordinate position (i‴, j‴), record this ratio as

$$R^{y}_{i''',j'''}=\frac{Ey^{Ed}_{i''',j'''}}{Ey^{In}_{i''',j'''}};$$
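The per-block energies and their ratio defined above can be sketched in plain Python. This is an illustrative implementation only; the function names and the sample block are mine, not from the patent.

```python
def block_energies(block):
    """Given an 8x8 block (list of 8 rows of 8 coefficients) of a
    luminance difference map, return (e_in, e_ed):
    e_in = block energy, the mean of the first 7 columns (56 values);
    e_ed = block edge energy, the mean of the 8th column (8 values)."""
    assert len(block) == 8 and all(len(row) == 8 for row in block)
    e_in = sum(block[p][q] for p in range(8) for q in range(7)) / 56.0
    e_ed = sum(block[p][7] for p in range(8)) / 8.0
    return e_in, e_ed

def edge_ratio(block):
    """Ratio R = e_ed / e_in of block edge energy to block energy."""
    e_in, e_ed = block_energies(block)
    return e_ed / e_in
```

For a block whose first seven columns are all 1 and whose eighth column is all 2, the block energy is 1.0, the block edge energy is 2.0, and the ratio is 2.0, reflecting the elevated energy that JPEG blocking artifacts concentrate on block boundaries.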
Count the number of image blocks whose ratios $R^{y}_{i''',j'''}$ and $R^{x}_{i''',j'''}$ satisfy the inequality, and denote this count as $N_0$; then define the judgment index J as

$$J=\frac{N_0}{M''\times N''};$$
g. Judge whether $J>Th_{JPEG}$ holds; if so, determine the distortion type of Y to be JPEG distortion and end; otherwise, execute step h. Here $Th_{JPEG}$ is the JPEG distortion judgment threshold;
h. Determine the distortion type of Y to be blur-class distortion, i.e., Gaussian blur distortion, JPEG2000 distortion, or fast fading distortion.
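Steps g and h above reduce to a threshold test on the index J. A minimal sketch follows; the exact inequality counted into $N_0$ is not legible in this extract, so the condition $R^y > R^x$ used here is an assumption, as is the function name, while the default threshold 0.57 is taken from claim 4.

```python
def jpeg_judgment(ratios_x, ratios_y, th_jpeg=0.57):
    """Given the per-block ratios R^x and R^y as flat lists of equal
    length M''*N'', count the blocks satisfying the inequality
    (assumed here to be R^y > R^x), form J = N0 / (M''*N''), and
    return ('JPEG', J) if J exceeds th_jpeg, else ('blur-class', J)."""
    assert len(ratios_x) == len(ratios_y) > 0
    n0 = sum(1 for rx, ry in zip(ratios_x, ratios_y) if ry > rx)
    j = n0 / len(ratios_x)
    label = 'JPEG' if j > th_jpeg else 'blur-class'
    return label, j
```

With 7 of 10 blocks satisfying the assumed condition, J = 0.7 > 0.57 and the image would be classified as JPEG-distorted; a blur-class image, which spreads rather than concentrates edge energy, would yield a small J and fall through to step h.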
4. The adaptive image quality evaluation method based on distortion type judgment according to claim 3, wherein: in step c, the Gaussian white noise distortion judgment threshold $Th_{WN}$ is 0.8; in step g, the JPEG distortion judgment threshold $Th_{JPEG}$ is 0.57.
5. The adaptive image quality evaluation method based on distortion type judgment according to claim 2, wherein: in step four, $C_1$ = 0.01, $C_2$ = 0.02, $C_3$ = 0.01, and γ = 1 are taken.
CN201310406821.0A 2013-09-09 2013-09-09 Adaptive image quality evaluation method based on distortion type judgment Expired - Fee Related CN103475897B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310406821.0A CN103475897B (en) 2013-09-09 2013-09-09 Adaptive image quality evaluation method based on distortion type judgment


Publications (2)

Publication Number Publication Date
CN103475897A CN103475897A (en) 2013-12-25
CN103475897B true CN103475897B (en) 2015-03-11

Family

ID=49800574



Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104123723A (en) * 2014-07-08 2014-10-29 上海交通大学 Structure compensation based image quality evaluation method
CN104918039B (en) * 2015-05-05 2017-06-13 四川九洲电器集团有限责任公司 image quality evaluating method and system
CN105894522B (en) * 2016-04-28 2018-05-25 宁波大学 A kind of more distortion objective evaluation method for quality of stereo images
CN106412569B (en) * 2016-09-28 2017-12-15 宁波大学 A kind of selection of feature based without referring to more distortion stereo image quality evaluation methods
CN106778917A (en) * 2017-01-24 2017-05-31 北京理工大学 Based on small echo statistical nature without reference noise image quality evaluating method
CN108664839B (en) * 2017-03-27 2024-01-12 北京三星通信技术研究有限公司 Image processing method and device
CN107770517A (en) * 2017-10-24 2018-03-06 天津大学 Full reference image quality appraisement method based on image fault type
CN110415207A (en) * 2019-04-30 2019-11-05 杭州电子科技大学 A method of the image quality measure based on image fault type
CN111179242B (en) * 2019-12-25 2023-06-02 Tcl华星光电技术有限公司 Image processing method and device

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
CN102209257B (en) * 2011-06-17 2013-11-20 宁波大学 Stereo image quality objective evaluation method
CN102333233B (en) * 2011-09-23 2013-11-06 宁波大学 Stereo image quality objective evaluation method based on visual perception
CN102421007B (en) * 2011-11-28 2013-09-04 浙江大学 Image quality evaluating method based on multi-scale structure similarity weighted aggregate
CN102945552A (en) * 2012-10-22 2013-02-27 西安电子科技大学 No-reference image quality evaluation method based on sparse representation in natural scene statistics
CN102982532B (en) * 2012-10-31 2015-06-17 宁波大学 Stereo image objective quality evaluation method base on matrix decomposition



Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20150311

Termination date: 20190909
