CN103475897B - Adaptive image quality evaluation method based on distortion type judgment


Info

Publication number: CN103475897B
Application number: CN201310406821.0A
Authority: CN (China)
Other languages: Chinese (zh)
Other versions: CN103475897A
Inventors: 蒋刚毅, 靳鑫, 郁梅, 邵枫, 彭宗举, 陈芬, 王晓东, 李福翠
Current Assignee: Ningbo University
Original Assignee: Ningbo University
Application filed by Ningbo University; priority to CN201310406821.0A
Publication of application: CN103475897A
Publication of grant: CN103475897B
Legal status: Expired - Fee Related

Landscapes

  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Compression Of Band Width Or Redundancy In Fax (AREA)

Abstract

The invention discloses an objective method for adaptive image quality evaluation based on distortion type judgment. The method first judges the distortion type of the image, dividing distortion into three classes: Gaussian white noise distortion, JPEG distortion, and blur-like distortion, where blur-like distortion covers Gaussian blur distortion, JPEG2000 distortion, and fast-fading distortion. Using the discrimination result, images degraded by Gaussian white noise are evaluated with a pixel-domain structural similarity model, images degraded by JPEG compression with a DCT-domain structural similarity model, and images with blur-like distortion with a wavelet-domain structural similarity model. Implementation results show that, through the distortion discrimination step, the proposed objective method combines the strengths of the individual structural similarity models when evaluating distorted images of the different distortion types, and its results agree closely with subjective human perception.

Description

An Adaptive Image Quality Evaluation Method Based on Distortion Type Judgment

Technical Field

The invention relates to image quality evaluation technology, and in particular to an adaptive objective image quality evaluation method based on distortion type judgment.

Background Art

With the rapid development of modern communication technology, society has entered the information age. Images are an important information carrier, and their quality directly affects how accurately and completely a recipient can extract the information they convey. However, during acquisition, processing, storage, and transmission, distortion or degradation is unavoidable, whether due to imperfect processing methods or nonconforming equipment, and different degrees of distortion or degradation cause different degrees of information loss. With image technology so widely deployed, measuring the information loss caused by different degrees of distortion is particularly important, and the field of image quality evaluation exists to address exactly this practical problem. Methodologically, image quality evaluation divides into subjective and objective evaluation. Subjective evaluation relies on the perception of human observers; it is cumbersome and time-consuming and unsuitable for integration into practical applications. Objective evaluation uses a model to produce a quantitative index that simulates the perceptual mechanism of the human visual system (HVS) to measure image quality; it is simple to operate, easy to implement, and amenable to real-time optimization, and has therefore become the research focus in image quality evaluation. Depending on which aspects of the HVS they emphasize, objective methods fall into two classes: those based on error sensitivity and those based on structural similarity. Error-sensitivity methods account for visual nonlinearity, multiple channels, the contrast sensitivity bandpass, masking effects, interactions between stimuli in different channels, and visual psychology; but because the HVS is not yet fully understood, such methods face an obvious obstacle to accurate modeling. Structural-similarity methods assume that natural images have specific structures and that human perception of an image is mainly extracted from this structural information, so they evaluate the structural similarity of image signals directly; they have low implementation complexity and broad applicability. However, many structural-similarity variants exist in the prior art, and none of them guarantees accurate quality evaluation across images with different distortion types.

Summary of the Invention

The technical problem to be solved by the invention is to provide an objective method for adaptive image quality evaluation that can effectively improve the accuracy of the quality evaluation results for images with different distortion types.

The technical solution adopted by the invention to solve the above problem is an adaptive image quality evaluation method based on distortion type judgment, whose processing flow is as follows:

First, determine the distortion type of the distorted image to be evaluated.

Second, process the image according to its distortion type:

If the distorted image suffers from Gaussian white noise distortion, partition the original undistorted image and the distorted image to be evaluated into multiple overlapping 8×8 image blocks in the pixel domain. From the luminance mean and standard deviation of all pixels in every block of both images, together with the covariance between the luminance values of every pair of co-located blocks (blocks at the same coordinate position in the two images), obtain the pixel-domain structural similarity of every pair of co-located blocks.

If the distorted image suffers from JPEG distortion, partition both images into multiple overlapping 8×8 blocks and transform each block into the DCT (Discrete Cosine Transform) domain. From the mean and standard deviation of the DCT coefficients of every block of both images, together with the covariance between the DCT coefficients of every pair of co-located blocks, obtain the DCT-domain structural similarity of every pair of co-located blocks.

If the distorted image suffers from blur-like distortion, partition both images into multiple overlapping 8×8 blocks in the wavelet domain. From the mean and standard deviation of the wavelet coefficients of every block of both images, together with the covariance between the wavelet coefficients of every pair of co-located blocks, obtain the wavelet-domain structural similarity of every pair of co-located blocks.

Finally, from the structural similarity of all pairs of co-located blocks in the original undistorted image and the distorted image to be evaluated, obtain the objective quality score of the distorted image.

The adaptive image quality evaluation method based on distortion type judgment of the invention specifically comprises the following steps:

① Let X denote the original undistorted image and Y the distorted image to be evaluated. Determine the distortion type of Y by the distortion type discrimination method; the distortion type of Y is one of Gaussian white noise distortion, JPEG distortion, and blur-like distortion, where blur-like distortion covers Gaussian blur distortion, JPEG2000 distortion, and fast-fading distortion.

② If the distortion type of Y is Gaussian white noise distortion, slide an 8×8 window over X one pixel at a time, partitioning X into M×N overlapping 8×8 image blocks, and denote the block of X at coordinate position (i,j) by $x_{i,j}$. Likewise, slide an 8×8 window over Y one pixel at a time, partitioning Y into M×N overlapping 8×8 image blocks, and denote the block of Y at (i,j) by $y_{i,j}$. Here H denotes the height of X and Y, W their width, M and N are the numbers of block positions vertically and horizontally (M = H−7 and N = W−7 for a window moving one pixel at a time), 1 ≤ i ≤ M, and 1 ≤ j ≤ N.
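
For illustration, a minimal Python sketch of this overlapping partition, assuming numpy ≥ 1.20 is available (the function name is illustrative, not from the patent):

```python
import numpy as np

def blocks_8x8(img: np.ndarray) -> np.ndarray:
    """Slide an 8x8 window one pixel at a time over an HxW image.

    Returns an (M, N, 8, 8) view with M = H - 7 and N = W - 7, so
    blocks[i, j] is the block whose top-left corner is at pixel (i, j).
    """
    return np.lib.stride_tricks.sliding_window_view(img, (8, 8))
```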

If the distortion type of Y is JPEG distortion, slide an 8×8 window over X one pixel at a time, partitioning X into M×N overlapping 8×8 image blocks; denote the block of X at (i,j) by $x_{i,j}$, apply a two-dimensional DCT to every block $x_{i,j}$, and denote the transformed block by $x^D_{i,j}$. Likewise, slide an 8×8 window over Y one pixel at a time, partitioning Y into M×N overlapping 8×8 image blocks; denote the block of Y at (i,j) by $y_{i,j}$, apply a two-dimensional DCT to every block $y_{i,j}$, and denote the transformed block by $y^D_{i,j}$. Here H denotes the height of X and Y, W their width, 1 ≤ i ≤ M, and 1 ≤ j ≤ N.
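
A sketch of the per-block two-dimensional DCT, assuming scipy; the orthonormal type-II DCT is an assumption, since the text only specifies a two-dimensional DCT:

```python
import numpy as np
from scipy.fft import dctn

def dct_blocks(blocks: np.ndarray) -> np.ndarray:
    """Apply a 2-D DCT to every 8x8 block of an (M, N, 8, 8) array."""
    return dctn(blocks, type=2, norm='ortho', axes=(-2, -1))
```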

If the distortion type of Y is blur-like distortion, apply a one-level wavelet transform to X, extract the approximation component and denote it XA; slide an 8×8 window over XA one point at a time, partitioning XA into M'×N' overlapping 8×8 image blocks, and denote the block of XA at (i',j') by $x^W_{i',j'}$. Likewise, apply a one-level wavelet transform to Y, extract the approximation component and denote it YA; slide an 8×8 window over YA one point at a time, partitioning YA into M'×N' overlapping 8×8 image blocks, and denote the block of YA at (i',j') by $y^W_{i',j'}$. Here H' denotes the height of XA and YA, W' their width, M' = H'−7, N' = W'−7, 1 ≤ i' ≤ M', and 1 ≤ j' ≤ N'.
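
A sketch of the wavelet-domain preparation, assuming PyWavelets (pywt); the Haar filter is an assumption carried over from the Haar decomposition named later in the discrimination procedure:

```python
import numpy as np
import pywt

def approx_component(img: np.ndarray) -> np.ndarray:
    """One-level 2-D wavelet decomposition; keep the approximation subband."""
    cA, (cH, cV, cD) = pywt.dwt2(img.astype(float), 'haar')
    return cA  # roughly (H/2) x (W/2); partition into 8x8 blocks afterwards
```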

③ If the distortion type of Y is Gaussian white noise distortion, compute the luminance mean and standard deviation of all pixels in every block of X and in every block of Y, then compute the covariance between the pixels of every pair of co-located blocks of X and Y. Denote the luminance mean and standard deviation of $x_{i,j}$ by $\mu_{x_{i,j}}$ and $\sigma_{x_{i,j}}$, those of $y_{i,j}$ by $\mu_{y_{i,j}}$ and $\sigma_{y_{i,j}}$, and the covariance between the pixels of $x_{i,j}$ and $y_{i,j}$ by $\sigma_{x_{i,j}y_{i,j}}$:

$\mu_{x_{i,j}} = \frac{1}{64}\sum_{u=1}^{8}\sum_{v=1}^{8} x_{i,j}(u,v)$, $\sigma_{x_{i,j}} = \Bigl(\frac{1}{64}\sum_{u=1}^{8}\sum_{v=1}^{8}\bigl(x_{i,j}(u,v)-\mu_{x_{i,j}}\bigr)^2\Bigr)^{1/2}$,

$\mu_{y_{i,j}} = \frac{1}{64}\sum_{u=1}^{8}\sum_{v=1}^{8} y_{i,j}(u,v)$, $\sigma_{y_{i,j}} = \Bigl(\frac{1}{64}\sum_{u=1}^{8}\sum_{v=1}^{8}\bigl(y_{i,j}(u,v)-\mu_{y_{i,j}}\bigr)^2\Bigr)^{1/2}$,

$\sigma_{x_{i,j}y_{i,j}} = \frac{1}{64}\sum_{u=1}^{8}\sum_{v=1}^{8}\bigl(x_{i,j}(u,v)-\mu_{x_{i,j}}\bigr)\bigl(y_{i,j}(u,v)-\mu_{y_{i,j}}\bigr)$,

where $x_{i,j}(u,v)$ denotes the luminance value of the pixel of $x_{i,j}$ at (u,v), $y_{i,j}(u,v)$ denotes the luminance value of the pixel of $y_{i,j}$ at (u,v), 1 ≤ u ≤ 8, and 1 ≤ v ≤ 8.
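
A sketch of these per-block statistics, assuming the (M, N, 8, 8) block layout from the partition sketch above; the biased 1/64 normalisation matches the formulas:

```python
import numpy as np

def block_stats(bx: np.ndarray, by: np.ndarray):
    """Mean, std, and covariance over the 64 samples of each co-located block."""
    mu_x = bx.mean(axis=(-2, -1))
    mu_y = by.mean(axis=(-2, -1))
    sig_x = bx.std(axis=(-2, -1))  # sqrt((1/64) * sum((x - mu_x)^2))
    sig_y = by.std(axis=(-2, -1))
    cov = ((bx - mu_x[..., None, None]) *
           (by - mu_y[..., None, None])).mean(axis=(-2, -1))
    return mu_x, sig_x, mu_y, sig_y, cov
```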

If the distortion type of Y is JPEG distortion, compute the luminance mean and standard deviation of all pixels in every block of X and of Y exactly as above, then compute the mean and standard deviation of the DCT AC coefficients of every block of X and of Y, and finally compute the covariance between the DCT AC coefficients of every pair of co-located blocks. Denote the mean and standard deviation of the AC coefficients of the transformed block $x^D_{i,j}$ by $\mu_{x^D_{i,j}}$ and $\sigma_{x^D_{i,j}}$, those of $y^D_{i,j}$ by $\mu_{y^D_{i,j}}$ and $\sigma_{y^D_{i,j}}$, and the covariance between the AC coefficients of $x^D_{i,j}$ and $y^D_{i,j}$ by $\sigma_{x^D_{i,j}y^D_{i,j}}$:

$\mu_{x^D_{i,j}} = \frac{1}{64}\sum_{u_D=1}^{8}\sum_{v_D=1}^{8} x^D_{i,j}(u_D,v_D)$, $\sigma_{x^D_{i,j}} = \Bigl(\frac{1}{64}\sum_{u_D=1}^{8}\sum_{v_D=1}^{8}\bigl(x^D_{i,j}(u_D,v_D)-\mu_{x^D_{i,j}}\bigr)^2\Bigr)^{1/2}$,

$\mu_{y^D_{i,j}} = \frac{1}{64}\sum_{u_D=1}^{8}\sum_{v_D=1}^{8} y^D_{i,j}(u_D,v_D)$, $\sigma_{y^D_{i,j}} = \Bigl(\frac{1}{64}\sum_{u_D=1}^{8}\sum_{v_D=1}^{8}\bigl(y^D_{i,j}(u_D,v_D)-\mu_{y^D_{i,j}}\bigr)^2\Bigr)^{1/2}$,

$\sigma_{x^D_{i,j}y^D_{i,j}} = \frac{1}{64}\sum_{u_D=1}^{8}\sum_{v_D=1}^{8}\bigl(x^D_{i,j}(u_D,v_D)-\mu_{x^D_{i,j}}\bigr)\bigl(y^D_{i,j}(u_D,v_D)-\mu_{y^D_{i,j}}\bigr)$,

where $x^D_{i,j}(u_D,v_D)$ and $y^D_{i,j}(u_D,v_D)$ denote the DCT coefficient values of $x^D_{i,j}$ and $y^D_{i,j}$ at ($u_D$,$v_D$), 1 ≤ $u_D$ ≤ 8, 1 ≤ $v_D$ ≤ 8, and $u_D$ and $v_D$ are not both 1 (the DC coefficient is excluded).

If the distortion type of Y is blur-like distortion, compute the mean and standard deviation of the wavelet coefficients of every block of the approximation component XA and of YA, then compute the covariance between the coefficients of every pair of co-located blocks of XA and YA. Denote the mean and standard deviation of the coefficients of $x^W_{i',j'}$ by $\mu_{x^W_{i',j'}}$ and $\sigma_{x^W_{i',j'}}$, those of $y^W_{i',j'}$ by $\mu_{y^W_{i',j'}}$ and $\sigma_{y^W_{i',j'}}$, and the covariance between the coefficients of $x^W_{i',j'}$ and $y^W_{i',j'}$ by $\sigma_{x^W_{i',j'}y^W_{i',j'}}$:

$\mu_{x^W_{i',j'}} = \frac{1}{64}\sum_{u_W=1}^{8}\sum_{v_W=1}^{8} x^W_{i',j'}(u_W,v_W)$, $\sigma_{x^W_{i',j'}} = \Bigl(\frac{1}{64}\sum_{u_W=1}^{8}\sum_{v_W=1}^{8}\bigl(x^W_{i',j'}(u_W,v_W)-\mu_{x^W_{i',j'}}\bigr)^2\Bigr)^{1/2}$,

$\mu_{y^W_{i',j'}} = \frac{1}{64}\sum_{u_W=1}^{8}\sum_{v_W=1}^{8} y^W_{i',j'}(u_W,v_W)$, $\sigma_{y^W_{i',j'}} = \Bigl(\frac{1}{64}\sum_{u_W=1}^{8}\sum_{v_W=1}^{8}\bigl(y^W_{i',j'}(u_W,v_W)-\mu_{y^W_{i',j'}}\bigr)^2\Bigr)^{1/2}$,

$\sigma_{x^W_{i',j'}y^W_{i',j'}} = \frac{1}{64}\sum_{u_W=1}^{8}\sum_{v_W=1}^{8}\bigl(x^W_{i',j'}(u_W,v_W)-\mu_{x^W_{i',j'}}\bigr)\bigl(y^W_{i',j'}(u_W,v_W)-\mu_{y^W_{i',j'}}\bigr)$,

where $x^W_{i',j'}(u_W,v_W)$ and $y^W_{i',j'}(u_W,v_W)$ denote the coefficient values of $x^W_{i',j'}$ and $y^W_{i',j'}$ at ($u_W$,$v_W$), 1 ≤ $u_W$ ≤ 8, and 1 ≤ $v_W$ ≤ 8.

④ If the distortion type of Y is Gaussian white noise distortion, compute the luminance, contrast, and structure comparison functions between every pair of co-located blocks of X and Y. Denote the luminance, contrast, and structure functions between $x_{i,j}$ and $y_{i,j}$ by $l(x_{i,j},y_{i,j})$, $c(x_{i,j},y_{i,j})$, and $s(x_{i,j},y_{i,j})$:

$l(x_{i,j},y_{i,j}) = \frac{2\mu_{x_{i,j}}\mu_{y_{i,j}}+C_1}{\mu_{x_{i,j}}^2+\mu_{y_{i,j}}^2+C_1}$, $c(x_{i,j},y_{i,j}) = \frac{2\sigma_{x_{i,j}}\sigma_{y_{i,j}}+C_2}{\sigma_{x_{i,j}}^2+\sigma_{y_{i,j}}^2+C_2}$, $s(x_{i,j},y_{i,j}) = \frac{\sigma_{x_{i,j}y_{i,j}}+C_3}{\sigma_{x_{i,j}}\sigma_{y_{i,j}}+C_3}$,

where $C_1$, $C_2$, and $C_3$ are small constants that keep the denominators from vanishing.
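
A sketch of these three comparisons, using the constants of the embodiment below ($C_1$ = 0.01, $C_2$ = 0.02, $C_3$ = 0.01); the structure term follows the standard SSIM form, an assumption where the source formula is incomplete:

```python
import numpy as np

C1, C2, C3 = 0.01, 0.02, 0.01

def l_fn(mu_x: np.ndarray, mu_y: np.ndarray) -> np.ndarray:
    return (2 * mu_x * mu_y + C1) / (mu_x**2 + mu_y**2 + C1)

def c_fn(sig_x: np.ndarray, sig_y: np.ndarray) -> np.ndarray:
    return (2 * sig_x * sig_y + C2) / (sig_x**2 + sig_y**2 + C2)

def s_fn(cov: np.ndarray, sig_x: np.ndarray, sig_y: np.ndarray) -> np.ndarray:
    return (cov + C3) / (sig_x * sig_y + C3)
```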

If the distortion type of Y is JPEG distortion, compute the luminance and contrast functions between every pair of co-located blocks of X and Y, and compute the structure function of every pair of co-located blocks in the DCT domain. Denote the luminance and contrast functions between $x_{i,j}$ and $y_{i,j}$ by $l(x_{i,j},y_{i,j})$ and $c(x_{i,j},y_{i,j})$ as above, and denote the structure function between the transformed blocks $x^D_{i,j}$ and $y^D_{i,j}$ by $f(x_{i,j},y_{i,j})$:

$c(x_{i,j},y_{i,j}) = \frac{2\sigma_{x_{i,j}}\sigma_{y_{i,j}}+C_2}{\sigma_{x_{i,j}}^2+\sigma_{y_{i,j}}^2+C_2}$, $f(x_{i,j},y_{i,j}) = \frac{2\sigma_{x^D_{i,j}}\sigma_{y^D_{i,j}}+C_3}{\sigma_{x^D_{i,j}}^2+\sigma_{y^D_{i,j}}^2+C_3}$,

where $C_1$, $C_2$, and $C_3$ are small constants that keep the denominators from vanishing.
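
A sketch of the DCT-domain structure term $f$, taking the DCT blocks from the sketch above; excluding the DC coefficient (index (1,1) in the patent's 1-based notation) while keeping the patent's 1/64 normalisation is read directly from the text:

```python
import numpy as np

C3 = 0.01

def ac_std(d: np.ndarray) -> np.ndarray:
    """Std of the 63 AC coefficients of each (M, N, 8, 8) DCT block."""
    ac = d.reshape(*d.shape[:2], 64)[..., 1:]  # drop the DC term
    mu = ac.sum(axis=-1) / 64.0                # 1/64 factor as in the text
    return np.sqrt(((ac - mu[..., None]) ** 2).sum(axis=-1) / 64.0)

def f_fn(dx: np.ndarray, dy: np.ndarray) -> np.ndarray:
    sx, sy = ac_std(dx), ac_std(dy)
    return (2 * sx * sy + C3) / (sx**2 + sy**2 + C3)
```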

If the distortion type of Y is blur-like distortion, compute the wavelet-coefficient luminance, contrast, and structure functions between every pair of co-located blocks of XA and YA. Denote them by $l^W(x^W_{i',j'},y^W_{i',j'})$, $c^W(x^W_{i',j'},y^W_{i',j'})$, and $s^W(x^W_{i',j'},y^W_{i',j'})$:

$l^W(x^W_{i',j'},y^W_{i',j'}) = \frac{2\mu_{x^W_{i',j'}}\mu_{y^W_{i',j'}}+C_1}{\mu_{x^W_{i',j'}}^2+\mu_{y^W_{i',j'}}^2+C_1}$, $c^W(x^W_{i',j'},y^W_{i',j'}) = \frac{2\sigma_{x^W_{i',j'}}\sigma_{y^W_{i',j'}}+C_2}{\sigma_{x^W_{i',j'}}^2+\sigma_{y^W_{i',j'}}^2+C_2}$, $s^W(x^W_{i',j'},y^W_{i',j'}) = \frac{\sigma_{x^W_{i',j'}y^W_{i',j'}}+C_3}{\sigma_{x^W_{i',j'}}\sigma_{y^W_{i',j'}}+C_3}$,

where $C_1$, $C_2$, and $C_3$ are small constants that keep the denominators from vanishing.

⑤ If the distortion type of Y is Gaussian white noise distortion, combine the luminance, contrast, and structure functions of every pair of co-located blocks into the pixel-domain structural similarity between $x_{i,j}$ and $y_{i,j}$, denoted SSIM($x_{i,j}$,$y_{i,j}$): SSIM($x_{i,j}$,$y_{i,j}$) = $[l(x_{i,j},y_{i,j})]^{\alpha}[c(x_{i,j},y_{i,j})]^{\beta}[s(x_{i,j},y_{i,j})]^{\gamma}$, where α, β, and γ are adjustment factors.

If the distortion type of Y is JPEG distortion, combine the luminance and contrast functions with the DCT-domain structure function of every pair of co-located blocks into the DCT-domain structural similarity between $x_{i,j}$ and $y_{i,j}$, denoted FSSIM($x_{i,j}$,$y_{i,j}$): FSSIM($x_{i,j}$,$y_{i,j}$) = $[l(x_{i,j},y_{i,j})]^{\alpha}[c(x_{i,j},y_{i,j})]^{\beta}[f(x_{i,j},y_{i,j})]^{\gamma}$, where α, β, and γ are adjustment factors.

If the distortion type of Y is blur-like distortion, combine the wavelet-coefficient luminance, contrast, and structure functions of every pair of co-located blocks into the wavelet-domain structural similarity between $x^W_{i',j'}$ and $y^W_{i',j'}$, denoted WSSIM($x^W_{i',j'}$,$y^W_{i',j'}$): WSSIM($x^W_{i',j'}$,$y^W_{i',j'}$) = $[l^W(x^W_{i',j'},y^W_{i',j'})]^{\alpha}[c^W(x^W_{i',j'},y^W_{i',j'})]^{\beta}[s^W(x^W_{i',j'},y^W_{i',j'})]^{\gamma}$, where α, β, and γ are adjustment factors.

⑥ If the distortion type of Y is Gaussian white noise distortion, compute the objective quality score of Y from the pixel-domain structural similarities of all pairs of co-located blocks, denoted $Q_{wn}$: $Q_{wn} = \frac{1}{M\times N}\sum_{i=1}^{M}\sum_{j=1}^{N}\mathrm{SSIM}(x_{i,j},y_{i,j})$.

If the distortion type of Y is JPEG distortion, compute the objective quality score of Y from the DCT-domain structural similarities of all pairs of co-located blocks, denoted $Q_{jpeg}$: $Q_{jpeg} = \frac{1}{M\times N}\sum_{i=1}^{M}\sum_{j=1}^{N}\mathrm{FSSIM}(x_{i,j},y_{i,j})$.

If the distortion type of Y is blur-like distortion, compute the objective quality score of Y from the wavelet-domain structural similarities of all pairs of co-located blocks, denoted $Q_{blur}$: $Q_{blur} = \frac{1}{M'\times N'}\sum_{i'=1}^{M'}\sum_{j'=1}^{N'}\mathrm{WSSIM}(x^W_{i',j'},y^W_{i',j'})$.
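
All three scores pool the per-block similarities the same way; a sketch with the embodiment's α = β = γ = 1:

```python
import numpy as np

def pooled_score(l: np.ndarray, c: np.ndarray, s: np.ndarray,
                 alpha: float = 1.0, beta: float = 1.0,
                 gamma: float = 1.0) -> float:
    """Mean of the per-block map l^alpha * c^beta * s^gamma over all blocks."""
    sim_map = (l ** alpha) * (c ** beta) * (s ** gamma)
    return float(sim_map.mean())
```

Only the third factor (s, f, or $s^W$) and the block domain change between $Q_{wn}$, $Q_{jpeg}$, and $Q_{blur}$.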

The specific procedure by which step ① determines the distortion type of Y is as follows:

①-a. Partition X into non-overlapping 64×64 blocks, giving M'×N' blocks of size 64×64, and denote the block of X at (i',j') by $x'_{i',j'}$. Apply a one-level wavelet decomposition to every block $x'_{i',j'}$, extract the diagonal component, find the median of the coefficient magnitudes of the diagonal component, denoted $\mathrm{MED}_{x'_{i',j'}}$, and compute the noise standard deviation of every block as $\sigma_{x'_{i',j'}} = \frac{\mathrm{MED}_{x'_{i',j'}}}{0.6745}$, where M' = ⌊H/64⌋, N' = ⌊W/64⌋, 1 ≤ i' ≤ M', and 1 ≤ j' ≤ N'.

Likewise, partition Y into non-overlapping 64×64 blocks, giving M'×N' blocks of size 64×64, and denote the block of Y at (i',j') by $y'_{i',j'}$. Apply a one-level wavelet decomposition to every block $y'_{i',j'}$, extract the diagonal component, find the median of the coefficient magnitudes, denoted $\mathrm{MED}_{y'_{i',j'}}$, and compute the noise standard deviation of every block as $\sigma_{y'_{i',j'}} = \frac{\mathrm{MED}_{y'_{i',j'}}}{0.6745}$.
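
A sketch of this robust per-block noise estimate, assuming PyWavelets; the MED/0.6745 rule is the median-absolute-deviation estimator applied to the diagonal (HH) subband of a one-level Haar decomposition:

```python
import numpy as np
import pywt

def block_noise_std(block64: np.ndarray) -> float:
    """Noise std of one 64x64 block: median(|diagonal coeffs|) / 0.6745."""
    _, (_, _, cD) = pywt.dwt2(block64.astype(float), 'haar')
    return float(np.median(np.abs(cD)) / 0.6745)
```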

①-b. Compute the difference between the noise standard deviations of every pair of co-located blocks of X and Y; denote the difference for the blocks $x'_{i',j'}$ and $y'_{i',j'}$ at (i',j') by $\Delta\sigma_{i',j'} = \sigma_{y'_{i',j'}} - \sigma_{x'_{i',j'}}$. Then compute the mean of these differences over all block positions, denoted $\overline{\Delta\sigma} = \frac{1}{M'\times N'}\sum_{i'=1}^{M'}\sum_{j'=1}^{N'}\Delta\sigma_{i',j'}$.

①-c. Judge whether $\overline{\Delta\sigma} > Th_{WN}$ holds. If it does, determine that the distortion type of Y is Gaussian white noise distortion and stop; otherwise, go to step ①-d. Here $Th_{WN}$ is the Gaussian white noise discrimination threshold.

①-d. Compute the luminance difference map of X, denoted $X_h$, whose coefficient at (i″,j″) is $X_h(i″,j″) = |X(i″,j″) - X(i″,j″+1)|$, where 1 ≤ i″ ≤ H, 1 ≤ j″ ≤ W−1, X(i″,j″) denotes the luminance value of the pixel of X at (i″,j″), X(i″,j″+1) that at (i″,j″+1), and "| |" denotes absolute value.

Likewise, compute the luminance difference map of Y, denoted $Y_h$, whose coefficient at (i″,j″) is $Y_h(i″,j″) = |Y(i″,j″) - Y(i″,j″+1)|$, where 1 ≤ i″ ≤ H, 1 ≤ j″ ≤ W−1, Y(i″,j″) denotes the luminance value of the pixel of Y at (i″,j″) and Y(i″,j″+1) that at (i″,j″+1).

①-e. Partition the luminance difference map $X_h$ into non-overlapping 8×8 blocks, giving M″×N″ blocks of size 8×8, and denote the block of $X_h$ at (i″',j″') by $x^h_{i''',j'''}$. Define the in-block energy and block-edge energy of $x^h_{i''',j'''}$ as $Ex^{In}_{i''',j'''} = \frac{1}{56}\sum_{p=1}^{8}\sum_{q=1}^{7} x^h_{i''',j'''}(p,q)$ and $Ex^{Ed}_{i''',j'''} = \frac{1}{8}\sum_{p=1}^{8} x^h_{i''',j'''}(p,8)$, where $x^h_{i''',j'''}(p,q)$ denotes the coefficient of $x^h_{i''',j'''}$ at (p,q) and $x^h_{i''',j'''}(p,8)$ that at (p,8), 1 ≤ i″' ≤ M″, 1 ≤ j″' ≤ N″, 1 ≤ p ≤ 8, and 1 ≤ q ≤ 7.

Likewise, partition the luminance difference map $Y_h$ into non-overlapping 8×8 blocks, giving M″×N″ blocks of size 8×8, and denote the block of $Y_h$ at (i″',j″') by $y^h_{i''',j'''}$. Define the in-block energy and block-edge energy of $y^h_{i''',j'''}$ as $Ey^{In}_{i''',j'''} = \frac{1}{56}\sum_{p=1}^{8}\sum_{q=1}^{7} y^h_{i''',j'''}(p,q)$ and $Ey^{Ed}_{i''',j'''} = \frac{1}{8}\sum_{p=1}^{8} y^h_{i''',j'''}(p,8)$, where $y^h_{i''',j'''}(p,q)$ denotes the coefficient of $y^h_{i''',j'''}$ at (p,q) and $y^h_{i''',j'''}(p,8)$ that at (p,8).

①-f. Compute the ratio of block-edge energy to in-block energy for every block of $X_h$; for the block of $X_h$ at (i″',j″') the ratio is $R^{x}_{i''',j'''} = \frac{Ex^{Ed}_{i''',j'''}}{Ex^{In}_{i''',j'''}}$.

Likewise, compute the ratio of block-edge energy to in-block energy for every block of $Y_h$; for the block of $Y_h$ at (i″',j″') the ratio is $R^{y}_{i''',j'''} = \frac{Ey^{Ed}_{i''',j'''}}{Ey^{In}_{i''',j'''}}$.

Count the number of image blocks that satisfy the inequality $R^{y}_{i''',j'''} > R^{x}_{i''',j'''}$ and denote it $N_0$; define the discrimination index J as $J = \frac{N_0}{M''\times N''}$.
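
A sketch of steps ①-d to ①-f, assuming the counted inequality is $R^y > R^x$ (block-edge energy grows relative to in-block energy under JPEG compression); edge remainders of the non-overlapping tiling are dropped:

```python
import numpy as np

def edge_ratio_map(img: np.ndarray) -> np.ndarray:
    """Block-edge / in-block energy ratio on the luminance difference map."""
    diff = np.abs(np.diff(img.astype(float), axis=1))  # |I(i,j) - I(i,j+1)|
    M, N = diff.shape[0] // 8, diff.shape[1] // 8
    tiles = diff[:M * 8, :N * 8].reshape(M, 8, N, 8).swapaxes(1, 2)
    e_in = tiles[..., :7].sum(axis=(-2, -1)) / 56.0    # columns 1..7 (1-based)
    e_ed = tiles[..., 7].sum(axis=-1) / 8.0            # column 8: block edge
    return e_ed / e_in

def blockiness_index(x: np.ndarray, y: np.ndarray) -> float:
    rx, ry = edge_ratio_map(x), edge_ratio_map(y)
    return float((ry > rx).mean())                     # J = N0 / (M'' * N'')
```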

①-g. Judge whether $J > Th_{JPEG}$ holds. If it does, determine that the distortion type of Y is JPEG distortion and stop; otherwise, go to step ①-h. Here $Th_{JPEG}$ is the JPEG discrimination threshold.

①-h. Determine that the distortion type of Y is blur-like distortion, i.e. the distortion type of Y is Gaussian blur distortion, JPEG2000 distortion, or fast-fading distortion.
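
Putting the cascade together, a sketch of the whole discriminator, assuming the helper functions sketched above and the embodiment's thresholds $Th_{WN}$ = 0.8 and $Th_{JPEG}$ = 0.57:

```python
import numpy as np

def classify_distortion(x: np.ndarray, y: np.ndarray) -> str:
    """Steps (1)-a..h: test white noise first, then JPEG, else blur-like."""
    H, W = x.shape
    deltas = [block_noise_std(y[i * 64:(i + 1) * 64, j * 64:(j + 1) * 64]) -
              block_noise_std(x[i * 64:(i + 1) * 64, j * 64:(j + 1) * 64])
              for i in range(H // 64) for j in range(W // 64)]
    if np.mean(deltas) > 0.8:           # Th_WN
        return 'white_noise'
    if blockiness_index(x, y) > 0.57:   # Th_JPEG
        return 'jpeg'
    return 'blur_like'                  # Gaussian blur, JPEG2000, or fast fading
```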

In step ④, take $C_1$ = 0.01, $C_2$ = 0.02, $C_3$ = 0.01, and α = β = γ = 1.

Compared with the prior art, the advantages of the invention are:

1) When obtaining the structural similarity between co-located blocks of the original undistorted image and the distorted image to be evaluated, the method takes the distortion type of the distorted image into account. From the standpoint of adaptive evaluation, it adaptively chooses to compute pixel-domain, DCT-domain, or wavelet-domain structural similarity, which improves the accuracy of the quality evaluation results for images with different distortion types.

2) In discriminating the distortion type of a distorted image, the method combines the characteristics an image exhibits under Gaussian white noise distortion with those it exhibits under JPEG distortion, and thereby determines the distortion type whenever the original reference image is available; this discrimination procedure is highly portable.

Brief Description of the Drawings

Fig. 1 is the overall implementation block diagram of the method of the invention;

Fig. 2 shows the 12 original undistorted images of the training set used by the method;

Fig. 3 plots classification accuracy against the threshold used to identify Gaussian white noise distortion;

Fig. 4 plots classification accuracy against the threshold used to identify JPEG distortion.

Detailed Description of the Embodiments

The invention is described in further detail below with reference to the accompanying drawings and an embodiment.

The invention proposes an adaptive image quality evaluation method based on distortion type judgment, whose processing flow is as follows:

First, determine the distortion type of the distorted image to be evaluated.

Second, process the image according to its distortion type:

If the distorted image suffers from Gaussian white noise distortion, partition the original undistorted image and the distorted image to be evaluated into multiple overlapping 8×8 image blocks in the pixel domain. From the luminance mean and standard deviation of all pixels in every block of both images, together with the covariance between the luminance values of every pair of co-located blocks, obtain the pixel-domain structural similarity of every pair of co-located blocks.

If the distorted image suffers from JPEG distortion, partition both images into multiple overlapping 8×8 blocks and transform each block into the DCT (Discrete Cosine Transform) domain. From the mean and standard deviation of the coefficients of every block of both images in the DCT domain, together with the covariance between the coefficients of every pair of co-located blocks, obtain the DCT-domain structural similarity of every pair of co-located blocks.

If the distorted image suffers from blur-like distortion, partition both images into multiple overlapping 8×8 blocks in the wavelet domain. From the mean and standard deviation of the coefficients of every block of both images in the wavelet domain, together with the covariance between the coefficients of every pair of co-located blocks, obtain the wavelet-domain structural similarity of every pair of co-located blocks.

Finally, from the structural similarity of all pairs of co-located blocks in the original undistorted image and the distorted image to be evaluated, obtain the objective quality score of the distorted image.

The overall implementation block diagram of the objective adaptive image quality evaluation method of the invention is shown in Fig. 1; the method specifically comprises the following steps:

① Let X denote the original undistorted image and Y the distorted image to be evaluated, then determine the distortion type of Y by the distortion type discrimination method; the distortion type of Y is one of Gaussian white noise distortion, JPEG distortion, and blur-like distortion.

At present, image distortion types generally fall into three classes: Gaussian white noise distortion (WN, white noise), JPEG distortion (JPEG), and blur-like distortion, where blur-like distortion covers Gaussian blur distortion (Gblur, Gaussian blur), JPEG2000 distortion, and fast-fading distortion (FF, fast fading). Here, the distortion type discrimination method determines which distortion type Y belongs to.

The specific procedure by which step ① determines the distortion type of Y is as follows:

①-a. Partition X into non-overlapping 64×64 blocks, giving M'×N' blocks of size 64×64, and denote the block of X at (i',j') by $x'_{i',j'}$. Apply a one-level wavelet decomposition (Haar wavelet) to every block $x'_{i',j'}$, extract the diagonal component, find the median of the coefficient magnitudes of the diagonal component, denoted $\mathrm{MED}_{x'_{i',j'}}$, and compute the noise standard deviation of every block as $\sigma_{x'_{i',j'}} = \frac{\mathrm{MED}_{x'_{i',j'}}}{0.6745}$, where M' = ⌊H/64⌋, N' = ⌊W/64⌋, 1 ≤ i' ≤ M', 1 ≤ j' ≤ N', and H and W denote the height and width of X, respectively.

Likewise, partition Y into non-overlapping 64×64 blocks, giving M'×N' blocks of size 64×64, and denote the block of Y at (i',j') by $y'_{i',j'}$. Apply a one-level wavelet decomposition (Haar wavelet) to every block $y'_{i',j'}$, extract the diagonal component, find the median of the coefficient magnitudes, denoted $\mathrm{MED}_{y'_{i',j'}}$, and compute the noise standard deviation of every block as $\sigma_{y'_{i',j'}} = \frac{\mathrm{MED}_{y'_{i',j'}}}{0.6745}$, where 1 ≤ i' ≤ M', 1 ≤ j' ≤ N', and H and W denote the height and width of Y, i.e. X and Y have the same size.

①-b. Compute the difference between the noise standard deviations of every pair of co-located blocks of X and Y; denote the difference for the blocks $x'_{i',j'}$ and $y'_{i',j'}$ at (i',j') by $\Delta\sigma_{i',j'} = \sigma_{y'_{i',j'}} - \sigma_{x'_{i',j'}}$. Then compute the mean of these differences over all block positions, denoted $\overline{\Delta\sigma} = \frac{1}{M'\times N'}\sum_{i'=1}^{M'}\sum_{j'=1}^{N'}\Delta\sigma_{i',j'}$.

①-c. Judge whether $\overline{\Delta\sigma} > Th_{WN}$ holds. If it does, determine that the distortion type of Y is Gaussian white noise distortion and stop; otherwise, go to step ①-d. Here $Th_{WN}$ is the Gaussian white noise discrimination threshold; in this embodiment, $Th_{WN}$ = 0.8.

①-d. Compute the luminance difference map of X, denoted $X_h$, whose coefficient at (i″,j″) is $X_h(i″,j″) = |X(i″,j″) - X(i″,j″+1)|$, where 1 ≤ i″ ≤ H, 1 ≤ j″ ≤ W−1, X(i″,j″) denotes the luminance value of the pixel of X at (i″,j″), X(i″,j″+1) that at (i″,j″+1), and "| |" denotes absolute value.

Likewise, compute the luminance difference map of Y, denoted $Y_h$, whose coefficient at (i″,j″) is $Y_h(i″,j″) = |Y(i″,j″) - Y(i″,j″+1)|$, where 1 ≤ i″ ≤ H, 1 ≤ j″ ≤ W−1, Y(i″,j″) denotes the luminance value of the pixel of Y at (i″,j″) and Y(i″,j″+1) that at (i″,j″+1).

①-e. Partition the luminance difference map $X_h$ into non-overlapping 8×8 blocks, giving M″×N″ blocks of size 8×8, and denote the block of $X_h$ at (i″',j″') by $x^h_{i''',j'''}$. Define the in-block energy and block-edge energy of $x^h_{i''',j'''}$ as $Ex^{In}_{i''',j'''} = \frac{1}{56}\sum_{p=1}^{8}\sum_{q=1}^{7} x^h_{i''',j'''}(p,q)$ and $Ex^{Ed}_{i''',j'''} = \frac{1}{8}\sum_{p=1}^{8} x^h_{i''',j'''}(p,8)$, where $x^h_{i''',j'''}(p,q)$ denotes the coefficient of $x^h_{i''',j'''}$ at (p,q) and $x^h_{i''',j'''}(p,8)$ that at (p,8), 1 ≤ i″' ≤ M″, 1 ≤ j″' ≤ N″, 1 ≤ p ≤ 8, and 1 ≤ q ≤ 7.

Likewise, partition the luminance difference map $Y_h$ into non-overlapping 8×8 blocks, giving M″×N″ blocks of size 8×8, and denote the block of $Y_h$ at (i″',j″') by $y^h_{i''',j'''}$. Define the in-block energy and block-edge energy of $y^h_{i''',j'''}$ as $Ey^{In}_{i''',j'''} = \frac{1}{56}\sum_{p=1}^{8}\sum_{q=1}^{7} y^h_{i''',j'''}(p,q)$ and $Ey^{Ed}_{i''',j'''} = \frac{1}{8}\sum_{p=1}^{8} y^h_{i''',j'''}(p,8)$, where $y^h_{i''',j'''}(p,q)$ denotes the coefficient of $y^h_{i''',j'''}$ at (p,q) and $y^h_{i''',j'''}(p,8)$ that at (p,8), 1 ≤ i″' ≤ M″, 1 ≤ j″' ≤ N″, 1 ≤ p ≤ 8, and 1 ≤ q ≤ 7.

①-f. Compute the ratio of block-edge energy to in-block energy for every block of $X_h$; for the block of $X_h$ at (i″',j″') the ratio is $R^{x}_{i''',j'''} = \frac{Ex^{Ed}_{i''',j'''}}{Ex^{In}_{i''',j'''}}$.

Likewise, compute the ratio of block-edge energy to in-block energy for every block of $Y_h$; for the block of $Y_h$ at (i″',j″') the ratio is $R^{y}_{i''',j'''} = \frac{Ey^{Ed}_{i''',j'''}}{Ey^{In}_{i''',j'''}}$.

Count the number of image blocks that satisfy the inequality $R^{y}_{i''',j'''} > R^{x}_{i''',j'''}$ and denote it $N_0$; define the discrimination index J as $J = \frac{N_0}{M''\times N''}$.

①-g. Judge whether $J > Th_{JPEG}$ holds. If it does, determine that the distortion type of Y is JPEG distortion and stop; otherwise, go to step ①-h. Here $Th_{JPEG}$ is the JPEG discrimination threshold; in this embodiment, $Th_{JPEG}$ = 0.57.

①-h. Determine that the distortion type of Y is blur-like distortion, i.e. the distortion type of Y is Gaussian blur distortion, JPEG2000 distortion, or fast-fading distortion.

In this embodiment, the image data are the 808 images provided by the image quality assessment database (LIVE) published by the Laboratory for Image and Video Engineering at the University of Texas, comprising 29 undistorted reference images and 779 distorted images: 145 Gaussian white noise distorted images, 145 Gaussian blur distorted images, 175 JPEG distorted images, 169 JPEG2000 distorted images, and 145 fast-fading distorted images. Fig. 2 shows 12 undistorted images selected from the 29 references to span simple, medium, and complex textures; these 12 images and their corresponding images of the 5 distortion types form the training set, with 60 images each for Gaussian white noise, Gaussian blur, and fast-fading distortion and 70 images each for JPEG and JPEG2000 distortion. The remaining 17 undistorted images and their corresponding images of the 5 distortion types form the test set, with 85 images each for Gaussian white noise, Gaussian blur, and fast-fading distortion, 105 JPEG distorted images, and 99 JPEG2000 distorted images.

The first step separates the Gaussian-white-noise distorted images from all distorted images in the training database. To set the Gaussian white noise discrimination threshold Th_WN used when determining whether the distortion type of the distorted image Y is Gaussian white noise distortion, a candidate value is taken every 0.05 over the interval [-0.5, 1.5]. For each candidate, the data training of discrimination steps ①-a, ①-b, and ①-c is carried out and the classification accuracy is computed. The relation between Th_WN and the discrimination accuracy is plotted in Fig. 3, from which it can be seen that Th_WN = 0.8 separates the Gaussian white noise distorted images with 100% accuracy.

The second step separates the JPEG distorted images from the non-Gaussian-white-noise images of the training set. A candidate value is taken every 0.01 over the interval [0.4, 0.7] as the probe value of Th_JPEG; for each candidate, the data training of discrimination steps ①-d to ①-g is carried out and the classification accuracy is computed. The relation between Th_JPEG and the discrimination accuracy is plotted in Fig. 4, from which it can be seen that Th_JPEG = 0.57 separates the JPEG distorted images from the non-Gaussian-white-noise images with 100% accuracy.
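Both thresholds come from the same one-dimensional sweep. Below is a sketch under the assumption that each training image has been reduced to a scalar discriminant value (the white-noise statistic for Th_WN, the index J for Th_JPEG) and a binary label; the decision direction for the white-noise rule is assumed to match the `score > threshold` form of J > Th_JPEG.

```python
import numpy as np

def sweep_threshold(scores, labels, lo, hi, step):
    """Grid-search a scalar threshold for maximum classification accuracy.

    scores : per-image discriminant values (numpy array)
    labels : True where the image really carries the target distortion
    """
    best_th, best_acc = lo, 0.0
    for th in np.arange(lo, hi + step / 2, step):
        acc = np.mean((scores > th) == labels)   # accuracy at this candidate
        if acc > best_acc:
            best_th, best_acc = th, acc
    return best_th, best_acc

# Per Fig. 3: sweeping [-0.5, 1.5] in 0.05 steps yields Th_WN = 0.8.
# Per Fig. 4: sweeping [0.4, 0.7] in 0.01 steps yields Th_JPEG = 0.57.
```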

② If the distortion type of the distorted image Y is Gaussian white noise distortion, slide an 8×8 window over X pixel by pixel in row-major order, dividing X into M×N overlapping 8×8 image blocks, and denote the block of X at coordinate position (i,j) as $x_{i,j}$; similarly, slide an 8×8 window over Y pixel by pixel in row-major order, dividing Y into M×N overlapping 8×8 image blocks, and denote the block of Y at (i,j) as $y_{i,j}$. Here M and N are determined by the height H and the width W of X and Y, the symbol $\lfloor\cdot\rfloor$ denoting rounding down; 1≤i≤M, 1≤j≤N;

If the distortion type of the distorted image Y is JPEG distortion, slide an 8×8 window over X pixel by pixel in row-major order, dividing X into M×N overlapping 8×8 image blocks; denote the block of X at coordinate position (i,j) as $x_{i,j}$, apply the two-dimensional DCT to every block $x_{i,j}$, and denote the corresponding transformed block as $x^{D}_{i,j}$. Similarly, slide an 8×8 window over Y pixel by pixel in row-major order, dividing Y into M×N overlapping 8×8 image blocks; denote the block of Y at (i,j) as $y_{i,j}$, apply the two-dimensional DCT to every block $y_{i,j}$, and denote the corresponding transformed block as $y^{D}_{i,j}$. Here M and N are determined by the height H and the width W of X and Y; 1≤i≤M, 1≤j≤N;

If the distortion type of the distorted image Y is blur-like distortion, apply a one-level wavelet transform to X and extract the approximation component, denoted $X_A$; slide an 8×8 window over $X_A$ point by point in row-major order, dividing $X_A$ into M′×N′ overlapping 8×8 image blocks, and denote the block of $X_A$ at coordinate position (i′,j′) as $x^{W}_{i',j'}$. Similarly, apply a one-level wavelet transform to Y and extract the approximation component $Y_A$; slide an 8×8 window over $Y_A$ point by point in row-major order, dividing $Y_A$ into M′×N′ overlapping 8×8 image blocks, and denote the block of $Y_A$ at (i′,j′) as $y^{W}_{i',j'}$. Here M′ and N′ are determined by the height H′ and the width W′ of $X_A$ and $Y_A$; 1≤i′≤M′, 1≤j′≤N′;
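The three partitions of step ② differ only in the array being windowed. Below is a sketch assuming grayscale float images, with scipy supplying the 2-D DCT and PyWavelets the one-level DWT; the patent prescribes neither a library nor a wavelet basis, so 'haar' is an assumption, and a one-pixel stride gives M = H−7 and N = W−7 by the window arithmetic.

```python
import numpy as np
import pywt                      # PyWavelets, assumed available
from scipy.fft import dct

def sliding_blocks(img, size=8):
    """All overlapping size x size blocks, the window moving one pixel at a
    time in row-major order; returned shape is (M, N, size, size)."""
    H, W = img.shape
    M, N = H - size + 1, W - size + 1
    out = np.empty((M, N, size, size))
    for i in range(M):
        for j in range(N):
            out[i, j] = img[i:i + size, j:j + size]
    return out

def dct2(block):
    """Two-dimensional type-II DCT of one block."""
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def wavelet_approx(img, wavelet='haar'):
    """Approximation subband of a one-level 2-D DWT (basis assumed)."""
    cA, _ = pywt.dwt2(img, wavelet)
    return cA

# white noise : sliding_blocks(X), sliding_blocks(Y)
# JPEG        : the same blocks, each passed through dct2
# blur-like   : sliding_blocks(wavelet_approx(X)), sliding_blocks(wavelet_approx(Y))
```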

③ If the distortion type of the distorted image Y is Gaussian white noise distortion, compute the luminance mean and standard deviation of all pixels in every image block of X and of Y, and then the covariance between the pixels of every pair of co-located blocks of X and Y. Denote the luminance mean and standard deviation of the block $x_{i,j}$ of X at coordinate position (i,j) as $\mu_{x_{i,j}}$ and $\sigma_{x_{i,j}}$, those of the block $y_{i,j}$ of Y as $\mu_{y_{i,j}}$ and $\sigma_{y_{i,j}}$, and the covariance between the pixels of $x_{i,j}$ and $y_{i,j}$ as $\sigma_{x_{i,j}y_{i,j}}$:
$$\mu_{x_{i,j}}=\frac{1}{64}\sum_{u=1}^{8}\sum_{v=1}^{8}x_{i,j}(u,v),\quad \sigma_{x_{i,j}}=\sqrt{\frac{1}{64}\sum_{u=1}^{8}\sum_{v=1}^{8}\bigl(x_{i,j}(u,v)-\mu_{x_{i,j}}\bigr)^{2}},$$
$$\mu_{y_{i,j}}=\frac{1}{64}\sum_{u=1}^{8}\sum_{v=1}^{8}y_{i,j}(u,v),\quad \sigma_{y_{i,j}}=\sqrt{\frac{1}{64}\sum_{u=1}^{8}\sum_{v=1}^{8}\bigl(y_{i,j}(u,v)-\mu_{y_{i,j}}\bigr)^{2}},$$
$$\sigma_{x_{i,j}y_{i,j}}=\frac{1}{64}\sum_{u=1}^{8}\sum_{v=1}^{8}\bigl[\bigl(x_{i,j}(u,v)-\mu_{x_{i,j}}\bigr)\bigl(y_{i,j}(u,v)-\mu_{y_{i,j}}\bigr)\bigr],$$
where $x_{i,j}(u,v)$ denotes the luminance value of the pixel at coordinate position (u,v) in $x_{i,j}$, $y_{i,j}(u,v)$ the luminance value at (u,v) in $y_{i,j}$, 1≤u≤8, 1≤v≤8;

If the distortion type of the distorted image Y is JPEG distortion, compute the luminance mean and standard deviation of all pixels in every image block of X and of Y, then the mean and standard deviation of the DCT AC coefficients of every block of X and of Y, and finally the covariance between the DCT AC coefficients of every pair of co-located blocks of X and Y. Denote the luminance mean and standard deviation of the block $x_{i,j}$ as $\mu_{x_{i,j}}$ and $\sigma_{x_{i,j}}$, those of $y_{i,j}$ as $\mu_{y_{i,j}}$ and $\sigma_{y_{i,j}}$; denote the mean and standard deviation of all AC coefficients of the new block $x^{D}_{i,j}$ obtained by the DCT transform of $x_{i,j}$ as $\mu_{x^{D}_{i,j}}$ and $\sigma_{x^{D}_{i,j}}$, those of $y^{D}_{i,j}$ as $\mu_{y^{D}_{i,j}}$ and $\sigma_{y^{D}_{i,j}}$, and the covariance between all AC coefficients of the DCT-domain blocks $x^{D}_{i,j}$ and $y^{D}_{i,j}$ as $\sigma_{x^{D}_{i,j}y^{D}_{i,j}}$. The pixel-domain statistics are defined as above, and
$$\mu_{x^{D}_{i,j}}=\frac{1}{64}\sum_{u^{D}=1}^{8}\sum_{v^{D}=1}^{8}x^{D}_{i,j}(u^{D},v^{D}),\quad \sigma_{x^{D}_{i,j}}=\sqrt{\frac{1}{64}\sum_{u^{D}=1}^{8}\sum_{v^{D}=1}^{8}\bigl(x^{D}_{i,j}(u^{D},v^{D})-\mu_{x^{D}_{i,j}}\bigr)^{2}},$$
$$\mu_{y^{D}_{i,j}}=\frac{1}{64}\sum_{u^{D}=1}^{8}\sum_{v^{D}=1}^{8}y^{D}_{i,j}(u^{D},v^{D}),\quad \sigma_{y^{D}_{i,j}}=\sqrt{\frac{1}{64}\sum_{u^{D}=1}^{8}\sum_{v^{D}=1}^{8}\bigl(y^{D}_{i,j}(u^{D},v^{D})-\mu_{y^{D}_{i,j}}\bigr)^{2}},$$
$$\sigma_{x^{D}_{i,j}y^{D}_{i,j}}=\frac{1}{64}\sum_{u^{D}=1}^{8}\sum_{v^{D}=1}^{8}\bigl[\bigl(x^{D}_{i,j}(u^{D},v^{D})-\mu_{x^{D}_{i,j}}\bigr)\bigl(y^{D}_{i,j}(u^{D},v^{D})-\mu_{y^{D}_{i,j}}\bigr)\bigr],$$
where $x_{i,j}(u,v)$ denotes the luminance value of the pixel at (u,v) in $x_{i,j}$, $y_{i,j}(u,v)$ that in $y_{i,j}$, 1≤u≤8, 1≤v≤8; $x^{D}_{i,j}(u^{D},v^{D})$ denotes the DCT coefficient value at coordinate position $(u^{D},v^{D})$ in $x^{D}_{i,j}$, $y^{D}_{i,j}(u^{D},v^{D})$ that in $y^{D}_{i,j}$, with $1\le u^{D}\le 8$, $1\le v^{D}\le 8$, and $u^{D}$ and $v^{D}$ not both equal to 1;

If the distortion type of the distorted image Y is blur-like distortion, compute the mean and standard deviation of all coefficient values of every image block of the approximation component $X_A$ of X after the one-level wavelet transform, compute the same for the approximation component $Y_A$ of Y, and then compute the covariance between the coefficients of every pair of co-located blocks of $X_A$ and $Y_A$. Denote the mean and standard deviation of all coefficients of the block $x^{W}_{i^{W},j^{W}}$ of $X_A$ at coordinate position $(i^{W},j^{W})$ as $\mu_{x^{W}_{i^{W},j^{W}}}$ and $\sigma_{x^{W}_{i^{W},j^{W}}}$, those of the block $y^{W}_{i^{W},j^{W}}$ of $Y_A$ as $\mu_{y^{W}_{i^{W},j^{W}}}$ and $\sigma_{y^{W}_{i^{W},j^{W}}}$, and the covariance between the coefficients of $x^{W}_{i^{W},j^{W}}$ and $y^{W}_{i^{W},j^{W}}$ as $\sigma_{x^{W}_{i^{W},j^{W}}y^{W}_{i^{W},j^{W}}}$:
$$\mu_{x^{W}_{i^{W},j^{W}}}=\frac{1}{64}\sum_{u^{W}=1}^{8}\sum_{v^{W}=1}^{8}x^{W}_{i^{W},j^{W}}(u^{W},v^{W}),\quad \sigma_{x^{W}_{i^{W},j^{W}}}=\sqrt{\frac{1}{64}\sum_{u^{W}=1}^{8}\sum_{v^{W}=1}^{8}\bigl(x^{W}_{i^{W},j^{W}}(u^{W},v^{W})-\mu_{x^{W}_{i^{W},j^{W}}}\bigr)^{2}},$$
$$\mu_{y^{W}_{i^{W},j^{W}}}=\frac{1}{64}\sum_{u^{W}=1}^{8}\sum_{v^{W}=1}^{8}y^{W}_{i^{W},j^{W}}(u^{W},v^{W}),\quad \sigma_{y^{W}_{i^{W},j^{W}}}=\sqrt{\frac{1}{64}\sum_{u^{W}=1}^{8}\sum_{v^{W}=1}^{8}\bigl(y^{W}_{i^{W},j^{W}}(u^{W},v^{W})-\mu_{y^{W}_{i^{W},j^{W}}}\bigr)^{2}},$$
$$\sigma_{x^{W}_{i^{W},j^{W}}y^{W}_{i^{W},j^{W}}}=\frac{1}{64}\sum_{u^{W}=1}^{8}\sum_{v^{W}=1}^{8}\bigl[\bigl(x^{W}_{i^{W},j^{W}}(u^{W},v^{W})-\mu_{x^{W}_{i^{W},j^{W}}}\bigr)\bigl(y^{W}_{i^{W},j^{W}}(u^{W},v^{W})-\mu_{y^{W}_{i^{W},j^{W}}}\bigr)\bigr],$$
where $x^{W}_{i^{W},j^{W}}(u^{W},v^{W})$ denotes the coefficient value at coordinate position $(u^{W},v^{W})$ in $x^{W}_{i^{W},j^{W}}$, $y^{W}_{i^{W},j^{W}}(u^{W},v^{W})$ that in $y^{W}_{i^{W},j^{W}}$, $1\le u^{W}\le 8$, $1\le v^{W}\le 8$;
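All three branches of step ③ reduce to the same second-order statistics over 8×8 blocks. Below is a sketch that keeps the embodiment's fixed 1/64 normalization; note that the formulas above retain it even over the 63 DCT AC coefficients, which the sketch reproduces.

```python
import numpy as np

def block_stats(bx, by, norm=64.0):
    """Mean, standard deviation and covariance of two aligned blocks,
    normalized by a fixed 1/64 as in the formulas above."""
    bx = np.asarray(bx, float).ravel()
    by = np.asarray(by, float).ravel()
    mx, my = bx.sum() / norm, by.sum() / norm
    sx = np.sqrt(((bx - mx) ** 2).sum() / norm)
    sy = np.sqrt(((by - my) ** 2).sum() / norm)
    sxy = ((bx - mx) * (by - my)).sum() / norm
    return mx, my, sx, sy, sxy

def ac_only(dct_block):
    """Drop the DC term, i.e. the coefficient at u = v = 1 in the patent's
    1-based indexing (index (0, 0) here), keeping the 63 AC coefficients."""
    return np.asarray(dct_block).ravel()[1:]
```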

④ If the distortion type of the distorted image Y is Gaussian white noise distortion, compute the luminance, contrast, and structure functions between every pair of co-located blocks of X and Y. For the block $x_{i,j}$ of X at coordinate position (i,j) and the block $y_{i,j}$ of Y at (i,j), record these as $l(x_{i,j},y_{i,j})$, $c(x_{i,j},y_{i,j})$, and $s(x_{i,j},y_{i,j})$:
$$l(x_{i,j},y_{i,j})=\frac{2\mu_{x_{i,j}}\mu_{y_{i,j}}+C_{1}}{\mu_{x_{i,j}}^{2}+\mu_{y_{i,j}}^{2}+C_{1}},\quad c(x_{i,j},y_{i,j})=\frac{2\sigma_{x_{i,j}}\sigma_{y_{i,j}}+C_{2}}{\sigma_{x_{i,j}}^{2}+\sigma_{y_{i,j}}^{2}+C_{2}},\quad s(x_{i,j},y_{i,j})=\frac{\sigma_{x_{i,j}y_{i,j}}+C_{3}}{\sigma_{x_{i,j}}\sigma_{y_{i,j}}+C_{3}},$$
where $C_1$, $C_2$, $C_3$ are small constants set to keep the denominators away from zero; in this embodiment $C_1=0.01$, $C_2=0.02$, $C_3=0.01$;

If the distortion type of the distorted image Y is JPEG distortion, compute the luminance and contrast functions between every pair of co-located blocks of X and Y, and the DCT-domain structure function between them. For the blocks $x_{i,j}$ and $y_{i,j}$ at coordinate position (i,j), record the luminance and contrast functions as $l(x_{i,j},y_{i,j})$ and $c(x_{i,j},y_{i,j})$, and record the structure function between the DCT-transformed blocks $x^{D}_{i,j}$ and $y^{D}_{i,j}$ as $f(x_{i,j},y_{i,j})$:
$$c(x_{i,j},y_{i,j})=\frac{2\sigma_{x_{i,j}}\sigma_{y_{i,j}}+C_{2}}{\sigma_{x_{i,j}}^{2}+\sigma_{y_{i,j}}^{2}+C_{2}},\quad f(x_{i,j},y_{i,j})=\frac{\sigma_{x^{D}_{i,j}y^{D}_{i,j}}+C_{3}}{\sigma_{x^{D}_{i,j}}\sigma_{y^{D}_{i,j}}+C_{3}},$$
where $l(x_{i,j},y_{i,j})$ is defined as above and $C_1$, $C_2$, $C_3$ are small constants set to keep the denominators away from zero; in this embodiment $C_1=0.01$, $C_2=0.02$, $C_3=0.01$;

If the distortion type of the distorted image Y is blur-like distortion, compute the wavelet-coefficient luminance, contrast, and structure functions between every pair of co-located blocks of X and Y. For the blocks $x_{i,j}$ and $y_{i,j}$ at coordinate position (i,j), record these as $l^{W}(x_{i,j},y_{i,j})$, $c^{W}(x_{i,j},y_{i,j})$, and $s^{W}(x_{i,j},y_{i,j})$:
$$l^{W}(x_{i,j},y_{i,j})=\frac{2\mu_{x^{W}_{i^{W},j^{W}}}\mu_{y^{W}_{i^{W},j^{W}}}+C_{1}}{\mu_{x^{W}_{i^{W},j^{W}}}^{2}+\mu_{y^{W}_{i^{W},j^{W}}}^{2}+C_{1}},\quad c^{W}(x_{i,j},y_{i,j})=\frac{2\sigma_{x^{W}_{i^{W},j^{W}}}\sigma_{y^{W}_{i^{W},j^{W}}}+C_{2}}{\sigma_{x^{W}_{i^{W},j^{W}}}^{2}+\sigma_{y^{W}_{i^{W},j^{W}}}^{2}+C_{2}},\quad s^{W}(x_{i,j},y_{i,j})=\frac{\sigma_{x^{W}_{i^{W},j^{W}}y^{W}_{i^{W},j^{W}}}+C_{3}}{\sigma_{x^{W}_{i^{W},j^{W}}}\sigma_{y^{W}_{i^{W},j^{W}}}+C_{3}},$$
where $C_1$, $C_2$, $C_3$ are small constants set to keep the denominators away from zero; in this embodiment $C_1=0.01$, $C_2=0.02$, $C_3=0.01$;
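The three comparison functions share one algebraic form across the pixel, DCT, and wavelet branches; only the statistics fed into them change. A sketch with the constants of this embodiment:

```python
C1, C2, C3 = 0.01, 0.02, 0.01   # small constants guarding the denominators

def luminance(mx, my):
    """Luminance comparison from the two block means."""
    return (2 * mx * my + C1) / (mx ** 2 + my ** 2 + C1)

def contrast(sx, sy):
    """Contrast comparison from the two block standard deviations."""
    return (2 * sx * sy + C2) / (sx ** 2 + sy ** 2 + C2)

def structure(sxy, sx, sy):
    """Structure comparison from the covariance and standard deviations;
    also yields f (DCT AC statistics) and s^W (wavelet statistics)."""
    return (sxy + C3) / (sx * sy + C3)
```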

⑤ If the distortion type of the distorted image Y is Gaussian white noise distortion, compute from the luminance, contrast, and structure functions the structural similarity between every pair of co-located blocks of X and Y. For the blocks $x_{i,j}$ and $y_{i,j}$ at coordinate position (i,j), record it as $\mathrm{SSIM}(x_{i,j},y_{i,j})$, with $\mathrm{SSIM}(x_{i,j},y_{i,j})=[l(x_{i,j},y_{i,j})]^{\alpha}[c(x_{i,j},y_{i,j})]^{\beta}[s(x_{i,j},y_{i,j})]^{\gamma}$, where α, β, and γ are adjustment factors; in this embodiment α = β = γ = 1;

If the distortion type of the distorted image Y is JPEG distortion, compute from the luminance and contrast functions and the DCT-domain structure function the DCT-domain structural similarity between every pair of co-located blocks of X and Y. For the blocks $x_{i,j}$ and $y_{i,j}$ at coordinate position (i,j), record it as $\mathrm{FSSIM}(x_{i,j},y_{i,j})$, with $\mathrm{FSSIM}(x_{i,j},y_{i,j})=[l(x_{i,j},y_{i,j})]^{\alpha}[c(x_{i,j},y_{i,j})]^{\beta}[f(x_{i,j},y_{i,j})]^{\gamma}$, where α, β, and γ are adjustment factors; in this embodiment α = β = γ = 1;

If the distortion type of the distorted image Y is blur-like distortion, compute from the wavelet-coefficient luminance, contrast, and structure functions the wavelet-domain structural similarity between every pair of co-located blocks of X and Y. For the blocks $x_{i,j}$ and $y_{i,j}$ at coordinate position (i,j), record it as $\mathrm{WSSIM}(x_{i,j},y_{i,j})$, with $\mathrm{WSSIM}(x_{i,j},y_{i,j})=[l^{W}(x_{i,j},y_{i,j})]^{\alpha}[c^{W}(x_{i,j},y_{i,j})]^{\beta}[s^{W}(x_{i,j},y_{i,j})]^{\gamma}$, where α, β, and γ are adjustment factors; in this embodiment α = β = γ = 1;
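With α = β = γ = 1, the exponents vanish and the per-block indices of step ⑤ are plain products of the comparison functions sketched above (names reused from that sketch):

```python
def ssim_block(mx, my, sx, sy, sxy):
    """Pixel-domain SSIM for one block pair (white-noise branch)."""
    return luminance(mx, my) * contrast(sx, sy) * structure(sxy, sx, sy)

def fssim_block(mx, my, sx, sy, sxD, syD, sxyD):
    """DCT-domain FSSIM: pixel luminance/contrast, AC-coefficient structure."""
    return luminance(mx, my) * contrast(sx, sy) * structure(sxyD, sxD, syD)

def wssim_block(mxW, myW, sxW, syW, sxyW):
    """Wavelet-domain WSSIM over approximation-subband statistics."""
    return luminance(mxW, myW) * contrast(sxW, syW) * structure(sxyW, sxW, syW)
```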

⑥ If the distortion type of the distorted image Y is Gaussian white noise distortion, compute the objective quality score of Y from the pixel-domain structural similarities of all co-located block pairs of X and Y, recorded as $Q_{wn}$: $Q_{wn}=\frac{1}{M\times N}\sum_{i=1}^{M}\sum_{j=1}^{N}\mathrm{SSIM}(x_{i,j},y_{i,j})$;

If the distortion type of the distorted image Y is JPEG distortion, compute the objective quality score of Y from the DCT-domain structural similarities of all co-located block pairs of X and Y, recorded as $Q_{jpeg}$: $Q_{jpeg}=\frac{1}{M\times N}\sum_{i=1}^{M}\sum_{j=1}^{N}\mathrm{FSSIM}(x_{i,j},y_{i,j})$;

If the distortion type of the distorted image Y is blur-like distortion, compute the objective quality score of Y from the wavelet-domain structural similarities of all co-located block pairs of X and Y, recorded as $Q_{blur}$: $Q_{blur}=\frac{1}{M\times N}\sum_{i=1}^{M}\sum_{j=1}^{N}\mathrm{WSSIM}(x_{i,j},y_{i,j})$.
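Step ⑥ is a uniform average of the chosen per-block index, and together with the step-① label it yields the adaptive dispatch of the whole method. A sketch, with illustrative labels:

```python
import numpy as np

def quality_score(sim_map):
    """Q_wn, Q_jpeg or Q_blur: the mean of the per-block similarity map."""
    return float(np.mean(sim_map))

def evaluate(distortion_type, ssim_map=None, fssim_map=None, wssim_map=None):
    """Pick the pooling that matches the label produced by step ①."""
    if distortion_type == 'white_noise':
        return quality_score(ssim_map)    # pixel-domain branch
    if distortion_type == 'jpeg':
        return quality_score(fssim_map)   # DCT-domain branch
    return quality_score(wssim_map)       # blur-like branch (wavelet domain)
```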

In this embodiment, the 29 undistorted images and 779 singly distorted images provided by LIVE, together with the DMOS (differential mean opinion score) value of each distorted image, are used. The quality evaluation score Q of each distorted image is computed by steps ① to ⑥, and a four-parameter logistic function is fitted nonlinearly between the objective quality scores Q of the 779 distorted images and their DMOS values. Four objective criteria commonly used to assess quality-evaluation methods serve as performance indicators: the linear correlation coefficient (CC), the Spearman rank-order correlation coefficient (SROCC), the outlier ratio (OR), and the root mean squared error (RMSE). Higher CC and SROCC values and lower OR and RMSE values indicate better correlation between the objective evaluation method and DMOS.
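The nonlinear mapping and three of the four indicators can be sketched as follows, assuming arrays `q` (objective scores) and `dmos` (subjective values). The embodiment does not spell out its four-parameter logistic, so a common VQEG-style form is substituted here; OR is omitted because it needs the per-image DMOS spread.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import pearsonr, spearmanr

def logistic4(x, b1, b2, b3, b4):
    """A four-parameter logistic; the exact form used in the embodiment
    is an assumption here."""
    return b1 / (1.0 + np.exp(-(x - b3) / np.abs(b4))) + b2

def fit_and_score(q, dmos):
    """Fit the logistic mapping, then report CC, SROCC and RMSE."""
    p0 = [dmos.max() - dmos.min(), dmos.min(),
          float(np.mean(q)), float(np.std(q))]
    popt, _ = curve_fit(logistic4, q, dmos, p0=p0, maxfev=10000)
    pred = logistic4(q, *popt)
    cc = pearsonr(pred, dmos)[0]                        # CC after the mapping
    srocc = spearmanr(q, dmos)[0]                       # SROCC on raw scores
    rmse = float(np.sqrt(np.mean((pred - dmos) ** 2)))  # RMSE
    return cc, srocc, rmse
```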

Table 1 lists the CC, SROCC, OR, and RMSE values of the evaluation performance under the various distortion types. The data in Table 1 show that the objective quality scores Q of the distorted images obtained in this embodiment correlate highly with the subjective DMOS scores: all CC values exceed 0.94, all SROCC values exceed 0.91, all OR values are below 0.41, and all RMSE values are below 5.4. This indicates that the objective evaluation results of the method of the present invention agree closely with subjective human perception, fully demonstrating the effectiveness of the method.

Table 1 Correlation between the objective evaluation scores and the subjective evaluation scores of the distorted images obtained in this embodiment

Claims (5)

1. A self-adaptive image quality evaluation method based on distortion type judgment is characterized in that the processing process is as follows:
firstly, determining the distortion type of a distorted image to be evaluated;
secondly, corresponding processing is carried out by combining the distortion type of the distorted image to be evaluated;
if the distorted image suffers Gaussian white noise distortion, dividing the original undistorted image and the distorted image to be evaluated into a plurality of overlapped image blocks with the size of 8 multiplied by 8 in a pixel domain, and acquiring the structural similarity based on the pixel domain between all two image blocks with the same coordinate position in the original undistorted image and the distorted image to be evaluated by calculating the luminance mean value and the standard deviation of all pixel points in each image block in the original undistorted image and the distorted image to be evaluated in the pixel domain and the covariance between all pixel point luminance values in all two image blocks with the same coordinate position in the original undistorted image and the distorted image to be evaluated;
if the distorted image is JPEG distorted, dividing the original undistorted image and the distorted image to be evaluated into a plurality of overlapped image blocks with the size of 8 multiplied by 8 in a DCT domain, and acquiring the DCT domain-based structural similarity between all two image blocks with the same coordinate position in the original undistorted image and the distorted image to be evaluated by calculating the mean value and the standard deviation of all DCT coefficients of the original undistorted image block and the distorted image to be evaluated in the DCT domain and the covariance between all DCT coefficients of all two image blocks with the same coordinate position in the DCT domain;
if the distorted image suffers blur-like distortion, dividing the original undistorted image and the distorted image to be evaluated into a plurality of overlapped image blocks with the size of 8 multiplied by 8 in a wavelet domain, and acquiring the structural similarity based on the wavelet domain between all two image blocks with the same coordinate position in the original undistorted image and the distorted image to be evaluated by calculating the mean value and the standard deviation of all wavelet coefficients of the original undistorted image block and the distorted image to be evaluated in the wavelet domain and the covariance between all wavelet coefficients of all two image blocks with the same coordinate position in the wavelet domain;
and finally, obtaining the objective quality score of the distorted image to be evaluated according to the structural similarity between the two image blocks with the same coordinate positions in the original undistorted image and the distorted image to be evaluated.
2. The adaptive image quality evaluation method based on distortion type judgment according to claim 1, characterized in that: the method specifically comprises the following steps:
firstly, enabling X to represent an original undistorted image, enabling Y to represent a distorted image to be evaluated, and determining the distortion type of Y through a distortion type distinguishing method, wherein the distortion type of Y is one of Gaussian white noise distortion, JPEG distortion and blur-like distortion, and the blur-like distortion comprises Gaussian blur distortion, JPEG2000 distortion and fast-fading distortion;
if the distortion type of the distorted image Y is Gaussian white noise distortion, a sliding window with the size of 8 multiplied by 8 is adopted to move in X pixel by pixel, X is divided into M multiplied by N overlapped image blocks with the size of 8 multiplied by 8, and the image block with the coordinate position (i,j) in X is marked as $x_{i,j}$; similarly, the sliding window with the size of 8 multiplied by 8 is moved in Y pixel by pixel, Y is divided into M multiplied by N overlapped image blocks with the size of 8 multiplied by 8, and the image block with the coordinate position (i,j) in Y is marked as $y_{i,j}$; wherein H denotes the height of X and Y, W denotes the width of X and Y, the symbol $\lfloor\cdot\rfloor$ denotes rounding down, i is more than or equal to 1 and less than or equal to M, and j is more than or equal to 1 and less than or equal to N;
if the distortion type of the distorted image Y is JPEG distortion, a sliding window with the size of 8 multiplied by 8 is moved in X pixel by pixel, X is divided into M multiplied by N overlapped image blocks with the size of 8 multiplied by 8, the image block with the coordinate position (i,j) in X is marked as $x_{i,j}$, and a two-dimensional DCT is performed on all image blocks $x_{i,j}$ to obtain the correspondingly transformed image blocks $x^{D}_{i,j}$; similarly, a sliding window with the size of 8 multiplied by 8 is moved in Y pixel by pixel, Y is divided into M multiplied by N overlapped image blocks with the size of 8 multiplied by 8, the image block with the coordinate position (i,j) in Y is marked as $y_{i,j}$, and a two-dimensional DCT is performed on all image blocks $y_{i,j}$ to obtain the correspondingly transformed image blocks $y^{D}_{i,j}$; wherein H denotes the height of X and Y, W denotes the width of X and Y, i is more than or equal to 1 and less than or equal to M, and j is more than or equal to 1 and less than or equal to N;
if the distortion type of the distorted image Y is blur-like distortion, a one-level wavelet transform is performed on X and the approximation component is extracted and recorded as $X_A$; a sliding window with the size of 8 multiplied by 8 is moved in $X_A$ point by point, $X_A$ is divided into M′ multiplied by N′ overlapped image blocks with the size of 8 multiplied by 8, and the image block with the coordinate position (i′,j′) in $X_A$ is marked as $x^{W}_{i',j'}$; similarly, a one-level wavelet transform is performed on Y and the approximation component is extracted and recorded as $Y_A$; a sliding window with the size of 8 multiplied by 8 is moved in $Y_A$ point by point, $Y_A$ is divided into M′ multiplied by N′ overlapped image blocks with the size of 8 multiplied by 8, and the image block with the coordinate position (i′,j′) in $Y_A$ is marked as $y^{W}_{i',j'}$; wherein H′ denotes the height of $X_A$ and $Y_A$, W′ denotes the width of $X_A$ and $Y_A$, i′ is more than or equal to 1 and less than or equal to M′, and j′ is more than or equal to 1 and less than or equal to N′;
thirdly, if the distortion type of the distorted image Y is Gaussian white noise distortion, calculating the luminance mean value and standard deviation of all pixel points in each image block in X, calculating the luminance mean value and standard deviation of all pixel points in each image block in Y, and then calculating the covariance between all pixel points of every two image blocks with the same coordinate position in X and Y; the luminance mean value and standard deviation of all pixel points of the image block $x_{i,j}$ with coordinate position (i,j) in X are recorded as $\mu_{x_{i,j}}$ and $\sigma_{x_{i,j}}$, those of the image block $y_{i,j}$ with coordinate position (i,j) in Y as $\mu_{y_{i,j}}$ and $\sigma_{y_{i,j}}$, and the covariance between all pixel points of $x_{i,j}$ and $y_{i,j}$ as $\sigma_{x_{i,j}y_{i,j}}$:
$$\mu_{x_{i,j}}=\frac{1}{64}\sum_{u=1}^{8}\sum_{v=1}^{8}x_{i,j}(u,v),\quad \sigma_{x_{i,j}}=\sqrt{\frac{1}{64}\sum_{u=1}^{8}\sum_{v=1}^{8}\bigl(x_{i,j}(u,v)-\mu_{x_{i,j}}\bigr)^{2}},$$
$$\mu_{y_{i,j}}=\frac{1}{64}\sum_{u=1}^{8}\sum_{v=1}^{8}y_{i,j}(u,v),\quad \sigma_{y_{i,j}}=\sqrt{\frac{1}{64}\sum_{u=1}^{8}\sum_{v=1}^{8}\bigl(y_{i,j}(u,v)-\mu_{y_{i,j}}\bigr)^{2}},$$
$$\sigma_{x_{i,j}y_{i,j}}=\frac{1}{64}\sum_{u=1}^{8}\sum_{v=1}^{8}\bigl[\bigl(x_{i,j}(u,v)-\mu_{x_{i,j}}\bigr)\bigl(y_{i,j}(u,v)-\mu_{y_{i,j}}\bigr)\bigr],$$
wherein $x_{i,j}(u,v)$ denotes the luminance value of the pixel point with coordinate position (u,v) in $x_{i,j}$, $y_{i,j}(u,v)$ denotes the luminance value of the pixel point with coordinate position (u,v) in $y_{i,j}$, u is more than or equal to 1 and less than or equal to 8, and v is more than or equal to 1 and less than or equal to 8;
if the distortion type of the distorted image Y is JPEG distortion, calculating the luminance mean value and standard deviation of all pixel points in each image block in X and in Y, then calculating the mean value and standard deviation of the DCT alternating-current coefficients of each image block in X and in Y, and finally calculating the covariance between all DCT alternating-current coefficients of every two image blocks with the same coordinate position in X and Y; the luminance mean value and standard deviation of all pixel points of the image block $x_{i,j}$ with coordinate position (i,j) in X are recorded as $\mu_{x_{i,j}}$ and $\sigma_{x_{i,j}}$, those of $y_{i,j}$ as $\mu_{y_{i,j}}$ and $\sigma_{y_{i,j}}$; the mean value and standard deviation of all alternating-current coefficients of the new image block $x^{D}_{i,j}$ obtained after the DCT transform of $x_{i,j}$ are recorded as $\mu_{x^{D}_{i,j}}$ and $\sigma_{x^{D}_{i,j}}$, those of $y^{D}_{i,j}$ as $\mu_{y^{D}_{i,j}}$ and $\sigma_{y^{D}_{i,j}}$; and the covariance between all alternating-current coefficients of the DCT-domain image blocks $x^{D}_{i,j}$ and $y^{D}_{i,j}$ is recorded as $\sigma_{x^{D}_{i,j}y^{D}_{i,j}}$; the pixel-domain statistics are defined as in the Gaussian white noise case, and
$$\mu_{x^{D}_{i,j}}=\frac{1}{64}\sum_{u^{D}=1}^{8}\sum_{v^{D}=1}^{8}x^{D}_{i,j}(u^{D},v^{D}),\quad \sigma_{x^{D}_{i,j}}=\sqrt{\frac{1}{64}\sum_{u^{D}=1}^{8}\sum_{v^{D}=1}^{8}\bigl(x^{D}_{i,j}(u^{D},v^{D})-\mu_{x^{D}_{i,j}}\bigr)^{2}},$$
$$\mu_{y^{D}_{i,j}}=\frac{1}{64}\sum_{u^{D}=1}^{8}\sum_{v^{D}=1}^{8}y^{D}_{i,j}(u^{D},v^{D}),\quad \sigma_{y^{D}_{i,j}}=\sqrt{\frac{1}{64}\sum_{u^{D}=1}^{8}\sum_{v^{D}=1}^{8}\bigl(y^{D}_{i,j}(u^{D},v^{D})-\mu_{y^{D}_{i,j}}\bigr)^{2}},$$
$$\sigma_{x^{D}_{i,j}y^{D}_{i,j}}=\frac{1}{64}\sum_{u^{D}=1}^{8}\sum_{v^{D}=1}^{8}\bigl[\bigl(x^{D}_{i,j}(u^{D},v^{D})-\mu_{x^{D}_{i,j}}\bigr)\bigl(y^{D}_{i,j}(u^{D},v^{D})-\mu_{y^{D}_{i,j}}\bigr)\bigr],$$
wherein $x_{i,j}(u,v)$ denotes the luminance value of the pixel point with coordinate position (u,v) in $x_{i,j}$, $y_{i,j}(u,v)$ that in $y_{i,j}$, u is more than or equal to 1 and less than or equal to 8, v is more than or equal to 1 and less than or equal to 8; $x^{D}_{i,j}(u^{D},v^{D})$ denotes the DCT coefficient value with coordinate position $(u^{D},v^{D})$ in $x^{D}_{i,j}$, $y^{D}_{i,j}(u^{D},v^{D})$ that in $y^{D}_{i,j}$, $u^{D}$ is more than or equal to 1 and less than or equal to 8, $v^{D}$ is more than or equal to 1 and less than or equal to 8, and $u^{D}$ and $v^{D}$ are not 1 at the same time;
if the distortion type of the distorted image Y is blur-like distortion, calculating the mean value and standard deviation of all coefficient values of each image block of the approximation component $X_A$ of X after the one-level wavelet transform, calculating the same for the approximation component $Y_A$ of Y after the one-level wavelet transform, and then calculating the covariance between all coefficients of every two image blocks with the same coordinate position in $X_A$ and $Y_A$; the mean value and standard deviation of all coefficients of the image block $x^{W}_{i^{W},j^{W}}$ with coordinate position $(i^{W},j^{W})$ in $X_A$ are recorded as $\mu_{x^{W}_{i^{W},j^{W}}}$ and $\sigma_{x^{W}_{i^{W},j^{W}}}$, those of the image block $y^{W}_{i^{W},j^{W}}$ in $Y_A$ as $\mu_{y^{W}_{i^{W},j^{W}}}$ and $\sigma_{y^{W}_{i^{W},j^{W}}}$, and the covariance between all coefficients of $x^{W}_{i^{W},j^{W}}$ and $y^{W}_{i^{W},j^{W}}$ as $\sigma_{x^{W}_{i^{W},j^{W}}y^{W}_{i^{W},j^{W}}}$:
$$\mu_{x^{W}_{i^{W},j^{W}}}=\frac{1}{64}\sum_{u^{W}=1}^{8}\sum_{v^{W}=1}^{8}x^{W}_{i^{W},j^{W}}(u^{W},v^{W}),\quad \sigma_{x^{W}_{i^{W},j^{W}}}=\sqrt{\frac{1}{64}\sum_{u^{W}=1}^{8}\sum_{v^{W}=1}^{8}\bigl(x^{W}_{i^{W},j^{W}}(u^{W},v^{W})-\mu_{x^{W}_{i^{W},j^{W}}}\bigr)^{2}},$$
$$\mu_{y^{W}_{i^{W},j^{W}}}=\frac{1}{64}\sum_{u^{W}=1}^{8}\sum_{v^{W}=1}^{8}y^{W}_{i^{W},j^{W}}(u^{W},v^{W}),\quad \sigma_{y^{W}_{i^{W},j^{W}}}=\sqrt{\frac{1}{64}\sum_{u^{W}=1}^{8}\sum_{v^{W}=1}^{8}\bigl(y^{W}_{i^{W},j^{W}}(u^{W},v^{W})-\mu_{y^{W}_{i^{W},j^{W}}}\bigr)^{2}},$$
$$\sigma_{x^{W}_{i^{W},j^{W}}y^{W}_{i^{W},j^{W}}}=\frac{1}{64}\sum_{u^{W}=1}^{8}\sum_{v^{W}=1}^{8}\bigl[\bigl(x^{W}_{i^{W},j^{W}}(u^{W},v^{W})-\mu_{x^{W}_{i^{W},j^{W}}}\bigr)\bigl(y^{W}_{i^{W},j^{W}}(u^{W},v^{W})-\mu_{y^{W}_{i^{W},j^{W}}}\bigr)\bigr],$$
wherein $x^{W}_{i^{W},j^{W}}(u^{W},v^{W})$ denotes the coefficient value with coordinate position $(u^{W},v^{W})$ in $x^{W}_{i^{W},j^{W}}$, $y^{W}_{i^{W},j^{W}}(u^{W},v^{W})$ that in $y^{W}_{i^{W},j^{W}}$, $u^{W}$ is more than or equal to 1 and less than or equal to 8, and $v^{W}$ is more than or equal to 1 and less than or equal to 8;
fourthly, if the distortion type of the distorted image Y is Gaussian white noise distortion, calculating the luminance function, the contrast function and the structure function between every two image blocks with the same coordinate position in X and Y; for the image block $x_{i,j}$ with coordinate position (i,j) in X and the image block $y_{i,j}$ with coordinate position (i,j) in Y, these are recorded as $l(x_{i,j},y_{i,j})$, $c(x_{i,j},y_{i,j})$ and $s(x_{i,j},y_{i,j})$:
$$l(x_{i,j},y_{i,j})=\frac{2\mu_{x_{i,j}}\mu_{y_{i,j}}+C_{1}}{\mu_{x_{i,j}}^{2}+\mu_{y_{i,j}}^{2}+C_{1}},\quad c(x_{i,j},y_{i,j})=\frac{2\sigma_{x_{i,j}}\sigma_{y_{i,j}}+C_{2}}{\sigma_{x_{i,j}}^{2}+\sigma_{y_{i,j}}^{2}+C_{2}},\quad s(x_{i,j},y_{i,j})=\frac{\sigma_{x_{i,j}y_{i,j}}+C_{3}}{\sigma_{x_{i,j}}\sigma_{y_{i,j}}+C_{3}},$$
wherein $C_1$, $C_2$ and $C_3$ are small-value constants set to prevent the denominators from being zero;
if the distortion type of the distorted image Y is JPEG distortion, calculating the luminance function and the contrast function between every two image blocks with the same coordinate position in X and Y, and calculating the structure function of every two image blocks with the same coordinate position in X and Y in the DCT domain; for the image blocks $x_{i,j}$ and $y_{i,j}$ with coordinate position (i,j), the luminance and contrast functions are recorded as $l(x_{i,j},y_{i,j})$ and $c(x_{i,j},y_{i,j})$, and the structure function between the DCT-transformed blocks $x^{D}_{i,j}$ and $y^{D}_{i,j}$ is recorded as $f(x_{i,j},y_{i,j})$:
$$c(x_{i,j},y_{i,j})=\frac{2\sigma_{x_{i,j}}\sigma_{y_{i,j}}+C_{2}}{\sigma_{x_{i,j}}^{2}+\sigma_{y_{i,j}}^{2}+C_{2}},\quad f(x_{i,j},y_{i,j})=\frac{\sigma_{x^{D}_{i,j}y^{D}_{i,j}}+C_{3}}{\sigma_{x^{D}_{i,j}}\sigma_{y^{D}_{i,j}}+C_{3}},$$
wherein $l(x_{i,j},y_{i,j})$ is defined as above and $C_1$, $C_2$ and $C_3$ are small-value constants set to prevent the denominators from being zero;
if the distortion type of the distorted image Y is blur-like distortion, calculate the wavelet-coefficient brightness function, the wavelet-coefficient contrast function and the wavelet-coefficient structure degree function between every pair of image blocks with the same coordinate position in X and Y. Denote these three functions between the image block $x_{i,j}$ with coordinate position (i, j) in X and the image block $y_{i,j}$ with coordinate position (i, j) in Y as $l^{W}(x_{i,j},y_{i,j})$, $c^{W}(x_{i,j},y_{i,j})$ and $s^{W}(x_{i,j},y_{i,j})$ respectively:

$$l^{W}(x_{i,j},y_{i,j})=\frac{2\mu_{x^{W}_{i,j}}\mu_{y^{W}_{i,j}}+C_1}{\mu_{x^{W}_{i,j}}^{2}+\mu_{y^{W}_{i,j}}^{2}+C_1},\qquad c^{W}(x_{i,j},y_{i,j})=\frac{2\sigma_{x^{W}_{i,j}}\sigma_{y^{W}_{i,j}}+C_2}{\sigma_{x^{W}_{i,j}}^{2}+\sigma_{y^{W}_{i,j}}^{2}+C_2},\qquad s^{W}(x_{i,j},y_{i,j})=\frac{\sigma_{x^{W}_{i,j}y^{W}_{i,j}}+C_3}{\sigma_{x^{W}_{i,j}}\sigma_{y^{W}_{i,j}}+C_3},$$

where $x^{W}_{i,j}$ and $y^{W}_{i,j}$ denote the wavelet-coefficient blocks corresponding to $x_{i,j}$ and $y_{i,j}$, $\mu$ and $\sigma$ denote their means and standard deviations, $\sigma_{x^{W}_{i,j}y^{W}_{i,j}}$ is their covariance, and $C_1$, $C_2$ and $C_3$ are small-valued constants set to avoid a zero denominator;
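The wavelet-domain components can be sketched with PyWavelets. Using the approximation subband of a one-level Haar decomposition is an assumption here, since this claim text does not fix the filter or subband; the name wavelet_components is ours.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_components(x, y, C1=0.01, C2=0.02, C3=0.01, wavelet='haar'):
    """l^W, c^W, s^W between two co-located blocks, computed on the
    approximation subband of a one-level 2-D wavelet decomposition."""
    xw, _ = pywt.dwt2(x.astype(float), wavelet)  # returns cA, (cH, cV, cD)
    yw, _ = pywt.dwt2(y.astype(float), wavelet)
    mu_x, mu_y = xw.mean(), yw.mean()
    sd_x, sd_y = xw.std(), yw.std()
    cov = ((xw - mu_x) * (yw - mu_y)).mean()
    lw = (2 * mu_x * mu_y + C1) / (mu_x**2 + mu_y**2 + C1)
    cw = (2 * sd_x * sd_y + C2) / (sd_x**2 + sd_y**2 + C2)
    sw = (cov + C3) / (sd_x * sd_y + C3)
    return lw, cw, sw
```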
Fifthly, if the distortion type of the distorted image Y is Gaussian white noise distortion, calculate the structural similarity between every pair of image blocks with the same coordinate position in X and Y from the brightness function, the contrast function and the structure function between them. Denote the structural similarity between the image block $x_{i,j}$ with coordinate position (i, j) in X and the image block $y_{i,j}$ with coordinate position (i, j) in Y as $\mathrm{SSIM}(x_{i,j},y_{i,j})$, where $\mathrm{SSIM}(x_{i,j},y_{i,j})=[l(x_{i,j},y_{i,j})]^{\alpha}[c(x_{i,j},y_{i,j})]^{\beta}[s(x_{i,j},y_{i,j})]^{\gamma}$ and $\alpha$, $\beta$ and $\gamma$ are adjustment factors;
if the distortion type of the distorted image Y is JPEG distortion, calculate the DCT-domain structural similarity between every pair of image blocks with the same coordinate position in X and Y from the brightness function and the contrast function between them and their structure degree function in the DCT domain. Denote the DCT-domain structural similarity between the image block $x_{i,j}$ with coordinate position (i, j) in X and the image block $y_{i,j}$ with coordinate position (i, j) in Y as $\mathrm{FSSIM}(x_{i,j},y_{i,j})$, where $\mathrm{FSSIM}(x_{i,j},y_{i,j})=[l(x_{i,j},y_{i,j})]^{\alpha}[c(x_{i,j},y_{i,j})]^{\beta}[f(x_{i,j},y_{i,j})]^{\gamma}$ and $\alpha$, $\beta$ and $\gamma$ are adjustment factors;
if the distortion type of the distorted image Y is blur-like distortion, calculate the wavelet-domain structural similarity between every pair of image blocks with the same coordinate position in X and Y from the wavelet-coefficient brightness function, the wavelet-coefficient contrast function and the wavelet-coefficient structure degree function between them. Denote the wavelet-domain structural similarity between the image block $x_{i,j}$ with coordinate position (i, j) in X and the image block $y_{i,j}$ with coordinate position (i, j) in Y as $\mathrm{WSSIM}(x_{i,j},y_{i,j})$, where $\mathrm{WSSIM}(x_{i,j},y_{i,j})=[l^{W}(x_{i,j},y_{i,j})]^{\alpha}[c^{W}(x_{i,j},y_{i,j})]^{\beta}[s^{W}(x_{i,j},y_{i,j})]^{\gamma}$ and $\alpha$, $\beta$ and $\gamma$ are adjustment factors;
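All three block-level indices share the same exponent-weighted product form; a one-line helper makes that explicit. The name combine is ours, and the default α = β = γ = 1 follows claim 5.

```python
def combine(l, c, s, alpha=1.0, beta=1.0, gamma=1.0):
    """SSIM/FSSIM/WSSIM-style product of component functions."""
    return (l ** alpha) * (c ** beta) * (s ** gamma)
```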
Sixthly, if the distortion type of the distorted image Y is Gaussian white noise distortion, calculate the objective quality score of Y from the pixel-domain structural similarity between every pair of image blocks with the same coordinate position in X and Y, and denote it as $Q_{wn}$:

$$Q_{wn}=\frac{1}{M\times N}\sum_{i=1}^{M}\sum_{j=1}^{N}\mathrm{SSIM}(x_{i,j},y_{i,j});$$
If the distortion type of the distorted image Y is JPEG distortion, calculate the objective quality score of Y from the DCT-domain structural similarity between every pair of image blocks with the same coordinate position in X and Y, and denote it as $Q_{jpeg}$:

$$Q_{jpeg}=\frac{1}{M\times N}\sum_{i=1}^{M}\sum_{j=1}^{N}\mathrm{FSSIM}(x_{i,j},y_{i,j});$$
If the distortion type of the distorted image Y is blur-like distortion, calculate the objective quality score of Y from the wavelet-domain structural similarity between every pair of image blocks with the same coordinate position in X and Y, and denote it as $Q_{blur}$:

$$Q_{blur}=\frac{1}{M\times N}\sum_{i=1}^{M}\sum_{j=1}^{N}\mathrm{WSSIM}(x_{i,j},y_{i,j}).$$
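Pooling over the block grid is the same mean for all three branches. The sketch below reuses the helpers sketched above and dispatches on the judged distortion type; the 8x8 non-overlapping block size is an assumption, since the claims shown here do not restate how the blocks x_{i,j}, y_{i,j} are formed.

```python
import numpy as np

def objective_score(X, Y, distortion, block=8):
    """Mean block-level similarity over the M x N grid of co-located
    blocks of reference X and distorted Y (2-D grayscale arrays)."""
    H, W = X.shape
    scores = []
    for r in range(0, H - block + 1, block):
        for c0 in range(0, W - block + 1, block):
            x, y = X[r:r+block, c0:c0+block], Y[r:r+block, c0:c0+block]
            if distortion == 'white_noise':        # Q_wn, pixel domain
                l, c, s = ssim_components(x, y)
            elif distortion == 'jpeg':             # Q_jpeg, DCT domain
                l, c, _ = ssim_components(x, y)
                s = dct_structure(x, y)
            else:                                  # Q_blur, wavelet domain
                l, c, s = wavelet_components(x, y)
            scores.append(combine(l, c, s))
    return float(np.mean(scores))
```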
3. The adaptive image quality evaluation method based on distortion type judgment according to claim 2, characterized in that the specific process of judging the distortion type of Y by the distortion type discrimination method in step ① is as follows:
①-a. Divide X into non-overlapping 64x64 blocks to obtain M'xN' image blocks of size 64x64, and denote the image block with coordinate position (i', j') in X as $x'_{i',j'}$. Perform a one-level wavelet decomposition on each image block $x'_{i',j'}$, extract its diagonal component, find the median of the coefficient amplitudes in the diagonal component of each image block, and calculate the noise standard deviation of each image block. Denote the median of the coefficient amplitudes in the diagonal component of the wavelet decomposition of $x'_{i',j'}$ as $\mathrm{MED}_{x'_{i',j'}}$; its noise standard deviation is

$$\sigma_{x'_{i',j'}}=\frac{\mathrm{MED}_{x'_{i',j'}}}{0.6745},$$

where $1\le i'\le M'$, $1\le j'\le N'$;
Similarly, divide Y into non-overlapping 64x64 blocks to obtain M'xN' image blocks of size 64x64, and denote the image block with coordinate position (i', j') in Y as $y'_{i',j'}$. Perform a one-level wavelet decomposition on each image block $y'_{i',j'}$, extract its diagonal component, find the median of the coefficient amplitudes in the diagonal component of each image block, and calculate the noise standard deviation of each image block. Denote the median of the coefficient amplitudes in the diagonal component of the wavelet decomposition of $y'_{i',j'}$ as $\mathrm{MED}_{y'_{i',j'}}$; its noise standard deviation is

$$\sigma_{y'_{i',j'}}=\frac{\mathrm{MED}_{y'_{i',j'}}}{0.6745};$$
①-b. Calculate the difference of the noise standard deviations between every pair of image blocks with the same coordinate position in X and Y, and denote the difference of the noise standard deviations between the image blocks $x'_{i',j'}$ and $y'_{i',j'}$ at coordinate position (i', j') in X and Y as $\Delta\sigma_{i',j'}$. Then calculate the mean of these differences over all pairs of image blocks with the same coordinate position in X and Y, denoted $\overline{\Delta\sigma}$:

$$\overline{\Delta\sigma}=\frac{1}{M'\times N'}\sum_{i'=1}^{M'}\sum_{j'=1}^{N'}\Delta\sigma_{i',j'};$$
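A sketch of steps ①-a and ①-b: per-block robust noise estimation from the diagonal (HH) subband, then the mean difference over the block grid. The db1 (Haar) filter and the sign convention Δσ = σ_y − σ_x are assumptions; the claim fixes only the one-level decomposition, the median, and the 0.6745 divisor.

```python
import numpy as np
import pywt

def block_noise_std(block, wavelet='db1'):
    """Median absolute HH coefficient / 0.6745 for one 64x64 block."""
    _, (_, _, cD) = pywt.dwt2(block.astype(float), wavelet)
    return np.median(np.abs(cD)) / 0.6745

def mean_noise_std_difference(X, Y, block=64):
    """Mean of the noise-std differences over co-located 64x64 blocks."""
    H, W = X.shape
    diffs = [block_noise_std(Y[r:r+block, c:c+block]) -
             block_noise_std(X[r:r+block, c:c+block])
             for r in range(0, H - block + 1, block)
             for c in range(0, W - block + 1, block)]
    return float(np.mean(diffs))
```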
①-c. Judge whether $\overline{\Delta\sigma}$ exceeds the Gaussian white noise distortion judgment threshold $Th_{WN}$; if it does, determine that the distortion type of Y is Gaussian white noise distortion and end; otherwise, execute step ①-d;
①-d. Compute the luminance difference map of X, denoted $X_h$, and denote the coefficient value at coordinate position (i'', j'') in $X_h$ as $X_h(i'',j'')$, where $X_h(i'',j'')=|X(i'',j'')-X(i'',j''+1)|$, $1\le i''\le H$, $1\le j''\le W-1$, $X(i'',j'')$ represents the luminance value of the pixel at coordinate position (i'', j'') in X, $X(i'',j''+1)$ represents the luminance value of the pixel at coordinate position (i'', j''+1) in X, and "| |" is the absolute value operator;
Similarly, compute the luminance difference map of Y, denoted $Y_h$, and denote the coefficient value at coordinate position (i'', j'') in $Y_h$ as $Y_h(i'',j'')$, where $Y_h(i'',j'')=|Y(i'',j'')-Y(i'',j''+1)|$, $1\le i''\le H$, $1\le j''\le W-1$, $Y(i'',j'')$ represents the luminance value of the pixel at coordinate position (i'', j'') in Y, and $Y(i'',j''+1)$ represents the luminance value of the pixel at coordinate position (i'', j''+1) in Y;
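The luminance difference maps are plain horizontal first differences; a two-line sketch (the function name is ours):

```python
import numpy as np

def luminance_difference_map(img):
    """|I(i,j) - I(i,j+1)|: output has W-1 columns, matching 1 <= j'' <= W-1."""
    img = img.astype(float)
    return np.abs(img[:, :-1] - img[:, 1:])
```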
①-e. Divide the luminance difference map $X_h$ of X into non-overlapping 8x8 blocks to obtain M''xN'' image blocks of size 8x8, and denote the image block at coordinate position (i''', j''') in $X_h$ as $x^{h}_{i''',j'''}$. Define the intra-block energy and the block edge energy of $x^{h}_{i''',j'''}$ as $Ex^{In}_{i''',j'''}$ and $Ex^{Ed}_{i''',j'''}$ respectively:

$$Ex^{In}_{i''',j'''}=\frac{1}{56}\sum_{p=1}^{8}\sum_{q=1}^{7}x^{h}_{i''',j'''}(p,q),\qquad Ex^{Ed}_{i''',j'''}=\frac{1}{8}\sum_{p=1}^{8}x^{h}_{i''',j'''}(p,8),$$

where $x^{h}_{i''',j'''}(p,q)$ is the coefficient value at coordinate position (p, q) in $x^{h}_{i''',j'''}$, $x^{h}_{i''',j'''}(p,8)$ is the coefficient value at coordinate position (p, 8) in $x^{h}_{i''',j'''}$, $1\le i'''\le M''$, $1\le j'''\le N''$, $1\le p\le 8$, $1\le q\le 7$;
Similarly, divide the luminance difference map $Y_h$ of Y into non-overlapping 8x8 blocks to obtain M''xN'' image blocks of size 8x8, and denote the image block at coordinate position (i''', j''') in $Y_h$ as $y^{h}_{i''',j'''}$. Define the intra-block energy and the block edge energy of $y^{h}_{i''',j'''}$ as $Ey^{In}_{i''',j'''}$ and $Ey^{Ed}_{i''',j'''}$ respectively:

$$Ey^{In}_{i''',j'''}=\frac{1}{56}\sum_{p=1}^{8}\sum_{q=1}^{7}y^{h}_{i''',j'''}(p,q),\qquad Ey^{Ed}_{i''',j'''}=\frac{1}{8}\sum_{p=1}^{8}y^{h}_{i''',j'''}(p,8),$$

where $y^{h}_{i''',j'''}(p,q)$ is the coefficient value at coordinate position (p, q) in $y^{h}_{i''',j'''}$ and $y^{h}_{i''',j'''}(p,8)$ is the coefficient value at coordinate position (p, 8) in $y^{h}_{i''',j'''}$;
①-f. Calculate the ratio between the block edge energy and the intra-block energy of every image block in $X_h$, and denote this ratio for the image block $x^{h}_{i''',j'''}$ at coordinate position (i''', j''') in $X_h$ as $R^{x}_{i''',j'''}$:

$$R^{x}_{i''',j'''}=\frac{Ex^{Ed}_{i''',j'''}}{Ex^{In}_{i''',j'''}};$$
Likewise, calculate the ratio between the block edge energy and the intra-block energy of every image block in $Y_h$, and denote this ratio for the image block $y^{h}_{i''',j'''}$ at coordinate position (i''', j''') in $Y_h$ as $R^{y}_{i''',j'''}$:

$$R^{y}_{i''',j'''}=\frac{Ey^{Ed}_{i''',j'''}}{Ey^{In}_{i''',j'''}};$$
Count the image blocks satisfying the inequality $R^{y}_{i''',j'''}>R^{x}_{i''',j'''}$, and denote their number as $N_0$; define the judgment index J as

$$J=\frac{N_0}{M''\times N''};$$
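Steps ①-e and ①-f in code form. The counted inequality is written as R^y > R^x, i.e. the block-boundary column gains energy relative to the interior in the distorted image, which is the usual signature of JPEG blocking; that direction is an assumption, as the inequality itself did not survive extraction.

```python
import numpy as np

def jpeg_judgment_index(Xh, Yh, block=8):
    """Judgment index J: fraction of co-located 8x8 blocks of the
    difference maps whose edge-to-intra energy ratio grows from X to Y."""
    def ratio(b):
        e_in = b[:, :7].sum() / 56.0    # mean over the 8x7 interior columns
        e_ed = b[:, 7].sum() / 8.0      # mean over the 8th (boundary) column
        return e_ed / (e_in + 1e-12)    # guard a flat interior
    H, W = Xh.shape
    n0 = total = 0
    for r in range(0, H - block + 1, block):
        for c in range(0, W - block + 1, block):
            total += 1
            if ratio(Yh[r:r+block, c:c+block]) > ratio(Xh[r:r+block, c:c+block]):
                n0 += 1
    return n0 / max(total, 1)
```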
①-g. Judge whether $J>Th_{JPEG}$ holds; if it does, determine that the distortion type of Y is JPEG distortion and end; otherwise, execute step ①-h, where $Th_{JPEG}$ is the JPEG distortion judgment threshold;
①-h. Determine the distortion type of Y to be blur-like distortion, i.e. the distortion type of Y is Gaussian blur distortion, JPEG2000 distortion, or fast fading distortion.
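Putting claim 3 together, the discrimination reduces to two threshold tests. The sketch reuses the helpers above, with threshold defaults taken from claim 4 and strict ">" comparisons assumed where the claim text was garbled.

```python
def judge_distortion(X, Y, th_wn=0.8, th_jpeg=0.57):
    """Claim 3 decision flow: white noise, then JPEG, else blur-like."""
    if mean_noise_std_difference(X, Y) > th_wn:
        return 'white_noise'
    Xh, Yh = luminance_difference_map(X), luminance_difference_map(Y)
    if jpeg_judgment_index(Xh, Yh) > th_jpeg:
        return 'jpeg'
    return 'blur_like'   # Gaussian blur, JPEG2000 or fast fading
```

Combined with the scoring sketch after the sixth step, the full adaptive method is then Q = objective_score(X, Y, judge_distortion(X, Y)).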
4. The adaptive image quality evaluation method based on distortion type judgment according to claim 3, characterized in that: in step ①-c, the Gaussian white noise distortion judgment threshold $Th_{WN}$ is taken as 0.8; in step ①-g, the JPEG distortion judgment threshold $Th_{JPEG}$ is taken as 0.57.
5. The adaptive image quality evaluation method based on distortion type judgment according to claim 2, characterized in that: in the fourth step, $C_1=0.01$, $C_2=0.02$ and $C_3=0.01$ are taken, and $\alpha=\beta=\gamma=1$.
CN201310406821.0A 2013-09-09 2013-09-09 Adaptive image quality evaluation method based on distortion type judgment Expired - Fee Related CN103475897B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310406821.0A CN103475897B (en) 2013-09-09 2013-09-09 Adaptive image quality evaluation method based on distortion type judgment

Publications (2)

Publication Number Publication Date
CN103475897A CN103475897A (en) 2013-12-25
CN103475897B (en) 2015-03-11

Family

ID=49800574

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310406821.0A Expired - Fee Related CN103475897B (en) 2013-09-09 2013-09-09 Adaptive image quality evaluation method based on distortion type judgment

Country Status (1)

Country Link
CN (1) CN103475897B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104123723A (en) * 2014-07-08 2014-10-29 上海交通大学 Structure compensation based image quality evaluation method
CN104918039B (en) * 2015-05-05 2017-06-13 四川九洲电器集团有限责任公司 image quality evaluating method and system
CN105894522B (en) * 2016-04-28 2018-05-25 宁波大学 A kind of more distortion objective evaluation method for quality of stereo images
CN106412569B (en) * 2016-09-28 2017-12-15 宁波大学 A kind of selection of feature based without referring to more distortion stereo image quality evaluation methods
CN106778917A (en) * 2017-01-24 2017-05-31 北京理工大学 Based on small echo statistical nature without reference noise image quality evaluating method
CN108664839B (en) * 2017-03-27 2024-01-12 北京三星通信技术研究有限公司 Image processing method and device
CN107770517A (en) * 2017-10-24 2018-03-06 天津大学 Full reference image quality appraisement method based on image fault type
CN110415207A (en) * 2019-04-30 2019-11-05 杭州电子科技大学 A Method of Image Quality Evaluation Based on Image Distortion Type
CN111179242B (en) * 2019-12-25 2023-06-02 Tcl华星光电技术有限公司 Image processing method and device

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102209257B (en) * 2011-06-17 2013-11-20 宁波大学 Stereo image quality objective evaluation method
CN102333233B (en) * 2011-09-23 2013-11-06 宁波大学 Stereo image quality objective evaluation method based on visual perception
CN102421007B (en) * 2011-11-28 2013-09-04 浙江大学 Image quality evaluating method based on multi-scale structure similarity weighted aggregate
CN102945552A (en) * 2012-10-22 2013-02-27 西安电子科技大学 No-reference image quality evaluation method based on sparse representation in natural scene statistics
CN102982532B (en) * 2012-10-31 2015-06-17 宁波大学 Stereo image objective quality evaluation method base on matrix decomposition

Similar Documents

Publication Publication Date Title
CN103475897B (en) Adaptive image quality evaluation method based on distortion type judgment
CN103338380B (en) Adaptive image quality objective evaluation method
Li et al. No-reference image blur assessment based on discrete orthogonal moments
Gu et al. No-reference quality metric of contrast-distorted images based on information maximization
Feichtenhofer et al. A perceptual image sharpness metric based on local edge gradient analysis
CN103996192B (en) Non-reference image quality evaluation method based on high-quality natural image statistical magnitude model
CN108052980B (en) Image-based air quality grade detection method
CN101976444B (en) An Objective Evaluation Method of Image Quality Based on Structural Similarity Based on Pixel Type
CN103208097B (en) Filtering method is worked in coordination with in the principal component analysis of the multi-direction morphosis grouping of image
CN103945217B (en) Based on complex wavelet domain half-blindness image quality evaluating method and the system of entropy
Gu et al. Structural similarity weighting for image quality assessment
Bhateja et al. Fast SSIM index for color images employing reduced-reference evaluation
CN105160667A (en) Blind image quality evaluation method based on combining gradient signal and Laplacian of Gaussian (LOG) signal
CN105006001A (en) Quality estimation method of parametric image based on nonlinear structural similarity deviation
CN104202594A (en) Video quality evaluation method based on three-dimensional wavelet transform
CN107451981A (en) Picture noise level estimation method based on DCT and gradient covariance matrix
Gu et al. An improved full-reference image quality metric based on structure compensation
CN109754390A (en) A Reference-Free Image Quality Evaluation Method Based on Hybrid Visual Features
CN103093432A (en) Polarized synthetic aperture radar (SAR) image speckle reduction method based on polarization decomposition and image block similarity
George et al. A survey on full reference image quality assessment algorithms
CN105979266B (en) It is a kind of based on intra-frame trunk and the worst time-domain information fusion method of time slot
CN106683079A (en) No-reference image objective quality evaluation method based on structural distortion
CN103578104B (en) A kind of partial reference image method for evaluating objective quality for Gaussian Blur image
CN106022362A (en) Reference-free image quality objective evaluation method for JPEG2000 compression distortion
CN106023152A (en) Reference-free stereo image quality objective evaluation method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20150311

Termination date: 20190909