CN104361593A - Color image quality evaluation method based on HVSs and quaternions - Google Patents
Color image quality evaluation method based on HVS and quaternions
- Publication number
- CN104361593A CN104361593A CN201410650245.9A CN201410650245A CN104361593A CN 104361593 A CN104361593 A CN 104361593A CN 201410650245 A CN201410650245 A CN 201410650245A CN 104361593 A CN104361593 A CN 104361593A
- Authority
- CN
- China
- Prior art keywords
- image
- quaternion
- function
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 238000000034 method Methods 0.000 title claims abstract description 49
- 238000013441 quality evaluation Methods 0.000 title claims abstract description 40
- 241000282414 Homo sapiens Species 0.000 claims abstract description 67
- 230000000007 visual effect Effects 0.000 claims abstract description 38
- 239000011159 matrix material Substances 0.000 claims abstract description 21
- 238000000354 decomposition reaction Methods 0.000 claims abstract description 18
- 238000013210 evaluation model Methods 0.000 claims abstract description 13
- 230000000694 effects Effects 0.000 claims abstract description 10
- 238000001303 quality assessment method Methods 0.000 claims description 9
- 230000008569 process Effects 0.000 claims description 5
- 239000013598 vector Substances 0.000 claims description 3
- 238000005192 partition Methods 0.000 claims 1
- 238000011156 evaluation Methods 0.000 abstract description 42
- 230000006870 function Effects 0.000 abstract description 31
- 238000012545 processing Methods 0.000 abstract description 12
- 230000008447 perception Effects 0.000 abstract description 8
- 239000000284 extract Substances 0.000 abstract description 7
- 230000004438 eyesight Effects 0.000 abstract description 3
- 238000004422 calculation algorithm Methods 0.000 description 38
- 230000000873 masking effect Effects 0.000 description 7
- 238000013178 mathematical model Methods 0.000 description 5
- 230000016776 visual perception Effects 0.000 description 4
- 238000004364 calculation method Methods 0.000 description 3
- 238000005516 engineering process Methods 0.000 description 3
- 238000013507 mapping Methods 0.000 description 3
- 238000012360 testing method Methods 0.000 description 3
- 238000004458 analytical method Methods 0.000 description 2
- 238000010586 diagram Methods 0.000 description 2
- 238000005259 measurement Methods 0.000 description 2
- 238000011160 research Methods 0.000 description 2
- 230000035945 sensitivity Effects 0.000 description 2
- 230000003044 adaptive effect Effects 0.000 description 1
- 230000009286 beneficial effect Effects 0.000 description 1
- 230000005540 biological transmission Effects 0.000 description 1
- 210000004556 brain Anatomy 0.000 description 1
- 230000015556 catabolic process Effects 0.000 description 1
- 239000000919 ceramic Substances 0.000 description 1
- 230000008859 change Effects 0.000 description 1
- 230000006835 compression Effects 0.000 description 1
- 238000007906 compression Methods 0.000 description 1
- 238000006731 degradation reaction Methods 0.000 description 1
- 238000013461 design Methods 0.000 description 1
- 238000011161 development Methods 0.000 description 1
- 238000003708 edge detection Methods 0.000 description 1
- 238000002474 experimental method Methods 0.000 description 1
- 238000012417 linear regression Methods 0.000 description 1
- 238000010422 painting Methods 0.000 description 1
- 238000012113 quantitative test Methods 0.000 description 1
- 238000007619 statistical method Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30168—Image quality inspection
Landscapes
- Image Analysis (AREA)
Abstract
The invention discloses a color image quality evaluation method based on the HVS and quaternions, belonging to the technical field of image processing and computer vision. The steps of the invention are: 1. By analyzing the visual characteristics of the human eye, construct mathematical evaluation models of the original reference image and of the distorted image to be evaluated: the spatial position function QL, the local variance QV, the texture-edge complexity function QTE and the color function QC of the image. 2. Construct quaternion matrices of the original reference image and of the distorted image to be evaluated, and perform singular value decomposition on the quaternion matrices to obtain the singular-value feature vectors of the images. 3. Measure the degree of image distortion by the Euclidean distance between the singular-value feature vectors of the original reference image and of the distorted image to be evaluated. The invention combines human visual characteristics with quaternions, extracts the luminance and chrominance information of the image, and uses human visual characteristics to construct the spatial position function, the texture-edge complexity function and the local variance, so that the evaluation results agree better with how the human eye perceives the image.
Description
Technical Field
The present invention relates to the technical field of image processing and computer vision, and more specifically to a color image quality evaluation method that uses the characteristics of the human visual system (HVS) to construct a mathematical model consistent with how the human eye observes an image and combines this model with quaternion singular value decomposition.
Background Art
Image quality is one of the important parameters in the fields of image processing and computer vision. With the development of computer science and technology, the requirements on image quality in printing, ceramic tiles, imaging, image retrieval and other applications are becoming ever higher; however, image distortion and degradation of varying degrees arise during image acquisition, processing, compression, transmission and display.
Since human beings are the ultimate recipients of images, their subjective quality evaluation of images (Difference Mean Opinion Score, DMOS) is considered the most reliable. In subjective quality evaluation, observers assess and score the visual perception of the target image according to their own subjective experience or to evaluation criteria agreed upon in advance, and the scores of all observers are then averaged with weights; the result is the subjective quality score of the image. However, subjective image quality evaluation is time-consuming and laborious, is strongly affected by the observer, the image type and the surrounding environment, and offers poor real-time performance. Researchers have therefore devoted themselves to objective image quality evaluation methods that reflect human subjective visual perception correctly, promptly and effectively. Objective image quality evaluation uses algorithms, mathematical models and the like to give timely, rapid feedback on image quality and to obtain results consistent with human subjective perception. Such methods are diverse, and their classification differs with the starting point and the basic idea. According to how the original image is referenced, objective quality evaluation methods fall into three types: full-reference, reduced-reference and no-reference. Full-reference methods are suitable for encoder design and for comparing the performance of different encoders, while reduced-reference and no-reference methods are suitable for multimedia applications with limited bandwidth. Because the full-reference type can use all the information of the original image, its evaluation results agree better with human subjective evaluation.
The peak signal-to-noise ratio (PSNR) and the mean square error (MSE), discussed in "Image Quality Assessment Based on Gradient Similarity" published by Liu A et al. in IEEE Transactions on Image Processing in 2012, are the most classic full-reference objective image quality evaluation methods. PSNR reflects the fidelity of the image to be evaluated, while MSE reflects the difference between the image to be evaluated and the original image. The theory behind these two methods is simple and clear, easy to understand and convenient to compute, but they only compare individual pixels and do not consider the structural relationships that may exist between pixels, so their results deviate from what the human eye actually sees.
In "Image quality assessment from error measurement to structural similarity" published in IEEE Transactions on Image Processing in 2004, Z. Wang et al. proposed the SSIM algorithm, which compares the original undistorted image and the image to be evaluated in terms of three kinds of information: luminance, contrast and structural similarity. It takes the structural relationship between pixels into account, but it handles details poorly under severe blur and its exponent parameters are difficult to determine.
In "Gradient magnitude similarity deviation: A highly efficient perceptual image quality index" published in IEEE Transactions on Image Processing in 2013, W. Xue et al. proposed GMSD, a similarity-deviation algorithm based on gradient magnitude, exploiting the fact that the gradient is highly sensitive to image distortion; however, color images must first be converted to the grayscale domain. For color images, all of the above methods require conversion to grayscale, so their evaluation results deviate from what the human eye actually sees.
Upon searching, Chinese patent application No. 200610027433.1, filed on June 8, 2006 and entitled "An image quality assessment method based on hypercomplex singular value decomposition", was found. That application models a color image directly with hypercomplex numbers (quaternions), extracts the inherent energy features of the color image by hypercomplex singular value decomposition, constructs a distortion mapping matrix from the distance between the singular values of the original image and of the distorted image, and uses this distortion mapping matrix to assess color image quality. Chinese patent application No. 201210438606.4, filed on November 6, 2012 and entitled "Color image quality evaluation algorithm", takes the chrominance, luminance and saturation of the image as the imaginary parts of a quaternion, constructs quaternion matrices of the reference image and of each image to be evaluated, performs singular value decomposition on them to obtain singular-value feature vectors, and finally applies the grey relational degree to compute the correlation between the singular-value feature vector of the reference image and that of each image to be evaluated; the larger the correlation, the better the quality of the image to be evaluated. However, the evaluation results obtained by the above applications still deviate considerably from what the human eye actually sees, and color image quality evaluation methods still need further optimization.
Summary of the Invention
1. Technical problem to be solved by the invention
To overcome the problem that traditional evaluation methods must convert a color image into a grayscale image when constructing the evaluation model, which causes the evaluation results to deviate considerably from what the human eye actually sees, the present invention provides a color image quality evaluation method based on the HVS and quaternions. The invention combines human visual characteristics with quaternions, extracts the luminance and chrominance information of the image, uses human visual characteristics to construct a spatial position function, a texture-edge complexity function and a local variance, and, to improve on the traditional approach of splitting the R, G and B channels, uses quaternion singular value decomposition to extract the energy features of the image, so that the evaluation results agree better with how the human eye perceives the image.
2. Technical solution
To achieve the above object, the technical solution provided by the present invention is as follows:
A color image quality evaluation method based on the HVS and quaternions according to the present invention comprises the following steps:
Step 1: By analyzing the visual characteristics of the human eye, construct mathematical evaluation models of the original reference image and of the distorted image to be evaluated; the mathematical evaluation models include the spatial position function QL, the local variance QV, the texture-edge complexity function QTE and the color function QC of the image.
Step 2: Take QL, QV and QTE as the imaginary parts of a quaternion and QC as its real part, construct the quaternion matrices of the original reference image and of the distorted image to be evaluated, and perform singular value decomposition on each quaternion matrix to obtain the singular-value feature vector of the image.
Step 3: Measure the degree of image distortion by the Euclidean distance between the singular-value feature vectors of the original reference image and of the distorted image to be evaluated.
Furthermore, the specific process of constructing the mathematical evaluation models in Step 1 is as follows:
(1) Obtain the RGB tristimulus values of the original reference image and of the distorted image to be evaluated;
(2) Extract the spatial position information of the original reference image and of the distorted image to be evaluated, and construct the spatial position function QL and the texture-edge complexity function QTE;
(3) Convert the original reference image and the distorted image to be evaluated from the RGB space to the YUV color space, extract the image luminance information to construct the local variance QV, and extract the image luminance and chrominance information to construct the color function QC.
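A minimal sketch of the color-space conversion in sub-step (3) is given below. The patent does not state which RGB-to-YUV matrix is used, so the common BT.601 coefficients are assumed; the function name is illustrative only.

```python
import numpy as np

def rgb_to_yuv(rgb):
    """Convert an H x W x 3 RGB image (values in [0, 1]) to YUV.

    Assumes BT.601 luma/chroma weights; the patent only states that the
    image is converted from RGB to YUV before building QV and QC.
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b          # luminance
    u = -0.14713 * r - 0.28886 * g + 0.436 * b     # blue-difference chrominance
    v = 0.615 * r - 0.51499 * g - 0.10001 * b      # red-difference chrominance
    return np.stack([y, u, v], axis=-1)

if __name__ == "__main__":
    img = np.random.rand(64, 64, 3)                # stand-in for a real image
    yuv = rgb_to_yuv(img)
    print(yuv.shape, float(yuv[..., 0].min()), float(yuv[..., 0].max()))
```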
Furthermore, Step 1 constructs the spatial position function QL using the foveation characteristic of the human visual system, where eL is the quotient of the distance from the pixel (i, j) observed by the human eye to the image center pixel (M/2, N/2) and the distance from the first pixel (0, 0) of the image to the center pixel, and ec is a constant.
Furthermore, Step 1 constructs the texture-edge complexity function QTE using the masking effect of the human visual system, where the texture-edge complexity function is

QTE = QT × QE

in which QT is the texture complexity function of the pixel (i, j) and QE is the edge complexity function of the pixel (i, j).
Furthermore, Step 1 constructs the local variance QV using the multi-channel characteristic of the human visual system, where Ii,j is obtained by partitioning the luminance component of the image into non-overlapping blocks and L is the number of pixels ηp contained in the image block Ii,j.
Furthermore, the color function is

QC = αQL + βQU

where QL is the luminance information of the image, QU is the chrominance information of the image, and α and β are the weights of luminance and chrominance, respectively.
Furthermore, the Euclidean distance in Step 3 is

D = √( Σ_{i=1..K} (λi - λ̂i)² )

where λi is the singular-value feature vector of the original reference image, λ̂i is the singular-value feature vector of the distorted image to be evaluated, and K is the minimum of the numbers of eigenvalues of the two singular-value feature vectors, i.e., the minimum of the ranks of the two quaternion matrices.
3. Beneficial effects
Compared with the existing known technology, the technical solution provided by the present invention has the following remarkable effects:
(1) The color image quality evaluation method based on the HVS and quaternions of the present invention constructs, by analyzing human visual characteristics, the spatial position function QL, the local variance QV, the texture-edge complexity function QTE and the color function QC of the original reference image and of the distorted image to be evaluated, integrates these four kinds of image information through a quaternion, and obtains the energy features of the image by singular value decomposition. This improves on the traditional approach of splitting the R, G and B channels and preserves the integrity of the color information well; the extracted image information contains both global and local information, so the evaluation results characterize the full information of the image more completely.
(2) The color image quality evaluation method based on the HVS and quaternions of the present invention combines human visual characteristics with quaternions; its evaluation results agree better with how the human eye perceives the image and are superior to the traditional SSIM and several other typical image quality evaluation algorithms.
Brief Description of the Drawings
Fig. 1 is the algorithm flowchart of the color image quality evaluation method based on the HVS and quaternions of the present invention;
Fig. 2 is an equivalent diagram of the foveation characteristic of the human visual system used in the present invention;
Fig. 3 shows the fitting results of the quality evaluation method of the present invention and of traditional methods against subjective image quality, where Fig. 3(a) is the nonlinear fitting curve of PSNR against the DMOS values, Fig. 3(b) of SSIM against the DMOS values, Fig. 3(c) of MS-SSIM against the DMOS values, Fig. 3(d) of SVD against the DMOS values, Fig. 3(e) of GMSD against the DMOS values, and Fig. 3(f) of the quality evaluation method of the present invention against the DMOS values;
Figs. 4(a) to 4(e) compare the nonlinear fitting curves of HVS-QSVD, GMSD and SSIM against the DMOS values for five groups of images with different distortion types.
Detailed Description of the Embodiments
To further explain the content of the present invention, the present invention is described in detail below with reference to the accompanying drawings and embodiments.
Embodiment 1
With reference to Fig. 1, this embodiment addresses the problem that traditional evaluation methods must convert a color image into a grayscale image when constructing the evaluation model, causing the evaluation results to disagree with human perception, and provides a color image quality evaluation method based on the HVS and quaternions. By combining human visual characteristics with quaternions, this embodiment obtains image energy features that conform to human perception. Experiments show that this embodiment evaluates color images better than other methods and that its evaluation results are more consistent with how the human eye perceives the images. The image quality evaluation method of this embodiment is described in detail below together with the experimental results.
Step 1: Construct the mathematical evaluation models of the original reference image and of the distorted image to be evaluated by analyzing the visual characteristics of the human eye:
Human visual characteristics, derived from an understanding of the physiological structure of the human visual system, are closely related to how the human eye observes the external environment and images. Because humans are the ultimate recipients of images, the evaluation result must match what the human eye actually sees; this embodiment therefore constructs the corresponding mathematical models by analyzing human visual characteristics.
The human eye resembles a convex lens with a variable focal length, but it is influenced by the complex structure of the human brain and therefore differs from an ordinary convex lens. In general, human visual characteristics include the foveation characteristic, visual multi-channel characteristics, visual nonlinearity, contrast sensitivity and the masking effect. This embodiment constructs the mathematical evaluation models from the foveation characteristic, the visual multi-channel characteristics and the masking effect.
(1) Spatial position function
The foveation characteristic of the human visual system means that when an image appears, the information at the very center of the image is observed by the human eye first, especially changes in the position of texture edges near the image center, because the human eye perceives edge information relatively easily.
The human eye first looks at the central part of the image and then spreads its attention outwards, and points at the same distance from the center should be treated equally. As shown in Fig. 2, assuming the circle center O is the center of the image and the points on a circle are equidistant from it, the four points A, B, C and D have the same probability of being observed by the human eye, as do E and F.
Following the literature (CHEN T, WU H R. Space variant median filters for the restoration of impulse noise corrupted images [J]. IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing, 2001, 48(8): 784-789.), this embodiment uses, according to the foveation characteristic of the human visual system, an explicit expression to describe how spatial resolution affects the way the human eye observes an image. The specific process is: first obtain the RGB tristimulus values of the original reference image and of the distorted image to be evaluated, then extract the spatial position information of both images and construct their spatial position functions QL, where eL is the distance from the pixel (i, j) observed by the human eye to the image center pixel (M/2, N/2) divided by the distance from the first pixel (0, 0) of the image to the image center pixel, and ec is a constant determined from test results; after testing, ec is set to 0.6 in this embodiment.
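The exact closed form of QL is not reproduced in this text; the sketch below only computes the normalized eccentricity eL described above and, purely for illustration, applies an assumed exponential falloff controlled by ec = 0.6. The falloff shape and the function name are assumptions, not the patent's formula.

```python
import numpy as np

def spatial_position_weight(M, N, ec=0.6):
    """Per-pixel foveation weight for an M x N image.

    eL is the distance from pixel (i, j) to the image center (M/2, N/2),
    normalized by the distance from pixel (0, 0) to the center, as described
    in the patent text. The exponential decay exp(-eL / ec) is only an
    assumed stand-in for the (unreproduced) closed form of QL.
    """
    i, j = np.meshgrid(np.arange(M), np.arange(N), indexing="ij")
    ci, cj = M / 2.0, N / 2.0
    dist = np.hypot(i - ci, j - cj)
    dist_max = np.hypot(ci, cj)        # distance from pixel (0, 0) to the center
    e_l = dist / dist_max
    return np.exp(-e_l / ec)           # assumed falloff, largest near the center

if __name__ == "__main__":
    QL = spatial_position_weight(256, 256)
    print(float(QL[128, 128]), float(QL[0, 0]))   # center weight vs. corner weight
```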
(2) Texture-edge complexity function
The masking effect of the human visual system means that a phenomenon that could otherwise be noticed is ignored because of the presence of other phenomena. In different regions, the masking effect of the human visual system can be reflected by region-specific weights, which agrees better with how the human eye observes images.
This embodiment extracts the texture feature information and the edge feature information of the image to obtain the texture-edge complexity function QTE. A larger QTE means a simpler texture, to which the human eye pays more attention and which therefore has a greater influence on perceived image quality; conversely, a smaller QTE means a more complex texture, which the human eye ignores more easily. The specific calculation is as follows:
First the gradient direction is computed:

θ(i,j) = arctan( gv(i,j) / gh(i,j) )  (2)

where θ(i,j) denotes the gradient direction of the pixel (i, j), and gh(i,j) and gv(i,j) denote its horizontal and vertical gradient values. The Sobel edge detection operator is used to compute the corresponding edge feature information of the image, and the edge information is normalized and denoted E(i,j). Over the range [0, 360°), the gradient direction is divided into the following regions:

θ(i,j)∈{0°,180°,45°,225°,90°,270°,135°,315°} (3)

where 0° and 180°, 45° and 225°, 90° and 270°, and 135° and 315° are respectively symmetric about the origin, i.e., there are 4 distinct direction regions.
The texture complexity is then computed. Let a1 be the number of direction classes, i.e., the number of distinct values of θ(i,j), and let a2 be the number of edge points, i.e., the number of pixels with E(i,j) = 1; both values are computed. When a2 is smaller than a preset threshold, a2 = 0, otherwise a2 = 1; the threshold is set to 40 by experimental testing. The texture complexity function QT of a pixel (i, j) in the image is then defined from a1 and a2 as in Eq. (4).
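The sketch below illustrates the quantities that feed Eq. (4): Sobel gradients, the four-way direction quantization of Eq. (3), a normalized binary edge map E(i, j), and per-block counts a1 and a2 with the threshold of 40. The binarization level of the edge map and the block size are assumptions, and the final combination of a1 and a2 into QT (Eq. (4)) is not reproduced here.

```python
import numpy as np
from scipy import ndimage

def direction_and_edges(y, edge_level=0.25):
    """Quantized gradient direction (4 classes) and binary edge map E(i, j)."""
    gh = ndimage.sobel(y, axis=1)                    # horizontal gradient
    gv = ndimage.sobel(y, axis=0)                    # vertical gradient
    theta = np.rad2deg(np.arctan2(gv, gh)) % 180.0   # opposite directions merged
    direction = np.digitize(theta, [22.5, 67.5, 112.5, 157.5]) % 4
    mag = np.hypot(gh, gv)
    edges = (mag / (mag.max() + 1e-12)) > edge_level # assumed binarization level
    return direction, edges.astype(np.uint8)

def block_counts(direction, edges, block=16, a2_threshold=40):
    """Per-block a1 (number of direction classes) and thresholded a2."""
    H, W = direction.shape
    a1 = np.zeros((H // block, W // block), dtype=int)
    a2 = np.zeros_like(a1)
    for bi in range(H // block):
        for bj in range(W // block):
            sl = (slice(bi * block, (bi + 1) * block),
                  slice(bj * block, (bj + 1) * block))
            a1[bi, bj] = len(np.unique(direction[sl]))
            a2[bi, bj] = 1 if edges[sl].sum() >= a2_threshold else 0
    return a1, a2

if __name__ == "__main__":
    y = np.random.rand(64, 64)                       # stand-in luminance image
    d, e = direction_and_edges(y)
    print(block_counts(d, e, block=16))
```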
Edge complexity: First define three vectors P = (-1, 0, 2, 0, 1), L = (1, 4, 6, 4, 1) and E = (-1, -2, 0, 2, 1), where P is the "point" feature descriptor, L the "line" feature descriptor and E the "edge" feature descriptor. From these three operators, six masks can be obtained: LT×E, LT×P, ET×L, ET×P, PT×L and PT×E. Let the responses of these six masks at a pixel (i, j) of the image be fi,j(LT×E), fi,j(LT×P), fi,j(ET×L), fi,j(ET×P), fi,j(PT×L) and fi,j(PT×E); the edge complexity QE of the pixel (i, j) is then given by Eq. (5).
The texture-edge complexity function of the pixel (i, j) is

QTE=QT×QE (6)

The larger the result, the weaker the masking effect; the weaker the masking effect, the simpler the texture, the more clearly the human eye sees it, and thus the stronger its visual impact on the human eye.
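A minimal sketch of the six mask responses used for QE is given below. The outer products and the convolution follow the description above; how the six responses are finally combined into QE (Eq. (5)) is not reproduced in this text, so the sum of absolute responses at the end is only an assumed placeholder.

```python
import numpy as np
from scipy.signal import convolve2d

# 1-D descriptors from the patent text
P = np.array([-1, 0, 2, 0, 1], dtype=float)   # "point"
L = np.array([1, 4, 6, 4, 1], dtype=float)    # "line"
E = np.array([-1, -2, 0, 2, 1], dtype=float)  # "edge"

# The six 5x5 masks: L^T x E, L^T x P, E^T x L, E^T x P, P^T x L, P^T x E
MASKS = [np.outer(a, b) for a, b in
         [(L, E), (L, P), (E, L), (E, P), (P, L), (P, E)]]

def mask_responses(y):
    """Responses f_{i,j}(.) of the six masks at every pixel of image y."""
    return [convolve2d(y, m, mode="same", boundary="symm") for m in MASKS]

def edge_complexity(y):
    """Assumed combination of the six responses (stand-in for Eq. (5))."""
    return sum(np.abs(r) for r in mask_responses(y))

if __name__ == "__main__":
    y = np.random.rand(64, 64)
    QE = edge_complexity(y)
    print(QE.shape, float(QE.mean()))
```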
(3) Local variance

The multi-channel characteristic of the human visual system means that the human eye observes an image through different channels: at low resolution only the rough outline can be distinguished, while details can only be distinguished at high resolution. The detail information of an image can be represented by its local variance, so the local variance is used as a means of describing and analyzing the content of an image, and some important structural information of the image can also be summarized by the distribution of its local variance.
Let QV denote the variance of the local region of image I centered on the pixel (i, j), i.e., the local variance. This embodiment first converts the original reference image and the distorted image to be evaluated from the RGB space to the YUV color space and uses the Y (luminance) component to compute the local variance. A sliding window partitions the Y component of the image into non-overlapping blocks, and the variance of each block is the local variance of the image. For each image block Ii,j, containing L pixels each denoted ηp, the local variance can be expressed as

QV(i,j) = (1/L) Σ_{p=1..L} (ηp - μi,j)²  (7)

where μi,j is the mean of the block Ii,j.
Because the size and manner of the sub-blocks affect the structure of the image, for the pixels ηp contained in the block Ii,j the Gaussian weighting method of the literature (Z Wang, Z Bovik, et al. Image quality assessment from error measurement to structural similarity [J]. IEEE Transactions on Image Processing. 2004, 13(4): 600-612.) is used to compute the mean and variance, as follows:

Block mean: μi,j = Σ_{p=1..Xp} wp ηp  (8)

Block local variance: σ²i,j = Σ_{p=1..Xp} wp (ηp - μi,j)²  (9)

where wp is the normalized Gaussian weight and Xp is the number of pixels ηp.
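A minimal sketch of the Gaussian-weighted block statistics is given below. It assumes 8x8 non-overlapping blocks and a normalized Gaussian window; the block size and the Gaussian standard deviation are assumptions, since the patent text does not fix them here.

```python
import numpy as np

def gaussian_window(block=8, sigma=1.5):
    """Normalized 2-D Gaussian weights w_p over one block."""
    ax = np.arange(block) - (block - 1) / 2.0
    g = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2 * sigma ** 2))
    return g / g.sum()

def local_variance(y, block=8, sigma=1.5):
    """Gaussian-weighted local variance QV over non-overlapping blocks of Y."""
    w = gaussian_window(block, sigma)
    H, W = y.shape
    qv = np.zeros((H // block, W // block))
    for bi in range(H // block):
        for bj in range(W // block):
            patch = y[bi * block:(bi + 1) * block, bj * block:(bj + 1) * block]
            mu = np.sum(w * patch)                      # Eq. (8): weighted block mean
            qv[bi, bj] = np.sum(w * (patch - mu) ** 2)  # Eq. (9): weighted block variance
    return qv

if __name__ == "__main__":
    y = np.random.rand(64, 64)
    print(local_variance(y).shape)
```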
(4) Color information

Hue, saturation and brightness are the three attributes of color, also called the three elements of color. They are inherent characteristics of color and differ from one another. Hue and saturation can be expressed by chrominance. The only characteristic of a grayscale image is brightness, while a color image also has chrominance characteristics.

Brightness is a physical quantity, the human perception of light intensity, reflecting how strongly the surface of a luminous (or reflective) body emits (or reflects) light. Hue refers to the overall tendency of the colors in a picture, the dominant color effect. Saturation, also called the purity of a color, refers to the vividness of the color and indicates the proportion of chromatic components it contains. Color saturation increases with the chromatic proportion and is directly related to the lighting conditions and to the surface structure of the photographed object. Since hue and saturation can be jointly expressed by chrominance, this embodiment uses brightness and chrominance to represent the essential attributes of color.
The human visual system is more sensitive to luminance than to chrominance. This embodiment uses a weighting method to represent the color information of the image, i.e., for different color images, luminance and chrominance carry different weights, with the following relation:

QC=αQL+βQU (10)

where QL is the luminance information of the image, QU is the chrominance information of the image, and α and β are the weights of luminance and chrominance, respectively; experimental testing gives the best results with α = 1.063 and β = 0.937.
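Eq. (10) is a per-pixel weighted combination. The sketch below assumes that QL is taken from the Y channel and QU from the U channel of the YUV conversion shown earlier; this channel assignment is an assumption of the sketch.

```python
import numpy as np

def color_function(y, u, alpha=1.063, beta=0.937):
    """QC = alpha * QL + beta * QU (Eq. (10)), with QL ~ Y and QU ~ U assumed."""
    return alpha * y + beta * u

if __name__ == "__main__":
    y = np.random.rand(64, 64)     # luminance channel
    u = np.random.rand(64, 64)     # chrominance channel
    print(color_function(y, u).shape)
```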
Step 2: Construct the quaternion matrices of the original reference image and of the distorted image to be evaluated, and perform singular value decomposition on the quaternion matrices to obtain the singular-value feature vectors of the images:
(a) Quaternions
In 1843 the British mathematician W. R. Hamilton created quaternions. A quaternion q contains four parts, one real part plus three imaginary parts, and its basic form is

q = qr + qi·i + qj·j + qk·k

where qr, qi, qj, qk are four real numbers and the basis elements satisfy

i² = j² = k² = ijk = -1.
A quaternion matrix Q(q) can be decomposed over the real field as

Q(q) = Qr + Qi·i + Qj·j + Qk·k

where Qr, Qi, Qj and Qk are real matrices. The singular value decomposition theorem for quaternion matrices can be stated as follows: for any quaternion matrix Q(q) with rank(Q(q)) = r, there exist quaternion unitary matrices U(q) and V(q) such that

Q(q) = U(q) Λ V(q)^H

where Λ = diag(λ1, λ2, …, λr, 0, …, 0), λi ∈ R, |λ1| ≥ |λ2| ≥ … ≥ |λr| > 0, and the λi are the non-zero singular values.
(b) Quaternion representation

This embodiment integrates the four kinds of feature information of the color image obtained from the above analysis into a single quaternion, as follows:
Q=QC+QLi+QTEj+QVk (11)

where QC is the color information of the image, QL is the spatial position information of the image, QTE is the texture-edge information of the image, and QV is the local variance of the image.
In this way, an M×N color image can be regarded as a quaternion matrix. The singular-value feature vector of a quaternion matrix characterizes the energy features of the quaternion, so the quaternion matrix obtained from a color image can also be used to represent the energy features of the corresponding color image. Because a quaternion matrix q = qr + qi·i + qj·j + qk·k can be represented by its real matrix, this embodiment converts Q into its corresponding real matrix form and performs singular value decomposition (SVD) on it. Every quaternion matrix yields, through singular value decomposition, a singular-value feature vector whose elements are all real numbers greater than 0. It should be noted that, since the theoretical study of singular value decomposition of quaternion matrices is already mature, this embodiment does not repeat it here for the sake of brevity.
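One standard way to compute quaternion singular values, sketched below under the assumption that this is how the real/complex matrix representation is formed, is the complex adjoint matrix: for Q = Q0 + Q1·i + Q2·j + Q3·k, set A = Q0 + Q1·i and B = Q2 + Q3·i and build χ(Q) = [[A, B], [-conj(B), conj(A)]]; each singular value of Q then appears twice among the singular values of χ(Q). The choice of this particular representation is an assumption of the sketch, since the patent only states that Q is converted to a real matrix before the SVD.

```python
import numpy as np

def quaternion_singular_values(q0, q1, q2, q3):
    """Singular values of the quaternion matrix Q = q0 + q1*i + q2*j + q3*k.

    Uses the complex adjoint chi(Q) = [[A, B], [-conj(B), conj(A)]] with
    A = q0 + q1*i and B = q2 + q3*i; every singular value of Q appears twice
    among the singular values of chi(Q), so every second one is kept.
    """
    A = q0 + 1j * q1
    B = q2 + 1j * q3
    chi = np.block([[A, B], [-np.conj(B), np.conj(A)]])
    s = np.linalg.svd(chi, compute_uv=False)   # sorted in descending order
    return s[::2]                              # one copy of each duplicated value

if __name__ == "__main__":
    M, N = 16, 16
    q0, q1, q2, q3 = (np.random.rand(M, N) for _ in range(4))
    sv = quaternion_singular_values(q0, q1, q2, q3)
    print(sv.shape, sv[:3])
```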
Step 3: Measure the degree of image distortion by the Euclidean distance between the singular-value feature vectors of the original reference image and of the distorted image to be evaluated:

This embodiment measures the corresponding image distortion by the Euclidean distance between the singular-value feature vectors of the original reference image and of the distorted image to be evaluated, i.e.

D = √( Σ_{i=1..K} (λi - λ̂i)² )  (12)

where λi and λ̂i are the computed singular-value feature vectors of the original reference image and of the distorted image to be evaluated, and K is the minimum of the numbers of eigenvalues of the two singular-value feature vectors, i.e., the minimum of the ranks of the two quaternion matrices: K = min{rank(Q), rank(Q̂)}, where Q and Q̂ are the quaternion matrices of the reference image and of the distorted image.
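A minimal sketch of the distance in Eq. (12), truncating both singular-value vectors to their common length K:

```python
import numpy as np

def hvs_qsvd_distance(sv_ref, sv_dist):
    """Euclidean distance between the two singular-value feature vectors (Eq. (12))."""
    K = min(len(sv_ref), len(sv_dist))            # minimum of the two ranks
    diff = np.asarray(sv_ref[:K]) - np.asarray(sv_dist[:K])
    return float(np.sqrt(np.sum(diff ** 2)))

if __name__ == "__main__":
    print(hvs_qsvd_distance([5.0, 3.0, 1.0], [4.5, 2.5, 0.9, 0.1]))
```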
The color image quality evaluation method of this embodiment, based on the HVS and quaternions, combines human visual characteristics with quaternions, so that the evaluation results agree better with how the human eye perceives the image. It improves on the traditional approach of splitting the R, G and B channels, preserves the integrity of the color information well, and the extracted image information contains both global and local information, so the evaluation results characterize the full information of the image more completely. Its evaluation results are superior to the traditional SSIM and several other typical image quality evaluation algorithms. The experimental results of this embodiment are analyzed below from two aspects:
1) Nonlinear fitting of the quality evaluation method of the present invention, PSNR, SSIM, MS-SSIM, Y-SVD and GMSD against the DMOS values; 2) performance comparison of the quality evaluation method of the present invention with PSNR, SSIM, MS-SSIM, Y-SVD and GMSD.
The images used for quality evaluation come from the Database Release 2 image quality evaluation database provided by the Laboratory for Image and Video Engineering (LIVE) of the University of Texas at Austin, which contains 982 images in total with five distortion types: JPEG2000, JPEG, Gaussian white noise, Gaussian blur, and Fast Fading Rayleigh channel distortion. When comparing algorithms, differences arise in the dimensions and units of the individual algorithms, so the objective image quality scores produced by the algorithm under evaluation are subjected to nonlinear regression. A Logistic function is used as the nonlinear mapping function to perform nonlinear regression on the original objective quality scores x produced by the algorithm to be evaluated proposed by the present invention, with α1, α2, α3, α4 as the adaptively adjusted parameters of the nonlinear regression. The indicators used for the quantitative evaluation of the results are the widely recognized and frequently cited MAE/RMSE/CC/SROCC/OR.
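A sketch of the nonlinear regression step is given below. The exact form of the Logistic mapping is not reproduced in this text, so the common four-parameter logistic used in IQA studies is assumed here and fitted with scipy's curve_fit; the initial guesses are also assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic4(x, a1, a2, a3, a4):
    """Assumed four-parameter logistic mapping from objective scores to DMOS."""
    return (a1 - a2) / (1.0 + np.exp(-(x - a3) / np.abs(a4))) + a2

def fit_logistic(objective_scores, dmos):
    """Fit a1..a4 by nonlinear least squares and return the mapped scores."""
    p0 = [np.max(dmos), np.min(dmos),
          np.mean(objective_scores), np.std(objective_scores) + 1e-6]
    params, _ = curve_fit(logistic4, objective_scores, dmos, p0=p0, maxfev=20000)
    return logistic4(objective_scores, *params), params

if __name__ == "__main__":
    x = np.linspace(0, 1, 200)                        # synthetic objective scores
    dmos = 80 / (1 + np.exp(-(x - 0.5) / 0.1)) + np.random.normal(0, 2, x.size)
    mapped, params = fit_logistic(x, dmos)
    print(params)
```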
1) The mean absolute error (MAE) between the subjective and objective scores after nonlinear regression reflects the average error level between the objective and subjective quality evaluation results; the smaller it is, the more accurate the image quality evaluation results. It is defined as

MAE = (1/N) Σ_{i=1..N} |xi - yi|
2) The root mean square error (RMSE) between the subjective and objective scores after nonlinear regression reflects the accuracy of the objective evaluation results; the smaller it is, the more accurate the image quality evaluation results. It is defined as

RMSE = √( (1/N) Σ_{i=1..N} (xi - yi)² )
3) The Pearson linear correlation coefficient (CC) between the subjective and objective scores after nonlinear regression reflects the consistency and accuracy of the objective evaluation results; its value lies in [-1, 1], and the closer its absolute value is to 1, the better the correlation between the subjective and objective evaluation methods. It is defined as

CC = Σ_{i=1..N} (xi - x̄)(yi - ȳ) / √( Σ_{i=1..N} (xi - x̄)² · Σ_{i=1..N} (yi - ȳ)² )
4) The Spearman rank-order correlation coefficient (SROCC) between the subjective and objective scores after nonlinear regression is a widely used non-parametric statistical analysis method and reflects the monotonicity between the objective and subjective quality evaluation results; its value lies in [-1, 1], and the closer its absolute value is to 1, the better the consistency of the subjective and objective evaluation methods. It is defined as

SROCC = 1 - 6 Σ_{i=1..N} (ui - vi)² / ( N(N² - 1) )
5) The outlier rate (OR) between the subjective and objective scores after nonlinear regression reflects the stability and predictability of the objective evaluation model; its value lies in [0, 1], and the smaller it is, the better the consistency of the subjective and objective evaluation and the better the predictability of the evaluation model. It is defined as

OR = Nout / N

where N is the total number of images in the database, i.e., 982, xi and yi denote the subjective and objective evaluation values of the i-th image after nonlinear regression, ui and vi denote the ranks of the subjective and objective evaluation values of the i-th image among all evaluation values of the whole image database, and Nout denotes the number of images whose objective evaluation value deviates from the subjective evaluation value by more than twice the standard deviation of the subjective scores.
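A minimal sketch of the five indicators, using scipy for the two correlation coefficients. The outlier test compares each prediction error against twice the standard deviation of the subjective scores; since per-image deviations are not listed in this text, the sketch falls back to the global standard deviation, which is an assumption.

```python
import numpy as np
from scipy import stats

def iqa_indicators(objective, subjective):
    """MAE, RMSE, CC, SROCC and OR between mapped objective scores and DMOS."""
    x = np.asarray(objective, dtype=float)
    y = np.asarray(subjective, dtype=float)
    mae = np.mean(np.abs(x - y))
    rmse = np.sqrt(np.mean((x - y) ** 2))
    cc = stats.pearsonr(x, y)[0]
    srocc = stats.spearmanr(x, y)[0]
    out = np.abs(x - y) > 2.0 * np.std(y)     # assumed outlier criterion (global std)
    return {"MAE": mae, "RMSE": rmse, "CC": cc, "SROCC": srocc, "OR": out.mean()}

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    y = rng.uniform(0, 100, 982)              # synthetic DMOS values
    x = y + rng.normal(0, 5, y.size)          # synthetic mapped objective scores
    print(iqa_indicators(x, y))
```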
Fig. 3 shows the scatter plots of the various algorithms against the subjective evaluation value DMOS. Each point in a plot represents one image; its abscissa is the objective quality score given to that image by the algorithm, its ordinate is the subjective DMOS value of that image, and the solid line is the fitted curve. The more tightly the scatter points cluster around the fitted curve, the better the consistency between the algorithm and the subjective evaluation results, and the better the algorithm. It can be seen that the 982 scatter points of the method provided by the present invention lie closest to the fitted curve, showing that the method proposed by the present invention performs better after nonlinear fitting than the other compared methods.
Table 1 Performance comparison of image quality evaluation methods on the LIVE image database
From the experimental data in Table 1 we find that the quality evaluation method of the present invention is the best under the six evaluation indicators, with the smallest mean absolute error and root mean square error, the highest correlation with subjective visual perception, and the lowest outlier rate. Because the PSNR algorithm does not consider the correlation between pixels and treats every pixel equally, it has the worst overall performance among the six compared algorithms. The SSIM algorithm uses the structural information of the image to evaluate image quality and is related to the perceptual pattern of human vision. The MS-SSIM algorithm builds on SSIM and uses multi-resolution analysis for multi-scale image quality evaluation, so its performance is better than PSNR and SSIM. The SVD algorithm, which performs singular value decomposition on the luminance component only, clearly outperforms PSNR, showing that algorithms based on singular value decomposition of the image have certain advantages. Visually, the curve fitted between the GMSD algorithm and the DMOS values is closest to a straight line, but some scatter points are dispersed and far from the curve. From the last row of Table 1 it can be seen that the quality evaluation method of the present invention is clearly superior to the traditional PSNR algorithm, the structural similarity algorithm SSIM, the multi-scale structural similarity algorithm MS-SSIM, the singular value decomposition algorithm SVD, and the gradient magnitude similarity deviation algorithm GMSD, showing that the image quality evaluation algorithm of the present invention, based on quaternions and human visual characteristics, better reflects the subjective visual perception of images by the human eye.
Since the 982 images consist of sub-databases of five different distortion types, in order to further demonstrate the superiority of the quality evaluation method of the present invention, this embodiment compares the performance of the HVS-QSVD algorithm with the GMSD and SSIM algorithms on each of the five image sub-databases. As shown in Fig. 4, every three curves form one group (the nonlinear fitting curves of the HVS-QSVD, GMSD and SSIM algorithms), giving five groups in total. The first group, Fig. 4(a), through the fifth group, Fig. 4(e), correspond to the five distortion types JPEG2000, JPEG, Gaussian white noise, Gaussian blur, and Fast Fading Rayleigh channel distortion, respectively. It can be seen that, for every distortion type, the HVS-QSVD algorithm provided by the present invention fits the subjective evaluation values better than the GMSD and SSIM algorithms.
In the color image quality evaluation method based on the HVS and quaternions described in Embodiment 1, a mathematical model is constructed from human visual characteristics so that the evaluation results better match human perception, and, to improve on the traditional approach of splitting the R, G and B channels, quaternion singular value decomposition is used to extract the feature information of the image. The experimental results show that the evaluation results agree better with how the human eye perceives the image.
The present invention and its embodiments have been described above schematically, and the description is not restrictive; what is shown in the drawings is only one of the embodiments of the present invention, and the actual invention is not limited thereto. Therefore, if a person of ordinary skill in the art, inspired by it and without departing from the inventive concept of the present invention, devises, without creative effort, structural modes and embodiments similar to this technical solution, they shall all fall within the protection scope of the present invention.
Claims (7)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410650245.9A CN104361593B (en) | 2014-11-14 | 2014-11-14 | A Color Image Quality Evaluation Method Based on HVS and Quaternion |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410650245.9A CN104361593B (en) | 2014-11-14 | 2014-11-14 | A Color Image Quality Evaluation Method Based on HVS and Quaternion |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104361593A true CN104361593A (en) | 2015-02-18 |
CN104361593B CN104361593B (en) | 2017-09-19 |
Family
ID=52528851
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410650245.9A Expired - Fee Related CN104361593B (en) | 2014-11-14 | 2014-11-14 | A Color Image Quality Evaluation Method Based on HVS and Quaternion |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104361593B (en) |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105528776A (en) * | 2015-08-07 | 2016-04-27 | 上海仙梦软件技术有限公司 | SDP quality evaluation method for image format JPEG |
CN105574854A (en) * | 2015-12-10 | 2016-05-11 | 小米科技有限责任公司 | Method and device for determining image oneness |
CN106600597A (en) * | 2016-12-22 | 2017-04-26 | 华中科技大学 | Non-reference color image quality evaluation method based on local binary pattern |
CN106683082A (en) * | 2016-12-19 | 2017-05-17 | 华中科技大学 | Method for evaluating quality of full reference color image based on quaternion |
CN107862678A (en) * | 2017-10-19 | 2018-03-30 | 宁波大学 | A kind of eye fundus image reference-free quality evaluation method |
WO2018140158A1 (en) * | 2017-01-30 | 2018-08-02 | Euclid Discoveries, Llc | Video characterization for smart enconding based on perceptual quality optimization |
CN109191431A (en) * | 2018-07-27 | 2019-01-11 | 天津大学 | High dynamic color image quality evaluation method based on characteristic similarity |
CN109345520A (en) * | 2018-09-20 | 2019-02-15 | 江苏商贸职业学院 | A kind of quality evaluating method of image definition |
CN109389591A (en) * | 2018-09-30 | 2019-02-26 | 西安电子科技大学 | Color image quality evaluation method based on colored description |
CN109643125A (en) * | 2016-06-28 | 2019-04-16 | 柯尼亚塔有限公司 | For training the 3D virtual world true to nature of automated driving system to create and simulation |
CN109903247A (en) * | 2019-02-22 | 2019-06-18 | 西安工程大学 | High-precision grayscale method for color images based on Gaussian color space correlation |
CN110793472A (en) * | 2019-11-11 | 2020-02-14 | 桂林理工大学 | Grinding surface roughness detection method based on quaternion singular value entropy index |
CN112771570A (en) * | 2018-08-29 | 2021-05-07 | 瑞典爱立信有限公司 | Video fidelity metric |
CN112950723A (en) * | 2021-03-05 | 2021-06-11 | 湖南大学 | Robot camera calibration method based on edge scale self-adaptive defocus fuzzy estimation |
CN116152249A (en) * | 2023-04-20 | 2023-05-23 | 济宁立德印务有限公司 | Intelligent digital printing quality detection method |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020043280A1 (en) * | 2018-08-29 | 2020-03-05 | Telefonaktiebolaget Lm Ericsson (Publ) | Image fidelity measure |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030076990A1 (en) * | 2001-08-08 | 2003-04-24 | Mitsubishi Electric Research Laboratories, Inc. | Rendering deformable 3D models recovered from videos |
CN1897634A (en) * | 2006-06-08 | 2007-01-17 | 复旦大学 | Image-quality estimation based on supercomplex singular-value decomposition |
-
2014
- 2014-11-14 CN CN201410650245.9A patent/CN104361593B/en not_active Expired - Fee Related
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030076990A1 (en) * | 2001-08-08 | 2003-04-24 | Mitsubishi Electric Research Laboratories, Inc. | Rendering deformable 3D models recovered from videos |
CN1897634A (en) * | 2006-06-08 | 2007-01-17 | 复旦大学 | Image-quality estimation based on supercomplex singular-value decomposition |
Non-Patent Citations (3)
Title |
---|
REN TONGQUN等: "3-D Free-form Shape Measuring System Using Unconstrained Range Sensor", 《CHINESE JOURNAL OF MECHANICAL ENGINEERING》 * |
- HE Yeming et al.: "Quaternion model for video quality evaluation based on HVS feature parameter extraction", Computer Applications and Software * |
- WANG Yuqing et al.: "Quaternion-based color image quality evaluation method", Journal of North University of China (Natural Science Edition) * |
Cited By (27)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105528776A (en) * | 2015-08-07 | 2016-04-27 | 上海仙梦软件技术有限公司 | SDP quality evaluation method for image format JPEG |
CN105574854A (en) * | 2015-12-10 | 2016-05-11 | 小米科技有限责任公司 | Method and device for determining image oneness |
CN105574854B (en) * | 2015-12-10 | 2019-02-12 | 小米科技有限责任公司 | Determine the monistic method and device of image |
CN109643125A (en) * | 2016-06-28 | 2019-04-16 | 柯尼亚塔有限公司 | For training the 3D virtual world true to nature of automated driving system to create and simulation |
CN109643125B (en) * | 2016-06-28 | 2022-11-15 | 柯尼亚塔有限公司 | Realistic 3D virtual world creation and simulation for training an autonomous driving system |
CN106683082A (en) * | 2016-12-19 | 2017-05-17 | 华中科技大学 | Method for evaluating quality of full reference color image based on quaternion |
CN106683082B (en) * | 2016-12-19 | 2019-08-13 | 华中科技大学 | It is a kind of complete with reference to color image quality evaluation method based on quaternary number |
CN106600597B (en) * | 2016-12-22 | 2019-04-12 | 华中科技大学 | It is a kind of based on local binary patterns without reference color image quality evaluation method |
CN106600597A (en) * | 2016-12-22 | 2017-04-26 | 华中科技大学 | Non-reference color image quality evaluation method based on local binary pattern |
US11228766B2 (en) | 2017-01-30 | 2022-01-18 | Euclid Discoveries, Llc | Dynamic scaling for consistent video quality in multi-frame size encoding |
US11350105B2 (en) | 2017-01-30 | 2022-05-31 | Euclid Discoveries, Llc | Selection of video quality metrics and models to optimize bitrate savings in video encoding applications |
WO2018140158A1 (en) * | 2017-01-30 | 2018-08-02 | Euclid Discoveries, Llc | Video characterization for smart enconding based on perceptual quality optimization |
US11159801B2 (en) | 2017-01-30 | 2021-10-26 | Euclid Discoveries, Llc | Video characterization for smart encoding based on perceptual quality optimization |
US10757419B2 (en) | 2017-01-30 | 2020-08-25 | Euclid Discoveries, Llc | Video characterization for smart encoding based on perceptual quality optimization |
CN107862678A (en) * | 2017-10-19 | 2018-03-30 | 宁波大学 | A kind of eye fundus image reference-free quality evaluation method |
CN107862678B (en) * | 2017-10-19 | 2020-03-17 | 宁波大学 | Fundus image non-reference quality evaluation method |
CN109191431A (en) * | 2018-07-27 | 2019-01-11 | 天津大学 | High dynamic color image quality evaluation method based on characteristic similarity |
CN112771570A (en) * | 2018-08-29 | 2021-05-07 | 瑞典爱立信有限公司 | Video fidelity metric |
US11394978B2 (en) | 2018-08-29 | 2022-07-19 | Telefonaktiebolaget Lm Ericsson (Publ) | Video fidelity measure |
CN109345520A (en) * | 2018-09-20 | 2019-02-15 | 江苏商贸职业学院 | A kind of quality evaluating method of image definition |
CN109389591B (en) * | 2018-09-30 | 2020-11-20 | 西安电子科技大学 | Color Image Quality Evaluation Method Based on Color Descriptor |
CN109389591A (en) * | 2018-09-30 | 2019-02-26 | 西安电子科技大学 | Color image quality evaluation method based on colored description |
CN109903247A (en) * | 2019-02-22 | 2019-06-18 | 西安工程大学 | High-precision grayscale method for color images based on Gaussian color space correlation |
CN110793472B (en) * | 2019-11-11 | 2021-07-27 | 桂林理工大学 | A grinding surface roughness detection method based on quaternion singular value entropy index |
CN110793472A (en) * | 2019-11-11 | 2020-02-14 | 桂林理工大学 | Grinding surface roughness detection method based on quaternion singular value entropy index |
CN112950723A (en) * | 2021-03-05 | 2021-06-11 | 湖南大学 | Robot camera calibration method based on edge scale self-adaptive defocus fuzzy estimation |
CN116152249A (en) * | 2023-04-20 | 2023-05-23 | 济宁立德印务有限公司 | Intelligent digital printing quality detection method |
Also Published As
Publication number | Publication date |
---|---|
CN104361593B (en) | 2017-09-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104361593B (en) | A Color Image Quality Evaluation Method Based on HVS and Quaternion | |
Panetta et al. | Human-visual-system-inspired underwater image quality measures | |
Panetta et al. | No reference color image contrast and quality measures | |
CN104376565B (en) | Based on discrete cosine transform and the non-reference picture quality appraisement method of rarefaction representation | |
He et al. | Image quality assessment based on S-CIELAB model | |
Gao et al. | No reference color image quality measures | |
CN109191428A (en) | Full-reference image quality evaluating method based on masking textural characteristics | |
CN106934770B (en) | A kind of method and apparatus for evaluating haze image defog effect | |
CN101562675A (en) | No-reference image quality evaluation method based on Contourlet transform | |
Geng et al. | A stereoscopic image quality assessment model based on independent component analysis and binocular fusion property | |
CN103761724A (en) | Visible light and infrared video fusion method based on surreal luminance contrast pass algorithm | |
CN106683082B (en) | It is a kind of complete with reference to color image quality evaluation method based on quaternary number | |
CN109741285B (en) | Method and system for constructing underwater image data set | |
Li et al. | Local and global sparse representation for no-reference quality assessment of stereoscopic images | |
DE102008044764A1 (en) | Method and portable system for quality evaluation of meat | |
CN114998596A (en) | High dynamic range stereo omnidirectional image quality evaluation method based on visual perception | |
CN108460756A (en) | Based on statistical nature without reference ir image quality evaluating method | |
Yuan et al. | Color image quality assessment with multi deep convolutional networks | |
CN116363094A (en) | A Method for Quality Evaluation of Super-resolution Reconstructed Images | |
CN104010189A (en) | An Objective Evaluation Method of Video Quality Based on Chroma Co-occurrence Matrix Weighting | |
CN108022241A (en) | A kind of coherence enhancing quality evaluating method towards underwater picture collection | |
CN102800060B (en) | The quick self-adapted optimization method of digital picture under low-light (level) | |
Jaafar et al. | Improving measurement bias of structural similarity index (ssim) using absolute difference equation | |
Bao et al. | A no reference image quality measure using a distance doubling variance | |
Zhang et al. | SPCA: a no-reference image quality assessment based on the statistic property of the PCA on nature images |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20170919 |