CN105844640A - Color image quality evaluation method based on gradient - Google Patents

Info

Publication number
CN105844640A
Authority
CN
China
Prior art keywords
image
color
original image
channel
brightness
Prior art date
Legal status
Pending
Application number
CN201610171818.9A
Other languages
Chinese (zh)
Inventor
路文
吝冰杰
许天骄
孙互兴
何立火
邓成
王颖
王斌
Current Assignee
Xidian University
Original Assignee
Xidian University
Priority date
Filing date
Publication date
Application filed by Xidian University
Priority to CN201610171818.9A
Publication of CN105844640A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30168 Image quality inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a gradient-based color image quality evaluation method that mainly addresses the poor performance of current image quality evaluation methods on color distortions in color images. The method comprises: 1) applying a color perception transform to the original image and the distorted image with the S-CIELAB color appearance model, decomposing each image into one luminance channel and two chrominance channels; 2) computing the gradient of every channel by linear convolution filtering to obtain the luminance edges and chrominance edges of the original and distorted images; 3) computing the luminance edge difference and the chrominance edge differences between the original image and the distorted image; 4) linearly fusing the luminance edge difference and the chrominance edge differences to obtain the final quality score CGBM. The invention evaluates the quality of color images more effectively and accurately, and can be used for color image processing during compression, storage and transmission.

Description

Gradient-Based Color Image Quality Evaluation Method

Technical Field

The invention belongs to the technical field of image processing, and in particular relates to a color image quality evaluation method, which can be used for processing color images during compression, storage and transmission.

Background Art

With the rapid development of color imaging technology, color digital images have been widely used in data visualization. Compared with grayscale images, color images carry a richer level of information and can describe the objective world truthfully and vividly. However, when color images are processed digitally, for example during acquisition, transformation, compression and storage, channel transmission, or terminal display, some distortion is inevitably introduced, so that the color information of the image is corrupted or lost and the quality of the color image degrades to varying degrees. Degradation causes colors of different objects to blend into one another and object boundaries to blur into color blocks, making it difficult to extract useful information from the image; this hampers people's understanding of the objective world and obstructs subsequent color image processing and analysis systems. It is therefore necessary to design a reasonable color image quality evaluation algorithm.

Most existing image quality evaluation algorithms are designed for grayscale images, i.e. the color image is first transformed from RGB space into the grayscale domain and then evaluated. Since the pixels of a grayscale image are scalars while the pixels of a color image are vectors, applying such algorithms to color-distorted images ignores the color component information, and the resulting scores agree poorly with subjective perception. Studies have shown that, in the early stage of viewing an image, about 80% of the visual information received by the human eye is the color information of the image, and even after several minutes of observation this proportion remains around 50%. Color information therefore plays an important role in how humans perceive images, and the influence of color on image distortion must be taken into account in image quality evaluation. The diversity of color variations and the complexity of human color perception make quantitative evaluation of color image quality harder still, so it is necessary to design a quality evaluation method suited to color images and tailored to the characteristics of color distortion.

Wang et al. analyzed the local variance of the RGB channels of an image: they first used its probability distribution to construct quaternion coefficients, then represented the structural information of the image by a quaternion matrix, and finally decomposed the matrix and took the similarity between singular values as the evaluation measure. Such methods take the relationship between the color components into account, but perform poorly on color distortions caused by brightness shift and contrast change.

Starting from the observation that human perception of image quality is mainly related to the brightness, information content, contrast and noise of an image, Xie et al. extracted relevant features from the three channels and fused them into a final measure. The features used by this method are still mainly built from the brightness information of the image rather than from color itself, and the method covers a limited range of distortions, being suitable only for evaluating noisy and blurred images. It can be seen that most current color image quality evaluation methods still fall well short of practical requirements in terms of performance.

Summary of the Invention

The purpose of the present invention is to address the poor performance of current image quality evaluation methods on color distortions in color images, and to propose a gradient-based color image quality evaluation method that evaluates the quality of color images more effectively and accurately and meets the needs of more color image applications.

The technical scheme for achieving the purpose of the present invention is to combine the luminance edges and chrominance edges of an image to characterize the degree of image distortion: since the edge information of a color image is sensitive to distortion, the luminance edge and chrominance edges of each channel are extracted based on gradient characteristics, and the quality of the color image is then evaluated by computing the edge differences between the reference image and the distorted image and fusing these differences.

The implementation steps are as follows:

(1) Apply a color perception transform to the original image R and the distorted image D using the S-CIELAB color appearance model proposed by the International Commission on Illumination (CIE), decomposing each image into one luminance channel L and two chrominance channels (a, b), where L represents the lightness of the color, the a channel runs from red to dark green, and the b channel runs from yellow to blue;

(2) Compute the gradient of each channel by linear convolution filtering to obtain the luminance edges and chrominance edges of the original image and the distorted image:

$$g_R^L(x,y)=\sqrt{\left(L_R\otimes f_x\right)^2(x,y)+\left(L_R\otimes f_y\right)^2(x,y)}$$

$$g_D^L(x,y)=\sqrt{\left(L_D\otimes f_x\right)^2(x,y)+\left(L_D\otimes f_y\right)^2(x,y)}$$

$$g_R^a(x,y)=\sqrt{\left(a_R\otimes f_x\right)^2(x,y)+\left(a_R\otimes f_y\right)^2(x,y)}$$

$$g_D^a(x,y)=\sqrt{\left(a_D\otimes f_x\right)^2(x,y)+\left(a_D\otimes f_y\right)^2(x,y)}$$

$$g_R^b(x,y)=\sqrt{\left(b_R\otimes f_x\right)^2(x,y)+\left(b_R\otimes f_y\right)^2(x,y)}$$

$$g_D^b(x,y)=\sqrt{\left(b_D\otimes f_x\right)^2(x,y)+\left(b_D\otimes f_y\right)^2(x,y)}$$

where $x$ denotes the horizontal direction of the filter and $y$ the vertical direction, $f_x$ and $f_y$ are the horizontal and vertical filters, $L_R$ is the luminance channel of the original image R and $L_D$ that of the distorted image D, and $g_R^L$ and $g_D^L$ are the gradient magnitudes of the luminance channels of the original and distorted images; $a_R$ denotes the red-to-dark-green channel of the original image R, $a_D$ that of the distorted image D, and $g_R^a$ and $g_D^a$ are the gradient magnitudes of the a chrominance channel in the original and distorted images; $b_R$ denotes the yellow-to-blue channel of the original image R, $b_D$ that of the distorted image D, and $g_R^b$ and $g_D^b$ are the gradient magnitudes of the b chrominance channel in the original and distorted images;

(3) Compute the edge difference parameters between the original image and the distorted image:

(3a) Using the luminance similarity index of the structural similarity algorithm SSIM, $l(x,y)=\dfrac{2\mu_x\mu_y+C_1}{\mu_x^2+\mu_y^2+C_1}$, derive the luminance edge difference $DM_L$ between corresponding pixels of the luminance channels of the original and distorted images, where $x$ and $y$ denote the original image block and the distorted image block, $\mu_x$ and $\mu_y$ denote their means, and $C_1$ is a constant;

(3b) Compare the red-to-dark-green color difference between corresponding pixels of the original image and the distorted image, and derive their a chrominance edge difference $DM_a$;

(3c) Compare the yellow-to-blue color difference between corresponding pixels of the original image and the distorted image, and derive their b chrominance edge difference $DM_b$;

(4) Linearly fuse the luminance edge difference and the chrominance edge differences to obtain the final quality score CGBM:

$$CGBM=\sum_{i=1}^{N}\frac{1}{N}\left(\omega_1\cdot DM_L(i)+\omega_2\cdot DM_a(i)+\omega_3\cdot DM_b(i)\right)$$

where $i = 1, 2, \ldots, N$, $N$ is the number of pixels in the image, and $\omega_1$, $\omega_2$, $\omega_3$ are weight parameters representing the influence of the luminance channel, the a chrominance channel and the b chrominance channel on distortion perception. The CGBM value lies in the range [0, 1]; the closer the result is to 1, the better the image quality.

The present invention has the following advantages:

1) Compared with other methods, the present invention clearly improves the accuracy of color image quality evaluation, applies to more types of color distortion, and produces results that agree well with subjective evaluation.

2) Because the present invention transforms the image into a suitable color space through a color appearance model, it describes the luminance and chrominance attributes of the image more accurately and more reasonably than methods that first apply a grayscale-domain quality evaluation algorithm to the R, G and B channels of the color image separately and then linearly combine the per-channel results.

3) The present invention uses chrominance edge differences to evaluate the quality of color images, which overcomes the loss of a large amount of color information that occurs in traditional methods when the color image is converted to the grayscale domain for quality evaluation. Experimental results show that the computation of the color-difference component in the present invention is effective.

Brief Description of the Drawings

Figure 1 is the implementation flowchart of the present invention;

Figure 2 shows eight different types of color-distorted images selected from the TID2013 database;

Figure 3 is a plot of the image quality Q obtained by the present invention fitted against the MOS values of the distorted images.

Detailed Description

Referring to Figure 1, the implementation steps of the present invention are as follows:

Step 1. Apply a color perception transform to the images using the S-CIELAB color appearance model.

The human visual system's perception of color changes with environmental factors such as lighting conditions, viewing distance and display device. If two identical colors are placed under different viewing conditions, the human visual system perceives them differently. Therefore, before processing the color images, a perceptual transform must be applied to the original image and the distorted image to remove the influence of different viewing conditions on color perception.

In this example the S-CIELAB color appearance model is selected and the original image and the distorted image are transformed into the LAB2000HL space. Color differences computed with the color-difference formula in this space match the differences perceived by the human eye better than in other color spaces; the space simulates visual perception well, describes attributes such as chroma and hue more accurately, and is perceptually uniform.
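A minimal sketch of this decomposition step is given below. It assumes a plain CIELAB conversion via scikit-image's rgb2lab as a stand-in for the patent's S-CIELAB / LAB2000HL transform (the spatial filtering of S-CIELAB and the LAB2000HL mapping are omitted); the function name to_lab_channels and the input normalization are illustrative, not taken from the patent.

```python
# Sketch of step 1: split reference and distorted RGB images into one lightness
# channel and two chroma channels. Plain CIELAB is used here as an assumed
# approximation of the color-appearance transform described in the text.
import numpy as np
from skimage import color

def to_lab_channels(rgb_image):
    """Return (L, a, b) channels of an H x W x 3 RGB image."""
    rgb = rgb_image.astype(np.float64) / 255.0 if rgb_image.dtype == np.uint8 else rgb_image
    lab = color.rgb2lab(rgb)                     # CIELAB stand-in for S-CIELAB / LAB2000HL
    return lab[..., 0], lab[..., 1], lab[..., 2]

# Usage: L_R, a_R, b_R = to_lab_channels(reference_rgb)
#        L_D, a_D, b_D = to_lab_channels(distorted_rgb)
```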

Step 2. Extract the luminance edges and chrominance edges of the original image and the distorted image by linear convolution filtering.

For color images, traditional edge extraction based only on image luminance is insufficient. In this example the color perception transform decomposes the original image and the distorted image into one luminance channel L and two chrominance channels (a, b), where L represents the lightness of the color, the a channel runs from red to dark green, and the b channel runs from yellow to blue;

The gradient of each channel is computed by linear convolution filtering:

$$g_R^L(x,y)=\sqrt{\left(L_R\otimes f_x\right)^2(x,y)+\left(L_R\otimes f_y\right)^2(x,y)}$$

$$g_D^L(x,y)=\sqrt{\left(L_D\otimes f_x\right)^2(x,y)+\left(L_D\otimes f_y\right)^2(x,y)}$$

$$g_R^a(x,y)=\sqrt{\left(a_R\otimes f_x\right)^2(x,y)+\left(a_R\otimes f_y\right)^2(x,y)}$$

$$g_D^a(x,y)=\sqrt{\left(a_D\otimes f_x\right)^2(x,y)+\left(a_D\otimes f_y\right)^2(x,y)}$$

$$g_R^b(x,y)=\sqrt{\left(b_R\otimes f_x\right)^2(x,y)+\left(b_R\otimes f_y\right)^2(x,y)}$$

$$g_D^b(x,y)=\sqrt{\left(b_D\otimes f_x\right)^2(x,y)+\left(b_D\otimes f_y\right)^2(x,y)}$$

where $x$ denotes the horizontal direction of the filter and $y$ the vertical direction, $f_x$ and $f_y$ are the horizontal and vertical filters, $L_R$ is the luminance channel of the original image R and $L_D$ that of the distorted image D, and $g_R^L$ and $g_D^L$ are the gradient magnitudes of the luminance channels of the original and distorted images; $a_R$ denotes the red-to-dark-green channel of the original image R, $a_D$ that of the distorted image D, and $g_R^a$ and $g_D^a$ are the gradient magnitudes of the a chrominance channel in the original and distorted images; $b_R$ denotes the yellow-to-blue channel of the original image R, $b_D$ that of the distorted image D, and $g_R^b$ and $g_D^b$ are the gradient magnitudes of the b chrominance channel in the original and distorted images;

Computing the gradient of every channel in this way yields the luminance edges and chrominance edges of the original image and the distorted image.
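The per-channel gradient computation can be sketched as follows. The text does not specify the filters $f_x$ and $f_y$, so 3x3 Sobel kernels are assumed here purely for illustration; gradient_magnitude is a hypothetical helper name, not taken from the patent.

```python
# Sketch of step 2: gradient magnitude of one channel by linear convolution,
# g(x, y) = sqrt((channel * f_x)^2 + (channel * f_y)^2), '*' denoting convolution.
import numpy as np
from scipy.ndimage import convolve

F_X = np.array([[-1, 0, 1],
                [-2, 0, 2],
                [-1, 0, 1]], dtype=float)   # assumed horizontal filter f_x (Sobel)
F_Y = F_X.T                                  # assumed vertical filter f_y

def gradient_magnitude(channel):
    """Per-pixel gradient magnitude of a single image channel."""
    gx = convolve(channel.astype(float), F_X, mode='nearest')
    gy = convolve(channel.astype(float), F_Y, mode='nearest')
    return np.sqrt(gx ** 2 + gy ** 2)

# g_R_L = gradient_magnitude(L_R); g_D_L = gradient_magnitude(L_D), and likewise
# for the a and b channels.
```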

Step 3. Compute the edge difference parameters between the original image and the distorted image.

3.1) Compute the luminance edge difference $DM_L$ between corresponding pixels of the luminance channels of the original and distorted images:

Using the luminance similarity index of the structural similarity algorithm SSIM, $l(x,y)=\dfrac{2\mu_x\mu_y+C_1}{\mu_x^2+\mu_y^2+C_1}$, the luminance edge difference $DM_L$ between corresponding pixels of the luminance channels of the original and distorted images is derived as:

$$DM_L=\frac{1}{1+n_1\cdot\left(g_R^L(x,y)-g_D^L(x,y)\right)^2}$$

where $x$ and $y$ denote the original image block and the distorted image block, $\mu_x$ and $\mu_y$ denote their means, $C_1$ is a constant, and $n_1 = 150$ is used to scale $DM_L$;
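A sketch of this per-pixel luminance edge difference follows the formula above directly, with $n_1 = 150$ as stated in the description; the function name is illustrative only.

```python
# Sketch of step 3.1: DM_L = 1 / (1 + n1 * (g_R^L - g_D^L)^2), evaluated per pixel.
import numpy as np

def luminance_edge_difference(g_R_L, g_D_L, n1=150.0):
    """Per-pixel luminance edge difference map between reference and distorted images."""
    return 1.0 / (1.0 + n1 * (g_R_L - g_D_L) ** 2)
```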

3.2) Compute the a chrominance edge difference $DM_a$ and the b chrominance edge difference $DM_b$ between the original image and the distorted image:

The luminance similarity index of the SSIM algorithm measures differences in a linear luminance space, whereas the color space used here is perceptually uniform. This example therefore adopts the difference measure proposed by Guha T. et al. in "Learning sparse models for image quality assessment" to compute the a chrominance edge difference $DM_a$ and the b chrominance edge difference $DM_b$ between the original image and the distorted image:

$$DM_a=1-\frac{\left\|g_R^a(x,y)-g_D^a(x,y)\right\|^2+m_1}{\left\|g_R^a(x,y)\right\|^2+\left\|g_D^a(x,y)\right\|^2+m_1}$$

$$DM_b=1-\frac{\left\|g_R^b(x,y)-g_D^b(x,y)\right\|^2+m_2}{\left\|g_R^b(x,y)\right\|^2+\left\|g_D^b(x,y)\right\|^2+m_2}$$

where $m_1$ and $m_2$ are small constants used to avoid the singularity that occurs when the denominator is zero or close to zero; in the present invention $m_1 = 0.5$ and $m_2 = 0.5$.
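A sketch of the chroma edge differences under the two formulas above. The norm is interpreted here as the per-pixel magnitude, which is an assumption since the description does not state the neighborhood over which it is taken; $m_1 = m_2 = 0.5$ follow the description, and the function name is illustrative.

```python
# Sketch of step 3.2: DM = 1 - (|g_R - g_D|^2 + m) / (|g_R|^2 + |g_D|^2 + m),
# evaluated per pixel for each chroma channel.
import numpy as np

def chroma_edge_difference(g_R, g_D, m=0.5):
    """Per-pixel chroma edge difference map for one chrominance channel."""
    num = (g_R - g_D) ** 2 + m
    den = g_R ** 2 + g_D ** 2 + m
    return 1.0 - num / den

# DM_a = chroma_edge_difference(g_R_a, g_D_a, m=0.5)
# DM_b = chroma_edge_difference(g_R_b, g_D_b, m=0.5)
```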

Step 4. Quality evaluation.

The luminance edge difference and the chrominance edge differences are linearly fused to obtain the final quality score CGBM:

$$CGBM=\sum_{i=1}^{N}\frac{1}{N}\left(\omega_1\cdot DM_L(i)+\omega_2\cdot DM_a(i)+\omega_3\cdot DM_b(i)\right)$$

where $i = 1, 2, \ldots, N$, $N$ is the number of pixels in the image, and $\omega_1$, $\omega_2$, $\omega_3$ are weight parameters representing the influence of the luminance channel, the a chrominance channel and the b chrominance channel on distortion perception. The CGBM value lies in the range [0, 1]; the closer the result is to 1, the better the image quality.
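A sketch of the linear fusion into the scalar CGBM score, reusing the hypothetical helpers from the earlier sketches. The numerical values of the weights $\omega_1$, $\omega_2$, $\omega_3$ are not given in this text, so equal weights are assumed here purely for illustration.

```python
# Sketch of step 4: CGBM = (1/N) * sum_i (w1*DM_L(i) + w2*DM_a(i) + w3*DM_b(i)).
import numpy as np

def cgbm_score(DM_L, DM_a, DM_b, w1=1/3, w2=1/3, w3=1/3):
    """Average the weighted per-pixel difference maps into one quality score."""
    fused = w1 * DM_L + w2 * DM_a + w3 * DM_b
    return float(np.mean(fused))

# End-to-end usage with the names assumed in the earlier sketches:
# L_R, a_R, b_R = to_lab_channels(reference_rgb)
# L_D, a_D, b_D = to_lab_channels(distorted_rgb)
# DM_L = luminance_edge_difference(gradient_magnitude(L_R), gradient_magnitude(L_D))
# DM_a = chroma_edge_difference(gradient_magnitude(a_R), gradient_magnitude(a_D))
# DM_b = chroma_edge_difference(gradient_magnitude(b_R), gradient_magnitude(b_D))
# quality = cgbm_score(DM_L, DM_a, DM_b)
```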

The advantages of the present invention are further illustrated by the following experiments:

1. Evaluation conditions:

The TID2013 image database established by Nikolay Ponomarenko and his research group at the National Aerospace University in Ukraine was used. This is currently the public database with the most distortion categories and the largest number of images: it adds 7 new distortion types to the widely used TID2008 database and adds one level to every distortion type, so it contains 24 distortion types and 5 distortion levels, for a total of 3000 distorted images and 25 original images.

The rich image content, comprehensive distortion types and large number of images make TID2013 well suited for validating quality evaluation algorithms. Another important reason for choosing this database is that it contains many distortion types that affect color perception. The present invention selects 8 color-related distortion types from this database, as shown in Table 1.

Table 1. Eight color distortion types in the TID2013 database

Images of the 8 distortion types listed in Table 1 are shown in Figure 2, where Figures 2a, 2b, 2c, 2d, 2e, 2f, 2g and 2h correspond, respectively, to distortion #2 additive noise, #7 quantization noise, #10 JPEG compression, #16 mean shift, #17 contrast change, #18 change of color saturation, #22 color quantization and #23 chromatic aberration in Table 1, and Figure 2i is the original image.

2. Simulation experiments

Experiment 1: consistency verification.

To test the consistency between the objective color image quality scores produced by the present invention and subjective quality evaluation, the Pearson linear correlation coefficient (PLCC) is used; it reflects the prediction accuracy of an objective evaluation method, and the closer the PLCC value is to 1, the higher the accuracy of the algorithm.
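As a brief illustration of this evaluation protocol (not part of the patented method itself), the PLCC between the objective scores and the MOS values can be computed with scipy; the nonlinear logistic fitting step sometimes used in IQA benchmarking is omitted here for simplicity.

```python
# Sketch: Pearson linear correlation between objective scores and subjective MOS.
import numpy as np
from scipy.stats import pearsonr

def plcc(objective_scores, mos_values):
    """PLCC of a set of images; values closer to 1 indicate higher accuracy."""
    r, _p_value = pearsonr(np.asarray(objective_scores), np.asarray(mos_values))
    return r
```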

The present invention is compared with several recent color image quality evaluation methods, iCID, CID, QSSIM, S-SSIM and FSIM, on color-distorted images. The results are shown in Table 2.

Table 2. CC values of the present invention and other algorithms on the TID2013 database

As can be seen from Table 2, the present invention outperforms the CID, QSSIM and S-SSIM algorithms on all color distortion types. Compared with the iCID algorithm, the present invention achieves higher PLCC values on six distortion types, with a particularly clear advantage on contrast change. In addition, the present invention makes up for the weakness of the FSIM algorithm on the change-of-color-saturation distortion type. In summary, the present invention clearly improves evaluation accuracy compared with the reference methods and produces results consistent with subjective evaluation.

Experiment 2: rationality verification.

To verify the rationality of the present invention, experiments were designed for four color distortions: quantization noise (#7), JPEG compression (#10), color quantization (#22) and chromatic aberration (#23). The selected experimental data comprise 25 original images and their corresponding four color distortion types, with five distorted images of increasing distortion level for each type.

First, the image quality is computed for every distorted image, and then the Q values of all distorted images belonging to the same distortion type and the same distortion level are pooled and averaged. Likewise, the MOS values of all images belonging to the same distortion type and the same distortion level are averaged. The results are shown in Figure 3.

As can be seen from Figure 3, the objective quality score Q increases as the degree of image distortion decreases. The prediction trend of the present invention is highly consistent with the degree of image distortion and can effectively detect changes in image quality, which demonstrates its rationality.

Experiment 3: verification of the effectiveness of the color component.

To verify the contribution of the color component to the overall evaluation, the weight parameters $\omega_2$ and $\omega_3$ of the color channels a and b were set to 0, i.e. only the luminance component of the present invention was used for quality evaluation; this result is denoted CGBM*. The resulting data were compared with the data CGBM obtained with the full method. The results are shown in Table 3.

Table 3. Mean MOS values and mean evaluation results (CGBM) for sets of different distortion levels

As can be seen from Table 3, when the color component of the present invention is removed, the results on distortion types #2, #10, #22 and #23 remain almost identical to those of the full method, while the PLCC values on distortions #16, #17 and #18 drop sharply; in particular, the result on #17 is only 0.3. This shows that the computation of the color-difference component within the algorithm is effective.

Claims (4)

1. A gradient-based color image quality evaluation method, comprising:

(1) applying a color perception transform to the original image R and the distorted image D using the S-CIELAB color appearance model proposed by the International Commission on Illumination (CIE), decomposing each image into one luminance channel L and two chrominance channels (a, b), where L represents the lightness of the color, the a channel runs from red to dark green, and the b channel runs from yellow to blue;

(2) computing the gradient of each channel by linear convolution filtering to obtain the luminance edges and chrominance edges of the original image and the distorted image:

$$g_R^L(x,y)=\sqrt{\left(L_R\otimes f_x\right)^2(x,y)+\left(L_R\otimes f_y\right)^2(x,y)}$$

$$g_D^L(x,y)=\sqrt{\left(L_D\otimes f_x\right)^2(x,y)+\left(L_D\otimes f_y\right)^2(x,y)}$$

$$g_R^a(x,y)=\sqrt{\left(a_R\otimes f_x\right)^2(x,y)+\left(a_R\otimes f_y\right)^2(x,y)}$$

$$g_D^a(x,y)=\sqrt{\left(a_D\otimes f_x\right)^2(x,y)+\left(a_D\otimes f_y\right)^2(x,y)}$$

$$g_R^b(x,y)=\sqrt{\left(b_R\otimes f_x\right)^2(x,y)+\left(b_R\otimes f_y\right)^2(x,y)}$$

$$g_D^b(x,y)=\sqrt{\left(b_D\otimes f_x\right)^2(x,y)+\left(b_D\otimes f_y\right)^2(x,y)}$$

where $x$ denotes the horizontal direction of the filter and $y$ the vertical direction, $f_x$ and $f_y$ are the horizontal and vertical filters, $L_R$ is the luminance channel of the original image R and $L_D$ that of the distorted image D, and $g_R^L$ and $g_D^L$ are the gradient magnitudes of the luminance channels of the original and distorted images; $a_R$ denotes the red-to-dark-green channel of the original image R, $a_D$ that of the distorted image D, and $g_R^a$ and $g_D^a$ are the gradient magnitudes of the a chrominance channel in the original and distorted images; $b_R$ denotes the yellow-to-blue channel of the original image R, $b_D$ that of the distorted image D, and $g_R^b$ and $g_D^b$ are the gradient magnitudes of the b chrominance channel in the original and distorted images;

(3) computing the edge difference parameters between the original image and the distorted image:

(3a) using the luminance similarity index of the structural similarity algorithm SSIM, $l(x,y)=\dfrac{2\mu_x\mu_y+C_1}{\mu_x^2+\mu_y^2+C_1}$, deriving the luminance edge difference $DM_L$ between corresponding pixels of the luminance channels of the original and distorted images, where $x$ and $y$ denote the original image block and the distorted image block, $\mu_x$ and $\mu_y$ denote their means, and $C_1$ is a constant;

(3b) comparing the red-to-dark-green color difference between corresponding pixels of the original image and the distorted image to derive their a chrominance edge difference $DM_a$;

(3c) comparing the yellow-to-blue color difference between corresponding pixels of the original image and the distorted image to derive their b chrominance edge difference $DM_b$;

(4) linearly fusing the luminance edge difference and the chrominance edge differences to obtain the final quality score CGBM:

$$CGBM=\sum_{i=1}^{N}\frac{1}{N}\left(\omega_1\cdot DM_L(i)+\omega_2\cdot DM_a(i)+\omega_3\cdot DM_b(i)\right)$$

where $i = 1, 2, \ldots, N$, $N$ is the number of pixels in the image, and $\omega_1$, $\omega_2$, $\omega_3$ are weight parameters representing the influence of the luminance channel, the a chrominance channel and the b chrominance channel on distortion perception; the CGBM value lies in the range [0, 1], and the closer the result is to 1, the better the image quality.

2. The method according to claim 1, wherein the luminance edge difference $DM_L$ between corresponding pixels of the luminance channels of the original and distorted images derived in step (3a) is expressed as:

$$DM_L=\frac{1}{1+n_1\cdot\left(g_R^L(x,y)-g_D^L(x,y)\right)^2}$$

where $n_1 = 150$ is used to scale $DM_L$.

3. The method according to claim 1, wherein the a chrominance edge difference $DM_a$ between corresponding pixels of the original image and the distorted image derived in step (3b) is expressed as:

$$DM_a=1-\frac{\left\|g_R^a(x,y)-g_D^a(x,y)\right\|^2+m_1}{\left\|g_R^a(x,y)\right\|^2+\left\|g_D^a(x,y)\right\|^2+m_1}$$

where $m_1 = 0.5$, to avoid the singularity that occurs when the denominator is zero or close to zero.

4. The method according to claim 1, wherein the b chrominance edge difference $DM_b$ between corresponding pixels of the original image and the distorted image derived in step (3c) is expressed as:

$$DM_b=1-\frac{\left\|g_R^b(x,y)-g_D^b(x,y)\right\|^2+m_2}{\left\|g_R^b(x,y)\right\|^2+\left\|g_D^b(x,y)\right\|^2+m_2}$$

where $m_2 = 0.5$, to avoid the singularity that occurs when the denominator is zero or close to zero.
CN201610171818.9A 2016-03-24 2016-03-24 Color image quality evaluation method based on gradient Pending CN105844640A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610171818.9A CN105844640A (en) 2016-03-24 2016-03-24 Color image quality evaluation method based on gradient

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610171818.9A CN105844640A (en) 2016-03-24 2016-03-24 Color image quality evaluation method based on gradient

Publications (1)

Publication Number Publication Date
CN105844640A true CN105844640A (en) 2016-08-10

Family

ID=56583335

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610171818.9A Pending CN105844640A (en) 2016-03-24 2016-03-24 Color image quality evaluation method based on gradient

Country Status (1)

Country Link
CN (1) CN105844640A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107689066A (en) * 2017-09-15 2018-02-13 深圳市唯特视科技有限公司 A kind of facial color method based on example image deformation
WO2018214671A1 (en) * 2017-05-26 2018-11-29 杭州海康威视数字技术股份有限公司 Image distortion correction method and device and electronic device
CN112509071A (en) * 2021-01-29 2021-03-16 电子科技大学 Chroma information compression and reconstruction method assisted by luminance information
CN113077405A (en) * 2021-03-27 2021-07-06 荆门汇易佳信息科技有限公司 Color transfer and quality evaluation system for two-segment block
CN113379635A (en) * 2021-06-18 2021-09-10 北京小米移动软件有限公司 Image processing method and device, model training method and device and storage medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101976444A (en) * 2010-11-11 2011-02-16 浙江大学 Pixel type based objective assessment method of image quality by utilizing structural similarity
CN102075786A (en) * 2011-01-19 2011-05-25 宁波大学 Method for objectively evaluating image quality
CN104021545A (en) * 2014-05-12 2014-09-03 同济大学 Full-reference color image quality evaluation method based on visual saliency
CN104023230A (en) * 2014-06-23 2014-09-03 北京理工大学 Non-reference image quality evaluation method based on gradient relevance

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
TANAYA GUHA et al.: "Learning sparse models for image quality assessment", 2014 IEEE International Conference on Acoustics, Speech and Signal Processing *
何立火: "Research on Visual Information Quality Perception Models and Evaluation Methods", China Doctoral Dissertations Full-text Database, Information Science and Technology Series *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018214671A1 (en) * 2017-05-26 2018-11-29 杭州海康威视数字技术股份有限公司 Image distortion correction method and device and electronic device
CN108932697A (en) * 2017-05-26 2018-12-04 杭州海康威视数字技术股份有限公司 A kind of distorted image removes distortion methods, device and electronic equipment
CN108932697B (en) * 2017-05-26 2020-01-17 杭州海康威视数字技术股份有限公司 Distortion removing method and device for distorted image and electronic equipment
US11250546B2 (en) 2017-05-26 2022-02-15 Hangzhou Hikvision Digital Technology Co., Ltd. Image distortion correction method and device and electronic device
CN107689066A (en) * 2017-09-15 2018-02-13 深圳市唯特视科技有限公司 A kind of facial color method based on example image deformation
CN112509071A (en) * 2021-01-29 2021-03-16 电子科技大学 Chroma information compression and reconstruction method assisted by luminance information
CN113077405A (en) * 2021-03-27 2021-07-06 荆门汇易佳信息科技有限公司 Color transfer and quality evaluation system for two-segment block
CN113379635A (en) * 2021-06-18 2021-09-10 北京小米移动软件有限公司 Image processing method and device, model training method and device and storage medium

Similar Documents

Publication Publication Date Title
Panetta et al. Human-visual-system-inspired underwater image quality measures
Liu et al. CID: IQ–a new image quality database
CN109191428B (en) Full-reference image quality assessment method based on masked texture features
CN105844640A (en) Color image quality evaluation method based on gradient
CN101650833B (en) Color image quality evaluation method
CN102867295B (en) A kind of color correction method for color image
CN103426173B (en) Objective evaluation method for stereo image quality
CN104023227B (en) An Objective Evaluation Method of Video Quality Based on Similarity of Spatial and Temporal Structures
CN102663719A (en) Bayer-pattern CFA image demosaicking method based on non-local mean
CN104994375A (en) Three-dimensional image quality objective evaluation method based on three-dimensional visual saliency
CN107767363A (en) It is a kind of based on natural scene without refer to high-dynamics image quality evaluation algorithm
Pedersen Evaluation of 60 full-reference image quality metrics on the CID: IQ
CN109218716A (en) Based on color statistics and comentropy without reference tone mapping graph image quality evaluation method
Tsai et al. Quality assessment of 3D synthesized views with depth map distortion
Geng et al. A stereoscopic image quality assessment model based on independent component analysis and binocular fusion property
CN105976351A (en) Central offset based three-dimensional image quality evaluation method
CN101668226B (en) Method for acquiring color image with best quality
CN103841410A (en) Half reference video QoE objective evaluation method based on image feature information
Wang et al. Screen content image quality assessment with edge features in gradient domain
CN101901482B (en) Method for judging quality effect of defogged and enhanced image
Ortiz-Jaramillo et al. Evaluating color difference measures in images
CN114998596A (en) High dynamic range stereo omnidirectional image quality evaluation method based on visual perception
Lee et al. Contrast-preserved chroma enhancement technique using YCbCr color space
CN107483918B (en) Saliency-based full-reference stereo image quality assessment method
Jadhav et al. Performance evaluation of structural similarity index metric in different colorspaces for HVS based assessment of quality of colour images

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20160810

WD01 Invention patent application deemed withdrawn after publication