CN111598826A - Image objective quality evaluation method and system based on joint multi-scale image characteristics - Google Patents
- Publication number: CN111598826A
- Application number: CN201910122634.7A
- Authority: CN (China)
- Legal status: Granted (the legal status is an assumption by Google and is not a legal conclusion)
Classifications
- G06T7/0002: Image analysis; inspection of images, e.g. flaw detection
- G06V10/464: Extraction of image or video features; salient features, e.g. scale-invariant feature transform [SIFT], using a plurality of salient features, e.g. bag-of-words [BoW] representations
- G06T2207/20016: Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; pyramid transform
- G06T2207/30168: Image quality inspection
Abstract
The present invention provides an image objective quality evaluation method and system based on joint multi-scale image features, comprising: an image processing step, in which a Gaussian pyramid and a Laplacian pyramid decompose the original image into image groups at different scales, denoted y0(n) and y1(n) respectively, and edge structure features are extracted from y1(n); and an edge saliency feature extraction step, in which a luminance mask and a contrast mask are used to extract edge saliency features from the Gaussian-pyramid image group y0(n) and the Laplacian-pyramid image group y1(n). The invention evaluates the quality of screen content images with higher accuracy: validation on existing databases shows overall performance superior to the prior art, with particularly strong results on the screen-content distortion types of Gaussian blur, motion blur, and JPEG2000 compression.
Description
Technical Field
The present invention relates to the field of image processing, and in particular to an image objective quality evaluation method and system based on joint multi-scale image features.
Background
With the wide adoption of smart terminals such as smartphones, tablets, and laptops, screen content images have replaced natural images as the most common and most heavily consumed images in daily life. A screen content image is a computer-generated image composed of graphics, text, and natural-image regions, and is used extensively in applications such as desktop gaming, desktop collaboration, and distance education. For these applications, image quality is particularly important. However, because screen content images and natural images have different characteristics, traditional quality evaluation methods designed for natural images do not reflect the distortion of screen content images well: in contrast to the rich colors and smooth edges of natural images, screen content images tend to have relatively few colors, sharp edges, and large amounts of repeated graphics. Moreover, distortion in natural images generally arises from the limited capability of physical sensors, whereas distortion in screen content images is generally produced by the computer itself. There is therefore an urgent need for an accurate and efficient objective quality evaluation method for screen content images.
Patent document CN108335289A (application number 201810049789.8) discloses a full-reference fusion method for objective image quality evaluation, comprising: selecting an image database as the input for model training and grouping the images by distortion type, with images at different degrees of distortion under each type, and obtaining the file name and label of each group; feature extraction, in which several full-reference metrics are selected and each scores the images of every distortion type, each group of images yielding one feature vector per metric, the feature vectors then being assembled into a feature matrix; data preprocessing, in which the distorted-image labels and the feature-vector scores for each distortion type are normalized to the ranges (1, 100) and (0, 1) respectively and transposed to meet the requirements of SVM training; and feature training, which produces the quality evaluation model.
Summary of the Invention
In view of the defects of the prior art, the object of the present invention is to provide an image objective quality evaluation method and system based on joint multi-scale image features.
The image objective quality evaluation method based on joint multi-scale image features provided by the present invention comprises:

an image processing step: using a Gaussian pyramid and a Laplacian pyramid, the original image is decomposed into image groups at different scales, denoted y0(n) and y1(n) respectively, and edge structure features are extracted from y1(n);

an edge saliency feature extraction step: using a luminance mask and a contrast mask, edge saliency features are extracted from the Gaussian-pyramid image group y0(n) and the Laplacian-pyramid image group y1(n);

a feature similarity calculation step: from the edge structure features and edge saliency features, the edge structure similarity and the edge saliency similarity are computed;

a feature combination step: from the edge structure similarity and edge saliency similarity, the final local quality map is computed;

a feature pooling step: from the final local quality map, the final objective evaluation score is computed.
Preferably, in the image processing step:

a Gaussian pyramid and a Laplacian pyramid are used to decompose the original image into image groups at different scales, denoted y0(n) and y1(n) respectively.
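As a sketch of this decomposition step: the code below builds a two-level Gaussian pyramid by repeated blur-and-downsample and a Laplacian pyramid as the residual against the upsampled coarser level. The 5-tap binomial kernel, nearest-neighbour upsampling, and level count are illustrative assumptions, not specified by the patent.

```python
import numpy as np

def _blur(img):
    # Separable 5-tap binomial filter [1, 4, 6, 4, 1] / 16, a common
    # approximation of the Gaussian smoothing used in image pyramids.
    k = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    pad = np.pad(img, 2, mode="edge")
    tmp = sum(k[i] * pad[:, i:i + img.shape[1]] for i in range(5))
    return sum(k[i] * tmp[i:i + img.shape[0], :] for i in range(5))

def pyramids(img, levels=2):
    # y0[n]: Gaussian pyramid (blur + downsample n times).
    # y1[n]: Laplacian band, y0[n] minus the upsampled coarser level.
    y0 = [img.astype(float)]
    for _ in range(levels):
        y0.append(_blur(y0[-1])[::2, ::2])
    y1 = []
    for n in range(levels):
        up = np.repeat(np.repeat(y0[n + 1], 2, axis=0), 2, axis=1)
        up = _blur(up[:y0[n].shape[0], :y0[n].shape[1]])
        y1.append(y0[n] - up)
    return y0, y1

# Example: decompose a 16x16 luminance ramp.
img = np.outer(np.linspace(0.0, 1.0, 16), np.ones(16))
y0, y1 = pyramids(img)
```

Under this reading, y1[0] (the finest Laplacian band, n = 1 in the patent's indexing) carries the fine edge structure that the method reads as edge structure features.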
Preferably, in the edge saliency feature extraction step:

a luminance mask and a contrast mask are used to extract edge saliency features from the Gaussian-pyramid image group y0(n) and the Laplacian-pyramid image group y1(n) according to the following formulas:

where:

CLM(n) denotes the luminance-mask result, an image feature of the image groups produced by the Gaussian and Laplacian pyramids;
(symbol not reproduced) denotes the y function;
y1(n) denotes the image group after Laplacian pyramid processing;
y0(n) denotes the image group after Gaussian pyramid processing;
n denotes the pyramid level index;
layer y1(1) exhibits the edge structure features;
(symbol not reproduced) denotes the result of applying the y function to y0(n+1);
γ1 denotes the luminance-contrast threshold;
| · | denotes the absolute-value operation;
a1 is a constant that ensures numerical stability of the equation;
CLCM(n) denotes the contrast-mask result, an image feature of the image group derived from CLM(n);
CLCM(1) (n = 1) exhibits the edge saliency features;
a2 is a constant that ensures numerical stability of the equation;
(symbol not reproduced) denotes the result of applying the y function to CLM(n+1);
γ2 denotes the contrast-detectability threshold;
G(x, y; σ) denotes the Gaussian kernel;
* denotes convolution;
↑2 denotes upsampling by a factor of 2.
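The masking formulas themselves are rendered as images in the original and are not reproduced in this text, so the sketch below is only one plausible reading of the listed symbols: a divisive-normalization mask in which a band response is divided by a thresholded coarser-scale background plus a stability constant. The function forms and the values γ1 = γ2 = 0.5 and a1 = a2 = 1e-3 are assumptions, not taken from the patent.

```python
import numpy as np

def upsample2(img, shape):
    # Nearest-neighbour 2x upsampling cropped to a target shape; stands in
    # for the "upsample by 2" operator followed by Gaussian smoothing.
    up = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return up[:shape[0], :shape[1]]

def luminance_mask(y1_n, y0_next, gamma1=0.5, a1=1e-3):
    # ASSUMED form: band magnitude |y1(n)| normalized by the coarser-scale
    # luminance background gamma1 * up(y0(n+1)) plus stability constant a1.
    background = gamma1 * upsample2(y0_next, y1_n.shape)
    return np.abs(y1_n) / (np.abs(background) + a1)

def contrast_mask(c_lm_n, c_lm_next, gamma2=0.5, a2=1e-3):
    # ASSUMED form: the same divisive normalization applied across scales
    # of the luminance-mask output C_LM.
    background = gamma2 * upsample2(c_lm_next, c_lm_n.shape)
    return c_lm_n / (np.abs(background) + a2)

# Example on a 2x2 Laplacian band and a 1x1 coarser Gaussian level.
y1_1 = np.array([[0.5, -0.5], [0.25, 0.0]])
y0_2 = np.array([[1.0]])
c_lm = luminance_mask(y1_1, y0_2)
c_lcm = contrast_mask(c_lm, np.array([[0.3]]))
```

The point of the sketch is the structure (response over masked background), not the exact expressions, which only the original figures define.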
Preferably, in the feature similarity calculation step:

From the obtained edge structure features and edge saliency features, the edge structure similarity is computed as follows:

where:

S1(x, y) denotes the edge structure similarity at point (x, y);
subscripts r and d indicate that a feature is taken from the reference image or the distorted image, respectively;
y1r(1)(x, y) denotes the edge structure feature of the reference image at point (x, y) for n = 1;
y1d(2)(x, y) denotes the edge structure feature of the distorted image at point (x, y) for n = 2;
T1 is a non-zero constant that ensures numerical stability of the equation.

From the obtained edge structure features and edge saliency features, the edge saliency similarity is computed as follows:

S2(x, y) = MS1(x, y)^α · MS2(x, y)

where:

S2(x, y) denotes the edge saliency similarity at point (x, y);
α denotes the weight of MS1(x, y) within S2(x, y);
MS1(x, y) denotes the edge structure similarity after application of the similarity function;
MS2(x, y) denotes the edge saliency similarity after application of the similarity function;
w1(x, y) denotes a weighting factor;
CLMr(1)(x, y) denotes the luminance-mask (LM) feature of the reference image at point (x, y) for n = 1;
CLMd(1)(x, y) denotes the luminance-mask (LM) feature of the distorted image at point (x, y) for n = 1;
T2 is a non-zero constant that ensures numerical stability of the equation;
Σ(x,y) w1(x, y) denotes the sum of w1(x, y) over all points of the image;
CLCMr(1)(x, y) denotes the contrast-mask (LCM) feature of the reference image at point (x, y) for n = 1;
CLCMd(1)(x, y) denotes the contrast-mask (LCM) feature of the distorted image at point (x, y) for n = 1;
CLCMd(2)(x, y) denotes the contrast-mask (LCM) feature of the distorted image at point (x, y) for n = 2;
T3 is a non-zero constant that ensures numerical stability of the equation.
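The similarity formulas themselves appear as images in the original. Metrics of this family typically compare two feature maps with an SSIM-style term of the form (2ab + T) / (a² + b² + T), which equals 1 where the maps agree and decays as they diverge; the sketch below assumes that form, and the feature values are illustrative, not from the patent.

```python
import numpy as np

def similarity(a, b, T):
    # SSIM-style comparison term: equals 1 where a == b and decays toward
    # 0 as the maps diverge; the non-zero constant T keeps the ratio
    # stable where both responses are near zero.
    return (2 * a * b + T) / (a ** 2 + b ** 2 + T)

# Per-pixel edge-structure similarity between a reference and a distorted
# edge map (values illustrative).
ref = np.array([[0.8, 0.1], [0.0, 0.5]])
dist = np.array([[0.7, 0.1], [0.2, 0.5]])
S1 = similarity(ref, dist, T=0.01)
```

The same helper would serve for the MS1 and MS2 terms, applied to the LM and LCM mask features with their respective constants T2 and T3.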
Preferably, in the feature combination step:

From the obtained edge structure similarity and edge saliency similarity, the local quality similarity is computed as follows:

SQM(x, y) = (S1(x, y))^ξ · (S2(x, y))^ψ = (S1(x, y))^ξ · MS1(x, y)^μ · MS2(x, y)^ψ, where μ = ψ·α

where:

SQM(x, y) denotes the local quality similarity at point (x, y);
ξ denotes the weight of S1(x, y) in the local quality SQM(x, y);
ψ denotes the weight of MS2(x, y) in the local quality SQM(x, y);
μ denotes the weight of MS1(x, y) in the local quality SQM(x, y);
α denotes the weight of MS1(x, y) within S2(x, y).
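The combination rule above is explicit, so it can be computed directly. The exponent values ξ, ψ, α below are set to 1 purely for illustration, since the patent does not state them in this text.

```python
import numpy as np

def local_quality(S1, MS1, MS2, xi=1.0, psi=1.0, alpha=1.0):
    # S_QM = S1^xi * S2^psi with S2 = MS1^alpha * MS2, which expands to
    # S1^xi * MS1^mu * MS2^psi with mu = psi * alpha, as in the patent.
    mu = psi * alpha
    return S1 ** xi * MS1 ** mu * MS2 ** psi

# Illustrative per-pixel similarity maps.
S1 = np.array([[1.0, 0.81]])
MS1 = np.array([[1.0, 0.9]])
MS2 = np.array([[1.0, 0.8]])
SQM = local_quality(S1, MS1, MS2)
```

With all exponents at 1 the local quality is simply the product of the three similarity maps; the exponents let each term's influence be tuned.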
Preferably, in the feature pooling step:

From the obtained local quality map, the final objective evaluation score is computed as follows:

w2(x, y) = max(y1r(2)(x, y), y1d(2)(x, y))

where:

S denotes the final objective evaluation score;
w2(x, y) denotes the weighting parameter;
y1r(2)(x, y) denotes the edge structure feature of the reference image at point (x, y) for n = 2;
y1d(2)(x, y) denotes the edge structure feature of the distorted image at point (x, y) for n = 2.
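The weight w2 above follows the patent; the final pooling equation for S is rendered as an image in the original, so the weighted-average form below (S_QM weighted by w2 and normalized by the sum of weights) is an assumption modelled on the w1-weighted sums defined in the similarity step.

```python
import numpy as np

def pooled_score(SQM, y1r_2, y1d_2):
    # w2(x, y) = max(y1r(2), y1d(2)): weight each location by the stronger
    # of the two scale-2 edge responses, then average S_QM under w2, so
    # strong edges in either image dominate the final score.
    w2 = np.maximum(y1r_2, y1d_2)
    return (SQM * w2).sum() / w2.sum()

# Illustrative 1x2 quality map and scale-2 edge responses.
SQM = np.array([[1.0, 0.5]])
y1r_2 = np.array([[2.0, 1.0]])
y1d_2 = np.array([[1.0, 3.0]])
S = pooled_score(SQM, y1r_2, y1d_2)
```

In practice the scale-2 maps would first be brought to the resolution of the quality map; here they are taken to match for brevity.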
The objective quality evaluation system for screen content images based on joint multi-scale image features provided by the present invention comprises:

an image processing module: using a Gaussian pyramid and a Laplacian pyramid, the original image is decomposed into image groups at different scales, denoted y0(n) and y1(n) respectively, and edge structure features are extracted from y1(n);

an edge saliency feature extraction module: using a luminance mask and a contrast mask, edge saliency features are extracted from the Gaussian-pyramid image group y0(n) and the Laplacian-pyramid image group y1(n);

a feature similarity calculation module: from the edge structure features and edge saliency features, the edge structure similarity and the edge saliency similarity are computed;

a feature combination module: from the edge structure similarity and edge saliency similarity, the final local quality map is computed;

a feature pooling module: from the final local quality map, the final objective evaluation score is computed.
Preferably, in the image processing module:

a Gaussian pyramid and a Laplacian pyramid are used to decompose the original image into image groups at different scales, denoted y0(n) and y1(n) respectively.

In the edge saliency feature extraction module:

a luminance mask and a contrast mask are used to extract edge saliency features from the Gaussian-pyramid image group y0(n) and the Laplacian-pyramid image group y1(n) according to the following formulas:

where:

CLM(n) denotes the luminance-mask result, an image feature of the image groups produced by the Gaussian and Laplacian pyramids;
(symbol not reproduced) denotes the y function;
y1(n) denotes the image group after Laplacian pyramid processing;
y0(n) denotes the image group after Gaussian pyramid processing;
n denotes the pyramid level index;
layer y1(1) exhibits the edge structure features;
(symbol not reproduced) denotes the result of applying the y function to y0(n+1);
γ1 denotes the luminance-contrast threshold;
| · | denotes the absolute-value operation;
a1 is a constant that ensures numerical stability of the equation;
CLCM(n) denotes the contrast-mask result, an image feature of the image group derived from CLM(n);
CLCM(1) (n = 1) exhibits the edge saliency features;
a2 is a constant that ensures numerical stability of the equation;
(symbol not reproduced) denotes the result of applying the y function to CLM(n+1);
γ2 denotes the contrast-detectability threshold;
G(x, y; σ) denotes the Gaussian kernel;
* denotes convolution;
↑2 denotes upsampling by a factor of 2.
In the feature similarity calculation module:

From the obtained edge structure features and edge saliency features, the edge structure similarity is computed as follows:

where:

S1(x, y) denotes the edge structure similarity at point (x, y);
subscripts r and d indicate that a feature is taken from the reference image or the distorted image, respectively;
y1r(1)(x, y) denotes the edge structure feature of the reference image at point (x, y) for n = 1;
y1d(2)(x, y) denotes the edge structure feature of the distorted image at point (x, y) for n = 2;
T1 is a non-zero constant that ensures numerical stability of the equation.

From the obtained edge structure features and edge saliency features, the edge saliency similarity is computed as follows:

S2(x, y) = MS1(x, y)^α · MS2(x, y)

where:

S2(x, y) denotes the edge saliency similarity at point (x, y);
α denotes the weight of MS1(x, y) within S2(x, y);
MS1(x, y) denotes the edge structure similarity after application of the similarity function;
MS2(x, y) denotes the edge saliency similarity after application of the similarity function;
w1(x, y) denotes a weighting factor;
CLMr(1)(x, y) denotes the luminance-mask (LM) feature of the reference image at point (x, y) for n = 1;
CLMd(1)(x, y) denotes the luminance-mask (LM) feature of the distorted image at point (x, y) for n = 1;
T2 is a non-zero constant that ensures numerical stability of the equation;
Σ(x,y) w1(x, y) denotes the sum of w1(x, y) over all points of the image;
CLCMr(1)(x, y) denotes the contrast-mask (LCM) feature of the reference image at point (x, y) for n = 1;
CLCMd(1)(x, y) denotes the contrast-mask (LCM) feature of the distorted image at point (x, y) for n = 1;
CLCMd(2)(x, y) denotes the contrast-mask (LCM) feature of the distorted image at point (x, y) for n = 2;
T3 is a non-zero constant that ensures numerical stability of the equation.
Preferably, in the feature combination module:

From the obtained edge structure similarity and edge saliency similarity, the local quality similarity is computed as follows:

SQM(x, y) = (S1(x, y))^ξ · (S2(x, y))^ψ = (S1(x, y))^ξ · MS1(x, y)^μ · MS2(x, y)^ψ, where μ = ψ·α

where:

SQM(x, y) denotes the local quality similarity at point (x, y);
ξ denotes the weight of S1(x, y) in the local quality SQM(x, y);
ψ denotes the weight of MS2(x, y) in the local quality SQM(x, y);
μ denotes the weight of MS1(x, y) in the local quality SQM(x, y);
α denotes the weight of MS1(x, y) within S2(x, y).
In the feature pooling module:

From the obtained local quality map, the final objective evaluation score is computed as follows:

w2(x, y) = max(y1r(2)(x, y), y1d(2)(x, y))

where:

S denotes the final objective evaluation score;
w2(x, y) denotes the weighting parameter;
y1r(2)(x, y) denotes the edge structure feature of the reference image at point (x, y) for n = 2;
y1d(2)(x, y) denotes the edge structure feature of the distorted image at point (x, y) for n = 2.
The present invention further provides a computer-readable storage medium storing a computer program; when the computer program is executed by a processor, it implements the steps of any one of the above image objective quality evaluation methods based on joint multi-scale image features.
Compared with the prior art, the present invention has the following beneficial effects:

The present invention evaluates the quality of screen content images with higher accuracy. Validation on existing databases shows overall performance superior to the prior art, with particularly strong results on the screen-content distortion types of Gaussian blur, motion blur, and JPEG2000 compression.
Brief Description of the Drawings

Other features, objects, and advantages of the present invention will become more apparent from the detailed description of non-limiting embodiments, read with reference to the following drawing:

Figure 1 is a schematic diagram of the processing flow provided by the present invention.
Detailed Description
The present invention is described in detail below with reference to specific embodiments. The following embodiments will help those skilled in the art to further understand the invention, but do not limit it in any form. It should be noted that a person of ordinary skill in the art may make several variations and improvements without departing from the inventive concept, all of which fall within the protection scope of the present invention.
The image objective quality evaluation method based on joint multi-scale image features provided by the present invention comprises:

an image processing step: using a Gaussian pyramid and a Laplacian pyramid, the original image is decomposed into image groups at different scales, denoted y0(n) and y1(n) respectively, and edge structure features are extracted from y1(n);

an edge saliency feature extraction step: using a luminance mask and a contrast mask, edge saliency features are extracted from the Gaussian-pyramid image group y0(n) and the Laplacian-pyramid image group y1(n);

a feature similarity calculation step: from the edge structure features and edge saliency features, the edge structure similarity and the edge saliency similarity are computed;

a feature combination step: from the edge structure similarity and edge saliency similarity, the final local quality map is computed;

a feature pooling step: from the final local quality map, the final objective evaluation score is computed.
Specifically, in the image processing step:

a Gaussian pyramid and a Laplacian pyramid are used to decompose the original image into image groups at different scales, denoted y0(n) and y1(n) respectively.

Specifically, in the edge saliency feature extraction step:

a luminance mask and a contrast mask are used to extract edge saliency features from the Gaussian-pyramid image group y0(n) and the Laplacian-pyramid image group y1(n) according to the following formulas:

where:

CLM(n) denotes the luminance-mask result, an image feature of the image groups produced by the Gaussian and Laplacian pyramids;
(symbol not reproduced) denotes the y function;
y1(n) denotes the image group after Laplacian pyramid processing;
y0(n) denotes the image group after Gaussian pyramid processing;
n denotes the pyramid level index;
layer y1(1) exhibits the edge structure features;
(symbol not reproduced) denotes the result of applying the y function to y0(n+1);
γ1 denotes the luminance-contrast threshold;
| · | denotes the absolute-value operation;
a1 is a constant that ensures numerical stability of the equation;
CLCM(n) denotes the contrast-mask result, an image feature of the image group derived from CLM(n);
CLCM(1) (n = 1) exhibits the edge saliency features;
a2 is a constant that ensures numerical stability of the equation;
(symbol not reproduced) denotes the result of applying the y function to CLM(n+1);
γ2 denotes the contrast-detectability threshold;
G(x, y; σ) denotes the Gaussian kernel;
* denotes convolution;
↑2 denotes upsampling by a factor of 2.
具体地,所述特征相似度计算步骤:Specifically, the feature similarity calculation steps:
根据获得的边结构特征和边显著度特征,计算获得边结构相似度,计算公式如下:According to the obtained edge structure features and edge saliency features, the edge structure similarity is calculated and obtained, and the calculation formula is as follows:
其中,in,
S1(x,y)表示点(x,y)边结构相似度;S 1 (x, y) represents the structural similarity of the point (x, y) edge;
下标r与d分别表示该特征取自参考图片或失真图片;The subscripts r and d respectively indicate that the feature is taken from the reference picture or the distorted picture;
y1r (1)(x,y)表示参考图片在点(x,y)处n=1时的边结构特征;y 1r (1) (x, y) represents the edge structure feature of the reference picture when n=1 at point (x, y);
y1d (2)(x,y)表示失真图片在点(x,y)处n=2时的边结构特征;y 1d (2) (x, y) represents the edge structure feature of the distorted picture when n=2 at point (x, y);
T1表示是一个为了保证等式稳定性的非零常数;T 1 represents a non-zero constant to ensure the stability of the equation;
The edge-saliency similarity is computed from the obtained edge-structure and edge-saliency features, using the following formula:
S2(x,y) = MS1(x,y)^α · MS2(x,y)
where
S2(x,y) denotes the edge-saliency similarity at point (x,y);
α denotes the weight of MS1(x,y) within S2(x,y);
MS1(x,y) denotes the edge-structure similarity computed by the similarity function;
MS2(x,y) denotes the edge-saliency similarity computed by the similarity function;
w1(x,y) denotes the weight factor;
C_LMr^(1)(x,y) denotes the LM mask feature of the reference picture at point (x,y) with n = 1;
C_LMd^(1)(x,y) denotes the LM mask feature of the distorted picture at point (x,y) with n = 1;
T2 denotes a non-zero constant that ensures the stability of the equation;
Σ_(x,y) w1(x,y) denotes the sum of w1(x,y) over all points of the picture;
C_LCMr^(1)(x,y) denotes the LCM mask feature of the reference picture at point (x,y) with n = 1;
C_LCMd^(1)(x,y) denotes the LCM mask feature of the distorted picture at point (x,y) with n = 1;
C_LCMd^(2)(x,y) denotes the LCM mask feature of the distorted picture at point (x,y) with n = 2;
T3 denotes a non-zero constant that ensures the stability of the equation.
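The similarity formulas themselves are likewise images in the source. Metrics of this family (e.g. SSIM and FSIM) typically use the ratio sketched below, which equals 1 only when the two features agree, with a small constant T preventing instability near zero; assuming that standard form, and a w1-weighted pooling matching the accumulation term Σ_(x,y) w1(x,y):

```python
def similarity(a, b, t):
    """SSIM-style similarity ratio: 1.0 iff a == b; t keeps it stable.
    Assumed form -- the patent's exact equations are images in the source."""
    return (2 * a * b + t) / (a * a + b * b + t)

def weighted_mean(values, weights):
    """Pool per-pixel similarities with weights w1(x, y)."""
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)
```

The preferred example later fixes the stabilizing constants at T1 = 0.07, T2 = 1×10^-50, and T3 = 0.01.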
Specifically, the feature-combination step:
The local quality similarity is computed from the obtained edge-structure similarity and edge-saliency similarity, using the following formula:
S_QM(x,y) = (S1(x,y))^ξ · (S2(x,y))^ψ
         = (S1(x,y))^ξ · MS1(x,y)^μ · MS2(x,y)^ψ
μ = ψ·α
where
S_QM(x,y) denotes the local quality similarity at point (x,y);
ξ denotes the weight of S1(x,y) within the local quality S_QM(x,y);
ψ denotes the weight of MS2(x,y) within the local quality S_QM(x,y);
μ denotes the weight of MS1(x,y) within the local quality S_QM(x,y);
α denotes the weight of MS1(x,y) within S2(x,y).
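The combination step is a per-pixel weighted product of the similarity maps. A minimal sketch follows; the relation μ = ψ·α is stated by the patent, but the default exponent values 1.8, 0.02, and 0.9 come from the preferred example later in the text, and their mapping onto ξ, ψ, and α here is my assumption:

```python
def local_quality(s1, ms1, ms2, xi=1.8, psi=0.02, alpha=0.9):
    """S_QM(x,y) = S1^xi * MS1^mu * MS2^psi, with mu = psi * alpha.
    Exponent assignments are assumed from the preferred example."""
    mu = psi * alpha  # relation fixed by the patent: mu = psi * alpha
    return (s1 ** xi) * (ms1 ** mu) * (ms2 ** psi)
```

When all three similarities equal 1 (reference and distorted picture agree), the local quality is 1; any disagreement pulls it below 1.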
Specifically, the feature-pooling step:
The final objective evaluation score is computed from the obtained local quality map similarity, using the following formula:
w2(x,y) = max(y1r^(2)(x,y), y1d^(2)(x,y))
where
S denotes the final objective evaluation score;
w2(x,y) denotes the weight parameter;
y1r^(2)(x,y) denotes the edge-structure feature of the reference picture at point (x,y) with n = 2;
y1d^(2)(x,y) denotes the edge-structure feature of the distorted picture at point (x,y) with n = 2.
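The pooling equation itself appears as an image in the source; only the weight w2 is stated. Assuming the standard weighted-average form used by comparable metrics, S = Σ w2·S_QM / Σ w2, the step can be sketched as:

```python
def pooled_score(quality_map, ref_edge, dist_edge):
    """Weighted average of the local quality map with w2 = max(ref, dist).
    The averaging form is an assumption; only w2's definition is stated."""
    num = den = 0.0
    for q, r, d in zip(quality_map, ref_edge, dist_edge):
        w = max(r, d)  # w2(x, y) per the patent
        num += w * q
        den += w
    return num / den
```

Taking the maximum of the reference and distorted edge features means pixels with strong edges in either picture dominate the score, which matches the method's emphasis on edge structure.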
The objective quality evaluation system for desktop-content pictures based on joint multi-scale picture features provided by the present invention can be implemented through the steps of the objective picture-quality evaluation method based on joint multi-scale picture features provided by the present invention. Those skilled in the art may regard the objective picture-quality evaluation method based on joint multi-scale picture features as a preferred embodiment of the objective quality evaluation system for desktop-content pictures based on joint multi-scale picture features.
According to the present invention, an objective quality evaluation system for desktop-content pictures based on joint multi-scale picture features comprises:
Picture-processing module: uses a Gaussian pyramid and a Laplacian pyramid to decompose the original picture into picture groups at different scales, denoted y0^(n) and y1^(n) respectively, and extracts the edge-structure feature from y1^(n);
Edge-saliency feature extraction module: uses a luminance mask and a contrast mask to extract the edge-saliency feature from the Gaussian-pyramid picture group y0^(n) and the Laplacian-pyramid picture group y1^(n);
Feature-similarity calculation module: computes the edge-structure similarity and the edge-saliency similarity from the obtained edge-structure and edge-saliency features;
Feature-combination module: computes the final local quality map from the obtained edge-structure similarity and edge-saliency similarity;
Feature-pooling module: computes the final objective evaluation score from the obtained final local quality map.
Specifically, the picture-processing module:
uses a Gaussian pyramid and a Laplacian pyramid to decompose the original picture into picture groups at different scales, denoted y0^(n) and y1^(n) respectively.
The edge-saliency feature extraction module:
uses a luminance mask and a contrast mask to extract the edge-saliency feature from the Gaussian-pyramid picture group y0^(n) and the Laplacian-pyramid picture group y1^(n), using the following formulas:
where
C_LM^(n) denotes the luminance-mask result, an image feature of the picture groups produced by the Gaussian and Laplacian pyramids;
the first term denotes the y function;
y1^(n) denotes the picture group produced by the Laplacian pyramid;
y0^(n) denotes the picture group produced by the Gaussian pyramid;
n denotes the layer index;
layer y1^(1) exhibits the edge-structure feature;
the next term denotes the result of substituting y0^(n+1) into the y function;
γ1 denotes the luminance-contrast threshold;
|·| denotes the absolute-value operation;
a1 denotes a constant that ensures the stability of the equation;
C_LCM^(n) denotes the contrast-mask result, an image feature of the picture group produced from C_LM^(n);
C_LCM^(1) (n = 1) exhibits the edge-saliency feature;
a2 denotes a constant that ensures the stability of the equation;
the next term denotes the result of substituting C_LM^(n+1) into the y function;
γ2 denotes the contrast-detectability threshold;
G(x,y;σ) denotes the Gaussian kernel function;
* denotes convolution;
↑2 denotes upsampling by a factor of two.
The feature-similarity calculation module:
computes the edge-structure similarity from the obtained edge-structure and edge-saliency features, using the following formula:
where
S1(x,y) denotes the edge-structure similarity at point (x,y);
the subscripts r and d indicate that the feature is taken from the reference picture and the distorted picture, respectively;
y1r^(1)(x,y) denotes the edge-structure feature of the reference picture at point (x,y) with n = 1;
y1d^(2)(x,y) denotes the edge-structure feature of the distorted picture at point (x,y) with n = 2;
T1 denotes a non-zero constant that ensures the stability of the equation;
and computes the edge-saliency similarity from the obtained edge-structure and edge-saliency features, using the following formula:
S2(x,y) = MS1(x,y)^α · MS2(x,y)
where
S2(x,y) denotes the edge-saliency similarity at point (x,y);
α denotes the weight of MS1(x,y) within S2(x,y);
MS1(x,y) denotes the edge-structure similarity computed by the similarity function;
MS2(x,y) denotes the edge-saliency similarity computed by the similarity function;
w1(x,y) denotes the weight factor;
C_LMr^(1)(x,y) denotes the LM mask feature of the reference picture at point (x,y) with n = 1;
C_LMd^(1)(x,y) denotes the LM mask feature of the distorted picture at point (x,y) with n = 1;
T2 denotes a non-zero constant that ensures the stability of the equation;
Σ_(x,y) w1(x,y) denotes the sum of w1(x,y) over all points of the picture;
C_LCMr^(1)(x,y) denotes the LCM mask feature of the reference picture at point (x,y) with n = 1;
C_LCMd^(1)(x,y) denotes the LCM mask feature of the distorted picture at point (x,y) with n = 1;
C_LCMd^(2)(x,y) denotes the LCM mask feature of the distorted picture at point (x,y) with n = 2;
T3 denotes a non-zero constant that ensures the stability of the equation.
Specifically, the feature-combination module:
computes the local quality similarity from the obtained edge-structure similarity and edge-saliency similarity, using the following formula:
S_QM(x,y) = (S1(x,y))^ξ · (S2(x,y))^ψ
         = (S1(x,y))^ξ · MS1(x,y)^μ · MS2(x,y)^ψ
μ = ψ·α
where
S_QM(x,y) denotes the local quality similarity at point (x,y);
ξ denotes the weight of S1(x,y) within the local quality S_QM(x,y);
ψ denotes the weight of MS2(x,y) within the local quality S_QM(x,y);
μ denotes the weight of MS1(x,y) within the local quality S_QM(x,y);
α denotes the weight of MS1(x,y) within S2(x,y).
The feature-pooling module:
computes the final objective evaluation score from the obtained local quality map similarity, using the following formula:
w2(x,y) = max(y1r^(2)(x,y), y1d^(2)(x,y))
where
S denotes the final objective evaluation score;
w2(x,y) denotes the weight parameter;
y1r^(2)(x,y) denotes the edge-structure feature of the reference picture at point (x,y) with n = 2;
y1d^(2)(x,y) denotes the edge-structure feature of the distorted picture at point (x,y) with n = 2.
According to the present invention, there is provided a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of any of the above methods for objective picture-quality evaluation based on joint multi-scale picture features.
The present invention is described in more detail below through preferred examples.
Preferred Example 1:
The present invention proposes an objective quality evaluation method for desktop-content pictures based on joint multi-scale features, which extracts the edge-structure feature and the edge-saliency feature of a picture to evaluate its degree of distortion. Specifically, a picture-feature extraction scheme is designed around the characteristics of the human visual system; two picture features are extracted:
1. the edge-structure feature;
2. the edge-saliency feature.
To achieve the above objectives, the present invention adopts the following technical solutions:
1. Use a Gaussian pyramid and a Laplacian pyramid to decompose the original picture into picture groups at different scales, denoted y0^(n) and y1^(n) respectively; with n = 1, extract the edge-structure feature from y1^(n).
2. Use luminance masking (LM) and contrast masking (Luminance Contrast Masking, LCM) to extract the edge-saliency feature from the two pyramids, computed as follows:
where G(x,y;σ) denotes the Gaussian kernel function, * denotes convolution, and ↑2 denotes upsampling by a factor of two.
The luminance-mask result C_LM^(n) detects luminance changes perceptible to the human eye, so γ1 is set to 1 according to the Buchsbaum curve; the contrast-mask result C_LCM^(n) detects contrast changes perceptible to the human eye, so γ2 is set to 0.62 according to the contrast-detectability threshold.
With n = 1, extract the edge-saliency feature from C_LCM^(n).
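The pyramid construction in step 1 is a standard operation; a 1-D sketch is shown below, with a 3-tap binomial kernel and nearest-neighbor upsampling standing in for the Gaussian kernel G(x,y;σ) and the ↑2 operator (both simplifications are mine, chosen to keep the example dependency-free).

```python
def gaussian_pyramid_1d(signal, levels=3):
    """1-D analogue of y0^(n): repeatedly smooth, then downsample by 2."""
    pyr = [list(signal)]
    for _ in range(levels - 1):
        prev = pyr[-1]
        # 3-tap binomial filter with edge replication, then keep every 2nd sample
        smoothed = [
            (prev[max(i - 1, 0)] + 2 * prev[i] + prev[min(i + 1, len(prev) - 1)]) / 4
            for i in range(len(prev))
        ]
        pyr.append(smoothed[::2])
    return pyr

def laplacian_pyramid_1d(pyr):
    """1-D analogue of y1^(n): each layer minus the upsampled coarser layer."""
    lap = []
    for fine, coarse in zip(pyr, pyr[1:]):
        up = [coarse[min(i // 2, len(coarse) - 1)] for i in range(len(fine))]
        lap.append([f - u for f, u in zip(fine, up)])
    return lap
```

For a constant signal the Laplacian layers are zero everywhere, reflecting that y1^(n) carries only edge and detail information.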
3. Feature-similarity calculation
Edge-structure similarity: S1(x,y);
Edge-saliency similarity: S2(x,y) = MS1(x,y)^α · MS2(x,y),
where the subscripts r and d indicate that the feature is taken from the reference picture and the distorted picture, respectively; w1(x,y) = y1r^(2); and T1, T2, T3 are set to 0.07, 1×10^-50, and 0.01, respectively.
4. Feature combination
The final local quality map is:
S_QM(x,y) = (S1(x,y))^ξ · (S2(x,y))^ψ
         = (S1(x,y))^ξ · MS1(x,y)^μ · MS2(x,y)^ψ,
where μ = ψ·α, and the three exponents are set to 1.8, 0.02, and 0.9, respectively.
5. Feature pooling
The final objective evaluation score is:
where w2 = max(y1r^(2)(x,y), y1d^(2)(x,y)).
To demonstrate the effectiveness of the above model, tests were performed on SIQAD, an authoritative database of desktop-content pictures. The SIQAD database contains 20 reference pictures, each associated with 7 distortion types at 7 levels, for a total of 980 distorted pictures. The 7 distortion types are Gaussian noise (GN), Gaussian blur (GB), motion blur (MB), contrast change (CC), JPEG compression (JPEG), JPEG2000 compression (J2K), and layer-segmentation-based coding (LSC).
Three indicators proposed by the VQEG expert group specifically to measure the consistency between subjective and objective scores are used to judge the merit of the model: the Pearson linear correlation coefficient (PLCC), the root mean squared error (RMSE), and the Spearman rank-order correlation coefficient (SROCC), computed as follows:
where m and Q denote the subjective and objective scores, their overbarred counterparts denote the means of the subjective and objective scores, and d_i denotes the difference between the subjective-score rank and the objective-score rank of the i-th picture. PLCC and SROCC lie between 0 and 1; the closer to 1, the better the agreement between subjective and objective scores. The smaller the RMSE, the smaller the gap between subjective and objective scores and the better the model's performance.
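Under the definitions above, the three indicators can be computed in plain Python. Note that in the full VQEG protocol, PLCC and RMSE are usually computed after a nonlinear regression of objective onto subjective scores; that fitting step is omitted in this sketch.

```python
import math

def plcc(m, q):
    """Pearson linear correlation coefficient between score lists m and q."""
    mm, mq = sum(m) / len(m), sum(q) / len(q)
    num = sum((a - mm) * (b - mq) for a, b in zip(m, q))
    den = math.sqrt(sum((a - mm) ** 2 for a in m) * sum((b - mq) ** 2 for b in q))
    return num / den

def rmse(m, q):
    """Root mean squared error between score lists m and q."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(m, q)) / len(m))

def srocc(m, q):
    """Spearman rank-order correlation: 1 - 6*sum(d_i^2) / (n*(n^2-1))."""
    def ranks(x):
        order = sorted(range(len(x)), key=lambda i: x[i])
        r = [0] * len(x)
        for rank, i in enumerate(order):
            r[i] = rank + 1
        return r
    rm, rq = ranks(m), ranks(q)
    n = len(m)
    d2 = sum((a - b) ** 2 for a, b in zip(rm, rq))
    return 1 - 6 * d2 / (n * (n * n - 1))
```

This rank-difference form of SROCC assumes no tied scores; with ties, Pearson correlation of the (tie-averaged) ranks should be used instead.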
Table 1 gives the test results on the SIQAD database, in which PSNR, SSIM, MSSIM, IWSSIM, VIF, IFC, FSIM, and SCQ are quality-evaluation methods designed for natural pictures, while SIQM, SQI, ESIM, MDOGS, and GFM are objective quality-evaluation methods designed in recent years for desktop-content pictures. Comparing the data of the methods shows that:
For overall performance, the present invention ranks first on the PLCC and RMSE indicators and second on SROCC;
For individual distortion types, the present invention obtains 9 first places and 1 third place, clearly outperforming the other methods, with a significant advantage when evaluating the distortion types GB, MB, and J2K.
Table 1. Test results on the SIQAD database:
Those skilled in the art will appreciate that, in addition to implementing the system, apparatus, and modules provided by the present invention purely as computer-readable program code, the method steps can be logically programmed so that the same functions are realized in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Therefore, the system, apparatus, and modules provided by the present invention may be regarded as hardware components, and the modules they contain for realizing various programs may likewise be regarded as structures within a hardware component; modules for realizing various functions may be regarded both as software programs implementing a method and as structures within a hardware component.
Specific embodiments of the present invention have been described above. It should be understood that the present invention is not limited to the above specific embodiments; those skilled in the art can make various changes or modifications within the scope of the claims without affecting the substance of the present invention. The embodiments of the present application, and the features within them, may be combined with one another arbitrarily provided there is no conflict.
Claims (10)
Priority application: CN201910122634.7A, filed 2019-02-19.
Publications: CN111598826A, published 2020-08-28; CN111598826B, granted 2023-05-02.