CN104376565A - Non-reference image quality evaluation method based on discrete cosine transform and sparse representation - Google Patents

Publication number: CN104376565A (application CN201410695579.8A; granted as CN104376565B)
Applicant and assignee: Xidian University
Inventors: 张小华, 焦李成, 温阳, 王爽, 钟桦, 田小林, 朱虎明
Legal status: Granted; Expired - Fee Related

Classifications

    • G: Physics
    • G06: Computing; Calculating or Counting
    • G06T: Image Data Processing or Generation, in General
    • G06T7/00: Image analysis
    • G06T7/0002: Inspection of images, e.g. flaw detection
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20052: Discrete cosine transform [DCT]
    • G06T2207/30168: Image quality inspection


Abstract

The invention discloses a no-reference image quality assessment method based on the discrete cosine transform and sparse representation, which mainly addresses the inaccuracy of no-reference image quality assessment in the prior art. The implementation steps are: input a grayscale image, apply the discrete cosine transform to it, and extract natural scene statistical features; extract the natural scene statistical features of a series of images with different distortion types and different content, and combine them with the mean subjective difference scores to construct an original feature dictionary; cluster the original feature dictionary, and adaptively select atoms to form a sparse representation dictionary according to the similarity between the test image's features and each class in the original feature dictionary; use sparse representation to span the test image's features in the feature space and compute the sparse representation coefficients, then take a linear weighted sum with the subjective evaluation values in the sparse representation dictionary to obtain the image quality measure. The invention agrees well with subjective evaluation results and is suitable for quality assessment of images with various distortion types.

Description

No-reference Image Quality Assessment Method Based on Discrete Cosine Transform and Sparse Representation

Technical Field

The invention belongs to the field of image processing and relates to the objective evaluation of image quality; it can be used in image acquisition, coding and compression, and network transmission.

Background

Images are an important way for humans to obtain information. Image quality describes an image's ability to provide information to people or devices, and directly determines the adequacy and accuracy of the information obtained. However, during acquisition, processing, transmission, and storage, images inevitably degrade under the influence of various factors, which greatly complicates information extraction and later processing. It is therefore very important to establish an effective image quality assessment mechanism. For example, in image denoising, image fusion, and similar processing, quality assessment can be used for performance comparison and parameter selection across algorithms; in image coding and communication it can guide the entire transmission process and evaluate system performance; it is also of great significance in image processing algorithm optimization, biometric recognition, and other scientific fields.

Image quality assessment methods can be divided into subjective and objective methods. Subjective methods evaluate image quality through the perception of human observers, while objective methods measure it through quantitative indices produced by models that simulate the perceptual mechanisms of the human visual system. Since humans are the final recipients of images, subjective quality assessment is the most reliable approach. The most commonly used subjective methods are the mean opinion score (MOS) and the differential mean opinion score (DMOS). However, because their results are easily influenced by subjective factors, they are time-consuming for large numbers of images and cannot be automated, so research into objective image quality assessment methods is particularly important.

According to the degree of dependence on a reference image, objective image quality assessment can be divided into full-reference (FR-IQA), reduced-reference (RR-IQA), and no-reference (NR-IQA) image quality assessment.

The greatest advantage of full-reference (FR-IQA) methods is their accurate prediction of distorted image quality. Commonly used full-reference methods include pixel-error statistics such as MSE and PSNR, the structural similarity index SSIM, and methods based on the human visual system (HVS). However, because these methods require complete prior knowledge of the original image, large amounts of data must be stored and transmitted, which limits their application in many practical settings; reduced-reference image quality assessment methods have therefore become one of the hot topics of research.

Reduced-reference (RR-IQA) methods do not require the complete original reference image, but they do need a combination of features from the reference image to obtain the quality score of the distorted image. Although they maintain good accuracy while reducing the amount of information that must be transmitted, part of the original image's information still has to be transmitted. In most practical applications, this information is simply unavailable or very costly to obtain.

No-reference (NR-IQA) methods assess the quality of a distorted image directly, without any prior information about the original image. Because our current understanding of the human visual system and the corresponding cognitive processes in the brain is limited, designing and implementing such algorithms is more difficult. Existing no-reference methods include the JPEG2000-specific method of Sazzad et al., "Z. M. P. Sazzad, Y. Kawayoke, and Y. Horita, No-reference image quality assessment for JPEG2000 based on spatial features, Signal Process. Image Commun., vol. 23, no. 4, pp. 257–268, Apr. 2008", which targets only JPEG2000 compression and is not suitable for evaluating the effects of blur, noise, and other distortions. Moorthy et al. proposed a learning-based method, "A. K. Moorthy and A. C. Bovik, A two-step framework for constructing blind image quality indices, IEEE Signal Process. Lett., vol. 17, no. 5, pp. 513–516, May 2010.", which directly models the mapping between image features and quality; both the mathematical model and its parameters must be trained or set by hand, and because the mapping cannot be modeled precisely, the final quality predictions are not accurate enough. Anush Krishna Moorthy et al. also proposed a method based on natural scene statistics, "the Distortion Identification-based Image Verity and INtegrity Evaluation (DIIVINE) index", whose results are highly consistent with subjective evaluation; however, it extracts as many as 88 feature dimensions per image, the feature extraction process takes too long, and the large feature dimensionality is inconvenient for training classification and regression models.

Summary of the Invention

The purpose of the present invention is to address the shortcomings of the above existing methods by proposing a no-reference image quality assessment method based on the discrete cosine transform and sparse representation, which reduces the dimensionality of the extracted image features while making full use of the extracted information, models the image feature information efficiently by sparse representation, and improves the accuracy of the quality evaluation results.

The technical solution that achieves the object of the present invention comprises the following steps:

(1) Read in a grayscale image I, apply the discrete cosine transform to it, and extract a set of natural scene statistical features f associated with subjective perception;

(2) Construct the original feature dictionary of the training images;

(2a) Repeat step (1) to extract the natural scene statistical features of n training images, forming the feature matrix $F = [f_1, f_2, \ldots, f_i, \ldots, f_n]$, where $f_i$ is the natural scene statistical feature of the i-th training image, $i = 1, 2, \ldots, n$;

(2b) Collect the mean subjective difference scores of the n training images into the quality vector $M = [m_1, m_2, \ldots, m_i, \ldots, m_n]$, where $m_i$ is the mean subjective difference score of the i-th training image, $i = 1, 2, \ldots, n$;

(2c) Combine the quality vector M with the feature matrix F to construct the original feature dictionary D:

$$D = \begin{bmatrix} F \\ M \end{bmatrix} = [\bar{f}_1, \bar{f}_2, \ldots, \bar{f}_i, \ldots, \bar{f}_n], \qquad \text{where } \bar{f}_i = \begin{bmatrix} f_i \\ m_i \end{bmatrix}, \; i = 1, 2, \ldots, n;$$

(3) Use the K-means algorithm to cluster the original feature dictionary D into H classes. The cluster center of the k-th class is $C_k = \begin{bmatrix} Fc_k \\ Mc_k \end{bmatrix}$, where $Fc_k$ is the feature cluster center and $Mc_k$ is the quality cluster center, $k = 1, 2, \ldots, H$;

(4) Read in a test image I', repeat step (1), and extract the natural scene statistical feature f' of the test image I';

(5) Compute the Euclidean distance $dis_k$ between the natural scene statistical feature f' of the test image I' and the feature cluster center $Fc_k$ of the k-th class in the original feature dictionary D, and let $P_k$ denote the similarity between the test image and the k-th class of the original feature dictionary D:

$$P_k = \frac{1/dis_k}{\sum_{k=1}^{H} 1/dis_k}, \qquad k = 1, 2, \ldots, H;$$

(6) According to the similarity $P_k$, select from the k-th class of the original feature dictionary D the $N_k$ samples closest in Euclidean distance to the feature cluster center $Fc_k$ as sparse representation atoms $\bar{f}_i^k$ for the test image I', where $i = 1, 2, \ldots, N_k$, $k = 1, 2, \ldots, H$. The atoms of the H classes together form the sparse representation dictionary D' for the test image I':

$$D' = [D_1, D_2, \ldots, D_k, \ldots, D_H] = \begin{bmatrix} F' \\ M' \end{bmatrix}, \quad \text{where } D_k = [\bar{f}_1^k, \bar{f}_2^k, \ldots, \bar{f}_i^k, \ldots, \bar{f}_{N_k}^k], \; \bar{f}_i^k = \begin{bmatrix} f_i^k \\ m_i^k \end{bmatrix},$$

$f_i^k$ is the natural scene statistical feature of the i-th sample in the k-th class selected from the original feature dictionary D, $m_i^k$ is the mean subjective difference score of the i-th sample in the k-th class, $i = 1, 2, \ldots, N_k$, $N_k = \delta \cdot P_k$, $k = 1, 2, \ldots, H$, and δ is a positive constant;

(7) Solve, by the method of sparse representation, for the sparse representation coefficients of the natural scene statistical feature f' of the test image I' under the feature matrix F':

$$\alpha^* = \arg\min_{\alpha \in \mathbb{R}^L} \; \lambda \cdot \|\alpha\|_1 + \|f' - F'\alpha\|_2,$$

where arg min assigns to $\alpha^*$ the $\alpha = [\alpha_1, \alpha_2, \ldots, \alpha_k, \ldots, \alpha_H]^T$ that minimizes the objective, $k = 1, 2, \ldots, H$, $i = 1, 2, \ldots, N_k$; $\mathbb{R}^L$ denotes the L-dimensional real space, $\|\beta\|_1$ denotes the 1-norm of a vector $\beta = [\beta_1, \beta_2, \ldots, \beta_l, \ldots, \beta_L]^T$, $\|\beta\|_2$ denotes its 2-norm, $l = 1, 2, \ldots, L$, λ is a positive constant that balances the fidelity term and the regularization term, and T denotes transposition;

(8) Compute the quality of the test image I' from the constructed sparse representation:

$$Q = \sum_{k=1}^{H} \sum_{i=1}^{N_k} \alpha_{k,i} \, m_i^k, \qquad Q \in [0, 100],$$

where $m_i^k$ is the quality score of the i-th atom $f_i^k$ of the k-th class in the sparse representation dictionary D', $\alpha_{k,i}$ is the representation coefficient of the natural scene statistical feature f' of the test image I' on the i-th atom $f_i^k$ of the k-th class in the feature matrix F', and Q is the final quality measure of the test image I'.

Compared with the prior art, the present invention has the following advantages:

1. The present invention makes full use of the information in the database by training a dictionary on the natural scene statistical features of images with different distortion types, so it can assess the quality of images with different kinds of distortion. Compared with most existing no-reference image quality assessment algorithms, which target only specific distortion types, it has a wider range of applicability.

2. Using the trained feature dictionary, the present invention evaluates image quality directly and effectively without any information from the original reference image, which makes it more convenient and more widely applicable than full-reference and reduced-reference image quality assessment methods.

3. For a given test image, the present invention adaptively selects the corresponding sparse representation dictionary from the trained image feature dictionary; compared with other existing algorithms, its evaluation results are more consistent with subjective judgments.

Brief Description of the Drawings

Fig. 1 is the implementation flowchart of the present invention;

Fig. 2 shows the natural scene statistical features extracted from the grayscale image I after the discrete cosine transform;

Fig. 3 shows the result of K-means clustering of the original feature dictionary D;

Fig. 4 compares the objective evaluation scores produced by the present invention on the test images with the mean subjective difference scores.

Detailed Description

The specific implementation steps and effects of the present invention are described in further detail below with reference to the accompanying drawings.

Referring to Fig. 1, the implementation steps of the present invention are as follows.

Step 1. Extract the natural scene statistical features $f_{s,1}$ of the grayscale image I at the first scale.

(1a) Read in a grayscale image I, partition it into overlapping 5×5 image blocks, apply the discrete cosine transform to each block, and remove the DC component of the DCT coefficients;

(1b) Fit a generalized Gaussian distribution model to the DC-removed DCT coefficients of each image block to obtain the shape factor γ of each block. Take the mean of γ over all blocks as the first element $f_{s,1,1}$ of the first-scale natural scene statistical feature $f_{s,1}$; sort the shape factors of all blocks in ascending order and take the mean of the lowest 10% as the second element $f_{s,1,2}$. The generalized Gaussian distribution model used to estimate the shape factor γ is

$$f(x \mid \alpha, \beta, \gamma) = \alpha \, e^{-(\beta |x - \mu|)^{\gamma}}, \qquad \beta = \frac{1}{\sigma} \sqrt{\frac{\Gamma(3/\gamma)}{\Gamma(1/\gamma)}},$$

where μ is the mean, σ is the standard deviation, γ is the shape factor, and β is the scale factor;
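The shape factor in step (1b) can be estimated by moment matching rather than iterative likelihood maximization. The sketch below is an assumption (the patent does not specify a fitting procedure, and `ggd_shape` is a hypothetical helper name); it inverts the standard moment ratio of a generalized Gaussian with a bracketing root finder:

```python
import numpy as np
from scipy.special import gamma as G
from scipy.optimize import brentq

def ggd_shape(coeffs):
    """Moment-matching estimate of the GGD shape factor gamma.

    For a zero-mean GGD, E[x^2] / E[|x|]^2 equals
    Gamma(1/g) * Gamma(3/g) / Gamma(2/g)^2; invert that ratio numerically.
    """
    x = np.asarray(coeffs, dtype=float).ravel()
    x = x - x.mean()                       # DC component already removed in (1a)
    ratio = np.mean(x ** 2) / (np.mean(np.abs(x)) ** 2 + 1e-12)
    rho = lambda g: G(1.0 / g) * G(3.0 / g) / G(2.0 / g) ** 2 - ratio
    return brentq(rho, 0.05, 10.0)         # gamma = 2 recovers the Gaussian
```

For Gaussian data the estimate is close to 2 and for Laplacian data close to 1, matching the GGD's known special cases.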

(1c) Compute the frequency variation coefficient $\zeta = \sigma_{|x|} / \mu_{|x|}$ of each image block, where $\mu_{|x|}$ and $\sigma_{|x|}$ are the mean and standard deviation of the absolute values of the block's DC-removed DCT coefficients, and |·| denotes the absolute value. Take the mean of ζ over all blocks as the third element $f_{s,1,3}$ of the first-scale natural scene statistical feature $f_{s,1}$; sort the coefficients of all blocks in descending order and take the mean of the highest 10% as the fourth element $f_{s,1,4}$;

(1d) Divide the DCT coefficients of each image block into 3 frequency bands, ordered from high frequency to low frequency, and compute the band energy change rate $R_n$ of each block as

$$R_n = \frac{\left| E_n - \frac{1}{n-1} \sum_{j=1}^{n-1} E_j \right|}{E_n + \frac{1}{n-1} \sum_{j=1}^{n-1} E_j}, \qquad E_n = \sigma_n^2,$$

where $\sigma_n^2$ is the variance of the n-th band, $E_n$ is the energy of the n-th band, n = 1, 2, 3 indexes the bands, and j is a positive integer;
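A sketch of step (1d) for one 5×5 block follows. The exact band partition is not given in the text, so grouping coefficients by the anti-diagonal frequency index u+v into three bands is an assumption, as is the helper name `band_energy_ratios`:

```python
import numpy as np

def band_energy_ratios(dct_block):
    """Band-energy change rates R_n for one DCT block: each band's
    energy is its coefficient variance, compared against the mean
    energy of the preceding bands (band partition is an assumption)."""
    c = np.asarray(dct_block, dtype=float)
    n = c.shape[0]
    freq = np.add.outer(np.arange(n), np.arange(n))   # u + v frequency index
    bands = [c[(freq > 0) & (freq <= n - 1)],         # band 1 (DC excluded)
             c[(freq > n - 1) & (freq <= 2 * n - 4)], # band 2
             c[freq > 2 * n - 4]]                     # band 3
    E = [np.var(b) for b in bands]                    # E_n = variance of band n
    R = []
    for k in range(1, 3):                             # defined for bands with predecessors
        lower = np.mean(E[:k])
        R.append(abs(E[k] - lower) / (E[k] + lower + 1e-12))
    return R
```

Each ratio lies in [0, 1] by construction, since it has the form |a − b| / (a + b) with nonnegative a, b.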

(1e) Take the mean of the band energy change rate $R_n$ over all image blocks as the fifth element $f_{s,1,5}$ of the first-scale natural scene statistical feature $f_{s,1}$; sort the $R_n$ of all blocks in descending order and take the mean of the highest 10% as the sixth element $f_{s,1,6}$;

(1f) Divide the DCT coefficients of each image block into sub-bands along the three orientations 45°, 90°, and 135°, fit a generalized Gaussian distribution to each orientation sub-band to obtain the frequency variation coefficients of the 3 orientation sub-bands, and compute the variance $\sigma_\zeta^2$ of these 3 frequency variation coefficients;

(1g) Take the mean of the variance $\sigma_\zeta^2$ of the 3 orientation sub-bands' frequency variation coefficients over all image blocks as the seventh element $f_{s,1,7}$ of the first-scale natural scene statistical feature $f_{s,1}$; sort the variances of all blocks in descending order and take the mean of the highest 10% as the eighth element $f_{s,1,8}$;

(1h) Obtain the 8-dimensional natural scene statistical feature of the grayscale image I at the first scale:

$$f_{s,1} = [f_{s,1,1}, f_{s,1,2}, \ldots, f_{s,1,i}, \ldots, f_{s,1,8}], \qquad i = 1, 2, \ldots, 8.$$
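Steps (1b) through (1g) share one pooling rule: each per-block statistic contributes two feature elements, its mean over all blocks and the mean of the most extreme 10% of blocks (the lowest 10% for the shape factor, the highest 10% for the other statistics). A minimal sketch, with the hypothetical helper name `pool`:

```python
import numpy as np

def pool(values, lowest=False):
    """Return (mean over all blocks, mean of the extreme 10% of blocks).

    `lowest=True` pools the smallest 10% (used for the shape factor);
    otherwise the largest 10% (used for the remaining statistics).
    """
    v = np.sort(np.asarray(values, dtype=float))
    k = max(1, int(np.ceil(0.1 * v.size)))   # at least one block in the tail
    tail = v[:k] if lowest else v[-k:]
    return float(v.mean()), float(tail.mean())
```

Applying `pool` to the four per-block statistics of step 1 yields the 8 elements of $f_{s,1}$.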

Step 2. Extract the natural scene statistical features f of the grayscale image I at multiple scales.

(2a) Downsample the grayscale image I to obtain the image I2, and repeat Step 1 on I2 to obtain the natural scene statistical features $f_{s,2}$ of the grayscale image I at the second scale;

(2b) Downsample the image I2 again to obtain the twice-downsampled image I3, and repeat Step 1 on I3 to obtain the natural scene statistical features $f_{s,3}$ of the grayscale image I at the third scale;

(2c) Obtain the natural scene statistical features of the grayscale image I as $f = [f_{s,1}, f_{s,2}, f_{s,3}]^T$, where T denotes transposition.
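Step 2 can be sketched as follows, with `extract` standing in for the step-1 feature extractor and simple dyadic subsampling assumed for the unspecified downsampling operation:

```python
import numpy as np

def multiscale_features(img, extract, n_scales=3):
    """Concatenate per-scale features, downsampling by 2 between scales.

    `extract` returns the 8-element feature vector of step 1 for one
    scale; the result is the 24-dimensional feature f of step (2c).
    """
    feats = []
    for _ in range(n_scales):
        feats.append(np.asarray(extract(img), dtype=float))
        img = img[::2, ::2]   # dyadic subsampling (assumed downsampling scheme)
    return np.concatenate(feats)
```

With three scales of 8 features each, the output is the 24-dimensional vector f.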

Step 3. Extract the natural scene statistical feature f' of the test image I' according to Steps 1 and 2.

Step 4. Construct the original feature dictionary D.

(4a) Perform Steps 1 and 2 in turn on each image in the training set to extract the natural scene statistical features of the n training images, forming the feature matrix $F = [f_1, f_2, \ldots, f_i, \ldots, f_n]$, where $f_i$ is the natural scene statistical feature of the i-th training image, $i = 1, 2, \ldots, n$;

(4b) Collect the mean subjective difference scores of the n training images into the quality vector $M = [m_1, m_2, \ldots, m_i, \ldots, m_n]$, where $m_i$ is the mean subjective difference score of the i-th training image, $i = 1, 2, \ldots, n$;

(4c) Combine the quality vector M with the feature matrix F to construct the original feature dictionary D:

$$D = \begin{bmatrix} F \\ M \end{bmatrix} = [\bar{f}_1, \bar{f}_2, \ldots, \bar{f}_i, \ldots, \bar{f}_n], \qquad \text{where } \bar{f}_i = \begin{bmatrix} f_i \\ m_i \end{bmatrix}, \; i = 1, 2, \ldots, n.$$
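Constructing D in step (4c) amounts to stacking the quality row under the feature matrix, so that each atom is $[f_i; m_i]$. A minimal sketch (the function name is hypothetical):

```python
import numpy as np

def build_dictionary(F, M):
    """Stack the feature matrix F (L x n) and the quality vector M (n,)
    so that column i of the result is the atom [f_i; m_i]."""
    F = np.asarray(F, dtype=float)
    M = np.asarray(M, dtype=float).reshape(1, -1)
    return np.vstack([F, M])   # (L+1) x n original feature dictionary D
```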

Step 5. For the given test image, select the corresponding sparse representation dictionary from the image feature dictionary.

(5a) Use the K-means algorithm to cluster the original feature dictionary D into H classes; the cluster center of the k-th class is $C_k = \begin{bmatrix} Fc_k \\ Mc_k \end{bmatrix}$, where $Fc_k$ is the feature cluster center and $Mc_k$ is the quality cluster center, $k = 1, 2, \ldots, H$;

(5b) Read in a test image I', perform Steps 1 and 2 in turn, and extract the natural scene statistical feature f' of the test image I';

(5c) Compute the Euclidean distance $dis_k$ between the natural scene statistical feature f' of the test image I' and the feature cluster center $Fc_k$ of the k-th class in the original feature dictionary D, obtaining the similarity $P_k$ between the test image I' and the k-th class of the original feature dictionary D:

$$P_k = \frac{1/dis_k}{\sum_{k=1}^{H} 1/dis_k}, \qquad k = 1, 2, \ldots, H;$$
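Step (5c) reduces to inverse-distance weighting of the class centers; the weights $P_k$ sum to 1 by construction. A sketch, assuming `feature_centers` holds the H vectors $Fc_k$ as rows:

```python
import numpy as np

def class_similarity(f_test, feature_centers):
    """Inverse-distance similarity weights P_k of step (5c)."""
    d = np.linalg.norm(feature_centers - f_test[None, :], axis=1)
    inv = 1.0 / (d + 1e-12)    # guard against a zero distance
    return inv / inv.sum()
```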

(5d) According to the similarity $P_k$, select from the k-th class of the original feature dictionary D the $N_k$ samples closest in Euclidean distance to the feature cluster center $Fc_k$ as sparse representation atoms $\bar{f}_i^k$ for the test image I', where $i = 1, 2, \ldots, N_k$, $k = 1, 2, \ldots, H$. The atoms of the H classes together form the sparse representation dictionary D' for the test image I':

$$D' = [D_1, D_2, \ldots, D_k, \ldots, D_H] = \begin{bmatrix} F' \\ M' \end{bmatrix}, \quad \text{where } D_k = [\bar{f}_1^k, \bar{f}_2^k, \ldots, \bar{f}_i^k, \ldots, \bar{f}_{N_k}^k], \; \bar{f}_i^k = \begin{bmatrix} f_i^k \\ m_i^k \end{bmatrix},$$

$f_i^k$ is the natural scene statistical feature of the i-th sample in the k-th class selected from the original feature dictionary D, $m_i^k$ is the mean subjective difference score of the i-th sample in the k-th class, $i = 1, 2, \ldots, N_k$, $N_k = \delta \cdot P_k$, $k = 1, 2, \ldots, H$, and δ is a positive constant.
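Step (5d) can be sketched as below. $N_k = \delta \cdot P_k$ is generally not an integer, so rounding with a floor of one atom per class is an assumption here, as are the helper name and the convention that the last row of D holds the quality scores:

```python
import numpy as np

def select_atoms(D, labels, centers, P, delta=4):
    """From class k, keep the N_k atoms nearest the class feature
    center Fc_k; the kept columns of D form the dictionary D'."""
    cols = []
    for k, p in enumerate(P):
        n_k = max(1, int(round(delta * p)))       # N_k = delta * P_k, rounded
        idx = np.where(labels == k)[0]
        feats = D[:-1, idx]                       # feature rows only
        d = np.linalg.norm(feats - centers[k][:, None], axis=0)
        cols.extend(idx[np.argsort(d)[:n_k]])
    return D[:, cols]                             # sparse representation dictionary D'
```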

Step 6. Sparsely represent the features of the test image.

Solve, by the method of sparse representation, for the sparse representation coefficients of the natural scene statistical feature f' of the test image I' under the feature matrix F':

$$\alpha^* = \arg\min_{\alpha \in \mathbb{R}^L} \; \lambda \cdot \|\alpha\|_1 + \|f' - F'\alpha\|_2,$$

where arg min assigns to $\alpha^*$ the $\alpha = [\alpha_1, \alpha_2, \ldots, \alpha_k, \ldots, \alpha_H]^T$ that minimizes the objective, $k = 1, 2, \ldots, H$, $i = 1, 2, \ldots, N_k$; $\mathbb{R}^L$ denotes the L-dimensional real space, $\|\beta\|_1$ denotes the 1-norm of a vector $\beta = [\beta_1, \beta_2, \ldots, \beta_l, \ldots, \beta_L]^T$, $\|\beta\|_2$ denotes its 2-norm, λ is a positive constant that balances the fidelity term and the regularization term, and T denotes transposition.
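The coefficients can be computed with any l1 solver. The sketch below uses ISTA on the squared-l2 data term, $\lambda \|\alpha\|_1 + \|f' - F'\alpha\|_2^2$, a common substitution for the unsquared norm written in the patent, so it approximates step 6 rather than reproducing the authors' exact solver:

```python
import numpy as np

def sparse_code(f, Fp, lam=1e-4, n_iter=500):
    """ISTA for min_a lam*||a||_1 + ||f - Fp a||_2^2.

    Each iteration takes a gradient step on the data term, then
    soft-thresholds to enforce sparsity.
    """
    L = np.linalg.norm(Fp, 2) ** 2            # gradient Lipschitz constant / 2
    a = np.zeros(Fp.shape[1])
    for _ in range(n_iter):
        g = a - Fp.T @ (Fp @ a - f) / L       # gradient step
        a = np.sign(g) * np.maximum(np.abs(g) - lam / (2 * L), 0.0)
    return a
```

With the small λ = 0.0001 used in the experiments, the solution stays close to the least-squares fit while zeroing negligible coefficients.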

Step 7. Compute the quality evaluation measure of the test image I' as

$$Q = \sum_{k=1}^{H} \sum_{i=1}^{N_k} \alpha_{k,i} \, m_i^k, \qquad Q \in [0, 100],$$

where $m_i^k$ is the quality score of the i-th atom $f_i^k$ of the k-th class in the sparse representation dictionary D', $\alpha_{k,i}$ is the representation coefficient of the natural scene statistical feature f' of the test image I' on the i-th atom $f_i^k$ of the k-th class in the feature matrix F', and Q is the final quality measure of the test image I'; the smaller Q is, the higher the quality of the test image.
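Step 7 is a linear weighted sum of the selected atoms' mean subjective difference scores. Normalizing by the coefficient sum is an assumption added here to keep Q inside the DMOS range [0, 100], since the patent does not state how the coefficients are scaled:

```python
import numpy as np

def quality_score(alpha, M_sel):
    """Weighted sum of the atoms' DMOS values m_i^k with the sparse
    coefficients alpha_{k,i}; normalization by sum(alpha) is assumed."""
    alpha = np.asarray(alpha, dtype=float)
    M_sel = np.asarray(M_sel, dtype=float)
    s = alpha.sum()
    return float(alpha @ M_sel / s) if abs(s) > 1e-12 else float("nan")
```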

The effects of the present invention can be further illustrated by the following experiments.

1. Experimental conditions and evaluation criteria:

The experiments were carried out on the second release of the LIVE image quality assessment database of the University of Texas, which contains 29 high-resolution undistorted RGB color images and the corresponding distorted images of five types: 175 JPEG images, 169 JPEG2000 images, 145 white-noise (WN) images, 145 Gaussian-blur (Gblur) images, and 145 images distorted by a fast-fading (FF) channel. The database provides the mean subjective difference score of each distorted image to describe its quality.

To test the consistency of the objective quality scores produced by the present invention with subjective perception, three metrics were selected: first, the linear correlation coefficient (LCC), which reflects the prediction accuracy of an objective method; second, the root mean square error (RMSE), which reflects its prediction error; and third, the Spearman rank-order correlation coefficient (SROCC), which reflects the monotonicity of its predictions.
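The three metrics can be computed directly with scipy.stats. The toy scores below are made up; note that in evaluation protocols like this one, LCC and RMSE are usually computed after a nonlinear mapping of the objective scores onto the DMOS scale, which is omitted here for brevity.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def evaluate(pred, dmos):
    """LCC (accuracy), SROCC (monotonicity) and RMSE (error) between
    objective quality scores and subjective DMOS values."""
    pred = np.asarray(pred, dtype=float)
    dmos = np.asarray(dmos, dtype=float)
    lcc = pearsonr(pred, dmos)[0]
    srocc = spearmanr(pred, dmos)[0]
    rmse = float(np.sqrt(np.mean((pred - dmos) ** 2)))
    return lcc, srocc, rmse

pred = [25.0, 40.0, 55.0, 70.0]   # hypothetical objective scores
dmos = [20.0, 45.0, 50.0, 75.0]   # hypothetical subjective DMOS
lcc, srocc, rmse = evaluate(pred, dmos)
print(round(lcc, 3), srocc, round(rmse, 3))
```

Here the prediction order matches the subjective order exactly, so SROCC is 1.0 even though LCC is below 1 and the RMSE is nonzero, which is why the three metrics are reported together.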

In the experiments the image database was divided into a training image set, used to train the original feature dictionary, and a test image set, used to evaluate the predictions. Two groupings were used, differing in the number of original images: in the first grouping the training set contains 15 original images; in the second grouping it contains 23 original images.

Experimental parameter settings: 1) H = 4 when clustering the original feature dictionary; 2) δ = 4 when forming the sparse representation dictionary; 3) λ = 0.0001, chosen empirically, when solving for the sparse representation coefficients.

2. Experimental content and results:

Experiment 1: evaluation of image distortion on the LIVE_database2 database

Using the proposed no-reference image quality assessment method based on discrete cosine transform and sparse representation, the test image sets of the LIVE_database2 database were evaluated under both groupings, and the three metrics of consistency between the objective scores and subjective perception (the linear correlation coefficient LCC, the Spearman rank-order correlation coefficient SROCC and the root mean square error RMSE) were computed for each.

Table 1 reports the distortion evaluation on the LIVE_database2 database, where DCTSR1 and DCTSR2 denote the present invention under the first and the second grouping respectively, and "wavelet transform + sparse representation 1" and "wavelet transform + sparse representation 2" denote the combined wavelet-transform-and-sparse-representation method under the first and the second grouping respectively.

Table 1 Comparative test results of the present method and other image quality evaluation methods

As Table 1 shows, under the same grouping the present invention compares favorably with most existing no-reference methods: 1) it has higher prediction accuracy, i.e. a larger linear correlation coefficient LCC than most existing methods; 2) it has stricter prediction monotonicity, i.e. a larger rank-order correlation coefficient SROCC than most existing methods; 3) it predicts well on images of all distortion types.

Experiment 2: consistency of the objective quality evaluation results with subjective perception

Using the quality measures of the test images obtained in Experiment 1, a scatter plot of the quality measure Q of each test image against its difference mean opinion score DMOS was drawn, and a logarithmic function was fitted to obtain the best-matching curve, shown in Fig. 4. The abscissa is the image quality measure and the ordinate the DMOS; each '·' marks a quality measure produced by the present invention, and the closer the '·' distribution lies to the fitted best-matching curve, the better the performance. In Fig. 4:

Fig. 4(a) shows the performance of the present invention on the JPEG2000 compression sub-set,

Fig. 4(b) shows the performance on the JPEG compression sub-set,

Fig. 4(c) shows the performance on the white-noise (WN) sub-set,

Fig. 4(d) shows the performance on the Gaussian-blur (Gblur) sub-set,

Fig. 4(e) shows the performance on the fast-fading (FF) channel sub-set,

Fig. 4(f) shows the performance on the entire LIVE_database2 database.

As Fig. 4 shows, the '·' distribution of the present invention lies close to the fitted best-matching curve with little deviation, and its performance is stable across the different distorted-image sub-sets of the whole database, indicating high consistency with subjective visual perception.
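The logarithmic fit used for the best-matching curve in Fig. 4 can be reproduced with scipy's curve_fit. The data below are synthetic stand-ins generated from a known logarithmic relation plus noise, not LIVE scores; the exercise only shows that the fitting procedure recovers the underlying parameters.

```python
import numpy as np
from scipy.optimize import curve_fit

def log_model(q, a, b):
    """DMOS approximated as a*ln(Q) + b, a logarithmic fitting
    function of the kind described for the best-matching curve."""
    return a * np.log(q) + b

rng = np.random.default_rng(1)
q = np.linspace(5.0, 95.0, 40)                          # synthetic quality measures
dmos = 20.0 * np.log(q) + 3.0 + rng.normal(0, 1.5, q.size)
(a, b), _ = curve_fit(log_model, q, dmos)
print(round(float(a), 2), round(float(b), 2))
```

Since the model is linear in its parameters a and b, the fit is well conditioned and converges from the default initial guess.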

Claims (2)

1. A no-reference image quality assessment method based on discrete cosine transform and sparse representation, comprising the steps:
(1) read in a gray-level image I, apply the discrete cosine transform to it, and extract a set of natural scene statistical features f associated with subjective perception;
(2) build the original feature dictionary of the training images:
(2a) repeat step (1) to extract the natural scene statistical features of n training images and form the feature matrix F = [f_1, f_2, ..., f_i, ..., f_n], where f_i is the natural scene statistical feature of the i-th training image, i = 1, 2, ..., n;
(2b) collect the difference mean opinion scores of the n training images into the quality vector M = [m_1, m_2, ..., m_i, ..., m_n], where m_i is the difference mean opinion score of the i-th training image, i = 1, 2, ..., n;
(2c) combine the quality vector M with the feature matrix F entry by entry to build the original feature dictionary D: D = [F; M] = [f̄_1, f̄_2, ..., f̄_i, ..., f̄_n], where f̄_i = [f_i; m_i], i = 1, 2, ..., n;
(2d) cluster the original feature dictionary D into H classes with the K-means algorithm; the cluster centre of the k-th class is C_k = [Fc_k; Mc_k], where Fc_k is the feature cluster centre and Mc_k the quality cluster centre, k = 1, 2, ..., H;
(3) read in a test image I' and repeat step (1) to extract its natural scene statistical feature f';
(4) compute the Euclidean distance dis_k between the natural scene statistical feature f' of the test image I' and the feature cluster centre Fc_k of the k-th class of the original feature dictionary D, and obtain the degree of proximity P_k between the test image I' and the k-th class of D: P_k = (1/dis_k) / (Σ_{k=1}^{H} 1/dis_k), k = 1, 2, ..., H;
(5) according to P_k, select from the k-th class of the original feature dictionary D the N_k samples with the smallest Euclidean distance to the feature cluster centre Fc_k as sparse representation atoms f̄_i^k for the test image I', i = 1, 2, ..., N_k, k = 1, 2, ..., H; the H classes of atoms together form the sparse representation dictionary for the test image I': D' = [D_1, D_2, ..., D_k, ..., D_H] = [F'; M'], where D_k = [f̄_1^k, f̄_2^k, ..., f̄_i^k, ..., f̄_{N_k}^k], f̄_i^k = [f_i^k; m_i^k], f_i^k is the natural scene statistical feature of the i-th sample selected from the k-th class of D, m_i^k is the difference mean opinion score of the i-th sample of the k-th class, N_k = δ·P_k, and δ is a positive constant;
(6) solve for the sparse representation coefficients of the natural scene statistical feature f' of the test image I' over the feature matrix F' by the sparse representation method: α* = arg min_{α ∈ R^L} λ·||α||_1 + ||f' - D'α||_2, where arg min assigns to α* the α = [α_1, α_2, ..., α_k, ..., α_H]^T that minimizes the objective λ·||α||_1 + ||f' - D'α||_2, k = 1, 2, ..., H, i = 1, 2, ..., N_k; R^L is the L-dimensional real space, L = Σ_{k=1}^{H} N_k; ||β||_1 = Σ_{l=1}^{L} |β_l| and ||β||_2 = (Σ_{l=1}^{L} β_l^2)^{1/2} are the 1-norm and 2-norm of a vector β = [β_1, β_2, ..., β_l, ..., β_L]^T; λ is a positive constant balancing the fidelity term and the regularization term, and T denotes matrix transposition;
(7) compute the quality of the test image I' from the obtained sparse representation: Q = (Σ_{k=1}^{H} Σ_{i=1}^{N_k} α_{k,i}·m_i^k) / (Σ_{k=1}^{H} Σ_{i=1}^{N_k} α_{k,i}), Q ∈ [0, 100], where m_i^k is the quality score of the i-th atom of the k-th class in the sparse representation dictionary D', α_{k,i} is the representation coefficient of the natural scene statistical feature f' of the test image I' on the i-th atom f_i^k of the k-th class in the feature matrix F', and Q is the final quality measure of the test image I'.
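The inverse-distance weighting of steps (4)-(5) can be sketched in a few lines of numpy. The cluster centres, test feature and δ value below are made up, and the rounding of the per-class atom budget N_k = δ·P_k to an integer is an assumed detail.

```python
import numpy as np

def class_proximity(f, centers):
    """P_k = (1/dis_k) / sum_j (1/dis_j), with dis_k the Euclidean
    distance from the test feature f to the k-th feature cluster centre."""
    dis = np.linalg.norm(centers - f, axis=1)
    inv = 1.0 / dis
    return inv / inv.sum()

centers = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])  # made-up Fc_k
f = np.array([1.0, 0.0])                                  # distances: 1, 3, sqrt(10)
P = class_proximity(f, centers)
delta = 4
N = np.maximum(1, np.round(delta * P)).astype(int)        # atoms drawn per class
print(np.round(P, 3), N)
```

The proximities sum to 1 and the nearest class receives the largest weight, so the sparse representation dictionary is dominated by atoms from classes resembling the test image.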
2. The no-reference image quality assessment method based on discrete cosine transform and sparse representation according to claim 1, wherein applying the discrete cosine transform to the gray-level image I and extracting the natural scene statistical features f associated with subjective perception in step (1) comprises the following concrete steps:
(1.1) extract the natural scene statistical feature f_{s,1} of the gray-level image I at the first scale:
(1.1a) decompose the gray-level image I into mutually overlapping 5*5 image blocks, apply the discrete cosine transform to each block, and remove the DC component of the discrete cosine transform coefficients;
(1.1b) fit a generalized Gaussian distribution model to the DC-removed discrete cosine transform coefficients of each block to obtain the shape factor γ of the block; take the mean of γ over all blocks as the first element f_{s,1,1} of the first-scale feature f_{s,1}, and, after sorting the shape factors of all blocks in ascending order, take the mean of the first 10% as the second element f_{s,1,2}; the generalized Gaussian distribution model from which γ is estimated is f(x|α, β, γ) = α·exp(-(β|x - μ|)^γ), with β = (1/σ)·√(Γ(3/γ)/Γ(1/γ)), α = βγ/(2·Γ(1/γ)), Γ(z) = ∫_0^∞ t^{z-1} e^{-t} dt, where μ is the mean, σ the standard deviation, γ the shape factor and β the scale factor;
(1.1c) compute the frequency variation coefficient ζ = σ_|X| / μ_|X| of each block, where μ_|X| and σ_|X| are respectively the mean and the standard deviation of the magnitudes of the DC-removed discrete cosine transform coefficients of the block and |·| denotes absolute value; take the mean of ζ over all blocks as the third element f_{s,1,3}, and, after sorting the ζ of all blocks in descending order, take the mean of the first 10% as the fourth element f_{s,1,4};
(1.1d) divide the discrete cosine transform coefficients of each block into 3 frequency bands ordered from high frequency to low frequency, and compute the band-energy change rate R_n of each block as:
R_n = |E_n - (1/(n-1))·Σ_{j<n} E_j| / (E_n + (1/(n-1))·Σ_{j<n} E_j), where E_n = σ_n^2, n = 1, 2, 3, σ_n^2 is the variance of the n-th band, E_n is the energy of the n-th band, and j is a positive integer;
(1.1e) take the mean of R_n over all blocks as the fifth element f_{s,1,5}, and, after sorting the R_n of all blocks in descending order, take the mean of the first 10% as the sixth element f_{s,1,6};
(1.1f) divide the discrete cosine transform coefficients of each block into subbands along the 3 directions 45°, 90° and 135°, fit the generalized Gaussian distribution function to the subband of each direction to obtain the frequency variation coefficients of the 3 directional subbands, and compute the variance of these 3 frequency variation coefficients;
(1.1g) take the mean of this variance over all blocks as the seventh element f_{s,1,7}, and, after sorting the variances of all blocks in descending order, take the mean of the first 10% as the eighth element f_{s,1,8};
(1.1h) obtain the 8-dimensional natural scene statistical feature of the gray-level image I at the first scale:
f_{s,1} = [f_{s,1,1}, f_{s,1,2}, ..., f_{s,1,i}, ..., f_{s,1,8}], i = 1, 2, ..., 8;
(1.2) downsample the gray-level image I to obtain the downsampled image I2, and repeat step (1.1) on I2 to obtain the natural scene statistical feature f_{s,2} of I at the second scale;
(1.3) downsample I2 again to obtain the twice-downsampled image I3, and repeat step (1.1) on I3 to obtain the natural scene statistical feature f_{s,3} of I at the third scale;
(1.4) the natural scene statistical feature of the gray-level image I is then f = [f_{s,1}, f_{s,2}, f_{s,3}]^T, where T denotes matrix transposition.
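Two of the per-block statistics above, the frequency variation coefficient ζ of step (1.1c) and the band-energy change rates R_n of step (1.1d), can be sketched on a single 5*5 block. The diagonal-index band split and the random test block below are illustrative assumptions; the claim does not specify the exact band boundaries.

```python
import numpy as np
from scipy.fft import dctn

def block_features(block):
    """zeta = sigma_|X| / mu_|X| over the DC-removed DCT coefficients, and
    R_n = |E_n - mean(E_j, j<n)| / (E_n + mean(E_j, j<n)) for n = 2, 3,
    with E_n the variance (energy) of the n-th frequency band."""
    c = dctn(np.asarray(block, dtype=float), norm='ortho')
    c[0, 0] = 0.0                                   # remove the DC component
    ac = np.abs(c).ravel()[1:]                      # AC coefficient magnitudes
    zeta = ac.std() / ac.mean()
    i, j = np.indices(c.shape)                      # band split by diagonal index i+j
    bands = [c[(i + j >= 1) & (i + j <= 2)],        # low frequencies
             c[(i + j >= 3) & (i + j <= 5)],        # middle frequencies
             c[(i + j >= 6)]]                       # high frequencies
    E = [b.var() for b in bands]
    R = [abs(E[n] - np.mean(E[:n])) / (E[n] + np.mean(E[:n])) for n in (1, 2)]
    return zeta, R

rng = np.random.default_rng(0)
zeta, R = block_features(rng.standard_normal((5, 5)))
print(round(float(zeta), 3), [round(float(r), 3) for r in R])
```

Note that R_n is only well defined for n ≥ 2 (the mean over j < n is empty for n = 1), and that |a - b| / (a + b) is always in [0, 1] for nonnegative band energies.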
CN201410695579.8A 2014-11-26 2014-11-26 No-reference image quality assessment method based on discrete cosine transform and sparse representation Expired - Fee Related CN104376565B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410695579.8A CN104376565B (en) 2014-11-26 2014-11-26 Based on discrete cosine transform and the non-reference picture quality appraisement method of rarefaction representation


Publications (2)

Publication Number Publication Date
CN104376565A true CN104376565A (en) 2015-02-25
CN104376565B CN104376565B (en) 2017-03-29

Family

ID=52555455

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410695579.8A Expired - Fee Related CN104376565B (en) 2014-11-26 2014-11-26 Based on discrete cosine transform and the non-reference picture quality appraisement method of rarefaction representation

Country Status (1)

Country Link
CN (1) CN104376565B (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105005990A (en) * 2015-07-02 2015-10-28 东南大学 Blind image quality evaluation method based on distinctive sparse representation
CN105007488A (en) * 2015-07-06 2015-10-28 浙江理工大学 Universal no-reference image quality evaluation method based on transformation domain and spatial domain
WO2016146038A1 (en) * 2015-03-13 2016-09-22 Shenzhen University System and method for blind image quality assessment
CN106127234A (en) * 2016-06-17 2016-11-16 西安电子科技大学 The non-reference picture quality appraisement method of feature based dictionary
CN106997585A (en) * 2016-01-22 2017-08-01 同方威视技术股份有限公司 Imaging system and image quality evaluating method
CN107194912A (en) * 2017-04-20 2017-09-22 中北大学 The brain CT/MR image interfusion methods of improvement coupling dictionary learning based on rarefaction representation
CN107798282A (en) * 2016-09-07 2018-03-13 北京眼神科技有限公司 Method and device for detecting human face of living body
CN108805850A (en) * 2018-06-05 2018-11-13 天津师范大学 A kind of frame image interfusion method merging trap based on atom
CN110308397A (en) * 2019-07-30 2019-10-08 重庆邮电大学 A Hybrid Convolutional Neural Network Driven Lithium Battery Multi-category Fault Diagnosis Modeling Method
CN111145150A (en) * 2019-12-20 2020-05-12 中国科学院光电技术研究所 Universal non-reference image quality evaluation method

Citations (4)

Publication number Priority date Publication date Assignee Title
CN102122353A (en) * 2011-03-11 2011-07-13 西安电子科技大学 Method for segmenting images by using increment dictionary learning and sparse representation
CN102722712A (en) * 2012-01-02 2012-10-10 西安电子科技大学 Multiple-scale high-resolution image object detection method based on continuity
CN102945552A (en) * 2012-10-22 2013-02-27 西安电子科技大学 No-reference image quality evaluation method based on sparse representation in natural scene statistics
US20140072209A1 (en) * 2012-09-13 2014-03-13 Los Alamos National Security, Llc Image fusion using sparse overcomplete feature dictionaries


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
ANISH MITTAL et al.: "No-Reference Image Quality Assessment in the Spatial Domain", IEEE TRANSACTIONS ON IMAGE PROCESSING *
MICHAL AHARON et al.: "K-SVD: An Algorithm for Designing Overcomplete Dictionaries for Sparse Representation", IEEE TRANSACTIONS ON SIGNAL PROCESSING *
MICHELE A. SAAD et al.: "Blind Image Quality Assessment: A Natural Scene Statistics Approach in the DCT Domain", IEEE TRANSACTIONS ON IMAGE PROCESSING *
XU Jian et al.: "Clustering-based adaptive sparse representation algorithm for images and its application", Acta Photonica Sinica *
GAO Fei et al.: "Active feature learning and its application in blind image quality assessment", Chinese Journal of Computers *

Cited By (19)

Publication number Priority date Publication date Assignee Title
WO2016146038A1 (en) * 2015-03-13 2016-09-22 Shenzhen University System and method for blind image quality assessment
US10909409B2 (en) 2015-03-13 2021-02-02 Shenzhen University System and method for blind image quality assessment
US10331971B2 (en) 2015-03-13 2019-06-25 Shenzhen University System and method for blind image quality assessment
CN105005990B (en) * 2015-07-02 2017-11-28 东南大学 A kind of blind image quality evaluating method based on distinctiveness rarefaction representation
CN105005990A (en) * 2015-07-02 2015-10-28 东南大学 Blind image quality evaluation method based on distinctive sparse representation
CN105007488A (en) * 2015-07-06 2015-10-28 浙江理工大学 Universal no-reference image quality evaluation method based on transformation domain and spatial domain
US10217204B2 (en) * 2016-01-22 2019-02-26 Nuctech Company Limited Imaging system and method of evaluating an image quality for the imaging system
CN106997585A (en) * 2016-01-22 2017-08-01 同方威视技术股份有限公司 Imaging system and image quality evaluating method
CN106127234B (en) * 2016-06-17 2019-05-03 西安电子科技大学 A no-reference image quality assessment method based on feature dictionary
CN106127234A (en) * 2016-06-17 2016-11-16 西安电子科技大学 The non-reference picture quality appraisement method of feature based dictionary
CN107798282A (en) * 2016-09-07 2018-03-13 北京眼神科技有限公司 Method and device for detecting human face of living body
CN107798282B (en) * 2016-09-07 2021-12-31 北京眼神科技有限公司 Method and device for detecting human face of living body
CN107194912A (en) * 2017-04-20 2017-09-22 中北大学 The brain CT/MR image interfusion methods of improvement coupling dictionary learning based on rarefaction representation
CN107194912B (en) * 2017-04-20 2020-12-29 中北大学 Brain CT/MR Image Fusion Method Based on Sparse Representation and Improved Coupled Dictionary Learning
CN108805850A (en) * 2018-06-05 2018-11-13 天津师范大学 A kind of frame image interfusion method merging trap based on atom
CN110308397B (en) * 2019-07-30 2021-04-02 重庆邮电大学 Lithium battery multi-class fault diagnosis modeling method driven by hybrid convolutional neural network
CN110308397A (en) * 2019-07-30 2019-10-08 重庆邮电大学 A Hybrid Convolutional Neural Network Driven Lithium Battery Multi-category Fault Diagnosis Modeling Method
CN111145150A (en) * 2019-12-20 2020-05-12 中国科学院光电技术研究所 Universal non-reference image quality evaluation method
CN111145150B (en) * 2019-12-20 2022-11-11 中国科学院光电技术研究所 Universal non-reference image quality evaluation method

Also Published As

Publication number Publication date
CN104376565B (en) 2017-03-29

Similar Documents

Publication Publication Date Title
CN104376565B (en) No-reference image quality assessment method based on discrete cosine transform and sparse representation
He et al. Sparse representation for blind image quality assessment
CN105208374B (en) A No-Reference Image Quality Objective Evaluation Method Based on Deep Learning
CN103996192B (en) Non-reference image quality evaluation method based on high-quality natural image statistical magnitude model
CN103581661B (en) Method for evaluating visual comfort degree of three-dimensional image
CN104751456B (en) Blind image quality evaluating method based on conditional histograms code book
CN104023230B (en) A kind of non-reference picture quality appraisement method based on gradient relevance
CN102945552A (en) No-reference image quality evaluation method based on sparse representation in natural scene statistics
CN104202594B (en) A kind of method for evaluating video quality based on 3 D wavelet transformation
CN103366378B (en) Based on the no-reference image quality evaluation method of conditional histograms shape coincidence
CN103475898A (en) Non-reference image quality assessment method based on information entropy characters
CN109816646B (en) Non-reference image quality evaluation method based on degradation decision logic
CN102209257A (en) Stereo image quality objective evaluation method
CN107464222B (en) No-reference high dynamic range image objective quality assessment method based on tensor space
CN104036502B (en) A kind of without with reference to fuzzy distortion stereo image quality evaluation methodology
CN106600597A (en) Non-reference color image quality evaluation method based on local binary pattern
CN104658001A (en) Non-reference asymmetric distorted stereo image objective quality assessment method
CN105574901B (en) A kind of general non-reference picture quality appraisement method based on local contrast pattern
CN105160667A (en) Blind image quality evaluation method based on combining gradient signal and Laplacian of Gaussian (LOG) signal
CN107948635B (en) A No-reference Sonar Image Quality Evaluation Method Based on Degradation Measurement
Jiang et al. Supervised dictionary learning for blind image quality assessment using quality-constraint sparse coding
Geng et al. A stereoscopic image quality assessment model based on independent component analysis and binocular fusion property
CN106127234B (en) A no-reference image quality assessment method based on feature dictionary
WO2016145571A1 (en) Method for blind image quality assessment based on conditional histogram codebook
CN103325113B (en) Partial reference type image quality evaluating method and device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20170329