CN105913413B - An Objective Evaluation Method for Color Image Quality Based on Online Manifold Learning - Google Patents
Info
- Publication number
- CN105913413B, CN201610202181.5A, CN201610202181A
- Authority
- CN
- China
- Prior art keywords
- image block
- image
- value
- pixel
- values
- Prior art date
- 2016-03-31
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/213—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
- G06F18/2135—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on approximation criteria, e.g. principal component analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
- G06F18/232—Non-hierarchical techniques
- G06F18/2321—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
- G06F18/23213—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/50—Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/56—Extraction of image or video features relating to colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/762—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
- G06V10/763—Non-hierarchical techniques, e.g. based on statistics of modelling distributions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/7715—Feature extraction, e.g. by transforming the feature space, e.g. multi-dimensional scaling [MDS]; Mappings, e.g. subspace methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/98—Detection or correction of errors, e.g. by rescanning the pattern or by human intervention; Evaluation of the quality of the acquired patterns
- G06V10/993—Evaluation of the quality of the acquired pattern
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20021—Dividing image into blocks, subimages or windows
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30168—Image quality inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/513—Sparse representations
Abstract
The invention discloses an objective evaluation method for color image quality based on online manifold learning. Taking into account the relationship between saliency and objective image quality evaluation, the method uses a visual saliency detection algorithm to obtain the saliency maps of the reference image and of the distorted image and fuses them into a maximum fused saliency map; on the basis of the maximum saliency of each image block in the maximum fused saliency map, absolute differences are used to measure the saliency difference between each reference image block and the corresponding distorted image block, whereby visually important reference image blocks and visually important distorted image blocks are screened out; the manifold feature vectors of these blocks are then used to compute the objective quality score of the distorted image. The evaluation performance is markedly improved, and the objective results correlate highly with subjective perception.
Description
Technical Field
The present invention relates to an image quality evaluation method, and in particular to an objective evaluation method for color image quality based on online manifold learning.
Background Art
Limited by the performance of image processing systems, various types of distortion are introduced during image acquisition, transmission, and encoding. Distortion degrades image quality and hinders people from extracting information from images. Image quality is an important index for comparing the performance of image processing algorithms and tuning the parameters of image processing systems, so constructing effective image quality evaluation methods is of great value in fields such as image transmission, multimedia network communication, and video analysis. Image quality evaluation methods generally fall into two categories: subjective evaluation and objective evaluation. Since the final receiver of an image is a human, subjective evaluation is the most reliable, but it is time-consuming and labor-intensive and hard to embed in image processing systems, which limits its practical use. In contrast, objective evaluation methods are simple to operate and easy to deploy, and are currently the research focus of both academia and industry.
At present, the simplest and most widely used objective evaluation methods are the peak signal-to-noise ratio (PSNR) and the mean square error (MSE). These methods are simple to compute and have clear physical meaning, but because they ignore the visual characteristics of the human eye, their results often disagree with subjective perception. In fact, the human eye does not process an image signal point by point. In view of this, researchers have introduced human visual characteristics to make objective results agree better with human visual perception. For example, the structural similarity (SSIM) index characterizes the structural information of an image in terms of its luminance, contrast, and structure, and evaluates image quality accordingly. Follow-up work extended SSIM with a multi-scale SSIM method, a complex-wavelet SSIM method, and an information-content-weighted SSIM method, improving its performance. Beyond structural similarity, Sheikh et al. treated full-reference image quality evaluation as an information-fidelity problem and, by quantifying the amount of image information lost during distortion, proposed the visual information fidelity (VIF) image quality evaluation method. Chandler et al., starting from the near-threshold and suprathreshold properties of visual perception and combining them with the wavelet transform, proposed the wavelet visual signal-to-noise ratio (VSNR) image quality evaluation method, which adapts well to different viewing conditions. Although researchers have explored the human visual system in depth, our understanding of it remains rather superficial because of its complexity, so no objective image quality evaluation method that agrees perfectly with subjective human perception has yet been proposed.
To better reflect the characteristics of the human visual system, objective image quality evaluation methods based on sparse representation and visual attention have attracted increasing attention. Many studies have shown that sparse representation describes well the activity of neurons in the primary visual cortex of the human brain. For example, Guha et al. disclosed a sparse-representation-based image quality evaluation method with two stages. The first is a dictionary-learning stage: image blocks randomly selected from reference images are used as training samples, and the K-SVD algorithm trains an over-complete dictionary. The second is an evaluation stage: the orthogonal matching pursuit (OMP) algorithm sparsely encodes the image blocks of the reference image and of the corresponding distorted image to obtain the reference and distorted sparse coefficients, from which an objective score is computed. However, such sparse-representation methods all rely on OMP for sparse coding, which incurs heavy computational overhead. Moreover, they obtain the over-complete dictionary offline, which requires a large number of representative natural images as training samples and limits their use in image processing with real-time requirements.
Digital images are high-dimensional data containing substantial information redundancy, so dimensionality reduction techniques are needed, ideally while preserving the essential structure of the data. Manifold learning has been a research hotspot in information science since it was first proposed in the journal Science in 2000. Assuming the data are sampled uniformly from a low-dimensional manifold embedded in a high-dimensional Euclidean space, manifold learning recovers the low-dimensional manifold structure from the high-dimensional samples, i.e., it finds the low-dimensional manifold in the high-dimensional space and the corresponding embedding map, thereby achieving dimensionality reduction. Studies suggest that manifolds are the basis of perception and that the brain perceives things in a manifold fashion. In recent years, manifold learning has been widely applied to image denoising, face recognition, and human behavior detection, with good results. To address the problem that the column vectors produced by the locality preserving projection (LPP) algorithm are not orthogonal, Deng et al. improved it into the orthogonal locality preserving projection (OLPP) algorithm, which finds the manifold structure of the data, is linear, and achieves better locality preservation and discrimination. Manifold learning can model how image signals are represented by cells in the primary visual cortex, and can therefore extract the visual perception features of an image accurately. The low-dimensional manifold features of an image describe well the nonlinear relationships between distorted images: in the manifold space, distorted images are arranged according to the type and strength of the distortion. It is therefore necessary to study a manifold-learning-based objective image quality evaluation method whose objective results agree closely with human visual perception.
Summary of the Invention
The technical problem to be solved by the present invention is to provide an objective evaluation method for color image quality based on online manifold learning that can effectively improve the correlation between objective evaluation results and subjective perception.
The technical solution adopted by the present invention to solve the above technical problem is an objective evaluation method for color image quality based on online manifold learning, characterized by comprising the following steps:
① Let IR denote an undistorted reference image of width W and height H, and let ID denote the distorted image to be evaluated that corresponds to IR;
② Use a visual saliency detection algorithm to obtain the saliency maps of IR and ID, denoted MR and MD respectively; then compute from MR and MD the maximum fused saliency map, denoted MF, in which the pixel value at coordinate (x,y) is MF(x,y) = max(MR(x,y), MD(x,y)), where 1≤x≤W, 1≤y≤H, max() is the maximum function, MR(x,y) is the pixel value at coordinate (x,y) in MR, and MD(x,y) is the pixel value at coordinate (x,y) in MD;
③ Divide each of IR, ID, MR, MD and MF into ⌊W/8⌋×⌊H/8⌋ non-overlapping image blocks of identical size using a sliding window of size 8×8;
Then vectorize the R, G and B color values of all pixels of every image block of IR and of ID. Denote by x_j^R the color vector formed from the R, G, B color values of all pixels of the j-th image block of IR, and by x_j^D the corresponding color vector for the j-th image block of ID, where j starts at 1 and x_j^R and x_j^D are both of dimension 192×1. Elements 1 to 64 of x_j^R are the R-channel color values of the pixels of the j-th image block of IR in raster-scan (row-by-row) order, elements 65 to 128 are the G-channel color values in the same order, and elements 129 to 192 are the B-channel color values in the same order; the elements of x_j^D are obtained from the j-th image block of ID in exactly the same way;
Also vectorize the pixel values of all pixels of every image block of MR, MD and MF. Denote by z_j^R, z_j^D and z_j^F the pixel-value vectors of the j-th image blocks of MR, MD and MF respectively, each of dimension 64×1; elements 1 to 64 of each vector are the pixel values of the corresponding block in raster-scan (row-by-row) order;
④ Compute the saliency of each image block of MF, denoting the saliency of the j-th image block of MF by d_j, d_j = max_{1≤i≤64} z_j^F(i), where 1≤i≤64 and z_j^F(i) denotes the value of the i-th element of z_j^F;
Then sort the saliencies of all image blocks of MF in descending order and record the indices of the image blocks corresponding to the t1 largest saliencies, where t1 = λ1×N, N is the total number of blocks, and λ1 ∈ (0,1] is the block-selection ratio;
Next, find the image blocks of IR with the t1 recorded indices and define them as reference image blocks; find the image blocks of ID with those indices and define them as distorted image blocks; find the image blocks of MR with those indices and define them as reference salient image blocks; and find the image blocks of MD with those indices and define them as distorted salient image blocks;
⑤ Use absolute differences to measure the saliency difference between each reference image block of IR and the corresponding distorted image block of ID, denoting the saliency difference between the t'-th reference image block and the t'-th distorted image block by e_t', e_t' = Σ_{i=1}^{64} |z_t'^R(i) − z_t'^D(i)|, where t' starts at 1, 1≤t'≤t1, |·| denotes absolute value, z_t'^R(i) denotes the value of the i-th element of the pixel-value vector z_t'^R of the t'-th reference salient image block of MR, and z_t'^D(i) denotes the value of the i-th element of the pixel-value vector z_t'^D of the t'-th distorted salient image block of MD;
Then sort the t1 measured saliency differences in descending order and determine the reference and distorted image blocks corresponding to the t2 largest differences. Define the t2 selected reference image blocks as visually important reference blocks, and take the matrix formed by their color vectors as the visually important reference block matrix, denoted YR; define the t2 selected distorted image blocks as visually important distorted blocks, and take the matrix formed by their color vectors as the visually important distorted block matrix, denoted YD. Here t2 = λ2×t1, λ2 ∈ (0,1] is the selection ratio for the reference and distorted blocks, YR and YD are both of dimension 192×t2, the t''-th column vector of YR is the color vector of the t''-th selected reference block, the t''-th column vector of YD is the color vector of the t''-th selected distorted block, t'' starts at 1, and 1≤t''≤t2;
⑥ Center YR by subtracting from the value of each element of every column vector the mean of the values of all elements of that column vector, denoting the centered matrix by Y, of dimension 192×t2;
Then apply principal component analysis to Y for dimensionality reduction and whitening, obtaining the reduced and whitened matrix, denoted Yw, Yw = W×Y, where Yw is of dimension M×t2, W is the whitening matrix of dimension M×192, and 1 < M << 192, the symbol "<<" meaning much smaller than;
⑦ Train on Yw online with the orthogonal locality preserving projection algorithm to obtain the feature basis matrix of Yw, denoted D, of dimension M×192;
⑧ From YR and D, compute the manifold feature vector of each visually important reference block, denoting the manifold feature vector of the t''-th visually important reference block by u_t'', u_t'' = D×y_t''^R, where u_t'' is of dimension M×1 and y_t''^R is the t''-th column vector of YR; and from YD and D, compute the manifold feature vector of each visually important distorted block, denoting the manifold feature vector of the t''-th visually important distorted block by v_t'', v_t'' = D×y_t''^D, where v_t'' is of dimension M×1 and y_t''^D is the t''-th column vector of YD;
⑨ From the manifold feature vectors of all visually important reference blocks and of all visually important distorted blocks, compute the objective quality score of ID, denoted Score, Score = (1/t2)·Σ_{t''=1}^{t2} [(1/M)·Σ_{m=1}^{M} (2·u_t''(m)·v_t''(m) + C)/(u_t''(m)^2 + v_t''(m)^2 + C)], where 1≤m≤M, u_t''(m) denotes the value of the m-th element of u_t'', v_t''(m) denotes the value of the m-th element of v_t'', and C is a small constant that stabilizes the result.
The process of obtaining Yw in step ⑥ is: ⑥_1 let C denote the covariance matrix of Y, C = (1/t2)·Y·Y^T, where C is of dimension 192×192 and Y^T is the transpose of Y; ⑥_2 perform eigenvalue decomposition of C to obtain all eigenvalues and the corresponding eigenvectors, each eigenvector being of dimension 192×1; ⑥_3 take the M largest eigenvalues and the corresponding M eigenvectors; ⑥_4 compute the whitening matrix W from the M largest eigenvalues and the corresponding M eigenvectors, W = Ψ^(−1/2)×E^T, where Ψ is of dimension M×M, Ψ = diag(ψ1,...,ψM), E is of dimension 192×M, E = [e1,...,eM], diag() denotes a diagonal matrix, ψ1,...,ψM are the 1st,...,M-th largest eigenvalues, and e1,...,eM are the corresponding 1st,...,M-th eigenvectors; ⑥_5 whiten Y with W to obtain the reduced and whitened matrix Yw, Yw = W×Y.
In step ④, λ1 = 0.7 is taken.
In step ⑤, λ2 = 0.6 is taken.
In step ⑨, C = 0.04 is taken.
Compared with the prior art, the present invention has the following advantages:
1) The method of the invention takes into account the relationship between saliency and objective image quality evaluation. Using a visual saliency detection algorithm, it obtains the saliency maps of the reference image and of the distorted image and fuses them into a maximum fused saliency map; on the basis of the maximum saliency of each image block of the fused map, absolute differences measure the saliency difference between each reference block and the corresponding distorted block, whereby visually important reference blocks and visually important distorted blocks are screened out; the manifold feature vectors of these blocks are then used to compute the objective quality score of the distorted image. The evaluation performance is markedly improved, and the objective results correlate highly with subjective perception.
2) Starting from the image data themselves, the method finds the intrinsic geometric structure of the data through manifold learning and trains a feature basis matrix, with which the visually important reference and distorted blocks are reduced to manifold feature vectors. The reduced manifold feature vectors still preserve the geometric properties of the high-dimensional image data while discarding a great deal of redundant information, making the computation of the objective quality score of the distorted image simpler and more accurate.
3) Whereas the offline dictionary learning of existing sparse-representation-based objective evaluation methods requires a large number of representative training samples and limits image processing with real-time requirements, the method of the invention learns the feature basis matrix online from the extracted visually important reference blocks with the orthogonal locality preserving projection algorithm, so the feature basis matrix can be obtained in real time; the method is therefore more robust and its evaluation performance more stable.
Brief Description of the Drawings
Fig. 1 is the overall implementation block diagram of the method of the present invention;
Fig. 2a is the scatter-fit plot of the method of the present invention on the LIVE image database;
Fig. 2b is the scatter-fit plot of the method of the present invention on the CSIQ image database;
Fig. 2c is the scatter-fit plot of the method of the present invention on the TID2008 image database.
Detailed Description
The present invention is described in further detail below with reference to the embodiments and the accompanying drawings.
The overall implementation block diagram of the objective evaluation method for color image quality based on online manifold learning proposed by the present invention is shown in Fig. 1. The method comprises the following steps:
① Let IR denote an undistorted reference image of width W and height H, and let ID denote the distorted image to be evaluated that corresponds to IR.
② Use the existing visual saliency detection algorithm SDSP (Saliency Detection based on Simple Priors) to obtain the saliency maps of IR and ID, denoted MR and MD respectively; then compute from MR and MD the maximum fused saliency map, denoted MF, in which the pixel value at coordinate (x,y) is MF(x,y) = max(MR(x,y), MD(x,y)), where 1≤x≤W, 1≤y≤H, max() is the maximum function, MR(x,y) is the pixel value at coordinate (x,y) in MR, and MD(x,y) is the pixel value at coordinate (x,y) in MD.
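A minimal sketch of this fusion step in Python, assuming an `sdsp` callable that returns a saliency map for an RGB image (the SDSP detector itself is not implemented here):

```python
import numpy as np

def max_fused_saliency(ref_rgb, dist_rgb, sdsp):
    """Step 2: fuse the two saliency maps pixel-wise by taking the maximum.

    ref_rgb, dist_rgb: H x W x 3 arrays; sdsp: a callable returning an
    H x W saliency map (a stand-in for the SDSP detector)."""
    m_r = sdsp(ref_rgb)          # saliency map M_R of the reference image
    m_d = sdsp(dist_rgb)         # saliency map M_D of the distorted image
    return np.maximum(m_r, m_d)  # M_F(x, y) = max(M_R(x, y), M_D(x, y))
```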
③ Divide each of IR, ID, MR, MD and MF into ⌊W/8⌋×⌊H/8⌋ non-overlapping image blocks of identical size using a sliding window of size 8×8; if the image dimensions are not evenly divisible by 8×8, the leftover pixels are not processed.
Then vectorize the R, G and B color values of all pixels of every image block of IR and of ID. Denote by x_j^R the color vector formed from the R, G, B color values of all pixels of the j-th image block of IR, and by x_j^D the corresponding color vector for the j-th image block of ID, where j starts at 1 and x_j^R and x_j^D are both of dimension 192×1. Elements 1 to 64 of x_j^R are the R-channel color values of the pixels of the j-th image block of IR in raster-scan order (element 1 is the R value of the pixel in row 1, column 1; element 2 that of the pixel in row 1, column 2; and so on); elements 65 to 128 are the G-channel color values in the same order; and elements 129 to 192 are the B-channel color values in the same order. The elements of x_j^D are obtained from the j-th image block of ID in exactly the same way.
Also vectorize the pixel values of all pixels of every image block of MR, MD and MF. Denote by z_j^R, z_j^D and z_j^F the pixel-value vectors of the j-th image blocks of MR, MD and MF respectively, each of dimension 64×1; elements 1 to 64 of each vector are the pixel values of the corresponding block in raster-scan order (element 1 is the pixel in row 1, column 1; element 2 the pixel in row 1, column 2; and so on).
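A sketch of the partitioning and vectorization of step ③; the row-major order in which the blocks themselves are enumerated is an assumption (any fixed order works, provided it is the same for all five images):

```python
import numpy as np

def blocks_to_color_vectors(img_rgb, b=8):
    """Step 3 for I_R and I_D: cut an H x W x 3 image into non-overlapping
    b x b blocks and return a (3*b*b) x N matrix whose j-th column stacks the
    raster-scanned R, G and B values of block j. Leftover rows/columns that
    do not fill a whole block are discarded."""
    h, w = img_rgb.shape[0] // b * b, img_rgb.shape[1] // b * b
    cols = []
    for y in range(0, h, b):
        for x in range(0, w, b):
            blk = img_rgb[y:y + b, x:x + b, :]
            # raster-scan each channel, then stack R, G, B -> 192 values for b = 8
            cols.append(np.concatenate([blk[:, :, c].ravel() for c in range(3)]))
    return np.array(cols, dtype=np.float64).T

def blocks_to_value_vectors(img_gray, b=8):
    """Same partition for a single-channel map (M_R, M_D or M_F): 64 x N."""
    h, w = img_gray.shape[0] // b * b, img_gray.shape[1] // b * b
    cols = [img_gray[y:y + b, x:x + b].ravel()
            for y in range(0, h, b) for x in range(0, w, b)]
    return np.array(cols, dtype=np.float64).T
```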
④ Compute the saliency of each image block of MF, denoting the saliency of the j-th image block of MF by d_j, d_j = max_{1≤i≤64} z_j^F(i), where z_j^F(i), the value of the i-th element of z_j^F, is the pixel value of the i-th pixel of the j-th image block of MF.
Then sort the saliencies of all image blocks of MF in descending order and record the indices of the image blocks corresponding to the t1 largest saliencies, where t1 = λ1×N, N is the total number of blocks, and λ1 ∈ (0,1] is the block-selection ratio; λ1 = 0.7 is taken in this embodiment.
Next, find the image blocks of IR with the t1 recorded indices and define them as reference image blocks; find the image blocks of ID with those indices and define them as distorted image blocks; find the image blocks of MR with those indices and define them as reference salient image blocks; and find the image blocks of MD with those indices and define them as distorted salient image blocks.
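A sketch of step ④, using the block saliency d_j = max_i z_j^F(i) reconstructed from the description above ("the maximum saliency of each image block"):

```python
import numpy as np

def select_salient_blocks(zf, lam1=0.7):
    """Step 4: zf is the 64 x N matrix whose columns are the z_j^F vectors
    of M_F. Block saliency d_j is the maximum pixel value within the block;
    returns the indices of the t1 = floor(lam1 * N) most salient blocks."""
    d = zf.max(axis=0)               # d_j for every block
    t1 = int(lam1 * zf.shape[1])
    return np.argsort(d)[::-1][:t1]  # indices of the t1 largest d_j
```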
⑤ Use absolute differences to measure the saliency difference between each reference image block of IR and the corresponding distorted image block of ID, denoting the saliency difference between the t'-th reference image block and the t'-th distorted image block by e_t', e_t' = Σ_{i=1}^{64} |z_t'^R(i) − z_t'^D(i)|, where t' starts at 1, 1≤t'≤t1, |·| denotes absolute value, z_t'^R(i), the value of the i-th element of the pixel-value vector z_t'^R, is the pixel value of the i-th pixel of the t'-th reference salient image block of MR, and z_t'^D(i), the value of the i-th element of the pixel-value vector z_t'^D, is the pixel value of the i-th pixel of the t'-th distorted salient image block of MD.
Then sort the t1 measured saliency differences in descending order and determine the reference and distorted image blocks corresponding to the t2 largest differences. Define the t2 selected reference image blocks as visually important reference blocks, and take the matrix formed by their color vectors as the visually important reference block matrix YR; define the t2 selected distorted image blocks as visually important distorted blocks, and take the matrix formed by their color vectors as the visually important distorted block matrix YD. Here t2 = λ2×t1, where λ2 ∈ (0,1] is the selection ratio for the reference and distorted blocks (λ2 = 0.6 is taken in this embodiment); YR and YD are both of dimension 192×t2; the t''-th column vector of YR is the color vector of the t''-th selected reference block and the t''-th column vector of YD is the color vector of the t''-th selected distorted block, with t'' starting at 1 and 1≤t''≤t2.
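A sketch of step ⑤, assembling Y_R and Y_D from the t2 block pairs with the largest saliency differences:

```python
import numpy as np

def select_important_blocks(xr, xd, zr, zd, idx, lam2=0.6):
    """Step 5: xr/xd are the 192 x N color-vector matrices of I_R/I_D, zr/zd
    the 64 x N pixel-value matrices of M_R/M_D, idx the t1 block indices from
    step 4. The saliency difference e_t' of each candidate pair is the sum of
    absolute pixel differences between the two saliency-map blocks; the
    t2 = floor(lam2 * t1) pairs with the largest e_t' form Y_R and Y_D."""
    e = np.abs(zr[:, idx] - zd[:, idx]).sum(axis=0)  # e_t' per candidate
    t2 = int(lam2 * len(idx))
    keep = idx[np.argsort(e)[::-1][:t2]]             # t2 largest differences
    return xr[:, keep], xd[:, keep]                  # Y_R, Y_D (192 x t2 each)
```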
⑥ Center YR by subtracting from the value of each element of every column vector the mean of the values of all elements of that column vector, denoting the centered matrix by Y, of dimension 192×t2.
Then apply the existing principal component analysis (PCA) to the centered Y for dimensionality reduction and whitening, obtaining the reduced and whitened matrix, denoted Yw, Yw = W×Y, where Yw is of dimension M×t2, W is the whitening matrix of dimension M×192, and 1 < M << 192, the symbol "<<" meaning much smaller than.
In this embodiment the principal component analysis is carried out by eigenvalue decomposition of the sample covariance matrix, i.e. Yw in step ⑥ is obtained as follows: ⑥_1 let C denote the covariance matrix of Y, C = (1/t2)·Y·Y^T, where C is of dimension 192×192 and Y^T is the transpose of Y; ⑥_2 perform eigenvalue decomposition of C to obtain all eigenvalues and the corresponding eigenvectors, each eigenvector being of dimension 192×1; ⑥_3 take the M largest eigenvalues and the corresponding M eigenvectors to reduce the dimensionality of Y; M = 8 is taken in this embodiment, i.e. only the first 8 principal components are used for training, so the dimension drops from 192 to M = 8; ⑥_4 compute the whitening matrix W from the M largest eigenvalues and the corresponding M eigenvectors, W = Ψ^(−1/2)×E^T, where Ψ is of dimension M×M, Ψ = diag(ψ1,...,ψM), E is of dimension 192×M, E = [e1,...,eM], diag() denotes a diagonal matrix, ψ1,...,ψM are the 1st,...,M-th largest eigenvalues, and e1,...,eM are the corresponding eigenvectors; ⑥_5 whiten Y with W to obtain the reduced and whitened matrix Yw, Yw = W×Y.
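A sketch of the centering and PCA whitening of step ⑥; the 1/t2 normalization of the covariance is an assumption consistent with a sample covariance matrix:

```python
import numpy as np

def pca_whiten(yr, m=8):
    """Step 6: subtract from every element of each column of Y_R the mean of
    that column, then whiten with the m largest principal components.
    Returns the m x 192 whitening matrix W and Y_w = W Y."""
    y = yr - yr.mean(axis=0, keepdims=True)            # column-wise centering -> Y
    c = y @ y.T / y.shape[1]                           # C = (1/t2) Y Y^T, 192 x 192
    vals, vecs = np.linalg.eigh(c)                     # eigenvalues, ascending
    vals, vecs = vals[::-1][:m], vecs[:, ::-1][:, :m]  # the m largest
    w = np.diag(vals ** -0.5) @ vecs.T                 # W = Psi^(-1/2) E^T
    return w, w @ y                                    # W (m x 192), Y_w (m x t2)
```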
⑦ Train on Yw online with the existing orthogonal locality preserving projection (OLPP) algorithm to obtain the feature basis matrix of Yw, denoted D, of dimension M×192.
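The full OLPP recursion of Cai et al. is not reproduced here. The following simplified stand-in learns a locality-preserving projection in the whitened space and orthonormalizes it with QR; the k-nearest-neighbor heat-kernel graph, the ridge term, and the composition D = A^T W (chosen to be consistent with the stated M×192 dimension of D) are all assumptions of this sketch, not the patent's exact procedure:

```python
import numpy as np
from scipy.linalg import eigh
from scipy.spatial.distance import cdist

def olpp_like_basis(yw, w, k=5, t=1.0):
    """Step 7 (simplified stand-in): yw is the M x t2 whitened data, w the
    M x 192 whitening matrix. Returns a feature basis D of dimension M x 192
    mapping 192-d color vectors to manifold feature vectors."""
    m, n = yw.shape
    d2 = cdist(yw.T, yw.T, 'sqeuclidean')      # pairwise squared distances
    s = np.exp(-d2 / t)                        # heat-kernel affinities
    nn = np.argsort(d2, axis=1)[:, 1:k + 1]    # k nearest neighbors per sample
    mask = np.zeros_like(s, dtype=bool)
    mask[np.repeat(np.arange(n), k), nn.ravel()] = True
    s = np.where(mask | mask.T, s, 0.0)        # symmetric k-NN graph weights
    dg = np.diag(s.sum(axis=1))                # degree matrix
    lap = dg - s                               # graph Laplacian L = D - S
    a_mat = yw @ lap @ yw.T                    # X L X^T
    b_mat = yw @ dg @ yw.T + 1e-8 * np.eye(m)  # X D X^T, ridge for stability
    _, a = eigh(a_mat, b_mat)                  # generalized eigenvectors,
                                               # smallest eigenvalues first
    q, _ = np.linalg.qr(a)                     # orthonormalize the projection
    return q.T @ w                             # D = A^T W, M x 192
```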
⑧ From YR and D, compute the manifold feature vector of each visually important reference block, denoting the manifold feature vector of the t''-th visually important reference block by u_t'', u_t'' = D×y_t''^R, where u_t'' is of dimension M×1 and y_t''^R is the t''-th column vector of YR; and from YD and D, compute the manifold feature vector of each visually important distorted block, denoting the manifold feature vector of the t''-th visually important distorted block by v_t'', v_t'' = D×y_t''^D, where v_t'' is of dimension M×1 and y_t''^D is the t''-th column vector of YD.
⑨ From the manifold feature vectors of all visually important reference blocks and of all visually important distorted blocks, compute the objective quality score of ID, denoted Score, Score = (1/t2)·Σ_{t''=1}^{t2} [(1/M)·Σ_{m=1}^{M} (2·u_t''(m)·v_t''(m) + C)/(u_t''(m)^2 + v_t''(m)^2 + C)], where 1≤m≤M, u_t''(m) denotes the value of the m-th element of u_t'', v_t''(m) denotes the value of the m-th element of v_t'', and C is a small constant that stabilizes the result; C = 0.04 is taken in this embodiment.
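A sketch of steps ⑧ and ⑨ together. The element-wise similarity (2uv + C)/(u^2 + v^2 + C), averaged over all elements and blocks, follows the reconstructed Score formula above; the exact pooling used in the patent may differ:

```python
import numpy as np

def manifold_score(d, yr, yd, c=0.04):
    """Steps 8-9: project the visually important blocks onto the feature
    basis (u = D y^R, v = D y^D) and pool an SSIM-style similarity over all
    M elements and t2 blocks."""
    u = d @ yr                                       # M x t2 reference features
    v = d @ yd                                       # M x t2 distorted features
    sim = (2.0 * u * v + c) / (u ** 2 + v ** 2 + c)  # element-wise similarity
    return float(sim.mean())                         # Score
```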
Experiments were conducted to further illustrate the feasibility and effectiveness of the method of the present invention.
In this embodiment, three public authoritative image databases are selected for the experiments: the LIVE, CSIQ and TID2008 image databases. Table 1 details the indices of each database, including the number of reference images, the number of distorted images, and the number of distortion types. Each database provides the mean subjective score difference of every distorted image.
Table 1 Indices of the authoritative image databases
Next, the correlation between the objective quality score of each distorted image obtained with the method of the present invention and its mean subjective score difference is analyzed. Three objective criteria commonly used to assess image quality evaluation methods serve as evaluation indices: the Pearson linear correlation coefficient (PLCC), which reflects prediction accuracy; the Spearman rank-order correlation coefficient (SROCC), which reflects prediction monotonicity; and the root mean squared error (RMSE), which reflects prediction consistency. PLCC and SROCC take values in [0,1]: the closer to 1, the better the objective evaluation method, and vice versa. The smaller the RMSE, the more accurate the prediction of the objective evaluation method and the better its performance, and vice versa.
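A sketch of this benchmarking protocol, using one standard form of the five-parameter logistic mapping (the patent does not spell out its exact equation, so the function below is an assumption):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import pearsonr, spearmanr

def logistic5(x, b1, b2, b3, b4, b5):
    """One common five-parameter logistic used in IQA benchmarking."""
    return b1 * (0.5 - 1.0 / (1.0 + np.exp(b2 * (x - b3)))) + b4 * x + b5

def evaluate(scores, dmos):
    """Fit objective scores to subjective scores with the logistic mapping,
    then report PLCC, SROCC and RMSE."""
    p0 = [np.max(dmos), 1.0, np.mean(scores), 1.0, np.mean(dmos)]
    popt, _ = curve_fit(logistic5, scores, dmos, p0=p0, maxfev=20000)
    pred = logistic5(scores, *popt)
    plcc = pearsonr(pred, dmos)[0]       # prediction accuracy
    srocc = spearmanr(scores, dmos)[0]   # monotonicity (rank-based, fit-free)
    rmse = float(np.sqrt(np.mean((pred - dmos) ** 2)))
    return plcc, srocc, rmse
```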
For all distorted images in the above LIVE, CSIQ and TID2008 image databases, the objective quality evaluation value of each distorted image is calculated in the same way, following steps ① through ⑨ of the method of the present invention. The correlation between the objective quality evaluation values of the distorted images obtained in the experiment and the mean subjective score differences is then analyzed: the objective quality evaluation values are first obtained, then fitted nonlinearly with a five-parameter logistic function, and finally the performance indicator values between the objective evaluation results and the mean subjective score differences are obtained. To verify the effectiveness of the present invention, the method of the present invention is compared on the three image databases listed in Table 1 with six existing full-reference objective image quality evaluation methods of relatively advanced performance. The PLCC, SROCC and RMSE coefficients characterizing the evaluation performance on the three image databases are listed in Table 2. The six methods involved in the comparison in Table 2 are: the classic PSNR method; the structural-similarity-based evaluation method (SSIM) proposed by Z. Wang; the degradation-model-based method (IFC) proposed by N. Damera-Venkata; the information-fidelity-criterion-based method (VIF) proposed by H.R. Sheikh; the wavelet-domain visual signal-to-noise ratio method (VSNR) proposed by D.M. Chandler; and the sparse-representation-based image quality evaluation method (SPARQ) proposed by T. Guha. As the data listed in Table 2 show, the performance of the method of the present invention is second only to the VIF method on the LIVE image database and is the best on both the CSIQ and TID2008 image databases; therefore, on all three image databases, the objective quality evaluation values of the distorted images calculated by the method of the present invention correlate well with the mean subjective score differences. In addition, the PLCC and SROCC values on the LIVE and CSIQ image databases both exceed 0.94, and the PLCC and SROCC values on the TID2008 image database, whose distortion types are more complex, also reach 0.82; after weighted averaging, the performance of the method of the present invention improves on all six existing methods to varying degrees. This shows that the objective evaluation results of the method of the present invention are consistent with the results of human subjective perception and that the evaluation effect is stable, fully demonstrating the effectiveness of the method of the present invention.
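The patent names a five-parameter logistic function but does not spell out its exact parameterisation; a common choice in the IQA literature (e.g., in VQEG-style evaluations) is f(x) = β1(1/2 − 1/(1 + exp(β2(x − β3)))) + β4·x + β5. The following is a sketch of the fitting step under that assumption, again with illustrative names rather than anything prescribed by the patent.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic5(x, b1, b2, b3, b4, b5):
    # Assumed VQEG-style five-parameter logistic:
    # f(x) = b1 * (1/2 - 1/(1 + exp(b2 * (x - b3)))) + b4 * x + b5
    return b1 * (0.5 - 1.0 / (1.0 + np.exp(b2 * (x - b3)))) + b4 * x + b5

def map_scores(objective_scores, dmos):
    """Fit the logistic to (objective score, DMOS) pairs and return the
    mapped scores against which PLCC and RMSE are then computed."""
    x = np.asarray(objective_scores, dtype=float)
    y = np.asarray(dmos, dtype=float)
    # Rough initial guess so the optimiser starts near a sensible scale.
    p0 = [y.max() - y.min(), 1.0, x.mean(), 0.0, y.mean()]
    params, _ = curve_fit(logistic5, x, y, p0=p0, maxfev=10000)
    return logistic5(x, *params)

# Example use together with performance_indicators() from the sketch above:
#   mapped = map_scores(scores, dmos)
#   plcc, srocc, rmse = performance_indicators(mapped, dmos)
```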
Table 2 Performance comparison between the method of the present invention and existing objective image quality evaluation methods
Fig. 2a shows the scatter plot and fitted curve of the method of the present invention on the LIVE image database, Fig. 2b shows the scatter plot and fitted curve on the CSIQ image database, and Fig. 2c shows the scatter plot and fitted curve on the TID2008 image database. As can be clearly seen from Figs. 2a, 2b and 2c, the scatter points are all distributed near the fitted curve and exhibit good monotonicity and continuity.
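Plots of the kind shown in Figs. 2a to 2c can be reproduced from the fitted mapping above. The following matplotlib sketch, with illustrative function and axis names, overlays the fitted logistic curve on the scatter of objective scores against DMOS for one database.

```python
import numpy as np
import matplotlib.pyplot as plt

def scatter_fit_plot(objective_scores, dmos, mapped, database_name):
    """Scatter of (objective score, DMOS) pairs with the fitted curve overlaid."""
    x = np.asarray(objective_scores, dtype=float)
    y = np.asarray(dmos, dtype=float)
    m = np.asarray(mapped, dtype=float)
    order = np.argsort(x)  # sort so the curve is drawn left to right
    plt.scatter(x, y, s=12, alpha=0.6, label="distorted images")
    plt.plot(x[order], m[order], "r-", label="five-parameter logistic fit")
    plt.xlabel("objective quality evaluation value")
    plt.ylabel("DMOS")
    plt.title(database_name)
    plt.legend()
    plt.show()
```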
Claims (4)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610202181.5A CN105913413B (en) | 2016-03-31 | 2016-03-31 | An Objective Evaluation Method for Color Image Quality Based on Online Manifold Learning |
US15/197,604 US9846818B2 (en) | 2016-03-31 | 2016-06-29 | Objective assessment method for color image quality based on online manifold learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610202181.5A CN105913413B (en) | 2016-03-31 | 2016-03-31 | An Objective Evaluation Method for Color Image Quality Based on Online Manifold Learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105913413A CN105913413A (en) | 2016-08-31 |
CN105913413B true CN105913413B (en) | 2019-02-22 |
Family
ID=56745319
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610202181.5A Active CN105913413B (en) | 2016-03-31 | 2016-03-31 | An Objective Evaluation Method for Color Image Quality Based on Online Manifold Learning |
Country Status (2)
Country | Link |
---|---|
US (1) | US9846818B2 (en) |
CN (1) | CN105913413B (en) |
Families Citing this family (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107220962B (en) * | 2017-04-07 | 2020-04-21 | 北京工业大学 | An image detection method and device for tunnel cracks |
CN108921824A (en) * | 2018-06-11 | 2018-11-30 | 中国科学院国家空间科学中心 | A kind of color image quality evaluation method based on rarefaction feature extraction |
CN109003256B (en) * | 2018-06-13 | 2022-03-04 | 天津师范大学 | A Joint Sparse Representation-Based Quality Evaluation Method for Multi-Focus Image Fusion |
CN109003265B (en) * | 2018-07-09 | 2022-02-11 | 嘉兴学院 | A non-reference image quality objective evaluation method based on Bayesian compressed sensing |
CN109636397A (en) * | 2018-11-13 | 2019-04-16 | 平安科技(深圳)有限公司 | Transit trip control method, device, computer equipment and storage medium |
CN109523542B (en) * | 2018-11-23 | 2022-12-30 | 嘉兴学院 | No-reference color image quality evaluation method based on color vector included angle LBP operator |
CN109754391B (en) * | 2018-12-18 | 2021-10-22 | 北京爱奇艺科技有限公司 | Image quality evaluation method and device and electronic equipment |
CN109978834A (en) * | 2019-03-05 | 2019-07-05 | 方玉明 | A kind of screen picture quality evaluating method based on color and textural characteristics |
CN110189243B (en) * | 2019-05-13 | 2023-03-24 | 杭州电子科技大学上虞科学与工程研究院有限公司 | Color image robust watermarking method based on tensor singular value decomposition |
CN110147792B (en) * | 2019-05-22 | 2021-05-28 | 齐鲁工业大学 | High-speed detection system and method for drug packaging characters based on memory optimization |
CN111127387B (en) * | 2019-07-11 | 2024-02-09 | 宁夏大学 | Quality evaluation method for reference-free image |
CN110399887B (en) * | 2019-07-19 | 2022-11-04 | 合肥工业大学 | Representative color extraction method based on visual saliency and histogram statistical technology |
US12079976B2 (en) * | 2020-02-05 | 2024-09-03 | Eigen Innovations Inc. | Methods and systems for reducing dimensionality in a reduction and prediction framework |
CN111354048B (en) * | 2020-02-24 | 2023-06-20 | 清华大学深圳国际研究生院 | Quality evaluation method and device for obtaining pictures by facing camera |
CN111881758B (en) * | 2020-06-29 | 2021-03-19 | 普瑞达建设有限公司 | Parking management method and system |
CN112233065B (en) * | 2020-09-15 | 2023-02-24 | 西北大学 | Total-blind image quality evaluation method based on multi-dimensional visual feature cooperation under saliency modulation |
US20240054607A1 (en) * | 2021-09-20 | 2024-02-15 | Meta Platforms, Inc. | Reducing the complexity of video quality metric calculations |
CN114170205B (en) * | 2021-12-14 | 2024-10-18 | 天津科技大学 | Contrast distortion image quality evaluation method fusing image entropy and structural similarity characteristics |
CN114418972B (en) * | 2022-01-06 | 2024-09-10 | 腾讯科技(深圳)有限公司 | Picture quality detection method, device, equipment and storage medium |
CN117456208B (en) * | 2023-11-07 | 2024-06-25 | 广东新裕信息科技有限公司 | Double-flow sketch quality evaluation method based on significance detection |
CN118890522A (en) * | 2024-06-25 | 2024-11-01 | 北京红云融通技术有限公司 | Video quality evaluation method, device, medium and equipment |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104036501A (en) * | 2014-06-03 | 2014-09-10 | 宁波大学 | Three-dimensional image quality objective evaluation method based on sparse representation |
CN104408716A (en) * | 2014-11-24 | 2015-03-11 | 宁波大学 | Three-dimensional image quality objective evaluation method based on visual fidelity |
CN105447884A (en) * | 2015-12-21 | 2016-03-30 | 宁波大学 | Objective image quality evaluation method based on manifold feature similarity |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8340437B2 (en) * | 2007-05-29 | 2012-12-25 | University Of Iowa Research Foundation | Methods and systems for determining optimal features for classifying patterns or objects in images |
US8848970B2 (en) * | 2011-04-26 | 2014-09-30 | Digimarc Corporation | Salient point-based arrangements |
US9454712B2 (en) * | 2014-10-08 | 2016-09-27 | Adobe Systems Incorporated | Saliency map computation |
2016
- 2016-03-31 CN CN201610202181.5A patent/CN105913413B/en active Active
- 2016-06-29 US US15/197,604 patent/US9846818B2/en not_active Expired - Fee Related
Also Published As
Publication number | Publication date |
---|---|
US20170286798A1 (en) | 2017-10-05 |
CN105913413A (en) | 2016-08-31 |
US9846818B2 (en) | 2017-12-19 |
Similar Documents
Publication | Title |
---|---|
CN105913413B (en) | An Objective Evaluation Method for Color Image Quality Based on Online Manifold Learning |
CN105447884B (en) | A kind of method for objectively evaluating image quality based on manifold characteristic similarity |
CN107464222B (en) | No-reference high dynamic range image objective quality assessment method based on tensor space |
CN104376565B (en) | Based on discrete cosine transform and the non-reference picture quality appraisement method of rarefaction representation |
CN102609681A (en) | Face recognition method based on dictionary learning models |
CN111127374A (en) | A Pan-sharpening Method Based on Multi-scale Dense Networks |
CN103761531A (en) | Sparse-coding license plate character recognition method based on shape and contour features |
CN105260998A (en) | MCMC sampling and threshold low-rank approximation-based image de-noising method |
CN108596890B (en) | Full-reference image quality objective evaluation method based on vision measurement rate adaptive fusion |
CN106203256A (en) | A kind of low resolution face identification method based on sparse holding canonical correlation analysis |
CN103077506A (en) | Local and non-local combined self-adaption image denoising method |
CN106981058A (en) | A kind of optics based on sparse dictionary and infrared image fusion method and system |
CN108389189A (en) | Stereo image quality evaluation method dictionary-based learning |
CN103839075B (en) | SAR image classification method based on united sparse representation |
CN104933425B (en) | A kind of hyperspectral data processing method |
CN105574901A (en) | General reference-free image quality evaluation method based on local contrast mode |
CN106599903B (en) | Signal reconstruction method for weighted least square dictionary learning based on correlation |
WO2016145571A1 (en) | Method for blind image quality assessment based on conditional histogram codebook |
CN106599833A (en) | Field adaptation and manifold distance measurement-based human face identification method |
CN108596906B (en) | A full-reference screen image quality evaluation method based on sparse local-preserving projection |
CN107481221B (en) | Full-reference Hybrid Distortion Image Quality Evaluation Method Based on Texture and Cartoon Sparse Representation |
CN106210710A (en) | A kind of stereo image vision comfort level evaluation methodology based on multi-scale dictionary |
CN109815889B (en) | Cross-resolution face recognition method based on feature representation set |
CN109064403A (en) | Fingerprint image super-resolution method based on classification coupling dictionary rarefaction representation |
Lin et al. | A CNN-based quality model for image interpolation |
Legal Events
Code | Title |
---|---|
C06 | Publication |
PB01 | Publication |
C10 | Entry into substantive examination |
SE01 | Entry into force of request for substantive examination |
GR01 | Patent grant |