CN108765366A - A no-reference color image quality evaluation method based on autonomous learning - Google Patents
A no-reference color image quality evaluation method based on autonomous learning
- Publication number
- CN108765366A (application CN201810289172.3A)
- Authority
- CN
- China
- Prior art keywords
- image
- dictionary
- atoms
- blocks
- image block
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30168—Image quality inspection
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Quality & Reliability (AREA)
- Image Analysis (AREA)
Abstract
The invention relates to a no-reference color image quality evaluation method based on autonomous learning, belonging to the field of image processing. The method comprises: first, representing the color images of the training set as quaternion matrices by means of quaternion theory; second, dividing the training images into blocks and obtaining the local features of the image blocks from human visual perception; then, constructing an image dictionary with an autonomous learning strategy, taking the most representative image blocks as the atoms of the dictionary; finally, after applying the same preprocessing to the color image to be evaluated, computing the maximum similarity between the image to be evaluated and the image dictionary, obtaining the final quality evaluation score through support vector regression, and updating the image dictionary in real time. The invention fully accounts for the representativeness of the atoms in the image dictionary and for its capacity for autonomous learning; it can evaluate images of different distortion types at the same time, and its evaluation results tend to agree with subjective evaluation.
Description
Technical Field
The invention belongs to the technical field of image processing, in particular to the field of image quality evaluation methods, and relates to a no-reference image quality evaluation method based on autonomous learning.
Background Art
Image quality evaluation has always been a key technology in the field of image processing: it can be used to assess the effect of an image processing method, or to choose an appropriate processing method according to image quality. Image quality evaluation therefore occupies a very important position in image processing.
Depending on whether a reference image is required, image quality evaluation algorithms fall into three categories: full-reference, reduced-reference and no-reference quality evaluation. Full-reference evaluation relies on complete reference image information, reduced-reference evaluation requires partial reference image information, and no-reference methods need no reference image at all. Because no-reference methods better match practical applications, they have received the widest attention and research.
Current no-reference methods are essentially studied on grayscale images. Besides the contrast distortions of grayscale images, however, images also undergo color distortions such as hue shift and saturation loss, which makes no-reference color image quality evaluation closer to practice. Existing no-reference color image quality evaluation methods usually either convert the image to grayscale, or measure the quality of each color component separately and combine the measurements with different weights, thereby applying grayscale metrics to the color image. The former loses information during the grayscale conversion and ignores the color content of the image; with the latter it is difficult to determine the weights and find the best color model. The present invention therefore starts directly from the three primary colors of the color image, so as to achieve effective no-reference evaluation of color images.
Summary of the Invention
In view of this, the object of the present invention is to provide a no-reference color image quality evaluation method based on autonomous learning, in which the atoms of the dictionary are highly representative, so that images with different distortion types can be evaluated effectively; at the same time, as the number of test samples increases, the method autonomously learns and updates the image dictionary, giving it wide applicability.
To achieve the above object, the present invention provides the following technical solution:
A no-reference color image quality evaluation method based on autonomous learning: first, an autonomous learning strategy selects highly representative samples to construct an image dictionary; then the quality evaluation score is obtained from the mapping relationship between the constructed dictionary and the image to be evaluated; finally, the autonomous learning strategy updates the image dictionary in real time.
The method specifically comprises the following steps:
S1: For a color image, represent the pixels of the red (R), green (G) and blue (B) color channels by a single hypercomplex number using quaternion theory, obtaining the quaternion matrix of the color image;
S2: Divide the image into blocks, extract the local features of each block according to the visual characteristics of the human eye, and eliminate the correlation between image blocks;
S3: Using the autonomous learning strategy, select the image block with the lowest similarity among the blocks and judge its difference from all atoms in the dictionary; if the difference is large enough, put it into the dictionary, and repeat until the dictionary dimension is reached, then output the dictionary;
S4: Obtain the final quality evaluation score from the mapping relationship between the image to be evaluated and the dictionary, using the support vector regression (SVR) method;
S5: Update the dictionary in real time with the autonomous learning strategy, according to the image to be evaluated and its computed quality evaluation score.
Further, step S1 specifically comprises:
Take a group of color images with known subjective evaluation scores (DMOS values) as training samples. For each color image, represent the pixels of the red (R), green (G) and blue (B) color channels by the three imaginary parts of a quaternion, with the real part set to 0, so that every pixel of the color image is expressed as a pure quaternion:
f(x, y) = fR(x, y)·i + fG(x, y)·j + fB(x, y)·k
where x and y are the coordinates of the pixel in the image, fR(x, y), fG(x, y) and fB(x, y) are the pixel values at (x, y) in the corresponding color channels, and i, j, k are the three imaginary units of the quaternion.
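For illustration only (the patent itself provides no source code), step S1 can be sketched in Python with NumPy, storing each pixel as a pure quaternion whose three imaginary parts carry the R, G, B channels; the function name and the H×W×4 array layout are illustrative choices, not part of the patent:

```python
import numpy as np

def to_pure_quaternion(rgb: np.ndarray) -> np.ndarray:
    """Represent an H x W x 3 RGB image as an H x W x 4 pure-quaternion
    matrix: component 0 is the real part (always 0), and components
    1, 2, 3 are the i, j, k parts, i.e.
    f(x, y) = fR(x, y)*i + fG(x, y)*j + fB(x, y)*k."""
    h, w, _ = rgb.shape
    q = np.zeros((h, w, 4), dtype=np.float64)
    q[..., 1] = rgb[..., 0]  # fR -> i component
    q[..., 2] = rgb[..., 1]  # fG -> j component
    q[..., 3] = rgb[..., 2]  # fB -> k component
    return q
```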
Further, step S2 specifically comprises the following steps:
S21: Decompose each image into non-overlapping blocks of size d×d. Let xc be the center pixel of a block and x1, x2, …, xn the other pixels of the block; subtracting xc from each of the other pixels gives the pixel-difference vector y' of the block:
y' = (x1 − xc, x2 − xc, …, xn − xc)
S22: Since the response of the human eye to an image has a logarithmic nonlinearity, the pixel-difference vector of an image block can be expressed as a local feature vector through this nonlinear perception characteristic of the human eye:
z = sign(y')·log(|y'| + 1)
S23: Use the difference between image blocks to eliminate similar blocks, where the difference can be obtained from the angle between the blocks, i.e.
D(zi) = min(j≠i) arccos( zi·zj / (||zi|| ||zj||) )
where D(zi) denotes the difference between image block zi and the other blocks of the training set U, zi·zj denotes the inner product between image blocks, and ||·|| denotes the modulus of a vector. If D(zi) = 0, the two image blocks are identical, and the latter block can be deleted to eliminate the similarity between the blocks;
S24: Whiten the image blocks with principal component analysis (PCA) to eliminate their redundant information:
xPCAwhite,i = xrot,i / sqrt(λi + C)
where xi is the original image feature, xrot,i is its projection onto the principal components, xPCAwhite,i is the whitened image feature, λi is the corresponding eigenvalue of the PCA transform, C is a small constant that keeps the denominator from being 0, and m is the total number of image blocks over which the PCA statistics are computed.
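A minimal Python sketch of step S2, again for illustration: it assumes the three imaginary components of a block are flattened into one vector, takes the spatial center pixel as xc (the patent does not fix how the center is chosen for even d), and uses the standard PCA-whitening recipe for S24:

```python
import numpy as np

def block_features(q_img: np.ndarray, d: int = 8) -> np.ndarray:
    """Cut a pure-quaternion image (H x W x 4) into non-overlapping d x d
    blocks and compute z = sign(y') * log(|y'| + 1), where y' holds the
    differences between the block's pixels and its center pixel."""
    H, W = q_img.shape[:2]
    feats = []
    for r in range(0, H - d + 1, d):
        for c in range(0, W - d + 1, d):
            blk = q_img[r:r + d, c:c + d, 1:]      # i, j, k components
            centre = blk[d // 2, d // 2, :]        # one choice of center pixel
            y = (blk - centre).reshape(-1)         # pixel-difference vector y'
            feats.append(np.sign(y) * np.log(np.abs(y) + 1.0))
    return np.asarray(feats)

def pca_whiten(X: np.ndarray, C: float = 1e-5) -> np.ndarray:
    """PCA-whiten the m x n block-feature matrix; the small constant C
    keeps the denominator away from zero (step S24)."""
    X = X - X.mean(axis=0)
    cov = X.T @ X / X.shape[0]                     # covariance over the m blocks
    eigval, eigvec = np.linalg.eigh(cov)
    return (X @ eigvec) / np.sqrt(eigval + C)
```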
Further, step S3 specifically comprises the following steps:
S31: Initialization: let the training set be U, let the number of atoms to be constructed be K, and let the dictionary S = Φ, where Φ is the empty set;
S32: Estimate the similarity between the image blocks of the training set U by computing the Euclidean distance and the angle between blocks:
d(zi, zj) = ||zi − zj||,  θ(zi, zj) = arccos( zi·zj / (||zi|| ||zj||) )
where R(zi) denotes the minimum similarity between image block zi and the other blocks of U, d(zi, zj) is the Euclidean distance between blocks zi and zj, and θ(zi, zj) is the angle between them;
S33: Sort the image blocks of the training set U by similarity from small to large, put the first K blocks into the dictionary S in that order, and thus construct the initial dictionary;
S34: Compute the minimum difference value d between the (K+1)-th image block zK+1 and all atoms of the dictionary S, where the difference is the angle between zK+1 and a dictionary atom:
d = min(j) arccos( zK+1·sj / (||zK+1|| ||sj||) )
where sj denotes an atom of the dictionary S, zK+1·sj denotes the inner product between the image block and atom sj, and ||·|| denotes the modulus of a vector.
Likewise, compute with the same formula the minimum difference value D(si) among the atoms of the dictionary S, where si is the atom of the dictionary with the smallest difference from the other atoms.
S35: If the minimum difference value d > D(si), update the dictionary by replacing the atom si of the dictionary S with the image block zK+1, then set K = K + 1 and return to S34; otherwise go to S36;
S36: Output the dictionary.
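The dictionary construction of S31–S36 can be sketched as follows; because the scraped text does not preserve the exact similarity formula, the sketch combines Euclidean distance and angle by multiplication, which is an assumption:

```python
import numpy as np

def angle(a: np.ndarray, b: np.ndarray) -> float:
    """Angle between two feature vectors, the patent's difference measure."""
    c = a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return float(np.arccos(np.clip(c, -1.0, 1.0)))

def build_dictionary(Z: np.ndarray, K: int) -> np.ndarray:
    """Greedy construction (S31-S36): seed the dictionary with the K least
    similar blocks, then let every remaining block challenge the
    least-distinct atom."""
    m = len(Z)
    R = np.full(m, np.inf)                      # min similarity per block (S32)
    for i in range(m):
        for j in range(m):
            if i != j:
                R[i] = min(R[i],
                           np.linalg.norm(Z[i] - Z[j]) * angle(Z[i], Z[j]))
    order = np.argsort(R)                       # least similar first (S33)
    S = [Z[i].copy() for i in order[:K]]
    for idx in order[K:]:                       # S34-S35
        z = Z[idx]
        d = min(angle(z, s) for s in S)         # distinctness of the candidate
        D, worst = min((min(angle(S[a], S[b]) for b in range(K) if b != a), a)
                       for a in range(K))       # least-distinct atom
        if d > D:                               # candidate is more distinct
            S[worst] = z.copy()
    return np.asarray(S)                        # S36
```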
Further, step S4 specifically comprises:
S41: Preprocess the color image to be evaluated as in steps S1 and S2 to obtain the set of local feature vectors of its image blocks, each element of the set being the local feature vector of one image block of the image to be evaluated;
S42: Using the formulas for the Euclidean distance and the angle between image blocks, compute the maximum similarity between each image block of the image to be evaluated and the atoms of the dictionary S, where the maximum similarity value of a block is taken over all atoms of S, with d(·, sj) the Euclidean distance and θ(·, sj) the angle between the block and dictionary atom sj;
S43: Collect the maximum similarities between all image blocks of the image to be evaluated and the atoms of the dictionary S into a single feature vector, feed it into the support vector regression (SVR) method, and predict the image quality score in combination with the DMOS values associated with the corresponding atoms.
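Steps S41–S43 could look like the following sketch, with scikit-learn standing in for the SVR stage; pooling the per-atom maxima into a fixed-length K-dimensional feature, and taking similarity as the reciprocal of distance times angle, are both assumptions made here because the scraped text does not preserve the exact vector matrix:

```python
import numpy as np
from sklearn.svm import SVR

def angle(a: np.ndarray, b: np.ndarray) -> float:
    c = a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return float(np.arccos(np.clip(c, -1.0, 1.0)))

def image_feature(blocks: np.ndarray, S: np.ndarray) -> np.ndarray:
    """K-dimensional feature: for every dictionary atom, the maximum
    similarity over all blocks of the image (S42)."""
    F = np.empty(len(S))
    for k, s in enumerate(S):
        F[k] = max(1.0 / (np.linalg.norm(z - s) * angle(z, s) + 1e-12)
                   for z in blocks)
    return F

# S43, sketched: one feature vector per training image, its DMOS as target.
# X_train = np.stack([image_feature(b, S) for b in train_block_sets])
# svr = SVR(kernel="rbf").fit(X_train, dmos_values)
# score = svr.predict(image_feature(test_blocks, S).reshape(1, -1))
```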
Further, step S5 specifically comprises:
S51: Among the image blocks of the evaluated image (whose quality score is now known), find the block with the lowest similarity to the other blocks as the most representative block; compute its difference from all atoms of the dictionary and determine the minimum difference value d;
S52: Compute the differences among all atoms of the dictionary and determine the atom with the minimum difference and the corresponding difference value t;
S53: If the difference value d is greater than the difference value t, replace the minimum-difference atom of the dictionary with this image block and return to S51; otherwise do not update the dictionary;
S54: Output the updated dictionary.
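A single pass of the S51–S54 update might be sketched as below (the patent loops back to S51 until no swap occurs; the distance-times-angle similarity is the same assumption as in the earlier sketches):

```python
import numpy as np

def angle(a: np.ndarray, b: np.ndarray) -> float:
    c = a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return float(np.arccos(np.clip(c, -1.0, 1.0)))

def update_dictionary(S: list, blocks: np.ndarray) -> list:
    """One online update (S51-S54): the most representative block of the
    newly scored image replaces the least-distinct atom if it is more
    distinct than that atom."""
    m = len(blocks)
    R = [min(np.linalg.norm(blocks[i] - blocks[j]) * angle(blocks[i], blocks[j])
             for j in range(m) if j != i)
         for i in range(m)]                     # S51: most representative block
    z = blocks[int(np.argmin(R))]
    d = min(angle(z, s) for s in S)             # S51: distance to the dictionary
    t, worst = min((min(angle(S[a], S[b]) for b in range(len(S)) if b != a), a)
                   for a in range(len(S)))      # S52: least-distinct atom
    if d > t:                                   # S53: swap if more distinct
        S[worst] = z.copy()
    return S                                    # S54
```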
The beneficial effects of the present invention are as follows: by designing an image dictionary capable of autonomous learning, the method makes the atoms of the dictionary highly representative and keeps the dictionary updated in real time, so that good evaluation results are obtained for color images under different conditions.
Brief Description of the Drawings
To make the object, technical solution and beneficial effects of the present invention clearer, the following drawings are provided:
Fig. 1 is a flow chart of the color image quality evaluation method of the present invention;
Fig. 2 is a flow chart of the autonomous learning of the image dictionary of the present invention.
Detailed Description of the Embodiments
The preferred embodiments of the present invention are described in detail below with reference to the accompanying drawings.
The present invention selects the atoms of the dictionary autonomously, according to the similarity between the image blocks of the color-image training set and the difference between the image blocks and the atoms in the dictionary. Every atom is highly representative, so even a low-dimensional dictionary achieves a marked evaluation effect; at the same time, as the number of training and test samples increases, the method autonomously learns and updates the image dictionary, giving it wide applicability.
As shown in Fig. 1, the no-reference color image quality evaluation method based on autonomous learning of the present invention specifically comprises the following steps:
1. Image preprocessing
1.1. Take a group of color images with known subjective evaluation scores (DMOS values) as the training data set. For each color image, represent the pixels of the red (R), green (G) and blue (B) color channels by the three imaginary parts of a quaternion, with the real part set to 0, so that the color image is represented by a pure quaternion matrix. Compared with the traditional approaches of processing each channel separately or converting to grayscale before processing, the quaternion method better preserves the integrity of the color image.
The present invention can use images from the common LIVE, CSIQ and TID databases as the training database; an image library for the device to be tested can of course also be selected as required, with subjective evaluations organized so that the data used agree with subjective perception.
1.2. Decompose each image into non-overlapping blocks of size d×d. Since the correlation between pixels describes image distortion well, the present invention computes, block by block, the gray-level difference between each pixel and its neighboring pixels.
Because the visual response of the human eye has a logarithmic nonlinearity, a log-transformed image agrees better with human visual perception. The local feature vector of each image block is therefore derived from this nonlinear perception characteristic of the human eye, and the features of each image are then represented in the form of a set.
1.3. Use the difference between image blocks to eliminate similar blocks: when the difference value is 0, the two blocks are identical, and the latter block can be deleted to remove the similarity between blocks. Because image blocks often contain repeated structures, this step guarantees the independence of every training sample while also reducing the amount of computation and improving timeliness.
1.4. To remove the correlation between image features and reduce the redundant information of the blocks, the present invention whitens the image blocks with principal component analysis (PCA):
xPCAwhite,i = xrot,i / sqrt(λi + C)
where xi is the original image feature, xrot,i is its projection onto the principal components, xPCAwhite,i is the whitened image feature, λi is the corresponding eigenvalue of the PCA transform, C is a small constant that keeps the denominator from being 0, and m is the total number of image blocks over which the PCA statistics are computed.
2. Image dictionary construction
2.1. Initialization: let the training set be U, let the number of atoms to be constructed be K, and let the dictionary S = Φ, where Φ is the empty set;
2.2. Estimate the similarity between the image blocks of the training set U by computing the Euclidean distance and the angle between blocks:
d(zi, zj) = ||zi − zj||,  θ(zi, zj) = arccos( zi·zj / (||zi|| ||zj||) )
where R(zi) denotes the minimum similarity between image block zi and the other blocks of U, d(zi, zj) is the Euclidean distance between blocks zi and zj, and θ(zi, zj) is the angle between them;
2.3. Sort the image blocks of the training set U by similarity from small to large, put the first K blocks into the dictionary S in that order, and thus construct the initial dictionary;
2.4. Compute the minimum difference value d between the (K+1)-th image block zK+1 and all atoms of the dictionary S, where the difference is the angle between zK+1 and a dictionary atom:
d = min(j) arccos( zK+1·sj / (||zK+1|| ||sj||) )
where sj denotes an atom of the dictionary S, zK+1·sj denotes the inner product between the image block and atom sj, and ||·|| denotes the modulus of a vector.
Likewise, compute with the same formula the minimum difference value D(si) among the atoms of the dictionary S, where si is the atom with the smallest difference from the other atoms.
2.5. If the minimum difference value d > D(si), update the dictionary by replacing the atom si of the dictionary S with the image block zK+1, then set K = K + 1 and return to 2.4; otherwise go to 2.6;
2.6. Output the dictionary.
3. Image quality evaluation
3.1. Preprocess the color image to be evaluated as in steps 1.1 to 1.4 to obtain the set of local feature vectors of its image blocks, each element of the set being the local feature vector of one block of the image under evaluation;
3.2. Using the Euclidean distance and the angle between image blocks, compute for each block of the image under evaluation its maximum similarity to the atoms of the dictionary S, where the maximum similarity value of a block is taken over all atoms of S, with d(·, sj) the Euclidean distance and θ(·, sj) the angle between the block and dictionary atom sj;
3.3. Collect the maximum similarities between all blocks of the image under evaluation and the atoms of the dictionary S into a single feature vector, feed it into the support vector regression (SVR) method, and predict the image quality score in combination with the DMOS values associated with the corresponding atoms.
4. Image dictionary update, as shown in Fig. 2:
4.1. Among the blocks of the newly evaluated image (whose quality score is now known), find the block with the lowest similarity to the other blocks as the most representative block; compute its difference from all atoms of the dictionary and determine the minimum difference value d;
4.2. Compute the differences among all atoms of the dictionary and determine the atom with the minimum difference and the corresponding difference value t;
4.3. If the difference value d is greater than the difference value t, replace the minimum-difference atom of the dictionary with this image block and return to 4.1; otherwise do not update the dictionary;
4.4. Output the updated dictionary.
Finally, it should be noted that the above preferred embodiments are intended only to illustrate the technical solution of the present invention, not to limit it. Although the present invention has been described in detail through the above preferred embodiments, those skilled in the art should understand that various changes in form and detail may be made without departing from the scope defined by the claims of the present invention.
Claims (6)
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201810289172.3A | 2018-03-30 | 2018-03-30 | A no-reference color image quality assessment method based on self-learning |

Applications Claiming Priority (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201810289172.3A | 2018-03-30 | 2018-03-30 | A no-reference color image quality assessment method based on self-learning |
Publications (2)

| Publication Number | Publication Date |
|---|---|
| CN108765366A | 2018-11-06 |
| CN108765366B | 2021-11-02 |
Family

ID=63980832

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201810289172.3A (granted as CN108765366B, Active) | A no-reference color image quality assessment method based on self-learning | 2018-03-30 | 2018-03-30 |

Country Status (1)

| Country | Link |
|---|---|
| CN | CN108765366B (en) |
Patent Citations (4)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN101650833A * | 2009-09-10 | 2010-02-17 | 重庆医科大学 | Color image quality evaluation method |
| CN102945552A * | 2012-10-22 | 2013-02-27 | 西安电子科技大学 | No-reference image quality evaluation method based on sparse representation in natural scene statistics |
| CN104361574A * | 2014-10-14 | 2015-02-18 | 南京信息工程大学 | No-reference color image quality assessment method on basis of sparse representation |
| CN105139428A * | 2015-08-11 | 2015-12-09 | 鲁东大学 | Quaternion based speeded up robust features (SURF) description method and system for color image |
Non-Patent Citations (3)

| Title |
|---|
| Dong Wu et al., "Image Sharpness Assessment by Sparse Representation", IEEE Transactions on Multimedia * |
| Leida Li et al., "No-reference Image Quality Assessment With A Gradient-induced Dictionary", KSII Transactions on Internet and Information Systems * |
| 张薇 (Zhang Wei), "图像客观质量评价算法及其应用研究" (Research on objective image quality evaluation algorithms and their applications), 万方数据库 (Wanfang Database) * |
Also Published As

| Publication number | Publication date |
|---|---|
| CN108765366B (en) | 2021-11-02 |
Similar Documents

| Publication | Publication Date | Title |
|---|---|---|
| CN108428227B (en) | | No-reference image quality evaluation method based on full convolution neural network |
| CN109389591B (en) | | Color Image Quality Evaluation Method Based on Color Descriptor |
| CN110046673A (en) | | No reference tone mapping graph image quality evaluation method based on multi-feature fusion |
| CN109978854B (en) | | An image quality assessment method for screen content based on edge and structural features |
| CN106709958A (en) | | Gray scale gradient and color histogram-based image quality evaluation method |
| CN109218716B (en) | | A reference-free tone-mapping image quality assessment method based on color statistics and information entropy |
| CN107743225B (en) | | A Method for No-Reference Image Quality Prediction Using Multi-Layer Depth Representations |
| CN103996192A (en) | | Non-reference image quality evaluation method based on high-quality natural image statistical magnitude model |
| CN109816646B (en) | | Non-reference image quality evaluation method based on degradation decision logic |
| US11576478B2 (en) | | Method for simulating the rendering of a make-up product on a body area |
| CN108399620B (en) | | An Image Quality Evaluation Method Based on Low-Rank Sparse Matrix Decomposition |
| AU2020103251A4 (en) | | Method and system for identifying metallic minerals under microscope based on bp nueral network |
| CN110415207A (en) | | A Method of Image Quality Evaluation Based on Image Distortion Type |
| CN113784129B (en) | | Point cloud quality assessment method, encoder, decoder and storage medium |
| CN104361574A (en) | | No-reference color image quality assessment method on basis of sparse representation |
| Ayunts et al. | | No-Reference Quality Metrics for Image Decolorization |
| CN111047618B (en) | | Multi-scale-based non-reference screen content image quality evaluation method |
| CN112508847A (en) | | Image quality evaluation method based on depth feature and structure weighted LBP feature |
| CN112330648B (en) | | Non-reference image quality evaluation method and device based on semi-supervised learning |
| CN112270370B (en) | | Vehicle apparent damage assessment method |
| CN117542121B (en) | | Computer vision-based intelligent training and checking system and method |
| CN108765366A (en) | | A no-reference color image quality evaluation method based on autonomous learning |
| CN113160115A (en) | | Crop disease identification method and system based on improved depth residual error network |
| CN113425254B (en) | | Body fat percentage prediction method for young men based on mixed data input body fat percentage prediction model |
| JP7512150B2 (en) | | Information processing device, information processing method, and program |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |
| 2024-05-06 | TR01 | Transfer of patent right | Patentee after: Shenzhen Wanzhida Technology Transfer Center Co., Ltd., 1003, Building A, Zhiyun Industrial Park, No. 13 Huaxing Road, Henglang Community, Dalang Street, Longhua District, Shenzhen, Guangdong 518000, China. Patentee before: Chongqing University of Posts and Telecommunications, No. 2 Chongwen Road, Huangjuezhen, Nan'an District, Chongqing 400065, China. |