CN103914835B - A no-reference quality evaluation method for blur-distorted stereoscopic images - Google Patents
- Publication number: CN103914835B
- Application number: CN201410104299.5A
- Authority: CN (China)
- Legal status: Active
Abstract
The invention discloses a no-reference quality evaluation method for blur-distorted stereoscopic images. In the training stage, several undistorted stereoscopic images and their corresponding blur-distorted versions are selected to form a training image set; each blur-distorted stereoscopic image is decomposed by two-dimensional empirical mode decomposition to obtain its intrinsic mode function image, and a visual dictionary table is constructed by the K-means clustering method. A visual quality table is then constructed from the objective evaluation metric values of the pixels in the blur-distorted stereoscopic images. In the testing stage, the test stereoscopic image is likewise decomposed by two-dimensional empirical mode decomposition into its intrinsic mode function image, and the objective image-quality prediction value of the test image is obtained from the visual dictionary table and the visual quality table. The advantages are that no complex machine-learning training process is needed in the training stage, that the prediction value is obtained in the testing stage through a simple visual-dictionary search, and that it agrees well with subjective evaluation values.
Description
Technical field
The invention relates to an image quality evaluation method, and in particular to a no-reference quality evaluation method for blur-distorted stereoscopic images.
Background art
With the rapid development of image coding and stereoscopic display technology, stereoscopic imaging has attracted increasingly wide attention and application and has become a current research hotspot. Stereoscopic imaging exploits the binocular parallax of the human visual system: the two eyes independently receive the left-viewpoint and right-viewpoint images of the same scene, the brain fuses them into binocular parallax, and the viewer perceives a stereoscopic image with a sense of depth and realism. Compared with a single-channel image, a stereoscopic image must guarantee the quality of both channels simultaneously, so evaluating its quality is of great importance. However, there is at present no effective objective method for evaluating stereoscopic image quality. Establishing an effective objective evaluation model for stereoscopic image quality is therefore highly significant.
Because many factors affect stereoscopic image quality, such as the quality distortion of the left and right viewpoints, stereoscopic perception, and viewer visual fatigue, performing effective no-reference quality evaluation is an urgent and difficult problem. Current no-reference methods usually rely on machine learning to fit an evaluation model, which is computationally expensive; moreover, training such a model requires the subjective evaluation value of every training image to be known in advance, which is unsuitable for practical applications and imposes certain limitations. Sparse representation decomposes a signal over a known function set, striving to approximate the original signal in the transform domain with as few basis functions as possible; current research focuses on dictionary construction and sparse decomposition. A key problem of sparse representation is how to construct a dictionary that effectively captures the essential features of an image. Dictionary construction algorithms proposed so far include: 1) methods with a learning process, which obtain the dictionary through machine-learning training, for example with support vector machines; and 2) methods without a learning process, which build the dictionary directly from image features, such as multi-scale Gabor dictionaries and multi-scale Gaussian dictionaries. Therefore, how to construct a dictionary without a learning process, and how to estimate quality without a reference using such a dictionary, are key technical problems to be solved in no-reference quality evaluation research.
Summary of the invention
The technical problem to be solved by the present invention is to provide a no-reference quality evaluation method for blur-distorted stereoscopic images that can effectively improve the correlation between objective evaluation results and subjective perception.
The technical solution adopted by the present invention to solve the above technical problem is a no-reference quality evaluation method for blur-distorted stereoscopic images, characterized by comprising a training stage and a testing stage, with the following specific steps:
① Select N original undistorted stereoscopic images; then let the selected N original undistorted stereoscopic images and the blur-distorted stereoscopic image corresponding to each of them form a training image set, denoted {S_i,org, S_i,dis | 1≤i≤N}, where S_i,org denotes the i-th original undistorted stereoscopic image in the training image set and S_i,dis denotes the blur-distorted stereoscopic image corresponding to it. Denote the left-viewpoint image of S_i,org as L_i,org, the right-viewpoint image of S_i,org as R_i,org, the left-viewpoint image of S_i,dis as L_i,dis, and the right-viewpoint image of S_i,dis as R_i,dis.
② Apply two-dimensional empirical mode decomposition separately to the left-viewpoint and right-viewpoint images of every blur-distorted stereoscopic image in the training image set {S_i,org, S_i,dis | 1≤i≤N}, obtaining the intrinsic mode function image of each such left-viewpoint and right-viewpoint image; denote the intrinsic mode function image of L_i,dis as {IMF_i,dis^L(x,y)} and that of R_i,dis as {IMF_i,dis^R(x,y)}, where 1≤x≤W and 1≤y≤H, W denotes the width and H the height of {IMF_i,dis^L(x,y)} and {IMF_i,dis^R(x,y)}, IMF_i,dis^L(x,y) denotes the pixel value at coordinate position (x,y) in {IMF_i,dis^L(x,y)}, and IMF_i,dis^R(x,y) denotes the pixel value at coordinate position (x,y) in {IMF_i,dis^R(x,y)};
Then linearly weight the intrinsic mode function images of the left-viewpoint and right-viewpoint images of every blur-distorted stereoscopic image in the training image set {S_i,org, S_i,dis | 1≤i≤N}, obtaining the intrinsic mode function image of each blur-distorted stereoscopic image; denote the intrinsic mode function image of S_i,dis as {IMF_i,dis(x,y)}, whose pixel value at coordinate position (x,y) is IMF_i,dis(x,y) = w_L × IMF_i,dis^L(x,y) + w_R × IMF_i,dis^R(x,y), where w_L and w_R are the left and right weighting factors.
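The left/right IMF fusion of step ② can be sketched as follows. This is a minimal illustration assuming the per-view intrinsic mode function images have already been produced by some two-dimensional EMD implementation; the function and array names are illustrative, and the weights w_L = 0.9, w_R = 0.1 are the values the text specifies for step ②.

```python
import numpy as np

def fuse_imf(imf_left, imf_right, w_l=0.9, w_r=0.1):
    """Linearly weight the left- and right-view intrinsic mode function
    (IMF) images into a single IMF image, as in step (2). The weights
    w_l = 0.9, w_r = 0.1 are the values given for step (2)."""
    imf_left = np.asarray(imf_left, dtype=float)
    imf_right = np.asarray(imf_right, dtype=float)
    assert imf_left.shape == imf_right.shape, "both views must be W x H"
    return w_l * imf_left + w_r * imf_right

# toy 2x2 "IMF" images standing in for the output of a 2-D EMD
left = np.array([[1.0, 2.0], [3.0, 4.0]])
right = np.array([[11.0, 12.0], [13.0, 14.0]])
fused = fuse_imf(left, right)   # 0.9*left + 0.1*right
```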
③ Partition the intrinsic mode function image of every blur-distorted stereoscopic image in the training image set {S_i,org, S_i,dis | 1≤i≤N} into non-overlapping sub-blocks; then apply the K-means clustering method to the set formed by all sub-blocks of each intrinsic mode function image, obtaining K clusters per image, where K denotes the total number of clusters of each intrinsic mode function image. Next, from the K clusters of each intrinsic mode function image, obtain its visual dictionary table; then, from the visual dictionary tables of all intrinsic mode function images, obtain the visual dictionary table of the training image set {S_i,org, S_i,dis | 1≤i≤N}, denoted G, G = {G_i | 1≤i≤N}, where G_i denotes the visual dictionary table of {IMF_i,dis(x,y)}, G_i = {g_i,k | 1≤k≤K}, and g_i,k denotes the visual dictionary of the k-th cluster of {IMF_i,dis(x,y)}, which is also the centroid of that cluster;
④ By computing the frequency response of every pixel of the left-viewpoint and right-viewpoint images of each original undistorted stereoscopic image in the training image set {S_i,org, S_i,dis | 1≤i≤N} at a selected centre frequency and different direction factors, and the frequency response of every pixel of the left-viewpoint and right-viewpoint images of each blur-distorted stereoscopic image at the selected centre frequency and different direction factors, obtain the objective evaluation metric value of every pixel in each blur-distorted stereoscopic image. Then, from these per-pixel metric values, obtain the visual quality table of each blur-distorted stereoscopic image; finally, from the visual quality tables of all blur-distorted stereoscopic images, obtain the visual quality table of the training image set, denoted Q, Q = {Q_i | 1≤i≤N}, where Q_i denotes the visual quality table of S_i,dis, Q_i = {q_i,k | 1≤k≤K}, and q_i,k denotes the visual quality of the k-th cluster of {IMF_i,dis(x,y)};
⑤ For any test stereoscopic image S_test, compute its objective image-quality prediction value from the visual dictionary table G and the visual quality table Q of the training image set {S_i,org, S_i,dis | 1≤i≤N}.
In step ②, take w_L = 0.9 and w_R = 0.1.
The acquisition process of the visual dictionary table G_i in step ③ is:
③-1. Partition {IMF_i,dis(x,y)} into non-overlapping sub-blocks of size 16×16, and denote the set formed by all sub-blocks of {IMF_i,dis(x,y)} as {x_i,t}, where x_i,t denotes the column vector formed by all pixels of the t-th sub-block; the dimension of x_i,t is 256;
③-2. Apply the K-means clustering method to {x_i,t}, obtaining its K clusters; then take the centroid of each cluster as a visual dictionary, obtaining the visual dictionary table of {IMF_i,dis(x,y)}, denoted G_i, G_i = {g_i,k | 1≤k≤K}, where K denotes the total number of clusters, g_i,k denotes the visual dictionary of the k-th cluster and is also the centroid of that cluster, and the dimension of g_i,k is 256.
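Steps ③-1 and ③-2 can be sketched in plain NumPy. The block partition and the 256-dimensional block vectors follow the text; the K-means routine below is a generic Lloyd's-algorithm implementation standing in for whichever K-means variant the patent uses, and the initialization and iteration count are assumptions.

```python
import numpy as np

def image_to_blocks(img, b=16):
    """Split an IMF image into non-overlapping b x b sub-blocks and
    return each block flattened as a length b*b vector (256 for b=16),
    matching step (3)-1. Edge remainders smaller than b are dropped."""
    h, w = img.shape
    blocks = []
    for y in range(0, h - h % b, b):
        for x in range(0, w - w % b, b):
            blocks.append(img[y:y + b, x:x + b].reshape(-1))
    return np.array(blocks)

def kmeans_dictionary(vectors, k, iters=50, seed=0):
    """Plain Lloyd's K-means; the k centroids serve as the visual
    dictionary G_i of step (3)-2."""
    rng = np.random.default_rng(seed)
    centroids = vectors[rng.choice(len(vectors), size=k, replace=False)]
    for _ in range(iters):
        # distance of every block vector to every centroid
        d = np.linalg.norm(vectors[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new = np.array([vectors[labels == j].mean(axis=0)
                        if np.any(labels == j) else centroids[j]
                        for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return centroids, labels

# usage: a toy 32x32 "IMF image" yields 4 blocks; cluster into k = 2
img = np.random.default_rng(0).random((32, 32))
blocks = image_to_blocks(img)
dictionary, labels = kmeans_dictionary(blocks, k=2)
```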
The acquisition process of the visual quality table Q_i of S_i,dis in step ④ is:
④-1. Filter L_i,org, R_i,org, L_i,dis and R_i,dis with Gabor filters, obtaining for every pixel of each image its frequency response at different centre frequencies ω and different direction factors θ;
④-2. From the frequency responses of every pixel of L_i,org and R_i,org at the selected centre frequency and the different direction factors, compute the amplitude of every pixel of S_i,org;
Likewise, from the frequency responses of every pixel of L_i,dis and R_i,dis at the selected centre frequency and the different direction factors, compute the amplitude of every pixel of S_i,dis;
④-3. From the amplitudes of the pixels of S_i,org and S_i,dis, compute the objective evaluation metric value of every pixel of S_i,dis; the metric value of the pixel at coordinate position (x,y) in S_i,dis is denoted ρ_i(x,y);
④-4. From the objective evaluation metric value of every pixel of S_i,dis, obtain the visual quality table of S_i,dis, denoted Q_i, Q_i = {q_i,k | 1≤k≤K}, where q_i,k denotes the visual quality of the k-th cluster of {IMF_i,dis(x,y)}, computed from ρ_i(x,y) over Ω_k, the set of coordinate positions in S_i,dis that coincide with the coordinate positions of all pixels contained in the k-th cluster, normalised by the total number of pixels contained in the k-th cluster.
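Steps ④-1 to ④-3 can be sketched as below. The patent's exact Gabor parameters, amplitude definition, and ρ_i(x,y) formula are not reproduced in this text, so the kernel, the root-sum-of-squares amplitude over orientations, and the SSIM-like similarity ratio here are all assumed stand-ins chosen only to illustrate the pipeline.

```python
import numpy as np

def gabor_kernel(freq, theta, sigma=2.0, size=11):
    """Small complex Gabor kernel with an isotropic Gaussian envelope;
    an assumed stand-in, since the patent's filter parameters are not
    given in this text."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    env = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return env * np.exp(2j * np.pi * freq * xr)

def gabor_response(img, freq, thetas):
    """Per-pixel frequency responses at one centre frequency and several
    direction factors, via FFT-based (circular) convolution."""
    h, w = img.shape
    responses = []
    for theta in thetas:
        k = gabor_kernel(freq, theta)
        pad = np.zeros((h, w), dtype=complex)
        pad[:k.shape[0], :k.shape[1]] = k
        responses.append(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(pad)))
    return responses

def amplitude(responses):
    """Assumed amplitude: root sum of squared response magnitudes over
    all orientations."""
    return np.sqrt(sum(np.abs(r)**2 for r in responses))

def objective_metric(amp_org, amp_dis, c=1e-3):
    """Assumed per-pixel similarity between pristine and blurred
    amplitudes (an SSIM-like ratio in (0, 1]); not the patent's
    verbatim rho_i(x,y)."""
    return (2 * amp_org * amp_dis + c) / (amp_org**2 + amp_dis**2 + c)

# usage: compare a pristine image with a vertically smoothed copy
rng = np.random.default_rng(1)
img = rng.random((32, 32))
blurred = (img + np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)) / 3.0
amp_org = amplitude(gabor_response(img, 0.2, [0.0, np.pi / 2]))
amp_dis = amplitude(gabor_response(blurred, 0.2, [0.0, np.pi / 2]))
rho = objective_metric(amp_org, amp_dis)   # per-pixel metric map
```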
The specific process of step ⑤ is:
⑤-1. Denote the left-viewpoint image of S_test as L_test and its right-viewpoint image as R_test. Apply two-dimensional empirical mode decomposition to L_test and R_test separately, obtaining their respective intrinsic mode function images, denoted {IMF_test^L(x,y)} and {IMF_test^R(x,y)}; then linearly weight {IMF_test^L(x,y)} and {IMF_test^R(x,y)} to obtain the intrinsic mode function image of S_test, denoted {IMF_test(x,y)}, whose pixel value at coordinate position (x,y) is IMF_test(x,y) = w_L × IMF_test^L(x,y) + w_R × IMF_test^R(x,y);
⑤-2. Partition {IMF_test(x,y)} into non-overlapping sub-blocks of size 16×16, and denote the set formed by all sub-blocks of {IMF_test(x,y)} as {y_t}, where y_t denotes the column vector formed by all pixels of the t-th sub-block; the dimension of y_t is 256;
⑤-3. Compute the minimum Euclidean distance between each sub-block of {IMF_test(x,y)} and G; the minimum Euclidean distance between the t-th sub-block and G is denoted δ_t, δ_t = min(||y_t − g_i,k||) over 1≤i≤N and 1≤k≤K, where the symbol "|| ||" denotes the Euclidean distance and min() is the minimum-value function;
⑤-4. Compute the objective evaluation metric value of each sub-block of {IMF_test(x,y)}; the metric value of the t-th sub-block is denoted z_t and is computed from q_i*,k*, the visual quality in Q corresponding to the visual dictionary that yields δ_t, with 1≤i*≤N and 1≤k*≤K, and from an exponential function of δ_t, where exp() denotes the exponential function with base e, e = 2.71828183, and λ is a control parameter;
⑤-5. From the objective evaluation metric values of all sub-blocks of {IMF_test(x,y)}, compute the objective image-quality prediction value of S_test, denoted Q.
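The dictionary-search prediction of steps ⑤-3 to ⑤-5 can be sketched as follows. The text states only that z_t combines the matched visual quality q_i*,k* with an exponential of δ_t under control parameter λ, and that the prediction pools the z_t; the weighting z_t = exp(−δ_t/λ)·q_i*,k* and the exponentially weighted average used below are therefore assumed forms consistent with, but not verbatim from, the patent.

```python
import numpy as np

def predict_quality(blocks, dictionary, quality, lam=300.0):
    """Test-stage quality prediction of step (5).
    blocks:     (T, 256) sub-block vectors y_t of the test IMF image
    dictionary: (M, 256) all visual words g_{i,k}, flattened over (i, k)
    quality:    (M,)     visual qualities q_{i,k} aligned with `dictionary`
    lam = 300 is the control parameter value given in the embodiment;
    the weighting and pooling are assumed forms."""
    preds, weights = [], []
    for y in blocks:
        d = np.linalg.norm(dictionary - y, axis=1)  # distance to every word
        j = d.argmin()                              # delta_t and index (i*, k*)
        w = np.exp(-d[j] / lam)                     # exponential of delta_t
        preds.append(w * quality[j])                # assumed z_t
        weights.append(w)
    return float(np.sum(preds) / np.sum(weights))   # assumed pooling

# usage: a 2-word toy dictionary (3-dim vectors for brevity)
dictionary = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]])
quality = np.array([0.2, 0.8])
blocks = np.array([[1.0, 1.0, 1.0]])   # matches word 2 exactly
score = predict_quality(blocks, dictionary, quality)
```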
Compared with the prior art, the advantages of the present invention are:
1) The method constructs the visual dictionary table and the visual quality table by unsupervised learning, avoiding a complex machine-learning training process; moreover, it does not require the subjective evaluation value of each training image to be known in advance during the training stage, and is therefore better suited to practical applications.
2) In the testing stage, the method predicts the objective image-quality evaluation value through a simple visual-dictionary search, greatly reducing the computational complexity of testing, and the predicted value maintains good consistency with subjective evaluation values.
Brief description of the drawings
Fig. 1 is the overall implementation block diagram of the method of the present invention.
Detailed description
The present invention is described in further detail below with reference to the accompanying drawing and embodiments.
The overall implementation block diagram of the proposed no-reference quality evaluation method for blur-distorted stereoscopic images is shown in Fig. 1. The method comprises a training stage and a testing stage. In the training stage, several original undistorted stereoscopic images and their corresponding blur-distorted versions form a training image set; every blur-distorted stereoscopic image in the set is decomposed by two-dimensional empirical mode decomposition into an intrinsic mode function image, each intrinsic mode function image is partitioned into non-overlapping blocks, and a visual dictionary table is constructed by the K-means clustering method. By computing the frequency responses, at a selected centre frequency and different direction factors, of every pixel of each original undistorted stereoscopic image and its corresponding blur-distorted version, an objective evaluation metric value is obtained for every pixel of each blur-distorted stereoscopic image, and the visual quality table corresponding to the visual dictionary table is constructed. In the testing stage, for any test stereoscopic image, two-dimensional empirical mode decomposition is applied to obtain its intrinsic mode function image, the intrinsic mode function image is partitioned into non-overlapping blocks, and the objective image-quality prediction value of the test image is computed from the constructed visual dictionary table and visual quality table. The specific steps of the no-reference quality evaluation method of the present invention are as follows:
① Select N original undistorted stereoscopic images; then let the selected N original undistorted stereoscopic images and the blur-distorted stereoscopic image corresponding to each of them form a training image set, denoted {S_i,org, S_i,dis | 1≤i≤N}, where S_i,org denotes the i-th original undistorted stereoscopic image in the training image set and S_i,dis denotes the blur-distorted stereoscopic image corresponding to it. Denote the left-viewpoint image of S_i,org as L_i,org, the right-viewpoint image of S_i,org as R_i,org, the left-viewpoint image of S_i,dis as L_i,dis, and the right-viewpoint image of S_i,dis as R_i,dis. The larger the value of N, the higher the accuracy of the visual dictionary table and visual quality table obtained by training, but the higher the computational complexity; as a compromise, half of the blur-distorted images of the adopted image database can generally be selected for processing. The symbol "{}" denotes a set.
Here, experiments are carried out with the blur-distorted stereoscopic images of the Ningbo University stereoscopic image database and the LIVE stereoscopic image database. The blur-distorted stereoscopic images of the Ningbo University database consist of 60 distorted stereoscopic images generated from 12 undistorted stereoscopic images under different degrees of Gaussian blur; those of the LIVE database consist of 45 distorted stereoscopic images generated from 19 undistorted stereoscopic images under different degrees of Gaussian blur. In this embodiment, 50% of the blur-distorted stereoscopic images are used to construct the training image set, i.e. N = 30 for the training set constructed from the Ningbo University database and N = 22 for the training set constructed from the LIVE database.
② Apply two-dimensional empirical mode decomposition separately to the left-viewpoint and right-viewpoint images of every blur-distorted stereoscopic image in the training image set {S_i,org, S_i,dis | 1≤i≤N}, obtaining the intrinsic mode function image of each such left-viewpoint and right-viewpoint image; denote the intrinsic mode function image of L_i,dis as {IMF_i,dis^L(x,y)} and that of R_i,dis as {IMF_i,dis^R(x,y)}, where 1≤x≤W and 1≤y≤H, W denotes the width and H the height of {IMF_i,dis^L(x,y)} and {IMF_i,dis^R(x,y)}, IMF_i,dis^L(x,y) denotes the pixel value at coordinate position (x,y) in {IMF_i,dis^L(x,y)}, and IMF_i,dis^R(x,y) denotes the pixel value at coordinate position (x,y) in {IMF_i,dis^R(x,y)}.
Then linearly weight the intrinsic mode function images of the left-viewpoint and right-viewpoint images of every blur-distorted stereoscopic image in the training image set {S_i,org, S_i,dis | 1≤i≤N}, obtaining the intrinsic mode function image of each blur-distorted stereoscopic image; denote the intrinsic mode function image of S_i,dis as {IMF_i,dis(x,y)}, whose pixel value at coordinate position (x,y) is IMF_i,dis(x,y) = w_L × IMF_i,dis^L(x,y) + w_R × IMF_i,dis^R(x,y).
③ Partition the intrinsic mode function image of every blur-distorted stereoscopic image in the training image set {S_i,org, S_i,dis | 1≤i≤N} into non-overlapping sub-blocks; then apply the existing K-means clustering method to the set formed by all sub-blocks of each intrinsic mode function image, obtaining K clusters per image, where K denotes the total number of clusters of each intrinsic mode function image. An excessively large K leads to over-clustering and an excessively small K to under-clustering; in this embodiment K = 30 is taken. Next, from the K clusters of each intrinsic mode function image, obtain its visual dictionary table; then, from the visual dictionary tables of all intrinsic mode function images, obtain the visual dictionary table of the training image set {S_i,org, S_i,dis | 1≤i≤N}, denoted G, G = {G_i | 1≤i≤N}, where the symbol "{}" denotes a set, G_i denotes the visual dictionary table of {IMF_i,dis(x,y)}, G_i = {g_i,k | 1≤k≤K}, and g_i,k denotes the visual dictionary of the k-th cluster of {IMF_i,dis(x,y)}, which is also the centroid of that cluster.
In this specific embodiment, the acquisition process of the visual dictionary table G_i in step ③ is:
③-1. Partition {IMF_i,dis(x,y)} into non-overlapping sub-blocks of size 16×16, and denote the set formed by all sub-blocks of {IMF_i,dis(x,y)} as {x_i,t}, where x_i,t denotes the column vector formed by all pixels of the t-th sub-block; the dimension of x_i,t is 256.
③-2. Apply the existing K-means clustering method to {x_i,t}, obtaining its K clusters; then take the centroid of each cluster as a visual dictionary, obtaining the visual dictionary table of {IMF_i,dis(x,y)}, denoted G_i, G_i = {g_i,k | 1≤k≤K}, where K denotes the total number of clusters (an excessively large K leads to over-clustering and an excessively small K to under-clustering; in this embodiment K = 30 is taken), g_i,k denotes the visual dictionary of the k-th cluster and is also the centroid of that cluster, and the dimension of g_i,k is 256.
④ By computing the frequency response of every pixel of the left-viewpoint and right-viewpoint images of each original undistorted stereoscopic image in the training image set {S_i,org, S_i,dis | 1≤i≤N} at a selected centre frequency and different direction factors, and the frequency response of every pixel of the left-viewpoint and right-viewpoint images of each blur-distorted stereoscopic image at the selected centre frequency and different direction factors, obtain the objective evaluation metric value of every pixel in each blur-distorted stereoscopic image. Then, from these per-pixel metric values, obtain the visual quality table of each blur-distorted stereoscopic image; finally, from the visual quality tables of all blur-distorted stereoscopic images, obtain the visual quality table of the training image set, denoted Q, Q = {Q_i | 1≤i≤N}, where Q_i denotes the visual quality table of S_i,dis, Q_i = {q_i,k | 1≤k≤K}, and q_i,k denotes the visual quality of the k-th cluster of {IMF_i,dis(x,y)}.
In this specific embodiment, the acquisition process of the visual quality table Q_i of S_i,dis in step ④ is:
④-1. Filter L_i,org, R_i,org, L_i,dis and R_i,dis with Gabor filters, obtaining for every pixel of each image its frequency response at different centre frequencies ω and different direction factors θ.
④-2. From the frequency responses of every pixel of L_i,org and R_i,org at the selected centre frequency and the different direction factors, compute the amplitude of every pixel of S_i,org.
Likewise, from the frequency responses of every pixel of L_i,dis and R_i,dis at the selected centre frequency and the different direction factors, compute the amplitude of every pixel of S_i,dis.
④-3. From the amplitudes of the pixels of S_i,org and S_i,dis, compute the objective evaluation metric value of every pixel of S_i,dis; the metric value of the pixel at coordinate position (x,y) in S_i,dis is denoted ρ_i(x,y).
④-4. From the objective evaluation metric value of every pixel of S_i,dis, obtain the visual quality table of S_i,dis, denoted Q_i, Q_i = {q_i,k | 1≤k≤K}, where q_i,k denotes the visual quality of the k-th cluster of {IMF_i,dis(x,y)}, computed from ρ_i(x,y) over Ω_k, the set of coordinate positions in S_i,dis that coincide with the coordinate positions of all pixels contained in the k-th cluster, normalised by the total number of pixels contained in the k-th cluster.
⑤ For any test stereoscopic image S_test, compute its objective image-quality prediction value from the visual dictionary table G and the visual quality table Q of the training image set {S_i,org, S_i,dis | 1≤i≤N}.
In this specific embodiment, the specific process of step ⑤ is:
⑤-1. Denote the left-viewpoint image of S_test as L_test and its right-viewpoint image as R_test. Apply two-dimensional empirical mode decomposition to L_test and R_test separately, obtaining their respective intrinsic mode function images, denoted {IMF_test^L(x,y)} and {IMF_test^R(x,y)}; then linearly weight {IMF_test^L(x,y)} and {IMF_test^R(x,y)} to obtain the intrinsic mode function image of S_test, denoted {IMF_test(x,y)}, whose pixel value at coordinate position (x,y) is IMF_test(x,y) = w_L × IMF_test^L(x,y) + w_R × IMF_test^R(x,y).
⑤-2. Partition {IMF_test(x, y)} into non-overlapping sub-blocks of size 16×16 and collect all sub-blocks of {IMF_test(x, y)} into a set, in which y_t denotes the 256-dimensional column vector formed from all pixels of the t-th sub-block.
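Partitioning {IMF_test(x, y)} into non-overlapping 16×16 sub-blocks, each flattened to a 256-dimensional vector y_t, can be done with a reshape; silently cropping dimensions that are not multiples of 16 is an assumption of this sketch.

```python
import numpy as np

def to_blocks(imf, b=16):
    """Return an array of shape (num_blocks, b*b): each row is one
    non-overlapping b x b sub-block flattened to a vector y_t."""
    H, W = imf.shape
    imf = imf[: H - H % b, : W - W % b]          # crop to a multiple of b
    # (H/b, b, W/b, b) -> (H/b, W/b, b, b) -> (num_blocks, b*b)
    blocks = imf.reshape(H // b, b, W // b, b).swapaxes(1, 2)
    return blocks.reshape(-1, b * b)

imf_test = np.arange(64 * 48, dtype=float).reshape(64, 48)
Y = to_blocks(imf_test)   # (64/16) * (48/16) = 12 blocks of 256 values
```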
⑤-3. Compute the minimum Euclidean distance between each sub-block of {IMF_test(x, y)} and G; the minimum Euclidean distance between the t-th sub-block and G is denoted δ_t, where "|| ||" denotes the Euclidean distance and min() is the minimum-value function.
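The distance δ_t between each block vector y_t and the visual dictionary table G can be computed with broadcasting; treating G as a plain array with one atom per row is an assumption about its storage layout.

```python
import numpy as np

def min_distances(Y, G):
    """delta_t = min_k ||y_t - g_k|| for every block vector y_t in Y;
    also return the index of the nearest atom, used later to look up
    its visual quality."""
    # (T, 1, D) - (1, K, D) -> (T, K) matrix of pairwise distances
    d = np.linalg.norm(Y[:, None, :] - G[None, :, :], axis=2)
    return d.min(axis=1), d.argmin(axis=1)

Y = np.array([[0.0, 0.0],
              [3.0, 4.0]])        # two toy block vectors
G = np.array([[0.0, 1.0],
              [3.0, 0.0]])        # toy dictionary with two atoms
delta, nearest = min_distances(Y, G)
```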
⑤-4. Compute the objective evaluation metric of each sub-block of {IMF_test(x, y)}; the metric of the t-th sub-block is denoted z_t, where q_{i*,k*} denotes the visual quality in Q associated with the dictionary entry that yields δ_t, 1 ≤ i* ≤ N, 1 ≤ k* ≤ K, exp() is the base-e exponential function, e = 2.71828183, and λ is a control parameter, set to λ = 300 in this embodiment.
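Reading the missing symbol as q_{i*,k*} (the quality-table entry of the dictionary atom nearest to y_t), one plausible form of the block metric weights that quality by an exponential of the matching distance. This form is an assumption, since the formula for z_t itself is an unreproduced image.

```python
import numpy as np

def block_metric(q_nearest, delta, lam=300.0):
    """Hypothetical z_t = q_{i*,k*} * exp(-delta_t / lam): the quality
    of the matched dictionary atom, discounted by the match distance."""
    return q_nearest * np.exp(-delta / lam)

q_nearest = np.array([0.8, 0.5])   # q_{i*,k*} looked up from Q
delta = np.array([0.0, 300.0])     # delta_t from step 5-3
z = block_metric(q_nearest, delta) # lam = 300 as in the embodiment
```

A perfect dictionary match (δ_t = 0) passes the cluster quality through unchanged; distant matches contribute a down-weighted quality.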
⑤-5. From the objective evaluation metrics of all sub-blocks of {IMF_test(x, y)}, compute the objective image-quality prediction of S_test, denoted Q.
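The final prediction for S_test pools the block metrics; the pooling rule is another lost formula, so the sketch below simply averages z_t over all sub-blocks (a distance-weighted average would be an equally plausible reading).

```python
import numpy as np

def predict_quality(z):
    """Objective quality prediction of S_test as the mean of the
    per-block metrics z_t (mean pooling is an assumption)."""
    return float(np.mean(z))

z = np.array([0.9, 0.7, 0.8])   # toy per-block metrics
Q_test = predict_quality(z)
```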
Here, the Ningbo University stereoscopic image database and the LIVE stereoscopic image database are used to analyze the correlation between the objective quality predictions produced by this embodiment for blur-distorted stereoscopic images and the corresponding difference mean opinion scores (DMOS). Four objective criteria commonly used to assess image-quality evaluation methods serve as indicators: the Pearson linear correlation coefficient (PLCC) under nonlinear regression, the Spearman rank-order correlation coefficient (SRCC), the Kendall rank-order correlation coefficient (KRCC), and the root mean squared error (RMSE). PLCC and RMSE reflect the accuracy of the objective predictions for distorted stereoscopic images; SRCC and KRCC reflect their monotonicity.
The method of the invention is used to compute the objective quality prediction of every blur-distorted stereoscopic image in the Ningbo University database and in the LIVE database, and an existing subjective evaluation procedure provides the difference mean opinion score (DMOS) of every blur-distorted stereoscopic image in both databases. The objective predictions computed by the method are fitted with a five-parameter logistic function; higher PLCC, SRCC, and KRCC values and a lower RMSE value indicate better correlation between the objective method and the DMOS. The PLCC, SRCC, KRCC, and RMSE coefficients reflecting the quality-evaluation performance of the method are listed in Table 1. The data in Table 1 show that the correlation between the final objective predictions of this embodiment and the DMOS is very good, i.e. the objective results agree closely with human subjective perception, which demonstrates the effectiveness of the method of the invention.
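The four indicators can be computed with `scipy.stats`; for brevity this sketch omits the five-parameter logistic mapping that the evaluation protocol applies before PLCC and RMSE, which is a simplifying assumption.

```python
import numpy as np
from scipy import stats

def quality_criteria(pred, dmos):
    """PLCC, SRCC, KRCC and RMSE between objective predictions and
    DMOS values (the nonlinear logistic mapping step is omitted)."""
    plcc, _ = stats.pearsonr(pred, dmos)
    srcc, _ = stats.spearmanr(pred, dmos)
    krcc, _ = stats.kendalltau(pred, dmos)
    rmse = float(np.sqrt(np.mean((np.asarray(pred) - np.asarray(dmos)) ** 2)))
    return plcc, srcc, krcc, rmse

pred = [0.1, 0.4, 0.5, 0.9]
dmos = [0.1, 0.4, 0.5, 0.9]   # toy, perfectly correlated data
plcc, srcc, krcc, rmse = quality_criteria(pred, dmos)
```

On the toy data all three correlations reach 1.0 and the RMSE is 0, the ideal case that a well-performing objective method approaches.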
Table 1. Correlation between the objective quality predictions of this embodiment for blur-distorted stereoscopic images and the difference mean opinion scores
Claims (3)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410104299.5A CN103914835B (en) | 2014-03-20 | 2014-03-20 | A kind of reference-free quality evaluation method for fuzzy distortion stereo-picture |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103914835A CN103914835A (en) | 2014-07-09 |
CN103914835B true CN103914835B (en) | 2016-08-17 |
Family
ID=51040491
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410104299.5A Active CN103914835B (en) | 2014-03-20 | 2014-03-20 | A kind of reference-free quality evaluation method for fuzzy distortion stereo-picture |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103914835B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104240248B (en) * | 2014-09-12 | 2017-05-03 | 宁波大学 | Method for objectively evaluating quality of three-dimensional image without reference |
CN104240255A (en) * | 2014-09-23 | 2014-12-24 | 上海交通大学 | Stereo image quality evaluation method based on nonlinear ocular dominance parallax compensation |
CN104820988B (en) * | 2015-05-06 | 2017-12-15 | 宁波大学 | One kind is without with reference to objective evaluation method for quality of stereo images |
CN105243385B (en) * | 2015-09-23 | 2018-11-09 | 宁波大学 | A kind of image quality evaluating method based on unsupervised learning |
CN105611285B (en) * | 2015-12-25 | 2017-06-16 | 浙江科技学院 | General non-reference picture quality appraisement method based on phase selective mechanism |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102209257A (en) * | 2011-06-17 | 2011-10-05 | 宁波大学 | Stereo image quality objective evaluation method |
CN102609718A (en) * | 2012-01-15 | 2012-07-25 | 江西理工大学 | Method for generating vision dictionary set by combining different clustering algorithms |
CN102708567A (en) * | 2012-05-11 | 2012-10-03 | 宁波大学 | Visual perception-based three-dimensional image quality objective evaluation method |
CN103413283A (en) * | 2013-07-12 | 2013-11-27 | 西北工业大学 | Multi-focus image fusion method based on two-dimensional EMD and improved local energy |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9030530B2 (en) * | 2009-12-15 | 2015-05-12 | Thomson Licensing | Stereo-image quality and disparity/depth indications |
US20120044323A1 (en) * | 2010-08-20 | 2012-02-23 | Texas Instruments Incorporated | Method and Apparatus for 3D Image and Video Assessment |
- 2014-03-20 CN CN201410104299.5A patent/CN103914835B/en active Active
Non-Patent Citations (4)
Title |
---|
A NEW OBJECTIVE STEREOSCOPIC IMAGE ASSESSMENT MODEL BASED ON STEREOSCOPIC PERCEPTION;Zhu Jiangying 等;《JOURNAL OF ELECTRONICS (CHINA)》;20131031;第30卷(第5期);469-475 * |
A BEMD-based no-reference quality assessment method for blur-distorted stereoscopic images;Wang Shanshan et al.;《Opto-Electronic Engineering》;20130930;Vol. 40(No. 9);28-34 *
An EMD-based no-reference image sharpness assessment method;He Jinping et al.;《Spacecraft Recovery & Remote Sensing》;20131031;Vol. 34(No. 5);78-84 *
A region-based image fusion algorithm using bidimensional empirical mode decomposition;Han Bo et al.;《Infrared Technology》;20130930;Vol. 35(No. 9);546-550 *
Also Published As
Publication number | Publication date |
---|---|
CN103914835A (en) | 2014-07-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104036501B (en) | A kind of objective evaluation method for quality of stereo images based on rarefaction representation | |
CN104036502B (en) | A kind of without with reference to fuzzy distortion stereo image quality evaluation methodology | |
CN102209257B (en) | Stereo image quality objective evaluation method | |
CN104658001B (en) | Non-reference asymmetric distorted stereo image objective quality assessment method | |
CN104581143B (en) | A kind of based on machine learning without with reference to objective evaluation method for quality of stereo images | |
CN105744256A (en) | Three-dimensional image quality objective evaluation method based on graph-based visual saliency | |
CN104240248B (en) | Method for objectively evaluating quality of three-dimensional image without reference | |
CN103914835B (en) | A kind of reference-free quality evaluation method for fuzzy distortion stereo-picture | |
CN105282543B (en) | Total blindness three-dimensional image quality objective evaluation method based on three-dimensional visual perception | |
CN103581661A (en) | Method for evaluating visual comfort degree of three-dimensional image | |
CN104902268B (en) | Based on local tertiary mode without with reference to three-dimensional image objective quality evaluation method | |
CN106530282B (en) | An objective evaluation method of no-reference stereoscopic image quality based on spatial features | |
CN104408716A (en) | Three-dimensional image quality objective evaluation method based on visual fidelity | |
CN105574901B (en) | A kind of general non-reference picture quality appraisement method based on local contrast pattern | |
CN109429051B (en) | An objective evaluation method of no-reference stereoscopic video quality based on multi-view feature learning | |
CN102843572B (en) | Phase-based stereo image quality objective evaluation method | |
CN105243385B (en) | A kind of image quality evaluating method based on unsupervised learning | |
CN103413298A (en) | Three-dimensional image objective evaluation method based on visual characteristics | |
CN106651835A (en) | Entropy-based double-viewpoint reference-free objective stereo-image quality evaluation method | |
CN102903107B (en) | Three-dimensional picture quality objective evaluation method based on feature fusion | |
CN105376563A (en) | No-reference three-dimensional image quality evaluation method based on binocular fusion feature similarity | |
CN106791822A (en) | It is a kind of based on single binocular feature learning without refer to stereo image quality evaluation method | |
Karimi et al. | Blind stereo quality assessment based on learned features from binocular combined images | |
CN102708568B (en) | Stereoscopic image objective quality evaluation method on basis of structural distortion | |
CN105321175B (en) | An Objective Evaluation Method of Stereo Image Quality Based on Sparse Representation of Structural Texture |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right | ||
TR01 | Transfer of patent right |
Effective date of registration: 20191230 Address after: Room 1,020, Nanxun Science and Technology Pioneering Park, No. 666 Chaoyang Road, Nanxun District, Huzhou City, Zhejiang Province, 313000 Patentee after: Huzhou You Yan Intellectual Property Service Co.,Ltd. Address before: 315211 Zhejiang Province, Ningbo Jiangbei District Fenghua Road No. 818 Patentee before: Ningbo University |
|
TR01 | Transfer of patent right |
Effective date of registration: 20200603 Address after: Room 501, office building, market supervision and Administration Bureau, Langchuan Avenue, Jianping Town, Langxi County, Xuancheng City, Anhui Province, 230000 Patentee after: Langxi pinxu Technology Development Co.,Ltd. Address before: Room 1,020, Nanxun Science and Technology Pioneering Park, No. 666 Chaoyang Road, Nanxun District, Huzhou City, Zhejiang Province, 313000 Patentee before: Huzhou You Yan Intellectual Property Service Co.,Ltd. |
|
TR01 | Transfer of patent right | ||
TR01 | Transfer of patent right |
Effective date of registration: 20221202 Address after: 276000 B303, B304, Longhu Software Park, Linyi Hi tech Industrial Development Zone, Shandong Province Patentee after: Shandong Lixin Information Technology Consulting Co.,Ltd. Address before: 230000 Room 501, office building, market supervision and Administration Bureau, Langchuan Avenue, Jianping Town, Langxi County, Xuancheng City, Anhui Province Patentee before: Langxi pinxu Technology Development Co.,Ltd. |
|
TR01 | Transfer of patent right | ||
CP01 | Change in the name or title of a patent holder |
Address after: 276000 B303, B304, Longhu Software Park, Linyi Hi tech Industrial Development Zone, Shandong Province Patentee after: Shandong Lixin Huachuang Big Data Technology Co.,Ltd. Address before: 276000 B303, B304, Longhu Software Park, Linyi Hi tech Industrial Development Zone, Shandong Province Patentee before: Shandong Lixin Information Technology Consulting Co.,Ltd. |
|
CP01 | Change in the name or title of a patent holder |