CN102903107B - Three-dimensional picture quality objective evaluation method based on feature fusion - Google Patents
- Publication number: CN102903107B
- Application number: CN201210357956.8A
- Authority: CN (China)
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Abstract
The invention discloses an objective method for evaluating stereoscopic image quality based on feature fusion. The method first computes the cyclopean map of the original undistorted stereoscopic image and the cyclopean map of the distorted stereoscopic image to be evaluated, and from the mean and standard deviation of the pixel values of each pixel in the two cyclopean maps obtains an objective evaluation metric for each pixel of the distorted image's cyclopean map. It then computes the saliency maps of the two cyclopean maps and the distortion map between them, and uses these to fuse the per-pixel metrics into the objective image quality prediction of the distorted stereoscopic image. The advantage is that the cyclopean map simulates the binocular stereo fusion process well, and fusing with saliency and distortion maps effectively improves the correlation between objective evaluation results and subjective perception.
Description
Technical Field
The present invention relates to an image quality evaluation method, and in particular to an objective method for evaluating stereoscopic image quality based on feature fusion.
Background
With the rapid development of image coding and stereoscopic display technology, stereoscopic imaging has attracted increasingly wide attention and application and has become a current research hotspot. Stereoscopic imaging exploits the binocular parallax of the human visual system: the two eyes independently receive left and right viewpoint images of the same scene, the brain fuses them into binocular parallax, and the viewer perceives a stereoscopic image with depth and realism. Owing to limitations of acquisition systems, storage compression, and transmission equipment, stereoscopic images inevitably suffer a range of distortions, and unlike single-channel images, a stereoscopic image must maintain the quality of both channels simultaneously, so evaluating its quality is highly important. However, effective objective methods for evaluating stereoscopic image quality are still lacking. Establishing an effective objective evaluation model for stereoscopic image quality is therefore of great significance.
Current objective methods for stereoscopic image quality simply apply planar (2-D) image quality metrics directly to stereoscopic images. However, fusing the left and right viewpoint images into a stereoscopic percept is not a simple superposition of the two views, and it is difficult to express with simple mathematics. How to effectively simulate binocular stereo fusion during quality evaluation, and how to extract effective features for fusing the evaluation results so that objective scores better match the human visual system, are therefore problems that must be studied and solved in objective stereoscopic quality assessment.
Summary of the Invention
The technical problem addressed by the present invention is to provide a feature-fusion-based objective method for evaluating stereoscopic image quality that effectively improves the correlation between objective evaluation results and subjective perception.
The technical solution adopted by the present invention to solve the above technical problem is an objective stereoscopic image quality evaluation method based on feature fusion, characterized by the following process. First, from the even-symmetric and odd-symmetric frequency responses of each pixel of the left and right viewpoint images at different scales and orientations, together with the disparity image between the left and right viewpoint images of the original undistorted stereoscopic image, the cyclopean map of the original undistorted stereoscopic image is obtained; the cyclopean map of the distorted stereoscopic image to be evaluated is obtained in the same way from its own left and right viewpoint images and the same disparity image. Second, from the mean and standard deviation of the pixel values of each pixel in the two cyclopean maps, an objective evaluation metric is obtained for each pixel of the distorted image's cyclopean map. Third, from the amplitude and phase of each of the two cyclopean maps, the corresponding saliency maps are obtained. Then, using the two saliency maps and the distortion map between the two cyclopean maps, the per-pixel metrics are fused into the objective image quality prediction of the distorted stereoscopic image. Finally, the same process is applied to obtain objective quality predictions for multiple distorted stereoscopic images of different distortion types and degrees.
The specific steps of the feature-fusion-based objective stereoscopic image quality evaluation method of the present invention are:
① Let Sorg be the original undistorted stereoscopic image and Sdis the distorted stereoscopic image to be evaluated. Denote the left viewpoint image of Sorg by {Lorg(x,y)}, the right viewpoint image of Sorg by {Rorg(x,y)}, the left viewpoint image of Sdis by {Ldis(x,y)}, and the right viewpoint image of Sdis by {Rdis(x,y)}, where (x,y) is the coordinate of a pixel in the viewpoint images, 1 ≤ x ≤ W, 1 ≤ y ≤ H, W and H are the width and height of the viewpoint images, and Lorg(x,y), Rorg(x,y), Ldis(x,y), and Rdis(x,y) denote the pixel values at coordinate (x,y) of the respective images;
② From the even-symmetric and odd-symmetric frequency responses at different scales and orientations of each pixel in {Lorg(x,y)}, {Rorg(x,y)}, {Ldis(x,y)}, and {Rdis(x,y)}, obtain the amplitude of each pixel of each of the four images. Then, from the amplitudes of the pixels of {Lorg(x,y)} and {Rorg(x,y)} and the pixel values of the disparity image between {Lorg(x,y)} and {Rorg(x,y)}, compute the cyclopean map of Sorg, denoted {CMorg(x,y)}; likewise, from the amplitudes of the pixels of {Ldis(x,y)} and {Rdis(x,y)} and the same disparity image, compute the cyclopean map of Sdis, denoted {CMdis(x,y)}, where CMorg(x,y) and CMdis(x,y) are the pixel values at coordinate (x,y) of the respective cyclopean maps;
③ From the mean and standard deviation of the pixel values of each pixel in {CMorg(x,y)} and {CMdis(x,y)}, compute an objective evaluation metric for each pixel of {CMdis(x,y)}; the metric of the pixel at coordinate (x,y) in {CMdis(x,y)} is denoted Qimage(x,y);
④ From the amplitude and phase of {CMorg(x,y)}, compute its saliency map, denoted {SMorg(x,y)}, and from the amplitude and phase of {CMdis(x,y)}, compute its saliency map, denoted {SMdis(x,y)}, where SMorg(x,y) and SMdis(x,y) are the pixel values at coordinate (x,y) of the respective saliency maps;
⑤ Compute the distortion map between {CMorg(x,y)} and {CMdis(x,y)}, denoted {DM(x,y)}, where the pixel value at coordinate (x,y) is DM(x,y) = (CMorg(x,y) - CMdis(x,y))²;
⑥ Using {SMorg(x,y)}, {SMdis(x,y)}, and {DM(x,y)}, fuse the objective evaluation metrics of the pixels of {CMdis(x,y)} into the objective image quality prediction of Sdis, denoted Q;
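The fusion formula for Q appears only as an image in the original patent and is not reproduced in this text. A minimal sketch of step ⑥, assuming a common saliency- and distortion-weighted pooling — the specific weighting is an assumption standing in for the patent's formula:

```python
import numpy as np

def fuse_quality(q_image, sm_org, sm_dis, dm):
    # Pool the per-pixel metric Q_image into one score Q, weighting each
    # pixel by visual saliency (max of the two saliency maps) and by the
    # local distortion DM. This weighting is an assumed stand-in for the
    # patent's fusion formula, which is not reproduced in this text.
    weight = np.maximum(sm_org, sm_dis) * dm
    total = float(weight.sum())
    if total == 0.0:                 # degenerate case: no distortion anywhere
        return float(q_image.mean())
    return float((weight * q_image).sum() / total)
```

With uniform weights this reduces to the plain mean of Qimage; heavily distorted, highly salient regions otherwise dominate the pooled score.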
⑦ Using n original undistorted stereoscopic images, build a set of distorted stereoscopic images covering different distortion types and degrees of distortion. Using a subjective quality evaluation method, obtain the difference mean opinion score of each distorted stereoscopic image in the set, denoted DMOS, with DMOS = 100 - MOS, where MOS is the mean opinion score, DMOS ∈ [0, 100], and n ≥ 1;
⑧ Following the operations of steps ① to ⑥ for computing the objective image quality prediction of Sdis, compute the objective image quality prediction of each distorted stereoscopic image in the set.
The specific process of step ② is:
②-1. Filter {Lorg(x,y)} to obtain the even-symmetric and odd-symmetric frequency responses of each of its pixels at different scales and orientations; denote the even-symmetric response of the pixel at coordinate (x,y) by eα,θ(x,y) and the odd-symmetric response by oα,θ(x,y), where α is the scale factor of the filter, 1 ≤ α ≤ 4, and θ is the orientation factor, 1 ≤ θ ≤ 4;
②-2. From the even-symmetric and odd-symmetric frequency responses of each pixel of {Lorg(x,y)} at the different scales and orientations, compute the amplitude of each pixel of {Lorg(x,y)};
②-3. Following the operations of steps ②-1 and ②-2 for obtaining the amplitudes of the pixels of {Lorg(x,y)}, obtain in the same way the amplitude of each pixel of {Rorg(x,y)}, {Ldis(x,y)}, and {Rdis(x,y)};
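The per-scale/orientation response and amplitude formulas of steps ②-1 and ②-2 appear only as images in the original. A minimal numpy sketch of a standard frequency-domain log-Gabor filter bank producing a per-pixel amplitude summed over 4 scales and 4 orientations — all parameter values here (wavelengths, bandwidths) are illustrative assumptions, not values from the patent:

```python
import numpy as np

def log_gabor_amplitude(img, n_scales=4, n_orients=4,
                        min_wavelength=3.0, mult=2.0,
                        sigma_f=0.55, sigma_theta=0.4):
    # Amplitude of each pixel as the sum over scales and orientations of
    # sqrt(e^2 + o^2), where e and o are the even- and odd-symmetric
    # responses (real and imaginary parts of the complex filter output).
    h, w = img.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    radius = np.sqrt(fx ** 2 + fy ** 2)
    radius[0, 0] = 1.0                      # avoid log(0) at the DC term
    theta = np.arctan2(-fy, fx)             # orientation of each frequency
    spectrum = np.fft.fft2(img.astype(np.float64))
    amplitude = np.zeros((h, w))
    for s in range(n_scales):
        f0 = 1.0 / (min_wavelength * mult ** s)   # centre frequency, scale s
        radial = np.exp(-np.log(radius / f0) ** 2 / (2 * np.log(sigma_f) ** 2))
        radial[0, 0] = 0.0                        # log-Gabor has no DC response
        for o in range(n_orients):
            angle = o * np.pi / n_orients
            # wrapped angular distance to the filter orientation
            d_theta = np.arctan2(np.sin(theta - angle), np.cos(theta - angle))
            angular = np.exp(-d_theta ** 2 / (2 * sigma_theta ** 2))
            resp = np.fft.ifft2(spectrum * radial * angular)
            amplitude += np.abs(resp)       # |resp| = sqrt(e^2 + o^2)
    return amplitude
```

The even and odd responses eα,θ and oα,θ are the real and imaginary parts of `resp`, matching the quadrature-pair structure the patent relies on.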
②-4. Compute the disparity image between {Lorg(x,y)} and {Rorg(x,y)} by the block matching method;
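Step ②-4 names block matching without giving details. A minimal numpy sketch of SAD-based block matching under an assumed block size and horizontal search range (both are illustrative choices, not values from the patent):

```python
import numpy as np

def block_match_disparity(left, right, block=8, max_disp=16):
    # For each block of the left view, search horizontal shifts d and keep
    # the one minimizing the sum of absolute differences (SAD) against the
    # right view; left pixel (x, y) is matched to right pixel (x - d, y).
    h, w = left.shape
    disp = np.zeros((h, w), dtype=np.int32)
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            patch = left[by:by + block, bx:bx + block].astype(np.float64)
            best_d, best_sad = 0, np.inf
            for d in range(0, min(max_disp, bx) + 1):
                cand = right[by:by + block, bx - d:bx - d + block]
                sad = np.abs(patch - cand).sum()
                if sad < best_sad:
                    best_sad, best_d = sad, d
            disp[by:by + block, bx:bx + block] = best_d
    return disp
```

A production implementation would add subpixel refinement and left-right consistency checks; this sketch only illustrates the search structure.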
②-5. From the amplitudes of the pixels of {Lorg(x,y)} and {Rorg(x,y)} and the pixel values of the disparity image, compute the cyclopean map of Sorg, denoted {CMorg(x,y)}, whose pixel value at coordinate (x,y) is CMorg(x,y);
②-6. From the amplitudes of the pixels of {Ldis(x,y)} and {Rdis(x,y)} and the pixel values of the disparity image, compute the cyclopean map of Sdis, denoted {CMdis(x,y)}, whose pixel value at coordinate (x,y) is CMdis(x,y).
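The combination formulas of steps ②-5 and ②-6 appear only as images in the original. A hedged sketch assuming a normalized amplitude-weighted combination of the left view with the disparity-compensated right view, CM = (A_L·L + A_R'·R')/(A_L + A_R'); the patent's exact weighting may differ:

```python
import numpy as np

def cyclopean_map(left, right, amp_left, amp_right, disparity):
    # Combine the left view with the disparity-compensated right view,
    # weighting each contribution by its filter-bank amplitude. The right
    # view and its amplitude are sampled at (x - d, y), i.e. at the pixel
    # that the disparity image matches to (x, y). This normalized form is
    # an assumption; the patent's combination formula is not reproduced.
    h, w = left.shape
    ys, xs = np.mgrid[0:h, 0:w]
    x_r = np.clip(xs - disparity, 0, w - 1).astype(int)
    r_shift = right[ys, x_r]
    a_r = amp_right[ys, x_r]
    return (amp_left * left + a_r * r_shift) / (amp_left + a_r + 1e-12)
```

With identical views, equal amplitudes, and zero disparity the cyclopean map reduces to the input image, which is a useful sanity check.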
The filter used to filter {Lorg(x,y)} in step ②-1 is a log-Gabor filter.
The specific process of step ③ is:
③-1. Compute the mean and standard deviation of the pixel values of each pixel in {CMorg(x,y)} and {CMdis(x,y)}; denote the mean and standard deviation of the pixel at coordinate (x1,y1) of {CMorg(x,y)} by μorg(x1,y1) and σorg(x1,y1), and those of the pixel at coordinate (x1,y1) of {CMdis(x,y)} by μdis(x1,y1) and σdis(x1,y1);
③-2. From the means and standard deviations of the pixels of {CMorg(x,y)} and {CMdis(x,y)}, compute the objective evaluation metric of each pixel of {CMdis(x,y)}; the metric of the pixel at coordinate (x1,y1) in {CMdis(x,y)} is denoted Qimage(x1,y1).
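The expressions for μ, σ, and Qimage in step ③ are given only as formula images in the original. A sketch assuming an SSIM-style luminance/contrast comparison over local windows — the window size and stabilizing constants are illustrative assumptions:

```python
import numpy as np

def box_mean(img, win):
    # Mean over a win x win window (win odd) via 2-D cumulative sums,
    # with edge padding so the output keeps the input shape.
    pad = win // 2
    p = np.pad(img, pad, mode='edge').astype(np.float64)
    c = np.cumsum(np.cumsum(p, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))          # zero-padded integral image
    s = c[win:, win:] - c[:-win, win:] - c[win:, :-win] + c[:-win, :-win]
    return s / (win * win)

def per_pixel_quality(cm_org, cm_dis, win=11, c1=0.01, c2=0.03):
    # SSIM-like per-pixel comparison built from the local means and
    # standard deviations of the two cyclopean maps. The exact formula in
    # the patent is an image not reproduced here; this form and the
    # constants c1, c2 are assumptions.
    mu_o, mu_d = box_mean(cm_org, win), box_mean(cm_dis, win)
    var_o = box_mean(cm_org ** 2, win) - mu_o ** 2
    var_d = box_mean(cm_dis ** 2, win) - mu_d ** 2
    sig_o = np.sqrt(np.maximum(var_o, 0.0))
    sig_d = np.sqrt(np.maximum(var_d, 0.0))
    lum = (2 * mu_o * mu_d + c1) / (mu_o ** 2 + mu_d ** 2 + c1)
    con = (2 * sig_o * sig_d + c2) / (sig_o ** 2 + sig_d ** 2 + c2)
    return lum * con
```

For identical inputs the metric is 1 at every pixel, and for non-negative inputs it stays in [0, 1], falling as local means or contrasts diverge.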
The specific process of step ④ is:
④-1. Apply the discrete Fourier transform to {CMorg(x,y)} to obtain its amplitude and phase, denoted {Morg(u,v)} and {Aorg(u,v)} respectively, where (u,v) is a coordinate in the transform domain, 1 ≤ u ≤ W, 1 ≤ v ≤ H, Morg(u,v) is the amplitude value at position (u,v) of {Morg(u,v)}, and Aorg(u,v) is the phase value at position (u,v) of {Aorg(u,v)};
④-2. Compute the amplitude of the high-frequency components of {Morg(u,v)}, denoted {Rorg(u,v)}; the value at position (u,v) is Rorg(u,v) = log(Morg(u,v)) - hm(u,v)*log(Morg(u,v)), where log() is the natural logarithm (base e, e = 2.718281828), "*" denotes convolution, and hm(u,v) is an m×m mean filter;
④-3. Apply the inverse discrete Fourier transform to {Rorg(u,v)} and {Aorg(u,v)} and take the resulting image as the saliency map of {CMorg(x,y)}, denoted {SMorg(x,y)}, where SMorg(x,y) is the pixel value at coordinate (x,y);
④-4. Following the operations of steps ④-1 to ④-3 for obtaining the saliency map of {CMorg(x,y)}, obtain in the same way the saliency map of {CMdis(x,y)}, denoted {SMdis(x,y)}, where SMdis(x,y) is the pixel value at coordinate (x,y).
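Steps ④-1 to ④-3 describe a spectral-residual saliency computation. A numpy sketch of those steps; exponentiating the residual before the inverse transform and squaring the result follow the standard spectral-residual method and are assumptions where the text above is silent:

```python
import numpy as np

def spectral_residual_saliency(cm, m=3):
    # DFT amplitude M and phase A of the cyclopean image.
    f = np.fft.fft2(cm.astype(np.float64))
    amplitude = np.abs(f) + 1e-12
    phase = np.angle(f)
    log_amp = np.log(amplitude)
    # Spectral residual R = log(M) - h_m * log(M): subtract an m x m mean
    # filter of the log amplitude (circular convolution via the FFT, with
    # the kernel rolled so its centre sits at index 0).
    kernel = np.zeros_like(log_amp)
    kernel[:m, :m] = 1.0 / (m * m)
    kernel = np.roll(kernel, (-(m // 2), -(m // 2)), axis=(0, 1))
    smoothed = np.real(np.fft.ifft2(np.fft.fft2(log_amp) * np.fft.fft2(kernel)))
    residual = log_amp - smoothed
    # Inverse transform from the residual amplitude and the original phase;
    # squaring the magnitude is the usual spectral-residual final step.
    return np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
```

The result is a non-negative map of the same size as the cyclopean image, emphasizing spectrally "unexpected" (salient) regions.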
Compared with the prior art, the advantages of the present invention are:
1) By computing the cyclopean maps of the original undistorted stereoscopic image and of the distorted stereoscopic image to be evaluated, and evaluating the distorted cyclopean map directly, the method effectively simulates the binocular stereo fusion process and avoids linearly weighting separate objective metrics for the left and right viewpoint images.
2) By computing the saliency maps of the two cyclopean maps and the distortion map between them, and using these to fuse the per-pixel objective metrics of the distorted image's cyclopean map, the evaluation results better match the human visual system, effectively improving the correlation between objective evaluation results and subjective perception.
Brief Description of the Drawings
Fig. 1 is the overall implementation block diagram of the method of the present invention;
Fig. 2a is the left viewpoint image of the Akko stereoscopic image (640×480);
Fig. 2b is the right viewpoint image of the Akko stereoscopic image (640×480);
Fig. 3a is the left viewpoint image of the Altmoabit stereoscopic image (1024×768);
Fig. 3b is the right viewpoint image of the Altmoabit stereoscopic image (1024×768);
Fig. 4a is the left viewpoint image of the Balloons stereoscopic image (1024×768);
Fig. 4b is the right viewpoint image of the Balloons stereoscopic image (1024×768);
Fig. 5a is the left viewpoint image of the Doorflower stereoscopic image (1024×768);
Fig. 5b is the right viewpoint image of the Doorflower stereoscopic image (1024×768);
Fig. 6a is the left viewpoint image of the Kendo stereoscopic image (1024×768);
Fig. 6b is the right viewpoint image of the Kendo stereoscopic image (1024×768);
Fig. 7a is the left viewpoint image of the LeaveLaptop stereoscopic image (1024×768);
Fig. 7b is the right viewpoint image of the LeaveLaptop stereoscopic image (1024×768);
Fig. 8a is the left viewpoint image of the Lovebierd1 stereoscopic image (1024×768);
Fig. 8b is the right viewpoint image of the Lovebierd1 stereoscopic image (1024×768);
Fig. 9a is the left viewpoint image of the Newspaper stereoscopic image (1024×768);
Fig. 9b is the right viewpoint image of the Newspaper stereoscopic image (1024×768);
Fig. 10a is the left viewpoint image of the Puppy stereoscopic image (720×480);
Fig. 10b is the right viewpoint image of the Puppy stereoscopic image (720×480);
Fig. 11a is the left viewpoint image of the Soccer2 stereoscopic image (720×480);
Fig. 11b is the right viewpoint image of the Soccer2 stereoscopic image (720×480);
Fig. 12a is the left viewpoint image of the Horse stereoscopic image (720×480);
Fig. 12b is the right viewpoint image of the Horse stereoscopic image (720×480);
Fig. 13a is the left viewpoint image of the Xmas stereoscopic image (640×480);
Fig. 13b is the right viewpoint image of the Xmas stereoscopic image (640×480);
Fig. 14 is a scatter plot of the objective image quality prediction versus the difference mean opinion score for each distorted stereoscopic image in the distorted stereoscopic image set.
Detailed Description of Embodiments
The present invention is further described in detail below with reference to the accompanying drawings and embodiments.
The overall implementation block diagram of the feature-fusion-based objective stereoscopic image quality evaluation method proposed by the present invention is shown in Fig. 1. The processing is as follows. First, from the even-symmetric and odd-symmetric frequency responses of each pixel of the left and right viewpoint images at different scales and orientations, together with the disparity image between the left and right viewpoint images of the original undistorted stereoscopic image, the cyclopean map of the original undistorted stereoscopic image is obtained; the cyclopean map of the distorted stereoscopic image to be evaluated is obtained in the same way from its own left and right viewpoint images and the same disparity image. Second, from the mean and standard deviation of the pixel values of each pixel in the two cyclopean maps, an objective evaluation metric is obtained for each pixel of the distorted image's cyclopean map. Third, from the amplitude and phase of each of the two cyclopean maps, the corresponding saliency maps are obtained. Then, using the two saliency maps and the distortion map between the two cyclopean maps, the per-pixel metrics are fused into the objective image quality prediction of the distorted stereoscopic image. Finally, the same process is applied to obtain objective quality predictions for multiple distorted stereoscopic images of different distortion types and degrees.
The method of the present invention specifically comprises the following steps:
① Let Sorg be the original undistorted stereoscopic image and Sdis the distorted stereoscopic image to be evaluated. Denote the left viewpoint image of Sorg by {Lorg(x,y)}, the right viewpoint image of Sorg by {Rorg(x,y)}, the left viewpoint image of Sdis by {Ldis(x,y)}, and the right viewpoint image of Sdis by {Rdis(x,y)}, where (x,y) is the coordinate of a pixel in the viewpoint images, 1 ≤ x ≤ W, 1 ≤ y ≤ H, W and H are the width and height of the viewpoint images, and Lorg(x,y), Rorg(x,y), Ldis(x,y), and Rdis(x,y) denote the pixel values at coordinate (x,y) of the respective images.
② From the even-symmetric and odd-symmetric frequency responses at different scales and orientations of each pixel in {Lorg(x,y)}, {Rorg(x,y)}, {Ldis(x,y)}, and {Rdis(x,y)}, obtain the amplitude of each pixel of each of the four images. Then, from the amplitudes of the pixels of {Lorg(x,y)} and {Rorg(x,y)} and the pixel values of the disparity image between {Lorg(x,y)} and {Rorg(x,y)}, compute the cyclopean map of Sorg, denoted {CMorg(x,y)}; likewise, from the amplitudes of the pixels of {Ldis(x,y)} and {Rdis(x,y)} and the same disparity image, compute the cyclopean map of Sdis, denoted {CMdis(x,y)}, where CMorg(x,y) and CMdis(x,y) are the pixel values at coordinate (x,y) of the respective cyclopean maps.
In this embodiment, the specific process of step ② is:
②-1. Filter {Lorg(x,y)} to obtain the even-symmetric and odd-symmetric frequency responses of each of its pixels at different scales and orientations. Denote the even-symmetric frequency response of the pixel at coordinate (x,y) in {Lorg(x,y)} as eα,θ(x,y) and its odd-symmetric frequency response as oα,θ(x,y), where α is the scale factor of the filter, 1≤α≤4, and θ is the orientation factor of the filter, 1≤θ≤4.
Here, the filter applied to {Lorg(x,y)} is a log-Gabor filter.
②-2. From the even-symmetric and odd-symmetric frequency responses of each pixel in {Lorg(x,y)} at the different scales and orientations, compute the amplitude of each pixel in {Lorg(x,y)}.
②-3. Following the operations of steps ②-1 and ②-2 for obtaining the amplitude of each pixel in {Lorg(x,y)}, obtain in the same way the amplitude of each pixel in {Rorg(x,y)}, {Ldis(x,y)} and {Rdis(x,y)}. For example, the amplitude of each pixel in {Ldis(x,y)} is obtained as follows: 1) filter {Ldis(x,y)} to obtain the even-symmetric and odd-symmetric frequency responses of each of its pixels at different scales and orientations, denoting the even-symmetric frequency response of the pixel at coordinate (x,y) as eα,θ'(x,y) and its odd-symmetric frequency response as oα,θ'(x,y), where α is the scale factor of the filter, 1≤α≤4, and θ is the orientation factor of the filter, 1≤θ≤4; 2) from these even-symmetric and odd-symmetric frequency responses, compute the amplitude of each pixel in {Ldis(x,y)}.
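As an illustrative sketch of steps ②-1 and ②-2: the even- and odd-symmetric responses can be taken as the real and imaginary parts of a complex log-Gabor filtering, and the amplitude accumulated over the 4 scales and 4 orientations. The filter parameters below (minimum wavelength, scale multiplier, radial and angular bandwidths) are assumptions, since the patent only fixes the ranges 1≤α≤4 and 1≤θ≤4; the amplitude formula itself appears only as an image in the source and is reconstructed here as the usual log-Gabor energy.

```python
import numpy as np

def log_gabor_amplitude(img, n_scales=4, n_orients=4,
                        min_wavelength=3.0, mult=2.0,
                        sigma_f=0.55, sigma_theta=0.4):
    """Per-pixel amplitude from even/odd log-Gabor responses.

    Filter parameters are illustrative assumptions; the patent only
    states 4 scales and 4 orientations (1 <= alpha, theta <= 4).
    """
    h, w = img.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    radius = np.hypot(fx, fy)
    radius[0, 0] = 1.0                        # avoid log(0) at DC
    angle = np.arctan2(-fy, fx)
    F = np.fft.fft2(img)

    energy = np.zeros((h, w))
    for a in range(n_scales):
        f0 = 1.0 / (min_wavelength * mult ** a)   # centre frequency of scale alpha
        radial = np.exp(-(np.log(radius / f0) ** 2) / (2 * np.log(sigma_f) ** 2))
        radial[0, 0] = 0.0                        # zero DC response
        for t in range(n_orients):
            theta0 = t * np.pi / n_orients
            # wrapped angular distance; one-sided, so the response is complex
            dtheta = np.arctan2(np.sin(angle - theta0), np.cos(angle - theta0))
            angular = np.exp(-dtheta ** 2 / (2 * sigma_theta ** 2))
            resp = np.fft.ifft2(F * radial * angular)
            e, o = resp.real, resp.imag           # even / odd responses
            energy += e ** 2 + o ** 2
    return np.sqrt(energy)                        # amplitude per pixel
```

The one-sided angular window makes the filter non-symmetric in frequency, so the spatial response is complex and its real/imaginary parts give the even/odd quadrature pair.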
②-4. Compute the disparity image between {Lorg(x,y)} and {Rorg(x,y)} by the block-matching method.
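Step ②-4 only names block matching; a minimal sum-of-absolute-differences (SAD) sketch, with assumed block size and disparity search range, might look like:

```python
import numpy as np

def block_match_disparity(left, right, block=8, max_disp=16):
    """Integer-pixel disparity by SAD block matching.

    Illustrative sketch; the patent names the block-matching method
    but not its block size or search range."""
    h, w = left.shape
    disp = np.zeros((h, w))
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            ref = left[by:by + block, bx:bx + block]
            best_d, best_cost = 0, np.inf
            for d in range(0, min(max_disp, bx) + 1):
                cand = right[by:by + block, bx - d:bx - d + block]
                cost = np.abs(ref - cand).sum()   # sum of absolute differences
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[by:by + block, bx:bx + block] = best_d
    return disp
```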
②-5. From the amplitude of each pixel in {Lorg(x,y)} and {Rorg(x,y)} and the pixel value of each pixel in the disparity image, compute the cyclopean map of Sorg, denoted {CMorg(x,y)}; the pixel value at coordinate (x,y) in {CMorg(x,y)} is denoted CMorg(x,y).
②-6. From the amplitude of each pixel in {Ldis(x,y)} and {Rdis(x,y)} and the pixel value of each pixel in the disparity image, compute the cyclopean map of Sdis, denoted {CMdis(x,y)}; the pixel value at coordinate (x,y) in {CMdis(x,y)} is denoted CMdis(x,y).
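The exact combination formulas of steps ②-5 and ②-6 are not recoverable from this text (they appear as images in the original). A common amplitude-weighted cyclopean construction, assumed here purely for illustration, weights each view by its normalized log-Gabor amplitude and shifts the right view by the disparity:

```python
import numpy as np

def cyclopean_map(L, R, amp_L, amp_R, disp):
    """Amplitude-weighted cyclopean image (hedged sketch).

    The patent's exact combination formula is not recoverable from
    this text; this sketch assumes a common form from the literature:
    each view weighted by its normalized amplitude, with the right
    view sampled at the disparity-shifted column."""
    h, w = L.shape
    x = np.arange(w)[None, :].repeat(h, axis=0)
    xs = np.clip(x - disp.astype(int), 0, w - 1)   # right-view column after shift
    y = np.arange(h)[:, None].repeat(w, axis=1)
    R_shift = R[y, xs]
    aR_shift = amp_R[y, xs]
    wL = amp_L / (amp_L + aR_shift + 1e-12)        # normalized amplitude weight
    return wL * L + (1.0 - wL) * R_shift
```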
③ From the mean and standard deviation of the pixel values of each pixel in {CMorg(x,y)} and {CMdis(x,y)}, compute an objective evaluation metric value for each pixel in {CMdis(x,y)}. Denote the objective evaluation metric value of the pixel at coordinate (x,y) in {CMdis(x,y)} as Qimage(x,y), and denote the set of objective evaluation metric values of all pixels in {CMdis(x,y)} as {Qimage(x,y)}.
In this embodiment, the specific process of step ③ is as follows:
③-1. Compute the mean and standard deviation of the pixel values for each pixel in {CMorg(x,y)} and {CMdis(x,y)}. Denote the mean and standard deviation of the pixel values for the pixel at coordinate (x1,y1) in {CMorg(x,y)} as μorg(x1,y1) and σorg(x1,y1) respectively, and the mean and standard deviation for the pixel at coordinate (x1,y1) in {CMdis(x,y)} as μdis(x1,y1) and σdis(x1,y1) respectively.
③-2. From the means and standard deviations of the pixel values of the pixels in {CMorg(x,y)} and {CMdis(x,y)}, compute the objective evaluation metric value of each pixel in {CMdis(x,y)}; the objective evaluation metric value of the pixel at coordinate (x1,y1) in {CMdis(x,y)} is denoted Qimage(x1,y1).
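The per-pixel metric formulas of steps ③-1 and ③-2 were likewise lost in extraction. A standard SSIM-style luminance-contrast similarity computed from local means and standard deviations is assumed here as a stand-in (the window size k and the constants C1, C2 are illustrative, not the patent's values):

```python
import numpy as np

def local_mean_std(img, k=5):
    """Local mean and standard deviation over a k x k sliding window
    (window size is an assumption; the patent text here omits it)."""
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    # stack the k*k shifted views of the image and reduce over them
    views = np.stack([p[i:i + img.shape[0], j:j + img.shape[1]]
                      for i in range(k) for j in range(k)])
    return views.mean(axis=0), views.std(axis=0)

def q_image(cm_org, cm_dis, C1=0.01, C2=0.02):
    """SSIM-style per-pixel quality from local means and stds.

    The patent's exact formula is not reproduced in this text; this is
    a standard luminance-contrast similarity used as a stand-in."""
    mu_o, sd_o = local_mean_std(cm_org)
    mu_d, sd_d = local_mean_std(cm_dis)
    lum = (2 * mu_o * mu_d + C1) / (mu_o ** 2 + mu_d ** 2 + C1)
    con = (2 * sd_o * sd_d + C2) / (sd_o ** 2 + sd_d ** 2 + C2)
    return lum * con
```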
④ Using the spectral-redundancy property of {CMorg(x,y)}, i.e. from the amplitude and phase of {CMorg(x,y)}, compute the saliency map of {CMorg(x,y)}, denoted {SMorg(x,y)}; likewise, using the spectral-redundancy property of {CMdis(x,y)}, i.e. from the amplitude and phase of {CMdis(x,y)}, compute the saliency map of {CMdis(x,y)}, denoted {SMdis(x,y)}. Here SMorg(x,y) is the pixel value at coordinate (x,y) in {SMorg(x,y)}, and SMdis(x,y) is the pixel value at coordinate (x,y) in {SMdis(x,y)}.
In this embodiment, the specific process of step ④ is as follows:
④-1. Apply the discrete Fourier transform to {CMorg(x,y)} to obtain its amplitude and phase, denoted {Morg(u,v)} and {Aorg(u,v)} respectively, where (u,v) is the coordinate position in the transform domain, with u along the width and v along the height of the amplitude and phase arrays, 1≤u≤W, 1≤v≤H; Morg(u,v) is the amplitude value at coordinate (u,v) in {Morg(u,v)}, and Aorg(u,v) is the phase value at coordinate (u,v) in {Aorg(u,v)}.
④-2. Compute the amplitude of the high-frequency component of {Morg(u,v)}, denoted {Rorg(u,v)}; the high-frequency-component amplitude value at coordinate (u,v) in {Rorg(u,v)} is denoted Rorg(u,v), with Rorg(u,v) = log(Morg(u,v)) - hm(u,v) * log(Morg(u,v)), where log() is the natural (base-e) logarithm, e ≈ 2.718281828, "*" is the convolution operator, and hm(u,v) is an m×m mean filter; in this embodiment, m = 3.
④-3. Apply the inverse discrete Fourier transform to {Rorg(u,v)} and {Aorg(u,v)}, and take the resulting inverse-transformed image as the saliency map of {CMorg(x,y)}, denoted {SMorg(x,y)}, where SMorg(x,y) is the pixel value at coordinate (x,y) in {SMorg(x,y)}.
④-4. Following the operations of steps ④-1 to ④-3 for obtaining the saliency map of {CMorg(x,y)}, obtain in the same way the saliency map of {CMdis(x,y)}, denoted {SMdis(x,y)}, where SMdis(x,y) is the pixel value at coordinate (x,y) in {SMdis(x,y)}.
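Steps ④-1 to ④-3 describe a spectral-residual saliency computation, which can be sketched directly from the stated formula Rorg(u,v) = log(Morg(u,v)) - hm(u,v) * log(Morg(u,v)) with m = 3. Squaring the magnitude of the inverse transform is a common convention in spectral-residual saliency and is an assumption here, since the patent text only says the inverse-transformed image is taken as the saliency map.

```python
import numpy as np

def spectral_residual_saliency(img, m=3):
    """Saliency map from the spectral residual of an image, following
    steps (4)-1 to (4)-3: natural-log amplitude spectrum minus its
    m x m mean-filtered version, recombined with the original phase."""
    F = np.fft.fft2(img)
    M = np.abs(F)                      # amplitude {M(u,v)}
    A = np.angle(F)                    # phase {A(u,v)}
    logM = np.log(M + 1e-12)
    pad = m // 2
    p = np.pad(logM, pad, mode='edge')
    smooth = np.zeros_like(logM)
    for i in range(m):                 # m x m mean filter h_m(u,v)
        for j in range(m):
            smooth += p[i:i + logM.shape[0], j:j + logM.shape[1]]
    smooth /= m * m
    R = logM - smooth                  # spectral residual {R(u,v)}
    sal = np.fft.ifft2(np.exp(R + 1j * A))
    return np.abs(sal) ** 2            # saliency map (squaring assumed)
```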
⑤ Compute the distortion map between {CMorg(x,y)} and {CMdis(x,y)}, denoted {DM(x,y)}; the pixel value at coordinate (x,y) in {DM(x,y)} is denoted DM(x,y), with DM(x,y) = (CMorg(x,y) - CMdis(x,y))².
⑥ Using {SMorg(x,y)}, {SMdis(x,y)} and {DM(x,y)}, fuse the objective evaluation metric values of the pixels in {CMdis(x,y)} to obtain the objective image-quality prediction value of Sdis, denoted Q.
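The pooling formula of step ⑥ is not reproduced in this text. A plausible hedged sketch pools {Qimage(x,y)} with weights built from the two saliency maps and the distortion map; the specific weighting max(SMorg, SMdis) * DM below is an assumption, not the patent's formula:

```python
import numpy as np

def fuse_quality(q_map, sm_org, sm_dis, dm):
    """Saliency- and distortion-weighted pooling of the per-pixel
    metric into a single score Q.

    Hedged sketch: the patent's exact pooling formula is not
    recoverable; a weighted average with weights
    max(SM_org, SM_dis) * DM is assumed here."""
    w = np.maximum(sm_org, sm_dis) * dm
    return float((w * q_map).sum() / (w.sum() + 1e-12))
```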
⑦ Take n original, undistorted stereoscopic images and build from them a set of distorted stereoscopic images covering different distortion types and degrees of distortion; the set contains multiple distorted stereoscopic images. Using a subjective quality evaluation method, obtain the mean subjective score difference of each distorted stereoscopic image in the set, denoted DMOS, with DMOS = 100 - MOS, where MOS is the mean opinion score, DMOS ∈ [0,100], and n ≥ 1.
In this embodiment, 12 undistorted stereoscopic images (n = 12), namely the stereo pairs formed by Figs. 2a and 2b, 3a and 3b, 4a and 4b, 5a and 5b, 6a and 6b, 7a and 7b, 8a and 8b, 9a and 9b, 10a and 10b, 11a and 11b, 12a and 12b, and 13a and 13b, were used to build a set of distorted stereoscopic images under different distortion types and degrees of distortion. The set comprises 252 distorted stereoscopic images of 4 distortion types: 60 JPEG-compressed, 60 JPEG2000-compressed, 60 Gaussian-blurred, and 72 H.264-coded distorted stereoscopic images.
⑧ Following the operations of steps ① to ⑥ for computing the objective image-quality prediction value of Sdis, compute the objective image-quality prediction value of each distorted stereoscopic image in the distorted stereoscopic image set.
The 252 distorted stereoscopic images derived from the 12 undistorted stereoscopic images of Figs. 2a to 13b under different degrees of JPEG compression, JPEG2000 compression, Gaussian blur and H.264 coding distortion are used to analyze the correlation between the objective image-quality prediction values obtained in this embodiment and the mean subjective score differences. Four objective criteria commonly used to assess image-quality evaluation methods serve as evaluation indices: the Pearson linear correlation coefficient (PLCC) under nonlinear regression, the Spearman rank-order correlation coefficient (SROCC), the Kendall rank-order correlation coefficient (KROCC), and the root mean squared error (RMSE). PLCC and RMSE reflect the accuracy of the objective evaluation model for distorted stereoscopic images, while SROCC and KROCC reflect its monotonicity.
The objective image-quality prediction values computed by the method of the invention are fitted with a five-parameter logistic function; higher PLCC, SROCC and KROCC values and a lower RMSE value indicate better correlation between the objective evaluation method and the mean subjective score differences. The Pearson, Spearman and Kendall correlation coefficients and the root mean squared error between the objective prediction values and the subjective scores, obtained with and without the method of the invention, are compared in Tables 1, 2, 3 and 4. As these tables show, the correlation between the final objective image-quality prediction values of the distorted stereoscopic images obtained by the method of the invention and the mean subjective score differences is high, indicating that the objective evaluation results agree well with subjective human visual perception, which demonstrates the effectiveness of the method of the invention.
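The four evaluation indices can be computed with plain NumPy as sketched below. The five-parameter logistic fitting mentioned in the text is omitted for brevity (PLCC and RMSE are normally computed on the fitted scores), and tie handling in the rank-based indices is simplified, so this assumes distinct scores:

```python
import numpy as np

def _rank(a):
    """Ordinal ranks 1..n (no tie handling: assumes distinct scores)."""
    ranks = np.empty(len(a))
    ranks[np.argsort(a)] = np.arange(1, len(a) + 1)
    return ranks

def _pearson(x, y):
    x = x - x.mean()
    y = y - y.mean()
    return float((x * y).sum() / np.sqrt((x * x).sum() * (y * y).sum()))

def evaluate_metric(objective, dmos):
    """PLCC, SROCC, KROCC and RMSE between objective scores and DMOS."""
    obj = np.asarray(objective, float)
    dm = np.asarray(dmos, float)
    plcc = _pearson(obj, dm)
    srocc = _pearson(_rank(obj), _rank(dm))    # Spearman = Pearson on ranks
    n = len(obj)
    concordant = sum(np.sign(obj[i] - obj[j]) * np.sign(dm[i] - dm[j])
                     for j in range(n) for i in range(j + 1, n))
    krocc = float(concordant / (n * (n - 1) / 2))   # Kendall tau-a
    rmse = float(np.sqrt(np.mean((obj - dm) ** 2)))
    return plcc, srocc, krocc, rmse
```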
Fig. 14 shows the scatter plot of the objective image-quality prediction values versus the mean subjective score differences for the distorted stereoscopic images in the set; the more concentrated the scatter, the better the agreement between the objective evaluation results and subjective perception. As Fig. 14 shows, the scatter obtained with the method of the invention is fairly concentrated and agrees well with the subjective evaluation data.
Table 1 Comparison of Pearson correlation coefficients between the objective image-quality prediction values and the subjective scores of distorted stereoscopic images, obtained with and without the method of the invention
Table 2 Comparison of Spearman correlation coefficients between the objective image-quality prediction values and the subjective scores of distorted stereoscopic images, obtained with and without the method of the invention
Table 3 Comparison of Kendall correlation coefficients between the objective image-quality prediction values and the subjective scores of distorted stereoscopic images, obtained with and without the method of the invention
Table 4 Comparison of root mean squared errors between the objective image-quality prediction values and the subjective scores of distorted stereoscopic images, obtained with and without the method of the invention
Claims (6)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201210357956.8A CN102903107B (en) | 2012-09-24 | 2012-09-24 | Three-dimensional picture quality objective evaluation method based on feature fusion |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201210357956.8A CN102903107B (en) | 2012-09-24 | 2012-09-24 | Three-dimensional picture quality objective evaluation method based on feature fusion |
Publications (2)
Publication Number | Publication Date |
---|---|
CN102903107A CN102903107A (en) | 2013-01-30 |
CN102903107B true CN102903107B (en) | 2015-07-08 |
Family
ID=47575320
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201210357956.8A Expired - Fee Related CN102903107B (en) | 2012-09-24 | 2012-09-24 | Three-dimensional picture quality objective evaluation method based on feature fusion |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN102903107B (en) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103200420B (en) * | 2013-03-19 | 2015-03-25 | 宁波大学 | Three-dimensional picture quality objective evaluation method based on three-dimensional visual attention |
CN103281556B (en) * | 2013-05-13 | 2015-05-13 | 宁波大学 | Objective evaluation method for stereo image quality on the basis of image decomposition |
CN103369348B (en) * | 2013-06-27 | 2015-03-25 | 宁波大学 | Three-dimensional image quality objective evaluation method based on regional importance classification |
CN106960432B (en) * | 2017-02-08 | 2019-10-25 | 宁波大学 | A No-reference Stereo Image Quality Evaluation Method |
CN107945151B (en) * | 2017-10-26 | 2020-01-21 | 宁波大学 | Repositioning image quality evaluation method based on similarity transformation |
CN108694705B (en) * | 2018-07-05 | 2020-12-11 | 浙江大学 | A method for multi-frame image registration and fusion denoising |
CN109903273B (en) * | 2019-01-30 | 2023-03-17 | 武汉科技大学 | Stereo image quality objective evaluation method based on DCT domain characteristics |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101378519A (en) * | 2008-09-28 | 2009-03-04 | Ningbo University | Method for evaluating degraded-reference image quality based on the Contourlet transform
CN101610425A (en) * | 2009-07-29 | 2009-12-23 | Tsinghua University | A method and device for evaluating the quality of stereoscopic images
CN102170581A (en) * | 2011-05-05 | 2011-08-31 | Tianjin University | Human-visual-system (HVS)-based structural similarity (SSIM) and characteristic matching three-dimensional image quality evaluation method
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4817246B2 (en) * | 2006-07-31 | 2011-11-16 | KDDI Corp. | Objective video quality evaluation system |
- 2012-09-24 CN CN201210357956.8A patent/CN102903107B/en not_active Expired - Fee Related
Non-Patent Citations (2)
Title |
---|
Quality Assessment of Stereoscopic Images;Alexandre Benoit,et al;《Quality Assessment of Stereoscopic Images》;20081014;1-13 * |
Asymmetric distortion based on wavelet image fusion; Zhou Wujie et al.; Opto-Electronic Engineering; Nov. 30, 2011; Vol. 38, No. 11; pp. 100-105 *
Also Published As
Publication number | Publication date |
---|---|
CN102903107A (en) | 2013-01-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN102903107B (en) | Three-dimensional picture quality objective evaluation method based on feature fusion | |
CN104036501B (en) | A kind of objective evaluation method for quality of stereo images based on rarefaction representation | |
CN103413298B (en) | A kind of objective evaluation method for quality of stereo images of view-based access control model characteristic | |
CN102843572B (en) | Phase-based stereo image quality objective evaluation method | |
CN102708567B (en) | Visual perception-based three-dimensional image quality objective evaluation method | |
CN104394403B (en) | A kind of stereoscopic video quality method for objectively evaluating towards compression artefacts | |
CN102663747B (en) | Stereo image objectivity quality evaluation method based on visual perception | |
CN103136748B (en) | The objective evaluation method for quality of stereo images of a kind of feature based figure | |
CN105282543B (en) | Total blindness three-dimensional image quality objective evaluation method based on three-dimensional visual perception | |
CN104581143A (en) | Reference-free three-dimensional picture quality objective evaluation method based on machine learning | |
CN104408716A (en) | Three-dimensional image quality objective evaluation method based on visual fidelity | |
CN102999912B (en) | A kind of objective evaluation method for quality of stereo images based on distortion map | |
CN103200420B (en) | Three-dimensional picture quality objective evaluation method based on three-dimensional visual attention | |
CN104036502A (en) | No-reference fuzzy distorted stereo image quality evaluation method | |
CN105357519A (en) | Quality objective evaluation method for three-dimensional image without reference based on self-similarity characteristic | |
CN102999911B (en) | Three-dimensional image quality objective evaluation method based on energy diagrams | |
CN103369348B (en) | Three-dimensional image quality objective evaluation method based on regional importance classification | |
CN102737380B (en) | Stereo image quality objective evaluation method based on gradient structure tensor | |
CN104767993A (en) | An Objective Quality Evaluation Method for Stereo Video Based on Quality Degradation and Temporal Weighting | |
CN102708568A (en) | A Stereoscopic Image Objective Quality Evaluation Method Based on Structural Distortion | |
CN105898279B (en) | An Objective Evaluation Method of Stereoscopic Image Quality | |
CN103914835B (en) | A kind of reference-free quality evaluation method for fuzzy distortion stereo-picture | |
CN103745457B (en) | A kind of three-dimensional image objective quality evaluation method | |
CN103903259A (en) | Objective three-dimensional image quality evaluation method based on structure and texture separation | |
CN102271279B (en) | Objective analysis method for just noticeable change step length of stereo images |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right |
Effective date of registration: 20191218 Address after: Room 1,020, Nanxun Science and Technology Pioneering Park, No. 666 Chaoyang Road, Nanxun District, Huzhou City, Zhejiang Province, 313000 Patentee after: Huzhou You Yan Intellectual Property Service Co.,Ltd. Address before: 315211 Zhejiang Province, Ningbo Jiangbei District Fenghua Road No. 818 Patentee before: Ningbo University |
|
TR01 | Transfer of patent right | ||
TR01 | Transfer of patent right |
Effective date of registration: 20201229 Address after: 213001 3rd floor, Jinhu innovation center, No.8 Taihu Middle Road, Xinbei District, Changzhou City, Jiangsu Province Patentee after: Jiangsu Qizhen Information Technology Service Co.,Ltd. Address before: 313000 room 1020, science and Technology Pioneer Park, 666 Chaoyang Road, Nanxun Town, Nanxun District, Huzhou, Zhejiang. Patentee before: Huzhou You Yan Intellectual Property Service Co.,Ltd. |
|
TR01 | Transfer of patent right | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20150708 |
|
CF01 | Termination of patent right due to non-payment of annual fee |