CN103903259A - Objective three-dimensional image quality evaluation method based on structure and texture separation - Google Patents
Abstract
The invention discloses an objective quality evaluation method for stereoscopic images based on structure-texture separation. First, structure-texture separation is applied to the left-viewpoint and right-viewpoint images of the original undistorted stereoscopic image and of the distorted stereoscopic image to be evaluated, yielding a structure image and a texture image for each. Gradient similarity is then used to evaluate the structure images of the left-viewpoint and right-viewpoint images, structural similarity is used to evaluate the corresponding texture images, and the objective quality prediction value of the distorted stereoscopic image to be evaluated is obtained by fusion. The advantage is that the structure and texture images obtained by the decomposition characterize well the influence of structural and textural information on image quality, so that the evaluation results accord better with the human visual system, effectively improving the correlation between the objective evaluation results and subjective perception.
Description
Technical Field
The present invention relates to an image quality evaluation method, and in particular to an objective quality evaluation method for stereoscopic images based on structure-texture separation.
Background Art
With the rapid development of image coding and stereoscopic display technology, stereoscopic image technology has attracted increasingly wide attention and application and has become a current research hotspot. Stereoscopic image technology exploits the binocular parallax of the human eye: the two eyes independently receive the left-viewpoint and right-viewpoint images of the same scene, and the brain fuses them into binocular parallax, producing a stereoscopic image with a sense of depth and realism. Owing to the acquisition system, storage compression, and transmission equipment, a stereoscopic image inevitably suffers a series of distortions; and compared with a single-channel image, a stereoscopic image must guarantee the image quality of both channels simultaneously, so its quality evaluation is of great importance. At present, however, there is no effective objective method for evaluating stereoscopic image quality. Establishing an effective objective evaluation model for stereoscopic image quality is therefore highly significant.
Current objective methods for stereoscopic image quality either apply planar image quality evaluation methods directly to stereoscopic images, or evaluate the depth perception of a stereoscopic image through the quality of its disparity map. However, the fusion process by which stereoscopic perception arises is not a simple extension of planar image quality evaluation, and since the human eye never views the disparity map directly, judging depth perception by disparity-map quality is not very accurate. Therefore, how to effectively simulate binocular stereoscopic perception during quality evaluation, and how to analyze the mechanisms by which different distortion types affect perceived stereoscopic quality, so that the evaluation results reflect the human visual system more objectively, are problems that must be studied and solved in the objective quality evaluation of stereoscopic images.
Summary of the Invention
The technical problem to be solved by the present invention is to provide an objective quality evaluation method for stereoscopic images based on structure-texture separation that can effectively improve the correlation between objective evaluation results and subjective perception.
The technical solution adopted by the present invention to solve the above technical problem is an objective quality evaluation method for stereoscopic images based on structure-texture separation, characterized by the following processing procedure:
First, structure-texture separation is applied to the left-viewpoint and right-viewpoint images of the original undistorted stereoscopic image and to those of the distorted stereoscopic image to be evaluated, yielding a structure image and a texture image for each.

Second, the gradient similarity between each pixel in the structure image of the left-viewpoint image of the original undistorted stereoscopic image and the corresponding pixel in the structure image of the left-viewpoint image of the distorted stereoscopic image is computed, giving the objective quality prediction value of the structure image of the distorted left-viewpoint image; the objective quality prediction value of the structure image of the distorted right-viewpoint image is obtained in the same way.

Next, the structural similarity between each 8×8 sub-block in the texture image of the left-viewpoint image of the original undistorted stereoscopic image and the corresponding 8×8 sub-block in the texture image of the left-viewpoint image of the distorted stereoscopic image is computed, giving the objective quality prediction value of the texture image of the distorted left-viewpoint image; the objective quality prediction value of the texture image of the distorted right-viewpoint image is obtained in the same way.

Then, the objective quality prediction values of the structure images of the left-viewpoint and right-viewpoint images are fused into the objective quality prediction value of the structure images of the distorted stereoscopic image; likewise, the prediction values of the texture images of the two viewpoints are fused into the prediction value of the texture images of the distorted stereoscopic image.

Finally, the objective quality prediction values of the structure images and the texture images of the distorted stereoscopic image are fused into the objective quality prediction value of the distorted stereoscopic image to be evaluated.
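Read as a pipeline, the five stages above can be sketched as follows. This is only a minimal illustration: the box-blur separation, the gradient-similarity form, and the equal weights are all placeholder assumptions (the patent uses a Sigma-set based separation and trained weights), and for brevity the texture images are scored with the same gradient measure instead of the block-wise structural similarity the patent prescribes.

```python
import numpy as np

def separate(img, k=9):
    """Toy structure-texture split: a k x k box blur stands in for the
    Sigma-set based separation of the detailed steps below."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")            # replicate borders
    win = np.lib.stride_tricks.sliding_window_view(padded, (k, k))
    structure = win.mean(axis=(-1, -2))
    return structure, img - structure                 # texture = residual

def gradient_similarity(ref, dist, C=0.01):
    """Mean similarity of gradient magnitudes (a common form; the
    patent's exact expression is not recoverable from this text)."""
    gy_r, gx_r = np.gradient(ref)
    gy_d, gx_d = np.gradient(dist)
    g_r = np.hypot(gx_r, gy_r)
    g_d = np.hypot(gx_d, gy_d)
    return float(np.mean((2 * g_r * g_d + C) / (g_r**2 + g_d**2 + C)))

def evaluate(L_org, R_org, L_dis, R_dis, ws=0.5, wt=0.5, w=0.5):
    """End-to-end sketch; ws, wt, w are placeholder weights."""
    scores = {}
    for name, org, dis in (("L", L_org, L_dis), ("R", R_org, R_dis)):
        s_org, t_org = separate(org)
        s_dis, t_dis = separate(dis)
        scores["Qstr_" + name] = gradient_similarity(s_org, s_dis)
        # The patent scores texture with block-wise structural similarity;
        # the same gradient measure is reused here only to keep the sketch short.
        scores["Qtex_" + name] = gradient_similarity(t_org, t_dis)
    Qstr = ws * scores["Qstr_L"] + (1 - ws) * scores["Qstr_R"]
    Qtex = wt * scores["Qtex_L"] + (1 - wt) * scores["Qtex_R"]
    return w * Qstr + (1 - w) * Qtex

rng = np.random.default_rng(0)
L = rng.random((32, 32)); R = rng.random((32, 32))
Q_identical = evaluate(L, R, L.copy(), R.copy())      # undistorted pair
```

An undistorted pair scores exactly 1, since identical gradient fields make every per-pixel similarity term equal to 1.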
The objective quality evaluation method for stereoscopic images based on structure-texture separation of the present invention specifically comprises the following steps:
① Let S_org denote the original undistorted stereoscopic image and S_dis the distorted stereoscopic image to be evaluated. Denote the left-viewpoint image of S_org as {L_org(x,y)} and its right-viewpoint image as {R_org(x,y)}; denote the left-viewpoint image of S_dis as {L_dis(x,y)} and its right-viewpoint image as {R_dis(x,y)}. Here (x,y) denotes the coordinate position of a pixel in the left-viewpoint and right-viewpoint images, 1≤x≤W and 1≤y≤H, where W is the width and H the height of the viewpoint images; L_org(x,y), R_org(x,y), L_dis(x,y) and R_dis(x,y) denote the pixel values at coordinate position (x,y) in {L_org(x,y)}, {R_org(x,y)}, {L_dis(x,y)} and {R_dis(x,y)}, respectively.
② Apply structure-texture separation to {L_org(x,y)}, {R_org(x,y)}, {L_dis(x,y)} and {R_dis(x,y)} to obtain the structure image and texture image of each. Denote the structure image and texture image of {L_org(x,y)} as {L_org^str(x,y)} and {L_org^tex(x,y)}, those of {R_org(x,y)} as {R_org^str(x,y)} and {R_org^tex(x,y)}, those of {L_dis(x,y)} as {L_dis^str(x,y)} and {L_dis^tex(x,y)}, and those of {R_dis(x,y)} as {R_dis^str(x,y)} and {R_dis^tex(x,y)}, where L_org^str(x,y), L_org^tex(x,y), R_org^str(x,y), R_org^tex(x,y), L_dis^str(x,y), L_dis^tex(x,y), R_dis^str(x,y) and R_dis^tex(x,y) each denote the pixel value at coordinate position (x,y) in the corresponding image.
③ Compute the gradient similarity between each pixel in {L_org^str(x,y)} and the corresponding pixel in {L_dis^str(x,y)}; denote the gradient similarity between the pixel at coordinate position (x,y) in {L_org^str(x,y)} and the pixel at (x,y) in {L_dis^str(x,y)} as GS_L(x,y), and from the gradient similarities of all pixels obtain the image-quality objective-evaluation prediction value of {L_dis^str(x,y)}, denoted Q_L^str.
Likewise, compute the gradient similarity between each pixel in {R_org^str(x,y)} and the corresponding pixel in {R_dis^str(x,y)}; denote the gradient similarity between the pixels at coordinate position (x,y) as GS_R(x,y), and from the gradient similarities of all pixels obtain the image-quality objective-evaluation prediction value of {R_dis^str(x,y)}, denoted Q_R^str.
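The per-pixel gradient similarity of step ③ can be illustrated as below. The patent's exact formula was a formula image lost in this extraction, so the sketch uses a common form, (2·g_org·g_dis + C) / (g_org² + g_dis² + C), with Sobel gradient magnitudes; the operator choice and the constant C are assumptions.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def conv2_same(img, kernel):
    """3x3 'same' convolution with replicate padding."""
    p = np.pad(img, 1, mode="edge")
    win = np.lib.stride_tricks.sliding_window_view(p, (3, 3))
    return np.einsum("ijkl,kl->ij", win, kernel[::-1, ::-1])

def gradient_similarity_map(org, dis, C=170.0):
    """Per-pixel similarity of Sobel gradient magnitudes (assumed form)."""
    g_org = np.hypot(conv2_same(org, SOBEL_X), conv2_same(org, SOBEL_Y))
    g_dis = np.hypot(conv2_same(dis, SOBEL_X), conv2_same(dis, SOBEL_Y))
    return (2 * g_org * g_dis + C) / (g_org**2 + g_dis**2 + C)

rng = np.random.default_rng(1)
org = rng.random((16, 16)) * 255
sim_same = gradient_similarity_map(org, org)        # identical images -> all 1
sim_weak = gradient_similarity_map(org, org * 0.5)  # attenuated gradients -> < 1
```

Wherever the gradients agree the map equals 1, and any attenuation or amplification of local gradients pulls the value below 1, which is why the measure is sensitive to structural degradation.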
④ Obtain the structural similarity between each 8×8 sub-block in {L_org^tex(x,y)} and the corresponding 8×8 sub-block in {L_dis^tex(x,y)}, and from these compute the image-quality objective-evaluation prediction value of {L_dis^tex(x,y)}, denoted Q_L^tex.

Likewise, obtain the structural similarity between each 8×8 sub-block in {R_org^tex(x,y)} and the corresponding 8×8 sub-block in {R_dis^tex(x,y)}, and from these compute the image-quality objective-evaluation prediction value of {R_dis^tex(x,y)}, denoted Q_R^tex.
⑤ Fuse the structure-image prediction values Q_L^str and Q_R^str of the left- and right-viewpoint images to obtain the image-quality objective-evaluation prediction value of the structure images of S_dis, denoted Q_str: Q_str = w_s × Q_L^str + (1 − w_s) × Q_R^str, where w_s is the weight assigned to Q_L^str relative to Q_R^str.

Likewise, fuse the texture-image prediction values Q_L^tex and Q_R^tex to obtain the image-quality objective-evaluation prediction value of the texture images of S_dis, denoted Q_tex: Q_tex = w_t × Q_L^tex + (1 − w_t) × Q_R^tex, where w_t is the weight assigned to Q_L^tex relative to Q_R^tex.

⑥ Fuse Q_str and Q_tex to obtain the image-quality objective-evaluation prediction value of S_dis, denoted Q: Q = w × Q_str + (1 − w) × Q_tex, where w is the weight assigned to Q_str relative to Q_tex.
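All three fusions of steps ⑤ and ⑥ share the same linear form, which can be sketched directly; the per-view scores and the weight values 0.5 below are placeholders, since the trained values of w_s, w_t and w are not given in this excerpt.

```python
def fuse(q_left, q_right, weight):
    """Linear fusion Q = weight * q_left + (1 - weight) * q_right,
    the form used for Qstr, Qtex and the final Q."""
    return weight * q_left + (1 - weight) * q_right

# Hypothetical per-view scores and placeholder weights:
Qstr = fuse(0.92, 0.88, weight=0.5)   # structure images, weight w_s
Qtex = fuse(0.85, 0.81, weight=0.5)   # texture images, weight w_t
Q = fuse(Qstr, Qtex, weight=0.5)      # final prediction, weight w
```

With equal weights this reduces to plain averaging; unequal weights let the model favour one viewpoint (or the structure term over the texture term) when fitting subjective scores.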
The structure image {L_org^str(x,y)} and texture image {L_org^tex(x,y)} of {L_org(x,y)} in step ② are obtained as follows:
②-1a. Define the pixel currently to be processed in {L_org(x,y)} as the current pixel.
②-2a. Denote the coordinate position of the current pixel in {L_org(x,y)} as p. Define every pixel other than the current pixel within the 21×21 neighborhood window centered on the current pixel as a neighborhood pixel. Define the block formed by the 9×9 neighborhood window centered on the current pixel as the current sub-block, denoted B_p; define the block formed by the 9×9 neighborhood window centered on each neighborhood pixel as a neighborhood sub-block, and denote the neighborhood sub-block centered on the neighborhood pixel whose coordinate position in {L_org(x,y)} is q as B_q. Here p∈Ω and q∈Ω, where Ω denotes the set of coordinate positions of all pixels in {L_org(x,y)}; (x2,y2) denotes the coordinate position of a pixel within the current sub-block, 1≤x2≤9, 1≤y2≤9, and B_p(x2,y2) denotes the pixel value at (x2,y2) in B_p; (x3,y3) denotes the coordinate position of a pixel within B_q, 1≤x3≤9, 1≤y3≤9, and B_q(x3,y3) denotes the pixel value at (x3,y3) in B_q.
In step ②-2a above, for any neighborhood pixel, or any pixel of the current sub-block, whose coordinate position (x,y) in {L_org(x,y)} falls outside the image, the pixel is assigned the value of the nearest border pixel of {L_org(x,y)}: if x<1 and 1≤y≤H, it takes the value at (1,y); if x>W and 1≤y≤H, the value at (W,y); if 1≤x≤W and y<1, the value at (x,1); if 1≤x≤W and y>H, the value at (x,H); if x<1 and y<1, the value at (1,1); if x>W and y<1, the value at (W,1); if x<1 and y>H, the value at (1,H); and if x>W and y>H, the value at (W,H).
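The eight boundary cases above amount to clamping out-of-range coordinates to the image border (replicate padding), which can be expressed compactly:

```python
import numpy as np

def clamped_pixel(img, x, y):
    """Return the value of img at 1-based coordinates (x, y), clamping
    the coordinates to the image border exactly as the eight cases of
    step 2-2a prescribe. (x, y) is (column, row), as in the patent."""
    H, W = img.shape
    xc = min(max(x, 1), W)
    yc = min(max(y, 1), H)
    return img[yc - 1, xc - 1]

img = np.arange(12, dtype=float).reshape(3, 4)  # H=3, W=4
corner = clamped_pixel(img, 0, 0)   # x<1 and y<1 -> value at (1,1)
edge = clamped_pixel(img, 5, 2)     # x>W, 1<=y<=H -> value at (W,y)
```

This is the same behaviour as NumPy's `np.pad(..., mode="edge")`, so a full-image implementation can pad once instead of clamping per pixel.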
②-3a. Obtain the feature vector of each pixel in the current sub-block; denote the feature vector of the pixel at coordinate position (x2,y2) in the current sub-block as x_p(x2,y2).
②-4a. From the feature vectors of all pixels in the current sub-block, compute the covariance matrix of the current sub-block, denoted C_p.
②-5a. Apply Cholesky decomposition to the covariance matrix of the current sub-block to obtain the Sigma feature set of the current sub-block, denoted S_p.
②-6a. Using the same operations as steps ②-3a to ②-5a, obtain the Sigma feature set of the neighborhood sub-block formed by the 9×9 neighborhood window centered on each neighborhood pixel; denote the Sigma feature set of the neighborhood sub-block centered at q as S_q. The dimension of each Sigma feature set is 7×15.
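Steps ②-3a to ②-6a build, for each 9×9 block, a covariance matrix of per-pixel feature vectors and compress it via Cholesky decomposition into a Sigma feature set. The patent's 7-dimensional feature vector was a formula image lost in this extraction; the sketch below assumes a typical choice (intensity, |dx|, |dy|, |dxx|, |dyy|, x, y) and forms the 7×15 set as the mean column plus the mean ± the scaled columns of the Cholesky factor, which matches the stated 7×15 dimension (1 + 7 + 7 = 15 columns).

```python
import numpy as np

def block_features(block):
    """Assumed 7-D feature per pixel of a 9x9 block:
    (value, |dx|, |dy|, |dxx|, |dyy|, x, y)."""
    k = block.shape[0]
    ys, xs = np.mgrid[0:k, 0:k].astype(float)
    dy, dx = np.gradient(block)
    dyy = np.gradient(dy, axis=0)
    dxx = np.gradient(dx, axis=1)
    feats = np.stack([block, np.abs(dx), np.abs(dy),
                      np.abs(dxx), np.abs(dyy), xs, ys])
    return feats.reshape(7, -1)          # 7 x 81 feature matrix

def sigma_feature_set(block, alpha=np.sqrt(7)):
    """7x15 Sigma set: mean column, then mean +/- scaled Cholesky columns."""
    X = block_features(block)
    mu = X.mean(axis=1, keepdims=True)
    C = np.cov(X)                        # 7x7 covariance (step 2-4a)
    C = C + 1e-8 * np.eye(7)             # keep Cholesky well-defined
    L = np.linalg.cholesky(C)            # step 2-5a
    return np.hstack([mu, mu + alpha * L, mu - alpha * L])

rng = np.random.default_rng(2)
S = sigma_feature_set(rng.random((9, 9)))
```

The Sigma set is a compact, order-invariant summary of the block's feature distribution, so comparing two blocks reduces to comparing two small 7×15 matrices.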
②-7a. From the Sigma feature set of the current sub-block and the Sigma feature sets of the neighborhood sub-blocks centered on each neighborhood pixel, obtain the structure information of the current pixel, denoted L_org^str(p).

②-8a. From the structure information of the current pixel, obtain the texture information of the current pixel, denoted L_org^tex(p).
②-9a. Take the next pixel to be processed in {L_org(x,y)} as the current pixel and return to step ②-2a, continuing until every pixel in {L_org(x,y)} has been processed and the structure information and texture information of each pixel have been obtained. The structure information of all pixels in {L_org(x,y)} constitutes the structure image {L_org^str(x,y)}, and the texture information of all pixels constitutes the texture image {L_org^tex(x,y)}.

Using the same operations as steps ②-1a to ②-9a, obtain the structure image {R_org^str(x,y)} and texture image {R_org^tex(x,y)} of {R_org(x,y)}, the structure image {L_dis^str(x,y)} and texture image {L_dis^tex(x,y)} of {L_dis(x,y)}, and the structure image {R_dis^str(x,y)} and texture image {R_dis^tex(x,y)} of {R_dis(x,y)}.
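The weighting formula that turns the Sigma-set comparisons of step ②-7a into a structure value was a formula image lost in this extraction. A common pattern for this kind of separation, shown here for a single pixel purely as an assumed illustration, is a non-local-means style average: each neighborhood block is weighted by a Gaussian of its Sigma-set distance from the current block, the weighted average of center-pixel values gives the structure value, and the texture value is the residual.

```python
import numpy as np

def structure_value(center_value, neighbor_values, center_sigma,
                    neighbor_sigmas, h=1.0):
    """Non-local-means style estimate (assumed form, not the patent's
    exact formula): weight each neighbor by exp(-||S_p - S_q||^2 / h^2)
    and average the neighbor center-pixel values (self weight = 1)."""
    dists = np.array([np.linalg.norm(center_sigma - s) ** 2
                      for s in neighbor_sigmas])
    w = np.exp(-dists / h**2)
    w_all = np.concatenate([[1.0], w])
    v_all = np.concatenate([[center_value], neighbor_values])
    return float(np.sum(w_all * v_all) / np.sum(w_all))

center = 10.0
S0 = np.zeros((7, 15))                                # current block's Sigma set
neigh_vals = np.array([10.0, 12.0])
neigh_sigmas = [np.zeros((7, 15)),                    # similar block
                np.ones((7, 15)) * 100]               # very dissimilar block
s = structure_value(center, neigh_vals, S0, neigh_sigmas)
t = center - s                                        # texture as residual (step 2-8a, assumed)
```

The dissimilar block receives a vanishing weight, so the structure estimate follows only the similar block, and the texture residual stays near zero.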
The image-quality objective-evaluation prediction value Q_L^tex in step ④ is obtained as follows:
④-1a. Divide {L_org^tex(x,y)} and {L_dis^tex(x,y)} each into non-overlapping 8×8 sub-blocks. Define the k-th sub-block currently to be processed in {L_org^tex(x,y)} as the current first sub-block and the k-th sub-block currently to be processed in {L_dis^tex(x,y)} as the current second sub-block, where the initial value of k is 1.
④-2a. Denote the current first sub-block as B1_k and the current second sub-block as B2_k, where (x4,y4) denotes the coordinate position of a pixel within B1_k and B2_k, 1≤x4≤8, 1≤y4≤8, and B1_k(x4,y4) and B2_k(x4,y4) denote the pixel values at coordinate position (x4,y4) in B1_k and B2_k, respectively.
④-3a. Compute the mean and standard deviation of the current first sub-block, denoted μ1 and σ1; likewise, compute the mean and standard deviation of the current second sub-block, denoted μ2 and σ2.
④-4a. Compute the structural similarity between the current first sub-block and the current second sub-block, denoted SSIM_k.
④-5a. Let k=k+1, take the next sub-block to be processed in {L_org^tex(x,y)} as the current first sub-block and the next sub-block to be processed in {L_dis^tex(x,y)} as the current second sub-block, and return to step ④-2a, continuing until all sub-blocks in {L_org^tex(x,y)} and {L_dis^tex(x,y)} have been processed, giving the structural similarity between each sub-block in {L_org^tex(x,y)} and the corresponding sub-block in {L_dis^tex(x,y)}; the "=" in k=k+1 is the assignment operator.

④-6a. From the structural similarity between each sub-block in {L_org^tex(x,y)} and the corresponding sub-block in {L_dis^tex(x,y)}, compute the image-quality objective-evaluation prediction value of {L_dis^tex(x,y)}, denoted Q_L^tex.
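Steps ④-1a to ④-6a amount to tiling both texture images into 8×8 blocks, scoring each block pair with a structural-similarity measure built from the block means and standard deviations, and pooling the scores. The patent's exact expression was a formula image lost here; this sketch uses the standard SSIM form with its conventional 8-bit constants and mean pooling, all of which should be read as assumptions.

```python
import numpy as np

def block_ssim(b1, b2, C1=6.5025, C2=58.5225):
    """Standard SSIM of two blocks from their means, variances, and
    covariance (constants for 8-bit range; assumed, not from the patent)."""
    m1, m2 = b1.mean(), b2.mean()
    v1, v2 = b1.var(), b2.var()
    cov = ((b1 - m1) * (b2 - m2)).mean()
    return ((2 * m1 * m2 + C1) * (2 * cov + C2) /
            ((m1**2 + m2**2 + C1) * (v1 + v2 + C2)))

def texture_score(tex_org, tex_dis, bs=8):
    """Mean SSIM over non-overlapping 8x8 sub-blocks (steps 4-1a..4-6a)."""
    H, W = tex_org.shape
    scores = [block_ssim(tex_org[i:i+bs, j:j+bs], tex_dis[i:i+bs, j:j+bs])
              for i in range(0, H - bs + 1, bs)
              for j in range(0, W - bs + 1, bs)]
    return float(np.mean(scores))

rng = np.random.default_rng(3)
tex = rng.random((16, 16)) * 255
q_same = texture_score(tex, tex)
q_noisy = texture_score(tex, tex + rng.normal(0, 25, tex.shape))
```

An undistorted pair scores 1, and additive noise lowers the score through both the covariance and variance terms.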
The image-quality objective-evaluation prediction value Q_R^tex in step ④ is obtained as follows:

④-1b. Divide {R_org^tex(x,y)} and {R_dis^tex(x,y)} each into non-overlapping 8×8 sub-blocks. Define the k-th sub-block currently to be processed in {R_org^tex(x,y)} as the current first sub-block and the k-th sub-block currently to be processed in {R_dis^tex(x,y)} as the current second sub-block, where the initial value of k is 1.

④-2b. Denote the current first sub-block as B1_k and the current second sub-block as B2_k, where (x4,y4) denotes the coordinate position of a pixel within B1_k and B2_k, 1≤x4≤8, 1≤y4≤8, and B1_k(x4,y4) and B2_k(x4,y4) denote the pixel values at coordinate position (x4,y4) in B1_k and B2_k, respectively.

④-3b. Compute the mean and standard deviation of the current first sub-block, denoted μ1 and σ1; likewise, compute the mean and standard deviation of the current second sub-block, denoted μ2 and σ2.

④-4b. Compute the structural similarity between the current first sub-block and the current second sub-block, denoted SSIM_k.

④-5b. Let k=k+1, take the next sub-block to be processed in {R_org^tex(x,y)} as the current first sub-block and the next sub-block to be processed in {R_dis^tex(x,y)} as the current second sub-block, and return to step ④-2b, continuing until all sub-blocks in {R_org^tex(x,y)} and {R_dis^tex(x,y)} have been processed, giving the structural similarity between each sub-block in {R_org^tex(x,y)} and the corresponding sub-block in {R_dis^tex(x,y)}; the "=" in k=k+1 is the assignment operator.

④-6b. From the structural similarity between each sub-block in {R_org^tex(x,y)} and the corresponding sub-block in {R_dis^tex(x,y)}, compute the image-quality objective-evaluation prediction value of {R_dis^tex(x,y)}, denoted Q_R^tex.
Compared with the prior art, the present invention has the following advantages:

1) Considering that distortion causes loss of image structure or texture information, the method of the present invention separates the distorted stereoscopic image into a structure image and a texture image, and uses different parameters to fuse the objective quality prediction values of the structure images and of the texture images of the left-viewpoint and right-viewpoint images. This better reflects the quality variation of the stereoscopic image and makes the evaluation results accord better with the human visual system.

2) The method of the present invention evaluates the structure images with gradient similarity and the texture images with structural similarity, which well characterizes the influence of the loss of structural and textural information on image quality and thereby effectively improves the correlation between objective evaluation results and subjective perception.
Brief Description of the Drawings
Fig. 1 is the overall implementation block diagram of the method of the present invention;

Fig. 2 is a scatter plot of the objective quality prediction value against the difference mean opinion score for each distorted stereoscopic image in the Ningbo University stereoscopic image database, obtained with the method of the present invention;

Fig. 3 is a scatter plot of the objective quality prediction value against the difference mean opinion score for each distorted stereoscopic image in the LIVE stereoscopic image database, obtained with the method of the present invention.
Detailed Description of Embodiments
The present invention is described in further detail below with reference to the embodiments shown in the accompanying drawings.

The overall implementation block diagram of the objective quality evaluation method for stereoscopic images based on structure-texture separation proposed by the present invention is shown in Fig. 1. Its processing procedure is as follows:
First, structure-texture separation is applied to the left-viewpoint and right-viewpoint images of the original undistorted stereoscopic image and to those of the distorted stereoscopic image to be evaluated, yielding a structure image and a texture image for each.

Second, the gradient similarity between each pixel in the structure image of the left-viewpoint image of the original undistorted stereoscopic image and the corresponding pixel in the structure image of the left-viewpoint image of the distorted stereoscopic image is computed, giving the objective quality prediction value of the structure image of the distorted left-viewpoint image; the objective quality prediction value of the structure image of the distorted right-viewpoint image is obtained in the same way.

Next, the structural similarity between each 8×8 sub-block in the texture image of the left-viewpoint image of the original undistorted stereoscopic image and the corresponding 8×8 sub-block in the texture image of the left-viewpoint image of the distorted stereoscopic image is computed, giving the objective quality prediction value of the texture image of the distorted left-viewpoint image; the objective quality prediction value of the texture image of the distorted right-viewpoint image is obtained in the same way.

Then, the objective quality prediction values of the structure images of the left-viewpoint and right-viewpoint images are fused into the objective quality prediction value of the structure images of the distorted stereoscopic image; likewise, the prediction values of the texture images of the two viewpoints are fused into the prediction value of the texture images of the distorted stereoscopic image.

Finally, the objective quality prediction values of the structure images and the texture images of the distorted stereoscopic image are fused into the objective quality prediction value of the distorted stereoscopic image to be evaluated.
The objective quality evaluation method for stereoscopic images based on structure-texture separation of the present invention specifically comprises the following steps:
① Let S_org denote the original undistorted stereoscopic image and S_dis the distorted stereoscopic image to be evaluated. Denote the left-viewpoint image of S_org as {L_org(x,y)} and its right-viewpoint image as {R_org(x,y)}; denote the left-viewpoint image of S_dis as {L_dis(x,y)} and its right-viewpoint image as {R_dis(x,y)}. Here (x,y) denotes the coordinate position of a pixel in the left-viewpoint and right-viewpoint images, 1≤x≤W and 1≤y≤H, where W is the width and H the height of the viewpoint images; L_org(x,y), R_org(x,y), L_dis(x,y) and R_dis(x,y) denote the pixel values at coordinate position (x,y) in {L_org(x,y)}, {R_org(x,y)}, {L_dis(x,y)} and {R_dis(x,y)}, respectively.
Here, the Ningbo University stereoscopic image database and the LIVE stereoscopic image database are used to analyze the correlation between the image quality objective evaluation prediction values obtained in this embodiment and the average subjective score differences of the distorted stereoscopic images. The Ningbo University stereoscopic image database is built from 12 undistorted stereoscopic images and consists of 60 stereoscopic images distorted by JPEG compression at various levels, 60 distorted by JPEG2000 compression, 60 distorted by Gaussian blur, 60 distorted by white Gaussian noise, and 72 distorted by H.264 coding. The LIVE stereoscopic image database is built from 20 undistorted stereoscopic images and consists of 80 stereoscopic images distorted by JPEG compression at various levels, 80 distorted by JPEG2000 compression, 45 distorted by Gaussian blur, 80 distorted by white Gaussian noise, and 80 distorted by Fast Fading.
②分别对{Lorg(x,y)}、{Rorg(x,y)}、{Ldis(x,y)}和{Rdis(x,y)}实施结构纹理分离,获得{Lorg(x,y)}、{Rorg(x,y)}、{Ldis(x,y)}和{Rdis(x,y)}各自的结构图像和纹理图像,将{Lorg(x,y)}的结构图像和纹理图像对应记为和将{Rorg(x,y)}的结构图像和纹理图像对应记为和将{Ldis(x,y)}的结构图像和纹理图像对应记为和将{Rdis(x,y)}的结构图像和纹理图像对应记为和其中,表示中坐标位置为(x,y)的像素点的像素值,表示中坐标位置为(x,y)的像素点的像素值,表示中坐标位置为(x,y)的像素点的像素值,表示中坐标位置为(x,y)的像素点的像素值,表示中坐标位置为(x,y)的像素点的像素值,表示中坐标位置为(x,y)的像素点的像素值,表示中坐标位置为(x,y)的像素点的像素值,表示中坐标位置为(x,y)的像素点的像素值。② Implement structure and texture separation for {L org (x, y)}, {R org (x, y)}, {L dis (x, y)} and {R dis (x, y)} respectively, and obtain {L org (x,y)}, {R org (x,y)}, {L dis (x,y)} and {R dis (x,y)} respectively structure image and texture image, the {L org ( The corresponding structure image and texture image of x,y)} are recorded as and The structure image and texture image correspondence of {R org (x,y)} are recorded as and The structure image and texture image of {L dis (x,y)} are recorded as and The structure image and texture image of {R dis (x,y)} are recorded as and in, express The pixel value of the pixel whose coordinate position is (x, y), express The pixel value of the pixel whose coordinate position is (x, y), express The pixel value of the pixel whose coordinate position is (x, y), express The pixel value of the pixel whose coordinate position is (x, y), express The pixel value of the pixel whose coordinate position is (x, y), express The pixel value of the pixel whose coordinate position is (x, y), express The pixel value of the pixel whose coordinate position is (x, y), express The pixel value of the pixel whose middle coordinate position is (x, y).
在此具体实施例中,步骤②中{Lorg(x,y)}的结构图像和纹理图像的获取过程为:In this specific embodiment, the structure image of {L org (x, y)} in step ② and the texture image The acquisition process is:
②-1a、将{Lorg(x,y)}中当前待处理的像素点定义为当前像素点。②-1a. Define the current pixel point to be processed in {L org (x,y)} as the current pixel point.
②-2a、将当前像素点在{Lorg(x,y)}中的坐标位置记为p,将以当前像素点为中心的21×21邻域窗口内除当前像素点外的每个像素点定义为邻域像素点,将以当前像素点为中心的9×9邻域窗口构成的块定义为当前子块,并记为将以当前像素点为中心的21×21邻域窗口内的每个邻域像素点为中心的9×9邻域窗口构成的块均定义为邻域子块,将在以当前像素点为中心的21×21邻域窗口内的且以在{Lorg(x,y)}中坐标位置为q的邻域像素点为中心的9×9邻域窗口构成的邻域子块记为其中,p∈Ω,q∈Ω,在此Ω表示{Lorg(x,y)}中的所有像素点的坐标位置的集合,(x2,y2)表示当前子块中的像素点在当前子块中的坐标位置,1≤x2≤9,1≤y2≤9,表示当前子块中坐标位置为(x2,y2)的像素点的像素值,(x3,y3)表示中的像素点在中的坐标位置,1≤x3≤9,1≤y3≤9,表示中坐标位置为(x3,y3)的像素点的像素值。②-2a. Record the coordinate position of the current pixel point in {L org (x,y)} as p, and record each pixel in the 21×21 neighborhood window centered on the current pixel point except the current pixel point A point is defined as a neighborhood pixel point, and a block composed of a 9×9 neighborhood window centered on the current pixel point is defined as the current sub-block, and is recorded as A block composed of a 9×9 neighborhood window centered on each neighborhood pixel in a 21×21 neighborhood window centered on the current pixel is defined as a neighborhood sub-block, and will be centered on the current pixel The neighborhood sub-block composed of a 9×9 neighborhood window centered on the neighborhood pixel point whose coordinate position is q in {L org (x,y)} within the 21×21 neighborhood window of is denoted as Among them, p∈Ω, q∈Ω, where Ω represents the set of coordinate positions of all pixels in {L org (x,y)}, and (x 2 ,y 2 ) represents the current sub-block The pixels in the current sub-block Coordinate position in , 1≤x 2 ≤9, 1≤y 2 ≤9, represents the current subblock The pixel value of the pixel point whose middle coordinate position is (x 2 , y 2 ), (x 3 , y 3 ) means The pixels in Coordinate position in , 1≤x 3 ≤9, 1≤y 3 ≤9, express The pixel value of the pixel point whose middle coordinate position is (x 3 , y 3 ).
In step ②-2a above, for any pixel in the current sub-block, suppose its coordinate position in {Lorg(x,y)} is (x,y): if x&lt;1 and 1≤y≤H, the pixel is assigned the value of the pixel at (1,y) in {Lorg(x,y)}; if x&gt;W and 1≤y≤H, the value of the pixel at (W,y); if 1≤x≤W and y&lt;1, the value of the pixel at (x,1); if 1≤x≤W and y&gt;H, the value of the pixel at (x,H); if x&lt;1 and y&lt;1, the value of the pixel at (1,1); if x&gt;W and y&lt;1, the value of the pixel at (W,1); if x&lt;1 and y&gt;H, the value of the pixel at (1,H); and if x&gt;W and y&gt;H, the value of the pixel at (W,H). The same operation is applied to every neighborhood pixel, so that any pixel lying outside the image boundary takes the pixel value of its nearest boundary pixel. In other words, in step ②-2a, if a pixel inside the 9×9 window centered on the current pixel, a neighborhood pixel inside the 21×21 window centered on the current pixel, or a pixel inside the 9×9 window centered on any such neighborhood pixel falls outside the boundary of {Lorg(x,y)}, its pixel value is replaced by that of the nearest boundary pixel.
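The nearest-boundary replacement described above is equivalent to clamping coordinates into the image, or to padding the image by the window radius with edge replication. A minimal NumPy sketch (the array values are illustrative only):

```python
import numpy as np

def clamp_fetch(img, x, y):
    """Fetch img[y, x], clamping out-of-range coordinates to the border."""
    h, w = img.shape
    return img[min(max(y, 0), h - 1), min(max(x, 0), w - 1)]

img = np.arange(12, dtype=float).reshape(3, 4)

# Equivalent view: pad by the window radius with edge replication.
r = 4  # radius of a 9x9 window
padded = np.pad(img, r, mode="edge")

# Any out-of-range access on img matches the padded array.
assert padded[0, 0] == clamp_fetch(img, -r, -r) == img[0, 0]
assert padded[2 + r, 5 + r] == clamp_fetch(img, 5, 2) == img[2, 3]
```

Both views give identical results, so an implementation can pad once up front instead of clamping on every access.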
②-3a、获取当前子块中的每个像素点的特征矢量,将当前子块中坐标位置为(x2,y2)的像素点的特征矢量记为
②-4a、根据当前子块中的每个像素点的特征矢量,计算当前子块的协方差矩阵,记为
②-5a、对当前子块的协方差矩阵进行Cholesky分解,得到当前子块的Sigma特征集,记为
②-6a、采用与步骤②-3a至步骤②-5a相同的操作,获取以每个邻域像素点为中心的9×9邻域窗口构成的邻域子块的Sigma特征集,将的Sigma特征集记为的维数为7×15。②-6a. Use the same operation as step ②-3a to step ②-5a to obtain the Sigma feature set of the neighborhood sub-block composed of a 9×9 neighborhood window centered on each neighborhood pixel, and set The Sigma feature set is denoted as The dimension of is 7×15.
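The exact 7-dimensional per-pixel feature vector of steps ②-3a to ②-5a is not reproduced in this text; the sketch below assumes a common region-covariance choice, [x, y, I, |Ix|, |Iy|, |Ixx|, |Iyy|], and builds a Sigma feature set from the block mean and the Cholesky factor of the 7×7 covariance matrix. Taking the mean column together with the ± scaled Cholesky columns is one construction that yields the stated 7×15 dimension:

```python
import numpy as np

def block_sigma_set(block):
    """Sigma feature set of a 9x9 block: a 7x15 matrix built from the mean
    and the Cholesky factor of the 7x7 feature covariance matrix.
    The 7-D per-pixel feature vector here is an assumption:
    [x, y, I, |Ix|, |Iy|, |Ixx|, |Iyy|]."""
    h, w = block.shape
    ys, xs = np.mgrid[0:h, 0:w]
    gy, gx = np.gradient(block)
    gyy = np.gradient(gy, axis=0)
    gxx = np.gradient(gx, axis=1)
    feats = np.stack([xs, ys, block, np.abs(gx), np.abs(gy),
                      np.abs(gxx), np.abs(gyy)]).reshape(7, -1)  # 7 x 81
    mu = feats.mean(axis=1)
    cov = np.cov(feats) + 1e-8 * np.eye(7)  # small ridge keeps it positive definite
    L = np.linalg.cholesky(cov)             # cov = L @ L.T
    a = np.sqrt(7.0)                        # sigma-point scaling
    return np.hstack([mu[:, None], mu[:, None] + a * L, mu[:, None] - a * L])

rng = np.random.default_rng(0)
S = block_sigma_set(rng.random((9, 9)))
```

The Sigma set represents the block's covariance structure with a small fixed number of vectors, which makes block-to-block comparison cheap.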
②-7a、根据当前子块的Sigma特征集和以每个邻域像素点为中心的9×9邻域窗口构成的邻域子块的Sigma特征集,获取当前像素点的结构信息,记为
②-8a. Obtain the texture information of the current pixel from its structure information, as the pixel value Lorg(p) of the current pixel minus its structure information, where Lorg(p) denotes the pixel value of the current pixel.
②-9a、将{Lorg(x,y)}中下一个待处理的像素点作为当前像素点,然后返回步骤②-2a继续执行,直至{Lorg(x,y)}中的所有像素点处理完毕,得到{Lorg(x,y)}中的每个像素点的结构信息和纹理信息,由{Lorg(x,y)}中的所有像素点的结构信息构成{Lorg(x,y)}的结构图像,记为由{Lorg(x,y)}中的所有像素点的纹理信息构成{Lorg(x,y)}的纹理图像,记为 ②-9a. Set the next pixel to be processed in {L org (x,y)} as the current pixel, and then return to step ②-2a to continue until all pixels in {L org (x,y)} Points are processed, and the structure information and texture information of each pixel in {L org (x, y)} are obtained, which is composed of the structure information of all pixels in {L org (x, y)} {L org ( The structural image of x,y)}, denoted as The texture image of {L org (x, y)} is composed of the texture information of all pixels in {L org (x, y)}, recorded as
Using the same operations as in steps ②-1a to ②-9a for obtaining the structure image and the texture image of {Lorg(x,y)}, the structure image and texture image of {Rorg(x,y)}, of {Ldis(x,y)} and of {Rdis(x,y)} are obtained. That is, in step ②, the acquisition process of the structure image and the texture image of {Rorg(x,y)} is:
②-1b、将{Rorg(x,y)}中当前待处理的像素点定义为当前像素点。②-1b. Define the current pixel to be processed in {R org (x, y)} as the current pixel.
②-2b、将当前像素点在{Rorg(x,y)}中的坐标位置记为p,将以当前像素点为中心的21×21邻域窗口内除当前像素点外的每个像素点定义为邻域像素点,将以当前像素点为中心的9×9邻域窗口构成的块定义为当前子块,并记为将以当前像素点为中心的21×21邻域窗口内的每个邻域像素点为中心的9×9邻域窗口构成的块均定义为邻域子块,将在以当前像素点为中心的21×21邻域窗口内的且以在{Rorg(x,y)}中坐标位置为q的邻域像素点为中心的9×9邻域窗口构成的邻域子块记为其中,p∈Ω,q∈Ω,在此Ω表示{Rorg(x,y)}中的所有像素点的坐标位置的集合,(x2,y2)表示当前子块中的像素点在当前子块中的坐标位置,1≤x2≤9,1≤y2≤9,表示当前子块中坐标位置为(x2,y2)的像素点的像素值,(x3,y3)表示中的像素点在中的坐标位置,1≤x3≤9,1≤y3≤9,表示中坐标位置为(x3,y3)的像素点的像素值。②-2b. Record the coordinate position of the current pixel in {R org (x, y)} as p, and record each pixel in the 21×21 neighborhood window centered on the current pixel except the current pixel A point is defined as a neighborhood pixel point, and a block composed of a 9×9 neighborhood window centered on the current pixel point is defined as the current sub-block, and is recorded as A block composed of a 9×9 neighborhood window centered on each neighborhood pixel in a 21×21 neighborhood window centered on the current pixel is defined as a neighborhood sub-block, and will be centered on the current pixel The neighborhood sub-block composed of a 9×9 neighborhood window centered at the neighborhood pixel point whose coordinate position is q in {R org (x,y)} within the 21×21 neighborhood window of is denoted as Among them, p∈Ω, q∈Ω, where Ω represents the set of coordinate positions of all pixels in {R org (x,y)}, and (x 2 ,y 2 ) represents the current sub-block The pixels in the current sub-block Coordinate position in , 1≤x 2 ≤9, 1≤y 2 ≤9, represents the current subblock The pixel value of the pixel point whose middle coordinate position is (x 2 , y 2 ), (x 3 , y 3 ) means The pixels in Coordinate position in , 1≤x 3 ≤9, 1≤y 3 ≤9, express The pixel value of the pixel point whose middle coordinate position is (x 3 , y 3 ).
In step ②-2b above, if a pixel inside the 9×9 window centered on the current pixel, a neighborhood pixel inside the 21×21 window centered on the current pixel, or a pixel inside the 9×9 window centered on any such neighborhood pixel falls outside the boundary of {Rorg(x,y)}, its pixel value is replaced by that of the nearest boundary pixel.
②-3b、获取当前子块中的每个像素点的特征矢量,将当前子块中坐标位置为(x2,y2)的像素点的特征矢量记为
②-4b、根据当前子块中的每个像素点的特征矢量,计算当前子块的协方差矩阵,记为
②-5b、对当前子块的协方差矩阵进行Cholesky分解,得到当前子块的Sigma特征集,记为
②-6b、采用与步骤②-3b至步骤②-5b相同的操作,获取以每个邻域像素点为中心的9×9邻域窗口构成的邻域子块的Sigma特征集,将的Sigma特征集记为的维数为7×15。②-6b. Use the same operation as step ②-3b to step ②-5b to obtain the Sigma feature set of the neighborhood sub-block composed of a 9×9 neighborhood window centered on each neighborhood pixel, and set The Sigma feature set is denoted as The dimension of is 7×15.
②-7b、根据当前子块的Sigma特征集和以每个邻域像素点为中心的9×9邻域窗口构成的邻域子块的Sigma特征集,获取当前像素点的结构信息,记为
②-8b. Obtain the texture information of the current pixel from its structure information, as the pixel value Rorg(p) of the current pixel minus its structure information, where Rorg(p) denotes the pixel value of the current pixel.
②-9b、将{Rorg(x,y)}中下一个待处理的像素点作为当前像素点,然后返回步骤②-2b继续执行,直至{Rorg(x,y)}中的所有像素点处理完毕,得到{Rorg(x,y)}中的每个像素点的结构信息和纹理信息,由{Rorg(x,y)}中的所有像素点的结构信息构成{Rorg(x,y)}的结构图像,记为由{Rorg(x,y)}中的所有像素点的纹理信息构成{Rorg(x,y)}的纹理图像,记为 ②-9b. Set the next pixel to be processed in {R org (x,y)} as the current pixel, and then return to step ②-2b to continue until all pixels in {R org (x,y)} Point processing is completed, and the structure information and texture information of each pixel in {R org (x, y)} are obtained, which is composed of the structure information of all pixels in {R org (x, y)} {R org ( The structural image of x,y)}, denoted as The texture image of {R org (x, y)} is composed of the texture information of all pixels in {R org (x, y)}, recorded as
步骤②中{Ldis(x,y)}的结构图像和纹理图像的获取过程为:Structural image of {L dis (x,y)} in step ② and the texture image The acquisition process is:
②-1c、将{Ldis(x,y)}中当前待处理的像素点定义为当前像素点。②-1c. Define the current pixel to be processed in {L dis (x, y)} as the current pixel.
②-2c、将当前像素点在{Ldis(x,y)}中的坐标位置记为p,将以当前像素点为中心的21×21邻域窗口内除当前像素点外的每个像素点定义为邻域像素点,将以当前像素点为中心的9×9邻域窗口构成的块定义为当前子块,并记为将以当前像素点为中心的21×21邻域窗口内的每个邻域像素点为中心的9×9邻域窗口构成的块均定义为邻域子块,将在以当前像素点为中心的21×21邻域窗口内的且以在{Ldis(x,y)}中坐标位置为q的邻域像素点为中心的9×9邻域窗口构成的邻域子块记为其中,p∈Ω,q∈Ω,在此Ω表示{Ldis(x,y)}中的所有像素点的坐标位置的集合,(x2,y2)表示当前子块中的像素点在当前子块中的坐标位置,1≤x2≤9,1≤y2≤9,表示当前子块中坐标位置为(x2,y2)的像素点的像素值,(x3,y3)表示中的像素点在中的坐标位置,1≤x3≤9,1≤y3≤9,表示中坐标位置为(x3,y3)的像素点的像素值。②-2c. Record the coordinate position of the current pixel point in {L dis (x,y)} as p, and record each pixel in the 21×21 neighborhood window centered on the current pixel point except the current pixel point A point is defined as a neighborhood pixel point, and a block composed of a 9×9 neighborhood window centered on the current pixel point is defined as the current sub-block, and is recorded as A block composed of a 9×9 neighborhood window centered on each neighborhood pixel in a 21×21 neighborhood window centered on the current pixel is defined as a neighborhood sub-block, and will be centered on the current pixel In the 21×21 neighborhood window of {L dis (x,y)}, the neighborhood sub-block composed of a 9×9 neighborhood window centered on the neighborhood pixel whose coordinate position is q in {L dis (x,y)} is denoted as Among them, p∈Ω, q∈Ω, where Ω represents the set of coordinate positions of all pixels in {L dis (x,y)}, and (x 2 ,y 2 ) represents the current sub-block The pixels in the current sub-block Coordinate position in , 1≤x 2 ≤9, 1≤y 2 ≤9, represents the current subblock The pixel value of the pixel point whose middle coordinate position is (x 2 , y 2 ), (x 3 , y 3 ) means The pixels in Coordinate position in , 1≤x 3 ≤9, 1≤y 3 ≤9, express The pixel value of the pixel point whose middle coordinate position is (x 3 , y 3 ).
In step ②-2c above, if a pixel inside the 9×9 window centered on the current pixel, a neighborhood pixel inside the 21×21 window centered on the current pixel, or a pixel inside the 9×9 window centered on any such neighborhood pixel falls outside the boundary of {Ldis(x,y)}, its pixel value is replaced by that of the nearest boundary pixel.
②-3c、获取当前子块中的每个像素点的特征矢量,将当前子块中坐标位置为(x2,y2)的像素点的特征矢量记为
②-4c、根据当前子块中的每个像素点的特征矢量,计算当前子块的协方差矩阵,记为
②-5c、对当前子块的协方差矩阵进行Cholesky分解,得到当前子块的Sigma特征集,记为
②-6c、采用与步骤②-3c至步骤②-5c相同的操作,获取以每个邻域像素点为中心的9×9邻域窗口构成的邻域子块的Sigma特征集,将的Sigma特征集记为的维数为7×15。②-6c. Use the same operation as step ②-3c to step ②-5c to obtain the Sigma feature set of the neighborhood sub-block composed of a 9×9 neighborhood window centered on each neighborhood pixel, and set The Sigma feature set is denoted as The dimension of is 7×15.
②-7c、根据当前子块的Sigma特征集和以每个邻域像素点为中心的9×9邻域窗口构成的邻域子块的Sigma特征集,获取当前像素点的结构信息,记为
②-8c、根据当前像素点的结构信息获取当前像素点的纹理信息,记为
②-9c、将{Ldis(x,y)}中下一个待处理的像素点作为当前像素点,然后返回步骤②-2c继续执行,直至{Ldis(x,y)}中的所有像素点处理完毕,得到{Ldis(x,y)}中的每个像素点的结构信息和纹理信息,由{Ldis(x,y)}中的所有像素点的结构信息构成{Ldis(x,y)}的结构图像,记为由{Ldis(x,y)}中的所有像素点的纹理信息构成{Ldis(x,y)}的纹理图像,记为 ②-9c. Set the next pixel to be processed in {L dis (x,y)} as the current pixel, and then return to step ②-2c to continue until all pixels in {L dis (x,y)} Point processing is completed, and the structure information and texture information of each pixel in {L dis (x, y)} are obtained, which is composed of the structure information of all pixels in {L dis (x, y)} {L dis ( The structural image of x,y)}, denoted as The texture image of {L dis (x, y)} is composed of the texture information of all pixels in {L dis (x, y)}, denoted as
步骤②中{Rdis(x,y)}的结构图像和纹理图像的获取过程为:Structural image of {R dis (x,y)} in step ② and the texture image The acquisition process is:
②-1d、将{Rdis(x,y)}中当前待处理的像素点定义为当前像素点。②-1d. Define the current pixel point to be processed in {R dis (x, y)} as the current pixel point.
②-2d、将当前像素点在{Rdis(x,y)}中的坐标位置记为p,将以当前像素点为中心的21×21邻域窗口内除当前像素点外的每个像素点定义为邻域像素点,将以当前像素点为中心的9×9邻域窗口构成的块定义为当前子块,并记为将以当前像素点为中心的21×21邻域窗口内的每个邻域像素点为中心的9×9邻域窗口构成的块均定义为邻域子块,将在以当前像素点为中心的21×21邻域窗口内的且以在{Rdis(x,y)}中坐标位置为q的邻域像素点为中心的9×9邻域窗口构成的邻域子块记为其中,p∈Ω,q∈Ω,在此Ω表示{Rdis(x,y)}中的所有像素点的坐标位置的集合,(x2,y2)表示当前子块中的像素点在当前子块中的坐标位置,1≤x2≤9,1≤y2≤9,表示当前子块中坐标位置为(x2,y2)的像素点的像素值,(x3,y3)表示中的像素点在中的坐标位置,1≤x3≤9,1≤y3≤9,表示中坐标位置为(x3,y3)的像素点的像素值。②-2d. Record the coordinate position of the current pixel in {R dis (x,y)} as p, and record each pixel in the 21×21 neighborhood window centered on the current pixel except the current pixel A point is defined as a neighborhood pixel point, and a block composed of a 9×9 neighborhood window centered on the current pixel point is defined as the current sub-block, and is recorded as A block composed of a 9×9 neighborhood window centered on each neighborhood pixel in a 21×21 neighborhood window centered on the current pixel is defined as a neighborhood sub-block, and will be centered on the current pixel In the 21×21 neighborhood window of {R dis (x,y)}, the neighborhood sub-block composed of a 9×9 neighborhood window centered on the neighborhood pixel whose coordinate position is q in {R dis (x,y)} is denoted as Among them, p∈Ω, q∈Ω, where Ω represents the set of coordinate positions of all pixels in {R dis (x,y)}, and (x 2 ,y 2 ) represents the current sub-block The pixels in the current sub-block Coordinate position in , 1≤x 2 ≤9, 1≤y 2 ≤9, represents the current subblock The pixel value of the pixel point whose middle coordinate position is (x 2 , y 2 ), (x 3 , y 3 ) means The pixels in Coordinate position in , 1≤x 3 ≤9, 1≤y 3 ≤9, express The pixel value of the pixel point whose middle coordinate position is (x 3 , y 3 ).
In step ②-2d above, if a pixel inside the 9×9 window centered on the current pixel, a neighborhood pixel inside the 21×21 window centered on the current pixel, or a pixel inside the 9×9 window centered on any such neighborhood pixel falls outside the boundary of {Rdis(x,y)}, its pixel value is replaced by that of the nearest boundary pixel.
②-3d、获取当前子块中的每个像素点的特征矢量,将当前子块中坐标位置为(x2,y2)的像素点的特征矢量记为
②-4d、根据当前子块中的每个像素点的特征矢量,计算当前子块的协方差矩阵,记为
②-5d、对当前子块的协方差矩阵进行Cholesky分解,得到当前子块的Sigma特征集,记为
②-6d、采用与步骤②-3d至步骤②-5d相同的操作,获取以每个邻域像素点为中心的9×9邻域窗口构成的邻域子块的Sigma特征集,将的Sigma特征集记为的维数为7×15。②-6d. Using the same operation as step ②-3d to step ②-5d, obtain the Sigma feature set of the neighborhood sub-block composed of a 9×9 neighborhood window centered on each neighborhood pixel, and set The Sigma feature set is denoted as The dimension of is 7×15.
②-7d、根据当前子块的Sigma特征集和以每个邻域像素点为中心的9×9邻域窗口构成的邻域子块的Sigma特征集,获取当前像素点的结构信息,记为
②-8d、根据当前像素点的结构信息获取当前像素点的纹理信息,记为
②-9d、将{Rdis(x,y)}中下一个待处理的像素点作为当前像素点,然后返回步骤②-2d继续执行,直至{Rdis(x,y)}中的所有像素点处理完毕,得到{Rdis(x,y)}中的每个像素点的结构信息和纹理信息,由{Rdis(x,y)}中的所有像素点的结构信息构成{Rdis(x,y)}的结构图像,记为由{Rdis(x,y)}中的所有像素点的纹理信息构成{Rdis(x,y)}的纹理图像,记为 ②-9d. Set the next pixel to be processed in {R dis (x, y)} as the current pixel, and then return to step ②-2d to continue until all pixels in {R dis (x, y)} Point processing is completed, and the structure information and texture information of each pixel in {R dis (x, y)} are obtained, which is composed of the structure information of all pixels in {R dis (x, y)} {R dis ( The structural image of x,y)}, denoted as The texture image of {R dis (x, y)} is composed of the texture information of all pixels in {R dis (x, y)}, recorded as
③与原始图像相比,结构图像由于将纹理等细节信息从原始图像中分离出,使结构信息更加稳定,因此本发明方法通过计算中的每个像素点与中对应像素点之间的梯度相似性,将中坐标位置为(x,y)的像素点与中坐标位置为(x,y)的像素点之间的梯度相似性记为
同样,计算中的每个像素点与中对应像素点之间的梯度相似性,将中坐标位置为(x,y)的像素点与中坐标位置为(x,y)的像素点之间的梯度相似性记为
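The gradient-similarity formula of step ③ is not reproduced in this extraction; a common pixel-wise form (assumed here) compares Sobel gradient magnitudes g1 and g2 as (2·g1·g2 + C)/(g1² + g2² + C), which equals 1 wherever the two gradient fields agree:

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def filter2_same(img, k):
    """3x3 'same' cross-correlation with edge replication."""
    p = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def gradient_similarity(a, b, c=1e-4):
    """Pixel-wise similarity of Sobel gradient magnitudes (c avoids 0/0)."""
    ga = np.hypot(filter2_same(a, SOBEL_X), filter2_same(a, SOBEL_Y))
    gb = np.hypot(filter2_same(b, SOBEL_X), filter2_same(b, SOBEL_Y))
    return (2 * ga * gb + c) / (ga ** 2 + gb ** 2 + c)

img = np.random.default_rng(1).random((16, 16))
assert np.allclose(gradient_similarity(img, img), 1.0)  # identical images -> 1
```

A constant brightness shift leaves the gradients, and hence the similarity map, unchanged, which is why gradient comparison targets structural rather than luminance distortion.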
④ Since mean and standard-deviation information can well characterize changes in image detail, the method of the present invention obtains the structural similarity between each sub-block of size 8×8 in the texture image of {Lorg(x,y)} and the corresponding sub-block of size 8×8 in the texture image of {Ldis(x,y)}, and thereby calculates the image quality objective evaluation prediction value of the texture image of {Ldis(x,y)}.
在此具体实施例中,步骤④中的图像质量客观评价预测值的获取过程为:In this specific embodiment, in step ④ The predictive value of the image quality objective evaluation The acquisition process is:
④-1a、分别将和划分成个互不重叠的尺寸大小为8×8的子块,将中当前待处理的第k个子块定义为当前第一子块,将中当前待处理的第k个子块定义为当前第二子块,其中,k的初始值为1。④-1a, respectively and divided into Non-overlapping sub-blocks of size 8×8, the The kth sub-block currently to be processed in is defined as the current first sub-block, and the The kth sub-block currently to be processed in is defined as the current second sub-block, where, The initial value of k is 1.
④-2a、将当前第一子块记为将当前第二子块记为其中,(x4,y4)表示和中的像素点的坐标位置,1≤x4≤8,1≤y4≤8,表示中坐标位置为(x4,y4)的像素点的像素值,表示中坐标位置为(x4,y4)的像素点的像素值。④-2a. Record the current first sub-block as Record the current second sub-block as Among them, (x 4 ,y 4 ) means and The coordinate position of the pixel in , 1≤x 4 ≤8, 1≤y 4 ≤8, express The pixel value of the pixel point whose middle coordinate position is (x 4 , y 4 ), express The pixel value of the pixel point whose middle coordinate position is (x 4 , y 4 ).
④-3a、计算当前第一子块的均值和标准差,对应记为和
同样,计算当前第二子块的均值和标准差,对应记为和
④-4a、计算当前第一子块与当前第二子块之间的结构相似度,记为
④-5a. Let k = k + 1 (where the "=" in k = k + 1 is the assignment operator), take the next sub-block to be processed as the current first sub-block and the next corresponding sub-block as the current second sub-block, and then return to step ④-2a and continue until all sub-blocks have been processed, obtaining the structural similarity between each sub-block and its corresponding sub-block.
④-6a、根据中的每个子块与中对应子块之间的结构相似度,计算的图像质量客观评价预测值,记为 ④-6a, according to Each subblock in The structural similarity between the corresponding sub-blocks in the calculation The predicted value of objective evaluation of image quality is denoted as
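Steps ④-1a to ④-6a can be sketched as an SSIM-style comparison that uses only the block means and standard deviations, averaged over all 8×8 blocks. The constants C1 and C2 below are the usual SSIM stabilizers and are an assumption, as the patent's exact formula is not reproduced here:

```python
import numpy as np

def block_similarity_score(a, b, bs=8, C1=6.5025, C2=58.5225):
    """Average SSIM-style similarity of co-located bs x bs blocks,
    using only block means and standard deviations."""
    h, w = a.shape
    scores = []
    for y in range(0, h - bs + 1, bs):
        for x in range(0, w - bs + 1, bs):
            pa = a[y:y + bs, x:x + bs]
            pb = b[y:y + bs, x:x + bs]
            mu1, mu2 = pa.mean(), pb.mean()
            s1, s2 = pa.std(), pb.std()
            lum = (2 * mu1 * mu2 + C1) / (mu1 ** 2 + mu2 ** 2 + C1)
            con = (2 * s1 * s2 + C2) / (s1 ** 2 + s2 ** 2 + C2)
            scores.append(lum * con)
    return float(np.mean(scores))

img = np.random.default_rng(2).random((32, 32)) * 255
assert abs(block_similarity_score(img, img) - 1.0) < 1e-12
```

The score is 1 only for identical block statistics; any shift in a block's mean or spread pulls the average below 1.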
同样,通过获取中的每个尺寸大小为8×8的子块与中对应尺寸大小为8×8的子块之间的结构相似度,计算得到的图像质量客观评价预测值,记为 Likewise, by getting Each sub-block of size 8×8 in The structural similarity between the sub-blocks corresponding to the size of 8×8 in , is calculated as The predicted value of the objective evaluation of image quality is denoted as
在此具体实施例中,所述的步骤④中的图像质量客观评价预测值的获取过程为:In this specific embodiment, in the described step ④ The predictive value of the image quality objective evaluation The acquisition process is:
④-1b、分别将和划分成个互不重叠的尺寸大小为8×8的子块,将中当前待处理的第k个子块定义为当前第一子块,将中当前待处理的第k个子块定义为当前第二子块,其中,k的初始值为1。④-1b, respectively and divided into Non-overlapping sub-blocks of size 8×8, the The kth sub-block currently to be processed in is defined as the current first sub-block, and the The kth sub-block currently to be processed in is defined as the current second sub-block, where, The initial value of k is 1.
④-2b、将当前第一子块记为将当前第二子块记为其中,(x4,y4)表示和中的像素点的坐标位置,1≤x4≤8,1≤y4≤8,表示中坐标位置为(x4,y4)的像素点的像素值,表示中坐标位置为(x4,y4)的像素点的像素值。④-2b. Record the current first sub-block as Record the current second sub-block as Among them, (x 4 ,y 4 ) means and The coordinate position of the pixel in , 1≤x 4 ≤8, 1≤y 4 ≤8, express The pixel value of the pixel point whose middle coordinate position is (x 4 , y 4 ), express The pixel value of the pixel point whose middle coordinate position is (x 4 , y 4 ).
④-3b、计算当前第一子块的均值和标准差,对应记为和
同样,计算当前第二子块的均值和标准差,对应记为和
④-4b、计算当前第一子块与当前第二子块之间的结构相似度,记为
④-5b. Let k = k + 1 (where the "=" in k = k + 1 is the assignment operator), take the next sub-block to be processed as the current first sub-block and the next corresponding sub-block as the current second sub-block, and then return to step ④-2b and continue until all sub-blocks have been processed, obtaining the structural similarity between each sub-block and its corresponding sub-block.
④-6b、根据中的每个子块与中对应子块之间的结构相似度,计算的图像质量客观评价预测值,记为 ④-6b, according to Each subblock in The structural similarity between the corresponding sub-blocks in the calculation The predicted value of the objective evaluation of image quality is denoted as
⑤ The image quality objective evaluation prediction values of the structure images of the two views are fused to obtain the image quality objective evaluation prediction value of the structure image of Sdis, denoted as Qstr, where ws is the fusion weight; in this embodiment, ws = 0.980 is taken for the Ningbo University stereoscopic image database and ws = 0.629 for the LIVE stereoscopic image database.
Similarly, the image quality objective evaluation prediction values of the texture images of the two views are fused to obtain the image quality objective evaluation prediction value of the texture image of Sdis, denoted as Qtex, where wt is the fusion weight; in this embodiment, wt = 0.888 is taken for the Ningbo University stereoscopic image database and wt = 0.503 for the LIVE stereoscopic image database.
⑥ Qstr and Qtex are fused to obtain the image quality objective evaluation prediction value of Sdis, denoted as Q, Q = w×Qstr + (1−w)×Qtex, where w is the weight of Qstr; in this embodiment, w = 0.882 is taken for the Ningbo University stereoscopic image database and w = 0.838 for the LIVE stereoscopic image database.
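The weighted fusions of steps ⑤ and ⑥ can be sketched directly; the weights are those quoted above for the Ningbo University database, and the four per-view input scores are placeholder values:

```python
def fuse(left, right, w):
    """Weighted combination of two scores: w * left + (1 - w) * right."""
    return w * left + (1 - w) * right

# Assumed example inputs (structure/texture scores of the two views).
QL_str, QR_str = 0.92, 0.90
QL_tex, QR_tex = 0.88, 0.85

Q_str = fuse(QL_str, QR_str, 0.980)  # step 5: structure-image fusion
Q_tex = fuse(QL_tex, QR_tex, 0.888)  # step 5: texture-image fusion
Q = fuse(Q_str, Q_tex, 0.882)        # step 6: overall prediction value
```

Note that the large structure weight (0.882) makes the overall score dominated by the structure-image term, consistent with the method's emphasis on structural distortion.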
Here, four objective criteria commonly used to assess image quality evaluation methods are adopted as evaluation indicators: the Pearson linear correlation coefficient (PLCC) under a nonlinear regression condition, the Spearman rank-order correlation coefficient (SROCC), the Kendall rank-order correlation coefficient (KROCC), and the root mean squared error (RMSE). PLCC and RMSE reflect the accuracy of the objective evaluation results for the distorted stereoscopic images, while SROCC and KROCC reflect their monotonicity.
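The four criteria can be implemented in a few lines of NumPy; these are simplified versions (average ranks for ties in SROCC, and the O(n²) tau-a form for KROCC), not replacements for full library routines:

```python
import numpy as np

def plcc(x, y):
    """Pearson linear correlation coefficient."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc * yc).sum() / np.sqrt((xc ** 2).sum() * (yc ** 2).sum()))

def _ranks(x):
    """Ranks starting at 1, with ties given their average rank."""
    x = np.asarray(x, float)
    ranks = np.empty(len(x))
    ranks[np.argsort(x)] = np.arange(1, len(x) + 1)
    for v in np.unique(x):
        m = x == v
        ranks[m] = ranks[m].mean()
    return ranks

def srocc(x, y):
    """Spearman rank-order correlation: Pearson correlation of the ranks."""
    return plcc(_ranks(x), _ranks(y))

def krocc(x, y):
    """Kendall rank-order correlation (tau-a, O(n^2) pair comparison)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n, s = len(x), 0.0
    for i in range(n):
        for j in range(i + 1, n):
            s += np.sign(x[i] - x[j]) * np.sign(y[i] - y[j])
    return float(s / (n * (n - 1) / 2))

def rmse(x, y):
    """Root mean squared error."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float(np.sqrt(((x - y) ** 2).mean()))
```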
利用本发明方法计算宁波大学立体图像库中的每幅失真的立体图像的图像质量客观评价预测值和LIVE立体图像库中的每幅失真的立体图像的图像质量客观评价预测值,再利用现有的主观评价方法获得宁波大学立体图像库中的每幅失真的立体图像的平均主观评分差值和LIVE立体图像库中的每幅失真的立体图像的平均主观评分差值。将按本发明方法计算得到的失真的立体图像的图像质量客观评价预测值做五参数Logistic函数非线性拟合,PLCC、SROCC和KROCC值越高,RMSE值越低说明客观评价方法与平均主观评分差值相关性越好。表1、表2、表3和表4给出了采用本发明方法得到的失真的立体图像的图像质量客观评价预测值与平均主观评分差值之间的Pearson相关系数、Spearman相关系数、Kendall相关系数和均方误差。从表1、表2、表3和表4中可以看出,采用本发明方法得到的失真的立体图像的最终的图像质量客观评价预测值与平均主观评分差值之间的相关性是很高的,表明了客观评价结果与人眼主观感知的结果较为一致,足以说明本发明方法的有效性。Utilize the method of the present invention to calculate the image quality objective evaluation prediction value of each distorted stereoscopic image in the stereoscopic image database of Ningbo University and the image quality objective evaluation prediction value of each distorted stereoscopic image in the LIVE stereoscopic image database, and then use the existing The subjective evaluation method obtained the average subjective score difference of each distorted stereo image in the stereo image database of Ningbo University and the average subjective score difference of each distorted stereo image in the LIVE stereo image database. The five-parameter Logistic function nonlinear fitting is done on the image quality objective evaluation prediction value of the distorted stereoscopic image calculated by the method of the present invention, the higher the PLCC, SROCC and KROCC values, the lower the RMSE value shows that the objective evaluation method and the average subjective rating The better the difference correlation. Table 1, table 2, table 3 and table 4 provide the Pearson correlation coefficient, Spearman correlation coefficient, Kendall correlation between the image quality objective evaluation prediction value and the average subjective rating difference of the distorted stereoscopic image obtained by the method of the present invention Coefficient and mean square error. 
As can be seen from Tables 1, 2, 3 and 4, the correlation between the final objective image quality prediction values obtained by the method of the present invention and the mean subjective score differences is very high, indicating that the objective evaluation results agree closely with subjective human perception and demonstrating the effectiveness of the method of the present invention.
Fig. 2 shows the scatter plot of the objective image quality prediction value versus the mean subjective score difference for each distorted stereoscopic image in the Ningbo University stereoscopic image database obtained by the method of the present invention, and Fig. 3 shows the corresponding scatter plot for the LIVE stereoscopic image database; the more concentrated the scatter points, the better the consistency between the objective evaluation results and subjective perception. As can be seen from Fig. 2 and Fig. 3, the scatter plots obtained by the method of the present invention are relatively concentrated and agree well with the subjective evaluation data.
Table 1 Comparison of the Pearson correlation coefficients between the objective image quality prediction values of the distorted stereoscopic images obtained by the method of the present invention and the mean subjective score differences
Table 2 Comparison of the Spearman correlation coefficients between the objective image quality prediction values of the distorted stereoscopic images obtained by the method of the present invention and the mean subjective score differences
Table 3 Comparison of the Kendall correlation coefficients between the objective image quality prediction values of the distorted stereoscopic images obtained by the method of the present invention and the mean subjective score differences
Table 4 Comparison of the root mean squared errors between the objective image quality prediction values of the distorted stereoscopic images obtained by the method of the present invention and the mean subjective score differences
Claims (4)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410105777.4A CN103903259A (en) | 2014-03-20 | 2014-03-20 | Objective three-dimensional image quality evaluation method based on structure and texture separation |
Publications (1)
Publication Number | Publication Date |
---|---|
CN103903259A true CN103903259A (en) | 2014-07-02 |
Family
ID=50994566
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410105777.4A Pending CN103903259A (en) | 2014-03-20 | 2014-03-20 | Objective three-dimensional image quality evaluation method based on structure and texture separation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103903259A (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000278710A (en) * | 1999-03-26 | 2000-10-06 | Ricoh Co Ltd | Device for evaluating binocular stereoscopic vision picture |
CN102075786A (en) * | 2011-01-19 | 2011-05-25 | Ningbo University | Method for objectively evaluating image quality
CN102142145A (en) * | 2011-03-22 | 2011-08-03 | Ningbo University | Image quality objective evaluation method based on human eye visual characteristics
CN102209257A (en) * | 2011-06-17 | 2011-10-05 | Ningbo University | Stereo image quality objective evaluation method
CN102333233A (en) * | 2011-09-23 | 2012-01-25 | Ningbo University | An Objective Evaluation Method of Stereoscopic Image Quality Based on Visual Perception
CN102521825A (en) * | 2011-11-16 | 2012-06-27 | Ningbo University | Three-dimensional image quality objective evaluation method based on zero watermark
2014-03-20: application CN201410105777.4A (CN) filed, published as CN103903259A (en), status: Pending
Non-Patent Citations (5)
Title |
---|
KEMENG LI et al.: "Objective quality assessment for stereoscopic images based on structure-texture decomposition", 《WSEAS TRANSACTIONS ON COMPUTERS》, 31 January 2014 (2014-01-31) *
L. KARACAN et al.: "Structure-preserving image smoothing via region covariances", 《ACM TRANSACTIONS ON GRAPHICS》, vol. 32, no. 6, 1 November 2013 (2013-11-01), XP058033898, DOI: doi:10.1145/2508363.2508403 *
M. SOLH et al.: "MIQM: a multicamera image quality measure", 《IEEE TRANSACTIONS ON IMAGE PROCESSING》, vol. 21, no. 9, 22 May 2012 (2012-05-22) *
WUFENG XUE et al.: "Gradient Magnitude Similarity Deviation: A Highly Efficient Perceptual Image Quality Index", 《IEEE TRANSACTIONS ON IMAGE PROCESSING》, vol. 23, no. 2, 3 December 2013 (2013-12-03) *
JIN XIN et al.: "Adaptive image quality assessment based on structural similarity", 《JOURNAL OF OPTOELECTRONICS·LASER》, vol. 25, no. 2, 28 February 2014 (2014-02-28) *
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105931257B (en) * | 2016-06-12 | 2018-08-31 | Xidian University | SAR image quality evaluation method based on texture features and structural similarity
CN106780432A (en) * | 2016-11-14 | 2017-05-31 | Zhejiang University of Science and Technology | An objective evaluation method for quality of stereo images based on sparse feature similarity
CN106780432B (en) * | 2016-11-14 | 2019-05-28 | Zhejiang University of Science and Technology | An objective evaluation method for quality of stereo images based on sparse feature similarity
CN109887023A (en) * | 2019-01-11 | 2019-06-14 | Hangzhou Dianzi University | A binocular fusion stereo image quality evaluation method based on weighted gradient magnitude
CN110363753A (en) * | 2019-07-11 | 2019-10-22 | Beijing ByteDance Network Technology Co., Ltd. | Image quality assessment method, apparatus and electronic equipment
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN102333233B (en) | Stereo image quality objective evaluation method based on visual perception | |
CN103581661B (en) | Method for evaluating visual comfort degree of three-dimensional image | |
CN104036501B (en) | An objective evaluation method for quality of stereo images based on sparse representation | |
CN102209257B (en) | Stereo image quality objective evaluation method | |
CN104581143A (en) | Reference-free three-dimensional picture quality objective evaluation method based on machine learning | |
CN104394403B (en) | An objective evaluation method for stereoscopic video quality oriented to compression artifacts | |
CN105282543B (en) | Blind (no-reference) three-dimensional image quality objective evaluation method based on three-dimensional visual perception | |
CN103413298B (en) | An objective evaluation method for quality of stereo images based on visual characteristics | |
CN103136748B (en) | An objective evaluation method for quality of stereo images based on feature maps | |
CN104036502B (en) | A no-reference quality evaluation method for blur-distorted stereo images | |
CN102843572B (en) | Phase-based stereo image quality objective evaluation method | |
CN104408716A (en) | Three-dimensional image quality objective evaluation method based on visual fidelity | |
CN104902268B (en) | No-reference three-dimensional image objective quality evaluation method based on local ternary patterns | |
CN102903107A (en) | Three-dimensional picture quality objective evaluation method based on feature fusion | |
CN106530282A (en) | Spatial feature-based non-reference three-dimensional image quality objective assessment method | |
CN105357519A (en) | No-reference stereo image quality objective evaluation method based on self-similarity feature | |
CN105654465A (en) | Stereo image quality evaluation method through parallax compensation and inter-viewpoint filtering | |
CN103200420B (en) | Three-dimensional picture quality objective evaluation method based on three-dimensional visual attention | |
CN106651835A (en) | Entropy-based double-viewpoint reference-free objective stereo-image quality evaluation method | |
CN103903259A (en) | Objective three-dimensional image quality evaluation method based on structure and texture separation | |
CN107360416A (en) | Stereo image quality evaluation method based on local multivariate Gaussian description | |
CN102999912B (en) | A kind of objective evaluation method for quality of stereo images based on distortion map | |
CN105321175B (en) | An Objective Evaluation Method of Stereo Image Quality Based on Sparse Representation of Structural Texture | |
CN103745457B (en) | A kind of three-dimensional image objective quality evaluation method | |
CN102999911B (en) | Three-dimensional image quality objective evaluation method based on energy diagrams |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication |
Application publication date: 20140702 |
|