CN103903259A - Objective three-dimensional image quality evaluation method based on structure and texture separation - Google Patents


Info

Publication number: CN103903259A
Application number: CN201410105777.4A
Authority: CN (China)
Original language: Chinese (zh)
Legal status: Pending
Inventors: 邵枫 (Shao Feng), 李柯蒙 (Li Kemeng), 李福翠 (Li Fucui)
Original and current assignee: Ningbo University
Application filed by Ningbo University; priority to CN201410105777.4A
Classification: Image Analysis (AREA)

Abstract

The invention discloses an objective stereoscopic image quality evaluation method based on structure-texture separation. First, structure-texture separation is applied to the left-viewpoint and right-viewpoint images of the original undistorted stereoscopic image and of the distorted stereoscopic image to be evaluated, yielding a structure image and a texture image for each. Gradient similarity is then used to evaluate the structure images of the left- and right-viewpoint images, and structural similarity is used to evaluate the corresponding texture images; the results are fused into an objective image quality prediction value for the distorted stereoscopic image to be evaluated. The advantage is that the structure and texture images obtained by the decomposition characterize well the respective influence of structural and textural information on image quality, so the evaluation results conform better to the human visual system, effectively improving the correlation between objective evaluation results and subjective perception.

Description

An Objective Evaluation Method of Stereoscopic Image Quality Based on Structure-Texture Separation

Technical Field

The invention relates to an image quality evaluation method, and in particular to an objective stereoscopic image quality evaluation method based on structure-texture separation.

Background Art

With the rapid development of image coding and stereoscopic display technology, stereoscopic imaging has attracted increasingly wide attention and application and has become a current research hotspot. Stereoscopic imaging exploits the binocular parallax of the human eye: the two eyes independently receive left- and right-viewpoint images of the same scene, and the brain fuses them into binocular parallax, producing a stereoscopic image with a sense of depth and realism. Owing to the acquisition system, storage compression, and transmission equipment, stereoscopic images inevitably suffer a series of distortions; and unlike single-channel images, a stereoscopic image must preserve the quality of two channels simultaneously, so its quality evaluation is of great importance. At present, however, effective objective methods for evaluating stereoscopic image quality are lacking. Establishing an effective objective stereoscopic image quality evaluation model is therefore of great significance.

Current objective stereoscopic image quality evaluation methods either apply planar (2D) image quality metrics directly to stereoscopic images, or evaluate the depth perception of a stereoscopic image through the quality of its disparity map. However, the fusion process that produces stereoscopic perception is not a simple extension of planar image quality evaluation, and since the human eye never views the disparity map directly, judging depth perception from disparity-map quality is not very accurate. Therefore, how to effectively simulate binocular stereo perception during quality evaluation, and how to analyze the mechanisms by which different distortion types affect perceived stereoscopic quality, so that the evaluation results reflect the human visual system more objectively, are problems that must be studied and solved in the objective quality evaluation of stereoscopic images.

Summary of the Invention

The technical problem to be solved by the invention is to provide an objective stereoscopic image quality evaluation method based on structure-texture separation that can effectively improve the correlation between objective evaluation results and subjective perception.

The technical solution adopted by the invention to solve the above technical problem is an objective stereoscopic image quality evaluation method based on structure-texture separation, characterized in that its processing proceeds as follows:

First, structure-texture separation is applied to the left-viewpoint and right-viewpoint images of the original undistorted stereoscopic image and of the distorted stereoscopic image to be evaluated, obtaining a structure image and a texture image for each;

Second, by computing the gradient similarity between each pixel in the structure image of the left-viewpoint image of the original undistorted stereoscopic image and the corresponding pixel in the structure image of the left-viewpoint image of the distorted stereoscopic image to be evaluated, an objective image quality prediction value is obtained for the structure image of the left-viewpoint image of the distorted stereoscopic image; likewise, by computing the gradient similarity between each pixel in the structure image of the right-viewpoint image of the original undistorted stereoscopic image and the corresponding pixel in the structure image of the right-viewpoint image of the distorted stereoscopic image, an objective image quality prediction value is obtained for the structure image of the right-viewpoint image of the distorted stereoscopic image;

Next, by computing the structural similarity between each sub-block of size 8×8 in the texture image of the left-viewpoint image of the original undistorted stereoscopic image and the corresponding 8×8 sub-block in the texture image of the left-viewpoint image of the distorted stereoscopic image to be evaluated, an objective image quality prediction value is obtained for the texture image of the left-viewpoint image of the distorted stereoscopic image; likewise, by computing the structural similarity between each 8×8 sub-block in the texture image of the right-viewpoint image of the original undistorted stereoscopic image and the corresponding 8×8 sub-block in the texture image of the right-viewpoint image of the distorted stereoscopic image, an objective image quality prediction value is obtained for the texture image of the right-viewpoint image of the distorted stereoscopic image;

Then, the objective quality prediction values of the structure images of the left- and right-viewpoint images of the distorted stereoscopic image are fused into an objective quality prediction value for the structure images of the distorted stereoscopic image; likewise, the objective quality prediction values of the texture images of the left- and right-viewpoint images are fused into an objective quality prediction value for the texture images of the distorted stereoscopic image;

Finally, the objective quality prediction values of the structure images and of the texture images of the distorted stereoscopic image are fused to obtain the objective image quality prediction value of the distorted stereoscopic image to be evaluated.
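The decomposition that drives this pipeline can be sketched in code. The split below is only a stand-in, assumed for illustration: it separates an image into a smooth structure part and a residual texture part by local averaging, whereas the patent's actual separation (detailed in steps ②-1a to ②-7a below) filters with Sigma-feature-set similarity weights.

```python
import numpy as np

def split_structure_texture(img, k=5):
    """Stand-in structure/texture split: structure = local k x k mean
    (replicate-padded), texture = residual, so structure + texture == img.
    The patent's real separation uses Sigma-set similarity filtering."""
    img = np.asarray(img, dtype=float)
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    H, W = img.shape
    structure = np.empty((H, W))
    for y in range(H):
        for x in range(W):
            structure[y, x] = p[y:y + k, x:x + k].mean()
    return structure, img - structure
```

Any decomposition satisfying img = structure + texture slots into the evaluation flow above; only the structure image feeds the gradient-similarity branch and only the texture image feeds the structural-similarity branch.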

The objective stereoscopic image quality evaluation method based on structure-texture separation of the invention specifically comprises the following steps:

① Let S_org denote the original undistorted stereoscopic image and S_dis the distorted stereoscopic image to be evaluated. Denote the left-viewpoint image of S_org by {L_org(x,y)} and its right-viewpoint image by {R_org(x,y)}; denote the left-viewpoint image of S_dis by {L_dis(x,y)} and its right-viewpoint image by {R_dis(x,y)}. Here (x,y) denotes the coordinate position of a pixel in the left- and right-viewpoint images, 1 ≤ x ≤ W, 1 ≤ y ≤ H, where W is the width and H the height of the viewpoint images; L_org(x,y), R_org(x,y), L_dis(x,y) and R_dis(x,y) denote the pixel values at coordinate position (x,y) in {L_org(x,y)}, {R_org(x,y)}, {L_dis(x,y)} and {R_dis(x,y)}, respectively;

② Apply structure-texture separation to {L_org(x,y)}, {R_org(x,y)}, {L_dis(x,y)} and {R_dis(x,y)} to obtain the structure image and texture image of each. Denote the structure image and texture image of {L_org(x,y)} by {I^str_{L,org}(x,y)} and {I^tex_{L,org}(x,y)}, those of {R_org(x,y)} by {I^str_{R,org}(x,y)} and {I^tex_{R,org}(x,y)}, those of {L_dis(x,y)} by {I^str_{L,dis}(x,y)} and {I^tex_{L,dis}(x,y)}, and those of {R_dis(x,y)} by {I^str_{R,dis}(x,y)} and {I^tex_{R,dis}(x,y)}, where each of I^str_{L,org}(x,y), I^tex_{L,org}(x,y), I^str_{R,org}(x,y), I^tex_{R,org}(x,y), I^str_{L,dis}(x,y), I^tex_{L,dis}(x,y), I^str_{R,dis}(x,y) and I^tex_{R,dis}(x,y) denotes the pixel value at coordinate position (x,y) in the corresponding image;

③ Compute the gradient similarity between each pixel in {I^str_{L,org}(x,y)} and the corresponding pixel in {I^str_{L,dis}(x,y)}. Denote the gradient similarity between the pixel at coordinate position (x,y) in {I^str_{L,org}(x,y)} and the pixel at coordinate position (x,y) in {I^str_{L,dis}(x,y)} by Q_L^str(x,y):

Q_L^str(x,y) = (2 × m^str_{L,org}(x,y) × m^str_{L,dis}(x,y) + C1) / ((m^str_{L,org}(x,y))^2 + (m^str_{L,dis}(x,y))^2 + C1),

where m^str_{L,org}(x,y) = sqrt((gx^str_{L,org}(x,y))^2 + (gy^str_{L,org}(x,y))^2) and m^str_{L,dis}(x,y) = sqrt((gx^str_{L,dis}(x,y))^2 + (gy^str_{L,dis}(x,y))^2); gx^str_{L,org}(x,y) and gy^str_{L,org}(x,y) denote the horizontal and vertical gradients of the pixel at coordinate position (x,y) in {I^str_{L,org}(x,y)}, gx^str_{L,dis}(x,y) and gy^str_{L,dis}(x,y) denote the horizontal and vertical gradients of the pixel at coordinate position (x,y) in {I^str_{L,dis}(x,y)}, and C1 is a control parameter. Then, from the gradient similarities between all pixels of {I^str_{L,org}(x,y)} and the corresponding pixels of {I^str_{L,dis}(x,y)}, compute the objective image quality prediction value of {I^str_{L,dis}(x,y)}, denoted Q_L^str;

Likewise, compute the gradient similarity between each pixel in {I^str_{R,org}(x,y)} and the corresponding pixel in {I^str_{R,dis}(x,y)}. Denote the gradient similarity between the pixel at coordinate position (x,y) in {I^str_{R,org}(x,y)} and the pixel at coordinate position (x,y) in {I^str_{R,dis}(x,y)} by Q_R^str(x,y):

Q_R^str(x,y) = (2 × m^str_{R,org}(x,y) × m^str_{R,dis}(x,y) + C1) / ((m^str_{R,org}(x,y))^2 + (m^str_{R,dis}(x,y))^2 + C1),

where m^str_{R,org}(x,y) = sqrt((gx^str_{R,org}(x,y))^2 + (gy^str_{R,org}(x,y))^2) and m^str_{R,dis}(x,y) = sqrt((gx^str_{R,dis}(x,y))^2 + (gy^str_{R,dis}(x,y))^2); gx^str_{R,org}(x,y) and gy^str_{R,org}(x,y) denote the horizontal and vertical gradients of the pixel at coordinate position (x,y) in {I^str_{R,org}(x,y)}, gx^str_{R,dis}(x,y) and gy^str_{R,dis}(x,y) denote the horizontal and vertical gradients of the pixel at coordinate position (x,y) in {I^str_{R,dis}(x,y)}, and C1 is a control parameter. Then, from the gradient similarities between all pixels of {I^str_{R,org}(x,y)} and the corresponding pixels of {I^str_{R,dis}(x,y)}, compute the objective image quality prediction value of {I^str_{R,dis}(x,y)}, denoted Q_R^str;
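The per-pixel gradient similarity of step ③ can be sketched as follows. The patent does not name the gradient operator or the value of C1, and it does not show how the per-pixel map is pooled into Q_L^str; central differences, a small C1, and mean pooling are all assumptions here.

```python
import numpy as np

def gradient_similarity(ref_str, dis_str, C1=1e-4):
    """Per-pixel gradient-magnitude similarity map between a reference and a
    distorted structure image. Gradients use central differences
    (np.gradient); C1 stands in for the patent's unspecified control
    parameter."""
    gy_r, gx_r = np.gradient(np.asarray(ref_str, dtype=float))
    gy_d, gx_d = np.gradient(np.asarray(dis_str, dtype=float))
    m_r = np.sqrt(gx_r ** 2 + gy_r ** 2)   # m^str for the reference
    m_d = np.sqrt(gx_d ** 2 + gy_d ** 2)   # m^str for the distorted image
    return (2 * m_r * m_d + C1) / (m_r ** 2 + m_d ** 2 + C1)

def pooled_quality(ref_str, dis_str, C1=1e-4):
    """Assumed mean pooling of the similarity map into a scalar Q^str."""
    return float(gradient_similarity(ref_str, dis_str, C1).mean())
```

For identical inputs the map is exactly 1 everywhere, since the numerator and denominator coincide when m_r = m_d.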

④ By obtaining the structural similarity between each sub-block of size 8×8 in {I^tex_{L,org}(x,y)} and the corresponding sub-block of size 8×8 in {I^tex_{L,dis}(x,y)}, calculate the objective image quality prediction value of {I^tex_{L,dis}(x,y)}, denoted Q_L^tex;

Likewise, by obtaining the structural similarity between each sub-block of size 8×8 in {I^tex_{R,org}(x,y)} and the corresponding sub-block of size 8×8 in {I^tex_{R,dis}(x,y)}, calculate the objective image quality prediction value of {I^tex_{R,dis}(x,y)}, denoted Q_R^tex;

⑤ Fuse Q_L^str and Q_R^str to obtain the objective quality prediction value of the structure images of S_dis, denoted Q_str: Q_str = w_s × Q_L^str + (1 − w_s) × Q_R^str, where w_s is the weight of Q_L^str relative to Q_R^str;

Likewise, fuse Q_L^tex and Q_R^tex to obtain the objective quality prediction value of the texture images of S_dis, denoted Q_tex: Q_tex = w_t × Q_L^tex + (1 − w_t) × Q_R^tex, where w_t is the weight of Q_L^tex relative to Q_R^tex;

⑥ Fuse Q_str and Q_tex to obtain the objective image quality prediction value of S_dis, denoted Q: Q = w × Q_str + (1 − w) × Q_tex, where w is the weight of Q_str relative to Q_tex.
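The three weighted fusions of steps ⑤ and ⑥ share one convex-combination form. The patent leaves w_s, w_t and w unspecified in this chunk, so the numeric weights and scores below are placeholders for illustration only.

```python
def fuse(q_a, q_b, w):
    """Weighted fusion Q = w*q_a + (1-w)*q_b, as used in steps 5 and 6."""
    return w * q_a + (1.0 - w) * q_b

# Placeholder scores and weights; the patent does not fix these values here.
Q_str = fuse(0.92, 0.88, w=0.5)   # from Q_L^str and Q_R^str
Q_tex = fuse(0.83, 0.85, w=0.5)   # from Q_L^tex and Q_R^tex
Q = fuse(Q_str, Q_tex, w=0.6)     # final prediction value
```

Because each stage is a convex combination, Q always lies between the smallest and largest of the four per-view scores.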

In step ②, the structure image {I^str_{L,org}(x,y)} and texture image {I^tex_{L,org}(x,y)} of {L_org(x,y)} are acquired as follows:

②-1a. Define the pixel currently to be processed in {L_org(x,y)} as the current pixel;

②-2a. Denote the coordinate position of the current pixel in {L_org(x,y)} by p. Define every pixel other than the current pixel within the 21×21 neighborhood window centered on the current pixel as a neighborhood pixel. Define the block formed by the 9×9 neighborhood window centered on the current pixel as the current sub-block, denoted {I^p_{L,org}(x2,y2)}, and define the block formed by the 9×9 neighborhood window centered on each neighborhood pixel as a neighborhood sub-block; denote the neighborhood sub-block centered on the neighborhood pixel whose coordinate position in {L_org(x,y)} is q by {I^q_{L,org}(x3,y3)}. Here p ∈ Ω and q ∈ Ω, where Ω denotes the set of coordinate positions of all pixels in {L_org(x,y)}; (x2,y2) denotes a coordinate position within the current sub-block, 1 ≤ x2 ≤ 9, 1 ≤ y2 ≤ 9, and I^p_{L,org}(x2,y2) denotes the pixel value at coordinate position (x2,y2) in the current sub-block; (x3,y3) denotes a coordinate position within the neighborhood sub-block, 1 ≤ x3 ≤ 9, 1 ≤ y3 ≤ 9, and I^q_{L,org}(x3,y3) denotes the pixel value at coordinate position (x3,y3) in that neighborhood sub-block;

In step ②-2a above, for any neighborhood pixel or any pixel of the current sub-block whose coordinate position in {L_org(x,y)} is (x,y): if x < 1 and 1 ≤ y ≤ H, the pixel is assigned the pixel value at coordinate position (1,y) in {L_org(x,y)}; if x > W and 1 ≤ y ≤ H, the value at (W,y); if 1 ≤ x ≤ W and y < 1, the value at (x,1); if 1 ≤ x ≤ W and y > H, the value at (x,H); if x < 1 and y < 1, the value at (1,1); if x > W and y < 1, the value at (W,1); if x < 1 and y > H, the value at (1,H); and if x > W and y > H, the value at (W,H);
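The eight boundary cases enumerated above amount to replicate padding, i.e. clamping each 1-based coordinate to [1, W] × [1, H]; a compact sketch:

```python
import numpy as np

def sample(img, x, y):
    """Pixel lookup with 1-based (x, y) as in the patent text; coordinates
    outside the image are clamped to the nearest edge pixel, covering all
    eight out-of-range cases in one expression."""
    H, W = img.shape
    return img[min(max(y, 1), H) - 1, min(max(x, 1), W) - 1]
```

For example, on a 2×2 image, (0,0) clamps to the top-left pixel and (3,3) to the bottom-right pixel.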

②-3a. Obtain the feature vector of each pixel in the current sub-block {I^p_{L,org}(x2,y2)}. Denote the feature vector of the pixel at coordinate position (x2,y2) in the current sub-block by X^p_{L,org}(x2,y2):

X^p_{L,org}(x2,y2) = [ I^p_{L,org}(x2,y2), |∂I^p_{L,org}(x2,y2)/∂x|, |∂I^p_{L,org}(x2,y2)/∂y|, |∂²I^p_{L,org}(x2,y2)/∂x²|, |∂²I^p_{L,org}(x2,y2)/∂y²|, x2, y2 ],

where the dimension of X^p_{L,org}(x2,y2) is 7, the symbol "[ ]" denotes a vector, the symbol "| |" denotes absolute value, I^p_{L,org}(x2,y2) denotes the intensity value of the pixel at coordinate position (x2,y2) in the current sub-block, ∂I^p_{L,org}(x2,y2)/∂x and ∂I^p_{L,org}(x2,y2)/∂y are its first-order partial derivatives in the horizontal and vertical directions, and ∂²I^p_{L,org}(x2,y2)/∂x² and ∂²I^p_{L,org}(x2,y2)/∂y² are its second-order partial derivatives in the horizontal and vertical directions;

②-4a. From the feature vectors of all pixels in the current sub-block {I^p_{L,org}(x2,y2)}, compute the covariance matrix of the current sub-block, denoted C^p_{L,org}:

C^p_{L,org} = (1/(7×7−1)) Σ_{x2=1..9} Σ_{y2=1..9} (X^p_{L,org}(x2,y2) − μ^p_{L,org}) (X^p_{L,org}(x2,y2) − μ^p_{L,org})^T,

where the dimension of C^p_{L,org} is 7×7, μ^p_{L,org} denotes the mean vector of the feature vectors of all pixels in the current sub-block, and (X^p_{L,org}(x2,y2) − μ^p_{L,org})^T denotes the transpose of (X^p_{L,org}(x2,y2) − μ^p_{L,org});

②-5a、Perform a Cholesky decomposition $C_{L,org}^p=LL^{\mathrm{T}}$ on the covariance matrix $C_{L,org}^p$ of the current sub-block $\{I_{L,org}^p(x_2,y_2)\}$ to obtain the Sigma feature set of the current sub-block, denoted $S_{L,org}^p$:

$$S_{L,org}^p=\left[10\times L^{(1)},\ldots,10\times L^{(i')},\ldots,10\times L^{(7)},-10\times L^{(1)},\ldots,-10\times L^{(i')},\ldots,-10\times L^{(7)},\mu_{L,org}^p\right]$$

where $L^{\mathrm{T}}$ is the transpose matrix of $L$, the dimension of $S_{L,org}^p$ is $7\times 15$, the symbol "[]" denotes a vector, $1\le i'\le 7$, and $L^{(1)}$, $L^{(i')}$ and $L^{(7)}$ denote the 1st, $i'$-th and 7th column vectors of $L$;
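As a concrete illustration of steps ②-4a and ②-5a, the sketch below computes the covariance matrix and Sigma feature set of one sub-block. It is a minimal sketch, not the patent's implementation: the 81×7 feature matrix `X` is assumed to come from step ②-3a, the small jitter term is added only for numerical safety of the Cholesky factorization, and the normalization 1/(7×7−1) follows the formula as printed.

```python
import numpy as np

def sigma_feature_set(X):
    """Steps (2)-4a / (2)-5a: covariance matrix and Sigma feature set
    of one 9x9 sub-block, given its 81 seven-dimensional feature vectors."""
    X = np.asarray(X, dtype=float)          # shape (81, 7)
    mu = X.mean(axis=0)                     # mean feature vector, shape (7,)
    D = X - mu
    C = D.T @ D / (7 * 7 - 1)               # normalization as printed in the patent
    L = np.linalg.cholesky(C + 1e-9 * np.eye(7))   # C = L L^T, L lower triangular
    cols = [10.0 * L[:, i] for i in range(7)]
    # 7 scaled columns of L, their negatives, then the mean vector: a 7x15 set
    S = np.column_stack(cols + [-c for c in cols] + [mu])
    return C, S
```

The 15 columns are exactly the bracketed entries of the formula above: the scaled Cholesky columns, their negatives, and the mean vector.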

②-6a、Using the same operations as steps ②-3a to ②-5a, obtain the Sigma feature set of every neighborhood sub-block formed by a 9×9 neighborhood window centred on a neighborhood pixel; the Sigma feature set of $\{I_{L,org}^q(x_3,y_3)\}$ is denoted $S_{L,org}^q$, whose dimension is $7\times 15$;

②-7a、From the Sigma feature set of the current sub-block $\{I_{L,org}^p(x_2,y_2)\}$ and the Sigma feature sets of the neighborhood sub-blocks formed by the 9×9 neighborhood windows centred on each neighborhood pixel, obtain the structure information of the current pixel, denoted $I_{L,org}^{str}(p)$:

$$I_{L,org}^{str}(p)=\frac{\displaystyle\sum_{q\in N'(p)}\exp\!\left(-\frac{\left\|S_{L,org}^p-S_{L,org}^q\right\|_2^2}{2\sigma^2}\right)\times L_{org}(q)}{\displaystyle\sum_{q\in N'(p)}\exp\!\left(-\frac{\left\|S_{L,org}^p-S_{L,org}^q\right\|_2^2}{2\sigma^2}\right)}$$

where $N'(p)$ denotes the set of coordinate positions, in $\{L_{org}(x,y)\}$, of all neighborhood pixels within the 21×21 neighborhood window centred on the current pixel, $\exp(\cdot)$ denotes the exponential function with base $e=2.71828183$, $\sigma$ denotes the standard deviation of the Gaussian function, the symbol "$\|\cdot\|$" denotes the Euclidean distance, and $L_{org}(q)$ denotes the pixel value of the pixel at coordinate position $q$ in $\{L_{org}(x,y)\}$;

②-8a、From the structure information $I_{L,org}^{str}(p)$ of the current pixel, obtain the texture information of the current pixel, denoted $I_{L,org}^{tex}(p)$: $I_{L,org}^{tex}(p)=L_{org}(p)-I_{L,org}^{str}(p)$, where $L_{org}(p)$ denotes the pixel value of the current pixel;
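Steps ②-7a and ②-8a for a single pixel can be sketched as below: the structure value is a Gaussian-weighted average over the 21×21 neighbourhood, with weights driven by the distance between Sigma feature sets, and the texture value is the residual. The numeric value of the Gaussian standard deviation σ is not fixed by this passage, so `sigma_g` is an arbitrary placeholder.

```python
import numpy as np

def structure_and_texture(pixel_value, S_p, S_neighbors, neighbor_values, sigma_g=0.2):
    """Steps (2)-7a / (2)-8a for one pixel.

    S_p:             7x15 Sigma feature set of the current sub-block.
    S_neighbors:     list of 7x15 Sigma feature sets, one per neighborhood pixel.
    neighbor_values: pixel values L_org(q) of those neighborhood pixels.
    """
    # squared Euclidean distance between Sigma feature sets
    d2 = np.array([np.sum((S_p - S_q) ** 2) for S_q in S_neighbors])
    w = np.exp(-d2 / (2.0 * sigma_g ** 2))          # Gaussian kernel weights
    structure = np.sum(w * np.asarray(neighbor_values)) / np.sum(w)
    texture = pixel_value - structure               # residual, step (2)-8a
    return structure, texture
```

When all Sigma feature sets coincide the weights are uniform and the structure value reduces to the plain neighbourhood mean, which is a useful sanity check.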

②-9a、Take the next pixel to be processed in $\{L_{org}(x,y)\}$ as the current pixel, then return to step ②-2a and continue until all pixels in $\{L_{org}(x,y)\}$ have been processed, giving the structure information and texture information of every pixel in $\{L_{org}(x,y)\}$. The structure information of all pixels in $\{L_{org}(x,y)\}$ forms the structure image of $\{L_{org}(x,y)\}$, denoted $\{I_{L,org}^{str}(x,y)\}$; the texture information of all pixels in $\{L_{org}(x,y)\}$ forms the texture image of $\{L_{org}(x,y)\}$, denoted $\{I_{L,org}^{tex}(x,y)\}$.

Using the same operations as steps ②-1a to ②-9a used to obtain the structure image $\{I_{L,org}^{str}(x,y)\}$ and texture image $\{I_{L,org}^{tex}(x,y)\}$ of $\{L_{org}(x,y)\}$, obtain the structure image $\{I_{R,org}^{str}(x,y)\}$ and texture image $\{I_{R,org}^{tex}(x,y)\}$ of $\{R_{org}(x,y)\}$, the structure image $\{I_{L,dis}^{str}(x,y)\}$ and texture image $\{I_{L,dis}^{tex}(x,y)\}$ of $\{L_{dis}(x,y)\}$, and the structure image $\{I_{R,dis}^{str}(x,y)\}$ and texture image $\{I_{R,dis}^{tex}(x,y)\}$ of $\{R_{dis}(x,y)\}$.

In step ④, the image quality objective evaluation prediction value $Q_L^{tex}$ of $\{I_{L,dis}^{tex}(x,y)\}$ is obtained as follows:

④-1a、Divide $\{I_{L,org}^{tex}(x,y)\}$ and $\{I_{L,dis}^{tex}(x,y)\}$ each into non-overlapping sub-blocks of size 8×8. Define the $k$-th sub-block currently to be processed in $\{I_{L,org}^{tex}(x,y)\}$ as the current first sub-block and the $k$-th sub-block currently to be processed in $\{I_{L,dis}^{tex}(x,y)\}$ as the current second sub-block, where $k$ has an initial value of 1;

④-2a、Denote the current first sub-block as $\{f_{L,org,k}(x_4,y_4)\}$ and the current second sub-block as $\{f_{L,dis,k}(x_4,y_4)\}$, where $(x_4,y_4)$ denotes the coordinate position of a pixel in the two sub-blocks, $1\le x_4\le 8$, $1\le y_4\le 8$, $f_{L,org,k}(x_4,y_4)$ denotes the pixel value at coordinate position $(x_4,y_4)$ in the current first sub-block, and $f_{L,dis,k}(x_4,y_4)$ denotes the pixel value at coordinate position $(x_4,y_4)$ in the current second sub-block;

④-3a、Compute the mean and standard deviation of the current first sub-block $\{f_{L,org,k}(x_4,y_4)\}$, denoted $\mu_{L,org,k}$ and $\sigma_{L,org,k}$:

$$\mu_{L,org,k}=\frac{\sum_{y_4=1}^{8}\sum_{x_4=1}^{8}f_{L,org,k}(x_4,y_4)}{64},\qquad \sigma_{L,org,k}=\sqrt{\frac{\sum_{y_4=1}^{8}\sum_{x_4=1}^{8}\left(f_{L,org,k}(x_4,y_4)-\mu_{L,org,k}\right)^2}{64}}.$$

Likewise, compute the mean and standard deviation of the current second sub-block $\{f_{L,dis,k}(x_4,y_4)\}$, denoted $\mu_{L,dis,k}$ and $\sigma_{L,dis,k}$:

$$\mu_{L,dis,k}=\frac{\sum_{y_4=1}^{8}\sum_{x_4=1}^{8}f_{L,dis,k}(x_4,y_4)}{64},\qquad \sigma_{L,dis,k}=\sqrt{\frac{\sum_{y_4=1}^{8}\sum_{x_4=1}^{8}\left(f_{L,dis,k}(x_4,y_4)-\mu_{L,dis,k}\right)^2}{64}};$$

④-4a、Compute the structural similarity between the current first sub-block $\{f_{L,org,k}(x_4,y_4)\}$ and the current second sub-block $\{f_{L,dis,k}(x_4,y_4)\}$, denoted $Q_{L,k}^{tex}$:

$$Q_{L,k}^{tex}=\frac{4\times(\sigma_{L,org,k}\times\sigma_{L,dis,k})\times(\mu_{L,org,k}\times\mu_{L,dis,k})+C_2}{\left((\sigma_{L,org,k})^2+(\sigma_{L,dis,k})^2\right)+\left((\mu_{L,org,k})^2+(\mu_{L,dis,k})^2\right)+C_2}$$

where $C_2$ is a control parameter;
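Steps ④-3a and ④-4a for one pair of 8×8 sub-blocks can be sketched as below. The value of the control parameter C₂ is not given in this passage, so a small placeholder constant is used, and the score follows the formula as printed.

```python
import numpy as np

def block_texture_similarity(f_org, f_dis, c2=0.03):
    """Steps (4)-3a / (4)-4a: mean and standard deviation over the 64
    pixels of each 8x8 texture sub-block, then the structural-similarity
    score between the two sub-blocks."""
    f_org, f_dis = np.asarray(f_org, float), np.asarray(f_dis, float)
    mu_o, mu_d = f_org.mean(), f_dis.mean()
    sd_o = np.sqrt(((f_org - mu_o) ** 2).sum() / 64.0)
    sd_d = np.sqrt(((f_dis - mu_d) ** 2).sum() / 64.0)
    num = 4.0 * (sd_o * sd_d) * (mu_o * mu_d) + c2
    den = (sd_o ** 2 + sd_d ** 2) + (mu_o ** 2 + mu_d ** 2) + c2
    return num / den
```

The score is symmetric in its two arguments, and the constant c2 keeps the ratio well defined when both sub-blocks are flat.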

④-5a、Let $k=k+1$, take the next sub-block to be processed in $\{I_{L,org}^{tex}(x,y)\}$ as the current first sub-block and the next sub-block to be processed in $\{I_{L,dis}^{tex}(x,y)\}$ as the current second sub-block, then return to step ④-2a and continue until all sub-blocks in $\{I_{L,org}^{tex}(x,y)\}$ and $\{I_{L,dis}^{tex}(x,y)\}$ have been processed, giving the structural similarity between every sub-block of $\{I_{L,org}^{tex}(x,y)\}$ and its corresponding sub-block of $\{I_{L,dis}^{tex}(x,y)\}$; the "=" in $k=k+1$ is an assignment operator;

④-6a、From the structural similarities between the sub-blocks of $\{I_{L,org}^{tex}(x,y)\}$ and the corresponding sub-blocks of $\{I_{L,dis}^{tex}(x,y)\}$, compute the image quality objective evaluation prediction value of $\{I_{L,dis}^{tex}(x,y)\}$, denoted $Q_L^{tex}$.

In step ④, the image quality objective evaluation prediction value $Q_R^{tex}$ of $\{I_{R,dis}^{tex}(x,y)\}$ is obtained as follows:

④-1b、Divide $\{I_{R,org}^{tex}(x,y)\}$ and $\{I_{R,dis}^{tex}(x,y)\}$ each into non-overlapping sub-blocks of size 8×8. Define the $k$-th sub-block currently to be processed in $\{I_{R,org}^{tex}(x,y)\}$ as the current first sub-block and the $k$-th sub-block currently to be processed in $\{I_{R,dis}^{tex}(x,y)\}$ as the current second sub-block, where $k$ has an initial value of 1;

④-2b、Denote the current first sub-block as $\{f_{R,org,k}(x_4,y_4)\}$ and the current second sub-block as $\{f_{R,dis,k}(x_4,y_4)\}$, where $(x_4,y_4)$ denotes the coordinate position of a pixel in the two sub-blocks, $1\le x_4\le 8$, $1\le y_4\le 8$, $f_{R,org,k}(x_4,y_4)$ denotes the pixel value at coordinate position $(x_4,y_4)$ in the current first sub-block, and $f_{R,dis,k}(x_4,y_4)$ denotes the pixel value at coordinate position $(x_4,y_4)$ in the current second sub-block;

④-3b、Compute the mean and standard deviation of the current first sub-block $\{f_{R,org,k}(x_4,y_4)\}$, denoted $\mu_{R,org,k}$ and $\sigma_{R,org,k}$:

$$\mu_{R,org,k}=\frac{\sum_{y_4=1}^{8}\sum_{x_4=1}^{8}f_{R,org,k}(x_4,y_4)}{64},\qquad \sigma_{R,org,k}=\sqrt{\frac{\sum_{y_4=1}^{8}\sum_{x_4=1}^{8}\left(f_{R,org,k}(x_4,y_4)-\mu_{R,org,k}\right)^2}{64}}.$$

Likewise, compute the mean and standard deviation of the current second sub-block $\{f_{R,dis,k}(x_4,y_4)\}$, denoted $\mu_{R,dis,k}$ and $\sigma_{R,dis,k}$:

$$\mu_{R,dis,k}=\frac{\sum_{y_4=1}^{8}\sum_{x_4=1}^{8}f_{R,dis,k}(x_4,y_4)}{64},\qquad \sigma_{R,dis,k}=\sqrt{\frac{\sum_{y_4=1}^{8}\sum_{x_4=1}^{8}\left(f_{R,dis,k}(x_4,y_4)-\mu_{R,dis,k}\right)^2}{64}};$$

④-4b、Compute the structural similarity between the current first sub-block $\{f_{R,org,k}(x_4,y_4)\}$ and the current second sub-block $\{f_{R,dis,k}(x_4,y_4)\}$, denoted $Q_{R,k}^{tex}$:

$$Q_{R,k}^{tex}=\frac{4\times(\sigma_{R,org,k}\times\sigma_{R,dis,k})\times(\mu_{R,org,k}\times\mu_{R,dis,k})+C_2}{\left((\sigma_{R,org,k})^2+(\sigma_{R,dis,k})^2\right)+\left((\mu_{R,org,k})^2+(\mu_{R,dis,k})^2\right)+C_2}$$

where $C_2$ is a control parameter;

④-5b、Let $k=k+1$, take the next sub-block to be processed in $\{I_{R,org}^{tex}(x,y)\}$ as the current first sub-block and the next sub-block to be processed in $\{I_{R,dis}^{tex}(x,y)\}$ as the current second sub-block, then return to step ④-2b and continue until all sub-blocks in $\{I_{R,org}^{tex}(x,y)\}$ and $\{I_{R,dis}^{tex}(x,y)\}$ have been processed, giving the structural similarity between every sub-block of $\{I_{R,org}^{tex}(x,y)\}$ and its corresponding sub-block of $\{I_{R,dis}^{tex}(x,y)\}$; the "=" in $k=k+1$ is an assignment operator;

④-6b、From the structural similarities between the sub-blocks of $\{I_{R,org}^{tex}(x,y)\}$ and the corresponding sub-blocks of $\{I_{R,dis}^{tex}(x,y)\}$, compute the image quality objective evaluation prediction value of $\{I_{R,dis}^{tex}(x,y)\}$, denoted $Q_R^{tex}$.

Compared with the prior art, the advantages of the present invention are:

1) Considering that distortion causes a loss of image structure or texture information, the method of the present invention separates the distorted stereoscopic image into a structure image and a texture image, and fuses the objective quality prediction values of the structure images and texture images of the left-viewpoint and right-viewpoint images using different parameters. This better reflects quality changes in the stereoscopic image and makes the evaluation results more consistent with the human visual system.

2) The method of the present invention evaluates structure images with gradient similarity and texture images with structural similarity. This well characterizes the impact of the loss of structure and texture information on image quality, and thus effectively improves the correlation between objective evaluation results and subjective perception.

Description of the drawings

Fig. 1 is an overall implementation block diagram of the method of the present invention;

Fig. 2 is a scatter plot of the image quality objective evaluation prediction value versus the difference of mean subjective scores for each distorted stereoscopic image in the Ningbo University stereoscopic image database, obtained with the method of the present invention;

Fig. 3 is a scatter plot of the image quality objective evaluation prediction value versus the difference of mean subjective scores for each distorted stereoscopic image in the LIVE stereoscopic image database, obtained with the method of the present invention.

Detailed description of the embodiments

The present invention is described in further detail below with reference to the accompanying drawings and an embodiment.

The present invention proposes an objective stereoscopic image quality evaluation method based on structure and texture separation. Its overall implementation block diagram is shown in Fig. 1, and its processing proceeds as follows:

First, structure and texture separation is applied to the left-viewpoint and right-viewpoint images of the original undistorted stereoscopic image and of the distorted stereoscopic image to be evaluated, yielding their respective structure images and texture images.

Second, the objective quality prediction value of the structure image of the left-viewpoint image of the distorted stereoscopic image to be evaluated is obtained by computing the gradient similarity between each pixel in the structure image of the left-viewpoint image of the original undistorted stereoscopic image and the corresponding pixel in the structure image of the left-viewpoint image of the distorted stereoscopic image; likewise, the objective quality prediction value of the structure image of the right-viewpoint image is obtained from the gradient similarity between corresponding pixels of the right-viewpoint structure images.

Next, the objective quality prediction value of the texture image of the left-viewpoint image of the distorted stereoscopic image to be evaluated is obtained by computing the structural similarity between each 8×8 sub-block in the texture image of the left-viewpoint image of the original undistorted stereoscopic image and the corresponding 8×8 sub-block in the texture image of the left-viewpoint image of the distorted stereoscopic image; likewise, the objective quality prediction value of the texture image of the right-viewpoint image is obtained from the structural similarity between corresponding 8×8 sub-blocks of the right-viewpoint texture images.

Furthermore, the objective quality prediction values of the structure images of the left-viewpoint and right-viewpoint images of the distorted stereoscopic image are fused to obtain the objective quality prediction value of its structure image; likewise, the objective quality prediction values of the texture images of the left-viewpoint and right-viewpoint images are fused to obtain the objective quality prediction value of its texture image.

Finally, the objective quality prediction values of the structure image and the texture image of the distorted stereoscopic image to be evaluated are fused to obtain the image quality objective evaluation prediction value of the distorted stereoscopic image.
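The fusion stages above can be summarised in a short sketch. The weights are hypothetical placeholders: the description says different parameters are used for the left/right and structure/texture fusions but does not state their values in this passage.

```python
def fuse_predictions(qL_str, qR_str, qL_tex, qR_tex,
                     w_view_str=0.5, w_view_tex=0.5, w_str=0.8):
    """Fuse per-view predictions into structure and texture scores, then
    fuse those into the final objective quality prediction. All weights
    are placeholders, not values taken from the patent."""
    q_str = w_view_str * qL_str + (1.0 - w_view_str) * qR_str   # structure fusion
    q_tex = w_view_tex * qL_tex + (1.0 - w_view_tex) * qR_tex   # texture fusion
    return w_str * q_str + (1.0 - w_str) * q_tex                # final fusion
```

With all four component scores equal, any convex choice of weights returns that common score, which is the expected behaviour of a weighted fusion.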

The objective stereoscopic image quality evaluation method based on structure and texture separation of the present invention specifically comprises the following steps:

① Let $S_{org}$ denote the original undistorted stereoscopic image and $S_{dis}$ the distorted stereoscopic image to be evaluated. Denote the left-viewpoint image of $S_{org}$ as $\{L_{org}(x,y)\}$, the right-viewpoint image of $S_{org}$ as $\{R_{org}(x,y)\}$, the left-viewpoint image of $S_{dis}$ as $\{L_{dis}(x,y)\}$, and the right-viewpoint image of $S_{dis}$ as $\{R_{dis}(x,y)\}$, where $(x,y)$ denotes the coordinate position of a pixel in the left-viewpoint and right-viewpoint images, $1\le x\le W$, $1\le y\le H$, $W$ denotes the width of the left-viewpoint and right-viewpoint images, $H$ denotes their height, and $L_{org}(x,y)$, $R_{org}(x,y)$, $L_{dis}(x,y)$ and $R_{dis}(x,y)$ denote the pixel values of the pixels at coordinate position $(x,y)$ in $\{L_{org}(x,y)\}$, $\{R_{org}(x,y)\}$, $\{L_{dis}(x,y)\}$ and $\{R_{dis}(x,y)\}$, respectively.

Here, the Ningbo University stereoscopic image database and the LIVE stereoscopic image database are used to analyse the correlation between the image quality objective evaluation prediction values of the distorted stereoscopic images obtained in this embodiment and the differences of mean subjective scores. The Ningbo University stereoscopic image database is built from 12 undistorted stereoscopic images and comprises 60 distorted stereoscopic images under JPEG compression at different distortion levels, 60 under JPEG2000 compression, 60 under Gaussian blur, 60 under Gaussian white noise, and 72 under H.264 coding distortion. The LIVE stereoscopic image database is built from 20 undistorted stereoscopic images and comprises 80 distorted stereoscopic images under JPEG compression at different distortion levels, 80 under JPEG2000 compression, 45 under Gaussian blur, 80 under Gaussian white noise, and 80 under Fast Fading distortion.

② Apply structure and texture separation to $\{L_{org}(x,y)\}$, $\{R_{org}(x,y)\}$, $\{L_{dis}(x,y)\}$ and $\{R_{dis}(x,y)\}$ to obtain their respective structure images and texture images. Denote the structure image and texture image of $\{L_{org}(x,y)\}$ as $\{I_{L,org}^{str}(x,y)\}$ and $\{I_{L,org}^{tex}(x,y)\}$, those of $\{R_{org}(x,y)\}$ as $\{I_{R,org}^{str}(x,y)\}$ and $\{I_{R,org}^{tex}(x,y)\}$, those of $\{L_{dis}(x,y)\}$ as $\{I_{L,dis}^{str}(x,y)\}$ and $\{I_{L,dis}^{tex}(x,y)\}$, and those of $\{R_{dis}(x,y)\}$ as $\{I_{R,dis}^{str}(x,y)\}$ and $\{I_{R,dis}^{tex}(x,y)\}$, where $I_{L,org}^{str}(x,y)$, $I_{L,org}^{tex}(x,y)$, $I_{R,org}^{str}(x,y)$, $I_{R,org}^{tex}(x,y)$, $I_{L,dis}^{str}(x,y)$, $I_{L,dis}^{tex}(x,y)$, $I_{R,dis}^{str}(x,y)$ and $I_{R,dis}^{tex}(x,y)$ denote the pixel values of the pixels at coordinate position $(x,y)$ in the corresponding images.

In this embodiment, the structure image $\{I_{L,org}^{str}(x,y)\}$ and texture image $\{I_{L,org}^{tex}(x,y)\}$ of $\{L_{org}(x,y)\}$ in step ② are obtained as follows:

②-1a、Define the pixel currently to be processed in $\{L_{org}(x,y)\}$ as the current pixel.

②-2a、Denote the coordinate position of the current pixel in $\{L_{org}(x,y)\}$ as $p$. Define every pixel other than the current pixel within the 21×21 neighborhood window centred on the current pixel as a neighborhood pixel. Define the block formed by the 9×9 neighborhood window centred on the current pixel as the current sub-block, denoted $\{I_{L,org}^p(x_2,y_2)\}$, and define each block formed by a 9×9 neighborhood window centred on a neighborhood pixel within that 21×21 neighborhood window as a neighborhood sub-block; the neighborhood sub-block centred on the neighborhood pixel whose coordinate position in $\{L_{org}(x,y)\}$ is $q$ is denoted $\{I_{L,org}^q(x_3,y_3)\}$. Here $p\in\Omega$ and $q\in\Omega$, where $\Omega$ denotes the set of coordinate positions of all pixels in $\{L_{org}(x,y)\}$; $(x_2,y_2)$ denotes the coordinate position of a pixel within the current sub-block, $1\le x_2\le 9$, $1\le y_2\le 9$, and $I_{L,org}^p(x_2,y_2)$ denotes the pixel value at coordinate position $(x_2,y_2)$ in the current sub-block; $(x_3,y_3)$ denotes the coordinate position of a pixel within a neighborhood sub-block, $1\le x_3\le 9$, $1\le y_3\le 9$, and $I_{L,org}^q(x_3,y_3)$ denotes the pixel value at coordinate position $(x_3,y_3)$ in the neighborhood sub-block $\{I_{L,org}^q(x_3,y_3)\}$.

In step ②-2a above, for any pixel in the current sub-block, let its coordinate position in {L_org(x,y)} be (x, y). If x<1 and 1≤y≤H, the pixel is assigned the value of the pixel at coordinate position (1, y) in {L_org(x,y)}; if x>W and 1≤y≤H, the value at (W, y); if 1≤x≤W and y<1, the value at (x, 1); if 1≤x≤W and y>H, the value at (x, H); if x<1 and y<1, the value at (1, 1); if x>W and y<1, the value at (W, 1); if x<1 and y>H, the value at (1, H); and if x>W and y>H, the value at (W, H). The same operation is applied to every neighborhood pixel, so that any pixel falling outside the image boundary takes the pixel value of its nearest boundary pixel. In other words, in step ②-2a: if the coordinate position of a pixel inside the block formed by the 9×9 neighborhood window centered on the current pixel lies outside the boundary of {L_org(x,y)}, its pixel value is replaced by that of the nearest boundary pixel; if the coordinate position of a neighborhood pixel in the 21×21 neighborhood window centered on the current pixel lies outside the boundary of {L_org(x,y)}, its pixel value is replaced by that of the nearest boundary pixel; and if a pixel inside the block formed by the 9×9 neighborhood window centered on any such neighborhood pixel lies outside the boundary of {L_org(x,y)}, its pixel value is likewise replaced by that of the nearest boundary pixel.
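The nearest-boundary substitution above is exactly replicate ("edge") padding. A minimal NumPy sketch of both views of the rule (the function and array names are illustrative, not from the patent):

```python
import numpy as np

def clamp_read(img, x, y):
    """Read the 1-based coordinate (x, y) from img, substituting the
    nearest boundary pixel when (x, y) falls outside the image."""
    H, W = img.shape
    xc = min(max(x, 1), W)          # clamp x into [1, W]
    yc = min(max(y, 1), H)          # clamp y into [1, H]
    return img[yc - 1, xc - 1]

img = np.arange(12, dtype=float).reshape(3, 4)   # toy 3x4 "image"

# Equivalently, pre-pad the image once so that every 9x9 sub-block and
# 21x21 neighborhood window can be sliced without coordinate checks.
pad = 10 + 4                     # 21x21 window radius + 9x9 sub-block radius
padded = np.pad(img, pad, mode='edge')
```

Pre-padding by the combined radius (10 for the 21×21 window plus 4 for the 9×9 sub-blocks) covers every out-of-bounds access the steps above can generate.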

②-3a. Obtain the feature vector of each pixel in the current sub-block I_{L,org}^p. The feature vector of the pixel at coordinate position (x_2, y_2) in I_{L,org}^p is denoted X_{L,org}^p(x_2, y_2):

$X_{L,org}^{p}(x_2,y_2)=\left[I_{L,org}^{p}(x_2,y_2),\ \left|\frac{\partial I_{L,org}^{p}(x_2,y_2)}{\partial x}\right|,\ \left|\frac{\partial I_{L,org}^{p}(x_2,y_2)}{\partial y}\right|,\ \left|\frac{\partial^{2} I_{L,org}^{p}(x_2,y_2)}{\partial x^{2}}\right|,\ \left|\frac{\partial^{2} I_{L,org}^{p}(x_2,y_2)}{\partial y^{2}}\right|,\ x_2,\ y_2\right]$

where the dimension of X_{L,org}^p(x_2, y_2) is 7, the symbol "[]" denotes a vector, the symbol "| |" denotes absolute value, I_{L,org}^p(x_2, y_2) denotes the pixel value of the pixel at coordinate position (x_2, y_2) in the current sub-block I_{L,org}^p, and the four derivative terms are the absolute values of the first-order and second-order partial derivatives of I_{L,org}^p(x_2, y_2) in the horizontal (x) and vertical (y) directions, respectively.
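Step ②-3a can be sketched as follows. The patent does not fix a particular discrete-derivative scheme, so the use of np.gradient (central differences) here is an assumption:

```python
import numpy as np

def feature_vectors(block):
    """7-D feature vector for every pixel of a 9x9 sub-block:
    [value, |dI/dx|, |dI/dy|, |d2I/dx2|, |d2I/dy2|, x2, y2]."""
    dy, dx = np.gradient(block)          # first-order derivatives
    d2y = np.gradient(dy, axis=0)        # second-order, vertical
    d2x = np.gradient(dx, axis=1)        # second-order, horizontal
    n = block.shape[0]
    y2, x2 = np.mgrid[1:n + 1, 1:n + 1]  # 1-based in-block coordinates
    feats = np.stack([block, np.abs(dx), np.abs(dy),
                      np.abs(d2x), np.abs(d2y), x2, y2], axis=-1)
    return feats.reshape(-1, 7)          # 81 pixels x 7 features

block = np.random.rand(9, 9)
X = feature_vectors(block)
```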

②-4a. From the feature vectors of all pixels in the current sub-block I_{L,org}^p, compute the covariance matrix of I_{L,org}^p, denoted C_{L,org}^p:

$C_{L,org}^{p}=\frac{1}{7\times 7-1}\sum_{x_2=1}^{9}\sum_{y_2=1}^{9}\left(X_{L,org}^{p}(x_2,y_2)-\mu_{L,org}^{p}\right)\left(X_{L,org}^{p}(x_2,y_2)-\mu_{L,org}^{p}\right)^{T}$

where the dimension of C_{L,org}^p is 7×7, μ_{L,org}^p denotes the mean vector of the feature vectors of all pixels in the current sub-block I_{L,org}^p, and (X_{L,org}^p(x_2,y_2) − μ_{L,org}^p)^T denotes the transpose of (X_{L,org}^p(x_2,y_2) − μ_{L,org}^p).
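Step ②-4a is a sample covariance over the 81 feature vectors, using the patent's own normalization constant 1/(7×7−1). A sketch:

```python
import numpy as np

def covariance_matrix(X, norm=7 * 7 - 1):
    """Covariance of the 81x7 feature-vector matrix X with the
    patent's normalization 1/(7*7-1); returns a 7x7 matrix."""
    mu = X.mean(axis=0)                  # mean feature vector
    D = X - mu                           # centered features
    return (D.T @ D) / norm

X = np.random.rand(81, 7)
C = covariance_matrix(X)
```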

②-5a. Perform a Cholesky decomposition of the covariance matrix C_{L,org}^p of the current sub-block I_{L,org}^p, C_{L,org}^p = L L^T, and obtain the Sigma feature set of I_{L,org}^p, denoted S_{L,org}^p:

$S_{L,org}^{p}=\left[10\times L^{(1)},\ldots,10\times L^{(i')},\ldots,10\times L^{(7)},\ -10\times L^{(1)},\ldots,-10\times L^{(i')},\ldots,-10\times L^{(7)},\ \mu_{L,org}^{p}\right]$

where L^T is the transpose matrix of L, the dimension of S_{L,org}^p is 7×15, the symbol "[]" denotes a vector, 1≤i'≤7, and L^{(i')} denotes the i'-th column vector of L (L^{(1)} is the 1st and L^{(7)} the 7th column vector of L).
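Steps ②-4a and ②-5a together can be sketched as below. The small diagonal term added before the Cholesky factorization is a numerical-stability assumption, not part of the patent:

```python
import numpy as np

def sigma_feature_set(C, mu, scale=10.0, eps=1e-8):
    """Sigma feature set S = [10*L(1..7), -10*L(1..7), mu],
    where C = L @ L.T (Cholesky); S has shape 7x15."""
    L = np.linalg.cholesky(C + eps * np.eye(C.shape[0]))
    cols = [scale * L[:, i] for i in range(7)]
    cols += [-scale * L[:, i] for i in range(7)]
    cols.append(mu)
    return np.stack(cols, axis=1)

X = np.random.rand(81, 7)                     # 81 feature vectors
mu = X.mean(axis=0)
C = (X - mu).T @ (X - mu) / (7 * 7 - 1)       # step 2-4a covariance
S = sigma_feature_set(C, mu)
```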

②-6a. Using the same operations as in steps ②-3a to ②-5a, obtain the Sigma feature set of each neighborhood sub-block formed by the 9×9 neighborhood window centered on a neighborhood pixel; the Sigma feature set of I_{L,org}^q is denoted S_{L,org}^q, whose dimension is 7×15.

②-7a. From the Sigma feature set S_{L,org}^p of the current sub-block and the Sigma feature sets of the neighborhood sub-blocks formed by the 9×9 neighborhood windows centered on each neighborhood pixel, obtain the structure information of the current pixel, denoted I_{L,org}^{str}(p):

$I_{L,org}^{str}(p)=\frac{\sum_{q\in N'(p)}\exp\left(-\frac{\|S_{L,org}^{p}-S_{L,org}^{q}\|^{2}}{2\sigma^{2}}\right)\times L_{org}(q)}{\sum_{q\in N'(p)}\exp\left(-\frac{\|S_{L,org}^{p}-S_{L,org}^{q}\|^{2}}{2\sigma^{2}}\right)}$

where N'(p) denotes the set of coordinate positions in {L_org(x,y)} of all neighborhood pixels within the 21×21 neighborhood window centered on the current pixel, exp() denotes the exponential function with base e, e = 2.71828183, σ denotes the standard deviation of the Gaussian function (σ = 0.06 in this embodiment), the symbol "|| ||" denotes the Euclidean distance, and L_org(q) denotes the pixel value of the pixel whose coordinate position is q in {L_org(x,y)}.
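Step ②-7a computes a nonlocal weighted mean: each neighborhood pixel q is weighted by a Gaussian on the distance between Sigma feature sets, so pixels whose local structure resembles that of p dominate the average. A per-pixel sketch with placeholder feature sets (σ = 0.06 as in the embodiment):

```python
import numpy as np

def structure_value(S_p, S_q_list, pixel_values, sigma=0.06):
    """Weighted average of neighborhood pixel values, weighted by
    exp(-||S_p - S_q||^2 / (2*sigma^2)) over q in N'(p)."""
    w = np.array([np.exp(-np.sum((S_p - S_q) ** 2) / (2 * sigma ** 2))
                  for S_q in S_q_list])
    return np.sum(w * pixel_values) / np.sum(w)

# 21x21 window minus the center pixel -> 440 neighbors
S_p = np.random.rand(7, 15)
S_q_list = [S_p + 0.001 * np.random.rand(7, 15) for _ in range(440)]
vals = np.random.rand(440)                 # L_org(q) for each neighbor
s = structure_value(S_p, S_q_list, vals)
```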

②-8a. From the structure information I_{L,org}^{str}(p) of the current pixel, obtain its texture information, denoted I_{L,org}^{tex}(p): I_{L,org}^{tex}(p) = L_org(p) − I_{L,org}^{str}(p), where L_org(p) denotes the pixel value of the current pixel.

②-9a. Take the next pixel to be processed in {L_org(x,y)} as the current pixel and return to step ②-2a, repeating until all pixels in {L_org(x,y)} have been processed, which yields the structure information and texture information of every pixel in {L_org(x,y)}. The structure information of all pixels in {L_org(x,y)} forms the structure image of {L_org(x,y)}, denoted {I_{L,org}^{str}(x,y)}, and the texture information of all pixels forms the texture image of {L_org(x,y)}, denoted {I_{L,org}^{tex}(x,y)}.
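The whole loop of steps ②-1a to ②-9a can be sketched end to end. This is a slow reference implementation under stated assumptions (np.gradient for the derivatives, a small diagonal added before the Cholesky factorization); it shows the data flow, not the patent's exact numerics:

```python
import numpy as np

def sigma_set(block, scale=10.0, eps=1e-8):
    """Sigma feature set (7x15) of one 9x9 block (steps 2-3a to 2-5a)."""
    dy, dx = np.gradient(block)               # first-order derivatives
    d2y = np.gradient(dy, axis=0)             # second-order, vertical
    d2x = np.gradient(dx, axis=1)             # second-order, horizontal
    n = block.shape[0]
    y2, x2 = np.mgrid[1:n + 1, 1:n + 1]       # 1-based in-block coords
    X = np.stack([block, abs(dx), abs(dy), abs(d2x), abs(d2y), x2, y2],
                 axis=-1).reshape(-1, 7)
    mu = X.mean(axis=0)
    C = (X - mu).T @ (X - mu) / (7 * 7 - 1)   # patent's normalization
    L = np.linalg.cholesky(C + eps * np.eye(7))
    return np.concatenate([scale * L, -scale * L, mu[:, None]], axis=1)

def structure_texture(img, half=10, sub=4, sigma=0.06):
    """Structure/texture separation of one view (steps 2-1a to 2-9a)."""
    H, W = img.shape
    pad = half + sub
    P = np.pad(img, pad, mode='edge')         # nearest-boundary rule
    Hp, Wp = P.shape
    # Sigma feature set of the 9x9 block around every reachable pixel.
    S = np.empty((Hp, Wp, 7, 15))
    for r in range(sub, Hp - sub):
        for c in range(sub, Wp - sub):
            S[r, c] = sigma_set(P[r - sub:r + sub + 1,
                                  c - sub:c + sub + 1])
    structure = np.empty_like(img, dtype=float)
    for i in range(H):
        for j in range(W):
            r, c = i + pad, j + pad
            wsum = vsum = 0.0
            for dr in range(-half, half + 1):
                for dc in range(-half, half + 1):
                    if dr == 0 and dc == 0:   # q ranges over N'(p)
                        continue
                    d2 = np.sum((S[r, c] - S[r + dr, c + dc]) ** 2)
                    w = np.exp(-d2 / (2 * sigma ** 2))
                    wsum += w
                    vsum += w * P[r + dr, c + dc]
            structure[i, j] = vsum / wsum
    texture = img - structure                 # step 2-8a
    return structure, texture

st, tx = structure_texture(np.full((4, 4), 0.5))
```

For a real W×H view the per-pixel Python loops would be replaced by vectorized or compiled code; the structure image then feeds the gradient-similarity evaluation and the texture image the structural-similarity evaluation of the method.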

Using the same operations as in steps ②-1a to ②-9a, which produced the structure image {I_{L,org}^{str}(x,y)} and texture image {I_{L,org}^{tex}(x,y)} of {L_org(x,y)}, obtain the structure image {I_{R,org}^{str}(x,y)} and texture image {I_{R,org}^{tex}(x,y)} of {R_org(x,y)}, the structure image {I_{L,dis}^{str}(x,y)} and texture image {I_{L,dis}^{tex}(x,y)} of {L_dis(x,y)}, and the structure image {I_{R,dis}^{str}(x,y)} and texture image {I_{R,dis}^{tex}(x,y)} of {R_dis(x,y)}. That is, in step ②, the structure image {I_{R,org}^{str}(x,y)} and texture image {I_{R,org}^{tex}(x,y)} of {R_org(x,y)} are obtained as follows:

②-1b. Define the pixel currently to be processed in {R_org(x,y)} as the current pixel.

②-2b. Record the coordinate position of the current pixel in {R_org(x,y)} as p. Define each pixel other than the current pixel within the 21×21 neighborhood window centered on the current pixel as a neighborhood pixel. Define the block formed by the 9×9 neighborhood window centered on the current pixel as the current sub-block, denoted I_{R,org}^p, and define each block formed by a 9×9 neighborhood window centered on a neighborhood pixel within the 21×21 window as a neighborhood sub-block; the neighborhood sub-block centered on the neighborhood pixel whose coordinate position in {R_org(x,y)} is q is denoted I_{R,org}^q. Here p∈Ω and q∈Ω, where Ω denotes the set of coordinate positions of all pixels in {R_org(x,y)}; (x_2, y_2) denotes a coordinate position of a pixel within the current sub-block I_{R,org}^p, 1≤x_2≤9, 1≤y_2≤9, and I_{R,org}^p(x_2, y_2) denotes the pixel value of the pixel at (x_2, y_2) in I_{R,org}^p; (x_3, y_3) denotes a coordinate position of a pixel within I_{R,org}^q, 1≤x_3≤9, 1≤y_3≤9, and I_{R,org}^q(x_3, y_3) denotes the pixel value of the pixel at (x_3, y_3) in I_{R,org}^q.

In step ②-2b above, if the coordinate position of a pixel inside the block formed by the 9×9 neighborhood window centered on the current pixel lies outside the boundary of {R_org(x,y)}, its pixel value is replaced by that of the nearest boundary pixel; if the coordinate position of a neighborhood pixel in the 21×21 neighborhood window centered on the current pixel lies outside the boundary of {R_org(x,y)}, its pixel value is replaced by that of the nearest boundary pixel; and if a pixel inside the block formed by the 9×9 neighborhood window centered on any such neighborhood pixel lies outside the boundary of {R_org(x,y)}, its pixel value is likewise replaced by that of the nearest boundary pixel.

②-3b. Obtain the feature vector of each pixel in the current sub-block I_{R,org}^p. The feature vector of the pixel at coordinate position (x_2, y_2) in I_{R,org}^p is denoted X_{R,org}^p(x_2, y_2):

$X_{R,org}^{p}(x_2,y_2)=\left[I_{R,org}^{p}(x_2,y_2),\ \left|\frac{\partial I_{R,org}^{p}(x_2,y_2)}{\partial x}\right|,\ \left|\frac{\partial I_{R,org}^{p}(x_2,y_2)}{\partial y}\right|,\ \left|\frac{\partial^{2} I_{R,org}^{p}(x_2,y_2)}{\partial x^{2}}\right|,\ \left|\frac{\partial^{2} I_{R,org}^{p}(x_2,y_2)}{\partial y^{2}}\right|,\ x_2,\ y_2\right]$

where the dimension of X_{R,org}^p(x_2, y_2) is 7, the symbol "[]" denotes a vector, the symbol "| |" denotes absolute value, I_{R,org}^p(x_2, y_2) denotes the pixel value of the pixel at coordinate position (x_2, y_2) in the current sub-block I_{R,org}^p, and the four derivative terms are the absolute values of the first-order and second-order partial derivatives of I_{R,org}^p(x_2, y_2) in the horizontal (x) and vertical (y) directions, respectively.

②-4b. From the feature vectors of all pixels in the current sub-block I_{R,org}^p, compute the covariance matrix of I_{R,org}^p, denoted C_{R,org}^p:

$C_{R,org}^{p}=\frac{1}{7\times 7-1}\sum_{x_2=1}^{9}\sum_{y_2=1}^{9}\left(X_{R,org}^{p}(x_2,y_2)-\mu_{R,org}^{p}\right)\left(X_{R,org}^{p}(x_2,y_2)-\mu_{R,org}^{p}\right)^{T}$

where the dimension of C_{R,org}^p is 7×7, μ_{R,org}^p denotes the mean vector of the feature vectors of all pixels in the current sub-block I_{R,org}^p, and (X_{R,org}^p(x_2,y_2) − μ_{R,org}^p)^T denotes the transpose of (X_{R,org}^p(x_2,y_2) − μ_{R,org}^p).

②-5b. Perform a Cholesky decomposition of the covariance matrix C_{R,org}^p of the current sub-block I_{R,org}^p, C_{R,org}^p = L L^T, and obtain the Sigma feature set of I_{R,org}^p, denoted S_{R,org}^p:

$S_{R,org}^{p}=\left[10\times L^{(1)},\ldots,10\times L^{(i')},\ldots,10\times L^{(7)},\ -10\times L^{(1)},\ldots,-10\times L^{(i')},\ldots,-10\times L^{(7)},\ \mu_{R,org}^{p}\right]$

where L^T is the transpose matrix of L, the dimension of S_{R,org}^p is 7×15, the symbol "[]" denotes a vector, 1≤i'≤7, and L^{(i')} denotes the i'-th column vector of L (L^{(1)} is the 1st and L^{(7)} the 7th column vector of L).

②-6b. Using the same operations as in steps ②-3b to ②-5b, obtain the Sigma feature set of each neighborhood sub-block formed by the 9×9 neighborhood window centered on a neighborhood pixel; the Sigma feature set of I_{R,org}^q is denoted S_{R,org}^q, whose dimension is 7×15.

②-7b. From the Sigma feature set S_{R,org}^p of the current sub-block and the Sigma feature sets of the neighborhood sub-blocks formed by the 9×9 neighborhood windows centered on each neighborhood pixel, obtain the structure information of the current pixel, denoted I_{R,org}^{str}(p):

$I_{R,org}^{str}(p)=\frac{\sum_{q\in N'(p)}\exp\left(-\frac{\|S_{R,org}^{p}-S_{R,org}^{q}\|^{2}}{2\sigma^{2}}\right)\times R_{org}(q)}{\sum_{q\in N'(p)}\exp\left(-\frac{\|S_{R,org}^{p}-S_{R,org}^{q}\|^{2}}{2\sigma^{2}}\right)}$

where N'(p) denotes the set of coordinate positions in {R_org(x,y)} of all neighborhood pixels within the 21×21 neighborhood window centered on the current pixel, exp() denotes the exponential function with base e, e = 2.71828183, σ denotes the standard deviation of the Gaussian function (σ = 0.06 in this embodiment), the symbol "|| ||" denotes the Euclidean distance, and R_org(q) denotes the pixel value of the pixel whose coordinate position is q in {R_org(x,y)}.

②-8b. From the structure information I_{R,org}^{str}(p) of the current pixel, obtain its texture information, denoted I_{R,org}^{tex}(p): I_{R,org}^{tex}(p) = R_org(p) − I_{R,org}^{str}(p), where R_org(p) denotes the pixel value of the current pixel.

②-9b. Take the next pixel to be processed in {R_org(x,y)} as the current pixel and return to step ②-2b, repeating until all pixels in {R_org(x,y)} have been processed, which yields the structure information and texture information of every pixel in {R_org(x,y)}. The structure information of all pixels in {R_org(x,y)} forms the structure image of {R_org(x,y)}, denoted {I_{R,org}^{str}(x,y)}, and the texture information of all pixels forms the texture image of {R_org(x,y)}, denoted {I_{R,org}^{tex}(x,y)}.

In step ②, the structure image {I_{L,dis}^{str}(x,y)} and texture image {I_{L,dis}^{tex}(x,y)} of {L_dis(x,y)} are obtained as follows:

②-1c. Define the pixel currently to be processed in {L_dis(x,y)} as the current pixel.

②-2c. Record the coordinate position of the current pixel in {L_dis(x,y)} as p. Define each pixel other than the current pixel within the 21×21 neighborhood window centered on the current pixel as a neighborhood pixel. Define the block formed by the 9×9 neighborhood window centered on the current pixel as the current sub-block, denoted I_{L,dis}^p, and define each block formed by a 9×9 neighborhood window centered on a neighborhood pixel within the 21×21 window as a neighborhood sub-block; the neighborhood sub-block centered on the neighborhood pixel whose coordinate position in {L_dis(x,y)} is q is denoted I_{L,dis}^q. Here p∈Ω and q∈Ω, where Ω denotes the set of coordinate positions of all pixels in {L_dis(x,y)}; (x_2, y_2) denotes a coordinate position of a pixel within the current sub-block I_{L,dis}^p, 1≤x_2≤9, 1≤y_2≤9, and I_{L,dis}^p(x_2, y_2) denotes the pixel value of the pixel at (x_2, y_2) in I_{L,dis}^p; (x_3, y_3) denotes a coordinate position of a pixel within I_{L,dis}^q, 1≤x_3≤9, 1≤y_3≤9, and I_{L,dis}^q(x_3, y_3) denotes the pixel value of the pixel at (x_3, y_3) in I_{L,dis}^q.

In step ②-2c above, if the coordinate position of a pixel inside the block formed by the 9×9 neighborhood window centered on the current pixel lies outside the boundary of {L_dis(x,y)}, its pixel value is replaced by that of the nearest boundary pixel; if the coordinate position of a neighborhood pixel in the 21×21 neighborhood window centered on the current pixel lies outside the boundary of {L_dis(x,y)}, its pixel value is replaced by that of the nearest boundary pixel; and if a pixel inside the block formed by the 9×9 neighborhood window centered on any such neighborhood pixel lies outside the boundary of {L_dis(x,y)}, its pixel value is likewise replaced by that of the nearest boundary pixel.

②-3c. Obtain the feature vector of each pixel in the current sub-block I_{L,dis}^p. The feature vector of the pixel at coordinate position (x_2, y_2) in I_{L,dis}^p is denoted X_{L,dis}^p(x_2, y_2):

$X_{L,dis}^{p}(x_2,y_2)=\left[I_{L,dis}^{p}(x_2,y_2),\ \left|\frac{\partial I_{L,dis}^{p}(x_2,y_2)}{\partial x}\right|,\ \left|\frac{\partial I_{L,dis}^{p}(x_2,y_2)}{\partial y}\right|,\ \left|\frac{\partial^{2} I_{L,dis}^{p}(x_2,y_2)}{\partial x^{2}}\right|,\ \left|\frac{\partial^{2} I_{L,dis}^{p}(x_2,y_2)}{\partial y^{2}}\right|,\ x_2,\ y_2\right]$

where the dimension of X_{L,dis}^p(x_2, y_2) is 7, the symbol "[]" denotes a vector, the symbol "| |" denotes absolute value, I_{L,dis}^p(x_2, y_2) denotes the pixel value of the pixel at coordinate position (x_2, y_2) in the current sub-block I_{L,dis}^p, and the four derivative terms are the absolute values of the first-order and second-order partial derivatives of I_{L,dis}^p(x_2, y_2) in the horizontal (x) and vertical (y) directions, respectively.

②-4c. From the feature vectors of all pixels in the current sub-block I_{L,dis}^p, compute the covariance matrix of I_{L,dis}^p, denoted C_{L,dis}^p:

$C_{L,dis}^{p}=\frac{1}{7\times 7-1}\sum_{x_2=1}^{9}\sum_{y_2=1}^{9}\left(X_{L,dis}^{p}(x_2,y_2)-\mu_{L,dis}^{p}\right)\left(X_{L,dis}^{p}(x_2,y_2)-\mu_{L,dis}^{p}\right)^{T}$

where the dimension of C_{L,dis}^p is 7×7, μ_{L,dis}^p denotes the mean vector of the feature vectors of all pixels in the current sub-block I_{L,dis}^p, and (X_{L,dis}^p(x_2,y_2) − μ_{L,dis}^p)^T denotes the transpose of (X_{L,dis}^p(x_2,y_2) − μ_{L,dis}^p).

②-5c. Perform a Cholesky decomposition of the covariance matrix C_{L,dis}^p of the current sub-block I_{L,dis}^p, C_{L,dis}^p = L L^T, and obtain the Sigma feature set of I_{L,dis}^p, denoted S_{L,dis}^p:

$S_{L,dis}^{p}=\left[10\times L^{(1)},\ldots,10\times L^{(i')},\ldots,10\times L^{(7)},\ -10\times L^{(1)},\ldots,-10\times L^{(i')},\ldots,-10\times L^{(7)},\ \mu_{L,dis}^{p}\right]$

where L^T is the transpose matrix of L, the dimension of S_{L,dis}^p is 7×15, the symbol "[]" denotes a vector, 1≤i'≤7, and L^{(i')} denotes the i'-th column vector of L (L^{(1)} is the 1st and L^{(7)} the 7th column vector of L).

②-6c. Using the same operations as in steps ②-3c to ②-5c, obtain the Sigma feature set of each neighborhood sub-block formed by the 9×9 neighborhood window centered on a neighborhood pixel; the Sigma feature set of I_{L,dis}^q is denoted S_{L,dis}^q, whose dimension is 7×15.

②-7c. From the Sigma feature set S_{L,dis}^p of the current sub-block and the Sigma feature sets of the neighborhood sub-blocks formed by the 9×9 neighborhood windows centered on each neighborhood pixel, obtain the structure information of the current pixel, denoted I_{L,dis}^{str}(p):

$I_{L,dis}^{str}(p)=\frac{\sum_{q\in N'(p)}\exp\left(-\frac{\|S_{L,dis}^{p}-S_{L,dis}^{q}\|^{2}}{2\sigma^{2}}\right)\times L_{dis}(q)}{\sum_{q\in N'(p)}\exp\left(-\frac{\|S_{L,dis}^{p}-S_{L,dis}^{q}\|^{2}}{2\sigma^{2}}\right)}$

where N'(p) denotes the set of coordinate positions in {L_dis(x,y)} of all neighborhood pixels within the 21×21 neighborhood window centered on the current pixel, exp() denotes the exponential function with base e, e = 2.71828183, σ denotes the standard deviation of the Gaussian function (σ = 0.06 in this embodiment), the symbol "|| ||" denotes the Euclidean distance, and L_dis(q) denotes the pixel value of the pixel whose coordinate position is q in {L_dis(x,y)}.

②-8c. From the structural information $I_{L,dis}^{str}(p)$ of the current pixel, obtain the texture information of the current pixel, denoted $I_{L,dis}^{tex}(p)$: $I_{L,dis}^{tex}(p)=L_{dis}(p)-I_{L,dis}^{str}(p)$, where $L_{dis}(p)$ denotes the pixel value of the current pixel.

②-9c. Take the next pixel to be processed in {L_dis(x,y)} as the current pixel and return to step ②-2c; continue until all pixels in {L_dis(x,y)} have been processed, obtaining the structural information and texture information of every pixel in {L_dis(x,y)}. The structural information of all pixels in {L_dis(x,y)} constitutes the structure image of {L_dis(x,y)}, denoted $\{I_{L,dis}^{str}(x,y)\}$; the texture information of all pixels in {L_dis(x,y)} constitutes the texture image of {L_dis(x,y)}, denoted $\{I_{L,dis}^{tex}(x,y)\}$.
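Steps ②-1c through ②-9c amount to a Sigma-set-weighted non-local filter applied pixel by pixel. The Python sketch below illustrates the procedure on a grayscale array; the central-difference derivative operator (np.gradient), the small diagonal jitter that keeps the Cholesky decomposition valid for rank-deficient covariance matrices, and the function names are assumptions added for illustration, not fixed by the patent text:

```python
import numpy as np

def sigma_feature_set(block):
    """Sigma feature set of a 9x9 block (steps 2-3c..2-5c): 7-D per-pixel
    features [I, |dI/dx|, |dI/dy|, |d2I/dx2|, |d2I/dy2|, x, y], their
    covariance, Cholesky factor L, then [10*L, -10*L, mean] -> 7x15."""
    block = np.asarray(block, dtype=float)
    h, w = block.shape
    ys, xs = np.mgrid[1:h + 1, 1:w + 1].astype(float)
    dx = np.abs(np.gradient(block, axis=1))            # |dI/dx|
    dy = np.abs(np.gradient(block, axis=0))            # |dI/dy|
    dxx = np.abs(np.gradient(np.gradient(block, axis=1), axis=1))
    dyy = np.abs(np.gradient(np.gradient(block, axis=0), axis=0))
    X = np.stack([block, dx, dy, dxx, dyy, xs, ys], axis=-1).reshape(-1, 7)
    mu = X.mean(axis=0)
    C = (X - mu).T @ (X - mu) / (7 * 7 - 1)            # normalisation as written in 2-4c/2-4d
    L = np.linalg.cholesky(C + 1e-8 * np.eye(7))       # C = L L^T (jitter for stability)
    return np.concatenate([10 * L, -10 * L, mu[:, None]], axis=1)  # 7 x 15

def structure_pixel(img, p, sigma=0.06, half_search=10, half_block=4):
    """Structural information of the pixel at (row, col) = p (step 2-7c):
    neighbours in the 21x21 search window are weighted by the similarity of
    their Sigma feature sets; out-of-bounds pixels replicate the border."""
    pad = half_search + half_block
    padded = np.pad(np.asarray(img, dtype=float), pad, mode='edge')
    py, px = p[0] + pad, p[1] + pad

    def blk(cy, cx):
        return padded[cy - half_block:cy + half_block + 1,
                      cx - half_block:cx + half_block + 1]

    Sp = sigma_feature_set(blk(py, px))
    num = den = 0.0
    for qy in range(py - half_search, py + half_search + 1):
        for qx in range(px - half_search, px + half_search + 1):
            if (qy, qx) == (py, px):
                continue                               # q ranges over neighbours only
            w = np.exp(-np.sum((Sp - sigma_feature_set(blk(qy, qx))) ** 2)
                       / (2 * sigma ** 2))
            num += w * padded[qy, qx]
            den += w
    return num / den
```

The texture information of step ②-8c is then simply `img[p] - structure_pixel(img, p)`.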

The structure image $\{I_{R,dis}^{str}(x,y)\}$ and the texture image $\{I_{R,dis}^{tex}(x,y)\}$ of {R_dis(x,y)} in step ② are obtained as follows:

②-1d. Define the pixel currently to be processed in {R_dis(x,y)} as the current pixel.

②-2d. Denote the coordinate position of the current pixel in {R_dis(x,y)} as p. Define each pixel, other than the current pixel itself, within the 21×21 neighborhood window centered on the current pixel as a neighborhood pixel. Define the block formed by the 9×9 neighborhood window centered on the current pixel as the current sub-block, denoted $x_{R,dis}^p$. Define each block formed by a 9×9 neighborhood window centered on a neighborhood pixel within the 21×21 neighborhood window as a neighborhood sub-block; the neighborhood sub-block centered on the neighborhood pixel whose coordinate position in {R_dis(x,y)} is q is denoted $x_{R,dis}^q$. Here p∈Ω and q∈Ω, where Ω denotes the set of coordinate positions of all pixels in {R_dis(x,y)}; $(x_2,y_2)$, with $1\le x_2\le 9$ and $1\le y_2\le 9$, denotes the coordinate position of a pixel within the current sub-block, and $x_{R,dis}^p(x_2,y_2)$ denotes the pixel value of the pixel at $(x_2,y_2)$ in $x_{R,dis}^p$; $(x_3,y_3)$, with $1\le x_3\le 9$ and $1\le y_3\le 9$, denotes the coordinate position of a pixel within $x_{R,dis}^q$, and $x_{R,dis}^q(x_3,y_3)$ denotes the pixel value of the pixel at $(x_3,y_3)$ in $x_{R,dis}^q$.

In step ②-2d above, if the coordinate position of a pixel inside the block formed by the 9×9 neighborhood window centered on the current pixel falls outside the boundary of {R_dis(x,y)}, the pixel value of that pixel is replaced by the pixel value of the nearest boundary pixel; if a neighborhood pixel within the 21×21 neighborhood window centered on the current pixel falls outside the boundary of {R_dis(x,y)}, its pixel value is likewise replaced by that of the nearest boundary pixel; and if a pixel inside a block formed by a 9×9 neighborhood window centered on a neighborhood pixel falls outside the boundary of {R_dis(x,y)}, its pixel value is replaced by that of the nearest boundary pixel as well.

②-3d. Obtain the feature vector of each pixel in the current sub-block $x_{R,dis}^p$. The feature vector of the pixel at coordinate position $(x_2,y_2)$ in $x_{R,dis}^p$ is denoted $X_{R,dis}^p(x_2,y_2)$:

$$X_{R,dis}^p(x_2,y_2)=\left[I_{R,dis}^p(x_2,y_2),\;\left|\frac{\partial I_{R,dis}^p(x_2,y_2)}{\partial x}\right|,\;\left|\frac{\partial I_{R,dis}^p(x_2,y_2)}{\partial y}\right|,\;\left|\frac{\partial^2 I_{R,dis}^p(x_2,y_2)}{\partial x^2}\right|,\;\left|\frac{\partial^2 I_{R,dis}^p(x_2,y_2)}{\partial y^2}\right|,\;x_2,\;y_2\right]$$

where the dimension of $X_{R,dis}^p(x_2,y_2)$ is 7; the symbol "[ ]" denotes a vector and the symbol "| |" denotes absolute value; $I_{R,dis}^p(x_2,y_2)$ denotes the density value of the pixel at $(x_2,y_2)$ in the current sub-block; $\left|\frac{\partial I_{R,dis}^p(x_2,y_2)}{\partial x}\right|$ and $\left|\frac{\partial I_{R,dis}^p(x_2,y_2)}{\partial y}\right|$ are the magnitudes of its first-order partial derivatives in the horizontal and vertical directions; and $\left|\frac{\partial^2 I_{R,dis}^p(x_2,y_2)}{\partial x^2}\right|$ and $\left|\frac{\partial^2 I_{R,dis}^p(x_2,y_2)}{\partial y^2}\right|$ are the magnitudes of its second-order partial derivatives in the horizontal and vertical directions.

②-4d. From the feature vectors of all pixels in the current sub-block $x_{R,dis}^p$, compute the covariance matrix of the current sub-block, denoted $C_{R,dis}^p$:

$$C_{R,dis}^p=\frac{1}{7\times 7-1}\sum_{x_2=1}^{9}\sum_{y_2=1}^{9}\left(X_{R,dis}^p(x_2,y_2)-\mu_{R,dis}^p\right)\left(X_{R,dis}^p(x_2,y_2)-\mu_{R,dis}^p\right)^{T}$$

where the dimension of $C_{R,dis}^p$ is 7×7, $\mu_{R,dis}^p$ denotes the mean vector of the feature vectors of all pixels in the current sub-block, and $\left(X_{R,dis}^p(x_2,y_2)-\mu_{R,dis}^p\right)^{T}$ is the transpose of $\left(X_{R,dis}^p(x_2,y_2)-\mu_{R,dis}^p\right)$.

②-5d. Perform a Cholesky decomposition of the covariance matrix of the current sub-block $x_{R,dis}^p$, $C_{R,dis}^p=LL^{T}$, and obtain the Sigma feature set of the current sub-block, denoted $S_{R,dis}^p$:

$$S_{R,dis}^p=\left[10\times L^{(1)},\ldots,10\times L^{(i')},\ldots,10\times L^{(7)},\;-10\times L^{(1)},\ldots,-10\times L^{(i')},\ldots,-10\times L^{(7)},\;\mu_{R,dis}^p\right]$$

where $L^{T}$ is the transpose matrix of L, the dimension of $S_{R,dis}^p$ is 7×15, the symbol "[ ]" denotes a vector, $1\le i'\le 7$, and $L^{(1)}$, $L^{(i')}$ and $L^{(7)}$ denote the 1st, i'-th and 7th column vectors of L, respectively.

②-6d. Using the same operations as steps ②-3d through ②-5d, obtain the Sigma feature set of the neighborhood sub-block formed by the 9×9 neighborhood window centered on each neighborhood pixel; the Sigma feature set of $x_{R,dis}^q$ is denoted $S_{R,dis}^q$, and the dimension of $S_{R,dis}^q$ is 7×15.

②-7d. From the Sigma feature set $S_{R,dis}^p$ of the current sub-block $x_{R,dis}^p$ and the Sigma feature sets of the neighborhood sub-blocks formed by the 9×9 neighborhood windows centered on each neighborhood pixel, obtain the structural information of the current pixel, denoted $I_{R,dis}^{str}(p)$:

$$I_{R,dis}^{str}(p)=\frac{\displaystyle\sum_{q\in N'(p)}\exp\!\left(-\frac{\|S_{R,dis}^{p}-S_{R,dis}^{q}\|^{2}}{2\sigma^{2}}\right)\times R_{dis}(q)}{\displaystyle\sum_{q\in N'(p)}\exp\!\left(-\frac{\|S_{R,dis}^{p}-S_{R,dis}^{q}\|^{2}}{2\sigma^{2}}\right)}$$

where N'(p) denotes the set of coordinate positions in {R_dis(x,y)} of all neighborhood pixels within the 21×21 neighborhood window centered on the current pixel, exp() denotes the exponential function with base e (e=2.71828183), σ denotes the standard deviation of the Gaussian function (σ=0.06 in this embodiment), the symbol "‖·‖" denotes Euclidean distance, and $R_{dis}(q)$ denotes the pixel value of the pixel at coordinate position q in {R_dis(x,y)}.

②-8d. From the structural information $I_{R,dis}^{str}(p)$ of the current pixel, obtain the texture information of the current pixel, denoted $I_{R,dis}^{tex}(p)$: $I_{R,dis}^{tex}(p)=R_{dis}(p)-I_{R,dis}^{str}(p)$, where $R_{dis}(p)$ denotes the pixel value of the current pixel.

②-9d. Take the next pixel to be processed in {R_dis(x,y)} as the current pixel and return to step ②-2d; continue until all pixels in {R_dis(x,y)} have been processed, obtaining the structural information and texture information of every pixel in {R_dis(x,y)}. The structural information of all pixels in {R_dis(x,y)} constitutes the structure image of {R_dis(x,y)}, denoted $\{I_{R,dis}^{str}(x,y)\}$; the texture information of all pixels in {R_dis(x,y)} constitutes the texture image of {R_dis(x,y)}, denoted $\{I_{R,dis}^{tex}(x,y)\}$.

③ Compared with the original image, the structure image is more stable because texture and other detail information has been separated out. The method therefore computes the gradient similarity between each pixel in $\{I_{L,org}^{str}(x,y)\}$ and the corresponding pixel in $\{I_{L,dis}^{str}(x,y)\}$. The gradient similarity between the pixels at coordinate position (x,y) in the two images is denoted $Q_L^{str}(x,y)$:

$$Q_L^{str}(x,y)=\frac{2\times m_{L,org}^{str}(x,y)\times m_{L,dis}^{str}(x,y)+C_1}{\left(m_{L,org}^{str}(x,y)\right)^{2}+\left(m_{L,dis}^{str}(x,y)\right)^{2}+C_1}$$

where $m_{L,org}^{str}(x,y)=\sqrt{\left(gx_{L,org}^{str}(x,y)\right)^{2}+\left(gy_{L,org}^{str}(x,y)\right)^{2}}$ and $m_{L,dis}^{str}(x,y)=\sqrt{\left(gx_{L,dis}^{str}(x,y)\right)^{2}+\left(gy_{L,dis}^{str}(x,y)\right)^{2}}$; $gx_{L,org}^{str}(x,y)$ and $gy_{L,org}^{str}(x,y)$ denote the horizontal and vertical gradients of the pixel at (x,y) in $\{I_{L,org}^{str}(x,y)\}$; $gx_{L,dis}^{str}(x,y)$ and $gy_{L,dis}^{str}(x,y)$ denote the horizontal and vertical gradients of the pixel at (x,y) in $\{I_{L,dis}^{str}(x,y)\}$; and $C_1$ is a control parameter ($C_1=0.0026$ in this embodiment). Then, from the gradient similarities between all corresponding pixels, the objective image quality prediction value of $\{I_{L,dis}^{str}(x,y)\}$, denoted $Q_L^{str}$, is computed as

$$Q_L^{str}=\frac{\sum_{x=1}^{W}\sum_{y=1}^{H}Q_L^{str}(x,y)}{W\times H}.$$

Likewise, compute the gradient similarity between each pixel in $\{I_{R,org}^{str}(x,y)\}$ and the corresponding pixel in $\{I_{R,dis}^{str}(x,y)\}$. The gradient similarity between the pixels at coordinate position (x,y) in the two images is denoted $Q_R^{str}(x,y)$:

$$Q_R^{str}(x,y)=\frac{2\times m_{R,org}^{str}(x,y)\times m_{R,dis}^{str}(x,y)+C_1}{\left(m_{R,org}^{str}(x,y)\right)^{2}+\left(m_{R,dis}^{str}(x,y)\right)^{2}+C_1}$$

where $m_{R,org}^{str}(x,y)=\sqrt{\left(gx_{R,org}^{str}(x,y)\right)^{2}+\left(gy_{R,org}^{str}(x,y)\right)^{2}}$ and $m_{R,dis}^{str}(x,y)=\sqrt{\left(gx_{R,dis}^{str}(x,y)\right)^{2}+\left(gy_{R,dis}^{str}(x,y)\right)^{2}}$; $gx_{R,org}^{str}(x,y)$ and $gy_{R,org}^{str}(x,y)$ denote the horizontal and vertical gradients of the pixel at (x,y) in $\{I_{R,org}^{str}(x,y)\}$; $gx_{R,dis}^{str}(x,y)$ and $gy_{R,dis}^{str}(x,y)$ denote the horizontal and vertical gradients of the pixel at (x,y) in $\{I_{R,dis}^{str}(x,y)\}$; and $C_1$ is a control parameter ($C_1=0.0026$ in this embodiment). Then, from the gradient similarities between all corresponding pixels, the objective image quality prediction value of $\{I_{R,dis}^{str}(x,y)\}$, denoted $Q_R^{str}$, is computed as

$$Q_R^{str}=\frac{\sum_{x=1}^{W}\sum_{y=1}^{H}Q_R^{str}(x,y)}{W\times H}.$$
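Step ③'s per-pixel gradient similarity and its average can be sketched compactly. The patent does not name a gradient operator, so the central-difference np.gradient used here is an assumption:

```python
import numpy as np

def gradient_similarity_score(str_org, str_dis, C1=0.0026):
    """Objective score of a distorted structure image (step 3): per-pixel
    gradient-magnitude similarity, averaged over all W x H pixels."""
    gy_o, gx_o = np.gradient(np.asarray(str_org, dtype=float))  # vertical, horizontal
    gy_d, gx_d = np.gradient(np.asarray(str_dis, dtype=float))
    m_org = np.sqrt(gx_o ** 2 + gy_o ** 2)   # m_{org}^{str}(x, y)
    m_dis = np.sqrt(gx_d ** 2 + gy_d ** 2)   # m_{dis}^{str}(x, y)
    q = (2 * m_org * m_dis + C1) / (m_org ** 2 + m_dis ** 2 + C1)
    return float(q.mean())
```

For identical structure images the score is exactly 1; any gradient discrepancy pulls it below 1 (by the AM–GM inequality, the numerator never exceeds the denominator).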

④ Because mean and standard deviation information evaluates changes in image detail well, the method computes the structural similarity between each 8×8 sub-block of $\{I_{L,org}^{tex}(x,y)\}$ and the corresponding 8×8 sub-block of $\{I_{L,dis}^{tex}(x,y)\}$, from which the objective image quality prediction value of $\{I_{L,dis}^{tex}(x,y)\}$, denoted $Q_L^{tex}$, is obtained.

In this embodiment, the objective image quality prediction value $Q_L^{tex}$ of $\{I_{L,dis}^{tex}(x,y)\}$ in step ④ is obtained as follows:

④-1a. Divide $\{I_{L,org}^{tex}(x,y)\}$ and $\{I_{L,dis}^{tex}(x,y)\}$ each into $\frac{W\times H}{8\times 8}$ mutually non-overlapping sub-blocks of size 8×8. Define the k-th sub-block currently to be processed in $\{I_{L,org}^{tex}(x,y)\}$ as the current first sub-block and the k-th sub-block currently to be processed in $\{I_{L,dis}^{tex}(x,y)\}$ as the current second sub-block, where $1\le k\le\frac{W\times H}{8\times 8}$ and the initial value of k is 1.

④-2a. Denote the current first sub-block as $\{f_L^{org,k}(x_4,y_4)\}$ and the current second sub-block as $\{f_L^{dis,k}(x_4,y_4)\}$, where $(x_4,y_4)$, with $1\le x_4\le 8$ and $1\le y_4\le 8$, denotes the coordinate position of a pixel within each sub-block; $f_L^{org,k}(x_4,y_4)$ denotes the pixel value of the pixel at $(x_4,y_4)$ in the current first sub-block; and $f_L^{dis,k}(x_4,y_4)$ denotes the pixel value of the pixel at $(x_4,y_4)$ in the current second sub-block.

④-3a. Compute the mean and standard deviation of the current first sub-block, denoted $\mu_L^{org,k}$ and $\sigma_L^{org,k}$:

$$\mu_L^{org,k}=\frac{\sum_{y_4=1}^{8}\sum_{x_4=1}^{8}f_L^{org,k}(x_4,y_4)}{64},\qquad \sigma_L^{org,k}=\sqrt{\frac{\sum_{y_4=1}^{8}\sum_{x_4=1}^{8}\left(f_L^{org,k}(x_4,y_4)-\mu_L^{org,k}\right)^{2}}{64}}.$$

Likewise, compute the mean and standard deviation of the current second sub-block, denoted $\mu_L^{dis,k}$ and $\sigma_L^{dis,k}$:

$$\mu_L^{dis,k}=\frac{\sum_{y_4=1}^{8}\sum_{x_4=1}^{8}f_L^{dis,k}(x_4,y_4)}{64},\qquad \sigma_L^{dis,k}=\sqrt{\frac{\sum_{y_4=1}^{8}\sum_{x_4=1}^{8}\left(f_L^{dis,k}(x_4,y_4)-\mu_L^{dis,k}\right)^{2}}{64}}.$$

④-4a. Compute the structural similarity between the current first sub-block and the current second sub-block, denoted $Q_{L,k}^{tex}$:

$$Q_{L,k}^{tex}=\frac{4\times\left(\sigma_L^{org,k}\times\sigma_L^{dis,k}\right)\times\left(\mu_L^{org,k}\times\mu_L^{dis,k}\right)+C_2}{\left(\left(\sigma_L^{org,k}\right)^{2}+\left(\sigma_L^{dis,k}\right)^{2}\right)+\left(\left(\mu_L^{org,k}\right)^{2}+\left(\mu_L^{dis,k}\right)^{2}\right)+C_2}$$

where $C_2$ is a control parameter ($C_2=0.85$ in this embodiment).

④-5a. Let k=k+1, where "=" in k=k+1 is the assignment operator. Take the next sub-block to be processed in $\{I_{L,org}^{tex}(x,y)\}$ as the current first sub-block and the next sub-block to be processed in $\{I_{L,dis}^{tex}(x,y)\}$ as the current second sub-block, then return to step ④-2a; continue until all sub-blocks in $\{I_{L,org}^{tex}(x,y)\}$ and $\{I_{L,dis}^{tex}(x,y)\}$ have been processed, obtaining the structural similarity between each sub-block of $\{I_{L,org}^{tex}(x,y)\}$ and the corresponding sub-block of $\{I_{L,dis}^{tex}(x,y)\}$.

④-6a. From the structural similarity between each sub-block of $\{I_{L,org}^{tex}(x,y)\}$ and the corresponding sub-block of $\{I_{L,dis}^{tex}(x,y)\}$, compute the objective image quality prediction value of $\{I_{L,dis}^{tex}(x,y)\}$, denoted $Q_L^{tex}$, as the mean of the structural similarities over all sub-blocks:

$$Q_L^{tex}=\frac{8\times 8}{W\times H}\sum_{k=1}^{(W\times H)/(8\times 8)}Q_{L,k}^{tex}.$$
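Steps ④-1a through ④-6a (and their b counterparts) reduce to a block-wise mean/standard-deviation similarity. A sketch, assuming image dimensions divisible by 8 and using the population standard deviation (divide by 64) as in the formulas above:

```python
import numpy as np

def texture_score(tex_org, tex_dis, C2=0.85):
    """Objective score of a distorted texture image (step 4): the structural
    similarity of corresponding non-overlapping 8x8 sub-blocks, averaged."""
    H, W = tex_org.shape
    scores = []
    for y in range(0, H // 8 * 8, 8):
        for x in range(0, W // 8 * 8, 8):
            a = tex_org[y:y + 8, x:x + 8]
            b = tex_dis[y:y + 8, x:x + 8]
            mu_a, mu_b = a.mean(), b.mean()
            sd_a = float(np.sqrt(((a - mu_a) ** 2).mean()))  # population std
            sd_b = float(np.sqrt(((b - mu_b) ** 2).mean()))
            q = (4 * (sd_a * sd_b) * (mu_a * mu_b) + C2) / \
                ((sd_a ** 2 + sd_b ** 2) + (mu_a ** 2 + mu_b ** 2) + C2)
            scores.append(q)
    return float(np.mean(scores))
```

Note that, as defined in step ④-4a, even two identical sub-blocks generally score below 1, since the numerator fuses contrast and luminance into a single product; the constant C₂ keeps the ratio stable for flat blocks.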

Likewise, by computing the structural similarity between each 8×8 sub-block of $\{I_{R,org}^{tex}(x,y)\}$ and the corresponding 8×8 sub-block of $\{I_{R,dis}^{tex}(x,y)\}$, the objective image quality prediction value of $\{I_{R,dis}^{tex}(x,y)\}$, denoted $Q_R^{tex}$, is obtained.

In this embodiment, the objective image quality prediction value $Q_R^{tex}$ of $\{I_{R,dis}^{tex}(x,y)\}$ in step ④ is obtained as follows:

④-1b. Divide $\{I_{R,org}^{tex}(x,y)\}$ and $\{I_{R,dis}^{tex}(x,y)\}$ each into $\frac{W\times H}{8\times 8}$ mutually non-overlapping sub-blocks of size 8×8. Define the k-th sub-block currently to be processed in $\{I_{R,org}^{tex}(x,y)\}$ as the current first sub-block and the k-th sub-block currently to be processed in $\{I_{R,dis}^{tex}(x,y)\}$ as the current second sub-block, where $1\le k\le\frac{W\times H}{8\times 8}$ and the initial value of k is 1.

④-2b. Denote the current first sub-block as $\{f_R^{org,k}(x_4,y_4)\}$ and the current second sub-block as $\{f_R^{dis,k}(x_4,y_4)\}$, where $(x_4,y_4)$, with $1\le x_4\le 8$ and $1\le y_4\le 8$, denotes the coordinate position of a pixel within each sub-block; $f_R^{org,k}(x_4,y_4)$ denotes the pixel value of the pixel at $(x_4,y_4)$ in the current first sub-block; and $f_R^{dis,k}(x_4,y_4)$ denotes the pixel value of the pixel at $(x_4,y_4)$ in the current second sub-block.

④-3b. Compute the mean and standard deviation of the current first sub-block, denoted $\mu_R^{org,k}$ and $\sigma_R^{org,k}$:

$$\mu_R^{org,k}=\frac{\sum_{y_4=1}^{8}\sum_{x_4=1}^{8}f_R^{org,k}(x_4,y_4)}{64},\qquad \sigma_R^{org,k}=\sqrt{\frac{\sum_{y_4=1}^{8}\sum_{x_4=1}^{8}\left(f_R^{org,k}(x_4,y_4)-\mu_R^{org,k}\right)^{2}}{64}}.$$

Likewise, compute the mean and standard deviation of the current second sub-block, denoted $\mu_R^{dis,k}$ and $\sigma_R^{dis,k}$:

$$\mu_R^{dis,k}=\frac{\sum_{y_4=1}^{8}\sum_{x_4=1}^{8}f_R^{dis,k}(x_4,y_4)}{64},\qquad \sigma_R^{dis,k}=\sqrt{\frac{\sum_{y_4=1}^{8}\sum_{x_4=1}^{8}\left(f_R^{dis,k}(x_4,y_4)-\mu_R^{dis,k}\right)^{2}}{64}}.$$

④-4b. Compute the structural similarity between the current first sub-block and the current second sub-block, denoted $Q_{R,k}^{tex}$:

$$Q_{R,k}^{tex}=\frac{4\times\left(\sigma_R^{org,k}\times\sigma_R^{dis,k}\right)\times\left(\mu_R^{org,k}\times\mu_R^{dis,k}\right)+C_2}{\left(\left(\sigma_R^{org,k}\right)^{2}+\left(\sigma_R^{dis,k}\right)^{2}\right)+\left(\left(\mu_R^{org,k}\right)^{2}+\left(\mu_R^{dis,k}\right)^{2}\right)+C_2}$$

where $C_2$ is a control parameter ($C_2=0.85$ in this embodiment).

④-5b. Let k=k+1, where "=" in k=k+1 is the assignment operator. Take the next sub-block to be processed in $\{I_{R,org}^{tex}(x,y)\}$ as the current first sub-block and the next sub-block to be processed in $\{I_{R,dis}^{tex}(x,y)\}$ as the current second sub-block, then return to step ④-2b; continue until all sub-blocks in $\{I_{R,org}^{tex}(x,y)\}$ and $\{I_{R,dis}^{tex}(x,y)\}$ have been processed, obtaining the structural similarity between each sub-block of $\{I_{R,org}^{tex}(x,y)\}$ and the corresponding sub-block of $\{I_{R,dis}^{tex}(x,y)\}$.

④-6b. From the structural similarity between each sub-block of $\{I_{R,org}^{tex}(x,y)\}$ and the corresponding sub-block of $\{I_{R,dis}^{tex}(x,y)\}$, compute the objective image quality prediction value of $\{I_{R,dis}^{tex}(x,y)\}$, denoted $Q_R^{tex}$, as the mean of the structural similarities over all sub-blocks:

$$Q_R^{tex}=\frac{8\times 8}{W\times H}\sum_{k=1}^{(W\times H)/(8\times 8)}Q_{R,k}^{tex}.$$

⑤ Fuse $Q_L^{str}$ and $Q_R^{str}$ to obtain the objective image quality prediction value of the structure images of $S_{dis}$, denoted $Q^{str}$: $Q^{str}=w_s\times Q_L^{str}+(1-w_s)\times Q_R^{str}$, where $w_s$ denotes the weight of $Q_L^{str}$ relative to $Q_R^{str}$. In this embodiment, $w_s=0.980$ for the Ningbo University stereoscopic image database and $w_s=0.629$ for the LIVE stereoscopic image database.

Similarly, fuse
Figure BDA0000479627040000292
to obtain the image quality objective evaluation prediction value of the texture image of Sdis, denoted as Qtex:
Figure BDA0000479627040000293
wherein wt denotes the weight proportion of
Figure BDA0000479627040000294
relative to
Figure BDA0000479627040000295
. In this embodiment, wt = 0.888 for the Ningbo University stereoscopic image database and wt = 0.503 for the LIVE stereoscopic image database.

⑥ Fuse Qstr and Qtex to obtain the image quality objective evaluation prediction value of Sdis, denoted as Q: Q = w×Qstr+(1-w)×Qtex, wherein w denotes the weight proportion of Qstr relative to Qtex. In this embodiment, w = 0.882 for the Ningbo University stereoscopic image database and w = 0.838 for the LIVE stereoscopic image database.
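The three weighted fusions (left/right structure images, left/right texture images, then structure/texture) all share the same form w×a + (1−w)×b. A minimal sketch, using the Ningbo University weights reported in this embodiment and hypothetical per-viewpoint prediction values:

```python
def fuse(first, second, weight):
    # weighted fusion used at every level of the method:
    # result = w * first + (1 - w) * second
    return weight * first + (1.0 - weight) * second

# weights reported in this embodiment for the Ningbo University database
W_S, W_T, W = 0.980, 0.888, 0.882

# hypothetical per-viewpoint prediction values (not from the patent)
q_str = fuse(0.91, 0.89, W_S)   # structure-image prediction Qstr (left, right)
q_tex = fuse(0.84, 0.80, W_T)   # texture-image prediction Qtex (left, right)
q = fuse(q_str, q_tex, W)       # final prediction Q

print(round(q, 4))  # → 0.9009
```

Note how heavily the left viewpoint and the structure component dominate: with these weights the left-view structure score contributes about 0.98×0.882 ≈ 86% of the final value.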

Here, four objective criteria commonly used to assess image quality evaluation methods are adopted as evaluation indicators: the Pearson linear correlation coefficient (PLCC) under nonlinear regression, the Spearman rank-order correlation coefficient (SROCC), the Kendall rank-order correlation coefficient (KROCC), and the root mean squared error (RMSE). PLCC and RMSE reflect the accuracy of the objective evaluation results for distorted stereoscopic images, while SROCC and KROCC reflect their monotonicity.
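A self-contained sketch of the four indicators (pure NumPy, assuming no tied scores; the sample data are hypothetical, and in the full protocol PLCC and RMSE are computed after the five-parameter logistic fitting):

```python
import numpy as np

def plcc(a, b):
    # Pearson linear correlation coefficient (accuracy)
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

def srocc(a, b):
    # Spearman rank-order correlation: Pearson on the ranks (no-ties case)
    rank = lambda v: np.argsort(np.argsort(v)).astype(float)
    return plcc(rank(a), rank(b))

def krocc(a, b):
    # Kendall rank-order correlation (no-ties case):
    # (concordant pairs - discordant pairs) / total pairs
    n, s = len(a), 0.0
    for i in range(n):
        for j in range(i + 1, n):
            s += np.sign((a[i] - a[j]) * (b[i] - b[j]))
    return float(s / (n * (n - 1) / 2))

def rmse(a, b):
    # root mean squared error (accuracy)
    return float(np.sqrt(np.mean((a - b) ** 2)))

# hypothetical objective predictions and mean subjective score differences
q = np.array([0.91, 0.85, 0.78, 0.66, 0.51])
dmos = np.array([0.90, 0.88, 0.75, 0.60, 0.55])
print(plcc(q, dmos), srocc(q, dmos), krocc(q, dmos), rmse(q, dmos))
```

With tied scores the Spearman and Kendall formulas need tie corrections; library implementations (e.g. scipy.stats) handle that case.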

The method of the invention is used to calculate the image quality objective evaluation prediction value of each distorted stereoscopic image in the Ningbo University stereoscopic image database and in the LIVE stereoscopic image database, and the existing subjective evaluation method is used to obtain the mean subjective score difference of each distorted stereoscopic image in the two databases. The prediction values calculated by the method of the invention are then fitted nonlinearly with a five-parameter logistic function; higher PLCC, SROCC and KROCC values and a lower RMSE value indicate a better correlation between the objective evaluation method and the mean subjective score differences. Tables 1, 2, 3 and 4 give the Pearson correlation coefficients, Spearman correlation coefficients, Kendall correlation coefficients and root mean squared errors between the image quality objective evaluation prediction values obtained by the method of the invention and the mean subjective score differences. As can be seen from Tables 1 to 4, the correlation between the final prediction values obtained by the method of the invention and the mean subjective score differences is very high, indicating that the objective evaluation results agree well with the subjective perception of the human eye, which is sufficient to demonstrate the effectiveness of the method of the invention.
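The exact form of the five-parameter logistic function is not written out in the text; a commonly assumed IQA form is sketched below (the parameters are hypothetical, and in practice they are obtained by nonlinear least squares, e.g. scipy.optimize.curve_fit, against the mean subjective score differences):

```python
import numpy as np

def logistic5(q, b1, b2, b3, b4, b5):
    # Assumed five-parameter logistic mapping (standard in IQA studies,
    # not spelled out in the patent text):
    # DMOSp = b1 * (1/2 - 1 / (1 + exp(b2 * (q - b3)))) + b4 * q + b5
    return b1 * (0.5 - 1.0 / (1.0 + np.exp(b2 * (q - b3)))) + b4 * q + b5

# hypothetical fitted parameters; map objective scores onto the subjective scale
q = np.linspace(0.0, 1.0, 5)
mapped = logistic5(q, 1.0, 10.0, 0.5, 0.2, 0.1)
print(mapped)
```

PLCC and RMSE are then computed between the mapped scores and the mean subjective score differences, so that the linear correlation measure is not penalized by a monotone but nonlinear relationship.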

Fig. 2 gives the scatter plot of the image quality objective evaluation prediction value versus the mean subjective score difference for each distorted stereoscopic image in the Ningbo University stereoscopic image database obtained by the method of the invention, and Fig. 3 gives the corresponding scatter plot for the LIVE stereoscopic image database; the more concentrated the scatter points, the better the consistency between the objective evaluation results and subjective perception. As can be seen from Fig. 2 and Fig. 3, the scatter plots obtained by the method of the invention are relatively concentrated and agree closely with the subjective evaluation data.

Table 1 Comparison of the Pearson correlation coefficients between the image quality objective evaluation prediction values of the distorted stereoscopic images obtained by the method of the invention and the mean subjective score differences

Figure BDA0000479627040000301

Table 2 Comparison of the Spearman correlation coefficients between the image quality objective evaluation prediction values of the distorted stereoscopic images obtained by the method of the invention and the mean subjective score differences

Table 3 Comparison of the Kendall correlation coefficients between the image quality objective evaluation prediction values of the distorted stereoscopic images obtained by the method of the invention and the mean subjective score differences

Figure BDA0000479627040000303

Table 4 Comparison of the root mean squared errors between the image quality objective evaluation prediction values of the distorted stereoscopic images obtained by the method of the invention and the mean subjective score differences

Figure BDA0000479627040000311

Claims (4)

1. A three-dimensional image quality objective evaluation method based on structure texture separation is characterized in that the processing process is as follows:
firstly, respectively implementing structure texture separation on a left viewpoint image and a right viewpoint image of an original undistorted stereo image and a left viewpoint image and a right viewpoint image of a distorted stereo image to be evaluated to obtain respective structure images and texture images;
secondly, obtaining an objective evaluation prediction value of the image quality of the structural image of the left viewpoint image of the distorted stereo image to be evaluated by calculating the gradient similarity between each pixel point in the structural image of the left viewpoint image of the original undistorted stereo image and the corresponding pixel point in the structural image of the left viewpoint image of the distorted stereo image to be evaluated; similarly, obtaining an objective evaluation prediction value of the image quality of the structural image of the right viewpoint image of the distorted stereo image to be evaluated by calculating the gradient similarity between each pixel point in the structural image of the right viewpoint image of the original undistorted stereo image and the corresponding pixel point in the structural image of the right viewpoint image of the distorted stereo image to be evaluated;
thirdly, obtaining an objective evaluation prediction value of the image quality of the texture image of the left viewpoint image of the distorted stereo image to be evaluated by calculating the structural similarity between each 8×8 sub-block in the texture image of the left viewpoint image of the original undistorted stereo image and the corresponding 8×8 sub-block in the texture image of the left viewpoint image of the distorted stereo image to be evaluated; similarly, obtaining an objective evaluation prediction value of the image quality of the texture image of the right viewpoint image of the distorted stereo image to be evaluated by calculating the structural similarity between each 8×8 sub-block in the texture image of the right viewpoint image of the original undistorted stereo image and the corresponding 8×8 sub-block in the texture image of the right viewpoint image of the distorted stereo image to be evaluated;
fourthly, fusing the image quality objective evaluation prediction values of the structural images of the left viewpoint image and the right viewpoint image of the distorted stereo image to be evaluated to obtain the image quality objective evaluation prediction value of the structural image of the distorted stereo image to be evaluated; similarly, fusing the image quality objective evaluation prediction values of the texture images of the left viewpoint image and the right viewpoint image of the distorted stereo image to be evaluated to obtain the image quality objective evaluation prediction value of the texture image of the distorted stereo image to be evaluated;
and finally, fusing the image quality objective evaluation predicted value of the structural image and the texture image of the distorted three-dimensional image to be evaluated to obtain the image quality objective evaluation predicted value of the distorted three-dimensional image to be evaluated.
2. The objective evaluation method for stereo image quality based on structure texture separation according to claim 1, characterized in that it comprises the following steps:
① let Sorg denote the original undistorted stereoscopic image and let Sdis denote the distorted stereoscopic image to be evaluated; denote the left viewpoint image of Sorg as {Lorg(x, y)}, the right viewpoint image of Sorg as {Rorg(x, y)}, the left viewpoint image of Sdis as {Ldis(x, y)}, and the right viewpoint image of Sdis as {Rdis(x, y)}, wherein (x, y) denotes the coordinate position of a pixel point in the left viewpoint image and the right viewpoint image, 1 ≤ x ≤ W, 1 ≤ y ≤ H, W denotes the width of the left viewpoint image and the right viewpoint image, H denotes the height of the left viewpoint image and the right viewpoint image, Lorg(x, y) denotes the pixel value of the pixel point with coordinate position (x, y) in {Lorg(x, y)}, Rorg(x, y) denotes the pixel value of the pixel point with coordinate position (x, y) in {Rorg(x, y)}, Ldis(x, y) denotes the pixel value of the pixel point with coordinate position (x, y) in {Ldis(x, y)}, and Rdis(x, y) denotes the pixel value of the pixel point with coordinate position (x, y) in {Rdis(x, y)};
② perform structure-texture separation on {Lorg(x, y)}, {Rorg(x, y)}, {Ldis(x, y)} and {Rdis(x, y)} respectively to obtain the structure image and the texture image of each; denote the structure image and the texture image of {Lorg(x, y)} correspondingly as
Figure FDA0000479627030000022
denote the structure image and the texture image of {Rorg(x, y)} correspondingly as
Figure FDA0000479627030000023
denote the structure image and the texture image of {Ldis(x, y)} correspondingly as
Figure FDA0000479627030000025
and
Figure FDA0000479627030000026
and denote the structure image and the texture image of {Rdis(x, y)} correspondingly as
Figure FDA0000479627030000028
wherein each of the following symbols denotes the pixel value of the pixel point with coordinate position (x, y) in the corresponding structure image or texture image:
Figure FDA00004796270300000210
Figure FDA00004796270300000213
Figure FDA00004796270300000214
Figure FDA00004796270300000216
Figure FDA00004796270300000217
Figure FDA00004796270300000218
Figure FDA00004796270300000219
Figure FDA00004796270300000220
Figure FDA00004796270300000221
Figure FDA00004796270300000222
Figure FDA00004796270300000223
Figure FDA00004796270300000224
③ calculate the gradient similarity between each pixel point in
Figure FDA00004796270300000225
and the corresponding pixel point in
Figure FDA00004796270300000226
; denote the gradient similarity between the pixel point with coordinate position (x, y) in
Figure FDA00004796270300000227
and the pixel point with coordinate position (x, y) in the corresponding structure image as
Figure FDA00004796270300000231
:
$$Q_{L}^{str}(x,y)=\frac{2\times m_{L,org}^{str}(x,y)\times m_{L,dis}^{str}(x,y)+C_{1}}{\left(m_{L,org}^{str}(x,y)\right)^{2}+\left(m_{L,dis}^{str}(x,y)\right)^{2}+C_{1}},$$
wherein $m_{L,org}^{str}(x,y)=\sqrt{\left(gx_{L,org}^{str}(x,y)\right)^{2}+\left(gy_{L,org}^{str}(x,y)\right)^{2}}$ and $m_{L,dis}^{str}(x,y)=\sqrt{\left(gx_{L,dis}^{str}(x,y)\right)^{2}+\left(gy_{L,dis}^{str}(x,y)\right)^{2}}$,
Figure FDA0000479627030000032
denotes the horizontal-direction gradient of the pixel point with coordinate position (x, y) in
Figure FDA0000479627030000033
,
Figure FDA0000479627030000035
denotes the vertical-direction gradient of the pixel point with coordinate position (x, y) in the same structure image,
Figure FDA0000479627030000036
denotes the horizontal-direction gradient of the pixel point with coordinate position (x, y) in
Figure FDA0000479627030000037
,
Figure FDA0000479627030000038
denotes the vertical-direction gradient of the pixel point with coordinate position (x, y) in
Figure FDA0000479627030000039
, and C1 is a control parameter; then, from the gradient similarity between each pixel point in
Figure FDA00004796270300000310
and the corresponding pixel point in
Figure FDA00004796270300000311
, calculate the image quality objective evaluation prediction value of
Figure FDA00004796270300000312
, denoted as
Figure FDA00004796270300000313
:
Figure FDA00004796270300000314
similarly, calculate the gradient similarity between each pixel point in
Figure FDA00004796270300000315
and the corresponding pixel point in the corresponding structure image; denote the gradient similarity between the pixel point with coordinate position (x, y) and the pixel point with coordinate position (x, y) in
Figure FDA00004796270300000318
as
Figure FDA00004796270300000319
:
$$Q_{R}^{str}(x,y)=\frac{2\times m_{R,org}^{str}(x,y)\times m_{R,dis}^{str}(x,y)+C_{1}}{\left(m_{R,org}^{str}(x,y)\right)^{2}+\left(m_{R,dis}^{str}(x,y)\right)^{2}+C_{1}},$$
wherein $m_{R,org}^{str}(x,y)=\sqrt{\left(gx_{R,org}^{str}(x,y)\right)^{2}+\left(gy_{R,org}^{str}(x,y)\right)^{2}}$ and $m_{R,dis}^{str}(x,y)=\sqrt{\left(gx_{R,dis}^{str}(x,y)\right)^{2}+\left(gy_{R,dis}^{str}(x,y)\right)^{2}}$,
Figure FDA00004796270300000325
denotes the horizontal-direction gradient of the pixel point with coordinate position (x, y) in
Figure FDA00004796270300000326
,
Figure FDA00004796270300000327
denotes the vertical-direction gradient of the pixel point with coordinate position (x, y) in
Figure FDA00004796270300000328
,
Figure FDA00004796270300000329
denotes the horizontal-direction gradient of the pixel point with coordinate position (x, y) in
Figure FDA00004796270300000330
with the vertical-direction gradient defined analogously, and C1 is a control parameter; then, from the gradient similarity between each pixel point in
Figure FDA00004796270300000331
and the corresponding pixel point in
Figure FDA00004796270300000332
, calculate the image quality objective evaluation prediction value of
Figure FDA00004796270300000333
, denoted as
Figure FDA00004796270300000334
:
Figure FDA00004796270300000335
④ obtain each 8×8 sub-block in the texture image of the left viewpoint image of the original undistorted stereo image and the corresponding 8×8 sub-block in
Figure FDA00004796270300000337
, and from the structural similarity between the corresponding 8×8 sub-blocks calculate the image quality objective evaluation prediction value of
Figure FDA00004796270300000338
;
similarly, obtain each 8×8 sub-block in
Figure FDA0000479627030000041
and the corresponding 8×8 sub-block in
Figure FDA0000479627030000042
, and from the structural similarity between the corresponding 8×8 sub-blocks calculate the image quality objective evaluation prediction value of
Figure FDA0000479627030000043
, denoted as
Figure FDA0000479627030000044
⑤ fuse
Figure FDA0000479627030000045
and
Figure FDA0000479627030000046
to obtain the image quality objective evaluation prediction value of the structure image of Sdis, denoted as Qstr:
Figure FDA0000479627030000047
wherein ws denotes the weight proportion of
Figure FDA0000479627030000049
;
similarly, fuse
Figure FDA00004796270300000410
and
Figure FDA00004796270300000411
to obtain the image quality objective evaluation prediction value of the texture image of Sdis, denoted as Qtex:
Figure FDA00004796270300000412
wherein wt denotes the weight proportion of
Figure FDA00004796270300000413
;
⑥ fuse Qstr and Qtex to obtain the image quality objective evaluation prediction value of Sdis, denoted as Q: Q = w×Qstr+(1-w)×Qtex, wherein w denotes the weight proportion of Qstr relative to Qtex.
3. The objective evaluation method for stereo image quality based on structure texture separation as claimed in claim 2, wherein in step ② the structure image
Figure FDA00004796270300000415
and the texture image
Figure FDA00004796270300000416
of {Lorg(x, y)} are acquired as follows:
②-1a, define the pixel point currently to be processed in {Lorg(x, y)} as the current pixel point;
②-2a, denote the coordinate position of the current pixel point in {Lorg(x, y)} as p; define each pixel point other than the current pixel point in the 21×21 neighborhood window centered on the current pixel point as a neighborhood pixel point; define the block formed by the 9×9 neighborhood window centered on the current pixel point as the current sub-block, denoted as
Figure FDA00004796270300000417
; define the block formed by the 9×9 neighborhood window centered on each neighborhood pixel point in the 21×21 neighborhood window centered on the current pixel point as a neighborhood sub-block, and denote the neighborhood sub-block formed by the 9×9 neighborhood window centered on the neighborhood pixel point with coordinate position q in {Lorg(x, y)} as
Figure FDA00004796270300000418
; wherein p ∈ Ω, q ∈ Ω, Ω denotes the set of the coordinate positions of all pixel points in {Lorg(x, y)}, (x2, y2) denotes the coordinate position, within the current sub-block, of a pixel point of the current sub-block
Figure FDA00004796270300000419
, 1 ≤ x2 ≤ 9, 1 ≤ y2 ≤ 9,
Figure FDA00004796270300000421
denotes the pixel value of the pixel point with coordinate position (x2, y2) in the current sub-block
Figure FDA00004796270300000422
, (x3, y3) denotes the coordinate position, within the neighborhood sub-block, of a pixel point of the neighborhood sub-block, 1 ≤ x3 ≤ 9, 1 ≤ y3 ≤ 9, and
Figure FDA0000479627030000051
denotes the pixel value of the pixel point with coordinate position (x3, y3) in
Figure FDA0000479627030000052
;
in step ②-2a, for any neighborhood pixel point and any pixel point in the current sub-block, suppose its coordinate position in {Lorg(x, y)} is (x, y): if x < 1 and 1 ≤ y ≤ H, the pixel value of the pixel point with coordinate position (1, y) in {Lorg(x, y)} is assigned to it; if x > W and 1 ≤ y ≤ H, the pixel value of the pixel point with coordinate position (W, y) is assigned to it; if 1 ≤ x ≤ W and y < 1, the pixel value of the pixel point with coordinate position (x, 1) is assigned to it; if 1 ≤ x ≤ W and y > H, the pixel value of the pixel point with coordinate position (x, H) is assigned to it; if x < 1 and y < 1, the pixel value of the pixel point with coordinate position (1, 1) is assigned to it; if x > W and y < 1, the pixel value of the pixel point with coordinate position (W, 1) is assigned to it; if x < 1 and y > H, the pixel value of the pixel point with coordinate position (1, H) is assigned to it; if x > W and y > H, the pixel value of the pixel point with coordinate position (W, H) is assigned to it;
②-3a, obtain the feature vector of each pixel point in the current sub-block
Figure FDA0000479627030000053
, and denote the feature vector of the pixel point with coordinate position (x2, y2) in the current sub-block
Figure FDA0000479627030000054
as
Figure FDA0000479627030000055
:
$$X_{L,org}^{p}(x_2,y_2)=\left[\,I_{L,org}^{p}(x_2,y_2),\ \left|\frac{\partial I_{L,org}^{p}(x_2,y_2)}{\partial x}\right|,\ \left|\frac{\partial I_{L,org}^{p}(x_2,y_2)}{\partial y}\right|,\ \left|\frac{\partial^{2} I_{L,org}^{p}(x_2,y_2)}{\partial x^{2}}\right|,\ \left|\frac{\partial^{2} I_{L,org}^{p}(x_2,y_2)}{\partial y^{2}}\right|,\ x_2,\ y_2\,\right],$$
wherein
Figure FDA0000479627030000057
has dimension 7, the symbol "[ ]" is the vector representation symbol and the symbol "| |" is the absolute value symbol,
Figure FDA0000479627030000058
denotes the density value of the pixel point with coordinate position (x2, y2) in the current sub-block
Figure FDA0000479627030000059
, and
Figure FDA00004796270300000510
,
Figure FDA00004796270300000512
,
Figure FDA00004796270300000513
and
Figure FDA0000479627030000061
are its first-order partial derivatives in the horizontal and vertical directions and its second-order partial derivatives in the horizontal and vertical directions, respectively;
②-4a, from the feature vectors of all pixel points in the current sub-block
Figure FDA0000479627030000063
, calculate the covariance matrix of the current sub-block
Figure FDA0000479627030000064
, denoted as
Figure FDA0000479627030000065
:
$$C_{L,org}^{p}=\frac{1}{7\times 7-1}\sum_{x_{2}=1}^{9}\sum_{y_{2}=1}^{9}\left(X_{L,org}^{p}(x_{2},y_{2})-\mu_{L,org}^{p}\right)\left(X_{L,org}^{p}(x_{2},y_{2})-\mu_{L,org}^{p}\right)^{T},$$
wherein
Figure FDA0000479627030000067
has dimension 7×7,
Figure FDA0000479627030000068
denotes the mean vector of the feature vectors of all pixel points in the current sub-block, and $\left(X_{L,org}^{p}(x_{2},y_{2})-\mu_{L,org}^{p}\right)^{T}$ is the transposed vector of $\left(X_{L,org}^{p}(x_{2},y_{2})-\mu_{L,org}^{p}\right)$;
②-5a, perform Cholesky decomposition on the covariance matrix of the current sub-block
Figure FDA00004796270300000612
,
Figure FDA00004796270300000614
, and obtain the Sigma feature set of the current sub-block
Figure FDA00004796270300000615
, denoted as
Figure FDA00004796270300000616
:
$$S_{L,org}^{p}=\left[\sqrt{10}\times L^{(1)},\ldots,\sqrt{10}\times L^{(i')},\ldots,\sqrt{10}\times L^{(7)},\ -\sqrt{10}\times L^{(1)},\ldots,-\sqrt{10}\times L^{(i')},\ldots,-\sqrt{10}\times L^{(7)},\ \mu_{L,org}^{p}\right],$$
wherein $L^{T}$ is the transposed matrix of L, the Sigma feature set has dimension 7×15, the symbol "[ ]" is the vector representation symbol, 1 ≤ i' ≤ 7, $L^{(1)}$ denotes the 1st column vector of L, $L^{(i')}$ denotes the i'-th column vector of L, and $L^{(7)}$ denotes the 7th column vector of L;
②-6a, using the same operations as in steps ②-3a to ②-5a, obtain the Sigma feature set of the neighborhood sub-block formed by the 9×9 neighborhood window centered on each neighborhood pixel point, and denote the Sigma feature set of the neighborhood sub-block as
Figure FDA00004796270300000620
, whose dimension is 7×15;
②-7a. According to the Sigma feature set $S_{L,org}^{p}$ of the current sub-block and the Sigma feature sets of the neighborhood sub-blocks formed by $9\times 9$ neighborhood windows centered on each neighborhood pixel point, obtain the structure information of the current pixel point, denoted $I_{L,org}^{str}(p)$:
$I_{L,org}^{str}(p)=\dfrac{\sum_{q\in N'(p)}\exp\!\left(-\dfrac{\|S_{L,org}^{p}-S_{L,org}^{q}\|^{2}}{2\sigma^{2}}\right)\times L_{org}(q)}{\sum_{q\in N'(p)}\exp\!\left(-\dfrac{\|S_{L,org}^{p}-S_{L,org}^{q}\|^{2}}{2\sigma^{2}}\right)}$,
where $N'(p)$ denotes the set of coordinate positions, in $\{L_{org}(x,y)\}$, of all neighborhood pixel points inside the $21\times 21$ neighborhood window centered on the current pixel point, $\exp(\cdot)$ is the exponential function with base $e$, $e=2.71828183$, $\sigma$ is the standard deviation of the Gaussian function, the symbol "$\|\cdot\|$" denotes the Euclidean distance, and $L_{org}(q)$ is the pixel value of the pixel point with coordinate position $q$ in $\{L_{org}(x,y)\}$;
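The weighted average above is a feature-domain Gaussian kernel over the $21\times 21$ neighborhood: pixels whose Sigma feature sets are close to that of the current pixel receive the largest weights. A minimal sketch (function name and the default $\sigma$ value are illustrative assumptions):

```python
import numpy as np

def structure_value(S_p, S_neighbors, pixel_values, sigma=0.5):
    # squared Euclidean distance between the current pixel's Sigma
    # feature set and each neighborhood Sigma feature set
    d2 = np.array([np.sum((S_p - S_q) ** 2) for S_q in S_neighbors])
    w = np.exp(-d2 / (2.0 * sigma ** 2))   # Gaussian kernel weights
    # weight-normalized average of the neighborhood pixel values
    return float(np.sum(w * np.asarray(pixel_values)) / np.sum(w))
```

When every neighbor has the same feature set as the current pixel, all weights are equal and the result degenerates to the plain neighborhood mean.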
②-8a. According to the structure information $I_{L,org}^{str}(p)$ of the current pixel point, obtain the texture information of the current pixel point, denoted $I_{L,org}^{tex}(p)$: $I_{L,org}^{tex}(p)=L_{org}(p)-I_{L,org}^{str}(p)$, where $L_{org}(p)$ denotes the pixel value of the current pixel point;
②-9a. Take the next pixel point to be processed in $\{L_{org}(x,y)\}$ as the current pixel point, then return to step ②-2a and continue until all pixel points in $\{L_{org}(x,y)\}$ have been processed, giving the structure information and texture information of every pixel point in $\{L_{org}(x,y)\}$; the structure information of all pixel points forms the structure image of $\{L_{org}(x,y)\}$, denoted $I_{L,org}^{str}$, and the texture information of all pixel points forms the texture image of $\{L_{org}(x,y)\}$, denoted $I_{L,org}^{tex}$;
By the same operations as steps ②-1a to ②-9a used to acquire the structure image $I_{L,org}^{str}$ and texture image $I_{L,org}^{tex}$ of $\{L_{org}(x,y)\}$, obtain the structure image $I_{R,org}^{str}$ and texture image $I_{R,org}^{tex}$ of $\{R_{org}(x,y)\}$, the structure image $I_{L,dis}^{str}$ and texture image $I_{L,dis}^{tex}$ of $\{L_{dis}(x,y)\}$, and the structure image $I_{R,dis}^{str}$ and texture image $I_{R,dis}^{tex}$ of $\{R_{dis}(x,y)\}$.
4. The objective evaluation method for stereo image quality based on structure and texture separation according to claim 2 or 3, characterized in that in step ④ the acquisition process of the objective image quality evaluation prediction value $Q_{L}^{tex}$ comprises the following steps:
④-1a. Divide $I_{L,org}^{tex}$ and $I_{L,dis}^{tex}$ each into the same number of mutually non-overlapping sub-blocks of size $8\times 8$; define the current $k$-th sub-block to be processed in $I_{L,org}^{tex}$ as the current first sub-block and the current $k$-th sub-block to be processed in $I_{L,dis}^{tex}$ as the current second sub-block, where $k$ runs over the sub-block indices and has an initial value of 1;
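A minimal sketch of the non-overlapping $8\times 8$ tiling; here any edge remainder that does not fill a whole block is simply cropped away (an illustrative choice, since the claim does not specify edge handling):

```python
import numpy as np

def split_into_blocks(img, size=8):
    # crop to a multiple of the block size, then tile without overlap
    H, W = img.shape
    H8, W8 = H - H % size, W - W % size
    return (img[:H8, :W8]
            .reshape(H8 // size, size, W8 // size, size)
            .swapaxes(1, 2)
            .reshape(-1, size, size))   # (num_blocks, size, size)
```

The blocks come out in row-major order, so block $k$ in the reference image and block $k$ in the distorted image cover the same spatial region, as the pairing in step ④-1a requires.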
④-2a. Denote the current first sub-block as $\{f_{L_{org},k}(x_4,y_4)\}$ and the current second sub-block as $\{f_{L_{dis},k}(x_4,y_4)\}$, where $(x_4,y_4)$ denotes the coordinate position of a pixel point in $\{f_{L_{org},k}(x_4,y_4)\}$ and $\{f_{L_{dis},k}(x_4,y_4)\}$, $1\le x_4\le 8$, $1\le y_4\le 8$, $f_{L_{org},k}(x_4,y_4)$ denotes the pixel value of the pixel point with coordinate position $(x_4,y_4)$ in the current first sub-block, and $f_{L_{dis},k}(x_4,y_4)$ denotes the pixel value of the pixel point with coordinate position $(x_4,y_4)$ in the current second sub-block;
④-3a. Calculate the mean and standard deviation of the current first sub-block $\{f_{L_{org},k}(x_4,y_4)\}$, denoted $\mu_{L_{org},k}$ and $\sigma_{L_{org},k}$:
$\mu_{L_{org},k}=\dfrac{\sum_{y_4=1}^{8}\sum_{x_4=1}^{8}f_{L_{org},k}(x_4,y_4)}{64}$, $\sigma_{L_{org},k}=\sqrt{\dfrac{\sum_{y_4=1}^{8}\sum_{x_4=1}^{8}\left(f_{L_{org},k}(x_4,y_4)-\mu_{L_{org},k}\right)^{2}}{64}}$;
Likewise, calculate the mean and standard deviation of the current second sub-block $\{f_{L_{dis},k}(x_4,y_4)\}$, denoted $\mu_{L_{dis},k}$ and $\sigma_{L_{dis},k}$:
$\mu_{L_{dis},k}=\dfrac{\sum_{y_4=1}^{8}\sum_{x_4=1}^{8}f_{L_{dis},k}(x_4,y_4)}{64}$, $\sigma_{L_{dis},k}=\sqrt{\dfrac{\sum_{y_4=1}^{8}\sum_{x_4=1}^{8}\left(f_{L_{dis},k}(x_4,y_4)-\mu_{L_{dis},k}\right)^{2}}{64}}$;
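The two moments above are the population mean and standard deviation of an $8\times 8$ block, both normalized by 64; as a sketch:

```python
import numpy as np

def block_stats(block):
    # block: 8x8 array; both moments normalized by 64 as in the formulas
    mu = block.sum() / 64.0
    sigma = np.sqrt(((block - mu) ** 2).sum() / 64.0)
    return mu, sigma
```

A constant block yields its constant value as the mean and a zero standard deviation, which is a quick sanity check on the normalization.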
④-4a. Calculate the structural similarity between the current first sub-block $\{f_{L_{org},k}(x_4,y_4)\}$ and the current second sub-block $\{f_{L_{dis},k}(x_4,y_4)\}$, denoted $Q_{L,k}^{tex}$:
$Q_{L,k}^{tex}=\dfrac{4\times(\sigma_{L_{org},k}\times\sigma_{L_{dis},k})\times(\mu_{L_{org},k}\times\mu_{L_{dis},k})+C_2}{\left((\sigma_{L_{org},k})^{2}+(\sigma_{L_{dis},k})^{2}\right)+\left((\mu_{L_{org},k})^{2}+(\mu_{L_{dis},k})^{2}\right)+C_2}$,
where $C_2$ is a control parameter;
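A direct transcription of the similarity measure; the claim leaves the value of the control parameter $C_2$ open, so the default below is purely an illustrative assumption:

```python
def block_similarity(mu_org, sigma_org, mu_dis, sigma_dis, C2=0.01):
    # numerator and denominator follow the formula for Q_{L,k}^{tex}
    num = 4.0 * (sigma_org * sigma_dis) * (mu_org * mu_dis) + C2
    den = (sigma_org ** 2 + sigma_dis ** 2) + (mu_org ** 2 + mu_dis ** 2) + C2
    return num / den
```

Note the measure is symmetric in the reference and distorted roles: swapping the two blocks' statistics leaves the score unchanged.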
④-5a. Let $k=k+1$, take the next sub-block to be processed in $I_{L,org}^{tex}$ as the current first sub-block and the next sub-block to be processed in $I_{L,dis}^{tex}$ as the current second sub-block, then return to step ④-2a and continue until all sub-blocks in $I_{L,org}^{tex}$ and $I_{L,dis}^{tex}$ have been processed, giving the structural similarity between each sub-block in $I_{L,org}^{tex}$ and the corresponding sub-block in $I_{L,dis}^{tex}$, where "=" in $k=k+1$ is the assignment symbol;
④-6a. According to the structural similarity between each sub-block in $I_{L,org}^{tex}$ and the corresponding sub-block in $I_{L,dis}^{tex}$, calculate the objective image quality evaluation prediction value, denoted $Q_{L}^{tex}$.
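The exact fusion formula for $Q_{L}^{tex}$ is given only in an unrendered figure of the claim; one common choice, assumed here solely for illustration, is the mean of the per-block similarities:

```python
def fuse_block_similarities(similarities):
    # illustrative fusion only: average the per-block similarity scores
    return sum(similarities) / len(similarities)
```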
In step ④, the acquisition process of the objective image quality evaluation prediction value $Q_{R}^{tex}$ comprises the following steps:
④-1b. Divide $I_{R,org}^{tex}$ and $I_{R,dis}^{tex}$ each into the same number of mutually non-overlapping sub-blocks of size $8\times 8$; define the current $k$-th sub-block to be processed in $I_{R,org}^{tex}$ as the current first sub-block and the current $k$-th sub-block to be processed in $I_{R,dis}^{tex}$ as the current second sub-block, where $k$ runs over the sub-block indices and has an initial value of 1;
④-2b. Denote the current first sub-block as $\{f_{R_{org},k}(x_4,y_4)\}$ and the current second sub-block as $\{f_{R_{dis},k}(x_4,y_4)\}$, where $(x_4,y_4)$ denotes the coordinate position of a pixel point in $\{f_{R_{org},k}(x_4,y_4)\}$ and $\{f_{R_{dis},k}(x_4,y_4)\}$, $1\le x_4\le 8$, $1\le y_4\le 8$, $f_{R_{org},k}(x_4,y_4)$ denotes the pixel value of the pixel point with coordinate position $(x_4,y_4)$ in the current first sub-block, and $f_{R_{dis},k}(x_4,y_4)$ denotes the pixel value of the pixel point with coordinate position $(x_4,y_4)$ in the current second sub-block;
④-3b. Calculate the mean and standard deviation of the current first sub-block $\{f_{R_{org},k}(x_4,y_4)\}$, denoted $\mu_{R_{org},k}$ and $\sigma_{R_{org},k}$:
$\mu_{R_{org},k}=\dfrac{\sum_{y_4=1}^{8}\sum_{x_4=1}^{8}f_{R_{org},k}(x_4,y_4)}{64}$, $\sigma_{R_{org},k}=\sqrt{\dfrac{\sum_{y_4=1}^{8}\sum_{x_4=1}^{8}\left(f_{R_{org},k}(x_4,y_4)-\mu_{R_{org},k}\right)^{2}}{64}}$;
Likewise, calculate the mean and standard deviation of the current second sub-block $\{f_{R_{dis},k}(x_4,y_4)\}$, denoted $\mu_{R_{dis},k}$ and $\sigma_{R_{dis},k}$:
$\mu_{R_{dis},k}=\dfrac{\sum_{y_4=1}^{8}\sum_{x_4=1}^{8}f_{R_{dis},k}(x_4,y_4)}{64}$, $\sigma_{R_{dis},k}=\sqrt{\dfrac{\sum_{y_4=1}^{8}\sum_{x_4=1}^{8}\left(f_{R_{dis},k}(x_4,y_4)-\mu_{R_{dis},k}\right)^{2}}{64}}$;
④-4b. Calculate the structural similarity between the current first sub-block $\{f_{R_{org},k}(x_4,y_4)\}$ and the current second sub-block $\{f_{R_{dis},k}(x_4,y_4)\}$, denoted $Q_{R,k}^{tex}$:
$Q_{R,k}^{tex}=\dfrac{4\times(\sigma_{R_{org},k}\times\sigma_{R_{dis},k})\times(\mu_{R_{org},k}\times\mu_{R_{dis},k})+C_2}{\left((\sigma_{R_{org},k})^{2}+(\sigma_{R_{dis},k})^{2}\right)+\left((\mu_{R_{org},k})^{2}+(\mu_{R_{dis},k})^{2}\right)+C_2}$,
where $C_2$ is a control parameter;
④-5b. Let $k=k+1$, take the next sub-block to be processed in $I_{R,org}^{tex}$ as the current first sub-block and the next sub-block to be processed in $I_{R,dis}^{tex}$ as the current second sub-block, then return to step ④-2b and continue until all sub-blocks in $I_{R,org}^{tex}$ and $I_{R,dis}^{tex}$ have been processed, giving the structural similarity between each sub-block in $I_{R,org}^{tex}$ and the corresponding sub-block in $I_{R,dis}^{tex}$, where "=" in $k=k+1$ is the assignment symbol;
④-6b. According to the structural similarity between each sub-block in $I_{R,org}^{tex}$ and the corresponding sub-block in $I_{R,dis}^{tex}$, calculate the objective image quality evaluation prediction value, denoted $Q_{R}^{tex}$.
CN201410105777.4A 2014-03-20 2014-03-20 Objective three-dimensional image quality evaluation method based on structure and texture separation Pending CN103903259A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410105777.4A CN103903259A (en) 2014-03-20 2014-03-20 Objective three-dimensional image quality evaluation method based on structure and texture separation


Publications (1)

Publication Number Publication Date
CN103903259A true CN103903259A (en) 2014-07-02

Family

ID=50994566

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410105777.4A Pending CN103903259A (en) 2014-03-20 2014-03-20 Objective three-dimensional image quality evaluation method based on structure and texture separation

Country Status (1)

Country Link
CN (1) CN103903259A (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000278710A (en) * 1999-03-26 2000-10-06 Ricoh Co Ltd Device for evaluating binocular stereoscopic vision picture
CN102075786A (en) * 2011-01-19 2011-05-25 宁波大学 Method for objectively evaluating image quality
CN102142145A (en) * 2011-03-22 2011-08-03 宁波大学 Image quality objective evaluation method based on human eye visual characteristics
CN102209257A (en) * 2011-06-17 2011-10-05 宁波大学 Stereo image quality objective evaluation method
CN102333233A (en) * 2011-09-23 2012-01-25 宁波大学 An Objective Evaluation Method of Stereoscopic Image Quality Based on Visual Perception
CN102521825A (en) * 2011-11-16 2012-06-27 宁波大学 Three-dimensional image quality objective evaluation method based on zero watermark


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
KEMENG LI et al.: "Objective quality assessment for stereoscopic images based on structure-texture decomposition", WSEAS TRANSACTIONS ON COMPUTERS, 31 January 2014 (2014-01-31) *
L. KARACAN et al.: "Structure-preserving image smoothing via region covariances", ACM TRANSACTIONS ON GRAPHICS, vol. 32, no. 6, 1 November 2013 (2013-11-01), XP058033898, DOI: 10.1145/2508363.2508403 *
M. SOLH et al.: "MIQM: a multicamera image quality measure", IEEE TRANSACTIONS ON IMAGE PROCESSING, vol. 21, no. 9, 22 May 2012 (2012-05-22) *
WUFENG XUE et al.: "Gradient Magnitude Similarity Deviation: A Highly Efficient Perceptual Image Quality Index", IEEE TRANSACTIONS ON IMAGE PROCESSING, vol. 23, no. 2, 3 December 2013 (2013-12-03) *
JIN Xin et al.: "Adaptive image quality assessment based on structural similarity", Journal of Optoelectronics·Laser, vol. 25, no. 2, 28 February 2014 (2014-02-28) *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105931257B (en) * 2016-06-12 2018-08-31 西安电子科技大学 SAR image method for evaluating quality based on textural characteristics and structural similarity
CN106780432A (en) * 2016-11-14 2017-05-31 浙江科技学院 A kind of objective evaluation method for quality of stereo images based on sparse features similarity
CN106780432B (en) * 2016-11-14 2019-05-28 浙江科技学院 A kind of objective evaluation method for quality of stereo images based on sparse features similarity
CN109887023A (en) * 2019-01-11 2019-06-14 杭州电子科技大学 A binocular fusion stereo image quality evaluation method based on weighted gradient magnitude
CN110363753A (en) * 2019-07-11 2019-10-22 北京字节跳动网络技术有限公司 Image quality measure method, apparatus and electronic equipment

Similar Documents

Publication Publication Date Title
CN102333233B (en) Stereo image quality objective evaluation method based on visual perception
CN103581661B (en) Method for evaluating visual comfort degree of three-dimensional image
CN104036501B (en) A kind of objective evaluation method for quality of stereo images based on rarefaction representation
CN102209257B (en) Stereo image quality objective evaluation method
CN104581143A (en) Reference-free three-dimensional picture quality objective evaluation method based on machine learning
CN104394403B (en) A kind of stereoscopic video quality method for objectively evaluating towards compression artefacts
CN105282543B (en) Total blindness three-dimensional image quality objective evaluation method based on three-dimensional visual perception
CN103413298B (en) A kind of objective evaluation method for quality of stereo images of view-based access control model characteristic
CN103136748B (en) The objective evaluation method for quality of stereo images of a kind of feature based figure
CN104036502B (en) A kind of without with reference to fuzzy distortion stereo image quality evaluation methodology
CN102843572B (en) Phase-based stereo image quality objective evaluation method
CN104408716A (en) Three-dimensional image quality objective evaluation method based on visual fidelity
CN104902268B (en) Based on local tertiary mode without with reference to three-dimensional image objective quality evaluation method
CN102903107A (en) Three-dimensional picture quality objective evaluation method based on feature fusion
CN106530282A (en) Spatial feature-based non-reference three-dimensional image quality objective assessment method
CN105357519A (en) No-reference stereo image quality objective evaluation method based on self-similarity feature
CN105654465A (en) Stereo image quality evaluation method through parallax compensation and inter-viewpoint filtering
CN103200420B (en) Three-dimensional picture quality objective evaluation method based on three-dimensional visual attention
CN106651835A (en) Entropy-based double-viewpoint reference-free objective stereo-image quality evaluation method
CN103903259A (en) Objective three-dimensional image quality evaluation method based on structure and texture separation
CN107360416A (en) Stereo image quality evaluation method based on local multivariate Gaussian description
CN102999912B (en) A kind of objective evaluation method for quality of stereo images based on distortion map
CN105321175B (en) An Objective Evaluation Method of Stereo Image Quality Based on Sparse Representation of Structural Texture
CN103745457B (en) A kind of three-dimensional image objective quality evaluation method
CN102999911B (en) Three-dimensional image quality objective evaluation method based on energy diagrams

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20140702
