Summary of the invention
The technical problem to be solved by this invention is to provide an objective quality evaluation method for stereoscopic images based on three-dimensional gradient magnitude, which can effectively improve the correlation between objective evaluation results and subjective perception.
The technical scheme adopted by the present invention to solve the above technical problem is an objective quality evaluation method for stereoscopic images based on three-dimensional gradient magnitude, characterized in that it comprises the following steps:
1. Let S_org denote the original undistorted stereoscopic image and S_dis the distorted stereoscopic image to be evaluated. Denote the left-view image of S_org as {L_org(x, y)}, the right-view image of S_org as {R_org(x, y)}, the left-view image of S_dis as {L_dis(x, y)}, and the right-view image of S_dis as {R_dis(x, y)}, where (x, y) denotes the coordinate position of a pixel in the left-view and right-view images, 1 ≤ x ≤ W, 1 ≤ y ≤ H, W denotes the width of the left-view and right-view images, H denotes their height, L_org(x, y) denotes the pixel value at coordinate position (x, y) in {L_org(x, y)}, R_org(x, y) denotes the pixel value at coordinate position (x, y) in {R_org(x, y)}, L_dis(x, y) denotes the pixel value at coordinate position (x, y) in {L_dis(x, y)}, and R_dis(x, y) denotes the pixel value at coordinate position (x, y) in {R_dis(x, y)};
2. From the disparity-space values of each pixel in {L_org(x, y)} and the pixel at the corresponding coordinate position in {R_org(x, y)} under multiple disparity values, obtain the disparity space image of S_org, denoted {DSI_org(x, y, d)}; likewise, from the disparity-space values of each pixel in {L_dis(x, y)} and the pixel at the corresponding coordinate position in {R_dis(x, y)} under multiple disparity values, obtain the disparity space image of S_dis, denoted {DSI_dis(x, y, d)}, where DSI_org(x, y, d) denotes the disparity-space value of the pixel at coordinate position (x, y, d) in {DSI_org(x, y, d)}, DSI_dis(x, y, d) denotes the disparity-space value of the pixel at coordinate position (x, y, d) in {DSI_dis(x, y, d)}, 0 ≤ d ≤ d_max, and d_max denotes the maximum disparity value;
3. Compute the horizontal-direction gradient, vertical-direction gradient, and viewpoint-direction gradient of each pixel in {DSI_org(x, y, d)}; denote the horizontal-direction gradient of the pixel at coordinate position (x, y, d) in {DSI_org(x, y, d)} as gx_org(x, y, d), its vertical-direction gradient as gy_org(x, y, d), and its viewpoint-direction gradient as gd_org(x, y, d).
Likewise, compute the horizontal-direction gradient, vertical-direction gradient, and viewpoint-direction gradient of each pixel in {DSI_dis(x, y, d)}; denote the horizontal-direction gradient of the pixel at coordinate position (x, y, d) in {DSI_dis(x, y, d)} as gx_dis(x, y, d), its vertical-direction gradient as gy_dis(x, y, d), and its viewpoint-direction gradient as gd_dis(x, y, d);
4. From the horizontal-direction, vertical-direction, and viewpoint-direction gradients of each pixel in {DSI_org(x, y, d)}, compute the three-dimensional gradient magnitude of each pixel in {DSI_org(x, y, d)}; denote the three-dimensional gradient magnitude of the pixel at coordinate position (x, y, d) as m_org(x, y, d).
Likewise, from the horizontal-direction, vertical-direction, and viewpoint-direction gradients of each pixel in {DSI_dis(x, y, d)}, compute the three-dimensional gradient magnitude of each pixel in {DSI_dis(x, y, d)}; denote the three-dimensional gradient magnitude of the pixel at coordinate position (x, y, d) as m_dis(x, y, d);
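The formula that defines the three-dimensional gradient magnitude follows step 4. as an image and is not reproduced in this text. As a sketch only, the Euclidean norm of the three directional gradients, the usual definition of a gradient magnitude, is assumed here:

```python
import numpy as np

def gradient_magnitude_3d(gx, gy, gd):
    """Three-dimensional gradient magnitude from the three directional gradients.

    The exact formula of step 4 is not reproduced in the text; the Euclidean
    norm used here is an assumption:
        m(x, y, d) = sqrt(gx^2 + gy^2 + gd^2)
    gx, gy, gd are arrays of identical shape (H, W, d_max + 1).
    """
    return np.sqrt(gx ** 2 + gy ** 2 + gd ** 2)
```

Applied element-wise to the gradient volumes of {DSI_org(x, y, d)} and {DSI_dis(x, y, d)}, this yields m_org(x, y, d) and m_dis(x, y, d) in one call each.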
5. From the three-dimensional gradient magnitudes of each pixel in {DSI_org(x, y, d)} and {DSI_dis(x, y, d)}, compute the objective evaluation metric of each pixel in {DSI_dis(x, y, d)}; denote the objective evaluation metric of the pixel at coordinate position (x, y, d) in {DSI_dis(x, y, d)} as Q_DSI(x, y, d), where C is a control parameter;
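The expression for Q_DSI(x, y, d) likewise appears only as an image in the text. The sketch below is not the patented formula: the similarity form and the value of the control parameter C are assumptions patterned on common gradient-similarity measures, chosen because the text states only that C is a control parameter:

```python
import numpy as np

def per_pixel_metric(m_org, m_dis, C=0.0025):
    """Per-pixel objective evaluation metric Q_DSI(x, y, d) (assumed form).

    A gradient-similarity ratio is assumed here, since the actual formula
    of step 5 is not reproduced in the text:
        Q_DSI = (2 * m_org * m_dis + C) / (m_org**2 + m_dis**2 + C)
    The control parameter C keeps the ratio numerically stable when both
    magnitudes are near zero; its value here is also an assumption.
    """
    return (2.0 * m_org * m_dis + C) / (m_org ** 2 + m_dis ** 2 + C)
```

Under this form the metric equals 1 wherever the two magnitudes coincide and decreases as they diverge, which matches the role of a per-pixel fidelity score.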
6. From the objective evaluation metrics of all pixels in {DSI_dis(x, y, d)}, compute the objective image-quality prediction value of S_dis, denoted Q, where Ω denotes the set of coordinate positions of all pixels in {DSI_dis(x, y, d)} and N denotes the total number of pixels contained in {DSI_dis(x, y, d)}.
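The pooling formula of step 6. also follows as an image. Given that N is defined as the total pixel count and Ω as the set of all coordinate positions, a plain average over the volume is assumed in this sketch:

```python
import numpy as np

def pooled_quality(q_dsi):
    """Pool per-pixel metrics into the prediction value Q (assumed form).

    The formula of step 6 is not reproduced in the text; since N is the
    total number of pixels and Omega the set of all coordinates, a simple
    mean is assumed:
        Q = (1 / N) * sum over (x, y, d) in Omega of Q_DSI(x, y, d)
    """
    return float(np.mean(q_dsi))
```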
The acquisition process of the disparity space image of S_org in step 2. is:
2.-a1. Define the current pixel to be processed in {L_org(x, y)} as the current first pixel, and the current pixel to be processed in {R_org(x, y)} as the current second pixel;
2.-a2. Suppose the current first pixel is the pixel at coordinate position (x_1, y_1) in {L_org(x, y)} and the current second pixel is the pixel at coordinate position (x_1, y_1) in {R_org(x, y)}. Take the disparity value d_0 = 0, then compute the disparity-space value of the current first pixel and the current second pixel under this disparity value d_0, denoted DSI_org(x_1, y_1, d_0): DSI_org(x_1, y_1, d_0) = |L_org(x_1, y_1) - R_org(x_1 - d_0, y_1)|, where 1 ≤ x_1 ≤ W, 1 ≤ y_1 ≤ H, 0 ≤ d_0 ≤ d_max, d_max denotes the maximum disparity value, L_org(x_1, y_1) denotes the pixel value at coordinate position (x_1, y_1) in {L_org(x, y)}, R_org(x_1 - d_0, y_1) denotes the pixel value at coordinate position (x_1 - d_0, y_1) in {R_org(x, y)}, and "| |" is the absolute-value operator;
2.-a3. Choose d_max disparity values different from d_0, denoted d_1, d_2, …, d_i, …, d_{d_max} respectively; then compute the disparity-space values of the current first pixel and the current second pixel under these d_max different disparity values, correspondingly denoted DSI_org(x_1, y_1, d_1), DSI_org(x_1, y_1, d_2), …, DSI_org(x_1, y_1, d_i), …, DSI_org(x_1, y_1, d_{d_max}):
DSI_org(x_1, y_1, d_1) = |L_org(x_1, y_1) - R_org(x_1 - d_1, y_1)|,
DSI_org(x_1, y_1, d_2) = |L_org(x_1, y_1) - R_org(x_1 - d_2, y_1)|,
…,
DSI_org(x_1, y_1, d_i) = |L_org(x_1, y_1) - R_org(x_1 - d_i, y_1)|,
…,
DSI_org(x_1, y_1, d_{d_max}) = |L_org(x_1, y_1) - R_org(x_1 - d_{d_max}, y_1)|,
where 1 ≤ i ≤ d_max, d_i = d_0 + i; DSI_org(x_1, y_1, d_1), DSI_org(x_1, y_1, d_2), DSI_org(x_1, y_1, d_i), and DSI_org(x_1, y_1, d_{d_max}) denote the disparity-space values of the current first pixel and the current second pixel under the disparity values d_1, d_2, d_i, and d_{d_max}, respectively; and R_org(x_1 - d_1, y_1), R_org(x_1 - d_2, y_1), R_org(x_1 - d_i, y_1), and R_org(x_1 - d_{d_max}, y_1) denote the pixel values at coordinate positions (x_1 - d_1, y_1), (x_1 - d_2, y_1), (x_1 - d_i, y_1), and (x_1 - d_{d_max}, y_1) in {R_org(x, y)}, respectively;
2.-a4. Take the next pixel to be processed in {L_org(x, y)} as the current first pixel and the next pixel to be processed in {R_org(x, y)} as the current second pixel, then return to step 2.-a2 and continue until all pixels in {L_org(x, y)} and {R_org(x, y)} have been processed, obtaining the disparity space image of S_org, denoted {DSI_org(x, y, d)}, where DSI_org(x, y, d) denotes the disparity-space value of the pixel at coordinate position (x, y, d) in {DSI_org(x, y, d)}; the value of DSI_org(x, y, d) is the disparity-space value, under disparity value d, of the pixel at coordinate position (x, y) in {L_org(x, y)} and the pixel at coordinate position (x, y) in {R_org(x, y)}.
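The pixel-by-pixel procedure of steps 2.-a1 to 2.-a4 reduces to DSI(x, y, d) = |L(x, y) - R(x - d, y)| evaluated for every coordinate and every disparity, and can be sketched as follows. The handling of samples with x - d < 1, which the text does not specify, is assumed here to clamp the right view at its left border:

```python
import numpy as np

def disparity_space_image(left, right, d_max):
    """Build the disparity space image DSI(x, y, d) = |L(x, y) - R(x - d, y)|.

    `left` and `right` are 2-D arrays with H rows and W columns; the result
    is a volume of shape (H, W, d_max + 1).  Clamping the right view at its
    left border for x - d < 1 is an assumption not stated in the text.
    """
    left = left.astype(np.float64)
    right = right.astype(np.float64)
    H, W = left.shape
    dsi = np.empty((H, W, d_max + 1))
    for d in range(d_max + 1):
        # Shift the right view by d columns, clamping at the border.
        idx = np.maximum(np.arange(W) - d, 0)
        dsi[:, :, d] = np.abs(left - right[:, idx])
    return dsi
```

The same function applied to (L_org, R_org) and to (L_dis, R_dis) yields {DSI_org(x, y, d)} and {DSI_dis(x, y, d)} respectively.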
The acquisition process of the disparity space image of S_dis in step 2. is:
2.-b1. Define the current pixel to be processed in {L_dis(x, y)} as the current first pixel, and the current pixel to be processed in {R_dis(x, y)} as the current second pixel;
2.-b2. Suppose the current first pixel is the pixel at coordinate position (x_1, y_1) in {L_dis(x, y)} and the current second pixel is the pixel at coordinate position (x_1, y_1) in {R_dis(x, y)}. Take the disparity value d_0 = 0, then compute the disparity-space value of the current first pixel and the current second pixel under this disparity value d_0, denoted DSI_dis(x_1, y_1, d_0): DSI_dis(x_1, y_1, d_0) = |L_dis(x_1, y_1) - R_dis(x_1 - d_0, y_1)|, where 1 ≤ x_1 ≤ W, 1 ≤ y_1 ≤ H, 0 ≤ d_0 ≤ d_max, d_max denotes the maximum disparity value, L_dis(x_1, y_1) denotes the pixel value at coordinate position (x_1, y_1) in {L_dis(x, y)}, R_dis(x_1 - d_0, y_1) denotes the pixel value at coordinate position (x_1 - d_0, y_1) in {R_dis(x, y)}, and "| |" is the absolute-value operator;
2.-b3. Choose d_max disparity values different from d_0, denoted d_1, d_2, …, d_i, …, d_{d_max} respectively; then compute the disparity-space values of the current first pixel and the current second pixel under these d_max different disparity values, correspondingly denoted DSI_dis(x_1, y_1, d_1), DSI_dis(x_1, y_1, d_2), …, DSI_dis(x_1, y_1, d_i), …, DSI_dis(x_1, y_1, d_{d_max}):
DSI_dis(x_1, y_1, d_1) = |L_dis(x_1, y_1) - R_dis(x_1 - d_1, y_1)|,
DSI_dis(x_1, y_1, d_2) = |L_dis(x_1, y_1) - R_dis(x_1 - d_2, y_1)|,
…,
DSI_dis(x_1, y_1, d_i) = |L_dis(x_1, y_1) - R_dis(x_1 - d_i, y_1)|,
…,
DSI_dis(x_1, y_1, d_{d_max}) = |L_dis(x_1, y_1) - R_dis(x_1 - d_{d_max}, y_1)|,
where 1 ≤ i ≤ d_max, d_i = d_0 + i; DSI_dis(x_1, y_1, d_1), DSI_dis(x_1, y_1, d_2), DSI_dis(x_1, y_1, d_i), and DSI_dis(x_1, y_1, d_{d_max}) denote the disparity-space values of the current first pixel and the current second pixel under the disparity values d_1, d_2, d_i, and d_{d_max}, respectively; and R_dis(x_1 - d_1, y_1), R_dis(x_1 - d_2, y_1), R_dis(x_1 - d_i, y_1), and R_dis(x_1 - d_{d_max}, y_1) denote the pixel values at coordinate positions (x_1 - d_1, y_1), (x_1 - d_2, y_1), (x_1 - d_i, y_1), and (x_1 - d_{d_max}, y_1) in {R_dis(x, y)}, respectively;
2.-b4. Take the next pixel to be processed in {L_dis(x, y)} as the current first pixel and the next pixel to be processed in {R_dis(x, y)} as the current second pixel, then return to step 2.-b2 and continue until all pixels in {L_dis(x, y)} and {R_dis(x, y)} have been processed, obtaining the disparity space image of S_dis, denoted {DSI_dis(x, y, d)}, where DSI_dis(x, y, d) denotes the disparity-space value of the pixel at coordinate position (x, y, d) in {DSI_dis(x, y, d)}; the value of DSI_dis(x, y, d) is the disparity-space value, under disparity value d, of the pixel at coordinate position (x, y) in {L_dis(x, y)} and the pixel at coordinate position (x, y) in {R_dis(x, y)}.
The acquisition process of the horizontal-direction, vertical-direction, and viewpoint-direction gradients of each pixel in {DSI_org(x, y, d)} in step 3. is:
3.-a1. Convolve {DSI_org(x, y, d)} with the horizontal gradient operator to obtain the horizontal-direction gradient of each pixel in {DSI_org(x, y, d)}; denote the horizontal-direction gradient of the pixel at coordinate position (x, y, d) as gx_org(x, y, d), where DSI_org(u, v, j) denotes the disparity-space value of the pixel at coordinate position (u, v, j) in {DSI_org(x, y, d)};
3.-a2. Convolve {DSI_org(x, y, d)} with the vertical gradient operator to obtain the vertical-direction gradient of each pixel in {DSI_org(x, y, d)}; denote the vertical-direction gradient of the pixel at coordinate position (x, y, d) as gy_org(x, y, d);
3.-a3. Convolve {DSI_org(x, y, d)} with the viewpoint gradient operator to obtain the viewpoint-direction gradient of each pixel in {DSI_org(x, y, d)}; denote the viewpoint-direction gradient of the pixel at coordinate position (x, y, d) as gd_org(x, y, d), where sign() is the step function.
In the above steps 3.-a1 to 3.-a3, if u < 1 the value of DSI_org(u, v, j) is replaced by the value of DSI_org(1, v, j); if u > W, by the value of DSI_org(W, v, j); if v < 1, by the value of DSI_org(u, 1, j); if v > H, by the value of DSI_org(u, H, j); if j < 0, by the value of DSI_org(u, v, 0); and if j > d_max, by the value of DSI_org(u, v, d_max). Here DSI_org(1, v, j), DSI_org(W, v, j), DSI_org(u, 1, j), DSI_org(u, H, j), DSI_org(u, v, 0), and DSI_org(u, v, d_max) denote the disparity-space values of the pixels at coordinate positions (1, v, j), (W, v, j), (u, 1, j), (u, H, j), (u, v, 0), and (u, v, d_max) in {DSI_org(x, y, d)}, respectively.
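The three directional gradients of steps 3.-a1 to 3.-a3 can be sketched as below. The actual operators are given in Figures 2 to 4, which are not reproduced in the text, so central differences along each axis are used here as a placeholder; out-of-range neighbours are replaced by the nearest border sample, matching the substitution rule above:

```python
import numpy as np

def gradients_3d(dsi):
    """Horizontal, vertical and viewpoint-direction gradients of a DSI volume.

    `dsi` has shape (H, W, D).  The operators of Figures 2-4 are unknown
    here; central differences along each axis are an assumption.  Replicate
    padding implements the border-substitution rule of steps 3-a1 to 3-a3.
    """
    padded = np.pad(dsi.astype(np.float64), 1, mode="edge")  # replicate borders
    gy = (padded[2:, 1:-1, 1:-1] - padded[:-2, 1:-1, 1:-1]) / 2.0  # vertical (y)
    gx = (padded[1:-1, 2:, 1:-1] - padded[1:-1, :-2, 1:-1]) / 2.0  # horizontal (x)
    gd = (padded[1:-1, 1:-1, 2:] - padded[1:-1, 1:-1, :-2]) / 2.0  # viewpoint (d)
    return gx, gy, gd
```

Each returned array has the same shape as the input volume, so the gradients line up pixel-for-pixel with {DSI_org(x, y, d)} or {DSI_dis(x, y, d)}.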
The acquisition process of the horizontal-direction, vertical-direction, and viewpoint-direction gradients of each pixel in {DSI_dis(x, y, d)} in step 3. is:
3.-b1. Convolve {DSI_dis(x, y, d)} with the horizontal gradient operator to obtain the horizontal-direction gradient of each pixel in {DSI_dis(x, y, d)}; denote the horizontal-direction gradient of the pixel at coordinate position (x, y, d) as gx_dis(x, y, d), where DSI_dis(u, v, j) denotes the disparity-space value of the pixel at coordinate position (u, v, j) in {DSI_dis(x, y, d)};
3.-b2. Convolve {DSI_dis(x, y, d)} with the vertical gradient operator to obtain the vertical-direction gradient of each pixel in {DSI_dis(x, y, d)}; denote the vertical-direction gradient of the pixel at coordinate position (x, y, d) as gy_dis(x, y, d);
3.-b3. Convolve {DSI_dis(x, y, d)} with the viewpoint gradient operator to obtain the viewpoint-direction gradient of each pixel in {DSI_dis(x, y, d)}; denote the viewpoint-direction gradient of the pixel at coordinate position (x, y, d) as gd_dis(x, y, d), where sign() is the step function.
In the above steps 3.-b1 to 3.-b3, if u < 1 the value of DSI_dis(u, v, j) is replaced by the value of DSI_dis(1, v, j); if u > W, by the value of DSI_dis(W, v, j); if v < 1, by the value of DSI_dis(u, 1, j); if v > H, by the value of DSI_dis(u, H, j); if j < 0, by the value of DSI_dis(u, v, 0); and if j > d_max, by the value of DSI_dis(u, v, d_max). Here DSI_dis(1, v, j), DSI_dis(W, v, j), DSI_dis(u, 1, j), DSI_dis(u, H, j), DSI_dis(u, v, 0), and DSI_dis(u, v, d_max) denote the disparity-space values of the pixels at coordinate positions (1, v, j), (W, v, j), (u, 1, j), (u, H, j), (u, v, 0), and (u, v, d_max) in {DSI_dis(x, y, d)}, respectively.
Compared with the prior art, the invention has the following advantages:
1) The method of the invention accounts for the influence of disparity on stereoscopic perception by constructing the disparity space images of both the original undistorted stereoscopic image and the distorted stereoscopic image to be evaluated. This avoids a complex disparity-estimation operation, while the constructed disparity space images still reflect well how different disparities affect stereoscopic image quality.
2) The method of the invention computes the horizontal-direction, vertical-direction, and viewpoint-direction gradients of each pixel in the disparity space image to obtain the three-dimensional gradient magnitude of each pixel. The resulting three-dimensional gradient magnitude is comparatively stable and reflects changes in stereoscopic image quality well, so the method can effectively improve the correlation between objective evaluation results and subjective perception.
Embodiment
The present invention is described in further detail below in conjunction with the accompanying drawings and an embodiment.
The objective quality evaluation method for stereoscopic images based on three-dimensional gradient magnitude proposed by the present invention has an overall implementation block diagram as shown in Figure 1, and specifically comprises the following steps:
1. Let S_org denote the original undistorted stereoscopic image and S_dis the distorted stereoscopic image to be evaluated. Denote the left-view image of S_org as {L_org(x, y)}, the right-view image of S_org as {R_org(x, y)}, the left-view image of S_dis as {L_dis(x, y)}, and the right-view image of S_dis as {R_dis(x, y)}, where (x, y) denotes the coordinate position of a pixel in the left-view and right-view images, 1 ≤ x ≤ W, 1 ≤ y ≤ H, W denotes the width of the left-view and right-view images, H denotes their height, L_org(x, y) denotes the pixel value at coordinate position (x, y) in {L_org(x, y)}, R_org(x, y) denotes the pixel value at coordinate position (x, y) in {R_org(x, y)}, L_dis(x, y) denotes the pixel value at coordinate position (x, y) in {L_dis(x, y)}, and R_dis(x, y) denotes the pixel value at coordinate position (x, y) in {R_dis(x, y)}.
2. From the disparity-space values of each pixel in {L_org(x, y)} and the pixel at the corresponding coordinate position in {R_org(x, y)} under multiple disparity values, obtain the disparity space image of S_org, denoted {DSI_org(x, y, d)}; likewise, from the disparity-space values of each pixel in {L_dis(x, y)} and the pixel at the corresponding coordinate position in {R_dis(x, y)} under multiple disparity values, obtain the disparity space image of S_dis, denoted {DSI_dis(x, y, d)}, where DSI_org(x, y, d) denotes the disparity-space value of the pixel at coordinate position (x, y, d) in {DSI_org(x, y, d)}, DSI_dis(x, y, d) denotes the disparity-space value of the pixel at coordinate position (x, y, d) in {DSI_dis(x, y, d)}, 0 ≤ d ≤ d_max, and d_max denotes the maximum disparity value; in the present embodiment, d_max = 31 is taken.
In this embodiment, the acquisition process of the disparity space image of S_org in step 2. is:
2.-a1. Define the current pixel to be processed in {L_org(x, y)} as the current first pixel, and the current pixel to be processed in {R_org(x, y)} as the current second pixel.
2.-a2. Suppose the current first pixel is the pixel at coordinate position (x_1, y_1) in {L_org(x, y)} and the current second pixel is the pixel at coordinate position (x_1, y_1) in {R_org(x, y)}; that is, the current first pixel and the current second pixel have identical coordinate positions. Take the disparity value d_0 = 0, then compute the disparity-space value of the current first pixel and the current second pixel under this disparity value d_0, denoted DSI_org(x_1, y_1, d_0): DSI_org(x_1, y_1, d_0) = |L_org(x_1, y_1) - R_org(x_1 - d_0, y_1)|, where 1 ≤ x_1 ≤ W, 1 ≤ y_1 ≤ H, 0 ≤ d_0 ≤ d_max, d_max denotes the maximum disparity value, L_org(x_1, y_1) denotes the pixel value at coordinate position (x_1, y_1) in {L_org(x, y)}, R_org(x_1 - d_0, y_1) denotes the pixel value at coordinate position (x_1 - d_0, y_1) in {R_org(x, y)}, and "| |" is the absolute-value operator.
2.-a3. Choose d_max disparity values different from d_0, denoted d_1, d_2, …, d_i, …, d_{d_max} respectively; then compute the disparity-space values of the current first pixel and the current second pixel under these d_max different disparity values, correspondingly denoted DSI_org(x_1, y_1, d_1), DSI_org(x_1, y_1, d_2), …, DSI_org(x_1, y_1, d_i), …, DSI_org(x_1, y_1, d_{d_max}):
DSI_org(x_1, y_1, d_1) = |L_org(x_1, y_1) - R_org(x_1 - d_1, y_1)|,
DSI_org(x_1, y_1, d_2) = |L_org(x_1, y_1) - R_org(x_1 - d_2, y_1)|,
…,
DSI_org(x_1, y_1, d_i) = |L_org(x_1, y_1) - R_org(x_1 - d_i, y_1)|,
…,
DSI_org(x_1, y_1, d_{d_max}) = |L_org(x_1, y_1) - R_org(x_1 - d_{d_max}, y_1)|,
where 1 ≤ i ≤ d_max, d_i = d_0 + i; DSI_org(x_1, y_1, d_1), DSI_org(x_1, y_1, d_2), DSI_org(x_1, y_1, d_i), and DSI_org(x_1, y_1, d_{d_max}) denote the disparity-space values of the current first pixel and the current second pixel under the disparity values d_1, d_2, d_i, and d_{d_max}, respectively; and R_org(x_1 - d_1, y_1), R_org(x_1 - d_2, y_1), R_org(x_1 - d_i, y_1), and R_org(x_1 - d_{d_max}, y_1) denote the pixel values at coordinate positions (x_1 - d_1, y_1), (x_1 - d_2, y_1), (x_1 - d_i, y_1), and (x_1 - d_{d_max}, y_1) in {R_org(x, y)}, respectively.
2.-a4. Take the next pixel to be processed in {L_org(x, y)} as the current first pixel and the next pixel to be processed in {R_org(x, y)} as the current second pixel, then return to step 2.-a2 and continue until all pixels in {L_org(x, y)} and {R_org(x, y)} have been processed, obtaining the disparity space image of S_org, denoted {DSI_org(x, y, d)}, where DSI_org(x, y, d) denotes the disparity-space value of the pixel at coordinate position (x, y, d) in {DSI_org(x, y, d)}; the value of DSI_org(x, y, d) is the disparity-space value, under disparity value d, of the pixel at coordinate position (x, y) in {L_org(x, y)} and the pixel at coordinate position (x, y) in {R_org(x, y)}.
In this embodiment, the acquisition process of the disparity space image of S_dis in step 2. is:
2.-b1. Define the current pixel to be processed in {L_dis(x, y)} as the current first pixel, and the current pixel to be processed in {R_dis(x, y)} as the current second pixel.
2.-b2. Suppose the current first pixel is the pixel at coordinate position (x_1, y_1) in {L_dis(x, y)} and the current second pixel is the pixel at coordinate position (x_1, y_1) in {R_dis(x, y)}; that is, the current first pixel and the current second pixel have identical coordinate positions. Take the disparity value d_0 = 0, then compute the disparity-space value of the current first pixel and the current second pixel under this disparity value d_0, denoted DSI_dis(x_1, y_1, d_0): DSI_dis(x_1, y_1, d_0) = |L_dis(x_1, y_1) - R_dis(x_1 - d_0, y_1)|, where 1 ≤ x_1 ≤ W, 1 ≤ y_1 ≤ H, 0 ≤ d_0 ≤ d_max, d_max denotes the maximum disparity value, L_dis(x_1, y_1) denotes the pixel value at coordinate position (x_1, y_1) in {L_dis(x, y)}, R_dis(x_1 - d_0, y_1) denotes the pixel value at coordinate position (x_1 - d_0, y_1) in {R_dis(x, y)}, and "| |" is the absolute-value operator.
2.-b3. Choose d_max disparity values different from d_0, denoted d_1, d_2, …, d_i, …, d_{d_max} respectively; then compute the disparity-space values of the current first pixel and the current second pixel under these d_max different disparity values, correspondingly denoted DSI_dis(x_1, y_1, d_1), DSI_dis(x_1, y_1, d_2), …, DSI_dis(x_1, y_1, d_i), …, DSI_dis(x_1, y_1, d_{d_max}):
DSI_dis(x_1, y_1, d_1) = |L_dis(x_1, y_1) - R_dis(x_1 - d_1, y_1)|,
DSI_dis(x_1, y_1, d_2) = |L_dis(x_1, y_1) - R_dis(x_1 - d_2, y_1)|,
…,
DSI_dis(x_1, y_1, d_i) = |L_dis(x_1, y_1) - R_dis(x_1 - d_i, y_1)|,
…,
DSI_dis(x_1, y_1, d_{d_max}) = |L_dis(x_1, y_1) - R_dis(x_1 - d_{d_max}, y_1)|,
where 1 ≤ i ≤ d_max, d_i = d_0 + i; DSI_dis(x_1, y_1, d_1), DSI_dis(x_1, y_1, d_2), DSI_dis(x_1, y_1, d_i), and DSI_dis(x_1, y_1, d_{d_max}) denote the disparity-space values of the current first pixel and the current second pixel under the disparity values d_1, d_2, d_i, and d_{d_max}, respectively; and R_dis(x_1 - d_1, y_1), R_dis(x_1 - d_2, y_1), R_dis(x_1 - d_i, y_1), and R_dis(x_1 - d_{d_max}, y_1) denote the pixel values at coordinate positions (x_1 - d_1, y_1), (x_1 - d_2, y_1), (x_1 - d_i, y_1), and (x_1 - d_{d_max}, y_1) in {R_dis(x, y)}, respectively.
2.-b4. Take the next pixel to be processed in {L_dis(x, y)} as the current first pixel and the next pixel to be processed in {R_dis(x, y)} as the current second pixel, then return to step 2.-b2 and continue until all pixels in {L_dis(x, y)} and {R_dis(x, y)} have been processed, obtaining the disparity space image of S_dis, denoted {DSI_dis(x, y, d)}, where DSI_dis(x, y, d) denotes the disparity-space value of the pixel at coordinate position (x, y, d) in {DSI_dis(x, y, d)}; the value of DSI_dis(x, y, d) is the disparity-space value, under disparity value d, of the pixel at coordinate position (x, y) in {L_dis(x, y)} and the pixel at coordinate position (x, y) in {R_dis(x, y)}.
3. Compute the horizontal-direction, vertical-direction, and viewpoint-direction gradients of each pixel in {DSI_org(x, y, d)}; denote the horizontal-direction gradient of the pixel at coordinate position (x, y, d) in {DSI_org(x, y, d)} as gx_org(x, y, d), its vertical-direction gradient as gy_org(x, y, d), and its viewpoint-direction gradient as gd_org(x, y, d).
In this embodiment, the acquisition process of the horizontal-direction, vertical-direction, and viewpoint-direction gradients of each pixel in {DSI_org(x, y, d)} in step 3. is:
3.-a1. Convolve {DSI_org(x, y, d)} with the horizontal gradient operator shown in Figure 2 to obtain the horizontal-direction gradient of each pixel in {DSI_org(x, y, d)}; denote the horizontal-direction gradient of the pixel at coordinate position (x, y, d) as gx_org(x, y, d), where DSI_org(u, v, j) denotes the disparity-space value of the pixel at coordinate position (u, v, j) in {DSI_org(x, y, d)}.
3.-a2. Convolve {DSI_org(x, y, d)} with the vertical gradient operator shown in Figure 3 to obtain the vertical-direction gradient of each pixel in {DSI_org(x, y, d)}; denote the vertical-direction gradient of the pixel at coordinate position (x, y, d) as gy_org(x, y, d).
3.-a3. Convolve {DSI_org(x, y, d)} with the viewpoint gradient operator shown in Figure 4 to obtain the viewpoint-direction gradient of each pixel in {DSI_org(x, y, d)}; denote the viewpoint-direction gradient of the pixel at coordinate position (x, y, d) as gd_org(x, y, d), where sign() is the step function.
In the above steps 3.-a1 to 3.-a3, if u < 1 the value of DSI_org(u, v, j) is replaced by the value of DSI_org(1, v, j); if u > W, by the value of DSI_org(W, v, j); if v < 1, by the value of DSI_org(u, 1, j); if v > H, by the value of DSI_org(u, H, j); if j < 0, by the value of DSI_org(u, v, 0); and if j > d_max, by the value of DSI_org(u, v, d_max). Here DSI_org(1, v, j), DSI_org(W, v, j), DSI_org(u, 1, j), DSI_org(u, H, j), DSI_org(u, v, 0), and DSI_org(u, v, d_max) denote the disparity-space values of the pixels at coordinate positions (1, v, j), (W, v, j), (u, 1, j), (u, H, j), (u, v, 0), and (u, v, d_max) in {DSI_org(x, y, d)}, respectively.
Equally, calculate { DSI
dis(x, y, d) } in horizontal direction gradient, vertical gradient and the viewpoint direction gradient of each pixel, by { DSI
dis(x, y, d) } in coordinate position be that the horizontal direction gradient of the pixel of (x, y, d) is designated as gx
dis(x, y, d), by { DSI
dis(x, y, d) } in coordinate position be that the vertical gradient of the pixel of (x, y, d) is designated as gy
dis(x, y, d), by { DSI
dis(x, y, d) } in coordinate position be that the viewpoint direction gradient of the pixel of (x, y, d) is designated as gd
dis(x, y, d).
In this specific embodiment, step 3. in { DSI
dis(x, y, d) } in the acquisition process of horizontal direction gradient, vertical gradient and viewpoint direction gradient of each pixel be:
3.-b1, employing horizontal gradient operator are as shown in Figure 2 to { DSI
dis(x, y, d) } carry out convolution, obtain { DSI
dis(x, y, d) } in the horizontal direction gradient of each pixel, by { DSI
dis(x, y, d) } in coordinate position be that the horizontal direction gradient of the pixel of (x, y, d) is designated as gx
dis(x, y, d),
Wherein, DSI
dis(u, v, j) represents { DSI
dis(x, y, d) } in coordinate position be the disparity space value of the pixel of (u, v, j).
3.-b2, employing VG (vertical gradient) operator are as shown in Figure 3 to { DSI
dis(x, y, d) } carry out convolution, obtain { DSI
dis(x, y, d) } in the vertical gradient of each pixel, by { DSI
dis(x, y, d) } in coordinate position be that the vertical gradient of the pixel of (x, y, d) is designated as gy
dis(x, y, d),
3-b3. Convolve {DSI_dis(x, y, d)} with the viewpoint gradient operator shown in Figure 4 to obtain the viewpoint-direction gradient of each pixel in {DSI_dis(x, y, d)}; the viewpoint-direction gradient of the pixel at coordinate position (x, y, d) is designated gd_dis(x, y, d), where sign() is the step function.
In steps 3-b1 to 3-b3 above, out-of-range coordinates are handled by border replication: if u < 1, the value of DSI_dis(u, v, j) is replaced by that of DSI_dis(1, v, j); if u > W, by that of DSI_dis(W, v, j); if v < 1, by that of DSI_dis(u, 1, j); if v > H, by that of DSI_dis(u, H, j); if j < 0, by that of DSI_dis(u, v, 0); and if j > d_max, by that of DSI_dis(u, v, d_max). Here DSI_dis(1, v, j), DSI_dis(W, v, j), DSI_dis(u, 1, j), DSI_dis(u, H, j), DSI_dis(u, v, 0) and DSI_dis(u, v, d_max) denote the disparity space values of the pixels at the corresponding coordinate positions in {DSI_dis(x, y, d)}.
4. From the horizontal-direction gradient, vertical-direction gradient and viewpoint-direction gradient of each pixel in {DSI_org(x, y, d)}, calculate the three-dimensional gradient amplitude of each pixel in {DSI_org(x, y, d)}; the three-dimensional gradient amplitude of the pixel at coordinate position (x, y, d) is designated m_org(x, y, d).
Likewise, from the horizontal-direction gradient, vertical-direction gradient and viewpoint-direction gradient of each pixel in {DSI_dis(x, y, d)}, calculate the three-dimensional gradient amplitude of each pixel in {DSI_dis(x, y, d)}; the three-dimensional gradient amplitude of the pixel at coordinate position (x, y, d) is designated m_dis(x, y, d).
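The amplitude formula itself appears to have been an image equation lost in extraction; the customary definition for a three-dimensional gradient amplitude, assumed here, is the Euclidean norm of the three directional gradients:

```python
import numpy as np

def gradient_amplitude(gx, gy, gd):
    # Three-dimensional gradient amplitude as the Euclidean norm of the
    # horizontal, vertical and viewpoint direction gradients (assumed
    # form; the original formula is not reproduced in the text).
    return np.sqrt(gx**2 + gy**2 + gd**2)
```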
5. From the three-dimensional gradient amplitudes of each pixel in {DSI_org(x, y, d)} and {DSI_dis(x, y, d)}, calculate the objective evaluation metric of each pixel in {DSI_dis(x, y, d)}; the objective evaluation metric of the pixel at coordinate position (x, y, d) is designated Q_DSI(x, y, d), where C is a control parameter, set to C = 0.85 in the present embodiment.
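The formula for Q_DSI(x, y, d) is likewise missing from the extracted text. Gradient-based quality metrics of this family conventionally use an SSIM-style similarity between the two amplitudes, stabilized by the control parameter C; the sketch below assumes that form and is not the patent's verbatim formula.

```python
import numpy as np

C = 0.85  # control parameter, value taken from the present embodiment

def q_dsi(m_org, m_dis, c=C):
    # Assumed SSIM-style similarity between the reference and distorted
    # three-dimensional gradient amplitudes: equals 1 where the two
    # amplitudes match and decreases as they diverge; c keeps the
    # ratio stable where both amplitudes are near zero.
    return (2.0 * m_org * m_dis + c) / (m_org**2 + m_dis**2 + c)
```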
6. From the objective evaluation metrics of all pixels in {DSI_dis(x, y, d)}, calculate the objective image quality prediction value of S_dis, designated Q, where Ω denotes the set of coordinate positions of all pixels in {DSI_dis(x, y, d)} and N denotes the total number of pixels in {DSI_dis(x, y, d)}.
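Since Q is defined over the set Ω of all N pixel positions, the pooling step reduces, under the natural reading of the dropped equation, to the arithmetic mean of the per-pixel metrics:

```python
import numpy as np

def pooled_quality(q_map):
    # Average the per-pixel objective evaluation metrics Q_DSI over the
    # whole disparity space image (all N coordinate positions in Omega).
    return float(np.mean(q_map))
```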
Here, the Ningbo University stereo image database and the LIVE stereo image database are used to analyze the correlation between the objective image quality prediction values obtained in the present embodiment and the mean subjective score differences of the distorted stereo images. The Ningbo University database is built from 12 undistorted stereo images and comprises 60 distorted stereo images under JPEG compression at different distortion levels, 60 under JPEG2000 compression, 60 under Gaussian blur, 60 under white Gaussian noise, and 72 under H.264 coding distortion. The LIVE database is built from 20 undistorted stereo images and comprises 80 distorted stereo images under JPEG compression at different distortion levels, 80 under JPEG2000 compression, 45 under Gaussian blur, 80 under white Gaussian noise, and 80 under fast fading distortion.
Four objective parameters commonly used for assessing image quality evaluation methods serve as evaluation indices: the Pearson linear correlation coefficient (PLCC) under the nonlinear regression condition, the Spearman rank-order correlation coefficient (SROCC), the Kendall rank-order correlation coefficient (KROCC), and the root mean squared error (RMSE). PLCC and RMSE reflect the accuracy of the objective evaluation results for the distorted stereo images, while SROCC and KROCC reflect their monotonicity.
The method of the invention is used to calculate the objective image quality prediction value of every distorted stereo image in the Ningbo University stereo image database and in the LIVE stereo image database, and the existing subjective evaluation method is used to obtain the mean subjective score difference of every distorted stereo image in both databases. The objective prediction values calculated by the method of the invention are fitted to the subjective scores with a five-parameter Logistic nonlinear function; higher PLCC, SROCC and KROCC values and a lower RMSE value indicate better correlation between the objective evaluation method and the mean subjective score differences. Tables 1, 2, 3 and 4 give the Pearson correlation coefficients, Spearman correlation coefficients, Kendall correlation coefficients and root mean squared errors between the objective image quality prediction values of the distorted stereo images obtained by the method of the invention and the mean subjective score differences. As can be seen from Tables 1 to 4, the correlation between the final objective prediction values and the mean subjective score differences is very high, showing that the objective evaluation results agree closely with human subjective perception, which is sufficient to demonstrate the validity of the method of the invention.
Fig. 5 gives the scatter plot of the objective image quality prediction values versus the mean subjective score differences for every distorted stereo image in the Ningbo University stereo image database obtained by the method of the invention, and Fig. 6 gives the corresponding scatter plot for the LIVE stereo image database; the more concentrated the scatter points, the better the consistency between the objective evaluation results and subjective perception. As can be seen from Fig. 5 and Fig. 6, the scatter plots obtained by the method of the invention are quite concentrated, and the goodness of fit with the subjective evaluation data is high.
Table 1: Comparison of the Pearson correlation coefficients between the objective image quality prediction values of the distorted stereo images obtained by the method of the invention and the mean subjective score differences
Table 2: Comparison of the Spearman correlation coefficients between the objective image quality prediction values of the distorted stereo images obtained by the method of the invention and the mean subjective score differences
Table 3: Comparison of the Kendall correlation coefficients between the objective image quality prediction values of the distorted stereo images obtained by the method of the invention and the mean subjective score differences
Table 4: Comparison of the root mean squared errors between the objective image quality prediction values of the distorted stereo images obtained by the method of the invention and the mean subjective score differences