CN103824292A - Three-dimensional image quality objective assessment method based on three-dimensional gradient amplitude



Publication number: CN103824292A
Authority: CN (China)
Legal status: Granted
Application number: CN201410065127.1A
Original language: Chinese (zh)
Other versions: CN103824292B (en)
Inventors: Shao Feng (邵枫), Duan Fenfang (段芬芳), Wang Shanshan (王珊珊), Li Fucui (李福翠)
Current assignee: Langxi pinxu Technology Development Co., Ltd
Original assignee: Ningbo University
Application filed by Ningbo University
Priority: CN201410065127.1A
Publication of application CN103824292A; application granted; publication of grant CN103824292B
Legal status: Active

Classifications

  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an objective quality assessment method for stereoscopic images based on three-dimensional gradient magnitude. First, the disparity space image of an undistorted stereoscopic image and the disparity space image of the distorted stereoscopic image to be assessed are computed. Then the horizontal, vertical and viewpoint gradients of each pixel in the disparity space image of the undistorted image are computed to obtain its three-dimensional gradient magnitude, and the same is done for each pixel in the disparity space image of the distorted image to be assessed. Finally, an objective image quality prediction value for the distorted stereoscopic image is obtained from the three-dimensional gradient magnitudes of the pixels in the two disparity space images. The advantage of the method is that the obtained three-dimensional gradient magnitudes are highly stable and reflect changes in stereoscopic image quality well, so the correlation between objective assessment results and subjective perception is effectively improved.

Description

An objective quality assessment method for stereoscopic images based on three-dimensional gradient magnitude
Technical field
The present invention relates to an image quality assessment method, and in particular to an objective quality assessment method for stereoscopic images based on three-dimensional gradient magnitude.
Background technology
With the rapid development of image coding and stereoscopic display technology, stereoscopic imaging has received increasingly wide attention and application and has become a current research hotspot. Stereoscopic imaging exploits the binocular parallax principle of the human eye: the two eyes independently receive the left and right viewpoint images of the same scene, and the brain fuses them into binocular parallax, producing a stereoscopic image with depth perception and realism. Owing to the acquisition system and to storage, compression and transmission equipment, a series of distortions are inevitably introduced into stereoscopic images; and compared with a single-channel image, a stereoscopic image must guarantee the quality of both channels simultaneously, so quality assessment of stereoscopic images is of great significance. At present, however, effective objective methods for evaluating stereoscopic image quality are lacking. Establishing an effective objective quality assessment model for stereoscopic images is therefore of great importance.
Gradient magnitude is an effective descriptor of image structure, and assessment methods based on gradient magnitude have been applied to planar image quality assessment. For stereoscopic image quality assessment based on gradient magnitude, the following key problems must be solved: 1) stereoscopic perception is reflected by disparity or depth information, and how to embed disparity or depth information into the gradient magnitude so that it truly characterizes stereoscopic perception remains one of the difficulties of objective stereoscopic image quality assessment; 2) not all pixels carry strong structural information, and how to select stable structural information for quality assessment without harming stereoscopic perceptual performance is another difficulty that must be solved.
Summary of the invention
The technical problem to be solved by the invention is to provide an objective quality assessment method for stereoscopic images based on three-dimensional gradient magnitude that can effectively improve the correlation between objective assessment results and subjective perception.
The technical solution adopted by the invention to solve the above problem is an objective quality assessment method for stereoscopic images based on three-dimensional gradient magnitude, characterized by comprising the following steps:
1. Let S_org denote the original undistorted stereoscopic image and S_dis the distorted stereoscopic image to be assessed. Denote the left viewpoint image of S_org as {L_org(x, y)} and its right viewpoint image as {R_org(x, y)}; denote the left viewpoint image of S_dis as {L_dis(x, y)} and its right viewpoint image as {R_dis(x, y)}. Here (x, y) is the coordinate position of a pixel in the left and right viewpoint images, 1 ≤ x ≤ W, 1 ≤ y ≤ H, W and H are the width and height of the viewpoint images, and L_org(x, y), R_org(x, y), L_dis(x, y) and R_dis(x, y) are the pixel values at coordinate position (x, y) in the respective images;
2. From the disparity space values, under multiple disparity values, of each pixel in {L_org(x, y)} and the pixel at the corresponding coordinate position in {R_org(x, y)}, obtain the disparity space image of S_org, denoted {DSI_org(x, y, d)}; likewise, from the disparity space values, under multiple disparity values, of each pixel in {L_dis(x, y)} and the pixel at the corresponding coordinate position in {R_dis(x, y)}, obtain the disparity space image of S_dis, denoted {DSI_dis(x, y, d)}. Here DSI_org(x, y, d) and DSI_dis(x, y, d) are the disparity space values of the pixel at coordinate position (x, y, d) in the respective disparity space images, 0 ≤ d ≤ d_max, and d_max is the maximum disparity value;
3. Compute the horizontal direction gradient, vertical direction gradient and viewpoint direction gradient of each pixel in {DSI_org(x, y, d)}; for the pixel at coordinate position (x, y, d) denote them gx_org(x, y, d), gy_org(x, y, d) and gd_org(x, y, d), respectively.
Likewise, compute the horizontal, vertical and viewpoint direction gradients of each pixel in {DSI_dis(x, y, d)}, denoted gx_dis(x, y, d), gy_dis(x, y, d) and gd_dis(x, y, d);
4. From the horizontal, vertical and viewpoint direction gradients of each pixel in {DSI_org(x, y, d)}, compute the three-dimensional gradient magnitude of each pixel, denoted m_org(x, y, d) for the pixel at coordinate position (x, y, d): m_org(x, y, d) = sqrt( (gx_org(x, y, d))^2 + (gy_org(x, y, d))^2 + (gd_org(x, y, d))^2 ).
Likewise, compute the three-dimensional gradient magnitude of each pixel in {DSI_dis(x, y, d)}, denoted m_dis(x, y, d): m_dis(x, y, d) = sqrt( (gx_dis(x, y, d))^2 + (gy_dis(x, y, d))^2 + (gd_dis(x, y, d))^2 );
5. From the three-dimensional gradient magnitudes of each pixel in {DSI_org(x, y, d)} and {DSI_dis(x, y, d)}, compute the objective assessment metric of each pixel in {DSI_dis(x, y, d)}, denoted Q_DSI(x, y, d) for the pixel at coordinate position (x, y, d): Q_DSI(x, y, d) = (2 × m_org(x, y, d) × m_dis(x, y, d) + C) / ((m_org(x, y, d))^2 + (m_dis(x, y, d))^2 + C), where C is a control parameter;
6. From the objective assessment metric of each pixel in {DSI_dis(x, y, d)}, compute the objective image quality prediction value of S_dis, denoted Q: Q = (1/N) × Σ_{(x, y, d) ∈ Ω} Q_DSI(x, y, d), where Ω is the set of coordinate positions of all pixels in {DSI_dis(x, y, d)} and N is the total number of pixels in {DSI_dis(x, y, d)}.
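The six steps above can be sketched end to end in a few lines of NumPy. This is a minimal illustration, not the patented implementation: np.gradient stands in for the patent's 5×5×5 gradient operator templates (Figs. 2-4), the border handling in dsi_volume and the value C = 0.85 are assumptions, and all function names are hypothetical.

```python
import numpy as np

def dsi_volume(left, right, d_max):
    """Step 2: disparity space image DSI(x, y, d) = |L(x, y) - R(x - d, y)|.
    Arrays are indexed [row, column]; an out-of-range x - d is clamped to the
    first column (an assumption; the patent does not state this case here)."""
    h, w = left.shape
    dsi = np.empty((h, w, d_max + 1))
    for d in range(d_max + 1):
        shifted = np.empty_like(right)
        shifted[:, d:] = right[:, :w - d]
        shifted[:, :d] = right[:, :1]   # replicate the first column
        dsi[:, :, d] = np.abs(left - shifted)
    return dsi

def quality_score(dsi_org, dsi_dis, C=0.85):
    """Steps 3-6, with np.gradient standing in for the 5x5x5 operator
    templates of Figs. 2-4 (the value of C is an assumption)."""
    gy_o, gx_o, gd_o = np.gradient(dsi_org)        # step 3: three gradients
    gy_d, gx_d, gd_d = np.gradient(dsi_dis)
    m_org = np.sqrt(gx_o**2 + gy_o**2 + gd_o**2)   # step 4: 3D magnitude
    m_dis = np.sqrt(gx_d**2 + gy_d**2 + gd_d**2)
    q_dsi = (2 * m_org * m_dis + C) / (m_org**2 + m_dis**2 + C)  # step 5
    return q_dsi.mean()                            # step 6: mean pooling over Ω
```

On identical volumes the similarity of step 5 is 1 at every pixel, so the pooled score is 1; attenuating one volume lowers the score, mirroring the SSIM-style form of the metric.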
The disparity space image of S_org in step 2 is obtained as follows:
2.-a1. Define the pixel currently to be processed in {L_org(x, y)} as the current first pixel and the pixel currently to be processed in {R_org(x, y)} as the current second pixel;
2.-a2. Suppose the current first pixel is the pixel at coordinate position (x_1, y_1) in {L_org(x, y)} and the current second pixel is the pixel at coordinate position (x_1, y_1) in {R_org(x, y)}. Take the disparity value d_0 = 0 and compute the disparity space value of the current first and second pixels under d_0, denoted DSI_org(x_1, y_1, d_0): DSI_org(x_1, y_1, d_0) = |L_org(x_1, y_1) − R_org(x_1 − d_0, y_1)|, where 1 ≤ x_1 ≤ W, 1 ≤ y_1 ≤ H, 0 ≤ d_0 ≤ d_max, d_max is the maximum disparity value, L_org(x_1, y_1) is the pixel value at (x_1, y_1) in {L_org(x, y)}, R_org(x_1 − d_0, y_1) is the pixel value at (x_1 − d_0, y_1) in {R_org(x, y)}, and "| |" is the absolute value operator;
2.-a3. Choose d_max disparity values different from d_0, denoted d_1, d_2, …, d_i, …, d_{d_max}, where d_i = d_0 + i and 1 ≤ i ≤ d_max. Then compute the disparity space values of the current first and second pixels under each of these d_max disparity values, denoted DSI_org(x_1, y_1, d_i): DSI_org(x_1, y_1, d_i) = |L_org(x_1, y_1) − R_org(x_1 − d_i, y_1)| for i = 1, 2, …, d_max, where R_org(x_1 − d_i, y_1) is the pixel value at (x_1 − d_i, y_1) in {R_org(x, y)};
2.-a4. Take the next pixel to be processed in {L_org(x, y)} as the current first pixel and the next pixel to be processed in {R_org(x, y)} as the current second pixel, then return to step 2.-a2 and continue until all pixels in {L_org(x, y)} and {R_org(x, y)} have been processed, yielding the disparity space image of S_org, denoted {DSI_org(x, y, d)}, where DSI_org(x, y, d) = |L_org(x, y) − R_org(x − d, y)| is the disparity space value, under disparity value d, of the pixel at coordinate position (x, y) in {L_org(x, y)} and the pixel at coordinate position (x, y) in {R_org(x, y)}.
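The per-pixel procedure of steps 2.-a1 through 2.-a4 can be transcribed literally as a sketch (the helper name is hypothetical, and clamping x_1 − d to the first column is an assumption, since the patent does not state how R_org is read when x_1 − d < 1):

```python
import numpy as np

def dsi_org_per_pixel(L_org, R_org, d_max):
    """Steps 2.-a1 .. 2.-a4: for every pixel position (x1, y1) and every
    disparity d in 0..d_max, store
    DSI_org(x1, y1, d) = |L_org(x1, y1) - R_org(x1 - d, y1)|.
    Arrays are indexed [y, x]; out-of-range x1 - d is clamped to column 0."""
    H, W = L_org.shape
    dsi = np.zeros((H, W, d_max + 1))
    for y1 in range(H):
        for x1 in range(W):
            for d in range(d_max + 1):
                xr = max(x1 - d, 0)   # clamp instead of reading x1 - d < 0
                dsi[y1, x1, d] = abs(float(L_org[y1, x1]) - float(R_org[y1, xr]))
    return dsi
```

The triple loop mirrors the patent's "current first pixel / current second pixel" iteration; a vectorized version would shift R_org column-wise once per disparity value instead.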
The disparity space image of S_dis in step 2 is obtained as follows:
2.-b1. Define the pixel currently to be processed in {L_dis(x, y)} as the current first pixel and the pixel currently to be processed in {R_dis(x, y)} as the current second pixel;
2.-b2. Suppose the current first pixel is the pixel at coordinate position (x_1, y_1) in {L_dis(x, y)} and the current second pixel is the pixel at coordinate position (x_1, y_1) in {R_dis(x, y)}. Take the disparity value d_0 = 0 and compute the disparity space value of the current first and second pixels under d_0, denoted DSI_dis(x_1, y_1, d_0): DSI_dis(x_1, y_1, d_0) = |L_dis(x_1, y_1) − R_dis(x_1 − d_0, y_1)|, where 1 ≤ x_1 ≤ W, 1 ≤ y_1 ≤ H, 0 ≤ d_0 ≤ d_max, d_max is the maximum disparity value, L_dis(x_1, y_1) is the pixel value at (x_1, y_1) in {L_dis(x, y)}, R_dis(x_1 − d_0, y_1) is the pixel value at (x_1 − d_0, y_1) in {R_dis(x, y)}, and "| |" is the absolute value operator;
2.-b3. Choose d_max disparity values different from d_0, denoted d_1, d_2, …, d_i, …, d_{d_max}, where d_i = d_0 + i and 1 ≤ i ≤ d_max. Then compute the disparity space values of the current first and second pixels under each of these d_max disparity values, denoted DSI_dis(x_1, y_1, d_i): DSI_dis(x_1, y_1, d_i) = |L_dis(x_1, y_1) − R_dis(x_1 − d_i, y_1)| for i = 1, 2, …, d_max, where R_dis(x_1 − d_i, y_1) is the pixel value at (x_1 − d_i, y_1) in {R_dis(x, y)};
2.-b4. Take the next pixel to be processed in {L_dis(x, y)} as the current first pixel and the next pixel to be processed in {R_dis(x, y)} as the current second pixel, then return to step 2.-b2 and continue until all pixels in {L_dis(x, y)} and {R_dis(x, y)} have been processed, yielding the disparity space image of S_dis, denoted {DSI_dis(x, y, d)}, where DSI_dis(x, y, d) = |L_dis(x, y) − R_dis(x − d, y)| is the disparity space value, under disparity value d, of the pixel at coordinate position (x, y) in {L_dis(x, y)} and the pixel at coordinate position (x, y) in {R_dis(x, y)}.
The horizontal, vertical and viewpoint direction gradients of each pixel in {DSI_org(x, y, d)} in step 3 are obtained as follows:
3.-a1. Convolve {DSI_org(x, y, d)} with the horizontal gradient operator to obtain the horizontal direction gradient of each pixel, denoted gx_org(x, y, d) for the pixel at coordinate position (x, y, d): gx_org(x, y, d) = Σ_{j=d−2}^{d+2} ( −Σ_{u=x−2}^{x−1} Σ_{v=y−2}^{y+2} DSI_org(u, v, j) + Σ_{u=x+1}^{x+2} Σ_{v=y−2}^{y+2} DSI_org(u, v, j) ), where DSI_org(u, v, j) is the disparity space value of the pixel at coordinate position (u, v, j) in {DSI_org(x, y, d)};
3.-a2. Convolve {DSI_org(x, y, d)} with the vertical gradient operator to obtain the vertical direction gradient of each pixel, denoted gy_org(x, y, d): gy_org(x, y, d) = Σ_{j=d−2}^{d+2} ( −Σ_{u=x−2}^{x+2} Σ_{v=y−2}^{y−1} DSI_org(u, v, j) + Σ_{u=x−2}^{x+2} Σ_{v=y+1}^{y+2} DSI_org(u, v, j) );
3.-a3. Convolve {DSI_org(x, y, d)} with the viewpoint gradient operator to obtain the viewpoint direction gradient of each pixel, denoted gd_org(x, y, d): gd_org(x, y, d) = Σ_{j=d−2}^{d+2} ( sign(j − d) × Σ_{u=x−2}^{x+2} Σ_{v=y−2}^{y+2} DSI_org(u, v, j) ), where sign() is the step function: sign(t) = 1 for t > 0, sign(t) = 0 for t = 0, and sign(t) = −1 for t < 0.
In steps 3.-a1 to 3.-a3, out-of-range coordinates are substituted by the nearest valid ones: if u < 1, the value of DSI_org(u, v, j) is replaced by that of DSI_org(1, v, j); if u > W, by DSI_org(W, v, j); if v < 1, by DSI_org(u, 1, j); if v > H, by DSI_org(u, H, j); if j < 0, by DSI_org(u, v, 0); and if j > d_max, by DSI_org(u, v, d_max).
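Steps 3.-a1 to 3.-a3, together with the out-of-range substitution rules above, can be sketched with edge-replication padding (the helper name is hypothetical; the 5×5×5 neighbourhood sums follow the summation limits of the formulas, and replication padding implements the substitution of out-of-range u, v, j by the nearest valid coordinate):

```python
import numpy as np

def gradients_3d(dsi):
    """Horizontal, vertical and viewpoint direction gradients of steps
    3.-a1 .. 3.-a3; dsi is indexed [y, x, d]."""
    H, W, D = dsi.shape
    p = np.pad(dsi, 2, mode='edge')   # replicate borders by 2 on every axis
    gx = np.zeros_like(dsi); gy = np.zeros_like(dsi); gd = np.zeros_like(dsi)
    for y in range(H):
        for x in range(W):
            for d in range(D):
                blk = p[y:y + 5, x:x + 5, d:d + 5]   # 5x5x5 neighbourhood
                # gx: columns u = x+1..x+2 minus columns u = x-2..x-1
                gx[y, x, d] = blk[:, 3:5, :].sum() - blk[:, 0:2, :].sum()
                # gy: rows v = y+1..y+2 minus rows v = y-2..y-1
                gy[y, x, d] = blk[3:5, :, :].sum() - blk[0:2, :, :].sum()
                # gd: sign(j - d) weights, +1 for j > d, -1 for j < d
                gd[y, x, d] = blk[:, :, 3:5].sum() - blk[:, :, 0:2].sum()
    return gx, gy, gd
```

On a volume that increases linearly along x, gx is constant in the interior (each 5×25-cell slab difference is the same) while gy and gd vanish, which is the expected behaviour of a directional difference operator.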
The horizontal, vertical and viewpoint direction gradients of each pixel in {DSI_dis(x, y, d)} in step 3 are obtained as follows:
3.-b1. Convolve {DSI_dis(x, y, d)} with the horizontal gradient operator to obtain the horizontal direction gradient of each pixel, denoted gx_dis(x, y, d) for the pixel at coordinate position (x, y, d): gx_dis(x, y, d) = Σ_{j=d−2}^{d+2} ( −Σ_{u=x−2}^{x−1} Σ_{v=y−2}^{y+2} DSI_dis(u, v, j) + Σ_{u=x+1}^{x+2} Σ_{v=y−2}^{y+2} DSI_dis(u, v, j) ), where DSI_dis(u, v, j) is the disparity space value of the pixel at coordinate position (u, v, j) in {DSI_dis(x, y, d)};
3.-b2. Convolve {DSI_dis(x, y, d)} with the vertical gradient operator to obtain the vertical direction gradient of each pixel, denoted gy_dis(x, y, d): gy_dis(x, y, d) = Σ_{j=d−2}^{d+2} ( −Σ_{u=x−2}^{x+2} Σ_{v=y−2}^{y−1} DSI_dis(u, v, j) + Σ_{u=x−2}^{x+2} Σ_{v=y+1}^{y+2} DSI_dis(u, v, j) );
3.-b3. Convolve {DSI_dis(x, y, d)} with the viewpoint gradient operator to obtain the viewpoint direction gradient of each pixel, denoted gd_dis(x, y, d): gd_dis(x, y, d) = Σ_{j=d−2}^{d+2} ( sign(j − d) × Σ_{u=x−2}^{x+2} Σ_{v=y−2}^{y+2} DSI_dis(u, v, j) ), where sign() is the step function: sign(t) = 1 for t > 0, sign(t) = 0 for t = 0, and sign(t) = −1 for t < 0.
In steps 3.-b1 to 3.-b3, out-of-range coordinates are substituted by the nearest valid ones: if u < 1, the value of DSI_dis(u, v, j) is replaced by that of DSI_dis(1, v, j); if u > W, by DSI_dis(W, v, j); if v < 1, by DSI_dis(u, 1, j); if v > H, by DSI_dis(u, H, j); if j < 0, by DSI_dis(u, v, 0); and if j > d_max, by DSI_dis(u, v, d_max).
Compared with the prior art, the advantages of the invention are:
1) The method takes into account the influence of disparity on stereoscopic perception and therefore constructs the disparity space images of the original undistorted stereoscopic image and of the distorted stereoscopic image to be assessed. This avoids a complex disparity estimation operation, and the constructed disparity space images reflect well the influence of different disparities on stereoscopic image quality.
2) The method obtains the three-dimensional gradient magnitude of each pixel in a disparity space image by computing the horizontal, vertical and viewpoint direction gradients of each pixel. The resulting three-dimensional gradient magnitudes are highly stable and reflect changes in stereoscopic image quality well, so the correlation between objective assessment results and subjective perception is effectively improved.
Brief description of the drawings
Fig. 1 is the overall implementation block diagram of the method of the invention;
Fig. 2 is the horizontal gradient operator template;
Fig. 3 is the vertical gradient operator template;
Fig. 4 is the viewpoint gradient operator template;
Fig. 5 is the scatter plot of the objective image quality prediction value against the mean subjective score difference for each distorted stereoscopic image in the Ningbo University stereoscopic image database, obtained with the method of the invention;
Fig. 6 is the corresponding scatter plot for the LIVE stereoscopic image database.
Detailed description of embodiments
The invention is described in further detail below with reference to the accompanying drawings and embodiments.
The objective quality assessment method for stereoscopic images based on three-dimensional gradient magnitude proposed by the invention has the overall implementation block diagram shown in Fig. 1 and specifically comprises the following steps:
1. Let S_org denote the original undistorted stereoscopic image and S_dis the distorted stereoscopic image to be assessed. Denote the left viewpoint image of S_org as {L_org(x, y)} and its right viewpoint image as {R_org(x, y)}; denote the left viewpoint image of S_dis as {L_dis(x, y)} and its right viewpoint image as {R_dis(x, y)}. Here (x, y) is the coordinate position of a pixel in the left and right viewpoint images, 1 ≤ x ≤ W, 1 ≤ y ≤ H, W and H are the width and height of the viewpoint images, and L_org(x, y), R_org(x, y), L_dis(x, y) and R_dis(x, y) are the pixel values at coordinate position (x, y) in the respective images.
2. From the disparity space values, under multiple disparity values, of each pixel in {L_org(x, y)} and the pixel at the corresponding coordinate position in {R_org(x, y)}, obtain the disparity space image of S_org, denoted {DSI_org(x, y, d)}; likewise, from the disparity space values, under multiple disparity values, of each pixel in {L_dis(x, y)} and the pixel at the corresponding coordinate position in {R_dis(x, y)}, obtain the disparity space image of S_dis, denoted {DSI_dis(x, y, d)}. Here DSI_org(x, y, d) and DSI_dis(x, y, d) are the disparity space values of the pixel at coordinate position (x, y, d) in the respective disparity space images, 0 ≤ d ≤ d_max, and d_max is the maximum disparity value; in this embodiment d_max = 31 is used.
In this specific embodiment, the disparity space image of S_org in step 2 is obtained as follows:
2.-a1. Define the pixel currently to be processed in {L_org(x, y)} as the current first pixel and the pixel currently to be processed in {R_org(x, y)} as the current second pixel.
2.-a2. Suppose the current first pixel is the pixel at coordinate position (x_1, y_1) in {L_org(x, y)} and the current second pixel is the pixel at coordinate position (x_1, y_1) in {R_org(x, y)}; the current first and second pixels share the same coordinate position. Take the disparity value d_0 = 0 and compute the disparity space value of the current first and second pixels under d_0, denoted DSI_org(x_1, y_1, d_0): DSI_org(x_1, y_1, d_0) = |L_org(x_1, y_1) − R_org(x_1 − d_0, y_1)|, where 1 ≤ x_1 ≤ W, 1 ≤ y_1 ≤ H, 0 ≤ d_0 ≤ d_max, d_max is the maximum disparity value, L_org(x_1, y_1) is the pixel value at (x_1, y_1) in {L_org(x, y)}, R_org(x_1 − d_0, y_1) is the pixel value at (x_1 − d_0, y_1) in {R_org(x, y)}, and "| |" is the absolute value operator.
2.-a3. Choose d_max disparity values different from d_0, denoted d_1, d_2, …, d_i, …, d_{d_max}, where d_i = d_0 + i and 1 ≤ i ≤ d_max. Then compute the disparity space values of the current first and second pixels under each of these d_max disparity values, denoted DSI_org(x_1, y_1, d_i): DSI_org(x_1, y_1, d_i) = |L_org(x_1, y_1) − R_org(x_1 − d_i, y_1)| for i = 1, 2, …, d_max, where R_org(x_1 − d_i, y_1) is the pixel value at (x_1 − d_i, y_1) in {R_org(x, y)}.
2.-a4. Take the next pixel to be processed in {L_org(x, y)} as the current first pixel and the next pixel to be processed in {R_org(x, y)} as the current second pixel, then return to step 2.-a2 and continue until all pixels in {L_org(x, y)} and {R_org(x, y)} have been processed, yielding the disparity space image of S_org, denoted {DSI_org(x, y, d)}, where DSI_org(x, y, d) = |L_org(x, y) − R_org(x − d, y)| is the disparity space value, under disparity value d, of the pixel at coordinate position (x, y) in {L_org(x, y)} and the pixel at coordinate position (x, y) in {R_org(x, y)}.
In this specific embodiment, the acquisition process of the disparity space image of S_dis in step ② is:
②-b1. Define the current pending pixel in {L_dis(x, y)} as the current first pixel and the current pending pixel in {R_dis(x, y)} as the current second pixel.
②-b2. Suppose the current first pixel is the pixel at coordinate position (x_1, y_1) in {L_dis(x, y)} and the current second pixel is the pixel at the same coordinate position (x_1, y_1) in {R_dis(x, y)}. Take the parallax value d_0 = 0, then compute the disparity space value of the current first pixel and the current second pixel under d_0, denoted DSI_dis(x_1, y_1, d_0), as DSI_dis(x_1, y_1, d_0) = |L_dis(x_1, y_1) − R_dis(x_1 − d_0, y_1)|, wherein 1 ≤ x_1 ≤ W, 1 ≤ y_1 ≤ H, 0 ≤ d_0 ≤ d_max, d_max denotes the maximum parallax value, L_dis(x_1, y_1) denotes the pixel value of the pixel at (x_1, y_1) in {L_dis(x, y)}, R_dis(x_1 − d_0, y_1) denotes the pixel value of the pixel at (x_1 − d_0, y_1) in {R_dis(x, y)}, and "| |" is the absolute-value operator.
②-b3. Choose d_max parallax values different from d_0, denoted d_1, d_2, …, d_i, …, d_{d_max}, wherein d_i = d_0 + i and 1 ≤ i ≤ d_max. Then compute the disparity space values of the current first pixel and the current second pixel under these d_max parallax values, denoted DSI_dis(x_1, y_1, d_1), DSI_dis(x_1, y_1, d_2), …, DSI_dis(x_1, y_1, d_i), …, DSI_dis(x_1, y_1, d_{d_max}), wherein DSI_dis(x_1, y_1, d_i) = |L_dis(x_1, y_1) − R_dis(x_1 − d_i, y_1)| denotes the disparity space value of the current first pixel and the current second pixel under parallax value d_i, and R_dis(x_1 − d_i, y_1) denotes the pixel value of the pixel at (x_1 − d_i, y_1) in {R_dis(x, y)}.
②-b4. Take the next pending pixel in {L_dis(x, y)} as the current first pixel and the next pending pixel in {R_dis(x, y)} as the current second pixel, then return to step ②-b2 and continue until all pixels in {L_dis(x, y)} and {R_dis(x, y)} have been processed, obtaining the disparity space image of S_dis, denoted {DSI_dis(x, y, d)}, wherein DSI_dis(x, y, d) denotes the disparity space value of the pixel at coordinate position (x, y, d) in {DSI_dis(x, y, d)}, i.e. the disparity space value of the pixel at (x, y) in {L_dis(x, y)} and the pixel at (x, y) in {R_dis(x, y)} under parallax value d: DSI_dis(x, y, d) = |L_dis(x, y) − R_dis(x − d, y)|.
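The disparity space construction in steps ②-a1 through ②-b4 can be sketched as follows. This is a minimal illustration, not the patented implementation: it assumes grayscale images stored as NumPy arrays indexed [y, x], and clamps columns shifted past the left image border (a boundary choice for x − d < 1 that the text leaves implicit).

```python
import numpy as np

def disparity_space_image(left, right, d_max):
    """DSI(y, x, d) = |L(x, y) - R(x - d, y)| for d = 0 .. d_max.
    Columns shifted past the left border are clamped to column 0
    (an assumption; the patent does not specify this case)."""
    H, W = left.shape
    dsi = np.empty((H, W, d_max + 1), dtype=np.float64)
    cols = np.arange(W)
    for d in range(d_max + 1):
        src = np.clip(cols - d, 0, W - 1)   # x - d, kept inside the image
        dsi[:, :, d] = np.abs(left.astype(np.float64)
                              - right[:, src].astype(np.float64))
    return dsi
```

The same routine serves both S_org and S_dis, since steps ②-a and ②-b differ only in their input image pair.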
③ Compute the horizontal direction gradient, vertical direction gradient and viewpoint direction gradient of each pixel in {DSI_org(x, y, d)}, denoting those of the pixel at coordinate position (x, y, d) in {DSI_org(x, y, d)} as gx_org(x, y, d), gy_org(x, y, d) and gd_org(x, y, d), respectively.
In this specific embodiment, the acquisition process of the horizontal direction gradient, vertical direction gradient and viewpoint direction gradient of each pixel in {DSI_org(x, y, d)} in step ③ is:
③-a1. Convolve {DSI_org(x, y, d)} with the horizontal gradient operator shown in Fig. 2 to obtain the horizontal direction gradient of each pixel in {DSI_org(x, y, d)}, denoting that of the pixel at coordinate position (x, y, d) as gx_org(x, y, d) = Σ_{j=d−2}^{d+2} ( −Σ_{u=x−2}^{x−1} Σ_{v=y−2}^{y+2} DSI_org(u, v, j) + Σ_{u=x+1}^{x+2} Σ_{v=y−2}^{y+2} DSI_org(u, v, j) ), wherein DSI_org(u, v, j) denotes the disparity space value of the pixel at coordinate position (u, v, j) in {DSI_org(x, y, d)}.
③-a2. Convolve {DSI_org(x, y, d)} with the vertical gradient operator shown in Fig. 3 to obtain the vertical direction gradient of each pixel in {DSI_org(x, y, d)}, denoting that of the pixel at coordinate position (x, y, d) as gy_org(x, y, d) = Σ_{j=d−2}^{d+2} ( −Σ_{u=x−2}^{x+2} Σ_{v=y−2}^{y−1} DSI_org(u, v, j) + Σ_{u=x−2}^{x+2} Σ_{v=y+1}^{y+2} DSI_org(u, v, j) ).
③-a3. Convolve {DSI_org(x, y, d)} with the viewpoint gradient operator shown in Fig. 4 to obtain the viewpoint direction gradient of each pixel in {DSI_org(x, y, d)}, denoting that of the pixel at coordinate position (x, y, d) as gd_org(x, y, d) = Σ_{j=d−2}^{d+2} ( sign(j − d) × Σ_{u=x−2}^{x+2} Σ_{v=y−2}^{y+2} DSI_org(u, v, j) ), wherein sign(·) is the step function, sign(t) = 1 for t > 0, sign(t) = 0 for t = 0, and sign(t) = −1 for t < 0.
In steps ③-a1 to ③-a3, out-of-range indices are clamped to the boundary: if u < 1, the value of DSI_org(u, v, j) is substituted by the value of DSI_org(1, v, j); if u > W, by DSI_org(W, v, j); if v < 1, by DSI_org(u, 1, j); if v > H, by DSI_org(u, H, j); if j < 0, by DSI_org(u, v, 0); and if j > d_max, by DSI_org(u, v, d_max).
Likewise, compute the horizontal direction gradient, vertical direction gradient and viewpoint direction gradient of each pixel in {DSI_dis(x, y, d)}, denoting those of the pixel at coordinate position (x, y, d) in {DSI_dis(x, y, d)} as gx_dis(x, y, d), gy_dis(x, y, d) and gd_dis(x, y, d), respectively.
In this specific embodiment, the acquisition process of the horizontal direction gradient, vertical direction gradient and viewpoint direction gradient of each pixel in {DSI_dis(x, y, d)} in step ③ is:
③-b1. Convolve {DSI_dis(x, y, d)} with the horizontal gradient operator shown in Fig. 2 to obtain the horizontal direction gradient of each pixel in {DSI_dis(x, y, d)}, denoting that of the pixel at coordinate position (x, y, d) as gx_dis(x, y, d) = Σ_{j=d−2}^{d+2} ( −Σ_{u=x−2}^{x−1} Σ_{v=y−2}^{y+2} DSI_dis(u, v, j) + Σ_{u=x+1}^{x+2} Σ_{v=y−2}^{y+2} DSI_dis(u, v, j) ), wherein DSI_dis(u, v, j) denotes the disparity space value of the pixel at coordinate position (u, v, j) in {DSI_dis(x, y, d)}.
③-b2. Convolve {DSI_dis(x, y, d)} with the vertical gradient operator shown in Fig. 3 to obtain the vertical direction gradient of each pixel in {DSI_dis(x, y, d)}, denoting that of the pixel at coordinate position (x, y, d) as gy_dis(x, y, d) = Σ_{j=d−2}^{d+2} ( −Σ_{u=x−2}^{x+2} Σ_{v=y−2}^{y−1} DSI_dis(u, v, j) + Σ_{u=x−2}^{x+2} Σ_{v=y+1}^{y+2} DSI_dis(u, v, j) ).
③-b3. Convolve {DSI_dis(x, y, d)} with the viewpoint gradient operator shown in Fig. 4 to obtain the viewpoint direction gradient of each pixel in {DSI_dis(x, y, d)}, denoting that of the pixel at coordinate position (x, y, d) as gd_dis(x, y, d) = Σ_{j=d−2}^{d+2} ( sign(j − d) × Σ_{u=x−2}^{x+2} Σ_{v=y−2}^{y+2} DSI_dis(u, v, j) ), wherein sign(·) is the step function, sign(t) = 1 for t > 0, sign(t) = 0 for t = 0, and sign(t) = −1 for t < 0.
In steps ③-b1 to ③-b3, out-of-range indices are clamped to the boundary: if u < 1, the value of DSI_dis(u, v, j) is substituted by the value of DSI_dis(1, v, j); if u > W, by DSI_dis(W, v, j); if v < 1, by DSI_dis(u, 1, j); if v > H, by DSI_dis(u, H, j); if j < 0, by DSI_dis(u, v, 0); and if j > d_max, by DSI_dis(u, v, d_max).
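The 5×5×5 gradient operators of steps ③-a1 to ③-b3 weight each neighbour by the sign of its offset along the differentiated axis and sum over the other two axes. A compact sketch, using the boundary-replication rule stated above (the [y, x, d] array layout is an assumption of this sketch):

```python
import numpy as np

def gradients_3d(dsi):
    """Horizontal, vertical and viewpoint direction gradients over a
    disparity space image dsi[y, x, d]. Each gradient sums the 5x5x5
    neighbourhood weighted by the sign of the offset along its own axis;
    borders are replicated, matching the patent's clamping rule."""
    H, W, D = dsi.shape
    p = np.pad(dsi.astype(np.float64), 2, mode='edge')  # replicate borders
    gx = np.zeros((H, W, D))
    gy = np.zeros((H, W, D))
    gd = np.zeros((H, W, D))
    for dy in range(-2, 3):
        for dx in range(-2, 3):
            for dd in range(-2, 3):
                blk = p[2+dy:2+dy+H, 2+dx:2+dx+W, 2+dd:2+dd+D]
                gy += np.sign(dy) * blk   # vertical direction gradient
                gx += np.sign(dx) * blk   # horizontal direction gradient
                gd += np.sign(dd) * blk   # viewpoint direction gradient
    return gx, gy, gd
```

Running it on both {DSI_org} and {DSI_dis} yields the six gradient volumes used in step ④.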
④ According to the horizontal direction gradient, vertical direction gradient and viewpoint direction gradient of each pixel in {DSI_org(x, y, d)}, compute the three-dimensional gradient amplitude of each pixel in {DSI_org(x, y, d)}, denoting that of the pixel at coordinate position (x, y, d) as m_org(x, y, d) = √( (gx_org(x, y, d))² + (gy_org(x, y, d))² + (gd_org(x, y, d))² ).
Likewise, according to the horizontal direction gradient, vertical direction gradient and viewpoint direction gradient of each pixel in {DSI_dis(x, y, d)}, compute the three-dimensional gradient amplitude of each pixel in {DSI_dis(x, y, d)}, denoting that of the pixel at coordinate position (x, y, d) as m_dis(x, y, d) = √( (gx_dis(x, y, d))² + (gy_dis(x, y, d))² + (gd_dis(x, y, d))² ).
⑤ According to the three-dimensional gradient amplitudes of each pixel in {DSI_org(x, y, d)} and {DSI_dis(x, y, d)}, compute the objective evaluation metric of each pixel in {DSI_dis(x, y, d)}, denoting that of the pixel at coordinate position (x, y, d) as Q_DSI(x, y, d) = (2 × m_org(x, y, d) × m_dis(x, y, d) + C) / ((m_org(x, y, d))² + (m_dis(x, y, d))² + C), wherein C is a control parameter; C = 0.85 in the present embodiment.
⑥ According to the objective evaluation metric of each pixel in {DSI_dis(x, y, d)}, compute the image quality objective evaluation predicted value of S_dis, denoted Q, as Q = (1/N) × Σ_{(x,y,d)∈Ω} Q_DSI(x, y, d), wherein Ω denotes the set of coordinate positions of all pixels in {DSI_dis(x, y, d)} and N denotes the total number of pixels in {DSI_dis(x, y, d)}.
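Steps ④ to ⑥ amount to an amplitude computation, a per-pixel similarity ratio, and mean pooling over the whole volume, which can be sketched as follows (C = 0.85 follows the embodiment; the triple-of-arrays calling convention is an assumption of this sketch):

```python
import numpy as np

def quality_score(g_org, g_dis, C=0.85):
    """Steps 4-6: three-dimensional gradient amplitudes m_org and m_dis,
    per-pixel objective evaluation metric Q_DSI, then the predicted value
    Q as the mean over all pixels. g_org / g_dis are (gx, gy, gd) triples."""
    m_org = np.sqrt(g_org[0]**2 + g_org[1]**2 + g_org[2]**2)
    m_dis = np.sqrt(g_dis[0]**2 + g_dis[1]**2 + g_dis[2]**2)
    q_dsi = (2.0 * m_org * m_dis + C) / (m_org**2 + m_dis**2 + C)
    return float(q_dsi.mean())   # Q = (1/N) * sum over Omega
```

When the distorted gradients match the original ones exactly, every Q_DSI value is 1 and Q = 1; Q decreases as the gradient amplitudes diverge.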
Here, the Ningbo University stereo image database and the LIVE stereo image database are used to analyze the correlation between the image quality objective evaluation predicted values of the distorted stereo images obtained in the present embodiment and the mean subjective score differences. The Ningbo University stereo image database is built from 12 undistorted stereo images and comprises 60 distorted stereo images under JPEG compression at different distortion levels, 60 under JPEG2000 compression, 60 under Gaussian blur, 60 under white Gaussian noise, and 72 under H.264 coding distortion. The LIVE stereo image database is built from 20 undistorted stereo images and comprises 80 distorted stereo images under JPEG compression at different distortion levels, 80 under JPEG2000 compression, 45 under Gaussian blur, 80 under white Gaussian noise, and 80 under fast-fading distortion.
Here, four objective metrics commonly used to evaluate image quality assessment methods are adopted as evaluation indices: the Pearson linear correlation coefficient (PLCC) under nonlinear regression conditions, the Spearman rank-order correlation coefficient (SROCC), the Kendall rank-order correlation coefficient (KROCC), and the root mean squared error (RMSE). PLCC and RMSE reflect the accuracy of the objective evaluation results for the distorted stereo images, while SROCC and KROCC reflect their monotonicity.
The method of the invention is used to compute the image quality objective evaluation predicted value of each distorted stereo image in the Ningbo University stereo image database and in the LIVE stereo image database, and an existing subjective evaluation method is used to obtain the mean subjective score difference of each distorted stereo image in both databases. The predicted values computed by the method of the invention are fitted with a five-parameter logistic nonlinear fit; higher PLCC, SROCC and KROCC values and a lower RMSE value indicate better correlation between the objective evaluation method and the mean subjective score differences. Tables 1, 2, 3 and 4 give the Pearson correlation coefficients, Spearman correlation coefficients, Kendall correlation coefficients and root mean squared errors between the image quality objective evaluation predicted values of the distorted stereo images obtained by the method of the invention and the mean subjective score differences. As can be seen from Tables 1 to 4, the correlation between the final predicted values and the mean subjective score differences is very high, showing that the objective evaluation results agree closely with human subjective perception and demonstrating the effectiveness of the method of the invention.
Fig. 5 shows the scatter plot of the image quality objective evaluation predicted values versus the mean subjective score differences for the distorted stereo images in the Ningbo University stereo image database obtained by the method of the invention, and Fig. 6 shows the corresponding scatter plot for the LIVE stereo image database; the more concentrated the scatter points, the better the consistency between the objective evaluation results and subjective perception. As can be seen from Fig. 5 and Fig. 6, the scatter plots obtained by the method of the invention are quite concentrated, and the goodness of fit with the subjective evaluation data is high.
Table 1: Pearson correlation coefficients between the image quality objective evaluation predicted values of the distorted stereo images obtained by the method of the invention and the mean subjective score differences
Table 2: Spearman rank-order correlation coefficients between the image quality objective evaluation predicted values of the distorted stereo images obtained by the method of the invention and the mean subjective score differences
Table 3: Kendall rank-order correlation coefficients between the image quality objective evaluation predicted values of the distorted stereo images obtained by the method of the invention and the mean subjective score differences
Table 4: Root mean squared errors between the image quality objective evaluation predicted values of the distorted stereo images obtained by the method of the invention and the mean subjective score differences
Claims (3)

1. A stereo image quality objective evaluation method based on three-dimensional gradient amplitude, characterized by comprising the following steps:
① Let S_org denote the original undistorted stereo image and S_dis denote the distorted stereo image to be evaluated. Denote the left viewpoint image of S_org as {L_org(x, y)} and the right viewpoint image of S_org as {R_org(x, y)}; denote the left viewpoint image of S_dis as {L_dis(x, y)} and the right viewpoint image of S_dis as {R_dis(x, y)}; wherein (x, y) denotes the coordinate position of a pixel in the left and right viewpoint images, 1 ≤ x ≤ W, 1 ≤ y ≤ H, W denotes the width of the left and right viewpoint images, H denotes the height of the left and right viewpoint images, and L_org(x, y), R_org(x, y), L_dis(x, y) and R_dis(x, y) denote the pixel values of the pixels at coordinate position (x, y) in {L_org(x, y)}, {R_org(x, y)}, {L_dis(x, y)} and {R_dis(x, y)}, respectively;
② According to the disparity space values of each pixel in {L_org(x, y)} and the pixel at the corresponding coordinate position in {R_org(x, y)} under multiple parallax values, obtain the disparity space image of S_org, denoted {DSI_org(x, y, d)}; and according to the disparity space values of each pixel in {L_dis(x, y)} and the pixel at the corresponding coordinate position in {R_dis(x, y)} under multiple parallax values, obtain the disparity space image of S_dis, denoted {DSI_dis(x, y, d)}; wherein DSI_org(x, y, d) and DSI_dis(x, y, d) denote the disparity space values of the pixels at coordinate position (x, y, d) in {DSI_org(x, y, d)} and {DSI_dis(x, y, d)}, respectively, 0 ≤ d ≤ d_max, and d_max denotes the maximum parallax value;
③ Compute the horizontal direction gradient, vertical direction gradient and viewpoint direction gradient of each pixel in {DSI_org(x, y, d)}, denoting those of the pixel at coordinate position (x, y, d) in {DSI_org(x, y, d)} as gx_org(x, y, d), gy_org(x, y, d) and gd_org(x, y, d), respectively;
Likewise, compute the horizontal direction gradient, vertical direction gradient and viewpoint direction gradient of each pixel in {DSI_dis(x, y, d)}, denoting those of the pixel at coordinate position (x, y, d) in {DSI_dis(x, y, d)} as gx_dis(x, y, d), gy_dis(x, y, d) and gd_dis(x, y, d), respectively;
④ According to the horizontal direction gradient, vertical direction gradient and viewpoint direction gradient of each pixel in {DSI_org(x, y, d)}, compute the three-dimensional gradient amplitude of each pixel in {DSI_org(x, y, d)}, denoting that of the pixel at coordinate position (x, y, d) as m_org(x, y, d) = √( (gx_org(x, y, d))² + (gy_org(x, y, d))² + (gd_org(x, y, d))² );
Likewise, according to the horizontal direction gradient, vertical direction gradient and viewpoint direction gradient of each pixel in {DSI_dis(x, y, d)}, compute the three-dimensional gradient amplitude of each pixel in {DSI_dis(x, y, d)}, denoting that of the pixel at coordinate position (x, y, d) as m_dis(x, y, d) = √( (gx_dis(x, y, d))² + (gy_dis(x, y, d))² + (gd_dis(x, y, d))² );
⑤ According to the three-dimensional gradient amplitudes of each pixel in {DSI_org(x, y, d)} and {DSI_dis(x, y, d)}, compute the objective evaluation metric of each pixel in {DSI_dis(x, y, d)}, denoting that of the pixel at coordinate position (x, y, d) as Q_DSI(x, y, d) = (2 × m_org(x, y, d) × m_dis(x, y, d) + C) / ((m_org(x, y, d))² + (m_dis(x, y, d))² + C), wherein C is a control parameter;
⑥ According to the objective evaluation metric of each pixel in {DSI_dis(x, y, d)}, compute the image quality objective evaluation predicted value of S_dis, denoted Q, as Q = (1/N) × Σ_{(x,y,d)∈Ω} Q_DSI(x, y, d), wherein Ω denotes the set of coordinate positions of all pixels in {DSI_dis(x, y, d)} and N denotes the total number of pixels in {DSI_dis(x, y, d)}.
2. The stereo image quality objective evaluation method based on three-dimensional gradient amplitude according to claim 1, characterized in that the acquisition process of the disparity space image of S_org in step ② is:
②-a1. Define the current pending pixel in {L_org(x, y)} as the current first pixel and the current pending pixel in {R_org(x, y)} as the current second pixel;
②-a2. Suppose the current first pixel is the pixel at coordinate position (x_1, y_1) in {L_org(x, y)} and the current second pixel is the pixel at coordinate position (x_1, y_1) in {R_org(x, y)}; take the parallax value d_0 = 0, then compute the disparity space value of the current first pixel and the current second pixel under d_0, denoted DSI_org(x_1, y_1, d_0), as DSI_org(x_1, y_1, d_0) = |L_org(x_1, y_1) − R_org(x_1 − d_0, y_1)|, wherein 1 ≤ x_1 ≤ W, 1 ≤ y_1 ≤ H, 0 ≤ d_0 ≤ d_max, d_max denotes the maximum parallax value, L_org(x_1, y_1) denotes the pixel value of the pixel at (x_1, y_1) in {L_org(x, y)}, R_org(x_1 − d_0, y_1) denotes the pixel value of the pixel at (x_1 − d_0, y_1) in {R_org(x, y)}, and "| |" is the absolute-value operator;
②-a3. Choose d_max parallax values different from d_0, denoted d_1, d_2, …, d_i, …, d_{d_max}, wherein d_i = d_0 + i and 1 ≤ i ≤ d_max; then compute the disparity space values of the current first pixel and the current second pixel under these d_max parallax values, denoted DSI_org(x_1, y_1, d_1), DSI_org(x_1, y_1, d_2), …, DSI_org(x_1, y_1, d_i), …, DSI_org(x_1, y_1, d_{d_max}), wherein DSI_org(x_1, y_1, d_i) = |L_org(x_1, y_1) − R_org(x_1 − d_i, y_1)| denotes the disparity space value of the current first pixel and the current second pixel under parallax value d_i, and R_org(x_1 − d_i, y_1) denotes the pixel value of the pixel at (x_1 − d_i, y_1) in {R_org(x, y)};
②-a4. Take the next pending pixel in {L_org(x, y)} as the current first pixel and the next pending pixel in {R_org(x, y)} as the current second pixel, then return to step ②-a2 and continue until all pixels in {L_org(x, y)} and {R_org(x, y)} have been processed, obtaining the disparity space image of S_org, denoted {DSI_org(x, y, d)}, wherein DSI_org(x, y, d) denotes the disparity space value of the pixel at coordinate position (x, y, d) in {DSI_org(x, y, d)}, i.e. the disparity space value of the pixel at (x, y) in {L_org(x, y)} and the pixel at (x, y) in {R_org(x, y)} under parallax value d: DSI_org(x, y, d) = |L_org(x, y) − R_org(x − d, y)|;
Described step is middle S 2. disthe acquisition process of disparity space image be:
2.-b1, by { L dis(x, y) } in current pending pixel be defined as current the first pixel, by { R dis(x, y) } in current pending pixel be defined as current the second pixel;
2.-b2, suppose that current the first pixel is { L dis(x, y) } in coordinate position be (x 1, y 1) pixel, suppose that current the second pixel is { R dis(x, y) } in coordinate position be (x 1, y 1) pixel, get parallax value d 0=0, then calculate current the first pixel and current the second pixel at this parallax value d 0under disparity space value, be designated as DSI dis(x 1, y 1, d 0), DSI dis(x 1, y 1, d 0)=| L dis(x 1, y 1)-R dis(x 1-d 0, y 1) |, wherein, 1≤x 1≤ W, 1≤y 1≤ H, 0≤d 0≤ d max, d maxrepresent maximum disparity value, L dis(x 1, y 1) expression { L dis(x, y) } in coordinate position be (x 1, y 1) the pixel value of pixel, R dis(x 1-d 0, y 1) expression { R dis(x, y) } in coordinate position be (x 1-d 0, y 1) the pixel value of pixel, " || " is the symbol that takes absolute value;
2.-b3, choose d maxindividual and d 0different parallax value, is designated as respectively
Figure FDA0000469762860000044
then calculate respectively current the first pixel and current the second pixel at this d maxdisparity space value under individual different parallax value, corresponding is designated as respectively DSI dis ( x 1 , y 1 , d 1 ) , DSI dis ( x 1 , y 1 , d 2 ) , . . . , DSI dis ( x 1 , y 1 , d i ) , . . . , DSI dis ( x 1 , y 1 , d d max ) , DSI dis(x 1,y 1,d 1)=|L dis(x 1,y 1)-R dis(x 1-d 1,y 1)|,DSI dis(x 1,y 1,d 2)=|L dis(x 1,y 1)-R dis(x 1-d 2,y 1)|,DSI dis(x 1,y 1,d i)=|L dis(x 1,y 1)-R dis(x 1-d i,y 1)|, DSI dis ( x 1 , y 1 , d d max ) = | L dis ( x 1 , y 1 ) - R dis ( x 1 - d d max , y 1 ) | , Wherein, 1≤i≤d max, d i=d 0+ i,
Figure FDA0000469762860000053
dSI dis(x 1, y 1, d 1) represent that current the first pixel and current the second pixel are at parallax value d 1under disparity space value, DSI dis(x 1, y 1, d 2) represent that current the first pixel and current the second pixel are at parallax value d 2under disparity space value, DSI dis(x 1, y 1, d i) represent that current the first pixel and current the second pixel are at parallax value d iunder disparity space value, represent that current the first pixel and current the second pixel are in parallax value
Figure FDA0000469762860000055
under disparity space value, R dis(x 1-d 1, y 1) expression { R dis(x, y) } in coordinate position be (x 1-d 1, y 1) the pixel value of pixel, R dis(x 1-d 2, y 1) expression { R dis(x, y) } in coordinate position be (x 1-d 2, y 1) the pixel value of pixel, R dis(x 1-d i, y 1) expression { R dis(x, y) } in coordinate position be (x 1-d i, y 1) the pixel value of pixel,
Figure FDA0000469762860000056
represent { R dis(x, y) } in coordinate position be the pixel value of pixel;
2.-b4, take the next pending pixel in {L_dis(x,y)} as the current first pixel and the next pending pixel in {R_dis(x,y)} as the current second pixel, then return to step 2.-b2 and continue until all pixels in {L_dis(x,y)} and {R_dis(x,y)} have been processed, obtaining the disparity space image of S_dis, denoted {DSI_dis(x,y,d)}, where DSI_dis(x,y,d) denotes the disparity space value of the pixel whose coordinate position in {DSI_dis(x,y,d)} is (x,y,d); the value of DSI_dis(x,y,d) is the disparity space value, under disparity value d, of the pixel at coordinate position (x,y) in {L_dis(x,y)} and the corresponding pixel in {R_dis(x,y)}, i.e. DSI_dis(x,y,d) = |L_dis(x,y) - R_dis(x-d,y)|;
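The step-by-step construction above (steps 2.-b1 to 2.-b4) amounts to stacking per-disparity absolute-difference maps. A minimal sketch, not part of the claims, using NumPy and 0-based indexing; the function name and the clamping of x-d at the left image border are assumptions:

```python
import numpy as np

def disparity_space_image(L, R, d_max):
    """Build the disparity space image DSI(x, y, d) = |L(x, y) - R(x - d, y)|
    for d = 0 .. d_max (rows index y, columns index x).
    Out-of-range x - d is clamped to the image border (an assumption)."""
    H, W = L.shape
    dsi = np.empty((H, W, d_max + 1), dtype=np.float64)
    xs = np.arange(W)
    for d in range(d_max + 1):
        # R(x - d, y): shift the right view by d columns, border-clamped
        shifted = R[:, np.clip(xs - d, 0, W - 1)]
        dsi[:, :, d] = np.abs(L.astype(np.float64) - shifted)
    return dsi
```

A pixel of the left view is thus compared against d_max + 1 horizontally shifted candidates in the right view, producing the three-dimensional volume the later gradient operators act on.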
3. The objective stereo image quality assessment method based on three-dimensional gradient amplitude according to claims 1 and 2, characterized in that in step 3. the horizontal direction gradient, vertical direction gradient and viewpoint direction gradient of each pixel in {DSI_org(x,y,d)} are obtained as follows:
3.-a1, convolve {DSI_org(x,y,d)} with the horizontal gradient operator to obtain the horizontal direction gradient of each pixel in {DSI_org(x,y,d)}; the horizontal direction gradient of the pixel whose coordinate position is (x,y,d) is denoted gx_org(x,y,d), gx_org(x,y,d) = Σ_{j=d-2}^{d+2} [ -Σ_{u=x-2}^{x-1} Σ_{v=y-2}^{y+2} DSI_org(u,v,j) + Σ_{u=x+1}^{x+2} Σ_{v=y-2}^{y+2} DSI_org(u,v,j) ], where DSI_org(u,v,j) denotes the disparity space value of the pixel whose coordinate position in {DSI_org(x,y,d)} is (u,v,j);
3.-a2, convolve {DSI_org(x,y,d)} with the vertical gradient operator to obtain the vertical direction gradient of each pixel in {DSI_org(x,y,d)}; the vertical direction gradient of the pixel whose coordinate position is (x,y,d) is denoted gy_org(x,y,d), gy_org(x,y,d) = Σ_{j=d-2}^{d+2} [ -Σ_{u=x-2}^{x+2} Σ_{v=y-2}^{y-1} DSI_org(u,v,j) + Σ_{u=x-2}^{x+2} Σ_{v=y+1}^{y+2} DSI_org(u,v,j) ];
3.-a3, convolve {DSI_org(x,y,d)} with the viewpoint gradient operator to obtain the viewpoint direction gradient of each pixel in {DSI_org(x,y,d)}; the viewpoint direction gradient of the pixel whose coordinate position is (x,y,d) is denoted gd_org(x,y,d), gd_org(x,y,d) = Σ_{j=d-2}^{d+2} [ sign(j-d) × Σ_{u=x-2}^{x+2} Σ_{v=y-2}^{y+2} DSI_org(u,v,j) ], where sign() is the sign (step) function: sign(t) = 1 for t > 0, sign(t) = 0 for t = 0, and sign(t) = -1 for t < 0;
In steps 3.-a1 to 3.-a3 above, if u < 1 the value of DSI_org(u,v,j) is replaced by that of DSI_org(1,v,j); if u > W, by DSI_org(W,v,j); if v < 1, by DSI_org(u,1,j); if v > H, by DSI_org(u,H,j); if j < 0, by DSI_org(u,v,0); and if j > d_max, by DSI_org(u,v,d_max), where DSI_org(1,v,j), DSI_org(W,v,j), DSI_org(u,1,j), DSI_org(u,H,j), DSI_org(u,v,0) and DSI_org(u,v,d_max) denote the disparity space values of the pixels at the corresponding coordinate positions in {DSI_org(x,y,d)};
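A minimal sketch of the three gradient operators of steps 3.-a1 to 3.-a3, with the border replication rule above implemented by index clamping (0-based indices); the function names and the per-pixel loop formulation are assumptions, but the coefficients follow the three formulas directly:

```python
import numpy as np

def _clamped(dsi, u, v, j):
    """DSI(u, v, j) with out-of-range indices replaced by the nearest valid
    index, mirroring the border replication rule of the claims (0-based)."""
    H, W, D = dsi.shape
    return dsi[np.clip(v, 0, H - 1), np.clip(u, 0, W - 1), np.clip(j, 0, D - 1)]

def gradients_3d(dsi, x, y, d):
    """Horizontal, vertical and viewpoint-direction gradients at (x, y, d),
    accumulated over the 5x5x5 neighbourhood used in steps 3.-a1 to 3.-a3."""
    gx = gy = gd = 0.0
    for j in range(d - 2, d + 3):
        for u in range(x - 2, x + 3):
            for v in range(y - 2, y + 3):
                val = _clamped(dsi, u, v, j)
                if u != x:                       # -1 for left columns, +1 for right
                    gx += np.sign(u - x) * val
                if v != y:                       # -1 for upper rows, +1 for lower
                    gy += np.sign(v - y) * val
                if j != d:                       # sign(j - d) weights each slice
                    gd += np.sign(j - d) * val
    return gx, gy, gd
```

Each operator is an antisymmetric difference along one axis, summed over the full 5x5 cross-section of the other two axes, so the centre column, row, or slice contributes nothing to its own gradient.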
In step 3. as described, the horizontal direction gradient, vertical direction gradient and viewpoint direction gradient of each pixel in {DSI_dis(x,y,d)} are obtained as follows:
3.-b1, convolve {DSI_dis(x,y,d)} with the horizontal gradient operator to obtain the horizontal direction gradient of each pixel in {DSI_dis(x,y,d)}; the horizontal direction gradient of the pixel whose coordinate position is (x,y,d) is denoted gx_dis(x,y,d), gx_dis(x,y,d) = Σ_{j=d-2}^{d+2} [ -Σ_{u=x-2}^{x-1} Σ_{v=y-2}^{y+2} DSI_dis(u,v,j) + Σ_{u=x+1}^{x+2} Σ_{v=y-2}^{y+2} DSI_dis(u,v,j) ], where DSI_dis(u,v,j) denotes the disparity space value of the pixel whose coordinate position in {DSI_dis(x,y,d)} is (u,v,j);
3.-b2, convolve {DSI_dis(x,y,d)} with the vertical gradient operator to obtain the vertical direction gradient of each pixel in {DSI_dis(x,y,d)}; the vertical direction gradient of the pixel whose coordinate position is (x,y,d) is denoted gy_dis(x,y,d), gy_dis(x,y,d) = Σ_{j=d-2}^{d+2} [ -Σ_{u=x-2}^{x+2} Σ_{v=y-2}^{y-1} DSI_dis(u,v,j) + Σ_{u=x-2}^{x+2} Σ_{v=y+1}^{y+2} DSI_dis(u,v,j) ];
3.-b3, convolve {DSI_dis(x,y,d)} with the viewpoint gradient operator to obtain the viewpoint direction gradient of each pixel in {DSI_dis(x,y,d)}; the viewpoint direction gradient of the pixel whose coordinate position is (x,y,d) is denoted gd_dis(x,y,d), gd_dis(x,y,d) = Σ_{j=d-2}^{d+2} [ sign(j-d) × Σ_{u=x-2}^{x+2} Σ_{v=y-2}^{y+2} DSI_dis(u,v,j) ], where sign() is the sign (step) function: sign(t) = 1 for t > 0, sign(t) = 0 for t = 0, and sign(t) = -1 for t < 0;
In steps 3.-b1 to 3.-b3 above, if u < 1 the value of DSI_dis(u,v,j) is replaced by that of DSI_dis(1,v,j); if u > W, by DSI_dis(W,v,j); if v < 1, by DSI_dis(u,1,j); if v > H, by DSI_dis(u,H,j); if j < 0, by DSI_dis(u,v,0); and if j > d_max, by DSI_dis(u,v,d_max), where DSI_dis(1,v,j), DSI_dis(W,v,j), DSI_dis(u,1,j), DSI_dis(u,H,j), DSI_dis(u,v,0) and DSI_dis(u,v,d_max) denote the disparity space values of the pixels at the corresponding coordinate positions in {DSI_dis(x,y,d)}.
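The three directional gradients are finally combined into the three-dimensional gradient amplitude referred to in the title and abstract. This excerpt does not spell out the combination; the sketch below assumes the usual Euclidean form sqrt(gx² + gy² + gd²), which should be checked against the full specification:

```python
import math

def gradient_amplitude_3d(gx, gy, gd):
    """Three-dimensional gradient amplitude from the horizontal, vertical and
    viewpoint-direction gradients (Euclidean combination -- an assumption)."""
    return math.sqrt(gx * gx + gy * gy + gd * gd)
```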
CN201410065127.1A 2014-02-26 2014-02-26 A kind of objective evaluation method for quality of stereo images based on three-dimensional gradient amplitude Active CN103824292B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410065127.1A CN103824292B (en) 2014-02-26 2014-02-26 A kind of objective evaluation method for quality of stereo images based on three-dimensional gradient amplitude

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410065127.1A CN103824292B (en) 2014-02-26 2014-02-26 A kind of objective evaluation method for quality of stereo images based on three-dimensional gradient amplitude

Publications (2)

Publication Number Publication Date
CN103824292A true CN103824292A (en) 2014-05-28
CN103824292B CN103824292B (en) 2016-09-07

Family

ID=50759333

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410065127.1A Active CN103824292B (en) 2014-02-26 2014-02-26 A kind of objective evaluation method for quality of stereo images based on three-dimensional gradient amplitude

Country Status (1)

Country Link
CN (1) CN103824292B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104820988A (en) * 2015-05-06 2015-08-05 宁波大学 Method for objectively evaluating quality of stereo image without reference
CN109389591A (en) * 2018-09-30 2019-02-26 西安电子科技大学 Color image quality evaluation method based on colored description

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080025400A1 (en) * 2006-07-31 2008-01-31 Kddi Corporation Objective perceptual video quality evaluation apparatus
CN102737380A (en) * 2012-06-05 2012-10-17 宁波大学 Stereo image quality objective evaluation method based on gradient structure tensor

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080025400A1 (en) * 2006-07-31 2008-01-31 Kddi Corporation Objective perceptual video quality evaluation apparatus
CN102737380A (en) * 2012-06-05 2012-10-17 宁波大学 Stereo image quality objective evaluation method based on gradient structure tensor

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
MAALOUF A et al.: "CYCLOP: A Stereo Color Image Quality Assessment Metric", International Conference on Acoustics, Speech and Signal Processing, 27 May 2011 *
JIANG Qiuping et al.: "Objective quality assessment method for stereoscopic images based on disparity space image", Journal of Optoelectronics·Laser, vol. 24, no. 12, 15 December 2013 (2013-12-15), pages 2 - 1 *
DUAN Fenfang et al.: "Objective quality assessment method for stereoscopic images based on three-dimensional structure tensor", Journal of Optoelectronics·Laser, vol. 25, 15 January 2014 (2014-01-15), pages 2 - 1 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104820988A (en) * 2015-05-06 2015-08-05 宁波大学 Method for objectively evaluating quality of stereo image without reference
CN104820988B (en) * 2015-05-06 2017-12-15 宁波大学 One kind is without with reference to objective evaluation method for quality of stereo images
CN109389591A (en) * 2018-09-30 2019-02-26 西安电子科技大学 Color image quality evaluation method based on colored description
CN109389591B (en) * 2018-09-30 2020-11-20 西安电子科技大学 Color descriptor-based color image quality evaluation method

Also Published As

Publication number Publication date
CN103824292B (en) 2016-09-07

Similar Documents

Publication Publication Date Title
CN102333233B (en) Stereo image quality objective evaluation method based on visual perception
CN102209257B (en) Stereo image quality objective evaluation method
CN104243976B (en) A kind of three-dimensional image objective quality evaluation method
CN102708567B (en) Visual perception-based three-dimensional image quality objective evaluation method
CN103136748B (en) The objective evaluation method for quality of stereo images of a kind of feature based figure
CN103413298B (en) A kind of objective evaluation method for quality of stereo images of view-based access control model characteristic
CN104394403B (en) A kind of stereoscopic video quality method for objectively evaluating towards compression artefacts
CN104036501A (en) Three-dimensional image quality objective evaluation method based on sparse representation
CN102843572B (en) Phase-based stereo image quality objective evaluation method
CN102903107B (en) Three-dimensional picture quality objective evaluation method based on feature fusion
CN103780895B (en) A kind of three-dimensional video quality evaluation method
CN102521825B (en) Three-dimensional image quality objective evaluation method based on zero watermark
CN104811691A (en) Stereoscopic video quality objective evaluation method based on wavelet transformation
CN104361583B (en) A kind of method determining asymmetric distortion three-dimensional image objective quality
CN103369348B (en) Three-dimensional image quality objective evaluation method based on regional importance classification
CN102999911B (en) Three-dimensional image quality objective evaluation method based on energy diagrams
CN102737380B (en) Stereo image quality objective evaluation method based on gradient structure tensor
CN102999912B (en) A kind of objective evaluation method for quality of stereo images based on distortion map
CN103200420B (en) Three-dimensional picture quality objective evaluation method based on three-dimensional visual attention
CN104243974B (en) A kind of stereoscopic video quality method for objectively evaluating based on Three-dimensional DCT
CN103903259A (en) Objective three-dimensional image quality evaluation method based on structure and texture separation
CN105898279A (en) Stereoscopic image quality objective evaluation method
CN105069794A (en) Binocular rivalry based totally blind stereo image quality evaluation method
CN103824292A (en) Three-dimensional image quality objective assessment method based on three-dimensional gradient amplitude
CN103281556B (en) Objective evaluation method for stereo image quality on the basis of image decomposition

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20191216

Address after: Room 1,020, Nanxun Science and Technology Pioneering Park, No. 666 Chaoyang Road, Nanxun District, Huzhou City, Zhejiang Province, 313000

Patentee after: Huzhou You Yan Intellectual Property Service Co., Ltd.

Address before: 315211 Zhejiang Province, Ningbo Jiangbei District Fenghua Road No. 818

Patentee before: Ningbo University

TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20200603

Address after: Room 501, office building, market supervision and Administration Bureau, Langchuan Avenue, Jianping Town, Langxi County, Xuancheng City, Anhui Province, 230000

Patentee after: Langxi pinxu Technology Development Co., Ltd

Address before: Room 1,020, Nanxun Science and Technology Pioneering Park, No. 666 Chaoyang Road, Nanxun District, Huzhou City, Zhejiang Province, 313000

Patentee before: Huzhou You Yan Intellectual Property Service Co.,Ltd.

TR01 Transfer of patent right