CN103824292B - Objective quality evaluation method for stereo images based on three-dimensional gradient magnitude - Google Patents

Objective quality evaluation method for stereo images based on three-dimensional gradient magnitude

Info

Publication number
CN103824292B
CN103824292B
Authority
CN
China
Prior art keywords
pixel
dsi
dis
org
value
Prior art date
Legal status
Active
Application number
CN201410065127.1A
Other languages
Chinese (zh)
Other versions
CN103824292A (en)
Inventor
邵枫
段芬芳
王珊珊
李福翠
Current Assignee
Langxi pinxu Technology Development Co., Ltd
Original Assignee
Ningbo University
Priority date
Filing date
Publication date
Application filed by Ningbo University
Priority to CN201410065127.1A
Publication of CN103824292A
Application granted
Publication of CN103824292B


Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an objective quality evaluation method for stereo images based on three-dimensional gradient magnitude. The method first computes the disparity space images of the original undistorted stereo image and of the distorted stereo image to be evaluated. It then obtains the three-dimensional gradient magnitude of each pixel in the disparity space image of the undistorted stereo image from that pixel's horizontal-direction, vertical-direction and viewpoint-direction gradients, and likewise obtains the three-dimensional gradient magnitude of each pixel in the disparity space image of the distorted stereo image. Finally, from the three-dimensional gradient magnitudes of the pixels in the two disparity space images, it obtains the objective image quality prediction value of the distorted stereo image. The advantage is that the obtained three-dimensional gradient magnitude is highly stable and reflects quality changes of the stereo image well, so the correlation between the objective evaluation results and subjective perception can be effectively improved.

Description

Objective quality evaluation method for stereo images based on three-dimensional gradient magnitude
Technical field
The present invention relates to an image quality evaluation method, and more particularly to an objective quality evaluation method for stereo images based on three-dimensional gradient magnitude.
Background technology
With the rapid development of image coding and stereoscopic display technologies, stereo image technology has received increasingly wide attention and application and has become a current research hotspot. Stereo image technology exploits the binocular parallax principle of the human eye: the two eyes independently receive the left and right viewpoint images of the same scene, and the brain fuses them into binocular parallax, so that a stereo image with depth and realism is perceived. Because of the influence of the acquisition system, storage compression and transmission equipment, a stereo image inevitably suffers a series of distortions; and, compared with a single-channel image, a stereo image must guarantee the image quality of two channels at the same time, so quality evaluation of stereo images is of great significance. However, effective objective methods for evaluating stereo image quality are currently lacking, so establishing an effective objective evaluation model for stereo image quality is highly important.
Gradient magnitude is an effective descriptor of image structure information, and gradient-magnitude-based evaluation methods have been applied to planar (2D) image quality evaluation. For gradient-magnitude-based stereo image quality evaluation, the following key problems need to be solved: 1) stereoscopic perception is reflected by parallax or depth information, and how to embed parallax or depth information into the gradient magnitude so that it truly characterizes the perception of depth remains one of the difficulties of objective stereo image quality evaluation; 2) not every pixel carries strong structural information, and how to select stable structural information for quality evaluation without impairing stereoscopic perceptual performance is also a difficulty that needs to be solved in objective stereo image quality evaluation.
Summary of the invention
The technical problem to be solved by the present invention is to provide an objective quality evaluation method for stereo images based on three-dimensional gradient magnitude which can effectively improve the correlation between objective evaluation results and subjective perception.
The technical scheme adopted by the present invention to solve the above technical problem is an objective quality evaluation method for stereo images based on three-dimensional gradient magnitude, characterized by comprising the following steps:
① Let S_org denote the original undistorted stereo image and S_dis the distorted stereo image to be evaluated. Denote the left and right viewpoint images of S_org by {L_org(x,y)} and {R_org(x,y)}, and the left and right viewpoint images of S_dis by {L_dis(x,y)} and {R_dis(x,y)}, where (x,y) denotes the coordinate position of a pixel in the left and right viewpoint images, 1 ≤ x ≤ W, 1 ≤ y ≤ H, W and H denote the width and height of the left and right viewpoint images, and L_org(x,y), R_org(x,y), L_dis(x,y) and R_dis(x,y) denote the pixel values of the pixels at coordinate position (x,y) in {L_org(x,y)}, {R_org(x,y)}, {L_dis(x,y)} and {R_dis(x,y)}, respectively;
② According to the disparity space values of each pixel in {L_org(x,y)} and the pixel at the corresponding coordinate position in {R_org(x,y)} under multiple disparity values, obtain the disparity space image of S_org, denoted {DSI_org(x,y,d)}; and according to the disparity space values of each pixel in {L_dis(x,y)} and the pixel at the corresponding coordinate position in {R_dis(x,y)} under multiple disparity values, obtain the disparity space image of S_dis, denoted {DSI_dis(x,y,d)}, where DSI_org(x,y,d) and DSI_dis(x,y,d) denote the disparity space values of the pixels at coordinate position (x,y,d) in {DSI_org(x,y,d)} and {DSI_dis(x,y,d)}, respectively, 0 ≤ d ≤ d_max, and d_max denotes the maximum disparity value;
③ Calculate the horizontal-direction gradient, vertical-direction gradient and viewpoint-direction gradient of each pixel in {DSI_org(x,y,d)}; denote the horizontal-direction gradient of the pixel at coordinate position (x,y,d) in {DSI_org(x,y,d)} by gx_org(x,y,d), its vertical-direction gradient by gy_org(x,y,d), and its viewpoint-direction gradient by gd_org(x,y,d);
Likewise, calculate the horizontal-direction gradient, vertical-direction gradient and viewpoint-direction gradient of each pixel in {DSI_dis(x,y,d)}; denote the horizontal-direction gradient of the pixel at coordinate position (x,y,d) in {DSI_dis(x,y,d)} by gx_dis(x,y,d), its vertical-direction gradient by gy_dis(x,y,d), and its viewpoint-direction gradient by gd_dis(x,y,d);
④ According to the horizontal-direction, vertical-direction and viewpoint-direction gradients of each pixel in {DSI_org(x,y,d)}, calculate the three-dimensional gradient magnitude of each pixel in {DSI_org(x,y,d)}; denote the three-dimensional gradient magnitude of the pixel at coordinate position (x,y,d) in {DSI_org(x,y,d)} by m_org(x,y,d), m_org(x,y,d) = sqrt( (gx_org(x,y,d))^2 + (gy_org(x,y,d))^2 + (gd_org(x,y,d))^2 );
Likewise, according to the horizontal-direction, vertical-direction and viewpoint-direction gradients of each pixel in {DSI_dis(x,y,d)}, calculate the three-dimensional gradient magnitude of each pixel in {DSI_dis(x,y,d)}; denote the three-dimensional gradient magnitude of the pixel at coordinate position (x,y,d) in {DSI_dis(x,y,d)} by m_dis(x,y,d), m_dis(x,y,d) = sqrt( (gx_dis(x,y,d))^2 + (gy_dis(x,y,d))^2 + (gd_dis(x,y,d))^2 );
⑤ According to the three-dimensional gradient magnitudes of each pixel in {DSI_org(x,y,d)} and {DSI_dis(x,y,d)}, calculate the objective evaluation metric value of each pixel in {DSI_dis(x,y,d)}; denote the objective evaluation metric value of the pixel at coordinate position (x,y,d) in {DSI_dis(x,y,d)} by Q_DSI(x,y,d), Q_DSI(x,y,d) = (2 × m_org(x,y,d) × m_dis(x,y,d) + C) / ((m_org(x,y,d))^2 + (m_dis(x,y,d))^2 + C), where C is a control parameter;
⑥ According to the objective evaluation metric values of all pixels in {DSI_dis(x,y,d)}, calculate the objective image quality prediction value of S_dis, denoted Q, Q = (1/N) Σ_{(x,y,d)∈Ω} Q_DSI(x,y,d), where Ω denotes the set of coordinate positions of all pixels in {DSI_dis(x,y,d)} and N denotes the total number of pixels contained in {DSI_dis(x,y,d)}.
The acquisition procedure of the disparity space image of S_org in step ② is:
②-a1, Define the currently pending pixel in {L_org(x,y)} as the current first pixel and the currently pending pixel in {R_org(x,y)} as the current second pixel;
②-a2, Assume the current first pixel is the pixel at coordinate position (x1,y1) in {L_org(x,y)} and the current second pixel is the pixel at coordinate position (x1,y1) in {R_org(x,y)}. Take the disparity value d0 = 0, then calculate the disparity space value of the current first pixel and the current second pixel under this disparity value d0, denoted DSI_org(x1,y1,d0), DSI_org(x1,y1,d0) = |L_org(x1,y1) - R_org(x1-d0,y1)|, where 1 ≤ x1 ≤ W, 1 ≤ y1 ≤ H, 0 ≤ d0 ≤ d_max, d_max denotes the maximum disparity value, L_org(x1,y1) denotes the pixel value of the pixel at coordinate position (x1,y1) in {L_org(x,y)}, R_org(x1-d0,y1) denotes the pixel value of the pixel at coordinate position (x1-d0,y1) in {R_org(x,y)}, and "| |" denotes taking the absolute value;
②-a3, Choose d_max disparity values different from d0, denoted d1, d2, ..., d_i, ..., d_{d_max}, with d_i = d0 + i and 1 ≤ i ≤ d_max. Then calculate the disparity space values of the current first pixel and the current second pixel under these d_max different disparity values, correspondingly denoted DSI_org(x1,y1,d1), DSI_org(x1,y1,d2), ..., DSI_org(x1,y1,d_i), ..., DSI_org(x1,y1,d_{d_max}), where DSI_org(x1,y1,d_i) = |L_org(x1,y1) - R_org(x1-d_i,y1)| denotes the disparity space value of the current first pixel and the current second pixel under the disparity value d_i, and R_org(x1-d_i,y1) denotes the pixel value of the pixel at coordinate position (x1-d_i,y1) in {R_org(x,y)};
②-a4, Take the next pending pixel in {L_org(x,y)} as the current first pixel and the next pending pixel in {R_org(x,y)} as the current second pixel, then return to step ②-a2 and continue until all pixels in {L_org(x,y)} and {R_org(x,y)} have been processed, obtaining the disparity space image of S_org, denoted {DSI_org(x,y,d)}, where DSI_org(x,y,d) denotes the disparity space value of the pixel at coordinate position (x,y,d) in {DSI_org(x,y,d)}, and the value of DSI_org(x,y,d) is the disparity space value, under the disparity value d, of the pixel at coordinate position (x,y) in {L_org(x,y)} and the pixel at coordinate position (x,y) in {R_org(x,y)};
The acquisition procedure of the disparity space image of S_dis in step ② is:
②-b1, Define the currently pending pixel in {L_dis(x,y)} as the current first pixel and the currently pending pixel in {R_dis(x,y)} as the current second pixel;
②-b2, Assume the current first pixel is the pixel at coordinate position (x1,y1) in {L_dis(x,y)} and the current second pixel is the pixel at coordinate position (x1,y1) in {R_dis(x,y)}. Take the disparity value d0 = 0, then calculate the disparity space value of the current first pixel and the current second pixel under this disparity value d0, denoted DSI_dis(x1,y1,d0), DSI_dis(x1,y1,d0) = |L_dis(x1,y1) - R_dis(x1-d0,y1)|, where 1 ≤ x1 ≤ W, 1 ≤ y1 ≤ H, 0 ≤ d0 ≤ d_max, d_max denotes the maximum disparity value, L_dis(x1,y1) denotes the pixel value of the pixel at coordinate position (x1,y1) in {L_dis(x,y)}, R_dis(x1-d0,y1) denotes the pixel value of the pixel at coordinate position (x1-d0,y1) in {R_dis(x,y)}, and "| |" denotes taking the absolute value;
②-b3, Choose d_max disparity values different from d0, denoted d1, d2, ..., d_i, ..., d_{d_max}, with d_i = d0 + i and 1 ≤ i ≤ d_max. Then calculate the disparity space values of the current first pixel and the current second pixel under these d_max different disparity values, correspondingly denoted DSI_dis(x1,y1,d1), DSI_dis(x1,y1,d2), ..., DSI_dis(x1,y1,d_i), ..., DSI_dis(x1,y1,d_{d_max}), where DSI_dis(x1,y1,d_i) = |L_dis(x1,y1) - R_dis(x1-d_i,y1)| denotes the disparity space value of the current first pixel and the current second pixel under the disparity value d_i, and R_dis(x1-d_i,y1) denotes the pixel value of the pixel at coordinate position (x1-d_i,y1) in {R_dis(x,y)};
②-b4, Take the next pending pixel in {L_dis(x,y)} as the current first pixel and the next pending pixel in {R_dis(x,y)} as the current second pixel, then return to step ②-b2 and continue until all pixels in {L_dis(x,y)} and {R_dis(x,y)} have been processed, obtaining the disparity space image of S_dis, denoted {DSI_dis(x,y,d)}, where DSI_dis(x,y,d) denotes the disparity space value of the pixel at coordinate position (x,y,d) in {DSI_dis(x,y,d)}, and the value of DSI_dis(x,y,d) is the disparity space value, under the disparity value d, of the pixel at coordinate position (x,y) in {L_dis(x,y)} and the pixel at coordinate position (x,y) in {R_dis(x,y)};
The acquisition procedure of the horizontal-direction gradient, vertical-direction gradient and viewpoint-direction gradient of each pixel in {DSI_org(x,y,d)} in step ③ is:
③-a1, Convolve {DSI_org(x,y,d)} with the horizontal gradient operator to obtain the horizontal-direction gradient of each pixel in {DSI_org(x,y,d)}; denote the horizontal-direction gradient of the pixel at coordinate position (x,y,d) in {DSI_org(x,y,d)} by gx_org(x,y,d), gx_org(x,y,d) = Σ_{j=d-2}^{d+2} ( -Σ_{u=x-2}^{x-1} Σ_{v=y-2}^{y+2} DSI_org(u,v,j) + Σ_{u=x+1}^{x+2} Σ_{v=y-2}^{y+2} DSI_org(u,v,j) ), where DSI_org(u,v,j) denotes the disparity space value of the pixel at coordinate position (u,v,j) in {DSI_org(x,y,d)};
③-a2, Convolve {DSI_org(x,y,d)} with the vertical gradient operator to obtain the vertical-direction gradient of each pixel in {DSI_org(x,y,d)}; denote the vertical-direction gradient of the pixel at coordinate position (x,y,d) in {DSI_org(x,y,d)} by gy_org(x,y,d), gy_org(x,y,d) = Σ_{j=d-2}^{d+2} ( -Σ_{u=x-2}^{x+2} Σ_{v=y-2}^{y-1} DSI_org(u,v,j) + Σ_{u=x-2}^{x+2} Σ_{v=y+1}^{y+2} DSI_org(u,v,j) );
③-a3, Convolve {DSI_org(x,y,d)} with the viewpoint gradient operator to obtain the viewpoint-direction gradient of each pixel in {DSI_org(x,y,d)}; denote the viewpoint-direction gradient of the pixel at coordinate position (x,y,d) in {DSI_org(x,y,d)} by gd_org(x,y,d), gd_org(x,y,d) = Σ_{j=d-2}^{d+2} ( sign(j-d) × Σ_{u=x-2}^{x+2} Σ_{v=y-2}^{y+2} DSI_org(u,v,j) ), where sign() is the sign (step) function;
In the above steps ③-a1 to ③-a3, if u < 1 the value of DSI_org(u,v,j) is replaced by the value of DSI_org(1,v,j); if u > W it is replaced by the value of DSI_org(W,v,j); if v < 1 it is replaced by the value of DSI_org(u,1,j); if v > H it is replaced by the value of DSI_org(u,H,j); if j < 0 it is replaced by the value of DSI_org(u,v,0); and if j > d_max it is replaced by the value of DSI_org(u,v,d_max); that is, out-of-range indices are clamped to the nearest boundary position of {DSI_org(x,y,d)};
The acquisition procedure of the horizontal-direction gradient, vertical-direction gradient and viewpoint-direction gradient of each pixel in {DSI_dis(x,y,d)} in step ③ is:
③-b1, Convolve {DSI_dis(x,y,d)} with the horizontal gradient operator to obtain the horizontal-direction gradient of each pixel in {DSI_dis(x,y,d)}; denote the horizontal-direction gradient of the pixel at coordinate position (x,y,d) in {DSI_dis(x,y,d)} by gx_dis(x,y,d), gx_dis(x,y,d) = Σ_{j=d-2}^{d+2} ( -Σ_{u=x-2}^{x-1} Σ_{v=y-2}^{y+2} DSI_dis(u,v,j) + Σ_{u=x+1}^{x+2} Σ_{v=y-2}^{y+2} DSI_dis(u,v,j) ), where DSI_dis(u,v,j) denotes the disparity space value of the pixel at coordinate position (u,v,j) in {DSI_dis(x,y,d)};
③-b2, Convolve {DSI_dis(x,y,d)} with the vertical gradient operator to obtain the vertical-direction gradient of each pixel in {DSI_dis(x,y,d)}; denote the vertical-direction gradient of the pixel at coordinate position (x,y,d) in {DSI_dis(x,y,d)} by gy_dis(x,y,d), gy_dis(x,y,d) = Σ_{j=d-2}^{d+2} ( -Σ_{u=x-2}^{x+2} Σ_{v=y-2}^{y-1} DSI_dis(u,v,j) + Σ_{u=x-2}^{x+2} Σ_{v=y+1}^{y+2} DSI_dis(u,v,j) );
③-b3, Convolve {DSI_dis(x,y,d)} with the viewpoint gradient operator to obtain the viewpoint-direction gradient of each pixel in {DSI_dis(x,y,d)}; denote the viewpoint-direction gradient of the pixel at coordinate position (x,y,d) in {DSI_dis(x,y,d)} by gd_dis(x,y,d), gd_dis(x,y,d) = Σ_{j=d-2}^{d+2} ( sign(j-d) × Σ_{u=x-2}^{x+2} Σ_{v=y-2}^{y+2} DSI_dis(u,v,j) ), where sign() is the sign (step) function;
In the above steps ③-b1 to ③-b3, if u < 1 the value of DSI_dis(u,v,j) is replaced by the value of DSI_dis(1,v,j); if u > W it is replaced by the value of DSI_dis(W,v,j); if v < 1 it is replaced by the value of DSI_dis(u,1,j); if v > H it is replaced by the value of DSI_dis(u,H,j); if j < 0 it is replaced by the value of DSI_dis(u,v,0); and if j > d_max it is replaced by the value of DSI_dis(u,v,d_max); that is, out-of-range indices are clamped to the nearest boundary position of {DSI_dis(x,y,d)}.
Compared with the prior art, the advantages of the present invention are:
1) Considering the influence of parallax on depth perception, the method of the present invention directly constructs the disparity space image of the original undistorted stereo image and the disparity space image of the distorted stereo image to be evaluated. This avoids a complex disparity estimation operation, and the constructed disparity space images can well reflect the influence of different disparities on stereo image quality.
2) By calculating the horizontal-direction, vertical-direction and viewpoint-direction gradients of each pixel in the disparity space images, the method of the present invention obtains the three-dimensional gradient magnitude of each pixel in the disparity space images. The obtained three-dimensional gradient magnitude is highly stable and can well reflect quality changes of the stereo image, so the correlation between objective evaluation results and subjective perception can be effectively improved.
Brief description of the drawings
Fig. 1 is the overall implementation block diagram of the method of the present invention;
Fig. 2 is the horizontal gradient operator template;
Fig. 3 is the vertical gradient operator template;
Fig. 4 is the viewpoint gradient operator template;
Fig. 5 is the scatter plot of the objective image quality prediction values versus the mean subjective score differences for every distorted stereo image in the Ningbo University stereo image database, obtained with the method of the present invention;
Fig. 6 is the scatter plot of the objective image quality prediction values versus the mean subjective score differences for every distorted stereo image in the LIVE stereo image database, obtained with the method of the present invention.
Detailed description of the invention
The present invention is described in further detail below with reference to the accompanying drawings and an embodiment.
The present invention proposes an objective quality evaluation method for stereo images based on three-dimensional gradient magnitude. Its overall implementation block diagram is shown in Fig. 1, and it specifically includes the following steps:
① Let S_org denote the original undistorted stereo image and S_dis the distorted stereo image to be evaluated. Denote the left and right viewpoint images of S_org by {L_org(x,y)} and {R_org(x,y)}, and the left and right viewpoint images of S_dis by {L_dis(x,y)} and {R_dis(x,y)}, where (x,y) denotes the coordinate position of a pixel in the left and right viewpoint images, 1 ≤ x ≤ W, 1 ≤ y ≤ H, W and H denote the width and height of the left and right viewpoint images, and L_org(x,y), R_org(x,y), L_dis(x,y) and R_dis(x,y) denote the pixel values of the pixels at coordinate position (x,y) in {L_org(x,y)}, {R_org(x,y)}, {L_dis(x,y)} and {R_dis(x,y)}, respectively.
② According to the disparity space values of each pixel in {L_org(x,y)} and the pixel at the corresponding coordinate position in {R_org(x,y)} under multiple disparity values, obtain the disparity space image of S_org, denoted {DSI_org(x,y,d)}; and according to the disparity space values of each pixel in {L_dis(x,y)} and the pixel at the corresponding coordinate position in {R_dis(x,y)} under multiple disparity values, obtain the disparity space image of S_dis, denoted {DSI_dis(x,y,d)}, where DSI_org(x,y,d) and DSI_dis(x,y,d) denote the disparity space values of the pixels at coordinate position (x,y,d) in {DSI_org(x,y,d)} and {DSI_dis(x,y,d)}, respectively, 0 ≤ d ≤ d_max, and d_max denotes the maximum disparity value; d_max = 31 is taken in this embodiment.
In this particular embodiment, the acquisition procedure of the disparity space image of S_org in step ② is:
②-a1, Define the currently pending pixel in {L_org(x,y)} as the current first pixel and the currently pending pixel in {R_org(x,y)} as the current second pixel.
②-a2, Assume the current first pixel is the pixel at coordinate position (x1,y1) in {L_org(x,y)} and the current second pixel is the pixel at coordinate position (x1,y1) in {R_org(x,y)}, i.e. the current first pixel and the current second pixel have the same coordinate position. Take the disparity value d0 = 0, then calculate the disparity space value of the current first pixel and the current second pixel under this disparity value d0, denoted DSI_org(x1,y1,d0), DSI_org(x1,y1,d0) = |L_org(x1,y1) - R_org(x1-d0,y1)|, where 1 ≤ x1 ≤ W, 1 ≤ y1 ≤ H, 0 ≤ d0 ≤ d_max, d_max denotes the maximum disparity value, L_org(x1,y1) denotes the pixel value of the pixel at coordinate position (x1,y1) in {L_org(x,y)}, R_org(x1-d0,y1) denotes the pixel value of the pixel at coordinate position (x1-d0,y1) in {R_org(x,y)}, and "| |" denotes taking the absolute value.
②-a3, Choose d_max disparity values different from d0, denoted d1, d2, ..., d_i, ..., d_{d_max}, with d_i = d0 + i and 1 ≤ i ≤ d_max. Then calculate the disparity space values of the current first pixel and the current second pixel under these d_max different disparity values, correspondingly denoted DSI_org(x1,y1,d1), DSI_org(x1,y1,d2), ..., DSI_org(x1,y1,d_i), ..., DSI_org(x1,y1,d_{d_max}), where DSI_org(x1,y1,d_i) = |L_org(x1,y1) - R_org(x1-d_i,y1)| denotes the disparity space value of the current first pixel and the current second pixel under the disparity value d_i, and R_org(x1-d_i,y1) denotes the pixel value of the pixel at coordinate position (x1-d_i,y1) in {R_org(x,y)}.
②-a4, Take the next pending pixel in {L_org(x,y)} as the current first pixel and the next pending pixel in {R_org(x,y)} as the current second pixel, then return to step ②-a2 and continue until all pixels in {L_org(x,y)} and {R_org(x,y)} have been processed, obtaining the disparity space image of S_org, denoted {DSI_org(x,y,d)}, where DSI_org(x,y,d) denotes the disparity space value of the pixel at coordinate position (x,y,d) in {DSI_org(x,y,d)}, and the value of DSI_org(x,y,d) is the disparity space value, under the disparity value d, of the pixel at coordinate position (x,y) in {L_org(x,y)} and the pixel at coordinate position (x,y) in {R_org(x,y)}.
In this particular embodiment, the acquisition procedure of the disparity space image of S_dis in step ② is:
②-b1, Define the currently pending pixel in {L_dis(x,y)} as the current first pixel and the currently pending pixel in {R_dis(x,y)} as the current second pixel.
②-b2, Assume the current first pixel is the pixel at coordinate position (x1,y1) in {L_dis(x,y)} and the current second pixel is the pixel at coordinate position (x1,y1) in {R_dis(x,y)}, i.e. the current first pixel and the current second pixel have the same coordinate position. Take the disparity value d0 = 0, then calculate the disparity space value of the current first pixel and the current second pixel under this disparity value d0, denoted DSI_dis(x1,y1,d0), DSI_dis(x1,y1,d0) = |L_dis(x1,y1) - R_dis(x1-d0,y1)|, where 1 ≤ x1 ≤ W, 1 ≤ y1 ≤ H, 0 ≤ d0 ≤ d_max, d_max denotes the maximum disparity value, L_dis(x1,y1) denotes the pixel value of the pixel at coordinate position (x1,y1) in {L_dis(x,y)}, R_dis(x1-d0,y1) denotes the pixel value of the pixel at coordinate position (x1-d0,y1) in {R_dis(x,y)}, and "| |" denotes taking the absolute value.
②-b3, Choose d_max disparity values different from d0, denoted d1, d2, ..., d_i, ..., d_{d_max}, with d_i = d0 + i and 1 ≤ i ≤ d_max. Then calculate the disparity space values of the current first pixel and the current second pixel under these d_max different disparity values, correspondingly denoted DSI_dis(x1,y1,d1), DSI_dis(x1,y1,d2), ..., DSI_dis(x1,y1,d_i), ..., DSI_dis(x1,y1,d_{d_max}), where DSI_dis(x1,y1,d_i) = |L_dis(x1,y1) - R_dis(x1-d_i,y1)| denotes the disparity space value of the current first pixel and the current second pixel under the disparity value d_i, and R_dis(x1-d_i,y1) denotes the pixel value of the pixel at coordinate position (x1-d_i,y1) in {R_dis(x,y)}.
②-b4, Take the next pending pixel in {L_dis(x,y)} as the current first pixel and the next pending pixel in {R_dis(x,y)} as the current second pixel, then return to step ②-b2 and continue until all pixels in {L_dis(x,y)} and {R_dis(x,y)} have been processed, obtaining the disparity space image of S_dis, denoted {DSI_dis(x,y,d)}, where DSI_dis(x,y,d) denotes the disparity space value of the pixel at coordinate position (x,y,d) in {DSI_dis(x,y,d)}, and the value of DSI_dis(x,y,d) is the disparity space value, under the disparity value d, of the pixel at coordinate position (x,y) in {L_dis(x,y)} and the pixel at coordinate position (x,y) in {R_dis(x,y)}.
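For clarity, the following is a minimal NumPy sketch of the disparity space image construction in steps ②-a1 to ②-b4; the function name build_dsi, the 0-based array indexing, and the clamping of x - d at the left image border (which the text above does not prescribe) are illustrative assumptions.

```python
import numpy as np

def build_dsi(left, right, d_max):
    """Sketch of the disparity space image: DSI(x, y, d) = |L(x, y) - R(x - d, y)|,
    d = 0..d_max. left/right are 2-D grayscale arrays of shape (H, W); 0-based indices."""
    left = left.astype(np.float64)
    right = right.astype(np.float64)
    H, W = left.shape
    dsi = np.zeros((H, W, d_max + 1))
    for d in range(d_max + 1):
        # x - d < 0 is clamped to the first column (assumption; the text above only
        # specifies boundary replication for the later gradient step)
        src_x = np.clip(np.arange(W) - d, 0, W - 1)
        dsi[:, :, d] = np.abs(left - right[:, src_x])
    return dsi

# usage with hypothetical image arrays:
# dsi_org = build_dsi(L_org, R_org, d_max=31)
# dsi_dis = build_dsi(L_dis, R_dis, d_max=31)
```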
③ Calculate the horizontal-direction gradient, vertical-direction gradient and viewpoint-direction gradient of each pixel in {DSI_org(x,y,d)}; denote the horizontal-direction gradient of the pixel at coordinate position (x,y,d) in {DSI_org(x,y,d)} by gx_org(x,y,d), its vertical-direction gradient by gy_org(x,y,d), and its viewpoint-direction gradient by gd_org(x,y,d).
In this particular embodiment, the acquisition procedure of the horizontal-direction gradient, vertical-direction gradient and viewpoint-direction gradient of each pixel in {DSI_org(x,y,d)} in step ③ is:
③-a1, Convolve {DSI_org(x,y,d)} with the horizontal gradient operator shown in Fig. 2 to obtain the horizontal-direction gradient of each pixel in {DSI_org(x,y,d)}; denote the horizontal-direction gradient of the pixel at coordinate position (x,y,d) in {DSI_org(x,y,d)} by gx_org(x,y,d), gx_org(x,y,d) = Σ_{j=d-2}^{d+2} ( -Σ_{u=x-2}^{x-1} Σ_{v=y-2}^{y+2} DSI_org(u,v,j) + Σ_{u=x+1}^{x+2} Σ_{v=y-2}^{y+2} DSI_org(u,v,j) ), where DSI_org(u,v,j) denotes the disparity space value of the pixel at coordinate position (u,v,j) in {DSI_org(x,y,d)}.
③-a2, Convolve {DSI_org(x,y,d)} with the vertical gradient operator shown in Fig. 3 to obtain the vertical-direction gradient of each pixel in {DSI_org(x,y,d)}; denote the vertical-direction gradient of the pixel at coordinate position (x,y,d) in {DSI_org(x,y,d)} by gy_org(x,y,d), gy_org(x,y,d) = Σ_{j=d-2}^{d+2} ( -Σ_{u=x-2}^{x+2} Σ_{v=y-2}^{y-1} DSI_org(u,v,j) + Σ_{u=x-2}^{x+2} Σ_{v=y+1}^{y+2} DSI_org(u,v,j) ).
③-a3, Convolve {DSI_org(x,y,d)} with the viewpoint gradient operator shown in Fig. 4 to obtain the viewpoint-direction gradient of each pixel in {DSI_org(x,y,d)}; denote the viewpoint-direction gradient of the pixel at coordinate position (x,y,d) in {DSI_org(x,y,d)} by gd_org(x,y,d), gd_org(x,y,d) = Σ_{j=d-2}^{d+2} ( sign(j-d) × Σ_{u=x-2}^{x+2} Σ_{v=y-2}^{y+2} DSI_org(u,v,j) ), where sign() is the sign (step) function.
In the above steps ③-a1 to ③-a3, if u < 1 the value of DSI_org(u,v,j) is replaced by the value of DSI_org(1,v,j); if u > W it is replaced by the value of DSI_org(W,v,j); if v < 1 it is replaced by the value of DSI_org(u,1,j); if v > H it is replaced by the value of DSI_org(u,H,j); if j < 0 it is replaced by the value of DSI_org(u,v,0); and if j > d_max it is replaced by the value of DSI_org(u,v,d_max); that is, out-of-range indices are clamped to the nearest boundary position of {DSI_org(x,y,d)}.
Likewise, calculate the horizontal-direction gradient, vertical-direction gradient and viewpoint-direction gradient of each pixel in {DSI_dis(x,y,d)}; denote the horizontal-direction gradient of the pixel at coordinate position (x,y,d) in {DSI_dis(x,y,d)} by gx_dis(x,y,d), its vertical-direction gradient by gy_dis(x,y,d), and its viewpoint-direction gradient by gd_dis(x,y,d).
In this particular embodiment, the acquisition procedure of the horizontal-direction gradient, vertical-direction gradient and viewpoint-direction gradient of each pixel in {DSI_dis(x,y,d)} in step ③ is:
③-b1, Convolve {DSI_dis(x,y,d)} with the horizontal gradient operator shown in Fig. 2 to obtain the horizontal-direction gradient of each pixel in {DSI_dis(x,y,d)}; denote the horizontal-direction gradient of the pixel at coordinate position (x,y,d) in {DSI_dis(x,y,d)} by gx_dis(x,y,d), gx_dis(x,y,d) = Σ_{j=d-2}^{d+2} ( -Σ_{u=x-2}^{x-1} Σ_{v=y-2}^{y+2} DSI_dis(u,v,j) + Σ_{u=x+1}^{x+2} Σ_{v=y-2}^{y+2} DSI_dis(u,v,j) ), where DSI_dis(u,v,j) denotes the disparity space value of the pixel at coordinate position (u,v,j) in {DSI_dis(x,y,d)}.
③-b2, Convolve {DSI_dis(x,y,d)} with the vertical gradient operator shown in Fig. 3 to obtain the vertical-direction gradient of each pixel in {DSI_dis(x,y,d)}; denote the vertical-direction gradient of the pixel at coordinate position (x,y,d) in {DSI_dis(x,y,d)} by gy_dis(x,y,d), gy_dis(x,y,d) = Σ_{j=d-2}^{d+2} ( -Σ_{u=x-2}^{x+2} Σ_{v=y-2}^{y-1} DSI_dis(u,v,j) + Σ_{u=x-2}^{x+2} Σ_{v=y+1}^{y+2} DSI_dis(u,v,j) ).
③-b3, Convolve {DSI_dis(x,y,d)} with the viewpoint gradient operator shown in Fig. 4 to obtain the viewpoint-direction gradient of each pixel in {DSI_dis(x,y,d)}; denote the viewpoint-direction gradient of the pixel at coordinate position (x,y,d) in {DSI_dis(x,y,d)} by gd_dis(x,y,d), gd_dis(x,y,d) = Σ_{j=d-2}^{d+2} ( sign(j-d) × Σ_{u=x-2}^{x+2} Σ_{v=y-2}^{y+2} DSI_dis(u,v,j) ), where sign() is the sign (step) function.
In the above steps ③-b1 to ③-b3, if u < 1 the value of DSI_dis(u,v,j) is replaced by the value of DSI_dis(1,v,j); if u > W it is replaced by the value of DSI_dis(W,v,j); if v < 1 it is replaced by the value of DSI_dis(u,1,j); if v > H it is replaced by the value of DSI_dis(u,H,j); if j < 0 it is replaced by the value of DSI_dis(u,v,0); and if j > d_max it is replaced by the value of DSI_dis(u,v,d_max); that is, out-of-range indices are clamped to the nearest boundary position of {DSI_dis(x,y,d)}.
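The three directional gradients of steps ③-a1 to ③-b3 are sums over a 5×5×5 neighbourhood with replicated (clamped) borders. The following is a hedged NumPy sketch; the helper name dsi_gradients, the 0-based indexing and the interpretation of sign() as the standard sign function are assumptions, and a practical implementation would replace the explicit loops with separable convolutions.

```python
import numpy as np

def dsi_gradients(dsi):
    """Sketch of the horizontal-, vertical- and viewpoint-direction gradients of a
    DSI volume of shape (H, W, d_max + 1), following the formulas of step 3."""
    H, W, D = dsi.shape
    # replicate-pad by 2 along every axis: realizes the rule that out-of-range
    # indices are replaced by the nearest valid position
    pad = np.pad(dsi.astype(np.float64), 2, mode='edge')
    gx = np.zeros((H, W, D))
    gy = np.zeros((H, W, D))
    gd = np.zeros((H, W, D))
    w_d = np.sign(np.arange(-2, 3)).reshape(1, 1, 5)  # assumed form of sign(j - d)
    for y in range(H):
        for x in range(W):
            for d in range(D):
                block = pad[y:y + 5, x:x + 5, d:d + 5]  # 5x5x5 neighbourhood
                gx[y, x, d] = block[:, 3:, :].sum() - block[:, :2, :].sum()  # +right cols, -left cols
                gy[y, x, d] = block[3:, :, :].sum() - block[:2, :, :].sum()  # +rows below, -rows above
                gd[y, x, d] = (block * w_d).sum()                            # sign(j - d) weighting
    return gx, gy, gd
```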
④ According to the horizontal-direction, vertical-direction and viewpoint-direction gradients of each pixel in {DSI_org(x,y,d)}, calculate the three-dimensional gradient magnitude of each pixel in {DSI_org(x,y,d)}; denote the three-dimensional gradient magnitude of the pixel at coordinate position (x,y,d) in {DSI_org(x,y,d)} by m_org(x,y,d), m_org(x,y,d) = sqrt( (gx_org(x,y,d))^2 + (gy_org(x,y,d))^2 + (gd_org(x,y,d))^2 ).
Likewise, according to the horizontal-direction, vertical-direction and viewpoint-direction gradients of each pixel in {DSI_dis(x,y,d)}, calculate the three-dimensional gradient magnitude of each pixel in {DSI_dis(x,y,d)}; denote the three-dimensional gradient magnitude of the pixel at coordinate position (x,y,d) in {DSI_dis(x,y,d)} by m_dis(x,y,d), m_dis(x,y,d) = sqrt( (gx_dis(x,y,d))^2 + (gy_dis(x,y,d))^2 + (gd_dis(x,y,d))^2 ).
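A short NumPy sketch of the three-dimensional gradient magnitude of step ④, using the gradient helper assumed above:

```python
import numpy as np

def gradient_magnitude_3d(gx, gy, gd):
    """Three-dimensional gradient magnitude m(x, y, d) = sqrt(gx^2 + gy^2 + gd^2)."""
    return np.sqrt(gx ** 2 + gy ** 2 + gd ** 2)
```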
⑤ According to the three-dimensional gradient magnitudes of each pixel in {DSI_org(x,y,d)} and {DSI_dis(x,y,d)}, calculate the objective evaluation metric value of each pixel in {DSI_dis(x,y,d)}; denote the objective evaluation metric value of the pixel at coordinate position (x,y,d) in {DSI_dis(x,y,d)} by Q_DSI(x,y,d), Q_DSI(x,y,d) = (2 × m_org(x,y,d) × m_dis(x,y,d) + C) / ((m_org(x,y,d))^2 + (m_dis(x,y,d))^2 + C), where C is a control parameter; C = 0.85 is taken in this embodiment.
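Step ⑤ is a per-pixel similarity between the two gradient-magnitude volumes; a sketch with C = 0.85 as in this embodiment (the function name is an illustrative assumption):

```python
def similarity_map(m_org, m_dis, C=0.85):
    """Per-pixel objective evaluation metric Q_DSI of step 5 (NumPy arrays in, array out)."""
    return (2.0 * m_org * m_dis + C) / (m_org ** 2 + m_dis ** 2 + C)
```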
⑥ According to the objective evaluation metric values of all pixels in {DSI_dis(x,y,d)}, calculate the objective image quality prediction value of S_dis, denoted Q, Q = (1/N) Σ_{(x,y,d)∈Ω} Q_DSI(x,y,d), where Ω denotes the set of coordinate positions of all pixels in {DSI_dis(x,y,d)} and N denotes the total number of pixels contained in {DSI_dis(x,y,d)}.
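Step ⑥ is plain average pooling of the per-pixel metric over the whole volume. Assembling the hedged sketches above gives a hypothetical end-to-end pipeline (not the reference implementation):

```python
def stereo_quality_score(L_org, R_org, L_dis, R_dis, d_max=31, C=0.85):
    """Objective image quality prediction value Q of the distorted stereo image."""
    m_org = gradient_magnitude_3d(*dsi_gradients(build_dsi(L_org, R_org, d_max)))
    m_dis = gradient_magnitude_3d(*dsi_gradients(build_dsi(L_dis, R_dis, d_max)))
    q_map = similarity_map(m_org, m_dis, C)
    return float(q_map.mean())  # Q = (1/N) * sum of Q_DSI over all (x, y, d)
```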
Here, the Ningbo University stereo image database and the LIVE stereo image database are used to analyze the correlation between the objective image quality prediction values of the distorted stereo images obtained in this embodiment and the mean subjective score differences. The Ningbo University stereo image database consists of 12 undistorted stereo images and distorted stereo images at different distortion levels: 60 under JPEG compression, 60 under JPEG2000 compression, 60 under Gaussian blur, 60 under white Gaussian noise, and 72 under H.264 coding distortion. The LIVE stereo image database consists of 20 undistorted stereo images and distorted stereo images at different distortion levels: 80 under JPEG compression, 80 under JPEG2000 compression, 45 under Gaussian blur, 80 under white Gaussian noise, and 80 under fast fading distortion.
Here, four objective indicators commonly used for assessing image quality evaluation methods are adopted: the Pearson linear correlation coefficient (PLCC) under nonlinear regression conditions, the Spearman rank-order correlation coefficient (SROCC), the Kendall rank-order correlation coefficient (KROCC), and the root mean squared error (RMSE). PLCC and RMSE reflect the accuracy of the objective evaluation results for the distorted stereo images, while SROCC and KROCC reflect their monotonicity.
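Assuming the objective scores and subjective scores are available as arrays, the four indicators can be computed with standard SciPy routines; the helper name evaluation_indicators is illustrative, and PLCC/RMSE are here computed on the regression-fitted scores described in the next paragraph:

```python
import numpy as np
from scipy import stats

def evaluation_indicators(objective_fitted, objective_raw, dmos):
    """PLCC and RMSE on the fitted scores; SROCC and KROCC on the raw scores."""
    objective_fitted = np.asarray(objective_fitted, dtype=float)
    objective_raw = np.asarray(objective_raw, dtype=float)
    dmos = np.asarray(dmos, dtype=float)
    plcc, _ = stats.pearsonr(objective_fitted, dmos)
    srocc, _ = stats.spearmanr(objective_raw, dmos)
    krocc, _ = stats.kendalltau(objective_raw, dmos)
    rmse = float(np.sqrt(np.mean((objective_fitted - dmos) ** 2)))
    return plcc, srocc, krocc, rmse
```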
The method of the present invention is used to calculate the objective image quality prediction value of every distorted stereo image in the Ningbo University stereo image database and in the LIVE stereo image database, and the mean subjective score difference of every distorted stereo image in the two databases is obtained with existing subjective evaluation methods. The objective image quality prediction values calculated by the method of the present invention are then fitted to the subjective scores with a five-parameter logistic function (nonlinear regression); the higher the PLCC, SROCC and KROCC values and the lower the RMSE value, the better the correlation between the objective evaluation method and the mean subjective score differences. Tables 1, 2, 3 and 4 give the Pearson correlation coefficient, the Spearman correlation coefficient, the Kendall correlation coefficient and the root mean squared error between the objective image quality prediction values of the distorted stereo images obtained with the method of the present invention and the mean subjective score differences. As can be seen from Tables 1 to 4, the correlation between the final objective image quality prediction values obtained with the method of the present invention and the mean subjective score differences is very high, indicating that the objective evaluation results agree well with subjective human perception, which is sufficient to demonstrate the effectiveness of the method of the present invention.
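The exact form of the five-parameter logistic function is not reproduced in this text; a sketch using the form commonly adopted for this purpose in image quality assessment (an assumption) is:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic5(q, b1, b2, b3, b4, b5):
    # commonly used five-parameter logistic mapping (assumed form, not from this text)
    return b1 * (0.5 - 1.0 / (1.0 + np.exp(b2 * (q - b3)))) + b4 * q + b5

def fit_objective_to_dmos(q_values, dmos):
    """Map objective scores onto the subjective scale before computing PLCC/RMSE."""
    q_values = np.asarray(q_values, dtype=float)
    dmos = np.asarray(dmos, dtype=float)
    p0 = [np.max(dmos), 1.0, np.mean(q_values), 1.0, np.mean(dmos)]  # rough initial guess
    params, _ = curve_fit(logistic5, q_values, dmos, p0=p0, maxfev=20000)
    return logistic5(q_values, *params)
```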
Fig. 5 shows the scatter plot of the objective image quality prediction values versus the mean subjective score differences for every distorted stereo image in the Ningbo University stereo image database obtained with the method of the present invention, and Fig. 6 shows the corresponding scatter plot for the LIVE stereo image database; the more concentrated the scatter plot, the better the consistency between the objective evaluation results and subjective perception. As can be seen from Fig. 5 and Fig. 6, the scatter plots obtained with the method of the present invention are relatively concentrated and the goodness of fit with the subjective evaluation data is high.
Table 1: Comparison of the Pearson correlation coefficients between the objective image quality prediction values of the distorted stereo images obtained with the method of the present invention and the mean subjective score differences
Table 2: Comparison of the Spearman correlation coefficients between the objective image quality prediction values of the distorted stereo images obtained with the method of the present invention and the mean subjective score differences
Table 3: Comparison of the Kendall correlation coefficients between the objective image quality prediction values of the distorted stereo images obtained with the method of the present invention and the mean subjective score differences
Table 4: Comparison of the root mean squared errors between the objective image quality prediction values of the distorted stereo images obtained with the method of the present invention and the mean subjective score differences

Claims (2)

1. An objective quality evaluation method for stereo images based on three-dimensional gradient magnitude, characterized by comprising the following steps:
① Let S_org denote the original undistorted stereo image and S_dis the distorted stereo image to be evaluated. Denote the left and right viewpoint images of S_org by {L_org(x,y)} and {R_org(x,y)}, and the left and right viewpoint images of S_dis by {L_dis(x,y)} and {R_dis(x,y)}, where (x,y) denotes the coordinate position of a pixel in the left and right viewpoint images, 1 ≤ x ≤ W, 1 ≤ y ≤ H, W and H denote the width and height of the left and right viewpoint images, and L_org(x,y), R_org(x,y), L_dis(x,y) and R_dis(x,y) denote the pixel values of the pixels at coordinate position (x,y) in {L_org(x,y)}, {R_org(x,y)}, {L_dis(x,y)} and {R_dis(x,y)}, respectively;
② According to the disparity space values of each pixel in {L_org(x,y)} and the pixel at the corresponding coordinate position in {R_org(x,y)} under multiple disparity values, obtain the disparity space image of S_org, denoted {DSI_org(x,y,d)}; and according to the disparity space values of each pixel in {L_dis(x,y)} and the pixel at the corresponding coordinate position in {R_dis(x,y)} under multiple disparity values, obtain the disparity space image of S_dis, denoted {DSI_dis(x,y,d)}, where DSI_org(x,y,d) and DSI_dis(x,y,d) denote the disparity space values of the pixels at coordinate position (x,y,d) in {DSI_org(x,y,d)} and {DSI_dis(x,y,d)}, respectively, 0 ≤ d ≤ d_max, and d_max denotes the maximum disparity value;
③ Calculate the horizontal-direction gradient, vertical-direction gradient and viewpoint-direction gradient of each pixel in {DSI_org(x,y,d)}; denote the horizontal-direction gradient of the pixel at coordinate position (x,y,d) in {DSI_org(x,y,d)} by gx_org(x,y,d), its vertical-direction gradient by gy_org(x,y,d), and its viewpoint-direction gradient by gd_org(x,y,d);
Likewise, calculate the horizontal-direction gradient, vertical-direction gradient and viewpoint-direction gradient of each pixel in {DSI_dis(x,y,d)}; denote the horizontal-direction gradient of the pixel at coordinate position (x,y,d) in {DSI_dis(x,y,d)} by gx_dis(x,y,d), its vertical-direction gradient by gy_dis(x,y,d), and its viewpoint-direction gradient by gd_dis(x,y,d);
The acquisition procedure of the horizontal-direction gradient, vertical-direction gradient and viewpoint-direction gradient of each pixel in {DSI_org(x,y,d)} in step ③ is:
③-a1, Convolve {DSI_org(x,y,d)} with the horizontal gradient operator to obtain the horizontal-direction gradient of each pixel in {DSI_org(x,y,d)}, i.e. gx_org(x,y,d) = Σ_{j=d-2}^{d+2} ( -Σ_{u=x-2}^{x-1} Σ_{v=y-2}^{y+2} DSI_org(u,v,j) + Σ_{u=x+1}^{x+2} Σ_{v=y-2}^{y+2} DSI_org(u,v,j) ), where DSI_org(u,v,j) denotes the disparity space value of the pixel at coordinate position (u,v,j) in {DSI_org(x,y,d)};
③-a2, Convolve {DSI_org(x,y,d)} with the vertical gradient operator to obtain the vertical-direction gradient of each pixel in {DSI_org(x,y,d)}, i.e. gy_org(x,y,d) = Σ_{j=d-2}^{d+2} ( -Σ_{u=x-2}^{x+2} Σ_{v=y-2}^{y-1} DSI_org(u,v,j) + Σ_{u=x-2}^{x+2} Σ_{v=y+1}^{y+2} DSI_org(u,v,j) );
③-a3, Convolve {DSI_org(x,y,d)} with the viewpoint gradient operator to obtain the viewpoint-direction gradient of each pixel in {DSI_org(x,y,d)}, i.e. gd_org(x,y,d) = Σ_{j=d-2}^{d+2} ( sign(j-d) × Σ_{u=x-2}^{x+2} Σ_{v=y-2}^{y+2} DSI_org(u,v,j) ), where sign() is the sign (step) function;
In the above steps ③-a1 to ③-a3, if u < 1 the value of DSI_org(u,v,j) is replaced by the value of DSI_org(1,v,j); if u > W it is replaced by the value of DSI_org(W,v,j); if v < 1 it is replaced by the value of DSI_org(u,1,j); if v > H it is replaced by the value of DSI_org(u,H,j); if j < 0 it is replaced by the value of DSI_org(u,v,0); and if j > d_max it is replaced by the value of DSI_org(u,v,d_max); that is, out-of-range indices are clamped to the nearest boundary position of {DSI_org(x,y,d)};
The acquisition procedure of the horizontal-direction gradient, vertical-direction gradient and viewpoint-direction gradient of each pixel in {DSI_dis(x,y,d)} in step ③ is:
③-b1, Convolve {DSI_dis(x,y,d)} with the horizontal gradient operator to obtain the horizontal-direction gradient of each pixel in {DSI_dis(x,y,d)}, i.e. gx_dis(x,y,d) = Σ_{j=d-2}^{d+2} ( -Σ_{u=x-2}^{x-1} Σ_{v=y-2}^{y+2} DSI_dis(u,v,j) + Σ_{u=x+1}^{x+2} Σ_{v=y-2}^{y+2} DSI_dis(u,v,j) ), where DSI_dis(u,v,j) denotes the disparity space value of the pixel at coordinate position (u,v,j) in {DSI_dis(x,y,d)};
③-b2, Convolve {DSI_dis(x,y,d)} with the vertical gradient operator to obtain the vertical-direction gradient of each pixel in {DSI_dis(x,y,d)}, i.e. gy_dis(x,y,d) = Σ_{j=d-2}^{d+2} ( -Σ_{u=x-2}^{x+2} Σ_{v=y-2}^{y-1} DSI_dis(u,v,j) + Σ_{u=x-2}^{x+2} Σ_{v=y+1}^{y+2} DSI_dis(u,v,j) );
③-b3, Convolve {DSI_dis(x,y,d)} with the viewpoint gradient operator to obtain the viewpoint-direction gradient of each pixel in {DSI_dis(x,y,d)}, i.e. gd_dis(x,y,d) = Σ_{j=d-2}^{d+2} ( sign(j-d) × Σ_{u=x-2}^{x+2} Σ_{v=y-2}^{y+2} DSI_dis(u,v,j) ), where sign() is the sign (step) function;
In the above steps ③-b1 to ③-b3, if u < 1 the value of DSI_dis(u,v,j) is replaced by the value of DSI_dis(1,v,j); if u > W it is replaced by the value of DSI_dis(W,v,j); if v < 1 it is replaced by the value of DSI_dis(u,1,j); if v > H it is replaced by the value of DSI_dis(u,H,j); if j < 0 it is replaced by the value of DSI_dis(u,v,0); and if j > d_max it is replaced by the value of DSI_dis(u,v,d_max); that is, out-of-range indices are clamped to the nearest boundary position of {DSI_dis(x,y,d)};
4. According to the horizontal direction gradient, vertical direction gradient and viewpoint direction gradient of each pixel in {DSIorg(x,y,d)}, calculate the three-dimensional gradient amplitude of each pixel in {DSIorg(x,y,d)}; the three-dimensional gradient amplitude of the pixel with coordinate position (x,y,d) in {DSIorg(x,y,d)} is denoted as morg(x,y,d), which equals the square root of the sum of the squares of the horizontal direction gradient, vertical direction gradient and viewpoint direction gradient of that pixel;
Likewise, according to the horizontal direction gradient, vertical direction gradient and viewpoint direction gradient of each pixel in {DSIdis(x,y,d)}, calculate the three-dimensional gradient amplitude of each pixel in {DSIdis(x,y,d)}; the three-dimensional gradient amplitude of the pixel with coordinate position (x,y,d) in {DSIdis(x,y,d)} is denoted as mdis(x,y,d), which equals the square root of the sum of the squares of the horizontal direction gradient, vertical direction gradient and viewpoint direction gradient of that pixel;
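Reading the three-dimensional gradient amplitude as the Euclidean norm of the (horizontal, vertical, viewpoint) gradient vector at each pixel, a one-line NumPy sketch (function name illustrative) is:

    import numpy as np

    def gradient_magnitude(gx, gy, gd):
        # Three-dimensional gradient amplitude at every pixel of the DSI volume.
        return np.sqrt(gx ** 2 + gy ** 2 + gd ** 2)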
5. According to the three-dimensional gradient amplitude of each pixel in {DSIorg(x,y,d)} and in {DSIdis(x,y,d)}, calculate the objective evaluation metric of each pixel in {DSIdis(x,y,d)}; the objective evaluation metric of the pixel with coordinate position (x,y,d) in {DSIdis(x,y,d)} is denoted as QDSI(x,y,d), where C is a control parameter;
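The per-pixel expression for QDSI is not reproduced in this text; the sketch below assumes the usual gradient-similarity form with control parameter C, which matches the quantities named here but is an assumption rather than the patent's verbatim formula:

    import numpy as np

    def similarity_map(m_org, m_dis, C):
        # Per-pixel objective evaluation metric Q_DSI(x, y, d) comparing the
        # undistorted and distorted three-dimensional gradient amplitude volumes
        # (assumed gradient-similarity form).
        return (2.0 * m_org * m_dis + C) / (m_org ** 2 + m_dis ** 2 + C)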
6. According to the objective evaluation metric of each pixel in {DSIdis(x,y,d)}, calculate the image quality objective evaluation predictive value of Sdis, denoted as Q, with Q = (1/N)·Σ(x,y,d)∈Ω QDSI(x,y,d), where Ω denotes the set of coordinate positions of all pixels in {DSIdis(x,y,d)} and N denotes the total number of pixels contained in {DSIdis(x,y,d)}.
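Pooling the per-pixel metric into the final predictive value is then an average over the N pixels of the DSI volume; a sketch using similarity_map() from above (names illustrative):

    def predicted_quality(m_org, m_dis, C):
        # Q = (1/N) * sum of Q_DSI(x, y, d) over all (x, y, d) in Omega.
        return similarity_map(m_org, m_dis, C).mean()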
The objective evaluation method for quality of stereo images based on three-dimensional gradient amplitude according to claim 1, characterized in that in step 2., the disparity space image of Sorg is acquired as follows:
2.-a1. Define the pixel currently to be processed in {Lorg(x,y)} as the current first pixel, and define the pixel currently to be processed in {Rorg(x,y)} as the current second pixel;
2.-a2. Assume that the current first pixel is the pixel with coordinate position (x1,y1) in {Lorg(x,y)} and that the current second pixel is the pixel with coordinate position (x1,y1) in {Rorg(x,y)}; take the parallax value d0 = 0, then calculate the disparity space value of the current first pixel and the current second pixel under this parallax value d0, denoted as DSIorg(x1,y1,d0), DSIorg(x1,y1,d0) = |Lorg(x1,y1)-Rorg(x1-d0,y1)|, where 1 ≤ x1 ≤ W, 1 ≤ y1 ≤ H, 0 ≤ d0 ≤ dmax, dmax denotes the maximum parallax value, Lorg(x1,y1) denotes the pixel value of the pixel with coordinate position (x1,y1) in {Lorg(x,y)}, Rorg(x1-d0,y1) denotes the pixel value of the pixel with coordinate position (x1-d0,y1) in {Rorg(x,y)}, and "| |" is the absolute-value symbol;
2.-a3. Choose dmax parallax values different from d0, denoted as d1, d2, …, d_dmax respectively; then calculate the disparity space values of the current first pixel and the current second pixel under these dmax different parallax values, correspondingly denoted as DSIorg(x1,y1,d1), DSIorg(x1,y1,d2), …, DSIorg(x1,y1,d_dmax), where DSIorg(x1,y1,d1) = |Lorg(x1,y1)-Rorg(x1-d1,y1)|, DSIorg(x1,y1,d2) = |Lorg(x1,y1)-Rorg(x1-d2,y1)|, DSIorg(x1,y1,di) = |Lorg(x1,y1)-Rorg(x1-di,y1)|, and DSIorg(x1,y1,d_dmax) = |Lorg(x1,y1)-Rorg(x1-d_dmax,y1)|; here 1 ≤ i ≤ dmax and di = d0 + i; DSIorg(x1,y1,d1) denotes the disparity space value of the current first pixel and the current second pixel under the parallax value d1, DSIorg(x1,y1,d2) denotes their disparity space value under the parallax value d2, DSIorg(x1,y1,di) denotes their disparity space value under the parallax value di, and DSIorg(x1,y1,d_dmax) denotes their disparity space value under the parallax value d_dmax; Rorg(x1-d1,y1) denotes the pixel value of the pixel with coordinate position (x1-d1,y1) in {Rorg(x,y)}, Rorg(x1-d2,y1) denotes the pixel value of the pixel with coordinate position (x1-d2,y1) in {Rorg(x,y)}, Rorg(x1-di,y1) denotes the pixel value of the pixel with coordinate position (x1-di,y1) in {Rorg(x,y)}, and Rorg(x1-d_dmax,y1) denotes the pixel value of the pixel with coordinate position (x1-d_dmax,y1) in {Rorg(x,y)};
2.-a4. Take the next pixel to be processed in {Lorg(x,y)} as the current first pixel and the next pixel to be processed in {Rorg(x,y)} as the current second pixel, then return to step 2.-a2 and continue until all pixels in {Lorg(x,y)} and {Rorg(x,y)} have been processed, thereby obtaining the disparity space image of Sorg, denoted as {DSIorg(x,y,d)}, where DSIorg(x,y,d) denotes the disparity space value of the pixel with coordinate position (x,y,d) in {DSIorg(x,y,d)}, i.e. the disparity space value under the parallax value d of the pixel with coordinate position (x,y) in {Lorg(x,y)} and the pixel with coordinate position (x,y) in {Rorg(x,y)};
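A compact sketch of steps 2.-a1 to 2.-a4, assuming the left and right views are stored as (H, W) luminance arrays and that columns where x - d would fall outside the image are clamped to the first column (the text does not spell out this boundary case; the clamping and all names below are assumptions for illustration):

    import numpy as np

    def disparity_space_image(L, R, d_max):
        # DSI(x, y, d) = |L(x, y) - R(x - d, y)| for d = 0 .. d_max.
        H, W = L.shape
        L = L.astype(np.float64)
        R = R.astype(np.float64)
        dsi = np.empty((W, H, d_max + 1), dtype=np.float64)
        xs = np.arange(W)
        for d in range(d_max + 1):
            cols = np.clip(xs - d, 0, W - 1)          # column x - d, clamped to the image
            dsi[:, :, d] = np.abs(L - R[:, cols]).T   # store as (x, y, d)
        return dsi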
In step 2., the disparity space image of Sdis is acquired as follows:
2.-b1. Define the pixel currently to be processed in {Ldis(x,y)} as the current first pixel, and define the pixel currently to be processed in {Rdis(x,y)} as the current second pixel;
2.-b2. Assume that the current first pixel is the pixel with coordinate position (x1,y1) in {Ldis(x,y)} and that the current second pixel is the pixel with coordinate position (x1,y1) in {Rdis(x,y)}; take the parallax value d0 = 0, then calculate the disparity space value of the current first pixel and the current second pixel under this parallax value d0, denoted as DSIdis(x1,y1,d0), DSIdis(x1,y1,d0) = |Ldis(x1,y1)-Rdis(x1-d0,y1)|, where 1 ≤ x1 ≤ W, 1 ≤ y1 ≤ H, 0 ≤ d0 ≤ dmax, dmax denotes the maximum parallax value, Ldis(x1,y1) denotes the pixel value of the pixel with coordinate position (x1,y1) in {Ldis(x,y)}, Rdis(x1-d0,y1) denotes the pixel value of the pixel with coordinate position (x1-d0,y1) in {Rdis(x,y)}, and "| |" is the absolute-value symbol;
2.-b3. Choose dmax parallax values different from d0, denoted as d1, d2, …, d_dmax respectively; then calculate the disparity space values of the current first pixel and the current second pixel under these dmax different parallax values, correspondingly denoted as DSIdis(x1,y1,d1), DSIdis(x1,y1,d2), …, DSIdis(x1,y1,d_dmax), where DSIdis(x1,y1,d1) = |Ldis(x1,y1)-Rdis(x1-d1,y1)|, DSIdis(x1,y1,d2) = |Ldis(x1,y1)-Rdis(x1-d2,y1)|, DSIdis(x1,y1,di) = |Ldis(x1,y1)-Rdis(x1-di,y1)|, and DSIdis(x1,y1,d_dmax) = |Ldis(x1,y1)-Rdis(x1-d_dmax,y1)|; here 1 ≤ i ≤ dmax and di = d0 + i; DSIdis(x1,y1,d1) denotes the disparity space value of the current first pixel and the current second pixel under the parallax value d1, DSIdis(x1,y1,d2) denotes their disparity space value under the parallax value d2, DSIdis(x1,y1,di) denotes their disparity space value under the parallax value di, and DSIdis(x1,y1,d_dmax) denotes their disparity space value under the parallax value d_dmax; Rdis(x1-d1,y1) denotes the pixel value of the pixel with coordinate position (x1-d1,y1) in {Rdis(x,y)}, Rdis(x1-d2,y1) denotes the pixel value of the pixel with coordinate position (x1-d2,y1) in {Rdis(x,y)}, Rdis(x1-di,y1) denotes the pixel value of the pixel with coordinate position (x1-di,y1) in {Rdis(x,y)}, and Rdis(x1-d_dmax,y1) denotes the pixel value of the pixel with coordinate position (x1-d_dmax,y1) in {Rdis(x,y)};
2.-b4. Take the next pixel to be processed in {Ldis(x,y)} as the current first pixel and the next pixel to be processed in {Rdis(x,y)} as the current second pixel, then return to step 2.-b2 and continue until all pixels in {Ldis(x,y)} and {Rdis(x,y)} have been processed, thereby obtaining the disparity space image of Sdis, denoted as {DSIdis(x,y,d)}, where DSIdis(x,y,d) denotes the disparity space value of the pixel with coordinate position (x,y,d) in {DSIdis(x,y,d)}, i.e. the disparity space value under the parallax value d of the pixel with coordinate position (x,y) in {Ldis(x,y)} and the pixel with coordinate position (x,y) in {Rdis(x,y)}.
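Putting the sketches together, a toy end-to-end run on synthetic data, using the helper functions sketched above (the image sizes, the noise model and the value of C are arbitrary and not from the patent):

    import numpy as np

    H, W, d_max, C = 64, 96, 16, 170.0
    rng = np.random.default_rng(0)
    L_org = rng.integers(0, 256, (H, W)).astype(np.float64)
    R_org = np.roll(L_org, -4, axis=1)             # crude stand-in for a right view
    L_dis = L_org + rng.normal(0.0, 5.0, (H, W))   # distorted left view
    R_dis = R_org + rng.normal(0.0, 5.0, (H, W))   # distorted right view

    dsi_org = disparity_space_image(L_org, R_org, d_max)
    dsi_dis = disparity_space_image(L_dis, R_dis, d_max)
    m_org = gradient_magnitude(*directional_gradients(dsi_org))
    m_dis = gradient_magnitude(*directional_gradients(dsi_dis))
    print("Q =", predicted_quality(m_org, m_dis, C))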
CN201410065127.1A 2014-02-26 2014-02-26 A kind of objective evaluation method for quality of stereo images based on three-dimensional gradient amplitude Active CN103824292B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410065127.1A CN103824292B (en) 2014-02-26 2014-02-26 A kind of objective evaluation method for quality of stereo images based on three-dimensional gradient amplitude

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410065127.1A CN103824292B (en) 2014-02-26 2014-02-26 A kind of objective evaluation method for quality of stereo images based on three-dimensional gradient amplitude

Publications (2)

Publication Number Publication Date
CN103824292A CN103824292A (en) 2014-05-28
CN103824292B true CN103824292B (en) 2016-09-07

Family

ID=50759333

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410065127.1A Active CN103824292B (en) 2014-02-26 2014-02-26 A kind of objective evaluation method for quality of stereo images based on three-dimensional gradient amplitude

Country Status (1)

Country Link
CN (1) CN103824292B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104820988B (en) * 2015-05-06 2017-12-15 宁波大学 One kind is without with reference to objective evaluation method for quality of stereo images
CN109389591B (en) * 2018-09-30 2020-11-20 西安电子科技大学 Color descriptor-based color image quality evaluation method

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4817246B2 (en) * 2006-07-31 2011-11-16 Kddi株式会社 Objective video quality evaluation system
CN102737380B (en) * 2012-06-05 2014-12-10 宁波大学 Stereo image quality objective evaluation method based on gradient structure tensor

Also Published As

Publication number Publication date
CN103824292A (en) 2014-05-28


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20191216

Address after: Room 1,020, Nanxun Science and Technology Pioneering Park, No. 666 Chaoyang Road, Nanxun District, Huzhou City, Zhejiang Province, 313000

Patentee after: Huzhou You Yan Intellectual Property Service Co., Ltd.

Address before: 315211 Zhejiang Province, Ningbo Jiangbei District Fenghua Road No. 818

Patentee before: Ningbo University

TR01 Transfer of patent right

Effective date of registration: 20200603

Address after: Room 501, office building, market supervision and Administration Bureau, Langchuan Avenue, Jianping Town, Langxi County, Xuancheng City, Anhui Province, 230000

Patentee after: Langxi pinxu Technology Development Co., Ltd

Address before: Room 1,020, Nanxun Science and Technology Pioneering Park, No. 666 Chaoyang Road, Nanxun District, Huzhou City, Zhejiang Province, 313000

Patentee before: Huzhou You Yan Intellectual Property Service Co.,Ltd.