CN102708568B - Stereoscopic image objective quality evaluation method on basis of structural distortion


Info

Publication number
CN102708568B
Authority
CN
China
Legal status: Expired - Fee Related
Application number
CN201210145034.0A
Other languages
Chinese (zh)
Other versions
CN102708568A (en)
Inventor
蒋刚毅
毛香英
王晓东
郁梅
周俊明
彭宗举
邵枫
Current Assignee
Ningbo University
Original Assignee
Ningbo University
Application filed by Ningbo University
Priority to CN201210145034.0A
Publication of CN102708568A
Application granted
Publication of CN102708568B
Expired - Fee Related


Landscapes

  • Image Processing (AREA)

Abstract

The invention discloses a stereoscopic image objective quality evaluation method based on structural distortion, which comprises the following steps: firstly, performing region division on the left and right viewpoint images of an undistorted stereoscopic image and of a distorted stereoscopic image to obtain a human-eye sensitive region and a corresponding non-sensitive region, and then obtaining evaluation indexes of the sensitive region and the non-sensitive region from the two aspects of structural amplitude distortion and structural direction distortion; secondly, acquiring quality evaluation values of the left and right viewpoint images; thirdly, adopting the singular value difference and the mean deviation ratio of the residual images from which the singular values have been removed to evaluate the distortion of the depth perception of the stereoscopic image, so as to obtain an evaluation value of the stereoscopic perceived quality; and finally, combining the quality of the left and right viewpoint images with the stereoscopic perceived quality to obtain the final quality evaluation result of the stereoscopic image. The method avoids simulating each component of the human visual system and instead makes full use of the structural information of the stereoscopic image, so the consistency between the objective evaluation result and subjective perception is effectively improved.

Description

Stereoscopic image objective quality evaluation method based on structural distortion
Technical field
The present invention relates to an image quality evaluation technique, and in particular to a stereoscopic image objective quality evaluation method based on structural distortion.
Background technology
Stereoscopic image quality evaluation occupies a very important position in stereoscopic image/video systems: it can not only judge the quality of the processing algorithms in such systems, but can also be used to optimize and design those algorithms and thereby improve the efficiency of the stereoscopic image/video processing system. Stereoscopic image quality evaluation methods fall into two classes: subjective quality evaluation and objective quality evaluation. In subjective quality evaluation, several observers score the stereoscopic image to be evaluated and their scores are combined by weighted averaging; the result conforms to the characteristics of the human visual system, but the approach is inconvenient, slow and costly, and is difficult to embed in systems, so it cannot be widely promoted in practical applications. Objective quality evaluation methods, by contrast, are simple to operate, low in cost, easy to realize and suitable for real-time algorithm optimization, and have therefore become the focus of stereoscopic image quality evaluation research.
At present, mainstream stereoscopic image objective quality evaluation models comprise two parts: left-right viewpoint image quality evaluation and depth perception quality evaluation. However, because human understanding of the human visual system is limited and each of its components is difficult to simulate accurately, the consistency between these models and subjective perception is not very good.
Summary of the invention
The technical problem to be solved by the invention is to provide a stereoscopic image objective quality evaluation method based on structural distortion that can effectively improve the consistency between the objective evaluation result of a stereoscopic image and subjective perception.
The technical solution adopted by the invention to solve the above technical problem is a stereoscopic image objective quality evaluation method based on structural distortion, characterized by comprising the following steps:
1. Let S_org be the original undistorted stereoscopic image and S_dis the distorted stereoscopic image to be evaluated. Denote the left viewpoint gray-scale image of S_org as L_org, the right viewpoint gray-scale image of S_org as R_org, the left viewpoint gray-scale image of S_dis as L_dis, and the right viewpoint gray-scale image of S_dis as R_dis;
2. Perform region division on each of the four images L_org, L_dis, R_org and R_dis to obtain their sensitive-region matrix maps. The coefficient matrix of the sensitive-region matrix map obtained from L_org and L_dis is denoted A_L, and the coefficient value at coordinate position (i, j) in A_L is denoted A_L(i, j); the coefficient matrix of the sensitive-region matrix map obtained from R_org and R_dis is denoted A_R, and the coefficient value at coordinate position (i, j) in A_R is denoted A_R(i, j), wherein 0 ≤ i ≤ (W−8), 0 ≤ j ≤ (H−8), W denotes the width of L_org, L_dis, R_org and R_dis, and H denotes the height of L_org, L_dis, R_org and R_dis;
3. Divide L_org and L_dis each into (W−7) × (H−7) overlapping blocks of size 8 × 8, then calculate the structure amplitude distortion map of every pair of co-located blocks in L_org and L_dis. The coefficient matrix of this map is denoted B_L, and the coefficient value at coordinate position (i, j) in B_L is denoted B_L(i, j), B_L(i,j) = \frac{2σ_{org,dis,L}(i,j)+C_1}{(σ_{org,L}(i,j))^2+(σ_{dis,L}(i,j))^2+C_1}, wherein B_L(i, j) also represents the structure amplitude distortion value of the 8 × 8 block of L_org whose top-left corner coordinate is (i, j) and the 8 × 8 block of L_dis whose top-left corner coordinate is (i, j), σ_{org,L}(i,j)=\sqrt{\frac{1}{64}\sum_{x=0}^{7}\sum_{y=0}^{7}(L_{org}(i+x,j+y)-U_{org,L}(i,j))^2}, U_{org,L}(i,j)=\frac{1}{64}\sum_{x=0}^{7}\sum_{y=0}^{7}L_{org}(i+x,j+y), σ_{dis,L}(i,j)=\sqrt{\frac{1}{64}\sum_{x=0}^{7}\sum_{y=0}^{7}(L_{dis}(i+x,j+y)-U_{dis,L}(i,j))^2}, U_{dis,L}(i,j)=\frac{1}{64}\sum_{x=0}^{7}\sum_{y=0}^{7}L_{dis}(i+x,j+y), σ_{org,dis,L}(i,j)=\frac{1}{64}\sum_{x=0}^{7}\sum_{y=0}^{7}((L_{org}(i+x,j+y)-U_{org,L}(i,j))\times(L_{dis}(i+x,j+y)-U_{dis,L}(i,j))), L_org(i+x, j+y) denotes the pixel value at coordinate (i+x, j+y) in L_org, L_dis(i+x, j+y) denotes the pixel value at coordinate (i+x, j+y) in L_dis, C_1 is a constant, and here 0 ≤ i ≤ (W−8), 0 ≤ j ≤ (H−8);
Divide R_org and R_dis each into (W−7) × (H−7) overlapping blocks of size 8 × 8, then calculate the structure amplitude distortion map of every pair of co-located blocks in R_org and R_dis. The coefficient matrix of this map is denoted B_R, and the coefficient value at coordinate position (i, j) in B_R is denoted B_R(i, j), B_R(i,j) = \frac{2σ_{org,dis,R}(i,j)+C_1}{(σ_{org,R}(i,j))^2+(σ_{dis,R}(i,j))^2+C_1}, wherein B_R(i, j) also represents the structure amplitude distortion value of the 8 × 8 block of R_org whose top-left corner coordinate is (i, j) and the 8 × 8 block of R_dis whose top-left corner coordinate is (i, j), σ_{org,R}(i,j)=\sqrt{\frac{1}{64}\sum_{x=0}^{7}\sum_{y=0}^{7}(R_{org}(i+x,j+y)-U_{org,R}(i,j))^2}, U_{org,R}(i,j)=\frac{1}{64}\sum_{x=0}^{7}\sum_{y=0}^{7}R_{org}(i+x,j+y), σ_{dis,R}(i,j)=\sqrt{\frac{1}{64}\sum_{x=0}^{7}\sum_{y=0}^{7}(R_{dis}(i+x,j+y)-U_{dis,R}(i,j))^2}, U_{dis,R}(i,j)=\frac{1}{64}\sum_{x=0}^{7}\sum_{y=0}^{7}R_{dis}(i+x,j+y), σ_{org,dis,R}(i,j)=\frac{1}{64}\sum_{x=0}^{7}\sum_{y=0}^{7}((R_{org}(i+x,j+y)-U_{org,R}(i,j))\times(R_{dis}(i+x,j+y)-U_{dis,R}(i,j))), R_org(i+x, j+y) denotes the pixel value at coordinate (i+x, j+y) in R_org, R_dis(i+x, j+y) denotes the pixel value at coordinate (i+x, j+y) in R_dis, C_1 is a constant, and here 0 ≤ i ≤ (W−8), 0 ≤ j ≤ (H−8);
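For illustration only, the block statistics of step 3. can be sketched in NumPy as follows; the function name is an assumption, C_1 = 0.01 follows the value suggested in the embodiment below, and the explicit loop simply mirrors the (W−7) × (H−7) overlapping-block layout described above rather than any optimized implementation.

```python
import numpy as np

def structure_amplitude_map(img_org, img_dis, c1=0.01):
    """Structure amplitude distortion map B for one viewpoint (sketch).

    img_org, img_dis: 2-D arrays of the same size (gray-scale images).
    Entry (i, j) of the result is B(i, j) for the pair of 8x8 overlapping
    blocks whose top-left corner is at (i, j).
    """
    h, w = img_org.shape
    b_map = np.zeros((h - 7, w - 7))
    for i in range(h - 7):
        for j in range(w - 7):
            blk_o = img_org[i:i + 8, j:j + 8].astype(np.float64)
            blk_d = img_dis[i:i + 8, j:j + 8].astype(np.float64)
            u_o, u_d = blk_o.mean(), blk_d.mean()
            var_o = ((blk_o - u_o) ** 2).mean()           # (sigma_org)^2
            var_d = ((blk_d - u_d) ** 2).mean()           # (sigma_dis)^2
            cov = ((blk_o - u_o) * (blk_d - u_d)).mean()  # sigma_org,dis
            b_map[i, j] = (2.0 * cov + c1) / (var_o + var_d + c1)
    return b_map
```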
4. Apply horizontal and vertical Sobel operator processing to L_org and L_dis respectively, obtaining the horizontal gradient matrix map and the vertical gradient matrix map of each of the two images. The coefficient matrix of the horizontal gradient matrix map obtained after applying the horizontal Sobel operator to L_org is denoted I_{h,org,L}, and the coefficient value at coordinate position (i, j) in I_{h,org,L} is denoted I_{h,org,L}(i, j), I_{h,org,L}(i,j) = L_{org}(i+2,j)+2L_{org}(i+2,j+1)+L_{org}(i+2,j+2)−L_{org}(i,j)−2L_{org}(i,j+1)−L_{org}(i,j+2); the coefficient matrix of the vertical gradient matrix map obtained after applying the vertical Sobel operator to L_org is denoted I_{v,org,L}, and its coefficient value at (i, j) is denoted I_{v,org,L}(i, j), I_{v,org,L}(i,j) = L_{org}(i,j+2)+2L_{org}(i+1,j+2)+L_{org}(i+2,j+2)−L_{org}(i,j)−2L_{org}(i+1,j)−L_{org}(i+2,j); the coefficient matrix of the horizontal gradient matrix map obtained after applying the horizontal Sobel operator to L_dis is denoted I_{h,dis,L}, and its coefficient value at (i, j) is denoted I_{h,dis,L}(i, j), I_{h,dis,L}(i,j) = L_{dis}(i+2,j)+2L_{dis}(i+2,j+1)+L_{dis}(i+2,j+2)−L_{dis}(i,j)−2L_{dis}(i,j+1)−L_{dis}(i,j+2); the coefficient matrix of the vertical gradient matrix map obtained after applying the vertical Sobel operator to L_dis is denoted I_{v,dis,L}, and its coefficient value at (i, j) is denoted I_{v,dis,L}(i, j), I_{v,dis,L}(i,j) = L_{dis}(i,j+2)+2L_{dis}(i+1,j+2)+L_{dis}(i+2,j+2)−L_{dis}(i,j)−2L_{dis}(i+1,j)−L_{dis}(i+2,j), wherein L_org(·,·) and L_dis(·,·) denote the pixel value at the indicated coordinate position in L_org and L_dis respectively;
Apply horizontal and vertical Sobel operator processing to R_org and R_dis respectively, obtaining the horizontal gradient matrix map and the vertical gradient matrix map of each of the two images. The coefficient matrix of the horizontal gradient matrix map obtained after applying the horizontal Sobel operator to R_org is denoted I_{h,org,R}, with I_{h,org,R}(i,j) = R_{org}(i+2,j)+2R_{org}(i+2,j+1)+R_{org}(i+2,j+2)−R_{org}(i,j)−2R_{org}(i,j+1)−R_{org}(i,j+2); the coefficient matrix of the vertical gradient matrix map obtained after applying the vertical Sobel operator to R_org is denoted I_{v,org,R}, with I_{v,org,R}(i,j) = R_{org}(i,j+2)+2R_{org}(i+1,j+2)+R_{org}(i+2,j+2)−R_{org}(i,j)−2R_{org}(i+1,j)−R_{org}(i+2,j); the coefficient matrix of the horizontal gradient matrix map obtained after applying the horizontal Sobel operator to R_dis is denoted I_{h,dis,R}, with I_{h,dis,R}(i,j) = R_{dis}(i+2,j)+2R_{dis}(i+2,j+1)+R_{dis}(i+2,j+2)−R_{dis}(i,j)−2R_{dis}(i,j+1)−R_{dis}(i,j+2); the coefficient matrix of the vertical gradient matrix map obtained after applying the vertical Sobel operator to R_dis is denoted I_{v,dis,R}, with I_{v,dis,R}(i,j) = R_{dis}(i,j+2)+2R_{dis}(i+1,j+2)+R_{dis}(i+2,j+2)−R_{dis}(i,j)−2R_{dis}(i+1,j)−R_{dis}(i+2,j), wherein R_org(·,·) and R_dis(·,·) denote the pixel value at the indicated coordinate position in R_org and R_dis respectively;
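The six-term stencils of step 4. can be expressed with array slicing. The sketch below is an assumption-level illustration (the function name and the array orientation are not taken from the patent); it returns gradient maps two samples smaller than the input in each direction, because the stencil reaches positions (i+2, j+2).

```python
import numpy as np

def sobel_gradient_maps(img):
    """Horizontal and vertical Sobel gradient maps of step 4. (sketch).

    img: 2-D float array indexed as img[i, j].  Returns (i_h, i_v), each of
    shape (img.shape[0] - 2, img.shape[1] - 2).
    """
    img = img.astype(np.float64)
    i_h = (img[2:, :-2] + 2.0 * img[2:, 1:-1] + img[2:, 2:]
           - img[:-2, :-2] - 2.0 * img[:-2, 1:-1] - img[:-2, 2:])
    i_v = (img[:-2, 2:] + 2.0 * img[1:-1, 2:] + img[2:, 2:]
           - img[:-2, :-2] - 2.0 * img[1:-1, :-2] - img[2:, :-2])
    return i_h, i_v
```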
5. Calculate the structure direction distortion map of every pair of co-located blocks in L_org and L_dis. The coefficient matrix of this map is denoted E_L, and the coefficient value at coordinate position (i, j) in E_L is denoted E_L(i, j), E_L(i,j) = \frac{I_{h,org,L}(i,j)\times I_{h,dis,L}(i,j)+I_{v,org,L}(i,j)\times I_{v,dis,L}(i,j)+C_2}{\sqrt{(I_{h,org,L}(i,j))^2+(I_{v,org,L}(i,j))^2}\times\sqrt{(I_{h,dis,L}(i,j))^2+(I_{v,dis,L}(i,j))^2}+C_2}, wherein C_2 is a constant;
Calculate the structure direction distortion map of every pair of co-located blocks in R_org and R_dis. The coefficient matrix of this map is denoted E_R, and the coefficient value at coordinate position (i, j) in E_R is denoted E_R(i, j), E_R(i,j) = \frac{I_{h,org,R}(i,j)\times I_{h,dis,R}(i,j)+I_{v,org,R}(i,j)\times I_{v,dis,R}(i,j)+C_2}{\sqrt{(I_{h,org,R}(i,j))^2+(I_{v,org,R}(i,j))^2}\times\sqrt{(I_{h,dis,R}(i,j))^2+(I_{v,dis,R}(i,j))^2}+C_2};
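As an illustrative sketch of step 5., the structure direction distortion is the regularised cosine similarity between the gradient vectors of the original and distorted views at each position; C_2 = 0.02 used as the default follows the embodiment below, and the function name is an assumption.

```python
import numpy as np

def structure_direction_map(ih_org, iv_org, ih_dis, iv_dis, c2=0.02):
    """Structure direction distortion map E of step 5. (sketch).

    Inputs are the four Sobel gradient maps of the original and distorted
    view, all of identical shape.
    """
    num = ih_org * ih_dis + iv_org * iv_dis + c2
    den = (np.sqrt(ih_org ** 2 + iv_org ** 2)
           * np.sqrt(ih_dis ** 2 + iv_dis ** 2) + c2)
    return num / den
```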
6. Calculate the structure distortion evaluation value of L_org and L_dis, denoted Q_L, Q_L = ω_1 × Q_{m,L} + ω_2 × Q_{nm,L}, Q_{m,L} = \frac{1}{N_{L,m}}\sum_{i=0}^{W-8}\sum_{j=0}^{H-8}(0.5\times(B_L(i,j)+E_L(i,j))\times A_L(i,j)), N_{L,m} = \sum_{i=0}^{W-8}\sum_{j=0}^{H-8}A_L(i,j), Q_{nm,L} = \frac{1}{N_{L,nm}}\sum_{i=0}^{W-8}\sum_{j=0}^{H-8}(0.5\times(B_L(i,j)+E_L(i,j))\times(1-A_L(i,j))), N_{L,nm} = \sum_{i=0}^{W-8}\sum_{j=0}^{H-8}(1-A_L(i,j)), wherein ω_1 denotes the weight of the sensitive region in L_org and L_dis, and ω_2 denotes the weight of the non-sensitive region in L_org and L_dis;
Calculate the structure distortion evaluation value of R_org and R_dis, denoted Q_R, Q_R = ω'_1 × Q_{m,R} + ω'_2 × Q_{nm,R}, Q_{m,R} = \frac{1}{N_{R,m}}\sum_{i=0}^{W-8}\sum_{j=0}^{H-8}(0.5\times(B_R(i,j)+E_R(i,j))\times A_R(i,j)), N_{R,m} = \sum_{i=0}^{W-8}\sum_{j=0}^{H-8}A_R(i,j), Q_{nm,R} = \frac{1}{N_{R,nm}}\sum_{i=0}^{W-8}\sum_{j=0}^{H-8}(0.5\times(B_R(i,j)+E_R(i,j))\times(1-A_R(i,j))), N_{R,nm} = \sum_{i=0}^{W-8}\sum_{j=0}^{H-8}(1-A_R(i,j)), wherein ω'_1 denotes the weight of the sensitive region in R_org and R_dis, and ω'_2 denotes the weight of the non-sensitive region in R_org and R_dis;
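A minimal sketch of step 6. follows, assuming the three maps B, E and A have already been cropped to the common (W−7) × (H−7) grid used above; the default weights mirror the ω_1 = 1, ω_2 = 0 choice made for the left viewpoint in the embodiment, and the function name is illustrative.

```python
import numpy as np

def viewpoint_structure_quality(b_map, e_map, a_map, w_sens=1.0, w_nonsens=0.0):
    """Structure distortion evaluation value Q of one viewpoint (sketch).

    b_map, e_map: structure amplitude / direction distortion maps.
    a_map: 0/1 sensitive-region coefficient matrix A of the same shape.
    w_sens, w_nonsens: weights of the sensitive and non-sensitive regions.
    """
    d = 0.5 * (b_map + e_map)                 # per-block structure distortion
    n_sens = a_map.sum()
    n_nonsens = (1.0 - a_map).sum()
    q_sens = (d * a_map).sum() / n_sens if n_sens > 0 else 0.0
    q_nonsens = (d * (1.0 - a_map)).sum() / n_nonsens if n_nonsens > 0 else 0.0
    return w_sens * q_sens + w_nonsens * q_nonsens
```

With w_sens = 1 and w_nonsens = 0 the value reduces to the mean structure distortion over the sensitive blocks alone.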
7. According to Q_L and Q_R, calculate the spatial-frequency similarity measure of the distorted stereoscopic image S_dis relative to the original undistorted stereoscopic image S_org, denoted Q_F, Q_F = β_1 × Q_L + (1 − β_1) × Q_R, wherein β_1 denotes the weight of Q_L;
8. Calculate the absolute difference image of L_org and R_org and the absolute difference image of L_dis and R_dis, each represented as a matrix, wherein "| |" is the absolute value symbol;
9. Divide the two absolute difference images each into non-overlapping blocks of size 8 × 8, then apply singular value decomposition to every block, obtaining the singular value map formed by the singular value matrices of the blocks of each absolute difference image. The coefficient matrix of the singular value map obtained from the original absolute difference image is denoted G_org, and the singular value at coordinate position (p, q) in the singular value matrix of the n-th block in G_org is denoted G_org^n(p, q); the coefficient matrix of the singular value map obtained from the distorted absolute difference image is denoted G_dis, and the singular value at coordinate position (p, q) in the singular value matrix of the n-th block in G_dis is denoted G_dis^n(p, q), wherein W_LR denotes the width of the absolute difference images, H_LR denotes their height, 0 ≤ p ≤ 7, 0 ≤ q ≤ 7;
10. Calculate the singular value deviation evaluation value between the singular value map of the original absolute difference image and the singular value map of the distorted absolute difference image, denoted K, K = \frac{64}{W_{LR}\times H_{LR}}\times\sum_{n=0}^{W_{LR}\times H_{LR}/64-1}\frac{\sum_{p=0}^{7}(G_{org}^{n}(p,p)\times|G_{org}^{n}(p,p)-G_{dis}^{n}(p,p)|)}{\sum_{p=0}^{7}G_{org}^{n}(p,p)}, wherein G_org^n(p, p) denotes the singular value at coordinate position (p, p) in the singular value matrix of the n-th block in G_org, and G_dis^n(p, p) denotes the singular value at coordinate position (p, p) in the singular value matrix of the n-th block in G_dis;
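Steps 8. to 10. can be illustrated with the sketch below; the small epsilon guarding against an all-zero block is an implementation assumption and not part of the patent text, and image dimensions are assumed to be multiples of 8.

```python
import numpy as np

def singular_value_deviation(l_org, r_org, l_dis, r_dis, eps=1e-12):
    """Singular value deviation K of steps 8.-10. (sketch).

    The absolute difference images |L_org - R_org| and |L_dis - R_dis| are
    cut into non-overlapping 8x8 blocks, each block is decomposed by SVD,
    and the weighted relative deviation of the singular values is averaged
    over all blocks.
    """
    diff_org = np.abs(l_org.astype(np.float64) - r_org.astype(np.float64))
    diff_dis = np.abs(l_dis.astype(np.float64) - r_dis.astype(np.float64))
    h, w = diff_org.shape
    total, n_blocks = 0.0, (h // 8) * (w // 8)
    for i in range(0, 8 * (h // 8), 8):
        for j in range(0, 8 * (w // 8), 8):
            s_org = np.linalg.svd(diff_org[i:i + 8, j:j + 8], compute_uv=False)
            s_dis = np.linalg.svd(diff_dis[i:i + 8, j:j + 8], compute_uv=False)
            total += np.sum(s_org * np.abs(s_org - s_dis)) / (np.sum(s_org) + eps)
    return total / n_blocks
```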
11. Apply singular value decomposition to the absolute difference image of L_org and R_org and to the absolute difference image of L_dis and R_dis respectively, obtaining for each of them 2 orthogonal matrices and 1 singular value matrix. The 2 orthogonal matrices obtained from the original absolute difference image are denoted χ_org and V_org, and its singular value matrix is denoted O_org; the 2 orthogonal matrices obtained from the distorted absolute difference image are denoted χ_dis and V_dis, and its singular value matrix is denoted O_dis;
12. Calculate the residual matrix maps of the two absolute difference images after their singular values are removed. The residual matrix map of the original absolute difference image after removing its singular values is denoted X_org, X_org = χ_org × Λ × V_org, and the residual matrix map of the distorted absolute difference image after removing its singular values is denoted X_dis, X_dis = χ_dis × Λ × V_dis, wherein Λ denotes the unit matrix, whose size is the same as that of O_org and O_dis;
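The residual of steps 11.-12. can be sketched as below. Whether the factor written V in the formula corresponds to NumPy's V or to its transpose is not explicit in the text, so NumPy's vt is used here as an assumption; the function name is likewise illustrative.

```python
import numpy as np

def residual_after_svd(diff_img):
    """Residual matrix X of steps 11.-12. (sketch): the absolute difference
    image re-composed with the unit matrix Lambda in place of its singular
    value matrix, i.e. X = chi * Lambda * V."""
    u, s, vt = np.linalg.svd(diff_img.astype(np.float64), full_matrices=False)
    lam = np.eye(len(s))   # unit matrix, same size as the singular value matrix
    return u @ lam @ vt    # equals u @ vt; written out to mirror the formula
```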
13. Calculate the mean deviation ratio of X_org and X_dis, wherein x denotes the horizontal coordinate of a pixel in X_org and X_dis, and y denotes the vertical coordinate of a pixel in X_org and X_dis;
14. Calculate the stereoscopic perception evaluation measure of the distorted stereoscopic image S_dis relative to the original undistorted stereoscopic image S_org, denoted Q_S, wherein τ denotes a constant used to adjust the relative importance of K and of the mean deviation ratio in Q_S;
15. According to Q_F and Q_S, calculate the image quality evaluation score of the distorted stereoscopic image S_dis, denoted Q, Q = Q_F × (Q_S)^ρ, wherein ρ denotes a weight coefficient.
In said step 2., the acquisition process of the coefficient matrix A_L of the sensitive-region matrix map corresponding to L_org and L_dis is:
2.-a1, apply horizontal and vertical Sobel operator processing to L_org to obtain its horizontal gradient image and vertical gradient image, denoted Z_{h,l1} and Z_{v,l1} respectively, then calculate the gradient magnitude map of L_org, denoted Z_l1, wherein Z_l1(x, y) denotes the gradient magnitude of the pixel at coordinate (x, y) in Z_l1, Z_{h,l1}(x, y) denotes the horizontal gradient value of the pixel at coordinate (x, y) in Z_{h,l1}, Z_{v,l1}(x, y) denotes the vertical gradient value of the pixel at coordinate (x, y) in Z_{v,l1}, 1 ≤ x ≤ W', 1 ≤ y ≤ H', where W' denotes the width of Z_l1 and H' denotes the height of Z_l1;
2.-a2, apply horizontal and vertical Sobel operator processing to L_dis to obtain its horizontal gradient image and vertical gradient image, denoted Z_{h,l2} and Z_{v,l2} respectively, then calculate the gradient magnitude map of L_dis, denoted Z_l2, wherein Z_l2(x, y) denotes the gradient magnitude of the pixel at coordinate (x, y) in Z_l2, Z_{h,l2}(x, y) denotes the horizontal gradient value of the pixel at coordinate (x, y) in Z_{h,l2}, Z_{v,l2}(x, y) denotes the vertical gradient value of the pixel at coordinate (x, y) in Z_{v,l2}, 1 ≤ x ≤ W', 1 ≤ y ≤ H', where W' denotes the width of Z_l2 and H' denotes the height of Z_l2;
2.-a3, calculate the threshold T required for region division, T = α\times\frac{1}{W'\times H'}\times(\sum_{x=0}^{W'}\sum_{y=0}^{H'}Z_{l1}(x,y)+\sum_{x=0}^{W'}\sum_{y=0}^{H'}Z_{l2}(x,y)), wherein α is a constant, Z_l1(x, y) denotes the gradient magnitude of the pixel at coordinate (x, y) in Z_l1, and Z_l2(x, y) denotes the gradient magnitude of the pixel at coordinate (x, y) in Z_l2;
2.-a4, denote the gradient magnitude of the pixel at coordinate (i, j) in Z_l1 as Z_l1(i, j) and the gradient magnitude of the pixel at coordinate (i, j) in Z_l2 as Z_l2(i, j). Judge whether Z_l1(i, j) > T or Z_l2(i, j) > T holds; if so, determine that the pixel at coordinate (i, j) in L_org and L_dis belongs to the sensitive region and let A_L(i, j) = 1; otherwise, determine that the pixel at coordinate (i, j) in L_org and L_dis belongs to the non-sensitive region and let A_L(i, j) = 0, wherein 0 ≤ i ≤ (W−8), 0 ≤ j ≤ (H−8);
In said step 2., the acquisition process of the coefficient matrix A_R of the sensitive-region matrix map corresponding to R_org and R_dis is:
2.-b1, apply horizontal and vertical Sobel operator processing to R_org to obtain its horizontal gradient image and vertical gradient image, denoted Z_{h,r1} and Z_{v,r1} respectively, then calculate the gradient magnitude map of R_org, denoted Z_r1, wherein Z_r1(x, y) denotes the gradient magnitude of the pixel at coordinate (x, y) in Z_r1, Z_{h,r1}(x, y) denotes the horizontal gradient value of the pixel at coordinate (x, y) in Z_{h,r1}, Z_{v,r1}(x, y) denotes the vertical gradient value of the pixel at coordinate (x, y) in Z_{v,r1}, 1 ≤ x ≤ W', 1 ≤ y ≤ H', where W' denotes the width of Z_r1 and H' denotes the height of Z_r1;
2.-b2, apply horizontal and vertical Sobel operator processing to R_dis to obtain its horizontal gradient image and vertical gradient image, denoted Z_{h,r2} and Z_{v,r2} respectively, then calculate the gradient magnitude map of R_dis, denoted Z_r2, wherein Z_r2(x, y) denotes the gradient magnitude of the pixel at coordinate (x, y) in Z_r2, Z_{h,r2}(x, y) denotes the horizontal gradient value of the pixel at coordinate (x, y) in Z_{h,r2}, Z_{v,r2}(x, y) denotes the vertical gradient value of the pixel at coordinate (x, y) in Z_{v,r2}, 1 ≤ x ≤ W', 1 ≤ y ≤ H', where W' denotes the width of Z_r2 and H' denotes the height of Z_r2;
2.-b3, calculate the threshold T' required for region division, T' = α\times\frac{1}{W'\times H'}\times(\sum_{x=0}^{W'}\sum_{y=0}^{H'}Z_{r1}(x,y)+\sum_{x=0}^{W'}\sum_{y=0}^{H'}Z_{r2}(x,y)), wherein α is a constant, Z_r1(x, y) denotes the gradient magnitude of the pixel at coordinate (x, y) in Z_r1, and Z_r2(x, y) denotes the gradient magnitude of the pixel at coordinate (x, y) in Z_r2;
2.-b4, denote the gradient magnitude of the pixel at coordinate (i, j) in Z_r1 as Z_r1(i, j) and the gradient magnitude of the pixel at coordinate (i, j) in Z_r2 as Z_r2(i, j). Judge whether Z_r1(i, j) > T' or Z_r2(i, j) > T' holds; if so, determine that the pixel at coordinate (i, j) in R_org and R_dis belongs to the sensitive region and let A_R(i, j) = 1; otherwise, determine that the pixel at coordinate (i, j) in R_org and R_dis belongs to the non-sensitive region and let A_R(i, j) = 0, wherein 0 ≤ i ≤ (W−8), 0 ≤ j ≤ (H−8).
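As a sketch of steps 2.-a1 to 2.-b4, the sensitive-region coefficient matrix can be obtained from the two gradient magnitude maps as follows; α = 2.1 follows the value chosen in the embodiment below, and cropping the result to the block grid used by B, E and A is an interpretation of the index range 0 ≤ i ≤ (W−8), 0 ≤ j ≤ (H−8) given above.

```python
import numpy as np

def sensitive_region_map(z_ref, z_dis, alpha=2.1, block=8):
    """Sensitive-region coefficient matrix A (sketch).

    z_ref, z_dis: gradient magnitude maps Z of the undistorted and distorted
    view, both of size H' x W'.  A pixel is marked sensitive when its gradient
    magnitude in either map exceeds the threshold T.
    """
    h, w = z_ref.shape
    # T = alpha * (sum of both gradient magnitude maps) / (W' * H')
    t = alpha * (z_ref.sum() + z_dis.sum()) / (w * h)
    a = ((z_ref > t) | (z_dis > t)).astype(np.float64)
    # keep only positions usable as top-left corners of 8x8 blocks
    return a[:h - block + 1, :w - block + 1]
```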
In said step 7., the acquisition process of β_1 is:
7.-1, adopt n undistorted stereoscopic images to establish a set of distorted stereoscopic images under different distortion types and different distortion levels, the set comprising several distorted stereoscopic images, wherein n ≥ 1;
7.-2, use a subjective quality evaluation method to obtain the mean subjective score difference of every distorted stereoscopic image in the set, denoted DMOS, DMOS = 100 − MOS, wherein MOS denotes the mean opinion score and DMOS ∈ [0, 100];
7.-3, according to the operating process of step 1. to step 6., calculate, for the left viewpoint image of every distorted stereoscopic image in the set, the evaluation value Q_{m,L} of the sensitive region and the evaluation value Q_{nm,L} of the non-sensitive region relative to the left viewpoint image of the corresponding undistorted stereoscopic image;
7.-4, adopt a mathematical fitting method to fit the mean subjective score differences DMOS of the distorted stereoscopic images in the set and the corresponding Q_{m,L} and Q_{nm,L}, thereby obtaining the value of β_1.
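One way the fitting could be realised is sketched below, following the embodiment further on, which computes Q_L and Q_R for all distorted images and uses a four-parameter logistic regression against DMOS; the grid search over candidate β_1 values, the initial parameters and all function names are illustrative assumptions, not prescribed by the patent.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import pearsonr

def logistic4(q, a, b, c, d):
    """Four-parameter logistic mapping from objective score to DMOS."""
    return (a - b) / (1.0 + np.exp(-(q - c) / d)) + b

def select_beta1(q_left, q_right, dmos, candidates=np.linspace(0.0, 1.0, 21)):
    """Pick the beta_1 maximising the Pearson CC between the logistic-mapped
    objective score and DMOS (illustrative search over a candidate grid).

    q_left, q_right, dmos: 1-D arrays, one entry per distorted image.
    """
    best_beta, best_cc = None, -np.inf
    for beta in candidates:
        q_f = beta * q_left + (1.0 - beta) * q_right
        p0 = [dmos.max(), dmos.min(), q_f.mean(), q_f.std() + 1e-6]
        params, _ = curve_fit(logistic4, q_f, dmos, p0=p0, maxfev=10000)
        cc, _ = pearsonr(logistic4(q_f, *params), dmos)
        if cc > best_cc:
            best_beta, best_cc = beta, cc
    return best_beta, best_cc
```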
Compared with the prior art, the advantage of the invention is as follows. First, region division is performed on the left and right viewpoint images of the undistorted stereoscopic image and of the distorted stereoscopic image to obtain the human-eye sensitive region and the corresponding non-sensitive region, and the evaluation indexes of the sensitive region and the non-sensitive region are then derived from the two aspects of structural amplitude distortion and structural direction distortion. Next, linear weighting is adopted to obtain the left viewpoint image quality evaluation value and the right viewpoint image quality evaluation value, from which the left-right viewpoint image quality evaluation value is obtained. Then, because singular values can well characterize the structural information of a stereoscopic image, the singular value difference and the mean deviation ratio of the residual images from which the singular values have been removed are adopted to measure the distortion of the depth perception of the stereoscopic image, thereby obtaining the evaluation value of the stereoscopic perceived quality. Finally, the left-right viewpoint image quality and the stereoscopic perceived quality are combined in a nonlinear way to obtain the final quality evaluation result of the stereoscopic image. Because the method of the invention avoids simulating each component of the human visual system and makes full use of the structural information of the stereoscopic image, the consistency between the objective evaluation result and subjective perception is effectively improved.
Description of the drawings
Fig. 1 is the overall implementation block diagram of the method of the invention;
Fig. 2a is the Akko & Kayo (640 × 480) stereoscopic image;
Fig. 2b is the Alt Moabit (1024 × 768) stereoscopic image;
Fig. 2c is the Balloons (1024 × 768) stereoscopic image;
Fig. 2d is the Door Flowers (1024 × 768) stereoscopic image;
Fig. 2e is the Kendo (1024 × 768) stereoscopic image;
Fig. 2f is the Leaving Laptop (1024 × 768) stereoscopic image;
Fig. 2g is the Lovebird1 (1024 × 768) stereoscopic image;
Fig. 2h is the Newspaper (1024 × 768) stereoscopic image;
Fig. 2i is the Xmas (640 × 480) stereoscopic image;
Fig. 2j is the Puppy (720 × 480) stereoscopic image;
Fig. 2k is the Soccer2 (720 × 480) stereoscopic image;
Fig. 2l is the Horse (480 × 270) stereoscopic image;
Fig. 3 is the block diagram of the left viewpoint image quality evaluation in the method of the invention;
Fig. 4a shows the CC performance variation between left viewpoint image quality and subjective perceptual quality under different α and ω_1;
Fig. 4b shows the SROCC performance variation between left viewpoint image quality and subjective perceptual quality under different α and ω_1;
Fig. 4c shows the RMSE performance variation between left viewpoint image quality and subjective perceptual quality under different α and ω_1;
Fig. 5a shows, for ω_1 = 1, the CC performance variation between left viewpoint image quality and subjective perceptual quality under different α;
Fig. 5b shows, for ω_1 = 1, the SROCC performance variation between left viewpoint image quality and subjective perceptual quality under different α;
Fig. 5c shows, for ω_1 = 1, the RMSE performance variation between left viewpoint image quality and subjective perceptual quality under different α;
Fig. 6a shows the CC performance variation between left-right viewpoint image quality and subjective perceptual quality under different β_1;
Fig. 6b shows the SROCC performance variation between left-right viewpoint image quality and subjective perceptual quality under different β_1;
Fig. 6c shows the RMSE performance variation between left-right viewpoint image quality and subjective perceptual quality under different β_1;
Fig. 7a shows the CC performance variation between stereoscopic depth perception quality and subjective perceptual quality under different τ;
Fig. 7b shows the SROCC performance variation between stereoscopic depth perception quality and subjective perceptual quality under different τ;
Fig. 7c shows the RMSE performance variation between stereoscopic depth perception quality and subjective perceptual quality under different τ;
Fig. 8a shows the CC performance variation between stereoscopic image quality and subjective perceptual quality under different ρ;
Fig. 8b shows the SROCC performance variation between stereoscopic image quality and subjective perceptual quality under different ρ;
Fig. 8c shows the RMSE performance variation between stereoscopic image quality and subjective perceptual quality under different ρ.
Embodiment
The present invention is described in further detail below with reference to the accompanying drawings and an embodiment.
The stereoscopic image objective quality evaluation method based on structural distortion proposed by the invention evaluates, from the angle of structural distortion, the left-right viewpoint image quality and the stereoscopic perceived quality of a stereoscopic image, and adopts nonlinear weighting to obtain the final quality evaluation value of the stereoscopic image. Fig. 1 gives the overall implementation block diagram of the method, which comprises the following steps:
1. Let S_org be the original undistorted stereoscopic image and S_dis the distorted stereoscopic image to be evaluated. Denote the left viewpoint gray-scale image of S_org as L_org, the right viewpoint gray-scale image of S_org as R_org, the left viewpoint gray-scale image of S_dis as L_dis, and the right viewpoint gray-scale image of S_dis as R_dis.
2. Perform region division on each of the four images L_org, L_dis, R_org and R_dis to obtain their sensitive-region matrix maps. The coefficient matrix of the sensitive-region matrix map obtained from L_org and L_dis is denoted A_L, and the coefficient value at coordinate position (i, j) in A_L is denoted A_L(i, j); the coefficient matrix of the sensitive-region matrix map obtained from R_org and R_dis is denoted A_R, and the coefficient value at coordinate position (i, j) in A_R is denoted A_R(i, j), wherein 0 ≤ i ≤ (W−8), 0 ≤ j ≤ (H−8), W denotes the width of L_org, L_dis, R_org and R_dis, and H denotes the height of L_org, L_dis, R_org and R_dis.
In this specific embodiment, the acquisition process of the coefficient matrix A_L of the sensitive-region matrix map corresponding to L_org and L_dis in step 2. is:
2.-a1, apply horizontal and vertical Sobel operator processing to L_org to obtain its horizontal gradient image and vertical gradient image, denoted Z_{h,l1} and Z_{v,l1} respectively, then calculate the gradient magnitude map of L_org, denoted Z_l1, wherein Z_l1(x, y) denotes the gradient magnitude of the pixel at coordinate (x, y) in Z_l1, Z_{h,l1}(x, y) denotes the horizontal gradient value of the pixel at coordinate (x, y) in Z_{h,l1}, Z_{v,l1}(x, y) denotes the vertical gradient value of the pixel at coordinate (x, y) in Z_{v,l1}, 1 ≤ x ≤ W', 1 ≤ y ≤ H', where W' denotes the width of Z_l1 and H' denotes the height of Z_l1.
2.-a2, apply horizontal and vertical Sobel operator processing to L_dis to obtain its horizontal gradient image and vertical gradient image, denoted Z_{h,l2} and Z_{v,l2} respectively, then calculate the gradient magnitude map of L_dis, denoted Z_l2, wherein Z_l2(x, y) denotes the gradient magnitude of the pixel at coordinate (x, y) in Z_l2, Z_{h,l2}(x, y) denotes the horizontal gradient value of the pixel at coordinate (x, y) in Z_{h,l2}, Z_{v,l2}(x, y) denotes the vertical gradient value of the pixel at coordinate (x, y) in Z_{v,l2}, 1 ≤ x ≤ W', 1 ≤ y ≤ H', where W' denotes the width of Z_l2 and H' denotes the height of Z_l2.
2.-a3, calculate the threshold T required for region division, T = α\times\frac{1}{W'\times H'}\times(\sum_{x=0}^{W'}\sum_{y=0}^{H'}Z_{l1}(x,y)+\sum_{x=0}^{W'}\sum_{y=0}^{H'}Z_{l2}(x,y)), wherein W' denotes the width of Z_l1 and Z_l2, H' denotes the height of Z_l1 and Z_l2, α is a constant, Z_l1(x, y) denotes the gradient magnitude of the pixel at coordinate (x, y) in Z_l1, and Z_l2(x, y) denotes the gradient magnitude of the pixel at coordinate (x, y) in Z_l2.
2.-a4, denote the gradient magnitude of the pixel at coordinate (i, j) in Z_l1 as Z_l1(i, j) and the gradient magnitude of the pixel at coordinate (i, j) in Z_l2 as Z_l2(i, j). Judge whether Z_l1(i, j) > T or Z_l2(i, j) > T holds; if so, determine that the pixel at coordinate (i, j) in L_org and L_dis belongs to the sensitive region and let A_L(i, j) = 1; otherwise, determine that the pixel at coordinate (i, j) in L_org and L_dis belongs to the non-sensitive region and let A_L(i, j) = 0, wherein 0 ≤ i ≤ (W−8), 0 ≤ j ≤ (H−8).
In this specific embodiment, the acquisition process of the coefficient matrix A_R of the sensitive-region matrix map corresponding to R_org and R_dis in step 2. is:
2.-b1, apply horizontal and vertical Sobel operator processing to R_org to obtain its horizontal gradient image and vertical gradient image, denoted Z_{h,r1} and Z_{v,r1} respectively, then calculate the gradient magnitude map of R_org, denoted Z_r1, wherein Z_r1(x, y) denotes the gradient magnitude of the pixel at coordinate (x, y) in Z_r1, Z_{h,r1}(x, y) denotes the horizontal gradient value of the pixel at coordinate (x, y) in Z_{h,r1}, Z_{v,r1}(x, y) denotes the vertical gradient value of the pixel at coordinate (x, y) in Z_{v,r1}, 1 ≤ x ≤ W', 1 ≤ y ≤ H', where W' denotes the width of Z_r1 and H' denotes the height of Z_r1.
2.-b2, apply horizontal and vertical Sobel operator processing to R_dis to obtain its horizontal gradient image and vertical gradient image, denoted Z_{h,r2} and Z_{v,r2} respectively, then calculate the gradient magnitude map of R_dis, denoted Z_r2, wherein Z_r2(x, y) denotes the gradient magnitude of the pixel at coordinate (x, y) in Z_r2, Z_{h,r2}(x, y) denotes the horizontal gradient value of the pixel at coordinate (x, y) in Z_{h,r2}, Z_{v,r2}(x, y) denotes the vertical gradient value of the pixel at coordinate (x, y) in Z_{v,r2}, 1 ≤ x ≤ W', 1 ≤ y ≤ H', where W' denotes the width of Z_r2 and H' denotes the height of Z_r2.
2.-b3, calculate the threshold T' required for region division, T' = α\times\frac{1}{W'\times H'}\times(\sum_{x=0}^{W'}\sum_{y=0}^{H'}Z_{r1}(x,y)+\sum_{x=0}^{W'}\sum_{y=0}^{H'}Z_{r2}(x,y)), wherein W' denotes the width of Z_r1 and Z_r2, H' denotes the height of Z_r1 and Z_r2, α is a constant, Z_r1(x, y) denotes the gradient magnitude of the pixel at coordinate (x, y) in Z_r1, and Z_r2(x, y) denotes the gradient magnitude of the pixel at coordinate (x, y) in Z_r2.
2.-b4, denote the gradient magnitude of the pixel at coordinate (i, j) in Z_r1 as Z_r1(i, j) and the gradient magnitude of the pixel at coordinate (i, j) in Z_r2 as Z_r2(i, j). Judge whether Z_r1(i, j) > T' or Z_r2(i, j) > T' holds; if so, determine that the pixel at coordinate (i, j) in R_org and R_dis belongs to the sensitive region and let A_R(i, j) = 1; otherwise, determine that the pixel at coordinate (i, j) in R_org and R_dis belongs to the non-sensitive region and let A_R(i, j) = 0, wherein 0 ≤ i ≤ (W−8), 0 ≤ j ≤ (H−8).
In the present embodiment, the 12 undistorted stereoscopic images shown in Fig. 2a, Fig. 2b, Fig. 2c, Fig. 2d, Fig. 2e, Fig. 2f, Fig. 2g, Fig. 2h, Fig. 2i, Fig. 2j, Fig. 2k and Fig. 2l are used to establish a set of distorted stereoscopic images under different distortion types and different distortion levels. The distortion types include JPEG compression, JPEG2000 compression, white Gaussian noise, Gaussian blur and H.264 coding distortion, and the left and right viewpoint images of each stereoscopic image are distorted to the same degree simultaneously. The set contains 312 distorted stereoscopic images in total: 60 with JPEG compression distortion, 60 with JPEG2000 compression distortion, 60 with white Gaussian noise distortion, 60 with Gaussian blur distortion and 72 with H.264 coding distortion. The above 312 stereoscopic images are subjected to the region segmentation described above.
In the present embodiment, the value of α determines the accuracy of the sensitive-region division: if the value is too large, sensitive regions will be mistaken for non-sensitive regions, and if it is too small, non-sensitive regions will be mistaken for sensitive regions; its value is therefore determined jointly with the contribution of the left viewpoint image quality or right viewpoint image quality to the stereoscopic image quality.
3. Divide L_org and L_dis each into (W−7) × (H−7) overlapping blocks of size 8 × 8, then calculate the structure amplitude distortion map of every pair of co-located blocks in L_org and L_dis. The coefficient matrix of this map is denoted B_L, and the coefficient value at coordinate position (i, j) in B_L is denoted B_L(i, j), B_L(i,j) = \frac{2σ_{org,dis,L}(i,j)+C_1}{(σ_{org,L}(i,j))^2+(σ_{dis,L}(i,j))^2+C_1}, wherein B_L(i, j) also represents the structure amplitude distortion value of the 8 × 8 block of L_org whose top-left corner coordinate is (i, j) and the 8 × 8 block of L_dis whose top-left corner coordinate is (i, j), σ_{org,L}(i,j)=\sqrt{\frac{1}{64}\sum_{x=0}^{7}\sum_{y=0}^{7}(L_{org}(i+x,j+y)-U_{org,L}(i,j))^2}, U_{org,L}(i,j)=\frac{1}{64}\sum_{x=0}^{7}\sum_{y=0}^{7}L_{org}(i+x,j+y), σ_{dis,L}(i,j)=\sqrt{\frac{1}{64}\sum_{x=0}^{7}\sum_{y=0}^{7}(L_{dis}(i+x,j+y)-U_{dis,L}(i,j))^2}, U_{dis,L}(i,j)=\frac{1}{64}\sum_{x=0}^{7}\sum_{y=0}^{7}L_{dis}(i+x,j+y), σ_{org,dis,L}(i,j)=\frac{1}{64}\sum_{x=0}^{7}\sum_{y=0}^{7}((L_{org}(i+x,j+y)-U_{org,L}(i,j))\times(L_{dis}(i+x,j+y)-U_{dis,L}(i,j))), L_org(i+x, j+y) denotes the pixel value at coordinate (i+x, j+y) in L_org, L_dis(i+x, j+y) denotes the pixel value at coordinate (i+x, j+y) in L_dis, and C_1 is a constant introduced to prevent the denominator of B_L(i, j) from becoming zero; in practical applications C_1 = 0.01 may be taken; here 0 ≤ i ≤ (W−8), 0 ≤ j ≤ (H−8).
Here, considering the correlation between pixels of an image, an 8 × 8 overlapping block overlaps its nearest left or right neighbouring block by 7 columns and, likewise, overlaps its nearest upper or lower neighbouring block by 7 rows.
Divide R_org and R_dis each into (W−7) × (H−7) overlapping blocks of size 8 × 8, then calculate the structure amplitude distortion map of every pair of co-located blocks in R_org and R_dis. The coefficient matrix of this map is denoted B_R, and the coefficient value at coordinate position (i, j) in B_R is denoted B_R(i, j), B_R(i,j) = \frac{2σ_{org,dis,R}(i,j)+C_1}{(σ_{org,R}(i,j))^2+(σ_{dis,R}(i,j))^2+C_1}, wherein B_R(i, j) also represents the structure amplitude distortion value of the 8 × 8 block of R_org whose top-left corner coordinate is (i, j) and the 8 × 8 block of R_dis whose top-left corner coordinate is (i, j), σ_{org,R}(i,j)=\sqrt{\frac{1}{64}\sum_{x=0}^{7}\sum_{y=0}^{7}(R_{org}(i+x,j+y)-U_{org,R}(i,j))^2}, U_{org,R}(i,j)=\frac{1}{64}\sum_{x=0}^{7}\sum_{y=0}^{7}R_{org}(i+x,j+y), σ_{dis,R}(i,j)=\sqrt{\frac{1}{64}\sum_{x=0}^{7}\sum_{y=0}^{7}(R_{dis}(i+x,j+y)-U_{dis,R}(i,j))^2}, U_{dis,R}(i,j)=\frac{1}{64}\sum_{x=0}^{7}\sum_{y=0}^{7}R_{dis}(i+x,j+y), σ_{org,dis,R}(i,j)=\frac{1}{64}\sum_{x=0}^{7}\sum_{y=0}^{7}((R_{org}(i+x,j+y)-U_{org,R}(i,j))\times(R_{dis}(i+x,j+y)-U_{dis,R}(i,j))), R_org(i+x, j+y) denotes the pixel value at coordinate (i+x, j+y) in R_org, R_dis(i+x, j+y) denotes the pixel value at coordinate (i+x, j+y) in R_dis, and C_1 is a constant introduced to prevent the denominator of B_R(i, j) from becoming zero; in practical applications C_1 = 0.01 may be taken; here 0 ≤ i ≤ (W−8), 0 ≤ j ≤ (H−8).
4. Apply horizontal and vertical Sobel operator processing to L_org and L_dis respectively, obtaining the horizontal gradient matrix map and the vertical gradient matrix map of each of the two images. The coefficient matrix of the horizontal gradient matrix map obtained after applying the horizontal Sobel operator to L_org is denoted I_{h,org,L}, with I_{h,org,L}(i,j) = L_{org}(i+2,j)+2L_{org}(i+2,j+1)+L_{org}(i+2,j+2)−L_{org}(i,j)−2L_{org}(i,j+1)−L_{org}(i,j+2); the coefficient matrix of the vertical gradient matrix map obtained after applying the vertical Sobel operator to L_org is denoted I_{v,org,L}, with I_{v,org,L}(i,j) = L_{org}(i,j+2)+2L_{org}(i+1,j+2)+L_{org}(i+2,j+2)−L_{org}(i,j)−2L_{org}(i+1,j)−L_{org}(i+2,j); the coefficient matrix of the horizontal gradient matrix map obtained after applying the horizontal Sobel operator to L_dis is denoted I_{h,dis,L}, with I_{h,dis,L}(i,j) = L_{dis}(i+2,j)+2L_{dis}(i+2,j+1)+L_{dis}(i+2,j+2)−L_{dis}(i,j)−2L_{dis}(i,j+1)−L_{dis}(i,j+2); the coefficient matrix of the vertical gradient matrix map obtained after applying the vertical Sobel operator to L_dis is denoted I_{v,dis,L}, with I_{v,dis,L}(i,j) = L_{dis}(i,j+2)+2L_{dis}(i+1,j+2)+L_{dis}(i+2,j+2)−L_{dis}(i,j)−2L_{dis}(i+1,j)−L_{dis}(i+2,j), wherein L_org(·,·) and L_dis(·,·) denote the pixel value at the indicated coordinate position in L_org and L_dis respectively.
Apply horizontal and vertical Sobel operator processing to R_org and R_dis respectively, obtaining the horizontal gradient matrix map and the vertical gradient matrix map of each of the two images. The coefficient matrix of the horizontal gradient matrix map obtained after applying the horizontal Sobel operator to R_org is denoted I_{h,org,R}, with I_{h,org,R}(i,j) = R_{org}(i+2,j)+2R_{org}(i+2,j+1)+R_{org}(i+2,j+2)−R_{org}(i,j)−2R_{org}(i,j+1)−R_{org}(i,j+2); the coefficient matrix of the vertical gradient matrix map obtained after applying the vertical Sobel operator to R_org is denoted I_{v,org,R}, with I_{v,org,R}(i,j) = R_{org}(i,j+2)+2R_{org}(i+1,j+2)+R_{org}(i+2,j+2)−R_{org}(i,j)−2R_{org}(i+1,j)−R_{org}(i+2,j); the coefficient matrix of the horizontal gradient matrix map obtained after applying the horizontal Sobel operator to R_dis is denoted I_{h,dis,R}, with I_{h,dis,R}(i,j) = R_{dis}(i+2,j)+2R_{dis}(i+2,j+1)+R_{dis}(i+2,j+2)−R_{dis}(i,j)−2R_{dis}(i,j+1)−R_{dis}(i,j+2); the coefficient matrix of the vertical gradient matrix map obtained after applying the vertical Sobel operator to R_dis is denoted I_{v,dis,R}, with I_{v,dis,R}(i,j) = R_{dis}(i,j+2)+2R_{dis}(i+1,j+2)+R_{dis}(i+2,j+2)−R_{dis}(i,j)−2R_{dis}(i+1,j)−R_{dis}(i+2,j), wherein R_org(·,·) and R_dis(·,·) denote the pixel value at the indicated coordinate position in R_org and R_dis respectively.
5. Calculate the structure direction distortion map of every pair of co-located blocks in L_org and L_dis. The coefficient matrix of this map is denoted E_L, and the coefficient value at coordinate position (i, j) in E_L is denoted E_L(i, j), E_L(i,j) = \frac{I_{h,org,L}(i,j)\times I_{h,dis,L}(i,j)+I_{v,org,L}(i,j)\times I_{v,dis,L}(i,j)+C_2}{\sqrt{(I_{h,org,L}(i,j))^2+(I_{v,org,L}(i,j))^2}\times\sqrt{(I_{h,dis,L}(i,j))^2+(I_{v,dis,L}(i,j))^2}+C_2}, wherein C_2 is a constant introduced to prevent the denominator of E_L(i, j) from becoming zero; in practical applications C_2 = 0.02 may be taken.
Calculate the structure direction distortion map of every pair of co-located blocks in R_org and R_dis. The coefficient matrix of this map is denoted E_R, and the coefficient value at coordinate position (i, j) in E_R is denoted E_R(i, j), E_R(i,j) = \frac{I_{h,org,R}(i,j)\times I_{h,dis,R}(i,j)+I_{v,org,R}(i,j)\times I_{v,dis,R}(i,j)+C_2}{\sqrt{(I_{h,org,R}(i,j))^2+(I_{v,org,R}(i,j))^2}\times\sqrt{(I_{h,dis,R}(i,j))^2+(I_{v,dis,R}(i,j))^2}+C_2}, wherein C_2 is a constant introduced to prevent the denominator of E_R(i, j) from becoming zero; in practical applications C_2 = 0.02 may be taken.
6. Calculate the structure distortion evaluation value of L_org and L_dis, denoted Q_L, Q_L = ω_1 × Q_{m,L} + ω_2 × Q_{nm,L}, Q_{m,L} = \frac{1}{N_{L,m}}\sum_{i=0}^{W-8}\sum_{j=0}^{H-8}(0.5\times(B_L(i,j)+E_L(i,j))\times A_L(i,j)), N_{L,m} = \sum_{i=0}^{W-8}\sum_{j=0}^{H-8}A_L(i,j), Q_{nm,L} = \frac{1}{N_{L,nm}}\sum_{i=0}^{W-8}\sum_{j=0}^{H-8}(0.5\times(B_L(i,j)+E_L(i,j))\times(1-A_L(i,j))), N_{L,nm} = \sum_{i=0}^{W-8}\sum_{j=0}^{H-8}(1-A_L(i,j)), wherein ω_1 denotes the weight of the sensitive region in L_org and L_dis, and ω_2 denotes the weight of the non-sensitive region in L_org and L_dis.
Calculate the structure distortion evaluation value of R_org and R_dis, denoted Q_R, Q_R = ω'_1 × Q_{m,R} + ω'_2 × Q_{nm,R}, Q_{m,R} = \frac{1}{N_{R,m}}\sum_{i=0}^{W-8}\sum_{j=0}^{H-8}(0.5\times(B_R(i,j)+E_R(i,j))\times A_R(i,j)), N_{R,m} = \sum_{i=0}^{W-8}\sum_{j=0}^{H-8}A_R(i,j), Q_{nm,R} = \frac{1}{N_{R,nm}}\sum_{i=0}^{W-8}\sum_{j=0}^{H-8}(0.5\times(B_R(i,j)+E_R(i,j))\times(1-A_R(i,j))), N_{R,nm} = \sum_{i=0}^{W-8}\sum_{j=0}^{H-8}(1-A_R(i,j)), wherein ω'_1 denotes the weight of the sensitive region in R_org and R_dis, and ω'_2 denotes the weight of the non-sensitive region in R_org and R_dis.
In the present embodiment, Fig. 3 gives the block diagram of the implementation of the left viewpoint image quality evaluation. The 12 undistorted stereo images shown in Fig. 2a to Fig. 2l are used to establish a distorted stereo image set consisting of 312 distorted stereo images. A known subjective quality assessment method is applied to these 312 distorted stereo images to obtain the difference mean opinion score (DMOS, Difference Mean Opinion Score) of each distorted stereo image, i.e. the subjective quality score of each distorted stereo image. DMOS is the difference between the full score (100) and the mean opinion score (MOS), i.e. DMOS = 100 - MOS; therefore a larger DMOS value indicates poorer quality of the distorted stereo image, a smaller DMOS value indicates better quality, and the value range of DMOS is [0, 100]. On the other hand, the Q_{m,L} and Q_{nm,L} corresponding to each distorted stereo image are calculated for the above 312 distorted stereo images according to steps 1. to 6. of the inventive method; then Q_L = ω_1 × Q_{m,L} + (1 - ω_1) × Q_{nm,L} is used in a four-parameter Logistic function nonlinear fitting to obtain the values of α and ω_1. Here, 3 objective parameters commonly used to assess image quality evaluation methods are adopted as evaluation indices: the Pearson correlation coefficient under the nonlinear regression condition (Correlation Coefficient, CC), the Spearman rank-order correlation coefficient (SROCC) and the root mean squared error (RMSE). CC reflects the precision of the objective model as an evaluation function of distorted stereo images, SROCC reflects the monotonicity between the objective model and subjective perception, and RMSE reflects the accuracy of its prediction. Higher CC and SROCC values indicate a better correlation between the objective stereo image evaluation method and DMOS, and a lower RMSE value likewise indicates a better correlation with DMOS. Fig. 4a gives the CC performance change between the left viewpoint image quality of the 312 stereo images and the subjective perceptual quality under different α and ω_1, Fig. 4b gives the corresponding SROCC performance change, and Fig. 4c gives the corresponding RMSE performance change. Analysing Fig. 4a, Fig. 4b and Fig. 4c, it can be seen that CC and SROCC increase as ω_1 increases while RMSE decreases as ω_1 increases, which shows that the left viewpoint image quality is determined mainly by the quality of the sensitive region, and that changes in α have little effect on the performance between left viewpoint image quality and subjective perception. Fig. 5a gives the CC performance change between the left viewpoint image quality of the 312 stereo images and the subjective perceptual quality under different α with ω_1 = 1 and ω_2 = 0; Fig. 5b gives the corresponding SROCC performance change; Fig. 5c gives the corresponding RMSE performance change. Analysing Fig. 5a, Fig. 5b and Fig. 5c, it can be seen that the CC, SROCC and RMSE values all fluctuate only at the hundredths level but each has a peak. Therefore, in the present embodiment, ω_1 = 1 and α = 2.1 are taken.
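The parameter selection above hinges on a four-parameter Logistic fitting between the objective scores and DMOS, followed by computing CC, SROCC and RMSE. A sketch of that evaluation loop is given below; it assumes SciPy is available, and the particular logistic parameterisation shown is a common choice in image quality assessment rather than one spelled out in the patent.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import pearsonr, spearmanr

def logistic4(x, a, b, c, d):
    # One common four-parameter logistic form; the patent does not give
    # its exact parameterisation, so this is an assumed stand-in.
    return (a - d) / (1.0 + np.exp((x - c) / b)) + d

def fit_and_evaluate(objective, dmos):
    """Map objective scores onto DMOS with a four-parameter logistic fit,
    then report CC (Pearson, after the non-linear mapping), SROCC and RMSE."""
    objective = np.asarray(objective, dtype=float)
    dmos = np.asarray(dmos, dtype=float)
    p0 = [dmos.max(), np.std(objective) + 1e-6, np.mean(objective), dmos.min()]
    popt, _ = curve_fit(logistic4, objective, dmos, p0=p0, maxfev=20000)
    predicted = logistic4(objective, *popt)
    cc = pearsonr(predicted, dmos)[0]
    srocc = spearmanr(objective, dmos)[0]   # rank order: no mapping needed
    rmse = float(np.sqrt(np.mean((predicted - dmos) ** 2)))
    return cc, srocc, rmse
```

Sweeping a parameter (here ω_1 and α, later β_1, τ and ρ) and re-running this kind of fit is what produces the performance curves discussed for Fig. 4 through Fig. 8.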
7. According to Q_L and Q_R, calculate the spatial frequency similarity measure of the distorted stereo image S_dis to be evaluated relative to the original undistorted stereo image S_org, denoted Q_F: Q_F = β_1 × Q_L + (1 - β_1) × Q_R, where β_1 denotes the weight of Q_L.
In this specific embodiment, the acquisition process of β_1 in step 7. is:
7.-1, adopt n undistorted stereo images to establish a distorted stereo image set of these images under different distortion levels of different distortion types; this distorted stereo image set comprises several distorted stereo images, where n ≥ 1.
7.-2, use a subjective quality assessment method to obtain the difference mean opinion score of each distorted stereo image in the distorted stereo image set, denoted DMOS, DMOS = 100 - MOS, where MOS denotes the mean opinion score and DMOS ∈ [0, 100].
7.-3, according to the operating process of steps 1. to 6., calculate, for the left viewpoint image of each distorted stereo image in the distorted stereo image set, the sensitive-region evaluation value Q_{m,L} and the non-sensitive-region evaluation value Q_{nm,L} relative to the left viewpoint image of the corresponding undistorted stereo image.
7.-4, adopt a mathematical fitting method to fit the difference mean opinion scores DMOS of the distorted stereo images in the distorted stereo image set against the corresponding Q_{m,L} and Q_{nm,L}, thereby obtaining the value of β_1.
In the present embodiment, β_1 determines the contribution of Q_L to the stereo image quality. For blocking artifacts, the stereo image quality is roughly half the sum of the quality of the left viewpoint image and the quality of the right viewpoint image, whereas for blur distortion the stereo image quality depends mainly on the better of the two viewpoints. Since the left and right viewpoint images of this stereo image test library are subjected to the same degree of distortion at the same time, the quality of the left viewpoint image and that of the right viewpoint image differ little, so changes in β_1 have little influence on the subjective performance of the stereo images. First, the Q_L and Q_R corresponding to each distorted stereo image are calculated for the above 312 distorted stereo images according to steps 1. to 6. of the inventive method; then a four-parameter fitting is adopted to obtain the value of β_1. Fig. 6a gives the CC performance change between the quality of the left and right viewpoint images and the subjective perceptual quality under different β_1, Fig. 6b gives the corresponding SROCC performance change, and Fig. 6c gives the corresponding RMSE performance change. Analysing Fig. 6a, Fig. 6b and Fig. 6c, it can be seen that as β_1 varies, the CC, SROCC and RMSE values change little, fluctuating at the hundredths level, but each has a peak. Here, β_1 = 0.5 is taken.
8. Calculate the absolute difference image of L_org and R_org, represented in matrix form as |L_org - R_org|, and calculate the absolute difference image of L_dis and R_dis, represented in matrix form as |L_dis - R_dis|, where "| |" denotes the absolute value operation.
9. Divide each of the 2 images |L_org - R_org| and |L_dis - R_dis| into (W_LR × H_LR)/64 non-overlapping blocks of size 8 × 8, then apply singular value decomposition to |L_org - R_org| and |L_dis - R_dis| block by block, obtaining the singular value map of |L_org - R_org| formed by the singular value matrices of all its blocks and the singular value map of |L_dis - R_dis| formed by the singular value matrices of all its blocks. The coefficient matrix of the singular value map obtained after applying singular value decomposition to |L_org - R_org| is denoted G_org, and the singular value at coordinate position (p, q) in the singular value matrix of the n-th block in G_org is denoted G_org^n(p, q). The coefficient matrix of the singular value map obtained after applying singular value decomposition to |L_dis - R_dis| is denoted G_dis, and the singular value at coordinate position (p, q) in the singular value matrix of the n-th block in G_dis is denoted G_dis^n(p, q). Here, W_LR denotes the width of |L_org - R_org| and |L_dis - R_dis|, H_LR denotes the height of |L_org - R_org| and |L_dis - R_dis|, 0 ≤ p ≤ 7, 0 ≤ q ≤ 7.
Here, in order to reduce the computational complexity, an 8 × 8 block does not share any rows or columns with the 8 × 8 blocks adjacent to it on the left, right, top or bottom, i.e. the blocks do not overlap one another.
10. Calculate the singular value deviation evaluation value between the singular value map corresponding to |L_org - R_org| and the singular value map corresponding to |L_dis - R_dis|, denoted K:
K = (64/(W_LR × H_LR)) × Σ_{n=0}^{W_LR×H_LR/64 - 1} [Σ_{p=0}^{7} (G_org^n(p, p) × |G_org^n(p, p) - G_dis^n(p, p)|) / Σ_{p=0}^{7} G_org^n(p, p)],
where G_org^n(p, p) denotes the singular value at coordinate position (p, p) in the singular value matrix of the n-th block in G_org, and G_dis^n(p, p) denotes the singular value at coordinate position (p, p) in the singular value matrix of the n-th block in G_dis.
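Steps 8. to 10. can be summarised by the following sketch (again illustrative, not the patent's own code): the absolute difference images are formed, split into non-overlapping 8 × 8 blocks, and the block singular values are compared. It assumes any remainder of the image smaller than one block is simply ignored.

```python
import numpy as np

def singular_value_deviation(D_org, D_dis, block=8):
    """Singular value deviation K between the absolute difference images
    D_org = |L_org - R_org| and D_dis = |L_dis - R_dis|: for every
    non-overlapping 8x8 block, a weighted relative deviation of the block
    singular values, averaged over all blocks."""
    H, W = D_org.shape
    total, n_blocks = 0.0, 0
    for i in range(0, H - block + 1, block):
        for j in range(0, W - block + 1, block):
            s_org = np.linalg.svd(D_org[i:i + block, j:j + block], compute_uv=False)
            s_dis = np.linalg.svd(D_dis[i:i + block, j:j + block], compute_uv=False)
            denom = s_org.sum()
            if denom > 0:                      # guard against all-zero blocks
                total += np.sum(s_org * np.abs(s_org - s_dis)) / denom
            n_blocks += 1
    return total / n_blocks if n_blocks else 0.0

# D_org and D_dis would be built beforehand, e.g. as
# np.abs(L_org - R_org) and np.abs(L_dis - R_dis) on float arrays.
```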
11. Apply singular value decomposition to |L_org - R_org| and |L_dis - R_dis| respectively (each treated as a whole image), obtaining for each of them 2 orthogonal matrices and 1 singular value matrix. The 2 orthogonal matrices obtained after applying singular value decomposition to |L_org - R_org| are denoted χ_org and V_org, and the singular value matrix obtained after applying singular value decomposition to |L_org - R_org| is denoted O_org; the 2 orthogonal matrices obtained after applying singular value decomposition to |L_dis - R_dis| are denoted χ_dis and V_dis, and the singular value matrix obtained after applying singular value decomposition to |L_dis - R_dis| is denoted O_dis.
12. Calculate the residual matrix maps of the 2 images |L_org - R_org| and |L_dis - R_dis| after their singular values are deprived. The residual matrix map of |L_org - R_org| after the singular values are deprived is denoted X_org, X_org = χ_org × Λ × V_org; the residual matrix map of |L_dis - R_dis| after the singular values are deprived is denoted X_dis, X_dis = χ_dis × Λ × V_dis, where Λ denotes the unit matrix and the size of Λ is the same as the size of O_org and O_dis.
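The residual maps of this step keep only the orthogonal factors of the whole-image singular value decomposition, with the singular value matrix replaced by the unit matrix Λ. A brief sketch with illustrative names (note that NumPy's SVD convention returns the second orthogonal factor already transposed):

```python
import numpy as np

def residual_without_singular_values(D):
    """Recombine the SVD factors of an absolute difference image with the
    singular value matrix replaced by the identity, i.e. the residual map
    after the singular values are 'deprived'."""
    U, s, Vh = np.linalg.svd(D, full_matrices=False)
    Lam = np.eye(len(s))     # unit matrix, same size as the singular value matrix
    return U @ Lam @ Vh      # equivalently U @ Vh

# Usage (with D_org, D_dis as in the previous sketch):
#   X_org = residual_without_singular_values(D_org)
#   X_dis = residual_without_singular_values(D_dis)
```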
13. Calculate the mean deviation ratio of X_org and X_dis, where x denotes the abscissa of the pixels in X_org and X_dis, and y denotes the ordinate of the pixels in X_org and X_dis.
14. Calculate the stereoscopic perception evaluation measure of the distorted stereo image S_dis to be evaluated relative to the original undistorted stereo image S_org, denoted Q_S, where τ denotes a constant used to regulate the importance of K and of the mean deviation ratio in Q_S.
In the present embodiment, the absolute difference images of the above 312 distorted stereo images and of the 10 undistorted stereo images are first obtained, and then the K and the mean deviation ratio corresponding to each distorted stereo image are calculated according to step 8. and the following steps of the inventive method. Here, the size of the τ value determines the importance attached to the singular value deviation and to the residual information in the depth perception evaluation. Fig. 7a gives the CC performance change between the stereoscopic perceived quality of the 312 distorted stereo images and subjective perception under different τ, Fig. 7b gives the corresponding SROCC performance change, and Fig. 7c gives the corresponding RMSE performance change; in Fig. 7a, Fig. 7b and Fig. 7c, τ is varied within the range [164]. Analysing Fig. 7a, Fig. 7b and Fig. 7c, it can be seen that CC, SROCC and RMSE each exhibit an extremum as τ varies, and the positions of these extrema are roughly the same; here τ = -8 is taken.
15. According to Q_F and Q_S, calculate the image quality evaluation score of the distorted stereo image S_dis to be evaluated, denoted Q: Q = Q_F × (Q_S)^ρ, where ρ denotes a weighting coefficient.
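For completeness, the final pooling of steps 7. and 15. amounts to the following small sketch, using the embodiment's selected values β_1 = 0.5 and ρ = 0.3 as defaults (the function name is illustrative):

```python
def overall_quality(Q_L, Q_R, Q_S, beta1=0.5, rho=0.3):
    """Combine the two viewpoint qualities into Q_F and attenuate it by the
    stereoscopic perception term Q_S raised to the exponent rho."""
    Q_F = beta1 * Q_L + (1.0 - beta1) * Q_R
    return Q_F * (Q_S ** rho)
```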
In the present embodiment, the Q_F and Q_S corresponding to each distorted stereo image are calculated for the above 312 distorted stereo images according to step 1. and the following steps of the inventive method, and then Q = Q_F × (Q_S)^ρ is used in a four-parameter Logistic function nonlinear fitting to obtain ρ. The value of ρ determines the contribution of the quality of the left and right viewpoint images and of the stereoscopic perceived quality to the stereo image quality. Both Q_F and Q_S decrease as the distortion level of the stereo image deepens, so the value range of ρ is greater than 0. Fig. 8a gives the CC performance change between the quality of the 312 stereo images and the subjective perceptual quality under different ρ values, Fig. 8b gives the corresponding SROCC performance change, and Fig. 8c gives the corresponding RMSE performance change. Analysing Fig. 8a, Fig. 8b and Fig. 8c, it can be seen that a ρ value that is either too large or too small degrades the consistency between the objective stereo image quality evaluation model and subjective perception; as ρ varies, the CC, SROCC and RMSE values each exhibit an extreme point, and these occur at approximately the same location. Here ρ = 0.3 is taken.
The correlation between the final evaluation results of the image quality evaluation function Q = Q_F × (Q_S)^0.3 of the distorted stereo images obtained in the present embodiment and the subjective scores DMOS is analysed as follows. First, the output value Q of the final stereo image quality evaluation result is calculated by the image quality evaluation function Q = Q_F × (Q_S)^0.3 obtained in the present embodiment; then the output values Q are subjected to a four-parameter Logistic function nonlinear fitting, and finally the performance index values between the objective stereoscopic evaluation model and subjective perception are obtained. Here, 4 objective parameters commonly used to assess image quality evaluation methods are adopted as evaluation indices: CC, SROCC, the outlier ratio (Outlier Ratio, OR) and RMSE. OR reflects the dispersion degree of the objective stereo image quality evaluation model, i.e. the proportion of distorted stereo images, among all distorted stereo images, for which the difference between the evaluation value after the four-parameter fitting and the DMOS exceeds a certain threshold; a minimal sketch of this computation is given after Table 1. Table 1 gives the assessment performance in terms of the CC, SROCC, OR and RMSE coefficients. As can be seen from the data in Table 1, the correlation between the output values Q of the final evaluation results calculated by the image quality evaluation function Q = Q_F × (Q_S)^0.3 of the distorted stereo images obtained in the present embodiment and the subjective scores DMOS is very high: the CC and SROCC values both exceed 0.92 and the RMSE value is below 6.5, indicating that the objective evaluation results agree well with the results of subjective perception by the human eye and demonstrating the effectiveness of the inventive method.
Table 1 Correlation between the image quality evaluation scores of the distorted stereo images obtained by the present embodiment and the subjective scores
Index    Gblur    JP2K     JPEG     WN       H264     ALL
Number   60       60       60       60       72       312
CC       0.9658   0.9479   0.9533   0.9554   0.9767   0.9235
SROCC    0.9655   0.9489   0.9524   0.9274   0.9545   0.9430
OR       0        0        0        0        0        0
RMSE     5.4719   3.8180   4.3010   4.6151   3.0135   6.5890
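The outlier ratio reported in Table 1 can be computed as in the sketch below; the threshold itself is only described as "a certain threshold" in the text, so it is left to the caller here (a common choice elsewhere in the image quality assessment literature is twice the standard deviation of the subjective scores).

```python
import numpy as np

def outlier_ratio(predicted_dmos, dmos, threshold):
    """Fraction of distorted stereo images whose fitted objective score
    differs from DMOS by more than the given threshold."""
    predicted_dmos = np.asarray(predicted_dmos, dtype=float)
    dmos = np.asarray(dmos, dtype=float)
    return float(np.mean(np.abs(predicted_dmos - dmos) > threshold))
```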

Claims (3)

1. A stereoscopic image objective quality evaluation method based on structural distortion, characterized in that it comprises the following steps:
1. let S_org be the original undistorted stereo image and S_dis be the distorted stereo image to be evaluated; denote the left viewpoint grayscale image of the original undistorted stereo image S_org as L_org, the right viewpoint grayscale image of the original undistorted stereo image S_org as R_org, the left viewpoint grayscale image of the distorted stereo image S_dis to be evaluated as L_dis, and the right viewpoint grayscale image of the distorted stereo image S_dis to be evaluated as R_dis;
2. perform region division on each of the 4 images L_org and L_dis, R_org and R_dis, obtaining the sensitive-region matrix map corresponding to each of the 4 images L_org and L_dis, R_org and R_dis; the coefficient matrices of the sensitive-region matrix maps obtained after performing region division on L_org and L_dis are both denoted A_L, and the coefficient value at coordinate position (i, j) in A_L is denoted A_L(i, j); the coefficient matrices of the sensitive-region matrix maps obtained after performing region division on R_org and R_dis are both denoted A_R, and the coefficient value at coordinate position (i, j) in A_R is denoted A_R(i, j); here 0 ≤ i ≤ (W - 8), 0 ≤ j ≤ (H - 8), W denotes the width of L_org, L_dis, R_org and R_dis, and H denotes the height of L_org, L_dis, R_org and R_dis;
3. divide each of the 2 images L_org and L_dis into (W - 7) × (H - 7) overlapping blocks of size 8 × 8, then calculate the structure amplitude distortion map of every pair of identically positioned overlapping blocks in the 2 images L_org and L_dis, and denote the coefficient matrix of this structure amplitude distortion map as B_L; the coefficient value at coordinate position (i, j) in B_L is denoted B_L(i, j), B_L(i, j) = (2 × σ_{org,dis,L}(i, j) + C_1) / ((σ_{org,L}(i, j))^2 + (σ_{dis,L}(i, j))^2 + C_1), where B_L(i, j) also represents the structure amplitude distortion value between the 8 × 8 overlapping block whose upper-left corner coordinate position in L_org is (i, j) and the 8 × 8 overlapping block whose upper-left corner coordinate position in L_dis is (i, j), σ_{org,L}(i, j) = √((1/64) × Σ_{x=0}^{7} Σ_{y=0}^{7} (L_org(i+x, j+y) - U_{org,L}(i, j))^2), U_{org,L}(i, j) = (1/64) × Σ_{x=0}^{7} Σ_{y=0}^{7} L_org(i+x, j+y), σ_{dis,L}(i, j) = √((1/64) × Σ_{x=0}^{7} Σ_{y=0}^{7} (L_dis(i+x, j+y) - U_{dis,L}(i, j))^2), U_{dis,L}(i, j) = (1/64) × Σ_{x=0}^{7} Σ_{y=0}^{7} L_dis(i+x, j+y), σ_{org,dis,L}(i, j) = (1/64) × Σ_{x=0}^{7} Σ_{y=0}^{7} ((L_org(i+x, j+y) - U_{org,L}(i, j)) × (L_dis(i+x, j+y) - U_{dis,L}(i, j))), L_org(i+x, j+y) denotes the pixel value of the pixel at coordinate position (i+x, j+y) in L_org, L_dis(i+x, j+y) denotes the pixel value of the pixel at coordinate position (i+x, j+y) in L_dis, C_1 denotes a constant, and here 0 ≤ i ≤ (W - 8), 0 ≤ j ≤ (H - 8);
divide each of the 2 images R_org and R_dis into (W - 7) × (H - 7) overlapping blocks of size 8 × 8, then calculate the structure amplitude distortion map of every pair of identically positioned overlapping blocks in the 2 images R_org and R_dis, and denote the coefficient matrix of this structure amplitude distortion map as B_R; the coefficient value at coordinate position (i, j) in B_R is denoted B_R(i, j), B_R(i, j) = (2 × σ_{org,dis,R}(i, j) + C_1) / ((σ_{org,R}(i, j))^2 + (σ_{dis,R}(i, j))^2 + C_1), where B_R(i, j) also represents the structure amplitude distortion value between the 8 × 8 overlapping block whose upper-left corner coordinate position in R_org is (i, j) and the 8 × 8 overlapping block whose upper-left corner coordinate position in R_dis is (i, j), σ_{org,R}(i, j) = √((1/64) × Σ_{x=0}^{7} Σ_{y=0}^{7} (R_org(i+x, j+y) - U_{org,R}(i, j))^2), U_{org,R}(i, j) = (1/64) × Σ_{x=0}^{7} Σ_{y=0}^{7} R_org(i+x, j+y), σ_{dis,R}(i, j) = √((1/64) × Σ_{x=0}^{7} Σ_{y=0}^{7} (R_dis(i+x, j+y) - U_{dis,R}(i, j))^2), U_{dis,R}(i, j) = (1/64) × Σ_{x=0}^{7} Σ_{y=0}^{7} R_dis(i+x, j+y), σ_{org,dis,R}(i, j) = (1/64) × Σ_{x=0}^{7} Σ_{y=0}^{7} ((R_org(i+x, j+y) - U_{org,R}(i, j)) × (R_dis(i+x, j+y) - U_{dis,R}(i, j))), R_org(i+x, j+y) denotes the pixel value of the pixel at coordinate position (i+x, j+y) in R_org, R_dis(i+x, j+y) denotes the pixel value of the pixel at coordinate position (i+x, j+y) in R_dis, C_1 denotes a constant, and here 0 ≤ i ≤ (W - 8), 0 ≤ j ≤ (H - 8);
4. apply horizontal-direction and vertical-direction Sobel operator processing to each of the 2 images L_org and L_dis, obtaining the horizontal-direction gradient matrix map and the vertical-direction gradient matrix map corresponding to each of the 2 images L_org and L_dis; the coefficient matrix of the horizontal-direction gradient matrix map obtained after applying horizontal-direction Sobel operator processing to L_org is denoted I_{h,org,L}, and the coefficient value at coordinate position (i, j) in I_{h,org,L} is denoted I_{h,org,L}(i, j), I_{h,org,L}(i, j) = L_org(i+2, j) + 2L_org(i+2, j+1) + L_org(i+2, j+2) - L_org(i, j) - 2L_org(i, j+1) - L_org(i, j+2); the coefficient matrix of the vertical-direction gradient matrix map obtained after applying vertical-direction Sobel operator processing to L_org is denoted I_{v,org,L}, and the coefficient value at coordinate position (i, j) in I_{v,org,L} is denoted I_{v,org,L}(i, j), I_{v,org,L}(i, j) = L_org(i, j+2) + 2L_org(i+1, j+2) + L_org(i+2, j+2) - L_org(i, j) - 2L_org(i+1, j) - L_org(i+2, j); the coefficient matrix of the horizontal-direction gradient matrix map obtained after applying horizontal-direction Sobel operator processing to L_dis is denoted I_{h,dis,L}, and the coefficient value at coordinate position (i, j) in I_{h,dis,L} is denoted I_{h,dis,L}(i, j), I_{h,dis,L}(i, j) = L_dis(i+2, j) + 2L_dis(i+2, j+1) + L_dis(i+2, j+2) - L_dis(i, j) - 2L_dis(i, j+1) - L_dis(i, j+2); the coefficient matrix of the vertical-direction gradient matrix map obtained after applying vertical-direction Sobel operator processing to L_dis is denoted I_{v,dis,L}, and the coefficient value at coordinate position (i, j) in I_{v,dis,L} is denoted I_{v,dis,L}(i, j), I_{v,dis,L}(i, j) = L_dis(i, j+2) + 2L_dis(i+1, j+2) + L_dis(i+2, j+2) - L_dis(i, j) - 2L_dis(i+1, j) - L_dis(i+2, j); where L_org(i+2, j), L_org(i+2, j+1), L_org(i+2, j+2), L_org(i, j), L_org(i, j+1), L_org(i, j+2), L_org(i+1, j+2) and L_org(i+1, j) respectively denote the pixel values of the pixels at coordinate positions (i+2, j), (i+2, j+1), (i+2, j+2), (i, j), (i, j+1), (i, j+2), (i+1, j+2) and (i+1, j) in L_org, and L_dis(i+2, j), L_dis(i+2, j+1), L_dis(i+2, j+2), L_dis(i, j), L_dis(i, j+1), L_dis(i, j+2), L_dis(i+1, j+2) and L_dis(i+1, j) respectively denote the pixel values of the pixels at the corresponding coordinate positions in L_dis;
apply horizontal-direction and vertical-direction Sobel operator processing to each of the 2 images R_org and R_dis, obtaining the horizontal-direction gradient matrix map and the vertical-direction gradient matrix map corresponding to each of the 2 images R_org and R_dis; the coefficient matrix of the horizontal-direction gradient matrix map obtained after applying horizontal-direction Sobel operator processing to R_org is denoted I_{h,org,R}, and the coefficient value at coordinate position (i, j) in I_{h,org,R} is denoted I_{h,org,R}(i, j), I_{h,org,R}(i, j) = R_org(i+2, j) + 2R_org(i+2, j+1) + R_org(i+2, j+2) - R_org(i, j) - 2R_org(i, j+1) - R_org(i, j+2); the coefficient matrix of the vertical-direction gradient matrix map obtained after applying vertical-direction Sobel operator processing to R_org is denoted I_{v,org,R}, and the coefficient value at coordinate position (i, j) in I_{v,org,R} is denoted I_{v,org,R}(i, j), I_{v,org,R}(i, j) = R_org(i, j+2) + 2R_org(i+1, j+2) + R_org(i+2, j+2) - R_org(i, j) - 2R_org(i+1, j) - R_org(i+2, j); the coefficient matrix of the horizontal-direction gradient matrix map obtained after applying horizontal-direction Sobel operator processing to R_dis is denoted I_{h,dis,R}, and the coefficient value at coordinate position (i, j) in I_{h,dis,R} is denoted I_{h,dis,R}(i, j), I_{h,dis,R}(i, j) = R_dis(i+2, j) + 2R_dis(i+2, j+1) + R_dis(i+2, j+2) - R_dis(i, j) - 2R_dis(i, j+1) - R_dis(i, j+2); the coefficient matrix of the vertical-direction gradient matrix map obtained after applying vertical-direction Sobel operator processing to R_dis is denoted I_{v,dis,R}, and the coefficient value at coordinate position (i, j) in I_{v,dis,R} is denoted I_{v,dis,R}(i, j), I_{v,dis,R}(i, j) = R_dis(i, j+2) + 2R_dis(i+1, j+2) + R_dis(i+2, j+2) - R_dis(i, j) - 2R_dis(i+1, j) - R_dis(i+2, j); where R_org(i+2, j), R_org(i+2, j+1), R_org(i+2, j+2), R_org(i, j), R_org(i, j+1), R_org(i, j+2), R_org(i+1, j+2) and R_org(i+1, j) respectively denote the pixel values of the pixels at coordinate positions (i+2, j), (i+2, j+1), (i+2, j+2), (i, j), (i, j+1), (i, j+2), (i+1, j+2) and (i+1, j) in R_org, and R_dis(i+2, j), R_dis(i+2, j+1), R_dis(i+2, j+2), R_dis(i, j), R_dis(i, j+1), R_dis(i, j+2), R_dis(i+1, j+2) and R_dis(i+1, j) respectively denote the pixel values of the pixels at the corresponding coordinate positions in R_dis;
5. calculate the structure direction distortion map of every pair of identically positioned overlapping blocks in the 2 images L_org and L_dis, and denote the coefficient matrix of this structure direction distortion map as E_L; the coefficient value at coordinate position (i, j) in E_L is denoted E_L(i, j), E_L(i, j) = (I_{h,org,L}(i, j) × I_{h,dis,L}(i, j) + I_{v,org,L}(i, j) × I_{v,dis,L}(i, j) + C_2) / (√((I_{h,org,L}(i, j))^2 + (I_{v,org,L}(i, j))^2) × √((I_{h,dis,L}(i, j))^2 + (I_{v,dis,L}(i, j))^2) + C_2), where C_2 denotes a constant;
calculate the structure direction distortion map of every pair of identically positioned overlapping blocks in the 2 images R_org and R_dis, and denote the coefficient matrix of this structure direction distortion map as E_R; the coefficient value at coordinate position (i, j) in E_R is denoted E_R(i, j), E_R(i, j) = (I_{h,org,R}(i, j) × I_{h,dis,R}(i, j) + I_{v,org,R}(i, j) × I_{v,dis,R}(i, j) + C_2) / (√((I_{h,org,R}(i, j))^2 + (I_{v,org,R}(i, j))^2) × √((I_{h,dis,R}(i, j))^2 + (I_{v,dis,R}(i, j))^2) + C_2);
6. calculate the structure distortion evaluation value of L_org and L_dis, denoted Q_L, Q_L = ω_1 × Q_{m,L} + ω_2 × Q_{nm,L}, where Q_{m,L} = (1/N_{L,m}) × Σ_{i=0}^{W-8} Σ_{j=0}^{H-8} (0.5 × (B_L(i, j) + E_L(i, j)) × A_L(i, j)), N_{L,m} = Σ_{i=0}^{W-8} Σ_{j=0}^{H-8} A_L(i, j), Q_{nm,L} = (1/N_{L,nm}) × Σ_{i=0}^{W-8} Σ_{j=0}^{H-8} (0.5 × (B_L(i, j) + E_L(i, j)) × (1 - A_L(i, j))), N_{L,nm} = Σ_{i=0}^{W-8} Σ_{j=0}^{H-8} (1 - A_L(i, j)), ω_1 denotes the weight of the sensitive region in L_org and L_dis, and ω_2 denotes the weight of the non-sensitive region in L_org and L_dis;
calculate the structure distortion evaluation value of R_org and R_dis, denoted Q_R, Q_R = ω'_1 × Q_{m,R} + ω'_2 × Q_{nm,R}, where Q_{m,R} = (1/N_{R,m}) × Σ_{i=0}^{W-8} Σ_{j=0}^{H-8} (0.5 × (B_R(i, j) + E_R(i, j)) × A_R(i, j)), N_{R,m} = Σ_{i=0}^{W-8} Σ_{j=0}^{H-8} A_R(i, j), Q_{nm,R} = (1/N_{R,nm}) × Σ_{i=0}^{W-8} Σ_{j=0}^{H-8} (0.5 × (B_R(i, j) + E_R(i, j)) × (1 - A_R(i, j))), N_{R,nm} = Σ_{i=0}^{W-8} Σ_{j=0}^{H-8} (1 - A_R(i, j)), ω'_1 denotes the weight of the sensitive region in R_org and R_dis, and ω'_2 denotes the weight of the non-sensitive region in R_org and R_dis;
7. according to Q_L and Q_R, calculate the spatial frequency similarity measure of the distorted stereo image S_dis to be evaluated relative to the original undistorted stereo image S_org, denoted Q_F, Q_F = β_1 × Q_L + (1 - β_1) × Q_R, where β_1 denotes the weight of Q_L;
8. calculate the absolute difference image of L_org and R_org, represented in matrix form as |L_org - R_org|, and calculate the absolute difference image of L_dis and R_dis, represented in matrix form as |L_dis - R_dis|, where "| |" denotes the absolute value operation;
9. divide each of the 2 images |L_org - R_org| and |L_dis - R_dis| into (W_LR × H_LR)/64 non-overlapping blocks of size 8 × 8, then apply singular value decomposition to |L_org - R_org| and |L_dis - R_dis| block by block, obtaining the singular value map of |L_org - R_org| formed by the singular value matrices of all its blocks and the singular value map of |L_dis - R_dis| formed by the singular value matrices of all its blocks; the coefficient matrix of the singular value map obtained after applying singular value decomposition to |L_org - R_org| is denoted G_org, and the singular value at coordinate position (p, q) in the singular value matrix of the n-th block in G_org is denoted G_org^n(p, q); the coefficient matrix of the singular value map obtained after applying singular value decomposition to |L_dis - R_dis| is denoted G_dis, and the singular value at coordinate position (p, q) in the singular value matrix of the n-th block in G_dis is denoted G_dis^n(p, q); here W_LR denotes the width of |L_org - R_org| and |L_dis - R_dis|, H_LR denotes the height of |L_org - R_org| and |L_dis - R_dis|, 0 ≤ p ≤ 7, 0 ≤ q ≤ 7;
10. calculate the singular value deviation evaluation value between the singular value map corresponding to |L_org - R_org| and the singular value map corresponding to |L_dis - R_dis|, denoted K, K = (64/(W_LR × H_LR)) × Σ_{n=0}^{W_LR×H_LR/64 - 1} [Σ_{p=0}^{7} (G_org^n(p, p) × |G_org^n(p, p) - G_dis^n(p, p)|) / Σ_{p=0}^{7} G_org^n(p, p)], where G_org^n(p, p) denotes the singular value at coordinate position (p, p) in the singular value matrix of the n-th block in G_org, and G_dis^n(p, p) denotes the singular value at coordinate position (p, p) in the singular value matrix of the n-th block in G_dis;
11. apply singular value decomposition to |L_org - R_org| and |L_dis - R_dis| respectively, obtaining for each of them 2 orthogonal matrices and 1 singular value matrix; the 2 orthogonal matrices obtained after applying singular value decomposition to |L_org - R_org| are denoted χ_org and V_org, and the singular value matrix obtained after applying singular value decomposition to |L_org - R_org| is denoted O_org; the 2 orthogonal matrices obtained after applying singular value decomposition to |L_dis - R_dis| are denoted χ_dis and V_dis, and the singular value matrix obtained after applying singular value decomposition to |L_dis - R_dis| is denoted O_dis;
12. calculate the residual matrix maps of the 2 images |L_org - R_org| and |L_dis - R_dis| after their singular values are deprived; the residual matrix map of |L_org - R_org| after the singular values are deprived is denoted X_org, X_org = χ_org × Λ × V_org, and the residual matrix map of |L_dis - R_dis| after the singular values are deprived is denoted X_dis, X_dis = χ_dis × Λ × V_dis, where Λ denotes the unit matrix and the size of Λ is the same as the size of O_org and O_dis;
13. calculate the mean deviation ratio of X_org and X_dis, where x denotes the abscissa of the pixels in X_org and X_dis, and y denotes the ordinate of the pixels in X_org and X_dis;
14. calculate the stereoscopic perception evaluation measure of the distorted stereo image S_dis to be evaluated relative to the original undistorted stereo image S_org, denoted Q_S, where τ denotes a constant used to regulate the importance of K and of the mean deviation ratio in Q_S;
15. according to Q_F and Q_S, calculate the image quality evaluation score of the distorted stereo image S_dis to be evaluated, denoted Q, Q = Q_F × (Q_S)^ρ, where ρ denotes a weighting coefficient.
2. The stereoscopic image objective quality evaluation method based on structural distortion according to claim 1, characterized in that the acquisition process of the coefficient matrix A_L of the sensitive-region matrix maps corresponding to L_org and L_dis in step 2. is:
2.-a1, apply horizontal-direction and vertical-direction Sobel operator processing to L_org to obtain the horizontal-direction gradient image and the vertical-direction gradient image of L_org, denoted Z_{h,l1} and Z_{v,l1} respectively, then calculate the gradient magnitude map of L_org, denoted Z_{l1}, where Z_{l1}(x, y) denotes the gradient magnitude of the pixel at coordinate position (x, y) in Z_{l1}, Z_{h,l1}(x, y) denotes the horizontal-direction gradient value of the pixel at coordinate position (x, y) in Z_{h,l1}, Z_{v,l1}(x, y) denotes the vertical-direction gradient value of the pixel at coordinate position (x, y) in Z_{v,l1}, 1 ≤ x ≤ W', 1 ≤ y ≤ H', and here W' denotes the width of Z_{l1} and H' denotes the height of Z_{l1};
2.-a2, apply horizontal-direction and vertical-direction Sobel operator processing to L_dis to obtain the horizontal-direction gradient image and the vertical-direction gradient image of L_dis, denoted Z_{h,l2} and Z_{v,l2} respectively, then calculate the gradient magnitude map of L_dis, denoted Z_{l2}, where Z_{l2}(x, y) denotes the gradient magnitude of the pixel at coordinate position (x, y) in Z_{l2}, Z_{h,l2}(x, y) denotes the horizontal-direction gradient value of the pixel at coordinate position (x, y) in Z_{h,l2}, Z_{v,l2}(x, y) denotes the vertical-direction gradient value of the pixel at coordinate position (x, y) in Z_{v,l2}, 1 ≤ x ≤ W', 1 ≤ y ≤ H', and here W' denotes the width of Z_{l2} and H' denotes the height of Z_{l2};
2.-a3, calculate the threshold T required for the region division, T = α × (1/(W' × H')) × (Σ_{x=0}^{W'} Σ_{y=0}^{H'} Z_{l1}(x, y) + Σ_{x=0}^{W'} Σ_{y=0}^{H'} Z_{l2}(x, y)), where α is a constant, Z_{l1}(x, y) denotes the gradient magnitude of the pixel at coordinate position (x, y) in Z_{l1}, and Z_{l2}(x, y) denotes the gradient magnitude of the pixel at coordinate position (x, y) in Z_{l2};
2.-a4, denote the gradient magnitude of the pixel at coordinate position (i, j) in Z_{l1} as Z_{l1}(i, j) and the gradient magnitude of the pixel at coordinate position (i, j) in Z_{l2} as Z_{l2}(i, j), and judge whether Z_{l1}(i, j) > T or Z_{l2}(i, j) > T holds; if it holds, determine that the pixel at coordinate position (i, j) in L_org and L_dis belongs to the sensitive region and let A_L(i, j) = 1; otherwise, determine that the pixel at coordinate position (i, j) in L_org and L_dis belongs to the non-sensitive region and let A_L(i, j) = 0, where 0 ≤ i ≤ (W - 8), 0 ≤ j ≤ (H - 8);
the acquisition process of the coefficient matrix A_R of the sensitive-region matrix maps corresponding to R_org and R_dis in step 2. is:
2.-b1, apply horizontal-direction and vertical-direction Sobel operator processing to R_org to obtain the horizontal-direction gradient image and the vertical-direction gradient image of R_org, denoted Z_{h,r1} and Z_{v,r1} respectively, then calculate the gradient magnitude map of R_org, denoted Z_{r1}, where Z_{r1}(x, y) denotes the gradient magnitude of the pixel at coordinate position (x, y) in Z_{r1}, Z_{h,r1}(x, y) denotes the horizontal-direction gradient value of the pixel at coordinate position (x, y) in Z_{h,r1}, Z_{v,r1}(x, y) denotes the vertical-direction gradient value of the pixel at coordinate position (x, y) in Z_{v,r1}, 1 ≤ x ≤ W', 1 ≤ y ≤ H', and here W' denotes the width of Z_{r1} and H' denotes the height of Z_{r1};
2.-b2, apply horizontal-direction and vertical-direction Sobel operator processing to R_dis to obtain the horizontal-direction gradient image and the vertical-direction gradient image of R_dis, denoted Z_{h,r2} and Z_{v,r2} respectively, then calculate the gradient magnitude map of R_dis, denoted Z_{r2}, where Z_{r2}(x, y) denotes the gradient magnitude of the pixel at coordinate position (x, y) in Z_{r2}, Z_{h,r2}(x, y) denotes the horizontal-direction gradient value of the pixel at coordinate position (x, y) in Z_{h,r2}, Z_{v,r2}(x, y) denotes the vertical-direction gradient value of the pixel at coordinate position (x, y) in Z_{v,r2}, 1 ≤ x ≤ W', 1 ≤ y ≤ H', and here W' denotes the width of Z_{r2} and H' denotes the height of Z_{r2};
2.-b3, calculate the threshold T' required for the region division, T' = α × (1/(W' × H')) × (Σ_{x=0}^{W'} Σ_{y=0}^{H'} Z_{r1}(x, y) + Σ_{x=0}^{W'} Σ_{y=0}^{H'} Z_{r2}(x, y)), where α is a constant, Z_{r1}(x, y) denotes the gradient magnitude of the pixel at coordinate position (x, y) in Z_{r1}, and Z_{r2}(x, y) denotes the gradient magnitude of the pixel at coordinate position (x, y) in Z_{r2};
2.-b4, denote the gradient magnitude of the pixel at coordinate position (i, j) in Z_{r1} as Z_{r1}(i, j) and the gradient magnitude of the pixel at coordinate position (i, j) in Z_{r2} as Z_{r2}(i, j), and judge whether Z_{r1}(i, j) > T' or Z_{r2}(i, j) > T' holds; if it holds, determine that the pixel at coordinate position (i, j) in R_org and R_dis belongs to the sensitive region and let A_R(i, j) = 1; otherwise, determine that the pixel at coordinate position (i, j) in R_org and R_dis belongs to the non-sensitive region and let A_R(i, j) = 0, where 0 ≤ i ≤ (W - 8), 0 ≤ j ≤ (H - 8).
3. The stereoscopic image objective quality evaluation method based on structural distortion according to claim 1 or 2, characterized in that the acquisition process of β_1 in step 7. is:
7.-1, adopt n undistorted stereo images to establish a distorted stereo image set of these images under different distortion levels of different distortion types, this distorted stereo image set comprising several distorted stereo images, where n ≥ 1;
7.-2, use a subjective quality assessment method to obtain the difference mean opinion score of each distorted stereo image in the distorted stereo image set, denoted DMOS, DMOS = 100 - MOS, where MOS denotes the mean opinion score and DMOS ∈ [0, 100];
7.-3, according to the operating process of steps 1. to 6., calculate, for the left viewpoint image of each distorted stereo image in the distorted stereo image set, the sensitive-region evaluation value Q_{m,L} and the non-sensitive-region evaluation value Q_{nm,L} relative to the left viewpoint image of the corresponding undistorted stereo image;
7.-4, adopt a mathematical fitting method to fit the difference mean opinion scores DMOS of the distorted stereo images in the distorted stereo image set against the corresponding Q_{m,L} and Q_{nm,L}, thereby obtaining the value of β_1.
CN201210145034.0A 2012-05-11 2012-05-11 Stereoscopic image objective quality evaluation method on basis of structural distortion Expired - Fee Related CN102708568B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210145034.0A CN102708568B (en) 2012-05-11 2012-05-11 Stereoscopic image objective quality evaluation method on basis of structural distortion


Publications (2)

Publication Number Publication Date
CN102708568A CN102708568A (en) 2012-10-03
CN102708568B true CN102708568B (en) 2014-11-05

Family

ID=46901288

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210145034.0A Expired - Fee Related CN102708568B (en) 2012-05-11 2012-05-11 Stereoscopic image objective quality evaluation method on basis of structural distortion

Country Status (1)

Country Link
CN (1) CN102708568B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103108209B (en) * 2012-12-28 2015-03-11 宁波大学 Stereo image objective quality evaluation method based on integration of visual threshold value and passage
CN104036502B (en) * 2014-06-03 2016-08-24 宁波大学 A kind of without with reference to fuzzy distortion stereo image quality evaluation methodology
CN104574363B (en) * 2014-12-12 2017-09-29 南京邮电大学 A kind of full reference image quality appraisement method for considering gradient direction difference
CN108074241B (en) * 2018-01-16 2021-10-22 深圳大学 Quality scoring method and device for target image, terminal and storage medium
CN110232680B (en) * 2019-05-30 2021-04-27 广智微芯(扬州)有限公司 Image ambiguity evaluation method and device
CN113920065B (en) * 2021-09-18 2023-04-28 天津大学 Imaging quality evaluation method for visual detection system of industrial site

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101833766A (en) * 2010-05-11 2010-09-15 天津大学 Stereo image objective quality evaluation algorithm based on GSSIM
CN101872479A (en) * 2010-06-09 2010-10-27 宁波大学 Three-dimensional image objective quality evaluation method
CN102142145A (en) * 2011-03-22 2011-08-03 宁波大学 Image quality objective evaluation method based on human eye visual characteristics

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7512286B2 (en) * 2003-10-27 2009-03-31 Hewlett-Packard Development Company, L.P. Assessing image quality
WO2008115405A2 (en) * 2007-03-16 2008-09-25 Sti Medicals Systems, Llc A method of image quality assessment to procuce standardized imaging data
JP4895204B2 (en) * 2007-03-22 2012-03-14 富士フイルム株式会社 Image component separation device, method, and program, and normal image generation device, method, and program


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Zhou Junming et al., "Objective quality evaluation model for stereoscopic images using singular value decomposition," Journal of Computer-Aided Design & Computer Graphics, Vol. 23, No. 5, May 2011, pp. 870-877 *
Shen Lili et al., "Image quality assessment method based on three-dimensional features and structural similarity," Journal of Optoelectronics·Laser, Vol. 21, No. 11, November 2010, pp. 1713-1719 *

Also Published As

Publication number Publication date
CN102708568A (en) 2012-10-03

Similar Documents

Publication Publication Date Title
CN101872479B (en) Three-dimensional image objective quality evaluation method
CN102663747B (en) Stereo image objectivity quality evaluation method based on visual perception
CN102708568B (en) Stereoscopic image objective quality evaluation method on basis of structural distortion
CN103517065B (en) Method for objectively evaluating quality of degraded reference three-dimensional picture
CN103581661B (en) Method for evaluating visual comfort degree of three-dimensional image
CN104394403B (en) A kind of stereoscopic video quality method for objectively evaluating towards compression artefacts
CN104811691B (en) A kind of stereoscopic video quality method for objectively evaluating based on wavelet transformation
CN101976444B (en) Pixel type based objective assessment method of image quality by utilizing structural similarity
CN102547368B (en) Objective evaluation method for quality of stereo images
CN101950422B (en) Singular value decomposition(SVD)-based image quality evaluation method
CN104036501A (en) Three-dimensional image quality objective evaluation method based on sparse representation
CN104202594B (en) A kind of method for evaluating video quality based on 3 D wavelet transformation
CN102209257A (en) Stereo image quality objective evaluation method
CN103281554B (en) Video objective quality evaluation method based on human eye visual characteristics
CN102521825B (en) Three-dimensional image quality objective evaluation method based on zero watermark
CN105407349A (en) No-reference objective three-dimensional image quality evaluation method based on binocular visual perception
CN103400378A (en) Method for objectively evaluating quality of three-dimensional image based on visual characteristics of human eyes
CN104036502A (en) No-reference fuzzy distorted stereo image quality evaluation method
CN103413298A (en) Three-dimensional image objective evaluation method based on visual characteristics
CN103475897A (en) Adaptive image quality evaluation method based on distortion type judgment
CN103136748A (en) Stereo-image quality objective evaluation method based on characteristic image
CN102737380B (en) Stereo image quality objective evaluation method based on gradient structure tensor
CN102567990B (en) Stereo image objective quality estimation method
CN106412571A (en) Video quality evaluation method based on gradient similarity standard deviation
CN103108209B (en) Stereo image objective quality evaluation method based on integration of visual threshold value and passage

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20141105

Termination date: 20210511

CF01 Termination of patent right due to non-payment of annual fee