CN102708568A - Stereoscopic image objective quality evaluation method on basis of structural distortion - Google Patents
- Publication number: CN102708568A
- Authority: CN (China)
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion)
- Classification: Image Processing (AREA)
Abstract
The invention discloses a stereoscopic image objective quality evaluation method based on structural distortion, which comprises the following steps. First, the left and right viewpoint images of an undistorted stereoscopic image and of a distorted stereoscopic image are each divided into a human-eye-sensitive region and a corresponding non-sensitive region, and evaluation indexes for the two regions are obtained from two aspects: structural amplitude distortion and structural direction distortion. Second, quality evaluation values of the left and right viewpoint images are computed. Third, the singular value difference and the mean deviation ratio of the residual images with their singular values removed are used to evaluate the distortion of the depth perception of the stereoscopic image, giving an evaluation value of stereoscopic perceptual quality. Finally, the left and right viewpoint image quality is combined with the stereoscopic perceptual quality to obtain the final quality evaluation result of the stereoscopic image. Instead of simulating each component of the human visual system, the method fully exploits the structural information of the stereoscopic image, so the consistency between the objective evaluation result and subjective perception is effectively improved.
Description
Technical Field
The invention relates to image quality evaluation technology, and in particular to a stereoscopic image objective quality evaluation method based on structural distortion.
Background
Quality evaluation of stereoscopic images occupies a very important position in stereoscopic image/video systems: it can judge the quality of processing algorithms in such systems, and it can also guide the optimization and design of algorithms so as to improve the efficiency of the processing system. Stereoscopic image quality evaluation methods fall into two classes: subjective quality evaluation and objective quality evaluation. In subjective evaluation, multiple observers score the stereoscopic image under test and a weighted average is taken as the comprehensive evaluation; the result accords with the characteristics of the human visual system, but the approach is limited by many factors, such as inconvenient calculation, low speed and high cost, so it is difficult to embed in a system and cannot be widely adopted in practical applications. Objective quality evaluation is simple to operate, low in cost, easy to implement and able to optimize algorithms in real time, and has therefore become the focus of stereoscopic image quality evaluation research.
Currently, mainstream objective quality evaluation models for stereoscopic images comprise two parts: left/right viewpoint image quality evaluation and depth perception quality evaluation. However, since our understanding of the human visual system is still limited, it is difficult to accurately simulate its various components, and thus the consistency between these models and subjective perception is not good.
Disclosure of Invention
The invention aims to provide a stereoscopic image objective quality evaluation method based on structural distortion, which can effectively improve the consistency between the objective evaluation result and subjective perception.
The technical scheme adopted by the invention to solve the technical problem is a stereoscopic image objective quality evaluation method based on structural distortion, comprising the following steps:
① Let S_org be the original undistorted stereoscopic image and S_dis the distorted stereoscopic image to be evaluated. Denote the left viewpoint image of S_org as L_org and its right viewpoint image as R_org; denote the left viewpoint image of S_dis as L_dis and its right viewpoint image as R_dis;
② Divide each of the 4 images L_org, L_dis, R_org and R_dis into regions to obtain its corresponding sensitive-region matrix map. Denote the coefficient matrix of the sensitive-region matrix map obtained by dividing L_org and L_dis as A_L, and the coefficient value at coordinate position (i, j) in A_L as A_L(i, j); denote the coefficient matrix of the sensitive-region matrix map obtained by dividing R_org and R_dis as A_R, and the coefficient value at coordinate position (i, j) in A_R as A_R(i, j). Here 0 ≤ i ≤ (W − 8) and 0 ≤ j ≤ (H − 8), where W and H denote the width and height of L_org, L_dis, R_org and R_dis;
③ Divide each of L_org and L_dis into (W − 7) × (H − 7) overlapping blocks of size 8 × 8, then compute the structural amplitude distortion map between co-located blocks of L_org and L_dis. Denote the coefficient matrix of the structural amplitude distortion map as B_L, and the coefficient value at coordinate position (i, j) as B_L(i, j), where B_L(i, j) is the structural amplitude distortion value between the 8 × 8 block of L_org whose upper-left corner is at (i, j) and the 8 × 8 block of L_dis whose upper-left corner is at (i, j); L_org(i + x, j + y) and L_dis(i + x, j + y) denote the pixel values at coordinate position (i + x, j + y) in L_org and L_dis, respectively; C_1 is a constant; 0 ≤ i ≤ (W − 8), 0 ≤ j ≤ (H − 8);
Divide each of R_org and R_dis into (W − 7) × (H − 7) overlapping blocks of size 8 × 8, then compute the structural amplitude distortion map between co-located blocks of R_org and R_dis. Denote the coefficient matrix of the structural amplitude distortion map as B_R, and the coefficient value at coordinate position (i, j) as B_R(i, j), where B_R(i, j) is the structural amplitude distortion value between the 8 × 8 block of R_org whose upper-left corner is at (i, j) and the 8 × 8 block of R_dis whose upper-left corner is at (i, j); R_org(i + x, j + y) and R_dis(i + x, j + y) denote the pixel values at coordinate position (i + x, j + y) in R_org and R_dis, respectively; C_1 is a constant; 0 ≤ i ≤ (W − 8), 0 ≤ j ≤ (H − 8);
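The patent's exact formula for the blockwise structural amplitude distortion is not reproduced in the text above (the equations were figures). As a minimal sketch, the code below assumes a hypothetical SSIM-style energy term over co-located 8 × 8 overlapping blocks with a stabilizing constant C1; the function name and the exact form of the ratio are assumptions, not the patent's formula.

```python
import numpy as np

def amplitude_distortion_map(org, dis, C1=0.01, block=8):
    """Hypothetical structural amplitude distortion between co-located
    overlapping 8x8 blocks; assumes an SSIM-style energy ratio with C1."""
    H, W = org.shape
    B = np.empty((H - block + 1, W - block + 1))
    for i in range(H - block + 1):
        for j in range(W - block + 1):
            a = org[i:i + block, j:j + block].astype(np.float64)
            b = dis[i:i + block, j:j + block].astype(np.float64)
            # 2*cross-energy over the sum of energies -> 1 for identical blocks
            B[i, j] = (2.0 * (a * b).sum() + C1) / ((a * a).sum() + (b * b).sum() + C1)
    return B
```

With this assumed form, B_L(i, j) equals 1 wherever the distorted block is identical to the original, and decreases toward 0 as the block energies diverge.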
④ Apply the horizontal-direction Sobel operator and the vertical-direction Sobel operator to each of L_org and L_dis to obtain their horizontal and vertical gradient matrix maps. Denote the coefficient matrix of the horizontal gradient matrix map of L_org as I_h,org,L, with coefficient value I_h,org,L(i, j) at coordinate position (i, j); the coefficient matrix of the vertical gradient matrix map of L_org as I_v,org,L, with coefficient value I_v,org,L(i, j); the coefficient matrix of the horizontal gradient matrix map of L_dis as I_h,dis,L, with coefficient value I_h,dis,L(i, j); and the coefficient matrix of the vertical gradient matrix map of L_dis as I_v,dis,L, with coefficient value I_v,dis,L(i, j). Here L_org(i+2, j), L_org(i+2, j+1), L_org(i+2, j+2), L_org(i, j), L_org(i, j+1), L_org(i, j+2), L_org(i+1, j+2) and L_org(i+1, j) denote the pixel values of L_org at the corresponding coordinate positions, and L_dis(i+2, j), L_dis(i+2, j+1), L_dis(i+2, j+2), L_dis(i, j), L_dis(i, j+1), L_dis(i, j+2), L_dis(i+1, j+2) and L_dis(i+1, j) denote the pixel values of L_dis at the corresponding coordinate positions;
Apply the horizontal-direction Sobel operator and the vertical-direction Sobel operator to each of R_org and R_dis to obtain their horizontal and vertical gradient matrix maps. Denote the coefficient matrix of the horizontal gradient matrix map of R_org as I_h,org,R, with coefficient value I_h,org,R(i, j) at coordinate position (i, j); the coefficient matrix of the vertical gradient matrix map of R_org as I_v,org,R, with coefficient value I_v,org,R(i, j); the coefficient matrix of the horizontal gradient matrix map of R_dis as I_h,dis,R, with coefficient value I_h,dis,R(i, j); and the coefficient matrix of the vertical gradient matrix map of R_dis as I_v,dis,R, with coefficient value I_v,dis,R(i, j). Here R_org(i+2, j), R_org(i+2, j+1), R_org(i+2, j+2), R_org(i, j), R_org(i, j+1), R_org(i, j+2), R_org(i+1, j+2) and R_org(i+1, j) denote the pixel values of R_org at the corresponding coordinate positions, and R_dis(i+2, j), R_dis(i+2, j+1), R_dis(i+2, j+2), R_dis(i, j), R_dis(i, j+1), R_dis(i, j+2), R_dis(i+1, j+2) and R_dis(i+1, j) denote the pixel values of R_dis at the corresponding coordinate positions;
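The pixel pattern listed in step ④ (row i+2 against row i, column j+2 against column j, with the middle samples doubled) matches the standard 3 × 3 Sobel kernels. A minimal sketch of the gradient maps, using explicit valid-mode correlation; the function name is an assumption:

```python
import numpy as np

# Standard 3x3 Sobel kernels: bottom row minus top row (horizontal-direction
# gradient) and right column minus left column (vertical-direction gradient).
SOBEL_H = np.array([[-1, -2, -1],
                    [ 0,  0,  0],
                    [ 1,  2,  1]], dtype=np.float64)
SOBEL_V = SOBEL_H.T

def sobel_maps(img):
    """Return the horizontal and vertical gradient maps of img
    (valid-mode 3x3 correlation, output is 2 rows/cols smaller)."""
    img = img.astype(np.float64)
    H, W = img.shape
    gh = np.zeros((H - 2, W - 2))
    gv = np.zeros((H - 2, W - 2))
    for i in range(H - 2):
        for j in range(W - 2):
            patch = img[i:i + 3, j:j + 3]
            gh[i, j] = (patch * SOBEL_H).sum()
            gv[i, j] = (patch * SOBEL_V).sum()
    return gh, gv
```

On a flat image both maps are zero; on a left-to-right ramp only the vertical-direction map responds.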
⑤ Compute the structural direction distortion map between co-located blocks of L_org and L_dis. Denote the coefficient matrix of the structural direction distortion map as E_L, and the coefficient value at coordinate position (i, j) as E_L(i, j), where C_2 is a constant;
Compute the structural direction distortion map between co-located blocks of R_org and R_dis. Denote the coefficient matrix of the structural direction distortion map as E_R, and the coefficient value at coordinate position (i, j) as E_R(i, j);
⑥ Compute the structural distortion evaluation value of L_org and L_dis, denoted Q_L: Q_L = ω_1 × Q_m,L + ω_2 × Q_nm,L, where ω_1 is the weight of the sensitive region in L_org and L_dis and ω_2 is the weight of the non-sensitive region;
Compute the structural distortion evaluation value of R_org and R_dis, denoted Q_R: Q_R = ω′_1 × Q_m,R + ω′_2 × Q_nm,R, where ω′_1 is the weight of the sensitive region in R_org and R_dis and ω′_2 is the weight of the non-sensitive region;
⑦ From Q_L and Q_R, compute the spatial frequency similarity measure of the distorted stereoscopic image S_dis relative to the original undistorted stereoscopic image S_org, denoted Q_F: Q_F = β_1 × Q_L + (1 − β_1) × Q_R, where β_1 is the weight of Q_L;
⑧ Compute the absolute difference image of L_org and R_org, expressed in matrix form, and the absolute difference image of L_dis and R_dis, expressed in matrix form, where "| |" denotes the absolute value;
⑨ Divide each of the two difference images into non-overlapping blocks of size 8 × 8, then apply singular value decomposition to every block of both images to obtain, for each difference image, a singular value map composed of the singular value matrices of its blocks. Denote the coefficient matrix of the singular value map obtained from the undistorted difference image as G_org, and the singular value at coordinate position (p, q) in the singular value matrix of its n-th block accordingly; denote the coefficient matrix of the singular value map obtained from the distorted difference image as G_dis, with its n-th block's singular value at (p, q) likewise. Here W_LR and H_LR denote the width and height of the difference images, 0 ≤ p ≤ 7 and 0 ≤ q ≤ 7;
⑩ Compute the singular value deviation evaluation value between the singular value map of the undistorted difference image and that of the distorted difference image, denoted K, using the singular values at coordinate position (p, p) in the singular value matrix of the n-th block of G_org and of G_dis;
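The pooling of the blockwise singular value differences into K is not reproduced in the text above. The sketch below assumes, hypothetically, the mean absolute deviation between the singular values of co-located non-overlapping 8 × 8 blocks; the function name and the pooling rule are assumptions.

```python
import numpy as np

def singular_value_deviation(diff_org, diff_dis, block=8):
    """Hypothetical K: mean absolute deviation between the singular values
    of co-located non-overlapping 8x8 blocks of the two difference images."""
    H, W = diff_org.shape
    devs = []
    for i in range(0, H - block + 1, block):
        for j in range(0, W - block + 1, block):
            s_org = np.linalg.svd(
                diff_org[i:i + block, j:j + block].astype(np.float64),
                compute_uv=False)
            s_dis = np.linalg.svd(
                diff_dis[i:i + block, j:j + block].astype(np.float64),
                compute_uv=False)
            devs.append(np.abs(s_org - s_dis).mean())
    return float(np.mean(devs))
```

K is 0 when the two difference images coincide and grows with structural discrepancy between them.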
Apply singular value decomposition to each of the two difference images to obtain, for each, 2 orthogonal matrices and 1 singular value matrix: for the undistorted difference image, denote the 2 orthogonal matrices as χ_org and V_org and the singular value matrix as O_org; for the distorted difference image, denote the 2 orthogonal matrices as χ_dis and V_dis and the singular value matrix as O_dis;
Compute the residual matrix images of the two difference images after singular value removal: denote the residual matrix image of the undistorted difference image as X_org, X_org = χ_org × Λ × V_org, and that of the distorted difference image as X_dis, X_dis = χ_dis × Λ × V_dis, where Λ denotes the identity matrix, of the same size as O_org and O_dis;
Compute the mean deviation ratio of X_org and X_dis, where x denotes the abscissa and y the ordinate of the pixel points in X_org and X_dis;
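Substituting the identity matrix Λ for the singular value matrix rebuilds the image from only the orthogonal SVD factors, which is what "residual after singular value removal" means here. The exact ratio formula is not reproduced in the text, so the pooling function below assumes, hypothetically, a ratio of mean absolute values; both function names are assumptions.

```python
import numpy as np

def residual_after_sv_removal(img):
    """X = chi * Lambda * V with Lambda the identity: rebuild the image
    with every singular value replaced by 1, keeping only the orthogonal
    (structural) factors of the SVD."""
    U, s, Vt = np.linalg.svd(np.asarray(img, dtype=np.float64),
                             full_matrices=False)
    return U @ Vt  # identity Lambda strips the singular values

def mean_deviation_ratio(X_org, X_dis):
    """Hypothetical pooling: ratio of mean absolute values of the two
    residual images (the patent's exact formula is not reproduced)."""
    return float(np.abs(X_dis).mean() / np.abs(X_org).mean())
```

For a diagonal matrix diag(4, 2) the residual is the identity, and every singular value of the residual equals 1 by construction.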
Compute the stereoscopic perception evaluation metric of the distorted stereoscopic image S_dis relative to the original undistorted stereoscopic image S_org, denoted Q_S, where τ is a constant that adjusts the relative importance of K and the mean deviation ratio in Q_S;
From Q_F and Q_S, compute the image quality evaluation score of the distorted stereoscopic image S_dis, denoted Q: Q = Q_F × (Q_S)^ρ, where ρ is a weight exponent.
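The final combination Q = Q_F × (Q_S)^ρ is the one equation the text states in full. A minimal sketch; the default ρ = 0.5 is a placeholder, not the patent's fitted value:

```python
def overall_quality(q_f, q_s, rho=0.5):
    """Final score of the distorted stereo image: nonlinear combination of
    the left/right viewpoint quality q_f and stereo perception quality q_s.
    rho is a weight exponent; 0.5 is an arbitrary placeholder value."""
    return q_f * (q_s ** rho)
```

With ρ between 0 and 1 the stereo perception term Q_S modulates, rather than dominates, the viewpoint image quality Q_F.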
The coefficient matrix A_L of the sensitive-region matrix map corresponding to L_org and L_dis in step ② is obtained as follows:
②-a1. Apply the horizontal and vertical Sobel operators to L_org to obtain its horizontal and vertical gradient maps, denoted Z_h,l1 and Z_v,l1 respectively, then compute the gradient magnitude map of L_org, denoted Z_l1, where Z_l1(x, y) is the gradient magnitude of the pixel at coordinate position (x, y) in Z_l1, Z_h,l1(x, y) is the horizontal gradient value at (x, y) in Z_h,l1, and Z_v,l1(x, y) is the vertical gradient value at (x, y) in Z_v,l1; 1 ≤ x ≤ W′, 1 ≤ y ≤ H′, where W′ and H′ denote the width and height of Z_l1;
②-a2. Apply the horizontal and vertical Sobel operators to L_dis to obtain its horizontal and vertical gradient maps, denoted Z_h,l2 and Z_v,l2 respectively, then compute the gradient magnitude map of L_dis, denoted Z_l2, where Z_l2(x, y) is the gradient magnitude of the pixel at coordinate position (x, y) in Z_l2, Z_h,l2(x, y) is the horizontal gradient value at (x, y) in Z_h,l2, and Z_v,l2(x, y) is the vertical gradient value at (x, y) in Z_v,l2; 1 ≤ x ≤ W′, 1 ≤ y ≤ H′, where W′ and H′ denote the width and height of Z_l2;
②-a3. Compute the threshold T required for region division, where α is a constant, Z_l1(x, y) is the gradient magnitude at coordinate position (x, y) in Z_l1, and Z_l2(x, y) is the gradient magnitude at (x, y) in Z_l2;
②-a4. Denote the gradient magnitude at coordinate position (i, j) in Z_l1 as Z_l1(i, j) and that in Z_l2 as Z_l2(i, j). Judge whether Z_l1(i, j) > T or Z_l2(i, j) > T holds; if so, determine that the pixel at (i, j) in L_org and L_dis belongs to the sensitive region and set A_L(i, j) = 1; otherwise, determine that it belongs to the non-sensitive region and set A_L(i, j) = 0, where 0 ≤ i ≤ (W − 8) and 0 ≤ j ≤ (H − 8);
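Steps ②-a1 to ②-a4 reduce to thresholding two gradient magnitude maps with an OR rule. The formula for T is not reproduced in the text, so the sketch below assumes a hypothetical choice, T = α times the mean of the pointwise maximum of the two maps; the function name and that choice of T are assumptions.

```python
import numpy as np

def sensitive_region_mask(Z1, Z2, alpha=1.0):
    """A(i,j) = 1 where either gradient magnitude map exceeds the
    threshold T, else 0. T = alpha * mean(max(Z1, Z2)) is a hypothetical
    stand-in for the patent's unspecified threshold formula."""
    Z1 = np.asarray(Z1, dtype=np.float64)
    Z2 = np.asarray(Z2, dtype=np.float64)
    T = alpha * np.maximum(Z1, Z2).mean()
    # OR rule of step 2-a4: sensitive if either view exceeds T
    return ((Z1 > T) | (Z2 > T)).astype(np.uint8)
```

The OR rule marks a pixel as sensitive when it lies on a strong edge in either the original or the distorted view, so edges destroyed by distortion still fall in the sensitive region.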
The coefficient matrix A_R of the sensitive-region matrix map corresponding to R_org and R_dis in step ② is obtained as follows:
②-b1. Apply the horizontal and vertical Sobel operators to R_org to obtain its horizontal and vertical gradient maps, denoted Z_h,r1 and Z_v,r1 respectively, then compute the gradient magnitude map of R_org, denoted Z_r1, where Z_r1(x, y) is the gradient magnitude of the pixel at coordinate position (x, y) in Z_r1, Z_h,r1(x, y) is the horizontal gradient value at (x, y) in Z_h,r1, and Z_v,r1(x, y) is the vertical gradient value at (x, y) in Z_v,r1; 1 ≤ x ≤ W′, 1 ≤ y ≤ H′, where W′ and H′ denote the width and height of Z_r1;
②-b2. Apply the horizontal and vertical Sobel operators to R_dis to obtain its horizontal and vertical gradient maps, denoted Z_h,r2 and Z_v,r2 respectively, then compute the gradient magnitude map of R_dis, denoted Z_r2, where Z_r2(x, y) is the gradient magnitude of the pixel at coordinate position (x, y) in Z_r2, Z_h,r2(x, y) is the horizontal gradient value at (x, y) in Z_h,r2, and Z_v,r2(x, y) is the vertical gradient value at (x, y) in Z_v,r2; 1 ≤ x ≤ W′, 1 ≤ y ≤ H′, where W′ and H′ denote the width and height of Z_r2;
②-b3. Compute the threshold T′ required for region division, where α is a constant, Z_r1(x, y) is the gradient magnitude at coordinate position (x, y) in Z_r1, and Z_r2(x, y) is the gradient magnitude at (x, y) in Z_r2;
②-b4. Denote the gradient magnitude at coordinate position (i, j) in Z_r1 as Z_r1(i, j) and that in Z_r2 as Z_r2(i, j). Judge whether Z_r1(i, j) > T′ or Z_r2(i, j) > T′ holds; if so, determine that the pixel at (i, j) in R_org and R_dis belongs to the sensitive region and set A_R(i, j) = 1; otherwise, determine that it belongs to the non-sensitive region and set A_R(i, j) = 0, where 0 ≤ i ≤ (W − 8) and 0 ≤ j ≤ (H − 8).
The value of β_1 in step ⑦ is obtained as follows:
⑦-1. Using n undistorted stereoscopic images, establish a set of distorted stereoscopic images covering different distortion types and different distortion degrees; the set comprises a plurality of distorted stereoscopic images, with n ≥ 1;
⑦-2. Using a subjective quality evaluation method, obtain the difference mean opinion score of each distorted stereoscopic image in the set, denoted DMOS, where DMOS = 100 − MOS, MOS is the mean opinion score, and DMOS ∈ [0, 100];
⑦-3. Following the operations of steps ① to ⑥, compute, for the left viewpoint image of each distorted stereoscopic image in the set relative to the left viewpoint image of the corresponding undistorted stereoscopic image, the sensitive-region evaluation value Q_m,L and the non-sensitive-region evaluation value Q_nm,L;
⑦-4. Using a mathematical fitting method, fit the DMOS values of the distorted stereoscopic images in the set to the corresponding Q_m,L and Q_nm,L, thereby obtaining the value of β_1.
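Step ⑦-4 only says "a mathematical fitting method". As a hedged stand-in, the sketch below grid-searches a weight b combining two per-image score vectors so that the combined score best correlates (in absolute value) with DMOS over a training set; the function name, the grid search, and the correlation criterion are all assumptions, not the patent's procedure.

```python
import numpy as np

def fit_weight(score_a, score_b, dmos):
    """Hypothetical fitting: find b in [0, 1] such that
    Q = b*score_a + (1-b)*score_b maximizes |corr(Q, DMOS)|."""
    score_a, score_b, dmos = (np.asarray(v, dtype=np.float64)
                              for v in (score_a, score_b, dmos))
    best_b, best_c = 0.0, -np.inf
    for b in np.linspace(0.0, 1.0, 101):
        q = b * score_a + (1.0 - b) * score_b
        c = abs(np.corrcoef(q, dmos)[0, 1])  # Pearson CC against DMOS
        if c > best_c:
            best_b, best_c = b, c
    return float(best_b)
```

If one score tracks DMOS perfectly and the other is unrelated, the search assigns all weight to the first.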
Compared with the prior art, the method has the following advantages. First, the left and right viewpoint images of the undistorted and distorted stereoscopic images are each divided into a human-eye-sensitive region and a corresponding non-sensitive region, and evaluation indexes for the two regions are obtained from the two aspects of structural amplitude distortion and structural direction distortion. Second, the left and right viewpoint image quality evaluation values are obtained by linear weighting. Since singular values characterize the structural information of a stereoscopic image well, the distortion of depth perception is measured by the singular value difference and the mean deviation ratio of the residual images after singular value removal, yielding an evaluation value of stereoscopic perceptual quality. Finally, the left and right viewpoint image quality and the stereoscopic perceptual quality are combined nonlinearly to obtain the final quality evaluation result of the stereoscopic image.
Drawings
FIG. 1 is a block diagram of an overall implementation of the method of the present invention;
FIG. 2a is an Akko & Kayo (640 × 480) stereo image;
FIG. 2b is an Alt Moabit (1024 × 768) stereoscopic image;
FIG. 2c is a Balloons (1024 × 768) stereoscopic image;
FIG. 2d is a stereo image of Door Flowers (1024 × 768);
fig. 2e is a Kendo (1024 × 768) stereoscopic image;
FIG. 2f is a Leaving Laptop (1024 × 768) stereoscopic image;
FIG. 2g is a Lovebird1 (1024 × 768) stereoscopic image;
FIG. 2h is a Newspaper (1024 × 768) stereoscopic image;
FIG. 2i is an Xmas (640 × 480) stereo image;
FIG. 2j is a Puppy (720 × 480) stereo image;
fig. 2k is a stereo image of Soccer2(720 × 480);
FIG. 2l is a Horse (480 × 270) stereo image;
FIG. 3 is a block diagram of left viewpoint image quality evaluation according to the method of the present invention;
FIG. 4a is a graph of CC performance between left viewpoint image quality and subjective perceptual quality under different α and ω_1;
FIG. 4b is a graph of SROCC performance between left viewpoint image quality and subjective perceptual quality under different α and ω_1;
FIG. 4c is a graph of RMSE performance between left viewpoint image quality and subjective perceptual quality under different α and ω_1;
FIG. 5a is a graph of CC performance between left viewpoint image quality and subjective perceptual quality under different α, with ω_1 = 1;
FIG. 5b is a graph of SROCC performance between left viewpoint image quality and subjective perceptual quality under different α, with ω_1 = 1;
FIG. 5c is a graph of RMSE performance between left viewpoint image quality and subjective perceptual quality under different α, with ω_1 = 1;
FIG. 6a is a graph of CC performance between left-and-right viewpoint image quality and subjective perceptual quality under different β_1;
FIG. 6b is a graph of SROCC performance between left-and-right viewpoint image quality and subjective perceptual quality under different β_1;
FIG. 6c is a graph of RMSE performance between left-and-right viewpoint image quality and subjective perceptual quality under different β_1;
fig. 7a is a graph of CC performance variation between stereo depth perception quality and subjective perception quality at different τ;
FIG. 7b is a graph of the SROCC performance variation between the stereo depth perception quality and the subjective perception quality at different τ;
fig. 7c is a graph of the RMSE performance variation between stereo depth perception quality and subjective perception quality at different τ;
fig. 8a is a graph of CC performance variation between stereo image quality and subjective perceptual quality at different ρ;
FIG. 8b is a graph of the SROCC performance variation between stereo image quality and subjective perceptual quality at different ρ;
fig. 8c is a graph of the RMSE performance variation between stereo image quality and subjective perceptual quality at different ρ.
Detailed Description
The invention is described in further detail below with reference to the accompanying examples.
The stereoscopic image objective quality evaluation method based on structural distortion according to the invention evaluates the left and right viewpoint image quality and the stereoscopic perception quality of the stereoscopic image from the perspective of structural distortion, and obtains the final quality evaluation value by nonlinear weighting. FIG. 1 shows a block diagram of the overall implementation of the method, which comprises the following steps:
① Let S_org be the original undistorted stereoscopic image and S_dis the distorted stereoscopic image to be evaluated. Denote the left viewpoint image of S_org as L_org and its right viewpoint image as R_org; denote the left viewpoint image of S_dis as L_dis and its right viewpoint image as R_dis.
② Divide each of the 4 images L_org, L_dis, R_org and R_dis into regions to obtain its corresponding sensitive-region matrix map. Denote the coefficient matrix of the sensitive-region matrix map obtained by dividing L_org and L_dis as A_L, and the coefficient value at coordinate position (i, j) in A_L as A_L(i, j); denote the coefficient matrix of the sensitive-region matrix map obtained by dividing R_org and R_dis as A_R, and the coefficient value at coordinate position (i, j) in A_R as A_R(i, j). Here 0 ≤ i ≤ (W − 8) and 0 ≤ j ≤ (H − 8), where W and H denote the width and height of L_org, L_dis, R_org and R_dis.
In this embodiment, the coefficient matrix A_L of the sensitive-region matrix map corresponding to L_org and L_dis in step ② is obtained as follows:
②-a1. Apply the horizontal and vertical Sobel operators to L_org to obtain its horizontal and vertical gradient maps, denoted Z_h,l1 and Z_v,l1 respectively, then compute the gradient magnitude map of L_org, denoted Z_l1, where Z_l1(x, y) is the gradient magnitude of the pixel at coordinate position (x, y) in Z_l1, Z_h,l1(x, y) is the horizontal gradient value at (x, y) in Z_h,l1, and Z_v,l1(x, y) is the vertical gradient value at (x, y) in Z_v,l1; 1 ≤ x ≤ W′, 1 ≤ y ≤ H′, where W′ and H′ denote the width and height of Z_l1.
②-a2. Apply the horizontal and vertical Sobel operators to L_dis to obtain its horizontal and vertical gradient maps, denoted Z_h,l2 and Z_v,l2 respectively, then compute the gradient magnitude map of L_dis, denoted Z_l2, where Z_l2(x, y) is the gradient magnitude of the pixel at coordinate position (x, y) in Z_l2, Z_h,l2(x, y) is the horizontal gradient value at (x, y) in Z_h,l2, and Z_v,l2(x, y) is the vertical gradient value at (x, y) in Z_v,l2; 1 ≤ x ≤ W′, 1 ≤ y ≤ H′, where W′ and H′ denote the width and height of Z_l2.
②-a3. Compute the threshold value T required for region division from the gradient magnitudes Z_l1(x, y) and Z_l2(x, y) of all pixels, where W' denotes the width of Z_l1 and Z_l2, H' denotes their height, and α is a constant.
②-a4. Denote the gradient magnitude of the pixel at coordinate position (i, j) in Z_l1 as Z_l1(i, j) and that in Z_l2 as Z_l2(i, j), then judge whether Z_l1(i, j) > T or Z_l2(i, j) > T holds. If it holds, determine that the pixel at coordinate position (i, j) in L_org and L_dis belongs to the sensitive region and let A_L(i, j) = 1; otherwise, determine that it belongs to the non-sensitive region and let A_L(i, j) = 0, where 0 ≤ i ≤ (W-8) and 0 ≤ j ≤ (H-8).
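Steps ②-a1 to ②-a4 above can be sketched in code. This is a minimal sketch: since the patent's exact threshold formula is not reproduced in this text, it assumes T to be the constant α times the mean of the two gradient magnitude maps, and the function names `sobel_gradients` and `sensitive_region_map` are illustrative, not from the patent.

```python
import numpy as np

def sobel_gradients(img):
    """Horizontal and vertical Sobel responses via direct 3x3 convolution."""
    kh = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)  # horizontal kernel
    kv = kh.T                                                          # vertical kernel
    h, w = img.shape
    zh = np.zeros((h - 2, w - 2))
    zv = np.zeros((h - 2, w - 2))
    for x in range(h - 2):
        for y in range(w - 2):
            patch = img[x:x + 3, y:y + 3]
            zh[x, y] = np.sum(patch * kh)
            zv[x, y] = np.sum(patch * kv)
    return zh, zv

def sensitive_region_map(l_org, l_dis, alpha=2.1):
    """Step 2-a sketch: 1 marks a sensitive pixel, 0 a non-sensitive pixel."""
    zh1, zv1 = sobel_gradients(l_org)
    zh2, zv2 = sobel_gradients(l_dis)
    z1 = np.sqrt(zh1 ** 2 + zv1 ** 2)   # gradient magnitude of the original view
    z2 = np.sqrt(zh2 ** 2 + zv2 ** 2)   # gradient magnitude of the distorted view
    # Assumed threshold: alpha times the mean of both magnitude maps.
    t = alpha * 0.5 * (z1.mean() + z2.mean())
    return ((z1 > t) | (z2 > t)).astype(np.uint8)
```

With α = 2.1 as chosen in the embodiment, a pixel whose gradient magnitude exceeds T in either the original or the distorted view is marked sensitive (A_L(i, j) = 1).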
In this embodiment, the coefficient matrix A_R of the sensitive-region matrix map corresponding to R_org and R_dis in step ② is obtained as follows:
②-b1. Process R_org with the Sobel operator in the horizontal direction and in the vertical direction to obtain the horizontal-direction and vertical-direction gradient maps of R_org, denoted Z_h,r1 and Z_v,r1 respectively; then compute the gradient magnitude map of R_org, denoted Z_r1, as Z_r1(x, y) = √(Z_h,r1(x, y)² + Z_v,r1(x, y)²), where Z_r1(x, y) denotes the gradient magnitude of the pixel at coordinate position (x, y) in Z_r1, Z_h,r1(x, y) denotes the horizontal gradient value of the pixel at coordinate position (x, y) in Z_h,r1, and Z_v,r1(x, y) denotes the vertical gradient value of the pixel at coordinate position (x, y) in Z_v,r1, with 1 ≤ x ≤ W', 1 ≤ y ≤ H', where W' denotes the width of Z_r1 and H' denotes its height.
②-b2. Process R_dis with the Sobel operator in the horizontal direction and in the vertical direction to obtain the horizontal-direction and vertical-direction gradient maps of R_dis, denoted Z_h,r2 and Z_v,r2 respectively; then compute the gradient magnitude map of R_dis, denoted Z_r2, as Z_r2(x, y) = √(Z_h,r2(x, y)² + Z_v,r2(x, y)²), where Z_r2(x, y) denotes the gradient magnitude of the pixel at coordinate position (x, y) in Z_r2, Z_h,r2(x, y) denotes the horizontal gradient value of the pixel at coordinate position (x, y) in Z_h,r2, and Z_v,r2(x, y) denotes the vertical gradient value of the pixel at coordinate position (x, y) in Z_v,r2, with 1 ≤ x ≤ W', 1 ≤ y ≤ H', where W' denotes the width of Z_r2 and H' denotes its height.
②-b3. Compute the threshold value T' required for region division from the gradient magnitudes Z_r1(x, y) and Z_r2(x, y) of all pixels, where W' denotes the width of Z_r1 and Z_r2, H' denotes their height, and α is a constant.
②-b4. Denote the gradient magnitude of the pixel at coordinate position (i, j) in Z_r1 as Z_r1(i, j) and that in Z_r2 as Z_r2(i, j), then judge whether Z_r1(i, j) > T' or Z_r2(i, j) > T' holds. If it holds, determine that the pixel at coordinate position (i, j) in R_org and R_dis belongs to the sensitive region and let A_R(i, j) = 1; otherwise, determine that it belongs to the non-sensitive region and let A_R(i, j) = 0, where 0 ≤ i ≤ (W-8) and 0 ≤ j ≤ (H-8).
In the present embodiment, a set of distorted stereo images under different distortion degrees of different distortion types is created from the 12 undistorted stereo images shown in Figs. 2a, 2b, 2c, 2d, 2e, 2f, 2g, 2h, 2i, 2j, 2k and 2l. The distortion types include JPEG compression, JPEG2000 (JP2K) compression, white Gaussian noise, Gaussian blur and H.264 coding distortion, and the left and right viewpoint images of each stereo image are distorted simultaneously to the same degree. The set comprises 312 distorted stereo images in total: 60 with JPEG compression, 60 with JPEG2000 compression, 60 with white Gaussian noise, 60 with Gaussian blur and 72 with H.264 coding distortion. The above-described region division is performed on these 312 stereo images.
In this embodiment, the α value determines the accuracy of the sensitive-region division: if it is too large, sensitive regions are mistaken for non-sensitive ones; if it is too small, non-sensitive regions are mistaken for sensitive ones. Its value is therefore determined from the contribution of the left-viewpoint or right-viewpoint image quality to the overall stereoscopic image quality.
③ Divide each of the 2 images L_org and L_dis into (W-7) × (H-7) overlapping blocks of size 8 × 8, then compute the structural amplitude distortion map between each pair of co-located overlapping blocks in L_org and L_dis. The coefficient matrix of the structural amplitude distortion map is denoted B_L, and the coefficient value at coordinate position (i, j) in B_L is denoted B_L(i, j); B_L(i, j) also denotes the structural amplitude distortion value between the 8 × 8 block whose upper-left corner is at (i, j) in L_org and the 8 × 8 block whose upper-left corner is at (i, j) in L_dis. L_org(i+x, j+y) denotes the pixel value of the pixel at coordinate position (i+x, j+y) in L_org, L_dis(i+x, j+y) denotes the pixel value of the pixel at coordinate position (i+x, j+y) in L_dis, and C_1 is a constant introduced to keep the denominator from being zero; in practice C_1 = 0.01 may be taken. Here 0 ≤ i ≤ (W-8) and 0 ≤ j ≤ (H-8).
Here, considering the correlation between neighboring pixels of an image, an overlapping block of size 8 × 8 shares 7 columns with its nearest left or right block and, likewise, 7 rows with its nearest upper or lower block.
Divide each of the 2 images R_org and R_dis into (W-7) × (H-7) overlapping blocks of size 8 × 8, and compute the structural amplitude distortion map between each pair of co-located overlapping blocks in R_org and R_dis. The coefficient matrix of the structural amplitude distortion map is denoted B_R, and the coefficient value at coordinate position (i, j) in B_R is denoted B_R(i, j); B_R(i, j) also denotes the structural amplitude distortion value between the 8 × 8 block whose upper-left corner is at (i, j) in R_org and the 8 × 8 block whose upper-left corner is at (i, j) in R_dis. R_org(i+x, j+y) denotes the pixel value of the pixel at coordinate position (i+x, j+y) in R_org, R_dis(i+x, j+y) denotes the pixel value of the pixel at coordinate position (i+x, j+y) in R_dis, and C_1 is a constant introduced to keep the denominator from being zero; in practice C_1 = 0.01 may be taken. Here 0 ≤ i ≤ (W-8) and 0 ≤ j ≤ (H-8).
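The overlapping-block division of step ③ can be sketched as follows; the function name is illustrative. The first index of each key is the horizontal position i, matching the patent's convention 0 ≤ i ≤ (W-8).

```python
import numpy as np

def overlapping_blocks(img):
    """Divide an H x W image into (W-7)*(H-7) overlapping 8x8 blocks.

    As described in step 3, adjacent blocks share 7 columns with their
    nearest horizontal neighbor and 7 rows with their nearest vertical one."""
    h, w = img.shape
    return {(i, j): img[j:j + 8, i:i + 8]          # upper-left corner at (i, j)
            for i in range(w - 7) for j in range(h - 7)}
```

A pair of co-located blocks for comparison is then simply `overlapping_blocks(l_org)[(i, j)]` and `overlapping_blocks(l_dis)[(i, j)]`.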
④ Process each of the 2 images L_org and L_dis with the Sobel operator in the horizontal direction and with the Sobel operator in the vertical direction to obtain the horizontal-direction gradient matrix map and the vertical-direction gradient matrix map of L_org and of L_dis. The coefficient matrix of the horizontal-direction gradient matrix map obtained by processing L_org with the horizontal Sobel operator is denoted I_h,org,L, with the coefficient value at coordinate position (i, j) denoted I_h,org,L(i, j), I_h,org,L(i, j) = [L_org(i+2, j) + 2L_org(i+2, j+1) + L_org(i+2, j+2)] - [L_org(i, j) + 2L_org(i, j+1) + L_org(i, j+2)]. The coefficient matrix of the vertical-direction gradient matrix map obtained by processing L_org with the vertical Sobel operator is denoted I_v,org,L, with coefficient value I_v,org,L(i, j) = [L_org(i, j+2) + 2L_org(i+1, j+2) + L_org(i+2, j+2)] - [L_org(i, j) + 2L_org(i+1, j) + L_org(i+2, j)]. The coefficient matrices I_h,dis,L and I_v,dis,L obtained by processing L_dis with the horizontal and vertical Sobel operators are defined in the same way, with coefficient values I_h,dis,L(i, j) and I_v,dis,L(i, j). Here L_org(i+2, j), L_org(i+2, j+1), L_org(i+2, j+2), L_org(i, j), L_org(i, j+1), L_org(i, j+2), L_org(i+1, j+2) and L_org(i+1, j) denote the pixel values of the pixels at the indicated coordinate positions in L_org, and L_dis(i+2, j), L_dis(i+2, j+1), L_dis(i+2, j+2), L_dis(i, j), L_dis(i, j+1), L_dis(i, j+2), L_dis(i+1, j+2) and L_dis(i+1, j) denote the pixel values of the pixels at the same coordinate positions in L_dis.
Process each of the 2 images R_org and R_dis with the Sobel operator in the horizontal direction and with the Sobel operator in the vertical direction to obtain the horizontal-direction gradient matrix map and the vertical-direction gradient matrix map of R_org and of R_dis. The coefficient matrix of the horizontal-direction gradient matrix map obtained by processing R_org with the horizontal Sobel operator is denoted I_h,org,R, with the coefficient value at coordinate position (i, j) denoted I_h,org,R(i, j), I_h,org,R(i, j) = [R_org(i+2, j) + 2R_org(i+2, j+1) + R_org(i+2, j+2)] - [R_org(i, j) + 2R_org(i, j+1) + R_org(i, j+2)]. The coefficient matrix of the vertical-direction gradient matrix map obtained by processing R_org with the vertical Sobel operator is denoted I_v,org,R, with coefficient value I_v,org,R(i, j) = [R_org(i, j+2) + 2R_org(i+1, j+2) + R_org(i+2, j+2)] - [R_org(i, j) + 2R_org(i+1, j) + R_org(i+2, j)]. The coefficient matrices I_h,dis,R and I_v,dis,R obtained by processing R_dis with the horizontal and vertical Sobel operators are defined in the same way, with coefficient values I_h,dis,R(i, j) and I_v,dis,R(i, j). Here R_org(i+2, j), R_org(i+2, j+1), R_org(i+2, j+2), R_org(i, j), R_org(i, j+1), R_org(i, j+2), R_org(i+1, j+2) and R_org(i+1, j) denote the pixel values of the pixels at the indicated coordinate positions in R_org, and R_dis(i+2, j), R_dis(i+2, j+1), R_dis(i+2, j+2), R_dis(i, j), R_dis(i, j+1), R_dis(i, j+2), R_dis(i+1, j+2) and R_dis(i+1, j) denote the pixel values of the pixels at the same coordinate positions in R_dis.
⑤ Compute the structural direction distortion map between each pair of co-located overlapping blocks in L_org and L_dis. The coefficient matrix of the structural direction distortion map is denoted E_L, and the coefficient value at coordinate position (i, j) in E_L is denoted E_L(i, j), where C_2 is a constant introduced to keep the denominator from being zero; in practice C_2 = 0.02 may be taken.
Compute the structural direction distortion map between each pair of co-located overlapping blocks in R_org and R_dis. The coefficient matrix of the structural direction distortion map is denoted E_R, and the coefficient value at coordinate position (i, j) in E_R is denoted E_R(i, j), where C_2 is a constant introduced to keep the denominator from being zero; in practice C_2 = 0.02 may be taken.
⑥ Compute the structural distortion evaluation value of L_org and L_dis, denoted Q_L: Q_L = ω_1 × Q_m,L + ω_2 × Q_nm,L, where ω_1 denotes the weight value of the sensitive region in L_org and L_dis, and ω_2 denotes the weight value of the non-sensitive region.
Compute the structural distortion evaluation value of R_org and R_dis, denoted Q_R: Q_R = ω'_1 × Q_m,R + ω'_2 × Q_nm,R, where ω'_1 denotes the weight value of the sensitive region in R_org and R_dis, and ω'_2 denotes the weight value of the non-sensitive region.
In the present embodiment, Fig. 3 shows a block diagram of the implementation of left-viewpoint image quality evaluation. A distorted stereo image set composed of 312 distorted stereo images is established from the 12 undistorted stereo images shown in Figs. 2a to 2l, and subjective quality evaluation is performed on the 312 distorted stereo images by a known subjective quality evaluation method to obtain the difference mean opinion score (DMOS) of each, that is, the subjective quality score of each distorted stereo image. DMOS is the difference between the mean opinion score (MOS) and full marks (100), i.e. DMOS = 100 - MOS; therefore, the larger the DMOS value, the worse the quality of the distorted stereoscopic image, and the smaller the DMOS value, the better the quality, with DMOS taking values in [0, 100]. On the other hand, the Q_m,L and Q_nm,L corresponding to each of the 312 distorted stereoscopic images are calculated according to steps ① to ⑥ of the method of the present invention; then Q_L = ω_1 × Q_m,L + (1 - ω_1) × Q_nm,L is fitted with a four-parameter logistic function nonlinear fit to obtain the α and ω_1 values. Here, 3 objective parameters commonly used to assess image quality evaluation methods serve as evaluation indexes, namely the Pearson correlation coefficient (CC), the Spearman rank-order correlation coefficient (SROCC) and the root mean squared error (RMSE) under the nonlinear regression condition. CC reflects the prediction accuracy of the objective model of the distorted stereo image evaluation function, SROCC reflects the monotonicity between the objective model and subjective perception, and RMSE reflects the accuracy of its prediction.
The higher the CC and SROCC values and the lower the RMSE value, the better the correlation between the stereo image objective evaluation method and DMOS. Fig. 4a shows the CC performance variation between the left-viewpoint image quality and the subjective perceptual quality of the 312 stereo images under different α and ω_1, Fig. 4b gives the corresponding SROCC performance variation, and Fig. 4c the corresponding RMSE performance variation. Analyzing Figs. 4a, 4b and 4c shows that CC and SROCC become larger, and RMSE becomes smaller, as the ω_1 value grows, which indicates that the left-viewpoint image quality is mainly determined by the quality of the sensitive region, while the change of the α value has little impact on the performance between left-viewpoint image quality and subjective perception. Fig. 5a gives, for ω_1 = 1 and ω_2 = 0, the CC performance variation between the left-viewpoint image quality and the subjective perceptual quality of the 312 stereo images at different α; Fig. 5b gives the corresponding SROCC performance variation and Fig. 5c the corresponding RMSE performance variation. Analyzing Figs. 5a, 5b and 5c shows that the CC, SROCC and RMSE values all fluctuate only at the percentile level but exhibit a peak; here ω_1 = 1 and α = 2.1 are taken.
⑦ According to Q_L and Q_R, compute the spatial frequency similarity measure of the distorted stereo image S_dis to be evaluated relative to the original undistorted stereo image S_org, denoted Q_F: Q_F = β_1 × Q_L + (1 - β_1) × Q_R, where β_1 denotes the weight of Q_L.
In this embodiment, β_1 of step ⑦ is obtained as follows:
⑦-1. Use n undistorted stereo images to build a distorted stereo image set under different distortion types and different distortion degrees, the set comprising several distorted stereo images, where n ≥ 1.
⑦-2. Obtain the difference mean opinion score of each distorted stereo image in the set by a subjective quality evaluation method, denoted DMOS, with DMOS = 100 - MOS, where MOS denotes the mean opinion score and DMOS ∈ [0, 100].
⑦-3. According to the operation procedures of steps ① to ⑥, calculate the sensitive-region evaluation value Q_m,L and the non-sensitive-region evaluation value Q_nm,L of the left-viewpoint image of each distorted stereoscopic image in the set with respect to the left-viewpoint image of the corresponding undistorted stereoscopic image.
⑦-4. Fit the difference mean opinion scores DMOS of the distorted stereo images in the set against the corresponding Q_m,L and Q_nm,L by a mathematical fitting method, thereby obtaining the β_1 value.
In this embodiment, β_1 determines the contribution of Q_L to the stereo image quality. For blockiness distortion, the stereo image quality is about half of the sum of the quality of the left-viewpoint image and the quality of the right-viewpoint image, while for blur distortion the stereo image quality depends mainly on the viewpoint with the better quality. To determine β_1, the Q_L and Q_R corresponding to each of the 312 distorted stereoscopic images are first calculated according to steps ① to ⑥ of the method of the present invention, and β_1 is then obtained by four-parameter fitting. Fig. 6a gives the CC performance variation between the quality of the left and right viewpoint images and the subjective perceptual quality under different β_1, Fig. 6b gives the corresponding SROCC performance variation, and Fig. 6c the corresponding RMSE performance variation. Analyzing Figs. 6a, 6b and 6c shows that the changes of CC, SROCC and RMSE with the β_1 value are modest, fluctuating at the percentile level, but exhibit a peak; here β_1 = 0.5 is taken.
⑧ Compute the absolute difference image of L_org and R_org, expressed as a matrix; likewise compute the absolute difference image of L_dis and R_dis, expressed as a matrix. Here "| |" is the absolute-value symbol.
⑨ Divide each of the 2 absolute difference images into non-overlapping blocks of size 8 × 8, then perform singular value decomposition on all blocks of the 2 images to obtain the singular value map of the original absolute difference image, composed of the singular value matrices of its blocks, and the singular value map of the distorted absolute difference image, composed of the singular value matrices of its blocks. The coefficient matrix of the singular value map obtained by decomposing the original absolute difference image is denoted G_org, and the singular value at coordinate position (p, q) in the singular value matrix of the n-th block of G_org is recorded accordingly; the coefficient matrix of the singular value map obtained by decomposing the distorted absolute difference image is denoted G_dis, and the singular value at coordinate position (p, q) in the singular value matrix of its n-th block is recorded accordingly. Here W_LR denotes the width of the absolute difference images, H_LR denotes their height, 0 ≤ p ≤ 7 and 0 ≤ q ≤ 7.
Here, in order to reduce computational complexity, a block of size 8 × 8 shares no repeated columns or rows with its nearest left, right, upper or lower block, i.e. the blocks do not overlap.
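Steps ⑧ and ⑨ (absolute difference image, non-overlapping 8 × 8 blocks, per-block singular value decomposition) can be sketched as follows. The deviation function shown is an assumed form (mean Euclidean distance between co-located singular-value vectors), since the patent's exact formula for K is not reproduced in this text; all function names are illustrative.

```python
import numpy as np

def block_singular_values(diff_img):
    """Step 9 sketch: split an absolute-difference image into non-overlapping
    8x8 blocks and return the singular values of every block (one row per block)."""
    h, w = diff_img.shape
    svs = []
    for r in range(0, h - h % 8, 8):
        for c in range(0, w - w % 8, 8):
            block = diff_img[r:r + 8, c:c + 8]
            svs.append(np.linalg.svd(block, compute_uv=False))  # 8 values, descending
    return np.array(svs)

def singular_value_deviation(diff_org, diff_dis):
    """Assumed form of the deviation K: mean Euclidean distance between the
    singular-value vectors of co-located blocks of the two difference images."""
    s_org = block_singular_values(diff_org)
    s_dis = block_singular_values(diff_dis)
    return float(np.mean(np.linalg.norm(s_org - s_dis, axis=1)))
```

The inputs would be the absolute difference images, e.g. `np.abs(l_org - r_org)` and `np.abs(l_dis - r_dis)` from step ⑧.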
⑩ Compute the singular value deviation evaluation value between the singular value map of the original absolute difference image and the singular value map of the distorted absolute difference image, denoted K, using the singular values at the diagonal coordinate positions (p, p) in the singular value matrix of the n-th block of G_org and the singular values at the corresponding positions (p, p) in the singular value matrix of the n-th block of G_dis.
Perform singular value decomposition on each of the 2 absolute difference images to obtain the 2 orthogonal matrices and the 1 singular value matrix corresponding to each. The 2 orthogonal matrices obtained by decomposing the original absolute difference image are denoted χ_org and V_org, and its singular value matrix is denoted O_org; the 2 orthogonal matrices obtained by decomposing the distorted absolute difference image are denoted χ_dis and V_dis, and its singular value matrix is denoted O_dis.
Respectively compute the residual matrix images of the 2 absolute difference images after singular value removal. The residual matrix image of the original absolute difference image is denoted X_org, X_org = χ_org × Λ × V_org; the residual matrix image of the distorted absolute difference image is denoted X_dis, X_dis = χ_dis × Λ × V_dis, where Λ denotes the identity matrix, whose size is consistent with that of O_org and O_dis.
Compute the mean deviation ratio of X_org and X_dis, where x denotes the abscissa and y the ordinate of the pixels in X_org and X_dis. Then compute the stereo perception evaluation metric of the distorted stereo image S_dis to be evaluated relative to the original undistorted stereo image S_org, denoted Q_S, where τ denotes a constant that adjusts the relative importance of K and the mean deviation ratio in Q_S.
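The residual images X = χ × Λ × V (with Λ the identity, i.e. the singular values stripped) and a mean deviation ratio can be sketched as follows. The exact form of the ratio is not reproduced in the patent text, so the one below (mean absolute deviation between the residual images, normalized by the mean magnitude of X_org) is an assumption; the names are illustrative.

```python
import numpy as np

def residual_image(diff_img):
    """Residual matrix image after singular value removal: replace the
    singular value matrix O by the identity Lambda, i.e. X = U @ I @ Vt."""
    u, s, vt = np.linalg.svd(diff_img, full_matrices=False)
    lam = np.eye(len(s))          # Lambda: identity of the same size as O
    return u @ lam @ vt

def mean_deviation_ratio(x_org, x_dis):
    """Assumed form of the mean deviation ratio: mean absolute deviation
    between the two residual images over the mean magnitude of the original."""
    return float(np.mean(np.abs(x_org - x_dis)) / (np.mean(np.abs(x_org)) + 1e-12))
```

Because every singular value is replaced by 1, the residual image of a square difference image is orthogonal; it retains only the structural (basis) information of the decomposition, which is what the depth-perception comparison relies on.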
In this embodiment, the absolute difference images of the 312 distorted stereoscopic images and of the 12 undistorted stereoscopic images are first obtained, and then the K and the mean deviation ratio corresponding to each distorted stereo image are calculated according to steps ⑧ to ⑩ of the method of the present invention. Here, the magnitude of τ determines the importance of the singular value deviation and of the residual information in depth perception evaluation. Fig. 7a shows the CC performance variation between the stereo perceptual quality and the subjective perception of the 312 distorted stereo images at different τ, Fig. 7b the corresponding SROCC performance variation, and Fig. 7c the corresponding RMSE performance variation; in Figs. 7a, 7b and 7c, τ varies in the range [-16, 4]. As shown in Figs. 7a, 7b and 7c, the variations of CC, SROCC and RMSE with τ all have an extreme value at approximately the same position; here τ = -8 is taken.
According to Q_F and Q_S, calculate the image quality evaluation score of the distorted stereoscopic image S_dis to be evaluated, denoted Q: Q = Q_F × (Q_S)^ρ, where ρ denotes a weight coefficient value.
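The final combination of left/right viewpoint quality and stereo perception quality follows directly from the formulas Q_F = β_1 × Q_L + (1 - β_1) × Q_R and Q = Q_F × (Q_S)^ρ; the sketch below uses the embodiment's values β_1 = 0.5 and ρ = 0.3 as defaults (the function name is illustrative).

```python
def overall_quality(q_l, q_r, q_s, beta1=0.5, rho=0.3):
    """Combine viewpoint quality and stereo perception quality:
    Q_F = beta1*Q_L + (1-beta1)*Q_R, then Q = Q_F * Q_S**rho."""
    q_f = beta1 * q_l + (1 - beta1) * q_r
    return q_f * q_s ** rho
```

Since both Q_F and Q_S decrease as distortion deepens, ρ > 0 makes Q decrease monotonically with either factor.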
In this embodiment, the Q_F and Q_S corresponding to each of the 312 distorted stereoscopic images are calculated according to steps ① to ⑩ of the method of the present invention, and ρ is then obtained by a four-parameter logistic function nonlinear fit of Q = Q_F × (Q_S)^ρ. The ρ value determines the contribution of the left- and right-viewpoint image quality and of the stereo perception quality to the stereo image quality; since the Q_F and Q_S values both decrease as the distortion of the stereo image deepens, the value range of ρ is ρ > 0. Fig. 8a shows the CC performance variation between the quality of the 312 stereo images and the subjective perceptual quality under different ρ values, Fig. 8b the corresponding SROCC performance variation, and Fig. 8c the corresponding RMSE performance variation. Analyzing Figs. 8a, 8b and 8c shows that a ρ value that is too large or too small impairs the consistency between the objective stereo image quality evaluation model and subjective perception, and that as ρ varies the CC, SROCC and RMSE values have extreme points at approximately the same position; here ρ = 0.3 is taken.
The correlation between the final evaluation result of the image quality evaluation function Q = Q_F × (Q_S)^0.3 of the distorted stereoscopic images obtained in this embodiment and the subjective score DMOS is analyzed as follows. First, the output value Q of the final stereo image quality evaluation result is calculated with Q = Q_F × (Q_S)^0.3; a four-parameter logistic function nonlinear fit is then performed on Q, and finally the performance index values between the stereo objective evaluation model and subjective perception are obtained. Here, 4 objective parameters commonly used to assess image quality evaluation methods serve as evaluation indexes, namely CC, SROCC, the outlier ratio (OR) and RMSE. OR reflects the degree of dispersion of the objective stereo image quality rating model, i.e. the proportion, among all distorted stereo images, of those for which the difference between the four-parameter-fitted evaluation value and DMOS exceeds a certain threshold. Table 1 gives the evaluation performance coefficients CC, SROCC, OR and RMSE. The data in Table 1 show that the correlation between the output value Q calculated with the image quality evaluation function Q = Q_F × (Q_S)^0.3 obtained according to this embodiment and the subjective score DMOS is very high: both the CC value and the SROCC value exceed 0.92 and the RMSE value is below 6.5, which indicates that the objective evaluation results are highly consistent with subjective human perception and demonstrates the effectiveness of the method. Table 1: Correlation between the image quality evaluation scores of the distorted stereoscopic images obtained by this embodiment and the subjective scores
|        | Gblur  | JP2K   | JPEG   | WN     | H264   | ALL    |
| Number | 60     | 60     | 60     | 60     | 72     | 312    |
| CC     | 0.9658 | 0.9479 | 0.9533 | 0.9554 | 0.9767 | 0.9235 |
| SROCC  | 0.9655 | 0.9489 | 0.9524 | 0.9274 | 0.9545 | 0.9430 |
| OR     | 0      | 0      | 0      | 0      | 0      | 0      |
| RMSE   | 5.4719 | 3.8180 | 4.3010 | 4.6151 | 3.0135 | 6.5890 |
Claims (3)
1. A stereoscopic image objective quality evaluation method based on structural distortion, characterized by comprising the following steps:
① Let S_org be the original undistorted stereo image and S_dis the distorted stereo image to be evaluated; denote the left viewpoint image of the original undistorted stereo image S_org as L_org and its right viewpoint image as R_org; denote the left viewpoint image of the distorted stereo image S_dis to be evaluated as L_dis and its right viewpoint image as R_dis;
② Divide each of the 4 images L_org, L_dis, R_org and R_dis into regions to obtain the sensitive-region matrix map corresponding to each of them: dividing L_org and L_dis into regions yields the coefficient matrix of their common sensitive-region matrix map, denoted A_L, with the coefficient value at coordinate position (i, j) in A_L denoted A_L(i, j); dividing R_org and R_dis into regions yields the coefficient matrix of their common sensitive-region matrix map, denoted A_R, with the coefficient value at coordinate position (i, j) in A_R denoted A_R(i, j); here 0 ≤ i ≤ (W-8) and 0 ≤ j ≤ (H-8), where W denotes the width of L_org, L_dis, R_org and R_dis and H denotes their height;
③ Divide each of the 2 images L_org and L_dis into (W-7) × (H-7) overlapping blocks of size 8 × 8, then compute the structural amplitude distortion map between each pair of co-located overlapping blocks in L_org and L_dis; the coefficient matrix of the structural amplitude distortion map is denoted B_L, and the coefficient value at coordinate position (i, j) in B_L is denoted B_L(i, j), where B_L(i, j) also denotes the structural amplitude distortion value between the 8 × 8 block whose upper-left corner is at (i, j) in L_org and the 8 × 8 block whose upper-left corner is at (i, j) in L_dis, L_org(i+x, j+y) denotes the pixel value of the pixel at coordinate position (i+x, j+y) in L_org, L_dis(i+x, j+y) denotes the pixel value of the pixel at coordinate position (i+x, j+y) in L_dis, C_1 represents a constant, 0 ≤ i ≤ (W-8) and 0 ≤ j ≤ (H-8);
divide each of the 2 images R_org and R_dis into (W-7) × (H-7) overlapping blocks of size 8 × 8, and compute the structural amplitude distortion map between each pair of co-located overlapping blocks in R_org and R_dis; the coefficient matrix of the structural amplitude distortion map is denoted B_R, and the coefficient value at coordinate position (i, j) in B_R is denoted B_R(i, j), where B_R(i, j) also denotes the structural amplitude distortion value between the 8 × 8 block whose upper-left corner is at (i, j) in R_org and the 8 × 8 block whose upper-left corner is at (i, j) in R_dis, R_org(i+x, j+y) denotes the pixel value of the pixel at coordinate position (i+x, j+y) in R_org, R_dis(i+x, j+y) denotes the pixel value of the pixel at coordinate position (i+x, j+y) in R_dis, C_1 represents a constant, 0 ≤ i ≤ (W-8) and 0 ≤ j ≤ (H-8);
④ Apply the horizontal-direction and vertical-direction Sobel operators to L_org and L_dis to obtain the horizontal and vertical gradient maps of each of the two images. Denote the coefficient matrix of the horizontal gradient map obtained from L_org as I_h,org,L, with coefficient value I_h,org,L(i, j) at coordinate position (i, j); the coefficient matrix of the vertical gradient map obtained from L_org as I_v,org,L, with coefficient value I_v,org,L(i, j); the coefficient matrix of the horizontal gradient map obtained from L_dis as I_h,dis,L, with coefficient value I_h,dis,L(i, j); and the coefficient matrix of the vertical gradient map obtained from L_dis as I_v,dis,L, with coefficient value I_v,dis,L(i, j). Here L_org(i+2, j), L_org(i+2, j+1), L_org(i+2, j+2), L_org(i, j), L_org(i, j+1), L_org(i, j+2), L_org(i+1, j+2) and L_org(i+1, j) denote the pixel values of L_org at the corresponding coordinate positions, and L_dis(i+2, j), L_dis(i+2, j+1), L_dis(i+2, j+2), L_dis(i, j), L_dis(i, j+1), L_dis(i, j+2), L_dis(i+1, j+2) and L_dis(i+1, j) denote the pixel values of L_dis at the corresponding coordinate positions;
Apply the horizontal-direction and vertical-direction Sobel operators to R_org and R_dis to obtain the horizontal and vertical gradient maps of each of the two images. Denote the coefficient matrix of the horizontal gradient map obtained from R_org as I_h,org,R, with coefficient value I_h,org,R(i, j) at coordinate position (i, j); the coefficient matrix of the vertical gradient map obtained from R_org as I_v,org,R, with coefficient value I_v,org,R(i, j); the coefficient matrix of the horizontal gradient map obtained from R_dis as I_h,dis,R, with coefficient value I_h,dis,R(i, j); and the coefficient matrix of the vertical gradient map obtained from R_dis as I_v,dis,R, with coefficient value I_v,dis,R(i, j). Here R_org(i+2, j), R_org(i+2, j+1), R_org(i+2, j+2), R_org(i, j), R_org(i, j+1), R_org(i, j+2), R_org(i+1, j+2) and R_org(i+1, j) denote the pixel values of R_org at the corresponding coordinate positions, and R_dis(i+2, j), R_dis(i+2, j+1), R_dis(i+2, j+2), R_dis(i, j), R_dis(i, j+1), R_dis(i, j+2), R_dis(i+1, j+2) and R_dis(i+1, j) denote the pixel values of R_dis at the corresponding coordinate positions;
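For reference, the Sobel filtering used in step ④ (and again in claims ②-a1/②-b1) can be sketched as below. Which 3×3 kernel the patent labels "horizontal" versus "vertical" is a naming assumption; this is an illustrative sketch, not the claimed implementation:

```python
import numpy as np

# Standard 3x3 Sobel kernels. Labelling SOBEL_H as the "horizontal"
# operator is an assumed convention; it responds to row-wise change.
SOBEL_H = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=float)
SOBEL_V = SOBEL_H.T

def sobel_gradients(img):
    """Return (horizontal, vertical) gradient maps of a 2-D image,
    computed over valid 3x3 windows (output is (H-2) x (W-2))."""
    img = img.astype(float)
    H, W = img.shape
    gh = np.zeros((H - 2, W - 2))
    gv = np.zeros((H - 2, W - 2))
    for i in range(H - 2):
        for j in range(W - 2):
            win = img[i:i + 3, j:j + 3]
            gh[i, j] = (win * SOBEL_H).sum()  # responds to row-wise change
            gv[i, j] = (win * SOBEL_V).sum()  # responds to column-wise change
    return gh, gv
```

On a pure left-to-right intensity ramp, gh is zero everywhere and gv is a constant, matching the intuition that only one directional derivative is non-zero.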
⑤ Compute the structural direction distortion map between the co-located blocks of L_org and L_dis. Denote its coefficient matrix as E_L and the coefficient value at coordinate position (i, j) as E_L(i, j), where C_2 is a constant;
Compute the structural direction distortion map between the co-located blocks of R_org and R_dis. Denote its coefficient matrix as E_R and the coefficient value at coordinate position (i, j) as E_R(i, j);
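The direction comparison of step ⑤ can be sketched from the step-④ gradient maps. The cosine-of-angle form below, and the role of C2 as a stabilising constant, are assumptions; the patent's E_L/E_R formula survives only as an image in this extraction:

```python
import numpy as np

def direction_distortion(gh_org, gv_org, gh_dis, gv_dis, C2=0.01):
    """Per-pixel gradient-direction similarity between an original and
    a distorted image, from their horizontal/vertical Sobel gradient
    maps. The cosine-of-angle form and the stabilising role of C2 are
    assumed stand-ins for the patent's formula."""
    dot = gh_org * gh_dis + gv_org * gv_dis
    mag = np.hypot(gh_org, gv_org) * np.hypot(gh_dis, gv_dis)
    return (dot + C2) / (mag + C2)  # 1 where gradient directions agree
```

When the distorted gradients equal the original ones the similarity is exactly 1 at every pixel.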
⑥ Compute the structural distortion evaluation value of L_org and L_dis, denoted Q_L: Q_L = ω_1 × Q_m,L + ω_2 × Q_nm,L, where ω_1 is the weight value of the sensitive region in L_org and L_dis and ω_2 is the weight value of the non-sensitive region;
Compute the structural distortion evaluation value of R_org and R_dis, denoted Q_R: Q_R = ω′_1 × Q_m,R + ω′_2 × Q_nm,R, where ω′_1 is the weight value of the sensitive region in R_org and R_dis and ω′_2 is the weight value of the non-sensitive region;
⑦ From Q_L and Q_R, compute the spatial frequency similarity measure of the distorted stereo image S_dis to be evaluated relative to the original undistorted stereo image S_org, denoted Q_F: Q_F = β_1 × Q_L + (1 − β_1) × Q_R, where β_1 is the weight of Q_L;
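Steps ⑥ and ⑦ reduce to weighted sums, sketched below. The numeric weights are illustrative placeholders only; the patent obtains its actual weights by fitting (see claim 3):

```python
def structural_quality(Qm, Qnm, w_sens=0.8, w_nonsens=0.2):
    """Step 6: weighted sum of the sensitive- and non-sensitive-region
    scores of one viewpoint. Default weights are illustrative, not the
    patent's fitted values."""
    return w_sens * Qm + w_nonsens * Qnm

def spatial_quality(QL, QR, beta1=0.5):
    """Step 7: weighted sum of the left- and right-view scores; beta1
    is a placeholder for the fitted weight of Q_L."""
    return beta1 * QL + (1 - beta1) * QR
```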
⑧ Compute the absolute difference image of L_org and R_org, with pixel values |L_org(x, y) − R_org(x, y)|, and the absolute difference image of L_dis and R_dis, with pixel values |L_dis(x, y) − R_dis(x, y)|, where "| |" is the absolute value symbol;
⑨ Divide the two absolute difference images each into (W_LR/8) × (H_LR/8) non-overlapping blocks of size 8×8, then apply singular value decomposition to every block of each image to obtain, for each difference image, a singular value map composed of the singular value matrices of its blocks. Denote the coefficient matrix of the singular value map of the undistorted difference image as G_org, with a recorded singular value at coordinate position (p, q) of the singular value matrix of its n-th block, and the coefficient matrix of the singular value map of the distorted difference image as G_dis, with a recorded singular value at coordinate position (p, q) of the singular value matrix of its n-th block, where W_LR and H_LR are the width and height of the difference images, 0 ≤ p ≤ 7, 0 ≤ q ≤ 7;
⑩ Compute the singular value deviation evaluation value between the singular value map of the undistorted difference image and that of the distorted difference image, denoted K, where the terms of its formula are the singular values at coordinate position (p, p) of the singular value matrix of the n-th block of G_org and of G_dis, respectively;
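Steps ⑧ to ⑩ can be sketched as follows. The per-block SVD matches the claim; the aggregation into a single value K is an assumed stand-in (mean absolute deviation of singular values), since the patent's K formula is not reproduced in this extraction:

```python
import numpy as np

def abs_difference(L, R):
    """Step 8: absolute left-right difference image."""
    return np.abs(L.astype(float) - R.astype(float))

def singular_value_deviation(D_org, D_dis):
    """Steps 9-10: SVD of co-located non-overlapping 8x8 blocks of the
    two difference images, then a mean absolute deviation of their
    singular values (assumed stand-in for the patent's K)."""
    H, W = D_org.shape
    devs = []
    for i in range(0, H - 7, 8):
        for j in range(0, W - 7, 8):
            s_org = np.linalg.svd(D_org[i:i + 8, j:j + 8], compute_uv=False)
            s_dis = np.linalg.svd(D_dis[i:i + 8, j:j + 8], compute_uv=False)
            devs.append(np.abs(s_org - s_dis).mean())
    return float(np.mean(devs))
```

For identical difference images the deviation is zero, consistent with an undistorted stereo pair having no depth-perception loss under this measure.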
Apply singular value decomposition to each of the two absolute difference images to obtain, for each, 2 orthogonal matrices and 1 singular value matrix. For the undistorted difference image, denote the two orthogonal matrices as χ_org and V_org and the singular value matrix as O_org; for the distorted difference image, denote the two orthogonal matrices as χ_dis and V_dis and the singular value matrix as O_dis;
Compute the residual images of the two difference images after removal of their singular values. Denote the residual image of the undistorted difference image as X_org, X_org = χ_org × Λ × V_org, and that of the distorted difference image as X_dis, X_dis = χ_dis × Λ × V_dis, where Λ is the identity matrix, of the same size as O_org and O_dis;
Compute the mean deviation ratio of X_org and X_dis, where x denotes the abscissa and y the ordinate of the pixels in X_org and X_dis;
Compute the stereo perception evaluation metric of the distorted stereo image S_dis to be evaluated relative to the original undistorted stereo image S_org, denoted Q_S, where τ is a constant that adjusts the relative importance of K and the mean deviation ratio in Q_S;
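The residual-image construction, the mean deviation ratio, and the combination into Q_S can be sketched as below. The residual follows the claim (singular value matrix replaced by the identity); the ratio formula and the τ-weighted sum are assumed stand-ins, since both formulas survive only as images in this extraction:

```python
import numpy as np

def residual_after_sv_removal(D):
    """Replace the singular value matrix of D by the identity and
    recompose: the claim's 'residual image with singular values
    removed' (X = chi x Lambda x V, Lambda the identity)."""
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    return U @ np.eye(len(s)) @ Vt

def mean_deviation_ratio(X_org, X_dis):
    """Assumed stand-in for the claim's mean deviation ratio: mean
    absolute pixel deviation relative to the reference's mean
    magnitude. The claim's formula is not reproduced here."""
    denom = np.abs(X_org).mean()
    return float(np.abs(X_org - X_dis).mean() / denom) if denom else 0.0

def stereo_perception(K, ratio, tau=0.5):
    """Assumed tau-weighted combination of K and the mean deviation
    ratio into Q_S; the claim only states that tau balances the two."""
    return tau * K + (1 - tau) * ratio
```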
2. The stereoscopic image objective quality evaluation method based on structural distortion according to claim 1, wherein in step ② the coefficient matrix A_L of the sensitive-region map corresponding to L_org and L_dis is obtained as follows:
②-a1. Apply the horizontal and vertical Sobel operators to L_org to obtain its horizontal and vertical gradient maps, denoted Z_h,l1 and Z_v,l1 respectively, then compute the gradient magnitude map of L_org, denoted Z_l1, where Z_l1(x, y) is the gradient magnitude of the pixel at coordinate position (x, y), Z_h,l1(x, y) is the horizontal gradient value and Z_v,l1(x, y) is the vertical gradient value at (x, y), 1 ≤ x ≤ W′, 1 ≤ y ≤ H′, with W′ the width and H′ the height of Z_l1;
②-a2. Apply the horizontal and vertical Sobel operators to L_dis to obtain its horizontal and vertical gradient maps, denoted Z_h,l2 and Z_v,l2 respectively, then compute the gradient magnitude map of L_dis, denoted Z_l2, where Z_l2(x, y) is the gradient magnitude of the pixel at coordinate position (x, y), Z_h,l2(x, y) is the horizontal gradient value and Z_v,l2(x, y) is the vertical gradient value at (x, y), 1 ≤ x ≤ W′, 1 ≤ y ≤ H′, with W′ the width and H′ the height of Z_l2;
②-a3. Compute the threshold T required for region division, where α is a constant and Z_l1(x, y), Z_l2(x, y) are the gradient magnitudes of the pixels at coordinate position (x, y) in Z_l1 and Z_l2;
②-a4. Denote the gradient magnitude of the pixel at coordinate position (i, j) in Z_l1 as Z_l1(i, j), and that in Z_l2 as Z_l2(i, j). If Z_l1(i, j) > T or Z_l2(i, j) > T holds, the pixel at coordinate position (i, j) in L_org and L_dis is judged to belong to the sensitive region and A_L(i, j) = 1; otherwise it is judged to belong to the non-sensitive region and A_L(i, j) = 0, where 0 ≤ i ≤ W−8 and 0 ≤ j ≤ H−8;
In step ②, the coefficient matrix A_R of the sensitive-region map corresponding to R_org and R_dis is obtained as follows:
②-b1. Apply the horizontal and vertical Sobel operators to R_org to obtain its horizontal and vertical gradient maps, denoted Z_h,r1 and Z_v,r1 respectively, then compute the gradient magnitude map of R_org, denoted Z_r1, where Z_r1(x, y) is the gradient magnitude of the pixel at coordinate position (x, y), Z_h,r1(x, y) is the horizontal gradient value and Z_v,r1(x, y) is the vertical gradient value at (x, y), 1 ≤ x ≤ W′, 1 ≤ y ≤ H′, with W′ the width and H′ the height of Z_r1;
②-b2. Apply the horizontal and vertical Sobel operators to R_dis to obtain its horizontal and vertical gradient maps, denoted Z_h,r2 and Z_v,r2 respectively, then compute the gradient magnitude map of R_dis, denoted Z_r2, where Z_r2(x, y) is the gradient magnitude of the pixel at coordinate position (x, y), Z_h,r2(x, y) is the horizontal gradient value and Z_v,r2(x, y) is the vertical gradient value at (x, y), 1 ≤ x ≤ W′, 1 ≤ y ≤ H′, with W′ the width and H′ the height of Z_r2;
②-b3. Compute the threshold T′ required for region division, where α is a constant and Z_r1(x, y), Z_r2(x, y) are the gradient magnitudes of the pixels at coordinate position (x, y) in Z_r1 and Z_r2;
②-b4. Denote the gradient magnitude of the pixel at coordinate position (i, j) in Z_r1 as Z_r1(i, j), and that in Z_r2 as Z_r2(i, j). If Z_r1(i, j) > T′ or Z_r2(i, j) > T′ holds, the pixel at coordinate position (i, j) in R_org and R_dis is judged to belong to the sensitive region and A_R(i, j) = 1; otherwise it is judged to belong to the non-sensitive region and A_R(i, j) = 0, where 0 ≤ i ≤ W−8 and 0 ≤ j ≤ H−8.
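The region partition of steps ②-a4 and ②-b4 can be sketched as follows. The comparison rule (sensitive if either gradient magnitude exceeds the threshold) follows the claim; the concrete threshold form T = α × mean of all magnitudes is an assumption, since the claim's threshold formula is not reproduced in this extraction:

```python
import numpy as np

def sensitive_mask(Z1, Z2, alpha=1.0):
    """Mark a pixel sensitive (1) when its gradient magnitude in either
    the original (Z1) or the distorted (Z2) image exceeds a threshold.
    T = alpha * mean(all magnitudes) is an assumed form; only the
    comparison rule follows the claim."""
    T = alpha * np.concatenate([Z1.ravel(), Z2.ravel()]).mean()
    return ((Z1 > T) | (Z2 > T)).astype(int)
```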
3. The stereoscopic image objective quality evaluation method based on structural distortion according to claim 1 or 2, wherein β_1 in step ⑦ is obtained as follows:
⑦-1. Using n undistorted stereo images, build a set of distorted stereo images under different distortion types and different distortion degrees; the set contains a plurality of distorted stereo images, n ≥ 1;
⑦-2. Obtain the difference mean opinion score of each distorted stereo image in the set by a subjective quality evaluation method, denoted DMOS, DMOS = 100 − MOS, where MOS is the mean opinion score and DMOS ∈ [0, 100];
⑦-3. Following the operations of steps ① to ⑥, compute for the left viewpoint image of each distorted stereo image in the set, relative to the left viewpoint image of the corresponding undistorted stereo image, the sensitive-region evaluation value Q_m,L and the non-sensitive-region evaluation value Q_nm,L;
⑦-4. Fit the difference mean opinion scores DMOS of the distorted stereo images in the set against the corresponding Q_m,L and Q_nm,L values by a mathematical fitting method, thereby obtaining the value of β_1.
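The fit of step ⑦-4 can be sketched as a regression of DMOS on the two region scores. The claim only says "a mathematical fitting method"; ordinary least squares with an intercept is an assumed concrete choice here, not the patent's stated procedure:

```python
import numpy as np

def fit_region_weights(Qm, Qnm, dmos):
    """Least-squares fit of DMOS against the sensitive- and
    non-sensitive-region scores. Ordinary least squares with an
    intercept is an assumed stand-in for the claim's unspecified
    'mathematical fitting method'."""
    A = np.column_stack([Qm, Qnm, np.ones(len(dmos))])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(dmos, dtype=float), rcond=None)
    return coeffs  # [weight for Qm, weight for Qnm, intercept]
```

On synthetic data generated from known weights, the fit recovers those weights exactly.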
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201210145034.0A CN102708568B (en) | 2012-05-11 | 2012-05-11 | Stereoscopic image objective quality evaluation method on basis of structural distortion |
Publications (2)
Publication Number | Publication Date |
---|---|
CN102708568A true CN102708568A (en) | 2012-10-03 |
CN102708568B CN102708568B (en) | 2014-11-05 |
Family
ID=46901288
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201210145034.0A Expired - Fee Related CN102708568B (en) | 2012-05-11 | 2012-05-11 | Stereoscopic image objective quality evaluation method on basis of structural distortion |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN102708568B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050089246A1 (en) * | 2003-10-27 | 2005-04-28 | Huitao Luo | Assessing image quality |
US20080226147A1 (en) * | 2007-03-16 | 2008-09-18 | Sti Medical Systems, Llc | Method to provide automated quality feedback to imaging devices to achieve standardized imaging data |
US20080232667A1 (en) * | 2007-03-22 | 2008-09-25 | Fujifilm Corporation | Device, method and recording medium containing program for separating image component, and device, method and recording medium containing program for generating normal image |
CN101833766A (en) * | 2010-05-11 | 2010-09-15 | 天津大学 | Stereo image objective quality evaluation algorithm based on GSSIM |
CN101872479A (en) * | 2010-06-09 | 2010-10-27 | 宁波大学 | Three-dimensional image objective quality evaluation method |
CN102142145A (en) * | 2011-03-22 | 2011-08-03 | 宁波大学 | Image quality objective evaluation method based on human eye visual characteristics |
Non-Patent Citations (2)
Title |
---|
ZHOU Junming et al.: "An objective quality evaluation model for stereoscopic images using singular value decomposition", Journal of Computer-Aided Design & Computer Graphics, vol. 23, no. 5, 31 May 2011 (2011-05-31), pages 870-877 *
SHEN Lili et al.: "Image quality assessment method based on 3D features and structural similarity", Journal of Optoelectronics·Laser, vol. 21, no. 11, 30 November 2010 (2010-11-30), pages 1713-1719 *
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103108209A (en) * | 2012-12-28 | 2013-05-15 | 宁波大学 | Stereo image objective quality evaluation method based on integration of visual threshold value and passage |
CN103108209B (en) * | 2012-12-28 | 2015-03-11 | 宁波大学 | Stereo image objective quality evaluation method based on integration of visual threshold value and passage |
CN104036502B (en) * | 2014-06-03 | 2016-08-24 | 宁波大学 | A kind of without with reference to fuzzy distortion stereo image quality evaluation methodology |
CN104574363A (en) * | 2014-12-12 | 2015-04-29 | 南京邮电大学 | Full reference image quality assessment method in consideration of gradient direction difference |
CN104574363B (en) * | 2014-12-12 | 2017-09-29 | 南京邮电大学 | A kind of full reference image quality appraisement method for considering gradient direction difference |
CN108074241A (en) * | 2018-01-16 | 2018-05-25 | 深圳大学 | Quality score method, apparatus, terminal and the storage medium of target image |
CN108074241B (en) * | 2018-01-16 | 2021-10-22 | 深圳大学 | Quality scoring method and device for target image, terminal and storage medium |
CN110232680A (en) * | 2019-05-30 | 2019-09-13 | 广智微芯(扬州)有限公司 | A kind of image blur evaluation method and device |
CN110232680B (en) * | 2019-05-30 | 2021-04-27 | 广智微芯(扬州)有限公司 | Image ambiguity evaluation method and device |
CN113920065A (en) * | 2021-09-18 | 2022-01-11 | 天津大学 | Imaging quality evaluation method for visual inspection system in industrial field |
CN113920065B (en) * | 2021-09-18 | 2023-04-28 | 天津大学 | Imaging quality evaluation method for visual detection system of industrial site |
Also Published As
Publication number | Publication date |
---|---|
CN102708568B (en) | 2014-11-05 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20141105 Termination date: 20210511 |