CN102722888A - Stereoscopic image objective quality evaluation method based on physiological and psychological stereoscopic vision

Stereoscopic image objective quality evaluation method based on physiological and psychological stereoscopic vision

Info

Publication number
CN102722888A
CN102722888A
Authority
CN
China
Legal status
Pending
Application number
CN2012101635968A
Other languages
Chinese (zh)
Inventor
沈丽丽
张晶
侯春萍
Current Assignee
Tianjin University
Original Assignee
Tianjin University
Priority date
Filing date
Publication date
Application filed by Tianjin University
Priority to CN2012101635968A
Publication of CN102722888A


Abstract

The invention belongs to the field of image processing and provides a quality evaluation method that comprehensively considers physiological and psychological stereoscopic vision cues and can effectively realize objective quality evaluation of a stereoscopic image. According to the technical scheme of the invention, a stereoscopic image objective quality evaluation method based on physiological and psychological stereoscopic vision comprises the following steps: first, calculating the absolute difference of the left and right views; second, converting the absolute difference into a grayscale image, which is taken as a feature parameter characterizing the quality of the stereoscopic image; third, carrying out K-means cluster segmentation on the converted grayscale image; fourth, treating different class maps differently by assigning them different weights; fifth, calculating the WMSSIM value between the class map of the original stereoscopic image and the class map of the distorted stereoscopic image; and sixth, finally obtaining the stereoscopic image quality evaluation index 3DM. The method is mainly applied to stereoscopic image objective quality evaluation.

Description

Stereoscopic image objective quality evaluation method based on physiological and psychological stereoscopic vision
Technical field
The invention belongs to the field of image processing, in particular to objective quality evaluation systems for stereoscopic images, and specifically relates to a stereoscopic image objective quality evaluation method based on physiological and psychological stereoscopic vision.
Background technology
Current quality evaluation methods for digital planar images fall into two categories: subjective evaluation and objective evaluation. Subjective quality evaluation requires a large number of observers, each of whom must repeatedly run many tests on many test images; it is time-consuming and expensive, is subject to the complexity and variability of each subject's psychological and physiological state, is difficult to administer, and cannot be performed in real time. Establishing objective quality evaluation methods whose results match subjective evaluation has therefore become an urgent research topic. Objective evaluation follows a fixed measurement standard: the characteristic parameters that describe image quality are first modeled mathematically to obtain the relevant quality indices, and the image is then evaluated according to the indices obtained.
Objective quality evaluation methods divide into two categories: methods based on statistics and methods based on human visual characteristics. Traditional objective image quality evaluation based on mathematical statistics mainly comprises the mean squared error (MSE) and peak signal-to-noise ratio (PSNR) methods, together with derived measures such as root-mean-square error. The idea behind error-statistics-based evaluation is to compare local differences between the distorted image and the original image through a feature quantity, aggregate them into a single average statistic over the whole image, and then associate that statistic with image quality. The advantage of such methods is simplicity, so they are widely used in many applications; the drawback is that they consider neither the influence of these differences on human visual perception nor the characteristics of the image itself, so they sometimes fail to reflect the true quality of the visual information, and their results often differ considerably from subjective evaluation.
In the early 1990s, new methods that exploit the human visual system to evaluate image quality appeared, moving research on image quality evaluation into a new stage: from the earlier simple pixel-level error statistics to error statistics combined with human visual perception characteristics, and a batch of fairly complete computational models of human vision was proposed. The purpose of these models is to imitate the human visual system when objectively evaluating image quality. Most of them, however, target planar images and do not address objective quality evaluation of stereoscopic visual information. These visual models cannot simply be reused when evaluating stereoscopic information: a stereoscopic image differs from a planar image in that there is high correlation between the adjacent viewpoints of a stereoscopic image. Even if the image quality of two adjacent viewpoints is very high, a small parallax between the viewpoints reduces the stereoscopic effect the observer perceives.
Summary of the invention
The present invention aims to overcome the deficiencies of the prior art by providing a quality evaluation method that comprehensively considers physiological and psychological stereoscopic vision cues; through this method, objective quality evaluation of stereoscopic images can be performed effectively. To achieve the above object, the technical scheme adopted by the invention, a stereoscopic image objective quality evaluation method based on physiological and psychological stereoscopic vision, comprises the following steps:
Step 1: take a viewpoint pair with binocular parallax and compute the absolute difference between the left and right views, Diff_0 = |f(x_1) - f(x_2)|, where f(x_1) and f(x_2) are the pixel values of the left and right views;
Step 2: convert the absolute difference into a grayscale image, Diff = rgb2gray(Diff_0), taken as the feature parameter characterizing stereoscopic image quality; rgb2gray() is the function that converts a true-color image into a grayscale image;
Step 3: apply K-means cluster segmentation to the converted grayscale image, simulating the human visual habit, when viewing a natural scene, of clustering objects with strong similarity;
Suppose X = {x_1, x_2, ..., x_n} is a set of n objects. During clustering, the K-means algorithm partitions the object set X into K classes, denoted class_k (k = 1, 2, ..., K). The objective function P, the sum over every class of the distances from all its points to the cluster center, serves as the clustering criterion and is minimized:

P = Σ_{k=1}^{K} Σ_{s=1}^{n_k} d(x_s, z_k)

where x_s is a data object in a class, n_k is the number of data objects in class k, the class mean is

x̄_k = (1/n_k) Σ_{s=1}^{n_k} x_s

and z_1, z_2, ..., z_K denote the centers of the clusters; d(x_s, z_k) is the distance (or similarity) between object s and the center of class k, here the squared Euclidean distance; algorithms based on this distance metric tend to find classes of similar density and size:

d(x_s, z_k) = (x_s - z_k)^2

The input is the number of clusters K and the sample set containing n data objects; the output is K clusters satisfying the minimum-variance criterion;
Step 4: the content within any one class map obtained by K-means segmentation of the absolute-difference image of the original stereoscopic image has similar properties, so the pixels of a class map are treated as equally important; the class maps themselves are treated differently by assigning different weights to different class maps. w_seg_k is the weighting coefficient of class k, determined by the following formula:

w_seg_k = m_k / Σ_{k=1}^{K} m_k

where m_k is the gray-level mean of class k, i.e. z_k;
Step 5: using the weighted mean structural similarity criterion, compute the WMSSIM value between the class map of the original stereoscopic image and the class map of the distorted stereoscopic image:

WMSSIM(x, y) = Σ_{i=1}^{B} [w_blk_i · MSSIM(Diff_1, Diff_2)]

where w_blk_i is the weight coefficient of each sub-block;
Step 6: finally, using the WMSSIM criterion, compute the quality evaluation index 3DM between the clustered original and distorted difference images:

3DM = Σ_{k=1}^{K} [w_seg_k · WMSSIM_k(c_1·Diff_1, c_2·Diff_2)]

where Diff_1 and Diff_2 are the class-map data of the absolute-difference images of the original and distorted stereoscopic images, respectively, and c_1 and c_2 are the weights assigned to class map 1 and class map 2;
Step 7: evaluate stereoscopic image quality from the resulting index; 3DM is a normalized value, and the larger the value, the better the image quality.
Using the weighted mean structural similarity criterion, the WMSSIM value between the class map of the original stereoscopic image and the class map of the distorted stereoscopic image is computed as follows:
1. Divide the absolute-difference image of the stereoscopic image into B equal-sized sub-blocks, each denoted B_i and containing N = M × M' pixels;
2. For each sub-block B_i, compute the influence factors of brightness, texture detail, and block spatial position on human vision, and determine the weight of each sub-block;
A. Influence of brightness
Subjective brightness is proportional to the logarithm of the light stimulus intensity, i.e.

I = a · lg(I_s / I_0)

where a is a constant, I is the subjective brightness value, I_s is the light stimulus intensity, and I_0 is the absolute threshold;
Let I_imax = max{Lum_1, Lum_2, ..., Lum_N} be the maximum brightness value in block i, where Lum_l is the brightness of each pixel in the block, and let

I_iavg = (1/N) Σ_{l=1}^{N} Lum_l

be the average brightness value of sub-block i. Approximating the light stimulus intensity of the sub-block by the ratio I_imax / I_iavg and setting a = 1, the weight influence factor s_i caused by brightness variation is

s_i = lg(I_imax / I_iavg)

B. Influence of texture detail
The texture-detail influence factor d_i, the sample standard deviation of the block brightness, is expressed as

d_i = [ (1/(N-1)) Σ_{l=1}^{N} (Lum_l - (1/N) Σ_{l=1}^{N} Lum_l)^2 ]^{1/2}

C. Influence of spatial position
The spatial-position influence factor r_i expresses the fact that the photoreceptor cells of the human eye are distributed most densely in the macular region of the retina:

r_i = 1 - sqrt((x_io - x_c)^2 + (y_io - y_c)^2) / r

where (x_io, y_io) is the center of block i, (x_c, y_c) is the center coordinate of the original image, and r is the maximum distance from any point in the original image to the center coordinate;
D. Determination of the weighting coefficient
Combining the three key factors that affect human vision within a sub-block, the brightness influence factor s_i, the texture influence factor d_i, and the spatial-position influence factor r_i, the weight coefficient w_blk_i of each sub-block is its combined influence factor as a proportion of the total influence factor of the whole image:

w_blk_i = W_i / W_Sum = sqrt(s_i^2 + d_i^2 + r_i^2) / W_Sum

where W_i is the combined influence factor of each sub-block and W_Sum is the sum of the combined influence factors W_i of all sub-blocks:

W_Sum = Σ_{i=1}^{B} W_i

3. Compute the structural similarity of each sub-block B_i and take the weighted sum according to the human visual characteristics of each sub-block, giving the weighted mean structural similarity evaluation value:

W(x, y) = Σ_{i=1}^{B} [w_blk_i · MSSIM(Diff_1, Diff_2)]

4. Performing the same operation on the R, G, and B components and averaging yields the 3D_SSIM value. Because the absolute-difference image of a stereoscopic image is in color, W(x, y) must be computed separately for the R, G, and B components of the difference map, giving W_R(x, y), W_G(x, y), and W_B(x, y); the parameter 3D_SSIM is obtained after statistical averaging:

WMSSIM(x, y) = E(W_R(x, y) + W_G(x, y) + W_B(x, y))

Here Diff(x) and Diff(y) are the absolute-difference image data of the original and distorted stereoscopic pairs, respectively, and WMSSIM(x, y) is the weighted mean structural similarity evaluation value obtained with the WMSSIM criterion from the absolute-difference information of the distorted stereoscopic image; for a grayscale image, W(x, y) is itself WMSSIM(x, y).
Technical features and effects of the invention:
The present invention combines physiological stereoscopic vision cues with psychological ones: the feature information of the stereoscopic image is segmented in the manner of psychological stereoscopic perception, and the process by which the eye observes image information is simulated with gray-level clustering. A new image quality evaluation method uniting psychological and physiological stereoscopic vision is thereby proposed, together with its implementation. Experimental results show that the method of the invention is simple, can be used to advantage for stereoscopic image quality assessment, and yields evaluation results for distorted stereoscopic image quality that are highly consistent with subjective evaluation results.
Description of drawings
Fig. 1: left view.
Fig. 2: right view.
Fig. 3: absolute-difference map.
Fig. 4: class map 1.
Fig. 5: class map 2.
Fig. 6: quality evaluation result curves when c1 = c2.
Fig. 7: quality evaluation result curves when c1 = 0.8, c2 = 0.2.
Fig. 8: subjective evaluation results.
Fig. 9: overall block diagram.
Embodiment
The objective stereoscopic image quality evaluation method that comprehensively considers physiological and psychological stereoscopic vision mainly comprises feature-information extraction and effective representation for the stereoscopic image, stereoscopic vision feature modeling, a stereoscopic image quality evaluation measurement criterion, and feedback correction from subjective evaluation.
First, the absolute-difference information of the original and distorted stereoscopic images is extracted; after gray-level clustering of the absolute-difference information, the difference information after clustering is computed with the weighted mean structural similarity function to obtain the final evaluation index.
Step 1: take a viewpoint pair with binocular parallax and compute the absolute difference between the left and right views, Diff_0 = |f(x_1) - f(x_2)|, where f(x_1) and f(x_2) are the pixel values of the left and right views.
Step 2: convert the absolute difference into a grayscale image, Diff = rgb2gray(Diff_0), taken as the feature parameter characterizing stereoscopic image quality.
Step 3: apply K-means cluster segmentation to the converted grayscale image, simulating the human visual habit, when viewing a natural scene, of clustering objects with strong similarity.
Suppose X = {x_1, x_2, ..., x_n} is a set of n objects. During clustering, the K-means algorithm partitions the object set X into K classes, denoted class_k (k = 1, 2, ..., K). The objective function P, the sum over every class of the distances from all its points to the cluster center, serves as the clustering criterion and is minimized:

P = Σ_{k=1}^{K} Σ_{s=1}^{n_k} d(x_s, z_k)

where x_s is a data object in a class, n_k is the number of data objects in class k, the class mean is

x̄_k = (1/n_k) Σ_{s=1}^{n_k} x_s

and z_1, z_2, ..., z_K denote the centers of the clusters; d(x_s, z_k) is the distance (or similarity) between object s and the center of class k, here the squared Euclidean distance; algorithms based on this distance metric tend to find classes of similar density and size:

d(x_s, z_k) = (x_s - z_k)^2

The input is the number of clusters K and the sample set containing n data objects; the output is K clusters satisfying the minimum-variance criterion.
Step 4: the content within any one class map obtained by K-means segmentation of the absolute-difference image of the original stereoscopic image has similar properties, so the pixels of a class map are treated as equally important; the class maps themselves are treated differently by assigning different weights to different class maps. w_seg_k is the weighting coefficient of class k, determined by the following formula:

w_seg_k = m_k / Σ_{k=1}^{K} m_k

where m_k is the gray-level mean of class k, i.e. z_k.
Step 5: using the weighted mean structural similarity criterion, compute the WMSSIM value between the class map of the original stereoscopic image and the class map of the distorted stereoscopic image:

WMSSIM(x, y) = Σ_{i=1}^{B} [w_blk_i · MSSIM(Diff_1, Diff_2)]

where w_blk_i is the weight coefficient of each sub-block.
Step 6: finally, using the WMSSIM criterion, compute the quality evaluation index 3DM between the clustered original and distorted difference images:

3DM = Σ_{k=1}^{K} [w_seg_k · WMSSIM_k(c_1·Diff_1, c_2·Diff_2)]

where Diff_1 and Diff_2 are the class-map data of the absolute-difference images of the original and distorted stereoscopic images, respectively, and c_1 and c_2 are the weights assigned to class map 1 and class map 2.
Step 7: evaluate stereoscopic image quality from the resulting index; 3DM is a normalized value, and the larger the value, the better the image quality.
The invention is further described below with reference to the accompanying drawings and an embodiment.
The absolute-difference information of a stereoscopic pair directly affects the quality of the stereoscopic image. The contours of the absolute-difference map of the original (undistorted) stereoscopic image are clear and consistent with the contours of the original stereoscopic image itself, whereas the contours of the absolute-difference map of a distorted stereoscopic pair are disordered and differ greatly from those of the original stereoscopic image. The quality of a distorted stereoscopic image can therefore be judged by comparing their degree of difference. PSNR and MSE consider only the differences between pixels, not the visual characteristics of the human eye; moreover, these evaluation methods estimate the difference between the original and distorted images as a whole and cannot reflect how distortion differs across local regions of the image. In some special cases the PSNR value of an image drops after noise is added while the observed image quality actually increases. The degree of image distortion depends to a great extent on the content of the image, and neither MSE nor PSNR considers the texture masking effect of the image content. To better approach subjective evaluation results, human visual perception must be simulated.
Step 1: Fig. 1 and Fig. 2 are a viewpoint pair with binocular parallax, and Fig. 3 is the absolute-difference map between the left and right views of this stereoscopic pair, Diff_0 = |f(x_1) - f(x_2)|, where f(x_1) and f(x_2) are the pixel values of the left and right views.
Step 2: convert the absolute difference into a grayscale image, Diff = rgb2gray(Diff_0), taken as the feature parameter characterizing stereoscopic image quality.
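As a rough illustration of steps 1 and 2, the sketch below computes the absolute difference of two views and converts it to grayscale with NumPy. The function names and the toy 2×2 views are ours, and the conversion uses the common ITU-R BT.601 luminance weights (the coefficients usually attributed to MATLAB's rgb2gray); the patent itself specifies only rgb2gray().

```python
import numpy as np

def absolute_difference(left, right):
    # Step 1: Diff0 = |f(x1) - f(x2)|, per pixel, between left and right views
    return np.abs(left.astype(np.float64) - right.astype(np.float64))

def rgb2gray(rgb):
    # Step 2: true-color -> grayscale with the usual luminance weights
    return rgb @ np.array([0.2989, 0.5870, 0.1140])

# toy 2x2 views (illustrative values, not from the patent)
left = np.zeros((2, 2, 3))
right = np.full((2, 2, 3), 10.0)
diff0 = absolute_difference(left, right)
diff = rgb2gray(diff0)
print(diff.shape)  # (2, 2)
```

The resulting single-channel Diff array is what the clustering step below would operate on.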
Step 3: apply K-means cluster segmentation to the converted grayscale image, simulating the human visual habit, when viewing a natural scene, of clustering objects with strong similarity.
Suppose X = {x_1, x_2, ..., x_n} is a set of n objects. During clustering, the K-means algorithm partitions the object set X into K classes, denoted class_k (k = 1, 2, ..., K). The objective function P, the sum over every class of the distances from all its points to the cluster center, serves as the clustering criterion and is minimized:

P = Σ_{k=1}^{K} Σ_{s=1}^{n_k} d(x_s, z_k)

where x_s is a data object in a class, n_k is the number of data objects in class k, x̄_k = (1/n_k) Σ_{s=1}^{n_k} x_s denotes the class mean, and z_1, z_2, ..., z_K denote the centers of the clusters; d(x_s, z_k) is the distance (or similarity) between object s and the center of class k, here the squared Euclidean distance; algorithms based on this distance metric tend to find classes of similar density and size:

d(x_s, z_k) = (x_s - z_k)^2

The input is the number of clusters K and the sample set containing n data objects; the output is K clusters satisfying the minimum-variance criterion.
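A minimal sketch of this clustering step, assuming the gray levels have been flattened to a 1-D array: a plain Lloyd iteration that minimizes the objective P above under the squared Euclidean distance. The helper name and toy data are ours.

```python
import numpy as np

def kmeans_1d(values, K, iters=50, seed=0):
    # Lloyd's algorithm on 1-D gray levels: minimize
    # P = sum_k sum_s d(x_s, z_k), with d the squared Euclidean distance
    rng = np.random.default_rng(seed)
    centers = rng.choice(values, size=K, replace=False).astype(float)
    for _ in range(iters):
        # assignment: each value goes to its nearest center
        labels = np.argmin((values[:, None] - centers[None, :]) ** 2, axis=1)
        # update: each center becomes the mean of its class
        for k in range(K):
            if np.any(labels == k):
                centers[k] = values[labels == k].mean()
    return labels, centers

# two well-separated groups of gray levels (illustrative)
gray = np.array([0.0, 1.0, 2.0, 40.0, 41.0, 42.0])
labels, centers = kmeans_1d(gray, K=2)
print(sorted(float(c) for c in centers))  # [1.0, 41.0]
```

In the patent's pipeline K is given in advance and the labels, reshaped to the image grid, define the class maps of step 4.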
Step 4: the content within any one class map obtained by K-means segmentation of the absolute-difference image of the original stereoscopic image has similar properties, so the pixels of a class map are treated as equally important; the class maps themselves are treated differently by assigning different weights to different class maps. w_seg_k is the weighting coefficient of class k, determined by the following formula:

w_seg_k = m_k / Σ_{k=1}^{K} m_k

where m_k is the gray-level mean of class k, i.e. z_k.
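The class weights reduce to a one-line normalization of the cluster centers; a sketch with made-up gray-level means:

```python
import numpy as np

def class_weights(centers):
    # Step 4: w_seg_k = m_k / sum_k m_k, with m_k the gray mean of class k (= z_k)
    m = np.asarray(centers, dtype=float)
    return m / m.sum()

w = class_weights([20.0, 80.0])  # hypothetical class means
print(w.tolist())  # [0.2, 0.8]
```

Brighter class maps, i.e. regions of larger absolute difference, thus receive proportionally larger weight.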
Step 5: compute the weighted mean structural similarity (WMSSIM: Weighted Mean SSIM) value between the class map of the original stereoscopic image and the class map of the distorted stereoscopic image.
A fundamental characteristic of the human visual system is its sensitivity to local contrast: vision attends only to regions of the visual field where marked changes in brightness or texture occur. Under this influence, when the human eye views an image it picks up and attends to the regions of marked change, is more sensitive to distortion in those regions than elsewhere, and usually neglects regions of even brightness or texture regions of similar spatial frequency. Two images with identical PSNR (Peak Signal-to-Noise Ratio) receive different human quality assessments when their distorted regions differ. For this reason, quality assessment should treat different image regions differently, for example by adopting different weighting coefficients for different regions; the larger the weight, the greater that region's influence on the whole image.
The absolute-difference image can represent the characteristics of the stereoscopic image and can be applied to its quality assessment. The algorithm therefore uses this characteristic to perform a weighted mean structural similarity estimate of image quality, as follows:
1. Divide the absolute-difference image of the stereoscopic image into B equal-sized sub-blocks, each denoted B_i and containing N = M × M' pixels.
2. For each sub-block B_i, compute the influence factors of brightness, texture detail, and block spatial position on human vision, and determine the weight of each sub-block.
A. Influence of brightness
Subjective brightness is proportional to the logarithm of the light stimulus intensity, i.e.

I = a · lg(I_s / I_0)

where a is a constant, I is the subjective brightness value, I_s is the light stimulus intensity, and I_0 is the absolute threshold.
Let I_imax = max{Lum_1, Lum_2, ..., Lum_N} be the maximum brightness value in block i, where Lum_l is the brightness of each pixel in the block, and let

I_iavg = (1/N) Σ_{l=1}^{N} Lum_l

be the average brightness value of sub-block i. Approximating the light stimulus intensity of the sub-block by the ratio I_imax / I_iavg and setting a = 1, the weight influence factor s_i caused by brightness variation is

s_i = lg(I_imax / I_iavg)

B. Influence of texture detail
The texture-detail influence factor d_i, the sample standard deviation of the block brightness, is expressed as

d_i = [ (1/(N-1)) Σ_{l=1}^{N} (Lum_l - (1/N) Σ_{l=1}^{N} Lum_l)^2 ]^{1/2}

C. Influence of spatial position
The spatial-position influence factor r_i expresses the fact that the photoreceptor cells of the human eye are distributed most densely in the macular region of the retina:

r_i = 1 - sqrt((x_io - x_c)^2 + (y_io - y_c)^2) / r

where (x_io, y_io) is the center of block i, (x_c, y_c) is the center coordinate of the original image, and r is the maximum distance from any point in the original image to the center coordinate.
D. Determination of the weighting coefficient
Combining the three key factors that affect human vision within a sub-block, the brightness influence factor s_i, the texture influence factor d_i, and the spatial-position influence factor r_i, the weight coefficient w_blk_i of each sub-block is its combined influence factor as a proportion of the total influence factor of the whole image:

w_blk_i = W_i / W_Sum = sqrt(s_i^2 + d_i^2 + r_i^2) / W_Sum

where W_i is the combined influence factor of each sub-block and W_Sum is the sum of the combined influence factors W_i of all sub-blocks:

W_Sum = Σ_{i=1}^{B} W_i

3. Compute the structural similarity of each sub-block B_i and take the weighted sum according to the human visual characteristics of each sub-block, giving the weighted mean structural similarity evaluation value:

W(x, y) = Σ_{i=1}^{B} [w_blk_i · MSSIM(Diff_1, Diff_2)]

4. Performing the same operation on the R, G, and B components and averaging yields the 3D_SSIM value. Because the absolute-difference image of a stereoscopic image is in color, W(x, y) must be computed separately for the R, G, and B components of the difference map, giving W_R(x, y), W_G(x, y), and W_B(x, y); the parameter 3D_SSIM is obtained after statistical averaging:

WMSSIM(x, y) = E(W_R(x, y) + W_G(x, y) + W_B(x, y))

Here Diff(x) and Diff(y) are the absolute-difference image data of the original and distorted stereoscopic pairs, respectively, and WMSSIM(x, y) is the weighted mean structural similarity evaluation value obtained with the WMSSIM criterion from the absolute-difference information of the distorted stereoscopic image. For a grayscale image, W(x, y) is itself WMSSIM(x, y).
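The sub-block weighting of section 2 above can be sketched as follows. The toy 2×2 blocks and their centers are ours; ddof=1 in the standard deviation matches the 1/(N-1) in the d_i formula.

```python
import numpy as np

def block_factors(block, block_center, img_center, r_max):
    # s_i = lg(I_imax / I_iavg): brightness influence factor
    # d_i: sample standard deviation of block brightness (texture detail)
    # r_i = 1 - dist(block center, image center) / r: spatial position
    lum = block.ravel().astype(float)
    s = np.log10(lum.max() / lum.mean())
    d = lum.std(ddof=1)
    dist = np.hypot(block_center[0] - img_center[0],
                    block_center[1] - img_center[1])
    r = 1.0 - dist / r_max
    return s, d, r

def block_weights(factors):
    # w_blk_i = W_i / W_Sum, with W_i = sqrt(s_i^2 + d_i^2 + r_i^2)
    W = np.sqrt(np.sum(np.asarray(factors, dtype=float) ** 2, axis=1))
    return W / W.sum()

flat = np.full((2, 2), 10.0)                 # uniform block
tex = np.array([[5.0, 20.0], [10.0, 25.0]])  # textured block
f1 = block_factors(flat, (0.5, 0.5), (1.0, 1.0), 2.0)
f2 = block_factors(tex, (1.5, 0.5), (1.0, 1.0), 2.0)
w = block_weights([f1, f2])
print(bool(w[1] > w[0]))  # True: the textured block dominates
```

Consistent with the local-contrast discussion above, the block whose brightness varies markedly receives the larger weight in the WMSSIM sum.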
Step 6: finally, using the WMSSIM criterion, compute the quality evaluation index 3DM between the clustered original and distorted difference images:

3DM = Σ_{k=1}^{K} [w_seg_k · WMSSIM_k(c_1·Diff_1, c_2·Diff_2)]

where Diff_1 and Diff_2 are the class-map data of the absolute-difference images of the original and distorted stereoscopic images, respectively, and c_1 and c_2 are the weights assigned to class map 1 and class map 2.
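The final index is just a weighted sum over class maps; a sketch with hypothetical per-class WMSSIM scores (in a real run these would come from the weighted MSSIM computation described above):

```python
import numpy as np

def three_dm(w_seg, wmssim_per_class):
    # Step 6: 3DM = sum_k w_seg_k * WMSSIM_k; normalized, larger = better
    return float(np.dot(np.asarray(w_seg, dtype=float),
                        np.asarray(wmssim_per_class, dtype=float)))

score = three_dm([0.2, 0.8], [0.95, 0.90])  # hypothetical inputs
print(round(score, 2))  # 0.91
```
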
The experiments again used the "3DWINDOWS-19A01" computerized three-dimensional imaging device produced by Tianjin Three-Dimensional Imaging Technology Co., Ltd.
Fig. 6 shows the simulation result when c_1 = c_2, i.e. when the two class maps carry identical weight; Fig. 7 shows the simulation result when c_1 = 0.8 and c_2 = 0.2; Fig. 8 is the subjective quality assessment curve. Comparison shows that the curve of the evaluation value 3DM obtained by this algorithm is highly similar in trend to the curve of the MOS values.
Further observation of the experimental results shows that when class map 1 receives the larger weight, c_1 = 0.8 as in Fig. 7, the curve of the objective evaluation index 3DM follows the subjective MOS curve more closely. The correlation coefficients R between the objective evaluation values obtained with this algorithm and the DMOS values for the three images (Art.bmp, Dolls.bmp, and Teddy.bmp) are 0.8876, 0.8980, and 0.8918, respectively.
Step 7: evaluate stereoscopic image quality from the resulting index. Fig. 9 is the flow chart of the whole algorithm; the final evaluation index is 3DM, a normalized value, and the larger the value, the better the image quality.

Claims (2)

1. A stereoscopic image objective quality evaluation method based on physiological and psychological stereoscopic vision, characterized by comprising the steps of:
Step 1: taking a viewpoint pair with binocular parallax and computing the absolute difference between the left and right views, Diff_0 = |f(x_1) - f(x_2)|, where f(x_1) and f(x_2) are the pixel values of the left and right views;
Step 2: converting the absolute difference into a grayscale image, Diff = rgb2gray(Diff_0), taken as the feature parameter characterizing stereoscopic image quality, rgb2gray() being the function that converts a true-color image into a grayscale image;
Step 3: applying K-means cluster segmentation to the converted grayscale image, simulating the human visual habit, when viewing a natural scene, of clustering objects with strong similarity;
supposing X = {x_1, x_2, ..., x_n} is a set of n objects, during clustering the K-means algorithm partitions the object set X into K classes, denoted class_k (k = 1, 2, ..., K); the objective function P, the sum over every class of the distances from all its points to the cluster center, serves as the clustering criterion and is minimized:

P = Σ_{k=1}^{K} Σ_{s=1}^{n_k} d(x_s, z_k)

where x_s is a data object in a class, n_k is the number of data objects in class k, x̄_k = (1/n_k) Σ_{s=1}^{n_k} x_s denotes the class mean, and z_1, z_2, ..., z_K denote the centers of the clusters; d(x_s, z_k) is the distance (or similarity) between object s and the center of class k, here the squared Euclidean distance, algorithms based on which tend to find classes of similar density and size:

d(x_s, z_k) = (x_s - z_k)^2

the input being the number of clusters K and the sample set containing n data objects, and the output being K clusters satisfying the minimum-variance criterion;
The 4th step; Because the content based in the class figure that obtains after the antipode image segmentation of K-means algorithm to original stereo-picture has similar character; Therefore think that the importance of each type image is also identical, can be through distributing different weights to treat w_seg with a certain discrimination to different class figure kBe inhomogeneous weighting coefficient, confirm by following formula:
w_seg_k = m_k / Σ_{k=1}^{K} m_k
where m_k is the gray-level mean of class k, i.e., z_k;
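A sketch of the class weighting (illustrative; names are mine):

```python
import numpy as np

def class_weights(gray, labels, K):
    """w_seg_k = m_k / sum_k m_k, with m_k the gray mean of class k."""
    means = np.array([gray[labels == k].mean() for k in range(K)])
    return means / means.sum()

gray = np.array([1.0, 1.0, 3.0, 3.0])   # toy gray values
labels = np.array([0, 0, 1, 1])          # toy cluster labels
w_seg = class_weights(gray, labels, K=2)
```

Brighter (larger-mean) classes receive proportionally larger weights, matching the formula above.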
In the fifth step, using the weighted mean structural similarity criterion, compute the WMSSIM value between the class images of the original stereo image and those of the distorted stereo image:
WMSSIM(x, y) = Σ_{i=1}^{B} [w_blk_i · MSSIM(Diff_1, Diff_2)]
where w_blk_i is the weight coefficient of each sub-block;
In the sixth step, finally, using the WMSSIM criterion, compute the quality evaluation index 3DM between the clustered original and distorted difference images:
3DM = Σ_{k=1}^{K} [w_seg_k · WMSSIM_k(c_1·Diff_1, c_2·Diff_2)]
where Diff_1 and Diff_2 are the class-image data of the absolute-difference image of the original stereo image and of the distorted stereo image respectively, and c_1, c_2 are the weights assigned to class image 1 and class image 2;
In the seventh step, evaluate stereo image quality according to the obtained index; 3DM is a normalized value, and the larger it is, the better the image quality.
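The final combination of steps five and six reduces to a weighted sum over classes; a sketch (illustrative only, assuming the per-class WMSSIM values have already been computed):

```python
import numpy as np

def quality_index_3dm(wmssim_per_class, w_seg):
    """3DM = sum_k w_seg_k * WMSSIM_k; both arguments are length-K
    sequences (per-class WMSSIM values and class weights)."""
    return float(np.dot(np.asarray(w_seg), np.asarray(wmssim_per_class)))

# two classes: weights 0.25/0.75, per-class WMSSIM 1.0/0.8
three_dm = quality_index_3dm([1.0, 0.8], [0.25, 0.75])
```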
2. The stereoscopic image objective quality evaluation method based on physiological and psychological stereoscopic vision as claimed in claim 1, characterized in that the weighted mean structural similarity criterion is used to compute the WMSSIM value between the class images of the original stereo image and those of the distorted stereo image, as follows:
① Divide the absolute-difference image of the stereo image into B equally sized sub-blocks; each sub-block is denoted B_i and contains N = M × M′ pixels;
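The equal-size division can be sketched as follows (illustrative only; the function name is mine):

```python
import numpy as np

def split_blocks(img, M, Mp):
    """Divide a 2-D image into non-overlapping M x M' sub-blocks."""
    H, W = img.shape
    return [img[r:r + M, c:c + Mp]
            for r in range(0, H - M + 1, M)
            for c in range(0, W - Mp + 1, Mp)]

img = np.arange(24.0).reshape(4, 6)   # toy 4 x 6 image
blocks = split_blocks(img, M=2, Mp=3) # B = 4 blocks of 2 x 3 pixels
```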
② For each sub-block B_i, compute the influence factors of brightness, texture detail, and block spatial position on human vision, and determine the weight of the sub-block;
A. The influence of brightness
The subjective brightness perceived by human vision is proportional to the logarithm of the light stimulus intensity, i.e.

S = α · lg(I / I_0)

where α is a constant, S is the subjective brightness, I is the light stimulus intensity, and I_0 is the absolute threshold;
Let I_imax = max{Lum_1, Lum_2, ..., Lum_N} be the maximum brightness value in the i-th block, where Lum_l is the brightness of each pixel in the block, and let

I_iavg = (1/N) Σ_{l=1}^{N} Lum_l

be the average brightness of the i-th sub-block, which approximates the light stimulus intensity of that sub-block. Taking α = 1, the weight influence factor s_i caused by brightness variation is

s_i = lg(I_imax / I_iavg)
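A sketch of the brightness influence factor (illustrative; the function name is mine):

```python
import numpy as np

def brightness_factor(block):
    """s_i = lg(I_imax / I_iavg) for one sub-block of pixel luminances."""
    lum = np.asarray(block, dtype=np.float64).ravel()
    return float(np.log10(lum.max() / lum.mean()))

s_flat = brightness_factor([50.0, 50.0, 50.0, 50.0])    # uniform block
s_peak = brightness_factor([10.0, 10.0, 10.0, 1000.0])  # bright outlier
```

A uniform block gives s_i = 0, while a block whose maximum exceeds its mean gives a positive factor.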
B. The influence of texture detail

The texture-detail influence factor d_i is the sample standard deviation of the brightness within the block:

d_i = ( (1/(N−1)) Σ_{l=1}^{N} (Lum_l − (1/N) Σ_{l=1}^{N} Lum_l)^2 )^{1/2}
C. The influence of spatial position

The spatial-position influence factor r_i models the fact that the photoreceptor cells of the human eye are distributed most densely in the macular area of the retina:

r_i = 1 − √((x_io − x_c)^2 + (y_io − y_c)^2) / r

where (x_io, y_io) is the center of block i, (x_c, y_c) is the center coordinate of the original image, and r is the maximum distance from any point in the original image to the center coordinate;
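A sketch of the position factor (illustrative; names are mine):

```python
import numpy as np

def position_factor(block_center, image_center, r_max):
    """r_i = 1 - dist(block center, image center) / r_max."""
    (x0, y0), (xc, yc) = block_center, image_center
    return 1.0 - float(np.hypot(x0 - xc, y0 - yc)) / r_max

# a block at the image center gets the maximum factor 1;
# a corner block at distance r_max gets 0
r_center = position_factor((50, 50), (50, 50), r_max=np.hypot(50, 50))
r_corner = position_factor((0, 0), (50, 50), r_max=np.hypot(50, 50))
```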
D. Determination of the weighting coefficient

Combining the three key factors that affect human visual characteristics within a sub-block, namely the brightness influence factor s_i, the texture influence factor d_i, and the spatial-position influence factor r_i, the weight coefficient w_blk_i of each sub-block is obtained as the proportion of its combined influence factor to the total influence factor of the entire image:

w_blk_i = W_i / W_sum = √(s_i^2 + d_i^2 + r_i^2) / W_sum

where W_i is the combined influence factor of sub-block i and W_sum is the sum of the combined influence factors W_i of all sub-blocks:

W_sum = Σ_{i=1}^{B} W_i
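The combination of the three factors into per-block weights can be sketched as (illustrative; the square-root combination follows the formula above):

```python
import numpy as np

def block_weights(s, d, r):
    """w_blk_i = W_i / W_sum with W_i = sqrt(s_i^2 + d_i^2 + r_i^2),
    computed elementwise over all B sub-blocks."""
    W = np.sqrt(np.asarray(s) ** 2 + np.asarray(d) ** 2 + np.asarray(r) ** 2)
    return W / W.sum()

# two toy sub-blocks with brightness/texture/position factors
w_blk = block_weights([0.3, 0.0], [0.0, 0.4], [0.4, 0.3])
```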
③ Compute the structural similarity of each sub-block B_i and form the weighted sum according to the human visual characteristics of each sub-block, thereby obtaining the weighted mean structural similarity evaluation value WMSSIM:

W(x, y) = Σ_{i=1}^{B} [w_blk_i · MSSIM(Diff_1, Diff_2)]
④ Because the absolute-difference image of a stereo image is in color, W(x, y) must be computed separately for the R, G, and B components of the difference image, giving W_R(x, y), W_G(x, y), and W_B(x, y); the statistical average of the three yields the final parameter:

WMSSIM(x, y) = E(W_R(x, y) + W_G(x, y) + W_B(x, y))

Here Diff_1 and Diff_2 are the absolute-difference image data of the original and the distorted stereo image pairs respectively, and WMSSIM(x, y) is the weighted mean structural similarity evaluation value of the distorted stereo image's absolute-difference information obtained with the WMSSIM criterion; if the image is grayscale, then W(x, y) is itself WMSSIM(x, y).
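The per-channel averaging of step ④ can be sketched as follows (illustrative; `channel_fn` stands in for any W(x, y) implementation, here replaced by a toy similarity, not a real MSSIM):

```python
import numpy as np

def wmssim_color(diff1_rgb, diff2_rgb, channel_fn):
    """Apply a per-channel weighted-similarity function to the R, G, B
    planes of the two difference images and average the three results."""
    vals = [channel_fn(diff1_rgb[..., c], diff2_rgb[..., c]) for c in range(3)]
    return sum(vals) / 3.0

# placeholder similarity: 1 minus mean absolute difference (NOT MSSIM)
toy_sim = lambda a, b: 1.0 - float(np.mean(np.abs(a - b)))
a = np.zeros((4, 4, 3))
score = wmssim_color(a, a, toy_sim)   # identical inputs -> perfect score
```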
CN2012101635968A 2012-05-22 2012-05-22 Stereoscopic image objective quality evaluation method based on physiological and psychological stereoscopic vision Pending CN102722888A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2012101635968A CN102722888A (en) 2012-05-22 2012-05-22 Stereoscopic image objective quality evaluation method based on physiological and psychological stereoscopic vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2012101635968A CN102722888A (en) 2012-05-22 2012-05-22 Stereoscopic image objective quality evaluation method based on physiological and psychological stereoscopic vision

Publications (1)

Publication Number Publication Date
CN102722888A true CN102722888A (en) 2012-10-10

Family

ID=46948634

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2012101635968A Pending CN102722888A (en) 2012-05-22 2012-05-22 Stereoscopic image objective quality evaluation method based on physiological and psychological stereoscopic vision

Country Status (1)

Country Link
CN (1) CN102722888A (en)


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010091494A1 (en) * 2009-02-11 2010-08-19 Ecole De Technologie Superieure Method and system for determining structural similarity between images
CN102170581A (en) * 2011-05-05 2011-08-31 天津大学 Human-visual-system (HVS)-based structural similarity (SSIM) and characteristic matching three-dimensional image quality evaluation method


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LILI SHEN et al.: "Quality assessment of stereo images with stereo vision", Image and Signal Processing, 2009. CISP '09. 2nd International Congress on *
SHEN Lili: "Research on objective quality evaluation algorithms for stereoscopic visual information", China Doctoral Dissertations Full-text Database *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103096125A (en) * 2013-02-22 2013-05-08 吉林大学 Stereoscopic video visual comfort evaluation method based on region segmentation
CN103139598A (en) * 2013-03-14 2013-06-05 天津大学 Measuring method of normalization contrast ranges affecting comfort levels of three-dimensional pictures
CN103873866A (en) * 2014-03-18 2014-06-18 天津大学 GGoP level distortion recursion method of multi-viewpoint stereoscopic video in packet loss network
CN107507238A (en) * 2016-06-14 2017-12-22 马自达汽车株式会社 Texture evaluation system
CN107507238B (en) * 2016-06-14 2020-09-25 马自达汽车株式会社 Texture evaluation system
CN107371016A (en) * 2017-07-25 2017-11-21 天津大学 Based on asymmetric distortion without with reference to 3D stereo image quality evaluation methods
CN109978933A (en) * 2019-01-03 2019-07-05 北京中科慧眼科技有限公司 The confidence level detection method of parallax information data, device and automated driving system
CN110189312A (en) * 2019-05-24 2019-08-30 北京百度网讯科技有限公司 Luminance evaluation method, apparatus, electronic equipment and the storage medium of eye fundus image
CN112703532A (en) * 2020-12-03 2021-04-23 华为技术有限公司 Image processing method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
CN102722888A (en) Stereoscopic image objective quality evaluation method based on physiological and psychological stereoscopic vision
CN102333233B (en) Stereo image quality objective evaluation method based on visual perception
CN105208374B (en) A kind of non-reference picture assessment method for encoding quality based on deep learning
CN101610425B (en) Method for evaluating stereo image quality and device
CN102982535A (en) Stereo image quality evaluation method based on peak signal to noise ratio (PSNR) and structural similarity (SSIM)
CN104394403B (en) A kind of stereoscopic video quality method for objectively evaluating towards compression artefacts
CN102209257A (en) Stereo image quality objective evaluation method
Yue et al. Blind stereoscopic 3D image quality assessment via analysis of naturalness, structure, and binocular asymmetry
CN106447646A (en) Quality blind evaluation method for unmanned aerial vehicle image
CN106303507B (en) Video quality evaluation without reference method based on space-time united information
CN102708567B (en) Visual perception-based three-dimensional image quality objective evaluation method
CN102595185A (en) Stereo image quality objective evaluation method
CN104036493B (en) No-reference image quality evaluation method based on multifractal spectrum
CN103426173B (en) Objective evaluation method for stereo image quality
CN109831664B (en) Rapid compressed stereo video quality evaluation method based on deep learning
CN109919959A (en) Tone mapping image quality evaluating method based on color, naturality and structure
CN103945217B (en) Based on complex wavelet domain half-blindness image quality evaluating method and the system of entropy
CN103780895B (en) A kind of three-dimensional video quality evaluation method
CN102547368A (en) Objective evaluation method for quality of stereo images
CN101976444A (en) Pixel type based objective assessment method of image quality by utilizing structural similarity
Geng et al. A stereoscopic image quality assessment model based on independent component analysis and binocular fusion property
CN104811691A (en) Stereoscopic video quality objective evaluation method based on wavelet transformation
CN109788275A (en) Naturality, structure and binocular asymmetry are without reference stereo image quality evaluation method
CN103136748A (en) Stereo-image quality objective evaluation method based on characteristic image
CN104202594A (en) Video quality evaluation method based on three-dimensional wavelet transform

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20121010