CN102333233A - Stereo image quality objective evaluation method based on visual perception - Google Patents

Stereo image quality objective evaluation method based on visual perception

Info

Publication number
CN102333233A
CN102333233A (application CN201110284944A)
Authority
CN
China
Prior art keywords
distortion
dis
point image
visual point
sub
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201110284944A
Other languages
Chinese (zh)
Other versions
CN102333233B (en)
Inventor
邵枫
蒋刚毅
郁梅
李福翠
彭宗举
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
NANTONG OUKE NC EQUIPMENT Co.,Ltd.
Original Assignee
Ningbo University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ningbo University filed Critical Ningbo University
Priority to CN201110284944A
Publication of CN102333233A
Application granted
Publication of CN102333233B
Legal status: Active (current)
Anticipated expiration

Abstract

The invention discloses a stereo image quality objective evaluation method based on visual perception. First, a stereo image is divided into strong-edge blocks, weak-edge blocks, flat blocks and texture blocks, and feature information reflecting the image quality and depth perception of the different block types is extracted by computation to obtain the feature vector of the stereo image. Then, the feature vectors of the distorted stereo images of the same distortion type in a distorted stereo image set are trained by support vector regression, and each distorted stereo image of that distortion type is tested with the trained support vector regression model to obtain its objective image quality evaluation prediction value. The method has the advantages that the obtained feature vector information reflecting image quality and depth perception is highly stable and reflects the quality variation of stereo images well, improving the correlation between objective evaluation results and subjective perception.

Description

Objective stereo image quality evaluation method based on visual perception
Technical field
The present invention relates to an image quality evaluation method, in particular to an objective stereo image quality evaluation method based on visual perception.
Background technology
With the rapid development of image coding and stereoscopic display technologies, stereo imaging has attracted increasingly wide attention and application and has become a current research focus. Stereo imaging exploits the binocular parallax principle of the human eye: the two eyes independently receive the left and right viewpoint images of the same scene, and the brain fuses them into binocular parallax, producing a stereo image with depth perception and a realistic sense of presence. Owing to the influence of the acquisition system, storage, compression and transmission equipment, a stereo image inevitably suffers a series of distortions, and, compared with a single-channel image, the quality of both channels must be guaranteed simultaneously, so quality evaluation of stereo images is of great significance. However, effective objective evaluation methods for stereo image quality are currently lacking. It is therefore important to establish an effective objective stereo image quality evaluation model.
Objective stereo image quality evaluation methods fall into two main categories. 1) Left/right channel evaluation based on image quality: planar image quality evaluation methods are applied directly to the stereo image. However, the process by which the left and right viewpoint images fuse to produce stereoscopic depth is difficult to describe with simple mathematics, the two viewpoint images influence each other, and a simple linear weighting of the left and right viewpoint images cannot evaluate stereo image quality effectively. 2) Left/right channel evaluation based on stereoscopic perception: quality is reflected through disparity or depth information. However, owing to the limitations of current disparity estimation and depth estimation techniques, how to evaluate the quality of depth or disparity images so as to truly characterize stereoscopic perception remains one of the difficult problems in objective stereo image quality evaluation. Therefore, how to incorporate image quality and depth perception information into the evaluation method, so that the evaluation result better conforms to the human visual system, is a problem that needs to be studied and solved in the objective quality evaluation of stereo images.
Summary of the invention
The technical problem to be solved by the invention is to provide an objective stereo image quality evaluation method that can effectively improve the correlation between objective evaluation results and subjective perception.
The technical scheme adopted by the invention to solve the above technical problem is an objective stereo image quality evaluation method based on visual perception, characterized by comprising the following steps:
1. Let $S_{org}$ be the original undistorted stereo image and $S_{dis}$ the distorted stereo image to be evaluated. Denote the left viewpoint image of $S_{org}$ by $\{L_{org}(x,y)\}$ and its right viewpoint image by $\{R_{org}(x,y)\}$; denote the left viewpoint image of $S_{dis}$ by $\{L_{dis}(x,y)\}$ and its right viewpoint image by $\{R_{dis}(x,y)\}$. Here $(x,y)$ is the coordinate of a pixel in the left and right viewpoint images, $1 \le x \le W$, $1 \le y \le H$, where $W$ and $H$ are the width and height of the viewpoint images, and $L_{org}(x,y)$, $R_{org}(x,y)$, $L_{dis}(x,y)$ and $R_{dis}(x,y)$ denote the pixel values at coordinate $(x,y)$ in the respective images;
2. Using the visual masking effect of human vision with respect to background illumination and texture, extract the minimum perceptible distortion (JND) images of the undistorted left viewpoint image $\{L_{org}(x,y)\}$ and the undistorted right viewpoint image $\{R_{org}(x,y)\}$, denoted $\{J_L(x,y)\}$ and $\{J_R(x,y)\}$ respectively, where $J_L(x,y)$ and $J_R(x,y)$ denote the pixel values at coordinate $(x,y)$;
3. Using a region detection algorithm, obtain the block type of each 8×8 sub-block of the undistorted left viewpoint image $\{L_{org}(x,y)\}$ and the distorted left viewpoint image $\{L_{dis}(x,y)\}$, and of the undistorted right viewpoint image $\{R_{org}(x,y)\}$ and the distorted right viewpoint image $\{R_{dis}(x,y)\}$, denoted $p$, where $p \in \{1,2,3,4\}$: $p=1$ denotes a strong edge block, $p=2$ a weak edge block, $p=3$ a flat block and $p=4$ a texture block;
4. Based on the JND images $\{J_L(x,y)\}$ and $\{J_R(x,y)\}$ of the undistorted left and right viewpoint images, compute, for the 8×8 sub-blocks of each block type of the distorted left viewpoint image $\{L_{dis}(x,y)\}$ and of the distorted right viewpoint image $\{R_{dis}(x,y)\}$, the spatial noise strength and the spatial structure strength that reflect image quality, obtaining the image-quality feature vectors of the two distorted viewpoint images; then linearly weight these two feature vectors to obtain the image-quality feature vector of $S_{dis}$, denoted $F_q$;
5. Based on the JND images $\{J_L(x,y)\}$ and $\{J_R(x,y)\}$, compute, for the 8×8 sub-blocks of each block type of the absolute difference image of $\{L_{dis}(x,y)\}$ and $\{R_{dis}(x,y)\}$, the spatial noise strength and the spatial structure strength that reflect depth perception, obtaining the depth-perception feature vector of $S_{dis}$, denoted $F_s$;
6. Combine the image-quality feature vector $F_q$ of $S_{dis}$ and its depth-perception feature vector $F_s$ into a new feature vector, taken as the feature vector of $S_{dis}$ and denoted $X$, $X = [F_q, F_s]$, where "[ ]" denotes vector concatenation: $[F_q, F_s]$ joins $F_q$ and $F_s$ into one new feature vector;
7. Using $n$ undistorted stereo images, build a distorted stereo image set covering different distortion types at different distortion levels; the set contains several distorted stereo images. With a subjective quality evaluation method, obtain the difference mean opinion score of each distorted stereo image in the set, denoted DMOS, DMOS = 100 − MOS, where MOS is the mean opinion score, DMOS ∈ [0, 100], and $n \ge 1$;
8. Using the same method as for the feature vector $X$ of $S_{dis}$, compute the feature vector of each distorted stereo image in the set; the feature vector of the $i$-th distorted stereo image is denoted $X_i$, where $1 \le i \le n'$ and $n'$ is the number of distorted stereo images in the set;
9. Using support vector regression, train on the feature vectors of all distorted stereo images of the same distortion type in the set, and test each distorted stereo image of that distortion type with the trained support vector regression model, computing the objective quality evaluation prediction value of each; the prediction for the $i$-th distorted stereo image is denoted $Q_i$, $Q_i = f(X_i)$, where $f(\cdot)$ is a function form and $Q_i = f(X_i)$ means that $Q_i$ is a function of $X_i$, $1 \le i \le n'$, with $n'$ the number of distorted stereo images in the set.
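As an illustration, the overall flow of steps 1 to 9 can be sketched in code. The sketch below assumes the helper functions defined after the corresponding detailed steps further down (jnd_map, classify_blocks, quality_features, depth_features) and a trained scikit-learn-style regressor; all names are illustrative and not part of the patent text.

```python
import numpy as np

def evaluate_stereo_pair(L_org, R_org, L_dis, R_dis, w_L, w_R, svr_model):
    """Build the feature vector X of one distorted stereo image (steps 1-6)
    and predict its objective quality with a trained SVR (step 9)."""
    # Step 2: minimum perceptible distortion (JND) maps of both original views.
    J_L, J_R = jnd_map(L_org), jnd_map(R_org)
    # Step 3: block types (1 strong edge, 2 weak edge, 3 flat, 4 texture).
    types_L = classify_blocks(L_org, L_dis)
    types_R = classify_blocks(R_org, R_dis)
    # Step 4: per-view image-quality features, then linear weighting.
    F_q = w_L * quality_features(L_org, L_dis, J_L, types_L) \
        + w_R * quality_features(R_org, R_dis, J_R, types_R)
    # Step 5: depth-perception features from the absolute-difference images.
    F_s = depth_features(L_org, R_org, L_dis, R_dis, J_L, J_R)
    # Step 6 + step 9: concatenate and predict a DMOS-like quality value.
    X = np.concatenate([F_q, F_s])
    return float(svr_model.predict(X[None, :])[0])
```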
The detailed process of step 2 is as follows:
2-1. Compute the visual masking threshold map of background illumination for the undistorted left viewpoint image $\{L_{org}(x,y)\}$, denoted $\{T_l(x,y)\}$, where $T_l(x,y)$ is the visual threshold of the background-illumination masking effect at pixel $(x,y)$ and is a function of the average luminance of all pixels in the 5×5 window centered on $(x,y)$ (the explicit piecewise formula appears only as an image in the original document).
2-2. Compute the visual masking threshold map of texture for $\{L_{org}(x,y)\}$, denoted $\{T_t(x,y)\}$, $T_t(x,y) = \eta \times G(x,y) \times W_e(x,y)$, where $T_t(x,y)$ is the visual threshold of the texture masking effect at pixel $(x,y)$, $\eta$ is a control factor greater than 0, $G(x,y)$ is the maximum weighted average obtained by directional high-pass filtering of the pixel at $(x,y)$ in $\{L_{org}(x,y)\}$, and $W_e(x,y)$ is the edge weighting obtained by Gaussian low-pass filtering of the pixel at $(x,y)$ in the edge image of $\{L_{org}(x,y)\}$.
2-3. Merge the two threshold maps $\{T_l(x,y)\}$ and $\{T_t(x,y)\}$ to obtain the minimum perceptible distortion image of $\{L_{org}(x,y)\}$, denoted $\{J_L(x,y)\}$: $J_L(x,y) = T_l(x,y) + T_t(x,y) - C_{l,t} \times \min\{T_l(x,y), T_t(x,y)\}$, where $C_{l,t}$ is a parameter controlling the overlap of the two masking effects, $0 < C_{l,t} < 1$, and $\min\{\}$ takes the minimum.
2-4. Applying the same operations as steps 2-1 to 2-3 to the undistorted right viewpoint image $\{R_{org}(x,y)\}$, obtain its minimum perceptible distortion image, denoted $\{J_R(x,y)\}$.
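A minimal sketch of step 2 in Python follows. Only the fusion formula of step 2-3 is stated explicitly in the text; the piecewise background-luminance curve (a standard Chou-Li-style function is assumed here) and the texture term (a plain gradient magnitude standing in for the directional high-pass response, with a smoothed edge map as the weighting $W_e$) are assumptions, since the original formulas appear only as images.

```python
import numpy as np
from scipy.ndimage import uniform_filter, sobel, gaussian_filter

def jnd_map(img, eta=0.05, C_lt=0.5):
    """Minimum perceptible distortion (JND) map of one viewpoint image."""
    img = img.astype(np.float64)
    bg = uniform_filter(img, size=5)          # mean luminance in a 5x5 window
    # Step 2-1 (assumed piecewise curve for background-luminance masking).
    T_l = np.where(bg <= 127.0,
                   17.0 * (1.0 - np.sqrt(bg / 127.0)) + 3.0,
                   3.0 / 128.0 * (bg - 127.0) + 3.0)
    # Step 2-2 (gradient magnitude as a stand-in for the directional
    # high-pass response G; low-pass-filtered edge map as the weighting W_e).
    G = np.hypot(sobel(img, axis=0), sobel(img, axis=1))
    W_e = gaussian_filter((G > G.mean()).astype(np.float64), sigma=1.0)
    T_t = eta * G * W_e
    # Step 2-3: fusion with overlap compensation (formula given in the text).
    return T_l + T_t - C_lt * np.minimum(T_l, T_t)
```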
The detailed process of the region detection algorithm in step 3 is as follows:
3-1. Divide the undistorted left viewpoint image $\{L_{org}(x,y)\}$ and the distorted left viewpoint image $\{L_{dis}(x,y)\}$ into $\frac{W \times H}{8 \times 8}$ non-overlapping 8×8 sub-blocks each. Define the $l$-th 8×8 sub-block of $\{L_{org}(x,y)\}$ as the current first sub-block, denoted $B_{org}^{l}$, and the $l$-th 8×8 sub-block of $\{L_{dis}(x,y)\}$ as the current second sub-block, denoted $B_{dis}^{l}$ (both symbols appear only as images in the original; the names $B_{org}^{l}$ and $B_{dis}^{l}$ are used here throughout). Here $(x_2, y_2)$ is the coordinate of a pixel within the current first and second sub-blocks, $1 \le x_2 \le 8$, $1 \le y_2 \le 8$, and $B_{org}^{l}(x_2,y_2)$ and $B_{dis}^{l}(x_2,y_2)$ denote the pixel values at $(x_2,y_2)$ in the respective sub-blocks;
3-2. Compute the gradient value of every pixel in the current first sub-block $B_{org}^{l}$ and the current second sub-block $B_{dis}^{l}$. For the pixel at $(x_2', y_2')$ in $B_{org}^{l}$, its gradient value is denoted $P_o(x_2', y_2')$, $P_o(x_2', y_2') = |G_{ox}(x_2', y_2')| + |G_{oy}(x_2', y_2')|$; for the pixel at $(x_2', y_2')$ in $B_{dis}^{l}$, its gradient value is denoted $P_d(x_2', y_2')$, $P_d(x_2', y_2') = |G_{dx}(x_2', y_2')| + |G_{dy}(x_2', y_2')|$, where $1 \le x_2' \le 8$, $1 \le y_2' \le 8$, $G_{ox}$ and $G_{oy}$ are the horizontal and vertical gradient values of the pixel in the current first sub-block, $G_{dx}$ and $G_{dy}$ are the horizontal and vertical gradient values of the pixel in the current second sub-block, and $|\cdot|$ is the absolute value;
3-3. Find the maximum gradient value over all pixels of the current first sub-block $B_{org}^{l}$, denoted $G_{max}$, and from it compute the first and second gradient thresholds, denoted $T_1$ and $T_2$: $T_1 = 0.12 \times G_{max}$, $T_2 = 0.06 \times G_{max}$;
3-4. For the pixels at $(x_2', y_2')$ in $B_{org}^{l}$ and $B_{dis}^{l}$, test whether $P_o(x_2', y_2') > T_1$ and $P_d(x_2', y_2') > T_1$. If so, the two pixels belong to a strong edge region, $Num_1 = Num_1 + 1$, and execution continues with step 3-8; otherwise continue with step 3-5. The initial value of $Num_1$ is 0;
3-5. Test whether $P_o(x_2', y_2') > T_1$ and $P_d(x_2', y_2') \le T_1$, or $P_d(x_2', y_2') > T_1$ and $P_o(x_2', y_2') \le T_1$. If so, the two pixels belong to a weak edge region, $Num_2 = Num_2 + 1$, and execution continues with step 3-8; otherwise continue with step 3-6. The initial value of $Num_2$ is 0;
3-6. Test whether $P_o(x_2', y_2') < T_2$ and $P_d(x_2', y_2') < T_1$. If so, the two pixels belong to a flat region, $Num_3 = Num_3 + 1$, and execution continues with step 3-8; otherwise continue with step 3-7. The initial value of $Num_3$ is 0;
3-7. Otherwise, the two pixels belong to a texture region, $Num_4 = Num_4 + 1$. The initial value of $Num_4$ is 0;
3-8. Return to step 3-4 to process the remaining pixels of the current first sub-block $B_{org}^{l}$ and the current second sub-block $B_{dis}^{l}$, until all 8×8 pixels of both sub-blocks have been processed;
3-9. Take the region type corresponding to the maximum of $Num_1$, $Num_2$, $Num_3$ and $Num_4$ as the block type of the current first sub-block $B_{org}^{l}$ and the current second sub-block $B_{dis}^{l}$, denoted $p$, where $p \in \{1,2,3,4\}$: $p=1$ denotes a strong edge block, $p=2$ a weak edge block, $p=3$ a flat block and $p=4$ a texture block;
3-10. Let $l = l + 1$, take the next 8×8 sub-block of $\{L_{org}(x,y)\}$ as the current first sub-block and the next 8×8 sub-block of $\{L_{dis}(x,y)\}$ as the current second sub-block, and return to step 3-2, until all $\frac{W \times H}{8 \times 8}$ non-overlapping 8×8 sub-blocks of $\{L_{org}(x,y)\}$ and $\{L_{dis}(x,y)\}$ have been processed, giving the block types of all 8×8 sub-blocks of the two images;
3-11. Applying the same operations as steps 3-1 to 3-10 to the undistorted right viewpoint image $\{R_{org}(x,y)\}$ and the distorted right viewpoint image $\{R_{dis}(x,y)\}$, obtain the block types of all their 8×8 sub-blocks.
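The region detection of step 3 can be sketched as follows; the gradient operator is not specified in the text, so simple central differences are assumed here.

```python
import numpy as np

def classify_blocks(org, dis):
    """Label each non-overlapping 8x8 sub-block of an (org, dis) image pair:
    1 strong edge, 2 weak edge, 3 flat, 4 texture (steps 3-1 to 3-11)."""
    def grad(img):
        # |horizontal| + |vertical| gradient (central differences assumed).
        g = img.astype(np.float64)
        gx = np.zeros_like(g); gy = np.zeros_like(g)
        gx[:, 1:-1] = g[:, 2:] - g[:, :-2]
        gy[1:-1, :] = g[2:, :] - g[:-2, :]
        return np.abs(gx) + np.abs(gy)

    P_o, P_d = grad(org), grad(dis)
    H, W = org.shape
    types = np.zeros((H // 8, W // 8), dtype=int)
    for by in range(H // 8):
        for bx in range(W // 8):
            po = P_o[8 * by:8 * by + 8, 8 * bx:8 * bx + 8]
            pd = P_d[8 * by:8 * by + 8, 8 * bx:8 * bx + 8]
            T1, T2 = 0.12 * po.max(), 0.06 * po.max()    # step 3-3 thresholds
            strong = (po > T1) & (pd > T1)                         # step 3-4
            weak = ((po > T1) & (pd <= T1)) | ((pd > T1) & (po <= T1))  # 3-5
            flat = (po < T2) & (pd < T1) & ~strong & ~weak              # 3-6
            votes = [strong.sum(), weak.sum(), flat.sum()]
            votes.append(64 - sum(votes))       # remaining pixels: texture
            types[by, bx] = int(np.argmax(votes)) + 1  # step 3-9 majority vote
    return types
```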
The detailed process of step 4 is as follows:
4-1. Compute the spatial noise strength reflecting image quality of all 8×8 sub-blocks of block type $k$ in the distorted left viewpoint image $\{L_{dis}(x,y)\}$, denoted $\{fq_k(x_2,y_2)\}$. For the pixel at position $(x_2,y_2)$ within the type-$k$ 8×8 sub-blocks of $\{L_{dis}(x,y)\}$, its spatial noise strength reflecting image quality is denoted $fq_k(x_2,y_2)$:
$$fq_k(x_2,y_2) = \frac{1}{N_k}\sum_{(x_3,y_3)} \min\Big(\max\big(|L_{org}(x_3,y_3) - L_{dis}(x_3,y_3)| - J_L(x_3,y_3),\, 0\big),\, ST_k\Big)^2,$$
where $k \in \{p \mid 1 \le p \le 4\}$, $1 \le x_2 \le 8$, $1 \le y_2 \le 8$, $N_k$ is the number of type-$k$ 8×8 sub-blocks in $\{L_{dis}(x,y)\}$, $ST_k$ is the saturation threshold describing error perception, $\max(\cdot)$ and $\min(\cdot)$ take the maximum and minimum, and $(x_3,y_3)$ is the coordinate, in $\{L_{org}(x,y)\}$ and in the JND image $\{J_L(x,y)\}$, of the pixel at position $(x_2,y_2)$ of a type-$k$ sub-block, $1 \le x_3 \le W$, $1 \le y_3 \le H$; $L_{org}(x_3,y_3)$, $L_{dis}(x_3,y_3)$ and $J_L(x_3,y_3)$ are the pixel values at $(x_3,y_3)$ in the respective images, and $|\cdot|$ is the absolute value.
4-2. Express the spatial noise strengths of the 8×8 sub-blocks of all block types of $\{L_{dis}(x,y)\}$ as the set $\{fq_k(x_2,y_2) \mid 1 \le k \le 4\}$, and arrange all its elements in order to obtain the first feature vector, denoted $F_1$, whose dimension is 256.
4-3. Apply singular value decomposition to each 8×8 sub-block of $\{L_{org}(x,y)\}$ and of $\{L_{dis}(x,y)\}$, obtaining the singular value vector of every 8×8 sub-block of each image. Denote the singular value vector of the $l$-th 8×8 sub-block of $\{L_{org}(x,y)\}$ by $S_{org}^{l}$ and that of the $l$-th 8×8 sub-block of $\{L_{dis}(x,y)\}$ by $S_{dis}^{l}$, where the dimension of a singular value vector is 8 and $1 \le l \le \frac{W \times H}{8 \times 8}$.
4-4. Compute the spatial structure strength reflecting image quality of all type-$k$ 8×8 sub-blocks of $\{L_{dis}(x,y)\}$, denoted $fs_k$ (the explicit formula appears only as an image in the original; it aggregates, over the $N_k$ type-$k$ sub-blocks, the distance between the singular value vectors $S_{org}^{l'}$ and $S_{dis}^{l'}$ of corresponding sub-blocks), where $l'$ is the index, in $\{L_{org}(x,y)\}$ and in $\{J_L(x,y)\}$, of a type-$k$ 8×8 sub-block of $\{L_{dis}(x,y)\}$.
4-5. Express the spatial structure strengths of the 8×8 sub-blocks of all block types of $\{L_{dis}(x,y)\}$ as the set $\{fs_k \mid 1 \le k \le 4\}$, and arrange all its elements in order to obtain the second feature vector, denoted $F_2$, whose dimension is 32.
4-6. Combine the first feature vector $F_1$ and the second feature vector $F_2$ into a new feature vector, taken as the image-quality feature vector of $\{L_{dis}(x,y)\}$ and denoted $F_L$, $F_L = [F_1, F_2]$, whose dimension is 288; "[ ]" denotes vector concatenation: $[F_1, F_2]$ joins $F_1$ and $F_2$ into one new feature vector.
4-7. Applying the same operations as steps 4-1 to 4-6 to the distorted right viewpoint image $\{R_{dis}(x,y)\}$, obtain its image-quality feature vector, denoted $F_R$, whose dimension is 288.
4-8. Linearly weight the image-quality feature vector $F_L$ of $\{L_{dis}(x,y)\}$ and the image-quality feature vector $F_R$ of $\{R_{dis}(x,y)\}$ to obtain the image-quality feature vector of $S_{dis}$, denoted $F_q$: $F_q = w_L \times F_L + w_R \times F_R$, where $w_L$ and $w_R$ are the weights of the distorted left and right viewpoint images, $w_L + w_R = 1$.
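A sketch of the per-view feature extraction of step 4 follows. The noise term implements the stated formula for $fq_k$; the structure term $fs_k$ is only shown as an image in the original, so the average absolute difference of the 8-entry singular value vectors is assumed here (consistent with the stated dimension 32 = 4 types × 8).

```python
import numpy as np

def quality_features(org, dis, J, types, ST=30.0):
    """288-dim image-quality feature of one view: F1 (256) plus F2 (32)."""
    H, W = org.shape
    # Perceptually masked, saturated error (inner term of step 4-1).
    err = np.minimum(
        np.maximum(np.abs(org.astype(float) - dis.astype(float)) - J, 0.0), ST)
    fq = np.zeros((4, 8, 8))   # spatial noise strength per type and position
    fs = np.zeros((4, 8))      # structure strength per type (assumed form)
    count = np.zeros(4)
    for by in range(H // 8):
        for bx in range(W // 8):
            k = types[by, bx] - 1
            sl = np.s_[8 * by:8 * by + 8, 8 * bx:8 * bx + 8]
            fq[k] += err[sl] ** 2
            s_org = np.linalg.svd(org[sl].astype(float), compute_uv=False)
            s_dis = np.linalg.svd(dis[sl].astype(float), compute_uv=False)
            fs[k] += np.abs(s_org - s_dis)
            count[k] += 1
    count = np.maximum(count, 1.0)       # guard against absent block types
    fq /= count[:, None, None]           # the 1/N_k averaging of step 4-1
    fs /= count[:, None]
    return np.concatenate([fq.ravel(), fs.ravel()])   # F = [F1, F2]
```

For a distorted stereo pair, the two per-view vectors are then combined as in step 4-8: F_q = w_L * quality_features(L_org, L_dis, J_L, types_L) + w_R * quality_features(R_org, R_dis, J_R, types_R).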
The detailed process of step 5 is as follows:
5-1. Compute the absolute difference image of the undistorted views $\{L_{org}(x,y)\}$ and $\{R_{org}(x,y)\}$, the absolute difference image of the distorted views $\{L_{dis}(x,y)\}$ and $\{R_{dis}(x,y)\}$, and the absolute difference image of the JND images $\{J_L(x,y)\}$ and $\{J_R(x,y)\}$, denoted $\{D_{org}(x,y)\}$, $\{D_{dis}(x,y)\}$ and $\{\Delta J(x,y)\}$ respectively: $D_{org}(x,y) = |L_{org}(x,y) - R_{org}(x,y)|$, $D_{dis}(x,y) = |L_{dis}(x,y) - R_{dis}(x,y)|$, $\Delta J(x,y) = |J_L(x,y) - J_R(x,y)|$, where $D_{org}(x,y)$, $D_{dis}(x,y)$ and $\Delta J(x,y)$ denote the pixel values at $(x,y)$ in the respective images and $|\cdot|$ is the absolute value.
5-2. Applying the same operations as step 3, obtain the block type of each 8×8 sub-block of $\{D_{org}(x,y)\}$ and $\{D_{dis}(x,y)\}$.
5-3. Compute the spatial noise strength reflecting depth perception of all type-$k$ 8×8 sub-blocks of $\{D_{dis}(x,y)\}$, denoted $\{fd_k(x_2,y_2)\}$. For the pixel at position $(x_2,y_2)$ within the type-$k$ 8×8 sub-blocks of $\{D_{dis}(x,y)\}$, its spatial noise strength reflecting depth perception is denoted $fd_k(x_2,y_2)$:
$$fd_k(x_2,y_2) = \frac{1}{M_k}\sum_{(x_4,y_4)} \min\Big(\max\big(|D_{org}(x_4,y_4) - D_{dis}(x_4,y_4)| - \Delta J(x_4,y_4),\, 0\big),\, ST_k\Big)^2,$$
where $1 \le x_2 \le 8$, $1 \le y_2 \le 8$, $M_k$ is the number of type-$k$ 8×8 sub-blocks in $\{D_{dis}(x,y)\}$, $ST_k$ is the saturation threshold describing error perception, and $(x_4,y_4)$ is the coordinate, in $\{D_{org}(x,y)\}$ and in $\{\Delta J(x,y)\}$, of the pixel at position $(x_2,y_2)$ of a type-$k$ sub-block, $1 \le x_4 \le W$, $1 \le y_4 \le H$; $D_{org}(x_4,y_4)$, $D_{dis}(x_4,y_4)$ and $\Delta J(x_4,y_4)$ are the pixel values at $(x_4,y_4)$ in the respective images.
5-4. Express the spatial noise strengths of the 8×8 sub-blocks of all block types of $\{D_{dis}(x,y)\}$ as the set $\{fd_k(x_2,y_2) \mid 1 \le k \le 4\}$, and arrange all its elements in order to obtain the third feature vector, denoted $F_3$, whose dimension is 256.
5-5. Apply singular value decomposition to each 8×8 sub-block of $\{D_{org}(x,y)\}$ and of $\{D_{dis}(x,y)\}$, obtaining the singular value vector of every 8×8 sub-block of each image. Denote the singular value vector of the $l$-th 8×8 sub-block of $\{D_{org}(x,y)\}$ by $SD_{org}^{l}$ and that of the $l$-th 8×8 sub-block of $\{D_{dis}(x,y)\}$ by $SD_{dis}^{l}$, where the dimension of a singular value vector is 8.
5-6. Compute the spatial structure strength reflecting depth perception of all type-$k$ 8×8 sub-blocks of $\{D_{dis}(x,y)\}$, denoted $fs'_k$ (again given only as an image in the original; it aggregates, over the $M_k$ type-$k$ sub-blocks, the distance between the singular value vectors of corresponding sub-blocks), where $l''$ is the index, in $\{D_{org}(x,y)\}$ and in $\{\Delta J(x,y)\}$, of a type-$k$ 8×8 sub-block of $\{D_{dis}(x,y)\}$.
5-7. Express the spatial structure strengths of the 8×8 sub-blocks of all block types of $\{D_{dis}(x,y)\}$ as the set $\{fs'_k \mid 1 \le k \le 4\}$, and arrange all its elements in order to obtain the fourth feature vector, denoted $F_4$, whose dimension is 32.
5-8. Combine the third feature vector $F_3$ and the fourth feature vector $F_4$ into a new feature vector, taken as the depth-perception feature vector of $S_{dis}$ and denoted $F_s$, $F_s = [F_3, F_4]$, whose dimension is 288; "[ ]" denotes vector concatenation: $[F_3, F_4]$ joins $F_3$ and $F_4$ into one new feature vector.
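Since step 5 reuses the machinery of steps 3 and 4 on the absolute-difference images, it reduces to a few lines, given the classify_blocks and quality_features sketches above:

```python
import numpy as np

def depth_features(L_org, R_org, L_dis, R_dis, J_L, J_R, ST=30.0):
    """288-dim depth-perception feature F_s = [F3, F4] (steps 5-1 to 5-8)."""
    D_org = np.abs(L_org.astype(float) - R_org.astype(float))   # step 5-1
    D_dis = np.abs(L_dis.astype(float) - R_dis.astype(float))
    dJ = np.abs(J_L - J_R)
    types = classify_blocks(D_org, D_dis)                        # step 5-2
    return quality_features(D_org, D_dis, dJ, types, ST)         # steps 5-3..5-8
```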
The detailed process of step 9 is as follows:
9-1. Divide the distorted stereo images of the same distortion type in the distorted stereo image set into 5 mutually disjoint groups of subsets, and arbitrarily select 4 of the groups to form the training sample data set, denoted $\Omega_q$, $\{X_k, DMOS_k\} \in \Omega_q$, where $q$ is the number of distorted stereo images contained in $\Omega_q$, $X_k$ is the feature vector of the $k$-th distorted stereo image in $\Omega_q$, $DMOS_k$ is its difference mean opinion score, and $1 \le k \le q$.
9-2. Construct the regression function $f(X_k)$ of $X_k$, $f(X_k) = w^T \varphi(X_k) + b$ (reconstructed here from the standard support vector regression form; the original expression appears as an image), where $f(\cdot)$ denotes a function, $w$ is a weight vector, $w^T$ is the transpose of $w$, $b$ is a bias term, and $\varphi(X_k)$ is the kernel-induced mapping of the feature vector $X_k$ of the $k$-th distorted stereo image in $\Omega_q$. $D(X_k, X_l)$ is the kernel function of the support vector regression, a Gaussian (radial basis) function of the Euclidean distance $\|X_k - X_l\|$ with kernel parameter $\gamma$ (the exact expression is given only as an image in the original), where $X_l$ is the feature vector of the $l$-th distorted stereo image in $\Omega_q$; $\gamma$ reflects the range of the input sample values (the larger the range, the larger the value of $\gamma$), $\exp(\cdot)$ is the exponential function with base $e = 2.71828183$, and $\|\cdot\|$ is the Euclidean distance.
9-3. Using support vector regression, train on the feature vectors of all distorted stereo images in $\Omega_q$ so that the error between the regression function values and the difference mean opinion scores is minimized, fitting the optimal weight vector $w_{opt}$ and optimal bias term $b_{opt}$; denote the optimal combination by $(w_{opt}, b_{opt})$:
$$(w_{opt}, b_{opt}) = \arg\min_{(w,b) \in \Psi} \sum_{k=1}^{q} \big(f(X_k) - DMOS_k\big)^2,$$
where $\Psi$ is the set of all combinations of weight vector and bias term obtainable by training on the feature vectors of all distorted stereo images in $\Omega_q$, and $\arg\min$ takes the minimizing argument. Using the obtained $w_{opt}$ and $b_{opt}$, construct the support vector regression training model $f(X_{inp}) = (w_{opt})^T \varphi(X_{inp}) + b_{opt}$, where $X_{inp}$ is the input vector of the model, $(w_{opt})^T$ is the transpose of $w_{opt}$, and $\varphi(X_{inp})$ is the kernel-induced mapping of $X_{inp}$.
9-4. Using the support vector regression training model, test the distorted stereo images of the remaining group of subsets, predicting the objective quality evaluation value of each distorted stereo image in that group; the prediction for the $j$-th distorted stereo image in the group is denoted $Q_j$, $Q_j = f(X_j) = (w_{opt})^T \varphi(X_j) + b_{opt}$, where $X_j$ is the feature vector of the $j$-th distorted stereo image in the group and $\varphi(X_j)$ is its kernel-induced mapping.
9-5. Following steps 9-1 to 9-4, train on the distorted stereo images of each distortion type in the distorted stereo image set in turn, obtaining the objective quality evaluation prediction value of every distorted stereo image in the set.
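The training and testing of step 9 can be sketched with scikit-learn's SVR standing in for the unspecified support vector regression implementation; the 5-fold split follows step 9-1. Here X is an (n', 576) array of feature vectors of one distortion type and dmos the corresponding subjective scores.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.svm import SVR

def cross_validated_scores(X, dmos):
    """Predict an objective quality value Q_i for every distorted image of
    one distortion type: train on 4 of 5 disjoint folds, test on the rest."""
    Q = np.zeros(len(dmos))
    folds = KFold(n_splits=5, shuffle=True, random_state=0)
    for train_idx, test_idx in folds.split(X):
        model = SVR(kernel="rbf").fit(X[train_idx], dmos[train_idx])  # step 9-3
        Q[test_idx] = model.predict(X[test_idx])                      # step 9-4
    return Q
```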
In steps 4 and 8, when computing the feature vector reflecting image quality, the weights are chosen by distortion type: for a stereo image with JPEG compression distortion, $w_L = 0.50$, $w_R = 0.50$; with JPEG2000 compression distortion, $w_L = 0.15$, $w_R = 0.85$; with Gaussian blur distortion, $w_L = 0.10$, $w_R = 0.90$; with white noise distortion, $w_L = 0.20$, $w_R = 0.80$; with H.264 coding distortion, $w_L = 0.10$, $w_R = 0.90$.
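For reference, these distortion-specific weights can be kept in a small table; the keys below are labels chosen here for illustration only.

```python
# (w_L, w_R) per distortion type, from the values listed above.
VIEW_WEIGHTS = {
    "jpeg":     (0.50, 0.50),
    "jpeg2000": (0.15, 0.85),
    "gblur":    (0.10, 0.90),   # Gaussian blur
    "wn":       (0.20, 0.80),   # white noise
    "h264":     (0.10, 0.90),   # H.264 coding distortion
}
```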
Compared with the prior art, the invention has the following advantages:
1) The method takes into account that stereoscopic perception responds differently in different regions: the stereo image is divided into strong edge blocks, weak edge blocks, flat blocks and texture blocks, which are evaluated separately, while image quality and depth perception information are incorporated into the evaluation procedure, so the evaluation result conforms better to the human visual system.
2) The method obtains the minimum perceptible distortion image from the visual characteristics of the human eye, and forms the feature vector of the stereo image by computing the spatial noise strength and spatial structure strength of the different region blocks; the resulting feature vector information is highly stable and reflects the quality variation of stereo images well, improving the correlation between objective evaluation results and subjective perception.
Description of drawings
Fig. 1 is the overall block diagram of the method of the invention;
Fig. 2a is the left viewpoint image of the Akko stereo image (640 × 480);
Fig. 2b is the right viewpoint image of the Akko stereo image (640 × 480);
Fig. 3a is the left viewpoint image of the Altmoabit stereo image (1024 × 768);
Fig. 3b is the right viewpoint image of the Altmoabit stereo image (1024 × 768);
Fig. 4a is the left viewpoint image of the Balloons stereo image (1024 × 768);
Fig. 4b is the right viewpoint image of the Balloons stereo image (1024 × 768);
Fig. 5a is the left viewpoint image of the Doorflower stereo image (1024 × 768);
Fig. 5b is the right viewpoint image of the Doorflower stereo image (1024 × 768);
Fig. 6a is the left viewpoint image of the Kendo stereo image (1024 × 768);
Fig. 6b is the right viewpoint image of the Kendo stereo image (1024 × 768);
Fig. 7a is the left viewpoint image of the LeaveLaptop stereo image (1024 × 768);
Fig. 7b is the right viewpoint image of the LeaveLaptop stereo image (1024 × 768);
Fig. 8a is the left viewpoint image of the Lovebird1 stereo image (1024 × 768);
Fig. 8b is the right viewpoint image of the Lovebird1 stereo image (1024 × 768);
Fig. 9a is the left viewpoint image of the Newspaper stereo image (1024 × 768);
Fig. 9b is the right viewpoint image of the Newspaper stereo image (1024 × 768);
Fig. 10a is the left viewpoint image of the Puppy stereo image (720 × 480);
Fig. 10b is the right viewpoint image of the Puppy stereo image (720 × 480);
Fig. 11a is the left viewpoint image of the Soccer2 stereo image (720 × 480);
Fig. 11b is the right viewpoint image of the Soccer2 stereo image (720 × 480);
Fig. 12a is the left viewpoint image of the Horse stereo image (720 × 480);
Fig. 12b is the right viewpoint image of the Horse stereo image (720 × 480);
Fig. 13a is the left viewpoint image of the Xmas stereo image (640 × 480);
Fig. 13b is the right viewpoint image of the Xmas stereo image (640 × 480).
Embodiment
The present invention is described in further detail below with reference to the accompanying drawings.
The objective stereo image quality evaluation method based on visual perception proposed by the invention has the overall block diagram shown in Fig. 1 and mainly comprises the following steps:
1. Let $S_{org}$ be the original undistorted stereo image and $S_{dis}$ the distorted stereo image to be evaluated. Denote the left viewpoint image of $S_{org}$ by $\{L_{org}(x,y)\}$ and its right viewpoint image by $\{R_{org}(x,y)\}$; denote the left viewpoint image of $S_{dis}$ by $\{L_{dis}(x,y)\}$ and its right viewpoint image by $\{R_{dis}(x,y)\}$. Here $(x,y)$ is the coordinate of a pixel in the left and right viewpoint images, $1 \le x \le W$, $1 \le y \le H$, where $W$ and $H$ are the width and height of the viewpoint images, and $L_{org}(x,y)$, $R_{org}(x,y)$, $L_{dis}(x,y)$ and $R_{dis}(x,y)$ denote the pixel values at coordinate $(x,y)$ in the respective images.
2. Human visual system (HVS) characteristics show that the human eye is insensitive to attributes or noise whose change in an image is small, unless the change intensity of the attribute or noise exceeds a certain threshold; this threshold is the just noticeable distortion (JND). The visual masking effect of the human eye is a local effect, influenced by factors such as background illuminance and texture complexity: the brighter the background and the more complex the texture, the higher the threshold. The invention therefore uses the visual masking effect of human vision with respect to background illumination and texture to extract the minimum perceptible distortion images of the undistorted left viewpoint image $\{L_{org}(x,y)\}$ and the undistorted right viewpoint image $\{R_{org}(x,y)\}$, denoted $\{J_L(x,y)\}$ and $\{J_R(x,y)\}$ respectively, where $J_L(x,y)$ and $J_R(x,y)$ denote the pixel values at coordinate $(x,y)$.
In this specific embodiment, the detailed process of step 2 is:
2-1. Compute the visual masking threshold map of background illumination for the undistorted left viewpoint image $\{L_{org}(x,y)\}$, denoted $\{T_l(x,y)\}$, where $T_l(x,y)$ is the visual threshold of the background-illumination masking effect at pixel $(x,y)$ and is a function of the average luminance of all pixels in the 5×5 window centered on $(x,y)$ (the explicit piecewise formula appears only as an image in the original document). In practice, windows of other sizes can also be used, but extensive experiments show that a 5×5 window gives the best results.
2-2. Compute the visual masking threshold map of texture for $\{L_{org}(x,y)\}$, denoted $\{T_t(x,y)\}$, $T_t(x,y) = \eta \times G(x,y) \times W_e(x,y)$, where $T_t(x,y)$ is the visual threshold of the texture masking effect at pixel $(x,y)$, $\eta$ is a control factor greater than 0 (in this embodiment $\eta = 0.05$), $G(x,y)$ is the maximum weighted average obtained by directional high-pass filtering of the pixel at $(x,y)$ in $\{L_{org}(x,y)\}$, and $W_e(x,y)$ is the edge weighting obtained by Gaussian low-pass filtering of the pixel at $(x,y)$ in the edge image of $\{L_{org}(x,y)\}$.
2-3. Merge the two threshold maps $\{T_l(x,y)\}$ and $\{T_t(x,y)\}$ to obtain the minimum perceptible distortion image of $\{L_{org}(x,y)\}$, denoted $\{J_L(x,y)\}$: $J_L(x,y) = T_l(x,y) + T_t(x,y) - C_{l,t} \times \min\{T_l(x,y), T_t(x,y)\}$, where $C_{l,t}$ controls the overlap of the two masking effects, $0 < C_{l,t} < 1$ (in this embodiment $C_{l,t} = 0.5$), and $\min\{\}$ takes the minimum.
2-4. Applying the same operations as steps 2-1 to 2-3 to the undistorted right viewpoint image $\{R_{org}(x,y)\}$, obtain its minimum perceptible distortion image, denoted $\{J_R(x,y)\}$.
3. Since the human visual system has different sensitivities to edges, texture and smooth regions of an image, the responsiveness of stereoscopic perception should also differ between regions; in stereo image quality evaluation, the contributions of different regions should therefore be considered separately. Using a region detection algorithm, the invention obtains the block type of each 8×8 sub-block of the undistorted left viewpoint image $\{L_{org}(x,y)\}$ and the distorted left viewpoint image $\{L_{dis}(x,y)\}$, and of the undistorted right viewpoint image $\{R_{org}(x,y)\}$ and the distorted right viewpoint image $\{R_{dis}(x,y)\}$, denoted $p$, where $p \in \{1,2,3,4\}$: $p=1$ denotes a strong edge block, $p=2$ a weak edge block, $p=3$ a flat block and $p=4$ a texture block.
In this specific embodiment, the detailed process of the region detection algorithm in step 3 is:
3-1. Divide the undistorted left viewpoint image $\{L_{org}(x,y)\}$ and the distorted left viewpoint image $\{L_{dis}(x,y)\}$ into $\frac{W \times H}{8 \times 8}$ non-overlapping 8×8 sub-blocks each. Define the $l$-th 8×8 sub-block of $\{L_{org}(x,y)\}$ as the current first sub-block, denoted $B_{org}^{l}$, and the $l$-th 8×8 sub-block of $\{L_{dis}(x,y)\}$ as the current second sub-block, denoted $B_{dis}^{l}$. Here $(x_2, y_2)$ is the coordinate of a pixel within the current first and second sub-blocks, $1 \le x_2 \le 8$, $1 \le y_2 \le 8$, and $B_{org}^{l}(x_2,y_2)$ and $B_{dis}^{l}(x_2,y_2)$ denote the pixel values at $(x_2,y_2)$ in the respective sub-blocks.
3-2. Compute the gradient value of every pixel in the current first sub-block $B_{org}^{l}$ and the current second sub-block $B_{dis}^{l}$. For the pixel at $(x_2', y_2')$ in $B_{org}^{l}$, its gradient value is denoted $P_o(x_2', y_2')$, $P_o(x_2', y_2') = |G_{ox}(x_2', y_2')| + |G_{oy}(x_2', y_2')|$; for the pixel at $(x_2', y_2')$ in $B_{dis}^{l}$, its gradient value is denoted $P_d(x_2', y_2')$, $P_d(x_2', y_2') = |G_{dx}(x_2', y_2')| + |G_{dy}(x_2', y_2')|$, where $1 \le x_2' \le 8$, $1 \le y_2' \le 8$, $G_{ox}$ and $G_{oy}$ are the horizontal and vertical gradient values of the pixel in the current first sub-block, $G_{dx}$ and $G_{dy}$ are the horizontal and vertical gradient values of the pixel in the current second sub-block, and $|\cdot|$ is the absolute value.
3-3. Find the maximum gradient value over all pixels of the current first sub-block $B_{org}^{l}$, denoted $G_{max}$, and from it compute the first and second gradient thresholds, denoted $T_1$ and $T_2$: $T_1 = 0.12 \times G_{max}$, $T_2 = 0.06 \times G_{max}$.
3-4. For the pixels at $(x_2', y_2')$ in $B_{org}^{l}$ and $B_{dis}^{l}$, test whether $P_o(x_2', y_2') > T_1$ and $P_d(x_2', y_2') > T_1$. If so, the two pixels belong to a strong edge region, $Num_1 = Num_1 + 1$, and execution continues with step 3-8; otherwise continue with step 3-5. The initial value of $Num_1$ is 0.
3-5. Test whether $P_o(x_2', y_2') > T_1$ and $P_d(x_2', y_2') \le T_1$, or $P_d(x_2', y_2') > T_1$ and $P_o(x_2', y_2') \le T_1$. If so, the two pixels belong to a weak edge region, $Num_2 = Num_2 + 1$, and execution continues with step 3-8; otherwise continue with step 3-6. The initial value of $Num_2$ is 0.
3-6. Test whether $P_o(x_2', y_2') < T_2$ and $P_d(x_2', y_2') < T_1$. If so, the two pixels belong to a flat region, $Num_3 = Num_3 + 1$, and execution continues with step 3-8; otherwise continue with step 3-7. The initial value of $Num_3$ is 0.
3-7. Otherwise, the two pixels belong to a texture region, $Num_4 = Num_4 + 1$. The initial value of $Num_4$ is 0.
3-8. Return to step 3-4 to process the remaining pixels of the current first sub-block $B_{org}^{l}$ and the current second sub-block $B_{dis}^{l}$, until all 8×8 pixels of both sub-blocks have been processed.
3-9. Take the region type corresponding to the maximum of $Num_1$, $Num_2$, $Num_3$ and $Num_4$ as the block type of the current first sub-block $B_{org}^{l}$ and the current second sub-block $B_{dis}^{l}$, denoted $p$, where $p \in \{1,2,3,4\}$: $p=1$ denotes a strong edge block, $p=2$ a weak edge block, $p=3$ a flat block and $p=4$ a texture block.
3-10. Let $l = l + 1$, take the next 8×8 sub-block of $\{L_{org}(x,y)\}$ as the current first sub-block and the next 8×8 sub-block of $\{L_{dis}(x,y)\}$ as the current second sub-block, and return to step 3-2, until all $\frac{W \times H}{8 \times 8}$ non-overlapping 8×8 sub-blocks of $\{L_{org}(x,y)\}$ and $\{L_{dis}(x,y)\}$ have been processed, giving the block types of all 8×8 sub-blocks of the two images.
3-11. Applying the same operations as steps 3-1 to 3-10 to the undistorted right viewpoint image $\{R_{org}(x,y)\}$ and the distorted right viewpoint image $\{R_{dis}(x,y)\}$, obtain the block types of all their 8×8 sub-blocks.
4. Since stereo image quality is directly related to the quality of the left and right viewpoint images, introducing visual perception characteristics such as visual sensitivity, multi-channel behaviour and masking effects into image quality evaluation can improve the correlation between the evaluation model and subjective scores; considering distortion perceptibility and perceptual saturation, the minimum perceptible distortion image is used as the visual perception characteristic. Based on the JND images $\{J_L(x,y)\}$ and $\{J_R(x,y)\}$ of the undistorted left and right viewpoint images, the invention computes, for the 8×8 sub-blocks of each block type of the distorted left viewpoint image $\{L_{dis}(x,y)\}$ and of the distorted right viewpoint image $\{R_{dis}(x,y)\}$, the spatial noise strength and spatial structure strength reflecting image quality, obtaining the image-quality feature vectors of the two distorted viewpoint images; these two feature vectors are then linearly weighted to obtain the image-quality feature vector of $S_{dis}$, denoted $F_q$.
In this specific embodiment, the detailed process of step 4 is:
4.-1, the left visual point image { L of calculated distortion Dis(x, y) } in all block types be the spatial noise intensity that is used to reflect picture quality of 8 * 8 sub-pieces of k, be designated as { fq k(x 2, y 2), for the left visual point image { L of distortion Dis(x, y) } in block type be that coordinate position is (x in 8 * 8 sub-pieces of k 2, y 2) pixel, use it for the reflection picture quality spatial noise intensity be designated as fq k(x 2, y 2), Fq k ( x 2 , y 2 ) = 1 N k Σ ( x 3 , y 3 ) Min ( Max ( | L Org ( x 3 , y 3 ) - L Dis ( x 3 , y 3 ) | - J L ( x 3 , y 3 ) , 0 ) , ST k ) 2 , Wherein, k ∈ { p|1≤p≤4}, fq k(x 2, y 2) expression distortion left visual point image { L Dis(x, y) } in block type be that coordinate position is (x in 8 * 8 sub-pieces of k 2, y 2) the spatial noise intensity that is used to reflect picture quality of pixel, 1≤x 2≤8,1≤y 2≤8, N kLeft visual point image { the L of expression distortion Dis(x, y) } in block type be the number of 8 * 8 sub-pieces of k, ST kFor describing the saturation threshold value of error perception, in the present embodiment, ST k=30, max () is for getting max function, and min () is for getting minimum value function, (x 3, y 3) expression distortion left visual point image { L Dis(x, y) } in block type be that coordinate position is (x in 8 * 8 sub-pieces of k 2, y 2) pixel at undistorted left visual point image { L Org(x, y) } or undistorted left visual point image { L Org(x, y) } minimum discernable distorted image { J L(x, y) } in coordinate position, 1≤x 3≤W, 1≤y 3≤H, L Org(x 3, y 3) expression { L Org(x, y) } in coordinate position be (x 3, y 3) the pixel value of pixel, L Dis(x 3, y 3) expression { L Dis(x, y) } in coordinate position be (x 3, y 3) the pixel value of pixel, J L(x 3, y 3) expression { J L(x, y) } in coordinate position be (x 3, y 3) the pixel value of pixel, " || " is for asking absolute value sign.
4.-2, Represent the spatial noise intensities, used to reflect image quality, of the 8×8 sub-blocks of the various block types in {L_dis(x,y)} as the set {fq_k(x_2,y_2) | 1 ≤ k ≤ 4}, then arrange all elements of {fq_k(x_2,y_2) | 1 ≤ k ≤ 4} in order to obtain the first feature vector, denoted F_1, whose dimension is 256.
4.-3, Apply singular value decomposition to each 8×8 sub-block of the undistorted left viewpoint image {L_org(x,y)} and of the distorted left viewpoint image {L_dis(x,y)}, obtaining the singular value vector corresponding to each 8×8 sub-block in each image; denote the singular value vector of the l-th 8×8 sub-block in {L_org(x,y)} as $S_{org}^{l}$ and the singular value vector of the l-th 8×8 sub-block in {L_dis(x,y)} as $S_{dis}^{l}$, where the dimension of a singular value vector is 8 and $1 \le l \le \frac{W \times H}{8 \times 8}$.
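For illustration, the per-sub-block singular value decomposition of step 4.-3 can be sketched as follows; numpy.linalg.svd with compute_uv=False returns the 8 singular values of an 8×8 block, and the helper name and array layout are assumptions of this sketch.

```python
import numpy as np

def block_singular_values(img):
    """Singular value vector (length 8) of every 8x8 sub-block of img."""
    H, W = img.shape
    svs = np.empty((H // 8, W // 8, 8))
    for by in range(H // 8):
        for bx in range(W // 8):
            block = img[by*8:by*8+8, bx*8:bx*8+8]
            svs[by, bx] = np.linalg.svd(block, compute_uv=False)
    return svs
```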
4.-4, Compute the space structure intensity, used to reflect image quality, of all 8×8 sub-blocks of block type k in {L_dis(x,y)}, computed from the singular value vectors $S_{org}^{l'}$ and $S_{dis}^{l'}$ of corresponding sub-blocks [formula rendered as an image in the original], where l' denotes the sequence number, within {L_org(x,y)} or within its minimum discernable distorted image {J_L(x,y)}, of an 8×8 sub-block of block type k in {L_dis(x,y)}.
4.-5, Represent the space structure intensities, used to reflect image quality, of the 8×8 sub-blocks of the various block types in {L_dis(x,y)} as a set, then arrange all elements of this set in order to obtain the second feature vector, denoted F_2, whose dimension is 32.
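Because the exact space structure intensity expression survives only as an image, the sketch below assumes a plausible form, the mean absolute difference between corresponding singular value vectors over all sub-blocks of type k; this is consistent with F_2 having dimension 4 × 8 = 32, but the patented formula may differ.

```python
import numpy as np

def space_structure_intensity(svs_org, svs_dis, block_type, k):
    """Assumed fs_k: mean |S_org - S_dis| over 8x8 sub-blocks of type k.
    svs_org, svs_dis: (H//8, W//8, 8) arrays from block_singular_values()."""
    mask = (block_type == k)
    if not mask.any():
        return np.zeros(8)              # no sub-block of this type
    return np.abs(svs_org[mask] - svs_dis[mask]).mean(axis=0)
```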
4.-6, Form the first feature vector F_1 and the second feature vector F_2 into a new feature vector, taken as the feature vector of {L_dis(x,y)} used to reflect image quality and denoted F_L, F_L = [F_1, F_2], where the dimension of F_L is 288, "[ ]" is the vector representation symbol, and [F_1, F_2] denotes connecting F_1 and F_2 to form one new feature vector.
4.-7, Apply the same operations as steps 4.-1 to 4.-6 to the distorted right viewpoint image {R_dis(x,y)} to obtain its feature vector used to reflect image quality, denoted F_R, whose dimension is 288.
4.-8, Linearly weight the feature vector F_L of {L_dis(x,y)} used to reflect image quality and the feature vector F_R of {R_dis(x,y)} used to reflect image quality to obtain the feature vector of S_dis used to reflect image quality, denoted F_q, F_q = w_L × F_L + w_R × F_R, where w_L denotes the weight proportion of the distorted left viewpoint image {L_dis(x,y)}, w_R denotes the weight proportion of the distorted right viewpoint image {R_dis(x,y)}, and w_L + w_R = 1.
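Steps 4.-6 to 4.-8 amount to a concatenation followed by a convex combination; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def fuse_quality_features(F1, F2, F1_r, F2_r, w_L, w_R):
    """F_L = [F1, F2], F_R = [F1_r, F2_r], F_q = w_L*F_L + w_R*F_R."""
    assert abs(w_L + w_R - 1.0) < 1e-9          # weights must sum to 1
    F_L = np.concatenate([F1, F2])              # 256 + 32 = 288 dimensions
    F_R = np.concatenate([F1_r, F2_r])
    return w_L * F_L + w_R * F_R
```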
5. Existing research shows that when the difference between the absolute difference image of the undistorted left and right viewpoint images and the absolute difference image of the distorted left and right viewpoint images exceeds a certain threshold, the decline in depth perception becomes readily noticeable to the human eye. The depth perception of a stereo image can therefore be estimated by the similarity between these two absolute difference images: the more similar the absolute difference images, the stronger the depth perception. Accordingly, the present invention, based on the minimum discernable distorted image {J_L(x,y)} of the undistorted left viewpoint image {L_org(x,y)} and the minimum discernable distorted image {J_R(x,y)} of the undistorted right viewpoint image {R_org(x,y)}, computes the spatial noise intensity used to reflect depth perception and the space structure intensity used to reflect depth perception of the 8×8 sub-blocks of the various block types in the absolute difference image of the distorted left viewpoint image {L_dis(x,y)} and the distorted right viewpoint image {R_dis(x,y)}, obtaining the feature vector of S_dis used to reflect depth perception, denoted F_s.
In this specific embodiment, the detailed process of step 5. is:
5.-1, Compute the absolute difference image of the undistorted left viewpoint image {L_org(x,y)} and the undistorted right viewpoint image {R_org(x,y)}, the absolute difference image of the distorted left viewpoint image {L_dis(x,y)} and the distorted right viewpoint image {R_dis(x,y)}, and the absolute difference image of the minimum discernable distorted images {J_L(x,y)} and {J_R(x,y)}, denoted {D_org(x,y)}, {D_dis(x,y)} and {ΔJ(x,y)}, respectively: D_org(x,y) = |L_org(x,y) − R_org(x,y)|, D_dis(x,y) = |L_dis(x,y) − R_dis(x,y)|, ΔJ(x,y) = |J_L(x,y) − J_R(x,y)|, where D_org(x,y), D_dis(x,y) and ΔJ(x,y) denote the pixel values at coordinate position (x,y) in {D_org(x,y)}, {D_dis(x,y)} and {ΔJ(x,y)}, and "|·|" denotes the absolute value.
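These three maps are plain element-wise operations; in NumPy (a sketch, with illustrative names):

```python
import numpy as np

def absolute_difference_images(L_org, R_org, L_dis, R_dis, J_L, J_R):
    """D_org, D_dis and dJ of step 5.-1: element-wise |left - right|."""
    return (np.abs(L_org - R_org),   # D_org
            np.abs(L_dis - R_dis),   # D_dis
            np.abs(J_L - J_R))       # dJ, difference of the JND maps
```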
5.-2, Adopt the same operations as step 3. to obtain the block type of each 8×8 sub-block in {D_org(x,y)} and in {D_dis(x,y)}.
5.-3, Compute the spatial noise intensity, used to reflect depth perception, of all 8×8 sub-blocks of block type k in {D_dis(x,y)}, denoted {fd_k(x_2,y_2)}; for the pixel at coordinate position (x_2,y_2) within the 8×8 sub-blocks of block type k in {D_dis(x,y)}, its spatial noise intensity used to reflect depth perception is denoted fd_k(x_2,y_2):

$$fd_k(x_2,y_2)=\frac{1}{M_k}\sum_{(x_4,y_4)}\min\Big(\max\big(\,|D_{org}(x_4,y_4)-D_{dis}(x_4,y_4)|-\Delta J(x_4,y_4),\;0\big),\;ST_k\Big)^2$$

where 1 ≤ x_2 ≤ 8, 1 ≤ y_2 ≤ 8; M_k denotes the number of 8×8 sub-blocks of block type k in {D_dis(x,y)}; ST_k is the saturation threshold describing error perception; (x_4,y_4) denotes the coordinate position, within {D_org(x,y)} and {ΔJ(x,y)}, of the pixel at position (x_2,y_2) in an 8×8 sub-block of block type k, with 1 ≤ x_4 ≤ W and 1 ≤ y_4 ≤ H; and D_org(x_4,y_4), D_dis(x_4,y_4) and ΔJ(x_4,y_4) denote the pixel values at (x_4,y_4) in {D_org(x,y)}, {D_dis(x,y)} and {ΔJ(x,y)}, respectively.
5.-4, Represent the spatial noise intensities, used to reflect depth perception, of the 8×8 sub-blocks of the various block types in {D_dis(x,y)} as the set {fd_k(x_2,y_2) | 1 ≤ k ≤ 4}, then arrange all elements of {fd_k(x_2,y_2) | 1 ≤ k ≤ 4} in order to obtain the third feature vector, denoted F_3, whose dimension is 256.
5.-5, Apply singular value decomposition to each 8×8 sub-block of {D_org(x,y)} and of {D_dis(x,y)}, obtaining the singular value vector corresponding to each 8×8 sub-block in each image; denote the singular value vector of the l-th 8×8 sub-block in {D_org(x,y)} as $S_{D,org}^{l}$ and that of the l-th 8×8 sub-block in {D_dis(x,y)} as $S_{D,dis}^{l}$, where the dimension of a singular value vector is 8 and $1 \le l \le \frac{W \times H}{8 \times 8}$.
5.-6, Compute the space structure intensity, used to reflect depth perception, of all 8×8 sub-blocks of block type k in {D_dis(x,y)}, computed from the singular value vectors $S_{D,org}^{l''}$ and $S_{D,dis}^{l''}$ of corresponding sub-blocks [formula rendered as an image in the original], where l'' denotes the sequence number, within {D_org(x,y)} or {ΔJ(x,y)}, of an 8×8 sub-block of block type k in {D_dis(x,y)}.
5.-7, Represent the space structure intensities, used to reflect depth perception, of the 8×8 sub-blocks of the various block types in {D_dis(x,y)} as a set, then arrange all elements of this set in order to obtain the fourth feature vector, denoted F_4, whose dimension is 32.
5.-8, Form the third feature vector F_3 and the fourth feature vector F_4 into a new feature vector, taken as the feature vector of S_dis used to reflect depth perception and denoted F_s, F_s = [F_3, F_4], where the dimension of F_s is 288, "[ ]" is the vector representation symbol, and [F_3, F_4] denotes connecting F_3 and F_4 to form one new feature vector.
6. Form the feature vector F_q of S_dis used to reflect image quality and the feature vector F_s used to reflect depth perception into a new feature vector, taken as the feature vector of S_dis and denoted X, X = [F_q, F_s], where "[ ]" is the vector representation symbol and [F_q, F_s] denotes connecting F_q and F_s to form one new feature vector.
7. Adopt n undistorted stereo images and establish their distorted stereo image set under different distortion levels of different distortion types, this set comprising several distorted stereo images; use a subjective quality evaluation method to obtain the mean subjective score difference of each distorted stereo image in the set, denoted DMOS, DMOS = 100 − MOS, where MOS denotes the mean subjective score, DMOS ∈ [0, 100], and n ≥ 1.
In the present embodiment, because the test stereo images are obtained through H.264 coding, the distortion types of the training samples and test samples in the support vector regression should be consistent. The twelve undistorted stereo images (n = 12) formed by Fig. 2a and Fig. 2b, Fig. 3a and Fig. 3b, Fig. 4a and Fig. 4b, Fig. 5a and Fig. 5b, Fig. 6a and Fig. 6b, Fig. 7a and Fig. 7b, Fig. 8a and Fig. 8b, Fig. 9a and Fig. 9b, Fig. 10a and Fig. 10b, Fig. 11a and Fig. 11b, Fig. 12a and Fig. 12b, and Fig. 13a and Fig. 13b are used to establish their distorted stereo image set under different distortion levels of the H.264 coding distortion type; this distorted stereo image set contains 72 distorted stereo images.
8. Using the same method as for computing the feature vector X of S_dis, compute the feature vector of each distorted stereo image in the distorted stereo image set; the feature vector of the i-th distorted stereo image in the set is denoted X_i, where 1 ≤ i ≤ n' and n' denotes the number of distorted stereo images contained in the distorted stereo image set.
In this specific embodiment, according to the characteristic that the stereoscopic visual masking effect of the human eye is inconsistent across distortion types, different weight proportions are set for the left and right viewpoint images of stereo images of different distortion types. When computing the feature vector used to reflect image quality of a stereo image with JPEG compression distortion, take w_L = 0.50, w_R = 0.50; with JPEG2000 compression distortion, take w_L = 0.15, w_R = 0.85; with Gaussian blur distortion, take w_L = 0.10, w_R = 0.90; with white noise distortion, take w_L = 0.20, w_R = 0.80; and with H.264 coding distortion, take w_L = 0.10, w_R = 0.90.
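These per-distortion-type weight pairs form a small lookup table; a sketch with illustrative type keys:

```python
# (w_L, w_R) per distortion type, taken from the embodiment above.
VIEW_WEIGHTS = {
    "jpeg":     (0.50, 0.50),
    "jpeg2000": (0.15, 0.85),
    "gblur":    (0.10, 0.90),   # Gaussian blur
    "wn":       (0.20, 0.80),   # white noise
    "h264":     (0.10, 0.90),
}

w_L, w_R = VIEW_WEIGHTS["h264"]  # e.g. for H.264 coding distortion
```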
9. Because the feature vector of a distorted stereo image is a high-dimensional vector, a linear decision function must be constructed in a high-dimensional space to realize the nonlinear decision function of the original space, and support vector regression (SVR) is a good method for realizing this nonlinear high-dimensional space transformation. The method of the present invention therefore adopts support vector regression to train the feature vectors of all distorted stereo images of the same distortion type in the distorted stereo image set, uses the support vector regression training model obtained through training to test each distorted stereo image of the same distortion type, and computes the objective quality evaluation prediction value of each distorted stereo image of the same distortion type in the set; the objective quality evaluation prediction value of the i-th distorted stereo image in the set is denoted Q_i, Q_i = f(X_i), where f(·) is the function representation form, Q_i = f(X_i) means that Q_i is a function of X_i, 1 ≤ i ≤ n', and n' denotes the number of distorted stereo images contained in the distorted stereo image set.
In this specific embodiment, the detailed process of step 9. is:
9.-1, Divide all distorted stereo images of the same distortion type in the distorted stereo image set into 5 mutually disjoint groups of subsets, and arbitrarily select 4 of these groups to form the training sample data set, denoted Ω_q, with {X_k, DMOS_k} ∈ Ω_q, where q denotes the number of distorted stereo images contained in Ω_q, X_k denotes the feature vector of the k-th distorted stereo image in Ω_q, DMOS_k denotes the mean subjective score difference of the k-th distorted stereo image in Ω_q, and 1 ≤ k ≤ q.
9.-2, Construct the regression function f(X_k) of X_k, $f(X_k) = w^{T}\,\phi(X_k) + b$, where f(·) is the function representation form, w is a weight vector, w^T is the transpose of w, b is a bias term, φ(X_k) denotes the linear (feature-space) mapping of the feature vector X_k of the k-th distorted stereo image in the training sample data set Ω_q, and D(X_k, X_l) is the kernel function in the support vector regression [kernel expression rendered as an image in the original], in which X_l is the feature vector of the l-th distorted stereo image in Ω_q and γ is the kernel parameter, reflecting the range of the input sample values: the larger the range of the sample values, the larger the value of γ; exp(·) denotes the exponential function with base e, e = 2.71828183, and "‖·‖" denotes the Euclidean distance.
In the present embodiment, the γ values for JPEG compression distortion, JPEG 2000 compression distortion, Gaussian blur distortion, white noise distortion and H.264 coding distortion are 42, 52, 54, 130 and 116, respectively.
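Since the kernel involves exp(·) and the Euclidean distance ‖X_k − X_l‖ with a parameter γ that grows with the sample-value range, a radial basis form is implied; the sketch below assumes the scaling exp(−‖X_k − X_l‖²/γ²), though the exponent in the patented kernel, rendered as an image, may differ.

```python
import numpy as np

def rbf_kernel(Xk, Xl, gamma):
    """Assumed kernel D(Xk, Xl) = exp(-||Xk - Xl||^2 / gamma^2)."""
    d2 = np.sum((Xk - Xl) ** 2)     # squared Euclidean distance
    return np.exp(-d2 / gamma ** 2)
```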
9.-3, Adopt support vector regression to train the feature vectors of all distorted stereo images in the training sample data set Ω_q so that the error between the regression function values obtained through training and the mean subjective score differences is minimized; fitting yields the optimal weight vector w_opt and the optimal bias term b_opt, whose combination is denoted (w_opt, b_opt):

$$(w^{opt},\,b^{opt}) = \arg\min_{(w,b)\in\Psi}\;\sum_{k=1}^{q}\big(f(X_k)-DMOS_k\big)^2$$

The obtained optimal weight vector w_opt and optimal bias term b_opt are used to construct the support vector regression training model, denoted $f(X_{inp}) = (w^{opt})^{T}\,\phi(X_{inp}) + b^{opt}$, where Ψ denotes the set of all weight-vector and bias-term combinations over the training of the feature vectors of all distorted stereo images in Ω_q, arg min denotes taking the minimizing argument, X_inp denotes the input vector of the support vector regression training model, (w_opt)^T is the transpose of w_opt, and φ(X_inp) denotes the linear mapping of X_inp.
9.-4, According to the support vector regression training model, test each distorted stereo image in the remaining 1 group of subsets, predicting the objective quality evaluation prediction value of each distorted stereo image in this subset; for the j-th distorted stereo image in this subset, its objective quality evaluation prediction value is denoted Q_j, $Q_j = f(X_j) = (w^{opt})^{T}\,\phi(X_j) + b^{opt}$, where X_j denotes the feature vector of the j-th distorted stereo image in this subset and φ(X_j) denotes the linear mapping of X_j.
9.-5, Following the process of steps 9.-1 to 9.-4, train on the distorted stereo images of each distortion type in the distorted stereo image set in turn, obtaining the objective quality evaluation prediction value of each distorted stereo image in the set.
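Steps 9.-1 to 9.-5 amount to a per-distortion-type 5-fold train/test protocol. The sketch below uses scikit-learn's SVR as a stand-in, which is an assumption (the patent names no implementation, and the mapping of its γ onto scikit-learn's gamma parameter is likewise assumed); features, dmos and types are hypothetical NumPy arrays.

```python
import numpy as np
from sklearn.svm import SVR

def predict_quality(features, dmos, types, gamma_by_type, n_folds=5, seed=0):
    """Per-type 5-fold SVR train/test, in the spirit of steps 9.-1 to 9.-5."""
    rng = np.random.default_rng(seed)
    pred = np.full(len(dmos), np.nan)
    for t in np.unique(types):
        idx = np.flatnonzero(types == t)        # images of this distortion type
        rng.shuffle(idx)
        folds = np.array_split(idx, n_folds)    # 5 mutually disjoint subsets
        for i, test in enumerate(folds):
            train = np.concatenate([f for j, f in enumerate(folds) if j != i])
            # Assumed correspondence: exp(-||x-x'||^2 / g^2) is an RBF kernel
            # with sklearn gamma = 1 / g^2.
            model = SVR(kernel="rbf", gamma=1.0 / gamma_by_type[t] ** 2)
            model.fit(features[train], dmos[train])
            pred[test] = model.predict(features[test])
    return pred
```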
The 12 undistorted stereo images shown in Fig. 2a to Fig. 13b are used to analyze the correlation between the objective image quality evaluation prediction values obtained by this embodiment and the mean subjective score differences for 312 distorted stereo images under varying degrees of JPEG compression, JPEG2000 compression, Gaussian blur, white noise and H.264 coding distortion. Here, 2 objective parameters commonly used for evaluating image quality evaluation methods serve as evaluation indices: the Pearson correlation coefficient (CC) under the nonlinear regression condition and the Spearman rank-order correlation coefficient (ROCC). CC reflects the accuracy of the objective model for evaluating distorted stereo images, and ROCC reflects its monotonicity. The objective image quality evaluation prediction values of the distorted stereo images computed by this embodiment are fitted with a four-parameter logistic function nonlinear fit; the higher the CC and ROCC values, the better the correlation between the objective evaluation method and the mean subjective score differences. Table 1 lists the correlation between the image quality evaluation prediction values of the distorted stereo images obtained by this embodiment and the subjective scores. As the data in Table 1 show, the correlation between the final objective image quality evaluation prediction values obtained by this embodiment and the mean subjective score differences is very high, indicating that the objective evaluation results agree well with human subjective perception, which suffices to demonstrate the validity of the method of the present invention.
Table 2 gives the correlation between the image quality evaluation prediction values of the distorted stereo images obtained with different feature vectors and the subjective scores. As can be seen from Table 2, the evaluation prediction values obtained with only a single feature vector or with two feature vectors already correlate considerably with the subjective scores, showing that the feature extraction method of the present invention is effective; and when the feature vectors reflecting image quality and depth perception are combined, the correlation between the obtained evaluation prediction values and the subjective scores is stronger, which suffices to show that this method is effective.
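The evaluation protocol, a four-parameter logistic fit followed by CC, plus ROCC on the raw predictions, can be sketched with SciPy; the logistic parameterization below is a common choice and is assumed, since the text does not spell out the fitted form.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import pearsonr, spearmanr

def logistic4(x, a, b, c, d):
    """Assumed four-parameter logistic mapping objective scores to DMOS."""
    return (a - b) / (1.0 + np.exp(-(x - c) / d)) + b

def cc_rocc(q_pred, dmos):
    """CC after nonlinear fitting; ROCC on the raw predicted scores."""
    p0 = [dmos.max(), dmos.min(), q_pred.mean(), q_pred.std() + 1e-6]
    params, _ = curve_fit(logistic4, q_pred, dmos, p0=p0, maxfev=10000)
    cc = pearsonr(logistic4(q_pred, *params), dmos)[0]
    rocc = spearmanr(q_pred, dmos)[0]
    return cc, rocc
```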
Table 1. Correlation between the image quality evaluation prediction values of the distorted stereo images obtained by this embodiment and the subjective scores [table data rendered as an image in the original]
Table 2. Correlation between the image quality evaluation prediction values of the distorted stereo images obtained with different feature vectors and the subjective scores [table data rendered as an image in the original]

Claims (7)

1. A stereo image quality objective evaluation method based on visual perception, characterized by comprising the following steps:
1. Let S_org be an original undistorted stereo image and S_dis a distorted stereo image to be evaluated; denote the left viewpoint image of S_org as {L_org(x,y)}, the right viewpoint image of S_org as {R_org(x,y)}, the left viewpoint image of S_dis as {L_dis(x,y)} and the right viewpoint image of S_dis as {R_dis(x,y)}, where (x,y) denotes the coordinate position of a pixel in the left and right viewpoint images, 1 ≤ x ≤ W, 1 ≤ y ≤ H, W denotes the width of the left and right viewpoint images, H denotes the height of the left and right viewpoint images, and L_org(x,y), R_org(x,y), L_dis(x,y) and R_dis(x,y) denote the pixel values at coordinate position (x,y) in {L_org(x,y)}, {R_org(x,y)}, {L_dis(x,y)} and {R_dis(x,y)}, respectively;
2. Using the visual masking effects of human vision for background illumination and texture, extract the minimum discernable distorted image of the undistorted left viewpoint image {L_org(x,y)} and the minimum discernable distorted image of the undistorted right viewpoint image {R_org(x,y)}, denoted {J_L(x,y)} and {J_R(x,y)}, respectively, where J_L(x,y) and J_R(x,y) denote the pixel values at coordinate position (x,y) in {J_L(x,y)} and {J_R(x,y)};
3. Through a region detection algorithm, obtain the block type of each 8×8 sub-block in the undistorted left viewpoint image {L_org(x,y)} and the distorted left viewpoint image {L_dis(x,y)}, and in the undistorted right viewpoint image {R_org(x,y)} and the distorted right viewpoint image {R_dis(x,y)}, denoted p, where p ∈ {1, 2, 3, 4}, p = 1 denotes a strong edge block, p = 2 a weak edge block, p = 3 a smooth block, and p = 4 a texture block;
4. According to the minimum discernable distorted image {J_L(x,y)} of {L_org(x,y)} and the minimum discernable distorted image {J_R(x,y)} of {R_org(x,y)}, compute the spatial noise intensity used to reflect image quality and the space structure intensity used to reflect image quality of the 8×8 sub-blocks of the various block types in the distorted left viewpoint image {L_dis(x,y)} and in the distorted right viewpoint image {R_dis(x,y)}, obtaining the feature vector of {L_dis(x,y)} used to reflect image quality and the feature vector of {R_dis(x,y)} used to reflect image quality, and then linearly weight the feature vectors of {L_dis(x,y)} and {R_dis(x,y)} used to reflect image quality to obtain the feature vector of S_dis used to reflect image quality, denoted F_q;
5. According to the minimum discernable distorted image {J_L(x,y)} of {L_org(x,y)} and the minimum discernable distorted image {J_R(x,y)} of {R_org(x,y)}, compute the spatial noise intensity used to reflect depth perception and the space structure intensity used to reflect depth perception of the 8×8 sub-blocks of the various block types in the absolute difference image of the distorted left viewpoint image {L_dis(x,y)} and the distorted right viewpoint image {R_dis(x,y)}, obtaining the feature vector of S_dis used to reflect depth perception, denoted F_s;
6. Form the feature vector F_q of S_dis used to reflect image quality and the feature vector F_s used to reflect depth perception into a new feature vector, taken as the feature vector of S_dis and denoted X, X = [F_q, F_s], where "[ ]" is the vector representation symbol and [F_q, F_s] denotes connecting F_q and F_s to form one new feature vector;
7. Adopt n undistorted stereo images and establish their distorted stereo image set under different distortion levels of different distortion types, this set comprising several distorted stereo images; use a subjective quality evaluation method to obtain the mean subjective score difference of each distorted stereo image in the set, denoted DMOS, DMOS = 100 − MOS, where MOS denotes the mean subjective score, DMOS ∈ [0, 100], and n ≥ 1;
8. Using the same method as for computing the feature vector X of S_dis, compute the feature vector of each distorted stereo image in the distorted stereo image set; the feature vector of the i-th distorted stereo image in the set is denoted X_i, where 1 ≤ i ≤ n' and n' denotes the number of distorted stereo images contained in the set;
9. Adopt support vector regression to train the feature vectors of all distorted stereo images of the same distortion type in the distorted stereo image set, use the support vector regression training model obtained through training to test each distorted stereo image of the same distortion type, and compute the objective quality evaluation prediction value of each distorted stereo image of the same distortion type in the set; the objective quality evaluation prediction value of the i-th distorted stereo image in the set is denoted Q_i, Q_i = f(X_i), where f(·) is the function representation form, Q_i = f(X_i) means that Q_i is a function of X_i, 1 ≤ i ≤ n', and n' denotes the number of distorted stereo images contained in the set.
2. The stereo image quality objective evaluation method based on visual perception according to claim 1, characterized in that the detailed process of said step 2. is:
2.-1, Compute the visual threshold set of the visual masking effect of background illumination of the undistorted left viewpoint image {L_org(x,y)}, denoted {T_l(x,y)} [defining formula rendered as an image in the original], where T_l(x,y) denotes the visual threshold of the visual masking effect of background illumination at the pixel at coordinate position (x,y) in {L_org(x,y)}, computed from the average brightness of all pixels in a 5×5 window centered on the pixel at (x,y);
2.-2, Compute the visual threshold set of the visual masking effect of texture of {L_org(x,y)}, denoted {T_t(x,y)}, T_t(x,y) = η × G(x,y) × W_e(x,y), where T_t(x,y) denotes the visual threshold of the visual masking effect of texture at the pixel at coordinate position (x,y) in {L_org(x,y)}, η is a control factor greater than 0, G(x,y) denotes the maximum weighted mean value obtained by directional high-pass filtering of the pixel at (x,y) in {L_org(x,y)}, and W_e(x,y) denotes the edge weighting value obtained by Gaussian low-pass filtering of the pixel at (x,y) in the edge image of {L_org(x,y)};
2.-3, Merge the visual threshold set {T_l(x,y)} of the visual masking effect of background illumination and the visual threshold set {T_t(x,y)} of the visual masking effect of texture of {L_org(x,y)} to obtain the minimum discernable distorted image of {L_org(x,y)}, denoted {J_L(x,y)}, J_L(x,y) = T_l(x,y) + T_t(x,y) − C_{l,t} × min{T_l(x,y), T_t(x,y)}, where C_{l,t} denotes the parameter controlling the overlapping effect of the visual masking effects of background illumination and texture, 0 < C_{l,t} < 1, and min{} takes the minimum;
2.-4, Adopt the same operations as steps 2.-1 to 2.-3 to obtain the minimum discernable distorted image of the undistorted right viewpoint image {R_org(x,y)}, denoted {J_R(x,y)}.
3. The stereo image quality objective evaluation method based on visual perception according to claim 1 or 2, characterized in that the detailed process of the region detection algorithm in said step 3. is:
3.-1, Divide the undistorted left viewpoint image {L_org(x,y)} and the distorted left viewpoint image {L_dis(x,y)} each into $\frac{W \times H}{8 \times 8}$ non-overlapping 8×8 sub-blocks; define the l-th 8×8 sub-block in {L_org(x,y)} as the current first sub-block, denoted $B_{org}^{l}$, and the l-th 8×8 sub-block in {L_dis(x,y)} as the current second sub-block, denoted $B_{dis}^{l}$, where (x_2,y_2) denotes the coordinate position of a pixel in the current first sub-block and the current second sub-block, 1 ≤ x_2 ≤ 8, 1 ≤ y_2 ≤ 8, and $B_{org}^{l}(x_2,y_2)$ and $B_{dis}^{l}(x_2,y_2)$ denote the pixel values at coordinate position (x_2,y_2) in the current first sub-block and the current second sub-block, respectively;
3.-2, Compute the gradient magnitude of every pixel in the current first sub-block and in the current second sub-block; for the pixel at coordinate position (x_2',y_2') in the current first sub-block, its gradient magnitude is denoted P_o(x_2',y_2'), P_o(x_2',y_2') = |G_ox(x_2',y_2')| + |G_oy(x_2',y_2')|; for the pixel at (x_2',y_2') in the current second sub-block, its gradient magnitude is denoted P_d(x_2',y_2'), P_d(x_2',y_2') = |G_dx(x_2',y_2')| + |G_dy(x_2',y_2')|, where 1 ≤ x_2' ≤ 8, 1 ≤ y_2' ≤ 8, G_ox(x_2',y_2') and G_oy(x_2',y_2') denote the horizontal and vertical gradient values of the pixel at (x_2',y_2') in the current first sub-block, G_dx(x_2',y_2') and G_dy(x_2',y_2') denote the horizontal and vertical gradient values of the pixel at (x_2',y_2') in the current second sub-block, and "|·|" denotes the absolute value;
3.-3, Find the maximum of the gradient magnitudes of all pixels in the current first sub-block, denoted G_max, then compute the first gradient threshold T_1 and the second gradient threshold T_2 from G_max: T_1 = 0.12 × G_max, T_2 = 0.06 × G_max;
3.-4, For the pixel at coordinate position (x_2',y_2') in the current first sub-block and the pixel at (x_2',y_2') in the current second sub-block, judge whether P_o(x_2',y_2') > T_1 and P_d(x_2',y_2') > T_1 both hold; if so, judge these two pixels to belong to a strong edge region, set Num_1 = Num_1 + 1, and go to step 3.-8; otherwise, go to step 3.-5; the initial value of Num_1 is 0;
3.-5, Judge whether P_o(x_2',y_2') > T_1 and P_d(x_2',y_2') ≤ T_1, or P_d(x_2',y_2') > T_1 and P_o(x_2',y_2') ≤ T_1, holds; if so, judge the two pixels to belong to a weak edge region, set Num_2 = Num_2 + 1, and go to step 3.-8; otherwise, go to step 3.-6; the initial value of Num_2 is 0;
3.-6, Judge whether P_o(x_2',y_2') < T_2 and P_d(x_2',y_2') < T_1 both hold; if so, judge the two pixels to belong to a smooth region, set Num_3 = Num_3 + 1, and go to step 3.-8; otherwise, go to step 3.-7; the initial value of Num_3 is 0;
3.-7, Judge the two pixels to belong to a texture region and set Num_4 = Num_4 + 1; the initial value of Num_4 is 0;
3.-8, Return to step 3.-4 to process the remaining pixels in the current first sub-block and the current second sub-block until all 8×8 pixels in both sub-blocks have been processed;
3.-9, Take the region type corresponding to the maximum among Num_1, Num_2, Num_3 and Num_4 as the block type of the current first sub-block and the current second sub-block, denoted p, where p ∈ {1, 2, 3, 4}, p = 1 denotes a strong edge block, p = 2 a weak edge block, p = 3 a smooth block, and p = 4 a texture block;
3.-10, Let l'' = l + 1 and l = l'', take the next 8×8 sub-block in {L_org(x,y)} as the current first sub-block and the next 8×8 sub-block in {L_dis(x,y)} as the current second sub-block, and return to step 3.-2 until all $\frac{W \times H}{8 \times 8}$ non-overlapping 8×8 sub-blocks in {L_org(x,y)} and {L_dis(x,y)} have been processed, obtaining the block types of all 8×8 sub-blocks in {L_org(x,y)} and {L_dis(x,y)}; the initial value of l'' is 0;
3.-11, Adopt the same operations as steps 3.-1 to 3.-10 to obtain the block types of all 8×8 sub-blocks in the undistorted right viewpoint image {R_org(x,y)} and the distorted right viewpoint image {R_dis(x,y)}.
4. The stereo image quality objective evaluation method based on visual perception according to claim 3, characterized in that the detailed process of said step 4. is:
4.-1, Compute the spatial noise intensity, used to reflect image quality, of all 8×8 sub-blocks of block type k in the distorted left viewpoint image {L_dis(x,y)}, denoted {fq_k(x_2,y_2)}; for the pixel at coordinate position (x_2,y_2) within the 8×8 sub-blocks of block type k in {L_dis(x,y)}, its spatial noise intensity used to reflect image quality is denoted fq_k(x_2,y_2):

$$fq_k(x_2,y_2)=\frac{1}{N_k}\sum_{(x_3,y_3)}\min\Big(\max\big(\,|L_{org}(x_3,y_3)-L_{dis}(x_3,y_3)|-J_L(x_3,y_3),\;0\big),\;ST_k\Big)^2$$

where k ∈ {p | 1 ≤ p ≤ 4}; 1 ≤ x_2 ≤ 8, 1 ≤ y_2 ≤ 8; N_k denotes the number of 8×8 sub-blocks of block type k in {L_dis(x,y)}; ST_k is the saturation threshold describing error perception; max() takes the maximum and min() the minimum; (x_3,y_3) denotes the coordinate position, within {L_org(x,y)} and its minimum discernable distorted image {J_L(x,y)}, of the pixel at position (x_2,y_2) in an 8×8 sub-block of block type k, with 1 ≤ x_3 ≤ W and 1 ≤ y_3 ≤ H; L_org(x_3,y_3), L_dis(x_3,y_3) and J_L(x_3,y_3) denote the pixel values at (x_3,y_3) in {L_org(x,y)}, {L_dis(x,y)} and {J_L(x,y)}, respectively; and "|·|" denotes the absolute value;
4.-2, Represent the spatial noise intensities, used to reflect image quality, of the 8×8 sub-blocks of the various block types in {L_dis(x,y)} as the set {fq_k(x_2,y_2) | 1 ≤ k ≤ 4}, then arrange all elements of {fq_k(x_2,y_2) | 1 ≤ k ≤ 4} in order to obtain the first feature vector, denoted F_1, whose dimension is 256;
4.-3, Apply singular value decomposition to each 8×8 sub-block of {L_org(x,y)} and of {L_dis(x,y)}, obtaining the singular value vector corresponding to each 8×8 sub-block in each image; denote the singular value vector of the l-th 8×8 sub-block in {L_org(x,y)} as $S_{org}^{l}$ and that of the l-th 8×8 sub-block in {L_dis(x,y)} as $S_{dis}^{l}$, where the dimension of a singular value vector is 8 and $1 \le l \le \frac{W \times H}{8 \times 8}$;
4.-4, Compute the space structure intensity, used to reflect image quality, of all 8×8 sub-blocks of block type k in {L_dis(x,y)}, computed from the singular value vectors $S_{org}^{l'}$ and $S_{dis}^{l'}$ of corresponding sub-blocks [formula rendered as an image in the original], where l' denotes the sequence number, within {L_org(x,y)} or within its minimum discernable distorted image {J_L(x,y)}, of an 8×8 sub-block of block type k in {L_dis(x,y)};
4.-5, Represent the space structure intensities, used to reflect image quality, of the 8×8 sub-blocks of the various block types in {L_dis(x,y)} as a set, then arrange all elements of this set in order to obtain the second feature vector, denoted F_2, whose dimension is 32;
4.-6, Form the first feature vector F_1 and the second feature vector F_2 into a new feature vector, taken as the feature vector of {L_dis(x,y)} used to reflect image quality and denoted F_L, F_L = [F_1, F_2], where the dimension of F_L is 288, "[ ]" is the vector representation symbol, and [F_1, F_2] denotes connecting F_1 and F_2 to form one new feature vector;
4.-7, Apply the same operations as steps 4.-1 to 4.-6 to the distorted right viewpoint image {R_dis(x,y)} to obtain its feature vector used to reflect image quality, denoted F_R, whose dimension is 288;
4.-8, Linearly weight the feature vector F_L of {L_dis(x,y)} used to reflect image quality and the feature vector F_R of {R_dis(x,y)} used to reflect image quality to obtain the feature vector of S_dis used to reflect image quality, denoted F_q, F_q = w_L × F_L + w_R × F_R, where w_L denotes the weight proportion of {L_dis(x,y)}, w_R denotes the weight proportion of {R_dis(x,y)}, and w_L + w_R = 1.
5. The stereo image quality objective evaluation method based on visual perception according to claim 4, characterized in that the detailed process of said step 5. is:
5.-1, Compute the absolute difference image of the undistorted left viewpoint image {L_org(x,y)} and the undistorted right viewpoint image {R_org(x,y)}, the absolute difference image of the distorted left viewpoint image {L_dis(x,y)} and the distorted right viewpoint image {R_dis(x,y)}, and the absolute difference image of the minimum discernable distorted images {J_L(x,y)} and {J_R(x,y)}, denoted {D_org(x,y)}, {D_dis(x,y)} and {ΔJ(x,y)}, respectively: D_org(x,y) = |L_org(x,y) − R_org(x,y)|, D_dis(x,y) = |L_dis(x,y) − R_dis(x,y)|, ΔJ(x,y) = |J_L(x,y) − J_R(x,y)|, where D_org(x,y), D_dis(x,y) and ΔJ(x,y) denote the pixel values at coordinate position (x,y) in {D_org(x,y)}, {D_dis(x,y)} and {ΔJ(x,y)}, and "|·|" denotes the absolute value;
5.-2, Adopt the same operations as step 3. to obtain the block type of each 8×8 sub-block in {D_org(x,y)} and in {D_dis(x,y)};
5.-3, Compute the spatial noise intensity, used to reflect depth perception, of all 8×8 sub-blocks of block type k in {D_dis(x,y)}, denoted {fd_k(x_2,y_2)}; for the pixel at coordinate position (x_2,y_2) within the 8×8 sub-blocks of block type k in {D_dis(x,y)}, its spatial noise intensity used to reflect depth perception is denoted fd_k(x_2,y_2):

$$fd_k(x_2,y_2)=\frac{1}{M_k}\sum_{(x_4,y_4)}\min\Big(\max\big(\,|D_{org}(x_4,y_4)-D_{dis}(x_4,y_4)|-\Delta J(x_4,y_4),\;0\big),\;ST_k\Big)^2$$

where 1 ≤ x_2 ≤ 8, 1 ≤ y_2 ≤ 8; M_k denotes the number of 8×8 sub-blocks of block type k in {D_dis(x,y)}; ST_k is the saturation threshold describing error perception; (x_4,y_4) denotes the coordinate position, within {D_org(x,y)} and {ΔJ(x,y)}, of the pixel at position (x_2,y_2) in an 8×8 sub-block of block type k, with 1 ≤ x_4 ≤ W and 1 ≤ y_4 ≤ H; and D_org(x_4,y_4), D_dis(x_4,y_4) and ΔJ(x_4,y_4) denote the pixel values at (x_4,y_4) in {D_org(x,y)}, {D_dis(x,y)} and {ΔJ(x,y)}, respectively;
5.-4, Represent the spatial noise intensities, used to reflect depth perception, of the 8×8 sub-blocks of the various block types in {D_dis(x,y)} as the set {fd_k(x_2,y_2) | 1 ≤ k ≤ 4}, then arrange all elements of {fd_k(x_2,y_2) | 1 ≤ k ≤ 4} in order to obtain the third feature vector, denoted F_3, whose dimension is 256;
5.-5, Apply singular value decomposition to each 8×8 sub-block of {D_org(x,y)} and of {D_dis(x,y)}, obtaining the singular value vector corresponding to each 8×8 sub-block in each image; denote the singular value vector of the l-th 8×8 sub-block in {D_org(x,y)} as $S_{D,org}^{l}$ and that of the l-th 8×8 sub-block in {D_dis(x,y)} as $S_{D,dis}^{l}$, where the dimension of a singular value vector is 8 and $1 \le l \le \frac{W \times H}{8 \times 8}$;
5.-6, Compute the space structure intensity, used to reflect depth perception, of all 8×8 sub-blocks of block type k in {D_dis(x,y)}, computed from the singular value vectors $S_{D,org}^{l''}$ and $S_{D,dis}^{l''}$ of corresponding sub-blocks [formula rendered as an image in the original], where l'' denotes the sequence number, within {D_org(x,y)} or {ΔJ(x,y)}, of an 8×8 sub-block of block type k in {D_dis(x,y)};
5.-7, Represent the space structure intensities, used to reflect depth perception, of the 8×8 sub-blocks of the various block types in {D_dis(x,y)} as a set, then arrange all elements of this set in order to obtain the fourth feature vector, denoted F_4, whose dimension is 32;
5.-8, Form the third feature vector F_3 and the fourth feature vector F_4 into a new feature vector, taken as the feature vector of S_dis used to reflect depth perception and denoted F_s, F_s = [F_3, F_4], where the dimension of F_s is 288, "[ ]" is the vector representation symbol, and [F_3, F_4] denotes connecting F_3 and F_4 to form one new feature vector.
6. The stereo image quality objective evaluation method based on visual perception according to claim 5, characterized in that the detailed process of said step 9. is:
9.-1, Divide all distorted stereo images of the same distortion type in the distorted stereo image set into 5 mutually disjoint groups of subsets, and arbitrarily select 4 of these groups to form the training sample data set, denoted Ω_q, with {X_k, DMOS_k} ∈ Ω_q, where q denotes the number of distorted stereo images contained in Ω_q, X_k denotes the feature vector of the k-th distorted stereo image in Ω_q, DMOS_k denotes the mean subjective score difference of the k-th distorted stereo image in Ω_q, and 1 ≤ k ≤ q;
9.-2, Construct the regression function f(X_k) of X_k, $f(X_k) = w^{T}\,\phi(X_k) + b$, where f(·) is the function representation form, w is a weight vector, w^T is the transpose of w, b is a bias term, φ(X_k) denotes the linear (feature-space) mapping of the feature vector X_k of the k-th distorted stereo image in Ω_q, and D(X_k, X_l) is the kernel function in the support vector regression [kernel expression rendered as an image in the original], in which X_l is the feature vector of the l-th distorted stereo image in Ω_q and γ is the kernel parameter, reflecting the range of the input sample values: the larger the range of the sample values, the larger the value of γ; exp(·) denotes the exponential function with base e, e = 2.71828183, and "‖·‖" denotes the Euclidean distance;
9.-3, Adopt support vector regression to train the feature vectors of all distorted stereo images in Ω_q so that the error between the regression function values obtained through training and the mean subjective score differences is minimized; fitting yields the optimal weight vector w_opt and the optimal bias term b_opt, whose combination is denoted (w_opt, b_opt):

$$(w^{opt},\,b^{opt}) = \arg\min_{(w,b)\in\Psi}\;\sum_{k=1}^{q}\big(f(X_k)-DMOS_k\big)^2$$

the obtained optimal weight vector w_opt and optimal bias term b_opt are used to construct the support vector regression training model, denoted $f(X_{inp}) = (w^{opt})^{T}\,\phi(X_{inp}) + b^{opt}$, where Ψ denotes the set of all weight-vector and bias-term combinations over the training of the feature vectors of all distorted stereo images in Ω_q, arg min denotes taking the minimizing argument, X_inp denotes the input vector of the support vector regression training model, (w_opt)^T is the transpose of w_opt, and φ(X_inp) denotes the linear mapping of X_inp;
9.-4, According to the support vector regression training model, test each distorted stereo image in the remaining 1 group of subsets, predicting the objective quality evaluation prediction value of each distorted stereo image in this subset; for the j-th distorted stereo image in this subset, its objective quality evaluation prediction value is denoted Q_j, $Q_j = f(X_j) = (w^{opt})^{T}\,\phi(X_j) + b^{opt}$, where X_j denotes the feature vector of the j-th distorted stereo image in this subset and φ(X_j) denotes the linear mapping of X_j;
9.-5, Following the process of steps 9.-1 to 9.-4, train on the distorted stereo images of each distortion type in the distorted stereo image set in turn, obtaining the objective quality evaluation prediction value of each distorted stereo image in the set.
7. The stereo image quality objective evaluation method based on visual perception according to claim 6, characterized in that, in said step 4. and said step 8., when computing the feature vector used to reflect image quality of a stereo image with JPEG compression distortion, w_L = 0.50 and w_R = 0.50 are taken; with JPEG2000 compression distortion, w_L = 0.15 and w_R = 0.85; with Gaussian blur distortion, w_L = 0.10 and w_R = 0.90; with white noise distortion, w_L = 0.20 and w_R = 0.80; and with H.264 coding distortion, w_L = 0.10 and w_R = 0.90.
CN 201110284944 2011-09-23 2011-09-23 Stereo image quality objective evaluation method based on visual perception Active CN102333233B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201110284944 CN102333233B (en) 2011-09-23 2011-09-23 Stereo image quality objective evaluation method based on visual perception


Publications (2)

Publication Number Publication Date
CN102333233A true CN102333233A (en) 2012-01-25
CN102333233B CN102333233B (en) 2013-11-06

Family

ID=45484815

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201110284944 Active CN102333233B (en) 2011-09-23 2011-09-23 Stereo image quality objective evaluation method based on visual perception

Country Status (1)

Country Link
CN (1) CN102333233B (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009075245A1 (en) * 2007-12-12 2009-06-18 Nec Corporation Image quality evaluation system, and device, method and program used for the evaluation system
CN101562758A (en) * 2009-04-16 2009-10-21 Zhejiang University Method for objectively evaluating image quality based on region weight and visual characteristics of human eyes
CN102075786A (en) * 2011-01-19 2011-05-25 Ningbo University Method for objectively evaluating image quality
CN102142145A (en) * 2011-03-22 2011-08-03 Ningbo University Image quality objective evaluation method based on human eye visual characteristics

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
ZHOU Wujie et al.: "Research on Stereoscopic Image Quality Assessment Methods", International Conference of China Communication and Information Technology *
YANG Jiachen: "Objective Evaluation Method for Stereo Image Quality Based on Human Visual Characteristics", Journal of Tianjin University *
WANG Zhengyou et al.: "No-Reference Digital Image Quality Assessment Based on the Masking Effect", Journal of Computer Applications *
WANG Lei et al.: "Image Quality Assessment Method Based on SVM and GA", Computer Engineering *
WANG Ahong: "An Objective Evaluation Method for Stereo Image Quality Based on Human Visual Characteristics", Opto-Electronic Engineering *

Cited By (42)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102595185B (en) * 2012-02-27 2014-06-25 Ningbo University Stereo image quality objective evaluation method
CN102595185A (en) * 2012-02-27 2012-07-18 Ningbo University Stereo image quality objective evaluation method
CN102663747A (en) * 2012-03-23 2012-09-12 Ningbo University Stereo image objectivity quality evaluation method based on visual perception
CN102663747B (en) * 2012-03-23 2014-08-27 Ningbo University Stereo image objectivity quality evaluation method based on visual perception
CN102737380A (en) * 2012-06-05 2012-10-17 Ningbo University Stereo image quality objective evaluation method based on gradient structure tensor
CN102737380B (en) * 2012-06-05 2014-12-10 Ningbo University Stereo image quality objective evaluation method based on gradient structure tensor
CN102843572A (en) * 2012-06-29 2012-12-26 Ningbo University Phase-based stereo image quality objective evaluation method
CN102843572B (en) * 2012-06-29 2014-11-05 Ningbo University Phase-based stereo image quality objective evaluation method
CN102769749B (en) * 2012-06-29 2015-03-18 Ningbo University Post-processing method for depth image
CN102769749A (en) * 2012-06-29 2012-11-07 Ningbo University Post-processing method for depth image
CN103475896A (en) * 2013-07-24 2013-12-25 Tongji University Interactive video and audio experience-quality assessment platform and method based on Qos
CN103442248A (en) * 2013-08-22 2013-12-11 Peking University System for evaluating compression quality of image based on binocular stereoscopic vision
CN103442248B (en) * 2013-08-22 2015-08-12 Peking University A kind of image compression quality appraisal procedure based on binocular stereo vision
CN103517065A (en) * 2013-09-09 2014-01-15 Ningbo University Method for objectively evaluating quality of degraded reference three-dimensional picture
CN103475897A (en) * 2013-09-09 2013-12-25 Ningbo University Adaptive image quality evaluation method based on distortion type judgment
CN103841411A (en) * 2014-02-26 2014-06-04 Ningbo University Method for evaluating quality of stereo image based on binocular information processing
CN103841411B (en) * 2014-02-26 2015-10-28 Ningbo University A kind of stereo image quality evaluation method based on binocular information processing
CN103903259A (en) * 2014-03-20 2014-07-02 Ningbo University Objective three-dimensional image quality evaluation method based on structure and texture separation
CN104933696B (en) * 2014-03-21 2017-12-29 Lenovo (Beijing) Co., Ltd. Determine the method and electronic equipment of light conditions
CN104933696A (en) * 2014-03-21 2015-09-23 Lenovo (Beijing) Co., Ltd. Method of determining illumination condition and electronic equipment
CN105791849A (en) * 2014-12-25 2016-07-20 ZTE Corporation Image compression method and device
CN105791849B (en) * 2014-12-25 2019-08-06 ZTE Corporation Picture compression method and device
CN105282543A (en) * 2015-10-26 2016-01-27 Zhejiang University of Science and Technology Total blindness three-dimensional image quality objective evaluation method based on three-dimensional visual perception
CN105430397B (en) * 2015-11-20 2018-04-17 Graduate School at Shenzhen, Tsinghua University A kind of 3D rendering Quality of experience Forecasting Methodology and device
CN105430397A (en) * 2015-11-20 2016-03-23 Graduate School at Shenzhen, Tsinghua University 3D (three-dimensional) image experience quality prediction method and apparatus
CN105635727A (en) * 2015-12-29 2016-06-01 Peking University Subjective image quality evaluation method based on paired comparison and device thereof
CN105635727B (en) * 2015-12-29 2017-06-16 Peking University Evaluation method and device based on the image subjective quality for comparing in pairs
CN105894522B (en) * 2016-04-28 2018-05-25 Ningbo University A kind of more distortion objective evaluation method for quality of stereo images
CN105828061A (en) * 2016-05-11 2016-08-03 Ningbo University Virtual viewpoint quality evaluation method based on visual masking effect
CN106097327A (en) * 2016-06-06 2016-11-09 Ningbo University In conjunction with manifold feature and the objective evaluation method for quality of stereo images of binocular characteristic
CN106097327B (en) * 2016-06-06 2018-11-02 Ningbo University In conjunction with the objective evaluation method for quality of stereo images of manifold feature and binocular characteristic
CN106412569B (en) * 2016-09-28 2017-12-15 Ningbo University A kind of selection of feature based without referring to more distortion stereo image quality evaluation methods
CN106412569A (en) * 2016-09-28 2017-02-15 Ningbo University No-reference multi-distortion three-dimensional image quality evaluation method based on feature selection
CN107438180A (en) * 2017-08-28 2017-12-05 Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences The depth perception quality evaluating method of 3 D video
CN107396095A (en) * 2017-08-28 2017-11-24 Fang Yuming One kind is without with reference to three-dimensional image quality evaluation method
CN107396095B (en) * 2017-08-28 2019-01-15 Fang Yuming A kind of no reference three-dimensional image quality evaluation method
CN107438180B (en) * 2017-08-28 2019-02-22 Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences The depth perception quality evaluating method of 3 D video
CN112508856A (en) * 2020-11-16 2021-03-16 Beijing Institute of Technology Distortion type detection method for mixed distortion image
CN112508856B (en) * 2020-11-16 2022-09-09 Beijing Institute of Technology Distortion type detection method for mixed distortion image
CN112770105A (en) * 2020-12-07 2021-05-07 Ningbo University Repositioning stereo image quality evaluation method based on structural features
CN112770105B (en) * 2020-12-07 2022-06-03 Ningbo University Repositioning stereo image quality evaluation method based on structural features
CN115187519A (en) * 2022-06-21 2022-10-14 Shanghai Institute of Measurement and Testing Technology Image quality evaluation method, system and computer readable medium

Also Published As

Publication number Publication date
CN102333233B (en) 2013-11-06

Similar Documents

Publication Publication Date Title
CN102333233B (en) Stereo image quality objective evaluation method based on visual perception
CN102209257B (en) Stereo image quality objective evaluation method
CN102547368B (en) Objective evaluation method for quality of stereo images
CN102708567B (en) Visual perception-based three-dimensional image quality objective evaluation method
CN104036501A (en) Three-dimensional image quality objective evaluation method based on sparse representation
CN104394403B (en) A kind of stereoscopic video quality method for objectively evaluating towards compression artefacts
CN103338379B (en) Stereoscopic video objective quality evaluation method based on machine learning
CN103136748B (en) The objective evaluation method for quality of stereo images of a kind of feature based figure
CN104811691B (en) A kind of stereoscopic video quality method for objectively evaluating based on wavelet transformation
CN105654465B (en) A kind of stereo image quality evaluation method filtered between the viewpoint using parallax compensation
CN104954778A (en) Objective stereo image quality assessment method based on perception feature set
CN104240248B (en) Method for objectively evaluating quality of three-dimensional image without reference
CN104023227B (en) A kind of objective evaluation method of video quality based on spatial domain and spatial structure similitude
CN104902268B (en) Based on local tertiary mode without with reference to three-dimensional image objective quality evaluation method
CN104036502A (en) No-reference fuzzy distorted stereo image quality evaluation method
CN102722888A (en) Stereoscopic image objective quality evaluation method based on physiological and psychological stereoscopic vision
CN104361583A (en) Objective quality evaluation method of asymmetrically distorted stereo images
CN104346809A (en) Image quality evaluation method for image quality dataset adopting high dynamic range
CN105898279B (en) A kind of objective evaluation method for quality of stereo images
CN102999911B (en) Three-dimensional image quality objective evaluation method based on energy diagrams
CN103108209B (en) Stereo image objective quality evaluation method based on integration of visual threshold value and passage
CN102737380B (en) Stereo image quality objective evaluation method based on gradient structure tensor
CN103914835A (en) Non-reference quality evaluation method for fuzzy distortion three-dimensional images
CN102271279B (en) Objective analysis method for just noticeable change step length of stereo images
CN108848365A (en) A kind of reorientation stereo image quality evaluation method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20191219

Address after: Room 1,020, Nanxun Science and Technology Pioneering Park, No. 666 Chaoyang Road, Nanxun District, Huzhou City, Zhejiang Province, 313000

Patentee after: Huzhou You Yan Intellectual Property Service Co., Ltd.

Address before: 315211 Zhejiang Province, Ningbo Jiangbei District Fenghua Road No. 818

Patentee before: Ningbo University

TR01 Transfer of patent right

Effective date of registration: 20201123

Address after: 226500 Jiangsu city of Nantong province Rugao City Lin Zi Zhen Hong Wei River Road No. 8

Patentee after: NANTONG OUKE NC EQUIPMENT Co.,Ltd.

Address before: Room 1,020, Nanxun Science and Technology Pioneering Park, No. 666 Chaoyang Road, Nanxun District, Huzhou City, Zhejiang Province, 313000

Patentee before: Huzhou You Yan Intellectual Property Service Co., Ltd.