CN102333233B - Stereo image quality objective evaluation method based on visual perception - Google Patents


Info

Publication number
CN102333233B
CN102333233B (application CN201110284944A)
Authority
CN
China
Prior art keywords
point image
distortion
dis
visual point
sub
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN 201110284944
Other languages
Chinese (zh)
Other versions
CN102333233A (en)
Inventor
邵枫
蒋刚毅
郁梅
李福翠
彭宗举
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
NANTONG OUKE NC EQUIPMENT Co.,Ltd.
Original Assignee
Ningbo University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ningbo University filed Critical Ningbo University
Priority to CN 201110284944 priority Critical patent/CN102333233B/en
Publication of CN102333233A publication Critical patent/CN102333233A/en
Application granted granted Critical
Publication of CN102333233B publication Critical patent/CN102333233B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a stereo image quality objective evaluation method based on visual perception. First, the stereo image is partitioned into strong-edge blocks, weak-edge blocks, flat blocks and texture blocks, and characteristic information reflecting image quality and depth perception is extracted from the different block types to form the feature vector of the stereo image. The feature vectors of the distorted stereo images of the same distortion type in a distorted stereo image set are then trained by support vector regression, and each distorted stereo image of that distortion type is tested with the trained support vector regression model to obtain its objective image quality evaluation predicted value. The method has the advantages that the extracted feature information reflecting image quality and depth perception is stable and tracks the quality variation of stereo images well, improving the correlation between objective evaluation results and subjective perception.

Description

An objective quality evaluation method for stereo images based on visual perception
Technical field
The present invention relates to an image quality evaluation method, and in particular to an objective quality evaluation method for stereo images based on visual perception.
Background art
With the rapid development of image coding and stereoscopic display technology, stereo imaging has received increasingly wide attention and application and has become a current research hotspot. Stereo imaging exploits the binocular parallax principle of the human eye: the two eyes independently receive the left and right viewpoint images of the same scene, and the brain fuses them to form binocular parallax, producing a stereo image with depth and realism. Owing to the influence of the acquisition system, storage compression and transmission equipment, a stereo image inevitably suffers a series of distortions; and unlike a single-channel image, a stereo image must guarantee the picture quality of both channels simultaneously, so quality evaluation of stereo images is of great significance. At present, however, effective objective evaluation methods for stereo image quality are lacking. Establishing an effective objective evaluation model of stereo image quality is therefore of great importance.
Objective quality evaluation methods for stereo images fall mainly into two classes. 1) Left/right-channel evaluation based on picture quality: plane (2D) image quality evaluation methods are applied directly to evaluate stereo image quality. However, the process by which the left and right viewpoint images fuse to produce stereoscopy is difficult to express with a simple mathematical model, and the two viewpoint images also influence each other, so a simple linear weighting of the left and right viewpoint images cannot effectively evaluate stereo image quality. 2) Left/right-channel evaluation based on stereo perception: depth perception is reflected through parallax (disparity) information or depth information. However, owing to the limitations of current disparity estimation and depth estimation techniques, how to evaluate depth images or disparity images so as to truly characterize depth perception remains one of the difficulties of objective stereo image quality evaluation. How to incorporate picture quality and depth perception information into the evaluation simultaneously, so that the evaluation results better conform to the human visual system, is therefore a problem that needs to be studied and solved in objective quality evaluation of stereo images.
Summary of the invention
The technical problem to be solved by this invention is to provide an objective quality evaluation method for stereo images that can effectively improve the correlation between objective evaluation results and subjective perception.
The technical scheme adopted by the present invention to solve the above technical problem is an objective quality evaluation method for stereo images based on visual perception, characterized by comprising the following steps:
1. Let S_org be the original undistorted stereo image and S_dis the distorted stereo image to be evaluated. Denote the left viewpoint image of S_org by {L_org(x, y)} and its right viewpoint image by {R_org(x, y)}; denote the left viewpoint image of S_dis by {L_dis(x, y)} and its right viewpoint image by {R_dis(x, y)}. Here (x, y) denotes the coordinate position of a pixel in the left and right viewpoint images, 1 ≤ x ≤ W, 1 ≤ y ≤ H, where W is the width and H the height of the viewpoint images; L_org(x, y) denotes the pixel value at coordinate (x, y) in {L_org(x, y)}, R_org(x, y) the pixel value at (x, y) in {R_org(x, y)}, L_dis(x, y) the pixel value at (x, y) in {L_dis(x, y)}, and R_dis(x, y) the pixel value at (x, y) in {R_dis(x, y)};
2. Using the visual masking effects of human vision with respect to background illumination and texture, extract the minimum discernible distortion (just-noticeable-distortion) images of the undistorted left viewpoint image {L_org(x, y)} and the undistorted right viewpoint image {R_org(x, y)}, denoted {J_L(x, y)} and {J_R(x, y)} respectively, where J_L(x, y) and J_R(x, y) denote the pixel values at coordinate (x, y) in {J_L(x, y)} and {J_R(x, y)};
3. Using a region detection algorithm, obtain the block type of each 8 × 8 sub-block of the undistorted left viewpoint image {L_org(x, y)} and the distorted left viewpoint image {L_dis(x, y)}, and of the undistorted right viewpoint image {R_org(x, y)} and the distorted right viewpoint image {R_dis(x, y)}, denoted p, where p ∈ {1, 2, 3, 4}: p = 1 denotes a strong-edge block, p = 2 a weak-edge block, p = 3 a flat block, and p = 4 a texture block;
4. According to the minimum discernible distortion images {J_L(x, y)} and {J_R(x, y)}, compute for the 8 × 8 sub-blocks of each block type of {L_dis(x, y)}, and likewise of {R_dis(x, y)}, the spatial noise strength reflecting picture quality and the spatial structure strength reflecting picture quality, obtaining the feature vectors reflecting picture quality of {L_dis(x, y)} and of {R_dis(x, y)}; then linearly weight these two feature vectors to obtain the feature vector of S_dis reflecting picture quality, denoted F_q;
5. According to the minimum discernible distortion images {J_L(x, y)} and {J_R(x, y)}, compute for the 8 × 8 sub-blocks of each block type of the absolute difference image of {L_dis(x, y)} and {R_dis(x, y)} the spatial noise strength reflecting depth perception and the spatial structure strength reflecting depth perception, obtaining the feature vector of S_dis reflecting depth perception, denoted F_s;
6. Concatenate the feature vector F_q reflecting picture quality and the feature vector F_s reflecting depth perception into a new feature vector, taken as the feature vector of S_dis and denoted X, X = [F_q, F_s], where "[]" is the vector representation symbol and [F_q, F_s] denotes connecting F_q and F_s into one new feature vector;
7. Using n undistorted stereo images, establish a distorted stereo image set under different distortion levels of different distortion types; this set comprises several distorted stereo images. Using a subjective quality assessment method, obtain the mean subjective score difference of each distorted stereo image in the set, denoted DMOS, DMOS = 100 − MOS, where MOS denotes the mean opinion score, DMOS ∈ [0, 100], n ≥ 1;
8. Using the same method as for computing the feature vector X of S_dis, compute the feature vector of each distorted stereo image in the distorted stereo image set; the feature vector of the i-th distorted stereo image in the set is denoted X_i, where 1 ≤ i ≤ n′ and n′ denotes the number of distorted stereo images in the set;
9. Using support vector regression, train on the feature vectors of all distorted stereo images of the same distortion type in the distorted stereo image set, then test each distorted stereo image of that distortion type with the trained support vector regression model, computing the objective quality evaluation predicted value of each distorted stereo image of that distortion type in the set; the predicted value for the i-th distorted stereo image is denoted Q_i, Q_i = f(X_i), where f() is the function representation form and Q_i = f(X_i) indicates that Q_i is a function of X_i, 1 ≤ i ≤ n′, n′ being the number of distorted stereo images in the set.
The detailed process of step 2. is:
2.-1. Compute the visual threshold set of the background-illumination masking effect of the undistorted left viewpoint image {L_org(x, y)}, denoted {T_l(x, y)}, where T_l(x, y) denotes the visual threshold of the background-illumination masking effect of the pixel at coordinate (x, y) in {L_org(x, y)} and is computed as a function of the average brightness of all pixels in the 5 × 5 window centered on the pixel at coordinate (x, y) in {L_org(x, y)};
2.-2. Compute the visual threshold set of the texture masking effect of {L_org(x, y)}, denoted {T_t(x, y)}, T_t(x, y) = η × G(x, y) × W_e(x, y), where T_t(x, y) denotes the visual threshold of the texture masking effect of the pixel at coordinate (x, y) in {L_org(x, y)}, η is a control factor greater than 0, G(x, y) denotes the maximum weighted mean obtained by directional high-pass filtering of the pixel at coordinate (x, y) in {L_org(x, y)}, and W_e(x, y) denotes the edge weighting value obtained by Gaussian low-pass filtering of the pixel at coordinate (x, y) in the edge image of {L_org(x, y)};
2.-3. Merge the visual threshold sets {T_l(x, y)} and {T_t(x, y)} to obtain the minimum discernible distortion image of {L_org(x, y)}, denoted {J_L(x, y)}, J_L(x, y) = T_l(x, y) + T_t(x, y) − C_{l,t} × min{T_l(x, y), T_t(x, y)}, where C_{l,t} is a parameter controlling the overlap of the background-illumination and texture masking effects, 0 < C_{l,t} < 1, and min{} is the minimum-value function;
2.-4. Using the same operations as steps 2.-1 to 2.-3, obtain the minimum discernible distortion image of the undistorted right viewpoint image {R_org(x, y)}, denoted {J_R(x, y)}.
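The merging rule of step 2.-3 can be sketched as follows. The luminance-masking and texture-masking threshold maps T_l and T_t are taken as given (their construction follows steps 2.-1 and 2.-2), and the default C_lt = 0.3 is an illustrative value, not one specified by the patent.

```python
import numpy as np

def jnd_image(T_l, T_t, C_lt=0.3):
    """Merge luminance- and texture-masking thresholds into a
    just-noticeable-distortion map, as in step 2.-3:
    J = T_l + T_t - C_lt * min(T_l, T_t), with 0 < C_lt < 1.
    The subtraction discounts the overlap of the two masking effects."""
    return T_l + T_t - C_lt * np.minimum(T_l, T_t)
```

Since min(T_l, T_t) is never larger than either threshold and C_lt < 1, the merged threshold J always stays between max(T_l, T_t) and T_l + T_t.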
The detailed process of the region detection algorithm of step 3. is:
3.-1. Divide the undistorted left viewpoint image {L_org(x, y)} and the distorted left viewpoint image {L_dis(x, y)} each into (W × H)/(8 × 8) non-overlapping 8 × 8 sub-blocks. Define the l-th 8 × 8 sub-block of {L_org(x, y)} as the current first sub-block, denoted {f_org^l(x_2, y_2)}, and the l-th 8 × 8 sub-block of {L_dis(x, y)} as the current second sub-block, denoted {f_dis^l(x_2, y_2)}, where 1 ≤ l ≤ (W × H)/(8 × 8), (x_2, y_2) denotes the coordinate position of a pixel within the current first sub-block and the current second sub-block, 1 ≤ x_2 ≤ 8, 1 ≤ y_2 ≤ 8, f_org^l(x_2, y_2) denotes the pixel value at coordinate (x_2, y_2) in the current first sub-block, and f_dis^l(x_2, y_2) denotes the pixel value at coordinate (x_2, y_2) in the current second sub-block;
3.-2. Compute the gradient magnitudes of all pixels in the current first sub-block and the current second sub-block. For the pixel at coordinate (x_2′, y_2′) in the current first sub-block, its gradient magnitude is denoted P_o(x_2′, y_2′), P_o(x_2′, y_2′) = |G_ox(x_2′, y_2′)| + |G_oy(x_2′, y_2′)|; for the pixel at coordinate (x_2′, y_2′) in the current second sub-block, its gradient magnitude is denoted P_d(x_2′, y_2′), P_d(x_2′, y_2′) = |G_dx(x_2′, y_2′)| + |G_dy(x_2′, y_2′)|, where 1 ≤ x_2′ ≤ 8, 1 ≤ y_2′ ≤ 8, G_ox(x_2′, y_2′) and G_oy(x_2′, y_2′) denote the horizontal and vertical gradient values of the pixel at (x_2′, y_2′) in the current first sub-block, G_dx(x_2′, y_2′) and G_dy(x_2′, y_2′) denote the horizontal and vertical gradient values of the pixel at (x_2′, y_2′) in the current second sub-block, and "| |" is the absolute-value sign;
3.-3. Find the maximum gradient magnitude over all pixels of the current first sub-block, denoted G_max, then compute from G_max the first gradient threshold T_1 and the second gradient threshold T_2: T_1 = 0.12 × G_max, T_2 = 0.06 × G_max;
3.-4. For the pixel at coordinate (x_2′, y_2′) in the current first sub-block and the pixel at coordinate (x_2′, y_2′) in the current second sub-block, judge whether P_o(x_2′, y_2′) > T_1 and P_d(x_2′, y_2′) > T_1 both hold. If so, judge these two pixels to belong to a strong-edge region, set Num_1 = Num_1 + 1, and go to step 3.-8; otherwise go to step 3.-5. The initial value of Num_1 is 0;
3.-5. Judge whether P_o(x_2′, y_2′) > T_1 and P_d(x_2′, y_2′) ≤ T_1, or P_d(x_2′, y_2′) > T_1 and P_o(x_2′, y_2′) ≤ T_1, holds. If so, judge the pixel at (x_2′, y_2′) in the current first sub-block and the pixel at (x_2′, y_2′) in the current second sub-block to belong to a weak-edge region, set Num_2 = Num_2 + 1, and go to step 3.-8; otherwise go to step 3.-6. The initial value of Num_2 is 0;
3.-6. Judge whether P_o(x_2′, y_2′) < T_2 and P_d(x_2′, y_2′) < T_1 both hold. If so, judge the pixel at (x_2′, y_2′) in the current first sub-block and the pixel at (x_2′, y_2′) in the current second sub-block to belong to a flat region, set Num_3 = Num_3 + 1, and go to step 3.-8; otherwise go to step 3.-7. The initial value of Num_3 is 0;
3.-7. Judge the pixel at (x_2′, y_2′) in the current first sub-block and the pixel at (x_2′, y_2′) in the current second sub-block to belong to a texture region, and set Num_4 = Num_4 + 1. The initial value of Num_4 is 0;
3.-8. Return to step 3.-4 and continue processing the remaining pixels of the current first sub-block and the current second sub-block until all 8 × 8 pixels of both sub-blocks have been processed;
3.-9. Take the region type corresponding to the maximum of Num_1, Num_2, Num_3 and Num_4 as the block type of the current first sub-block and the current second sub-block, denoted p, where p ∈ {1, 2, 3, 4}: p = 1 denotes a strong-edge block, p = 2 a weak-edge block, p = 3 a flat block, and p = 4 a texture block;
3.-10. Let l = l + 1, take the next 8 × 8 sub-block of {L_org(x, y)} as the current first sub-block and the next 8 × 8 sub-block of {L_dis(x, y)} as the current second sub-block, and return to step 3.-2 until all (W × H)/(8 × 8) non-overlapping 8 × 8 sub-blocks of {L_org(x, y)} and {L_dis(x, y)} have been processed, obtaining the block types of all 8 × 8 sub-blocks of {L_org(x, y)} and {L_dis(x, y)};
3.-11. Using the same operations as steps 3.-1 to 3.-10, obtain the block types of all 8 × 8 sub-blocks of the undistorted right viewpoint image {R_org(x, y)} and the distorted right viewpoint image {R_dis(x, y)}.
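The per-block classification of steps 3.-1 to 3.-9 can be sketched as follows. The patent does not fix a particular gradient operator, so simple horizontal/vertical finite differences are assumed here for the gradient values.

```python
import numpy as np

def grad_mag(block):
    """|horizontal gradient| + |vertical gradient| per pixel,
    using backward differences (zero at the first row/column)."""
    g = np.zeros_like(block, dtype=float)
    g[:, 1:] += np.abs(np.diff(block.astype(float), axis=1))
    g[1:, :] += np.abs(np.diff(block.astype(float), axis=0))
    return g

def block_type(org_block, dis_block):
    """Classify a pair of co-located 8x8 sub-blocks by majority vote:
    1 = strong edge, 2 = weak edge, 3 = flat, 4 = texture."""
    P_o, P_d = grad_mag(org_block), grad_mag(dis_block)
    G_max = P_o.max()
    T1, T2 = 0.12 * G_max, 0.06 * G_max     # thresholds of step 3.-3
    num = np.zeros(4, dtype=int)            # Num_1 .. Num_4
    for po, pd in zip(P_o.ravel(), P_d.ravel()):
        if po > T1 and pd > T1:
            num[0] += 1                     # strong-edge pixel (step 3.-4)
        elif (po > T1) != (pd > T1):
            num[1] += 1                     # weak-edge pixel (step 3.-5)
        elif po < T2 and pd < T1:
            num[2] += 1                     # flat pixel (step 3.-6)
        else:
            num[3] += 1                     # texture pixel (step 3.-7)
    return int(np.argmax(num)) + 1          # majority vote (step 3.-9)
```

Note that because the vote is a per-pixel majority, a block containing a single sharp edge can still be voted flat when most of its pixels lie below the thresholds.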
The detailed process of step 4. is:
4.-1. Compute the spatial noise strength reflecting picture quality of all 8 × 8 sub-blocks of block type k in the distorted left viewpoint image {L_dis(x, y)}, denoted {fq_k(x_2, y_2)}. For the pixel at coordinate (x_2, y_2) in the 8 × 8 sub-blocks of block type k of {L_dis(x, y)}, its spatial noise strength reflecting picture quality is denoted fq_k(x_2, y_2):
fq_k(x_2, y_2) = (1/N_k) Σ_{(x_3, y_3)} min( max( |L_org(x_3, y_3) − L_dis(x_3, y_3)| − J_L(x_3, y_3), 0 ), ST_k )²,
where k ∈ {p | 1 ≤ p ≤ 4}, 1 ≤ x_2 ≤ 8, 1 ≤ y_2 ≤ 8, N_k denotes the number of 8 × 8 sub-blocks of block type k in {L_dis(x, y)}, ST_k is the saturation threshold describing error perception, max() is the maximum-value function, min() is the minimum-value function, (x_3, y_3) denotes the coordinate position, in {L_org(x, y)} or in its minimum discernible distortion image {J_L(x, y)}, of the pixel at coordinate (x_2, y_2) in an 8 × 8 sub-block of block type k of {L_dis(x, y)}, 1 ≤ x_3 ≤ W, 1 ≤ y_3 ≤ H, L_org(x_3, y_3), L_dis(x_3, y_3) and J_L(x_3, y_3) denote the pixel values at coordinate (x_3, y_3) in {L_org(x, y)}, {L_dis(x, y)} and {J_L(x, y)} respectively, and "| |" is the absolute-value sign;
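The JND-gated, saturated squared error of step 4.-1 can be sketched as follows; the sub-blocks of one block type k are assumed to have been gathered into stacked arrays, and the ST_k value used in the test is purely illustrative.

```python
import numpy as np

def spatial_noise_strength(org_blocks, dis_blocks, jnd_blocks, ST_k):
    """Step 4.-1 for one block type k: at each of the 8x8 pixel positions,
    average over the N_k sub-blocks of type k the squared error that is
    first gated by the JND threshold and then saturated at ST_k.
    Inputs are arrays of shape (N_k, 8, 8); returns the 8x8 map fq_k."""
    err = np.abs(org_blocks.astype(float) - dis_blocks.astype(float))
    gated = np.maximum(err - jnd_blocks, 0.0)   # errors below JND are invisible
    sat = np.minimum(gated, ST_k)               # perceived error saturates at ST_k
    return np.mean(sat ** 2, axis=0)            # average over the N_k sub-blocks
```

The gating step implements the visual-masking idea of step 2.: only the part of the error exceeding the just-noticeable threshold contributes to the feature.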
4.-2. Express the spatial noise strengths reflecting picture quality of the 8 × 8 sub-blocks of the various block types of {L_dis(x, y)} as the set {fq_k(x_2, y_2) | 1 ≤ k ≤ 4}, then arrange all elements of {fq_k(x_2, y_2) | 1 ≤ k ≤ 4} in order to obtain the first feature vector, denoted F_1, whose dimension is 256;
4.-3. Apply singular value decomposition to each 8 × 8 sub-block of {L_org(x, y)} and of {L_dis(x, y)}, obtaining the singular value vector corresponding to each 8 × 8 sub-block of each image; denote the singular value vector of the l-th 8 × 8 sub-block of {L_org(x, y)} by S_org^l and that of the l-th 8 × 8 sub-block of {L_dis(x, y)} by S_dis^l, where the dimension of a singular value vector is 8 and 1 ≤ l ≤ (W × H)/(8 × 8);
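The per-block decomposition of step 4.-3 can be sketched with a standard SVD routine; the 8 singular values returned in descending order serve as the block's structural signature.

```python
import numpy as np

def singular_value_vector(block):
    """Step 4.-3: the 8 singular values of an 8x8 sub-block,
    in descending order, used as its structural signature."""
    return np.linalg.svd(block.astype(float), compute_uv=False)
```

Singular values are invariant to row/column permutations of sign-preserving orthogonal transforms, which is why comparing S_org^l with S_dis^l isolates structural degradation from simple intensity shifts.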
4.-4. Compute the spatial structure strength reflecting picture quality of all 8 × 8 sub-blocks of block type k in {L_dis(x, y)}, denoted {fs_k}, from the singular value vectors of corresponding sub-blocks, where l′ denotes the sequence number, in {L_org(x, y)} or in its minimum discernible distortion image {J_L(x, y)}, of an 8 × 8 sub-block of block type k of {L_dis(x, y)};
4.-5. Express the spatial structure strengths reflecting picture quality of the 8 × 8 sub-blocks of the various block types of {L_dis(x, y)} as the set {fs_k | 1 ≤ k ≤ 4}, then arrange all its elements in order to obtain the second feature vector, denoted F_2, whose dimension is 32;
4.-6. Concatenate the first feature vector F_1 and the second feature vector F_2 into a new feature vector, taken as the feature vector reflecting picture quality of {L_dis(x, y)} and denoted F_L, F_L = [F_1, F_2], where the dimension of F_L is 288, "[]" is the vector representation symbol, and [F_1, F_2] denotes connecting F_1 and F_2 into one new feature vector;
4.-7. Apply the same operations as steps 4.-1 to 4.-6 to the distorted right viewpoint image {R_dis(x, y)} to obtain its feature vector reflecting picture quality, denoted F_R, whose dimension is 288;
4.-8. Linearly weight the feature vector F_L reflecting picture quality of {L_dis(x, y)} and the feature vector F_R reflecting picture quality of {R_dis(x, y)} to obtain the feature vector of S_dis reflecting picture quality, denoted F_q, F_q = w_L × F_L + w_R × F_R, where w_L and w_R denote the weight proportions of the distorted left and right viewpoint images respectively, and w_L + w_R = 1.
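The view fusion of step 4.-8 is a one-line convex combination; w_L = 0.5 below is an illustrative default, since the patent leaves the weight proportions as tunable parameters subject only to w_L + w_R = 1.

```python
import numpy as np

def fuse_view_features(F_L, F_R, w_L=0.5):
    """Step 4.-8: linear weighting of the left- and right-view
    feature vectors, with w_R = 1 - w_L so the weights sum to 1."""
    w_R = 1.0 - w_L
    return w_L * np.asarray(F_L, dtype=float) + w_R * np.asarray(F_R, dtype=float)
```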
The detailed process of step 5. is:
5.-1. Compute the absolute difference image of the undistorted left and right viewpoint images {L_org(x, y)} and {R_org(x, y)}, the absolute difference image of the distorted left and right viewpoint images {L_dis(x, y)} and {R_dis(x, y)}, and the absolute difference image of the minimum discernible distortion images {J_L(x, y)} and {J_R(x, y)}, denoted {D_org(x, y)}, {D_dis(x, y)} and {ΔJ(x, y)} respectively: D_org(x, y) = |L_org(x, y) − R_org(x, y)|, D_dis(x, y) = |L_dis(x, y) − R_dis(x, y)|, ΔJ(x, y) = |J_L(x, y) − J_R(x, y)|, where D_org(x, y), D_dis(x, y) and ΔJ(x, y) denote the pixel values at coordinate (x, y) in {D_org(x, y)}, {D_dis(x, y)} and {ΔJ(x, y)} respectively, and "| |" is the absolute-value sign;
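The absolute difference image of step 5.-1 can be sketched directly; it stands in for explicit disparity estimation as the carrier of depth-perception information.

```python
import numpy as np

def abs_diff_image(left, right):
    """Step 5.-1: pixel-wise absolute difference of the two viewpoint
    images (or of the two JND maps), D(x, y) = |left - right|."""
    return np.abs(left.astype(float) - right.astype(float))
```

The same function yields D_org, D_dis and ΔJ depending on which pair of images it is applied to.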
5.-2. Using the same operation as step 3., obtain the block type of each 8 × 8 sub-block of {D_org(x, y)} and {D_dis(x, y)};
5.-3. Compute the spatial noise strength reflecting depth perception of all 8 × 8 sub-blocks of block type k in {D_dis(x, y)}, denoted {fd_k(x_2, y_2)}. For the pixel at coordinate (x_2, y_2) in the 8 × 8 sub-blocks of block type k of {D_dis(x, y)}, its spatial noise strength reflecting depth perception is denoted fd_k(x_2, y_2):
fd_k(x_2, y_2) = (1/M_k) Σ_{(x_4, y_4)} min( max( |D_org(x_4, y_4) − D_dis(x_4, y_4)| − ΔJ(x_4, y_4), 0 ), ST_k )²,
where 1 ≤ x_2 ≤ 8, 1 ≤ y_2 ≤ 8, M_k denotes the number of 8 × 8 sub-blocks of block type k in {D_dis(x, y)}, ST_k is the saturation threshold describing error perception, (x_4, y_4) denotes the coordinate position, in {D_org(x, y)} or in {ΔJ(x, y)}, of the pixel at coordinate (x_2, y_2) in an 8 × 8 sub-block of block type k of {D_dis(x, y)}, 1 ≤ x_4 ≤ W, 1 ≤ y_4 ≤ H, and D_org(x_4, y_4), D_dis(x_4, y_4) and ΔJ(x_4, y_4) denote the pixel values at coordinate (x_4, y_4) in {D_org(x, y)}, {D_dis(x, y)} and {ΔJ(x, y)} respectively;
5.-4. Express the spatial noise strengths reflecting depth perception of the 8 × 8 sub-blocks of the various block types of {D_dis(x, y)} as the set {fd_k(x_2, y_2) | 1 ≤ k ≤ 4}, then arrange all its elements in order to obtain the third feature vector, denoted F_3, whose dimension is 256;
5.-5. Apply singular value decomposition to each 8 × 8 sub-block of {D_org(x, y)} and of {D_dis(x, y)}, obtaining the singular value vector corresponding to each 8 × 8 sub-block of each image; denote the singular value vector of the l-th 8 × 8 sub-block of {D_org(x, y)} by S̄_org^l and that of the l-th 8 × 8 sub-block of {D_dis(x, y)} by S̄_dis^l, where the dimension of a singular value vector is 8 and 1 ≤ l ≤ (W × H)/(8 × 8);
5.-6. Compute the spatial structure strength reflecting depth perception of all 8 × 8 sub-blocks of block type k in {D_dis(x, y)} from the singular value vectors of corresponding sub-blocks, where l″ denotes the sequence number, in {D_org(x, y)} or in {ΔJ(x, y)}, of an 8 × 8 sub-block of block type k of {D_dis(x, y)};
5.-7. Express the spatial structure strengths reflecting depth perception of the 8 × 8 sub-blocks of the various block types of {D_dis(x, y)} as a set, then arrange all its elements in order to obtain the fourth feature vector, denoted F_4, whose dimension is 32;
5.-8. Concatenate the third feature vector F_3 and the fourth feature vector F_4 into a new feature vector, taken as the feature vector of S_dis reflecting depth perception and denoted F_s, F_s = [F_3, F_4], where the dimension of F_s is 288, "[]" is the vector representation symbol, and [F_3, F_4] denotes connecting F_3 and F_4 into one new feature vector.
The detailed process of step 9. is:
9.-1, divide all the distorted stereo images of the same distortion type in the distorted stereo image set into 5 mutually disjoint groups of subsets, and arbitrarily select 4 groups of subsets to compose the training sample data set, denoted Ω q, { X k, DMOS k } ∈ Ω q, wherein q represents the number of distorted stereo images contained in the training sample data set Ω q, X k represents the characteristic vector of the k-th distorted stereo image in Ω q, DMOS k represents the mean subjective score difference of the k-th distorted stereo image in Ω q, 1≤k≤q;
9.-2, construct the regression function f(X k) of X k, f(X k) = w T φ(X k) + b, wherein f(·) is the function representation form, w is the weight vector, w T is the transpose of w, b is the bias term, φ(X k) represents the linear function of the characteristic vector X k of the k-th distorted stereo image in the training sample data set Ω q, and is defined through the kernel function D(X k, X l) in the support vector regression, D(X k, X l) = exp(-|| X k - X l ||²/γ²), wherein X l is the characteristic vector of the l-th distorted stereo image in Ω q, γ is the kernel parameter, used for reflecting the range of the input sample values (the larger the range of the sample values, the larger the value of γ), exp(·) represents the exponential function with base e, e = 2.71828183, and "|| ||" is the symbol for the Euclidean distance;
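As a minimal illustration, such a kernel can be sketched in Python/NumPy. The Gaussian form exp(-||X k - X l||²/γ²) is an assumption here: the text only states that the kernel uses the Euclidean distance and a parameter γ that grows with the range of the input sample values.

```python
import numpy as np

def rbf_kernel(x_k, x_l, gamma):
    """Gaussian RBF kernel between two feature vectors.

    Assumed form: exp(-||x_k - x_l||^2 / gamma^2). The exact placement
    of gamma in the exponent is an assumption, not fixed by the text.
    """
    d = np.linalg.norm(np.asarray(x_k, float) - np.asarray(x_l, float))
    return np.exp(-d ** 2 / gamma ** 2)

# Identical vectors give kernel value 1; distant vectors decay toward 0.
a = np.ones(288)   # a 288-dimensional feature vector, as F q or F s
b = np.zeros(288)
k_same = rbf_kernel(a, a, gamma=10.0)
k_far = rbf_kernel(a, b, gamma=10.0)
print(k_same, k_far)
```

Larger γ flattens the kernel, so inputs spread over a wider range still produce usable similarities, which matches the remark that γ should grow with the range of the sample values.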
9.-3, adopt support vector regression to train the characteristic vectors of all the distorted stereo images in the training sample data set Ω q, so that the error between the regression function values obtained through training and the mean subjective score differences is minimum; fitting obtains the optimal weight vector w opt and the optimal bias term b opt, and the combination of w opt and b opt is denoted (w opt, b opt), (w opt, b opt) = argmin (w, b)∈Ψ Σ k=1..q (f(X k) - DMOS k)²; the obtained optimal weight vector w opt and optimal bias term b opt are then used to construct the support vector regression training model, denoted f(X inp) = (w opt) T φ(X inp) + b opt, wherein Ψ represents the set of all combinations of weight vectors and bias terms obtained by training the characteristic vectors of all the distorted stereo images in Ω q, argmin(·) denotes taking the combination that minimizes the summed squared error, X inp represents the input vector of the support vector regression training model, (w opt) T is the transpose of w opt, and φ(X inp) represents the linear function of the input vector X inp of the support vector regression training model;
9.-4, according to the support vector regression training model, test the distorted stereo images in the remaining 1 group of subsets; prediction obtains the objective quality evaluation predicted value of every distorted stereo image in this group of subsets, and the objective quality evaluation predicted value of the j-th distorted stereo image in this group of subsets is denoted Q j, Q j = f(X j) = (w opt) T φ(X j) + b opt, wherein X j represents the characteristic vector of the j-th distorted stereo image in this group of subsets, and φ(X j) represents the linear function of the characteristic vector of the j-th distorted stereo image in this group of subsets;
9.-5, according to the processes of steps 9.-1 to 9.-4, train the distorted stereo images of each distortion type in the distorted stereo image set respectively, obtaining the objective quality evaluation predicted value of every distorted stereo image in the distorted stereo image set.
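The grouping, training and testing of steps 9.-1 to 9.-5 can be sketched as follows. This is a minimal Python/NumPy illustration with synthetic feature vectors and DMOS values; a closed-form ridge regressor stands in for the support vector regression, purely to keep the sketch self-contained (like the SVR of the text, it fits a weight vector and a bias term minimising the error against the mean subjective score differences).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: 20 distorted images of one distortion type, each
# with an 8-dim feature vector (the real characteristic vectors are
# built from F q and F s) and a DMOS value.
X = rng.normal(size=(20, 8))
w_true = rng.normal(size=8)
dmos = X @ w_true + 0.01 * rng.normal(size=20)

def train_ridge(X, y, lam=1e-3):
    """Closed-form ridge regression: a stand-in for SVR training that
    fits a weight vector and bias minimising the squared error."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append bias column
    w = np.linalg.solve(Xb.T @ Xb + lam * np.eye(Xb.shape[1]), Xb.T @ y)
    return w[:-1], w[-1]

# Step 9.-1: split into 5 mutually disjoint groups; train on 4 groups.
groups = np.array_split(np.arange(20), 5)
test_idx = groups[4]
train_idx = np.concatenate(groups[:4])
w_opt, b_opt = train_ridge(X[train_idx], dmos[train_idx])

# Step 9.-4: predict the held-out group, giving Q_j for each image.
pred = X[test_idx] @ w_opt + b_opt
print(np.round(pred - dmos[test_idx], 2))
```

Rotating which group is held out, and repeating the whole procedure per distortion type, yields a prediction for every distorted image in the set as in step 9.-5.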
In step 4. and step 8., when calculating the characteristic vector for reflecting image quality of a stereo image with JPEG compression distortion, take w L = 0.50, w R = 0.50; when calculating the characteristic vector for reflecting image quality of a stereo image with JPEG2000 compression distortion, take w L = 0.15, w R = 0.85; when calculating the characteristic vector for reflecting image quality of a stereo image with Gaussian blur distortion, take w L = 0.10, w R = 0.90; when calculating the characteristic vector for reflecting image quality of a stereo image with white noise distortion, take w L = 0.20, w R = 0.80; when calculating the characteristic vector for reflecting image quality of a stereo image with H.264 coding distortion, take w L = 0.10, w R = 0.90.
Compared with prior art, the invention has the advantages that:
1) The method of the invention considers that different regions have different responses to stereoscopic perception, divides the stereo image into strong edge blocks, weak edge blocks, flat blocks and texture blocks for separate evaluation, and at the same time incorporates image quality and depth perception information into the evaluation procedure, so that the evaluation result conforms better to the human visual system.
2) The method of the invention obtains the minimum discernable distorted image according to the visual characteristics of the human eye, and extracts the characteristic information of the different region blocks by calculating spatial noise intensity and space structure intensity to form the characteristic vector of the stereo image; the obtained characteristic vector information of the stereo image has stronger stability and can better reflect the quality change of the stereo image, improving the correlation between the objective evaluation results and subjective perception.
Description of drawings
Fig. 1 is the overall implementation block diagram of the method of the invention;
Fig. 2 a is the left visual point image of Akko (being of a size of 640 * 480) stereo-picture;
Fig. 2 b is the right visual point image of Akko (being of a size of 640 * 480) stereo-picture;
Fig. 3 a is the left visual point image of Altmoabit (being of a size of 1024 * 768) stereo-picture;
Fig. 3 b is the right visual point image of Altmoabit (being of a size of 1024 * 768) stereo-picture;
Fig. 4 a is the left visual point image of Balloons (being of a size of 1024 * 768) stereo-picture;
Fig. 4 b is the right visual point image of Balloons (being of a size of 1024 * 768) stereo-picture;
Fig. 5 a is the left visual point image of Doorflower (being of a size of 1024 * 768) stereo-picture;
Fig. 5 b is the right visual point image of Doorflower (being of a size of 1024 * 768) stereo-picture;
Fig. 6 a is the left visual point image of Kendo (being of a size of 1024 * 768) stereo-picture;
Fig. 6 b is the right visual point image of Kendo (being of a size of 1024 * 768) stereo-picture;
Fig. 7 a is the left visual point image of LeaveLaptop (being of a size of 1024 * 768) stereo-picture;
Fig. 7 b is the right visual point image of LeaveLaptop (being of a size of 1024 * 768) stereo-picture;
Fig. 8 a is the left visual point image of Lovebierd1 (being of a size of 1024 * 768) stereo-picture;
Fig. 8 b is the right visual point image of Lovebierd1 (being of a size of 1024 * 768) stereo-picture;
Fig. 9 a is the left visual point image of Newspaper (being of a size of 1024 * 768) stereo-picture;
Fig. 9 b is the right visual point image of Newspaper (being of a size of 1024 * 768) stereo-picture;
Figure 10 a is the left visual point image of Puppy (being of a size of 720 * 480) stereo-picture;
Figure 10 b is the right visual point image of Puppy (being of a size of 720 * 480) stereo-picture;
Figure 11 a is the left visual point image of Soccer2 (being of a size of 720 * 480) stereo-picture;
Figure 11 b is the right visual point image of Soccer2 (being of a size of 720 * 480) stereo-picture;
Figure 12 a is the left visual point image of Horse (being of a size of 720 * 480) stereo-picture;
Figure 12 b is the right visual point image of Horse (being of a size of 720 * 480) stereo-picture;
Figure 13 a is the left visual point image of Xmas (being of a size of 640 * 480) stereo-picture;
Figure 13 b is the right visual point image of Xmas (being of a size of 640 * 480) stereo-picture.
Embodiment
The present invention is described in further detail below in conjunction with the accompanying drawings and embodiments.
The stereo image quality objective evaluation method based on visual perception proposed by the present invention has an overall implementation block diagram as shown in Fig. 1, and mainly comprises the following steps:
1. Let S org be the original undistorted stereo image and S dis the distorted stereo image to be evaluated; denote the left visual point image of S org as { L org(x, y) }, the right visual point image of S org as { R org(x, y) }, the left visual point image of S dis as { L dis(x, y) }, and the right visual point image of S dis as { R dis(x, y) }, wherein (x, y) represents the coordinate position of a pixel in the left and right visual point images, 1≤x≤W, 1≤y≤H, W represents the width of the left and right visual point images, H represents their height, L org(x, y) represents the pixel value of the pixel whose coordinate position is (x, y) in { L org(x, y) }, R org(x, y) represents the pixel value of the pixel whose coordinate position is (x, y) in { R org(x, y) }, L dis(x, y) represents the pixel value of the pixel whose coordinate position is (x, y) in { L dis(x, y) }, and R dis(x, y) represents the pixel value of the pixel whose coordinate position is (x, y) in { R dis(x, y) }.
2. Human visual system (HVS) characteristics show that the human eye is insensitive to small changes of attributes or noise in an image unless the change intensity of the attribute or noise exceeds a certain threshold, which is the just noticeable distortion (JND) threshold. Moreover, the visual masking effect of the human eye is a local effect, affected by factors such as background illuminance and texture complexity: the brighter the background and the more complex the texture, the higher the threshold. The present invention therefore utilizes the visual masking effects of human vision with respect to background illumination and texture to extract the minimum discernable distorted images of the undistorted left visual point image { L org(x, y) } and the undistorted right visual point image { R org(x, y) } respectively; the minimum discernable distorted image of { L org(x, y) } is denoted { J L(x, y) } and that of { R org(x, y) } is denoted { J R(x, y) }, wherein J L(x, y) represents the pixel value of the pixel whose coordinate position is (x, y) in { J L(x, y) }, and J R(x, y) represents the pixel value of the pixel whose coordinate position is (x, y) in { J R(x, y) }.
In this specific embodiment, the detailed process of step 2. is:
2.-1, calculate the visual threshold set of the visual masking effect of background illumination of the undistorted left visual point image { L org(x, y) }, denoted { T l(x, y) }, wherein T l(x, y) represents the visual threshold of the visual masking effect of background illumination of the pixel whose coordinate position is (x, y) in { L org(x, y) }, and is determined by the average brightness of all pixels in a 5 * 5 window centered on the pixel whose coordinate position is (x, y) in { L org(x, y) }; in actual processing, windows of other sizes may also be adopted, but extensive experiments show that the best results are obtained with a 5 * 5 window.
2.-2, calculate the visual threshold set of the visual masking effect of texture of the undistorted left visual point image { L org(x, y) }, denoted { T t(x, y) }, T t(x, y) = η * G(x, y) * W e(x, y), wherein T t(x, y) represents the visual threshold of the visual masking effect of texture of the pixel whose coordinate position is (x, y) in { L org(x, y) }, η is a control factor greater than 0 (in the present embodiment, η = 0.05), G(x, y) represents the maximum weighted average obtained by directional high-pass filtering of the pixel whose coordinate position is (x, y) in { L org(x, y) }, and W e(x, y) represents the edge weighting value obtained by Gaussian low-pass filtering of the pixel whose coordinate position is (x, y) in the edge image of { L org(x, y) }.
2.-3, merge the visual threshold set { T l(x, y) } of the visual masking effect of background illumination and the visual threshold set { T t(x, y) } of the visual masking effect of texture of the undistorted left visual point image { L org(x, y) } to obtain the minimum discernable distorted image of { L org(x, y) }, denoted { J L(x, y) }, J L(x, y) = T l(x, y) + T t(x, y) - C L,t * min{ T l(x, y), T t(x, y) }, wherein C L,t represents the parameter controlling the overlap of the visual masking effects of background illumination and texture, 0 < C L,t < 1; in the present embodiment, C L,t = 0.5, and min{} is the minimum value function.
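The merging rule of step 2.-3 can be illustrated with a small sketch; the luminance and texture threshold maps below are synthetic placeholders, and only the combination J L = T l + T t - C L,t * min(T l, T t) with C L,t = 0.5 follows the text.

```python
import numpy as np

C_lt = 0.5  # controls the overlap of the two masking effects, 0 < C_lt < 1

def merge_jnd(T_l, T_t, c=C_lt):
    """Merge luminance-masking and texture-masking visual thresholds
    into the just-noticeable-distortion (JND) map of step 2.-3."""
    return T_l + T_t - c * np.minimum(T_l, T_t)

# Synthetic 4x4 threshold maps standing in for the real ones.
T_l = np.full((4, 4), 6.0)   # background-illumination thresholds
T_t = np.full((4, 4), 2.0)   # texture thresholds
J = merge_jnd(T_l, T_t)
print(J[0, 0])  # 6 + 2 - 0.5*2 = 7.0
```

Subtracting a fraction of the smaller threshold keeps the merged JND at least as large as either single masking threshold without simply adding them, which is the stated purpose of the overlap parameter C L,t.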
2.-4, adopt the same operations as steps 2.-1 to 2.-3 to obtain the minimum discernable distorted image of the undistorted right visual point image { R org(x, y) }, denoted { J R(x, y) }.
3. Since the human visual system has different sensitivities to the edge, texture and smooth regions of an image, different regions also respond differently to stereoscopic perception; therefore, the contributions of different regions to the evaluation should be considered separately in stereo image quality evaluation. The present invention obtains, by a region detection algorithm, the block type of each 8 * 8 sub-block in the undistorted left visual point image { L org(x, y) } and the distorted left visual point image { L dis(x, y) }, and in the undistorted right visual point image { R org(x, y) } and the distorted right visual point image { R dis(x, y) }, denoted p, wherein p ∈ {1, 2, 3, 4}: p=1 represents a strong edge block, p=2 a weak edge block, p=3 a smooth block, and p=4 a texture block.
In this specific embodiment, the detailed process of the region detection algorithm of step 3. is:
3.-1, divide the undistorted left visual point image { L org(x, y) } and the distorted left visual point image { L dis(x, y) } respectively into (W * H)/(8 * 8) non-overlapping 8 * 8 sub-blocks; define the l-th 8 * 8 sub-block in { L org(x, y) } as the current first sub-block, denoted { B org l(x 2, y 2) }, and define the l-th 8 * 8 sub-block in { L dis(x, y) } as the current second sub-block, denoted { B dis l(x 2, y 2) }, wherein 1≤l≤(W * H)/(8 * 8), (x 2, y 2) represents the coordinate position of a pixel in the current first sub-block and the current second sub-block, 1≤x 2≤8, 1≤y 2≤8, B org l(x 2, y 2) represents the pixel value of the pixel whose coordinate position is (x 2, y 2) in the current first sub-block, and B dis l(x 2, y 2) represents the pixel value of the pixel whose coordinate position is (x 2, y 2) in the current second sub-block.
3.-2, calculate the gradient values of all pixels in the current first sub-block { B org l(x 2, y 2) } and the current second sub-block { B dis l(x 2, y 2) } respectively: for the pixel whose coordinate position is (x 2', y 2') in the current first sub-block, its gradient value is denoted P o(x 2', y 2'), P o(x 2', y 2') = | G ox(x 2', y 2') | + | G oy(x 2', y 2') |; for the pixel whose coordinate position is (x 2', y 2') in the current second sub-block, its gradient value is denoted P d(x 2', y 2'), P d(x 2', y 2') = | G dx(x 2', y 2') | + | G dy(x 2', y 2') |, wherein 1≤x 2'≤8, 1≤y 2'≤8, G ox(x 2', y 2') and G oy(x 2', y 2') represent the horizontal and vertical gradient values of the pixel whose coordinate position is (x 2', y 2') in the current first sub-block, G dx(x 2', y 2') and G dy(x 2', y 2') represent the horizontal and vertical gradient values of the pixel whose coordinate position is (x 2', y 2') in the current second sub-block, and "| |" is the absolute value symbol.
3.-3, find the maximum of the gradient values of all pixels in the current first sub-block { B org l(x 2, y 2) }, denoted G max, then calculate the first gradient threshold T 1 and the second gradient threshold T 2 according to G max: T 1 = 0.12 * G max, T 2 = 0.06 * G max.
3.-4, for the pixel whose coordinate position is (x 2', y 2') in the current first sub-block { B org l(x 2, y 2) } and the pixel whose coordinate position is (x 2', y 2') in the current second sub-block { B dis l(x 2, y 2) }, judge whether P o(x 2', y 2') > T 1 and P d(x 2', y 2') > T 1 both hold; if so, judge that the two pixels belong to a strong edge region, let Num 1 = Num 1 + 1, and then execute step 3.-8; otherwise, execute step 3.-5, wherein the initial value of Num 1 is 0.
3.-5, judge whether P o(x 2', y 2') > T 1 and P d(x 2', y 2') <= T 1, or P d(x 2', y 2') > T 1 and P o(x 2', y 2') <= T 1, holds; if so, judge that the pixel whose coordinate position is (x 2', y 2') in the current first sub-block and the pixel whose coordinate position is (x 2', y 2') in the current second sub-block belong to a weak edge region, let Num 2 = Num 2 + 1, and then execute step 3.-8; otherwise, execute step 3.-6, wherein the initial value of Num 2 is 0.
3.-6, judge whether P o(x 2', y 2') < T 2 and P d(x 2', y 2') < T 1 both hold; if so, judge that the pixel whose coordinate position is (x 2', y 2') in the current first sub-block and the pixel whose coordinate position is (x 2', y 2') in the current second sub-block belong to a smooth region, let Num 3 = Num 3 + 1, and then execute step 3.-8; otherwise, execute step 3.-7, wherein the initial value of Num 3 is 0.
3.-7, judge that the pixel whose coordinate position is (x 2', y 2') in the current first sub-block and the pixel whose coordinate position is (x 2', y 2') in the current second sub-block belong to a texture region, and let Num 4 = Num 4 + 1, wherein the initial value of Num 4 is 0.
3.-8, return to step 3.-4 to continue processing the remaining pixels in the current first sub-block and the current second sub-block, until all 8 * 8 pixels in the current first sub-block and the current second sub-block have been processed.
3.-9, take the region type corresponding to the maximum among Num 1, Num 2, Num 3 and Num 4 as the block type of the current first sub-block and the current second sub-block, denoted p, wherein p ∈ {1, 2, 3, 4}: p=1 represents a strong edge block, p=2 a weak edge block, p=3 a smooth block, and p=4 a texture block.
3.-10, let l″ = l + 1 and l = l″, take the next 8 * 8 sub-block in { L org(x, y) } as the current first sub-block and the next 8 * 8 sub-block in { L dis(x, y) } as the current second sub-block, return to step 3.-2 and continue, until all (W * H)/(8 * 8) non-overlapping 8 * 8 sub-blocks in { L org(x, y) } and { L dis(x, y) } have been processed, thereby obtaining the block types of all 8 * 8 sub-blocks in { L org(x, y) } and { L dis(x, y) }, wherein the initial value of l″ is 0.
3.-11, adopt the same operations as steps 3.-1 to 3.-10 to obtain the block types of all 8 * 8 sub-blocks in the undistorted right visual point image { R org(x, y) } and the distorted right visual point image { R dis(x, y) }.
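The region detection of steps 3.-1 to 3.-10 can be condensed into the following sketch, which classifies one pair of co-located 8 * 8 sub-blocks by voting over the pixel-wise gradient tests with T 1 = 0.12 * G max and T 2 = 0.06 * G max as in the text; the central-difference gradient operator is an implementation choice not fixed by the patent.

```python
import numpy as np

def classify_block(b_org, b_dis):
    """Return block type p: 1 strong edge, 2 weak edge, 3 smooth, 4 texture."""
    def grad(b):
        gx = np.zeros_like(b, dtype=float)
        gy = np.zeros_like(b, dtype=float)
        gx[:, 1:-1] = b[:, 2:] - b[:, :-2]  # horizontal central differences
        gy[1:-1, :] = b[2:, :] - b[:-2, :]  # vertical central differences
        return np.abs(gx) + np.abs(gy)      # P = |G_x| + |G_y|
    p_o = grad(np.asarray(b_org, float))
    p_d = grad(np.asarray(b_dis, float))
    t1, t2 = 0.12 * p_o.max(), 0.06 * p_o.max()   # thresholds from G_max
    num = [0, 0, 0, 0]                             # Num_1 .. Num_4
    for po, pd in zip(p_o.ravel(), p_d.ravel()):
        if po > t1 and pd > t1:
            num[0] += 1                  # step 3.-4: strong edge region
        elif (po > t1) != (pd > t1):
            num[1] += 1                  # step 3.-5: weak edge region
        elif po < t2 and pd < t1:
            num[2] += 1                  # step 3.-6: smooth region
        else:
            num[3] += 1                  # step 3.-7: texture region
    return int(np.argmax(num)) + 1       # step 3.-9: majority vote

spike = np.zeros((8, 8)); spike[4, 4] = 100.0      # mostly flat block
ramp = np.tile(np.arange(8, dtype=float) * 10.0, (8, 1))  # uniform ramp
print(classify_block(spike, spike), classify_block(ramp, ramp))  # → 3 1
```

The mostly flat block votes smooth (p=3); in the ramp every interior pixel passes the strong-edge test, so it votes strong edge (p=1).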
4. Since the quality of a stereo image is directly related to the quality of its left and right viewpoint images, introducing visual perception characteristics such as visual sensitivity, multichannel characteristics and masking effects into image quality evaluation can improve the correlation between the evaluation model and subjective scores; considering distortion perceptibility and perception saturation phenomena, the minimum discernable distorted image is taken as the visual perception characteristic. According to the minimum discernable distorted image { J L(x, y) } of the undistorted left visual point image { L org(x, y) } and the minimum discernable distorted image { J R(x, y) } of the undistorted right visual point image { R org(x, y) }, the present invention calculates the spatial noise intensity for reflecting image quality and the space structure intensity for reflecting image quality of the 8 * 8 sub-blocks of the various block types in the distorted left visual point image { L dis(x, y) } and in the distorted right visual point image { R dis(x, y) }, obtains the characteristic vector for reflecting image quality of the distorted left visual point image { L dis(x, y) } and the characteristic vector for reflecting image quality of the distorted right visual point image { R dis(x, y) } respectively, and then carries out linear weighting on the two characteristic vectors for reflecting image quality to obtain the characteristic vector of S dis for reflecting image quality, denoted F q.
In this specific embodiment, the detailed process of step 4. is:
4.-1, calculate the spatial noise intensity for reflecting image quality of all 8 * 8 sub-blocks whose block type is k in the distorted left visual point image { L dis(x, y) }, denoted { fq k(x 2, y 2) }: for the pixel whose coordinate position is (x 2, y 2) in the 8 * 8 sub-blocks whose block type is k in { L dis(x, y) }, its spatial noise intensity for reflecting image quality is denoted fq k(x 2, y 2), fq k(x 2, y 2) = (1/N k) Σ (x 3, y 3) min(max(| L org(x 3, y 3) - L dis(x 3, y 3) | - J L(x 3, y 3), 0), ST k)², wherein k ∈ { p | 1≤p≤4 }, fq k(x 2, y 2) represents the spatial noise intensity for reflecting image quality of the pixel whose coordinate position is (x 2, y 2) in the 8 * 8 sub-blocks whose block type is k in { L dis(x, y) }, 1≤x 2≤8, 1≤y 2≤8, N k represents the number of 8 * 8 sub-blocks whose block type is k in { L dis(x, y) }, ST k is the saturation threshold value describing error perception (in the present embodiment, ST k = 30), max() is the maximum value function, min() is the minimum value function, (x 3, y 3) represents the coordinate position, in the undistorted left visual point image { L org(x, y) } or in its minimum discernable distorted image { J L(x, y) }, of the pixel whose coordinate position is (x 2, y 2) in an 8 * 8 sub-block whose block type is k in { L dis(x, y) }, 1≤x 3≤W, 1≤y 3≤H, L org(x 3, y 3) represents the pixel value of the pixel whose coordinate position is (x 3, y 3) in { L org(x, y) }, L dis(x 3, y 3) represents the pixel value of the pixel whose coordinate position is (x 3, y 3) in { L dis(x, y) }, J L(x 3, y 3) represents the pixel value of the pixel whose coordinate position is (x 3, y 3) in { J L(x, y) }, and "| |" is the absolute value symbol.
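The per-pixel term of step 4.-1, min(max(|L org - L dis| - J L, 0), ST k)² with ST k = 30, can be sketched as follows; the aggregation over all sub-blocks of one block type is reduced here to two synthetic 8 * 8 patches.

```python
import numpy as np

ST = 30.0  # saturation threshold of error perception, ST_k = 30 in the text

def spatial_noise_intensity(blocks_org, blocks_dis, blocks_jnd, st=ST):
    """fq_k map (8x8) for one block type: average over the N_k sub-blocks
    of that type of the JND-gated, saturated squared error."""
    err = np.abs(blocks_org - blocks_dis) - blocks_jnd  # error above JND
    gated = np.minimum(np.maximum(err, 0.0), st) ** 2   # gate and saturate
    return gated.mean(axis=0)                           # average over N_k

# Two synthetic 8x8 sub-blocks of the same block type.
org = np.zeros((2, 8, 8))
dis = np.full((2, 8, 8), 40.0)   # constant distortion of magnitude 40
jnd = np.full((2, 8, 8), 5.0)    # constant JND threshold of 5
fq = spatial_noise_intensity(org, dis, jnd)
print(fq[0, 0])  # min(max(40 - 5, 0), 30)^2 = 900.0
```

Errors below the JND threshold contribute nothing, and errors far above it saturate at ST k, which models the distortion perceptibility and perception saturation described in step 4.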
4.-2, represent the spatial noise intensities for reflecting image quality of the 8 * 8 sub-blocks of the various block types in the distorted left visual point image { L dis(x, y) } as the set { fq k(x 2, y 2) | 1≤k≤4 }, then arrange all elements in { fq k(x 2, y 2) | 1≤k≤4 } in order to obtain the first characteristic vector, denoted F 1, wherein the dimension of F 1 is 256.
4.-3, perform singular value decomposition on each 8 * 8 sub-block in the undistorted left visual point image { L org(x, y) } and the distorted left visual point image { L dis(x, y) } respectively, obtaining the singular value vector corresponding to each 8 * 8 sub-block; denote the singular value vector of the l-th 8 * 8 sub-block in { L org(x, y) } as S org,l, and denote the singular value vector of the l-th 8 * 8 sub-block in { L dis(x, y) } as S dis,l, wherein the dimension of the singular value vectors is 8, 1≤l≤(W * H)/(8 * 8).
4.-4, calculate the space structure intensity for reflecting image quality of all 8 * 8 sub-blocks whose block type is k in the distorted left visual point image { L dis(x, y) }, denoted gq k; gq k is obtained by averaging, over the N k sub-blocks of block type k, the element-by-element absolute differences of the corresponding singular value vectors, gq k = (1/N k) Σ l' | S org,l' - S dis,l' |, wherein l' represents the sequence number, in the undistorted left visual point image { L org(x, y) } or in its minimum discernable distorted image { J L(x, y) }, of an 8 * 8 sub-block whose block type is k in { L dis(x, y) }.
4.-5, represent the space structure intensities for reflecting image quality of the 8 * 8 sub-blocks of the various block types in the distorted left visual point image { L dis(x, y) } as the set { gq k | 1≤k≤4 }, then arrange all elements in { gq k | 1≤k≤4 } in order to obtain the second characteristic vector, denoted F 2, wherein the dimension of F 2 is 32.
4.-6, with the First Characteristic vector F 1With the Second Characteristic vector F 2Form the New Characteristics vector, as the left visual point image { L of distortion dis(x, y) } the characteristic vector that is used for reflection picture quality, be designated as F L, F L=[F 1, F 2], wherein, F LDimension be 288, " [] " is the vector representation symbol, [F 1, F 2] represent the First Characteristic vector F 1With the Second Characteristic vector F 2Couple together and form a New Characteristics vector.
4.-7, adopt the same operations as steps 4.-1 to 4.-6 for the distorted right visual point image { R dis(x, y) } to obtain the characteristic vector for reflecting image quality of { R dis(x, y) }, denoted F R, wherein the dimension of F R is 288.
4.-8, carry out linear weighting on the characteristic vector F L for reflecting image quality of the distorted left visual point image { L dis(x, y) } and the characteristic vector F R for reflecting image quality of the distorted right visual point image { R dis(x, y) } to obtain the characteristic vector of S dis for reflecting image quality, denoted F q, F q = w L * F L + w R * F R, wherein w L represents the weight proportion of the distorted left visual point image { L dis(x, y) }, w R represents the weight proportion of the distorted right visual point image { R dis(x, y) }, and w L + w R = 1.
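The linear weighting of step 4.-8 is a one-liner; the sketch below uses the distortion-type-specific weights given in the text (e.g. w L = 0.15, w R = 0.85 for JPEG2000), with the short distortion-type keys chosen here purely as labels of convenience.

```python
import numpy as np

# Weight pairs (w_L, w_R) per distortion type, as specified in the text.
WEIGHTS = {
    "JPEG":     (0.50, 0.50),
    "JPEG2000": (0.15, 0.85),
    "GBLUR":    (0.10, 0.90),   # Gaussian blur
    "WN":       (0.20, 0.80),   # white noise
    "H264":     (0.10, 0.90),
}

def fuse(F_L, F_R, distortion):
    """F_q = w_L * F_L + w_R * F_R (step 4.-8)."""
    w_l, w_r = WEIGHTS[distortion]
    return w_l * F_L + w_r * F_R

F_L = np.ones(288)   # 288-dim left-view quality feature vector
F_R = np.zeros(288)  # 288-dim right-view quality feature vector
print(fuse(F_L, F_R, "JPEG2000")[0])  # 0.15 * 1 + 0.85 * 0 = 0.15
```

The asymmetric weights reflect that for most distortion types the right view is weighted more heavily than the left, while JPEG compression uses an even split.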
5. Existing studies show that when the difference between the absolute difference image of the undistorted left and right visual point images and the absolute difference image of the distorted left and right visual point images exceeds a certain threshold, the decline of depth perception is easily perceived by the human eye; therefore, the depth perception of a stereo image can be estimated by the similarity between these two absolute difference images: the more similar the absolute difference images, the stronger the depth perception. Accordingly, the present invention, based on the minimum discernable distorted image { J L(x, y) } of the undistorted left visual point image { L org(x, y) } and the minimum discernable distorted image { J R(x, y) } of the undistorted right visual point image { R org(x, y) }, calculates the spatial noise intensity for reflecting depth perception and the space structure intensity for reflecting depth perception of the 8 * 8 sub-blocks of the various block types in the absolute difference image of the distorted left visual point image { L dis(x, y) } and the distorted right visual point image { R dis(x, y) }, obtaining the characteristic vector of S dis for reflecting depth perception, denoted F s.
In this specific embodiment, the detailed process of step 5. is:
5.-1, calculate the absolute difference image of the undistorted left visual point image { L org(x, y) } and the undistorted right visual point image { R org(x, y) }, the absolute difference image of the distorted left visual point image { L dis(x, y) } and the distorted right visual point image { R dis(x, y) }, and the absolute difference image of the minimum discernable distorted images { J L(x, y) } and { J R(x, y) }, denoted { D org(x, y) }, { D dis(x, y) } and { Δ J(x, y) } respectively: D org(x, y) = | L org(x, y) - R org(x, y) |, D dis(x, y) = | L dis(x, y) - R dis(x, y) |, Δ J(x, y) = | J L(x, y) - J R(x, y) |, wherein D org(x, y) represents the pixel value of the pixel whose coordinate position is (x, y) in { D org(x, y) }, D dis(x, y) represents the pixel value of the pixel whose coordinate position is (x, y) in { D dis(x, y) }, Δ J(x, y) represents the pixel value of the pixel whose coordinate position is (x, y) in { Δ J(x, y) }, and "| |" is the absolute value symbol.
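The three absolute difference images of step 5.-1 are computed pixel-by-pixel; a minimal sketch with toy 2 * 2 views:

```python
import numpy as np

def abs_diff(a, b):
    """Absolute difference image, e.g. D_org = |L_org - R_org| (step 5.-1)."""
    return np.abs(a.astype(float) - b.astype(float))

L_org = np.array([[10.0, 20.0], [30.0, 40.0]])  # toy 2x2 left view
R_org = np.array([[12.0, 18.0], [25.0, 40.0]])  # toy 2x2 right view
D_org = abs_diff(L_org, R_org)
print(D_org)  # [[2. 2.] [5. 0.]]
```

The same helper applied to the distorted pair and to the pair of minimum discernable distorted images yields { D dis(x, y) } and { Δ J(x, y) }.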
5.-2, adopt the same operation as step 3. to obtain the block type of each 8 * 8 sub-block in { D org(x, y) } and { D dis(x, y) } respectively.
5.-3, calculate the spatial noise intensity for reflecting depth perception of all 8 * 8 sub-blocks whose block type is k in { D dis(x, y) }, denoted { fd k(x 2, y 2) }: for the pixel whose coordinate position is (x 2, y 2) in the 8 * 8 sub-blocks whose block type is k in { D dis(x, y) }, its spatial noise intensity for reflecting depth perception is denoted fd k(x 2, y 2), fd k(x 2, y 2) = (1/M k) Σ (x 4, y 4) min(max(| D org(x 4, y 4) - D dis(x 4, y 4) | - ΔJ(x 4, y 4), 0), ST k)², wherein fd k(x 2, y 2) represents the spatial noise intensity for reflecting depth perception of the pixel whose coordinate position is (x 2, y 2) in the 8 * 8 sub-blocks whose block type is k in { D dis(x, y) }, 1≤x 2≤8, 1≤y 2≤8, M k represents the number of 8 * 8 sub-blocks whose block type is k in { D dis(x, y) }, ST k is the saturation threshold value describing error perception, (x 4, y 4) represents the coordinate position in { D org(x, y) } or { Δ J(x, y) } of the pixel whose coordinate position is (x 2, y 2) in an 8 * 8 sub-block whose block type is k in { D dis(x, y) }, 1≤x 4≤W, 1≤y 4≤H, D org(x 4, y 4) represents the pixel value of the pixel whose coordinate position is (x 4, y 4) in { D org(x, y) }, D dis(x 4, y 4) represents the pixel value of the pixel whose coordinate position is (x 4, y 4) in { D dis(x, y) }, and ΔJ(x 4, y 4) represents the pixel value of the pixel whose coordinate position is (x 4, y 4) in { Δ J(x, y) }.
5.-4. Express the spatial noise intensities used to reflect depth perception of the 8×8 sub-blocks of the various block types in {D_dis(x,y)} as the set {fd_k(x2,y2) | 1 ≤ k ≤ 4}, then arrange all elements of {fd_k(x2,y2) | 1 ≤ k ≤ 4} in order to obtain the third feature vector, denoted F_3, whose dimension is 256.
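Steps 5.-3 and 5.-4 can be sketched as below. This is a hedged illustration: the `block_types` grid (one label per 8×8 block) and the ST_k thresholds are assumed inputs, since the patent obtains the former from its region detection algorithm and does not list numeric ST_k values.

```python
import numpy as np

def spatial_noise_intensity(D_org, D_dis, dJ, block_types, ST):
    """For each block type k, average the saturated squared error over all
    8x8 sub-blocks of that type, yielding one 8x8 map fd_k per type."""
    H, W = D_org.shape
    fd = {}
    for k in (1, 2, 3, 4):
        acc = np.zeros((8, 8))
        M_k = 0
        for by in range(H // 8):
            for bx in range(W // 8):
                if block_types[by, bx] != k:
                    continue
                M_k += 1
                sl = np.s_[by * 8:(by + 1) * 8, bx * 8:(bx + 1) * 8]
                e = np.abs(D_org[sl] - D_dis[sl]) - dJ[sl]
                acc += np.minimum(np.maximum(e, 0.0), ST[k]) ** 2
        fd[k] = acc / M_k if M_k else acc  # all-zero map if no block of type k
    return fd  # flattening fd[1..4] in order gives the 256-dim F_3
```

Concatenating the four 8×8 maps gives 4 × 64 = 256 elements, matching the stated dimension of F_3.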
5.-5. Apply singular value decomposition to each 8×8 sub-block of {D_org(x,y)} and of {D_dis(x,y)} to obtain the singular value vector corresponding to each 8×8 sub-block; denote the singular value vector of the l-th 8×8 sub-block of {D_org(x,y)} as S_org^l and the singular value vector of the l-th 8×8 sub-block of {D_dis(x,y)} as S_dis^l, where the dimension of each singular value vector is 8 and 1 ≤ l ≤ (W×H)/(8×8).
5.-6. Compute the space structure intensity used to reflect depth perception of all 8×8 sub-blocks of block type k in {D_dis(x,y)}, denoted fs_k, from the singular value vectors of corresponding sub-blocks, where l'' denotes the sequence number in {D_org(x,y)} or {ΔJ(x,y)} of an 8×8 sub-block of block type k in {D_dis(x,y)}.
5.-7. Express the space structure intensities used to reflect depth perception of the 8×8 sub-blocks of the various block types in {D_dis(x,y)} as the set {fs_k | 1 ≤ k ≤ 4}, then arrange all elements of the set in order to obtain the fourth feature vector, denoted F_4, whose dimension is 32.
5.-8. Concatenate the third feature vector F_3 and the fourth feature vector F_4 into a new feature vector, taken as the feature vector of S_dis used to reflect depth perception and denoted F_s, F_s = [F_3, F_4], where the dimension of F_s is 288, "[ ]" is the vector representation symbol, and [F_3, F_4] denotes connecting F_3 and F_4 to form one new feature vector.
6. Concatenate the feature vector F_q of S_dis used to reflect image quality and the feature vector F_s used to reflect depth perception into a new feature vector, taken as the feature vector of S_dis and denoted X, X = [F_q, F_s], where "[ ]" is the vector representation symbol and [F_q, F_s] denotes connecting F_q and F_s to form one new feature vector.
7. Take n undistorted stereo images and establish their distorted stereo image set under different distortion levels of different distortion types; this set comprises several distorted stereo images. Using a subjective quality assessment method, obtain the mean subjective score difference of each distorted stereo image in the set, denoted DMOS, DMOS = 100 - MOS, where MOS denotes the mean opinion score, DMOS ∈ [0, 100], and n ≥ 1.
In the present embodiment, since the tested stereo images are obtained by H.264 coding, the distortion type of the training samples and the test samples in support vector regression should be consistent. The 12 undistorted stereo images (n = 12) formed by Fig. 2a and Fig. 2b, Fig. 3a and Fig. 3b, Fig. 4a and Fig. 4b, Fig. 5a and Fig. 5b, Fig. 6a and Fig. 6b, Fig. 7a and Fig. 7b, Fig. 8a and Fig. 8b, Fig. 9a and Fig. 9b, Fig. 10a and Fig. 10b, Fig. 11a and Fig. 11b, Fig. 12a and Fig. 12b, and Fig. 13a and Fig. 13b were used to establish the distorted stereo image set under different distortion levels of the H.264 coding distortion type; this set contains 72 distorted stereo images.
8. Using the same method as for computing the feature vector X of S_dis, compute the feature vector of each distorted stereo image in the distorted stereo image set; denote the feature vector of the i-th distorted stereo image in the set as X_i, where 1 ≤ i ≤ n' and n' denotes the number of distorted stereo images in the set.
In this specific embodiment, according to the characteristic that the stereoscopic visual masking effect of the human eye is inconsistent across distortion types, different weight proportions are assigned to the left and right viewpoint images of stereo images of different distortion types when computing the feature vector used to reflect image quality: for JPEG compression distortion, w_L = 0.50, w_R = 0.50; for JPEG2000 compression distortion, w_L = 0.15, w_R = 0.85; for Gaussian blur distortion, w_L = 0.10, w_R = 0.90; for white noise distortion, w_L = 0.20, w_R = 0.80; and for H.264 coding distortion, w_L = 0.10, w_R = 0.90.
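The per-distortion weighting F_q = w_L × F_L + w_R × F_R with the embodiment's values can be sketched as follows (the string keys are illustrative labels of our own, not identifiers from the patent):

```python
import numpy as np

# (w_L, w_R) per distortion type, as given in the embodiment; keys are our own labels.
VIEW_WEIGHTS = {
    "jpeg":     (0.50, 0.50),
    "jpeg2000": (0.15, 0.85),
    "gblur":    (0.10, 0.90),
    "wn":       (0.20, 0.80),
    "h264":     (0.10, 0.90),
}

def fuse_view_features(F_L, F_R, distortion):
    """F_q = w_L * F_L + w_R * F_R for the given distortion type."""
    w_L, w_R = VIEW_WEIGHTS[distortion]
    return w_L * np.asarray(F_L, dtype=float) + w_R * np.asarray(F_R, dtype=float)
```

Every weight pair sums to 1, matching the constraint w_L + w_R = 1 of step 4.-8.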
9. Since the feature vector of a distorted stereo image is a high-dimensional vector, a linear decision function must be constructed in the high-dimensional space to realize a nonlinear decision function in the original space; support vector regression (SVR) is a sound method of realizing this nonlinear high-dimensional transformation. The inventive method therefore uses support vector regression to train the feature vectors of all distorted stereo images of the same distortion type in the distorted stereo image set, tests each distorted stereo image of that distortion type with the trained support vector regression model, and computes the objective quality evaluation predicted value of each such image; the objective quality evaluation predicted value of the i-th distorted stereo image in the set is denoted Q_i, Q_i = f(X_i), where f(·) is the function representation form, Q_i = f(X_i) indicates that Q_i is a function of X_i, 1 ≤ i ≤ n', and n' denotes the number of distorted stereo images in the set.
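Step 9. can be sketched with scikit-learn's `SVR`; this is an assumption, since the patent does not name an implementation. Note that sklearn's `gamma` multiplies the squared distance, so it is taken here as the reciprocal of the patent's kernel parameter γ:

```python
import numpy as np
from sklearn.svm import SVR

def train_and_predict(X_train, dmos_train, X_test, gamma):
    """Fit an RBF-kernel SVR on (feature vector, DMOS) pairs of one
    distortion type and predict objective quality for the test images."""
    model = SVR(kernel="rbf", gamma=1.0 / gamma)
    model.fit(np.asarray(X_train, dtype=float), np.asarray(dmos_train, dtype=float))
    return model.predict(np.asarray(X_test, dtype=float))
```

Per the embodiment, a separate model is trained per distortion type, each with its own γ.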
In this specific embodiment, the detailed process of step 9. is:
9.-1. Divide the distorted stereo images of the same distortion type in the distorted stereo image set into 5 mutually disjoint groups of subsets, and arbitrarily select 4 of the groups to form the training sample data set, denoted Ω_q, {X_k, DMOS_k} ∈ Ω_q, where q denotes the number of distorted stereo images in Ω_q, X_k denotes the feature vector of the k-th distorted stereo image in Ω_q, DMOS_k denotes the mean subjective score difference of the k-th distorted stereo image in Ω_q, and 1 ≤ k ≤ q.
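The 5-group partition of step 9.-1 can be sketched as below (an illustrative helper; the patent does not specify how the groups are drawn, so a seeded shuffle is assumed):

```python
import random

def five_fold_split(items, test_fold=0, seed=0):
    """Partition items into 5 disjoint groups; one group is held out for
    testing and the other 4 form the training sample data set."""
    items = list(items)
    random.Random(seed).shuffle(items)
    folds = [items[i::5] for i in range(5)]
    test = folds[test_fold]
    train = [x for i, f in enumerate(folds) if i != test_fold for x in f]
    return train, test
```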
9.-2. Construct the regression function f(X_k) of X_k, f(X_k) = w^T φ(X_k) + b, where f(·) is the function representation form, w is the weight vector, w^T is the transpose of w, b is the bias term, φ(X_k) denotes the nonlinear mapping of the feature vector X_k of the k-th distorted stereo image in Ω_q, and D(X_k, X_l) = exp(-||X_k - X_l||²/γ) is the kernel function in support vector regression, where X_l is the feature vector of the l-th distorted stereo image in Ω_q, γ is the kernel parameter used to reflect the range of the input sample values (the larger the range of the sample values, the larger the value of γ), exp(·) denotes the exponential function with base e, e = 2.71828183, and "|| ||" denotes the Euclidean distance.
In the present embodiment, the γ values for JPEG compression distortion, JPEG2000 compression distortion, Gaussian blur distortion, white noise distortion and H.264 coding distortion are taken as 42, 52, 54, 130 and 116 respectively.
9.-3. Use support vector regression to train the feature vectors of all distorted stereo images in Ω_q so that the error between the regression function values obtained through training and the mean subjective score differences is minimized, and fit the optimal weight vector w_opt and optimal bias term b_opt; denote the combination of the optimal weight vector and optimal bias term as (w_opt, b_opt), (w_opt, b_opt) = arg min_((w,b)∈Ψ) Σ_(k=1)^q (f(X_k) - DMOS_k)², where Ψ denotes the set of all combinations of weight vectors and bias terms over which the feature vectors of all distorted stereo images in Ω_q are trained. Use the obtained w_opt and b_opt to construct the support vector regression training model, denoted f(X_inp) = (w_opt)^T φ(X_inp) + b_opt, where X_inp denotes the input vector of the support vector regression training model, (w_opt)^T is the transpose of w_opt, and φ(X_inp) denotes the nonlinear mapping of X_inp.
9.-4. According to the support vector regression training model, test each distorted stereo image in the remaining 1 group of subsets, predicting the objective quality evaluation predicted value of each distorted stereo image in that subset; the objective quality evaluation predicted value of the j-th distorted stereo image in the subset is denoted Q_j, Q_j = f(X_j) = (w_opt)^T φ(X_j) + b_opt, where X_j denotes the feature vector of the j-th distorted stereo image in the subset and φ(X_j) denotes its nonlinear mapping.
9.-5. Following the procedure of steps 9.-1 to 9.-4, train the distorted stereo images of each distortion type in the distorted stereo image set in turn, obtaining the objective quality evaluation predicted value of every distorted stereo image in the set.
The 12 undistorted stereo images shown in Fig. 2a to Fig. 13b were used to analyse the correlation between the objective image quality evaluation predicted values obtained by the present embodiment and the mean subjective score differences of the 312 distorted stereo images under varying degrees of JPEG compression, JPEG2000 compression, Gaussian blur, white noise and H.264 coding distortion. Two objective parameters commonly used to evaluate image quality assessment methods serve as evaluation indices: the Pearson correlation coefficient under the nonlinear regression condition (Correlation Coefficient, CC) and the Spearman rank-order correlation coefficient (Rank-Order Correlation Coefficient, ROCC); CC reflects the accuracy of the objective model of the distorted stereo images, and ROCC reflects its monotonicity. The objective image quality evaluation predicted values of the distorted stereo images computed by the present embodiment are fitted to the mean subjective score differences by a four-parameter logistic nonlinear fitting; the higher the CC and ROCC values, the better the correlation between the objective evaluation method and the mean subjective score differences. Table 1 lists the correlation between the image quality evaluation predicted values of the distorted stereo images obtained by the present embodiment and the subjective scores; as the data in Table 1 show, this correlation is very high, indicating that the objective evaluation results are highly consistent with human subjective perception and demonstrating the validity of the inventive method.
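The CC and ROCC indices can be computed as follows. This is a plain-NumPy sketch: the four-parameter logistic fitting applied before CC is omitted for brevity, and the rank computation assumes no tied scores.

```python
import numpy as np

def pearson_cc(a, b):
    """Pearson linear correlation coefficient."""
    a = np.asarray(a, dtype=float) - np.mean(a)
    b = np.asarray(b, dtype=float) - np.mean(b)
    return float(np.dot(a, b) / np.sqrt(np.dot(a, a) * np.dot(b, b)))

def spearman_rocc(a, b):
    """Spearman rank-order correlation (no tie handling)."""
    rank = lambda v: np.argsort(np.argsort(np.asarray(v))).astype(float)
    return pearson_cc(rank(a), rank(b))
```

ROCC is 1 for any strictly monotone relation, while CC rewards linear agreement, which is why the logistic fit precedes it in the evaluation protocol.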
Table 2 gives the correlation between the subjective scores and the image quality evaluation predicted values of the distorted stereo images obtained using different feature vectors. As can be seen from Table 2, the evaluation predicted values obtained with only a single feature vector or with two feature vectors already have considerable correlation with the subjective scores, showing that the feature extraction method of the inventive method is effective; combining the feature vectors reflecting image quality and depth perception yields evaluation predicted values with even stronger correlation with the subjective scores, which suffices to show that this method is effective.
Table 1: Correlation between the subjective scores and the image quality evaluation predicted values of the distorted stereo images obtained by the present embodiment
Table 2: Correlation between the subjective scores and the image quality evaluation predicted values of the distorted stereo images obtained using different feature vectors

Claims (4)

1. A stereo image quality objective evaluation method based on visual perception, characterized in that it comprises the following steps:
1. Let S_org be the original undistorted stereo image and S_dis the distorted stereo image to be evaluated; denote the left viewpoint image of S_org as {L_org(x,y)}, the right viewpoint image of S_org as {R_org(x,y)}, the left viewpoint image of S_dis as {L_dis(x,y)} and the right viewpoint image of S_dis as {R_dis(x,y)}, where (x,y) denotes the coordinate position of a pixel in the left and right viewpoint images, 1 ≤ x ≤ W, 1 ≤ y ≤ H, W denotes the width of the left and right viewpoint images, H denotes their height, and L_org(x,y), R_org(x,y), L_dis(x,y) and R_dis(x,y) denote the pixel values at coordinate position (x,y) in {L_org(x,y)}, {R_org(x,y)}, {L_dis(x,y)} and {R_dis(x,y)} respectively;
2. Using the visual masking effect of human vision with respect to background illumination and texture, extract the minimum discernable distorted images of the undistorted left viewpoint image {L_org(x,y)} and of the undistorted right viewpoint image {R_org(x,y)}, denoted {J_L(x,y)} and {J_R(x,y)} respectively, where J_L(x,y) and J_R(x,y) denote the pixel values at coordinate position (x,y) in {J_L(x,y)} and {J_R(x,y)} respectively;
3. Using a region detection algorithm, obtain the block type of each 8×8 sub-block of the undistorted left viewpoint image {L_org(x,y)}, the distorted left viewpoint image {L_dis(x,y)}, the undistorted right viewpoint image {R_org(x,y)} and the distorted right viewpoint image {R_dis(x,y)}, denoted p, where p ∈ {1, 2, 3, 4}, p = 1 denotes a strong edge block, p = 2 a weak edge block, p = 3 a smooth block and p = 4 a texture block;
4. According to the minimum discernable distorted images {J_L(x,y)} and {J_R(x,y)} of the undistorted left and right viewpoint images, compute the spatial noise intensity used to reflect image quality and the space structure intensity used to reflect image quality of the 8×8 sub-blocks of the various block types in the distorted left viewpoint image {L_dis(x,y)} and in the distorted right viewpoint image {R_dis(x,y)}, thereby obtaining the feature vector of {L_dis(x,y)} and the feature vector of {R_dis(x,y)} used to reflect image quality; then linearly weight these two feature vectors to obtain the feature vector of S_dis used to reflect image quality, denoted F_q;
The detailed process of step 4. is:
4.-1. Compute the spatial noise intensity used to reflect image quality of all 8×8 sub-blocks of block type k in the distorted left viewpoint image {L_dis(x,y)}, denoted {fq_k(x2,y2)}. For the pixel at coordinate position (x2,y2) within the 8×8 sub-blocks of block type k in {L_dis(x,y)}, its spatial noise intensity used to reflect image quality is denoted fq_k(x2,y2): fq_k(x2,y2) = (1/N_k) Σ_(x3,y3) min(max(|L_org(x3,y3) - L_dis(x3,y3)| - J_L(x3,y3), 0), ST_k)², where k ∈ {p | 1 ≤ p ≤ 4}, 1 ≤ x2 ≤ 8, 1 ≤ y2 ≤ 8, N_k denotes the number of 8×8 sub-blocks of block type k in {L_dis(x,y)}, ST_k is the saturation threshold describing error perception, max(·) is the maximum function, min(·) is the minimum function, (x3,y3) denotes the coordinate position in {L_org(x,y)} or {J_L(x,y)} of the pixel at position (x2,y2) in a sub-block of block type k in {L_dis(x,y)}, 1 ≤ x3 ≤ W, 1 ≤ y3 ≤ H, L_org(x3,y3), L_dis(x3,y3) and J_L(x3,y3) denote the pixel values at coordinate position (x3,y3) in {L_org(x,y)}, {L_dis(x,y)} and {J_L(x,y)} respectively, and "| |" is the absolute-value operator;
4.-2. Express the spatial noise intensities used to reflect image quality of the 8×8 sub-blocks of the various block types in {L_dis(x,y)} as the set {fq_k(x2,y2) | 1 ≤ k ≤ 4}, then arrange all elements of the set in order to obtain the first feature vector, denoted F_1, whose dimension is 256;
4.-3. Apply singular value decomposition to each 8×8 sub-block of the undistorted left viewpoint image {L_org(x,y)} and of the distorted left viewpoint image {L_dis(x,y)} to obtain the singular value vector corresponding to each 8×8 sub-block; denote the singular value vector of the l-th 8×8 sub-block of {L_org(x,y)} as S_org^l and the singular value vector of the l-th 8×8 sub-block of {L_dis(x,y)} as S_dis^l, where the dimension of each singular value vector is 8 and 1 ≤ l ≤ (W×H)/(8×8);
4.-4. Compute the space structure intensity used to reflect image quality of all 8×8 sub-blocks of block type k in the distorted left viewpoint image {L_dis(x,y)}, denoted fs_k, from the singular value vectors of corresponding sub-blocks, where l' denotes the sequence number in {L_org(x,y)} or {J_L(x,y)} of an 8×8 sub-block of block type k in {L_dis(x,y)};
4.-5. Express the space structure intensities used to reflect image quality of the 8×8 sub-blocks of the various block types in {L_dis(x,y)} as the set {fs_k | 1 ≤ k ≤ 4}, then arrange all elements of the set in order to obtain the second feature vector, denoted F_2, whose dimension is 32;
4.-6. Concatenate the first feature vector F_1 and the second feature vector F_2 into a new feature vector, taken as the feature vector of the distorted left viewpoint image {L_dis(x,y)} used to reflect image quality and denoted F_L, F_L = [F_1, F_2], where the dimension of F_L is 288, "[ ]" is the vector representation symbol, and [F_1, F_2] denotes connecting F_1 and F_2 to form one new feature vector;
4.-7. Apply the same operations as steps 4.-1 to 4.-6 to the distorted right viewpoint image {R_dis(x,y)} to obtain its feature vector used to reflect image quality, denoted F_R, whose dimension is 288;
4.-8. Linearly weight the feature vector F_L of the distorted left viewpoint image {L_dis(x,y)} and the feature vector F_R of the distorted right viewpoint image {R_dis(x,y)} to obtain the feature vector of S_dis used to reflect image quality, denoted F_q, F_q = w_L × F_L + w_R × F_R, where w_L denotes the weight proportion of the distorted left viewpoint image {L_dis(x,y)}, w_R denotes the weight proportion of the distorted right viewpoint image {R_dis(x,y)}, and w_L + w_R = 1;
5. According to the minimum discernable distorted images {J_L(x,y)} and {J_R(x,y)} of the undistorted left and right viewpoint images, compute the spatial noise intensity used to reflect depth perception and the space structure intensity used to reflect depth perception of the 8×8 sub-blocks of the various block types in the absolute difference image of the distorted left viewpoint image {L_dis(x,y)} and the distorted right viewpoint image {R_dis(x,y)}, obtaining the feature vector of S_dis used to reflect depth perception, denoted F_s;
The detailed process of step 5. is:
5.-1. Compute the absolute difference image of the undistorted left viewpoint image {L_org(x,y)} and the undistorted right viewpoint image {R_org(x,y)}, the absolute difference image of the distorted left viewpoint image {L_dis(x,y)} and the distorted right viewpoint image {R_dis(x,y)}, and the absolute difference image of the minimum discernable distorted images {J_L(x,y)} and {J_R(x,y)}, denoted {D_org(x,y)}, {D_dis(x,y)} and {ΔJ(x,y)} respectively: D_org(x,y) = |L_org(x,y) - R_org(x,y)|, D_dis(x,y) = |L_dis(x,y) - R_dis(x,y)|, ΔJ(x,y) = |J_L(x,y) - J_R(x,y)|, where D_org(x,y), D_dis(x,y) and ΔJ(x,y) denote the pixel values at coordinate position (x,y) in {D_org(x,y)}, {D_dis(x,y)} and {ΔJ(x,y)} respectively, and "| |" is the absolute-value operator;
5.-2. Adopt the same operation as step 3. to obtain the block type of each 8×8 sub-block in {D_org(x,y)} and in {D_dis(x,y)}, respectively;
5.-3. Compute the spatial noise intensity used to reflect depth perception of all 8×8 sub-blocks of block type k in {D_dis(x,y)}, denoted {fd_k(x2,y2)}. For the pixel at coordinate position (x2,y2) within the 8×8 sub-blocks of block type k in {D_dis(x,y)}, its spatial noise intensity used to reflect depth perception is denoted fd_k(x2,y2): fd_k(x2,y2) = (1/M_k) Σ_(x4,y4) min(max(|D_org(x4,y4) - D_dis(x4,y4)| - ΔJ(x4,y4), 0), ST_k)², where 1 ≤ x2 ≤ 8, 1 ≤ y2 ≤ 8, M_k denotes the number of 8×8 sub-blocks of block type k in {D_dis(x,y)}, ST_k is the saturation threshold describing error perception, (x4,y4) denotes the coordinate position in {D_org(x,y)} or {ΔJ(x,y)} of the pixel at position (x2,y2) in a sub-block of block type k in {D_dis(x,y)}, 1 ≤ x4 ≤ W, 1 ≤ y4 ≤ H, and D_org(x4,y4), D_dis(x4,y4) and ΔJ(x4,y4) denote the pixel values at coordinate position (x4,y4) in {D_org(x,y)}, {D_dis(x,y)} and {ΔJ(x,y)} respectively;
5.-4. Express the spatial noise intensities used to reflect depth perception of the 8×8 sub-blocks of the various block types in {D_dis(x,y)} as the set {fd_k(x2,y2) | 1 ≤ k ≤ 4}, then arrange all elements of the set in order to obtain the third feature vector, denoted F_3, whose dimension is 256;
5.-5. Apply singular value decomposition to each 8×8 sub-block of {D_org(x,y)} and of {D_dis(x,y)} to obtain the singular value vector corresponding to each 8×8 sub-block; denote the singular value vector of the l-th 8×8 sub-block of {D_org(x,y)} as S_org^l and the singular value vector of the l-th 8×8 sub-block of {D_dis(x,y)} as S_dis^l, where the dimension of each singular value vector is 8 and 1 ≤ l ≤ (W×H)/(8×8);
5.-6. Compute the space structure intensity used to reflect depth perception of all 8×8 sub-blocks of block type k in {D_dis(x,y)}, denoted fs_k, from the singular value vectors of corresponding sub-blocks, where l'' denotes the sequence number in {D_org(x,y)} or {ΔJ(x,y)} of an 8×8 sub-block of block type k in {D_dis(x,y)};
5.-7. Express the space structure intensities used to reflect depth perception of the 8×8 sub-blocks of the various block types in {D_dis(x,y)} as the set {fs_k | 1 ≤ k ≤ 4}, then arrange all elements of the set in order to obtain the fourth feature vector, denoted F_4, whose dimension is 32;
5.-8. Concatenate the third feature vector F_3 and the fourth feature vector F_4 into a new feature vector, taken as the feature vector of S_dis used to reflect depth perception and denoted F_s, F_s = [F_3, F_4], where the dimension of F_s is 288, "[ ]" is the vector representation symbol, and [F_3, F_4] denotes connecting F_3 and F_4 to form one new feature vector;
6. Concatenate the feature vector F_q of S_dis used to reflect image quality and the feature vector F_s used to reflect depth perception into a new feature vector, taken as the feature vector of S_dis and denoted X, X = [F_q, F_s], where "[ ]" is the vector representation symbol and [F_q, F_s] denotes connecting F_q and F_s to form one new feature vector;
7. Take n undistorted stereo images and establish their distorted stereo image set under different distortion levels of different distortion types; this set comprises several distorted stereo images. Using a subjective quality assessment method, obtain the mean subjective score difference of each distorted stereo image in the set, denoted DMOS, DMOS = 100 - MOS, where MOS denotes the mean opinion score, DMOS ∈ [0, 100], and n ≥ 1;
8. Using the same method as for computing the feature vector X of S_dis, compute the feature vector of each distorted stereo image in the distorted stereo image set; denote the feature vector of the i-th distorted stereo image in the set as X_i, where 1 ≤ i ≤ n' and n' denotes the number of distorted stereo images in the set;
9. Use support vector regression to train the feature vectors of all distorted stereo images of the same distortion type in the distorted stereo image set, test each distorted stereo image of that distortion type with the trained support vector regression model, and compute the objective quality evaluation predicted value of each distorted stereo image of that distortion type in the set; the objective quality evaluation predicted value of the i-th distorted stereo image in the set is denoted Q_i, Q_i = f(X_i), where f(·) is the function representation form, Q_i = f(X_i) indicates that Q_i is a function of X_i, 1 ≤ i ≤ n', and n' denotes the number of distorted stereo images in the set;
The detailed process of step 9. is:
9.-1. Partition the distorted stereo images of the same distortion type in the set into 5 mutually disjoint groups; arbitrarily select 4 of the groups to form the training-sample data set, denoted Ω_q, {X_k, DMOS_k} ∈ Ω_q, where q is the number of distorted stereo images in Ω_q, X_k is the feature vector of the k-th distorted stereo image in Ω_q, DMOS_k is the difference mean opinion score of the k-th distorted stereo image in Ω_q, and 1 ≤ k ≤ q;
9.-2. Construct the regression function f(X_k) of X_k: f(X_k) = w^T·φ(X_k) + b, where f(·) is function notation, w is the weight vector, w^T is the transpose of w, b is the bias term, and φ(X_k) is the kernel-induced linear function of the feature vector X_k of the k-th distorted stereo image in Ω_q, built from D(X_k, X_l), the kernel function of the support vector regression, a Gaussian function of the Euclidean distance ||X_k − X_l||; X_l is the feature vector of the l-th distorted stereo image in Ω_q; γ is the kernel parameter, which reflects the range of the input sample values (the larger the range of the sample values, the larger γ); exp(·) denotes the exponential function with base e, e = 2.71828183; and "|| ||" is the Euclidean-distance symbol;
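The regression function and kernel of step 9.-2 can be sketched as follows. The exact kernel expression in the granted claim is rendered as an equation image and not reproduced in this text, so the squared-exponential form with γ² in the denominator is an assumption, as is expanding φ(X) into per-training-sample kernel evaluations.

```python
import numpy as np

def rbf_kernel(x_k, x_l, gamma):
    """Gaussian kernel D(X_k, X_l) of the Euclidean distance between two
    feature vectors; gamma is the kernel parameter reflecting the spread
    of the input sample values. The exp(-d^2 / gamma^2) form is assumed."""
    d = np.linalg.norm(np.asarray(x_k, float) - np.asarray(x_l, float))
    return np.exp(-(d ** 2) / (gamma ** 2))

def regression_value(x, train_X, w, b, gamma):
    """Regression function of step 9.-2: f(X) = w^T phi(X) + b, where
    phi(X) collects the kernel evaluations against the training vectors."""
    phi = np.array([rbf_kernel(x, x_l, gamma) for x_l in train_X])
    return float(w @ phi + b)
```

With a zero weight vector the regression value reduces to the bias term, which is a quick sanity check on the construction.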
9.-3. Use support vector regression to train on the feature vectors of all the distorted stereo images in Ω_q, minimizing the error between the regression values obtained by training and the difference mean opinion scores; the fit yields the optimal weight vector w_opt and the optimal bias term b_opt, whose combination is denoted (w_opt, b_opt): (w_opt, b_opt) = arg min_{(w,b)∈Ψ} Σ_{k=1}^{q} (f(X_k) − DMOS_k)². Use the optimal weight vector w_opt and optimal bias term b_opt so obtained to construct the support vector regression training model, denoted f(X_inp) = (w_opt)^T·φ(X_inp) + b_opt, where Ψ is the set of all combinations of weight vector and bias term arising in the training on the feature vectors of the distorted stereo images in Ω_q, arg min denotes the argument minimizing the squared error, X_inp is the input vector of the support vector regression training model, (w_opt)^T is the transpose of w_opt, and φ(X_inp) is the linear function of the input vector X_inp;
9.-4. Using the support vector regression training model, test each distorted stereo image in the remaining group; prediction gives the objective quality-evaluation predicted value of every distorted stereo image in that group. Denote the predicted value of the j-th distorted stereo image in the group as Q_j, Q_j = f(X_j) = (w_opt)^T·φ(X_j) + b_opt, where X_j is the feature vector of the j-th distorted stereo image in the group and φ(X_j) is the linear function of X_j;
9.-5. Following the process of steps 9.-1 to 9.-4, train on the distorted stereo images of each distortion type in the set in turn, obtaining the objective quality-evaluation predicted value of every distorted stereo image in the distorted-stereo-image set.
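The five-group training/testing rotation of steps 9.-1 to 9.-5 can be sketched as below. A regularized kernel regressor stands in for the support vector regression of the claim, so the closed-form `solve`, the `gamma` and `lam` values, and the random grouping are illustrative assumptions rather than the patented procedure.

```python
import numpy as np

def kernel_matrix(A, B, gamma):
    """Pairwise Gaussian kernel between the rows of A and the rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / gamma ** 2)

def fit_predict_fold(X_train, y_train, X_test, gamma=1.0, lam=1e-3):
    """Fit a regularized kernel regressor on one 4/5 training split and
    predict DMOS for the held-out fifth (stand-in for the SVR of step 9)."""
    K = kernel_matrix(X_train, X_train, gamma)
    alpha = np.linalg.solve(K + lam * np.eye(len(K)), y_train)  # dual weights
    return kernel_matrix(X_test, X_train, gamma) @ alpha

def five_fold_predictions(X, dmos, gamma=1.0, rng=None):
    """Steps 9.-1 to 9.-5: split one distortion type into 5 disjoint
    groups, train on 4 of them, predict the remaining one, and rotate."""
    rng = np.random.default_rng(rng)
    idx = rng.permutation(len(X))
    folds = np.array_split(idx, 5)
    pred = np.empty(len(X))
    for i in range(5):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(5) if j != i])
        pred[test] = fit_predict_fold(X[train], dmos[train], X[test], gamma)
    return pred
```

Because every image lands in exactly one held-out group, each predicted value comes from a model that never saw that image during training, matching the train/test separation the claim describes.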
2. The visual-perception-based objective stereo image quality evaluation method according to claim 1, characterized in that the detailed process of step 2. is:
2.-1. Compute the set of visual thresholds of the background-luminance visual masking effect of the undistorted left viewpoint image {L_org(x,y)}, denoted {T_l(x,y)}, where T_l(x,y) is the visual threshold of the background-luminance masking effect of the pixel at coordinate position (x,y) in {L_org(x,y)}, computed as a function of the average luminance of all pixels in a 5×5 window centred on the pixel at (x,y) in {L_org(x,y)};
2.-2. Compute the set of visual thresholds of the texture visual masking effect of the undistorted left viewpoint image {L_org(x,y)}, denoted {T_t(x,y)}, T_t(x,y) = η × G(x,y) × W_e(x,y), where T_t(x,y) is the visual threshold of the texture masking effect of the pixel at coordinate position (x,y) in {L_org(x,y)}, η is a control factor greater than 0, G(x,y) is the maximum weighted mean obtained by directional high-pass filtering of the pixel at (x,y) in {L_org(x,y)}, and W_e(x,y) is the edge weight obtained by Gaussian low-pass filtering of the pixel at (x,y) in the edge image of {L_org(x,y)};
2.-3. Merge the visual-threshold set {T_l(x,y)} of the background-luminance masking effect and the visual-threshold set {T_t(x,y)} of the texture masking effect of the undistorted left viewpoint image {L_org(x,y)} to obtain its just noticeable distortion image, denoted {J_L(x,y)}: J_L(x,y) = T_l(x,y) + T_t(x,y) − C_{l,t} × min{T_l(x,y), T_t(x,y)}, where C_{l,t} is a parameter controlling the overlap of the background-luminance and texture masking effects, 0 < C_{l,t} < 1, and min{} takes the minimum;
2.-4. Apply the same operations as steps 2.-1 to 2.-3 to obtain the just noticeable distortion image of the undistorted right viewpoint image {R_org(x,y)}, denoted {J_R(x,y)}.
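A minimal sketch of steps 2.-1 to 2.-3: the 5×5 background-luminance average and the threshold-merging rule J_L = T_l + T_t − C_{l,t}·min(T_l, T_t). The mapping from background luminance to T_l and the texture threshold T_t are left as inputs here, since their exact formulas are given only as equation images in the source.

```python
import numpy as np

def background_luminance(img):
    """Mean luminance over a 5x5 window centred on each pixel (the input
    to the luminance-masking threshold of step 2.-1); edges are padded
    by replication."""
    pad = np.pad(img.astype(float), 2, mode='edge')
    out = np.zeros(img.shape, dtype=float)
    for dy in range(5):
        for dx in range(5):
            out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / 25.0

def jnd_map(T_l, T_t, C_lt=0.5):
    """Step 2.-3: merge luminance- and texture-masking thresholds into
    the just noticeable distortion image; 0 < C_lt < 1 controls the
    overlap of the two masking effects (0.5 is an assumed value)."""
    return T_l + T_t - C_lt * np.minimum(T_l, T_t)
```

On a uniform image the window average simply reproduces the pixel value, which makes the padding behaviour easy to verify.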
3. The visual-perception-based objective stereo image quality evaluation method according to claim 1 or 2, characterized in that the detailed process of the region detection algorithm in step 3. is:
3.-1. Divide the undistorted left viewpoint image {L_org(x,y)} and the distorted left viewpoint image {L_dis(x,y)} each into non-overlapping 8×8 sub-blocks. Define the l-th 8×8 sub-block of {L_org(x,y)} as the current first sub-block, denoted {f_l^o(x_2,y_2)}, and the l-th 8×8 sub-block of {L_dis(x,y)} as the current second sub-block, denoted {f_l^d(x_2,y_2)}, where l ranges over the non-overlapping 8×8 sub-blocks, (x_2,y_2) is the coordinate position of a pixel within the current first and second sub-blocks, 1 ≤ x_2 ≤ 8, 1 ≤ y_2 ≤ 8, f_l^o(x_2,y_2) is the pixel value at (x_2,y_2) in the current first sub-block, and f_l^d(x_2,y_2) is the pixel value at (x_2,y_2) in the current second sub-block;
3.-2. Compute the gradient magnitude of every pixel in the current first sub-block {f_l^o(x_2,y_2)} and the current second sub-block {f_l^d(x_2,y_2)}. For the pixel at coordinate position (x_2′,y_2′) in the current first sub-block, denote its gradient magnitude as P_o(x_2′,y_2′), P_o(x_2′,y_2′) = |G_ox(x_2′,y_2′)| + |G_oy(x_2′,y_2′)|; for the pixel at (x_2′,y_2′) in the current second sub-block, denote its gradient magnitude as P_d(x_2′,y_2′), P_d(x_2′,y_2′) = |G_dx(x_2′,y_2′)| + |G_dy(x_2′,y_2′)|, where 1 ≤ x_2′ ≤ 8, 1 ≤ y_2′ ≤ 8, G_ox(x_2′,y_2′) and G_oy(x_2′,y_2′) are the horizontal and vertical gradient values of the pixel at (x_2′,y_2′) in the current first sub-block, G_dx(x_2′,y_2′) and G_dy(x_2′,y_2′) are the horizontal and vertical gradient values of the pixel at (x_2′,y_2′) in the current second sub-block, and "| |" is the absolute-value symbol;
3.-3. Find the maximum gradient magnitude over all pixels of the current first sub-block {f_l^o(x_2,y_2)}, denoted G_max, then compute the first and second gradient thresholds from G_max, denoted T_1 and T_2 respectively: T_1 = 0.12 × G_max, T_2 = 0.06 × G_max;
3.-4. For the pixel at coordinate position (x_2′,y_2′) in the current first sub-block {f_l^o(x_2,y_2)} and the pixel at (x_2′,y_2′) in the current second sub-block {f_l^d(x_2,y_2)}, judge whether P_o(x_2′,y_2′) > T_1 and P_d(x_2′,y_2′) > T_1 both hold; if so, classify both pixels as strong edge region and set Num_1 = Num_1 + 1, then go to step 3.-8; otherwise go to step 3.-5; the initial value of Num_1 is 0;
3.-5. Judge whether P_o(x_2′,y_2′) > T_1 and P_d(x_2′,y_2′) ≤ T_1, or P_d(x_2′,y_2′) > T_1 and P_o(x_2′,y_2′) ≤ T_1, holds; if so, classify both pixels as weak edge region and set Num_2 = Num_2 + 1, then go to step 3.-8; otherwise go to step 3.-6; the initial value of Num_2 is 0;
3.-6. Judge whether P_o(x_2′,y_2′) < T_2 and P_d(x_2′,y_2′) < T_1 both hold; if so, classify both pixels as smooth region and set Num_3 = Num_3 + 1, then go to step 3.-8; otherwise go to step 3.-7; the initial value of Num_3 is 0;
3.-7. Classify the pixel at (x_2′,y_2′) in the current first sub-block and the pixel at (x_2′,y_2′) in the current second sub-block as texture region and set Num_4 = Num_4 + 1; the initial value of Num_4 is 0;
3.-8. Return to step 3.-4 to process the remaining pixels of the current first and second sub-blocks, until all 8×8 pixels of both sub-blocks have been processed;
3.-9. Take the region type corresponding to the maximum of Num_1, Num_2, Num_3 and Num_4 as the block type of the current first and second sub-blocks, denoted p, where p ∈ {1, 2, 3, 4}: p = 1 denotes a strong edge block, p = 2 a weak edge block, p = 3 a smooth block, and p = 4 a texture block;
3.-10. Let l″ = l + 1 and l = l″; take the next 8×8 sub-block of {L_org(x,y)} as the current first sub-block and the next 8×8 sub-block of {L_dis(x,y)} as the current second sub-block, and return to step 3.-2, until all the non-overlapping 8×8 sub-blocks of {L_org(x,y)} and {L_dis(x,y)} have been processed, which yields the block type of every 8×8 sub-block in {L_org(x,y)} and {L_dis(x,y)}; the initial value of l″ is 0;
3.-11. Apply the same operations as steps 3.-1 to 3.-10 to obtain the block types of all 8×8 sub-blocks of the undistorted right viewpoint image {R_org(x,y)} and the distorted right viewpoint image {R_dis(x,y)}.
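The per-pixel voting of steps 3.-2 to 3.-9 can be sketched as follows; simple first-order differences stand in for the horizontal and vertical gradient operators, which the claim does not specify.

```python
import numpy as np

def gradient_magnitudes(block):
    """|horizontal| + |vertical| gradient per pixel (step 3.-2); plain
    forward differences are an assumed stand-in for the real operator."""
    b = block.astype(float)
    gx = np.zeros_like(b)
    gy = np.zeros_like(b)
    gx[:, 1:] = b[:, 1:] - b[:, :-1]
    gy[1:, :] = b[1:, :] - b[:-1, :]
    return np.abs(gx) + np.abs(gy)

def classify_block(org_block, dis_block):
    """Steps 3.-3 to 3.-9: vote each pixel into strong-edge / weak-edge /
    smooth / texture and return the majority block type p in {1,2,3,4}."""
    Po = gradient_magnitudes(org_block)
    Pd = gradient_magnitudes(dis_block)
    T1 = 0.12 * Po.max()
    T2 = 0.06 * Po.max()
    num = [0, 0, 0, 0]
    for po, pd in zip(Po.ravel(), Pd.ravel()):
        if po > T1 and pd > T1:
            num[0] += 1              # strong edge in both images
        elif (po > T1) != (pd > T1):
            num[1] += 1              # weak edge: edge in exactly one image
        elif po < T2 and pd < T1:
            num[2] += 1              # smooth
        else:
            num[3] += 1              # texture
    return 1 + int(np.argmax(num))
```

A block of alternating columns is edge-dominated and votes strong edge (p = 1), while a block with a single step edge is dominated by flat pixels and votes smooth (p = 3).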
4. The visual-perception-based objective stereo image quality evaluation method according to claim 1, characterized in that, when computing in steps 4. and 8. the feature vector used to reflect image quality: for stereo images with JPEG compression distortion, take w_L = 0.50 and w_R = 0.50; for JPEG2000 compression distortion, take w_L = 0.15 and w_R = 0.85; for Gaussian blur distortion, take w_L = 0.10 and w_R = 0.90; for white-noise distortion, take w_L = 0.20 and w_R = 0.80; and for H.264 coding distortion, take w_L = 0.10 and w_R = 0.90.
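The per-distortion-type weights of claim 4 amount to a small lookup table that blends the left- and right-view contributions; the dictionary keys below are illustrative names, not identifiers from the patent.

```python
# Left/right view weights (w_L, w_R) per distortion type, from claim 4.
# Key names are hypothetical labels for the five distortion types.
WEIGHTS = {
    'jpeg':     (0.50, 0.50),   # JPEG compression
    'jpeg2000': (0.15, 0.85),   # JPEG2000 compression
    'gblur':    (0.10, 0.90),   # Gaussian blur
    'wn':       (0.20, 0.80),   # white noise
    'h264':     (0.10, 0.90),   # H.264 coding
}

def combine_views(q_left, q_right, distortion):
    """Blend left- and right-view quality features with the per-type
    weights of claim 4."""
    w_l, w_r = WEIGHTS[distortion]
    return w_l * q_left + w_r * q_right
```

The asymmetric weights for every type except JPEG reflect the claim's heavier reliance on the right view for those distortions.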
CN 201110284944 2011-09-23 2011-09-23 Stereo image quality objective evaluation method based on visual perception Active CN102333233B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201110284944 CN102333233B (en) 2011-09-23 2011-09-23 Stereo image quality objective evaluation method based on visual perception


Publications (2)

Publication Number Publication Date
CN102333233A CN102333233A (en) 2012-01-25
CN102333233B true CN102333233B (en) 2013-11-06

Family

ID=45484815

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201110284944 Active CN102333233B (en) 2011-09-23 2011-09-23 Stereo image quality objective evaluation method based on visual perception

Country Status (1)

Country Link
CN (1) CN102333233B (en)

Families Citing this family (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102595185B (en) * 2012-02-27 2014-06-25 宁波大学 Stereo image quality objective evaluation method
CN102663747B (en) * 2012-03-23 2014-08-27 宁波大学 Stereo image objectivity quality evaluation method based on visual perception
CN102737380B (en) * 2012-06-05 2014-12-10 宁波大学 Stereo image quality objective evaluation method based on gradient structure tensor
CN102769749B (en) * 2012-06-29 2015-03-18 宁波大学 Post-processing method for depth image
CN102843572B (en) * 2012-06-29 2014-11-05 宁波大学 Phase-based stereo image quality objective evaluation method
CN103475896A (en) * 2013-07-24 2013-12-25 同济大学 Interactive video and audio experience-quality assessment platform and method based on Qos
CN103442248B (en) * 2013-08-22 2015-08-12 北京大学 A kind of image compression quality appraisal procedure based on binocular stereo vision
CN103475897B (en) * 2013-09-09 2015-03-11 宁波大学 Adaptive image quality evaluation method based on distortion type judgment
CN103517065B (en) * 2013-09-09 2015-04-08 宁波大学 Method for objectively evaluating quality of degraded reference three-dimensional picture
CN103841411B (en) * 2014-02-26 2015-10-28 宁波大学 A kind of stereo image quality evaluation method based on binocular information processing
CN103903259A (en) * 2014-03-20 2014-07-02 宁波大学 Objective three-dimensional image quality evaluation method based on structure and texture separation
CN104933696B (en) * 2014-03-21 2017-12-29 联想(北京)有限公司 Determine the method and electronic equipment of light conditions
CN105791849B (en) * 2014-12-25 2019-08-06 中兴通讯股份有限公司 Picture compression method and device
CN105282543B (en) * 2015-10-26 2017-03-22 浙江科技学院 Total blindness three-dimensional image quality objective evaluation method based on three-dimensional visual perception
CN105430397B (en) * 2015-11-20 2018-04-17 清华大学深圳研究生院 A kind of 3D rendering Quality of experience Forecasting Methodology and device
CN105635727B (en) * 2015-12-29 2017-06-16 北京大学 Evaluation method and device based on the image subjective quality for comparing in pairs
CN105894522B (en) * 2016-04-28 2018-05-25 宁波大学 A kind of more distortion objective evaluation method for quality of stereo images
CN105828061B (en) * 2016-05-11 2017-09-29 宁波大学 A kind of virtual view quality evaluating method of view-based access control model masking effect
CN106097327B (en) * 2016-06-06 2018-11-02 宁波大学 In conjunction with the objective evaluation method for quality of stereo images of manifold feature and binocular characteristic
CN106412569B (en) * 2016-09-28 2017-12-15 宁波大学 A kind of selection of feature based without referring to more distortion stereo image quality evaluation methods
CN107396095B (en) * 2017-08-28 2019-01-15 方玉明 A kind of no reference three-dimensional image quality evaluation method
CN107438180B (en) * 2017-08-28 2019-02-22 中国科学院深圳先进技术研究院 The depth perception quality evaluating method of 3 D video
CN112508856B (en) * 2020-11-16 2022-09-09 北京理工大学 Distortion type detection method for mixed distortion image
CN112770105B (en) * 2020-12-07 2022-06-03 宁波大学 Repositioning stereo image quality evaluation method based on structural features
CN115187519B (en) * 2022-06-21 2023-04-07 上海市计量测试技术研究院 Image quality evaluation method, system and computer readable medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009075245A1 (en) * 2007-12-12 2009-06-18 Nec Corporation Image quality evaluation system, and device, method and program used for the evaluation system
CN101562758A (en) * 2009-04-16 2009-10-21 浙江大学 Method for objectively evaluating image quality based on region weight and visual characteristics of human eyes
CN102075786A (en) * 2011-01-19 2011-05-25 宁波大学 Method for objectively evaluating image quality
CN102142145A (en) * 2011-03-22 2011-08-03 宁波大学 Image quality objective evaluation method based on human eye visual characteristics


Non-Patent Citations (10)

* Cited by examiner, † Cited by third party
Title
Wang Ahong, "An objective stereo image quality evaluation method based on human visual characteristics", Opto-Electronic Engineering, 15 Jan. 2011, vol. 38, no. 1, pp. 134-141 *
Zhou Wujie et al., "Research on stereo image quality evaluation methods", International Conference of China Communication and Information Technology, 2010, pp. 273-278 *
Wang Lei et al., "Image quality evaluation method based on SVM and GA", Computer Engineering, 20 May 2011, vol. 37, no. 10, pp. 195-197 *
Yang Jiachen, "An objective stereo image quality evaluation method based on human visual characteristics", Journal of Tianjin University, 2000, vol. 42, no. 7, pp. 622-627 *
Wang Zhengyou et al., "No-reference digital image quality assessment based on masking effects", Journal of Computer Applications, 2006, vol. 26, no. 12, pp. 2838-2840 *

Also Published As

Publication number Publication date
CN102333233A (en) 2012-01-25

Similar Documents

Publication Publication Date Title
CN102333233B (en) Stereo image quality objective evaluation method based on visual perception
CN102209257B (en) Stereo image quality objective evaluation method
CN102547368B (en) Objective evaluation method for quality of stereo images
CN102708567B (en) Visual perception-based three-dimensional image quality objective evaluation method
CN104036501A (en) Three-dimensional image quality objective evaluation method based on sparse representation
CN104954778B (en) Objective stereo image quality assessment method based on perception feature set
CN104811691B (en) A kind of stereoscopic video quality method for objectively evaluating based on wavelet transformation
CN103338379B (en) Stereoscopic video objective quality evaluation method based on machine learning
CN103136748B (en) The objective evaluation method for quality of stereo images of a kind of feature based figure
CN102843572B (en) Phase-based stereo image quality objective evaluation method
CN105654465B (en) A kind of stereo image quality evaluation method filtered between the viewpoint using parallax compensation
CN104036502A (en) No-reference fuzzy distorted stereo image quality evaluation method
CN104394403A (en) A compression-distortion-oriented stereoscopic video quality objective evaluating method
CN104408716A (en) Three-dimensional image quality objective evaluation method based on visual fidelity
CN102722888A (en) Stereoscopic image objective quality evaluation method based on physiological and psychological stereoscopic vision
CN104240248A (en) Method for objectively evaluating quality of three-dimensional image without reference
CN104361583A (en) Objective quality evaluation method of asymmetrically distorted stereo images
CN104346809A (en) Image quality evaluation method for image quality dataset adopting high dynamic range
CN106412571A (en) Video quality evaluation method based on gradient similarity standard deviation
CN103108209B (en) Stereo image objective quality evaluation method based on integration of visual threshold value and passage
CN102737380B (en) Stereo image quality objective evaluation method based on gradient structure tensor
CN102999911B (en) Three-dimensional image quality objective evaluation method based on energy diagrams
CN105898279B (en) A kind of objective evaluation method for quality of stereo images
CN103369348B (en) Three-dimensional image quality objective evaluation method based on regional importance classification
CN102708568A (en) Stereoscopic image objective quality evaluation method on basis of structural distortion

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20191219

Address after: Room 1,020, Nanxun Science and Technology Pioneering Park, No. 666 Chaoyang Road, Nanxun District, Huzhou City, Zhejiang Province, 313000

Patentee after: Huzhou You Yan Intellectual Property Service Co., Ltd.

Address before: 315211 Zhejiang Province, Ningbo Jiangbei District Fenghua Road No. 818

Patentee before: Ningbo University

TR01 Transfer of patent right

Effective date of registration: 20201123

Address after: 226500 Jiangsu city of Nantong province Rugao City Lin Zi Zhen Hong Wei River Road No. 8

Patentee after: NANTONG OUKE NC EQUIPMENT Co.,Ltd.

Address before: Room 1,020, Nanxun Science and Technology Pioneering Park, No. 666 Chaoyang Road, Nanxun District, Huzhou City, Zhejiang Province, 313000

Patentee before: Huzhou You Yan Intellectual Property Service Co.,Ltd.