WO2012000137A1 - Method for measuring quality of a video with at least two different views, and corresponding device - Google Patents

Method for measuring quality of a video with at least two different views, and corresponding device

Info

Publication number
WO2012000137A1
WO2012000137A1 (PCT/CN2010/000999)
Authority
WO
WIPO (PCT)
Prior art keywords
quality
view
video
measure
value
Prior art date
Application number
PCT/CN2010/000999
Other languages
French (fr)
Inventor
Xiao Dong Gu
De Bing Liu
Feng Xu
Zhi Bo Chen
Original Assignee
Thomson Broadband R&D (Beijing) Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Thomson Broadband R&D (Beijing) Co., Ltd. filed Critical Thomson Broadband R&D (Beijing) Co., Ltd.
Priority to PCT/CN2010/000999 priority Critical patent/WO2012000137A1/en
Publication of WO2012000137A1 publication Critical patent/WO2012000137A1/en


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N17/00Diagnosis, testing or measuring for television systems or their details
    • H04N17/004Diagnosis, testing or measuring for television systems or their details for digital television systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/98Detection or correction of errors, e.g. by rescanning the pattern or by human intervention; Evaluation of the quality of the acquired patterns
    • G06V10/993Evaluation of the quality of the acquired pattern
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/64Three-dimensional objects
    • G06V20/647Three-dimensional objects by matching two-dimensional images to three-dimensional objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/19Recognition using electronic means
    • G06V30/192Recognition using electronic means using simultaneous comparisons or correlations of the image signals with a plurality of references
    • G06V30/194References adjustable by an adaptive method, e.g. learning
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/327Calibration thereof
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • G06T2207/10021Stereoscopic video; Stereoscopic image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30168Image quality inspection
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N2013/0074Stereoscopic image analysis



Abstract

Stereo video is an important video technology to improve the human visual experience. It is known that the human brain may conceal differences between the left view and the right view. Spatial video artefacts include blur, noise, blockiness, etc. The ability of the human brain for concealment is quite different for different kinds of spatial degradation. The quality of stereo video is difficult to measure. A method for measuring stereo video quality comprises determining (11) separate measures for each view for blockiness, blur and noise, calculating (12) respective values (v1, v2, v3) that represent quality components, wherein the views are weighted using different individual weighting factors (α1, α2, α3) for each distortion type, and calculating (13) a combined quality value.

Description

Method for measuring quality of a video with at least two different views, and corresponding device
Field of the invention
This invention relates to a method for measuring the quality of video with two or more different views, and a corresponding device.

Background
This section is intended to introduce the reader to various aspects of art, which may be related to various aspects of the present invention that are described and/or claimed below. This discussion is believed to be helpful in providing the reader with background information to facilitate a better understanding of the various aspects of the present invention. Accordingly, it should be understood that these statements are to be read in this light, and not as admissions of prior art.
Stereo video is an important video technology for improving the human visual experience. The technology has developed rapidly in recent years, and its applications, such as 3D cartoon movies, are entering our daily life. Various codecs for stereo video compression have been proposed, and the display technology is becoming more and more practical. However, measuring the quality of stereo video remains a problem. In stereo video, there are two views with some disparity.
Through special display technology, a person's left eye sees the left view and the right eye sees the right view separately. The two views are composed into an integrated view with depth information in the brain, so that the viewer perceives a 3D impression of the video, which appears more realistic than 2D video and closer to the real world. In 2D videos, the normal spatial artefacts include blur, noise, blockiness, etc. Therefore, denote Quality(f_i) as the overall spatial quality of the i-th video frame, and denote Quality(f), Blockiness(f), Blur(f) and Noise(f) as the overall spatial quality, blockiness artefact measure, blur artefact measure and noise artefact measure of a normal 2D frame f.
Denote V = (V^L, V^R) the stereo video, where V^L = (f_1^L, f_2^L, ..., f_N^L) is the left view and V^R = (f_1^R, f_2^R, ..., f_N^R) is the right view.
Previous art has pointed out that in stereo video browsing, the human brain is able to compensate/conceal the quality degradation of the lower quality view by the higher quality view to some extent. Therefore, the overall stereo quality can be expressed as a function of the two view qualities:

Quality3D(f_i^L, f_i^R) = F(Quality(f_i^L), Quality(f_i^R))   (1)

In the prior art [1], the authors directly define

Quality3D(f_i^L, f_i^R) = (1 - a) × PSNR(f_i^L) + a × PSNR(f_i^R)   (1a)

wherein the left view is defined as the higher quality view and PSNR is chosen as the traditional 2D video quality measure; a is a constant number lower than 0.5. Therefore, the influence of the quality of the lower quality view is lowered in the overall stereo video quality evaluation.
Summary of the Invention

Though the scheme proposed in the prior art [1] is simple and intuitive, its prediction accuracy is limited. As mentioned above, the human brain is able to compensate/conceal the quality degradation of the lower quality view by the higher quality view to some extent. However, the inventors of the present invention have found that the ability of the brain for compensation/concealment is quite different for different kinds of spatial degradation. For example, blur and noise degradation can be hidden by the higher quality view more easily than blockiness degradation.
The present invention makes use of this finding, by
providing an improved method for measuring video quality, and a corresponding device.
According to the invention, for different types of quality degradation the correlation between the different views is weighted differently. That is, a measuring method and device take into account, individually for each quality degradation type, how effectively the better view can compensate the additional distortion in the worse view. For this purpose, a weighting factor is used. A weighting factor of zero (or 0%) means that the better view totally compensates the additional distortion in the worse view. That is, in terms of perceptive quality, the 3D video performs the same as the better view. On the other hand, a weighting factor of one (or 100%) means that the better view fails to compensate any of the additional distortion in the worse view. That is, in terms of perceptive quality, the 3D video performs the same as the worse view.
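As a purely numerical illustration of such a weighting factor (a minimal sketch; the function name and the quality values are hypothetical, not from the patent), a combined per-artefact quality could be formed as follows:

    def combine_views(quality_better, quality_worse, alpha):
        # alpha = 0: the better view fully conceals the extra distortion;
        # alpha = 1: the perceived quality equals that of the worse view.
        return (1.0 - alpha) * quality_better + alpha * quality_worse

    print(combine_views(4.5, 2.0, alpha=0.0))  # 4.5 -> behaves like the better view
    print(combine_views(4.5, 2.0, alpha=1.0))  # 2.0 -> behaves like the worse view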
In one embodiment, a method for measuring a quality of video with at least two different views, wherein one of the at least two views has lower quality than the other, comprises steps of
analyzing the video, wherein at least a first measure for blockiness, a second measure for blur and a third measure for noise are determined (separately for a first and a second view of the at least two views of the video), calculating at least a first, a second and a third value representing quality components, the first value denoting a blockiness quality of the video, wherein the first measure for blockiness of the first view and the first measure for blockiness of the second view are weighted and added, the weighting using a first weighting factor α1, the second value denoting a blur quality of the video, wherein the first measure for blur of the first view and the first measure for blur of the second view are weighted and added, the weighting using a second weighting factor α2, and the third value denoting a noise quality of the video, wherein the first measure for noise of the first view and the first measure for noise of the second view are weighted and added, the weighting using a third weighting factor α3, and calculating a combined quality value using said first, second and third values, and providing the combined value as a video quality measure.
In one embodiment, an apparatus for measuring a quality of video with at least two different views, wherein one of the at least two views has lower quality than the other,
comprises analyzer means for analyzing the video, wherein at least a first measure for blockiness, a second measure for blur and a third measure for noise are determined
(separately for a first and a second view of the at least two views of the video) , first calculating means for
calculating at least a first, a second and a third value representing quality components,
the first value v1 denoting a blockiness quality of the video, wherein the first measure for blockiness of the first view and the first measure for blockiness of the second view are weighted and added, the weighting using a first
weighting factor α1, the second value v2 denoting a blur quality of the video, wherein the first measure for blur of the first view and the first measure for blur of the second view are weighted and added, the weighting using a second weighting factor α2, and the third value v3 denoting a noise quality of the video, wherein the first measure for noise of the first view and the first measure for noise of the second view are weighted and added, the weighting using a third weighting factor α3; and second calculating means for calculating a combined quality value using said first, second and third values, and output means for providing the combined value as a video quality measure.
Advantageous embodiments of the invention are disclosed in the dependent claims, the following description and the figures.
Brief description of the drawings
Exemplary embodiments of the invention are described with reference to the accompanying drawings, which show in
Fig.1 a flow-chart of an embodiment of a method for
measuring stereo video quality;
Fig.2 a block diagram of an embodiment of a device for measuring stereo video quality; and
Fig.3 different kinds of exemplary distortion in two
different views of a stereo video.
Detailed description of the invention

Stereo video is an important video technology for improving the human visual experience. The technology has developed rapidly, and its applications, such as 3D cartoon movies, are entering our daily life. Various codecs for stereo video compression have been proposed, and the display technology is becoming more and more practical. However, measuring the quality of stereo video remains a problem.
In stereo video, there are two views with some disparity. Through special display technology, a person's left eye sees the left view and the right eye sees the right view separately. The two views are composed into an integrated view with depth information in the brain, so that the viewer perceives a 3D impression of the video. It appears more realistic than 2D video and closer to the real world. This is the characteristic and advantage of stereo video, but it also results in many differences between stereo video and common 2D video quality assessment.
Research achievements of 2D video quality assessment cannot be applied to stereo video directly. The primary difference between 2D and 3D video is this: if there is only one view, as in 2D video, then a calculation based on the picture information, such as PSNR, MSE or other improved methods, can reflect the quality of the 2D video directly.
However, the way stereo video is formed in the human brain is more complicated: it is affected by both views. The Human Visual System (HVS) has a great effect on the overall perception of stereo video. Former work proved that when the quality of one view is very good, the overall quality can still stay at a high level even if the quality of the other view is poor. The lost information in the degraded view can be well compensated, and the visual noise can be concealed during the combination in the brain. Therefore, 2D video quality assessment methods are not adequate to measure 3D video quality anymore. It is possible to keep the overall quality of stereo video high while reducing the bitrate of the whole video at the same time, by keeping e.g. the left view at best quality and compressing the right view. Although some information is lost, the perceived quality can be well repaired by the left view.
In stereo video quality assessment research, it is a common method to keep one view's quality high and degrade the other view. For example, in the prior art [2], an approach is proposed that simply combines the 2D quality scores of each image of the stereo pair. In the previously mentioned prior art [1], the authors point out that the lower quality view shall contribute less to the overall quality measure. Hence, the weight for the higher quality view shall be larger than that for the lower quality view.
Denote V = (V^L, V^R) the stereo video, where V^L = (f_1^L, f_2^L, ..., f_N^L) is the left view and V^R = (f_1^R, f_2^R, ..., f_N^R) is the right view.
This invention focuses on the spatial artefacts of stereo videos, while artefacts in the temporal dimension are not considered. In 2D videos, the normal spatial artefacts include blockiness, blur, noise, etc.
We can make a very simple experiment: keep the left view un-degraded, while increasing a single type of spatial degradation (blockiness, blur, or noise) step by step. A certain high level of blur/noise degradation in the right view can be totally hidden by the left view, and no quality degradation is perceived for the overall stereo video. On the other hand, even slight blockiness degradation will be identified, and in this case the un-degraded left view is hardly able to help hide the blockiness degradation in the right, lower quality view.
This invention provides an artefacts-based stereo video quality measurement scheme, considering the three dominating spatial artefacts of 2D images: blockiness, blur, and noise. Obviously, other types of distortion may also be considered, and will be handled in a corresponding manner. Clearly, in order to identify the different compensation/concealment ability (of the lower quality view by the higher quality view) for the three kinds of spatial artefacts, the input features should be: Blockiness(f_i^L), Blur(f_i^L), Noise(f_i^L), Blockiness(f_i^R), Blur(f_i^R), Noise(f_i^R).
Since blockiness detection, blur detection and noise detection methods have been widely studied in the past, these methods are considered to be known to the reader.
ARTEFACTS-BASED STEREO VIDEO QUALITY MEASUREMENT SCHEME

Given the six features Blockiness(f_i^L), Blur(f_i^L), Noise(f_i^L), Blockiness(f_i^R), Blur(f_i^R) and Noise(f_i^R), an intuitive solution is to collect a sufficient stereo video database with a natural distribution of these features. Then there are many mature machine learning techniques that can be adopted to train a model which takes these six features as input and outputs a score as the stereo video quality measurement.
Such an idea seems reasonable but impractical:
First, though blockiness, blur, and noise are three dominating spatial artefacts, there are many other kinds of spatial artefacts, such as ringing, mosquito noise, etc. And besides spatial artefacts, various kinds of temporal artefacts exist. The method is hard to generalize, unless the database can be unlimited in size, to cases where these additional artefacts should be taken into account.
Second, to train a stable model with six input features, a large enough database is required. In this case, it is required to collect representative samples and organize a subjective test to set up a database. Until now, setting up a sufficiently large database (e.g. of size 10000) and organizing a subjective test is far too expensive. However, the problem is not so serious for traditional 2D video quality measurement. Considering only blockiness, blur and noise artefacts, stereo video quality measurement takes six features (the three artefacts for both views) as input, while traditional 2D video quality measurement only requires three features (the above-mentioned three types of artefacts). If 10000 samples are required to train a model for stereo videos, only √10000 = 100 samples are required to do the same thing for traditional 2D videos. In order to lower the number of features in stereo video quality measurement, a subjective test was done for stereo videos with the left view kept un-distorted.
Subjective test for stereo videos with the left view kept un-distorted
Some distortions may happen when videos are compressed or transferred. Videos may lose spatial and temporal information, or errors and noise may be introduced. These distortions affect the quality of the single view and lower the MOS value of the subjective test. Since there are two views in a stereo video, the effect on the whole video may differ for different distortions of one view. To make an accurate measurement of overall quality, we adopted the DSCQS method to organize the subjective test. DSCQS is "Double Stimulus Continuous Quality Scale": it is a method to scale the subjective quality of video, established by ITU-R BT.500-11. In this method, the original video and the test video are displayed in an alternating manner. Observers watch them and give a score from 1 to 5 to reflect their own subjective impression of the test video. The mean value of these scores among different persons is the final scale of video quality. The table below shows the scoring standard.
[Table: standard five-grade quality scale used for scoring, from 5 (best) to 1 (worst).]
The environment used to watch the stereo video is a PC with an Nvidia GeForce 9 GPU, Windows Vista OS, and a Samsung SyncMaster 2233RZ 120 Hz LCD. The software used for playing the stereo video is the NVIDIA stereoscopic player. Observers wear special glasses and watch the stereo video at a distance of 3 times the width of the LCD.
We chose three stereo video sources to create the database used in the subjective test: Katana (720p), Knights_Quest (576p) and Heidelberg (576p). Fig.3 shows different kinds of distortion in the Heidelberg sequence, where the left views are undistorted. More details are given below. To simulate several common distortions, we processed the right view with the following methods: Gauss filtering and H.264 encoding and decoding. These operations on the right video lead to the following effects: blur, blockiness and noise. As mentioned before, the left view is kept un-distorted.

HRC 1 (Fig.3a): A Gauss filter is applied to the right view with the filter layer ranging from 1 to 5. We thus get five degraded videos with different degrees of blur. By applying the multilayer Gauss filter, the right view is blurred. A lot of high frequency information is lost, and details of the video, such as the expression on a human face, are hard to discern. The contours of the image can still be seen; the low frequency information is retained. However, we found that the overall quality of the whole combined view did not come down much in the stereo video mode.
HRC 2 (Fig.3b): The right view is encoded by H.264 with different QP, and then decoded with the de-blocking filter turned off. In this case blockiness is obvious in the right view. The quality of the right view comes down quickly with increasing QP value. When QP is small, the blocking effect in the right view is not obvious, and the overall quality can still be kept high by the repair action of the high quality left view, although the tiny distortion of the overall view can be perceived. When the QP value keeps increasing, the blocking effect of the right view becomes severe, and the overall quality comes down quickly to an unacceptable level.

HRC 3 (Fig.3c): The same as in HRC 2, but with the de-blocking filter turned on. The blocking effect is less than in HRC 2, but the blur and noise are more severe. For the same QP value, the MOS value of the right view descends a little or does not change, while on the contrary the overall MOS value increases a little compared with views without the de-blocking filter operation. This observation tells us that the blockiness distortion has a great effect on the overall quality, while the blur and noise distortion have less effect on the overall quality. We hence established a database composed of 114 samples of stereo videos. Each sample has a perfect-quality left view and a distorted right view; both the 2D right view and the 3D stereo video are marked with a MOS score by viewers.
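The blur degradations of HRC 1 could, for example, be reproduced along the following lines (a sketch assuming OpenCV is available; the kernel size and sigma are illustrative choices, and the H.264 processing of HRC 2/3 is only indicated in a comment, since the exact encoder settings are not specified here):

    import cv2  # assumes the opencv-python package

    def degrade_right_view_blur(frame, level):
        # HRC 1 style degradation: apply a Gaussian filter to a right-view frame
        # 'level' times (1..5), playing the role of the filter layer count.
        out = frame
        for _ in range(level):
            out = cv2.GaussianBlur(out, (5, 5), 1.0)
        return out

    # HRC 2 / HRC 3 would instead re-encode the right view with H.264 at varying QP,
    # with the in-loop de-blocking filter turned off (HRC 2) or on (HRC 3),
    # using an external encoder.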
In the following, a linear model for stereo video quality measurement and a stereo video quality measurement scheme are described. The scheme is based on the assumptions below. First, the compensation/concealment ability for each artefact obeys a linear rule:

Blockiness3D(f_i^L, f_i^R) = (1 - α1) × min(Blockiness(f_i^L), Blockiness(f_i^R)) + α1 × max(Blockiness(f_i^L), Blockiness(f_i^R))
Blur3D(f_i^L, f_i^R) = (1 - α2) × min(Blur(f_i^L), Blur(f_i^R)) + α2 × max(Blur(f_i^L), Blur(f_i^R))   (2)
Noise3D(f_i^L, f_i^R) = (1 - α3) × min(Noise(f_i^L), Noise(f_i^R)) + α3 × max(Noise(f_i^L), Noise(f_i^R))

where Blockiness3D means the overall blockiness perception for the stereo video, while Blockiness means the blockiness perception for the traditional 2D video (and correspondingly for the other two artefact types "blur" and "noise").
Second, the artefact fusion strategy, which is applied when generating a value indicating the overall perception and considering the artefacts of blockiness, blur and noise, is the same between stereo videos and traditional 2D videos.
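The first assumption, i.e. the linear rule of eq. (2), can be sketched as follows (a minimal Python sketch for illustration; the α values shown are those determined later in this description, and the dictionary-based interface is an assumption):

    ALPHAS = {"blockiness": 0.2, "blur": 0.9, "noise": 0.6}  # values reported later in the text

    def artefact_3d(left_measure, right_measure, alpha):
        # Eq. (2): the lower of the two per-view artefact levels is weighted with
        # (1 - alpha), the higher one with alpha.
        return ((1.0 - alpha) * min(left_measure, right_measure)
                + alpha * max(left_measure, right_measure))

    def artefacts_3d(left, right):
        # left / right: dicts with 'blockiness', 'blur' and 'noise' measures of one frame pair.
        return {name: artefact_3d(left[name], right[name], ALPHAS[name]) for name in ALPHAS}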
The present invention provides a stereo video quality measurement based on the analysis of the database generated above and the mentioned assumptions. Below, different tests are shown for evaluating the effectiveness of these assumptions. Each stereo video in the database is composed of two views f_i^L and f_i^R. Since f_i^L is kept un-distorted,

Blockiness(f_i^L) = Blur(f_i^L) = Noise(f_i^L) = 0

The subjective quality marked for the right view is denoted as MOS(f_i^R), and the subjective quality marked for the overall stereo video is denoted as MOS3D(f_i^L, f_i^R).
Since the left view is un-distorted, the assumption can be further simplified to
Blockiness3D(f_i^L, f_i^R) = α1 × Blockiness(f_i^R)
Blur3D(f_i^L, f_i^R) = α2 × Blur(f_i^R)   (3)
Noise3D(f_i^L, f_i^R) = α3 × Noise(f_i^R)
According to assumption 2, the artefact fusion strategy, which is applied when generating the overall perception considering the artefacts of blockiness, blur and noise, is the same between stereo videos and traditional 2D videos. Denoting this fusion strategy as g(blockiness, blur, noise), we get

MOS3D(f_i^L, f_i^R) = g(Blockiness3D(f_i^L, f_i^R), Blur3D(f_i^L, f_i^R), Noise3D(f_i^L, f_i^R))
MOS(f_i^R) = g(Blockiness(f_i^R), Blur(f_i^R), Noise(f_i^R))

Finally, the remaining issue is to determine the constant numbers α1, α2 and α3. These constant numbers lie in the range [0,1]. Their values are exemplarily evaluated by enumeration: limiting the values of α1, α2 and α3 to {0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0}, for each potential selection we create two sub-databases from the database of the section above: for the 1st sub-database, the input features are Blockiness(f_i^R), Blur(f_i^R) and Noise(f_i^R), while the output is MOS(f_i^R); for the 2nd sub-database, the input features are Blockiness3D(f_i^L, f_i^R), Blur3D(f_i^L, f_i^R) and Noise3D(f_i^L, f_i^R) as defined in equation (3), while the output is MOS3D(f_i^L, f_i^R).
We choose an existing machine learning scheme (classification / neural network etc.; we chose an Artificial Neural Network, ANN, in our implementation). With the help of the chosen machine learning scheme, we train it on the 1st sub-database and test it on the 2nd sub-database, and get a prediction accuracy Y1; correspondingly, we train it on the 2nd sub-database and test it on the 1st sub-database, and get a prediction accuracy Y2. Among all potential selections, the one that provides the highest value of (Y1 + Y2) is our best selection.
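The enumeration procedure can be sketched as follows (Python; 'build_sub_db' and 'train_and_test' are hypothetical helpers standing in for the database construction and the chosen machine learning scheme, and the combination Y1 + Y2 reflects the selection criterion described above):

    from itertools import product

    CANDIDATES = [i / 10.0 for i in range(11)]  # {0, 0.1, ..., 1.0}

    def cross_accuracy(a1, a2, a3, db):
        # Build the two sub-databases for one (a1, a2, a3) selection, then train on
        # one and test on the other, in both directions.
        db1 = build_sub_db(db, use_3d_features=False)                      # 2D features -> MOS
        db2 = build_sub_db(db, use_3d_features=True, alphas=(a1, a2, a3))  # eq. (3) features -> MOS3D
        y1 = train_and_test(train=db1, test=db2)
        y2 = train_and_test(train=db2, test=db1)
        return y1 + y2

    def best_alphas(db):
        return max(product(CANDIDATES, repeat=3), key=lambda a: cross_accuracy(*a, db))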
With the above method, we finally set α1 = 0.2, α2 = 0.9 and α3 = 0.6 in our experiments.
However, these values are selected under the assumption that minimum steps of 0.1 are possible. Generally, the values may deviate a little, for example 0.15 < α1 < 0.25, 0.85 < α2 < 0.95, and 0.55 < α3 < 0.65.
At the end of this section, the proposed stereo video quality measurement is summarized. First, an artefact-based quality measurement method similar to that of traditional 2D videos is disclosed. Each kind of artefact is first estimated separately, and finally fused into an overall stereo video quality. Three dominating artefacts (blockiness, blur, and noise) are considered.
Second, the artefact estimation of the stereo video is defined as the linear composition of the artefact levels of the two views (left view and right view), cf. eq. (2). Then, with the help of the database as described above, a solution is provided to decide the individual values of the constants α1, α2 and α3 in eq. (2).
Finally, we use a fusion strategy that may be completely the same as that of traditional 2D video, to get the overall quality of the stereo video by combining the levels of these three kinds of artefacts. This reflects the fact that combining different kinds of artefacts into an overall perception is a human visual-psychological activity, and it is not influenced by whether the object is a 2D video or a stereo video.
Below is an evaluation of the disclosed method. To evaluate the proposed stereo video quality measurement method, we first generate another, relatively small database. The way to generate this database is similar to that described above; the only difference is that the left view is not kept un-distorted in this case. Testing the proposed method on this relatively small database, the prediction accuracy is at an acceptable level: 0.71. According to these results, the assumptions above are reasonable.
CONCLUSION
One advantage of this invention lies in the following.
A method is proposed for evaluating artefacts (blockiness, blur, noise) of stereo video, in cases where the artefact levels of the two views of the stereo video are different. The evaluating method is as defined below:
Blockiness3D(f_i^L, f_i^R) = (1 - α1) × min(Blockiness(f_i^L), Blockiness(f_i^R)) + α1 × max(Blockiness(f_i^L), Blockiness(f_i^R))
Blur3D(f_i^L, f_i^R) = (1 - α2) × min(Blur(f_i^L), Blur(f_i^R)) + α2 × max(Blur(f_i^L), Blur(f_i^R))
Noise3D(f_i^L, f_i^R) = (1 - α3) × min(Noise(f_i^L), Noise(f_i^R)) + α3 × max(Noise(f_i^L), Noise(f_i^R))
A method is provided for determining the combination coefficients α1, α2 and α3 in the above equations, with the help of a designed stereo video database.
By analysis of the different combination coefficients, the correctness of the observation mentioned above is confirmed: the human ability to compensate/conceal different kinds of artefacts is quite different. In case the artefact levels of the two views are different, humans are able to compensate the bad quality view with the information of the high quality view. According to the determined combination coefficients, it is clear that the human compensation operation deals best with blur artefacts, moderately with noise artefacts and worst with blockiness artefacts.
To get an overall video quality by combining the perception of different kinds of artefacts, the mechanism is the same between 2D video and stereo video. According to all above items, an artefacts based stereo video quality measurement is then established, as mentioned above.
The whole scheme can be extended to other artefacts,
including different spatial artefacts and temporal artefacts.
Fig.1 shows an embodiment of a method for measuring a
quality of video with at least two different views, wherein one of the at least two views has lower quality than the other. The method 10 comprises steps of analyzing 11 the video, wherein at least a first measure for blockiness, a second measure for blur and a third measure for noise are determined, separately for a first and a second view of the at least two views of the video,
calculating 12 at least a first, a second and a third value representing quality components, the first value denoting a blockiness quality of the video, wherein the first measure for blockiness of the first view and the first measure for blockiness of the second view are weighted and added, the weighting using a first weighting factor α1, the second value denoting a blur quality of the video, wherein the first measure for blur of the first view and the first measure for blur of the second view are weighted and added, the weighting using a second weighting factor α2, and
the third value denoting a noise quality of the video, wherein the first measure for noise of the first view and the first measure for noise of the second view are weighted and added, the weighting using a third weighting factor α3, and calculating 13 a combined quality value using said first, second and third values, and providing 14 the combined value as a video quality measure.
The first, the second and the third weighting factors are different from each other, so that a human's eye's best compensation ability for blur, medium compensation ability for noise and worst compensation ability for blockiness are considered.
In one embodiment, the weighting factors define how strong the influence of the lower quality view is, compared with the higher quality view: a value of 0 indicates that the perceived quality of both views is equal to the perceived quality of the higher quality view, and a value of 1 (100%) indicates that the perceived quality of both views is equal to the perceived quality of the lower quality view.
In one embodiment, the first weighting factor α1 for blockiness quality is lower than the third weighting factor α3 for noise quality, and the third weighting factor α3 for noise quality is lower than the second weighting factor α2 for blur quality. In one embodiment, the first weighting factor α1 is between 0.15 and 0.25, the second weighting factor α2 is between 0.85 and 0.95, and the third weighting factor α3 is between 0.55 and 0.65. In one embodiment, the combined quality value is calculated by an Artificial Neural Network, using a training based on exemplary video sequences.
In one embodiment, the combined quality value is calculated by adding, or weighted adding, of the first, second and third values that represent quality components. For example, MOS3D = Blockiness3D + Blur3D + Noise3D (see also the sketch below).
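A minimal end-to-end sketch of this embodiment (assuming placeholder per-view artefact detectors; the detector interface, the default α values from the ranges above, and the simple additive fusion are illustrative choices, not a prescribed implementation):

    def measure_stereo_quality(left_frame, right_frame, detectors, alphas=(0.2, 0.9, 0.6)):
        # 'detectors' is a sequence of three placeholder functions returning the
        # blockiness, blur and noise measure of a single frame (step 11).
        values = []
        for detect, alpha in zip(detectors, alphas):
            l, r = detect(left_frame), detect(right_frame)
            # Step 12: weight and add the per-view measures with the factor for this artefact type.
            values.append((1 - alpha) * min(l, r) + alpha * max(l, r))
        # Steps 13 and 14: fuse by simple addition and return the combined quality value.
        return sum(values)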
Fig.2 shows one embodiment of an apparatus for measuring a quality of video with at least two different views, wherein one of the at least two views has lower quality than the other. The apparatus 20 comprises
analyzer means 21 for analyzing the video, wherein at least a first measure for blockiness, a second measure for blur and a third measure for noise are determined, separately for a first and a second view of the at least two views of the video,
first calculating means 22 for calculating at least a first, a second and a third value representing quality components, the first value v1 denoting a blockiness quality of the video, wherein the first measure for blockiness of the first view and the first measure for blockiness of the second view are weighted and added, the weighting using a first
weighting factor α1,
the second value v2 denoting a blur quality of the video, wherein the first measure for blur of the first view and the first measure for blur of the second view are weighted and added, the weighting using a second weighting factor α2, and the third value v3 denoting a noise quality of the video, wherein the first measure for noise of the first view and the first measure for noise of the second view are weighted and added, the weighting using a third weighting factor α3, second calculating means 23 for calculating a combined quality value using said first, second and third values, and output means 24 for providing the combined value as a video quality measure.
In one embodiment of the apparatus, the first α1, the second α2 and the third α3 weighting factors are different from each other.
In one embodiment of the apparatus, the weighting factors define how strong the influence of the lower quality view is, compared with the higher quality view, wherein a value of 0 indicates that the perceived quality of both views is equal to the perceived quality of the higher quality view, and a value of 1 (or 100%) indicates that the perceived quality of both views is equal to the perceived quality of the lower quality view.
In one embodiment of the apparatus, the first weighting factor α1 for blockiness quality is lower than the third weighting factor α3 for noise quality, and the third weighting factor α3 for noise quality is lower than the second weighting factor α2 for blur quality.
In one embodiment of the apparatus, the first weighting factor α1 is between 0.15 and 0.25, the second weighting factor α2 is between 0.85 and 0.95, and the third weighting factor α3 is between 0.55 and 0.65.
In one embodiment of the apparatus, the second calculation means comprises processing means for processing an
Artificial Neural Network ANN, wherein the combined quality value is calculated by the Artificial Neural Network.
In one embodiment of the apparatus, it further comprises adding means 25a, wherein the combined quality value is calculated by adding, or weighted adding, of the first, second and third values that represent quality components.
In one embodiment, the second calculating means performs a calculation according to
MOS3D = Blockiness3D + Blur3D + Noise3D
In one embodiment, the second calculating means performs a calculation according to
MOS3D = c1*Blockiness3D + c2*Blur3D + c3*Noise3D, where c1, c2, c3 are constant factors. In one embodiment, the sum c1 + c2 + c3 is 1.
Various possible applications of the invention in stereo video coding include, e.g., helping to choose a de-blocking filter (since the compensation/concealment ability for blockiness and blur differs, the proposed technique is important for deciding to which level the lower quality view should be de-blocked) or adaptive stereo video streaming (unequal streaming of the two views can clearly be decided based on an accurate stereo video quality measurement).
A method is proposed to determine three distinct combination coefficients α1, α2 and α3. These coefficients can be important configuration parameters to help improve prediction accuracy. Further, a user interface is provided to allow modification of these constant numbers in special use cases: for example, for those users who are extremely critical of blur artefacts.
While there has been shown, described, and pointed out fundamental novel features of the present invention as applied to preferred embodiments thereof, it will be understood that various omissions and substitutions and changes in the apparatus and method described, in the form and details of the devices disclosed, and in their operation, may be made by those skilled in the art without departing from the spirit of the present invention. It is expressly intended that all combinations of those elements that perform substantially the same function in substantially the same way to achieve the same results are within the scope of the invention. Substitutions of elements from one described embodiment to another are also fully intended and contemplated.
It will be understood that the present invention has been described purely by way of example, and modifications of detail can be made without departing from the scope of the invention. Each feature disclosed in the description and (where appropriate) the claims and drawings may be provided independently or in any appropriate combination. Features may, where appropriate, be implemented in hardware, software, or a combination of the two.
Reference numerals appearing in the claims are by way of illustration only and shall have no limiting effect on the scope of the claims.
Notes

[1] N. Ozbek and A. M. Tekalp, "Unequal inter-view rate-allocation using scalable stereo video coding and an objective stereo video quality measure", ICME 2008.
[2] P. Campisi, P. Le Callet, and E. Marini, "Stereoscopic images quality assessment", EUSIPCO 2007, Sep. 2007, Poznan, Poland.

Claims

Claims
1. A method for measuring a quality of video with at least two different views, wherein one of the at least two views has lower quality than the other, the method comprising steps of
- analyzing (11) the video, wherein at least a first measure for blockiness, a second measure for blur and a third measure for noise are determined, separately for a first and a second view of the at least two views of the video;
- calculating (12) at least a first, a second and a third value representing quality components,
the first value denoting a blockiness quality of the video, wherein the first measure for blockiness of the first view and the first measure for blockiness of the second view are weighted and added, the weighting using a first weighting factor (α1),
the second value denoting a blur quality of the video, wherein the second measure for blur of the first view and the second measure for blur of the second view are weighted and added, the weighting using a second weighting factor (α2), and
the third value denoting a noise quality of the video, wherein the third measure for noise of the first view and the third measure for noise of the second view are weighted and added, the weighting using a third weighting factor (α3); and
- calculating (13) a combined quality value using said first, second and third values; and
- providing (14) the combined value as a video quality measure.
2. Method according to claim 1, wherein the first (α1), the second (α2) and the third (α3) weighting factors are different from each other, so that the human eye's best compensation ability for blur, medium compensation ability for noise and worst compensation ability for blockiness are considered.
3. Method according to claim 1 or 2, wherein the weighting factors define how strong the influence of the lower quality view is, compared with the higher quality view, wherein a value of 0 indicates that the perceived quality of both views is equal to the perceived quality of the higher quality view, and a value of 1 indicates that the perceived quality of both views is equal to the perceived quality of the lower quality view.
4. Method according to one of the claims 1-3, wherein the first weighting factor (α1) for blockiness quality is lower than the third weighting factor (α3) for noise quality, and wherein the third weighting factor (α3) for noise quality is lower than the second weighting factor (α2) for blur quality.
5. Method according to one of the claims 1-4, wherein the first weighting factor (α1) is between 0.15 and 0.25, the second weighting factor (α2) is between 0.85 and 0.95, and the third weighting factor (α3) is between 0.55 and 0.65.
6. Method according to one of claims 1-5, wherein the combined quality value is calculated by an Artificial Neural Network, using training based on exemplary video sequences.
7. Method according to one of claims 1-5, wherein the
combined quality value is calculated by adding, or weighted adding, of the first, second and third values that represent quality components.
8. An apparatus (20) for measuring a quality of video
with at least two different views, wherein one of the at least two views has lower quality than the other, the apparatus comprising
- analyzer (21) for analyzing the video, wherein at least a first measure for blockiness, a second measure for blur and a third measure for noise are determined, separately for a first and a second view of the at least two views of the video;
- first calculating means (22) for calculating at least a first, a second and a third value representing quality components,
the first value (v1) denoting a blockiness quality of the video, wherein the first measure for blockiness of the first view and the first measure for blockiness of the second view are weighted and added, the weighting using a first weighting factor (α1),
the second value (v2) denoting a blur quality of the video, wherein the second measure for blur of the first view and the second measure for blur of the second view are weighted and added, the weighting using a second weighting factor (α2), and
the third value (v3) denoting a noise quality of the video, wherein the third measure for noise of the first view and the third measure for noise of the second view are weighted and added, the weighting using a third weighting factor (α3);
- second calculating means (23) for calculating a combined quality value using said first, second and third values; and
- output means (24) for providing the combined value as a video quality measure.
9. Apparatus according to claim 8, wherein the first (α1), the second (α2) and the third (α3) weighting factors are different from each other, so that the human eye's best compensation ability for blur, medium compensation ability for noise and worst compensation ability for blockiness are considered.
10. Apparatus according to claim 8 or 9, wherein the weighting factors define how strong the influence of the lower quality view is, compared with the higher quality view, wherein a value of 0 indicates that the perceived quality of both views is equal to the perceived quality of the higher quality view, and a value of 1 indicates that the perceived quality of both views is equal to the perceived quality of the lower quality view.
11. Apparatus according to one of the claims 8-10,
wherein the first weighting factor (α1) for blockiness quality is lower than the third weighting factor (α3) for noise quality, and wherein the third weighting factor (α3) for noise quality is lower than the second weighting factor (α2) for blur quality.
12. Apparatus according to one of the claims 8-11,
wherein the first weighting factor (α1) is between 0.15 and 0.25, the second weighting factor (α2) is between 0.85 and 0.95, and the third weighting factor (α3) is between 0.55 and 0.65.
13. Apparatus according to one of claims 8-12, wherein the second calculating means (23) comprises processing means for processing an Artificial Neural Network, wherein the combined quality value is calculated by the Artificial Neural Network.
14. Apparatus according to one of claims 8-13, further comprising adding means (25a), wherein the combined quality value is calculated by adding, or weighted adding, of the first, second and third values that represent quality components.
PCT/CN2010/000999 2010-07-02 2010-07-02 Method for measuring quality of a video with at least two different views, and corresponding device WO2012000137A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2010/000999 WO2012000137A1 (en) 2010-07-02 2010-07-02 Method for measuring quality of a video with at least two different views, and corresponding device

Publications (1)

Publication Number Publication Date
WO2012000137A1 (en)

Family

ID=45401287

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2010/000999 WO2012000137A1 (en) 2010-07-02 2010-07-02 Method for measuring quality of a video with at least two different views, and corresponding device

Country Status (1)

Country Link
WO (1) WO2012000137A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6236763B1 (en) * 1997-09-19 2001-05-22 Texas Instruments Incorporated Method and apparatus for removing noise artifacts in decompressed video signals
CN1422498A (en) * 2000-12-12 2003-06-04 皇家菲利浦电子有限公司 System and method for providing a scalable dynamic objective metric for automatic video quality evaluation
CN1465197A (en) * 2001-04-25 2003-12-31 皇家菲利浦电子有限公司 Apparatus and method for combining random set of video features in a non-linear scheme to best describe perceptual quality of video sequences using heuristic search methodology
CN101426148A (en) * 2008-12-01 2009-05-06 宁波大学 Video objective quality evaluation method

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013107037A1 (en) * 2012-01-20 2013-07-25 Thomson Licensing Blur measurement
US9280813B2 (en) 2012-01-20 2016-03-08 Debing Liu Blur measurement
CN112437291A (en) * 2020-10-16 2021-03-02 天津大学 Stereoscopic video quality evaluation method based on binocular fusion network and saliency

Similar Documents

Publication Publication Date Title
Wang et al. Quality prediction of asymmetrically distorted stereoscopic 3D images
Lin et al. Quality assessment of stereoscopic 3D image compression by binocular integration behaviors
Kiran Adhikarla et al. Towards a quality metric for dense light fields
Chen et al. Full-reference quality assessment of stereopairs accounting for rivalry
Gorley et al. Stereoscopic image quality metrics and compression
CN103763552B (en) Stereoscopic image non-reference quality evaluation method based on visual perception characteristics
De Silva et al. 3D video assessment with just noticeable difference in depth evaluation
Ryu et al. Stereoscopic image quality metric based on binocular perception model
Gupta et al. A novel full reference image quality index for color images
Su et al. Visual quality assessment of stereoscopic image and video: challenges, advances, and future trends
Wang et al. Perceptual depth quality in distorted stereoscopic images
Ma et al. Reduced-reference stereoscopic image quality assessment using natural scene statistics and structural degradation
Chen et al. Study of subject agreement on stereoscopic video quality
CN104954778A (en) Objective stereo image quality assessment method based on perception feature set
Yang et al. New metric for stereo image quality assessment based on HVS
Kim et al. Quality assessment of perceptual crosstalk on two-view auto-stereoscopic displays
Zhu et al. Perceptual distortion metric for stereo video quality evaluation
Wan et al. Depth perception assessment of 3D videos based on stereoscopic and spatial orientation structural features
Yang et al. No-reference quality assessment of stereoscopic videos with inter-frame cross on a content-rich database
WO2012000137A1 (en) Method for measuring quality of a video with at least two different views, and corresponding device
Fezza et al. Stereoscopic 3d image quality assessment based on cyclopean view and depth map
Wang et al. Stereoscopic 3D video coding quality evaluation with 2D objective metrics
Chetouani Full reference image quality metric for stereo images based on cyclopean image computation and neural fusion
CN102271279B (en) Objective analysis method for just noticeable change step length of stereo images
Chen et al. Full-reference quality assessment of stereoscopic images by modeling binocular rivalry

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 10853844

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 10853844

Country of ref document: EP

Kind code of ref document: A1