WO2009007133A2 - Method and apparatus for determining the visual quality of processed visual information - Google Patents

Method and apparatus for determining the visual quality of processed visual information

Info

Publication number
WO2009007133A2
Authority
WO
WIPO (PCT)
Prior art keywords
visual
quality
visual quality
video
processed
Prior art date
Application number
PCT/EP2008/005693
Other languages
English (en)
Other versions
WO2009007133A3 (fr)
Inventor
Tobias Oelbaum
Original Assignee
Technische Universität München
Priority date
Filing date
Publication date
Application filed by Technische Universität München filed Critical Technische Universität München
Publication of WO2009007133A2 publication Critical patent/WO2009007133A2/fr
Publication of WO2009007133A3 publication Critical patent/WO2009007133A3/fr

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N17/00Diagnosis, testing or measuring for television systems or their details
    • H04N17/004Diagnosis, testing or measuring for television systems or their details for digital television systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30168Image quality inspection

Definitions

  • the invention relates to a method and an apparatus for determining the visual quality of processed visual information such as compressed images or videos.
  • PSNR Peak Signal to Noise Ratio. The PSNR between a coded and an original image is given as PSNR = 10 · log10( I_max² · N · M / Σ (I_cod − I_orig)² ), where the sum runs over all N × M pixel positions and
  • I_max is the maximum value one pixel can have (e.g. 255 for 8 bit)
  • N and M are the number of rows and columns respectively
  • I_cod and I_orig are the actual pixel values for the coded and original image respectively.
  • PSNR is the only video quality metric that is widely accepted and is therefore the de-facto standard for measuring video quality. Being the de-facto standard for objective video quality metrics, PSNR is still used for comparing AVC/H.264 encoded video with other video codecs or for comparing different encoder implementations or coding settings for AVC/H.264. This is despite the knowledge that PSNR values may be heavily misleading.
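  • As a concrete illustration, the PSNR defined above can be computed in a few lines of Python/numpy; the function name and the 8-bit default are ours, not part of the patent:

```python
import numpy as np

def psnr(coded: np.ndarray, orig: np.ndarray, i_max: float = 255.0) -> float:
    """PSNR between a coded and an original image as defined above.

    coded, orig: 2-D arrays of pixel values with N rows and M columns.
    i_max:       maximum value one pixel can have, e.g. 255 for 8 bit.
    """
    diff = coded.astype(np.float64) - orig.astype(np.float64)
    mse = np.mean(diff ** 2)   # squared error over all N*M pixels, averaged
    if mse == 0.0:
        return float("inf")    # identical images
    return 10.0 * np.log10(i_max ** 2 / mse)
```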
  • the ITU released a recommendation which included four different full reference metrics (not only the coded video but also the original video is needed for the evaluation) which outperformed PSNR in terms of correlation to the results of extensive subjective tests [ITU-T J.144, Objective perceptual video quality measurement techniques for digital cable television in the presence of a full reference, ITU-T 2004].
  • SSIM Structural SIMilarity index
  • a very common approach for a full reference metric is to combine measurements such as block fidelity, content richness and distortions with some visibility masking functions.
  • NR For an NR metric, no information about the original video is needed.
  • One popular approach for an NR image and video quality metric is the inclusion of watermarks in the original image and then measuring the extent to which these watermarks can be recovered at the receiver.
  • Other common methods are estimation of PSNR or calculating the visual quality by evaluating different types of distortion such as blockiness.
  • Objective visual quality metrics normally deliver very imprecise results.
  • a reason for this is that different images or videos have different properties, for example regarding sharpness or color spectrum. These different properties are distorted in a different way during compression or transmission.
  • the invention is based on the insight that the preciseness of an objective visual quality metric can be improved if, for a certain processed visual information, the parameters of a regression function are known which correlate the quality metric with the actual visual quality of the processed visual information.
  • the processed visual information is preferably at least one image or video or a part of at least one image or video.
  • the processing is preferably compressing and/or transmitting of the information.
  • a visual information is processed not only once, for example for a transmission, but at least one more time, preferably with a further compressor, to generate a further processed visual information.
  • if the visual information is an image or a video and the processing is compressing or transmitting, the image or video is compressed once for the transmission and at least one additional time.
  • This further image or video is used only for determining the visual quality of the transmitted image or video.
  • the method according to the invention is preferably carried out automatically on a calculation device or a computer, which is for example part of a transmission system or a compression system.
  • the regression function relates the actual visual quality to the visual quality calculated by a visual quality metric; its parameters are determined in the method.
  • Setting the further visual quality preferably means that this further quality is stored or entered; it is based on values that are estimated, determined (for example in tests measuring the actual or subjective quality) or otherwise chosen. The preciseness of the objective quality metrics can therefore be improved.
  • the determination of the actual visual quality is preferably performed on the basis of the parameters which were used for the processing of this further processed information.
  • the calculated visual quality y_i has to be calculated with an objective quality metric.
  • the regression function can be assumed to be linear. In this case it is determined by the parameters slope and offset (or intersect). With the calculated quality of the further processed information, i.e. for example the compressed picture or video, either the slope or the offset of the linear regression function can be determined.
  • the visual information is not necessarily a complete image or video, but can also be a part of an image or video.
  • the visual information can also be at least one feature of at least a part of an image or video.
  • s = (y_h − y_n) / (v_h − v_n) and o = y_n − v_n · s, respectively.
  • a visual quality metric can preferably be regarded as a black-box building block as shown in figure 1, which produces a quality estimation y at the output when confronted with a coded video or image at the input. If the original image or video is also needed as input, this is a "full reference" (FR) or "reduced reference" (RR) quality metric; otherwise the metric is a "no reference" (NR) metric.
  • FR full reference
  • RR reduced reference
  • NR no reference
  • these regression parameters are therefore preferably determined by producing at least one, preferably two, additional instances of the original image or video and using these instances to calculate the slope s and offset o of the linear regression line.
  • the visual quality of these additional instances should preferably be inherently known. The gained parameters s and o are then used to correct the original quality prediction.
  • the accuracy of the correction mainly depends on three attributes: 1. the difference between the actual visual quality and the assumed visual quality of the additional instances, 2. the difference in visual quality between the two additional instances, and 3. the similarity between the encoder used for the additional instances and the encoder used for the image or video of interest, as elaborated by the following points.
  • the visual quality of the two additional instances should preferably be very different. These instances can be produced e.g. by encoding the original image or video using a fixed quantization parameter (QP).
  • QP quantization parameter
  • if the visual quality metric is an NR metric, one instance can preferably be the uncoded original; otherwise, a coded version of the video or image can be generated that most likely has no or only very few impairments.
  • the visual quality v_high of this instance will preferably be assumed to be in the range of 0.8 to 1.0 on a 0 to 1 scale.
  • the low quality instance should preferably be of low visual quality but should not contain artifacts that are not present in the image or video of interest (e.g. should not contain skipped frames if the video of interest does not contain skipped frames).
  • the visual quality v_low of this instance will preferably be assumed to be in the range of 0.1 to 0.3 on a 0 to 1 scale.
  • the encoder used to encode the additional instances should be close to the encoder used to encode the image or video of interest. If the encoder and its settings are unknown, at least the same coding technology should preferably be used.
  • the additional instances are then preferably rated by the same visual quality metric that is used to gain the prediction y.
  • the gained values y_high and y_low are used to predict the slope s and offset o of the regression line: s = (y_high − y_low) / (v_high − v_low) and o = y_low − v_low · s.
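  • A minimal sketch of this correction step, assuming the linear model y = s · v + o introduced above (all names are ours; v_high and v_low are the assumed qualities of the two additional instances):

```python
def corrected_quality(y: float, y_high: float, y_low: float,
                      v_high: float = 0.9, v_low: float = 0.2) -> float:
    """Correct a raw metric prediction y using two additional instances.

    y_high, y_low: metric predictions for the high/low quality instances.
    v_high, v_low: assumed visual qualities of those instances (0..1 scale).
    """
    s = (y_high - y_low) / (v_high - v_low)  # slope of the regression line
    o = y_low - v_low * s                    # offset (intersect)
    return (y - o) / s                       # invert y = s * v + o
```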
  • a set of simple no reference feature measurements can be selected representing the most common kind of distortions namely blocking, blurriness and noise.
  • One feature measurement can be added to measure the amount of detail present in the encoded video.
  • four different continuity measurements can be performed: predictability (shows how well one frame can be predicted using the previous frame alone), motion continuity (a measure of the smoothness of the motion), color continuity (shows how much the color changes between two successive images) and edge continuity (shows how much edge regions change between two successive images).
  • predictability shows how well one frame can be predicted using the previous frame alone
  • motion continuity is a measure of the smoothness of the motion
  • color continuity shows how much the color changes between two successive images
  • edge continuity shows how much edge regions change between two successive images.
  • the following quantities can be used.
  • Predictability A predicted image is built by motion compensation using a simple block matching algorithm. The actual image and its prediction are then compared block by block. An 8 × 8 block is considered to be noticeably different if the SAD exceeds 384. To avoid that single pixels dominate the SAD measurement, both images are filtered, first with a Gaussian blur filter and afterwards with a median filter.
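  • A possible Python reading of this predictability measurement is sketched below; the filter parameters and the aggregation of the per-block decisions into a single score are our assumptions, as the text does not fix them:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, median_filter

def predictability(frame: np.ndarray, predicted: np.ndarray) -> float:
    """Share of 8x8 blocks whose SAD to the prediction stays below 384.

    frame:     current image (grayscale, 2-D array)
    predicted: its motion-compensated prediction from the previous frame
    """
    # Pre-filter both images so that single pixels do not dominate the SAD:
    # first a Gaussian blur, then a median filter (sizes are assumptions).
    a = median_filter(gaussian_filter(frame.astype(np.float64), sigma=1.0), size=3)
    b = median_filter(gaussian_filter(predicted.astype(np.float64), sigma=1.0), size=3)

    rows, cols = (a.shape[0] // 8) * 8, (a.shape[1] // 8) * 8
    noticeable, total = 0, 0
    for i in range(0, rows, 8):
        for j in range(0, cols, 8):
            sad = np.abs(a[i:i + 8, j:j + 8] - b[i:i + 8, j:j + 8]).sum()
            noticeable += int(sad > 384)  # block is noticeably different
            total += 1
    return 1.0 - noticeable / total
```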
  • Edge Continuity The actual image and its motion compensated prediction are compared using the Edge-PSNR algorithm as described in C. Lee, S. Cho, J. Choe, T. Jeong, W. Ahn and E. Lee: Objective video quality assessment, SPIE Journal of Optical Engineering, Volume 45, Issue 1, Jan. 2006.
  • Motion Continuity Two motion vector fields are calculated: one between the current and the previous frame, and one between the following and the current frame. The percentage of motion vectors for which the difference between the two corresponding motion vectors exceeds 5 pixels (either in x- or y-direction) determines the motion continuity.
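  • A sketch of this motion continuity measure, assuming the two motion vector fields are already available as arrays (how the fields are obtained, and reporting continuity as one minus the exceeding percentage, are our reading):

```python
import numpy as np

def motion_continuity(mv_prev: np.ndarray, mv_next: np.ndarray) -> float:
    """Share of consistent motion vectors between two vector fields.

    mv_prev: (H x W x 2) vectors between current and previous frame
    mv_next: (H x W x 2) vectors between following and current frame
    """
    diff = np.abs(mv_prev.astype(np.float64) - mv_next.astype(np.float64))
    # A vector pair is discontinuous if it differs by more than 5 pixels
    # in either the x- or the y-direction.
    discontinuous = (diff > 5.0).any(axis=-1)
    return 1.0 - float(discontinuous.mean())
```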
  • Color Continuity A color histogram with 51 bins for each RGB channel is calculated for the actual image and its prediction. Color continuity is then given as the linear correlation between those two histograms.
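  • The color continuity measurement translates almost directly into numpy; concatenating the three per-channel histograms before correlating is our assumption:

```python
import numpy as np

def color_continuity(frame: np.ndarray, predicted: np.ndarray) -> float:
    """Linear correlation between the 51-bin RGB histograms of an image
    and its motion-compensated prediction.

    frame, predicted: H x W x 3 arrays with 8-bit RGB channels.
    """
    hists = []
    for img in (frame, predicted):
        per_channel = [np.histogram(img[..., c], bins=51, range=(0, 256))[0]
                       for c in range(3)]
        hists.append(np.concatenate(per_channel).astype(np.float64))
    return float(np.corrcoef(hists[0], hists[1])[0, 1])
```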
  • All feature measurements can be done for each frame of the video separately and the mean value of all frames can then be used for further processing.
  • the above selected measurements are just one example for a set of variables that can be used for building such a model.
  • the presented variables are chosen for their simplicity; using more complex measurements for artifacts like noise or blur, as well as adding measurements for further artifacts (e.g. ringing), results in even more accurate models. In this case, preferably only no reference feature measurements are considered; by including some feature measurements that require the original video, an RR or FR metric could be built.
  • the nature of the multivariate calibration allows including an unrestricted number of fixed variables in the calibration step. If the calibration phase is done properly, fixed variables that do not contribute to the latent variable "video quality" do not spoil the calibration process.
  • the regression model will preferably contain these useless fixed variables with zero (or very close to zero) weight, and those variables can then be removed from the model.
  • MSC multiplicative signal correction
  • the two variables c and d are obtained by simple linear regression of the feature values of the sequence i against the average of the feature values of all calibration sequences.
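  • A minimal sketch of this MSC step; the feature matrix layout (sequences as rows) is our assumption:

```python
import numpy as np

def msc(features: np.ndarray) -> np.ndarray:
    """Multiplicative signal correction of a (sequences x features) matrix.

    Each row f_i is regressed against the mean feature vector of all
    calibration sequences (f_i ~ c * mean + d) and then corrected as
    f'_i = (f_i - d) / c.
    """
    mean = features.mean(axis=0)
    corrected = np.empty_like(features, dtype=np.float64)
    for i, row in enumerate(features):
        c, d = np.polyfit(mean, row, deg=1)  # simple linear regression
        corrected[i] = (row - d) / c
    return corrected
```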
  • Multivariate Regression with PLS The obtained feature values f'_mi can then be used together with the corresponding subjective ratings y_i, which form the column vector y, to build a regression model using the method of Partial Least Squares Regression (PLSR).
  • PLSR Partial Least Squares Regression
  • PLSR is an extension of Principal Component Regression (PCR) that tries to find the principal components (PC) that are most relevant not only for the interpretation of the variation in the input values in F but also for the variation in the output values y.
  • PC principal components
  • PCR is a bilinear regression method that consists of a Principal Component Analysis (PCA) of F' into the matrix T, which contains the PCs of F', followed by a regression of y on T.
  • PCA Principal Component Analysis
  • F' can be modelled as F' = 1 · f̄ + T · Pᵀ + E_F, with
  • T being the scores of the l input sequences,
  • P being the corresponding loadings,
  • f̄ representing the row vector of the mean values of the features, and
  • E_F being the error in F' that cannot be modelled.
  • the prediction ŷ can then be modelled as ŷ = 1 · ȳ + T · q + e, with q being the regression coefficients for the scores T and e the remaining prediction error.
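  • In practice the PLSR step can be sketched with scikit-learn's PLSRegression; the calibration data below are placeholders and the number of latent components is a free design choice, not a value from the patent:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Placeholder calibration data: 40 sequences with 9 MSC-corrected features
# each, plus the corresponding subjective ratings y from calibration tests.
rng = np.random.default_rng(0)
F_prime = rng.random((40, 9))
y = rng.random((40, 1))

pls = PLSRegression(n_components=4)  # number of latent variables (assumed)
pls.fit(F_prime, y)

# Quality prediction for new, MSC-corrected feature rows:
y_hat = pls.predict(F_prime[:5])
```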
  • the NR quality metric gained by the previous steps faces the problem that even the original video may contain a certain amount of blur or blocking, and different sequences not only have a different amount of detail but also have different motion properties.
  • the prediction accuracy for each single sequence is very high: the data points for one single sequence lie on one straight line only with unknown slope s and unknown offset o.
  • the overall prediction accuracy therefore can be improved by estimating the slope and the offset of these lines by calculating the predicted quality of the original video (y_orig) and of a low quality version of the video (y_low), preferably using the same quality predictor.
  • the proposed method determines these parameters by introducing at least one, preferably two, additional (coded) instances of the respective image or video and making a quality prediction not only for the video or image that should be evaluated but also for these additional instances.
  • the prediction values y' can preferably be refined using a sigmoid nonlinear correction.
  • the general sigmoid function is given as f(x) = 1 / (1 + e^(−x)).
  • the applied correction function is preferably very close to linear over a wide quality range.
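  • A sketch of such a correction; the parameters a and b would be fitted during calibration, and the values below are placeholders only:

```python
import numpy as np

def sigmoid_correction(y_prime: np.ndarray, a: float = 6.0, b: float = -3.0) -> np.ndarray:
    """Map the linear prediction y' through a general sigmoid.

    For values well inside the (0, 1) range this mapping stays close to
    linear, as stated above; a and b control slope and centre.
    """
    return 1.0 / (1.0 + np.exp(-(a * np.asarray(y_prime) + b)))
```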
  • the further processed information can be generated by processing the first processed visual information whose quality is to be determined.
  • This method preferably allows measuring the visual quality of processed images or videos.
  • the method preferably comprises the steps of extracting a number of features from the video or image I.
  • additional processed versions of the video or image, denoted as I_proc,n with n ∈ [1...N], are generated using the already processed video or image I as input to the processing step.
  • a number of features is preferably extracted from the videos or images I_proc,n.
  • the extracted features from I and I_proc,n are preferably combined into one quality value y.
  • the feature extraction process for I is performed in a way that no access to the reference image or video I_ref is needed.
  • the same feature extraction process can be applied on I and I_proc,n.
  • the additional processing preferably comprises encoding and decoding the image or video using a steerable video or image encoder. It is furthermore preferred that the encoding is done in a way that produces a processed I_proc,n that most probably has a visual quality lower than the visual quality of I.
  • the encoding can be done using the same encoding technology that was used to generate the processed image or video I from the reference image or video I_ref. It can be sufficient to generate only one additional instance I_proc, which is therefore an option.
  • the gained features are combined into one quality value by a weighted summation.
  • the weights can be adjusted according to the extracted features.
  • the basic weights can be gained by the use of training data.
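  • The combination step then reduces to a weighted sum; a minimal sketch (the feature layout and names are ours):

```python
import numpy as np

def combine_features(features_i: np.ndarray, features_proc: np.ndarray,
                     weights: np.ndarray) -> float:
    """Combine the features of I and of the additional instances I_proc,n
    into one quality value y by weighted summation; the weights are
    gained from training data."""
    f = np.concatenate([features_i, features_proc])
    return float(np.dot(weights, f))
```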
  • the invention allows improving the preciseness of objective visual quality metrics for images, videos or other visual information.
  • the invention can be used for images or videos where the picture quality is impaired by compression and/or transmission.
  • Methods for the compression of pictures and videos are e.g. JPEG, JPEG2000, MPEG-2 or AVC/H.264.
  • the invention can be used for images or videos with arbitrary data rate, and for pictures or videos which are compressed arbitrarily strongly.
  • the invention can be applied to images and videos which are transmitted over an arbitrary channel.
  • the channel can have a bandwidth which is restricted and/or error-prone.
  • the transmission can be packet- oriented or connection-oriented.
  • the images and videos can have arbitrary spatial and time resolution, i.e. arbitrary image size or pictures per second in case of videos.
  • the invention is also applicable on quality metrics which need the compressed/transmitted image as well as the original image or video.
  • the invention is also applicable on quality metrics which only require the compressed/transmitted picture or video. In the latter case, the calculated parameters of the regression function can be transmitted together with the image or video.
  • the channel over which the additional further processed visual information is transmitted can be arbitrary and does not have to be identical with the channel over which the actual image or video is transmitted. In the simplest case, the channel over which the further information is transmitted is regarded as having an unlimited bandwidth and being free of errors. For the transmission of further processed images or videos, multiple different channels can be employed.
  • Figure 1 shows a general use of a visual quality metric for an encoded video.
  • Figure 2 shows an example for the present invention.
  • Figure 3 shows a regression function according to the prior art as well as according to the present invention.
  • Figure 4 shows an example of the present invention where the quality is determined based on features extracted from the video.
  • Figure 5 shows an example of a preferred embodiment of the present invention.
  • Figure 1 shows a general setup of a visual quality metric of the full reference type or reduced reference type.
  • An encoded video is input into the visual quality metric 13.
  • the visual quality metric 13 generates an output value 14 which is an approximate measure of the visual quality of the encoded video 11. If the visual quality metric is of full reference or reduced reference type, it needs the input of the full or partial original video 12.
  • Figure 2 shows a block diagram of a method according to the present invention.
  • the quality of the encoded video 20 is determined with the visual quality metric 21a, which generates the value y.
  • a processed video with high picture quality 22 is produced.
  • a video or picture 23 with low picture quality is produced.
  • the picture quality of the video 22 with high picture quality is determined with the visual quality metric 21b.
  • the picture quality of the video 23 with low picture quality is calculated with the visual quality metric 21c, which is identical to the visual quality metric 21b.
  • a visual or actual image quality v_h is set, assumed or determined experimentally, which preferably lies in a region between 0.8 and 1.0 on a scale reaching from 0 to 1.
  • a visual or actual image quality v_n is set or estimated, preferably in a region between 0.1 and 0.3 on a scale reaching from 0 to 1.
  • the slope s and zero-point or offset o are then determined from the values y_h, y_n, v_h and v_n in the regression function determination 24. The determination 24 thus produces the slope s and the zero-point o.
  • Those values are input into the correction 25, where the quality y determined from the encoded video 20 in the visual quality metric 21a is corrected to yield the corrected quality value y'. If the visual quality metric is of the full reference or reduced reference type, parts of or the full original video 25 can be input into the visual quality metric 21.
  • Figure 3 shows the regression functions 31 and 32 for two different pictures according to the prior art (left diagram) as well as according to the present invention (right diagram).
  • the vertical axis 31 shows the quality determined from a quality metric while the horizontal axis 34 shows the visual or actual quality of the respective image.
  • the dashed line 35 shows an optimal regression function where the determined value is always equal to the actual value. It can be clearly seen that for both pictures 31 and 32, the regression functions determined in the present invention are much closer to the optimal regression function 35 than in the prior art.
  • Figure 4 shows the method according to the present invention where features are extracted from the original video 41.
  • the features are extracted in steps 43a, 43b and 43c.
  • the step 44 combines the quality values y obtained by applying the quality model 42 on the features of the original video 41 as well as on the features of a low quality video 45 and an encoded video 46 with unknown quality.
  • the low quality video 45 is generated from the original video 41 by encoding and decoding the original video 41. From the low quality video 45, the features are extracted in 43b and then input into the quality model 42.
  • the encoded video 46 is produced by encoding the original video 41, transmitting the encoded video and decoding it in the decoding step 47.
  • the features of the encoded video 46 are extracted in the feature extraction 43c and are then input into the quality model 42 to give the quality value y.
  • the regression function can be corrected and the value y obtained from the encoded video 46 can be corrected in the correcting means 44.
  • the corrected quality value y' output by the correction means 44 is then input into the sigmoid correction 49 to yield the final quality value y'' 40.
  • Figure 5 shows a method according to a preferred embodiment of the present invention.
  • a first processed image or video 50, which may be processed by compressing, transmitting or also just recording visual information, is on the one hand subject to feature extraction 53a and on the other hand subject to further processing 51.
  • the further processed image or video 52 generated from the image or video 50 in processing step 51 is then subject to feature extraction 53b.
  • the feature extraction 53a which extracts features from the first processed image or video 50 is the same as the feature extraction 53b which extracts features from the image or video 52.
  • the features extracted from the image or video 50 as well as from the processed image or video 52 are then combined into one quality value y 55.
  • the additional processing 51 can comprise encoding and/or decoding of the image or video 50, e.g. using a steerable video or image encoder.
  • encoding can be done in a way to produce a processed image or video 52 that most probably has a visual quality which is lower than the visual quality of the image or video 50.
  • the image or video 50 can also be a processed visual information or image or video.
  • Such a processed image or video 50 can e.g.
  • image or video 50 can also be some other reference image or video.
  • the combination of the features extracted in step 53a and 53b in step 54 can e.g. be done by weighted summation.
  • the weights of such a weighted summation can be adjusted according to the extracted features.
  • Basic weights can be gained by using training data.

Landscapes

  • Engineering & Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Analysing Materials By The Use Of Radiation (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)

Abstract

The invention relates to a method and an apparatus for determining the visual quality of processed visual information such as compressed images or videos.
PCT/EP2008/005693 2007-07-11 2008-07-11 Method and apparatus for determining the visual quality of processed visual information WO2009007133A2 (fr)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
EP07013551 2007-07-11
EP07013551.2 2007-07-11
EP07021460 2007-11-05
EP07021460.6 2007-11-05

Publications (2)

Publication Number Publication Date
WO2009007133A2 (fr) 2009-01-15
WO2009007133A3 WO2009007133A3 (fr) 2009-07-16

Family

ID=40229138

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2008/005693 WO2009007133A2 (fr) Method and apparatus for determining the visual quality of processed visual information

Country Status (1)

Country Link
WO (1) WO2009007133A2 (fr)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011134110A1 (fr) * 2010-04-30 2011-11-03 Thomson Licensing Method and apparatus for measuring video quality using at least one semi-supervised learning regressor for mean observer score prediction
EP2833639A1 (fr) * 2012-05-22 2015-02-04 Huawei Technologies Co., Ltd. Method and apparatus for assessing video quality
CN105264896A (zh) * 2014-05-08 2016-01-20 Huawei Device Co., Ltd. Method and apparatus for video quality detection

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5568400A (en) * 1989-09-01 1996-10-22 Stark; Edward W. Multiplicative signal correction method and apparatus
US20040001633A1 (en) * 2002-06-26 2004-01-01 Koninklijke Philips Electronics N.V. Objective method and system for estimating perceived image and video sharpness

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5568400A (en) * 1989-09-01 1996-10-22 Stark; Edward W. Multiplicative signal correction method and apparatus
US20040001633A1 (en) * 2002-06-26 2004-01-01 Koninklijke Philips Electronics N.V. Objective method and system for estimating perceived image and video sharpness

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
DE ANGELIS A ET AL: "Image Quality Assessment: an Overview and some Metrological Considerations" ADVANCED METHODS FOR UNCERTAINTY ESTIMATION IN MEASUREMENT, 2007 IEEE INTERNATIONAL WORKSHOP ON, IEEE, PI, 1 July 2007 (2007-07-01), pages 47-52, XP031152239 ISBN: 978-1-4244-0932-7 *
ROSARIO FEGHALI ET AL: "Video Quality Metric for Bit Rate Control via Joint Adjustment of Quantization and Frame Rate" IEEE TRANSACTIONS ON BROADCASTING, IEEE SERVICE CENTER, PISCATAWAY, NJ, US, vol. 53, no. 1, 1 March 2007 (2007-03-01), pages 441-446, XP011172020 ISSN: 0018-9316 *
SURESH S ET AL: "Image Quality Measurement Using Sparse Extreme Learning Machine Classifier" CONTROL, AUTOMATION, ROBOTICS AND VISION, 2006. ICARCV '06. 9TH INTERNATIONAL CONFERENCE ON, IEEE, PI, 1 December 2006 (2006-12-01), pages 1-6, XP031103420 ISBN: 978-1-4244-0341-7 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011134110A1 (fr) * 2010-04-30 2011-11-03 Thomson Licensing Method and apparatus for measuring video quality using at least one semi-supervised learning regressor for mean observer score prediction
US8824783B2 (en) 2010-04-30 2014-09-02 Thomson Licensing Method and apparatus for measuring video quality using at least one semi-supervised learning regressor for mean observer score prediction
EP2833639A1 (fr) * 2012-05-22 2015-02-04 Huawei Technologies Co., Ltd. Method and apparatus for assessing video quality
EP2833639A4 (fr) * 2012-05-22 2015-04-22 Huawei Tech Co Ltd Method and apparatus for assessing video quality
US10045051B2 (en) 2012-05-22 2018-08-07 Huawei Technologies Co., Ltd. Method and apparatus for assessing video quality
CN105264896A (zh) * 2014-05-08 2016-01-20 Huawei Device Co., Ltd. Method and apparatus for video quality detection
EP3076674A4 (fr) * 2014-05-08 2017-01-25 Huawei Device Co., Ltd. Method and apparatus for video quality detection

Also Published As

Publication number Publication date
WO2009007133A3 (fr) 2009-07-16

Similar Documents

Publication Publication Date Title
Korhonen Two-level approach for no-reference consumer video quality assessment
Eden No-reference estimation of the coding PSNR for H.264-coded sequences
US9756323B2 (en) Video quality objective assessment method based on spatiotemporal domain structure
Winkler Perceptual video quality metrics—A review
Pinson et al. A new standardized method for objectively measuring video quality
KR100798834B1 (ko) Video quality evaluation apparatus, video quality evaluation method, and recording medium storing a video quality evaluation program
Yang et al. A novel objective no-reference metric for digital video quality assessment
Thung et al. A survey of image quality measures
Winkler et al. Perceptual video quality and blockiness metrics for multimedia streaming applications
Narwaria et al. Low-complexity video quality assessment using temporal quality variations
WO2004008780A1 (fr) Method and apparatus for measuring the quality of video data
You et al. Attention modeling for video quality assessment: Balancing global quality and local quality
Feng et al. Saliency inspired full-reference quality metrics for packet-loss-impaired video
US20030039404A1 (en) Image processing
US8855213B2 (en) Restore filter for restoring preprocessed video image
Keimel et al. No-reference video quality evaluation for high-definition video
US20040175056A1 (en) Methods and systems for objective measurement of video quality
Konuk et al. A spatiotemporal no-reference video quality assessment model
Oelbaum et al. Rule-based no-reference video quality evaluation using additionally coded videos
WO2009007133A2 (fr) Method and apparatus for determining the visual quality of processed visual information
WO2010103112A1 (fr) Method and apparatus for no-reference video quality measurement
Oelbaum et al. Building a reduced reference video quality metric with very low overhead using multivariate data analysis
Keimel et al. Improving the prediction accuracy of video quality metrics
Bosse et al. A perceptually relevant shearlet-based adaptation of the PSNR
Oelbaum et al. A reduced reference video quality metric for AVC/H.264

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 08784732

Country of ref document: EP

Kind code of ref document: A2

NENP Non-entry into the national phase in:

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 08784732

Country of ref document: EP

Kind code of ref document: A2