CN100505895C - Video quality evaluation method - Google Patents

Video quality evaluation method

Info

Publication number
CN100505895C
CN100505895C · CN200510002201A
Authority
CN
China
Prior art keywords
video
frame
pixel
video quality
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN 200510002201
Other languages
Chinese (zh)
Other versions
CN1809175A
Inventor
罗忠
王静
杨副正
常义林
万帅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SnapTrack Inc
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN 200510002201 priority Critical patent/CN100505895C/en
Publication of CN1809175A publication Critical patent/CN1809175A/en
Application granted granted Critical
Publication of CN100505895C publication Critical patent/CN100505895C/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)

Abstract

This invention provides a video quality evaluation method that determines the quality of a video sequence according to pixel changes in a predetermined region of adjacent frames of the sequence. The method fully takes into account the changes of image content between adjacent frames, in particular the temporal changes of the pixels in moving-object regions, which strongly reflect perceived video quality, by using the moving-object region of the preceding frame as the reference for each frame.

Description

Video quality evaluation method
Technical field
The present invention relates to the field of multimedia communication, and in particular to a method for evaluating the quality of video in communication.
Background art
With the arrival of the multimedia information era, video processing and video communication technologies of all kinds are emerging in an endless stream, and video quality assessment technology is therefore becoming more and more important.
Video quality assessment plays an important role in fields such as video compression, video processing and video communication. The quality of service (QoS) of real-time and non-real-time video systems and of various video transmission channels is ultimately reflected in video quality, and video quality can in turn be used to adjust codec or channel parameters so that the quality stays within a range acceptable to viewers. Video quality also provides an understandable measure of the images produced by different codecs, which facilitates the design, evaluation and optimization of codecs and of graphics and image display systems that match human visual models. Video quality assessment is likewise of great significance to video communication equipment manufacturers and to telecom operators: competition among equipment manufacturers is increasingly fierce and customers' performance requirements in project tenders keep rising, so a manufacturer that can provide convincing video quality assessment results for its equipment gains a strong selling point; for telecom operators, video is a new class of service, and video quality assessment data can support service promotion, customer satisfaction surveys and the like. For ordinary customers, who may not understand the underlying technology, video quality assessment results have an important influence on their decisions. In addition, video quality assessment can be used to monitor video communication equipment in real time: when the output video quality is detected to be abnormal for a long period, it can be determined whether the equipment or the network is at fault, which supports problem localization and fault diagnosis.
Video quality assessment falls into two broad classes: subjective video quality assessment and objective video quality assessment.
Subjective video quality assessment relies on the participation of human test subjects, and its results are reliable. However, because the method places strict requirements on the human testers and its procedure is complex, subjective assessment is difficult to apply widely, and it is particularly unsuitable for applications with real-time requirements.
Objective video quality assessment measures image quality with quantitative methods and is computed automatically by a processor; it is efficient and requires no human participation. However, current objective methods all suffer from serious problems, chief among them that their results are inconsistent with subjective assessment: a video sequence with a high objective score does not necessarily receive a high subjective score, and vice versa. Moreover, two images with similar subjective scores may receive objective scores that are far apart.
Therefore, the right way to make video quality evaluation widely applicable is to develop a class of objective assessment methods that are simple, easy to implement, efficient and automatic, and, most importantly, whose results are close to the subjective quality perceived by humans. Here "close" should be measured in a statistical sense with appropriate mathematical indices. With such objective methods, results near human subjective perception can be obtained.
At present, objective video quality assessment methods fall into three categories: full reference model methods, which require the entire original video sequence; partial reference model methods, which require only some statistical features of the original sequence; and no-reference (referenceless) model methods, which require no information about the original sequence at all.
In practical applications, because the original video sequence is often difficult to obtain, full reference and partial reference methods cannot be widely deployed, and no-reference video quality evaluation is the direction in which video quality assessment is developing.
Current no-reference methods generally estimate quality from the severity of certain specific types of distortion in the video sequence. Three main implementations exist:
Method one predicts the peak signal-to-noise ratio (PSNR) of the compressed video from quantization parameters in the MPEG (Moving Picture Experts Group) video bitstream and assesses quality from it. Since PSNR is a typical objective quality index whose consistency with subjective assessment is low, the applicability of this method is limited.
Method two, as shown in Figure 1, extracts from the video under test various feature values that can reflect video quality, such as blocking artifacts, ringing artifacts, clipping, noise, contrast and sharpness, and then combines the distortion levels of these features to assess the quality of the video.
Blocking artifacts refer to discontinuities at block boundaries in the image, caused by the independent DCT (discrete cosine transform) and quantization of adjacent blocks during coding. The severity of blocking can be represented by measuring the discontinuity at block boundaries. Blocking is the main source of distortion in video compressed with block-DCT algorithms such as MPEG-1, MPEG-2, MPEG-4 and H.263.
Ringing artifacts refer to oscillations near high-contrast edges in the distorted image. The variance of pixel values in edge regions is usually used to measure their severity.
Clipping refers to the truncation of pixel values during image processing; sharpening techniques, for example, can cause clipping. Clipping loses high-frequency information and thereby causes aliasing effects. It is usually measured by the proportion of pixel values equal to the maximum or minimum possible value.
Noise refers to random changes introduced in the spatial or temporal domain during compression or processing. It can be estimated from the high-frequency components of smooth image regions.
Contrast refers to the dynamic range of the image luminance signal and reflects the difference in brightness between regions of interest and the background. It is usually computed from the histogram of pixel values.
Sharpness refers to the clarity of image contours and texture and is usually computed from the kurtosis of local image edges.
Because different compression and processing algorithms cause different types of distortion, the key difficulty of this method is how to combine the multiple feature values into a quality estimate. For video produced by an MPEG-2 encoder, blocking plays the dominant role, whereas for video produced by an encoder that applies a smoothing filter the block-boundary discontinuities are well smoothed and blocking no longer reflects the quality.
Since no feature-combination rule suitable for all videos has been found, this method likewise lacks general applicability.
Method three embeds label information, such as a watermark, in the original video before coding, and assesses quality from the integrity of the label information extracted from the decoded pictures. The principle of watermark-based quality assessment is shown in Figure 2.
In Figure 2, the encoder embeds a watermark image in the original video sequence; the decoder predicts, from the damaged locations and the degree of damage of the extracted watermark image, how much quantization, transmission errors and other impairments have degraded the video quality.
This method requires the watermark image to be embedded in the original video sequence before coding, and both the embedding method and the original watermark image must be known. For sequences without an embedded watermark, or when the embedding method and original watermark are unknown, no assessment is possible. Moreover, embedding a watermark in the original sequence itself degrades the video quality: a quality evaluation method that reduces quality in order to measure it is logically flawed.
In summary, among existing video quality evaluation methods, subjective assessment gives reliable results but has poor applicability and low efficiency, while the main problem of objective methods is the large gap between their results and the subjective results; in addition, some objective methods also have poor applicability, or themselves reduce the video quality.
Summary of the invention
The object of the present invention is to provide a video quality evaluation method that overcomes the poor applicability and low efficiency of subjective video quality assessment in the prior art.
To achieve this object, the technical solution provided by the invention comprises:
A video quality evaluation method, comprising:
determining the video quality of a video sequence according to pixel changes in a predetermined region of adjacent frames of the video sequence.
The predetermined region comprises: a moving-object region.
The method further comprises:
a. determining the motion vector field of each video frame in the video sequence;
b. determining the moving-object region in each video frame from the motion vector field of that frame;
c. determining, for each video frame, the mean of the squared pixel differences between its moving-object region and the corresponding region of its preceding frame;
d. determining the video quality of the video sequence from these mean squared pixel differences.
Step a further comprises:
a1. smoothing each video frame in the video sequence;
a2. determining the motion vector field of each smoothed video frame with a block matching algorithm.
The smoothing in step a1 comprises: two-dimensional Gaussian filtering.
Step a2 further comprises:
setting, in frame n of the video sequence, the block of size (2N_1+1) x (2N_1+1) whose geometric center is the pixel (a, b) to be f'_n(x, y), a-N_1 ≤ x ≤ a+N_1, b-N_1 ≤ y ≤ b+N_1;
performing a motion search in the region f'_{n-1}(m, n), a-N_2 ≤ m ≤ a+N_2, b-N_2 ≤ n ≤ b+N_2, of size (2N_2+1) x (2N_2+1) and centered on the pixel (a, b) in frame n-1 of the video sequence, and determining the motion vector (mv_{x,n}(a, b), mv_{y,n}(a, b)) of the pixel (a, b) as:
(mv_{x,n}(a,b), mv_{y,n}(a,b)) = \arg\min_{N_1-N_2 \le u,v \le N_2-N_1} SAD(u,v) = \arg\min_{N_1-N_2 \le u,v \le N_2-N_1} \sum_{k=-N_1}^{N_1} \sum_{l=-N_1}^{N_1} | f'_n(a+k, b+l) - f'_{n-1}(a+k+u, b+l+v) |
where x, y, m, n, u, v denote pixel coordinates, x, y, m, n, N_1, N_2 are positive integers, and N_2 > N_1;
determining the motion vector field of each video frame from the motion vectors of all pixels in that frame.
Step a2 further comprises:
dividing each video frame into neighbouring regions of size (2NW+1) x (2NW+1), the region half-height and half-width being NW;
determining the motion vector (mv_{x,n}(a, b), mv_{y,n}(a, b)) of the center pixel (a, b) of each neighbouring region as:
(mv_{x,n}(a,b), mv_{y,n}(a,b)) = \arg\min_{N_1-N_2 \le u,v \le N_2-N_1} SAD(u,v) = \arg\min_{N_1-N_2 \le u,v \le N_2-N_1} \sum_{k=-N_1}^{N_1} \sum_{l=-N_1}^{N_1} | f'_n(a+k, b+l) - f'_{n-1}(a+k+u, b+l+v) |
where f'_n(x, y), a-NW ≤ x ≤ a+NW, b-NW ≤ y ≤ b+NW, denotes the neighbouring region in frame n, x, y, u, v denote pixel coordinates, x, y, n, N_1, N_2 are positive integers, and N_2 > N_1;
taking the motion vector of the center pixel (a, b) of each neighbouring region as the motion vector of every pixel in that region;
determining the motion vector field of each video frame from the motion vectors of all pixels in that frame.
Step b further comprises:
b1. determining, for each video frame, the variance of the motion vectors in the (2N_3+1) x (2N_3+1) region centered on the pixel (a, b):
\sigma^2_{mv,n}(a,b) = \frac{1}{(2N_3+1)^2} \sum_{k=-N_3}^{N_3} \sum_{l=-N_3}^{N_3} \left( (mv_{x,n}(a+k,b+l) - \overline{mv}_{x,n}(a,b))^2 + (mv_{y,n}(a+k,b+l) - \overline{mv}_{y,n}(a,b))^2 \right)
where x, y denote pixel coordinates and x, y, N_3, n are positive integers;
b2. determining, for each video frame, the variance of the pixel values in the (2N_3+1) x (2N_3+1) region centered on the pixel (a, b):
\sigma^2_{f,n}(a,b) = \frac{1}{(2N_3+1)^2} \sum_{k=-N_3}^{N_3} \sum_{l=-N_3}^{N_3} \left( f'_n(a+k,b+l) - \overline{f'}_n(a,b) \right)^2
where \overline{f'}_n(a,b) is the mean pixel value in the (2N_3+1) x (2N_3+1) region,
\overline{f'}_n(a,b) = \frac{1}{(2N_3+1)^2} \sum_{k=-N_3}^{N_3} \sum_{l=-N_3}^{N_3} f'_n(a+k,b+l)
and N_3, n are positive integers;
b3. defining as the moving-object region the set of pixels whose motion vector variance is less than a first predetermined threshold, whose pixel value variance is greater than a second predetermined threshold, and whose horizontal and vertical motion vectors are both non-zero.
Let the pixel (a, b) in frame n of the video sequence be a pixel of the moving-object region.
Step c further comprises:
determining, from the motion vector field of the filtered frame n, the region of frame n-1 corresponding to the (2N_4+1) x (2N_4+1) region whose geometric center is the pixel (a, b) of frame n, and determining the squared pixel difference of the two regions:
D'_n(a,b) = \sum_{k=-N_4}^{N_4} \sum_{l=-N_4}^{N_4} \left( f'_n(a+k,b+l) - f'_{n-1}(a+k+mv_{x,n}(a,b), b+l+mv_{y,n}(a,b)) \right)^2
where x, y denote pixel coordinates and x, y, N_4, n are positive integers;
and the mean, over all pixels of the moving-object region of the filtered frame n, of the squared pixel differences between the (2N_4+1) x (2N_4+1) regions and their corresponding regions in frame n-1:
\overline{D}'_n = \frac{1}{N} \sum_{(x,y) \in I_n} D'_n(x,y)
where x, y denote pixel coordinates and x, y, n are positive integers;
determining, from the motion vector field of frame n before filtering, the region of the unfiltered frame n-1 corresponding to the (2N_4+1) x (2N_4+1) region whose geometric center is the pixel (a, b) of frame n, and determining the squared pixel difference of the two regions:
D_n(a,b) = \sum_{k=-N_4}^{N_4} \sum_{l=-N_4}^{N_4} \left( f_n(a+k,b+l) - f_{n-1}(a+k+mv_{x,n}(a,b), b+l+mv_{y,n}(a,b)) \right)^2
where x, y denote pixel coordinates and x, y, N_4, n are positive integers;
and the mean, over all pixels of the moving-object region of the unfiltered frame n, of the squared pixel differences between the (2N_4+1) x (2N_4+1) regions and their corresponding regions in the unfiltered frame n-1:
\overline{D}_n = \frac{1}{N} \sum_{(x,y) \in I_n} D_n(x,y)
where x, y denote pixel coordinates and x, y, n are positive integers.
Step d further comprises:
d1. determining the video quality of frame n of the video sequence as: Q_n = \overline{D}'_n \cdot \left( \alpha - (\overline{D}_n - \overline{D}'_n)/\overline{D}'_n \right)
where \alpha is a predefined parameter;
d2. determining the video quality of the video sequence from the mean of the video qualities of all video frames in the sequence.
Step d1 further comprises:
correcting the video quality of each video frame according to its image activity, giving Q'_n:
Q'_n = Q_n / \left( \beta + (\max(A_n, \gamma))^2 / \delta \right)
where A_n is the image activity and \beta, \gamma, \delta are predefined parameters.
The image activity A_n is:
A_n = \left| \overline{mv_{x,n}(x,y)} \right| + \left| \overline{mv_{y,n}(x,y)} \right|
and said \alpha comprises 2.5, said \beta comprises 2.5, said \gamma comprises 5, and said \delta comprises 30.
The method further comprises, before step a:
determining the video frames of the sequence that contain image scaling, and scaling them with a scaling algorithm.
As can be seen from the above technical solution, the present invention fully takes into account the changes of image content in a predetermined region of adjacent frames, in particular the changes of the pixels in the moving-object region, which strongly reflect the subjective quality of the video sequence. By determining the quality of each frame from the pixel changes of the moving-object region relative to the adjacent (preceding) frame, the assessment result of the invention approximates the subjective assessment result, and because the reference is the moving-object region of the preceding frame, the applicability of the invention is not restricted. Smoothing the video sequence increases the accuracy of the motion vector fields of the video frames and therefore the accuracy of the assessment result. Using the relation between the region differences of the smoothed adjacent frames and those of the original distorted adjacent frames adapts the method to the different distortion characteristics caused by different coding algorithms; determining the motion vector fields with a block matching algorithm and a scaling-matching algorithm, and correcting the result with the image activity, further improves the similarity between the assessment result of the invention and the subjective result. The technical solution of the invention thus improves the consistency of objective and subjective video quality assessment, the applicability of objective assessment methods, and the efficiency of video quality evaluation.
Description of drawings
Fig. 1 is a schematic diagram of prior-art assessment of the quality of a video under test from the distortion levels of various features;
Fig. 2 is a schematic diagram of prior-art video quality assessment by watermarking;
Fig. 3 is a schematic diagram of the block matching search for pixel motion vectors in the present invention;
Fig. 4 is a scatter plot of the video quality assessment results of the present invention against the corresponding subjective assessment results.
Embodiment
Objective video quality assessment requires no human participation and has the advantages of easy implementation, automatic execution and high efficiency. Making the objective result closer to the subjective result, so that it matches the quality people perceive, while keeping the scope of application of the objective assessment unrestricted, is of central importance to objective video quality assessment.
The reason people can make subjective quality judgements about a distorted video sequence is that they rely on processes of analysing, understanding and memorizing video information, and these processes always depend on the video information stored in the brain. In this sense the process of subjective video quality assessment is not completely "reference-free": its reference is the video information stored in the viewer's brain.
A video sequence has strong temporal correlation, and most of the content changes between adjacent frames are small. Because the human visual system exhibits temporal masking, large changes of image content between adjacent frames reduce the subjectively perceived quality; changes caused by image distortion have a particularly strong influence on subjective quality.
Because of the consistency of image content across adjacent frames of a video sequence, changes of the corresponding pixels of the moving-object region between adjacent frames have a particularly strong influence on subjective perception, i.e. on the subjective assessment result. It follows that the changes of image content between adjacent frames, and especially the pixel changes in the moving-object region, can fully reflect the subjective quality of the video sequence.
Thus, if a no-reference objective video quality evaluation method follows the way subjective assessment works, it can obtain results that match people's subjective perception of video quality, i.e. results close to the subjective assessment results.
The core of the present invention is therefore: determining the video quality of a video sequence according to pixel changes in a predetermined region of adjacent frames of the sequence.
The technical solution of the invention is described further below on the basis of this core idea.
The predetermined region in the adjacent frames comprises the moving-object region. The invention is described below taking as an example the moving-object region with high spatial complexity.
The invention first determines, in each frame of the video sequence, the moving-object region with high spatial complexity. This can be done by computing the motion vector field of each frame with a block matching algorithm. The specific procedure is as follows:
Let the width of each frame of the video sequence be W and its height H, and let the position of a pixel be denoted (x, y), where x is the horizontal coordinate (column) and y the vertical coordinate (row). The operator "*" denotes discrete convolution.
The invention processes the whole video sequence frame by frame, from the first frame to the last; only an arbitrary frame n is therefore described below.
Because random noise in the decoded (reconstructed) video frames affects the accurate estimation of the motion vector field, the invention first smooths each frame of the video sequence, for example with two-dimensional Gaussian filtering. The frame after two-dimensional Gaussian filtering can be expressed as:
f'_n(x, y) = f_n(x, y) * G(x, y), 0 ≤ x ≤ W-1, 0 ≤ y ≤ H-1    (1)
where G(x, y) is the two-dimensional Gaussian filter, f_n(x, y) (0 ≤ x ≤ W-1, 0 ≤ y ≤ H-1) is the distorted image of frame n, f'_n(x, y) (0 ≤ x ≤ W-1, 0 ≤ y ≤ H-1) is the filtered image of frame n, the image size is W x H, and
G(x, y) = \frac{1}{2\pi\sigma^2} \exp\left( -\frac{x^2 + y^2}{2\sigma^2} \right)    (2)
where \sigma^2 is the variance of the Gaussian distribution, which determines the degree of smoothing: the larger \sigma^2, the smoother the filtered image.
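To make the smoothing step concrete, here is a minimal Python sketch of equations (1)-(2), assuming each decoded frame is available as a 2-D NumPy array of luma values and using SciPy's gaussian_filter in place of an explicit convolution with G(x, y); the choice of sigma is left to the caller, as in the text.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_frame(frame: np.ndarray, sigma: float = 1.0) -> np.ndarray:
    """Two-dimensional Gaussian smoothing of one decoded frame, equation (1).

    gaussian_filter performs the discrete convolution f_n * G with a Gaussian
    kernel of variance sigma**2, i.e. the filter G(x, y) of equation (2).
    """
    return gaussian_filter(frame.astype(np.float64), sigma=sigma)
```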
After Gaussian filtering, the preceding image, i.e. frame n-1, is taken as the reference frame, and a block matching algorithm predicts a motion vector for every pixel of frame n, giving the motion vector field of the whole frame (mv_{x,n}(x, y), mv_{y,n}(x, y)) (0 ≤ x ≤ W-1, 0 ≤ y ≤ H-1).
The way the invention obtains the motion vector field of frame n with block matching is described in detail below with reference to Fig. 3.
In Fig. 3, panel (a) is frame n-1 of the video sequence, panel (b) is frame n, and the pixel (a, b) is an arbitrary pixel of frame n.
Method one: let the pixel (a, b) be the geometric center of a block B in frame n, where B can be expressed as f'_n(x, y), a-N_1 ≤ x ≤ a+N_1, b-N_1 ≤ y ≤ b+N_1, and N_1 is a positive integer.
In frame n-1, within the region MB centered on the pixel (a, b) and of size (2N_2+1) x (2N_2+1), the block that best matches block B is sought; the displacement between this best-matching block and block B is the motion vector of the pixel (a, b). MB can be expressed as:
f'_{n-1}(m, n), a-N_2 ≤ m ≤ a+N_2, b-N_2 ≤ n ≤ b+N_2
where N_2 must be greater than N_1.
With the sum of absolute differences (SAD) as the block matching criterion, the motion vector (mv_{x,n}(a, b), mv_{y,n}(a, b)) of the pixel (a, b) in frame n is:
(mv_{x,n}(a,b), mv_{y,n}(a,b)) = \arg\min_{N_1-N_2 \le u,v \le N_2-N_1} SAD(u,v) = \arg\min_{N_1-N_2 \le u,v \le N_2-N_1} \sum_{k=-N_1}^{N_1} \sum_{l=-N_1}^{N_1} | f'_n(a+k, b+l) - f'_{n-1}(a+k+u, b+l+v) |    (3)
where x, y, m, n, u, v denote pixel coordinates, x, y, m, n, N_1, N_2 are positive integers, and N_2 > N_1.
The motion vector of every pixel of the filtered frame n can be computed in this way, and the motion vector field (mv_{x,n}(x, y), mv_{y,n}(x, y)) of frame n is then obtained from the motion vectors of all pixels.
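The following Python sketch illustrates the full-search SAD block matching of equation (3) for a single pixel. It assumes the smoothed frames are 2-D NumPy arrays indexed as frame[a, b] (without distinguishing row/column order), that (a, b) lies at least n1 pixels from the frame border, and that the block and search-window half-sizes n1 and n2 are illustrative defaults rather than values prescribed by the patent.

```python
import numpy as np

def motion_vector(f_n, f_prev, a, b, n1=2, n2=7):
    """Equation (3): SAD block matching for the pixel (a, b) of frame n.

    f_n and f_prev are the smoothed current and previous frames; the block is
    (2*n1+1) x (2*n1+1) and the search window (2*n2+1) x (2*n2+1), with n2 > n1.
    """
    h, w = f_n.shape
    block = f_n[a - n1:a + n1 + 1, b - n1:b + n1 + 1].astype(np.float64)
    best_sad, best_uv = np.inf, (0, 0)
    for u in range(n1 - n2, n2 - n1 + 1):
        for v in range(n1 - n2, n2 - n1 + 1):
            ca, cb = a + u, b + v              # candidate block centre in frame n-1
            if ca - n1 < 0 or cb - n1 < 0 or ca + n1 >= h or cb + n1 >= w:
                continue                       # skip displacements leaving the frame
            cand = f_prev[ca - n1:ca + n1 + 1,
                          cb - n1:cb + n1 + 1].astype(np.float64)
            sad = np.abs(block - cand).sum()   # SAD(u, v)
            if sad < best_sad:
                best_sad, best_uv = sad, (u, v)
    return best_uv                             # (mv_x, mv_y) for pixel (a, b)
```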
Method two is a variant of method one that reduces computational complexity: the motion vector computed for the pixel (a, b) of the filtered frame n is taken as the motion vector of all pixels in a small surrounding region, i.e. every pixel of the region shares the vector (mv_{x,n}(a, b), mv_{y,n}(a, b)).
The specific procedure is: represent the small neighbourhood around the pixel (a, b) by a small rectangular region, introducing the parameter NW as the half-height and half-width of the neighbourhood, so that the neighbourhood can be expressed as f'_n(x, y), a-NW ≤ x ≤ a+NW, b-NW ≤ y ≤ b+NW.
Within this neighbourhood it is not necessary to compute a motion vector for every pixel; for each rectangular region of size (2NW+1) x (2NW+1) it suffices to obtain, by formula (3), the motion vector of the region's center pixel (a, b) and then to use it as the motion vector of every pixel of the neighbourhood.
If the neighbourhoods are laid out starting from the upper-left corner of the frame and W and H are not integer multiples of NW, the neighbourhoods at the right or lower edge of the image may have to be shrunk.
The motion vector of every pixel of the filtered frame n can likewise be computed in this way, and the motion vector field (mv_{x,n}(x, y), mv_{y,n}(x, y)) of frame n is then obtained from the motion vectors of all pixels.
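A sketch of method two under the same assumptions, reusing the motion_vector helper from the previous sketch: one vector is computed at the centre of each (2·NW+1) x (2·NW+1) neighbourhood and broadcast to all of its pixels. Pixels at the right and bottom edges that do not fit a full neighbourhood are simply left with zero vectors here, rather than handled with shrunken regions as the text describes.

```python
import numpy as np

def blockwise_motion_field(f_n, f_prev, nw=4, n1=2, n2=7):
    """Method two: one motion vector per (2*nw+1) x (2*nw+1) neighbourhood."""
    h, w = f_n.shape
    mv_x = np.zeros((h, w), dtype=int)
    mv_y = np.zeros((h, w), dtype=int)
    step = 2 * nw + 1
    for a in range(nw, h - nw, step):
        for b in range(nw, w - nw, step):
            u, v = motion_vector(f_n, f_prev, a, b, n1, n2)  # centre pixel, eq. (3)
            # assign the centre pixel's vector to every pixel of the neighbourhood
            mv_x[a - nw:a + nw + 1, b - nw:b + nw + 1] = u
            mv_y[a - nw:a + nw + 1, b - nw:b + nw + 1] = v
    return mv_x, mv_y
```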
Once the motion vector field of frame n has been obtained, the moving-object region with high spatial complexity in frame n can be determined from the consistency of the motion vectors of neighbouring pixels.
The specific procedure is: compute the variance of the motion vectors in the (2N_3+1) x (2N_3+1) region centered on the pixel (a, b) of frame n:
\sigma^2_{mv,n}(a,b) = \frac{1}{(2N_3+1)^2} \sum_{k=-N_3}^{N_3} \sum_{l=-N_3}^{N_3} \left( (mv_{x,n}(a+k,b+l) - \overline{mv}_{x,n}(a,b))^2 + (mv_{y,n}(a+k,b+l) - \overline{mv}_{y,n}(a,b))^2 \right)    (4)
where \overline{mv}_{x,n}(a, b) and \overline{mv}_{y,n}(a, b) are the means of mv_{x,n}(x, y) and mv_{y,n}(x, y) over this region:
\overline{mv}_{x,n}(a,b) = \frac{1}{(2N_3+1)^2} \sum_{k=-N_3}^{N_3} \sum_{l=-N_3}^{N_3} mv_{x,n}(a+k,b+l)
\overline{mv}_{y,n}(a,b) = \frac{1}{(2N_3+1)^2} \sum_{k=-N_3}^{N_3} \sum_{l=-N_3}^{N_3} mv_{y,n}(a+k,b+l)
The spatial complexity of the (2N_3+1) x (2N_3+1) region centered on the pixel (a, b) can be represented by the variance of its pixel values:
\sigma^2_{f,n}(a,b) = \frac{1}{(2N_3+1)^2} \sum_{k=-N_3}^{N_3} \sum_{l=-N_3}^{N_3} \left( f'_n(a+k,b+l) - \overline{f'}_n(a,b) \right)^2    (5)
where \overline{f'}_n(a, b) is the mean pixel value of this region:
\overline{f'}_n(a,b) = \frac{1}{(2N_3+1)^2} \sum_{k=-N_3}^{N_3} \sum_{l=-N_3}^{N_3} f'_n(a+k,b+l)
Let the first predetermined threshold be T_1 and the second predetermined threshold be T_2; the pixel set I_n of the moving-object region with high spatial complexity in frame n is then defined as:
I_n = \left\{ (x, y) \mid (\sigma^2_{mv,n}(x,y) < T_1) \wedge (\sigma^2_{f,n}(x,y) > T_2) \wedge (\overline{mv}_{x,n}(x,y) \ne 0) \wedge (\overline{mv}_{y,n}(x,y) \ne 0) \right\}    (6)
In other words, the set of all pixels whose motion vector variance is below the threshold T_1, whose pixel value variance is above the threshold T_2, and whose horizontal and vertical motion vectors are both non-zero constitutes the moving-object region with high spatial complexity.
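A Python sketch of equations (4)-(6), assuming per-pixel motion-vector arrays mv_x, mv_y and the smoothed frame are NumPy arrays; local means are computed with SciPy's uniform_filter, and t1, t2 stand in for the thresholds T_1 and T_2, whose actual values the patent leaves to be chosen in practice.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def moving_object_mask(mv_x, mv_y, f_smooth, n3=2, t1=1.0, t2=100.0):
    """Equations (4)-(6): boolean mask of the moving-object region I_n."""
    size = 2 * n3 + 1

    def local_var(img):
        # neighbourhood variance via E[x^2] - E[x]^2 over a size x size window
        img = img.astype(np.float64)
        m = uniform_filter(img, size)
        return uniform_filter(img * img, size) - m * m

    mv_var = local_var(mv_x) + local_var(mv_y)            # equation (4)
    px_var = local_var(f_smooth)                          # equation (5)
    mvx_mean = uniform_filter(mv_x.astype(np.float64), size)
    mvy_mean = uniform_filter(mv_y.astype(np.float64), size)
    # equation (6): low motion-vector variance, high pixel variance,
    # and non-zero mean motion in both directions
    return (mv_var < t1) & (px_var > t2) & (mvx_mean != 0) & (mvy_mean != 0)
```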
Once the moving-object region of frame n has been determined, the video quality of frame n can be determined from the mean of the squared pixel differences between the high-spatial-complexity moving-object region of frame n and the corresponding region of frame n-1.
The specific procedure is: from the motion vector field of the filtered frame n, determine the region of frame n-1 corresponding to the (2N_4+1) x (2N_4+1) region whose geometric center is the pixel (a, b) of frame n, and compute the squared pixel difference of the two regions:
D'_n(a,b) = \sum_{k=-N_4}^{N_4} \sum_{l=-N_4}^{N_4} \left( f'_n(a+k,b+l) - f'_{n-1}(a+k+mv_{x,n}(a,b), b+l+mv_{y,n}(a,b)) \right)^2    (7)
From the motion vector field of frame n before filtering, determine the region of frame n-1 corresponding to the same (2N_4+1) x (2N_4+1) region of frame n and compute the squared pixel difference of the two regions:
D_n(a,b) = \sum_{k=-N_4}^{N_4} \sum_{l=-N_4}^{N_4} \left( f_n(a+k,b+l) - f_{n-1}(a+k+mv_{x,n}(a,b), b+l+mv_{y,n}(a,b)) \right)^2    (8)
Formulas (7) and (8) are evaluated for every pixel of the set I_n, each time over the (2N_4+1) x (2N_4+1) region centered on that pixel, and the means of the squared pixel differences are:
\overline{D}_n = \frac{1}{N} \sum_{(x,y) \in I_n} D_n(x,y)    (9)
\overline{D}'_n = \frac{1}{N} \sum_{(x,y) \in I_n} D'_n(x,y)    (10)
In formulas (9) and (10), N is the number of pixels in the set I_n.
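The per-region squared differences and their means (equations (7)-(10)) can be sketched as follows, again in Python with NumPy arrays. Here mask is the boolean moving-object mask I_n from the previous sketch, f_n and f_prev are the unfiltered frames, fs_n and fs_prev their smoothed versions, and patches that would leave the frame are simply skipped, a boundary policy assumed for the sketch rather than specified by the patent.

```python
import numpy as np

def region_difference_means(f_n, f_prev, fs_n, fs_prev, mv_x, mv_y, mask, n4=2):
    """Equations (7)-(10): mean squared motion-compensated patch differences
    over the moving-object region, before (D_n) and after (D'_n) smoothing."""
    h, w = f_n.shape
    d_sum = dp_sum = 0.0
    count = 0
    for a, b in np.argwhere(mask):
        u, v = int(mv_x[a, b]), int(mv_y[a, b])
        inside = (n4 <= a < h - n4 and n4 <= b < w - n4 and
                  n4 <= a + u < h - n4 and n4 <= b + v < w - n4)
        if not inside:
            continue  # patch or its motion-compensated counterpart leaves the frame
        cur = f_n[a - n4:a + n4 + 1, b - n4:b + n4 + 1].astype(np.float64)
        ref = f_prev[a + u - n4:a + u + n4 + 1,
                     b + v - n4:b + v + n4 + 1].astype(np.float64)
        cur_s = fs_n[a - n4:a + n4 + 1, b - n4:b + n4 + 1].astype(np.float64)
        ref_s = fs_prev[a + u - n4:a + u + n4 + 1,
                        b + v - n4:b + v + n4 + 1].astype(np.float64)
        d_sum += ((cur - ref) ** 2).sum()        # equation (8)
        dp_sum += ((cur_s - ref_s) ** 2).sum()   # equation (7)
        count += 1
    if count == 0:
        return 0.0, 0.0                          # no usable moving-object pixels
    return d_sum / count, dp_sum / count         # means of eqs (9) and (10)
```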
Different coding methods cause distortions with different characteristics: reconstructed images from intra-frame coding, for example, usually contain more random noise, so their objective signal-to-noise ratio is lower, yet their subjective quality is relatively good, whereas reconstructed images that have been smoothed have a higher signal-to-noise ratio but are more blurred. To adapt to these differences, the video quality of frame n is defined as:
Q_n = \overline{D}'_n \cdot \left( \alpha - (\overline{D}_n - \overline{D}'_n)/\overline{D}'_n \right)    (11)
where \alpha is a predefined parameter that can be determined experimentally, e.g. \alpha = 2.5.
Because the human eye is more tolerant of distortion on fast-moving objects, the video quality needs to be corrected according to the activity of the image.
The image activity can be computed from the mean motion vector of the image:
A_n = \left| \overline{mv_{x,n}(x,y)} \right| + \left| \overline{mv_{y,n}(x,y)} \right|, 0 ≤ x ≤ W-1, 0 ≤ y ≤ H-1    (12)
The video quality score is then corrected to:
Q'_n = Q_n / \left( \beta + (\max(A_n, \gamma))^2 / \delta \right)    (13)
where \beta, \gamma, \delta are predefined parameters that can be determined experimentally, e.g. \beta = 2.5, \gamma = 5, \delta = 30.
The video quality of every frame of the whole video sequence can be obtained with the above method, and the overall quality assessment result is obtained by averaging the video qualities of all frames of the sequence.
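Finally, a sketch of equations (11)-(13) and the sequence-level average. The default parameter values are the experimentally determined examples quoted above (alpha = 2.5, beta = 2.5, gamma = 5, delta = 30); mv_x and mv_y are the motion-vector arrays of the frame, and the zero-denominator fallback is an assumption of the sketch.

```python
import numpy as np

def frame_quality(d_bar, dp_bar, mv_x, mv_y,
                  alpha=2.5, beta=2.5, gamma=5.0, delta=30.0):
    """Equations (11)-(13): per-frame quality with the image-activity correction."""
    if dp_bar == 0:
        return 0.0                                           # assumed fallback
    q_n = dp_bar * (alpha - (d_bar - dp_bar) / dp_bar)       # equation (11)
    a_n = abs(np.mean(mv_x)) + abs(np.mean(mv_y))            # equation (12)
    return q_n / (beta + max(a_n, gamma) ** 2 / delta)       # equation (13)

def sequence_quality(per_frame_scores):
    """Sequence score: the mean of the per-frame quality scores."""
    return float(np.mean(per_frame_scores))
```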
In the above embodiment, to make the assessment of video frames containing image scaling more reliable, a scaling-matching algorithm can first be applied to each video frame before determining the motion vector field of each frame of the sequence, so that image scaling does not impair the accuracy of the motion vector field.
The closeness of the assessment result of the invention to the subjective assessment result is illustrated below with a concrete test video sequence.
The test material is the set of test video sequences used in Phase I of the full-reference quality assessment experiments of the VQEG (Video Quality Experts Group): the 625/50 format, sequences 2-10, HRCs 1-16, i.e. 9 sequences, each processed under 16 different HRC conditions, giving 9 x 16 = 144 different cases in total.
According to the method prescribed by VQEG for evaluating the precision of video quality assessment models, the performance of an assessment method is quantified mainly with four measures: root mean square error (RMSE), the Pearson correlation coefficient, the Spearman rank order correlation coefficient, and the outlier ratio (OR).
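For reference, the four VQEG accuracy measures named above can be computed from paired objective and subjective scores as in the following sketch (SciPy's pearsonr and spearmanr supply the two correlation coefficients). The outlier-ratio threshold used here, errors larger than twice the RMSE, is a common convention assumed for illustration rather than taken from the patent.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def model_accuracy(objective, subjective):
    """RMSE, Pearson CC, Spearman rank CC and outlier ratio for paired scores."""
    objective = np.asarray(objective, dtype=float)
    subjective = np.asarray(subjective, dtype=float)
    err = objective - subjective
    rmse = float(np.sqrt(np.mean(err ** 2)))
    pcc, _ = pearsonr(objective, subjective)     # prediction accuracy
    scc, _ = spearmanr(objective, subjective)    # prediction monotonicity
    outlier_ratio = float(np.mean(np.abs(err) > 2 * rmse))  # assumed threshold
    return rmse, pcc, scc, outlier_ratio
```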
Using the video quality evaluation method of the present invention, the performance parameters of the resulting model are shown in Table 1.
Table 1

Test sequences                       RMSE     PCC      SCC      OR
Sequences other than 3 and 8         0.1218   0.8496   0.7968   0.6429
Sequences other than 8               0.1860   0.7673   0.7409   0.6719
All sequences                        0.2196   0.6512   0.6430   0.8531
As can be seen from Table 1, compared with the subjective assessment results, the objective scores obtained by the no-reference objective method proposed by the invention have a small RMSE, reflecting good prediction accuracy; a large Pearson correlation coefficient and Spearman rank correlation coefficient, reflecting good prediction monotonicity; and a small OR, reflecting good prediction consistency.
Among the test sequences, sequence 8 ("Horizontal scrolling 2") is not a natural-scene image; the invention rates the quality of sequence 8 higher than the subjective assessment does, which sharply degrades the measured performance of the invention.
Image scaling in sequence 3 ("Harp") affects the determination of the motion vector field and therefore the accuracy with which the moving-object regions of adjacent frames are predicted, degrading the assessment performance of the invention.
If the invention first applies a scaling-matching algorithm to the video sequence, the consistency between its assessment results and the subjective results can be improved.
For video sequences other than sequences 3 and 8, i.e. for distorted natural-scene sequences, the assessment results of the invention agree well with the subjective assessment results.
The relation between the assessment results of the invention and the subjective assessment results, represented as a scatter diagram, is shown in Fig. 4.
In Fig. 4, the horizontal coordinate u is the assessment result of the invention and the vertical coordinate v is the subjective assessment result. Each tested video sequence yields a pair of values (u, v), and the subjective result and the result of the invention for that sequence are plotted as the point with coordinates (u, v); the assessment results of all tested sequences thus correspond to a set of coordinate points.
The dispersion of this set of points about the line u = v measures the degree of agreement between the results of the invention and the subjective results: the more the points concentrate near the line u = v, the higher the agreement, and vice versa.
The points in Fig. 4 do not include the assessment results of sequences 3 and 8.
As can be seen from Fig. 4, the assessment results of the invention and the subjective assessment results are concentrated near the line u = v, which fully shows that the assessment results of the invention approximate the subjective assessment results.
Although the invention has been described through embodiments, those of ordinary skill in the art will appreciate that many variations and modifications are possible without departing from the spirit of the invention, and the claims of the present application are intended to cover such variations and modifications.

Claims (13)

1. A video quality evaluation method, characterized by comprising:
determining pixel changes in a predetermined region of adjacent frames of a video sequence;
determining the video quality of the video sequence according to the pixel changes in the predetermined region of the adjacent frames, wherein the greater the pixel changes in the predetermined region of the adjacent frames, the more the video quality of the video sequence decreases.
2. The video quality evaluation method according to claim 1, characterized in that the predetermined region comprises: a moving-object region.
3. The video quality evaluation method according to claim 2, characterized in that the method specifically comprises the steps of:
a. determining the motion vector field of each video frame in the video sequence;
b. determining the moving-object region in each video frame from the motion vector field of that frame;
c. determining, for each video frame, the mean of the squared pixel differences between its moving-object region and the corresponding region of its preceding frame;
d. determining the video quality of the video sequence from these mean squared pixel differences.
4. The video quality evaluation method according to claim 3, characterized in that step a specifically comprises the steps of:
a1. smoothing each video frame in the video sequence;
a2. determining the motion vector field of each smoothed video frame with a block matching algorithm.
5. The video quality evaluation method according to claim 4, characterized in that the smoothing in step a1 comprises: two-dimensional Gaussian filtering.
6. The video quality evaluation method according to claim 4, characterized in that step a2 specifically comprises:
setting, in frame n of the video sequence, the block of size (2N_1+1) x (2N_1+1) whose geometric center is the pixel (a, b) to be f'_n(x, y), a-N_1 ≤ x ≤ a+N_1, b-N_1 ≤ y ≤ b+N_1;
performing a motion search in the region f'_{n-1}(m, n), a-N_2 ≤ m ≤ a+N_2, b-N_2 ≤ n ≤ b+N_2, of size (2N_2+1) x (2N_2+1) and centered on the pixel (a, b) in frame n-1 of the video sequence, and determining the motion vector (mv_{x,n}(a, b), mv_{y,n}(a, b)) of the pixel (a, b) as:
(mv_{x,n}(a,b), mv_{y,n}(a,b)) = \arg\min_{N_1-N_2 \le u,v \le N_2-N_1} SAD(u,v) = \arg\min_{N_1-N_2 \le u,v \le N_2-N_1} \sum_{k=-N_1}^{N_1} \sum_{l=-N_1}^{N_1} | f'_n(a+k, b+l) - f'_{n-1}(a+k+u, b+l+v) |
wherein x, y, m, n, u, v denote pixel coordinates, x, y, m, n, N_1, N_2 are positive integers, N_2 > N_1, and the pixel (a, b) is an arbitrary pixel of frame n;
determining the motion vector field of each video frame from the motion vectors of all pixels in that frame.
7. The video quality evaluation method according to claim 4, characterized in that step a2 specifically comprises:
dividing each video frame into neighbouring regions of size (2NW+1) x (2NW+1), the region half-height and half-width being NW;
determining the motion vector (mv_{x,n}(a, b), mv_{y,n}(a, b)) of the center pixel (a, b) of each neighbouring region as:
(mv_{x,n}(a,b), mv_{y,n}(a,b)) = \arg\min_{N_1-N_2 \le u,v \le N_2-N_1} SAD(u,v) = \arg\min_{N_1-N_2 \le u,v \le N_2-N_1} \sum_{k=-N_1}^{N_1} \sum_{l=-N_1}^{N_1} | f'_n(a+k, b+l) - f'_{n-1}(a+k+u, b+l+v) |
wherein f'_n(x, y), a-NW ≤ x ≤ a+NW, b-NW ≤ y ≤ b+NW, denotes the neighbouring region in frame n, x, y, u, v denote pixel coordinates, x, y, n, N_1, N_2 are positive integers, N_2 > N_1, and the pixel (a, b) is an arbitrary pixel of frame n;
taking the motion vector of the center pixel (a, b) of each neighbouring region as the motion vector of every pixel in that region;
determining the motion vector field of each video frame from the motion vectors of all pixels in that frame.
8. The video quality evaluation method according to claim 3, characterized in that step b specifically comprises the steps of:
b1. determining, for each video frame, the variance of the motion vectors in the (2N_3+1) x (2N_3+1) region centered on the pixel (a, b):
\sigma^2_{mv,n}(a,b) = \frac{1}{(2N_3+1)^2} \sum_{k=-N_3}^{N_3} \sum_{l=-N_3}^{N_3} \left( (mv_{x,n}(a+k,b+l) - \overline{mv}_{x,n}(a,b))^2 + (mv_{y,n}(a+k,b+l) - \overline{mv}_{y,n}(a,b))^2 \right);
wherein x, y denote pixel coordinates, x, y, N_3, n are positive integers, and the pixel (a, b) is an arbitrary pixel of frame n;
b2. determining, for each video frame, the variance of the pixel values in the (2N_3+1) x (2N_3+1) region centered on the pixel (a, b):
\sigma^2_{f,n}(a,b) = \frac{1}{(2N_3+1)^2} \sum_{k=-N_3}^{N_3} \sum_{l=-N_3}^{N_3} \left( f'_n(a+k,b+l) - \overline{f'}_n(a,b) \right)^2;
wherein \overline{f'}_n(a,b) is the mean pixel value in the (2N_3+1) x (2N_3+1) region, and
\overline{f'}_n(a,b) = \frac{1}{(2N_3+1)^2} \sum_{k=-N_3}^{N_3} \sum_{l=-N_3}^{N_3} f'_n(a+k,b+l);
wherein N_3, n are positive integers;
b3. defining as the moving-object region the set of pixels whose motion vector variance is less than a first predetermined threshold, whose pixel value variance is greater than a second predetermined threshold, and whose horizontal and vertical motion vectors are both non-zero.
9. The video quality evaluation method according to claim 3, characterized in that:
the pixel (a, b) in frame n of the video sequence is set to be a pixel of the moving-object region, the pixel (a, b) being an arbitrary pixel of frame n;
and step c specifically comprises:
determining, from the motion vector field of the filtered frame n, the region of frame n-1 corresponding to the (2N_4+1) x (2N_4+1) region whose geometric center is the pixel (a, b) of frame n, and determining the squared pixel difference of the two regions:
D'_n(a,b) = \sum_{k=-N_4}^{N_4} \sum_{l=-N_4}^{N_4} \left( f'_n(a+k,b+l) - f'_{n-1}(a+k+mv_{x,n}(a,b), b+l+mv_{y,n}(a,b)) \right)^2;
wherein x, y denote pixel coordinates and x, y, N_4, n are positive integers;
and the mean, over all pixels of the moving-object region of the filtered frame n, of the squared pixel differences between the (2N_4+1) x (2N_4+1) regions and their corresponding regions in frame n-1:
\overline{D}'_n = \frac{1}{N} \sum_{(x,y) \in I_n} D'_n(x,y);
wherein x, y denote pixel coordinates and x, y, n are positive integers;
determining, from the motion vector field of frame n before filtering, the region of the unfiltered frame n-1 corresponding to the (2N_4+1) x (2N_4+1) region whose geometric center is the pixel (a, b) of frame n, and determining the squared pixel difference of the two regions:
D_n(a,b) = \sum_{k=-N_4}^{N_4} \sum_{l=-N_4}^{N_4} \left( f_n(a+k,b+l) - f_{n-1}(a+k+mv_{x,n}(a,b), b+l+mv_{y,n}(a,b)) \right)^2;
wherein x, y denote pixel coordinates and x, y, N_4, n are positive integers;
and the mean, over all pixels of the moving-object region of the unfiltered frame n, of the squared pixel differences between the (2N_4+1) x (2N_4+1) regions and their corresponding regions in the unfiltered frame n-1:
\overline{D}_n = \frac{1}{N} \sum_{(x,y) \in I_n} D_n(x,y);
wherein x, y denote pixel coordinates and x, y, n are positive integers.
10. The video quality evaluation method according to claim 9, characterized in that step d specifically comprises the steps of:
d1. determining the video quality of frame n of the video sequence as: Q_n = \overline{D}'_n \cdot \left( \alpha - (\overline{D}_n - \overline{D}'_n)/\overline{D}'_n \right);
wherein \alpha is a predefined parameter;
d2. determining the video quality of the video sequence from the mean of the video qualities of all video frames in the sequence.
11. The video quality evaluation method according to claim 10, characterized in that step d1 specifically comprises:
determining, according to the image activity of each video frame, its corrected video quality Q'_n as:
Q'_n = Q_n / \left( \beta + (\max(A_n, \gamma))^2 / \delta \right);
wherein A_n is the image activity and \beta, \gamma, \delta are predefined parameters.
12. The video quality evaluation method according to claim 11, characterized in that the image activity A_n is:
A_n = \left| \overline{mv_{x,n}(x,y)} \right| + \left| \overline{mv_{y,n}(x,y)} \right|;
and said \alpha comprises 2.5, said \beta comprises 2.5, said \gamma comprises 5, and said \delta comprises 30.
13. The video quality evaluation method according to claim 3, characterized in that the method further comprises, before step a:
determining the video frames of the video sequence that contain image scaling, and scaling them with a scaling algorithm.
CN 200510002201 2005-01-17 2005-01-17 Video quality evaluation method Expired - Fee Related CN100505895C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 200510002201 CN100505895C (en) 2005-01-17 2005-01-17 Video quality evaluation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 200510002201 CN100505895C (en) 2005-01-17 2005-01-17 Video quality evaluation method

Publications (2)

Publication Number Publication Date
CN1809175A CN1809175A (en) 2006-07-26
CN100505895C true CN100505895C (en) 2009-06-24

Family

ID=36840821

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 200510002201 Expired - Fee Related CN100505895C (en) 2005-01-17 2005-01-17 Video quality evaluation method

Country Status (1)

Country Link
CN (1) CN100505895C (en)

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101557516B (en) * 2008-04-09 2011-04-06 北京中创信测科技股份有限公司 Video quality evaluation method and device
CN101626506B (en) 2008-07-10 2011-06-01 华为技术有限公司 Method, device and system for evaluating quality of video code stream
WO2010009637A1 (en) 2008-07-21 2010-01-28 华为技术有限公司 Method, system and equipment for evaluating video quality
CN101448176B (en) * 2008-12-25 2010-06-16 华东师范大学 Method for evaluating quality of streaming video based on video characteristics
CN102349296B (en) * 2009-03-13 2016-03-09 瑞典爱立信有限公司 For the treatment of the method and apparatus of coded bit stream
CN101599170B (en) * 2009-06-26 2011-09-14 东方网力科技股份有限公司 Image noise evaluation method and image noise evaluation device
CN101998137B (en) 2009-08-21 2016-09-07 华为技术有限公司 Video quality parameter acquisition methods and device and electronic equipment
WO2011150654A1 (en) * 2010-12-31 2011-12-08 华为技术有限公司 Video quality estimate method, terminal, server and system
CN102158729B (en) * 2011-05-05 2012-11-28 西北工业大学 Method for objectively evaluating encoding quality of video sequence without reference
CN102883179B (en) * 2011-07-12 2015-05-27 中国科学院计算技术研究所 Objective evaluation method of video quality
CN103581662B (en) * 2012-07-26 2016-08-31 腾讯科技(深圳)有限公司 video definition measuring method and system
CN103945214B (en) * 2013-01-23 2016-03-30 中兴通讯股份有限公司 End side time-domain method for evaluating video quality and device
CN104463339A (en) * 2014-12-23 2015-03-25 合一网络技术(北京)有限公司 Multimedia resource producer assessment method and device
CN105100789B (en) * 2015-07-22 2018-05-15 天津科技大学 A kind of method for evaluating video quality
CN107230208B (en) * 2017-06-27 2020-10-09 江苏开放大学 Image noise intensity estimation method of Gaussian noise
CN108184117B (en) * 2018-01-10 2021-11-26 北京工业大学 Content-based bit stream layer video quality evaluation model
CN115689968A (en) * 2021-07-22 2023-02-03 中兴通讯股份有限公司 Code stream processing method and device, terminal equipment and storage medium
CN116703742B (en) * 2022-11-04 2024-05-17 荣耀终端有限公司 Method for identifying blurred image and electronic equipment

Also Published As

Publication number Publication date
CN1809175A (en) 2006-07-26


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C41 Transfer of patent application or patent right or utility model
TR01 Transfer of patent right

Effective date of registration: 20160425

Address after: California, USA

Patentee after: Snaptrack, Inc.

Address before: 518129 Bantian HUAWEI headquarters office building, Longgang District, Guangdong, Shenzhen

Patentee before: Huawei Technologies Co., Ltd.

CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20090624

Termination date: 20190117