CN102984540A - Video quality assessment method estimated on basis of macroblock domain distortion degree - Google Patents


Info

Publication number
CN102984540A
CN102984540A (application CN2012105325655A / CN201210532565A)
Authority
CN
China
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2012105325655A
Other languages
Chinese (zh)
Inventor
陈耀武
林翔宇
田翔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN2012105325655A priority Critical patent/CN102984540A/en
Publication of CN102984540A publication Critical patent/CN102984540A/en
Pending legal-status Critical Current

Landscapes

  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention discloses a video quality assessment method based on macroblock-domain distortion estimation. The method comprises the following steps: (1) dividing each video frame into macroblocks; (2) calculating the blocking-artifact distortion; (3) calculating the blurring-artifact distortion; (4) calculating the luminance contrast; (5) calculating the texture complexity; (6) calculating the motion-intensity contrast; (7) calculating the motion-direction consistency; (8) calculating the visual perceptibility; and (9) calculating the video quality score. The assessment model is simple; it obtains an objective quality score from the video under evaluation alone, giving it high flexibility; moreover, it produces reasonably accurate results for a variety of video scenes and therefore generalizes well.

Description

A video quality assessment method based on macroblock-domain distortion estimation
Technical field
The invention belongs to the technical field of video quality assessment, and specifically relates to a video quality assessment method based on macroblock-domain distortion estimation.
Background technology
With the rapid development of computer and network communication technology, the demand for multimedia information keeps growing. In recent years, video-related applications have spread across many fields, such as video conferencing, video surveillance and mobile TV. In these applications the video is compressed and transmitted before it reaches the receiver, and both processes tend to degrade its quality. To obtain a better subjective result, the video quality must be assessed so that the parameters of the encoder and the transmission channel can be adjusted accordingly. Since the ultimate receiver of video is the human eye, human observation is considered the most accurate way to evaluate video quality. However, because the amount of video data is enormous, subjective assessment by human observers consumes a great deal of manpower and time and is unsuitable for large-scale practical use. How to build a video quality model from the characteristics of the human visual system (HVS), so that a computer can carry out the assessment automatically, has therefore become a significant problem.
Objective video quality assessment (Video Objective Quality Assessment) refers to analyzing a video with a mathematical model and scoring it automatically on a fixed scale. According to the degree of dependence on the original video, objective methods fall into three classes: full-reference, reduced-reference and no-reference. Full-reference and reduced-reference methods both require extra bandwidth to transmit the original video or related side information, which greatly limits their practical value. By contrast, a no-reference method needs no information about the original video and computes quality directly from the video under evaluation, giving it better flexibility, adaptability and a wider range of applications. In network multimedia applications in particular, no-reference objective assessment plays an important role in Quality of Service (QoS) monitoring at the server and Quality of Experience (QoE) monitoring at the terminal: based on the quality-assessment feedback, the video server can dynamically adjust encoder and channel parameters to keep transmission stable and improve the quality at the receiving end. In addition, no-reference objective assessment can stand in for the human eye to compare, fairly, the output quality of different video codecs, providing a reference for the video receiver to make the best choice.
Existing video quality assessment methods have achieved some success, and several relatively mature models have been built, such as traditional methods based on PSNR (peak signal-to-noise ratio), and the SSIM (structural similarity) based method proposed by Wang Zhou et al. in "Image quality assessment: From error visibility to structural similarity" (IEEE Transactions on Image Processing, 2004, 13(4)). However, these methods do not take the role of the HVS in quality assessment into account and ignore the influence of video content on perceived quality, so their accuracy still needs improvement; they are also hard to apply to videos of different scenes, and their generality is limited.
Summary of the invention
To address the above deficiencies of the prior art, the invention provides a video quality assessment method based on macroblock-domain distortion estimation; the resulting quality index is more accurate and can meet the needs of videos of various scenes.
A video quality assessment method based on macroblock-domain distortion estimation comprises the following steps:
(1) dividing each frame of the video under evaluation into macroblocks;
(2) calculating the blocking-artifact distortion of each macroblock;
(3) calculating the blurring-artifact distortion of each macroblock;
(4) calculating the luminance contrast of each macroblock;
(5) calculating the texture complexity of each macroblock;
(6) calculating the motion-intensity contrast of each macroblock;
(7) calculating the motion-direction consistency of each macroblock;
(8) calculating the visual perceptibility of each macroblock from the luminance contrast, texture complexity, motion-intensity contrast and motion-direction consistency; and
(9) calculating the quality score of each frame of the video under evaluation from the blocking-artifact distortion, blurring-artifact distortion and visual perceptibility, and averaging the scores of all frames; the mean value is the quality score of the video.
In step (2), the blocking-artifact distortion of a macroblock is calculated as follows:

A. Calculate the horizontal blocking distortion of the current macroblock according to the following formulas:

$$S_{BH}(i)=\left|A_{(16,i)}-B_{(1,i)}\right|$$

$$S_{IH}(1,i)=\left|A_{(14,i)}-A_{(15,i)}\right|$$

$$S_{IH}(2,i)=\left|A_{(15,i)}-A_{(16,i)}\right|$$

$$S_{IH}(3,i)=\left|B_{(1,i)}-B_{(2,i)}\right|$$

$$S_{IH}(4,i)=\left|B_{(2,i)}-B_{(3,i)}\right|$$

$$S_{IH\_AVG}(i)=\frac{S_{IH}(1,i)+S_{IH}(2,i)+S_{IH}(3,i)+S_{IH}(4,i)}{4}$$

$$J_H(i)=\begin{cases}S_{BH}(i)-S_{IH\_AVG}(i) & \text{if } S_{BH}(i)>S_{IH\_AVG}(i)\\ 0 & \text{otherwise}\end{cases}$$

$$D_{BLOCK\_H}=\frac{1}{16}\sum_{i=1}^{16}J_H(i)$$

where $D_{BLOCK\_H}$ is the horizontal blocking distortion of the current macroblock; $A_{(16,i)}$, $A_{(15,i)}$ and $A_{(14,i)}$ are the luminance values of the pixels in rows 16, 15 and 14, column $i$, of the current macroblock; $B_{(1,i)}$, $B_{(2,i)}$ and $B_{(3,i)}$ are the luminance values of the pixels in rows 1, 2 and 3, column $i$, of the macroblock directly below the current macroblock; and $i$ is a natural number with $1\le i\le 16$;
B. Calculate the vertical blocking distortion of the current macroblock according to the following formulas:

$$S_{BV}(i)=\left|A_{(i,16)}-C_{(i,1)}\right|$$

$$S_{IV}(i,1)=\left|A_{(i,14)}-A_{(i,15)}\right|$$

$$S_{IV}(i,2)=\left|A_{(i,15)}-A_{(i,16)}\right|$$

$$S_{IV}(i,3)=\left|C_{(i,1)}-C_{(i,2)}\right|$$

$$S_{IV}(i,4)=\left|C_{(i,2)}-C_{(i,3)}\right|$$

$$S_{IV\_AVG}(i)=\frac{S_{IV}(i,1)+S_{IV}(i,2)+S_{IV}(i,3)+S_{IV}(i,4)}{4}$$

$$J_V(i)=\begin{cases}S_{BV}(i)-S_{IV\_AVG}(i) & \text{if } S_{BV}(i)>S_{IV\_AVG}(i)\\ 0 & \text{otherwise}\end{cases}$$

$$D_{BLOCK\_V}=\frac{1}{16}\sum_{i=1}^{16}J_V(i)$$

where $D_{BLOCK\_V}$ is the vertical blocking distortion of the current macroblock; $A_{(i,16)}$, $A_{(i,15)}$ and $A_{(i,14)}$ are the luminance values of the pixels in row $i$, columns 16, 15 and 14, of the current macroblock; and $C_{(i,1)}$, $C_{(i,2)}$ and $C_{(i,3)}$ are the luminance values of the pixels in row $i$, columns 1, 2 and 3, of the macroblock adjacent to the right of the current macroblock;
C. Average the horizontal and vertical blocking distortions to obtain the blocking-artifact distortion of the current macroblock.
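The blocking measure of steps A–C above can be sketched in a few lines of NumPy. This is an illustrative sketch, not the patented implementation: the function names are mine, and the patent's 1-based row/column indices become NumPy's 0-based indices (row 16 becomes index 15). The vertical case is obtained by transposing, since it mirrors the horizontal formulas column-for-row.

```python
import numpy as np

def horizontal_blockiness(A, B):
    """D_BLOCK_H for a 16x16 luma macroblock A and the macroblock B directly
    below it: boundary jump minus the average of four interior differences,
    clipped at zero and averaged over the 16 columns."""
    A = np.asarray(A, dtype=float)
    B = np.asarray(B, dtype=float)
    s_bh = np.abs(A[15, :] - B[0, :])                      # boundary jump S_BH(i)
    s_ih_avg = (np.abs(A[13, :] - A[14, :]) + np.abs(A[14, :] - A[15, :]) +
                np.abs(B[0, :] - B[1, :]) + np.abs(B[1, :] - B[2, :])) / 4.0
    j_h = np.where(s_bh > s_ih_avg, s_bh - s_ih_avg, 0.0)  # J_H(i)
    return float(j_h.mean())                               # D_BLOCK_H

def blockiness(A, B, C):
    """Blocking-artifact distortion of macroblock A: mean of the horizontal
    distortion against the below-neighbour B and the vertical distortion
    against the right-neighbour C (the vertical case is the transpose)."""
    return 0.5 * (horizontal_blockiness(A, B) +
                  horizontal_blockiness(A.T, C.T))
```

A uniform step across the boundary scores its full height, while a smooth gradient that continues across the boundary scores zero, which matches the intent of subtracting the interior differences.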
In step (3), the blurring-artifact distortion of a macroblock is calculated as follows:
A. Apply the Sobel operator to the macroblock to detect its edge pixels and their gradient directions;
B. Quantize the gradient direction of each edge pixel to the nearest of the eight directions {0°, 45°, 90°, 135°, 180°, 225°, 270°, 315°}, and use it as the approximate gradient direction of the pixel;
C. Calculate the edge sharpness of each edge pixel according to the following formula:

$$S=\frac{1}{4}\sum_{l=1}^{4}\frac{\left|I(\theta,l)-I\right|}{l}$$

where $S$ is the edge sharpness of the edge pixel, $I$ is the luminance value of the edge pixel, $I(\theta,l)$ is the luminance value of the pixel at distance $l$ from the edge pixel along the approximate gradient direction $\theta$, and $l$ is a natural number with $1\le l\le 4$;
D. Average the edge sharpness over all edge pixels in the macroblock to obtain the blurring-artifact distortion of the macroblock.
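Step C can be sketched as follows. This is an assumption-laden illustration: the edge detection of step A is omitted, the direction-to-offset table assumes image rows grow downward (so 90° means decreasing row index), and the function name is mine.

```python
import numpy as np

# Pixel steps for the eight quantized gradient directions, assuming image
# rows grow downward (so 90 deg = "up" = decreasing row index).
STEPS = {0: (0, 1), 45: (-1, 1), 90: (-1, 0), 135: (-1, -1),
         180: (0, -1), 225: (1, -1), 270: (1, 0), 315: (1, 1)}

def edge_sharpness(img, r, c, theta):
    """S for the edge pixel at (r, c): average luminance change per unit
    distance over the four samples along the approximate gradient
    direction theta (the formula in step C)."""
    dr, dc = STEPS[theta]
    i0 = float(img[r, c])
    return sum(abs(float(img[r + l * dr, c + l * dc]) - i0) / l
               for l in range(1, 5)) / 4.0
```

On a hard step edge every sample contributes its full height divided by its distance, while on a blurred ramp the per-distance change is small, so a sharp edge yields a larger $S$ than its blurred counterpart, as the text intends.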
In step (4), the luminance contrast of a macroblock is calculated as follows: compute the luminance contrast of every pixel in the macroblock according to the formulas below, and take the maximum over all pixels as the luminance contrast of the macroblock:

$$C_{Luma\_P}(\theta,l)=\frac{\left|I(\theta,l)-I\right|}{l^{\eta}}$$

$$C_{Luma}=\sum_{\theta\in M}\sum_{l=1}^{L}C_{Luma\_P}(\theta,l)$$

where $C_{Luma}$ is the luminance contrast of the current pixel, $I$ is the luminance value of the current pixel, $I(\theta,l)$ is the luminance value of the pixel at distance $l$ from the current pixel along direction $\theta$, $l$ is a natural number with $1\le l\le L$, $L$ is a preset maximum distance, $\eta$ is a distance-decay coefficient, and $M$ is the set of the four directions 0°, 90°, 180° and 270°.
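A minimal sketch of the per-pixel contrast sum and the per-macroblock maximum. Assumptions are mine: the function names are invented, border pixels whose $L$-neighbourhood leaves the image are simply zeroed rather than handled specially, and the defaults $L=5$, $\eta=2$ are the values the embodiment section reports, not requirements of the formula.

```python
import numpy as np

def luma_contrast_map(img, L=5, eta=2.0):
    """Per-pixel C_Luma: sum over the four axis directions in M and the
    distances l = 1..L of |I(theta, l) - I| / l**eta. Pixels within L of
    the border are zeroed, since their neighbourhoods are incomplete."""
    img = np.asarray(img, dtype=float)
    c = np.zeros_like(img)
    for l in range(1, L + 1):
        w = 1.0 / l ** eta
        for dr, dc in ((0, l), (0, -l), (l, 0), (-l, 0)):
            # shifted[r, c] = img[r - dr, c - dc], i.e. the neighbour at
            # distance l in direction (dr, dc) from pixel (r, c)
            shifted = np.roll(np.roll(img, dr, axis=0), dc, axis=1)
            c += w * np.abs(shifted - img)
    c[:L, :] = 0.0
    c[-L:, :] = 0.0
    c[:, :L] = 0.0
    c[:, -L:] = 0.0
    return c

def block_luma_contrast(img, r0, c0, L=5, eta=2.0):
    """C_Luma_block for the 16x16 macroblock with top-left corner (r0, c0):
    the maximum per-pixel contrast inside the block."""
    return float(luma_contrast_map(img, L, eta)[r0:r0 + 16, c0:c0 + 16].max())
```

An isolated bright pixel on a flat background accumulates $4\sum_{l=1}^{L}\Delta I/l^{\eta}$, and a flat image yields zero contrast everywhere, which is consistent with contrast being a difference measure.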
In step (5), the texture complexity of a macroblock is calculated as follows:
A. Apply the Sobel operator to the macroblock to determine the total number of edge pixels in the macroblock and the gradient direction of each pixel;
B. Quantize the gradient direction of each pixel to the nearest of the eight directions {0°, 45°, 90°, 135°, 180°, 225°, 270°, 315°}, and use it as the approximate gradient direction of the pixel;
C. Group the approximate gradient directions into four classes: 0° with 180°, 45° with 225°, 90° with 270°, and 135° with 315°;
D. Calculate the texture complexity of the macroblock according to:

$$T_{block}=\begin{cases}0.5 & \text{if } k_{\theta}=1\\ 1 & \text{if } k_{\theta}=2\\ (2-c_e)/2 & \text{if } k_{\theta}=3\\ (1-c_e)/2 & \text{if } k_{\theta}=4\end{cases}$$

$$c_e=\begin{cases}1 & \text{if } n_{edge}>T_{edge}\\ 0 & \text{otherwise}\end{cases}$$

where $T_{block}$ is the texture complexity of the macroblock, $k_{\theta}$ is the number of distinct approximate-gradient-direction classes among the pixels of the macroblock, $n_{edge}$ is the total number of edge pixels in the macroblock, and $T_{edge}$ is a given threshold on the number of edge pixels.
In step (6), the motion-intensity contrast of a macroblock is calculated as follows:
A. Obtain the horizontal and vertical motion vectors of the macroblock by inter prediction, and take the root mean square of the horizontal and vertical motion vectors as the motion intensity of the macroblock;
B. Build a reference window of 7 × 7 macroblocks centered on the current macroblock;
C. Calculate the distance-weighted motion-intensity difference $M_{I\_diff}$ between the current macroblock and each macroblock in the reference window according to the following formula, and let $M_{Id\_max}$ be the maximum over the window:

$$M_{I\_diff}=\frac{\left|M_I-M_I(D)\right|}{d^{\delta}}$$

where $M_I$ is the motion intensity of the current macroblock, $M_I(D)$ is the motion intensity of macroblock $D$, $D$ is any macroblock in the reference window, $d$ is the distance between the current macroblock and macroblock $D$, and $\delta$ is a distance-decay coefficient;
D. Determine the motion-intensity contrast of the current macroblock according to:

$$C_{Motion\_block}=\begin{cases}\dfrac{M_{Id\_max}}{M_{I\_max}} & \text{if } M_{I\_max}\neq 0\\ 0 & \text{otherwise}\end{cases}$$

where $C_{Motion\_block}$ is the motion-intensity contrast of the current macroblock and $M_{I\_max}$ is the maximum motion intensity over all macroblocks in the reference window.
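Steps A–D can be sketched as below. Assumptions are mine: "root mean square of the horizontal and vertical motion vectors" is read literally as $\sqrt{(mv_x^2+mv_y^2)/2}$; the inter-prediction motion search itself is out of scope (the sketch takes a ready-made grid of per-macroblock intensities); the window is assumed to lie fully inside the grid; and `delta=2` is the value the embodiment section reports.

```python
import numpy as np

def motion_intensity(mvx, mvy):
    """Motion intensity of one macroblock, read here as the root mean square
    of its horizontal and vertical motion-vector components."""
    return np.sqrt((mvx ** 2 + mvy ** 2) / 2.0)

def motion_contrast(mi, r0, c0, delta=2.0):
    """C_Motion_block for the macroblock at (r0, c0) of a grid `mi` of
    per-macroblock motion intensities, using the 7x7 reference window."""
    m_i = mi[r0, c0]
    m_id_max = 0.0
    m_i_max = 0.0
    for r in range(r0 - 3, r0 + 4):
        for c in range(c0 - 3, c0 + 4):
            m_i_max = max(m_i_max, mi[r, c])
            d = np.hypot(r - r0, c - c0)           # macroblock distance
            if d > 0:
                m_id_max = max(m_id_max, abs(m_i - mi[r, c]) / d ** delta)
    return m_id_max / m_i_max if m_i_max != 0 else 0.0
```

A single moving macroblock in a static window gets contrast 1 (its difference against an immediate neighbour equals the window maximum), while an all-static window falls into the zero branch.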
In step (7), the motion-direction consistency of a macroblock is calculated as follows:
A. Obtain the horizontal motion vector $mv_x$ and vertical motion vector $mv_y$ of the macroblock by inter prediction; take the root mean square of the horizontal and vertical motion vectors as the motion intensity of the macroblock, and calculate the motion direction of the macroblock as $\theta_{mv}=\arctan(mv_y/mv_x)$;
B. Divide the 0°–360° circle into 12 equal sectors and map the motion direction of each macroblock to its sector;
C. Build a reference window of 21 × 21 macroblocks centered on the current macroblock;
D. Calculate the motion-direction consistency of the current macroblock according to:

$$M_{Con\_block}=-\sum_{j=1}^{12}p(j)\log\left[p(j)\right]\frac{M_{I\_avg}(j)}{M_{I\_max}},\qquad p(j)=\frac{k(j)}{21^{2}}$$

where $M_{Con\_block}$ is the motion-direction consistency of the current macroblock, $k(j)$ is the number of macroblocks in the reference window belonging to sector $j$, $M_{I\_max}$ is the maximum motion intensity over all macroblocks in the reference window, $M_{I\_avg}(j)$ is the mean motion intensity of the macroblocks in the reference window belonging to sector $j$, and $j$ is a natural number with $1\le j\le 12$.
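The sector histogram and weighted entropy of step D can be sketched as follows. This is an illustrative sketch under my own assumptions: inputs are flat lists of per-macroblock directions and intensities for one window (21 × 21 = 441 entries in the patent, any size here), sectors are the 12 equal 30° bins of step B, empty sectors contribute nothing (the $p\log p$ term tends to 0), and the natural logarithm is used since the patent does not fix a base.

```python
import numpy as np

def direction_consistency(angles_deg, intensities):
    """M_Con_block over one reference window: an entropy of the 12-sector
    motion-direction histogram, each sector's term weighted by the mean
    intensity of its macroblocks relative to the window maximum."""
    angles = np.asarray(angles_deg, dtype=float) % 360.0
    mi = np.asarray(intensities, dtype=float)
    sectors = (angles // 30.0).astype(int)         # 12 sectors of 30 degrees
    n = angles.size
    m_i_max = mi.max()
    if m_i_max == 0.0:
        return 0.0
    total = 0.0
    for j in range(12):
        mask = sectors == j
        k = int(mask.sum())                        # k(j)
        if k == 0:
            continue                               # p log p -> 0 for empty sector
        p = k / n                                  # p(j)
        total -= p * np.log(p) * mi[mask].mean() / m_i_max
    return total
```

A window where every macroblock moves in the same direction yields 0 (perfect consistency, since $p=1$ kills the entropy term), while directions split evenly between two sectors yield $\ln 2$ at equal intensities.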
In step (8), the visual perceptibility of a macroblock is calculated according to:

$$VP_S=\log\left(\alpha_1+C_{Luma\_block}\right)\times\left(\alpha_2+T_{block}\right)^{2}$$

$$VP_T=\alpha_3 C_{Motion\_block}+\alpha_4 M_{Con\_block}$$

$$VM=\lambda\times VP_S\times VP_T$$

where $VM$ is the visual perceptibility of the macroblock, $M_{Con\_block}$ is the motion-direction consistency of the macroblock, $C_{Motion\_block}$ is its motion-intensity contrast, $T_{block}$ is its texture complexity, $C_{Luma\_block}$ is its luminance contrast, and $\lambda$, $\alpha_1$, $\alpha_2$, $\alpha_3$ and $\alpha_4$ are given weight coefficients.
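The combination is a one-liner per term. Note the heavy hedging here: the patent calls $\lambda$ and $\alpha_1\ldots\alpha_4$ "given" weight coefficients without publishing their values, so the defaults of 1.0 below are placeholders of mine, not the patented weights; the function name is also invented.

```python
import math

def visual_perceptibility(c_luma, t_block, c_motion, m_con,
                          lam=1.0, a1=1.0, a2=1.0, a3=1.0, a4=1.0):
    """VM of one macroblock from its four features (step 8). The default
    weights of 1.0 are placeholders; the patent does not publish them."""
    vp_s = math.log(a1 + c_luma) * (a2 + t_block) ** 2   # spatial perceptibility
    vp_t = a3 * c_motion + a4 * m_con                    # temporal perceptibility
    return lam * vp_s * vp_t
```

The multiplicative form means a macroblock scores high only when it is both spatially salient (contrast/texture) and temporally salient (motion), mirroring the joint role the text assigns to the spatial and temporal cues.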
In step (9), the quality score of each frame of the video under evaluation is calculated according to:

$$Q=\frac{\sum_{n=1}^{N}\left[Q_{MB}(n)\times VM(n)\right]}{\sum_{n=1}^{N}VM(n)}$$

$$Q_{MB}(n)=\gamma_1\times D_{BLOCK\_MB}(n)+\gamma_2\times D_{BLUR\_MB}(n)$$

where $Q$ is the quality score of a frame of the video under evaluation, $D_{BLOCK\_MB}(n)$ is the blocking-artifact distortion of the $n$-th macroblock of the frame, $D_{BLUR\_MB}(n)$ is the blurring-artifact distortion of the $n$-th macroblock, $VM(n)$ is the visual perceptibility of the $n$-th macroblock, $\gamma_1$ and $\gamma_2$ are given weight coefficients, $n$ is a natural number with $1\le n\le N$, and $N$ is the total number of macroblocks in the frame.
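The final pooling of step (9) can be sketched as a weighted average followed by a plain mean over frames. As before, $\gamma_1$ and $\gamma_2$ are "given" in the patent but not published, so the defaults of 1.0 are placeholders, and the function names are mine.

```python
def frame_quality(d_block, d_blur, vm, g1=1.0, g2=1.0):
    """Q for one frame: per-macroblock distortion Q_MB = g1*D_BLOCK + g2*D_BLUR,
    pooled as a VM-weighted average over the frame's macroblocks."""
    q_mb = [g1 * b + g2 * u for b, u in zip(d_block, d_blur)]
    return sum(q * w for q, w in zip(q_mb, vm)) / sum(vm)

def video_quality(frame_scores):
    """Sequence score: the plain mean of the per-frame Q values (step 9)."""
    return sum(frame_scores) / len(frame_scores)
```

The VM weights make distortion in perceptually salient macroblocks count for more; with equal weights the pooling degenerates to a plain average of the macroblock distortions.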
The beneficial technical effects of the invention are as follows:
(1) Only the video under evaluation is needed to obtain a quality score; no reference information is required, giving the method good flexibility and adaptability.
(2) The assessment results are accurate and agree well with the subjective perception of the human eye.
(3) The method produces reasonably accurate results for a wide variety of video scenes and therefore generalizes well.
Description of drawings
Fig. 1 is a flow chart of the method of the invention.
Fig. 2 shows the positions of the macroblock boundary pixels.
Fig. 3 shows the classification of motion directions.
Embodiment
To describe the invention more concretely, the technical scheme is explained in detail below with reference to the drawings and a specific embodiment.
Each frame of the video under evaluation is first divided into macroblocks; as shown in Fig. 1, the video quality assessment method based on macroblock-domain distortion estimation comprises the following steps:
(1) Calculate the blocking-artifact distortion.
Blocking artifacts generally appear in compression algorithms based on the discrete cosine transform (DCT), and coarse quantization aggravates them. Different macroblocks are affected by the DCT and quantization to different degrees and lose different amounts of detail, so noticeable discontinuities often appear at macroblock boundaries, forming blocking distortion. This embodiment calculates the blocking distortion level of the macroblock edges in the horizontal and vertical directions separately; the detailed process is as follows:
A. Calculate the horizontal blocking distortion of the current macroblock according to the following formulas; as shown in Fig. 2, the squares above and below the solid black line represent the current macroblock A and its lower neighbour B, respectively.

$$S_{BH}(i)=\left|A_{(16,i)}-B_{(1,i)}\right|$$

$$S_{IH}(1,i)=\left|A_{(14,i)}-A_{(15,i)}\right|$$

$$S_{IH}(2,i)=\left|A_{(15,i)}-A_{(16,i)}\right|$$

$$S_{IH}(3,i)=\left|B_{(1,i)}-B_{(2,i)}\right|$$

$$S_{IH}(4,i)=\left|B_{(2,i)}-B_{(3,i)}\right|$$

$$S_{IH\_AVG}(i)=\frac{S_{IH}(1,i)+S_{IH}(2,i)+S_{IH}(3,i)+S_{IH}(4,i)}{4}$$

$$J_H(i)=\begin{cases}S_{BH}(i)-S_{IH\_AVG}(i) & \text{if } S_{BH}(i)>S_{IH\_AVG}(i)\\ 0 & \text{otherwise}\end{cases}$$

$$D_{BLOCK\_H}=\frac{1}{16}\sum_{i=1}^{16}J_H(i)$$

where $D_{BLOCK\_H}$ is the horizontal blocking distortion of the current macroblock; $A_{(16,i)}$, $A_{(15,i)}$ and $A_{(14,i)}$ are the luminance values of the pixels in rows 16, 15 and 14, column $i$, of the current macroblock; $B_{(1,i)}$, $B_{(2,i)}$ and $B_{(3,i)}$ are the luminance values of the pixels in rows 1, 2 and 3, column $i$, of the macroblock directly below the current macroblock; and $i$ is a natural number with $1\le i\le 16$;
B. Calculate the vertical blocking distortion of the current macroblock according to the following formulas:

$$S_{BV}(i)=\left|A_{(i,16)}-C_{(i,1)}\right|$$

$$S_{IV}(i,1)=\left|A_{(i,14)}-A_{(i,15)}\right|$$

$$S_{IV}(i,2)=\left|A_{(i,15)}-A_{(i,16)}\right|$$

$$S_{IV}(i,3)=\left|C_{(i,1)}-C_{(i,2)}\right|$$

$$S_{IV}(i,4)=\left|C_{(i,2)}-C_{(i,3)}\right|$$

$$S_{IV\_AVG}(i)=\frac{S_{IV}(i,1)+S_{IV}(i,2)+S_{IV}(i,3)+S_{IV}(i,4)}{4}$$

$$J_V(i)=\begin{cases}S_{BV}(i)-S_{IV\_AVG}(i) & \text{if } S_{BV}(i)>S_{IV\_AVG}(i)\\ 0 & \text{otherwise}\end{cases}$$

$$D_{BLOCK\_V}=\frac{1}{16}\sum_{i=1}^{16}J_V(i)$$

where $D_{BLOCK\_V}$ is the vertical blocking distortion of the current macroblock; $A_{(i,16)}$, $A_{(i,15)}$ and $A_{(i,14)}$ are the luminance values of the pixels in row $i$, columns 16, 15 and 14, of the current macroblock; and $C_{(i,1)}$, $C_{(i,2)}$ and $C_{(i,3)}$ are the luminance values of the pixels in row $i$, columns 1, 2 and 3, of the macroblock adjacent to the right of the current macroblock;
C. Average the horizontal and vertical blocking distortions to obtain the blocking-artifact distortion $D_{BLOCK\_MB}$ of the current macroblock.
Because quantization is performed macroblock by macroblock, adjacent macroblocks may use different quantization parameters. The differences between pixels across a macroblock edge are therefore often larger than those between interior pixels, and the larger the difference, the more visible the discontinuity at the edge. The blocking-artifact distortion $D_{BLOCK\_MB}$ of a macroblock is the mean of its horizontal and vertical distortions; the larger $D_{BLOCK\_MB}$, the more severe the blocking artifacts.
(2) Calculate the blurring-artifact distortion.
Blurring is a common video distortion. Subjectively, it shows up mainly as degraded detail and reduced sharpness near object edges in the video. Pixels on an object edge normally have the largest local gradient magnitudes, even when affected by blurring. This embodiment applies the Sobel operator to detect edges in the video, and for each pixel on an object edge measures the edge sharpness $S$ along its gradient direction, using it to assess how strongly the video is affected by blurring. The detailed process is as follows:
A. Apply the Sobel operator to the macroblock to detect its edge pixels and their gradient directions;
B. Quantize the gradient direction of each edge pixel to the nearest of the eight directions {0°, 45°, 90°, 135°, 180°, 225°, 270°, 315°}, and use it as the approximate gradient direction of the pixel;
C. Calculate the edge sharpness of each edge pixel according to the following formula:

$$S=\frac{1}{4}\sum_{l=1}^{4}\frac{\left|I(\theta,l)-I\right|}{l}$$

where $S$ is the edge sharpness of the edge pixel, $I$ is the luminance value of the edge pixel, $I(\theta,l)$ is the luminance value of the pixel at distance $l$ from the edge pixel along the approximate gradient direction $\theta$, and $l$ is a natural number with $1\le l\le 4$;
D. Average the edge sharpness over all edge pixels in the macroblock to obtain the blurring-artifact distortion $D_{BLUR\_MB}$ of the macroblock.
This embodiment samples four points along the gradient direction and computes $S$ as their average rate of luminance change; the smaller $S$, the lower the edge sharpness and the more severe the blurring. Finally, the $S$ values of all edge pixels in the macroblock are averaged to give the blurring-artifact distortion $D_{BLUR\_MB}$ of the macroblock; the smaller $D_{BLUR\_MB}$, the more severe the blurring.
(3) Calculate the luminance contrast.
Studies of HVS characteristics show that, when watching video, humans are generally insensitive to the absolute luminance of objects and much more sensitive to the relative luminance of a local region against its surroundings; luminance contrast can therefore be described simply as the luminance difference between pixels. Research also shows that the eye's sensitivity to a luminance difference decreases as the distance between the pixels increases. Combining these two points, this embodiment defines the luminance contrast between two pixels in terms of their luminance difference and their distance, as follows:
Compute the luminance contrast of every pixel in the macroblock according to the formulas below, and take the maximum over all pixels as the luminance contrast $C_{Luma\_block}$ of the macroblock:

$$C_{Luma\_P}(\theta,l)=\frac{\left|I(\theta,l)-I\right|}{l^{\eta}}$$

$$C_{Luma}=\sum_{\theta\in M}\sum_{l=1}^{L}C_{Luma\_P}(\theta,l)$$

where $C_{Luma}$ is the luminance contrast of the current pixel, $I$ is the luminance value of the current pixel, $I(\theta,l)$ is the luminance value of the pixel at distance $l$ from the current pixel along direction $\theta$, and $l$ is a natural number with $1\le l\le L$; this embodiment presets $L=5$, $\eta$ is the distance-decay coefficient (set to 2 in this embodiment), and $M$ is the set of the four directions 0°, 90°, 180° and 270°.
Luminance contrast is a local property, and the eye is most sensitive to the points of maximum contrast within a region. This embodiment therefore divides the whole frame into non-overlapping macroblocks of 16 × 16 pixels and takes the maximum per-pixel $C_{Luma}$ within each block as the luminance contrast $C_{Luma\_block}$ of that macroblock. The larger $C_{Luma\_block}$, the higher the luminance contrast of the region, i.e. the larger the luminance difference between the region and its surroundings, and the more easily the region attracts the eye's attention.
(4) Calculate the texture complexity.
According to the character of their texture, regions of a video image can be divided into structural-texture regions and random-texture regions. A structural-texture region has relatively simple texture and low correlation with the surrounding image; a random-texture region has richer texture, low spatial contrast and higher correlation with the surrounding image. HVS studies show that distortion in structural-texture regions attracts human attention more easily.
This embodiment first convolves the whole image with the Sobel operator to extract the edge pixels, computing the horizontal and vertical gradients $(G_{hor}, G_{ver})$ of each pixel according to the formulas below, and then determines the total number of edge pixels $n_{edge}$ in each macroblock and the gradient direction $\theta(i,j)$ of each pixel:

$$G_{hor}(i,j)=I(i,j)*S_{hor}$$

$$G_{ver}(i,j)=I(i,j)*S_{ver}$$

$$\theta(i,j)=\arctan\frac{G_{ver}(i,j)}{G_{hor}(i,j)}$$

Then, quantize the gradient direction of each pixel to the nearest of the eight directions {0°, 45°, 90°, 135°, 180°, 225°, 270°, 315°}, and use it as the approximate gradient direction of the pixel;
Group the approximate gradient directions into four classes: 0° with 180°, 45° with 225°, 90° with 270°, and 135° with 315°;
Finally, calculate the texture complexity of the macroblock according to:

$$T_{block}=\begin{cases}0.5 & \text{if } k_{\theta}=1\\ 1 & \text{if } k_{\theta}=2\\ (2-c_e)/2 & \text{if } k_{\theta}=3\\ (1-c_e)/2 & \text{if } k_{\theta}=4\end{cases}$$

$$c_e=\begin{cases}1 & \text{if } n_{edge}>T_{edge}\\ 0 & \text{otherwise}\end{cases}$$

where $T_{block}$ is the texture complexity of the macroblock, $k_{\theta}$ is the number of distinct approximate-gradient-direction classes among the pixels of the macroblock, $n_{edge}$ is the total number of edge pixels in the macroblock, and $T_{edge}$ is the edge-pixel-count threshold (set to 16 in this embodiment).
$T_{block}$ ranges over [0, 1]. When $T_{block}$ tends to 0, the texture of the region is rich and the region belongs to the random-texture class, whose distortion is hard for the eye to notice; when $T_{block}$ tends to 1, the texture is simple and the region belongs to the structural-texture class, whose distortion the eye perceives more readily.
(5) calculate the exercise intensity contrast.
In the video quality evaluation field, movable information is widely used in reflection HVS to the perception of visual signal as one of most important primary vision information.Exercise intensity contrast and direction of motion consistency are the characteristic values that best embodies the video motion characteristic.HVS studies show that, human eye will be higher than the absolute movement of single body to the susceptibility of relative motion of object, and the exercise intensity contrast has just in time reflected and the movement velocity difference of different objects in the video reflected the whole perception of HVS to video.The process of the exercise intensity contrast of present embodiment computing macro block is as follows:
A. calculate horizontal motion vector and the vertical motion vector of macro block by inter prediction; Horizontal motion vector and vertical motion vector root mean square are obtained the exercise intensity of macro block;
B. centered by current macro, set up the reference windows that is formed by 7 * 7 macro blocks;
C. the exercise intensity difference distance of calculating each macro block in current macro and the reference windows according to following formula is than MI_diff, and gets wherein that maximum is M Id_max:
M I _ diff = | M I - M I ( D ) | d δ
Wherein: M IBe the exercise intensity of current macro, M I(D) be the exercise intensity of macro block D, macro block D is the arbitrary macro block in the reference windows, and d is the distance of current macro and macro block D, and δ is that macro block is apart from decline coefficient (this coefficient gets 2 in the present embodiment);
D. Considering that the maximum motion intensity in the local region also affects the perception of motion intensity differences by the HVS, the motion intensity contrast of the current macroblock is finally determined according to the following relation:

$$ C_{Motion\_block} = \begin{cases} M_{Id\_max} / M_{I\_max} & \text{if } M_{I\_max} \neq 0 \\ 0 & \text{otherwise} \end{cases} $$

where C_Motion_block is the motion intensity contrast of the current macroblock and M_I_max is the maximum motion intensity over all macroblocks in the reference window. The larger C_Motion_block is, the greater the motion intensity contrast of the macroblock.
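By way of a non-normative illustration (not part of the patent text), steps A–D above can be sketched in Python. The sketch assumes the motion intensity is the root mean square of the two vector components, uses 0-based indices with the current macroblock at the centre cell (3, 3), and takes the Euclidean centre-to-centre distance between macroblocks as d:

```python
import math

def motion_intensity(mv_x, mv_y):
    # Root mean square of the horizontal and vertical components.
    return math.sqrt((mv_x ** 2 + mv_y ** 2) / 2)

def motion_intensity_contrast(window, delta=2.0):
    """window: 7x7 grid of (mv_x, mv_y) motion vectors, one per macroblock;
    the current macroblock sits at the centre cell (3, 3)."""
    mi = [[motion_intensity(x, y) for x, y in row] for row in window]
    mi_cur = mi[3][3]
    mi_max = max(v for row in mi for v in row)
    if mi_max == 0:
        return 0.0
    mi_diff_max = 0.0
    for r in range(7):
        for c in range(7):
            if (r, c) == (3, 3):
                continue  # skip the current macroblock itself (d would be 0)
            d = math.hypot(r - 3, c - 3)  # distance to the current macroblock
            mi_diff_max = max(mi_diff_max, abs(mi_cur - mi[r][c]) / d ** delta)
    return mi_diff_max / mi_max
```

With δ = 2 the contribution of distant macroblocks to M_Id_max falls off quickly, which is the role of the distance attenuation coefficient.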
(6) Computing the motion direction consistency.

In a video scene, the motion behaviors of the individual objects are diverse, and motion consistency reflects the similarity of these behaviors; among the motion attributes, the motion direction attracts the attention of the HVS most readily. In this embodiment, the motion direction consistency is represented by analyzing the probability distribution of the motion directions in a local region, combined with the motion intensity. The motion direction consistency of a macroblock is computed as follows:
A. computing the horizontal motion vector mv_x and the vertical motion vector mv_y of the macroblock by inter-frame prediction, taking the root mean square of mv_x and mv_y as the motion intensity of the macroblock, and computing the motion direction of the macroblock as θ_mv = arctan(mv_y / mv_x);
B. dividing the 0° to 360° circle evenly into 12 sectors and mapping the motion direction of each macroblock to the corresponding sector. As shown in Figure 3, the motion directions are coarsely quantized into 12 classes, i.e. [0°, 360°) is divided into 12 angular zones, and a histogram method maps the motion direction θ_mv of each macroblock to its zone;
C. establishing a reference window of 21 × 21 macroblocks centered on the current macroblock;
D. computing the motion direction consistency of the current macroblock according to the following formula:

$$ M_{Con\_block} = -\sum_{j=1}^{12} p(j)\,\log[p(j)]\,\frac{M_{I\_avg}(j)}{M_{I\_max}}, \qquad p(j) = \frac{k(j)}{21^2} $$

where M_Con_block is the motion direction consistency of the current macroblock; k(j) is the number of macroblocks in the reference window belonging to the j-th sector; M_I_max is the maximum motion intensity over all macroblocks in the reference window; M_I_avg(j) is the mean motion intensity of the macroblocks in the reference window belonging to the j-th sector; and j is a natural number with 1 ≤ j ≤ 12. The larger M_Con_block is, the higher the motion consistency of the macroblock.
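As a non-normative sketch of steps A–D (assuming, as above, RMS motion intensity and an `atan2`-based direction that handles all quadrants):

```python
import math

def direction_sector(mv_x, mv_y):
    # Map a motion direction onto one of 12 equal 30-degree sectors.
    angle = math.degrees(math.atan2(mv_y, mv_x)) % 360.0
    return int(angle // 30) % 12

def motion_direction_consistency(window):
    """window: 21x21 grid of (mv_x, mv_y) motion vectors; the current
    macroblock sits at the centre."""
    k = [0] * 12           # macroblock count per sector, k(j)
    mi_sum = [0.0] * 12    # summed motion intensity per sector
    mi_max = 0.0
    for row in window:
        for mv_x, mv_y in row:
            mi = math.sqrt((mv_x ** 2 + mv_y ** 2) / 2)  # RMS intensity
            mi_max = max(mi_max, mi)
            j = direction_sector(mv_x, mv_y)
            k[j] += 1
            mi_sum[j] += mi
    if mi_max == 0:
        return 0.0
    m_con = 0.0
    for j in range(12):
        if k[j] == 0:
            continue  # empty sectors contribute nothing (p log p -> 0)
        p = k[j] / 21 ** 2
        m_con += -p * math.log(p) * (mi_sum[j] / k[j]) / mi_max
    return m_con
```

When every macroblock moves in the same direction, a single sector has p = 1 and the entropy-like sum vanishes; mixed directions yield a positive value.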
(7) Computing the visual perceptibility.

Based on the luminance contrast, texture complexity, motion intensity contrast and motion direction consistency, the visual perceptibility of each macroblock of every frame of the video under evaluation is computed according to the following formulas:

$$ VP_S = \log(\alpha_1 + C_{Luma\_block}) \times (\alpha_2 + T_{block})^2 $$
$$ VP_T = \alpha_3 C_{Motion\_block} + \alpha_4 M_{Con\_block} $$
$$ VM = \lambda \times VP_S \times VP_T $$

where VM is the visual perceptibility of the macroblock; M_Con_block is the motion direction consistency of the macroblock; C_Motion_block is its motion intensity contrast; T_block its texture complexity; and C_Luma_block its luminance contrast; λ, α_1, α_2, α_3 and α_4 are weight coefficients. In this embodiment λ is set to 1.2, α_1, α_2 and α_4 are all set to 1, and α_3 is set to 2.
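For illustration (not part of the patent text), the three formulas combine as follows, with the embodiment's weight values as defaults:

```python
import math

def visual_perceptibility(c_luma, t_block, c_motion, m_con,
                          lam=1.2, a1=1.0, a2=1.0, a3=2.0, a4=1.0):
    """Combine the four macroblock features into the perceptibility VM,
    using the embodiment's weights (lam=1.2, a1=a2=a4=1, a3=2)."""
    vp_s = math.log(a1 + c_luma) * (a2 + t_block) ** 2   # spatial term
    vp_t = a3 * c_motion + a4 * m_con                    # temporal term
    return lam * vp_s * vp_t
```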
(8) Computing the quality evaluation value of the video.

Based on the blockiness distortion degree, the blur distortion degree and the visual perceptibility, the quality evaluation value of every frame of the video under evaluation is computed according to the following formulas:

$$ Q = \frac{\sum_{n=1}^{N} [Q_{MB}(n) \times VM(n)]}{\sum_{n=1}^{N} VM(n)} $$
$$ Q_{MB}(n) = \gamma_1 \times D_{BLOCK\_MB}(n) + \gamma_2 \times D_{BLUR\_MB}(n) $$

where Q is the quality evaluation value of any frame of the video under evaluation; D_BLOCK_MB(n) is the blockiness distortion degree of the n-th macroblock of that frame; D_BLUR_MB(n) is the blur distortion degree of the n-th macroblock; VM(n) is the visual perceptibility of the n-th macroblock; γ_1 and γ_2 are weight coefficients; n is a natural number with 1 ≤ n ≤ N, N being the total number of macroblocks in the frame. The larger Q_MB(n) is, the more severe the distortion of that block and the poorer the video quality. This embodiment introduces a visual attention model based on visual perceptibility and uses it to adjust the weights of the different video regions in the quality evaluation; in this embodiment γ_1 is set to 1 and γ_2 to -0.5.

Finally, the quality evaluation values of all frames of the video under evaluation are averaged, and the mean Q_AVG is the quality evaluation value of the video. The range of Q_AVG is [0, 100], where 0 represents the best video quality and 100 the poorest.
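The aggregation step can be sketched as follows (a non-normative Python illustration; per-macroblock values are passed in as parallel lists, and the embodiment's γ values are the defaults):

```python
def frame_quality(d_block, d_blur, vm, g1=1.0, g2=-0.5):
    """Per-frame score Q: the VM-weighted mean of the per-macroblock
    Q_MB(n) = g1 * D_BLOCK_MB(n) + g2 * D_BLUR_MB(n)."""
    q_mb = [g1 * b + g2 * u for b, u in zip(d_block, d_blur)]
    return sum(q * w for q, w in zip(q_mb, vm)) / sum(vm)

def video_quality(frame_scores):
    # Q_AVG: the mean over all frame scores; 0 is best, 100 is worst.
    return sum(frame_scores) / len(frame_scores)
```

Weighting by VM(n) means that distortion in perceptually salient regions dominates the frame score.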
The effectiveness of the algorithm is verified below on 100 JM-compressed videos. The Spearman rank correlation coefficient and the Pearson correlation coefficient are used to measure the monotonicity and the accuracy of this embodiment, respectively, and the results are compared with the traditional PSNR-based and SSIM-based methods (the larger the Spearman and Pearson coefficients, the better the monotonicity and accuracy of a method), as shown in Table 1:
Table 1

Quality evaluation method | Spearman correlation coefficient | Pearson correlation coefficient
PSNR | 0.5389 | 0.6135
SSIM | 0.7023 | 0.7321
This embodiment | 0.7754 | 0.7912
As can be seen from the results in Table 1, the accuracy of this embodiment is higher than that of the other two existing video quality evaluation methods.

Claims (9)

1. A video quality evaluation method based on macroblock-domain distortion degree estimation, comprising the steps of:
(1) dividing every frame of the video under evaluation into several macroblocks;
(2) computing the blockiness distortion degree of each macroblock;
(3) computing the blur distortion degree of each macroblock;
(4) computing the luminance contrast of each macroblock;
(5) computing the texture complexity of each macroblock;
(6) computing the motion intensity contrast of each macroblock;
(7) computing the motion direction consistency of each macroblock;
(8) computing the visual perceptibility of each macroblock according to said luminance contrast, texture complexity, motion intensity contrast and motion direction consistency;
(9) computing the quality evaluation value of every frame of the video under evaluation according to said blockiness distortion degree, blur distortion degree and visual perceptibility, and averaging the quality evaluation values of all frames of the video under evaluation, the resulting mean being the quality evaluation value of the video.
2. The video quality evaluation method according to claim 1, characterized in that in said step (2) the blockiness distortion degree of a macroblock is computed as follows:
A. computing the horizontal blockiness distortion degree of the current macroblock according to the following formulas:

$$ S_{BH}(i) = |A_{(16,i)} - B_{(1,i)}| $$
$$ S_{IH}(1,i) = |A_{(14,i)} - A_{(15,i)}| $$
$$ S_{IH}(2,i) = |A_{(15,i)} - A_{(16,i)}| $$
$$ S_{IH}(3,i) = |B_{(1,i)} - B_{(2,i)}| $$
$$ S_{IH}(4,i) = |B_{(2,i)} - B_{(3,i)}| $$
$$ S_{IH\_AVG}(i) = \frac{S_{IH}(1,i) + S_{IH}(2,i) + S_{IH}(3,i) + S_{IH}(4,i)}{4} $$
$$ J_H(i) = \begin{cases} S_{BH}(i) - S_{IH\_AVG}(i) & \text{if } S_{BH}(i) > S_{IH\_AVG}(i) \\ 0 & \text{otherwise} \end{cases} $$
$$ D_{BLOCK\_H} = \frac{1}{16} \sum_{i=1}^{16} J_H(i) $$

where D_BLOCK_H is the horizontal blockiness distortion degree of the current macroblock; A_(16,i), A_(15,i) and A_(14,i) are the luminance values of the pixels in the 16th, 15th and 14th rows and i-th column of the current macroblock; B_(1,i), B_(2,i) and B_(3,i) are the luminance values of the pixels in the 1st, 2nd and 3rd rows and i-th column of the macroblock directly below the current macroblock; and i is a natural number with 1 ≤ i ≤ 16;
B. computing the vertical blockiness distortion degree of the current macroblock according to the following formulas:

$$ S_{BV}(i) = |A_{(i,16)} - C_{(i,1)}| $$
$$ S_{IV}(i,1) = |A_{(i,14)} - A_{(i,15)}| $$
$$ S_{IV}(i,2) = |A_{(i,15)} - A_{(i,16)}| $$
$$ S_{IV}(i,3) = |C_{(i,1)} - C_{(i,2)}| $$
$$ S_{IV}(i,4) = |C_{(i,2)} - C_{(i,3)}| $$
$$ S_{IV\_AVG}(i) = \frac{S_{IV}(i,1) + S_{IV}(i,2) + S_{IV}(i,3) + S_{IV}(i,4)}{4} $$
$$ J_V(i) = \begin{cases} S_{BV}(i) - S_{IV\_AVG}(i) & \text{if } S_{BV}(i) > S_{IV\_AVG}(i) \\ 0 & \text{otherwise} \end{cases} $$
$$ D_{BLOCK\_V} = \frac{1}{16} \sum_{i=1}^{16} J_V(i) $$

where D_BLOCK_V is the vertical blockiness distortion degree of the current macroblock; A_(i,16), A_(i,15) and A_(i,14) are the luminance values of the pixels in the i-th row and 16th, 15th and 14th columns of the current macroblock; C_(i,1), C_(i,2) and C_(i,3) are the luminance values of the pixels in the i-th row and 1st, 2nd and 3rd columns of the macroblock directly to the right of the current macroblock;
C. averaging said horizontal blockiness distortion degree and said vertical blockiness distortion degree to obtain the blockiness distortion degree of the current macroblock.
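By way of a non-normative illustration (Python, 0-based indices, so row 16 of the claim becomes row 15 here), the horizontal part of claim 2 can be sketched as; the vertical part is symmetric with the macroblock to the right:

```python
def blockiness_horizontal(A, B):
    """A: 16x16 luma of the current macroblock; B: the macroblock directly
    below it. Returns D_BLOCK_H (0-based, so row 15 of A abuts row 0 of B)."""
    total = 0.0
    for i in range(16):
        s_bh = abs(A[15][i] - B[0][i])  # jump across the block boundary
        s_ih = (abs(A[13][i] - A[14][i]) + abs(A[14][i] - A[15][i]) +
                abs(B[0][i] - B[1][i]) + abs(B[1][i] - B[2][i])) / 4
        # J_H(i): only boundary jumps exceeding the local activity count
        total += max(s_bh - s_ih, 0.0)
    return total / 16
```

A sharp luma step across the boundary with flat surroundings scores highly; a step embedded in equally busy texture scores zero.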
3. The video quality evaluation method according to claim 1, characterized in that in said step (3) the blur distortion degree of a macroblock is computed as follows:
A. performing edge detection on the macroblock with the Sobel operator to determine the edge pixels in the macroblock and their gradient directions;
B. classifying the gradient direction of each edge pixel to the nearest of the eight directions {0°, 45°, 90°, 135°, 180°, 225°, 270°, 315°}, the result serving as the approximate gradient direction of the edge pixel;
C. computing the edge sharpness of each edge pixel according to the following formula:

$$ S = \frac{1}{4} \sum_{l=1}^{4} \frac{|I(\theta, l) - I|}{l} $$

where S is the edge sharpness of the edge pixel; I is the luminance value of the edge pixel; I(θ, l) is the luminance value of the pixel at distance l from the edge pixel along its approximate gradient direction θ; and l is a natural number with 1 ≤ l ≤ 4;
D. averaging the edge sharpness of all edge pixels in the macroblock to obtain the blur distortion degree of the macroblock.
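Steps C and D can be sketched as follows (a non-normative Python illustration; the Sobel edge detection of steps A and B is assumed to have been done, so the function simply receives the four luma samples along the approximate gradient direction):

```python
def edge_sharpness(i0, along):
    """i0: luma of an edge pixel; along: luma of the four pixels at
    distances 1..4 from it along its approximate gradient direction."""
    return sum(abs(v - i0) / l for l, v in enumerate(along, start=1)) / 4

def blur_distortion(sharpness_values):
    # Macroblock blur measure: mean edge sharpness of its edge pixels.
    return sum(sharpness_values) / len(sharpness_values)
```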
4. The video quality evaluation method according to claim 1, characterized in that in said step (4) the luminance contrast of a macroblock is computed by: computing the luminance contrast of each pixel in the macroblock according to the following formulas, and taking the maximum luminance contrast over all pixels in the macroblock as the luminance contrast of the macroblock;

$$ C_{Luma\_P}(\theta, l) = \frac{|I(\theta, l) - I|}{l^{\eta}} $$
$$ C_{Luma} = \sum_{\theta \in M} \sum_{l=1}^{L} C_{Luma\_P}(\theta, l) $$

where C_Luma is the luminance contrast of the current pixel; I is the luminance value of the current pixel; I(θ, l) is the luminance value of the pixel at distance l from the current pixel along direction θ; l is a natural number with 1 ≤ l ≤ L, L being a preset maximum distance; η is the pixel distance attenuation coefficient; and M is the set of the four directions 0°, 90°, 180° and 270°.
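As a non-normative sketch of claim 4 (Python, 0-based indices; pixels whose neighbours at distance l fall outside the image are simply skipped, an assumption the claim does not specify):

```python
def pixel_luma_contrast(img, r, c, L=4, eta=1.0):
    """Contrast of pixel (r, c): distance-weighted luma differences summed
    over the four axis directions (0, 90, 180, 270 degrees), up to L."""
    total = 0.0
    for dr, dc in [(0, 1), (-1, 0), (0, -1), (1, 0)]:
        for l in range(1, L + 1):
            rr, cc = r + dr * l, c + dc * l
            if 0 <= rr < len(img) and 0 <= cc < len(img[0]):
                total += abs(img[rr][cc] - img[r][c]) / l ** eta
    return total

def block_luma_contrast(img, rows, cols, L=4, eta=1.0):
    # The macroblock contrast is the maximum over its pixels (claim 4).
    return max(pixel_luma_contrast(img, r, c, L, eta)
               for r in rows for c in cols)
```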
5. The video quality evaluation method according to claim 1, characterized in that in said step (5) the texture complexity of a macroblock is computed as follows:
A. performing edge detection on the macroblock with the Sobel operator to determine the total number of edge pixels in the macroblock and the gradient direction of each pixel;
B. classifying the gradient direction of each pixel to the nearest of the eight directions {0°, 45°, 90°, 135°, 180°, 225°, 270°, 315°}, the result serving as the approximate gradient direction of the pixel;
C. grouping the approximate gradient directions of the pixels into four classes: 0° with 180°, 45° with 225°, 90° with 270°, and 135° with 315°;
D. computing the texture complexity of the macroblock according to the following formulas:

$$ T_{block} = \begin{cases} 0.5 & \text{if } k_{\theta} = 1 \\ 1 & \text{if } k_{\theta} = 2 \\ (2 - c_e)/2 & \text{if } k_{\theta} = 3 \\ (1 - c_e)/2 & \text{if } k_{\theta} = 4 \end{cases} $$
$$ c_e = \begin{cases} 1 & \text{if } n_{edge} > T_{edge} \\ 0 & \text{otherwise} \end{cases} $$

where T_block is the texture complexity of the macroblock; k_θ is the number of distinct approximate-gradient-direction classes among all pixels in the macroblock; n_edge is the total number of edge pixels in the macroblock; and T_edge is a given edge-pixel-count threshold.
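The piecewise rule of step D can be sketched directly (a non-normative Python illustration; the direction classes from step C are assumed to be encoded as integers 0..3):

```python
def texture_complexity(direction_classes, n_edge, t_edge):
    """direction_classes: set of direction classes (0..3) present in the
    macroblock after folding opposite gradient directions together;
    n_edge: number of edge pixels; t_edge: the edge-count threshold."""
    k = len(direction_classes)         # k_theta in the claim
    c_e = 1 if n_edge > t_edge else 0  # edge-density indicator
    if k == 1:
        return 0.5
    if k == 2:
        return 1.0
    if k == 3:
        return (2 - c_e) / 2
    return (1 - c_e) / 2               # k == 4
```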
6. The video quality evaluation method according to claim 1, characterized in that in said step (6) the motion intensity contrast of a macroblock is computed as follows:
A. computing the horizontal motion vector and the vertical motion vector of the macroblock by inter-frame prediction, and taking the root mean square of said horizontal and vertical motion vectors as the motion intensity of the macroblock;
B. establishing a reference window of 7 × 7 macroblocks centered on the current macroblock;
C. computing the distance-weighted motion intensity difference M_I_diff between the current macroblock and each macroblock in the reference window according to the following formula, and taking its maximum value M_Id_max:

$$ M_{I\_diff} = \frac{|M_I - M_I(D)|}{d^{\delta}} $$

where M_I is the motion intensity of the current macroblock; M_I(D) is the motion intensity of macroblock D, D being any macroblock in the reference window; d is the distance between the current macroblock and macroblock D; and δ is the macroblock distance attenuation coefficient;
D. determining the motion intensity contrast of the current macroblock according to the following relation:

$$ C_{Motion\_block} = \begin{cases} M_{Id\_max} / M_{I\_max} & \text{if } M_{I\_max} \neq 0 \\ 0 & \text{otherwise} \end{cases} $$

where C_Motion_block is the motion intensity contrast of the current macroblock and M_I_max is the maximum motion intensity over all macroblocks in the reference window.
7. The video quality evaluation method according to claim 1, characterized in that in said step (7) the motion direction consistency of a macroblock is computed as follows:
A. computing the horizontal motion vector mv_x and the vertical motion vector mv_y of the macroblock by inter-frame prediction, taking the root mean square of said horizontal and vertical motion vectors as the motion intensity of the macroblock, and computing the motion direction of the macroblock as θ_mv = arctan(mv_y / mv_x);
B. dividing the 0° to 360° circle evenly into 12 sectors and mapping the motion direction of each macroblock to the corresponding sector;
C. establishing a reference window of 21 × 21 macroblocks centered on the current macroblock;
D. computing the motion direction consistency of the current macroblock according to the following formula:

$$ M_{Con\_block} = -\sum_{j=1}^{12} p(j)\,\log[p(j)]\,\frac{M_{I\_avg}(j)}{M_{I\_max}}, \qquad p(j) = \frac{k(j)}{21^2} $$

where M_Con_block is the motion direction consistency of the current macroblock; k(j) is the number of macroblocks in the reference window belonging to the j-th sector; M_I_max is the maximum motion intensity over all macroblocks in the reference window; M_I_avg(j) is the mean motion intensity of the macroblocks in the reference window belonging to the j-th sector; and j is a natural number with 1 ≤ j ≤ 12.
8. The video quality evaluation method according to claim 1, characterized in that in said step (8) the visual perceptibility of a macroblock is computed according to the following formulas:

$$ VP_S = \log(\alpha_1 + C_{Luma\_block}) \times (\alpha_2 + T_{block})^2 $$
$$ VP_T = \alpha_3 C_{Motion\_block} + \alpha_4 M_{Con\_block} $$
$$ VM = \lambda \times VP_S \times VP_T $$

where VM is the visual perceptibility of the macroblock; M_Con_block is the motion direction consistency of the macroblock; C_Motion_block is the motion intensity contrast of the macroblock; T_block is the texture complexity of the macroblock; C_Luma_block is the luminance contrast of the macroblock; and λ, α_1, α_2, α_3 and α_4 are given weight coefficients.
9. The video quality evaluation method according to claim 1, characterized in that in said step (9) the quality evaluation value of every frame of the video under evaluation is computed according to the following formulas:

$$ Q = \frac{\sum_{n=1}^{N} [Q_{MB}(n) \times VM(n)]}{\sum_{n=1}^{N} VM(n)} $$
$$ Q_{MB}(n) = \gamma_1 \times D_{BLOCK\_MB}(n) + \gamma_2 \times D_{BLUR\_MB}(n) $$

where Q is the quality evaluation value of any frame of the video under evaluation; D_BLOCK_MB(n) is the blockiness distortion degree of the n-th macroblock of the frame; D_BLUR_MB(n) is the blur distortion degree of the n-th macroblock; VM(n) is the visual perceptibility of the n-th macroblock; γ_1 and γ_2 are given weight coefficients; n is a natural number with 1 ≤ n ≤ N; and N is the total number of macroblocks in the frame.
Application CN2012105325655A, filed 2012-12-07: Video quality assessment method estimated on basis of macroblock domain distortion degree. Status: pending. Published as CN102984540A.
CN102984540A true CN102984540A (en) 2013-03-20
Legal Events

- C06 / PB01: Publication (application publication date: 2013-03-20)
- C10 / SE01: Entry into substantive examination (entry into force of request for substantive examination)
- C12 / RJ01: Rejection of invention patent application after publication