CN102740114A - Non-parameter evaluation method for subjective quality of video - Google Patents

Non-parameter evaluation method for subjective quality of video Download PDF

Info

Publication number
CN102740114A
CN102740114A CN2012102464035A CN201210246403A
Authority
CN
China
Prior art keywords
video
image
quality
subjective
evaluation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2012102464035A
Other languages
Chinese (zh)
Other versions
CN102740114B (en)
Inventor
宋好好
邱梓华
顾健
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Third Research Institute of the Ministry of Public Security
Original Assignee
Third Research Institute of the Ministry of Public Security
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Third Research Institute of the Ministry of Public Security filed Critical Third Research Institute of the Ministry of Public Security
Priority to CN201210246403.5A priority Critical patent/CN102740114B/en
Publication of CN102740114A publication Critical patent/CN102740114A/en
Application granted granted Critical
Publication of CN102740114B publication Critical patent/CN102740114B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Abstract

The invention discloses a no-reference evaluation method for the subjective quality of video, which comprises the following three aspects: 1) quantitatively evaluating the visibility of the noise contained in a compressed video, establishing the relation between noise visibility and subjective visual quality, and thereby evaluating the subjective quality of the video; 2) quantitatively evaluating video quality under different focus conditions with a blur-assessment algorithm, establishing the relation between focus and subjective visual quality, and thereby evaluating the subjective quality of the video; and 3) quantitatively evaluating video quality under different contrast conditions, establishing the relation between contrast and subjective visual quality, and thereby evaluating the subjective quality of the video. Without any reference to the original video sequence, the invention solves the key technical problem of accurately modeling the coherence between the pixels of an image and implements no-reference evaluation of the subjective quality of a video sequence.

Description

A no-reference evaluation method for video subjective quality
Technical field
The present invention relates to a no-reference evaluation method for video subjective quality, and more particularly to a method for the no-reference (parameter-free) assessment of the subjective quality of surveillance video sequences.
Background technology
The internationally recognized most reliable way to evaluate video quality is subjective evaluation, because in most video applications the ultimate recipient of the video is the human eye; assessing the subjective quality of video is therefore particularly important. At present, subjective assessment relies on several professional assessors who use their experience to score each video manually, and the reported subjective quality is usually the mean of their scores. Because the scoring is manual, the assessment result is inevitably affected by human factors; moreover, since there are many videos to assess and each assessor must watch every video in full, the assessment is also time-consuming.
Summary of the invention
The object of the present invention is to provide an innovative and efficient no-reference evaluation system for video subjective quality that assesses subjective quality without human intervention. Building, from human-visual-system models and prior knowledge, the links between noise visibility, focus, and contrast on the one hand and subjective visual quality on the other, so as to measure the subjective quality of video, is a technical problem that urgently needs to be solved in this field. Evaluating the subjective quality of video with the present invention reduces the interference of human factors and greatly improves both the speed of assessment and the fairness of the result.
The technical problem solved by the present invention can be realized by the following technical scheme:
A no-reference evaluation method for video subjective quality comprises the following three aspects:
1) quantitatively evaluate the visibility of the noise contained in the compressed video, establish the relation between noise visibility and subjective visual quality, and thereby evaluate the subjective quality of the video;
2) quantitatively evaluate the video quality under different focus conditions with a blur-assessment algorithm, establish the relation between focus and subjective visual quality, and thereby evaluate the subjective quality of the video;
3) quantitatively evaluate the image quality under different contrast conditions, establish the relation between contrast and subjective visual quality, and thereby evaluate the subjective quality of the video.
In one embodiment of the invention, the relation between noise visibility and subjective visual quality is established through the structural-similarity operator SSIM. The SSIM index comprises:
the luminance comparison equation
l(x, y) = (2 μ_x μ_y + C_1) / (μ_x^2 + μ_y^2 + C_1),
where μ_x and μ_y are the means of images x and y, and C_1 is a constant that keeps the ratio stable when μ_x^2 + μ_y^2 is close to zero;
the contrast comparison equation
c(x, y) = (2 σ_x σ_y + C_2) / (σ_x^2 + σ_y^2 + C_2),
where σ_x and σ_y are the standard deviations of images x and y, and C_2 is a constant that keeps the ratio stable when σ_x^2 + σ_y^2 is close to zero;
the structural correlation equation of images x and y
s(x, y) = (σ_xy + C_3) / (σ_x σ_y + C_3),
where σ_xy is the covariance of images x and y and C_3 is a stability constant.
The three equations together give
SSIM(x, y) = [l(x, y)]^α · [c(x, y)]^β · [s(x, y)]^γ.
In one embodiment of the invention, the relation between focus and subjective visual quality is established through the spatial sharpness map. The spectral sharpness map is defined as
S_1(I) = 1 − 1 / (1 + e^{−3(α_I − 2)}).
In the spatial measure of sharpness, the total variation in the spatial domain is used to measure the sharpness or blurriness of the image; the total variation v(I_b) of an image block I_b is computed by
v(I_b) = (1/255) Σ_{i,j} |I_{bi} − I_{bj}|,
where I_{bi} and I_{bj} are neighboring pixels in the 8-neighborhood within I_b. v(I_b) effectively measures the summed absolute differences between block I_b and its shifted versions; the spatial sharpness map is thus defined as
S_2(I) = (1/4) max_{τ ∈ I} v(τ),
where τ ranges over the 2 × 2 blocks of I. The average of the largest 1% of the S_3(I) values is used to measure the quality of the image:
S_3_INDEX = (1/N) Σ_{k=1}^{N} S̃_3(k),
where S̃_3 is S_3(I) sorted in descending order and N is 1% of the number of image pixels.
In one embodiment of the invention, the relation between contrast and subjective visual quality is established through a global contrast function:
D_{n−1} = (1 / (N_pix (N_pix − 1))) Σ_{i=0}^{M−2} Σ_{j=i+1}^{M−1} H_{n−1}(i) H_{n−1}(j) (j − i),  i, j ∈ [0, M−1],
where [0, M−1] is the gray-level range of the image, N_pix is the number of pixels in the image, and H(i) is the histogram count of gray level i.
Description of drawings
Fig. 1 is a schematic diagram of the no-reference evaluation method for video subjective quality of the present invention.
Fig. 2 is a schematic diagram of the ±8 shift range of an 8 × 8 block in the present invention.
Fig. 3 is a schematic diagram of the blur-assessment algorithm of the present invention.
Embodiment
To make the technical means, creative features, objects, and effects achieved by the present invention easy to understand, the invention is further described below in conjunction with specific embodiments.
As shown in Fig. 1, the no-reference evaluation method for video subjective quality of the present invention comprises the following steps:
Step 1: quantitatively evaluate the visibility of the noise contained in the compressed video; match the estimated noise-visibility measure with the subjective quality assessment of the image; establish the relation between noise visibility and subjective visual quality, namely the structural-similarity operator SSIM; and use this internal relation to evaluate the subjective quality of the video.
Step 2: quantitatively evaluate the video quality under different focus conditions with the blur-assessment algorithm; establish the relation between focus and subjective visual quality, namely the spatial sharpness map; and use this internal relation to evaluate the subjective quality of the video.
Step 3: based on the center-surround model, quantitatively evaluate the image quality under different contrast conditions; establish the relation between contrast and subjective visual quality, namely the global contrast function; and use this internal relation to evaluate the subjective quality of the video.
As is well known, the redundant information in a compressed image can be used to predict the information of the original image, and the structural similarity between the compressed image and the original image reflects the human visual system's evaluation of picture quality. This process is also similar to a person's subjective assessment of an image. In the 1860s, Hermann von Helmholtz already found that human evaluation of visible objects involves complex psychological and physiological processes. Following his conclusion, we assume that the human brain has an internal generative model that can use non-local information to infer the original image; a person then treats the gap between the external input and the internal inference as an evaluation of picture quality. In Step 1, a shifted-window-based filtering algorithm describes this internal generative model of the brain, and the structural-similarity operator SSIM measures the gap between the external input and the original-image information inferred by the generative model.
As shown in Fig. 1, each image block is shifted within a predefined interval to search for similar blocks, where two blocks are judged similar when the mean absolute error (MAE) between them is below a predetermined value. In general, each 8 × 8 block is shifted within the range [−8, 8], as shown above.
Because block discontinuities are caused by the quantization table, an optimized MAE threshold (MAXMAE) is chosen here to judge block similarity. MAXMAE is defined as
MAXMAE = (1/8) qt(0, 0) · exp(1 − Σ_{u=0}^{7} Σ_{v=0}^{7} qt(u, v) / L),
where qt(u, v) is the quantization table and L = 255 is the maximum gray level.
The set of similar shifts can then be expressed as
Θ(a, b) = { (i, j) : (1/64) Σ_{n=0}^{7} Σ_{m=0}^{7} | f_{(a,b)}^{(i,j)}(n, m) − f_{(a,b)}(n, m) | ≤ MAXMAE },
where (i, j) are the shift parameters of the block, −8 ≤ i, j ≤ 8, and (n, m) indexes the pixels inside block (a, b).
Finally, the reconstructed image f′_{(a,b)}(n, m) is defined as
f′_{(a,b)}(n, m) = (1 / N(Θ(a, b))) Σ_{(i,j) ∈ Θ(a,b)} f_{(a,b)}^{(i,j)}(n, m),
where N(Θ(a, b)) is the number of elements in Θ(a, b).
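The shifted-window search and reconstruction described above can be sketched in Python as follows. This is a minimal sketch under assumptions: the function names, the fixed similarity-threshold default, and the handling of image borders are illustrative rather than taken from the patent; in practice the threshold would come from MAXMAE, and NumPy is assumed available.

```python
import numpy as np

def maxmae(qt, L=255.0):
    # Adaptive MAE threshold from the 8x8 quantization table qt,
    # transcribed from the description: (1/8)*qt(0,0)*exp(1 - sum(qt)/L).
    return qt[0, 0] / 8.0 * np.exp(1.0 - qt.sum() / L)

def reconstruct_block(img, a, b, search=8, bs=8, thr=10.0):
    # Average every shifted bs x bs block within [-search, search] whose
    # MAE to block (a, b) does not exceed thr, i.e. the set Theta(a, b).
    # thr would normally be maxmae(qt); the default here is illustrative.
    h, w = img.shape
    y0, x0 = a * bs, b * bs
    ref = img[y0:y0 + bs, x0:x0 + bs].astype(np.float64)
    acc = np.zeros_like(ref)
    n = 0
    for di in range(-search, search + 1):
        for dj in range(-search, search + 1):
            y, x = y0 + di, x0 + dj
            if 0 <= y and y + bs <= h and 0 <= x and x + bs <= w:
                cand = img[y:y + bs, x:x + bs].astype(np.float64)
                if np.abs(cand - ref).mean() <= thr:  # member of Theta(a, b)
                    acc += cand
                    n += 1
    return acc / n  # n >= 1: the unshifted block always qualifies
```

Running `reconstruct_block` over every 8 × 8 block of a decoded frame yields the predicted original image f′ used below.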
The SSIM evaluation criterion is as follows. SSIM is an algorithm for measuring the structural similarity between images, and it consists of three parts:
the luminance comparison equation
l(x, y) = (2 μ_x μ_y + C_1) / (μ_x^2 + μ_y^2 + C_1),
where μ_x and μ_y are the means of images x and y, and C_1 is a constant that keeps the ratio stable when μ_x^2 + μ_y^2 is close to zero;
the contrast comparison equation
c(x, y) = (2 σ_x σ_y + C_2) / (σ_x^2 + σ_y^2 + C_2),
where σ_x and σ_y are the standard deviations of images x and y, and C_2 is a constant that keeps the ratio stable when σ_x^2 + σ_y^2 is close to zero;
the structural correlation equation of images x and y
s(x, y) = (σ_xy + C_3) / (σ_x σ_y + C_3),
where σ_xy is the covariance of images x and y and C_3 is a stability constant.
The SSIM index is composed of the three equations above:
SSIM(x, y) = [l(x, y)]^α · [c(x, y)]^β · [s(x, y)]^γ.
In practice, the SSIM index is usually replaced by the mean SSIM (MSSIM) to improve computational speed:
MSSIM(x, y) = (1/M) Σ_{i=1}^{M} SSIM(x_i, y_i),
where M is the number of local windows and x_i and y_i are the contents of the i-th windows of images x and y.
From the two parts above, a new no-reference blockiness measure can be derived:
MSSIM_NF = (1/M) Σ_{i=1}^{M} SSIM(f′_i, f_i),
where f′ is the predicted original image.
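The SSIM/MSSIM computation above can be sketched as follows. This is a simplified single-scale sketch with α = β = γ = 1, under which the three factors collapse to the familiar two-factor form; the constants C1 and C2 follow the common (0.01·255)² and (0.03·255)² convention, which is an assumption since the patent leaves them unspecified.

```python
import numpy as np

def ssim_window(x, y, C1=6.5025, C2=58.5225):
    # Single-window SSIM with alpha = beta = gamma = 1; the luminance,
    # contrast, and structure factors then collapse to two factors.
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()  # covariance sigma_xy
    return ((2 * mx * my + C1) * (2 * cxy + C2)) / \
           ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2))

def mssim(x, y, win=8):
    # Mean SSIM over non-overlapping win x win windows.
    vals = [ssim_window(x[i:i + win, j:j + win], y[i:i + win, j:j + win])
            for i in range(0, x.shape[0] - win + 1, win)
            for j in range(0, x.shape[1] - win + 1, win)]
    return float(np.mean(vals))
```

MSSIM_NF is then `mssim(f_pred, f)`, with `f_pred` produced by the shifted-window reconstruction.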
As shown in Fig. 2, the image-blur algorithm of Step 2 consists of two parts: measuring the slope of the local amplitude spectrum, and measuring the local maximum total variation in the spatial domain. As is well known, image blur is caused by the loss of high-frequency components. A good way to measure this effect is to examine the amplitude spectrum M(f) of the image, because it falls off roughly in inverse proportion to frequency, e.g. M(f) ∝ f^{−α}, where f is the frequency. Although the amplitude-spectrum slope can measure the sharpness or blurriness of an image, it does not take image contrast into account, and experiments show that contrast directly influences perceived sharpness and blurriness. To bring contrast into the measure, the local maximum total variation in the spatial domain is used to capture the influence of contrast on sharpness and blurriness; the combination of these two parts measures image blur well.
For the spectral sharpness map, the image I(x, y) is transformed by the DFT to obtain I_DFT(u, v), which is then converted to the polar form I_DFT(f, θ), with
f = √(u^2 + v^2) / M,  θ = arctan(v / u),
where M is the size of the local block.
Next, the amplitudes at the same frequency f are summed over all orientations:
Z_I(f) = Σ_θ | I_DFT(f, θ) |.
The amplitude-spectrum slope −α_I of image I(x, y) is obtained by fitting β f^{−α} to Z_I(f), i.e. by fitting the line −α log f + log β to log Z_I(f); its mathematical expression is
α_I = argmin_α ‖ β f^{−α} − Z_I(f) ‖^2.
From this formula, the larger α_I is, the less high-frequency content the image contains and the blurrier it is. The spectral sharpness map is therefore defined as
S_1(I) = 1 − 1 / (1 + e^{−3(α_I − 2)}).
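A sketch of the spectral sharpness measure S1, assuming a least-squares line fit in log-log coordinates as the way to realize the stated argmin (the patent does not fix the fitting procedure). NumPy is assumed; the radial binning at integer frequencies is an implementation choice.

```python
import numpy as np

def spectral_sharpness(block):
    # S1: sum the DFT magnitude over orientations at each radial frequency,
    # fit log z(f) ~ -alpha*log f + log beta by least squares, then map
    # alpha through the logistic 1 - 1/(1 + exp(-3*(alpha - 2))).
    F = np.fft.fftshift(np.fft.fft2(block.astype(np.float64)))
    mag = np.abs(F)
    h, w = mag.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h // 2, xx - w // 2).astype(int)
    z = np.bincount(r.ravel(), weights=mag.ravel())  # sum over orientations
    f = np.arange(1, min(h, w) // 2)                 # skip DC, stay in band
    logf = np.log(f)
    logz = np.log(z[1:len(f) + 1] + 1e-12)
    slope, _ = np.polyfit(logf, logz, 1)             # slope ~ -alpha
    alpha = -slope
    return 1.0 - 1.0 / (1.0 + np.exp(-3.0 * (alpha - 2.0)))
```

A sharp, noise-like block (flat spectrum, small α) scores near 1; a smooth block (steeply falling spectrum, large α) scores near 0.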
For the spatial measure of sharpness, the total variation in the spatial domain is used to measure the sharpness or blurriness of the image. The total variation v(I_b) of an image block I_b is computed by
v(I_b) = (1/255) Σ_{i,j} | I_{bi} − I_{bj} |,
where I_{bi} and I_{bj} are neighboring pixels in the 8-neighborhood within I_b. v(I_b) effectively measures the summed absolute differences between block I_b and its shifted versions. The spatial sharpness map is thus defined as
S_2(I) = (1/4) max_{τ ∈ I} v(τ),
where τ ranges over the 2 × 2 blocks of I.
The sharpness maps obtained in the spatial and frequency domains are combined into a single map:
S_3(I) = S_1(I)^γ × S_2(I)^{1−γ}.
The average of the largest 1% of the S_3(I) values measures the quality of the image:
S_3_INDEX = (1/N) Σ_{k=1}^{N} S̃_3(k),
where S̃_3 is S_3(I) sorted in descending order and N is 1% of the number of image pixels.
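The spatial sharpness S2 and the S3 pooling index can be sketched as follows. The block scan is exhaustive for clarity, and the helper names are illustrative; for a 2 × 2 block the 8-neighborhood wording reduces to the six pixel pairs.

```python
import numpy as np

def total_variation_2x2(blk):
    # v(I_b) for a 2x2 block: sum of absolute differences over all six
    # pixel pairs, scaled by 1/255.
    p = blk.astype(np.float64).ravel()
    return sum(abs(p[i] - p[j]) for i in range(4) for j in range(i + 1, 4)) / 255.0

def spatial_sharpness(img):
    # S2: one quarter of the maximum 2x2 total variation over the image.
    h, w = img.shape
    best = 0.0
    for i in range(h - 1):
        for j in range(w - 1):
            best = max(best, total_variation_2x2(img[i:i + 2, j:j + 2]))
    return best / 4.0

def s3_index(s3_values, frac=0.01):
    # S3_INDEX: average of the largest `frac` fraction of per-block S3 values.
    v = np.sort(np.asarray(s3_values, dtype=np.float64))[::-1]
    n = max(1, int(round(frac * v.size)))
    return float(v[:n].mean())
```

An ideal black/white step edge attains the maximum S2 of 1; a constant image scores 0.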
Step 3 is mainly based on local band-pass filtering and the center-surround retinal receptive-field model. Physiological findings on how chromatic aberration affects the eye's contrast sensitivity, as well as the notion of sub- and supra-threshold contrast perception, are also taken into account in the algorithm. Because the human eye has multiple spectral channels, the algorithm operates on multiple channels and then combines the per-channel contrast values into a single scalar via an Lp norm.
The contrast of a gray-scale image can be viewed as the ratio of local variation to local mean. Following an analysis of the Weber and Michelson definitions of contrast, the local band-limited contrast is used to measure the perceived contrast of the image.
The local band-limited contrast is defined as
c(x, y) = β(x, y) / λ(x, y),
where
β(x, y) = f(x, y) * b(x, y),
λ(x, y) = f(x, y) * l(x, y),
f(x, y) is the gray value of the image at coordinate (x, y), b is a band-pass filter, l is a low-pass filter, and * denotes convolution.
Briefly, in retinal perception the two main photoreceptor neurons are rods and cones; the signals produced by phototransduction in the retina give rise to excitatory and inhibitory fields, which together form the center-surround receptive field. The center-surround receptive field can in turn be modeled with a difference of Gaussians (DoG):
O(x, y) = C(x, y) − S(x, y),
where
C(x, y) = f(x, y) * g_1,
S(x, y) = f(x, y) * g_2.
O(x, y) is the output of the center-surround DoG model, C represents the center, S represents the surround, and g_1 and g_2 are two Gaussian kernels. Matching this DoG model with the band-limited contrast above gives
c(x, y) = β(x, y) / λ(x, y) = O(x, y) / S(x, y).
Considering that the human eye is multichannel, to simulate this characteristic we set
σ_{g1} = √( 2 log(1/M^2) / (v^2 (1 − M^2)) ),  v = 72π/80, 69π/80, …, 6π/80, 3π/80,
where σ_{g1} is the standard deviation of g_1 and σ_{g2} = M σ_{g1}; generally M = 3 is used. This constructs 24 channels. The multichannel contrast values form a multi-valued measure of image contrast; to obtain a single-valued measure, the channel values are combined with a multidimensional norm:
C_L(x, y) = ( Σ_{σ_{g1}} P^{−1} | c_{σ_{g1}}(x, y) |^p )^{1/p},
where P is the number of channels. Because the 1-norm (p = 1) mainly reflects sub-threshold contrast perception and the max-norm (p = ∞) mainly reflects supra-threshold contrast perception, only these two norms are considered.
The local band-limited contrast (LBPC) of the whole image is then
LBPC = std(C_1) + std(C_∞).
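The multichannel DoG contrast and LBPC pooling can be sketched as follows. This is a sketch under stated assumptions: Gaussian kernels are truncated at roughly 3σ and capped to the image size, borders are zero-padded, a small ε guards the division, and σ_g2 = M·σ_g1 is taken as the center-surround relation implied by the parameter M.

```python
import numpy as np

def blur(img, sigma):
    # Separable Gaussian blur; kernel truncated at ~3*sigma and capped to
    # the image size, with implicit zero padding at the borders.
    radius = max(1, min(int(3 * sigma), min(img.shape) // 2 - 1))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-(x ** 2) / (2.0 * sigma ** 2))
    k /= k.sum()
    tmp = np.apply_along_axis(lambda row: np.convolve(row, k, mode='same'), 1, img)
    return np.apply_along_axis(lambda col: np.convolve(col, k, mode='same'), 0, tmp)

def lbpc(img, M=3.0):
    # 24 DoG channels with peak frequencies v = 3*pi/80 ... 72*pi/80;
    # per-channel contrast c = (center - surround) / surround, pooled with
    # the mean (p = 1) and the max (p = inf) norms; LBPC is the sum of the
    # standard deviations of the two pooled maps.
    img = img.astype(np.float64)
    vs = np.arange(3, 73, 3) * np.pi / 80.0
    maps = []
    for v in vs:
        s1 = np.sqrt(2.0 * np.log(1.0 / M ** 2) / (v ** 2 * (1.0 - M ** 2)))
        center = blur(img, s1)            # g1 response
        surround = blur(img, M * s1)      # g2 response, sigma_g2 = M*sigma_g1
        maps.append(np.abs((center - surround) / (surround + 1e-6)))
    maps = np.array(maps)
    c1 = maps.mean(axis=0)                # p = 1: sub-threshold pooling
    cinf = maps.max(axis=0)               # p = inf: supra-threshold pooling
    return float(c1.std() + cinf.std())
```

Note that the sigma formula is well defined: both log(1/M²) and (1 − M²) are negative for M > 1, so the quotient under the square root is positive.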
To better measure image contrast, global contrast is considered in addition to local contrast, by way of the statistics of the image histogram; the invention uses the following measure of overall contrast:
D_{n−1} = (1 / (N_pix (N_pix − 1))) Σ_{i=0}^{M−2} Σ_{j=i+1}^{M−1} H_{n−1}(i) H_{n−1}(j) (j − i),  i, j ∈ [0, M−1],
where [0, M−1] is the gray-level range of the image, N_pix is the number of pixels in the image, and H(i) is the histogram count of gray level i. D is chosen as the evaluation criterion because a larger D means that the gray levels carrying histogram mass lie farther apart, which from one perspective indicates higher image contrast.
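The global histogram contrast D can be sketched directly from its definition (the frame index n−1 is dropped; the function operates on a single gray-scale image, and NumPy is assumed):

```python
import numpy as np

def histogram_contrast(img, levels=256):
    # Global contrast D: histogram-weighted mean gray-level distance between
    # pixel pairs; larger D means histogram mass sits farther apart.
    H, _ = np.histogram(img, bins=levels, range=(0, levels))
    H = H.astype(np.float64)
    n = H.sum()
    d = 0.0
    for i in range(levels - 1):
        if H[i] == 0.0:
            continue
        j = np.arange(i + 1, levels)
        d += (H[i] * H[j] * (j - i)).sum()
    return d / (n * (n - 1))
```

A bimodal black/white image scores high; a constant image scores exactly 0.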
Compared with traditional video subjective-quality assessment methods, the no-reference evaluation method proposed by the present invention has the following advantages:
1. It is a no-reference assessment technique. No original video sequence is needed when assessing video quality, so the method is applicable to most scenarios in which the original sequence is unavailable, for example the highway video-surveillance systems and Internet-cafe video-surveillance systems used in the public-security industry.
2. Because the no-reference evaluation of subjective video quality is carried out by an objective algorithm, no manual intervention is needed; the evaluation result is not influenced by subjective human factors and is therefore more objective.
3. Because a computer replaces the human assessor, large-volume, long-duration video sequences can be assessed comprehensively and quickly.
The above shows and describes the basic principles, principal features, and advantages of the present invention. Those skilled in the art should understand that the present invention is not limited to the embodiments described above; the embodiments and the description merely illustrate the principles of the invention. Various changes and improvements may be made without departing from the spirit and scope of the invention, and all such changes and improvements fall within the scope of the claimed invention. The protection scope of the present invention is defined by the appended claims and their equivalents.

Claims (4)

1. A no-reference evaluation method for video subjective quality, characterized in that it comprises the following three aspects:
1) quantitatively evaluating the visibility of the noise contained in the compressed video, establishing the relation between noise visibility and subjective visual quality, and thereby evaluating the subjective quality of the video;
2) quantitatively evaluating the video quality under different focus conditions with a blur-assessment algorithm, establishing the relation between focus and subjective visual quality, and thereby evaluating the subjective quality of the video;
3) quantitatively evaluating the image quality under different contrast conditions, establishing the relation between contrast and subjective visual quality, and thereby evaluating the subjective quality of the video.
2. The no-reference evaluation method for video subjective quality according to claim 1, characterized in that the relation between noise visibility and subjective visual quality is established through the structural-similarity operator SSIM; the SSIM index comprises:
the luminance comparison equation
l(x, y) = (2 μ_x μ_y + C_1) / (μ_x^2 + μ_y^2 + C_1),
where μ_x and μ_y are the means of images x and y, and C_1 is a constant that keeps the ratio stable when μ_x^2 + μ_y^2 is close to zero;
the contrast comparison equation
c(x, y) = (2 σ_x σ_y + C_2) / (σ_x^2 + σ_y^2 + C_2),
where σ_x and σ_y are the standard deviations of images x and y, and C_2 is a constant that keeps the ratio stable when σ_x^2 + σ_y^2 is close to zero;
the structural correlation equation of images x and y
s(x, y) = (σ_xy + C_3) / (σ_x σ_y + C_3),
where σ_xy is the covariance of images x and y and C_3 is a stability constant;
the three equations together forming
SSIM(x, y) = [l(x, y)]^α · [c(x, y)]^β · [s(x, y)]^γ.
3. The no-reference evaluation method for video subjective quality according to claim 1, characterized in that the relation between focus and subjective visual quality is established through the spatial sharpness map; the spectral sharpness map is defined as
S_1(I) = 1 − 1 / (1 + e^{−3(α_I − 2)});
in the spatial measure of sharpness, the total variation in the spatial domain measures the sharpness or blurriness of the image, and the total variation v(I_b) of an image block I_b is computed by
v(I_b) = (1/255) Σ_{i,j} | I_{bi} − I_{bj} |,
where I_{bi} and I_{bj} are neighboring pixels in the 8-neighborhood within I_b; v(I_b) effectively measures the summed absolute differences between block I_b and its shifted versions, and the spatial sharpness map is thus defined as
S_2(I) = (1/4) max_{τ ∈ I} v(τ),
where τ ranges over the 2 × 2 blocks of I; the average of the largest 1% of the S_3(I) values measures the quality of the image:
S_3_INDEX = (1/N) Σ_{k=1}^{N} S̃_3(k),
where S̃_3 is S_3(I) sorted in descending order and N is 1% of the number of image pixels.
4. The no-reference evaluation method for video subjective quality according to claim 1, characterized in that the relation between contrast and subjective visual quality is established through the global contrast function
D_{n−1} = (1 / (N_pix (N_pix − 1))) Σ_{i=0}^{M−2} Σ_{j=i+1}^{M−1} H_{n−1}(i) H_{n−1}(j) (j − i),  i, j ∈ [0, M−1],
where [0, M−1] is the gray-level range of the image, N_pix is the number of pixels in the image, and H(i) is the histogram count of gray level i.
CN201210246403.5A 2012-07-16 2012-07-16 No-reference assessment method for subjective video quality Active CN102740114B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210246403.5A CN102740114B (en) 2012-07-16 2012-07-16 No-reference assessment method for subjective video quality

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210246403.5A CN102740114B (en) 2012-07-16 2012-07-16 No-reference assessment method for subjective video quality

Publications (2)

Publication Number Publication Date
CN102740114A true CN102740114A (en) 2012-10-17
CN102740114B CN102740114B (en) 2016-12-21

Family

ID=46994778

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210246403.5A Active CN102740114B (en) 2012-07-16 2012-07-16 No-reference assessment method for subjective video quality

Country Status (1)

Country Link
CN (1) CN102740114B (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103414915A (en) * 2013-08-22 2013-11-27 合一网络技术(北京)有限公司 Quality evaluation method and device for uploaded videos of websites
CN103458267A (en) * 2013-09-04 2013-12-18 中国传媒大学 Video picture quality subjective evaluation method and system
CN105635727A (en) * 2015-12-29 2016-06-01 北京大学 Subjective image quality evaluation method based on paired comparison and device thereof
CN106791353A (en) * 2015-12-16 2017-05-31 深圳市汇顶科技股份有限公司 The methods, devices and systems of auto-focusing
CN107371015A (en) * 2017-07-21 2017-11-21 华侨大学 No-reference quality evaluation method for contrast-distorted images
CN108109147A (en) * 2018-02-10 2018-06-01 北京航空航天大学 A kind of reference-free quality evaluation method of blurred picture
CN108198160A (en) * 2017-12-28 2018-06-22 深圳云天励飞技术有限公司 Image processing method, device, image filtering method, electronic equipment and medium
CN108776958A (en) * 2018-05-31 2018-11-09 重庆瑞景信息科技有限公司 Mix the image quality evaluating method and device of degraded image
CN110378893A (en) * 2019-07-24 2019-10-25 北京市博汇科技股份有限公司 Image quality evaluating method, device and electronic equipment
CN112085102A (en) * 2020-09-10 2020-12-15 西安电子科技大学 No-reference video quality evaluation method based on three-dimensional space-time characteristic decomposition

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1885954A (en) * 2005-06-23 2006-12-27 华为技术有限公司 Blocking effect measuring method and video quality estimation method
CN101345891A (en) * 2008-08-25 2009-01-14 重庆医科大学 Non-reference picture quality appraisement method based on information entropy and contrast
US7733372B2 (en) * 2003-12-02 2010-06-08 Agency For Science, Technology And Research Method and system for video quality measurements
CN101853504A (en) * 2010-05-07 2010-10-06 厦门大学 Image quality evaluating method based on visual character and structural similarity (SSIM)
CN101996406A (en) * 2010-11-03 2011-03-30 中国科学院光电技术研究所 No-reference structural sharpness image quality evaluation method

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7733372B2 (en) * 2003-12-02 2010-06-08 Agency For Science, Technology And Research Method and system for video quality measurements
CN1885954A (en) * 2005-06-23 2006-12-27 华为技术有限公司 Blocking effect measuring method and video quality estimation method
CN101345891A (en) * 2008-08-25 2009-01-14 重庆医科大学 Non-reference picture quality appraisement method based on information entropy and contrast
CN101853504A (en) * 2010-05-07 2010-10-06 厦门大学 Image quality evaluating method based on visual character and structural similarity (SSIM)
CN101996406A (en) * 2010-11-03 2011-03-30 中国科学院光电技术研究所 No-reference structural sharpness image quality evaluation method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CONG T VU ET AL: "S3:A Spectral and Spatial Measure of Local Perceived Sharpness in Natural Images", 《IEEE TRANSACTIONS ON IMAGE PROCESSING》 *

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103414915B (en) * 2013-08-22 2014-07-16 合一网络技术(北京)有限公司 Quality evaluation method and device for uploaded videos of websites
CN103414915A (en) * 2013-08-22 2013-11-27 合一网络技术(北京)有限公司 Quality evaluation method and device for uploaded videos of websites
CN103458267A (en) * 2013-09-04 2013-12-18 中国传媒大学 Video picture quality subjective evaluation method and system
CN103458267B (en) * 2013-09-04 2016-07-06 中国传媒大学 A kind of video picture quality subjective evaluation and system
CN106791353B (en) * 2015-12-16 2019-06-14 深圳市汇顶科技股份有限公司 The methods, devices and systems of auto-focusing
CN106791353A (en) * 2015-12-16 2017-05-31 深圳市汇顶科技股份有限公司 The methods, devices and systems of auto-focusing
CN105635727A (en) * 2015-12-29 2016-06-01 北京大学 Subjective image quality evaluation method based on paired comparison and device thereof
CN105635727B (en) * 2015-12-29 2017-06-16 北京大学 Evaluation method and device based on the image subjective quality for comparing in pairs
CN107371015A (en) * 2017-07-21 2017-11-21 华侨大学 No-reference quality evaluation method for contrast-distorted images
CN108198160A (en) * 2017-12-28 2018-06-22 深圳云天励飞技术有限公司 Image processing method, device, image filtering method, electronic equipment and medium
CN108109147A (en) * 2018-02-10 2018-06-01 北京航空航天大学 A kind of reference-free quality evaluation method of blurred picture
CN108109147B (en) * 2018-02-10 2022-02-18 北京航空航天大学 No-reference quality evaluation method for blurred image
CN108776958A (en) * 2018-05-31 2018-11-09 重庆瑞景信息科技有限公司 Mix the image quality evaluating method and device of degraded image
CN110378893A (en) * 2019-07-24 2019-10-25 北京市博汇科技股份有限公司 Image quality evaluating method, device and electronic equipment
CN112085102A (en) * 2020-09-10 2020-12-15 西安电子科技大学 No-reference video quality evaluation method based on three-dimensional space-time characteristic decomposition
CN112085102B (en) * 2020-09-10 2023-03-10 西安电子科技大学 No-reference video quality evaluation method based on three-dimensional space-time characteristic decomposition

Also Published As

Publication number Publication date
CN102740114B (en) 2016-12-21

Similar Documents

Publication Publication Date Title
CN102740114A (en) Non-parameter evaluation method for subjective quality of video
Wang et al. A fast roughness-based approach to the assessment of 3D mesh visual quality
Ferwerda et al. A model of visual masking for computer graphics
Wang et al. Multiscale structural similarity for image quality assessment
CN102881010B (en) Method for evaluating perception sharpness of fused image based on human visual characteristics
Vu et al. S3: a spectral and spatial measure of local perceived sharpness in natural images
Moulden et al. The standard deviation of luminance as a metric for contrast in random-dot images
CN102113335B (en) Image processing apparatus and method
Gao et al. Image quality assessment and human visual system
Cavalcante et al. Measuring streetscape complexity based on the statistics of local contrast and spatial frequency
DE102015219547A1 (en) System and method for determining respiratory rate from a video
Jänicke et al. A salience‐based quality metric for visualization
CN101976444B (en) Pixel type based objective assessment method of image quality by utilizing structural similarity
Cadik et al. Evaluation of two principal approaches to objective image quality assessment
CN104361593A (en) Color image quality evaluation method based on HVSs and quaternions
Geng et al. A stereoscopic image quality assessment model based on independent component analysis and binocular fusion property
CN106580350A (en) Fatigue condition monitoring method and device
CN106127234A (en) The non-reference picture quality appraisement method of feature based dictionary
DE102020132238A1 (en) PROCESSES, SYSTEMS, ITEMS OF MANUFACTURING AND EQUIPMENT FOR THE FURTHER DEVELOPMENT OF DEPTH TRUST MAPS
CN114120176A (en) Behavior analysis method for fusion of far infrared and visible light video images
Jiang et al. Quality assessment for virtual reality technology based on real scene
Bhatnagar et al. A new image fusion technique based on directive contrast
Lu et al. Point cloud quality assessment via 3D edge similarity measurement
CN107292866A (en) A kind of method for objectively evaluating image quality based on relative gradient
Chamaret et al. No-reference harmony-guided quality assessment

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant