CN105991995B - No-reference video quality evaluation method based on 3D-DCT domain statistical analysis


Info

Publication number
CN105991995B
CN105991995B
Authority
CN
China
Prior art keywords
video
coefficient
dct
domain
space
Prior art date
Legal status
Active
Application number
CN201510080147.0A
Other languages
Chinese (zh)
Other versions
CN105991995A (en)
Inventor
李学龙 (Li Xuelong)
卢孝强 (Lu Xiaoqiang)
郭群 (Guo Qun)
Current Assignee
Xi'an Institute of Optics and Precision Mechanics of CAS
Original Assignee
Xi'an Institute of Optics and Precision Mechanics of CAS
Priority date
Filing date
Publication date
Application filed by Xi'an Institute of Optics and Precision Mechanics of CAS
Priority to CN201510080147.0A
Publication of CN105991995A
Application granted
Publication of CN105991995B
Legal status: Active


Landscapes

  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)

Abstract

The invention discloses a no-reference video quality evaluation method based on statistical analysis in the 3D-DCT domain, which mainly addresses two limitations of existing methods: applicability to only a limited set of distortion types, and insufficient use of the spatio-temporal information of the video. The implementation steps are: (1) perform local 3D-DCT computation on all videos in the data set; (2) from the 3D-DCT coefficients, extract features that reflect different statistical properties; (3) fuse the features to obtain an overall feature for each video; (4) divide the data set into a training set and a test set, use the training set to train a regression model mapping features to true quality scores, and then predict the quality scores of the test videos with the learned regression model. The correlation between the predicted results and the true scores is computed as the performance indicator of the quality evaluation. Compared with conventional methods, the present invention requires no additional motion estimation; it only needs to perform statistical analysis in the 3D-DCT domain, thereby capturing the spatial and temporal statistical properties of the video simultaneously and improving the consistency between objective quality prediction and subjective assessment.

Description

No-reference video quality evaluation method based on 3D-DCT domain statistical analysis
Technical field
The invention belongs to the field of image/video processing technology, and in particular relates to a video quality evaluation method that can be used in fields such as multimedia information processing and film/video production.
Background art
With the rapid development of multimedia and network transmission technology, massive video signals are generated, transmitted, stored and displayed between user terminals and video service providers, and used in different applications such as internet television, video surveillance and video conferencing. However, due to limitations such as compression loss and transmission channel bandwidth, the original video inevitably suffers from distortions or loses part of its information, which degrades video quality. In order to keep the video quality at the user terminal within an acceptable range and improve the user's quality of experience, it is necessary and important to design an accurate video quality evaluation algorithm. The purpose of objective quality evaluation methods is to automatically evaluate the visual quality of a video without the participation of human observers. According to whether a reference video is involved in the evaluation process, objective evaluation methods can be divided into three classes: full-reference, reduced-reference and no-reference. Among them, no-reference video quality evaluation is the most widely applicable and the most challenging class of work, and is the problem explored by the present invention. At present, no-reference video quality evaluation methods face three main challenges: 1) the absence of reference video information makes it impossible to define quality from the perspective of similarity or fidelity; 2) the spatio-temporal complexity of video is high, and it is difficult to extract quality-relevant features that reflect the spatio-temporal characteristics of the video; 3) the understanding of the human visual system is limited, so the mapping relationship between the extracted features and subjective visual quality is difficult to model.
Currently, no-reference video quality evaluation methods fall broadly into two classes:
The first class comprises evaluation methods for specific video types. These methods study the characteristics of particular distortions, such as compression or transmission errors, and their mapping relationship to the visual quality of the video, in order to evaluate the quality of videos affected by that kind of distortion. For example, Zhang et al., in "F. Zhang, W. Lin, Z. Chen, and K. N. Ngan. Additive log-logistic model for networked video quality assessment. IEEE Transactions on Image Processing, 22(4):1536-1547, 2013", propose a quality evaluation method for an IPTV (Internet Protocol Television) database. The method first extracts features that reflect the severity of video compression, patching and freezing, and proposes an ALM (additive log-logistic model) to model the non-linear relationship between the multi-dimensional features and subjective assessment. Through classical statistical inference, features can be selected and the model parameters estimated.
The shortcoming of this class of methods is that they are only applicable to videos affected by specific distortion types; moreover, extracting the features requires coding and channel information of the video, such as quantization parameters and packet loss rate, and in most applications this information is not available.
The second class comprises general-purpose video quality evaluation methods. Existing methods of this class are based on research into NVS (natural video statistics). They assume that videos of natural (undistorted) scenes obey NVS models, and that degradation of video quality causes deviations from these models, so that video quality can be evaluated by measuring the distance between a distorted video and the NVS model. It is generally believed that the human visual system evolved in accordance with the natural environment, so NVS also reflects the characteristics of human perception to a certain extent. Recently, Saad et al., in "M. A. Saad, A. C. Bovik, C. Charrier. Blind prediction of natural video quality. IEEE Transactions on Image Processing, 23(3):1352-1365, 2014", proposed a method based on an NVS model. In that method, local spatial and temporal frequency-domain information is obtained by a 2-D DCT of frame differences, the frequency coefficients are modeled with a generalized Gaussian model, and the ratios of the low-, mid- and high-frequency model parameters are taken as the NVS model features. In addition, the method proposes a measure of motion coherence in the video to reflect the masking of distortion by coherent motion.
The shortcoming of this method is that the frame-difference approach can only capture the change between two consecutive frames and cannot reflect short-term temporal frequency-domain information over a longer duration. In addition, the measurement of motion coherence requires motion estimation to be performed on the video first, which makes the algorithm time-consuming.
Summary of the invention
It is an object of the invention to address the deficiencies of the above existing methods by proposing a general-purpose no-reference video quality evaluation method based on 3D-DCT domain statistical analysis that adapts to various types of no-reference video and makes use of short-term temporal frequency-domain information.
The specific technical solution of the present invention is as follows:
The present invention provides a no-reference video quality evaluation method based on 3D-DCT domain statistical analysis, comprising the following steps:
1) Perform 3D-DCT computation on each video in the video quality assessment database to obtain, in one-dimensional vector form, the AC coefficients that reflect the spatio-temporal frequency-domain information of the video;
2) Using the AC coefficients in one-dimensional vector form from step 1), extract the spatio-temporal statistical features of the M × N space-time cubes in each video; the spatio-temporal statistical features of the M × N space-time cubes in a video include basic spectral features, energy fluctuation features, shape parameter features and distribution change features;
2.1) Extract the basic spectral features of the 3D-DCT domain of the video;
2.1.1) Obtain the basic spectral feature of each of the K AC frequencies as the ratio between the mean and the variance of the unsigned AC coefficients, the unsigned AC coefficients being the absolute values of the AC coefficients;
2.1.2) Concatenate the basic spectral features of all K AC frequencies to form the feature f1 = [s1, s2, ..., sK]^T;
2.2) Extract the features describing the energy fluctuation in the 3D-DCT domain of the video; the energy fluctuation features of the video 3D-DCT domain include the average spectral energy and an entropy measure;
2.2.1) Compute the average spectral energy and the entropy measure on each AC frequency, where rk is the average spectral energy on the k-th AC frequency, ek is the entropy measure on the k-th AC frequency, pi(|Ck|) is the probability that the unsigned AC coefficients on the k-th AC frequency fall into the i-th bin, Nb is the total number of bins, and Ck(m, n) denotes the k-th AC coefficient of the 3D-DCT coefficient block in row m and column n after conversion to one-dimensional vector form;
2.2.2) Concatenate the average spectral energies and the entropy measures of all K AC frequencies to form, respectively, the features f2 = [r1, r2, ..., rK]^T and f3 = [e1, e2, ..., eK]^T;
2.3) Extract the shape parameter features describing the 3D-DCT domain of the video;
2.3.1) Compute the probability distribution of the AC coefficients on each AC frequency, fit the probability distribution of the AC coefficients on each AC frequency with a generalized Gaussian distribution, and obtain the shape parameter of the distribution on each AC frequency;
2.3.2) Concatenate the shape parameters of the distributions on all K AC frequencies to form the corresponding feature f4 = [γ1, γ2, ..., γK]^T;
2.4) Extract the distribution change features of the 3D-DCT domain of the video;
2.4.1) Compute, as a distance measure, the city-block distance between the AC-coefficient distribution on each AC frequency and the average distribution of the AC coefficients over all K AC frequencies, where dk is the distance measure between the AC-coefficient distribution on the k-th AC frequency and the average distribution of the AC coefficients over all K AC frequencies, and pi(|Ck|) is the probability that the unsigned AC coefficients on the k-th AC frequency fall into the i-th bin;
2.4.2) Concatenate the distance measures computed on all K AC frequencies to form the feature f5 = [d1, d2, ..., dK];
3) From the M × N space-time cube statistical features in each video obtained in step 2), average-pool the statistical features along the time axis and then apply PCA dimensionality reduction to obtain the quality evaluation feature;
4) Train a regression model on the video quality evaluation database, test it, and compute the correlation coefficients between the prediction results and the true quality scores;
4.1) Divide the videos in the video quality evaluation database into two parts: one part of the videos serves as the training set and the remaining videos serve as the test set; the videos in the training set and the videos in the test set are completely non-overlapping in content;
The number of videos in the training set is P, and the quality evaluation features extracted from the training set are Xtrain ∈ R^(d×P), where R^(d×P) denotes the d × P-dimensional real space; the true quality scores of the training set are ytrain ∈ R^P, where R^P denotes the P-dimensional real space; the regression model is trained with the training set;
4.2) Let the number of videos in the test set be Q; the quality evaluation features extracted from the test set are Xtest ∈ R^(d×Q), where R^(d×Q) denotes the d × Q-dimensional real space, and the true quality scores of the test set are ytest ∈ R^Q, where R^Q denotes the Q-dimensional real space; input the test set Xtest into the trained regression model to obtain the predicted scores ypredict ∈ R^Q of the videos in the test set;
4.3) Compute the correlation between the predicted scores ypredict of the videos in the test set and the true quality scores ytest using the Pearson linear correlation coefficient and the Spearman rank-order correlation coefficient;
4.4) Repeat steps 4.1) to 4.3), computing the correlation coefficients over all videos in the video quality assessment database to realize the assessment.
The specific sub-steps of the above step 1) are:
1.1) Divide each video in the video quality assessment database into space-time cubes of size a × a × a; adjacent space-time cubes overlap by two pixels in space and do not overlap in the time dimension; each video has M × N × T space-time cubes, where M, N and T respectively denote the number of space-time cubes along the horizontal axis, the vertical axis and the time axis of the video;
1.2) Perform 3D-DCT computation on the M × N × T space-time cubes to obtain M × N × T 3D-DCT coefficient blocks; each 3D-DCT coefficient block contains 1 DC coefficient and K AC coefficients;
In a 3D-DCT coefficient block, each AC coefficient corresponds to one AC frequency; in each video, one AC frequency therefore corresponds to M × N × T AC coefficients;
1.3) Convert the 3D-DCT coefficient blocks to one-dimensional vector form using the reshape function in Matlab; Ck(m, n) denotes the k-th AC coefficient of the 3D-DCT coefficient block in row m and column n after conversion to one-dimensional vector form;
The specific sub-steps of the above step 3) are:
3.1) Average each kind of feature extracted in step 2) over the time dimension to obtain the global statistical features of all videos in the video quality assessment database;
3.2) Apply PCA dimensionality reduction to the five kinds of global features from step 3.1); the dimensionality-reduced features are combined into the feature finally used for quality evaluation, namely x, where x ∈ R^d indicates that x is a vector in the d-dimensional real space.
The present invention has the following advantages:
1. Since the present invention uses the 3D-DCT to transform the video into a three-dimensional transform domain, the transform coefficients can capture the local spatial and temporal frequency-domain information of the video simultaneously.
2. The four kinds of simple features extracted effectively reflect the spatio-temporal statistical properties of the video in different aspects. These features are closely related to the visual quality of the video, and the predicted scores obtained by regressing on these features are highly consistent with the true scores, thereby realizing the assessment of no-reference video quality.
Brief description of the drawings
Fig. 1 is the flow chart of the no-reference video quality evaluation method based on 3D-DCT domain statistical analysis in the present invention.
Fig. 2 is a scatter plot of the predicted quality scores obtained by the method of the present invention on the LIVE database against the true subjective quality scores.
Detailed description of the embodiments
Referring to Fig. 1, the present invention is implemented by the following steps:
Step 1) Perform 3D-DCT computation on each video in the video quality assessment database, which is taken from the document "K. Seshadrinathan, R. Soundararajan, A. C. Bovik, and L. K. Cormack. Study of subjective and objective quality assessment of video. IEEE Transactions on Image Processing, 19(6):1427-1441, 2010", to obtain, in one-dimensional vector form, the AC coefficients reflecting the spatio-temporal frequency-domain information of the video;
The specific sub-steps are:
Step 1.1) Divide each video in the video quality assessment database into space-time cubes of size a × a × a; adjacent space-time cubes overlap by two pixels in space and do not overlap in the time dimension; each video has M × N × T space-time cubes, where M, N and T respectively denote the number of space-time cubes along the horizontal axis, the vertical axis and the time axis of the video;
Step 1.2) Perform 3D-DCT computation on each space-time cube to obtain M × N × T 3D-DCT coefficient blocks; each 3D-DCT coefficient block contains 1 DC coefficient and K AC coefficients;
In a 3D-DCT coefficient block, each AC coefficient corresponds to one AC frequency; in each video, one AC frequency therefore corresponds to M × N × T AC coefficients;
For example, for a three-dimensional signal A of size N1 × N2 × N3, the 3D-DCT is computed as
C(u, v, w) = w1(u) w2(v) w3(w) Σ_{x=0}^{N1-1} Σ_{y=0}^{N2-1} Σ_{z=0}^{N3-1} A(x, y, z) cos[π(2x+1)u / (2N1)] cos[π(2y+1)v / (2N2)] cos[π(2z+1)w / (2N3)],
where A(x, y, z) is the three-dimensional signal to be transformed, C(u, v, w) is the transformed frequency coefficient, x = 0, 1, ..., N1-1, y = 0, 1, ..., N2-1, z = 0, 1, ..., N3-1 are the spatio-temporal coordinate indices of the three-dimensional signal, u = 0, 1, ..., N1-1, v = 0, 1, ..., N2-1, w = 0, 1, ..., N3-1 are the frequency indices of the three-dimensional transform, and wi(·), i = 1, 2, 3, are the usual DCT normalization weights, with wi(0) = sqrt(1/Ni) and wi(n) = sqrt(2/Ni) for n > 0;
Step 1.3) Convert the 3D-DCT coefficient blocks to one-dimensional vector form using the reshape function in Matlab; Ck(m, n) denotes the k-th AC coefficient of the 3D-DCT coefficient block in row m and column n after conversion to one-dimensional vector form;
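As an illustration only (not part of the original disclosure), the following Python sketch mirrors step 1): it cuts a grayscale video into a × a × a space-time cubes with a two-pixel spatial overlap and no temporal overlap, applies a 3D-DCT to each cube with scipy, and flattens every coefficient block into a one-dimensional vector whose first entry is the DC coefficient and whose remaining K = a³ − 1 entries are the AC coefficients. The default cube size a = 4 and the use of scipy.fft.dctn are implementation assumptions; the patent itself only mentions Matlab's reshape function.

```python
import numpy as np
from scipy.fft import dctn

def video_to_coeff_blocks(video, a=4):
    """video: (H, W, T) grayscale array. Returns an array of shape (M, N, T_blk, a**3);
    element [m, n, t, 0] is the DC coefficient, elements [m, n, t, 1:] are the K AC coefficients."""
    H, W, T = video.shape
    step_s, step_t = a - 2, a                        # two-pixel spatial overlap, no temporal overlap
    ys = range(0, H - a + 1, step_s)
    xs = range(0, W - a + 1, step_s)
    ts = range(0, T - a + 1, step_t)
    blocks = np.empty((len(ys), len(xs), len(ts), a ** 3))
    for m, y in enumerate(ys):
        for n, x in enumerate(xs):
            for t, z in enumerate(ts):
                cube = video[y:y + a, x:x + a, z:z + a]
                coeff = dctn(cube, norm='ortho')     # local 3D-DCT of one space-time cube
                blocks[m, n, t] = coeff.reshape(-1)  # flatten the coefficient block to a 1-D vector
    return blocks
```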
Step 2) Extract the statistical features of the M × N space-time cubes in the video. Experimental examination shows that the differences between the 3D-DCT coefficient statistics of undistorted videos and of distorted videos are mainly reflected in the amplitude, the variance, the peakedness and tail behavior of the coefficients, and the degree of similarity between the distributions on different frequencies. From these aspects, the present invention proposes the following quality-relevant features.
Step 2.1) Extract the basic spectral features of the 3D-DCT domain of the video.
The present invention uses the mean μ(|Ck|) and the variance σ(|Ck|) of the unsigned AC coefficients (i.e. the absolute values of the AC coefficients) over a local region of the video (the M × N space-time cubes) to measure the spectral amplitude and contrast on each of the K AC frequencies. Taking into account the contrast gain control present in visual perception, the present invention describes the basic spectral characteristic on each AC frequency by the contrast-normalized amplitude sk = μ(|Ck|) / σ(|Ck|).
The sk of all K AC frequencies are concatenated to form the feature f1 = [s1, s2, ..., sK]^T.
Step 2.2) Extract the features describing the energy fluctuation in the 3D-DCT domain of the video.
The present invention characterizes the energy of the 3D-DCT domain of the video by the average spectral energy and an entropy measure, computed on each AC frequency, where rk is the average spectral energy on the k-th AC frequency, ek = -Σ_{i=1}^{Nb} pi(|Ck|) log pi(|Ck|) is the entropy measure on the k-th AC frequency, pi(|Ck|) is the probability that the unsigned AC coefficients on the k-th AC frequency fall into the i-th bin, Nb is the total number of bins (Nb = 128 in the present invention), and Ck(m, n) denotes the k-th AC coefficient of the 3D-DCT coefficient block in row m and column n after conversion to one-dimensional vector form.
The rk and ek corresponding to all K AC frequencies are concatenated, respectively, into the features f2 = [r1, r2, ..., rK]^T and f3 = [e1, e2, ..., eK]^T, which describe the fluctuation of energy across different frequencies.
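As an illustration only, the sketch below computes f1, f2 and f3 for one time slice of coefficient blocks, i.e. an array C of shape (M, N, K) whose entry C[m, n, k] is Ck(m, n). The patent does not reproduce its exact formulas for sk and rk, so two forms are assumed here: the contrast-normalized amplitude sk is taken as the mean-to-standard-deviation ratio of |Ck|, and the average spectral energy rk as the mean squared coefficient magnitude; ek is the Shannon entropy of an Nb = 128-bin histogram of |Ck|, matching the bin count stated in the text.

```python
import numpy as np

def basic_energy_features(C, n_bins=128):
    """C: (M, N, K) array of AC coefficients for one time slice.
    Returns s, r, e, each of length K (the per-frequency entries of f1, f2, f3)."""
    A = np.abs(C).reshape(-1, C.shape[-1])           # |Ck(m, n)| flattened over the M*N blocks
    s = A.mean(axis=0) / (A.std(axis=0) + 1e-12)     # assumed contrast-normalized amplitude s_k
    r = (A ** 2).mean(axis=0)                        # assumed average spectral energy r_k
    e = np.empty(A.shape[1])
    for k in range(A.shape[1]):
        counts, _ = np.histogram(A[:, k], bins=n_bins)
        p = counts / counts.sum()                    # bin probabilities p_i(|C_k|)
        p = p[p > 0]
        e[k] = -(p * np.log2(p)).sum()               # Shannon entropy e_k over the Nb bins
    return s, r, e
```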
Step 2.3) Extract the shape parameter features describing the 3D-DCT domain of the video: compute the probability distribution of the AC coefficients on each AC frequency and fit it with a generalized Gaussian distribution. The generalized Gaussian distribution is a family of exponential-type density functions that can effectively model distributions with a sharp peak and long tails. Its formula is
f(x | α, β, γ) = α exp(-(β |x - μ|)^γ),
where x is the random variable, μ is the mean, α and β are the normalization and scale parameters of the probability distribution model, and γ is the shape parameter. The shape parameter is the most important parameter of the distribution: it determines the decay rate of the probability density and thus the peakedness and tail length of the distribution. The present invention uses the method proposed in "K. Sharifi, A. Leon-Garcia. Estimation of shape parameter for generalized Gaussian distributions in subband decompositions of video. IEEE Transactions on Circuits and Systems for Video Technology, 5(1):52-56, 1995" to estimate the shape parameter of the distribution on each frequency, yielding the corresponding feature f4 = [γ1, γ2, ..., γK]^T.
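For illustration, the following sketch estimates the shape parameters with the common moment-matching approach, which inverts the ratio of the squared mean absolute value to the mean square against the analytic generalized-Gaussian ratio function; this is in the spirit of the Sharifi and Leon-Garcia estimator cited above, but the patent's exact estimator may differ in its details.

```python
import numpy as np
from scipy.special import gamma as gamma_fn

# Lookup of the generalized-Gaussian ratio function rho(g) = Gamma(2/g)^2 / (Gamma(1/g) * Gamma(3/g)).
_G = np.arange(0.2, 10.0, 0.001)
_RHO = gamma_fn(2.0 / _G) ** 2 / (gamma_fn(1.0 / _G) * gamma_fn(3.0 / _G))

def ggd_shape(x):
    """Moment-matching estimate of the generalized-Gaussian shape parameter of the samples x."""
    x = np.asarray(x, dtype=float)
    rho = np.mean(np.abs(x)) ** 2 / (np.mean(x ** 2) + 1e-12)
    return _G[np.argmin((_RHO - rho) ** 2)]

def shape_features(C):
    """C: (M, N, K) AC coefficients for one time slice. Returns f4 = [gamma_1, ..., gamma_K]."""
    flat = C.reshape(-1, C.shape[-1])
    return np.array([ggd_shape(flat[:, k]) for k in range(flat.shape[1])])
```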
Step 2.4) Extract the distribution change features of the 3D-DCT domain of the video. The present invention computes, as a distance measure, the city-block distance between the AC-coefficient distribution on each AC frequency and the average distribution of the AC coefficients over all K AC frequencies, dk = Σ_{i=1}^{Nb} |pi(|Ck|) - p̄i|, where dk is the distance measure between the AC-coefficient distribution on the k-th AC frequency and the average distribution of the AC coefficients over all K AC frequencies, p̄i denotes the i-th bin of that average distribution, pi(|Ck|) is the probability that the unsigned AC coefficients on the k-th AC frequency fall into the i-th bin, and Nb is the total number of bins. The distance measures on all K AC frequencies are concatenated into the feature f5 = [d1, d2, ..., dK], which describes the change of the distribution across the frequency domain.
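A short illustrative sketch of f5 follows; the use of shared histogram bin edges across all frequencies is an implementation assumption.

```python
import numpy as np

def distribution_change_features(C, n_bins=128):
    """C: (M, N, K) AC coefficients for one time slice. Returns f5 = [d_1, ..., d_K]."""
    A = np.abs(C).reshape(-1, C.shape[-1])
    edges = np.histogram_bin_edges(A, bins=n_bins)                  # shared bins across frequencies (assumed)
    P = np.stack([np.histogram(A[:, k], bins=edges)[0]
                  for k in range(A.shape[1])]).astype(float)
    P /= P.sum(axis=1, keepdims=True)                               # rows are p_i(|C_k|)
    p_bar = P.mean(axis=0)                                          # average distribution over all K frequencies
    return np.abs(P - p_bar).sum(axis=1)                            # city-block distance d_k
```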
Step 3) Temporally pool the statistical features of the local regions of the video (the M × N space-time cubes) obtained in step 2) to obtain features reflecting the global statistical properties of the video, and apply principal component analysis to reduce the dimensionality of each feature separately, obtaining the global features finally used for quality prediction.
The specific sub-steps are:
Step 3.1) Average each kind of feature extracted in step 2) over the time dimension to obtain the global statistical features of the video.
Step 3.2) Apply PCA dimensionality reduction to the five kinds of global features from step 3.1); the dimensionality-reduced features are combined to give, for each video, the final quality evaluation feature x, where x ∈ R^d indicates that x is a vector in the d-dimensional real space.
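As an illustration, the sketch below pools the per-time-slice features of a video and reduces the five pooled feature vectors with PCA fitted across the videos of the database; the number of retained components n_components is a placeholder, since the patent does not state it, and the concatenation of the five reduced vectors into x is an assumption consistent with step 3.2).

```python
import numpy as np
from sklearn.decomposition import PCA

def pool_over_time(per_slice_feats):
    """per_slice_feats: list over time slices of tuples (f1, ..., f5), each a length-K vector.
    Returns the five temporally averaged feature vectors of one video (step 3.1)."""
    return [np.stack(kind).mean(axis=0) for kind in zip(*per_slice_feats)]

def reduce_features(all_video_feats, n_components=10):
    """all_video_feats: list over videos of the five pooled vectors.
    Fits one PCA per feature kind across the videos and concatenates the reduced
    vectors into the quality evaluation feature x in R^d for every video (step 3.2)."""
    reduced = []
    for j in range(5):
        F = np.stack([video[j] for video in all_video_feats])       # (num_videos, K)
        reduced.append(PCA(n_components=n_components).fit_transform(F))
    return np.hstack(reduced)                                       # (num_videos, d)
```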
Step 4) Train a regression model on the video quality evaluation database, test it, and compute the correlation coefficients between the predicted scores and the true quality scores.
Step 4.1) Divide the videos in the database into two parts: one part (80%) of the videos serves as the training set, and the other part (20%) serves as the test set. The videos of the training set and the videos of the test set are completely non-overlapping in content.
The number of videos in the training set is P, and the quality evaluation features extracted from the training set are Xtrain ∈ R^(d×P), where R^(d×P) denotes the d × P-dimensional real space; the true quality scores of the training set are ytrain ∈ R^P, where R^P denotes the P-dimensional real space; a regression (ε-SVR) model is trained with the training set.
Step 4.2) Let the number of videos in the test set be Q; the quality evaluation features extracted from the test set are Xtest ∈ R^(d×Q), where R^(d×Q) denotes the d × Q-dimensional real space, and the true quality scores of the test set videos are ytest ∈ R^Q, where R^Q denotes the Q-dimensional real space; the test set Xtest is input into the trained regression model (ε-SVR) to obtain the predicted scores ypredict ∈ R^Q of the test videos.
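For illustration, steps 4.1) and 4.2) can be sketched with scikit-learn's ε-SVR as below; the 80/20 random split follows step 4.1), while the RBF kernel and the default hyper-parameters are assumptions, and any content overlap between the two splits must be avoided as required by the text.

```python
import numpy as np
from sklearn.svm import SVR

def train_and_predict(X, y, train_ratio=0.8, seed=0):
    """X: (num_videos, d) array of quality evaluation features; y: array of true subjective scores.
    Randomly splits 80/20, trains an epsilon-SVR and predicts on the held-out part."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))                     # a content-disjoint split is assumed
    n_train = int(train_ratio * len(y))
    train, test = idx[:n_train], idx[n_train:]
    model = SVR(kernel='rbf', C=1.0, epsilon=0.1)     # epsilon-SVR regression model
    model.fit(X[train], y[train])
    return model.predict(X[test]), y[test]
```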
Step 4.3) Compute the correlation between the predicted scores ypredict of the videos in the test set and the true quality scores ytest in the video quality evaluation database using the Pearson linear correlation coefficient (PLCC) and the Spearman rank-order correlation coefficient (SROCC).
Step 4.4) Repeat steps 4.1) to 4.3); the correlation coefficients computed over all videos in the video quality assessment database realize the assessment.
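Steps 4.3) and 4.4) then amount to computing PLCC and SROCC for each random split and repeating the split; as stated in the simulation section below, the medians over the repeated runs are reported. A sketch follows, building on train_and_predict from the previous sketch; the number of runs is a placeholder.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def evaluate(X, y, n_runs=100):
    """Repeats the random 80/20 split, collects PLCC and SROCC between the predicted
    and true scores, and reports the medians over all runs."""
    plcc, srocc = [], []
    for run in range(n_runs):
        y_pred, y_true = train_and_predict(X, y, seed=run)   # sketch from step 4.2) above
        plcc.append(pearsonr(y_pred, y_true)[0])
        srocc.append(spearmanr(y_pred, y_true)[0])
    return np.median(plcc), np.median(srocc)
```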
The effect of the invention can be further illustrated by the following simulation experiment.
1. Simulation conditions
The simulation of the present invention was carried out with MATLAB software on a machine with an Intel(R) Core i3-2130 3.4 GHz central processing unit and 16 GB of memory, running the Windows 8 operating system.
The database used in the experiment is the LIVE video quality assessment data set released by the Laboratory for Image and Video Engineering of The University of Texas at Austin.
2. Simulation content
Video quality evaluation is carried out with the method of the present invention as follows:
First, the features of the videos are extracted according to steps 1, 2 and 3 of the above detailed description;
Second, the extracted features are input into the SVR regression model to obtain the predicted quality scores, and the SROCC and PLCC between the predicted scores and the true scores are computed. The medians of the two performance indicators over multiple iterations of the test are taken to represent the overall performance of the algorithm. The method of the present invention is compared with the classical full-reference evaluation algorithms PSNR and SSIM and with the latest general-purpose no-reference method V-BLIINDS; the results are shown in Table 1.
Table 1 Comparison of overall SROCC and PLCC
As can be seen from Table 1, even without any reference to the original undistorted video, the correlation between the evaluation results of the present invention and the true subjective assessment is still higher than that of the classical full-reference algorithms, and it is also better than the latest existing no-reference method V-BLIINDS. This is because the present invention analyzes the statistical information of the 3D-DCT domain, which captures the spatial and temporal information of the video simultaneously, and the different features extracted effectively describe the statistical information of the video in different aspects of the three-dimensional transform domain, thus yielding better prediction results.
Fig. 2 is a scatter plot of the predicted quality scores obtained by the method of the present invention on the LIVE database against the true subjective quality scores. It can be seen from Fig. 2 that there is an obvious linear relationship between the predicted scores and the true scores, which further demonstrates the advancement of the present invention.

Claims (3)

1. A no-reference video quality evaluation method based on 3D-DCT domain statistical analysis, characterized by comprising the following steps:
1) performing 3D-DCT computation on each video in a video quality assessment database to obtain, in one-dimensional vector form, AC coefficients reflecting the spatio-temporal frequency-domain information of the video;
2) using the AC coefficients in one-dimensional vector form from step 1), extracting the spatio-temporal statistical features of the M × N space-time cubes in each video, the spatio-temporal statistical features of the M × N space-time cubes in a video comprising basic spectral features, energy fluctuation features, shape parameter features and distribution change features;
2.1) extracting the basic spectral features of the 3D-DCT domain of the video;
2.1.1) obtaining the basic spectral feature of each of the K AC frequencies as the ratio between the mean and the variance of the unsigned AC coefficients, the unsigned AC coefficients being the absolute values of the AC coefficients;
2.1.2) concatenating the basic spectral features of all K AC frequencies to form the feature f1 = [s1, s2, ..., sK]^T;
2.2) extracting features describing the energy fluctuation in the 3D-DCT domain of the video, the energy fluctuation features of the video 3D-DCT domain comprising an average spectral energy and an entropy measure;
2.2.1) computing the average spectral energy and the entropy measure on each AC frequency, wherein rk is the average spectral energy on the k-th AC frequency, ek is the entropy measure on the k-th AC frequency, pi(|Ck|) is the probability that the unsigned AC coefficients on the k-th AC frequency fall into the i-th bin, Nb is the total number of bins, and Ck(m, n) denotes the k-th AC coefficient of the 3D-DCT coefficient block in row m and column n after conversion to one-dimensional vector form;
2.2.2) concatenating the average spectral energies and the entropy measures of all K AC frequencies to form, respectively, the features f2 = [r1, r2, ..., rK]^T and f3 = [e1, e2, ..., eK]^T;
2.3) extracting the shape parameter features describing the 3D-DCT domain of the video;
2.3.1) computing the probability distribution of the AC coefficients on each AC frequency, fitting the probability distribution of the AC coefficients on each AC frequency with a generalized Gaussian distribution, and obtaining the shape parameter of the distribution on each AC frequency;
2.3.2) concatenating the shape parameters of the distributions on all K AC frequencies to form the corresponding feature f4 = [γ1, γ2, ..., γK]^T;
2.4) extracting the distribution change features of the 3D-DCT domain of the video;
2.4.1) computing, as a distance measure, the city-block distance between the AC-coefficient distribution on each AC frequency and the average distribution of the AC coefficients over all K AC frequencies, wherein dk is the distance measure between the AC-coefficient distribution on the k-th AC frequency and the average distribution of the AC coefficients over all K AC frequencies, and pi(|Ck|) is the probability that the unsigned AC coefficients on the k-th AC frequency fall into the i-th bin;
2.4.2) concatenating the distance measures computed on all K AC frequencies to form the feature f5 = [d1, d2, ..., dK];
3) from the M × N space-time cube statistical features in the video obtained in step 2), average-pooling the statistical features along the time axis and then applying PCA dimensionality reduction to obtain the quality evaluation feature;
4) training a regression model on the video quality evaluation database, testing it, and computing the correlation coefficients between the prediction results and the true quality scores;
4.1) dividing the videos in the video quality evaluation database into two parts: one part of the videos serving as a training set and the remaining videos serving as a test set, wherein the videos in the training set and the videos in the test set are completely non-overlapping in content;
the number of videos in the training set being P, the quality evaluation features extracted from the training set being Xtrain ∈ R^(d×P), where R^(d×P) denotes the d × P-dimensional real space, and the true quality scores of the training set being ytrain ∈ R^P, where R^P denotes the P-dimensional real space; training the regression model with the training set;
4.2) setting the number of videos in the test set to Q, the quality evaluation features extracted from the test set being Xtest ∈ R^(d×Q), where R^(d×Q) denotes the d × Q-dimensional real space, and the true quality scores of the test set being ytest ∈ R^Q, where R^Q denotes the Q-dimensional real space; inputting the test set Xtest into the trained regression model to obtain the predicted scores ypredict ∈ R^Q of the videos in the test set;
4.3) computing the correlation between the predicted scores ypredict of the videos in the test set and the true quality scores ytest by the Pearson linear correlation coefficient and the Spearman rank-order correlation coefficient;
4.4) repeating steps 4.1) to 4.3), and computing the correlation coefficients over all videos in the video quality assessment database to realize the assessment.
2. The no-reference video quality evaluation method based on 3D-DCT domain statistical analysis according to claim 1, characterized in that step 1) specifically comprises:
1.1) dividing each video in the video quality assessment database into space-time cubes of size a × a × a, adjacent space-time cubes overlapping by two pixels in space and not overlapping in the time dimension, each video having M × N × T space-time cubes, wherein M, N and T respectively denote the number of space-time cubes along the horizontal axis, the vertical axis and the time axis of the video;
1.2) performing 3D-DCT computation on the M × N × T space-time cubes to obtain M × N × T 3D-DCT coefficient blocks, each 3D-DCT coefficient block containing 1 DC coefficient and K AC coefficients, wherein each AC coefficient in a 3D-DCT coefficient block corresponds to one AC frequency, and in each video one AC frequency corresponds to M × N × T AC coefficients;
1.3) converting the 3D-DCT coefficient blocks to one-dimensional vector form using the reshape function in Matlab, Ck(m, n) denoting the k-th AC coefficient of the 3D-DCT coefficient block in row m and column n after conversion to one-dimensional vector form.
3. The no-reference video quality evaluation method based on 3D-DCT domain statistical analysis according to claim 1, characterized in that step 3) specifically comprises:
3.1) average-pooling each kind of feature extracted in step 2) over the time dimension to obtain the global statistical features of all videos in the video quality assessment database;
3.2) applying PCA dimensionality reduction to the five kinds of global features from step 3.1) to finally obtain the feature x used for quality evaluation, wherein x ∈ R^d indicates that x is a vector in the d-dimensional real space.
CN201510080147.0A 2015-02-13 2015-02-13 No-reference video quality evaluation method based on 3D-DCT domain statistical analysis Active CN105991995B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510080147.0A CN105991995B (en) 2015-02-13 2015-02-13 No-reference video quality evaluation method based on 3D-DCT domain statistical analysis

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510080147.0A CN105991995B (en) 2015-02-13 2015-02-13 No-reference video quality evaluation method based on 3D-DCT domain statistical analysis

Publications (2)

Publication Number Publication Date
CN105991995A CN105991995A (en) 2016-10-05
CN105991995B true CN105991995B (en) 2019-05-31

Family

ID=57042102

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510080147.0A Active CN105991995B (en) 2015-02-13 2015-02-13 No-reference video quality evaluation method based on 3D-DCT domain statistical analysis

Country Status (1)

Country Link
CN (1) CN105991995B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107318014B (en) * 2017-07-25 2018-11-16 西安电子科技大学 The video quality evaluation method of view-based access control model marking area and space-time characterisation
CN113573047B (en) * 2021-07-16 2022-07-01 北京理工大学 Video quality evaluation method based on eigen-map decomposition and motion estimation

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100559881C (en) * 2008-05-09 2009-11-11 中国传媒大学 A kind of method for evaluating video quality based on artificial neural net
CN101742353B (en) * 2008-11-04 2012-01-04 工业和信息化部电信传输研究所 No-reference video quality evaluating method
CN103108210B (en) * 2013-03-07 2015-04-15 福州大学 No-reference video quality evaluation method based on airspace complexity
CN104023225B (en) * 2014-05-28 2016-08-31 北京邮电大学 Video quality evaluation without reference method based on Space-time domain natural scene statistical nature

Also Published As

Publication number Publication date
CN105991995A (en) 2016-10-05

Similar Documents

Publication Publication Date Title
Li et al. Spatiotemporal statistics for video quality assessment
CN105208374B No-reference image quality evaluation method based on deep learning
Mittal et al. No-reference image quality assessment in the spatial domain
He et al. Sparse representation for blind image quality assessment
Vu et al. ViS3: An algorithm for video quality assessment via analysis of spatial and spatiotemporal slices
Wang et al. Reduced-reference image quality assessment using a wavelet-domain natural image statistic model
CN107959848B Universal no-reference video quality evaluation algorithm based on a three-dimensional convolutional neural network
Ma et al. Reduced-reference image quality assessment in reorganized DCT domain
CN101378519B Degraded-reference image quality evaluation method based on the Contourlet transform
CN103200421B No-reference image quality evaluation method based on the Curvelet transform and phase congruency
CN102945552A No-reference image quality evaluation method based on sparse representation of natural scene statistics
CN106303507B No-reference video quality evaluation method based on joint spatio-temporal information
CN104376565B No-reference image quality assessment method based on the discrete cosine transform and sparse representation
CN105338343A No-reference stereo image quality evaluation method based on binocular perception
CN107318014B Video quality evaluation method based on visual saliency regions and spatio-temporal characteristics
Ma et al. Reduced-reference stereoscopic image quality assessment using natural scene statistics and structural degradation
CN105049851A No-reference image quality evaluation method based on color perception
Jiang et al. No reference stereo video quality assessment based on motion feature in tensor decomposition domain
Moorthy et al. Visual perception and quality assessment
He et al. Video quality assessment by compact representation of energy in 3D-DCT domain
Wang et al. Reduced reference image quality assessment using entropy of primitives
CN105894507B Image quality evaluation method based on natural scene statistical features of image information content
CN109447903A Method for establishing a reduced-reference super-resolution reconstructed image quality evaluation model
CN109754390A No-reference image quality assessment method based on mixed visual features
CN105991995B No-reference video quality evaluation method based on 3D-DCT domain statistical analysis

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant